About BrentMJensen

I'm a software testing veteran at Microsoft, now living the life of a Data Scientist. In my over 20 years in the industry, I've held almost every position you can hold in Test, and since 2010 I have been enjoying the journey towards Data Science. I'm passionate about advancing the science of Quality and learning what we can from Modern techniques and a lot of lateral thinking in order to be more effective at improving software quality. I intend to use this blog to teach and to learn. I intend to do a lot of philosophical soapboxing with the hope that crowdsourcing these big problems will make them more concrete. Failing that, I get to practice my writing skills. No lose, right?

Modern Testing Principles Explained

Nearly 8 years ago, I wrote my very first blog post with a great deal of nervousness. “Why does the world need yet another person espousing the value of test to software teams?”, I thought. I am glad I did though. Blogging greatly helped my communication skills and was instrumental in focusing my thoughts. Looking back, you can almost follow my own transformation into Modern approaches to Quality. Hell, just the first 5 posts contain many of the thoughts that are now expressed in the Modern Testing Principles (co-authored with Alan Page), including its mission statement of #ataosq.

Today officially marks another “milestone” of sorts for me in my shift away from Traditional Testing. This will be my last post on Testastic. Why? In essence, I no longer identify with the Testing brand. I am a data scientist now. Customer concerns and the delivery of actual quality are still (and will forever be) an important part of my job and my mission.

BUT, Test is now firmly entrenched in my mind as something I do and not something I am. 

Am I done blogging? Who knows?!? A wise friend once told me “I blog because I have something to say and when I don’t, I don’t”. When/if I do come back to blogging, it will be under some new name that I identify with. I am fond of the idea that I could further help Quality-focused people enter the data arena specifically. Maybe it will be oriented in that direction.

Before you worry (or rejoice), no, I am not leaving the podcast.  It has always been about Software Quality (not specifically Testing) and will remain so for the foreseeable future.   We named it ‘AB Testing’ as it enables us to talk on a broad array of topics.

The rest of this post is a follow-up on the Modern Testing Principles that I have been meaning to write for months. It is my hope that it adds some clarity for the community at large.

What are the Modern Testing Principles?

They are a path forward. We authored them because we noted that a good deal of Test was struggling. Struggling to make sense of several changes occurring in how software was being produced and of their role in it. Trying to make sense of memes such as “Test is Dead” or practices like “Testing in Production”. In the traditional testing culture, these are nonsensical concepts. Several testers were (and are) still left believing that “what got us here, can still get us there”.

The Modern Testing Principles were developed to help Testers leverage their strengths into new positive directions that align better with producing software in the Age of the Customer. They are the antidote to Traditional testing.

Modern Testing is specifically named to reflect a need to leverage the benefits of contemporary practices and learnings. The name “Modern Testing” is intended to distinguish present-day techniques, activities, and processes from their traditional “Test Last” counterparts. It is absolutely expected that new learnings and techniques will continue to be discovered and leveraged in the pursuit of improving business’ ability to realize customer value. Adapting and incorporating these learnings towards “Accelerating the achievement of shippable quality” is thus a key tenet of Modern Testing.

Why are these needed?  

An astute observer of contemporary software development practices will notice that the principles, in fact, do not reflect anything truly Modern, nor anything specific to Testing. Many companies have realized that techniques invented in the waterfall era of software development no longer scale to current business demands. As businesses adjust to the new landscape, many unprepared Testers are being left out in the cold. Several studies over the last few years have given credibility to a growing belief that while Test is not dead yet, it is dying. Test is shifting from being a role to being an activity. As these efforts progress, Test, which has always been viewed as a cost to a business, is also being viewed as a bottleneck and as irrelevant. BUT, not all software development businesses have adapted yet, so there is time for many. Enough time for those who care to do so to learn new ways to apply their valuable skills in new positive directions. The Modern Testing Principles are for these Testers. They are a guiding light for those looking for a way to create positive change in their lives in the face of a changing Test landscape.

Markets have always been won by companies able to adapt to customer needs faster than their competition. However, the software landscape has changed from the 1980s waterfall process in several key aspects. Many companies today have learned and reaped the rewards of adaptable software development practices with low cycle times. As such, it is easier than ever to compete; small teams can build great solutions in a fraction of the time and effort of any point in history. Lower switching costs give customers the freedom to move to a competitor in a friction-free manner. The online world grants more visibility into customer behavior telemetry than ever before. Additionally, software engineering techniques and processes now allow continuous updates, which makes it less risky than ever to deliver solutions to the problems customers have today.

Important Note: one day a series of discoveries will occur that will remove the need for human involvement in quality evaluation and adaptation. As a data scientist, I suspect it will surprise no one that I believe Machine Learning will play a huge role in that. When those discoveries are found and implemented, Modern Testing approaches, just like Traditional Testing approaches before it, should immediately be abandoned.

Who is a Modern Tester?

Short answer: No one. It is not a title. There will certainly never be a certification for it. Modern Testing is a direction and a pursuit with a specific focus: Accelerating the achievement of shippable quality. A Modern Tester is a systems thinker who has experience in Traditional methods and is continuously learning to find more effective ways to achieve quality. Some specific notions that a Modern Tester might reflect upon:

  1. The world has changed

    • Technologies like AI, Open Source, Big Data, and the Cloud have made it easier for competitors to win
    • Customers are empowered and likely indifferent.   They can switch to your competitor in seconds.
    • Customers may care only about their problems being solved.
    • They need those solutions today.   Not in 6 mos. Today.
    • Traditional Test approaches (such as scheduled test passes and separate Dev/Test disciplines) were required when we needed to ship to Manufacturing and it was expensive to re-ship software.  They may be merely optional now.
    • Businesses need to care about Code Correctness and Craftsmanship in order to sustain speed of delivery.  Their Customers, though, don’t.
    • Several very successful companies have shipped bugs (and continue to do so) and don’t have Testers.
  2. Several Traditional Testing methods might no longer be needed or are unable to scale to current demands; keeping them alive may harm business goals by:
    • Creating unnecessary delays
    • Creating unnecessary costs
    • Being focused on code and spec correctness instead of quality
    • Enabling a detrimental ‘safety net’ culture and failing to deliver on the promise of moving quality upstream
    • Treating Test activities as an innate specialization that cannot be taught
    • Favoring dogmatic isolationism (“specialization”) over community and Whole team approaches
    • Being a passive contributor to decision making
    • Favoring intuition and theory over work that measurably moves key positive business KPIs, such as Customer Satisfaction or Cycle Time.
    • Failing to ACTUALLY measure and then, reduce risk

Explaining the Principles

In this section, I will explain the rationale behind each of the principles. More detail can be found in a variety of places. Visiting ModernTesting.org is an obvious place to start, but I would also recommend the podcast, starting with Episode 67 and continuing through to 93. The Ministry of Testing also has a fantastic synthesis here (and it comes with a totally awesome printable graphic). Lastly, many of the #Three (the term of endearment for our regular listeners) have created their own presentations or references. My current favorite is a brilliant mind map of Modern Testing by Maciej Wyrodek. Several of the #Three are active participants in MT community discussions on our Slack channel. Click here to join. (Don’t forget to go to the #iwantasticker channel. The stickers are cool.) I will also add book references for those who wish to understand the principles more deeply.

My Nicknames

For each of the principles, I have “short titles” for them that simplify their intent. These are:

  1. Business First
  2. Bottlenecks out
  3. Learn
  4. Leadership
  5. Real Quality
  6. Product Hypotheses
  7. Embrace the Fear

#1 – Our priority is improving the business.

Book Reference: The Lean Startup by Eric Ries

This has turned out to be one of the least controversial principles in the list. Honestly, I fear this is due to a misunderstanding of the principle. In business leadership circles, Test is most often seen as a cost of doing business. Almost an insurance policy. But is it? When Test is questioned on the return, the answers are generally vague or subjective. The Test Community at Large does not have a good answer for this. The most common answer I’ve seen is “reducing cost”. However, digging in deeper often reveals that this is false. It’s theoretical.

Test is generally very strong in Customer Empathy. The Modern Tester adds to this a non-theoretical look at the goals the business is trying to achieve, looks deeply at how their work contributes to them, objectively measures that contribution, and becomes part of the solution moving that KPI. They remove the theory and execute in terms of ROI towards business goals. Generally, businesses are trying to do one of 2 things: Grow or Reap tangible rewards. Find out from your business leaders what your company is trying to do, connect your work to it, measure it, and improve. I highly recommend keeping an active eye on Anne-Marie Charrett and her work on Quality Engineering. She is going deep on this topic and, imho, it is absolutely a critical step.

Last, we get feedback quite often along the lines of “Profit isn’t everything”. We agree, but it is important. A business’s primary mission is to serve its customers. Profit is a necessary secondary mission, though, so the company can exist and grow.

#2 – We accelerate the team, and use models like Lean Thinking and the Theory of Constraints to help identify, prioritize and mitigate bottlenecks from the system.

Book Reference: The Principles of Product Development Flow by Donald Reinertsen

The Traditional Tester is building a reputation for slowing down ship cycles. The Modern Tester attacks this head on and works instead to speed them up. This is done by identifying bottlenecks and other causes of delay and working to remove them. Reinertsen’s book talks directly about Flow and goes deep into which practices work and why. While it is not a book written for QA specifically, it does cover many topics on what causes delays. For example, a recent scientific study published by Dr. Nicole Forsgren in her book ‘Accelerate’ calls out that 1) Developers primarily creating and maintaining tests is correlated with business performance, but 2) Automation owned and maintained by QA is NOT. Why? 1) the code naturally becomes “more testable when Developers write the tests” and 2) they “care about them and will invest more effort into maintaining them”. QA has known for years that ‘moving quality upstream’ will remove many delays from the system. In this case, Dr. Forsgren’s conclusion is the same as my number one suggestion: Teach your devs TDD; help them succeed. Once that is done, discover the next major cause of delay and root it out too.
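To make that suggestion concrete, here is a minimal sketch of the red-green-refactor loop you could walk a dev through. The `slugify` function and its expected behavior are invented for illustration; the rhythm, not the example, is the point.

```python
import unittest

# Red: write the test first, before the implementation exists.
# (slugify and its expected behavior are hypothetical examples.)
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Modern Testing"), "modern-testing")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  hello world  "), "hello-world")

# Green: write just enough code to make the tests pass.
def slugify(text: str) -> str:
    return "-".join(text.strip().lower().split())

# Refactor: clean up the code with the tests as the safety check,
# then repeat the loop for the next small piece of behavior.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
)
```

Each loop keeps the dev accountable for code correctness while the change is still small, which is exactly where delays are cheapest to remove.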

#3 – We are a force for continuous improvement, helping the team adapt and optimize in order to succeed, rather than providing a safety net to catch failures.

Book Reference: Implementing Lean Software Development by Mary/Tom Poppendieck

The primary goal here is to incorporate a constant incremental “learning loop” into the Development process that shifts Code Correctness concerns and accountability to the people who write the code. I hear quite often that “Dev doesn’t want to do test” as a justification for keeping the Test Last approaches alive. First off, creating an identity for a discipline (QA) based on what a different discipline (dev) doesn’t want to do is not good. Modern Testers seek missions of direct value to the business. Next, Developers are perfectly capable of writing solid code and this includes Test Code. They are just not always incentivized to do so. Furthermore, with a good test team as their safety net, they don’t have to. The safety net keeps the customer protected from code correctness concerns; however, it also keeps dev insulated from those consequences and encourages the ‘it’s your job, not mine’ approach. Instead, start shifting away from this protector role and begin the process of teaching dev to succeed on their own.

#4 – We care deeply about the quality culture of our team, and we coach, lead, and nurture the team towards a more mature quality culture.

Book Reference: The Starfish and the Spider by Ori Brafman

“Culture eats strategy for breakfast, lunch, and dinner.” Creating and sustaining an inclusive community of people is perhaps one of the best ways to create and sustain positive initiatives. There is no good way to create successful software if one ignores people. People are key, especially in knowledge work. As an example, I have recently been diving into the science behind the push for Diversity and Inclusiveness. Multiple studies lately have shown that Women in leadership roles not only improve business outcomes, but do so greatly. One failing of the movement (imho) is the lack of explaining why. Not surprisingly, it is NOT because of what body parts a human does or does not have. Women are statistically more likely to value relationships between people, collaborate, and include other ideas towards common goals. These are all learnable skills. Improvement in Knowledge Work requires new ideas. Ideas are born from old ideas coming together. Diversity grows the number of ‘old ideas’ in the mix. Diversity is not possible without inclusiveness. Building a community where people share their efforts and thoughts towards common goals creates a breeding ground of practical ideas, including ones you can use now. Leadership (especially servant leadership) is necessary to make these changes stick and self-sustain.

#5 – We believe that the customer is the only one capable to judge and evaluate the quality of our product

Book Reference: The Four Steps to the Epiphany by Steve Blank

One of the more controversial principles. When correctly interpreted, it can create an identity crisis for those entrenched in Traditional Test culture. It states the customer is the only real judge of quality. The corollary: Test is not a judge.

Who is the Customer? Whoever holds the power over the final buying decision. For example, are you working on a B2B product? Then I hope you are actively helping *your* customers to help *their* customers.

Why is the Customer the only one capable? Because Quality is not about bugs or features. It’s about the application of a solution to a recurring problem.  A problem your customer has.  Quality is a subjective point of view and the only subjective point of view that matters: the customer.  A friend of the podcast, Steve Rowe, said it best. “Test has lost its way“. The Modern Tester cares about quality over code correctness and requirements. And the Customer is the only one that knows whether or not the product is solving their problem. Quality is customer happiness.  (not leads to…   is)

#6 – We use data extensively to deeply understand customer usage and then close the gaps between product hypotheses and business impact.

Book Reference: Lean Analytics by Alistair Croll

Without the data, you are just guessing. It may be an informed guess, but a guess nevertheless. Our job is to deliver value to the customer. Fixing bugs or adding features that don’t matter to the customer is simply wasting time. Time is the one crucial resource that, once it is gone, you cannot recover. Data is the key ingredient needed to take your “product requirements” (aka guesses) as a start and triangulate the actual needs of your customer. You don’t have to shift to being a Data Scientist as I did (though it was absolutely a worthwhile journey for me). However, it is critical that you become accustomed to using data to drive decisions. Traditional Testing was about being an information provider. Our suggestion is to go further. Guide the decision and translate it into action. I did a webinar on this for AST. If you can power through all of the ‘Ums’ I do in the presentation (I can’t), I do think there’s a lot of helpful content in there to start your journey.
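As a toy illustration of closing the gap between a product hypothesis and business impact (all names and numbers here are invented, not from the webinar): state the guess as a number, measure the actual number from usage telemetry, and report the gap.

```python
# Hypothetical telemetry: one record per session, noting whether the
# session used the new search feature we bet on.
sessions = [
    {"user": "u1", "used_new_search": True},
    {"user": "u2", "used_new_search": False},
    {"user": "u3", "used_new_search": False},
    {"user": "u4", "used_new_search": True},
    {"user": "u5", "used_new_search": False},
]

# The product hypothesis is a guess until the data arrives:
# "60% of sessions will use the new search."
hypothesized_rate = 0.60

observed_rate = sum(s["used_new_search"] for s in sessions) / len(sessions)
gap = observed_rate - hypothesized_rate

# A large negative gap is a signal to dig in: wrong feature,
# wrong discoverability, or wrong hypothesis?
print(f"hypothesized {hypothesized_rate:.0%}, observed {observed_rate:.0%}, gap {gap:+.0%}")
```

With real telemetry you would want far more sessions and a significance check before acting, but the loop is the same: state the guess, measure, close the gap.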

#7 – We expand testing abilities and knowhow across the team; understanding that this may reduce (or eliminate) the need for a dedicated testing specialist.

Book Reference: More Agile Testing: Learning Journeys for the Whole Team by Janet Gregory/Lisa Crispin

We acknowledge that principle #7 is scary. Some interpret it as Alan and Brent’s version of “Test is dead.” Others interpret it as Alan and Brent are saying to “Kill Test”. I will be very clear. We are saying none of these things. However, we are saying Testing is an activity (and a necessary one) and further, we are saying this is an activity better owned across the whole team. Even if it eliminates your job as a testing specialist.

Both Alan and I agree with the growing sentiment that Dev and Test positions will eventually merge; however, you don’t have to agree. Instead, I offer you this suggestion: whether we are eventually proven right or wrong, following the principles will grow your capabilities for delivering business results. If we are wrong, you will be a stronger test specialist for the experience. If we are right, then you will be prepared and have options and experiences that will make you even more valuable than before.

In recent years, I’ve had the opportunity to connect with Lisa Crispin a number of times and walked away better for the experience. They say never meet your heroes, because they will disappoint you. Lisa has not. In fact, I am proud of her. She’s acting on the opportunity the changes created for the Quality focused folk amongst us, which “led her down a whole new [exciting] path”.  Keep an active eye on Lisa as well.

Lisa Crispin and Janet Gregory have already been working the Whole team aspect for years. Heck, if all you do is start a book club with your devs, that will be a fantastic start.

We cannot guarantee your experience will match ours, but literally everyone I know who has made the journey away from Traditional Test has said they would never go back. It’s quite a journey and not always easy, but the “grass is greener” here. It is my hope that we will get their stories up on the Podcast soon.


Well, that’s it then. Thanks for taking the time to read this and for allowing me to help guide in written form. I really appreciate the Quality Community and look forward to collaborating going forward. Hopefully, I will start a new blog site soon under a different theme, but until then, feel free to connect with me on Twitter or the Podcast’s Slack channel.




What’s so special about specialists?

A problem I am constantly asked to address is which are better: Generalists, Specialists, or something in between? I have written about it, presented about it, and podcasted about it, but nothing as extensively as I plan for this post. Honestly, I don’t know how much additional value I will add to the community of thought on this topic, but at minimum, I will now have a URL to send people who want to know more details. So for me, this will be a big time saver. Much of the content comes from years of research and experience in coaching teams through an agile process.

The problem

Inevitably, the teams I am helping will face a period of rejecting agile because of some challenge a person or set of people are facing with the change. Usually, I get something like: “I am a specialist and this doesn’t optimize for me”. This has always happened when I am coaching a team’s conversion from waterfall methods to agile. However, even the more successful teams face some challenge if they overdo the shift and swing the pendulum too far in the opposite direction. “I am a generalist now, just like everyone else. How do I differentiate for performance reviews?” If you are facing similar questions about how your team should be staffed, then it is my hope that this post will help you answer what is right for you and why, with some hints on considerations should you desire to implement changes in your environment. If you desire a simple one-size-fits-all answer, then you want T-shaped people. If you are looking for a more nuanced answer, then I encourage you to read on.


There are multiple examples to draw from when comparing specialists and generalists. In the animal kingdom, the Koala is known as a specialist species. It can only live in one environment and has only one source of food. Compare it to the Raccoon, which has been known to thrive just about anywhere. In the medical profession, you may have a general practitioner who will refer you to a heart specialist. Even jobs like Sign Spinners and Dog Walkers are considered specialized, as they are generally only seen in specialized environments (heavily populated areas), whereas you can find a Handyman just about anywhere. It is interesting to note that Sign Spinning is becoming automated (crudely automated, sure, but automation always starts with basic beginnings). Lastly, there is the software world, where it is quite common to hear developers and testers claim to be specialists.


I am going to be exploring this topic as it relates to software development and the software development lifecycle. In addition, while almost every change requires shifts in people, processes, and technology, I am not going to spend a lot of time on tech. I am also going to assume (falsely) that delivered code is always perfect. This will help to keep the explanation simpler. As the patient reader will discover, this is a deep topic and I will only barely scratch the surface.


Some key terms used throughout this post.

Knowledge Worker – Thomas Davenport says: “Knowledge workers have high degrees of expertise, education, or experience, and the primary purpose of their jobs involves the creation, distribution or application of knowledge.” It is important to note that every human being is a bit different in terms of how they think and create, therefore it should not be a surprise that every knowledge worker has unique characteristics when it comes to producing results as well as the optimal environment needed to do so. However, this uniqueness is abstract to some degree. For example, I have knowledge for blogging, podcasting, agile practices, customer quality, data science/handling, testing, etc.

Generalist – someone with a wide array of knowledge; a person competent in several different fields or activities.

Specialist – a person who concentrates primarily on a particular subject or activity; a person highly skilled in a specific and restricted field.

Versatilist – someone who can be in a specialist role but at the same time switch to another role with ease; also known as a t-shaped person.

The Goals of Business

The goal of any business is to find a stable business plan that creates value and can grow its market reach to as many folks as possible. As part of this, optimizing the utilization of the resources used to produce new products is important to keep costs low and make more widgets. An important KPI when discussing Specialists and Generalists is Productivity. Productivity is defined as the number of outputs per unit of input. Since we are talking about software, let’s say the outputs are features that vary in their degree of value to the customer and the inputs are team members and time. However, while this is a necessary condition, unless you can easily replace the folks who leave your team, Employee Engagement is also going to be an important consideration. For the purposes of this post, I will define this to mean the degree to which your employees enthusiastically act to deliver business outcomes. Happy people are productive people.

Do NOT confuse productivity with efficiency. They are not synonyms. Efficiency is a measure of the ability to minimize waste while producing outputs.   It is common enough to misunderstand these terms.  Their differences can be subtle.  Here’s another reference comparing both.
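A toy calculation (all numbers invented) may help separate the two terms: productivity counts output per unit of input, while efficiency looks at how much of the input was not wasted.

```python
# Two hypothetical teams of 5 over one month (numbers invented).
teams = {
    # Generalists ship more features but burn hours ramping up.
    "generalists": {"features": 10, "hours": 800, "wasted_hours": 320},
    # Specialists waste little, but ship less while blocked or idle.
    "specialists": {"features": 6, "hours": 800, "wasted_hours": 80},
}

for name, t in teams.items():
    productivity = t["features"] / t["hours"] * 160  # features per person-month (~160 hrs)
    efficiency = 1 - t["wasted_hours"] / t["hours"]  # share of effort not wasted
    print(f"{name}: {productivity:.1f} features/person-month, {efficiency:.0%} efficient")
```

The same team can score high on one measure and low on the other, which is exactly the Specialist/Generalist trade-off this post explores.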


The first known use of the term Specialist was in 1855, when it was coined to describe a medical practitioner who devoted their study to a particular branch of medicine. One year later was born the man who would eventually create the system of management used most often today: Frederick Winslow Taylor. Taylor created the field known as Management Science (aka Operations Research or Taylorism) back in the 1800s, deep within the industrial age. What started it was a study he did which recognized that deeply studying how workers performed their jobs and making small tweaks would greatly improve productivity. He used the term Generalists to describe workers who could be applied to most any machine and Specialists to describe those whose knowledge could only be applied to a very specific one. Taylor is credited with noticing that the more repetitive the work could be made, and the more the inputs could be made consistent, the more specialists would out-produce generalists. He would prescribe that folks stay on a single machine and measure everything about it. Ultimately, he perfected the efficiency of the worker on that machine. But it came with a consequence: the resulting jobs required “less brains, less muscle, less independence”. In those days, this was fantastic. New workers were relatively easy to come by, you could train them fairly readily on a new machine, and you could reap the rewards of high productivity. But it came at a cost. The workers became so specialized that they could not function well in a changing environment. Indeed, they could only take their inputs and turn them into the desired outputs. They were entirely unaware of the whole system in place. In order for the system to function, Taylor proposed that a system of Managers be constructed; it would be their job to 1) be the brains, 2) study productivity, and 3) study and control the system.

For well over a century, this was how the manager/employee relationship functioned, so when computer-related jobs started to grow, it was the business model most were used to.

The Pros and Cons of Specialists & Generalists

Specialists are optimized for “going deep” on repetitive tasks that require a minimum number of dependencies to succeed. However, they do not take a system view of problems, often requiring management to control their workloads in order to optimize productivity. In addition, as alluded to above, they are highly inflexible. Generally, they thrive in very specific environments with very specific workloads. Their inflexibility can readily cause workload starvation or bottlenecking of the process. For example, consider a UI development specialist whose UI is frozen (meaning no longer permitted to change for the release): what shall they do now? If it’s clear the product will have a subsequent release, it may be fruitful to have them work on that now. This is more efficient, but if the product is stalled due to database changes, then it would be more productive for the UI developer to apply their talents there. If they aren’t capable of doing so, I’ve seen teams go as far as adding new UI features just so their developers aren’t idle. This creates a product that is optimized for the skills of your developers AND NOT for the needs of your customers. In a world where your customers can go to a competitor in a split second, this sadly is a really bad idea. However, there are really solid places where specialists make sense. In particular, those critical needs of a software release that are rarely called upon. Security and performance are examples. These require deep knowledge, are generally valuable, and (unlike UI development) can be challenging for someone to pick up on a moment’s notice.

In contrast, Generalists are the folks optimized for flexibility. They do take a system view, are very good at “filling in the cracks”, and can be applied at just about any time to just about any problem. They are exceptional at solving 80% of any development problem (or 80% of all development problems, depending on how you want to look at it; aside: both numbers totally made up), which leads them to be massively productive, since there is plenty of work for them to pick up and move forward on. However, they tend to be inefficient, with much of their time spent ramping up on a new code base or learning a new skill. A frequent reader of this blog and/or listener to the AB Testing podcast will recall that I spent time as a manager of a dev team in the Unified Engineering model. There I encountered a phenomenon: folks who had come from Test were out-producing those from Dev. I believe this was for 2 reasons: 1) Test development is a generalist skillset and 2) there was nothing particularly challenging about my product that required deep expertise in any particular area.

So what’s the silver bullet?

Ok, so specialists are more efficient than generalists, but generalists are more productive. Ideally, you’d want both: High Output and Low Waste. With a couple of caveats (see below), if you have a highly controllable business environment, then specialists are the way to go, alongside a management team that controls how and when these folks are applied. Management will need to make sure all dependencies are cleared before the work gets handed off, of course. However, if you are operating in a chaotic business environment that needs to adapt to changing business requirements, you are better suited to hire a couple of key specialists and have the bulk of your team be generalists. Management may need to control costs directly, focusing the attention of the Generalists on only a few areas per person. In both models, if the environment changes, you will need to be prepared to fire, then rehire your team if productivity/efficiency suffers as a result. This approach may not be good enough though. Quickly hiring a team of any given size is hard and time-consuming. Once in my career, I completely hired a team of 30 people in just one month. It was a strong team, but it was grueling and required me to spend 3 weeks interviewing 40 hrs a week, which was quite hard on my psyche (especially since I had other work to do as well). A better solution is to hire versatilists. Think of them as specialists in 3 to 4 of the skills your team needs. As an example, I run a data science team today. I required every member of my team to be strong in at least 2 of the following: Data Science, Development, or Data Engineering. They also need a passion to learn the 3rd. As I hired, my emphasis for the remaining open positions focused on whichever area of my team showed the greatest weakness. This approach is strong, and I am proud to say my team is known for being highly productive closers.


  • People have their own incentives. Above I mention a few caveats; much of this article represents things from a pure, emotionless business angle. However, until we’ve automated the development role, your employees will have emotions and care for more than just the business. Daniel Pink describes this human element as driving for Mastery (getting better at doing something), Purpose (working towards something larger than ourselves), and Autonomy (self-direction). The value proposition of a product will, hopefully, drive their sense of Purpose. Specialists are going to be strong at pursuing Mastery, but will complain about being told what to do and when, as will be required in order to keep your team productive. Likewise, Generalists are more easily granted autonomy (especially in an agile process such as Kanban), but they may not grow and may feel like they are constantly investing in and then abandoning skills. The Versatilist approach strikes a balance between these 2 extremes.
  • Software development is knowledge work. Very few things, in my mind, actually fall into the camp of “repetitive work” with the low dependencies required to make specialists thrive. In fact, Steven Johnson’s work suggests that we would not want this even if we could construct it. Innovation comes from old ideas mixing together. A team of Versatilists, each with 3-4 deep specialties and required to switch areas every so often, is going to help knowledge sharing grow and help you innovate and adapt both productively and efficiently.

Send me your feedback and/or comment below. As I mentioned above, I’ve been meaning to write this for quite a while. I hope it’s helpful.

The Combined Engineering Software Model

Happy New Year, everyone! I find myself looking forward to 2016 despite, perhaps, some of the worrisome predictions I may have made in our most recent podcast. 2016 should be the year when I finally complete my Master’s degree, celebrate 20 years of marriage with my adorable wife, take a really exciting vacation (TBD) with my family (my eldest turns 18 this year, so I want to do a “big bang”), and make some serious progress advancing the principles of customer focused quality and engineering in my Data Science job. Big year! I hope your year turns out even better than mine will be. :)

Today’s topic is something that Alan and I have gotten lots of tweets, emails, and questions about. It IS something we’ve talked about a lot on the podcast, but not in a very cogent fashion. A quick search on the web yields nothing, so I thought I would add some detail.

Please note: on the podcast, Alan and I will often switch between discussing Combined Engineering and Unified Engineering models. These are the same things.     The model is called Combined Engineering, but honestly, we both think Unified Engineering is better.   I think of an Oil and Vinegar salad dressing.   You can shake it really hard and combine it together, but it’s really not cohesive and will separate again. You will have to shake it up again to reap the rewards. Too much work.   Unified in my mind is more like Ranch dressing.   Several ingredients melded together to make a whole new awesome thing.

A quick note

This is not my model. It was first deployed by a pair of leaders in the Bing team (I am leaving out their names to “protect the innocent”), one who grew up in Test and the other in Dev. Both are brilliant guys and have ascended to high ranking positions in my company. However, I was in Bing when the model was piloted and was instrumental, a year and a half later, in helping my next organization move to the model after I left Bing. About 2 years after the model was first piloted, it hit the rest of the company like gangbusters. The majority of the company has switched to this model now and dedicated positions in Test are remarkably rare. I’ve met only 1 person in the last year whose title is still Test Manager and, honestly, I felt sad for her. It takes time and effort to do the transition and she and her team were already about 1 year behind the rest of the testers in the company. I remember thinking at the time: “How can I help this person? Surely being in Test as well as being in management is going to make this person a target for the next set of layoffs (God forbid that they happen again sometime soon)”.

I am not fully aware of what inspired the guys who piloted the CE model, but I am very positive that it was grounded in Agile.


As of this month, I have been with the company for 22 years, and during that time I have experienced 3 very different organizational models. The first was the PUM (Product Unit Manager) model, where a business was run entirely by one person, the PUM, who would have middle managers for each of the disciplines reporting to them. The PUM would report to a VP, and the middle managers would have frontline managers as directs. The frontline managers would generally partner with the managers of the other disciplines, and their collective teams would work to add features together, but with clear Dev, Test, and PM boundaries. The second was known as the Functional model. It took the idea of clear distinctions between the disciplines even further. PUM roles were eliminated and replaced by 3 new heads, one for each discipline. This continued up the entire organizational stack. Directors and even VPs in Test were quite common. The idea of this model was that each discipline would be far more efficient due to having further control over optimizing their craft. It would reduce waste and duplicated effort. Lastly, the CE model. I think of this model as mostly the polar opposite of the Functional model. It rejects the notion that discipline optimization is the key for producing ROI, and suggests that getting teams to use their very different skills together towards business goals in a tight knit fashion is more effective. It is important to note that on all teams I have encountered, the CE model has almost no impact on PM. However, Dev and Test are combined into the same frontline team under a single leader and are accountable for both the development and testing of the features they are producing.

CE Goal

OK, there’s a LOT to discuss with respect to the Combined Engineering model, but absolutely, beyond a shadow of a doubt, the first and foremost topic should be its goal. Be forewarned: in no way should “implementing CE” be the goal. In order to execute on a business strategy, CE is one of many implementations to consider for your organizational model, but it should be considered alongside the business outcomes and strategies. The goal, in a nutshell, is to create an environment where empowered individuals with complementary, but different, specializations are working together towards common team goals.

If it makes sense for your business, do enter into CE, but don’t do it flippantly. There can be a lot of change incurred with the model and such change brings risks. Consequences I have seen include significant slowdowns in project effectiveness and speed, and massive morale decreases. Where change management strategy has been neglected, I’ve seen those teams lose their best people, which further accelerates the downward spiral. However, a strong implementation results in the opposite: huge performance and morale boosts, and teams that feel like families and work together to achieve the best and most important goals for the business.

Combined Engineering, accordingly, is a very strong complement to your favorite Agile Methodology. I’ve expounded before on the importance of considering software development as knowledge work. Dig into that statement further than just surface level. As Peter Drucker tells us, knowledge work is distinct in that usually the task is not known, but rather must be determined. On an assembly line, the work is known upfront. In knowledge work, there are only outcomes and a selection of choices. Each person on a knowledge working team holds a set of the pieces for the business jigsaw puzzle. CE works to activate Knowledge Sharing amongst team members and get the collective knowledge of the team working together towards business goals. It creates collaboration and unity and, from this, high ROI productivity.

When to implement

My Inner Agile Coach screams “always” as a response to when to implement CE as part of the business strategy, but this is not true. If your product has static and relatively stable goals and/or long release cycles, then your product likely has no fast feedback loop between the engineers and customers. No real reason to change or adapt. Software being released to governments, packaged products or software embedded in your coffee maker or car are examples.   Honestly, I would push to make product changes to enable fast feedback loops before I would consider a CE model.   Otherwise, the benefits are unlikely to be worth the cost, pain, and risk incurred from the changes.

However, if you are able to ship at “internet speed”, have a strong leadership that is trying to push decision making down to its soon-to-be-empowered staff, have the ability to react quickly to competitive pressures and/or customers’ demands AND have a discipline-segregated workforce, then IMHO, this is a no-brainer.

How to Implement

On paper, the implementation is cake. I recommend using the simplest implementation.   Yes, it won’t be perfect, but you can change it 3 to 6 months later once you’ve learned the next change problem to resolve.   Making everything “perfect” up front takes time and creates anxiety.   Better to just start by making the smallest amount of required change NOW. Here are the simple (naïve) steps:

  • All frontline Dev/Test managers are now Engineering managers and no longer representatives for just their discipline.   Managers, who were once partners (in the functional model), are now peers.  (note for simplicity, I will continue to use their former titles though)
  • The product they used to co-own as peers should split in half as vertical slices. One half going to the former Test manager and the other to the former Dev manager.
  • Team accountabilities are the same as they were previously except now instead of being split by discipline each leader owns this within their team.   Both teams are responsible for feature development, unit tests, test harness improvements, etc.
  • The individual developers and testers need to reorg. How?
    • Split into 2 teams of equal sizes
    • The TOP 20% Best Testers go automatically to the Dev Manager’s team
    • The TOP 20% Best Developers go automatically to the Test Manager’s team
    • The Test/Dev Manager work together to appropriately place the rest of the individuals into each team.
  • Create monthly, team-based goals for each vertical slice and do so *without* acknowledging where the team members came from. I.e., the Test manager should not have easy-to-achieve development goals, nor should the Dev manager have softened quality goals.
  • The Test manager gets a new mandate:
    • He was given the best developers in order to bootstrap his success and help transform the rest of his team.   He should use these folks to aggressively train the others.
    • Goal: At the end of 6 months, every former developer should have created and executed their own test suite and every former tester should have checked in their own bug fixes/features into the product.
    • Training will be offered, of course, but the accountability for success falls to the Test manager
    • He has 6 months to achieve that goal. At the end of the period, each of the developers that were forced to move will be given the opportunity to change teams, if desired.
  • The Dev manager gets a similar mandate especially regarding the best testers
  • Obviously, this relies on a strong leadership team that can communicate the expectations and outcomes in a fashion that alleviates concerns. We are forcing people to move teams.   Most uncool, but the 6 month escape clause helps here and honestly, you really do need these experts to help with the training.
  • Lastly, you will need a public “incentive” plan that clearly spells out how performance reviews will be judged based on the ROI of the features that individuals produce. You will not be able to get a good review score by either the previous test review standard or dev’s equivalent. You are NOT incorporating testers into the dev team (or vice versa), nor are you now going to judge Testers by the long-practiced dev standard. A new Engineering standard will need to be developed that makes sense for your business/company, weighted 50% towards test’s benefit and 50% towards dev’s. (Remember, you will change this later when you learn more). The goal here is to break down the old us-vs-them discipline thinking. Your ideal standard will be one that makes both dev & test equivalently comfortable that they can succeed.

The agile-wary reader will notice that, in essence, you are trying to take a team of specialists (dev) and a team of generalists (test) and merge them into more fluid teams of generalizing specialists and specializing generalists. What I have found is that the developers have a much harder time with this than the testers. The testers will be mostly concerned about all of the bugs that will now ship to the customer without their protection. They will also be concerned about how they will be judged by the dev standard. Leadership simply needs to stick to their guns on the 6 month objective. Consistently sticking to these goals is key to them being achieved. You should not call it a failure or a success until you’ve seen movement in the new “muscle memory” of the team.

If it ain’t broke…

Many will tell you the old model wasn’t broken, so why are we changing it? Rarely have I found their assertion to be true. More often than not, the reason these folks proclaim it is not broken is that they are not measuring the right goals. The name of the game in today’s software world is speed, and the single most precious resource you have is calendar time. We can no longer afford to find huge design flaws a month before we ship, and we certainly can’t afford the month-long test pass at the end. These are common consequences of the functional model and optimizing for discipline strengths. They are harmful to today’s business strategies. They are too slow.


Optional ingredients

To close, I will list out some challenges I have encountered, alongside recommendations.

Lack of training – training is key for success. We’ve already given teams their own in-house experts, but this may not be enough. Listen carefully to the struggles of the team and bring in outside help if necessary. Look at external coding bootcamps for your former testers, as well as training on design patterns, algorithms, etc. Treat your high ranking individuals as the leaders they should be and get them presenting brownbags and setting the example of the transformative outcomes you are expecting.

Specialist vs. Generalist battle – many devs will proclaim loudly and proudly that they are specialists and they don’t want to be generalists. Much of this comes from how they got good review scores in the past (focused on owning deep, complex code). Respect their strengths, but be firm on expecting them to broaden. At the end of the day, you don’t want either specialists or generalists; you want people who can go deep when it matters, but you want those same folks to be able to fluidly move to higher ROI tasks. This creates stronger adaptability – a key business imperative.

Shared assets and true specialties – Some topics of concern, such as security, performance, and government compliance, are deep topics by their nature. It is very unreasonable to expect one team to be able to master these AND do development AND do testing. Consider funding a separate team to own this for all. Their goal is to create those assets that make it easier for the regular engineering teams to just “do the right thing” automatically. Test Harnesses and Bug databases and other internal “products” should be owned by a single team with a strong sense of customer focus. Their job is to accelerate the success of the product engineering teams and prevent duplication of effort. Customer focus is key here. Too often I see these teams build what they want to build and senior leadership force others to use it. Far too often, I see these systems fail to achieve the needs of the engineering team as a result. They need to be built to achieve goals, not to look good for the veep.

Area Owners – I heavily recommend banning individuals from owning areas. I encourage centers of gravity and pursuing areas of passion, but make it clear that these areas are owned by the team and anyone can and will work on them. This helps to remove the reliance on specialists to solve problems and removes a key bottleneck from your system flow. Lastly, it really amps up knowledge sharing on the team. If multiple people are familiar with an area of the product that’s broken, then the odds of a fast and righteous fix being implemented are much, much higher than otherwise.

Command and Control – Leadership is the single easiest way to break or stall this transformation. Get rid of command and control methods. As mentioned above, they don’t align with knowledge work anyway. Tell your people the outcomes you want and the criteria that define completeness. They are professionals; they will be able to figure out the next action to take as well as the subsequent ones.



I hope this is helpful.   Please feel free to post any comments and questions below.


Happy New Year!


Unicorns, Data Scientists, and other mythical creatures

Hi all!  It has been a while since I’ve written a blog, and since my last post in January, lots of exciting things have happened to me. Those who have been following me on LinkedIn or listening to the AB Testing Podcast know that I have taken a new job as the data scientist manager in the Azure team. In the short time since I started in March, I can easily say this is absolutely the best job I have ever had. In no small sense, it really feels like a job I was meant to do from the start of my career. It’s funny to me. As I sit and type this, I am reminded of my mentor in High School, my calculus teacher. He was a cranky man, who in one part loved his job, but in another felt like he had accepted second best in life. He frequently mentioned that one day, when *his* high school math teacher died, he would go piss on that grave. He felt that his mentors had not set him up to succeed in life and was driven not to repeat that error with his students. A story for another day, but soon after college, I would learn that same sense of responsibility to teach others what I wasn’t taught. My mentor wanted me to be an Actuary. I loved math (still do) and went to college with that in mind. I bolted on a Computer Science degree once I learned how much I loved it, but upon graduating, my Math degree would encapsulate knowledge that, in general, I would not use for 20 years. Until about 4 years ago. Needless to say, I really love my job, what I am doing, and the problems I am solving. I do wish, now and again, that I could wind back the clock and be where I am today, but have another 20 years to master my new direction. C’est la vie. So much cool stuff to learn and use and so little time.

Very similar to my last team, my new team did not have much experience in their new space when I started, and I am grateful to be on the ground floor of what we are building there. Much like in many places in my company, many of the folks are old SDETs and dealing with the change is an ongoing challenge, but not one that I am unfamiliar with. Honestly, it is going nicely in my humble opinion, but as more and more people are learning what data can do for a business, the pressure to hire and train more data scientists is ever increasing. In the last 9 months, spontaneous 1:1s have increased by an order of magnitude with folks who are: 1) looking to hire a “data scientist”, 2) looking to become one, or 3) looking to preserve their current position. Today’s post is mostly about issue #1. Although, #2 is also interesting to me, as the majority of these folks have been Program Managers lately (which might be a sign of change in that discipline). #3? I would say that’s the majority of what I speak to on my blog and on the podcast, but if there are specific questions, send me a tweet. We will feature it as part of our Mailbag segment on the podcast. We love the mailbag!

This post was inspired by a talk I attended at this year’s Strata conference in New York. The presenter, Katie Kent, did a talk on Unicorn hunting in the data science world, which I thought was fantastic. Her company, Galvanize.com, offers a 12 week immersive course that claims to prep you for a Data Science role with a 94% success rate. I haven’t researched this myself, but maybe I will test it with a few employees of mine and report back. Might be another alternative for the #2 issue I mentioned above. I can say Katie’s talk was great and it resonated. Many of the discussions I have had recently were *exactly* in this problem space. Managers coming to me or my manager wondering how to quickly learn from us being vanguards, asking how to take advantage of Data Science, and “maybe if I can get an open position, you can help me hire one?”.


Yup, that’s right.   One!


Katie’s talk was about Unicorn Hunting. The elusive Unicorn is the perfect singular Data Scientist a company could hire who could solve all of its needs in the data science space. I regret to inform all of you (except Katie): they are extremely rare (perhaps rarer than the Unicorn) and if you can find one, you will probably not be able to afford him/her (note: if you can, though, you should!). The challenge is that this perfect Data Scientist would have to be an expert in too many very distinct fields. The ones I am aware of that exist are indeed Rockstars (in the Stats world), but these aren’t the folks you will successfully hire into your one-off position.

One new experience for me on my new team has been hiring Data Scientists. Almost all of the folks I have interviewed have been PhDs, but the best have only achieved Master’s degrees. To date, I have not met a candidate with only a Bachelor’s degree.  <aside> This, by itself, is interesting. If one can become a data scientist in only 12 weeks, why not 4 years? </aside>   Master’s candidates are great, I think, because they have 1) learned some depth and 2) stayed grounded in the application of their craft. I’ve learned that there’s a big debate in academia with respect to post-doctoral individuals and whether or not to join Industry. They are pushed to push the boundaries of the science, and I think somewhere along the way, the drive to apply it to real life dissipates. Master’s students are more applied scientists, in my humble opinion, than theorists and, as a result, more immediately useful to a business. This is an exaggeration, of course, but it highlights another cause of the Data Scientist shortage. The PhD folks that do venture into industry and survive it are helpful. They are able to pull the practiced, but old, learnings of Data Science closer to what’s currently known, which accelerates everyone involved.

However, this is knowledge work and there’s too much of it.   Due to the cognitive limitations of any single human, it will *always* be rare for just 1 person to be even good enough for what is needed end-to-end.

Depending on who you talk to, there are multiple definitions of a Data Scientist’s job.  My present favorite: A Data Scientist helps a business drive action by understanding and exploiting relationships present in the data.   There are 4 key principles buried in that definition.

4 Key Data Science Principles:

  1. Actionability – the recommendations must be interesting, valuable, and within the means of the business
  2. Credibility – In this business, Objective Truth is *everything*. It is ok to communicate confidence intervals, but it is not ok to be wrong. Data Science teams get, AT MOST, one chance to present wrong data, insights, or recommendations to an executive. It is wise to remember this.
  3. Understanding Relationships – this is the bread and butter work most data scientists are hired to do. There is a vast sea of techniques to use for digging knowledge out of data. One must also have the ability to understand what it means. Domain Knowledge is critical.
  4. Data – lots and lots and lots of it
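To make the Credibility principle concrete, the habit is to report a confidence interval rather than a bare point estimate. Here is a minimal sketch in Python (standard library only) using a normal approximation; the data values are invented purely for illustration:

```python
import math
import statistics

def mean_confidence_interval(sample, z=1.96):
    """Approximate 95% CI for the mean via a normal approximation.

    Assumes a reasonably large sample; for small samples you would
    swap in a t-distribution critical value instead of z=1.96.
    """
    n = len(sample)
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean
    return mean - z * sem, mean + z * sem

# Hypothetical daily conversion rates (invented data)
rates = [0.042, 0.051, 0.047, 0.039, 0.055, 0.048, 0.044, 0.050]
low, high = mean_confidence_interval(rates)
print(f"mean conversion: {statistics.mean(rates):.3f} "
      f"(95% CI: {low:.3f} to {high:.3f})")
```

Presenting the range, not just the mean, is what keeps the "it is ok to communicate confidence intervals" part honest: the executive sees how much the estimate could move.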

To be able to succeed, one must turn data into knowledge and make knowledge work.   Sounds good as a t-shirt motto, but in practice, this is hard.   It takes deep knowledge from several disciplines to turn this into something efficient that scales to not only the amount of data being processed, but also the timeline the business requires in order to benefit from the discoveries.


In my experience, it takes a team to pull this off.


In my observations, here’s what you need:

  1. Data scientists – starting with the obvious.   However, what may not be obvious is that Data Science is a very wide umbrella.   But there are 2 major branches, and likely, you will need both.
    1. Applied Statistics – The ability to prove/disprove known hypotheses in a deductive manner.   You have a belief already in hand and you are trying to prove/disprove it. Applied Stats techniques tend to be faster than Machine Learning.   A few simple histograms are easy to pull together as an (exaggerated) example.
    2. Data Mining – Using Machine Learning techniques in an inductive manner. You start without a preformed belief, but instead with a goal (such as predicting when a customer will churn) and let interesting patterns within the data unveil themselves. (note: interesting is a Data Science term and it can be measured.) Machine learning techniques, in my experience, handle Big Data problems better. They can scale to the size of the data better.
  2. Data Engineers – Engineering the movement, storage, indexing, parallelization, cleansing, and normalization of data is a very hard problem and MOST data scientists do not know how to do this. As Big Data grows, this role, already critical, becomes even more so. Credibility starts with the data and these folks are key to caretaking and monitoring it. They should be paying attention not only to traditional RDBMS solutions, but to technologies such as Hadoop, Splunk, Azure Data Lake, etc. Each of these solutions comes with its own pros and cons and you need someone who knows what they are doing. These folks should understand the architectures end to end, from data emission to visualization. There is NO silver bullet and you need a person who understands the trade-offs. Every executive wants 1) a cheap solution, 2) near real time, and 3) inclusive of all the data. Cheap, Fast, or Good: pick 2.
  3. Computer Scientist – Especially in distributed computing. The current state of the art for Big Data is to parallelize and send your code to the machine that is storing the data to do calculations (MapReduce, in a nutshell). This greatly reduces the time spent, as code is easier to move than the data, but even so, many of the techniques are O(N²). Polynomial time is too slow (or expensive, even with millions of machines running in parallel). There’s an active quest for O(N) and O(1) solutions to Data Science problems, as well as clever approaches to Data Structures that help improve speed and storage costs. One new item that I have not spent nearly enough time on is the use of heuristics. More here later.
  4. Domain Knowledge Expert – Even if you get the people with the skills above, you still need to be able to understand what the data *means* in order to move forward. Typically, the data is being emitted as telemetry from lots of product developers. It is unreasonable to expect that one person can know everything about how the product works, but you will NOT succeed if your in-house data science team knows nothing.
  5. Business Expert – You need to be able to understand what the business goals are and how to translate your uncovered insights to support decision making.   This takes art in communication and visualization.
  6. Agile – An agile coach is needed in order to pull together these folks and get them working together towards common goals. It does NOT make sense to over-focus on *any* of the above specialties. All roles are necessary to succeed and, since this team is working to improve the business, adaptability is key. As new knowledge is gained, the team needs to be able to shift in the new direction sustainably. This happens A LOT!
  7. Manager – Really I wanted to put Orchestrator here, but you really need someone who is a Systems Thinker who is crafting strategies to the best effect for the business.   These folks need to work together.
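The "send your code to the machine that is storing the data" idea from the Computer Scientist bullet can be sketched with a toy, single-machine word count in Python. In a real cluster each `map_phase` call would run on the node that already holds its shard; the shard strings and names here are invented for illustration:

```python
from collections import defaultdict
from itertools import chain

# Pretend each string is a data shard living on a different machine.
shards = [
    "big data is big",
    "data moves slowly so move the code",
    "code is small data is big",
]

def map_phase(shard):
    # Runs "locally" on the machine holding the shard: emit (word, 1) pairs.
    return [(word, 1) for word in shard.split()]

def reduce_phase(pairs):
    # Shuffle/reduce step: sum counts per key, O(N) in the number of pairs.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

mapped = chain.from_iterable(map_phase(s) for s in shards)
print(reduce_phase(mapped))
```

The payoff is that only the small `(word, 1)` pairs cross the network, not the raw shards, which is exactly why moving the code to the data beats moving the data to the code.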


You can scale these “requirements” up or down depending on the problems your team is facing, but the people on the team should be working in unison, like a choreographed dance or an orchestra. They should be one team, not individual teams, and focus on vertical slices of “value” being delivered. No one person can do all of the above. 3 to 4 might be the minimum to produce a team that outputs something considered valuable. Imho, 5-7 folks with the above skills is about right, as long as you have the right depth and breadth.

Lastly, AB podcast listeners will know that I frown on specialists in “pure” development teams. This is true here as well. Every person on your team should be able to perform at least 2 of the functions above (ideally, 3), with at least 1 place where they can competently achieve deep results. In addition, it’s a good idea to minimize the overall overlap of expertise on your team and rely on your Agile coach to create an environment of knowledge sharing and team cooperation.

In closing:

If you are patient, you will be able to create a team of folks who, together, have mastery of the above (and if you are also lucky, keep that team small), but don’t try for the Unicorn. They do exist, but are too hard to find and too expensive. Even if you manage to land one, if the business isn’t prepped to include them in the strategy, you will likely not get the value you hope for from them. Knowledge work cannot be done in a silo in the fleeting windows of opportunity many are encountering today.

Thanks for reading and Happy Holidays!

In pursuit of quality: shifting the Program Manager Mindset

Hello all and a very happy New Year’s wish to you. It has been a LOOOOOOONG time, since I have written a post (as several folks have reminded me). While it remains my goal to post at least once a month, I know I won’t always achieve that goal. These past several months have been a particular drain on my time, probably my most valuable commodity at this time in my life. One of my New Year’s resolutions is to reprioritize how I spend my time. My goal is to focus more deeply on my personal retraining and on my family. I believe many of my readers are considering or following a similar path forward as am I. With luck, the time will avail itself, so I can publish what I’ve learned and share.

As part of that retraining, I’ve gone back to school. I am quite happy to report that I am now done with my first year of an MS in Analytics and, thus far, I’m a straight A student. Honestly, though, it is a bit odd going back to school in your mid-40’s. I couldn’t care less about the grades (it has been decades since someone has asked me my GPA). I am very passionate about learning the topic AND I have school-age children who are watching my every move. I simply can’t have teenagers calling me a hypocrite. The drive to learn the subject matter is really helpful for getting the grades. I wish I had had a similar motivation the first time through college. :) Thus far, I have been able to bring every class I have taken back into the applied context of my work and have done something valuable with it. I’ve got a ways to go, for sure, but the journey has been a lot of work and a lot of fun.

At work, I am helping to land something *very cool* for shifting the world into a more Data-Driven culture. I am on the Power BI (Business Intelligence) team in the SQL division. We are enhancing our SaaS offering, and a public preview of our new features is available now. Check it out. The price tag for the public offering? Absolutely free.

I am quite proud to be a part of this effort. I hope you check it out and provide feedback.

News Update:

Since it has been a while since I last posted, I thought I’d cover a couple of quick topics:

  • Layoffs – Several of my brothers and sisters at Microsoft got laid off in the last few months. Most of the impacted folks that I knew have already landed new jobs and report being even happier than they were. There are still a few people that I know are looking. If you are looking, please feel welcome to send me a note on LinkedIn and/or Twitter. I’ll see if I can leverage my network to help broker new relationships.
  • The AB Testing Podcast is back after a couple of months’ hiatus. (OK, this is old news if you are one of the “three”.) You are welcome to check us out and send any question, comment, or feedback you’d like. Alan and I are both change agents of a sort and believers in collaboration and community for accelerating progress toward goals. I, for one, absolutely adore the Mailbag segment. If there is any way we can improve the podcast, a topic we could cover that might improve your life, or a success story you’d like to share, send us a note. We’ll talk about it “on the air”.
  • Other agents of change: I’d like to call out others who are putting “pen to paper” to help the community grow and converge.
    • Ben Bourland – Has started a blog recently and is journaling his experiences and insights on the shift from Test to Quality.
    • Steve Rowe – A long time blogger and QA manager in the Windows org. He is also trying to influence change towards data-driven.
    • Michael Hunter – Another long-time test blogger and friend did a live presentation recently at SASQAG. The link to the video is here. His talk is on how he came to the realization that he had strengths that were valuable in many disciplines and that testing was not something he had ever actually enjoyed. I share this in case his journey inspires others to let go of their fear that the discipline is changing.

More Changes

About a year ago, I wrote a post with the intent of helping Testers manage the change. A lot has changed in my company as well as others around us, and it is now fairly clear (to me, at least): Test as a dedicated team is a thing of the past. Testers have shifted into Data Analyst roles, development roles, infrastructure roles, or specialist roles in NFRs (non-functional requirements) such as End to End Integration and/or Performance. Very few “Testers” still exist. However, the transition is far from done. Many Testers have just been bolted onto Development teams under the guise of combined engineering, but the actions of those involved haven’t changed. In some teams (thankfully, including mine), Testers are gone and their skillset is slowly being incorporated into the team as a whole. However, it’s a slow, difficult change for the prior development team to fill the void. A wise manager once told me, “Brent, be patient. It’s a marathon, not a sprint.” This change is far from over, but it is definitely moving in a more productive direction.

However, one role that persists and continues to show very little sign of change is PM. Program managers historically have owned the requirements defining what we are bringing to customers, then driven a schedule toward delivering it. Most PMs spend time speaking with customers and build a strong intuition about what customers want. They are generally very charismatic and likeable and work hard to try to make people happy. While I have never been a PM, back in the day my father was Director of PM for multiple companies, and we now have very interesting conversations about our differing experiences. For example, one commonality my father seemed to share with other PMs I know was a drive to get into an executive role. I have had an opportunity on multiple occasions to work directly for the executive on my team. I think my dad was more than a little disappointed when I told him, “I have very little drive or desire to do that job.”

I was once told by a mentor that “Test doesn’t understand the customer” and that this was the main reason why PM advanced to executive more often than not. I think most PMs I have spoken with share this opinion to one degree or another. However, it is my belief that the next big culture shift is about to come. Right now, PM, too, either won’t understand the customer and/or will be too slow to act on the appropriate window of opportunity. I believe the primary root cause will be an over-reliance on intuition and a reluctance to test it out. PMs are used to being able to rely on their soft skills and having a long time to react to the market. In comparison to techniques used by the competition, the result is slow and often just plain wrong. Moving forward requires a mindshift in the program management organization.


I have mentioned before that in today’s world, speed to market on value-adds is paramount. Old-school PM techniques, unfortunately, are too slow and do not scale. Thus far, PM has remained relatively unscathed by these changes, but I firmly believe their judgment day is coming next. Their careers are at stake. Consider this: as data science takes further hold in organizations, and teams light up the ability for individual engineers to make correct and actionable decisions on their own, the need for a team Customer Expert (i.e., PM) becomes *dramatically* reduced. Those paying attention will notice that the need for PM to take on the role of schedule meister has already been cut way back. Unlike Test, I believe PM will still be needed, but I do believe we are nearing a time when we will see their numbers greatly reduced. PM must learn to adapt and apply this new knowledge to not only survive, but thrive. IMHO, it will be those who learn to balance their intuition with the data toward actionable and valuable decision making who will differentiate themselves from the pack.

Better Together

I have had multiple conversations with the Data Science groups throughout the company and a new series of problems are becoming quite clear:

  1. No Customers – Complaints from the data folks that they are building assets that should be used, but aren’t.
  2. DRIP (Data Rich, Insight Poor) – The items they are building that DO get used are, in essence, Scorecards and Dashboards filled with Vanity Metrics that aren’t shifting the needle towards anything the business values.

There’s an obvious symbiotic relationship between program management and data science. These folks need to be working together in concert like a well-oiled machine. I really don’t understand why they aren’t, but it’s clear from the number of people talking to me about it that this isn’t happening, or isn’t happening enough. I think a big part of it is that PM doesn’t know how to take advantage of the data teams, and the data teams don’t know how to express their value in a way that resonates with PM in a non-threatening fashion; by that I mean a win-win scenario. I’ve spoken with several PMs who very strongly believe this is all “data crap” and their intuition is all that is needed. True, Steve Jobs did it. I would argue that Mr. Jobs was special. He was one in a million whose intuition just happened to be right. The rest of us are wrong all of the time. The positive thing, though: PM doesn’t have to do it alone (and, in my belief, they probably can’t). Your data science team is ready, willing, and eager to help.

Show Me the Evidence

I believe PMs need to invest more heavily in understanding how to do Evidence-based decision making (aka Hypothesis Testing). A key principle for their lives going forward should be: no matter how right you think you are, due to uncertainty, there is *some* chance you are wrong. Therein lies the problem: there is always some risk associated with uncertainty (such as wasting time/resources on a problem customers don’t care about us solving). Please feel free to leverage your intuition; this new world is *NOT* about intuition versus facts, but rather intuition validated by facts. Both… Together… Intuition on its own can very quickly lead you in the wrong direction. Likewise, facts on their own can lead you to optimizing your current business, but will not help you find the breakthrough game-changer with higher business potential.

Hypothesis testing in a nutshell

The goal of hypothesis testing is to be able to confidently select the best available next action to take.

NOTE: “Best” is relative. Good enough *IS* in fact good enough. One common error I see PMs making a lot lately is deferring a decision until they have 100% accurate and precise information. You simply do not need this. Leveraging heuristics and a solid understanding of how to use confidence intervals will take you far. For example, Douglas Hubbard’s Rule of Five tells us that there is a greater than 93% probability that the median of a population is between the smallest and largest values in a random sample of only five items. Do you *really* need to know the median? Or is knowing the range all you need?
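The Rule of Five is easy to check for yourself. Here is a minimal sketch in Python; the population and trial counts are made up purely for illustration:

```python
import random

# Rule of Five: the median escapes the sample range only when all five
# draws land on the same side of it, which happens with probability
# 2 * (1/2)**5 = 6.25%. So the range brackets the median ~93.75% of the time.
analytic = 1 - 2 * (0.5 ** 5)

# Empirical check: draw a population, then repeatedly sample five items
# and count how often the true median falls inside the sample range.
random.seed(42)
population = [random.gauss(0, 1) for _ in range(100_001)]
median = sorted(population)[len(population) // 2]

trials = 20_000
hits = sum(
    min(s) <= median <= max(s)
    for s in ([random.choice(population) for _ in range(5)] for _ in range(trials))
)
print(f"analytic: {analytic:.4f}, simulated: {hits / trials:.4f}")
```

Five samples will not tell you the median precisely, but they bracket it with better than 93% confidence, which is often all the decision needs.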


  • Idea Phase
    • Write down your hypotheses individually
      • These are in a form of a statement (not a question)
      • These must also reflect a business KPI.
  • Knowledge Phase (knowledge is information in action. The ability to use information)
    • For each hypothesis, enumerate your possible actions
      • What actions will you take if the hypothesis was true?
      • What actions will you take if false?
  • Information Phase (information is data organized in a meaningful way)
    • Now you need to enumerate the questions needed in order to confidently select the appropriate action.
      • Battle confirmation bias – the human tendency to search for, interpret, and remember information in a way that confirms one’s preconceptions
  • Data Phase (data are raw facts that are meaningless by themselves)
    • The last phase is to simply enumerate the data points you need to answer your questions.
      • Most of the time you will be measuring customer behavior. Measure the behavior you want customers to take, instead of trying to measure everything.
      • Be wary of the Hawthorne effect: people’s behavior will change according to how they know they are being measured.
    • Get these instrumented or build a heuristic based on your existing instrumentation that can be used to answer your questions.
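The four phases above can be sketched as a simple checklist structure that a PM and data team fill in together. The class and field names below are my own invention, purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One testable statement tied to a business KPI."""
    statement: str                                        # Idea phase: a statement, not a question
    kpi: str                                              # the business KPI it must move
    actions_if_true: list = field(default_factory=list)   # Knowledge phase
    actions_if_false: list = field(default_factory=list)
    questions: list = field(default_factory=list)         # Information phase
    data_points: list = field(default_factory=list)       # Data phase

    def is_testable(self) -> bool:
        # Only actionable once every phase has been filled in.
        return all([self.statement, self.kpi,
                    self.actions_if_true or self.actions_if_false,
                    self.questions, self.data_points])

h = Hypothesis(
    statement="Facebook sharing will notably increase Acquisition",
    kpi="Acquisition rate",
    actions_if_true=["Optimize the sharing flow"],
    actions_if_false=["Cut the feature"],
    questions=["Which receivers entered the service via a sharing invite?"],
    data_points=["Receivers' acquisition date & means"],
)
print(h.is_testable())  # True
```

The point of the structure is the discipline: if any phase is empty, you are not ready to run the experiment.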

Hubbard provides 4 very useful measurement assumptions to consider when you are designing your data phase:

  1. Your problem is not as unique as you think.
  2. You have more data than you think.
  3. You need less data than you think.
  4. An adequate amount of new data is more accessible than you think.

Example: (oversimplified)

Hypothesis: Within my product, enabling Users to share with other users via Facebook will cause a notable increase in Acquisition and Engagement KPIs.

Possible Actions: (not complete)

  • True
    • No action
    • Optimize – make it really easy for users to share
    • Enhance – improve the sharing content to entice receivers
  • False
    • Abandon future feature development or cut
    • If acquisition improves, but not engagement, develop new hypothesis.
    • If engagement, but not acquisition, improves, develop new hypothesis.

Possible Questions: (not complete)

  • Which users shared?
  • Which users received invites?
  • Which receivers entered the service due to a sharing invite?
  • How does acquisition via sharing compare to other acquisition means?
  • How do sharers’ engagement levels compare with those of users who don’t share?
  • How do receivers’ engagement levels compare with those of users who didn’t receive an invite?

Data Needed:

  • Users who share & receive
  • Receivers’ acquisition date & means
  • Acquisition rate correlated with receiving rate
  • Engagement rate correlated with sharing rate
  • Engagement rate correlated with receiving rate

If PMs and Data teams work together to craft their hypotheses (and experiments), more precise and accurate decisions will be made. Ideally, with the minimal amount of new instrumentation having to be added to the product. Be prepared to be wrong… A lot… But celebrate it. The faster you are wrong also turns out to be the faster you will *stop* being wrong by building fast, actionable knowledge towards business goals.
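To make the example concrete, here is one way a data team might answer “How does acquisition via sharing compare to other acquisition means?” using a standard two-proportion z-test. The counts are invented for illustration:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Made-up counts: 340 of 2,000 invited users signed up,
# versus 280 of 2,000 users acquired through other means.
z = two_proportion_z(340, 2000, 280, 2000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

A result well above 1.96 supports acting on the “true” branch of the hypothesis; a result near zero says your intuition has not yet been validated by the facts.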

One final note:

There is one other phenomenon that seems to be becoming common in the post-tester world and needs to be addressed.

PM, you need to STOP testing for your developers. Yes, yes, I know. “But Brent, I am now being held accountable for the customer satisfaction level for my feature and what I keep getting is bugs, bugs, and bugs.” Remember that this is a system problem. We’ve changed the software development system by removing testers from it, but by no means does this imply that this was done with the right tooling and training in place. Nor does it mean that you and your peers in PM have figured out how to truly determine the right number of features your dev team can produce in order to confidently get to done, done, done. There is much work to do here, for sure. You need to play your part.

How? Stop being the safety net. It is absolutely OK for you to be a part of the team and help everyone improve the quality of the product under development, but always consider whether your role in doing this is becoming more and more required. This might imply a dysfunction in the system: the team has learned that they can avoid doing testing themselves by shipping their crap to you. It’s super seductive for a dev. I, myself, have found my directs doing this at least a couple of times in the past several months. It’s also seductive for you. Usually, when you find that critical issue, you will get praised for saving the team’s bacon. Both sides get an endorphin rush.

Instead, I encourage you to strike a different balance between improving and shipping your features and playing your new role in this system. My recommendation is to take on the strict role of the PO for your team. A key responsibility of the Product Owner is to own the Acceptance phase. Acceptance does not mean you find bugs. It means you own the decision as to whether or not the feature is ready to ship to your customer.

You can do this easily by doing 2 things:

  1. Making sure your team does an Acceptance Interview with you before the final check-in into the shipping build.
    1. It is an interview only. DO NOT open the product. This is for the short term only and is required in order to set expectations: it is not your job to disprove/prove that they are done. It is theirs. It is your job to be satisfied that they have done so. Once your team has shown signs of using the new expected behaviors, feel free to change the interview process if it makes everyone’s life easier, but until then, resist the urge.
    2. Usually, you will have 2 lists for acceptance: 1) the requirements (acceptance criteria) defined when the item was still in the backlog, and 2) the non-functional requirements (perf, scale, localization, security, etc) that all features must be scrutinized against.
    3. For each requirement, simply ask the developer: “How did you prove readiness for this requirement?”. If their answer is vague, follow up with more precise questioning.
    4. Using your best judgment, if you don’t like what you are hearing, then Reject with your rationale.
  2. Being brave. For a short period of time, you will be causing disruption in the system. Your devs may have gotten used to you playing safety net. Change of this sort almost always causes anxiety, and that anxiety may come with consequences depending on your team’s culture. In addition, your actions will likely result in several of your key stories/features not completing in this sprint. Remember, your job is first and foremost to satisfy your customer. Your preference should be to do that alongside your team, but if your team wants something else, you may need to “go it alone” for a little while. Remember also that everyone on your team is a professional, but they may have some bad behaviors due to existing muscle memory. Your job should be to help coach them to learn new ways.
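To illustrate step 1, the acceptance interview can be tracked as requirement-to-evidence pairs, with the verdict driven by whether evidence exists at all. Every requirement and evidence string below is hypothetical:

```python
# Hypothetical acceptance-interview record: for each requirement,
# the developer's answer to "How did you prove readiness?"
requirements = {
    "Users can share via Facebook": "Unit + integration tests, demo to PO",
    "Page loads under 2s (perf NFR)": "",   # vague or missing evidence
    "New strings localized (NFR)": "Pseudo-localization pass completed",
}

def acceptance_verdict(reqs):
    """Reject when any requirement lacks concrete evidence of readiness."""
    missing = [r for r, evidence in reqs.items() if not evidence.strip()]
    return ("Reject", missing) if missing else ("Accept", [])

verdict, gaps = acceptance_verdict(requirements)
print(verdict, gaps)  # Reject ['Page loads under 2s (perf NFR)']
```

The code cannot judge whether the evidence is *good*; that is what the interview’s precision questioning is for. It only enforces that nothing ships with no answer at all.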

As always, thank you for reading and I appreciate and welcome any feedback you’d like to offer.

Systems Thinking: Why I am done with agility as a goal

    Recently, I was writing up a presentation where I was going to state that the New Tester’s job definition was to “accelerate business agility”. One of my peers looked at it and remarked, “Isn’t that sort of redundant?”. After some discussion, it became clear that “agility” did not have a clear, well-understood definition.

To be clear, I am MOST definitely not done with Agile methods, but as best as I am able, I am done with using the word ‘agility’ to describe them. If you look the word up in your favorite dictionary, you will find it defined as “moving quickly”. While moving quickly is certainly a valuable goal, it is pitifully insufficient in the modern software world and, if not tempered correctly, can actually lead to more pain than you started with. When I now give talks on Agile, my usual starting point is to clarify that Agile is not about moving quickly so much as it is about changing direction quickly. So, in a nutshell, Agile is not about agility. One problem I am trying to unwind is the dominance of strong-willed, highly paid folks proclaiming that agility is the goal when, quite simply, they do not know what they are talking about, as evidenced by the typical lack of detail explaining the behavior and/or success changes their teams should be making. Their reports “follow” this guidance but are left to their own devices to make it up. A few clever folks actually study it and realize that shifting to Agile is quite a paradigm shift and hard to do well. This can be a slow process, which seems to contradict the goal of “moving quickly”, so it gets abandoned for a faster version of Waterfall or a similarly dysfunctional hybrid. There’s a common phrase in MBA classes: “Pick 2: cheap, fast, or good”. It implies that a singular focus on fast is likely to deliver crap, and at a high cost.

One quick test to see if your leader understands: ask how much we are going to invest in real-time learning, then observe how those words align with actions. Moving fast without learning along the way is definitely NOT Agile; more importantly, it is fraught with peril.

Many of my recent blog posts are on the topic of leadership. If you find yourself in such a role and are trying to lead a team towards Agile, my guidance is to think carefully about the goals and behaviors you are expecting and use the word that describes them better. If you don’t know what you want, then get trained. In my experience, using Agile methods is very painful if the team leadership does not know what, why, and how to use them.

Consider these word alternatives:

  • Nimble: quick to understand, think, devise, etc.
  • Dexterity: the ability to move skillfully
  • Adaptability: the ability to change (or be changed) to fit changed circumstances

These ALL make more sense to me than “moving quickly”, but adaptability is what fits the bill the best in my mind.

    In my last post, I focused on one aspect of the paradigm shift happening in the world of test towards the goal of improving adaptability. I have mentioned before that my passion (and the primary reason I write this blog) is Quality. However, to make a business well-functioning in this modern age, a singular focus on changing the quality paradigm is not sufficient. As Test makes its shift, other pieces of the system must take up the slack. For example, a very common situation is that Test simply stops testing in favor of higher-value activities. Dev then needs to take up that slack. If they don’t (and most likely they won’t initially), then they will ship bugs to customers and, depending on customer impact, cause chaos as dev attempts to push testing back. We need to consider the whole system, not just one part of it.

A couple of months ago, I was asked to begin thinking through the next phase of shifting the org towards better adaptability. Almost immediately, I rattled off the following list of paradigm shifts that need to be done to the system as a whole.








  • Spider teams → Starfish teams
  • … → Quality (value)
  • NIH is bad → NIH is Awesome
  • Large batch → Small batch
  • Green is good → Red is good
  • … → Shared Accountability

Hopefully, you can see that moving quickly is certainly a part of this, but more importantly, this list shows a series of changes needed for focus, sharing, understanding the current environment, and learning…

Recently, I have come upon some material from Dr. Amjad Umar (currently a senior strategist at the UN and one of my favorite professors) in which he argues that companies should be deliberately considering the overall “smartness” of their systems. He states that technologies alone cannot improve smartness, but you can improve it by starting with the right combination of changes to your existing People, Processes, and Technology. Smartness, by the way, is analogous to Adaptability.

I have taken his concept and broadened it to something I call “Umar’s Smartness Cube”. I think it nicely describes at a high level what needs to be considered when one makes System changes. The goal of the whole cube, of course, is to improve Business Value.

How to use this to improve your system:

  1. First determine and objectively measure the goal you are trying to achieve.
  2. Consider the smartness cube and enumerate opportunities to improve the above goal.
  3. Consider tradeoffs between other elements to achieve goals better. For example, maybe we don’t need the world’s best technical widget if we just change the process for using what we have to reduce the training burden.
  4. Prioritize these opportunities (I like to use (BizValue+TimeCriticality)/Cost)
  5. Get them in a backlog that acts like a priority queue and start executing.
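Steps 4 and 5 can be sketched in a few lines using the (BizValue+TimeCriticality)/Cost formula from step 4; the opportunities and their scores below are made up:

```python
# Score each opportunity with (BizValue + TimeCriticality) / Cost
# and sort into a priority-queue-style backlog. Items are illustrative.
opportunities = [
    {"name": "Simplify training process", "biz_value": 8,  "time_criticality": 5, "cost": 2},
    {"name": "Build new widget",          "biz_value": 13, "time_criticality": 3, "cost": 8},
    {"name": "Automate deployment",       "biz_value": 5,  "time_criticality": 8, "cost": 3},
]

def score(item):
    return (item["biz_value"] + item["time_criticality"]) / item["cost"]

backlog = sorted(opportunities, key=score, reverse=True)
for item in backlog:
    print(f"{score(item):.2f}  {item['name']}")
```

Note how the cheap process change outranks the expensive new widget, echoing the tradeoff described in step 3.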


This, of course, is over-simplified, but hopefully it sets you in an actionable direction for “accelerating the adaptability of your Business (system)”.

As thinking-in-progress, any feedback is appreciated.

AB Testing – Episode 0100

– Alan’s upcoming presentations at Star East
– We explore the potential topic for Alan’s 5 minute Lightning talk
– We spend a good deal of time on the continuing need for Leaders to help resuscitate Test Zombies.
– very briefly talk about Gamification for Engagement
– And Alan drops a surprise bomb on me

It’s been a while since I’ve actually written something, and I have a couple of topics I am itching to talk about. I will try to get one of those out this weekend.

Want to subscribe to the podcast?
RSS feed

Also, on the Windows Phone store. Search for “AB Testing”.

AB Testing Podcast – Episode 2 – Leading change

The Angry Weasel and I have put together another podcast. In this episode, we talk about problems we see in leading change towards a better direction. We cover some changes we face, change management, and reasons change fails. We talk about the importance of “why” and leverage the “Waterfall vs. Agile” religious war, as an example.

We managed to keep our “shinythingitis” to a minimum of slightly less than 70% of the time. 🙂 Enjoy!

Want to subscribe to the podcast?
RSS feed

A/B Testing Podcast goes live

So one day, a colleague and friend, Alan Page, said “Hey, Brent, why don’t we start a podcast?”. After much discourse (Alan pushing me) and challenges (me finding the time), I am happy to announce that I suck at it, but am doing it anyways. I am very thankful to have an old pro who is letting me tag along for the ride. Anyways, if you’ve got 30 minutes to kill and want to hear more from a couple of guys who are trying to help lead change, please check it out. AB, in this context, stands for Alan/Brent, and in “Episode 1” we explore Testing vs. Quality as well as recent changes happening at Microsoft. Enjoy. Feedback is always welcome. We are likely to keep going until someone tells us that the pain is unbearable. 🙂

Download: AB Testing – “Episode” 1


Want to subscribe to the podcast? Here’s the RSS feed.