In pursuit of quality: shifting the Program Manager Mindset

Hello all and a very happy New Year’s wish to you. It has been a LOOOOOOONG time since I have written a post (as several folks have reminded me). While it remains my goal to post at least once a month, I know I won’t always achieve that goal. These past several months have been a particular drain on my time, probably my most valuable commodity at this point in my life. One of my New Year’s resolutions is to reprioritize how I spend my time. My goal is to focus more deeply on my personal retraining and on my family. I believe many of my readers are considering or following a similar path to mine. With luck, the time will avail itself so I can publish what I’ve learned and share it.

As part of that retraining, I’ve gone back to school. I am quite happy to report that I am now done with my first year of an MS in Analytics and, thus far, I’m a straight-A student. Honestly, though, it is a bit odd going back to school in your mid-40s. I really couldn’t care less about the grades (it has been decades since someone asked me my GPA). I am very passionate about learning the topic AND I have school-age children who are watching my every move. I simply can’t have teenagers calling me a hypocrite. The drive to learn the subject matter really helps with the grades. I wish I had had similar motivation the first time through college. :) Thus far, I have been able to bring every class I have taken back into the applied context of my work and do something valuable with it. I’ve got a ways to go, for sure, but the journey has been a lot of work and a lot of fun.

At work, I am helping to land something *very cool* for shifting the world toward a more data-driven culture. I am on the Power BI (Business Intelligence) team in the SQL division. We are enhancing our SaaS offering, and a public preview of our new features is available now. Check it out. The price tag for the public offering? Absolutely free.

I am quite proud to be a part of this effort. I hope you check it out and provide feedback.

News Update:

Since it has been a while since I last posted, I thought I’d cover a couple of quick topics:

  • Layoffs – Several of my brothers and sisters at Microsoft were laid off in the last few months. Most of the impacted folks that I knew have already landed new jobs and report being even happier than they were. There are still a few people that I know are looking. If you are one of them, please feel welcome to send me a note on LinkedIn and/or Twitter. I’ll see if I can leverage my network to help broker new relationships.
  • The AB Testing Podcast is back after a couple of months’ hiatus. (OK, this is old news if you are one of the “three”.) You are welcome to check us out and send any question, comment, or feedback you’d like. Alan and I are both change agents of a sort and believers in collaboration and community for accelerating progress toward goals. I, for one, absolutely adore the Mailbag segment. If there is any way we can improve the podcast, a topic that might improve your life, or a success story you’d like to share, send us a note. We’ll talk about it “on the air”.
  • Other agents of change: I’d like to call out others who are putting “pen to paper” to help the community grow and converge.
    • Ben Bourland – Recently started a blog and is journaling his experiences and insights on the shift from Test to Quality.
    • Steve Rowe – A long-time blogger and QA manager in the Windows org. He is also trying to influence the change towards data-driven engineering.
    • Michael Hunter – Another long-time test blogger and friend, who recently gave a live presentation at SASQAG. The link to the video is here. His talk is on how he came to realize that he had strengths that were valuable in many disciplines and that testing was not something he had ever actually enjoyed. I share this in case his journey inspires others to let go of their fear that the discipline is changing.

More Changes

About a year ago, I wrote a post with the intent of helping Testers manage the change. A lot has changed in my company, as well as others around us, and it is now fairly clear (to me, at least): Test as a dedicated team is a thing of the past. Testers have shifted into Data Analyst roles, development roles, infrastructure roles, or specialist roles in NFRs (non-functional requirements) such as End-to-End Integration and/or Performance. Very few “Testers” still exist. However, the transition is far from done. Many Testers have just been bolted onto Development teams under the guise of combined engineering, but the actions of those involved haven’t changed. In some teams (thankfully, including mine), Testers are gone and their skillset is slowly being incorporated into the team as a whole. However, it’s a slow, difficult change for the prior development team to fill the void. A wise manager once told me, “Brent, be patient. It’s a marathon, not a sprint.” This change is far from over, but it is definitely moving in a more productive direction.

However, one role persists that continues to show very little sign of change: PM. Program Managers have historically owned the requirements defining what we bring to customers, then driven a schedule towards delivering it. Most PMs spend time speaking with customers and build a strong intuition about what customers want. They are generally very charismatic and likeable and work hard to make people happy. While I have never been in PM, back in the day my father was Director of PM for multiple companies, and we now have very interesting conversations about our differing experiences. For example, one commonality my father seemed to share with other PMs I know was a drive to get into an executive role. I have had the opportunity on multiple occasions to work directly for the executive on my team. I think my dad was more than a little disappointed when I told him, “I have very little drive or desire to do that job.”

I was once told by a mentor that “Test doesn’t understand the customer” and that this was the main reason why PM advanced to executive more often than not. I think most PMs I have spoken with share this opinion to one degree or another. However, it is my belief that the next big culture shift is about to come: PM, too, either won’t understand the customer, and/or their ability to act within the appropriate window of opportunity will be too slow. I believe the primary root cause will be an over-reliance on intuition and a reluctance to test it. PMs are used to being able to rely on their soft skills and having a long time to react to the market. Compared to the techniques used by the competition, the result is slow and often just plain wrong. Moving forward requires a mindshift in the program management organization.

Consequences

I have mentioned that in today’s world, speed to market on value-adds is paramount. Old-school PM techniques, unfortunately, are too slow and do not scale. Thus far, PM has remained relatively unscathed by these changes, but I firmly believe their judgment day is coming next. Their careers are at stake. Consider this: as data science takes further hold in organizations, and teams light up the ability for individual engineers to make correct and actionable decisions on their own, the need for a team Customer Expert (i.e., PM) becomes *dramatically* reduced. For those paying attention, the need for PM to take on the role of schedule meister has already been cut way back. Unlike Test, I believe PM will still be needed, but I do believe we are nearing a time when we will see their numbers greatly reduced. PM must learn to adapt and apply this new knowledge to not only survive, but thrive. IMHO, it will be those who learn to balance their intuition with data towards actionable and valuable decision making who will differentiate themselves from the pack.

Better Together

I have had multiple conversations with the Data Science groups throughout the company and a new series of problems are becoming quite clear:

  1. No Customers – Complaints from the data guys that they are building assets that should be used, but aren’t.
  2. DRIP (Data Rich, Insight Poor) – The items they build that DO get used are, in essence, Scorecards and Dashboards filled with Vanity Metrics that aren’t moving the needle towards anything the business values.

There’s an obvious symbiotic relationship between program management and data science. These folks need to be working together in concert like a well-oiled machine. I really don’t understand why they aren’t, but it’s clear from the number of people talking to me about it that this isn’t happening, or isn’t happening nearly enough. I think a big part of it is that PM doesn’t know how to take advantage of the data teams, and the data teams don’t know how to express their value in a way that resonates with PM in a non-threatening fashion; by that I mean a win-win scenario. I’ve spoken with several PMs who very strongly believe this is all “data crap” and their intuition is all that is needed. True, Steve Jobs did it. I would argue that Mr. Jobs was special. He was one in a million whose intuition just happened to be right. The rest of us are wrong all of the time. The positive thing, though: PM doesn’t have to do it alone (and, in my belief, they probably can’t). Your data science team is ready, willing, and eager to help.

Show Me the Evidence

I believe PMs need to invest more heavily in understanding how to do evidence-based decision making (aka Hypothesis Testing). A key principle for their lives going forward should be: no matter how right you think you are, due to uncertainty, there is *some* chance you are wrong. Therein lies the problem: there is always some risk associated with uncertainty (such as wasting time and resources on a problem customers don’t care about us solving). Please feel free to leverage your intuition; this new world is *NOT* about intuition versus facts, but rather intuition validated by facts. Both… Together… Intuition on its own can very quickly lead you in the wrong direction. Likewise, facts on their own can lead you to optimize your current business, but will not help you find the breakthrough game-changer with higher business potential.

Hypothesis testing in a nutshell

The goal of hypothesis testing is to be able to confidently select the best available next action to take.

NOTE: “Best” is relative. Good enough *IS* in fact good enough. One common error I see PMs making a lot lately is deferring a decision until they have 100% accurate and precise information. You simply do not need this. Leveraging heuristics and a solid understanding of how to use confidence intervals will take you far. For example, Douglas Hubbard’s Rule of Five tells us that there is a greater than 93% probability that the median of a population is between the smallest and largest values in a random sample of only five items. Do you *really* need to know the median? Or is knowing the range all you need?
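If the Rule of Five sounds too good to be true, you can verify it yourself. The math is simply 1 - 2 * 0.5^5 = 93.75% (the only way the sample range can miss the median is for all five samples to land on the same side of it), and a quick simulation, sketched below, agrees:

```python
import random

# Rule of Five: with 5 random samples, the population median falls between
# the sample min and max unless all 5 land on the same side of the median.
# P(miss) = 2 * 0.5**5 = 6.25%, so P(hit) = 93.75%.
population = [random.gauss(100, 15) for _ in range(100_000)]
true_median = sorted(population)[len(population) // 2]

trials = 100_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1

print(f"Median inside sample range: {hits / trials:.2%}")  # ~93.75%
```

Note that the distribution doesn’t matter; the same 93.75% holds for any population, which is exactly what makes the heuristic so handy.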

Steps

  • Idea Phase
    • Write down your hypotheses individually
      • These are in the form of a statement (not a question)
      • These must also reflect a business KPI.
  • Knowledge Phase (knowledge is information in action; the ability to use information)
    • For each hypothesis, enumerate your possible actions
      • What actions will you take if the hypothesis is true?
      • What actions will you take if it is false?
  • Information Phase (information is data organized in a meaningful way)
    • Now enumerate the questions that must be answered in order to confidently select the appropriate action.
      • Battle confirmation bias – the human tendency to search for, interpret, and remember information in a way that confirms one’s preconceptions
  • Data Phase (data are raw facts that are meaningless by themselves)
    • The last phase is to simply enumerate the data points you need to answer your questions.
      • Most of the time you will be measuring customer behavior. Measure the behavior you want customers to take, instead of trying to measure everything.
      • Be wary of the Hawthorne effect: people’s behavior will change according to how they know they are being measured.
    • Get these instrumented or build a heuristic based on your existing instrumentation that can be used to answer your questions.
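Pulling the four phases together, here is a minimal sketch of how a hypothesis worksheet might be captured in code. The structure and field names are my own illustration, not a standard format, and a fuller worked example follows below:

```python
from dataclasses import dataclass, field

@dataclass
class HypothesisWorksheet:
    """One hypothesis carried through the four phases."""
    statement: str  # Idea phase: a statement tied to a business KPI
    actions_if_true: list[str] = field(default_factory=list)   # Knowledge phase
    actions_if_false: list[str] = field(default_factory=list)  # Knowledge phase
    questions: list[str] = field(default_factory=list)         # Information phase
    data_points: list[str] = field(default_factory=list)       # Data phase

# A deliberately tiny entry; the worked example below fills these out further.
worksheet = HypothesisWorksheet(
    statement="Enabling sharing will cause a notable increase in the Acquisition KPI.",
    actions_if_true=["Optimize: make it really easy for users to share"],
    actions_if_false=["Abandon future feature development or cut"],
    questions=["Which receivers entered the service due to a sharing invite?"],
    data_points=["Receivers' acquisition date & means"],
)
print(worksheet.statement)
```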

Hubbard provides 4 very useful measurement assumptions to consider when you are designing your data phase:

  1. Your problem is not as unique as you think.
  2. You have more data than you think.
  3. You need less data than you think.
  4. An adequate amount of new data is more accessible than you think.

Example: (oversimplified)

Hypothesis: Within my product, enabling Users to share with other users via Facebook will cause a notable increase in Acquisition and Engagement KPIs.

Possible Actions: (not complete)

  • True
    • No action
    • Optimize – make it really easy for users to share
    • Enhance – improve the sharing content to entice receivers
  • False
    • Abandon future feature development or cut
    • If acquisition improves, but not engagement, develop a new hypothesis.
    • If engagement improves, but not acquisition, develop a new hypothesis.

Possible Questions: (not complete)

  • Which users shared?
  • Which users received invites?
  • Which receivers entered the service due to a sharing invite?
  • How does acquisition via sharing compare to other acquisition means?
  • How do sharers’ engagement levels compare with those of users who don’t share?
  • How do receivers’ engagement levels compare with those of users who don’t receive invites?

Data Needed:

  • Users who share & receive
  • Receivers’ acquisition date & means
  • Acquisition rate correlated with receiving rate
  • Engagement rate correlated with sharing rate
  • Engagement rate correlated with receiving rate
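As a sketch of what answering one of those questions might look like once the data is flowing, here is a two-proportion z-test comparing the engagement rate of sharers against non-sharers. The counts are made up for illustration; in practice they would come from your instrumentation:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts pulled from instrumentation (illustrative numbers only).
sharers_engaged, sharers_total = 840, 2_000
non_sharers_engaged, non_sharers_total = 3_150, 9_000

p1 = sharers_engaged / sharers_total            # engagement rate of sharers
p2 = non_sharers_engaged / non_sharers_total    # engagement rate of non-sharers
pooled = (sharers_engaged + non_sharers_engaged) / (sharers_total + non_sharers_total)
se = sqrt(pooled * (1 - pooled) * (1 / sharers_total + 1 / non_sharers_total))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided test

print(f"Sharers: {p1:.1%}, non-sharers: {p2:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```

One caution: sharers self-select, so a gap here supports the hypothesis rather than proving causation; a controlled experiment (flighting the sharing feature to a random subset of users) is the stronger test.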

If PMs and Data teams work together to craft their hypotheses (and experiments), more precise and accurate decisions will be made, ideally with a minimal amount of new instrumentation added to the product. Be prepared to be wrong… a lot… but celebrate it. The faster you are wrong, the faster you will *stop* being wrong, by building fast, actionable knowledge towards business goals.

One final note:

There is one other phenomenon that seems to be becoming common in the post-tester world that needs to be addressed.

PM, you need to STOP testing for your developers. Yes, yes, I know. “But Brent, I am now being held accountable for the customer satisfaction level for my feature and what I keep getting is bugs, bugs, and bugs.” Remember that this is a system problem. We’ve changed the software development system by removing testers from it, but by no means does this imply that this was done with the right tooling and training in place. Nor does it mean that you and your peers in PM have figured out how to truly determine the right number of features your dev team can produce in order to confidently get to done, done, done. There is much work to do here, for sure. You need to play your part.

How? Stop being the safety net. It is absolutely OK for you to be a part of the team and help everyone improve the quality of the product under development, but always consider whether your role in doing this is becoming more and more required. This might imply a dysfunction in the system: the team has learned that they can avoid doing testing themselves by shipping their crap to you. It’s super seductive for a dev. I, myself, have caught my directs doing this at least a couple of times in the past several months. It’s also seductive for you. Usually, when you find that critical issue, you will get praised for how well you saved the team’s bacon. Both sides get an endorphin rush.

Instead, I encourage you to strike a different balance between improving and shipping your features and playing your new role in this system. My recommendation is to take on the strict role of the PO for your team. A key responsibility of the Product Owner is to own the Acceptance phase. Acceptance does not mean you find bugs. It means you own the decision as to whether or not the feature is ready to ship to your customer.

You can do this easily by doing 2 things:

  1. Making sure your team does an Acceptance Interview with you before they do the final check-in into the shipping build.
    1. It is an interview only. DO NOT open the product. This is for the short term only and is required in order to set expectations: it is not your job to disprove/prove that they are done. It is theirs. It is your job to be satisfied that they have done so. Once your team has shown signs of using the new expected behaviors, feel free to change the interview process if it makes everyone’s life easier, but until then, resist the urge.
    2. Usually, you will have 2 lists for acceptance: 1) the requirements (acceptance criteria) defined when the item was still in the backlog, and 2) the non-functional requirements (perf, scale, localization, security, etc) that all features must be scrutinized against.
    3. For each requirement, simply ask the developer: “How did you prove readiness for this requirement?” If the answer is vague, follow up with more precise questioning.
    4. Using your best judgment, if you don’t like what you are hearing, then Reject with your rationale.
  2. Being brave. For a short period of time, you will be causing disruption in the system. Your devs may have already gotten used to the safety-net role you have been playing. Change of this sort almost always causes anxiety, and that anxiety may come with consequences depending on your team’s culture. In addition, your actions will likely result in several of your key stories/features not completing in the sprint. Remember, your job is first and foremost to satisfy your customer. Your preference should be to do that alongside your team, but if your team wants something else, you may need to “go it alone” for a little while. Remember also that everyone on your team is a professional, but they may have some bad behaviors due to existing muscle memory. Your job is to help coach them to learn new ways.

As always, thank you for reading and I appreciate and welcome any feedback you’d like to offer.

Systems Thinking: Why I am done with agility as a goal

Recently, I was writing up a presentation in which I was going to state that the New Tester’s job definition was to “accelerate business agility”. One of my peers looked at it and remarked, “Isn’t that sort of redundant?” After some discussion, it became clear that “agility” did not have a clear, well-understood definition.

To be clear, I am MOST definitely not done with Agile methods, but as best as I am able, I am done with using the word ‘agility’ to describe them. If you look the word up in your favorite dictionary, you will find it described as “moving quickly”. While moving quickly is certainly a valuable goal, it is pitifully insufficient in the modern software world and, if not tempered correctly, can actually lead to more pain than what you started with. When I now give talks on Agile, my usual starting point is to first clarify that Agile is NOT about moving quickly so much as it is about changing direction quickly. So, in a nutshell, Agile is not about agility.

One problem I am trying to unwind is the dominance of strong-willed, highly paid folks proclaiming that Agility is the goal when, quite simply, they do not know what they are talking about, as evidenced by the typical lack of detail explaining the behavior and/or success changes their teams should be making. Their reports “follow” this guidance but are left to their own devices to make it up. A few clever folks actually study it and realize that shifting to Agile is quite a paradigm shift and hard to do. This can be a slow process, which seems to contradict the goal of “moving quickly”, so it gets abandoned for a faster version of Waterfall or a similarly dysfunctional hybrid. There’s a common phrase in MBA classes: “Pick 2: cheap, fast, or good.” It implies that a singular focus on fast is likely to deliver crap, and at a high cost.

One quick test to see if your leader understands: ask how much we are going to invest in real-time learning, then observe how those words align with actions. Moving fast without learning along the way is definitely NOT Agile; more importantly, it is fraught with peril.

Many of my recent blog posts are on the topic of leadership. If you find yourself in such a role and are trying to lead a team towards Agile, my guidance is to think carefully about the goals and behaviors you are expecting and use the word that describes them better. If you don’t know what you want, then get trained. In my experience, using Agile methods is very painful if the team leadership does not understand the what, the why, and the how of using them.

Consider these word alternatives:

  • Nimble: quick to understand, think, devise, etc.
  • Dexterity: the ability to move skillfully
  • Adaptability: the ability to change (or be changed) to fit changed circumstances

These ALL make more sense to me than “moving quickly”, but adaptability is what fits the bill the best in my mind.

In my last post, I focused on one aspect of the paradigm shift happening in the world of Test towards the goal of improving adaptability. I have mentioned before that my passion (and the primary reason I write this blog) is Quality. However, to make a business well-functioning in this modern age, a singular focus on changing the quality paradigm is not sufficient. As Test makes its shift, other pieces of the system must take up the slack. For example, a very common situation is that Test simply stops testing in favor of higher-value activities. Dev then needs to take up that slack. If they don’t (and most likely they won’t initially), they will ship bugs to customers and then, depending on the customer impact, cause chaos as dev attempts to push testing back in. We need to consider the whole system, not just one part of it.

A couple of months ago, I was asked to begin thinking through the next phase of shifting the org towards better adaptability. Almost immediately, I rattled off the following list of paradigm shifts that need to happen across the system as a whole.

 

From → To

  • Prevention → Reaction
  • QoS → QoE
  • Spider teams → Starfish teams
  • Correctness → Quality (value)
  • Intuition → Truth
  • NIH is bad → NIH is Awesome
  • Large batch → Small Batch
  • Schedule → Throughput
  • Vanity → Action
  • Hero → Team
  • Green is good → Red is good
  • Yearly → Daily
  • Absolutes → Probabilities
  • Ownership → Shared Accountability

Hopefully, you can see that moving quickly is certainly a part of this, but more importantly, this list shows a series of changes needed in focus, sharing, understanding of the current environment, and learning…

Recently, I came upon some material from Dr. Amjad Umar (currently a senior strategist at the UN and one of my favorite professors) in which he argues that companies should be planfully considering the overall “smartness” of their systems. He states that technologies alone cannot improve smartness, but you can improve it by starting with the right combination of changes to your existing People, Processes, and Technology. Smartness, by the way, is analogous to Adaptability.

I have taken his concept and broadened it into something I call “Umar’s Smartness Cube”. I think it nicely describes, at a high level, what needs to be considered when one makes system changes. The goal of the whole cube, of course, is to improve Business Value.

How to use this to improve your system:

  1. First, determine and objectively measure the goal you are trying to achieve.
  2. Consider the smartness cube and enumerate opportunities to improve on that goal.
  3. Consider tradeoffs between elements to achieve the goal better. For example, maybe we don’t need the world’s best technical widget if we just change the process for using what we have, reducing the training burden.
  4. Prioritize these opportunities (I like to use (BizValue + TimeCriticality) / Cost; see the sketch after this list).
  5. Get them into a backlog that acts like a priority queue and start executing.
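As a sketch, the scoring and ordering can be this simple; the backlog items and numbers are made up for illustration:

```python
# Score = (BizValue + TimeCriticality) / Cost, similar in spirit to WSJF.
# All entries below are hypothetical placeholders.
backlog = [
    {"item": "Sharing feature", "biz_value": 8, "time_criticality": 5, "cost": 3},
    {"item": "Perf telemetry",  "biz_value": 5, "time_criticality": 2, "cost": 2},
    {"item": "New onboarding",  "biz_value": 9, "time_criticality": 8, "cost": 8},
]

def priority(entry: dict) -> float:
    return (entry["biz_value"] + entry["time_criticality"]) / entry["cost"]

# The backlog becomes a priority queue: execute from the top down.
for entry in sorted(backlog, key=priority, reverse=True):
    print(f'{priority(entry):5.2f}  {entry["item"]}')
```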

 

This, of course, is over-simplified, but hopefully it sets you in an actionable direction for “accelerating the adaptability of your Business (system)”.

As thinking-in-progress, any feedback is appreciated.

AB Testing – Episode 0100

Topics:
– Alan’s upcoming presentations at Star East
– We explore the potential topic for Alan’s 5-minute Lightning talk
– We spend a good deal of time on the continuing need for Leaders to help resuscitate Test Zombies
– We very briefly talk about Gamification for Engagement
– And Alan drops a surprise bomb on me

It’s been a while since I’ve actually written something, and I have a couple of topics I am itching to talk about. I will try to get one of them out this weekend.

Want to subscribe to the podcast?
RSS feed
iTunes

Also, on the Windows Phone store. Search for “AB Testing”.

AB Testing Podcast – Episode 2 – Leading change

The Angry Weasel and I have put together another podcast. In this episode, we talk about problems we see in leading change towards a better direction. We cover some changes we face, change management, and reasons change fails. We talk about the importance of “why” and leverage the “Waterfall vs. Agile” religious war, as an example.

We managed to keep our “shinythingitis” to a minimum of slightly less than 70% of the time. :) Enjoy!

Want to subscribe to the podcast?
RSS feed
iTunes

A/B Testing Podcast goes live

So one day, a colleague and friend, Alan Page, said, “Hey, Brent, why don’t we start a podcast?” After much discourse (Alan pushing me) and challenges (me finding the time), I am happy to announce that I suck at it, but am doing it anyway. I am very thankful to have an old pro who is letting me tag along for the ride. Anyway, if you’ve got 30 minutes to kill and want to hear more from a couple of guys who are trying to help lead change, please check it out. AB, in this context, stands for Alan/Brent, and in “Episode 1” we explore Testing vs. Quality as well as recent changes happening at Microsoft. Enjoy. Feedback is always welcome. We are likely to keep going until someone tells us that the pain is unbearable. :)

Download: AB Testing – “Episode” 1


Want to subscribe to the podcast? Here’s the RSS feed.

In Pursuit of Quality: Shifting the Tester mindset

Last time, I wrote a book review of Lean Analytics. Towards the end of that post, I lamented that I see a lot of testers in my neck of the woods trying to map their old way of thinking onto what’s coming next. Several folks (individual contributors and managers alike) have come to me wondering why Test should move into this world of “data crap” and why the way they have previously been operating is so wrong now. It is my hope today to explain this.

But before continuing, I’d like to try something new and offer you a poll to take.

Please consider the following:

So which did you pick? Over time, it will be interesting to track how people view this simple comparison. I have been posing this example for almost a year now. When I first started, about 1 in 2 testers polled would select the bug-free code. Among the testers I talk to lately, about 1 in 3 select it. I definitely view this as a good sign that folks are starting to reflect on these changes and adapt. My ideal is that a year from now the ratio is closer to 1 in 10.

Why is this poll so hard for folks?

Primarily, it is due to our training. Test was the last line of defense – a safety net – needed to assure we didn’t do a recall after we released the product to manufacturing. When I first started in the software development world, 1.44 MB floppy disks were the prevailing way customers installed new software onto their systems. Windows NT 3.1, as an example, required 22 of them. It was horrible. Installing a new machine would take the better part of a day, disks would be asked for out of order, and people would often get to the end of the install only to discover that a setting they were asked for at the very beginning was wrong, and that it was easier to just redo the install than to hunt through the manual to figure out how to fix it afterwards.

Customers who got their system up and running successfully and found a major bug afterwards would be quite sore with us. Thankfully, I have not heard this one in quite some time, but back then, Microsoft had the reputation of shipping quality in version 3.0. There was a strong and successful push within the company to train our testers with a singular mission: find the bugs before our customers do and push to get them fixed. I was proud to state back then that Microsoft was the best in the world at doing this.

The problem I am attempting to address is the perceived loss of value in Test’s innate ability to prevent bugs from hitting the customer. A couple of months ago, I presented to a group of testers, and one of the questions asked was, “All of this reacting-to-customer stuff is great, but how can we prevent bugs in the first place?” Thankfully, someone else answered that question more helpfully, as my initial response would’ve been “Stop trying to do so.”

The core of the issue, IMO, is that we have continued to view our efforts as statically valuable: that our effort to find bugs up front (assuring code correctness) will always be highly regarded. Unfortunately, we neglected to notice that the world was changing. That, in fact, the value was dynamic: our need to get correctness right before shipping is actually tied to another variable, our ability to react to bugs found by customers after shipping. The longer it takes us to react, the more we need to prevent correctness issues up front.

“Quality redefinition” – from correctness to customer value

A couple of years ago, I wrote a blog post, Quality is a 4 letter word. Unfortunately, it seems that I wrote it well before its time. I have received feedback recently from folks stating that that series of posts is quite helpful to them now. One such person had read it back then and had a violent allergic reaction to the post:

“Brent, you can’t redefine quality”.

“I’m not!”, I replied, “We’ve *always* had it wrong! But up until now, it’s been okay. Now we need to journey in a different direction.”

While I now refer to the 4 pillars of Quality differently, their essence remains the same. I encourage you to read that post.

The wholeness of Quality should now be evaluated on 4 fronts:

  • Features that customers use to create value
  • The correctness of those features
  • The extent to which those features feel finished/polished
  • The context in which those features should be used for maximum value.

Certainly, correctness is an important aspect of quality, but usage is a significantly greater one. If you take anything away from today’s post, please take this:

Fixing correctness issues on a piece of code that no one is using is a waste of time & resources.

We need to change

In today’s world, with services lighting up left and right, we need to shift to a model that allows us to identify and improve Quality faster. This is a market differentiator.

It is my belief that in the short term, the best way to do this is to focus on the following strategy:

    • Pre-production
      • Train your testers to rewrite their automation such that Pass/Fail is determined not by the automation itself, but by leveraging the instrumentation and data exhaust emitted by the system (see the sketch after this list). Automation becomes a user simulator, while testers grow muscle in using product logs to evaluate the truth. This set of measurements can be applied directly to production traffic when the code ships live.
      • Train your testers to be comfortable with tweaking and adding instrumentation to enable measurement of the above.
    • Next, move to Post-production
      • Leverage their correctness skillset and their new measurement muscle to understand system behavior under actual usage load.
      • This is an evaluation of QoS, Quality of Service. What you want Testers learning is what the system does under production traffic, and why.
      • You can start here in order to grow their muscle in statistical analysis.
    • Then, focus their attention on Customer Behavior
      • Teach them to look for patterns in the data that show:
        • Places in the code where customers are trying to achieve some goal but encounter pain (errors, crashes, etc.) or friction (latency issues, convoluted paths to the goal, etc.). Generally, this is very easy to find.
        • Places in the code where customers are succeeding in achieving their goal and walking away delighted. These are patterns that create entertainment or freedom for the customer. Unlike the above, this is much harder to find; it will require hypothesis testing, flighting, and experimentation, but the findings are significantly more valuable to the business at hand.
      • Being stronger in stats muscle will be key here. Since Quality is a subjective point of view, this will force Test away from a world of absolutes (pass/fail) and into one of probabilities (the likelihood of adding value to customers vs. not). It is definitely wise to befriend your local Data Scientist and get them to share the magic. This will help you and your team scale sustainably.
      • This is an evaluation of QoE, Quality of Experience. What you want Testers learning is what the Customers do, and why they do it.
    • You will then want to form a dynamic set of metrics and KPIs that capture the up-to-date learnings and help the organization quickly operationalize its goals of taking action towards adding customer value. This will generate Quality!
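Here is a minimal sketch of the first step above (automation as a user simulator, with pass/fail decided from the data exhaust). Everything in it is hypothetical: the event names and log format are illustrative, not any real product’s schema:

```python
import json

# Hypothetical data exhaust: inlined here for the sketch; in real life these
# lines would come from the product's log pipeline.
LOG_LINES = [
    '{"session": "test-42", "name": "goal_started",   "level": "info"}',
    '{"session": "test-42", "name": "goal_completed", "level": "info"}',
    '{"session": "other",   "name": "crash",          "level": "error"}',
]

def passed_from_log(log_lines, session_id: str) -> bool:
    """Decide pass/fail from the product's own event log, not the automation."""
    events = [json.loads(line) for line in log_lines]
    mine = [e for e in events if e.get("session") == session_id]
    errors = [e for e in mine if e.get("level") == "error"]
    completed = any(e.get("name") == "goal_completed" for e in mine)
    return completed and not errors

# The automation's only job is to simulate the user (drive the scenario that
# produces the "test-42" traffic); the verdict comes from the log.
print("PASS" if passed_from_log(LOG_LINES, "test-42") else "FAIL")
```

The payoff is that the same evaluation can then be pointed at production logs, where real customers, rather than the simulator, generate the traffic.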

Lastly, while executing on these mindshifts, it will be paramount to remain balanced. The message of this blog is NOT that we should stop preventing bugs (despite my visceral response above). Bugs, in my world view, fall into 2 camps: Catastrophes and Other. In order to keep Quality high, it is critical that we continue to work to prevent Catastrophe-class bugs from hitting our customers. At the same time, we need to build infrastructure that will enable us to react very quickly.

I simply ask you to consider that:

    As the speed with which we can react to our customers INCREASES, the number of equivalence classes of bugs that fall into the Catastrophe class DECREASES. Sacrificing speed of delivery in the name of quality makes delivering actual Quality so much harder. Usage now defines Quality better than correctness does.

Ken Johnston, a friend, former manager, and co-author of the book “How We Test Software at Microsoft”, recently published a blog post on something he calls “MVQ”. Ken is still quite active on the test conference scene (he presents at one next month), but if you ever get the chance, ask him: “If you were to start writing the second edition of your book, how much of the content would still be important?” His response is quite interesting, but I’ll not steal that thunder here. :)

Here’s a graphic from his post for your consideration. I think it presents a very nice balance:

Thank you for reading.