AB Testing Podcast – Episode 3

Our Topics this time:
- Alan recruiting testers in the new world
- Dashboards and how to make them not suck
- Planning and estimation in the new world
- and why I hate the Agile Manifesto. (actually, it’s why I hate the people who read it wrong…) :-)


Want to subscribe to the podcast?
RSS feed
iTunes

AB Testing Podcast – Episode 2 – Leading change

The Angry Weasel and I have put together another podcast. In this episode, we talk about the problems we see in leading change toward a better direction. We cover some of the changes we face, change management, and the reasons change fails. We talk about the importance of “why” and use the “Waterfall vs. Agile” religious war as an example.

We managed to keep our “shinythingitis” to a minimum of slightly less than 70% of the time. :) Enjoy!


Want to subscribe to the podcast?
RSS feed
iTunes

A/B Testing Podcast goes live

So one day, a colleague and friend, Alan Page, said “Hey, Brent, why don’t we start a podcast?” After much discourse (Alan pushing me) and challenges (me finding the time), I am happy to announce that I suck at it, but am doing it anyways. I am very thankful to have an old pro who is letting me tag along for the ride. Anyways, if you’ve got 30 minutes to kill and want to hear more from a couple of guys who are trying to help lead change, please check it out. AB, in this context, stands for Alan/Brent, and in “Episode 1” we explore Testing vs. Quality as well as recent changes happening at Microsoft. Enjoy. Feedback is always welcome. We are likely to keep going until someone tells us that the pain is unbearable. :)

Download: AB Testing – “Episode” 1

Or play it now:


Want to subscribe to the podcast? Here’s the RSS feed.

In Pursuit of Quality: Shifting the Tester mindset

Last time, I wrote a book review of Lean Analytics. Toward the end of that post, I lamented that I see a lot of testers in my neck of the woods trying to map their old way of thinking onto what’s coming next. Several folks (individual contributors and managers of same) have come to me wondering why Test should move into this world of “data crap” and why the way they have been operating is suddenly so wrong. It is my hope today to explain this.

But first, before continuing, I’d like to try something new and offer you a poll to take.

Please consider the following:

So which did you pick? Over time, it will be interesting to me to track how people view this simple comparison. I have been running this example for almost a year now. When I first started, about 1 in 2 testers polled would select the bug-free code; with testers I talk to lately, about 1 in 3 will select it. I view this as a good sign that folks are starting to reflect on these changes and adapt. My ideal is that 1 year from now the ratio is closer to 1 in 10.

Why is this poll so hard for folks?

Primarily, it is due to our training. Test was the last line of defense – a safety net – needed to assure we didn’t face a recall after we released the product to manufacturing. When I first started in the software development world, 1.44 MB floppy disks were the prevailing way customers installed new software on their systems. Windows NT 3.1, as an example, required 22 of them. It was horrible. Installing a new machine would take the better part of a day, disks would be asked for out of order, and, lastly, people would often get to the end of the install only to discover that a setting they were asked for at the very beginning was wrong and that it was easier to just redo the install than to hunt through the manual to figure out how to fix it afterwards.

Customers who got their system up and running successfully and found a major bug afterwards would be quite sore with us. Thankfully, I have not heard of this in quite some time, but back then, Microsoft had the reputation of only shipping quality in version 3.0. There was a strong and successful push within the company to get our testers trained with a singular mission: find the bugs before our customers do and push to get them fixed. I was proud to state back then that Microsoft was the best in the world at doing this.

The problem I am attempting to address is the perceived loss of value in Test’s innate ability to prevent bugs from hitting the customer. A couple of months ago I presented to a group of testers, and one of the questions asked was, “All of this reacting to customer stuff is great, but how can we prevent bugs in the first place?” Thankfully, someone else answered that question more helpfully than I would have, as my initial response would’ve been “Stop trying to do so.”

The core of the issue, imo, is that we have continued to view our efforts as statically valuable: that our efforts to find bugs up front (assuring code correctness) will always be highly regarded. Unfortunately, we neglected to notice that the world was changing and that the value is, in fact, dynamic. Our need to get correctness right before shipping is actually tied to another variable: our ability to react to bugs found by customers after shipping. The longer it takes us to react, the more we need to prevent correctness issues up front.

“Quality redefinition” – from correctness to customer value

A couple of years ago, I wrote a blog post, Quality is a 4 letter word. Unfortunately, it seems that I wrote it well before its time. I have recently received feedback from folks saying that series of posts is quite helpful to them now. One such person had read it then and had a violent allergic reaction to the post:

“Brent, you can’t redefine quality”.

“I’m not!”, I replied, “We’ve *always* had it wrong! But up until now, it’s been okay. Now we need to journey in a different direction.”

While I now refer to the 4 pillars of Quality differently, their essence remains the same. I encourage you to read that post.

The wholeness of Quality should now be evaluated on 4 fronts:

  • Features that customers use to create value
  • The correctness of those features
  • The extent to which those features feel finished/polished
  • The context in which those features should be used for maximum value.

Certainly, correctness is an important aspect of quality, but usage is a significantly greater one. If you take anything away from today’s post, please take this:

Fixing correctness issues on a piece of code that no one is using is a waste of time & resources.

We need to change

In today’s world, with services lighting up left and right, we need to shift to a model that allows us to identify and improve Quality faster. This is a market differentiator.

It is my belief that in the short term, the best way to do this is to focus on the following strategy:

    • Pre-production
      • Train your testers to rewrite their automation such that Pass/Fail is not determined by the automation itself, but rather by the instrumentation and data exhaust the system emits. Automation becomes a user simulator, while testers grow muscle in using product logs to evaluate the truth (a minimal sketch of this log-based verification follows after this list). This set of measurements can be applied directly to production traffic when the code ships live.
      • Train your testers to be comfortable with tweaking and adding instrumentation to enable measurement of the above.
    • Next, move to Post-production
      • Leverage their Correctness skillset and their new measurement muscle to understand system behavior under actual usage load.
      • This is an evaluation of QoS, Quality of Service. What you want Testers learning is what the system does under production traffic, and why.
      • You can start here in order to grow their muscle in statistical analysis (see the QoS summary sketch after this list).
    • Then, focus their attention on Customer Behavior
      • Teach them to look for patterns in the data that show:
        • Places in the code where customers are trying to achieve some goal but encountering pain (errors, crashes, etc.) or friction (latency issues, convoluted paths to the goal, etc.). These are generally very easy to find.
        • Places in the code where customers are succeeding in achieving their goal and walking away delighted. These are patterns that create entertainment or freedom for the customer. Unlike the above, they are much harder to find; they require hypothesis testing, flighting, and experimentation, but they are significantly more valuable to the business at hand.
      • Being stronger in statistical muscle will be key here. Since Quality is a subjective point of view, this will force Test away from a world of absolutes (pass/fail) and into one of probabilities (likelihood of adding value to customers vs. not); see the experiment sketch after this list. It is definitely wise to befriend your local Data Scientist and get them to share the magic. This will help you and your team scale sustainably.
      • This is an evaluation of QoE, Quality of Experience. What you want Testers learning is what the Customers do, and why.
    • You will then want to form a dynamic set of metrics and KPIs that capture the up-to-date learnings and help the organization quickly operationalize its goal of taking action toward adding customer value. This will generate Quality!
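To make the first step concrete, here is a minimal Python sketch of log-based verification. Everything in it is illustrative: the newline-delimited JSON log format, the file name, and the event/field names (checkout_completed, level) are assumptions, not a real product schema. The point is simply that the automation drives the product like a user would, and pass/fail is decided afterwards from the telemetry the product emitted.

```python
# Sketch: decide pass/fail from the product's own telemetry, not from
# assertions baked into the UI automation. Log format and field names
# are hypothetical.
import json

def load_events(log_path):
    """Read newline-delimited JSON telemetry events (assumed format)."""
    with open(log_path) as f:
        return [json.loads(line) for line in f if line.strip()]

def verify_checkout_flow(events):
    """Pass if the scenario completed and the product logged no errors."""
    errors = [e for e in events if e.get("level") == "error"]
    completed = any(e.get("event") == "checkout_completed" for e in events)
    return completed and not errors

if __name__ == "__main__":
    # The user-simulator automation would have run first and produced this log.
    events = load_events("telemetry.ndjson")  # hypothetical path
    print("PASS" if verify_checkout_flow(events) else "FAIL")
```

Because the verdict comes from the telemetry rather than the driver, the same check can later be pointed at events sampled from production traffic.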
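For the QoS step, here is a sketch of the kind of summary a tester might compute over production telemetry. It assumes each event carries a latency_ms value and a success flag; those field names, and the sample data, are made up for illustration.

```python
# Sketch: summarize Quality of Service from production telemetry.
# Field names (latency_ms, success) are illustrative, not a real schema.
import statistics

def qos_summary(events):
    """Return latency percentiles and error rate for a batch of events."""
    latencies = [e["latency_ms"] for e in events]
    cuts = statistics.quantiles(latencies, n=100)  # 99 percentile cut points
    failures = sum(1 for e in events if not e["success"])
    return {
        "p50_ms": cuts[49],
        "p95_ms": cuts[94],
        "p99_ms": cuts[98],
        "error_rate": failures / len(events),
    }

if __name__ == "__main__":
    # Fabricated sample: latency drifts upward, 1 in 50 requests fails.
    sample = [{"latency_ms": 120 + 0.5 * i, "success": i % 50 != 0}
              for i in range(1000)]
    print(qos_summary(sample))
```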
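And for the move from pass/fail to probabilities, here is a sketch of a two-proportion z-test comparing task-success rates between a control flight and a treatment flight. The counts are invented; in practice you would pull them from your flighting system, and a real analysis would also weigh practical significance, not just the p-value.

```python
# Sketch: compare task-success rates between two flights with a
# two-proportion z-test. All counts below are invented for illustration.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two observed proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF
    return z, p_value

if __name__ == "__main__":
    # Hypothetical flights: did users complete the scenario?
    z, p = two_proportion_z(success_a=4120, n_a=5000,   # control
                            success_b=4310, n_b=5000)   # treatment
    print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```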

Lastly, while executing on these mindset shifts, it will be paramount to remain balanced. The message of this blog is NOT that we should stop preventing bugs (despite my visceral response above). Bugs, in my world view, fall into 2 camps: Catastrophes and Other. To keep Quality high, it is critical that we continue to work to prevent Catastrophe-class bugs from hitting our customers. At the same time, we need to build infrastructure that will enable us to react very quickly.

I simply ask you to consider that:

    As the speed with which we can react to our customers INCREASES, the number of equivalence classes of bugs that fall into the Catastrophe class DECREASES. Sacrificing speed of delivery in the name of quality makes delivering actual Quality so much harder. Usage now defines Quality better than correctness does.

Ken Johnston, a friend, former manager, and co-author of the book “How We Test Software at Microsoft,” recently published a blog post on something he calls “MVQ.” (For the life of me, I can’t find that post. I’ll circle back once I have it.) Ken is still quite active on the Test Conference scene (one next month), but if you ever get the chance, ask him, “If you were to start writing the second edition of your book, how much of the content would still be important?” His response is quite interesting, but I’ll not steal that thunder here. :)

Here’s a graphic from his post for your consideration. I think it presents a very nice balance:

Thank you for reading.

Book Review: Lean Analytics – great primer for moving to DDE world

 

So O’Reilly has this great program for bloggers, called the Reader Review Program. They will let me pick out a book of my choosing & read it for free, as long as I write up an honest review of the book here on my blog site. Because I know that I will eventually be posting reviews here, I will be picking books that I think might have value to the audience that is following me. This is my first foray into this model. Right now, I think it’s an “everybody wins” thing, but I will pay heightened attention to how this affects the integrity & reputation of the site. Since I am generally reading 5-10 books at a time, I highly doubt that I will post blogs like this more than once or twice a year. Your feedback is welcome.

 

Lean Analytics by Alistair Croll & Benjamin Yoskovitz; O’Reilly Media

    The title above will take you to O’Reilly’s site so you can delve further if you choose.

 

Review

    As the title suggests, Lean Analytics is a solid combination of two powerful movements in the Software Engineering world: Lean Agile and Data/Business Analytics. While there are several books out there discussing the need for data science and growth in statistics, this book really covers the What, How, and Why of using data to drive decision making in your specific business. Without being too technical or academic, it introduces readers to the techniques, metrics, and visualizations needed for several common business start-up models in operation in today’s world.

    

    I am ***REALLY*** fond of the Head-First series of books, and that treatment is just about the only thing that could make this book better. After The Lean Startup, this is probably the most useful book for those trying to iterate fast in today’s software engineering world. I found the information to be very straightforward and easy to follow. While I think the authors really tried to cram everything they could into the book (at times making it read awkwardly), they introduce you to practical examples of how to use the material and when.

 

Several sections of the book are quite good… looking at lightweight case studies of startups and the analytics they used to navigate muddy waters. The book tries to make all types of software business accessible, ranging from how to categorize the growth phase of your company, to what to use during your value phase, to what analytics are appropriate for various types of companies (mobile apps versus SaaS, e.g.), and even how to operate within enterprises. As a result, though, the depth at times can be lacking; but if you are looking for a breadth book that covers all of the basics, this one might be good for you. Reading it is one of the reasons I have decided to start my Masters in Analytics. I would have liked more information in the case studies, more examples of actual data to look at, and more suggestions on how to avoid false metrics along with guidance on what to look for instead.

 

One of the struggles that I am seeing at my place of employ is that Test is shifting away from automation roles and into data pipeline roles. This means we are just changing the way in which we deliver information so that others can analyze it and make the “adult” decisions. This, imho, is not good. But it falls within the Test wheelhouse, so it is safe. Please, please, please: instead, grab this book and take a leadership role. This book will help us start moving the discipline into a direction-setting role instead of just a measurement one.

 

This will likely be the topic of my next post. Thanks for reading…

 

What would a Leader do?

Just over 2 years ago I wrote my most viewed blog post to date: The Tester’s Job. As 2013 comes to a close, there is much hullabaloo happening in the Microsoft Test community. After a series of non-trivial reorgs, it is clear Microsoft is taking a huge step away from the traditional role for Testers, and even the word Test is being stricken from the vernacular, with our fearless leaders’ titles being replaced with a new one: Director of Quality. Followers of this blog know this is a change I have felt coming for a while now, and one I have aggressively supported.

Last May, my own team underwent this change with some great successes and some even greater learnings, which I presented recently to a set of Test leaders. This presentation has become the closest thing to viral that I’ve ever experienced. Mostly, I believe, because folks are anxious about the change and are eager to use whatever data they can find to help predict their own future. The deck mentions some pretty significant changes needed to make the paradigm shift happen, the most controversial being a very large change in the Dev-to-Test ratio (a number that was historically used to determine the “right size” of test teams during reorgs). In my experience, some folks are more comfortable with innovating and being a starter, whereas other folks are superb at executing and getting work closed. Between the two, I have always been much more interested in being on the front lines of change. Accordingly, I’ve never been much afraid of change, and I view it as an opportunity to explore something new. I thoroughly enjoy seeing how these new learnings can be used to grow myself, my team, and whatever product I happen to be working on. As a result, my love of the New and of Learning has helped make me quite adaptable over the years. However, I understand that not everyone is built this way, and even for those who are, the coming changes might be more than they can tolerate.

One colleague sent me the following in email after seeing my presentation:

“Sobering. This is a lot of where we are headed in [my group], but without (so far) shifting any resources. This may be really hard on test from the sounds of it. Are there any suggestions on how to lessen the pain? Do we just rip the band aid and give people retraining?”

Since this is something that I am getting a lot of questions about lately, I felt it would be a good topic for a post. As I mentioned last time, Spider organizations don’t scale to the degree we will need them to, so we need to build up Starfish muscle. While this does mean a move towards Headless organizations, it by no means describes one that is leaderless. In fact, leaders become critical. One of the challenges with this shift is that people are so accustomed to living in Spider organizations that they forget how to lead, nay, are afraid to lead. This is the first change I think people need to make.

Here’s a very simple strategy I have found that helps me when times are troubling:

  • Ask and answer: What would a Leader do? – If there were an actual leader right here and now, what would they be doing? Why? What goal would they be trying to achieve? How would they go about it?
  • Be that leader – Why not you? Everyone has the ability to lead. It’s just easier not to. Choose to lead. Bravery is doing the right thing even though you are afraid.

So what would a leader do in these times? Here’s what I think:

  1. Keep their head on straight. There are 4 key things people need from their leaders: Trust, Hope, Compassion, and Stability. You cannot provide *any* of these if you join the panic. Imagine the state of the fireman who is going UP the stairs of a burning building.
  2. Manage the Change
    1. Explain what, why, and how the change is occurring. All three! Many times I see leaders leave one of them out. Folks need all three in order to triangulate on the correct direction.
    2. Explain the goal and new direction. Telling folks where to head is easier and more beneficial than telling them what to avoid. “We need to ship weekly” is better than “We need to stop shipping every 3 years” as examples.
    3. Enlist others in making the change happen. People are more likely to follow something that they contributed to creating. I’ve always been a fan of enlisting the team to come up with the logistics and dates and to place themselves into the positions that they are the most passionate about.
    4. Pull the trigger. Be the tipping point to get the momentum going.
  3. Train Themselves – This is probably the single most important item. You cannot help others if you have not helped yourself. You need to learn more about the new world you are heading towards. Dive in. Then and only then will you be in a position to guide others. Seek out internal experts. Change jobs. Go back to school. Head to conferences. I read recently that if you just read 1 hour a day in the field of your choosing, you would be an international expert on that topic in just 7 years. Those investments add up very quickly. Do not underestimate it. (I, myself, will be starting my Masters in Analytics on January 6th! (very excited about it))
  4. Train Others – You need to distribute what you have learned. A couple of things I have been doing lately:
    1. When someone asks me to talk to them on a topic, I assume they will not be the last, so I organize it as a presentation.
    2. I record it so it can be shared.
    3. I create a local community and ask each interested person if they’d like to join it. I can no longer find the reference, but something like 80% of people will join if they are just asked. Try to drive participation in the community to make it self-sustaining.
  5. Get out of the way – Remain as the bottleneck to the change for as little time as possible. Someone may need to stay at the helm in order to make sure the momentum continues in the right direction, but once it is clear that it has, get out of the decision making process and let the team be empowered.

Rip the Band-Aid?

    To be honest, I am a proponent of Band-Aid ripping in these situations. People are afraid to make changes due to the unknown consequences. As Brad Pitt’s character asked in Moneyball, “Which would you prefer, a clean shot in the head or 5 shots to the body and bleed to death?” The longer you wait, the worse the pain will be for those involved. But DO NOT rip the Band-Aid without a plan.

One last note: one very popular question people have been asking me lately is whether I think Test is dying. I believe Test (as we know it) is like a chicken with its head cut off: it’s dead, but the rest of the body is still flapping about and doesn’t quite know it yet. I have now been at the company for 20 years and, in that time, have seen a number of these big transitions occur in the Test discipline. I find it wise to remember: each time these transitions occurred, a fairly large number of people were affected, but as a whole, we improved and became more valuable to the company. I think this time around will be no different. My view is that, given our innate ability to code and test, coupled with our passionate pursuit of quality, our staff is well suited to being the engineers of the future, perhaps better suited than any of the other disciplines. However, whichever way the wind blows, it’s clear we will need to change. My New Year’s Resolution is to help anyone and everyone I can to make this migration. After all, it’s what a leader would do.

HAPPY NEW YEAR!

Irresponsible Accountability

How familiar is this narrative to you?

<Setting: Triage/shiproom discussion not so far away and not so long ago… Key contributors and decision makers are all in the room. They are trying to collaborate towards determining what bugs need to be fixed before they can ship. The planned ship date is looming ominously just around the corner….>

Dev Manager (of Managers): We should fix these bugs…

Dev: What’s the bar? I thought we were recall class only as of today.

Dev Manager: They are easy… We should do them… You guys should just be able to get them out of the way.

….

Sometime later

Dev: There’s one last thing I’d like to discuss. It’s clear from the bugs we accepted today that we are ignoring the recall-class-only bar.

Dev Manager: I don’t care about the bar; we need to do the right thing.

Dev: But if we keep doing that, how are we going to land this project on time?

Dev Manager: I’m sorry. I don’t understand your question or concern.

Program Manager: um, Mr. DM sir, what Mr. Dev means is we’d like to understand what Leadership’s plan is to get to the quality goals with the dates specified?

Dev Manager: What?!?! That’s *your* job. You want me to do that too?!?

When I heard this story from one of my mentees, I had already been thinking about the nature of Accountability Cultures. His team is in a bit of a mess right now as they quickly try to align reality with the promises made to customers… They have bitten off far more than they can chew. This is made worse by the strong lack of leadership on his team, as evidenced by the Dev Manager in the story above, who is clearly a JeDI (and no, that is not good!).

So what do I mean by an accountability culture? I mean those workplaces where management is focused on assigning owners to tasks/work for the purpose of “knowing who is responsible”. These organizations are generally hierarchical in nature, and this ownership model is intended to both “empower” and “control” how work is being done. However, far too often, the result it achieves is simply the knowledge of who, precisely, to blame when the work fails. In other words, it does not optimize for helping teams succeed, but rather for assigning blame when they fail. Teams who want to succeed figure out ways to work around their management’s policies, either by “going dark” and doing work in secret or by working much harder than needed to satisfy both management’s quest for owners and the desire to move the business forward.

My litmus test for the accountable person for anything: S/he is the one who apologizes when things go astray and is the one looking to make things right. This is significantly different than “the one who is to blame”.

Thankfully, I have only worked on 2 such teams where I felt the culture simply was too toxic to be fixed. Management usually does not want to hear, nor understand, that their behavior is causing a negative downstream effect on their staff.

How do teams get to this point?

Here’s how I think this happens.

  • Spider is beating the Starfish: Human society is constantly fighting it out over the best way to organize in order to produce the best results. Ori Brafman writes about this in his book The Starfish and the Spider.
    • Starfish models are headless. Teams of people work together to achieve goals. The individual is paramount and bands together with others toward complementary goals. If you have played or observed World of Warcraft or similar games, you’ve seen this model in action. They are tribal in nature, fast, and have a lot of flexibility, but they don’t efficiently utilize resources in a single direction. Instead, they are effective at being able to change direction at any moment in time, and collaboration amongst the team’s members thrives. Decisions can be made quickly because the framework is principled, not tactical (e.g., “do the features in your team’s backlog that add the most fiscal ROI” vs. “add the ability to bold text in a help file”).
    • Spider models have a head. They scale to a greater number of people, but have a critical flaw: kill the head and the body dies with it. If the head has a bad plan, the body goes along for the ride. This is also known as “command and control”. Competition thrives in this model. Folks quickly understand that to “win” you don’t need to convince others… only the head. Spider models are more effective when you know the objective to be achieved and how it needs to happen, but they are not effective when decisions need to be made quickly and repeatedly, as is common in the Software services world.
  • The Peter Principle is alive and well (it’s proven!)
    • People are getting promoted to where they are no longer competent.
      • They are not taking training to understand how to scale to their new role, nor are they learning state-of-the-art techniques.
      • They have to make more and more decisions with less and less data, which is harder and harder, so the validity of their decisions becomes more and more unstable.
  • Jensen’s Law of Politics – For years, I have been teaching folks this insight: “He who is defensive, loses… Always…”
    • “One cannot win *any* game by playing defense only.”

So what happens is this: Spider models get propagated (for a variety of reasons: control, $$, clarity of purpose, etc.). People then get promoted to the point where they can no longer scale to efficiently provide good decisions to their subordinates. Since spider models are competitive by nature, people quickly (subconsciously) begin to realize that the way *they* continue to win/keep the head position is by taking the offensive position.

    They blame others.

This then gets sustained in a vicious loop:

  1. First off, Leaders hate getting blamed.
  2. But, Leaders don’t have the time to learn how to do things differently.
  3. But this process, as a decision-making framework, is too slow. Leaders don’t (can’t?) take the time to understand the root causes of failure or the dependencies needed to satisfy success. So they keep plodding on, essentially “hoping” they succeed.
  4. But, since they don’t, they resort to finger pointing and brute-force tactics (“30 hr” workdays) to set things straight again.

Is this fixable?

I think so, but teams *must* learn Starfish techniques, and the environment must support them.

  1. In my mind, this means any form of adaptive project style (Scrum, Kanban, XP, etc.). But teams need to make sure they are *adapting* and not just iterating… Teams should be encouraged to act, but to validate everything with actual learning.
  2. Create an environment where people commit to goals rather than being told to commit. Don’t fool yourself: people cannot be told to commit to anything. In order for them to commit, it must fulfill some purpose important to them, it must be achievable, and they must understand the risks, the rewards, and the degrees of freedom they have if the project goes out into the weeds.
  3. Teams need to be taught to work together to achieve goals. Taught to trust and (more importantly) rely on their peers to move forward.
  4. There is an important distinction between solving problems and owning solutions. Owning the solution doesn’t guarantee that it actually solves any problem that is important. For the past 3 teams I have led, I have had a team motto that I adore: “my team’s job is to solve problems, not write tools or code. If I can leverage another team’s component, I will. NIH kicks ass!” (not invented here)
  5. Give folks guidance and principles to make decisions on their own. Enforce this. This one is probably the hardest to do. Folks get used to not owning their own decisions. It’s uncomfortable for them. “Are you going to blame me if I fail?” My style is to let individuals make the decision, but reinforce that the team will own cleanup if the decision fails. I try to create a spirit on the team that individuals are shepherding work, but the whole team is accountable for all work in progress (see earlier post). This helps people feel empowered and forces the team to have each other’s back.

Lastly, I recently got certified as a Program Consultant for the Scaled Agile Framework, which means for the next year I am allowed to teach it and certify others. One of the really great things I think those folks have figured out is that in order to truly scale, you need to find an efficient way to decouple the business strategy from the implementation tactics. I’m over-simplifying, but in essence:

  • Management owns setting strategic direction, measurement of success, cadence, and funding
  • Team owns creating an implementation they commit to.
  • Management owns creating the decision framework, which is principle-based. Teams are pre-informed of the constraints.
  • Team owns making the decisions, staying within the constraints.  Team owns correctly setting expectations with Management.
  • When things go wrong, both sit in a room as peers to work out how to adapt. Both have an important and distinct role to serve. When they are working together to thrive, the business does too.

Special Bonuses:

  • Here’s one of my favorite videos of the famous All Blacks rugby team, showing what committed teamwork looks like.
  • The US military defines Command and Control as: “The Exercise of authority and direction by a properly designated commander over assigned forces in the accomplishment of the mission. Command and control functions are performed through an arrangement of personnel, equipment, communications, facilities, and procedures which are employed by a commander in planning, directing, coordinating, and controlling forces and operations in the accomplishment of the mission”

    Their most recent investigations show that C2 doesn’t scale, due to:

    • The essence of command lies in the cognitive processes of the commander.
    • The influx of data now available.
    • The speed at which the operation must execute.