Hello all, and a very happy New Year’s wish to you. It has been a LOOOOOOONG time since I have written a post (as several folks have reminded me). While it remains my goal to post at least once a month, I know I won’t always achieve it. These past several months have been a particular drain on my time, probably my most valuable commodity at this point in my life. One of my New Year’s resolutions is to reprioritize how I spend that time: my goal is to focus more deeply on my personal retraining and on my family. I believe many of my readers are considering or following a similar path forward, as am I. With luck, the time will avail itself so I can publish what I’ve learned and share it.
As part of that retraining, I’ve gone back to school. I am quite happy to report that I am now done with my first year of an MS in Analytics and, thus far, I’m a straight-A student. Honestly, though, it is a bit odd going back to school in your mid-40s. I couldn’t care less about the grades (it has been decades since anyone asked me my GPA). I am very passionate about learning the topic AND I have school-age children who are watching my every move. I simply can’t have teenagers calling me a hypocrite. The drive to learn the subject matter really helps with the grades; I wish I’d had similar motivation the first time through college. :) Thus far, I have been able to bring every class I have taken back into the applied context of my work and do something valuable with it. I’ve got a ways to go, for sure, but the journey has been a lot of work and a lot of fun.
At work, I am helping to land something *very cool* for shifting the world toward a more data-driven culture. I am on the Power BI (Business Intelligence) team in the SQL division. We are enhancing our SaaS offering, and a public preview of our new features is available now. Check it out. The price tag for the public offering? Absolutely free.
I am quite proud to be a part of this effort. I hope you check it out and provide feedback.
Since it has been a while since I last posted, I thought I’d cover a couple of quick topics:
- Layoffs – Several of my brothers and sisters at Microsoft were laid off in the last few months. Most of the impacted folks I knew have already landed new jobs and report being even happier than they were. There are still a few people I know who are looking. If you are looking, please feel welcome to send me a note on LinkedIn and/or Twitter. I’ll see if I can leverage my network to help broker new relationships.
- The AB Testing Podcast is back after a couple months’ hiatus. (OK, this is old news if you are one of the “three”.) You are welcome to check us out and send any question, comment, or feedback you’d like. Alan and I are both change agents of a sort and believers in collaboration and community as accelerators for achieving goals. I, for one, absolutely adore the Mailbag segment. If there is any way we can improve the podcast, a topic we could cover that might improve your life, or a success story you’d like to share, send us a note. We’ll talk about it “on the air”.
Other agents of change: I’d like to call out others who are putting “pen to paper” to help the community grow and converge.
- Ben Bourland – Has started a blog recently and is journaling his experiences and insights on the shift from Test to Quality.
- Steve Rowe – A long-time blogger and QA manager in the Windows org. He, too, is trying to influence the change toward data-driven.
- Michael Hunter – Another long-time test blogger and friend recently gave a live presentation at SASQAG. The link to the video is here. His talk is about how he came to realize that he had strengths valuable in many disciplines and that testing was not something he had ever actually enjoyed. I share this in case his journey inspires others to let go of their fear that the discipline is changing.
About a year ago, I wrote a post with the intent of helping Testers manage the change. A lot has changed in my company, as well as in others around us, and it is now fairly clear (to me, at least): Test as a dedicated team is a thing of the past. Testers have shifted into data analyst roles, development roles, infrastructure roles, or specialties in NFRs (non-functional requirements) such as end-to-end integration and/or performance. Very few “Testers” still exist. However, the transition is far from done. Many Testers have simply been bolted onto development teams under the guise of combined engineering, but the actions of those involved haven’t changed. In some teams (thankfully, including mine), Testers are gone and their skillset is slowly being incorporated into the team as a whole. Still, it’s a slow, difficult change for the prior development team to fill the void. A wise manager once told me, “Brent, be patient. It’s a marathon, not a sprint.” This change is far from over, but it is definitely moving in a more productive direction.
However, one role persists that shows very little sign of change: PM. Program Managers have historically owned the requirements defining what we bring to customers, then driven a schedule toward delivering it. Most PMs spend time speaking with customers and build a strong intuition about what customers want. They are generally charismatic and likeable, and they work hard to make people happy. While I have never been in PM, back in the day my father was Director of PM for multiple companies, and we now have very interesting conversations about our differing experiences. For example, one commonality my father seemed to share with other PMs I know was a drive to get into an executive role. I have had the opportunity on multiple occasions to work directly for the executive on my team. I think my dad was more than a little disappointed when I told him, “I have very little drive or desire to do that job.”
I was once told by a mentor that “Test doesn’t understand the customer” and that this was the main reason PM advanced to executive more often than not. Most PMs I have spoken with share this opinion to one degree or another. However, I believe the next big culture shift is about to come. This time it is PM who either won’t understand the customer or will be too slow to act within the appropriate window of opportunity. I believe the primary root cause will be an overreliance on intuition and a reluctance to test it. PMs are used to relying on their soft skills and having a long time to react to the market. Compared to the techniques used by the competition, the result is slow and often just plain wrong. Moving forward requires a mindset shift in the program management organization.
I have mentioned that in today’s world, speed to market on value-adds is paramount. Old-school PM techniques, unfortunately, are too slow and do not scale. Thus far, PM has remained relatively unscathed by these changes, but I firmly believe their judgment day is coming next. Their careers are at stake. Consider this: as data science takes further hold in organizations, and teams light up the ability for individual engineers to make correct and actionable decisions on their own, the need for a team Customer Expert (i.e., PM) becomes *dramatically* reduced. For those paying attention, the need for PM to take on the role of schedule meister has already been cut way back. Unlike Test, I believe PM will still be needed, but I do believe we are nearing a time when we will see their numbers greatly reduced. PM must learn to adapt and apply this new knowledge to not only survive, but thrive. IMHO, those who learn to balance their intuition with data toward actionable, valuable decision making will differentiate themselves from the pack.
I have had multiple conversations with the data science groups throughout the company, and a new series of problems is becoming quite clear:
- No Customers – The data folks complain that they are building assets that should be used, but aren’t.
- DRIP (Data Rich, Insight Poor) – The items they build that DO get used are, in essence, scorecards and dashboards filled with vanity metrics that aren’t moving the needle toward anything the business values.
There’s an obvious symbiotic relationship between program management and data science. These folks need to be working together in concert like a well-oiled machine. I really don’t understand why they aren’t, but it’s clear from the number of people talking to me about it that this isn’t happening, or isn’t happening sufficiently. I think a big part of it is that PM doesn’t know how to take advantage of the data teams, and the data teams don’t know how to express their value in a way that resonates with PM in a non-threatening fashion. By that I mean a win-win scenario. I’ve spoken with several PMs who very strongly believe this is all “data crap” and their intuition is all that is needed. True, Steve Jobs did it. I would argue that Mr. Jobs was special: one in a million whose intuition just happened to be right. The rest of us are wrong all of the time. The positive thing, though: PM doesn’t have to do it alone (and, in my belief, they probably can’t). Your data science team is ready, willing, and eager to help.
Show Me the Evidence
I believe PMs need to invest more heavily in understanding how to do evidence-based decision making (aka hypothesis testing). A key principle for their lives going forward should be: no matter how right you think you are, due to uncertainty, there is *some* chance you are wrong. Therein lies the problem: there is always some risk associated with uncertainty (such as wasting time and resources on a problem customers don’t care about us solving). Please feel free to leverage your intuition; this new world is *NOT* about intuition versus facts, but rather intuition validated by facts. Both… Together… Intuition on its own can very quickly lead you in the wrong direction. Likewise, facts on their own can lead you to optimize your current business, but will not help you find the breakthrough game-changer with higher business potential.
Hypothesis testing in a nutshell
The goal of hypothesis testing is to be able to confidently select the best available next action to take.
NOTE: “Best” is relative. Good enough *IS* in fact good enough. One common error I see PMs making a lot lately is deferring a decision until they have 100% accurate and precise information. You simply do not need this. Leveraging heuristics and a solid understanding of how to use confidence intervals will take you far. For example, Douglas Hubbard’s Rule of Five tells us there is a greater than 93% probability that the median of a population lies between the smallest and largest values in a random sample of only five items. Do you *really* need to know the median? Or is knowing the range all you need?
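The Rule of Five is easy to sanity-check yourself with a quick simulation. Here is a sketch; the skewed lognormal population and the trial counts are just my own illustrative assumptions, and any population works because the rule only depends on each draw having a 50/50 chance of landing above or below the median:

```python
import random
import statistics

rng = random.Random(42)

# A deliberately skewed population: the rule holds regardless of shape,
# since each random draw lands above or below the median with p = 0.5.
population = [rng.lognormvariate(0, 1) for _ in range(10_000)]
med = statistics.median(population)

trials = 20_000
hits = 0
for _ in range(trials):
    sample = rng.sample(population, 5)      # a random sample of five items
    if min(sample) <= med <= max(sample):   # did the range capture the median?
        hits += 1

# Theory: the only misses are "all 5 above" or "all 5 below" the median,
# so P(capture) = 1 - 2 * (1/2)**5 = 93.75%.
print(f"median inside sample range: {hits / trials:.1%}")
```

The observed rate should land very close to the theoretical 93.75%, which is where the “greater than 93%” claim comes from.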
Write down your hypotheses individually
- These are in the form of a statement (not a question)
- These must also reflect a business KPI.
Knowledge Phase (knowledge is information in action: the ability to use information)
For each hypothesis, enumerate your possible actions
- What actions will you take if the hypothesis is true?
- What actions will you take if false?
Information Phase (information is data organized in a meaningful way)
Now you need to enumerate the questions needed in order to confidently select the appropriate action.
- Battle confirmation bias – the human tendency to search for, interpret, and remember information in a way that confirms one’s preconceptions
Data Phase (data are raw facts that are meaningless by themselves)
The last phase is simply to enumerate the data points you need to answer your questions.
- Most of the time you will be measuring customer behavior. Measure the behavior you want customers to take, instead of trying to measure everything.
- Be wary of the Hawthorne effect: people’s behavior changes when they know they are being measured.
- Get these instrumented or build a heuristic based on your existing instrumentation that can be used to answer your questions.
Hubbard provides four very useful measurement assumptions to consider when designing your data phase:
- Your problem is not as unique as you think
- You have more data than you think
- You need less data than you think
- An adequate amount of new data is more accessible than you think
Hypothesis: Within my product, enabling Users to share with other users via Facebook will cause a notable increase in Acquisition and Engagement KPIs.
Possible Actions: (not complete)
- No action
- Optimize – make it really easy for users to share
- Enhance – improve the sharing content to entice receivers
- Abandon – halt future feature development, or cut the feature
- If acquisition improves, but not engagement, develop new hypothesis.
- If engagement, but not acquisition, improves, develop new hypothesis.
Possible Questions: (not complete)
- Which users shared?
- Which users received invites?
- Which receivers entered the service due to a sharing invite?
- How does acquisition via sharing compare to other acquisition means?
- How do sharers’ engagement levels compare with those of users who don’t share?
- How do receivers’ engagement levels compare with those of users who don’t receive invites?
Possible Data Points: (not complete)
- Users who share & receive
- Receivers’ acquisition date & means
- Acquisition rate correlated with receiving rate
- Engagement rate correlated with sharing rate
- Engagement rate correlated with receiving rate
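To make the loop from data points back to actions concrete, here is a minimal sketch of how a data team might answer one of the questions above, “How do sharers’ engagement levels compare?”, using a two-proportion z-test. The counts are made up and the choice of test is my own illustrative assumption, not something from any particular product:

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return p_a, p_b, z, p_value

# Hypothetical instrumentation counts:
#   sharers  = users who used the share feature; "engaged" = active after 30 days
#   others   = a comparable group of users who did not share
engaged_sharers, sharers = 420, 1_000
engaged_others, others = 3_300, 9_000

p_a, p_b, z, p = two_proportion_ztest(engaged_sharers, sharers,
                                      engaged_others, others)
print(f"sharers: {p_a:.1%}  others: {p_b:.1%}  z={z:.2f}  p={p:.4f}")
```

With these made-up numbers the difference (42.0% vs. 36.7%) is large relative to the sample sizes, so the test would support acting on the hypothesis; the point is that each enumerated action maps onto an answer like this rather than onto someone’s gut feel.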
If PMs and data teams work together to craft their hypotheses (and experiments), more precise and accurate decisions will be made, ideally with a minimal amount of new instrumentation added to the product. Be prepared to be wrong… a lot… but celebrate it. The faster you are wrong, the faster you will *stop* being wrong, by building fast, actionable knowledge toward business goals.
One final note:
There is one other phenomenon that seems to be becoming common in the post-tester world, and it needs to be addressed.
PM, you need to STOP testing for your developers. Yes, yes, I know: “But Brent, I am now being held accountable for the customer satisfaction level of my feature, and what I keep getting is bugs, bugs, and bugs.” Remember that this is a system problem. We’ve changed the software development system by removing testers from it, but by no means does that imply it was done with the right tooling and training in place. Nor does it mean that you and your peers in PM have figured out the right number of features your dev team can produce and still confidently get to done, done, done. There is much work to do here, for sure. You need to play your part.
How? Stop being the safety net. It is absolutely OK for you to be a part of the team and help everyone improve the quality of the product under development, but always consider whether your role in doing so is becoming more and more required. That might imply a dysfunction in the system: the team has learned they can avoid doing testing themselves by shipping their crap to you. It’s super seductive for a dev. I, myself, have found my directs doing this at least a couple of times in the past several months. It’s also seductive for you. Usually, when you find that critical issue, you get praised for saving the team’s bacon. Both sides get an endorphin rush.
Instead, I encourage you to strike a different balance between improving and shipping your features and playing your new role in this system. My recommendation is to take on the strict role of the PO for your team. A key responsibility of the Product Owner is to own the Acceptance phase. Acceptance does not mean you find bugs. It means you own the decision as to whether the feature is ready to ship to your customer.
You can do this by doing two things:
- Making sure your team does an Acceptance Interview with you before they make their final check-in into the shipping build.
- It is an interview only. DO NOT open the product. This is for the short term only and is required to set expectations: it is not your job to prove or disprove that they are done. It is theirs. It is your job to be satisfied that they have done so. Once your team has shown signs of the new expected behaviors, feel free to change the interview process if it makes everyone’s life easier, but until then, resist the urge.
- Usually, you will have 2 lists for acceptance: 1) the requirements (acceptance criteria) defined when the item was still in the backlog, and 2) the non-functional requirements (perf, scale, localization, security, etc) that all features must be scrutinized against.
- For each requirement, simply ask the developer: “How did you prove readiness for this requirement?” If their answer is vague, follow up with more precise questioning.
- Using your best judgment, if you don’t like what you are hearing, then Reject with your rationale.
- Being brave. For a short period of time, you will be causing disruption in the system. Your devs may have already gotten used to your new role as safety net, and change of this sort almost always causes anxiety. That anxiety may come with consequences, depending on your team’s culture. In addition, your actions will likely result in several of your key stories/features not completing in the sprint. Remember, your job is first and foremost to satisfy your customer. Your preference should be to do that alongside your team, but if your team wants something else, you may need to “go it alone” for a little while. Remember also that everyone on your team is a professional, but they may have some bad behaviors due to existing muscle memory. Your job is to help coach them to learn new ways.
As always, thank you for reading and I appreciate and welcome any feedback you’d like to offer.