Unicorns, Data Scientists, and other mythical creatures

Hi all!  It has been a while since I’ve written a blog, and since my last post in January, lots of exciting things have happened to me.   Those who have been following me on LinkedIn or listening to the AB Testing Podcast know that I have taken a new job as the data science manager in the Azure team.  In the short time since I started in March, I can easily say this is absolutely the best job I have ever had.   In no small sense, it really feels like the job I was meant to do from the start of my career.

It’s funny to me.   As I sit and type this, I am reminded of my mentor in high school, my calculus teacher.   He was a cranky man who in one part loved his job, but in another felt like he had accepted second best in life.   He frequently mentioned that one day, when *his* high school math teacher died, he would go piss on that grave.   He felt that his mentors had not set him up to succeed in life and was driven not to repeat that error with his students.   A story for another day, but soon after college, I would learn that same sense of responsibility to teach others what I wasn’t taught.

My mentor wanted me to be an Actuary.   I loved math (still do) and went to college with that in mind.   I bolted on a Computer Science degree once I learned how much I loved it, but upon graduating, my Math degree would encapsulate knowledge that, in general, I would not use for 20 years.   Until about 4 years ago.

Needless to say, I really love my job, what I am doing, and the problems I am solving.   I do wish, now and again, that I could wind back the clock and be where I am today, but have another 20 years to master my new direction.  C’est la vie.  So much cool stuff to learn and use and so little time.

Very similar to my last team, my new team did not have much experience in their new space when I started, and I am grateful to be on the ground floor of what we are building there.  Much like many places in my company, many of the folks are old SDETs, and dealing with the change is an ongoing challenge, but not one that I am unfamiliar with.   Honestly, it is going nicely in my humble opinion, but as more and more people learn what data can do for a business, the pressure to hire and train more data scientists is ever increasing.     In the last 9 months, spontaneous 1:1s have increased by an order of magnitude with folks who are: 1) looking to hire a “data scientist”, 2) looking to become one, or 3) looking to preserve their current position.    Today’s post is mostly about issue #1.  Although, #2 is also interesting to me, as the majority of these folks have been Program Managers lately (which might indicate a sign of change in that discipline).   #3?   I would say that’s the majority of what I speak to on my blog and on the podcast, but if there are specific questions, send me a tweet.   We will feature it as part of our Mailbag segment on the podcast.  We love the mailbag!

This post was inspired by a talk I attended at this year’s Strata conference in New York.  The presenter, Katie Kent, did a talk on Unicorn hunting in the data science world, which I thought was fantastic.  Her company, Galvanize.com, offers a 12-week immersive course that claims to prep you for a Data Science role with a 94% success rate.   I haven’t researched this myself, but maybe I will test it with a few employees of mine and report back.   Might be another alternative for the #2 issue I mentioned above.  I can say Katie’s talk was great and it resonated.   Many of the discussions I have had recently were *exactly* in this problem space.   Managers coming to me or my manager wondering how to quickly learn from us being vanguards, asking how to take advantage of Data Science, and “maybe if I can get an open position, you can help me hire one?”.


Yup, that’s right.   One!


Katie’s talk was about Unicorn Hunting.   The elusive Unicorn is the perfect singular Data Scientist a company could hire who could solve all of its needs in the data science space.    I regret to inform all of you (Katie excepted): they are extremely rare (perhaps rarer than the Unicorn), and if you can find one, you will probably not be able to afford them.  (Note: if you can, though, you should!)   The challenge is that this perfect Data Scientist would have to be an expert in too many very distinct fields.  The ones I am aware of that exist are indeed Rockstars (in the Stats world), but these aren’t the folks you will successfully hire into your one-off position.

One new experience for me on my new team has been hiring Data Scientists.   Almost all of the folks I have interviewed have been PhDs, but the best have held only Master’s degrees.   To date, I have not met a candidate with only a Bachelor’s degree.  <aside> This, by itself, is interesting.   If one can become a data scientist in only 12 weeks, why not 4 years?    </aside>   Master’s candidates are great, I think, because they have 1) learned some depth and 2) stayed grounded in the application of their craft.   I’ve learned that there’s a big debate in academia with respect to post-doctoral individuals and whether or not to join Industry.     They are pushed to push the boundaries of the science, and I think somewhere along the way, the drive to apply it to real life dissipates.  Master’s students are more applied scientists, in my humble opinion, than theorists and, as a result, more immediately useful to a business.  This is an exaggeration, of course, but it highlights another cause of the Data Scientist shortage.  The PhD folks who do venture into industry and survive it are helpful.     They are able to pull the practiced, but old, learnings of Data Science closer to what’s currently known, which accelerates all involved.

However, this is knowledge work, and there’s too much of it.   Due to the cognitive limitations of any single human, it will *always* be rare for just 1 person to be good enough at what is needed end-to-end.

Depending on who you talk to, there are multiple definitions of a Data Scientist’s job.  My present favorite: A Data Scientist helps a business drive action by understanding and exploiting relationships present in the data.   There are 4 key principles buried in that definition.

4 Key Data Science Principles:

  1. Actionability – the recommendations must be interesting, valuable, and within the means of the business
  2. Credibility – In this business, Objective Truth is *everything*.   It is ok to communicate confidence intervals, but it is not ok to be wrong. Data Science teams get, AT MOST, one chance to present wrong data, insights, or recommendations to an executive. It is wise to remember this.
  3. Understanding Relationships – this is the bread-and-butter work most data scientists are hired to do.   There is a vast sea of techniques for digging knowledge out of data. One must also have the ability to understand what it means.   Domain Knowledge is critical.
  4. Data – lots and lots and lots of it

To be able to succeed, one must turn data into knowledge and make knowledge work.   Sounds good as a t-shirt motto, but in practice, this is hard.   It takes deep knowledge from several disciplines to turn this into something efficient that scales to not only the amount of data being processed, but also the timeline the business requires in order to benefit from the discoveries.


In my experience, it takes a team to pull this off.


In my observations, here’s what you need:

  1. Data scientists – Starting with the obvious.   However, what may not be obvious is that Data Science is a very wide umbrella.   There are 2 major branches, and likely, you will need both.
    1. Applied Statistics – The ability to prove or disprove known hypotheses in a deductive manner.   You have a belief already in hand and you are trying to prove or disprove it. Applied Stats techniques tend to be faster than Machine Learning.   A few simple histograms are easy to pull together, as an (exaggerated) example.
    2. Data Mining – Using Machine Learning techniques in an inductive manner. You start without a preformed belief, but instead with a goal (such as predicting when a customer will churn), and let interesting patterns within the data unveil themselves. (Note: “interesting” is a Data Science term, and it can be measured.)   Machine Learning techniques, in my experience, handle Big Data problems better; they scale with the size of the data.
  2. Data Engineers – Engineering the movement, storage, indexing, parallelization, cleansing, and normalization of data is a very hard problem, and MOST data scientists do not know how to do this.   As Big Data grows, this role, already critical, becomes even more so.   Credibility starts with the data, and these folks are key to caretaking and monitoring it. They should be paying attention not only to traditional RDBMS solutions, but also to technologies such as Hadoop, Splunk, Azure Data Lake, etc.   Each of these solutions comes with its own pros and cons, and you need someone who knows what they are doing. These folks should understand the architectures end to end, from data emission to visualization. There is NO silver bullet, and you need a person who understands the trade-offs.   Every executive wants 1) a cheap solution, 2) near real time, and 3) inclusive of all the data. Cheap, Fast, or Good: pick 2.
  3. Computer Scientist – Especially in distributed computing.   The current state of the art for Big Data is to parallelize and send your code to the machine that is storing the data to do the calculations (MapReduce, in a nutshell). This greatly reduces the time spent, as code is easier to move than data, but even so, many of the techniques are O(N²).   Polynomial time is too slow (or expensive, even with millions of machines running in parallel).   There’s an active quest for O(N) and O(1) solutions to Data Science problems, as well as clever approaches to Data Structures that help improve speed and storage costs.   One new item that I have not spent nearly enough time on is the use of heuristics.   More here later.
  4. Domain Knowledge Expert – Even if you have the people with the skills above, you still need to understand what the data *means* in order to move forward. Typically, the data is emitted as telemetry from lots of product developers.   It is unreasonable to expect 1 person to know everything about how the product works, but you will NOT succeed if your in-house data science team knows nothing.
  5. Business Expert – You need to be able to understand what the business goals are and how to translate your uncovered insights to support decision making.   This takes art in communication and visualization.
  6. Agile – An agile coach is needed to pull these folks together and get them working towards common goals.   It does NOT make sense to over-focus on *any* of the above specialties.   All roles are necessary to succeed, and since this team is working to improve the business, adaptability is key.   As new knowledge is gained, the team needs to be able to shift in the new direction sustainably.   This happens A LOT!
  7. Manager – Really, I wanted to put “Orchestrator” here, but you need someone who is a Systems Thinker, crafting strategies to the best effect for the business.   These folks need to work together.
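To make the two data science branches in item 1 concrete, here is a toy sketch in Python. Everything in it is made up for illustration (fake session-length data, a hand-rolled Welch's t statistic, a tiny 1-D k-means loop); real work would lean on scipy or scikit-learn instead:

```python
import math
import random
import statistics

random.seed(42)

# Deductive (Applied Statistics): we start WITH a belief --
# say, "the new checkout flow changes session length" -- and
# test it against (fake) data from two groups of users.
control = [random.gauss(10.0, 2.0) for _ in range(200)]
treatment = [random.gauss(10.8, 2.0) for _ in range(200)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.fmean(b) - statistics.fmean(a)) / math.sqrt(
        va / len(a) + vb / len(b))

t = welch_t(control, treatment)
print(f"t statistic: {t:.2f}")  # a large |t| supports the belief

# Inductive (Data Mining): no prior belief, just a goal -- let the
# data group itself. A tiny 1-D k-means with k=2:
data = control + treatment
c1, c2 = min(data), max(data)
for _ in range(20):
    g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
    c1, c2 = statistics.fmean(g1), statistics.fmean(g2)
print(f"discovered cluster centers: {c1:.1f} and {c2:.1f}")
```

The deductive path answers a question you already had; the inductive path surfaces structure you didn't ask about.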


You can scale these “requirements” up or down depending on the problems your team is facing, but the people on the team should be working in unison like a choreographed dance or an orchestra.   They should be one team and not individual teams and focus on vertical slices of “value” being delivered.  No one person can do all of the above.  3 to 4 might be the minimum to produce a team that is outputting something that is considered valuable.  Imho, 5-7 folks with the above skills is about right as long as you have the right depth and breadth.

Lastly, AB podcast listeners will know that I frown on specialists in “pure” development teams.   This is true here as well.   Every person on your team should be able to perform at least 2 of the functions above (ideally, 3), with at least 1 place where they can competently achieve deep results.  In addition, it’s a good idea to minimize the overall overlap of expertise on your team and rely on your Agile coach to create an environment of knowledge sharing and team cooperation.

In closing:

If you are patient, you will be able to create a team of folks who, together, have mastery of the above, and if you are also lucky, you can keep that team small.  But don’t try for the Unicorn.   They do exist, but they are too hard to find and too expensive.  Even if you manage to land one, if the business isn’t prepped to include them in the strategy, you will likely not get the value you hope for from them.   Knowledge work cannot be done in a silo in the fleeting windows of opportunity many are encountering today.

Thanks for reading and Happy Holidays!


Systems Thinking: Why I am done with agility as a goal

Recently, I was writing up a presentation where I was going to state that the New Tester’s job definition was to “accelerate business agility”. One of my peers looked at it and remarked, “Isn’t that sort of redundant?” After some discussion, it became clear that “agility” did not have a clear, well-understood definition.

To be clear, I am MOST definitely not done with Agile methods, but as best as I am able, I am done with using the word ‘agility’ to describe them. If you look the word up in your favorite dictionary, you will find it described as “moving quickly”. While moving quickly is certainly a valuable goal, it is pitifully insufficient in the modern software world and, if not tempered correctly, can actually lead to more pain than what you started with. When I now give talks on Agile, my usual starting point is to first clarify that Agile is NOT about moving quickly so much as it is about changing direction quickly. So in a nutshell, Agile is not about agility.

One problem I am trying to unwind is the dominance of strong-willed, highly paid folks proclaiming that Agility is the goal when, quite simply, they do not know what they are talking about, as evidenced by the typical lack of detail explaining the behavior and/or success changes their team should be making. Their reports “follow” this guidance but are left to their own devices to make it up. A few clever folks actually study it and realize that shifting to Agile is quite a paradigm shift and hard to do. This can be a slow process, which seems to contradict the goal of “moving quickly”, so it gets abandoned for a faster version of Waterfall or a similar dysfunctional hybrid. There’s a common phrase in MBA classes: “Pick 2: Cheap, Fast, or Good”. This implies a singular focus on fast is likely to deliver crap, and at a high cost.

One quick test to see if your leader understands: ask how much we are going to invest in real-time learning. Then observe how those words align with actions. Moving fast without learning along the way is definitely NOT Agile, but more importantly, it is fraught with peril.

Many of my recent blog posts are on the topic of leadership. If you find yourself in such a role and are trying to lead a team towards Agile, my guidance is to think carefully about the goals and behaviors you are expecting and use the word that describes them better. If you don’t know what you want, then get trained. In my experience, using Agile methods is very painful if the team leadership does not know what, why, and how to use them.

Consider these word alternatives:

  • Nimble: quick to understand, think, devise, etc.
  • Dexterity: the ability to move skillfully
  • Adaptability: the ability to change (or be changed) to fit changed circumstances

These ALL make more sense to me than “moving quickly”, but adaptability is what fits the bill the best in my mind.

In my last post, I focused on one aspect of the paradigm shift happening in the world of test towards the goal of improving adaptability. I have mentioned before that my passion (and the primary reason I write this blog) is Quality. However, to make a business well-functioning in this modern age, a singular focus on changing the paradigm on quality is not sufficient. As Test makes its shift, other pieces of the system must take up the slack. For example, a very common situation is that Test simply stops testing in favor of higher-value activities. Dev then needs to take up that slack. If they don’t (and most likely they won’t initially), then they will ship bugs to customers and, depending on customer impact, cause chaos as Dev attempts to push testing back. We need to consider the whole system, not just one part of it.

A couple of months ago, I was asked to begin thinking through the next phase of shifting the org towards better adaptability. Almost immediately, I rattled off the following list of paradigm shifts that need to be made to the system as a whole.








  • Spider teams → Starfish teams
  • … → Quality (value)
  • NIH is bad → NIH is Awesome
  • Large batch → Small Batch
  • Green is good → Red is good
  • … → Shared Accountability


Hopefully, you can see that moving quickly is certainly a part of this, but more importantly, this list shows a series of changes needed for focus, sharing, understanding the current environment, and learning…

Recently, I have come upon some material from Dr. Amjad Umar (currently, a senior strategist at the UN and one of my favorite professors) where he argues that companies should be plan-fully considering the overall “smartness” of their systems. He states that technologies alone cannot improve smartness. But you can improve it by starting with the right combination of changes to your existing People, Processes, and Technology. Smartness, by the way, is analogous to Adaptability.

I have taken his concept and broadened it to something I call “Umar’s Smartness Cube”. I think it nicely describes at a high level what needs to be considered when one makes System changes. The goal of the whole cube, of course, is to improve Business Value.

How to use this to improve your system:

  1. First determine and objectively measure the goal you are trying to achieve.
  2. Consider the smartness cube and enumerate opportunities to improve the above goal.
  3. Consider tradeoffs between other elements to achieve goals better. For example, maybe we don’t need the world’s best technical widget if we just change the process for using what we have to reduce the training burden.
  4. Prioritize these opportunities (I like to use (BizValue+TimeCriticality)/Cost)
  5. Get them in a backlog that acts like a priority queue and start executing.
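Steps 4 and 5 above are easy to operationalize. Here is a minimal sketch of the prioritization formula as a sorted backlog; the opportunity names and 1-to-10 scores are entirely made up for illustration:

```python
# Hypothetical opportunities; names and scores are invented.
opportunities = [
    {"name": "automate deploy",   "biz_value": 8, "time_criticality": 5, "cost": 3},
    {"name": "new dashboard",     "biz_value": 6, "time_criticality": 2, "cost": 2},
    {"name": "retrain old model", "biz_value": 9, "time_criticality": 9, "cost": 8},
]

def score(opp):
    # The post's prioritization ratio: (BizValue + TimeCriticality) / Cost
    return (opp["biz_value"] + opp["time_criticality"]) / opp["cost"]

# The backlog acts as a priority queue: highest score executes first.
backlog = sorted(opportunities, key=score, reverse=True)
for opp in backlog:
    print(f'{opp["name"]}: {score(opp):.2f}')
```

Note how the high-value, high-urgency item can still sink in the queue when its cost is large; the ratio, not any single factor, drives the order.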


This, of course, is over-simplified, but hopefully, sets you in an actionable direction for “accelerating the adaptability of your Business (system)”.

As thinking-in-progress, any feedback is appreciated.

AB Testing Podcast – Episode 2 – Leading change

The Angry Weasel and I have put together another podcast. In this episode, we talk about problems we see in leading change towards a better direction. We cover some changes we face, change management, and reasons change fails. We talk about the importance of “why” and leverage the “Waterfall vs. Agile” religious war, as an example.

We managed to keep our “shinythingitis” to a minimum of slightly less than 70% of the time. 🙂 Enjoy!

Want to subscribe to the podcast?
RSS feed

User Stories and 5 Why Analysis


Today I stumbled upon a very useful technique I felt warranted sharing. I am in the process of drafting a blog around the frailty of introducing Agile techniques to a waterfall team. I’m confident I’ll post that either this weekend or next. In the meantime, one thing I have learned from you, the blog reader, is that you tend to appreciate and forward on useful techniques to others when they are practical solutions to present day problems. May this help you.


My love/hate relationship with User Stories


There are a lot of great User Story resources. My absolute favorite is Mike Cohn’s site, here. The term “User Stories” is often overloaded, so I recommend visiting Mike’s site to make sense of the rest of the post. My view is compatible with his (if not a blatant overlap).

One of the first things I try to do when I am helping a team shift to Agile is to teach them to stop scheduling workitems and, instead, start delivering outcomes. This approach enables teams to decouple the problem they are trying to solve from the solution they currently favor. There are a lot of benefits to doing this, not least the ability to define DONE in terms of value added to the customer. There are other techniques for defining Done up front (such as ATDD), but I have found that User Stories are really palatable to newbies trying out Agile.

But even User Stories have problems. They can be hard to construct. When I know I need, say, a new report created, it is just easier to write “Create new report” on the ticket and place it on the task board in the backlog. They can be rather verbose and hard to communicate succinctly. “As a decision maker for the release, I want to see fresh execution reports online, so that I can weigh-in on readiness armed with the right data to make an informed choice“. The friction created in the longer format makes the shorter format very alluring. Especially for those who haven’t yet experienced the value of the longer version.


The problem

Folks from the waterfall world get the workitem approach. “Create new report” seems easy to understand and execute on. Today’s problem came from one of the testers on the team, who, quite honestly, had gotten tired of me complaining about Scope Creep in their “stories”. As the Agile Master for the team, I push hard on making sure we are maintaining a high, consistent, and predictable velocity. Scope Creep makes this very difficult, causes delays, and creates the potential for significant wasted effort within the system. My team is using Lean and Kanban. We do not timebox our iterations, but each story has a 2-week SLA. The story this tester was working on was about some tooling we are creating. They were coming to me to let me know that “the design has changed again” and wanted to know what to do about it. The ticket was already past its 2-week SLA.
In addition, the ticket was similar to “enable performance thresholds”. I.e., it was ambiguously worded, and it was entirely unclear when we would be done. I had warned of this before, but my style of Agile Mastering is to let teams make the decision, and the mistake, in order to enable learning, so I let it stick.

The solution

It is insufficient to point out what not to do. If you want folks to learn, tell them what to do instead. Here I suggested that the problem with the ticket was that Done was not clear. I said to use a User Story instead, now as well as in the future. This particular tester had a hard time with that. Even after I explained Stories, they could not pivot the workitem into a story. It was just unnatural for them to think of the outcome they needed. Unfortunately, this is a far too common experience for me.

However, during a rare flash of total insight, I fixed this for them by throwing in Six Sigma’s 5 Whys technique. Primarily used to determine root causes, it works like this: you simply ask ‘why’ 5 times.

The dialogue

I started off: “Ok, let’s try a different thought process. What if I were to tell you I see no value in this “enable performance thresholds” task, so I am going to cut it. How do you feel about that?”

Tester: ” I hate that.”

Me: “Why?”

Tester: “Because we need it”

Me: “Why?”

Tester: “Because dev needs it”

Me: “Why?”

Tester: “So they can decide if the product is good or not”

Me: “so what you are saying is that ‘as a dev on this team, you want performance thresholds enabled, so that you can decide if the product is good or not’?”

Tester: “yes”

Me: <blank stare>

Tester: “ooooohhhhh!!!!!”


We then talked about how asking additional whys adds precision to the desired outcome and clarity around Done, while in most cases keeping the implementation decoupled from the outcome. In addition, we talked about how to determine when a story is too vague (likely an Epic) and needs to be broken down into smaller stories.

I will see how well it plays out over the coming weeks, but the tester, at least, believed they would be able to confidently break stories into smaller, outcome-based stories and, with that, defend against scope creep while still handling undiscovered stories in an Agile fashion.



Kanban for Chores

Today’s post is a lot more practical than most. It’s fun to mix things up now and again. I really enjoy it when work related activities can improve the home. I feel like the time invested in learning pays off doubly so in those cases.

Today I am going to share a new chore system that has been rolled out in the Jensen family. So far, I’m very happy with the results. To give credit where credit is due: months ago, in one of Alan Page’s blog posts, he told a story about how he introduced his family to Kanban. His kids, in particular, really seemed to dig it. I would share out the link to the post, but, sadly, I cannot find it. I think it lies in one of Alan’s older posts. If somehow it shows up, I will post an update here. (EDIT: as pointed out in comments, it was a series of tweets) As I recall, Alan’s system was to put the chores up on a kanban-related taskboard and everyone works together to get the chores done on the weekend before heading out for family fun.

As you may recall from my last post, I have moved recently. The new house is much bigger than the old one and we have an active after work/school life, so chores were going to the wayside at times. This, to me, was annoying. I remembered Alan’s post and said “hmm, that actually sounds like fun and with a few tweaks should work great for our household”.

Here’s how I did mine.

First, I acquired several supplies:

  1. A 4′ Magnetic Dry Erase Board
  2. A set of Dry Erase Pens (Although, I will probably change this to Painter’s Tape later… It looks cleaner.)
  3. 4 sets of Planning Poker cards by using the PDF available here.
  4. A set of Ink Jet Magnetic Business Cards by Avery

The board will serve as a base for the taskboard. Since both the cards and the board are magnetic, the cards will be a perfect medium for being task tickets. The dry erase pens will mark out the columns, WIP, and flow.

Next, I sat with my wife and we worked through the chores that we felt should/could be done by anyone in the household. Things like doing the dishes, taking out the trash, cleaning your room, etc. are not on the list. These chores we felt the kids should just do anyways… Family Tax. I then took each of those chores and printed it on its own business card. These ended up looking really sharp. Had I the patience or the time, I would have put pictures or decorated the cards so they showed the image of the chore. One of my sons really likes that type of work, so I may ask him to do it when these cards wear out. Plus I find if you get folks to help to work on the system, they feel more vested in its success. Can’t hurt.

At dinner, I brought out the magnetic cards. This was the first time the kids had seen them, and since all of the cards had work items on them that they recognized, they were immediately suspicious. I trudged forward fearlessly and walked them and my wife through a variant of planning poker (without them realizing that what I was doing had that name).

Planning poker process we followed:

  • Ordered the chore list from easiest to do to hardest
  • I picked the one in the middle “Vacuum Downstairs” and announced it was worth 5 points. (My eldest, still suspicious, protested the use of points, claiming that it further proved that doom and gloom was coming his way… Teenagers!)
  • I then took all of the chores and put them into a single randomized stack.
  • I wrote a “5” on “Vacuum Downstairs” and kept it out of the stack
  • Then starting at the top, I asked “if Vacuum Downstairs is worth 5 points of difficulty, how many points is ___________________?” (for example, “Clean Refrigerator Door” or “Weed Front Yard”)
  • I told them how to use the Planning Poker cards I had made and iterated through all of the chores.
  • After we agreed on a point total for the chore (as well as got clear agreement on what Done meant), I wrote the final number on the card and went to the next card.

There was some discussion from the kids regarding the available point values, but I stuck to Fibonacci numbers. (they wanted a 4 and a 7)


Now that I had all of the chores with points assigned, I went and made the simplest of taskboards (see below). Since I wanted the board to replace the current (and totally failing) allowance process, I then figured out the point-to-$ conversion by figuring out how much I would be willing to pay weekly and dividing that by the total of all of the chore points.

Then I told the kids (and my wife) the rules:

  • Only mom and dad can move tickets from the Backlog column to the Ready column. We will do so when we think that chore needs to be done.
  • Anybody can grab a chore they want from the Ready column, if they have available capacity.
  • Each person can have, at most, 2 tickets in the Doing column. They are not required to have any.
  • No one else can take a ticket in Doing as long as its owner completes it within 2 days. If they do not, then someone else can take it (with notification).
  • At the end of the week, each kid will get paid according to the sum of their points.
  • Then the Done tickets will be moved to Backlog and my wife and I will re-fill the Ready column as needed.
  • Also, at the end of the week, we will, as a family, talk about the chores and adjust points up or down. I will adjust the point conversion rate as necessary, so that I am paying a constant amount each week.
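The payout rules above reduce to a bit of arithmetic. Here is a sketch with made-up chores, points, and a made-up $15/week budget (the post gives no real dollar figures):

```python
# Invented chores, points, and budget, for illustration only.
chore_points = {"Vacuum Downstairs": 5, "Clean Refrigerator Door": 2,
                "Weed Front Yard": 8, "Wash Windows": 3}
weekly_budget = 15.00

# Conversion rate: what I'm willing to pay weekly, divided by the
# total points on the board, so the weekly payout stays constant.
rate = weekly_budget / sum(chore_points.values())  # dollars per point

# At week's end, each kid is paid for the tickets they moved to Done.
done_by_kid = {"eldest": ["Weed Front Yard", "Wash Windows"],
               "youngest": ["Vacuum Downstairs", "Clean Refrigerator Door"]}
payments = {kid: rate * sum(chore_points[c] for c in chores)
            for kid, chores in done_by_kid.items()}
for kid, pay in payments.items():
    print(f"{kid}: ${pay:.2f}")
# Because every chore got done this week, the payouts sum to the full budget.
```

Re-estimating points at the weekly family retro only changes the rate, not the budget, which is the whole point of the conversion step.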

The Results

Truly amazing. For those who have deployed similar things at work, it really shouldn’t be a surprise, but it surprised me. My kids really picked it up excitedly. The first day, half of the chores got done. It was kind of a whirlwind. Both kids viewed it as a game and had a blast just moving a ticket to Done. They both are eager to grab the big-ticket items, but when one figured out that he could finish the smaller tickets faster than the big ones, he quickly surpassed his brother. Right now, it is definitely a situation where everyone wins. The kids are in charge of earning what they want and when. Mom and Dad are in charge of prioritizing the work, and the house is staying clean and spiffy. As another positive note, an unexpected “bonus” has popped up: the kids do not want Mom and Dad doing tickets. Why? Because then *they* don’t get paid. Brilliant, I say! Might just give me more time for blog writing!




Beware the JeDI

Last Friday, I had a meeting with a boss of mine from long, long ago.   Currently, he is one of the Directors of Test for Windows, which is a stronghold of waterfall.   He wanted my opinion on how to use what I’ve learned in my recent job positions, on teams that have deployed Agile techniques and have begun the march towards reducing headcount in Test Generalist positions.    So he wanted me to bring him up to speed.

In retrospect, I wish I had done my best spooky voice and stated:

“Beware the JeDI”

Over the years, I have personally witnessed several teams struggle to roll out Agile.  After doing my own analysis of common symptoms and behaviors, I have created a simple litmus test you can use to determine which approach your team is executing.  You may be surprised.  The first step to using it is to observe your team’s behaviors and ask: what is the question these behaviors are trying to answer?

A Waterfall team asks: “Is the plan on track?”

An Agile team asks: “Is the plan correct?”

A JeDI team asks: “Plan? Just Do It!”

In Waterfall, the team is executing on a plan, and getting the code out to market in accordance with the plan is pivotal for all of the synchronized machinations (Marketing, Sales, Customer Service, Hot Fix Engineering, etc.) to work properly.

In Agile, the team has a baseline plan, but is focused on shipping small bits of code to the customer with the intent of determining whether the plan is the right direction.   Based on observing the customer, the plan will change and a new direction will be set.  (Note:  I adore Elisabeth Hendrickson’s definition of Agile here)

A JeDI team believes they are Agile, and each individual works 16- to 20-hour days in order to stay on track.    They think this is just a reality of software engineering.   Having scaled way back on planning and documentation, they are heavily focused on “Code Velocity”, and their customer is someone they believe they will delight, but their view of that customer is mostly theoretical.   When they actually observe customer behavior, though, the runway is too short for them to make changes.  On these teams, Test Generalists are very often reactive to development.   In a world where “Code Velocity” is king, partnering with Test is an afterthought.  In some cases, I’ve even seen a total lack of documentation, which, had it been written, might have helped Test solve its own problems. But no.  There’s none.   It’s not dissimilar to Cowboy Development.

In a previous post, I was asked to explain why I didn’t recommend the Agile Manifesto.   It is simply this:

The Agile Manifesto creates JeDI teams.



On JeDI teams, there are 3 common characteristics I have noticed of the people driving the teams:

  1. They are all wicked smart.  Often the best at what they do.
  2. They are good at making things efficient.
  3. They are very focused on getting stuff done… and yesterday.

When combined with the Agile Manifesto, I’ve seen practices created that end up (ironically) being highly dysfunctional to the project overall.

Consider this:

  • These folks like things simple.  Black & white is better than shades of grey.
  • They will tend to use only the Agile Manifesto to understand Agile.  And there, only the 4 Values, not the 12 principles.  (In fact, I’ve had conversations with strong admitted Manifesto supporters who did not even know of the principles.)
  • These folks will take the 4 values and oversimplify to:
    • Agile == Low Overhead  AND/OR
    • Agile == Flexibility
  • They will roll out their new “Agile” program.  Unchecked, Low Overhead and Flexibility quickly become: “I get to do what I want (flexibility) without being accountable to management (low overhead).”

Once that has taken hold in your team, it is really, really hard to fix (perhaps a blog for another day).  In my experience, key values get subtracted from the Manifesto:

  • The principle of customer collaboration gets devalued.   Working with the customer hand-in-hand, and learning about your customer iteratively, is essential to succeeding with Agile.
  • “Responding to change over following a plan” is, IMHO, intended to mean fail fast and change the plan based on what you have learned, not execute without a plan.

Are you in a similar situation and trying to understand Agile?

Here’s my advice: (modified from a George Paci quote here)

“[Treat Agile] like an off-the-rack suit: it’s not likely to fit you perfectly, but you’d better try it on before you make alterations.”

The best way to avoid the JeDI is to prevent your team from becoming one in the first place.  The Agile Manifesto does not tell you what to do.   It tells you what to value.  It’s a definition, not an implementation plan.  Taking its values to excess will cause dysfunction on your team.  Heed my warning.

Instead of starting with the Manifesto, implement the variant of Agile that best fits your team’s skills, resources, and needs.   I think Alan Shalloway and his team have a good resource you can use.  (I’ve sent Alan a note asking.   If I get it or find another resource for you, I’ll update this post.) [Edit: And he did! (thanks, Alan) He recommends this link (Overview) and this one (additional resources) if you want to understand more.]   The three most widely used variants, to my knowledge, are XP, SCRUM, and Lean.   My personal preference is Lean with Kanban.

Whichever you choose, implement it exactly as intended from the sources.  Once you have experienced it firsthand, then feel free to make changes.  Agile *is* about learning, after all.

Once you have tried it out, the Manifesto will become clear and true to you, and it is a thing of beauty.