Postmortems with Teeth… But No Bite!

Jane Miceli of Micron and I are doing a presentation on “Postmortems With Teeth … But No Bite!” at DevOps Days in Boise. We wanted to share an article that can go into more detail than we’ll be able to fit into our 30-minute window. Enjoy!


It’s been said that a person’s character is revealed when things go wrong. So when things go wrong at your enterprise – what happens? What kind of character does your company show when the chips are down? 

We’re guessing one of two things happens. First is the “outage? What outage?” type of response. It’s possible that your company has NO postmortem process; when failure happens, there might be a few words, but it’s kept informal, within the family. That’s a big mistake, for reasons we’ll go into below. The second and most common is the “rub the puppy’s nose in it” response – where the bad employee(s) that triggered the outage are named, shamed, and blamed. We’d like to spend a few minutes on why both of these common reactions are so harmful, and set you up for better success with a proven antidote – the blameless postmortem.

Why We Need Postmortems 

[Dave] I tell the story in my book about when I was working for an insurance company. On my way in to work, I stopped by to grab a coffee and a donut (OK, several donuts!) and took a glance at the Oregonian newspaper. I almost spit out my coffee, right there at the counter. There, at the top of the front page, was my company – right where we did NOT want to be. Someone had sent out a mailer, and it had included personal information (names, addresses, DOB, SS#). Worse, many of these mailers ended up in the wrong subscriber’s hands. It was a massive data leak, and there was no place for us to hide from it. I knew the team that had made this mistake – I even knew who’d sent out the mailer. Hmm, I thought, as I headed into the office. We’ve got a long week of damage control ahead of us. I wonder what’s going to happen to Bobby? 

And that’s the interesting part. Nothing happened. There were a few high-level meetings with executives – no engineers or operators allowed in the room of course – on how to best position us and recover from the PR hits we were taking. But while nothing happened to Bobby – which was a good thing, he was just tired and had made a mistake – we didn’t learn anything from it either. No report, no knowledgebase article – it was like nothing had happened. It was only a matter of time until the next time a tired operator triggered yet another leak of sensitive information.

This type of reaction is understandable, and it’s rooted deep in our psychology. None of us likes to look too closely at our failures or mistakes. But without understanding that mistakes and errors are a normal part of any complex system, we’re missing out on a huge opportunity to learn. And you could make a strong argument that without a postmortem process, any DevOps process is handcuffed. Winning companies that we admire – names like Amazon, Google, Etsy – all make the same mistakes that other companies make. There’s a critical difference though in how they learn from those mistakes, and how they view them.  

Why We Need BLAMELESS Postmortems 

A blameless postmortem focuses on identifying contributing causes of an incident, without calling out any particular individual or team for being “bad” or handling things incompetently. It assumes good intentions and that everyone acted in the proper way – given the information, capabilities, and processes available at the time. By investigating more into the context behind a failure – what caused that operator to make that decision at 1:30 in the morning? – we can create safer processes.

And it’s a critical part of several companies’ DevOps implementations. Google, for example, views blameless postmortems as being a critical part of their culture – so much so that both the excellent “Site Reliability Engineering” and the SRE Handbook have entire chapters on it. Etsy in particular has made some very profound statements on blameless postmortems:

One option is to assume the single cause is incompetence and scream at engineers to make them “pay attention!” or “be more careful!” …Another option is to take a hard look at how the accident actually happened, treat the engineers involved with respect, and learn from the event… 

Blameless culture originated in the healthcare and avionics industries where mistakes can be fatal. These industries nurture an environment where every “mistake” is seen as an opportunity to strengthen the system. When postmortems shift from allocating blame to investigating the systematic reasons why an individual or team had incomplete or incorrect information, effective prevention plans can be put in place. You can’t “fix” people, but you can fix systems and processes to better support people making the right choices when designing and maintaining complex systems. 

…We believe that this detail is paramount to improving safety at Etsy. …If we go with “blame” as the predominant approach, then we’re implicitly accepting that deterrence is how organizations become safer. This is founded in the belief that individuals, not situations, cause errors. It’s also aligned with the idea there has to be some fear that not doing one’s job correctly could lead to punishment. Because the fear of punishment will motivate people to act correctly in the future. Right? 

There’s a great book called “Barriers and Accident Prevention” by Erik Hollnagel that deserves more reading than it gets. In it, Hollnagel says the “Bad Apple” theory above – that if we punish or remove the “bad apples” causing these failures, we’ll improve safety – is fundamentally flawed because it assumes bad motives or incompetence:

We must strive to understand that accidents don’t happen because people gamble and lose. 
Accidents happen because the person believes that: 
…what is about to happen is not possible, 
…or what is about to happen has no connection to what they are doing, 
…or that the possibility of getting the intended outcome is well worth whatever risk there is. 

Accidents Are Emergent; Accidents Are Normal 

The root fallacy here is thinking that accidents are abnormal or an anomaly. Accidents or mistakes are instead a byproduct; they are emergent, a consequence of change and the normal adjustments associated with complex systems. This is the true genius behind the SRE movement begun by Google; instead of striving for the impossible (Zero Defect meetings! Long inquisitor-type sessions to determine who is at fault and administer punishment over any failure!) – they say that errors and mistakes are going to happen, and it is going to result in downtime. Now, how much is acceptable to our business stakeholders? The more downtime (mistakes) we allow – as a byproduct of change – the faster we can innovate. But that extra few 9’s of availability – if the business insists on it – means a dramatic slowdown to any change, because any change to a complex system carries the risk of unintended side effects.  

I’m turning to John Allspaw again as his blog post is (still) unequalled on the topic: 

Of course, for all this, it is also important to mention that no matter how hard we try, this incident will happen again, we cannot prevent the future from happening. What we can do is prepare: make sure we have better tools, more (helpful) information, and a better understanding of our systems next time this happens. Emphasizing this often helps people keep the right priorities top of mind during the meeting, rather than rushing to remediation items and looking for that “one fix that will prevent this from happening next time”. It also puts the focus on thinking about what tools and information would be helpful to have available next time and leads to a more flourishing discussion, instead of the usual feeling of “well we got our fix, we are done now”. 

…We want the engineer who has made an error to give details about why (either explicitly or implicitly) he or she did what they did; why the action made sense to them at the time. This is paramount to understanding the pathology of the failure. The action made sense to the person at the time they took it, because if it hadn’t made sense to them at the time, they wouldn’t have taken the action in the first place.

So, good postmortems don’t stop at blaming the silly / incompetent / dangerous humans, and recognize that mistakes and disasters are a normal part of doing business. Our job is to collect as much information as possible so we can provide more information to the people who need it the next time that combination of events takes place, shortening the recovery cycle.

I remember saying this at Columbia Sportswear whenever something went awry, long before I knew what a blameless postmortem was: “I’m OK with making mistakes. I just want to make new and different mistakes.”

Stopping At Human Causes Is Lazy 

During the postmortem process, the facilitator helps the team drill down a little deeper behind human error: 

… As we go along the logs, the facilitator looks out for so-called second stories – things that aren’t obvious from the log context, things people have thought about, that prompted them to say what they did, even things they didn’t say. Anything that could give us a better understanding of what people were doing at the time – what they tried and what worked. The idea here being again that we want to get a complete picture of the past and focusing only on what you can see when you follow the logs gives us an impression of a linear causal chain of events that does not reflect the reality. 

Etsy didn’t invent that; this comes from the great book “Behind Human Error” by David Woods and Sidney Dekker, which distinguished between the obvious (human) culprits and the elusive “second story” – what caused the humans involved to make a mistake:

First story: Human error is seen as the cause of failure.
Second story: Human error is seen as the effect of systemic vulnerabilities deeper inside the organization.

First story: Saying what people should have done is a satisfying way to describe failure.
Second story: Saying what people should have done doesn’t explain why it made sense for them to do what they did.

First story: Telling people to be more careful will make the problem go away.
Second story: Only by constantly seeking out its vulnerabilities can organizations enhance safety.

The other giant in the field is Sidney Dekker, who dubbed processes that stop at human error the “Bad Apple Theory”. The thinking goes that if we get rid of bad apples, we’ll get rid of human-triggered errors. This type of thinking is seductive, tempting. But it simply does not go far enough, and will end up encouraging less transparency. Engineers will stop trusting management, and the upward flow of information will dry up. Systems will become harder to manage and unstable as less information is shared, even within teams. Lacking understanding of the context behind how an incident occurred practically guarantees a repeat incident.

There Is No Root Cause (The Problem With The Five Whys) 

Reading accounts about any disaster – the 1996 Everest disaster that claimed 8 lives, the Chernobyl disaster, even the Challenger explosion – there is never one single root cause. Almost always, it’s a chain of events – as Richard Cook put it, failures in complex systems require multiple contributing causes, each necessary but only jointly sufficient. 

This goes against our instincts as engineers and architects, who are used to reducing complex problems down as much as possible. A single, easily avoidable root cause is comforting – we’ve plugged the mouse hole, that won’t happen again. Whew – all done! But complex systems can’t be represented as a cherry-picked list of events, a chain of dominoes; pretending otherwise means we trick ourselves into a false sense of security and miss the real lessons.  

The SRE movement is very careful not to stop at human error; it’s also careful not to stop at a single root cause, which is what the famous “Five Whys” linear drilldown encouraged by Toyota promotes. As the original SRE book put it:

This is why we focus not on the action itself – which is most often the most prominent thing people point to as the cause – but on exploring the conditions and context that influenced decisions and actions. After all there is no root cause. We are trying to reconstruct the past as close to what really happened as possible. 

Who Needs To Be In The Room? 

Well, you’re going to want to have at least a few people there: 

  • The engineer(s) / personnel most directly involved in the incident 
  • A facilitator 
  • On-call staff or anyone else that can help with gathering information 
  • Stakeholders and business partners 

Why the engineers/operators involved? We mentioned a little earlier the antipattern of business- or executive-only discussions. You want to have the people closest to the incident telling the story as it happened. And this just happens to be the biggest counter to that “lack of accountability” static you are likely to get. John Allspaw put it best:

A funny thing happens when engineers make mistakes and feel safe when giving details about it: they are not only willing to be held accountable, they are also enthusiastic in helping the rest of the company avoid the same error in the future. They are, after all, the most expert in their own error. They ought to be heavily involved in coming up with remediation items. So technically, engineers are not at all “off the hook” with a blameless PostMortem process. They are very much on the hook for helping Etsy become safer and more resilient, in the end. And lo and behold: most engineers I know find this idea of making things better for others a worthwhile exercise.  

…Instead of punishing engineers, we instead give them the requisite authority to improve safety by allowing them to give detailed accounts of their contributions to failures. We enable and encourage people who do make mistakes to be the experts on educating the rest of the organization how not to make them in the future. 

Why a facilitator? This is a “playground umpire”, someone who enforces the rules of behavior. This person’s job is to keep the discussion within bounds.  

The Google SRE book goes into the psychology behind disasters and the role of language in great detail. But you’re going to want to eliminate the use of counterfactuals: the belief that if only we had known, or had done that one thing differently, the incident would not have happened – the domino theory. Etsy is very careful to have the facilitator watch for any use of phrases like “would have” and “should have” in writeups and retrospectives:

Common phrases that indicate counterfactuals are “they should have”, “she failed to”, “he could have” and others that talk about a reality that didn’t actually happen. Remember that in a debriefing we want to learn what happened and how we can supply more guardrails, tools, and resources next time a person is in this situation. If we discuss things that didn’t happen, we are basing our discussion on a reality that doesn’t exist and are trying to fix things that aren’t a problem. We all are continuously drawn to that one single explanation that perfectly lays out how everything works in our complex systems. The belief that someone just did that one thing differently, everything would have been fine. It’s so tempting. But it’s not the reality. The past is not a linear sequence of events, it’s not a domino setup where you can take one away and the whole thing stops from unraveling. We are trying to make sense of the past and reconstruct as much as possible from memory and evidence we have. And if we want to get it right, we have to focus on what really happened and that includes watching out for counterfactuals that are describing an alternative reality. 

Interestingly enough, it’s usually the main participants that are the most prone to falling into this coulda-shoulda-woulda type thinking. It’s the facilitator’s job to keep the discussion within bounds and prevent accusations / self-immolation.  
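As a toy illustration of what the facilitator is listening for, you could even lint a draft writeup for counterfactual language before the review. This is a hypothetical sketch, not anything Etsy or Google actually runs – the phrase list is illustrative only:

```python
import re

# Phrases that signal counterfactuals -- talk about a reality that
# didn't actually happen. Illustrative list, not exhaustive.
COUNTERFACTUALS = [
    r"\bshould have\b", r"\bcould have\b", r"\bwould have\b",
    r"\bfailed to\b", r"\bif only\b",
]

def flag_counterfactuals(draft: str):
    """Return (line_number, line) pairs that contain counterfactual language."""
    hits = []
    for num, line in enumerate(draft.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in COUNTERFACTUALS):
            hits.append((num, line.strip()))
    return hits

draft = """The deploy completed at 01:30 UTC.
The operator should have checked the dashboard first.
Alerting fired eight minutes after impact began."""

for num, line in flag_counterfactuals(draft):
    print(f"line {num}: {line}")
```

A human facilitator still has to decide what to do with each flag – the point is surfacing the “alternative reality” statements so the discussion can return to what actually happened.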

How To Do Blameless Postmortems Right 

There are two great postmortem examples we often point to: the first is found in both the SRE books (see the Appendix). The second – which Jane often uses – was a very prominent outage at GitLab, found here.

A great writeup like this doesn’t come from nowhere. Likely, the teams shared a draft internally – and even had it vetted for completeness by some senior architects and engineers. The reviewers will want to make sure the account has a detailed timeline showing the actions taken and what expectations and assumptions were made. They’ll also want to make sure the root cause analysis goes deep enough, that information was broadcast appropriately, and that the action items are complete and prioritized correctly.

If you have an hour-long postmortem review, you may spend more than half of that time going over the timeline. That seems like an absurd waste of time, but don’t skip it. During a stressful event, it’s easy to misremember or omit facts. If the timeline isn’t as close as possible to what actually happened, you won’t end up with the right remediation steps. And it may also expose gaps in your logging and telemetry.

Once the timeline is set, it’s time to drill down a little deeper. Google keeps the discussion informal but always aimed at uncovering the Second Story: 

This discussion doesn’t follow a strict format but is guided by questions that can be especially helpful, including: “Did we detect something was wrong properly/fast enough?”, “Did we notify our customers, support people, users appropriately?”, “Was there any cleanup to do?”, “Did we have all the tools available or did we have to improvise?”, “Did we have enough visibility?”. And if the outage continued over a longer period of time “Was there troubleshooting fatigue?”, “Did we do a good handoff?”. Some of those questions will almost always yield the answer “No, and we should do something about it”. Alerting rules, for example, always have room for improvement. 

The Postmortem Playbook 

[Jane] When I started to be on call, I had a lot of questions – especially once the adrenaline rush of an incident was over and the thought of the mounting paperwork set in. Even worse, now I got to pick apart everything I’d done for an audience. Once an incident happened, I asked my manager and teammates a lot of questions. How does one really facilitate a good retrospective? What exactly does it look like?

What template do you use? 

I usually use some form of a template here. How much of it I keep depends on the tolerance for paperwork in the company or team. At a minimum, I keep the intro pieces, timeline, customer impact, and action items.
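To make that minimum concrete, here is one possible sketch of the skeleton as data, with a completeness check before publishing. The field names are hypothetical – adapt them to whatever template your team actually uses:

```python
# A minimal postmortem template skeleton. Field names are hypothetical;
# the sections match the minimum kept above.
POSTMORTEM_TEMPLATE = {
    "title": "",            # e.g. "Post Mortem for Incident 2019 Jan 1 HelloWorld Outage"
    "summary": "",          # intro: what happened, for how long
    "customer_impact": "",  # who was affected, how badly, for how long
    "timeline": [],         # (UTC timestamp, event) pairs, agreed on in review
    "action_items": [],     # (description, owner, priority) tuples
}

def is_publishable(pm: dict) -> bool:
    """True only if the minimum sections are filled in."""
    required = ("summary", "customer_impact", "timeline", "action_items")
    return all(pm[key] for key in required)
```

A fresh copy of the template fails the check; it passes only once every minimum section has content – a cheap guard against publishing a half-finished writeup.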

How do you start? What words do you choose? 

Here is exactly what to say at the beginning to set the expectations and rules of engagement: 

“Hi All. Thank you for coming. We’re here to have a post mortem on <Title>. This is a blameless retrospective. This isn’t a meeting to assign blame and find the next scapegoat. We want to learn something. That means we aren’t focused on what we could’ve/should’ve/would’ve done. We use neutral language instead of inflammatory language. We focus on facts instead of emotions, intent, or neglect. All follow up action items will be assigned to a team/individual before the end of the meeting. If the item is not going to be top priority leaving the meeting, don’t make it a follow up item. We want to capture both things we need to change, and what new genius ways we’ve stumbled upon. We even want to capture where we’ve been lucky. Our agenda is to understand these working agreements, customer impact, focus on the timeline, contributing factors to failure, and action items. Everyone is expected to update documentation and participate. We value transparency, and this will be published, without individual names of course. Let’s get started….”

What does your meeting invite look like? 

Title: “Post Mortem for Incident 2019 Jan 1 at 7 UTC” or “Post Mortem for Incident 2019 Jan 1 HelloWorld Outage”

What’s in the body of the message? 


Let’s have a phone call on the retrospective related to the <Incident Title used in Subject>. 

Please forward as you see appropriate.  

Prep work should be added and filled out before the start of the retrospective [here|link] 

  • Read through the post mortem 
  • Please help add timeline details and events. Sources for timeline artifacts may be phone calls, email, texts, chats, alerts, support desk tickets, etc., converted to UTC 
  • Proposed action items to take 

1. This is a blameless retrospective. 

2. We will not focus on past events as they pertain to “could’ve”, “should’ve”, etc. 

3. All follow up action items will be assigned to a team/individual before the end of the meeting. If the item is not going to be top priority leaving the meeting, don’t make it a follow up item. 

<Information for conference bridge> 

When is it scheduled? 

Within 2-3 business days of the end of the incident. 

What prework/homework do I do? 

As the person who was on call, immediately capture everything: bridge logs, call/page records, alert detection times, escalation times, actions taken and the time of each action, system logs, chat logs, etc., and put them in a timeline. There may be some time conversions to a standard date and time format for your timeline. Put it all in UTC as a standard. Not all information is relevant, but it’s useful to have if called upon to add to the timeline.
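Normalizing those scattered timestamps into one UTC timeline is mostly mechanical. Here is a minimal sketch (the events and offsets are invented for illustration):

```python
from datetime import datetime, timezone

# Hypothetical raw timeline entries gathered from different sources,
# each stamped in its own local offset.
raw_events = [
    ("2019-01-01 07:04:12 -0800", "PagerDuty alert fired"),
    ("2019-01-01 10:06:30 -0500", "On-call engineer acknowledged"),
    ("2019-01-01 15:42:00 +0000", "Rollback completed"),
]

def to_utc(stamp: str) -> datetime:
    """Parse a 'YYYY-MM-DD HH:MM:SS +ZZZZ' stamp and normalize it to UTC."""
    return datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S %z").astimezone(timezone.utc)

# Build a single chronological timeline in UTC.
timeline = sorted((to_utc(ts), event) for ts, event in raw_events)

for when, event in timeline:
    print(when.strftime("%Y-%m-%d %H:%M:%S UTC"), "-", event)
```

In practice every source (chat, paging tool, syslog) has its own format, so you would add one parser per source, but the principle stays the same: convert once, then sort.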

What are the facilitator’s objectives?

  • Set expectations of blameless retrospective. 
  • Talk about impact to customers/partners. 
  • Present timeline and walk through it. 
  • Get agreement on timeline. 
  • Talk about what went well. 
  • Get agreement on action items. 
  • Assign action items to people/teams.  
  • Keep the playground fair. Do not allow a blame/shame statement to stand. 

What’s the follow up for the facilitator? 

Publish the report per your company’s policies, choosing maximum vs. least privilege as the context warrants.

Send report and items to customers. 

Make sure it’s logged in the post mortem log history. Do not create a blamebase. 

Update the report with links to features/bugs/user stories for traceability and transparency.

What Makes For A Good Action Item?  

Action items are how you complete the loop – how you give a postmortem teeth, so to speak.  

Interestingly, Etsy finds it usually comes down to making more and higher-quality information available to those on the scene via metrics, logging, dashboarding, documentation, and error alerts – i.e., building a better guardrail:

There is no need (and almost certainly no time) to go into specifics here. But it should be clear what is worthy of a remediation item and noted as such. Another area that can almost always use some improvement is metrics reporting and documentation. During an outage there was almost certainly someone digging through a log file or introspecting a process on a server who found a very helpful piece of information. Logically, in subsequent incidents this information should be as visible and accessible as possible. So it’s not rare that we end up with a new graph or a new saved search in our log aggregation tool that makes it easier to find that information next time. Once easily accessible, it becomes a resource so anyone can either find out how to fix the same situation or eliminate it as a contributing factor to the current outage. 

…this is not about an actor who needs better training, it’s about establishing guardrails through critical knowledge sharing. If we are advocating that people just need better training, we are again putting the onus on the human to just have to know better next time instead of providing helpful tooling to give better information about the situation. By making information accessible the human actor can make informed decisions about what actions to take. 

Ben Treynor, the founder of SRE, said the following: 

A postmortem without subsequent action is indistinguishable from no postmortem. Therefore, all postmortems which follow a user-affecting outage must have at least one P[01] bug associated with them. I personally review exceptions. There are very few exceptions. 

Vague or massive bowling-ball-sized to-dos are to be avoided at all costs; these are often worse than no action item at all. Google and Etsy are both very careful to make sure that action items follow the SMART criteria – specific, measurable, achievable, relevant, and time-bound. In fact, Google has a rule of thumb that any remediation action item should be completed in 30 days or less; if action items linger past that, they’re revisited and either rewritten, reprioritized, or dropped.
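That 30-day rule of thumb is easy to automate against whatever tracker you use. A minimal sketch, with invented action-item records:

```python
from datetime import date

# Hypothetical action-item records: (description, owner, opened_on).
action_items = [
    ("Add an alert for mailer job failures", "ops-team", date(2019, 1, 2)),
    ("Rewrite the batch mailer pipeline", "platform", date(2018, 10, 1)),
]

def stale_items(items, today, max_age_days=30):
    """Items open longer than the 30-day rule of thumb -- candidates
    to be rewritten, reprioritized, or dropped."""
    return [item for item in items if (today - item[2]).days > max_age_days]

for desc, owner, opened in stale_items(action_items, today=date(2019, 1, 15)):
    print(f"STALE: {desc} (owner {owner}, opened {opened})")
```

Run against the real tracker, a report like this gives the review meeting a concrete list to rewrite, reprioritize, or drop.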

Completing the Loop 

Once the report is written up and finalized – and available to all other incident responders for learning – you’re not quite done yet. Google, for example, tells a story where an engineer who caused a high-impact incident was commended and even given a small cash reward for quick mitigation:

Google’s founders Larry Page and Sergey Brin host TGIF, a weekly all-hands held live at our headquarters in Mountain View, California, and broadcast to Google offices around the world. A 2014 TGIF focused on “The Art of the Postmortem,” which featured SRE discussion of high-impact incidents. One SRE discussed a release he had recently pushed; despite thorough testing, an unexpected interaction inadvertently took down a critical service for four minutes. The incident only lasted four minutes because the SRE had the presence of mind to roll back the change immediately, averting a much longer and larger-scale outage. Not only did this engineer receive two peer bonuses immediately afterward in recognition of his quick and level-headed handling of the incident, but he also received a huge round of applause from the TGIF audience, which included the company’s founders and an audience of Googlers numbering in the thousands. In addition to such a visible forum, Google has an array of internal social networks that drive peer praise toward well-written postmortems and exceptional incident handling. This is one example of many where recognition of these contributions comes from peers, CEOs, and everyone in between.

We’ve seen a couple great examples of companies using the incident report and postmortem process to help with their DR role playing exercises, sharing incident writeups in a monthly newsletter or for group discussions. But visibly rewarding people for doing the right thing – as Google handled the situation above – is about as far as you can get from the “rub the puppy’s nose in it” antipattern. We think you’ll create a safer organization when you foster a postmortem process that encourages sharing information and understanding context – versus naming, shaming, and blaming.  


Jane Miceli 


Today, I am a Cloud Enterprise Architect at Micron Technology. Before Micron, I most recently led a Cloud SRE team at HP Inc. I’ve got 17 years’ experience working at companies like Rockwell Automation, HP, Bodybuilding.com, Sensus (now Xylem), Silverback Learning Solutions, and now Micron. The earliest experience I had at a company using the cloud was in 2010. In the 9 years since, I’ve had a lot of failures along the way. I talk about them so others don’t repeat them and hopefully make new ones to share with me. The only ones I consider failures are the times I’ve run into the same situation and didn’t change my behavior. I endeavor to always find new ways to fail.

Dave Harrison 

I’m a Senior Application Development Manager (ADM) working for Microsoft Premier. As a development lead and project manager, I’ve spearheaded cultural revolutions in several large retail and insurance organizations making the leap to Agile and Continuous Delivery. An enthusiastic promoter of Azure DevOps, Chef, Puppet, Ansible, Docker, and other tools, I believe very firmly that, as with Agile, the exact tool selected is less important than having the people and processes in place and ready. On a personal note, I’m the proud father of two beautiful girls, have been married to my lovely wife Jennifer for 24 years, and am based out of Portland, Oregon, USA. I enjoy fishing, reading history books, and in my spare time often wonder if I should be doing more around the house versus goofing off. I’m on LinkedIn, post to my blog semi-frequently, and – thanks to Jane! – am belatedly on Twitter too…



  1. Resilience Engineering, Hollnagel, Woods, Dekker, and Cook
  2. Erik Hollnagel’s talk, “On How (Not) To Learn From Accidents”
  3. Sidney Dekker, The Field Guide to Understanding Human Error
  4. Etsy’s Morgue postmortem software tool
  5. “Practical Postmortems at Etsy”, Daniel Schauenberg
  6. John Allspaw, “Blameless PostMortems and a Just Culture”
  7. Chapter 15, “Postmortem Culture: Learning From Failure”, Google SRE book. The discussions on hindsight and outcome bias are particularly valuable.
  8. Great postmortem example (I love the detailed timeline.)
  9. Sample (bogus!) postmortem entry. Note the sections on Lessons Learned: what went well, what went wrong, where we got lucky. There’s an extensive timeline and a link to supporting info (i.e. the monitoring dashboard). Impact, summary, root causes, trigger, resolution, detection. And then a list of action items and their status.




DevOps Practices – Part 1, Spotify.

Got 10 minutes?

We’re celebrating the upcoming launch of our book by putting out a series of videos covering that thorniest of issues – culture. There’s a lot to be learned from the companies that have been able to make DevOps work.

For example, take Spotify. They’ve been able to instill a risk-friendly environment, centered around the concept of autonomous teams called squads. (There’s also tribes and guilds, but that’s another story!)




DevOps Stories – Interview with Nigel Kersten of Puppet

Nigel came to Puppet from Google HQ in Mountain View, where he was responsible for the design and implementation of one of the largest Puppet deployments in the world. At Puppet, Nigel was responsible for the development of the initial versions of Puppet Enterprise and has since served in a variety of roles, including head of product, CTO, and CIO. He’s currently the VP of Ecosystem Engineering at Puppet. He has been deeply involved in Puppet’s DevOps initiatives, and regularly speaks around the world about the adoption of DevOps in the enterprise and IT organizational transformation.

Note – these and other interviews and case studies will form the backbone of our upcoming book “Achieving DevOps” from Apress, due out in mid 2019 and available now for pre-order!

The Deep End of the Pool

I grew up in Australia; I was lucky enough to be one of those kids that got a computer. It turns out that people would pay me to do stuff with them! So I ended up doing just that – and found myself at a local college, managing large fleets of Macs and handling a lot of multimedia and audio needs there. Very early in my career, I found hundreds of people – students and staff – very dependent on me to be The Man, to fix their problems. And I loved being the hero – there’s such a dopamine hit, a real rush! The late nights, the miracle saves – I couldn’t get enough.

Then the strangest thing happened – I started realizing there was more to life than work. I started getting very serious about music, to the point where I was performing. And I was trying a startup with a friend on the side. So, for a year or two, work became – for the first time – just work. Suddenly I didn’t want to spend my life on call, 24 hours a day – I had better things to do! I started killing off all my manual work around infrastructure and operations, replacing it with automation and scripts.

That led me to Google, where I worked for about five years. I thought I was a scripting and infrastructure ninja – but I got torn to shreds by the Site Reliability Engineers there. It was a powerful learning experience for me – I grew in ways I couldn’t have anywhere else. For starters, it was the deep end of the pool. We had a team of four managing 80,000 machines. And these weren’t servers in a webfarm – these were roaming laptops, suddenly appearing on strange networks, getting infected with malware, suffering from unreliable network connections. So we had to automate – we had no choice about it. As an Ops person, this was a huge leap forward for me – it forced me to sink or swim, really learn under fire.

Then I left for Puppet – I think I was employee #13 there – now we’re at almost 500 and growing. I’m the Chief Technical Strategist, but that’s still very much a working title – I run engineering and product teams, and handle a lot of our community evangelism and architectural vision. Really though it all comes down to trying to set our customers up for success.

Impoverished Communication

I don’t think our biggest challenge is ever technical – it’s much more fundamental than that, and it comes down to communication. There’s often a real disconnect between what executives think is true – what they are presenting at conferences and in papers – and what is actually happening on the ground. There’s a very famous paper from the Harvard Business Review back in the ’70s that said that communication is like water. Communication downwards is rarely a problem, and it works much better than most managers realize. However, open and honest communication up the chain is hard, like trying to pump water up a hill. It gets filtered or spun, as people report upwards what their manager wants to believe or what will reflect well on them – and next thing you know you have an upper management layer that thinks it is well informed but really is in an echo chamber. Just for example, take the Challenger shuttle disaster – technical data that clearly showed problems ahead of the explosion was filtered out, glossed over, and made more optimistic for senior management consumption.

We see some enterprises out there struggling and it becomes this very negative mindset – “oh, the enterprise is slow, they make bad decisions, they’re not cutting edge.” And of course that’s just not true, in most cases. These are usually good people, very smart people, stuck in processes or environments where it’s difficult to do things the right way. Just for example, I was talking recently to some very bright engineers trying to implement change management, but they were completely stuck. This is a company that is about 100,000 people – for every action, they had to go outside their department to get work done. So piecemeal work was killing them – death by a thousand cuts.

Where To Start

In most larger enterprises, aiming for complete automation, end to end, is somewhat of a pipe dream – just because these companies have so many groups and silos and dependencies. But that’s not to say that DevOps is impossible, even in shared services type orgs. This isn’t nuclear science, it’s like learning to play the piano. It doesn’t require brilliance, it’s not art – it’s just hard work. It just takes discipline and practice, daily practice.

I have the strong impression that many companies out there SAY they are doing DevOps, whatever that means – but really it hasn’t even gotten off the ground. They’re still at square one, analyzing and trying to come up with the right recipe or roadmap that will fit every single use case they might encounter, past, present, and future. So what’s the best way forward if you’re stuck in that position?

Well, first off, how much control do you have over your infrastructure? Do you have the ability to provision your VMs, self-service? If so you’ve got some more cards to play with. Assuming you do – you start with version control. Just pick one – ideally a system you already have. Even if it’s something ancient like Subversion – if that’s what you have, use it as your one single source of truth. Don’t try to migrate to the latest and greatest hipster VC system. You just need to be able to programmatically create and revert commits. Put all your shell scripts in there and start managing your infrastructure from there, as code.
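Nigel’s starting point – everything in version control, with programmatic commits and reverts – can be sketched in a few lines. The repo layout and script name below are illustrative, and Git simply stands in for whatever system you already have:

```shell
# A minimal sketch of "version control as the single source of truth".
# The script name and commit messages are invented for illustration.
cd "$(mktemp -d)"
git init -q infra && cd infra
git config user.email "ops@example.com"   # so commits work anywhere
git config user.name  "Ops"

# Import an existing shell script and manage it as code from here on.
printf '#!/bin/sh\necho provisioning web tier\n' > provision_web.sh
git add provision_web.sh
git commit -qm "Import provisioning script as code"

# The key property: changes and rollbacks are programmatic, not manual.
echo "echo 'enable new tuning flag'" >> provision_web.sh
git commit -aqm "Add tuning flag"
git revert --no-edit HEAD    # roll the change back cleanly, as a commit
```

The point isn’t the tooling – it’s that every infrastructure change now has an auditable history and a one-command undo.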

Now you’ve got your artifacts in version control and you’re using it as a single repository, right? Great – then talk to the people running deployments on your team. What’s the most painful thing about releases? Make a list of these items, pick one, and try to automate it. And always prioritize building blocks that can be consumed elsewhere. For example, don’t attempt to start by picking a snowflake production webserver and trying to automate EVERYTHING about it – you’ll just end up with a monolith of infrastructure code you can’t reuse elsewhere, and your quality needle won’t budge. No, instead you’d want to take something simple and common and create a building block out of it.

For example, time synchronization – it’s shocking, once you talk to Operations people, how something as simple and obvious as a timestamp difference between servers can cause major issues – forcing a rollback due to cascading failures, or a troubleshooting crunch because the clocks on two servers drifted out of sync and broke your database replication. That’s literally fixed in Linux by installing a single package and config. But think about the reward you’ll get in terms of quality and stability with this very unglamorous but fundamental little shift.
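The “single package and config” fix mentioned here maps, on most Linux distros, to an NTP client such as chrony. The package manager, file paths, and pool servers below are common defaults rather than anything specified in the text – a hedged sketch, and one that needs root to run:

```shell
# Hedged sketch: clock synchronization with chrony (requires root).
# Package names and config paths vary by distro; pool.ntp.org is a
# public default, not an endorsement of any particular time source.
apt-get install -y chrony        # Debian/Ubuntu; RHEL: yum install -y chrony

cat > /etc/chrony/chrony.conf <<'EOF'
pool pool.ntp.org iburst   # sync against the public NTP pool
makestep 1.0 3             # step the clock if it is badly off at startup
rtcsync                    # keep the hardware clock in line too
EOF

systemctl restart chrony
chronyc tracking           # verify the offset is within tolerance
```

One package, a three-line config, and the whole class of “replication broke because two clocks disagreed” incidents largely goes away.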

Take that list and work on what’s causing pain for your on-call people, what’s causing your deployments to break. The more you can automate this, the better. And make it as self-service as possible – instead of having the devs fire off an email to you, where you create a ticket, then provision test environments – all those manual chokepoints – wouldn’t it be better to have the devs have the ability to call an API or click on a website button and get a test environment spun up automatically that’s set up just like production? That’s a force multiplier in terms of improving your quality right at the get-go.
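The self-service flow described here is usually a thin wrapper around whatever provisioning API your platform exposes. Everything below – the endpoint, token variable, and payload fields – is invented for illustration; no such API is named in the interview:

```shell
# Hypothetical sketch: one API call replaces the email -> ticket ->
# manual-provisioning chain. Endpoint, auth, and fields are all invented.
request_test_env() {
  template="$1"   # e.g. "prod-web-v3": a template mirroring production
  curl -fsS -X POST "https://provision.example.com/api/v1/environments" \
       -H "Authorization: Bearer $PROVISION_TOKEN" \
       -H "Content-Type: application/json" \
       -d "{\"template\": \"$template\", \"ttl_hours\": 24}"
}

# Devs call this (or click a button wired to the same API) and get a
# production-like test environment back, with no ticket queue involved:
# request_test_env prod-web-v3
```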

Now you’ve got version control, you can provision from code, you can roll out changes and roll them back. Maybe you add in inventory and discoverability of what’s actually running in your infrastructure. It’s amazing how few organizations really have a handle on what’s actually running, holistically. But as you go, you identify some goals and work out the practices you want to implement – then choose the software tool that seems the best fit.

Continuous Delivery Is The Finish Line

The end goal though is always the same. Your target is to get as close as you can to Continuous Integration / Continuous Delivery. Aiming for continuous delivery is the most productive single thing an enterprise can do, pure and simple. There are tools around this – obviously, working for Puppet, I have my personal bias as to what’s best. But pick one, after some thought – and play with it. Start growing out your testing skills, so you can trust your release gates.

With COTS products you can’t always adopt all of these practices – but you can get pretty close, even with big-splash, multi-GB releases. For example, you can use deployment slots and script as much as you can. Yes, there’s going to be some manual steps – but the more you can automate even this, the happier you’ll be.

Over time, kind of naturally, you’ll see a set of teams appear that are using CI/CD and automation, and the company can point to these as success stories. That’s when an executive sponsor can step in and set this as a mandate, top down. But just about every DevOps success story we’ve seen goes through this pioneering phase where they’re trying things out squad by squad and experimenting – that’s a good thing. You can’t skip this, any more than a caterpillar can go straight to being a butterfly.

DevOps Teams

At first I really hated the whole DevOps Team concept – and in the long term, it doesn’t make sense. It’s actually a common failure point – a senior manager starts holding this “A” team up as an example. This creates a whole legion of haters and enemies, people working with traditional systems who haven’t been given the opportunity to change like the cool kids – the guys always off at conferences, running stuff in the cloud, blah blah. But in the short term it totally has its place. You need to attach yourself to symbols that make it clear you’re trying to change. If you try to boil the ocean or spin it out with dozens of teams, it gets diluted, your risk rises, and the effort could lose credibility. Word of mouth needs to be in your favor, kind of like band t-shirts for teenagers. So you can start with a small group initially for your experiments – just don’t let it stay that way too long.

But what if you DON’T have that self-provisioning authority? Well, there are ways around that as well. You see departments doing things like capacity planning and reserving large pools of machines ahead of time. That’s obviously suboptimal, and it’s disappearing now that more people are seeing what a powerful game-changer the cloud and self-provisioned environments are. The point is – very rarely are we completely shackled and constrained when it comes to infrastructure.

Automation and Paying Off Technical Debt

It’s all too easy to get bogged down in minutiae when it comes to automation. I said earlier that DevOps isn’t art, it’s just hard work – and that’s true. But focus that hard work on the things that really matter. Your responsibility is to guard your time and that of the people around you. If you’re not careful, you’ll end up replacing an infinite backlog of manual work with an infinite number of tasks you need to automate. That’s really demoralizing, and it hasn’t made your life that much better!

Let’s take the example of a classic three-tier web app you have on-prem. You’ve sunk a lot of time into it, so that now it fails every six months instead of every week – terrific! But for that next step – instead of trying to automate it completely end to end, which you could do – how could you change it so that it’s more service-oriented, more loosely coupled, so your maintenance drops even more and changes are less risky? Maybe building part of it as a microservice, or putting up that classic Martin Fowler strangler fig, will give you a dramatic payoff you would never get by grinding out automation for the sake of automation and never asking if there’s a better way.

Paying off technical debt is a grind, just like paying off your credit card and paying off the mortgage. Of course you need to do that – but it shouldn’t be all you do! Maybe you’ll take some money and sink it into an investment somewhere, and get that big boost to your bottom line. So instead of mindlessly just paying off your technical debt, realize you have options – some great investment areas open to you – that you can invest part of your effort in.

Optimism Bias and Culture

This brings us right back to where we started: communication. There is a fundamental blind spot in a lot of books and presentations I see on DevOps, and it has to do with our optimism bias. DevOps started out as a grassroots, community-driven movement – led and championed by passionate people who really care about what they’re doing and why they’re doing it. Pioneers like this are a small subset of the community, though – too often we assume ‘everyone is just like us’! What about the category a lot of people fall into – the ones who just want to show up, do their job, and then go home? If we come to them with this crusade for efficiency and productivity, it just won’t resonate with the 9 to 5 crowd. They like the job they have – they do a lot of manual changes, true, but they know how to do it, it guarantees a steady flow of work and therefore income, and any kind of change will not be viewed as an improvement – no matter how you try to sell it. You could call this “bad”, or just realize that not everyone is motivated by the same things or thinks the same way. In your approach, you may have to mix a little pragmatism in with that starry-eyed DevOps idealism – think of different ways to reach them, work around them, or wait for a strong management drive to collapse this kind of resistance.

DevOps Stories – Interview with John Weers of Micron

John Weers is Senior Manager of DevOps and Software Quality at Micron. He works to build highly capable teams that trust each other, build high quality software, deliver value with each sprint and realize there’s more to life than work.


Kickstarting a DevOps Culture

Some initial background – I lead a team of passionate DevOps engineers and managers who are tasked with making our DevOps transformation work. While our group is only officially about five months old, we’ve all been working this separately for quite a while.

About every two weeks we have a group of about 15 DevOps experts that get together and talk – we call them the “design team”.  That’s a critical touch point for us – we identify some problems in the organization, talk about what might be the best practice for them, and then use that as a base in making recommendations. So that’s how we set up a common direction and coordinate; but we each speak for and report to a different piece of the org. That’s a very good thing – I’d be worried if we were a separate group of architects, because then we’d get tuned out as “those DevOps guys”. It’s a different thing altogether if a recommendation is coming from someone working for the same person you do!

We’ve made huge strides when it comes to being more of a learning-type organization – which means, are we risk-friendly, do we favor experimentation? When there’s a problem, we’re starting to focus less on root cause and ‘how do we prevent this disaster from happening again’ – and more on, what did we learn from this? I see teams out there trying new things, experimenting with a new tool for automation – and senior management has responded favorably.

Our movement didn’t kick off with a bang. About 5 years ago, we came to the realization that our quality in my area of IT was poor. We knew quality was important, but didn’t understand how to improve it. Some of the software we were deploying was overly complex and buggy. In another area, the issue wasn’t quality but time – the manual test cycle was too long, we’re talking weeks for any release.

You can tell we’re making progress by listening to people’s conversations – it’s no longer about testing dates or coverage percentages or how many bugs we found this month, but “how soon can we get this into production?” – most of the fear of a buggy release is gone as we’ve moved up that quality curve. But it has been a gradual thing. I talked to everyone I could think of at conferences about their experiences with DevOps. It took a lot of trial and error to find out what works with our organization. No one that I know of has hit on the magical formula right off the bat; it takes patience and a lot of experimentation.

Start With Testing

Our first effort was to target testing – automated testing, in our case using HP’s UFT and Quality Center platform. But there never was an all-hands-on-deck call to “Do DevOps!” – that did happen, but it came two years later. We had to lay the groundwork by focusing first on quality, specifically testing.

We’re five years along now and we are making progress, but don’t kid yourself that growth or a change in mindset happens overnight. Just the phrase “Shift Left” for example – we did shift our quality work earlier in the development process by moving to unit testing and away from UI/Regression testing. We found that it decreased our bugs in production by a very significant amount.

We went through a few phases – one where we had a small army of contractors doing test automation and regression testing against the UI layer. Quality didn’t improve, because of the he-said/she-said type interactions between the developers and QA teams in their different siloes. We tried to address interactions between different applications and systems with integration testing, and again found little value. The software was just too complex. Then we reached a point where we realized the whole dynamic needed to be rethought.

So, we broke up the QA org in its entirety, and assigned QA testers on each of our agile teams and said – you guys will sink or swim as a team. Our success with regression testing went up dramatically, once we could write tests along with the software as it was being developed.  Once a team is accountable for their quality, they find a way of making it happen.

We got resistance and pushback from the developers, which was a little surprising. When we first started requiring developers to write unit tests along with their code, there were a lot of complaints that it wasn’t a “value added” activity. But we knew this was something that was necessary – without unit tests, by the time we knew there was a problem in integration or functional testing, it would often be too late to fix it before it went out the door.

So, we held the line and now those teams that have a comprehensive unit testing suite are seeing very few errors being released to production.  At this point, those teams won’t give up unit testing because it’s so valuable to them.

“Shift Left” doesn’t mean throwing out all your integration and regression testing. You still need to do a little testing to make sure the user experience isn’t broken. “Shift Left” means test earlier in the process, but in my mind it also means that “our team” owns our quality.

Culture and Energy are the Limiting Points

If you want to “Do DevOps” as a solo individual, you’ll fail.   You need other experts around you to share the load and provide ideas and help.  A group is stronger than any individual.

Can I say – the tool is not the problem, ever? It’s always culture and energy. What I seem to find is, we can make progress in any area that I or another DevOps expert can personally inject some energy into. If I’m visible, if I talk to people, if I can build a compelling storyline – we make rapid progress. Without it, we don’t. It’s almost like starting a fire – you can’t just crumple up some newspaper, dump some kindling on it, light a match and walk away. You’ve got to tend it, constantly add material or blow on it to get something going.

We’re spread very thin; energy and time are limited, and without injecting energy things just don’t happen. That’s a very common story – it’s not that we’re lazy, or bad, or stupid – we work very hard, but there’s so much work to be done we can’t spare the cycles to look at how we’re going about things. Sometimes, you need an outside perspective to provide that new idea, or show a different way.

Lead By Listening

One of the base principles of DevOps is to find your area of pain and devote cycles to automating it. That removes a lot of waste – human defects and errors when you’re running a deployment. But that doesn’t resonate when I work with a team that’s new to DevOps. I don’t walk in there with a stone tablet of commandments, “here’s what you should do to do DevOps”. That’s a huge turn-off.

Instead, I start by listening. I talk to each team and ask them how they go about their work, what they do, how they do it. Once we find out how things are working, we can also identify some problems – then we can come in and talk about how automation can address that problem in a way that’s specific to that team, how DevOps can make their world better. They see a better future and they can go after it.

Tools as an Incentive

I just said the tool isn’t the problem, but that doesn’t mean it’s not a critical part of the solution. I’m a techie at heart and I like a shiny new tool just as much as the next person. You can use tools as incentives to get new changes rolling. It’s a tough sell to walk into a meeting and pitch unit testing as a cure for quality issues if the tests take a long time to write. But if we talk about using Visual Studio Enterprise, and how it makes unit tests simple and can run them in real time, now it becomes easier to do unit testing than to test the old way. If we can show how these tools can shrink testing from a week to an afterthought, now we have your attention!

About a year ago, our CIO set a mandate for the entire organization to excel at both DevOps and Agile. But the architecture wasn’t defined, and no tools were specified. Which is terrific – DevOps and Agile are just ways of improving what we can do for the business. We now see different teams having different tech stacks and some variation in the tools based on what their pain point is and what their customers are needing. As a rule, we encourage alignment where it makes sense around either a technology stack or a common leader. That provides enough alignment that teams can learn from each other and yet look for better ways of solving their issues.

The rule is that each main group in IT should favor a toolchain, but should choose software architecture that fits their business needs.  In one area, for example, the focus is on getting changes into production as fast as possible. This is the cutting edge of the blade, so automation and fast turnaround cycles are everything. For them, microservices are a terrific option and the way that their development happens – it fits the business outcomes they want.

Do You Need the Cloud?

People will tell you that DevOps means the cloud; that you can’t do it without rapid provisioning, which means scalable architecture and massive cloud-based datacenters. But we’re almost 100% on-prem. For us, we need to keep our software, especially R&D, privately hosted. That hasn’t slowed us down much. It would certainly be more convenient to have cloud-based data centers and rapid provisioning, but it’s not required by any means.

Metrics We Care About

We focus on two things – lead time (or cycle time in the industry) and production impact. We want to know the impact in terms of lost opportunity – when the fab slows down or stops because of a change or problem. That resonates very well with management, it’s something everyone can understand.
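As a concrete, entirely invented illustration of the lead-time half of this, the sketch below computes days from commit to production deploy from a tiny made-up log. The log format is hypothetical, and `date -d` assumes GNU coreutils:

```shell
# Hypothetical log format: change id, commit date, production deploy date.
cat > changes.log <<'EOF'
change-101 2019-03-01 2019-03-08
change-102 2019-03-02 2019-03-05
EOF

# Lead time per change, in whole days (GNU date assumed for -d).
awk '{
  cmd = "date -d " $2 " +%s"; cmd | getline start; close(cmd)
  cmd = "date -d " $3 " +%s"; cmd | getline end;   close(cmd)
  printf "%s: %d days from commit to production\n", $1, (end - start) / 86400
}' changes.log
```

With the sample data this reports 7 and 3 days respectively – the kind of per-change number you can trend over time, rather than argue about.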

But I tell people to be careful about metrics. It’s easy to fall in love with a metric and push it to the point of absurdity! I’ve done this several times. We’ve dabbled in tracking defects, bug counts, code coverage, volume of unit testing, number of regression tests – and all of them have a dark side, a poor behavior they encourage. Just for example, let’s say we are tracking and displaying the volume of regression tests. Suddenly, rather than creating a single test that makes sense, you start to see tests getting chopped up into dozens of one-step tests so the team can hit a volume metric. With bug counts – developers can classify bugs as misunderstood requirements rather than admitting something was an actual bug. When we went after code coverage, one developer wrote a unit test that would bring an entire module of code under test and ran that as one gigantic block to hit their numbers.

We’ve decided to keep it simple – we’re only going to track these two things, cycle time and production impact, and the teams can talk individually in their retrospectives about how good or bad their quality really is. The team level is also where we can make the most impact on quality.

I’ve learned a lot about metrics over the years from Bob Lewis’ IS Survivor columns.  Chief among those lessons is to be very, very careful about the conversation you have with every metric.  You should determine what success looks like, and then generate a metric that gives you a view of how your team is working.  All subsequent conversations should be around “if we’re being successful” and not “are we achieving the metric.”   The worst thing that can happen is that I got what I measured.

PMO Resistance

Sometimes we see some resistance from the BSA/PM layer. That’s usually because we’re leading with our left foot – the right way is to talk about outcomes. What if we could get code out the door faster, with a happier team, with less time testing, with fewer bugs? When we lead with the desired outcome, that middle layer doesn’t resist, because we’re proposing changes that will make their lives easier.

I can’t stress this enough – focus on the business outcomes you’re looking for and eliminate everything else. Only pursue a change if the outcome fits one of those business needs.

When we started this quality initiative, our release cycle initially averaged – I wish I were exaggerating – about 300 days. We would invest a huge amount of testing at every site before we would deploy. Today, we have teams with cycle times under 10 days. But that speed couldn’t happen unless our quality had gone up. We had to beef up our communication loop with the fab so that if there was a problem we could stop it before it got replicated.

The Role of Communication

You can’t overstate the importance of credibility. As we create less and less impact with the changes we deploy, our relationship with our customers in the business gets better and better. Just for example, three years ago we had just gone through a disastrous communication tool patch that had grounded an entire site for hours. We worked through the problems internally, and then a year later I came to a plant IT director, told them we thought the quality issues were taken care of, and enlisted their help.

Our next deployment required 5 minutes of downtime and had limited sporadic impact.  And that’s been the last real impact we’ve had during software deployment for this tool in almost 3 years – now our deployments are automated and invisible to our users. Slowly building up that credibility and a good reputation for caring about the people you’re impacting downstream has been a big part of our effort.

Cross-Functional Teams

It’s commonly accepted that for DevOps to work you must be cross-functional. We are like many other companies in that we use a Shared Services model – we have several agile teams that include development, QA roles, an infrastructure team, and Operations which handles trouble tickets from the sites – each with their own leader. This might be a pain point in many companies, but for us it’s just how we work. We’ve learned to collaborate and share the pain so that we’re not throwing work over the fence. It’s not always perfect, but it’s very workable.

For example, in my area every week we have a recap meeting which Ops leads, where they talk about what’s been happening in production and work out solutions with the dev managers in the room. In this way the teams work together and feel each other’s pain. We’re being successful and we haven’t had to break up the company into fully cross-functional groups.

Purists might object to this – we haven’t combined Development and Operations, so can we really say that we are “doing DevOps”? If it would help us drive better business outcomes, that org reshuffling would have happened. But for us, since the focus is on business outcomes, not on who we report to, our cross-team collaboration is good and getting better every day. We’re all talking the same language, and we didn’t have to reshuffle. We’re all one team. The point is to focus on the business outcomes; if you need to reorg, it will become apparent when teams talk about their pain points.

If It Comes Easy, It Doesn’t Stick

Circling back to energy – sometimes I sit in my office and wish that culture was easier to change. It’d be so great if there was a single metric we could align on, or a magical technique where I could flip a switch and everyone would get it and catch fire with enthusiasm. Unfortunately, that silver bullet doesn’t exist.

Sometimes I listen to Dave Ramsey on my way in to work – he talks about changing the family tree and getting out of debt. Something he said though resonated with me – “If it comes easy, it doesn’t stick.” If DevOps came easy for us, it wouldn’t really have the impact on our organization that we need. There’s a lot of effort, thought, suffering – pain, really – to get any kind of outcome that’s worth having.

As long as you focus on the outcome, I believe DevOps is a fantastic thing for just about any organization. But, if you view it as a recipe that you need to follow, or a checklist – you’re on the wrong track already, because you’re not thinking about outcomes. If you build from an outcome that will help your business and think backwards to the best way of reaching that outcome – then DevOps is almost guaranteed to work.

Achieving DevOps – the back story

In writing the book “Achieving DevOps”, we threw away easily as many words as we ended up keeping. I wish space had allowed us to talk in more depth about waste, Mission Command, and some other principles that we could only skim over at best.

We talk about this in the book as well – but we’re deeply indebted to the bright people out there and the lasting work they’ve done. Not all of these sources were directly referenced in the book, but all of them influenced us. We didn’t have room for them all in the book, so we figure this might be a nice starting point.

In doing our research – which was something we were only able to pull away from with regret and a few sledgehammer whacks from our publisher – some books stood out as especially amazing. These I’ve put below with the book cover as an active hyperlink – you can go right to Amazon and buy from there. (We don’t get paid in any way for this. It’s just to help give back a little.)

But really, the best books I’ve already talked about in my post on “Where To Start?”

OK, on to the hotlinks:

Chapter 2 – Ratcheting Change

  • [robha] – “A Counterintuitive Strategy for Building a Daily Exercise Habit”, Rob Hardy, 7/21/2017. A great article that first got us thinking about bright lines and activation energy.
  • [bjfth] – “Tiny Habits”, BJ Fogg, Stanford University, 1/1/2018.
  • [bjthgs] – “Find a good spot in your life”, BJ Fogg. Stanford University, 1/1/2018.
  • [jclat] – “Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones”, James Clear. Avery, 10/16/2018. ISBN-10: 0735211299, ISBN-13: 978-0735211292

  • [duhigg] – “The Power of Habit: Why We Do What We Do in Life and Business”, Charles Duhigg. Random House, 1/1/2014. ISBN-10: 081298160X, ISBN-13: 978-0812981605
  • [baume] – “Willpower: Rediscovering the Greatest Human Strength”, Roy Baumeister and John Tierney. Penguin Books, 8/28/2012. ISBN-10: 0143122231, ISBN-13: 978-0143122234
  • [jclub] – “Do Things You Can Sustain”, James Clear.

Chapter 2 – Kanban

  • [hanselman] – “Maslow’s Hierarchy of Needs of Software Development”, Scott Hanselman, 1/8/2012.

  • [ferriss] – “The 4-Hour Workweek: Escape 9-5, Live Anywhere, and Join the New Rich”, Timothy Ferriss, December 2009. ISBN-13: 978-0307465351
  • [drift2] – My original writeup on Timothy Ferriss’ book –
  • [tdoh] – “The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations”, Gene Kim, Patrick Debois, John Willis, Jez Humble. IT Revolution Press, 10/6/2016. ISBN-10: 1942788002, ISBN-13: 978-1942788003
  • [forsgren] – “Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations”, Nicole Forsgren PhD, Jez Humble, Gene Kim. IT Revolution Press, 3/27/2018. ISBN-10: 1942788339, ISBN-13: 978-1942788331

Chapter 2 – Reliability First

Chapter 3 – Continuous Integration

Chapter 3 – Shift Left on Testing

  • [dora2017] – “Annual State of DevOps Report”, unattributed author(s). Puppet Labs, 2017.
  • [clean] – “Clean Code: A Handbook of Agile Software Craftsmanship”, Robert C Martin. Prentice Hall, 8/11/2008. ISBN-10: 9780132350884, ISBN-13: 978-0132350884

  • [feathers] – “Working Effectively with Legacy Code”, Michael Feathers. Prentice Hall, 10/2/2004. ISBN-13: 978-0131177055, ISBN-10: 9780131177055. A true masterpiece. Most of us are not blessed with greenfield type projects; I can’t think of many people that wouldn’t benefit greatly from reading this book and understanding how to better tame that monolith looming in the background.
  • [refactmf] – “Refactoring: Improving the Design of Existing Code”, Martin Fowler. Addison-Wesley Signature Series, 11/30/2018. ISBN-10: 0134757599, ISBN-13: 978-0134757599
  • [crisp] – “Agile Testing: A Practical Guide for Testers and Agile Teams”, Lisa Crispin, Janet Gregory. Addison-Wesley Professional, 1/9/2009. ISBN-10: 9780321534460, ISBN-13: 978-0321534460
  • [crisp2] – “More Agile Testing: Learning Journeys for the Whole Team”, Lisa Crispin, Janet Gregory. Addison-Wesley Professional, 10/16/2014. ISBN-10: 9780321967053, ISBN-13: 978-0321967053
  • [14pt] – “Dr. Deming’s 14 Points for Management”, unattributed author(s), unknown date.
  • [forsgren] – “Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations”, Nicole Forsgren PhD, Jez Humble, Gene Kim. IT Revolution Press, 3/27/2018. ISBN-10: 1942788339, ISBN-13: 978-1942788331
  • [freem] – “Growing Object-Oriented Software, Guided by Tests”, Steve Freeman, Nat Pryce. Addison-Wesley Professional, 10/22/2009. ISBN-10: 9780321503626, ISBN-13: 978-0321503626
  • [mesz] – “xUnit Test Patterns: Refactoring Test Code”, Gerard Meszaros. Addison-Wesley, 5/31/2007. ISBN-10: 9780131495050, ISBN-13: 978-0131495050. Particularly good in its discussion of dummy objects, fake objects, stubs, spies, and mocks.
  • [dbnm] – “No more excuses”, Donovan Brown, 12/12/2016. Our personal battle cry when it comes to “asking for permission” to write unit tests.
  • [cohnx] – “The Forgotten Layer of the Test Automation Pyramid”, Mike Cohn. Mountain Goat Software, 12/17/2009.
  • [williams] – “The Costs and Benefits of Pair Programming”, Alistair Cockburn, Laurie Williams, 1/1/2001.
  • [gucks] – “Moving 65,000 Microsofties to DevOps on the Public Cloud”, Sam Guckenheimer, 8/3/2017.
  • [shahxr] – “Shift Left to Make Testing Fast and Reliable”, Munil Shah. Microsoft Docs, 11/8/2017. A must-read for any serious QA devotee.
  • [shahyt] – “Combining Dev and Test in the Org”, Munil Shah. YouTube, 10/24/2017. Microsoft’s decision to move to a single engineering organization where testing and development are unified was a game-changer.
  • [fowlbu] – “UnitTest”, Martin Fowler, 5/5/2014.
  • [fowltp] – “TestPyramid”, Martin Fowler, 5/1/2012.
  • [cohn] – “Testing Pyramids & Ice-Cream Cones”, Alister Scott. Watirmelon, unknown date.
  • [nonderminism] – “Eradicating Non-Determinism in Tests”, Martin Fowler, 4/14/2011.
  • [ddt] – “Defect Driven Testing: Your Ticket Out the Door at Five O’Clock”, Jared Richardson, 8/4/2010. Note his thoughts on combating bugs, which tend to come in clusters, with what he calls ‘testing jazz’ – thinking in riffs, with dozens of tests checking an issue like invalid spaces in input.
  • [stiny] – “You Are Your Software’s Immune System!”, Matt Stine, 7/20/2010.
  • [molteni] – “Giving Up on test-first development”, Luca Molteni. iansommerville, 3/17/2016. The author found TDD unsatisfying because it encouraged conservatism, focused on detail vs structure, and didn’t catch data mismatches – which he later elaborated with other weak points, including reliance on a layered architecture, agreed upon success criteria, and a controllable operating environment. We disagree with most of his objections but agree with the cautionary note that there is no single universal engineering method that works in every and all cases.
  • [martin] – “The Three Laws of TDD”, Robert Martin, unknown date.
  • [martin3] – “When TDD doesn’t work.”, Robert Martin. The Clean Code Blog, 4/30/2014.
  • [humbleobj1] – “Refactoring code that accesses external services”, Martin Fowler, 2/17/2015. A great implementation of Humble Object and refactoring based on Bounded Contexts in this article.
  • [gruvle] – “Start and Scaling Devops in the Enterprise”, Gary Gruver. BookBaby, 12/1/2016. ISBN-10: 1483583589, ISBN-13: 978-1483583587

  • [gruv] – “Leading the Transformation: Applying Agile and DevOps Principles at Scale”, Gary Gruver, Tommy Mouser. IT Revolution Press, 8/1/2015. ISBN-10: 1942788010, ISBN-13: 978-1942788010. An in-depth exploration of how HP was able to pull itself out of the mud of long test cycles – even with a labyrinth of possible hardware combinations.

Chapter 3 – Definition of Done, Family Dinner Code Reviews

Chapter 4 – Blameless Postmortems

Chapter 4 – Hypothesis Driven Development

Chapter 4 – Value Stream Mapping

  • [teams] – “Team of Teams: New Rules of Engagement for a Complex World”, Stanley McChrystal. Portfolio, 5/12/2015. ISBN-10: 1591847486, ISBN-13: 978-1591847489. The second most influential book we read, after “The Power of Habit”. Highly recommended either printed or on Audible; it’s a fast read, and amazingly insightful.
  • [ohno] – “Toyota Production System: Beyond Large-Scale Production”, Taiichi Ohno. Productivity Press; 3/1/1988, ISBN-10: 0915299143, ISBN-13: 978-0915299140
  • [shingo] – “A Study of the Toyota Production System: From an Industrial Engineering Viewpoint (Produce What Is Needed, When It’s Needed)”, Shigeo Shingo, Andrew P. Dillon. Productivity Press; 10/1/1989. ISBN-10: 9780915299171, ISBN-13: 978-0915299171
  • [popp] – “Implementing Lean Software Development: From Concept to Cash”, Mary and Tom Poppendieck. Addison-Wesley Professional, 9/17/2006. ISBN-10: 0321437381, ISBN-13: 978-0321437389
  • [jeffmu] – “The Multitasking Myth”, Jeff Atwood. Coding Horror Blog, 9/27/2006.
  • [liker] – “The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer”, Jeffrey K. Liker, McGraw-Hill Education; 1/7/2004, ISBN-10: 0071392319, ISBN-13: 978-0071392310
  • [devcaf65] – “DevOps Cafe Episode 62 – Mary and Tom Poppendieck”, Damon Edwards, John Willis. DevOps Café, 8/16/2015.
  • [willis] – “DevOps Culture (Part 1)”, John Willis. IT Revolution, 5/1/2012. This is an extremely influential blog; I found myself turning back to it many times.

Chapter 5 – Small Cross Functional Teams

  • [tdoh] – “The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations”, Gene Kim, Patrick Debois, John Willis, Jez Humble. IT Revolution Press, 10/6/2016. ISBN-10: 1942788002, ISBN-13: 978-1942788003
  • [domenic] – “Making Work Visible: Exposing Time Theft to Optimize Work & Flow”, Dominica DeGrandis. IT Revolution Press, 11/14/2017. ISBN-10: 1942788150, ISBN-13: 978-1942788157
  • [mcchryst] – “Team of Teams: New Rules of Engagement for a Complex World”, Stanley McChrystal. Portfolio, 5/12/2015. ISBN-10: 1591847486, ISBN-13: 978-1591847489
  • [rother] – “Toyota Kata: Managing People for Improvement, Adaptiveness and Superior Results”, Mike Rother. McGraw-Hill Education, 8/4/2009. ISBN-10: 0071635238, ISBN-13: 978-0071635233

Chapter 5 – Configuration Management and Infrastructure as Code

  • [rbias] – “The History of Pets vs Cattle and How to Use the Analogy Properly”, Randy Bias, 9/29/2016.
  • [kief] – “Infrastructure as Code: Managing Servers in the Cloud”, Kief Morris. O’Reilly Media, 6/27/2016. ISBN-10: 1491924357, ISBN-13: 978-1491924358
  • [cern] – “Are your servers PETS or CATTLE?”, Simon Sharwood. The Register, 3/18/2013.
  • [guckiac] – “What is Infrastructure as Code?”, Sam Guckenheimer. Microsoft Docs, 4/3/2017.
  • [russd] – “It Takes Dev and Ops to Make DevOps”, Russ Collier, 7/26/2013.
  • [puppiac] – “Infrastructure as code”, unattributed author(s). Puppet, unknown date. – A great overview, with videos, of why IaC is so important.
  • [newm] – “Building Microservices: Designing Fine-Grained Systems”, Sam Newman. O’Reilly Media, 2/20/2015. ISBN-10: 1491950358, ISBN-13: 978-1491950357
  • [yevg] – “Terraform: Up and Running: Writing Infrastructure as Code”, Yevgeniy Brikman. O’Reilly Media, 3/27/2017. ISBN-10: 1491977086, ISBN-13: 978-1491977088
  • [sre] – “Site Reliability Engineering: How Google Runs Production Systems”, Niall Richard Murphy, Betsy Beyer, Chris Jones, Jennifer Petoff, O’Reilly Media; 4/16/2016, ISBN-10: 149192912X, ISBN-13: 978-1491929124
  • [gruvle] – “Start and Scaling Devops in the Enterprise”, Gary Gruver. BookBaby, 12/1/2016. ISBN-10: 1483583589, ISBN-13: 978-1483583587

Chapter 5 – Security As Part of the Lifecycle

Chapter 5 – Automated Jobs and Dev Production Support

  • [maun] – “Rundeck Helps Ticketmaster Reshape Operations”, unattributed author(s), 1/1/2015. – Note the strong objections by both developers and Operations (costs, risks, SOX and security compliance, straitjacketed solution sets, and loss of control). This resistance dropped on both sides as a lengthy pilot period proved that runbooks provided both simplicity and auditable, repeatable, and traceable action steps that simplified troubleshooting.
  • [pagr] – “Incident Response”, unattributed author(s). PagerDuty, unknown date. An excellent documentation hub on how to handle initial response.
  • [mulkey2] – “DevOps Cafe Episode 61 – Jody Mulkey”, John Willis, Damon Edwards. DevOps Café, 7/27/2015.
  • [newm] – “Building Microservices: Designing Fine-Grained Systems”, Sam Newman. O’Reilly Media, 2/20/2015. ISBN-10: 1491950358, ISBN-13: 978-1491950357
  • [sharma] – “The DevOps Adoption Playbook: A Guide to Adopting DevOps in a Multi-Speed IT Enterprise”, Sanjeev Sharma. Wiley, 2/28/2017. ISBN-10: 9781119308744, ISBN-13: 978-1119308744
  • [gruvle] – “Start and Scaling Devops in the Enterprise”, Gary Gruver, BookBaby, 12/1/2016. ISBN-10: 1483583589, ISBN-13: 978-1483583587
  • [sre] – “Site Reliability Engineering: How Google Runs Production Systems”, Niall Richard Murphy, Betsy Beyer, Chris Jones, Jennifer Petoff, O’Reilly Media; 4/16/2016, ISBN-10: 149192912X, ISBN-13: 978-1491929124

Chapter 6 – Metrics and Monitoring

  • [babb] – “Fly-Fishin’ Fool: The Adventures, Misadventures, and Outright Idiocies of a Compulsive Angler”, James Babb. Lyons Press; 4/1/2005. ISBN-10: 1592285937, ISBN-13: 978-1592285938
  • [theart] – “The Art of Monitoring”, James Turnbull. Amazon Digital Services LLC, 6/8/2016. ASIN: B01GU387MS. Perhaps the best overall discussion we’ve seen of monitoring and a very good, explicit implementation of the ELK stack to handle aggregation and dashboarding. See my blog post for more on this outstanding work.
  • [guckenheimer2] – “Moving 65,000 Microsofties to DevOps on the Public Cloud”, Sam Guckenheimer. Microsoft Docs, 8/3/2017.
  • [hawthorne] – “The Hawthorne effect”, Tim Hindle. The Economist, 11/3/2008.
  • [baer] – “How Changing One Habit Helped Quintuple Alcoa’s Income”, Drake Baer. Business Insider, 4/19/2014.
  • [popp4] – “DevOps Cafe Episode 62 – Mary and Tom Poppendieck”, John Willis, Damon Edwards. DevOps Café, 8/16/2015.

  • [visible] – “The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps”, Kevin Behr, Gene Kim, George Spafford. Information Technology Process Institute, 6/15/2005. ISBN-10: 0975568612, ISBN-13: 978-0975568613. We wish this short but powerful book was better known. Like “Continuous Delivery”, it’s aged well – and most of its precepts still hold true. It resonates particularly well with IT managers and Operations staff.
  • [rayg2] – “Customer focus and making production visible with Raygun”, Damian Brady. Channel9, 2/8/2018.
  • [hubbard] – “How to Measure Anything: Finding the Value of Intangibles in Business”, Douglas Hubbard. Wiley Publishing, 3/17/2014. ISBN-10: 9781118539279, ISBN-13: 978-1118539279
  • [turnbull] – “DevOps Cafe Episode 70 – James Turnbull”, John Willis, Damon Edwards. DevOps Café, 10/26/2016.
  • [cockr] – “DevOps Cafe Episode 50 – Adrian Cockcroft”, John Willis, Damon Edwards. DevOps Café, 7/22/2014. I love this interview in part for Adrian calling out teams that are stuck in analysis paralysis – and the absurdity of not giving teams self-service environment provisioning. “First I ask… are you serious?”

  • [julian] – “Practical Monitoring: Effective Strategies for the Real World”, Mike Julian. O’Reilly Media, 11/23/2017. ISBN-10: 1491957352, ISBN-13: 978-1491957356. I think this may actually be a little better than “The Art of Monitoring” – though that’s also a book we loved and found value in – just because there’s less of a narrow focus on the ELK stack.
  • [habit] – “The Power of Habit: Why We Do What We Do in Life and Business”, Charles Duhigg. Random House, 1/1/2014. ISBN-10: 081298160X, ISBN-13: 978-0812981605
  • [bejtlich] – “The Practice of Network Security Monitoring: Understanding Incident Detection and Response”, Richard Bejtlich. No Starch Press, 7/15/2013. ISBN-10: 1593275099, ISBN-13: 978-1593275099

Chapter 6 – Feature Flags and Continuous Delivery

Chapter 6 – Disaster Recovery and Gamedays

  • [dyn1] – “The Dynatrace Unbreakable Pipeline in Azure DevOps and Azure? Bam!”, Abel Wang, 8/3/2018. We would have loved to go into much more detail around self-healing CD pipelines and especially the advances made by Dynatrace. Monitoring as Code as a concept is rapidly growing in popularity; we love the application of using automated monitoring for a more viable go/no-go decision, and having monitoring (monspec) files kept in source control right next to the other infrastructure and source code of the project.
  • [dyn2] – “Unbreakable DevOps Pipeline: Shift-Left, Shift-Right & Self-Healing”, Andreas Grabner. DynaTrace, 2/9/2018. A great walkthrough of implementing an unbreakable CD pipeline, in this case using AWS Lambda functions and Dynatrace. Andreas makes a great case for applying the Shift-Left movement to monitoring as code.
  • [dop65] – “DevOps Cafe Episode 65 – John interviews Damon”, John Willis, Damon Edwards. DevOps Café, 12/15/2015. A great discussion about the antipatterns around releases and the dangerous illusion of control that many managers suffer from. At one company, fewer than 1% of CAB submittals were rejected – out of 2,000 approved – and those that were rejected often simply hadn’t filled out the correct submittal form! As Damon brought out, all this activity was three degrees removed from the keyboard – those making the approvals had very little idea of what was actually going on.
  • [dri2] – “Monitoring, and Why It Matters To You”, Dave Harrison, 4/4/2017. A more complete discussion of the vicious vs virtuous cycle described in this section, along with some specific examples from Etsy’s groundbreaking work around monitoring.
  • [tdoh] – “The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations”, Gene Kim, Patrick Debois, John Willis, Jez Humble. IT Revolution Press, 10/6/2016. ISBN-10: 1942788002, ISBN-13: 978-1942788003. There’s an excellent story by Heather Mickman of Target about what it took to yank an antique process centered around what they called the TEAP-LARB form. “The surprising thing was that no one knew, outside of a vague notion that we needed some sort of governance process. Many knew that there had been some sort of disaster that could never happen again years ago, but no one could remember exactly what that disaster was.”
  • [forsgren] – “Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations”, Nicole Forsgren PhD, Jez Humble, Gene Kim. IT Revolution Press, 3/27/2018. ISBN-10: 1942788339, ISBN-13: 978-1942788331
  • [dora2017] – “Annual State of DevOps Report”, unattributed author(s). Puppet Labs, 2017.
  • [mcchrystal] – “Team of Teams: New Rules of Engagement for a Complex World”, Stanley McChrystal. Portfolio, 5/12/2015. ISBN-10: 1591847486, ISBN-13: 978-1591847489. The author notes that top-down decisionmaking (as with CAB meetings) has the effect of sapping firepower and initiative; this was echoed by Brian Blackman and Anne Steiner in their interviews in the Appendix section. The military has learned the limitations of higher command, and strives not to command more than is necessary or plan beyond the circumstances that can be foreseen. Orders are given that define and communicate the intent, but the execution strategy is often left up to the individual units.
  • [catafl] – “CatastrophicFailover”, Martin Fowler, 3/7/2005. A vivid description of a cascading failure and the complexities associated with event-driven architectures that informed the failure Alex experienced in this section.
  • [matr] – “Making Matrixed Organizations Successful with DevOps: Tactics for Transformation in a Less Than Optimal Organization”, Gene Kim. IT Revolution DevOps Enterprise Forum 2017. A good discussion on how and why to form a cross-functional team, starting with the leadership level.
  • [gruvle] – “Start and Scaling Devops in the Enterprise”, Gary Gruver. BookBaby, 12/1/2016. ISBN-10: 1483583589, ISBN-13: 978-1483583587

Chapter 7 – Microservices

  • [newm] – “Building Microservices: Designing Fine-Grained Systems”, Sam Newman. O’Reilly Media, 2/20/2015. ISBN-10: 1491950358, ISBN-13: 978-1491950357. SUCH a great book, definitely on my top 3 list on this subject.
  • [bbom] – “Big Ball of Mud”, Brian Foote and Joseph Yoder. University of Illinois at Urbana-Champaign, 6/26/1999. Based on a presentation at the Fourth Conference on Patterns Languages of Programs 1997, the original and very well known “big ball of mud” paper.
  • [yarrow] – “The Org Charts Of All The Major Tech Companies”, Jay Yarow. Business Insider, 6/29/2011.
  • [manu] – “The Google Doodler”, Manu Cornet, 2011.
  • [feathers] – “Working Effectively with Legacy Code”, Michael Feathers. Prentice Hall, 10/2/2004. ISBN-13: 978-0131177055, ISBN-10: 9780131177055
  • [fowl2] – “Microservices”, James Lewis and Martin Fowler, 3/25/2014.
  • [yegge] – “Stevey’s Google Platforms Rant”, Steve Yegge, 1/11/2011. – A now-legendary rant about platforms by a software architect who worked early on at both Google and Amazon. Steve did NOT get fired for his little “reply all” oopsie, shockingly – which tells you a lot about the positive traits of Google’s culture right there.
  • [dign] – “Little Things Add Up”, Larry Dignan. Baseline Magazine, 10/19/2005. – “Small teams are fast… and don’t get bogged down. … each group assigned to a particular business is completely responsible for it… the team scopes the fix, designs it, builds it, implements it and monitors its ongoing use.”

  • [sfowl] – “Production-Ready Microservices: Building Standardized Systems Across an Engineering Organization”, Susan Fowler. O’Reilly, 12/1/2016. ISBN-10: 1491965975, ISBN-13: 978-1491965979. Susan points out that there’s always a balance between speed and safety; the key is to start with a clear goal in mind. Her thoughts around alerts and dashboarding are very well thought out. Even better, it hits perhaps the one true weak point of microservices right on the head; the need for governance. She found it most effective to have a direct pre-launch overview with the development team going over the design on a whiteboard; within ten minutes, it will become apparent if the solution was truly production-ready. If you have only one book to read on microservices – this is it.
  • [conw2] – “How Do Committees Invent?”, Melvin Conway, 4/1/1968. – The original paper as submitted by Melvin Conway. Famously, the Harvard Business Review rejected Melvin’s original paper due to lack of proof; Datamation ended up publishing it in April 1968, and Fred Brooks’ classic book “The Mythical Man-Month” made it famous. Rarely has such a small splash made such a big ripple.
  • [nacha] – “The Influence of Organizational Structure On Software Quality: An Empirical Case Study”, Nachiappan Nagappan, Brendan Murphy, and Victor Basili. Microsoft Research, 1/1/2008. – A very nice metrics-based backup to what we read in “The Mythical Man-Month”, as shown with the troubled Windows Vista release at Microsoft. Here in a recap of that disastrous release, the researchers found that the structure of the organization was the most relevant predictor of failure-prone applications – versus traditional KPIs like churn, complexity, coverage, and bug counts. We suspect that this paper and others like it influenced the decision by Microsoft to upend the structure of their program teams for Azure DevOps and Bing.
  • [grint] – “Splitting the organization and integrating the code: Conway’s law revisited”, Rebecca Grinter, James D. Herbsleb. ACM Digital Library, 5/22/1999. Interestingly, while the Nachiappan study above mentioned that globally distributed teams didn’t perform worse than collocated teams, this paper says the opposite – collocated teams are better functioning than globally distributed. It turns out that when you control for team size, both are correct: the greatest limiting factor was that old enemy, communications overhead. In other words, it doesn’t seem to matter as much if a team is collocated vs distributed, as long as we cap the size to that magical 5-12 number.
  • [lightst] – “The Only Good Reason to Adopt Microservices”, Vijay Gill, 7/19/2018.
  • [kimbre] – “An Interview with Jez Humble on Continuous Delivery, Engineering Culture, and Making Decisions”, Kimbre Lancaster, 8/16/2018.
  • [fami] – “Microservices, IoT, and Azure: Leveraging DevOps and Microservice Architecture to deliver SaaS Solutions”, Bob Familiar. Apress, 10/20/2015. ISBN-10: 9781484212769, ISBN-13: 978-1484212769. The best book we’ve seen out there on IoT in the Microsoft space, by a long shot. Bob Familiar does a terrific job of explaining IoT and microservices in context.
  • [fowl4] – “StranglerApplication”, Martin Fowler., 6/29/2004.
  • [narum] – “Strangler Pattern”, Masashi Narumoto and Mike Wasson. Microsoft Docs, 6/22/2014. A good quick overview of how we can use the strangler pattern to chip away at and eventually deprecate a massive legacy app. Mike Wasson in particular may be one of the best technical writers we’ve got at Microsoft.
  • [calca] – “Building Products at SoundCloud — Part I: Dealing with the Monolith”, Phil Calcado. SoundCloud, 6/11/2014.
  • [hodg1] – “Azure DevOps: From Monolith to Cloud Service”, Buck Hodges. YouTube, 10/24/2017. A nice discussion of how Azure DevOps made the switch to microservices, including maintaining consistency between an on-premises product and the hosted multi-tenant service, how they tackled that tough backend problem, and starting over with telemetry.
  • [hodg2] – “From Monolith to Cloud Service”, Buck Hodges. Microsoft Docs, 11/8/2017. Starting from a position much like Ben’s team does – with good use of version control but little else: no telemetry, no agile or scrum, no live-site support or on-call experience – Buck walks us through turning an on-prem monolith into a microservice-based, cloud-native service with Azure DevOps.
  • [hodg3] – “Patterns for Resiliency in the Cloud”, Buck Hodges. Microsoft Docs, 11/8/2017. Cloud-native architecture really means resilient architecture, and distributed computing makes tracking down a root cause a frustrating and sometimes multi-week endeavor – yes, even with feature flags. Buck explores the Circuit Breaker pattern originally implemented by Netflix and how it’s used with Azure DevOps to degrade gracefully, and their use of throttling as limits are approached, with SQL XEvents.
  • [evans] – “Domain-Driven Design: Tackling Complexity in the Heart of Software”, Eric Evans. Addison-Wesley Professional, 8/30/2003. ISBN-10: 0321125215, ISBN-13: 978-0321125217. This is the gold standard, and should be required reading for anyone considering microservices – or indeed just plain well-defined systems architecture.
  • [driftx] – “Practical Microservices”, Dave Harrison, 9/7/2017. The original blog post and references that influenced this chapter.
  • [amund] – “Microservice Architecture: Aligning Principles, Practices, and Culture”, Mike Amundsen, Matt McLarty, Ronnie Mitra, Irakli Nadareishvili. O’Reilly Media, 8/5/2016. ISBN-10: 1491956259, ISBN-13: 978-1491956250. A great discussion on Domain Driven Design in chapter 5, along with a great practical breakdown of handling one workstream and defining service boundaries using DDD of a sample company.
  • [lewis] – “GOTO 2015 • How I Finally Stopped Worrying and Learnt to Love Conway’s Law”, James Lewis. GOTO 2015 Chicago conference, YouTube, 7/15/2015. There’s a few great examples where they knew the org was not capable of the change needed – and designed a system that would fit it (square peg in square hole!) instead of dictating how the design should work in a perfect, idealistic world.
  • [shconw] – “Randy Shoup on Microservices, the Reality of Conway’s Law, and Evolutionary Architecture”, Daniel Bryant. InfoQ, 7/3/2015. Randy uses his experience from Google and eBay to talk about why monoliths aren’t necessarily as evil as we often think they are.
  • [vaugh] – “Implementing Domain-Driven Design”, Vaughn Vernon. Addison-Wesley, 2/16/2013. ISBN-10: 0321834577, ISBN-13: 978-0321834577. This is the best applied and in-depth discussion we’ve seen of Eric’s groundbreaking work around decomposition and finding domain boundaries.
  • [newmpr] – “Principles Of Microservices”, Sam Newman. YouTube, 11/1/2015. Sam goes through the underlying principles behind microservices, and then attempts to resolve the tension in a core issue with microservices – how independent can they truly be as part of a whole?
  • [qamr] – “Using Microservices Architecture to Break Your Vendor Lock-in”, unattributed author(s). QArea, unknown date. – Google is famous for buying or relying on COTS or open-source libraries – but making sure that any interactions go through a shell that they can control and modify. This article discusses the negative cycle when we overrely on vendors, how that increases the fragility of our systems – and how they broke this vendor lock-in using Golang microservices.
  • [caval] – “Our journey to microservices: mono repo vs multiple repositories”, Avi Cavale, 6/2/2016. Shippable started their effort with multiple repositories, and ended up making the switch over to a single repository: “The only thing you really give up with a mono repo is the ability to shut off developers from code they don’t contribute to. There should be no reason to do this in a healthy organization with the right hiring practices. Unless you’re paranoid… or named Apple.”
  • [netfl1] – “Adopting Microservices at Netflix: Lessons for Architectural Design”, Tony Mauro, 2/19/2015. – A very good overview of Adrian Cockcroft’s series of talks and thinking on microservices and the lessons he learned at Netflix.
  • [goto2014] – “GOTO 2014 • Migrating to Cloud Native with Microservices”, Adrian Cockcroft. YouTube, 12/15/2014. – The original video on Netflix and microservices that was the source for the article above.
  • [nginx2014] – “Fast Delivery”, Adrian Cockcroft. Nginx, YouTube, 12/2/2014. – Adrian points out that Netflix from the beginning favored a fine-grained, loosely coupled architecture. This fed into every one of the four key capabilities Adrian finds vital to deliver at scale – allowing autonomy and the freedom to innovate and make fast decisions; getting answers using big data analytics to explore alternatives and evaluate success; relying on the cloud to remove the latency around spinning up new resources; and eliminating coordination latency by folding everyone needed to deploy and support a service into a single team.
  • [gehan] – “Want to develop great microservices? Reorganize your team”, Neil Gehani. Mesosphere, unknown date. – He calls a cross functional delivery team of 6-12 people a “build-and-run” team, which we kind of like.
  • [kimgb] – “Going big with DevOps: How to scale for continuous delivery success”, Gene Kim, unknown date. We love the Target story because it’s one of those inspiring dumpster-fire-to-paradise redemption accounts.
  • [brooks] – “The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition”, Frederick P. Brooks Jr. Addison-Wesley Professional, 8/12/1995. ISBN-10: 9780201835953, ISBN-13: 978-0201835953

Chapter 7 – One Mission

  • [lond] – “To Build a Fire, and Other Stories”, Jack London. Reader’s Digest Association, 1/1/1994. ISBN-10: 0895775832, ISBN-13: 978-0895775832
  • [dweck] – “Mindset: The New Psychology of Success”, Carol Dweck. Random House, 2/28/2006. ISBN-10: 1400062756, ISBN-13: 978-1400062751
  • [popov] – “Fixed vs. Growth: The Two Basic Mindsets That Shape Our Lives”, Maria Popova, 1/29/2014. Love the BrainPickings site and its fabulous content.
  • [nigel2] – “Why are we all such hypocrites when it comes to DevOps?”, Nigel Kersten. SpeakerDeck, 10/17/2017. – A great presentation by Nigel Kersten on impoverished communication. He covers optimism bias (which is more likely when you lack experience, believe you have more control or influence than you actually do, and think negative events are unlikely). I also love the point he makes on our own skewed view of others – that we often attribute others’ behavior and skillsets as unchangeable, whereas we excuse our own as being caused by external factors (traffic was terrible today, I’m under stress at home, etc.).
  • [hbr] – “Up and Down the Communications Ladder”, Bruce Harriman. Harvard Business Review, 9/1/1974. – The original source of the presentation by Nigel, based on a 1969 study. We’ll call out one key point – that a feedback program must not be an endcap, but must produce visible results.
  • [habit] – “The Power of Habit: Why We Do What We Do in Life and Business”, Charles Duhigg. Random House, 1/1/2014. ISBN-10: 081298160X, ISBN-13: 978-0812981605
  • [ohwm] – “Workplace Management”, Taiichi Ohno. McGraw-Hill Education, 12/11/2002. ISBN-10: 9780071808019, ISBN-13: 978-0071808019
  • [sharma] – “The DevOps Adoption Playbook: A Guide to Adopting DevOps in a Multi-Speed IT Enterprise”, Sanjeev Sharma. Wiley, 2/28/2017. ISBN-10: 9781119308744, ISBN-13: 978-1119308744
  • [russd] – “It Takes Dev and Ops to Make DevOps”, Russ Collier., 7/26/2013.
  • [cumm2017] – “DevOpsDays Boston 2017 – KEYNOTE: Settlers of DevOps”, Rob Cummings. YouTube, 10/20/2017. The keynote at DevOpsDays Boston 2017, with the outstanding Settlers and Town Planners model. He dismantles the appallingly stupid Bimodal IT theory, and we love Rob’s very succinct and beautiful definition of what DevOps is about: “I want to deliver customer value faster and more humanely.”
  • [tdoh] – “The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations”, Gene Kim, Patrick Debois, John Willis, Jez Humble. IT Revolution Press, 10/6/2016. ISBN-10: 1942788002, ISBN-13: 978-1942788003. Chapter 16 by Steve Bella and Karen Whitley Bell is outstanding as a case study of ING Netherlands; it may be the best chapter in the entire book.
  • [wardl] – “On Pioneers, Settlers, Town Planners and Theft”, Simon Wardley, 3/13/2015. – The original source of the now-famous three-phase DevOps growth model.
  • [teams] – “Team of Teams: New Rules of Engagement for a Complex World”, Stanley McChrystal. Portfolio, 5/12/2015. ISBN-10: 1591847486, ISBN-13: 978-1591847489

  • [lean] – “Lean Enterprise: How High Performance Organizations Innovate at Scale”, Jez Humble, Joanne Molesky, Barry O’Reilly. O’Reilly Media, 1/3/2015. ISBN-10: 1449368425, ISBN-13: 978-1449368425. For large enterprises attempting big-picture changes, this is the best book out there that we’ve found to date. Very pragmatic, numbers-centric and a huge influence on the contents of this book.
  • [bung] – “Mission Command: An Organizational Model for Our Time”, Stephen Bungay. Harvard Business Review, 11/2/2010. Mission Command embraces a conception of leadership which unsentimentally places human beings at its center.
  • [reine] – “The Principles of Product Development Flow: Second Generation Lean Product Development”, Donald Reinertsen. Celeritas Publishing, 1/1/2009. ISBN-10: 1935401009, ISBN-13: 978-1935401001
  • [kimbg] – “The Other Side of Innovation: Solving the Execution Challenge”, Vijay Govindarajan, Chris Trimble. Harvard Business Review, 9/2/2010. ISBN-10: 1422166961, ISBN-13: 978-1422166963
  • [perkin] – “Structuring for Change: The Dual Operating System”, Neil Perkin, 4/11/2017.
  • [kotte] – “Accelerate: Building Strategic Agility for a Faster-Moving World”, John P. Kotter. Harvard Business Review Press, 4/8/2014. ISBN-10: 1625271743, ISBN-13: 978-1625271747. Kotter describes here what we now call a “virtual” cross-functional team, which he calls a ‘dual operating system’ – combining the entrepreneurial capability of a network with the organizational efficiency of a traditional pyramid-like hierarchy – and argues that one complements the other.
  • [dam41] – “You Can’t Change Culture, But You Can Change Behavior, and Behavior Becomes Culture”, Damon Edwards. Vimeo, 10/10/2012. An awesome discussion on culture change and how our behavior – and the standards we set – causes ripple effects.
  • [sagat] – “Why DevOps Matters: Practical Insights on Managing Complex & Continuous Change”, unattributed author(s). Saugatuck Technology, 10/1/2014. A Microsoft-sponsored study that has some nice data-driven insights.
  • [eliz] – “Change Agents of Ops: What it Takes”, Eliza Earnshaw. Puppet, 11/6/2014. A very punchy interview with Sam Eaton, the director of engineering operations at Yelp.
  • [kimx] – “How do we Better Sell DevOps?”, Gene Kim. Vimeo, 5/6/2013. A great presentation, describing the business benefits derived from DevOps.
  • [chamor] – “4 Ways to Create a Learning Culture on Your Team”, Tomas Chamorro-Premuzic, Josh Bersin. Harvard Business Review, 7/12/2018. Covers how leaders shouldn’t wait for or depend on employer-provided training, but should instead lead by example in demonstrating curiosity and sharing learning, reinforce positive learning behavior (including providing meaningful critical feedback), and look for hungry minds in the interviewing process.
  • [woodw] – “Moving 65,000 Microsofties to DevOps with Visual Studio Team Services”, Martin Woodward. A fuller walkthrough of the Azure DevOps team’s transformation, start to finish.
  • [dora2017] – “Annual State of DevOps Report”, unattributed author(s). Puppet Labs, 2017.
  • [dora2018] – “Annual State of DevOps Report”, unattributed author(s). Puppet Labs, 2018.
  • [kissl2] – “Transforming to a Culture of Continuous Improvement”, Courtney Kissler. DevOps Enterprise Summit 2014 presentation.
  • [forsgren] – “Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations”, Nicole Forsgren PhD, Jez Humble, Gene Kim. IT Revolution Press, 3/27/2018. ISBN-10: 1942788339, ISBN-13: 978-1942788331. We particularly enjoyed the introduction by Courtney Kissler.
  • [nflpd] – “Adopting Microservices at Netflix: Lessons for Team and Process Design”, Tony Mauro. Nginx, 3/10/2015. A very good article, covering Netflix’s use of the OODA loop in optimizing for speed versus efficiency, and creating a high-freedom, high-responsibility culture with less process.
  • [walkr] – “Resilience Thinking: Sustaining Ecosystems and People in a Changing World”, Brian Walker, David Salt. Island Press, 8/22/2006. ISBN-10: 9781597260930, ISBN-13: 978-1597260930
  • [doj1] – “DevOps Dojo”, unattributed author(s). Chef, 4/10/2018.
  • [targy3] – “DevOps At Target: Year 3”, Heather Mickman. IT Revolution, YouTube, 11/28/2016. Heather describes the storming/norming/performing process we’ve seen elsewhere with successful DevOps initiatives – starting in 2012, with change agents appearing and kickstarting a grassroots DevOps transformation; then a gradual uplift as senior leaders took up the torch and provided the muscle and focus needed to build out better architecture.
  • [damb] – “Target CIO explains how DevOps took root inside the retail giant”, Damon Brown, 1/16/2017. More on Target’s use of DevOps Dojos to overcome hurdles, from the CIO directly.
  • [rach] – “Target Rebuilds its Engineering Culture, Moves to DevOps”, Rachael King. Wall Street Journal, 10/19/2015. The subject of the Dojo keeps coming up as a critical catalyst in the Target use case.
  • [eliz] – “DevOps and Change Agents: Common Themes”, Eliza Earnshaw. Puppet, 12/3/2014.
  • [srew] – “The Site Reliability Workbook”, Betsy Beyer, Niall Richard Murphy, David K. Rensin, Kent Kawahara, Stephen Thorne. O’Reilly Media, 8/1/2018. A terrific resource, especially the discussion in Chapter 6 on toil.
  • [schauso] – “Sharing our experience of self-organizing teams”, Willy Schaub. Microsoft Developer Blog, 12/2/2016. This and Brian Harry’s article below describe one of the most innovative – and insane-sounding! – team building exercises that ended up being much less disruptive, and wildly successful, than Microsoft first thought.
  • [bharryso] – “Self forming teams at scale”, Brian Harry. Microsoft Developer Blog, 7/24/2015.
  • [bjaaso] – “Agile principles in practice”, Aaron Bjork. Microsoft Docs, 5/30/2018.

Chapter 7 – DevOps and Leadership

Chapter 8 – The End of the Beginning

  • [lewpm] – “Project management non-best-practices”, Bob Lewis. InfoWorld, 9/26/2006.
  • [mezak] – “The Origins of DevOps: What’s in a Name?”, Steve Mezak, 1/25/2018. A nice overview of the beginnings of the DevOps movement, including the seminal presentations given in 2008 and 2009 by Andrew Schafer, Patrick Debois, John Allspaw, and Paul Hammond.
  • [net] – New English Translation of Ecclesiastes 3:22. NET Bible Noteless, Kindle edition, 8/26/2005. ASIN: B0010XIA8K
  • [shunryu] – “Zen Mind, Beginner’s Mind: Informal Talks on Zen Meditation and Practice”, Shunryu Suzuki. Shambhala Library, 10/10/2006. ISBN-10: 9781590302675, ISBN-13: 978-1590302675

Appendix – Aaron Bjork

  • [bjork] – “Agile At Microsoft”, Aaron Bjork. Microsoft Visual Studio, YouTube, 10/2/2017. This is the best explanation I’ve seen of “The Microsoft Story”, and it’s packed with information; a must-watch.
  • [wang2] – “VSLive! Keynote: Abel Wang Details Microsoft’s Painful DevOps Journey”, Abel Wang. Visual Studio Magazine, 8/17/2018. There’s a great snapshot and explanation of the bug cap in this article, as well as other background behind the MS story.

Appendix – Betsy Beyer, Stephen Thorne

  • [sre] – “Site Reliability Engineering: How Google Runs Production Systems”, Betsy Beyer, Chris Jones, Jennifer Petoff, Niall Richard Murphy. O’Reilly Media, 4/1/2016. ISBN-10: 9781491929124, ISBN- 13: 978-1491929124
  • [ghbsre] – “The Site Reliability Workbook: Practical Ways to Implement SRE”, Niall Murphy, David Rensin, Betsy Beyer, Kent Kawahara, Stephen Thorne. O’Reilly Media, 8/1/2018. ISBN-10: 1492029505, ISBN-13: 978-1492029502
  • [kieran] – “Managing Misfortune for Best Results”, Kieran Barry. SREcon EMEA, 8/30/2018. This is a great overview of the Wheel of Misfortune exercises in simulating outages for training, and some antipatterns to avoid.

Appendix – John-Daniel Trask

Appendix – John Weers

  • [issurv] – “IS Survivor”, Bob Lewis. This is a great site John recommended that we enjoyed very much, especially on process and change management.

Appendix – Rob England

Appendix – Sam Guckenheimer