Root cause: line in Shakespearean play

News recently broke about the crash of Ethiopian Airlines Flight 302. This post is about a different plane crash, Eastern Airlines Flight 375, in 1960. Flight 375 crashed on takeoff from Logan Airport in Boston when it flew into a flock of birds. More specifically, in the words of Michael Kalafatas, it “slammed into a flock of ten thousand starlings”.

The starling isn’t native to North America. An American drug manufacturer named Eugene Schieffelin made multiple attempts to bring over different species of bird to the U.S. Many of his efforts failed, but he was successful at bringing starlings over from Europe, releasing sixty European starlings in 1890 and another forty in 1891. Nate Dimeo recounts the story of the release of the sixty starlings in New York’s Central Park in episode 138 of The Memory Palace podcast.

Schieffelin’s interest in starlings stemmed from his desire to bring over all of the birds mentioned in Shakespeare’s plays. The starling is mentioned only once in Shakespeare’s works: in Henry IV, Part I, in a line uttered by Sir Henry Percy:

Nay, I will; that’s flat: 
He said he would not ransom Mortimer; 
Forbad my tongue to speak of Mortimer;
But I will find him when he lies asleep, 
And in his ear I’ll holla ‘Mortimer!’ 
Nay, 
I’ll have a starling shall be taught to speak 
Nothing but ‘Mortimer,’ and give it him
To keep his anger still in motion.

The story is a good example of the problems of using causal language to talk about incidents. I doubt an accident investigation report would list “line in 16th century play” as a cause. And, yet, if Shakespeare had not included that line in the play, or had substituted a different bird for a starling, the accident would not have happened.

Of course, this type of counterfactual reasoning isn’t useful at all, but that’s exactly the point. Whenever we start with an incident, we can always go further back in time and play “for want of a nail”: the place where we stop is determined by factors such as the time constraints of the investigation and the available information. Neither of those factors is a property of the incident itself.

William Shakespeare didn’t cause Flight 375 to crash, because “causes” don’t exist in the world. Instead, we construct causes when we look backwards from incidents. We do this because of our need to make sense of the world. But the world is a messy, tangled web of interactions. Those causes aren’t real. It’s only by moving beyond the notion of causes that we can learn more about how those incidents came to be.

The danger of “insufficient virtue”

Nate Dimeo hosts a great storytelling podcast called The Memory Palace, where each episode is a short historical vignette. Episode 316: Ten Fingers, Ten Toes is about how people have tried to answer the question: “why are the bodies of some babies drastically different from the bodies of all others?”

The stories in this podcast usually aren’t personal, but this episode is an exception. Dimeo recounts how his great-aunt, Anna, was born without fingers on her left hand. Anna’s mother (Dimeo’s great-grandmother) blamed herself: when pregnant, she had been startled by a salesman knocking on the back door, and had bitten her knuckles. She had attributed the birth defect to her knuckle-biting.

We humans seem to be wired to attribute negative outcomes to behaving insufficiently virtuously. This is particularly apparent in the writing style of many management books. Here are some quotes from a book I’m currently reading.

For years, for example, American manufacturers thought they had to choose between low cost and high quality… They didn’t realize that they could have both goals, if they were willing to wait for one while they focused on the other.

Whenever a company fails, people always point to specific events to explain the “causes” of the failure: product problems, inept managers, loss of key people, unexpectedly aggressive competition, or business downturns. Yet, the deeper systemic causes for unsustained growth go unrecognized.

Why wasn’t that balancing process noticed? First, WonderTech’s financially oriented top management did not pay much attention to their delivery service. They mainly tracked sales, profits, return on investment, and market share. So long as these were healthy, delivery times were the least of their concerns.

Such litanies of “negative visions” are sadly commonplace, even among very successful people. They are the byproduct of a lifetime of fitting in, of coping, of problem solving. As a teenager in one of our programs once said, “We shouldn’t call them ‘grown ups,’ we should call them ‘given ups.’”

Peter Senge, The Fifth Discipline

In this book (The Fifth Discipline), Senge associates the principles he is advocating for (e.g., systems thinking, personal mastery, shared vision) with virtue, and the absence of these principles with vice. The book is filled with morality tales of the poor fates of companies due to insufficiently virtuous executives, to the point where I feel like I’m reading Goofus and Gallant comics.

This type of moralized thinking, where poor outcomes are caused by insufficiently virtuous behavior, is a cancer on our ability to understand incidents. It’s seductive to blame an incident on someone being greedy (an executive) or sloppy (an operator) or incompetent (a software engineer). Just think back to your reactions to incidents like the Equifax Data Breach or the California wildfires.

The temptation to attribute responsibility when bad things happen is overwhelming. You can always find greed, sloppiness, and incompetence if that’s what you’re looking for. We need to fight that urge. When trying to understand how an incident happened, we need to assume that all of the people involved were acting reasonably given the information they had at the time. It’s the difference between explaining incidents away and learning from them.

(Oh, and you’ll probably want to check out the Field Guide to Understanding ‘Human Error’ by Sidney Dekker).

Notes on David Woods’s Resilience Engineering short course

David Woods has a great series of free online lectures on resilience engineering. After watching those lectures, a lot of the material clicked for me in a way that it never really did from reading his papers.

Woods writes about systems at a very general level: the principles he describes could apply to cells, organs, organisms, individuals, teams, departments, companies, ecosystems, socio-technical systems, pretty much anything you could describe using the word “system”. This generality means that he often uses abstract concepts, which apply to all such systems. For example, Woods talks about units of adaptive behavior, competence envelopes, and florescence. Abstractions that apply in a wide variety of contexts are very powerful, but reading about them is often tough going (cf. category theory).

In the short course lectures, Woods really brings these concepts to life. He’s an animated speaker (especially when you watch him at 2X speed). It’s about twenty hours of lectures, and he packs a lot of concepts into those twenty hours.

I made an effort to take notes as I watched the lectures. I’ve posted my notes to GitHub. But, really, you should watch the videos yourself. It’s the best way to get an overview about what resilience engineering is all about.

Our brittle serverless future

I’m really enjoying David Woods’s Resilience Engineering short course videos. In Lecture 9, Woods mentions an important ingredient in a resilient system: the ability to monitor how hard you are working to stay in control of the system.

I was thinking of this observation in the context of serverless computing. In serverless, software engineers offload the responsibility of resource management to a third-party organization, which handles it transparently for them. No more thinking in terms of servers, instance types, CPU utilization, and memory usage!

The challenge is this: from the perspective of a customer of a serverless provider, you don’t have visibility into how hard the provider is working to stay in control. If the underlying infrastructure is nearing some limit (e.g., amount of incoming traffic it can handle), or if it’s operating in degraded mode because of an internal failure, these challenges are invisible to you as a customer.

Woods calls this phenomenon the veil of fluency. From the customer’s perspective, everything is fine. Your SLOs are all still being met! However, from the provider’s perspective, the system may be very close to the boundary, the point where it falls over.

Woods also talks about the importance of reciprocity in resilient organizations: how different units of adaptive behavior synchronize effectively when a crunch happens and one of them comes under pressure. In a serverless environment, you lose reciprocity because there’s a hard boundary between the serverless provider and a customer. If your system is deployed in a serverless environment, and a major incident happens where the serverless system is a contributing factor, nobody from your serverless provider is going to be in the Slack channel or on the conference bridge.

I think Simon Wardley is correct in his prediction that serverless is the future of software deployment. The tools are still immature today, but they’ll get there. And systems built on serverless will likely be more robust, because the providers will have more expertise in resource management and fault tolerance than their customers do.

But every system eventually reaches its limit. One day a large-scale serverless-based software system is going to go past the limit of what it can handle. And when it breaks, I think it’s going to break quickly, without warning, from the customer’s perspective. And you won’t be able to coordinate with the engineers at your serverless provider to bring the system back into a good state, because all you’ll have are a set of APIs.

TLA+ is hard to learn

I’m a fan of the formal specification language TLA+. With TLA+, you can build models of programs or systems, which helps to reason about their behavior.

TLA+ is particularly useful for reasoning about the behavior of multithreaded programs and distributed systems. By requiring you to specify behavior explicitly, it forces you to think about interleavings of events that you might not otherwise have considered.

The user base of TLA+ is quite small. I think one of the reasons that TLA+ isn’t very popular is that it’s difficult to learn. I think there are at least three concepts you need for TLA+ that give new users trouble:

  • The universe as a state machine
  • Modeling programs and systems with math
  • Mathematical syntax

The universe as a state machine

TLA+ uses a state machine model. It treats the universe as a collection of variables whose values vary over time.

A state machine in the sense that TLA+ uses the term is similar to, but not exactly the same as, the finite state machines that software engineers are used to. In particular:

  • A state machine in the TLA+ sense can have an infinite number of states.
  • When software engineers think about state machines, they think about a specific object or component being implemented as a finite state machine. In TLA+, everything is modeled as a state machine.

The state machine view of systems will feel familiar if you have a background in physics, because physicists use the same approach for system modeling: they define a state variable that evolves over time. If you squint, a TLA+ specification looks identical to a system of first-order differential equations, and associated boundary conditions. But, for the average software engineer, the notion of an entire system as an evolving state variable is a new way of thinking.
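To make the analogy explicit (my gloss, not anything from the course or the TLA+ documentation): a physics model pairs an evolution law with initial conditions, and the usual shape of a TLA+ specification is the same, an initial-state predicate plus a next-state relation that must hold across every step.

```latex
\frac{dx}{dt} = f(x(t)), \quad x(0) = x_0
\qquad \longleftrightarrow \qquad
Spec \;\triangleq\; Init \land \Box[Next]_{vars}
```

In both cases, the model is “where the system starts” plus “how the state is allowed to change from one instant to the next.”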

The state machine approach comes with a set of concepts that you need to understand. In particular, you need to understand behaviors, which requires that you understand states, steps, and actions. Steps can stutter, and actions may or may not be enabled. For example, here’s the definition of “enabled” (I’m writing this from memory):

An action a is enabled for a state s if there exists a state t such that a is true for the step s→t.

It took me a long time to internalize these concepts to the point where I could just write that out without consulting a source. For a newcomer, who wants to get up and running as quickly as possible, each new concept that requires effort to understand decreases the likelihood of adoption.
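To make this vocabulary a little more concrete, here’s a toy sketch in Python rather than TLA+ (the counter example and the helper functions are my own, not part of any TLA+ tooling): a state assigns values to variables, an action is a predicate over a step, and an action is enabled in a state if some successor state makes it true.

```python
# A state assigns a value to every variable; here the only variable is "x".
states = [{"x": n} for n in range(4)]

# An action is a predicate over a step (a pair of states).
# "increment" is true of a step s -> t exactly when s.x < 3 and t.x = s.x + 1.
def increment(s, t):
    return s["x"] < 3 and t["x"] == s["x"] + 1

# An action is enabled in state s if there exists some state t
# such that the action is true of the step s -> t.
def enabled(action, s, states):
    return any(action(s, t) for t in states)

for s in states:
    print(s, "increment enabled?", enabled(increment, s, states))
# x=0, 1, 2: enabled; x=3: not enabled
```

In this vocabulary, a behavior is just an infinite sequence of states, and a specification picks out those behaviors in which every step either satisfies one of its actions or is a stuttering step.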

Modeling programs and systems with math

One of the commonalities across engineering disciplines is that they all work with mathematical models. These models are abstractions: objects that are simplified versions of the artifacts that we intend to build. That’s one of the things that attracts me about TLA+: it’s modeling for software engineering.

A mechanical engineer is never going to confuse the finite element model they have constructed on a computer with the physical artifact that they are building. Unfortunately, we software engineers aren’t so lucky. Our models superficially resemble the artifacts we build (a TLA+ model and a computer program both look like source code!). But models aren’t programs: a model is a completely different beast, and that trips people up.

Here’s a metaphor: you can think of writing a program as akin to painting, in that both are additive work. You start with nothing and do work by adding content (statements to your program, paint to a canvas).

The simplest program, equivalent to an empty canvas, is one that doesn’t do anything at all. On Unix systems, there’s a program called true which does nothing but terminate successfully. You can implement this in shell as an empty file. (Incidentally, AT&T has copyrighted this implementation).

By contrast, when you implement a model, you do the work by adding constraints on the behavior of the state variables. It’s more like sculpting, where you start with everything, and then you chip away at it until you end up with what you want.

The simplest model, the one with no constraints at all, allows all possible behaviors. Where the simplest computer program does nothing, the simplest model does (really allows) everything. The work of modeling is adding constraints to the possible behaviors such that the model only describes the behaviors we are interested in.
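Here’s a tiny sketch of that subtractive style in Python rather than TLA+ (the single-variable example is my own invention, just to show the direction of the work): start with every possible behavior of a variable over a few steps, then narrow the set by adding constraints.

```python
from itertools import product

# With no constraints, everything is allowed: every possible behavior of a
# single variable x over four steps, where x can be 0, 1, or 2 at each step.
behaviors = list(product(range(3), repeat=4))
print(len(behaviors))  # 81 possible behaviors

# Constraint 1 (the "initial condition"): the first state must have x = 0.
behaviors = [b for b in behaviors if b[0] == 0]

# Constraint 2 (the "next-state relation"): each step either leaves x
# unchanged (a stuttering step) or increments it by one.
behaviors = [b for b in behaviors
             if all(b[i + 1] in (b[i], b[i] + 1) for i in range(3))]

print(len(behaviors))  # only a handful of behaviors remain
```

Each constraint chips away at the block; whatever is left at the end is the set of behaviors the model allows.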

When we write ordinary programs, the only kind of mistake we can really make is a bug, writing a program that doesn’t do what it’s supposed to. When we write a model of a program, we can also make that kind of mistake. But, we can make another kind of mistake, where our model allows some behavior that would never actually happen in the real world, or isn’t even physically possible in the real world.

Engineers and physicists understand this kind of mistake, where a mathematical model permits a behavior that isn’t possible in the real world. For example, electrical engineers talk about causal filters, which are filters whose outputs depend only on the past and present, not the future. You might ask why you even need a word for this, since it’s not possible to build a non-causal physical device. But it’s possible, and even useful, to describe non-causal filters mathematically. And, indeed, it turns out that filters that perfectly block out a range of frequencies are not causal.
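For the curious, here’s the standard textbook calculation behind that last claim, using an ideal “brick-wall” low-pass filter as the example (my worked example, not from the original post): inverting the perfectly sharp frequency response gives an impulse response that is nonzero for negative times.

```latex
H(\omega) =
  \begin{cases}
    1 & |\omega| \le \omega_c \\
    0 & |\omega| > \omega_c
  \end{cases}
\qquad
h(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} H(\omega)\, e^{i\omega t}\, d\omega
     = \frac{\sin(\omega_c t)}{\pi t}
```

Because h(t) ≠ 0 for t < 0, the output at any moment would depend on input that hasn’t arrived yet: perfectly fine as mathematics, impossible as hardware.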

For a new TLA+ user who doesn’t understand the distinction between models and programs, this kind of mistake is inconceivable, since it can’t happen when writing a regular program. Creating non-causal specifications (the software folks use the term “machine-closed” instead of “causal”) is not a typical error for new users, but underspecifying the behavior of some variable of interest is very common.

Mathematical syntax

Many elements of TLA+ are taken directly from mathematics and logic. For software engineers used to programming language syntax, these can be confusing at first. If you haven’t studied predicate logic before, the universal (∀) and existential (∃) quantifiers will be new.

I don’t think TLA+’s syntax, by itself, is a significant obstacle to adoption: software engineers pick up new languages with unfamiliar syntax all of the time. The real difficulty is in understanding TLA+’s notion of a state machine, and that modeling is describing a computer program as permitted behaviors of a state machine. The new syntax is just one more hurdle.

Why we will forever suffer from missing timeouts, TTLs, and queue size bounds

If you’ve operated a software service, you will have inevitably hit one of the following problems:

A network call with a missing timeout. Some kind of remote procedure call or other network call is blocked waiting… forever, because there’s no timeout configured on the call.

Missing time-to-live (TTL). Some data that was intended to be ephemeral did not explicitly have a TTL set on it, and it didn’t get removed by normal means, and so its unexpected presence bit you.

A queue with no explicit size limit. A queue somewhere doesn’t have an explicitly configured upper bound on its size, the producers are consistently outpacing the consumers, and the queue eventually grows to a size that you never expected.
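Each of these has a purely mechanical fix once you remember to apply it. Here’s a minimal sketch in Python of what setting the limits explicitly looks like (the URL, the numbers, and the little dict-based TTL cache are all illustrative placeholders, not recommendations):

```python
import queue
import time

import requests

# 1. A network call with an explicit timeout: give up after 2 seconds
#    instead of blocking forever. (The URL is a placeholder.)
response = requests.get("https://example.com/api/health", timeout=2.0)

# 2. Ephemeral data with an explicit TTL, using a tiny dict-based cache.
cache = {}

def put_with_ttl(key, value, ttl_seconds):
    cache[key] = (value, time.monotonic() + ttl_seconds)

def get_if_fresh(key):
    value, expires_at = cache.get(key, (None, 0))
    return value if time.monotonic() < expires_at else None

put_with_ttl("session:abc", {"user": "alice"}, ttl_seconds=300)
print(get_if_fresh("session:abc"))

# 3. A queue with an explicit upper bound: producers get back-pressure
#    (queue.Full) instead of letting the queue grow without limit.
work_queue = queue.Queue(maxsize=1000)
work_queue.put_nowait("job-1")  # raises queue.Full once the bound is hit
```

None of this is hard; the hard part is remembering to do it every single time.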

Unfortunately, the only good solution to these problems is diligence: we have to remember to explicitly set timeouts, TTLs, and queue size limits ourselves. There are two reasons why we can’t rely on library authors to solve this for us:

It’s impossible for a library author to define a reasonable default for these values. Appropriate timeouts, TTLs, and queue sizes vary enormously from one use case to another; there simply isn’t a “reasonable” value to pick without picking one so large that it’s effectively unbounded.

Forcing users to always specify values is a lousy experience for new users. Library authors could make these values required rather than optional. However, that makes the libraries more annoying for newcomers: it’s an extra step that forces them to make a decision they don’t really want to think about, and they probably don’t even know what a reasonable value is when they’re first setting out.

I think forcing users to specify these limits would lead to more robust software, but I can see many users complaining about being forced to set these limits rather than defaulting to infinity. At least, that’s my guess about why library authors don’t do it. 
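For what it’s worth, here’s a sketch of what the “force the caller to choose” approach could look like in Python; the HttpClient wrapper and its interface are hypothetical, invented just to illustrate the API design choice:

```python
import requests

class HttpClient:
    """A hypothetical wrapper that refuses to make unbounded calls."""

    def __init__(self, base_url, *, timeout_seconds):
        # timeout_seconds is keyword-only and has no default:
        # callers must make an explicit decision up front.
        self.base_url = base_url
        self.timeout_seconds = timeout_seconds

    def get(self, path):
        return requests.get(self.base_url + path, timeout=self.timeout_seconds)

# client = HttpClient("https://example.com")   # TypeError: timeout_seconds is required
client = HttpClient("https://example.com", timeout_seconds=2.0)
```

A required, keyword-only argument makes the unbounded case impossible to reach by accident, at the cost of exactly the new-user friction described above.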

The Equifax breach report

The House Oversight and Government Reform Committee released a report on the big Equifax data breach that happened last year. In a nutshell, a legacy application called ACIS contained a known vulnerability that attackers used to gain access to internal Equifax databases.

A very brief timeline is:

  • Day 0: (3/7/17) Apache Struts vulnerability CVE-2017-5638 is publicly announced
  • Day 1: (3/8/17) US-CERT sends an alert to Equifax about the vulnerability
  • Day 2: (3/9/17) Equifax’s Global Threat and Vulnerability Management (GTVM) team posts to an internal mailing list about the vulnerability and requests that app owners patch within 48 hours
  • Day 37: (4/13/17) Attackers exploit the vulnerability in the ACIS app

The report itself is… frustrating. There is some good content here. The report lays out multiple factors that enabled the breach, including:

  • A scanner that was run but missed the vulnerable app because of the directory that the scan ran in
  • An expired SSL certificate that prevented Equifax from detecting malicious activity
  • The legacy nature of the vulnerable application (originally implemented in the 1970s)
  • A complex IT environment that was the product of multiple acquisitions
  • An organizational structure where the chief security officer and the chief information officer were in separate reporting structures

The last bullet, about the unconventional reporting structure for the chief security officer, along with the history of that structure, was particularly insightful. It would have been easy to leave out this sort of detail in a report like this.

On the other hand, the report exhibits some weapons-grade hindsight bias. To wit:

 Equifax, however, failed to implement an adequate security program to protect this sensitive data. As a result, Equifax allowed one of the largest data breaches in U.S. history. Such a breach was entirely preventable.

Equifax failed to fully appreciate and mitigate its cybersecurity risks. Had the company taken action to address its observable security issues prior to this cyberattack, the data breach could have been prevented.

Page 4

Equifax knew its patch management process was ineffective. The 2015 Patch Management Audit concluded “vulnerabilities were not remediated in a timely manner,” and “systems were not patched in a timely manner.” In short, Equifax recognized the patching process was not being properly implemented, but failed to take timely corrective action.

Page 80

The report highlights a number of issues that, if they had been addressed, would have prevented or mitigated the breach, including:

Lack of a clear owner of the vulnerable application. An email went out announcing the vulnerability, but nobody took action to patch the vulnerable app.

Lack of a comprehensive asset inventory. The company did not have a database that they could query to check whether any published vulnerabilities applied to the applications in use.

Lack of network segmentation in the environment where the vulnerable app ran. The vulnerable app ran on a network that was not segmented from unrelated databases. Once the app was compromised, it was used as a vector to reach those other databases.

Lack of file integrity monitoring (FIM). FIM could have detected malicious activity, but it wasn’t in place.

Not prioritizing retiring the legacy system. This one is my favorite. From the report: “Equifax knew about the security risks inherent in its legacy IT systems, but failed to prioritize security and modernization for the ACIS environment”.

Use of NFS. The vulnerable system had an NFS mount that allowed the attackers to access a number of files.

Frustratingly, the report does not go into any detail about how the system got into this state. It simply lays out the deficiencies like an indictment for criminal negligence. Look at all of these problems! They should have known better! Even worse, they did know better and didn’t act!

In particular, the report doesn’t dig enough into the communication breakdown that resulted in ACIS not being patched. Here’s an exchange that gets close:

To determine who was responsible for applying the Apache Struts patch to the ACIS system, the Committee asked [former Senior Vice President and Chief Information Officer for Global Corporate Platforms Graeme] Payne to identify employees by the roles listed within the Patch Management Policy. Specifically, the Committee asked him to identify the business owner, system owner, and application owner responsible for the ACIS system. Payne testified:

Q. So the application owner for ACIS would have been who or what organization?
A. So I don’t believe there was any explicit designation of application owners. If you ask me who I think the application owner would be, I can probably answer that.

Q. That would be good.
A. So I believe – in my view, the application owner for ACIS – for the online dispute portal component because that was a component – was [Equifax IT Employee 1] and probably also [Equifax IT Employee 2]. So again, I don’t believe there were any specific designations, so these would be – if someone asked me, “Who do you think they would be?” that would probably be the two people I would look at.

**

Q. So would they have been the people that should have received the GTVM email saying you need to patch?
A. Yes, as well as the system owner.

Q. Okay. Who’s the system owner?
A. So again, those people weren’t designated. So I can –

Q. Tell me who you think?
A. My guess would be that the system owner would be someone in the infrastructure group probably under [Equifax IT Employee 3], since…as part of the global platform services group, his team ran the sort of the server operations

***

Q. If you look at the definition . . . it says: System owner is responsible for applying patch to electronic assets.

So would it be the case that [Equifax IT Employee 3] would have been the one responsible for actually applying the patch to ACIS?
A. Possibly. Again, we are talking at a level that I wasn’t involved in, so I can’t talk specifically about…who actually

Pages 65-66

Alas, the committee doesn’t seem to have interviewed any of the ICs referenced as Equifax IT Employees 1-3. How did they understand ownership to work? There’s also no context about how ownership generally worked inside of Equifax. Was ACIS a special case, or was it typical?

There was also a theme that anyone who has worked on a software project would recognize:

[Former Chief Security Officer Susan] Mauldin stated Equifax was in the process of making the ACIS application Payment Card Industry (PCI) Data Security Standard (DSS) compliant when the data breach occurred.

Mauldin testified the PCI DSS implementation “plan fell behind and these items did not get addressed.” She stated:

A. The PCI preparation started about a year before, but it’s very complex. It was a very complex – very complex environment.

Q. [A] year before, you mean August 2016?

A. Yes, in that timeframe.

Q. And it was scheduled to be complete by August 2017?

A. Right.

Q. But it fell behind?

A. It fell behind.

Q. Do you know why?

A. Well, what I recall from the application team is that it was very complicated, and they were having – it just took a lot longer to make the changes than they thought. And so they just were not able to get everything ready in time.

Pages 80-81

And, along the same lines:

So there were definitely risks associated with the ACIS environment that we were trying to remediate and that’s why we were doing the CCMS upgrade.

It was just – it was time consuming, it was risky . . . and also we were lucky that we still had the original developers of the system on staff.


So all of those were risks that I was concerned about when I came into this role. And security was probably also a risk, but it wasn’t the primary driver. The primary driver was to get off the old system because it was just hard to manage and maintain.

Graeme Payne, former Senior Vice President and Chief Information Officer for Global Corporate Platforms, page 82

Good luck finding a successful company that doesn’t face similar issues.

Finally, in a beautiful example of scapegoating, there’s the Senior VP that Equifax fired, ostensibly for failing to forward an email that had already been sent to an internal mailing list. In the scapegoat’s own words:

To assert that a senior vice president in the organization should be forwarding vulnerability alert information to people . . . sort of three or four layers down in the organization on every alert just doesn’t hold water, doesn’t make any sense. If that’s the process that the company has to rely on, then that’s a problem.

Graeme Payne, former Senior Vice President and Chief Information Officer for Global Corporate Platforms, page 51