Lorin Hochstein

Ramblings about software, research and other things

Ansible: Up and Running


I’m writing a book about Ansible. You can grab a free preview version of the book that contains the first three chapters.


Written by Lorin

November 9, 2014 at 5:50 pm

Posted in Uncategorized


Paper review: Simple Testing Can Prevent Most Critical Failures


Written by Lorin

October 9, 2014 at 9:55 am

Posted in research, software


Cloud software, fragility and Air France 447


The Human Factor by William Langewiesche is an account of how Air France Flight 447 crashed back in 2009. It’s a great piece of long-form journalism.

[Recently deceased engineer Earl] Wiener pointed out that the effect of automation is to reduce the cockpit workload when the workload is low and to increase it when the workload is high.

That’s basically Nassim Nicholas Taleb’s definition of fragility: a situation where the potential downside is much greater than the potential upside.

But this is what really struck me:

[U. Michigan Industrial & Systems Engineering Professor Nadine] Sarter said, “… Complexity means you have a large number of subcomponents and they interact in sometimes unexpected ways. Pilots don’t know, because they haven’t experienced the fringe conditions that are built into the system. I was once in a room with five engineers who had been involved in building a particular airplane, and I started asking, ‘Well, how does this or that work?’ And they could not agree on the answers. So I was thinking, If these five engineers cannot agree, the poor pilot, if he ever encounters that particular situation … well, good luck.”

We also build software by connecting together different subcomponents. We know that the hardware underlying these subcomponents can fail, and in order to provide reliability for our customers we must be able to deal with these failures. If we’re sophisticated, we turn to automation: we try to build fault-tolerant systems that can automatically detect failures and compensate for them.

But the behavior of these types of systems is difficult to reason about, precisely because they take action automatically based on self-monitoring. These systems can handle hard failures of individual components: when something has obviously failed. But if it’s a soft failure, where the component is still partially working, or if two independent components fail simultaneously, then the system may not be able to handle it. Adding automation to handle failures introduces new failure modes even as it eliminates old ones, and the new ones may be much harder for an operator to understand.
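To make the hard-versus-soft distinction concrete, here’s a minimal sketch of a liveness-style health check. This isn’t any particular monitoring system’s API; the function name and timeout are made up for illustration:

```python
import socket

def is_healthy(host, port, timeout=2.0):
    """A liveness-style health check: can we open a TCP connection?

    This catches hard failures (process dead, machine unreachable), but a
    server that accepts connections and then answers very slowly -- a soft,
    degraded failure -- looks exactly the same as a healthy one.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Detecting soft failures means looking at things like latency percentiles and error rates, and picking thresholds for them, which is a much fuzzier problem than noticing that a machine stopped answering.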

Consider the AWS outage of October 2012, which took down multiple websites that deploy on Amazon’s cloud. The outage was a result of:

1. A hardware failure on a data collection server
2. A DNS update not propagating to all internal DNS servers
3. A memory leak bug in an agent that monitors the health of EBS (storage) servers

The agents collect operational data from the storage servers and transfer it to the data collection servers. Some agents couldn’t reach a collection server because of the stale DNS entry. Because the agents also had a memory leak bug, those agents gradually consumed too much memory on the storage servers they ran on.

Lack of sufficient memory is a good way to create a soft failure: the component still works, but in a degraded fashion. These memory-poor servers couldn’t keep up with requests, and eventually became stuck. The system detected the stuck servers (a hard failure), and failed over to healthy servers, but there were so many stuck servers that the healthy servers couldn’t keep up, and also became stuck.
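Here’s a toy simulation of that spiral. The capacities and numbers are invented purely for illustration, not taken from the AWS post-mortem; the point is just that when there isn’t enough spare capacity, failing traffic over from stuck servers creates more stuck servers:

```python
# Toy cascade: each healthy server can absorb a limited amount of extra load.
# When a server's load exceeds its capacity, it gets "stuck" too, and its load
# is redistributed to the remaining healthy servers. All numbers are made up.

def simulate_failover(num_servers=10, capacity=1.5, initial_stuck=4):
    load = [1.0] * num_servers          # every server starts at normal load
    stuck = [i < initial_stuck for i in range(num_servers)]

    changed = True
    while changed:
        changed = False
        healthy = [i for i in range(num_servers) if not stuck[i]]
        if not healthy:
            break
        # Redistribute the load of stuck servers across the healthy ones.
        orphaned = sum(load[i] for i in range(num_servers) if stuck[i])
        per_server = 1.0 + orphaned / len(healthy)
        for i in healthy:
            load[i] = per_server
            if load[i] > capacity:      # overloaded servers become stuck too
                stuck[i] = True
                changed = True
    return sum(stuck)                   # number of stuck servers at the end

print(simulate_failover(initial_stuck=1))  # 1: enough headroom, cascade stops
print(simulate_failover(initial_stuck=4))  # 10: too many stuck, everyone falls over
```

With one stuck server the extra load is absorbed; with four, every server ends up stuck.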

The irony is that this failure was entirely due to the monitoring subsystem, which is intended to increase system reliability! It happened because of the coincidence of a hardware failure in one component of the monitoring subsystem (a data collection server) and a software bug in another component of the monitoring subsystem (the data collection agent).

We can’t avoid automation. In the case of the airlines, the automation has significantly increased safety, even though it increases the chances of incidents like Air France 447. For cloud software, this type of fault-tolerance automation does result in more reliable systems.

But, while these systems mean failure is less likely, when it does happen, it’s much more difficult to understand what’s happening. The future of cloud software is systems that fail much less often, but much harder. Buckle up.

Written by Lorin

September 30, 2014 at 9:26 pm

Estimating confidence intervals, part 6


Our series of effort-estimation-in-the-small continues. This feature took a while to complete (where “complete” means “deployed to production”). I thought this was the longest feature I’d worked on, but looking at the historical data, there was another feature that took even longer.

[Plot: estimated vs. actual time remaining, with 90% confidence interval]

Somehow, the legend has disappeared from the plot. The solid line is my best estimate of time remaining each day, and the dashed line is the true amount of time left. The grey area is my 90% confidence interval estimate.

[Plot: error in estimate]

As usual, my “expected” estimate was much too optimistic. I initially estimated 10 days, whereas it actually took 17 days. I did stay within my 90% confidence interval, which gives me hope that I’m getting better at those intervals.
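(For anyone who wants to track their own estimates this way: a plot like the one above can be generated with a few lines of matplotlib. The daily numbers below are made up for illustration, not my actual estimates.)

```python
import matplotlib.pyplot as plt

# Each day, record a (low, expected, high) 90% confidence interval for the
# number of days remaining. These numbers are invented for illustration.
estimates = [
    (5, 10, 20), (5, 10, 19), (6, 11, 21), (7, 12, 22), (6, 10, 20),
    (5, 9, 18), (5, 8, 16), (4, 7, 14), (3, 6, 12), (3, 5, 10),
    (2, 4, 9), (2, 3, 7), (1, 3, 6), (1, 2, 5), (1, 2, 4), (0, 1, 2), (0, 0, 1),
]
days = list(range(1, len(estimates) + 1))
low, expected, high = zip(*estimates)
actual_remaining = [len(estimates) - d for d in days]  # true days left each day

plt.fill_between(days, low, high, color="lightgrey", label="90% interval")
plt.plot(days, expected, "-", label="estimated days remaining")
plt.plot(days, actual_remaining, "--", label="actual days remaining")
plt.xlabel("day")
plt.ylabel("days remaining")
plt.legend()  # so the legend doesn't go missing this time
plt.show()
```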

When I started this endeavor, my goal was to do a from-scratch estimate each day, but that proved to require too much mental effort, and I succumbed to the anchoring effect. Typically, I would just make an adjustment to the previous day’s estimate.

Interestingly, when I was asked in meetings how much time was left to complete this feature, I gave off-the-cuff (and, unsurprisingly, optimistic) answers instead of consulting my recorded estimates and giving the 90% interval.

Written by Lorin

September 20, 2014 at 11:55 pm

Posted in software


Good to great


A few months ago I read Good to Great, a book about the factors that led to companies making the transition from being “good” to being “great”. Collins, the author, defines “great” as companies whose stock performed at least three times better than the overall market over at least fifteen years. While the book is ostensibly about a research study, it feels packaged as a set of recommendations for executives looking to turn their good companies into great ones.

The lessons in the book sound reasonable, but here’s the thing: if Collins’s theory is correct, we should be able to identify companies that will outperform the market by a factor of three over the next fifteen years, simply by surveying employees to see whether their companies meet the seven criteria outlined in the book.

It has now been almost thirteen years since the book was published. Where are the “Good to Great” funds?

Written by Lorin

August 24, 2014 at 10:26 pm

Posted in research


Estimating confidence intervals, part 5


This was a quick feature. I accidentally finished on day 4, before recording that day’s estimate, so I just set the min/expected/max values to 1.

 

[Plot: effort estimate]

 

[Plot: error in effort estimate]

Written by Lorin

August 24, 2014 at 8:28 pm

Posted in software


Estimating confidence intervals, part 4


Here’s the latest installment in my continuing saga of estimating effort with 90% confidence intervals. Below is the plot:

[Plot: effort estimate]

In this case, my estimate of the expected time to completion was fairly close to the actual time. The upper end of the 90% confidence interval is extremely high, largely because there was some work that I considered optional for completing the feature and decided to put off to some future date.

Here’s the plot of the error:

[Plot: error in effort estimate]

It takes a non-trivial amount of mental effort to do these estimates each day. I may stop doing these soon.

 

Written by Lorin

July 21, 2014 at 8:33 pm

Posted in research, software

