Lorin Hochstein

Ramblings about software, research and other things

Archive for the ‘systems’ Category

Death of a pledge as systems failure


Caitlin Flanagan has written a fantastic and disturbing piece for The Atlantic entitled Death at a Penn State Fraternity.

This line really jumped out at me:

Fraternities do have a zero-tolerance policy regarding hazing. And that’s probably one of the reasons Tim Piazza is dead.

The official policy of the fraternities is that hazing is forbidden. Because this is the official policy, it is the individuals in a particular frat house who are held responsible if hazing happens, not the national fraternity organization.

This policy has had the effect of insulating the national organizations from liability, but it hasn’t stopped hazing from being widespread: according to Flanagan, 80% of fraternity members report being hazed.

Because individual fraternity members are the ones on the hook if something goes wrong during hazing, reporting an injury carries risk, which means a member must weigh a tradeoff. In the case Flanagan documents, that tradeoff led to a nineteen-year-old dying of his injuries.

This example really reinforces a core idea of systems thinking: the introduction of the zero-tolerance policy did not have its intended effect. Because the culture of hazing persists, the policy ended up making things worse.


Written by Lorin

October 7, 2017 at 5:16 pm

Posted in systems

Antics, Drift and Chaos


Here’s the talk I gave at Strange Loop 2017.

Written by Lorin

October 3, 2017 at 11:16 pm

Posted in netflix, systems


Assumption of rationality


Matthew Reed wrote a post about Lisa Servon’s book “The Unbanking of America”. This comment stood out for me (emphasis mine):

By treating her various sources as intelligent people responding rationally to their circumstances, rather than as helpless victims of evil predators, [Servon] was able to stitch together a pretty good argument for why people make the choices they make.

In its approach, it reminded me a little of Tressie McMillan Cottom’s “Lower Ed” or Matthew Desmond’s “Evicted.”  In their different ways, each book addresses a policy question that is usually framed in terms of smart, crafty, evil people taking advantage of clueless, ignorant, poor people, and blows up the assumption.  In no case are predators let off the hook, but the “prey” are actually (mostly) capable and intelligent people doing the best they can.  Understanding why this is the best they can do, and what would give them better options, leads to a very different set of prescriptions.


Sidney Dekker calls this perspective the local rationality principle. It assumes that people make decisions that are reasonable given the constraints that they are working within, even though from the outside those decisions appear misguided.

I find this assumption of rationality to be a useful frame for explaining individual behavior. It’s worth putting in the effort to identify why a particular decision would have seemed rational within the context in which it was made.

Written by Lorin

June 28, 2017 at 1:55 am

Posted in systems

A conjecture on why reliable systems fail


Even highly reliable systems go down occasionally. After reading through the details of several incidents, I’ve started to notice a pattern, which has led me to the following conjecture:

Once a system reaches a certain level of reliability, most major incidents will involve:

  • A manual intervention that was intended to mitigate a minor incident, or
  • Unexpected behavior of a subsystem whose primary purpose was to improve reliability

Here are three examples from Amazon’s post-mortem write-ups of major AWS outages:

The S3 outage on February 28, 2017 involved a manual intervention to debug an issue that was causing the S3 billing system to progress more slowly than expected.

The DynamoDB outage on September 20, 2015 (which also affected SQS, auto scaling, and CloudWatch) involved healthy storage servers taking themselves out of service by executing a distributed protocol that was (presumably) designed that way for fault tolerance.

The EBS outage on October 22, 2012 (which also affected EC2, RDS, and ELBs) involved a memory leak bug in an agent that monitors the health of EBS servers.
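To make the second bullet of the conjecture concrete, here’s a toy simulation of the pattern. It is entirely hypothetical (not based on any of the AWS post-mortems above, and not anyone’s actual protocol): each server in a fleet takes itself out of service when a membership check times out, and the check slows down as capacity shrinks. A brief blip in the membership service then cascades into a fleet-wide outage; the mechanism whose purpose is to improve reliability is what takes the system down.

# Hypothetical sketch of the conjecture's second bullet; not AWS's actual
# protocol, just an illustration of the feedback loop.
import random

FLEET_SIZE = 100   # servers at the start
TIMEOUT_MS = 200   # a server takes itself out of service past this latency
random.seed(1)

def check_latency_ms(in_service: int, blip: float = 1.0) -> float:
    # Toy load model: the membership check slows sharply as fewer servers
    # remain and the survivors retry against the same backend.
    load = FLEET_SIZE / max(in_service, 1)
    return 90 * load * load * blip * random.uniform(0.8, 1.2)

in_service = FLEET_SIZE
for tick in range(6):
    blip = 2.2 if tick == 0 else 1.0   # one transient slowdown, then back to normal
    in_service = sum(
        1 for _ in range(in_service)
        if check_latency_ms(in_service, blip) <= TIMEOUT_MS
    )
    print(f"tick {tick}: {in_service} servers still in service")
    if in_service == 0:
        break

In a toy model like this the feedback loop is obvious; in a real system it usually only becomes visible in hindsight, which is part of why these incidents are so hard to anticipate.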

Written by Lorin

June 24, 2017 at 4:45 pm

Posted in systems


Changing a complex system is hard


I’ve been reading Drift into Failure by Sidney Dekker, and it’s a fantastic book about applying systems thinking to understand how complex systems fail. One of the things a systems thinking perspective teaches you is that we can’t predict how a complex system will respond to a change in inputs. In particular, an intervention intended to improve the system might make things worse.

There’s a great example of this phenomenon in a paper by Alpert et al. entitled Supply-Side Drug Policy in the Presence of Substitutes: Evidence from the Introduction of Abuse-Deterrent Opioids, which was mentioned on a recent episode of Vox’s The Weeds podcast.

The paper examined how the introduction of an abuse-deterrent version of OxyContin affected drug abuse. The FDA approved OxyContin pills that were more difficult to crush or dissolve; these pills were expected to deter abuse, since addicts tend to chew, snort, or inject the drug to increase its impact. The authors examined rates of OxyContin abuse before and after the new abuse-deterrent pills were introduced in different states.

What the authors found was that this intervention did have the effect of decreasing OxyContin abuse. Unfortunately, it also increased heroin-related deaths. The unexpected effect was that addicts substituted a more dangerous opiate, heroin, for OxyContin.

The only way to know whether our interventions have the desired effect on a complex system is to try them and measure that effect.

Written by Lorin

January 18, 2017 at 1:21 am

Posted in systems