Lorin Hochstein

Ramblings about software, research and other things

Sketching on the back end


In his essay The Cognitive View: A Different Look at Software Design, Robert Glass made the following observation about how software engineers do design work:

What we see here is a mental process, a very rapid process, an iterative process, a process in fact of fast trial and error. The mind forms a solution to the problem, knowing that it will be inadequate because the mind is not yet able to fully grasp all the facets of the problem. That problem solution, the mind knows, must be in the form of a model, because it is going to be necessary to try sample input against the model, run a quick simulation (inside the mind) on the model using the sample input, and get sample outputs (still inside the mind) from that simulation.

The essence of design, then, is rapid modeling and simulation. And a key factor in design is the ability to propose solutions and allow them to fail!

Design work incorporates trial and error. We think up a potential solution, and then try to evaluate whether it’s a good fit for the problem.

But not all potential solutions are easy to evaluate entirely inside one’s head. In fields where designers work on physical artifacts or visual interfaces, they use sketches to help with early solution evaluation, sometimes called formative evaluation. Bill Buxton’s Sketching User Experiences is a great book on this topic.

Those of us working on the back end don’t think of this early evaluation of solutions as “sketching”, but the metaphor fits. One of the things I find appealing about the Alloy modeling language is that it lets you sketch out different data models and check their properties.
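To give a flavor of what that kind of sketching buys you, here is a rough Python analogue of what Alloy’s analyzer does (this is not Alloy syntax, and the model and property are invented for illustration): state a property you believe about the data model, exhaustively enumerate every small instance, and look for a counterexample.

```
from itertools import product

# Claimed property of a sketched data model: "if every node has at
# most one parent, the structure cannot contain a cycle."

def has_cycle(edges, nodes):
    # In a graph with n nodes, any cycle closes within n steps.
    for start in nodes:
        frontier = {start}
        for _ in nodes:
            frontier = {b for (a, b) in edges if a in frontier}
            if start in frontier:
                return True
    return False

def at_most_one_parent(edges, nodes):
    return all(sum(1 for (_, b) in edges if b == n) <= 1 for n in nodes)

# Enumerate every directed graph on three nodes and hunt for a
# counterexample to the claim.
nodes = range(3)
candidates = [(a, b) for a in nodes for b in nodes]

for bits in product([0, 1], repeat=len(candidates)):
    edges = {e for e, bit in zip(candidates, bits) if bit}
    if at_most_one_parent(edges, nodes) and has_cycle(edges, nodes):
        print("counterexample:", sorted(edges))  # e.g. a self-loop
        break
```

The claimed property is false, and the exhaustive check finds a counterexample (a self-loop) immediately. Alloy expresses models declaratively and performs this kind of bounded checking far more efficiently; the point is just how cheap it is to test a property against a sketch before committing to a design.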

In a recent blog post, Michael Fogus describes working with clojure.spec as a form of sketching. He also makes the following observation about JSON:

While Lisp programmers have long known the utility of sketching their structs, I’m convinced that one of the primary reasons that JSON has taken over the world is that it provides JS-direct syntactic literals that can be easily typed up and manipulated with little fuss. JSON is a perfectly adequate sketching tool for many programmers.
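To make that concrete, here is the kind of throwaway sketch Fogus is describing, written as a Python dict literal to stay close to JSON (the record shape is entirely invented):

```
# Rough out the shape of a record in seconds: no schema, no class
# definition, just enough structure to poke at in a REPL.
user = {
    "id": 42,
    "name": "Ada",
    "roles": ["admin", "editor"],
    "settings": {"theme": "dark", "notifications": True},
}

# Sketching lets you try sample inputs against the shape immediately:
print(user["roles"][0])       # admin
print(len(user["settings"]))  # 2
```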

I believe UML failed because its designers did not understand the role that box-and-arrow diagrams play in software engineering design. They wanted to build a visual modeling language with well-defined semantics; we wanted to sketch.

Written by Lorin

February 20, 2017 at 2:54 am

Posted in software

I’d rather read code than tests


But the benefits [of test-driven development] go beyond that. If you want to know how to call a certain API, there is a test that does it. If you want to know how to create a certain object, there is a test that does it. Anything you want to know about the existing system, there is a test that demonstrates it. The tests are like little design documents, little coding examples, that describe how the system works and how to use it.

Have you ever integrated a third party library into your project? You got a big manual full of nice documentation. At the end there was a thin appendix of examples. Which of the two did you read? The examples of course! That’s what the unit tests are! They are the most useful part of the documentation. They are the living examples of how to use the code. They are design documents that are hideously detailed, utterly unambiguous, so formal that they execute, and they cannot get out of sync with the production code.

TheThreeRulesOfTdd, “Uncle” Bob Martin

I sat down with a co-worker today to discuss a pull request I had submitted earlier in the day.

“You read it already?” I asked.

“Yeah,” he replied, then amended: “…well, I didn’t read your tests.”

I wasn’t really surprised at that, since (a) it’s the non-test code that I’m worried about, and (b) I often don’t read tests in pull requests either.

Still, I paused for a moment to discuss this with him, because I remembered Uncle Bob’s paean to unit tests as documentation for code. My colleague admitted that he didn’t read tests because reading the tests required more effort than reading the code.

On further reflection, I realized that I, too, never look at tests when I’m trying to understand an unfamiliar codebase. I just read the code directly.

What’s going on here? My theory is: unit tests are too noisy to be useful for understanding.

For APIs, yes, examples are wonderful: if I need to write code that will call into somebody else’s API, I can never get enough examples. But for most of the code that I read, I’m not trying to understand how to call into it like a library, I’m trying to understand its role in the context of a larger system.

For the code I typically work with, the unit tests require complex setup and assertion sections. This increases the cognitive load on the reader: why expend the extra effort to understand the test when I can read the source directly?
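As a contrived Python sketch of that asymmetry (all names invented): the function under test says everything in three lines, while the test that documents the same behavior buries it in setup.

```
import unittest
from datetime import date
from unittest.mock import Mock

# The code under test: three self-explanatory lines.
def overdue_invoices(invoices, today):
    """Return unpaid invoices whose due date has passed."""
    return [inv for inv in invoices
            if inv.due_date < today and not inv.paid]

# The test: most of it is scaffolding, not information.
class TestOverdueInvoices(unittest.TestCase):
    def test_returns_only_unpaid_past_due_invoices(self):
        past_due_unpaid = Mock(due_date=date(2017, 1, 1), paid=False)
        past_due_paid = Mock(due_date=date(2017, 1, 1), paid=True)
        future_unpaid = Mock(due_date=date(2018, 1, 1), paid=False)
        invoices = [past_due_unpaid, past_due_paid, future_unpaid]

        result = overdue_invoices(invoices, today=date(2017, 6, 1))

        self.assertEqual(result, [past_due_unpaid])

if __name__ == "__main__":
    unittest.main()
```

To recover from the test what the list comprehension states directly, the reader has to mentally execute three mock objects.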

There are techniques for making tests more readable: factor your tests well, use a single assertion per test, use tools such as RSpec and Hamcrest that improve readability. Still, it always feels like it isn’t worth the effort to try to understand the tests. After all, I can always just read the code.

 

Written by Lorin

January 26, 2017 at 1:51 am

Posted in software


How open source beat agile


How does code land in your master branch? Do your team members commit directly to master, or do you merge in pull requests?

In 2008, the repository hosting company GitHub was founded. GitHub quickly became the dominant player in open source project hosting, and its pull request model of contributing to a project became so popular that “pull request” is now a generic term.

Two years after GitHub was founded, Jez Humble and David Farley published Continuous Delivery, a book about automated deployment. One of the central ideas of continuous delivery is that all team members should commit directly and frequently to the mainline branch in the version control repository (“master” in Git, “trunk” in Subversion). According to proponents of continuous delivery, working in long-lived feature branches that are only occasionally merged to mainline should be avoided.
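Reduced to commands, the two models look like this (the branch name is invented for illustration):

```
# Continuous delivery style: small changes committed straight to
# mainline, several times a day.
git checkout master
git pull
# ...make a small change...
git commit -a -m "Extract retry logic into a helper"
git push origin master

# Pull request style: the change lands on a branch and is merged
# after review.
git checkout -b retry-helper
# ...make the change...
git commit -a -m "Extract retry logic into a helper"
git push origin retry-helper
# ...open a pull request and merge it once it has been reviewed...
```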

Continuous Delivery was an enormously influential book. As an example of its influence, Netflix’s deployment system, Spinnaker, bills itself as a “continuous delivery platform”. And yet, I’ve never worked on a project where the developers committed directly to mainline. The “review and merge pull request” model has dominated my professional career. The problem is that it isn’t clear how to integrate the continuous delivery approach with code review[1]. I’d wager that the majority of developers who support the idea of continuous delivery are also merging feature branches via pull request rather than committing directly to mainline.

While we pay lip service to doing continuous delivery, we’ve made the choice to go the pull request route. What’s going on here?

I think this illustrates a larger tension between the open source way of doing software development and the agile way: we end up speaking the language of agile but adopting the processes of open source. To understand why agile and open source would be in tension with one another, it’s instructive to examine where the agile movement came from.

Agile was born out of the world of software consulting; the substring “consult” has twelve matches on the Authors page of the Agile Manifesto. The software projects that consultants work on tend to have three properties: they are co-located, they are synchronous, and they have a well-defined customer. A good illustration is Extreme Programming (XP), the ur-agile process, which requires co-located, synchronous development for pair programming and a well-defined customer for release planning.

By contrast, open source software projects are distributed, asynchronous, and don’t have well-defined customers. It’s natural, then, that open source processes would be different than agile ones. The continuous delivery approach of committing to mainline doesn’t make sense in an open source project, since most contributors don’t have permission to commit to mainline.

Most of the projects I’ve worked on have looked more like the agile context than the open source one, but we still do things the open source way.

When we talk about agile concepts such as continuous delivery, pair programming, and test-driven development, we refer to them as “processes”, but they’re really skills. You can’t just pick up a skill and be proficient immediately. One way to improve a skill is the apprenticeship model: observing how an expert applies that skill in context.

In open source, the entire process is visible by definition. We can observe how projects do code reviews, integration tests, new feature planning, and so on. In the world that agile came from, we can’t. We have access to books and talks by consultants, but we don’t get to see how they applied their skills to the specific difficulties they encountered.

Consider Gherkin-based specifications. Gherkin is popular in the agile world, but it isn’t used in open source projects, because open source projects don’t write specifications. Without being able to observe how experts write Gherkin-based specs, it’s difficult for a novice to assess whether the approach is worth pursuing[2]. The only time I’ve ever seen Gherkin on a project is when I wrote it myself.
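For readers who haven’t encountered it, Gherkin specifies behavior as structured, plain-language scenarios; here is a made-up example:

```
Feature: Account withdrawal
  Scenario: Withdrawing less than the balance
    Given an account with a balance of $100
    When the customer withdraws $40
    Then the remaining balance should be $60
```

The syntax is the easy part; what a novice can’t observe is how an expert decomposes a real system into scenarios like these.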

Because open source succeeds in making work visible where agile fails, developers have more opportunities to become better open source developers than agile developers.


  1. Yes, yes, if you do pair programming, then you are doing a kind of continuous code review that is compatible with continuous delivery. But not all teams want to do pair programming, and even when they do, there is still only a single reviewer.  ↩
  2. The biggest disappointment in the otherwise great book Specification by Example by Gojko Adzic is the lack of examples!  ↩

Written by Lorin

January 21, 2017 at 7:42 pm

Posted in software


Changing a complex system is hard


I’ve been reading Drift into Failure by Sidney Dekker, a fantastic book about applying systems thinking to understand how complex systems fail. One of the things a systems thinking perspective teaches you is that we can’t predict how a complex system will respond to a change in inputs. In particular, an intervention intended to improve the system might instead make things worse.

There’s a great example of this phenomenon in a recent paper by Alpert et al., Supply-Side Drug Policy in the Presence of Substitutes: Evidence from the Introduction of Abuse-Deterrent Opioids, which was mentioned on an episode of Vox’s The Weeds podcast.

The paper examined the impact of an abuse-deterrent version of OxyContin on drug abuse. The FDA had approved OxyContin pills that were more difficult to crush or dissolve, which were expected to deter abuse because addicts tend to chew, snort, or inject the drug to increase its effect. The authors examined rates of OxyContin abuse in different states before and after the new formulation was introduced.

The authors found that the intervention did decrease OxyContin abuse. Unfortunately, it also increased heroin-related deaths: addicts substituted one form of opiate for a more dangerous one.

The only way to know that our interventions have the desired effect on a complex system is to measure that effect.

Written by Lorin

January 18, 2017 at 1:21 am

Posted in systems

Personal productivity tools



Productivity tools have always held a special fascination for me, and I tend to futz around with multiple tools, trying to find the perfect match for my workflow. My toolset has been pretty stable for several months now. Here’s what I’m currently using.

OmniFocus

I’ve been a fan of Getting Things Done for a long time. Of the various GTD-supporting tools I’ve tried, I like OmniFocus the best. Useful features include:

  • Syncs well between laptop and phone
  • Easy to add to the inbox via keyboard shortcut in OS X
  • Easy to add to the inbox in the iOS app
  • Integrates with reminders on iOS, which means I can say to my watch “Remind me to do X” and “do X” ends up in my OmniFocus inbox
  • Per-project support for “serial tasks” (only one next action) and “parallel tasks” (multiple next actions). I use this all of the time.
  • I can put projects “on hold” and they don’t show up in current context. In particular, I have an on-hold “Someday” task which acts as a catch-all for things I don’t want to forget but that I don’t plan on doing in the near term.

My contexts are:

  • office
  • online
  • home
  • phone
  • work
  • waiting

VoodooPad

VoodooPad is a personal wiki. I mainly use it for two things: keeping context for each project I’m working on (e.g., pastes of recent error messages), and reference pages for things like URLs and commonly used code snippets or commands that I tend to forget.

I like that it’s free-form rather than just plain text: I can paste in images that render inline, and I can set code and terminal output in a fixed-width font and my notes in a variable-width font.

That said, what I’d really like is a system that lets me organize notes both by topic and by date. VoodooPad only organizes by topic, but it’s the closest thing I’ve found.

Emergent Task Planner

I use a notebook called the Emergent Task Planner to structure my day. I write down the tasks I’d like to accomplish that day and schedule them in chunks of time. I often don’t follow the specific schedule, but it helps to take some time to think about what I’m going to try to accomplish, and to explicitly schedule time for checking email so I’m less tempted to do it while working.

Ubiquitous capture tools

Getting Things Done has a notion of “ubiquitous capture”: being able to quickly capture content that you can come back to later. In addition to OmniFocus, I use a few other tools for ubiquitous capture:

Index cards

I keep a stack of index cards, held together with a binder clip, in my back pocket along with a Fisher Space Pen. It’s often faster to scribble on an index card than to take out my phone. This was inspired by Merlin Mann’s Hipster PDA.

CiteULike

When I encounter a book or academic paper I’d like to read, I clip it to CiteULike.

Instapaper

If I encounter an essay on the web that I don’t have time to read, I use Instapaper to capture it for later. It has great Kindle support: every week it automatically emails the captured content to my Kindle Paperwhite.

Pinboard

I use Pinboard to bookmark reference material. I was a Delicious user for a long time, but Pinboard’s UX is so much better that I’m happy to pay for it rather than use Delicious for free.

Written by Lorin

July 3, 2016 at 8:37 pm

Posted in Uncategorized


When software takes a human life


A Tesla driver was killed in a car crash while the Autopilot system was engaged. According to the news report:

Joshua D. Brown, of Canton, Ohio, died in the accident May 7 in Williston, Florida, when his car’s cameras failed to distinguish the white side of a turning tractor-trailer from a brightly lit sky and didn’t automatically activate its brakes, according to government records obtained Thursday.

These types of automotive systems are completely outside my area of expertise. That being said, I imagine that validating a control system that relies on complex sensor data must be incredibly challenging. The input space is mind-bogglingly huge, so how do you catch these kinds of corner cases in testing?

The failure here was not due to a “bug” (or “defect”, in academic software engineering jargon) in the traditional sense of the term. Yet there clearly was a defect in this system, and the result was a human fatality.

I was also struck by this line:

Harley [an analyst at Kelley Blue Book] called the death unfortunate, but said that more deaths can be expected as the autonomous technology is refined.

I wonder if future deaths will lead to additional regulations on how software engineering work is done in domains like this.

Written by Lorin

July 1, 2016 at 1:26 am

Posted in Uncategorized


Software development as engineering


Glenn Vanderburg argues eloquently that software development is a genuine engineering discipline.

 

Written by Lorin

June 14, 2016 at 12:11 am

Posted in software