## Good to great

A few months ago I read Good to Great, a book about the factors that led to companies making a transition from being “good” to being “great”. Collins, the author, defines “great” as companies whose stock performed at least three times better than the overall market over at least fifteen years. While the book is ostensibly about a research study, it feels packaged as a set of recommendations for executives looking to turn their good companies into great ones.

The lessons in the book sound reasonable, but here’s the thing: if Collins’s theory is correct, we should be able to identify companies that will outperform the market by a factor of three in fifteen years’ time, by surveying employees to see whether their companies meet the seven criteria outlined in the book.

It has now been almost thirteen years since the book was published. Where are the “Good to Great” funds?

## Estimating confidence intervals, part 4

Here’s the latest installment in my continuing saga to estimate effort with 90% confidence intervals. Here’s the plot:

In this case, my estimate of the expected time to completion was fairly close to the actual time. The upper end of the 90% confidence interval is extremely high, largely because there was some work, optional for completing the feature, that I decided to put off to some future date.

Here’s the plot of the error:

It takes a non-trivial amount of mental effort to do these estimates each day. I may stop doing these soon.

## Not apprenticeship!

Mark Guzdial points to an article by Nicholas Lemann in the Chronicle of Higher Ed entitled The Soul of the Research University. It’s a good essay about the schizophrenic nature of the modern research university. But Lemann takes some shots at the notion of teaching *skills* in the university. Here’s some devil’s advocacy from the piece:

> Why would you want to be taught by professors who devote a substantial part of their time to writing projects, instead of working professionals whose only role at the university is to teach? Why shouldn’t the curriculum be devoted to imparting the most up-to-the-minute skills, the ones that will have most value in the employment market? Embedded in those questions is a view that a high-quality apprenticeship under an attentive mentor would represent no loss, and possibly an improvement, over a university education.

Later on, Lemann refutes that perspective, arguing that students are better off being taught at research universities by professors engaged in research. He seems to miss the irony that this apprenticeship model is *precisely* how these research universities train PhD students. For bonus irony, here was the banner ad I saw atop the article:

## Estimating confidence intervals, part 3

Another episode in our continuing series of effort estimation in the small with 90% confidence intervals. I recently finished implementing another feature after doing the effort estimates for each day. Here’s the plot:

Once again, I underestimated the effort even at the 90% level, although not as badly as last time. Here’s a plot of the error.

I also find it takes real mental energy to do these daily effort estimates.

## Crossing the river with TLA+

Lately, I’ve been interested in approaches to software specifications that are amenable to model checking. A few weeks ago in this blog, I wrote about solving a logic puzzle with Alloy. Today’s post is about solving a different logic puzzle. I found this one from the Alloy online tutorial:

> A farmer is on one shore of a river and has with him a fox, a chicken, and a sack of grain. He has a boat that fits one object besides himself. In the presence of the farmer nothing gets eaten, but if left without the farmer, the fox will eat the chicken, and the chicken will eat the grain. How can the farmer get all three possessions across the river safely?

To solve this, I used TLA+, a specification language developed by Leslie Lamport. It also uses PlusCal, an algorithm language that can be automatically translated into TLA+ using the TLA toolbox.

Here’s my solution, which includes PlusCal but doesn’t show the automatically translated parts of the model.

```
-------------------------------- MODULE boat --------------------------------
EXTENDS Integers, FiniteSets

CONSTANTS Farmer, Fox, Chicken, Grain

CREATURES == {Farmer, Fox, Chicken, Grain}

alone(animals, side) == (animals \in SUBSET side) /\ ~ Farmer \in side

somebodyGetsEaten(l, r) == \/ alone({Fox, Chicken}, l)
                           \/ alone({Fox, Chicken}, r)
                           \/ alone({Chicken, Grain}, l)
                           \/ alone({Chicken, Grain}, r)

safe(l, r) == ~somebodyGetsEaten(l, r)

safeBoats(from, to) ==
    { boat \in SUBSET from : /\ Farmer \in boat
                             /\ Cardinality(boat) <= 2
                             /\ safe(from \ boat, to \cup boat) }

(***************************************************************************
--algorithm RiverCrossing {
    variables left = CREATURES; right = {};

    process ( LeftToRight = 0 )
    {
    l:  while (left /= {})
        { await (Farmer \in left);
          with (boat \in safeBoats(left, right))
          {
            left := left \ boat;
            right := right \cup boat
          }
        }
    }

    process ( RightToLeft = 1 )
    {
    r:  while (left /= {})
        { await (Farmer \in right);
          with (boat \in safeBoats(right, left))
          {
            left := left \cup boat;
            right := right \ boat
          }
        }
    }
}
***************************************************************************)
=============================================================================
```

To solve the problem with the TLA toolbox, you’ll need to specify an invariant that will be violated when the puzzle is solved. I used `right /= CREATURES`.

Run the model, and it will produce a trace that violates the invariant:

(You’ll first need to translate the PlusCal into TLA+, and you’ll need to specify the value of the constants. I just chose “Model value” for each of them).

You can see the full model with the automatic PlusCal translation in one of my GitHub repos.
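The invariant trick works because TLC, the model checker, exhaustively searches the reachable states and reports a trace to any state where the invariant fails. The same idea can be sketched outside the toolbox as an ordinary breadth-first search over states; this is a minimal Python sketch of the search (not TLC itself, and the names are my own), where a state is the set of things on the left bank:

```python
from collections import deque

FARMER, FOX, CHICKEN, GRAIN = "Farmer", "Fox", "Chicken", "Grain"
CREATURES = frozenset({FARMER, FOX, CHICKEN, GRAIN})

def safe(side):
    # A bank is safe if the farmer is there, or no predator/prey pair is alone.
    if FARMER in side:
        return True
    return not ({FOX, CHICKEN} <= side or {CHICKEN, GRAIN} <= side)

def solve():
    # Breadth-first search; a state is the frozenset of things on the left bank.
    start, goal = CREATURES, frozenset()   # goal: everything on the right bank
    parent = {start: None}
    queue = deque([start])
    while queue:
        left = queue.popleft()
        if left == goal:
            # Reconstruct the sequence of left-bank states from start to goal.
            path = []
            while left is not None:
                path.append(left)
                left = parent[left]
            return list(reversed(path))
        # The bank the farmer is on, and the opposite bank.
        here, there = (left, CREATURES - left) if FARMER in left else (CREATURES - left, left)
        # The farmer crosses alone or with exactly one passenger.
        for passenger in [None] + sorted(here - {FARMER}):
            boat = {FARMER} | ({passenger} if passenger else set())
            new_here, new_there = here - boat, there | boat
            if not (safe(new_here) and safe(new_there)):
                continue
            new_left = new_here if FARMER in left else new_there
            if new_left not in parent:
                parent[new_left] = left
                queue.append(new_left)

path = solve()
print(len(path) - 1)  # number of crossings in the shortest solution: 7
```

Because BFS visits states in order of distance from the start, the first solution it finds is a shortest one: seven crossings, matching the classic answer (chicken over, return, fox over, chicken back, grain over, return, chicken over).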

## Estimating confidence intervals, part 2

Here is another data point from my attempt to estimate 90% confidence intervals. This plot shows my daily estimates for completing a feature I was working on.

The dashed line is the “truth”: it’s what my estimate would have been if I had estimated perfectly each day. The shaded region represents my 90% confidence estimate: I was 90% confident that the amount of time left fell into that region. The solid line is the traditional pointwise effort estimate: it was my best guess as to how many days I had left before the feature would be complete.

For this feature, I significantly underestimated the effort required to complete it. For the first four days, my estimates were so far off that my 90% confidence interval didn’t include the true completion time: over the whole feature, the interval was correct only 60% of the time.
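That calibration figure is straightforward to compute: given each day’s 90% interval and the true remaining time, coverage is just the fraction of days whose interval contained the truth. A minimal sketch, using made-up numbers rather than my actual estimates:

```python
def coverage(intervals, truths):
    """Fraction of days whose [low, high] interval contained the true value."""
    hits = sum(1 for (low, high), t in zip(intervals, truths) if low <= t <= high)
    return hits / len(intervals)

# Hypothetical daily 90% intervals (days remaining) and the true days remaining.
intervals = [(1, 4), (1, 3), (2, 5), (0.5, 2), (1, 2)]
truths = [6, 5, 4, 2, 1.5]   # the first two days' intervals miss badly
print(coverage(intervals, truths))  # 0.6
```

A well-calibrated estimator should see coverage near 0.9 for 90% intervals over the long run; anything much lower means the intervals are too narrow.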

This plot shows the error in my estimates for each day:

Apparently, I’m not yet a well-calibrated estimator. Hopefully, that will improve with further estimates.