Tuesday, January 24, 2012

The Lean Startup

I just finished reading Eric Ries' The Lean Startup, and am happy to say that it was much better than I expected.

When I heard about the book I thought that it only applied to cloud-based web startups: the kind of shops that could readily perform A/B testing over a weekend, an approach pioneered by Google early on.

My reaction to this was predictable: that's fine when you're making simple changes to web pages, but what if you're doing some heavy technical lifting, e.g., building new technology (Watson) or hard technology with deep foundational requirements (Dropbox)?

Ries describes something more subtle. He develops a general methodology that allows quantitative evaluation of assumptions, as quickly as possible, with the least possible amount of effort.

Dropbox is one of his most striking examples.

The challenge was that it was impossible to demonstrate the working software in prototype form. The product required that they overcome significant technical hurdles; it also had an online service component that required high reliability and availability. To avoid the risk of winding up, after years of development, with a product nobody wanted, Drew did something unexpectedly easy: he made a video.

The goal of the video was to validate the assumption that people would actually be interested in such a product. Of course in the Dropbox case the interest was there, but the book is filled with numerous examples in which it wasn't.

Ries' core approach consists of a large-scale feedback loop: build, measure, learn.

My two core takeaways are:

  • MVP -- the minimum viable product, and
  • Measure -- the importance of making clear, tangible predictions ahead of time.

Both ideas are important in any product that has a customer focus, whether the customers are online consumers or members of an internal department.

They are reminiscent of the techniques my group used years ago to measure uptake of product features by internal departments: if we deployed a page with a particular group in mind, we could see if they continued to use it a few weeks out. If not, we'd visit them and ask what the problem was. There wasn't any need to wait for feature requests (or to hear in a budget meeting that you weren't providing any value). We sought them out and fixed the problem, if at all possible. Monitoring at page level allowed us to make these judgments for small product increments.

MVP is simply getting something in front of people as quickly as possible -- it doesn't need to work, doesn't need to scale, and doesn't need to be fully polished, but it does need to give a feel for why someone would want to incorporate the product into their life.

Measure is more broadly important. Ries is perfectly willing to start with measurable goals that seem almost trivial. The idea is that, in short order, these goals can dispel easy rationalizations, myths of low-hanging fruit, etc.

In one of his examples, he aimed for revenue of a few hundred dollars a month to start. A few hundred a month sounds trivial, and certainly isn't sustainable, but after a few months he couldn't even manage that.

The point is that if his goal had been the few million dollars a year he'd need for true sustainability, it would have taken him years to realize that he was on the wrong track. This is because, realistically, even in the best case, the million-dollar goal would require a few years to achieve, pushing feedback out to years rather than months.

The Lean Startup doesn't focus solely on startups; it includes a number of examples from groups within large non-software companies (Procter & Gamble) and established software companies (Intuit), showing that a startup is more an attitude than a corporate structure.

I highly recommend this book. I don't consider it so much a Lean Startup book as an optimized-effort book: how to decide whether your efforts are wasting your time or actually taking you in the direction you want to go.

To be clear: I don't think this means that data measured at a fine-grained level is the only way of gathering feedback; e.g., it's hard to incrementally identify the best design for a large website (cf. Goodbye, Google). However, even in these cases, an MVP is important. It is vital to ground your ideas with your target audience sooner rather than later (even if it's with a short video simulating your idealized goal).

Tuesday, January 17, 2012


I know I'm late to this, but I'll share anyway:

If you're actively developing or supporting anything with a web front end, take a look at Watir for automated smoke and regression tests. The site has a "Watir in 5 minutes" page, but it probably takes more like 3; it's that well designed.
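To give a flavor of what a script looks like, here is a minimal smoke-test sketch of my own (not taken from the Watir site; the URL, element ids, and expected text are all placeholders):

require 'watir-webdriver'

# Open a browser and drive a (hypothetical) login page
browser = Watir::Browser.new(:chrome)
browser.goto 'http://example.com/login'

# Fill in the form and submit it (the ids are placeholders)
browser.text_field(:id, 'username').set 'smoke_test_user'
browser.text_field(:id, 'password').set 'not_a_real_password'
browser.button(:id, 'login').click

# Fail loudly if the expected text never shows up
raise 'Smoke test failed' unless browser.text.include?('Welcome')

browser.close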

Watir stands for Web Application Testing in Ruby. It is:
  • Open source
  • Simple to install: just a Ruby library (gem)
  • All of Ruby is available when writing test scripts
  • Well regarded
  • Widely used


Watir Limitations

  • “W3C” web only!
    • No plug-ins (ActiveX, Flash)
  • No recording
    • Recording tools are available, but they are not part of the core effort
Watir Advantages
  • Widely used
  • Very easy to use
  • Programmer friendly
  • Ruby is sane
  • Terse without being obtuse
Interactive Development
    You can debug scripts interactively using irb

For example, this screenshot shows a section of a browser window and an irb window about to execute a method to click the browser's edit button:

[Screenshot: browser window and irb session, before clicking the edit button]

resulting in:

[Screenshot: the same page after the edit button has been clicked]
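
In text form, such an irb session looks roughly like the following (a sketch of my own; the URL and the button text are placeholders):

require 'watir-webdriver'

# Attach a browser and load the page under test
@browser = Watir::Browser.new(:chrome)
@browser.goto 'http://example.com/records'

# Poke at the page before committing anything to a script
@browser.title
@browser.button(:text, 'Edit').exists?

# Click the edit button and check the result
@browser.button(:text, 'Edit').click
@browser.text.include?('Editing')
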
Page elements are referenced by:
  • Type and id / displayed text / link destination, etc.
    • @browser.select_list(:id, 'StatusSelect')
  • or index
    • @browser.buttons[3]
      Index is obviously not the preferred approach, but it is occasionally necessary, e.g., for pages with four identical “submit” buttons (see the short sketch below).
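Here is a compact sketch combining both styles (the id and the option value are hypothetical):

# Preferred: locate by type and id
@browser.select_list(:id, 'StatusSelect').select 'Closed'

# Fallback: locate by index when several identical elements exist
@browser.buttons[3].click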

On a maintainability note: I recently had to change a script from Firefox (which I'm becoming increasingly disenchanted with) to Chrome. The change only required altering two lines of code:

# Before (Firefox, using the firewatir gem):
#   require 'firewatir'
#   @browser = Watir::Browser.new

# After (Chrome, using watir-webdriver):
require 'watir-webdriver'
@browser = Watir::Browser.new(:chrome)

However, I did find the behavior a bit different between Chrome and Firefox: if a button wasn't visible on the page, Chrome wouldn't click it. I never had an issue with this in Firefox.

Update: upon further investigation, it appears that this may be the behavior of newer versions of Watir rather than a Chrome compatibility issue.
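
Either way, a small guard keeps a script from depending on that behavior (a sketch; the button id is a placeholder):

# Wait for the button to appear, and only click it once it is visible
button = @browser.button(:id, 'save')
button.wait_until_present
button.click if button.present?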