Notes on #NoEstimates in Berlin

Last night I went to see Vasco Duarte, from Oikosofy, deliver a talk on project management with the provocative title NoEstimates – an unconventional approach for the “deliver on time” problem. After getting home, I put together a thread of the most interesting ideas in the talk for me. There was enough interest to justify me reading the book and writing them up in more detail, so I’m sharing my thoughts here for the curious – about the book, and also the talk.

It’s worth declaring where I’m coming from, to provide some context for this review: professionally, I do a weird grab bag of front and back end development, some devops-y bits, UX and user research, and workshops, through my company, Product Science.

My skills and experience are smeared across all these disciplines, and although I probably identify with product management as a discipline more than anything else, I still enjoy coding, and earn some of my money from doing so.

I don’t think of myself as an agile coach, but as I get older, process becomes more interesting, and I’m thinking about it more and more.

Picking fights with the #NoEstimates hashtag

It goes without saying that using any kind of #no$INSERT_TERM_HERE hashtag to share an idea is:

  • a quick, somewhat troll-ish way to draw attention to a thing you’re doing
  • also a quick way to rile anyone who identifies with that term

Sure enough, getting riled was one of the responses on the evening, both in the room and online – when I shared this thread after the event, it felt like I’d wandered into an agile coach bunfight.

There are some angry-sounding posts rebuking the arguments in the book in the replies, if you look. I don’t know enough about the background to really comment on them, so I’m staying out of that.

Instead, I’m going to share the ideas that were most interesting to me, to help you decide if it’s worth looking into the book and its ideas in more detail.

As an aside, the book is very cheap, and a very fast read. You can get through it in a few hours – I got home at 10:30, and had finished it the following morning before starting work.

Little if any relation between estimates and time taken to deliver work

The key idea I took from his talk was that as an industry, IT is terrible at making estimates, and that the majority of the projects we work on fail.

As the argument goes, dedicating time to estimating work is counterproductive, and should instead be seen as waste – something we ought to keep to a minimum.

https://twitter.com/cory_foy/status/427910102742364160

To support the argument, this chart came up in the book, and also in the talk – what the image shows is that there’s very little relation between the average time taken to deliver a story and its estimated size.

So, if we’re dedicating all this time to estimating our work, and using it to decide what to do, and if it is this ineffective at giving us useful information to plan with, why are we doing it?

Would it be worth trying not to estimate at all?

[Photo of slide: chart plotting estimated story size against average delivery time]

Maybe there’s another way?

Not ‘no estimation’, but ‘less estimation’

If you’ve ever used Pivotal Tracker, or a burndown chart, what he suggests instead probably won’t be news to you – instead of upfront estimation, it’s wiser to look at the work that gets delivered each week, and focus on breaking down stories so they’re:

  • small enough to deliver in a set time box (usually a week or two weeks)
  • roughly the same size as each other

You can then use this to project forwards, creating what he refers to as a forecast, and plan with that instead. If you know how much you’re getting done in an average week, you can project this rate forwards to work out how much will, or will not, be done by a given date.
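To make the arithmetic concrete, here’s a minimal sketch in javascript – the throughput figures are made up purely for illustration:

// Made-up weekly throughput figures, purely for illustration
var storiesDonePerWeek = [4, 6, 5, 3, 5];
var storiesRemaining = 40;

// Average how many stories get delivered in a typical week
var total = storiesDonePerWeek.reduce(function (sum, n) { return sum + n; }, 0);
var avgPerWeek = total / storiesDonePerWeek.length; // 4.6

// Project that rate forwards to forecast a likely finish
var weeksToFinish = Math.ceil(storiesRemaining / avgPerWeek); // 9 weeks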

As I said, this projecting forwards based on past performance isn’t massively groundbreaking, but he had some interesting ideas about how you might work out what to do instead, if you aren’t estimating.

The most interesting idea to me, that I wish was covered more explicitly in the book

One tool to help with a no-estimates approach that was news to me was the snappily titled Work Complexity Relative Evaluation diagram.

[Photo of slide: the Work Complexity Relative Evaluation diagram]

The idea, as I understood it, is that you place the units of work you think you need to do along two axes – technical complexity versus social (organisational) complexity.

You might think of technical complexity as how hard something would be to implement, and think through, if you were building it. It might cover how many moving parts there are to think about, and how technically risky it is. The further to the right something sits, the riskier it is, or the greater the effort you think you might need.

This axis isn’t concerned with depending on other people, or other teams working with you to get it delivered.

The other axis, social complexity, covers this.

This is more a function of how many people you need to speak to to get something delivered – how many people in other teams you depend on, how easy it is to get another part of the organisation to do something for you, or simply how well understood something is inside your team, outside of the technical axis.

There are two useful ideas bundled into these two axes:

  • there’s more to delivering a story than how technically difficult it is
  • shipping something frequently relies on other parties, and delays often happen outside a time-boxed development cycle

Yes, we have a matrix

Using these axes means you end up with – yes, a consultant’s favourite – a two by two matrix.

Instead of your team spending time estimating every single story along a Fibonacci scale, or in hours and days, you’d place stories in these four quadrants.

Things in the bottom left are in good shape to work on, and should be small enough to deliver right away, and fairly quickly, assuming they pass the INVEST test.

The time you might otherwise spend estimating each story and assigning hours or points to it, you would instead spend doing what you can to move things into the bottom left.

Moving things to the bottom left

Typically this would mean splitting stories into smaller, more independent parts, or coming up with units of work that reduce the complexity, or increase your understanding of a problem. The idea is that you break off progressively more of the problem each time, bringing you closer to delivering it, and creating some kind of measurable value.

This might be doing some technical research (you might call it a research spike) to share with your team if it’s new, risky technology you’re working with, or running some kind of experiment – do people click the button, or sign up on a landing page? – to gauge interest in a feature before dedicating time to building it.

It might also be something as straightforward as getting people from different teams in a room to talk through an API together, or (gasp!) speaking to actual users, and sharing prototypes with them.

Isn’t this just estimating along a different axis?

You can make the argument that there’s an implicit act of estimation going on here – plotting bits of work on these axes, and breaking a story down into something deliverable in a single day, both require you to think about size.

Also, many responses to the book I’ve seen so far take issue with whether the stats on estimates can be used to make the argument at the core of #NoEstimates – that we’re not very good at estimating as an industry, and that doing so is harmful.

I think focusing on the misleading, linkbaity hashtag misses a more interesting part of the message: that there’s a world outside the team building features, and that interacting with that world is important, and worth spending time and resources on, even if it feels intensely uncomfortable, and is outside the skill set the team currently has.

I also like the idea that stories as presented here are not necessarily about just delivering code, or nice looking mockups, but showing some progress towards understanding a business problem better, or reducing the risk associated with it.

I think it also implies that coding should probably be the last resort, which I think is good – shipping code is a really expensive, risky way to solve a problem, and once it’s shipped, it keeps being expensive, as it needs to be maintained.

A difficult pill

The product person in me loves this idea, but I think in Berlin it may be a hard message to hear, and I’ll try to explain why:

One thing I’ve found, compared to working in London, is that (anecdotally) there’s a real focus in Berlin on hiring developers to ship code, and designers to ship Sketch/Photoshop files, over anyone else with the skills you’d associate with discovery, like user researchers or, in some cases, product managers.

You could read last night’s message as essentially telling companies they’ve hired the wrong skill sets for their product teams, by concentrating on people who build things, based on the assumption that a problem is already well understood, when it might not be.

The other interesting idea in the book – rolling wave forecast

The other idea in the book that was new to me was the rolling wave forecast.

The diagram below, nabbed from the book, goes some way to explaining it. It seems to be a weird mix of release plan, roadmap and burndown chart. As far as I can see, it’s supposed to:

  • be less detailed over time like a roadmap, to communicate uncertainty of delivering X features by date Y
  • give an idea of the sequence of work being done, like a release plan, so you know when your pet feature might make it to users
  • highlight when you’re likely to miss a deadline by projecting the current rate of delivering work forward like a burndown chart, to invite discussion about changing priorities, and which parts of a project are most valuable to deliver

A rolling wave forecast

NB: in this diagram, the book presents a collection of user stories as a feature, in the way UX or product people might refer to an item on a roadmap as a theme, or perhaps an epic.

I’m not sure this does a better job than any of these, but then again, I’ve never come across one before – if you have no roadmap already, or the idea of a burndown chart sounds like black magic, maybe it’s useful.

I’d be curious about hearing from people who have used one, and where it fits in with the artefacts I listed above.

If you’re curious, you can see all the highlights I made as I read the book on hypothes.is here.

The talk

I was a bit turned off by the salesy feel of the talk, and lots of these ideas seem core to how I understand product management anyway. I took a few pics of slides that I think give an idea of the general feel of the talk, and if nothing else, it was nice to meet long-time Twitter friend @FJ in person for the first time.

The book

The book is an easy enough read if, like me, you haven’t really paid much attention to the NoEstimates movement, and it was free on the night (which was nice, as I had pretty much decided to buy it, to get a quick summary of the ideas in one place).

Regardless of whether you agree with the ideas (I don’t agree with all of them myself, and wouldn’t apply them all either, but as I said, there are some interesting ones there), being able to say you’ve read it, and why you agree or disagree, is worth more than 7 bucks to me.

So, along that axis, I’m ahead, I guess. Thanks to Vasco and the BCGDV crew for putting it on – you don’t need to agree with everything someone says to find being exposed to an idea useful, and going definitely felt useful.

Feel free to ask questions in the comments, or contact me directly – my DMs are open on Twitter, and I believe my about page lists all the ways you might contact me.

Further links

The NoEstimates book has a free first chapter

A rebuttal of arguments in the NoEstimates book, part one of a series

Another piece arguing against the presentation style common in the NoEstimates movement

Estimating without even talking to your team


Visualising the wiki holes you fall down with Pilgrim

I just came across Pilgrim, an interesting web thing that converts pages into a more readable, ad-free version of their former selves, but also visualises the links you click, to map out where you end up going as you read.

I’m finding these kinds of tools – like hypothes.is, and Pocket’s recent experiments in highlighting text in their app – interesting lately.

Not sure what I’ll do with it, but who knows, maybe it’ll be useful to refer back to in future.

How much CO2 does an office worker generate per year?

I just posted this to friends on Facebook, and it seems a good idea to share it here too, to help with my search:
Hello internet friends. Would a kind soul be able to help me out here?
I’m doing a recorded 20 minute talk in Feb about the environmental impact of building digital products for a free online conference, http://www.sustainableux.com, and I’m looking for leads to work out a number I want to refer to in the talk.
 
I’m after a single number, for the average carbon footprint of a single office worker, working in an office, full time, including their commute, in terms of tonnes of CO2 per year.
 
I know commutes vary wildly, and that’s fine. For this, an average will suffice, as I’m not pretending this will be accurate, just a ballpark figure.
 
I think I’ll mainly be speaking to an audience based in the US, or Europe, based on last year’s viewer figures, so extra points if the number applies to those regions.
 
I know this figure won’t be precise, and I’m aware there are loads of factors that affect this anyway. To the nearest tonne is probably okay.
 
If it helps, please think of this as a number for all those other places that aren’t where you work. I appreciate your office might be super green and virtuous, and you’d love to tell everyone how much you recycle, how you’re going to eco-heaven, and how everyone else is terrible, but for the purposes of this request I don’t think that will add to the conversation here.
 
Sorry about sounding grumpy in that last para. I’ll be super grateful and give you a shout out in the talk if you can find it.
 
Thanks y’all ❤

Recap – storing state in a browser for users

I’ve been working on some static sites recently, and I needed to show some content to someone, but allow it to be dismissed easily, and then stay dismissed. This is as much a note to future me as anyone else, but hopefully it’ll be helpful to some other soul on the net.

Doing this with frameworks like django or rails

When I’m working with a fully dynamic site, how you do this is usually hidden from you behind a handy data structure in the language your framework is written in.

For example, in django you often have a handy session object available, which works like a dictionary you can get and set data on. Let’s say you have a blog, and you want to set some state on the user, to mark that they’ve commented on a post you’ve written.

In some view code handling a GET request, you might set a value on the session like so:

request.session['has_commented'] = True

Then, later on you could check for this by calling get on the session object:

if request.session.get('has_commented', False):
    return HttpResponse("You've already commented.")

Python also has a native cookie library that does a similar job of storing state on a client browser, sparing you the gory details.

This is very handy, but when you’re working with a static site, you don’t have a server there to wrap all of it in some tasty syntactic sugar.

If, like me, you’ve had this abstracted away from you most of the time, it might be useful to know your options.

Your options if you need to do it yourself

If you want to keep data on the client

Right now, if you want to store state just on the client, you have two popular options: local storage and session storage. In both cases you use javascript to write values, with an API that looks a bit like so:

// Save data to sessionStorage
sessionStorage.setItem('key', 'value');

// Get saved data from sessionStorage
var data = sessionStorage.getItem('key');

The main difference between localStorage and sessionStorage is that sessionStorage is automatically cleared when you close the tab. By comparison, localStorage persists indefinitely – there’s no built-in expiry – so you can visit a page, set some data, close it, and access it again the following day from the same browser.

You can’t access this from a server, though – it never leaves the browser.
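As it happens, this maps nicely onto the problem I opened with – showing some content until it’s dismissed, and keeping it dismissed. Here’s a minimal sketch using localStorage; the element ids and key name are ones I’ve invented for the example:

// Hypothetical element ids and key name, invented for this example
var banner = document.getElementById('announcement');
var DISMISSED_KEY = 'announcement-dismissed';

// If it was dismissed on an earlier visit, hide it straight away
if (localStorage.getItem(DISMISSED_KEY)) {
  banner.style.display = 'none';
}

// When dismissed, hide it now, and remember the choice for next time
document.getElementById('dismiss').addEventListener('click', function () {
  banner.style.display = 'none';
  localStorage.setItem(DISMISSED_KEY, 'true');
});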

If you want to be able to access the data on the server

If you do want data to persist for more than one session, and you want to be able to read it on the server side (like the django example above), then cookies are a better fit. They work at the HTTP level, so they’re really sent as extra headers on the HTTP requests you make to a server somewhere. Mozilla’s docs are fantastic if you want to learn more about what’s really happening under the hood.

You might want a similar API to the session and local storage APIs, to keep them easy to remember, so you can use javascript to set data on a user’s browser like so:

// set a value as cookie
docCookies.setItem(name, value)

// get saved value from the cookies
docCookies.getItem(name)

If you want this, then you should look at the handy cookies.js library, which wraps it all up in some nice syntactic sugar, and saves you remembering esoteric invocations just to set some basic data on a client.
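To see what it’s saving you from, here’s a rough sketch of the raw API underneath – everything lives in one specially formatted string on document.cookie, which you have to parse yourself when reading:

// Setting a cookie directly – name=value, plus attributes like
// max-age (in seconds) and the path it applies to
document.cookie = 'has_dismissed=true; max-age=31536000; path=/';

// Reading one back means picking it out of the whole cookie string
var hasDismissed = document.cookie
  .split('; ')
  .filter(function (part) { return part.indexOf('has_dismissed=') === 0; })
  .map(function (part) { return part.split('=')[1]; })[0];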

Just remember, you can’t store much data in cookies compared to local storage or session storage.

Obviously, if you’re sending data back and forth between a browser and server, there are privacy implications, and in the EU you’re expected to inform users how you’re using cookies, and get consent before you start tracking them. Again, the MDN docs are a good, concise starting point if you need a reminder.



Super handy tip – search your own tabs in Firefox

If you’re like me, you have too many tabs open. So many, in fact, that you might open the same page multiple times because you forgot you already had it open.

Firefox tab search is your friend

When using Firefox, there’s a hidden feature which I only found out about last week: you can restrict a search to your own open tabs. This avoids opening the same site multiple times.

Try hitting Cmd+L (or Ctrl+L on Linux, I guess) as usual, to start entering a new URL to visit.

But first type a percent symbol, then a space, then start typing. Firefox will show a list of open tabs that match the text you’re typing:

[Screenshot: the Firefox awesome bar listing matching open tabs]

Not bad, eh?

One day every browser will have this.

Till then, it’s nice that Firefox is free.

Update: it turns out there are even more nice things here – there’s a whole raft of different searches you can carry out with the Mozilla awesome bar, to search only your favourites, only your browsing history, and so on. I found this out from Antony Ricaud’s tweet below, after Simon picked up on my own tweet when I discovered this nifty little feature.


Book review: Time Is Money – The Business Value of Web Performance

Since discovering how significant the energy impact of moving data over networks is for the Planet Friendly Web Guide, I’ve found myself reading more and more about web performance optimisation (WPO). Last week I ordered Tammy Everts’ book, Time Is Money – The Business Value of Web Performance. It arrived on Saturday morning, and I had finished it by the afternoon. Here’s a quick review.

TLDR: I really like it. It’s a short book that gives you ammo to help win arguments about web performance, in language product managers and other people who don’t code will understand.

You might already be able to explain what you would do if you had the time and budget to work on performance on a site or app you’re responsible for. This book helps you explain why someone should allocate the time and budget to let you make it happen.

Longer version: I really like it. It does what a published book does well, compared to a blog post or website: provide a concise argument, in a way that hopefully won’t feel dated within months of being published, and save you sifting through the entire internet to arrive at a similar conclusion yourself.

How we perceive performance

It begins with a brief primer on why speed and responsiveness (in the HCI, not mobile-friendly, sense) feel intuitively right, and how, regardless of technology, there are a few universal laws around how quickly an application ought to respond to user input, to allow people to feel productive and maintain a state of flow. When I say universal, it might as well be – she references literature on human-computer interaction from as far back as the 60s.

Why a business would care about performance

The next section presents a useful way to think about speed, in terms a business or project sponsor will be able to understand. If you’ve ever read about WPO, you’ll probably be familiar with the usual quotes about speed increasing conversion rates in e-commerce. But Tammy presents a few other ways to sell it: benchmarking against competitors with various monitoring tools; brand perception, where people responded to the same sites, with the same copy and the same design, but rated all of them as lower quality when presented over a slower connection; and comparing the impact of slow sites to the impact of downtime on a business.

I had never thought of selling it in this way, but it’s a really elegant way to think about it – and it’s totally compatible with how we might reasonably think about risk in business.

You might think about risk in these terms:

risk = severity of consequences x likelihood of it happening

Some risks have severe consequences. Having a site go down, for example, means people can’t donate, complete purchases, or find the information they need. It (hopefully) doesn’t happen very often, but because it’s so severe, we dedicate resources to avoiding downtime, like on-call rotas, building redundancy into our systems, and so on.

While a slow site may not be as dramatic as a site going down, degrading performance to the point that people stop using it, or go to a competitor, has a similar effect – the site stops making money, or letting people meet their needs. And worse, a slow site is not a one-off event – if it’s continually slow, the cumulative effect of users abandoning tasks over a longer period can be greater than a more dramatic, but shorter, period of downtime.
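To make that cumulative effect concrete, here’s a back-of-the-envelope comparison, with numbers made up purely for illustration:

// Made-up numbers, purely for illustration
var revenuePerHour = 1000;

// A dramatic but rare outage: four hours of total downtime in a year
var downtimeLoss = 4 * revenuePerHour; // 4,000

// A 'merely' slow site: a 5% conversion drop, every hour, all year
var slowSiteLoss = 0.05 * revenuePerHour * 24 * 365; // 438,000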

This section introduced me to a catchy term, the performance poverty line – the slower a site is, the more conversions tend to drop off, until, at around 6 seconds, they barely happen at all compared to further up the scale.

If you don’t work in e-commerce, but you work in an organisation with internal customers, the following chapter shows how to think about it in terms of productivity gains from sites working properly, or reduced infrastructure bills.

The how of performance

I’ll say it again – this is not a technical book, but it does provide a good grounding in the principles of performance: the difference between latency and bandwidth, how different parts of the infrastructure of the net affect performance, what things like a content delivery network are and why they matter, and the kinds of monitoring available these days – synthetic and real user monitoring (RUM). This section is very accessible – if someone knows what HTML is, they should be able to get through it easily, while still covering a lot of ground.

There’s some useful guidance on how you might target your performance efforts too, like which parts of a user journey tend to make the most sense to optimize first to see results.

The future of performance

The book rounds off with a few words about the future. There are a few new APIs in browsers for measuring performance in terms that are more useful to us than simply tracking how long an entire page took to load, and the book covers those, along with some further thoughts on where the function of tracking performance fits in an organisation.
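If you’re curious what measuring those finer-grained moments looks like in practice, browsers expose them through the Performance API – this quick sketch is mine, not from the book:

// Navigation timing: milestones of the page load, in milliseconds
var nav = performance.getEntriesByType('navigation')[0];
console.log('Time to first byte:', nav.responseStart);
console.log('DOM interactive:', nav.domInteractive);

// Paint timing: when the user first saw something render
performance.getEntriesByType('paint').forEach(function (entry) {
  console.log(entry.name + ':', entry.startTime);
});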

Before I read this book, I hadn’t heard of the role of digital performance manager, and from what I read, it feels like a cross between a very focused product manager and the kind of data scientist who gets their kicks from running analysis on the HTTP Archive with BigQuery (as an aside, this sounds quite fun – I’d love to hear from you if this actually is your job).

Who should buy this book

It feels like there are two clear audiences for this book:

  1. People into WPO who want to know how to sell it to others.
  2. People the first group would like to sell it to.

People into WPO who want to know how to sell it to others: if this is you, and you’re in a job where you want to start making this case at work, then it’ll probably cost you as much in billable time to read this review, go to Amazon, read some more reviews, and buy the book as the book itself costs. What are you waiting for?

The other audience seems to be the people the first group would want to give this book to – either because someone has handed it to them, to understand what the first group keep going on about, or because they don’t code, but think there might be something in this performance thing. It feels like this is the real audience for the book, and the writing is light and easy to read quickly for this reason.

If you don’t code as your primary way of making money, but you work with developers, it’s a good complement to the following short books, for understanding other aspects of working on or managing a digital product:

  1. Erin Kissane’s Elements of Content Strategy, (for content strategy and UX)
  2. Jimmy Janlen’s Toolbox for the Agile Coach (for sharing and planning agile work better in co-located environments)
  3. Kathy Sierra’s Badass (making users feel productive and effective when using your products).

In case you forgot – I really like it.