Book review: Time is Money – The Business Value of Web Performance

Since discovering how significant the energy impact of moving data over networks is for the Planet Friendly Web Guide, I’ve found myself reading more and more about web performance optimisation (WPO). Last week I ordered Tammy Everts’ book, Time is Money – The Business Value of Web Performance. It arrived on Saturday morning, and I had finished it by the afternoon. Here’s a quick review.

TL;DR: I really like it. It’s a short book that gives you ammo to help win arguments about web performance, in language that product managers and other people who don’t code will understand.

You might already be able to explain what you would do if you had the time and budget to work on performance on a site or app you’re responsible for. This book helps you explain why someone should allocate the time and budget to let you make it happen.

Longer version: I really like it. It does what a published book does well compared to a blog post or website: it provides a concise argument, in a way that hopefully won’t feel dated within months of publication, and saves you sifting through the entire internet to arrive at a similar conclusion yourself.

How we perceive performance

It begins with a brief primer on why speed and responsiveness (in the HCI sense, not the mobile-friendly sense) feel intuitively right, and how, regardless of technology, there are a few universal laws about how quickly an application ought to respond to user input if users are to feel productive and maintain a state of flow. When I say universal, it might as well be – she references literature on human-computer interaction from as far back as the 1960s.

Why a business would care about performance

The next section presents useful ways to think about speed that a business or project sponsor will be able to understand. If you’ve ever read about WPO, you’ll probably recognise the familiar quotes about speed increasing conversion rates in e-commerce. But Tammy presents a few other ways to sell it: benchmarking against competitors with various monitoring tools; brand perception, where people respond to the same sites, with the same copy and the same design, but rate all of them as lower quality when they’re presented over a slower connection; and comparing the impact of slow sites to the impact of downtime on a business.

I had never thought of selling it in this way, but it’s a really elegant way to think about it – and it’s totally compatible with how we might reasonably think about risk in business.

You might think about risk in these terms:

risk = severity of consequences x likelihood of it happening

Some risks have severe consequences. Having a site go down, for example, means users can’t make donations, complete purchases, or find the information they need. It (hopefully) doesn’t happen very often, but because it’s so severe, we dedicate resources to avoiding downtime: on-call rotas, building redundancy into our systems, and so on.

While a slow site may not be as dramatic as a site going down, degrading performance to the point that people stop using it, or go to a competitor, has a similar effect – the site stops making money, or stops letting people meet their needs. Worse, a slow site is not a one-off event – if it’s continually happening, the cumulative effect of users abandoning tasks over a long period can be greater than that of a more dramatic, but shorter, period of downtime.
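To make that concrete, here’s a back-of-envelope sketch of the comparison. All the numbers here are entirely hypothetical – the point is just the shape of the arithmetic:

```javascript
// A back-of-envelope comparison, with entirely hypothetical numbers.
const hourlyRevenue = 1000; // hypothetical revenue per hour, in £

// Downtime: severe consequences, low likelihood.
const outageHoursPerYear = 4; // hypothetical
const downtimeLoss = hourlyRevenue * outageHoursPerYear; // £4,000

// Slowness: milder consequences, but happening all the time.
const slownessRevenueDrop = 0.02; // hypothetical 2% lost to abandonment
const hoursPerYear = 24 * 365;
const slownessLoss = hourlyRevenue * slownessRevenueDrop * hoursPerYear; // £175,200

console.log({ downtimeLoss, slownessLoss }); // the chronic loss dwarfs the one-off
```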

This section introduced me to a catchy term, the performance poverty line – the slower a site is, the more conversions tend to drop off, until, at around six seconds, they barely happen at all compared to further up the scale.

If you don’t work in e-commerce, but you work in an organisation where you have internal customers, the following chapter shows how to think about performance in terms of productivity gains from sites working properly, or reduced infrastructure bills.

The how of performance

I’ll say it again – this is not a technical book, but it does provide a good grounding in the principles of performance: the difference between latency and bandwidth, how different parts of the infrastructure of the net affect performance, what things like content delivery networks are and why they matter, and the different kinds of monitoring available these days – synthetic and real user monitoring (RUM). This section is very accessible – if someone knows what HTML is, they should be able to get through it easily – while still covering a lot of ground.
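As a minimal sketch of the RUM side – synthetic monitoring fetches a page from a test agent on a schedule, while RUM captures timings from real visitors’ browsers – something like this runs in the page itself (the /analytics endpoint here is hypothetical):

```javascript
// A minimal RUM sketch: capture load metrics from a real visitor's
// browser and post them to a (hypothetical) /analytics endpoint.
window.addEventListener('load', () => {
  // Wait a tick so loadEventEnd has been recorded.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType('navigation');
    if (!nav) return;
    const metrics = {
      ttfb: nav.responseStart - nav.requestStart, // time to first byte
      domComplete: nav.domComplete,
      loadTime: nav.loadEventEnd - nav.startTime,
    };
    // sendBeacon posts the data without holding the page up.
    navigator.sendBeacon('/analytics', JSON.stringify(metrics));
  }, 0);
});
```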

There’s some useful guidance on how you might target your performance efforts too, like which parts of a user journey tend to make the most sense to optimize first to see results.

The future of performance

The book rounds off with a few words about the future. There are a few new APIs in browsers for measuring performance in terms that are more useful to us than simply tracking how long an entire page took to load, and the book covers those, along with some further thoughts on where the function of tracking performance fits in an organisation.
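As a small sketch of what those APIs look like, here’s how a browser’s PerformanceObserver can report paint timings – moments a user actually sees something appear, rather than when the whole page finished loading:

```javascript
// A minimal sketch: observe paint timings rather than full page load.
const observer = new PerformanceObserver(list => {
  for (const entry of list.getEntries()) {
    // 'first-paint' and 'first-contentful-paint' mark when pixels
    // first appear – usually long before the load event fires.
    console.log(`${entry.name}: ${Math.round(entry.startTime)}ms`);
  }
});
observer.observe({ entryTypes: ['paint'] });
```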

Before I read this book, I hadn’t really heard of the role of digital performance manager. From what I read, it feels like a cross between a very focused product manager and the kind of data scientist who gets their kicks from running analysis on the HTTP Archive with BigQuery (as an aside, this sounds quite fun – I’d love to hear from you if this actually is your job).

Who should buy this book

It feels like there are two clear audiences for this book:

  1. People into WPO who want to know how to sell it to others.
  2. People the first group would like to sell it to.

People into WPO who want to know how to sell it to others. If this is you, and you’re in a job where you want to start doing this, then reading this review, going to Amazon, reading some more reviews, and buying the book will probably cost you as much in billable time as the book itself costs to buy. Get it, and start making a case at work. What are you waiting for?

The other audience seems to be the people the first group would want to give this book to – either because someone has handed it to them to read, to understand what that person keeps going on about, or because they don’t code, but think there might be something in this performance thing. It feels like this is the real audience for the book, and the writing is light and easy to read quickly for this reason.

If coding isn’t your primary way of making money, but you work with developers, it’s a good complement to the following short books for understanding other aspects of working on or managing a digital product:

  1. Erin Kissane’s The Elements of Content Strategy (for content strategy and UX)
  2. Jimmy Janlen’s Toolbox for the Agile Coach (for sharing and planning agile work better in co-located environments)
  3. Kathy Sierra’s Badass (for making users feel productive and effective when using your products).

In case you forgot – I really like it.

Sporadic video recommendation #3 – Yulia Startsev on side effects, promises, and generators in JavaScript

I’ve been trying to get back up to speed with JavaScript recently, and with all the new features being added each year, I’m starting to like it much more than I did before. Try as I might, though, I hadn’t really got my head around how to use generators, or when they might be useful.

After seeing this talk by Yulia Startsev, it all makes a lot more sense – the talk gives some useful background on how the nicer new async/await syntactic sugar is made possible with generators, and on how the new Firefox debugger is built and works.

It’s not short (45 minutes), but if, like me, you’ve been feeling a bit shaky about some of the newer Promise and generator syntax in JavaScript, it’s worth a look.
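To illustrate the core idea from the talk – that async/await is essentially sugar over generators and promises – here’s a minimal sketch of a runner that drives a generator yielding promises (the URL is just a placeholder):

```javascript
// A minimal sketch of how async/await can be expressed with generators.
// run() drives a generator that yields promises, resuming it with each
// resolved value – roughly what async/await desugars to.
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(result) {
      if (result.done) return resolve(result.value);
      Promise.resolve(result.value)
        .then(
          value => step(gen.next(value)),
          err => step(gen.throw(err)) // surface errors inside the generator
        )
        .catch(reject);
    }
    step(gen.next());
  });
}

// Usage: `yield` behaves like `await` inside the generator.
run(function* () {
  const res = yield fetch('https://example.com/data.json'); // placeholder URL
  const data = yield res.json();
  return data;
});
```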

This is not news but Mozilla’s Developer Docs site is fantastic

I was recently trying to debug why a snazzy passwordless sign-in flow wasn’t working for me in Firefox, when it worked just fine in Chrome, and the investigation led me to reading up on CORS (Cross-Origin Resource Sharing), a way to share access to resources (like images, JS files, and so on) across domains.
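As a minimal sketch of what CORS looks like from the browser’s side (api.example.com here is a hypothetical host):

```javascript
// A cross-origin request like this only succeeds if the server replies
// with an Access-Control-Allow-Origin header matching the page's origin.
fetch('https://api.example.com/session', {
  credentials: 'include', // sending cookies cross-origin also requires
                          // Access-Control-Allow-Credentials: true
})
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(err => console.error('Blocked by CORS, or failed:', err));
```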

I’m really impressed by the writing – I came away learning loads more about something I thought I already knew.

Good stuff.

Does peer to peer matched funding exist online?

I met a guy a few years back, Premasagar Rose. We got on well, and we had a lot of similar interests, but I didn’t know he had left the UK for Portugal until I saw a tweet along these lines:

This isn’t something I’d wish upon anyone, and the story he tells of wildfire spreading across the region and burning down homes is horrifying. The thread below shows what I’m talking about:

The pledgebank model and social proof as a way to increase donations

Now, I feel pretty powerless about all of this, and I want to help.

While donating some figure like £25 is better than nothing, what would be better would be some way to match funds with friends – so maybe three of us each donate the same amount, in return for me agreeing to come in on a donation they make, when they want help increasing the effect of a donation to something they care about.

Essentially I want to try out a variant of the pledgebank mechanism.

So, I’ve sent this tweet out to test the idea:

There’s of course some expectation that the next causes won’t be something I massively disagree with, but I haven’t come across this model before, and I’m curious whether that’s because it’s failed before, and why.

Or whether it’s just a pattern I haven’t come across yet on donation sites.

Why it has me curious

I’m curious because (if you’ll excuse the jargon) it seems like a way to use your social capital to boost your ability to make financial donations, while creating social proof around donating – without it feeling so much like a humblebrag when you post on social media about the donation you just made and how much better a person you are than everyone else (I’m totally guilty of having done this before).

Is this common in the world of online fundraising already? Let me know in the comments if so, or get in touch in the usual ways.

Update: it totally worked! WOOHOO!

So while I was writing this, I got two responses! One from @bash, whom I’ve known for a few years through the internet:

And one from Linda Humphries, who I met at MapCamp less than a month beforehand:

Okay, this is nice – but how do you know the others really are donating too, apart from, well… just trusting each other?

It occurred to me that the obvious way to check whether a donation really had been made was to use the supporter listing on the donation page:

[Screenshot: the supporter listing on the donation page, showing all three donations]

BOOM!

Three times the amount I would have been able to donate by myself, and having an audience of two peers made me more likely to follow through and donate. Also, the fact that I had a personal connection to Prem made it possible for Bash and Linda to donate as well – they might not know Prem, but they do have a connection to me.

Of course, I now need to actually follow through myself when Bash or Linda want to donate, but I’ve been pretty explicit and public about the terms, and I have all sorts of reasons not to renege on the deal.

My guess is that after we’ve all donated the three amounts, we’re free of any further obligations, and the experiment has run its course.

This is not a new idea – yes, I know ROSCAs are a thing already

What I’ve just described is pretty close to something called a Rotating Savings and Credit Association (ROSCA) – loads of people who don’t have access to banking use them all the time outside of Western Europe. I did a tiny bit of work for a startup trying to make them more accessible back in 2009, and I haven’t really thought about them since.

But I wonder if the principle here, with small enough groups and small enough amounts, might be a worthwhile pattern to explore for fundraising in future.

If this is interesting to you, here’s how you can help – next time you’re thinking of donating to a charitable cause, see if you can find two other folk you know to come in on similar terms. I’d love to see what questions come up, and whether there’s a way to make the pattern easier to understand and apply online.

As ever, if you have questions or comments, hit me up in the comments, or get in touch using the normal mechanisms. Ta!

An update on the election hack day and the goals of WhoTargets.me

Earlier in August, I went to a hack day with a few friends to work on WhoTargetsMe, a project started by some people in London. We ended up working on the project because we felt that platforms like Facebook had become powerful in the same way you might consider TV and the press to be powerful when it comes to influencing elections.

Thing is, there’s not much in the way of oversight for platforms like Facebook, especially during elections, so it’s very hard to see if the platform is being used in a malicious way.

The appeal of WhoTargetsMe, for me at least, was that it was a clever approach to building a dataset that would allow some kind of scrutiny of how Facebook was being used in elections, and it seemed a good way to work towards the things Tom Steinberg outlined:

What I want is this: I want Facebook and Google to show goodwill by voluntarily publishing data on the political adverts that are purchased on their platform and shown to users in the U.K. in the next six weeks.

To be more specific, I want:

  • A copy of each unique advert (e.g image/text/video)
  • Data on who this advert was targeted at (e.g everyone/only women/only people in London)
  • Data on how many people have been shown each advert
  • Information about who the buyer was

Earlier this week, in a briefing with TechCrunch, Facebook announced something that felt like progress towards this goal:

Facebook briefed TechCrunch on the changes that include hiring 1,000 more people to its global ads review team over the next year, and making it so anyone can see any ad run by any organization on Facebook instead of only the ads targeted to them.

So one of the key ideas – making adverts less ‘dark’ – looks like it might actually be delivered.

Also, largely as a result of the growing evidence of Russian interference in the election, Facebook agreed to share a set of ads with the US Congress, along with some information about their use. This snippet from Facebook’s own blog is enlightening, but the underlying data doesn’t seem to have been shared beyond the congressional investigation:

Most of the ads appear to focus on divisive social and political messages across the ideological spectrum, touching on topics from LGBT matters to race issues to immigration to gun rights. A number of them appear to encourage people to follow Pages on these issues.

Here are a few other facts about the ads:

  • An estimated 10 million people in the US saw the ads. We were able to approximate the number of unique people (“reach”) who saw at least one of these ads, with our best modeling

  • 44% of total ad impressions (number of times ads were displayed) were before the US election on November 8, 2016; 56% were after the election.

  • Roughly 25% of the ads were never shown to anyone. That’s because advertising auctions are designed so that ads reach people based on relevance, and certain ads may not reach anyone as a result.

  • For 50% of the ads, less than $3 was spent; for 99% of the ads, less than $1,000 was spent.

It’s a shame that it takes a train wreck of an election for this to come out, and I hope that sharing this kind of information won’t always require the kind of disastrous election we saw in November 2016. But at least it sets a precedent, making it easier to campaign for this information to be shared more regularly, as a step towards making the use of platforms like Facebook in elections more transparent.

On Fairphone, and sustainable electronics

I’ve been a Fairphone user since 2013, when the first phone came out, and I now use the FP2, the first phone the company designed fully themselves. In this post, I explain the process of upgrading it to extend its life, compared with buying a new one, and how hard doing sustainable electronics is.

A bit of background around Fairphone

It’s nice having things like smartphones and handy electronics, but if you spend any time thinking about what really goes into getting them to us, you’ll quickly realise there are some very ugly sides to doing so. Many of the minerals going into the electronics we use come from areas wracked by conflict, and conditions inside some of the factories making them can be awful. Beyond the human cost, there’s the sheer amount of waste created by digging this stuff out of the ground, turning it into chips and so on, and shipping it around the world to us – and it’s fair to say that if we do know about it, most of us are either in a state of denial or depression about it.

Once hardware is with us, it’s often so hard to repair that it’s often cheaper or simpler to buy a new piece of hardware and send the old one to landfill than to try to fix it – and the trend across the industry is generally one that’s making this worse.

In the face of this, it’s nice to know there are some companies looking at the problem and trying to design a solution as if people, and well… the rest of the world, mattered. One of these companies is Fairphone. Coming from roots as a pressure group campaigning about the human cost of the electronics industry, Fairphone is now one of the most interesting companies making electronics, and I’ve owned both generations of their phones, the Fairphone FP1 and the Fairphone FP2, since I first heard of the company in 2012.

Aiming for sustainability through modularity and openness in phones

You can tell Fairphone is a product coming from a group of service designers – one thing I like about it is the attention to the entire lifecycle of the product, as well as how it’s used, providing alternatives to needing to buy a whole new phone if you want to benefit from what the designers have learned about how to improve it.

I’ll give a couple of examples below:

Replaceable cases

If you want a phone to last, it’s common to put the handset in some kind of protective case. Of course, for many phones this ruins the lines, and as a result it’s common to leave a phone exposed to damage, largely to keep it looking comparatively sleek and fitting well in your pocket.

Fairphone’s approach is to design the phone so the outer casing is already slightly ruggedised, doing the job of protecting it, and, crucially, easy to replace – so when you DO inevitably drop it or damage it through wear and tear, you can buy just that part. By designing the case with this in mind, you don’t end up needing a bulky protective case that makes the phone feel so much larger and more awkward to handle.

It might be a stretch to refer to this as modular, but you can see this idea of having replacement parts in the cases. Over the last year and a half, I’ve managed to wear out part of my case, largely through dropping it and general abuse. So I ended up buying the replacement case that new iterations of the FP2 now ship with. It was easy to fit at home, and it feels like the design is informed by actual user feedback since the original launch – the shape is slimmer, making the phone feel smaller and fit better in my pocket, and the new case uses a different, higher-quality plastic that is slightly rough, making it less slippery and safer in my hand.

Better cameras

I think I’ve had my FP2 for about two years now, and over that time I’ve been largely happy with it for what I use it for – the GPS works well for wayfinding and exercise, and it’s more or less fast enough to be a good working device. The camera hasn’t been great, though, and the battery life has been a pain at times.

The new generation of the phone has a better set of cameras, but for existing owners like me, Fairphone have made the camera modules available to order separately.

I ordered them, and replacing them turned out to be straightforward. I now have a new camera and a new case, and the phone largely feels like a new handset, extending its useful life by at least a year or so.

You can see the difference in two somewhat similar photos below:

The limits of this approach

The camera works better now, and I’m much happier with the case, but there are limits. When it was first announced, the Fairphone 2 was released with Android 5. This was okay, but newer versions of the Android operating system have improvements to how permissions and privacy work. This year, Fairphone managed to release an update to Android 6, but many new phones now ship with a newer version of the operating system with further improvements.

From what I can tell, because of some details of the chipset used in the FP2, it either can’t be upgraded to Android 7, or it’s going to be a pain to do so. It’s not a concern right now, but it shortens the phone’s useful life.

This is really, really hard

I guess the point of this post is that even Fairphone, one of the world leaders in building electronics in an open, sustainable, largely planet-friendly way, has a hard time making a business out of building a complex physical product this way – but it still feels worth aiming for, because, well… we only have the one planet, and people matter.

If this interests you, you might also be interested in the IoTMark project that came out of the Open Internet of Things Certification Mark event run this summer, and DotEveryone’s efforts to make a Trustworthy Tech Mark.

Trying out a vision statement for the Planet Friendly Web Guide

As I mentioned before, I’m part of the Mozilla Open Leadership programme as an Open Project Lead. In this post, I’ll write a bit about getting the vision statement together, and the thinking behind it.

First, here’s the statement as of 13th September:

The Planet Friendly Web Guide: I’m working with web professionals, campaigners, and academics, to build tools and information resources for web professionals so that they can understand and radically reduce the environmental impact of the web.

About the structure

You might notice the structure. I’m deliberately following the structure outlined in the Open Leaders training guide:

I’m working with [community, allies, contributors] to [make, build, teach, or do something] so that [audience, end users, consumers, community members] can [do something different, achieve a goal]

It follows a familiar mad-libs format, much like coming up with problem statements or similar when building digital products. I’ve taken clients through this same process for project kickoff workshops, with a few slight changes.

In more detail

I’ll unpack this a bit, and explain the parts that were emphasised in the initial vision statement.

Web professionals, campaigners, and academics

I’m initially aiming this at people I’ve delivered talks to, and people I’ve been able to convince to come to meetups I’ve run before. If I can’t get some of them on board, I have no chance of getting this off the ground at all.

Build tools and information resources for web professionals

One thing I’ve learned from looking at other organisations is that while it’s useful to just share information, or some kind of easy-to-consume info product (books, courses, etc.), having tools that validate work against a stated goal lets a set of best practices be built into a workflow, so following them happens by default.

You see this with continuous integration pipelines, web performance budgets, and some agile working practices, all of which are designed to surface problems, and variation away from an ideal state, as early as possible.

Understand and radically reduce the environmental impact of the web

We have a finite carbon budget for the planet if we want to stay inside safe limits for life. The amount of change to how we live and work needed to stay within two degrees of warming is going to be breathtaking – it’ll need to change pretty much every industry we can think of, including the web.

Right now, the fact that IT has the same carbon footprint as aviation, and is growing around twice as fast, barely registers among most of the people building the systems that will replace the ones we currently rely on.

If you don’t know how IT contributes to the CO2 emissions driving climate change, your chances of reducing the negative impact it has fall drastically.

Is this clear? How could this be clearer?

Right now, I’m expecting to start this project by taking the research I’ve been doing and putting into talks, and arranging it into a book or guide of some kind, but the end goal would be to find some way to automate this process – i.e. creating a tool that lets you check a site against a set of criteria, much like how linters and validators work (think Lighthouse for progressive web apps, Ecograder for single pages, and so on).
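As a rough sketch of the kind of check such a tool might automate – assuming a hypothetical 500 KB page-weight budget, and only measuring the top-level document rather than every sub-resource:

```javascript
// A minimal sketch of an automatable check, assuming a hypothetical
// 500 KB budget and Node 18+ for the built-in fetch. A real tool would
// also follow sub-resources (images, JS, CSS), much as a linter walks
// a whole codebase.
const BUDGET_BYTES = 500 * 1024;

async function checkPageWeight(url) {
  const response = await fetch(url);
  const bytes = (await response.arrayBuffer()).byteLength;
  const ok = bytes <= BUDGET_BYTES;
  console.log(`${url}: ${(bytes / 1024).toFixed(1)} KB – ${ok ? 'within' : 'over'} budget`);
  return ok;
}

// Exit non-zero so a CI pipeline fails the build when over budget.
checkPageWeight('https://example.com').then(ok => process.exit(ok ? 0 : 1));
```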

That hopefully gives some more context. As ever, I’d love to hear back about where or how it could be clearer.

As ever, if you’re interested in finding out a bit more about the project and my progress on it, I’ve set up a mailing list to make it easy to stay up to date, at planetfriendly.productscience.co.uk.