How to start out with user research on a product, when you don’t have much buy-in yet

I recently had a friend ask me about how you would fit user research into your product development process, when:

  • you already have a product released to users
  • and there’s no structured plan for doing user research, nor previous experience with in-house researchers

The first thing I did was invite her to the #researchOps workspace on Slack – it’s a fantastic resource, and there are loads of really experienced people who can give better answers than me.

I’m now writing this up, based on what I said to her, in case others are in a similar position.

In the post below I present a model for understanding the value of user research, which I find useful. I also share why, if you’re in a relatively small organisation, I think it’s worth starting by testing things you already have in production.

Note: In this post I use the term user research interchangeably with the term design research, as used in the Australian DTO.

The main benefit of user research

It’s worth taking a second to talk about the value of user research. I explain it to budget holders like so:

User research reduces the risk of building the wrong thing.

Building digital products is expensive. This is because it typically takes the time of expensive people.

So, waiting a long time while expensive people build lots of stuff before you see whether any of it works is not only really expensive, it’s really risky.

So, it makes sense to spend a part of your total budget to find out, as early as possible, if things do work. This is so you can respond to this new information while you still have some budget left to act upon it.
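
To make that concrete, here’s a deliberately simple expected-cost sketch – every number in it is an illustrative assumption, not data, so substitute your own estimates:

```python
# All numbers below are illustrative assumptions - plug in your own.
build_cost = 100_000          # cost of building the feature in full
p_wrong = 0.4                 # chance you've built the wrong thing
rework_fraction = 0.5         # share of the build you'd redo if wrong

research_cost = 5_000         # a round of early user research
p_wrong_after = 0.1           # assumed chance of still being wrong afterwards

waste_without = p_wrong * rework_fraction * build_cost
waste_with = research_cost + p_wrong_after * rework_fraction * build_cost

print(f"Expected waste without research: {waste_without:,.0f}")  # 20,000
print(f"Expected waste with research:    {waste_with:,.0f}")     # 10,000
```

On these made up numbers, the research halves the expected waste – the exact figures matter less than framing the conversation in terms a budget holder recognises.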

This can apply to an existing feature, or to your entire value proposition.

Got that?

If you’re working with people who are driven by numbers, then it’s useful to talk about user research as a way of reducing risk.

What this looks like — incorporating research into product development

Okay, so once people are on board with reducing risk with user research, what do you do next?

Will Myddelton’s blog is a brilliant resource anyway, but I think his model describing three main kinds of user research you might do is really helpful here.

It’s the clearest, snappiest, most usable description of user research I’ve come across so far, and he presents the three kinds as:

  1. Testing things the team have built
  2. Working out what the team should build next
  3. Understanding potential users and their lives

I’ll cover these in detail below, then I’ll explain why, in the context I’ve described above, I think you should start with the first one if you’re not doing user research regularly.

Testing things the team have built

This is probably the most straightforward to incorporate into how you build things and gives you immediate insights you can act upon.

The goal is to uncover areas where users get stuck using what you already have. The outcomes are typically slightly revised designs, and updated copy to address the most obvious problem areas.

As Will suggests, doing this first also helps the rest of the team learn first hand why user research is valuable. You always learn something new, and it often shows that users use what you have built differently to how you would use it.

 

you-are-not-your-user.png
Obligatory GDS poster truthbomb when discussing user research

 

Here, you’re reducing the risk that your product is currently underperforming for reasons that are obvious in retrospect, and easy to fix quickly.

However, it’s really, really important to act upon what you learn, to realise the value of this research, and make it worth repeating.

You’d typically refer to this activity as usability testing, user testing or something similar. I’d suggest avoiding the phrase user testing if you can, as you’re not testing the user, you’re testing the product.

If there’s one book to read for doing this, I’d recommend Steve Krug’s Rocket Surgery Made Easy.

Working out what the team should build next

If there’s future work planned, there will also be risk in the assumptions you’ve made about how valuable, useful, or even feasible much of it is.

When you’re applying research techniques here, you’re using them to reduce the risk that your planned solution won’t solve the problem you’ve identified.

Here, the focus is on identifying the assumptions you’re making, and carrying out activities to reduce the uncertainty in them.

Testing Prototypes

It’s not the only way to do this, but a common approach is to build one or more prototypes first, then have a user researcher run moderated sessions with users, or design unmoderated sessions for users to complete, to review later.

You typically make a trade-off between how faithfully you simulate the value, and the time it takes to get actionable insight.

Low-fidelity prototypes are fast to get to users, so you get feedback quickly.

Higher fidelity prototypes are more complete, and slower to build, but let you test richer interactions, so offer a more faithful simulation of the value you will eventually provide.

Generally speaking, the riskier, and more substantial the new feature is, the more effort you can justify spending on prototyping to reduce the risk.

This isn’t the only approach though. You may be lucky enough to find that a competing product has a feature that’s similar enough to what you want to build to justify using that, instead of taking time to build a prototype yourself.

This is also valid, and will often help you understand issues associated with this kind of feature, as long as you are clear about which parts you’re trying to test.

 

People might refer to this as user testing a prototype, or validating an idea in the backlog.

Who are our users, and what are their problems?

The final kind of research is arguably the most impactful, and if you were starting a project anew you’d begin with this, to make it clear who your users are, and what needs you think you’re meeting.

Run at the beginning of a project, this is what people often refer to as a Discovery Phase. It typically involves sets of in-depth interviews with people about their lives and habits, more than being directly about your product.

The findings and insights from this kind of research underpin your entire value proposition, and it reduces the risk that you are basing your entire product strategy on a premise that is inaccurate or poorly understood.

Picking your battles

This kind of research also requires the most organisational effort to respond to, and therefore, from the perspective of someone trying to champion user research, it’s the riskiest.

Typically, when I’ve been involved in research like this, or seen others do it, even well supported, valid findings can be ignored if acting upon them means overcoming inertia in the organisation.

People in the organisation might already be invested in a specific product decision that is undermined by these findings, or pressure from outside the team may make changing course harder (investors might assume this is already settled, and be pushing for growth over changes to the business model, for example).

Until you’ve built up some credibility by sharing insights that have clearly helped make an existing product or service work better, telling managers that people are using a product for totally different reasons than they first thought, or that the value proposition needs rethinking, will often be resisted. It creates a load of new work, and on a human level it can raise awkward questions inside the management team about why this risk is being discovered now, rather than earlier.

They might respond to your findings positively and rethink their entire product focus, but they may also dismiss them, and you’ll end up wasting a load of social capital trying to bring about change in an organisation that isn’t ready for it yet.

How to start — test that the thing you have works the way you’d expect

For this reason, if you have a product that people already use and pay for, and you don’t have loads of influence, you’re probably better off answering the question of Is what we have working? first.

Starting out with this is fairly low risk – people working on products have typically heard of usability testing, and it will generate ideas for updating copy or similar, which the team can implement based on where they saw users struggling.

This helps demonstrate the value of research and helps get buy-in from the team. From there, it’s easier to start moving to the other questions.

I’ll outline the specifics of Is what we have working? in a future post, along with a bit about operationalising it.

Learning more

BTW, I found Will Myddelton’s model via Leisa Reichelt’s tinyletter, This Deserves your Attention – you should absolutely subscribe if any of this post interested you.

If you’re also interested in making research part of how products are built where you work, I’d really, really suggest joining the researchops community slack. It’s great.

Please help me name this product triangle thing

I’ve worked as a developer, a sysadmin, a user researcher, a product manager, and a UX’er, with different teams, and I’d like to share an idea here that I want to be able to present visually, as a way to identify common patterns of behaviour that make products less successful.

This is a work in progress, and it’s not fully thought through – it’s based on a couple of tweets from people I follow and really enjoy reading, and I don’t have a snappy title for it. But I’ve tried running it by a few people from UX, product and engineering backgrounds, and so far:

  • they’ve been able to draw with a marker where they think their efforts in their own product teams lie among these three regions
  • we’ve been able to use it to help guide our own discussions about how we build digital products.

3 things you need to build a successful product

In short, I think that to build a successful product or service that is able to sustain itself, your team needs to be competent in three broad areas. Here’s how I picture it, and I’ll explain the terms in more detail below:

lean-product-triangle

I’ll just say product for the rest of this post instead of the more clumsy product or service, as I increasingly find it more useful to think of these as the same kind of thing these days.

What I mean by Value, Quality and Flow

These are extremely broad terms, and I’ll try to be a bit more specific:

Value – Build the right thing

I’m calling the first of these value, in the eyes of the user – this might be the customer paying for it, or, if it’s a public service, the people whose specific needs you’re meeting.

You need a clear understanding of the needs you’re meeting, the value you provide in meeting them, and a way to validate that this is actually valued by your users.

Quality – Build the thing right

The next one here is what I’m calling quality. Depending on the disciplines your team has competency in, this might be an intuitive user experience, an easy to maintain, safe, secure, easy to extend codebase, or a beautifully presented design system that communicates the values you want.

It’s typically the stuff you might consider putting in your portfolio if you were a visual designer, or putting on github as a developer.

Flow – Keep on shipping the thing

The final one is what I’m calling flow, and when I use the word flow, I’m thinking in terms of continuous delivery. So, delivering frequent, small batches of new work, in a sustainable fashion, in response to the things your team learns about what’s valuable to your users, or what’s happening in the market.

What happens when you miss one out

When you have quality and value, but no flow

You might have commissioned a load of in-depth research to understand a group of people who have a clear need, and are prepared to pay for it. You might then have hired a fantastically skilled team to build a beautiful product with a wonderfully thought through set of features.

If you don’t have a clear way to get this in front of people, and regularly respond to what’s happening outside of the team, then one of two things will probably happen.

Probably the most demoralising is having it never see the light of day – something that happens depressingly regularly, especially in agency work.

The other is that if it does make it out the door, but you’re unable to respond to feedback fast enough, your product loses relevance, and you end up with no one using it anyway. Sad times.

lean-product-triangle-flow.png

When you have value and flow, but not quality

Let’s say you’ve been able to find an unmet need, bash together a prototype, and then find a way to get progressively less clunky versions of it in front of people, up to the point where you have enough people using it that some pay for it, or it provides a valuable enough service that some other sponsor wants to cover the cost of running it.

This is common too – you might see an idea start at a hackday, for example, or you might be able to get something built either speculatively or very, very cheaply, by inexperienced developers – or by people who should know better, thinking they can fit it in around the rest of their life, and cutting corners to get something out the door (I’ve been this fool before – it’s always a bad idea).

To begin with, you can get changes and updates out regularly, but they’ll often miss edge cases, introduce bugs, and so on. When given the choice between fixing these bugs and covering the edge cases, or pressing on with new ideas and new features, the shiny takes priority.

All these hacks build up, and eventually it becomes a horrible ball of mud, with so much deferred clean-up work (all flavours of tech debt, UX debt, or whatever flavour of debt your team grumbles about) that it’s miserable to work on.

It becomes impossible to find anyone who will touch it, or it becomes so hard to change without breaking everything, that you essentially become unable to ship anything meaningful. Sad times, again.

lean-product-triangle-quality.png

When you have flow and quality but no value

In our last case, let’s say you have a really proficient team, who are masters of their craft, and you’ve invested in building or buying a top notch continuous delivery pipeline. You can ship new changes a bajillion times a day, your builds are evergreen, your designs atomic and consistent across all the channels you support.

Everyone is blogging on the product blog about the latest thing your company has open sourced, stoked to be helping the community and moving the state of the art forward in their field.

But… it’s not obvious how the users’ needs are met by the product, in a way that is much better than the other options they have available. Or it’s not clear how the product sustains itself financially, or where the money might come from to pay for everyone. If this isn’t clear, it’s often because the money isn’t there.

In this case, if you don’t have a way of seeing which parts of what you’re offering are already valued by people, and what else they want, there may not be time to discover this before you run out of money.

You might see this if you’re working to a delivery roadmap that was decided up front, ages ago, with no chance to discuss or check whether the things being delivered really are valuable, or which parts of the product are really being used.

Alternatively, rather than working to an overly rigid roadmap where you’re not acting on feedback, the other extreme is indiscriminately acting on all feedback, with no real way of telling if it’s valid or relevant, and making sweeping changes to the product. In this case you might have accidentally discovered something really valuable, or a really valuable set of users, but not realise it, and through this totally miss your last lifeline as you thrash around frantically trying to pivot out of a horrible death spiral.


lean-product-triangle-value.png

You need all three – value, quality, and flow

lean-product-triangle-all-three.png

This is increasingly how I see digital product development – these three things aren’t in opposition to each other, and ideally you want to be slap bang in the middle.

In my experience, most teams end up drifting towards one of the edges of this triangle over time, and if they don’t take steps to make sure they’re not missing the third corner, you start to see increasingly severe cases of the bad things in these red boxes.

Ways to use this diagram

One really simple way is to try drawing this on a whiteboard or flipchart, and asking your team to mark where in the triangle they reckon they are (assuming they’re okay with these comically broad definitions of value, quality and flow).

If you see clusters of marks towards one corner, it gives you an idea of what changes to your process you might want to make, or what changes to the makeup of skills in your team are worth considering.

You might also use this to acknowledge explicitly in a team that you’ve chosen to temporarily drift away from one corner – maybe you’re deferring some work to hit a deadline, and deliberately taking a hit on quality (this is your call to make, not mine), and you want to have a visible reminder, so you can get back to the sweet spot later.

I’d be curious about how it’s useful.

Other ways of telling

I did think about listing a few other signs to look out for, but:

  • this post is already huge, and I think these are best explored in future posts
  • I really want to use this post to see if this way of presenting it is a help to others

Is this useful to you?

If it is, I’d love to hear from you – I’ve used it a few times in workshops and as a discussion tool, but this is the first time I’ve shared it online.

This isn’t my idea

I need to be clear – this is based on a load of ideas I’ve come across elsewhere.

Tom Stuart put me onto Donald Reinertsen’s book, Lean Product Development, which is full of absolute gems, but is not an easy read, and it has informed a lot of my thinking about this.

If you’ve heard of Donald Reinertsen and Cost of Delay, Black Swan Farming do a great job of making the ideas much easier to digest. I’ve found the ideas on their site about what they mean when they say value extremely useful, and I follow @joshuajames on twitter.

In fact, here’s the tweet I read that got me thinking about presenting this visually in the first place:

And here’s the tweet from Dan North it referred to, which in turn referred to Melissa Perri’s one:

Presenting it as a triangle and marking it

I think the idea for visualising it, and asking people to make a mark among three points, probably came from seeing this video of a severe-looking Dave Snowden presenting Sensemaker – a research tool I started looking into again after getting involved in the nascent researchOps community on slack.

As the video outlines at 2:50, getting someone to mark a single point here between three points in a triangle forces a bit more thought than just asking for three numbers and building a radar chart from the results.

If someone can share with me the cog sci principle behind this, I’d love to know more – it was totally new to me.
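
If you ever wanted to turn those marks into numbers afterwards – to compare teams over time, say – one option (a sketch of my own, not something from the video) is to read each mark as barycentric coordinates: three weights that sum to one, one per corner.

```python
def triangle_weights(mark, value, quality, flow):
    """Convert a marked (x, y) point into one weight per corner.

    These are barycentric coordinates: they sum to 1, and a mark
    dead centre comes out as roughly (0.33, 0.33, 0.33).
    """
    (x, y) = mark
    (x1, y1), (x2, y2), (x3, y3) = value, quality, flow
    denom = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w_value = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / denom
    w_quality = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / denom
    return w_value, w_quality, 1 - w_value - w_quality

# An equilateral triangle with flow at the top, and a mark drifting
# towards the flow corner:
corners = dict(value=(0.0, 0.0), quality=(1.0, 0.0), flow=(0.5, 0.866))
print(triangle_weights((0.5, 0.6), **corners))  # ~(0.15, 0.15, 0.69)
```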

Who looks after Flow, Quality and Value in a team?

You can interpret this triangle as the tension between three roles – a product manager, a tech lead, and a delivery manager.

I think it’s more helpful to think of these as three areas that a team building a product needs to be competent in. Most importantly, I think the team needs to be able to talk constructively about the interplay and tension between the three areas to work best.

Naming this

I have no idea what to call it. The lazy option would be something like Lean Product Triangle, given that putting Lean in front of anything seems to be the new agile.

There’s another framework called the Product Management Triangle. I had no idea it existed before googling it just now.

Using this yourself

You’re welcome to use this – it’s licensed under Creative Commons CC BY-SA, and the google slides document I was working in is publicly visible here.

Feedback I’ve seen so far

I’ll summarise some of the feedback I’ve had since sharing this.

A bit more on quality and flow

I’ve had a few kinds of feedback when I speak about this, and I think it’s down to the terms I’m using, as they’re pretty broad. When I speak to people in engineering management roles, their response is typically: I get flow anyway, by investing in quality, as I’m able to deploy and respond to change quickly – so why are these different?

I’d argue that you can end up with a codebase that is well documented, has lots of well written, passing tests, and is secure and performant – one that lots of developers would refer to as a relatively nice codebase to work in.

If you see your job as a developer as being to write code (after all, when you were hired they were testing your ability to write code more than anything else), you might call this a high quality codebase. And you might be maintaining this level of quality by making sure tests are passing, you’ve written documentation, and your pull request was merged into master after a review by at least one other person.

There might still be a convoluted deployment process, and you might be releasing relatively rarely as a result.

When I refer to flow here, I mean optimising the time from an idea being accepted as worthwhile to work on, to it being used by the people it was intended for.

In this case, I’d say you might have quality, but not necessarily flow. Depending on the company, you might have a coach, a scrum master, an engineering manager or a delivery manager seeing this as their responsibility – it’s more important that someone is thinking about it, and arguing for it, than who is doing it.

The iron/engineer’s triangle

The most common triangle people refer to in software development is the iron triangle:

“good, cheap, fast – pick two”

The idea is that you can’t have all three, and trying to go for all three is folly. You might see it referred to as Scope, Resources, Time instead. The idea that you only get to choose two remains, so in agile circles it’s considered best to be flexible about scope, as time and resources are usually already fixed.

Writing up IoTMark, part 2 of 2

I said on twitter that I’d do a write up of the Friday IoTMark workshop last week:

I’ve had enough people fave it to give me a reason to write it, but over the weekend, I realised that giving context and backstory to the event took up more than 1500 words already. I’ve split that out into a separate post, so here’s how the day went down and the key things I learned.

Far fewer than 60 people this time around

Previous events have been relatively large affairs, taking over entire venues, with breakout rooms, and culminating in a somewhat chaotic mob-editing of a google doc, to come up with either a bill of rights style document, or some kind of spec or set of principles to inform the creation of a certification mark for connected products.

This time round, there was a handful of people in a room for the whole day in central Mitte instead – albeit one with a nice view from the tenth floor – plus a few people skyped in on an iPhone propped in a glass, to make them easier to hear.

DSC_0345.jpg

What we were going for

The goal of the day, as I understood it, was to get the ideas in the most recent document in better shape for being used as a basis for a certification mark.

This included:

  1. tidying up the language and trying to make it as accessible as possible without losing the substance of the initial document
  2. reconciling it with the issues raised on github, and feedback when presenting it at Mozfest in November 2017.

There was also a move to make the IoTMark something people and organisations of all sizes could realistically commit to adhering to – when setting up a certification mark can easily cost more than 40k USD, it makes sense to make it something that could actually be adopted.

A bit like Energy Star, for IoT

DSC_0387.jpg

One strategy to follow, if you want a diverse set of people to commit to a set of ideas, is to introduce levels of commitment, so you aren’t asking people to make a binary all-in/all-out choice.

There was already some implicit sense of grading here in the normative language (a MUST is non-negotiable, but a SHOULD, for example, is not absolutely mandatory), so we started with that to come up with a rough set of three groupings.

I say groupings here because it’s really, really hard to have a single axis that goes from, say, bad to good when discussing these products. It’s much more complicated than that, and different people value different qualities – some people might value openness of software and hardware over supply chain transparency, and some might treat being explicit about how personal data is used as more important than marketing to a specific group of people.

That said, one grouping was considered a bare minimum – something it seemed reasonable to expect of anyone wanting to say their connected product follows the principles. It felt important to have some shared values everyone could agree on, before focusing on the areas where views diverge.

Relating GDPR to IoT

The other thing that came out of the day was just how far reaching the GDPR is when it comes to thinking about connected products, and why it made sense to think about it in the context of IoT.

It’s a common refrain that the tech industry needs to improve when it comes to its cavalier attitude to people’s data – often deeply personal data, secured in dangerously careless ways, and used in ways that definitely aren’t in the interests of the consumer.

I also think it’s safe to say that there is appetite for change, and well… from my perspective at least, GDPR feels like change for the tech industry in the same way that a giant asteroid might have represented change to wildlife around the end of the Cretaceous period on Earth.

To be clear, this change isn’t necessarily bad – asteroids can sometimes get rid of dinosaurs and make life easier for us humans, after all.

Why you might care about GDPR for IoT

I often point people to these posters from the co-up to help understand the ideas behind the GDPR, and why it’s a big deal – for many consumers, the ideas in the GDPR don’t sound like unreasonable things to ask for, especially when there often seems to be no real penalty from lawmakers for bad behaviour.

Chaos Monkeys for your data pipeline, otherwise known as Europeans

In addition, if you are building a connected product, the changes to privacy law are extraterritorial – if any citizens of EU member states end up in your data pipeline, or data is processed in the EU, these laws still apply. I think Heather Burns covers this really well in a recent piece aimed at web developers, and much of it applies to IoT too:

In May of 2018, a major upgrade to Europe’s overarching data protection framework becomes enforceable. This will be followed by a companion piece of legislation pertaining to data in transit. The extraterritorial nature of these two frameworks — they protect the privacy rights of people in Europe regardless of where their data is collected — means that they will become the de facto standard for privacy around the world.

Enforcement may be another issue, but given that, for 300 million people at least, the biggest changes to privacy law in 25 years are coming into effect this year and will force changes anyway, ignoring it seemed short-sighted, as well as being against many of the ideas in the original document.

Overlap between IoT principles and GDPR

I’ll end this section with one thing that struck me – it’s totally possible to build connected products that comply with the ideas of the GDPR, and there are very large companies doing exactly this. In fact, there’s some great stuff by Matt Webb on how privacy can be a competitive advantage for IoT services, which is worth reading if you’re interested in this field.

He cites Hoxton Analytics as a good example – by building privacy into their product from the beginning, they could be deployed in places more invasive systems couldn’t, helping them against competitors. There’s a lot of FUD about GDPR, but it’s also an opportunity to rethink the playing field – something lots of companies already invested in one way of working seem to miss.

RFCs versus principles

Another thing that came up during the day was the difference between a spec using specific normative language, as described by the IETF, which you typically validate programmatically based on the MUSTs and SHOULDs, and a set of principles, which tend to have fuzzier edges and are more open to interpretation.
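
As a toy illustration of the difference – the requirement names below are invented for the example, not taken from the IoTMark document – the MUSTs map naturally onto hard failures you can check programmatically, while the SHOULDs become warnings:

```python
# Hypothetical requirements, for illustration only - not the real IoTMark spec.
REQUIREMENTS = [
    ("supports_firmware_updates", "MUST"),
    ("publishes_privacy_policy", "MUST"),
    ("hardware_schematics_open", "SHOULD"),
]

def check_product(product):
    """Return (failures, warnings) for a product described as a dict of booleans."""
    failures, warnings = [], []
    for name, level in REQUIREMENTS:
        if not product.get(name, False):
            (failures if level == "MUST" else warnings).append(name)
    return failures, warnings

failures, warnings = check_product({
    "supports_firmware_updates": True,
    "publishes_privacy_policy": False,
    "hardware_schematics_open": False,
})
print("MUST failures:", failures)    # blocks certification
print("SHOULD warnings:", warnings)  # flagged, but not blocking
```

A set of principles, by contrast, needs a human to interpret it – you can’t reduce it to a checklist like this without losing something.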

It’s relatively old now, but there’s some really great thinking in Lawrence Lessig’s book Code about different ways of enforcing behaviour, which I think is relevant here.

I don’t have the book handy, but Danah Boyd’s blog summarises one of the key ideas nicely:

In his seminal book “Code”, Larry Lessig argued that social systems are regulated by four forces: 1) the market; 2) the law; 3) social norms; and 4) architecture or code.

I think it’s easy to conflate these. When you’re trying to bring about a specific kind of behaviour you want people to follow, it’s worth thinking in these terms, so you don’t try to make one force act as if it’s another.

Embodied thinking in physical space

DSC_0382

Finally, in the afternoon, after re-reading the principles as described in the original Open IoT documents, referring to the Github issues and Mozilla versions, and deduping them, we spent a bunch of time finding ways to group these principles, to make it easier to follow a meaningful number of them if you can’t follow all of them.

If you have a load of people in a room, and you need to build a shared understanding, one strategy is to use a physical token to represent each idea, and let people use the physical space to communicate how the ideas relate to each other.

Given we had already spent a bunch of time checking our understanding of the ideas in isolation, I found this useful for communicating positions, so people in the room could discuss them more effectively.

The downside, when you’re skyping people in, is that they’re not able to move things around themselves – by doing this, you’re making a specific decision to favour collaboration in the room over those who are remote. In most cases, I think this is the right call, as the alternative is often to collaborate at the speed of the person with the lowest bandwidth connection, which reduces what you can get done in a limited amount of time.

I’d love to hear suggestions for finding a way around this – while you can use tools like realtimeboard to create a whiteboard-like experience, you’re still reducing the richness of your interactions in the room to what the software lets you do with a piece of backlit glass.

How to get involved in this in future

If you’re interested in any of this, I’d suggest heading to the iotmark website – from there you can see the current version of the principles, join the slack workspace, subscribe for updates over email, or follow the IoTMark on twitter.

As ever, comments on this post are welcome, and if you prefer you can contact me directly.


Writing up IoTMark – 1 of 2

I went to last Friday’s IoTMark event, a follow-up to last summer’s Open IoT Definition, partly because I figured I’d learn a lot, and partly because, whether I like it or not, the evolution of the Internet of Things (IoT) is something I feel I should have a handle on as someone working in tech.

The day was more productive than I had expected it to be, and I learned loads – but before I could put together a writeup, it seemed worth giving some more context here if this was new to you. There’s another post where I do the write up of the day itself.

My background, and disclaimers

I am definitely not an IoT expert. I write this from the perspective of someone who is paid to write code, but also paid to run workshops and help product development teams work more effectively together.

The one system I put into production that you might describe as vaguely IoT-y was built in 2009, with my first company. It inferred how heavily a couple of co-working spaces were being used, based on detecting people’s devices connected to the wifi.

We then monitored the energy used in the building, and tracked the energy mix on the national grid, to work out the carbon footprint per person, per hour, of the space.

The idea was to help the owner make an argument that you could have a greener office, by shaping demand and making better use of the space available. I’m happy to chat about the lessons learned here, but that’s another post.
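
The core calculation was simple in principle – roughly this shape, though this is a simplified reconstruction from memory rather than the original production code:

```python
def carbon_per_person_hour(devices_seen, energy_kwh, grid_gco2_per_kwh):
    """Rough carbon footprint of an hour of the space, per person present.

    devices_seen: distinct devices on the wifi that hour (a proxy for occupancy)
    energy_kwh: metered energy used by the building that hour
    grid_gco2_per_kwh: carbon intensity of the national grid that hour
    """
    occupants = max(devices_seen, 1)  # avoid dividing by zero overnight
    grams_co2 = energy_kwh * grid_gco2_per_kwh
    return grams_co2 / occupants

# e.g. 25 devices seen, 40 kWh used in the hour, grid at 450 gCO2/kWh:
print(carbon_per_person_hour(25, 40, 450))  # 720.0 gCO2 per person-hour
```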

It was some of my first python programming; it was buggy as hell, a source of incredible stress for me, and commercially ruinous for my company. But I learned about failure modes, security, and all the weird ways things can fail, at the worst possible time. In fact, it was enough for me to flee from anything to do with hardware for years. As such, there may be errors here, and I might get terminology wrong – please contact me directly if you spot any, and I’ll change them.

I share this to give some background of where I’m coming from, if my viewpoint seems different to yours.

The background of the Open IoT events and IoTMark

In 2012 – the Open Internet Of Things Assembly

In 2012, along with a bunch of other nerds, I went to the Open IoT Definition event on 17th June 2012, at the Google Campus in Shoreditch, organised by Alexandra Deschamps-Sonsino, and ended up at a fascinating, if somewhat chaotic, event. Over the day, 60-70 people formed groups to talk about various aspects of how IoT was affecting our lives, how the incentives seemed to reward all the wrong kinds of behaviour, and how we felt a more humane version of this might look.

This culminated in something like 50 different people trying to use google docs to simultaneously author a sort of bill of rights, the Open Internet of Things Definition, to encapsulate all this.

Amazingly, there’s still a storify available of the day that does a fairly good job of capturing what it all felt like.

The day felt nice, and I met loads of really interesting people, but I’d be lying if I said we all left the day, and rebuilt everything we did along the principles written up in that Open IoT document.

Fast forward to 2017 – The Internet of Things Mark

Last year, I heard from Alex and Usman, letting me know that they were planning to do a follow-up event, and asking whether I’d be interested in coming back to help moderate one of the sessions.

The goal of the day was to come up with a kind of trustmark for IoT products, and we needed to agree on what the trustmark stood for, before we could go through the process of registering trademarks, and all the necessary legal bits.

The venue they had found was London Zoo, and it dovetailed (sorry) nicely with a good friend’s birthday.

So, I booked a train from Berlin to London and headed over for June 17th 2017 – 5 years to the day since the first event.

What, someone did something with those principles?

One of the things that really surprised me at the event, was that some larger companies had been paying attention to the work going into these principles as a way to carve out a competitive advantage.

In particular, we saw Stefan Ferber from Bosch talking about many of the same principles we had written about, and how they were applying them in their work.

Specifically, he talked about the benefits of trust when thinking about IoT – optimising for trust as a response to an ever more complex world.

If I can trust the service or product I’m using, rather than having an adversarial relationship with it (like I might with many low cost airlines, ad-tech providers, or much of retail banking in the US), then it follows that I’m more likely to want to use offerings from the company making it over others.

DSC_0950.jpg

Moderating the life cycle group

Shortly after some intros, we ended up splitting into interest groups, and I ended up helping moderate one of the sessions around the environmental and life-cycle aspects of IoT.

In our group, we had quite a wide mix of people, from young founders building VC-backed IoT startups, to people from the Restart Project talking about repair and design for remanufacture, and embedded programming specialists concerned with the end of life of IoT hardware.

We were looking at planned obsolescence and end of life support – for example, when hardware is literally embedded in your house, and you sell the house but can’t transfer ownership of the devices, you have new kinds of problems we haven’t really had to deal with before.

Or when your phone works, but its software update policy leaves it so dangerously vulnerable to hackers that it’s irresponsible to put data on it – effectively making it unusable for many of the common use-cases you’d associate with a typical smartphone. These were the kinds of problems we were exploring.

We also looked into embodied carbon and the fairness of supply chains – where minerals like coltan, which make up your computer chips, come from, the conditions in which chips are made, and where hardware is assembled.

So, all the easy problems, then.

Our goal was to come up with some principles we could all agree to in our group, and would be prepared to follow, in this absolute minefield of topics. As you can imagine, getting someone in a VC-backed hardware startup set up for hyper growth to find common ground with a design-for-repair campaigner is not easy, but we ended up with a few principles we could take to the final session of the day.

One day we will learn

DSC_0018
Usman, once again, trying to get 60 people to agree in a google document

And once again, we tried to get 60 people to agree to principles, but this time, we were trying to get something more concrete to put on a product, so it was even harder to find consensus.

Even after the back and forth within my group, we found that loads of the stuff we had settled on as uncontroversial had others pushing back hard – so we ended up with a much more watered down set of ideas in the lifecycle section.

The Open IoTMark RFC

Like the last event, the culmination was a group of people essentially yelling at poor Usman, as he and others tried to write a document to capture all the ideas in the principles.

But this time round, given the number of engineers in the audience, someone came up with the idea of describing these principles using the normative language you’d see from the IETF, with very specific, precise meanings applied to words like MUST, SHALL, SHOULD, and so on.

I’ll be honest – this entire process was exhausting and not very satisfying.

This shouldn’t be news though, and it isn’t a reflection on Usman’s skill as a facilitator – I think it’s very hard to avoid in cases like this.

As you move from principles you care about when it’s not your job to own the supply chain and deliver the returns an investor is expecting, to actually having to publicly say you’re prepared to adopt those principles, of course you’re going to get push-back.

That said, Usman pushed on, and before we had to leave, we somehow ended up with a document that, even if not everyone was happy with it, they were prepared to co-sign.

And that’s the background. Here’s the actual write up of the day.

 

An update on OMGDPR – explaining the format

So, a little under two weeks ago, I wrote a bit about OMGDPR, a hypothetical community-run, open space event where practitioners who build digital products or services can meet to learn from each other how they’re responding to what amount to seismic changes in privacy law.

Maik and I have been chatting to various organisations in Berlin who might be up for hosting it, and we’re this far from announcing a date and venue now.

So, it seemed a good time to outline the format in a bit more detail, so when we do announce the event formally, we can refer to this.

What is Open Space, and why?

If you’re not familiar with Open Space, you can think of it as a particular format of event suited to situations where everyone coming has a shared interest in solving a pressing, complicated problem, but where there isn’t a single obvious solution to it.

In cases like this, it would be foolish for us to take on responsibility for having all the answers to the aforementioned pressing, complicated problem – we’re not GDPR experts, and frankly, even the experts I have spoken to readily admit they don’t have all the answers either, given how much uncertainty there is circling GDPR right now.

Instead, you can create the space for people coming along to explore the part of the problem that’s most valuable to them, so they can discuss it with other like-minded people, then share back some of the key insights, so people leave with information they can make use of in their own context.

So, in our case, we’re sorting out a venue, and making sure it has sufficient breakout space for these smaller, deeper conversations to happen.

Alright, I’m starting to get it. What does it look like though?

You can think of the event as having four stages.

Introducing the rules and format

Not everyone is comfortable speaking at events full of strangers, so open space has a few rules to help avoid awkward moments. Also, if you’re used to coming to more passive events, where you watch a panel or a single speaker speak from a position of power, presented as an expert, it can be easy to feel a bit lost at first.

So, you’d typically have a summary of the general rules and ideas when you start, before people move into the next session.

The Marketplace of ideas

As mentioned before, the attendees are responsible for bringing content and questions to explore. How do you work out what to pick from the topics though?

Typically, people who want to talk about something pitch their pet topic to see if anyone else is interested, creating a menu of topics to discuss.

For GDPR, one might be a discussion about “what informed consent looks like from a user experience point of view”, or “what new skills you might need in your product teams to stay on the right side of the law in future”.

At this point, there are usually more topics than time and space to discuss them, and you need to end up with a list of topics to match:

  1. the number of breakout areas available for discussion
  2. the number of sessions per area

As an example, if you had three rooms, and enough time in each room for 3 sets of 30 minute sessions, you’d be able to cover 9 topics.

How to choose this

It’s common to use some kind of voting process, like listing the topics on index cards and then letting everyone either:

  • mark the ones that interest them with a marker next to the list, or
  • attach some kind of coloured dot to the topics that catch their eye

This gives a quick visual indicator of the ‘winning ideas’.

After this, the organisers arrange these ideas into a clearly visible timetable of what’s being discussed where, and when, essentially creating a multi-track conference of sorts.
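
If it helps to make the rooms-times-slots arithmetic concrete, here’s a minimal sketch of that step – the topics, rooms and times below are invented for illustration:

```python
def build_timetable(topics_by_votes, rooms, slots):
    """Greedily fill a rooms x slots timetable with the most-voted topics."""
    ranked = iter(topics_by_votes)
    timetable = {}
    for slot in slots:
        for room in rooms:
            topic = next(ranked, None)  # stop once we run out of topics
            if topic is not None:
                timetable[(slot, room)] = topic
    return timetable

# Hypothetical topics, highest-voted first:
topics = ["informed consent UX", "data retention policies", "records of consent",
          "hiring a DPO", "auditing third-party vendors", "cookie banners"]
schedule = build_timetable(topics, rooms=["Room A", "Room B"],
                           slots=["10:00", "10:40", "11:20"])
for (slot, room), topic in sorted(schedule.items()):
    print(slot, room, "-", topic)
```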

Breaking out into smaller sessions

If you’re an attendee, you’re now free to go wherever is most interesting to you to join a discussion.

The law of two feet

If it turns out the discussion wasn’t what you expected, one of the key ideas for open space is the law of two feet – by coming to the event, you take responsibility for being in the conversations most useful to you, and generally speaking, it’s not considered rude to leave a session half way through if you’re not getting much from it.

We’re really, really keen on having some volunteers to help capture what is learned during the sessions – please get in touch if you’d be up for helping cover one session.

Re-convening and wrapping up

After the rounds of discussion, it’s common to invite everyone back to a central space, to give people a chance to share any insights worth passing on.

And after that, you’re free to go, hopefully having learned something useful, that will help you come up with an appropriate response to the meteor heading towards the industry that is GDPR.

Fancy it? If all goes well, there’ll be an eventbrite page to sign up to come to OMGDPR later on in April, but in the meantime, you can register interest below:

https://productscience.typeform.com/to/PuUf47

Sustainability of the web vs through the web

I referred to Sustainability of the web in a recent blog post about a recorded talk I did on Planet Friendly Web development, and halfway through explaining the term I realised it was better to expand on the difference between sustainability of the web and sustainability through the web in a separate post. I think it’s a useful idea, so without further ado…

Sustainability of the web, vs sustainability through the web

Jack Townsend introduced me to this concept a while back, and while the boundaries can be a bit fuzzy, I think it’s a useful idea.

As the name suggests, sustainability of the web is an inward looking concept – mainly concerned with making the web as it currently is have a lighter environmental footprint, as it supports the existing behaviour we take part in, like staying in touch with each other, shopping online and so on.

While you can accuse it of only being a local optimisation, I still think it’s worth discussing and spending time on, especially given that the carbon footprint of IT in 2018 is probably about the same size as, or larger than, that of the aviation industry.

As I outline in the video above, the incentives are aligned to make it easier to get people on board, if they’ve already invested years developing skills that can be useful here.

For me at least, Sustainability through the web is a more outward looking concept, and this is about replacing existing behaviour. An example might be what got me interested in Loco2 back in 2008, where the goal was to make it as easy to book trains in Europe as it was to book planes in Europe, to reduce the impact of travel.

Another might be one of the clean meat companies like Memphis Meats, or Perfect Day Foods, who are working out how to brew milk from yeast and sugars instead of cows, so you can have all the delicious dairy things we’re used to in the west, without the colossal carbon footprint that comes from the millions of farting and burping cows we currently use to make dairy-based foods.

I think you need both – one is easier to see some success with, while the other arguably has a larger impact, but is harder to see working in the short term.

Explaining a planet friendly web in 10 minutes

Over the last year or so, I’ve been doing variations on an idea I’m calling Planet Friendly Web, and after bumping into Paul Johnston at Monkigras earlier this year, then at Jeffconf, we got chatting about a meetup for CTOs interested in doing something about climate change.

I couldn’t be there physically, but I did get a chance to put together a pre-recorded talk on steps we can take as technologists interested in the environmental sustainability of the web.

It’s 10 minutes long, and gives an overview of the mental model I’m writing about in more detail at planetfriendlyweb.org.

Here it is:

 

There’s a longer, 30 minute version I did for Sustainable UX, which goes into things in more detail, here:

Enjoy!