Trying a test run of a sustainable product design workshop on Nov 29th in Berlin

I work as a freelance developer / user researcher / product person, and later in December, I’m teaming up with Dr Isabel Ordóñez to run a workshop, Designing Out Waste, at ThingsCon in Rotterdam. We’re doing a test run first in Berlin, so if you’re in Berlin and you work on building physical products (ideally connected/IoT products), it might be right up your street.

More details on the workshop for ThingsCon

Here’s the workshop abstract.

Around 80 percent of the environmental impact of a product or service is determined by decisions made in the early design phase. If you want to reduce that impact, and bring the rest of the team with you, you need to know how to think about this phase, and how to talk about the trade-offs you’ll make to achieve these reductions. That is the goal of this session.

The test run

We’re doing it on Nov 29th at Factory Berlin Mitte, at 6pm.

Why are we doing this?

I’ve blogged before about Fairphone and sustainable electronics, and over the last 5 years, I’ve led the life-cycle group when working on a programme of sorts with the IoTMark project.

Generally speaking, when we talk about the environmental impact of electronics, as an industry there’s plenty of people saying how bad things are, and plenty of evidence to support this, but sadly very few good examples to point to if you want to do something responsible, and many of us don’t know where to start.

When I was working in the lifecycle group, just finding a set of actions people could agree to was a huge challenge: while there are tools and approaches you can take, they’re often seen as the domain of sustainability experts only, and the general level of education is abysmal.

Meeting Dr Isabel Ordóñez

Fortunately, at an event in Berlin around 6 months ago, I met Dr Isabel Ordóñez, who had recently finished a thesis investigating the blockers to adopting sustainable design. After meeting, we started talking about how to make it more approachable, and after reading her PhD thesis, I found loads of answers to the problems I had encountered myself over the last five years.

A month or so later, I saw her present again at Open Source Circular Economy Days at the EUREF Campus to an audience interested in environmental sustainability but with little or no professional experience working in the field.

The ideas and measures she presented were easy to understand and practical, and looked like they’d work in a short workshop.

Teaming up for a workshop

So, over the last few weeks, we’ve been meeting to work on the learning materials and activities, and we think we’ve got enough now to help people wanting to take their first steps, by introducing them to some useful frameworks for thinking, and some free resources.

There’s more than you might expect out there now – there are theoretical tools, like MET matrices, to systematically structure your thinking, the way a Business Model Canvas can help you think about how a business will work. And there are increasingly freely available resources and datasets about the materials you can work with, as well as freely available, open source software to help model this now.
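If you think in code, here’s a minimal sketch of what a MET matrix might look like as a data structure – the lifecycle stages are the usual ones, but the example entries are purely illustrative, not taken from any real assessment:

```python
# A MET matrix structures thinking about environmental impact along two axes:
# lifecycle stages (rows), and Materials / Energy / Toxic emissions (columns).
# The entries added below are illustrative placeholders, not real assessment data.

LIFECYCLE_STAGES = ["materials", "manufacturing", "distribution", "use", "end-of-life"]
CATEGORIES = ["Materials", "Energy", "Toxic emissions"]

def empty_met_matrix():
    """Return a blank MET matrix to fill in, stage by stage, in a session."""
    return {stage: {cat: [] for cat in CATEGORIES} for stage in LIFECYCLE_STAGES}

matrix = empty_met_matrix()

# Hypothetical concerns for a connected product:
matrix["use"]["Energy"].append("standby power draw of the device")
matrix["materials"]["Toxic emissions"].append("solvents used in PCB production")

# Walking every cell makes gaps in your thinking visible:
for stage in LIFECYCLE_STAGES:
    flagged = {cat: notes for cat, notes in matrix[stage].items() if notes}
    if flagged:
        print(stage, flagged)
```

The value is less in the code than in the structure: systematically walking each lifecycle stage against Materials, Energy and Toxic emissions stops you fixating on a single hotspot, much as a Business Model Canvas stops you fixating on one part of a business.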

We think there’s enough there now for people to start seeing some value from applying what they learn, and also make some measurable improvements to their own practice.

Doing a practice run on Nov 29th

Before we run it at a conference where people have paid a few hundred euros to attend, it seemed worth doing a practice run, so that’s exactly what we’re doing on Thursday 29th November in Factory Berlin, Mitte.

We have a handful of spaces, so if in your line of work you design connected physical products or electronics, or are part of the team providing a service around them, we’d love to hear from you.

Releasing the workshop material after the conference

After the conference, we’ll be releasing the workshop materials and worksheets under a CC license, in a format that’s suitable for running workshops yourself.

We’ll also be releasing a template for Realtimeboard, suitable for use in a remote, synchronous, moderated workshop format, so you don’t need to be physically in Berlin to benefit from it. I’ll be looking for people to try this with again later in December.

Getting in touch

You can reach me all the ways listed on this page, and on social media, you can reach me via direct message on Twitter and Facebook, and yes, even on LinkedIn.



I’m hosting the Mozilla Global Sprints in Berlin in May – come along!

I’m helping host the Berlin Mozilla Global Sprint next week – it’s a two day event, set aside to create the space to make it easy to volunteer on existing open source projects aligned with Mozilla’s key Internet Health issues, outlined in their recently published Internet Health Report.

More specifically, these issues, taken from the report are:

WEB LITERACY: Projects that teach individuals skills to shape — and not simply consume — the web.

OPENNESS: Projects that keep the web transparent and understandable, allow anyone to invent online without asking permission, and encourage thoughtful sharing and reuse of data, code, and ideas.

PRIVACY & SECURITY: Projects that illuminate what happens to our personal data online, and how to make the Internet safer for all.

DIGITAL INCLUSION: Projects that ensure everyone has an equal opportunity to access the Internet, and can use it to improve their lives and societies.

DECENTRALIZATION: Projects that protect and secure an Internet controlled by many, so that no one actor can own it or control it or switch it off.

Oh neat, these are things I’m totally in favour of. What projects are there?

There’s a load of open source projects you can hack on listed at Mozilla’s Pulse page. But to be honest, as long as you’re working on a project that addresses the issues listed above, you’ll be welcome.

If you’re feeling particularly generous, and heroic

I’m looking for some help on a project called the Planet Friendly Web Guide, which I was working on earlier last year, as part of the Mozilla Open Web Leader programme.

I presented it at SustainableUX, and if you’re visually inclined, you can see the deck below that I used:

I’m looking for help in a bunch of ways, but the simplest way to see where you can help is to visit this contributing page on the guide.

In particular, I’m looking for help building some fun little widgets to let people get a quick idea of the carbon footprint of the infrastructure used to serve sites they use or build, based on the general platform, packets, process model – to see if it’s easy to apply for someone who hasn’t been working on the project like I have.
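To give an idea of what such a widget might do, here’s a rough sketch of the platform, packets, process split in code. Every coefficient below is a placeholder I’ve made up for illustration, not a real measured figure:

```python
# Rough sketch of a per-page-view carbon estimate, split along the
# "platform, packets, process" model: data centre, network transfer,
# and end-user device. ALL coefficients are illustrative placeholders.

GRID_INTENSITY_G_PER_KWH = 500      # placeholder grid carbon intensity
KWH_PER_GB_TRANSFER = 0.1           # placeholder network energy per GB
KWH_PER_SECOND_DEVICE = 0.00002     # placeholder device energy per second
KWH_PER_REQUEST_PLATFORM = 0.0003   # placeholder server energy per request

def grams_co2_per_view(page_weight_gb, seconds_on_page, requests=1):
    """Estimate grams of CO2 for one page view under the model above."""
    platform = requests * KWH_PER_REQUEST_PLATFORM       # servers
    packets = page_weight_gb * KWH_PER_GB_TRANSFER       # network transfer
    process = seconds_on_page * KWH_PER_SECOND_DEVICE    # user's device
    return (platform + packets + process) * GRID_INTENSITY_G_PER_KWH

# A hypothetical 2 MB page viewed for 30 seconds:
estimate = grams_co2_per_view(page_weight_gb=0.002, seconds_on_page=30)
print(f"{estimate:.3f} g CO2 per view")
```

The point of the widget wouldn’t be precision – it would be giving people a rough, understandable breakdown of where the footprint of serving a site comes from, so they can see which lever to pull first.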

Come, and hack on something nice

So, to recap, the deal is basically:

  • turn up
  • hack on a thing that largely agrees with the principles outlined in Mozilla’s 5 key issues
  • be fed at lunchtime as a token of appreciation if you’re giving your time to make the web a better place

There’s a lot more about the whole idea of Mozilla’s global sprints on their dedicated site.

You can sign up here on the registration site.

Wait two whole days? In this weather?

It’s also totally cool to drop by for just part of the two day period – understandable if you just want to spend a bit of time on a project, before getting out and enjoying the wonderful Berlin summer weather.

You in?

 

OMGDPR is live – ZOMG

I’ve written previously about GDPR, thinking out loud about running an event called OMGDPR – a community-run unconference to explore the changes it’ll bring about to the industry. In this post, I introduce the event publicly and explain why I think it’s important.

The background – what’s GDPR?

GDPR stands for General Data Protection Regulation – it’s the term used to describe a set of changes to the laws governing how data about people can be stored and used in the EU, and about EU citizens even when they’re outside the EU.

These changes become law on May 25th across the EU, and this is generally seen as the first time data protection regulation has had teeth.

I’ve written more on this blog, when I first floated the idea, and again with this update when I described the format in more detail. I also did a somewhat rambling talk last night at the Thingscon Salon at Mozilla’s headquarters.

You can now sign up for OMGDPR

omgdpr-wordmark.png

I’m pleased to announce that on Saturday, April 21st, we’re running OMGDPR, a community-run conference about GDPR, at Soundcloud’s swanky offices.

You can now sign up to come to OMGDPR at the Eventbrite link below:

http://bit.ly/omgdpr-tix

Why GDPR?

I’ve had to explain why I think GDPR is important a few times in the last few weeks.

If you’ve been following recent revelations about social media being used to sway elections and referenda around the world, and the seemingly endless stream of data breaches and questionable practices around, I think you’d agree the way personal data is used, stored and traded needs to change.

We need an alternative, and industry has consistently failed to regulate itself or provide one, so GDPR is effectively the policy response to this garbage fire.

GDPR is interesting to me for two main reasons.

Firstly – unlike previous data protection legislation from the late 1990s, it’s not so prescriptive about how organisations should comply with the law.

This is good, as we don’t end up with the silly conversations about cookie law, but bad in the sense that it’s not quite as clear when it comes to telling if you’re operating within the law.

Secondly – the penalties for not complying with the GDPR are pretty eye-watering. If you’re found not to be complying with this legislation, you can be fined up to 4% of your organisation’s turnover, or 20 million euros, whichever is larger.
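Expressed as code, the cap is simply the larger of the two figures, which is why it bites both small and enormous organisations. The turnover figures below are made up for illustration:

```python
# The maximum GDPR fine is the larger of 4% of annual turnover
# or 20 million euros. The example turnovers are illustrative.

def max_gdpr_fine(annual_turnover_eur):
    """Return the maximum possible fine in euros for a given turnover."""
    return max(0.04 * annual_turnover_eur, 20_000_000)

# A small company with 5M turnover still faces the 20M ceiling,
# while a 10Bn-turnover giant faces up to 400M:
print(max_gdpr_fine(5_000_000))
print(max_gdpr_fine(10_000_000_000))
```

So the flat 20 million euro floor is what makes this scary for smaller organisations, and the percentage is what makes it scary for the giants.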

See what I mean about teeth now?

Obviously not every company will automatically be fined these huge amounts on May 26th if they’re not complying with the law, but it provides a very strong motivation, that we’ve never had before, to look at how data is handled.

Why an unconference – tacit, emergent vs explicit, codified

I came across a really nice explanation of the value of unconferences recently, on the oneTeamGov blog on Medium, explaining the difference between sharing explicit knowledge through conferences and workshops, compared with sharing tacit knowledge through unconference style events:

A traditional workshop will focus on spreading explicit knowledge (the codified knowledge on what works best) that is contained in best practice databases and toolkits. The unconference format also uncovers tacit knowledge; what participants have learnt works well in their local context.

So to paraphrase:

When you have a clear solution to a problem, it’s more useful to spread that codified knowledge in the form of talks and workshops, which emphasise attendees absorbing information from an expert.

When you have more uncertainty, and a group of people motivated to solve it, unconferences are good for surfacing knowledge about what works from the entire group.

I’m shamelessly borrowing the image below from that post, as I found it really useful when thinking about this:

implicit vs explicit knowledge.png

Applying this to GDPR and OMGDPR

For the parts of GDPR that are explicit and well understood (much of it is based on existing practice, which often hasn’t been followed) – there are now loads of people doing the codified knowledge stuff.

Type GDPR into a search engine, and you’ll find loads of consultants and trainers who can help you now, for a price.

For the parts that aren’t so clear, there’s a role for unconferences and similar, hence OMGDPR.

A favour to ask

Typically, white dudes like me and Maik are really well represented at tech events, so I’m pretty sure we’ve got that viewpoint nailed. We’ve also had some success with reaching people who don’t sound and look like us, or work in the same context.

But it would be a missed opportunity if we missed out on hearing the voices that are typically marginalised, or not even considered, when tech products are built for mostly white male Europeans and North Americans.

Also, you don’t need to be some technical genius to contribute – we really want a wide variety of voices, as GDPR affects a wide range of people, with different things to bring. If you’re a designer or work with content, for example, then there’s a whole piece about making consent understandable, and how we need to rethink patterns we’ve used before. If you’re more into operations, there’s a whole range of new kinds of agreements that need to be understood and implemented now. If you’re into strategy or service design, there’s a whole raft of new privacy friendly ideas for services waiting to be discussed.

An unconference is only as good as who comes to it, so if there are people you’d like to see there, and you think their voice would add to the conversations on the day, please do invite them.

Final note about doing this in other parts of the world

We have a nice venue, and some smart people signed up already, but there’s nothing magical about OMGDPR that can’t be replicated elsewhere.

I’ve had a few people ask about doing this elsewhere in the world later in the year. I’m curious about how much appetite there is.

If you’re not coming to OMGDPR in Berlin but still interested, please do get in touch.


How to start out with user research on a product, when you don’t have much buy-in yet

I recently had a friend ask me about how you would fit user research into your product development process, when:

  • you already have a product released to users
  • and there’s no structured plan for doing user research, nor previous experience with in-house researchers

The first thing I did was invite her to the #researchOps workspace on Slack – it’s a fantastic resource, and there are loads of really experienced people who can give better answers than me.

I’m now writing this up, based on what I said to her, in case others are in a similar position.

In the post below I present a model for understanding the value of user research, which I find useful. I also share why I think that if you’re in a relatively small organisation, it’s worth starting by testing things you already have in production first.

Note: In this post I use the term user research interchangeably with the term design research, as used in the Australian DTO.

The main benefit of user research

It’s worth taking a second to talk about the value of user research. I explain the value of user research to budget holders like so:

User research reduces the risk of building the wrong thing.

Building digital products is expensive. This is because it typically takes the time of expensive people.

So, waiting a long time while expensive people build lots of stuff before you see whether any of it works is not only really expensive, but really risky.

So, it makes sense to spend a part of your total budget to find out, as early as possible, if things do work. This is so you can respond to this new information while you still have some budget to act upon it.

This can apply to an existing feature, or with your entire value proposition.

Got that?

If you’re working with people who are driven by numbers, then it’s useful to talk about user research as reducing risk.

What this looks like — incorporating research into product development

Okay, so once people are onboard with reducing risk with user research, what do you do next?

Will Myddelton’s blog is a brilliant resource anyway, but I think his model describing three main kinds of user research you might do is really helpful here.

It’s the clearest, snappiest, most usable description of user research I’ve come across so far, and he presents the three kinds as:

  1. Testing things the team have built
  2. Working out what the team should build next
  3. Understanding potential users and their lives

I’ll cover these in detail below, then I’ll explain why, in the context I’ve described above, I think you should start with the first one if you’re not doing user research regularly.

Testing things the team have built

This is probably the most straightforward to incorporate into how you build things and gives you immediate insights you can act upon.

The goal is to uncover areas where users get stuck using what you already have. The outcomes are typically slightly revised designs, and updated copy to address the most obvious problem areas.

As Will suggests, doing this first also helps the rest of the team learn first hand why user research is valuable. You always learn something new, and it often shows that users use what you have built differently to how you would use it.

 

you-are-not-your-user.png
Obligatory GDS poster truthbomb when discussing user research

 

Here, you’re reducing the risk that your product is currently underperforming for reasons that are obvious in retrospect, and easy to fix quickly.

However, it’s really, really important to act upon what you learn, to realise the value of this research, and make it worth repeating.

You’d typically refer to this activity as usability testing, user testing or something similar. I’d suggest avoiding the phrase user testing if you can, as you’re not testing the user, you’re testing the product.

If there’s one book to read for doing this, I’d recommend Steve Krug’s Rocket Surgery Made Easy.

Working out what the team should build next

If there’s future work planned, there will also be risk in the assumptions you will have made about how valuable, useful, or even feasible much of it is.

When you’re applying research techniques here, you’re using them to reduce the risk that your planned solution won’t solve the problem you’ve identified.

Here, the focus is identifying these assumptions you’re making, and carrying out activities to reduce the uncertainty in them.

Testing Prototypes

It’s not the only way to do this, but a common way is to build one or more prototypes first, and have a user researcher run moderated sessions with users, or design unmoderated sessions for users to take, which you review later.

You typically make a trade-off between simulation of value and time to actionable insight.

Low-fidelity prototypes are fast to get to users, so you get feedback quickly.

Higher fidelity prototypes are more complete, and slower to build, but let you test richer interactions, so offer a more faithful simulation of the value you will eventually provide.

Generally speaking, the riskier, and more substantial the new feature is, the more effort you can justify spending on prototyping to reduce the risk.

This isn’t the only approach, though. You may be lucky enough to find that a competing product has a feature that’s similar enough to what you want to build to justify using that, instead of taking time to build a prototype yourself.

This is also valid, and will often help you understand issues associated with this kind of feature, as long as you are clear about which parts you’re trying to test.

 

People might refer to this as user testing a prototype, or validating an idea in the backlog.

Who are our users, and what are their problems?

The final kind of research is arguably the most impactful, and if you were starting a project anew, you’d begin with this, to make it clear who your users are, and what needs you think you’re meeting.

Run at the beginning of a project, this is what people often refer to as a Discovery Phase, and it typically involves sets of in-depth interviews with people about their lives and habits, more than being directly about your product.

The findings and insights from this kind of research underpin your entire value proposition, and this kind of research reduces the risk that you are basing your entire product strategy on a premise that is inaccurate or poorly understood.

Picking your battles

This kind of research also requires the most organisational effort to respond to, and therefore, from the perspective of someone trying to champion user research, it’s the riskiest.

Typically, when I’ve been involved in research like this, or seen others do the same, even when you take great pains to share well supported, valid findings, these can be ignored if acting upon these findings means overcoming inertia in the organisation.

People in the organisation might already be invested in making a specific product decision that is undermined by these findings, or pressure from outside the team may make changing course harder (i.e. investors might assume this is already settled, and be pushing for growth over changes to a business model, etc.)

Until you’ve built up some credibility by sharing prior insights that have clearly helped make an existing product or service work better, telling managers that people are using a product for totally different reasons than they first thought, or that the value proposition needs rethinking, will often be resisted. It creates a load of new work, and on a human level can raise awkward questions inside the management team about why this risk is being discovered now, rather than earlier.

They might respond to your findings positively, and rethink their entire product focus, but they may also dismiss it, and you’ll end up wasting a load of social capital trying to bring about some change in an organisation not ready for it yet.

How to start — test that the thing you have works the way you’d expect

For this reason, if you have a product that people already use and pay for and you don’t have loads of influence, you’re probably better off answering the question of Is what we have working? first.

Starting out with this is fairly low risk — people working on products typically have heard of usability testing, and it will result in ideas for updating copy or similar, that the team can implement based on where they saw users struggling.

This helps demonstrate the value of research and helps get buy-in from the team. From there, it’s easier to start moving to the other questions.

I’ll outline the specifics on Is what we have working?, in a future post, and a bit about operationalising it.

Learning more

BTW, I found Will Myddelton’s model via Leisa Reichelt’s tinyletter, This Deserves Your Attention – you should absolutely subscribe if any of this post interested you.

If you’re also interested in making research part of how products are built where you work, I’d really, really suggest joining the researchops community slack. It’s great.

Please help me name this product triangle thing

I’ve worked as a developer, a sysadmin, a user researcher, a product manager, and a UXer, with different teams, and I’d like to share an idea here that I want to be able to present visually, as a way to identify common patterns of behaviour that make products less successful.

This is a work in progress, and it’s not fully thought through – it’s based on a couple of tweets from people who I follow and really enjoy reading, and I don’t have a snappy title for it. But I’ve tried running it by a few people from UX, product and engineering backgrounds, and so far:

  • they’ve been able to draw with a marker where they think their efforts in their own product teams lie among these three regions
  • we’ve been able to use it to help guide our own discussions about how we build digital products.

3 things you need, to build a successful product

In short, I think that to build a successful product or service that is able to sustain itself, your team needs to be competent in three broad areas. Here’s how I picture it, and I’ll explain the terms in more detail below:

lean-product-triangle

I’ll just say product for the rest of this post instead of the more clumsy product or service, as I increasingly find it more useful to think of these as the same kind of thing these days.

What I mean by Value, Quality and Flow

These are extremely broad terms, and I’ll try to be a bit more specific:

Value – Build the right thing

I’m calling one of these value, in the eyes of the user – this might be the customer paying for it, or, if it’s a public service, the people whose specific needs you’re meeting.

You need a clear understanding of the needs you’re meeting, the value provided by meeting them, and a way to validate that this is actually valued by your users.

Quality – Build the thing right

The next one here is what I’m calling quality. Depending on the disciplines your team has competency in, this might be an intuitive user experience, an easy to maintain, safe, secure, easy to extend codebase, or a beautifully presented design system that communicates the values you want.

It’s typically the stuff you might consider putting in your portfolio if you were a visual designer, or putting on GitHub as a developer.

Flow – Keep on shipping the thing

The final one is what I’m calling flow, and when I use the word flow, I’m thinking in terms of continuous delivery.  So, delivering frequent, small batches of new work, in a sustainable fashion, in response to the things your team learns about what’s valuable to your users, or what’s happening in the market.

What happens when you miss one out

When you have quality and value, but no flow

You might have commissioned a load of in-depth research to understand a group of people who have a clear need, and are prepared to pay to have it met. You might then have hired a fantastically skilled team to build a beautiful product with a wonderfully thought through set of features.

If you don’t have a clear way to get this in front of people, and regularly respond to what’s happening outside of the team, then one of two things will probably happen.

Probably the most demoralising is the product never seeing the light of day – something that happens depressingly regularly, especially in agency work.

The other is that if it does make it out the door, but you’re unable to respond to feedback fast enough, then your product loses relevance, and you end up with no one using it anyway. Sad times.

lean-product-triangle-flow.png

When you have value and flow, but not quality

Let’s say you’ve been able to find an unmet need, bash together a prototype, and then find a way to get progressively less clunky versions of it in front of people, up to the point where you have enough people using it that some pay for it, or it provides a valuable enough service that some other sponsor wants to cover the cost of running it.

This is common too – you might see an idea start at a hackday, for example, or you might be able to get something built speculatively, or very, very cheaply, by inexperienced developers or people who should know better, thinking they can fit it in around the rest of their life, and cutting corners to get something out the door (I’ve been this fool before – it’s always a bad idea).

To begin with, you can get changes and updates out regularly, but they’ll often miss edge cases, introduce bugs, and so on. When given the choice of fixing these bugs or covering the edge cases, versus pressing on with new ideas and new features, the shiny takes priority.

All these hacks build up, and eventually, it becomes a horrible ball of mud, with so much deferred clear-up work (i.e. all flavours of tech debt, ux debt, or whatever flavour of debt your team grumbles about) that it’s absolutely horrible to work on.

It becomes impossible to find anyone who will touch it, or it becomes so hard to change without breaking everything that you essentially become unable to ship anything meaningful. Sad times, again.

lean-product-triangle-quality.png

When you have flow and quality but no value

In our last case, let’s say you have a really proficient team, who are masters of their craft, and you’ve invested in building or buying a top notch continuous delivery pipeline. You can ship new changes a bajillion times a day, your builds are evergreen, your designs atomic and consistent across all the channels you support.

Everyone is blogging on the product blog about the latest thing your company has open sourced, stoked to be helping the community and to move the state of the art forward in their field.

But… it’s not obvious how the users’ needs are met by the product, in a way that is much better than the other options they have available. Or it’s not clear how the product sustains itself financially, or where the money might come from to pay for everyone. If this isn’t clear, it’s often because the money isn’t there.

In this case, if you don’t have a way of seeing which parts of what you’re offering are already valued by people, and what else they want, there may not be time to discover this before you run out of money.

You might see this if you’re working to a delivery roadmap that was decided up front, ages ago, and there’s no chance to discuss or check if the things being delivered really are valuable, or which parts of the product are really being used.

Alternatively, rather than working to an overly rigid roadmap where you’re not acting on feedback, the other extreme would be indiscriminately acting on all feedback, with no real way of telling if it’s valid or relevant, and making sweeping changes to the product. In this case you might have accidentally discovered something really valuable, or a really valuable set of users, but not realised it, and through this totally miss your last lifeline, as you thrash around frantically trying to pivot out of a horrible death spiral.

lean-product-triangle-value.png

You need all three – value, quality, and flow

lean-product-triangle-all-three.png

This is increasingly how I see digital product development – these three things aren’t in opposition to each other, and ideally you want to be slap bang in the middle.

In my experience, most teams end up drifting towards one of the edges of this triangle over time, and if they don’t take steps to make sure they’re not missing the third piece, you start to see increasingly severe cases of the bad things in these red boxes.

Ways to use this diagram

One really simple way is to try drawing this on a whiteboard or flipchart, and asking your team to mark where in the triangle they reckon they’re at (assuming they’re okay with the comically broad definitions of value, quality and flow here).

If you see clusters of marks towards one corner, it gives you an idea of what changes to your process you might want to make, or what changes to the makeup of skills in your team are worth considering.

You might also use this to acknowledge explicitly in a team that you’ve chosen to temporarily drift away from one corner – maybe you’re deferring some work to hit a deadline, and deliberately taking a hit on quality (this is your call to make, not mine), and you want to have a visible reminder, so you can get back to the sweet spot later.

I’d be curious about how it’s useful.

Other ways of telling

I did think about listing a few other signs to look out for, but:

  • this post is already huge, and I think these are best explored in future posts
  • I really want to use this post to see if this way of presenting is a help to others

Is this useful to you?

If this is, I’d love to hear from you – I’ve used it a few times in workshops and as a discussion tool, but this is the first time I’ve shared it online.

This isn’t my idea

I need to be clear – this is based on a load of ideas I’ve come across elsewhere.

Tom Stuart put me onto Donald Reinertsen’s book, Lean Product Development, which is full of absolute gems, but not an easy read, and has informed a lot of my thinking about this now.

If you’ve heard of Donald Reinertsen and Cost of Delay, Black Swan Farming do a great job of making the ideas much easier to digest. I’ve found the ideas on their site defining what they mean when they say value extremely useful, and I follow @joshuajames on twitter.

In fact, here’s the tweet I read that got me thinking about presenting this visually in the first place:

And here’s the tweet from Dan North it referred to, which in turn referred to Melissa Perri’s one:

Presenting it as a triangle and marking it

I think the idea of visualising it, and asking people to make a mark among three points, probably came from seeing this video of a severe-looking Dave Snowden presenting Sensemaker, a research tool I started looking into again after getting involved in the nascent ResearchOps community on Slack.

As the video outlines at 2:50, getting someone to mark a single point here between three points in a triangle forces a bit more thought than just asking for three numbers and building a radar chart from the results.
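To make that concrete, here’s a small sketch (my own, not from the talk) of why a single mark is more constrained than three separate scores: a point inside a triangle converts into three weights, one per corner, that always sum to one, so favouring one corner necessarily costs the others. The corner coordinates are arbitrary.

```python
# Sketch: converting a mark inside a triangle into three weights
# (barycentric coordinates). The triangle coordinates are arbitrary.

def barycentric(p, a, b, c):
    """Return weights (wa, wb, wc) for point p in triangle a, b, c.

    The weights are non-negative for points inside the triangle and
    always sum to 1 - a mark near a corner gives that corner a
    weight close to 1, at the expense of the other two.
    """
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return wa, wb, 1.0 - wa - wb

# A mark in the dead centre weights all three corners equally:
weights = barycentric((1.0, 1.0), (0.0, 0.0), (3.0, 0.0), (0.0, 3.0))
```

With three independent numbers on a radar chart, nothing stops someone scoring everything highly; the single mark forces the trade-off.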

If someone can share with me the cog sci principle behind this, I’d love to know more – it was totally new to me.


Who looks after Flow, Quality and Value in a team?

You can interpret this triangle as the tension between three roles – a product manager, a tech lead, and a delivery manager.

I think it’s more helpful to think of these as three areas a team that is building a product needs to be competent in. Most importantly, I think the team needs to be able to talk constructively about the interplay and tension between the three areas to work best.

Naming this

I have no idea what to call it. The lazy option would be something like Lean Product Triangle, given that putting Lean in front of anything seems to be the new agile.

There’s another framework called the Product Management Triangle. I had no idea it existed before googling it just now.

Using this yourself

You’re welcome to use this – it’s licensed under Creative Commons CC BY-SA, and the Google Slides document I was working in is publicly visible here.

Feedback I’ve seen so far

I’ll summarise some of the feedback I’ve had since sharing this.

A bit more on quality and flow

I’ve had a few kinds of feedback when I speak about this, and I think it’s down to the terms I’m using, as they’re pretty broad. When I speak to people in engineering management roles, their response is typically: I get flow anyway by investing in quality, as I’m able to deploy and respond to change quickly, so why are these different?

I’d argue that you can end up with a codebase that is well documented, has lots of well-written, passing tests, and is secure and performant – the kind of codebase lots of developers would describe as a relatively nice one to work in.

If you see your job as a developer as being to write code (after all, when you were hired, they were testing your ability to write code more than anything else), you might call this a high-quality codebase. And you might be maintaining this level of quality by making sure tests are passing, writing documentation, and having your pull requests merged into master after a review by at least one other person.

There might still be a convoluted deployment process, and you might be releasing relatively rarely as a result.

I refer to flow here as optimising for the time from an idea being accepted as worthwhile to work on, to being used by the people it was intended for.

In this case, I’d say you might have quality, but not necessarily flow. Depending on the company, you might have a coach, a scrum master, an engineering manager or a delivery manager seeing this as their responsibility – it’s more important that someone is thinking about it and arguing for it than who is doing it.
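If you wanted to put a rough number on flow in this sense, the simplest thing to track is elapsed time per item, from acceptance to use. A minimal sketch – the timestamps are made up, and the choice of the median as the summary is mine, not a standard:

```python
# Sketch: flow as the time from an idea being accepted as worth
# working on, to being in the hands of the people it was intended
# for. The timestamps below are illustrative.
from datetime import datetime
from statistics import median

def lead_time_days(accepted, in_use):
    """Days between an idea being accepted and being used."""
    return (in_use - accepted).total_seconds() / 86400

items = [
    (datetime(2018, 1, 2), datetime(2018, 1, 16)),
    (datetime(2018, 1, 8), datetime(2018, 2, 5)),
    (datetime(2018, 1, 20), datetime(2018, 1, 27)),
]

times = [lead_time_days(accepted, in_use) for accepted, in_use in items]

# The median is less skewed by the odd outlier than the mean.
typical = median(times)
```

Note that nothing in this measurement says anything about test coverage or documentation – which is why a team can score well on quality and still have poor flow.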

The iron/engineer’s triangle

The most common triangle people refer to in software development is the iron triangle:

“good, cheap, fast – pick two”

The idea is that you can’t have all three, and trying to go for all three is folly. You might see it referred to as Scope, Resources, Time instead. The idea that you only get to choose two remains, so in agile circles it’s considered best to be flexible about scope, as Time and Resources are usually already fixed.

Writing up IoTMark – 1 of 2

I went to last Friday’s IoTMark event, a follow-up to last summer’s Open IoT Definition, partly because I figured I’d learn a lot, and partly because, whether I like it or not, the evolution of the Internet of Things (IoT) is something I feel I should have a handle on as someone working in tech.

The day was more productive than I had expected, and I learned loads – but before putting together a writeup, it seemed worth giving some more context in case this is all new to you. There’s another post where I write up the day itself.

My background, and disclaimers

I am definitely not an IoT expert. I write this from the perspective of someone who is paid to write code, but also paid to run workshops and help product development teams work more effectively together.

The one system I put into production in 2009, with my first company, that you might describe as vaguely IoT-y was a system for inferring utilisation of a couple of co-working spaces by detecting people’s devices connecting to the wifi.

We then monitored the energy used in the building, and tracked the energy mix on the national grid, to work out the carbon footprint per person per hour of the space.
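The arithmetic behind that figure is simple enough to sketch. This is only the shape of the calculation – the numbers and the function name are illustrative, not taken from the real system:

```python
# Sketch of the calculation described above: building energy use,
# multiplied by the grid's carbon intensity, shared out over the
# people detected on the wifi. All numbers are illustrative.

def carbon_per_person_hour(energy_kwh, grid_gco2_per_kwh, people, hours):
    """Grams of CO2 attributable to each person for each hour of use."""
    total_gco2 = energy_kwh * grid_gco2_per_kwh
    return total_gco2 / (people * hours)

# 120 kWh over an 8-hour day, a grid mix of 450 gCO2/kWh, and 30
# people seen on the wifi:
footprint = carbon_per_person_hour(120, 450, 30, 8)
```

The interesting part in practice was that the grid intensity changes hour by hour, so the same kWh costs more carbon at some times of day than others – which is what made shaping demand worthwhile.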

The idea was to help the owner make an argument that you could have a greener office, by shaping demand and making better use of the space available. I’m happy to chat about the lessons learned here, but that’s another post.

It was some of my first Python programming; it was buggy as hell, a source of incredible stress for me, and commercially ruinous for my company, but I learned about failure modes, security, and all the weird ways things can fail, at the worst possible time. In fact, it was enough for me to flee from anything to do with hardware for years. As such, there may be errors, and I might get terminology wrong – please contact me directly if you see any, and I’ll change them.

I share this to give some background of where I’m coming from, if my viewpoint seems different to yours.

The background of the Open IoT events and IoTMark

In 2012 – the Open Internet Of Things Assembly

In 2012, along with a bunch of other nerds, I went to the Open IoT Definition event on 17th June 2012, at the Google Campus in Shoreditch, organised by Alexandra Deschamps-Sonsino, and ended up at a fascinating, if somewhat chaotic, event. Over the day, 60-70 people formed groups to talk about various aspects of how IoT was affecting our lives, how the incentives seemed to reward all the wrong kinds of behaviour, and how we felt a more humane version of this might look.

This culminated in something like 50 different people, trying to use google docs to simultaneously author a sort of bill of rights, the Open Internet of Things Definition, to encapsulate this.

Amazingly, there’s still a Storify of the day available that does a fairly good job of capturing what it all felt like.

The day felt nice, and I met loads of really interesting people, but I’d be lying if I said we all left and rebuilt everything we did along the principles written up in that Open IoT document.

Fast forward to 2017 – The Internet of Things Mark

Last year, I heard from Alex and Usman, letting me know that they were planning to do a follow-up event, and asking whether I’d be interested in coming back to help moderate one of the sessions.

The goal of the day was to come up with a kind of trustmark for IoT products, and we needed to agree on what the trustmark stood for, before we could go through the process of registering trademarks, and all the necessary legal bits.

The venue they had found was London Zoo, and it dovetailed (sorry) nicely with a good friend’s birthday.

So, I booked a train from Berlin to London and headed over for June 17th 2017 – 5 years to the day of the last event.

What, someone did something with those principles?

One of the things that really surprised me at the event, was that some larger companies had been paying attention to the work going into these principles as a way to carve out a competitive advantage.

In particular, we saw Stefan Ferber from Bosch talking about many of the same principles we had written about, and how they were applying them in their work.

He was talking about the benefits of trust when thinking about IoT – optimising for trust as a response to an ever more complex world.

If I can trust the service or product I’m using, rather than having an adversarial relationship with it (as I might with many low-cost airlines, ad-tech providers, or much of retail banking in the US), then it follows that I’m more likely to want to use offerings from the company making it over others.


Moderating the life cycle group

Shortly after some intros, we ended up splitting into interest groups, and I ended up helping moderate one of the sessions around the environmental and life-cycle aspects of IoT.

In our group, we had quite a wide mix of people, from young founders building VC-backed IoT startups, to people from the Restart Project talking about repair and design for remanufacture, and embedded programming specialists concerned with the end of life of IoT hardware.

We were looking at planned obsolescence and end-of-life support – for example, when hardware is literally embedded in your house, and you sell the house but can’t transfer ownership of the devices, you have new kinds of problems we haven’t really had to deal with before.

Or when your phone still works, but its software update policy has left it so dangerously vulnerable to attackers that it’s irresponsible to put data on it, effectively making it unusable for many of the common use-cases you might associate with a typical smartphone.

We also looked into embodied carbon and the fairness of supply chains – where minerals like the coltan in your computer chips come from, the conditions in which they’re extracted, and where hardware is assembled.

So, all the easy problems, then.

Our goal was to come up with some principles everyone in our group could agree to, and would be prepared to follow, in this absolute minefield of topics. As you can imagine, getting someone in a VC-backed hardware startup set up for hypergrowth to find common ground with a design-for-repair campaigner is not easy, but we ended up with a few principles we could take to the final session of the day.

One day we will learn

Usman, once again, trying to get 60 people to agree in a google document

And once again, we tried to get 60 people to agree to principles, but this time, we were trying to get something more concrete to put on a product, so it was even harder to find consensus.

Even after the back and forth within my group, we found that loads of the stuff we had settled on as uncontroversial had others pushing back hard – so we ended up with a much more watered-down set of ideas in the lifecycle section.

The Open IoTMark RFC

Like the last event, the culmination was a group of people, essentially yelling at poor Usman, as he and other people tried to write a document to capture all the ideas in the principles.

But this time round, given the number of engineers in the audience, someone came up with the idea of describing these principles using the normative language you’d see from the IETF, with very specific, precise meanings applied to words like MUST, SHALL, SHOULD, and so on.

I’ll be honest – this entire process was exhausting and not very satisfying.

This shouldn’t be news though, and it isn’t a reflection on Usman’s skill as a facilitator – I think it’s very hard to avoid in cases like this.

As you move from principles you care about when it’s not your job to own the supply chain and deliver the returns an investor is expecting, to actually having to publicly say you’re prepared to adopt these principles, of course you’re going to get push-back.

That said, Usman pushed on, and before we had to leave, we somehow ended up with a document that, even if not everyone was happy with it, they were prepared to co-sign.

And that’s the background. Here’s the actual write up of the day.

 

Trying an idea – OMGDPR, a GDPR-themed event in Berlin

I’ve been following the passage of GDPR from idea to law over the last couple of years, and I’m convinced its effects will be far-reaching and extremely disruptive, not just to the industry I work in, but to any industry that collects and processes data about customers.

I started chatting with a friend, Maik, and we’re now testing to see if there’s interest in an event around it, which we’re calling OMGDPR.

Okay, what is OMGDPR?

OMGDPR is the working title for a community-run, open space event in Berlin, in late March/early April, for practitioners who build digital products or services, to learn from each other about how GDPR will affect their organisations and, by extension, how they work.

Wait. You keep saying GDPR. What’s GDPR?

GDPR stands for General Data Protection Regulation – the short name for what’s being referred to as the most important change in data privacy regulation in 20 years.

I’m going to cheat here and use wikipedia’s summary of the changes to the law:

“The proposed new EU data protection regime extends the scope of the EU data protection law to all foreign companies processing data of EU residents. It provides for a harmonisation of the data protection regulations throughout the EU, thereby making it easier for non-European companies to comply with these regulations; however, this comes at the cost of a strict data protection compliance regime with severe penalties of up to 4% of worldwide turnover.”

The GDPR also brings a new set of “digital rights” for EU citizens in an age when the economic value of personal data is increasing in the digital economy.

The key takeaways are:

  • it applies to any company processing the data of EU residents, not just companies based in the EU
  • the penalties, at up to 4% of worldwide turnover, are severe enough to threaten the existence of a company

Wow, ‘threaten the existence of a company’? Now I’m interested.

Quite.

The changes to the law come into force in late May, and they affect every company in the EU, but loads of companies, particularly smaller ones, aren’t really prepared for it yet.

There’s also lots of FUD (fear, uncertainty and doubt) around, so our intention is to create a space that lets people talk about it in a relatively welcoming, safe, informal environment, so they can see what they need to do if they haven’t had the time to think through a response yet.

Likewise, we’re hoping there will be a chance to learn from others who have had a chance to look into it, and who would like to see more organisations treat personal data with the respect it deserves.

Okay, how do I find out more?

The easiest thing to do is to fill out the form below, which we’re using to gauge interest – we’re aiming to run the event along open space principles, where people:

  1. bring the topics they’d like to discuss
  2. autonomously form into groups to discuss the topics that they are interested in
  3. report back what they learn for the rest of the group to reflect on or capture
  4. leave the event with a clearer idea about what they might do

Here’s the form:

https://productscience.typeform.com/to/PuUf47

Okay, that’s it – if this interests you, please give the form a go, and if there are typos or missing questions, do please let me know.

Thanks!