Quick notes from an EU Green Public Procurement Workshop for Cloud and IT in Brussels

As part of my work with the Green Web Foundation, I’ve ended up spending time in Brussels going to workshops, to feed into policy for greening the way we do digital. I’ve just finished the second workshop today which was about the sexy, sexy subject of public procurement. Why am I doing this? Because I think the way elected bodies spend money is a pretty decent lever for climate action, and it’s a way for me to have some influence as a citizen. Here are my take-aways.

What was this workshop about?

I’m struggling to find links to point to it, but the short version is:

  • the EU spends lots of money on IT, as in 45 bn each year
  • the EU has targets to make this spending at least somewhat efficient, and people are starting to realise that electricity, when it doesn’t come from renewable sources, is a source of CO2 emissions
  • the EU also has some notion that digital technology, while a possible enabler for reducing CO2 emissions, can also be a source of emissions

So, the goal of the workshops has been to validate and add some extra perspective to the research, and hopefully inform policy around Green Public Procurement from 2020 onwards, with a report due in February:

First of all, I’m glad that I am in a position where I can take a day away from billing for my time to attend a workshop like this.

Almost all the people in the workshop were from companies of more than a thousand people, with a minority of policy makers and academics. I think we had a couple of people from small businesses presenting, but they weren’t around for the whole day, which meant that in some of the exercises we mainly ended up hearing the views of huge companies, rather than the small companies that make up at least half of the economy in Europe.

OK, what was discussed then?

The main thrust of the day was about how we might make computing, datacentres and cloud more energy efficient, as a way to decouple economic growth from the corresponding growth in emissions, when it’s clear global emissions are going in the wrong direction.

Energy efficiency also seems to be one of the few ways we get to talk about climate, as it’s often presented as a win-win. Yes, this feels a bit weak-sauce in the face of scientists basically screaming at us to take the science seriously. But eh… I guess at least it’s a way we can start talking about carbon, and regulation, and creating the incentives to make how we work in tech follow the science, right?

Key things I learned

  • We’re still not very good at talking about carbon. I looked, and despite the science being pretty clear, the leaders of Europe declaring a climate emergency and describing policy in terms of carbon emissions, and, jeez, kids striking every Friday to remind us, it didn’t really come up anywhere near as much as I’d expect.
  • The certification schemes and codes of conduct have relatively low take-up. There is a dizzying range of codes of conduct for datacentres, and different certification schemes like the Blue Angel in Germany, among others. Despite the money pouring into them, they’re still comparatively niche.
  • Cloud and datacentres aren’t included in the National Action Plans that countries in Europe publish to reduce carbon emissions. For reasons I don’t quite understand, cloud computing and datacentres don’t seem to factor in when countries share their plans to reduce their emissions. This feels a bit like how aviation is treated in some places, but it’s much harder to understand the reasoning – I mean, we know running servers normally will emit carbon, right?
  • There are a bunch of European research projects in this field already. There’s a veritable alphabet soup out there of projects to find some kind of way to do greener computing, from CloudWatch2, to PICSE (Procurement Innovation For Cloud Services in Europe), ASCETiC (Adapting Service lifeCycle towards EfficienT Clouds), EURECA (EU Resource Efficiency Coordination Action), and Helix Nebula, among others.
  • There’s some draft public procurement guidance for cloud and ICT that’s been announced and might help, though it’s still being finished. There isn’t a clear URL I can share, but this looked pretty interesting – it’s essentially pre-written language to copy and paste into procurement documents, covering things like clear, fair selection criteria for the things that make a difference in CO2 emissions when you spend money on tech. They also include sample criteria for leaving contracts if a supplier doesn’t get their shit together. There’s already published guidance for a bunch of sectors, and there’s a newsletter sign-up form on the European Commission site, which also has links to their helpdesk if you’re interested in the draft content. (see also – my snaps from the day)

The top recommendations from the day

It’s worth me sharing these recommendations online here, before I share a pic of which recommendations had the most interest:

The rankings of the recommendations listed above. Carbon tracking, and incentives to help move away from wasteful ways of doing things were higher up.

My take on the recommendations

There’s a big report coming in February, but I had a few takeaways from the day beyond this.

At first glance it looks kind of Green New Deal-ish, right, with carbon reductions, and incentives to help a just transition to better infrastructure. There are a few nods to the lack of transparency in this field – the idea of a virtual smart meter to help people understand their own impact was popular.

I think the inclusion of investing in creating standards, while dull, sounds useful, as this is a field where it’s really hard to get reliable numbers.

However, I feel like in its current form there are some problems.

Regulators and policy folks seem unable to see the similarities between cloud markets and energy markets, and left to their own devices, I think these recommendations are likely to consolidate the lead of existing hyperscale providers.

This is because they are already more efficient than smaller operators, and are already further along in terms of tracking their own carbon (even if they don’t disclose it fully – like Amazon).

Personally, I think there’s a chance to be more daring here. Just as the decisions around the Energiewende in Germany led to the creation of an energy market with lots of small providers, instead of the near oligopoly you see elsewhere, I think something like a DigitalWende – creating a single, EU-wide spot market for compute as a commodity – would help, as right now you only get to do this within one provider.

Combining that with work around low carbon orchestration and scheduling software, like Aled James’s open source, load-shifting low carbon Kubernetes scheduler, or projects to make use of underused capacity, like Helios’s Open Compute Cloud, feels like it would support the creation of a much more vibrant European cloud market than just handing it all to a handful of American companies.

Questions?

As ever, I’m happy to chat about this in more detail, and the ways you can contact me are listed on my contact page.

If this kind of wonkish climate and cloud fare interests you, you might also enjoy the Greening Digital Newsletter I write too.

How to rate limit punks with nginx

I do some ops work for the Green Web Foundation, and over the last few weeks we’ve been seeing nasty spikes of traffic hammering our API. Here’s the nginx incantation we used to rate limit the worst offenders:

limit_req_zone $binary_remote_addr zone=nopunks:10m rate=10r/s;

What does this mean? We start by calling limit_req_zone, to tell nginx we want to set up a zone where we rate limit requests on our server, telling it to use $binary_remote_addr, the binary representation of a connecting client’s IP address, to tell one requesting client from another. We want to be able to refer to this rate limiting zone later, so we give it a name, nopunks, with zone=nopunks:10m, and we set aside 10 megabytes of space to keep track of all the possible IP addresses connecting.

This means we can keep track of how much our poor API is being hammered from around 160,000 different IP addresses – useful!

Finally we set a rate of requests that seems fair with rate=10r/s. This means we want an upper limit of 10 requests per second to apply to this zone.

So, after writing this, we have a special zone, nopunks, that we can apply to any vhost or server we want with nginx.
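One thing worth knowing: limit_req_zone needs to live at the http level of your nginx config, not inside a server block. A minimal sketch of where it sits (the file paths are the usual Debian-style layout, so yours may differ):

http {
    # define the shared memory zone once, at http level
    limit_req_zone $binary_remote_addr zone=nopunks:10m rate=10r/s;

    # individual sites and APIs then opt in with limit_req
    # in their server or location blocks
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}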

Adding our zone to a site or API we want to protect

Now we have that, let’s apply this handy new nopunks zone, to a route in nginx.

location / {
      # apply the nopunks
      # allow a burst of up to 20 requests
      # in one go, with no delay
      limit_req zone=nopunks burst=20 nodelay;
      # tell the offending client they are being
      # rate limited - it's polite!
      limit_req_status 429;
      # try to serve file directly, fallback to index.php
      try_files $uri /index.php$is_args$args;
}

What we’re doing here is applying the nopunks zone, and passing in a couple of extra incantations, to avoid a page loading too slowly. We use burst=20 to say:

we are cool with a burst of up to 20 requests, in one go before we stop accepting requests

Thing is, on its own this leaves us with a backlog of up to 20 queued requests, each taking 0.1 seconds to be served (that’s our 10 requests per second rate), so the whole burst takes 2 seconds to clear. That’s a pretty poor user experience. So, we can pass in nodelay – this adjusts our rate limiting to say this instead:

okay, you can send up to 20 requests, and we’ll even let you send them as fast as you like, but if you send any more than that, we’ll rate limit you

Finally, by default, when nginx rate limits a client it serves a rather dramatic 503 error, as if something had gone very wrong. Instead, limit_req_status 429 tells nginx to send the connecting client a 429 Too Many Requests status, so ideally the person programming the offending HTTP client gets the message, and stops hammering your API so hard.
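As a small aside, nginx also logs rejected requests to the error log at the error level by default, which can make the log pretty noisy during a flood. If that bothers you, there’s a limit_req_log_level directive to tone it down – a sketch of both directives together:

# log rejected requests as warnings rather than errors,
# so a noisy client doesn't fill the error log with scary entries
limit_req_log_level warn;
# and reply with 429 Too Many Requests instead of the default 503
limit_req_status 429;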

So there you have it

This is a message mainly to my future self, for the next time I am looking after a server under attack. But with some luck, it’ll make being DOS’d (intentionally or not) a less stressful experience for another soul on the internet.

Further reading

The nginx documentation is pretty clear if you need to do this, with lots of helpful examples, and the guide on rate limiting on the Nginx website was also a godsend when I had to do this today.

In praise of sea otters

Sea otters are awesome. I have a bunch of tabs open, and I wanted to drop some content here before I close them.

Sea otters are great in particular if you care about climate change, because they eat sea urchins, which in turn really, really like eating kelp, a giant seaweed that forms huge, beautiful kelp forests, that sequester (i.e. draw down) loads of CO2.

The sad thing is that we almost hunted sea otters to extinction in the early 20th century, and their reduced numbers have meant that loads of kelp forests have been decimated by marauding sea urchins that would have otherwise been eaten by them.

These sea urchins have worked like little aquatic lumberjacks, gnawing through the seaweed and killing it, releasing CO2.

If I end up starting a project with a sea otter as a mascot, this is why.

Sea otters. Cute, climate heroes

More links

https://www.nationalgeographic.com/animals/mammals/s/sea-otter/

https://science.jrank.org/kids/pages/58/OTTERS-URCHINS-KELP.html

https://www.montereybayseaweeds.com/the-seaweed-source/2018/9/25/sea-otters-arent-the-saviors-of-all-kelp-forests-hmns6-5l8jb

Two things I wish existed, and would want to make if I had lawyer super powers

A friend of mine, Ed, asked me this in a private WhatsApp group before tagging me on Twitter with this message:

Two things I wish existed

A “Green Oak” Software License

Anything to discourage the use of open source software and services to support the extraction of fossil fuels would be good.

We’ve seen previously that one of the key things slowing fossil fuel extraction so far has been the difficulty in raising finance, or getting insurance, for new fossil fuel projects. See this thread for more:

So, I think we should make it riskier and more expensive to use open source software to support fossil fuel extraction.

Maybe a thing like a “Green Oak License” – i.e. along the lines of the Blue Oak Model License, but with explicit language forbidding its use in the extraction of fossil fuels.

If this exists in a sensible form, then it becomes possible to have a conversation about what the people building software are comfortable with it being used for, and ideally, for us, as grown-up professionals, to take more responsibility for how the things we make are used.

As tech grows up, so must we, and if we say software is eating the world, then maybe this new world should have a different aesthetic – one where it’s just not cool to have anything to do with extracting fossil fuels, when the science is so overwhelming, and when the investment that is going into fossil fuels needs to go into things like drawing down carbon, or transitioning our economy away from them.

Model policy language for procurement, to keep purchasing in line with net-zero targets

The second thing would be some model language to use in procurement, to basically say:

“this big purchase we make needs to be in line with net-zero targets”

I don’t know exactly what it might be, but it would be helpful, in my view, to create an incentive that people can’t complain about as being non-competitive, and that people can use to force a conversation, in places where a climate emergency has been declared, to give those declarations some teeth.

Maybe it’s a specific thing to ask for to show this, like a verifiable commitment, the way the WCAG guidelines forced accessibility to be a thing in the public sector. You can see precedents being set, like NYC public schools forcing Amazon’s Kindle to be more accessible, which has now ended up creating norms for the private sector too, as in the case where a blind man successfully sued Domino’s Pizza for building an inaccessible site.

Why I think this would help

I say this because I understand that more than 50% of UK councils have declared a climate emergency now.

But without any mechanism to act upon this declaration, I worry that it’s just a feel good gesture, and any momentum from doing it will be lost.

If there’s some legal basis to back up the science, which we all seem to be ignoring, at least it can lead to a conversation along the lines of:

“OK, what does acting as if there was a climate emergency look like?”.

The goal here isn’t to penalise people for declaring a climate emergency, but instead to create the legal mechanism to allow the people pushing for it, to push for action, rather than being fobbed off with a response like “we already declared it, we’re done!”.

The people campaigning for things like emergency declarations shouldn’t need to be policy experts or technocrats, but their reasonable wish of keeping the world safe for their children, and their friends’ children, should be respected.

I’ve been working with a friend, James Gardner, to sketch out some ideas along the lines of a “ten tonne rule”. I’m hoping these two sentences on how to use it will outline the idea behind the ten tonne rule:

If any spend will cause more than 10 tonnes of CO2 emissions, rank bids by CO2 emitted over length of the contract.

Suppliers show the workings for their CO2 figures in bids. Favour the lowest.

James has written some more on his site.

How to forward requests with proxy_pass in nginx

I’ve been doing some work of late with The Green Web Foundation, and recently we moved from using Gearman as a queue, to RabbitMQ instead.

RabbitMQ has a management UI that makes it easier to tell what it’s doing, and it exposes this information at a specific port (let’s say 12345), in the form of a handy dashboard.

You might want to make this easy to see remotely, and if you already use Nginx as a webserver for serving files on the same machine, one thing you can do is serve this dashboard through nginx, using a handy directive called proxy_pass, and setting up an upstream server to send requests along to another service.

Here’s how it works.

First of all, set up a server

server {
  # sample values
  listen 123.123.123.123:80;
  server_name dashboard.thegreenwebfoundation.org;
  # the directory to serve files from
  root  /var/www/dashboard.thegreenwebfoundation.org;


  location / {
        # try to serve file directly, fallback to index.php
        try_files $uri /index.html$is_args$args;
  }

}

Once you have a server, this should be serving files from the directory /var/www/dashboard.thegreenwebfoundation.org.

This works for static files, but in the case of the RabbitMQ dashboard, we have the dashboard being served from a different port.

One way to serve this content on a different port is to define it as an upstream server, like so, so we can refer to it later:

upstream rabbitmgmnt {
	server localhost:12345;
}
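One small gotcha: upstream blocks belong at the http level of your nginx config, alongside your server blocks rather than inside one. A rough sketch of the overall shape:

http {
    upstream rabbitmgmnt {
        server localhost:12345;
    }

    server {
        listen 123.123.123.123:80;
        server_name dashboard.thegreenwebfoundation.org;
        # ... the rest of the server block from above ...
    }
}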

Now we have defined an upstream server listening on port 12345, we need a way to send traffic along to it. One way is to use the location directive like this – now, any requests sent to dashboard.thegreenwebfoundation.org/rabbit/ will be sent along to the upstream rabbitmgmnt server.

location /rabbit/ {
        proxy_pass http://rabbitmgmnt/;
}
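A quick note on trailing slashes, because they catch people out: the / at the end of proxy_pass http://rabbitmgmnt/; is what makes nginx swap the matched /rabbit/ prefix for / before proxying, whereas leaving it off passes the original URI through untouched. A sketch of the difference (the RabbitMQ paths here are just illustrative):

location /rabbit/ {
        # with the trailing slash, the matched /rabbit/ prefix is
        # replaced before proxying, so:
        #   /rabbit/api/overview  ->  http://localhost:12345/api/overview
        proxy_pass http://rabbitmgmnt/;

        # by contrast, "proxy_pass http://rabbitmgmnt;" (no trailing
        # slash) would pass the URI through as-is:
        #   /rabbit/api/overview  ->  http://localhost:12345/rabbit/api/overview
}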

Why is this useful?

This is handy as it saves you needing to set up a whole new virtual host.

Note: I’ve abridged the code in this example to keep it easily readable, but you’d almost always serve this over HTTPS, and ideally you’d restrict the range of IP addresses able to access this endpoint.
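For instance, here’s a minimal sketch of locking the dashboard down to a couple of trusted addresses with nginx’s allow and deny directives (the addresses below are placeholders, not real ones):

location /rabbit/ {
        # only let these (placeholder) addresses reach the dashboard
        allow 203.0.113.10;
        allow 198.51.100.0/24;
        # everyone else gets a 403
        deny all;

        proxy_pass http://rabbitmgmnt/;
}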

This post is one for my future self, when I forget how to use nginx again…

Helpful links

RabbitMQ Management plugin – https://www.rabbitmq.com/management.html

Nginx proxy_pass info – http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass

Nginx upstream servers – http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream

Fellowship-it: an idea to get over the last hurdle when applying for funding

I keep planning to apply for the Shuttleworth fellowship, and failing to apply, because I’m not happy with my final application, so I want to try a weird trick that might help. It might help you too.

My experience

I tried earlier this year, but at the last minute I flaked out, because, after spending hours writing an application, I couldn’t get together a video I was happy with to make up the last part of the submission for the fellowship. I’d keep re-recording, or re-writing what I was going to say.

In a word, I couldn’t ship it.

What if I can’t keep noodling around with the video?

If you’ve ever been to an ignite/pecha kucha event, you’ll be familiar with a neat hack used to make the night more interesting, and keep the event flowing.

Speakers have 300 seconds, and the slides automatically cycle through the deck – you don’t get to control it. It’s scary, but also liberating.

Fellowship-it

Puns seem to be one of the main drivers of my professional career, and this is no exception. I’m planning to run an event in the last week before the Shuttleworth Fellowship application closes, for others in Berlin or Germany who are thinking of applying, and who:

  • want to apply for the Shuttleworth Fellowship, which closes on Nov 3rd
  • are cool with presenting your idea on a projector in a venue in Berlin, from a PDF or online deck (i.e. not your laptop)
  • are happy presenting in front of a small, friendly supportive audience

This sound like you? If so, here’s the plan:

  • we get a venue (I’ve asked co-up but I’m open to other venues, as long as they’re easy to get to, and free)
  • one of us sets up a DSLR or similar camera
  • we take it in turns to present our idea, in one take, to the rest of the audience. We have the projector and the audience to help.
  • we immediately upload the video somewhere we can all access (probably YouTube, but I’m not too fussy)

We then have a video uploaded and online the same day, that we can link to before the deadline.

Why do this?

I have a hard time getting these applications shipped, and I think others might too.

So, if we can do something to help with the most awkward part of getting these applications over the line, I think it’ll increase the chances of one of us actually getting funded, for whatever project we want to throw the majority of our waking hours at, over the next few years.

If nothing else, it’ll be a good test run if you DO want to make a video anyway. You’ll get practice trying to speak coherently about your project.

If you’re into it

Let’s say we might do this on the afternoon/evening of Friday Nov 1st, or at some point on Saturday Nov 2nd. That still gives time to see the vid and make the application deadline of Nov 3rd.

If you have a venue in mind, or you fancy doing this, shoot me an email with the subject “Fellowship-it”, saying “I’m up for doing this”, to chris@productscience.net.

You can say more in the email, but that’s enough.

If there are enough of us in Berlin who want to do this, I’ll set some time aside to make it happen.

Update: I provisionally have a space now (Co-up, a cool community space in Berlin) and a date – the evening of Nov 2nd.

How much of our internet infrastructure will be underwater in 15 years?

I follow Alexandra Deschamps-Sonsino on Twitter, and I learn a colossal amount from what she shares, but some recent links she shared really got me thinking. I’ve written previously about how tech and the internet play havoc with our climate because they rely on fossil fuels. It looks like the climate is wreaking havoc right back.

TLDR: Burning fossil fuels to run the internet worsens climate change. Now rising sea levels look like they’ll swamp our infrastructure in return.

This piece from last year in National Geographic is eye-opening, about how rising sea levels are affecting the operation of the internet. Because commercial firms don’t disclose where their infrastructure is, the authors ended up needing to scrape loads of pages to get an idea of where it all was, and found this:

Cities like New York, Miami, and Seattle are likely to see up to 12 inches of extra water by 2030—well inside the time range of a mortgage on a house, or the planning horizon for big public infrastructure projects. A foot of extra water wending through some of those cities, the researchers say, would put about 20 percent of the nation’s key internet infrastructure underwater.

They name specific companies in the paper, like AT&T, CenturyLink and so on, whose infrastructure is at risk. Above a certain size of company, there are climate-related financial disclosures it really should be sharing, for the benefit of investors, suppliers, customers and so on, and there are companies who are doing this.

One good example is Etsy, who last year started integrating environmental reporting with financial reporting.

Here’s what they say, specifically, referring to the Sustainability Accounting Standards Board (SASB) standards:

Discussion of the integration of environmental considerations into strategic planning for data center needs. Etsy’s goals include powering our operations with 100% renewable electricity by 2020, and reducing the intensity of our energy use by 25% by 2025.

These goals are included as key considerations as we plan for our computing needs, and have been a focus of our sustainability efforts. When transitioning to a cloud computing infrastructure, we selected Google Cloud Platform, a partner that shares our commitment to 100% renewable electricity. Their highly efficient datacenters are expected to help us save significant energy. Moreover, moving to flexible cloud-based infrastructure should enable us to reduce major idle time and associated energy consumption.

In 2018, Etsy entered into a virtual power purchase agreement for solar energy in Virginia. Once operational, this project is expected to provide us with renewable attributes to apply to our operations and computing infrastructure, furthering our goals of creating a cleaner internet and reducing our impact on the planet. We actively monitor and manage energy consumption from our computing infrastructure.

In 2018, our colocated data centers accounted for 68% of total energy consumed, or 7330 MWh.

From Etsy’s SASB section of their SEC filing for 2018

The paper cited, though – Lights Out: Climate Change Risk to Internet Infrastructure – goes further. It literally shows where flooding is projected, and where there is infrastructure in the path of that flooding:

I’m not aware of much in the way of publicly accessible data listing this, and I’m not aware of research like this outside of the States.

It seems kind of useful to know how much of the biggest machine on earth, one that many of us rely on every day, will be underwater in the next few years though, surely?

If you’re working in this field, I’d love to chat. Better yet, come say hi in ClimateAction.tech.