Quick notes from an EU Green Public Procurement Workshop for Cloud and IT in Brussels

As part of my work with the Green Web Foundation, I’ve ended up spending time in Brussels going to workshops, to feed into policy for greening the way we do digital. I’ve just finished the second workshop today, which was about the sexy, sexy subject of public procurement. Why am I doing this? Because I think the way elected bodies spend money is a pretty decent lever for climate action, and it’s a way for me to have some influence as a citizen. Here are my takeaways.

What was this workshop about?

I’m struggling to find links to point to it, but the short version is:

  • the EU spends lots of money on IT, as in 45 bn each year
  • there are EU targets to at least make this somewhat efficient, and people are starting to realise that electricity, when it doesn’t come from renewable sources, is a source of CO2 emissions
  • the EU also has some notion that digital technology, while a possible enabler for reducing CO2 emissions, can also be a source of emissions

So, the goal of the workshops has been to validate the research and add some extra perspective to it, and hopefully inform policy around Green Public Procurement from 2020 onwards, with a report due in February.

First of all, I’m glad that I am in a position where I can take a day away from billing for my time to attend a workshop like this.

Almost all the people in the workshop were from companies of more than a thousand people, and we had a minority of policy makers and academics. I think we had a couple of people from small businesses presenting, but they weren’t around for the whole day, which meant that in some of the exercises we mainly ended up hearing the views of huge companies, rather than the small companies that make up at least half of the economy in Europe.

OK, what was discussed then?

The main thrust of the day was about how we might make computing, datacentres and cloud more energy efficient, as a way to decouple economic growth from the corresponding growth in emissions, when it’s clear global emissions are going in the wrong direction.

Energy efficiency also seems to be one of the few ways we get to talk about climate, as it’s often presented as a win-win. Yes, this feels a bit weak-sauce in the face of scientists basically screaming at us to take the science seriously. But eh… I guess at least it’s a way we can start talking about carbon, and regulation, and creating the incentives to make how we work in tech follow the science, right?

Key things I learned

  • We’re still not very good at talking about carbon. I looked, and despite the science being pretty clear, and the leaders of Europe declaring a climate emergency, and describing policy in terms of carbon emissions, and jeez, kids striking every Friday to remind us, it didn’t really come up anywhere near as much as I’d expect.
  • The certification schemes and codes of conduct have relatively low take up. There is a dizzying range of codes of conduct for datacentres, and different certification schemes like the Blue Angel in Germany, among others. Despite the money pouring into them, they’re still comparatively niche.
  • Cloud and datacentres aren’t included in the published National Action Plans to reduce carbon emissions by countries in Europe. For reasons I don’t quite understand, cloud computing and datacentres don’t seem to factor in when countries share their plans to reduce their emissions. This feels a bit like how aviation is treated in some places, but it’s much harder to understand the reasoning – I mean, we know running servers normally will emit carbon, right?
  • There are a bunch of European research projects in this field already. There’s a veritable alphabet soup out there of projects to find some kind of way to do greener computing, from CloudWatch2, to PICSE (Procurement Innovation For Cloud Services in Europe), ASCETiC (Adapting Service lifeCycle towards EfficienT Clouds), EURECA (EU Resource Efficiency Coordination Action), and Helix Nebula, among others.
  • There’s some draft public procurement guidance for cloud and ICT that’s been announced and might help, though it’s still being finished. There isn’t a clear URL I can share, but this looked pretty interesting – it’s essentially pre-written language to copy and paste for use in procurement, to account for all kinds of things like clear, fair selection criteria, for the things that make a difference to CO2 emissions when you spend money on tech. They also include sample criteria for leaving contracts, if a supplier doesn’t get their shit together too. There’s already published guidance for a bunch of sectors, and there’s a newsletter sign-up form on the European Commission site, which also has links to their helpdesk if you’re interested in the draft content. (see also – my snaps from the day)

The top recommendations from the day

It’s worth me sharing these recommendations online here, before I share a pic of which recommendations had the most interest:

The rankings of the recommendations listed above. Carbon tracking, and incentives to help move away from wasteful ways of doing things were higher up.

My take on the recommendations

There’s a big report coming in February, but I had a few takeaways from the day beyond this.

At first glance it looks kind of Green New Deal-ish, right, with carbon reductions, and incentives to help a just transition to better infrastructure. There are a few nods to the lack of transparency in this field – the idea of a virtual smart meter to help people understand their own impact was popular.

I think the inclusion of investing in creating standards, while dull, sounds useful, as this is a field where it’s really hard to get reliable numbers.

However, I feel like in its current form there are some problems.

Regulators and policy folks seem to be unable to see the similarities between cloud markets, and energy markets, and left to their own devices, I think these recommendations are likely to consolidate the lead of existing hyperscale providers.

This is because they are already more efficient than smaller operators, and are already further along in terms of tracking their own carbon (even if they don’t disclose it fully – like Amazon).

Personally, I think there’s a chance to be more daring here. Just like how decisions around the EnergieWende in Germany led to the creation of an energy market with lots of small providers, instead of the near-oligopoly position elsewhere, I think something like a DigitalWende, creating a single, EU-wide spot market for compute as a commodity, would help – right now, you only get to do this within a single provider.

Combining that with work around low carbon orchestration and scheduling software, like Aled James’s open source, load shifting, low carbon Kubernetes scheduler, or projects to make use of under-used capacity like Helios’s Open Compute Cloud, feels like it would support the creation of a much more vibrant European cloud market, rather than just delivering it all to a handful of American companies.


As ever, I’m happy to chat about this in more detail, and the ways you can contact me are listed on my contact page.

If this kind of wonkish climate and cloud fare interests you, you might also enjoy the Greening Digital Newsletter I write too.

How to rate limit punks with nginx

I do some ops work for the Green Web Foundation, and over the last few weeks we’ve been seeing nasty spikes of traffic hammering our API. Nginx’s built-in rate limiting turned out to be the answer, and it starts with a single line:

limit_req_zone $binary_remote_addr zone=nopunks:10m rate=10r/s;

What does this mean? We start by calling limit_req_zone, to tell nginx we want to set up a zone where we rate limit requests on our server, telling it to use $binary_remote_addr, the binary representation of a connecting client’s IP address, to tell one requesting client from another. We want to be able to refer to this rate limiting zone, so we give it a name, nopunks, with zone=nopunks:10m, and we set aside 10 megabytes of space to keep track of all the possible IP addresses connecting.

This means we can keep track of how much our poor API is being hammered, from around 160,000 different IP addresses – useful!

Finally we set a rate of requests that seems fair with rate=10r/s. This means we want an upper limit of 10 requests per second to apply to this zone.

So, after writing this, we have a special zone, nopunks, that we can apply to any vhost or server we want with nginx.
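One thing worth spelling out: the limit_req_zone line belongs at the http level of your nginx config, not inside a server or location block. A minimal sketch of where it lives – the include path here is the usual Debian/Ubuntu layout, so adjust for your own setup:

```nginx
# /etc/nginx/nginx.conf
http {
    # define the shared zone once, at http level:
    # keyed on client IP, 10 MB of state, capped at 10 requests/second
    limit_req_zone $binary_remote_addr zone=nopunks:10m rate=10r/s;

    # individual sites then opt in with a limit_req directive
    # in their own server or location blocks
    include /etc/nginx/sites-enabled/*;
}
```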

Adding our zone to a site or API we want to protect

Now we have that, let’s apply this handy new nopunks zone, to a route in nginx.

location / {
      # apply the nopunks zone
      # allow a burst of up to 20 requests
      # in one go, with no delay
      limit_req zone=nopunks burst=20 nodelay;
      # tell the offending client they are being
      # rate limited - it's polite!
      limit_req_status 429;
      # try to serve file directly, fallback to index.php
      try_files $uri /index.php$is_args$args;
}
What we’re doing here is applying the nopunks zone, and passing in a couple of extra incantations, to avoid a page loading too slowly. We use burst=20 to say:

we are cool with a burst of up to 20 requests, in one go before we stop accepting requests

Thing is, this leaves us with a backlog of 20 requests, each taking 0.1 seconds to be served, so the whole set of requests will take 2 seconds. That’s a pretty poor user experience. So, we can pass in nodelay – this adjusts our rate limiting to say this instead:
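For comparison, this is what the same location looks like with burst but without nodelay – a variant for illustration, not my actual config – where excess requests queue up and drip out at the zone’s rate:

```nginx
location / {
    # up to 20 excess requests are queued rather than rejected,
    # but they're released at the zone's 10r/s rate, so a
    # full burst takes around 2 seconds to clear
    limit_req zone=nopunks burst=20;
    try_files $uri /index.php$is_args$args;
}
```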

okay, you can send up to 20 requests, and we’ll even let you send them as fast as you like, but if you send any more than that, we’ll rate limit you

Finally, by default, when a client is rate limited, nginx serves a rather dramatic 503 error, as if something very wrong had happened. Instead, limit_req_status 429 tells nginx to send the connecting client a 429 Too Many Requests status, so ideally the person programming the offending HTTP client gets the message, and stops hammering your API so hard.

So there you have it

This is a message mainly to my future self, next time I am looking after a server under attack. But with some luck, it’ll make being DOS’d (unintentional or not) a less stressful experience for another soul on the internet.

Further reading

The nginx documentation is pretty clear if you need to do this, with lots of helpful examples, and the guide on rate limiting on the Nginx website was also a godsend when I had to do this today.