Technology alone cannot save us.

I've been thinking about carbon emissions a lot over the last couple of years. Something I regularly hear in discussions about anthropogenic climate change, especially from those who would rather take no action at all, is that we will inevitably develop new technologies which will allow us to massively reduce our emissions while continuing 'business as usual'. Making significant alterations to our lifestyles or business practices, therefore, is entirely unnecessary as science will come to save us.

As someone who works in an industry that is entirely based upon technology (the web), I cannot deny that the efficiency of technology is constantly improving. I can now carry a device in my pocket that is more powerful than the family computer we had 20 years ago. I am also, however, acutely aware that any new technological development has a fundamental weakness: people must be willing to make use of it for us to benefit.

Let me use a few examples to illustrate this point.

Dirty websites.

Last year, during the first week of the COP26 climate summit in Glasgow, I visited the summit's official website. The first thing that struck me, before I was even able to read any of the text, was how long the site took to load. If you have any familiarity with the web, you may be aware that slow loading times on a website are often a symptom of other, more fundamental, problems lurking beneath the surface; one of which is high greenhouse gas emissions.

Despite what many people would like us to believe, websites do not exist within some ephemeral cloud. In order for us to 'visit' it, a web page must be stored on a specialist computer somewhere. A copy of it must then be sent through the labyrinth of routers, switches, signal boosters, and optical-to-electrical converters that is the internet, and received by the smaller computer we want to view it on; whether that is a desktop or laptop, smartphone or watch.

Sending data across the internet like this is not instantaneous. The more data there is, and the more circuitous the route it takes, the longer it will take to arrive. Anyone who remembers waiting for web pages to finish downloading on a dial-up internet connection will be acutely aware of this.

In order to perform the work that we wish them to do, computers need electricity; the harder they need to think about what they are doing, and the longer they need to do so, the more electricity they will consume. Your laptop's battery, for example, will last far longer if you are only editing text files or spreadsheets than if you are watching videos on YouTube.

What this means for website emissions is simple. Sending data across the internet is not free, and the more data you send, the more electricity will be used. Even renewable electricity is not entirely carbon-neutral, so any level of usage will make some contribution to climate change.

Individually, a single webpage does not emit much greenhouse gas each time it is visited (the global average is estimated at 1.76 grams of CO2 equivalent, according to the Website Carbon Calculator), but with some websites receiving billions of views a month, this quickly adds up; the internet as a whole is estimated to have emissions higher than those of the UK.

Not all websites are created equal.

There are many reasons why a website might take a long time to load. The site might simply be hosted in a location that is physically distant from you, and the data has to travel a longer route to arrive. There may also be temporary service disruptions, or patches of infrastructure that operate at a substantially lower bandwidth. Not all of these will necessarily be permanent, nor indicative of poor coding practices. In the case of many sites (including the official COP26 website), however, the root cause of poor performance is often a simple case of waste.

Building a website from scratch is a fairly time-consuming process. In order to speed development up, many sites are based upon a framework. A framework is a set of pre-built components, such as popups, dropdown menus, or galleries, that developers can combine together to create different pages. In order to be as useful as possible, most frameworks contain the code for far more components than any single site would usually need, and provide separate files to allow websites to include only the parts they use.
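
To make this concrete, here is a minimal sketch in TypeScript of the difference between pulling in a whole framework and importing only the pieces a page actually uses. The component library "ui-kit" and its module paths are entirely hypothetical names, used purely for illustration:

```typescript
// "ui-kit" and its module paths are hypothetical, purely for illustration.

// Importing the whole framework means every visitor downloads every component,
// whether or not the page uses them:
import * as UI from "ui-kit";

// Importing only the components the page actually needs keeps the download small,
// and lets a build tool discard the rest before anything is sent to a visitor:
import { Dropdown } from "ui-kit/dropdown";
import { Gallery } from "ui-kit/gallery";
```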

During development, however, it is common practice to include everything, and only remove the parts you don't need once you have settled on the final components the site will use. I often do this myself, as it is more time-efficient to perform one mass clean-up than to be constantly adding files as I find I need them.

If a developer is feeling lazy, or under time pressure, however, this final step can be overlooked; which is exactly what seems to have happened with the COP26 website. Each page you visit on the site asks your browser to download the code for the entire framework, even though only a small fraction of it is being used. All the data for the framework must make that long trek across the internet, using electricity in the process, only for most of it to be discarded at its final destination.

Unused code, however, was not the only thing slowing the site down; they also had a problem with images. Photos taken on a digital camera (of any kind, smartphones included) are generally very high-resolution. If you are intending to print your photographs, this large size is essential to ensure the photo looks crisp, as printers operate at a much higher definition than screens. On the web, however, where images may only take up a small fraction of the page, this level of detail is completely unnecessary.

Most of the images on the COP26 website, while of reasonable dimensions, were simply saved at far too high a quality level for the web. Here again, unused data was being sent to users; a large image of the top of the globe in the footer was sent as the complete circle, even though about 80% of it was hidden off-screen.
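
As a rough illustration of the fix, here is a minimal sketch using the Node image library sharp (assuming it is installed, and that the file runs as an ES module; the file names and chosen dimensions are made up) to bring a camera photo down to web-friendly dimensions and compression:

```typescript
// A minimal sketch: resize and recompress a photo for the web using sharp.
import sharp from "sharp";

await sharp("camera-photo.jpg")
  .resize({ width: 1200, withoutEnlargement: true }) // no wider than it will ever be displayed
  .jpeg({ quality: 70 })                             // plenty for a screen, at a fraction of the file size
  .toFile("web-photo.jpg");
```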

Combined, these and other problems added up to the homepage of the COP26 website weighing over 18 megabytes of data, compared to a global average page size of around 3 megabytes. According to an online tool called the Website Carbon Calculator, this resulted in greenhouse gas emissions of approximately 11 grams of CO2 equivalent each time the page was viewed, compared to a global average of around 1.76. Remember that, while 11 grams is still a fairly small figure, multiplied by the tens of thousands of visits the site was likely getting each week it is far from insignificant.
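
For a sense of where such figures come from, here is a rough back-of-the-envelope sketch. The energy and carbon intensity constants are illustrative assumptions only; the Website Carbon Calculator uses its own, more detailed model, so its absolute numbers differ. The key point is that emissions scale with the amount of data sent:

```typescript
// Illustrative constants only; real methodologies vary considerably.
const KWH_PER_GB = 0.8;        // assumed energy used per gigabyte transferred
const GRAMS_CO2_PER_KWH = 440; // assumed average grid carbon intensity

// Estimate grams of CO2 equivalent emitted per page view, from page weight alone.
function gramsPerView(pageSizeMB: number): number {
  const gigabytes = pageSizeMB / 1024;
  return gigabytes * KWH_PER_GB * GRAMS_CO2_PER_KWH;
}

console.log(gramsPerView(18).toFixed(1)); // ~6.2 g for an 18 MB page
console.log(gramsPerView(3).toFixed(1));  // ~1.0 g for a 3 MB page
```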

How you do something matters more than the tools you use.

A few days later, having received no useful response from any of the organisers of the summit, I decided to build a lower-impact alternative as a way of bringing attention to the subject of website emissions. Over the course of a weekend, I recreated several pages from the original as closely as possible, using the same framework, and made them publicly available online at climate-friendly-cop26.org. Without too much effort, I managed to get the average emissions for a page on my alternative down to 0.19 grams per view: 30 times less than the original, and 9 times lower than the average website. To put it another way, 30 people could visit my version of the site for the same amount of carbon as a single visitor to the original would produce.

Moving away from websites for a moment, I would like to talk about the efficiency of computer programs in general. When personal computers were first appearing, most programs were written in a form that the computer could directly understand. These 'machine languages' are very efficient for a computer to interpret, but are rather difficult for humans to write and understand. As time went on, several alternative programming languages were invented that more closely resembled written English (for good or ill, English still dominates programming). These made life easier for programmers but, as the computer was unable to process code in this form, introduced an additional step of having to be translated (or compiled) into machine language before they could be run.

As with anything in life, there are generally multiple ways of solving the same problem using a computer and, depending on the programming language you use, the machine code that is actually read by the computer can be very different. A website that aims to track the effect of this on performance and efficiency is the Computer Language Benchmarks Game.

Each benchmark on the site is a sample program that people implement in different programming languages. In each case, the input and output data remain the same, but the way of processing them varies according to the individual programmer and the language they use. Benchmark results famously don't map directly to the real world, but they can give us a rough idea of how much efficiency varies depending on how we use technology.

Think about these differences a little like cooking something from a recipe, as computer code is roughly analogous to giving a recipe to the computer to follow. Where you are cooking is likely in a different place to where you store your ingredients, and you can either scan ahead in the recipe and gather all of your ingredients at once, or simply fetch each individually as you find you need it. The final meal will be the same, but the amount of work you have to do (and time it takes to complete) can be vastly different.
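
In code, the two approaches to the same recipe might look something like the sketch below. Everything in it is hypothetical: fetchIngredient stands in for any slow, per-item trip (to disk, to a database, or across a network).

```typescript
// Hypothetical helpers: each call to fetchIngredient is one slow trip to the pantry.
type Ingredient = string;
declare function fetchIngredient(step: string): Promise<Ingredient>;       // one trip per call
declare function fetchIngredients(steps: string[]): Promise<Ingredient[]>; // one batched trip
declare function addToPan(ingredient: Ingredient): void;

// Fetch each ingredient the moment the recipe mentions it: many slow round trips.
async function cookStepByStep(recipe: string[]): Promise<void> {
  for (const step of recipe) {
    addToPan(await fetchIngredient(step));
  }
}

// Scan ahead, gather everything in one trip, then cook: the same meal, far less walking.
async function cookPrepared(recipe: string[]): Promise<void> {
  const ingredients = await fetchIngredients(recipe);
  for (const ingredient of ingredients) {
    addToPan(ingredient);
  }
}
```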

In 2017, a group of researchers took the example programs from the Computer Language Benchmarks Game and measured the energy consumed while running them. Their paper, "Energy efficiency across programming languages: how do energy, time, and memory relate?", found that, on average, the least efficient programming language used almost 80 times as much energy to do the same task as the most efficient.

Profit over people.

If the performance of software can vary so widely, why do people write slow and polluting software in the first place? The reasons are many.

The programming languages that produce slower code also tend to be the ones that are easier to write in. In many cases, creating software that executes quickly involves dictating fairly precisely how the computer should behave and manage its memory. Learning how to do this well, without introducing errors into the code, takes time and a lot of practice. It's also something that isn't generally taught in the beginner programming tutorials you find on the web.

Many beginning programmers, therefore, are either not experienced enough to write fast code, or simply unaware that it's something they need to think about. Furthermore, not every piece of software will necessarily have the same performance requirements; a few extra seconds in the execution time of the pet project you use once a month isn't going to matter much, but on a piece of core infrastructure used thousands of times a day, that same increase could be critical. Sometimes, programmes never designed for mass adoption end up being used in ways they weren't intended for.

When I was developing my alternative to the official COP26 website, I was thinking about performance all the way through the process. While I ended up spending less time optimising the loading speed of the site than I spent copying and pasting text, I did still spend additional time improving its performance after the site was fully operational. For developers working for a company, this is often a luxury they cannot afford.

In a commercial environment, where money and profit dictate all decisions, the most pressing demand from management is generally to ship a product as soon as possible. Staff time costs money, and managers and stakeholders are normally hostile to the idea of spending time on something that is not going to bring in immediate profits. The benefits of speed and performance are also hard to measure. A slow website may lose you potential customers, but it's difficult to quantify the lost earnings from visitors who never wait for a page to fully load. The same is true of other software; most people I know complain about how long programmes on their computer take to start, but they don't generally switch operating system because of it.

Companies that care only about profits simply have no incentive to reduce their impact on the environment when failing to do so has no effect on their margins. I have spoken to many developers over the years who would love to work on the performance of their projects, but feel it is a luxury they cannot afford when they are being constantly pressured to finish projects and start on the next one.

Laws and legislation don't work

If, even in a sector where a couple of weeks' work by a single individual is sufficient to reduce emissions by an order of magnitude, companies are not taking action to reduce their impact on the environment, how do we make them?

There has been a lot of talk in the discussion around climate change about taking legal action against corporations acting counter to our interests, and some positive progress has been made in this space. You might argue, therefore, that the best way to reduce the emissions from software is to introduce laws stipulating certain maximum thresholds. The threat of fines or legal action for non-compliance might, it would be hoped, force a global reduction in emissions.

A recent piece of legislation governing what people can do on the internet, however, shows that this doesn't always have the intended effect.

GDPR

When you buy something online, the company you are purchasing from needs to collect some data about you in order to fulfil their end of the transaction. If they have to ship a product to you, for example, they need to know your address, and a phone number or email address to contact you with if something goes wrong with the order. Most companies hold on to this information long after you have placed the order; deleting information takes admin time, and if you place an order again it is easier to reuse the information you have already given them than to recreate it.

Many years ago, online stores discovered that the database of people's details and past purchases they were maintaining was exactly the kind of information that is essential to highly targeted advertising, and that certain companies that specialised in this were willing to pay them to access it. Suddenly, information that they were holding purely for administrative purposes could become another revenue stream.

It's easier to sell someone something if you know exactly what their interests are, and online advertising has become almost ubiquitous. Most websites that you visit will be using a tool such as Google Analytics or Facebook Pixel that tracks you as you move across the web; registering which sites you visit, and using that information to build up a detailed profile of your physical characteristics and interests that can be used to determine which ads you are most likely to engage with.

Many companies that used their customers' personal data in this way were doing so entirely without their knowledge. A few years ago the European General Data Protection Regulation (GDPR) was introduced to try to curb this practice and protect the personal information of people online.

As someone who was working at a web agency at the time, I was able to observe first-hand the actions that companies took in order to comply with the new legislation. In every case I witnessed, that action was the absolute minimum they felt they could get away with to avoid prosecution.

A key part of GDPR is informed consent: before a company can process the personal information of an individual, they must now prove that they were expressly given permission to do so. To avoid losing the profit they were making from selling data (or, in cases where they were not selling data directly, losing access to the tools other companies were providing them with in exchange for that data), website owners introduced mandatory tick boxes on any online forms. These check boxes pointed to privacy policies that started with the words "We care about your privacy", and then continued in legalese so ambiguous as to be almost completely unintelligible. Without checking the box, and confirming that you have read and agreed to the terms and conditions, you cannot use the site or service.

None of our clients stopped using invasive tracking or selling customer data when the legislation was introduced. Instead of harvesting data without telling their customers, they simply moved to telling them in such a way that most simply wouldn't understand. Complying with the spirit of the law would have reduced their profits, so they instead complied with the words.

I fear that any attempts to introduce legislation governing carbon emissions from software will have the same effect; those who are currently in breach of any standards set will reduce their emissions by the bare minimum needed to comply with the legislation. Measuring the emissions from websites is already a fairly imprecise science, and there is much room for greenwashing and manipulating statistics to paint any picture you want.

Legally set standards will only reduce emissions to whatever threshold is set, and then give the worst offenders a shield to hide behind and a certificate to brandish in the face of any criticism that they are not trying hard enough.

Greater efficiency equals greater utilisation

Increased emissions from technology do not always stem from waste. As the efficiency of a technology increases, generally so too does the utilisation.

When electronic computers were first introduced, they were incredibly expensive and took up entire rooms. A university might have one, but it would be unheard of for an ordinary person to own one. As the efficiency of computers increased, and they got progressively smaller and cheaper, so too did their adoption. Today, they're everywhere, and most people carry one around in their pocket. What once would have taken a device the size of an entire room to compute can now be done by a tiny circuit board powered by a solar panel or batteries.

As the efficiency of computers has increased, however, so too has the complexity of what we're asking them to do. Some of the early arcade games such as Space Invaders or Pacman, despite being extremely popular in their time, are rather primitive by today's standards. As computers have become more powerful, we've moved on from simple, pixel-based 2D graphics to whole, photorealistic, 3D worlds. The computer that I am writing this on could probably play a few hundred of those early arcade games at once if I could keep up with it but, while I still enjoy playing the classics, there are other games I also want to play that simply would not be possible without a more powerful computer.

Once upon a time you had to go to an arcade to play video games, now you can do so on your watch if you wish; and there are a lot more trousers with pockets and wrists with watches now than there were arcades before.

If we increase the efficiency of technology, we will inevitably want to do more with it. This does not just apply to computers; the same thing happened with fridges.

Sustainability is indefinite

This leads us to an impasse. The technological innovation that means we can grow more crops per acre than ever before, and power more lightbulbs for the same amount of energy than we could ten years ago, is still subject to physical laws. There is only a finite amount of work you can do with a single kilowatt, for example. Improvements in efficiency have allowed us to offset some of the costs to the planet of our growth up until this point, but we cannot rely upon them forever.

By its very definition, sustainability means consuming resources no faster than they can naturally regenerate; not for the next 10 years, or even 100, but indefinitely. Despite the sheer enormity of the planet upon which we live, it is still finite. Anything we take from the environment beyond what will be replaced naturally will therefore eat into that finite reserve and, eventually, come back to haunt us.

If we do not address the fact that most of our societies operate upon a model of continuous growth, we will eventually deplete all of the resources available on this, our only home.

Human needs are not all we need to take into account. This is not some form of hippie concern for fluffy animals, as some people try to paint it (though I do feel that we have no right to destroy the habitats of other living things), but a case of simple survival. 100% of the food we eat, and the clean air we breathe, is produced by other living organisms; if we change the environment in ways that mean they struggle to survive, so will we.

Technology is a tremendous tool in our fight to slow climate change, but it is only that: a tool, not a solution. If we try and apply the tools at our disposal in the same ways as we have before, we may well manage to avert the current climate catastrophe (though it will not be easy), but all we will be achieving is deferring that crisis to some future date.

To be truly sustainable, we need to stop growing, and to stop growing we need to restructure our society and our value systems; to abandon our current system that prioritises greed above all else.

Technology cannot save us, but we ourselves may be able to do so.