Light, energy and the internet
- Lectures and events
- Publication Date
- 23/03/2015
- Featuring
- Professor David A B Miller CorrFRSE
A lecture held in Edinburgh and presented by Professor David A B Miller FRS CorrFRSE, W. M. Keck Foundation Professor of Electrical Engineering and Professor by Courtesy of Applied Physics, Stanford University. This was held as part of the International Year of Light series. The exponential growth of data on the internet relies on optical fibre and photonic technology, but handling that data requires power in environmentally significant amounts, with most of the energy dissipated in communicating inside machines and data centres. New generations of integrated photonic technology may solve that problem too, sustaining society’s insatiable demand for data.
More light, less heat?
The Internet has changed the lives of billions of people but after years of exponential growth, it could be ‘hitting the wall’ in terms of scalability and processing ability – what goes on inside the devices that manage the data. Computer chips are even getting hot enough to fry an egg. Could optical technologies come to the rescue? Or could the answer lie in biological or quantum computing? Professor David Miller cast some light on the complex challenges as well as opportunities ahead…
Professor Miller opened his lecture by explaining how nearly all the information carried on the Internet is transported by light, over most of the distance it covers. Light makes the Internet work by sending data via networks of optical fibres, and these networks have the potential to scale up for decades to come – there is plenty of land left for laying more cables.
The problem is that what goes on inside the computers and the routers which control the traffic speeding around the Internet is not keeping up with the growth of that traffic. The optical networks have plenty of spare capacity. The connections are the bottleneck, including the junctions where the traffic arrives at the routers and the chips which do the calculations needed to process the data. There is no room left on the chips to add more wires (interconnect density), and we are running out of energy – most of it is used to send the electrical signals that transport and process the data.
The good news is that optics can address both these problems, to help scale the Internet further.
The problem is also an old one, similar to that which faced the telegraph network in Victorian times. In the late 1850s, this network was the key to ‘command and control’ of the Empire, but performance and capacity were limited. The first attempt to lay a submarine cable across the Atlantic, however, was a failure. The signal dropped and there was too much intersymbol interference – confusion between one Morse-code symbol and the next. When the engineers tried to solve these problems by sending 2,000 volts down the wire, they blew up the cable. Lord Kelvin had warned this would happen; he invented an improved cable design and a better instrument (a mirror galvanometer) for reading the weak signals. His inventions enabled a lasting cable to work, connecting Newfoundland and Ireland in 1866. In 1902, a cable was laid linking Vancouver and Fanning Island in the middle of the Pacific – a distance of about 4,000 miles. It could send 100 characters per minute, or about seven bits per second – a million million times slower than optical fibre today. (The early cables also caused an ecological disaster: the latex used to insulate them came from gutta-percha trees, which were nearly wiped out to meet the demand.)
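As a rough check of those telegraph figures, the arithmetic below converts 100 characters per minute into a bit rate and compares it with a modern fibre. The bits-per-character figure and the ~7 Tb/s per-fibre rate are illustrative assumptions chosen to reproduce the numbers quoted in the lecture, not measured values.

```python
# Back-of-the-envelope check of the 1902 Pacific cable's data rate.
chars_per_minute = 100             # Vancouver–Fanning Island cable, 1902
bits_per_char = 4.2                # rough average for Morse-coded text (assumed)
cable_bps = chars_per_minute * bits_per_char / 60
print(f"Telegraph cable: ~{cable_bps:.0f} bits/s")            # ~7 bits/s

fibre_bps = 7e12                   # ~7 Tb/s, a plausible modern fibre rate (assumed)
print(f"Fibre is ~{fibre_bps / cable_bps:.0e} times faster")  # ~1e+12, a million million
```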
Today, we use hundreds of millions of kilometres of optical fibre, each strand about the thickness of a human hair and insulated with plastic. These fibres suffer hardly any signal loss or intersymbol interference, are very fast and have a huge capacity. Introduced in the 1980s, these optical networks now criss-cross the world, making the Internet work. But how long will the existing infrastructure be able to cope?
The Internet is growing at a mind-boggling rate. For example, YouTube has one billion unique users per month, and the number of hours spent on YouTube has increased by 50% in the last year alone, with 300 hours of video uploaded every minute. There are now seven billion mobile phones in use around the world and 4.5 billion unique users – over 60% of the world’s population. The planet has more phone connections than people, and three billion Internet users – most of them not even using broadband yet, which means there is enormous potential for further expansion. The traffic on the Internet will reach the zettabyte scale (10²¹ bytes) by 2016, according to Cisco, transporting 250 million million bits per second – 64 times more data in 2018 than 2005. The ‘Internet of things’ will be responsible for some of this growth, with toasters talking to refrigerators. And optical fibre will continue to handle the traffic.
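Those two figures are consistent with each other, as a quick back-of-the-envelope sketch shows (assuming the zettabyte of traffic is averaged evenly over a year):

```python
# Sanity-check: one zettabyte per year expressed as an average bit rate.
zettabyte = 1e21                          # bytes
seconds_per_year = 365.25 * 24 * 3600     # ~3.16e7 s

bits_per_second = zettabyte * 8 / seconds_per_year
print(f"Average rate: {bits_per_second:.2e} bits/s")  # ~2.5e14, i.e. 250 million million
```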
The network itself is not the problem, but the underlying electronics are struggling to cope. Moore’s Law predicted that the number of transistors per chip would double every 18–24 months, and this has largely held true until now. In the 1990s, Gordon Moore calculated that the number of transistors manufactured every year was greater than the number of raindrops that fell in California. Every year, we may make close to as many transistors as the total number that existed at the start of the year – there are now more transistors than ants on the planet. But the improvement in the performance of the chips is beginning to slow. The major challenges are wiring density and power dissipation, which lead to problems such as the speed of the connections in and out of memory. Until recently, the number of transistors per chip and the volume of Internet traffic had been growing at similar rates – by factors of 25 to 100 over the last 10 years – much faster than our ability to connect inside the machines using current electrical technology.
The problems are communications density and energy. We now use 5–9 per cent of our electrical energy in information processing and communications systems. And there is a physical limit to the rates of connections on wires: once you fill up the space available for wiring, you ‘hit the wall’ in terms of connection performance, unless you spend large sums of money and use a lot more energy. The electrical resistance of copper is the main limitation. Despite this, companies such as Intel and Nvidia continue to develop very high-performance chips, including 14-nanometre technology, but the ‘byte per flop gap’ prevents the chips from exploiting their full calculation capability (the number of ‘floating point operations’ or ‘flops’) – we cannot get enough information in and out of the chips. This is where the input/output rate “parts company” with silicon chips, said Professor Miller, “and the gap is getting wider.” Our ability to send information around on the chips is falling behind our ability to do calculations. Memory bandwidth (the number of bytes per second) is not keeping up with processor performance, and has not improved much over the last ten years because of the limitations of wires.

We already generate enough heat on a chip to fry an egg, and as much as 80 per cent of that power can be used just to send the information about. Such energy dissipation also requires vast cooling systems. These problems of interconnect energy and density inside machines will limit the performance of the Internet – the chip itself may be very efficient at performing calculations, but because we do more work inside the computers and routers that manage the network than along the length of the network itself, this is where the bottleneck lies.
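To make the ‘byte per flop gap’ concrete, here is a small illustrative calculation. The peak flop rate and memory bandwidth below are hypothetical figures chosen for the example, not the specifications of any particular chip:

```python
# Illustration of the 'byte per flop gap' with hypothetical chip figures.
peak_flops = 5e12          # 5 Tflop/s of raw calculation capability (assumed)
memory_bandwidth = 2e11    # 200 GB/s in and out of memory (assumed)

bytes_per_flop = memory_bandwidth / peak_flops
print(f"{bytes_per_flop:.2f} bytes available per flop")       # 0.04

# A calculation needing even one byte of fresh data per floating-point
# operation could then use only ~4% of the chip's peak capability:
usable_fraction = min(1.0, bytes_per_flop / 1.0)
print(f"Usable fraction at 1 byte/flop: {usable_fraction:.0%}")
```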
Energy consumption by the Internet is growing faster than our ability to generate power, with Internet servers using more power than all the solar panels in the world produce. If the data volume grows 10–100 times per decade, we will soon face an energy crunch, and this will stop us scaling up the Internet. The power used for 70 Google enquiries is enough to boil the water for one cup of coffee, which underlines the fact that the Internet is not as ‘free’ as widely believed. Every time you send a signal electrically, you have to charge and then discharge a wire between two transistors to make the connection, and this has an energy cost.
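The cost of that charge-and-discharge cycle can be estimated with the standard CV² energy argument. The sketch below uses assumed, order-of-magnitude values for wire capacitance and supply voltage; the point is the scale of the result, not the precise numbers:

```python
# Rough estimate of the energy cost of one electrical signal on a chip.
C_per_mm = 0.2e-12      # ~0.2 pF of capacitance per mm of on-chip wire (assumed)
length_mm = 10          # a 10 mm wire across a chip (assumed)
V = 1.0                 # ~1 V supply (assumed)

C = C_per_mm * length_mm
energy_per_bit = C * V**2    # energy drawn from the supply per charge/discharge cycle
print(f"~{energy_per_bit * 1e15:.0f} fJ per bit")   # ~2000 fJ = 2 pJ

# At 1e14 bits/s of on-chip traffic, this alone would dissipate:
print(f"~{energy_per_bit * 1e14:.0f} W")            # ~200 W – egg-frying territory
```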
The solution may be optical technology and complex silicon–photonic systems, including ‘free-space optics’, photonic crystals and nanophotonics. “With light, you don’t need to charge wires – you use photons,” Professor Miller explained. And you can use optics between chips as well as for long-distance operations, to reduce the energy used inside the machines. We can also use the silicon and germanium already found in electronic chips to send and receive optical signals in and out of a chip, with much less energy than is used now with electrical signals. Such approaches can also exploit nanotechnology to reach even lower energies.
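A hedged photon-budget sketch suggests why photons can undercut charged wires on energy. The photons-per-bit figure below is an assumed, generous receiver budget, and laser and modulator overheads are ignored:

```python
# Energy per bit for an optical link, counted in photons.
h = 6.626e-34           # Planck constant, J·s
c = 3.0e8               # speed of light, m/s
wavelength = 1.55e-6    # 1550 nm, typical telecoms band

photon_energy = h * c / wavelength          # ~1.3e-19 J per photon (~0.8 eV)
photons_per_bit = 1000                      # generous receiver budget (assumed)

optical_energy_per_bit = photon_energy * photons_per_bit
print(f"~{optical_energy_per_bit * 1e15:.2f} fJ per bit")   # ~0.13 fJ

# Compare with the ~2000 fJ electrical estimate above: orders of magnitude
# lower, even before optics' freedom from wire capacitance is exploited.
```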
Dense, low-energy optoelectronics, he concluded, are the key to enabling the networks of the future – in fact, the only possible solution to these problems of connection energy and density. The constraints on growth are inside the machines, not the long distance network, and we should now start to exploit the new opportunities optics will create.
- What can we do to reduce the energy dissipated by computing and the Internet?
- Don’t use wireless connections! And turn your computer off. Companies such as Google are very aware of the problem, trying to reduce the power used by their servers, for both financial and ecological reasons.
- How do mobile phones send signals? What role do satellites play?
- Optical fibres carry most of the traffic – geostationary satellites and radio waves operate at too low a frequency and offer too few channels for them to carry the amount of information we need to send, so we don’t use radio waves for any large amount of data except over short distances.
- Would quantum computing help?
- We should look at it and ‘play’ with new ideas. Quantum computing is a fascinating field, but perhaps it is some time away. It is letting us play with what we now view as strange aspects of quantum mechanics, such as entanglement. We should remember that what was once viewed as a very odd phenomenon – the idea of ‘wave–particle’ duality – is now used routinely trillions of times per day in optical communications, where we think of light both as waves and as photons.
- What about software solutions – for example, packet switching sends a letter in 1,000 pieces, but what if we could make the packets bigger?
- This reminds me of what is called Gates’s Law – that the number of transistors required to process a simple letter doubles every 18 months. That is just a joke, of course, but there is a kernel of truth to it: software is so hard to write that for decades now we have relied on the hardware getting very much better every few years, so we could get away with writing progressively less efficient software to handle ever more complicated tasks. People who think hard about software understand this point well. Though we can still rely to some degree on hardware such as optics and electronics getting better, there is no doubt that these problems of scaling the Internet are so large that we must do everything we can to solve them, including improvements in software.
- At chip level, do you suggest we use an optical signal to modulate another optical signal?
- Silicon logic is very good, and optical logic is still difficult if we want large numbers of very low-energy devices. While silicon logic remains hard for optics to compete with, copper is the ‘low-hanging fruit’ for optical technology, and optics is progressively replacing copper in telecoms networks. Will we go all the way to optical logic? Energy, numbers of devices and costs make that challenging, but we should still carry on with the research.
- Could there be a biological solution, based on the architecture of the brain? Or have we left the brain behind?
- The brain has not been left behind. Neural-inspired machines may yet be developed. We have much to learn from biology – biological systems are slow but also very efficient. Given what we can do with our brains, it is remarkable that they do not overheat. Of course, if our brains used more energy, we would have to eat more!