
How the Internet works: Submarine fiber, brains in jars, and coaxial cables

A deep dive into Internet infrastructure, plus a rare visit to a subsea cable landing site.

QAM, DWDM, QPSK...

With cables and amplifiers in place, most likely for decades, there’s no more tinkering to be done in the ocean. Bandwidth, latency, and quality-of-service improvements are dealt with at the landing sites.

“Forward error correction is used to understand the signal that’s being sent, and modulation techniques have changed as the amount of traffic going down the signal has increased,” says Osborne. “QPSK [Quadrature Phase Shift Keying] and BPSK [Binary Phase Shift Keying], sometimes called PRK [Phase Reversal Keying] or 2PSK, are the long-distance modulation techniques. 16QAM [Quadrature Amplitude Modulation] would be used on a shorter length subsea cable system, and they’re bringing in 8QAM technology to fit in between 16QAM and BPSK.”
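
In plain terms, each step up that modulation ladder packs more bits into every transmitted symbol, at the price of needing a cleaner signal to decode it. Here is a rough sketch of the relationship; the 32 gigabaud symbol rate is an illustrative assumption, not a Tata figure:

```python
import math

# Bits carried per symbol is log2 of the number of constellation points.
constellations = {"BPSK": 2, "QPSK": 4, "8QAM": 8, "16QAM": 16}

symbol_rate_gbaud = 32  # assumed per-wavelength symbol rate, purely for illustration

for name, points in constellations.items():
    bits_per_symbol = math.log2(points)
    raw_gbps = symbol_rate_gbaud * bits_per_symbol
    print(f"{name:<6}: {bits_per_symbol:.0f} bit/symbol -> {raw_gbps:.0f} Gbps raw, before FEC overhead")
```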

DWDM (Dense Wavelength Division Multiplexing) technology is used to combine the various data channels, and by transmitting these signals at different wavelengths—different coloured light within a specific spectrum—down the fibre optic cable, it effectively creates multiple virtual-fibre channels. In doing so the carrying capacity of the fibre is dramatically increased.
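
The capacity sums are simple: one fibre pair carries the number of wavelengths multiplied by the per-wavelength rate. The channel count and rate below are illustrative assumptions, but they land on the per-pair figure quoted next:

```python
channels = 100            # assumed number of DWDM wavelengths on one fibre pair
per_channel_gbps = 100    # assumed per-wavelength data rate

pair_capacity_tbps = channels * per_channel_gbps / 1000
print(f"{channels} wavelengths x {per_channel_gbps} Gbps = {pair_capacity_tbps:.0f} Tbps per fibre pair")
```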

Currently, each of the four fibre pairs has a capacity of 10 terabits per second (Tbps), amounting to a total of 40Tbps on the TGN-A cable. At the time of our visit, the lit capacity on this Tata network cable was 8Tbps. As new customers come on stream they’ll nibble away at the spare capacity, but we’re not about to run out: there’s still 80 percent to go, and another encoding or multiplexing enhancement will most likely be able to increase the throughput capabilities in years to come.
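
Those utilisation numbers check out with nothing more than the figures quoted above:

```python
fibre_pairs = 4
per_pair_tbps = 10
design_capacity_tbps = fibre_pairs * per_pair_tbps   # 40 Tbps on TGN-A
lit_capacity_tbps = 8                                # lit at the time of the visit

spare = 1 - lit_capacity_tbps / design_capacity_tbps
print(f"Design: {design_capacity_tbps} Tbps, lit: {lit_capacity_tbps} Tbps, spare: {spare:.0%}")  # -> 80%
```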

One of the main issues affecting this application of photonics communications is the optical dispersion of the fibre. It’s something designers factor into the cable construction, with some sections of fibre having positive dispersion qualities and others negative. And if you need to do a repair, you’ll have to be sure you have the correct dispersion cable type on board. Back on dry land, electronic dispersion compensation is one area that’s being increasingly refined to tolerate more degraded signals.
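
Chromatic dispersion accumulates as the fibre’s dispersion coefficient (picoseconds per nanometre per kilometre) multiplied by the span length, so alternating positive and negative sections keeps the running total near zero. The span lengths and coefficients below are hypothetical, purely to show the bookkeeping:

```python
# (length_km, dispersion_ps_per_nm_per_km) -- hypothetical spans, not Tata's actual cable plan
spans = [(60, +17.0), (12, -85.0), (60, +17.0), (12, -85.0)]

accumulated = 0.0
for length_km, coeff in spans:
    accumulated += length_km * coeff
    print(f"after {length_km:>3} km span (D = {coeff:+.1f}): accumulated dispersion {accumulated:+8.1f} ps/nm")
```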

“Historically, we used to use spools of fibre for dispersion compensation,” says John, “but today it’s all done electronically. It’s much more accurate, enabling higher bandwidths.”

So rather than offering customers just 1G (gigabit), 10G, or 40G fibre connectivity, technological enhancements in recent years mean the landing site can now prepare “drops” of 100G.

The cable guise

Although hard to miss with their bright yellow trunking, at a glance both the Atlantic and west European submarine cables inside the building could easily be mistaken for some power distribution system. Wall-mounted in the corner, this installation doesn’t need to be fiddled with, although if a new run of optical cable is required, it will be spliced together directly from the subsea fibre inside the box. Coming up from the floor of the landing site, the red and black sticker shouts “TGN Atlantic Fiber,” while to the right is the TGN-WER cable, which sports a different arrangement, with its fibre pairs separated at the junction box.

To the left of both boxes are power cables inside metal pipes. The thicker two are for TGN-A; the slimmer ones are for TGN-WER. The latter also has two submarine cable paths, one landing at Bilbao in Spain and the other near Lisbon in Portugal. As the distance from these countries to the UK is shorter, significantly less power is required, hence the rather thinner power cables.

The power lines that feed into TGN-A and TGN-WER.
Bob Dormon / Ars Technica UK

Referring to the setup at the landing station, Osborne says, “Cables coming up from the beach have three core parts: the fibres that carry the traffic, the power portion, and the earth portion. The fibres that carry the traffic are what are extended over that box. The power portion gets split out to another area within the site.”

The yellow fibre trunking snakes overhead to the racks that will perform various tasks, including demultiplexing the incoming signals to separate out different frequency bands. These are potential "drops," where an individual channel can terminate at the landing station to join a terrestrial network.

As John puts it, “100G channels come in and you have 10G clients: 10 by 10s. We also offer a pure 100G.”

“It depends what the client wants,” adds Osborne. “If they want a single 100G circuit that’s coming out of one of those boxes it can be handed over directly to the customer. If the customer wants a lower speed, then yes, it will have to be handed over to further equipment to split it up into lower speeds. There are clients who will buy a 100G direct link but not that many. A lower-tier ISP, for example, wanting to buy transmission capability from us, will opt for a 10G circuit.

“The submarine cable is providing multiple gigabits of transport capability that can be used for private circuits in between two corporate offices. It can be running voice calls. All that transport can be augmented into the Internet backbone service layer. And each of those product platforms has different equipment which is separately monitored.

“The bulk of the transport on the cable is either used for our own Internet or is being sold as transport circuits to other Internet wholesale operators—the likes of BT, Verizon, and other international operators who don’t have their own subsea cables buy transport from us.”

A distribution frame at the Tata landing site/data centre.

Tall distribution frames support a patchwork of optical cables that divvy up 10G connectivity for clients. If you fancy a capacity upgrade then it’s pretty much as simple as ordering the cards and stuffing them into the shelves—the term used to describe the arrangements in the large equipment chassis.

John points out a customer’s existing 560Gbps system (based on 40G technology), which recently received an additional 1.6Tbps upgrade. The extra capacity was achieved by using two 800Gbps racks, both functioning on 100G technology for a total bandwidth of more than 2.1Tbps. As he talks about the task, one gets the impression that the lengthiest part of the process is waiting for the new cards to show up.
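
The arithmetic behind that total is nothing more exotic than addition; only the capacity figures come from the visit:

```python
existing_gbps = 560        # the customer's existing 40G-based system
added_gbps = 2 * 800       # two new racks of 100G technology, 800 Gbps each

total_gbps = existing_gbps + added_gbps
print(f"{total_gbps} Gbps = {total_gbps / 1000:.2f} Tbps")   # 2160 Gbps, i.e. "more than 2.1Tbps"
```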

All of Tata’s network infrastructure onsite is duplicated, so there are two equipment rooms, SLT1 and SLT2. One Atlantic system, internally referred to as S1, sits on the left of SLT1, with the Western Europe Portugal cable, referred to as C1, on the right. On the other side of the building is SLT2, housing the Atlantic S2 system together with C2, which connects to Spain.

In a separate area nearby is the terrestrial room, which, among other tasks, handles traffic connections to Tata’s data centre in London. One of the transatlantic fibre pairs doesn’t actually drop at the landing site at all. It’s an “express pair” that continues straight from New Jersey to Tata’s London premises to minimise latency. Talking of which, John looked up the latency of the two Atlantic cables; the shorter journey clocks up a round-trip delay (RTD) of 66.5ms, while the longer route takes 66.9ms. So your data is travelling at around 437 million mph. Fast enough for you?
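
That speed figure is easy to sanity-check from the round-trip delay. The route length below is an assumption (one consistent with the quoted speed); only the 66.5ms RTD comes from the landing station:

```python
route_km = 6_500        # assumed one-way route length of the shorter Atlantic cable
rtd_s = 0.0665          # 66.5 ms round-trip delay, as quoted

speed_m_per_s = (2 * route_km * 1_000) / rtd_s
speed_mph = speed_m_per_s / 0.44704
print(f"{speed_m_per_s:.3e} m/s  (~{speed_mph:,.0f} mph)")
print(f"That is about {speed_m_per_s / 299_792_458:.0%} of the speed of light in a vacuum,")
print("which is the right ballpark for light propagating in glass fibre.")
```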

On this topic he describes the main issues: “Each time we convert from optical to electrical and then back to optical, this adds latency. With higher-quality optics and more powerful amplifiers, the need to regenerate the signal is minimised these days. Other factors involve the limitations on how much power can be sent down the subsea cables. Across the Atlantic, the signal remains optical over the complete path.”
