Comments

EffeteTrees t1_jdsg5ql wrote

The cables are bundles of incredibly fine fiber-optic strands, and each strand is pushed to its maximum: very fast, state-of-the-industry optical transceivers cramming as much data through as possible. In terms of privately owned, globe-spanning infrastructure, I can't imagine anything more vital than these undersea cables at the moment.

9

JohnnyJordaan t1_jdsgphd wrote

Note that what's called a 'cable' is not just one single transmission line, it's a bundle of tens of fiber-optic strands (hair-thin threads of very pure glass). They work by laser light that's shone through them and flickers at an extremely high rate, delivering gigabits or even terabits of data per second per fiber. Modern versions even use multiple lasers with different colors to send multiple streams of data through the same fiber at once. As the glass isn't perfectly clear, roughly every 50-100 km the signal needs a repeater; older systems used an electronic 'eye' (a receiver) connected to a new set of lasers to 'repeat' the data, while modern cables mostly use optical amplifiers that boost the light directly. These repeaters are powered over a separate conductor in the cable, at a very high voltage so the power can travel such a long distance without too much of it being lost in the cable itself.
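
To put a rough number on why those repeaters are needed, here's a back-of-the-envelope sketch (~0.2 dB/km is a typical loss figure for modern glass fiber; the distances are just illustrative):

```
# Rough illustration of why long fiber runs need powered repeaters.
loss_db_per_km = 0.2   # a typical loss figure for modern single-mode glass fiber

for km in (50, 100, 300, 1000):
    loss_db = loss_db_per_km * km
    fraction_left = 10 ** (-loss_db / 10)   # convert dB of loss into a power ratio
    print(f"{km:>4} km: {loss_db:5.0f} dB loss, {fraction_left:.1e} of the light left")
```

After a few hundred km essentially nothing is left, which is why the cable has to carry power for amplification along the way.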

As a side note, specifically for repetitive stuff like cat videos or your latest Netflix series, video websites and streaming services use local servers in every part of the world to serve the content to nearby users. Those local servers in turn request the data from centralised servers, but that only has to be done once per chunk of video, saving a lot of bandwidth. This is called 'caching' and it removes the need for most everyday internet traffic to cross the Atlantic at all. You can notice it when you watch more obscure videos or listen to less popular music: playback sometimes takes a few extra seconds to start because the local server doesn't have the content ready yet and first has to get it from the central server(s).
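
If it helps, here's a toy sketch of that caching idea (nothing like a real CDN's code, just the concept, and all the names are made up):

```
# Toy model of an edge cache: serve popular chunks locally, only go back to
# the central (origin) servers on a miss.
edge_cache = {}  # chunk_id -> bytes, held on a server near the viewer

def fetch_from_origin(chunk_id):
    # Stand-in for the expensive long-haul request (possibly transatlantic).
    return b"video data for " + chunk_id.encode()

def get_chunk(chunk_id):
    if chunk_id in edge_cache:           # cache hit: stays local and fast
        return edge_cache[chunk_id]
    data = fetch_from_origin(chunk_id)   # cache miss: one long-haul fetch...
    edge_cache[chunk_id] = data          # ...and every later viewer gets it locally
    return data

get_chunk("popular-show-s01e01-chunk-0001")  # first request crosses the ocean
get_chunk("popular-show-s01e01-chunk-0001")  # second one is served locally
```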

2

therouterguy t1_jdsh75j wrote

First of all, a lot of data is cached locally; it is unlikely the data is transferred just for you from Europe to the US. Each cable consists of multiple fiber-optic strands, each strand can carry multiple colors of light, and each color can handle tens of gigabits per second. Depending on the distance, 400 Gbit/s per fiber is not unheard of. Many of these fibers per cable adds up to a lot of capacity. But again, your YouTube video will not be streamed on demand from Europe to the US; it will already be there.
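
Back-of-the-envelope version of that (all of these counts are illustrative, not the spec of any particular cable):

```
# Rough capacity estimate: fibers x wavelengths ("colors") x rate per wavelength.
fiber_pairs      = 16     # illustrative; real cables vary a lot
wavelengths      = 80     # channels per fiber, also illustrative
gbps_per_channel = 100    # tens to hundreds of Gbit/s per color is typical today

total_tbps = fiber_pairs * wavelengths * gbps_per_channel / 1000
print(f"~{total_tbps:.0f} Tbit/s for the whole cable")  # ~128 Tbit/s with these numbers
```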

5

Ape_Togetha_Strong t1_jdshffq wrote

Information is encoded by changing the signal that is being sent. The amount of data you can transfer per second is a function of how often you can change the signal being sent and detect the change on the other end. We can do this really, really fast.

Once you've reached the limit of your ability to make the signal change more often, you can increase the amount of information transferred by making each unit of this signal contain more information than just "on" or "off". Instead of a signal where everything above a certain amplitude is a 1, and below it is a 0, you can split the amplitude into more sections, each one encoding multiple binary digits. You might split it into four different amplitudes corresponding to 00, 01, 10, 11.
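
A tiny sketch of that idea (four amplitude levels, two bits per symbol; the level values and bit labels here are arbitrary):

```
# Four amplitude levels instead of two, so each symbol carries two bits.
levels = {0.25: "00", 0.50: "01", 0.75: "10", 1.00: "11"}

def decode(amplitude):
    # Snap to the nearest defined level (a real receiver also has to fight noise).
    nearest = min(levels, key=lambda lvl: abs(lvl - amplitude))
    return levels[nearest]

print(decode(0.52))  # "01"
print(decode(0.97))  # "11"
```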

You can build more and more advanced ways of encoding extra information into the signal. For example, you can send two signals at the same wavelength that are 90 degrees out of phase with each other and vary the amplitude of each; their combination has both an amplitude and a phase angle, which together point to a location in 2D space. Then you can construct a grid of possible combinations of bits, 0000, 0001, 0010, 0011, etc. in that 2D space that each unit of the signal "points" to.
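
And a sketch of that 2D grid (16 points, so 4 bits per symbol; the layout and bit labels are simplified, real systems use Gray-coded constellations):

```
import cmath

# A 4x4 grid of constellation points: each point is one amplitude+phase
# combination and stands for 4 bits. The bit labels here are just positional.
amplitudes = (-3, -1, 1, 3)
points = {}
for i, re in enumerate(amplitudes):
    for j, im in enumerate(amplitudes):
        points[f"{i:02b}{j:02b}"] = complex(re, im)

symbol = points["1011"]                   # the point that encodes the bits 1011
print(abs(symbol), cmath.phase(symbol))   # its amplitude and phase angle
```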

On top of that, you can send multiple different data streams through the same fiber at the same time by using different wavelengths of light for each of them.
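
Roughly how many wavelengths fit? The usual amplifier band around 1550 nm is a few THz wide, and 50 GHz channel spacing is a common grid, so ballpark:

```
# Ballpark count of wavelength channels ("colors") per fiber.
usable_band_ghz     = 4_400   # the C-band around 1550 nm is roughly 4.4 THz wide
channel_spacing_ghz = 50      # a common DWDM grid spacing

print(usable_band_ghz // channel_spacing_ghz, "wavelength channels per fiber")  # ~88
```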

And then you have many, many fibers in the same cable.

1

symedia t1_jdshpar wrote

Bomb exploding in a city far away from me ... Internet dropping...

Idk bro, the internet dropping would affect me about the same way 9/11 did: it cancelled cartoons for me. And if the bomb goes kaboom next to me, well... tough luck, I won't have to live with it.

4

Leucippus1 t1_jdshuah wrote

There are a lot of them, and one cable will hold about 72 individual fiber strands; that is a lot of full optical bandwidth. Space-division multiplexing is a way to make more than one link over the same fiber strand, so you can squeeze a lot of bandwidth out of a single strand of fiber optic.

(https://community.fs.com/blog/application-of-space-division-multiplexing-sdm-in-submarine-optical-cable.html)

Due to the distributed nature of the internet, you don't actually have to cross undersea cables very often. There are caching points and content delivery networks so you are usually not more than a few hops away from your content. This can be arranged through something called 'transit agreements' and 'peering agreements'; that is an entire essay unto itself, but suffice to say almost every modern ISP POP (internet service provider point of presence) has a Netflix caching server in a server rack plugged directly into the service provider equipment. That shortens the distance between the content and the subscriber.
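
To get a feel for how much those caching servers save, here's a toy estimate (the request count and hit rate are made up, but popular catalogs really do cache this well):

```
# Toy estimate of how little traffic has to leave a local PoP once a cache
# is in place. Both numbers are illustrative.
requests_per_day = 10_000_000
cache_hit_rate   = 0.95

long_haul = requests_per_day * (1 - cache_hit_rate)
print(f"{long_haul:,.0f} of {requests_per_day:,} requests/day still need the long haul")
```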

Source: I work for one of the major US ISPs.

1

RickTitus OP t1_jdsidfs wrote

Ok this helps.

Sounds like there are a lot of fancy ways to cram a lot of data more efficiently.

My baseline assumption was that two computers are typing out bytes of data one at a time, sending each one individually before moving on to the next. I guess I have to accept that computers have better ways of doing it than that.

1

Gnonthgol t1_jdsj90j wrote

Your home fiber link may be capable of 1 Gbps. Your ISP may be artificially limiting this, but the equipment is capable of it. The equipment used for subsea cables is usually capable of 100 Gbps, because it uses better electronics, lasers and detectors. And they do not just have one of these feeding each cable: they have a number of these transceivers at different wavelengths and use a prism to combine the light at one end and split it back up at the other end. So a single fiber strand can carry several of these 100 Gbps links across the ocean. They then take lots of these fiber strands and bundle them together into one cable.

For comparison, a video stream is typically around 10 Mbps, so your home internet is technically capable of 100 video streams at once. The high-speed links used by ISPs are capable of 10 thousand video streams, and when you bundle wavelengths together with a prism you can get maybe 250 thousand video streams through a single strand. A single cable is then capable of a few million video streams. And there are around 500 of these undersea cables, so their total capacity is over a billion video streams.
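
The same arithmetic in code, using the rough figures above (the strand count per cable is just an assumption):

```
# The stream-count arithmetic from above, with rough figures.
stream_mbps       = 10     # one video stream
link_gbps         = 100    # one wavelength on one strand
wavelengths       = 25     # combined onto one strand with a prism/mux
strands_per_cable = 12     # an assumption, varies per cable
cables            = 500    # roughly how many subsea cables exist

streams_per_link   = link_gbps * 1000 // stream_mbps         # 10,000
streams_per_strand = streams_per_link * wavelengths          # 250,000
streams_per_cable  = streams_per_strand * strands_per_cable  # 3,000,000
print(f"{streams_per_cable * cables:,} streams worldwide")   # 1,500,000,000
```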

You are however right that even that is still not enough, and companies like Google (owner of YouTube), Netflix, Amazon, etc. run up against the limited bandwidth of undersea cables. So they take advantage of the fact that a lot of people watch the same videos: they copy the videos to datacenters all over the world, and people then stream from their local datacenter.

1

oboshoe t1_jdsjzys wrote

Others have covered the answer quite well.

As a network engineer since the early 1980s, I'm actually a bit surprised that there are that MANY. I would have guessed about 100.

As for movies being streamed: they are rarely streamed over an incredible distance. Most of the time they are hosted quite close to you; if you are in a major city, the source is often in the same city as you.

There are entire businesses based on keeping content locally cached and they are funded by the content providers.

2

pseudopad t1_jdsn6iq wrote

A lot of the truly huge bandwidth eaters don't have to cross the Atlantic. Heavily used video streaming services almost always have regional servers hooked up directly to major ISPs' backbone networks.

Video streaming alone takes up a huge share of internet traffic (some sources say 65%, others as much as 80%). Every 5 seconds of 1080p Netflix video eats up roughly as much data as a typical non-video web page, and 4K video consumes about three times as much. When you take this out of the equation, the amount of data that needs to cross the oceans drops to a much more manageable level.
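
Ballpark numbers behind that comparison (bitrates vary a lot by title and quality settings):

```
# Ballpark: data for a short stretch of video vs. a typical web page.
mbps_1080p = 5                         # a common 1080p streaming bitrate
seconds    = 5
video_mb   = mbps_1080p * seconds / 8  # megabits -> megabytes
page_mb    = 2.5                       # a typical media-heavy page, roughly

print(f"{seconds}s of 1080p ~= {video_mb:.1f} MB, one web page ~= {page_mb} MB")
print(f"{seconds}s of 4K at ~15 Mbit/s ~= {15 * seconds / 8:.1f} MB")
```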

CDNs (content delivery networks) also help lower the amount of data that needs to be transmitted over long distances. If you've heard of Cloudflare or Akamai, these are services that host web pages, or much of the content on them, at multiple locations around the world.

This means often-requested data can be loaded from somewhere close to the user for a lot of web pages. These CDNs also help smaller web pages defend against denial of service attacks.

1