Verizon FiOS Tests 10G Upstream and Downstream

acc1444

Regular
[OP]
Regulars
Aug 2, 2008
970
37
****ing awesome!!! or awesum.

It's an old video, but I've only just seen it.
Verizon FiOS Team Video: Verizon FiOS Tests 10G Upstream and Downstream - YouTube
The Verizon FiOS team is testing a FiOS network capable of delivering data at 10G.
In the test, the system has a 10 Gig network card and she measures the network speed: a 2.4 GB movie takes 4 seconds to copy from one system to the other.
Unbelievable!
We don't even have an ISP whose clients' combined network adapter capacity adds up to 10G.

The FiOS team even has an Indian on it, a system architect, and here we are sitting on cloud nine (for us, it is) with speeds ranging from 256-1024 kbps.
We haven't replaced our outdated cables yet, and Kapil Sibal is telling us they will be replaced by 2014. As per the new national broadband plan proposal, speeds would be around 10 mbps after 2014.

The optical fiber rollout budget is 60,000 crore. God knows how much of that will be taken as bribes.
 

mgcarley

Founder, Hayai Broadband
Regulars
Jun 22, 2009
6,298
113
Similar testing is being done in Portugal and some other places too. If memory serves, Verizon is working with Motorola and Zon is working with Alcatel Lucent.

Now all we need is faster hard drives.
 

acc1444

Regular
[OP]
Regulars
Aug 2, 2008
970
37
Similar testing is being done in Portugal and some other places too. If memory serves, Verizon is working with Motorola and Zon is working with Alcatel Lucent.
Yeah, I saw the Alcatel Lucent equipment in the video.
I don't think they've rolled 10G out to customers yet; it's still under testing.
Now all we need is faster hard drives.
I think the Barracuda or an SSD will be capable of transferring data at that speed.
Seagate Barracuda XT "World's Fastest Hard Drive" 2x Faster Than Yours With 6Gbps Transfer Speeds


About India, we only get this: Welcome to Google TiSP...

----------

I'm waiting for the Barracuda to use as an external drive.
 


mgcarley

Founder, Hayai Broadband
Regulars
Jun 22, 2009
6,298
113
Yeah, I saw the Alcatel Lucent equipment in the video.
I don't think they've rolled 10G out to customers yet; it's still under testing.

I think the Barracuda or an SSD will be capable of transferring data at that speed.
Seagate Barracuda XT "World's Fastest Hard Drive" 2x Faster Than Yours With 6Gbps Transfer Speeds


About India, we only get this: Welcome to Google TiSP...

----------

I'm waiting for the Barracuda to use as an external drive.

AFAIK they haven't even rolled 1G out to users. They've recently introduced 150, though.

The 10G though is 10G-PON, which means 10G "shared" between 32/64/128 users (depending on the split ratio and class of fiber), so in actual fact they could roll out 10G-PON transparently and not upgrade the service tiers they actually offer: 10G / 128 users ≈ 78 Mbit/s average available bitrate per user on that PON port (upstream from the local node, backhaul, peering etc. is a different story of course, but this would be equivalent to guaranteeing a certain level of speed in the same way the TRAI defines the speed guarantee, that is "up to the ISP node")... interestingly, it works out virtually identical (per user) to existing GPON, but at a lower cost per user (128 splits are theoretically cheaper than 32 splits due to less fiber and equipment being involved).
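
Just to put numbers on that comparison, here's a quick Python scribble. The 2.488 Gbit/s figure is the standard GPON downstream line rate; the 10G figure and the split ratios are the ones from this post, and the rest is just division.

Code:
# Per-user average downstream bitrate for a few PON split ratios.
# 2.488 Gbit/s = standard GPON line rate; 10 Gbit/s = the 10G-PON figure above.
LINE_RATES_GBPS = {"GPON": 2.488, "10G-PON": 10.0}

for splits in (32, 64, 128):
    per_user = {name: rate * 1000 / splits for name, rate in LINE_RATES_GBPS.items()}
    print(f"1:{splits}  GPON ~{per_user['GPON']:.0f} Mbit/s/user,  "
          f"10G-PON ~{per_user['10G-PON']:.0f} Mbit/s/user")

# 10G-PON at 1:128 lands at ~78 Mbit/s/user, about the same as GPON at 1:32.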

I think what they've conducted is field trials to see what the network can do so that, hopefully, they can start future-proofing it a little bit.

As for hard-drives... yeah, maybe. Sadly, mechanical drives still rule (as much as they suck)... but 6Gbps is... not realistic... check Toms Hardware.
 

rajnusker

Regular
Regulars
Mar 12, 2012
529
20
Similar testing is being done in Portugal and some other places too. If memory serves, Verizon is working with Motorola and Zon is working with Alcatel Lucent.

Now all we need is faster hard drives.

Mechanical drives cannot sustain that kind of speed (2400 MB / 4 s = 600 MB/s) without hybrid technology. SSDs will do the job; the OCZ Z-Drive is probably the fastest, with sequential read/write of around 2000 MB/s.
 

mgcarley

Founder, Hayai Broadband
Regulars
Jun 22, 2009
6,298
113
Mechanical drives cannot sustain that kind of speed (2400 MB / 4 s = 600 MB/s) without hybrid technology. SSDs will do the job; the OCZ Z-Drive is probably the fastest, with sequential read/write of around 2000 MB/s.

600 MByte/s = 4.8 Gbit/s, and 2000 MByte/s works out to around 16 Gbit/s, or faster than any interface currently allows... but that's not the point... the point is that on a mechanical hard drive, even half the interface speed just isn't feasible, while on an SSD you can at least approach about half the rated speed... as demonstrated:

On a mechanical hard drive, the top drives are performing at a 1.5Gbit/s level, even with a 6Gbit/s interface (scroll down to see the graphs)
Charts, benchmarks HDD Charts 2012, [01] Read Throughput Average: h2benchw 3.16

For SSDs, you're not quite doubling that to just short of 3Gbit/s (scroll down to see the graphs)
Charts, benchmarks SSD Charts 2011, AS-SSD 4K Q64 Random Read
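
(If anyone wants to double-check the byte-to-bit conversions above, it's just a factor of 8. Rough Python sketch, using the 600 and 2000 MByte/s figures quoted earlier in the thread:)

Code:
# MByte/s -> Gbit/s: multiply by 8 bits per byte, divide by 1000 Mbit per Gbit.
def mbyte_to_gbit(mbyte_per_s: float) -> float:
    return mbyte_per_s * 8 / 1000

for rate in (600, 2000):  # figures quoted earlier in the thread
    print(f"{rate} MByte/s ~= {mbyte_to_gbit(rate):.1f} Gbit/s")
# prints ~4.8 and ~16.0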

----------

Also, the whole idea behind having super-high-speed Internet is not necessarily for a single machine, but for multiple devices to be able to use the web simultaneously without affecting each other. I generally assume that around 30-40 Mbit/s per device is sufficient, so 5 devices means ~150 Mbit/s of bandwidth (ideally; though once you start involving things like tablets and wifi-enabled cellphones, the 30-40 Mbit/s assumption can be safely lowered accordingly... so by "devices" I mean high-bandwidth devices like computers, XBOX, PS3, Internet TV etc.).
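
Back-of-the-envelope version of that sizing in Python; the device mix and the lowered figures for the small devices are just my assumptions, not measurements:

Code:
# Rough household bandwidth budget from per-device assumptions (Mbit/s).
devices = {
    "computer": 35,      # middle of the 30-40 Mbit/s range above
    "xbox": 35,
    "internet_tv": 35,
    "tablet": 10,        # lowered figure for lighter devices
    "smartphone": 5,
}
total = sum(devices.values())
print(f"~{total} Mbit/s for {len(devices)} devices")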
 


acc1444

Regular
[OP]
Regulars
Aug 2, 2008
970
37
I always have the same question whenever a discussion of hard drive read and write speeds comes up.

Every hard drive has a limit on its read and write speed. So how do web servers that handle heavy traffic deliver data? Servers like Google's and FB.com's. FB.com claims to receive 2,000 image uploads every second, so it must have servers capable of writing every user's uploads. Don't their servers' hard drives have the same read/write limits?
SSDs, the Barracuda and the OCZ Z-Drive were only invented recently. What was the scenario before, and what is it now?
If the answer is that they set up more servers/hard drives whenever a server's read/write capacity is saturated, then consider that FB.com and Google have the highest traffic in the world. I don't think they have that large a number of systems set up, do they?
 

mgcarley

Founder, Hayai Broadband
Regulars
Jun 22, 2009
6,298
113
I always have the same question whenever a discussion of hard drive read and write speeds comes up.

Every hard drive has a limit on its read and write speed. So how do web servers that handle heavy traffic deliver data? Servers like Google's and FB.com's. FB.com claims to receive 2,000 image uploads every second, so it must have servers capable of writing every user's uploads. Don't their servers' hard drives have the same read/write limits?
SSDs, the Barracuda and the OCZ Z-Drive were only invented recently. What was the scenario before, and what is it now?
If the answer is that they set up more servers/hard drives whenever a server's read/write capacity is saturated, then consider that FB.com and Google have the highest traffic in the world. I don't think they have that large a number of systems set up, do they?

The simple answer: busy sites utilize lots and lots and lots of servers. Has been the case for a long time. With the amount of servers Facebook has got, 2,000 images a second is nothing (and that's probably even an old number). It's well known that companies like Google and Facebook have got several of *their own* data centers around the world.

Additionally, Apache and nginx (two of the most common web server packages in use) tend to cache the most frequently requested content in memory rather than delivering it from the hard drive, so most of those servers have bucketloads of RAM as well.
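
The principle is easy to sketch in a few lines of Python: keep hot objects in RAM and only touch the disk on a miss. Purely an illustration of the idea, nothing like the real Facebook/Google stack, and the path is hypothetical:

Code:
# Minimal "serve hot content from RAM, not disk" sketch.
from functools import lru_cache

@lru_cache(maxsize=1024)          # keep up to 1024 hot files in memory
def read_static_file(path: str) -> bytes:
    with open(path, "rb") as f:   # the disk is only hit on a cache miss
        return f.read()

# First call reads from disk; repeat calls for the same path come from RAM.
# body = read_static_file("/var/www/html/logo.png")  # hypothetical path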
 

acc1444

Regular
[OP]
Regulars
Aug 2, 2008
970
37
The simple answer: busy sites utilize lots and lots and lots of servers. Has been the case for a long time. With the amount of servers Facebook has got, 2,000 images a second is nothing (and that's probably even an old number). It's well known that companies like Google and Facebook have got several of *their own* data centers around the world.

Additionally, Apache and nginx (two of the most common web server packages in use) tend to cache the most frequently requested content in memory rather than delivering it from the hard drive, so most of those servers have bucketloads of RAM as well.

I think I should start learning about data centers to understand them properly. I know high-traffic websites use caching for better performance, which also cuts costs; YouTube especially relies almost entirely on caching.
You said they need lots and lots of servers, which means content distribution is a headache for them. Take Facebook, for example: say one particular server stores 100k users' data and 50k users can access it at one time. In peak hours, if more users are online, what would happen? Would it survive? Logically that shouldn't happen; they must have some automatic provision to handle that kind of problem.

Lots and lots of servers also means they have to keep hard drive speed limits in mind while designing the server systems.
 

mgcarley

Founder, Hayai Broadband
Regulars
Jun 22, 2009
6,298
113
I think I should start learning about data centers to understand them properly. I know high-traffic websites use caching for better performance, which also cuts costs; YouTube especially relies almost entirely on caching.

Caching/content distribution is completely different from (although in some respects the same as) just having lots of servers... a content distribution network gives the site geographical diversity so that it keeps performing well throughout the world in case the worst should happen.

You said they need lots and lots of servers, which means content distribution is a headache for them. Take Facebook, for example: say one particular server stores 100k users' data and 50k users can access it at one time. In peak hours, if more users are online, what would happen? Would it survive? Logically that shouldn't happen; they must have some automatic provision to handle that kind of problem.

That's where load balancing comes into play.
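
In miniature it's just spreading requests over a pool of servers. Toy Python sketch with hypothetical addresses; real sites use dedicated balancers (HAProxy, nginx, hardware L4 switches etc.), not something like this:

Code:
# Toy round-robin load balancer: each request goes to the next server in turn.
from itertools import cycle

backends = cycle(["10.0.0.1", "10.0.0.2", "10.0.0.3"])  # hypothetical pool

def pick_backend() -> str:
    return next(backends)

for request_id in range(6):
    print(f"request {request_id} -> {pick_backend()}")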

The main challenge seems to be less about serving up files and more about keeping the database nice and speedy. The reason FB/Google etc. do quite well in delivering large amounts of data around the world is a combination of several factors: the sheer number of servers, the content distribution network (geographic diversity), and the distribution of the physical data itself. The cluster of servers housing the databases will have different specifications and optimizations from the servers housing the images, which in turn differ from the servers that house the webpages (or the system which generates the webpages and processes all the data). When you view a Facebook page you may be pulling all sorts of different chunks of data from 3, 4, 5, 10 or more physical servers (I don't actually know the real numbers in Facebook's case, I'm just going by what I do know from years gone by dealing with much smaller scale setups), and it's those servers working together (in conjunction with the aforementioned factors) which keep the site running smoothly... unlike, for example, a small business which can only afford & deal with a much simpler web server setup that puts all of the above on a single machine.
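
If it helps to picture it, here's a toy sketch of the "different clusters for different data" idea. The pool names and the routing are completely made up, nothing to do with Facebook's real architecture:

Code:
# Toy "specialized backend pools" sketch: one page view touches several pools.
BACKEND_POOLS = {
    "database": ["db1", "db2"],            # user records, query-optimized boxes
    "images":   ["img1", "img2", "img3"],  # static media, throughput-optimized boxes
    "frontend": ["web1", "web2"],          # machines that assemble the page
}

def render_profile_page(user_id: int) -> dict:
    return {
        "profile": f"user {user_id} from {BACKEND_POOLS['database'][user_id % 2]}",
        "photos":  f"photos from {BACKEND_POOLS['images'][user_id % 3]}",
        "page":    f"assembled on {BACKEND_POOLS['frontend'][user_id % 2]}",
    }

print(render_profile_page(42))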

If optimized properly, a single machine can probably handle a few thousand connections at a time, assuming it's got enough RAM, CPU etc to play with.

Lots and lots of servers also means they have to keep hard drive speed limits in mind while designing the server systems.

If I had to guess, I'd say the likes of Google and Facebook are hiring engineers who are fairly bright ;)
 
