
Wednesday, October 20, 2010

Learning About Multicast

I was really interested to discuss multicast this week in our networking class. As a big fan of Hulu (the free version) and Netflix, I find it fascinating how content such as streaming video is transported to clients throughout the internet. Although Hulu and Netflix don't necessarily have to use multicast to get their content out there, their delivery techniques probably solve a similar problem.

I think this area of networking is important because I believe that streaming and downloadable media will eventually replace physical media, beginning with television and movies. In discussing this idea with my adviser, he felt that large-scale streaming would "break the internet" due to the high bandwidth requirements. Therefore, smart ideas such as using multicast to build content distribution networks will become more important in the future to ensure that the internet doesn't "break."
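A toy calculation makes the bandwidth argument concrete. The stream rate (5 Mbit/s) and viewer count are made-up numbers for illustration, not anything from the class readings:

```python
# Toy comparison of unicast vs. multicast load on a single shared uplink.
# Assumption (hypothetical): one 5 Mbit/s video stream, N viewers all
# reachable through that uplink.

STREAM_MBPS = 5.0

def unicast_uplink(viewers):
    # With unicast the source sends one copy per viewer, so the shared
    # uplink carries N identical streams.
    return STREAM_MBPS * viewers

def multicast_uplink(viewers):
    # With multicast a single copy crosses the uplink; routers duplicate
    # the stream only where paths to receivers diverge.
    return STREAM_MBPS if viewers > 0 else 0.0

print(unicast_uplink(1000))    # 5000.0 Mbit/s
print(multicast_uplink(1000))  # 5.0 Mbit/s
```

The gap grows linearly with the audience, which is exactly why a popular live stream delivered purely by unicast worries people about "breaking" shared links.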

As for other forms of media, such as video games on consoles, it may take longer to break people's attachment to physical media. The advent of content distribution systems like Steam on PC and Mac has shifted the trend on computers toward downloadable media, which runs faster than reading from a disc but requires a large amount of storage. That is probably why it is not as prominent on consoles, which are limited to 120-250 GB, compared to the 500 GB to several TB common on PCs.

Saturday, October 9, 2010

TCP vs. Other

If I had to break transport protocol research into two main areas, they would be research into TCP and research into other protocols. Since TCP has established itself as the leading protocol for the internet today, many researchers find success in modifying TCP to fit certain needs (as can be seen in Data Center TCP or Scalable TCP).

Others take a more daring approach and look at completely new protocols. That may seem like a fruitless task in terms of improving the Internet, but their ideas make more sense when applied to smaller networks (though those may still be large) that need specific functionality, such as enterprise networks. The main obstacle these researchers face is that their protocol needs to be "TCP-friendly."

TCP-friendly means that the new protocol shares bandwidth fairly when competing with a TCP connection. Vegas was criticized for being less aggressive than Reno, while BIC was criticized for being too aggressive. It seems that for TCP to be replaced from this point forward, the Internet itself will have to evolve enough to demand a new protocol. Such is the case with protocols like Scalable TCP, which may become more useful as high-speed networks become more prominent.
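One common way to pin down "TCP-friendly" is the Mathis et al. approximation of steady-state Reno throughput: a flow is friendly if it sends no faster than a Reno flow would under the same segment size, RTT, and loss rate. A minimal sketch (the 100 ms / 1% numbers are just an example I picked):

```python
from math import sqrt

def tcp_friendly_rate(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. approximation of steady-state TCP Reno throughput
    in bits per second: rate = (MSS/RTT) * (C / sqrt(p))."""
    C = sqrt(3.0 / 2.0)  # constant from the AIMD sawtooth model
    return (mss_bytes * 8 / rtt_s) * (C / sqrt(loss_rate))

# 1460-byte segments, 100 ms RTT, 1% loss -> roughly 1.4 Mbit/s
rate = tcp_friendly_rate(1460, 0.1, 0.01)
```

Note how sensitive the bound is to RTT and loss: halving the RTT doubles the fair rate, which is part of why long-RTT flows lose out to short-RTT ones.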

As far as the transport layer goes, even though advances are made all the time, the field feels somewhat stagnant: we are too willing to settle for the current working protocol (TCP Reno in this case), and although some protocols are better, none of the currently available alternatives is good enough for the world at large to adopt on a large scale.

Sunday, October 3, 2010

Vegas vs. Reno

I'm not exactly sure why all the names of the TCP spin-offs happen to be cities in Nevada, but I'm more interested in the concepts behind the protocols than in understanding naming conventions in networking research. The papers we read on Vegas and Reno were fairly interesting, mainly because they showed that Vegas was a better protocol, yet for some reason Reno is still in use.

The first paper was by the original creators of Vegas, showing off how much better than Reno their protocol was. The second paper reaffirmed a lot of what was stated in the first, but the authors noted a flaw in the Vegas protocol: Reno was more aggressive and stole bandwidth from Vegas. This issue was addressed in the third (and final) paper we read, which showed that Vegas's default alpha and beta values could be tuned to improve fairness when competing with Reno.

However, we're still using Reno. We know Vegas is good (better than Reno), but it's not good enough to change the norm. From what I understand from my professor, Reno already has a large install base and Vegas isn't a big enough improvement over the current norm to justify a switch. Protocols like CUBIC were more aggressive than Reno, while Vegas was less aggressive, and neither managed to coexist well with Reno, so they weren't accepted. Maybe it's because they couldn't integrate well and then outperform the existing protocol, or maybe because, although the fairness issue was resolved, the paper didn't go into detail on how to measure buffer capacity in order to improve fairness. Whatever the issue, most research seems to have moved on from trying to improve TCP and has focused on high-speed transport protocols, which I'll probably read about next week.

Wednesday, September 29, 2010

TCP

In our networking class this week we started discussing transport layer protocols for the internet, namely the different flavors of TCP (such as Tahoe, Reno, New Reno, and SACK). While we'll go into more detail in later reading (which compares Reno and Vegas), we were mainly looking at differences in congestion control and performance, but I am more interested in adoption rates (how successful each has been) as well as deployment strategies.
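The congestion-control behavior these variants share can be caricatured in a few lines: slow start doubles the window each RTT up to a threshold, congestion avoidance then adds one segment per RTT, and a loss halves the window. This toy trace ignores timeouts, ACK clocking, and fast retransmit details, and is my own sketch rather than anything from the readings:

```python
# Toy Reno-style congestion window trace (window in segments, time in RTTs).
def reno_trace(rtts, loss_at, ssthresh=16):
    cwnd = 1
    trace = []
    for t in range(rtts):
        trace.append(cwnd)
        if t in loss_at:
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh    # Reno halves on loss; Tahoe would reset to 1
        elif cwnd < ssthresh:
            cwnd *= 2          # slow start: exponential growth
        else:
            cwnd += 1          # congestion avoidance: linear growth
    return trace

print(reno_trace(10, loss_at={6}))
# [1, 2, 4, 8, 16, 17, 18, 9, 10, 11]
```

The resulting halve-then-climb pattern is the classic AIMD "sawtooth," and the Tahoe-vs-Reno difference is visible in that one commented line: resetting to 1 versus resuming from half the window.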

From what I've read so far, introducing new transport protocols on a large scale seems fairly difficult; however, most TCP-based protocols seem to thrive. Is it because the underlying architecture already uses TCP, so changing it slightly won't make a huge difference? Or is modifying the transport protocol easier than I think? Looking at other transport protocols (like BIC, which was the Linux default for a while), it seems that implementing a transport protocol is not too difficult, but getting widespread acceptance is. The next question I would ask is: do we need new protocols? I guess that depends on how much the underlying architecture changes over time.