Sunday, October 31, 2010

What Pathlet Routing Did Right...

For this week's reading I read the Pathlet Routing paper presented at ACM SIGCOMM in 2009. Pathlet Routing improves on current routing techniques in two ways: it addresses scalability, which is fast becoming an issue for the Border Gateway Protocol (BGP), and it allows multipath routing, which overcomes the poor reliability and suboptimal path quality often associated with BGP.

However, despite all its benefits, Pathlet Routing would have been destined to fail if it didn't have one key feature: it can emulate the policies of BGP, source routing, and several recent multipath proposals (like NIRA, LISP, and MIRO). Although it is still too soon to say whether Pathlet Routing will catch on, ensuring that a new protocol can work alongside the existing ones is a good step toward getting it adopted. Not only can Pathlet Routing emulate many different existing protocols, it can also mix their policies, so that emulations of several protocols can operate together.
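
To make that more concrete for myself, here is a minimal sketch (in Python, with made-up vnode names and FID values of my own, not taken from the paper) of the core pathlet idea as I understand it: each pathlet is a small fragment of path identified by a forwarding identifier (FID), and a sender builds an end-to-end route by stacking FIDs in the packet header, which is what makes the scheme flexible enough to mimic other protocols.

```python
# A toy model of pathlet routing (names and numbers are my own, not from the
# paper): a pathlet is a short path fragment with a forwarding identifier
# (FID), and a route is just a stack of FIDs carried in the packet header.

from collections import namedtuple

Pathlet = namedtuple("Pathlet", ["fid", "from_vnode", "to_vnode"])

# Pathlets advertised by hypothetical networks A, B, and C.
pathlets = {
    ("A1", 1): Pathlet(1, "A1", "B1"),   # A forwards traffic over to B
    ("B1", 7): Pathlet(7, "B1", "C1"),   # B forwards traffic over to C
    ("C1", 2): Pathlet(2, "C1", "C2"),   # C delivers internally to C2
}

def follow_route(start, fid_stack):
    """Follow a stack of FIDs hop by hop and return the vnodes visited."""
    here, visited = start, [start]
    for fid in fid_stack:
        pathlet = pathlets.get((here, fid))
        if pathlet is None:
            raise ValueError(f"no pathlet with FID {fid} at vnode {here}")
        here = pathlet.to_vnode
        visited.append(here)
    return visited

# The source chooses the whole route by choosing which FIDs to stack (the
# source-routing flavor); which pathlets get advertised in the first place is
# where policy, and hence BGP-style or LISP-style behavior, comes in.
print(follow_route("A1", [1, 7, 2]))   # ['A1', 'B1', 'C1', 'C2']
```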

Although I could be cynical and say Pathlet Routing might never get a chance to shine, the Internet's genuine need for a scalable routing protocol means it just might. We'll have to wait and see.

Tuesday, October 26, 2010

The End of the World (or at least IPv4)

Today we had an interesting discussion in class about the fact that the world is literally running out of IPv4 addresses. In fact, it is predicted that by around May of 2012 there will no longer be any available IPv4 addresses (I guess Nostradamus was right...). This leads to the interesting question of how and when IPv6 will be implemented. Of course, many new computers and devices already have IPv6 addresses, but will router configurations need to be changed? Are there tables of IP addresses that will need to be wiped out and repopulated with the new addresses? Basically, what kind of overhead will this involve?
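
Just to put the size of the problem in perspective for myself, a quick back-of-the-envelope calculation (simple arithmetic, not anything from class):

```python
# Back-of-the-envelope comparison of the two address spaces.
ipv4_total = 2 ** 32      # 32-bit addresses: about 4.3 billion
ipv6_total = 2 ** 128     # 128-bit addresses

print(f"IPv4: {ipv4_total:,} addresses")
print(f"IPv6: {ipv6_total:.2e} addresses")
print(f"IPv6 space is roughly {ipv6_total // ipv4_total:.1e} times larger")
```

So whatever the transition overhead turns out to be, running out a second time is not the part we need to worry about.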

Also, is this another Y2K incident? Or is this a legitimate issue that needs to be looked into further so that we truly understand the repercussions? I am inclined to believe it may be an important step in the history of networking, but I have a feeling the conversion may not be as hard as it sounds. I guess we'll have to wait and see.

Friday, October 22, 2010

Network Neutrality

In an interesting turn of events, we had time in our Networking class to discuss Network Neutrality. I won't pretend to be completely versed in the subject, but I understand the basic argument, and I can see both sides of it.

As a big fan of streaming video on Hulu and Netflix (as I believe I mentioned earlier) and an avid online gamer, I don't like the idea of having my packets discriminated against. However, if that is a company's business model, then I believe it shouldn't be regulated by the government just to suit my wants.

I am instead in favor of the common carrier principle, in which everyone has a choice of which ISP they connect to: if one ISP decides to put my packets at a lower priority because I'm streaming video or playing games, then I should be able to switch ISPs.

I feel that this would essentially end the debate on Network Neutrality, since it allows ISPs to do whatever they want and suffer the consequences of their actions by losing customers, just like any other business. It also gives ordinary users the opportunity to pick an ISP that best fits their needs, stimulating competition and ensuring there are no monopolies.

Wednesday, October 20, 2010

Learning About Multicast

I was really interested to discuss multicast this week in our Networking class. As a big fan of Hulu (the free version) and Netflix, I find it fascinating how content such as streaming video is transported to clients throughout the internet. Although Hulu and Netflix don't necessarily have to use multicast to get their content out there, their delivery protocols are probably similar in spirit.

I think this area of networking is important because I believe that streaming and downloadable media will eventually replace physical media, beginning with television and movies. When I discussed this idea with my adviser, he felt that large-scale streaming would "break the internet" due to the high bandwidth requirements. Therefore, smart ideas such as multicast and content distribution networks will become more important in the future to ensure that the internet doesn't "break."
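
To see why he is worried, here is a rough back-of-the-envelope comparison (my own illustrative numbers, not anything from class or from Hulu/Netflix) of what one popular live stream costs the sender under unicast versus ideal multicast delivery:

```python
# Rough comparison of server-side bandwidth for unicast vs. multicast delivery
# of one live stream. Numbers are illustrative, not measurements.

viewers = 1_000_000           # hypothetical audience for one live stream
bitrate_mbps = 3              # a typical-ish stream bitrate around 2010

# Unicast: the source (or its CDN edge) sends one copy per viewer.
unicast_gbps = viewers * bitrate_mbps / 1000

# Multicast (ideal): routers replicate the stream, one copy per link, so the
# source-side cost is just the bitrate of a single stream.
multicast_source_mbps = bitrate_mbps

print(f"Unicast source load  : {unicast_gbps:,.0f} Gbps")
print(f"Multicast source load: {multicast_source_mbps} Mbps")
```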

As for other forms of media, such as video games on consoles, it may take longer to get rid of people's attachment to physical media. The advent of content distribution systems like Steam on PC and Mac has shifted the trend on computers toward downloadable media, which runs faster than reading from a disk but requires a large amount of storage, which is probably why it is not prominent on consoles limited to 120-250 GB, compared to 500 GB to several TB on PCs.

Thursday, October 14, 2010

The Problems with Routing (and Networking Research)

Today in class we read about a lot of the current problems with BGP routing. I was astounded to discover that a significant portion of BGP prefixes (around 25%) continuously flap and can take hours to converge to the correct route. Furthermore, the authors claim a 400-fold reduction in churn rate when using their protocol, the Hybrid Link-state Path-vector protocol (HLP), which seems to me reason enough to adopt it, yet we are still using BGP.
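
For context on how today's routers cope with those flapping prefixes, here is a toy version of BGP route-flap damping, the classic penalty-and-decay mechanism (this is my own sketch in the spirit of RFC 2439, not anything from the HLP paper): it suppresses a noisy prefix until its penalty decays, which limits churn but also contributes to those long convergence times.

```python
# A toy sketch of BGP route-flap damping (constants are illustrative): each
# flap adds a penalty, the penalty decays exponentially with a half-life, and
# the prefix is suppressed while the penalty sits above a threshold. Real
# implementations also use a lower "reuse" threshold for hysteresis, which I
# leave out here.

import math

PENALTY_PER_FLAP = 1000.0
SUPPRESS_LIMIT = 2000.0
HALF_LIFE_S = 900.0          # 15-minute half-life

def penalty_at(flap_times, now):
    """Total decayed penalty at time `now`, given past flap times (seconds)."""
    return sum(
        PENALTY_PER_FLAP * math.exp(-math.log(2) * (now - t) / HALF_LIFE_S)
        for t in flap_times
    )

flaps = [0, 60, 120, 180]    # a prefix that flapped four times in three minutes
for now in (200, 2000, 5000):
    p = penalty_at(flaps, now)
    state = "suppressed" if p > SUPPRESS_LIMIT else "usable"
    print(f"t={now:>4}s  penalty={p:7.1f}  -> {state}")
```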

This makes me feel a little discouraged when faced with the prospect of finding an area of networking research that could eventually be useful enough to be implemented in real networks. Maybe I am thinking on too large a scale; I am sure there are many aspects of LANs, enterprise networks, etc., that could be modified and updated easily, but since I don't plan on being a system administrator, most of the research I do will be geared toward improving the Internet. Because the Internet is so large, I understand the difficulties in deploying new architectures and protocols, but when a helpful protocol like HLP, which could make a significant impact on the Internet, gets rejected, there seems to be little hope for any research idea I could come up with.

On a more positive note, I am sure that HLP has had an impact on improvements to BGP over the last few years, and there are other avenues of networking research we have not yet discussed in class, such as wireless networks, that could be better sources of prospective research ideas. So I am looking forward to covering that (as well as our sure-to-be-interesting discussion on net neutrality next lesson).

End to End Congestion Control

One idea that came up in the paper we read this week really stuck with me. It's not a new idea; in fact, I've heard it talked about a lot: the idea of getting people to cooperate, in this case on the internet. For me, in my current area of research, that is a very interesting control problem. What incentives can we offer people to persuade them to cooperate in an environment where cooperation isn't necessarily in their own interest?

In the example of the Internet, the problem the researchers were looking at was getting UDP connections to share bandwidth fairly with TCP connections. An initial glance at this problem shows that there is no incentive for UDP flows to cooperate on the network; if anything, there is more incentive for those connections NOT to use congestion control at all. Why should they have to give up bandwidth? What benefit does that have for them? The authors note that social incentives could play a role, but they are unquantifiable and not very trustworthy.

One suggestion the authors made was to create router mechanisms that detect uncooperative flows and restrict their bandwidth, penalizing flows for not conforming to congestion control. Looking toward my own area of research, the idea of using incentives to control a system seems very intriguing, and I would like to read more research papers that focus on this idea in other applications.
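
As a rough sketch of what such a router mechanism might test (my own simplification using the well-known TCP-friendly bandwidth bound, roughly 1.22 * packet_size / (RTT * sqrt(loss_rate)), not the exact mechanism from the paper): a flow sending far above the rate a conformant TCP would achieve under the same RTT and loss is a candidate for penalization.

```python
# A simplified "is this flow TCP-friendly?" check, loosely based on the
# TCP-friendly bandwidth bound T ~= 1.22 * B / (R * sqrt(p)).
# This is my own sketch, not the mechanism proposed in the paper.

import math

def tcp_friendly_rate_bps(packet_bytes, rtt_s, loss_rate):
    """Approximate upper bound on a conformant TCP flow's throughput (bits/s)."""
    return 1.22 * packet_bytes * 8 / (rtt_s * math.sqrt(loss_rate))

def flag_unresponsive(flows, packet_bytes=1500, slack=2.0):
    """Flag flows sending well above the TCP-friendly rate for their RTT/loss."""
    flagged = []
    for name, rate_bps, rtt_s, loss in flows:
        limit = tcp_friendly_rate_bps(packet_bytes, rtt_s, loss)
        if rate_bps > slack * limit:
            flagged.append((name, rate_bps, limit))
    return flagged

# Hypothetical flows measured at a router: (name, observed bps, RTT s, loss rate)
flows = [
    ("tcp-like", 1.0e6, 0.1, 0.01),     # ~1 Mbps with 1% loss: plausible
    ("blast-udp", 20e6, 0.1, 0.01),     # 20 Mbps at the same loss: not backing off
]
for name, rate, limit in flag_unresponsive(flows):
    print(f"{name}: {rate/1e6:.1f} Mbps vs ~{limit/1e6:.1f} Mbps TCP-friendly bound")
```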

Saturday, October 9, 2010

TCP vs. Other

If I had to break transport protocol research into two main areas, they would be research into TCP itself and research into other protocols. Since TCP has established itself as the leading protocol for the internet today, many researchers find success in changing TCP to fit certain needs (as can be seen in Data Center TCP or Scalable TCP).

Others tend to take a more daring approach and look at completely new protocols, which may seem like a fruitless task in terms of improving the Internet, but their ideas make more sense when they are applied to smaller networks (although they may still be large networks) that need specific functionality (such as enterprise networks). The only problem that researchers taking this stance face is that their protocol needs to be "TCP-Friendly."

"TCP-friendly" means that the new protocol shares bandwidth fairly when competing with a TCP connection. Vegas was criticized for being less aggressive than Reno, while BIC was criticized for being too aggressive. It seems that, from this point forward, for TCP to be replaced the Internet itself will have to evolve enough to demand a new protocol. Such is the case with protocols like Scalable TCP, which may become more useful as high-speed networks become more prominent.

As far as the transport layer goes, even though advances are made all the time, the field seems somewhat stagnant: we are too willing to settle for the current working protocol (TCP Reno in this case), and although some alternatives are better, no other currently available protocol is good enough for the world at large to want to adopt it on a large scale.

Thursday, October 7, 2010

Scalable TCP (and the Future of Networks)

From the Scalable TCP paper and other networking papers I've been reading, many researchers seem to be doing research for an Internet that doesn't quite exist yet. For example, while Scalable TCP may only be useful to a small group of users on the current internet, as high-speed networks become more and more common (and that may take years or decades), I am sure Scalable TCP and related research will become useful to a much larger number of users. However, by that time, more and more researchers will probably have shifted their focus to interplanetary networks, which may or may not ever come to fruition (but it is still looking toward the future).
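
To remind myself what Scalable TCP actually changes, here is a sketch of the window-update rules as I understand them (my own code, using the constants a = 0.01 and b = 0.125 suggested in the paper): Reno's recovery time after a loss grows with the window, while Scalable TCP's stays roughly constant, which is the whole point for huge high-speed windows.

```python
# Side-by-side window-update rules for TCP Reno and Scalable TCP, as I
# understand them (a sketch, not the paper's pseudocode).

def reno_on_ack(cwnd):
    return cwnd + 1.0 / cwnd          # ~ +1 packet per RTT

def reno_on_loss(cwnd):
    return cwnd / 2.0                 # multiplicative decrease by 1/2

def stcp_on_ack(cwnd, a=0.01):
    return cwnd + a                   # constant increase per ACK

def stcp_on_loss(cwnd, b=0.125):
    return cwnd * (1.0 - b)           # gentle decrease by 1/8

def rtts_to_recover(cwnd_before_loss, on_loss, on_ack):
    """Count RTTs to climb back to the pre-loss window after one loss event."""
    cwnd, rtts = on_loss(cwnd_before_loss), 0
    while cwnd < cwnd_before_loss:
        for _ in range(int(cwnd)):    # roughly cwnd ACKs arrive per RTT
            cwnd = on_ack(cwnd)
        rtts += 1
    return rtts

for w in (100, 2000):
    print(f"window {w:>5}: Reno ~{rtts_to_recover(w, reno_on_loss, reno_on_ack)} RTTs, "
          f"Scalable TCP ~{rtts_to_recover(w, stcp_on_loss, stcp_on_ack)} RTTs")
```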

Of course, this may seem like an inherent part of research (looking toward the future), but only in these recent papers has the idea really stood out to me. For our networking research, we are looking at network tomography as a tool to infer network topology. The main reason the researchers claim to use this tool, instead of more active probing tools like traceroute, is to be able to keep mapping the Internet as routers and users become less and less cooperative with network measurement tools. Essentially, it seems that these researchers are preparing for a future Internet where little or no cooperation will exist.
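
A toy illustration of the kind of inference tomography relies on (my own toy example, not our actual research code): if per-link delays are independent, the covariance between the end-to-end delays of two paths comes only from the links they share, so measuring correlation from the edge hints at how much of the path is shared, without any cooperation from the routers in the middle.

```python
# Toy delay-covariance example: two end-to-end paths share some links, and the
# covariance of their delays grows with the shared segment.

import random

random.seed(1)

def link_delay():
    return random.gauss(10.0, 2.0)       # hypothetical per-link delay (ms)

def sample_path_delays(shared_links, private_links, n=20000):
    """End-to-end delays for two paths that share `shared_links` links."""
    d1, d2 = [], []
    for _ in range(n):
        shared = sum(link_delay() for _ in range(shared_links))
        d1.append(shared + sum(link_delay() for _ in range(private_links)))
        d2.append(shared + sum(link_delay() for _ in range(private_links)))
    return d1, d2

def covariance(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

for shared in (1, 3, 5):
    d1, d2 = sample_path_delays(shared, private_links=4)
    # Each shared link contributes ~variance 4 (sigma = 2), so cov ~ 4 * shared.
    print(f"{shared} shared links -> covariance ~ {covariance(d1, d2):.1f}")
```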

Note that I said the researchers claim this is their primary motivation, but my professors believe the techniques they propose are also very useful for mapping networks that certain parties, such as governments, don't want mapped: an idea that is useful today while still preparing for the future. So now I ask myself where I should focus my research. Should I take a gamble and focus on research that may or may not be useful in the future? Or should I focus on research that is important for people now and may have future implications? Just some interesting things I have been thinking about.

Sunday, October 3, 2010

Vegas vs. Reno

I'm not exactly sure why all the names of the TCP spin-offs happen to be cities in Nevada, but I'm more interested in the concepts behind the protocols than in trying to understand naming conventions in networking research. The papers we read on Vegas and Reno were fairly interesting, mainly because they showed that Vegas was a better protocol, yet for some reason Reno is still in use.

The first paper was by the original creators of Vegas, showing off how much better their protocol was than Reno. The second paper reaffirmed a lot of what was stated in the first, but the authors noted a flaw in the Vegas protocol: Reno was more aggressive and stole bandwidth from Vegas. This issue was addressed in the third (and final) paper we read, which showed that Vegas's default alpha and beta values could be tuned to improve its fairness when competing with Reno.
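
For my own notes, the congestion-avoidance rule those alpha and beta parameters control looks roughly like this (a sketch in my own words and variable names; the alpha/beta defaults shown are just illustrative):

```python
# A sketch of the TCP Vegas congestion-avoidance rule (my own paraphrase of
# the published algorithm).

def vegas_update(cwnd, base_rtt, rtt, alpha=2, beta=4):
    """One per-RTT window update.

    `extra` estimates how many of this flow's packets are sitting in queues:
    if it's small we are under-using the path (grow the window), if it's
    large we are adding queueing delay (shrink it).
    """
    expected = cwnd / base_rtt          # throughput if there were no queueing
    actual = cwnd / rtt                 # measured throughput
    extra = (expected - actual) * base_rtt
    if extra < alpha:
        return cwnd + 1
    if extra > beta:
        return cwnd - 1
    return cwnd

# With a base RTT of 100 ms: a 110 ms sample means ~2.7 queued packets for
# cwnd=30, so the window holds; at 130 ms it backs off before any loss occurs.
print(vegas_update(30, 0.100, 0.110))   # 30  (between alpha and beta)
print(vegas_update(30, 0.100, 0.130))   # 29  (too much queueing)
print(vegas_update(30, 0.100, 0.101))   # 31  (path under-used)
```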

However, we're still using Reno. We know Vegas is good (better than Reno), but it's not good enough to change the norm. From what I understand from my professor, Reno already has a large install base and Vegas isn't a big enough improvement over the current norm (Reno) to justify a switch. Protocols like CUBIC were more aggressive than Reno, while Vegas was less aggressive, and neither managed to work well alongside Reno, so they weren't accepted. Maybe it was because they couldn't integrate well and still outperform the existing protocol, or maybe because, although the fairness issue was resolved, the paper didn't go into detail on how to measure buffer capacity in order to achieve that fairness. Whatever the issue, most research seems to have moved on from trying to improve TCP and has focused on high-speed transport protocols, which I'll probably read about for next week.