Friday, December 10, 2010
I won't go into too much detail in this post about what wireless tomography is, but I am pretty sure it will be the topic of my research proposal. It is the idea we have been discussing in our research meetings, and I think working on it will help me understand our research better and hopefully spark some new ideas for it.
I feel it will be a useful topic to study because the ideas I formulate will still be useful once the class is complete. If I instead tried to find a different area of research related to wireless networking, my efforts would most likely bring nothing to fruition.
Thursday, December 9, 2010
Can't Get Enough of Network Security
Most of the security papers we looked at this week discussed anomaly detection. I am not sure whether this is just the latest fad in Network Security, but Kalman filtering seems to do a good job (at least at detecting denial-of-service attacks), while the newly presented ASTUTE method takes note of other anomalies in the system. Of course, given the nature of Network Security, there will always be a need for new anomaly detection systems as attackers work out how to circumvent the current ones.
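To make the Kalman-filtering idea concrete, here is a minimal sketch of the general approach (my own toy illustration, not the method from the paper we read): track a traffic-volume series with a one-dimensional Kalman filter and flag samples whose innovation is far larger than the filter expects, the way a sudden denial-of-service flood would be.

```python
import numpy as np

def kalman_anomalies(counts, q=1e-2, r=25.0, threshold=3.0):
    """Flag points where traffic deviates sharply from a Kalman prediction.

    counts:    1-D array of traffic volumes (e.g., packets per minute)
    q, r:      assumed process and measurement noise variances
               (r should roughly match the variance of normal fluctuations)
    threshold: flag a sample if its innovation exceeds this many std devs
    """
    x, p = counts[0], 1.0              # state estimate and its variance
    anomalies = []
    for t, z in enumerate(counts[1:], start=1):
        # Predict: assume traffic stays roughly constant plus process noise.
        x_pred, p_pred = x, p + q
        # Innovation: how far the new measurement is from the prediction.
        innovation = z - x_pred
        s = p_pred + r                 # innovation variance
        if abs(innovation) > threshold * np.sqrt(s):
            anomalies.append(t)        # e.g., a possible DoS traffic spike
        # Update the filter with the new measurement.
        k = p_pred / s
        x = x_pred + k * innovation
        p = (1 - k) * p_pred
    return anomalies

# Toy usage: a flat series with one injected spike.
series = np.concatenate([np.random.normal(100, 5, 200), [400], np.random.normal(100, 5, 50)])
print(kalman_anomalies(series))
```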
The paper I enjoyed most attempted to increase fairness by using a credit system. I am interested in the idea of incentives to behave fairly, while the security side of the system focused on making sure people don't cheat it. I think it's a very interesting dilemma: there usually is no incentive to share the network fairly, so building the system so that people have a greater incentive (rather than being forced) to share seems like an interesting area of research. Overall, I feel that Network Security has a lot of potential, but I'm not sure how much I would enjoy the cat-and-mouse nature of the research. I might just focus on wireless networks for now.
Tuesday, December 7, 2010
The Scope of Network Security
Today I realized there really is a lot more involved in Network Security than I had originally imagined. My paper covered machine learning for anomaly detection, while we also discussed spam filtering, botnets (and how to attack them), improving DNS in order to track malicious activity, and even outsourcing network management in an attempt to improve network security (which reminds me too much of the movie "The Net" to sound like a safe and viable solution).
Maybe I hadn't realized there was such a wide variety because I haven't researched much in this area, or maybe because I haven't been exposed to all the different ways the "bad guys" try to attack a network, but this topic has been a real eye-opening experience. Hollywood glamorizes hacking all the time, but these papers discuss real threats with serious ramifications. Seeing a real-life perspective on cyber-terrorism is interesting, and it starts to make sense why the field has become such a cat and mouse game.
Some of the papers had really intuitive ideas for dealing with attackers, and even tutorial papers like the ones I read play an important part. As my professor stated in class, tutorial papers do a great job of focusing the field in the right direction, so that students working on their Master's who are new to the area can understand how to proceed and help the community as a whole fight the "bad guys."
Sunday, December 5, 2010
Anomaly Detection
I just read a paper entitled "Outside the Closed World: On Using Machine Learning for Network Intrusion Detection," which was very interesting given the other paper we recently read for class on anomaly detection. I understood this paper really well because, instead of introducing a new model for anomaly detection, it simply tried to explain two things: (i) why machine learning hasn't been as effective in anomaly detection as it has been in other areas, e.g. natural language processing and product recommendation systems, and (ii) how researchers can approach this area successfully.
I haven't read many papers like this, but there must be plenty that try to steer research in the right direction. Some of the authors' insights were intuitive, while others seemed fairly obvious. Overall, it helped me understand a little more about how research is approached as opposed to how it should be approached. As for my own research, I am focusing on wireless networks, but I have yet to decide what topic to choose for my proposal. I guess I will have to pick soon.
Wednesday, December 1, 2010
Monitoring Traffic
I have to admit, I wouldn't have thought of traffic monitoring as network security, but then I realized that police often "monitor traffic" on the road to try to catch speeding "criminals," so I guess it should have made more sense.
Anyway, ASTUTE (a network traffic monitor that looks for anomalies) takes an interesting approach in that it doesn't rely on previous data to find anomalies. While I don't understand the entire procedure it follows, I understand that this is a novel idea and, from the results, it appears to work fairly well compared to existing anomaly detectors.
Although anomaly detection isn't only about network security, security is one of its benefits, on top of finding faults in the network structure and other intriguing events in the network. One thing I did learn from this paper is how many different ways there are to detect and prevent network attacks, and that network security research can be beneficial to more than just a single focused area.
All About Spam
No, not the canned spiced ham (Spam) popular in Hawaii, but the annoyingly consistent spam emails that turn up in our inboxes every day. Microsoft Research, by monitoring spam, has come up with AutoRE, a technique that captures 16-18% of spam that is not usually detected by common spam filters. AutoRE works well because it has a low false positive rate and is the first (known) automated system able to generate regular expression signatures, something that previously only human experts could do.
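The core trick, automatically generating a regular-expression signature from a batch of related spam URLs, can be illustrated with a toy sketch. This is my own crude version using made-up URLs, nothing like the actual AutoRE algorithm, which clusters spam by botnet campaign before deriving much smarter signatures:

```python
import re

def signature_from_urls(urls):
    """Derive a crude regular-expression signature from URLs suspected to
    belong to the same spam campaign (toy version only).

    Keeps the longest common prefix and suffix as literals and generalizes
    whatever varies in between.
    """
    # Longest common prefix.
    prefix = urls[0]
    for u in urls[1:]:
        while not u.startswith(prefix):
            prefix = prefix[:-1]
    # Longest common suffix of the remainders.
    rests = [u[len(prefix):] for u in urls]
    suffix = rests[0]
    for r in rests[1:]:
        while not r.endswith(suffix):
            suffix = suffix[1:]
    middles = [r[:len(r) - len(suffix)] if suffix else r for r in rests]
    # Generalize the varying part: digits vs. alphanumerics.
    var = r"\d+" if all(m.isdigit() for m in middles) else r"[A-Za-z0-9]+"
    return re.escape(prefix) + var + re.escape(suffix)

# Hypothetical URLs from one spam campaign.
urls = [
    "http://example-pharma.biz/track?id=19283",
    "http://example-pharma.biz/track?id=55201",
    "http://example-pharma.biz/track?id=90417",
]
sig = signature_from_urls(urls)
print(sig)
print(all(re.fullmatch(sig, u) for u in urls))
```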
The most interesting thing about the paper published on AutoRE is that the spam monitor is not a comprehensive spam reduction tool. Most papers focus on how much spam can be stopped overall, while the AutoRE paper focuses on stopping spam created by botnets, which has been difficult to stop in the past.
Perhaps the weirdest part of the paper was the anti-climactic ending: AutoRE had not been tested on real-time data, although it supposedly would be easy to do so. It is interesting that a paper like this was published at SIGCOMM when the actual implementation appears unfinished, even though the preliminary results were fairly good.
In terms of network security, as my professor explained in class, it is just a cat and mouse game. People will find ways around these incredible innovations, and that will essentially drive the research in Network Security. This paper is a great example, as it explores the changing trends in botnets as they try to avoid current spam filters.
Tuesday, November 30, 2010
Network Security
The next research area we are looking at in Networking is Network Security. While Wireless Networking may have a lot of promise in terms of research, I feel Network Security does also, maybe more so than Wireless Networking. The problem with Network Security is that people are always finding ways to work around its defenses, which means there will always be a need for new innovations in Network Security.
From what I have heard about Network Security, it would be interesting to see how Quantum Computing will affect the area. I am sure that Quantum Computing will change every area of computing, especially networking, but most of what I have heard about Quantum Computing relates to security in general (which I am sure will carry over to Network Security).
Finding a Research Proposal
With just a few weeks left in the semester I find myself preparing for the final project in our networking class, namely the research proposal. The proposal is basically a novel concept we have come up with to solve a problem prominent in networking research; while we do not need to actually build a solution, we must propose the experiments that would determine the results.
As yet I have not decided on a specific topic for this research project, but I feel (as mentioned before) that wireless networking holds the most promise in terms of feasible research. I will try to choose something that isn't too closely related to the area of research we are working on in the lab, but it should be an interesting assignment to work on.
Wednesday, November 17, 2010
Wireless Networking Research
I noticed something interesting while reading the paper "Link-level Measurements from an 802.11b Mesh Network," which basically discussed possible causes of link loss rates in MIT's wireless mesh network, Roofnet. One of the strange things I noticed was how inconclusive the results were; the authors stated: "The large number of links with intermediate loss rates is probably due to multi-path fading rather than attenuation or interference."
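For a sense of what "link-level measurements" involves in practice, here is a minimal sketch of the bookkeeping behind per-link loss rates, with made-up probe counts rather than anything from the Roofnet dataset: each node broadcasts probes, neighbors count how many they hear, and the ratio gives the link's delivery rate.

```python
def delivery_ratios(probes_sent, probes_received):
    """Compute per-link delivery ratios (1 - loss rate) from probe counts.

    probes_sent:     {sender: number of broadcast probes sent}
    probes_received: {(sender, receiver): number of those probes heard}
    """
    ratios = {}
    for (src, dst), heard in probes_received.items():
        sent = probes_sent.get(src, 0)
        ratios[(src, dst)] = heard / sent if sent else 0.0
    return ratios

# Hypothetical counts over one measurement window.
sent = {"A": 1000, "B": 1000}
received = {("A", "B"): 940, ("B", "A"): 610, ("A", "C"): 480}
for link, ratio in delivery_ratios(sent, received).items():
    quality = "good" if ratio > 0.9 else "intermediate" if ratio > 0.3 else "poor"
    print(link, round(ratio, 2), quality)
```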
Not only were the results somewhat inconclusive, the paper as a whole never proposed a solution to any problem. In fact, the purpose of the paper was to "contribute to an understanding of the reasons for intermediate-quality links." The fact that this alone is a significant contribution shows how early researchers are in understanding the nuances of wireless networks.
Although some of the references in the paper date back to experiments with wireless networks in the mid-nineties, the work in this paper shows how little is known about how wireless networks truly work and how best to measure link loss rates or round-trip times. This will be interesting to keep in mind as I proceed with my wireless research.
Wednesday, November 10, 2010
Dealing with Contention
One of the most interesting ideas I have seen related to wireless networks is the concept of contention. In wireless networks we have to deal with contention regions and different routing techniques. In wireless mesh networks we note that the nodes are fixed, but having to deal with mobile networks seems like an extremely difficult task.
In our research we are trying to look at measurements in wireless networks. Even knowing the topology of the network, it seems difficult to infer key properties of the network using such fundamental quantities as round-trip times and delays. Contention doesn't seem to behave the same way in every network, so I am looking into running simulations to see what we can discover.
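As one example of the kind of inference involved (a toy of my own, not our actual method): a common trick is to treat the minimum RTT over a window as the path's baseline and attribute anything above it to queueing and contention.

```python
def queueing_delays(rtt_samples_ms):
    """Estimate per-sample queueing delay as RTT minus the minimum RTT.

    The minimum RTT over a window approximates the propagation plus
    transmission delay of the path; anything above it is attributed to
    queueing and contention.
    """
    base = min(rtt_samples_ms)
    return [rtt - base for rtt in rtt_samples_ms]

samples = [12.1, 12.4, 19.8, 35.2, 13.0, 12.2]   # made-up RTTs in ms
print("base RTT:", min(samples), "ms")
print("queueing component:", [round(d, 1) for d in queueing_delays(samples)])
```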
It may have taken a few months, but the work I've been doing in research and the reading I have done for class seem to have vastly improved my research skills, and I am excited to apply the things I am learning directly to my research.
Monday, November 8, 2010
Wireless Networking
For our networking research group and our upcoming lectures in our networking class, we are looking at wireless networks. The only work I've done with wireless networks in my life is setting up a wireless router in my home so that I can link several laptops, smartphones, and a wireless printer on the same network. However, I am excited to read more into wireless networks as I feel, as I may have mentioned before, that wireless networking is the most promising field in networking research at this moment.
For research we are looking into Internet measurements similar to the ones we previously discussed in class, but this time for wireless networks. It is interesting to see how differently wireless and wired networks behave. For example, in wireless networks we need to consider the contention between various routers or end systems, whereas in wired networks we are more focused on throughput, etc.
Since wireless is still a fairly new area of research, I believe there are plenty of useful contributions to be made in wireless networking that will benefit the field. Some ideas for our specific area of research include tracking wireless/mobile networks on the battlefield or tracking criminals using wireless devices, but other approaches could be useful for different situations.
The Principle Behind Research
Since we didn't have class this past Thursday, for this week's post I thought I would discuss an idea that has run throughout my blog. I believe I may have been overly cynical about research in networking up to this point. After discussing this with a co-worker, I better understand the motivations behind the research I have been reading about.
First, my co-worker helped me realize that all research, not just research focused on networking, has projects that are never brought to fruition. In fact, the majority of research in general seems to be done in the hope that it will be useful to someone at some time. In many cases, substantial research is done by building upon the work of others, whose individual contributions may not have made the same impact.
I will try and be less cynical towards networking research, after all - in the near future the Internet may need an incredible overhaul and all the current research will be very useful indeed...
Sunday, October 31, 2010
What Pathlet Routing Did Right...
For this week's reading I read the Pathlet Routing paper presented at ACM SIGCOMM in 2009. Pathlet Routing improves on current routing techniques by allowing for scalability, which is fast becoming an issue with the Border Gateway Protocol (BGP), and also allows for multipath routing, which overcomes the poor reliability and suboptimal path quality often associated with BGP.
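The basic idea, as I understand it, is that networks advertise small path fragments ("pathlets") between virtual nodes, and a sender builds an end-to-end route by chaining fragments together, which is also where the multipath benefit comes from. Here is a toy sketch of that concatenation step with made-up pathlets, not the paper's actual data structures:

```python
from collections import deque

# Toy pathlets: each is a path fragment through "virtual nodes" that some
# network has advertised; a full route is a chain of fragments where each
# one starts where the previous one ended. (A simplified illustration of
# the idea, not the paper's mechanism.)
PATHLETS = {
    "p1": ["A", "B", "C"],
    "p2": ["C", "D"],
    "p3": ["C", "E", "D"],
    "p4": ["B", "D"],
}

def find_routes(src, dst):
    """Breadth-first search over pathlet concatenations from src to dst."""
    routes = []
    queue = deque([(src, [])])
    while queue:
        here, used = queue.popleft()
        if here == dst and used:
            routes.append(used)
            continue
        for name, hops in PATHLETS.items():
            if hops[0] == here and name not in used:
                queue.append((hops[-1], used + [name]))
    return routes

# Multipath comes for free: every valid concatenation is a usable route.
for route in find_routes("A", "D"):
    print(" + ".join(route))
```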
However, despite all its benefits, Pathlet Routing would have been destined to fail if it didn't have one key feature: it can emulate the policies of BGP, source routing, and several recent multipath proposals (like NIRA, LISP, and MIRO). Although it is still too soon to see whether Pathlet Routing will be successful, ensuring that a new protocol works well with existing protocols is a good step toward creating a successful routing protocol. Not only can Pathlet Routing emulate many different existing protocols, it can also mix policies so that multiple emulated protocols work together.
Although I could be cynical and say Pathlet Routing might never get a chance to shine, the fact that the Internet needs a scalable routing protocol actually means it could get a chance. We'll have to wait and see.
Tuesday, October 26, 2010
The End of the World (or at least IPv4)
Today we had an interesting discussion in class about the fact that the world is literally running out of IPv4 addresses. In fact, it is predicted that by around May of 2012 there will no longer be any available IPv4 addresses (I guess Nostradamus was right...). This leads to the interesting question of how and when IPv6 will be implemented. Of course, many new computers and devices already have IPv6 addresses, but will router configurations need to be changed? Are there tables of IP addresses that will need to be washed away and reinstated with the new IP addresses? Basically, what kind of overhead will this involve?
Also, is this another Y2K incident? Or is this a legitimate issue that needs to be looked into further so that we truly understand the repercussions? I am inclined to believe it may be an important step in the history of networking, but I have a feeling the conversion may not be as hard as it sounds. I guess we'll have to wait and see.
Friday, October 22, 2010
Network Neutrality
In an interesting turn of events, we had time in our Networking class to discuss Network Neutrality. I won't pretend to be completely versed in the subject, but I understand the basic argument. Moreover, I can understand both sides of the argument.
As a big fan of streaming video on Hulu and Netflix (as I believe I mentioned earlier) and an avid online gamer, I don't like the idea of having my packets discriminated against. However, if that is a company's business model, then I believe it shouldn't be regulated by the government just to suit my wants.
I am instead in favor of the common carrier principle, under which everyone should have a choice of which ISP they connect to, so that if one ISP decides to give my packets lower priority because I'm streaming video or playing games, I can switch ISPs.
I feel this would essentially end the debate on Network Neutrality, since it allows ISPs to do whatever they want and pay the consequences by losing customers, just like any business should. It also allows ordinary users to pick the ISP that best fits their needs, stimulating competition and ensuring there are no monopolies.
Wednesday, October 20, 2010
Learning About Multicast
I was really interested to discuss multicast this week in our Networking class. As a big fan of Hulu (the free version) and Netflix, I find it fascinating how content such as streaming video is transported to clients throughout the internet. Although Hulu and Netflix don't necessarily have to use multicast to get their content out there, their protocols are probably similar.
I think this area of networking is important because I believe streaming and downloadable media will eventually replace physical media, beginning with television and movies. When I discussed this idea with my adviser, he felt that large-scale streaming would "break the internet" due to the high bandwidth requirements. Therefore, smart ideas such as using multicast to create content distribution networks will become more important in the future to ensure that the internet doesn't "break."
As for other forms of media, such as video games on consoles, it may take longer to break people's attachment to physical media. The advent of content distribution systems like Steam on PC and Mac has shifted the trend on computers toward downloadable media, which runs faster than reading from a disk but requires a large amount of storage, which is probably why it is not prominent on consoles limited to 120-250 GB of storage compared to 500 GB to several TB on PCs.
Thursday, October 14, 2010
The Problems with Routing (and Networking Research)
Today in class we read about a lot of the current problems with BGP routing. I was astounded to discover that a significant portion of BGP prefixes (around 25%) continuously flap and can take hours to converge to the correct route. Furthermore, the authors claim a 400-fold reduction in churn rate when using their protocol, the Hybrid Link-state Path-vector protocol (HLP), which seems to me reason enough to adopt it, yet we are still using BGP.
This makes me feel a little discouraged when faced with the prospect of finding an area of networking research that could eventually be useful enough to be implemented in real networks. Maybe I am thinking on too large a scale; I am sure there are many aspects of LANs, enterprise networks, etc., that could be modified and updated easily, but since I don't plan on being a system administrator, most of the research I do will be geared toward improving the Internet. Because the Internet is so large, I understand the difficulties in implementing new architectures and protocols, but when helpful protocols like HLP, which could make a significant impact on the Internet, get rejected, there seems little hope for any research idea I could come up with.
On a more positive note, I am sure that HLP had a significant impact on improvements to BGP in the last few years, and there are other avenues of networking research that we have not yet discussed in class that could work better for prospective research ideas, such as wireless networks. So I am looking forward to covering that (as well as our sure-to-be-interesting discussion on net neutrality next lesson).
End-to-End Congestion Control
One idea that came up in the paper we read this week really stuck with me. It's not a new idea; in fact, I've heard it talked about a lot: the idea of getting people to cooperate, in this case on the internet. For me, in my current area of research, that is a very interesting control problem. What incentives can we offer people to persuade them to cooperate in an environment where cooperation isn't necessarily an inherent prospect?
In the example of the Internet, the problem the researchers were looking at was getting UDP connections to share bandwidth fairly with TCP connections. An initial glance at this problem clearly shows that there is no incentive for UDP flows to cooperate on the network; in fact, there is more incentive for those connections NOT to use congestion control at all. Why should they have to lose bandwidth? What benefit does that have for them? The authors note that social incentives could play a factor, but these are unquantifiable and not very trustworthy.
One suggestion the authors made was creating router mechanisms that detect uncooperative flows and restrict their bandwidth, penalizing flows for not conforming to congestion control. Looking toward my area of research, the idea of using incentives to control a system seems very intriguing, and I would like to read more research papers that focus on this idea in other applications.
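One common way to frame "uncooperative" is to compare a flow's measured rate with what a conformant TCP would get at the same loss rate and RTT, using the simplified TCP throughput equation (rate ≈ 1.22 * MSS / (RTT * sqrt(p))). Here is a rough sketch of that check; the 2x slack factor and the example numbers are my own assumptions, not values from the paper.

```python
from math import sqrt

def tcp_friendly_rate(mss_bytes, rtt_s, loss_rate):
    """Approximate throughput (bytes/s) of a conformant TCP flow, using the
    simplified TCP throughput equation: ~1.22 * MSS / (RTT * sqrt(p))."""
    return 1.22 * mss_bytes / (rtt_s * sqrt(loss_rate))

def is_unresponsive(measured_bps, mss_bytes, rtt_s, loss_rate, slack=2.0):
    """Flag a flow as uncooperative if it sends well above its TCP-friendly share."""
    return measured_bps > slack * tcp_friendly_rate(mss_bytes, rtt_s, loss_rate)

# A hypothetical UDP flow pushing 8 Mb/s through a path with 1% loss and 100 ms RTT.
fair = tcp_friendly_rate(1460, 0.100, 0.01) * 8 / 1e6   # convert bytes/s to Mb/s
print(f"TCP-friendly rate: {fair:.2f} Mb/s")
print("uncooperative?", is_unresponsive(8e6 / 8, 1460, 0.100, 0.01))
```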
Saturday, October 9, 2010
TCP vs. Other
If I had to break transport-protocol research into two main areas, they would be research on TCP itself and research on other protocols. Since TCP has established itself as the leading protocol for the internet today, many researchers find success in modifying TCP to fit certain needs (as can be seen in Data Center TCP or Scalable TCP).
Others tend to take a more daring approach and look at completely new protocols, which may seem like a fruitless task in terms of improving the Internet, but their ideas make more sense when they are applied to smaller networks (although they may still be large networks) that need specific functionality (such as enterprise networks). The only problem that researchers taking this stance face is that their protocol needs to be "TCP-Friendly."
TCP-friendly refers to the new protocol being fair (in terms of bandwidth) when competing with a TCP connection. Vegas was criticized for being less aggressive than Reno, while BIC was criticized for being too aggressive. It seems that in order for TCP to be replaced from this point forward, the Internet itself will have to evolve so as to demand a new protocol. Such is the case with protocols like Scalable TCP, which may become more useful as high-speed networks become more prominent.
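To see why a protocol like Scalable TCP matters on fast links, here is a rough sketch of the two window-update rules as I understand them (the 0.01 and 1/8 constants are the ones I recall from the Scalable TCP proposal; treat them as illustrative): Reno recovers from a loss by adding about one segment per RTT, while Scalable TCP grows multiplicatively, so at large windows it recovers in tens of RTTs instead of hundreds.

```python
def reno_update(cwnd, acked_segments=0, loss=False):
    """Classic Reno congestion avoidance: +1 segment per RTT, halve on loss."""
    if loss:
        return max(cwnd / 2.0, 1.0)
    # Each ACK grows the window by 1/cwnd, i.e. ~1 segment per RTT overall.
    return cwnd + acked_segments * (1.0 / cwnd)

def scalable_tcp_update(cwnd, acked_segments=0, loss=False, a=0.01, b=0.125):
    """Scalable TCP sketch: +a segments per ACK, multiply by (1 - b) on loss."""
    if loss:
        return max(cwnd * (1.0 - b), 1.0)
    return cwnd + acked_segments * a

# After a loss at cwnd = 1000 segments, compare how long recovery takes.
for name, update in [("Reno", reno_update), ("Scalable TCP", scalable_tcp_update)]:
    cwnd = update(1000.0, loss=True)
    rtts = 0
    while cwnd < 1000.0:
        cwnd = update(cwnd, acked_segments=int(cwnd))   # one RTT's worth of ACKs
        rtts += 1
    print(f"{name}: recovered in ~{rtts} RTTs")
```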
As far as the transport layer goes, even though advances are made all the time, it seems as though the field is somewhat stagnant since we are too willing to submit to the current working protocol (TCP Reno in this case) and, although some protocols are better, no other currently available protocol is good enough for the world at large to want to incorporate it on a large scale.
Thursday, October 7, 2010
Scalable TCP (and the Future of Networks)
From the Scalable TCP paper and other networking papers I've been reading, many researchers seem to be doing research for an Internet that doesn't quite exist yet. For example, Scalable TCP may only be useful to a small group of users on the current internet, but as high-speed connections become more and more common (which may take years or decades), I am sure Scalable TCP and related research will become useful to a much larger number of users. By that time, however, more and more researchers will probably be shifting their focus to interplanetary networks, which may or may not ever come to fruition (but it is still looking toward the future).
Of course, this may seem like an inherent part of research (looking toward the future), but only in these recent papers has the idea really stood out to me. For our networking research, we are looking at network tomography as a tool to infer network topology. The main reason researchers claim to use this tool, instead of more active probing tools like traceroute, is to be able to keep mapping the Internet as routers and users become less and less cooperative with network measurement tools. Essentially, the researchers are preparing for a future Internet where little or no cooperation will exist.
Note that I said the researchers claim this is their primary reason for the research, but my professors believe the techniques being proposed are also very useful for mapping networks that certain parties, such as governments, don't want mapped: an idea that is useful today, while still preparing for the future. So now I ask myself where I should focus my research. Should I take a gamble on research that may or may not be useful in the future? Or should I focus on research that is important for people now and may have future implications? Just some interesting things I have been thinking about.
Sunday, October 3, 2010
Vegas vs. Reno
I'm not exactly sure why all the names of the TCP spin-offs happen to be cities in Nevada, but I'm more interested in the concepts behind the protocols than in understanding networking naming conventions. The papers we read on Vegas and Reno were fairly interesting, mainly because they showed that Vegas was a better protocol, yet for some reason Reno is still in use.
The first paper was by the original creators of Vegas, showing off how much better than Reno their protocol was. The second paper reaffirmed a lot of what was stated in the first paper, but the authors noted a flaw in the Vegas protocol: Reno was more aggressive and stole bandwidth from Vegas. This issue was addressed in the third (and final) paper we read, which showed that the default alpha and beta values of Vegas could be tuned to improve fairness when competing with Reno.
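For reference, alpha and beta are the thresholds in Vegas's congestion-avoidance rule: the sender estimates how many of its segments are sitting in queues (from the gap between expected and actual throughput) and tries to keep that backlog between alpha and beta. Here is a rough sketch of one update step, with illustrative threshold values rather than the papers' exact defaults:

```python
def vegas_adjust(cwnd, base_rtt, current_rtt, alpha=2, beta=4):
    """One congestion-avoidance step of TCP Vegas (rough sketch).

    Expected rate uses the minimum RTT seen (base_rtt); the gap between
    expected and actual rate, expressed in segments, estimates the backlog
    queued in the network. Keep that backlog between alpha and beta.
    """
    expected = cwnd / base_rtt                  # segments/s if no queueing
    actual = cwnd / current_rtt                 # segments/s actually achieved
    backlog = (expected - actual) * base_rtt    # extra segments sitting in queues
    if backlog < alpha:
        return cwnd + 1                         # path underused: grow linearly
    elif backlog > beta:
        return cwnd - 1                         # queues building: back off
    return cwnd                                 # within the target band: hold steady

# Toy example: base RTT 100 ms, current RTT inflated to 120 ms by queueing.
print(vegas_adjust(cwnd=30, base_rtt=0.100, current_rtt=0.120))
```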
However, we're still using Reno. We know Vegas is good (better than Reno), but it's not good enough to change the norm. From what I understand from my professor, Reno already has a large install base and Vegas isn't a big enough improvement over it to justify a switch. Protocols like CUBIC were more aggressive than Reno, while Vegas was less aggressive, and neither managed to coexist well with Reno, so they weren't widely accepted. Maybe it was because they couldn't integrate well and still outperform the existing protocol, or maybe because, although the fairness issue was resolved, the paper didn't go into detail on how to measure buffer capacity in order to improve fairness. Whatever the reason, most research seems to have moved on from trying to improve TCP and has focused on high-speed transport protocols, which I'll probably read about next week.
Wednesday, September 29, 2010
TCP
In our networking class this week we started discussing transport layer protocols for the internet, namely the different variants of TCP (such as Tahoe, Reno, New Reno, and SACK). While we'll go into more detail in later reading (which compares Reno and Vegas), we mainly looked at differences in congestion control and performance, but I am more interested in adoption rates (how successful each variant has been) as well as deployment strategies.
From what I've read so far, introducing new transport protocols on a large scale seems fairly difficult; however, most TCP-based protocols seem to thrive. Is it because the underlying architecture already uses TCP, so changing it slightly won't make a huge difference? Or is modifying the transport protocol easier than I think? Looking at other transport protocols (like BIC, which was used in Linux a while ago), it seems that implementing a transport protocol is not too difficult, but getting widespread acceptance is. The next question I would ask is: do we need new protocols? I guess that depends on how much the underlying architecture changes over time.
Saturday, September 25, 2010
Measuring the Internet (and Some Other Video Game-Related Discussion)
Thinking back to the first paper we read when studying applications, a paper entitled "End-to-End Internet Packet Dynamics," it seemed like it had nothing to do with the application layer at all. Measuring the internet sounds more like a transport problem, but I have discovered it is actually an application problem, and an important one at that. Measuring the internet is essential to understanding the shifting dynamics of its usage, not to mention that similar techniques can be used in mapping the internet.
Our lab assignment in class has also been focused on Internet measurement, and it has been interesting to see how such a small program, when run across a huge testbed (in this case PlanetLab), can create so much data. We're collecting almost a million data points by pinging 100 computers from 100 different computers 10 times, still small compared to the 500-odd million measurements collected by Microsoft Research to test Htrae, but bigger than anything I've ever done (especially since I've never left code running for more than 20 minutes, let alone 2 days straight).
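The measurement itself is conceptually simple; a minimal sketch of the kind of script involved might look like the following (the host names are hypothetical placeholders, not our actual PlanetLab node list, and the real experiment runs this from many vantage points).

```python
import subprocess, re, csv

# Hypothetical list of target hosts; the real experiment iterates over
# PlanetLab nodes and runs from many machines in parallel.
TARGETS = ["planetlab-node1.example.org", "planetlab-node2.example.org"]
PINGS_PER_TARGET = 10

def ping_rtts(host, count):
    """Run the system ping and pull out the per-packet RTTs in milliseconds."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    return [float(m) for m in re.findall(r"time=([\d.]+)", out)]

with open("rtt_samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["target", "sample", "rtt_ms"])
    for host in TARGETS:
        for i, rtt in enumerate(ping_rtts(host, PINGS_PER_TARGET)):
            writer.writerow([host, i, rtt])
```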
On a side note, it was fun to see a few game-related papers come up during the discussion (mainly about P2P gaming): mine, on Halo, and another paper on how to get more players (beyond the usual 16 or 32) into a single game. The latter discussed things such as paying more attention to players within a certain area of vision, whether by proximity or through a scope, and even factored in details such as more players focusing on a flag carrier. They were apparently able to simulate 900 players in a single game with this technology, which reminded me of the game MAG, which allows for 256 players in a single game; but the focus of the research was improving performance in small areas, and I'm not sure how well MAG performs when lots of players are grouped into one area (and I know the graphical level of MAG isn't all that great).
Sunday, September 19, 2010
I Always Knew Halo was Good for Something
This week in our networking class we presented student-chosen papers on applications research. After looking at internet measurement and P2P papers, I settled on a P2P paper from Microsoft Research focused on improving Halo, or rather online games (and other latency-sensitive P2P systems) in general, though they did use Halo to develop their latency prediction system, Htrae ("Earth" backwards, home of Bizarro and his friends from the DC comics universe).
The basic idea is that the system uses geolocation as the initial condition for a network coordinate system that uses spherical coordinates to map every Xbox onto a virtual Earth, a method known as geographic bootstrapping. The point of Htrae is to improve matchmaking so that latency between players is reduced, thereby reducing in-game lag.
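The seeding step is easy to picture: start each console at its real latitude/longitude on a sphere and guess latency from great-circle distance, then let actual probe measurements refine the coordinates. Here is a toy sketch of just that initial guess; the propagation constant and the player locations are assumptions of mine, and the real system's refinement step is not shown.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
# Rough effective speed of a packet including fiber and routing detours;
# an assumed constant purely for this illustration.
KM_PER_MS = 100.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def bootstrap_rtt_ms(geo_a, geo_b):
    """Initial RTT guess between two consoles from their geolocated positions.

    In a network coordinate system this guess would then be refined by
    actual probe measurements between peers."""
    one_way_ms = great_circle_km(*geo_a, *geo_b) / KM_PER_MS
    return 2 * one_way_ms

# Two hypothetical players: one near Provo, Utah, one near Sydney.
print(round(bootstrap_rtt_ms((40.25, -111.65), (-33.87, 151.21)), 1), "ms")
```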
Reducing lag in online games is important, especially in popular first-person shooters, because lag could essentially mean you are killed before you even see the person who kills you. That, of course, would affect your view of how "fun" the game is and, since online gaming is a huge million (or even billion) dollar industry, you would simply find a new game with less lag.
Although I do not know whether Htrae is actually used in any current games, the recently released Halo Reach (prequel to the original Halo game) is the most likely candidate. Halo Reach allows players to select certain preferences (such as improved matchmaking by skill level) at the cost of increased matchmaking time. The default matchmaking setting (labeled as the fastest method) most likely incorporates Htrae, as the game was published by Microsoft Game Studios.
Although many feel video games are a waste of time, there is no denying that they have substantial influence in pushing forward research in various areas of computer science. I have mainly read about improvements in graphics and AI, so it was interesting to see video games helping to push the envelope in terms of networking research.
Wednesday, September 15, 2010
Understanding Peer-to-Peer Systems
Although I had a basic understanding of how peer-to-peer (P2P) systems worked (thanks to a little application known as BitTorrent), reading about how a P2P system is designed helped me better understand the nuances associated with creating a P2P application. In particular, the distributed lookup protocol I studied was entitled "Chord." Unlike many other networking protocol names, this wasn't an acronym for anything special.
The main problem Chord tries to solve is efficiently locating the node that stores a particular data item. Chord is impressive in its running time, taking only O(log N), where N is the number of nodes, to maintain routing information and resolve lookups, and also in its robustness, updating routing information for nodes joining and leaving in O(log^2 N) time.
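The foundation is a consistent-hashing ring: nodes and keys are hashed onto the same identifier circle, and each key belongs to its successor node. Here is a toy sketch of that assignment (hypothetical node names, and a simple sorted-list successor lookup in place of the finger tables that give real Chord its O(log N) hop count):

```python
import hashlib
from bisect import bisect_right

M = 16  # identifier bits for the toy ring; real Chord uses 160-bit SHA-1 ids

def chord_id(name):
    """Hash a node name or key onto the identifier circle [0, 2^M)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

class Ring:
    def __init__(self, node_names):
        # Sorted list of (identifier, node) pairs forms the ring.
        self.nodes = sorted((chord_id(n), n) for n in node_names)

    def successor(self, key):
        """Return the first node whose identifier follows the key's identifier,
        wrapping around the circle; that node is responsible for the key."""
        kid = chord_id(key)
        ids = [i for i, _ in self.nodes]
        idx = bisect_right(ids, kid) % len(self.nodes)
        return self.nodes[idx][1]

ring = Ring(["node-a", "node-b", "node-c", "node-d"])
for key in ["song.mp3", "thesis.pdf"]:
    print(key, "->", ring.successor(key))
```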
More importantly, Chord showed me that in order to effectively create a P2P application we need to look at several aspects of P2P. Firstly, Chord implements consistent hashing and stabilization in order to allow for nodes to join and leave. Successfully allowing for users to be able to enter or drop out at any time without disturbing the distributed network is an important aspect of P2P applications. If this aspect of P2P is not handled correctly and efficiently, then the application will essentially fail.
Next, Chord is also scalable, which essentially means it is a feasible protocol for the existing internet architecture. Unlike the other architectures I have studied, Chord is implemented at the application layer, which is much easier to change than the underlying architecture of the internet. Chord not only solves a problem in existing P2P protocols, but does so in a way that is actually usable, which is an important aspect of research that I haven't seen a lot of in networking (at least in research on internet architecture).
Overall, I found this particular paper very useful, because I feel that understanding these basic ideas of P2P networks and protocols will better prepare me for the next paper I will read for my networking class, a paper on matchmaking for online games, which I am very excited to read.
Tuesday, September 14, 2010
All About Packet Dynamics
Vern Paxson's paper "End-to-End Internet Packet Dynamics" essentially takes a NASDAQ view of the Internet: just as the NASDAQ uses a limited group of companies to represent the entire stock market, Paxson uses several sites to model the entire internet. The paper, overall, serves to dispel certain misconceptions (or assumptions) related to TCP connections and also serves as a basis for further research dependent on the actual performance of the internet.
One interesting point brought up in the paper is the rate at which corrupt packets are accepted by the internet every day (estimated at one in 3 million). Paxson notes, however, that switching the 16-bit checksum to a 32-bit checksum would change that figure to one in 2*10^13. Since this paper was written in 1997, I assume this has been implemented, so it would be interesting to see today if a 64-bit checksum could be (or already has been) implemented. Would changing it essentially nullify the effects of corrupt packets being accepted on the internet altogether? Or does the growth of internet traffic keep pace with the stronger checksum?
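For context, here is a minimal sketch of the standard 16-bit ones'-complement Internet checksum that the discussion above refers to (my own illustration, not code from the paper). The last line also shows one class of corruption a sum like this can never catch: reordering 16-bit words leaves the checksum unchanged.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum of the kind used by IP, TCP, and UDP."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]        # treat each byte pair as a 16-bit word
        total = (total & 0xFFFF) + (total >> 16)     # fold any carry back into the low 16 bits
    return ~total & 0xFFFF

print(hex(internet_checksum(b"example TCP payload")))
print(internet_checksum(b"ABCD") == internet_checksum(b"CDAB"))   # True: swapped words go undetected
```

A wider checksum (or a CRC) shrinks the fraction of corrupted packets that slip through, which is roughly the trade-off behind the 16-bit versus 32-bit numbers above.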
Another interesting point that Paxson discusses is the Nd "dups" threshold, which is currently set at 3. He shows that dropping it to 2 would yield a significant gain in retransmit opportunities (60-70%), and that the tradeoff, the ratio of good to bad "dups," could be stabilized by waiting W = 20 msec before generating the second "dup." However, while this idea is theoretically an improvement, he notes that the size of the internet (back in 1997) made it impractical to deploy, and a change that was not feasible in 1997 would be even harder to roll out today. This leads to the question: should more research be done in areas that we know are infeasible to implement? Or do we believe there is a feasible way to vastly improve the internet?
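To make the threshold concrete, here is a toy model of the duplicate-ACK counting a TCP sender does for fast retransmit, with the threshold exposed as a parameter. This is my own sketch for illustration, not Paxson's code or a real TCP implementation, and the class and parameter names are made up.

```python
class DupAckTracker:
    """Toy model of TCP fast retransmit: trigger a retransmission once `dupthresh`
    duplicate ACKs arrive for the same sequence number (the Nd threshold, normally 3)."""

    def __init__(self, dupthresh=3):
        self.dupthresh = dupthresh
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_seq):
        if ack_seq == self.last_ack:
            self.dup_count += 1
            if self.dup_count == self.dupthresh:
                return f"fast retransmit of segment starting at {ack_seq}"
        else:
            self.last_ack, self.dup_count = ack_seq, 0   # new data acknowledged; reset the counter
        return None

# With dupthresh=2 the sender reacts one duplicate ACK sooner; the cost is more
# spurious retransmissions when packets are merely reordered, which is the
# good-vs-bad "dups" tradeoff Paxson tries to balance with the 20 msec wait.
sender = DupAckTracker(dupthresh=2)
for ack in [1000, 2000, 2000, 2000]:
    event = sender.on_ack(ack)
    if event:
        print(event)
```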
Paxson's study provided researchers with a lot of solid data on end-to-end packet dynamics in the internet. I am not sure how much the internet's end-to-end behavior has changed since 1997, but it would be useful to run a similar experiment on the modern internet to evaluate the changes of the past 13-odd years. Such an experiment would be useful to researchers attempting to improve the dynamics of the internet, as well as to application developers trying to understand how the underlying protocols should affect their implementations.
Friday, September 10, 2010
So how will the Internet evolve?
Who knows. From what I have studied it seems unlikely that the Internet will get a complete overhaul, but there are certain pressing issues that need to be addressed, such as security, and so I predict that, within a few decades, the Internet will be very different from the one that we use today.
From the papers we discussed in our networking class it seems as though the trend of architecture research is focusing on data-centric approaches (although there are some radical application-centric ideas floating around). This is based on the idea that the current trend of internet use is to find data.
People don't care about where the data comes from, they just care that they get the data. Although I agree that this might be useful right now, it suffers from the same problems that plagued the original creation of the internet. The original designers were focused solely on what they needed for an internet right then and there. They didn't look forward to the future to try and incorporate ideas for a more data-centric network. If we follow suit and do not look a little ahead to the future, we may be implementing a data-centric network when the trend is shifting away from that type of network.
However, I do understand that researchers need to work with what they have and that it is incredibly difficult to predict the future. After all, who knew that the internet would grow to permeate the entire world?
I don't have any solutions to the current problems of the internet and at first I felt that the internet is extremely useful for everything I need, so why fix what isn't broke? Studying internet architecture has opened my eyes to the real struggles behind security issues and, from some radical papers, I have come to understand how limiting the internet architecture can be at times. I wasn't looking at the big picture of the internet before and now I better understand what people are trying to do and what kind of areas I need to look into if I want to help brainstorm the future of the internet.
Thursday, September 9, 2010
Why have the Internet at all?
No, I am not calling for the abolition of the Internet. I am instead trying to make a witty reference to a radical paper I read called "The End of Internet Architecture." The author, Timothy Roscoe, puts forward the rather extreme view that the current Internet architecture is not good enough for the functionality we need, and that no other architecture will fix the issues the current Internet is experiencing. Instead, his idea is to virtualize the Internet, with applications taking on every role that the current Internet architecture has, effectively doing away with Internet architecture altogether.
Though Roscoe's claims may seem ludicrous at first, there is some merit to his line of thought. For example, he argues that removing the architecture of the Internet essentially opens up its functionality. It is no secret that many Internet application developers spend a large amount of time creating workarounds so that their applications work with the current architecture of the Internet, rather than seamlessly on top of it. Roscoe proposes that a world of creativity could be opened up if applications did not need workarounds and weren't tied down by protocols.
Overall, Roscoe isn't really trying to create a new architecture; the paper reads more like a call to arms to push networking research into the field of systems research. Roscoe doesn't delve very deep into the idea of mixing systems and networking research, but he does give a few ideas of areas that other researchers could focus on in order to push forward. Although I don't entirely agree with Roscoe's approach, I applaud his ability to think outside the box in trying to find new avenues of networking research to explore. Who knows, maybe some of his ideas will come into practice as the Internet evolves over the next few decades.
Monday, September 6, 2010
So what's everyone else doing?
In studying how to improve the architecture of the internet, one must look at what has already been done. One of the architectures we studied in class was DONA, which stands for "A Data-Oriented (and Beyond) Network Architecture." Although I didn't understand all the nuances of the architecture, the basic idea of DONA is to improve the efficiency of the internet by understanding that internet usage is data-centric, rather than host-centric, and modeling an architecture to support this trend.
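As I understand it, part of what makes DONA data-centric is its naming: instead of human-readable hostnames, data is requested by flat, self-certifying names of the form P:L, where P is a hash of the publisher's public key and L is a label. A rough sketch of what such a name might look like (my own illustration with a made-up key and label, not DONA's actual encoding or resolution machinery):

```python
import hashlib

def dona_style_name(publisher_public_key: bytes, label: str) -> str:
    """Build a flat, self-certifying name of the form P:L.
    P is a hash of the publisher's public key; L is a label the publisher chooses.
    (Illustrative only -- SHA-1 and the string format here are my assumptions.)"""
    p = hashlib.sha1(publisher_public_key).hexdigest()
    return f"{p}:{label}"

name = dona_style_name(b"-----publisher public key bytes-----", "lecture-notes.pdf")
print(name)   # a 40-hex-character principal hash, a colon, then the label

# Anyone who receives the data, the label, and the publisher's key can check that
# the name matches, without caring which host the bytes actually came from.
```

The catch, which is exactly the usability concern below, is that a name like this means nothing to a human.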
The main problem I see with the proposed architecture is the feasibility of its implementation. Although this aspect of DONA is covered in the article, I feel that a key point was not addressed: how to successfully market it to the masses. In order to successfully launch the 'Internet 2.0,' the millions (or billions) of users must be able to use the system. While DONA is not impossible to use, it is different and, as mentioned in the article, it is more difficult.
Although a data-centric internet would benefit the masses, explaining to them that they must work harder so that the internet can work better may be a difficult task, especially when we consider that the current internet has workarounds in place that allow them to essentially have aspects of a data-centric internet, without having to learn a new naming convention.
Some may argue that this is a problem for psychologists, not students of networking, but I would counter that that type of thinking is what caused the networking problems in the first place. The Internet was created for certain tasks, while research into how users would use the Internet was not even considered at the time, since the entire idea of the Internet had not been completely realized.
I believe that in order to have a successful 'Internet 2.0,' not only should we improve the architecture of the Internet based on how it is used in modern times, but we must also properly prepare for its implementation. I won't go so far as to say that the perfect internet would not require users to change their behavior, but I would propose that if a change in behavior is necessary, then we as researchers should determine how we can help transition the masses into a new world with a better Internet.
Wednesday, September 1, 2010
Improve the Internet? Me?
Thinking back to the first time I used the internet makes me feel kind of old: I remember chatting over ICQ (I seek you) and downloading music from Napster (before everyone found out it was illegal). Now, I love streaming Hulu (the free version), learning fun facts from Wikipedia (even if the validity of the information is questionable), and socializing over Facebook (since my parents haven't figured out how to join it yet). Reading the article "The Design Philosophy of the DARPA Internet Protocols" forced me to reflect on the birth of the internet, its rise over the past few decades, and the impact it has had on the world.
DARPA's main goal was to "develop an effective technique for multiplexed utilization of existing interconnected networks." Essentially, DARPA had several networks it wanted to combine into an "interconnected network," or "internet." DARPA was not thinking on a worldwide scale at the time, but this vision of connecting networks over large distances was an important step toward the establishment of the World Wide Web we see today. The main goal in creating the internet was to improve communication channels between different military groups throughout the country; with such a strong focus on sending messages to one another, it is no surprise that email and social networking have become so popular today.
The question of why this communication was so important, more important even than the survivability or security of the internet, could have many answers; it truly depends on the issues facing those who set the goals, which I unfortunately have no real insight into. Perhaps a more pertinent question, then, is why we study the history of the internet at all. Is it to understand the motivations of those who created it? Or is there a deeper purpose?
My networking professor would have me believe that we study the history of the internet in order to improve its future. Should we not learn from the mistakes of others, so we do not follow in their footsteps? As an undergraduate I might have eaten up my professor's words and then spat them back out during some mid-term or final, never again remembering their importance once the course was over. As a graduate student, however, I am more inclined to dig a little deeper and determine whether that truly is a role I need to play: an internet improver. I'm not sure; my knowledge of networks is minimal, but hopefully the more I study them, the better equipped I will be to step into those shoes.
If I were to become an internet improver, I believe that the best place to start, of course, is how we can improve upon what has already been done. Where did DARPA go right and in which areas did they fail? While a full analysis is not possible, we can look at a few of the goals outlined in the aforementioned paper:
1) Connecting existing networks, which I believe was mainly intended to improve communication. With sites and applications like Google Voice and Facebook, can we be any more connected? I postulate that we can, especially given the emergence of smartphones in recent years. Since video and voice transmission over the internet is already possible, why am I paying for minutes on my cell phone? Why can't I just have an unlimited data plan and then make calls and video chats via the internet on my cell phone or computer? I assume it is possible, but would changes need to be made to the internet architecture and protocols to make it feasible? Are there other obstacles in the way? Another important question when suggesting changes is how they would affect us in the future. DARPA made the mistake of not thinking on a global scale, forsaking measures such as security for the sake of connectivity. Should I, then, be thinking about networks on an interplanetary scale?
2) Survivability of the internet, a topic which at first seems pretty much taken care of. Going back to our interplanetary scale, what would happen if the connections between two planets were lost? The internet would survive, but if there were a substantial number of connections on other planets, then the internet could essentially be split. What kind of satellite technology is needed to ensure that doesn't happen? Of course, just as there is a problem with thinking on too small a scale, I also believe that thinking on too large a scale could be detrimental, or at least a waste of time. Should I be worrying so much about interplanetary connections when they could be many lifetimes away?
3) The final goal I will discuss (although there are more goals outlined in the paper) is the ability to support multiple types of communication services. Since the internet is so large, it isn't easy to simply introduce a new protocol for streaming video, voice, or other necessary services. Can we improve on TCP/IP or UDP? Surely we can, but, from the little that I've learned, actually deploying a replacement apparently is not feasible. How then can we improve these services? Is it possible to improve the overall internet architecture by introducing new protocols at the application layer, as in the toy sketch after this list? Or will we still be bottlenecked by the limitations of the existing protocols that have cemented their position in internet usage?
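To illustrate what "a new protocol at the application layer" can mean in practice, here is a minimal sketch of a made-up message format carried over plain UDP. Everything here, the port number, the JSON framing, and the message fields, is my own invention for illustration; the point is only that new behavior can live entirely in the application without touching TCP/IP or UDP themselves.

```python
import json
import socket

def encode(msg_type, payload):
    """Frame a message for our made-up application-layer protocol."""
    return json.dumps({"type": msg_type, "payload": payload}).encode()

def decode(data):
    return json.loads(data.decode())

# Loopback demo: a "server" socket and a "client" socket in the same process.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))                     # arbitrary port chosen for the example

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(encode("fetch", {"name": "lecture-notes.pdf"}), ("127.0.0.1", 9999))

data, addr = server.recvfrom(65535)
print(decode(data))   # {'type': 'fetch', 'payload': {'name': 'lecture-notes.pdf'}}

server.close()
client.close()
```

Overlay and P2P systems like Chord take essentially this route: they layer naming, routing, and reliability on top of the transport protocols that are already deployed.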
Hopefully as I continue to study the architecture of the internet and learn about networking I can begin to answer these questions and better understand these concepts. Who knows, maybe I can even make some small contributions that will help improve the internet...