Thinking back to the first paper we read when studying applications, "End-to-End Internet Packet Dynamics," I initially felt it had nothing to do with the application layer at all. Measuring the internet sounds more like a transport problem, but I have discovered it is actually an application problem, and an important one at that. Measuring the internet is essential to understanding the shifting dynamics of its usage, and similar techniques can be used to map the internet.
Our lab in class has also focused on Internet measurement, and it has been interesting to see how such a small program, run on a huge testbed (in this case PlanetLab), can produce so much data. We're collecting 100,000 data points by pinging 100 computers from 100 different computers 10 times each. That is still tiny compared to the 500-odd million measurements Microsoft Research collected to test Htrae, but it's bigger than anything I've ever done (especially since I've never left code running for more than 20 minutes, let alone 2 days straight).
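To make the scale concrete, here is a minimal sketch (my own, not the actual lab code) of the aggregation step such a measurement implies, assuming each ping is recorded as a (source, target, rtt_ms) tuple; the field layout and the synthetic RTT values are mine:

```python
from collections import defaultdict
from statistics import mean

def summarize_rtts(samples):
    """Aggregate (source, target, rtt_ms) ping samples into
    per-pair (min, avg, max) statistics."""
    by_pair = defaultdict(list)
    for src, dst, rtt in samples:
        by_pair[(src, dst)].append(rtt)
    return {pair: (min(r), mean(r), max(r)) for pair, r in by_pair.items()}

# 100 sources x 100 targets x 10 pings = 100,000 raw samples,
# aggregated into 10,000 host-pair rows.
samples = [(s, d, 10.0 + (s + d + i) % 5)
           for s in range(100) for d in range(100) for i in range(10)]
stats = summarize_rtts(samples)
```

Even this toy version makes it obvious why the raw logs pile up so fast: the data grows with the square of the number of hosts before you even repeat the pings.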
On a side note, it was fun to see a few game-related papers come up during the discussion (mainly on P2P gaming): mine, on Halo, and another paper on getting more players than the usual 16 or 32 into a single game. The authors devoted more network attention to players within a certain area of vision, whether by proximity or through a scope, and even accounted for details such as more players focusing on a flag carrier. They were apparently able to simulate 900 players in a single game with this technique, which reminded me of the game MAG, which allows 256 players in a single game. The focus of the research, though, was improving performance in small, crowded areas, and I'm not sure how well MAG performs when lots of players group into one spot (and I know that MAG's graphical level isn't all that great).
A blog about networks by a Spanish-speaking Mormon Indian born in Fiji and raised in Australia, married to a Japanese Brazilian, pursuing a PhD in Computer Science at BYU after graduating from BYU-Hawaii
Showing posts with label application.
Saturday, September 25, 2010
Wednesday, September 15, 2010
Understanding Peer-to-Peer Systems
Although I had a basic understanding of how peer-to-peer (P2P) systems worked (thanks to a little application known as BitTorrent), reading about how a P2P system is designed helped me better understand the nuances associated with creating a P2P application. In particular, the distributed lookup protocol I studied was entitled "Chord." Unlike many other networking protocol names, this wasn't an acronym for anything special.
The main problem Chord solves is efficiently locating the node that stores a particular data item. Chord is impressive both in its efficiency, with each node maintaining routing information for only O(log N) other nodes (where N is the number of nodes) and resolving lookups in O(log N) messages, and in its robustness, updating routing information after a node joins or leaves with O(log² N) messages.
More importantly, Chord showed me that effectively creating a P2P application means getting several aspects right. First, Chord uses consistent hashing and a stabilization protocol so that nodes can join and leave. Allowing users to enter or drop out at any time without disturbing the distributed network is essential; if this churn is not handled correctly and efficiently, the application will essentially fail.
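As a rough illustration of the consistent-hashing idea (a sketch of the concept, not Chord's actual implementation; the tiny 2^16 identifier space and the node/key names are mine, real Chord uses a 160-bit SHA-1 space), nodes and keys hash onto the same ring, and each key lives at its successor:

```python
import hashlib
from bisect import bisect_left

M = 2 ** 16  # toy identifier space for the example; Chord uses 2^160

def ring_id(name: str) -> int:
    """Hash a node or key name onto the identifier ring."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % M

def successor(node_ids, key_id):
    """First node at or after key_id, wrapping around the ring.
    That node is the one responsible for storing the key."""
    i = bisect_left(node_ids, key_id)
    return node_ids[i % len(node_ids)]

nodes = sorted(ring_id(f"node-{n}") for n in range(8))
owner = successor(nodes, ring_id("some-file.mp3"))
# When a node joins or leaves, only the keys between it and its
# predecessor change hands, which is what keeps churn cheap.
```

Chord's real contribution on top of this picture is the finger table, which lets a node reach the right successor in O(log N) hops instead of walking the ring.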
Next, Chord is scalable, which essentially means it is a feasible protocol for the existing internet architecture. Unlike the other architectures I have studied, Chord is implemented at the application layer, which is much easier to change than the underlying architecture of the internet. Chord not only solves a problem in existing P2P protocols, but does so in a way that is actually usable, which is something I haven't seen a lot of in networking research (at least in research on internet architecture).
Overall, I found this particular paper very useful, because I feel that understanding these basic ideas of P2P networks and protocols will better prepare me for the next paper I will read for my networking class, a paper on matchmaking for online games, which I am very excited to read.
Tuesday, September 14, 2010
All About Packet Dynamics
Vern Paxson's paper "End-to-End Internet Packet Dynamics" essentially takes a stock-index view of the Internet: just as an index uses a sample of companies to represent the entire market, Paxson uses measurements between a sample of sites to model the entire internet. The paper serves both to dispel certain misconceptions (or assumptions) about TCP connections and as a basis for further research that depends on the actual behavior of the internet.
One interesting point the paper raises is the number of corrupt packets accepted by the internet every day (estimated at one in 3 million). Paxson notes, however, that switching from the 16-bit checksum to a 32-bit checksum would change that to roughly one in 2*10^13. Since the paper was written in 1997, I would be curious to learn whether this has been adopted, and whether a 64-bit checksum could be (or already has been) implemented. Would widening the checksum essentially nullify the problem of corrupt packets being accepted altogether, or does the growth of internet traffic keep pace with the checksum?
Another interesting point Paxson discusses is the duplicate-ACK threshold Nd, which is set at 3. He shows that dropping it to 2 would increase fast-retransmit opportunities by a significant amount (60-70%), and that the tradeoff, the ratio of good to bad "dups," could be kept stable by waiting W = 20 msec before generating the second "dup." However, while this change is theoretically an improvement, he notes that the size of the internet (back in 1997) made it impractical to deploy, and a change that was infeasible at 1997's scale would be even harder to roll out on today's far larger internet. This leads to a question: should more research go into areas we know are infeasible to deploy, or do we believe there is a feasible way to vastly improve the internet?
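A toy model (my own construction, not Paxson's methodology) of the rule in question: the sender fast-retransmits once it has seen Nd duplicate ACKs for the same sequence number, so lowering Nd from 3 to 2 simply fires one duplicate ACK sooner:

```python
def fast_retransmit_triggers(acks, nd=3):
    """Return the ack numbers that would trigger a fast retransmit,
    i.e. those for which `nd` duplicate ACKs were observed."""
    triggers, dup_count, last_ack = [], 0, None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == nd:   # the nd-th duplicate fires the retransmit
                triggers.append(ack)
        else:
            last_ack, dup_count = ack, 0
    return triggers

acks = [100, 200, 200, 200, 200, 300]
fast_retransmit_triggers(acks, nd=3)   # -> [200]
fast_retransmit_triggers(acks, nd=2)   # -> [200], but one ACK sooner
```

The danger Paxson quantifies is the flip side: with nd=2, two duplicates caused by mere reordering (rather than loss) would also fire, producing a "bad" retransmission, hence the proposed 20 msec wait.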
Paxson's study provided researchers with a lot of solid data on end-to-end packet dynamics in the internet. I am not sure how much the internet's end-to-end behavior has changed since 1997, but it would be useful to run a similar experiment on the modern internet to evaluate the changes of the past 13 years. Such an experiment would help researchers attempting to improve the dynamics of the internet, as well as application developers trying to understand how the underlying protocols should affect their implementations.
Friday, September 10, 2010
So how will the Internet evolve?
Who knows. From what I have studied it seems unlikely that the Internet will get a complete overhaul, but there are certain pressing issues that need to be addressed, such as security, and so I predict that, within a few decades, the Internet will be very different from the one that we use today.
From the papers we discussed in our networking class it seems as though the trend of architecture research is focusing on data-centric approaches (although there are some radical application-centric ideas floating around). This is based on the idea that the current trend of internet use is to find data.
People don't care about where the data comes from, they just care that they get the data. Although I agree that this might be useful right now, it suffers from the same problems that plagued the original creation of the internet. The original designers were focused solely on what they needed for an internet right then and there. They didn't look forward to the future to try and incorporate ideas for a more data-centric network. If we follow suit and do not look a little ahead to the future, we may be implementing a data-centric network when the trend is shifting away from that type of network.
However, I do understand that researchers need to work with what they have and that it is incredibly difficult to predict the future. After all, who knew that the internet would grow to permeate the entire world?
I don't have any solutions to the current problems of the internet, and at first I felt that the internet already did everything I needed, so why fix what isn't broken? But studying internet architecture has opened my eyes to the real struggles behind security issues, and some of the radical papers have shown me how limiting the internet's architecture can be. I wasn't looking at the big picture before; now I better understand what people are trying to do and what areas I need to look into if I want to help brainstorm the future of the internet.
Thursday, September 9, 2010
Why have the Internet at all?
No, I am not calling for the abolition of the Internet. I am instead trying to make a witty reference to a radical paper I read called "The End of Internet Architecture." The author, Timothy Roscoe, puts forward the rather extreme view that the current Internet architecture is not good enough for the functionality we need and that no alternative architecture will fix the issues the current Internet is experiencing. Instead, his idea is to virtualize the Internet, with applications taking on every role the current architecture plays, effectively doing away with Internet architecture altogether.
Though Roscoe's claims may seem ludicrous at first, there is merit to his line of thought. For example, he argues that removing the architecture of the Internet would open up its functionality. It is no secret that many Internet application developers spend a large amount of time creating workarounds so their applications can cope with the current architecture of the Internet, rather than working with it seamlessly. Roscoe proposes that a world of creativity could open up if applications did not need workarounds and weren't tied down by protocols.
Overall, Roscoe isn't really trying to create a new architecture; the paper reads more like a call to arms to push networking research into the field of systems research. He doesn't delve very deep into the idea of mixing systems and networking research, but he does suggest a few areas other researchers could focus on. Although I don't entirely agree with his approach, I applaud his willingness to think outside the box in search of new avenues of networking research. Who knows, maybe some of his ideas will come into practice as the Internet evolves over the next few decades.