
Sunday, September 19, 2010

I Always Knew Halo was Good for Something

This week in our networking class features student-chosen papers based on applications research. After looking at internet measurement and p2p papers, I settled on a p2p paper from Microsoft Research focused on improving Halo, or really online games (and other latency-sensitive p2p systems) in general, though the authors did use Halo to develop their latency prediction system, Htrae (Earth spelled backwards, home of Bizarro and his friends from the DC comics universe).

The basic idea is that the system uses geolocation as an initial condition for a network coordinate system, mapping every Xbox onto a virtual Earth in spherical coordinates, a method known as geographic bootstrapping. The point of Htrae is to improve matchmaking so that latency between players is reduced, thereby reducing in-game lag.
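To make the idea concrete, here is a minimal sketch of geographic bootstrapping in Python, assuming a simple linear distance-to-latency model. The MS_PER_KM constant, function names, and coordinates are illustrative assumptions on my part, not values from the paper; the real system goes on to refine each node's position using measured round-trip times.

```python
import math

EARTH_RADIUS_KM = 6371.0
MS_PER_KM = 0.02  # illustrative guess at RTT cost per great-circle km

def to_radians(lat_deg, lon_deg):
    """Convert a (latitude, longitude) pair from degrees to radians."""
    return math.radians(lat_deg), math.radians(lon_deg)

def great_circle_km(a, b):
    """Great-circle distance between two (lat, lon) points, via the haversine formula."""
    (lat1, lon1), (lat2, lon2) = a, b
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

def predicted_rtt_ms(node_a, node_b):
    """Bootstrap prediction: latency grows with distance on the virtual Earth."""
    return great_circle_km(node_a, node_b) * MS_PER_KM

# Geographic bootstrapping: seed each console's virtual position from geolocation.
seattle = to_radians(47.6, -122.3)
london = to_radians(51.5, -0.1)
print(f"Initial RTT estimate: {predicted_rtt_ms(seattle, london):.0f} ms")
```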

Reducing lag in an online game is important, especially in popular first-person shooter games, because lag can mean you are killed before you even see the person who kills you. That, of course, would affect your view of how "fun" the game is, and since online gaming is a multi-million (or even billion) dollar industry, you would simply find a new game with less lag.

Although I do not know whether Htrae is actually used in any current games, the recently released Halo: Reach (a prequel to the original Halo game) is the most likely candidate. Halo: Reach lets players trade longer matchmaking times for certain preferences (such as better matching on skill level). The default matchmaking setting (labeled as the fastest method) could well incorporate Htrae, since the game was published by Microsoft Game Studios.

Although many feel video games are a waste of time, there is no denying that they have substantial influence in pushing forward research in various areas of computer science. I have mainly read about improvements in graphics and AI, so it was interesting to see video games helping to push the envelope in terms of networking research.

Tuesday, September 14, 2010

All About Packet Dynamics

Vern Paxson's paper "End-to-End Internet Packet Dynamics" essentially takes a stock-index view of the Internet: just as an index uses a small group of companies to represent the entire market, Paxson uses several sites to model the entire internet. The paper, overall, serves to dispel certain misconceptions (or assumptions) about TCP connections, and also serves as a basis for further research that depends on the actual behavior of the internet.

One interesting figure brought up in the paper is the rate at which corrupted packets are accepted by the internet: with TCP's 16-bit checksum, roughly one in 300 million packets is accepted with undetected corruption. Paxson notes, however, that switching to a 32-bit checksum would change that to about one in 2*10^13, since each additional checksum bit halves the chance that corruption slips through. When I first read this I assumed that, the paper being from 1997, the stronger checksum would have been adopted by now, but as far as I can tell TCP still uses its 16-bit checksum (stronger checks such as CRC-32 live at the link layer instead). It would still be interesting to ask whether a 64-bit checksum could be (or already has been) implemented somewhere in the stack. Would changing it essentially nullify the effects of corrupt packets being accepted on the internet altogether? Or does the growth of internet traffic keep pace with the checksum?
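A quick back-of-the-envelope check of those figures, assuming an underlying corruption rate of about one packet in 5,000 (the rate implied by the 2*10^13 number) and the idealized model that a k-bit checksum misses corruption with probability 2^-k:

```python
# Rate at which corrupted packets are accepted, assuming a k-bit checksum
# fails to detect corruption with probability 2**-k (an idealized model).
corruption_rate = 1 / 5000  # packets corrupted in transit (implied by the paper's figures)

for bits in (16, 32, 64):
    accepted = corruption_rate * 2 ** -bits
    print(f"{bits}-bit checksum: about 1 corrupt packet accepted in {1 / accepted:.1e}")
```

Running this gives roughly 3.3*10^8 for 16 bits and 2.1*10^13 for 32 bits, matching the figures above, and about 9*10^22 for a hypothetical 64-bit checksum.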

Another interesting point Paxson discusses is the N_d duplicate-ACK ("dup") threshold for fast retransmit, which is currently set at 3. He shows that dropping it to 2 would yield a significant gain in retransmit opportunities (60-70%), and that the tradeoff, the ratio of good to bad dups, could be stabilized by having receivers wait W = 20 msec before generating the second dup. However, while this idea is theoretically an improvement, he notes that the size of the internet (back in 1997) made it impractical to deploy. Deploying today a change that was already infeasible in 1997 would be harder still. This leads to the question: should more research be done in areas that we know are infeasible to implement? Or do we believe there is a feasible way to vastly improve the internet?
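As a sketch of the mechanism this threshold controls, here is roughly what duplicate-ACK counting looks like on the sender side. The state dictionary and function names are my own illustration, not TCP source code:

```python
DUP_ACK_THRESHOLD = 3  # the N_d Paxson analyzes; he evaluates lowering it to 2

def retransmit(seq):
    print(f"fast retransmit of segment starting at {seq}")  # placeholder send

def on_ack(state, ack_seq):
    """Sender-side duplicate-ACK counting for fast retransmit."""
    if ack_seq == state["last_ack"]:
        state["dup_count"] += 1
        # Each duplicate ACK hints that a segment was lost; after N_d of
        # them we retransmit immediately instead of waiting for a timeout.
        if state["dup_count"] == DUP_ACK_THRESHOLD:
            retransmit(ack_seq)
    else:
        state["last_ack"], state["dup_count"] = ack_seq, 0

state = {"last_ack": -1, "dup_count": 0}
for ack in [100, 200, 200, 200, 200]:  # three duplicates of 200 trigger it
    on_ack(state, ack)
```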

Paxson's study provided researchers with a lot of solid data on end-to-end packet dynamics in the internet. I am not sure how much the internet's end-to-end behavior has changed since 1997, but it would be useful to run a similar experiment on the modern internet to evaluate the changes of the past 13-odd years. Such an experiment would help researchers attempting to improve the dynamics of the internet, as well as application developers trying to understand how the underlying protocols should affect their implementations.

Friday, September 10, 2010

So how will the Internet evolve?

Who knows. From what I have studied, it seems unlikely that the Internet will get a complete overhaul, but there are certain pressing issues that need to be addressed, such as security, so I predict that within a few decades the Internet will be very different from the one we use today.

From the papers we discussed in our networking class, it seems that architecture research is focusing on data-centric approaches (although there are some radical application-centric ideas floating around). This reflects the observation that people increasingly use the internet to find data.

People don't care where the data comes from; they just care that they get it. Although I agree this might be useful right now, it suffers from the same problem that plagued the original creation of the internet. The original designers focused solely on what they needed from an internet right then and there; they didn't look to the future and try to accommodate, say, a more data-centric network. If we follow suit and do not look ahead, we may find ourselves deploying a data-centric network just as the trend shifts away from that type of network.

However, I do understand that researchers need to work with what they have and that it is incredibly difficult to predict the future. After all, who knew that the internet would grow to permeate the entire world?

I don't have any solutions to the current problems of the internet. At first I felt that the internet is extremely useful for everything I need, so why fix what isn't broken? Studying internet architecture, however, has opened my eyes to the real struggles behind security issues, and some of the more radical papers have shown me how limiting the internet's architecture can be. I wasn't looking at the big picture of the internet before; now I better understand what people are trying to do and which areas I need to look into if I want to help brainstorm the future of the internet.

Monday, September 6, 2010

So what's everyone else doing?

In studying how to improve the architecture of the internet, one must look at what has already been done. One of the architectures we studied in class was DONA, the Data-Oriented Network Architecture (from the paper "A Data-Oriented (and Beyond) Network Architecture"). Although I didn't understand all the nuances of the architecture, the basic idea of DONA is to improve the efficiency of the internet by recognizing that internet usage is data-centric, rather than host-centric, and modeling an architecture to support that trend.

The main problem I see with the proposed architecture is the feasibility of its implementation. Although this aspect of DONA is covered in the article, I feel that a key point was not addressed: how to successfully market it to the masses. In order to successfully launch the 'Internet 2.0,' the millions (or billions) of users must be able to use the system. While DONA is not impossible to use, it is different and, as mentioned in the article, it is more difficult.

Although a data-centric internet would benefit the masses, explaining to them that they must work harder so the internet can work better may be a difficult task. It is especially difficult when the current internet already has workarounds that give users aspects of a data-centric internet without having to learn a new naming convention.
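To give a sense of what that new naming convention looks like, here is a small sketch of DONA-style self-certifying names, where a flat name P:L pairs a hash of the owner's public key with a label. The hash choice, truncation, and function names are my own illustration; real DONA would also verify a signature over the data itself:

```python
import hashlib

def make_name(principal_pubkey: bytes, label: str) -> str:
    """Build a DONA-style flat name P:L, where P hashes the principal's public key."""
    p = hashlib.sha256(principal_pubkey).hexdigest()[:16]  # truncated for readability
    return f"{p}:{label}"

def verify_name(name: str, claimed_pubkey: bytes) -> bool:
    """Self-certification: anyone can check the name against the supplied key,
    with no central naming authority involved."""
    p, _, _ = name.partition(":")
    return hashlib.sha256(claimed_pubkey).hexdigest()[:16] == p

pubkey = b"the publisher's public key bytes"
name = make_name(pubkey, "homepage")
print(name, verify_name(name, pubkey))  # the name certifies its own owner
```

The catch, and the usability problem raised above, is that these names are flat and opaque: unlike www.example.com, a hash tells a human nothing about what it points to.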

Some may argue that this is a problem for psychologists, not students of networking, but I would counter that that type of thinking is what caused the networking problems in the first place. The Internet was created for certain tasks; how users would actually use it was not even considered at the time, since the full idea of the Internet had not yet been realized.

I believe that in order to have a successful 'Internet 2.0,' not only should we improve the architecture of the Internet based on its modern usage, but we must also properly prepare for its implementation. I won't go so far as to say that the perfect internet would require no change in user behavior, but I would propose that if a change in behavior is necessary, then we as researchers should determine how best to transition the masses into a new world with a better Internet.

Wednesday, September 1, 2010

Improve the Internet? Me?

Thinking back to the first time I used the internet makes me feel kind of old. I remember chatting over ICQ ("I seek you") and downloading music from Napster (before everyone found out it was illegal). Now, I love streaming Hulu (the free version), learning fun facts from Wikipedia (even if the validity of the information is questionable), and socializing over Facebook (since my parents haven't figured out how to join it yet). Reading the article "The Design Philosophy of the DARPA Internet Protocols" forced me to reflect on the birth of the internet, its rise over the past few decades, and the impact it has had on the world.

DARPA's main goal was to "develop an effective technique for multiplexed utilization of existing interconnected networks." Essentially, DARPA had several networks it wanted to combine into an "interconnected network," or "internet." DARPA was not thinking on a worldwide scale at the time, but this vision of connecting networks over large distances was an important step toward the global Internet we see today. The main goal in creating the internet was to improve communication between different military groups throughout the country; with such a strong focus on sending messages to one another, it is no surprise that email and social networking have become so popular today.

The question of why this communication was so important, more important than the survivability or security of the internet, could have many answers; it depends on the issues facing those who set the goals, into which I unfortunately have no real insight. Perhaps a more pertinent question, then, is why study the history of the internet at all? Is it to understand the motivations of those who created it? Or is there a deeper purpose?

My networking professor would have me believe that we study the history of the internet in order to improve its future. Should we not learn from the mistakes of others, so we do not follow in their footsteps? As an undergraduate I may have eaten up my professor's words and then spat them back out on some midterm or final, never again remembering their importance once the course was over. As a graduate student, however, I am more inclined to dig a little deeper and try to determine whether that truly is a role I need to play: an internet improver. I'm not sure; my knowledge of networks is minimal, but hopefully the more I study them, the better equipped I will be to step into those shoes.

If I were to become an internet improver, I believe the best place to start, of course, is to ask how we can improve upon what has already been done. Where did DARPA go right, and in which areas did they fail? While a full analysis is not possible here, we can look at a few of the goals outlined in the aforementioned paper:

1) Connecting existing networks, which I believe was mainly in order to improve communication. With sites and applications like Google Voice and Facebook, can we be any more connected? I postulate that we can, especially given the emergence of smartphones in recent years. Since video and voice transmission over the internet is already possible, why am I paying for minutes on my cell phone? Why can't I just have an unlimited data plan and make calls and video chats over the internet on my phone or computer? I assume it is possible, but would changes need to be made to the internet architecture and protocols to make it feasible? Are there other obstacles in the way? An important question whenever suggesting changes is how those changes would affect us in the future. DARPA made the mistake of not thinking on a global scale, forsaking measures such as security in favor of connectivity. Should I, then, be thinking about networks on an interplanetary scale?

2) Survivability of the internet, a topic which at first seems pretty much taken care of. Going back to our interplanetary scale, what would happen if the connection between two planets were lost? The internet would survive, but if there were a substantial number of connections on other planets, the internet could essentially be split. What kind of satellite technology is needed to ensure that doesn't happen? Of course, just as there is a problem with thinking on too small a scale, I also believe that thinking on too large a scale could be detrimental, or at least a waste of time. Should I be worrying so much about interplanetary connections when their inception could be many lifetimes away?

3) The final goal I will discuss (although more are outlined in the paper) is the ability to support multiple types of communication services. Since the internet is so large, it isn't easy to simply introduce a new protocol for streaming video, voice, or other necessary services. Can we improve on TCP/IP or UDP? Surely we can, but from the little I've learned, replacing them outright is apparently not feasible. How then can we improve these services? Is it possible to improve the overall internet architecture by introducing new protocols at the application layer? Or will we still be bottlenecked by the limitations of the existing protocols that have cemented their position in internet usage? A sketch of the application-layer route follows below.
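On that last question: one route that has worked in practice is layering new behavior over UDP entirely at the application layer, much as streaming protocols such as RTP do. Below is a toy stop-and-wait reliability layer over a UDP socket; the 32-bit sequence-number header, timeout, and retry count are invented for illustration, and a cooperating receiver that echoes sequence numbers is assumed:

```python
import socket
import struct

def send_reliable(sock, addr, payload: bytes, seq: int, timeout=0.5, retries=5):
    """Stop-and-wait: prepend a 32-bit sequence number, resend until ACKed."""
    packet = struct.pack("!I", seq) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(4)  # receiver echoes the sequence number back
            if struct.unpack("!I", ack)[0] == seq:
                return True
        except socket.timeout:
            pass  # lost packet or lost ACK: fall through and retransmit
    return False

# Usage sketch (assumes a receiver is listening at this address):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_reliable(sock, ("127.0.0.1", 9999), b"hello", seq=1)
```

The appeal of this approach is that nothing in the network core has to change: the new "protocol" is just packet formats and rules agreed upon by the two endpoints, which is exactly why it sidesteps the deployment problem that blocks changes to TCP/IP itself.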

Hopefully as I continue to study the architecture of the internet and learn about networking I can begin to answer these questions and better understand these concepts. Who knows, maybe I can even make some small contributions that will help improve the internet...