Wednesday, September 29, 2010
In our networking class this week we started discussing transport layer protocols for the internet, namely the different flavors of TCP (such as Tahoe, Reno, New Reno, and SACK). While we'll go into more detail in later reading (which compares Reno and Vegas), we mainly looked at the differences in congestion control and performance between the variants; I am more interested in adoption rates (how successful each has been) as well as deployment strategies.
From what I've read so far, introducing new transport protocols on a large scale seems fairly difficult; however, most TCP-based protocols seem to thrive. Is it because the underlying architecture already uses TCP, so changing it slightly won't make a huge difference? Or is modifying the transport protocol easier than I think? Looking at other congestion control algorithms (like BIC, which Linux used as its default for a while), it seems that implementing one is not too difficult, but getting widespread acceptance is. The next question I would ask is: do we need new protocols? I guess that depends on how much the underlying architecture changes over time.
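To make the performance difference concrete for myself, here is a toy sketch (my own simplification, not taken from any of the papers) of how Tahoe and Reno react to packet loss: Tahoe drops its congestion window back to one segment and re-enters slow start, while Reno halves the window and continues in congestion avoidance.

    # Toy model of Tahoe vs. Reno congestion window behavior (in units of MSS).
    # This is a rough sketch for intuition only, not a faithful TCP simulator.
    def simulate(variant, rounds=20, loss_rounds={8, 14}):
        cwnd, ssthresh = 1.0, 16.0
        history = []
        for r in range(rounds):
            history.append(cwnd)
            if r in loss_rounds:                  # a loss is detected this round
                ssthresh = max(cwnd / 2, 2.0)
                cwnd = 1.0 if variant == "tahoe" else ssthresh
            elif cwnd < ssthresh:
                cwnd *= 2                         # slow start: exponential growth
            else:
                cwnd += 1                         # congestion avoidance: linear growth
        return history

    for v in ("tahoe", "reno"):
        print(v, [round(c) for c in simulate(v)])

Even this crude model shows why Reno recovers throughput faster after an isolated loss, which is part of why later variants built on Reno rather than Tahoe.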
Saturday, September 25, 2010
Measuring the Internet (and Some Other Video Game-Related Discussion)
Thinking back to the first paper we read when studying applications, "End-to-End Internet Packet Dynamics," it seemed at first like it had nothing to do with the application layer at all. Measuring the internet sounds more like a transport problem, but I have discovered it is actually an application problem, and an important one at that. Measuring the internet is essential to understanding the shifting dynamics of its usage, and similar techniques can be used to map the internet.
Our lab in class has also been focused on Internet measurement, and it has been interesting to see how such a small program, when run on a huge testbed (in this case PlanetLab), can create so much data. We're collecting almost a million data points by pinging 100 computers from 100 different computers 10 times, still small compared to the 500-odd million measurements collected by Microsoft Research to test Htrae, but bigger than anything I've ever done (especially since I've never left code running for more than 20 minutes, let alone 2 days straight).
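For anyone curious, the measurement script itself really is tiny; something along these lines (a simplified sketch with placeholder hostnames, not our actual lab code), run on every node, is enough to generate the whole dataset:

    # Minimal sketch of an RTT measurement script. Hostnames are placeholders;
    # a real PlanetLab run would ping every target from every node for days.
    import subprocess, re, time

    TARGETS = ["planetlab-node1.example.org", "planetlab-node2.example.org"]  # hypothetical

    def ping_once(host, timeout=5):
        """Return the RTT in milliseconds, or None if the ping fails."""
        try:
            out = subprocess.run(["ping", "-c", "1", host], capture_output=True,
                                 text=True, timeout=timeout).stdout
            match = re.search(r"time=([\d.]+) ms", out)
            return float(match.group(1)) if match else None
        except subprocess.TimeoutExpired:
            return None

    with open("rtt_log.csv", "a") as log:
        for host in TARGETS:
            rtt = ping_once(host)
            log.write(f"{time.time()},{host},{rtt}\n")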
On a side note, it was fun to see a few game-related papers come up during the discussion (mainly about P2P gaming). Mine was on Halo, and another paper discussed how to get more players (beyond the usual 16 or 32) into a single game. They discussed techniques such as paying more attention to players within a certain area of vision, either by proximity or through a scope, and they even factored in details such as more people focusing on a flag carrier. They were apparently able to simulate 900 players in a single game with this technology, which reminded me of the game MAG, which allows 256 players in a single game. The focus of the research was improving performance within small areas, though, and I'm not sure how well MAG performs when lots of players are grouped into one place (and I know that the graphical level of MAG isn't all that great).
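The core trick, as I understood it, is interest management: rank the other players by how much they currently matter to you and spend your bandwidth on the top few. A rough sketch of the idea (my own toy scoring, not the authors' actual algorithm):

    # Rough illustration of interest management: score other players by relevance
    # and send full-rate updates only to the highest-scoring ones.
    # The weights and fields here are made up for illustration.
    import math

    def priority(me, other):
        dist = math.dist(me["pos"], other["pos"])
        score = 1.0 / (1.0 + dist)        # closer players matter more
        if other.get("has_flag"):
            score *= 3.0                  # everyone watches the flag carrier
        if other.get("in_scope"):
            score *= 2.0                  # a scoped-in target matters even far away
        return score

    def update_set(me, others, budget=16):
        """Pick the `budget` players whose state gets sent at full rate."""
        ranked = sorted(others, key=lambda p: priority(me, p), reverse=True)
        return ranked[:budget]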
Sunday, September 19, 2010
I Always Knew Halo was Good for Something
This week in our networking class we presented student-chosen papers based on research in applications. After looking at internet measurement and P2P papers, I settled on a P2P paper from Microsoft Research that focused on improving Halo, or rather online games (and other latency-sensitive P2P systems) in general, although they did use Halo to develop their latency prediction system, Htrae (Earth backwards, home of Bizarro and his friends from the DC Comics universe).
The basic idea is that the system uses geolocation as an initial condition for a network coordinate system that uses spherical coordinates to map every Xbox onto a virtual Earth, a method known as geographic bootstrapping. The point of Htrae is to improve matchmaking so that latency between players is reduced, thereby reducing in-game lag.
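Here is my own back-of-the-envelope sketch of the geographic bootstrapping step: place each console at its geolocated latitude and longitude and use great-circle distance to seed a latency estimate, which Htrae then refines as real round-trip times are measured (that refinement step is omitted here, and the constants are rough assumptions):

    # Back-of-the-envelope sketch of geographic bootstrapping: seed a latency
    # estimate from great-circle distance between geolocated consoles.
    # Htrae then adjusts coordinates from measured RTTs; that part is omitted.
    import math

    EARTH_RADIUS_KM = 6371.0
    KM_PER_MS = 100.0   # rough propagation assumption: ~100 km of path per millisecond

    def great_circle_km(a, b):
        """Haversine distance between two (lat, lon) points given in degrees."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

    def bootstrap_rtt_ms(a, b):
        """Initial RTT guess between two geolocated consoles, before any probing."""
        return 2 * great_circle_km(a, b) / KM_PER_MS

    print(bootstrap_rtt_ms((40.25, -111.65), (35.68, 139.69)))  # roughly Provo to Tokyo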
Reducing lag in online games is important, especially in the popular first-person shooters, because lag can essentially mean you are killed before you even see the person who kills you. That, of course, would affect your view of how "fun" the game is and, since online gaming is a huge million (or even billion) dollar industry, you would simply find a new game with less lag.
Although I do not know whether Htrae is actually being used in any games available today, the recently released Halo: Reach (a prequel to the original Halo game) is the most likely candidate. Halo: Reach allows players to select certain matchmaking preferences (such as weighting skill level more heavily) at the cost of increased matchmaking time. The default setting (labeled as the fastest method) most likely incorporates Htrae, since the game was published by Microsoft Game Studios.
Although many feel video games are a waste of time, there is no denying that they have substantial influence in pushing forward research in various areas of computer science. I have mainly read about improvements in graphics and AI, so it was interesting to see video games helping to push the envelope in terms of networking research.
Wednesday, September 15, 2010
Understanding Peer-to-Peer Systems
Although I had a basic understanding of how peer-to-peer (P2P) systems worked (thanks to a little application known as BitTorrent), reading about how a P2P system is designed helped me better understand the nuances associated with creating a P2P application. In particular, the distributed lookup protocol I studied was entitled "Chord." Unlike many other networking protocol names, this wasn't an acronym for anything special.
The main problem Chord tries to solve is efficiently locating the node that stores a particular data item. Chord is impressive both in its running time, resolving lookups with O(log N) messages, where N is the number of nodes, and in its robustness, updating routing information when nodes join or leave with O(log^2 N) messages.
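To help the lookup idea sink in, I sketched a stripped-down version of it: keys and nodes are hashed onto the same identifier ring, and a query repeatedly jumps to the farthest finger that precedes the key, roughly halving the remaining distance each hop. This toy version builds its routing tables with global knowledge and skips stabilization, successor lists, and failure handling entirely.

    # Stripped-down sketch of Chord-style lookup on a 2**M identifier ring.
    # Unlike real Chord, finger tables are built with global knowledge and
    # there is no stabilization or failure handling.
    import hashlib

    M = 32
    RING = 2 ** M

    def chord_id(name):
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING

    def between(x, a, b, inclusive_right=False):
        """True if x lies in the ring interval (a, b) (or (a, b] if requested)."""
        if inclusive_right and x == b:
            return True
        if a < b:
            return a < x < b
        return x > a or x < b                     # interval wraps past zero

    class Node:
        def __init__(self, ident):
            self.id, self.successor, self.fingers = ident, None, []

        def find_successor(self, key):
            if between(key, self.id, self.successor.id, inclusive_right=True):
                return self.successor
            for f in reversed(self.fingers):      # farthest useful finger first
                if between(f.id, self.id, key):
                    return f.find_successor(key)
            return self.successor.find_successor(key)

    def successor_of(ident, nodes):
        return next((n for n in nodes if n.id >= ident), nodes[0])

    def build_ring(names):
        nodes = sorted((Node(chord_id(n)) for n in names), key=lambda n: n.id)
        for i, node in enumerate(nodes):
            node.successor = nodes[(i + 1) % len(nodes)]
            node.fingers = [successor_of((node.id + 2 ** k) % RING, nodes)
                            for k in range(M)]
        return nodes

    ring = build_ring(f"node{i}" for i in range(50))
    print(ring[0].find_successor(chord_id("some-file.mp3")).id)

Each hop roughly halves the identifier distance to the key, which is where the O(log N) lookup bound comes from.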
More importantly, Chord showed me that to effectively create a P2P application we need to look at several aspects of P2P. First, Chord implements consistent hashing and a stabilization protocol so that nodes can join and leave. Allowing users to enter or drop out at any time without disturbing the distributed network is an essential property of P2P applications; if it is not handled correctly and efficiently, the application will essentially fail.
Next, Chord is scalable, which essentially means it is a feasible protocol for the existing internet architecture. Unlike the other architectures I have studied, Chord is implemented at the application layer, which is much easier to change than the underlying architecture of the internet. Chord not only solves a problem in existing P2P protocols, but does so in a way that is actually usable, which is something I haven't seen a lot of in networking research (at least in research on internet architecture).
Overall, I found this particular paper very useful, because I feel that understanding these basic ideas of P2P networks and protocols will better prepare me for the next paper I will read for my networking class, a paper on matchmaking for online games, which I am very excited to read.
Tuesday, September 14, 2010
All About Packet Dynamics
Vern Paxson's paper "End-to-End Internet Packet Dynamics" essentially takes a stock-index view of the Internet: just as an index like the NASDAQ uses a set of companies to represent the entire market, Paxson uses measurements between a set of sites to model the entire internet. Overall, the paper serves to dispel certain misconceptions (or assumptions) about TCP connections and also serves as a basis for further research that depends on the actual behavior of the internet.
One interesting point raised in the paper is the number of corrupt packets accepted by the internet every day (estimated at about one in 3 million packets). Paxson notes, however, that switching from the 16-bit checksum to a 32-bit checksum would change that to roughly one in 2*10^13. Since this paper was written in 1997, I assume something like this has been implemented, so it would be interesting to see whether a 64-bit checksum could be (or already has been) deployed today. Would changing it essentially nullify the effect of corrupt packets being accepted on the internet altogether? Or does the growth of internet usage keep pace with the checksum?
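For reference, the 16-bit checksum in question is the ones'-complement Internet checksum used by IP, TCP, and UDP; a short sketch of how it is computed over a packet's bytes:

    # The 16-bit ones'-complement Internet checksum used by IP, TCP, and UDP.
    # With only 16 bits, a noticeable fraction of multi-bit corruptions still
    # produce the same checksum, which is why some corrupted packets slip through.
    def internet_checksum(data: bytes) -> int:
        if len(data) % 2:
            data += b"\x00"                            # pad to whole 16-bit words
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
        return ~total & 0xFFFF

    print(hex(internet_checksum(b"example tcp segment payload")))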
Another interesting point Paxson discusses is the Nd duplicate-ACK ("dups") threshold for fast retransmit, which is set at 3. He shows that dropping it to 2 would yield a significant gain in retransmit opportunities (60 - 70%), and that the tradeoff, the ratio of good to bad dups, could be kept in check by having the receiver wait W = 20 msec before generating the second dup. However, while this idea is theoretically an improvement, he notes that the size of the internet (back in 1997) made it impractical to deploy. If it was not feasible in 1997, deploying it across today's much larger internet would be even harder. This leads to the question: should more research be done into areas that we know are infeasible to implement? Or do we believe there is a feasible way to vastly improve the internet?
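As a reminder of the mechanism being tuned, fast retransmit fires once the sender has seen Nd duplicate ACKs for the same sequence number. A toy sketch of that trigger with a configurable threshold (the 20 msec receiver-side delay Paxson suggests is not modeled here):

    # Toy model of the fast-retransmit trigger: retransmit once `threshold`
    # duplicate ACKs for the same sequence number have arrived.
    # Paxson's proposed 20 ms receiver-side delay is not modeled.
    def fast_retransmit_events(acks, threshold=3):
        last_ack, dups, events = None, 0, []
        for ack in acks:
            if ack == last_ack:
                dups += 1
                if dups == threshold:
                    events.append(ack)     # would retransmit the segment at `ack`
            else:
                last_ack, dups = ack, 0
        return events

    # Only two segments were in flight beyond the lost one, so only two dups arrive.
    ack_stream = [100, 200, 300, 300, 300]
    for nd in (3, 2):
        print(f"Nd={nd}:", fast_retransmit_events(ack_stream, nd))

With Nd = 3 this sender would sit through a retransmission timeout; with Nd = 2 it recovers via fast retransmit, which is the kind of extra opportunity Paxson is counting, at the risk of spurious retransmits when packets are merely reordered.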
Paxson's study provided researchers with a lot of solid data on end-to-end packet dynamics in the internet. I am not sure how much the internet's end-to-end behavior has changed since 1997, but it would be useful to run a similar experiment on the modern internet to evaluate the changes of the past 13-odd years. Such an experiment would help researchers trying to improve the dynamics of the internet, as well as application developers trying to understand how the underlying protocols should affect their implementations.
Friday, September 10, 2010
So how will the Internet evolve?
Who knows. From what I have studied it seems unlikely that the Internet will get a complete overhaul, but there are certain pressing issues that need to be addressed, such as security, and so I predict that, within a few decades, the Internet will be very different from the one that we use today.
From the papers we discussed in our networking class it seems as though the trend of architecture research is focusing on data-centric approaches (although there are some radical application-centric ideas floating around). This is based on the idea that the current trend of internet use is to find data.
People don't care about where the data comes from, they just care that they get the data. Although I agree that this might be useful right now, it suffers from the same problems that plagued the original creation of the internet. The original designers were focused solely on what they needed for an internet right then and there. They didn't look forward to the future to try and incorporate ideas for a more data-centric network. If we follow suit and do not look a little ahead to the future, we may be implementing a data-centric network when the trend is shifting away from that type of network.
However, I do understand that researchers need to work with what they have and that it is incredibly difficult to predict the future. After all, who knew that the internet would grow to permeate the entire world?
I don't have any solutions to the current problems of the internet and at first I felt that the internet is extremely useful for everything I need, so why fix what isn't broke? Studying internet architecture has opened my eyes to the real struggles behind security issues and, from some radical papers, I have come to understand how limiting the internet architecture can be at times. I wasn't looking at the big picture of the internet before and now I better understand what people are trying to do and what kind of areas I need to look into if I want to help brainstorm the future of the internet.
Thursday, September 9, 2010
Why have the Internet at all?
No, I am not calling for the abolition of the Internet. I am instead trying to make a witty reference to a radical paper I read called "The End of Internet Architecture." The author, Timothy Roscoe, puts forward the rather extreme view that the current Internet architecture is not good enough for the functionality we need and that no other architecture will fix the issues the current Internet is experiencing. Instead, his idea is to virtualize the Internet, with applications taking on every role that the current Internet architecture has, effectively removing Internet architecture altogether.
Though Roscoe's claims may seem ludicrous at first, there is some merit to his line of thought. For example, he argues that removing the architecture of the Internet essentially opens up its functionality. It is no secret that many Internet application developers spend a large amount of time creating workarounds to make their applications function with the current architecture of the Internet, rather than working seamlessly with it. Roscoe proposes that a world of creativity could be opened up if applications did not need workarounds and weren't tied down by protocols.
Overall, Roscoe isn't really trying to create a new architecture; the paper reads more like a call to arms to push networking research into the field of systems research. Roscoe doesn't delve very deep into the idea of mixing systems and networking research, but he does give a few ideas of areas that other researchers could focus on in order to push forward. Although I don't entirely agree with Roscoe's approach, I applaud his ability to think outside the box in trying to find new avenues of networking research to explore. Who knows, maybe some of his ideas will come into practice as the Internet evolves over the next few decades.
Monday, September 6, 2010
So what's everyone else doing?
In studying how to improve the architecture of the internet, one must look at what has already been done. One of the architectures we studied in class was DONA, which stands for "A Data-Oriented (and Beyond) Network Architecture." Although I didn't understand all the nuances of the architecture, the basic idea of DONA is to improve the efficiency of the internet by understanding that internet usage is data-centric, rather than host-centric, and modeling an architecture to support this trend.
The main problem I see with the proposed architecture is the feasibility of its implementation. Although this aspect of DONA is covered in the article, I feel that a key point was not addressed: how to successfully market it to the masses. In order to successfully launch the 'Internet 2.0,' the millions (or billions) of users must be able to use the system. While DONA is not impossible to use, it is different and, as mentioned in the article, it is more difficult.
Although a data-centric internet would benefit the masses, explaining to them that they must work harder so that the internet can work better may be a difficult task, especially when we consider that the current internet has workarounds in place that allow them to essentially have aspects of a data-centric internet, without having to learn a new naming convention.
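To be concrete about what that new naming convention looks like: DONA replaces hierarchical, human-readable names with flat, self-certifying names of the form P:L, where P is a cryptographic hash of the owning principal's public key and L is a label the owner chooses. A rough sketch (the key material here is a stand-in; a real system would hash actual public-key bytes):

    # Rough sketch of a DONA-style flat, self-certifying name of the form P:L.
    # Because P is the hash of the owner's public key, a client that receives
    # the data plus the key and a signature can verify the name without
    # trusting whichever host served it. The "key" below is a stand-in.
    import hashlib

    def dona_name(principal_public_key: bytes, label: str) -> str:
        p = hashlib.sha1(principal_public_key).hexdigest()
        return f"{p}:{label}"

    fake_public_key = b"...stand-in for the principal's public key bytes..."
    print(dona_name(fake_public_key, "weekly-report.pdf"))

The usability cost is exactly the one I worry about above: a 40-character hex string is nothing like a familiar domain name, so some directory or search layer would have to sit on top before ordinary users could live with it.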
Some may argue that this is a problem for psychologists, not students of networking, but I would counter that that type of thinking is what caused the networking problems in the first place. The Internet was created for certain tasks, while research into how users would use the Internet was not even considered at the time, since the entire idea of the Internet had not been completely realized.
I believe that in order to have a successful 'Internet 2.0,' we should not only improve the architecture of the Internet based on how it is used in modern times, but also properly prepare for its implementation. I won't go so far as to say that the perfect internet would require no change in user behavior, but I would propose that if a change in behavior is necessary, then we as researchers should determine how we can help transition the masses into a new world with a better Internet.
Wednesday, September 1, 2010
Improve the Internet? Me?
Thinking back to the first time I used the internet makes me feel kind of old; I remember chatting over ICQ ("I seek you") and downloading music from Napster (before everyone found out it was illegal). Now, I love streaming Hulu (the free version), learning fun facts from Wikipedia (even if the validity of the information is questionable), and socializing over Facebook (since my parents haven't figured out how to join it yet). Reading the article "The Design Philosophy of the DARPA Internet Protocols" forced me to reflect on the birth of the internet, its rise over the past few decades, and the impact it has had on the world.
DARPA's main goal was to "develop an effective technique for multiplexed utilization of existing interconnected networks." Essentially, DARPA had several networks it wanted to combine into an "interconnected network," or "internet." DARPA was not thinking on a worldwide scale at the time, but this vision of connecting networks over large distances was an important step toward the World Wide Web we see today. The main goal in creating the internet was to improve communication channels between different military groups throughout the country; with such a strong focus on sending messages to one another, it is no surprise that email and social networking have become so popular today.
The question of why this communication was so important, more important than the survivability or security of the internet, could have many answers, and it truly depends on the issues facing those in charge of developing the goals, into which I unfortunately have no real insight. Perhaps a more pertinent question, then, is why we study the history of the internet at all. Is it to understand the motivations of those who created it? Or is there a deeper purpose?
My networking professor would have me believe that we study the history of the internet in order to improve the future of the internet. Should we not learn from the mistakes of others, so we do not follow in their footsteps? As an undergraduate I may have eaten up my professor's words and then spat them back out during some mid-term or final, never again remembering their importance once the course was over. As a graduate student, however, I am more inclined to dig a little deeper and try and determine if that truly is a role I need to play: an internet improver. I'm not sure, my knowledge of networks is minimal, but hopefully the more I study them, the better equipped I will be to step into those shoes.
If I were to become an internet improver, I believe that the best place to start, of course, is how we can improve upon what has already been done. Where did DARPA go right and in which areas did they fail? While a full analysis is not possible, we can look at a few of the goals outlined in the aforementioned paper:
1) Connecting existing networks, which I believe was mainly intended to improve communication. With sites and applications like Google Voice and Facebook, can we be any more connected? I postulate that we can, especially with the emergence of smartphones in recent years. Since video and voice transmission over the internet is already possible, why am I paying for minutes on my cell phone? Why can't I just have an unlimited data plan and make calls and video chats over the internet on my phone or computer? I assume it is possible, but would changes need to be made to the internet's architecture and protocols to make it feasible? Are there other obstacles in the way? An important question whenever suggesting changes is how they would affect us in the future. DARPA made the mistake of not thinking on a global scale, forsaking measures such as security in favor of connectivity. Should I, then, be thinking about networks on an interplanetary scale?
2) Survivability of the internet, a topic which at first seems pretty much taken care of. Going back to our interplanetary scale, what would happen if the connection between two planets were lost? The internet would survive, but if there were a substantial number of connections on other planets, then the internet could essentially be split. What kind of satellite technology is needed to ensure that doesn't happen? Of course, just as there is a problem with thinking on too small a scale, I also believe that thinking on too large a scale can be detrimental, or at least a waste of time. Should I be worrying so much about interplanetary connections when they could be many lifetimes away?
3) The final goal I will discuss (although more goals are outlined in the paper) is the ability to support multiple types of communication services. Since the internet is so large, it isn't easy to simply introduce a new protocol for streaming video, voice, or other services. Can we improve on TCP/IP or UDP? Surely we can, but, from the little I've learned, deploying a replacement is apparently not feasible. How then can we improve these services? Is it possible to improve the overall internet architecture by introducing new protocols at the application layer? Or will we still be bottlenecked by the limitations of the existing protocols that have cemented their position in internet usage?
Hopefully as I continue to study the architecture of the internet and learn about networking I can begin to answer these questions and better understand these concepts. Who knows, maybe I can even make some small contributions that will help improve the internet...