For our networking research group and our upcoming lectures in our networking class, we are looking at wireless networks. The only work I've done with wireless networks in my life is setting up a wireless router in my home so that I can link several laptops, smartphones, and a wireless printer on the same network. However, I am excited to read more about wireless networks because, as I may have mentioned before, I feel that wireless networking is the most promising field in networking research at the moment.
For research we are looking into Internet measurements similar to the ones we previously discussed in class, but this time for wireless networks. It is interesting to see how differently wireless and wired networks behave. For example, in wireless networks we need to consider the contention between various routers or end systems, whereas in wired networks we are more focused on throughput.
Since wireless is still a fairly new area of research, I believe there are plenty of useful contributions in the area of wireless networking that will be beneficial to the field. Some ideas for our specific area of research would include tracking wireless/mobile networks on the battlefield or tracking criminals using wireless devices, but other approaches could be useful for different situations.
A blog about networks by a Spanish-speaking Mormon Indian born in Fiji and raised in Australia, married to a Japanese Brazilian, pursuing a PhD in Computer Science from BYU after graduating from BYU-Hawaii
Monday, November 8, 2010
The Principle Behind Research
Since we didn't have class this past Thursday, I thought I would discuss an idea that has been consistent throughout my blog for this week's post. I believe I may have been overly cynical about research in networking up to this point. After discussing this with a co-worker, I better understand the motivations behind the research I have been reading about.
Firstly, my co-worker brought to my attention that all research, not just research focused on networking, has projects that are never brought to fruition. In fact, the majority of research in general seems to be done in the hope that it will be useful to someone at some time. In many cases, substantial research is done by building upon the work of others, whose individual contributions may not have made the same impact.
I will try to be less cynical towards networking research; after all, in the near future the Internet may need an incredible overhaul, and all the current research will be very useful indeed...
Friday, October 22, 2010
Network Neutrality
In an interesting turn of events, we had time in our Networking class to discuss Network Neutrality. I won't pretend to be completely versed in the subject, but I understand the basic argument. Moreover, I can understand both sides of the argument.
As a big fan of streaming video (as I believe I mentioned earlier) due to Hulu and Netflix and an avid online gamer, I don't like the idea of having my packets discriminated against. However, if that is a company's business model, then I believe it shouldn't be regulated by the government just to suit my wants.
I am instead in favor of the common carrier principle, in which everyone has a choice of which ISP they connect to, so that if one ISP decides to give my packets lower priority because I'm streaming video or playing games, I can switch ISPs.
I feel that this would essentially settle the debate on Network Neutrality, since it allows ISPs to do whatever they want and pay the consequences for their actions by losing customers, just like any business should. It also gives ordinary users the opportunity to pick an ISP that best fits their needs, stimulating competition and preventing monopolies.
Thursday, October 14, 2010
The Problems with Routing (and Networking Research)
Today in class we read about many of the current problems with BGP routing. I was astounded to discover that a significant portion of BGP prefixes (around 25%) continuously flap and can take hours to converge to the correct route. Furthermore, the authors claim a 400-fold reduction in churn rate when using their protocol, the Hybrid Link-state Path-vector protocol (HLP), which seems to me reason enough to adopt it, yet we still use BGP.
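The flapping the authors measured is also what route-flap damping tries to contain in deployed BGP routers. As a rough illustration (my own sketch, not from the paper), here is the damping computation using the commonly cited defaults from RFC 2439-style implementations: each flap adds a fixed penalty that decays exponentially, and a route is suppressed while its penalty sits above a threshold.

```python
# Illustrative sketch of BGP route-flap damping. The constants below are
# the commonly cited defaults (penalty 1000 per flap, 15-minute half-life,
# suppress at 2000, reuse at 750); real routers make these configurable.
import math

PENALTY_PER_FLAP = 1000
HALF_LIFE_S = 900        # penalty halves every 15 minutes
SUPPRESS_LIMIT = 2000
REUSE_LIMIT = 750

def penalty_after(flap_times, now):
    """Total decayed penalty at time `now`, given past flap timestamps (s)."""
    decay = lambda age: math.pow(0.5, age / HALF_LIFE_S)
    return sum(PENALTY_PER_FLAP * decay(now - t) for t in flap_times if t <= now)

def suppressed(flap_times, now):
    """A route is suppressed while its penalty exceeds the suppress limit."""
    return penalty_after(flap_times, now) >= SUPPRESS_LIMIT

# A prefix that flaps three times in two minutes gets suppressed shortly
# afterwards, and only becomes usable again once the penalty decays below
# the reuse threshold, roughly half an hour later.
print(suppressed([0, 60, 120], 121))
print(suppressed([0, 60, 120], 2000))
```

The exponential decay is why a prefix that keeps flapping stays suppressed indefinitely: the penalty never gets a chance to fall back under the reuse limit.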
This makes me feel a little discouraged when faced with the prospect of finding an area of networking research that could eventually be useful enough to be implemented in real networks. Maybe I am thinking on too large a scale; I am sure there are many aspects of LANs, enterprise networks, etc., that could be modified and updated easily, but since I don't plan on being a system administrator, most of the research I do will be geared toward improving the Internet. Since the Internet is so large, I understand the difficulties in implementing new architectures and protocols, but when helpful protocols like HLP, which could make a significant impact on the Internet, are rejected, there seems to be little hope for any research idea I could come up with.
On a more positive note, I am sure that HLP has had a significant impact on improvements to BGP in the last few years, and there are other avenues of networking research that we have not yet discussed in class that could work better for prospective research ideas, such as wireless networks. So I am looking forward to covering that (as well as our sure-to-be-interesting discussion on net neutrality next lesson).
Wednesday, September 29, 2010
TCP
In our networking class this week we started discussing transport layer protocols for the Internet, namely the different flavors of TCP (such as Tahoe, Reno, New Reno, and SACK). While we'll go into more detail in later reading (which compares Reno and Vegas), we were mainly looking at differences in congestion control and performance, but I am more interested in looking at adoption rates (how successful each has been) as well as deployment strategies.
From what I've read so far, introducing new transport protocols on a large scale seems fairly difficult; however, most TCP-based protocols seem to thrive. Is it because the underlying architecture already uses TCP, so changing it slightly doesn't make a huge difference? Or is modifying the transport protocol easier than I think? Looking at other transport protocols (like BIC, which was the default in Linux for a while), it seems that implementing a transport protocol is not too difficult, but getting widespread acceptance is. The next question I would ask is: do we need new protocols? I guess that depends on how much the underlying architecture changes over time.
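To make the differences among these TCP flavors concrete, here is a toy model (my own sketch, not from the reading) of how Tahoe and Reno react to a single loss: both halve the slow-start threshold, but Tahoe collapses the congestion window back to one segment while Reno's fast recovery resumes from the halved window.

```python
# Toy model of TCP congestion-window evolution, illustrating how Tahoe
# and Reno react differently to one loss event. This is didactic only:
# real TCP works in bytes and RTTs and detects loss via duplicate ACKs
# or timeouts, which this sketch abstracts into a fixed "loss round".

def simulate(variant, rounds=20, loss_at=10, ssthresh=64):
    cwnd = 1.0
    history = []
    for r in range(rounds):
        history.append(cwnd)
        if r == loss_at:
            ssthresh = max(cwnd / 2, 2)   # both variants halve ssthresh
            if variant == "tahoe":
                cwnd = 1.0                # Tahoe: restart slow start
            else:
                cwnd = ssthresh           # Reno: fast recovery from cwnd/2
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential growth
        else:
            cwnd += 1                     # congestion avoidance: +1 per RTT
    return history

tahoe = simulate("tahoe")
reno = simulate("reno")
print("window in the round after the loss:", tahoe[11], "vs", reno[11])
```

Plotting the two histories gives the familiar sawtooth pictures from the reading; the gap in the round after the loss is exactly the throughput argument for Reno's fast recovery.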
Saturday, September 25, 2010
Measuring the Internet (and Some Other Video Game-Related Discussion)
Thinking back to the first paper we read when studying applications, a paper entitled "End-to-End Internet Packet Dynamics," it seemed like it had nothing to do with the application layer at all. Measuring the internet sounds more like a transport problem, but I have discovered it is actually an application problem, and an important one at that. Measuring the internet is essential to understanding the shifting dynamics of its usage, not to mention that similar techniques can be used in mapping the internet.
Our lab in class has also been focused on the idea of Internet measurement, and it has been interesting to see how such a small program, when run on a huge testbed (in this case PlanetLab), can create so much data. We're collecting around a hundred thousand data points by pinging 100 computers from 100 different computers 10 times each, still small compared to the 500-odd million collected by Microsoft Research to test Htrae, but bigger than anything I've ever done (especially since I've never left code running for more than 20 minutes, let alone 2 days straight).
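The core of that lab program really is small. Here is a minimal sketch of the kind of ping-based measurement involved; the target hostnames are placeholders, since the real list would come from the nodes assigned to the PlanetLab slice.

```python
# Minimal all-to-some latency measurement sketch: ping each target several
# times via the system `ping` utility and summarize the round-trip times.
# The TARGETS below are hypothetical placeholders, not real hosts.
import re
import statistics
import subprocess

TARGETS = ["planetlab1.example.org", "planetlab2.example.org"]
PINGS_PER_TARGET = 10

def parse_rtts(ping_output):
    """Extract RTTs in ms from `ping` output lines like '... time=23.4 ms'."""
    return [float(m) for m in re.findall(r"time=([\d.]+)", ping_output)]

def ping_rtts(host, count=PINGS_PER_TARGET):
    """Ping `host` `count` times and return the observed RTTs (ms)."""
    proc = subprocess.run(["ping", "-c", str(count), host],
                          capture_output=True, text=True)
    return parse_rtts(proc.stdout)

def measure(targets):
    """Collect min/median/max RTT for every reachable target."""
    results = {}
    for host in targets:
        rtts = ping_rtts(host)
        if rtts:  # unreachable hosts simply produce no samples
            results[host] = {"min": min(rtts),
                             "median": statistics.median(rtts),
                             "max": max(rtts)}
    return results
```

Running `measure(...)` on each of the 100 source machines and pooling the per-host statistics yields the all-pairs latency matrix the lab is after; the data volume comes entirely from the cross product of sources, targets, and probes.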
On a side note, it was fun to see a few game-related papers come up during the discussion (mainly discussing P2P gaming): mine, on Halo, and another paper discussing how to get more players (beyond the usual 16 or 32) into a single game. The latter discussed ideas such as devoting more attention to players within a certain area of vision, either by proximity or through a scope, and even factored in details such as more people focusing on a flag carrier. The authors were apparently able to simulate 900 players in a single game with this technology, which reminded me of the game MAG, which allows 256 players in a single game. However, the focus of the research was to improve performance for small areas, and I'm not sure how well MAG performs when lots of players are grouped into one area (and I know that the graphical level of MAG isn't all that great).
Friday, September 10, 2010
So how will the Internet evolve?
Who knows. From what I have studied it seems unlikely that the Internet will get a complete overhaul, but there are certain pressing issues that need to be addressed, such as security, and so I predict that, within a few decades, the Internet will be very different from the one that we use today.
From the papers we discussed in our networking class it seems as though the trend of architecture research is focusing on data-centric approaches (although there are some radical application-centric ideas floating around). This is based on the idea that the current trend of internet use is to find data.
People don't care about where the data comes from, they just care that they get the data. Although I agree that this might be useful right now, it suffers from the same problems that plagued the original creation of the internet. The original designers were focused solely on what they needed for an internet right then and there. They didn't look forward to the future to try and incorporate ideas for a more data-centric network. If we follow suit and do not look a little ahead to the future, we may be implementing a data-centric network when the trend is shifting away from that type of network.
However, I do understand that researchers need to work with what they have and that it is incredibly difficult to predict the future. After all, who knew that the internet would grow to permeate the entire world?
I don't have any solutions to the current problems of the internet and at first I felt that the internet is extremely useful for everything I need, so why fix what isn't broke? Studying internet architecture has opened my eyes to the real struggles behind security issues and, from some radical papers, I have come to understand how limiting the internet architecture can be at times. I wasn't looking at the big picture of the internet before and now I better understand what people are trying to do and what kind of areas I need to look into if I want to help brainstorm the future of the internet.
Thursday, September 9, 2010
Why have the Internet at all?
No, I am not calling for the abolition of the Internet. I am instead trying to make a witty reference to a radical paper I read called "The End of Internet Architecture." The author, Timothy Roscoe, essentially puts forward the rather extreme view that the current Internet architecture is not good enough for the functionality we need and that no other architecture will fix the issues the current Internet is experiencing. Instead, his idea is to virtualize the Internet, with applications taking on every role that the current Internet architecture has, effectively removing the Internet architecture altogether.
Though Roscoe's claims may seem ludicrous at first, there are some merits to his line of thought. For example, he proposes that removing the architecture of the Internet essentially opens up the functionality of the internet. It is no secret that many Internet application developers have to spend a large amount of time creating workarounds so their applications function with the current architecture of the Internet, rather than working seamlessly with it. Roscoe proposes that a world of creativity could be opened up if applications did not need workarounds, or weren't tied down by protocols.
Overall, Roscoe isn't really trying to create a new architecture; the paper reads more like a call to arms to push networking research into the field of systems research. Roscoe doesn't delve very deep into the idea of mixing systems and networking research, but he does give a few ideas of areas that other researchers could focus on in order to push forward. Although I don't entirely agree with Roscoe's approach, I applaud his ability to think outside the box in trying to find new avenues of networking research to explore. Who knows, maybe some of his ideas will come into practice as the Internet evolves over the next few decades.
Monday, September 6, 2010
So what's everyone else doing?
In studying how to improve the architecture of the internet, one must look at what has already been done. One of the architectures we studied in class was DONA, the Data-Oriented Network Architecture, proposed in the paper "A Data-Oriented (and Beyond) Network Architecture." Although I didn't understand all the nuances of the architecture, the basic idea of DONA is to improve the efficiency of the internet by recognizing that internet usage is data-centric, rather than host-centric, and modeling an architecture to support this trend.
The main problem I see with the proposed architecture is the feasibility of its implementation. Although this aspect of DONA is covered in the article, I feel that a key point was not addressed: how to successfully market it to the masses. In order to successfully launch the 'Internet 2.0,' the millions (or billions) of users must be able to use the system. While DONA is not impossible to use, it is different and, as mentioned in the article, it is more difficult.
Although a data-centric internet would benefit the masses, explaining to them that they must work harder so that the internet can work better may be a difficult task, especially when we consider that the current internet has workarounds in place that allow them to essentially have aspects of a data-centric internet, without having to learn a new naming convention.
Some may argue that this is a problem for psychologists, not students of networking, but I would counter that that type of thinking is what caused the networking problems in the first place. The Internet was created for certain tasks, while research into how users would use the Internet was not even considered at the time, since the entire idea of the Internet had not been completely realized.
I believe that in order to have a successful 'Internet 2.0,' not only should we improve the architecture of the Internet based on its modern usage, but we must also properly prepare for its implementation. I won't go so far as to say that the perfect internet would not require users to change their behavior, but I would propose that if a change in behavior is necessary, then we as researchers should determine how we can help transition the masses into a new world with a better Internet.
Wednesday, September 1, 2010
Improve the Internet? Me?
Thinking back to the first time I used the internet makes me feel kind of old; I remember chatting over ICQ (I seek you) and downloading music from Napster (before everyone found out it was illegal). Now, I love streaming Hulu (the free version), learning fun facts from Wikipedia (even if the validity of the information is questionable), and socializing over Facebook (since my parents haven't figured out how to join it yet). Reading through the article "The Design Philosophy of the DARPA Internet Protocols" forced me to reflect on the birth of the internet, its rise over the past few decades, and the impact it has had on the world.
DARPA's main goal was to "develop an effective technique for multiplexed utilization of existing interconnected networks." Essentially, DARPA had several networks they wanted to combine to make an "interconnected network" or "internet." DARPA was not thinking on a worldwide scale at the time, but this vision of connecting networks over large distances was an important step towards the establishment of the World Wide Web we see today. The main goal in the creation of the internet was to improve communication channels between different military groups throughout the country; with such a strong focus on sending messages to one another, it is no surprise that email and social networking have become so popular today.
The question of why this communication was so important, more important than the survivability or security of the internet, could have many answers and truly depends on the issues facing those in charge of developing the goals, which I unfortunately have no real insight into. Perhaps a more pertinent question, then, is why study the history of the internet at all? Is it to understand the motivations of those who created it? Or is there a deeper purpose?
My networking professor would have me believe that we study the history of the internet in order to improve the future of the internet. Should we not learn from the mistakes of others, so we do not follow in their footsteps? As an undergraduate I may have eaten up my professor's words and then spat them back out during some mid-term or final, never again remembering their importance once the course was over. As a graduate student, however, I am more inclined to dig a little deeper and try and determine if that truly is a role I need to play: an internet improver. I'm not sure, my knowledge of networks is minimal, but hopefully the more I study them, the better equipped I will be to step into those shoes.
If I were to become an internet improver, I believe that the best place to start, of course, is how we can improve upon what has already been done. Where did DARPA go right and in which areas did they fail? While a full analysis is not possible, we can look at a few of the goals outlined in the aforementioned paper:
1) Connecting existing networks, which I believe was mainly in order to improve communication. With sites and applications like Google Voice and Facebook, can we be any more connected? I postulate that we can, especially due to the emergence of smartphones in recent years. Since video and voice transmission through the internet is already possible, why am I paying for minutes on my cell phone? Why can't I just have an unlimited data plan and then make calls and video chats via the internet on cell phones or computers? I assume it is possible, but would changes need to be made to the internet architecture and protocols to make it feasible? Are there other obstacles stopping this from being possible? An important question when suggesting changes is how they would affect us in the future. DARPA made the mistake of not thinking on a global scale, forsaking such measures as security for connectivity. Should I, then, be thinking about networks on an interplanetary scale?
2) Survivability of the internet, a topic which at first seems pretty much taken care of. Going back to our interplanetary scale, what would happen if the connections between two planets were lost? The internet would survive, but if there were a substantial number of connections on other planets, then the internet could essentially be split. What kind of satellite technology is needed to ensure that doesn't happen? Of course, just as there is a problem with thinking on too small a scale, I also believe that thinking on too large a scale could be detrimental, or at least a waste of time. Should I be worrying so much about the problem of interplanetary connections when their realization could be many lifetimes away?
3) The final goal I will discuss (although there are more goals outlined in the paper) is the ability to support multiple types of communication services. Since the internet is so large, it isn't easy to simply introduce a new protocol for streaming video or voice or other necessary services. Can we improve on TCP/IP or UDP? Surely we can, but, from the little that I've learned, apparently it is not feasible. How then can we improve these services? Is it possible to improve the overall internet architecture by introducing new protocols in the application layer? Or will we still be bottlenecked by the limitations of the existing protocols that have cemented their position in internet usage?
Hopefully as I continue to study the architecture of the internet and learn about networking I can begin to answer these questions and better understand these concepts. Who knows, maybe I can even make some small contributions that will help improve the internet...