Vern Paxson's paper "End-to-End Internet Packet Dynamics" essentially takes a NASDAQ view of the Internet: just as NASDAQ uses a small group of companies to represent the entire stock market, Paxson uses a handful of sites to model the entire Internet. Overall, the paper serves to dispel certain misconceptions (or assumptions) about TCP connections, and it also provides a basis for further research that depends on the actual measured performance of the Internet.
One interesting idea raised in the paper is the rate at which corrupt packets are accepted into the Internet every day (estimated at one in 3 million). Paxson notes, however, that switching from the 16-bit checksum to a 32-bit checksum would change that figure to one in 2*10^13. Since the paper was written in 1997, I assume this has since been implemented, so it would be interesting to see whether a 64-bit checksum could be (or already has been) adopted today. Would widening the checksum essentially nullify the problem of corrupt packets being accepted altogether? Or does the growth of Internet traffic keep pace with the checksum?
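To see why each extra checksum bit matters so much, here is a back-of-the-envelope sketch (my own arithmetic, not figures from the paper): if some fraction of packets arrive damaged and an n-bit checksum misses a damaged packet with probability roughly 2^-n, every additional bit halves the acceptance rate. The corruption rate below is a hypothetical value chosen only for illustration.

```python
def accepted_corrupt_rate(corruption_rate: float, checksum_bits: int) -> float:
    """Probability that a random packet is both corrupt and slips past
    an n-bit checksum, assuming corruption flips bits uniformly so a
    damaged packet passes with probability ~2**-n."""
    return corruption_rate * 2 ** -checksum_bits

# Hypothetical corruption rate of 1 in 5,000 packets (illustrative only).
p = 1 / 5000
for bits in (16, 32, 64):
    rate = accepted_corrupt_rate(p, bits)
    print(f"{bits}-bit checksum: ~1 corrupt packet accepted per {1/rate:.3g} packets")
```

Under these toy numbers, going from 16 to 32 bits buys a factor of 65,536, which is why the improvement Paxson describes is so dramatic even though the checksum only doubles in size.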
Another interesting point Paxson discusses is the duplicate-ACK threshold N_d, which is set at 3 in TCP's fast retransmit mechanism. He shows that dropping it to 2 would yield a significant gain in retransmission opportunities (60-70%), and that the tradeoff, the ratio of good to bad "dups," could be kept stable by having the receiver wait W = 20 msec before generating the second "dup" (sketched below). However, while this idea is a theoretical improvement, he notes that the size of the Internet (even back in 1997) made it impractical to deploy, and a change that was infeasible in 1997 would be even harder on today's far larger Internet. This leads to the question: should more research be done in areas that we know are infeasible to deploy? Or do we believe there is a feasible way to vastly improve the Internet?
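To make the mechanism concrete, here is a minimal sketch of the receiver-side tweak as I understand it; the class and method names are my own illustration, not code from the paper. The idea is to lower N_d to 2 but hold the second duplicate ACK for W = 20 msec, so that mere reordering (the missing segment arriving a moment later) does not trigger a spurious fast retransmit at the sender.

```python
import time

W = 0.020  # wait before releasing the second dup ACK, in seconds (Paxson's W)

class Receiver:
    """Hypothetical simplified receiver; tracks only the next expected
    sequence number and how many out-of-order segments have arrived."""

    def __init__(self):
        self.expected_seq = 0
        self.dup_count = 0

    def on_segment(self, seq: int, send_ack) -> None:
        if seq == self.expected_seq:
            # In-order segment: advance and reset the duplicate counter.
            self.expected_seq += 1
            self.dup_count = 0
            send_ack(self.expected_seq)
        else:
            # Out-of-order segment: each one normally elicits a duplicate ACK.
            self.dup_count += 1
            if self.dup_count == 2:
                # Hold the second dup for W. A real implementation would use
                # a cancellable timer and suppress the dup entirely if the
                # missing segment arrives within W; sleep() is a stand-in.
                time.sleep(W)
            send_ack(self.expected_seq)
```

With the sender's threshold lowered to two dups, this delay is what filters the "bad" dups caused by reordering from the "good" dups caused by genuine loss.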
Paxson's study provided researchers with a great deal of solid data on end-to-end packet dynamics in the Internet. I am not sure how much the Internet's end-to-end behavior has changed since 1997, but it would be useful to repeat a similar experiment on the modern Internet to evaluate the changes of the past 13 or so years. Such an experiment would benefit researchers attempting to improve the dynamics of the Internet, as well as application developers trying to understand how the underlying protocols should shape their implementations.