In 2002, Rice University’s Sitaram Iyer and Peter Druschel, along with Microsoft Research’s Antony Rowstron, presented a paper on Squirrel, a decentralized, peer-to-peer alternative to web caching.
A P2P Internet
With Squirrel, instead of relying on the centralized web caches in use today, web browsers on desktop machines could share their local caches to form a more efficient and scalable web cache. This decentralized model would reduce the need for dedicated hardware and the administrative costs that come with it.
The research team also evaluated Squirrel’s decentralized web caching algorithms and found that its performance was comparable to that of a centralized web cache in terms of hit ratio, bandwidth usage, and latency. Just as importantly, it retained the benefits of decentralization: it was scalable, self-organizing, and resilient to node failures.
Problems with Existing Web Caching
While web caching is a crucial and widely deployed technique, web caches typically run on dedicated machines at corporate networks and Internet Service Providers. The traditional approach therefore relies on dedicated hardware, which is expensive and usually incurs ongoing administrative costs.
As the number of users grows, the hardware must be upgraded, which creates scalability problems. Furthermore, a dedicated web cache server often represents a single point of failure: if it goes down, every user in the network can lose access to cached web content.
Squirrel’s Decentralized Web Caching
Squirrel facilitates the sharing of web objects among client nodes. Each node still maintains a local cache of web objects, but Squirrel lets it export that cache to the other nodes in the network. Together, the nodes that browse and cache websites form one large shared virtual web cache.
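To make that flow concrete, here is a minimal sketch of how a client could check its own cache first and fall back to the shared virtual cache before contacting the origin web server. The class and method names are illustrative assumptions for this article, not taken from the Squirrel implementation.

```python
# Minimal sketch of the Squirrel idea: keep a local cache, but on a local miss
# ask the shared virtual cache (the other peers) before going to the origin
# server. Names here are illustrative, not from the paper's implementation.

class SquirrelClient:
    def __init__(self, local_cache, peer_cache, origin):
        self.local_cache = local_cache    # dict-like: url -> content
        self.peer_cache = peer_cache      # shared virtual cache backed by other nodes
        self.origin = origin              # callable: url -> content from the web server

    def get(self, url):
        # 1. Try the browser's own cache first.
        if url in self.local_cache:
            return self.local_cache[url]
        # 2. Otherwise ask the peer responsible for this URL.
        content = self.peer_cache.lookup(url)
        if content is None:
            # 3. On a shared-cache miss, fetch from the origin server and
            #    make the object available to the other peers.
            content = self.origin(url)
            self.peer_cache.store(url, content)
        # Keep a local copy either way.
        self.local_cache[url] = content
        return content
```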
A significant advantage of Squirrel’s decentralized peer-to-peer design is that it pools resources from ordinary desktop machines yet achieves the functionality and performance of a dedicated web cache without additional hardware.
As more client nodes join, the pool of shared resources grows with them, so the peer-to-peer model allows Squirrel to scale quickly and automatically.
To make this work, Squirrel uses Pastry, a self-organizing, peer-to-peer routing substrate, as its object location service: Pastry is responsible for identifying and routing requests to the nodes that cache copies of a requested object. Unlike dedicated web cache servers, Squirrel requires no additional administration and is also more resilient to node failures.
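For illustration only, the sketch below shows how a Pastry-style object location service could map a URL to the peer responsible for it: hash the URL into the same identifier space as the node IDs and pick the numerically closest node. Real Pastry reaches that node in O(log N) routing hops rather than by scanning a node list, and the node names and helper functions here are hypothetical simplifications.

```python
import hashlib

# Simplified mapping of a URL to its "home" peer: hash the URL into the node
# ID space and pick the numerically closest node ID. This linear scan only
# illustrates the mapping and ignores wraparound in the circular ID space;
# Pastry itself routes toward that node hop by hop.

def object_id(url):
    # SHA-1 yields 160 bits; keep the top 128 to match Pastry's 128-bit node IDs.
    return int.from_bytes(hashlib.sha1(url.encode()).digest(), "big") >> 32

def home_node(url, node_ids):
    # The node whose ID is numerically closest to the object ID serves the
    # cached copy (or fetches it) on behalf of the other peers.
    oid = object_id(url)
    return min(node_ids, key=lambda nid: abs(nid - oid))

# Hypothetical example: derive toy node IDs from machine names, then look up a URL.
nodes = [object_id(name) for name in ("alice-pc", "bob-pc", "carol-pc")]
print(hex(home_node("http://example.com/index.html", nodes)))
```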
Squirrel’s target environment is networks of roughly 100 to 100,000 nodes. While Squirrel can operate outside that range, the research team did not have workloads to test this. The team believes that Squirrel would be a cheaper, low-management, and fault-resilient solution for web caching in large intranets. They are also confident that, with a peer-to-peer routing substrate in place, Squirrel would be relatively simple to deploy in a corporate network.