Speed-up may make "topic-sensitive" page rankings feasible
Computer science researchers at Stanford University have developed several new techniques that together may make it possible to calculate Web page rankings, as used in the Google search engine, up to five times faster. The speed-ups to Google's method may make it realistic to calculate page rankings personalized for an individual's interests or customized to a particular topic.
The Stanford team includes graduate students Sepandar Kamvar and Taher Haveliwala, noted numerical analyst Gene Golub and computer science professor Christopher Manning. They will present their first paper at the Twelfth Annual World Wide Web Conference (WWW2003) in Budapest, Hungary, May 20-24, 2003. The work was supported by the National Science Foundation (NSF), an independent federal agency that supports fundamental research and education in all fields of science and engineering.
Computing PageRank, the ranking algorithm behind the Google search engine, for a billion Web pages can take several days. Google currently ranks and searches 3 billion Web pages. Each personalized or topic-sensitive ranking would require a separate multi-day computation, but the payoff would be less time spent wading through irrelevant search results. For example, searching a sports-specific Google site for "Giants" would give more importance to pages about the New York or San Francisco Giants and less importance to pages about Jack and the Beanstalk.
"This work is a wonderful example of how NSF support for basic computer science research, including applied mathematics and algorithm research, has impacts in daily life," said NSF program officer Maria Zemankova. In the mid-1990s, an NSF digital library project and an NSF graduate fellowship also supported Stanford graduate students Larry Page and Sergey Brin while they developed what would become the Google search engine.
To speed up PageRank, the Stanford team developed a trio of techniques in numerical linear algebra. First, in the WWW2003 paper, they describe so-called "extrapolation" methods, which make some assumptions about the Web's link structure that aren't true, but permit a quick and easy computation of PageRank. Because the assumptions aren't true, the PageRank isn't exactly correct, but it's close and can be refined using the original PageRank algorithm. The Stanford researchers have shown that their extrapolation techniques can speed up PageRank by 50 percent in realistic conditions and by up to 300 percent under less realistic conditions.
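The idea behind extrapolation can be illustrated with a small sketch: run a couple of ordinary PageRank power-iteration steps, then use Aitken extrapolation to jump the iterate closer to its limit. The toy link graph, damping factor, and three-steps-then-extrapolate schedule below are illustrative assumptions, not the Stanford team's actual implementation (their paper uses more refined extrapolation variants).

```python
# Sketch: accelerating PageRank power iteration with Aitken extrapolation.
# Graph, damping factor, and schedule are invented for illustration.

def pagerank_step(x, links, n, damping=0.85):
    """One power-iteration step of PageRank over a dict {page: [outlinks]}."""
    new = [(1 - damping) / n] * n          # teleportation term
    for page, outs in links.items():
        share = damping * x[page] / len(outs)
        for o in outs:                      # each page shares its rank among outlinks
            new[o] += share
    return new

def aitken(x0, x1, x2):
    """Component-wise Aitken extrapolation toward the fixed point."""
    out = []
    for a, b, c in zip(x0, x1, x2):
        denom = c - 2 * b + a
        # guard against division by ~0 for already-converged components
        out.append(a - (b - a) ** 2 / denom if abs(denom) > 1e-12 else c)
    return out

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}  # toy 4-page "Web", no dangling pages
n = 4
x = [1.0 / n] * n
for _ in range(3):
    x1 = pagerank_step(x, links, n)
    x2 = pagerank_step(x1, links, n)
    x = aitken(x, x1, x2)                   # extrapolate past x2
    s = sum(x)
    x = [r / s for r in x]                  # renormalize to a probability vector
print([round(r, 3) for r in x])
```

The extrapolated vector is not an exact PageRank, but it lands near the true ranking and can then be polished with ordinary iterations, which is the refinement step the paragraph above describes.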
A second paper describes an enhancement, called "BlockRank," which relies on a feature of the Web's link structure, a feature that the Stanford team is among the first to investigate and exploit. Namely, they show that approximately 80 percent of the pages on any given Web site point to other pages on the same site. As a result, they can compute many single-site PageRanks, glue them together in an appropriate manner and use that as a starting point for the original PageRank algorithm. With this technique, they can realistically speed up the PageRank computation by 300 percent.
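A minimal sketch of the BlockRank idea: compute a PageRank-style vector within each site using only intra-site links, then glue the local vectors into one global starting vector. The two-site toy Web and the page-count weighting below are assumptions for illustration; the actual algorithm weights each site by a PageRank computed over the graph of sites.

```python
# Sketch of BlockRank: per-site local PageRanks glued into a warm-start vector.
# Site layout and weighting scheme are illustrative, not the published algorithm.

def local_pagerank(pages, links, iters=50, damping=0.85):
    """Power iteration restricted to one site's intra-site links."""
    idx = {p: i for i, p in enumerate(pages)}
    n = len(pages)
    x = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - damping) / n] * n
        for p in pages:
            outs = [q for q in links.get(p, []) if q in idx]  # drop cross-site links
            if outs:
                share = damping * x[idx[p]] / len(outs)
                for q in outs:
                    new[idx[q]] += share
            else:  # page with no intra-site outlinks: spread rank uniformly
                for i in range(n):
                    new[i] += damping * x[idx[p]] / n
        x = new
    return {p: x[idx[p]] for p in pages}

# Toy Web with two sites; intra-site links dominate, as the paper observes.
sites = {"a": ["a/1", "a/2", "a/3"], "b": ["b/1", "b/2"]}
links = {"a/1": ["a/2", "a/3"], "a/2": ["a/1"], "a/3": ["a/1", "b/1"],
         "b/1": ["b/2"], "b/2": ["b/1", "a/1"]}

# Glue step: weight each site's local vector by its share of all pages.
total = sum(len(ps) for ps in sites.values())
start = {}
for site, pages in sites.items():
    for p, r in local_pagerank(pages, links).items():
        start[p] = r * len(pages) / total
print(start)  # a warm start for the full, global PageRank iteration
```

Because the warm start is already close to the true ranking, the subsequent global iterations have much less work to do, which is where the reported speed-up comes from.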
Finally, the team notes in a third paper that the rankings for some pages are calculated early in the PageRank process, while the rankings of many highly rated pages take much longer to compute. In a method called "Adaptive PageRank," they eliminate redundant computations associated with those pages whose PageRanks finish early. This speeds up the PageRank computation by up to 50 percent.
"Further speed-ups are possible when we use all these methods," Kamvar said. "Our preliminary experiments show that combining the methods will make the computation of PageRank up to a factor of five faster. However, there are still several issues to be solved. We're closer to a topic-based PageRank than to a personalized ranking."
The complexities of a personalized ranking would require even greater speed-ups to the PageRank calculations. In addition, while a faster algorithm shortens computation time, the issue of storage remains. Because the results from a single PageRank computation on a few billion Web pages require several gigabytes of storage, saving a personalized PageRank for many individuals would rapidly consume vast amounts of storage. Saving a limited number of topic-specific PageRank calculations would be more practical.
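A back-of-the-envelope check makes the storage claim concrete, assuming a single 4-byte floating-point rank per page (the article does not specify the storage format, so the byte count is an assumption):

```python
# Rough storage estimate for one PageRank vector over 3 billion pages.
# The 4-bytes-per-rank figure is an assumption for illustration.
pages = 3_000_000_000
bytes_per_rank = 4
gb = pages * bytes_per_rank / 1e9
print(gb)  # gigabytes for a single PageRank vector
```

At roughly this size per vector, storing a separate vector for each of millions of users is clearly impractical, while a few dozen topic-specific vectors remain manageable.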
The reason for the expensive computation and storage requirements lies in how PageRank generates the rankings that have led to Google's popularity. Unlike page-ranking methods that rate each page separately, PageRank bases each page's "importance" on the number and importance of pages that link to the page.
Therefore, PageRank must consider all pages at the same time and can't easily omit pages that aren't likely to be relevant to a topic. This design also means that the faster method will not affect how quickly Google presents results for users' searches, because the rankings are computed in advance and not at the time a search is requested.
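The separation between offline ranking and query-time lookup can be sketched in a few lines. The rank scores, page names, and index structure below are invented for illustration; the point is only that no ranking math happens when a search arrives.

```python
# Sketch: ranks are precomputed offline, then merely looked up at query time.
# Scores, URLs, and the index are illustrative, not real Google data.

precomputed_ranks = {"stanford.edu": 0.91, "example.org": 0.33}  # computed in advance

def search(query, index):
    """Order matching pages by precomputed rank; no ranking work per query."""
    matches = index.get(query, [])
    return sorted(matches, key=lambda url: precomputed_ranks.get(url, 0.0),
                  reverse=True)

index = {"giants": ["example.org", "stanford.edu"]}
results = search("giants", index)
print(results)
```

Speeding up the offline computation therefore changes how often and how cheaply the rankings can be refreshed or specialized, not how fast an individual search runs.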
The Stanford team's conference paper and technical reports on enhancing the PageRank algorithm, as well as the original paper describing the PageRank method, are available on the Stanford Database Group's Publication Server (http://dbpubs.stanford.edu/).