The Register has a great piece on Cuil’s launch, its impact on Google, and what the Web really is these days. While I don’t completely agree with the article’s point, thinking of the Web not as a collection of linked documents but as The Index (i.e., how search engines handle the Web) is both interesting and useful. Here are some of the key points from the article (“Spammers, Cuil, and the rescue from planet Google”):
With a little thought, Cuil not being as good as Google at finding what we want online is the least surprising piece of news since people familiar with the situation said JPII was partial to fish on a Friday. In 2008, Mountain View’s all-seeing algorithms in many ways are the web.
It’s easy to identify what happened. When it first surfaced in 1998, Google made sense of the web a bit better than anyone else. It was a useful improvement on existing services. Ten years later, the web does its best to make sense of Google.
The sorry upshot is that barring some unimaginable technological leap no search engine’s results will ever be better than Google’s, at least in the West. And the switch leaves the likes of Microsoft and Cuil (and a dozen other doomed start-ups) effectively attempting to reverse-engineer Google, not understand the information on the web.
…
The people at the vanguard of reverse-engineering Google are not its jealous search rivals. They’re the spammers and SEO consultants. They have driven an ever-closer relationship between the quirks and whims of Google’s algorithms and policies, and the structure and content of the web. It’s a feedback loop that was unavoidable once Google’s early rivals proved unable to respond to its better search results and presentation.