

The TREC Ad-Hoc and Web Tracks

The ad-hoc task has been at the heart of TREC evaluations since the beginning of TREC [24]. In this task, TREC participants are given a collection of newswire and other documents, usually about 500,000 to 700,000 documents in roughly two gigabytes of text. Along with the documents, the participants are also given a set of fifty queries posed by real users (often called assessors, as their key role is to assess the relevance of documents retrieved by different systems for their queries). The conference participants rank documents from the collection for every query using their systems, and the top 1,000 documents for each query are returned to NIST by every participant for evaluation. The assessors judge the top 100 to 200 documents from every system for relevance, and various evaluation scores are computed for each participating system (for example, average precision, precision in the top 10, 20, 30 documents, and so on).
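
To make the evaluation measures concrete, the following is a minimal sketch of how precision at a cutoff and average precision can be computed for a single query from a system's ranked list and the assessors' relevance judgments. The function names and the example document IDs are illustrative, not part of TREC or NIST's official trec_eval tool.

def precision_at_k(ranked_docs, relevant, k):
    # Fraction of the top k retrieved documents that are judged relevant.
    top_k = ranked_docs[:k]
    return sum(1 for d in top_k if d in relevant) / k

def average_precision(ranked_docs, relevant):
    # Average of the precision values at each rank where a relevant
    # document appears, normalized by the total number of relevant documents.
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

# Hypothetical ranked list for one query and its relevance judgments.
ranked = ["d3", "d7", "d1", "d9", "d4", "d2"]
relevant = {"d3", "d1", "d2"}

print(precision_at_k(ranked, relevant, 3))  # 2/3: two of the top 3 are relevant
print(average_precision(ranked, relevant))  # (1/1 + 2/3 + 3/6) / 3

In a TREC evaluation these per-query scores are averaged over all fifty queries to produce the figures reported for each participating system.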

Even though the TREC ad-hoc task, especially when using short 2-3 word queries, is very close to what happens in a web search system, there are some notable differences. For example, the documents being searched in the TREC ad-hoc task are not web pages.


