Abstract
Personalized Information Retrieval (PIR) is an effective solution when users issue the same query with different purposes yet receive identical results. The PIR-CLEF 2018 task aims to explore methods and evaluation procedures for PIR. By analyzing the provided data, we build query-level and session-level baselines. We compare these baselines with the extended models we propose, and the experimental results show that insufficient relevance information negatively affects both model performance and the evaluation process. Since personalized ranking based on typical user interests is not effective in practice, especially when relevance feedback yields unsatisfactory results, we argue that the PIR task should relate not only to context but also to users' varied search intentions. We offer several suggestions regarding the data and the evaluation process.
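As a minimal, hypothetical sketch of the kind of query-level personalization baseline the abstract refers to (the function name, the alpha/beta weights, and the profile format are assumptions, not the paper's actual method), a Rocchio-style expansion that blends a user's profile terms into the query might look like this:

```python
# Illustrative sketch only: Rocchio-style query expansion where a user's
# profile documents act as pseudo-relevance feedback. Weights and the
# profile representation are assumptions for demonstration purposes.
from collections import Counter

def expand_query(query_terms, profile_docs, alpha=1.0, beta=0.75, top_k=5):
    """Blend original query term weights with terms drawn from the user's profile."""
    query_vec = Counter(query_terms)          # original query term weights
    profile_vec = Counter()
    for doc in profile_docs:                  # profile_docs: list of token lists
        profile_vec.update(doc)
    n = max(len(profile_docs), 1)             # normalise by number of profile documents

    expanded = Counter({t: alpha * w for t, w in query_vec.items()})
    for term, freq in profile_vec.items():
        expanded[term] += beta * freq / n

    # Keep the original query terms plus the top-k profile terms by weight
    extra = [t for t, _ in expanded.most_common() if t not in query_vec][:top_k]
    return list(query_vec) + extra

# Toy usage example
print(expand_query(["food", "history"],
                   [["italian", "food", "recipes"], ["roman", "history", "food"]]))
```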
| Original language | English |
|---|---|
| Journal | CEUR Workshop Proceedings |
| Volume | 2125 |
| State | Published - 2018 |
| Event | 19th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2018, Avignon, France, 10 Sep 2018 – 14 Sep 2018 |
Keywords
- Data Analysis
- Personalized Information Retrieval
- Query Expansion