http://dx.doi.org/10.1633/JISTaP.2013.1.3.2

Interactive Information Retrieval: An Introduction  

Borlund, Pia (Royal School of Library and Information Science, University of Copenhagen)
Publication Information
Journal of Information Science Theory and Practice / v.1, no.3, 2013, pp. 12-32
Abstract
The paper introduces the research area of interactive information retrieval (IIR) from a historical point of view. The focus is on evaluation, since much IR research deals with evaluation methodology, owing to the core research interests in IR performance, system interaction, and satisfaction with retrieved information. In order to position IIR evaluation, the Cranfield model and the series of tests that led to it are outlined. Three iconic user-oriented studies and projects that have all contributed to how IIR is perceived and understood today are presented: the MEDLARS test, the Book House fiction retrieval system, and the Okapi project. On this basis, the call by Robertson & Hancock-Beaulieu (1992) for alternative IIR evaluation approaches, motivated by three revolutions (the cognitive, the relevance, and the interactive revolutions), is presented. As a response to this call, the 'IIR evaluation model' by Borlund (e.g., 2003a) is introduced. The objective of the IIR evaluation model is to facilitate IIR evaluation as close as possible to actual information searching and IR processes, though still in a relatively controlled evaluation environment, in which the test instrument of a simulated work task situation plays a central part.
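The Cranfield model mentioned above evaluates systems in batch mode against a fixed test collection of documents, queries, and relevance judgements, with performance reported as precision and recall. A minimal sketch of that computation follows (in Python; the document IDs and relevance judgements are hypothetical toy data, not results from any of the tests discussed in the paper):

    # Cranfield-style batch evaluation: compare a system's retrieved set
    # against assessor-judged relevant documents for a single query.
    def precision_recall(retrieved, relevant):
        """Return (precision, recall) for one query."""
        retrieved, relevant = set(retrieved), set(relevant)
        hits = len(retrieved & relevant)
        precision = hits / len(retrieved) if retrieved else 0.0
        recall = hits / len(relevant) if relevant else 0.0
        return precision, recall

    # Hypothetical run: the system returns d1, d2, d5; assessors judged
    # d1, d3, d5 relevant. Two of three retrieved documents are relevant
    # (precision 0.67), and two of three relevant documents are retrieved
    # (recall 0.67).
    p, r = precision_recall(["d1", "d2", "d5"], ["d1", "d3", "d5"])
    print(f"precision={p:.2f} recall={r:.2f}")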
Keywords
interactive information retrieval; IIR; evaluation; human-computer information retrieval; HCIR; IIR evaluation model; user-oriented information retrieval; information retrieval; IR; history
References
1 Pejtersen, A.M. & Austin, J. (1983). Fiction retrieval: Experimental design and evaluation of a search system based on users' value criteria (Part 1). Journal of Documentation, 39 (4), 230-246.
2 Pejtersen, A.M. & Austin, J. (1984). Fiction retrieval: Experimental design and evaluation of a search system based on users' value criteria (Part 2). Journal of Documentation, 40 (1), 25-35.
3 Pejtersen, A.M. & Fidel, R. (1998). A framework for work centered evaluation and design: A case study of IR on the web. Grenoble, March 1998. [Working paper for MIRA Workshop, Unpublished].
4 Pejtersen, A.M. & Rasmussen, J. (1998). Effectiveness testing of complex systems. In M. Helander (Ed.), Handbook of human-computer interaction. Amsterdam: North-Holland, 1514-1542.
5 Pejtersen, A.M. (1980). Design of a classification scheme for fiction based on an analysis of actual user-librarian communication and use of the scheme for control of librarians' search strategies. In O. Harbo, & L. Kajberg (Eds.), Theory and application of information research. Proceedings of the 2nd International Research Forum on Information Science. London: Mansell, 146-159.
6 Pejtersen, A.M. (1989). A library system for information retrieval based on a cognitive task analysis and supported by an icon-based interface. In Proceedings of the 12th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 1989), ACM, 40-47.
7 Pejtersen, A.M. (1991). Interfaces based on associative semantics for browsing in information retrieval. Roskilde, Denmark: Riso National Laboratory, (Riso-M-2883).
8 Pejtersen, A.M. (1992). New model for multimedia interfaces to online public access catalogues. The Electronic Library, 10 (6), 359-366.
9 Pejtersen, A.M., Olsen, S.E. & Zunde, P. (1987). Development of a term association interface for browsing bibliographic data bases based on end users' word associations. In I. Wormell (Ed.), Knowledge engineering: expert systems and information retrieval. London: Taylor Graham, 92-112.
10 Rasmussen, J., Pejtersen, A.M. & Goodstein, L.P. (1994). Cognitive systems engineering. N.Y.: John Wiley & Sons.
11 Robertson, S.E. & Hancock-Beaulieu, M.M. (1992). On the evaluation of IR systems. Information Processing & Management, 28 (4), 457-466.
12 Robertson, S.E. (1981). The methodology of information retrieval experiment. In K. Sparck Jones (Ed.), Information retrieval experiments. London: Butterworths, 9-31.
13 Robertson, S.E. (1997a). Overview of the Okapi Projects. Journal of Documentation, 53 (1), 3-7.
14 Robertson, S.E. (Ed.). (1997b). Special issue on Okapi. Journal of Documentation, 53 (1).
15 Robertson, S.E., Lu, W. & MacFarlane, A. (2006). XML-structured documents: Retrievable units and inheritance. In H. Legind Larsen, G. Pasi, D. Ortiz-Arroyo, T. Andreasen, & H. Christiansen (Eds.), Proceedings of the 7th International Conference on Flexible Query Answering Systems (FQAS 2006), Milan, Italy, June 7-10, 2006. LNCS 4027, Springer-Verlag, 121-132.
16 Robertson, S.E., Walker, S. & Beaulieu, M. (1997). Laboratory experiments with Okapi: Participation in the TREC programme. Journal of Documentation, 53 (1), 20-34.
17 Ruthven, I. & Kelly, D. (Eds.). (2011). Interactive information seeking, behaviour and retrieval. London: Facet Publishing.
18 Ruthven, I. (2008). Interactive information retrieval. Annual Review of Information Science and Technology, 42, 2008, 43-91.
19 Salton, G. (1981). The SMART environment for retrieval system evaluation: Advantages and problem areas. In K. Sparck Jones (Ed.), Information retrieval experiments. London: Butterworths, 316-329.
20 Salton, G. (1972). A new comparison between conventional indexing (MEDLARS) and automatic text processing (SMART). Journal of the American Society for Information Science, (March-April), 75-84.
21 Sanderson, M. (2010). Test collection based evaluation of information retrieval systems. Foundations and Trends in Information Retrieval, 4 (4), 247-375.
22 Saracevic, T. (1995). Evaluation of evaluation in information retrieval. In E.A. Fox, P. Ingwersen, & R. Fidel (Eds.), Proceedings of the 18th ACM SIGIR Conference on Research and Development in Information Retrieval. Seattle, 1995. N.Y.: ACM Press, 138-146.
23 Schamber, L. (1994). Relevance and information behavior. In M.E. Williams (Ed.), Annual Review of Information Science and Technology (ARIST). Medford, NJ: Learned Information, Inc., 29, 3-48.
24 Schamber, L., Eisenberg, M.B. & Nilan, M.S. (1990). A re-examination of relevance: Toward a dynamic, situational definition. Information Processing & Management, (26), 755-775.
25 Sharp, J. (1964). Review of the Cranfield-WRU test literature. Journal of Documentation, 20 (3), 170-174.
26 Sparck Jones, K. (1971). Automatic keyword classification for information retrieval. London: Butterworths.
27 Sparck Jones, K. (1981a). Retrieval system tests 1958-1978. In K. Sparck Jones (Ed.), Information retrieval experiments. London: Butterworths, 213-255.
28 Sparck Jones, K. (1981b). The Cranfield tests. In K. Sparck Jones (Ed.), Information retrieval experiments. London: Butterworths, 256-284.
29 Sparck Jones, K. (Ed.). (1981c). Information retrieval experiments. London: Butterworths.
30 Spink, A., Greisdorf, H. & Bateman, J. (1998). From highly relevant to not relevant: Examining different regions of relevance. Information Processing & Management, 34 (5), 599-621.
31 Swanson, D.R. (1965). The evidence underlying the Cranfield results. Library Quarterly, 35, 1-20.
32 Swanson, D.R. (1986). Subjective versus objective relevance in bibliographic retrieval systems. Library Quarterly, 56, 389-398.
33 Tague-Sutcliffe, J. (1992). The pragmatics of information retrieval experimentation, revisited. Information Processing & Management, 28 (4), 467-490.
34 Thorne, R.G. (1955). The efficiency of subject catalogues and the cost of information searches. Journal of Documentation, 11 (3), 130-148.
35 Voorhees, E.M. & Harman, D.K. (2005a). The text retrieval conference. In E.M. Voorhees & D.K. Harman (Eds.). TREC: Experiment and evaluation in information retrieval. Cambridge, Massachusetts: The MIT Press. 3-19.
36 Voorhees, E.M. & Harman, D.K. (Eds.) (2005b). TREC: Experiment and evaluation in information retrieval. Cambridge, Massachusetts: The MIT Press.
37 Walker, S. & De Vere, R. (1990). Improving subject retrieval in online catalogues: 2. Relevance feedback and query expansion. London: British Library. (British Library Research Paper 72).
38 Walker, S. (1989). The Okapi online catalogue research projects. In The Online catalogue: developments and directions. London: The Library Association, 84-106.
39 Wang, P. (2001). Methodologies and methods for user behavioral research. In M.E. Williams (Ed.), Annual Review of Information Science and Technology, 34, 1999, 53-99.
40 Wilson, M. (2011). Interfaces for information retrieval. In I. Ruthven & D. Kelly (Eds.), Interactive information seeking, behaviour and retrieval. London: Facet Publishing, 139-170.
41 Belkin, N.J. (1980). Anomalous states of knowledge as a basis for information retrieval. The Canadian Journal of Information Science, (5), 133-143.
42 Xie, I. (2008). Interactive information retrieval in digital environments. IGI Publishing.
43 Aitchison, J. & Cleverdon, C. (1963). Aslib Cranfield Research Project: Report on the test of the Index of Metallurgical Literature of Western Reserve University. Cranfield: The College of Aeronautics.
44 Beaulieu, M. & Jones, S. (1998). Interactive searching and interface issues in the Okapi Best Match Probabilistic Retrieval System. Interacting with Computers, 10, 237-248.
45 Beaulieu, M. (1997). Experiments on interfaces to support query expansion. Journal of Documentation, 53 (1), 8-19.
46 Beaulieu, M., Robertson, S. & Rasmussen, E. (1996). Evaluating interactive systems in TREC. Journal of the American Society for Information Science, 47 (1), 85-94.
47 Belkin, N.J. (2008). Some(what) grand challenges for information retrieval. ACM SIGIR Forum, 42 (1), 47-54.
48 Belkin, N.J., Cool, C., Croft, W.B. & Callan, J.P. (1993). The effect of multiple query representation on information retrieval system performance. In R. Korfhage, E. Rasmussen, & P. Willett (Eds.), Proceedings of the 16th ACM SIGIR Conference on Research and Development in Information Retrieval. Pittsburgh, 1993. New York: ACM Press, 339-346.
49 Belkin, N.J., Oddy, R. & Brooks, H. (1982). ASK for information retrieval: Part I. Background and theory. Journal of Documentation, 38 (2), 61-71.
50 Borlund, P. & Ingwersen, P. (1997). The development of a method for the evaluation of interactive information retrieval systems. Journal of Documentation, 53 (3), 225-250.
51 Borlund, P. (2003a). The IIR evaluation model: A framework for evaluation of interactive information retrieval systems. Information Research, 8 (3). Retrieved from http://informationr.net/ir/8-3/paper152.html
52 Borlund, P. & Ingwersen, P. (1998). Measures of relative relevance and ranked half-life: Performance indicators for interactive IR. In B.W. Croft, A. Moffat, C.J. van Rijsbergen, R. Wilkinson, & J. Zobel (Eds.), Proceedings of the 21st ACM SIGIR Conference on Research and Development in Information Retrieval. Melbourne, Australia, 1998. ACM Press/York Press, 324-331.
53 Borlund, P. (2000a). Evaluation of interactive information retrieval systems. Abo: Abo Akademi University Press. Doctoral Thesis, Abo Akademi University.
54 Borlund, P. (2000b). Experimental components for the evaluation of interactive information retrieval systems. Journal of Documentation, 56 (1), 71-90.
55 Borlund, P. (2003b). The concept of relevance in IR. Journal of the American Society for Information Science and Technology, 54 (10), 913-925.
56 Bruce, H.W. (1994). A cognitive view of the situational dynamism of user-centered relevance estimation. Journal of the American Society for Information Science, 45, 142-148.
57 Cleverdon, C.W. & Keen, E.M. (1966). Aslib Cranfield Research Project: Factors determining the performance of indexing systems. Vol. 2: Results. Cranfield.
58 Cleverdon, C.W. (1960). Aslib Cranfield Research Project: Report on the first stage of an investigation into the comparative efficiency of indexing systems. Cranfield: The College of Aeronautics.
59 Cleverdon, C.W. (1962). Aslib Cranfield Research Project: Report on the testing and analysis of an investigation into the comparative efficiency of indexing systems. Cranfield.
60 Cleverdon, C.W., Mills, J. & Keen, E.M. (1966). Aslib Cranfield Research Project: Factors determining the performance of indexing systems. Vol. 1: Design. Cranfield.
61 Cool, C. & Belkin, N.J. (2011). Interactive information retrieval: History and background. In I. Ruthven & D. Kelly (Eds.), Interactive information seeking, behaviour and retrieval. London: Facet Publishing, 1-14.
62 Ellis, D. (1989). A behavioural approach to information retrieval systems design. Journal of Documentation, 45 (3), 171-212.
63 Ellis, D. (1996). Progress and problems in information retrieval. London: Library Association Publishing.
64 Fidel, R. (2012). Human information interaction: An ecological approach to information behavior. Cambridge, MA: MIT Press.
65 Goodstein, L.P. & Pejtersen, A.M. (1989). The Book House: System functionality and evaluation. Roskilde, Denmark: Riso National Laboratory, (Riso-M-2793).
66 Gull, C.D. (1956). Seven years of work on the organization of materials in the special library. American Documentation, 7, 320-329.
67 Harter, S.P. & Hert, C.A. (1997). Evaluation of information retrieval systems: Approaches, issues, and methods. In M.E. Williams (Ed.), Annual Review of Information Science and Technology, 32, 1997, 3-94.
68 Harter, S.P. (1996). Variations in relevance assessments and the measurement of retrieval effectiveness. Journal of the American Society for Information Science, 47 (1), 37-49.
69 Ingwersen, P. & Jarvelin, K. (2005). The turn: Integration of information seeking and retrieval in context. Dordrecht, Netherlands: Springer Verlag.
70 Ingwersen, P. (1992). Information retrieval interaction. London: Taylor Graham.
71 Kuhlthau, C.C. (1993). Seeking meaning: A process approach to library and information science. Norwood, NJ: Ablex Publishing.
72 Jarvelin, K. & Kekalainen, J. (2000). IR evaluation methods for retrieving highly relevant documents. In N.J. Belkin, P. Ingwersen, & M.-K. Leong (Eds.), Proceedings of the 23rd ACM SIGIR Conference on Research and Development in Information Retrieval. Athens, Greece, 2000. New York, N.Y.: ACM Press, 41-48.
73 Jarvelin, K. (2011). Evaluation. In I. Ruthven & D. Kelly (Eds.), Interactive information seeking, behaviour and retrieval. London: Facet Publishing, 113-138.
74 Kelly, D. (2009). Methods for evaluating interactive information retrieval systems with users. Foundations and Trends in Information Retrieval, 3 (1-2), 1-224.
75 Lancaster, F.W. (1969). MEDLARS: Report on the evaluation of its operating efficiency. American Documentation, 20, 119-142.
76 Lu, W., Robertson, S.E. & MacFarlane, A. (2007). CISR at INEX 2006. In N. Fuhr, M. Lalmas, & A. Trotman (Eds.), Comparative Evaluation of XML Information Retrieval Systems: 5th International Workshop of the Initiative for the Evaluation of XML Retrieval (INEX 2006), Dagstuhl, Germany. LNCS 4518, Springer-Verlag, 57-63.
77 Lu, W., Robertson, S.E. & MacFarlane, A. (2006). Field-weighted XML retrieval based on BM25. In N. Fuhr, M. Lalmas, S. Malik, & G. Kazai (Eds.), Advances in XML Information Retrieval and Evaluation: Fourth Workshop of the Initiative for the Evaluation of XML Retrieval (INEX 2005), Dagstuhl, 28-30 November 2005. Lecture Notes in Computer Science, Vol. 3977, Springer-Verlag.
78 Marchionini, G. (2006). Toward human-computer information retrieval. Bulletin of the American Society for Information Science and Technology, June/July 2006. Retrieved from http://www.asis.org/Bulletin/Jun-06/marchionini.html
79 Martyn, J. & Lancaster, F.W. (1981). Investigative methods in library and information science: An introduction. Virginia: Information Resources Press. (2nd impression September 1991).