CityLIS Term 1 Week 3. In which we completed the story of documents from the dawn of time to the present day and discovered everything connects; I found out how catalogue cards connect with the pre-history of the web; the Economist wrote about the Future of the Book and played with its form; and we learnt about asking questions and finding answers using databases, information retrieval and knowledge management.
Inspired by Library and Information Science Foundations (LISF) and the story of documents Part 3, this catalogue card shows us the use of classification schemes within a cataloguing code, using a 20th century format: the index card. It also provides some additional user-created metadata added to the official typed record. An added identifier is “the Lemur Book”, referring to the animals that usually distinguish the cover of an O’Reilly book. We also see something written on it that links into the information retrieval themes covered in Digital Information Technologies and Architecture (DITA), and the contextual siting of search around a seeker and their information context and needs: “What we find changes who we become”. This image itself was found by practising information retrieval techniques from the DITA lab session.
Yes, in this week’s LISF lecture we completed our history of the story of documents, taking us from the Enlightenment to the present day in the ongoing quest for bibliographic control over the world’s knowledge. This featured much coverage of the 19th century and the Victorian pioneers who laid down foundations for modern library and information science so robust that they remain the cornerstones of the discipline to this day. These include intellectual tools such as catalogues and classification schemes, and memory institutions such as the British Library and the public library network.
These themes were reinforced in week one of the FutureLearn MOOC Web Science: How the Web is Changing the World from the University of Southampton. I watched a lecture (activity 1.10) by Professor Les Carr on the pre-history of the web. This covered now familiar territory, including Paul Otlet’s Mundaneum and Vannevar Bush’s Memex. He spoke of the importance of the Mundaneum not just as another attempt to collate the world’s knowledge, but also stressed its new intellectual tools (librarians, queries) and technologies (the index card).
“Query became part of the bibliographic record. Content was interlinked.” – Professor Les Carr
He also spoke about the 1937 idea by H.G. Wells to use microfilm to capture all the world’s knowledge as the World Brain, a permanent encyclopaedia.
“There is no practical obstacle whatever now to the creation of an efficient index to all human knowledge, ideas and achievement” – H.G. Wells
We then passed through the emergence of the internet, a network of networks inspired by the work of computer scientists such as Vint Cerf, and on towards the emergence of the web. Despite this lineage from attempts at bibliographic control and capturing all knowledge, these weren’t really the impetus for the web. The web was intended to solve information management problems at the CERN research lab in Geneva.
The web’s architecture contained three core ideas that realised and embedded interlinking and querying in the digital record:
- URIs/URLs – the idea that everything has a unique identifier
- HTTP – a mechanism for allowing clients and servers to communicate via the internet
- HTML – the ability to encode document structure and links to related documents in a simple markup language
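The three ideas above can be seen working together in a few lines of code: a document is identified by a URI, delivered over HTTP, and its HTML markup encodes links to related documents. The sketch below shows only the last two steps offline, using Python’s standard library to parse a small, made-up HTML fragment and resolve the links it contains against an assumed base URL (the example markup and the `info.cern.ch` base are illustrative, not taken from the post):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect the URLs a document links to -- the interlinking HTML encodes."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Anchor tags carry the link; resolve relative hrefs against the base URI
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

# Illustrative HTML fragment with one relative and one absolute link
html = '<p>See <a href="/about">about</a> and <a href="http://example.org/">example</a>.</p>'
parser = LinkExtractor("http://info.cern.ch/")
parser.feed(html)
print(parser.links)
# ['http://info.cern.ch/about', 'http://example.org/']
```

In a live setting the fragment would instead be fetched over HTTP (for example with `urllib.request.urlopen`), which is the piece of the architecture this offline sketch leaves out.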
From Geneva it expanded throughout the scientific research community and was then given to the world. As Tim Berners-Lee famously said: “This is for Everyone” and everyone took it and used it for new and different purposes extending the web into the information service we have today.
If you are not taking #FLwebsci yet, register quickly and catch up before it closes. It’s a well-put-together course with great discussions going on as participants share their thoughts and experience.
Lyn’s whole epic narrative arc of documents, from the ancient world through to the world wide web, was also supplemented this week by an essay published in the Economist on the Future of the Book called From Papyrus to Pixels. The article itself is a fascinating read, connecting books past, present and future and discussing the interplay between formats, technologies, authors, readers and publishing business models to trace things that endure, things that may change, and things that may fade and revive. For all that has changed, the essence of the book as a route to pleasure and a means of connecting people and knowledge persists across millennia.
“Books will evolve online and off, and the definition of what counts as one will expand; the sense of the book as a fundamental channel of culture, flowing from past to future, will endure.” – The Economist. Future of the Book Essay. From Papyrus to Pixels.
Interestingly, the essay is also provided in three formats: an audio version, an ink-stained, coffee-ringed skeuomorphic virtual book, and a web page. It was noticeable when I first encountered this presentation that my first thought was to call it a ‘traditional’ web page. I clearly thought using the web to deliver audio, or digital reconstructions of a retro physical paper format, to be more cutting edge. The web succeeds most when it takes what was best about old formats and technologies (codices, radio) and brings them forward, creating richer, ever more intricate and converged documents. I still find turning pages (even fake ones) more immersive, and a two-page layout in soothing black and white more engaging than scrolling through a long single column of text with brightly coloured images, headings and marginalia. How technically and conceptually clever of them to prompt such debate before a word has even been read.
Over in our cityLIS digital world we covered databases, information retrieval and the precision of search engines. I had never paid such close attention to the practice of searching before. Perhaps I have become a lazy searcher, carelessly tossing free text searches into the most obvious search box and uncritically accepting what comes back. Thanks to this week’s lab I paid close attention to different types of information need, to different search methods for information retrieval, and to the precision and recall of different search engines, and came up with some varying conclusions. This also came up in our research methods class, where we were introduced to Cyril Cleverdon, who was the first person to suggest formal testing of information retrieval systems and who developed the measures of precision and recall as part of his investigation into the comparative efficiency of indexing systems.
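Cleverdon’s two measures are simple ratios: precision is the fraction of retrieved documents that are relevant, and recall is the fraction of all relevant documents that were retrieved. A minimal sketch, using invented document IDs and counts purely for illustration:

```python
def precision_recall(retrieved, relevant):
    """Compute Cleverdon's precision and recall from sets of document IDs.

    precision = relevant documents retrieved / documents retrieved
    recall    = relevant documents retrieved / relevant documents in the collection
    """
    relevant_retrieved = retrieved & relevant
    precision = len(relevant_retrieved) / len(retrieved)
    recall = len(relevant_retrieved) / len(relevant)
    return precision, recall

# Hypothetical search: the engine returns 10 documents, of which 6 are
# relevant, while the collection holds 12 relevant documents in total.
retrieved = set(range(1, 11))                            # docs 1..10 returned
relevant = {1, 2, 3, 4, 5, 6, 11, 12, 13, 14, 15, 16}    # the 12 relevant docs
p, r = precision_recall(retrieved, relevant)
print(f"precision = {p:.2f}, recall = {r:.2f}")
# precision = 0.60, recall = 0.50
```

The tension the lab exposed falls out of the arithmetic: returning more documents tends to raise recall while diluting precision, which is why different search engines can reach such varying results on the same information need.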
Cleverdon is an entity in Google’s Knowledge Graph, and bridging the gap between information needs and knowledge was another theme of the week. This connected into our Information Management and Policy lecture on Knowledge Management, given by guest lecturer Noeleen Schenk from Metataxis. In this session we covered some of the models, benefits, drivers, tools and challenges involved in managing knowledge within organisations.