So far, there has been a lot of discussion about automation in libraries and problems concerning catalogs, but not so much about the future collection of the literature itself. I suppose that in a decade or two, when the number of electronic books is much, much larger, the conversion problems will not be as simple as transferring database records and stripping out some tabs and CR characters.
I think it would be impossible to keep transferring future electronic books (with the technical complexity of, say, today's Grolier or Compton multimedia encyclopedias) to newer platforms continuously through time. Instead, we need some sort of information embedded in these books, to tell future computers what processor, clock speed, operating system and other parameters must be emulated. Otherwise, as Steve Cisler at the Apple Computer Library wrote to me, we must restrict ourselves to retrospective, recovery-oriented projects. I think we ought to help future recovery programs by doing at least half of their work now.
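As a purely hypothetical sketch of what such an "embedded reconstruction code" might contain: no such standard exists, and every field name and value below is my own invention for illustration. The idea is a small, self-describing block of metadata stored inside the book itself, written as plain text so it stays readable even if the surrounding file format is forgotten.

```python
# Hypothetical "embedded reconstruction record": metadata stored inside an
# electronic book, telling a future machine what environment must be
# emulated to run the work as intended. All fields are invented examples.

def make_reconstruction_record(title, cpu, clock_mhz, os_name, display, media):
    """Build a plain-text record in simple key: value form, so a future
    recovery program can parse it without first solving the very
    compatibility problem the record describes."""
    fields = {
        "RECORD-TYPE": "EMBEDDED-RECONSTRUCTION-CODE",
        "TITLE": title,
        "CPU": cpu,
        "CLOCK-MHZ": str(clock_mhz),
        "OS": os_name,
        "DISPLAY": display,
        "MEDIA": media,
    }
    return "\n".join(f"{key}: {value}" for key, value in fields.items())

record = make_reconstruction_record(
    title="Example Multimedia Encyclopedia",
    cpu="Motorola 68030",
    clock_mhz=25,
    os_name="System 7",
    display="640x480, 256 colors",
    media="CD-ROM (ISO 9660)",
)
print(record)
```

The particular syntax matters less than the principle: the record should be human-readable plain text, the lowest common denominator most likely to survive.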
I know that a lot is being done today about standards between different computer platforms and countries. But does anyone think of standardization over time, over decades, over centuries? Storing on magnetic and/or optical media saves much space compared to storing piles of paper and rows of books, we are told. But what if the libraries of tomorrow have to be museums too - with hundreds of antique computers set up in hundreds of different configurations?
Isn't there a risk that PR material, presentations and other short-lived productions can "afford" to use the latest technology, while anyone producing literary or educational material meant to last for decades has to settle for some lowest common denominator of technological complexity?
Dare we really believe that libraries in the future will be equipped with supercomputers that can emulate or simulate all sorts of antique hardware and software? Shouldn't this emulation or simulation work of the future actually start here and now, with discussions, plans and guidelines for developers of authoring tools, and with the invention of special "embedded reconstruction code sets"?
Is this a very naive idea? So far, not one of the two dozen people I have talked with has commented on this.
It would be very tragic if the range of culturally accessible works were to grow narrower and narrower. Today we can - maybe with a little help - read Chaucer. Without any special deciphering gadgets we can read text that is six hundred years old! But in the year 2600 - will people be able to read (watch and listen to) anything electronically published that is older than twenty years?
Perhaps they will get a rough idea of what an old masterpiece is about. They may restore some text and maybe a picture or two, but not make it all run together as the artist once intended.
Imagine if today, instead of watching Citizen Kane, we would have to be content with reading a short summary of it in plain text, because no projectors could run the film format that Orson Welles used.
I have talked to representatives of IBM, Apple and Microsoft, but few of them seem to think that this could become a real problem. Representatives of the academic world and people at various national libraries, on the other hand, think I have a point here.
I know of SGML, which seems very promising where text is concerned. Do you know if anything else is being done about this problem for more complex documents, such as hypermedia or multimedia books? I would be grateful for any comment (short or long) on this question.
Karl-Erik Tallmo would like to hear from you. You can reach him at Nisus Publishers, Observatoriegatan 22, S 113 29 Stockholm, Sweden. fax +46 8 34 44 18 or via Internet firstname.lastname@example.org