By Kim Veltman - July 2000
This article was initially written for MEDICI; a print version will be published by them.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
When the Internet began in 1969 it was largely a communication channel for a small community of researchers. When the World Wide Web emerged from the high-energy physics community at CERN in the early 1990s, it rapidly became a repository for all subjects. In the past decades there have been three important trends, towards making enduring, collaborative and personal knowledge available on-line.
Visionaries now speak of a time in the near future when all recorded knowledge will be accessible through the World Wide Web. How to integrate these three kinds of knowledge will thus become an increasing challenge. Fortunately, many of the obstacles standing in the way of such a vision are already being tackled by organisations such as the W3C, the Internet Society, standards bodies and a number of international consortia. At first, problems of technological interoperability at the level of hardware and software dominated the scene. More recently, there has been increasing interest in interoperability of content. Here, work is being done on heterogeneous, distributed databases. The efforts of the Dublin Core (DC) to define a common ground through basic data entry fields are extremely valuable. The European Commission is supporting multilingual approaches through its Multilingual Information Society (MLIS) programme. The W3C is working on a Resource Description Framework (RDF), which will integrate other initiatives.
Metadata has emerged as one of the key concepts. This article focuses on three sets of problems concerning metadata which require further work. First, there are problems of quantity of copies and versions introduced by the enormous proliferation of images, words, sounds and other materials made available through Information and Communication Technologies (ICT). Second, there are problems of determining the quality and veracity of these images, words and sounds. Third, there is a challenge of developing dynamic metadata to deal with cultural and historical dimensions of knowledge.
In the past, photographers typically made images of selected paintings in museums and galleries, which were then used by a relatively small number of scholars who published books and articles. Today, with JPEG technology, a single painting produces a vignette, an imagette, a regular image, a high definition image and a very high definition image. A single painting thus generates five images.
Developments in infrared reflectography allow us to see different layers beneath the surface. If there were only three such layers under the surface, a single painting would thus generate 20 images (plate 1 in Appendix 1). In the case of a famous painting there is more than the original to be considered. There are copies. If there were four copies then those 20 images mentioned above would become 100 images. There are frequently also versions, variants, works based on the original and caricatures. Even assuming there were only one each of these for the original and the four copies, then there would be another 80 images (for the five resolutions), each with a surface and three layers, i.e. 320 images. Thus one original painting would generate 420 images. Each of these would also be subject to reconstructions. Assuming three such reconstructions for each of the above, one would have 1260 images generated by a single painting (plate 2).
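The arithmetic above can be summarised in a few lines of Python. This is only an illustrative sketch: the numbers of resolutions, layers, copies, derivative works and reconstructions are the hypothetical assumptions of this article, not measurements.

    # Illustrative sketch of the image counts discussed above; all figures are
    # the article's hypothetical assumptions, not measured data.
    resolutions = 5             # vignette, imagette, regular, high, very high definition
    surface_and_layers = 1 + 3  # painted surface plus three infrared layers

    per_painting = resolutions * surface_and_layers   # 20 images for one physical painting
    with_copies = per_painting * (1 + 4)              # original plus four copies: 100

    # Versions, variants, works based on the original and caricatures add, on the
    # article's assumptions, another 80 images at the five resolutions, each again
    # multiplied by the surface and three layers.
    derivatives = 80 * surface_and_layers             # 320

    total = with_copies + derivatives                 # 420
    with_reconstructions = total * 3                  # 1260, assuming three reconstructions each

    print(per_painting, with_copies, derivatives, total, with_reconstructions)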
If we include moving images of the same works, the problems multiply. It is true that the Moving Picture Experts Group, through MPEG 7 and MPEG 21, as well as the Society of Motion Picture and Television Engineers (SMPTE), are addressing some of these aspects. But we have no means, at present, to gain systematic access to the whole spectrum of images linked with a single painting. Similar problems and examples exist with respect to text, sounds and other media.
Given the immense advances in ease of digital reproduction, questions of quality also become paramount. Here methods such as digital watermarks can help determine whether a given image represents an unaltered version of the original. To illustrate the deeper problems entailed with respect to veracity it is useful to begin with an example of a relatively simple contemporary event such as a plane crash. At the local scene all the details of this event will be recorded. We will read in the local paper of who was killed, who their families were, how this has affected their neighbours, their colleagues at work and so on. At the regional level the same event will be recorded as a plane crash and a smaller number of details concerning the most important crash victims will be provided (plate 3). At the national level, there will be a more matter-of-fact report of yet another plane crash. At the global level, the actual event is not likely to be described. Rather we shall probably witness a tiny fluctuation in the annual statistics of persons who have died. In historical terms, say the statistics concerning deaths in the course of a century (what the Annales School might call the longue durée), this fluctuation will become all but invisible.
This example points to a first fundamental problem concerning metadata. Those working at the local, regional, national and historical levels typically have very different foci of attention, which are frequently reflected in quite different ways of dealing with, recording and storing their facts. The same event, which requires many pages at the local level, may merely be recorded as a numerical figure at the historical level. Unless there is a careful correlation among these different levels, it will not be possible to move seamlessly through these different information sources concerning the same event.
Implicit in the above is also an unexpected insight into a much debated phenomenon. Benjamin Barber, in his Jihad vs. McWorld, has drawn attention to a seeming paradox: there is a trend towards globalization, with McDonald's (and Hiltons) everywhere, and at the same time a reverse trend towards local and regional concerns, which he describes as if it were somehow a lapse in an otherwise desirable progress. Looking at the diagram below (plate 3), it becomes clear why these opposing trends are not a coincidence. Clearly we need a global approach to discern patterns in population, energy and other crucial ingredients in order to understand the big picture and to render sustainable our all too fragile planet. But this level, however important, is also largely an abstraction. It reduces the complexity of the everyday into a series of graphs and statistics, allowing us to see patterns which would not otherwise be evident. Yet the complexity of the regional contains all the facts, all the gory details, which are crucial for the everyday person. Thus trends towards CNN are invariably counterbalanced by trends towards local television, local radio, community programmes, and local chat groups on the Internet. This is not a lapse in progress. It is a necessary measure to ensure that the humane dimensions of communication remain. In retrospect, Marshall McLuhan's view of this as a trend towards a "global village" is much more accurate than Barber's metaphor because it acknowledges symbiotic co-existence rather than dualistic opposition.
These problems of metadata become clearer if we pursue the hypothetical case of a plane crash from a slightly different point of view (plate 4). At the event there are usually eye-witnesses. For the sake of our illustration let us posit that there are three. There will also be on-site reporters who may not have been eye-witnesses. Again we shall posit three. They send their material back to (three) off-site press bureaus. These gather information and send it on to (three) global press bureaus. In our hypothetical example, the "event" has thus gone through some combination of 12 different sources (3 eyewitnesses, 3 on-site reporters, 3 off-site press bureaus and 3 global press bureaus, ignoring for the moment that the latter institutions typically involve a number of individuals). When we look at the six o'clock news on the evening of the event, however, we are usually presented with a single series of images about the event.
It may in fact be the case that all twelve of the intermediaries have been very careful to record their intervention in the process: i.e. the metadata will often be encoded in some way. What is important from our standpoint, however, is that we have no access to that level of the data. There is usually no way of knowing whether we are looking at eyewitness one as filtered through on-site reporter two etc. More importantly, even if we did know this, there would be no way of gaining access at will to the possibly conflicting report of eyewitness two, on-site reporter three and so on. There may be much rhetoric about personalisation of news, news on demand, and yet the reality is that we have no way of checking behind the scenes to get a better picture.
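What such accessible metadata might look like can be sketched very simply: each published account carries an explicit chain of intermediaries, so that competing versions of the same event could in principle be retrieved. The record structure, field names and values below are purely hypothetical.

    # Hypothetical provenance metadata for broadcast accounts of one event.
    # Field names and values are illustrative only.
    accounts = [
        {
            "event": "plane crash",
            "eyewitness": "eyewitness 1",
            "on_site_reporter": "reporter 2",
            "off_site_bureau": "bureau 1",
            "global_bureau": "bureau 3",
            "text": "...",
        },
        # further accounts drawing on other eyewitnesses and reporters
    ]

    def versions_by(accounts, **criteria):
        """Return every account matching the given links in the chain,
        e.g. all accounts that go back to a particular eyewitness."""
        return [a for a in accounts
                if all(a.get(key) == value for key, value in criteria.items())]

    conflicting = versions_by(accounts, eyewitness="eyewitness 2")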
Such a level of detail may often seem superfluous. If the event is as straightforward as a plane crash, all that is crucial is a simple list of the facts. But the bombing of the Chinese Embassy during the recent Kosovo war offers a more complex case. We were given some facts: the embassy was bombed, but we were not told how many persons were killed. We were told that the Chinese objected as if they were being unreasonable, and only many weeks later were we told that this had been a failed intervention of the CIA. Until we have usable metadata which allow us to check references, to compare stories and to arrive at a more balanced view, we are at the mercy of the persons or powers who are telling the story, often without even being very clear as to who is behind that power. Is that satellite news the personal opinion of the owner himself, or might it represent the opinions and views of those under whose influence the owner operates? If we are unable to check such details we must ultimately abandon our attempts at truth concerning what we see and what we are told.
The problems concerned with these contemporary events fade in comparison with historical events, which are the main focus of our cultural quest. It is generally accepted that in the year 33 A.D. (give or take a year or two depending on chronology and calendar adjustments) there occurred an event which might be described as the most famous dinner party ever: the Last Supper. From a contemporary standpoint there were twelve eyewitnesses (the Apostles), of whom four were also the equivalents of on-site reporters (Matthew, Mark, Luke and John). In today's terms, their reports were syndicated and are better remembered as part of a collection now known as the New Testament. Popular versions with less text and more pictures were also produced in the form of the Biblia pauperum: equivalents of an expurgated Daily Mirror.
The theme was then taken up by the Franciscans in their version of billboards, without the advertising fees, known as fresco cycles. This idea, developed in Assisi, was marketed in their Florentine branch known as Santa Croce, where it caught on and soon became the rage, so much so that the Dominicans soon used it in San Marco and elsewhere. In the church of Santa Maria delle Grazie in Milan, Leonardo da Vinci gave a new twist to what had by now effectively become the company slogan. The idea soon became part of the Church's international marketing strategy. Copies appeared on walls as paintings in Pavia, Lugano, Tongerlo and eventually London. As part of the franchise strategy, multi-media were used. So there were soon reproductions in the form of engravings, lithographs, photographs, three-dimensional models, and eventually even films and videos. In the old tradition that imitation is the best form of flattery, even the competition used the motif, culminating in a version where Marilyn Monroe herself and twelve of her Hollywood colleagues made of the Last Supper a night on the town.
As a result of these activities in the course of nearly two millennia, there are literally tens of thousands of versions, copies and variants of the most famous dinner in history, which brings us back to the problems of metadata. If I go to one of the standard search engines such as HotBot and type in "Last Supper", I am given over 50,000 entries concerning the event which happen to be on-line, or, to speak technically, only that subset, somewhere between 10 and 30% of the actual amount, which the leading search engines have successfully found.
There is no way of limiting my search to the text versions of the original reporters, or to large, wall-sized versions on the scale of Leonardo's original, which was roughly eight by four meters. Nor can one distinguish between Franciscan and Dominican versions, or between authentic copies as opposed to lampoons, caricatures and sacrilegious spoofs. To a great expert, requiring a system to find such details might seem excessive, because they might know most of these things at a glance. But what of the young teenager living in Hollywood who, as an atheist, has no religious background and sees the version with Marilyn Monroe for the first time? How are they to know that this is a spoof rather than something downloaded from a fifties version of CNN online? A true search engine would help not only the young Hollywood teenager but every true searcher. Indeed it should provide truth even if the searcher is "false."
Underlying the difficulties considered above with respect to the Last Supper, is a deeper set of problems. We expect our search engines to provide a single, static answer. By contrast, the realities of cultural and historical knowledge entail multiple, dynamic answers with respect to space, time, individuals, objects, concepts etc. Accordingly we need dynamic metadata to deal with each of these. Some simple examples will illustrate this need.
Current printed maps in atlases are static. Historically the boundaries and names of countries, regions and cities are continually changing. Electronic maps should therefore be dynamic, such that they can reflect changes over time: how, for instance, St. Petersburg becomes Leningrad and subsequently returns to St. Petersburg, or how the Roman Empire begins in Italy, expands enormously throughout the Mediterranean basin, and then contracts again. As a result, if I were searching for something in fourteenth-century Poland, the search engine would consult a different map than for the Renaissance or for today. The answer to the question "Where is Poland?" will thus shift with time and adjust accordingly. Applied globally, this will furnish us with more than simple instructions on how to get there. It will make visible persons' misconceptions of geography at various points of history. It will show political differences: how, for instance, India's maps of India and Pakistan may well be different from Pakistan's maps of the same two countries. To achieve this, global co-operation will be needed. A spatial metadata project should produce dynamically changing atlases and link these with Geographical Information Systems (GIS) and Global Positioning Systems (GPS). This is a pre-requisite for visualising changing political boundaries and new approaches to history.
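A minimal sketch of such a dynamic gazetteer entry is given below: the name of a place is resolved against the date being asked about. The year boundaries used for St. Petersburg are its actual renamings; the data structure itself is merely illustrative.

    # Minimal sketch of a dynamic gazetteer: the answer depends on the year asked about.
    place_names = {
        "saint-petersburg": [
            (1703, 1914, "St. Petersburg"),
            (1914, 1924, "Petrograd"),
            (1924, 1991, "Leningrad"),
            (1991, 9999, "St. Petersburg"),
        ],
    }

    def name_in_year(place_id, year):
        for start, end, name in place_names[place_id]:
            if start <= year < end:
                return name
        return None

    print(name_in_year("saint-petersburg", 1950))   # Leningrad
    print(name_in_year("saint-petersburg", 2000))   # St. Petersburg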
Current search engines are typically linked to a single time scale. Western knowledge typically assumes a Gregorian (or Julian) calendar. There are also Jewish, Muslim, Chinese, Indian and other calendars. Those of the Hebrew faith had their fourth, not second, millennium problem some time ago. At present, it requires the intervention of an expert to translate a date from one of these chronological systems to its equivalent in our Gregorian time-scale. Needed is historical, temporal metadata which allows automatic mapping among these standard chronological systems. This will be a significant step towards studying history from multi-cultural points of view. Hence, if I am reading an Arabic or Jewish manuscript and come upon the date 380, the system will immediately provide an equivalent in the Christian Gregorian or Julian calendars.
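A rough sketch of such automatic mapping, using well-known year-level approximations, is given below. A real system would of course work at the level of days and respect the differing new-year boundaries of each calendar; these functions only indicate the principle.

    # Rough, year-level approximations for mapping calendar years; a real
    # conversion must work at the day level and handle new-year boundaries.
    def hebrew_to_gregorian(year_am):
        # A Hebrew Anno Mundi year N overlaps Gregorian years N-3761 and N-3760.
        return year_am - 3760

    def islamic_to_gregorian(year_ah):
        # The Islamic lunar year is about eleven days shorter than the solar year,
        # so roughly 33 Hijri years fit into 32 Gregorian years.
        return int(0.970224 * year_ah + 621.57)

    print(islamic_to_gregorian(380))   # roughly 990 in the Common Era
    print(hebrew_to_gregorian(5760))   # roughly 2000 in the Common Era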
There are further problems with respect to individuals. Today there are static lists of the complete works or of a catalogue raisonné of the writings, paintings and instruments of a given individual. What is known about the writings, paintings or instruments of an individual changes over time. The list of manuscripts by Leonardo was different in 1500 than in 1600, 1800 or today. The paintings attributed to Rembrandt were different in the eighteenth century, in the mid-twentieth century and after the Rembrandt Committee finished its research. Indeed, at any given period in history, there is debate among scholars concerning the exact contents of such lists. One needs standard lists which can then be adjusted to show which items are dubious or contested. Hence, such lists need to be dynamic (plate 6). Not only do the lists of contents change over time, so too do interpretations concerning the contents with respect to: transcriptions (plate 7); the meaning of a given term (plate 8); and the role of that term in various classification schemes (plate 9).
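One minimal way of making such lists dynamic is sketched below: each work carries a history of attribution judgements, and the catalogue for a given date is computed rather than fixed. The titles, dates and statuses are invented for illustration.

    # Sketch of a dynamic catalogue: attribution status is recorded per period,
    # so the list of accepted works can be generated for any given year.
    works = [
        {"title": "Work A", "attribution": [(1800, "accepted"), (1970, "contested")]},
        {"title": "Work B", "attribution": [(1900, "accepted")]},
        {"title": "Work C", "attribution": [(1950, "dubious"), (1985, "rejected")]},
    ]

    def status_in_year(work, year):
        current = "unknown"
        for since, status in work["attribution"]:
            if since <= year:
                current = status
        return current

    def catalogue(year, statuses=("accepted",)):
        return [w["title"] for w in works if status_in_year(w, year) in statuses]

    print(catalogue(1960))   # ['Work A', 'Work B']
    print(catalogue(1990))   # ['Work B']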
This dynamic dimension needs to be extended equally to the history of interpretations (plate 10). At the simplest level, we need metadata to link primary texts with the secondary literature concerning those texts. Entailed herein are both a) changing historical knowledge about an individual, and b) changing perceptions of an individual. Paradoxically, persons now famous, such as Montaigne or Leonardo, were judged very differently over the centuries: almost forgotten in some generations, particularly praised, and often for different reasons, in others. Our present methods of searching for and presenting individuals do not take adequate account of such aspects.
Then there is an even more elusive challenge of assessing the authority of sources concerning an individual (plate 11). In the case of a genius such as Leonardo, for instance, thousands of persons feel prompted to write something about the man. The number of persons in any generation, who have actually read his notebooks, has never been more than a handful. The Internet potentially offers us access to everyone who cites Leonardo, but has almost no mechanisms in place to distinguish between standard works, generally respected works and non-authoritative lists. A radical proposal of some to re-introduce censorship is, in our view, not a reasonable solution. The problem is made the more elusive because the recognised world authority in one decade may well be replaced in another decade.
Needed, therefore, are new kinds of dynamic, weighted bibliographies which allow us to create subsets on the basis of field-specific acceptance, and new ways of expressing and recording in electronic form the well-established traditions of peer review (which is quite different from attempting simplistic electronic computations of quality): to arrive, as it were, at peer review with an historical dimension in electronic form, while still having access to a wider range of less authoritative, or more precisely less established, sources in a field. In weighing such alternatives between the authority of sources and (mere) citations, we would be using technologies in new ways to return to central questions of quality.
Present-day sources typically focus on objects as static entities. Moreover, the limitations of print frequently lead us to focus on one example as if it were the whole category. Accordingly we all know about the Coliseum in Rome but most of us are unaware of the dozens of coliseums spread throughout the Roman Empire. Using the dynamic maps and chronologies outlined above, new kinds of cultural maps can be developed, which allow us to trace the spatial-temporal spread of major cultural forms such as Greek theatres or temples, Roman coliseums, or Christian Romanesque churches. This will allow novel approaches to long-standing problems of central inspiration and regional effects, the interplay between centre and periphery, and in some cases between centre and colonies. Such questions pertaining to original and variants (versions, copies, imitations), are again central to the challenges of a European Union which aims to maintain diversity.
Related to this new approach to objects is the question of different interpretations of the same object or complex of objects. Most of us are familiar with the Roman Forum from our secondary school history lessons. Most of us are unaware that Italian, French, and German reconstructions of that same Roman Forum are very different. Present-day search engines focus on providing us with access to the original Roman Forum. Cultural and historical metadata will allow us to call up these different interpretations as well, and thereby allow us to see the differences in approach.
In the realms of architecture and construction, firms such as Autodesk have formed a consortium to develop Industry Foundation Classes. This project treats all the basic elements of architecture as intelligent objects. As a result, a basic object such as a door is defined in terms of its different contexts. Hence if I am building a skyscraper, the software will immediately "know" that its doors will need to be of a very different strength than if I am building a cottage. In other words, the advent of "intelligent doors" means that the software provides me with a basic shape, which automatically adjusts itself to the context at hand.
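A toy sketch of this idea of intelligent objects follows: the door carries rules that adapt its parameters to the building context. The class, the contexts and the values are invented for illustration and do not reproduce the actual Industry Foundation Classes model.

    # Toy illustration of a context-aware building element; contexts and values
    # are invented and do not reflect the real Industry Foundation Classes.
    class Door:
        # minimum leaf thickness in millimetres per building context (illustrative)
        MIN_THICKNESS = {"cottage": 40, "office": 45, "skyscraper": 55}

        def __init__(self, width_mm=900, height_mm=2100):
            self.width_mm = width_mm
            self.height_mm = height_mm
            self.thickness_mm = None

        def adapt_to(self, context):
            """Adjust the door's parameters to the building it is placed in."""
            self.thickness_mm = self.MIN_THICKNESS.get(context, 40)
            return self

    door = Door().adapt_to("skyscraper")
    print(door.thickness_mm)   # a stronger leaf than the same door in a cottage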
From an everyday, operational standpoint this marks an enormous advance, because it saves architects and designers the trouble of calculating the parameters of every single door, window and other architectural unit. Inherent therein, however, is also a great danger. If applied mindlessly, all doors and windows would be alike, resulting in the kind of world-wide homogenization which Barber has called the McWorld effect. The richness of the architectural tradition lies precisely in the fact that the doors and windows of Michelangelo are different from those of Le Corbusier or Richard Meier; that fifteenth-century Florentine doors are quite distinct from those in Lucca, Pisa, Rome and other European cities. This immensely rich tradition is documented in publications and photographic archives. Again there is a challenge to link the generic software with examples of unique expressions. Hence if I were an architect in Florence, the software would not only provide me with basic facts about windows, but also specific facts about Florentine windows.
Theoretically it is possible to go much further. One could add knowledge of individual Florentine doors and those of all cities through local databases, which can be accessed by the software. As a result the software could provide me with both general properties of doors and detailed information about Florentine doors. In the case of an historic home or building of the fourteenth century this information could be so detailed as to provide me with the entire history of restorations which the building has undergone. In the case of Florence, there is an incentive to maintain the historical core. In more modern cities the exercise of design becomes even more interesting if I can call up experiences in other cities in order to arrive at new architectural forms. In other words the generic examples of intelligent doors in regular software can be greatly enriched by the particular, unique examples showing historical and cultural variants which can serve as a source of inspiration for new creativity.
Music entails a particular kind of cultural object. Unlike a painting or a sculpture, where the object defines the content, music in the form of notes merely gives instructions for content in the form of a performance. Hence, in music, different versions play a more central role than in painting. Multiple interpretations are the content, and while one can dismiss wrongly played versions as uninteresting, interpretations by master players are all of interest. Pablo Casals may play the Bach Cello Suites very differently from Rostropovich, and yet both are important. Dynamic metadata for music should thus provide access to the notes, their generic rendering and their individual interpretation. Here there are important projects such as the Standard Music Description Language and Music Tagging Type Description (Mutated), which will be connected with MPEG 7.
Presently we have many different classification systems and thesauri. Concrete proposals for mapping among these systems exist (Williamson, McIlwaine). The Canadian Heritage Information Network (CHIN), the Marburg Archive and projects such as Joconde and TermIT have done very useful preliminary work in this direction. Systems such as the Universal Decimal Classification (UDC) and developments in terminology allow more systematic treatment of relations among subjects into classes such as subsumptive, determinative, ordinal, etc. (Perrault). A dynamic system which allows us to switch between classifications in different cultures and historical periods would provide new kinds of filters for perceiving and appreciating subtleties of historical and cultural diversity.
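A first, very crude approximation of such switching is a simple concordance of class notations, sketched below with a handful of broad Dewey and UDC main classes. Real mapping projects of the kind cited above deal with far finer, context-dependent relations.

    # Crude concordance between classification schemes at the level of a few
    # broad concepts; illustrative only, real mappings are far more nuanced.
    concordance = {
        "fine arts":  {"Dewey": "700", "UDC": "7"},
        "religion":   {"Dewey": "200", "UDC": "2"},
        "philosophy": {"Dewey": "100", "UDC": "1"},
    }

    def notation(concept, scheme):
        return concordance.get(concept, {}).get(scheme)

    print(notation("fine arts", "UDC"))    # '7'
    print(notation("religion", "Dewey"))   # '200'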
The enormous implications for learning range from the philosophical and epistemological domain, where we could trace the changing relations of concepts dynamically, to the humanities, with courses on culture and civilisation (a term which again has very different connotations in French, German and English). Instead of just citing different monuments, works of art and literature, we could explore the different connections among ideas in different cultural traditions. For example, Ranganathan's classification from India is much weaker than Dewey with respect to the fine arts, yet much more subtle than Dewey with respect to metaphysics and religion.
An integration of the methods outlined above will lead to new kinds of knowledge maps which allow us to trace the evolution of a concept both spatially, in different countries, and temporally, in different historical periods. This will allow us to return with new depth to the problems already broached above on several occasions: of standard/model versus variants/versions, of centre versus periphery, and of the role of continuity in the spread of major forms and styles of expression.
An integration of the above methods will further allow a new approach to the history of narrative, and thereby new approaches to literature, art and culture as a whole. A culture such as Europe's is built around a relatively small number of major narratives, deriving on the one hand from the Judaeo-Christian tradition (the Bible, Lives of the Saints), and on the other hand from the Greco-Roman tradition (Homer, Virgil, Ovid). We belong to the same culture if we know the same narratives, if we have the same stories in common. Paradoxically, those who have the same stories inevitably develop very different ways of telling those stories. The media differ. For instance, in Italy the lives of the saints most frequently become the great fresco cycles on the walls of churches. In France and the Netherlands, the lives of the saints are more frequently treated in illuminated manuscripts. In Germany, they frequently appear in complex altarpieces. Not only do the media vary, but also the ways of telling stories. The Life of Christ in Spain is very different from that in the Balkans or within the Orthodox tradition in Russia. Even so, the commonality of themes means that a European can feel an affinity towards a Russian Orthodox church which they cannot readily feel with an Indian temple with stories from the Mahabharata or the Ramayana (unless of course they know these stories as well).
In these transformations of the familiar lie at once the fascination of change through continuity which inspired the studies of Aby Warburg and his school, and also, implicitly, a series of important lessons about the keys to diversity. The most diverse narratives are precisely those about the most familiar stories. To visualise and make visible the complexities of our historical diversities of expression is our best hope for understanding the challenges of future diversity. Inherent in such an approach lie the seeds for understanding changing definitions of Europe and for developing a vision of the Europes of tomorrow: dynamic phenomena, processes rather than static definitions. At the same time this is a multicultural approach which goes beyond the traditional limits of Euro-centrism. Such narratives apply to all the great cultures: those of China, India, Japan, Persia. Hence, such an approach to metadata will lead scholars throughout the world to change their research, and others to research the implications of such changes.
Knowledge includes culture. Cultural heritage in museums, libraries and archives, concert halls and theatres plays an essential role in identity and has fundamental implications for employment, education, tourism and for content industries such as film, television, records and now the Internet. In addition to technological standards, systematic multi-media access to this heritage requires interoperability of content and adequate usage patterns. This in turn requires metadata which reflects the cultural and historical dimensions of knowledge, and for which the Resource Description Framework of the World Wide Web Consortium (W3C) offers a useful framework.
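What such a metadata record might contain can be suggested with a plain Dublin Core style description of one digital image of a painting, written here as a simple Python structure rather than in RDF syntax. The identifiers and relation values are invented for illustration.

    # Sketch of a Dublin Core style record for one digital image of a painting,
    # with relations to other copies and versions; identifiers are invented.
    record = {
        "title":    "The Last Supper (very high definition image, infrared layer 2)",
        "creator":  "Leonardo da Vinci",
        "date":     "1495-1498",
        "type":     "Image",
        "format":   "image/jpeg",
        "source":   "Santa Maria delle Grazie, Milan",
        "relation": ["isVersionOf: lastsupper/original/surface",
                     "hasVersion: lastsupper/copy/tongerlo"],
        "coverage": "Milan; 15th century",
    }

    for field, value in record.items():
        print(field, ":", value)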
The vision of MEMECS recognizes the importance of interoperability of systems, but focusses on interoperability of content through the development of dynamic metadata for cultural and historical dimensions of knowledge. This approach, which is fully multilingual, includes dynamic treatments of time (with multiple chronological systems, e.g. Julian, Hebraic, Islamic, Hindu, Chinese) and dynamic treatments of space (with multiple maps reflecting historical differences, different projection methods and competing cultural claims). It includes dynamic authority lists for names, concepts, multiple classification systems, terms, texts, corpora of an individual, music and narratives, together with means of recording the quality of the corpus and of the interpreters. With respect to objects and events it includes resolutions and layers of images in different media with links between copies, versions etc.; resolutions in detail from local to global; and different versions of present and past events.
Much preliminary work has already been done through initiatives such as the Resource Description Framework, the Dublin Core, the Interoperability Forum, JPEG 2000, MPEG 7 and MPEG 21, SMPTE and many metadata projects such as SCHEMAS. Already in 1995, the G7 initiated a project for Multimedia Access to World Cultural Heritage. In 1996, the European Commission initiated a Memorandum of Understanding for Multimedia Access to Europe's Cultural Heritage, which was a forerunner of the MEDICI Framework, which is organizing this year's cultural track at WWW9. The full dimension of interoperability of content requires new technology, inter-disciplinary research and public and private partnerships, in which organisations such as the European Commission can play an important role in the formation and stability of adequate constituencies. Five aspects require development: technology, networks, pilot projects, research and international dimensions.
First, new technologies are needed to deal with dynamic maps, chronologies, names, objects and concepts. These might be co-ordinated by the RDF section of the W3C, possibly in conjunction with the JRC and national research and supercomputing centres such as INRIA, GMD and CINECA (CNR).
Second, networks are needed which integrate the holdings of libraries, museums and archives and make them accessible to research institutes and educational institutions via high-speed networks such as Internet2, CANARIE and emerging European equivalents such as FING (France) and Gigaport (Netherlands). In large part, the infrastructure already exists. Needed are active connections between the content of memory institutions and research and education facilities, which can use the content in new ways, as was confirmed in the eEurope action lines at the recent meeting in Lisbon.
Third, pilot projects need to combine the new technology via networks with content from memory institutions, namely libraries, archives and museums (such as the Louvre, the Uffizi, the Staatliche Museen Preussischer Kulturbesitz and the Kunsthistorisches Museum), in interoperability labs for both research and education. Such projects would link content with concepts such as virtual reference rooms through the use of multi-agent systems and use these to develop new learning environments.
Fourth, interoperability of content through Internet access also requires research on other themes such as problems of method, access, reference, restoration, reconstruction, and terminology. Further analysis is required on appropriation and usage patterns of cultural data, based on inter-disciplinary approaches in anthropology, sociology, aesthetics, etc. This could be led by the emerging European Network of Centres of Excellence in Digital Culture in the context of the MEDICI framework.
Fifth, the development of metadata reflecting cultural and historical dimensions of knowledge responds to world-wide concerns about the Internet's sensitivity to cultural diversity and the preservation of the memories of civilisations. It offers an entry into implicit, tacit knowledge as well as explicit knowledge. Ultimately cultural and historical metadata should reflect the 6,500 languages of the world. These international dimensions should be reflected in Internet governance in the context of ICANN.
In the early days of literacy in the West, a series of rules for the use of language evolved. This gradually led to the fields of grammar (which dealt with structure), dialectic (which dealt with logic) and rhetoric (which dealt with the effects of language). Together grammar, dialectic and rhetoric became the trivium, the humanities side of the seven liberal arts (along with the proto-scientific quadrivium of arithmetic, geometry, astronomy and music, figure 1).
Grammar      Structure                      Standard Generalized Markup Language (SGML)
Dialectic    Logic, Truth of statement      Resource Description Framework (RDF)
Rhetoric     Effect, Expression, Style      Cascading Style Sheets (CSS)

Figure 1. Links between the ancient trivium and recent Internet developments
When the Internet began in 1969 it was intended primarily to provide new ways for humans to communicate at a distance. The past decades have seen the emergence of a new challenge: to provide new ways for machines to communicate with each other without the intervention of humans. This helps to explain why the theme of metadata has become central to the world of computers. In the process, groups such as the World Wide Web Consortium and the Internet Society are effectively engaged in re-formulating, in electronic form, the rules of grammar, dialectic and rhetoric. The syntax aspects of grammar are covered by the Standard Generalized Markup Language (SGML) and the eXtensible Markup Language (XML). Recent developments with respect to a Virtual Hyperglossary (VHG) are addressing semantic elements. Expression and style, relating to rhetoric, are being covered by Cascading Style Sheets (CSS) and the eXtensible Style Language (XSL, cf. figure 1).
This article began by drawing attention to a proliferation of copies and versions and to problems of quality and veracity. Such problems call for new approaches to dynamic metadata, which might be co-ordinated in a framework called MEMECS. The article recommends that MEMECS become part of the W3C's vision, linked with the European Commission's MEDICI Framework and furthered by the long-term research programmes of the Commission.
The Internet is not just about scanning in existing content or gaining access to digital materials. It requires pre-structuring our knowledge anew. It also entails finding electronic equivalents for all our rules and definitions of knowledge. Ultimately it requires changing our conceptions of knowledge itself. The challenge that faces us is to ensure that these transformations reflect all the diversity of our being rather than reducing us to the limitations of some algorithm. That is why the cultural and historical dimensions of knowledge are so important. Combined with important work towards a global network of scientific literature and knowledge (such as the Global Info project), this can lead to a larger vision of interoperability of enduring, collaborative and personal knowledge; a bridging of Snow's Two Cultures; a global information ecosystem; a truly semantic World Wide Web as envisioned by Tim Berners-Lee.
I thank Professor Alfredo Ronchi (Milan) for inviting me to prepare this paper and Valentine Herman for inviting me to join his education panel within the culture track. I am grateful to my doctoral student, Nik Baerten for kindly preparing the plates. In addition I thank both him and my colleague, Drs. Johan van de Walle for reading a draft of the paper.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Kim H. Veltman
Kim Veltman is Scientific Director of the Maastricht McLuhan Institute (MMI) and co-ordinator of a new European Network of Centres of Excellence in Digital Cultural Heritage.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
For citation purposes:
Veltman, K "Cultural and Historical Metadata: MEMECS (Metadonnées et Mémoire Collective Systématique)", Cultivate Interactive, issue 1, 3 July 2000