
Exploring the use of contextual metadata collected during ubiquitous learning activities

by Martin Svensson




Institution: Växjö University
Department:
Year: 2008
Keywords: Knowledge representation; Ubiquitous computing; Context; Location-based learning; Natural Sciences; Computer and Information Science; Naturvetenskap; Data- och informationsvetenskap; SOCIAL SCIENCES; Statistics, computer and systems science; Informatics, computer and systems science; SAMHÄLLSVETENSKAP; Statistik, data- och systemvetenskap; Informatik, data- och systemvetenskap; Informatik; Informatics; samhälle/juridik
Record ID: 1348629
Full text PDF: http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2074


Abstract

Recent developments in modern computing have led to a more diverse use of devices within the field of mobility. Many of today's mobile devices can, for instance, browse the web and connect to wireless networks, gradually merging the wired Internet with the mobile Internet. Since mobile devices by design usually have built-in means for creating rich media content, along with the ability to upload it to the Internet, they are potential contributors to the already overwhelming collection of content residing on the World Wide Web. While interesting initiatives for structuring and filtering content on the World Wide Web exist, often based on various forms of metadata, a unified understanding of individual content items is more or less restricted to technical metadata values such as file size and file format. Such metadata makes it impossible to take the purpose of the content into account when designing applications. Answers to questions such as "why was this content created?" or "in which context was it created?" would allow for content filtering tailored to the end-user's needs. In our opinion, this kind of understanding would be particularly valuable for content created with mobile devices, which are purposely brought into varying environments. This is why, in this thesis, we have investigated how descriptions of contexts can be captured, structured and expressed as machine-readable semantics. To limit the scope of our work, we developed a system that mirrors the context of ubiquitous learning activities to a database. Whenever rich media content was created within these activities, the system associated that particular content with its context. The system was tested in live trials in order to gather reliable, real-world contextual data, and the transition to semantics was made by generating Resource Description Framework (RDF) documents from the contents of the database. The outcome of our efforts is a fully functional system able to capture the contexts of pre-defined ubiquitous learning activities and transform them into machine-readable semantics. We believe our contribution has some innovative aspects, one being that the system can output the contexts of activities as semantics in real time, allowing activities to be monitored as they are performed.
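
To illustrate the general idea, the Python sketch below shows how a piece of rich media created during a ubiquitous learning activity might be linked to its surrounding context and serialized as RDF. It is a minimal illustration only, not the implementation described in the thesis: it uses the rdflib library and an invented example.org vocabulary, and the activity, location and task values are hypothetical placeholders.

    # Minimal sketch: associating media content with the context of a
    # learning activity and serializing the result as RDF (Turtle).
    # Assumes: pip install rdflib; the ex: vocabulary is made up for this example.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, XSD

    EX = Namespace("http://example.org/ulearn#")

    g = Graph()
    g.bind("ex", EX)

    # A pre-defined learning activity and a photo created during it
    activity = URIRef("http://example.org/ulearn/activity/field-trip-1")
    photo = URIRef("http://example.org/ulearn/content/photo-42")

    # Contextual metadata describing the activity
    g.add((activity, RDF.type, EX.LearningActivity))
    g.add((activity, EX.location, Literal("lakeside field site")))
    g.add((activity, EX.task, Literal("document plant species")))

    # Associate the created content with its context
    g.add((photo, RDF.type, EX.MediaContent))
    g.add((photo, EX.createdDuring, activity))
    g.add((photo, EX.createdAt,
           Literal("2008-05-12T10:30:00", datatype=XSD.dateTime)))

    # Emit machine-readable semantics for the captured context
    print(g.serialize(format="turtle"))

Once context is expressed this way, standard RDF tooling (for example SPARQL queries) could in principle be used to filter content by the activity, location or task in which it was created, which is the kind of purpose-aware filtering the abstract argues plain technical metadata cannot support.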