Dynamic Configurable Entity Recognition from Text
This project complements current entity recognition methods in situations where extraction must be dynamic, whether in the vocabulary used or in other aspects of configuration.
A modular approach is taken: text is first fed into a multilingual lexical analysis web service. The results of this analysis are used to build search needles, which are then matched, via a SPARQL query, against any vocabulary stored at a Linked Data endpoint.
By dissociating lexical processing from reference vocabulary lookup, and by allowing both to be dynamically configured, entity recognition can be tailored to a particular task much more quickly than traditional methods allow. In addition, because a live SPARQL endpoint is queried, any changes to the reference vocabulary are immediately available for recognition, with no model rebuilding or similar step required.
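The two-stage pipeline above can be sketched in a few lines of Python. Everything concrete here is an illustrative assumption: the service and endpoint URLs are placeholders, the lexical analysis service is assumed to return a JSON list of lemmas, and the lookup matches needles against `skos:prefLabel` values, which an actual vocabulary may or may not use.

```python
import json
import urllib.parse
import urllib.request

# Placeholder URLs -- substitute the actual lexical analysis service
# and SPARQL endpoint for the vocabulary being targeted.
LEXICAL_SERVICE = "http://example.org/las/baseform"
SPARQL_ENDPOINT = "http://example.org/sparql"

def lemmatize(text, locale="fi"):
    """Ask the lexical analysis web service for the base forms of the
    words in `text`; the returned lemmas serve as search needles.
    (Assumes the service returns a JSON list of strings.)"""
    params = urllib.parse.urlencode({"text": text, "locale": locale})
    with urllib.request.urlopen(f"{LEXICAL_SERVICE}?{params}") as resp:
        return json.load(resp)

def build_lookup_query(needles, lang="fi"):
    """Build a SPARQL query matching the needles against vocabulary
    labels at the endpoint, here assumed to be skos:prefLabel values."""
    values = " ".join(
        '"%s"@%s' % (n.replace('"', '\\"'), lang) for n in needles
    )
    return (
        "PREFIX skos: <http://www.w3.org/2004/02/skos/core#>\n"
        "SELECT ?entity ?label WHERE {\n"
        "  VALUES ?label { %s }\n"
        "  ?entity skos:prefLabel ?label .\n"
        "}" % values
    )
```

Because the query is generated at run time against a live endpoint, swapping in a different vocabulary is just a matter of changing the endpoint URL and, if needed, the label property in the query template.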
D.Sc. Eetu Mäkelä, Aalto University
Arttu Oksanen, Jouni Tuominen, Eetu Mäkelä, Minna Tamper, Aki Hietanen and Eero Hyvönen: Law and Justice as a Linked Open Data Service
. Submitted. bib pdf
Everybody is expected to know and obey the law in today’s society. Governments therefore publish legislation and case law widely in print and on the web. Such legal information is provided for human consumption, but the information is usually not available as data for algorithmic analysis and applications to use. However, this would be beneficial in many use cases, such as building more intelligent juridical online services and conducting research into legislation and legal practice. To address these needs, this paper presents Semantic Finlex, a national in-use data resource and system for publishing Finnish legislation and related case law as a Linked Open Data service with applications. The system transforms and interlinks on a regular basis data from the legacy legal database Finlex of the Ministry of Justice into Linked Open Data, based on the new European standards ECLI and ELI. The data is hosted on a “7-star” SPARQL endpoint with a variety of related services available that ease data re-use. Rich Internet Applications using only SPARQL for data access are presented as first application demonstrators of the data service.
Petri Leskinen, Mikko Koho, Erkki Heino, Minna Tamper, Esko Ikkala, Jouni Tuominen, Eetu Mäkelä and Eero Hyvönen: Modeling and Using an Actor Ontology of Second World War Military Units and Personnel
. Submitted. bib pdf
This paper presents a model for representing historical military personnel and army units, based on large datasets about World War II in Finland. The model is in use in the WarSampo data service and semantic portal, which has had tens of thousands of distinct visitors. A key challenge here is how to represent ontological changes, since the ranks and units of military personnel, as well as the names and structures of army units, change rapidly in wars. This leads to serious problems in both search and data linking due to ambiguity and homonymy of names. In our solution, actors are represented in terms of the events they participated in, which facilitates disambiguation of persons and units in different spatio-temporal contexts. The linked data in the WarSampo Linked Open Data cloud and service has ca. 9 million triples, including actor datasets of ca. 100 000 soldiers and ca. 16 100 army units. To test the model in practice, an application for semantic search and recommending based on data linking was created, where the spatio-temporal life stories of individual soldiers can be reassembled dynamically by linking data from different datasets. An evaluation is presented showing promising results in terms of linking precision.
Kimmo Kettunen, Eetu Mäkelä, Juha Kuokkala, Teemu Ruokolainen and Jyrki Niemi: Modern Tools for Old Content - in Search of Named Entities in a Finnish OCRed Historical Newspaper Collection 1771-1910
. Proceedings of LWDA 2016
, Potsdam, Germany, September, 2016. bib pdf
Named entity recognition (NER), the search, classification and tagging of names and name-like frequent informational elements in texts, has become a standard information extraction procedure for textual data. NER has been applied to many types of texts and different types of entities: newspapers, fiction, historical records, persons, locations, chemical compounds, protein families, animals etc. In general, a NER system’s performance is genre- and domain-dependent, and the entity categories used also vary. The most general set of named entities is usually some version of a tripartite categorization of locations, persons and organizations. In this paper we report first trials and evaluation of NER with data from Digi, a digitized Finnish historical newspaper collection. The Digi collection contains 1 960 921 pages of newspaper material from the years 1771–1910, in both Finnish and Swedish. We use only the Finnish documents in our evaluation. The OCRed newspaper collection has many OCR errors; its estimated word-level correctness is about 74–75 %. Our principal NER tagger is a rule-based tagger of Finnish, FiNER, provided by the FIN-CLARIN consortium. We also show results of limited-category semantic tagging with tools of the Semantic Computing Research Group (SeCo) of Aalto University. FiNER is able to achieve up to a 60.0 F-score with named entities in the evaluation data. SeCo’s tools achieve a 30.0–60.0 F-score with locations and persons. The performance of FiNER and SeCo’s tools with the data shows that at best about half of named entities can be recognized even in quite erroneous OCRed text.
Eetu Mäkelä, Thea Lindquist and Eero Hyvönen: CORE - A Contextual Reader based on Linked Data
. Proceedings of Digital Humanities 2016, long papers
, pp. 267-269, Kraków, Poland, July, 2016. bib pdf link
CORE is a contextual reader application intended to improve the user’s close reading experience, particularly with regard to material in an unfamiliar domain. CORE works by utilizing Linked Data reference vocabularies and datasets to identify entities in any PDF file or web page. For each discovered entity, pertinent information such as short descriptions, pictures, or maps is sourced and presented on mouse-over, to allow users to familiarize themselves with any unfamiliar concepts, places, etc. in the texts they are reading. If further information is needed, an entity can be clicked to open a full context pane, which supports deeper contextualization (also visually, e.g. by displaying interactive timelines or maps). Here, CORE also facilitates serendipitous discovery of further related knowledge by being able to bring in and suggest related resources from various repositories. Clicking on any such resource loads it into the contextual reader for endless further browsing.
Eero Hyvönen, Erkki Heino, Petri Leskinen, Esko Ikkala, Mikko Koho, Minna Tamper, Jouni Tuominen and Eetu Mäkelä: WarSampo Data Service and Semantic Portal for Publishing Linked Open Data about the Second World War History
. The Semantic Web – Latest Advances and New Domains (ESWC 2016)
(Harald Sack, Eva Blomqvist, Mathieu d’Aquin, Chiara Ghidini, Simone Paolo Ponzetto and Christoph Lange (eds.)), Springer-Verlag, May, 2016. bib pdf
This paper presents the WarSampo system for publishing collections of heterogeneous, distributed data about the Second World War on the Semantic Web. WarSampo is based on harmonizing massive datasets using event-based modeling, which makes it possible to enrich datasets semantically with each other’s contents. WarSampo has two components: First, a Linked Open Data (LOD) service, WarSampo Data, for Digital Humanities (DH) research and for creating applications related to war history. Second, a semantic WarSampo Portal has been created to test and demonstrate the usability of the data service. The WarSampo Portal allows both historians and laymen to study war history and the destinies of their family members in the war from different interlinked perspectives. Published in November 2015, the WarSampo Portal had some 20,000 distinct visitors during the first three days, showing that the public has a great interest in this kind of application.
Eetu Mäkelä: Combining a REST Lexical Analysis Web Service with SPARQL for Mashup Semantic Annotation from Text
. Proceedings of the ESWC 2014 demonstration track, Springer-Verlag
, May, 2014. bib pdf
Current automatic annotation systems are often monolithic, holding internal copies of both machine-learned annotation models and the reference vocabularies they use. This is problematic particularly for frequently changing references such as person and place registries, as the information in the copy quickly grows stale. In this paper, arguments and experiments are presented supporting the notion that sufficient accuracy and recall can both be obtained simply by combining a sufficiently capable lexical analysis web service with queries against a primary SPARQL store, even for highly inflected languages, which are often problematic.