
Dynamic Configurable Entity Recognition from Text

This project complements current methods for entity recognition in situations where extraction must be dynamically configurable, either in the vocabulary used or in other aspects of the configuration.

Here, a modular approach is taken: text is first fed into a multilingual lexical analysis web service. The results of this analysis are then used to build search needles, which are finally used to form a SPARQL query against any vocabulary stored at a Linked Data endpoint.

By dissociating the lexical processing from the reference vocabulary lookup, and by allowing both to be dynamically configured, entity recognition can be tailored to a particular task much more quickly than traditional methods allow. In addition, querying a live SPARQL endpoint makes any changes to the reference vocabulary immediately available for recognition, without rebuilding models or similar preprocessing.
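The needle-to-query step above can be sketched as follows. This is a minimal illustration only: the helper name, the label-matching strategy, and the query shape are assumptions for the sake of the example, not the actual SeCo service interface, and a real deployment would target a specific Linked Data endpoint.

```python
# Hypothetical sketch: turn lemmatized search needles (as produced by a
# lexical analysis service) into a SPARQL query against a reference
# vocabulary's rdfs:label values.

def build_needle_query(lemmas, lang="fi", limit=10):
    """Build a SPARQL query matching entities whose label equals
    any of the given search needles (case-insensitively)."""
    # Each needle becomes one case-insensitive label comparison.
    filters = " || ".join(
        'LCASE(STR(?label)) = "%s"' % lemma.lower() for lemma in lemmas
    )
    return """PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?entity ?label WHERE {
  ?entity rdfs:label ?label .
  FILTER(LANG(?label) = "%s")
  FILTER(%s)
} LIMIT %d
""" % (lang, filters, limit)

# Example: needles for "Helsingissä" after lemmatization
query = build_needle_query(["helsinki", "helsingfors"], lang="fi")
```

Because the query is generated at request time, swapping the reference vocabulary is just a matter of pointing the same query at a different endpoint.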

System demonstrators:

Links:

Contact Person

D.Sc. Eetu Mäkelä, Aalto University



Publications

2016

Kimmo Kettunen, Eetu Mäkelä, Juha Kuokkala, Teemu Ruokolainen and Jyrki Niemi: Modern Tools for Old Content - in Search of Named Entities in a Finnish OCRed Historical Newspaper Collection 1771-1910. Proceedings of LWDA 2016, Potsdam, Germany, September, 2016. bib pdf
Named entity recognition (NER), the search, classification and tagging of names and name-like frequent informational elements in texts, has become a standard information extraction procedure for textual data. NER has been applied to many types of texts and entities: newspapers, fiction, historical records, persons, locations, chemical compounds, protein families, animals etc. In general, a NER system's performance is genre- and domain-dependent, and the entity categories used also vary. The most general set of named entities is usually some version of a tripartite categorization into locations, persons and organizations. In this paper we report first trials and evaluation of NER with data from Digi, a digitized Finnish historical newspaper collection. The Digi collection contains 1 960 921 pages of newspaper material from the years 1771–1910, in both Finnish and Swedish. We use only the Finnish documents in our evaluation. The OCRed newspaper collection has many OCR errors; its estimated word-level correctness is about 74–75 %. Our principal NER tagger is a rule-based tagger of Finnish, FiNER, provided by the FIN-CLARIN consortium. We also show results of limited-category semantic tagging with tools of the Semantic Computing Research Group (SeCo) of Aalto University. FiNER is able to achieve up to a 60.0 F-score with named entities in the evaluation data. SeCo's tools achieve 30.0–60.0 F-scores with locations and persons. The performance of FiNER and SeCo's tools on the data shows that, at best, about half of the named entities can be recognized even in quite erroneous OCRed text.
Eetu Mäkelä, Thea Lindquist and Eero Hyvönen: CORE - A Contextual Reader based on Linked Data. Proceedings of Digital Humanities 2016, long papers, pp. 267-269, Kraków, Poland, July, 2016. bib pdf link
CORE is a contextual reader application intended to improve the user's close-reading experience, particularly with regard to material in an unfamiliar domain. CORE works by utilizing Linked Data reference vocabularies and datasets to identify entities in any PDF file or web page. For each discovered entity, pertinent information such as short descriptions, pictures, or maps is sourced and presented on mouse-over, allowing users to familiarize themselves with any unfamiliar concepts, places, etc. in the texts they are reading. If further information is needed, an entity can be clicked to open a full context pane, which supports deeper contextualization (also visually, e.g. by displaying interactive timelines or maps). Here, CORE also facilitates serendipitous discovery of further related knowledge, by being able to bring in and suggest related resources from various repositories. Clicking on any such resource loads it into the contextual reader for endless further browsing.
Eero Hyvönen, Erkki Heino, Petri Leskinen, Esko Ikkala, Mikko Koho, Minna Tamper, Jouni Tuominen and Eetu Mäkelä: WarSampo Data Service and Semantic Portal for Publishing Linked Open Data about the Second World War History. The Semantic Web – Latest Advances and New Domains (ESWC 2016) (Harald Sack, Eva Blomqvist, Mathieu d'Aquin, Chiara Ghidini, Simone Paolo Ponzetto and Christoph Lange (eds.)), Springer-Verlag, May, 2016. bib pdf
This paper presents the WarSampo system for publishing collections of heterogeneous, distributed data about the Second World War on the Semantic Web. WarSampo is based on harmonizing massive datasets using event-based modeling, which makes it possible to enrich datasets semantically with each other's contents. WarSampo has two components: first, a Linked Open Data (LOD) service, WarSampo Data, for Digital Humanities (DH) research and for creating applications related to war history; second, a semantic WarSampo Portal, created to test and demonstrate the usability of the data service. The WarSampo Portal allows both historians and laymen to study war history and the destinies of their family members in the war from different interlinked perspectives. Published in November 2015, the WarSampo Portal had some 20,000 distinct visitors during the first three days, showing that the public has great interest in this kind of application.

2014

Eetu Mäkelä: Combining a REST Lexical Analysis Web Service with SPARQL for Mashup Semantic Annotation from Text. Proceedings of the ESWC 2014 demonstration track, Springer-Verlag, May, 2014. bib pdf
Current automatic annotation systems are often monolithic, holding internal copies of both machine-learned annotation models and the reference vocabularies they use. This is problematic particularly for frequently changing references such as person and place registries, as the information in the copy quickly grows stale. In this paper, arguments and experiments are presented on the notion that sufficient accuracy and recall can both be obtained simply by combining a sufficiently capable lexical analysis web service with queries against a primary SPARQL store, even in the case of often-problematic, highly inflected languages.