TAC 2017 Cold Start KB Track

Overview

The Cold Start KB track builds a knowledge base from scratch, using a predefined KB schema and a collection of unstructured text. The KB schema for Cold Start 2017 consists of:

- typed entity, event, and string nodes;
- various kinds of mentions for each node; and
- SF (slot filling) predicates, sentiment predicates, and event predicates that connect nodes in the KB.
The submitted Cold Start KBs are evaluated by both a composite query-based evaluation and a set of component evaluations. The composite KB evaluation applies a set of Cold Start evaluation queries to each KB and assesses the correctness of the events, sentiment sources and targets, and SF slot fillers found. The component evaluations are implemented by projecting out the individual components from the submitted KB and evaluating each component output file as though it had been submitted directly to the standalone track for that component. The following component files are projected from each submitted Cold Start KB:

- an Entity Discovery and Linking (EDL) submission file
- an Event Nugget (EN) submission file
- an Event Argument extraction and Linking (EAL) submission file
- a Belief and Sentiment (BeSt) submission file
- a Slot Filling (SF) submission file
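As a rough illustration only, the sketch below separates KB assertions into component groups by predicate family. The predicate names, grouping, and tab-separated column layout here are illustrative assumptions; the official Cold Start validation and projection tools define the actual component file formats.

    import collections

    # Rough sketch: group Cold Start KB assertions by predicate family so that each group
    # can be rewritten into the corresponding component submission format. Predicate names
    # and the tab-separated column layout are illustrative, not the official specification.
    def project_components(kb_path):
        components = collections.defaultdict(list)
        with open(kb_path, encoding="utf-8") as kb:
            for line in kb:
                fields = line.rstrip("\n").split("\t")
                if len(fields) < 3:
                    continue                                 # skip the run-ID line and blank lines
                predicate = fields[1]
                if predicate in ("type", "mention", "canonical_mention",
                                 "nominal_mention", "pronominal_mention", "link"):
                    components["EDL"].append(fields)         # nodes, mentions, and KB links
                elif "." in predicate:
                    components["Event"].append(fields)       # event predicates, e.g. per:conflict.attack_attacker
                elif "like" in predicate:
                    components["BeSt"].append(fields)        # sentiment predicates, e.g. org:is_liked_by
                else:
                    components["SF"].append(fields)          # slot-filling relations, e.g. per:date_of_birth
        return components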
The standalone EDL, EN, EAL, and BeSt tasks are evaluated using gold standard annotations on a common set of approximately 500 "core" documents, and are described fully on their respective track home pages. Below, we focus on the composite Cold Start KB Construction task and the component SF task, which are evaluated using post-submission assessment of responses to Cold Start evaluation queries.

Tasks

The Cold Start KB schema contains typed entity, event, and string nodes; various kinds of mentions for each node; and SF predicates, sentiment predicates, and event predicates that can connect nodes in the KB. Given a collection of approximately 90K English, Chinese, and Spanish documents, the Cold Start KB Construction system must find all entities, SF relations, events, event arguments, and sentiment (towards entities) that conform to the Cold Start KB schema, and output a KB file consisting of one assertion per line, where each assertion is a subject-predicate-object triple augmented with provenance and a confidence value. If a KB includes multiple assertions involving the same subject-predicate-object triple (but with different provenance), the assertion with the highest confidence value will be assessed, and additional assertions with lower confidence values will be assessed as resources permit; it is expected that approximately 3 assertions (each with a different justification) will be assessed for each subject-predicate-object triple involving SF, sentiment, or event predicates.

Each Cold Start KB undergoes a composite KB evaluation, in which a set of Cold Start evaluation queries is applied to the KB and the responses are assessed. A Cold Start evaluation query contains a name mention of an entity in the document collection (an "entry point") and a sequence of one or more SF, sentiment, or event predicates (e.g., "per:date_of_birth", "org:is_liked_by", "per:conflict.attack_attacker"). The entry point selects a single corresponding entity node in the KB, and the sequence of predicates is followed to arrive at a set of terminal objects at the end of the sequence (see the sketch at the end of this section). The terminal objects are then assessed and scored as in the traditional English slot filling task. For example, a typical query may ask "What are the ages of the siblings of the Bart Simpson mentioned in Document 42?" or "What attack events have an attacker who is disliked by the Marge Simpson mentioned in Document 9?" Such "two-hop" queries will verify that the knowledge base is well-formed in a way that goes beyond the component tasks of entity discovery and linking, slot filling, event nugget detection and coreference, event argument extraction and linking, and sentiment detection. Each evaluation query may have multiple entry points (i.e., multiple mentions of the same entity), in order to mitigate cascaded errors caused by submitted KBs that are not able to link every name mention to a KB entity node.

Systems participating in the Cold Start KB Construction task also undergo a set of component evaluations, along the following dimensions:

- Entity Discovery and Linking (EDL)
- Slot Filling (SF)
- Event Nugget detection and coreference (EN)
- Event Argument extraction and linking (EAL)
- Belief and Sentiment (BeSt)
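To make the query traversal concrete, the following sketch models a KB as (subject, predicate, object, provenance, confidence) assertions and follows a query's predicate sequence from an entry-point node to its terminal objects, keeping the highest-confidence assertion per triple. The node identifiers, in-memory representation, and function names are hypothetical; this is not the official assessment code.

    from collections import defaultdict

    # Sketch of following a Cold Start evaluation query through an in-memory KB.
    # Each assertion is a (subject, predicate, object, provenance, confidence) tuple.
    def build_index(assertions):
        index = defaultdict(dict)
        for subj, pred, obj, prov, conf in assertions:
            best = index[(subj, pred)].get(obj)
            if best is None or conf > best[1]:
                index[(subj, pred)][obj] = (prov, conf)   # keep the highest-confidence provenance
        return index

    def answer_query(index, entry_node, predicates):
        frontier = {entry_node}
        for pred in predicates:                           # one iteration per "hop"
            frontier = {obj for node in frontier
                        for obj in index.get((node, pred), {})}
        return frontier                                   # terminal objects, assessed as in English SF

    # "What are the ages of the siblings of the Bart Simpson mentioned in Document 42?"
    # answer_query(index, ":Entity_Bart", ["per:siblings", "per:age"])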
The Cold Start Slot Filling (SF) task removes the requirement that an entire text collection must be processed. Instead, Cold Start SF participants will receive the Cold Start evaluation queries that involve only SF predicates, and need only produce those entities and relations that would be found by the queries. A TAC slot filling system can easily be applied to this task by running it initially from each evaluation query entry point and then recursively applying the system to the identified slot fillers (see the sketch below). The justification spans for a single justification must come from the same document, but (unlike CSKB systems) SF systems must return only a single (highest-confidence) justification for each subject-predicate-object triple.

The 2017 Cold Start KB Construction task differs from the 2016 task in the following significant ways:
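A minimal sketch of that recursive application is given below. Here run_slot_filler is a hypothetical stand-in for a team's own single-hop slot filler, assumed to return (filler, justification, confidence) tuples, and only the single highest-confidence justification per triple is retained, as the SF task requires.

    # Sketch of answering a multi-hop Cold Start SF query with an existing single-hop
    # slot filler. run_slot_filler(mention, predicate) is a hypothetical stand-in for
    # a team's own system; it is assumed to yield (filler, justification, confidence).
    def answer_sf_query(entry_mention, predicates, run_slot_filler):
        frontier = [(entry_mention, [], 1.0)]          # (mention, justifications so far, confidence)
        for predicate in predicates:
            best = {}                                  # highest-confidence justification per filler
            for mention, chain, conf in frontier:
                for filler, justification, f_conf in run_slot_filler(mention, predicate):
                    combined = min(conf, f_conf)
                    if filler not in best or combined > best[filler][2]:
                        best[filler] = (filler, chain + [justification], combined)
            frontier = list(best.values())
        return frontier                                # terminal fillers, one justification per hop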
Track Coordinators

Hoa Dang (National Institute of Standards and Technology, hoa.dang@nist.gov)
Shahzad Rajput (National Institute of Standards and Technology, shahzad.rajput@nist.gov)