TAC KBP 2016 Event Argument Tasks
The Event Argument Extraction and Linking (EAL) tasks of the TAC KBP Event track focus on extracting event arguments and linking arguments that belong to the same event. EAL will be offered in three languages: English, Spanish, and Chinese. As in 2015, all tasks will operate using the Rich ERE definitions of events, nuggets, event hopper coreference, and argument validity. As in 2014/2015, the argument task will continue to support some inference of arguments that were ignored through "trumping" in Rich ERE. The evaluations will use a reduced event taxonomy.
The document-level tasks will all be scored via alignment with an LDC gold standard conforming to Rich ERE guidelines. This is a change from EAL 2015. BBN will release software that calculates EAL scores (which will remain at the "entity", not "mention", level) over Rich ERE.
There will be an extension to the EAL task that evaluates a system's ability to produce cross-lingual/cross-document event frames. Conceptually, this task will ask systems to assign global IDs to the document-level event hoppers that they create. This task will be evaluated by LDC assessment of the results of queries over a system-generated "event KB". Participants in this task will need to process a ~90K document corpus (roughly split in thirds by language). As in TAC Cold Start, participants will submit text files that can be transformed into the "event KB". We hope to provide system-produced T-EDL output over the corpus to all participants of the EAL evaluation. More details will be available soon. The organizers are still discussing whether to offer the option of running on only a subset of the corpus (e.g., 500 documents per language) for those who are interested in participating in the document-level task but not the cross-document/cross-lingual task.
For the cross-document/cross-lingual EAL task, there will be a pilot in late April/early May. As in the 2014 EAL pilot, the goal of the 2016 pilot will be to clarify the evaluation process, not to benchmark system performance. As such, the pilot will be run using previously released Rich ERE documents. We will let you know which documents are included, but it will be up to developers to decide whether they want to remove the pilot test set from their training data (and operate in a "dev-test" mode) or whether they would prefer to operate in a test-on-train mode for purposes of exercising other parts of their systems.
While the ACE 2005 event annotation is being provided to all participants, this task diverges from ACE in some cases. One example of divergence is the addition of correct answers derived through inference/world knowledge. This evaluation will treat as correct some cases that were explicitly excluded in ACE 2005.
Task Coordinator (Event Arguments): Marjorie Freedman (BBN, firstname.lastname@example.org)
NIST is an agency of the U.S. Department of Commerce
Last updated: Friday, 17-Feb-2017 17:26:25 EST
Comments to: email@example.com