
TAC KBP 2016 Event Track


The goal of the TAC KBP Event track is to extract information about events so that it is suitable as input to a knowledge base. In 2016, there will be document-level evaluations of the Event Nugget Detection and Coreference (EN) tasks and the Event Argument Extraction and Linking (EAL) tasks. All Event tasks will be offered in three languages: English, Chinese, and Spanish.

As in 2015, all Event tasks will operate using the Rich ERE definitions of events, nuggets, event hopper coreference, and argument validity. As in 2014/2015, the argument task will continue to support some inference of arguments that were ignored through "trumping" in Rich ERE. The evaluations will use a reduced event taxonomy.

The document-level tasks will all be scored via alignment with an LDC gold standard conforming to Rich ERE guidelines. For EN, this is the same scoring paradigm as in 2015; for EAL, it is a change. BBN will release software that calculates EAL scores over Rich ERE (scores will remain at the entity level, not the mention level).
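To illustrate the alignment-based scoring paradigm, here is a simplified sketch (not the official EN or EAL scorer): system nuggets are aligned to gold-standard nuggets by exact span and event-type match, and precision, recall, and F1 are computed over the alignment. The tuple layout and example event types below are illustrative assumptions; the real scorers also handle partial spans, realis, and coreference credit.

```python
# Simplified sketch of alignment-based nugget scoring against a gold standard.
# NOT the official TAC KBP scorer: exact-match alignment only, ignoring
# partial span credit, realis labels, and event-hopper coreference.

def score_nuggets(gold, system):
    """Align by (doc_id, start, end, event_type) and compute P/R/F1.

    gold, system: iterables of (doc_id, start, end, event_type) tuples.
    Returns (precision, recall, f1).
    """
    gold_set = set(gold)
    sys_set = set(system)
    matched = len(gold_set & sys_set)          # exact-match alignment
    precision = matched / len(sys_set) if sys_set else 0.0
    recall = matched / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical gold and system output for one document:
gold = [("d1", 5, 12, "Conflict.Attack"), ("d1", 40, 46, "Movement.Transport")]
system = [("d1", 5, 12, "Conflict.Attack"), ("d1", 60, 65, "Life.Die")]
p, r, f = score_nuggets(gold, system)  # one of two system nuggets aligns
```

With one aligned nugget out of two on each side, precision, recall, and F1 are all 0.5 here.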

No task-specific training data will be created in 2016. Instead, participants are encouraged to use a mix of (a) training data from previous years (Rich ERE, EN annotation, EA assessments, ACE) and (b) additional Rich ERE annotation that will be released for each of the three languages.

Participants can submit to any number and combination of tasks in the Event track. Because input documents are shared across the tasks, participants must refrain from examining any of the input documents before they have finished submitting results for all of the Event tasks in which they are participating.

Preliminary Schedule

    Preliminary TAC KBP 2016 Schedule

    February 29               Track registration opens
    July 15                   Deadline for registration for track participation
    August - October          Track evaluation windows (varies by track)
    August 1-14               EDL First Evaluation Window (EDL1)
    August 15-29              Cold Start KB/SF Evaluation Window
    August 22 - September 7   Event Argument Extraction and Linking Evaluation Window
    September 1               Release of EDL1 scores to individual participants
    September 12-19           EDL Evaluation Window 2 (EDL2)
    September 20 - October 3  Event Nugget Detection and Coreference Evaluation Window
    September 26 - October 3  Slot Filler Validation/Ensembling Evaluation Window
    October 10-17             Belief and Sentiment Evaluation Window
    By mid-October            Release of individual evaluated results to participants (most tracks)
    October 10                Deadline for short system descriptions
    October 18                Deadline for workshop presentation proposals
    October 20                Notification of acceptance of presentation proposals
    November 1                Deadline for system reports (workshop notebook version)
    November 14-15            TAC 2016 workshop in Gaithersburg, Maryland, USA
    February 2017             Deadline for system reports (final proceedings version)


While the ACE 2005 event annotation is being provided to all participants, this task diverges from ACE in some cases. One example is the addition of correct answers derived through inference or world knowledge: this evaluation will treat as correct some cases that were explicitly excluded in ACE 2005.

Track Coordinators

Event Arguments: Marjorie Freedman (BBN, mfreedma@bbn.com)
Event Nuggets: Teruko Mitamura (CMU, teruko@cs.cmu.edu) and Eduard Hovy (CMU, ehovy@andrew.cmu.edu)

NIST is an agency of the
U.S. Department of Commerce


Last updated: Tuesday, 28-Mar-2017 11:22:19 EDT
Comments to: tac-web@nist.gov