
TAC KBP 2016 Event Argument Tasks

Overview

The Event Argument Extraction and Linking (EAL) tasks of the TAC KBP Event track focus on extracting event arguments and linking arguments that belong to the same event. EAL will be offered in three languages: English, Spanish, and Chinese. As in 2015, all tasks will operate using the Rich ERE definitions of events, nuggets, event hopper coreference, and argument validity. As in 2014/2015, the argument task will continue to support some inference of arguments that were ignored through "trumping" in Rich ERE. The evaluations will use a reduced event taxonomy.

The document-level tasks will all be scored via alignment with an LDC gold standard conforming to Rich ERE guidelines; this is a change from EAL 2015. BBN will release software that calculates EAL scores (which remain at the "entity", not "mention", level) over Rich ERE annotation.
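As an illustration of what entity-level (rather than mention-level) scoring means, the sketch below aligns system (event type, role, entity) tuples against a gold standard and computes precision, recall, and F1. This is a simplified, hypothetical rendering for intuition only, not the BBN scorer, which additionally handles realis labels, justification offsets, and inexact alignment.

```python
# Hypothetical sketch of entity-level alignment scoring. The tuple
# layout and function name are illustrative, not the official format.

def score_arguments(system, gold):
    """Score (event_type, role, entity_id) tuples at the entity level.

    Counting entity IDs rather than individual mentions means a system
    is credited once per argument, regardless of how many mentions of
    that entity it finds.
    """
    sys_set, gold_set = set(system), set(gold)
    tp = len(sys_set & gold_set)  # correct arguments
    precision = tp / len(sys_set) if sys_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: one correct argument, one miss, one false alarm.
gold = [("Conflict.Attack", "Attacker", "E1"),
        ("Conflict.Attack", "Place", "E2")]
system = [("Conflict.Attack", "Attacker", "E1"),
          ("Movement.Transport", "Agent", "E3")]
print(score_arguments(system, gold))  # (0.5, 0.5, 0.5)
```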

There will be an extension to the EAL task that evaluates a system's ability to produce cross-lingual/cross-document event frames. Conceptually, this task asks systems to assign global IDs to the document-level event hoppers that they create. It will be evaluated by LDC assessment of the results of queries over a system-generated "event KB". Participants in this task will need to process a corpus of roughly 90K documents (split roughly in thirds by language). As in TAC Cold Start, participants will submit text files that can be transformed into the "event KB". We hope to provide system-produced T-EDL output over the corpus to all participants in the corpus-level EAL evaluation. More details will be available soon. The organizers are still discussing whether to offer the option of running on only a subset of the corpus (e.g., 500 documents per language) for those who are interested in participating in the document-level task, but not the cross-document/cross-lingual task.
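To make the global-ID idea concrete, here is a minimal, hypothetical sketch of linking document-level event hoppers into corpus-level frames. The function name, ID scheme, and coreference predicate are all invented for illustration; the actual submission format is specified in the track guidelines.

```python
# Illustrative sketch only: a corpus-level event frame is modeled here
# as a global ID shared by all document-level event hoppers judged to
# describe the same real-world event.

def link_hoppers(doc_hoppers, same_event):
    """Assign a global ID to each (doc_id, hopper_id) pair.

    `same_event(a, b)` stands in for a cross-document (and potentially
    cross-lingual) coreference predicate; a real system would compare
    event types, arguments, dates, and locations.
    """
    global_ids = {}
    frames = []  # (representative hopper, global ID) per event frame
    for hopper in doc_hoppers:
        for rep, gid in frames:
            if same_event(hopper, rep):
                global_ids[hopper] = gid
                break
        else:
            gid = "GE%04d" % len(frames)  # mint a new global frame ID
            frames.append((hopper, gid))
            global_ids[hopper] = gid
    return global_ids
```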

For the cross-document/cross-lingual EAL task, there will be a pilot in late April/early May. As in the 2014 EAL pilot, the goal of the 2016 pilot will be to clarify the evaluation process, not to benchmark system performance. As such, the pilot will be run using previously released Rich ERE documents. We will announce which documents are included; it will be up to developers to decide whether they want to remove the pilot test set from their training data (and operate in a "dev-test" mode) or whether they would prefer to operate in a test-on-train mode in order to exercise other parts of their systems.

Preliminary TAC KBP 2016 Schedule

    February 29: Track registration opens
    July 15: Deadline for registration for track participation
    August - October: Track evaluation windows (varies by track)
    August 1-14: EDL First Evaluation Window (EDL1)
    August 15-29: Cold Start KB/SF Evaluation Window
    August 22 - September 7: Event Argument Extraction and Linking Evaluation Window
    September 1: Release of EDL1 scores to individual participants
    September 12-19: EDL Evaluation Window 2 (EDL2)
    September 20 - October 3: Event Nugget Detection and Coreference Evaluation Window
    September 26 - October 3: Slot Filler Validation/Ensembling Evaluation Window
    October 10-17: Belief and Sentiment Evaluation Window
    By mid-October: Release of individual evaluated results to participants (most tracks)
    October 10: Deadline for short system descriptions
    October 18: Deadline for workshop presentation proposals
    October 20: Notification of acceptance of presentation proposals
    November 1: Deadline for system reports (workshop notebook version)
    November 14-15: TAC 2016 workshop in Gaithersburg, Maryland, USA
    February 2017: Deadline for system reports (final proceedings version)

References

While the ACE 2005 event annotation is being provided to all participants, this task diverges from ACE in some cases. One example of divergence is the addition of correct answers derived through inference/world knowledge. This evaluation will treat as correct some cases that were explicitly excluded in ACE 2005.

Task Coordinator

Event Arguments: Marjorie Freedman (BBN, mfreedma@bbn.com)


Last updated: Friday, 17-Feb-2017 17:26:25 EST