=====================================================================
TAC KBP 2016 EVENT ARGUMENT EXTRACTION AND LINKING EVALUATION RESULTS
=====================================================================

Team ID:      ijk
Organization: Zhejiang University
Run ID:       ijk3

Did the run access the live Web during the evaluation window: No
Did the run perform any cross-sentence reasoning:             Yes
Did the run return meaningful confidence values:              No

*************************************************************

Language: English
Number of participating teams: 7

This report first contains a summary of the scores of your submissions
compared to those of the other systems, based on the official metric for
each sub-task. This is followed by more detailed information about your
system's performance.

In the charts below, Max is the best-scoring submission across all
participants. If there were at least three submissions for this language,
the median score over the best submissions from each team is given as
well (with 7 teams, this is the Rank 4 row). All summary scores are given
as percentiles based on bootstrap resampling.

Document Level Argument Summary Score:

  System    5%     50%    95%
  ijk3      2.4    2.9    3.4
  Max       8.4    9.5    10.6
  Rank 4    2.4    2.9    3.4

The argument score is described in section 7.1 of the task guidelines.

Document Level Linking Summary Score:

  System    5%     50%    95%
  ijk3      1.2    1.5    1.9
  Max       7.6    8.5    9.4
  Rank 4    1.2    1.5    1.9

The linking score is described in section 7.1 of the task guidelines.
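The report does not spell out the resampling unit, but the 5%/50%/95% columns above can be read as percentiles of a bootstrap distribution. A minimal sketch, assuming per-document scores are resampled with replacement and the corpus score is their mean (the function name and the example scores are illustrative, not taken from the evaluation):

```python
import random


def bootstrap_percentiles(per_doc_scores, n_resamples=1000, seed=0):
    """Resample documents with replacement, recompute the mean score
    for each resample, and report the 5th/50th/95th percentiles of
    the resulting bootstrap distribution."""
    rng = random.Random(seed)
    n = len(per_doc_scores)
    means = []
    for _ in range(n_resamples):
        sample = [per_doc_scores[rng.randrange(n)] for _ in range(n)]
        means.append(sum(sample) / n)
    means.sort()

    def pct(p):
        # nearest-rank percentile over the sorted bootstrap means
        return means[min(n_resamples - 1, int(p / 100 * n_resamples))]

    return pct(5), pct(50), pct(95)


# Hypothetical per-document scores, for illustration only
scores = [2.0, 3.5, 1.0, 4.0, 2.5, 3.0, 2.8, 1.5, 3.2, 2.2]
lo, med, hi = bootstrap_percentiles(scores)
```

The spread between the 5% and 95% columns is thus a confidence band on the score, not a range over runs.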
Score details:

  TP        = # of true positive document-level arguments found
  FP        = # of false positive document-level arguments found
  FN        = # of false negative document-level arguments
  ArgP      = precision of finding document-level arguments
  ArgR      = recall of finding document-level arguments
  ArgF1     = F1 measure of finding document-level arguments
  ArgScore  = official document-level argument finding score
  LinkScore = official document-level linking score

All scores are scaled 0-100.

  System   TP      FP      FN       ArgP   ArgR   ArgF1   ArgScore   LinkScore
  ijk3     337.0   937.0   6286.0   26.5   5.1    8.5     2.9        1.5
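The ArgP/ArgR/ArgF1 columns follow the standard precision/recall/F1 definitions over the TP/FP/FN counts (ArgScore and LinkScore are the official metrics from the guidelines and are not recomputed here). A short sketch checking the ijk3 row:

```python
def arg_prf(tp, fp, fn):
    """Standard precision, recall, and F1 (scaled 0-100) from the
    document-level argument counts in the table above."""
    p = tp / (tp + fp)        # fraction of returned arguments that are correct
    r = tp / (tp + fn)        # fraction of gold arguments that were found
    f1 = 2 * p * r / (p + r)  # harmonic mean of precision and recall
    return 100 * p, 100 * r, 100 * f1


# Counts from the ijk3 row: TP=337, FP=937, FN=6286
p, r, f1 = arg_prf(337.0, 937.0, 6286.0)
# These round to the reported ArgP=26.5, ArgR=5.1, ArgF1=8.5
```

The large FN count relative to TP is what drives recall, and hence F1, down despite moderate precision.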