eRisk 2018

Training data

The training data will be sent to all registered participants on Nov 30th, 2017 (tentative date).

To gain access to the collection, all participants have to fill in, sign, and send a user agreement form (follow the instructions provided here). Once you have submitted the signed copyright form, you can proceed to register for the lab at the CLEF 2018 Labs Registration site.


The training data contain the following components:

  • risk_golden_truth.txt: this file contains the ground truth (one line per subject). For task 1, code 1 means that the subject is a risk case of depression and 0 means a non-risk case. For task 2, code 1 means that the subject is a risk case of anorexia and 0 means a non-risk case.
  • positive_examples_anonymous_chunks: this folder, which stores all the posts of the risk cases, contains 10 subfolders. Each subfolder corresponds to one chunk. Chunk 1 contains the oldest writings of all users (first 10% of submitted posts or comments), chunk 2 contains the second oldest writings, and so forth. The names of the files follow the convention subjectname_chunknumber.xml.
  • negative_examples_anonymous_chunks: this folder, which stores all the posts of the non-risk cases, contains 10 subfolders. Each subfolder corresponds to one chunk. Chunk 1 contains the oldest writings of all users (first 10% of submitted posts or comments), chunk 2 contains the second oldest writings, and so forth. The names of the files follow the convention subjectname_chunknumber.xml.
  • scripts evaluation: a folder containing the evaluation scripts (see below)

Since this is the training data, you get all chunks now. However, you should design your algorithms so that the chunks are processed in sequence (for example, do not process chunk 3 if you have not yet processed chunks 1 and 2).
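
As a rough illustration, the Python sketch below processes the chunks strictly in release order and accumulates each subject's writings as they would become available. The chunk folder names and the TEXT element used here are assumptions about the released files, not part of the official description; adapt them to the actual data.

    import os
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    def iterate_chunks(root_dir, n_chunks=10):
        """Yield (chunk number, writings seen so far), strictly in release order."""
        seen = defaultdict(list)                    # subject id -> writings seen so far
        for i in range(1, n_chunks + 1):            # chunk 1 first, chunk 10 last
            chunk_dir = os.path.join(root_dir, f"chunk {i}")   # assumed folder name
            for fname in sorted(os.listdir(chunk_dir)):
                if not fname.endswith(".xml"):
                    continue
                subject = fname[: fname.rfind("_")]  # subjectname_chunknumber.xml
                tree = ET.parse(os.path.join(chunk_dir, fname))
                for node in tree.getroot().iter("TEXT"):       # assumed element name
                    seen[subject].append(node.text or "")
            # At this point only chunks 1..i have been read, as required.
            yield i, seen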

SCRIPTS FOR EVALUATION:

To facilitate your experiments, we provide two scripts that could be of help during the training stage. These scripts are in the scripts evaluation folder.

We recommend that you follow these steps:

  • use your early detection algorithm to process the chunk 1 files and produce your first output file (e.g. usc_1.txt). This file should follow the format described in the instructions for the test stage (see the "Test" tab: 0/1/2 for each subject). A sketch of this per-chunk workflow is given after these steps.

    Do the same for all the chunk i files (i = 2, ..., 10). When you process the chunk i files it is fine to use information from the chunk j files for j <= i. Note that the chunk j files (j = 1, ..., i) contain all the posts/comments that you have seen after the ith release of data.

  • you now have your 10 output files (e.g. usc_1.txt ... usc_10.txt). As argued above, you need to take a decision on every subject (you cannot say 0 all the time), so every subject needs to have 1 or 2 assigned in some of your output files.

    use aggregate_results.py to combine your output files into a global output file. This aggregation script has two inputs: 1) the folder where you have your 10 output files and 2) the path to the file writings_per_subject_all_train.txt, which stores the number of writings per subject. This is required because we need to know how many writings were needed to take each decision. For instance, if subject_k has a total of 500 writings in the collection, then every chunk has 50 writings from subject_k. If your team needed 2 chunks to make a decision on subject_k, then we will store 100 as the number of writings that you needed to take this decision.

    Example of usage: $ python aggregate_results.py -path <folder with your 10 output files> -wsource <path to the writings_per_subject_all_train.txt file>

    This script creates a file, e.g. usc_global.txt, which stores your final decision on every subject and the number of writings that you saw before making each decision.

  • get the final performance results with the erisk_eval.py script. It has three inputs: a) the path to the golden truth file (risk_golden_truth.txt), b) the path to the overall output file, and c) the value of o (the delay parameter of the ERDE metric).

    Example: $ python erisk_eval.py -gpath <path to the risk_golden_truth.txt file> -ppath <path to the overall output file> -o <value of the ERDE delay parameter>

    Example: $ python erisk_eval.py -gpath ../risk_golden_truth.txt -ppath ../folder/usc_global.txt -o 5
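
Putting the steps together, a minimal sketch of the per-chunk workflow might look as follows. The decide() function is hypothetical (it stands in for your early detection model), and the two-tab separator matches the output format described in the Test section below.

    def write_decisions(decisions, out_path):
        """decisions: dict mapping subject id -> 0 (wait), 1 (risk) or 2 (no risk)."""
        with open(out_path, "w") as f:
            for subject, code in sorted(decisions.items()):
                f.write(f"{subject}\t\t{code}\n")   # exactly two tabs, as required

    # Hypothetical usage with the iterate_chunks() sketch above and your own decide():
    # for i, seen in iterate_chunks("positive_examples_anonymous_chunks"):
    #     decisions = {subject: decide(texts) for subject, texts in seen.items()}
    #     write_decisions(decisions, f"usc_{i}.txt")

The resulting usc_1.txt ... usc_10.txt files can then be passed to aggregate_results.py and erisk_eval.py as in the examples above.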

Test data

At test time, we will first release chunk 1 for the test subjects and ask you for your output. A few days later, we will release chunk 2, and so forth. The output file to be sent after each release of test data must follow this format:

  • 2-column text file. The name of the file should be ORG_n.txt (where ORG is an acronym for your organization and n is the chunk number; e.g. usc_1.txt). The file should contain one line per user in the test collection:

    test_subject_id1 CODE
    test_subject_id2 CODE
    .......................


    IMPORTANT NOTE: you have to put exactly two tabs between the subject name and the CODE (otherwise, the Python evaluation script will not work!).

    test_subject_idn is the id of the test_subject (ID field in the XML files)

    CODE is your decision about the subject, three possible values:

    • CODE=0 means that you don't want to emit a decision on this subject (you want to wait and see more evidence)
    • For task 1, CODE=1 means that you want to emit a decision on this subject, and your decision is that he/she is a risk case of depression. For task 2, CODE=1 means that you want to emit a decision on this subject, and your decision is that he/she is a risk case of anorexia.
    • For task 1, CODE=2 means that you want to emit a decision on this subject, and your decision is that he/she is NOT a risk case of depression. For task 2, CODE=2 means that you want to emit a decision on this subject, and your decision is that he/she is NOT a risk case of anorexia.

If you emit a decision on a subject, then any future decision on the same subject will be ignored. For simplicity, you can include all subjects in all your submitted files, but, for each user, your algorithm will be evaluated based on the first file that contains a decision on that subject. You cannot say 0 all the time: at some point you need to make a decision on every subject (i.e. after the 10th chunk at the latest, you need to emit your decision).
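
For intuition, the sketch below reproduces this first-decision-wins rule; it is not the official aggregate_results.py, only an illustration of the same idea. Here per_chunk_files is your list of per-round output files and writings_per_subject is assumed to be a dict read from writings_per_subject_all_train.txt.

    def first_decisions(per_chunk_files, writings_per_subject, n_chunks=10):
        """Return subject -> (decision, number of writings seen before deciding)."""
        final = {}
        for i, path in enumerate(per_chunk_files, start=1):   # ORG_1.txt ... ORG_10.txt
            with open(path) as f:
                for line in f:
                    if not line.strip():
                        continue
                    subject, code = line.split()
                    if subject in final or int(code) == 0:    # earlier decision wins / still waiting
                        continue
                    # each chunk holds 10% of the subject's writings
                    seen = writings_per_subject[subject] * i // n_chunks
                    final[subject] = (int(code), seen)
        return final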

If a team does not submit the required file before the deadline, then we'll take the previous file from the same team and assume that everything stays the same (no newly emitted decisions for this round).

If a team does not submit the file after the first round, then we'll assume that the team does not take any decision (all subjects set to 0, i.e. no decision).

Each team can experiment with several models for this task and submit up to 5 files for each round. If you test different models then the files should be named: ORGA_n.txt (decisions after the nth chunk by model A), ORGB_n.txt (decisions after the nth chunk by model B), etc.

More info: [Losada & Crestani 2016]
