Downloads

Competition report


The competition report summarizes the competition challenges and tasks, the evaluation protocol, the participants' method descriptions, and the evaluation results. We provide an author copy of the official report published in the ICDAR 2021 proceedings and archived on arXiv.

Citation:

@InProceedings{chazalon.21.icdar.mapseg,
  author    = {Joseph Chazalon and Edwin Carlinet and Yizi Chen and Julien Perret and Bertrand Dum\'enieu and Cl\'ement Mallet and Thierry G\'eraud and Vincent Nguyen and Nam Nguyen and Josef Baloun and Ladislav Lenc and Pavel Kr\'al},
  title     = {ICDAR 2021 Competition on Historical Map Segmentation},
  booktitle = {Proceedings of the 16th International Conference on Document Analysis and Recognition (ICDAR'21)},
  year      = {2021},
  address   = {Lausanne, Switzerland},
}

Dataset


For each task, we provide a folder containing inputs and expected outputs (ground truth):

  • 1-detbblock is for Task 1: “Detect Building Blocks”;
  • 2-segmaparea is for Task 2: “Segment Map Area”;
  • 3-locglinesinter is for Task 3: “Locate Graticule Line Intersections”.

For each of those tasks, we provide a training set, a validation set, and a test set; a short traversal sketch is given after the following list.

  • $TASK/train contains sample inputs and expected outputs to be used to train your method.
    File names in this set start with 1NN.
  • $TASK/validation contains sample inputs and expected outputs to be used to assess the performance of your method without touching the test set.
    It should be used to calibrate the hyper-parameters of your approach.
    File names in this set start with 2NN.
  • $TASK/test contains sample inputs and expected outputs to be used to measure and report the final performance of your method.
    File names in this set start with 3NN.
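
To give a concrete picture of this layout, here is a minimal traversal sketch in Python. The mapseg-dataset root directory is an assumption for illustration; the task folders, split folders, and file-name prefixes are the ones listed above.

from pathlib import Path

# Minimal sketch, not an official loader: the dataset root below is an
# assumption; task folders, split folders and file-name prefixes follow
# the description above.
DATASET_ROOT = Path("mapseg-dataset")
TASKS = ["1-detbblock", "2-segmaparea", "3-locglinesinter"]
SPLITS = {"train": "1", "validation": "2", "test": "3"}  # split -> file-name prefix

for task in TASKS:
    for split, prefix in SPLITS.items():
        split_dir = DATASET_ROOT / task / split
        if not split_dir.is_dir():
            continue  # skip splits that are not extracted locally
        files = sorted(split_dir.glob(f"{prefix}*"))
        print(f"{task}/{split}: {len(files)} file(s) with prefix {prefix}NN")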

Participants' submissions, descriptions and evaluation reports


We release the results and method descriptions submitted by the competition participants, along with the evaluation results we computed.

Content is organized as follows.

  • Method descriptions submitted by the participants are available in the descriptions/ folder.
  • Results submitted by participants are available in the task-1/, task-2/ and task-3/ folders.
  • Metrics computed by competition organizers are available in the evaluation_t1/, evaluation_t2/ and evaluation_t3/ folders.

Evaluation tools


The evaluation tools are open-source Python 3.7+ programs, available on GitHub and archived on Zenodo.

Easy installation with pip:

pip install -U icdar21-mapseg-eval

Please check the evaluation tools' documentation for more details on how to install and use them.
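
After installation, you can quickly check that the package is visible from Python. The snippet below is a minimal sketch assuming Python 3.8+ (for importlib.metadata); it only queries the package metadata by the distribution name taken from the pip command above and makes no assumption about the tools' command-line interface.

from importlib.metadata import version, PackageNotFoundError

# Minimal installation check: look up the installed version of the
# "icdar21-mapseg-eval" distribution (name taken from the pip command above).
try:
    print("icdar21-mapseg-eval", version("icdar21-mapseg-eval"))
except PackageNotFoundError:
    print("Not installed; run: pip install -U icdar21-mapseg-eval")

For the actual evaluation commands and their arguments, refer to the documentation mentioned above.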