Research Center for Digital Sustainability

Criticality Prediction for Swiss Court Rulings

This project is available as a Bachelor's or Master's project.

Introduction

Swiss court decisions are anonymized to protect the privacy of the people involved (parties, victims, etc.). Previous research [1] has shown that, in certain cases, companies involved in court decisions can be re-identified by linking the rulings with external data. Our project goes a step further and builds an automated system for re-identifying involved people from court rulings. This system can then serve as a test of the anonymization practice of Swiss courts. For more information regarding the overarching research project please go here.

There are many reasons for people to appeal a case to a higher court: to correct wrong facts or conclusions of the lower court, to introduce new evidence, to delay the legal effect of the decision, insufficient jurisdiction in the area of the case, to occupy the other party's legal department and waste its resources, or simply defiance. In general, however, cases from lower courts that end up in the Supreme Court are probably more controversial than cases that are not appealed. We define such cases as critical and all others as non-critical, and we coin the new task Legal Criticality Prediction: classifying lower court decisions into critical and non-critical. This task can help courts decide how much time to spend on a given decision based on its criticality status, thus hopefully improving the quality of lower court decisions. That, in turn, could lower appeal rates, relieve the courts of appeal, and reduce legal costs for companies and individuals. It can also be valuable for clients and law firms when estimating the further course of a case: for example, the chances of winning in a court of appeal or the likely costs of an appeal could be estimated more accurately.

Research Questions

So far, to the best of our knowledge, the Legal Criticality Prediction task has not been studied in the literature.

RQ1: What Macro-F1 score (the unweighted average of the per-class F1 scores, a standard classification metric) can be achieved using state-of-the-art methods on the new Legal Criticality Prediction task?
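
As a point of reference for RQ1, Macro-F1 averages the per-class F1 scores, so the rare critical class counts as much as the majority class. A minimal sketch using scikit-learn; the label encoding (0 = non-critical, 1 = critical) and the toy labels are assumptions for illustration only:

```python
from sklearn.metrics import f1_score

# Hypothetical gold labels and model predictions:
# 0 = non-critical, 1 = critical (encoding assumed for illustration).
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 1, 0, 1, 0]

# Macro-F1 computes F1 per class and averages them unweighted,
# so performance on the rare "critical" class is not drowned out
# by the majority class.
macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"Macro-F1: {macro_f1:.3f}")
```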

RQ2: What biases does the model exhibit concerning different cantons, legal areas, or publication years? (For example, does it perform better in some cantons than in others?)
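
One way to probe RQ2 is to slice the test set by metadata and compare per-group scores. A sketch assuming a pandas DataFrame with hypothetical column names (canton, label, prediction); the same pattern works for legal area or publication year:

```python
import pandas as pd
from sklearn.metrics import f1_score

# Hypothetical evaluation frame; column names and values are assumptions.
df = pd.DataFrame({
    "canton": ["BE", "BE", "ZH", "ZH", "VD", "VD"],
    "label": [1, 0, 1, 0, 1, 0],
    "prediction": [1, 0, 0, 0, 1, 0],
})

# Macro-F1 per canton; large gaps between groups hint at a bias.
per_canton = df.groupby("canton").apply(
    lambda g: f1_score(g["label"], g["prediction"],
                       average="macro", zero_division=0)
)
print(per_canton)
```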

Steps

  1. Improve and polish the existing criticality prediction dataset
  2. Get a script that is provided to you ready to run experiments.
  3. Run experiments and improve the Macro-F1 score of the model by investigating avenues like the following (minimal sketches for the first three follow after this list):
    1. Combatting class imbalance
    2. Using data augmentation [2]
    3. Handling long textual input better
    4. Your own ideas
  4. Analyze the experimental results.
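
For step 3.1, one common remedy (among others, such as oversampling or focal loss) is to weight the loss inversely to class frequency. A minimal PyTorch sketch; the class counts are made up for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical class counts from the training split:
# index 0 = non-critical, 1 = critical.
class_counts = torch.tensor([9000.0, 1000.0])

# Inverse-frequency weights, normalized so they average to 1.
weights = class_counts.sum() / (len(class_counts) * class_counts)

# The weighted loss penalizes mistakes on the rare "critical"
# class more heavily than on the majority class.
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(4, 2)           # dummy model outputs
labels = torch.tensor([0, 1, 0, 1])  # dummy gold labels
print(criterion(logits, labels).item())
```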
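
For step 3.2, the survey in [2] covers many augmentation techniques. As one simple illustration in the spirit of word-level random deletion (a sketch, not a recommendation for this task):

```python
import random

def random_deletion(text: str, p: float = 0.1, seed: int = 42) -> str:
    """Drop each token with probability p to create a noisy copy
    of a training document (a simple word-level augmentation)."""
    rng = random.Random(seed)
    tokens = text.split()
    kept = [t for t in tokens if rng.random() > p]
    # Keep at least one token so the result is never empty.
    return " ".join(kept) if kept else rng.choice(tokens)

original = "Das Gericht weist die Beschwerde ab."
print(random_deletion(original))
```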
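
For step 3.3, standard transformer encoders are typically capped at around 512 tokens, while court rulings are much longer. One common workaround is to split a document into overlapping chunks, classify each chunk, and aggregate the scores; hierarchical models or long-context architectures such as Longformer are alternatives. A minimal chunking sketch, where classify_chunk is a hypothetical stand-in for the model:

```python
def chunk_tokens(tokens, max_len=512, stride=256):
    """Yield overlapping windows over a long token sequence."""
    for start in range(0, len(tokens), stride):
        yield tokens[start:start + max_len]
        if start + max_len >= len(tokens):
            break

def classify_document(tokens, classify_chunk):
    """Average chunk-level scores into one document-level score.
    classify_chunk (a hypothetical callable returning the probability
    of the 'critical' class for one chunk) stands in for the model."""
    scores = [classify_chunk(chunk) for chunk in chunk_tokens(tokens)]
    return sum(scores) / len(scores)

# Toy usage with a dummy scorer based on chunk length.
tokens = list(range(1200))
print(classify_document(tokens, lambda c: len(c) / 512))
```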

Activities

⬤⬤⬤◯◯ Programming

⬤⬤⬤⬤◯ Experimentation

⬤◯◯◯◯ Literature

Prerequisites

Good programming skills (preferably in Python)

Preferably experience with deep learning (transformers)

Contact

Joel Niklaus

References

[1] Vokinger, K.N., Mühlematter, U.J., 2019. Re-Identifikation von Gerichtsurteilen durch «Linkage» von Daten(banken). Jusletter 27.
[2] Feng, S.Y., Gangal, V., Wei, J., Chandar, S., Vosoughi, S., Mitamura, T., Hovy, E., 2021. A Survey of Data Augmentation Approaches for NLP. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021.