Research Center for Digital Sustainability

Explainability Methods for Legal Judgment Prediction in Switzerland

This project is available as a Seminar or Bachelor's project and can be taken as a group project.

Introduction

Swiss court decisions are anonymized to protect the privacy of the people involved (parties, victims, etc.). Previous research [1] has shown that, in certain cases, companies involved in court decisions can be re-identified by linking the rulings with external data. Our project goes a step further and aims to build an automated system for re-identifying people involved in court rulings. Such a system can then serve as a test of the anonymization practice of Swiss courts. For more information on the overarching research project, please go here.

We recently presented a dataset for Legal Judgment Prediction (predicting the outcome of a case from its facts) comprising 85K Swiss Federal Supreme Court decisions [2]. Although we achieved up to 70% Macro-F1 score, the models still operate as black boxes and are thus not interpretable. In this project, you will venture into the realm of explainable machine learning to better understand the predictions the models make on the Legal Judgment Prediction dataset. Many explainability methods can be tried and compared, such as SHAP, LIME, Diverse Counterfactual Explanations, Integrated Gradients, attention analysis, or probes that predict the legal area, origin canton, or citations.

Other libraries that might come in handy: Interpret-text, Captum, transformers-interpret
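As a first orientation, here is a minimal sketch of computing token-level attributions with transformers-interpret (which wraps Captum's Integrated Gradients). The checkpoint path and the facts snippet are placeholders, not project artifacts; substitute the fine-tuned Swiss-Judgment-Prediction model you are working with.

```python
# Minimal sketch: token-level attributions for a judgment prediction model
# using transformers-interpret. Checkpoint path and facts text are
# hypothetical placeholders.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

model_name = "path/to/fine-tuned-sjp-model"  # hypothetical checkpoint
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

explainer = SequenceClassificationExplainer(model, tokenizer)

facts = "Die Vorinstanz wies die Beschwerde ab, weil ..."  # case facts (stub)
word_attributions = explainer(facts)  # list of (token, attribution) pairs

print(explainer.predicted_class_name)
for token, score in word_attributions:
    print(f"{token}\t{score:+.3f}")
```

In a notebook, explainer.visualize() renders the same attributions as an HTML heatmap, which is handy for the qualitative analysis later on.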

Research Questions

To the best of our knowledge, explainable AI methods have not yet been applied to Swiss Legal Judgment Prediction.

RQ1: Can we detect whether the models rely on spurious correlations, rather than on legally meaningful content, to make their decisions?

RQ2: What insights into the prediction of legal judgment outcomes can we draw from the explanations?
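To make RQ1 concrete, one simple setup is a diagnostic probe on frozen encoder representations. The sketch below assumes [CLS] embeddings and metadata labels have already been extracted to disk (the .npy file names are hypothetical); if a linear probe predicts, e.g., the origin canton well, that information is encoded in the representations and may act as a shortcut.

```python
# Sketch of a linear probe on frozen encoder representations.
# The .npy file names are hypothetical; both arrays are assumed
# to have been precomputed from the fine-tuned model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

embeddings = np.load("cls_embeddings.npy")  # (n_decisions, hidden_size)
labels = np.load("canton_labels.npy")       # one metadata label per decision

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=42, stratify=labels
)

probe = LogisticRegression(max_iter=1000)  # linear probe on frozen features
probe.fit(X_train, y_train)
print("Probe Macro-F1:", f1_score(y_test, probe.predict(X_test), average="macro"))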

Steps

  1. Get roughly familiar with the literature on explainable NLP.
  2. Get roughly familiar with the existing code for Swiss Judgment Prediction.
  3. Get familiar with the explainability method(s) that you will try on the Swiss Judgment Prediction models.
  4. Write the code to test the chosen explainability method(s) on the models (see the sketch after this list).
  5. Analyze the results qualitatively and, where possible, quantitatively.
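By way of illustration for step 4, here is a hedged sketch of applying LIME to a fine-tuned judgment prediction model loaded with Hugging Face transformers. The checkpoint path, class names, and facts text are placeholders, not the project's actual artifacts.

```python
# Illustrative sketch for step 4: explaining a single prediction with LIME.
# Checkpoint path, class names, and facts text are hypothetical.
import torch
from lime.lime_text import LimeTextExplainer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "path/to/fine-tuned-sjp-model"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def predict_proba(texts):
    """Map a list of strings to class probabilities, as LIME expects."""
    enc = tokenizer(list(texts), truncation=True, padding=True,
                    return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

explainer = LimeTextExplainer(class_names=["dismissal", "approval"])
facts = "Die Vorinstanz wies die Beschwerde ab, weil ..."  # case facts (stub)
exp = explainer.explain_instance(facts, predict_proba, num_features=10)
print(exp.as_list())  # (word, weight) pairs for the explained class
```

For step 5, the resulting attribution scores can, for example, be aggregated over the test set and compared against metadata such as legal area or canton for the quantitative part of the analysis.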

Activities

⬤⬤⬤◯◯ Programming

⬤⬤⬤⬤◯ Experimentation

⬤◯◯◯◯ Literature

Prerequisites

Good programming skills (preferably in Python)

Preferably experience in deep learning (transformers)

Contact

Joel Niklaus

References

[1] Vokinger, K. N., Mühlematter, U. J. (2019). Re-Identifikation von Gerichtsurteilen durch «Linkage» von Daten(banken). Jusletter 27.
[2] Niklaus, J. et al. (2021). Swiss-Judgment-Prediction: A Multilingual Legal Judgment Prediction Benchmark. Natural Legal Language Processing Workshop @ EMNLP.
[3] Malik, V., Sanjay, R., Nigam, S. K., Ghosh, K., Guha, S. K., Bhattacharya, A., Modi, A. (2021). ILDC for CJPE: Indian Legal Documents Corpus for Court Judgment Prediction and Explanation. ACL/IJCNLP.