Data-driven AI tools
Workpackage leader: Marcel Worring
Researcher: Yijia Zheng
Description
My research focuses on modelling higher-order relations within multi-modal data through hypergraph learning methods. As real-world data from online platforms grows in size and complexity, analysing such extensive datasets costs the police ever more time, slowing the evidence-finding process. While traditional graph neural networks are limited to pairwise relations, hypergraphs, whose hyperedges may connect any number of nodes, are naturally suited to representing complex relations. Hypergraph learning methods have emerged as versatile and powerful algorithms for extracting higher-order relations among multiple objects. Building on these methods, a robust law enforcement tool can be developed that automatically processes complex relations in large datasets and surfaces potential evidence for the court.
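To illustrate the difference from pairwise graphs, a minimal sketch of how a hypergraph is commonly represented with a node-by-hyperedge incidence matrix (the toy entities and hyperedges here are hypothetical, not from the project):

```python
import numpy as np

# Toy hypergraph: 5 entities and 3 hyperedges. Unlike an ordinary graph
# edge, a hyperedge may join any number of nodes, e.g. e0 links entities
# {0, 1, 2} that all appear together in one event.
num_nodes = 5
hyperedges = [(0, 1, 2), (1, 3), (2, 3, 4)]

# Incidence matrix H: H[v, e] = 1 iff node v belongs to hyperedge e.
H = np.zeros((num_nodes, len(hyperedges)), dtype=int)
for e, members in enumerate(hyperedges):
    for v in members:
        H[v, e] = 1

# Node degree = number of hyperedges a node joins;
# hyperedge degree = number of nodes it connects (2 for a plain edge,
# arbitrary here).
node_deg = H.sum(axis=1)
edge_deg = H.sum(axis=0)
print(node_deg)  # [1 2 2 2 1]
print(edge_deg)  # [3 2 3]
```

Hypergraph neural networks typically propagate information along this incidence structure (node to hyperedge to node), which is what lets them capture relations among more than two objects at once.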
Knowledge-based AI tools
Workpackage leader: Floris Bex
Researcher: Roos Scheffers
Description
Within this project, my research focuses on how knowledge-based AI, specifically formal argumentation, can be formalized and evaluated based on the expertise of intelligence analysts. My goal is to assist intelligence analysts in the process from data gathering to producing evidence for court. Knowledge-based AI systems can integrate the expertise of law enforcement professionals. Using this expert knowledge, systems can be created that are interpretable from the outset and integrate seamlessly into law enforcement work. Argumentation can be used to generate interactive and contestable explanations; I will generate such explanations and determine which are suitable for law enforcement. This also benefits other parties in the criminal justice chain (lawyers, judges, prosecution), allowing them to contest AI-generated evidence.
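As a flavour of what formal argumentation computes, a minimal sketch of a Dung-style abstract argumentation framework and its grounded extension (the cautious set of acceptable arguments); the toy arguments and attack relation are hypothetical, not the project's actual models:

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework: start from unattacked arguments and iterate to a fixpoint."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in rejected:
                continue
            if attackers[a] <= rejected:
                # Every attacker of a is defeated, so a can be accepted.
                accepted.add(a)
                changed = True
            elif attackers[a] & accepted:
                # Some accepted argument attacks a, so a is rejected.
                rejected.add(a)
                changed = True
    return accepted

# Toy case: C attacks B, B attacks A. C is unattacked and accepted,
# B is therefore rejected, and A is reinstated because its only
# attacker is defeated.
args = {"A", "B", "C"}
atts = {("B", "A"), ("C", "B")}
print(grounded_extension(args, atts))  # {'A', 'C'}
```

The reinstatement of A is exactly the kind of step an interactive explanation can surface: an analyst (or a defence lawyer) who contests C can see that rejecting C would change the status of B and A as well.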
Visualization tools
Workpackage leader: Stef van den Elzen
Researcher: Kay Roggenbuck
Description
As part of the project, my research focuses on developing visual analytics tools and techniques that support technical and non-technical stakeholders in hypothesis generation, testing, and presentation. To this end, we aim to support investigators in exploring multimodal, heterogeneous data while taking legal requirements into account. Additionally, the goal is to increase the transparency and explainability of current and future AI tools through interactive visual systems. Finally, the resulting techniques should support non-technical stakeholders with simple visualizations that give insight into the data, the AI tools, and their outputs.
Guidelines for professionals
Workpackage leader: Stephan Grimmelikhuijsen
Researcher: Koen Verdenius
Description
My focus will chiefly be on the human and organizational aspects of the hybrid intelligence process. The research might concern automation bias, the effect of hybrid processes on discretionary practices, the usability of products, and how digital products are co-produced by various stakeholders. The aim is for the research to result in behavioral interventions, training, or frameworks that facilitate accountable and effective intelligence operations. Classic themes from public administration, such as legitimacy, transparency, accountability, street-level bureaucracy, and e-government, are almost certain to be covered. It is also highly likely that I will conduct experimental research in line with approaches pioneered in behavioral public administration.
Future-proof legal framework
Workpackage leader: Maša Galič
Researcher: Johan van Banning
Description
My focus will be on the legal framework for bringing AI-generated evidence to the courtroom. The use of AI tools by law enforcement is a promising way to handle and bring evidence to court, but such tools may be difficult to understand for judges, lawyers and defendants. No specific legal rules for the use of such tools exist at this moment, while fair trial principles require evidence presented in court to be understandable and contestable. The research will focus on reinterpreting existing fair trial principles, such as the equality of arms and the right to an adversarial trial, to formulate legal requirements for the transparency and contestability of such tools. It will also investigate how existing and upcoming EU regulation on topics such as data protection can help ensure that evidence handled using AI is trustworthy, transparent and contestable.