20 February 2026

Impact Assessment Human Rights and Algorithms (IAMA)

The text on this page was automatically translated and hence may differ from the original. No rights can be derived from this translation.

The Impact Assessment on Human Rights and Algorithms ('IAMA') is an instrument for discussion and decision-making around the use of artificial intelligence (AI) and other algorithms. The IAMA focuses on assessing an algorithm's impact on fundamental rights and also makes it possible to consider other relevant issues around the algorithm's use at an early stage and in a structured manner. This helps prevent an algorithm from being deployed hastily, without a thorough examination of its consequences and the accompanying risks, such as carelessness, ineffectiveness, or infringement of fundamental rights. On behalf of the Ministry of the Interior and Kingdom Relations (BZK), Utrecht University, together with Dialogic, developed a new version of the IAMA. This new IAMA is explicitly linked to the European AI Act and can be used to fulfil that regulation's obligations regarding fundamental rights (the so-called 'FRIA obligation', after the fundamental rights impact assessment the regulation requires).