Black box, Explainable AI and legal regulation
4 April 2025, 13:23
77 views


While AI systems handle hundreds of processes in finance, education, health care, and administration, they often operate as a 'black box', making it difficult for individuals to understand how their data is processed. Black-box models, in particular, fail to provide accurate and transparent information to the data subject: because they are opaque, they can infringe fundamental rights such as the right to a fair trial and privacy, and may lead to discrimination, a lack of accountability, and other harms. That is why most states have already begun to build regulatory systems and to adapt them to these changes.

This article explores the Right to Explanation (RTE), which is hindered by black-box models and solely automated systems.

RTE seeks to clarify the reasoning behind a prediction or decision. According to a publication of the European Data Protection Supervisor, explainability answers the question of why an AI system makes a particular decision and provides its justification.
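To make this concrete, here is a minimal sketch of what a per-decision explanation can look like for a toy linear credit-scoring model. The model, feature names, weights, and threshold are all illustrative assumptions invented for this example, not drawn from any real system or regulation:

```python
# Hypothetical weights of a toy linear scoring model (assumed for illustration).
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0  # hypothetical approval cut-off

def decide_and_explain(applicant):
    """Return the decision together with the reasoning behind it."""
    # Each feature's contribution to the final score.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "rejected"
    # Rank features by how strongly they pushed the score up or down,
    # so the data subject sees *why* the outcome was reached.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    reasons = [f"{name}: {value:+.2f}" for name, value in ranked]
    return decision, reasons

decision, reasons = decide_and_explain(
    {"income": 3.0, "debt": 1.0, "years_employed": 2.0}
)
# score = 0.5*3.0 - 0.8*1.0 + 0.3*2.0 = 1.3, so the application is approved,
# and `reasons` lists each feature's signed contribution to that outcome.
```

Even this trivial sketch shows the gap the article is concerned with: the decision itself is one line, while a meaningful explanation requires deliberately surfacing the reasoning in human-readable terms.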

As we strive for understandable, actionable, and meaningful explanations in automated systems, it is important to address several technical challenges and to strike a balance between the scope of explainability and its potential shortcomings. Lawyers should be familiar with some of the current complexities:

  • It is not possible to provide a proper RTE to data subjects merely by disclosing the source code of the algorithms, as source code is unintelligible to non-experts. We need more than technical formalities.
  • The risk of automation bias: If incorrect suggestions are accepted as correct due to misleading numerical and visual explanations, it could have negative consequences.
  • The balance between the complexity of variables and clarity: "Systems with more variables will typically perform better than simpler systems."
  • Over-explaining can reduce the predictive power of signals and may lead to the strategic manipulation of the system.
  • Unsupervised systems, i.e. online machine learning (ML) in which patterns are not labeled by humans but are instead discovered by the software, further complicate implementation.
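The strategic-manipulation risk in the list above can also be sketched in code: once a decision rule is fully disclosed, a subject can compute the smallest input change that flips the outcome. The rule, weight, and threshold below are illustrative assumptions, not a real scoring system:

```python
WEIGHT_DEBT = -0.8   # hypothetical weight on a "debt" feature
THRESHOLD = 1.0      # hypothetical approval cut-off

def min_debt_reduction_to_pass(score):
    """Smallest reduction in reported debt that lifts the score to the threshold."""
    if score >= THRESHOLD:
        return 0.0
    shortfall = THRESHOLD - score
    # Each unit of debt removed raises the score by |WEIGHT_DEBT|.
    return shortfall / abs(WEIGHT_DEBT)

# An applicant currently scoring 0.6 learns exactly how much debt to
# shift off the books: (1.0 - 0.6) / 0.8 = 0.5 units.
needed = min_debt_reduction_to_pass(score=0.6)
```

This is why over-explaining can erode the predictive power of a signal: once the threshold is public knowledge, reported debt no longer measures what it used to.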

Despite these challenges, and despite uncertainty about how and when automated processes can be explained and what the moral implications are, regulations are evolving worldwide. These will be discussed in the next article, with reference to the GDPR and the AI Act.

Published by Xoşqədəm