How smart algorithms learn unacceptable patterns

About the research

When you apply for your next job or loan, you might be assessed by a computer. Many banks and companies already use smart algorithms, also known as AI, to evaluate your data. But what happens if such an algorithm learns unacceptable patterns, leading to discrimination? Carmen Mazijn (VUB) created a technique called LUCID to detect how an AI model makes decisions and to determine whether the model is biased. "Together, we can ensure that computers make fair decisions without discriminating," she says.
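The article does not describe how LUCID itself works, but the kind of bias it targets can be illustrated with a much simpler, generic fairness check. The sketch below computes the gap in positive-decision rates between applicant groups (a "demographic parity" check); all names and data are hypothetical, and this is not Mazijn's method:

```python
# Illustrative sketch only: a basic group-fairness check
# (demographic parity difference), NOT the LUCID technique.
# All decisions and group labels below are made-up examples.

def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rates between any two groups."""
    rates = {}
    for g in set(groups):
        member_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(member_decisions) / len(member_decisions)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical loan decisions (1 = approved) for two applicant groups
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Approval-rate gap between groups: {gap:.2f}")  # prints 0.50 here
```

A large gap like this one (group A approved 75% of the time, group B only 25%) is the kind of pattern a bias audit would flag for closer inspection.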

Carmen Mazijn is a dynamic researcher passionate about the fairness of artificial intelligence (AI) systems. With a strong desire to make a difference in the world, she explores the complex world of decision-making processes and machine learning. By investigating the mysteries of black-box algorithms in an interdisciplinary consortium at VUB, Carmen hopes to shed light on the ethical implications of AI in our modern society.