The contemporary challenges of AI: beyond algorithms
1. Biases in AI systems
AI is only as intelligent as the data it is trained on. If the training data sets are biased, the models reproduce and amplify those biases. For example, studies have shown that some facial recognition systems perform significantly worse for groups that are under-represented in the data, such as women or non-white people. These biases can have serious consequences in areas such as hiring or criminal justice.
Researchers are actively working on remedies: augmenting the data sets, introducing fairness metrics, or adjusting the algorithms themselves to detect and correct these biases, as in the sketch below.
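To make the idea of a fairness metric concrete, here is a minimal sketch that computes the demographic parity difference, one common measure of disparity, on purely hypothetical predictions; the arrays and variable names are invented for the example.

import numpy as np

# Hypothetical model predictions (1 = positive outcome) and a sensitive attribute (0/1).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Selection rate per group: the share of positive predictions each group receives.
rate_g0 = y_pred[group == 0].mean()
rate_g1 = y_pred[group == 1].mean()

# Demographic parity difference: 0 means both groups receive positive
# predictions at the same rate; larger values indicate disparity.
dp_diff = abs(rate_g0 - rate_g1)
print(f"Selection rates: {rate_g0:.2f} vs {rate_g1:.2f}, difference = {dp_diff:.2f}")

In practice such a metric would be computed on a real validation set and monitored alongside accuracy, since reducing the disparity often involves a trade-off with raw predictive performance.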
2. Explainability: understanding complex models
Modern AI models, particularly those based on deep learning, are often described as black boxes: their inner workings are difficult to interpret, even for experts. In critical areas such as healthcare or finance, this opacity is a problem, because it is essential to understand why a decision was made. Approaches such as local explainability methods (LIME, SHAP) or attention mechanisms help make these models more transparent.
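The sketch below illustrates the core idea behind local explainability in the spirit of LIME, without using the LIME library itself: sample perturbations around one prediction of a "black-box" model, then fit a small interpretable surrogate to its outputs. The data, the random-forest stand-in, and all variable names are assumptions made purely for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic data and a "black-box" model standing in for a complex classifier.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Instance whose prediction we want to explain.
x0 = X[0]

# Sample perturbations around x0 and query the black box on them.
perturbations = x0 + rng.normal(scale=0.3, size=(1000, 4))
probs = black_box.predict_proba(perturbations)[:, 1]

# Weight samples by proximity to x0, then fit an interpretable linear surrogate.
weights = np.exp(-np.linalg.norm(perturbations - x0, axis=1) ** 2)
surrogate = Ridge(alpha=1.0).fit(perturbations, probs, sample_weight=weights)

# The surrogate's coefficients act as local feature importances for this one prediction.
print("Local feature importances:", surrogate.coef_)

The explanation is only valid near the chosen instance: a different input would get a different surrogate, which is exactly what "local" means here.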
3. Security and robustness against adversarial attacks
Another major issue is the vulnerability of AI systems to adversarial attacks. Small perturbations, imperceptible to the human eye, can trick a model into making incorrect decisions. In the case of autonomous vehicles, for example, a minor alteration to a road sign could cause it to be misread and lead to an accident.
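One classic way to craft such a perturbation is the fast gradient sign method (FGSM): move the input a small step in the direction that increases the model's loss. The sketch below shows the idea on a toy PyTorch classifier; the model, batch, and epsilon value are placeholders chosen only for illustration.

import torch
import torch.nn as nn

# A small stand-in classifier; in practice this would be a trained model.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, y, epsilon=0.1):
    """Fast gradient sign method: perturb x in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # A step of size epsilon along the sign of the gradient, imperceptible for small epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.randn(8, 10)          # a batch of hypothetical inputs
y = torch.randint(0, 2, (8,))   # their true labels
x_adv = fgsm_attack(x, y)       # adversarial versions of the same inputs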
Defences include mechanisms for hardening the models themselves, such as adversarial training, as well as systems capable of detecting these anomalous inputs in real time.
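Adversarial training simply mixes such perturbed examples into the training loop so the model learns to classify them correctly as well. Below is a minimal self-contained sketch; the toy model, random batches, and hyperparameters are assumptions standing in for a real task.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epsilon = 0.1

for step in range(100):
    # Toy batch; a real dataset would be used in practice.
    x = torch.randn(32, 10)
    y = torch.randint(0, 2, (32,))

    # Generate adversarial examples with a one-step FGSM perturbation.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Train on clean and adversarial examples together.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()

The trade-off is higher training cost and often a small drop in accuracy on clean inputs, in exchange for robustness to the perturbations seen during training.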
4. Regulation and technological sovereignty
Although frameworks such as the GDPR have been in place for several years, they are not sufficient to address the technical specificities of AI. The European AI Act, currently in preparation, seeks to fill this gap by establishing standards for the development and use of AI. Major challenges remain, however, not least ensuring fair competition with American and Asian technology giants.