Artificial Intelligence test engineering
Artificial Intelligence can process huge volumes of data and perform complex tasks, usually very efficiently. However, AI models, and more specifically Machine Learning and Deep Learning models, are often described as “black boxes” because of how opaquely they operate.
A European regulation, known as the AI Act, is due to be voted on at the end of 2023 to govern the use and marketing of artificial intelligence. Its aim is to support innovation while guaranteeing users’ security and their rights in terms of data protection, transparency and ethics.
AI testing is becoming essential, especially for AI used in mission-critical systems such as medical diagnostic tools, autonomous driving and aviation.
However, conventional software testing techniques are generally not applicable to these systems. To meet this need, Kereval has developed new testing processes for AI.
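As an illustration of what such a process can look like (a minimal sketch, not Kereval's own method), a common technique for testing a learned model without an exact output oracle is a metamorphic test: rather than asserting a single expected output, the test asserts a relation that should hold between outputs, for example that predictions stay stable under a small, meaning-preserving change to the input. The `predict_fn` callable, the image layout and the thresholds below are hypothetical.

```python
# Illustrative sketch only: a metamorphic robustness check for an image
# classifier. `predict_fn` is a hypothetical callable that maps a batch of
# images of shape (N, H, W, C) with values in [0, 1] to predicted labels.
import numpy as np

def metamorphic_brightness_test(predict_fn, images, max_shift=0.05, min_agreement=0.98):
    """Small brightness shifts should rarely change the predicted class.

    A learned model has no exact output oracle, so the assertion is a
    relation between two runs of the model rather than a fixed value.
    """
    original = predict_fn(images)
    shifts = np.random.uniform(-max_shift, max_shift, size=(len(images), 1, 1, 1))
    perturbed = np.clip(images + shifts, 0.0, 1.0)
    agreement = float(np.mean(predict_fn(perturbed) == original))
    return agreement >= min_agreement, agreement
```

A check like this runs on existing evaluation data and does not require labelling new inputs, which is why this style of test fits the low-cost testing environments mentioned below.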
Identifying your needs
We offer you:
- Consulting on best practices in test engineering and validation of AI systems
- Consulting on AI verification and validation methodologies and testing tools (explainability and robustness)
You need to:
- Verify the expected behavior of the developed model before it is deployed in production,
- Deploy a low-cost software testing environment without the need to collect specific new data,
- Improve your AI algorithms (targeted data augmentation),
- Convince your customers of the robustness of your application through objective metrics (see the sketch below).
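One such objective metric is sketched here, assuming a classification model wrapped in a hypothetical `predict_fn` and a labelled evaluation set (`x_eval`, `y_eval`): the accuracy retained as Gaussian noise of increasing amplitude is added to the inputs.

```python
# Minimal sketch, assuming a hypothetical `predict_fn(x) -> labels` and a
# labelled evaluation set (x_eval, y_eval) held as NumPy arrays: accuracy
# retained under increasing Gaussian input noise, a simple objective
# robustness metric that can be reported to customers.
import numpy as np

def noise_robustness_curve(predict_fn, x_eval, y_eval, noise_levels=(0.0, 0.01, 0.05, 0.1)):
    curve = {}
    for sigma in noise_levels:
        noisy = x_eval + np.random.normal(0.0, sigma, size=x_eval.shape)
        curve[sigma] = float(np.mean(predict_fn(noisy) == y_eval))
    return curve  # maps noise standard deviation to accuracy on the noisy inputs
```

The noise levels at which accuracy starts to drop also indicate where targeted data augmentation is likely to pay off.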
Our support
To help you comply with the European AI Act, our audit services provide the indicators you need to communicate with your customers (robustness, ethics, explainability, traceability, etc.). This assessment will also help you improve your AI systems.
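As an illustrative sketch of how one such indicator can be computed (not the content of the audit grid itself), permutation feature importance gives a simple explainability signal for tabular models: the accuracy lost when a single input feature is shuffled. The `predict_fn`, `x_eval` and `y_eval` names below are hypothetical stand-ins for the audited pipeline.

```python
# Illustration only: permutation feature importance as one possible
# explainability indicator. `predict_fn`, `x_eval` (2-D feature matrix) and
# `y_eval` (labels) are hypothetical stand-ins for the audited system.
import numpy as np

def permutation_importance(predict_fn, x_eval, y_eval, seed=0):
    rng = np.random.default_rng(seed)
    baseline = float(np.mean(predict_fn(x_eval) == y_eval))
    drops = []
    for j in range(x_eval.shape[1]):
        shuffled = x_eval.copy()
        # Break the link between feature j and the target by permuting that column.
        shuffled[:, j] = rng.permutation(shuffled[:, j])
        drops.append(baseline - float(np.mean(predict_fn(shuffled) == y_eval)))
    return np.array(drops)  # a larger accuracy drop means a more influential feature
```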
Grid comprising 50 AI pipeline verification points, including:
- Technical obligations
- Documentation requirements
- Best practices
Grid based on the AI Act and on standards and reference frameworks:
- European Commission: AI Act, GDPR
- ISO/IEC 24029-1:2021
- AI HLEG: Ethics guidelines for trustworthy AI & ALTAI
- OECD AI Principles
Audit report:
- Areas of non-compliance with the AI Act
- Aspects of the AI pipeline to be improved (methods, data quality, etc.)
- Suggested tools to complete AI pipeline assessment