EASA launches Machine Learning Application Approval (MLEAP) research project

The European Union Aviation Safety Agency (EASA) launched a call for tenders for the “Machine Learning Application Approval” (MLEAP) research project, funded by the Horizon Europe research and innovation programme. It selected APSYS, a subsidiary of Airbus, to carry out the MLEAP project in partnership with LNE and Numalis.

Last July, Airbus decided to merge the services activities of Airbus Cybersecurity with those of Apsys, which specializes in security and industrial risk management, creating Airbus Protect, which will therefore lead the project over the coming two years.

The National Metrology and Testing Laboratory (LNE) and Numalis, an innovative French software company providing tools and services aimed at making AI reliable and explainable, have therefore been collaborating with Airbus Protect on the project since May 2022. The project concerns the approval of machine learning (ML) technology for systems intended for use in safety-related applications across all domains covered by the EASA Basic Regulation, and is funded by the Horizon Europe programme to the tune of €1,475,400.

EASA has been interested for several years in potential applications of machine learning and deep learning in safety-critical contexts. It published its Artificial Intelligence Roadmap in February 2020, followed in April 2021 by a concept paper, “First usable guidance for Level 1 machine learning applications”. This concept paper presents an initial set of objectives for Level 1 AI (human assistance), anticipating future EASA guidance and requirements for safety-related machine learning (ML) applications.

Project objectives

The partners will focus on streamlining the certification and approval processes by identifying concrete means of compliance with the learning assurance objectives of the EASA guidance for AI/ML applications (Levels 1, 2 and 3 as defined in the EASA AI Roadmap), with a focus on Level 1B (enhanced human assistance) and Level 2 (human/machine collaboration); at Level 3, the machine becomes more autonomous.

The medium-term effect of the project will be to ease some of the remaining restrictions on the acceptance of ML in safety-critical applications.

Expected results

The research outputs will consist of a set of reports identifying methods and tools to address the following three major topics:

  • Guarantees on the “generalization of the machine learning model”
  • Guarantees on the “completeness and representativeness of the data”
  • Guarantees on the “robustness of the algorithm and the model”

Alongside these reports, at least one full-scale aviation use case is to be developed to demonstrate the effectiveness and ease of use of the proposed methods and tools.
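
For illustration only, the following Python sketch shows one naive way to look at the first topic above: it measures an empirical generalization gap as the difference between training accuracy and held-out accuracy. The scikit-learn digits dataset, the random-forest model and the 70/30 split are arbitrary assumptions made for this example, not choices from the project.

```python
# Purely illustrative sketch, not part of the MLEAP deliverables: an empirical
# "generalization gap" measured as the difference between training accuracy
# and held-out accuracy. Dataset, model and split sizes are arbitrary choices.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)

print(f"training accuracy:            {train_acc:.3f}")
print(f"held-out accuracy:            {test_acc:.3f}")
print(f"empirical generalization gap: {train_acc - test_acc:.3f}")
```

A small gap on one held-out split is of course only a weak indicator; the project is precisely about establishing stronger, certifiable guarantees than such point estimates.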

The work will be distributed as follows:

  • Task 1: Methods and tools for the assessment of the completeness and representativeness of datasets (training, validation and testing) in data-driven ML and DL;
  • Task 2: Methods and tools for quantification of generalization guarantees for ML and DL models;
  • Task 3: Methods and tools for verifying the robustness/stability of ML algorithms and models;
  • Task 4: Communication, dissemination, knowledge sharing, stakeholder management;
  • Task 5: Project management.
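
For illustration only, a naive reading of Task 1's representativeness assessment could start from a comparison of per-class frequencies across dataset splits, as in the Python sketch below; the digits dataset and the 70/30 split are arbitrary assumptions, and real learning assurance would go far beyond class balance.

```python
# Purely illustrative sketch, not part of the MLEAP deliverables: a crude
# representativeness check that compares per-class frequencies between the
# training and test splits of a labelled dataset.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

_, y = load_digits(return_X_y=True)
y_train, y_test = train_test_split(y, test_size=0.3, random_state=0)

classes = np.unique(y)
train_freq = np.array([np.mean(y_train == c) for c in classes])
test_freq = np.array([np.mean(y_test == c) for c in classes])

# A large gap for any class suggests that one split under-represents it.
worst_gap = np.max(np.abs(train_freq - test_freq))
print(f"largest per-class frequency gap between splits: {worst_gap:.3f}")
```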
