No one today doubts that artificial intelligence (AI) can create numerous opportunities for businesses. It makes it possible to predict individuals' needs, to act in advance, to adapt projects, and to respond to the major challenges arising in many areas.
Despite these benefits, the risks of AI should not be neglected: its development and use must respect the privacy of individuals and ensure the protection of their personal data.
GDPR and AI
AI enables the creation of sophisticated machines and applications that can replace human intelligence by using algorithms to arrive at solutions. Developing these algorithms requires large amounts of data, much of it personal data. The use of these data is, in fact, inevitable, since they allow the tools to progress and evolve. The link between the GDPR and AI is therefore obvious. AI raises complex questions, particularly with regard to the protection of personal data, and AI players must pay particular attention to the legal issues surrounding it. These issues need to be identified and taken into account right from the system design phase. More concretely, a balance must be struck between compliance with legal rules on the one hand and the development of the technologies on the other.
The risks of AI
The risks that the use of AI can present are numerous. They may relate to the data used for system development, to the very development of the tools, or to the data collected via these machines. They can arise, in particular, when the data used to design the solutions are not compliant or when privacy requirements are not taken into account from the design stage. Avoiding these risks, however, is not easy. The CNIL's Digital Innovation Laboratory has also highlighted the difficulty of protecting AI systems, noting that the statistical nature of their construction makes them vulnerable to attacks.
With this in mind, the European Union has made this theme a priority, and several texts have been adopted in this area, including a resolution "on a comprehensive European industrial policy on artificial intelligence and robotics" promoting a common approach aimed at facilitating the development of these technologies, making the most of their advantages and reducing the risks as far as possible. The European Commission has also sought to clarify which AI systems are prohibited, by limiting the cases in which their use is tolerated. This intent can only be welcomed, as it contributes to the design of responsible and trustworthy AI.
Innovation yes, but with respect for personal data
Since AI technologies, and in particular those based on machine learning, are inherently intrusive, care should be taken not to infringe the rights and freedoms of individuals when designing and using these tools. The approach adopted must take into account the fundamental principles of the GDPR, including in particular the lawfulness, fairness and proportionality of the processing. It must also allow risks to be reduced, or even prevented:
Use of compliant data
The data used for development must have been collected in compliance with data protection requirements, and they must not be used for illegitimate purposes. It is therefore necessary to verify the compliance of these data beforehand and to identify the purposes for which they may be processed. The data used must also be essential to achieving the objectives. The quantity of data should thus be evaluated on a regular basis in order to reduce the amount of data used during the learning phase.
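The data minimization idea above can be illustrated with a small sketch: before the learning phase, each record is stripped down to the fields strictly needed for the declared purpose. The field names and records here are purely hypothetical, not taken from any real system, and this is an illustration of the principle, not a compliance implementation.

```python
# Hypothetical sketch of data minimisation before a learning phase:
# keep only the fields essential to the declared purpose, drop the rest.
# Field names and records are illustrative assumptions.

ESSENTIAL_FIELDS = {"age_band", "region", "purchase_count"}  # purpose-bound

def minimise(record: dict) -> dict:
    """Strip a record down to the fields needed for the declared purpose."""
    return {k: v for k, v in record.items() if k in ESSENTIAL_FIELDS}

raw_records = [
    {"name": "A. Person", "email": "a@example.com",
     "age_band": "30-39", "region": "EU", "purchase_count": 4},
]

# Direct identifiers (name, email) never reach the learning phase.
training_records = [minimise(r) for r in raw_records]
print(training_records)
```

Re-running such a filter regularly, as the quantity of data is re-evaluated, keeps the training set aligned with the purposes identified at the outset.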
Compliance with the principle of Privacy by Design
Data protection principles must be taken into account upstream of any project. These rules must also be respected during all phases of the project and collaboration between the various actors must be ensured to guarantee effective data protection throughout their life cycle.
Compliant collection of personal data
The collection of personal data via these tools must comply with data protection requirements. Individuals must therefore be informed, and their consent must be collected where that legal basis applies. In particular, care should be taken not to store data for longer than is necessary to achieve the predefined purposes, and to apply measures that sufficiently secure the data.
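The storage-limitation point can be sketched as a simple periodic purge: records older than the retention period tied to the predefined purpose are removed. The retention period and record layout below are illustrative assumptions, not a legal recommendation.

```python
# Hypothetical sketch of a storage-limitation check: records past the
# retention period defined for the declared purpose are removed.
# The period and record layout are illustrative assumptions.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # example period tied to a predefined purpose

def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Keep only records still within their retention period."""
    return [r for r in records if now - r["collected_at"] <= RETENTION]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "collected_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "collected_at": datetime(2022, 1, 1, tzinfo=timezone.utc)},
]
kept = purge_expired(records, now)
print([r["id"] for r in kept])  # only the record within the period remains
```

In practice such a purge would run as a scheduled job against the actual data store, with the retention period documented per processing purpose.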
The CNIL has put in place several tools to help the actors concerned to set up compliant AI systems:
the right questions to ask yourself before using an artificial intelligence system
the steps to take to guarantee the quality and transparency of the system
the actions to implement to secure processing and prevent breaches and attacks
the means to promote transparency and allow people to exercise their rights.
This responsibility for ensuring the compliance of the systems falls not only on service providers but also on developers and manufacturers, including in particular the algorithm designers, who must guarantee this compliance throughout the life of the applications.
To find out more
Ola Mohty, lawyer and GDPR expert at Data Legal Drive, is actively involved in various issues relating to data protection.