“AI and profiling: ethical and legal risks” by Céline Castets-Renard

The International Center for Comparative Criminology (CICC), affiliated with the University of Montreal and the University of Quebec at Trois-Rivières, organized, together with the Profiling Observatory, the CICC’s 2022-2023 scientific season, which took place from October 24 to 28. The event opened with a series of conferences on the theme “Artificial intelligence and profiling”. The conference “AI and profiling: ethical and legal risks” was presented by Céline Castets-Renard, Full Professor at the Faculty of Civil Law of the University of Ottawa, where she holds the Research Chair on Responsible Artificial Intelligence in a Global Context; she also holds the Law Chair at ANITI (Artificial and Natural Intelligence Toulouse Institute).

The CICC, born in 1969 from a scientific partnership between the University of Montreal and the International Society of Criminology, is one of the most important criminological research centers in the world. In addition to its 63 regular researchers, it brings together 104 collaborators from Quebec, the rest of Canada and abroad. Holding doctorates in criminology, psychology, political science, law, sociology, anthropology, social work, history, economics, forensic science, biology and chemistry, the CICC’s regular researchers and collaborators seek to understand, from a multidisciplinary perspective, the processes that regulate criminal behavior, as well as the different methods of intervention deployed by public, private and community institutions to deal with it. The Center regularly organizes international debates and conferences on criminal and security issues.

For this event, it joined forces with the Profiling Observatory (ODP), which brings together more than thirty researchers, around twenty community and institutional partners, and around forty master’s and doctoral students. Their work focuses on the practices and experiences of profiling in the police, justice, correctional services, youth protection, health and social services, social assistance and migration sectors.

The conference “AI and profiling: ethical and legal risks” presented by Céline Castets-Renard

The conference first addresses some of the issues raised by AI and profiling, then presents problematic use cases of facial recognition in Canada, including that of Clearview AI, and finally turns to the responses of Canadian law and their limits, including Bill C-27.

The challenges of AI and profiling

Céline Castets-Renard’s talk begins with a presentation of automated decision-making systems, their uses and their impact on society: predictions (weather, terrorism, security, etc.), recommendations, decision-making assistance. These systems, supervised or not, are trained on data such as age, race, etc., and produce classifications that can lead to profiling and even discrimination. The first challenge is therefore the choice of data.
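
To make the mechanism concrete, here is a minimal sketch, with entirely hypothetical features, weights and thresholds (none of them from the conference), of how such a scoring system turns correlations in its inputs into decisions. The `neighborhood_rate` feature stands in for the kind of proxy variable through which profiling can creep in even when no protected attribute is used directly.

```python
# Toy automated decision system: a linear score over applicant features.
# All weights are hypothetical; in practice they would be learned from data.

def risk_score(applicant: dict) -> float:
    """Compute a toy risk score as a weighted sum of features."""
    # 'neighborhood_rate' can act as a proxy for protected attributes,
    # which is how profiling arises without using them explicitly.
    weights = {"age": -0.02, "prior_incidents": 0.5, "neighborhood_rate": 0.8}
    return sum(weights[k] * applicant[k] for k in weights)

def decide(applicant: dict, threshold: float = 0.4) -> str:
    """Turn the continuous score into a binary decision."""
    return "flag" if risk_score(applicant) > threshold else "clear"

# Two applicants identical except for where they live: only one is flagged.
print(decide({"age": 25, "prior_incidents": 1, "neighborhood_rate": 0.6}))  # flag
print(decide({"age": 25, "prior_incidents": 1, "neighborhood_rate": 0.1}))  # clear
```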

Training data

Bias and discrimination often stem from poor-quality or insufficient data. While some groups are over-represented, there are many cases of under-representation, such as data on femicides, particularly in Mexico.
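
A first, simple check against this problem is to audit how each group is represented in a training set before any model is trained. The sketch below, on made-up records, shows the kind of disaggregated count that reveals under-representation:

```python
from collections import Counter

# Hypothetical training records: group "B" is badly under-represented,
# so a model trained on this data will see little of its variation.
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "A", "label": 1}, {"group": "B", "label": 1},
]

counts = Counter(r["group"] for r in records)
for group, n in counts.items():
    print(f"group {group}: {n}/{len(records)} = {n / len(records):.0%} of the training data")
```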

Céline Castets-Renard cites as an example of predictive justice the COMPAS case, a score calculating the risk of recidivism for certain categories of the population. The tool showed the same overall error rate for Black and Caucasian individuals, but it was not specified that the errors ran in favor of the latter and against the former: Black defendants were more often wrongly rated high-risk, while Caucasian defendants were more often wrongly rated low-risk.
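
The arithmetic behind this point is worth spelling out. The numbers below are illustrative, not the actual ProPublica figures, but they show how two groups can share the same overall error rate while the direction of the errors differs sharply: for one group the errors are mostly false positives (wrongly flagged as high risk), for the other mostly false negatives (wrongly rated low risk).

```python
# Illustrative confusion counts per group (not the real COMPAS data).
stats = {
    "Group 1": {"neg": 100, "fp": 45, "pos": 100, "fn": 20},
    "Group 2": {"neg": 100, "fp": 20, "pos": 100, "fn": 45},
}

for group, s in stats.items():
    fpr = s["fp"] / s["neg"]  # non-recidivists wrongly flagged high risk
    fnr = s["fn"] / s["pos"]  # recidivists wrongly rated low risk
    err = (s["fp"] + s["fn"]) / (s["neg"] + s["pos"])
    print(f"{group}: overall error {err:.1%}, FPR {fpr:.0%}, FNR {fnr:.0%}")

# Both groups show a 32.5% overall error rate, yet the FPR/FNR split is
# reversed: the "equal error rate" claim hides who bears which harm.
```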

Since humans tend to trust the decisions of machines more than their own expertise, these kinds of mistakes can have a big impact. The speaker gave another example of misleading success, that of facial recognition, announced at around 85% accuracy, and presented the work of Joy Buolamwini on gender and race bias and discrimination in this technology. Her studies showed that black women were underrepresented in the training data, leading to a high error rate for them. In the case of Amazon’s facial recognition system, the success rate is 100% for white men but drops to 68.6% for black women.
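
The methodological lesson of this work is that a single aggregate accuracy figure hides these gaps: the evaluation has to be disaggregated by subgroup. Here is a small sketch of such an evaluation, in the spirit of Buolamwini’s audit but on placeholder predictions and labels:

```python
from collections import defaultdict

def disaggregated_accuracy(samples):
    """samples: iterable of (subgroup, predicted, actual) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for subgroup, predicted, actual in samples:
        totals[subgroup] += 1
        hits[subgroup] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

# Placeholder gender-classification results, not real benchmark data.
samples = [
    ("lighter-skinned male", "M", "M"), ("lighter-skinned male", "M", "M"),
    ("darker-skinned female", "F", "F"), ("darker-skinned female", "M", "F"),
    ("darker-skinned female", "M", "F"),
]
for group, accuracy in disaggregated_accuracy(samples).items():
    print(f"{group}: {accuracy:.0%} accuracy")
```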

Facial recognition, authentication and identification

For Céline Castets-Renard, facial recognition is an intrusive technology for individuals. She questions the lack of transparency in public action, citing the case of two young women claiming to be of Somali origin who had obtained political refugee status, a status the Refugee Protection Division (RPD) later invalidated on the grounds that their photos bore a strong resemblance to those of two Kenyan students…

Regarding identification, she refers to the case of an Indigenous man arrested in a store for having allegedly stolen from it a few months earlier. This man had reportedly been watched particularly closely because of his origins.

The responses of the law

In Canada, the laws governing personal data differ depending on whether the private or the public sector is concerned. For the private sector, Bill C-27 is under consideration, reforming the PIPEDA Act of 2000, while the public sector is governed by the Privacy Act of 1985, which therefore does not cover AI systems. On the other hand, a directive on automated decision-making, aimed at assessing the risks of such systems, has been adopted by the Treasury Board.

The speaker raises the question of the use of Clearview AI’s facial recognition technology by services other than the police, particularly within the immigration service. She highlights the Canadian authorities’ failure to impose heavy sanctions on the company, in contrast to Australia, some European countries and the French CNIL, before turning to the lack of transparency of automated decisions within the Canadian immigration service.
