AI and data management: beware of infringements!

GUEST BLOG. Generally speaking, I believe that ordinary people are not fully aware of the ever-growing amount of information they share on the internet, information that companies now have access to.

In addition to the social media platforms on which we happily lay out our private lives, we use a host of services, online and offline, that store and use our personal data for various purposes. The majority of websites are, admittedly, secure and under an obligation to protect the information they handle, but none of them is truly immune to a breach.

Remember the Desjardins data theft of 2019, one of the worst incidents of its kind in the country, which turned the lives of millions of people and businesses upside down. Since then, Quebecers have become particularly suspicious, and not without reason: our personal data are prized commodities, sold around the world for astronomical sums.

Artificial intelligence, which is used to make decisions that are often important to people's lives, is by no means excluded from this ethical debate. Like any self-respecting technology, it must comply with privacy regulations, a task that is far from easy since AI needs a great deal of data to function. For example, some companies use data streams from social media or our phones to train machine learning systems, allowing them to monitor consumer behaviour, manage customer relationships and carry out strategic marketing analysis.

Companies that are active on multiple platforms can also obtain data from many sources. Some of it is provided directly by users, such as contact information, while the rest is collected through cookies, including third-party cookies. As a result, it becomes difficult to know exactly what personal information is collected and where it is stored.

Because it is such a new technology, AI is ultimately subject to very little oversight or accountability. This raises the question of how to establish and maintain users' confidentiality and integrity while obtaining and processing the data that feeds the algorithms so many areas of business now depend on.

For companies that use or want to use artificial intelligence to improve their operations, this can be a real headache. Those without strong privacy management policies or processes run the risk of having their databases lost, stolen, exposed, or hacked — breaches with far-reaching and costly consequences.

Regulations vary widely from region to region, making compliance difficult. Some parts of the world, however, are much further ahead than we are on this front. The European Union, for example, has just introduced the AI Act, an ambitious bill intended to ensure that AI systems placed on the market respect the fundamental rights of citizens. Its clauses are exhaustive and make particular mention of data collection, yes, but also of other more controversial applications of artificial intelligence such as deepfakes, facial recognition and biometric data.

Such a law is certain to spark strong reactions and pave the way for other countries, Canada included. As proof, the federal government recently introduced Bill C-27, the very first legislation addressing artificial intelligence and data.

In summary, this bill aims to “frame the design, development and deployment of AI systems to protect individuals from potential harm”, and it applies to all companies that carry out a regulated activity related to artificial intelligence and that use anonymized data to feed their algorithms. Organizations would have to define clear data protection policies, perform risk assessment and mitigation, monitor their systems' compliance on an ongoing basis, and demonstrate transparency.
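To make the idea a little more concrete, here is a minimal sketch, in Python, of what “feeding an algorithm anonymized data” can look like in practice: direct identifiers are replaced with a salted pseudonym and the location is coarsened before the records ever reach a training pipeline. The field names and salt are hypothetical, and pseudonymization alone does not necessarily meet the legal bar for anonymization; treat this as an illustration, not a compliance recipe.

```python
# Hypothetical sketch: stripping direct identifiers from customer records
# before they are used to train a model. Field names are made up.
import hashlib

RAW_RECORDS = [
    {"name": "Alice Tremblay", "email": "alice@example.com",
     "postal_code": "H2X 1Y4", "monthly_spend": 182.50},
    {"name": "Bob Gagnon", "email": "bob@example.com",
     "postal_code": "G1R 5P3", "monthly_spend": 74.10},
]

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace direct identifiers with a salted hash and drop the raw PII."""
    token = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:16]
    return {
        "customer_token": token,               # stable pseudonym instead of name/email
        "region": record["postal_code"][:3],   # coarsen location to reduce re-identification risk
        "monthly_spend": record["monthly_spend"],
    }

training_rows = [pseudonymize(r, salt="rotate-me-regularly") for r in RAW_RECORDS]
print(training_rows)
```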

Parliament will set up a tribunal dedicated to the protection of personal information, define security requirements and impose penalties on those who violate these regulations. Moreover, the most serious offences could carry a penalty of up to $25 million, or 5% of the organization's gross revenue. Food for thought.

There is no worthwhile artificial intelligence without a good dose of ethics, that's for sure. Since AI models depend on the quality of their data to deliver effective results, they must be built around respect for users and their personal information, so as to prevent abusive or inaccurate analysis and bias against individuals subject to automated decisions.

Businesses here should exercise vigilance and rigour with regard to the artificial intelligence systems they operate. Even though many algorithms do not actually use data deemed personal or sensitive, it is still important to foster a culture of transparent, human and responsible AI within your technological infrastructure from the moment it is deployed. Otherwise, watch out for violations!
