Taking the measure of artificial intelligence


Artificial intelligence (AI) has entered a new era characterized by four trends: a homogenization of its techniques, a massive acceleration of its adoption, a growing information asymmetry between the private and public sectors, and a crystallization of geopolitical tensions.

Homogenization and foundation models

The history of AI is recent (post-1950) but already rich in technological breakthroughs that have profoundly transformed its commercial applications. The latest is the emergence of AI models of gigantic size, which improve and systematize the development of predictive algorithms. The field of natural language processing provides a particularly illuminating example of this phenomenon. Before 2018, each specific task (sentiment prediction, fake-news detection…) required a dedicated AI model whose development was particularly expensive. The introduction of the BERT model (Bidirectional Encoder Representations from Transformers) in 2018 changed the game: BERT digests textual information and transforms it into a more synthetic form that traditional machine learning algorithms can use directly. The new AI paradigm thus proceeds from a homogenization of learning techniques, through two-step information processing: extraction-synthesis first, then calibration for a specific task. The very complex extraction-synthesis phase is made possible precisely by the introduction of foundation models, in Stanford University's terminology, of which BERT is one. Scaling up these foundation models holds great promise, but it comes with a new form of systemic risk: their inherent flaws carry over to the many algorithms they spawn. This phenomenon of intermediation complicates the measurement of the societal and economic impact of AI for public decision-makers.
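
To make the two-step paradigm concrete, here is a minimal sketch in Python, assuming the Hugging Face transformers and scikit-learn libraries are available; the model choice, sentences and labels are illustrative, not drawn from the article.

```python
# A minimal sketch of the two-step paradigm: a frozen foundation model
# (here BERT, via Hugging Face's `transformers`) performs the
# extraction-synthesis step, and a traditional classifier is then
# calibrated on the resulting vectors for a specific task.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Step 1 -- extraction-synthesis: map sentences to fixed-size vectors."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        output = encoder(**batch)
    # Mean-pool the token embeddings into one vector per sentence.
    return output.last_hidden_state.mean(dim=1).numpy()

# Step 2 -- calibration: a toy sentiment-prediction task (labels illustrative).
texts = ["great product, works perfectly", "utterly disappointing purchase"]
labels = [1, 0]  # 1 = positive sentiment, 0 = negative
classifier = LogisticRegression().fit(embed(texts), labels)
print(classifier.predict(embed(["I would happily buy this again"])))
```

The same frozen encoder can feed any number of downstream tasks, which is precisely the intermediation, and the systemic risk, described above: a flaw in the encoder propagates to every classifier built on it.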

Massive adoption that is accelerating

Artificial intelligence is moving from the lab to commercial use at a pace never seen before. Its adoption is massive: according to the Californian company OpenAI, one of the world leaders in the sector, the models GPT-3 (text generation), DALL-E (image generation) and GitHub Copilot (computer code generation) have each passed the symbolic bar of one million users. We are also witnessing an impressive acceleration of this adoption: where GPT-3, introduced in 2020, took two years to reach one million users, DALL-E 2, introduced in 2022, took only two months to reach that mark. These abrupt shifts are forcing public decision-makers to react with an ever-greater sense of urgency.

An information asymmetry between the private and public sectors

The specific resources needed to develop new AI systems have given private research pre-eminence over public research. The development of foundation models requires not only data collection on a very large scale but also enormous computing power, whose costs can run to tens of millions of dollars, resources often out of reach of the academic world. The result is a considerable information asymmetry between public universities and private research centers, with a pernicious feedback effect: being the only ones able to produce the most efficient models, the private centers attract the best talent and race ahead alone. In particular, the gap between the reality of the technology deployed in industry and the perception that public decision-makers have of it has widened considerably, at the risk of focusing the debate on technological chimeras that distract from the real societal problems posed by AI. Private companies also maintain a technological vagueness around the real capabilities of the models they develop: rigorous scientific publications are often upstaged by resounding, scientifically opaque press releases. The researcher Gary Marcus coined the neologism "demoware" for this editorialization of AI, where the illusion of a polished demonstration sometimes masks the immaturity of a technology. To rebalance the power relationship, Stanford University has pushed the idea of a national research cloud that would provide adequate resources for public research.

The increasing geopoliticization of AI

On the international scene, geopolitical tensions are crystallizing. Between powers, first, where the concepts of techno-sovereignism and techno-nationalism are being put into practice. When Naver, Google's Korean competitor, announced that it could replicate generative text models as powerful as those of its American rivals, its press release specified that "unlike the English-centric GPT-3 model, this also means securing the sovereignty of AI by developing a language model optimized for Korean". The Chinese example is equally emblematic of this new techno-nationalism: the Ministry of Science and Technology has drawn up a list of companies intended to form a "national team for AI" capable of projecting Chinese power. Moreover, a new form of diplomacy is emerging between states and technology platforms. On the one hand, states appoint digital ambassadors to the GAFA; on the other, the platforms recruit experts tasked with anticipating the geopolitical reactions their AI systems could provoke. This growing geopoliticization of AI was perceived as early as July 2021 by Antony Blinken: "democracies must pass the technological test together" and "diplomacy […] has a big role to play in this regard". In France, Emmanuel Macron recently underlined the need "to combine what it is to be a diplomat with extremely specialized knowledge in technology".

The four trends of this new era make AI a difficult technology to grasp. Faced with growing technological uncertainty, states are organizing themselves to take stock of the transformations under way, in order to identify the opportunities and vulnerabilities of AI, a major challenge for the coming decade.

Multilateral initiatives to measure the large-scale impacts of AI: a mode of governance to be rethought

States are organizing themselves in a plurilateral framework, together with industry and civil society, to better assess and measure the macroeconomic impacts of AI (the future of work, economic impact, inequalities, industrial change, etc.). But initiatives such as the Global Partnership on AI (GPAI, known in French as PMIA) remain very academic: technical reports on the state of progress of AI, exchanges of best practices between countries, deliverables that set out general principles without providing actionable paths. The working groups lack leverage because they have no direct technical execution capacity to, among other things, prioritize resources or plan long-term funding. To bring these partnerships into more direct contact with the latest advances in AI within companies, and to put in place more operational plurilateral strategies, their mode of governance remains to be rethought. The priority is to diversify expertise and to develop assessment tools capable of raising concrete points of vigilance with public decision-makers.

The regulation of uses at the level of AI systems and technology players

The development of regulation adapted to AI systems is becoming essential: the point is to take into account the technological specificities of the field and to build new, adapted legislative frameworks, like the AI Act under negotiation at the European Commission. Crucially, any body of legislation relating to AI must rest on the development of appropriate tools for auditing existing systems. In this regard, the National Institute of Standards and Technology (NIST) already conducts audits of facial recognition systems, not only from the point of view of their performance but also of their demographic biases (with respect to women and racialized people in particular). To ensure a common basis for comparison, NIST has designated reference databases that are not accessible to the audited companies. The development of audit systems specific to foundation models, which irrigate all the others, is also decisive for regulating the AI of tomorrow. Stanford University intends to stimulate this avenue of inspecting foundation models by organizing a competition this year rewarding the development of operational tools able to answer questions such as: are the model's decisions stable over time? Are women under-represented by the model?
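
The kind of audit tooling this paragraph calls for can be illustrated with a short, hypothetical sketch in Python: it measures whether an opaque model's error rate differs across demographic groups, one of the bias questions raised above. Everything below (the function, the synthetic data, the stand-in model) is an assumption for illustration, not NIST's or Stanford's actual methodology.

```python
# A hedged sketch of one audit primitive evoked above: comparing a model's
# error rate across demographic groups on held-out labeled data. The data,
# group labels and stand-in model are hypothetical; real protocols such as
# NIST's are more elaborate and rely on sequestered benchmarks.
import numpy as np

def error_rate_by_group(predict, X, y_true, groups):
    """Return the error rate of `predict` for each demographic group."""
    y_pred = predict(X)
    return {
        g: float(np.mean(y_pred[groups == g] != y_true[groups == g]))
        for g in np.unique(groups)
    }

# Illustrative synthetic data standing in for an audit benchmark.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y_true = rng.integers(0, 2, size=200)
groups = rng.choice(["group_a", "group_b"], size=200)
audited_model = lambda X: (X[:, 0] > 0).astype(int)  # opaque system under audit

print(error_rate_by_group(audited_model, X, y_true, groups))
# A large gap between the groups is a concrete point of vigilance to report.
```

Run over time on fresh data, the same primitive also speaks to the stability question: an auditor can track whether the per-group error rates drift between evaluation rounds.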

Think long term

The implementation of plurilateral initiatives and the construction of audit tools cannot substitute for the question of which values should prevail in the deployment of AI. The precepts of trustworthy AI, promoted by the European Commission, are relatively recent in the development of this technology. Historically, three tendencies have accompanied advances in the field without anticipating the risks they induced: human-machine competition (rather than cooperation), autonomy from any human supervision, and centralization of resources. Today, many public institutions (the Center for Human-Compatible AI at Berkeley), NGOs (the Cooperative AI Foundation) and private companies (Redwood Research, Anthropic) are looking into the question of the alignment of values between humans and machines. This question is all the more relevant as companies such as DeepMind or OpenAI have the stated objective of developing an AI that would exceed human cognitive capacities.

The technological revolution brought about by AI is also anthropological, in that it unsettles our societies. At the same time, the updating of our political frames of reference is blurred by the sector's new technical and geopolitical paradigms. Note that this difficulty in grasping AI is not specific to public decision-makers: in recent years, spectacular advances in the field have constantly thwarted the predictions of the best experts. In this context of growing uncertainty, the development of adequate measurement tools has never been more crucial.
