Little by little, AI is making its way into companies. Many use cases are being explored, piloted and deployed at scale. But do we really know the difference between a genuine inference engine and a simple algorithm?
From an artisan and agrarian civilization to an industrial one, and today to a civilization built on knowledge and technology, change seems to follow change at an ever faster pace on the third planet of the solar system — an acceleration that looks exponential rather than linear. In just a few years, AI has gone from deep-tech status to mainstream technology, trivialized by being invoked everywhere and claimed as a standard feature in many products that do not actually have it.
Because there is a chasm between a real inference engine and algorithms, however sophisticated they may be. An algorithm is a set of instructions executed when a trigger is detected. The same instructions always run on the same triggers; no adaptation to changing conditions is possible unless the developer has anticipated it and built it into the code. Admittedly, such algorithms can be more or less sophisticated in order to handle complex situations, but their operating principle remains immutable: every situation, and the action it triggers, must be foreseen and coded into the application. These algorithms amount to more or less complex decision trees.
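To make the distinction concrete, here is a minimal sketch of such a fixed, rule-based "algorithm". Every trigger and every response is hard-coded in advance; the names and thresholds are illustrative, not drawn from any real product.

```python
def rule_based_response(event: dict) -> str:
    """Return a fixed action for a known trigger; nothing adapts over time."""
    # Each branch is a trigger/action pair the developer wrote in advance.
    if event.get("failed_logins", 0) > 5:
        return "lock_account"
    if event.get("cpu_load", 0.0) > 0.9:
        return "throttle_process"
    # Any situation the developer did not foresee falls through unchanged.
    return "no_action"

print(rule_based_response({"failed_logins": 7}))  # -> lock_account
print(rule_based_response({"disk_io": 0.99}))     # -> no_action (unforeseen)
```

However many branches are added, the structure stays a decision tree: a novel situation simply falls through to the default.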
AI, a term used inappropriately
AI, on the other hand, is based on complex algorithms gathered into inference engines, which can modify their response according to the events that trigger them. They are endowed with the capacity to learn and adapt (not to be confused with machine learning as such) and can produce responses that were never explicitly programmed. This is AI's greatest advantage and also its most feared drawback, because it can make the wrong decisions. We all remember the “misogynistic” AI that refused loans to women.
Knowing this, it seems obvious that the term AI is used inappropriately by software publishers and hardware manufacturers. Some computer makers, for example, claim that their machines are equipped with AI that adapts their operation to users' work habits in order to streamline work, lower power consumption or protect the system. That claim is misleading: the real AI sits in their data centers, which analyze how the computers they sell are used. As Claire Loffler, security engineer at Vectra AI, points out: “The capabilities, limitations, implications and even motivations of AI are regularly discussed in public forums, often with the effect of obscuring or exaggerating the truth. The term ‘AI’ itself is frequently used as a catch-all, particularly in cybersecurity, referring to mysterious technologies that claim to be the cure for all enterprise ills.”
It is important to ask the right questions
So what should you do? “It is important to ask the right questions,” says Claire Loffler: knowing whether an AI will keep its promises comes down to four of them.
1. Anomaly detector or threat hunter?
A simple anomaly scan is likely to overwhelm security teams if it is not accompanied by more information. True AI will rely on intelligence from outside the organization. Solutions that merely detect internal anomalies are of little use: not all anomalies turn out to be threats upon examination, and many genuine threats use cloaking mechanisms that hide their behavior or pass it off as an authorized or innocuous action.
AI platforms need to address these questions. Non-AI solutions create new problems: they swell the flood of alerts and shift the burden of investigation onto security teams, while overlooking the real threats. True AI solutions examine behaviors and history to minimize noise and deliver more contextual, actionable alerts.
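The idea can be illustrated with a hypothetical triage function (not Vectra's actual method): a raw anomaly score alone would escalate everything, while weighting it with behavioral history and external threat intelligence suppresses benign outliers. All field names, thresholds and the threat feed here are invented for illustration.

```python
KNOWN_BAD_IPS = {"203.0.113.7"}  # stand-in for an external threat-intel feed

def triage(alert: dict) -> str:
    """Combine a raw anomaly score with context before deciding."""
    score = alert["anomaly_score"]            # 0..1 from some detector
    if alert["source_ip"] in KNOWN_BAD_IPS:
        score += 0.5                          # outside intelligence raises it
    if alert.get("seen_before", False):
        score -= 0.3                          # matches historical baseline
    return "escalate" if score >= 0.8 else "suppress"

# Same raw score, opposite outcomes once context is applied:
print(triage({"anomaly_score": 0.6, "source_ip": "203.0.113.7"}))
print(triage({"anomaly_score": 0.6, "source_ip": "198.51.100.2",
              "seen_before": True}))
```

The point of the sketch is the contrast: a context-free detector would treat both alerts identically and leave the analyst to sort them out.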
2. What should be the role of AI?
If AI is only an add-on to a solution, used solely to solve peripheral problems, its potential will not be fully exploited. AI must also be able to address fundamental operational challenges; it must sit at the heart of a system's functionality and management. In short, it is very important to know where the AI is deployed and where it operates.
3. What about its creators?
As experience has shown, an AI inherits the characteristics of its designers and developers. A look at the team behind the solution says it all: what is their expertise in data science, security research, even psychology? Many disciplines and skills are required to design AI that delivers value. Also review the vendor's support commitments, to help you get the most out of your investment.
4. What promises are made?
If an AI-based solution is touted as a panacea for all ills, beware: AI does not see everything and cannot do everything. We have recently lived through a major collective technological transformation, and new complexities have emerged — hybrid cloud, multicloud, the proliferation of opaque third-party networks and rogue endpoints, and the growing popularity of SaaS and PaaS. Exaggerated promises are nothing new, but in this environment of intense pressure on the cybersecurity function, the temptation to believe them is growing. The best way forward is through experience, agility and continuous improvement. Over time, true AI will keep improving, while over-promises will shatter against reality.