Without quality data, AI remains useless

Data collection is a key element within organizations, and many of them seem to have made good progress in this area.

However, collecting data at scale does not mean companies are using it effectively.

AI, an illusory promise?

According to Fortune Business Insights, the global AI market is expected to grow from $387.45 billion in 2022 to $1,394.30 billion by 2029, at a CAGR of 20.1%.

But a growing number of studies point out that the technology delivers no meaningful results unless it is fed quality data. Used effectively, that data can play a crucial role in organizational processes such as forecasting and decision making.

The latest global report from data integration specialist Fivetran comes to the same conclusion as other studies: the majority (71%) of the 550 IT professionals and data scientists surveyed say they have difficulty accessing all the data needed to run AI programs, workloads and models.

This finding is significant, as data is vital for model training and deployment. One cannot run an AI program without laying a solid foundation for data storage and movement, starting with a data lake to automate data ingestion and pre-processing.
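To make the ingestion and pre-processing step concrete, here is a minimal sketch of what automated landing into a data-lake staging layer can look like. The directory layout, file formats, and the "drop rows with missing values" rule are illustrative assumptions, not a specific vendor's pipeline.

```python
import csv
import json
from pathlib import Path

def ingest(source_dir: str, lake_dir: str) -> int:
    """Stage raw CSV files into a data-lake directory as JSON Lines,
    dropping rows with missing values as a basic pre-processing step.
    Returns the number of rows staged. (Illustrative sketch only.)"""
    staged = 0
    lake = Path(lake_dir)
    lake.mkdir(parents=True, exist_ok=True)
    for src in Path(source_dir).glob("*.csv"):
        out = lake / (src.stem + ".jsonl")
        with src.open(newline="") as f, out.open("w") as g:
            for row in csv.DictReader(f):
                # Keep only complete records; real pipelines would
                # quarantine bad rows rather than silently drop them.
                if all(v not in (None, "") for v in row.values()):
                    g.write(json.dumps(row) + "\n")
                    staged += 1
    return staged
```

In practice this role is played by managed ingestion tools rather than hand-written scripts, but the principle is the same: data must land in a consistent, queryable form before any model can be trained on it.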

Obstacles to data access

In this survey, almost all respondents confirmed that they collect and use data from operational systems (supply chain, manufacturing and maintenance, and product lifecycle management) at some level.

However, 69% said they struggle to access the right information at the right time, while 73% said they have difficulty extracting, loading and transforming data and translating it into actionable advice and insights for decision makers.
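The "transform" step respondents struggle with is ultimately about turning raw records into something a decision maker can act on. A minimal sketch, assuming a hypothetical order schema with `region` and `amount` fields:

```python
from collections import defaultdict

def transform(records):
    """Aggregate raw order records (hypothetical schema: region, amount)
    into per-region revenue totals, a simple decision-ready metric."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"]] += float(r["amount"])
    return dict(totals)

# Example: three staged records collapse into two actionable figures.
rows = [
    {"region": "EU", "amount": "10"},
    {"region": "US", "amount": "5"},
    {"region": "EU", "amount": "2.5"},
]
summary = transform(rows)  # {"EU": 12.5, "US": 5.0}
```

Even a transformation this simple breaks down when upstream data is incomplete or inconsistently typed, which is the gap the survey respondents describe.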

As a result, even though a large number of organizations (87%) view AI as critical to business survival, they fail to make the most of it.

Only a minority of professionals indicate that they have been using ML/AI methodologies to build models in enterprise applications for more than a year.

“Looking at the reasons why some do not build models from enterprise applications to automatically make predictions and/or business decisions, the common factor is competence: either the professionals have the skills, but their attention is focused elsewhere, or they don’t have the skills at all,” the report reads.

Their manual, broken data processes result in inaccurate models, which erodes trust and drives a return to human decision making. Respondents said inefficient data processes forced them to rely on human-made decisions 71% of the time.

In fact, only 14% said they had reached advanced AI maturity, defined as using general-purpose AI to automatically generate predictions and drive business decisions.

Waste of time

Additionally, the financial impact is significant, with respondents estimating that they lose an average of 5% of their global annual revenue due to models built from inaccurate or poor quality data.

The challenges associated with data movement, processing, and availability also mean that talent hired to build AI models ends up wasting time on tasks outside of their primary job.

In the Fivetran survey, respondents said their data scientists spend an average of 70% of their time on data preparation alone. As many as 87% of respondents acknowledged that data science talent within their organization is not being used to its full potential.

“Analytical teams that use a modern data stack can more easily extend the value of their data and maximize their investments in AI and data science,” George Fraser, CEO of Fivetran, said in the study.
