AI lacks computing power; for IBM, the answer lies in chips

Close-up of IBM’s artificial intelligence unit chip. Picture: IBM

The hype suggests that artificial intelligence (AI) is already everywhere, but in reality the technology behind it is still maturing. Many AI applications are powered by chips that were never designed for AI: general-purpose CPUs and GPUs originally built for gaming. This mismatch has led to a flurry of investment – from tech giants such as IBM, Intel and Google, as well as start-ups and venture capitalists – in new chips expressly designed for AI workloads.

As technology improves, business investment will surely follow. According to Gartner, AI chip revenues totaled more than $34 billion in 2021 and are expected to reach $86 billion in 2026. Additionally, according to the research firm, less than 3% of data center servers in 2020 included workload accelerators, with more than 15% expected to do so by 2026.

IBM Research, for its part, has just unveiled the Artificial Intelligence Unit (AIU), a prototype chip specialized in AI.

“We are running out of computing power. AI models are growing exponentially, but the hardware needed to train these behemoths and run them on servers in the cloud, or on edge devices like smartphones, laptops and sensors, has not progressed as quickly,” IBM said.

Run deep learning models

The AIU is the first system-on-chip (SoC) from the IBM Research AI Hardware Center designed expressly to run enterprise deep learning models.

IBM argues that the “workhorse of traditional computing”, otherwise known as the CPU, was designed before the arrival of deep learning. While CPUs are good for general applications, they aren’t as good at training and running deep learning models that require massively parallel AI operations.

“There’s no doubt in our minds that AI is going to be a fundamental driver of computing solutions for a long, long time,” Jeff Burns, director of AI Compute for IBM Research, told ZDNET. “It’s going to be infused into the IT landscape, into these complicated enterprise IT infrastructures and solutions, in a very broad and pervasive way.”

For IBM, it makes more sense to build comprehensive solutions that are effectively universal, according to Burns, “so that we can integrate these capabilities into different compute platforms and support a very, very wide variety of business AI requirements.”

Save resources

The AIU is an application-specific integrated circuit (ASIC), but it can be programmed to perform any type of deep learning task. The chip features 32 processing cores built with 5nm technology and contains 23 billion transistors. The layout is simpler than a CPU, designed to send data directly from one compute engine to another, making it more energy efficient. It’s designed to be as easy to use as a graphics card and can be plugged into any computer or server with a PCIe slot.

To save energy and resources, the AIU uses approximate computing, a technique developed by IBM that trades calculation precision for efficiency. Traditionally, computing has relied on 64- and 32-bit floating-point arithmetic, which provides a level of precision useful for finance, scientific calculations, and other applications where fine detail matters. However, that level of precision is not really necessary for the vast majority of AI applications.

“If you think about tracing the path of a self-driving vehicle, there is no exact position in the lane where the car should be,” says Jeff Burns. “There’s a range of places in the lane.”

Neural networks are fundamentally inexact – they produce probabilistic outputs. For example, a computer vision model may tell you with 98% certainty that you are looking at a photo of a cat. Even so, neural networks were initially trained with high-precision arithmetic, which consumed a great deal of energy and time.
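To make that probabilistic output concrete, here is a toy sketch in plain NumPy (unrelated to any IBM software; the labels and scores are invented) of a classifier head turning raw scores into probabilities with softmax:

```python
import numpy as np

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    exps = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exps / exps.sum()

# Hypothetical raw scores for three classes.
labels = ["cat", "dog", "fox"]
logits = np.array([6.0, 2.0, 1.5])

probs = softmax(logits)
best = labels[int(np.argmax(probs))]
print(f"{best}: {probs.max():.1%}")  # prints "cat: 97.1%" -- confident, never certain
```

The network's answer is always a probability distribution over classes, never an exact statement, which is why aggressive precision reduction costs it so little.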

Verticals

The AIU’s approximate computing lets it drop from 32-bit floating-point arithmetic to bit formats holding a quarter as much information.
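As a rough illustration of what such precision reduction costs, the sketch below quantizes float32 values down to 8 bits – a quarter of the information – using a simple symmetric int8 scheme. This is a generic textbook technique, not IBM's actual number format:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 1.0, size=1000).astype(np.float32)  # stand-in model weights

# Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)   # 8 bits per value
dequantized = quantized.astype(np.float32) * scale      # back to float for comparison

# Worst-case rounding error is at most half a quantization step.
max_err = float(np.abs(weights - dequantized).max())
print(f"max quantization error: {max_err:.4f} (step = {scale:.4f})")
```

The reconstruction error stays below half a quantization step, which is negligible for workloads whose outputs are probabilities to begin with.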

For the chip to be truly universal, IBM did not stop at hardware innovation. IBM Research has also emphasized foundation models, with a team of 400 to 500 people working on them. Unlike AI models designed for a single task, foundation models are trained on a large set of unlabeled data, creating a resource akin to a gigantic database. Then, when you need a model for a specific task, you can retrain the foundation model using a relatively small amount of labeled data.
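That retraining step can be sketched in a few lines. Everything below – the random "frozen encoder", the data, the sizes – is a made-up stand-in for illustration, not IBM's actual workflow:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a frozen, pretrained base model: a fixed random feature map.
W_frozen = rng.normal(size=(4, 16))
def encode(x):
    return np.tanh(x @ W_frozen)  # "embeddings" from the base model

# A relatively small labeled set for the downstream task.
X = rng.normal(size=(40, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# "Fine-tuning" here means fitting only a small linear head on the embeddings,
# via plain gradient descent on the logistic loss; the base model never changes.
feats = encode(X)
head = np.zeros(16)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-feats @ head))
    head -= 0.2 * feats.T @ (p - y) / len(y)

acc = float(((1.0 / (1.0 + np.exp(-feats @ head)) > 0.5) == y).mean())
print(f"training accuracy with a small head: {acc:.0%}")
```

The point is the division of labor: the expensive pretraining happens once, while each downstream task only fits a lightweight head on a small labeled set.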

Through this approach, IBM intends to tackle different verticals and different AI use cases. The company is building foundation models for a handful of areas, including chemistry and time-series data. Time-series data, which simply refers to data collected at regular intervals, is essential for industrial companies that need to monitor the operation of their equipment. After building foundation models for these key areas, IBM can develop more specific, vertical-focused offerings. The team has also ensured that the AIU’s software is fully compatible with IBM’s Red Hat software stack.

Source: ZDNet.com
