“Posits”: a new kind of number that could revolutionize AI

Even if we don’t always realize it, artificial intelligence is an integral part of our daily lives. Did you know, for example, that it is at work behind automatic translation tools, and even behind Google’s email service, Gmail?

If how these systems work remains obscure to most of us, it is partly because they require immense computing power. For example, it took a million billion billion operations to train OpenAI’s most advanced language model, GPT-3, at a cost of a whopping $5 million.

However, according to IEEE Spectrum, there is a way to reduce these costs: a different way of representing numbers, called “posits”.

We owe this invention to engineers John Gustafson and Isaac Yonemoto, who conceived posits as an alternative to traditional floating-point arithmetic. The goal was to find a new way of encoding the real numbers.

Since then, a research team at the Complutense University of Madrid has implemented the standard in a brand-new processor core, and the results are quite encouraging: the accuracy of a basic computing task reportedly increased fourfold.

A possible revolution in mathematics

To understand the magnitude of the advance that posits represent, one must keep in mind that the real numbers can never be encoded perfectly, since there are infinitely many of them.

In the classical system, many reals must therefore be rounded to fit into a fixed number of bits, the bit being the smallest unit of information in a computer. Posits use the same number of bits, but distribute them differently, concentrating precision where calculations need it most.
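Rounding is easy to see in practice. The snippet below is a small illustration (not tied to posits): Python floats are standard IEEE-754 doubles, and even a value as ordinary as 0.1 has no exact binary representation, so the computer stores the nearest value it can encode.

```python
from decimal import Decimal

# Decimal(0.1) reveals the exact value actually stored for the
# literal 0.1 once it has been rounded to a 64-bit binary float.
print(Decimal(0.1))
# → 0.1000000000000000055511151231257827021181583404541015625

# These tiny rounding errors accumulate through arithmetic:
print(0.1 + 0.2 == 0.3)   # False
```

Every finite-width number format, floats and posits alike, must round; the formats differ only in where they round the least.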

In particular, posits offer greater accuracy for the numbers calculations use most often, those close to 1, at the cost of precision at the extremes. “It is a better match for the natural distribution of numbers in a calculation,” describes Gustafson. “It’s the right precision, where you need it. There are so many bit patterns in floating-point arithmetic that no one ever uses; it’s wasteful.”

In their tests, the Complutense University team compared calculations performed with 32-bit floats against the same calculations performed with 32-bit posits. They concluded that the improvement in accuracy did not come at the expense of computation time, though it did require more chip area and power.

It remains to be seen whether, despite the undeniable gains in numerical precision, the training of large AIs will really benefit from this new standard. “It’s possible that posits speed up training, since you lose less information along the way, but we don’t know yet,” explains David Mallasén Quintana, a researcher at the Complutense University of Madrid. “People have tried them in software; now we want to try them in hardware.”
