OpenAI has released a new tool based on artificial intelligence: it allows you to create 3D models from text, much as Dall-E does with images or ChatGPT with text. An AI that could well help creative people with their three-dimensional productions.
OpenAI has announced the release of a new generative tool: Point-E, which creates 3D point clouds from text prompts. The AI start-up co-founded by Elon Musk thus does it again after Dall-E and ChatGPT, which have met with great success in recent months thanks to their impressive range of possible uses.
According to the team led by Alex Nichol, "Point-E can often produce consistent, high-quality 3D shapes for complex queries", complete with color. It may not be the best tool out there, but its creators are certain of one thing: it is by far the fastest.
An AI that requires far fewer resources than others to create 3D objects
For OpenAI's engineers, their tool's greatest feat is its speed and, by extension, the low computing power it needs to run. In a scientific paper, they explain that for text-driven 3D object generation, "newer methods typically require several GPU hours to produce a single sample". That is a lot when Dall-E manages to create images in seconds and Meta even manages to create videos.
OpenAI has therefore opted for an alternative method "which produces 3D models in just 1-2 minutes on a single GPU". To date, Point-E has been trained on millions of 3D models converted into a standard computer format.
The AI first generates a single synthetic view using a text-to-image model (like Dall-E). From this generated image, the program then produces a coarse 3D point cloud of 1,024 points, which it refines in a second step to reach 4,096 points.
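The coarse-to-fine idea can be illustrated with a minimal sketch. This is NOT the real Point-E code (which OpenAI has open-sourced on GitHub); it is a toy stand-in, assuming only that stage one yields a 1,024-point cloud and stage two densifies it to 4,096 points:

```python
# Toy sketch of a coarse-to-fine point-cloud pipeline (illustrative only,
# not the actual Point-E implementation).
import numpy as np

def coarse_point_cloud(rng, n_points=1024):
    """Stand-in for stage 1: produce a coarse cloud of 1,024 (x, y, z)
    points (here: random points on a unit sphere instead of a real
    image-conditioned diffusion model)."""
    v = rng.normal(size=(n_points, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def upsample(points, factor=4, noise=0.02, rng=None):
    """Stand-in for stage 2: refine/densify the cloud to 4x as many
    points by jittering copies of the coarse points."""
    rng = rng or np.random.default_rng(0)
    dense = np.repeat(points, factor, axis=0)
    return dense + rng.normal(scale=noise, size=dense.shape)

rng = np.random.default_rng(42)
coarse = coarse_point_cloud(rng)   # stage 1 output: 1,024 points
dense = upsample(coarse, rng=rng)  # stage 2 output: 4,096 points
print(coarse.shape, dense.shape)   # (1024, 3) (4096, 3)
```

In the real system, both stages are diffusion models; the sketch only mirrors the data shapes flowing through the pipeline.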
Why Point-E does better than other artificial intelligences
Each of these two steps takes only a few seconds on its own, which makes it possible to create 3D models much faster. Point-E aims to combine the advantages of two generation methods:
- The first consists of training 3D generative models directly on paired text-3D data;
- The second uses pre-trained text-to-image models and then lifts the generated images into 3D.
It is precisely by avoiding expensive per-sample 3D optimization that Point-E manages to reduce the required computing power, unlike other experimental AIs today. Although other programs manage to generate three-dimensional objects, the developers explain that their main problem lies in "optimization procedures" that consume a great deal of GPU resources, which is what prevents them from becoming practical tools.
Point-E's other great strength is that it uses images, not 3D objects, as a training base: the former are available in much larger quantities. This is what allows it to generate items of all types and from more complex descriptions.
Possible uses of this OpenAI tool
Point-E could revolutionize the creation of 3D content and thus help modelers in their work, whether for cinema or video games. One can imagine integration into a game engine such as Unreal Engine, allowing objects generated in a few seconds from a textual description to be dropped into a virtual world.
Especially since its competitive advantage is also the most practical one: its speed of execution. Facing it is Google's DreamFusion, another heavyweight in this area.
Enough to help in the creation of metaverses and, more generally, virtual-reality and augmented-reality worlds. For now, the software still has limitations: above all, the resolution of the generated objects remains quite low, but the engineers have ideas for improving it.
There are, of course, biases that automatic generation tools can suffer from, and this is partly why Point-E is not accessible to everyone on the OpenAI site: one could very well ask the AI to generate a weapon blueprint, for example, which must be avoided at all costs. However, Point-E has been published on GitHub, where anyone can consult the project since it is open source.