More AI for more productivity

Artificial intelligence (AI) was – as expected – a cross-cutting topic at the Google Cloud Next’22 conference, covering areas such as everyday productivity, Industry 4.0 and data science.

Simplifying access to artificial intelligence is a leitmotif of every cloud provider, even if each goes about it in its own way. No wonder, then, that many of the flagship announcements at Google Cloud Next’22, held this week, revolve around AI use cases and ways to make AI more immediately accessible to companies.

Multiplying the use cases of computer vision, improving everyone’s productivity, and offering data scientists new tools: these were the three main axes of Google’s announcements at the event.

Vertex AI Vision to interpret video streams

Google announced Vertex AI Vision, a new extension of its AI/ML development environment, this time dedicated to the analysis of images and video streams. The objective: simplify and accelerate the creation and deployment of computer vision applications, whether for Industry 4.0 (typically for parts inspection or compliance with safety rules on work sites), healthcare, retail (monitoring and automatic inventory of stock), and so on.

Google explains that “Vertex AI Vision can reduce the time to build computer vision applications from weeks to just hours, at one-tenth the cost of current offerings. To achieve these efficiencies, Vertex AI Vision offers an easy-to-use drag-and-drop interface and a whole library of pre-trained ML models for common tasks such as occupancy counting, product recognition and object detection. In addition, the solution also lets you import your existing AutoML models or your custom ML models from Vertex AI into your Vertex AI Vision applications.”
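Conceptually, the drag-and-drop interface described above chains pre-built analysis nodes into a pipeline. The sketch below is purely illustrative – these classes are hypothetical stand-ins for pre-trained building blocks such as person detection and occupancy counting, and have nothing to do with the actual Vertex AI Vision SDK:

```python
# Illustrative sketch only: hypothetical stand-ins for the kind of
# pre-trained nodes Vertex AI Vision chains together. NOT the real SDK.

class Stage:
    """A single processing node in a vision pipeline."""
    def process(self, frame):
        raise NotImplementedError

class PersonDetector(Stage):
    """Stand-in for a pre-trained person-detection model."""
    def process(self, frame):
        # Pretend each frame is a dict carrying pre-computed detections.
        frame["detections"] = frame.get("raw_people", [])
        return frame

class OccupancyCounter(Stage):
    """Stand-in for the 'occupancy counting' pre-trained task."""
    def process(self, frame):
        frame["occupancy"] = len(frame["detections"])
        return frame

class Pipeline:
    """Chains stages the way a drag-and-drop graph would."""
    def __init__(self, stages):
        self.stages = stages

    def run(self, frame):
        for stage in self.stages:
            frame = stage.process(frame)
        return frame

pipeline = Pipeline([PersonDetector(), OccupancyCounter()])
result = pipeline.run({"raw_people": ["p1", "p2", "p3"]})
print(result["occupancy"])  # 3
```

The point of the graph model is that each node only needs to agree on the frame format, so pre-trained and custom models can be freely mixed in one pipeline.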

AI Agents to improve everyone’s productivity

There are those who have the skills to develop their own AI specifically tailored to their needs. And there are those who don’t have the skills and are looking for “turnkey” services that they can call from their business applications. While Vertex AI Vision is typically aimed at the former, Google Cloud’s new “AI Agents” are aimed at the latter.
“These are technologies that allow customers to apply the best of AI to common business challenges, with limited technical expertise,” explains Google.

At Google Cloud Next’22, the hyperscaler announced Translation Hub, a self-service translation service that translates documents into 135 languages enterprise-wide.

Translation Hub combines Google Cloud AI technology, Google Neural Machine Translation and AutoML to make it easy to ingest and translate content from the most common types of business documents, including Google Docs and Slides, PDFs and Microsoft Word files.
The Hub makes it possible to manage translations efficiently: translation preserves layouts and formatting, and the Hub provides granular management controls (including security and cost control) and enables interactive review of documents by collaborators. Different departments within the company can create their own glossaries, store what Google calls “translation memories” (in other words, teach the AI to remember how a set of words was corrected and translated by employees during machine-translation review), and customize the translation AI with AutoML.
Translation Hub is available in two subscription tiers: Basic and Advanced. Post-editing of translated content, integration of AutoML models and translation memories are reserved for the Advanced tier.
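At its core, the “translation memory” idea described above is a lookup of human-validated translations consulted before falling back to machine translation. Here is a minimal, hypothetical sketch of that behavior (a conceptual illustration, not the Translation Hub API):

```python
# Minimal sketch of a translation memory: human-corrected segments are
# reused verbatim; anything else falls back to machine translation.
# Conceptual illustration only, not the Translation Hub API.

def make_translator(memory, machine_translate):
    """Return a translate(segment) function backed by a translation memory.

    memory: dict mapping source segments to human-approved translations.
    machine_translate: fallback function for unseen segments.
    """
    def translate(segment):
        if segment in memory:
            return memory[segment]          # reuse the reviewed translation
        return machine_translate(segment)   # fall back to MT
    return translate

# Hypothetical usage: a small memory plus a dummy MT engine.
memory = {"chiffre d'affaires": "revenue"}
translate = make_translator(memory, lambda s: f"<MT:{s}>")

print(translate("chiffre d'affaires"))  # revenue
print(translate("bilan"))               # <MT:bilan>
```

A reviewed correction only needs to be made once; every later document containing the same segment reuses it automatically, which is what makes the memory grow in value over time.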

In addition to Translation Hub, Google introduced enhancements to two existing services now referred to as “AI Agents.”
Document AI automates document processing by creating analysis workflows. This AI Agent gains two new features: Document AI Workbench, which simplifies information extraction from unstructured documents (by letting you create your own templates), and Document AI Warehouse, which makes it possible to search, store and govern documents and their metadata (to better automate document classification, for example).
Contact Center AI is an intelligent agent designed to meet contact-center needs, from intelligent customer routing to transcript analysis to conversational bots.
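To make the idea of extracting structured information from unstructured documents concrete, here is a toy sketch. The hand-written regex “template” below is only a conceptual stand-in – Document AI Workbench uses trained ML models, not regexes – and the field names are invented for illustration:

```python
import re

# Toy illustration of template-based field extraction from unstructured
# text. Real Document AI Workbench uses trained ML models; this regex
# "template" and its field names are invented stand-ins.

INVOICE_TEMPLATE = {
    "invoice_number": re.compile(r"Invoice\s+#?(\w+)"),
    "total": re.compile(r"Total:\s*\$?([\d.]+)"),
}

def extract_fields(text, template):
    """Apply each named pattern and collect the first match, if any."""
    fields = {}
    for name, pattern in template.items():
        match = pattern.search(text)
        if match:
            fields[name] = match.group(1)
    return fields

doc = "Invoice #A1042\nACME Corp\nTotal: $199.50"
print(extract_fields(doc, INVOICE_TEMPLATE))
# {'invoice_number': 'A1042', 'total': '199.50'}
```

Once fields are extracted into a structured form like this, the search, storage and classification that Document AI Warehouse advertises become straightforward database operations.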

Open source and a new SoC

In the last part of its AI announcements, Google Cloud is enriching its offering for data scientists. The hyperscaler announced a new initiative, born of a collaboration between Google and players such as Meta, AMD, Arm, Intel and NVIDIA, aimed at preventing models from being locked into particular platforms or hardware technologies.
The alliance’s stated objective: end incompatibilities between frameworks and AI accelerators, and simplify deploying ML models built with different frameworks (starting with PyTorch and TensorFlow) on different hardware architectures (CPU, GPU, ASIC).
As a first deliverable of this initiative, the XLA compiler, previously specific to TensorFlow, is being decoupled from TensorFlow and becomes an open-source project called OpenXLA.
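The benefit of a shared compiler layer like OpenXLA is that each framework only has to target one intermediate representation instead of every accelerator: N frontends plus M backends rather than N×M bridges. A toy sketch of that decoupling follows – the “IR” and backend here are invented for illustration and are far simpler than XLA’s actual HLO/StableHLO:

```python
# Toy sketch of the decoupling a shared compiler layer enables: every
# framework lowers to one tiny shared IR, and every backend only needs
# to understand that IR. This "IR" is invented for illustration; the
# real OpenXLA stack uses HLO/StableHLO.

def lower_linear(scale, shift):
    """Pretend a framework lowers y = scale*x + shift to the shared IR."""
    return [("mul", scale), ("add", shift)]

def cpu_backend(ir, x):
    """One of M possible backends: interpret the shared IR on 'CPU'."""
    for op, operand in ir:
        if op == "mul":
            x *= operand
        elif op == "add":
            x += operand
        else:
            raise ValueError(f"unknown op: {op}")
    return x

# Any framework emitting this IR runs unchanged on any backend:
ir = lower_linear(2.0, 1.0)   # y = 2x + 1
print(cpu_backend(ir, 3.0))   # 7.0
```

Adding a new accelerator then means writing one new IR interpreter (or compiler backend), not one integration per framework.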

Another announcement, this time directly linked to Google Cloud’s IaaS infrastructure: the hyperscaler is launching a new generation of machines, the C3 VMs.
Their particularity? They are based on a SoC (System on Chip) co-developed by Intel and Google. This SoC, called E2000, combines a 4th-generation Xeon Scalable (the famous Sapphire Rapids, still awaiting official launch) with an IPU (Infrastructure Processing Unit) devised by Google and responsible for optimizing and securing network flows without CPU intervention. These new C3 VMs are particularly recommended for data workloads requiring high performance and strong confidentiality.

Also read about Google Cloud Next’22:

Google unifies its BI and ML tools under the Looker banner and ramps up data

Google also wants to move your mainframe workloads to the cloud
