Artificial intelligence requires enormous computing power. That power comes from processors, of course, but above all from suitable graphics cards. And when we think of GPUs, we immediately think of Nvidia!
Nvidia has unveiled the A100, the first GPU based on the Ampere architecture, packing 54 billion transistors on a 7 nm process. This card, intended for the server market, accelerates AI training and inference (drawing conclusions from new data using a trained model) by up to 20 times compared to previous models. GPUs are linked to one another by the third generation of Nvidia NVLink, which doubles the high-speed interconnect bandwidth.
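The training/inference distinction mentioned above can be shown with a minimal sketch: a hypothetical toy model in plain Python, with no relation to Nvidia hardware or libraries. "Training" fits the model's parameter from labeled data; "inference" applies the learned parameter to a new input.

```python
# Toy example: fit y = w * x by gradient descent (training),
# then predict for an unseen input (inference).

def train(data, lr=0.01, epochs=200):
    """Training: repeatedly adjust the weight to reduce error on known pairs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of the squared error
            w -= lr * grad
    return w

def infer(w, x):
    """Inference: use the learned weight to predict from a new input."""
    return w * x

# Known data roughly follows y = 3x.
samples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = train(samples)
print(round(infer(w, 4.0), 1))  # → 12.0, the prediction for a new input
```

Real workloads do this across billions of parameters, which is why training is the step that benefits most from GPU acceleration.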
Power for AI calculations
If all this sounds a little abstract, several Nvidia customers already using the A100 give an idea of the GPU's capabilities. At DoorDash, the meal delivery service, the A100 reduces the time it takes to train the models on which AI features are based and accelerates the machine learning development process. Indiana University will use the GPU to support scientific and medical research, as well as advanced research in AI and data analysis.
The Karlsruhe Institute of Technology in Germany will be able to carry out much larger multiscale simulations in materials science, earth system science, and engineering for energy research and mobility. In short, there is no shortage of uses for such a GPU. The A100 will be integrated into systems from several server manufacturers, including Atos, Cisco, Dell, Inspur, Lenovo and Supermicro.
Nvidia also announced the DGX A100, the third generation of the company's artificial intelligence system, which delivers 5 petaflops of AI and machine learning performance. Each system carries eight Tensor Core A100 GPUs. Units are already in service, notably running models and simulations in the fight against COVID-19.
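The 5-petaflops figure lines up with eight A100s at their peak per-GPU rate; assuming Nvidia's quoted 624 TFLOPS per A100 (FP16 Tensor Core throughput with structured sparsity), a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the DGX A100's headline figure.
# Assumption: 624 TFLOPS per GPU, Nvidia's peak FP16 Tensor Core
# rate with structured sparsity enabled.
gpus = 8
tflops_per_gpu = 624
total_pflops = gpus * tflops_per_gpu / 1000
print(total_pflops)  # → 4.992, i.e. roughly the advertised 5 petaflops
```

As with all peak-throughput figures, real workloads will see lower sustained numbers.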