Nvidia announced an upgrade to its top-of-the-line AI chip on November 13, saying the new offering will start rolling out next year through Amazon.com, Alphabet's Google, and Oracle. The chip, known as the H200, will surpass Nvidia's current flagship H100. The main improvement is more high-bandwidth memory, one of the most expensive parts of the chip, which determines how much data it can process in a given amount of time.
Nvidia dominates the AI chip market and powers OpenAI's ChatGPT service as well as many other generative AI services that respond to queries with human-like writing. With more high-bandwidth memory and a faster connection to the chip's processing elements, such services will be able to respond more quickly. The H200 has 141 gigabytes of high-bandwidth memory, up from 80 gigabytes in its predecessor, the H100. Nvidia did not reveal the memory suppliers for the new chip, but Micron Technology said in September that it was working to become an Nvidia supplier.
Nvidia also buys memory from South Korea's SK Hynix, which said last month that AI chips are helping to revive its sales.
In the same announcement, Nvidia said that Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure, along with specialty AI cloud providers CoreWeave, Lambda, and Vultr, will be among the first cloud service providers to offer access to H200 chips.