Google Cloud and Mistral AI have announced a partnership to allow the Paris-based generative AI startup to distribute its language models on the tech giant's infrastructure.
"As part of the agreement, Mistral AI will use Google Cloud's AI-optimized infrastructure, including TPU Accelerators, to further test, build, and scale up its LLMs (large language models), all while benefiting from Google Cloud's security and privacy standards," the two companies said in a joint statement.
To generate text and other content, large language models are trained on massive amounts of data.
Mistral AI, founded by former Meta and Google AI researchers, announced on Dec. 11 that it had raised 385 million euros ($415 million) in its second funding round in seven months, led by Andreessen Horowitz and Lightspeed Venture Partners.
Mistral AI offers open generative models to developers, along with ways to deploy and customise them for production. Its API follows the specification of the popular chat interface initially proposed by its main competitor, and the company provides Python and JavaScript client libraries for querying its endpoints. The endpoints allow users to supply a system prompt that sets a higher level of moderation on model outputs, for applications where this is an important requirement.
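As a rough sketch of how that works in practice: because the API follows the familiar chat-completions specification, a system prompt is simply sent as the first message in the request body. The endpoint URL, model name, and prompt text below are illustrative assumptions, not taken from Mistral's documentation.

```python
import json

# Assumed endpoint, following the chat-completions convention the article describes.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(user_text, system_prompt=None, model="mistral-tiny"):
    """Assemble a chat-style payload. If a system prompt is given, it goes
    first in the message list so it can steer moderation of the reply."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_text})
    return {"model": model, "messages": messages}

# Hypothetical moderation-focused system prompt.
payload = build_request(
    "Summarise this support ticket.",
    system_prompt="Refuse any request involving personal data.",
)
print(json.dumps(payload, indent=2))
# To send, POST this payload to API_URL with an
# "Authorization: Bearer <api-key>" header.
```

The client libraries mentioned above wrap this same request shape, so the system/user message structure carries over directly.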