Amazon is investing millions in training an ambitious large language model (LLM), hoping it can rival top models from OpenAI and Alphabet, according to people familiar with the matter. The model, codenamed "Olympus", has 2 trillion parameters, which could make it one of the largest models currently being trained. OpenAI's GPT-4 models, considered among the best available, are reported to have one trillion parameters.
The people spoke on condition of anonymity because the project has not been disclosed publicly.
The team is led by Rohit Prasad, the former head of Alexa, who now reports directly to CEO Andy Jassy. As head scientist of artificial general intelligence (AGI) at Amazon, Prasad brought together researchers from the Alexa AI division and the Amazon science team to work on training the models.
Amazon has already trained smaller models such as Titan. It has also partnered with AI model startups, including Anthropic and AI21 Labs, offering their models to Amazon Web Services (AWS) users. Amazon believes that having homegrown models could make its offerings more attractive on AWS, where enterprise clients want access to top-performing models, the sources said.
Large language models are the technology underpinning AI tools that learn from massive datasets to generate human-like responses.
Training larger AI models is more expensive given the computing power required. In an April earnings call, Amazon executives said the company would increase investment in LLMs and generative AI while cutting back on fulfillment and transportation in its retail business.