The competition between big tech companies is reaching new heights with the dawn of the generative AI era – some have even declared it an ‘AI war.’ Microsoft appears to be at the forefront thanks to its arsenal of generative AI-powered products like Bing Chat and Image Creator. Meanwhile, Google doesn’t seem too far behind, even though it has yet to launch a standalone generative AI product or build the technology into its biggest product, Google Search.
Luke Sernau, Senior Software Engineer at Google and founder of Better Engineering, stated in a document that neither Google nor OpenAI is in a position to win the AI arms race. He suggested that Google’s rivalry with OpenAI has distracted the company from the rapid developments being made in open-source technology: “While we’ve been squabbling, a third faction has been quietly eating our lunch. I’m talking, of course, about open source.”
Sernau’s statements were part of a document that was published on an internal system at Google in early April. Since then, it has been shared thousands of times among Googlers, according to the report, which cites a person familiar with the matter. On May 4, the document was published by consulting firm SemiAnalysis and has since been circulating in Silicon Valley.
“We have no secret sauce”
When it comes to large language models, Meta’s LLaMA appears to be the open-source community’s favourite. The model, released in February, is claimed by Meta to outperform GPT-3 across many tasks, including natural language processing and sentiment analysis. It’s also far more adaptable, with customisable weights that allow it to run on less powerful hardware, making it more developer-friendly.
But LLaMA isn’t the only developer-friendly LLM out there, and Sernau is clearly aware of this:
“While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.”