Google Announces Open AI Models Gemma
Google has recently introduced a new series of artificial intelligence (AI) models named Gemma, making a significant stride in the realm of AI development. This move aligns with actions taken by other tech giants such as Meta Platforms, aiming to democratize access to advanced AI technologies. The announcement, made on Wednesday, reveals that Alphabet's Google subsidiary is offering these AI models for developers and businesses to use and adapt freely, fostering innovation and potentially boosting the adoption of Google's cloud services.
Google's initiative to release its Gemma models as "open models" provides a foundation for external developers to create bespoke AI software. By making crucial technical information, such as model weights, publicly accessible, Google is inviting a broader community of software engineers to engage with and build upon its technology. This strategy not only aims to galvanize the development of AI applications but also encourages the use of Google Cloud, which has recently turned profitable. To further attract users to its cloud platform, Google is offering $300 in credits to first-time cloud customers who employ these models.
The Fine Line Between Openness and Control
While Google has taken a step towards openness by sharing its Gemma models, it stops short of fully embracing an open-source approach. The company retains certain controls over the use and distribution of these models, a decision that likely stems from concerns over potential misuse of open-source AI technologies. The debate around open-source AI is multifaceted, with some experts warning of its risks and others advocating for its ability to broaden participation in AI development.
Notably, Google has decided not to release its more advanced Gemini models under the same open conditions as Gemma. The Gemma models come in two sizes, with either two billion or seven billion parameters, a measure of their scale and potential capability. For context, Meta's recently unveiled Llama 2 models range from seven billion to 70 billion parameters, and OpenAI's GPT-3 model, announced in 2020, boasts a staggering 175 billion parameters.
In a parallel announcement, Nvidia disclosed its collaboration with Google to optimize the performance of the Gemma models on Nvidia's hardware. This partnership underscores the importance of ensuring AI models can run efficiently across different platforms. Additionally, Nvidia revealed plans to update its chatbot software, currently in development, to be compatible with Gemma models on Windows PCs, further highlighting the ongoing efforts to integrate AI technologies seamlessly into various user environments.
Google's release of the Gemma AI models marks a notable moment in the evolution of artificial intelligence technology. By offering these models for free and fostering a collaborative environment, Google is not only advancing its own cloud platform but also contributing to the broader AI ecosystem. The partnership with Nvidia and the strategic decision to keep certain controls in place reflect the complexities and responsibilities inherent in the development and deployment of powerful AI tools.
