Google and Hugging Face announced a strategic partnership aimed at advancing open AI and machine learning development.
This collaboration integrates Hugging Face’s platform with Google Cloud’s infrastructure, including Vertex AI, to make generative AI more accessible and impactful for developers. Through the partnership, Hugging Face users and Google Cloud customers can easily deploy production-ready models on Google Cloud using inference endpoints, accelerate applications with TPUs on Hugging Face Spaces, and manage usage through their Google Cloud account.
Developers can now leverage AI-optimized infrastructure such as TPUs and GPUs to quickly and cost-effectively train, tune, and serve open models on Google Cloud. The partnership also adds support for deployment via Google Kubernetes Engine (GKE), enabling the creation of new generative AI applications.
The move is seen as a significant step into AI for Google’s parent company Alphabet and has been compared to the collaboration between Microsoft and OpenAI, but Jeff Boudier, head of product at Hugging Face, said the partnership between Google and Hugging Face is completely different.
Google’s Tensor Processing Unit (TPU) is specialized hardware developed to accelerate machine learning tasks, especially those involving large matrix operations. Unlike general-purpose graphics processing units, which are designed for parallel computing across a wide variety of workloads, TPUs are purpose-built for AI and ML, focusing on tensor operations to achieve higher speed and better energy efficiency. Because they are designed to reduce power consumption per operation, TPUs deliver lower energy costs and a smaller carbon footprint than GPUs. This partnership will allow Hugging Face users to take advantage of TPUs available through Google Cloud.
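To make "tensor operations" concrete, the sketch below uses NumPy on a CPU to express the kind of batched matrix multiplication that dominates neural-network workloads; on a TPU, this same contraction would run on dedicated matrix-multiply hardware. The shapes here are arbitrary illustrative values, not anything specific to Google's hardware.

```python
import numpy as np

# The core workload TPUs are built for: dense matrix (tensor) contractions,
# such as multiplying a batch of activations by a shared weight matrix.
batch, m, k, n = 8, 128, 256, 64
activations = np.random.rand(batch, m, k).astype(np.float32)
weights = np.random.rand(k, n).astype(np.float32)

# einsum expresses the tensor contraction explicitly:
# for each batch element b, compute activations[b] @ weights.
outputs = np.einsum("bmk,kn->bmn", activations, weights)
print(outputs.shape)  # (8, 128, 64)
```

A TPU's matrix-multiply units execute exactly this pattern at scale, which is why workloads dominated by such contractions benefit far more from TPUs than general-purpose compute does.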
Vertex AI is Google’s cloud-based machine learning and MLOps platform. The integration allows Hugging Face users to target Vertex AI as a deployment platform for hosting and managing open models. Alternatively, they can choose GKE, a managed Kubernetes service that provides fine-grained control and customization for hosting models.
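As a rough illustration of the GKE option, serving an open model typically amounts to running an inference server as a Kubernetes workload. The manifest below is a minimal sketch, not a confirmed detail of the integration: it assumes Hugging Face’s text-generation-inference container image and uses a hypothetical model ID; the actual image, resources, and GPU/TPU node configuration depend on the cluster.

```yaml
# Minimal sketch of a GKE Deployment for an open-model inference server.
# The image and model ID are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tgi-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tgi-server
  template:
    metadata:
      labels:
        app: tgi-server
    spec:
      containers:
        - name: tgi
          image: ghcr.io/huggingface/text-generation-inference:latest
          env:
            - name: MODEL_ID
              value: "mistralai/Mistral-7B-Instruct-v0.2"  # hypothetical example model
          ports:
            - containerPort: 80
          resources:
            limits:
              nvidia.com/gpu: "1"  # request one GPU from the node pool
```

The fine-grained control mentioned above shows up here: replica counts, accelerator requests, and serving images are all explicit in the manifest, whereas Vertex AI manages those details for you.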
Hugging Face has attracted significant investment from tech giants, including Google. In its Series D funding round, Hugging Face raised $235 million with participation from Google, Amazon, Nvidia, Intel, AMD, Qualcomm, IBM, Salesforce, and others, doubling the startup’s valuation to $4.5 billion. With its commitment to open source and open models, Hugging Face has quickly become the preferred platform for hosting models, datasets, and inference endpoints. Almost all open model providers, including Meta, Microsoft, and Mistral, make their models available on the Hugging Face Hub.
Google also offers foundation models that are available only on its public cloud platform. Gemini, announced last month, is one of the best-performing large language models, and other models such as Imagen, Chirp, and Codey are part of the Vertex AI lineup. Hugging Face’s integration with Google Cloud gives customers a choice of proprietary and open models for building and deploying generative AI applications in the cloud.
The partnership between Google and Hugging Face promises to democratize AI by making it easier for companies to build their own AI using open models and technologies. As Hugging Face becomes a central hub for open source AI, this partnership could further expand its repository of AI-related software.
New features, including Vertex AI and GKE deployment options, will be available to Hugging Face Hub users in the first half of 2024.