---
title: README
emoji: 🌍
colorFrom: pink
colorTo: red
sdk: static
pinned: false
---

![Hugging Face x Google Cloud](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/google-cloud/thumbnail.png)

*Welcome to the official Google organization on Hugging Face!*

[Google collaborates with Hugging Face](https://huggingface.co/blog/gcp-partnership) across open science, open source, cloud, and hardware to **enable companies to innovate with AI** [on Google Cloud AI services and infrastructure with the Hugging Face ecosystem](https://huggingface.co/docs/google-cloud/main/en/index).

## Featured Models and Tools

* **Gemma Family of Open Multimodal Models** (see the loading sketch after this list)
  * **Gemma** is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models
  * **PaliGemma** is a versatile and lightweight vision-language model (VLM)
  * **CodeGemma** is a collection of lightweight open code models built on top of Gemma
  * **RecurrentGemma** is a family of open language models built on a novel recurrent architecture developed at Google
  * **ShieldGemma** is a series of safety content moderation models built upon Gemma 2 that target four harm categories
* [**BERT**](https://huggingface.co/collections/google/bert-release-64ff5e7a4be99045d1896dbc), [**T5**](https://huggingface.co/collections/google/t5-release-65005e7c520f8d7b4d037918), and [**TimesFM**](https://github.com/google-research/timesfm) model families
* Author ML models with [**MaxText**](https://github.com/google/maxtext), [**JAX**](https://github.com/google/jax), [**Keras**](https://github.com/keras-team/keras), [**TensorFlow**](https://github.com/tensorflow/tensorflow), and [**PyTorch/XLA**](https://github.com/pytorch/xla)
* [**SynthID**](https://deepmind.google/technologies/synthid/) is a Google DeepMind technology that watermarks and identifies AI-generated content ([🤗 Space](https://huggingface.co/spaces/google/synthid-text))
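As a quick orientation, the sketch below loads an instruction-tuned Gemma checkpoint with 🤗 Transformers. The model ID `google/gemma-2-2b-it` and the generation settings are illustrative assumptions, not a prescribed setup; any Gemma variant from this organization works the same way once you have accepted its license and authenticated with a Hugging Face token.

```python
# Minimal sketch: run a Gemma model with the 🤗 Transformers pipeline API.
# Assumes `pip install transformers accelerate` and that you are authenticated
# with a Hugging Face token that has access to the gated Gemma weights.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",  # illustrative choice; other Gemma variants work the same way
    device_map="auto",             # let accelerate place the weights on the available device(s)
)

output = generator("Explain in one sentence why TPUs are fast.", max_new_tokens=64)
print(output[0]["generated_text"])
```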
## Open Research and Community Resources

* **Google Blogs**:
  * [https://blog.google/](https://blog.google/)
  * [https://cloud.google.com/blog/](https://cloud.google.com/blog/)
  * [https://deepmind.google/discover/blog/](https://deepmind.google/discover/blog/)
  * [https://developers.google.com/learn?category=aiandmachinelearning](https://developers.google.com/learn?category=aiandmachinelearning)
* **Notable GitHub Repositories**:
  * [https://github.com/google/jax](https://github.com/google/jax) is a Python library for high-performance numerical computing and machine learning
  * [https://github.com/huggingface/Google-Cloud-Containers](https://github.com/huggingface/Google-Cloud-Containers) facilitates the training and deployment of Hugging Face models on Google Cloud
  * [https://github.com/pytorch/xla](https://github.com/pytorch/xla) enables PyTorch on XLA devices (e.g. Google TPU)
  * [https://github.com/huggingface/optimum-tpu](https://github.com/huggingface/optimum-tpu) brings the power of TPUs to your training and inference stack
  * [https://github.com/openxla/xla](https://github.com/openxla/xla) is a machine learning compiler for GPUs, CPUs, and ML accelerators
  * [https://github.com/google/JetStream](https://github.com/google/JetStream) (and [https://github.com/google/jetstream-pytorch](https://github.com/google/jetstream-pytorch)) is a throughput- and memory-optimized engine for large language model (LLM) inference on XLA devices
  * [https://github.com/google/flax](https://github.com/google/flax) is a neural network library for JAX that is designed for flexibility
  * [https://github.com/kubernetes-sigs/lws](https://github.com/kubernetes-sigs/lws) facilitates Kubernetes deployment patterns for AI/ML inference workloads, especially multi-host inference workloads
  * [https://github.com/GoogleCloudPlatform/ai-on-gke](https://github.com/GoogleCloudPlatform/ai-on-gke) is a collection of AI examples, best practices, and prebuilt solutions
* **Google AI Research Papers**: [https://research.google/](https://research.google/)

## On-device ML using [Google AI Edge](http://ai.google.dev/edge)

* Customize and run common ML tasks with low-code [MediaPipe Solutions](https://ai.google.dev/edge/mediapipe/solutions/guide)
* Run [pretrained](https://ai.google.dev/edge/litert/models/trained) or custom models on-device with [LiteRT (previously known as TensorFlow Lite)](https://ai.google.dev/edge/lite)
* Convert [TensorFlow](https://ai.google.dev/edge/lite/models/convert_tf) and [JAX](https://ai.google.dev/edge/lite/models/convert_jax) models to LiteRT (see the conversion sketch at the end of this page)
* Convert PyTorch models to LiteRT and author high-performance on-device LLMs with [AI Edge Torch](https://github.com/google-ai-edge/ai-edge-torch)
* Visualize and debug models with [Model Explorer](https://ai.google.dev/edge/model-explorer) ([🤗 Space](https://huggingface.co/spaces/google/model-explorer))

## Partnership Highlights and Resources

* Select Google Cloud CPU, GPU, or TPU options when setting up your **Hugging Face [Inference Endpoints](https://huggingface.co/blog/tpu-inference-endpoints-spaces) and Spaces**
* **Train and deploy Hugging Face models** on Google Kubernetes Engine (GKE) and Vertex AI **directly from Hugging Face model landing pages or from Google Cloud Model Garden**
* **Integrate [Colab](https://colab.research.google.com/) notebooks with the Hugging Face Hub** via the [HF_TOKEN secret manager integration](https://huggingface.co/docs/huggingface_hub/v0.23.3/en/quick-start#environment-variable) and transformers/huggingface_hub pre-installs
* Leverage [**Hugging Face Deep Learning Containers (DLCs)**](https://cloud.google.com/deep-learning-containers/docs/choosing-container#hugging-face) for easy training and deployment of Hugging Face models on Google Cloud infrastructure

Read about our principles for responsible AI at [https://ai.google/responsibility/principles](https://ai.google/responsibility/principles/)
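As referenced in the on-device ML list above, here is a minimal sketch of converting a TensorFlow SavedModel to LiteRT with the TensorFlow Lite converter. The SavedModel path, output filename, and optimization flag are illustrative assumptions, not part of any specific guide above.

```python
# Minimal sketch: convert a TensorFlow SavedModel to a LiteRT (.tflite) flatbuffer.
# Assumes `pip install tensorflow` and a SavedModel exported at ./my_saved_model (hypothetical path).
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("./my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional: default post-training optimizations

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file can then be loaded on-device with the LiteRT runtime, for example via `tf.lite.Interpreter` or the platform-specific LiteRT APIs.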