Background
Google Brain was an AI research division within Google, founded in 2011 as a part of Google X by Jeff Dean, Greg Corrado, and Andrew Ng. It grew into one of the most influential research labs in deep learning. In April 2023, Google Brain merged with DeepMind to form Google DeepMind.
Key Contributions
Google Brain’s most consequential contribution was the transformer architecture, introduced in "Attention Is All You Need" (2017). The paper’s central insight, that attention alone can replace recurrence entirely, enabled massive parallelization across sequence positions and made the architecture the backbone of nearly all modern LLMs. A minimal sketch of the core attention operation appears below.
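As a concrete illustration, here is a minimal NumPy sketch of the scaled dot-product attention operation at the heart of the transformer. The function and variable names are our own choices, not taken from the paper or any reference implementation.

```python
# Illustrative sketch of scaled dot-product attention,
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # pairwise query-key similarities
    weights = softmax(scores, axis=-1)              # each query attends over all keys
    return weights @ V                              # weighted sum of value vectors

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(8, 64))  # 8 tokens, d_k = 64
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (8, 64)
```

Because every position attends to every other position in a single pair of matrix products, there is no sequential dependency to unroll, which is what makes transformer training so parallelizable.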
The team also produced word2vec, which popularized dense word embeddings and demonstrated that simple vector arithmetic over embeddings captures semantic relationships. T5 unified NLP tasks under a single text-to-text framework, advancing transfer learning. The Switch Transformer scaled models to over a trillion parameters through sparse mixture-of-experts routing, in which each token activates only a single expert. The chain-of-thought prompting paper demonstrated that asking models to reason step by step dramatically improves performance on complex tasks. Short sketches of these ideas follow this paragraph.
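To illustrate the vector-arithmetic claim, here is a short sketch using the gensim library with its downloadable pretrained Google News word2vec vectors (this assumes gensim is installed; the vectors are a multi-gigabyte download on first use):

```python
# The classic word2vec analogy test:
# vector("king") - vector("man") + vector("woman") is closest to vector("queen").
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # pretrained word2vec vectors

result = model.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # [('queen', 0.71...)]
```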
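T5's text-to-text framing can be seen through the Hugging Face transformers library (assumed installed here; this is an illustrative usage sketch, not the original TensorFlow codebase). Every task, from translation to classification, is expressed as plain input text with a task prefix:

```python
# Sketch of T5's text-to-text interface; model weights download on first use.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# The task is specified entirely in the input text via a prefix.
inputs = tokenizer("translate English to German: That is good.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected: "Das ist gut."
```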
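The routing idea behind the Switch Transformer can be sketched in a few lines of NumPy. This is a toy top-1 router, not the paper's implementation; the sizes, names, and linear "experts" are illustrative:

```python
# Top-1 ("switch") mixture-of-experts routing: a learned router picks one
# expert per token, so per-token compute stays constant as expert count grows.
import numpy as np

rng = np.random.default_rng(0)
num_experts, d_model, seq_len = 4, 16, 6

tokens = rng.normal(size=(seq_len, d_model))                    # token vectors
router = rng.normal(size=(d_model, num_experts))                # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(num_experts)]

logits = tokens @ router                                        # (seq_len, num_experts)
gates = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax over experts
top1 = gates.argmax(-1)                                         # chosen expert per token

out = np.empty_like(tokens)
for i, tok in enumerate(tokens):
    # Only the selected expert runs; its output is scaled by the gate value.
    out[i] = gates[i, top1[i]] * (tok @ experts[top1[i]])
print(top1)  # which expert each of the 6 tokens was routed to
```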
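Finally, chain-of-thought prompting changes the prompt text rather than the model. The contrast below follows the style of the paper's arithmetic exemplars (the exact wording is paraphrased):

```python
# A standard few-shot prompt gives only the final answer in its exemplar;
# a chain-of-thought prompt spells out the intermediate reasoning steps.
standard_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: The answer is 11.\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)

cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)
```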
Notable Publications
- Attention Is All You Need (2017)
- word2vec (2013)
- T5 (2019)
- Switch Transformer (2021)
- Chain-of-Thought Prompting (2022)
Influence
Google Brain researchers authored many of the foundational papers in modern AI. The transformer architecture alone underpins GPT, BERT, LLaMA, and virtually every major LLM. Key researchers, including Noam Shazeer, Quoc Le, and Jeff Dean, shaped the field’s trajectory. The lab’s merger into Google DeepMind consolidated Google’s AI research efforts.
Sources
- Vaswani et al., "Attention Is All You Need" (2017)
- Mikolov et al., "Efficient Estimation of Word Representations in Vector Space" (2013)
- Raffel et al., "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" (2019)
- Fedus, Zoph, and Shazeer, "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" (2021)
- Wei et al., "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" (2022)