Discover What's New in the Gemma 1.1 Update: New 2B & 7B Instruction-Tuned Models

Google has released the Gemma 1.1 update for its Gemma models, covering the 7 billion and 2 billion parameter instruction-tuned variants. The update improves response quality, coding ability, instruction following, and multi-turn conversation quality, and opens up room to experiment with prompting, fine-tuning, and language-specific capabilities.

Google released a new update for the Gemma models, specifically the Gemma 7 billion and Gemma 2 billion instruction-tuned models, calling it Gemma 1.1. The update brings improvements in overall quality, coding capabilities, factuality, instruction following, and multi-turn conversation quality. The release notes mention training with a novel RLHF method that drives the gains in these areas, making the new checkpoints a better choice than the original Gemma instruction tunes for most use cases.

The Gemma 1.1 model shows interesting changes in how it responds to prompts, hinting at additional instruction fine-tuning. By varying prompts, users can see different responses and potentially improve results. The model's overall behavior remains similar to previous versions: it formats output in markdown and keeps a structured approach to answering. It also demonstrates step-by-step reasoning in its responses, showing a learned ability to connect the different elements of a prompt.
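
As a concrete starting point, here is a minimal sketch of querying the instruction-tuned model through Hugging Face transformers. The Hub model ID and generation settings are assumptions on my part, not something stated in the release notes.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-1.1-7b-it"  # assumed Hub ID for the 7B instruction-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Gemma's chat template wraps each turn in <start_of_turn>/<end_of_turn> markers,
# which apply_chat_template handles for us.
messages = [{"role": "user", "content": "Explain, step by step, why the sky is blue."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```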

Fine-tuning prompts and adding a system-style pre-prompt can elicit more accurate and more varied responses from the Gemma model. Adjusting the preamble and context of a prompt can significantly change the model's outputs, so it is worth experimenting with different phrasings to get the result you want, for example to improve performance on math questions or creative-writing prompts.
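
Gemma's chat template does not define a separate system role, so one common workaround is to fold the preamble into the first user turn. The sketch below, reusing the tokenizer from the previous example, shows one way to do this; the preamble wording is purely illustrative.

```python
# The preamble text here is an illustrative assumption, not from the release notes.
preamble = (
    "You are a careful math tutor. Work through problems step by step "
    "and state the final answer on its own line."
)
question = "A train travels 120 km in 1.5 hours. What is its average speed?"

# Gemma's chat template has no system role, so prepend the preamble to the user turn.
messages = [{"role": "user", "content": f"{preamble}\n\n{question}"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # inspect how the preamble lands inside the <start_of_turn>user block
```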

The Gemma model handles prompts in various languages well, suggesting potential for language-specific fine-tuning; users can lean on this by tailoring prompts to the target language. Additionally, the model's response to ReAct-style prompting shows promise for tasks like function calling and tool use, opening the door to customization and more advanced behaviors.
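
To try ReAct-style prompting yourself, a few-shot scaffold like the following is a reasonable starting point. The tool names and the Thought/Action/Observation format are assumptions chosen for demonstration; Gemma does not ship with a fixed tool schema, so the scaffold itself defines the protocol for the model.

```python
# Tool names and output format below are illustrative assumptions.
react_prompt = """Answer the question using only these tools:
- calculator[expression]: evaluates an arithmetic expression
- search[query]: returns a short snippet for the query

Use this format:
Thought: reason about what to do next
Action: tool[input]
Observation: the result of the action
... (repeat Thought/Action/Observation as needed)
Final Answer: the answer to the question

Question: What is 17% of 2,340?
Thought:"""

messages = [{"role": "user", "content": react_prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

In a real agent loop you would stop generation at each "Observation:" line, execute the tool named in the preceding "Action:", append the result to the prompt, and resume generation.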

While the Gemma 2 billion model also shows improvements, it lags behind the 7 billion model in responsiveness to ReAct prompting and in reasoning tasks. Users are encouraged to test the models, try different prompts, and share feedback on their experiences. The provided notebooks make it easy to explore both models and observe how they behave under different conditions. Overall, the Gemma 1.1 update brings enhancements across the board and lets users interact with the models in more nuanced ways.
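
For a quick side-by-side comparison in the spirit of those notebooks, a small loop like this works; it assumes a recent transformers version whose text-generation pipeline accepts chat-format input, and the model IDs are the assumed Hub names for the 1.1 checkpoints.

```python
from transformers import pipeline

prompt = [{"role": "user", "content": "Write a haiku about debugging."}]
for model_id in ("google/gemma-1.1-2b-it", "google/gemma-1.1-7b-it"):
    # Load each checkpoint in turn and run the same prompt through it.
    generator = pipeline("text-generation", model=model_id, device_map="auto")
    result = generator(prompt, max_new_tokens=64)
    print(f"--- {model_id} ---")
    print(result[0]["generated_text"][-1]["content"])  # the model's reply turn
```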