How to Run Copilot Chat Locally with Lemonade on AMD Ryzen™ AI

The video demonstrates running GitHub Copilot Chat entirely locally using Lemonade and open-source models on an AMD Ryzen AI processor, eliminating the need for cloud services or subscriptions. It showcases seamless code navigation and modification within a complex codebase, highlighting the efficiency and privacy of local AI-powered coding assistance.

In this video, the presenter enthusiastically introduces the ability to run GitHub Copilot Chat entirely locally using open-source models, without relying on cloud services, API keys, or subscriptions. This marks a significant milestone: open-source models have become fast, capable, and efficient enough to power a real Copilot Chat experience directly on a local machine. The demonstration runs on an AMD Ryzen AI Max+ 395 processor, showcasing the practical application of this technology.

To get started, users install Lemonade with the provided installer and then add the Lemonade for GitHub Copilot Chat plugin. After installation, they can manage their models by selecting from those already installed in Lemonade, and they can add GGUF models from Hugging Face or other sources that are compatible with their hardware. The presenter notes that some models perform better than others for coding tasks and recommends the Qwen3 Coder 30B model for the best experience.
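
As a point of reference, the Copilot Chat plugin talks to Lemonade through its local, OpenAI-compatible server. Below is a minimal sketch of listing the models that server reports, assuming the default endpoint of http://localhost:8000/api/v1; adjust the port and path to match your install.

```python
# Minimal sketch: list models served by a local Lemonade instance.
# Assumes the server is running at http://localhost:8000 with an
# OpenAI-compatible /api/v1 path; adjust if your install differs.
import json
import urllib.request

BASE_URL = "http://localhost:8000/api/v1"  # assumed default local endpoint


def list_local_models(base_url: str = BASE_URL) -> list[str]:
    """Return the IDs of models the local server reports as available."""
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        payload = json.load(resp)
    return [model["id"] for model in payload.get("data", [])]


if __name__ == "__main__":
    for model_id in list_local_models():
        print(model_id)
```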

Once the model is selected, the presenter demonstrates Copilot Chat in action within a complex code repository. When asked where the developer entry points are defined, the local Copilot Chat quickly searches the codebase, invoking several tools in the background to locate the function definition. This process happens entirely on the local machine, emphasizing the power and efficiency of the setup without any cloud dependency.
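
To see what "entirely on the local machine" means in practice, the same local endpoint can be queried directly. The sketch below sends a single chat request in the style the plugin uses behind the scenes; the endpoint URL and model ID are assumptions and should be matched to your own setup.

```python
# Minimal sketch: ask the local model a codebase-style question directly.
# The endpoint and model ID below are assumptions; match them to your setup.
import json
import urllib.request

BASE_URL = "http://localhost:8000/api/v1"    # assumed Lemonade default
MODEL = "Qwen3-Coder-30B-A3B-Instruct-GGUF"  # hypothetical model ID


def ask_local_copilot(question: str) -> str:
    """Send one chat-completion request to the local server and return the reply text."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": question}],
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_local_copilot("Where are the developer entry points defined?"))
```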

The presenter further showcases the capabilities by requesting a code modification: changing the default log level of a function from “info” to “warning.” Copilot Chat processes this request, determines the necessary changes, and applies them across the codebase. The user can then review and accept these changes seamlessly, illustrating how local AI models can assist with real-time coding tasks effectively.
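
The exact function being edited is not shown in the video, but the change is the kind illustrated by this hypothetical before/after, where a default log level parameter moves from "info" to "warning":

```python
# Hypothetical before/after illustrating the requested change; the real
# repository's function name and signature are not shown in the video.
import logging

# Before: default log level was "info"
# def configure_logging(level: str = "info") -> None:

# After: Copilot Chat's edit bumps the default to "warning"
def configure_logging(level: str = "warning") -> None:
    """Configure the root logger with the given level name."""
    logging.basicConfig(level=getattr(logging, level.upper(), logging.WARNING))
```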

In conclusion, the video encourages viewers who have been waiting for a truly local Copilot Chat experience to try Lemonade for GitHub Copilot Chat. It highlights the exciting potential of open-source models to empower developers with fast, private, and subscription-free AI coding assistance directly on their own machines. This development represents a significant step forward in making AI-powered coding tools more accessible and efficient for everyone.