The video explores running local Large Language Models (LLMs) on the Steam Deck, a portable gaming system with full PC capabilities, using the Ollama runtime. By leveraging the containerization tool Distrobox, the creator installs Ollama and tests several models on the Steam Deck, with promising results for performance and for running AI workloads efficiently on the device.
The video discusses using the Steam Deck, a portable gaming system that doubles as a full-fledged PC, to run local LLMs through Ollama. The creator sets out to test how well LLMs perform on the Steam Deck by trying different models and measuring the speeds achieved in tokens per second. The Steam Deck is switched to desktop mode for this purpose, and the process begins with an attempt to install Ollama directly on the device. However, because of SteamOS's restricted user space and the lack of certain packages such as Brew, the direct installation fails.
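The summary does not quote the exact commands, but the direct attempt likely resembled the standard Ollama installer invocation below; the official install script is assumed here:

```bash
# Attempt a direct install from a terminal in SteamOS desktop mode.
# The video reports this route failing because SteamOS restricts the
# user space and lacks packages such as Brew.
curl -fsSL https://ollama.com/install.sh | sh
```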
The creator then turns to Distrobox, a containerization tool, to create a writable user space on the Steam Deck where Ollama can be installed and run. With the container set up, the Ollama installation succeeds, and several models, Llama 2, Llama 3, and LLaVA, are downloaded and tested to evaluate their performance. Llama 2 generates tokens significantly faster than in the creator's earlier Raspberry Pi tests, a promising result for running LLMs on the Steam Deck.
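A minimal Distrobox setup along these lines would reproduce the workflow; the container name and base image are assumptions, since the video summary does not specify which distribution was used:

```bash
# Create and enter a mutable container on top of SteamOS.
# Container name and image are placeholders, not from the video.
distrobox create --name llm-box --image ubuntu:22.04
distrobox enter llm-box

# Inside the container, the standard install script works because
# the container's user space is writable.
curl -fsSL https://ollama.com/install.sh | sh

# Pull the models tested in the video.
ollama pull llama2
ollama pull llama3
ollama pull llava
```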
Moving on to Llama 3, a slower but higher-quality model, the creator tests its performance by asking it to write a regular expression that matches email addresses. The result is a slightly lower token generation speed but a better-quality answer. The LLaVA model is then tested for image recognition using a photo of a Nintendo Switch, and its correct identification of the console shows that the Steam Deck can handle multimodal models as well.
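A sketch of both tests, assuming Ollama's standard CLI: the --verbose flag reports an eval rate in tokens per second, and multimodal models such as LLaVA accept an image path inside the prompt. The image filename here is a placeholder for the Nintendo Switch photo used in the video:

```bash
# The --verbose flag prints timing statistics after the response,
# including the eval rate in tokens per second.
ollama run llama3 --verbose "Write a regular expression that matches email addresses."

# Multimodal models read image files referenced in the prompt.
ollama run llava "What is in this image? ./switch.jpg"
```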
The creator also notes the storage the models consume on the Steam Deck, emphasizing the need to manage disk space carefully on devices with limited capacity. While the current setup already runs LLMs well, there is room for improvement, such as offloading inference to the GPU for better performance. Overall, the video demonstrates a successful setup for running local LLMs on the Steam Deck and offers insight into the device's ability to run AI workloads efficiently in a portable form factor.
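For reference, downloaded model sizes can be checked with Ollama's own listing, and the model store's footprint measured directly. The store path shown is the default per-user location and may differ for a system-wide service install:

```bash
# Show each downloaded model and its on-disk size.
ollama list

# Measure the total space used by the model store.
du -sh ~/.ollama/models
```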