Timothy Carambat from AnythingLLM presents a new feature in the 1.12 release that lets users interact with their local AI models remotely via Telegram, delivering a secure, cloud-like experience while preserving privacy, since the models run on the user's own hardware. The integration supports full functionality, including messaging, advanced commands, and content generation, and syncs chat history between desktop and mobile, enabling seamless AI assistant access on the go.
In this video, Timothy Carambat from AnythingLLM introduces a channels integration from the recent 1.12 release that enables users to interact with their local AI models on the go, starting with Telegram. The feature is designed to provide a cloud-like experience while maintaining full end-to-end privacy, because the models run locally on the user's own hardware. Setup is straightforward: scan a QR code or create a Telegram bot via BotFather, then link it to the AnythingLLM desktop or server client.
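The video doesn't show AnythingLLM's internals, but the token BotFather hands back has a well-known shape (`<bot_id>:<secret>`), and the Bot API's `getMe` endpoint is the standard way to verify one. A minimal sketch, with hypothetical helper names, of a sanity check before pasting the token into the client:

```python
import re

# Hypothetical helpers (not part of AnythingLLM): sanity-check the token
# string BotFather returns before pasting it into the desktop client.
# BotFather tokens look like "<bot_id>:<secret>", e.g. "123456789:AAE...".
TOKEN_PATTERN = re.compile(r"^\d+:[A-Za-z0-9_-]{20,}$")

def looks_like_bot_token(token: str) -> bool:
    """Return True if the string matches the usual BotFather token shape."""
    return bool(TOKEN_PATTERN.fullmatch(token.strip()))

def get_me_url(token: str) -> str:
    """Build the Telegram Bot API URL that verifies a token (GET /getMe)."""
    return f"https://api.telegram.org/bot{token}/getMe"

print(looks_like_bot_token("123456789:AAF0aBcDeFgHiJkLmNoPqRsTuVwXyZ01234"))  # True
print(looks_like_bot_token("not-a-token"))  # False
```

A real client would then issue a GET to the `getMe` URL and confirm the API responds with the bot's identity.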
Timothy walks through the simple steps to create a Telegram bot, emphasizing the importance of choosing a complex, hard-to-guess username for security and privacy. Once the bot is created, users paste the provided token into the AnythingLLM app to establish the connection. This integration lets users chat with their local AI models from anywhere via Telegram, with the caveat that the desktop or server running the model must be powered on to respond. The feature supports any local model; Timothy demonstrates it with the Qwen 3 VL 8B Instruct model.
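This "desktop must be powered on" caveat follows from how a Telegram bot is typically driven: a process on the user's machine long-polls the Bot API for new messages. AnythingLLM's actual implementation isn't shown, but a sketch of the receiving side, with a hypothetical `extract_text_messages` helper, looks like this:

```python
# Sketch only: how a local desktop process can receive Telegram messages
# via the Bot API's long-polling getUpdates endpoint. If the process is
# off, nothing polls, so the bot cannot answer.

def extract_text_messages(updates: dict) -> list[tuple[int, str]]:
    """Collect (chat_id, text) pairs from one getUpdates JSON response."""
    messages = []
    for update in updates.get("result", []):
        msg = update.get("message") or {}
        if "text" in msg:
            messages.append((msg["chat"]["id"], msg["text"]))
    return messages

def poll_forever(token: str) -> None:
    """Illustrative long-polling loop (requires network; not run here)."""
    import json, urllib.request
    offset = 0
    while True:
        url = (f"https://api.telegram.org/bot{token}/getUpdates"
               f"?timeout=30&offset={offset}")
        with urllib.request.urlopen(url) as resp:
            updates = json.load(resp)
        for chat_id, text in extract_text_messages(updates):
            print(chat_id, text)  # hand the text off to the local model here
        if updates["result"]:
            offset = updates["result"][-1]["update_id"] + 1
```

The `offset` bookkeeping acknowledges processed updates so Telegram doesn't redeliver them on the next poll.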
The video highlights the seamless experience of interacting with the AI assistant remotely, including sending messages, receiving streamed responses, and utilizing advanced capabilities like web scraping, document generation, and web search. Users must approve themselves via a pairing code to ensure security and prevent unauthorized access to their bot. Once approved, the Telegram bot responds to queries just as the desktop client would, enabling full functionality on mobile devices.
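The pairing-code approval described above can be modeled as a simple gate: the bot refuses every chat until a message matching the one-time code arrives, then whitelists that chat. The class and reply strings below are illustrative assumptions, not AnythingLLM's actual code:

```python
# Hypothetical sketch of the pairing step: ignore all chats until the
# one-time pairing code is received, then approve that chat_id.

class PairingGate:
    def __init__(self, pairing_code: str):
        self._code = pairing_code
        self._approved: set[int] = set()

    def is_approved(self, chat_id: int) -> bool:
        return chat_id in self._approved

    def handle(self, chat_id: int, text: str) -> str:
        """Return the bot's reply for one incoming message."""
        if self.is_approved(chat_id):
            return "OK: forwarding to the local model"
        if text.strip() == self._code:
            self._approved.add(chat_id)
            return "Paired! This chat can now talk to your local model."
        return "Unauthorized: send your pairing code first."
```

Keyed on the Telegram `chat_id`, this prevents strangers who discover the bot's username from reaching the user's local model.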
Timothy also showcases the ability to use various commands within Telegram to manage models, switch workspaces, and start new threads, mirroring the desktop experience. He demonstrates sending images to the bot and receiving contextual responses, as well as requesting the AI to research topics and generate PDFs, which are then accessible directly on the mobile device. This illustrates the powerful integration of local AI capabilities with mobile convenience.
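The video doesn't list the exact command names, so the slash commands below (`/model`, `/workspace`, `/new`) are assumptions; the sketch only shows the dispatch pattern a channel bot can use to mirror desktop actions like switching models or starting a thread:

```python
# Illustrative command dispatcher; command names are assumed, not taken
# from AnythingLLM. Anything that isn't a recognized command is treated
# as a normal chat message for the model.

def dispatch(text: str, state: dict) -> str:
    parts = text.strip().split(maxsplit=1)
    command, arg = parts[0], (parts[1] if len(parts) > 1 else "")
    if command == "/model" and arg:
        state["model"] = arg
        return f"Model switched to {arg}"
    if command == "/workspace" and arg:
        state["workspace"] = arg
        return f"Workspace switched to {arg}"
    if command == "/new":
        state["thread"] = state.get("thread", 0) + 1
        return f"Started thread #{state['thread']}"
    return "Not a command; send it to the model as a chat message."
```

Keeping this state on the desktop side is what lets the Telegram session and the desktop client stay in step.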
Finally, Timothy encourages feedback from users about supporting additional communication channels like Discord or Slack to expand the feature’s reach. He emphasizes the benefit of having local models accessible anywhere without being tethered to a desktop, while maintaining privacy and control. The chat history and generated content sync bidirectionally between the desktop and mobile clients, ensuring a consistent and flexible user experience.