In this in-person class, Eli the Computer Guy introduces AI Large Language Model systems using Ollama and Python, demonstrating how to run and customize local LLMs, implement memory, and build user-friendly applications like web interfaces and autoblogging tools. He emphasizes understanding AI as a manageable technology stack, highlights practical system design and ethical considerations, and encourages hands-on learning so that learners can integrate AI effectively and responsibly.
The video is an in-person class introduction to AI Large Language Model (LLM) systems using Ollama and Python, presented by Eli the Computer Guy. Eli begins by sharing his background in technology education, which dates back to 1996, and his experience with online tech education, emphasizing technology's capacity to empower people globally. He discusses the importance of understanding technology as systems and encourages learners to see AI as just another technology stack rather than something overwhelmingly complex. He stresses that anyone, even a child, can integrate AI into projects with minimal coding, and highlights the need for technology professionals to understand their value and the purpose of their roles in organizations.
The core of the class focuses on using Ollama, a framework that allows running large language models locally on Windows, Mac, or Linux systems without relying on external APIs like OpenAI's. Eli explains how different LLM models vary in size and resource requirements, with smaller models like Phi-3 suitable for systems with less RAM and larger models like GPT-OSS requiring more powerful hardware. He demonstrates how to install and run these models, interact with them from Python, and handle the responses. He also discusses prompt and context engineering, showing how injecting specific instructions into queries can guide the AI's responses to be more concise or to follow certain rules.
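As a sketch of the kind of instruction injection described above, the snippet below prepends a rule to the user's question and posts the combined prompt to Ollama's local REST endpoint (`/api/generate` on port 11434). The rule text and model name are illustrative assumptions, not Eli's exact code:

```python
import json
import urllib.request

# Injected instruction: an assumption for illustration, not the class's exact rule
SYSTEM_RULES = "Answer in no more than two sentences."

def build_prompt(user_query: str, rules: str = SYSTEM_RULES) -> str:
    # Prepend the rules so the model sees them as part of its context
    return f"{rules}\n\nQuestion: {user_query}"

def ask_ollama(user_query: str, model: str = "phi3") -> str:
    # Ollama serves a local REST API at http://localhost:11434
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(user_query),
        "stream": False,  # return one complete response instead of a token stream
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Because the rules travel inside every prompt, changing one string changes the assistant's behavior without retraining anything, which is the point Eli makes about context engineering.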
Eli delves into advanced topics such as dynamic injections using REST APIs to customize AI responses based on user location, memory implementation to maintain conversational context across multiple queries, and the importance of testing AI systems with diverse user groups to identify biases or unexpected behaviors. He explains that LLMs do not inherently have memory; memory must be explicitly created and managed, often using simple text files or databases. He also touches on the challenges of maintaining consistent AI behavior due to frequent updates in models and APIs, emphasizing the need for version control and thorough testing.
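Since LLMs have no built-in memory, each request must carry the prior turns along with it. A minimal sketch of the file-backed approach mentioned above follows; the file name and message format are assumptions for illustration, not Eli's exact implementation:

```python
import json
from pathlib import Path

# Hypothetical storage location; any text file or database would do
MEMORY_FILE = Path("chat_memory.json")

def load_history(path: Path = MEMORY_FILE) -> list:
    # Memory is nothing more than prior turns persisted to disk
    return json.loads(path.read_text()) if path.exists() else []

def remember(role: str, content: str, path: Path = MEMORY_FILE) -> list:
    # Append one turn and write the whole history back out
    history = load_history(path)
    history.append({"role": role, "content": content})
    path.write_text(json.dumps(history))
    return history

def build_context(history: list) -> str:
    # Replay earlier turns ahead of the new question so the model
    # appears to "remember" the conversation
    return "\n".join(f"{t['role']}: {t['content']}" for t in history)
```

Everything the model "remembers" is text the application chose to resend, which is why testing and version control matter: a model update can change how the same replayed context is interpreted.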
The class further explores data cleaning using Python's Beautiful Soup library to scrape web pages and extract the relevant text, which can then be fed into LLMs for summarization or tagging. Eli demonstrates how to build a simple web application using the Bottle web framework in Python, enabling users to interact with the AI through a browser interface rather than the command line. This web app example includes handling user input, sending queries to the LLM, and displaying responses, illustrating how AI can be integrated into user-friendly applications.
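The class uses Beautiful Soup for this cleaning step. As a dependency-free illustration of the same idea, the sketch below uses Python's stdlib `html.parser` to keep visible text and drop `script`/`style` blocks; it is a stand-in for the Beautiful Soup code shown in the class, not a reproduction of it:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script and style blocks."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0   # >0 while inside a skipped tag
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep only non-empty text that is outside script/style
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

def clean_page(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

The cleaned string is what gets handed to the LLM: stripping markup first keeps the prompt short, which matters for small local models with limited context windows.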
Finally, Eli showcases an autoblogging application that scrapes content from websites, summarizes it, generates new titles, and creates blog posts automatically. He discusses the ethical and practical implications of such automation, including content ownership and the potential for AI-generated content to flood the internet. Throughout the session, Eli emphasizes practical system design considerations such as scalability, concurrency, and cost-effectiveness, encouraging learners to think critically about how to build AI solutions that are both functional and sustainable in real-world environments. The class concludes with an invitation for hands-on practice and troubleshooting support.
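The autoblogging flow described above chains a few LLM calls together. The sketch below shows that pipeline shape with the model call injected as a callable, so the orchestration logic stays separate from any particular backend; the function names and prompts are assumptions, not Eli's code:

```python
def make_post(article_text: str, llm) -> dict:
    """Hypothetical autoblogging pipeline.

    `llm` stands in for any callable that sends a prompt to the
    local model and returns its reply as a string.
    """
    # Step 1: condense the scraped article
    summary = llm(f"Summarize in one paragraph:\n{article_text}")
    # Step 2: generate a fresh title from the summary
    title = llm(f"Write a short blog title for:\n{summary}")
    # Step 3: package the result for publishing
    return {"title": title, "body": summary}
```

Passing the model in as a parameter also makes the pipeline testable with a stub, which echoes Eli's point about verifying system behavior before letting automation publish anything.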