Dynamic in context learning for LLMs

The video showcases a dynamic in-context learning system in which large language models iteratively generate and refine natural language algorithms to solve complex prediction tasks, demonstrated on the Kaggle Spaceship Titanic dataset. The approach improves prediction accuracy continuously without traditional model training; the presenter covers both its potential and its challenges, and offers code and further engagement through Patreon.

The video presents a dynamic in-context learning system developed to tackle complex problems by enabling a large language model (LLM) to learn and refine algorithms that solve them. Using the Kaggle Spaceship Titanic dataset, the system allows the LLM to generate natural language rule sets or algorithms that predict outcomes based on input features. The learning process is dynamic because the model updates its predictive rules continuously as it receives new data, improving its accuracy over time. The presenter demonstrates this with graphs showing the model’s accuracy progression, highlighting both cumulative and rolling window accuracy metrics.
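The two accuracy curves the presenter plots can be reproduced with a short helper. This is a minimal sketch, not the presenter's actual code: it assumes each prediction outcome is recorded as a boolean, and the window size (50 here) is an arbitrary choice for illustration.

```python
from collections import deque

def track_accuracy(outcomes, window=50):
    """Compute cumulative and rolling-window accuracy after each prediction.

    `outcomes` is a sequence of booleans (True = correct prediction).
    Returns two parallel lists: cumulative accuracy and rolling accuracy.
    """
    recent = deque(maxlen=window)  # keeps only the last `window` outcomes
    correct = 0
    cumulative, rolling = [], []
    for i, ok in enumerate(outcomes, start=1):
        correct += ok
        recent.append(ok)
        cumulative.append(correct / i)          # accuracy over all rows so far
        rolling.append(sum(recent) / len(recent))  # accuracy over recent rows
    return cumulative, rolling
```

The cumulative curve smooths out over time, while the rolling window makes recent plateaus or declines, like those the presenter observes, visible immediately.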

The system begins by analyzing a fixed number of initial rows from the dataset to generate "prediction metrics", the presenter's term for the model's current natural language algorithm for solving the problem. It then predicts subsequent rows one at a time. If a prediction is incorrect, the model revises its prediction metrics based on the new information and tries again. This iterative process lets the model refine its algorithm dynamically. The presenter also compares two system message styles, verbose and concise, to find an optimal balance for generating effective prediction metrics.

Several scripts drive the system, including bulk prediction, which processes multiple rows at once, and single progressive prediction, which predicts one row at a time and updates the model in real time. The presenter shares performance results from different LLMs, including GPT-5 and Grok, with accuracy rates ranging from around 68% to 78% depending on the model and input size. These results demonstrate that the system can learn from data in context and improve its predictions without traditional machine learning model training.
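The bulk mode contrasts with the progressive one in that the rule set stays fixed while many rows are scored. A minimal sketch of that mode, again using a hypothetical `llm(prompt)` wrapper and an illustrative batch size:

```python
def bulk_predict(rows, rules, llm, batch_size=10):
    """Sketch of bulk prediction (assumed interface): rows are scored in
    batches against a fixed rule set, with no per-row revision.

    `llm(prompt)` is a hypothetical callable wrapping a model API and
    returning one guess per line. Each row is a (features, label) pair.
    """
    correct = 0
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        prompt = f"Rules:\n{rules}\nPredict each outcome:\n" + "\n".join(
            str(features) for features, _ in batch
        )
        guesses = llm(prompt).splitlines()
        # Score each guess against the true label for its row.
        correct += sum(
            g.strip() == str(label) for g, (_, label) in zip(guesses, batch)
        )
    return correct / len(rows)
```

Bulk scoring is cheaper per row but cannot benefit from per-row corrections, which is presumably why the two scripts exist side by side.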

The presenter also reflects on challenges encountered, such as occasional performance plateaus or declines, possibly because iterative updates cause the rule set to drift away from patterns in the initial data. To address this, future improvements might include residual learning or backtesting mechanisms to keep the rules aligned with the original dataset. Despite these challenges, the presenter is optimistic about dynamic in-context learning as a novel way for LLMs to learn algorithms and solve complex prediction tasks.

Finally, the video invites viewers to access the code and additional resources on the presenter’s Patreon, where over 400 LLM-powered applications are available. The presenter offers consulting and weekly meetings for patrons interested in deeper engagement. The video concludes with an encouragement to follow or subscribe for future updates on this promising research direction, emphasizing the innovative nature of using LLMs to generate and refine natural language algorithms dynamically in response to new data.