The video explains that AI models like ChatGPT, Claude, and Gemini often produce generic responses because they are optimized for the average user, not individual needs. To get more personalized results, users should actively adjust four key settings—memory, instructions, apps/tools, and style controls—regularly refining them for better, tailored outputs.
These models produce generic, “averaged” responses that feel competent but not personalized because they are trained to satisfy the broadest range of users, much like a restaurant dish designed to appeal to the masses while delighting no one in particular. The underlying training process, reinforcement learning from human feedback (RLHF), optimizes for answers that human raters, who are not experts in your specific needs, find generally helpful, clear, and appropriate. As a result, the AI’s default outputs are tailored to a statistical median user, not to your unique preferences or requirements.
To break free from this generic output, the video introduces four key “levers” that users can adjust to personalize their AI experience: memory, instructions, apps and tools, and style controls. Memory allows the AI to retain information about you across sessions, making interactions more context-aware. Each platform handles memory differently: ChatGPT uses saved memories and chat history, Claude uses project-specific memory and memory summaries, and Gemini leverages integration with Google apps for personalization. The effectiveness and privacy implications of these memory features vary, so users should be intentional about what information they share.
Instructions are another powerful lever, letting users specify persistent context about who they are and how they want the AI to behave. Being specific in these instructions is crucial; vague directives like “be concise” are less effective than detailed preferences. Claude, for example, allows users to upload writing samples to create a custom style profile, while developers can use markdown files to encode project-specific rules and standards. Regularly updating these instructions based on recurring corrections can significantly improve the relevance and quality of AI outputs.
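As a concrete illustration, a project-specific instructions file might look like the sketch below. The filename follows one common convention (Claude Code reads a `CLAUDE.md` at the repository root), but the rules themselves are hypothetical examples, not a prescribed format:

```markdown
<!-- CLAUDE.md (hypothetical example): project rules the AI reads on every session -->
# Project conventions

- Language: TypeScript in strict mode; avoid `any` unless justified in a comment.
- Tests: every new function gets a unit test under `tests/` using the existing runner.
- Style: short, single-purpose functions; explain non-obvious decisions inline.

# How to respond

- Lead with the change, then a one-line rationale.
- If a requirement is ambiguous, ask one clarifying question instead of guessing.
```

The point is specificity: each rule encodes a correction you would otherwise repeat by hand, so the file grows as you capture recurring feedback.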
The third lever, apps and tools, refers to the external capabilities the AI can access, such as web search, code execution, or integration with other software. The Model Context Protocol (MCP) standard enables these connections, but the range and reliability of available tools differ across platforms. ChatGPT and Claude offer varying degrees of integration with external apps, while Gemini is more limited in this area. Being intentional about which tools are enabled or connected can shape the AI’s responses and make them more useful for your specific tasks.
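For a sense of what enabling such a tool involves, MCP servers are typically registered in the client's configuration file. This sketch follows the JSON shape used by Claude Desktop's `claude_desktop_config.json`; the server name and directory path are illustrative placeholders:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/projects"]
    }
  }
}
```

Each entry tells the client how to launch a server process; once running, the tools that server exposes become available to the model during conversations.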
Finally, style and tone controls allow users to adjust how the AI communicates, from choosing preset personalities to customizing tone and formatting. The key is to align these settings with your actual communication style and needs, avoiding conflicting instructions. Across all four levers, the video emphasizes the importance of specificity and ongoing adjustment: capturing corrections, updating instructions, and refining settings over time leads to compounding improvements. While this personalization requires some effort, especially for frequent users, it transforms the AI from a generic assistant into a tool that truly fits your unique requirements.