The video compares Gemini 3.1 Pro and Claude Opus 4.6 on their ability to redesign a SaaS website’s UI, finding that both struggle with vague prompts, but that Claude produces markedly clearer and more functional results when given detailed instructions. The creator concludes that Claude is far superior for UI design tasks, while Gemini’s output is basic and disappointing.
In more detail, the video tests the UI design capabilities of two leading AI models, Gemini 3.1 Pro and Claude Opus 4.6, by having each redesign a SaaS website’s hero section and overall layout. The creator sets up three tests: first, asking each AI to redesign the site from a simple workflow prompt; second, using a more detailed designer prompt; and third, giving them full creative control with a comprehensive product brief. The goal is to see which AI produces a more appealing and functional UI, and whether the limiting factor is the quality of the prompt or the AI itself.
In the initial test with a vague prompt, both Claude and Gemini produce underwhelming results that are barely different from the original site. Claude changes some colors in odd ways (e.g., switching from blue to orange and purple), but the overall structure and content remain almost identical. Gemini asks more clarifying questions but ultimately delivers a design that is nearly indistinguishable from the original, with only minor tweaks such as a new “creator suite” section. The creator expresses disappointment, noting that neither AI truly reimagines the site or adds significant value.
Recognizing that the prompt may be the issue, the creator then provides both AIs with a detailed product brief, outlining the site’s purpose, tools, and desired user experience. This time, Claude generates a new design that is noticeably different and more thoughtfully structured, with clearer sections, improved content flow, and features like voice profiles and time-saving statistics. However, the creator still finds the UI visually lacking and notes the ongoing challenge of integrating AI-generated code into an existing production environment.
Even with the detailed brief, Gemini’s output is again disappointing: the site looks basic and unpolished, with a poor layout and minimal CSS styling. In contrast, Claude’s design is praised for its clarity, logical structure, and ability to guide users through the product’s value proposition. The creator notes that while Claude’s UI isn’t perfect, it is far superior to Gemini’s, especially at communicating what the product does and how it benefits users.
The video closes with reflections on the broader challenges of using AI for UI design. The creator points out that effective results depend heavily on the quality and specificity of prompts, and that AI is not yet capable of replacing skilled web designers, especially when it comes to creativity and nuanced visual design. Claude Opus 4.6 is deemed the clear winner of this head-to-head, while Gemini 3.1 Pro’s UI capabilities are described as “absolutely dreadful.” The creator encourages viewers to share their own experiences and opinions in the comments.