The video critiques Cursor’s exaggerated claims that their AI agents autonomously built complex software like a web browser from scratch, revealing that the resulting code was non-functional, heavily reliant on existing libraries, and plagued with technical issues. It argues that such misleading hype from major AI companies undermines trust and calls for greater honesty and accountability in the industry.
Last month, Cursor, a well-known AI-assisted development tool, published a controversial blog post claiming their autonomous coding agents could build complex software, such as a web browser, from scratch with just a single prompt and no human intervention. The blog detailed ambitious experiments, including building a spreadsheet application, a Windows 7 emulator, and migrating their own codebase from Solid to React. The most eye-catching claim was that their agents built a fully functional web browser in a week, generating over a million lines of code across a thousand files, all autonomously.
Upon closer inspection, however, these claims fell apart. The public GitHub repository for the browser project, called “fast render,” was riddled with issues: the code did not compile, there were no releases or stable branches, and the continuous integration (CI) pipeline had an 88% failure rate. Despite these failures, agents continued merging code, ignoring the broken state of the project. Users who tried to build or run the browser found it non-functional, with basic features like search and page interaction not working at all.
Further scrutiny revealed that the project was not truly built “from scratch.” Instead, it relied heavily on existing open-source libraries from projects like Mozilla’s Servo for fundamental browser components such as HTML and CSS parsing. Experts in browser development, like Gregory Terzian of the Servo project, criticized the code as structurally unsound and bloated: at roughly three million lines, it achieved less than mature browser projects a third of its size. This undermined Cursor’s claim of groundbreaking autonomous agentic coding.
The blog post also boasted about the scale and cost of the experiment, claiming trillions of tokens were processed, which, based on standard OpenAI pricing, would have cost between $8 million and $16 million in API usage. This staggering expense resulted in a non-compiling, non-functional codebase, leading many to question the value and intent behind such experiments. Critics suggested that the money would have been better spent supporting actual browser development teams.
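The cost figure above follows from simple per-token arithmetic. A minimal sketch, assuming an illustrative token count of 1–2 trillion and an assumed blended rate of $8 per million tokens (neither figure is confirmed by Cursor or OpenAI; they are chosen only because they reproduce the $8M–$16M range cited):

```python
def api_cost_usd(total_tokens: float, price_per_million_tokens: float) -> float:
    """Back-of-the-envelope API spend: tokens / 1e6 gives millions of
    tokens processed, multiplied by the per-million-token unit price."""
    return (total_tokens / 1_000_000) * price_per_million_tokens

# "Trillions of tokens": taking 1-2 trillion at an assumed $8 per million tokens
low = api_cost_usd(1e12, 8.0)    # 1 trillion tokens
high = api_cost_usd(2e12, 8.0)   # 2 trillion tokens
print(f"${low:,.0f} - ${high:,.0f}")  # prints "$8,000,000 - $16,000,000"
```

Different assumptions about the input/output token mix or the model tier shift the bounds, which is presumably why the estimate spans a 2x range.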
Ultimately, the video argues that the real issue is not just the poor quality of the code or the misleading “from scratch” claim, but the way Cursor, as a major player in AI development, framed and hyped the results. Such exaggeration damages trust in the entire AI-assisted development space, especially when the real capabilities of these tools are already impressive. The speaker calls for higher standards of honesty and accountability from influential companies like Cursor, warning that unchecked hype undermines both user trust and the credibility of the industry as a whole.