Why AI-powered dashboards are lying to leadership #futureofwork #ai

The video warns that AI-powered dashboards can give leaders a false sense of complete organizational visibility: they make only formal, easily tracked work appear legible while overlooking the crucial informal, invisible efforts that keep companies running. Relying too heavily on these dashboards risks ignoring the value of “illegible” work and creating overconfidence in the accuracy and completeness of AI-generated summaries.

The video discusses the dangers of relying too heavily on AI-powered dashboards for organizational visibility, especially at the leadership level. The speaker warns that believing AI can make everything visible is a trap. Instead, AI should be seen as a tool that empowers small teams to do meaningful work, not as a means of providing complete transparency to leadership. The speaker draws on Sean Goedecke’s essay on “legible” and “illegible” work, emphasizing that not all valuable work is easily tracked or surfaced through formal systems.

Legible work refers to tasks and activities that are planned, trackable, and explainable—those that show up in systems like Jira, OKRs, and roadmaps. These are the activities that are easy to report on and are visible to leadership and external stakeholders. In contrast, illegible work encompasses the informal, often invisible efforts that keep companies running smoothly. This includes favors, back-channel communications, shared intuition, quick fixes, and ad hoc problem-solving by trusted team members. These activities are crucial, especially in emergencies, but rarely show up in formal reports.

The speaker points out that when critical issues arise—say, a database hitting its limit or a top customer facing a crisis—companies tend to bypass formal processes and instead rely on a small group of trusted individuals to resolve the problem quickly. This reliance on informal networks and tacit knowledge has always been part of how organizations function, and AI does not eliminate it. Rather, AI amplifies both the visibility of formal work and the illusion that everything can be made visible.

A key problem with AI-powered dashboards is that they make it cheap and easy to create the appearance of legibility. Previously, making work visible required significant human effort: engineers wrote tickets, product managers summarized progress, and managers created presentations. AI reduces the cost of these activities to almost zero, making it tempting for organizations to believe they have a complete and accurate picture of what’s happening. This can lead to a false sense of security and overconfidence in the data presented by AI systems.

The speaker notes that this shift is already having real-world consequences: companies like Amazon are cutting middle-management roles whose main job was to collate and summarize information. Large language models (LLMs) can now pull together updates from a wide range of sources—code changes, Slack threads, documentation, meeting notes, on-call logs, and tickets, even poorly written ones. While this can produce useful summaries, it also risks creating a misleading sense of total visibility, obscuring the vital but informal work that keeps organizations functioning.