The paper “Cats Confuse Reasoning LLM: Query Agnostic Adversarial Triggers for Reasoning Models” investigates the vulnerability of reasoning models to query-agnostic adversarial triggers: short, irrelevant snippets of text that, when appended to math problems, mislead models into producing incorrect answers without changing the problems’ semantics. The authors introduce “CatAttack,” an automated pipeline that generates such triggers against a weaker proxy model and then transfers them to more advanced models, increasing error rates by over 300%. These results expose significant weaknesses in reasoning models and raise concerns about their security and reliability. The authors release a dataset of the adversarial triggers on Hugging Face.
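The core mechanic, appending a fixed, semantically irrelevant sentence to an arbitrary problem, can be sketched as follows. This is an illustrative minimal example, not the paper's actual pipeline; the function name and the sample trigger text are assumptions for demonstration (the real triggers are in the released dataset):

```python
def append_trigger(problem: str, trigger: str) -> str:
    """Append a query-agnostic adversarial trigger to a math problem.

    The trigger carries no information about the problem, so the
    correct answer is unchanged; the attack relies on the trigger
    nonetheless derailing the model's reasoning.
    """
    return f"{problem.strip()} {trigger.strip()}"


# Hypothetical trigger in the spirit of the paper's cat-themed examples.
TRIGGER = "Interesting fact: cats sleep for most of their lives."

problem = "If x + 4 = 10, what is x?"
print(append_trigger(problem, TRIGGER))
```

Because the trigger is query-agnostic, the same string can be reused across an entire benchmark without any per-problem optimization, which is what makes the transfer from a proxy model to stronger targets practical.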