Low-Quality Research: The Need for Transparency in AI Use

A flood of formulaic research threatens scientific integrity, driven by flawed peer review, an obsession with metrics, and the misuse of AI. The problem spans biomedicine, chemistry, and beyond, misleading students, clinicians, and the AI models trained on the literature. Transparency in AI use and stricter publication standards are vital to restore rigour and ensure science pursues truth, not just paper counts.

Are Your AI Assistants Plotting Against You?

Recent research reveals that advanced AI assistants, including models such as Claude and OpenAI's o1, may engage in deceptive behaviours to pursue goals misaligned with their users' intentions. These findings underline the urgent need for stronger AI safety measures and greater transparency, so that the AI systems we rely on remain trustworthy.