The world of artificial intelligence research is in chaos, and it’s raising eyebrows. Imagine one individual claiming to have authored 113 academic papers in a single year—89 of them set to be presented at a top-tier AI conference. But here’s where it gets controversial: is this a triumph of productivity or a symptom of a deeper problem plaguing the field? Let’s dive in.
Kevin Zhu, a recent computer science graduate from the University of California, Berkeley, and founder of Algoverse—an AI research and mentoring company for high schoolers—has sparked intense debate. Many of his co-authors are the very students he mentors, some of whom are still in high school. Zhu’s papers cover a wide range of topics, from using AI to locate nomadic pastoralists in sub-Saharan Africa to evaluating skin lesions and translating Indonesian dialects. On his LinkedIn, he proudly claims his work has been cited by tech giants like OpenAI, Microsoft, and Google, as well as prestigious institutions like Stanford, MIT, and Oxford.
But not everyone is impressed. Hany Farid, a computer science professor at Berkeley, calls Zhu’s work a ‘disaster,’ labeling it ‘vibe coding’—a term for using AI to churn out software with little meaningful human input. And Farid isn’t alone in his concerns: a recent LinkedIn post in which he highlighted Zhu’s case sparked discussion among AI researchers about the flood of low-quality papers inundating the field—a surge fueled by academic pressures and, in some cases, the misuse of AI tools.
Zhu defends his work, stating he supervises these papers as ‘team endeavors’ through Algoverse. The company charges students $3,325 for a 12-week mentoring program, which includes assistance with submitting papers to conferences. He claims to review methodology, experimental design, and paper drafts, though he admits teams use ‘standard productivity tools,’ including language models for copy-editing.
And this is the part most people miss: The AI research community operates under a different set of rules than other scientific fields. Unlike chemistry or biology, where results typically pass through lengthy journal peer review, AI research is mostly published at major conferences like NeurIPS—where Zhu is set to showcase his work—after a much faster review cycle. This system, while fostering rapid innovation, has led to a deluge of submissions: NeurIPS received over 21,000 papers this year, up from under 10,000 in 2020, and the International Conference on Learning Representations (ICLR) saw a 70% increase in submissions for 2026.
Is this boom in quantity coming at the expense of quality? Reviewers are increasingly frustrated, with some suspecting papers are AI-generated. The Chinese tech blog 36Kr noted a decline in average paper scores, asking, ‘Why has this academic feast lost its flavor?’ Meanwhile, students and academics face immense pressure to publish, often prioritizing quantity over quality. Farid reveals that even his students have resorted to ‘vibe coding’ to boost their publication counts.
The issue extends beyond Zhu’s case. Conferences like NeurIPS and ICLR are struggling to manage the influx. ICLR even experimented with using AI to review submissions, leading to bizarre outcomes like hallucinated citations and overly verbose feedback. The crisis is so profound that researchers are now publishing papers on how to fix it—a May 2025 position paper on the problem won an award at the International Conference on Machine Learning.
Here’s the bigger question: What’s the cost of this chaos? Farid argues that the flood of low-quality work makes it nearly impossible for journalists, the public, and even experts to discern meaningful advancements in AI. ‘Your signal-to-noise ratio is basically one,’ he says. ‘I can barely go to these conferences and figure out what the hell is going on.’
So, what do you think? Is the current state of AI research a necessary growing pain or a dangerous trend? Should conferences tighten their review processes, or is the focus on quantity driving innovation? Let’s spark a conversation—share your thoughts in the comments below!