Everyone Is Getting A Grades Since ChatGPT Arrived on Campus

UC Berkeley study finds professors awarding 30% more A grades in writing and coding courses since AI tool’s debut

By Rex Freiberger


Key Takeaways

  • UC Berkeley study reveals 30% more A grades awarded since ChatGPT’s November 2022 debut
  • Students achieve higher grades without deeper understanding, creating performance-competence disconnect
  • AI assistance inflates academic credentials while genuine skill gaps persist in workforce

Your kid’s 4.0 GPA might mean less than you think. A UC Berkeley study released Wednesday reveals that professors in writing and coding-heavy courses—prime ChatGPT territory—have been handing out 30% more A grades since the AI tool debuted in November 2022. These aren’t signs of sudden academic breakthroughs. “The results suggest that students have relied on generative AI to do better in their studies, not that these classes of students are learning more,” says researcher Igor Chirikov. When your tuition dollars are buying algorithmic assistance instead of actual learning, the credential inflation hits different.

Grade Gaming Goes Mainstream

Students are achieving higher grades without deeper understanding, creating a dangerous disconnect between performance and competence.

The evidence extends beyond Berkeley’s campus. Medical students using ChatGPT-4 received honors ratings 92.9% of the time, compared to just 63.8% from human evaluators grading the same work. High schoolers showed a similar pattern: ChatGPT access boosted practice problem success by 48% but actually hurt test scores by 17%. It’s like having a really smart friend do your homework—you look good on paper but bomb the actual exam. The disconnect between assisted performance and genuine competence creates a TikTok-worthy illusion of achievement.

Speed vs. Substance Showdown

AI grading delivers efficiency gains while sacrificing the nuanced evaluation that human instructors provide.

AI grading systems promise efficiency that sounds almost too good to resist. Where human professors need hours to evaluate assignments, AI tools like Gradescope process hundreds in seconds. They eliminate human bias and provide consistent feedback across massive student populations. But here’s the catch: these systems excel at pattern recognition while missing creative nuance and complex reasoning. Professors find themselves caught between workload relief and educational integrity, like choosing between Netflix’s algorithm recommendations and actually discovering great content yourself.

The Inflation Spiral Accelerates

What started as gradual grade inflation has become an AI-powered acceleration that threatens traditional academic credibility.

Grade inflation existed long before ChatGPT—humanities courses already awarded A’s to 40-50% of students. But AI has turbocharged this trend specifically in subjects where algorithmic assistance feels most natural. EdTech companies are capitalizing on the chaos, developing AI detection tools and alternative assessment methods as traditional grading loses credibility. The market for academic integrity software is exploding as institutions scramble to distinguish between human and machine performance.

When you’re hiring recent graduates, that pristine transcript might reflect more about their prompt engineering skills than subject mastery. Employers are already shifting toward skills-based assessments and portfolio reviews, recognizing that GPA signals have become unreliable. Students riding the AI grade wave risk entering the workforce with impressive credentials but genuine skill gaps—a disconnect that surfaces quickly when the training wheels come off.

