California Colleges Spend Millions on Flawed AI Detection: Is Turnitin Worth It?

The landscape of academic integrity in higher education has shifted dramatically with the advent of generative artificial intelligence (AI). Since the public release of powerful AI tools like ChatGPT, a profound dilemma has confronted universities and colleges: how can they uphold academic honesty when students can outsource their writing assignments to sophisticated chatbots? This pervasive concern has, predictably, opened a lucrative market for tech companies offering solutions, primarily in the form of AI detection software.

A leading player in this market is Turnitin, a company long known for its plagiarism-detection services. In response to the surge in generative AI, Turnitin quickly introduced a new tool aimed at identifying AI-generated writing in student submissions. However, an in-depth investigation by CalMatters and The Markup reveals that California’s educational institutions are pouring millions into these tools, despite significant concerns about their accuracy, substantial costs, and alarming implications for student privacy and intellectual property. This raises a critical question: are these expensive and often flawed technologies truly worth the investment for California’s colleges?

THE SURGE OF AI AND THE “SOLUTION” DEBATE

The widespread availability of generative AI models ignited immediate apprehension within academic circles. Instructors grappled with how to ensure that students' work was their own, leading many to seek technological safeguards. Turnitin, leveraging its established position in the educational technology sector, capitalized on this anxiety: within six months of ChatGPT's debut, the company rolled out its new AI detection feature.

The financial impact was swift and substantial. In 2025 alone, records indicate that the California State University (CSU) system paid an additional $163,000 for the new AI detection capability, pushing its total expenditure on Turnitin services past $1.1 million for the year. Most CSU campuses had been using Turnitin's traditional plagiarism detector since 2014, making the new AI tool a natural, albeit costly, extension.

Faculty members hoped the AI detector would serve as both a deterrent against improper AI use and an effective identification tool. In practice, however, the technology has fallen far short of its promised accuracy. Turnitin's detector flags text that matches its AI writing style models, regardless of whether a student actually used AI or the writing simply mirrors typical AI patterns. Crucially, the company's licensing agreements for this technology are highly restrictive, demanding "perpetual, irrevocable, non-exclusive, royalty-free, transferable and sublicensable" rights to student writing. This enables Turnitin to amass a vast database of student papers, which it then uses to market its products and develop new features, including the very AI detector in question. The strategic value of this data and technology was underscored in 2019, when Advance Publications acquired Turnitin for an astonishing $1.75 billion, a sum that surpassed the total investment in all ed-tech startups the preceding year.

TURNITIN’S EXPANSION AND FINANCIAL REACH

Turnitin’s journey to prominence began long before the current AI boom. Wendy Brill-Wynkoop, a photography professor at College of the Canyons, was an early adopter in 2004 when her campus first explored the software. Her initial license cost a mere $120. By the following year, the college was paying 75 cents per student, totaling less than $6,500 annually. Fast forward to 2025, and College of the Canyons’ bill for Turnitin has soared to nearly $47,000, bringing their total spending on the platform to over half a million dollars.

The company’s integration into learning management systems (LMS) further solidified its market dominance throughout the 2000s and 2010s. By default, these integrations often led to every submitted assignment being scanned, significantly accelerating the growth of Turnitin’s proprietary database of student papers. This broad application meant the tool was placed in front of faculty members who might not otherwise have chosen to use it, effectively expanding its footprint across campuses.

The COVID-19 pandemic, with its rapid shift to remote learning, provided another unexpected surge in demand for Turnitin. Government purchasing records across the U.S. show a noticeable spike in Turnitin contracts during the 2020-21 academic year. In California, many colleges reported a corresponding increase in academic dishonesty cases that year. While these numbers often declined with the return to in-person instruction, they rebounded sharply with the emergence of generative AI, offering Turnitin another opportunity to monetize a new academic integrity crisis.

Globally, Turnitin's software is licensed to over 16,000 institutions, serving more than 71 million students. In California, almost three-quarters of community colleges use the tool, as does the entire California State University system, with the sole exception of Cal Poly San Luis Obispo, which canceled its contract due to insufficient faculty usage after spending $171,000 between 2020 and 2024. CSU campuses collectively have spent an estimated $6 million on Turnitin since 2019 alone. Several University of California campuses also spend more than $100,000 annually on the detector, which is licensed on a per-student basis. The joint investigation by CalMatters and The Markup uncovered over $15 million in Turnitin software purchases across just 57 California institutions, representing only a fraction of the statewide expenditure. Other notable examples include the Los Rios Community College District, which has spent nearly $750,000 since 2018, and the Los Angeles Community College District, whose 2025 license cost $265,000. UC Berkeley is also committed to a decade-long contract valued at almost $1.2 million.

THE HIGH COST OF LIMITED VALUE

Despite the significant financial outlay, critics argue that Turnitin’s tools offer remarkably limited value. Robbie Torney, Senior Director of AI Programs at Common Sense Media, a national nonprofit focusing on young people’s technology use, contends that $15 million is an exorbitant amount for a tool with such dubious efficacy. He points to numerous reports of Turnitin’s AI detector erroneously flagging student writing and the relative ease with which students can circumvent these detectors.

Torney advocates for a more proactive and pedagogical approach, suggesting that investments would be better directed towards:

  • Training for professors and teachers on how to address AI in their classrooms.
  • Developing clear frameworks for universities to communicate acceptable and unacceptable AI usage to students.

Rather than relying on a “surveillance methodology” to detect AI, he argues for an educational strategy.

The issue of Turnitin’s massive student paper database has also drawn criticism. Jesse Stommel, an associate professor at the University of Denver, has been vocal about this concern since 2011, when he discovered his own dissertation was included. He fundamentally objects to educational institutions ceding original student work to a for-profit entity that then uses this data for its own commercial gain. While Turnitin claims it doesn’t “monetize” student writing directly, its price increases since the release of the AI detector—which was developed using this very writing—suggest otherwise.

Brill-Wynkoop of College of the Canyons expressed regret, stating, “That makes me feel bad as a faculty member, that I encouraged my students to use something, and now, I don’t know. None of us thought about big data and how it would be used in the future.”

Past legal challenges have also highlighted these concerns. In 2007, high school students in Virginia and Arizona sued iParadigms, Turnitin's former parent company, alleging copyright infringement over the use of their own writing. Although the students ultimately lost their case, the litigation underscored broader ethical debates. Some colleges now explicitly caution against using free online plagiarism checkers due to similar privacy concerns. Stanford University, notably, does not license Turnitin, instead advising its faculty that such tools can erode trust and belonging among students while raising legitimate questions about privacy and intellectual property rights.

STUDENTS CAUGHT IN THE MIDDLE: ANXIETY AND ACCUSATIONS

For current undergraduates, a culture of routine digital surveillance and privacy erosion has become an unfortunate reality of college life. Emily Ibarra, a second-year student at Cal State Northridge who says she has never used ChatGPT, described the immense stress caused by Turnitin's "similarity reports." Initially mistaking the green, yellow, and red indicators for marks of grading quality, she eventually realized they flagged potential plagiarism. Even on short assignments, she found it nearly impossible to avoid a "yellow" report, because Turnitin's scanners highlight quoted material regardless of proper citation. The constant risk of being accused, even unintentionally, creates significant anxiety.
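As a concrete illustration of the color coding Ibarra describes, here is a minimal sketch of how a similarity percentage might map to a color band. The cutoffs and the function name are illustrative assumptions, not Turnitin's actual (proprietary) configuration.

    # Illustrative sketch only: the cutoffs below are assumptions,
    # not Turnitin's actual settings.

    def similarity_band(percent_matched: float) -> str:
        """Map the share of text matching other sources to a color band."""
        if percent_matched < 25:
            return "green"
        if percent_matched < 50:
            return "yellow"
        return "red"

    # Because the scanner counts quoted material whether or not it is
    # properly cited, a short, quotation-heavy assignment can easily
    # land in "yellow":
    print(similarity_band(30.0))  # -> yellow

Under any such scheme, the band reflects only how much text matched other sources, not whether the matching was legitimate, which is why properly cited quotations can still push students toward "yellow."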

Joshua Hurst, a junior at Cal State Northridge, experienced this firsthand when his writing was flagged in spring 2024. Although he hadn't used ChatGPT for the assignment, he had used Grammarly, an AI-assisted writing tool, to refine his prose. While his professor didn't penalize him, Hurst recognized that another instructor might have. He now runs his work through two different checkers before submitting anything, illustrating the lengths to which students will go to protect themselves from false accusations.

Compounding these individual anxieties, broader data reveals systemic issues. A survey by the Center for Democracy & Technology, a digital rights nonprofit, found that one in five high schoolers had either been wrongly accused of using generative AI to cheat or knew someone who had. Furthermore, Common Sense Media’s research showed that Black teenagers were twice as likely as their white and Latino counterparts to have their work flagged as AI-generated when it wasn’t, a disparity researchers partially attribute to teacher bias. Non-native English speakers are also disproportionately affected, as their writing style, often characterized by simpler syntax and more limited vocabulary, can inadvertently mimic AI-generated text.

Nilka Desiree Abbas, a native Spanish speaker from Puerto Rico, faced a devastating false accusation in 2023 while taking a political science course. She received a zero on an assignment and a terse message from her professor alleging ChatGPT use, despite her denial. Abbas, who had returned to school with a toddler, was deeply shaken and began photographing her progress on assignments as an evidence trail. While Turnitin claims to have fine-tuned its detector to reduce bias against English language learners, it still acknowledges the occurrence of false positives, fueling student anxiety across online communities.

Many students now adopt defensive strategies, such as deliberately introducing typos to differentiate their writing from AI's, or running their work through multiple online detectors before submission. Jasmine Ruys, who manages student conduct cases at College of the Canyons, often encounters students who unwittingly use AI through commonly available tools like Grammarly, leading to false accusations. She acknowledges the ethical "gray area" of AI use, particularly when students use it to polish first drafts, and questions whether that differs fundamentally from traditional tutoring services.

BEYOND THE ILLUSION: SEEKING REAL SOLUTIONS

The fundamental flaw of Turnitin’s AI detector lies in its inability to identify the original source of AI-generated text, unlike its plagiarism counterpart. Instead, it merely assesses the probability that a portion of text was created by AI, leaving faculty members to interpret whether this AI-like writing constitutes academic dishonesty. Adam Kaiserman, an English professor at College of the Canyons, notes that while he uses the tool, it offers limited practical help. He observes a decline in traditional plagiarism; students inclined to cut corners now opt for AI chatbots. Ironically, Turnitin’s software frequently misses AI-generated writing, in part due to its cautious approach to avoid false accusations. Moreover, it lacks the crucial ability to detect factual inaccuracies or “hallucinations”—a hallmark of AI-generated content.
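For readers curious what "assessing the probability" of AI authorship can mean in practice, the sketch below shows one widely discussed heuristic: measuring how predictable a passage is to a language model (its "perplexity") and treating highly predictable text as AI-like. This is an illustration under stated assumptions, not Turnitin's proprietary method; the "gpt2" model and the threshold of 40 are stand-ins.

    # Minimal perplexity-based sketch of probability-style AI-text
    # detection. NOT Turnitin's method (which is proprietary); the
    # "gpt2" model and the threshold of 40 are illustrative assumptions.
    import math

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Average per-token perplexity of the text under the model."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # labels=input_ids makes the model return its own
            # cross-entropy loss over the sequence.
            out = model(enc.input_ids, labels=enc.input_ids)
        return math.exp(out.loss.item())

    def looks_ai_generated(text: str, threshold: float = 40.0) -> bool:
        """Flag text the model finds very predictable (low perplexity)."""
        return perplexity(text) < threshold

The weakness is built in: plain, fluent human prose is also highly predictable, so writers with simpler syntax and vocabulary, including many non-native English speakers, can fall under the threshold. That is precisely the false-positive pattern described above.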

Student behavior has undeniably shifted. Surveys, such as one by Common Sense Media, indicate that 63% of teenagers have used chatbots or text generators for school assignments. A UK study in December found 88% of undergraduates had used AI for assignments, though only 18% reported submitting work that included AI-generated text. Students often use AI for tasks they deem less important, for outlining, or overcoming writer’s block. Ruys highlights that student AI use often stems from being overwhelmed by workloads, personal commitments, or simply not understanding an assignment. In such cases, students turn to AI instead of their professors, underscoring a need for better support systems rather than just detection.

Intriguingly, administrators from institutions like the California State University Office of the Chancellor, Cal State Northridge, and the Los Angeles Community College District, who also teach, admit they do not use Turnitin in their own courses, even as they defend their institutions’ broader licenses. This discrepancy further highlights the skepticism about the tool’s practical utility among educators who are directly engaged with students.

Academics like Jesse Stommel argue that the perception of rising cheating rates, particularly post-ChatGPT, is often exaggerated. He points to research suggesting that cheating rates have largely remained stable. Stommel strongly criticizes the use of Turnitin as a deterrent, likening it to the ineffective deterrence strategies seen in the criminal justice system. He advocates for fostering trusting relationships with students as the most effective counter to academic dishonesty, arguing that a tool like Turnitin "immediately fractures that relationship." Sean Michael Morris, a frequent collaborator with Stommel, cautions against the "big lie" perpetuated by many ed-tech companies: that their tools are indispensable to education.

The insights from this investigation, supported by the Kapor Foundation, underscore the complex challenges faced by California’s colleges. As technology continues to evolve, the focus must shift from costly and imperfect surveillance tools to fostering environments of trust, clear guidelines, and robust educational support for students navigating the new digital landscape.
