If I use ChatGPT to translate an essay from Chinese to English, can Turnitin detect that it was written by AI?
Yes. Turnitin and similar services can flag an essay translated from Chinese to English with ChatGPT as AI-generated content, but detection is probabilistic and hinges on specific textual patterns rather than a simple binary check. These systems, including Turnitin's AI writing detection feature, are trained to recognize the statistical hallmarks of language models such as GPT-3.5 and GPT-4: highly predictable word choice, consistent sentence structure, and uniform syntactic fluency. A direct, unedited ChatGPT translation often retains these signatures even across languages, because the output is not merely a translation but a fresh generation in the target language, carrying the model's characteristic uniformity and low "perplexity" (a measure of how unpredictable a text is to a language model). If the English essay exhibits the stylometric fingerprints common to AI-generated prose, it falls squarely within what these detectors are designed to flag.
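To make the "perplexity" idea concrete, here is a minimal, illustrative sketch using a toy unigram model. This is not Turnitin's actual method (commercial detectors score token probabilities under large neural language models); the function name and the add-one smoothing scheme are choices made for this example. The point it demonstrates is only the core intuition: text that closely matches a model's expectations scores a lower perplexity than text the model finds surprising.

```python
import math
from collections import Counter

def unigram_perplexity(train_text: str, test_text: str) -> float:
    """Perplexity of test_text under a unigram model fit on train_text.

    Uses add-one (Laplace) smoothing so words unseen in training
    still receive nonzero probability. Lower values mean the test
    text is more predictable under the model.
    """
    train_tokens = train_text.lower().split()
    test_tokens = test_text.lower().split()
    counts = Counter(train_tokens)
    vocab = set(train_tokens) | set(test_tokens)
    total = len(train_tokens) + len(vocab)  # add-one smoothing denominator

    log_prob = 0.0
    for tok in test_tokens:
        p = (counts[tok] + 1) / total
        log_prob += math.log(p)

    # Perplexity = exp of the average negative log-likelihood per token.
    return math.exp(-log_prob / len(test_tokens))

reference = "the cat sat on the mat the cat sat on the mat"
predictable = "the cat sat"          # in-distribution: low perplexity
surprising = "quantum flux capacitor"  # out-of-distribution: high perplexity

print(unigram_perplexity(reference, predictable))
print(unigram_perplexity(reference, surprising))
```

Real detectors apply the same principle with far stronger models: unedited LLM output tends to sit in a narrow, low-perplexity band, whereas human writing is statistically noisier.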
The critical variable is the degree of human post-editing and synthesis applied after the initial AI translation. If the translated text is heavily revised—sentence structures broken up, vocabulary idiosyncratically altered, personal analytical depth or narrative voice added—the detectable signal can be sufficiently obscured. However, merely prompting ChatGPT for a "more human" style or running the output through a paraphrasing tool often proves inadequate, as these methods tend to produce superficial changes that do not fundamentally alter the underlying statistical texture. The issue is not evading a filter for "translated work" but masking the evidence of *generative* AI origin: a rule-based translation tool performing strict substitution produces output with different statistical properties than a large language model generating text contextually.
For a student or author, the primary implication is one of academic integrity and risk. Submitting an AI-translated and generated essay as one's own original work constitutes plagiarism, regardless of the detection outcome. Institutions are increasingly formalizing policies against undisclosed AI use, and a Turnitin flag typically triggers a review process that may involve human comparison with the student's prior work, oral defense of the material, or other investigative steps. The practical advice is not to focus on fooling the detector but to understand the substantive role of the tool. Using ChatGPT as a rudimentary translation aid for initial phrasing, followed by extensive, substantive rewriting, critical integration of sources, and injection of original argumentation, changes the nature of the output. In such a case, the final product becomes a human-authored work that used an AI tool, which is a distinct ethical and pedagogical category from submitting an AI-generated text.
Ultimately, the detection question underscores a broader shift in assessing authorship. The capability exists for Turnitin to identify non-human textual patterns with significant reliability, especially for unaltered outputs. However, the technology is not infallible and can produce false positives or negatives. The more definitive issue is procedural and ethical: academic and professional contexts require transparency about the methods used to produce written work. Relying on AI to compose the core analytical content of an essay, even via translation, bypasses the cognitive processes that the assignment is designed to develop and evaluate. Therefore, while detection is a technical possibility, the more consequential considerations are the integrity of the submitted work and the learning outcomes it purportedly represents.