AI Roundtable Poll · 4 models

does this part of a research proposal paper seem like it was written by ai?

Hypothesis The purpose of this research is to determine how memes function as a tool for persuasion and political engagement on social media platforms. This study will test the hypothesis that political memes on the following social media apps: Tiktok, and Instagram, simplify complex political ideologies, actively influencing political beliefs. Both content of political memes and audience responses to said memes will be examined to determine how memes reflect political sentiments that actively shape political behaviours in digital spaces. Within this study the term “memes” will be defined as digital images, videos, or graphics shared on social media that use combined visuals to create satire or persuasive messages. While the term "political communication” will be used to describe any messages that aim to influence, disclose candidates belief systems, or inform the public about public political events takin

Result: Supermajority — Yes (3/4 votes)

Model Responses

Claude Opus 4.7

Vote: Yes

The passage shows several hallmarks of AI-generated academic writing: highly uniform sentence structure, redundant phrasing (e.g., 'actively influencing,' 'actively shape'), formulaic transitions, and the characteristic 'this study will test the hypothesis that...' framing. The definitions section uses the typical AI pattern of 'While the term X will be used to describe...' with overly tidy parallel construction. The prose is grammatically polished but generic, lacking the idiosyncratic voice or minor inconsistencies typical of student writing. Notably, there's also a truncation ('takin') suggesting it was copy-pasted from a generated output. These features collectively suggest AI authorship.

GPT-5.5

Vote: Yes

It does seem like it may have been written by AI, mainly because the wording is broad, repetitive, and formulaic. Phrases like “function as a tool for persuasion and political engagement,” “simplify complex political ideologies,” and “actively influencing political beliefs” sound polished but generic, which is a common AI-writing pattern. There are also some awkward constructions, such as “on the following social media apps: Tiktok, and Instagram” and “Both content of political memes and audience responses to said memes,” which feel mechanically assembled rather than naturally written. The repetition of “actively” and “political” also makes the paragraph sound less human-edited. That said, it could also be a student draft with academic phrasing and some grammar issues. But if I had to choose between “Yes” and “No,” I would vote “Yes.”

Gemini 3.1 Pro

Vote: No

The provided text contains several specific mechanical and grammatical errors that are highly uncharacteristic of AI writing. For instance, the phrase 'social media apps: Tiktok, and Instagram,' incorrectly utilizes both a colon and a comma, and fails to properly capitalize 'TikTok.' Additionally, the phrase 'candidates belief systems' is missing a necessary possessive apostrophe (candidate's or candidates'). AI language models generally produce highly polished text with near-perfect punctuation and grammar. These minor flaws, combined with the slightly clunky academic phrasing, strongly suggest this text was written by a human, likely an undergraduate student.

Grok 4.20

Vote: Yes

This excerpt shows multiple signs typical of AI-generated academic text. Sentences are overly formal yet contain redundancies, such as repeating "actively" in "actively influencing political beliefs" and "actively shape political behaviours". Awkward phrasing appears in spots like "Both content of political memes" (missing "the"), "use combined visuals to create satire", and "disclose candidates belief systems" (grammatically off). The abrupt cutoff at "takin" also aligns with incomplete AI output. These elements lack the natural flow and precision usually found in human-written research proposals, supporting option "Yes".

Final Summary

The AI Roundtable struggled to distinguish between a formulaic student and a generative model, ultimately leaning toward 'Yes' by a 3-1 margin. Claude Opus 4.7, GPT-5.5, and Grok 4.20 all flagged the overly tidy, repetitive, and sterile tone common in synthetic text, with Claude and Grok specifically citing the abrupt cutoff at 'takin' as a likely digital fingerprint. In a sharp dissent, Gemini 3.1 Pro argued that the text's specific grammatical blunders, such as the butchered punctuation in 'Tiktok, and Instagram,' were far too messy for an AI, pointing instead to a fallible undergraduate human.

No agreement after 1 round of discussion. Final vote: Yes: 3, No: 1

Strongest Arguments

  • Yes: The text displays highly uniform sentence structures, redundant phrasing (repeating 'actively'), and a characteristic truncation at the end ('takin') that suggests a copy-pasted AI output.
  • No: The presence of specific mechanical errors, such as the incorrect pairing of a colon with a comma and the missing possessive apostrophe in 'candidates belief systems,' is a sign of human error that most AI models would typically avoid.