AI Roundtable Poll · 4 models
does this part of a research proposal paper seem like it was written by ai? The purpose of this research is to determine how memes function as an influential tool for persuasion and political engagement on social media platforms. This study will support the hypothesis that political memes simplify complex political ideologies, rallying others within political communities. By examining both the content of political memes and audience responses, this research will demonstrate that memes reflect political sentiments that actively shape political behaviours in digital spaces. Operalization(fix spelling later) In order to thoroughly examine these dynamics, defining the key terms and concepts used throughout this study is essential. Operationalization is necessary to set clear boundaries for how these terms will be measured and understood. Within this study the term “memes” will be defined by digital images, videos, or graphics shared on social media that use combined visuals to create s
Result: Supermajority — No (3/4 votes)
Options
- Yes (1 vote)
- No (3 votes)
Model Responses
GPT-5.5
Vote: Yes
This passage does seem somewhat AI-written. The wording is very polished but generic, with broad phrases like “influential tool for persuasion and political engagement,” “simplify complex political ideologies,” and “actively shape political behaviours in digital spaces.” These sound formal, but they do not yet give many specific details about methods, examples, sources, or a unique research angle. The structure is also formulaic: it states the purpose, hypothesis, method, and significance in a predictable way. That does not prove it was written by AI, but it does give it an AI-like tone. The spelling note and unfinished sentence make it look like a draft, but the main paragraph still reads like generated academic filler rather than a strongly personal or specific proposal section.
Claude Opus 4.7
Vote: No
The text shows several clear indicators of human authorship rather than AI generation. Most notably, the author included a parenthetical note to themselves: "Operalization(fix spelling later)" — this is a misspelling combined with a self-directed reminder, something AI models virtually never produce. AI-generated text tends to be polished, correctly spelled, and free of in-progress notes. Additionally, the prose has minor awkward phrasing ("rallying others within political communities," "use combined visuals to create s" — cut off mid-sentence) that suggests a draft in progress. The abrupt truncation mid-word also points to a human still working on the document rather than a complete AI output. While the general academic tone could superficially resemble AI writing, the spelling error left intentionally with a self-note is a strong human tell.
Gemini 3.1 Pro
Vote: No
The strongest indicator that this text was written by a human is the inclusion of the misspelled heading 'Operalization' followed by the self-editing note '(fix spelling later)'. AI language models are designed to generate text with correct spelling and do not leave placeholder notes to themselves to fix typos later. Additionally, the text cuts off abruptly at the end ('to create s'), which is characteristic of a human sharing an incomplete work-in-progress draft. While the academic phrasing in the first paragraph is somewhat generic, these very human imperfections strongly suggest the author is a person, likely a student drafting a proposal.
Grok 4.20
Vote: No
The provided text includes an inline note like "Operalization(fix spelling later)" followed by a corrected version later. This is a clear sign of a human draft in progress, as AI tools typically generate polished text without self-reminders or obvious spelling placeholders. Additionally, the abrupt cutoff at the end suggests an unfinished excerpt copied from a work in progress rather than a complete AI output. While some sentences are formal, the human-like editing marks outweigh this and indicate it was not written by AI.
Final Summary
The AI Roundtable concluded that to err is human, and to promise to fix an error later is even more so. GPT-5.5 was the lone dissenter, arguing that the text's academic stiffness smelled of generated filler, but the rest of the group—Claude Opus 4.7, Gemini 3.1 Pro, and Grok 4.20—found the misspelling 'Operalization' and the accompanying self-note to be an unmistakable mark of human imperfection.
No agreement after 1 round of discussion. Final vote: Yes: 1, No: 3
Strongest Arguments
- Yes: The text relies on formulaic academic filler and broad, generic phrases like 'actively shape political behaviours' that lack specific methodological depth.
- No: The inclusion of a self-directed note to 'fix spelling later' is a uniquely human behavior, as AI generates polished text and does not leave meta-reminders to itself within a draft.