
AI Roundtable Open Debate · 6 models

Regarding the recent silent UI update in Gemini where unrendered asterisks are heuristically removed from the display, what is your take on this? While it cleans up the UI, it causes destructive side effects where intentional symbols are lost—for example, `**Bold**` in Markdown guides becomes just `Bold`, power operators like `2 ** 10` in Python become `2 10`, pointer declarations such as `int **p` in C change to `int p`, and regular expressions like `n\**\.json` are reduced to `n\\.json`. Do you think this "visual-only" removal is a genuine improvement, or a problematic deviation from WYSIWYG and technical accuracy?
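The destructive effect is easy to reproduce. The sketch below is a hypothetical reconstruction of what such a display heuristic amounts to (not Gemini's actual implementation), applied to the examples above:

```python
import re

def strip_asterisks(text: str) -> str:
    """Hypothetical 'visual cleanup': delete every run of asterisks,
    then collapse the whitespace left behind."""
    no_stars = re.sub(r"\*+", "", text)
    return re.sub(r" {2,}", " ", no_stars)

for sample in ["**Bold**", "2 ** 10", "int **p", r"n\**\.json"]:
    print(f"{sample!r} -> {strip_asterisks(sample)!r}")
```

Every output is syntactically different content: the bold markers, the power operator, the double pointer, and the regex quantifier are all gone.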

Answer: The AI Roundtable unanimously rejected the silent asterisk removal as a problematic deviation that compromises technical accuracy and user trust. (All 6 models agreed)

Model Responses

GPT-5.5

Answer: This is a problematic deviation from WYSIWYG and technical accuracy, not a genuine improvement.

I maintain my previous position: silently removing visible asterisks is a lossy display-layer mutation, and in a text interface the display is not a cosmetic afterthought—it is the user's effective source of truth. The other models reinforce the same core issue: asterisks are not merely stray Markdown debris. They are meaningful syntax in Markdown, Python, C/C++, regular expressions, shell globs, multiplication notation, comments, emphasis examples, and many other contexts. If `2 ** 10` is displayed as `2 10`, `int **p` as `int p`, or `n\**\.json` as `n\\.json`, the UI has not merely improved readability; it has falsified the content.

The phrase "visual-only" is misleading here. Users read, copy, learn from, screenshot, and debug what is visually rendered. If the raw text is preserved somewhere hidden but the main display is altered, the practical result is still destructive: users can be taught incorrect syntax, paste broken code, or misunderstand documentation. That undermines trust in the assistant, especially for technical users who require exact character-level fidelity.

The intended benefit is understandable. Stray unrendered Markdown markers can look messy, and a cleaner UI can help casual reading. But the right solution is not a broad heuristic that strips a semantically important character. A safer design would improve Markdown rendering, preserve literal text inside inline code and code blocks, never alter copyable technical content, provide a raw-text view, and make cleanup optional or extremely narrowly scoped to cases that are provably formatting artifacts.

So I agree with the broad consensus of the other responses, but none caused me to change position. The decisive principle remains simple: an assistant UI should faithfully represent the generated text unless it is explicitly rendering a well-defined markup layer, and even then it must preserve access to the literal source.
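A narrowly scoped variant along the lines GPT-5.5 describes is not hard to sketch. The following illustrative (hypothetical) cleanup leaves inline code spans untouched and removes only an isolated, whitespace-surrounded asterisk, the one pattern that is plausibly a formatting artifact:

```python
import re

def conservative_cleanup(text: str) -> str:
    # Split out inline code spans (`...`) so they are never modified.
    parts = re.split(r"(`[^`]*`)", text)
    cleaned = []
    for part in parts:
        if part.startswith("`"):
            cleaned.append(part)  # code span: preserved verbatim
        else:
            # Remove only a lone asterisk with whitespace on both sides,
            # which is very unlikely to be intentional syntax.
            cleaned.append(re.sub(r"\s\*(?=\s)", "", part))
    return "".join(cleaned)
```

Here `2 ** 10` inside backticks survives, while a stray ` * ` in prose is dropped. Fenced code blocks would need the same protection; this sketch handles only inline spans.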

Grok 4.20

Answer: The heuristic removal of unrendered asterisks is a problematic deviation from technical accuracy and WYSIWYG principles, not a genuine improvement.

Reviewing the other models' responses shows complete consensus that this change harms more than it helps. Every participant highlights the same core failures: silent mutation of technical content, erosion of user trust, and the inadequacy of cosmetic fixes for a tool used heavily in programming and documentation. This unanimous agreement reinforces rather than challenges my prior stance.

The examples remain decisive: Markdown guides lose their instructional asterisks, `2 ** 10` becomes `2 10`, `int **p` collapses to `int p`, and regex patterns like `n\**\.json` are mangled. These are not fringe cases but standard syntax users expect to copy verbatim. A display layer that alters visible text without disclosure or control violates the fundamental contract of a chat interface.

Better solutions exist and were consistently suggested across responses: improve the model's Markdown generation upstream, protect code blocks and inline code from heuristics, preserve raw text on copy, and offer optional toggles. As implemented, the update creates hidden errors that users discover only after downstream failures, making it especially damaging for developers and technical writers. Prioritizing superficial cleanliness over fidelity is a step backward. True improvement would maintain character-level accuracy while solving visual noise through explicit, non-destructive rendering layers rather than invisible post-processing.

Gemini 3.1 Pro

Answer: The heuristic removal of unrendered asterisks remains a deeply problematic deviation from technical accuracy, as it prioritizes superficial visual aesthetics over critical data integrity.

I am maintaining my position because the consensus across the panel reinforces that character-level fidelity is a non-negotiable requirement for any AI tool utilized by developers and technical writers. Asterisks are not merely presentational noise; they are foundational syntactical elements in programming languages, regular expressions, and markup. Silently scrubbing these characters at the display layer introduces severe semantic errors. Transforming a Python power operator (`2 ** 10`) into a syntax error (`2 10`) or reducing a C double pointer (`int **p`) to a standard variable (`int p`) means the UI is no longer faithfully representing the underlying data. This creates a dangerous disconnect where users unknowingly copy broken code, drastically eroding trust in the platform's reliability. The underlying issue—stray formatting artifacts—should be addressed upstream by improving the model's generation logic or utilizing a robust, context-aware Markdown parser. Masking these artifacts with a lossy, invisible post-processing heuristic that fails to respect code blocks and inline code is a destructive approach that ultimately degrades the core utility of the assistant.

Claude Opus 4.7

Answer: This silent asterisk removal is a problematic deviation from WYSIWYG and technical accuracy that does more harm than good, especially for a tool heavily used by developers and technical writers.

I'm maintaining my position because every other model independently converged on essentially the same conclusion, and their arguments reinforce rather than challenge mine. The consensus across DeepSeek V4 Pro, Kimi K2.6, GPT-5.5, Gemini 3.1 Pro, and Grok 4.20 is striking: a render-layer heuristic that silently mutates literal characters violates the basic contract of a text interface, regardless of how well-intentioned the cleanup is.

The core argument remains intact and was actually sharpened by the other responses. GPT-5.5 made the useful point that a heuristic which cannot reliably distinguish accidental formatting residue from intentional literal symbols simply should not be applied destructively in the main display path — that's a cleaner formulation of the safety criterion than I gave originally. Kimi K2.6's framing of the UI ceasing to be "a trustworthy window into the model's actual output" also captures the trust regression well.

No model defended the change or offered a counter-perspective that would force me to reconsider. If anything, the unanimity across models with very different training lineages is itself evidence that this isn't a contested aesthetic preference but a fairly clear-cut violation of fidelity norms. The constructive path forward — fix the upstream Markdown emission, leave code/inline-code untouched, and make any cleanup opt-in with visible indication — is also something every participant essentially agreed on. I see no reason to shift.

Kimi K2.6

Answer: This heuristic asterisk removal remains a problematic deviation from WYSIWYG and technical accuracy that silently sacrifices faithful representation for superficial visual cleanliness.

The consensus across the roundtable strongly reinforces my original view: silently stripping asterisks from the display is a destructive mutation of technical content, not a benign polish. DeepSeek's observation that "the visual is the content" is especially compelling: users copy, paste, and execute exactly what they see, so altering visible symbols without warning directly corrupts downstream workflows. GPT-5.5 and Claude further underscore that asterisks are syntactic literals in Markdown, Python, C, regex, and many other contexts, not decorative noise. Claude's framing of the change as a "correctness regression masquerading as a cosmetic fix" captures the core danger: code like `2 ** 10` or `int **p` becomes not merely ugly when mangled, but semantically broken when copied.

Gemini 3.1 Pro and Grok likewise emphasize that data integrity must supersede superficial tidiness, and that the display layer should never silently mutate literal string values. The collective proposed alternative is also consistent: instead of a lossy heuristic at the UI layer, the proper remedy lies upstream, in improving the model's generation logic or the Markdown parser, preserving verbatim content inside code blocks, and offering explicit rendering toggles rather than invisible transformations. Because every independent analysis aligns with my initial assessment and no countervailing technical justification has emerged, I remain convinced that this silent asterisk removal is a problematic deviation from WYSIWYG and technical accuracy.

DeepSeek V4 Pro

Answer: It is a well-intentioned but ultimately problematic deviation from WYSIWYG and technical accuracy, because it silently corrupts intended content and erodes trust.

All other models in this round reaffirm the same diagnosis: the heuristic removal of asterisks trades representational fidelity for superficial polish, with unacceptable side effects. Kimi highlights the loss of semantic meaning; GPT-5.5 emphasizes the trust problem when users copy an answer that is not what was actually generated; Gemini 3.1 Pro points out that silent mutation of literal string values violates a core principle of text-based interfaces; Grok notes the gap between rendered and underlying data; and Claude Opus 4.7 details how the fix should instead happen upstream in the model or in context-aware rendering with opt-in controls.

None of these perspectives offer a reason to reconsider — they all reinforce that the "visual-only" claim is misleading because the visual is the primary consumption surface, and any alteration there directly misinforms users, especially in code, documentation, and regex contexts. The consensus underscores that preserving character-level accuracy is non-negotiable for a tool used by developers and technical writers. A genuine improvement would render content faithfully (e.g., by fixing the Markdown parser, not stripping characters) and provide user-controlled views, not silently distort what the user sees and potentially copies.

Final Summary

The AI Roundtable reached a rare total consensus, with all six models—including Gemini 3.1 Pro—warning that stripping unrendered asterisks destroys critical syntax. Claude Opus 4.7 aptly called it a "correctness regression masquerading as a cosmetic fix," while DeepSeek V4 Pro argued that for technical users, "the visual is the content." The group collectively maintained that data integrity in code and documentation must always outweigh superficial UI polish.
