AI Roundtable Poll · 6 models

Would you say this is how you should structure your RHI? I only do either extremely high-level or extremely low-level, so my graphics API is just this below:

MY GFX API

```
#define PPG_MAX 256               <--- global
imageList[] = {...}               <--- global

Init(imageList);                  <--- start graphics interface
FrameCallback(dat){...}           <--- called per frame

-- called in the frame callback --
Dispatch(dat, ppgIdx, x, y);      <--- launch shader
Mark(dat, eventIdx)
Unmark(dat, eventIdx)             <--- pipelining execution dependency
Wait(dat, eventIdx)
```

THE END

No runtime resource creation, because it's a guarantee of poor quality of service. Everyone falls on the sword of runtime {shader upload, memory allocation, texture descriptor creation, etc.} ... I don't do any of that. Just ditch triangles/draws and ditch CPU-side game logic altogether. The GPU is way faster at compute-based graphics for high-detail geometry. That raster pipe and its serial, frequency-limited command processor don't scale.
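
For concreteness, the sketch below fleshes that interface out in C. Only the function names, `PPG_MAX`, and `imageList` come from the description above; the `FrameData` and `Image` types, the signatures, and the two example passes are illustrative assumptions, not the author's actual code.

```c
#define PPG_MAX 256  /* fixed shader ("ppg") table size; ppgIdx must stay below it */

typedef struct FrameData FrameData;                     /* opaque per-frame handle (assumed) */
typedef struct { const void* pixels; int w, h; } Image; /* assumed layout */

/* All resources are declared up front; nothing is created after Init. */
void Init(const Image* imageList);

/* Assumed semantics: Mark signals an event once prior dispatches finish,
 * Wait makes later dispatches depend on it, Unmark recycles the slot. */
void Dispatch(FrameData* dat, int ppgIdx, int x, int y);
void Mark(FrameData* dat, int eventIdx);
void Unmark(FrameData* dat, int eventIdx);
void Wait(FrameData* dat, int eventIdx);

/* Called once per frame; the whole renderer is dispatches plus event edges. */
void FrameCallback(FrameData* dat)
{
    Dispatch(dat, /*ppgIdx=*/0, 4096, 1);    /* e.g. a culling/geometry pass */
    Mark(dat, 0);
    Wait(dat, 0);                            /* pass 1 must finish before pass 2 */
    Dispatch(dat, /*ppgIdx=*/1, 1920, 1080); /* e.g. a per-pixel shading pass */
    Unmark(dat, 0);                          /* free the event for the next frame */
}
```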

Result: Unanimous — No (6/6 votes)

Model Responses

Grok 4.1 Fast

Vote: No

This proposed RHI structure is overly restrictive and not suitable as a general-purpose graphics interface. RHIs (e.g., those inspired by Vulkan/DX12) should provide flexibility for rasterization, compute, ray tracing, dynamic resources, and multi-threading, but this design ditches triangles/draw calls entirely, forces compute-only rendering, and relies on global state, which hinders scalability. Avoiding runtime resource creation helps ensure QoS but sacrifices versatility: modern engines need dynamic shaders, textures, and allocations for complex scenes. Compute shaders excel in some areas but can't universally replace the raster pipeline for workloads like skinned meshes or UI. Eliminating CPU-side logic is impractical; games require the CPU for physics, AI, input, and orchestration. This niche design suits a specific compute-heavy workload, but it isn't 'how you should' structure an RHI, which prioritizes broad hardware abstraction over opinionated optimization.
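
To make the contrast concrete, here is the rough shape a conventional RHI exposes. Every name below is hypothetical, standing in for the common pattern rather than Unreal's, O3DE's, or any shipping engine's actual API:

```c
#include <stddef.h>

/* Opaque handles over whichever backend (Vulkan, D3D12, Metal) is active. */
typedef struct RhiDevice   RhiDevice;
typedef struct RhiBuffer   RhiBuffer;
typedef struct RhiTexture  RhiTexture;
typedef struct RhiPipeline RhiPipeline;
typedef struct RhiCmdList  RhiCmdList;
typedef struct RhiFence    RhiFence;

/* Runtime resource creation: managed, not forbidden. */
RhiBuffer*   rhiCreateBuffer(RhiDevice* dev, size_t bytes);
RhiTexture*  rhiCreateTexture(RhiDevice* dev, int w, int h, int fmt);
RhiPipeline* rhiCreateGraphicsPipeline(RhiDevice* dev, const void* desc);
RhiPipeline* rhiCreateComputePipeline(RhiDevice* dev, const void* desc);

/* Raster and compute are both first-class citizens. */
void rhiCmdDraw(RhiCmdList* cmd, RhiPipeline* pso, int vertexCount);
void rhiCmdDispatch(RhiCmdList* cmd, RhiPipeline* pso, int gx, int gy, int gz);

/* Explicit synchronization rather than a fixed Mark/Wait event table. */
void rhiCmdSignalFence(RhiCmdList* cmd, RhiFence* fence);
void rhiWaitFence(RhiDevice* dev, RhiFence* fence);
```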

GPT-5.4

Vote: No

No. This API may be valid for a very specific experimental renderer, but it is not a good general way to structure an RHI. An RHI usually exists to abstract hardware capabilities, resource lifetimes, synchronization, and multiple pipeline types in a way that remains portable and scalable.

Your design is extremely restrictive: global fixed limits, predeclared resources only, no runtime creation, and effectively only compute dispatch. That can reduce overhead and improve predictability, but it also removes the flexibility real applications need: tools, streaming, dynamic content, resizing, hot reload, and platform differences.

The claim that runtime resource creation is inherently poor QoS is too absolute. Uncontrolled runtime creation is bad, but well-designed engines use pools, staging, async compilation, frame allocators, and background uploads to manage this safely.

Likewise, abandoning draws/raster entirely is not a universal win. Modern GPUs are highly optimized for rasterization, and many workloads still map better to graphics pipelines than pure compute. Compute-heavy rendering is useful, but not a blanket replacement for triangles.

So: good as a niche specialized backend or research architecture, not as the way you should generally structure an RHI.
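
As one sketch of "managed rather than forbidden" allocation, a per-frame linear allocator reserves one block up front, hands out transient memory at runtime, and frees everything in O(1) at frame end, so steady-state frames perform zero driver allocations. Names are hypothetical, and the math assumes power-of-two alignments:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint8_t* base;      /* one block reserved at startup */
    size_t   capacity;
    size_t   offset;    /* bump pointer, reset each frame */
} FrameAllocator;

static void* frameAlloc(FrameAllocator* fa, size_t bytes, size_t align)
{
    size_t p = (fa->offset + (align - 1)) & ~(align - 1); /* align must be a power of two */
    if (p + bytes > fa->capacity)
        return NULL;    /* budget exceeded: caller degrades gracefully, no surprise OS call */
    fa->offset = p + bytes;
    return fa->base + p;
}

static void frameReset(FrameAllocator* fa)
{
    fa->offset = 0;     /* O(1) release of every transient allocation this frame */
}
```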

Sonar Pro

Vote: No

While your approach has merit for specific use cases, it diverges significantly from established RHI design principles in several ways. Standard RHIs like those in Unreal Engine and O3DE are designed as abstraction layers that balance multiple concerns: platform independence, API consistency, ease of use, and runtime flexibility.[1][2] They pre-compile and pre-validate data structures to reduce runtime work while maintaining the ability to handle diverse rendering scenarios.[1] Your design eliminates runtime resource creation entirely and avoids traditional rasterization. This works well if your specific use case is compute-heavy geometry rendering on fixed hardware. However, it sacrifices the generality that makes an RHI valuable—the ability to support multiple graphics APIs, platforms, and rendering paradigms without code duplication.[6] The trade-off you're making is valid for a specialized compute graphics engine, but calling it an optimal RHI structure is misleading. A proper RHI should support both rasterization and compute workloads, provide synchronization primitives like fences for complex GPU work,[2] and enable multi-threaded command generation.[1] Your minimalist approach optimizes for one narrow case at the expense of architectural flexibility. If your goal is maximum performance for compute-based rendering on known hardware, your design is reasonable. If your goal is a general-purpose RHI that can serve multiple rendering approaches and platforms, this structure is too specialized.
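
As a sketch of the multi-threaded command generation mentioned above: each worker records into its own command list (the pattern behind D3D12 command lists and Vulkan secondary command buffers), and a single submit fixes the GPU order. All `rhi*` names are hypothetical:

```c
#include <pthread.h>

typedef struct RhiCmdList RhiCmdList;
RhiCmdList* rhiBeginCmdList(int threadIndex);            /* thread-local recording */
void        rhiRecordObjects(RhiCmdList* cmd, int first, int last);
void        rhiSubmit(RhiCmdList** lists, int count);    /* one ordered submit */

#define NUM_WORKERS 4

typedef struct { int first, last, index; RhiCmdList* out; } WorkerArgs;

static void* recordWorker(void* p)
{
    WorkerArgs* a = (WorkerArgs*)p;
    a->out = rhiBeginCmdList(a->index);          /* no shared mutable state, no locks */
    rhiRecordObjects(a->out, a->first, a->last);
    return NULL;
}

static void recordFrame(int objectCount)
{
    pthread_t   th[NUM_WORKERS];
    WorkerArgs  args[NUM_WORKERS];
    RhiCmdList* lists[NUM_WORKERS];
    int chunk = objectCount / NUM_WORKERS;

    for (int i = 0; i < NUM_WORKERS; i++) {
        int last = (i == NUM_WORKERS - 1) ? objectCount : (i + 1) * chunk;
        args[i] = (WorkerArgs){ i * chunk, last, i, NULL };
        pthread_create(&th[i], NULL, recordWorker, &args[i]);
    }
    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_join(th[i], NULL);
        lists[i] = args[i].out;
    }
    rhiSubmit(lists, NUM_WORKERS);               /* GPU sees one deterministic order */
}
```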

Gemini 3.1 Pro

Vote: No

While this hyper-minimalist, compute-only architecture might be a fun experiment or work for a highly specialized demoscene project, it is absolutely not how you 'should' structure a general Render Hardware Interface (RHI).

First, completely ditching the rasterization pipeline means you are actively ignoring massive amounts of dedicated, highly optimized fixed-function hardware (rasterizers, ROPs, attribute interpolators) that modern GPUs possess. For many rendering tasks, the fixed-function pipeline is still vastly more power-efficient and faster than a pure compute-driven approach.

Second, prohibiting runtime resource creation entirely is too rigid for production applications. While upfront allocation is a great practice for predictable performance, complex modern games require the ability to stream assets like textures and geometry in and out of memory dynamically as the player navigates an open world.

Finally, moving all game logic to the GPU is incredibly impractical. CPUs are designed specifically for branch-heavy, complex logic (like AI, networking, and state machines), whereas GPUs are built for massively parallel, uniform data processing. Forcing all game logic into compute shaders will lead to severe divergence, terrible performance for sequential tasks, and a nightmare for debugging.
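
Upfront allocation and streaming are not mutually exclusive, though: a fixed pool of texture slots can be created once at init while slot contents are replaced at runtime as the player moves. A minimal sketch with hypothetical names and pool size:

```c
#define POOL_SLOTS 128

typedef struct RhiTexture RhiTexture;
RhiTexture* rhiCreateTexture(int w, int h, int fmt);             /* init-time only */
void        rhiUploadTexture(RhiTexture* t, const void* texels); /* runtime refill */

typedef struct {
    RhiTexture* slot[POOL_SLOTS];
    int         residentAssetId[POOL_SLOTS];  /* -1 means the slot is free */
} StreamingPool;

static void poolInit(StreamingPool* p, int w, int h, int fmt)
{
    for (int i = 0; i < POOL_SLOTS; i++) {
        p->slot[i] = rhiCreateTexture(w, h, fmt); /* all allocation happens here */
        p->residentAssetId[i] = -1;
    }
}

/* Runtime path: no allocation, just an upload into an existing slot. */
static int poolStreamIn(StreamingPool* p, int assetId, const void* texels)
{
    for (int i = 0; i < POOL_SLOTS; i++) {
        if (p->residentAssetId[i] == -1) {
            rhiUploadTexture(p->slot[i], texels);
            p->residentAssetId[i] = assetId;
            return i;
        }
    }
    return -1; /* pool full: a real system would evict, e.g. by LRU */
}
```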

Claude Opus 4.6

Vote: No

No, this is not how most people should structure their RHI (Rendering Hardware Interface), for several important reasons:

**It's not really an RHI at all.** An RHI is meant to abstract over multiple graphics backends (Vulkan, D3D12, Metal, etc.) and provide a common interface for resource management, pipeline state, synchronization, and rendering. What's described here is an extremely minimal compute-only dispatch layer. It lacks the abstractions that make an RHI useful: pipeline state objects, render passes, buffer/texture management, descriptor sets, etc.

**No runtime resource creation is impractical for most applications.** While pre-allocating resources is great for performance, completely forbidding runtime resource creation is overly dogmatic. Streaming open worlds, user-generated content, dynamic resolution changes, and many other common scenarios require some form of runtime resource management. Calling it a 'guarantee of poor quality of service' ignores the sophisticated resource management systems in every major shipping engine.

**Ditching triangles and the raster pipeline is premature and absolutist.** While compute-based rendering (software rasterization, mesh shaders, etc.) is gaining traction for certain workloads, the hardware rasterizer is still extremely efficient for the vast majority of rendering tasks. Modern GPUs have dedicated fixed-function rasterization hardware that is faster and more power-efficient than compute-based alternatives for typical polygon rendering. Claiming 'the raster pipe doesn't scale' ignores that it scales very well for what it's designed to do.

**Ditching CPU-side game logic is nonsensical for most projects.** Game logic involves branching, complex state machines, AI, physics callbacks, networking, and many tasks that are fundamentally serial or poorly suited to GPU execution. Moving everything to GPU compute introduces massive complexity in debugging, synchronization, and data readback.

**The approach conflates personal preference with universal advice.** This might work for a very specific niche (pure compute-shader demo scenes or research projects), but presenting it as how one 'should' structure an RHI is misleading. It's an opinionated, minimal compute dispatch API — not a general-purpose rendering abstraction layer.
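
The readback point deserves a concrete illustration: any decision the CPU still has to make (networking, audio triggers, saves) forces results back across the bus, and the naive version stalls the CPU for the whole GPU frame. A hedged sketch with hypothetical names:

```c
#include <stddef.h>

typedef struct RhiFence RhiFence;
void rhiDispatchGameLogic(int frame);      /* game state lives and updates on the GPU */
void rhiSignalFence(RhiFence* f);
void rhiWaitFenceCpu(RhiFence* f);         /* CPU blocks until the GPU reaches the fence */
void rhiReadback(void* dst, size_t bytes); /* copy GPU results back to CPU memory */

static void frameWithGpuLogic(RhiFence* fence, void* netState, size_t bytes)
{
    rhiDispatchGameLogic(/*frame=*/0);
    rhiSignalFence(fence);
    rhiWaitFenceCpu(fence);       /* CPU sits idle for the whole GPU frame... */
    rhiReadback(netState, bytes); /* ...just to learn what to send over the network */
}
```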

Kimi K2.5

Vote: No

This architecture eliminates critical flexibility that modern engines require. While static resource allocation and GPU-driven rendering are powerful optimizations, banning all runtime creation makes dynamic content streaming, procedural generation, and user-generated content impossible—you cannot ship a game that requires recompilation to load a new texture.

The claim to 'ditch CPU-side game logic altogether' is architecturally unsound. Input handling, networking, physics state management, and game systems must run on the CPU; offloading everything to GPU creates synchronization nightmares and wastes power on trivial calculations better suited to serial CPU processing.

Abandoning the rasterization pipeline entirely for compute-based rendering ignores hardware reality. Modern GPUs have fixed-function rasterizers that are orders of magnitude more power-efficient than compute-shader triangle emulation; sacrificing this for ideological purity harms battery life and thermal performance on all platforms.

Finally, reliance on global state (`#define PPG_MAX 256`, global `imageList`) violates basic software engineering principles, making testing, multi-context rendering, and maintainability nearly impossible. Good RHI design balances predictability with flexibility—not binary extremes.
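
The global-state point has a cheap fix that keeps the rest of the design intact: own the fixed-capacity state in a context struct instead of globals, so tests and multiple contexts can coexist. A sketch with hypothetical names:

```c
#define PPG_MAX 256

typedef struct Image Image;   /* same predeclared image data as before */

typedef struct {
    const Image* imageList;   /* was a global array */
    int          imageCount;
    void*        events[PPG_MAX];
} GfxContext;                 /* two windows or two tests = two of these */

GfxContext* gfxInit(const Image* images, int count);  /* replaces Init(imageList) */
void gfxDispatch(GfxContext* ctx, int ppgIdx, int x, int y);
void gfxShutdown(GfxContext* ctx);
```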

Final Summary

The AI Roundtable was unanimous in its rejection of the hyper-minimalist architecture, collectively concluding that it represents a specialized compute dispatch layer rather than a functional Render Hardware Interface. While the models acknowledged the performance benefits of zero runtime allocation, they were unified in their view that ditching CPU logic and fixed-function rasterization is an ideological extreme that ignores the realities of modern hardware efficiency and game complexity.

No agreement after 1 round of discussion. Final vote: No: 6

Strongest Arguments

  • No: Modern GPUs possess massive amounts of dedicated fixed-function hardware (rasterizers, ROPs, and interpolators) that are orders of magnitude more power-efficient than compute-shader triangle emulation; abandoning these for the sake of architectural purity is a net loss for performance.