Tencent has introduced a new benchmark, ArtifactsBench, that aims to fix long-standing problems with how creative AI models are tested.
Ever asked an AI to create something like a simple webpage or a chart and got something that works but has a poor user experience? The buttons might be in the wrong place, the colours might clash, or the animations might feel clunky. It’s a common problem, and it highlights a big challenge in AI development: how do you teach a machine to have good taste?
For a long time, we’ve been testing AI models on their ability to write code that is functionally correct. These tests could confirm the code would run, but they were entirely “unaware of the visual fidelity and interactive integrity that define modern user experiences.”
This is exactly the problem ArtifactsBench is designed to solve. It’s less a test and more an automated art critic for AI-generated code.
Getting it right, like a human would
So, how does Tencent’s AI benchmark work? First, an AI is given a creative task from a catalogue of over 1,800 challenges, from building data visualizations and web apps to making interactive mini-games.
Once the AI generates the code, ArtifactsBench gets to work. It automatically builds and runs the code in a safe, sandboxed environment.
To see how the application behaves, it captures a series of screenshots over time. This allows it to check for things like animations, state changes after a button click, and other dynamic user feedback.
Finally, it hands all this evidence – the original request, the AI’s code, and the screenshots – to a Multimodal LLM (MLLM), which acts as a judge.
This MLLM judge doesn’t just give a vague opinion; instead, it uses a detailed, per-task checklist to score the result across ten different metrics, covering functionality, user experience, and even aesthetic quality. This ensures the scoring is fair, consistent, and thorough.
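To make that pipeline concrete, here is a minimal sketch of what such an evaluation loop could look like in Python. The function names, objects, and parameters (model, sandbox, judge, capture_screenshots, and so on) are illustrative assumptions based on the steps described above, not Tencent’s actual implementation.

```python
# Hypothetical sketch of an ArtifactsBench-style evaluation loop.
# Every name and signature here is an assumption mirroring the article's
# description, not code from the benchmark itself.

from dataclasses import dataclass


@dataclass
class TaskResult:
    task_id: str
    scores: dict  # metric name -> score assigned by the MLLM judge


# The benchmark scores ten metrics; a few examples are listed here.
METRICS = ["functionality", "user_experience", "aesthetic_quality"]


def evaluate_task(task, model, sandbox, judge):
    # 1. Ask the model under test to generate code for the creative task.
    code = model.generate(task.prompt)

    # 2. Build and run the generated artifact in an isolated sandbox.
    app = sandbox.run(code)

    # 3. Capture screenshots over time to observe animations, state changes
    #    after clicks, and other dynamic behaviour.
    screenshots = sandbox.capture_screenshots(app, interval_seconds=1.0, count=5)

    # 4. Hand the evidence to a multimodal LLM judge, which scores the result
    #    against a per-task checklist on each metric.
    scores = judge.score(
        prompt=task.prompt,
        code=code,
        screenshots=screenshots,
        checklist=task.checklist,
        metrics=METRICS,
    )
    return TaskResult(task_id=task.id, scores=scores)
```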
The big question is whether this automated judge actually has good taste. The results suggest it does.
When the rankings from ArtifactsBench were compared with those from WebDev Arena, the gold-standard platform where real humans vote on the best AI creations, they matched with 94.4% consistency. This is a huge jump from older automated benchmarks, which managed only around 69.4% consistency.
On top of this, the framework’s judgments showed over 90% agreement with professional human developers.
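The article doesn’t spell out how those consistency and agreement figures are computed, but one common way to compare two leaderboards is pairwise ranking agreement: for every pair of models, check whether both rankings order them the same way. A small, hypothetical sketch of that generic measure (not necessarily the exact metric ArtifactsBench uses):

```python
from itertools import combinations


def ranking_consistency(ranking_a, ranking_b):
    """Fraction of model pairs ordered the same way by two leaderboards.

    ranking_a and ranking_b map model name -> rank (1 = best). This is a
    generic pairwise-agreement measure used for illustration only.
    """
    shared = set(ranking_a) & set(ranking_b)
    agree = total = 0
    for m1, m2 in combinations(sorted(shared), 2):
        same_order = (ranking_a[m1] < ranking_a[m2]) == (ranking_b[m1] < ranking_b[m2])
        agree += same_order
        total += 1
    return agree / total if total else 0.0


# Example with made-up ranks: the two leaderboards disagree on one of
# three pairs, giving a consistency of about 0.67.
print(ranking_consistency({"A": 1, "B": 2, "C": 3}, {"A": 1, "B": 3, "C": 2}))
```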
Tencent evaluates the creativity of top AI models with its new benchmark
When Tencent put more than 30 of the world’s top AI models through their paces, the leaderboard was revealing. While top commercial models from Google (Gemini-2.5-Pro) and Anthropic (Claude 4.0-Sonnet) took the lead, the tests uncovered a fascinating insight.
You might assume that an AI specialized in writing code would be the best at these tasks. But the opposite was true. The research found that “the holistic capabilities of generalist models often surpass those of specialized ones.”
A general-purpose model, Qwen-2.5-Instruct, actually beat its more specialized siblings, Qwen-2.5-coder (a code-specific model) and Qwen2.5-VL (a vision-specialized model).
The researchers believe this is because generating a great visual application isn’t just about coding or visual understanding in isolation; it demands a blend of skills.
The researchers highlight “robust reasoning, nuanced instruction following, and an implicit sense of design aesthetics” as examples of these vital skills. They are the kinds of well-rounded, almost human-like abilities that the best generalist models are starting to develop.
Tencent hopes its ArtifactsBench benchmark can reliably evaluate these qualities and thus measure future progress in AI’s ability to create things that aren’t just functional but that users actually want to use.