
The User Is Already in the Room

How Vectorial runs live simulations on your product while you build it

Every product team eventually asks the same question: what do our users actually think of this?

The answer usually comes late. Research is a phase. Building is a phase. Feedback arrives after the decisions have already been made, distilled through surveys, filtered through interviews, delayed by recruiting cycles. By the time a real user interacts with what you've built, you're already three iterations ahead.

The feedback loop isn't broken because teams don't care about users. It's broken because users aren't present when decisions get made.

What if they were?

With Vectorial, a synthetic audience lives and breathes in the same room while you build.


Section 1

Opinions built from real signals

Before showing what this looks like in practice, it's worth understanding why the opinions can be trusted.

Synthetic users in Vectorial are not invented. Every opinion a persona forms is a direct function of three things:

  • Traits: behavioral patterns and decision tendencies extracted from real conversations
  • Attributes: demographic and firmographic distributions inferred from actual audience data
  • Exposure: the knowledge signals that shape how they respond to what they see
Response = Traits + Attributes + Exposure

Traits tell you how they think. Attributes tell you who they are. Exposure tells you what they know: how recently they've encountered relevant information, how aware they are of your product, how familiar they are with the competitive landscape.
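The three-layer model can be sketched as a data structure. This is illustrative only; the class and field names below are assumptions for the sketch, not Vectorial's actual schema. The exposure figures match the panel shown below (75% / 62% / 54%).

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    """Knowledge signals that shape how a persona responds (all 0.0-1.0)."""
    recency: float              # how recently they encountered relevant information
    product_awareness: float    # how aware they are of the product
    competitor_exposure: float  # familiarity with the competitive landscape

@dataclass
class Persona:
    traits: dict[str, float]    # behavioral tendencies extracted from real conversations
    attributes: dict[str, str]  # demographic / firmographic fields
    exposure: Exposure

# A hypothetical CTO-like persona, expressed as data.
cto = Persona(
    traits={"risk_aversion": 0.7, "roi_focus": 0.9},
    attributes={"role": "CTO", "company_size": "enterprise"},
    exposure=Exposure(recency=0.75, product_awareness=0.62, competitor_exposure=0.54),
)
```

The point of the structure: a response is never free-floating text. It is a function of who the persona is, how they tend to think, and what they have been exposed to.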

All of this is sourced from real signals (Reddit posts, LinkedIn discourse, enterprise data), not from templates or assumptions. The profiles are grounded in real people. The traits are extracted from what those people actually said.

Vectorial Audience Opinion Intelligence panel showing the three-layer model: Traits, Attributes, and Exposure
The Intelligence layer of a Vectorial audience: traits sourced from Reddit, demographic distributions, and exposure signals including recency (75%), product awareness (62%), and competitor exposure (54%).

Section 2

The agent takes the wheel

Here is what it looks like in a real build environment.

A team is prototyping a dashboard for an AI-enabled engineering platform. The product serves multiple personas, each needing something different from the same interface:

  • Engineering Managers tracking team performance
  • Backend Engineers debugging infrastructure
  • VPs overseeing delivery
  • CTOs making strategic decisions

Rather than running a research session after the build, Vectorial is installed as a Chrome extension alongside the Lovable prototype. Four trained audiences are selected. A single instruction is typed: "add the right metrics, analyze how they make sense from your perspective, and how the metrics should be displayed."

Then the browser agent starts exploring.

Browser agent navigating to the Health tab of the dashboard autonomously, with Recording indicator visible
The browser agent mid-session: it has navigated autonomously to the Health tab, recording its journey. The Vectorial sidebar shows the instruction and the simulation in progress.

The agent moves through the product the way a real user would. It reads the dashboard. It navigates between tabs. It encounters the incident panel, the health metrics, the latency charts, the service status. It doesn't answer abstract survey questions about what users want. It uses the actual product and forms opinions in context.


Section 3

Different people, different truths

Four audiences ran the same simulation on the same dashboard. The responses did not converge.

Engineering Manager

"This dashboard captures the right operational metrics but I'm concerned about alert fatigue and whether my team can actually act on all this information during an incident."

CTO

"The incident management focus is solid but lacks business impact quantification. I need to understand how technical incidents translate to revenue loss, customer churn, or SLA breaches to prioritize resources and communicate with the board."

Backend Engineer

"The dashboard covers the operational basics well, but I'm missing critical backend health metrics. The information I need to debug and respond isn't surfaced at the right level."

Same product. Same moment in time. Three completely different failure modes identified, none of which would have surfaced from aggregate analytics, and none of which required a single interview to be scheduled.

Engineering Manager feedback card showing alert fatigue concern and DO NEXT recommendations
Engineering Manager: concerns about alert fatigue and actionability during incidents, with specific next steps generated.
CTO detailed feedback panel showing business impact quantification gap and ROI justification requirement
CTO: the dashboard lacks business impact translation and predictive insights. Four specific improvement areas identified.

The divergence isn't noise. It's structured variation: each persona activating different traits against the same stimulus, producing different but internally consistent reactions. This is what behavioral simulation produces that surveys cannot: not an average opinion, but a distribution of truths.
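A minimal sketch of that idea, with trait names and weights invented for illustration: the same stimulus, filtered through different trait weightings, yields different dominant concerns for each persona.

```python
# One stimulus (the dashboard), described by invented feature scores.
stimulus = {"operational_detail": 0.9, "business_impact": 0.2, "debug_depth": 0.4}

# Each persona weights those features differently (weights are hypothetical).
personas = {
    "Engineering Manager": {"operational_detail": 0.5, "business_impact": 0.2, "debug_depth": 0.3},
    "CTO":                 {"operational_detail": 0.1, "business_impact": 0.8, "debug_depth": 0.1},
    "Backend Engineer":    {"operational_detail": 0.2, "business_impact": 0.1, "debug_depth": 0.7},
}

def reaction(weights: dict[str, float]) -> dict[str, float]:
    """What a persona 'notices': stimulus features scaled by its trait weights."""
    return {k: round(stimulus[k] * w, 2) for k, w in weights.items()}

# The dominant concern differs per persona, yet each reaction is internally
# consistent with that persona's weights: a distribution, not an average.
dominant = {name: max(reaction(w), key=reaction(w).get) for name, w in personas.items()}
```

Averaging the three reactions would blur exactly the signal that matters: which concern dominates for whom.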


Section 4

Every opinion has a source

This is where synthetic user tools typically break down. Opinions that can't be traced to anything real aren't opinions; they're hallucinations dressed as feedback.

Vectorial traces every opinion to its source. The CTO persona's concern about ROI justification for AI tooling isn't generated from thin air. As the simulation runs, the browser navigates to a real LinkedIn post from a real technology leader writing publicly about how AI hype makes the gap between expectations and realized impact even wider. That post is the evidence. The opinion is grounded in it.

Browser agent navigating to a real LinkedIn post as supporting evidence for CTO persona feedback
Supporting evidence in real time: the browser navigates to a real public post by a technology leader. The CTO persona's feedback is directly cited against it.

The VP Engineering's feedback about DORA metrics traces to another real post by a real engineering leader writing about the ongoing debate over what engineering teams actually need to measure, and why operational dashboards often focus on what's broken rather than what's delivering value.

The opinion isn't simulated. The reasoning is real. The source is cited.


Section 5

The loop closes in one step

Once all four personas have completed their sessions, Vectorial synthesises their feedback into a single consolidated prompt. Not a summary document. Not a slide deck. A direct, actionable instruction set ready to be dropped straight into the build tool.

Generate Prompt modal showing consolidated redesign instructions from all four personas
The consolidated prompt: role-based customization, smart alert prioritization, business impact correlation, team capacity indicators. Every element is traced back to a specific persona's feedback.

The prompt specifies three views: Tactical for infrastructure metrics (DB connections, CPU/memory, queue depths), Operational for incident response (active incidents, blast radius, TTD/TTR), and Strategic for business impact correlation (latency to conversion, uptime to SLA budget, deployment frequency to feature velocity). Each view surfaces only what that role needs, organised by urgency rather than metric type.
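The view specification reads naturally as configuration. The sketch below assumes a structure and metric identifiers for illustration; the metric names are taken from the prompt described above, but the schema itself is invented.

```python
# Hypothetical configuration mirroring the consolidated prompt's three views.
VIEWS = {
    "Tactical": {       # infrastructure metrics, e.g. for Backend Engineers
        "metrics": ["db_connections", "cpu_memory", "queue_depths"],
    },
    "Operational": {    # incident response, e.g. for Engineering Managers
        "metrics": ["active_incidents", "blast_radius", "ttd_ttr"],
    },
    "Strategic": {      # business impact correlation, e.g. for VPs and CTOs
        "metrics": [
            "latency_to_conversion",
            "uptime_to_sla_budget",
            "deploy_frequency_to_feature_velocity",
        ],
    },
}

def metrics_for(view: str) -> list[str]:
    """Return only the metrics a given view surfaces."""
    return VIEWS[view]["metrics"]
```

Each view exposes a disjoint slice of the metric space, which is the structural opposite of the original one-dashboard-for-everyone design.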

The prompt is copied and pasted into Lovable. Lovable rebuilds the dashboard.

Rebuilt dashboard showing Strategic view with Sprint Velocity, Deploy Frequency, and Business Impact section
The rebuilt Strategic view: Sprint Velocity 52 (+9% trending), Deploy Frequency 2.7/d (+50% vs last month), and a Business Impact section. Built from the personas up.

Before: one dashboard trying to speak to everyone, succeeding for no one.

After: three views, each designed around a specific role's mental model, built directly from what those roles said.

The entire cycle (simulation, synthesis, rebuild) happened without leaving the browser.


Section 6

What the platform produces

The Lovable demo is one activation of a deeper system. The same infrastructure runs full usability tests: sentiment tracked across session time, assumptions validated or broken, opportunity signals expressed as testable hypotheses.

Aspire usability test Timeline Graph showing four diverging sentiment curves across session time
A usability test run on a real client prototype, four persona groups, sentiment tracked across session time. Enterprise Brand Marketers trending downward throughout. Enterprise Influencer Marketing Managers climbing to 90. Same product, four completely different emotional journeys.

Where personas diverge, the platform identifies it explicitly: which groups align on which interactions, where risk areas emerge, what the synthesis is across the population. Assumptions that held. Assumptions that failed. Hidden system insights that no individual user would have articulated but that emerge from the pattern across all of them.

Broken vs Validated Assumptions panel from Aspire usability test
Broken vs Validated Assumptions: "Key metrics are centrally located" failed. "Initial load provides immediate feedback" failed. "Users value actionable insights" held. Hidden system insight: fragmented journeys cause users to expend unnecessary effort locating what they need.

The output isn't a report. It's a structured understanding of where your product aligns with how real people think, and where it doesn't.


Product development has always had a timing problem. The people who build and the people who use rarely occupy the same moment. Research tries to bridge that gap, but it bridges it after the fact, or before it with assumptions that don't survive contact with the real thing.

What changes when trained audiences can browse your product, form grounded opinions, and hand you an actionable prompt before you ship is not just the speed of the feedback loop. It's the nature of what you're building toward. You stop designing for an imagined user. You start building for one that is already in the room.

