The User Is Already in the Room
How Vectorial runs live simulations on your product while you build it
Every product team eventually asks the same question: what do our users actually think of this?
The answer usually comes late. Research is a phase. Building is a phase. Feedback arrives after the decisions have already been made, distilled through surveys, filtered through interviews, delayed by recruiting cycles. By the time a real user interacts with what you've built, you're already three iterations ahead.
The feedback loop isn't broken because teams don't care about users. It's broken because users aren't present when decisions get made.
What if they were?
The synthetic audience is living and breathing in the same room where you build.
Section 1
Opinions built from real signals
Before we show what this looks like in practice, it's worth understanding why the opinions can be trusted.
Synthetic users in Vectorial are not invented. Every opinion a persona forms is a direct function of three things:
- Traits: behavioral patterns and decision tendencies extracted from real conversations
- Attributes: demographic and firmographic distributions inferred from actual audience data
- Exposure: the knowledge signals that shape how they respond to what they see
Traits tell you how they think. Attributes tell you who they are. Exposure tells you what they know: how recently they've encountered relevant information, how aware they are of your product, how familiar they are with the competitive landscape.
All of this is sourced from real signals (Reddit posts, LinkedIn discourse, enterprise data), not from templates or assumptions. The profiles are grounded in real people, and the traits are extracted from what those people actually said.
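For concreteness, here is a minimal sketch of how such a profile could be shaped; every field name is hypothetical rather than Vectorial's actual schema:

```typescript
// Hypothetical sketch of a persona profile. Field names are illustrative,
// not Vectorial's actual schema.
interface SourceSignal {
  platform: "reddit" | "linkedin" | "enterprise";
  url: string;
  excerpt: string; // what the real person actually said
}

interface PersonaProfile {
  // Traits: behavioral patterns and decision tendencies from real conversations
  traits: {
    name: string;             // e.g. "risk-averse on tooling spend"
    evidence: SourceSignal[]; // the real posts the trait was extracted from
  }[];
  // Attributes: demographic and firmographic characteristics
  attributes: {
    role: string;             // e.g. "CTO"
    companySize: string;
    industry: string;
  };
  // Exposure: what the persona knows, and how recently
  exposure: {
    topic: string;            // e.g. "AI developer tooling ROI"
    lastEncountered: Date;    // recency of relevant information
    familiarity: "low" | "medium" | "high";
  }[];
}
```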
Section 2
The agent takes the wheel
Here is what it looks like in a real build environment.
A team is prototyping a dashboard for an AI-enabled engineering platform. The product serves multiple personas, each needing something different from the same interface:
- Engineering Managers tracking team performance
- Backend Engineers debugging infrastructure
- VPs overseeing delivery
- CTOs making strategic decisions
Rather than running a research session after the build, Vectorial is installed as a Chrome extension alongside the Lovable prototype. Four trained audiences are selected. A single instruction is typed: add the right metrics, analyze how it makes sense from your perspective, and how the metrics should be displayed.
Then the browser agent starts exploring.
The agent moves through the product the way a real user would. It reads the dashboard. It navigates between tabs. It encounters the incident panel, the health metrics, the latency charts, the service status. It doesn't answer abstract survey questions about what users want. It uses the actual product and forms opinions in context.
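As a rough mental model, the exploration can be pictured as an observe-react-act loop. The sketch below is illustrative only; every function in it is a hypothetical stand-in, not Vectorial's implementation:

```typescript
// Illustrative observe-react-act loop for a browser agent exploring a
// product in character. All functions are hypothetical stand-ins.
type AgentAction =
  | { kind: "read"; selector: string }
  | { kind: "navigate"; target: string }
  | { kind: "done" };

interface Observation {
  url: string;
  visibleText: string;
}

interface Reaction {
  observation: Observation;
  opinion: string;           // the persona's in-context reaction
  traitsActivated: string[]; // which traits shaped it
}

declare function observePage(): Promise<Observation>;
declare function reactInCharacter(persona: string, obs: Observation): Promise<Reaction>;
declare function decideNextAction(persona: string, obs: Observation): Promise<AgentAction>;
declare function executeAction(action: AgentAction): Promise<void>;

async function exploreAsPersona(persona: string, maxSteps = 20): Promise<Reaction[]> {
  const session: Reaction[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const obs = await observePage();                     // read what's on screen
    session.push(await reactInCharacter(persona, obs));  // form an opinion in context
    const action = await decideNextAction(persona, obs); // pick the next plausible move
    if (action.kind === "done") break;
    await executeAction(action);                         // switch tabs, read a panel, etc.
  }
  return session;
}
```

The key property is that opinions are recorded at each step of real product use, not elicited afterward from memory or abstraction.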
Section 3
Different people, different truths
Four audiences ran the same simulation on the same dashboard. The responses did not converge. Three of them:
"This dashboard captures the right operational metrics but I'm concerned about alert fatigue and whether my team can actually act on all this information during an incident."
"The incident management focus is solid but lacks business impact quantification. I need to understand how technical incidents translate to revenue loss, customer churn, or SLA breaches to prioritize resources and communicate with the board."
"The dashboard covers the operational basics well, but I'm missing critical backend health metrics. The information I need to debug and respond isn't surfaced at the right level."
Same product. Same moment in time. Three completely different failure modes identified, none of which would have surfaced from aggregate analytics, and none of which required a single interview to be scheduled.
The divergence isn't noise. It's structured variation: each persona activating different traits against the same stimulus, producing different but internally consistent reactions. This is what behavioral simulation produces that surveys cannot: not an average opinion, but a distribution of truths.
Section 4
Every opinion has a source
This is where synthetic user tools typically break down. Opinions that can't be traced to anything real aren't opinions; they're hallucinations dressed as feedback.
Vectorial traces every opinion to its source. The CTO persona's concern about ROI justification for AI tooling isn't generated out of thin air. As the simulation runs, the browser navigates to a real LinkedIn post from a real technology leader writing publicly about how AI hype makes the gap between expectations and realized impact even wider. That post is the evidence. The opinion is grounded in it.
The VP Engineering's feedback about DORA metrics traces to another real post by a real engineering leader writing about the ongoing debate over what engineering teams actually need to measure, and why operational dashboards often focus on what's broken rather than what's delivering value.
The opinion isn't simulated. The reasoning is real. The source is cited.
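In data terms, a grounded opinion never travels without its evidence. A hypothetical shape, not the platform's actual format:

```typescript
// Hypothetical structure of a grounded opinion: every claim carries the
// real source it traces back to. Names are illustrative.
interface GroundedOpinion {
  persona: string;  // e.g. "CTO"
  claim: string;    // e.g. "ROI justification for AI tooling is unclear"
  evidence: {
    platform: "linkedin" | "reddit" | "enterprise";
    author: string; // the real person who wrote the source post
    url: string;
    excerpt: string; // the passage the opinion is grounded in
  }[];
}
```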
Section 5
The loop closes in one step
Once all four personas have completed their sessions, Vectorial synthesises their feedback into a single consolidated prompt. Not a summary document. Not a slide deck. A direct, actionable instruction set ready to be dropped straight into the build tool.
The prompt specifies three views, each surfacing only what that role needs, organised by urgency rather than metric type:
- Tactical, for infrastructure metrics: DB connections, CPU/memory, queue depths
- Operational, for incident response: active incidents, blast radius, TTD/TTR
- Strategic, for business impact correlation: latency to conversion, uptime to SLA budget, deployment frequency to feature velocity
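In machine-readable form, what gets handed to the build tool could look something like the sketch below; the view names and metrics mirror the list above, while the shape itself is an assumption, not Vectorial's actual prompt format:

```typescript
// Hypothetical structured form of the consolidated rebuild instruction.
// View names and metrics come from the synthesized prompt; the shape is
// an assumption for illustration.
const rebuildSpec = {
  ordering: "by urgency, not by metric type",
  views: [
    {
      name: "Tactical",
      focus: "infrastructure metrics",
      metrics: ["DB connections", "CPU/memory", "queue depths"],
    },
    {
      name: "Operational",
      focus: "incident response",
      metrics: ["active incidents", "blast radius", "TTD/TTR"],
    },
    {
      name: "Strategic",
      focus: "business impact correlation",
      metrics: [
        "latency to conversion",
        "uptime to SLA budget",
        "deployment frequency to feature velocity",
      ],
    },
  ],
} as const;
```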
The prompt is copied and pasted into Lovable. Lovable rebuilds the dashboard.
Before: one dashboard trying to speak to everyone, succeeding for no one.
After: three views, each designed around a specific role's mental model, built directly from what those roles said.
The entire cycle (simulation, synthesis, rebuild) happened without leaving the browser.
Section 6
What the platform produces
The Lovable demo is one application of a deeper system. The same infrastructure runs full usability tests: sentiment tracked across session time, assumptions validated or broken, opportunity signals expressed as testable hypotheses.
Where personas diverge, the platform identifies it explicitly: which groups align on which interactions, where risk areas emerge, what the synthesis is across the population. Assumptions that held. Assumptions that failed. Hidden system insights that no individual user would have articulated but that emerge from the pattern across all of them.
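As a sketch of what that might look like as data (every field name here is illustrative, not the platform's actual output format):

```typescript
// Hypothetical shape of a full usability-test result. All field names
// are illustrative, not the platform's actual output format.
interface UsabilityTestResult {
  // Sentiment tracked across session time
  sentimentOverTime: { atSecond: number; score: number }[];
  // Assumptions that held, and assumptions that failed
  assumptions: { statement: string; held: boolean; evidence: string[] }[];
  // Opportunity signals expressed as testable hypotheses
  opportunitySignals: { hypothesis: string; testablePrediction: string }[];
  // Where persona groups align or diverge, per interaction
  divergence: {
    interaction: string;  // e.g. "incident panel"
    aligned: string[];    // persona groups that agree
    diverging: string[];  // persona groups that don't
    riskLevel: "low" | "medium" | "high";
  }[];
  // The cross-population read: patterns no individual user articulated
  synthesis: string;
}
```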
The output isn't a report. It's a structured understanding of where your product aligns with how real people think, and where it doesn't.
Product development has always had a timing problem. The people who build and the people who use rarely occupy the same moment. Research tries to bridge that gap, but it bridges it after the fact, or before it with assumptions that don't survive contact with the real thing.
What changes when trained audiences can browse your product, form grounded opinions, and hand you an actionable prompt before you ship is not just the speed of the feedback loop. It's the nature of what you're building toward. You stop designing for an imagined user. You start building for one that is already in the room.