Open Seat

A conceptual scouting platform designed to replace informal recruitment with verified data, semantic search, and contextualised performance metrics.

Role: Lead User Researcher & Lead Product Designer
Timeline: Jan 2026 – Mar 2026
Core Skills: User Research, Wireframing, Prototyping, Usability Testing
Tools: Figma, NVivo

From Discord Posts to Verified Driver Profiles — Fixing Broken Sim Racing Recruitment

I ran the full design lifecycle solo — from forum observation and semi-structured interviews through wireframing to a final round of usability testing with 5 participants. This was the most rigorous quantitative testing of my three projects: all task completion rates, timing data, and error counts were formally recorded.

The platform addresses the core problem at the heart of sim racing recruitment: team managers need to trust a driver enough to give them a race seat before they have any verified reason to. This design focuses on the discovery layer; a full trust architecture would also require an identity-verification layer, which is scoped as future work.

Discord Posts, No Verification, No Trust — Sim Racing Recruitment Is Broken

Forum observations across Discord servers and Reddit revealed a fragmented, entirely informal system. Recruitment posts lacked any verification layer, and team managers had no way to assess candidates beyond word-of-mouth references — a process blind to anyone outside an existing network. Observations and semi-structured interviews were synthesised through thematic analysis, surfacing four core findings:

Finding #1

Raw Data Alone Overwhelms — the Platform Must Curate Confidence, Not Just Surface Stats

Dumping verified data at decision-makers creates a new problem. Curation and information hierarchy matter as much as data completeness when trust is being built from scratch.

Finding #2

Soft Skills Matter as Much as Pace — the Platform Must Feel Like a Community, Not a Job Board

Managers explicitly said they hire for reliability and attitude over raw speed. Open Seat needed to replicate the warmth of a personal introduction, not the coldness of a filter-and-reject interface.

Finding #3

iRating Alone Discounts Racing Style, History, and Character

Reducing a driver to a single number erases context. Drivers need to feel validated for their history and style — not just their peak lap time.

Finding #4

Reliability Is the #1 Hire Criterion — Yet No Platform Surfaces It Clearly

Every team manager cited reliability as their top priority. Reliability needed to be treated as a headline metric, not buried in race history tables.

LinkedIn Confirmed the Layout Convention; iRacing Confirmed the Gap

LinkedIn Recruiter

Benchmarked for professional scouting UX. LinkedIn's filtering is powerful but entirely transactional — no mechanism for assessing human fit. This confirmed Open Seat needed a more human, biographical context alongside stats. LinkedIn's profile hierarchy (identity → credentials → history) directly informed the driver profile information architecture.

iRacing Member Site

iRacing's member site surfaces raw stats but has no search, filtering, or comparison functionality. It confirmed the core gap: the data exists, but there is no discovery layer built on top of it.

Two Roles, One Broken System — With Completely Different Points of Failure

James, the Team Owner

"I don't want just the fastest driver — I want someone who is reliable and won't bail last minute."

  • Waits passively in Discord servers for driver responses — no outbound search tools.
  • Has no way to assess temperament or reliability before committing to a driver.

Victor, the Seeking Driver

"I know I am good enough, but I don't have the connections or proof of reliability."

  • Posts stats publicly on Reddit — receives no inbound interest without existing connections.
  • Has no platform to prove his reliability record to strangers who don't know him.

"How might we give team managers the confidence of a personal introduction for drivers they've never met — through verified data, not word-of-mouth?"

Design Decisions

Decision #1

Conceptual Semantic Search to Replicate the "Warm Introduction"

Finding #2 established that hiring runs on cultural fit, not just stat filters. This conceptual AI assistant prototype — which currently exists only in Figma — explores how managers could describe their ideal teammate in plain language, with the system searching driver biographies to surface personality matches alongside stat filters. It is a concept that requires extensive validation before implementation.

Validation Roadmap: The immediate next step would be a rigorous validation study measuring the false-positive rate of AI personality attribution from racing telemetry and profile bios — to ensure the system does not misrepresent drivers in ways that harm their hiring prospects.
Open Seat home page with search option
Open Seat's conceptual AI assistant
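To make the concept concrete, here is a deliberately minimal sketch of bio-based matching using bag-of-words cosine similarity. All driver names and bios are hypothetical, and a production system would use learned text embeddings with confidence thresholds, not word counts — this only illustrates the "describe your ideal teammate, rank bios by fit" interaction.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words token counts (toy stand-in for an embedding)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical driver bios:
bios = {
    "driver_a": "Calm, consistent endurance specialist. Clean racer, team-first attitude.",
    "driver_b": "Aggressive sprint racer chasing fastest laps.",
}
query = "reliable, calm teammate for endurance races"
q = tokenize(query)
ranked = sorted(bios, key=lambda name: cosine_similarity(q, tokenize(bios[name])), reverse=True)
print(ranked[0])  # driver_a: shares "calm" and "endurance" with the query
```

Even in this toy form, the ranking surfaces the bio that matches the manager's plain-language description rather than the fastest driver — which is the point of the warm-introduction framing.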

Decision #2

Grid Scans Stats, List Reads Bios — Two Layouts for Two Distinct Vetting Stages

Managers vet in two stages: eliminate on numbers, then assess on fit. The grid layout enables rapid scanning of headline performance metrics; the list layout prioritises biography and communication style — supporting both stages in one tool. A 4x1 grid configuration was also tested against the 3x1: participants on the 4x1 took an average of 23 seconds longer to identify an appropriate driver and consistently reported the simultaneous data load as overwhelming, so the 3x1 grid was adopted as the column-density ceiling.

Grid layout for rapid driver scanning
List layout for bio-first evaluation

Decision #3

List Layout as Default: 80% of Participants Were More Confident With Their Choice

Both layouts were tested with all 5 participants. The grid enabled faster scanning — participants preferred it for emergency last-minute replacements where speed outweighed depth. The list layout took longer, but 80% of participants reported higher confidence in their final driver choice after reading biographical context before clicking through to a profile — and opened fewer unnecessary profiles in the process. Open Seat's primary use case is considered recruitment, not emergency replacement, so the platform defaults to list layout with grid accessible for time-critical situations.

Decision #4

Strength of Field Score Turns Raw Results Into Context

iRating (a driver's skill rating in iRacing's matchmaking system) alone is reductive — it says nothing about the quality of the competition a driver faced. Displaying Strength of Field (SoF) — the average iRating of all competitors in a given race — alongside every result gives each finish real meaning. A hard-fought P5 in a high-SoF race becomes more visible than a comfortable win against weak opposition. This directly addresses Finding #3: reducing drivers to a single number erases the context that matters.

Performance, reliability, and rating contextualised by Strength of Field
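The SoF calculation itself is simple, which is part of its appeal as a headline metric. A minimal sketch, with hypothetical iRating values chosen to show the contrast the decision describes:

```python
from statistics import mean

def strength_of_field(competitor_iratings: list[int]) -> int:
    """Average iRating of all competitors in a race, rounded: the SoF
    figure displayed alongside each result."""
    return round(mean(competitor_iratings))

# Hypothetical fields: a hard-fought P5 in the first vs. a win in the second
strong_field = [3200, 3100, 2950, 2900, 2850, 2800]
weak_field = [1500, 1450, 1400, 1350, 1300, 1250]
print(strength_of_field(strong_field), strength_of_field(weak_field))  # 2967 1375
```

Showing 2967 next to the P5 and 1375 next to the win lets a manager weigh the two results correctly at a glance, without opening the full race record.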

Decision #5

Usability Testing Reduced "Similar Drivers" from 8 to 5 — Eliminating Choice Paralysis

My initial hypothesis was that 8 similar driver suggestions would maximise discovery. Testing with 5 participants showed the opposite: participants struggled to hold 8 dense profiles in working memory, constantly re-reading the list rather than making decisions. 5 emerged as the functional ceiling for this data density within this testing group. A larger sample would be needed to generalise, but the signal was consistent and actionable.

8 similar drivers — too many for working memory at this data density
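The module's behaviour reduces to a capped top-k query. In this sketch the similarity measure is a crude iRating-distance placeholder (the real matching logic would weigh more signals), and all names are hypothetical; the point is the hard ceiling of 5 suggestions that testing established:

```python
def similar_drivers(candidates: list[dict], target: dict, k: int = 5) -> list[dict]:
    """Return the top-k most similar drivers; k=5 was the working-memory
    ceiling at this data density. Similarity here is iRating distance only,
    a placeholder for the real matching logic."""
    ranked = sorted(candidates, key=lambda d: abs(d["irating"] - target["irating"]))
    return ranked[:k]

# Hypothetical pool of 8 candidates, capped to 5 suggestions:
candidates = [{"name": f"d{i}", "irating": 2000 + i * 100} for i in range(8)]
target = {"name": "profile_driver", "irating": 2300}
picks = similar_drivers(candidates, target)
print([d["name"] for d in picks])  # 5 names, closest iRating first
```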

Decision #6

Identity → Performance → Discovery — A Narrative for the Driver Profile

Interviews with team managers revealed a consistent evaluation sequence: they first wanted to know who a driver was as a person, then assess their on-track record, and only then consider alternatives if the driver fell short. The Driver Profile mirrors this: establish identity, prove performance, then offer discovery. The "Similar Drivers" module is placed at the very bottom of the page — appearing only after the manager has evaluated the current driver — so it functions as a safety net rather than a distraction.

Driver profile page — information architecture following the manager's natural evaluation sequence

Decision #7

Visualising Reliability — Elevating Consistency Over Peak Pace

Finding #4 established that every team manager cited reliability as their top hiring criterion. Keeping that data buried in raw race logs would have made the platform no better than iRacing's member site. Incidents are measured per corner rather than per race to account for varying race distances — a longer race naturally produces more incidents in absolute terms, so raw counts would penalise drivers who enter longer events. The dashboard surfaces iRating, average finish, safety rating, win rate, and incidents per corner together, shifting the evaluative frame from fastest driver to most consistent teammate.

Reliability dashboard showing incidents per corner and consistency metrics — highlighted by real drivers and team owners during research

100% Task Completion — One Filtering Issue Caught and Fixed in a Second Test Round

Tested with 5 participants recruited from the sim racing community across two structured tasks: finding a driver matching specific criteria using filters, and assessing a driver's reliability stats thoroughly enough to form a hiring opinion.

All 5 participants completed both tasks successfully. Average time on the driver-matching task was 167 seconds; average time to assess reliability stats was 109 seconds. Average error count was 1 click per participant, occurring exclusively in the filtering stage. Post-test tightening of filter logic was validated in a second round with 0 errors recorded. No errors occurred on the reliability stats task.

Reflections & Takeaways

Biographical Context Is What Differentiates This From a Search Tool

The quantitative test results were strong, but the most important finding was qualitative: 80% of participants were more confident in their hiring decision after reading a driver's biography before clicking through. That confidence gap is the core value proposition — and it wouldn't have emerged from a purely stats-driven approach.

The AI Feature Carries Known and Unresolved Risk

The semantic search concept was the most positively received feature in testing, but attributing personality traits from telemetry and bios risks misrepresenting a driver in ways that directly harm their hiring prospects. A production version would require strict confidence thresholds, transparent attribution, and human oversight before any output could be trusted.

A Second Round of Testing Is Worth Budgeting For

Finding and fixing the filtering error in a second test round — reaching 0 errors — was only possible because I'd planned iteration time into the project. In a time-constrained environment, a single round of testing that ends without iteration leaves a known problem unfixed. The second round is often where the real value is validated.