WEVOLV

Platforms built for athletes assume trust is the starting point. It isn’t.

Professional athletes playing abroad struggle to build career-critical connections because there's no support for the most important part: the time spent figuring out if someone is worth trusting before they reach out.

What athletes need from a platform isn't fixed. It shifts based on their situation. A player between contracts in a foreign country needs something different from one settling into a new club mid-season. That insight led to a Connect feature that matches athletes to the right people based on their current situation, not just their career stage.

Role
UX Research · Speculative Design

Duration
Research: 7 weeks · Design: 2 weeks

Methods
Qualitative interviews, Quantitative survey (n=23), Secondary research & competitive analysis

Tools
Notion, Miro, Figma

Every platform skips the same phase.

What athletes actually do before connecting

Professional athletes navigating overseas careers spend two to four weeks in passive observation before making any connections: monitoring social media, asking around informally, and piecing together second-hand intel, because no platform gives them a safe way to evaluate before committing.

The missing phase is the Evaluate phase. It sits between awareness and engagement, and it doesn't exist in any product designed for athletes.

The finding that changed everything

Career stage doesn't predict what athletes need. Situation does. An athlete between contracts has completely different needs than a veteran settling into a new country, even if they're the same age, the same level, or the same sport.

A digital diagram titled 'Four-phase diagram' displaying four stages of athlete connection decision-making: Observe, Evaluate, Engage, Maintain. The diagram provides descriptions for each stage and additional notes at the bottom.
Woman running up outdoor stadium stairs with blue seating

Designing for the anxiety underneath the composure

Gabby is a veteran. Seven seasons abroad. UK, Netherlands, Luxembourg, Belgium, Finland and Sweden. She knows how this works.

She's between contracts right now, training regularly, coaching to stay connected. From the outside, she looks like exactly what she is: a professional in strategic patience mode, staying ready until the right opportunity surfaces.

But the research found something underneath that composure. Gabby wasn't sure her visibility was actually reaching the right people. Not panic, just quiet, persistent uncertainty about whether the system was working for her or whether she was just waiting in the dark.

That's what this flow was designed for. Not the composed exterior. The uncertainty underneath it.

The system does the labor. She stays ready.

Eight screens. One persona. Every decision traceable to the research.

Dashboard

Problem: Gabby is between contracts. She needs to know her visibility is reaching the right people before anything else.

Solution: The strongest match surfaces first. Visibility control sits on the status card. One tap, no navigation: athletes named it a protection mechanism, not a setting.

Onboarding

The research found that athletes extend trust incrementally: they need to see the system demonstrate accuracy before they hand over control. This screen is the graduated autonomy moment. The system is capable of acting on Gabby's behalf without asking. But it doesn't. She chooses her intent. The system sets smart defaults, shows her exactly what it turned on, and gives her one tap to adjust. Trust is earned before it's assumed.

Connect

The incoming card CTA reads “See profile”, not “Accept intro”, because she hasn't seen enough to accept yet. The decision lives on the profile, not the card. She needs context, shared history, a reason to trust before committing to any interaction. Every card in this feed leads with why it was surfaced before asking her to act. The signal card is visually distinct from the peer rows because it's doing a different job: the system found something, not someone. That distinction is load-bearing. Card equals system intelligence. Row equals human connection.
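The card/row distinction can be made concrete as a small data model. This is a minimal sketch, not the WEVOLV implementation; all names (SignalCard, PeerRow, the field names) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class SignalCard:
    """System intelligence: the system found something, not someone."""
    reason_surfaced: str       # leads the card: why the system surfaced this
    finding: str               # e.g. a contract opening the system detected
    cta: str = "See profile"   # the decision lives on the profile, not the card

@dataclass
class PeerRow:
    """Human connection: another athlete, led by shared context."""
    name: str
    shared_context: str        # e.g. "Same coach", "Same league"

FeedItem = Union[SignalCard, PeerRow]

def render(item: FeedItem) -> str:
    # Every item leads with why it was surfaced before asking her to act.
    if isinstance(item, SignalCard):
        return f"[CARD] {item.reason_surfaced} | {item.finding} ({item.cta})"
    return f"[ROW] {item.shared_context}: {item.name}"
```

Encoding the distinction in the type system, rather than in a styling flag, keeps "card equals system intelligence, row equals human connection" load-bearing in code as well as in the UI.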

Visibility

The research uncovered a specific form of anxiety: quiet, persistent uncertainty about whether visibility was actually reaching the right people. Not panic. Just an undercurrent that no platform had ever addressed directly. This screen responds with proof, not reassurance. Two agents in France viewed her profile. A connection is active right now. Specific signals, not aggregate metrics. The activity log below closes the loop on what the system has been doing on her behalf so she never has to wonder.

What players know

The research introduced the concept of micro-solidarity: athletes sharing hard-won knowledge with each other because no formal infrastructure exists to do it for them. The invisible labor of being everyone's resource was a recurring theme. This screen makes that intel accessible without requiring any one athlete to carry it. The photo of Lyon answers where before she reads a word. Peer cards lead with shared context, same coach, same league, because the research found that shared experience is the fastest trust signal available.

Trust Score design exploration

The research finding that killed the verified badge: 78% of athletes named peer reviews as their primary trust signal. Background checks registered at 4.3%. The Trust Score responds to that directly. A composite of peer reviews, connection depth, and track record, visible on the primary profile before she decides whether to engage.

Three states are shown because the display is designed to earn trust before stepping back. Early stage: the system shows its work. Three signal bars labeled Strong, Medium or Low across peer reviews, connection depth and track record. Established stage: once Gabby has experienced accurate matches, the score condenses. Three dots with a brief summary line beneath each, less scaffolding, same signals. Low signal: a new profile with limited history shows honestly what's known and what isn't, bars appear at Low or “None yet”, because an incomplete signal is more trustworthy than a fabricated one. The same principle that governs onboarding governs this: graduated autonomy applied to transparency.

Early Stage

Established Stage

Low Signal
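The three display states above can be sketched as a single selection function. This is a hypothetical sketch under stated assumptions: the signal level names come from the design, but the data structure, the state names, and the accurate-match threshold are illustrative, not the actual product logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustSignals:
    # Each signal is "Strong", "Medium", or "Low"; None means no data yet.
    peer_reviews: Optional[str]
    connection_depth: Optional[str]
    track_record: Optional[str]

def display_state(signals: TrustSignals, accurate_matches_seen: int) -> str:
    """Pick which Trust Score presentation to show.

    Graduated autonomy applied to transparency: show full scaffolding
    until the athlete has experienced accurate matches, then condense.
    An incomplete signal is shown honestly, never fabricated.
    """
    values = (signals.peer_reviews, signals.connection_depth, signals.track_record)
    if any(v is None or v == "Low" for v in values):
        return "low_signal"        # bars at Low or "None yet"
    if accurate_matches_seen >= 3:  # threshold is an assumption, not research data
        return "established"       # three dots with brief summary lines
    return "early"                 # three labeled signal bars: system shows its work
```

The key design choice the sketch captures: the condensed state is gated on the athlete's experience of accuracy, not on the system's confidence in itself.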

Supporting screens

The research found that career-level decisions (who sees her, how loudly the system reaches out) warrant a deliberate commit moment rather than auto-save. A single Save button throughout the status sheet is that moment. The Still True check-in uses pre-populated confirms because re-entering context she's already shared signals the platform doesn't know her. Confirming rather than re-entering builds trust before asking anything new.

Mobile app screen showing user status settings for a social or networking platform. Options include 'Between contracts', 'Availability — Staying ready', 'Geographic openness — Europe', 'Opportunity type — Playing only', 'Visibility — Connections and trusted ne...', 'Notifications — Weekly summary', and 'Outreach — Draft for me to review.'
Diagram of a multi-agent system architecture titled 'WEVOLV Connect Agentic Architecture'. It shows five specialized agents with descriptions and their interactions, including signal sources, fires first alone, routes and arbitrates, parallel specialists, and what the athlete sees. The system includes various data sources, trust graph, situation tags, ambient layers, surface layers, and an autonomous trigger, with color-coded sections for different agent functions.
Flowchart titled 'WEVOLV Agentic Flow' illustrating stages before app opens, during the app, and athlete decision-making, with sections on invisible and visible athlete states, processes like profile mapping, peer reviews, connection depth, and system responses, using purple, yellow, and green colors.

The intelligence is ambient. She doesn’t have to manage it.

The system gathers context from five signal sources before Gabby opens the app. Her onboarding profile, platform behavior, contract intelligence, the trust graph built from peer reviews and connection depth, and a status tag she controls.

Five specialized agents handle the logic. Each with a defined job, defined inputs, defined outputs. The situation inference agent fires first and alone. One rule governs all conflicts: visibility over relevance, declared over inferred. What she says she's navigating overrides everything the system infers. A system that updates her situation without asking isn't a partner, it's a manager. That distinction is the line between reducing invisible labor and creating a new kind of it.
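The arbitration rule, declared over inferred and visibility over relevance, can be expressed as two small pure functions. A minimal sketch: the tag names, confidence field, and function signatures are assumptions for illustration, not the actual agent implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class SituationSignal:
    tag: str           # e.g. "between_contracts", "settling_in" (hypothetical tags)
    confidence: float  # 0..1; only meaningful for inferred signals

def resolve_situation(declared: Optional[SituationSignal],
                      inferred: SituationSignal) -> SituationSignal:
    """Declared over inferred: what the athlete says she's navigating
    overrides anything the system infers, regardless of confidence.
    The system never updates her situation without asking."""
    return declared if declared is not None else inferred

def surfaced_matches(matches: List[str],
                     visibility_allows: Callable[[str], bool]) -> List[str]:
    """Visibility over relevance: a highly relevant match is never
    surfaced to anyone the athlete's visibility setting excludes."""
    return [m for m in matches if visibility_allows(m)]
```

Keeping both rules as filters applied after inference, rather than weights mixed into it, is what makes the system a partner rather than a manager: the athlete's declarations are constraints, not just signals.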

The outreach draft: where the system’s voice ends and Gabby’s begins

The status sheet has an outreach preference: “Draft for me to review first.” That single line is the boundary. The system finds the connection, surfaces the context, and writes a first draft. Gabby reads it, edits it, and decides whether to send.

That handoff moment, the draft appearing for her review, was deliberately left undesigned in this phase. Not because it was forgotten, but because it's the most consequential moment in the whole product. Getting it wrong means the system puts words in her mouth. Getting it right means she has a collaborator, not a robot speaking for her.

Designing that screen is Phase 3.

The questions this work opens

The Evaluate phase in practice. What information does an athlete need to feel confident enough to act? What format makes trust signals legible without feeling invasive?

The graduated autonomy threshold. How many accurate suggestions before an athlete stops second-guessing the system? What does one wrong call do to that trust?

The peer intelligence line. Does surfacing anonymous aggregate sentiment from athletes feel like useful intelligence or like the platform is sharing things that were meant to stay private?

The visibility control in real use. Athletes named control over who sees them as a protection mechanism. Does the one-tap visibility control on the Dashboard feel sufficient, or does she need more granularity?

The transition moment, from inside it. Both interview participants described their first time abroad from the vantage point of players who'd navigated it many times. What does that context collapse feel like while it's actively happening?

A close-up black and white photo of an athletic man with sweat on his face, gazing into the distance, wearing sports attire, with one hand raised near his face.

What this means for my practice

The WEVOLV research wasn't just a project. It was a demonstration of what happens when you research for agency rather than preference: when you ask how people make decisions, what they protect, and what makes them act, rather than what features they want or how satisfied they are.

The findings that mattered most weren't the ones that confirmed hypotheses. They were the ones that broke them. Career stage doesn't predict needs, situation does. The missing phase wasn't a UX gap. It was a fundamental misread of how athletes build trust. The closed network isn't just a security feature, it's a design decision with different implications for different users.

Every project I want to do starts with the same question the athletes were asking: who can I trust and how do I know? That turns out to be the question every AI product is asking too. WEVOLV is the clearest example I have of what it looks like in action.