WEVOLV

Platforms designed for athletes are built on the wrong assumption.

Athletes don't connect based on where they are in their career. They connect based on what's happening right now.

I set out to understand how professional athletes playing abroad build trust and make connection decisions. The research challenged my original hypothesis: career stage doesn't drive behavior; situation does. And between evaluating a connection and making one, there's a phase nobody was designing for.

Role
UX Research · Speculative Design

Duration
Research: 7 weeks · Design: 2 weeks

Methods
Qualitative interviews, Quantitative survey (n=23), Secondary research & competitive analysis

Tools
Notion, Miro, Figma

A young female athlete with curly hair in a black Nike headband, wearing a black sports long-sleeve shirt and maroon sports shorts, sitting on a bench at a sports court during dusk.

The problem

Maya is 22. First professional contract in Spain. She doesn't speak the language and doesn't know anyone. Her agent's instructions for arrival were one line.

“Look for the tall guy at the airport.”

— Actual agent instruction for a first-time professional athlete abroad

She found a trainer on Instagram three weeks ago. She still hasn't messaged him. That's not hesitation. That's a rational response to missing infrastructure, and it's the thread that runs through everything this research found.

The platform WEVOLV already had 300 athletes across 14 nationalities. Three connections each: that's up to 900 honest peer reviews that don't exist yet, before the platform even scales. The gap was already visible in the data.

The Ecosystem Athletes Are Actually Using

Before designing anything, I mapped what athletes use today and why none of it is working.

Every platform forces athletes into a binary: commit publicly (high visibility, high risk) or stay disconnected (zero risk, zero opportunity).

Instagram is where ~90% of athletes do their discovery, but it can't tell you whether someone is actually trustworthy. WhatsApp handles direct communication, but only once trust already exists. LinkedIn reads as a platform for former athletes, not current ones. And Snapchat? Athletes use it to hide their location from teams and agents. That's damage control, not networking.

Three gaps compound across all of them: no peer signal before engaging, no low-effort way to test whether a contact is worth pursuing, and no systematic control over visibility without managing it manually across every platform.

Instagram lets you observe. WhatsApp connects you once trust already exists. Nothing sits in between.

WEVOLV isn't the fifth platform. It's the missing layer that makes the other four work.

Table comparing three research methods: Qualitative Interviews, Quantitative Survey, and Platform Context, including sample sizes and implications for their use.

Research approach

Discovery-first. I deliberately built in ways to be proven wrong.

This was a discovery-first research project. The goal wasn't to arrive with solutions. It was to understand behavior well enough that the right solutions became obvious.

Each method was chosen to test a specific hypothesis. If career stage mattered, two interview subjects at very different points in their careers should have had completely different needs.

What happened: Both athletes had the same evaluation process despite very different experience levels. The survey showed zero variance by age. Both methods converged and contradicted my original hypotheses, revealing a better model.

A table comparing hypotheses, predictions, reality, and lessons learned related to athlete management strategies, including control by stage, hidden assumptions, effort/risk tradeoff, and early performance pressure.

What I Tested vs. What I Found

The table shows the formal hypotheses and what the data actually returned. The most consequential row isn't a formal hypothesis at all: it's the hidden assumption embedded in the research design itself. I assumed career stage predicted what athletes needed; the data showed zero variance by age. Rejecting that untested belief changed the product direction more than any deliberate hypothesis did.

Table outlining four phases of a process: Observe, Evaluate, Engage, Maintain. Each phase includes recommended behaviors and infrastructure considerations. For example, in the 'Observe' phase, the behavior is to gather signals safely, with infrastructure supporting Instagram public profiles. In the 'Evaluate' phase, assess before committing, though no platform supports this. The 'Engage' phase involves acting when the risk feels manageable, with existing support but some limitations. The 'Maintain' phase focuses on protecting what is working, with existing WhatsApp support and closed networks.

The four-phase behavior model

Every interview and the survey data revealed the same pattern regardless of career stage, experience level or age.

Athletes can Observe. They can Engage. They can Maintain. The gap is Phase 2: the moment between watching someone on Instagram and deciding whether to message them. That's where 2–4 weeks of passive waiting lives.

That's what WEVOLV fills.

This model anchors every feature recommendation and every screen flow that follows. Phase 2 design work starts here.

Where the Research Changed Direction

Good research doesn't just confirm what you thought. It shows you what to build instead and why.

I assumed athletes were hesitant. The data showed they were strategic.

The problem wasn't motivation. It was missing infrastructure. That reframe redirected the design from engagement mechanics toward building the missing Evaluate phase.

I assumed career stage predicted needs. The data pointed to situation instead.

68% of athletes across every age group seek career advice. Zero variance by age. That killed stage-based onboarding and built the case for situation tags: “New to Belgium” over “7 years pro.”

I assumed one trust signal would anchor decisions. The data revealed a stacking pattern.

78% default to “no” without peer reviews. 61% also want track record. 61% weight mutual connections. Athletes triangulate, and that defined the trust infrastructure: peer reviews + connection depth + track record, visible before any engagement is required.
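One way to picture the stacking pattern is as a trust summary that surfaces all three signals together before any engagement. This is a hypothetical sketch, not the product's implementation: the interface, field names, and sample numbers are all illustrative.

```typescript
// Hypothetical model of the three stacked trust signals the research
// surfaced: peer reviews, connection depth, and track record.
// All names and values here are illustrative.
interface TrustSignals {
  peerReviews: number;        // reviews from verified peer athletes
  mutualConnections: number;  // shared contacts, with visible depth
  trackRecordYears: number;   // verifiable professional history
}

// Athletes triangulate: no single signal is enough on its own,
// so the summary exposes all three rather than collapsing to one score.
function trustSummary(s: TrustSignals): string {
  const parts = [
    `${s.peerReviews} peer review${s.peerReviews === 1 ? "" : "s"}`,
    `${s.mutualConnections} mutual connection${s.mutualConnections === 1 ? "" : "s"}`,
    `${s.trackRecordYears} yr track record`,
  ];
  return parts.join(" · ");
}

console.log(trustSummary({ peerReviews: 12, mutualConnections: 3, trackRecordYears: 7 }));
// → "12 peer reviews · 3 mutual connections · 7 yr track record"
```

Keeping the signals separate rather than blending them into one number mirrors the finding: athletes weigh each signal themselves.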

I assumed control and openness were in tension. The data showed they're the same thing.

Athletes with real control share significantly more, not less. Granular control isn't a privacy feature. It's the condition that makes connection possible. That shaped the three-tier architecture: Inner Circle / Professional / General, available to everyone from day one.
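The three-tier architecture can be sketched as a simple visibility rule: every profile field is tagged with the narrowest tier allowed to see it, and a viewer sees only the fields at or below their tier. The tier names come from the research; everything else (field names, sample profile) is an assumption for illustration.

```typescript
// Hypothetical sketch of the three-tier visibility architecture.
// Tier names are from the research; fields and sample data are illustrative.
type Tier = "inner" | "professional" | "general";

// Higher rank = closer relationship = sees more.
const TIER_RANK: Record<Tier, number> = { general: 0, professional: 1, inner: 2 };

interface ProfileField {
  value: string;
  visibleTo: Tier; // minimum tier allowed to see this field
}

// A viewer sees a field only if their tier ranks at or above the field's tier.
function visibleFields(profile: Record<string, ProfileField>, viewer: Tier): string[] {
  return Object.entries(profile)
    .filter(([, f]) => TIER_RANK[viewer] >= TIER_RANK[f.visibleTo])
    .map(([name]) => name);
}

const maya: Record<string, ProfileField> = {
  name: { value: "Maya", visibleTo: "general" },
  club: { value: "CB Sevilla", visibleTo: "professional" },
  location: { value: "Seville", visibleTo: "inner" },
};

console.log(visibleFields(maya, "professional")); // name and club, not location
```

The design choice the research points to is that the gate lives on the athlete's side: she tags each field once, instead of managing visibility manually per platform or per contact.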

I treated the 86% female sample as a limitation. The data made it a strategic signal.

Women athletes operate two trust modes simultaneously: intimate peer support and functional ecosystem-building. Women's sports investment is up 300% since 2019. Getting this right is a first-mover opportunity, not a demographic edge case.

A digital profile of a man named Mark Sterling, who is a verified Premier basketball skill development trainer, with information on mutual connections, reach, reviews, and peer reviews around his photo.

The Solution

Maya looks up the trainer on WEVOLV before DMing him.

In 2 days. Not 2–4 weeks.

Maya looks up the trainer on WEVOLV. She sees peer reviews from athletes in similar situations. Mutual connections with visible depth: not just that a connection exists, but how close it actually is. His track record. She sets her own access terms before any contact happens.

She engages. Conditionally. If it doesn't work out, she ends access cleanly. The full product flow shows how every research finding connects into one end-to-end system. View the product flow

Two young women sitting on a bench in a dimly lit space, one with her eyes closed and head bowed, the other looking away, both with braided hair, dark clothing, and sneakers, in a contemplative posture.

Design Recommendations

These aren't features from a wishlist. Every recommendation traces directly to a behavioral finding. If the finding hadn't existed, the recommendation wouldn't either. The research didn't generate a feature list. It surfaced a behavior model that made certain features inevitable.

Build now

  • Trust score: peer reviews + connection depth + track record, visible before any engagement is required

  • Three-tier privacy controls: Inner Circle / Professional / General, universal from day one

  • Situation-based matching: “New to Belgium” not “7 years pro”

  • Anonymous feedback: 30% fear retaliation; without truly untraceable reviews, trust scores get gamed by silence

  • Relationship depth visibility: a flat connection list creates context collapse; athletes need to know how close a mutual connection actually is

Validate before building

  • AI-assisted conversation scaffolding: frameworks for hard professional conversations; concept test before building

  • Proactive intelligence layer: agentic AI pre-processing profiles before the athlete opens the app; build the trust graph first, since this depends on it

Don't build

  • Public performance metrics

  • Automated matching without context

  • Generic community features

Reflection

The most valuable moment in this research wasn't a confirmed finding. It was the one that broke an assumption.

When the hypothesis that career stage predicts needs collapsed, it didn't just redirect the product. It redirected how I think about discovery work. The most load-bearing research isn't always the research that tells you you're right. It's the research that proves you wrong early enough to build something better.

The trust triangulation finding had an immediate consequence: the client confirmed it prevented them from shipping a verified-badge feature they had been close to building.

If I'd confirmed the hesitant athlete assumption, I would have designed for motivation. Because I found the strategic risk manager instead, the research pointed to infrastructure and cut the time to a confident decision from weeks to days.

Every wrong assumption would have built the wrong product for Maya.

Phase 2 took those open questions and answered them: What does the trust score look like on a screen? How does an athlete control her visibility without it feeling like overhead?