From zero to launch: Standing up research and delivering insights in less than a month

(This project is under NDA, and some details have been generalized or omitted.)

The Project

When I joined NetEase as part of the Frontier Research group, I was brought in to build out the user research function within a cross-functional team. At the time, the team had limited infrastructure for user research, and I was tasked with developing the tools and processes we needed—while also jumping into live projects.

Within my first week, our team was asked to support a competitive online game in development, preparing for one of its first external playtests. The development team wanted feedback from high-ranking competitive players of similar games, to understand how well the game was delivering on its design goals and what needed refinement before release.

Internally, there was some concern that the game was too similar to existing titles. We were asked to evaluate this risk as part of the study, while also providing actionable feedback to help the team improve the experience.

With about one month from kickoff to final report—and no formal tools or processes in place—I had to build the airplane while flying it, developing our research infrastructure alongside planning and running the study.

Team & Role

I led this research project, working with colleagues from Frontier Research and support from the Market Research team. I was responsible for defining our methodology, setting up tools and processes, and ensuring we could deliver actionable insights under a tight timeline.

I collaborated with Yu Zhu, who focused on participant recruitment, and Ren Chen, who supported moderation and logistics. Together, we worked with the development team to define the right player profiles for our study.

During the playtest, Romain Maire, Paul Revert and I moderated the sessions and conducted in-depth interviews with participants, with additional support from Ren Chen, who joined remotely via Zoom.

Process Development

When I joined, we had very little research infrastructure in place—and we needed to run this playtest within two weeks.

Alongside preparing for the study—defining the methodology, working on recruitment, and coordinating with colleagues (covered in the next section)—I also developed the foundational tools and processes we needed to run user research in a structured and consistent way.

Here’s what I built:

  • Templates and protocols for participant recruitment, along with participant profiles defined in collaboration with the development team and our recruiters.
  • Moderation materials and protocols—everything from how we’d greet participants and explain the study (without introducing bias), to how we’d run sessions smoothly and consistently.
  • A note-taking tool in Excel for our interviews, to speed up data capture and analysis.
  • An analysis tool in Excel to quickly surface patterns from participant self-report and telemetry data.
  • A reporting template in PowerPoint to help us turn around findings efficiently post-study.

Each of these tools was designed to be reusable for future studies and reviewed with the team for feedback and iteration.
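The actual analysis tool was built in Excel and isn't shown here; as an illustration of the kind of pattern-surfacing it supported, a similar pass could be sketched in Python with pandas. All column names, sample values, and the flagging threshold below are hypothetical:

```python
# Illustrative sketch (not the actual Excel tool): surface patterns by
# summarizing self-report survey scores and joining them with telemetry.
# Column names, sample data, and the 3.0 threshold are hypothetical.
import pandas as pd

# Self-report: one row per participant per survey question (1-5 scale).
surveys = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3],
    "question":    ["clarity", "pacing"] * 3,
    "score":       [4, 2, 5, 3, 4, 2],
})

# Telemetry: one row per participant with gameplay stats.
telemetry = pd.DataFrame({
    "participant": [1, 2, 3],
    "matches_played": [6, 5, 7],
    "win_rate": [0.50, 0.40, 0.57],
})

# Mean rating per question, flagged when it falls below the threshold.
summary = surveys.groupby("question")["score"].mean().reset_index()
summary["flagged"] = summary["score"] < 3.0

# Join each participant's mean rating with their telemetry, e.g. to check
# whether low-rating participants also played fewer matches.
per_player = surveys.groupby("participant")["score"].mean().reset_index()
combined = per_player.merge(telemetry, on="participant")

print(summary)
print(combined)
```

The value of a structure like this is that each new survey export drops into the same pipeline, so daily debriefs can start from the flagged topics rather than raw spreadsheets.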

On the technical side, I designed our lab setup to ensure consistency and minimize bias:

  • We used space from the QA team and standardized the PC setups so every participant had the same experience.
  • We ensured that stations were separated so players could focus on their own gameplay, without distractions.
  • I set up OBS (screen recording software) to capture gameplay for analysis, with custom scenes to track participant numbers and timestamps—without recording any personal data.

In the days before the playtest, we tested and adjusted the lab setup and our moderation materials, and set up observation stations so we could observe gameplay in real-time through OBS without interfering with participants.

Research Process

Prep

The study had been agreed upon before I joined the company, but the mandate was broad. The first thing I did was connect with my manager, production leads from the development team, and the head of publishing to align on specific goals and refine what we needed to learn.

In our downtime, we played the game as a team to get familiar with the mechanics, understand what to focus on, and ensure our study protocols aligned with how the game actually functioned—things like forming teams and launching matches.

Based on this, I updated our materials and protocols to make sure we’d capture meaningful, actionable feedback.

Running the study

Over the course of three days, we ran playtests with around 40 participants. We divided them into teams, had them play the game in multiple sessions, and gathered feedback through surveys at specific intervals. There was a break for lunch, and we ended each day with a 15-20 minute interview with each participant to explore their experience in more depth—including feedback on particular game systems, and comparisons to other games they played.

At the end of each day, I led a team debrief for us to share key takeaways and review what could be improved for the next day.

These were long, intense days, and for most of the team, it was their first time running a study like this. It was rewarding to see it come together smoothly—and even better when one of our key stakeholders stopped by to observe and was impressed by our setup.

Analysis & report

Once the sessions wrapped, we had a ton of data to analyze—interviews, multiple surveys, and telemetry data from 40 participants—with less than a week to deliver the report. This is where the time spent up front developing our analysis tools paid off.

We began with a final team debrief, identifying the four or five most pressing topics from our observations and earlier daily debriefs. I then assigned these topics to team members to investigate further, using the quantitative and qualitative data we’d collected.

Over the next few days, we each dug into our areas, pulling out screenshots, video clips, player quotes, and other evidence to support our findings. I held regular check-ins with each team member to review progress, help refine insights, and make sure we were building a clear, evidence-based narrative.

Once the pieces were in place, I assembled and edited the final report, ensuring it told a polished, coherent story and delivered concrete recommendations.

Since most of the development team did not speak English, I didn’t present the report myself—but it was well received and taken seriously by the team.

Outcome & Impact

Our research helped shape key refinements to the game, and the concerns we flagged were addressed before becoming issues post-launch. In particular:

  • One of the game’s intended differentiators involved a unique level design approach (detail omitted due to NDA). We identified where it worked, where it didn't, and why. We provided guidelines for future design, and the team adopted many of these recommendations. Ultimately, the feature was de-emphasized, as we found it wasn’t a strong differentiator from the players’ perspective—and resources could be refocused elsewhere.
  • We provided detailed feedback on how players approached strategy, learning, and teamwork, with concrete suggestions to improve balance and team dynamics.
  • We surfaced misalignments between player expectations and the current design, which led to tweaks in some designs and messaging. At the same time, feedback confirmed that the core gameplay loop was engaging and satisfying, which helped give the team confidence in their design direction.
  • We reported that similarities to other games on the market weren’t likely to be a problem for players overall. While we identified specific elements that drew criticism during testing—and suggested ways to address them, which were adopted—we concluded that the broader concern was low risk. This finding was controversial internally, and some stakeholders pushed back. However, we stood by our assessment, and on release we were proven correct.

The game has since seen strong post-launch success and a large player base globally. One of our key stakeholders said it was the best report he’d seen at the company in his two years there.

Impact on the research team

Beyond the positive impact on a hit game, this project also helped establish repeatable practices for future research in our team. We now had functioning tools and workflows for:

  • Survey analysis and visualization
  • Interview note-taking and analysis
  • Participant recruitment and management
  • Reporting templates

We reused and iterated on these tools in subsequent studies, streamlining our research efforts moving forward.