
Rungway: Data Labelling Process

00. Overview

Rungway is a real-time employee sentiment platform designed to help leaders address workplace issues and drive meaningful change by connecting with employees at scale.

01. Context
Even with open-door policies, employees may feel overlooked, and workplace sentiment can be hard to gauge. Rungway is a platform that offers real-time insights into employee sentiment by topic, enabling leaders to act swiftly and resolve critical issues.
To enhance the platform’s analytics capabilities, we designed a Data Labelling Feature that allows moderators to categorise content efficiently. This enables the data team to generate insightful reports for clients, offering deeper visibility into company culture trends.
A key feature of Rungway’s offering is its in-depth data reports, which provide clients with valuable insights into their company culture. However, moderators lacked a structured way to categorise and label content within Rungway. This resulted in inconsistent data classification, limiting the accuracy and depth of reports provided to clients. Our goal was to design a streamlined, intuitive labelling system that would standardise data classification and improve reporting accuracy.
This case study highlights my approach to solving this critical issue, ensuring scalability and enhancing the value delivered to clients.
02. Design Analysis
Defining the problem and priorities.
Before diving into the solution, I collaborated with the product team to identify why designing a back-end labelling feature was critical for Rungway. We recognised that the platform’s detailed data reports, a core USP, received significant client praise for offering deep insights into company culture.
However, the existing process required the moderation team to export and manually categorise data, which was both time-consuming and inefficient. To address this, we aimed to streamline the process by enabling in-app labelling, reducing reliance on external tools and saving valuable time.
To fully understand the challenges, I conducted in-depth one-on-one sessions with the moderation team to observe their current labelling workflow and identify pain points. I also engaged with the data team to uncover frustrations with the existing process. These insights helped me pinpoint key objectives while ensuring the design remained simple and scalable for the initial version, aligning with resource constraints and immediate needs.
03. Objectives
Defining the objectives for the re-design.
Based on the research I conducted, I decided to focus my objectives into four segments:
Streamline the labelling workflow
By introducing an organised, user-friendly interface, moderators can easily assign data tags to platform content without unnecessary steps or confusion. A key goal for the project was to provide a simple, clean design that allows for quick tagging, real-time updates, and bulk operations when needed. This improvement ensures moderators can effectively manage large volumes of data while maintaining accuracy and consistency in their labelling efforts.
Make sure the solution is scalable
With plans to develop the Rungway back-end even further, the solution needs to be scalable, in the sense that the process can be transferred to any new back-end iterations. This includes making sure the feature can handle introducing new labels, and that changing definitions doesn't affect previous data output. This scalable design needs to ensure that the labelling system can evolve with Rungway's needs without requiring significant rework or downtime.
Remove reliance on external tools
To optimise efficiency and reduce complexity, the labelling feature needs to function independently of any third-party tools or integrations. By consolidating the entire process within the current Rungway back-end platform, we eliminate friction and potential points of failure that arise from managing external tools. Moderators will have everything they need to perform tagging tasks directly within the app, fostering a seamless and unified workflow while minimising the risk of miscommunication or manual errors that can arise when switching between platforms.
Ensure data categories are clear to moderators
To ensure accuracy and consistency in labelling, it’s crucial that moderators clearly understand each data category and its purpose. The feature needs a well-defined taxonomy of tags. Additionally, the interface includes visual cues and tooltips to assist moderators in making informed decisions when assigning tags. This reduces ambiguity, ensures uniformity across the platform, and ultimately provides more reliable data for clients, allowing for meaningful insights into company culture.
04. Scope and purpose
The scope and purpose of this feature.
Before beginning to design for this feature, I had to make sure I was approaching it systematically to ensure that the back-end labelling process meets the needs of the platform's moderators and, ultimately, the data team and clients.
To achieve this, I organised a brainstorming meeting with the product team (other product designers, product managers, and the chief of product). As this was a very important feature for the future of the business, I had to ensure that all the bases were covered. This involved relying on my colleagues for support. The core goal of the feature is to allow moderators to tag content effectively so that data can be organised and reported on, providing valuable insights to clients about their company culture. We had to consider the questions below:
Labelling Types
What data needs to be labelled?
Is it all the posts, comments, or other forms of content such as the Pulses? Is there a possibility of new forms of content coming out in the future? How would the data labels interact with that?
What labels are needed?
Do these labels represent categories, topics, or sentiment? Are they pre-defined or flexible for moderators to assign? What are the existing labels being used in the current process? Which labels are redundant/not being used often? Which label definitions are ambiguous and could be merged with other labels?
How does the labelling influence reports?
Understanding how the labels contribute to the client-facing reports would guide the overall design to ensure that the feature is effective for the data department’s needs.
The User Types
Moderators’ Needs & Workflow:
How do moderators currently label content?
What tools do they use to track or flag content?
Are there any existing pain points in content moderation that the new feature could alleviate?
What are the expected volumes of content they’ll need to tag?
Will the system need to scale with increased usage?
How will moderators interact with labels (e.g., manual input, automated suggestions)?
Will there be any prioritization in the labelling process (e.g., urgent content)?
Insights & Data Team Needs:
How do the data team currently generate reports from the content?
Are there any bottlenecks in extracting insights from the current labelling process?
Are there specific data points or metrics that the reports should focus on (e.g., sentiment trends, specific topics, common issues)?
What kind of flexibility or customization is required in the labelling process to match different client needs?
B2B Client Needs:
What kind of insights do clients value most from the reports generated by labelled content?
Are there any specific trends or key areas of company culture that clients are most interested in understanding through the platform?
05. Potential issues
Identify Potential Challenges
Before moving into wireframing or prototyping, I had to identify potential challenges:
Accuracy and consistency of labelling
How can we ensure that the labels are applied consistently across content? Could there be an automated or semi-automated system to suggest or pre-fill labels to reduce moderator workload?
Scalability
If Rungway grows, we need to ensure that the system can scale with larger volumes of content. Are there any limits or bottlenecks in how we can categorize and report large datasets?
Moderation Overload
If moderators need to tag content manually, how can we reduce cognitive load? Is there a way to make the process more efficient, like batch labelling or predictive labelling?
06. Label Categories
Defining the label categories
After conducting a brainstorming session, I focused on addressing these key questions related to the project. This process involved numerous one-on-one meetings with members of the moderation team to gain insight into how these questions applied to their current workflow. I also observed their labelling process through Microsoft Teams to better understand existing challenges and pain points. Additionally, I arranged individual meetings with members of the Insights & Data team to learn how the moderators transfer labels to the data team for report generation.

Through these discussions, I identified the most frequently used labels and those that were specific or relevant to certain content types. Finally, I organised a meeting with key stakeholders from the Product, Moderation, and Insights & Data teams to finalise the necessary labels and categories for the new design. We grouped the labels into six main segments (organised from most to least important to client needs):

Sentiment
Whether the content expressed by the user was perceived as positive, negative, a company challenge, a personal challenge or a technical challenge.
Agreeableness
Whether the user's content was perceived as friendly, hostile or neutral.
Advocacy
Whether the user's content showed support for or opposition to the topic at hand.
Style
The writing style, voice and tone of the user's content.
Helpfulness
To what extent the user's content was helpful to the topic at hand.
Other
Miscellaneous labelling that didn't sit in any of the other categories.
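The six categories above could be represented as a simple mapping from category to allowed values, which also makes validation straightforward. This is an illustrative sketch only: the category names come from this case study, but the value lists for Style and Helpfulness are hypothetical examples, not Rungway's actual taxonomy.

```python
# Illustrative sketch of the six label categories, ordered from most to
# least important to client needs. Category names are from the case study;
# the Style and Helpfulness values are hypothetical placeholders.
LABEL_TAXONOMY = {
    "Sentiment": ["Positive", "Negative", "Company Challenge",
                  "Personal Challenge", "Technical Challenge"],
    "Agreeableness": ["Friendly", "Hostile", "Neutral"],
    "Advocacy": ["Support", "Opposition"],
    "Style": ["Formal", "Informal"],            # hypothetical values
    "Helpfulness": ["High", "Medium", "Low"],   # hypothetical values
    "Other": [],  # miscellaneous labels that fit no other category
}

def is_valid_label(category: str, value: str) -> bool:
    """Check that a proposed label belongs to a known category."""
    values = LABEL_TAXONOMY.get(category)
    if values is None:
        return False
    # "Other" accepts any value (subject to later approval by the data team).
    return value in values or category == "Other"
```

A check like this is what keeps labelling consistent: a moderator cannot apply a value outside the agreed taxonomy except through the free-form "Other" bucket.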
07. Technical and design constraints and considerations
Gathering the technical and design constraints and considerations.
Before beginning the wireframing/prototyping phase, it was important that I understood the technical and design constraints and considerations, given the challenging timeline we had. I set up one-to-one meetings with members of the Insights and Data team and Engineering team with the goal of addressing these key areas:
Technical Constraints
Labelling and Data Structure
How should the labels be stored and managed on the back-end? Is there a need for a labelling hierarchy or taxonomy? How do we ensure that the system can scale as the amount of data grows?
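One way to answer the storage question, sketched here under stated assumptions rather than as Rungway's actual schema: store label definitions separately from applied labels, and version the definitions so that changing a definition later does not silently alter labels already applied to historical content (one of the scalability goals above). All class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical back-end model sketch. Definitions are versioned so that a
# changed definition does not rewrite labels already applied to old content.

@dataclass(frozen=True)
class LabelDefinition:
    category: str      # e.g. "Sentiment"
    value: str         # e.g. "Positive"
    version: int = 1   # bumped whenever the definition text changes

@dataclass
class AppliedLabel:
    content_id: str               # the post, comment, or Pulse being labelled
    definition: LabelDefinition   # pins the exact definition version used
    applied_by: str               # moderator id
    applied_at: datetime = field(default_factory=datetime.utcnow)
```

Pinning the definition version on each applied label means reports generated from historical data remain reproducible even as the taxonomy evolves.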
Integration with Other Systems
Does the labelling feature need to work with other parts of the platform or tools currently used, such as sentiment analysis or reporting systems?
User Permissions and Access Control
Are there any different levels of permissions needed to add, edit, or review labels? How do we manage this at scale?
Automation
For future consideration, could AI or machine learning assist with auto-labelling content or suggesting labels to moderators? What type of training data would be needed to support this, and how easily could it be assembled?
UX/UI Design Considerations
Intuitive Labelling Interface
The labelling process should be as intuitive as possible, minimizing friction. What UI elements would make it easy for moderators to select or input tags? Should we use a type-ahead search, dropdowns, or color-coded tags?
Labelling Suggestions
How can we incorporate smart labelling suggestions to help moderators choose the right label quickly? Would the system surface similar labels used previously to create consistency?
User Feedback
How will moderators know if content has been labelled correctly or needs review?
08. Metrics
Metrics for Success
Before diving into the design and prototyping phase, I had to define how the feature would be measured post-launch:
Adoption Rate: How many moderators are actively using the labelling system?
Efficiency: Does the system reduce the time spent labelling content compared to manual or previous methods?
Accuracy: How accurately are labels being applied, and how often do reports come back needing revisions?
User Satisfaction: Are moderators satisfied with the experience, and do clients find the insights more actionable?
09. Wireframing and Prototyping
Low Fidelity Wireframing
To lay the foundation for the data labelling system, we began by creating low-fidelity wireframes to outline key interactions and user workflows. Using Figma Whiteboard, I mapped out the potential workflows alongside the product team, ensuring a logical structure for labelling, searching, and managing labels.
We then moved to Figma to create wireframes visually representing the labelling interface, including elements such as a tag input field, an AI-assisted suggestions dropdown, a bulk tagging checkbox, and a tag management dashboard.
The wireframing process focused on clarity, usability, and efficiency for moderators. I decided to create three different wireframes with different degrees of complexity. Because we had limited engineering resources at hand, it was necessary to gauge the feasibility of delivering the feature within the two-week sprint we had in place.

Early versions presented a simple text input for labels, but through internal reviews, we identified potential issues like inconsistent label naming and moderator fatigue from manual entry in Excel sheets. To address this, I iterated on the design by introducing predefined label categories, a multi-select dropdown, and a machine-learning auto-suggestion search bar.
Wireframes evolved through multiple iterations:
Version 1: Basic labelling dropdown with free-text entry.
Version 2: Introduced selecting labels in bulk and structured label categories.
Version 3: Added a predictive label search bar and a label management feature within the modal.
We collaborated closely with the data team and moderators, gathering feedback to refine the hierarchy of labels and ensure they aligned with reporting requirements. We decided that the label format should be presented as “Label Category: Label Value”, for example, “Sentiment: Positive”. The final mid-fidelity wireframes were tested before moving to high-fidelity designs, ensuring they were intuitive and aligned with Rungway’s existing UI components and design system.
The final version enabled moderators to select multiple labels within predefined categories or quickly find labels via a search bar, which suggested labels based on the first three characters. Moderators could also add new labels, subject to approval by the Data and Insights team.
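The suggestion behaviour described above can be sketched in a few lines: labels are surfaced only once the moderator has typed at least three characters, and results are presented in the agreed “Label Category: Label Value” format. The function name, threshold constant, and sample labels are illustrative, not the production implementation.

```python
# Minimal sketch of the agreed suggestion behaviour: suggestions appear
# after three typed characters, formatted as "Label Category: Label Value".
MIN_QUERY_LENGTH = 3

# A few sample labels for illustration.
LABELS = [
    ("Sentiment", "Positive"),
    ("Sentiment", "Personal Challenge"),
    ("Advocacy", "Support"),
]

def suggest_labels(query: str) -> list[str]:
    """Return 'Category: Value' strings whose value starts with the query."""
    if len(query) < MIN_QUERY_LENGTH:
        return []  # too short: no suggestions yet
    q = query.casefold()
    return [f"{cat}: {val}" for cat, val in LABELS
            if val.casefold().startswith(q)]
```

For example, `suggest_labels("pos")` would surface “Sentiment: Positive”, while a two-character query returns nothing, matching the three-character rule.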
These refined wireframes served as the blueprint for prototyping, guiding the development of an interactive and user-friendly labelling experience.
10. Final Design
The final design and hand-off.
After multiple iterations and usability testing, we finalised the high-fidelity design for the data labelling feature, ensuring it was intuitive, scalable, and seamlessly integrated into Rungway’s existing UI. The final design included three key components:
1. Labelling Interface
A searchable side menu with smart label suggestions powered by machine learning, allowing moderators to efficiently categorise content.
2. Labelling from the dashboard
A streamlined modal interface enabling moderators to label content directly from the moderation dashboard, significantly improving workflow efficiency. Single-click edits allowed moderators to change labels easily.
3. Auto-Labelling Predictive Search Bar
Predictive text in the search bar suggests labels based on the moderator's input, reducing manual effort.
Once the final prototype was validated, we prepared for developer handoff by creating a comprehensive design package in Figma. This included:
Annotated Screens & User Flows
Clearly marked elements explaining interactions, micro-interactions, and logic behind machine-learning search bar suggestions.
Component Specifications
A breakdown of design components, including typography, color schemes, and interaction states (e.g., hover, active, and error states).
Edge Case Documentation
Addressing scenarios like duplicate labels, unlabelled content, and error handling.
Prototype Walkthroughs
Clickable prototypes demonstrating the intended user experience, reducing ambiguity for engineers.
To ensure a smooth transition from design to development, I led a design run-through session with the engineers, where we walked through the design rationale, usability findings, and technical considerations. We also maintained an open Slack channel for continuous collaboration, allowing developers to ask questions and request design clarifications in real time.
By the time the feature was implemented, the structured design approach and clear documentation had significantly streamlined the development process. The final result was a powerful, user-friendly tagging system that improved moderator efficiency, ensured consistent data labelling, and enhanced the accuracy of Rungway’s sentiment analysis reports.
11. Conclusion
Conclusion and final thoughts.
The data labelling system significantly enhanced Rungway’s ability to deliver accurate, data-driven insights to clients by enabling efficient and consistent content categorisation. Through a structured design process spanning user research, wireframing, prototyping, and iterative testing, we created a scalable solution that streamlined the moderator experience while ensuring high-quality data labelling for analytics.
The introduction of labelling from the dashboard not only reduced manual workload but also increased efficiency by allowing moderators to focus on content review rather than repetitive administrative tasks. The ability to easily edit these labels on the fly helped to increase efficiency further.
One of the most impactful aspects of the design was the improved user experience for moderators. Before implementation, labelling content was a manual and inconsistent process relying on multiple Excel sheets, which led to errors and delays in report generation. With the new feature, moderators could quickly and accurately assign multiple labels. As a result, data analysts received more structured datasets, leading to more accurate sentiment reports for clients.
This streamlined process eliminated inefficiencies, reduced human error, and ultimately strengthened Rungway’s value proposition by delivering actionable insights that help organisations drive meaningful cultural change. The project underscored the power of thoughtful UX design and data-driven decision-making, setting a strong foundation for future enhancements, such as automated label refinement and machine-learning-driven trend analysis.