05. Potential Issues
Before moving into wireframing or prototyping, I had to identify potential challenges:
How can we ensure that the labels are applied consistently across content? Could there be an automated or semi-automated system to suggest or pre-fill labels to reduce moderator workload?
If Rungway grows, we need to ensure that the system can scale with larger volumes of content. Are there any limits or bottlenecks in how we can categorize and report large datasets?
If moderators need to tag content manually, how can we reduce cognitive load? Is there a way to make the process more efficient, like batch labelling or predictive labelling?
06. Label Categories
Defining the label categories
After conducting a brainstorming session, I focused on addressing these key questions related to the project. This process involved numerous one-on-one meetings with members of the moderation team to gain insight into how these questions applied to their current workflow. I also observed their labelling process through Microsoft Teams to better understand existing challenges and pain points. Additionally, I arranged individual meetings with members of the Insights & Data team to learn how the moderators transfer labels to the data team for report generation.
Through these discussions, I identified the most frequently used labels and those that were specific or relevant to certain content types. Finally, I organized a meeting with key stakeholders from the Product, Moderation, and Insights & Data teams to finalize the necessary labels and categories for the new design. We decided to organise the labels into six main segments (ordered from most to least important to client needs):
Sentiment
Whether the content expressed by the user was perceived as positive, negative, or as a company, personal, or technical challenge.
Agreeableness
Whether the user's content was perceived as friendly, hostile, or neutral.
Advocacy
Whether the user's content showed support for or opposition to the topic at hand.
Style
The writing style, voice, and tone of the user.
Helpfulness
To what extent the user's content was helpful to the topic at hand.
Other
Miscellaneous labels that didn't fit into any of the other categories.
07. Technical and Design Constraints and Considerations
Gathering the technical and design constraints and considerations.
Before beginning the wireframing/prototyping phase, it was important that I understood the technical and design constraints and considerations, given our challenging timeline. I set up one-to-one meetings with members of the Insights & Data and Engineering teams with the goal of addressing these key areas:
Technical Constraints
Labelling and Data Structure
How should the labels be stored and managed on the back-end? Is there a need for a labelling hierarchy or taxonomy? How do we ensure that the system can scale as the amount of data grows?
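To ground these questions during the conversations, it helped to think of the taxonomy as a flat category-plus-value structure. The sketch below is purely illustrative; the type names and fields are my own assumptions, not Rungway's actual back-end schema.

```typescript
// Hypothetical sketch of a label taxonomy, not Rungway's actual schema.
// Each label belongs to exactly one of the six agreed categories,
// mirroring the "Label Category: Label Value" format agreed later on.

type LabelCategory =
  | "Sentiment"
  | "Agreeableness"
  | "Advocacy"
  | "Style"
  | "Helpfulness"
  | "Other";

interface Label {
  id: string;                // stable identifier for reporting joins
  category: LabelCategory;   // one of the six agreed segments
  value: string;             // e.g. "Positive" for the Sentiment category
  approved: boolean;         // new labels await Insights & Data approval
}

// A piece of content can carry several labels across categories,
// which keeps the structure flat and easy to aggregate in reports.
interface LabelledContent {
  contentId: string;
  labels: Label[];
}
```

A flat two-level structure like this avoids a deep hierarchy while still supporting grouping and filtering by category at report time.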
Integration with Other Systems
Does the labelling feature need to work with other parts of the platform or tools currently used, such as sentiment analysis or reporting systems?
User Permissions and Access Control
Are there any different levels of permissions needed to add, edit, or review labels? How do we manage this at scale?
Automation
For future consideration, could AI or machine learning assist with auto-labelling content or suggesting labels to moderators? What type of training data would be needed to support this, and how easily could it be assembled?
UX/UI Design Considerations
Intuitive Labelling Interface
The labelling process should be as intuitive as possible, minimizing friction. What UI elements would make it easy for moderators to select or input tags? Should we use a type-ahead search, dropdowns, or color-coded tags?
Labelling Suggestions
How can we incorporate smart labelling suggestions to help moderators choose the right label quickly? Would the system surface similar labels used previously to create consistency?
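One lightweight way to surface previously used labels is to rank a moderator's historical labels by how often they were applied. The function below is a hypothetical sketch to make the idea concrete, not the approach the engineers ultimately shipped.

```typescript
// Hypothetical sketch: rank a moderator's previously applied labels by
// frequency so the most commonly used ones surface first, nudging the
// team toward consistent reuse rather than near-duplicate labels.
function suggestFromHistory(history: string[], limit = 5): string[] {
  const counts = new Map<string, number>();
  for (const label of history) {
    counts.set(label, (counts.get(label) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1]) // most frequently used first
    .slice(0, limit)
    .map(([label]) => label);
}

// Example: "Positive" has been applied most often, so it surfaces first.
console.log(suggestFromHistory(["Positive", "Support", "Positive", "Friendly", "Positive"]));
// → ["Positive", "Support", "Friendly"]
```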
User Feedback
How will moderators know if content has been labelled correctly or needs review?
08. Metrics
Metrics for Success
Before diving into the design and prototyping phase, I had to define how the feature would be measured post-launch:
• Adoption Rate: How many moderators are actively using the labelling system?
• Efficiency: Does the system reduce the time spent labelling content compared to manual or previous methods?
• Accuracy: How accurately are labels being applied, and how often do reports come back needing revisions?
• User Satisfaction: Are moderators satisfied with the experience, and do clients find the insights more actionable?
09. Wireframing and Prototyping
Low-Fidelity Wireframing
To lay the foundation for the data labelling system, we began by creating low-fidelity wireframes to outline key interactions and user workflows. Using Figma Whiteboard, I mapped out the potential workflows alongside the product team, ensuring a logical structure for labelling, searching, and managing labels.
We then moved to Figma to create wireframes visually representing the labelling interface, including elements such as a tag input field, an AI-assisted suggestions dropdown, a bulk tagging checkbox, and a tag management dashboard.
The wireframing process focused on clarity, usability, and efficiency for moderators. I decided to create three wireframes of varying complexity: because we had limited engineering resources, we needed to gauge whether the feature could be delivered within the two-week sprint we had in place.
Early versions presented a simple text input for labels, but through internal reviews, we identified potential issues like inconsistent label naming and moderator fatigue from manual entry in Excel sheets. To address this, I iterated on the design by introducing predefined label categories, a multi-select dropdown, and a machine-learning auto-suggestion search bar.
Wireframes evolved through multiple iterations:
Version 1: Basic labelling dropdown with free-text entry.
Version 2: Introduced bulk label selection and structured label categories.
Version 3: Added a predictive label search bar and a label management feature within the modal.
We collaborated closely with the data team and moderators, gathering feedback to refine the hierarchy of labels and ensure they aligned with reporting requirements. We decided that the label format should be presented as “Label Category: Label Value”, for example, “Sentiment: Positive”. The final mid-fidelity wireframes were tested before moving to high-fidelity designs, ensuring they were intuitive and aligned with Rungway’s existing UI components and design system.
The final version enabled moderators to select multiple labels within predefined categories or quickly find labels via a search bar, which suggested labels based on the first three characters typed. Moderators could also add new labels, subject to approval by the Insights & Data team.
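To make the three-character behaviour concrete, the sketch below shows one way a type-ahead could filter labels in the agreed "Label Category: Label Value" format. The label list and function names are illustrative assumptions, not the shipped implementation.

```typescript
// Hypothetical sketch of the three-character type-ahead behaviour.
// Labels follow the agreed "Label Category: Label Value" format.
const LABELS = [
  "Sentiment: Positive",
  "Sentiment: Negative",
  "Agreeableness: Friendly",
  "Advocacy: Support",
  "Helpfulness: High",
];

function suggestLabels(query: string): string[] {
  // Only start suggesting once the moderator has typed three characters.
  if (query.length < 3) return [];
  const q = query.toLowerCase();
  // Match against either the category or the value part of each label.
  return LABELS.filter((label) =>
    label
      .toLowerCase()
      .split(": ")
      .some((part) => part.startsWith(q))
  );
}

console.log(suggestLabels("sen")); // → ["Sentiment: Positive", "Sentiment: Negative"]
console.log(suggestLabels("po"));  // → [] (fewer than three characters typed)
```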
These refined wireframes served as the blueprint for prototyping, guiding the development of an interactive and user-friendly labelling experience.
10. Final Design
The final design and hand-off.
After multiple iterations and usability testing, we finalised the high-fidelity design for the data labelling feature, ensuring it was intuitive, scalable, and seamlessly integrated into Rungway’s existing UI. The final design included three key components:
1. Labelling Interface
A searchable side menu with smart label suggestions powered by machine learning, allowing moderators to efficiently categorise content.
2. Labelling from the Dashboard
A streamlined modal interface enabling moderators to label content directly from the moderation dashboard, significantly improving workflow efficiency. Single-click edits allowed moderators to change labels easily.
3. Auto-Labelling Predictive Search Bar
Uses predictive suggestions based on the moderator's input in the search bar, reducing manual effort.
Once the final prototype was validated, we prepared for developer handoff by creating a comprehensive design package in Figma. This included:
Annotated Screens & User Flows
Clearly marked elements explaining interactions, micro-interactions, and the logic behind the machine-learning search bar suggestions.
Component Specifications
A breakdown of design components, including typography, color schemes, and interaction states (e.g., hover, active, and error states).
Edge Case Documentation
Addressing scenarios like duplicate labels, unlabelled content, and error handling.
Prototype Walkthroughs
Clickable prototypes demonstrating the intended user experience, reducing ambiguity for engineers.
To ensure a smooth transition from design to development, I led a design run-through session with the engineers, where we walked through the design rationale, usability findings, and technical considerations. We also maintained an open Slack channel for continuous collaboration, allowing developers to ask questions and request design clarifications in real time.
By the time the feature was implemented, the structured design approach and clear documentation had significantly streamlined the development process. The final result was a powerful, user-friendly tagging system that improved moderator efficiency, ensured consistent data labelling, and enhanced the accuracy of Rungway’s sentiment analysis reports.
11. Conclusion
Conclusion and final thoughts.
The data labelling system significantly enhanced Rungway’s ability to deliver accurate, data-driven insights to clients by enabling efficient and consistent content categorisation. Through a structured design process spanning user research, wireframing, prototyping, and iterative testing, we created a scalable solution that streamlined the moderator experience while ensuring high-quality data labelling for analytics.
The introduction of labelling from the dashboard not only reduced manual workload but also increased efficiency by allowing moderators to focus on content review rather than repetitive administrative tasks. The ability to edit these labels on the fly increased efficiency further.
One of the most impactful aspects of the design was the improved user experience for moderators. Before implementation, labelling content was a manual and inconsistent process relying on multiple Excel sheets, which led to errors and delays in report generation. With the new feature, moderators could quickly and accurately assign multiple labels. As a result, data analysts received more structured datasets, leading to more accurate sentiment reports for clients.
This streamlined process eliminated inefficiencies, reduced human error, and ultimately strengthened Rungway’s value proposition by delivering actionable insights that help organisations drive meaningful cultural change. The project underscored the power of thoughtful UX design and data-driven decision-making, setting a strong foundation for future enhancements, such as automated label refinement and machine-learning-driven trend analysis.