PROJECT UX CONSULTANCY

CLIENT CASTELLI

ROLE  
Data Analysis
User Testing
Prototyping

INTRODUCTION

Castelli is a storied Italian cycling brand with an inspiring history rooted in innovation. The brand is associated with teams cycling through dramatic landscapes and with customers who expect high-quality products and designs for their premium cyclewear. They began using Unmade configurators and our supply chain management systems to offer customisable team kits to their customers.


A key service offering for our clients is sharing quantitative data analysis from their custom orders and running qualitative user testing on their configurators. In combination, this research creates a powerful foundation for continuous usability improvements and new features, whilst helping inform how we can work collaboratively with clients to develop garments that make the most of our interaction design.


We capture and communicate our analysis through in-person workshops with our clients. The shared insights lead to informed discussions and alignment on the future direction of our products. We gain a clear understanding of our clients' strategic objectives, whilst generating a roadmap of priorities that ensures maximum value for their business and our SaaS products.



CREATING USER-FOCUSSED PRODUCTS
PERSONAS AND ARCHETYPES


When we launch with a new client, there is one clear question we ask them to help inform how we develop our user experience for maximum value to the customer:


“Who are the primary customers you’re targeting with customisable products?”


Often clients have a good idea of their core mainline customer, but less of an idea about the new customers who can be reached with customisable products. For our clients, this is an innovative opportunity to capture new customers whilst nurturing existing ones. Offering customisable products creates a heightened customer experience, whilst opening up other outcomes such as localized small runs for physical retail (imagine a cycling jersey available only as a limited edition in certain locations).


In this case, we ran a cross-functional internal workshop to develop user personas. User personas can have a bad rap in product design, as some believe they perpetuate stereotypes. But a difficulty we faced was that the client was unable to share a clear view of who their primary customers were, which began to hinder how we could optimize our user journeys and write clear user stories when developing features. We therefore decided to create personas/archetypes to understand the motivations of their customers and help galvanize objective decision-making around the custom experience.


This created empathy across the cross-functional team, with engineers better able to see the value of the design work they build from the customer's perspective.


We delved into order data to create storytelling around some key account orders. From these, we could get a sense of users who were engaging with the configurators, as well as the fascinating ways they imparted their cycling teams' brand design onto garments.



Section of a Miro board looking at orders - identifiable information has been redacted.
Bringing orders to life, we gathered examples of custom teamwear that had been created with our configurators, manufactured, and posted on socials by cycling teams. We could see the size of the orders and the creativity behind them. Reflecting on these real orders created excitement and empathy within our internal teams about the power of the software we work on together. Sometimes you can be so close to the work that you forget the impact it has out in the real world: cyclists creating charity team kits for endurance rides, a team creating a special kit for a memorial ride for someone they’ve lost, or corporate team events.


We then used a structured template to develop personas, adding details like professional backgrounds, motivations and end goals.


Section of a Miro board looking at final archetypes - some information has been redacted
These evolved into archetypes to make them more memorable, anonymous and shareable, both internally and with our clients, who confirmed them and made small edits. Our two key archetypes were the Lone-Wolf and the Club-Rider.



DEVELOPING UX RESEARCH

CONFIGURATOR USER TESTING


We used our archetypes to generate screener questions for recruiting user-testing participants. Having the archetypes as a reference meant we could be confident participants would have enough context of the cycling world to give informed feedback.


Then we turned to our user testing platform, Userlytics. We developed five benchmarking tests on the configurator for the Lone-Wolves, and five for the Club-Riders. Testing with more than five participants per group tends to show diminishing returns, as you observe the same behaviors repeatedly, hence we almost always use this pattern of five participants.


Writing a script and forming questions for these tests was a collaborative effort between my Product Manager and myself. Scripts went through various edits and iterations to make sure we captured how users used the configurators unprompted, followed by a series of questions aimed at understanding the usability of different aspects of the process and assessing appetite for additional features.


We made sure to loop in various stakeholders, such as account managers and customer success teams, who deal with clients day-to-day and often hear first-hand insights. We then edited our scripts in a way that allowed us to test some of these insights and develop meaningful hypotheses; testing that would truly inform conversations with clients.


For these tests we also aimed to understand how users respond to our configurators in a section which is largely unprompted; this is our benchmark testing. An example benchmarking script question is:

“Please customize all possible aspects of this cycling jersey. Take your time and spend at least 10 minutes customizing this garment. Talk us through what you're thinking and doing. Please do not select the 'Save Design' button.”


It was important to note the types of responses and potential observation points to look out for. With this particular question these were things like:

  • Is the user missing certain customisation options?
  • Are parts confusing to the user?
  • How are users engaging with the experience: following a guided, linear approach, or exploring more freely?
  • Does the user try to upload a logo unprompted? If so, what happens, and what file format do they choose?
  • Are users navigating to the logo help page? Are they reading it?

Final Script.
Once we had satisfactory scripts, which included a scenario, questions and a conclusive question matrix, we launched our tests on Userlytics and waited for participants to send in their results. Each result included a video recording of the participant verbally talking through the tasks. The results are always fascinating: observing how real users use the products we work on so closely highlights areas for improvement that can easily be overlooked.

Analyzing these results, understanding patterns, observing pain points and recognizing when a user might be saying what you want to hear whilst showing the opposite behavior takes a lot of time and practice, but it really forms the basis of objective conversations with clients.

When we analyzed the verbal and video results, we began pulling out the key themes that emerged. Limiting these to five key themes, whilst still noting less frequent observations of pain points, helped keep the analysis focused.

Some findings were obvious usability patterns and issues, whilst others came as a surprise. As a designer it was important to detach from any “solutionizing”, and to be aware that the urge to seek validation for previous feature ideas must always be kept separate from the objective work of recording observations.


PULLING DATA
GA, AMPLITUDE AND HOTJAR


We extracted demographic user data from Google Analytics, such as device type, browser and geography. We used Amplitude to analyse behavioral data, including drop-off rates, average engagement time and typical user flows through the configurator. Helpfully, we were also able to gain insights on user actions specific to our configurator, such as the file types users were attempting to upload. Hotjar provided us with screen recordings and heat maps, from which we could start to understand patterns of click frustration across users. Together, all of these quantitative elements were incredibly helpful in forming a picture of who the existing users of the configurator tool actually were.
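To make the drop-off analysis more concrete, here is a minimal sketch of the kind of funnel calculation involved, assuming a hypothetical event export; the file name, column names and funnel steps are illustrative only, not the actual Amplitude schema.

import pandas as pd

# Illustrative funnel steps; the real configurator flow differs.
FUNNEL = ["open_configurator", "select_colour", "upload_logo", "save_design"]

# Hypothetical export: one row per event, with session_id and event_name columns.
events = pd.read_csv("configurator_events.csv")

# Count the unique sessions that reached each funnel step.
reached = {
    step: events.loc[events["event_name"] == step, "session_id"].nunique()
    for step in FUNNEL
}

# Drop-off rate between consecutive steps.
for prev, curr in zip(FUNNEL, FUNNEL[1:]):
    if reached[prev]:
        drop_off = 1 - reached[curr] / reached[prev]
        print(f"{prev} -> {curr}: {drop_off:.0%} drop-off")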


The engineers also assisted with pulling order data from our own internal backend systems, enabling us to draw conclusions on the most popular customisation choices for the users who ultimately go on to purchase items. This included insights such as the percentage of orders that had user-uploaded logos, stock logos or text input. From interpreting the data, we were also able to deduce where future roadmap features not yet developed, such as the ability to transfer design intent from one product to another, would have been very useful for users ordering a head-to-toe complete look or a full team kit. This particular insight was key in driving conversations around efficiency, ease of use and, ultimately, customer satisfaction.
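As a rough illustration of the kind of aggregation involved, the sketch below computes those percentages from a hypothetical order export; the file name, column names and 0/1 flags are assumptions for illustration, not our actual backend schema.

import pandas as pd

# Hypothetical export: one row per order, with 0/1 flag columns and a
# garment count (the real internal schema will differ).
orders = pd.read_csv("orders.csv")

total = len(orders)
summary = {
    "user-uploaded logo": orders["has_uploaded_logo"].mean(),
    "stock logo": orders["has_stock_logo"].mean(),
    "text input": orders["has_text_input"].mean(),
    # Rough proxy for head-to-toe or team-kit orders that would benefit
    # from transferring a design from one product to another.
    "multi-garment": (orders["garment_count"] > 1).mean(),
}

for label, share in summary.items():
    print(f"{label}: {share:.1%} of {total} orders")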


These quantitative insights were invaluable and really helped to corroborate our thoughts from the qualitative research and testing.



COMMUNICATION IS KEY
VISUALISING AND SHARING RESEARCH


Armed with so much useful context and information to share with our clients, we began the process of translating it into a digestible format. Our brief was to consolidate it into a two-day workshop held in person in Italy. The ultimate aim of the workshop was to:


“Uncover areas where we can drive revenue growth through the overall customer experience and configurator.”


One of the most enjoyable parts of this workshop was the coming together of various disciplines; the group consisted of the Head of Product, a Senior Product Manager, an Implementation Manager and the Customer Account Manager, as well as myself, a Senior Product Designer.


With each of us offering different perspectives, another part of my role was to use my visual design skills to make our complex data and viewpoints coherent and impactful as they were presented to our clients. Often with global projects there are language barriers, so making sure every slide has a clear key message is pivotal.


After a couple of weeks of cross-functional preparation, in person and asynchronously, we had a final deck and planned workshop sessions ready for the trip to Italy. The final deck included the contextual information and analysis, alongside prototypes of solutions to some of the pain points users encountered repeatedly.


Tyre changing on an Italian roadside.
Our first morning got off to a great start, as our car's tyre blew on the way to our client's HQ. Undeterred, we got to work changing the tyre. At times like this you just have to roll with the situation. Humour was employed, and we got to their office only 10 minutes late.


We began the morning by laying out the context and aims of our user testing, revealing the data we had collected, and touching on the results of the testing we had conducted. We had our trusty archetypes on hand to keep coming back to, and to course-correct any conversations which started to become more subjective. Whether everyone found the archetypes to be right or wrong, they stirred up debate and evolved further during the initial session, finishing with internal alignment on the client's side. That’s exactly why they can be a useful tool.


Personas/Archetypes


Results & Observations
Using our initial theming from the analysis of the user testing, I had prototyped some “low hanging fruit” improvements which were already being worked on by engineers, or which required no engineering at all (things like simple copy changes to improve the UX). Here, we were demonstrating that we were rapidly listening and learning from users, and showing that there was a clear feedback loop between understanding users and enacting meaningful change to optimize their journeys.


Results & Observations
There was lots of discussion, and this is where the value of the work really came into play. As the analysis wasn’t based on the whim of opinion but rooted in testing, conversations were framed in a user-centric way, which is most beneficial for our SaaS product and our clients' customers.


We presented order data such as orders placed and garments ordered, EU vs US orders, and charted whether there were order spikes around marketing pushes in different locations. We relayed the percentage of orders that were for single garments, head-to-toe looks or cycling teams, to help identify whether the custom products were selling to the types of users the client expected. Please note that, for client confidentiality, these numbers have been redacted.


We developed a shared understanding of priorities to evolve the product, always coming back to the main aim of maximizing opportunities for growth. Consultancy beyond the editor itself, including user journeys and how marketing gets users to the editor in the first place, was also a hot topic of conversation.


Impact Effort Matrix with the client forming our shared roadmap.
Conducting this type of work in person also created much stronger relationships and bonds between participants, both internally and with our clients. Gems of strategic information were offered up that simply don’t come up on a one-hour weekly steering call. Being in person also throws up some downtime opportunities to cap it all off with an Aperol Spritz (in this case, in the city which claims to have invented it!).

The next stage on returning from the trip was a half-day wash-up with the team who attended, where we agreed our priorities moving forward and shared what we thought went well and what could be improved for subsequent client trips in general. The ultimate aim is to learn, iterate and improve our consulting alongside our agile ways of working.



A job well done.