I answer questions about UX, Information Architecture and other topics on Quora. A selection of these answers will be reposted on Medium with occasional, minor editing for clarity. Following is a question I answered in September.
When is the best time to bring in a UX expert: when you are first building a product, or after you have user data?
I work as a UX consultant on a digital ID standard. One of the areas I am researching is the usability of identity management products and services. Some of the companies I have interviewed are very small, one- or two-person startups with no budget for outside expertise; others are very large, nationally known brands that simply have not allocated budget for UX testing. In some cases the product managers and developers are very interested in the user experience of their products; in others, they interpret “user” as an electronic agent rather than a human at a computer or device, and so invest little to nothing in UX.
Those who do understand the importance of UX will follow usability guidelines, such as the research from NNGroup, Jakob Nielsen, and Don Norman, or actively seek outside UX expertise. This is especially true for products intended for the mass consumer market, or for activities involving repetition, multitasking, or heavy attention load, where poor design can lead to worker injury. At the very least, every user-facing product deserves some UX study: sit with users and with stakeholders who understand user needs, complaints, and feedback, and identify key user tasks and potential negative outcomes. Do these exercises at every step of development, particularly pre-launch and when introducing changes (even ones that seem minor).
For our financial wellness tools at Decision Fish, we tested with dozens of prospective users very early, well before launching our first web app, when it was still just an Excel spreadsheet! We did surveys and interviews on how people manage their finances. We watched people use all kinds of personal finance tools, from paper to software to just thinking it through. We surveyed them about their pain points. We observed individuals and couples as they walked through our alpha modules and asked them directly to tell us what we were doing wrong. We pivoted quite a bit based on user input.
We even offered financial coaching sessions to prospective users and partners to get deeper insight into individual concerns. In doing so, we discovered underrepresented use cases that challenged some of the assumptions we had made in our design. We collected contact info from interested users for a beta test once we launch, and we will be offering the product as a pilot to companies and partners interested in providing it as an employee benefit.
All of the data and feedback we gathered in these sessions helped us develop our product and adjust our assumptions about how to present information and guide our decision-making tool. All of this happened well before we had actual usage data to analyze. Our next step is to create a plan for analyzing and learning from our users once we have launched and have data to look at. But we will continue to observe, coach, and survey users, because we expect continual improvements and adjustments, and because we want our decision tool to be the best it can be for our users.