Consumer Attitudes Towards Product Safety

[Report cover: “Consumer Attitudes Towards Product Safety: Physical Consumer Goods vs. Internet Connected Products,” with a dark purple diagonal section on top bearing the title and a light purple section below, featuring a cartoon of a woman in glasses and a messy bun holding papers labeled “product safety,” with a thought bubble of a seesaw weighing two lower bags marked “injury” against a higher bag marked “loss of privacy”]

Just published: “Consumer Attitudes Towards Product Safety: Physical Consumer Goods vs. Internet Connected Products”. In my latest research with Lisa LeVasseur at Internet Safety Labs, we looked at consumer perceptions and attitudes toward the safety of a variety of products. This research received financial support from the Internet Society Foundation.

Yahoo! Finance picked it up!

…and if the 75-minute read warning on LinkedIn scares you (it’s mostly charts anyway), jump to the introduction and discussion to see what you really should be concerned about as digital makers. This is important information that every product designer and engineer should know.

Some interesting findings about product safety attitudes:

* When it comes to product safety, there’s a double standard among consumers for connected vs. unconnected products.

People expect product makers to be responsible for the safety of things like home goods, cars, cleaning products, and the like. But they don’t have the same expectation when it comes to websites, smart TVs, and mobile apps.

* Many consumers appear unaware of the causal connection between the systemic loss of privacy tied to connected products and services and personal and societal harms such as physical, emotional, reputational, and financial damage.

Consumers are exposing themselves to more harm than they realize when they trust digital product makers to take proper care of their personal information.

* Even though survey respondents didn’t score mobile apps as the “least safe” option (websites, smart automobiles, and smart homes got that dubious honor), consumers expressed more concern about the safety of apps than about the safety of other internet-connected products.

If you find that last point interesting, you will find Internet Safety Labs’ App Microscope educational. App Microscope displays Safety Labels for mobile applications and currently covers over 1,700 apps studied in the ISL 2022 K-12 EdTech safety benchmark.

Read the full report at Internet Safety Labs:

Consumer Attitudes Towards Product Safety: Physical Consumer Goods vs. Internet-Connected Products

You can find other reports in a summary of my work for Internet Safety Labs.

UX-LX: Talks on Digital Harm and Understanding Searcher Behavior

User Experience Lisbon 2023

In May, I was invited to speak at UX Lisbon on Preventing Digital Harm in Online Spaces. At the main event, I presented Internet Safety Labs’ framework for evaluating the relationship that digital technologies have with consumers and what we can do as designers to mitigate the digital harms and dark patterns that could violate that relationship. You can download my presentation below.

On the first day of the event, I ran a half-day pre-conference workshop titled “Designing Effective Search Experiences,” in which I introduced a new framework that uses observation as a powerful tool to understand site search behavior. To explore this, we broke into seven groups and worked on creating empathy maps and search personas (including group personas) and on mapping the user journey toward information discovery. As a takeaway, all participants received a toolkit for crafting these artifacts and a step-by-step process for enhancing product search. We got to eat yummy Portuguese snacks, too!

“Noreen … made the interesting point that if we build an accessible design we’ll also be solving many search problems.”

UXLx: UX Lisbon

What a wonderful event, interesting and welcoming people, and an absolutely unforgettable time!

I am available to teach your team how to mitigate digital harm (as a solo facilitator) or how to understand user search behavior (solo or with my colleagues at the Information Architecture Gateway). Let me know if we can help.

Read the UXLX Write-ups at Medium:

UXLX 2023 Wrap Up: Workshops

UXLX 2023 Wrap Up: Talks Day

Ethics in Computer Programming: Move Fast, and Let Someone Else Break Things

In a session yesterday of the NSF CyberAmbassadors leadership training program, my breakout group was tasked with discussing a case study of a potential ethics violation in research data privacy. The code of conduct we were to use to determine whether a violation had occurred was the Association for Computing Machinery’s (ACM).

The case study involved a research scientist who had built software to analyze three sets of participant data: DNA records, medical records, and social media posts. There was a problem with the program, and the scientist wanted to conduct a crowdsourced code review. They asked their Ethics Review Board (ERB) to determine whether they could release the codebase to the public to crowdsource the problem. The ERB approved the request as long as no participant data was released or could be reidentified. The case stated that there was a risk of reidentifying data but didn’t say specifically how; just that the request was approved.

My first impression was that the research scientist was hiding behind item 2.6 of the ACM Code of Conduct, which says to perform work only in areas of competence. The way we read it, the researcher relied on the ERB to make the ethical determination. Since the ERB approved the request, was the researcher in the clear?

Conversation ensued about how a data analytics program that didn’t include test data could be tested, or whether it could be tested with dummy data and a sample of open social media posts/hashtags (a rough sketch of that approach appears below). But that was aside from our real interest: the idea that technology developers, including those with less funding but also those with fewer guardrails, may not be competent to make ethical decisions, or even interested in making them.
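For the curious, here is a minimal sketch of that dummy-data idea in TypeScript. Everything in it is hypothetical: the record shape, field names, and placeholder values are my assumptions for illustration, not details from the actual case study.

```typescript
// Hypothetical sketch: generate synthetic participant records so that an
// analysis codebase could be shared for crowdsourced review without exposing
// real DNA, medical, or social media data. All fields and values are
// invented for illustration, not taken from the actual case study.

interface ParticipantRecord {
  id: string;
  dnaSequence: string;  // random bases, not derived from any real genome
  icd10Codes: string[]; // placeholder diagnosis codes
  posts: string[];      // template text, no real social media content
}

const BASES = ["A", "C", "G", "T"];
const PLACEHOLDER_CODES = ["E11.9", "I10", "J45.909", "M54.5"];

function randomDna(length: number): string {
  let sequence = "";
  for (let i = 0; i < length; i++) {
    sequence += BASES[Math.floor(Math.random() * BASES.length)];
  }
  return sequence;
}

function makeSyntheticRecord(n: number): ParticipantRecord {
  return {
    id: `synthetic-${n}`,
    dnaSequence: randomDna(100),
    icd10Codes: [PLACEHOLDER_CODES[n % PLACEHOLDER_CODES.length]],
    posts: [`placeholder post #${n} tagged #exampleHashtag`],
  };
}

// Reviewers can run the analysis pipeline end to end on records that carry
// zero reidentification risk.
const testData: ParticipantRecord[] = Array.from({ length: 50 }, (_, i) =>
  makeSyntheticRecord(i)
);
console.log(testData[0]);
```

The point is not the code itself but that reviewers could exercise a pipeline end to end without ever touching data that might reidentify a participant.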

Someone brought up AI. People working in AI today, or really on any large, complex model affecting global populations, are often making decisions well outside their area of competence. They may do well in one or two disciplines, but understanding and unraveling the externalities of what the thing will do once it’s in the world is of lesser interest, since they aren’t ethicists.

In fact, not all companies have ERBs, and many big names (you know who) have quietly and unceremoniously disbanded their ethics teams. In a world of “move fast and break things,” ethics is not their area of competence.

Is this the world we want to live in?

UX-LX: Designing Search Experiences in Lisbon!

May 24, 2023 9:00AM-12:30PM WET
Sensemaking, Search and SEO at UX-LX: UX Lisbon

Designing Effective Search Experiences

How do people locate and discover information online? Well, they type keywords into a search engine and then select items from the search results, right? This is the current mental model of how search/retrieval works for most users. But it’s not the only way people search, nor is it necessarily the most effective for the information seeker.

In this workshop, you will learn about “sense-making,” a search behavior that information architects, user experience (UX), and usability pros should not ignore. You will learn how individuals (and groups) plan and carry out search activities, how a searcher’s goals affect their sense-making tasks, and how accessible design and information architectures improve search performance. At the end, you will understand how to optimize the user experience of your products and search engine results pages so people get the information they need with less frustration.

Topics covered:

  • Approaches to sense-making & information seeking behavior
  • Searcher goals that affect sense-making tasks
  • How accessible design and information architecture improve search performance
  • Where & how to implement search-related sense-making in user personas/profiles & customer journeys
  • How to optimize individual search listings for findability & sense-making
  • Search strategies for apps, video, voice and ChatGPT

Exercises:

  • Individual and group search exercise
  • Analyze a selected web page for accessible design and search optimization
  • Incorporate search behavior characteristics into personas and JTBD (Jobs to Be Done)
  • App, video and voice search optimization
  • Discussion of new and emerging forms of search experiences

Attendees will learn:

  • How to identify search behaviors and incorporate them in personas and JTBD tasks
  • How to architect & optimize different types of search experiences
  • How accessible design can improve search experiences for everyone
  • How search strategy differs for websites, apps, voice, video and emerging experiences


Any requirements for attending: None

Information Architecture Conference 2023

I am also hosting a full-day workshop, “Safe Tech Audit: Applying IA Heuristics for Digital Product Safety Testing,” in New Orleans on March 28 at IAC23: The Information Architecture Conference. Registration

CPPA Stakeholder Meeting Discusses “Dark Patterns”

On May 5, 2022, I participated in the California Privacy Protection Agency’s (CPPA) stakeholder meeting, making a public statement about “dark patterns,” which I urged them to redefine as “harmful patterns,” and suggesting changes to their definitions of “consent” and “intentional action.”

As Jared Spool says, we should be looking at the UX outcome of design decisions, not just the intent, as many designers adopt strategies or work with underlying technologies whose outcomes can be harmful to the technology user and other stakeholders. These UI patterns may not be intended to do harm; often the designer’s intent is to provide convenience or a useful service.

Take accessibility overlays, which are intended to provide a better experience for people with visual or cognitive disabilities but can have the effect of overriding necessary controls. Even patterns that nudge user behavior, like keeping someone on a page longer, prompting a click on a link, or accepting default cookie settings, may be intended as conveniences. Yet unknown to both the designer and the user, many of these tools sit on top of processes that share data and information about the transaction in ways that can be harmful, as the sketch below illustrates.
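To make the intent-versus-outcome gap concrete, here is a minimal, hypothetical sketch in TypeScript. The tracker endpoint, function names, and pre-selected consent default are all invented for illustration; they stand in for the kind of third-party machinery that often ships inside convenience patterns.

```typescript
// Hypothetical sketch of intent vs. outcome: the designer intends a
// convenient consent banner, but the page fires an analytics beacon before
// the user has made any choice. The tracker endpoint and function names are
// invented for this example.

type ConsentChoice = "accepted" | "rejected";

function sendAnalyticsBeacon(event: string): void {
  // Shares the visit (page, timestamp) with a hypothetical third party.
  fetch("https://tracker.example.com/collect", {
    method: "POST",
    body: JSON.stringify({ event, page: "/article", ts: Date.now() }),
  });
}

function showConsentBanner(): Promise<ConsentChoice> {
  // Harmful pattern: "accept" is the pre-selected default, so a hurried
  // click (or a timeout) opts the user in.
  const defaultChoice: ConsentChoice = "accepted";
  return Promise.resolve(defaultChoice);
}

// Outcome, regardless of intent: data leaves the page before consent resolves.
sendAnalyticsBeacon("page_view");
showConsentBanner().then((choice) => {
  if (choice === "accepted") {
    sendAnalyticsBeacon("consent_accepted");
  }
});
```

Nothing in this UI overtly lies to the user, yet the visit is shared before any choice is made, which is exactly why evaluating outcomes rather than intent matters.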

The CPRA (California Privacy Rights Act) is defining what it means to consent to data collection and what an intentional user action is. It addresses “dark patterns” as intentional deception, when often the digital harm is not intentional, yet is deep-rooted. We are hoping to make these harms clearer and to provide guidelines for addressing them through our ISL Safe Software Specification.

Read more about the CPPA stakeholder meeting and my statement on behalf of Internet Safety Labs (formerly the Me2B Alliance):