UXLx: Talks on Digital Harm and Understanding Searcher Behavior

User Experience Lisbon 2023

In May, I was invited to speak at UX Lisbon on Preventing Digital Harm in Online Spaces. At the main event, I presented Internet Safety Labs’ framework for evaluating the relationship that digital technologies have with consumers and what we as designers can do to mitigate the digital harms and dark patterns that could violate that relationship. You can download my presentation below.

On the first day of the event, I ran a half-day pre-conference workshop titled “Designing Effective Search Strategies,” in which I introduced a new framework that uses observation as a powerful tool for understanding site search behavior. To explore this, we broke into seven groups and worked on creating empathy maps and search personas (including group personas) and on mapping the user journey toward information discovery. As a takeaway, all participants received a toolkit for crafting these artifacts and a step-by-step process for enhancing product search. We got to eat yummy Portuguese snacks, too!

“Noreen … made the interesting point that if we build an accessible design we’ll also be solving many search problems.”

UXLx: UX Lisbon

What a wonderful event: interesting and welcoming people and an absolutely unforgettable time!

I am available to teach your team how to mitigate digital harm (as a solo facilitator) or how to understand user search behavior, either solo or with my colleagues at the Information Architecture Gateway. Let me know if we can help.

Read the UXLx write-ups on Medium:

UXLX 2023 Wrap Up: Workshops

UXLX 2023 Wrap Up: Talks Day

Ethics in Computer Programming: Move Fast, and Let Someone Else Break Things

In a session of the NSF CyberAmbassadors leadership training program yesterday, my breakout group was tasked with discussing a case study of a potential ethics violation in research data privacy. The Code of Conduct we were to use to determine whether a violation had occurred was the Association for Computing Machinery’s (ACM).

The case study involved a research scientist who had written software to analyze three sets of participant data: DNA records, medical records, and social media posts. There was a problem with the program, and the scientist wanted to crowdsource a code review. They asked their Ethics Review Board (ERB) to determine whether they could release the codebase to the public to crowdsource the problem. The ERB approved the request as long as no participant data was released or could be reidentified. The case study stated that there was a risk of reidentifying the data but didn’t say specifically how, only that the request was approved.

My first impression was that the research scientist was hiding behind item 2.6 in the ACM Code of Conduct, which says to perform work only in areas of competence. The way we read it, the researcher relied on the ERB to make the ethical determination. Since the ERB approved the request, was the researcher in the clear?

Conversation ensued about how a data analytics program released without its test data could be tested, or whether it could be tested with dummy data and a sample of open social media posts, hashtags, and so on (a rough sketch of that idea appears below). But that was aside from our real interest: the idea that technology developers, including those with less funding but also those with fewer guardrails, may not be competent to make, or interested in making, ethical decisions.
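As an aside, here is a minimal sketch of what that dummy-data approach could look like. Everything in it is hypothetical: the record fields and the analyze entry point are stand-ins for the scientist’s actual pipeline, and every value is synthesized so reviewers could run the code without ever touching participant data.

```python
import random
import string

# Hypothetical stand-ins for the three participant data sets in the case
# study. Every value here is synthesized; no real participant data is used.

def fake_dna(n_bases=40):
    """Generate a random DNA-like sequence."""
    return "".join(random.choice("ACGT") for _ in range(n_bases))

def fake_medical_record(participant_id):
    """Generate a dummy medical record with plausible-looking fields."""
    return {
        "participant_id": participant_id,
        "age": random.randint(18, 90),
        "diagnosis_code": "ICD-" + "".join(random.choices(string.digits, k=3)),
    }

def fake_social_post(participant_id):
    """Generate a dummy social media post with a random hashtag."""
    return {
        "participant_id": participant_id,
        "text": f"placeholder post #{random.choice(['health', 'research', 'data'])}",
    }

def build_dummy_dataset(n_participants=100):
    """Assemble a synthetic dataset shaped like the real one."""
    return [
        {
            "dna": fake_dna(),
            "medical": fake_medical_record(i),
            "social": fake_social_post(i),
        }
        for i in range(n_participants)
    ]

if __name__ == "__main__":
    dataset = build_dummy_dataset()
    # analyze(dataset)  # hypothetical entry point to the scientist's pipeline
    print(f"Generated {len(dataset)} synthetic participant records.")
```

The obvious caveat is that synthetic data shaped like the real thing may not reproduce the original bug, which is part of why the question stayed open for us.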

Someone brought up AI. People working in AI today, or really on any large, complex model affecting global populations, are often making decisions far outside their areas of competence. They may do well in one or two disciplines, but understanding and unraveling the externalities of what the thing will do once it’s in the world is of lesser interest, since they aren’t ethicists.

In fact, not all companies have ERBs, and many big names (you know who) have quietly and unceremoniously disbanded their ethics teams. In a world of move fast and break things, ethics is not their area of competence.

Is this the world we want to live in?

Keep On Trackin’

Me2B Research: Consumer Views on Respectful Technology

In the research I’ve been doing on respectful technology relationships at the Me2B Alliance, the prevailing attitude is a combination of “I’ve got nothing to hide” and “I’ve got no other option.” People are deeply entangled in their technology relationships. Even when presented with overwhelmingly bad scores on terms of service and privacy policies, they will continue to use products they depend on, or that give them access to their family and community, or, in the case of Amazon, an abundance of choice, entertainment, and low prices. Even when they abandon a digital product or service, they are unlikely to delete their accounts. And the adtech SDKs they’ve agreed to let track them keep on tracking.