UX-LX: Preventing Digital Harm Keynote and Searcher Behavior Workshop

In May, I was invited to speak at UX Lisbon on preventing digital harm in online spaces. At the main event, I presented the Internet Safety Lab’s framework for preventing digital harm in connected products. This included a discussion of the relationship technologies have with consumers, and I demonstrated techniques designers can adopt to mitigate the digital harms and dark patterns that could violate that relationship. You can download my presentation below.

User Experience Lisbon 2023

On the first day of the event, I ran a half-day pre-conference workshop titled “Designing Effective Search Strategies.” In this session, I introduced a new framework that uses observation as a powerful tool for understanding site search behavior. To explore it, we broke into seven groups and worked on empathy maps, search personas, and mapping the user journey. I also introduced group personas (two of the groups took this as a hint to discover cocktail lounges in Lisbon). As a takeaway, all participants received a toolkit for crafting these artifacts and a step-by-step process for enhancing product search. We got to eat yummy Portuguese snacks, too!

“Noreen … made the interesting point that if we build an accessible design we’ll also be solving many search problems.”

UXLx: UX Lisbon

What a wonderful event, with interesting and welcoming people and an absolutely unforgettable time!

I am available to teach your team how to prevent or mitigate digital harm, or to lead a workshop on understanding user search behavior. I can run these sessions solo or with my colleagues at Information Architecture Gateway. Let me know if we can help.

Read the UXLx write-ups on Medium:

UXLX 2023 Wrap Up: Workshops

UXLX 2023 Wrap Up: Talks Day

Ethics in Computer Programming: Move Fast, and Let Someone Else Break Things

In a session of the NSF CyberAmbassadors leadership training program yesterday, my breakout group was tasked with discussing a case study of a potential ethics violation in research data privacy. We were to use the Association for Computing Machinery (ACM) Code of Ethics and Professional Conduct to determine whether a violation had occurred.

The case study involved a research scientist who had written software to analyze three sets of participant data: DNA records, medical records, and social media posts. There was a problem with the program, and the scientist wanted to run a crowdsourced code review. They asked their ERB to review whether they could release the codebase to the public to crowdsource the problem. The ERB approved the request, provided that no participant data was released or could be re-identified. The case noted that there was a risk of re-identifying data but didn’t say specifically how; just that the request was approved.

My first impression was that the research scientist was hiding behind item 2.6 in the ACM Code of Ethics, which directs members to perform work only in areas of competence. The way we read it, the researcher relied on the Ethics Review Board (ERB) to make the ethical determination. Since the ERB approved the study, was the researcher in the clear?

Conversation ensued about how a data analytics program could be tested without the original participant data, and whether dummy data plus a sample of public social media posts and hashtags would suffice. But that was aside from our real interest: the idea that technology developers, including those with less funding but also those with fewer guardrails, may not be competent to make ethical decisions, or interested in making them.

Someone brought up AI. People working in AI today, or really on any large, complex model affecting global populations, are often making decisions well outside their area of competence. They may do well in one or two disciplines, but understanding and unraveling the externalities of what the thing will do once it’s in the world is of lesser interest, since they aren’t ethicists.

In fact, not all companies have ERBs, and many big names (you know who) have quietly and unceremoniously disbanded their ethics teams. In a world of move fast and break things, it’s not their area of competence.

Is this the world we want to live in?

Crypto, NFTs and Dadaism

POGs (Source: File:Pogslam.jpg – Wikimedia Commons)

Those of you interested in artists’ collaborative spaces may find the DADA.art platform unique. I found it while pondering the connection between NFTs (non-fungible tokens) and Dadaism, an early-20th-century anti-capitalist art movement “expressing nonsense, irrationality, and anti-bourgeois protest in their works.” (I wondered to myself, half seriously, whether anyone had made NFTs from POGs, the 1990s collector’s item. It turns out someone has.)

Of course the NFT platform is called DADA.art, and they recently sold a collection of collaborative works as an NFT to Metapurse for 500 ETH (the Ethereum cryptocurrency). All proceeds were donated back to the community to provide a basic income (in ETH) to artists on the platform. Fascinating.

Keep On Trackin’

Me2B Research: Consumer Views on Respectful Technology

In the research I’ve been doing on respectful technology relationships at the Me2B Alliance, what I hear is a combination of “I’ve got nothing to hide” and “I’ve got no other option.” People are deeply entangled in their technology relationships. Even when presented with overwhelmingly bad scores on terms of service and privacy policies, they continue to use products they depend on or that give them access to their family and community, and, in the case of Amazon, to an abundance of choice, entertainment, and low prices. Even when they abandon a digital product or service, they are unlikely to delete their accounts. And the adtech SDKs they’ve agreed to let track them keep on tracking.