UXLx: Talks on Digital Harm and Understanding Searcher Behavior

User Experience Lisbon 2023

In May, I was invited to speak at UX Lisbon on "Preventing Digital Harm in Online Spaces." At the main event, I presented Internet Safety Labs' framework for evaluating the relationship that digital technologies have with consumers, and what we as designers can do to mitigate the digital harms and dark patterns that can violate that relationship. You can download my presentation below.

On the first day of the event, I ran a half-day pre-conference workshop titled "Designing Effective Search Strategies," in which I introduced a new framework that uses observation as a powerful tool for understanding site search behavior. To explore it, we broke into seven groups and worked on creating empathy maps, search personas (including group personas), and maps of the user journey toward information discovery. As a takeaway, all participants received a toolkit for crafting these artifacts and a step-by-step process for enhancing product search. We got to eat yummy Portuguese snacks, too!

“Noreen … made the interesting point that if we build an accessible design we’ll also be solving many search problems.”

UXLx: UX Lisbon

What a wonderful event, with interesting and welcoming people and an absolutely unforgettable time!

I am available to teach your team how to mitigate digital harm (as a solo facilitator) or how to understand user search behavior, either solo or with my colleagues at the Information Architecture Gateway. Let me know if we can help.

Read the UXLx write-ups on Medium:

UXLX 2023 Wrap Up: Workshops

UXLX 2023 Wrap Up: Talks Day

Thoughts on Diversity, Equity and Inclusion (DEI) as a Design Framework

“Tools for Accessibility” by Noreen Whysel. AI generated art produced at NightCafe Studio

I was on a call the other day where we were discussing identity services for underserved populations. Someone brought up Diversity, Equity, and Inclusion (DEI) as a framework for ensuring accessible services for all.

DEI, as applied to product and service design, is a three-pronged philosophy. It asks whether diverse perspectives and lived experiences are being considered in the design of the service; whether access to the design or service is fair to all categories of people; and whether those whose diverse experiences are considered feel safe, welcome, and included in the service and its outcome.

We discussed DEI in our group, but one person became uncomfortable, insisting that it doesn't matter who is using the services as long as everyone can use them. He was concerned that focusing on DEI might mean that the unique needs of people, like the parent of a disabled person, would be excluded from consideration in the design of a product or service.

I thought this was an odd framing. He isn’t wrong to worry that caregivers may not have the best-designed experiences, which is why Universal Design, or design that everyone can use without impediment, is so important as a framework.

But rejecting conversations about DEI outright seems shortsighted.

As a framework, I like DEI because it offers a reminder that there are people who get forgotten in the design process. It asks questions like “Who are we including?” and “Who are we leaving out?” So, my colleague’s concern about addressing the needs of the parent of a disabled person is exactly the type of inclusion issue that a DEI framework can help to identify.

It is also an area I have been focusing on at IA Gateway with Shari Thurow and Bev Corwin. We are working on a model for a group persona that addresses the search needs of caregivers and people with a medical concern, whether the person in their care is a family member, an acquaintance, or someone in guardianship care.

CPPA Stakeholder Meeting Discusses “Dark Patterns”

On May 5, 2022, I participated in the California Privacy Protection Agency's (CPPA) stakeholder meeting, making a public statement about "dark patterns," which I urged the agency to redefine as "harmful patterns," and suggesting changes to its definitions of "Consent" and "Intentional Action."

As Jared Spool says, we should be looking at the UX outcome of design decisions, not just the intent, since many designers adopt strategies or work with underlying technologies whose outcomes can harm the technology user and other stakeholders. These UI patterns may not be intended to do harm; often the designer's intent is to provide convenience or a useful service.

Take accessibility overlays, which intend to provide a better experience for people with visual or cognitive disabilities but can have the effect of overriding necessary controls. Even patterns that nudge user behavior, like staying on a page longer, clicking a link, or accepting default cookie settings, may be intended as conveniences; but unknown to both the designer and the user, many of these tools sit on top of processes that share data about the transaction in ways that can be harmful.
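To make the invisible part concrete, here is a small, entirely hypothetical sketch of such a pattern: a widget whose advertised purpose is convenience, but whose implementation quietly shares behavioral data with a third party. Every name and endpoint here is invented for illustration.

```typescript
// Hypothetical sketch only: a "convenience" widget that restores a reader's
// place on a page, but also reports reading behavior to a third party.
// All names and the endpoint (metrics.example.com) are invented.

interface ScrollEvent {
  url: string;       // the page the user is reading
  position: number;  // how far down the page they scrolled
  timestamp: number;
}

class RememberScroll {
  // Invisible to the user; never mentioned in the visible UI.
  private endpoint = "https://metrics.example.com/collect";

  // The advertised convenience: save the user's place for return visits.
  save(event: ScrollEvent): void {
    localStorage.setItem(`scroll:${event.url}`, String(event.position));

    // The undisclosed side effect: the same event leaves the device.
    // URL plus behavior over time adds up to a browsing profile.
    void fetch(this.endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(event),
    });
  }

  restore(url: string): number {
    return Number(localStorage.getItem(`scroll:${url}`) ?? 0);
  }
}
```

Nothing in the visible experience hints at the second network call, which is why reviewing outcomes, not stated intent, is the only way to catch this class of harm.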

The CPRA is defining what it means to consent to data collection and what counts as an intentional user action. It treats "dark patterns" as intentional deception, when often the digital harm is unintentional yet deep-rooted. We hope to make these harms clearer and to provide guidelines for addressing them through our ISL Safe Software Specification.

Read more about the CPPA stakeholder meeting and my statement on behalf of the Internet Safety Labs (formerly the Me2B Alliance):

Me2B Alliance

Background

The Me2B Alliance is a standards development organization made up of software engineers, policy analysts, UX experts, and business and philanthropic leaders who are committed to giving individuals more say in how technology treats people. We are setting up a rigorous, independent testing and certification program for websites, apps, and connected devices. The Alliance comprises working groups for Me-s (the consumer) and B-s (the business), as well as the Policy and Legal and Certification working groups. Together, we are setting the standard for Respectful Technology.

My Role

My role at the Me2B Alliance is twofold. First, I lead the Research and Validation practice, providing user experience and other research services to the various working groups and exploring questions about how consumers experience their relationship with digital technology.

Second, I am developing the product integrity testing framework for digital technologies, in particular mobile apps and websites. This framework, coupled with data integrity and security testing, makes up the requirements for Me2BA certification.

User Research Methods

Ethnographic Research

I engage consumers in one-on-one conversations about their relationships with the technologies they use in their day-to-day lives. Research questions center on their understanding of privacy policies, terms-of-use agreements, and other agreements they make implicitly by using a technology. For example, do users change how they interact with a website when they are familiar with its legal terms? And would a score make a difference?

Preference Testing

I performed a series of tests of the certification mark, which is to be used as a symbol of trust in connected digital technologies. These included interviews, focus groups, unmoderated five-second preference tests, and surveys.

Product Integrity Testing

I developed a UX Integrity framework for the Me2B Safe and Respectful Technology Framework (now published as the Me2B Safe Specification). This framework applies IA heuristics to ensure that notices of data collection, use, and sharing are Clear, Findable, Accessible, Credible, and Communicative, that is, understandable by a wide audience of human readers and machine-readable by assistive devices.

Tools

Interviews and Focus Groups: Zoom, UserInterviews.com, SurveyMonkey

Preference Tests and Five-Second Tests: UserInterviews.com

Collaboration: Microsoft Teams, Zoom, Microsoft 365, Trello, Monday

Artifacts

Safe Tech Audit: IA as a Framework for Respectful Design (April 23, 2022)
Conference Presentation: Information Architecture Conference 2022

Spotlight Report #5: Me2B Alliance Validation Testing Report: Consumer Perception of Legal Policies in Digital Technology (January 18, 2022)

Spotlight Report #3: Me2B Alliance Validation Research: Consumer Sensitivity to Location Tracking by Websites and Mobile Apps (November 5, 2021)

Shedding Light on Dark Patterns: A Case Study on Digital Harms (April 28, 2021)
Conference Presentation: Information Architecture Conference 2021

Webinar: Me2B Research: Consumer Views on Respectful Technology

Future Plans

We are planning to conduct three focus groups per month with consumers and digital product designers and managers. This research will continue to evolve our understanding of how consumers experience their relationships with, and the risks of, digital technologies.

Safe Tech Audit Sketchnotes – IAC22

Zsofi Lang's sketchnotes from my talk "Safe Tech Audit: IA as a Framework for Respectful Design" at the Information Architecture Conference 2022:


Designing Respectful Technology

Note: this article was originally published as "Designing Respectful Tech: What Is Your Relationship with Technology?" at Boxes and Arrows on February 24, 2022.

You’ve been there before. You thought you could trust someone with a secret. You thought it would be safe, but found out later that they blabbed to everyone. Or, maybe they didn’t share it, but the way they used it felt manipulative. You gave more than you got and it didn’t feel fair. But now that it’s out there, do you even have control anymore?

Ok. Now imagine that person was your supermarket. 

Or your doctor. 

Or your boss.

Do you have a personal relationship with technology?

According to research at the Me2B Alliance, people do feel they have a relationship with technology. It’s emotional. It’s embodied. And it’s very personal.

How personal is it? Think about what it would be like if you placed an order at a cafe and they already knew your name, your email, your gender, your physical location, what you read, who you are dating, and that, maybe, you’ve been thinking of breaking up.

Source: “If your shop assistant was an app (hidden camera),” Forbrugerrådet Tænk (Danish Consumer Council), December 2014 (YouTube).

We don't approve of gossipy behavior in our human relationships. So why do we accept it from technology? Sure, we get back some time and convenience, but in many ways the relationship can feel locked-in and unequal.

The Me2B Relationship Model

At the Me2B Alliance, we are studying digital relationships to answer questions like “Do people have a relationship with technology?” (They feel that they do). “What does that relationship feel like?” (It’s complicated). And “Do people understand the commitments that they are making when they explore, enter into and dissolve these relationships?” (They really don’t).

It may seem silly or awkward to think about our dealings with technology as a relationship, but there are parallels with messy human relationships. The Me2BA commitment arc with a digital technology resembles psychologist George Levinger's ABCDE relationship model,1 shown by the orange icons in the image below. As with human relationships, we move through states of discovery, commitment, and breakup with digital applications, too.

Source: Me2B Alliance, 2021

Our assumptions about our technology relationships are similar to the ones we have about our human ones. We assume when we first meet someone there is a clean slate, but this isn’t always true. There may be gossip about you ahead of your meeting. The other person may have looked you up on LinkedIn. With any technology, information about you may be known already, and sharing that data starts well before you sign up for an account.

The Invisible Parallel Dataverse

Today's news frequently covers stories of personal and societal harm caused by digital media manipulation, dark patterns, and personal data mapping. Last year, Facebook whistleblower Frances Haugen exposed how the platform promotes content that, according to its own research, causes depression and self-harm in teenage girls. The company knows this because it knows what teenage girls click, post, and share.

Technology enables data sharing at every point of the relationship arc, including after you stop using it. Worryingly, even our more trusted digital relationships may not be safe. The Me2B Alliance uncovered privacy violations in K-12 software, and described how abandoned website domains put children and families at risk when their schools forget to renew them. 

Most of the technologies that you (and your children) use have relationships with third-party data brokers and others with whom they share your data. Each privacy policy, cookie consent, and terms-of-use document on every website or mobile app you use defines a legal relationship, whether you choose to opt in or are locked in by some other process. That means you have a legal relationship with each of these entities from the moment you access the app or website, and in most cases, it's one that you initiated and agreed to.

All the little bits of our digital experiences are floating out there and will stay out there unless we have the agency to set how that data can be used or shared and when it should be deleted. The Me2B Alliance has developed Rules of Engagement for respectful technology relationships and a Digital Harms Dictionary outlining types of violations, such as:

  • Collecting information without the user's awareness or consent;
  • Contracts of adhesion, where users are forced to agree to terms of use (often implicitly) when they engage with the content;
  • Loss or misuse of personally identifiable information; and
  • Unclear or non-transparent information describing the technology's policies, or even what Me2B Deal the user is getting.
[Figure: Respectful relationships. Data minimization: no gossip, no eavesdropping, no stalking. Individual control and autonomy: no manipulation, no coercion. Respectful defaults: progressive consent.]
Source: Noreen Whysel, Me2B Alliance 2021. Image (right): Pixabay

Respectful technology relationships begin with minimizing the amount of data that is collected in the first place. Data minimization reduces the harmful effects of sensitive data getting into the wrong hands. 

Next, we should give people agency and control. Individual control over one's data is a key part of local and international privacy laws like the GDPR in Europe and similar laws in California, Colorado, and Virginia, which give consumers the right to consent to data collection, to know what data of theirs is collected, and to request to view, correct, or permanently delete that data.

Three Laws of Safe and Respectful Design

In I, Robot, his collection of short stories, Isaac Asimov introduced the famous "Three Laws of Robotics," an ethical framework for avoiding harmful consequences of machine activity. Today, IAs, programmers, and other digital creators make what are essentially robots that help users do work and share information. Much of this activity is out of sight and mind, which is in fact how we, the digital technology users, like it.

But what of the risks? It is important for us, as designers of these machines, to consider the consequences of the work we put into the world. I have proposed the following corollary to Asimov's robotics laws:

  • First Law: A Digital Creator may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A Digital Creator must obey the orders given by other designers, clients, product managers, etc. except where such orders would conflict with the First Law.
  • Third Law: A Digital Creator must protect its own existence as long as such protection does not conflict with the First or Second Law.2

Mike Monteiro, in his well-known 2014 talk at An Event Apart, "How Designers Are Destroying the World," dwells on the second and third laws. While we take orders from the stakeholders of our work (the clients, marketers, and shareholders we design for), we have an equal and greater responsibility to understand and mitigate design decisions that have negative effects.

A Specification for Safe and Respectful Technology

The Me2B Alliance is working on a specification for safe and respectfully designed digital technologies, technologies that Do No Harm. These product integrity tests are conducted by a UX expert and applied to each commitment stage a person enters, from first-open, location awareness, cookie consent, promotional and loyalty commitments, and account creation to the termination of the relationship.

Abby Covert’s IA Principles—particularly Findable, Accessible, Clear, Communicative and Controllable—are remarkably appropriate tests for ensuring that the people who use digital technologies have agency and control over the data they entrust to these products:

Findable: Are the legal documents that govern the technology relationship easy to find? What about support services for when I believe my data is incorrect, or being used inappropriately? Can I find a way to delete my account or delete my data?

Accessible: Are these resources easy to access by both human and machine readers and assistive devices? Are they hidden behind some “data paywall” such as a process that requires a change of commitment state, i.e. a data toll, to access?

Clear: Can the average user read and understand the information that explains what data is required for what purpose? Is this information visible or accessible when it is relevant?

Communicative: Does the technology inform the user when the commitment status changes? For example, does it communicate when it needs to access my location or other personal information like age, gender, medical conditions? Does it explain why it needs my data and how to revoke data access when it is no longer necessary?

Controllable: How much control do I have as a user? Can I freely enter into a Me2B Commitment or am I forced to give up some data just to find out what the Me2B Deal is in the first place? 
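As an illustration only, here is a minimal sketch of how these five questions might be organized as an audit rubric in code. The type names, sample questions, and pass/fail scoring are my own assumptions for this post, not the published Me2B specification.

```typescript
// A hypothetical audit rubric built from the five principles above.
// Everything here (names, stages, scoring) is illustrative, not the
// actual Me2B Safe Specification.

type Principle = "Findable" | "Accessible" | "Clear" | "Communicative" | "Controllable";

// Commitment stages mentioned earlier in the article.
type CommitmentStage =
  | "first-open"
  | "location awareness"
  | "cookie consent"
  | "promotional/loyalty"
  | "account creation"
  | "termination";

interface AuditCheck {
  principle: Principle;
  stage: CommitmentStage;
  question: string;
  pass: boolean;
}

const checks: AuditCheck[] = [
  { principle: "Findable", stage: "first-open",
    question: "Are the governing legal documents easy to find?", pass: true },
  { principle: "Accessible", stage: "cookie consent",
    question: "Are they readable by humans, machines, and assistive devices, without a data toll?", pass: false },
  { principle: "Clear", stage: "account creation",
    question: "Can the average user understand what data is required and why?", pass: true },
  { principle: "Communicative", stage: "location awareness",
    question: "Is the user told when the commitment status changes?", pass: false },
  { principle: "Controllable", stage: "termination",
    question: "Can the user see the deal before giving up any data, and leave cleanly?", pass: true },
];

// Report the principles that failed at any stage; a real audit would
// weight each answer and attach evidence.
function failingPrinciples(results: AuditCheck[]): Principle[] {
  return [...new Set(results.filter((c) => !c.pass).map((c) => c.principle))];
}

console.log(failingPrinciples(checks)); // ["Accessible", "Communicative"]
```

Even a toy rubric like this makes the audit repeatable: the same questions get asked of every product, at every commitment stage.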

Abby’s other IA principles flow from the above considerations. A Useful product is one that does what it claims to do and communicates the deal you get clearly and accessibly. A Credible product is one that treats the user with respect and communicates its value. With user Control over data sharing and a clear understanding of the service being offered, the true Value of the service is apparent.

Over time the user will come to expect notice of potential changes to commitment states and will have agency over making that choice. These “Helpful Patterns”—clear and discoverable notice of state changes and opt-in commitments—build trust and loyalty, leading to a Delightful, or at least a reassuring, experience for your users.

What I’ve learned from working in the standards world is that Information Architecture Principles provide a solid framework for understanding digital relationships as well as structuring meaning. Because we aren’t just designing information spaces. We’re designing healthy relationships.


1 Levinger, G. (1983). "Development and change." In H. H. Kelley et al. (Eds.), Close Relationships (pp. 315–359). New York: W. H. Freeman and Company. https://www.worldcat.org/title/close-relationships/oclc/470636389

2  Asimov, I. (1950). I, Robot. Gnome Press.

The Occasional Mentor: Kill Your Darlings

Photo by Steve Johnson on Unsplash

THE OCCASIONAL MENTOR is a semi-regular column based on questions I’ve answered on Quora, heard on Slack groups, and other career advice I’ve given over the prior month. Feel free to challenge me in the comments, if you have a different experience.

Kill Your Darlings

I am working on a project with a friend who is acting as the client for a capstone project in an agile development class. She complained to me that the students were unable to create a simple one-sheet deliverable featuring a proposed design. The problem: WordPress hadn't been set up yet. It didn't occur to them that they could mock it up in a drawing program or simply sketch it by hand.

When I do in-class studios, I will often make the design students work entirely on paper and whiteboard, no computers allowed, to ideate and create a paper prototype. It can be done in two hours end to end. Is the final deliverable App Store ready? Of course not. But it is enough to move quite a bit toward a testable idea.

Students today, and especially developers, don't understand the power of a piece of paper that you can throw away. When you start coding (or drafting in WordPress) too soon, you get married to the code, making it hard later on to incorporate new learnings from your user research. It's better practice to stay as lo-fi as possible for as long as possible. That's at least one day of a five-day sprint, sometimes two (testing the paper artifacts with users). Then "kill your darlings" before they become too dear.

Note: The phrase "kill your darlings" (or "murder your babies") is often attributed to William Faulkner and features in many descriptions of the Beat poets: William Burroughs, Allen Ginsberg, Jack Kerouac, et al. In fact, the concept "murder your babies" can be traced to Sir Arthur Quiller-Couch, a British writer and literary critic, in a lecture series at Cambridge published in 1916. (Quiller-Couch, Sir Arthur (2000) [1916]. "XII. On Style," On the Art of Writing: Lectures Delivered in the University of Cambridge, 1913–1914 (Online ed.). Bartleby.com.)

Downward Dot Voting

My friend Austin Govella wrote today about using a kind of whole-body dot voting to teach teams to "Vote With Your Feet." We are in a Liminal Thinking group on Facebook where he initially threw his ideas around. I was excited that he chose to include my comment about using negative dots to vote down ideas; I use them in my undergraduate UX class as a discussion point about what we won't do or talk about on a project.

My students get to use dot voting on the first day of our UX/UI class at CUNY CityTech, where we talk about what we are worried about for the upcoming semester. Addressing concerns and potential problems is a good exercise on most occasions, but in these days of online classes, a crushing economy, and a pandemic, talking about our worries is particularly important. It helps to alleviate anxiety and develop a growth mindset toward the months ahead.

In this first class, the students learn about a number of design practices using a shared, online whiteboard, including brainstorming, dot voting, cluster analysis, and Kanban as part of a pre-mortem exercise on what can go wrong with the class. (I learned this exercise while teaching with Jimmy Chandler at the New York Code and Design Academy and modified it for online classes).

To begin, students use virtual sticky notes to write down their concerns about the coming semester. Then they attach green and red mini circles, three each, to vote on which issues they want to discuss and which ones they don't. Finally, we use Kanban (To Do, Doing, Done) to keep the discussion orderly.

When I first used the technique, I allowed each student three dots to vote on ideas they wanted to discuss. The concerns expressed are usually ones common among students commuting to our downtown Brooklyn campus: getting to class on time (what if the subway breaks down? what if my work shift runs over?), having too much homework (it is a lot of homework, tbh), or dealing with a teammate who doesn't pull their share (this happens on the job, too, unfortunately). It was OK. But these concerns, being fairly common, are covered in the syllabus under time management, group behavior, and attendance, so the discussion becomes somewhat procedural.

There are, of course, new and now-common issues this semester: logging into school instead of commuting; managing family and job expectations, particularly for students whose families rely on their income; and dealing with the combined stress of school, the real and potential loss of family members (at least two of my students had a COVID death in the family, and many have been displaced or ill), and just living in the 2020 political and budgetary climate. These issues are very personal and went largely unspoken, but they manifested as concerns about deadlines and time management, and as doubts that they have the skills it takes to be successful.

Allowing the students to mark some issues as not a concern was a new idea, and I found it especially interesting to explore those items with the class. So along with items that had a lot of upvotes, I also selected items that had some upvotes but also a few downvotes (more than one downvote, so as not to put any one person on the spot).

"Not a Concern" is key phrasing. For the upvote dots, I told the students to "mark items you want to talk about." For the downvote dots, the instructions were to "mark items that are Not a Concern." The fact that someone wrote the issue on a sticky note in the first place means it is a concern for some students. I inferred from the "Not a Concern" votes that some people may have discovered ways to deal with the problem.
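For anyone who wants to adapt the tally, here is a minimal sketch of that selection logic in code. The board data and the top-N cutoff are illustrative assumptions; the "more than one downvote" threshold is the rule I actually used.

```typescript
// A sketch of the selection logic described above. The board data is
// hypothetical; only the "more than one downvote" rule comes from the text.

interface StickyNote {
  concern: string;
  upvotes: number;   // green dots: "I want to talk about this"
  downvotes: number; // red dots: "Not a Concern" for this voter
}

// Discuss the most upvoted items, plus any item that drew upvotes AND more
// than one downvote: a sign that someone may have a coping strategy to share.
function selectForDiscussion(notes: StickyNote[], topN = 2): StickyNote[] {
  const mostUpvoted = [...notes]
    .sort((a, b) => b.upvotes - a.upvotes)
    .slice(0, topN);
  const contested = notes.filter((n) => n.upvotes > 0 && n.downvotes > 1);
  // Set dedupes by reference, so an item in both groups appears once.
  return [...new Set([...mostUpvoted, ...contested])];
}

const board: StickyNote[] = [
  { concern: "Getting to class on time", upvotes: 5, downvotes: 0 },
  { concern: "Too much homework", upvotes: 3, downvotes: 2 },
  { concern: "Potential weakness in design skills", upvotes: 1, downvotes: 4 },
  { concern: "Group project logistics", upvotes: 0, downvotes: 1 },
];

// Surfaces the two most upvoted items plus the contested design-skills note.
console.log(selectForDiscussion(board).map((n) => n.concern));
```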

And for a teacher, it highlights differences in each class (there are always differences) that point to certain pedagogical approaches. For instance, in one section the most upvoted concern was "potential weakness in design skills." In the other section, a similar issue got a lot of downvotes. I wanted to know what was going on, so I made sure to expand on that concern in the second week's discussion and to begin a conversation about skills development, the importance of practice, and imposter syndrome.

This then becomes an opportunity for a discussion of growth. I tell them to ask themselves: How can I, as a student in a rigorous BFA program, develop the perspective to understand where the doubt is coming from? How can I build confidence? Through practice, through time management, and by simply being honest about the particularly stressful challenges this world is throwing at us and asking for help.

So cheers to Austin for giving me a fun topic to explore here. While you are at it, you can find his book, Collaborative Product Design at https://www.agux.co/cpd.