
Exploring What Makes an AI Ethics Toolkit Tick | by Malak Sadek | Sep, 2023

AI Ethics Toolkits are everywhere, but do we really understand them?

Malak Sadek
Towards Data Science
Photo by Todd Quackenbush on Unsplash — It’s time to dismantle an AI Ethics Toolkit

As AI systems’ use in applications with critical implications continues to multiply, experts have been calling for more participatory and value-conscious practices when designing these systems. Increased stakeholder participation can bring a number of benefits to AI systems design, including making systems more inclusive, combating existing biases, and providing accountability. In response, the field of AI ethics has produced a significant number of toolkits in recent years. These have come from different sources such as universities, companies, and regulatory institutes, have employed several different techniques and frameworks, and have had different intended audiences ranging from data scientists and AI engineers to designers and the general public (Wong et al., 2023).

Many AI ethics toolkits focus solely on technical aspects relating to AI creation and primarily address technical practitioners (e.g. FAIR Self-Assessment Tool). However, there are a number of toolkits that advocate for and focus on stakeholder participation, and especially stakeholders outside the AI creation team, such as end-users and domain experts (e.g. AI Meets Design Toolkit). Toolkits that do focus on stakeholder inclusion tend to do so by providing resources such as canvases or decks of cards that enable stakeholders, and especially those with non-technical backgrounds, to participate in design activities. This participation can result in a wide range of outcomes from brainstorming consequences of different decisions to building empathy with AI users.

Despite this large number of toolkits, little work has tried to understand what makes them work or what their underlying strategies are. This makes it unclear whether, despite their widespread use, they actually have a positive effect on toolkit users and the outcomes those users produce. Because of this, I wanted to look closely at how a given toolkit works, which turned out to be a very instructive experience.

Part of the ‘AI Meets Design’ Toolkit created by Nadia Piet, AIxDESIGN. Taken from: https://aixdesign.co/shop.

I ran a design workshop with nine participants who work with AI in different capacities. The workshop consisted of an ideation activity for brainstorming use-cases and design features for a fictional conversational AI that helps people self-manage diabetes. It used an AI ethics toolkit made by an individual designer rather than a corporation or university. I chose this on purpose to explore what fundamentally makes a small-scale toolkit, one not backed by massive funding, tick.

The workshop was run over Miro and lasted two hours. The toolkit itself is a deck of cards: each card carries a value (e.g. privacy, autonomy, safety) and ‘How Might We?’ questions that act as prompts for ideas that could lead to that value being respected in the technology being produced. I laid these cards out on the Miro board, left sticky notes for people to brainstorm with, and assigned each person two values to focus on individually.
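The toolkit is a physical/Miro card deck, not software, but its structure is easy to sketch in code. The sketch below is purely illustrative: the card names, prompts, and the `assign_values` helper are my own hypothetical examples of the workshop mechanics described above (each card pairs a value with ‘How Might We?’ prompts, and each participant receives two values to ideate on).

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the real toolkit is a card deck used on a Miro
# board. These card names and prompts are hypothetical examples.
@dataclass
class ValueCard:
    value: str                                        # e.g. "privacy", "autonomy", "safety"
    prompts: list[str] = field(default_factory=list)  # "How Might We?" questions

deck = [
    ValueCard("privacy", ["How might we minimise the data the agent stores?"]),
    ValueCard("safety", ["How might we handle messages that signal a medical emergency?"]),
]

def assign_values(deck: list[ValueCard], participants: list[str], per_person: int = 2) -> dict:
    """Give each participant a fixed number of values to brainstorm around."""
    values = [card.value for card in deck]
    return {p: values[i * per_person:(i + 1) * per_person]
            for i, p in enumerate(participants)}
```

For example, with the two-card deck above and a single participant, `assign_values(deck, ["Alice"])` assigns both values to that participant, mirroring the two-values-per-person setup used in the workshop.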

After an hour, we all came together and I flipped the cards so now people could not see the values anymore, only the ideas on the sticky notes that people had put down. I asked each person to present the ideas they wrote down, and everyone else had to guess the value that they were aiming to respect.

Screenshot from the workshop Miro board showing the card for ‘honesty’ and the ideas that a participant brainstormed around it.

Using Values as Springboards to Build Empathy and Broader Considerations

The toolkit is designed to use its list of values as springboards for ideating and brainstorming. Three participants mentioned that this was a very useful approach:

“It was interesting to see how different values are applicable to developing a conversational agent.”

“Thinking about a design from the perspective of one value at a time.”

“Looking at the Value cards and producing ideas around those.”

There was little focus on technical feasibility; instead, the emphasis fell on the values and on the importance of different ideas from the user’s perspective. Participants seemed to prefer this, as it offered a welcome change from focusing on technical aspects while overlooking values like safety and equity. One participant expressed that these values might not come up when one is thinking about a system, what can be built, and the technology behind it, or about what’s easiest or quickest to do. They said that although people should think about these values when designing, that is not always the case, as other priorities overshadow them. In short, foregrounding values helped participants brainstorm and consider commonly overlooked non-technical aspects.

The exercise also made participants want to know more about diabetics’ lives and experiences, both to support them in the most customised and specific way possible and to come up with solutions that were actually relevant rather than based on assumptions. The values that triggered this desire were ‘tranquility’ (leading to wanting information about the situations that cause users stress and anxiety) and ‘safety’ (leading to wanting information about what life-threatening situations exist for them). In short, focusing on values increased participants’ (i) empathy with target users and (ii) curiosity and desire to learn how to best support them.

Photo by Miltiadis Fragkidis on Unsplash — Working with Stakeholders’ values can foster a deeper empathy and understanding

Using Gamification to Increase Engagement and Improve Outcomes

Participants immensely enjoyed the gamification aspect, where they had to guess the values:

“I enjoyed guessing which value the ideas were related to — it kept me much more engaged with the ideas being shared rather than just reading through/discussing them.”

“I think the guessing value bit felt most useful for really understanding the different values and how they are interconnected.”

When guessing the values, participants felt that several ideas could correspond to different values, and that different ideas under different values referred to the same concept or suggestion. The discussions also highlighted that several values were too similar or overlapped heavily, making them difficult to differentiate: some values can lead to or facilitate others, or embody similar phenomena and emotions. During guessing, ‘equity’ was mistaken for ‘inclusivity’; ‘community’, ‘freedom’ and ‘autonomy’ could not be differentiated from each other; ‘mastery’ was confused with ‘courage’ and ‘curiosity’; and ‘inclusivity’ was mistaken for ‘accessibility’ and ‘respect’. In short, gamification allowed participants to really grasp the interconnections between ideas and values.

Photo by Erik Mclean on Unsplash — Gamification has many benefits. In the context of collaborative AI design, it can help participants grasp interconnections and stay more engaged.

Future Improvements and Ideas

Three participants noted that they would have liked more contextual information and prompts to better empathise with the target stakeholders of the conversational AI. With specific values such as ‘safety’ and ‘tranquility’, participants were empathising with users and wanted to know more about their experiences so they could support them with more customised solutions that specifically address their needs. One participant felt that the initial prompt lacked feedback from the people being designed for (diabetics) to really brainstorm ideas for them. They wanted more specific information on users’ scenarios, descriptions of moments from their lives, or real-life contextual information that could be used to further customise the ideas generated.

One suggestion was to first run a workshop like this one, focusing blindly on the abstract, high-level view without much prior information, and then gather user information afterwards for context: if you gather the user information first, you might miss out on ideas or values by being too focused on what you gathered. Afterwards, you can bring it all together and brainstorm scenarios and technical aspects again with the benefit of both sources of knowledge.

Two participants noted that without the list of values and their definitions, they might have struggled to think of the exact words used to describe the values; they might have come up with something close, but not the exact words. Understanding what the values meant would also have been difficult without the descriptions provided underneath them.

This experiment into what makes an AI ethics toolkit tick led me to a deeper understanding of the power of using gamification and value-based exercises. Below I’ve distilled my insights into five points that could help you run your own design workshops using AI ethics toolkits with stakeholders in order to brainstorm or ideate around AI design:

1. One of the main points of collaboratively thinking about AI ethics is to make AI more human-centered. Because of this, it’s really important to include plenty of contextual (and preferably first-hand) information about your target users and who you are designing for. Remember that many technical practitioners may not be used to empathy-building exercises and ideation, so help them get there by clarifying exactly who they’re trying to help.

2. Consider whether your workshop needs multiple activities to work down from a high-level view of what might be relevant or important to the details. Mixing both can confuse participants, especially if you’re asking them to rank or rate ideas that aren’t comparable at the same level.

3. If you’re working with values, make sure you clearly define what each value means. Different values can mean different things depending on who you ask, and there can be many conflicts and overlaps between them if they’re not clearly defined.

4. Bringing values into your AI design activities can help participants brainstorm and consider commonly overlooked non-technical aspects, help them build empathy with target users, and increase their curiosity and desire to better support those users.

5. Incorporating gamification techniques can improve participants’ engagement and enjoyment during AI ethics workshops, and can also help them grasp connections between ideas more deeply.

My PhD project looks at leveraging tools and techniques from design fields to make the design of AI systems accessible and inclusive. I’m working towards creating a participatory process, and a toolkit to support it, to systematically involve people throughout the AI life-cycle — with a focus on value-sensitivity.

You can check out the official page for my project on the Imperial College London website. You can also check out this other article I wrote explaining the details of my PhD project.

I’ve set up this Medium account to publish interesting findings as I work on my PhD project to hopefully spread news and information about AI systems in a way that makes it understandable to anyone and everyone.

Richmond Y. Wong, Michael A. Madaio, and Nick Merrill. 2023. Seeing Like a Toolkit: How Toolkits Envision the Work of AI Ethics. Proc. ACM Hum.-Comput. Interact. 7, CSCW1, Article 145 (April 2023), 27 pages.
