Caroline Sinders on Ethical Product Design for Machine Learning

Photo by Alannah Farrell

For the past two years, I’ve worked as a machine learning design researcher. Machine learning is programming that learns from user inputs and adapts and improves over time. It’s my humble belief that machine learning and artificial intelligence are going to radically change product design: chatbots, natural language processing used to study users’ behaviors and conversational patterns, analytics APIs designed to study and predict behavior, and computer vision software created to predict crimes and recognize human emotions. In fact, everything I just mentioned already exists. But implementing these algorithms is one thing. How do we design ethically with machine learning, and how do we create products that use all of its positive attributes without surveilling and harming our users? Can ethical product design exist for machine learning?

I believe firmly that it can. However, machine learning should not be treated as new, out-of-the-box software that has been QA’d, tested, and is ready for deployment with few further changes or rollouts. It needs to be treated as highly experimental software.

What do I mean exactly? To design for iOS, a designer does not need to know Swift or Xcode (though that helps), but they do need to understand the constraints of mobile and know that iOS navigation is stack based: each screen is stacked on top of the last, so you can move forward or back through the stack. The app’s code uses (hopefully) tested and finished APIs; it may store and sift data and respond to user input, but the way it responds to that data does not change. When working with machine learning, by contrast, it’s hard to predict how the algorithm will respond to the data. In essence, the code itself is “unreliable” or, rather, it’s shifting and moving. It’s always in motion: not static, but dynamic. The more a machine learning product is interacted with, the more it will change. It’s incredibly organic like that. And there’s more to machine learning than just the algorithm you’re using; there’s also the data being fed to that algorithm.
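The contrast above can be sketched in a few lines of Python. The `ToyLearner` here is a hypothetical stand-in for any model that keeps training on user input; it is not any real API, just an illustration of how the same question can get a different answer once the system has seen more data:

```python
# A toy illustration of why ML-backed code is "always in motion": a
# hand-written rule gives a fixed answer, while a learning system's
# decision boundary shifts with every batch of user data it absorbs.

class ToyLearner:
    def __init__(self):
        self.seen = []

    def learn(self, values):
        """Absorb a batch of user inputs; the decision boundary shifts."""
        self.seen.extend(values)

    def predict(self, x):
        """Classify x against the mean of everything seen so far."""
        threshold = sum(self.seen) / len(self.seen)
        return x > threshold

def static_rule(x):
    """A traditional hand-written rule: its answer never changes."""
    return x > 0.5

model = ToyLearner()
model.learn([0.2, 0.3, 0.4])   # early users skew low
early = model.predict(0.45)    # True: 0.45 is above the current mean (0.3)
model.learn([0.8, 0.9, 1.0])   # later users skew high
late = model.predict(0.45)     # False: the mean has moved to 0.6
print(static_rule(0.45), early, late)
```

The static rule answers `False` forever, while the learner flips its answer for the very same input after more data arrives. That flip is the unpredictability designers have to plan for.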

What’s needed right now is for designers to have a technical understanding of machine learning. What’s even more necessary is a specific design language built around machine learning: a context for understanding what kind of code is being built and what kind of data is being fed to the system. This is a new kind of language fluency being demanded not just of designers, but of technologists and programmers as well. Unlike IoT or blockchain, machine learning carries massive ethical considerations because of the uncertainty of how algorithms will respond to user input as well as data input. In essence: what are the effects of your code?

Who made the data set you’re feeding to your machine learning product? How long did you train on that data set? Is the data set diverse enough? A few years ago, Google’s auto-tagging image algorithm tagged black people as gorillas. It wasn’t designed to be racist, but it inherently was. Did anyone QA the algorithm with photos of people of different races? Did the image data set have enough black people in it? Google “professional hair” and “unprofessional hair”: mostly caucasian hair is shown as professional, while black hair is almost entirely depicted under “unprofessional hair.” Again, who made the data sets of these images? Who trained and retrained on this data, and who tested it before it went to market?
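One concrete way to start answering these questions is to audit a data set’s composition before training. This is a minimal sketch, not a standard procedure: the group labels and the 10% threshold are hypothetical placeholders you would replace with categories and targets appropriate to your product.

```python
# A minimal data-set audit: count how each group is represented in the
# records and flag any group whose share falls below a chosen floor.
from collections import Counter

def audit_representation(records, group_key="group", min_share=0.10):
    """Return {group: share} for every group below min_share of the data."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

# Toy data set: group "C" is badly under-represented.
records = (
    [{"group": "A"} for _ in range(90)]
    + [{"group": "B"} for _ in range(15)]
    + [{"group": "C"} for _ in range(2)]
)
flagged = audit_representation(records)
print(flagged)  # only the under-represented groups appear
```

A check like this won’t catch every bias, but it forces the team to write down who is in the data before the algorithm ever ships.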

The design pattern language I’m proposing and working on building puts these questions into practice. It creates a pipeline for questioning data sets, and for letting users know that the product they are interacting with is ‘deployed’ but still ‘beta’: it’s still being trained. Google recently released an experimental, beta machine learning API that tests for toxicity in language, but the demo has the look and feel of a finished product. There should be more language, right above the text-entry bar, saying how experimental it is. Perhaps there should even be a visualization of how the API is rating language.

What does machine learning product design look like? Perhaps it won’t be minimal, but transparent. Perhaps it shows more information, even if that makes for a less mobile-friendly site. Machine learning is at the frontier of design, but it’s still in its infancy. When it comes to creating ethical machine learning product design, it’s not about minimalism or standard usability but about algorithmic transparency: language, visualizations, and warnings about what the API is doing, and how.


❔ Whois

Caroline Sinders is an artist and researcher, based in San Francisco but originally from New Orleans. She is currently an Open Lab Fellow with BuzzFeed and Eyebeam, focusing on machine learning and design. Prior to her fellowship, she worked as a design researcher for IBM Watson.

Twitter

❤️ Favorite Emoji

😈


