
Students confront the unethical side of tech in ‘Designing for Evil’ course

Whether it's monitoring or misleading users, mismanaging or selling their data, or cultivating unhealthy habits or thoughts, tech these days is not short on unethical behavior. But it isn't enough to just say "that's scary." Fortunately, a course at the University of Washington is equipping its students with the philosophical insights to better identify – and fix – tech's pernicious lack of ethics.

"Designing for Evil" has just concluded its first term at the UW Information School, where future creators of applications and services like the ones we depend on every day learn the tools of the trade. But thanks to Alexis Hiniker who teaches the class, they also learn the critical skill of investigating the moral and ethical implications of these applications and services.

What, for example, would be a good way of going about making a dating app that is inclusive and promotes healthy relationships? How can an AI imitating a human avoid needless deception? How can something as invasive as China's citizen scoring system be made as user-friendly as possible?

I talked to all the student teams at a poster session held on the UW campus, and also chatted with Hiniker, who designed the course and seemed pleased at how it turned out.

The premise is that the students take a crash course in ethical philosophy that acquaints them with influential ideas such as utilitarianism and deontology.

"This is designed to be as accessible as possible to the laity," Hiniker said. "They're not philosophy students – it's a design class, but I wanted to see what I could get out of."

The primary text is Harvard philosophy professor Michael Sandel's popular book Justice, which Hiniker felt combined the various philosophies into a readable, integrated format. After absorbing this, the students grouped up and picked an app or technology that they would evaluate using the principles described, and then propose ethical remedies for.

As it turned out, finding ethical problems in tech was the easy part – and the fixes for them ranged from trivial to impossible. Their insights were interesting, but I got the feeling from many of them that there was a sort of disappointment at the fact that so much of what tech offers, or how it offers it, is inescapably and fundamentally unethical.

I found that the teams' subjects fell into one of three categories.

Not fundamentally unethical (but could use an ethical tune-up)

WebMD is of course a very useful site, but it was plain to the students that it lacked inclusivity: its symptom checker is stacked against non-English speakers and those who might not know the names of symptoms. The team suggested a more visual symptom reporter, with a basic body map and non-written symptom and pain indicators.

Hello Barbie, the doll that chats back with kids, is certainly a minefield of potential legal and ethical violations, but there's no reason it can't be done right. With parental consent and careful engineering it can be made to comply with privacy laws, but the team said it still failed some tests of keeping the dialogue with kids healthy and parents informed. The interaction scripts, they said, should be public – which is obvious in retrospect – and the audio should be analyzed on the device rather than in the cloud. Lastly, a set of warning words or phrases indicating unhealthy behaviors could alert parents to things like self-harm while keeping the rest of the conversation private.
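
To make that last suggestion concrete, here is a minimal sketch of how such an on-device watch list might work – the phrases, names and logic below are my own invention for illustration, not anything shipped with the product:

```python
# Hypothetical sketch: flag transcript lines that match a parental
# watch list, surfacing only an alert (never the conversation itself).
WATCH_PHRASES = {"hurt myself", "hate myself", "want to disappear"}

def check_transcript(lines: list[str]) -> list[str]:
    """Return alert messages for flagged lines; the transcript is
    never stored or forwarded, preserving the child's privacy."""
    alerts = []
    for line in lines:
        lowered = line.lower()
        for phrase in WATCH_PHRASES:
            if phrase in lowered:
                alerts.append(f"Flagged phrase detected: '{phrase}'")
                break
    return alerts

if __name__ == "__main__":
    session = ["I love my new shoes", "sometimes I want to hurt myself"]
    for alert in check_transcript(session):
        print(alert)  # the parent sees the alert, not the dialogue
```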

WeChat Discover lets users find others near them and see recent photos they've taken – it's opt-in, which is good, but it can be filtered by sex, promoting a hookup culture that the team said is frowned upon in China. It also buries many user controls behind multiple layers of menus, which can lead people to share their location when they don't intend to. The students proposed some basic UI fixes, as well as some ideas on how to combat the possibility of unwanted advances from strangers.

Netflix isn't evil, but its tendency to promote binge-watching has robbed its users of many an hour. This team felt that some basic user-set limits, like two episodes a day, or delaying the next episode by a certain amount of time, could interrupt the habit and encourage people to take back control of their time.
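
As a toy illustration of what such a user-set limit might look like client-side – purely a sketch with invented names, not anything Netflix actually exposes:

```python
# Hypothetical sketch of a user-set daily episode cap plus an
# autoplay delay, along the lines the students proposed.
import time
from datetime import date

class BingeGuard:
    def __init__(self, daily_limit: int = 2, autoplay_delay_s: int = 300):
        self.daily_limit = daily_limit         # e.g. two episodes a day
        self.autoplay_delay_s = autoplay_delay_s
        self._day = date.today()
        self._watched = 0

    def may_play_next(self) -> bool:
        if date.today() != self._day:          # reset the count each day
            self._day, self._watched = date.today(), 0
        return self._watched < self.daily_limit

    def on_episode_end(self) -> None:
        self._watched += 1
        if self.may_play_next():
            time.sleep(self.autoplay_delay_s)  # friction before autoplay
```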

Fundamentally unethical (fixes still worth making)

FakeApp is a way to face-swap in video, producing convincing fakes in which a politician or friend appears to say something they didn't. It's fundamentally deceptive, of course, in a broad sense, but really only if the clips are passed off as genuine. Visible and invisible watermarks, as well as controlled cropping of source videos, were this team's suggestion, though ultimately the technology won't yield to these voluntary mitigations. So really, an informed populace is the only answer. Good luck with that!
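
For what it's worth, the invisible-watermark idea can be sketched with simple least-significant-bit embedding on a frame – a classic technique that is also trivially stripped, which illustrates why such mitigations remain voluntary at best (all names here are illustrative):

```python
# Hypothetical sketch: hide a short tag in a frame's least
# significant bits. Easy to embed, and just as easy to remove,
# which is why voluntary watermarks can't stop determined fakers.
import numpy as np

def embed_tag(frame: np.ndarray, tag: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    flat = frame.reshape(-1).copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(frame.shape)

def read_tag(frame: np.ndarray, n_bytes: int) -> bytes:
    bits = frame.reshape(-1)[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

frame = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
marked = embed_tag(frame, b"FAKE")
assert read_tag(marked, 4) == b"FAKE"
```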

The system of "social credit" of China is not really, according to students, absolutely unethical – this judgment involves a number of cultural prejudices. But I am comfortable putting it here because of the massive ethical questions that he has ducked and rejected on the way to deployment. Their very practical suggestions, however, were to make the system more accountable and transparent. Challenge behavioral reports, see what types of things contributed to your own score, see how it has changed over time, and so on.

Tinder's unethical nature, according to the team, was based on the fact that it is ostensibly about forming human connections but is very plainly designed as a meat market. Forcing people to think of themselves as physical objects first and foremost in the pursuit of romance is not healthy, they argued, and causes people to devalue themselves. As a countermeasure, they suggested having answers to questions or prompts be the first thing you see about a person, and that you would have to swipe on that basis before seeing any pictures. I suggested adding some dealbreaker questions as well. It's not a bad idea, though open to gaming (like the rest of online dating).

Fundamentally unethical (fixes are essentially impossible)

The League, on the other hand, was a dating app that proved intractable to ethical guidelines. Not only was it a meat market, but it was a meat market where people paid to be among the self-selected "elite" and could filter by ethnicity and other troubling categories. Their suggestions of removing the fee and these filters, among other things, essentially destroyed the product. Unfortunately, The League is an unethical product for unethical people. No amount of tweaking will change that.

Duplex was taken on by a smart team that nevertheless clearly only started its project after Google I/O. Unfortunately, they found that the fundamental deception intrinsic to an AI posing as a human is ethically impermissible. It could, of course, identify itself – but that would spoil the entire value proposition. But they also raised a question I didn't think to raise myself in my own coverage: why isn't this AI exhausting all other options before calling a human? It could visit the site, send a text, use other apps and so on. AIs in general should default to interacting with websites and apps first, then other AIs, and only then with people – at which point it should say it's an AI.
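
That escalation order can be expressed as a simple fallback chain; the sketch below is purely illustrative, with stub handlers standing in for whatever integrations such an assistant might actually have:

```python
# Hypothetical sketch of the proposed escalation order: try websites
# and apps, then other AIs, and only then a human being, disclosing
# that the caller is an AI at that final step.
from typing import Callable, Optional

def try_website(task: str) -> Optional[str]: return None      # stub
def try_text_message(task: str) -> Optional[str]: return None  # stub
def try_partner_ai(task: str) -> Optional[str]: return None    # stub

def call_human(task: str) -> str:
    # Disclosure happens up front, per the team's recommendation.
    return f"[This is an automated assistant] Calling about: {task}"

CHANNELS: list[Callable[[str], Optional[str]]] = [
    try_website, try_text_message, try_partner_ai,
]

def handle(task: str) -> str:
    for channel in CHANNELS:
        result = channel(task)
        if result is not None:   # a non-human channel succeeded
            return result
    return call_human(task)      # last resort, with disclosure

print(handle("book a table for two at 7pm"))
```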

For me, the most valuable part of all these inquiries was learning what will hopefully become a habit: looking at the fundamental ethical soundness of a business or a technology and being able to articulate it.

That could be the difference, in a meeting, between saying something vague and easily brushed off, like "I don't think that's a good idea," and describing a specific harm, why that harm matters, and perhaps how it can be avoided.

As for Hiniker, she has some ideas for improving the course should it be approved for a repeat next year. A broader set of texts, for one thing: "More diverse writers, more diverse voices," she said. And ideally it could even be expanded into a multi-quarter course so that the students get more than a light dusting of ethics.

With any luck, the kids in this course (and any future ones) will indeed be able to help make those choices, leading to fewer Leagues and Duplexes, and fewer COPPA-skirting toys and dating apps.
