Chapter 5 | Data Ethics Guidebook

Outcome Monitoring and Discovering Harm

It's vital to monitor what happens when we use data, to make sure it is producing the right outcomes.

Take the example of algorithms used to create risk assessment scores that rate a defendant’s risk of committing future crime. Now widespread in the US justice system, these risk assessments were the subject of an in-depth investigation by ProPublica, an independent newsroom producing investigative journalism in the public interest.¹⁸ In 2014, the US Attorney General raised concerns that these scores could be introducing bias into the courts (where they are used to inform decisions on bail, sentencing, and probation). This algorithmic process has been shown to be unreliable in forecasting certain kinds of crime. In fact, when ProPublica examined the risk scores assigned to over 7,000 people arrested in a single Florida county in 2013 and 2014, it found that the algorithm was little more reliable than tossing a coin in its ability to accurately identify re-offenders.

With analytics and machine learning, algorithms can be trained to notice where there has been customer upset, and bring it to the attention of a real human—in other words, detecting that harm may have been done and bringing it to the attention of developers and other stakeholders. On social media, sentiment analysis (a set of tools and practices that deconstruct written language and user behaviors to detect mood) could be used to identify situations where a piece of data shared about a user is causing emotional (and potentially physical) harm. Take the example of one user of a social network uploading a picture of another user and “tagging” them in that picture. If the tagged user starts reacting negatively in comment threads or receiving negative messages from others, machine learning could identify these situations and escalate them to moderators, presenting an opportunity for that user to “untag” themselves or request removal of the photograph in question. Such an approach could go further by alerting developers and other business teams to consider such scenarios in their user personas and user stories for subsequent app updates and for consent and permissions management. This secondary feedback is key to making sure lessons learned are acted upon and that the appropriate corrective action is taken.

“With analytics and machine learning, algorithms can be trained to notice where there has been customer upset, and bring it to the attention of a real human.”
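As a rough illustration of how such an escalation might work, the sketch below pairs a stand-in sentiment scorer with a simple threshold check. The function names, keyword list, and threshold are hypothetical; a production system would use a trained sentiment model and open a real moderation ticket rather than printing a message.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: escalate a tagged photo to human moderators when the
# tagged user's reactions turn strongly negative. A real system would use a
# trained sentiment model; a crude keyword scorer stands in for one here.

NEGATIVE_TERMS = {"delete", "remove this", "upset", "embarrassing", "take it down"}

def sentiment_score(comment: str) -> float:
    """Crude stand-in for a sentiment model: -1.0 (very negative) to 0.0 (neutral)."""
    text = comment.lower()
    hits = sum(1 for term in NEGATIVE_TERMS if term in text)
    return -min(hits, 3) / 3.0

@dataclass
class TaggedPhoto:
    photo_id: str
    tagged_user: str
    comments_by_tagged_user: list = field(default_factory=list)

def should_escalate(photo: TaggedPhoto, threshold: float = -0.5) -> bool:
    """Flag the photo for a human moderator if the tagged user's reactions
    average at or below the negativity threshold."""
    if not photo.comments_by_tagged_user:
        return False
    avg = sum(sentiment_score(c) for c in photo.comments_by_tagged_user) / len(
        photo.comments_by_tagged_user
    )
    return avg <= threshold

photo = TaggedPhoto(
    photo_id="p-123",
    tagged_user="user-42",
    comments_by_tagged_user=["Please remove this, it's so embarrassing", "I'm really upset"],
)
if should_escalate(photo):
    # In production this would open a moderation ticket and offer the tagged
    # user an "untag" or takedown flow; here we simply print the escalation.
    print(f"Escalate photo {photo.photo_id}: tagged user {photo.tagged_user} may be harmed")
```

The same escalation record could also feed the secondary loop described above, so that developers and business teams see recurring patterns, not just individual incidents.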

Special considerations for the Internet of Things and hardware devices

With the advent of mainstream Internet of Things (IoT) devices, a sensor on a device may give a user feedback in the form of a raw number or statement that reflects something about the state of their environment: body temperature from a connected thermometer, 'no people detected' from a security camera, or 'no cars detected' from a driver assistance system. Such feedback may seem absolute and truthful—and many users take this information at face value. This matters because when users overlook the device or system that provides the feedback, the data handling and algorithms behind it are being overlooked too. The failure to consider these underlying interactions can result in unintended harm. While it's tempting to say that such awareness should be the user's responsibility—and many dense and unreadable end user license agreements do say that—it's not realistic to expect all users to understand how sensors and machine learning work.

They might think that their Tesla vehicle's 'Autopilot' system functions based on radar and thus could detect a large mass in front of them and brake in time. In reality, Autopilot relies on cameras and machine-learning vision, so a large object only triggers braking if the vision system recognizes what the side of a truck looks like and that it is to be avoided. This exact scenario contributed to the death of a Tesla driver in Florida, USA, who was over-relying on the Autopilot system, the first known death connected to autonomous driving systems. The National Highway Traffic Safety Administration, the US vehicle-safety regulator, ruled at the time that a lack of safeguards and of end-user education both contributed to the death. Additional collisions under similar conditions have occurred even after that ruling.
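To make the point concrete, here is a minimal, hypothetical sketch of the difference between a bare reading and feedback that states its own limits: the detector only reports object classes it was trained on, and each report carries a confidence score. The class list, threshold, and wording are illustrative assumptions, not any vendor's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical illustration of why "no cars detected" is not an absolute truth:
# a vision-based detector only reports object classes it was trained on, and
# each report carries a confidence score. Surfacing that uncertainty is one way
# to keep users from over-trusting the sensor.

@dataclass
class Detection:
    label: str         # class the model recognized, e.g. "car", "pedestrian"
    confidence: float  # model confidence between 0.0 and 1.0

KNOWN_CLASSES = {"car", "pedestrian", "cyclist"}  # classes the model was trained on

def summarize(detections: list[Detection], min_confidence: float = 0.6) -> str:
    """Turn raw detections into user feedback that states its own limits."""
    confident = [d for d in detections if d.confidence >= min_confidence]
    if not confident:
        return ("No known obstacles detected above the confidence threshold "
                f"(model only recognizes: {', '.join(sorted(KNOWN_CLASSES))})")
    labels = ", ".join(f"{d.label} ({d.confidence:.0%})" for d in confident)
    return f"Detected: {labels}"

# A large, unfamiliar object may simply never appear in the output at all.
print(summarize([Detection("car", 0.35)]))
print(summarize([Detection("pedestrian", 0.92), Detection("car", 0.81)]))
```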

Monitoring data transformations through user interviews

Interviewing users who have experienced harm can uncover misunderstandings in the way users perceive or use applications. This is not about blaming users; rather, it can identify areas where better communication may be required. Noting where users say that a use or disclosure of data was inappropriate is a form of qualitative forensics that can be linked to quantitative approaches like behavioral analytics. When an app or service does something with data that feels uncomfortable to users, that discomfort indicates that meaningful consent may not be in place. But information about this discomfort rarely reaches developers or business stakeholders unless they cultivate—and systematize—curiosity about and empathy for users.

To think critically and spot potential harms to users, employees must have a working knowledge of how data moves and is transformed, how cyber-security breaches threaten data (and users), and what uses of their data end users expected and consented to. This goes for IT stakeholders and for employees in general, given the increasingly digital nature of work across entire companies. Regularly updating the organization's shared “world view” with both direct and analytics-sourced input from users is an important first step. Once taken, it can be followed by creating feedback loops into the software development process from both human and machine sources.

This will enable empathy for users to be systematized into actionable updates of practices and programming.

Forensic analysis

Forensic analysis is becoming more commonplace when data holders experience breaches and cyberattacks; similar methods can be used to track data through various servers and applications to determine where personally identifying data might be vulnerable to disclosure, or has been processed in a way contrary to the intent of the user or designers. However, most organizations are not yet prepared to track data well enough to discover, much less mitigate, harms to users.
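A minimal sketch of the kind of lineage logging that makes such forensics possible appears below: each hop a record takes is logged with the system name and processing purpose, and an audit compares those purposes against what the user consented to. The class names, systems, and purposes are hypothetical; real deployments would rely on data-catalog and centralized logging infrastructure rather than an in-memory list.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of lineage logging for post-incident forensics: every hop
# a record takes is logged with the system and purpose, then audited against
# the purposes the user actually consented to.

@dataclass
class LineageEvent:
    record_id: str
    system: str
    purpose: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class LineageLog:
    def __init__(self) -> None:
        self.events: list[LineageEvent] = []

    def record_hop(self, record_id: str, system: str, purpose: str) -> None:
        self.events.append(LineageEvent(record_id, system, purpose))

    def audit(self, record_id: str, consented_purposes: set[str]) -> list[LineageEvent]:
        """Return every processing step that falls outside the user's consent."""
        return [
            e for e in self.events
            if e.record_id == record_id and e.purpose not in consented_purposes
        ]

log = LineageLog()
log.record_hop("user-42", "signup-service", "account_creation")
log.record_hop("user-42", "analytics-warehouse", "product_analytics")
log.record_hop("user-42", "ad-partner-export", "targeted_advertising")

violations = log.audit("user-42", consented_purposes={"account_creation", "product_analytics"})
for v in violations:
    print(f"{v.system} processed {v.record_id} for '{v.purpose}' without consent")
```

The same log that answers "which systems touched this record?" after a breach can also flag out-of-consent processing before any breach occurs.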

"The reality is few organizations are currently able to show you the full impact of a breach— few are able to identify all of the systems that were breached, much less what specific data could have been compromised. Understanding the scope of the damage/harm is predicated on having both the right mindset and logging and monitoring in place."

— Lisa O'Connor, Managing Director of Global Security R&D at Accenture

Continual discovery of potential harms

Google is faced with a conundrum: if its machine-learning systems discover that a user may have a medical condition, based on what that user is searching for, is it ethical to tell the user? Or unethical? A recent article by Fast.co Design explored this concept:¹⁹

“If Google or another technology company has the ability to spot that I have cancer before I do, should it ethically have to tell me? As complicated as this question sounds, it turns out that most experts I asked—ranging from an ethicist to a doctor to a UX specialist—agreed on the solution. Google, along with Facebook, Apple, and their peers, should offer consumers the chance to opt-in to medical alerts.”

Such conundrums are not limited to search results. However, the uniquely personal (and potentially emotionally and physically harmful) impact of search-based medical analytics is still a nascent conversation that neither healthcare providers nor technology companies are fully prepared to enter into—yet.

Leaders can learn from Google’s example by creating ways for end-users, “observers” (in this case, medical professionals and other researchers), developers, and executives to discover potential harms—even after product launches.