Machine learning now helps determine who gets a loan, who gets a job, and even who goes to jail. But when it comes to potentially life-changing decisions, can computers make fair judgments? In a study published in the journal Patterns, researchers from Germany show that with human supervision, people consider a computer’s decision to be as fair as one made primarily by humans.
“A lot of the discussion on fairness in machine learning has focused on technical solutions, such as how to fix unfair algorithms and make the systems fair,” says Ruben Bach, a computational social scientist and co-author from the University of Mannheim in Germany. “But our question is, what do people think is fair? It’s not just about developing algorithms. They need to be accepted by society and meet normative beliefs in the real world.”
Automated decision-making, where a conclusion is made solely by a computer, excels at analyzing large datasets to detect patterns. Computers are often considered objective and neutral compared with humans, whose biases can cloud judgments. Yet, bias can creep into computer systems as they learn from data that reflects discriminatory patterns in our world. Understanding fairness in computer and human decisions is crucial to building a more equitable society.
To understand what people consider fair in automated decision-making, the researchers surveyed 3,930 individuals in Germany. Participants were given hypothetical scenarios related to banking, hiring, criminal justice, and unemployment. Within those scenarios, the researchers further compared different situations, including whether the decision leads to a positive or negative outcome, where the data for the evaluation comes from, and who makes the final decision: a human, a computer, or both.
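The survey follows a factorial vignette design: each participant sees scenarios built by crossing the factors described above. As a rough sketch of how such conditions multiply, the combinations can be enumerated; the factor levels below are paraphrased from the article, not taken from the study's actual materials.

```python
from itertools import product

# Illustrative factor levels, paraphrased from the article's description
# of the vignettes (not the study's exact wording).
contexts = ["banking", "hiring", "criminal justice", "unemployment"]
outcomes = ["positive", "negative"]
data_sources = ["scenario-related data", "unrelated internet data"]
decision_makers = ["human", "computer", "human + computer"]

# Full crossing of all factors yields every vignette condition.
vignettes = list(product(contexts, outcomes, data_sources, decision_makers))

print(len(vignettes))  # 4 * 2 * 2 * 3 = 48 conditions
```

Even with only a handful of factors, the design space grows quickly, which is why large samples like the study's 3,930 respondents are needed to compare perceived fairness across conditions.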
“As expected, we saw that completely automated decision-making was not favored,” says computational social scientist and co-first author Christoph Kern of the University of Mannheim. “But what was interesting is that when you have human supervision over the automated decision-making, the level of perceived fairness becomes similar to human-centered decision-making.” The results showed that people perceive a decision as fairer when humans are involved.
People were also more concerned about fairness when it came to decisions involving the criminal justice system or job prospects, where the stakes were higher. Participants deemed decisions that could lead to positive outcomes fairer than those that could lead to negative outcomes, possibly because losses weigh more heavily than gains. Compared with systems that used only scenario-related data, those drawing on unrelated data from the internet were deemed less fair, highlighting the importance of data transparency and privacy. The findings demonstrate that context matters: where fairness concerns arise, automated decision-making systems must be designed with care.
While the survey’s hypothetical situations may not fully translate to the real world, the team is already planning next steps to better understand fairness. They intend to expand the research to better understand how different people define fairness. They also intend to use similar surveys to elicit more information about concepts such as distributive justice and the fairness of resource allocation within the community.
“In a sense, we hope that people in the industry will take these results as food for thought and as things to consider before developing and deploying an automated decision-making system,” Bach says. “We must also ensure that people understand how the data is processed and how decisions are made based on it.”