By Abby Duke
In the realm of technology, artificial intelligence (AI) stands as a beacon of innovation, promising to revolutionize industries and enrich daily life. However, beneath its veneer of progress lies a complex web of ethical dilemmas, with bias and discrimination at the forefront. From facial recognition systems to hiring algorithms, AI technologies have been scrutinized for perpetuating societal biases.
One significant example of bias in AI is facial recognition technology. Widely deployed in surveillance systems, law enforcement, and even social media platforms, these algorithms purportedly identify individuals based on facial features. However, researchers and journalists have repeatedly found biases embedded in these systems.
A criminal justice algorithm used in Broward County, Florida, exhibited exactly this kind of bias: it mislabeled African-American defendants as "high risk" at nearly twice the rate it mislabeled white defendants. These scores, known as risk assessments, estimate a defendant's likelihood of committing future crimes. ProPublica analyzed the scores of more than 7,000 people arrested in Broward County and found that only 20 percent of those predicted to commit violent crimes went on to do so.
The disparities in the algorithm tracked race: according to ProPublica, the formula was particularly likely to falsely flag black defendants as future criminals. AI algorithms produce these results by learning from historical data sets, and those data sets carry the biased human decisions that shaped them, which then surface in the results.
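The kind of disparity ProPublica measured can be made concrete with a small sketch. This is not the actual risk-assessment formula; the records below are invented purely to show how a per-group false-positive audit works.

```python
# Illustrative audit of a hypothetical risk-score dataset, mirroring the kind
# of false-positive analysis ProPublica performed. All records are invented.

def false_positive_rate(records, group):
    """Share of a group's non-reoffenders who were still flagged high risk."""
    flagged = sum(
        1 for r in records
        if r["group"] == group and r["high_risk"] and not r["reoffended"]
    )
    non_reoffenders = sum(
        1 for r in records if r["group"] == group and not r["reoffended"]
    )
    return flagged / non_reoffenders if non_reoffenders else 0.0

# Hypothetical data: the score flags group A's non-reoffenders far more often.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
]

fpr_a = false_positive_rate(records, "A")  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(records, "B")  # 1 of 3 non-reoffenders flagged
```

An overall accuracy number can look reasonable while an audit like this reveals that the cost of the model's mistakes falls unevenly across groups.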
Writing about AI and machine learning for the Harvard Business Review, James Manyika and his co-authors observed, "AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas."
Facial recognition algorithms have revealed higher error rates when identifying people of color and women, compared to their white and male counterparts. This bias stems from the predominantly white and male datasets used to train AI technology, leading to inaccuracies and misidentifications that disproportionately affect marginalized communities.
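One way an unrepresentative training set produces exactly this pattern: a decision threshold tuned to minimize overall error is dominated by the majority group, so it fits them well and the underrepresented group poorly. The scores and group labels below are invented for illustration, not drawn from any real system.

```python
# Toy illustration (invented numbers): a face-match threshold tuned on pooled,
# imbalanced data. The majority group dominates the fit, so the minority
# group's overlapping scores produce a higher error rate.

# Each sample: (match_score, truly_same_person, group)
samples = [
    (0.90, True,  "majority"), (0.85, True,  "majority"),
    (0.80, True,  "majority"), (0.80, True,  "majority"),
    (0.30, False, "majority"), (0.35, False, "majority"),
    (0.40, False, "majority"), (0.30, False, "majority"),
    (0.50, True,  "minority"), (0.55, True,  "minority"),
    (0.60, False, "minority"),
]

def errors_at(threshold, group=None):
    """Count misclassifications when predicting 'same person' iff score >= threshold."""
    return sum(
        1 for score, same, g in samples
        if (group is None or g == group) and (score >= threshold) != same
    )

# Pick the threshold that minimizes *overall* error, as naive training would.
best_t = min((score for score, _, _ in samples), key=errors_at)

majority_errors = errors_at(best_t, "majority")  # the fit suits the majority
minority_errors = errors_at(best_t, "minority")  # every error lands here
```

With eight majority samples and three minority ones, the "best" threshold makes zero mistakes on the majority group while all of its errors fall on the minority group, the same shape of disparity the audits described above keep finding.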
In 2018, Amazon abandoned its experimental hiring tool after discovering that it systematically downgraded female candidates. The algorithm, trained on resumes submitted over a decade, learned to favor male applicants due to the male-dominated nature of the tech industry.
The program penalized applicants who attended all-women's colleges, along with applications that contained the word "women's." The bias embedded in the system raises the question of what data AI technology should be trained on if it is to support diversity and equality.
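A minimal sketch shows how a word like "women's" can end up penalized without anyone coding that rule. This is not Amazon's actual model; the tiny word-counting scorer and the four historical "resumes" below are invented to illustrate the mechanism.

```python
# Sketch of how a resume scorer absorbs bias from historical hiring data:
# words that appear mostly on past rejected resumes acquire negative weight.
# All resumes, outcomes, and weights here are invented for illustration.

from collections import Counter

# Hypothetical history from a male-dominated applicant pool.
history = [
    ("software engineer captain chess club", True),    # hired
    ("software developer rugby team", True),           # hired
    ("engineer women's chess club captain", False),    # rejected
    ("developer women's coding society", False),       # rejected
]

hired_words = Counter()
rejected_words = Counter()
for text, hired in history:
    (hired_words if hired else rejected_words).update(text.split())

def score(resume):
    """Sum per-word weights: hired-word counts minus rejected-word counts."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# Identical qualifications, but one resume mentions "women's".
biased = score("software engineer women's chess club")
neutral = score("software engineer chess club")
```

Because "women's" appeared only on rejected resumes in the history, the learned weights dock any application containing it, even when every other word is identical.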
Beyond these specific instances, the broader ethical implications of biased AI loom large over the tech industry. As AI systems become increasingly integrated into society, they wield immense power to shape human experiences and influence decision-making processes. However, unchecked bias threatens to erode trust in these technologies and exacerbate existing disparities.
“AI not only replicates human biases, it confers on these biases a kind of scientific credibility. It makes it seem that these predictions and judgments have an objective status,” political philosopher Michael Sandel said for The Harvard Gazette in “Great promise but potential for peril.”
Addressing the bias behind AI necessitates a multifaceted approach that encompasses both technological and ethical considerations. Diversifying datasets and ensuring equitable representation in AI development are imperative steps toward mitigating bias. By incorporating diverse perspectives and experiences, developers can create more inclusive algorithms that accurately reflect the complexities of human society.
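One concrete form the dataset-diversification step above can take is rebalancing training data so each group contributes equally before a model is fit. This is a minimal sketch under invented group labels and counts, not a complete fairness fix; rebalancing addresses representation, not label bias.

```python
# A minimal sketch of rebalancing training data: oversample smaller groups
# (with replacement) until every group matches the largest group's size.
# Group labels and counts are invented for illustration.

from collections import Counter
import random

def rebalance(samples, group_of, seed=0):
    """Return a dataset where every group has as many samples as the largest."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    by_group = {}
    for s in samples:
        by_group.setdefault(group_of(s), []).append(s)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        balanced.extend(rng.choice(members) for _ in range(target - len(members)))
    return balanced

# Hypothetical skewed data: 8 samples from group A, only 2 from group B.
data = [("A", i) for i in range(8)] + [("B", i) for i in range(2)]
balanced = rebalance(data, group_of=lambda s: s[0])
counts = Counter(group for group, _ in balanced)  # both groups now equal size
```

Oversampling duplicates existing minority-group samples rather than adding genuinely new ones, which is why it complements, not replaces, collecting more representative data in the first place.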
Transparency and accountability must also be prioritized in the deployment of AI systems. Companies should be forthcoming about the limitations and potential biases of their algorithms, empowering users to make informed decisions and hold tech entities accountable for any adverse impacts.
“Companies have to think seriously about the ethical dimensions of what they’re doing and we, as democratic citizens, have to educate ourselves about tech and its social and ethical implications,” Sandel said.
Regulatory frameworks play a pivotal role in safeguarding against algorithmic bias. Governments may enact policies that promote fairness, accountability, and transparency in AI development and deployment. By establishing clear guidelines and standards, regulators can mitigate the risks of bias and discrimination while fostering innovation in the tech industry.
As society entrusts increasingly autonomous systems with decision-making power, it is imperative to confront and address the ethical concerns that accompany these advancements. By acknowledging the biases inherent in AI and taking proactive measures to regulate them, we can harness the transformative potential of technology while upholding principles of fairness and equity for all individuals.
Sources:
Pazzanese, C. (2020, October 26). Ethical concerns mount as AI takes bigger decision-making role. The Harvard Gazette. https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
Top 12 Ethics Issues of AI [+ FAQs]. (2023, March 6). University of San Diego Online Degrees. https://onlinedegrees.sandiego.edu/ethics-in-artificial-intelligence/
Manyika, J., Silberg, J., & Presten, B. (2019, October 25). What Do We Do About the Biases in AI? Harvard Business Review. https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
Vincent, J. (2018, October 10). Amazon reportedly scraps internal AI recruiting tool that was biased against women. The Verge. https://www.theverge.com/2018/10/10/17958784/ai-recruiting-tool-bias-amazon-report
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing