Ethical AI

Guiding principles for ethical AI. Source: Kumar 2018.

We're living in a world where machines and algorithms increasingly give recommendations, tag content, generate reviews and even make decisions. A number of ethical questions arise. Can we trust machines to do the right thing? Was my loan application processed fairly? Why was my application rejected?

Then there are questions about AI replacing humans in the workplace. With widespread use of personal data, we're worried about privacy and data theft. What happens to us if AI agents acquire human-like cognitive abilities?

Ethical AI addresses these issues by building systems that are safe, fair, transparent, accountable and auditable. To practice ethical AI, it's been said that,

We are not at the point where you can simply download a tool to do the job. Data ethics is still new and it requires critical thinking.

Discussion

  • Are there real-world examples that highlight the need for ethical AI?

In March 2016, Microsoft released a chatbot named Tay that could interact with and learn from real users on social platforms. Soon Tay was exposed to nasty tweets and ended up producing mean and racist responses of its own.

A study by ProPublica found that software used to predict future criminals is biased against blacks. Judges see the software's risk scores, which directly influence sentences. White defendants were more often mislabelled as low risk than black defendants. Only 20% of those predicted to commit violent crimes actually went on to do so.

    In March 2018, an Uber self-driving vehicle killed a pedestrian. Who is responsible: the distracted driver, the pedestrian, Uber, developers who wrote the code, or a sensor manufacturer? It's unrealistic to expect AI systems to be perfect but determining liability isn't trivial.

    More recently, it was found that Facebook's ad delivery algorithms discriminate based on race and gender even when ads are targeted to a broad audience. Influencing factors include user profile, past behaviour and even the ad content.

  • What are the common ethical concerns raised by AI?

Ethical AI commonly addresses the following concerns:

    • Bias: AI systems can be biased because they're designed to look for patterns in data and favour those patterns.
    • Liability: AI systems can't be perfect. When mistakes are made, who's responsible?
    • Security: As AI systems advance, how do we stop bad actors from weaponizing them? What happens if robots can fight and drones can attack?
    • Human Interaction: There's already a decline in person-to-person interactions. Are we sacrificing humanity's social aspect?
    • Employment: Repetitive, predictable jobs that can be automated will be automated. Those replaced have to retrain themselves in areas where robots can't come in easily, such as creative or critical thinking.
    • Wealth Inequality: Companies rich enough to invest in AI will get richer by reducing cost and being more efficient.
    • Power & Control: Big companies that use AI can control and manipulate how society thinks and acts.
    • Robot Rights: If AI systems develop consciousness and emotions, should we give them rights? Can we punish AI and make them suffer?
    • Singularity: What happens if AI surpasses human intelligence? Will they turn against us to defend themselves?
  • How can AI be biased when it's based on trusted data?
    Bias in AI systems. Source: Google 2017.

    Even if data is trusted, it may not be a fair representation of the population. The dataset may be skewed along a number of dimensions including race, gender, education, or wealth. When algorithms are trained on such data, they acquire the bias present in it.

    Bias in AI comes from human biases since we are the ones building the algorithms and creating or selecting the data. Minorities, often in low-income groups, lack access to technology. As a result, AI systems are not trained on data representing them. One infamous example is Google tagging a photo of a black woman as "gorilla".

    There are many ways in which human biases creep into AI systems. Selection bias, interaction bias and latent bias are some examples.

    An AI system trained primarily on American or European faces will not work well when applied to Asian or African faces. In one example, a Nikon S630 camera suggested to an Asian user that someone had blinked when she was in fact smiling with her eyes naturally narrowed. In another example, a happy tribal woman was wrongly classified as having a "disgusted" emotion.
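Selection bias of this kind can often be made visible with a simple representation check. The sketch below is illustrative only: the group labels and population shares are made up. It compares each group's share in a training set against its share in a reference population, where large gaps hint at a skewed dataset:

```python
from collections import Counter

def representation_gap(samples, population_share):
    """For each group, return its share in the dataset minus its
    expected share in the reference population. Large positive or
    negative gaps suggest the dataset over- or under-represents
    that group (a form of selection bias)."""
    counts = Counter(samples)
    total = len(samples)
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in population_share.items()
    }

# Hypothetical training set: one demographic label per record.
train_groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
population = {"A": 0.60, "B": 0.25, "C": 0.15}

gaps = representation_gap(train_groups, population)
# Group A is over-represented (+0.20); B and C are under-represented (-0.10 each).
```

A model trained on this set would see group A four times as often as groups B and C combined, even though A is only 60% of the population, so errors on B and C would be systematically more likely.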

  • What can we do to make AI more ethical?
    AI ethics by design. Source: Atos 2019.

    We need to design our systems to avoid biases. Check that proxy metrics don't introduce bias. We can minimize bias by having diversity in teams, data, and validation.

    For transparency, data must be marked with metadata detailing the source, intended use, usage rights, etc. Data must be managed well leading to algorithms that are better traceable, reproducible and fixable. Context is key to transparency and explainability.

    When AI systems fail in production, have a plan to fix them. Monitor them continuously. Let the purpose be clearly defined and bounded. AI systems should not be allowed to explore unintended pathways.

    What we need is a holistic approach. It's not just about technology and tools, but also about leadership, standards and rules. Microsoft's Satya Nadella has said ethical AI also means how humans interact with AI. We must have empathy, education, creativity, judgement and accountability.

    Accenture has suggested setting up AI advisory bodies. Have discussions. Publish guidelines. Engage with stakeholders. The AI Ethics Lab is looking into integrating ethics and AI right at the R&D phase.
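Monitoring for bias in production can start with something very simple. As a minimal sketch with hypothetical loan-decision data (the groups and decisions are made up), the demographic parity gap measures how much positive-outcome rates differ across groups:

```python
def demographic_parity_gap(decisions, groups):
    """Compute the positive-outcome rate per group. The spread
    between the highest and lowest rates is a coarse fairness
    signal worth monitoring; a large gap calls for investigation."""
    tallies = {}
    for decision, group in zip(decisions, groups):
        positives, total = tallies.get(group, (0, 0))
        tallies[group] = (positives + decision, total + 1)
    rates = {g: p / t for g, (p, t) in tallies.items()}
    return rates, max(rates.values()) - min(rates.values())

# Hypothetical logged decisions (1 = loan approved) and applicant groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates, gap = demographic_parity_gap(decisions, groups)
# rates == {"A": 0.75, "B": 0.25}; gap == 0.5 flags a disparity to investigate.
```

A gap alone doesn't prove discrimination, since groups may differ in legitimate ways, but tracking it over time is one concrete way to operationalize the monitoring advice above.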

  • Are there published guidelines or code of conduct for practising ethical AI?

    The Institute for Ethical AI & ML has identified 8 ML principles for responsible AI: human augmentation, bias evaluation, explainability by justification, reproducible operations, displacement strategy, practical accuracy, trust by privacy, and security risks. They also identify a 4-phase strategy: by principle, by process, by standards, by regulation. They've also formed the Ethical ML Network.

    The Future of Life Institute has created an interactive map addressing validation, security, control, foundations, verification, ethics and governance.

    In 2018, GE Healthcare published a set of AI principles. Google published its own guidelines of what AI applications should and shouldn't do. Microsoft released ten guidelines for developing conversational AI. One of these guidelines says that developers are accountable until bots become truly autonomous.

Milestones

2014

Amazon starts using AI for filtering resumes and hiring top talent. But in 2015, it's discovered that the algorithm favours male candidates since it was trained on resumes submitted to Amazon over a 10-year period. This dataset had mostly male applicants since men dominated the tech industry. The algorithm simply reinforced the bias. The algorithm is found to have other problems as well and is discontinued by Amazon.

2016
Interest in ethical AI starts in 2016. Source: CB Insights 2018.

This is the year when people start talking about ethics and AI. Interest in ethical AI grows through 2017 and 2018.

May
2018

Microsoft appoints Tim O'Brien to the role of AI Ethicist, a new full-time position. The role involves AI ethics advocacy and evangelism. O'Brien clarifies that the role also covers IoT, analytics, and AR/VR, since real-world solutions are hybrids of multiple technologies.

Dec
2018

Maria de Kleijn-Lloyd of Elsevier finds from her research that only 0.4% of published work on AI deals with ethics. She comments that there's a lot of discourse on ethical AI but not much in terms of research and rigorous inquiry.

Apr
2019

Google disbands Advanced Technology External Advisory Council (ATEAC), which was supposed to advise on ethical AI. This happens due to controversy over ATEAC membership. This example shows that avoiding human bias and discrimination is not easy.

Jan
2020
Mapping different AI principles to eight themes. Source: Fjeld et al. 2020, pp. 8-9.

Fjeld et al. note that many organizations working with AI have published their own guidelines. They study thirty-six such documents and identify eight main themes. Moreover, recent documents tend to cover all eight themes.

References

  1. AI Ethics Lab. 2019. "Homepage." AI Ethics Lab. Accessed 2019-08-16.
  2. Accenture. 2019. "An Ethical Framework for Responsible AI and Robotics." Accenture. Accessed 2019-08-16.
  3. Angwin, Julia, Jeff Larson, Surya Mattu and Lauren Kirchner. 2016. "Machine Bias." ProPublica, May 23. Accessed 2019-08-16.
  4. Atos. 2019. "Human-centric AI Atos." Journey 2022: Future Vision, Atos. Accessed 2019-08-16.
  5. B Nalini. 2019a. "The Hitchhiker's Guide to AI Ethics." Towards Data Science, via Medium, May 02. Updated 2019-06-21. Accessed 2019-08-16.
  6. B Nalini. 2019b. "Deploying my first ML model and prediction app!" nalinicommits/nbonfastai, GitHub, March 12. Accessed 2019-08-16.
  7. Bersin, Josh. 2018. "The Ethics of Artificial Intelligence: It's Trickier Than You Think." August 18. Accessed 2019-08-16.
  8. Bigelow, Keith. 2018. "Ethics in healthcare aren't new, but their application has never been more important." GE Healthcare, October 01. Accessed 2019-08-16.
  9. Bossmann, Julia. 2016. "Top 9 ethical issues in artificial intelligence." World Economic Forum, October 21. Accessed 2019-08-16.
  10. CB Insights. 2018. "Tech Cos Confront The Ethics Of AI." CB Information Services, Inc, October 03. Accessed 2019-08-16.
  11. Chapman, Glenn. 2019. "Google disbands artificial intelligence ethics board." Yahoo News, April 05. Accessed 2019-08-16.
  12. Dastin, Jeffrey. 2018. "Amazon scraps secret AI recruiting tool that showed bias against women." Reuters, October 10. Accessed 2019-08-16.
  13. Davenport, Thomas. 2019. "What does an AI ethicist do?" World Economic Forum, August 09. Accessed 2019-08-16.
  14. Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. 2020. "Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for Ai." Berkman Klein Center Research Publication No. 2020-1, January 15. Accessed 2020-01-18.
  15. Google. 2017. "Machine Learning and Human Bias." Google, on YouTube, August 25. Accessed 2019-08-16.
  16. Griggs, Troy and Daisuke Wakabayashi. 2018. "How a Self-Driving Uber Killed a Pedestrian in Arizona." NY Times, March 21. Accessed 2019-08-16.
  17. Kilpatrick, Steve. 2018. "8 Ethical questions in artificial intelligence." Logikk, April 12. Accessed 2019-08-16.
  18. Kumar, Ajitesh. 2018. "Ethical AI: Lessons from Google AI Principles." DZone, October 15. Accessed 2019-08-16.
  19. Lee, Odelia. 2009. "Camera Misses the Mark on Racial Sensitivity." Gizmodo, May 15. Accessed 2019-08-16.
  20. Metz, Rachel. 2016. "Why Microsoft Accidentally Unleashed a Neo-Nazi Sexbot." MIT Technology Review, March 24. Accessed 2019-08-16.
  21. Microsoft. 2018. "Responsible bots: 10 guidelines for developers of conversational AI." Microsoft Corporation, November 04. Accessed 2019-08-16.
  22. Nadella, Satya. 2016. "The Partnership of the Future." Slate, June 28. Accessed 2019-08-16.
  23. Pichai, Sundar. 2018. "AI at Google: our principles." Google Blog, June 07. Accessed 2019-08-16.
  24. Robertson, Adi. 2019. "Facebook’s ad delivery could be inherently discriminatory, researchers say." The Verge, April 04. Accessed 2019-08-16.
  25. Squark. 2019. "Ethical AI." Glossary, Squark, Vizadata LLC, April 02. Accessed 2019-08-16.
  26. Sukis, Jennifer. 2019. "The origins of bias and how AI may be the answer to ending its reign." Design at IBM, via Medium, January 13. Accessed 2019-08-16.
  27. The Institute for Ethical AI & ML. 2019. "Homepage." The Institute for Ethical AI & ML. Accessed 2019-08-16.
  28. Vander Ark, Tom. 2018. "Let's Talk About AI Ethics; We're On A Deadline." Forbes, September 13. Accessed 2019-08-16.
  29. Wallace, Nicholas. 2019. "It's time to start some serious research into the ethics of AI." Science|Business, February 26. Accessed 2019-08-16.
  30. integrate.ai. 2019. "The Ethics of Artificial Intelligence." integrate.ai. Accessed 2019-08-16.

Further Reading

  1. Thomas, Rachel. 2018. "AI Ethics Resources." fast.ai, September 24. Accessed 2019-08-16.
  2. Bostrom, Nick and Eliezer Yudkowsky. 2018. "The Ethics of Artificial Intelligence." In Artificial Intelligence Safety and Security, Roman V. Yampolskiy ed., Chapman and Hall/CRC. Accessed 2019-08-16.
  3. Pichai, Sundar. 2018. "AI at Google: our principles." Google Blog, June 07. Accessed 2019-08-16.
  4. Microsoft. 2018. "Responsible bots: 10 guidelines for developers of conversational AI." Microsoft Corporation, November 04. Accessed 2019-08-16.

Cite As

Devopedia. 2022. "Ethical AI." Version 6, February 15. Accessed 2024-06-25. https://devopedia.org/ethical-ai
Contributed by 3 authors
Last updated on 2022-02-15 11:54:08