
Studying the Opportunities and Risks of Artificial Intelligence for Human Rights


Author: Ahsnat Mokarim, first-year student at Maharashtra National Law University, Mumbai.

ABSTRACT

The Digital and Artificial Revolution marked a transition from mechanical and electrical technology to information science and electronics. It involved large-scale adoption of cutting-edge technologies such as the internet and smart devices, which entirely transformed the functioning of the world. In contemporary times, official work, business, communication, transactions, marketing, and record-keeping are all maintained with the help of automated resources. Every sector of society, be it commercial, educational, healthcare, legal, political, or governmental, has been revolutionised by these constantly evolving digital and artificial technologies. However, the widespread use of such technologies has also had a significant impact on the human rights of individuals. This research article focuses on the opportunities and risks associated with the increasing dominance of artificial intelligence (AI). An attempt is made to understand how the use of AI technology affects human rights, along with suggestions for bridging the gap between them. It should be noted that this article does not cover all the rights mentioned in international frameworks but discusses a few major aspects of the issue.

INTRODUCTION

Artificial intelligence is rapidly transforming the world as we know it. It has a significant impact on the growth of businesses across sectors. Algorithms and automated data processing are now being used in domains once exclusive to human beings, such as decision-making, medical diagnosis, and weather forecasting. According to the International Data Corporation (IDC), the global AI market is expected to reach the $500 billion mark by 2024[1].

The exponential rise in the use of AI, as well as its enormous influence on human beings, has raised various concerns in the field of human rights. Our fundamental rights to life, privacy, liberty, education, health, equality, fair trial and the presumption of innocence, along with our freedom of expression, have all been impacted both positively and negatively. The adverse impacts, in turn, raise several complicated questions about the rising role of algorithmic decision-making. For example, who will be liable when a human right is violated by an AI-powered decision: the creator of the system or the person who implemented it[2]? Furthermore, these technologies are still at an early stage of development and are continuously growing in sophistication. The challenges associated with AI are complex and numerous, which makes it a highly debated issue today.

What is Artificial Intelligence (AI)?

As it is a constantly evolving concept, there is no universally accepted definition of “artificial intelligence” (hereafter “AI”). It is often described as technology that enables machines and computers to perform tasks that ordinarily require human intelligence. According to a report by Stanford University, AI can be defined as “a science and a set of computational technologies that are inspired by—but typically operate quite differently from—the ways people use their nervous systems and bodies to sense, learn, reason, and take action”[3]. Stuart Russell and Peter Norvig, in their book Artificial Intelligence: A Modern Approach, outline four categories of AI:

  • Systems that think like humans
  • Systems that act like humans
  • Systems that think rationally
  • Systems that act rationally[4]

As AI has not yet been concretely conceptualised and the technologies are constantly improving, what counts as AI today changes rapidly. This phenomenon is known as the “odd paradox” or the “AI effect”[5]. However, AI is broadly classified into two types: Narrow AI or Artificial Narrow Intelligence (ANI), and Strong AI or Artificial General Intelligence (AGI). The former is knowledge-based and trained to perform specific tasks, for instance, playing chess or filtering spam. Well-known applications such as Siri, Google Assistant, and Alexa also belong to this type. Strong AI, on the other hand, is a hypothetical concept in which machines can replicate human intelligence in performing any intellectual task; such systems would possess a certain level of awareness or consciousness. Examples of this category can be seen in several science-fiction films[6].

Applications of AI in upholding human rights

The primary goal of AI is to enable computers to perform cognitive actions such as problem-solving, learning, and decision-making. Over the past decade, there has been an exponential increase in the development and use of AI-based technologies, and they have become an integral part of human lives. Facilities available on our smartphones, such as calling a cab, ordering food, or connecting on social media, are directly or indirectly influenced by AI algorithms. Crucial sectors of society such as agriculture, meteorology, healthcare, education, banking, governance, and legal systems are adapting and applying AI in their day-to-day functions. These diverse uses of AI have also benefited our rights by making our lives easier.

AI-driven systems are so ubiquitous that it is not possible to list all their applications in this section. However, a few of them are discussed below.

AI assistance in Healthcare Sector

One of the most significant contributions of AI has been in the field of healthcare and medical diagnosis. AI has advanced all the basic aspects of the healthcare system: prevention, diagnosis, and treatment. AI-assisted diagnosis involves the analysis of large data sets, yielding patient insights and predictive conclusions, which helps identify key areas of patient care that require improvement[7]. For example, in a study undertaken by computer scientist Sebastian Thrun, an AI-powered image recognition system outperformed human dermatologists in correctly detecting cancerous skin lesions[8]. Similarly, IBM’s Watson Health is reported to have predicted rare ailments and identified treatments for patients[9].

It is also reported that AI helps make healthcare available in rural areas and developing countries, which could reduce the disparity in healthcare services between cities and rural areas[10]. Efforts are also being made to use AI to predict future disease outbreaks, which could be extremely helpful in containing the havoc they create[11]. Thus, AI involvement in healthcare can produce positive outcomes for our most fundamental human rights, including the right to life, the right to good health, and the right to access proper healthcare.

Inclusive AI for people with disabilities

The field of AI is constantly being explored for ways of aiding people with physical disabilities. More and more products are being designed specifically to address various disabilities, whether of vision, hearing, mobility, learning, or mental health, and whether permanent or temporary. This is referred to as inclusive design and technology. Reported experiences of people engaging with AI technology are broadly positive, despite certain shortcomings and frustrations encountered in the process[12]. A few examples of AI-supported assistive devices include image-description tools and smart assistants (Alexa, Siri, etc.) for visually impaired people, speech-to-text applications (Google Translate, Ava, etc.) for deaf people, applications supporting spoken communication (Voiceitt)[13], ‘care robots’ for assisting the elderly[14], and multilingual text-to-speech options, among others[15].

Financial Sectors

The financial and banking sectors are other areas that have gained from AI-based systems. Access to credit is an important right that enables people to exercise their economic, social, and cultural rights. It is especially advantageous for vulnerable and disadvantaged sections of society, as it provides them with the resources to secure their rights to healthcare, education, ownership of property, and an adequate standard of living. The AI approach to credit scoring (systems used by lenders to estimate the likelihood that a borrower will make payments, based on certain attributes) follows more advanced and sophisticated rules than traditional credit scoring. AI accounts for a broader range of factors when assessing a potential borrower, thereby producing a more accurate, neutral, and data-backed analysis[16]. It thus promotes financial inclusion and the right to equality for marginalised groups (including women) by minimising the discrimination and bias that may attach to them in human-based credit scoring[17]. However, bias can still be fed into AI algorithms, which are ultimately designed by human beings; newer models are nonetheless being proposed to build more accurate and unbiased systems[18].
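
To make this contrast concrete, the following is a minimal, hypothetical sketch (in Python, using scikit-learn on synthetic data) of how such a model might weigh a broader set of applicant signals. The feature names, data, and weights are purely illustrative assumptions, not any lender's actual system.

```python
# A minimal, hypothetical sketch of AI-assisted credit scoring.
# All data and feature names are synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants described by a broader range of signals than a
# traditional scorecard: income, bill punctuality, job tenure, debt ratio.
X = rng.normal(size=(1000, 4))
# Synthetic "repaid the loan" labels, loosely driven by those features.
y = ((X @ np.array([0.8, 0.6, 0.4, -0.9])
      + rng.normal(scale=0.5, size=1000)) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new applicant: estimated probability of repayment.
applicant = np.array([[0.5, 1.2, -0.3, 0.1]])
print("repayment probability:", model.predict_proba(applicant)[0, 1])

# A transparency benefit of simple models: the learned weights can be
# inspected, so reviewers can see which factors drive a decision.
features = ["income", "bill punctuality", "job tenure", "debt ratio"]
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

Inspectable weights of this kind are one reason simple models are often preferred where lending decisions must be explained to applicants or regulators.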

Implications of AI on Human Rights

While AI has made enormous contributions to transforming our lives, it also has downsides that can have adverse consequences for human rights. It is pertinent to note that even AI applications intended to help mankind can cause serious harm. Varying concerns on this issue have been raised by experts and organisations working in these fields; some of the prominent ones are addressed in this section.

Privacy issues and Data Protection

The biggest blow from reliance on AI technologies has been to our right to privacy. In order to make decisions and predictions, AI technologies collect and store vast amounts of data for analysis. As discussed earlier, AI applications in the healthcare sector can be used to diagnose patients’ medical ailments. This is done through the collection of sensitive data, including patients’ entire medical histories, thereby raising serious privacy and security concerns. Furthermore, information available on the internet, such as on social media platforms, can be collected to infer an individual’s interests and political viewpoints.

Such analysis of large volumes of information may seem harmless but can be used to reveal private details about individuals that would ordinarily be classified as protected or sensitive information. For example, in the 2012 Mobile Data Challenge organised by Nokia, researchers used machine learning to predict demographic attributes such as an individual’s gender, marital status, occupation, and age[19]. Similarly, a Stanford University study showed how researchers used deep neural networks to predict people’s sexual orientation from a collection of facial images[20].

Another dimension of the privacy concern relates to surveillance. AI-based invasive surveillance systems such as facial recognition are being explored by countries including the USA, China, and Saudi Arabia. According to the Carnegie Endowment for International Peace’s AI Global Surveillance (AIGS) Index, at least 75 of 176 countries globally are actively using AI technologies for surveillance purposes[21]. This can be particularly dangerous for marginalised groups or for people living under regimes that could misuse the information to further discrimination.

Criminal Justice System: Perpetuation of bias and discrimination

Criminal justice systems have been incorporating AI decision-making tools at different stages, including pretrial detention, sentencing, and parole. This is done to remove human biases and to reach fair and accurate results efficiently. The problem with this process is that these tools rely heavily on data from government databases or previously available risk-assessment reports. Automated risk-assessment tools can thus be used to perpetuate the bias and discrimination already present in the system[22]. For example, the US risk-assessment algorithm COMPAS was accused of racial bias by the organisation ProPublica, which found that black offenders were twice as likely as white offenders to be classified as “high risk”[23]. This highlights how these tools can adversely affect the rights of marginalised groups and minorities, who are often discriminated against. Furthermore, classifying a defendant as having a high or low risk of reoffending may interfere with the presumption of innocence, which is essential to a fair trial[24].
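
The disparity ProPublica reported can be illustrated with a simplified version of the kind of audit it performed: comparing false-positive rates, that is, defendants labelled “high risk” who did not in fact reoffend, across groups. The sketch below uses invented records purely for illustration; it is not ProPublica's actual methodology or data.

```python
# A simplified, illustrative audit of a risk-assessment tool: compare
# false-positive rates across groups. All records here are invented.
from collections import defaultdict

# (group, predicted_high_risk, actually_reoffended) -- synthetic records.
records = [
    ("A", True, False), ("A", True, False), ("A", True, True),
    ("A", False, False), ("B", True, False), ("B", False, False),
    ("B", False, False), ("B", True, True),
]

false_positives = defaultdict(int)  # labelled high risk, did not reoffend
non_reoffenders = defaultdict(int)  # everyone who did not reoffend

for group, high_risk, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if high_risk:
            false_positives[group] += 1

for group in sorted(non_reoffenders):
    rate = false_positives[group] / non_reoffenders[group]
    print(f"group {group}: false-positive rate = {rate:.0%}")
# A large gap between groups (here 67% vs 33%, i.e. twice as likely)
# is the kind of disparity that signals the tool may be perpetuating
# existing bias rather than removing it.
```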

Right to work and livelihood

The role of AI in job automation poses a serious threat to the right to work, as it could reshape the labour market through both job creation and job destruction. Job losses have been increasing, and this trend is predicted to accelerate with the development of AI technologies. A 2013 paper by Oxford academics estimated that 47% of US jobs are at high risk of automation[25]. Another paper, by an MIT professor, found that each additional robot per 1,000 workers in the US lowers employment, with robots so far accounting for a loss of about 400,000 jobs[26].

Robots and automated machines perform manual tasks more efficiently than humans, which increases productivity and revenue. According to research by The Boston Consulting Group, the share of tasks performed by robots will rise from a global average of around 10% across all manufacturing industries today to around 25% by 2025[27]. Furthermore, the adoption of machines largely affects the employment and wages of lower- and middle-class workers (machine operators, welders, assemblers, etc.)[28], undermining their right to an adequate standard of living.

Moderation of Online Content and Freedom of Expression

Before AI, the mechanism for dealing with objectionable content on the internet and social media platforms primarily involved reviewers employed by the companies to examine complaints filed by their users. Owing to the continuous growth in the volume of online content, there has been significant investment in automated systems to perform these tasks, resulting in the development of auto-filtering algorithms and content-removal systems.

This, however, has raised several concerns about the violation of our right to privacy and our freedoms of expression, opinion, information, movement, assembly, religion, and the press. In addition, automated content moderation is rife with errors and inaccuracies[29]. There are also growing apprehensions about private companies deciding the limits of our freedom of expression. Company guidelines on acceptable content have been accused of discriminating against certain opinions or viewpoints, typically favouring the powerful over the marginalised. Facebook, for example, allowed a U.S. Congressman to say that all radicalised Muslims should be “hunted” or “killed,” but prohibited Black Lives Matter activists from saying that “all white people are racist”[30].
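
A toy example helps show why such errors occur: a naive filter keyed to words alone cannot distinguish a genuine threat from journalism or figurative speech. The sketch below is illustrative only, with invented posts and keywords, and is far cruder than any platform's actual system.

```python
# A deliberately naive sketch of automated content moderation, showing
# how crude filters over-block. Keywords and posts are invented.
BLOCKLIST = {"attack", "kill"}

def moderate(post: str) -> str:
    # Flag a post if any word, stripped of punctuation, is blocklisted.
    words = {w.strip(".,!?").lower() for w in post.split()}
    return "REMOVED" if words & BLOCKLIST else "ALLOWED"

posts = [
    "We should kill this proposal in committee.",   # figurative speech
    "Reporting live from the site of the attack.",  # journalism
    "I will attack you on sight.",                  # genuine threat
]
for post in posts:
    print(moderate(post), "->", post)
# All three posts are removed: the filter catches the threat but also
# silences reporting and ordinary speech -- the kind of error that
# chills legitimate expression at scale.
```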

Another set of arguments concerns the use of these systems by authoritarian governments to censor opposing perspectives. In China, machine learning is used by internet service providers and government agencies to remove “pornographic and violent content” and topics that may be “politically sensitive”[31]. Furthermore, AI-supported search engines and personalised results based on users’ histories and news feeds restrict exposure to diverse content and influence media pluralism[32].

Algorithms on social media platforms can also be used to identify people and to predict potential conflicts and planned protests. Governments can apply these techniques to take preemptive measures against demonstrations, affecting our rights to protest and assemble[33].

Finding a balance between AI and Human Rights

The AI applications currently in place affect the entire set of human rights mentioned in international human rights instruments, including civil and political rights as well as social, cultural, and economic rights. An important observation is that the multifaceted impacts of automated technologies are not uniformly spread: a technology may affect one group of people positively while harming another, and some individuals may be affected more than others[34]. Therefore, the first step towards bridging the gap between AI and human rights should be to support research addressing the human rights and ethical consequences of AI.

A large number of current formal and informal institutions are not aptly designed to monitor the implications of AI. This demands systemic innovation to ensure these technologies develop with a human-rights-inclusive approach. To bring about these changes, all relevant stakeholders, including algorithm developers, corporations, law enforcement agencies, academia, governments, and the general public, must make a concerted effort[35].

Understanding the social constructs around AI technologies is another significant dimension of human-rights assessment. AI training systems and algorithms are ultimately designed by human beings. As a study by the Council of Europe notes, “mathematical or computational constructs do not by themselves have adverse human rights impacts but their implementation and application to human interaction does”[36]. This implies that the issue lies not with algorithms per se, but with the input fed to them by prejudiced human minds. It is therefore also crucial to study the human decision-making process, to understand whether there is a difference between the kinds of decisions (related to human rights) taken by humans and by AI[37].

Various organisations and experts have published findings on this issue and offered recommendations. The primary concern reported with automated systems is the violation of our right to privacy. More transparency is required in algorithm-based decision-making for us to understand the rationale behind decisions and to challenge them. A legislative framework or the establishment of data protection standards has been suggested by different institutions, including the UN and the Council of Europe. Such guidelines should be respected by both businesses and governments in order to ensure transparency and accountability. For instance, governments should release all relevant information (the purpose of acquisition, how the system operates, etc.) when acquiring any AI technology. Similarly, private companies should disclose information about their inputs and algorithms, which can then be reviewed by experts, legislators, or the public at large[38].

Other vital steps include increasing public discourse and AI awareness. Such initiatives help people, especially younger individuals, understand the implications of AI technologies for their lives. Institutions that use automated processing should provide simple explanations of how their algorithms work, as well as clarify the possibility of embedded bias[39].

CONCLUSION

It is evident that the interplay between AI and human rights is a complicated issue with numerous outcomes. However, our reliance on these technologies continues to increase despite the possibility of adverse impacts on our rights, because AI has penetrated our lives to the level of necessity. On the other hand, growing concerns about human rights violations have prompted calls for the regulation of these technologies.

While legal frameworks have become the need of the hour, it is pertinent to note that attempts at regulation can themselves have human rights implications, as standards may not address both the technical and ethical perspectives adequately[40]. Thus, any solution requires genuine transparency and accountability from governments and private entities alike. AI should no longer be viewed as a “black box”, that is, a system whose operation a third party cannot meaningfully interpret[41].

Furthermore, neither nations nor private enterprises can alone formulate a comprehensive mechanism to monitor AI. It is the responsibility of all stakeholders to collaborate and brainstorm in order to find ideas that reconcile technological advancement with a rights-respecting world. It also requires human beings to be conscious of their rights and actions. As Stephen Hawking remarked at a Web Summit, “We stand on the threshold of a brave new world. We all have a role to play in ensuring that we and the next generation have the determination to engage with science … and create a better world for the whole human race”[42].


REFERENCES

  1. IDC Forecasts Improved Growth for Global AI Market in 2021, IDC (Feb. 23, 2021), https://www.idc.com/getdoc.jsp?containerId=prUS47482321.

  2. Algorithms and Human Rights, Council of Europe (2018), https://rm.coe.int/algorithms-and-human-rights-en-rev/16807956b5.
  3. Peter Stone et al., Artificial Intelligence and Life in 2030: The One Hundred Year Study on Artificial Intelligence, Stanford University (2016), https://apo.org.au/sites/default/files/resource-files/2016-09/apo-nid210721.pdf.
  4. Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 2-5 (Prentice Hall 3rd ed. 2009).
  5. Pamela McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence 423 (Taylor & Francis Group 2nd ed. 2004).
  6. Artificial Intelligence, IBM (June 3, 2020), https://www.ibm.com/cloud/learn/what-is-artificial-intelligence#toc-types-of-a-q56lfpGa.
  7. Alicia Phaneuf, Use of AI in healthcare & medicine is booming – here’s how the medical field is benefiting from AI in 2021 and beyond, Insider (Jan. 30, 2021, 2:17 AM), https://www.businessinsider.com/artificial-intelligence-healthcare?IR=T#.
  8. Siddhartha Mukherjee, A.I. Versus M.D., The New Yorker (Mar. 27, 2017), https://www.newyorker.com/magazine/2017/04/03/ai-versus-md.
  9. James Billington, IBM’s Watson Cracks Medical Mystery with Life-Saving Diagnosis for Patient Who Baffled Doctors, International Business Times (Aug. 8, 2016, 6:40 PM), https://www.ibtimes.co.uk/ibms-watson-cracks-medical-mystery-life-saving-diagnosis-patient-who-baffled-doctors-1574963.
  10. Jonathan Guo & Bin Li, The application of medical artificial intelligence technology in rural areas of developing countries, 2 Health Equity 174 (2018).
  11. Zeena Saifi, Victoria Brown & Tom Page, AI and big data joins effort to predict deadly disease outbreaks, CNN (Mar. 6, 2018), https://edition.cnn.com/2018/03/06/health/rainier-mallol-tomorrows-hero/index.html.
  12. Peter Smith & Laura Smith, Artificial intelligence and disability: too much promise, yet too little substance?, 1 AI Ethics 81 (2021), https://link.springer.com/article/10.1007/s43681-020-00004-5.
  13. Jackie Snow, How People with Disabilities Are Using AI to Improve Their Lives, PBS (Jan. 31, 2019), https://www.pbs.org/wgbh/nova/article/people-with-disabilities-use-ai-to-improve-their-lives/.
  14. Rob Girling, Can Care Robots Improve Quality Of Life As We Age?, Forbes (Jan. 18, 2021, 9:00 AM), https://www.forbes.com/sites/robgirling/2021/01/18/can-care-robots-improve-quality-of-life-as-we-age/?sh=38d705d1668b.
  15. AI and Inclusion, The Alan Turing Institute, https://www.turing.ac.uk/research/research-projects/ai-and-inclusion.
  16. Arthur Bachinskiy, The Growing Impact of AI in Financial Services: Six Examples, towards data science (Feb. 21, 2019), https://towardsdatascience.com/the-growing-impact-of-ai-in-financial-services-six-examples-da386c0301b2.
  17. Filippo A. Raso et al., Artificial Intelligence & Human Rights: Opportunities & Risks, Berkman Klein Center (2018), at 29.
  18. Sian Townson, AI Can Make Bank Loans More Fair, Harvard Business Review (Nov. 6, 2020), https://hbr.org/2020/11/ai-can-make-bank-loans-more-fair.
  19. Sanja Brdar et al., Demographic Attributes Prediction on the Real-World Mobile Data, Mobile Data Challenge by Nokia Workshop, in Conjunction with Int. Conf. on Pervasive Computing (2012).
  20. Yilun Wang & Michal Kosinski, Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation from Facial Images, 114 Journal of Personality and Social Psychology 246 (2018).
  21. Gil Press, Artificial Intelligence (AI) Stats News: AI Is Actively Watching You In 75 Countries, Forbes (Sep. 18, 2019, 9:09 AM), https://www.forbes.com/sites/gilpress/2019/09/18/artificial-intelligence-ai-stats-news-ai-is-actively-watching-you-in-75-countries/?sh=28f31bb15809.
  22. Raso, supra note 17, at 22-23.
  23. Jeff Larson et al., How We Analyzed the COMPAS Recidivism Algorithm, ProPublica (May 23, 2016), https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.
  24. Human Rights in the Age of Artificial Intelligence, Access Now (2018), https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf.
  25. Carl Benedikt Frey & Michael Osborne, The Future of Employment: How susceptible are jobs to computerisation?, Oxford Martin Programme on Technology and Employment (Sept. 17, 2013), https://www.oxfordmartin.ox.ac.uk/downloads/academic/future-of-employment.pdf.
  26. Sara Brown, A new study measures the actual impact of robots on jobs. It’s significant., MIT Management Sloan School (July 29, 2020), https://mitsloan.mit.edu/ideas-made-to-matter/a-new-study-measures-actual-impact-robots-jobs-its-significant.
  27. Michael Zinser, Justin Rose & Hal Sirkin, The Robotics Revolution: The Next Great Leap in Manufacturing, BCG (Sept. 23, 2015), https://www.bcg.com/publications/2015/lean-manufacturing-innovation-robotics-revolution-next-great-leap-manufacturing.
  28. Brown, supra note 26, at 16.
  29. Paresh Dave, Social media giants warn of AI moderation errors as coronavirus empties offices, Reuters (Mar. 17, 2020, 12:15 AM), https://www.reuters.com/article/us-health-coronavirus-google-idUSKBN2133BM.
  30. Julia Angwin & Hannes Grassegger, Facebook’s Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children, ProPublica (June 28, 2017, 5 AM) https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-algorithms.
  31. Yuan Yang, Artificial intelligence takes jobs from Chinese web censors, Financial Times (May 22, 2018) https://www.ft.com/content/9728b178-59b4-11e8-bdb7-f6677d2e1ce8.
  32. Council of Europe, supra note 2, at 17.
  33. Council of Europe, supra note 2, at 23.
  34. Raso, supra note 17, at 4.
  35. Dunja Mijatović, In the era of artificial intelligence: safeguarding human rights, openDemocracy (July 3, 2018), https://www.opendemocracy.net/en/digitaliberties/in-era-of-artificial-intelligence-safeguarding-human-rights/.
  36. Council of Europe, supra note 2, at 8.
  37. Council of Europe, supra note 2, at 9.
  38. Access Now, supra note 24, at 32, 35.
  39. Council of Europe, supra note 2, at 45.
  40. Council of Europe, supra note 2, at 36.
  41. Cynthia Rudin & Joanna Radin, Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson from an Explainable AI Competition, 1 HDSR (2019).
  42. John Koetsier, Stephen Hawking Issues Stern Warning On AI: Could Be ‘Worst Thing’ For Humanity, Forbes (Nov. 6, 2017, 2:21 PM), https://www.forbes.com/sites/johnkoetsier/2017/11/06/stephen-hawking-issues-stern-warning-on-ai-could-be-worst-thing-for-humanity/?sh=4dc0f52b53a7.
