Introduction
Humans prefer to be in control of their activities and their behaviour. The consequences can be devastating if this control is lost. Analogous to this ‘loss of control’ problem, Artificial Intelligence (AI) has drastically changed our daily lives, as an ever-growing array of tools and technologies invades our privacy. One result is an influencer economy with the potential to damage our personal integrity and our objectivity in making decisions. Data about our behaviour and interests, monetized by a few Big-Tech companies, undermines our free will without our realizing how the process works. To counteract this control problem, Human Centered AI offers an opportunity to fundamentally change the design and development of AI-systems, in contrast to conventional AI, which might disrupt our socio-economic system beyond repair.
What is Human Centered AI?
Human Centered AI (HCAI) refers to the development of AI technologies that prioritize human needs, values and capabilities. Its methodology ensures that development teams create AI-systems that enhance human abilities and well-being rather than replacing or diminishing them. It addresses ethical, social and cultural implications and ensures these systems are accessible, usable and beneficial to all segments of society. Applying HCAI, designers and developers engage in interdisciplinary collaboration, typically involving psychologists, ethicists and domain experts, to create transparent, explainable and accountable AI. This approach aligns with the broader movement towards ethical AI and emphasizes the importance of AI-systems that respect human rights, fairness and diversity.

HCAI is crucial because it ensures that AI-systems focus on human needs and values. Incorporating HCAI implies that users are actively engaged in the development process. This collaborative approach leads to more effective and ethical solutions because it harnesses different views and expertise. When development teams involve users from various backgrounds, those users can help identify and mitigate biases in AI-algorithms, leading to more equitable outcomes. Moreover, HCAI fosters trust and acceptance among users: when people understand and see the value of AI-systems, they are more likely to adopt and support these technologies. Such trust is essential for the successful integration of AI into everyday life.
Design Principles of HCAI
The fundamentals of HCAI design are rooted in the following key principles:
Empathy and Understanding the User: Understanding the needs, challenges, and contexts of the user is paramount. Designers must empathize with users to create AI solutions that genuinely address their problems and enhance their lives. For example, an HCAI healthcare application should be based on in-depth interviews with patients and doctors. It should understand and anticipate the unique needs of different patients and ensure a personalized and empathetic user experience.
Continuous Feedback and Improvement: Human-centered AI is an iterative process that involves continuous testing, feedback and refinement. This approach ensures that AI-systems evolve in response to changing user needs and technological advancements. For example, Tesla’s Autopilot aims to improve continuously through over-the-air software updates based on real-world driving data and user feedback, enhancing safety and performance over time.
Balance between Automation and Human Control: While AI can automate many tasks, it is essential to maintain a balance where humans remain in control, especially in critical decision-making scenarios. This balance ensures that AI augments rather than replaces human capabilities. For example, in an autonomous vehicle where AI handles navigation, there should always be the option for the driver to take manual control. This balance ensures safety and keeps the human in command during critical situations.
Minimizing Risk: To minimize the risks of AI, its behaviour must be modifiable, equipped with ‘undo’ options, and transparent and easy to understand in plain human language. AI should be categorized as controllable or uncontrollable, and even partial bans on certain types of AI technology should be considered. We may never get to 100% safe AI, but we can make AI safer in proportion to our efforts, which is far better than doing nothing. A minimal code sketch of a reversible, human-controlled decision flow, illustrating this principle and the previous one, follows this list.
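To make the last two principles concrete, here is a minimal, hypothetical Python sketch of a human-in-the-loop decision flow: the AI proposes an action, a human operator approves or rejects it, and every applied action is kept on an undo stack so it stays reversible. All names here (Action, HumanInTheLoopController, ask_human and so on) are illustrative assumptions, not part of any real system or library.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch: the AI only *proposes* actions, a human stays in
# control (balance of automation and human control), and every applied
# action can be undone (the 'minimizing risk' principle).

@dataclass
class Action:
    description: str          # plain-language explanation (transparency)
    apply: Callable[[], None]
    undo: Callable[[], None]  # every action must be reversible

class HumanInTheLoopController:
    def __init__(self, ask_human: Callable[[Action], bool]):
        self.ask_human = ask_human        # human approval hook
        self.history: List[Action] = []   # undo stack

    def execute(self, proposed: Action) -> bool:
        # The human decides whether the proposed action actually runs.
        if not self.ask_human(proposed):
            return False
        proposed.apply()
        self.history.append(proposed)
        return True

    def undo_last(self) -> None:
        # The 'undo' option keeps humans in command after the fact as well.
        if self.history:
            self.history.pop().undo()

# Example usage with a trivial action and console approval.
if __name__ == "__main__":
    state = {"lane": "left"}

    change_lane = Action(
        description="Change lane from left to right",
        apply=lambda: state.update(lane="right"),
        undo=lambda: state.update(lane="left"),
    )

    controller = HumanInTheLoopController(
        ask_human=lambda a: input(f"Allow: {a.description}? [y/N] ").lower() == "y"
    )
    controller.execute(change_lane)
    print(state)
    controller.undo_last()
    print(state)
```

The design choice worth noting is that autonomy is bounded at two points: before an action runs (human approval) and after it runs (undo). A real system would need richer explanations and logging, but the shape of the control loop is the same.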
Some proponents of HCAI are concerned that failing to implement its associated processes and technologies could lead to a takeover by conventional AI-technologies, with humanity losing control over its future wellbeing as a result.
Scenarios of an AI-Takeover
An AI-takeover is an imagined scenario in which AI emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include the replacement of the entire human workforce through automation, a takeover by a superintelligent AI, or a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure that future superintelligent machines remain under human control. Scholars like Nick Bostrom debate how far off superhuman intelligence is and whether it poses a risk to mankind.

According to Bostrom, a superintelligent machine might not be motivated by the emotional desire for power that typically drives human behaviour; it might instead treat power simply as a means toward attaining its ultimate goals. He and others have expressed concern that such an AI would be able to modify its own source code in order to increase its own intelligence. Bostrom also argues that a computer program which faithfully emulates a human brain, or runs algorithms as powerful as the brain’s, could think orders of magnitude faster than a human because it is made of silicon rather than flesh. A network of such human-level intelligences, designed to connect and share complex thoughts and memories seamlessly, could work as a giant unified team of trillions of human-level minds, creating a new form of ‘collective superintelligence’ far beyond any level of human intelligence.
There is no Proof that AI can be Controlled
According to Dr Roman V. Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville, there is no evidence that AI can be controlled safely. We are facing an almost guaranteed event with the potential to cause an existential catastrophe, and many researchers consider this the most important problem humanity has ever faced. The software that drives superintelligence differs from conventional programs in its ability to learn new behaviours, adjust its performance and act semi-autonomously in novel situations. In the effort to make AI ‘safe’, system developers are confronted with an effectively infinite number of safety issues, which keeps increasing the complexity of the problem to be solved. Predicting every problem humanity will face is not possible, and mitigating them after the fact with security patches may not be enough.

AI often cannot explain what it has decided, or we cannot understand the explanation given, because humans are not smart enough to grasp the concepts involved. If we do not understand AI’s decisions and are confronted with a non-transparent ‘black box’, we cannot understand the problem well enough to reduce the likelihood of future accidents with, for example, self-driving cars. And if we grow accustomed to accepting AI’s answers without explanation, we will not be able to tell whether it gives us wrong or manipulated answers. As the capability of AI increases, its autonomy increases as well, while our control over AI-machines decreases. For a superintelligence to avoid acquiring inaccurate knowledge and to remove all bias from its programs, developers would have to set aside all existing knowledge mapped by conventional AI-systems and rediscover or prove everything from scratch, while trying to avoid introducing new biases into the system. To resolve this conflict, Yampolskiy suggests that an equilibrium point could be found at which we sacrifice some of AI’s capabilities in return for retaining some control, at the cost of granting AI-systems a certain degree of autonomy.
Conclusion
Many scholars, including evolutionary psychologist Steven Pinker, argue that a superintelligent machine is likely to coexist peacefully with humans, as envisaged by the application of Human Centered AI principles. The fear of a cybernetic revolt is often based on interpretations of humanity’s own history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being’s drive for survival. Yet this human competitiveness stems from the evolutionary background of our intelligence, where the reproduction of genes in the face of human and non-human competitors was the central issue.
HCAI, if programmed with ethical values, is a hopeful light in the scary world of AI.