May 5, 2021 16:32

Bringing ‘humility’ to AI devices

Humble AI is not simplistically trained to achieve a single objective; it consults before taking decisions

Artificial Intelligence (AI) is now ubiquitous in our lives — search engines, content recommendation algorithms, digital assistants on phones, smart TVs and speakers, news feeds, stock market trading algorithms, futuristic smart cars, drones, and many more. AI is driven by data, algorithms, and goals. Data on what the world is like currently, how it evolves, how our actions affect it, and what goals we wish the device to achieve drives the AI system.

In the 2004 movie I, Robot, there is a prescient scene of a crash involving a truck and two cars. Both cars fall into a river. One carries a police officer (played by Will Smith), the other a 12-year-old child and her mother. The film is set in a future where AI-enabled robots are used for hazardous tasks. A robot jumps into the water, calculates, and determines that it can save only one person in the limited time before the cars sink. It saves Will Smith’s character, who is estimated to have the highest probability of survival (45 per cent).

Still from 'I, Robot' (2004)

Here, the robot has data on the cars, the persons in them, and the speed at which the cars are sinking into the water, along with the ability to decide whom to save. If it tries to save everyone, it may not succeed in saving anyone, and the fixed goal programmed into its system is to save the person with the highest probability of survival. Will Smith’s character is troubled at being chosen to be saved over the 12-year-old child, whose survival probability was estimated at 11 per cent.
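
In code terms, the robot’s fixed objective reduces to a simple argmax over estimated survival probabilities. The Python sketch below is purely illustrative; the labels and numbers echo the film’s scenario and are not from any real system:

    # Fixed-objective decision rule: save whoever has the highest
    # estimated probability of survival. Labels and numbers are
    # illustrative only.
    survival_estimates = {
        "police officer": 0.45,
        "child": 0.11,
    }

    def choose_rescue(estimates):
        """Return the person with the highest estimated survival probability."""
        return max(estimates, key=estimates.get)

    print(choose_rescue(survival_estimates))  # -> police officer

The rule is decisive but inflexible: it has no way to weigh considerations, such as saving the child, that were never encoded in its single objective.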

Human-AI compatibility

Stuart Russell, a leading AI researcher at the University of California, Berkeley, says: “If we succeed in our present approach of designing machines to solve a fixed objective (save the person with the highest probability of survival), we will lose control over the machines when they exceed human capabilities.” He cites a hypothetical example of a machine programmed with the single objective of delivering coffee without failure. The machine becomes intelligent enough to understand that it will fail if it is switched off. It could then disable its off switch, and we would lose control of it.

How can we ensure AI doesn’t overtake humanity? Russell suggests one way would be to build ‘humble’ machines. Instead of programming a definitive objective into the machine, he suggests we build in ‘uncertain objectives,’ which enable the machine to ask whether an action “would be alright” and to consult on human values and preferences before taking decisions. Learning from this periodic consultation would enable machines to align their actions with human values and preferences, creating human-compatible AI. This is easier said than done.
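
As a rough illustration, here is a toy version of the idea in Python. It is only a sketch, not Russell’s formal assistance-game model: the agent keeps a belief distribution over candidate human preferences (the hypothesis names, utilities, and threshold below are all hypothetical) and asks the human when its confidence in the best action is low.

    # A toy 'humble' agent: instead of one fixed objective, it keeps a
    # belief distribution over what the human's true preferences might
    # be, and asks the human when its uncertainty about the best action
    # is high. All names, utilities, and the threshold are hypothetical.

    beliefs = {
        "values_speed": 0.55,   # human mainly wants the coffee quickly
        "values_safety": 0.45,  # human mainly wants no risk of harm
    }
    utilities = {
        "values_speed": {"rush": 1.0, "go_carefully": 0.6},
        "values_safety": {"rush": 0.1, "go_carefully": 0.9},
    }
    ACTIONS = ["rush", "go_carefully"]
    ASK_THRESHOLD = 0.9  # confidence required before acting on its own

    def expected_utility(action):
        return sum(p * utilities[h][action] for h, p in beliefs.items())

    def best_for(hypothesis):
        return max(ACTIONS, key=lambda a: utilities[hypothesis][a])

    def decide():
        best = max(ACTIONS, key=expected_utility)
        # Belief mass on hypotheses under which 'best' really is best.
        confidence = sum(p for h, p in beliefs.items() if best_for(h) == best)
        if confidence < ASK_THRESHOLD:
            return f"ask the human: would '{best}' be alright?"
        return best

    print(decide())

With these illustrative numbers, the agent’s confidence in ‘go_carefully’ is only 0.45, so it asks rather than acts; each answer it receives would sharpen the belief distribution, so over time it needs to ask less often.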

Consider the dilemma of human values and preferences. Human values are those that make us human, such as honesty, love, peace, and truth, and few would dispute that they are broadly uniform across the globe. Preferences, or norms, however, are the specific acceptance or adaptation of these values according to a society’s customs or an individual’s upbringing.

Differences in value systems can be thought of as ‘exceptions’ to the rules associated with values. These could be general exceptions or ones made for specific situations. For example, stealing as a general exception to the value of honesty would be acceptable to a society of thieves but not to a normal one. In the specific situation of a starving individual, however, most societies would condone stealing as an exception to the value of honesty.

Advanced challenges

How can we build humble machines that ask whether an action ‘would be alright,’ learn from varying human responses, and apply that learning to specific situations without ambiguity?

The Chambers English Dictionary defines humble as an adjective meaning ‘low: lowly: modest: unpretentious: having a low opinion of oneself.’ This definition does not describe a negative trait. One’s opinion of oneself is low only in the sense that one is understood to be no more important than others; nor is one less important. Humility as a value allows selflessness and dignity for a better world.

Humans develop humility through practices such as the suggestions below; developing it through programming is a different ball game:

  1. Spend time listening to others
  2. Ask for help when you need it
  3. Seek feedback from others on a regular basis
  4. Review your actions against the language of pride  

Paucity of time

Another problem is that the short response time available for these crucial decisions may not permit consultation. A smart car whose braking system has failed, facing the dilemma of hitting an elderly person crossing the street or swerving and hitting a child standing on the sidewalk, would have no time to consult a human being. It would have to execute its programmed objective and follow the manufacturer’s or programmer’s values.

Could the robot’s decision (in I, Robot) have been better if it had consulted all the persons affected by the crash? Would the agitating farmers in India have been more amenable to the new laws had they been part of the process of consultation and collaboration in framing them?

Is there a suggestion here of collaboration, consultation, and inclusiveness of not only all living creatures but also AI-enabled machines? Is there a point at which a machine could be recognised as having ‘life’?

Artificial Intelligence will raise many questions about ethics and value systems. Human-compatible answers are being researched at universities around the world, and we will not know their impact until we begin implementation and gradually learn from our mistakes. This journey is irreversible, and we are better off focusing on learning before AI reaches a technological singularity.

(The writer is Head, External Programmes, International School of Management Excellence, Bengaluru.)