What is human trust?
There is no scientific definition of trust, yet our behavior is influenced by trust all the time. Trust is both an emotional and a logical act. Emotionally, it is where one exposes one's vulnerabilities to other people, believing they will not take advantage of that openness. Logically, it is where one has assessed the probabilities of gain and loss, calculated an expected utility based on the available data, and concluded that the other person in question will behave in a predictable manner.
Trust implies the following:
Predictability: being able to predict what other people will do and what situations will occur. If we can surround ourselves with people we trust, then we can create a predictable environment.
Value exchange: making an exchange with someone when you do not have full knowledge about them, their intent and the things they are offering to you.
Delayed reciprocity: giving something now with an expectation that it will be repaid, possibly in some unspecified way at some unspecified time in the future.
Exposed vulnerabilities: giving other people the opportunity to take advantage of one’s vulnerabilities, while expecting that they will not do so.
Behavioral studies show that humans generally have a desire to express trust and, likewise, a desire to be trusted by others.
Trust within the corporate context
Trust is a vital element of corporate culture, and leaders understand its importance. In its 2016 global CEO survey, PwC (PricewaterhouseCoopers) reported that 55% of CEOs think that a lack of trust is a threat to their organization’s growth. But most have done little to increase trust, mainly because they aren’t sure where to start. Despite the evidence that you can’t buy higher job satisfaction, organizations still use golden handcuffs to keep good employees in place. While such efforts might boost workplace happiness in the short term, they fail to have any lasting effect on talent retention or performance.
Compared with people at low-trust companies, people at high-trust companies report 74% less stress, 106% more energy at work, 50% higher productivity, 13% fewer sick days, 76% more engagement, 29% more satisfaction with their lives, and 40% less burnout. These figures come from the research described below. Separately, the Gallup organization, which has spent decades monitoring what it defines as employee engagement, reports that a staggering 87% of employees worldwide are not engaged. Many companies are experiencing a crisis of engagement and a lack of mutual trust.
Many studies and publications provide guidance on how to improve and manage trust. In an article titled ‘The Neuroscience of Trust’, published by the Harvard Business Review in the January–February 2017 issue, Paul J. Zak, professor of economics and psychology at Claremont Graduate University, recommends steps for raising trust. He collected data in 2016 from a nationally representative sample of 1,095 working adults in the U.S., applying neuroscientific tests that measured oxytocin levels in trust-related situations. For example, high stress is a potent oxytocin inhibitor with a negative effect on trust. Testing was accomplished by drawing a blood sample from people’s arms before and immediately after they made decisions to trust others (if they were senders) or to be trustworthy (if they were receivers).
Based on his studies, Prof. Zak discusses the following instruments for raising the level of trust in a corporate setting:
Recognize excellence: Neuroscientific experiments measuring oxytocin levels show that recognition has the largest effect on trust when it occurs immediately after a goal has been met.
Give people autonomy in how they do their work: Once employees have been trained, allow them, whenever possible, to manage people and execute projects in their own way. Being trusted to figure things out is a big motivator.
Share information broadly: Uncertainty about the company’s goals, strategies, and tactics leads to chronic stress, which inhibits the release of oxytocin and undermines teamwork.
Facilitate whole-person growth: High-trust workplaces help people develop personally as well as professionally. Numerous studies show that acquiring new work skills isn’t enough; if you’re not growing as a human being, your performance will suffer.
Should we trust Artificially Intelligent Machines (AIMs)?
AIMs combine data with mathematical algorithms to interpret or answer human-defined problems.
The big high-tech companies engaged in the production of AIMs are well aware that distrust will hinder the advancement and market acceptance of AIMs. To reap the potential benefits of AIMs, we will first need to trust them.
In September 2016, Google, IBM, Amazon, Facebook, and Microsoft formed the alliance ‘Partnership on AI’ (www.partnershiponai.org) with the goal of studying and formulating best practices in AI technologies. So far, little substance has emerged from this alliance.
Companies engaged in business-to-business activities, such as IBM promoting the application of AIM services in the health sector, need to incorporate the trust factor into their products and services in order to expand their business. Consequently, it is the quality of the products that will answer the question of whether AIMs can be trusted. To reach a trustworthy quality level, the following product design issues have to be addressed:
Algorithmic responsibility: Trust is built upon accountability. As such, the algorithms applied in AIMs need to be transparent, or at least interpretable. In other words, they need to be able to explain their behavior in terms that humans can understand, from how they interpreted their input to why they recommended a particular output.
One of the primary reasons for including algorithmic accountability in any AIM is to manage the potential for bias in the decision-making process. Bias can be introduced both by the data sets used to train an AIM and by the algorithms that process that data.
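To make this concrete, here is a minimal sketch in Python of one common bias check: comparing a model’s positive-outcome rates across two groups. The decisions, group labels, and the 0.8 ‘four-fifths’ threshold are hypothetical illustrations for this example, not a standard prescribed for AIMs.

```python
# Minimal sketch: checking a model's decisions for group bias.
# The data and the 0.8 threshold (the "four-fifths" heuristic)
# are illustrative assumptions, not a prescribed AIM standard.

def positive_rate(decisions):
    """Fraction of decisions in a group that received the positive outcome."""
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_a, decisions_b):
    """Ratio of positive-outcome rates between two groups (lower / higher)."""
    rate_a, rate_b = positive_rate(decisions_a), positive_rate(decisions_b)
    lower, higher = sorted([rate_a, rate_b])
    return lower / higher if higher > 0 else 1.0

# Hypothetical model decisions (1 = approved, 0 = denied) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% approved

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common "four-fifths" heuristic
    print("Warning: possible bias; review training data and features.")
```

A check like this catches bias only in outcomes; auditing the training data and the features the algorithm relies on remains a separate, equally necessary step.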
System assurance and security: The integrity of the data and models underlying AIMs, as well as the resiliency of algorithms and systems in the face of a wide range of anomalies and threats, must be carefully checked. Anomalies can be introduced by many factors, ranging from incomplete data to malicious attacks. Techniques and processes to protect against, detect, correct, and mitigate risks due to anomalies must be integrated end-to-end.
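As an illustration, the following Python sketch screens incoming data for anomalies before it reaches a model. The baseline statistics and the 3-sigma threshold are assumed for the example; a production AIM would use richer, domain-specific checks.

```python
# Minimal sketch: screening incoming data for anomalies before inference.
# The baseline statistics and 3-sigma threshold are illustrative assumptions.
import math

# Baseline statistics learned from trusted training data (hypothetical).
FEATURE_MEAN = 50.0
FEATURE_STD = 10.0

def is_anomalous(value, mean=FEATURE_MEAN, std=FEATURE_STD, z_limit=3.0):
    """Flag inputs more than z_limit standard deviations from the baseline."""
    if value is None or (isinstance(value, float) and math.isnan(value)):
        return True  # incomplete data counts as an anomaly
    return abs(value - mean) / std > z_limit

incoming = [48.2, 51.7, 95.0, None, 49.9]
accepted = [x for x in incoming if not is_anomalous(x)]
quarantined = [x for x in incoming if is_anomalous(x)]
print("Accepted:", accepted)        # [48.2, 51.7, 49.9]
print("Quarantined:", quarantined)  # [95.0, None]
```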
Embedded values: AIMs should function according to values that are aligned with those of humans, so that they are accepted by our societies and by the environment in which they are intended to function. This is essential not just in autonomous systems but also in systems based on human-machine collaboration, since value misalignment could preclude or impede effective teamwork. It is not yet clear what values machines should use, or how to embed those values into them. Several ethical theories, defined for humans, are being considered.
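One naive way to picture ‘embedding values’ is to encode them as explicit constraints that veto candidate actions, as in the Python sketch below. The rules and actions are hypothetical, and genuine value alignment remains an open research problem.

```python
# Minimal sketch: values encoded as explicit constraints that veto actions.
# The rules and candidate actions are hypothetical illustrations.

FORBIDDEN = {"share_private_data", "deceive_user"}  # encoded value rules

def value_filter(candidate_actions):
    """Return only the actions consistent with the encoded values."""
    return [a for a in candidate_actions if a not in FORBIDDEN]

plan = ["notify_user", "share_private_data", "schedule_checkup"]
print(value_filter(plan))  # ['notify_user', 'schedule_checkup']
```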
Robustness: Robustness is a measure of the reliability and predictability of systems. As such, it is a critical requirement for establishing the right level of trust in AIMs. To achieve robustness, all AI systems must be verified, validated, and tested, both logically and probabilistically, before they are deployed. Testing needs to confirm that a system does not execute unwanted behaviors. To define those unwanted behaviors, we need to know what is good or bad in a particular situation, referring back to embedded values.
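A simple way to picture probabilistic testing for unwanted behaviors is an assertion-based test over many random inputs, as in this Python sketch. The model stub and the dosage rule are hypothetical stand-ins for a real AIM and its embedded values.

```python
# Minimal sketch: probabilistic testing that a system never executes an
# unwanted behavior. The model stub and the "never exceed dosage" rule
# are hypothetical stand-ins for a real AIM and its embedded values.
import random

MAX_SAFE_DOSE = 100.0  # unwanted behavior: recommending more than this

def recommend_dose(patient_weight_kg):
    """Stand-in for a trained model; clamps output to the safe range."""
    raw = 1.2 * patient_weight_kg
    return min(raw, MAX_SAFE_DOSE)

def test_never_unsafe(trials=10_000):
    """Sample random inputs and verify the unwanted behavior never occurs."""
    for _ in range(trials):
        weight = random.uniform(1.0, 300.0)
        assert recommend_dose(weight) <= MAX_SAFE_DOSE, "unsafe recommendation"
    print(f"Passed: no unsafe output in {trials} random trials.")

test_never_unsafe()
```

Random sampling of this kind can raise confidence but never proves absence of unwanted behavior; that is why logical verification is named alongside probabilistic testing above.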
Conclusion
Strict adherence to product design specifications defined by humans will determine whether AIMs should be trusted. Given the far-reaching consequences of applying AIM products and services, it seems desirable that an independent audit organization verify that generally agreed-upon AIM standards, including components of trust, are adhered to in the design and execution of a product. Until such a stamp of approval exists, it is left to the manufacturers to convince their customers that their AIM products and services are trustworthy. The economic advantages of employing AIMs are huge; however, without trust, the efforts to implement AIMs are bound to fail. As humans continue to be part of the equation of our economic wellbeing, we should also keep in mind that trust is part of the entire ecosystem.
Combining the best qualities of AIM applications, such as data analytics, procedural logic, reasoning, and sense-making, with uniquely human qualities, such as value judgment, empathy, and esthetics, will lead to better and more informed decisions. We will be able to peer into the vast, unexplored universe of unstructured data and enhance our ability to learn and to discover new avenues of thought and action. However, without trust across the entire economic spectrum, from the individual to the corporation up to the AIM product manufacturers, the threat of disruption and failure looms. Highly competent leadership that enhances trust is needed to reap the benefits of a historically fundamental change, also dubbed the ‘4th industrial revolution’.