Do we really understand what AI is, and what about Leadership?

Posted by Peter Rudin on 28. November 2025 in Essay

Understanding AI (Credit: www.lse.ac.uk)

Introduction

According to Eric Sandosham, Ph.D., Founder & Partner of Red & White Consulting Partners LLP and former CAO of Citibank APAC, we do not understand AI at all. Despite all the media coverage, most people do not really understand what AI is. Is it a case of being naive, ignorant, or misinformed? Citibank recently announced that it will train 175’000 employees on how to write better Gen AI prompts. Accenture similarly announced that those who cannot be re-skilled on AI will need to exit the company. These companies still think that AI literacy is equivalent to using pre-built Gen AI solutions. Yet even employees who know how to leverage some ChatGPT prompts often seem to have quite strange expectations of what AI can actually do.

What is AI

For most people the term ‘AI’ evokes images of animated intelligent robots, because so much of our perception is deeply influenced by literature. We are fascinated by the creation of synthetic or artificial life: from the ancient Greek myth of Talos, the giant bronze automaton, to Mary Shelley’s 1818 novel Frankenstein, to Isaac Asimov’s robot series published between 1940 and 1950, to Stanley Kubrick’s film ‘2001: A Space Odyssey’, to the more recent Star Wars, Terminator, and Matrix movies. With each iteration, AI acquires more god-like abilities: all-seeing, all-knowing, able to accurately predict the future. These depictions of AI fall into what AI researchers would label Artificial General Intelligence (AGI). However, AGI is still pure fiction, as we have no ability to create it yet.

Because literature shapes our perception, AI has evolved from ‘something that could think for itself’ into ‘something that is all-knowing’. And this is perhaps at the heart of the mismatch when it comes to understanding AI: the false equivalence between intelligence and knowledge. The reality is that AI is not a fount of knowledge. Knowledge is contextual; it continues to evolve and reshape itself. AI mimics intelligence; it does not truly reason from first principles. It is not connecting the dots but rather relying on dots that have already been connected before. So when someone asks questions like “Why can’t AI create data for my model? Why can’t AI tell me which features will be most predictive? Why can’t AI figure out the cybersecurity flaws in my system?”, they believe AI has all the answers ready to use intelligently, but it simply does not.
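The point that AI relies on “dots that have already been connected” can be made concrete with a toy bigram text generator: it can only ever emit word-to-word transitions it has literally seen in its training text. This is a deliberately simplified sketch for illustration, not how production LLMs work, and the corpus and function names are invented for the example:

```python
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Record which word follows which: the 'dots already connected'."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model: dict, start: str, length: int = 8, seed: int = 0) -> str:
    """Sample a continuation; every transition must exist in the training data."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:  # no known continuation: the model is stuck
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
model = train_bigram(corpus)
print(generate(model, "the"))
```

However fluent the output looks, every adjacent word pair already occurs in the corpus; the model recombines, it does not reason.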

Training versus Experience and Generalization

Because AI is based on learning, some researchers believe that large language models (LLMs), having been trained on the world’s corpus of knowledge, have effectively learnt through the collective human experience. This is another false equivalence: that sufficient rote learning might equate to experience-based learning. Memorising a mass of facts and information does not imply comprehension. Comprehension comes when those facts and information are put into service and reviewed against their expected outcomes. This has implications for leaders looking to leapfrog their organisations through AI transformation, bypassing the need to build core competencies in data analytics and data science. They believe they can acquire ‘expertise’ with AI. The reality is that AI is brittle and less agile in a dynamic and rapidly changing world. AI is not an expert, because expert-level competency comes from experience. There are no shortcuts for organisations to achieve data and AI competencies. Moreover, generalisation is a key human ability that we mostly take for granted. It is not dependent on stochasticity, which is the underlying basis of all LLMs. This inability of AI to generalise also means that it cannot innovate by recombining existing knowledge, nor can it transfer knowledge and best practices from one domain to another.

The Art of Abstraction

Abstraction is one of those higher-order skills, like critical thinking: hard to put a finger on, but something we intuitively understand. Abstraction is about simplifying a complex thing into its core essence. For example, we drive our cars without really understanding exactly how the combustion engine turns fossil fuel into kinetic energy or how the gears shift to make the car go faster. At its core, abstraction is about representation: wrapping the complexities up in an idea that resonates. Abstraction is well acknowledged as a foundation for critical thinking. It is also a foundation for generalisation. It is an important cognitive ability, and yet we have not spent enough time discussing it in the wider context of the knowledge economy.

  • Can we better define abstraction?
  • Can abstraction be taught?
  • How does abstraction help in data analytics / data science?

Abstraction is at the heart of problem-solving and solution design, and a good data practitioner is often distinguished by their ability to ‘cut through the noise’. Achieving abstraction in data practice requires one to identify only the quintessential information signals, to differentiate those signals from the surrounding noise, and then to concentrate them so that they become immediately noticeable.
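As a loose analogy for identifying, differentiating, and concentrating an information signal, consider what a simple moving-average filter does to noisy measurements. The data below is entirely made up for illustration; the point is only that the underlying trend becomes “immediately noticeable” once the noise is averaged out:

```python
import random

def moving_average(values, window=5):
    """Concentrate the underlying signal by averaging out local noise."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

random.seed(42)
signal = [i * 0.5 for i in range(50)]                # the quintessential trend
noisy = [s + random.uniform(-3, 3) for s in signal]  # trend mixed with noise
smoothed = moving_average(noisy)

# Each smoothed value is centred on signal[i + 2] (window of 5),
# so compare against the trend shifted by two positions.
err_raw = sum(abs(n - s) for n, s in zip(noisy, signal)) / len(signal)
err_smooth = sum(abs(m - s) for m, s in zip(smoothed, signal[2:])) / len(smoothed)
print(f"mean error raw: {err_raw:.2f}, smoothed: {err_smooth:.2f}")
```

The smoothed series tracks the trend far more closely than the raw data, which is the “concentration” step of the abstraction described above.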

Training for Abstraction

Computer science is largely anchored on abstraction, which is taught as a method of creating reusable functions from a collection of coding scripts. Beyond computer science, few disciplines teach abstraction; most do not even recognise this cognitive skill set. One way to start honing our ability for abstraction is to practice associative thinking using concept mapping, a well-known technique whereby one attempts to associate any given phenomenon or entity with its attributes. For example, Star Wars is associated with sci-fi and the hero’s journey. Associative thinking is an excellent way to strengthen your ability to generalise attributes, create categories, and uncover deeper, hidden core concepts.
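In the computer-science sense described above, abstraction means collapsing repeated scripting into one reusable, named idea. A minimal sketch (the report scenario and names are invented for illustration):

```python
# Before abstraction: the same steps copy-pasted for every dataset, e.g.
#   total = sum(sales); avg = total / len(sales); print(...)
#   total = sum(costs); avg = total / len(costs); print(...)

# After abstraction: the essence ("summarise a series") gets a name.
def summarise(name: str, values: list) -> dict:
    """Reduce a series to its core essence: a few descriptive numbers."""
    total = sum(values)
    return {"name": name, "total": total, "mean": total / len(values)}

sales = [120.0, 95.5, 130.25]
costs = [80.0, 60.0, 77.5]
for series_name, series in (("sales", sales), ("costs", costs)):
    print(summarise(series_name, series))
```

The function is the abstraction: the repeated mechanics disappear behind a single idea that can be reused on any new series.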

While humans seem to have a natural ability to abstract from birth, abstraction, when mastered, becomes a cognitive power for the knowledge worker and data practitioner. It is also something that current AI continues to struggle with. Hence, it is worth spending time practicing abstraction before tackling complex AI problems for the organisation you are working for.

Courage: the important Leadership Attribute

Academic research makes a distinction between leadership traits and leadership skills; traits are not skills. Traits are based on inherent personality, shaped by lived experiences. Skills, on the other hand, can be learned and acquired. Current consensus considers courage as both a leadership trait and a leadership skill. Courage in corporate leadership is typically defined as a mindful, intentional act or decision that is motivated to bring about a noble good or worthy end, despite substantial potential negative consequences for the actor. It is, in essence, calculated risk-taking while overcoming fear. Within the corporate setting, a noble good or worthy end can be interpreted as long-term organisational health, ethical responsibility, or the well-being of stakeholders. Substantial negative consequences can be interpreted as a negative impact on job security, personal finances, reputation, or social standing.

Courage also has the following sub-dimensions (with examples related to corporate settings):

  1. Courage to Act — e.g. challenging status quo
  2. Courage to Speak Out — e.g. whistleblowing or having difficult conversations
  3. Courage to Be — e.g. admitting mistakes or vulnerabilities
  4. Courage to Persist — e.g. staying the course during crisis

As noted, the current consensus among researchers and consultants is that courage is both a trait and a skill. Within the confines of the corporate world, courage as a leadership skill is clearly what is of interest: we can train potential and existing leaders how to take calculated risks, have difficult conversations, show vulnerability, and be resilient.

Conclusion

Leadership experts often cite courage as a foundational trait and skill. Philosophers have similarly opined. Aristotle, for example, defines courage as a Golden Mean: a virtue at the mid-point between cowardice and rashness, where one faces the right fears for the right reasons. Similarly, famed author C.S. Lewis writes that courage is “the form of every virtue at the testing point”, meaning that a leader’s integrity, honesty, or justice only matters when upholding it is challenging, risky, or unpopular, and it requires courage to express it. Thus, having a sharper and more useful definition of courage in corporate leadership, and developing the ability to measure it, will be critical as leaders navigate the increasingly stormy weather ahead.

One Comment

  • Hello Peter, excellent essay, had to read twice ;). Thank you!
    Understanding the AI of a giant provider is similar to trying to understand any very large IT system (black-box syndrome); consumers got used to either trusting and (blindly) using the system or simply ignoring it (if they can *).
    Your essay points clearly towards management as the prime key for effective progress, which I fully understand. (*) It is management making the decision on which AI service provider and which agents are to be used, mostly lacking the day-to-day experience of the staff, who may see all the negative aspects (incomplete AI replies and workslop, hallucination, ‘impossible’ to iterate/maintain code…, knowing that with sufficient time and Google queries and analytics they would obtain pretty much the same).
    Using the very impressive Gemini 2.5 Pro and recent trials with pilot 3.0 still show the clear limits on complex (non-mainstream) topics. The LLMs are mostly based on past written text, which may be wrong or right and is very much influenced by high-page-rank SEO info (high page rank often manipulated by political/commercial bots/fake accounts from the USA and Russia…).
    Sure, it is less frustrating sending a query to Gemini than calling a help-desk clerk in the early days of digitalisation ;). Progress and evolution are massive.
    Considering the handicap of current giant LLMs, I see more long-term potential in a Sapient-type approach (https://wp.we0we.com/sapient/).
    Best greetings Hannes
