Launching LAC²: AI’s Value to ‘Think Global and Act Local’

Posted by Peter Rudin on 22 April 2022 in Essay

Lucerne (Picture Credit: kapellbruecke.com)

Introduction

LAC² (https://www.lac2.org) is a non-profit association incorporated under Swiss law and based in Lucerne, Switzerland. LAC² stands for ‘Lucerne AI and Cognitive Community’. Its purpose is to support and assist companies, institutions, and individuals in Central Switzerland in generating value through the application of AI technologies, while building a collaborative community and addressing the specific interests of its members. About two years ago, a group of AI enthusiasts got together to discuss how to promote Central Switzerland as an attractive local AI hub. With start-ups mushrooming across all regions of Switzerland and many support programs already in place, the small to midsized companies which carry the bulk of Switzerland’s success story as one of the most innovative countries in the world are challenged to attract the ‘best brains’ for corporate team-engagement. Facing this competitive challenge, Central Switzerland offers several outstanding advantages:

  • An excellent educational system directed towards the practical application of AI
  • Close access to the most beautiful landscapes with mountains and lakes for leisure and sports
  • Access to cultural institutions such as the KKL Lucerne with its world-famous concert events
  • An excellent local travel infrastructure, avoiding the time-consuming rush-hour traffic of large cities
  • Decentralized local community activities with a democratic mindset of collaboration and support

Along with these unique arguments, the original enthusiasm to create a valuable AI hub has finally culminated in the launch of LAC². As an introduction, the following provides a history- and future-focused view of why LAC² needs to be built.

A Historic View of Computation

Blaise Pascal is usually credited with building the first mechanical calculating machine in 1642. Some thirty years later, Leibniz introduced a calculator that could add, subtract, multiply, and divide. Leibniz’s subsequent formulation of the binary number system finally provided the groundwork for computers and computer science as we know it today. Digital computers use this numbering system to store and manipulate all of their data, which includes numbers, words, music, graphics and more. The need for computing machinery to process a rapidly growing inventory of information – largely stored on 80-column punched cards – was closely related to military requirements during the Second World War.
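
To make this concrete, the short sketch below – plain Python, purely illustrative – shows how one and the same binary notation underlies both numbers and text in a modern digital computer:

  # Illustrative only: the same binary notation encodes numbers and text.
  n = 42
  print(format(n, "08b"))                # '00101010' – the integer 42 in binary
  for ch in "AI":
      print(ch, format(ord(ch), "08b"))  # each character stored as a binary code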

In 1942, John Mauchly at the University of Pennsylvania proposed building an electronic, programmable computer to respond to this rapidly growing demand; together with J. Presper Eckert he built the machine that became known as ENIAC (Electronic Numerical Integrator And Computer). Through John von Neumann’s collaboration with the ENIAC team, two quite separate historical strands came together: on the one hand, the effort to achieve high-speed, high-precision, automatic calculation; on the other, the attempt to design a machine capable of significant reasoning, which established the need for software for solving problems by computational means. The requirement to distribute the computer’s output electronically later led to the implementation of early data-communication networks.

Responding to the demand for global standardization of communication software, Sir Tim Berners-Lee and his team developed the HTTP protocol and the HTML language, giving birth to the first version of the World Wide Web in late 1991. Despite the burst of the dotcom bubble and its massive losses on the US stock market in 2001, Google’s 1998 introduction of its search engine signalled the starting point of Big Tech’s extremely successful business model of monetizing behavioral user data, lowering the barrier that once separated the business from the private sector. This manifestation of exponential technological growth is challenging organisations and individuals alike with an intensity humanity has never experienced before.

Viewing the Future of Digital Transformation

Recognizing the value of data as a source of prediction, decision-making and design, three interrelated components need to be considered for assessing the future of Digital Transformation:

  • Continuing exponential growth of computer performance and storage at lower cost
  • Massive increase in communication bandwidth with competing technologies: Fiber, 5G, etc.
  • Artificial Intelligence with Deep Neural Networks and other means of pattern recognition

With these components as facilitators, the amount of data being generated is ‘exploding’. New sensing technologies and communication network architectures – as provided by the ‘Internet of Things’ (IoT) – generate data in real time that is less prone to errors than historically accumulated data. Hence, Digital Transformation is not a prerequisite for AI; rather, the opposite is true: applying AI technology is a must for reaping the benefits of an increasingly digitized environment. Adapting to this challenge, organisations must manage a business environment in which rising complexity is met by competing offerings of IT tools. This spiral of innovation puts immense pressure on human resources, which might well overstretch the management capacity of small to midsized companies. Moreover, management teams are challenged as their traditional corporate culture is disrupted by a new form of collaborative and interdisciplinary leadership, with a growing number of competing consultants and freelancers offering their services and products to manage this change.

The Human/Machine Relationship

Originally focused on mathematics and artificial neural networks, AI research and its quest to reach human-like intelligence – also defined as Artificial General Intelligence (AGI) – has expanded into neuroscience and behavioural research. The exploration of the functionality of the human brain has only just begun, and closing the gap between artificial and biological intelligence challenges many existing scientific theories. Along this path, different perspectives on human/machine interaction have evolved:

The technology-centric perspective, whereby AI will soon outperform humankind in all areas, with the primary concern being machines reaching superintelligence. If AI performs at a superhuman level, human involvement in decision-making can only weaken or slow down performance. At some point, humans will become incapable of being involved, as they can no longer follow the computer’s highly intelligent line of reasoning. Beyond science fiction, this perspective also raises many philosophical and ethical issues.

The human-centric perspective, where humans remain superior to AI in a social and societal context. Human-centrists are convinced that human and artificial intelligence are different by nature and therefore cannot substitute for one another. Following this line of reasoning, AI systems function well in the environments in which they are trained, yet become brittle in novel, unstructured situations – which, according to this perspective, represent the majority of problems to be solved.

The collective intelligence-centric perspective, claiming that true intelligence lies in the collaboration of both human and artificial agents. Collective intelligence emerges from the collaboration and collective efforts of individuals applying AI tools, allowing them to collectively act more intelligently than individual entities. Collective intelligence has yielded novel applications such as crowdsourcing for developing software (e.g., Linux) or encyclopaedias (e.g., Wikipedia). Considering today’s business environment and the generation of value, this perspective has become the generally accepted standard, also fostering a trend towards decentralisation in support of local communities.

Future Trends

The endeavour to create AI by computational means began in 1956, when a dozen scientists gathered at Dartmouth College to explore the application of computers for replicating human intelligence. Since then, AI has gone through several cycles of failure. That is why – despite six decades of research and development – we still do not have an AI that rivals the cognitive abilities of a human child, let alone one that can think like an adult. Scientists and experts are divided on the question of how many years it will take to reach AGI, but most agree that we are at least a decade away. Some scientists – Professor Tenenbaum of MIT being one example – believe that the path forward is hybrid artificial intelligence, a combination of neural networks and rule-based systems. Other scientists believe that neural network models will eventually reach the reasoning capabilities they currently lack. Still others focus on self-supervised learning, a branch of deep learning that aims to acquire experience of, and reason about, the world in much the same way children do. Next to these academic research efforts, a wide spectrum of start-ups is driving the future of AI as well. Born from Alphabet’s ‘moonshot’ division, as one example, the company NextSense aims to sell earbuds that can collect heaps of neural data – and uncover the mysteries of gray matter as a source of knowledge in support of human health applications.
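
To make the hybrid idea concrete, here is a minimal sketch in Python – a toy illustration under its own assumptions, not code from MIT or any named project: a stand-in neural scorer proposes a label, while a symbolic rule layer encodes hard constraints the network may not have learned from data.

  # Minimal, hypothetical sketch of a hybrid (neural + rule-based) pipeline.
  import numpy as np

  def neural_scorer(features):
      """Toy stand-in for a trained network: a fixed linear layer + softmax."""
      weights = np.array([[1.5, -0.5],
                          [-1.0, 2.0]])   # hypothetical 'learned' weights
      logits = features @ weights
      exp = np.exp(logits - logits.max())
      return exp / exp.sum()

  RULES = [
      # (condition on features, forced label): explicit symbolic knowledge.
      (lambda f: f[0] > 3.0, "anomaly"),
  ]

  def hybrid_predict(features):
      for condition, label in RULES:      # 1. hard symbolic constraints first
          if condition(features):
              return label
      probs = neural_scorer(features)     # 2. statistical pattern recognition
      return ["normal", "suspicious"][int(np.argmax(probs))]

  print(hybrid_predict(np.array([1.0, 0.2])))  # neural path -> 'normal'
  print(hybrid_predict(np.array([5.0, 0.1])))  # rule path   -> 'anomaly'

The division of labour is the point: the rules keep the system predictable in critical edge cases, while the learned component handles the fuzzy middle ground.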

Adding quantum mechanics to the equation – with new quantum algorithms that treat nature as a non-linear, probabilistic space – opens the door to a new class of problem-solving with computational power far exceeding conventional von Neumann systems. For decades, mathematicians have attempted to use linear equations as a key to unlock non-linear, differential ones. In the future, quantum computers with hundreds of qubits and new quantum algorithms may be capable of cracking non-linear equations and non-linear dynamics without resorting to the approximation of linearity. As digital data, converted from analogue sensors in real time, continues to grow at an exponential rate, we may have no choice but to rely on quantum mechanics to process the resulting massive influx of data.
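
A textbook, non-quantum example illustrates what ‘using linear equations to unlock non-linear ones’ means; the logistic equation below is our own illustration, not drawn from the quantum-computing literature. The logistic equation $\dot{x} = x(1-x)$ is non-linear, but the substitution $y = 1/x$ yields

  $\dot{y} = -\dot{x}/x^{2} = -(x - x^{2})/x^{2} = 1 - y,$

a linear equation with the closed-form solution $y(t) = 1 + (y_{0} - 1)e^{-t}$, and hence $x(t) = 1/y(t)$. Quantum algorithms for non-linear dynamics, such as those based on Carleman linearization, generalize this trick by embedding a non-linear system into a much larger linear one that a quantum linear-systems solver can then attack.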

Conclusion: Think Global, Act Local

A principal feature of innovative regions is their capacity to create environments favourable to turning knowledge into new products and services: attracting venture capital, building organisational learning, integrating skills and, as a result, generating innovation. A widening generation gap in technological comprehension can only be overcome through a mindset of continuous education and the establishment of a local, decentralized and collaborative platform where experiences and knowledge can be openly discussed. This exchange of information should not be limited to purely economic issues; ethics and the misuse of technology have just as much impact on our future wellbeing. Hence, organisations as well as individuals are challenged to constantly ‘reinvent’ themselves, so far with no end in sight. The mission of LAC² is to support its members in this endeavour. So become engaged and join LAC²: https://www.lac2.org
