Author: Peter Rudin

Transformers to Improve Memory, a Paradigm Shift in AI?

Posted by Peter Rudin on 23. September 2022 in Essay

Our memory is engaged when we try to distinguish between the mental and the physical world. The brain does not represent information – it constructs it. Transformers use a mechanism called self-attention to detect textual relationships in a series of words and sentences that depend on each other.
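
As a rough illustration (not taken from the essay), the following NumPy sketch shows scaled dot-product self-attention; the embedding sizes and random inputs are invented stand-ins:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors.

    X: (seq_len, d_model) token embeddings; Wq, Wk, Wv: learned projections.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # each token becomes a weighted mix of all tokens

# Toy example: 4 tokens with 8-dimensional embeddings (random stand-ins).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # -> (4, 8)
```

The attention weights make explicit how strongly each word attends to every other word, which is the relationship-detection the essay refers to.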

Neuroscience research suggests that transformers can mimic brain functionality. Improving the accuracy of memory with a neural foundational model implemented in an intelligent machine might indeed signal a paradigm shift in AI.

To overcome this complexity and move from a machine-centered to a human-centered AI, Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) suggests more multidisciplinary research.

Yann LeCun’s Vision for the Future of Machine Learning

Posted by Peter Rudin on 16. September 2022 in News

Professor Yann LeCun, faculty member and founding Director of NYU’s Center for Data Science (CDS) and Chief AI Scientist at Meta’s AI lab, wants machines to operate with common sense.

He has built an early version of his world model that is capable of basic object recognition and is currently working on teaching it to make predictions.

LeCun’s new approach is based on a neural network that would be able to process the world at various levels of detail. This network would focus only on those features that are relevant for the task at hand.
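
LeCun’s actual proposal (a joint-embedding predictive architecture) is far richer than anything shown here. The PyTorch sketch below only illustrates the core idea of predicting in an abstract feature space instead of raw pixels; all module sizes and names are invented:

```python
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    """Predicts the abstract representation of the next observation rather
    than the raw observation, so irrelevant detail can be ignored."""
    def __init__(self, obs_dim=32, latent_dim=8, action_dim=4):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, latent_dim)           # keeps task-relevant features
        self.predictor = nn.Linear(latent_dim + action_dim, latent_dim)

    def forward(self, obs, action):
        z = self.encoder(obs)
        return self.predictor(torch.cat([z, action], dim=-1))   # predicted next latent state

model = TinyWorldModel()
obs, next_obs, action = torch.randn(1, 32), torch.randn(1, 32), torch.randn(1, 4)
z_next_pred = model(obs, action)
# Train by matching the prediction to the encoding of what actually happened:
loss = nn.functional.mse_loss(z_next_pred, model.encoder(next_obs))
```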

His vision places the world model and the configurator as two key elements in a larger system or cognitive architecture that would include other neural networks. 

Our Brain is not a Computer, Perhaps a Transducer?

Posted by Peter Rudin on 9. September 2022 in Essay

The computational model, comparing the brain to the computer, has been the most prominent metaphor in neuroscience and AI for decades. It implies that computers are closely aligned with the functionality of the human brain.

A new theory of how the brain works — the neural transduction theory — might upend everything we know about consciousness and the universe itself. According to this theory, our bodies are completely encased by transducers.

The arguments for advancing AI-research from a computer to a transducer metaphor are intriguing, especially with respect to causality. Our capacity to adapt will remain the limiting factor unless the quest for survival opens a new chapter in human evolution.

AI-Model Can Detect Parkinson’s from Breathing Patterns

Posted by Peter Rudin on 2. September 2022 in News

Parkinson’s disease is notoriously difficult to diagnose, as diagnosis relies primarily on motor symptoms such as tremors, which often emerge several years after the disease has started to spread.

In a large study, MIT researchers demonstrated that an artificial-intelligence assessment for Parkinson’s can be performed every night at home, while the person is asleep and without touching their body.

The team developed a device that resembles a Wi-Fi router; it emits radio signals and analyses their reflections off the breathing individual.

The breathing signal is then fed to a neural network to assess whether the individual has Parkinson’s, improving the chance of successful treatment at an early stage.
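
The architecture of the MIT network is not described in this summary. Purely as an illustrative sketch of the final step (a breathing signal fed to a classifier), a minimal 1-D convolutional model might look like this; all names, shapes, and layers are invented:

```python
import torch
import torch.nn as nn

class BreathingClassifier(nn.Module):
    """Illustrative binary classifier over a one-night 1-D breathing trace."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),                 # logit: Parkinson's vs. not
        )

    def forward(self, signal):                # signal: (batch, 1, samples)
        return self.net(signal)

model = BreathingClassifier()
night = torch.randn(1, 1, 4096)               # stand-in for a nightly recording
prob = torch.sigmoid(model(night))            # estimated probability of Parkinson's
```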

With Curiosity Towards a New AI: The Issue of Learning

Posted by Peter Rudin on 26. August 2022 in Essay

Animals and humans exhibit learning abilities and understandings of the world that are far beyond the capabilities of current AI and machine learning (ML) systems.

How is it possible for an adolescent to learn to drive a car in about 20 hours of practice, and for children to learn language from what amounts to very little exposure?

For many years, Artificial General Intelligence (AGI) has been the holy grail of AI-research, with little or no progress in overcoming problems related to causality. Based on a new approach of self-learning systems, with curiosity and common sense as drivers, we might finally achieve an AI that serves humans, as opposed to humans serving AI-machines.
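
One common way the machine-learning literature formalizes curiosity (not necessarily the formulation this essay has in mind) is to treat prediction error as an intrinsic reward, so the agent is drawn toward situations it cannot yet predict. A minimal sketch with made-up state vectors:

```python
import numpy as np

def curiosity_bonus(predicted_next, actual_next):
    """Intrinsic reward: large prediction error marks novel, 'interesting' states."""
    return float(np.mean((predicted_next - actual_next) ** 2))

rng = np.random.default_rng(0)
pred, actual = rng.normal(size=8), rng.normal(size=8)   # stand-in state vectors
external_reward, beta = 0.0, 0.1                        # beta scales the curiosity term
total_reward = external_reward + beta * curiosity_bonus(pred, actual)
```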

This Robot Dog Has an AI-Brain and Taught Itself to Walk

Posted by Peter Rudin on 19. August 2022 in News

Teaching robots to solve complex tasks in the real world is a foundational problem of robotics research. Current algorithms require too much interaction with the environment to learn successful behaviors, making them impractical for many real-world tasks.

A research team from Berkeley set out to solve this problem with a new algorithm called ‘Dreamer’. By constructing what is called a ‘world model’, Dreamer can estimate how likely a future action is to achieve its goal.
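
The published Dreamer algorithm is built around a learned recurrent latent dynamics model and is far more involved than this. As a minimal sketch of its central idea, rolling a policy forward inside the learned model instead of on the real robot, with every component stubbed out as a single linear layer:

```python
import torch
import torch.nn as nn

# Caricature of Dreamer's idea: imagine a trajectory inside the learned
# world model and score it, never touching the real robot. Sizes are invented.
latent, action = 16, 4
dynamics = nn.Linear(latent + action, latent)   # learned transition model
reward_head = nn.Linear(latent, 1)              # predicts reward from a latent state
policy = nn.Linear(latent, action)              # chooses an action from a latent state

z = torch.zeros(1, latent)                      # imagined starting state
imagined_return = 0.0
for _ in range(15):                             # 15-step imagined rollout
    a = torch.tanh(policy(z))
    z = dynamics(torch.cat([z, a], dim=-1))
    imagined_return = imagined_return + reward_head(z)
# Gradients of imagined_return w.r.t. the policy improve behavior 'in imagination'.
```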

Beginning on its back, legs waving, the robot learns to flip itself over, stand up, and walk in an hour. A further ten minutes of harassment with a roll of cardboard is enough to teach it how to withstand and recover from being pushed around by its handlers.

New Hardware Offers Faster Computation for AI

Posted by Peter Rudin on 12. August 2022 in News

A new area of artificial intelligence called analog deep learning promises faster computation with a fraction of the energy usage. Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors.
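
The physical trick behind such analog processors, stated generally (the details of the MIT device are not in this summary), is that a crossbar of programmable resistors computes a matrix-vector product in a single step: voltages applied to the rows produce column currents that sum according to Ohm’s and Kirchhoff’s laws. A simulation of the idea with invented values:

```python
import numpy as np

# Simulated resistor crossbar: the conductances G (siemens) store the weights.
# Applying row voltages V yields column currents I[j] = sum_i G[i, j] * V[i],
# i.e. one whole matrix-vector multiply in a single analog step.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # programmable conductances (weights)
V = rng.uniform(0.0, 1.0, size=4)          # input voltages (activations)
I = G.T @ V                                # column currents = the product
# Training in analog deep learning means re-programming the conductances G.
```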

A multidisciplinary team of MIT researchers utilized a practical inorganic material in the fabrication process that enables their devices to run about 1 million times faster than the synapses in the human brain.

“Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore outperform them all”.

New Foundational AI-Model of Human Thought

Posted by Peter Rudin on 5. August 2022 in News

From research in neuroscience, cognitive science, and psychology, we know that the human brain is neither a huge homogeneous set of neurons nor a massive set of task-specific programs that each solves a single problem. Instead, it is a set of regions with different properties that support the basic cognitive capabilities that together form the human mind.

This Common Model of Cognition divides humanlike thought into multiple modules, with a short-term memory module at the center of the model. The other modules (perception, action, skills, and knowledge) interact through it.
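
As a toy rendering of that hub-and-spoke structure (the module names follow the article; the control loop and contents are invented), every module reads from and writes to a shared working-memory buffer and never calls another module directly:

```python
# Toy hub-and-spoke sketch of the Common Model: modules communicate
# only through the shared short-term (working) memory buffer.
working_memory = {}

def perception(wm): wm["percept"] = "red light ahead"
def knowledge(wm):  wm["fact"] = "red light means stop" if "percept" in wm else None
def skills(wm):     wm["intent"] = "brake" if wm.get("fact") else None
def action(wm):     return f"executing: {wm['intent']}" if wm.get("intent") else None

for module in (perception, knowledge, skills):
    module(working_memory)          # no module ever calls another directly
print(action(working_memory))       # -> executing: brake
```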

When used to model communication patterns in the brain, the Common Model yields more accurate results than leading models from neuroscience.

BLOOM, A Radical New Project To Democratize AI

Posted by Peter Rudin on 29. July 2022 in News

Over 1,000 AI researchers have created a multilingual large language model with 176 billion parameters, bigger than GPT-3—and they’re giving it out for free.

Users can pick from a selection of 46 languages and then type in requests for BLOOM to do tasks like writing recipes or poems, translating or summarizing texts, or writing programming code. In addition, AI developers can use the model as a foundation to build their own applications.
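
For developers, the weights are available through the Hugging Face `transformers` library. A minimal sketch, using the small public `bigscience/bloom-560m` checkpoint, since the full 176-billion-parameter model needs far more memory than a typical single machine:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

prompt = "Recette simple pour une tarte aux pommes:"   # BLOOM is multilingual
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```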

The group is also launching a new ‘Responsible AI License’, which is something like a terms-of-service agreement. It is designed to deter the use of BLOOM in high-risk sectors such as law enforcement or health care, or to harm people.

Researchers Have Trained an AI Model to ‘Think’ like a Baby

Posted by Peter Rudin on 22. July 2022 in News

Common-sense laws of the physical world are well understood by humans. Even two-month-old infants share this understanding. And yet we are unable to build an AI-system that can rival the common-sense abilities of a developing infant.

Learning through time and experience is important, but it is not the whole story. Princeton University researchers are contributing new insights to the age-old question of what may be innate in humans, and what may be learned.

Beyond that, they are defining new boundaries for what role perceptual data can play when it comes to artificial systems acquiring knowledge and how studies on babies can contribute to building better AI-systems that simulate the human mind.
