News

New Component for Brain-Inspired Computing

Posted by Peter Rudin on 27. May 2022 in News

Researchers from ETH Zurich, the University of Zurich and Empa have developed a new material for electronic circuits that emulate the human brain, an area of research known as neuromorphic computing.

Much like neurons, which handle both data storage and data processing in the brain, the scientists want to combine storage and processing in a single electronic component, making it more efficient at machine-learning tasks.
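The article does not spell out the circuit details of the new component. As a purely generic illustration of the in-memory-computing idea behind such devices (all values below are made up for the example, not taken from the ETH/UZH/Empa work), the following Python sketch shows how weights stored as conductances in a crossbar can both hold the data and carry out the multiply-accumulate step of a neural-network layer.

# Generic illustration of in-memory computing, not the ETH/UZH/Empa device:
# conductances stored in a crossbar both hold the layer's weights and
# perform the multiply-accumulate step when voltages are applied.
import numpy as np

rng = np.random.default_rng(0)

# Conductance matrix G [siemens]: one value per crossbar junction.
# Storing G *is* storing the layer's weights.
G = rng.uniform(0.0, 1e-6, size=(4, 3))   # 4 input rows, 3 output columns

# Input voltages applied to the rows [volts].
V = np.array([0.2, 0.0, 0.5, 0.1])

# Kirchhoff's current law sums the per-junction currents on each column:
# I_j = sum_i V_i * G_ij, i.e. a matrix-vector product computed "in memory".
I = V @ G
print("column currents (A):", I)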

“These components come closer to real neurons than previous ones. As a result, they help researchers to better test hypotheses in neuroinformatics and hopefully gain a better understanding of the computing principles of real neuronal circuits in humans and animals”, says Prof. Giacomo Indiveri.

Read More

Do Singularities Exist In Nature?

Posted by Peter Rudin on 13. May 2022 in News

Singularity as a term is used in many contexts, including within mathematics. The word also appears in speculation about artificial intelligence, for example to describe the day when machines become more intelligent than humans.

The term singularity, as used in our mathematical models, can lead to a new understanding of physics. Matter itself, however, is not a solid thing: it is made of molecules, and it takes quantum physics to explain why Newton’s Law of Gravity does not apply at that scale. But somehow, and we do not know how, nature finds a way to get around the singularity, at least for now.
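As a textbook illustration of what a singularity in a mathematical model looks like (the example is ours, not the article’s), consider Newton’s law of gravitation, whose predicted force grows without bound as the separation r between two masses shrinks to zero:

\[
  F(r) = G\,\frac{m_1 m_2}{r^2},
  \qquad
  F(r) \to \infty \quad \text{as } r \to 0 .
\]

The infinity lives in the equation rather than in any measurement; at very small separations Newton’s law is simply no longer the right description, which is where quantum physics is expected to take over.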

Hence, we should embrace a mindset that it is okay not to find answers to all of our questions. After all, not knowing is what propels us to keep on looking.

Read More

Brain Waves Are Key To How We Process Information

Posted by Peter Rudin on 29. April 2022 in News

For years, the brain has been thought of as a biological computer that processes information through traditional circuits, whereby data zips straight from one cell to another.

A new study led by the Salk Institute shows that there is also a second, very different way that the brain parses information: through the interactions of waves of neural activity.

The researchers hypothesize that the different kinds of waves generated interact with one another; their experiments show that the brain responds differently to seeing the same thing under different conditions.

This may explain how the brain’s response can shift from day to day, the researchers say.
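The study’s models are considerably more detailed, but a toy sketch (assumed here purely for illustration, not taken from the Salk paper) conveys the basic point: the same stimulus-evoked input, added on top of an ongoing wave at different phases, produces different summed responses.

# Toy illustration only, not the Salk model: an identical stimulus-evoked
# bump of activity added on top of an ongoing wave at two different phases
# yields two different summed responses.
import numpy as np

t = np.linspace(0.0, 1.0, 1000)               # time in seconds
stimulus = np.exp(-((t - 0.5) ** 2) / 0.002)  # identical input bump at t = 0.5 s

def ongoing_wave(phase):
    """A 10 Hz background wave with a given phase offset."""
    return 0.8 * np.sin(2.0 * np.pi * 10.0 * t + phase)

response_a = ongoing_wave(0.0) + stimulus      # stimulus arrives at one phase
response_b = ongoing_wave(np.pi) + stimulus    # same stimulus, opposite phase

# The summed activity around the stimulus differs purely because of the wave state.
print(response_a[500], response_b[500])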

Read More

Biotechnology Could One Day Resolve Blindness

Posted by Peter Rudin on 15. April 2022 in News

Researchers from the University of Oregon believe that tiny electrodes could someday be implanted into the eye to restore sight in people with macular degeneration or other vision disorders.

A new design for eye and brain implants draws its inspiration from nature. The researchers have grown neurons on a fractal-patterned electrode, one that mimics the repeating branching pattern in which neurons naturally grow.

After running several different computer simulations, the researchers found evidence that neurons connect better to a fractal-patterned electrode, allowing better signal transmission between the implant and the brain.
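The summary does not specify the exact electrode geometry. As a stand-in, the short sketch below generates an H-tree, one standard example of a self-similar, repeating branching pattern of the kind such an electrode mimics; the actual geometry used in the study may differ.

# Illustrative only: a recursive H-tree, a simple self-similar branching
# pattern of the kind a fractal-patterned electrode mimics.
def h_tree(x, y, length, depth, segments):
    """Append the line segments of an H-tree of the given recursion depth."""
    if depth == 0:
        return
    half = length / 2.0
    # Horizontal bar of the "H".
    segments.append(((x - half, y), (x + half, y)))
    # Two vertical bars at the ends.
    segments.append(((x - half, y - half), (x - half, y + half)))
    segments.append(((x + half, y - half), (x + half, y + half)))
    # Recurse from the four tips with half the length.
    for (nx, ny) in [(x - half, y - half), (x - half, y + half),
                     (x + half, y - half), (x + half, y + half)]:
        h_tree(nx, ny, half, depth - 1, segments)

segments = []
h_tree(0.0, 0.0, 1.0, depth=4, segments=segments)
print(f"{len(segments)} segments in a depth-4 H-tree")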

Read More

Why AI-Democratization Will Power Up the Enterprise

Posted by Peter Rudin on 1. April 2022 in News

Along with all the analytical and operational gains artificial intelligence (AI) brings to the enterprise, there is another, more fundamental change taking place. AI is emerging as the third member in the business relationship, providing key support for deal-making and cooperative engagement.

Democratized AI will one day be considered the ordinary way to interact with the universe, with digital assets available on demand and no involvement from IT. This, in turn, should drive up the value of both human and digital resources by shifting the focus to more data-driven decision-making and innovation.

With democratization, we may finally get some reciprocity from our machines as they come to understand how to engage with us.

Read More

A Meta Prototype To Build Virtual Worlds

Posted by Peter Rudin on 18. March 2022 in News

Meta (formerly Facebook) is testing an artificial intelligence system that lets people build parts of virtual worlds by describing them.

In a pre-recorded video, Zuckerberg walked viewers through the process of making a virtual space with Builder Bot, starting with commands like “let’s go to the beach,” which prompts the bot to create a cartoonish 3D landscape of sand and water around him.

Zuckerberg acknowledged that sophisticated interactivity, including the kinds of usable virtual objects many VR users take for granted, poses major challenges. Eventually, Zuckerberg promised, you will be able to create nuanced worlds to explore and share experiences with others using just your voice.

Read More

‘Math Neurons’ Identified In The Brain

Posted by Peter Rudin on 4. March 2022 in News

The brain has neurons that fire specifically during certain mathematical operations. This is shown by a recent study conducted by the Universities of Tübingen and Bonn. The findings indicate that some of the neurons detected are active exclusively during additions, while others are active during subtractions.

The cells discovered encode a mathematical instruction for action. They do not care whether the calculation instruction is written down as a word or a symbol. Figuratively speaking, it is as if the +/- keys on the calculator were constantly changing their location. Researchers also refer to this as “dynamic coding.”

This study marks an important step towards a better understanding of one of our most important symbolic abilities, namely calculating with numbers.

Read More

This AI Learns to Build New AI-Systems Faster

Posted by Peter Rudin on 18. February 2022 in News

Although machine learning is partially automated, building the algorithms remains far more hands-on. It takes a good deal of skill and experience to tweak and tune a neural network’s internal settings so that it can learn a task at a high enough level to be useful.

According to a paper published on arXiv.org, a group of researchers has designed a “hypernetwork” algorithm called GHN-2 that can predict and set the parameters of an untrained neural network much more efficiently than current methods.

It took GHN-2 less than a second to predict the parameters of a neural network, whereas traditionally trained algorithms took some 10,000 times longer to reach the same level of performance; this cuts energy and compute costs significantly.
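GHN-2 itself is a far more elaborate, graph-based hypernetwork trained across many candidate architectures. The minimal PyTorch sketch below (our own simplification, with made-up layer sizes) only shows the core idea of a hypernetwork: one network emits the parameters that a second, otherwise untrained network then uses for its forward pass.

# Minimal hypernetwork sketch, not GHN-2 itself: one network predicts the
# weights that a second, otherwise untrained, network uses for inference.
import torch
import torch.nn as nn
import torch.nn.functional as F

IN, HIDDEN, OUT = 8, 16, 4                  # shape of the "target" network
N_TARGET_PARAMS = IN * HIDDEN + HIDDEN * OUT

class HyperNet(nn.Module):
    """Maps a small embedding of the target architecture to a flat weight vector."""
    def __init__(self, embed_dim=32):
        super().__init__()
        self.embed = nn.Parameter(torch.randn(embed_dim))   # stand-in for an architecture encoding
        self.body = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, N_TARGET_PARAMS),
        )

    def forward(self):
        return self.body(self.embed)

def target_forward(x, flat_weights):
    """Run the target MLP using the weights predicted by the hypernetwork."""
    w1 = flat_weights[: IN * HIDDEN].view(HIDDEN, IN)
    w2 = flat_weights[IN * HIDDEN:].view(OUT, HIDDEN)
    return F.linear(F.relu(F.linear(x, w1)), w2)

hyper = HyperNet()
x = torch.randn(5, IN)              # a batch of 5 dummy inputs
y = target_forward(x, hyper())      # target net runs with *predicted* parameters
print(y.shape)                      # torch.Size([5, 4])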

Read More

How Microsoft’s Takeover Will Drive The Metaverse

Posted by Peter Rudin on 4. February 2022 in News

Microsoft was positioning itself as one of the pioneers of the metaverse even before its US$75 billion deal to buy online gaming giant Activision Blizzard.

In what has been referred to as the age of acceleration, we are soon going to have mature versions of virtual reality, artificial intelligence and devices interacting with computers through touch sensors.

When these are integrated, they will challenge and change not only how we work, learn and live, but also our conception of reality and what it means to be human.

While some see this as a new interconnected world of freedom, others fear a digital dystopia where we are seduced, stupefied and puppeteered in the glass cages of a new, subtle and seductive form of capitalism.

Read More

ETH-Software to Speed up Training of Neural Networks

Posted by Peter Rudin on 21. January 2022 in News

When working with neural networks and deep-learning applications, training is the single most time-consuming and costly process. In fact, a single training run of a high-level language model can easily cost around ten million dollars.

Torsten Hoefler, a professor at ETH Zurich and leader of the ETH Scalable Parallel Computing Lab, together with two computer scientists from his team, has developed software to run on one of the most powerful supercomputers currently being installed.

Their new software, called NoPFS (Near-optimal Pre-Fetching System), achieves this speed-up through “clairvoyance”: it exploits the pseudo-random, and therefore predictable, order in which training data is accessed more effectively than other tools available to date, cutting training time in half.
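The NoPFS software itself is far more elaborate than a few lines of code. The Python sketch below (our own simplified illustration, not the ETH implementation) only shows the “clairvoyance” idea: because each epoch’s shuffle is driven by a known pseudo-random seed, the exact future access order can be computed in advance and upcoming samples prefetched before the trainer asks for them.

# Sketch of the "clairvoyance" idea only, not the NoPFS implementation:
# the shuffle seed is known, so future access order is predictable and
# the next samples can be pulled into a local cache ahead of time.
import random

NUM_SAMPLES = 10
BASE_SEED = 1234

def access_order(epoch):
    """The trainer's deterministic, shuffled sample order for a given epoch."""
    order = list(range(NUM_SAMPLES))
    random.Random(BASE_SEED + epoch).shuffle(order)
    return order

cache = {}

def prefetch(epoch, start, lookahead=3):
    """Load the next `lookahead` samples of this epoch into the cache early."""
    for idx in access_order(epoch)[start:start + lookahead]:
        cache.setdefault(idx, f"sample-{idx} loaded from slow storage")

# Training loop: samples are already cached by the time they are needed.
for epoch in range(2):
    order = access_order(epoch)
    for step, idx in enumerate(order):
        prefetch(epoch, step + 1)          # look ahead along the known order
        sample = cache.pop(idx, f"sample-{idx} loaded on demand (cache miss)")
        # ... run the training step on `sample` ...
print("done")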

Read More