Our memory is engaged whenever we try to distinguish the mental world from the physical one. The brain does not represent information; it constructs it. Transformers rely on a mechanism called self-attention to detect relationships among words and sentences in a text that depend on one another.
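To make the self-attention idea concrete, here is a minimal sketch of scaled dot-product self-attention in plain NumPy. The weight matrices (Wq, Wk, Wv), the sequence length, and the embedding size are illustrative assumptions, not parameters of any particular transformer implementation.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token embeddings X."""
    Q = X @ Wq  # queries: what each token is looking for
    K = X @ Wk  # keys: what each token offers to others
    V = X @ Wv  # values: the information each token carries
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise relevance of every token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row becomes a probability distribution
    return weights @ V  # each output token is a relevance-weighted mix of all tokens

# Toy example: 4 tokens, embedding dimension 8 (illustrative sizes only)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

In this sketch, the attention weights encode how strongly each token depends on every other token in the sequence, which is the sense in which self-attention captures textual relationships.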
Neuroscience research suggests that transformers can mimic aspects of brain function. Improving the accuracy of memory with a neural foundation model implemented on an intelligent machine might indeed signal a paradigm shift in AI.
To manage this complexity and move from machine-centered to human-centered AI, Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) calls for more multidisciplinary research.