Scientists at MIT have devised a way for large language models to keep learning on the fly—a step towards building AI that continually improves itself.
The MIT scheme, called Self-Adapting Language Models (SEAL), involves an LLM that learns to generate its own synthetic training data and corresponding update procedures based on the input it receives.
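The outer loop described above can be sketched in miniature. This is a toy illustration under stated assumptions: the generation, fine-tuning, and reward functions below are hypothetical stand-ins, not the MIT team's implementation.

```python
# Toy sketch of a SEAL-style self-adaptation loop. All helpers here are
# illustrative stubs; the real system uses an LLM and actual fine-tuning.

def generate_self_edit(model, context):
    # In SEAL, the LLM itself writes synthetic training data ("self-edits")
    # and update directives; here we fake it with a simple restatement.
    return f"Fact restated for training: {context}"

def apply_update(model, self_edit):
    # Stand-in for a lightweight fine-tuning step on the self-edit.
    updated = dict(model)
    updated["knowledge"] = model["knowledge"] + [self_edit]
    return updated

def evaluate(model, probe):
    # Reward signal: did the update help answer a downstream question?
    return 1.0 if any(probe in fact for fact in model["knowledge"]) else 0.0

def seal_step(model, context, probe):
    self_edit = generate_self_edit(model, context)
    candidate = apply_update(model, self_edit)
    reward = evaluate(candidate, probe)
    # In SEAL, a positive reward reinforces the self-edit-generation policy
    # via RL; here we simply keep the updated weights when the edit helped.
    return (candidate, reward) if reward > 0 else (model, reward)

model = {"knowledge": []}
model, reward = seal_step(model, "SEAL was proposed at MIT", "SEAL")
print(reward)  # 1.0 — the self-edit let the model answer the probe
```

The key design point the sketch preserves is that the model, not a human, authors its own training update, and a downstream reward decides whether that update sticks.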
However, SEAL still suffers from “catastrophic forgetting,” in which ingesting new information degrades or erases older knowledge.
Still, for all its limitations, SEAL is an exciting new path for further AI research—and it may well find its way into future frontier AI models.