The original version of this story appeared in Quanta Magazine.
A team of computer scientists has created a nimbler, more flexible type of machine learning model. The trick: It must periodically forget what it knows. And while this new approach won’t displace the huge models that undergird the biggest apps, it could reveal more about how these programs understand language.
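The article does not spell out how the forgetting works, but one common way to implement the idea is to periodically re-initialize part of a network during training. The sketch below is a minimal, illustrative training loop in that spirit, assuming the forgetting step means wiping a small language model's token-embedding layer at fixed intervals; the model architecture, reset interval, and data are hypothetical placeholders, not the researchers' actual method.

```python
# Minimal sketch of "periodic forgetting" during training (illustrative only).
# Assumption: forgetting = re-initializing the embedding layer every N steps.
import torch
import torch.nn as nn


class TinyLanguageModel(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        x = self.embedding(tokens)          # tokens -> learned vectors
        out, _ = self.rnn(x)                # contextualize the sequence
        return self.head(out)               # predict the next token


def train_with_periodic_forgetting(model, batches, forget_every=100, lr=1e-3):
    """Next-token training that resets the embedding table every `forget_every` steps."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for step, tokens in enumerate(batches):
        inputs, targets = tokens[:, :-1], tokens[:, 1:]
        logits = model(inputs)
        loss = loss_fn(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # The "forgetting" step: wipe the embedding weights while keeping the
        # rest of the network, so deeper layers must relearn to work with
        # fresh token representations.
        if (step + 1) % forget_every == 0:
            nn.init.normal_(model.embedding.weight, mean=0.0, std=0.02)


if __name__ == "__main__":
    # Random token batches stand in for real training data.
    model = TinyLanguageModel()
    fake_batches = (torch.randint(0, 1000, (8, 32)) for _ in range(500))
    train_with_periodic_forgetting(model, fake_batches)
```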
The new research marks “a significant advance in the field,” said Jea Kwon, an AI engineer at the Institute for Basic Science in South Korea.
The AI language engines in use today are mostly powered by artificial neural networks. Each “neuron” in these networks is a small mathematical function: it receives signals from other neurons, performs a calculation, and passes its own signal on to neurons in the next layer.
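To make that picture concrete, here is a toy example of a single artificial neuron, written as an illustration rather than as anything from the research itself: it weights its inputs, sums them with a bias, and squashes the result with a nonlinearity before passing it along.

```python
# A schematic single "neuron": weighted sum of inputs plus a bias,
# passed through a sigmoid nonlinearity.
import math


def neuron(inputs, weights, bias):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation


# Example: three incoming signals with hypothetical weights.
print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```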