
MIT researchers develop small-scale language model more efficient than larger ones

The self-learning language model learns from its own predictions, eliminating the need for annotated training data.

June 02, 2023 / 17:55 IST

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a small, self-learning language model that surpasses large-scale language models.

The model's algorithm, called Simple Pseudo-Label Editing (SimPLE), allows it to learn from its own predictions, eliminating the need for annotated training data.
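SimPLE's specifics are described in the CSAIL team's paper; as a rough sketch of the general self-training idea it builds on, a model can pseudo-label unlabeled data with its own predictions, keep only the confident ones, and retrain. The toy nearest-centroid classifier below is purely illustrative (the function names, threshold, and data are invented here, not the paper's method):

```python
# Toy illustration of self-training with pseudo-labels (NOT the SimPLE
# algorithm itself): a nearest-centroid classifier labels unlabeled points,
# keeps only confident pseudo-labels, and retrains on the enlarged set.

def centroid(points):
    """Mean of a list of 1-D points."""
    return sum(points) / len(points)

def fit(labeled):
    """labeled: dict mapping class -> list of points. Returns class -> centroid."""
    return {cls: centroid(pts) for cls, pts in labeled.items()}

def predict(model, x):
    """Return (class, margin): nearest centroid and distance gap to the runner-up."""
    dists = sorted((abs(x - m), cls) for cls, m in model.items())
    return dists[0][1], dists[1][0] - dists[0][0]

def self_train(labeled, unlabeled, threshold=1.0, rounds=3):
    """Iteratively adopt confident pseudo-labels, then refit the model."""
    labeled = {cls: list(pts) for cls, pts in labeled.items()}
    pool = list(unlabeled)
    for _ in range(rounds):
        model = fit(labeled)
        still_unlabeled = []
        for x in pool:
            cls, margin = predict(model, x)
            if margin >= threshold:          # confident: adopt the pseudo-label
                labeled[cls].append(x)
            else:                            # uncertain: leave for a later round
                still_unlabeled.append(x)
        pool = still_unlabeled
    return fit(labeled)

# Two labeled examples per class; the rest are pseudo-labeled automatically.
model = self_train({"low": [0.0, 1.0], "high": [9.0, 10.0]},
                   unlabeled=[0.5, 1.5, 8.5, 9.5])
```

The confidence threshold is the key knob: adopting every prediction would let early mistakes compound, while keeping only high-margin pseudo-labels lets the model grow its training set without hand annotation.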



As reported by VentureBeat, the team claims the model outperforms larger, better-known models such as OpenAI's GPT-4 and Google's LaMDA across various tasks.