
OpenAI’s Q* is alarming for a different reason

When AI systems start solving problems, the temptation to give them more responsibility is predictable. That warrants greater caution

December 04, 2023 / 16:15 IST

When news stories emerged last week that OpenAI had been working on a new AI model called Q* (pronounced “q star”), some suggested this was a major step toward powerful, humanlike artificial intelligence that could one day go rogue. What’s more certain: The hype around Q* has boosted excitement about the company’s engineering prowess, just as it’s steadying itself from a failed board coup. Peaks of AI excitement over milestones have taken the public for a ride plenty of times before. The real warning we should take from Q* is the direction in which these systems are progressing. As they get better at reasoning, it will become more tempting to give such tools greater responsibilities. More than any concerns about AI annihilation, that alone should give us pause.

OpenAI hasn’t confirmed what Q* is, with reinstated Chief Executive Officer Sam Altman only describing it as an “unfortunate leak,” but from media descriptions, it sounds similar to another system Alphabet Inc.’s Google is working on. Gemini, Google's big new competitor to ChatGPT, won’t only generate text and images but will also excel at planning and strategizing, according to Google DeepMind CEO Demis Hassabis. DeepMind famously created an AI model that beat champion Go players, and Gemini will use some of those techniques for problem solving.

With Q*, OpenAI seems to be pushing ChatGPT in a similar direction: according to multiple reports, Q* can perform grade-school math. That might sound unimpressive, but combining math capabilities with software that can also write text and create imagery is unique, and until now ChatGPT has struggled to do equations correctly. If it could, that might correlate with an improvement in problem-solving. Math requires understanding a problem and figuring out the steps to solve it before carrying out all the right calculations. That process is a little closer to how we humans think and solve problems.

Early versions of Gemini can already execute some tasks that require planning, according to someone with access to Google’s forthcoming tool who didn’t want to be named due to confidentiality commitments.