If you needed more proof that GenAI is prone to making things up, Google’s Gemini chatbot, formerly Bard, thinks the 2024 Super Bowl has already happened. It even has the (fictional) stats to back it up.
Per a Reddit thread, Gemini, powered by Google’s GenAI models of the same name, is answering questions about Super Bowl LVIII as if the game had ended yesterday — or weeks ago. Like many bookies, it favors the Chiefs over the 49ers (sorry, San Francisco fans).
Gemini embellishes quite creatively, in at least one instance attributing to Kansas City Chiefs quarterback Patrick Mahomes 286 rushing yards, two touchdowns and an interception, versus Brock Purdy’s 253 rushing yards and one touchdown.
It’s not just Gemini. Microsoft’s Copilot chatbot also insists the game is finished, and provides erroneous citations to back up the claim. But, perhaps reflecting a San Francisco bias, it says the 49ers, not the Chiefs, were victorious “with a final score of 24-21.”
Copilot is powered by a GenAI model similar, if not identical, to the model underpinning OpenAI’s ChatGPT (GPT-4). But in my testing, ChatGPT was reluctant to make the same mistake.
This is all pretty silly — and has probably been resolved by now, although this reporter had no luck replicating Gemini’s responses from the Reddit thread. (I’d be shocked if Microsoft weren’t working on a fix, too.) But it also illustrates the major limitations of today’s GenAI — and the dangers of relying too heavily on it.
GenAI models have no real intelligence. Typically fed vast numbers of examples drawn from the public web, AI models learn how likely data (e.g. text) is to occur based on patterns, including the context of any surrounding data.

This probability-based approach works remarkably well at scale. But while the range of words and their probabilities is likely to result in text that makes sense, it’s far from certain. LLMs can generate something that is grammatically correct but nonsensical, for instance, such as a claim about the Golden Gate. Or they can spout untruths, propagating inaccuracies in their training data.
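To make the point concrete, here is a toy sketch (not any real model’s code, and with made-up probabilities) of the next-token sampling the paragraph above describes: the model assigns each candidate continuation a probability and samples from that distribution, so a fluent but false continuation can be chosen just as mechanically as a true one.

```python
import random

# Hypothetical probabilities a model might assign to candidate next
# tokens after the context "The Chiefs ... the Super Bowl". The model
# only knows likelihoods, not whether the game has actually happened.
next_token_probs = {
    "won": 0.40,        # fluent, but asserts an event that hasn't occurred
    "will": 0.35,       # fluent and hedged
    "dominated": 0.20,  # fluent, also a false assertion
    "levitated": 0.05,  # unlikely, yet still sampleable
}

def sample_next_token(probs, rng):
    """Draw one token in proportion to its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    # Weighted sampling: low-probability tokens are picked occasionally,
    # and nothing here checks the claim against reality.
    return rng.choices(tokens, weights=weights, k=1)[0]

token = sample_next_token(next_token_probs, random.Random(42))
print(token)
```

The sketch shows why “grammatically correct but wrong” is the natural failure mode: every candidate above produces fluent English, and the sampler has no notion of truth to distinguish between them.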
Super Bowl disinformation is certainly not the most harmful example of GenAI going off the rails. That distinction probably lies with endorsing torture, reinforcing ethnic and racial stereotypes or writing convincingly about conspiracy theories. Still, it’s a useful reminder to double-check the statements of GenAI bots. There’s a decent chance they’re not true.