Artificial? Yes. Intelligent? Maybe: The Great AI Chatbot Race

Here’s something you don’t see every day: Microsoft Corp. is serving up a snazzy web search tool. And Google, whose search page has barely changed in 24 years, is racing to launch a just-as-cool revamped tool of its own in the next few weeks. The new chat-engine wars are officially underway, with Microsoft on Tuesday announcing its long-awaited integration of OpenAI’s ChatGPT bot into Bing and calling it a “copilot for the web.” Google published a blog post hours earlier about its own chatbot for search, called Bard. For Google in particular, it could be the riskiest strategic move it has made in years, a metaphorical leap off the couch the company has been relaxing on for far too long.

This scramble by two typically slow-moving tech giants — whose endgame represents nothing less than owning the next era of online search — will be messy and fraught with risk. Both companies are using AI systems that have been trained on billions of words on the public internet, but which can also give incorrect and even biased information. Google also risks provoking a backlash from the web publishers that are critical to its business.  

Update: Google is already grappling with chatbot accuracy problems after it emerged that an answer in one of Bard’s promotional examples was wrong. The company’s shares declined 8% following the news.

ChatGPT prompted a wave of admiration for its creative responses to human prompts when it launched last year, but there has since been growing concern about its grasp of facts. We don’t have statistics on how often ChatGPT gives incorrect information because OpenAI doesn’t provide those figures; it says only that the tool is getting better through regular updates. But the errors are frequent enough — occurring between 5% and 10% of the time I’ve used it — to make users increasingly wary of all its answers.

And despite strict filters that stop the bot from making political statements or hate speech, users of the popular forum Reddit have figured out how to goad ChatGPT into making expletive-laden tirades against its creators using social engineering tricks. The tool has also, inexplicably, used pro-Russian rhetoric when answering questions about the killing of civilians in Ukraine. 

That gives us a taste of the potential stumbles that lie ahead. At first blush, it looks like Google is being the bolder of the two in how it might deploy the new technology. In the examples Microsoft posted of the new Bing, the chatbot’s answers appear relegated to the side of the page, not front and center where the usual search results will remain. The bot’s answers also include footnotes and links to source material — something ChatGPT doesn’t do, and which makes Microsoft’s tool seem significantly more trustworthy.

Google’s own example of Bard, by contrast, showed a single summarized answer in the middle of the page, above its search results and with no footnotes, meaning it would be impossible for a user to identify the source. Why? Perhaps because Google is under pressure to move quickly. The company’s AI lab DeepMind is currently working on a chatbot called Sparrow that will cite sources in its answers to queries, according to a recent Time magazine interview with DeepMind Chief Executive Officer Demis Hassabis. But Google’s new tool isn’t using DeepMind’s technology, at least not yet. 

Neither company’s demonstration showed a warning that their tool could give inaccurate answers, as ChatGPT does. They should. “This technology is much more mature than it was a year ago, but still it’s far from being robust enough to be at the forefront,” says Ori Goshen, co-founder of AI21 Labs, an Israeli startup that sells access to a large-language model that competes with OpenAI’s GPT-3.5. “Big Tech has been pushed to make a move and the technology is not there yet.” 

Firms such as OpenAI have sought to make their language models more accurate by adding billions more parameters, settings used to help them predict words. But making such models a lot bigger doesn’t mean they’ll become a lot more accurate, and some researchers suspect diminishing returns for accuracy as the models grow. Ironing out those persistent few falsehoods could become a chronic challenge for both Microsoft and Google, in the same way that self-driving car technology has been “nearly there” for more than six years.             

Google also risks damaging its relationships with web publishers who rely on the company’s search page to bring clicks to their sites. The company has long had the technology that could have allowed it to introduce AI-powered answers to search queries many years ago — and even talked about doing so back in 2011 — but it never did, because that would disrupt a $150 billion business that hinges on getting people to click around Google’s search results. Now that it’s been forced to act, Google searches could draw eyes away from other websites, and that could provoke anger from publishers who’ve already accused the company of abusing its dominance in online advertising. (Disclaimer: Bloomberg LP is a web publisher that draws traffic from Google searches.) 

Publishers, though, can’t delay the inevitable march of technology. Other Big Tech firms like Meta Platforms Inc. and Amazon.com Inc. will be moving to make their own large-language-model plays, which will threaten publishers of writing, music, video and more over time. The shift to using such models will be similar to the one made more than a decade ago from desktops to mobile — one that companies like Facebook pulled off spectacularly while others, like Microsoft with its ill-fated purchase of Nokia, flubbed. This pivot will be a messy one, all the more so because the two leading players are rushing in headfirst.

(Updates after second paragraph with allegations of mistakes in Bard’s promotional examples.)

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”

Source link: https://www.washingtonpost.com/business/artificial-yes-intelligent-maybe-the-great-ai-chatbot-race/2023/02/08/1952d7d0-a7b4-11ed-b2a3-edb05ee0e313_story.html
