Tech giant Google has announced upgrades to its artificial intelligence technologies, just a day after rival OpenAI announced similar changes to its own offerings. Both companies are vying to dominate a quickly emerging market in which people can ask questions of computer systems — and get answers in the style of a human response.
It’s part of a push to make AI systems such as ChatGPT not just faster, but more comprehensive in their responses, so users get a complete answer right away without having to ask multiple follow-up questions.
On Tuesday, Google demonstrated how AI responses would be merged with some results from its influential search engine. As part of its annual developers conference, Google promised that it would start using AI to provide summaries for questions and searches, with at least some of them labelled as AI-generated at the top of the page.
Google’s AI-generated summaries are only available in the U.S. for now — but they will be written in conversational language.
Meanwhile, OpenAI’s newly announced GPT-4o system will be capable of conversational responses in a more human-like voice.
It gained attention on Monday for being able to interact with users while employing natural conversation with very little delay — at least in demonstration mode. OpenAI researchers showed off ChatGPT’s new voice assistant capabilities, including using new vision and voice capabilities to talk a researcher through solving a math equation on a sheet of paper.
At one point, an OpenAI researcher told the chatbot he was in a great mood because he was demonstrating “how useful and amazing you are.”
ChatGPT responded: “Oh stop it! You’re making me blush!”
“It feels like AI from the movies,” OpenAI CEO Sam Altman wrote in a blog post. “Talking to a computer has never felt really natural for me; now it does.”
AI responses aren’t always right
But researchers in the technology and artificial intelligence sector warn that as people get information from AI systems in more user-friendly ways, they also have to be careful to watch for inaccurate or misleading responses to their queries.
AI systems often don’t disclose how they came to a conclusion, since companies want to protect the trade secrets behind how they work. They also tend to show fewer raw results and less source data than traditional search engines.
This means, according to Richard Lachman, they can be more prone to providing answers that look or sound confident, even if they’re incorrect.
The associate professor of Digital Media at Toronto Metropolitan University’s RTA School of Media says these changes are a response to what consumers demand when using a search engine: a quick, definitive answer when they need a piece of information.
“We’re not necessarily looking for 10 websites; we want an answer to a question. And this can do that,” said Lachman.
However, he points out that when AI gives an answer to a question, it can be wrong.
Unlike more traditional search results, where multiple links and sources are displayed in a long list, it’s very difficult to trace the source of an answer given by an AI system such as ChatGPT.
Lachman’s perspective is that it might feel easier for people to trust a response from an AI chatbot if it’s convincingly role-playing as a human by making jokes or simulating emotions that produce a sense of comfort.
“That makes you, maybe, more comfortable than you should be with the quality of responses that you’re getting,” he said.
Business sees momentum in AI
Here in Canada, at least one business working in artificial intelligence is excited by the more human-like interfaces that companies such as Google and OpenAI are pushing.
“Make no mistake, we are in a competitive arms race here with respect to generative AI and there is a huge amount of capital and innovation,” said Duncan Mundell, with Alberta-based AltaML.
“It just opens the door for additional capabilities that we can leverage,” he said about artificial intelligence in a general sense, mentioning products his company creates with AI, such as software that can predict the movement of wildfires.
He pointed out that while the technological upgrades are not revolutionary in his opinion, they move artificial intelligence in a direction he welcomes.
“What OpenAI has done with this release is bringing us one step closer to human cognition, right?” said Mundell.
Researcher calls sentient AI ‘nonsense’
The upgrades to AI systems from Google and OpenAI might remind science-fiction fans of the highly conversational computer on Star Trek: The Next Generation, but one researcher at Western University says he considers the new upgrades decorative, rather than a true change in how information is processed.
“A lot of the notable features of these new releases are, I guess you could say, bells and whistles,” said Luke Stark, assistant professor at Western University’s Faculty of Information & Media Studies.
“In terms of the capacities of these systems to actually go beyond what they’ve been able to do so far… this isn’t that big of a leap,” said Stark, who called the idea that a sentient artificial intelligence could exist with today’s technology “kind of nonsense.”
The companies pushing artificial intelligence innovations make it hard to get clarity on “what these systems are good and not so good at,” he said.
That’s a position echoed by Lachman, who says that lack of clarity will require users to be savvy in a new way about what they read online.
“Right now, when you and I speak, I’m used to thinking anything that sounds like a person is a person,” he said, pointing out that human users may assume anything that seems like another human will have the same basic understanding of how the world works.
But even if a computer looks and sounds like a human, it won’t have that knowledge, he says.
“It does not have that sense of common understanding of the basic rules of society. But it sounds like it does.”