Why Chatbots Sometimes Act Weird and Spout Nonsense
No, chatbots aren’t sentient. Here’s how their underlying technology works.
Microsoft released a new version of its Bing search engine last week, and unlike an ordinary search engine, it includes a chatbot that can answer questions in clear, concise prose.
Since then, people have noticed that some of what the Bing chatbot generates is inaccurate, misleading and downright weird, prompting fears that it has become sentient, or aware of the world around it.
That’s not the case. And to understand why, it’s important to know how chatbots really work.
Is the chatbot alive?
No. Let’s say that again: No!
In June, a Google engineer, Blake Lemoine, claimed that similar chatbot technology being tested inside Google was sentient. That’s false. Chatbots are not conscious and are not intelligent — at least not in the way humans are intelligent.
Why does it seem alive then?
Let’s step back. The Bing chatbot is powered by a kind of artificial intelligence called a neural network. That may sound like…
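To make the idea concrete, here is a toy sketch, not Bing's actual system, of what "a neural network that predicts the next word" means in code. The tiny vocabulary, the random weights and the sizes below are all made-up assumptions for illustration; a real model has billions of parameters learned from enormous amounts of text.

```python
# Toy illustration (not Bing's model): given the words seen so far,
# produce a probability for every word in a small vocabulary.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat", "."]   # made-up toy vocabulary
embed_dim = 8                                     # size of each word vector

# Randomly initialized parameters; a real model learns these from data
# instead of leaving them random.
embeddings = rng.normal(size=(len(vocab), embed_dim))      # one vector per word
output_weights = rng.normal(size=(embed_dim, len(vocab)))  # vector -> word scores

def next_word_probabilities(context: list[str]) -> np.ndarray:
    """Average the context word vectors, score every vocabulary word,
    and turn the scores into probabilities with a softmax."""
    ids = [vocab.index(w) for w in context]
    hidden = np.tanh(embeddings[ids].mean(axis=0))  # a single tiny "layer"
    scores = hidden @ output_weights
    exp = np.exp(scores - scores.max())             # softmax, numerically stable
    return exp / exp.sum()

# Example: ask the toy model what might follow "the cat sat".
probs = next_word_probabilities(["the", "cat", "sat"])
for word, p in sorted(zip(vocab, probs), key=lambda x: -x[1]):
    print(f"{word:>4}: {p:.2f}")
```

Because the weights here are random, the "predictions" are meaningless; the point is only that the whole process is ordinary arithmetic on numbers, with no awareness behind it.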