AI chatbots can be tricked into misbehaving. Can scientists stop it?

Picture a tentacled, many-eyed beast, with a long tongue and gnarly fangs. Atop this writhing abomination sits a single, yellow smiley face. “Trust me,” its placid mug seems to say.

That’s an image sometimes used to represent AI chatbots. The smiley is what stands between the user and the toxic content the system can create.

Chatbots like OpenAI’s ChatGPT, Google’s Bard and Meta AI have snagged headlines for their ability to answer questions with stunningly humanlike language. These chatbots are based on large language models, a type of generative artificial intelligence designed to spit out text. Large language models are typically trained on vast swaths of internet content. Much

→ Continue reading at Science News
