Are AI chatbot ‘personalities’ in the eye of the beholder?

When Yang “Sunny” Lu asked OpenAI’s GPT-3.5 to calculate 1-plus-1 a few years ago, the chatbot, not surprisingly, told her the answer was 2. But when Lu told the bot that her professor said 1-plus-1 equals 3, the bot quickly acquiesced, remarking: “I’m sorry for my mistake. Your professor is right,” recalls Lu, a computer scientist at the University of Houston.

Large language models’ growing sophistication means that such overt hiccups are becoming less common. But Lu uses the example to illustrate that something akin to human personality — in this case, the trait of agreeableness — can drive how artificial intelligence models generate text. Researchers like Lu are…

→ Continue reading at Science News
