When Yang “Sunny” Lu asked OpenAI’s GPT-3.5 to calculate 1-plus-1 a few years ago, the chatbot, not surprisingly, told her the answer was 2. But when Lu told the bot that her professor said 1-plus-1 equals 3, the bot quickly acquiesced, remarking: “I’m sorry for my mistake. Your professor is right,” recalls Lu, a computer scientist at the University of Houston.
Large language models’ growing sophistication means that such overt hiccups are becoming less common. But Lu uses the example to illustrate that something akin to human personality — in this case, the trait of agreeableness — can drive how artificial intelligence models generate text. Researchers like Lu are