People talk near a Meta sign outside of the company’s headquarters in Menlo Park, Calif. Jeff Chiu/AP
For years, when Meta launched new features for Instagram, WhatsApp and Facebook, teams of reviewers evaluated possible risks: Could it violate users’ privacy? Could it cause harm to minors? Could it worsen the spread of misleading or toxic content?
Until recently, what are known inside Meta as privacy and integrity reviews were conducted almost entirely by human evaluators.
But now, according to internal company documents obtained by NPR, up to 90% of all risk assessments will soon be automated.