Fable, a popular social media app that describes itself as a haven for "bookworms and bingewatchers," created an AI-powered end-of-year summary feature recapping what books users read in 2024. It was meant to be playful and fun, but some of the recaps took on an oddly combative tone. Writer Danny Groves' summary, for example, asked if he's "ever in the mood for a straight, cis white man's perspective" after labeling him a "diversity devotee."
Books influencer Tiana Trammell's summary, meanwhile, ended with the following advice: "Don't forget to surface for the occasional white author, okay?"
Trammell was flabbergasted, and she soon realized she wasn't alone after sharing her experience with Fable's summaries on Threads. "I received multiple messages," she says, from people whose summaries had inappropriately commented on "disability and sexual orientation."
Ever since the debut of Spotify Wrapped, annual recap features have become ubiquitous across the internet, providing users a rundown of how many books and news articles they read, songs they listened to, and workouts they completed. Some companies are now using AI to wholly produce or augment how these metrics are presented. Spotify, for example, now offers an AI-generated podcast where robots analyze your listening history and make guesses about your life based on your tastes. Fable hopped on the trend by using OpenAI's API to generate summaries of its users' reading habits over the past 12 months, but it didn't expect the AI model to spit out commentary that took on the mien of an anti-woke pundit.
Fable later apologized on several social media channels, including Threads and Instagram, where it posted a video of an executive issuing the mea culpa. "We are deeply sorry for the hurt caused by some of our Reader Summaries this week," the company wrote in the caption. "We will do better."
Kimberly Marsh Allee, Fable's head of community, told WIRED the company is working on a series of changes to improve its AI summaries, including an opt-out option for people who don't want them and clearer disclosures indicating that they're AI-generated. "For the time being, we have removed the part of the model that playfully roasts the reader, and instead the model simply summarizes the user's taste in books," she says.
For some users, adjusting the AI does not feel like an adequate response. Fantasy and romance writer A.R. Kaufer was aghast when she saw screenshots of some of the summaries on social media. "They need to say they are doing away with the AI completely. And they need to issue a statement, not only about the AI, but with an apology to those affected," says Kaufer. "This 'apology' on Threads comes across as insincere, mentioning the app is 'playful' as though it somehow excuses the racist/sexist/ableist quotes." In response to the incident, Kaufer decided to delete her Fable account.
So did Trammell. "The appropriate course of action would be to disable the feature and conduct rigorous internal testing, incorporating newly implemented safeguards to ensure, to the best of their abilities, that no further platform users are exposed to harm," she says.