Reporters Without Borders has urged Apple to remove its newly launched AI news summarization feature because it malfunctions and produces false statements.
a hallucinating electronic brain
For example, an erroneous AI-generated push notification falsely reported that Luigi Mangione, the suspect charged with murdering the CEO of US insurer UnitedHealthcare, had shot himself. The BBC, on whose reporting the notification was based, has since contacted Apple to highlight the issue and request a fix.
Vincent Berthier, head of technology and journalism at Reporters Without Borders, called on Apple in a statement to “act responsibly and remove this feature.” Berthier criticized the fact that AI cannot provide reliable facts because it operates “on the basis of probability theory.” Misinformation spread by AI in the name of media outlets threatens both the credibility of the outlets concerned and the public’s right to reliable information.
The organization expressed broader concerns about the risks of using AI in the media. The incident shows that the technology is still too “immature” to reliably deliver information to the public.
under someone else’s logo
Apple’s generative AI feature, announced in June, launched in the US in October and is intended to summarize notifications in compact formats such as paragraphs and bullet points. Since launch, the feature has made repeated mistakes. In another example, the AI incorrectly summarized a New York Times article, claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested.
In fact, the International Criminal Court had issued an arrest warrant for Netanyahu, which the summary failed to report accurately. False summaries not only risk spreading disinformation; they can also undermine a news organization’s credibility, since the summaries ultimately appear under each outlet’s logo even though the outlet has no influence over their content.