
The Oversight Board calls Meta’s uneven AI moderation ‘incoherent and unjustifiable’


While Meta’s platforms are increasingly filled with AI-generated content, the company still has a lot of work to do when it comes to enforcing its policies around manipulated media. The oversight body has once again criticized the social media company over its handling of these posts, writing in its latest decision that its failure to apply its rules consistently is “incoherent and unjustifiable.”

If that sounds familiar, it’s because it’s not the first time: since last year, the Oversight Board has used the word “incoherent” to describe Meta’s approach to manipulated media. The board previously urged Meta to update its rules after an edited video of Joe Biden went viral on Facebook. In response, Meta said it would expand its use of labels identifying AI-generated content and apply more prominent labels in “high-risk” situations. These labels, like the one below, note when a post has been created or edited using AI.

An example of the label Meta applies when it determines a piece of AI-manipulated content is “high risk.” (Screenshot: Meta)

That approach still falls short, the board said. “The Board is concerned that, despite the growing prevalence of manipulated content across formats, Meta’s enforcement of its manipulated media policy is inconsistent,” it said in its latest decision. “Meta’s failure to automatically apply a label to all instances of the same manipulated media is incoherent and unjustifiable.”

The statement came in a decision related to a post featuring purported audio of two politicians from Iraqi Kurdistan. The supposed “recorded conversation” included a discussion about an upcoming election and other claimed “plans” for the region. The post was reported to Meta for misinformation, but the company closed the case “without human review,” the board said. Meta later labeled some instances of the audio clip, but not the one that was originally reported.

The case, according to the board, is not an outlier. Meta apparently told the board that it is unable to automatically identify and apply labels to audio and video posts, only to “static images.” That means multiple instances of the same audio or video clip may not receive the same treatment, which the board notes could cause further confusion. The Oversight Board also criticized Meta for often relying on third parties to identify AI-manipulated video and audio, as it did in this case.

“Given that Meta is one of the leading technology and AI companies in the world, with its resources and the widespread use of Meta platforms, the Board reiterates that Meta should prioritize investing in technology to identify and label manipulated video and audio,” the board wrote. “It is not clear to the Board why a company with this technical expertise and these resources outsources the identification of likely manipulated media in high-risk situations to media outlets or trusted partners.”

In its recommendations to Meta, the board said the company should adopt a “clear process” for consistently labeling “identical or similar content” in situations where it adds a “high-risk” label to a post. The board also recommended that these labels appear in a language that matches users’ settings on Facebook, Instagram and Threads.

Meta did not respond to a request for comment. The company has 60 days to respond to the board’s recommendations.

