
Meta plans to automate many of its product risk assessments


An AI-powered system could soon take over responsibility for assessing potential harms and privacy risks for up to 90% of updates to Meta apps like Instagram and WhatsApp, according to internal documents reportedly reviewed by NPR.

NPR notes that a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission requires the company to conduct privacy reviews of its products, evaluating the risks of any potential update. Until now, those reviews have largely been performed by human evaluators.

Under the new system, Meta reportedly said, product teams would be asked to fill out a questionnaire about their work, then would generally receive an “instant decision” with AI-identified risks, along with requirements that an update or feature must meet before launch.

This AI-centric approach would allow Meta to update its products faster, but a former executive told NPR that it also creates “higher risks,” because “negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”

In a statement, a Meta spokesperson said the company has “invested over $8 billion in our privacy program” and is committed to “delivering innovative products for people while meeting regulatory obligations.”

“As risks evolve and our program matures, we enhance our processes to better identify risks, streamline decision-making, and improve people’s experience,” the spokesperson said. “We leverage technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of novel or complex issues.”

This article has been updated with additional quotes from Meta’s statement.
