The Federal Trade Commission has struggled over the years to find ways to combat deceptive digital data practices using its limited set of enforcement options. Now, it’s landed on one that could have a big impact on tech companies: algorithmic destruction. As the agency gets more aggressive on tech, it has slowly introduced this new type of penalty, and applying it in a settlement for the third time in three years could be the charm.
In a March 4 settlement order, the agency demanded that WW International — formerly known as Weight Watchers — destroy the algorithms or AI models it built using personal information collected through its Kurbo healthy eating app from kids as young as 8 without parental permission. The agency also fined the company $1.5 million and ordered it to delete the illegally harvested data.
When it comes to today’s data-centric business models, algorithmic systems and the data used to build and train them are intellectual property, products that are core to how many companies operate and generate revenue. While in the past the FTC has required companies to disgorge ill-gotten monetary gains obtained through deceptive practices, forcing them to delete algorithmic systems built with ill-gotten data could become a more routine approach, one that modernizes FTC enforcement to directly affect how companies do business.
A slow rollout
The FTC first used the approach in 2019, amid scandalous headlines that exposed Facebook’s privacy vulnerabilities and brought down political data and campaign consultancy Cambridge Analytica. The agency called on Cambridge Analytica to destroy the data it had gathered about Facebook users through deceptive means along with “information or work product, including any algorithms or equations” built using that data.
It was another two years before algorithmic disgorgement came around again, when the commission settled a case with photo-sharing app company Everalbum. The company was charged with using facial recognition in its Ever app to detect people’s identities in images without allowing users to turn it off, and with using photos uploaded through the app to help build its facial recognition technology.
In that case, the commission told Everalbum to destroy the photos, videos and facial and biometric data it gleaned from app users and to delete products built using it, including “any models or algorithms developed in whole or in part” using that data.
Technically speaking, the term “algorithm” can cover any piece of code that makes a software application perform a set of actions, said Krishna Gade, founder and CEO of AI monitoring software company Fiddler. When it comes to AI specifically, the term usually refers to an AI model or machine-learning model, he said.
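To make Gade’s distinction concrete, here is a minimal, hypothetical Python sketch (assuming scikit-learn is installed; all names and numbers are illustrative and come from no company mentioned in this story). It contrasts a hand-coded algorithm, whose behavior is unchanged if user data is deleted, with a trained machine-learning model, whose fitted weights still carry information extracted from that data. That persistence is what disgorgement orders target.

```python
# Illustrative sketch only: contrasting a hand-coded "algorithm" with a
# machine-learning model whose value derives from collected data.
# All names and numbers here are hypothetical.

from sklearn.linear_model import LogisticRegression

# 1) An "algorithm" in the broad sense: fixed rules written by a developer.
#    Deleting user data leaves this code's behavior unchanged.
def flag_underage(birth_year: int, current_year: int = 2022) -> bool:
    return (current_year - birth_year) < 13

# 2) An AI/ML model: its parameters are fitted to user data. Even after the
#    raw records are deleted, the learned weights still encode information
#    extracted from that data.
training_features = [[8, 80], [35, 180], [10, 95], [42, 200]]  # e.g., age, weight
training_labels = [1, 0, 1, 0]                                  # hypothetical labels

model = LogisticRegression().fit(training_features, training_labels)
print(model.coef_)  # these weights persist after the training data is deleted
```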
Clearing the way for algorithmic destruction inside the FTC
It hasn’t always been clear that the FTC would make algorithmic disgorgement a regular part of its enforcement toolkit.
“Cambridge Analytica was a good decision, but I wasn’t certain that that was going to become a pattern,” Pam Dixon, executive director of World Privacy Forum, said regarding the requirement for the company to delete its algorithmic models. Now, Dixon said, algorithmic disgorgement will likely become a standard enforcement mechanism, just like monetary fines. “This is definitely now to be expected whenever it is applicable or the right decision,” she said.
The winds inside the FTC seem to be shifting. “Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” former FTC Commissioner Rohit Chopra, now director of the Consumer Financial Protection Bureau, wrote in a statement related to the Everalbum case. He said requiring the company to “forfeit the fruits of its deception” was “an important course correction.”
“If Ever meant a course correction, Kurbo means full speed ahead,” said Jevan Hutson, associate at Hintze Law, a data privacy and security law firm.
FTC Commissioner Rebecca Slaughter has been a vocal supporter of algorithmic destruction as a way to penalize companies for unfair and deceptive data practices. In a Yale Journal of Law and Technology article published last year, she and FTC lawyers Janice Kopec and Mohamad Batal highlighted it as a tool the FTC could use to foster economic and algorithmic justice.
“The premise is simple: when companies collect data illegally, they should not be able to profit from either the data or any algorithm developed using it,” they wrote. “The authority to seek this type of remedy comes from the Commission’s power to order relief reasonably tailored to the violation of the law. This innovative enforcement approach should send a clear message to companies engaging in illicit data collection in order to train AI models: Not worth it.”
Indeed, some believe the threat to intellectual property value and tech product viability could make companies think twice about using data collected through unscrupulous means. “Big fines are the cost of doing business. Algorithmic disgorgement traced to illicit data collection/processing is an actual deterrent,” David Carroll, an associate professor of media design at The New School’s Parsons School of Design, said in a tweet. Carroll sued Cambridge Analytica in Europe to obtain his 2016 voter profile data from the now-defunct company.
Forecast: future privacy enforcement
When people sign up to use the Kurbo healthy eating app, they can choose a fruit- or vegetable-themed avatar such as an artichoke, pea pod or pineapple. In exchange for health coaching and help tracking food intake and exercise, the app requires personal information about its users, such as age, gender, height, weight and their food and exercise choices, data the company also uses to improve the app.
In its case against WW, the FTC said that until late 2019, Kurbo users could sign up for the service either by indicating that they were a parent signing up for their child or that they were over the age of 13 and registering for themselves. The agency said the company failed to ensure that the people signing up were actually parents or adult guardians rather than kids pretending to be adults. It also said that from 2014 to 2019, hundreds of users who signed up for the app originally claiming they were over age 13 later changed their profile birth dates to indicate they were actually under 13, but continued to have access to the app.
The fact that the FTC used algorithmic disgorgement in connection with one of the country’s few existing federal privacy laws could be a sign that it will be used again, legal and policy experts said. While the Cambridge Analytica and Everalbum cases charged those companies with violating the FTC Act, the Kurbo case added an important wrinkle, alleging that WW violated both the FTC Act and the Children’s Online Privacy Protection Act. Both are important pieces of legislation under which the agency can bring consumer protection cases against businesses.
“This means that for any organization that has collected data illegally under COPPA that data is at risk and the models built on top of it are at risk for disgorgement,” Hutson said.
The use of COPPA could be a foundational precedent paving the way for the FTC to require destruction of algorithmic models under future legislation, such as a would-be comprehensive federal privacy law. “It stands to reason it would be leveraged in any other arena where the FTC has enforcement authority under legislation,” Hutson said.
Application of algorithmic disgorgement in the COPPA context is “a clear jurisdiction and trigger of enforcement through a law that exists and explicitly protects kids’ data, [so] if there was a corollary law for everyone it would allow the FTC to enforce in this way for companies that are not just gathering kids’ data,” said Ben Winters, a counsel for the Electronic Privacy Information Center.
He added, “It shows it would be really great if we had a privacy law for everybody, in addition to kids.”