Why your state is (probably) suing Meta

Hi! I’m Jacqueline Nesi, a clinical psychologist, professor at Brown University, and mom of two young kids. Here at Techno Sapiens, I share the latest research on psychology, technology, and parenting, plus practical tips for living and parenting in the digital age. If you haven’t already, subscribe to join nearly 20,000 readers, and if you like what you’re reading, please consider sharing Techno Sapiens with a friend.
Hi there, techno sapiens. By now, you’ve likely heard that last month, 41 states sued Meta for violating consumer protection laws and harming young users. Perhaps unsurprisingly, the news managed to penetrate my flimsily crafted “maternity leave” bubble, and before I knew it, I found myself reading (okay, skimming) the 233-page federal complaint document.
I’ve now repeatedly tried to discuss with my newborn baby the ethical and legal ramifications of Meta’s user engagement practices, but it seems that he’s just, kind of, ignoring me? Honestly, it’s like he doesn’t even care.
So here I am instead, popping back in to discuss it with all of you! [Hi! How are you? I miss you all.]
Here’s the short version: Some of the claims made in the lawsuit are defensible from a research standpoint, and some are not. Ultimately, it’s clear that we need to do a better job of protecting young people online, but I think we can do this without overstating the available evidence.
In sum, the lawsuit argues that Meta deliberately “addicted” young users to Facebook and Instagram and deceived the public about the harms of its products.
The suit refers to these activities as:
“META’S SCHEME TO EXPLOIT YOUNG USERS FOR PROFIT”
It breaks down this “scheme” into five claims. Let’s walk through each one.
Claim #1: Meta’s business model incentivizes maximizing young users’ time on its platforms.
The suit alleges that Meta’s business model requires maximizing the time teens spend on its platforms—more time means more advertising dollars. Of course, the business model also incentivizes maximizing time spent for older users, but young users may be particularly valuable to Meta, as they’re more likely to become lifelong customers.
My take: Tough to argue with this one.
Claim #2: Meta deploys psychologically manipulative features to hook young users.
The suit claims: “Meta has developed and refined a set of psychologically manipulative Platform features designed to maximize young users’ time spent on its Social Media Platforms. Meta was aware that young users’ developing brains are particularly vulnerable to certain forms of manipulation, and it chose to exploit those vulnerabilities through targeted features…”
It calls out five “addicting” features: (1) recommendation algorithms, (2) “likes,” (3) notifications, (4) visual filters, and (5) “content presentation formats” like infinite scroll (i.e., where there’s no end to your feed).
My take: This seems like a matter of definitions.
In some cases, these features might improve users’ experience on the platforms. In theory, for example, recommendation algorithms could help teens discover more content and people they’re interested in.
However, these features also have serious downsides—namely, that they make these products very hard to stop using. Features like notifications and infinite scroll rely on well-established psychological principles. For example, we know that when people get “rewards” at unpredictable intervals for a behavior, they do that behavior more often (i.e., “variable reward schedules”). We check social media so frequently because the “rewards” (e.g., a message from a friend, a new Taylor and Travis video) are unpredictable. Whether this constitutes “psychological manipulation” or makes it “unsafe” depends on how you define those terms.
Might young people be more easily swayed by these features, due to aspects of their developing brains, like lower capacity for self-regulation and heightened sensitivity to social rewards? Yes. The data suggest that 36% of U.S. teens (41% of girls) feel that they spend too much time on social media. But does this constitute “exploiting their vulnerabilities”? Again, it depends on how you define it.
Claim #3: Meta misled the public about the prevalence of harmful content on its platforms.
According to the suit, Meta regularly publicized the percentage of content on its platforms that was removed for violating its Community Standards. This percentage was provided as an estimate of the prevalence rate of harmful content. For example, in 2021, Meta reported that “less than 0.05% of views were of content that violated our standards against Suicide & Self-Injury.”
My take: The majority of this section of the suit has been redacted, so it’s difficult to determine whether or how Meta might have deceived the public about these numbers.
Worth noting that data I collected with Common Sense Media earlier this year suggests that among U.S. girls ages 11-15 who use Instagram, 41% say they’re exposed to suicide-related content on the platform at least monthly. This doesn’t mean Meta lied about the numbers—the content these girls are referencing may not actually violate Meta’s standards, and Meta’s metric of “percentage of total views” (versus users) may still be accurate. Still, it raises the question of how “prevalence” is calculated and made public.
Claim #4: Meta’s products harm teens’ mental health, and the company has refused to act.
Ah yes. Another day, another claim about the evidence linking social media and teen mental health. The suit makes the case that Meta’s products are causing harm to teens’ health, and that the company has refused to address the problem.
My take: For a fuller discussion, see my prior posts, in which I lay out the current state of the evidence on this topic. But to quickly summarize: there is some evidence that social media use is linked with negative mental health outcomes among teens—in general, the effects seem to vary across different teens and to be largely dependent on how those teens are using the platforms. I would not call the current research evidence “overwhelming” in showing that social media use is causing mental health problems in teens.
When it comes to making new laws around social media, I believe the distinction between “some evidence” and “overwhelming evidence” doesn’t actually matter. We do not need to meet a scientific standard of proof for “overwhelming evidence” in order to argue that requiring some common-sense safety standards for platforms makes sense.
When it comes to proving platforms guilty in a suit such as this one, though, I don’t know. That standard of proof is for the courts to decide.
Claim #5: Meta violated COPPA by collecting children’s data without parental consent.
COPPA (Children’s Online Privacy Protection Rule) is a law that places certain requirements on websites that are either directed to children under 13 years old, or that “have actual knowledge” that children under 13 are using them. These websites, for example, cannot collect data from children without parental consent.
Meta has long maintained that its products are designed for youth ages 13 and older. However, the suit alleges that: 1) Facebook and Instagram are, actually, directed to children under 13, and 2) Meta did have “actual knowledge” that children under 13 were using its products. By collecting data from these children, then, Meta would be in violation of COPPA.
My take: From a research standpoint, the data is pretty clear: kids under 13 are using social media. For example, that data I collected with Common Sense Media showed that 41% of U.S. girls ages 11 to 12 say they’ve ever used Instagram. Does Meta know this? As others have argued, this one might be easier to prove.
Recent efforts at federal legislation to protect children online have largely stalled, and new state laws (such as those in California and Arkansas) have faced challenges in the courts. This lawsuit seems to be a new tactic by state attorneys general to rein in social media companies, but I worry that many of its claims won’t hold up under scrutiny.
Ultimately, I think it’s clear that we need to make these platforms safer for young people, but my hope is that we can do so without needing to overstate the available evidence. How, exactly, to do this effectively—whether through regulatory efforts, public health warnings, or lawsuits like this one—turns out to be a thorny problem.
What did you think of this week’s Techno Sapiens? Your feedback helps me make this better. Thanks!