41 States Sue Meta for Allegedly Hooking Teens on Social Media


In the United States, 41 states have filed lawsuits against Meta for allegedly driving social media addiction in young users (those under the age of 18), amid growing concerns about the negative effects of social media platforms on children and teenagers.

In 2021, the world welcomed a US district court’s order for Facebook to disclose various materials to The Gambia relating to hate speech against the Rohingya community in Myanmar.

In doing so, the court strengthened The Gambia’s claims in a pending action before the International Court of Justice. This action claims the Myanmar government had, through its genocidal actions against the Rohingya people, breached its obligations under the Genocide Convention – and that hate speech amplified on Facebook enabled the violence.

The US lawsuits allege Meta has been harvesting young users’ data, deploying features that promote compulsive use of both Facebook and Instagram, and misleading the public about the negative effects of these features.

What might we expect to happen next? And are there potential consequences for Australia?

Leveraging whistleblower revelations

The most significant suit, filed in a federal court in California, involves 33 states. The claim is based on breaches of state consumer protection statutes and common law principles regarding deceptive, unfair or unconscionable conduct, and federal privacy statutory provisions and regulations (collectively “COPPA”) which specifically protect children.

This co-ordinated action is reminiscent of other class actions in the US and United Kingdom by Rohingya refugees against Facebook for its role in enabling hate speech against their community in Myanmar.

These cases rely in part on revelations made by former Meta employee Frances Haugen in 2021 about the role Facebook’s algorithms play in facilitating harms on the platform. Haugen’s testimony suggests algorithms deployed across Facebook and Instagram were designed to increase content sharing, and therefore profits, using data harvested from users over many years.

These algorithms play a crucial role in determining what kind of content viewers are exposed to, how long they engage with it, and the likelihood of them sharing it.

According to Haugen, Meta made changes to its algorithms in 2018 to prioritise meaningful social interactions. These changes, she said, impacted how content was viewed on the news feed, leading to increased sharing of negative content such as hate speech.

Concerns over algorithms and content

The California case is notable for the specific allegations around strategies used to keep young people interacting with Facebook and Instagram. For instance, the plaintiffs have elaborated on the impact of the “infinite scroll” feature introduced in 2016.

This feature prevents users from viewing a single post in isolation. Instead, it provides a continuous stream of content without a natural endpoint. Haugen described this as being similar to giving users small dopamine hits, leaving them wanting more and less likely to exercise self-control.

The plaintiffs in the California case claim this feature encourages users, and especially young users, to compulsively use the platforms – negatively affecting their wellbeing and mental health.

They say the recommendation algorithms used by Meta periodically present users with harmful materials. These include “content related to eating disorders, violent content, content encouraging negative self-perception and body image issues, [and] bullying content”.

They also allege features such as “variable reward schedules” are implemented to encourage compulsive use by young people. This causes further physical and mental harm (such as from a lack of sleep).

Consequences for Australia

In the US, federal laws substantially restrict liability of online intermediaries such as Meta for content shared by users.

In contrast, Australia’s Online Safety Act empowers the eSafety Commissioner to compel social media platforms and other online intermediaries to remove problematic material from circulation. This includes material relating to cyberbullying of children, cyberabuse of adults, image-based abuse and abhorrent violent material.

The Federal Court can impose significant penalties for violations of the Online Safety Act. But the act doesn’t cover all the harmful content on social media, such as some content linked to eating disorders and negative self-image.

Addressing young users’ compulsive social media use is a different challenge altogether. Some measures against this are possible. For example, if the US deception allegations are proven, any evidence that this extends to Australian users may ground an action against Meta for misleading or deceptive conduct (or false or misleading representations) under the Australian Consumer Law.

Last year, A$60 million in civil penalties was awarded against Google LLC for false or misleading representations in 2017-2018. A smaller A$20 million penalty was awarded against two of Meta’s subsidiaries in 2023.

Penalties under the Australian Consumer Law have increased since the Google case, likely in response to the deep pockets of the platforms. Courts can now impose penalties of up to 30% of a platform’s turnover, or three times the value of the benefit gained by the offending entity.

However, platforms are in a stronger position where conduct isn’t misleading, false or deceptive, but is merely “manipulative” or “unfair”. For instance, the infinite scroll feature is unlikely to be considered misleading or deceptive under Australian law.

Australia also has no legislative equivalent to COPPA. Australia’s law of unconscionable conduct requires such a high level of harsh or oppressive conduct that it’s extremely difficult to prove.

One recent unconscionable conduct case brought by a problem gambler based on the addictive design of electronic poker machines failed in the Federal Court.

Shortcomings in the current law have, in part, led to calls for a new prohibition on unfair trading practices. Pressure is also mounting to reform the ineffective and under-enforced Privacy Act.

We need collaboration and innovation

Australian law still contains many gaps when it comes to protecting consumers, especially children, against the harms posed by social media platforms. But domestic law can only go so far in protecting people using a medium that operates (mostly) seamlessly across borders.

As such, international law scholars have suggested more creative approaches in the context of online hate speech. One suggestion has been to make platforms accountable for their actions under the laws of the country where they are headquartered, for enabling crimes that have taken place in other jurisdictions.

As society grapples with the implications of mass data collection and profit-maximising algorithms, protecting individuals will require international co-operation and a re-evaluation of legal frameworks.


