LOS ANGELES (AP) — The world’s biggest social media companies face several landmark trials this year that seek to hold them responsible for harms to children who use their platforms. Opening statements for the first, in Los Angeles County Superior Court, began on Monday.

Instagram’s parent company Meta and Google’s YouTube face claims that their platforms deliberately addict and harm children. TikTok and Snap, which were originally named in the lawsuit, settled for undisclosed sums.

Jurors at the Spring Street Courthouse in downtown Los Angeles got their first glimpse Monday of what will be a lengthy trial characterized by dueling narratives from the plaintiffs and the two remaining social media companies named as defendants.

Mark Lanier delivered the opening statement for the plaintiffs first, in a lively display where he said the case is as “easy as ABC,” which he said stands for “addicting the brains of children.” He called Meta and Google “two of the richest corporations in history” who have “engineered addiction in children’s brains.”

At the core of the Los Angeles case is a 19-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.

  • NateNate60@lemmy.world · 36 upvotes · 2 days ago

    America really has a litigation culture, not because people are particularly fond of lawsuits, but because problems that other countries generally solve through legislation or regulatory action go unaddressed in the US, so the only way to find out who is right is to go to court.

    • tyler@programming.dev · 24 upvotes · 2 days ago

      The thing is, we had regulatory bodies that did that. Then Citizens United happened, and now companies can sue the government for infringing on their rights as “people,” since clearly our constitution meant corporations are people. As a result, every single regulatory body has to fight every single change in court.

  • Alloi@lemmy.world · 10 upvotes, 1 downvote · 2 days ago

    removing or changing section 230 would also allow lemmy instances to be sued or taken down for the content posted by their users. it would increase government surveillance and essentially let the american government dictate content across the entire internet. no more freedom of speech, whistleblowers, organization of protests, etc.

    this all sounds well and good “for the sake of the chillren,” but it’s a trojan horse for government censorship.

    the only people who would be able to afford the bill for what happens after this would be american social media companies. anything “independent” or emerging, like the fediverse, would get bot-swarmed with “illegal content” and then immediately sued into oblivion and outright removed.

    this ensures complete loyalty of the digital space to the whims of the american government.

    it would also allow them to remove things like wikipedia, the wayback machine, the internet archive, and sites holding or spreading the epstein files, or at least sites holding people’s opinions of them.

    • CrackedLinuxISO@lemmy.dbzer0.com · 2 upvotes · 2 days ago

      Seems like the case is about inherently addictive features of the website, and not about hosted content.

      the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits. This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230

      • Alloi@lemmy.world · 4 upvotes · edited · 1 day ago

        kind of missing the forest for the trees here. the issue is that in order to make that change and hold these platforms accountable through changes to section 230, you would open the door to all platforms being held accountable, and create a new loophole for more government control of all platforms. this would cause intense censorship and algorithmic control of content and of the means by which it is shared, spread, or created.

        the internet is inherently addictive, always has been, always will be. it’s the greatest technology mankind has ever developed; it connects us all to each other and to the collective library of human knowledge. there is no world where a human brain, adult or child, engages with that level of connectivity without some level of addiction.

        i’ve been watching this for a while now, and the support, timing, and language around it are coming from both sides of the political spectrum, which in this particular time period is extremely worrisome.

        attacking “addictive features” (and i’m not saying there isn’t room for improvement) is a foot in the door for further amendments. most people just “think of the chillren” when they see this, and it’s understandable: we love our kids. so as parents we should limit screen time, or not make it an option at all; kids can’t buy their own phones or computers, or pay for wifi, and it takes a few minutes to put parental controls on all your kids’ devices. beyond that, most people are not educated about the last 30 years of internet policy, or about why section 230 is so important. it is quite literally the reason you and i can have this exchange without the government filtering what can and cannot be exchanged.

        the fact of the matter is right in the paragraph you quoted:

        This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230

        it’s not just about the companies, it’s about section 230 and, as a byproduct, digital ID requirements on large platforms. that’s something needed for a larger agenda that goes beyond the united states government, by the ruling elites of the world. but that’s a rabbit hole i’ll let you fall into yourself. the united states just happens to be the center of the digital infrastructure and platforms shared by every country on the planet.

        discord, as an example, will soon require users to upload a copy of their ID or a facial scan to use their platform.

        “to protect the children”

        then every major platform will. for “liability reasons” and to “protect the children”

        then the internet as a whole will require it.

        “to protect the children”

        then you won’t be able to do a goddamn thing without big brother logging it, and people will be arrested left and right for whatever digital crimes the powers that be decide are crimes that week. basically, thought crime.

        and platforms from the fediverse and all over the internet will have to bend the knee and police content to extremes we haven’t yet seen. nobody will be anonymous anymore, and resistance goes back to the stone age of handwritten letters and secret handshakes under the bridge.

        here’s a good write-up about section 230. it also discusses some of the existing pushes for digital ID, in various forms, some more invasive than others.

        https://www.internetsociety.org/blog/2026/02/30-years-of-section-230-why-we-still-need-it-for-a-safer-internet/

        here’s a decent video about the history of the internet, section 230, and the implications of this lawsuit and the other actions around section 230. it’s a bit long, but worth it if you want a layman’s understanding.

        https://youtu.be/_eqt8vrtP-U

        and below is a summary of section 230 from wikipedia.

        Summary: In the United States, Section 230 is a section of the Communications Act of 1934 that was enacted as part of the Communications Decency Act of 1996, which is Title V of the Telecommunications Act of 1996, and generally provides immunity for online computer services with respect to third-party content generated by their users. At its core, Section 230(c)(1) provides immunity from liability for providers and users of an “interactive computer service” who publish information provided by third-party users:

        No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

        Section 230(c)(2) further provides “Good Samaritan” protection from civil liability for operators of interactive computer services in the voluntary good faith removal or moderation of third-party material the operator “considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

        Section 230 was developed in response to a pair of lawsuits against online discussion platforms in the early 1990s that resulted in different interpretations of whether the service providers should be treated as publishers, Stratton Oakmont, Inc. v. Prodigy Services Co., or alternatively, as distributors of content created by their users, Cubby, Inc. v. CompuServe Inc. The section’s authors, Representatives Christopher Cox and Ron Wyden, believed interactive computer services should be treated as distributors, not liable for the content they distributed, as a means to protect the growing Internet at the time.

        Section 230 was enacted as section 509 of the Communications Decency Act (CDA) of 1996 (a common name for Title V of the Telecommunications Act of 1996). After passage of the Telecommunications Act, the CDA was challenged in courts and was ruled by the Supreme Court in Reno v. American Civil Liberties Union (1997) to be unconstitutional, though Section 230 was determined to be severable from the rest of the legislation and remained in place. Since then, several legal challenges have validated the constitutionality of Section 230.

        Section 230 protections are not limitless and require providers to remove material that violates federal criminal law, intellectual property law, or human trafficking law. In 2018, Section 230 was amended by the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA-SESTA) to require the removal of material violating federal and state sex trafficking laws. In the following years, protections from Section 230 have come under more scrutiny on issues related to hate speech and ideological biases in relation to the power that technology companies can hold on political discussions and became a major issue during the 2020 United States presidential election, especially with regard to alleged censorship of more conservative viewpoints on social media.

        Passed when Internet use was just starting to expand in both breadth of services and range of consumers in the United States, Section 230 has frequently been referred to as a key law, which allowed the Internet to develop.
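
        to see how those two prongs fit together, here’s a toy sketch of the rule’s structure (my own illustration, not legal analysis; real outcomes turn on courts and the carve-outs described above):

        ```python
        # toy sketch of section 230(c)'s two prongs. illustrative only, not
        # legal advice; actual application depends on case law.

        def immune_under_230c(provider_is_author: bool,
                              claim_treats_provider_as_publisher: bool,
                              good_faith_moderation: bool = False) -> bool:
            # (c)(1): a provider or user is not treated as the publisher or
            # speaker of information provided by *another* content provider.
            if not provider_is_author and claim_treats_provider_as_publisher:
                return True
            # (c)(2): "good samaritan" protection for the voluntary, good-faith
            # removal or moderation of objectionable third-party material.
            if good_faith_moderation:
                return True
            # everything else (the platform's own speech, federal criminal law,
            # IP law, the FOSTA-SESTA carve-outs) falls outside the immunity.
            return False
        ```

        note how the plaintiffs’ theory in the article maps onto this: they argue addictive design is the platform’s own conduct, not third-party content, so the (c)(1) shield never comes into play.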

        there.

        i did my part.

        now i must rest.

        • CrackedLinuxISO@lemmy.dbzer0.com · 2 upvotes · 22 hours ago

          You make a good point, and one that I didn’t necessarily consider.

          Maybe it’s naïveté, but I do still imagine this case could hypothetically be won without trampling Section 230, mostly because we have actual evidence that Meta designs its products to be harmful: whistleblower leaks and books have clearly demonstrated that management works to juice profits at the cost of users, e.g. by collecting data about users with body-image issues and selling it to beauty advertisers. When you can point to actual emails between decision-makers saying “Ignore this problem, it makes too much money for us to solve,” I’d hope the case would revolve around not letting people prioritize shitty business decisions at the cost of people. Then, theoretically, as long as you don’t have a bunch of lemmy mods coordinating similar practices, the case wouldn’t apply to them.

          Hmm, now that I type it out, that’s definitely a naïve take. I don’t expect to see actual justice against corporations in the USA any time soon.
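
          To make the practice from the leaks concrete, here’s a deliberately simplified, hypothetical sketch of interest-based audience segmentation. Every field name and threshold here is invented for illustration and comes from no leaked code:

          ```python
          from dataclasses import dataclass, field

          # Hypothetical sketch of audience segmentation. The signal names and
          # threshold are invented; this mirrors the *kind* of practice alleged,
          # not any company's actual code.

          @dataclass
          class UserProfile:
              user_id: str
              signals: dict = field(default_factory=dict)  # inferred from activity

          def build_segment(users: list[UserProfile], signal: str,
                            threshold: float = 0.7) -> list[str]:
              """Collect IDs of users whose inferred signal crosses a threshold."""
              return [u.user_id for u in users
                      if u.signals.get(signal, 0.0) >= threshold]

          # A segment like this is the sort of audience that could be matched to
          # beauty advertisers; the harm lies in choosing a vulnerability as the
          # targeting signal, not in the mechanics of the code.
          users = [UserProfile("u1", {"body_image_concern": 0.9}),
                   UserProfile("u2", {"body_image_concern": 0.2})]
          assert build_segment(users, "body_image_concern") == ["u1"]
          ```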

  • Korkki@lemmy.ml · 9 upvotes · 2 days ago

    They are going to play the same old “freedom of choice” defense… aren’t they.

    It’s not our fault we made it purposefully addictive; you could just not watch it. Hasn’t this been the case with every tobacco, soda, fast-food, etc. company? For example, the mainstream idea that weight gain is about caloric imbalance and not about what you eat. That is the mainstream view because it helps the food companies sway public opinion toward their cause: it’s not our food that is horrible slop, disruptive to metabolism and engineered to make people eat more and more and still crave more; it’s the people, who could just not eat it, and if they do eat it they could run 10 km to sweat off the effects of, like, one sandwich.

    They always shift the responsibility to individuals when they are pressed on their wrongdoing. “Freedom of choice” is the great lie that keeps society running, and it is the main defense against any complaint that something is systematically shit and fundamentally inhuman, from food to labor markets.

    • ZeDoTelhado@lemmy.world · 5 upvotes · 2 days ago

      I think the food analogy is a good one here. I have personally thought a lot about this false sense of choice, when in reality you are bombarded with every psychological tactic to keep you hooked. Instagram in this sense is no different. Whether this lawsuit leads anywhere, I do not know, but at some point the whole issue of manipulative algorithms should be addressed (by whom and when are the biggest questions).

      • Korkki@lemmy.ml · 2 upvotes · 2 days ago

        at some point the whole issue of manipulative algorithms should be addressed (by whom and when are the biggest questions)

        I have long been of the opinion that all the big multinational social media companies should be treated as global technical, communication, and media infrastructure. They should all be seized and put under some global foundation or the UN, everything open-sourced, costs paid by member states, and the platforms forced to remain impartial and to be organized around improving the human condition, development, communication, and understanding. If there is no need for profit, then there is no need to entrap users in toxic swamps of algorithmic hell for more platform engagement.

  • paraphrand@lemmy.world · 9 upvotes · edited · 2 days ago

    Parents, right? That’s always the solution to platforms.

    Edit: all the ironic upvotes. I was being sarcastic. Parents won’t keep their predator sons and daughters off Roblox.

  • Perspectivist@feddit.uk · 5 upvotes · 2 days ago

    My unpopular opinion is that social media is simply inherently incompatible with human nature. I don’t think it’s anyone’s fault per se. It’s like heroin in the sense that it doesn’t matter how you distribute it - it’s going to cause harm because hijacking our reward systems is the reason we use it in the first place. If you modify it so all that goes away, then what you’re left with is water - and nobody wants that.

    I don’t know what the solution is, though. I don’t think banning it is a solution, but I’m not sure how to square the harmfulness of it. It’s not just kids it’s bad for - it’s everyone. And yeah, there are degrees to it - perhaps Lemmy is objectively better than an algorithm-based message board like Reddit, but something being better doesn’t make it good. A non-toxic heroin that you can’t OD on is also better than the alternative, but it’s still going to be harmful. It’s an arbitrary line we collectively just decide to draw somewhere - even though you could argue infinitely about nudging it one way or the other.

    • jungle@lemmy.world · 4 upvotes · 2 days ago

      I have talked to product people at large Internet companies, and you’d be shocked to learn that they think what they’re doing (maximizing engagement and using dark patterns) is not only fine, but that they’re not doing enough. These are not good people.
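
      To put some flesh on “maximizing engagement”: here is a minimal, hypothetical sketch of the kind of ranking loop those product teams optimize. Every weight and field name is invented for illustration; this is not any real platform’s code:

      ```python
      import random

      # Hypothetical engagement-first feed ranking. All weights and field
      # names are invented for this sketch; no real platform's values.
      ENGAGEMENT_WEIGHTS = {
          "predicted_watch_seconds": 1.0,
          "predicted_comment_prob": 3.0,   # arguments drive replies
          "predicted_share_prob": 2.0,
          "predicted_outrage_prob": 1.5,   # charged posts hold attention
      }

      def engagement_score(post: dict) -> float:
          """Score a post purely by how long it will likely hold the user.

          Note what is absent: no term for wellbeing, accuracy, or whether
          the user asked to see less of this kind of content.
          """
          return sum(post.get(k, 0.0) * w for k, w in ENGAGEMENT_WEIGHTS.items())

      def build_feed(candidates: list[dict], n: int = 20) -> list[dict]:
          """Rank by engagement, then add variable-reward spacing.

          Slipping a few unpredictable "jackpot" posts into random slots
          mimics a slot machine's variable-ratio reward schedule, the
          pattern critics describe as an addictive dark pattern.
          """
          ranked = sorted(candidates, key=engagement_score, reverse=True)
          feed, jackpots = ranked[:n], ranked[n:n + 3]
          for post in jackpots:
              feed.insert(random.randrange(len(feed) + 1), post)
          return feed  # endless scroll: the client just requests the next n
      ```

      The telling part is the objective function: nothing in it prices harm, so by the metric’s own logic “they’re not doing enough” is a true statement.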

    • forrgott@lemmy.sdf.org · 3 upvotes · 2 days ago

      Not a lawyer, but when the corporation goes to trial instead of settling, that’s not a great sign.

      So, my guess is either the defendants know they are probably going to win, or the penalty for losing is likely to be insignificant.

  • MrSulu@lemmy.ml · 3 upvotes · 2 days ago

    And the multibillion-dollar companies will use every bent strategy available to delay, obstruct, obfuscate evidence, attack and destroy witnesses, etc. They will water down the impact to a harm-minimisation outcome and so set the precedent for how bad companies can be and still get away with it. We really need that precedent to be seriously strong.