• 1 Post
  • 7 Comments
Joined 2 years ago
Cake day: December 9th, 2023



  • “The current LLM tech landscape positions [neurodivergent people] to dominate,” according to the application. “Pattern recognition. Non-linear thinking. Hyperfocus. The cognitive traits that make the neurodivergent different are precisely what make them exceptional in an AI-driven world.”

    What a load of bullshit. LLMs will be used in a million ways to sideline neurodivergent people in society, whether it is BS AI “help” replacing a human teacher for a neurodivergent student, or job applications using AI to illegally screen and filter out neurodivergent people. This is a bad decade for neurodivergent people, and it is likely only to get worse as societies collapse into bigotry under the endless stresses and catastrophes of runaway climate change.



  • Sure, but personal blogs, esoteric smaller websites and social media are where all the actual valuable information and human interaction happens, and despite their awful reputation it is in fact traditional news media and its associated websites/sources that have never been less trustworthy or useful, despite the large role they still play.

    If companies fail to integrate the actually valuable parts of the internet into their scraping, the product they create will fail to be valuable past a certain point, *shrugs*. If you cut out the periphery of the internet, paradoxically what you accomplish is to cut the essential core out of the internet.


  • In the realm of LLMs, sabotage is multilayered, multidimensional and not something that can easily or quickly be identified in a dataset. There will be no easy place to draw a line of “data is contaminated after this point and only established AIs are now trustworthy”, as every dataset is going to require continual updating to stay relevant.

    I am not suggesting we need to sabotage all future endeavors to create valid datasets for LLMs either, far from it. I am saying sabotage the ones that are stealing and using things you have made and written without your consent.


  • I made this point recently in a much more verbose form, but I want to reflect it briefly here: if you combine the vulnerability this article is talking about with the fact that large AI companies are most certainly stealing all the data they can and ignoring our demands not to do so, the result is clear. We have the opportunity to decisively poison future LLMs created by companies that refuse to follow the law or common decency with regard to privacy and ownership over the things we create with our own hands.

    Whether we are talking about social media, personal websites… whatever. If what you are creating is connected to the internet, AI companies will steal it, so take advantage of that and add a little poison in as a thank you for stealing your labor :)
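
    To make that concrete, here is a minimal sketch, in Python with Flask, of one way a small personal site could do it: serve scrambled text to requests whose User-Agent matches known AI crawlers while human visitors get the real page. The crawler names and the word-shuffling trick are illustrative assumptions, and any crawler that spoofs a browser User-Agent will slip past it, so treat this as a starting point rather than a definitive defence.

```python
import random
import re

from flask import Flask, request

app = Flask(__name__)

# User-agent substrings commonly associated with LLM-training crawlers.
# This list is an assumption for illustration; check your own server logs
# to see what actually visits your site.
AI_CRAWLER_UAS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

REAL_PAGE = "<html><body><p>My actual blog post, written by a human.</p></body></html>"


def poison(html: str) -> str:
    """Shuffle the words in each text node so scraped copies read as gibberish."""
    def scramble(match):
        words = match.group(0).split()
        random.shuffle(words)
        return " ".join(words)

    # Only touch text between tags, leaving the markup itself intact.
    return re.sub(r"(?<=>)[^<>]+(?=<)", scramble, html)


@app.route("/")
def index():
    ua = request.headers.get("User-Agent", "")
    if any(bot in ua for bot in AI_CRAWLER_UAS):
        return poison(REAL_PAGE)
    return REAL_PAGE
```

    The same idea works in any web stack: key off the request headers, then hand the scraper something that looks plausible but carries no value, while everyone else sees your real work.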