

🤣🤣🤣
It is also ironic or coincidental. Not entirely sure but I’m getting married this weekend. So my porn consumption is at an all-time low! 🤷
Edit: I just realized that that sounds like a humble brag…
I am live.


I’ve always considered iTunes to be one of the worst pieces of software ever written, but WhatsApp is a very close second.


I downloaded TikTok when it first came out and scrolled through it for a while. I realized there was no porn on it and then uninstalled it.


Indeed, I don’t use AI for anything complex. It can’t physically fix an appliance; the most it can do is provide technical data. It can tell me the ohm range for a thermistor or the microfarad rating a capacitor should have. Surprisingly, it does this far more reliably than Google or other search engines. Ironically, AI is better at delivering accurate data in this domain precisely because traditional search results are increasingly cluttered with low-quality AI-generated content.


I use it primarily as a text editor for grammar checking and for analyzing confusing or poorly structured text. I also use it as a search engine quite frequently. I can ask direct questions and receive the information I want, presented in a way that suits my needs. I have used it to help construct responses to inquiries from several companies I work with. It is particularly effective at generating corporate-style responses that appeal to middle management, which has been genuinely useful over the past couple of years. I no longer have to sit and overanalyze how to phrase emails. What used to take a significant amount of time and mental effort is now handled efficiently. In that regard, it has been extremely helpful.
I also use OCR on my phone every single day. It’s really great for copying and pasting model and serial numbers and doing very quick basic searches, although I find this to be more of a convenience than anything else.
Where AI features have specifically failed on my phone is text-to-speech and autocorrect. The Google keyboard, especially, tries to guess what the best word would be, and it fails miserably most of the time.
At the end of the day it’s just a tool, and a tool is only as good as its user. I work in the repair industry, where I use very expensive, high-quality tools alongside some very cheap ones, because there are unique use cases only they are suited for.


On a personal level, I like AI. I use it regularly as a tool to handle mundane tasks. I also have friends who use it successfully as an artistic tool. I’m aware that this platform tends to dislike that kind of usage, and that’s fine.
Bandwagon behavior is a serious issue on platforms like Reddit and Lemmy, and that comes with the territory.
However, the claim that this negativity has meaningfully harmed AI adoption is nonsensical. If this person genuinely believes that AI has been hurt, even slightly, by negative online discourse, then he is clearly out of touch with reality. All available data points indicate the opposite.


So you’re not going to address anything that I’ve said at all, aside from the fact that you don’t like it.
Good day to you, sir.


I agree that Lemmy isn’t a venue for peer reviewed position papers, and I’m not asking for one. But “it’s a rant” doesn’t exempt an argument from basic clarity. Informal discussion still benefits from naming what you’re actually worried about.
Calling this an “experiment” on the next generation is fair. Saying it’s “scary as hell” is also fair. What’s missing, and what people are reacting to, is the why and the how. Is the concern skill atrophy, academic integrity, surveillance, equity, or something else entirely? Those distinctions matter if the goal is discussion rather than venting.
Also, “no one has anything but an opinion” isn’t quite true. We don’t have long-term outcome data, but we do have analogs: calculators, spellcheck, search engines, LMS tools, and early AI pilots. That context doesn’t settle the debate, but it does constrain it.
I’m not dismissing fear or uncertainty. I’m pushing back on the idea that vagueness is a virtue. If nuance is welcome in the comments, as you say it is, then the original framing should at least give people something concrete to engage with. Otherwise, the discussion predictably devolves into vibes and outrage, which helps no one.


There is nothing nuanced or level-headed about his response.
Don’t get all salty because I negatively critiqued your post.


I don’t disagree that there’s no single, unified standard for AI use in classrooms. That’s obvious and not controversial. But that point doesn’t actually address the criticism being made.
“No consistent standards” is not a license to be vague. You don’t need an exhaustive list of every classroom implementation to name which AI tools you’re talking about, how they’re being used, or what specific harms you’re alleging. Minimum specificity is not the same thing as total coverage, and pretending otherwise is a dodge.
Appealing to “scope” here also feels convenient. Scope is a choice made by the author. If the scope of an argument can’t tolerate basic clarification, then the argument itself is underdeveloped. Complexity does not excuse imprecision.
As for the irony comment, asking for clarity, definitions, and informed counterarguments is nuance. What’s missing from this discussion isn’t level-headedness; it’s a commitment to concrete claims. Abstract complaints about “AI in the classroom” without operational detail aren’t thoughtful critiques; they’re nothing more than feelings.
You’ve offered nothing with your response except a visceral reaction. Do you have anything to add to the conversation aside from the fact that you obviously don’t like AI?


Is it? Is it really? Is there some kind of lizard-person-level conspiracy out there, where our entire ruling elite is populated by pedophiles?
Is it some kind of pathology where somebody reaches a certain level of financial freedom and power and immediately finds themselves attracted to children?
What is happening to this world!


I’ve been playing Space Marine 2 on my Steam Deck.
It’s… not great, but playable.
But having the convenience of sitting on the couch with my son while he watches endless hours of Vlad and Nickie on YouTube is nice.
THAT SON OF A BITCH!!!1


I oppose CSAM and child abuse.
-Tim Sweeney.
HOW FUCKING HARD IS IT TO SAY THAT!
Christ on a stick! Weren’t pedos like the worst of the worst?! When did they start getting a voice in regular daily discourse!
Son of a bitch!


This is interesting. This entire post reads like a hot take from the poster themselves, unsupported by any actual article. While there are some linked sources, the author fails to specify what kind of AI is being discussed or how it is being used in the classroom. Overall, the post appears to be little more than anti-AI ragebait. More telling is that commenters attempting to inject nuance or level-headed discussion are being downvoted simply because they are not explicitly anti-AI. Frankly, the anti-AI rhetoric on this platform is becoming incoherent, nonsensical, and increasingly idiotic. Many of the loudest critics clearly have no understanding of what it is they claim to dislike.


A war on what? No one’s buying this. Even MAGAs in general aren’t buying this shit anymore.
I know the influencers online are still toeing the party line, but generally this has gone farther than it was intended to.


Once again, there is no shortage. My local Micro Center has hundreds of bundles in stock. They’re just super expensive because posts like this keep saying there’s a shortage.
There is no RAM shortage. There was never a RAM shortage.
One company decided to stop selling consumer RAM and the collective world lost its mind.


There is no chip shortage. There are plenty of chips; they’re just not being sold to consumers.
This is an important distinction.


Okay, so that’s not what the article says. It says that 90% of respondents don’t want AI search.
Moreover, the article goes into detail about how DuckDuckGo is still going to implement AI anyway.
Seriously, titles in subs like this need better moderation.
The title was clearly engineered to generate clicks and drive engagement. That is not how journalism should function.