Source for what?
??
You’re specifically making claims about me in your comment. “Source?” for those claims.
Maybe you’ve become so reliant on AI that you can’t read and understand comments anymore? Put this exchange into ChatGPT and have it explain it for you.
Okay, so how do you go about the process of fact checking every news article you read?
You’re never going to believe this: I can take an article at face value because it’s not being routed through a slop generator when I read it.
Whether or not a source can be believed is not within the scope of this thread.
Right, you take the article at face value. So exactly as I originally said:
Certainly not by using LLMs, that’s for sure.
Okay, we’ve established how you don’t do it. So how do you go about the process of fact checking every news article you read?
I check the sources.
For every news article you read?
That’s the point here. AI can automate tedious tasks. I could have a button in my browser that, when clicked, tells the AI to follow up on those sources and confirm they say what the article claims they say. It can highlight the ones that don’t. It can add a note when a source is inherently questionable - environmental projections from a fossil fuel think tank, for example. It can highlight claims that have no source at all and run a web search to try to find one.
These are all things I can do myself by hand, sure. I do that sometimes when an article seems particularly important or questionable. It takes a lot of time and effort, though. I would much rather have an AI do the grunt work of going through all of that and highlighting problem areas for me to check up on myself. Even if it makes mistakes sometimes, that still gives me a far more thoroughly checked and vetted view of the news than my existing process.
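In rough code, the button being described might boil down to something like the sketch below. This is only a minimal sketch, not any particular product’s implementation: it assumes requests and BeautifulSoup for fetching pages, and a hypothetical ask_llm() helper standing in for whichever model the browser would actually call.

```python
# Sketch of the "check the sources" button described above.
# Assumptions (not from the thread): requests + BeautifulSoup for fetching,
# and a hypothetical ask_llm() stub in place of a real model client.

import requests
from bs4 import BeautifulSoup


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; wire this to a real client."""
    raise NotImplementedError("replace with the model of your choice")


def extract_links(article_html: str) -> list[str]:
    """Pull outbound links (the article's cited sources) from the page."""
    soup = BeautifulSoup(article_html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)
            if a["href"].startswith("http")]


def check_sources(article_url: str) -> list[dict]:
    """For each cited source, ask the model whether it supports the article."""
    article_html = requests.get(article_url, timeout=30).text
    article_text = BeautifulSoup(article_html, "html.parser").get_text(" ", strip=True)
    reports = []
    for source_url in extract_links(article_html):
        try:
            source_text = BeautifulSoup(
                requests.get(source_url, timeout=30).text, "html.parser"
            ).get_text(" ", strip=True)
        except requests.RequestException:
            reports.append({"source": source_url, "verdict": "unreachable"})
            continue
        verdict = ask_llm(
            "Does the source below support the claims the article attributes to it?\n"
            f"ARTICLE:\n{article_text[:4000]}\n\nSOURCE:\n{source_text[:4000]}\n"
            "Answer 'supports', 'contradicts', or 'unclear', with a one-line reason."
        )
        reports.append({"source": source_url, "verdict": verdict})
    return reports  # a human still reviews anything flagged
```

Even a crude version of this only produces a list of flagged sources; a person still has to read anything marked "contradicts" or "unclear", which is the "highlighting problem areas for me to check up on myself" part above.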
Did you look at the link I gave you about how this sort of automated fact-checking has worked out on Wikipedia? Or was it too much hassle to follow the link manually, read through it, and verify whether it actually supported or detracted from my argument?
I think you’ll like Digg. It has all of the features you love. Why don’t you try it out? It’s the pinnacle of innovation (AI) over there. I even heard Sam Altman is there, thank god!
AIs summarize posts and moderate the platform. Oh, literally utopia!
Friendly reminder: Don’t forget to try out OpenAI’s new AI browser. It literally does what you described.
Don’t fall for their redirect. This thread is about them trusting “AI”.