

The trouble is just volume and time. Even just reading through the description and “proof it works” takes a few minutes, and if you’re getting tens of these a day it can easily eat up the time needed to find the ones worth reviewing. (And these volunteers are working in their free time after a normal work day, so wasting 15 or 30 minutes out of a volunteer’s one or two hours is throwing away a lot of time.)
Plus, when volunteering is annoying the volunteers stop showing up, which kills projects.
I don’t think it will make enough of a difference, but RAG stands for Retrieval-Augmented Generation.
There are a few ways to do it, but basically it’s a way to add extra information to the conversation. By default the model only knows what it generates plus what is in the conversation; RAG adds extra information to the mix.
The simplest approach is to scan the conversation for keywords and add information based on them.
So you ask “what is the capital of France?” and instead of the model answering (or hallucinating) on its own, your app could send the full Wikipedia page for France along with your question, and the model will almost always return the correct answer from the Wikipedia page and hallucinate much less. In practice it gets a lot more complicated, and I’m not up to date on recent RAG, but the idea is the same.
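The keyword-scan idea above can be sketched in a few lines. This is a toy illustration, not a real RAG system: the document store, `retrieve`, and `build_prompt` names are all made up for the example, and a real setup would use a search index or vector database plus an actual model API call.

```python
# Toy keyword-triggered RAG sketch (all names here are hypothetical).
# A tiny "document store" keyed by keyword; in a real system this would
# be a search index or embeddings database.
DOCS = {
    "france": "France is a country in Western Europe. Its capital is Paris.",
    "japan": "Japan is an island country in East Asia. Its capital is Tokyo.",
}

def retrieve(question: str) -> list[str]:
    """Scan the question for known keywords and return matching documents."""
    q = question.lower()
    return [text for keyword, text in DOCS.items() if keyword in q]

def build_prompt(question: str) -> str:
    """Prepend retrieved context so the model answers from it instead of guessing."""
    context = "\n".join(retrieve(question))
    return f"Use this context to answer:\n{context}\n\nQuestion: {question}"

# The prompt that would be sent to the model now carries the France page text,
# so the answer "Paris" can be read out of the context rather than recalled.
prompt = build_prompt("What is the capital of France?")
```

The model never gets smarter; the app just stuffs relevant text into the prompt before asking.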