mentalgear
6 days ago
This is what the future of "AI" has to look like: fully traceable inference steps that can be inspected and adjusted if needed.
Without this, I don't see how we (the general population) can maintain any control over - or even understanding of - these ever larger and more opaque LLM-based long-inference "AI" systems.
Without transparency, Big Tech, autocrats and eventually the "AI" itself (whether "self-aware" or not) will do whatever they like with us.
moffkalast
6 days ago
You've answered your own question as to why many people will want this approach gone entirely.
Imustaskforhelp
6 days ago
I always like answers like yours; they're clever and, in my opinion, probably a bit true as well.
Still, I think there's a lot the public can do, and raising awareness about these issues could help too.
turnsout
6 days ago
I agree transparency is great. But making the response inspectable and adjustable is a huge UI/UX challenge. It's good to see people take a stab at it. I hope there's a lot more iteration in this area, because there's still a long way to go.
SilverElfin
6 days ago
At the least, we need to know what training data goes into each AI model. Maybe there needs to be a third-party company that does audits and provides transparency reports, so that even with proprietary models, there are some checks and balances.