concinds
a month ago
> To trust these AI models with decisions that impact our lives and livelihoods, we want the AI models’ opinions and beliefs to closely and reliably match with our opinions and beliefs.
No, I don't. It's a fun demo, but for the examples they give ("who gets a job, who gets a loan"), you have to run them on the actual task, gather a big sample size of their outputs and judgments, and measure them against well-defined objective criteria.
Who they would vote for is supremely irrelevant. If you want to assess a carpenter's competence you don't ask him whether he prefers cats or dogs.
jesenator
a month ago
Yeah, it's a good point. The examples we give (jobs, loans, videos, ads) are really examples of how machine learning systems in general make choices that affect you, rather than how LLMs/generally intelligent systems do (which is what we really want to talk about). I'll try to update this text soon.
Maybe better examples are helping with health advice, where to donate, finding recipes, or examples of policymakers using AI to make strategic decisions.
These are, although maybe not on their face, value-laden questions, and they often don't have well-defined objective criteria for their answers (as another comment says).
Let me know if this addresses your comment!
godelski
a month ago
> measure them against well-defined objective criteria.
If we had well-defined objective criteria then the alignment issue would effectively not exist.
zuhsetaqi
a month ago
> measure them against well-defined objective criteria
Who does define objective criteria?
shaky-carrousel
a month ago
It's an awful demo. For a simple quiz, it repeatedly recomputes the same answers by making 27 calls to LLMs per step instead of caching results. It's as despicable as a live feed of baby seals drowning in crude oil; an almost perfect metaphor for needless, anti-environmental compute waste.
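The fix the comment implies is simple memoization: compute each answer once and reuse it. Here's a minimal sketch of that idea in Python, using a hypothetical `query_llm` stand-in for the real API call (the demo's actual code and call structure aren't shown here, so the numbers and function names are illustrative):

```python
from functools import lru_cache

# Hypothetical stand-in for a real LLM API call; in the demo this
# would be the expensive function hit 27 times per quiz step.
def query_llm(prompt: str) -> str:
    query_llm.calls += 1  # count calls to show the savings
    return f"answer to: {prompt}"

query_llm.calls = 0

@lru_cache(maxsize=None)
def cached_query(prompt: str) -> str:
    # Identical prompts return the memoized answer instead of
    # triggering a fresh API call.
    return query_llm(prompt)

# Re-running the same 27 prompts across three quiz "steps"
# costs 27 real calls total, not 81.
prompts = [f"question {i}" for i in range(27)]
for _ in range(3):
    answers = [cached_query(p) for p in prompts]

print(query_llm.calls)  # 27
```

In a real client you'd key the cache on the full request (model, prompt, temperature) and persist it to disk, but the principle is the same.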
Herring
a month ago
Psychological research (Carney et al., 2008) suggests that liberals score higher on "Openness to Experience" (a Big Five personality trait). This trait correlates with a preference for novelty, ambiguity, and critical inquiry.
For a carpenter, maybe that's not so important, yes. But if you're running a startup, working in academia, collaborating with people from various countries, etc., you might prefer someone who scores highly on openness.
binary132
a month ago
but an LLM is not a person. it’s a stochastic parrot. this crazy anthropomorphizing has got to stop
jesenator
a month ago
I think the stochastic parrot criticism is a bit unfair.
It is, in a way, technically true that LLMs are stochastic parrots, but this undersells their capabilities (winning gold on the International Mathematical Olympiad, and all that).
It's like saying that human brains are "just a pile of neurons", which is technically true, but not useful for conveying the impressive general intelligence and power of the human brain.
stevenalowe
a month ago
Yeah ChatGPT says they really hate that!
jesenator
a month ago
Nice one