Centaur: A controversial leap towards simulating human cognition

37 points, posted 3 days ago
by CharlesW

14 Comments

fwlr

2 days ago

Some psychologists fine-tuned a Llama-family LLM on some psychology data - “a data set called Psych-101, which contained data from 160 previously published psychology experiments, covering more than 60,000 participants who made more than 10 million choices in total”, so I guess maybe 10 million tokens?
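
For concreteness, here's a minimal sketch (emphatically not the authors' actual pipeline) of what "fine-tuning a Llama-family model on experiment transcripts" could look like with Hugging Face PEFT/LoRA; the base model name, data format, and hyperparameters below are illustrative assumptions, not details from the paper:

    # Minimal sketch, NOT the Centaur authors' code: LoRA fine-tuning of a
    # Llama-family model on psychology-experiment transcripts serialized as text.
    # Model name, record format, and hyperparameters are illustrative assumptions.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Meta-Llama-3.1-8B"          # placeholder base model
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # One record per experiment transcript: task instructions plus the choices
    # a participant actually made, written out as plain text.
    records = [{"text": "You see two slot machines. You choose machine <<B>>."}]
    data = Dataset.from_list(records).map(
        lambda r: tokenizer(r["text"], truncation=True, max_length=1024),
        remove_columns=["text"])

    # Wrap the base model with low-rank adapters and run a standard causal-LM
    # training loop over the transcripts.
    model = get_peft_model(model, LoraConfig(r=8, target_modules=["q_proj", "v_proj"]))
    Trainer(
        model=model,
        args=TrainingArguments(output_dir="centaur-sketch",
                               per_device_train_batch_size=1),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()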

That part seems to make sense, but I cannot rightly comprehend the confusion that follows.

Some psychology researchers are claiming it has become a model of human cognition? (Because it can imitate the way a psychology study participant answers psychology study questions?)

Other psychology researchers are disputing this by testing its reaction time and digit span memory? (Are they administering an IQ test? A cranial nerve exam?)
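
To make that concrete, "testing digit span" on a language model presumably amounts to something like the hypothetical probe below: prompt it with longer and longer digit strings and record the longest length it can still repeat back verbatim. The stand-in model and the prompt wording are assumptions for illustration only:

    import random
    from transformers import pipeline

    # Hypothetical digit-span probe for a language model. Everything here
    # (stand-in model, prompt wording, scoring rule) is an assumption.
    generate = pipeline("text-generation", model="gpt2")

    def digit_span(max_len=12, trials=3):
        span = 0
        for n in range(3, max_len + 1):
            correct = 0
            for _ in range(trials):
                digits = " ".join(random.choice("0123456789") for _ in range(n))
                prompt = f"Repeat these digits back exactly: {digits}\nAnswer:"
                out = generate(prompt, max_new_tokens=3 * n,
                               do_sample=False)[0]["generated_text"]
                if digits in out[len(prompt):]:   # exact recall of the sequence
                    correct += 1
            if correct < trials:                  # stop at the first failed length
                break
            span = n
        return span

    print("estimated digit span:", digit_span())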

lewdwig

3 days ago

I don’t think healthy scepticism is (or should be) controversial. But I find it interesting how willing certain people are to confidently claim that a model does or does not accurately model human cognition when we clearly still _barely understand human cognition_.

Where do people derive this certainty from? It seems to me largely misplaced.

joe_the_user

2 days ago

> But I find it interesting how willing certain people are to confidently claim that a model does or does not accurately model human cognition when we clearly still _barely understand human cognition_.

Wait a second. Certainly, being confident in a claim to understand some very-not-understood thing is dubious.

But consider some rando who says "I understand X very-not-understood-thing" without strong evidence of one sort or another. Yes, I feel moderately confident they are wrong. And I think your statement is presenting a rather problematic false equivalence between these two situations.

add-sub-mul-div

2 days ago

I don't understand the "does or does not" framing, as if there's symmetry to the debate. One side is making the claim that we're on the verge of creating that which you say we barely understand. The other side should remain certain that no such thing is happening until there's proof that it is. And what would that proof look like when, without said understanding, all we can do is point to output from a program that looks or sounds like its training data?

windowshopping

2 days ago

Looking and sounding like your training data != recreating the source.

Is ChatGPT a human mind?

jedimastert

3 days ago

The tech industry has a very long and proud history of gaining a surface-level understanding of a different industry, then immediately claiming to "disrupt" it.

The unearned confidence of tech bros should be studied.

somenameforme

2 days ago

Is the goal of the pitch to meaningfully and objectively advance the reach of human knowledge, in which case self-scrutiny is certainly critical? Or is the goal of the pitch to obtain $$$ funding and investment, get something publishable, gain personal renown, etc., in which case self-scrutiny is counterproductive?

monkaiju

2 days ago

The burden of proof is on the one making the (extraordinary) claim.

allears

3 days ago

Neuroscience is still struggling to understand the basic operations of the brain, let alone the "mind." There's no agreed-upon definition of "intelligence." Can you define cognition (literally, "knowing") without defining intelligence?

These fields are making remarkable strides, but they're still in their infancy. Whoever writes these breathless press releases probably has a degree in marketing.

senectus1

2 days ago

I have the same objections to the term "IQ".

dvh

2 days ago

OK, so was Commander Data sentient or not?