lukeinator42
13 hours ago
It has been clear for a long time (e.g. Marvin Minsky's early research) that:
1. both ANNs and the brain need to solve the credit assignment problem
2. backprop works well for ANNs but probably isn't how the problem is solved in the brain
This paper is really interesting, but it's more of a novel theory about how the brain solves the credit assignment problem. The HN title makes it sound like differences between the brain and ANNs were previously unknown, which is misleading IMO.
mindcrime
13 hours ago
> The HN title makes it sound like differences between the brain and ANNs were previously unknown and is misleading IMO.
Agreed on both counts. There's nothing surprising in "there are differences between the brain and ANNs."
But their might be something useful in the "novel theory about how the brain solves the credit assignment problem" presented in the paper. At least for me, it caught my attention enough to justify giving it a full reading sometime soon.
mindcrime
6 hours ago
s/their might/there might/
Dang it, how did I miss that. Uugh. :-(
dawnofdusk
11 hours ago
Are there any results about the "optimality" of backpropagation? Can one show that it emerges naturally from some Bayesian optimality criterion or a dynamic programming principle? This is a significant advantage that the "free energy principle" people have.
For example, let's say instead of gradient descent you want to do a Newton descent. Then maybe there's a better way to compute the needed weight updates besides backprop?
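To make the contrast concrete, here is a minimal sketch of a Newton-style update next to plain gradient descent on a toy quadratic loss (the matrix `A` and vector `b` are arbitrary, made up for illustration). For a quadratic, rescaling the gradient by the inverse Hessian lands on the minimum in one step, while gradient descent takes many small steps:

```python
import numpy as np

# Toy quadratic loss L(w) = 0.5 * w^T A w - b^T w, minimized at w* = A^{-1} b.
# A and b are arbitrary; A is chosen positive definite so the loss is convex.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])   # plays the role of the Hessian
b = np.array([1.0, -2.0])
w_star = np.linalg.solve(A, b)

def grad(w):
    return A @ w - b

# Plain gradient descent: many small steps with a fixed learning rate.
w_gd = np.zeros(2)
for _ in range(100):
    w_gd -= 0.1 * grad(w_gd)

# Newton update: rescale the gradient by the inverse Hessian.
# On a quadratic this reaches the minimum in a single step.
w_newton = np.zeros(2)
w_newton -= np.linalg.solve(A, grad(w_newton))

print(np.allclose(w_newton, w_star))           # Newton: exact after one step
print(np.linalg.norm(w_gd - w_star) < 1e-3)    # GD: close after 100 steps
```

Note the curvature information (the Hessian) enters only in how the gradient is rescaled; how the gradient itself is computed (e.g. by backprop in a deep network) is a separate question, which is what the comment above is asking about.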
roenxi
7 hours ago
I'd be willing to be proven wrong, but as a starting point I'd suggest it obviously isn't optimal for what it is being used for. AI performance on tasks seems quite poor relative to the time spent training. For example, by the time AIs overtake humans at Baduk, it is normal for the AI to have played several orders of magnitude more games than an elite human player.
The important thing is that backprop does work, so we're just scaling it up to absurd levels to get good results. There is going to be a big step change in training found sooner or later. Maybe there's some threshold we haven't hit yet, where a trick only works for models with lots of parameters, but if evolution can do it, so can researchers.
mrfox321
11 hours ago
Second order methods, and their approximations, can be used in weight updating, too.
ergonaught
11 hours ago
> The HN title makes it sound like differences between the brain and ANNs were previously unknown and is misleading IMO
There are no words in the title which express this. Your own brain is "making it sound" like that. Misleading, yes, but attribute it correctly.
perching_aix
10 hours ago
"differs fundamentally", in the present tense, combined with the widely known framing that AI is "modeled after the brain", definitely does suggest that oh no, they got the brain wrong when that modelling happened, and therefore AI is fundamentally built wrong. Or at least I can definitely see that angle in it.
The angle I actually see in it though is the typical pitiful appeal to the idea that the brain is this incredible thing we should never hope to unravel, that AI bad, and that everyone working on AI is an idiot as per the link (and then the link painting a leaps and bounds more nuanced picture).
ConspiracyFact
7 hours ago
The title does express that, due to context. An article in Nature with the title "X is Y" suggests that, until now, we didn't know that X is Y, or we even thought that X is definitely not Y.