robertclaus
a year ago
As a computer scientist, I was blown away the first time my friend explained to me that his research focused on the timing of neuron spikes, not their magnitude. After talking about it for a while, I realized that machine learning neural networks are much closer to simple early models of how neurons work (averages and all) than to how neurons actually signal. Makes sense when you consider how the latest LLM models have almost as many parameters as we have neurons, but we still seem pretty far from AGI.
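If it helps, here's a minimal toy sketch of that contrast in Python (all the constants and names are made up for illustration, not taken from any particular model): a leaky integrate-and-fire neuron emits identical all-or-nothing spikes, so the only thing that varies is when they occur, whereas a standard ANN unit just outputs one graded number.

    import numpy as np

    def lif_spike_times(input_current, dt=1e-4, tau=0.02, v_th=1.0):
        """Leaky integrate-and-fire: every spike is identical; only the timing varies."""
        v, times = 0.0, []
        for step, i_in in enumerate(input_current):
            v += dt * (-v / tau + i_in)       # leaky integration of the input
            if v >= v_th:                     # threshold crossing -> stereotyped spike
                times.append(step * dt)
                v = 0.0                       # reset; the spike "magnitude" never changes
        return times

    def ann_unit(x, w, b):
        """A rate-style ANN unit: a single graded output, no notion of timing."""
        return max(0.0, float(np.dot(w, x) + b))   # ReLU(w.x + b)

    rng = np.random.default_rng(0)
    drive = 60.0 + 5.0 * rng.standard_normal(5000)   # noisy constant input, 0.5 s of it
    print("first spike times (s):", [round(t, 3) for t in lif_spike_times(drive)[:5]])
    print("ANN unit output:", ann_unit(np.array([0.5, -0.2]), np.array([1.0, 2.0]), 0.1))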
lamename
a year ago
Yes and no. An alternate perspective is that the output of each neuron in an artificial neural net is analogous to a real neuron's F-I curve (spike frequency versus input DC current). In this view, different neurons have different slopes and intercepts in their F-I curves, just as the neurons in an ANN effectively have their activation functions tweaked once the weights are applied.
I usually only say this to other neuroscientists with a background in electrophysiology. The analogy isn't perfect, and it isn't necessary for understanding what ANNs are doing, but it still stands.
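For the curious, here's a rough sketch of the shape of that analogy, using the standard textbook LIF F-I formula; the softplus gain and threshold are arbitrary stand-ins for an ANN activation, not anything fitted:

    import numpy as np

    def lif_firing_rate(i_dc, r=1.0, tau=0.02, v_th=1.0, t_ref=0.002):
        """Steady-state F-I curve of a textbook leaky integrate-and-fire neuron."""
        drive = r * i_dc
        if drive <= v_th:
            return 0.0                       # below rheobase: no spikes at all
        return 1.0 / (t_ref + tau * np.log(drive / (drive - v_th)))

    def ann_activation(x, gain=30.0, threshold=1.0):
        """A shifted, scaled softplus -- the sort of curve an ANN unit applies."""
        return gain * np.log1p(np.exp(x - threshold))

    for i_dc in (0.5, 1.05, 1.5, 2.0, 4.0):
        print(f"I = {i_dc:4.2f}   LIF rate = {lif_firing_rate(i_dc):6.1f} Hz"
              f"   softplus unit = {ann_activation(i_dc):6.1f}")

Both are monotone more-input-more-output curves, which is roughly the sense in which the analogy holds.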
yberreby
a year ago
> the latest LLM models have almost as many parameters as we have neurons
I often see this take, but the apt comparison is between parameter and synapse count, not neuron count. You should be counting hidden units rather than weights if you want to compare to neuron counts.
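To make the counting concrete, here's a toy tally for a plain fully connected MLP (the layer widths are picked arbitrarily): the parameter count scales with products of adjacent layer widths, the hidden-unit count only with their sum.

    # Toy tally for a fully connected MLP; the layer widths are arbitrary.
    layers = [512, 2048, 2048, 512]

    weights = sum(a * b for a, b in zip(layers, layers[1:]))   # synapse-like count
    biases = sum(layers[1:])
    hidden_units = sum(layers[1:-1])                           # neuron-like count

    print(f"parameters (synapse analogue):  {weights + biases:,}")
    print(f"hidden units (neuron analogue): {hidden_units:,}")
    # ~6.3M parameters vs ~4.1k hidden units -- roughly three orders of magnitude apart.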
Bjartr
a year ago
To expand on that, as a point of comparison: a single biological neuron can have thousands of synaptic connections. So we're still a few orders of magnitude away from modeling NNs with a degree of connectivity similar to the brain's, even where the parameter and synapse counts look comparable.
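Back-of-the-envelope, with commonly quoted round figures (all of these are rough order-of-magnitude estimates, and the model size is a guess rather than a published number):

    import math

    # Rough, commonly quoted figures -- order-of-magnitude estimates only.
    brain_neurons = 8.6e10            # ~86 billion neurons
    synapses_per_neuron = 1e3         # often quoted as 1,000-10,000; low end used here
    brain_synapses = brain_neurons * synapses_per_neuron

    llm_parameters = 1e12             # ballpark for a very large current model

    gap = math.log10(brain_synapses / llm_parameters)
    print(f"brain synapses ~ {brain_synapses:.0e}, model parameters ~ {llm_parameters:.0e}")
    print(f"gap ~ {gap:.1f} orders of magnitude (more with higher synapse estimates)")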
markhahn
a year ago
I think you should compare synapse and weight counts if you want a measure of the network's state/capacity. If you want something closer to its compute power, compare neurons and hidden units.
RaftPeople
a year ago
> I think you should compare synapse and weight counts if you want a measure of the network's state/capacity. If you want something closer to its compute power, compare neurons and hidden units.
Even this is far too simple.
1-Astrocytes have processes (extensions) with localized calcium-wave signaling, as well as global cell-wide calcium-wave signaling, which is involved in the computation of some types of sensory input.
2-Astrocytes detect and produce neurotransmitters as well as gliotransmitters (and generally control/influence the synapses between neurons).
3-Neurons have tunneling nanotubes that dynamically connect cells together (on short time scales) and can transfer proteins, calcium (action potentials), etc.
4-Some types of neurons have natural resonance properties, which impact their function and their processing of inputs.
5-Some types of neurons (e.g. Purkinje cells) learn input patterns in isolation, meaning a single neuron can learn a time series of inputs and respond appropriately when the pattern is detected.
etc.
There is a long list of interesting and still poorly understood capabilities; it's pretty difficult to compare neurons (and the cells around them) to an ANN at this point, since too much is unknown.
dilawar
a year ago
In many, perhaps most, signalling pathways, amplitude doesn't matter much (it matters roughly on a log scale). Given how tightly we regulate temperature, and therefore reaction rates, it makes sense to use timing to fight off the noise.
llm_trw
a year ago
Put another way, there is a reason why FM radio sounds much better than AM radio.
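A toy illustration of that point, with the carrier frequencies, channel gain, and noise level all invented purely for illustration: encode the same message once as an amplitude and once as a frequency, pass both through a channel that mangles amplitude, and see which decoding survives.

    import numpy as np

    rng = np.random.default_rng(1)
    fs, duration = 10_000, 0.5                      # sample rate (Hz) and window (s)
    t = np.arange(0, duration, 1 / fs)

    def send_am(value):
        """Encode the message in the carrier's amplitude."""
        return value * np.sin(2 * np.pi * 100 * t)

    def send_fm(value):
        """Encode the message in the carrier's frequency, i.e. in timing."""
        return np.sin(2 * np.pi * (100 + 20 * value) * t)

    def noisy_channel(signal, gain=0.6):
        """Amplitude passes through an unknown gain, plus a little additive noise."""
        return gain * signal + 0.05 * rng.standard_normal(signal.size)

    def decode_am(signal):
        return np.sqrt(2) * np.std(signal)          # RMS-based amplitude estimate

    def decode_fm(signal):
        spectrum = np.abs(np.fft.rfft(signal))
        peak_hz = np.fft.rfftfreq(signal.size, 1 / fs)[np.argmax(spectrum)]
        return (peak_hz - 100) / 20

    message = 3.0
    print("true message:", message)
    print("AM decoded:  ", round(decode_am(noisy_channel(send_am(message))), 2))  # skewed by the gain
    print("FM decoded:  ", round(decode_fm(noisy_channel(send_fm(message))), 2))  # unaffected by it

The gain error lands directly in the amplitude estimate but leaves the frequency estimate alone.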
ithkuil
a year ago
An elephant brain has about three times as many neurons as a human's.
Elephants are pretty smart animals, but so are dogs, which have far fewer neurons. The point here is that the number of neurons is just one of the many factors that determine intelligence (general or not).
smokel
a year ago
Even among humans with roughly similar neuron counts, there are notable examples of individuals displaying extreme stupidity.
ithkuil
a year ago
To further complicate matters, some forms of stupidity are not a lack of intelligence but the consequence of other cognitive processes that may (or would) otherwise be useful in social animals, because they reinforce group bonds and play a role in defining group identity. Not all of those processes are well adapted to modern society.
Ey7NFZ3P0nzAe
a year ago
80% of human neurons are actually in the cerebellum and are not related to consciousness at all.
ithkuil
a year ago
We're talking about general intelligence, not about consciousness at all.
It's unclear how much of what we call intelligence also involves the cerebellum, but it may be quite relevant.
Of course, the main reason it's hard to have these kinds of conversations is that we don't all use the same definition of what "intelligence" means.
Ey7NFZ3P0nzAe
a year ago
The cerebellum is not, afaik, relevant to intelligence in any way, shape, or form. There are case studies of people with more or less complete agenesis (absence) of the cerebellum who didn't know about it until they had an MRI for unrelated reasons. Iirc they mostly had below-average motor skills, but that's it.
a_c
a year ago
Human neural networks build and trim connections constantly [1]. I imagine we will get much closer to AGI if we can update models dynamically, instead of just adding more neurons and more training. After all, humans don't need to read billions of articles before writing an average one.
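As a minimal sketch of the "build and trim" idea on an artificial layer (the smallest-magnitude pruning rule, the random regrowth, and all the sizes are illustrative choices, not how brains or any particular library actually do it):

    import numpy as np

    rng = np.random.default_rng(0)

    def prune_and_grow(weights, mask, prune_frac=0.1):
        """Drop the weakest active connections and grow the same number elsewhere.

        `mask` marks which entries of `weights` are currently 'wired up'.
        The rule (smallest-magnitude pruning + random regrowth) is illustrative.
        """
        active = np.flatnonzero(mask)
        n_swap = max(1, int(prune_frac * active.size))

        # Trim: remove the weakest currently active connections.
        weakest = active[np.argsort(np.abs(weights.ravel()[active]))[:n_swap]]
        mask.ravel()[weakest] = False
        weights.ravel()[weakest] = 0.0

        # Build: re-activate the same number of currently unused positions.
        inactive = np.flatnonzero(~mask.ravel())
        grown = rng.choice(inactive, size=n_swap, replace=False)
        mask.ravel()[grown] = True
        weights.ravel()[grown] = 0.01 * rng.standard_normal(n_swap)
        return weights, mask

    # Toy layer: 64x64 possible connections, only ~20% wired at any one time.
    weights = rng.standard_normal((64, 64))
    mask = rng.random((64, 64)) < 0.2
    weights *= mask

    for step in range(3):
        weights, mask = prune_and_grow(weights, mask)
        print(f"step {step}: active connections = {mask.sum()}")

Ideas in this spirit do show up in the ML literature under names like dynamic sparse training; the sketch above is only meant to show the mechanics of keeping a fixed connection budget while moving connections around.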