Has there been anything written about AI "intelligence" by people well read in even the basic and foundational writings on epistemology? For example, I see a lot of people using Hume's way of thinking about how knowledge is formed without addressing Kant's fairly persuasive refutation of it in the Critique of Pure Reason, and without addressing the dead end that is the resulting philosophical skepticism Hume espoused.
In this book, I see Hume cited in a way that misunderstands his thought, while Kant is only briefly mentioned for his metaphysical idealism rather than his epistemology, which is legitimately puzzling to me. Furthermore, to refer to Kant's transcendental idealism as "solipsism" is so mistaken that it's actually shocking. Transcendental idealism has nothing whatsoever to do with "solipsism"; it really just says that we (like LLMs!) don't truly understand objects as "things in themselves" but rather form an understanding of them through perceptions, situated in time and space, which we schematize and categorize into rational concepts of those objects.
Regarding Hume, the author brings up his famous is/ought dichotomy and misrepresents it as Hume neatly separating the two kinds of statement and "preferring" descriptive ones. Strictly speaking, this is closer to the fact-value distinction, since it concerns descriptive versus prescriptive statements rather than moral judgments per se, but I'll set that aside because the two are so often run together. The author then arrives at Hume's exact conclusion while thinking he is refuting Hume, when he says:
>While intuitive, the is/ought dichotomy falls apart when we realize that models are not just inert matrices of numbers or Platonic ideas floating around in a sterile universe. Models are functions computed by living beings; they arguably define living beings. As such, they are always purposive, inherent to an active observer. Observers are not disinterested parties. Every “is” has an ineradicable “oughtness” about it.
The author has also just restated a form of transcendental idealism right before dismissing it, both Kant's version and that of the very rigorously articulated "more recent postmodern philosophers and critical theorists" he alludes to! He is able to deftly, if unconvincingly, hand-wave it away with:
>We can mostly agree on a shared or “objective” reality because we all live in the same universe. Within-species, our umwelten, and thus our models—especially of the more physical aspects of the world around us—are all virtually identical, statistically speaking. Merely by being alive and interacting with one another, we (mostly) agree to agree.
I think this bit of structuralism is where the actual solipsism is happening. Humanity's rational comprehension of the world is in fact highly contingent. An example is the studies Alexander Luria did on remote peasant cultures and their capacity for hypothetical reasoning and logic in general; their models turned out to be very different from "our models" [1]. But, even closer to home, I share a town with people who believe in reiki healing to the extent that they are willing to pay for it.
But, more to the point, he has also simply rediscovered Hume's idea, which I will quote:
>In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surprised to find, that instead of the usual copulations of propositions, *is*, and *is not*, I meet with no proposition that is not connected with an *ought*, or an *ought not*.
Emphasis mine. Hume's point was that descriptive statements always carry a prescriptive one hidden in their premises, so that, in practice, "is" statements are always just "ought" statements.
Had the author engaged more closely with Hume's writing, he would have come across Hume's fork, which is related to this is-ought problem, and eventually settled on (what I believe to be) a much more important epistemological problem with regard to generative AI: the possibility of synthetic a priori knowledge. Kant provided a compelling argument for the possibility of synthetic a priori knowledge, but I would argue that it does not apply to machines, since machines can "know" things only by reproducing the data they are trained on and lack the various methods of apperception needed to schematize knowledge, for a variety of reasons, "time" being the foremost. LLMs don't have a concept of "time"; every inference they make is independent, and transformers are just a very good way to link those independent inferences into sequences.
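To make that last point concrete, here is a minimal toy sketch of my own (not anything from the book; the "model" below is just a placeholder for a real transformer forward pass): autoregressive decoding is a loop of stateless calls, where the only continuity between one inference and the next is the token sequence that gets fed back in.

```python
import random

def next_token_distribution(tokens):
    """Stand-in for a forward pass: maps a token sequence to a probability
    distribution over the next token. Deliberately stateless: the output
    depends only on the sequence passed in, nothing persists between calls."""
    vocab = ["is", "ought", "therefore", "."]
    rng = random.Random(hash(tuple(tokens)) & 0xFFFFFFFF)  # a function of the input alone
    weights = [rng.random() for _ in vocab]
    total = sum(weights)
    return {tok: w / total for tok, w in zip(vocab, weights)}

def decode(prompt, steps):
    """Autoregressive decoding: each step is an independent inference; the
    only 'memory' is the growing sequence fed back into the next call."""
    tokens = list(prompt)
    for _ in range(steps):
        dist = next_token_distribution(tokens)   # independent inference
        tokens.append(max(dist, key=dist.get))   # greedy pick, appended and fed back
    return tokens

print(decode(["every", "is"], steps=4))
```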
I should point out that I'm not a complete AI skeptic. I think it could be possible to build some hypothetical model that uses generative AI as its sensory layer and combines it with a reasoning component that makes logical inferences more closely resembling the categories Kant described as generating synthetic a priori knowledge. Such a machine would be capable of producing genuinely new information, rather than simply sampling an admittedly massive approximation of the joint probability of semiotics (be they tokens or images) and hoping the approximation is well constructed enough to interpolate the right answer out. I would personally argue that the latter "knowledge", when correct, is nothing more than a collection of persuasive Gettier cases.
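For what it's worth, the "joint probability" claim can be made precise (this formulation is mine, not the book's): an autoregressive model approximates the joint distribution over a token sequence by factorizing it into learned conditionals,

p_θ(x_1, ..., x_n) = ∏_{i=1..n} p_θ(x_i | x_1, ..., x_{i-1}),

and "answering" a prompt is just sampling from (or maximizing) those conditionals. The Gettier worry is that a sampled continuation can be correct while the model's "justification" is nothing more than proximity in that learned distribution.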
Overall, I'm not very impressed with the author's treatment of these thinkers. Some of the other material looks interesting, but I worry that being too credulous about it would be a Gell-Mann amnesia effect, given that I have done quite a bit of primary-source study of 19th-century epistemology as a basis for my other reading in newer work in that area. The author's background is in physics and engineering, so I have a slight suspicion that, since he drew on Hume's thought about moral judgments rather than about knowledge, these are hazily remembered subjects from a rigorous ethics course he took at Princeton; but that is purely speculative on my part. I think he has reached a bit too far here.
[1]: https://languagelog.ldc.upenn.edu/nll/?p=481 (I am referring mostly to the section in blue there.)