sega_sai
6 days ago
I think the interface of LLMs with formalized languages is really the future, because there you can formally verify every statement and deal with hallucinations.
thesz
6 days ago
So, you're looking for Cyc [1], practically.
trenchgun
5 days ago
>The Cyc knowledge base involving ontological terms was largely created by hand axiom-writing
>An inference engine is a computer program that tries to derive answers from a knowledge base. The Cyc inference engine performs general logical deduction.[8] It also performs inductive reasoning, statistical machine learning and symbolic machine learning, and abductive reasoning. The Cyc inference engine separates the epistemological problem from the heuristic problem. For the latter, Cyc used a community-of-agents architecture in which specialized modules, each with its own algorithm, became prioritized if they could make progress on the sub-problem.
Not really.
notTooFarGone
6 days ago
You reminded me of my pains with OpenCyc in my student job. Thanks :D
auggierose
6 days ago
Does Cyc have proofs?
cess11
6 days ago
In a sense, yes, since it has a foundation in Prolog-style facts and rules, and supposedly can output its reasoning.
auggierose
6 days ago
Ok, sounds like in principle you could have proofs, but in practice, you don't?
Are there any checks for the consistency of all facts?
cess11
6 days ago
How could there be? Gödel gets in the way.
auggierose
5 days ago
In most interactive theorem provers the check is simple: you single out the axioms you believe, and all other stuff you do is guaranteed to preserve consistency. That works in an ITP system because there are only a few axioms.
If Cyc has 100000 axioms/facts, then that's a problem.
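For illustration, this is what the small-trusted-base check looks like in a system like Lean 4: the kernel verifies every derivation, and you can ask exactly which axioms a given proof depends on (a toy sketch, not a claim about Cyc):

```lean
-- A proof checked by the kernel; no new axioms are introduced,
-- since `n + 0` reduces to `n` by the definition of addition.
theorem n_add_zero (n : Nat) : n + 0 = n := rfl

-- Lists exactly the axioms this proof depends on (here: none).
#print axioms n_add_zero
```

With 100k axioms there is no such short list to audit, which is the problem being pointed at.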
cess11
5 days ago
It's not a theorem prover, it's more like a support system for decision making.
They can probably isolate 100k facts from some domain for you if you pay for it, but their idea is to run inference on millions of "common sense" and everyday facts, and they have been doing so for decades.
auggierose
5 days ago
That sounds to me like something that will be subsumed by the combination of ITP and AI.
cess11
5 days ago
They've refused to die for many, many years by now. Not sure what you mean by ITP but if it's thrombocytopenia, they claim to have a lot of customers in medicine already.
thesz
5 days ago
If axioms, facts and rules have types, these types can guide choice.
auggierose
5 days ago
Not a believer in types here, sorry. I'd rather have simple theorems instead without wading through a swamp of types first. And I am sure AI will be able to handle theorems just fine.
samweb3
6 days ago
I am building Memelang (memelang.net) to help with this as well. I'd love your thoughts if you have a moment!
thesz
6 days ago
You are building an SQL in disguise.
First, you need to encode "memes" and relations between them at scale. This is not a language problem, it is a data handling problem.
Second, at some point you will need to query memes and the relations between them, again at scale. While expressing the queries is a language problem, an implementation will heavily rely on what SQL engines already use.
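To make the first two points concrete, here is a minimal sketch (the `meme`/`rel` schema is made up for illustration, not memelang's actual design) of how memes and relations reduce to tables and joins:

```python
# Toy sketch: "memes" and relations between them stored relationally,
# with a two-hop question answered by a join.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE meme (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE rel  (a INTEGER, verb TEXT, b INTEGER);
""")
db.executemany("INSERT INTO meme VALUES (?, ?)",
               [(1, "socrates"), (2, "man"), (3, "mortal")])
db.executemany("INSERT INTO rel VALUES (?, ?, ?)",
               [(1, "is_a", 2), (2, "is_a", 3)])

# "What is socrates, two hops out?" becomes a self-join on rel:
rows = db.execute("""
SELECT m2.name
FROM rel r1
JOIN rel r2 ON r1.b = r2.a
JOIN meme m2 ON r2.b = m2.id
WHERE r1.a = 1
""").fetchall()
print(rows)  # [('mortal',)]
```

At scale, the hard parts are exactly the ones SQL engines already solve: indexing, join planning, and storage.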
And you need to look at Cyc: https://en.wikipedia.org/wiki/Cyc
It has been doing what you are trying to do for 40 (forty) years now.
jamilton
6 days ago
The PHP implementation works by converting it to SQL, so I'm sure they're aware.
chvid
6 days ago
Maybe include a section titled 'how is this different from and how is it related to', e.g. relational algebra?
meindnoch
6 days ago
Looks like an incomplete Prolog.
est
6 days ago
> formalized languages is really the future
Hmm, maybe it's time for symbolism to shine?
nybsjytm
6 days ago
If this were the case, I don't see why we'd need to wait for an AI company to make a breakthrough in math research. The key issue instead is how to encode 'real-life' statements in a formal language - which to me seems like a ludicrous problem, just complete magical thinking.
For example, how might an arbitrary statement like "Scholars believe that professional competence of a teacher is a prerequisite for improving the quality of the educational process in preschools" be put in a lean-like language? What about "The theoretical basis of the October Revolution lay in a development of Marxism, but this development occurred through three successive rounds of theoretical debate"?
Or have I totally misunderstood what people mean when they say that developments in automatic theorem proving will solve LLM's hallucination problem?
nine_k
6 days ago
You can't talk about formally verifiable truthiness until you solve epistemology. This can be achieved formally in mathematics, with known principal limitations. Here strict theorem-proving, Lean-style, is viable.
It can also be achieved informally and in a fragmentary way in barely-mathematical disciplines, like biology, linguistics, and even history. We have chains of logical conclusions that do not follow strictly, but under various probabilistic limitations and a modal logic of sorts. Several contradictory chains follow under different (modal) assumptions / hypotheses, and often more than one should be considered. This is where probabilistic models like LLMs could work together with formal logic tools and huge databases of facts and observations, being the proverbial astute reader.
In some more relaxed semi-disciplines, like sociology, psychology, or philosophy, we have a hodgepodge of contradictory, poorly defined notions and hand-wavy reasoning (I don't mean Wittgenstein here, but rather Freud, Foucault, Derrida, etc.). Here, I think, the current crop of LLMs is applicable most directly, with a few augmentations. Still, a much, much wider context window might be required to make it actually productive by the standards of the field.
nybsjytm
6 days ago
Hmmm I think even in something very nominally nearby like theoretical physics, there's very little that's similar to theorem proving. I don't see how AlphaProof could be a stepping stone to anything like what you're describing.
Generally, I think many people who haven't studied mathematics don't realize how huge the gulf is between "being logical/reasonable" and applying mathematical logic as in a complicated proof. Neither is really of any help for the other. I think this is actually the orthodox position among mathematicians; it's mostly people who might have taken an undergraduate math class or two who might think of one as a gateway to the other. (However there are certainly some basic commonalities between the two. For example, the converse error is important to understand in both.)
auggierose
6 days ago
I don't think that mathematicians are actually in the best position to judge how math/logic might help in practical applications, because they are usually not interested in practical applications at all (at least for the last 70 years or so). In particular, they are not interested in logic at all.
But logic is very relevant to "being logical/reasonable", and seeing how mathematicians apply logic in their proofs is very relevant, and a starting point for more complex applications. I see mathematics as the simplest kind of application of logic you can have if you use only your brain for thinking, and not also a computer.
"Being logical/reasonable" also contains a big chunk of intuition/experience, and that is where machine learning will make a big difference.
benlivengood
6 days ago
Probabilistic reasoning is possible in a formal setting; it produces a probability distribution over answers. As for grounding probabilistic logic itself, I'm not aware of much progress beyond the initial idea of logical induction[0].
nybsjytm
6 days ago
This takes for granted a formal setting, which is what I'm questioning in any of these 'real world' contexts.
benlivengood
6 days ago
> For example, how might an arbitrary statement like "Scholars believe that professional competence of a teacher is a prerequisite for improving the quality of the educational process in preschools" be put in a lean-like language? What about "The theoretical basis of the October Revolution lay in a development of Marxism, but this development occurred through three successive rounds of theoretical debate"?
> This takes for granted a formal setting, which is what I'm questioning in any of these 'real world' contexts.
A formal model of semantics would likely be a low-level physical representation of possible states augmented with sound definitions of higher-level concepts and objects. I don't think humans are capable of developing a formal semantics that would work for your sentences (it's taken us hundreds of years to approach formalization of particle physics), but I think that an automated prover with access to physical experiments and an LLM could probably start building a more comprehensive semantics.
lukeschlather
5 days ago
People talk about how LLMs need "more data", but data is really a blunt instrument for trying to impart a lot of experience to the LLM. Unfortunately, data isn't actually experience; it's a record of an experience.
Now, what we've seen with e.g. Chess and Go is that when you can give a tensor model "real experience" at the speed of however many GPUs you have, the model quickly becomes capable of superhuman performance. Automatic theorem proving means you can give the model "real experience" without armies of humans writing down proofs: you can generate and validate proofs as fast as a computer will let you, and the LLM gets "real experience" with every new proof it generates, along with an objective cost function (was the proof correct?).
Now, this might not let us give the LLMs real experience with being a teacher or dealing with Marxist revolutionaries, but it would be a sea change in the ability of LLMs to do math, and it probably would let us give LLMs real experience in writing software.
ianhorn
5 days ago
This is a thing I'm working on, so I have some potentially useful thoughts. tl;dr, it doesn't have to be about encoding arbitrary real life statements to be super duper useful today.
> how might an arbitrary statement like "Scholars believe that professional competence of a teacher is a prerequisite for improving the quality of the educational process in preschools" be put in a lean-like language?
Totally out of scope for me in any near future, at least. But that doesn't prevent it from being super useful for a narrower scope. For example:
- How might we take a statement like "(x + 1) (x - 5) = 0" and encode it formally?
- Or "(X X^T)^-1 X Y = B"?
- Or "6 Fe_2 + 3 H_2 O -> ?"?
We can't really do this for a huge swath of pretty narrow applied problems. In the first, what kind of thing is x? Is it an element of an algebraically closed field? In the second, are those matrices of real numbers? In the third, is that 6 times F times e_2, or six two-atom iron molecules?
We can't formally interpret those as written, but you and I can easily tell what's meant. Meanwhile, current ways of formally writing those things up are a massive pain in the ass. Anything with a decent amount of common sense can tell you what is probably meant formally. We know that we can't have an algorithm that's right 100% of the time for a lot of relevant things, but 99.99% is pretty useful. If clippy says 'these look like matrices, right?' and is almost always right, then it's almost always saving you lots of time and letting lots more people benefit from formal methods, with a much lower barrier to entry.
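Once the ambiguity in the first example is resolved (x is a single symbol, juxtaposition means multiplication), the formal version is a one-liner in a CAS; a sketch using SymPy:

```python
# "(x + 1)(x - 5) = 0", disambiguated: x is a symbol over an
# algebraically closed field, juxtaposition is multiplication.
from sympy import symbols, Eq, solve

x = symbols('x')
roots = solve(Eq((x + 1) * (x - 5), 0), x)
print(sorted(roots))  # [-1, 5]
```

The hard part is everything before this line: deciding that this, and not `(x+1)` as a function applied to `(x-5)`, is the intended reading.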
From there, it's easy to imagine coverage and accuracy of formalizable statements going up and up and up until so much is automated that it looks kinda like 'chatting about real-life statements' again. I doubt that's the path, but from a 'make existing useful formal methods super accessible' lens it doesn't have to be.
tucnak
5 days ago
Grammar sampling has been around for months and has remained largely unexplored. Don't fall into the common trap of thinking in a fixed language; rather, think about a superset of possible languages (grammars) and how they could evolve from one another. I bet that if there's a breakthrough, it's probably in "differential grammars," or whatever it would be called: plug that into the backtracking sampler, and you have your System 2.
gosub100
5 days ago
Ok so Claude says that the Riemann hypothesis is proven and gives you 1200 pages of math symbols backing it up. Now what?
You probably say "now a human can verify it" but what if the humans are wrong? What is the source of truth?
Turneyboy
5 days ago
The nice thing about formal verification is exactly that. You have a separate tool that's very much like a compiler that can check those 1200 pages and tell you that it's true.
The source of truth here is the code we wrote for the formal verification system.
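As a toy illustration of what that checker consumes, in Lean 4 (using only the core library), a claim and its proof term are one artifact, and the kernel either accepts or rejects it mechanically:

```lean
-- If this file compiles, the kernel has verified the proof;
-- a bogus proof term is rejected by the checker, not by human judgment.
theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

A 1200-page proof is the same deal, just with a much longer term: trust reduces to the small kernel, not to the 1200 pages.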
gosub100
4 days ago
Could formal verification prove the nexus between the Taniyama-Shimura conjecture and Fermat's last theorem? I don't claim to know what the former is, but I am skeptical that a system that can only validate what we already know would have any luck with sudden leaps between disparate corners of the field. Or worse, what if we had had this verification 30 years ago, and Wiles got excited thinking he had proved a 300-year-old mystery, only to be told by the algorithm, "sorry son, this code doesn't verify. Try again next time!"
raincole
6 days ago
It's obviously not the future (outside of mathematics research). The whole LLM boom we've seen in the past two years comes from one single fact: people don't need to learn a new language to use it.
seizethecheese
6 days ago
Both comments can be right. People don’t need to know HTML to use the internet.
nickpsecurity
6 days ago
Natural language -> Formal Language with LLM-assisted tactics/functions -> traditional tools (eg provers/planners) -> expert-readable outputs -> layperson-readable results.
I can imagine many uses for flows where LLM’s can implement the outer layers above.
Groxx
6 days ago
The difficulty then will be figuring out if the proof is relevant to what you want, or simply a proof of 1=1 in disguise.