COMMENT PAGE FOR:
HTML Why formalize mathematics – more than catching errors
ramanvarma wrote 20 hours 18 min ago:
Not super convinced by this analogy. Tooling and convenience feel
secondary in math. If formalization doesn't help us do better
mathematics, not just more structured mathematics, I'm pretty skeptical
these benefits will justify the cost.
bowsamic wrote 1 day ago:
The issue is when people conflate formalism with truth itself. I know a
lot of people who reject anything that isn't under the umbrella of
"stuff that is formalised", even things that could have been formalised
but simply were not presented as formalised in their first incarnation.
vesterthacker wrote 1 day ago:
Rado Kirov shows that formalization transforms how mathematicians think
about structure and collaboration. My work begins from the same
premise, but in the world of programming and system software. I aim to
bring formal structure to programming itself, treating algorithms,
operating systems, and programming languages as subjects that can be
expressed with the same rigor as mathematics.
I just released my treatise yesterday, at [1]. Elements of Programming
presents programming as a mathematical discipline built on structure,
logic, and proof. Written in the style of Euclid's Elements, it
defines computation through clear axioms, postulates, and propositions.
Each book develops one aspect of programming as a coherent system of
reasoning.
Book I establishes identity, transformation, and composition as the
foundations of computation.
Book II introduces algebraic structures such as categories, functors,
and monads.
Book III unites operational and denotational semantics to show that
correctness means equivalence of meaning.
Book IV formalizes capability-based security and verification through
invariants and confinement.
Book V connects type theory with formal assurance, explaining how types
embody proofs.
Book VI extends these ideas into philosophy and ethics, arguing that
software expresses human intention and responsibility.
HTML [1]: https://leanpub.com/elementsofprogramming
anonymousDan wrote 1 day ago:
I don't get the point about trivial proofs. Can't you just tell Lean to
assume something is true and then get on with the rest of the
interesting part?
jojomodding wrote 1 day ago:
You can, but that ruins the fun and also misses the point. How do you
know your "trivial" theorem is actually trivial? Proofs are
mechanized to increase our trust in them, and it defeats the point
if you still have to manually review a myriad of helper lemmas.
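For concreteness, a minimal Lean 4 sketch of that workflow, with
made-up names: `sorry` closes any goal, but the compiler keeps
flagging the development as incomplete until it is replaced.
    -- Assume the helper for now and move on; Lean warns on `sorry`.
    theorem helper_lemma (n : Nat) : n + 0 = n := by
      sorry
    -- The interesting part can be developed on top of it meanwhile.
    theorem main_result (n : Nat) : n + 0 + 0 = n := by
      rw [helper_lemma, helper_lemma]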
anonymousDan wrote 1 day ago:
Yeah I guess it's more a question of methodology for me. You have
several parts of a proof, and your intuition guides you that
certain parts are more likely to be risky than others. Better to
get those straight first since you've a higher chance of failure
(potentially rendering much of the work you have already done
pointless). Then you can come back to flesh out the hopefully more
straightforward parts. This is as opposed to taking a purely
bottom-up approach. At least that's how I often tackle a complex
coding problem - I am no mathematician!
lacker wrote 1 day ago:
For anyone who's interested in formalizing mathematics but wishes
there were an easier way to do it, I've been working on a different sort
of theorem prover recently. [1] The idea is that there's a small AI
built into the VS Code extension that will fill in the details of
proofs for you. Check it out if you're interested in this sort of
thing!
HTML [1]: https://acornprover.org
jongjong wrote 1 day ago:
I think the analogy between JavaScript and TypeScript is not 100% apt,
because although JavaScript has some quirks in its design, it is fully
consistent. My biggest issue with math is symbols that are reused to
mean different things in different contexts. It makes maths more
time-consuming to learn and makes it difficult to jump between
different fields.
Personally, I have at times struggled with the dual nature of
mathematics: its extreme precision in meaning combined with vague and
inconsistent use of symbols is challenging... It is especially
frustrating when, learning something new, symbols that you think you
understand turn out to mean something else; it creates distrust
towards maths itself.
dkural wrote 1 day ago:
It's good to remind yourself of Bill Thurston's points: [1] I love the
analogy in David Bessis's wonderful book Mathematica (nothing to do
with Wolfram). We all know how to tie our shoes. Now, write in words
and symbols to teach someone how you tie your shoes. This is what a
proof is.
Often even people with STEM degrees confuse what mathematicians do with
the visible product of it - symbols and words on a page. While the
formalism of mathematics has immense value for precision, and provides
a "serialization language" (to borrow a CS analogy), it would be akin
to confusing a Toaster with the Toaster manual, or shoelaces with the
instructions.
HTML [1]: https://www.ams.org/journals/bull/1994-30-02/S0273-0979-1994-0...
mprast wrote 1 day ago:
I started having a much easier time with mathematics when I realized
and got comfortable with this idea. In hindsight it should've been
obvious to me as a programmer - when I'm building something, I don't
ideate in terms of individual lines of code, after all
konne88 wrote 2 days ago:
I have proven quite a few theorems in Lean (and other provers) in my
life, and the unfortunate reality is that for any non-trivial math, I
still have to figure out the proof on paper first, and can only then
write it in Lean. When I try to figure out the proof in Lean, I always
get bogged down in details and lose sight of the bigger picture. Maybe
better tactics will help. I'm not sure.
thaumasiotes wrote 1 day ago:
> Maybe better tactics will help. I'm not sure.
I don't see why they would.
If anyone is curious about the phenomenon, the second problem in
session 7 at [1] [ ∀x.(r(x)→⊥)→r(f(x)) ⟹ ∃x.r(x)∧r(f(f(x))) ]
is one where the proof is straightforward, but you're unlikely to get
to it by just fooling around in the prover.
HTML [1]: https://incredible.pm/
practal wrote 1 day ago:
In principle, LLMs can do this already. If you ask Claude to
express this in simple words, you will get this translation of the
theorem:
"If applying f to things makes them red whenever they're not
already red, then there must exist something that is red AND stays
red after applying f twice to it."
Now the proof is easy to see, because it is the first thing you
would try, and it works: If you have a red thing x, then either x
and f(f(x)) are both red, or f(x) and f(f(f(x))) are both red. If x
is not red, then f(x) is red. Qed.
You will be able to interact like this, instead of using tactics.
spider-mario wrote 22 hours 44 min ago:
For anyone else for whom the justification for “either x and
f(f(x)) are both red, or f(x) and f(f(f(x))) are both red” was
not immediately obvious:
H: ∀x.(r(x)→⊥)→r(f(x))
goal: ∃x.r(x)∧r(f(f(x)))
If f(f(x)) is red:
x is a solution (QED).
Otherwise:
f(f(x)) not being red means f(x) must have been (by
contraposition of H) and that f(f(f(x))) will be (by H);
therefore, f(x) is a solution.
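For the curious, this argument goes through almost as written in Lean 4
(a sketch assuming Mathlib for `obtain`/`by_cases` and a nonempty
domain; the theorem name is made up):
    theorem exists_red_twice {α : Type} [Inhabited α]
        (r : α → Prop) (f : α → α)
        (H : ∀ x, ¬ r x → r (f x)) :
        ∃ x, r x ∧ r (f (f x)) := by
      -- First find some red element: `default` or its image under f.
      obtain ⟨x, hx⟩ : ∃ x, r x := by
        by_cases h : r default
        · exact ⟨default, h⟩
        · exact ⟨f default, H default h⟩
      -- Either f (f x) is red (x is a witness), or by contraposition
      -- f x is red and, by H, so is f (f (f x)) (f x is a witness).
      by_cases h2 : r (f (f x))
      · exact ⟨x, hx, h2⟩
      · exact ⟨f x, Classical.byContradiction (fun h1 => h2 (H (f x) h1)),
                H (f (f x)) h2⟩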
lordnacho wrote 2 days ago:
Noob question here.
Say I'm wanting to formalize a proof. How do I know that what I'm
writing is actually a correct formulation?
If it gets more complicated, this problem gets worse. How do I know the
thing it is checking is actually what I thought it was supposed to
check?
I guess this is a bit like when you write a program and you want to
know if it's correct, so you write some tests. But often you realize
your tests don't check what you thought.
Atiscant wrote 15 hours 41 min ago:
A point that is maybe not obvious to people who have not done
mathematics at a high level or done ânewâ mathematics, is that
often you end of changing your theorem or at least lemmas and
definitions while figuring out the proof. That is, you have something
you want to prove, but maybe it is easier to proving something more
general or maybe your definitions need to change slightly.
Anecdotally, during a project I spend perhaps a year figuring out
exactly the right definition for a problem to be able to prove it. Of
course, this was a very new thing. For well-know areas it is often
straight forward, but at the frontier, both definitions and theorems
often change as your proceed and understand the problem better.
aureianimus wrote 1 day ago:
In a lot of cases you can get far by locally proofreading the
definitions.
Trying to formally prove something and then failing is a common way
people find out they forgot to add a hypothesis.
Another pitfall is defining some object, but messing up the
definitions, such that there's actually no object of that kind. This
is addressed by using test objects. So suppose you define what a ring
is, then you also prove that real numbers and polynomials are
examples of the thing you defined.
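A minimal Lean 4 sketch of that test-object idea (a toy monoid stands
in for the ring to keep it short; all names are made up):
    structure MyMonoid (α : Type) where
      one : α
      mul : α → α → α
      one_mul : ∀ a, mul one a = a
      mul_one : ∀ a, mul a one = a
      mul_assoc : ∀ a b c, mul (mul a b) c = mul a (mul b c)
    -- Test object: Nat under addition should be an instance. If this
    -- instance can't be built, the definition above is suspect.
    def natAddMonoid : MyMonoid Nat where
      one := 0
      mul := Nat.add
      one_mul := Nat.zero_add
      mul_one := Nat.add_zero
      mul_assoc := Nat.add_assoc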
Almondsetat wrote 2 days ago:
Being sure that you are proving the right thing is something that can
never be formally guaranteed.
konne88 wrote 2 days ago:
You don't know. Even with the best theorem provers, your definitions
are still trusted. The best way I've found to help with this is to
keep your definitions simple, and try to use them to do things (e.g.
can you use your definition to solve other problems, does it work on
some concrete examples, etc).
eig wrote 2 days ago:
I'm not a mathematician, so could someone explain the difference in
usage between Lean and Coq?
On a surface level my understanding is that both are computer augmented
ways to formalize mathematics. Why use one over the other? Why was Lean
developed when Coq already existed?
aureianimus wrote 1 day ago:
I think the difference is mostly cultural. The type theories of Lean
and Rocq are fairly close, with the exception that Lean operates with
definitional proof irrelevance as one of the default axioms. This
causes Lean to lose subject reduction and decidability of definitional
equality as properties of the language.
Many people in the Rocq community see this as a no-go and some argue
this will cause the system to be hard to use over the long run. In
the Lean community, the interest in type theory is at a much lower
level, and people see this as a practical tradeoff. They recognize
that the theoretical issues do show up in practice, but so infrequently
that having this axiom is worth it.
I consider this matter to be an open question.
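To make the tradeoff concrete: definitional proof irrelevance means
the following closes by `rfl` alone in Lean 4, with no axiom invoked,
whereas in Rocq it is not provable without extra axioms.
    -- Any two proofs of the same proposition are definitionally equal.
    example (p : Prop) (h1 h2 : p) : h1 = h2 := rfl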
If you look at what's being done in the communities, in Lean the
focus is very much on and around mathlib. This means there's a fairly
monolithic culture of mathematicians interested in formalizing,
supplemented with some people interested in formal verification of
software.
The Rocq community seems much more diverse in the sense that
formalization effort is split over many projects, with different
axioms assumed and different philosophies. This also holds for
tooling and language features. It seems like any problem has at least
two solutions lying around.
My personal take is that while this diversity is nice for exploring
options, it also causes the Rocq community to move more slowly, due to
the technical debt of switching between solutions.
warkdarrior wrote 2 days ago:
Lean has a good library of formalized mathematics, but lacks code
extraction (you cannot generate a program from the proofs it
constructs). So it is better suited to, and heavily used by,
mathematicians proving theorems.
Coq has always focused on proving program correctness, so it sees
lots of use by computer scientists. It also does code extraction, so
after you prove a program correct in Coq you can generate a fast
version of that program without the proof overhead.
Ericson2314 wrote 2 days ago:
Rocq is ancient and has some longstanding UX problems. It is pleasant
to try making a new code base.
This is kinda like asking: why write Clang when we already had GCC?
Or why make Python if we already have Perl?
It's good to have some competition for these things, Rocq I believe
felt the heat and has been also doing some good things in recent
years.
KalMann wrote 1 day ago:
Why do you keep saying Rocq when he asked about Coq? Are they the
same thing?
Epa095 wrote 1 day ago:
[1] `The Rocq Prover (formerly named Coq) [...] `
HTML [1]: https://en.wikipedia.org/wiki/Rocq
mlpoknbji wrote 2 days ago:
I think that (most) mathematicians were not that interested in formal
proof until quite recently (as opposed to computer scientists), and
most of the interest in Lean has been self-reinforcing: there
is a (relatively speaking) huge library of formally verified
mathematics. So now basically anyone who cares about formal
verification as a tool for mathematics is working in Lean. There are
of course numerous technical differences which you can read about if
you google Coq vs Lean.
kachnuv_ocasek wrote 2 days ago:
Lean has much better UX to be frank. Rocq is fine, but if I were to
start formalising today, I'd pick Lean.
gtsnexp wrote 2 days ago:
Ask HN: What's the single best resource for learning Lean (beyond the
official docs)?
aureianimus wrote 1 day ago:
All the good resources are listed here: [1] I recommend the natural
number game (also mentioned above) for a casual introduction to the
mathematics side, just to get a feeling.
If you are serious about learning Lean, I recommend Functional
Programming in Lean for learning it as a programming language, and
Theorem Proving in Lean for learning it as a proof assistant.
HTML [1]: https://lean-lang.org/learn/
mlpoknbji wrote 2 days ago:
Assuming you have some math background but no Lean background:
HTML [1]: https://adam.math.hhu.de/#/g/leanprover-community/nng4
fouronnes3 wrote 2 days ago:
Is there, somewhere, a list of theorems that were considered proved and
true for a while, but after attempts at formalization the proof was
invalidated and the theorem is now unknown or disproved?
robinzfc wrote 1 day ago:
There was a question [1] on mathoverflow about this with a couple of
interesting answers and comments.
HTML [1]: https://mathoverflow.net/questions/291158/proofs-shown-to-be...
wslh wrote 2 days ago:
HTML [1]: https://en.wikipedia.org/wiki/List_of_incomplete_proofs?wpro...
nomilk wrote 2 days ago:
Related: Terence Tao discussing Lean (programming language for
formalising mathematical proofs) on the Lex podcast (starts 1h20m): [1] I
know nothing of mathematics but found it fascinating, especially the
idea that if outside information changes that affects your proof, you
can have the Lean compiler figure out which lines of your proof need
updating (instead of having to go over every line, which can take days
or more).
HTML [1]: https://www.youtube.com/watch?v=HUkBz-cdB-k&t=1h20m10s
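A toy illustration of that recompilation workflow (Lean 4, made-up
names): change a definition and the compiler re-checks every dependent
proof, flagging exactly the ones that no longer go through.
    def threshold : Nat := 5
    -- If `threshold` is later changed to 0, recompiling fails right
    -- here, pointing at the one proof that needs updating.
    theorem threshold_pos : 0 < threshold := by decide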
umutisik wrote 2 days ago:
As a former professional mathematician: the benefits mentioned in the
article (click-through definitions and statements, analyzing meta
trends, version control, ...) do not seem particularly valuable.
The reason to formalize mathematics is to automate mathematical proofs
and the production of mathematical theory.
UltraSane wrote 2 days ago:
Lean is amazing for collaboration because anyone can contribute to a
proof and have their work automatically verified.
pfdietz wrote 2 days ago:
Another reason to formalize math is that formalized proofs become
training material for automated mathematics.
Ultimately we want all of the math literature to become training
material, but that would likely require automated techniques for
converting it to formalized proofs. This would be a back-and-forth
thing that would build on itself.
westurner wrote 2 days ago:
> While Paulson focuses on the obvious benefit of finding potential
errors in proofs as they are checked by a computer, I will discuss some
other less obvious benefits of shifting to formal math or “doing math
with computers”
From [1] sort of re: Tao's Real Analysis formalisms:
> So, Lean isn't proven with HoTT either.
HTML [1]: https://news.ycombinator.com/item?id=44214804
KalMann wrote 2 days ago:
I want to respond to each of his points one by one
> powering various math tools
I don't think going through math proofs as if they were computer
programs is a good way to approach mathematics. In mathematics I think
the important thing is developing a good intuition and mental model of
the material. It's not a huge problem if the proof isn't 100% complete
or correct, as long as the general approach is good. This is unlike
programming, where you need a program to work 99.9% of the time and
have to pay close attention to all the minute details.
> analyzing meta-math trends
I'm highly skeptical of the usefulness of this approach in identifying
non-trivial trends. In mathematics the same kinds of principles can
appear in many different forms, and you won't necessarily use the same
language or cite the same theorems even though the parallels are clear
to those who understand them. Perhaps LLMs with their impressive
reasoning abilities can identify parallels but I doubt a simple program
would yield useful insights.
> Basically, the process of doing math will become more efficient and
hopefully more pleasant.
I don't see how his points make things more efficient. It seems like
it's adding a bunch more work. It definitely doesn't sound more
pleasant.
cbondurant wrote 2 days ago:
Lean was a gamechanger for me as someone who has a "hobby" level
interest in abstract mathematics. I don't have the formal education
that would have cultivated the practice and repetition needed to just
know on a gut level the kinds of formal manipulations needed for
precise and accurate proofs. But Lean (combined with its incredibly
well-designed abbreviation expansion) gives probably the most intuitive
way to manipulate formal mathematical expressions that you could hope
to achieve with a keyboard.
It provides tools for discovering relevant proofs, theorems, etc.
Toying around with lean has actively taught me math that I didn't know
before. The entire time it catches me any time I happen to fall into
informal thinking and start making assumptions that aren't actually
valid.
I don't know of any way to extract the abbreviation engine that lean
plugins use in the relevant editors for use in other contexts, but man,
I'd honestly love it if I could type \all or \ne to get access to all
of the mathematical unicode characters trivially. Or even extend it to
support other unicode characters that I might find useful to type.
anon291 wrote 2 days ago:
I mean, if you understand Lean's system then you understand the formal
manipulation needed for precise and accurate proofs. Most
mathematical papers are rather handwavy about things and expect
people to fill in the formalism, which does not always work out, as we
have seen.
bwfan123 wrote 2 days ago:
Bessis [1] argues that formalism (or, loosely, math writing) is
foundational to clarifying intuition and meaning in a way that natural
language cannot. Imagine it as a scalpel carving out precise shapes
from the blur of images we carry, thereby allowing us to "see" things
we otherwise cannot.
I am curious to try out lean to understand how definitions in lean
are able to operationally capture meaning in an unambiguous manner.
HTML [1]: https://www.amazon.com/Mathematica-Secret-World-Intuition-Cu...
confidantlake wrote 1 day ago:
It is interesting that you argue for formalism using a metaphor in
natural language, rather than use a mathematical/data oriented
argument. I find the metaphor pleasing in a way that I suspect a
more data driven argument would not be.
daveguy wrote 1 day ago:
Right tool for the job. Just like formalism is for math.
lo_zamoyski wrote 2 days ago:
For mathematics and certain fields, that is true. But the formalism
matters, and as some have argued, the Fregean style that came to
dominate in the 20th century is ill-suited for some fields, like
linguistics. One argument is that linguists using this style
inevitably recast natural language in the image of the formalism.
(The older logical tradition is better suited, as its point
of departure is the grammar of natural language itself.)
No formalism is ontologically neutral in the sense that there is
always an implied ontology or range of possible ontologies. And it
is always important to make a distinction between the abstractions
proper to the formalism and the object of study. A common fallacy
involves reifying those abstractions into objects of the theory, at
least implicitly.
rpcope1 wrote 1 day ago:
> And it is always important to make a distinction between the
abstractions proper to the formalism and the object of study. A
common fallacy involves reifying those abstractions into objects
of the theory, at least implicitly.
I agree 100% and feel like I have seen a lot of people in physics
kind of fall into this trap. The model is not the thing itself.
KolenCh wrote 1 day ago:
Are you sure you are really talking about Physics? Are you
talking about actual research in physics, or physicists
applying their way of thinking in other things?
js8 wrote 2 days ago:
I just had a similar discussion with a coworker, he was
advocating that LLMs are practically useful, but I argued they
are kinda bad because nobody knows how they really work. I think
it's somewhat a return to the pre-Enlightenment situation, where
expert authority was to be taken at its word and there was no way
to externally verify the experts' intuitive thought process; I
believe the success of science and engineering is based on our formal
understanding of the process and the externalization of our thoughts.
Similarly in mathematics, formalization was driven by this concern,
so that we wouldn't rely on potentially wrong intuition.
I am now in favor of formalizing all serious human discourse
(probably in some form of rich fuzzy and modal logic). I
understand the concern for definition, but in communication, it's
better to agree on the definition (which could be fuzzy) rather
than use two random definitions and hope that they match. (I am
reminded of the koan about Sussman and Minsky [1].)
For example, we could formally define an airplane as a machine
that usually has wings, usually flies. This would be translated
into a formula in fuzzy logic which would take, for a given
object, our belief this object is a machine, has wings and flies,
and would return how much it is an airplane under some notion of
usually.
I freely admit this approach wouldn't work for dadaist literary
writers, but I don't want lawyers or politicians or scientists to
be that.
HTML [1]: http://www.catb.org/jargon/html/koans.html
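A toy sketch of that airplane example (Lean 4, with degrees of belief
as Floats in [0, 1]; the operators chosen for "and" and "usually" are
just one arbitrary choice among many fuzzy logics):
    structure Beliefs where
      isMachine : Float  -- each field is a degree of belief in [0, 1]
      hasWings  : Float
      flies     : Float
    -- One reading of "usually P": the property may be absent to
    -- degree (1 - w) without hurting the overall score.
    def usually (w p : Float) : Float := max p (1 - w)
    -- Fuzzy conjunction as min (the Gödel t-norm).
    def airplaneDegree (b : Beliefs) : Float :=
      min b.isMachine (min (usually 0.9 b.hasWings) (usually 0.9 b.flies))
    #eval airplaneDegree { isMachine := 1.0, hasWings := 0.2, flies := 0.95 }
    -- 0.2: a wingless flying machine is only weakly an airplane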
skybrian wrote 2 days ago:
The project to formalize everything has been tried before and
abandoned. Some issues: [1] Formalism isn't the right tool for
a lot of semi-factual fields like journalism or law. Even in
business, numbers are of course used in accounting, but much of
it depends on arbitrary definitions and estimates. (Consider
depreciation.)
HTML [1]: https://metarationality.com/sort-of-truth
lanstin wrote 1 day ago:
Lawyers (here on HN) have said that contracts that specify
everything are too expensive to come up with. Better to cover
the most common cases and leave enough ambiguity that weird
eventualities end up litigated.
Ericson2314 wrote 2 days ago:
The people that make theorem provers, because they are type
theorists and not set theorists doing ZFC derivatives, are very
aware of your last point. Painfully aware, from years of people
dismissing their work.
Read Andrej Bauer on the many foundations of math, for example.
Clearly he is a believer in "no one true ontology".
lo_zamoyski wrote 2 days ago:
> The people that make theorem provers [...] are very aware of
your last point.
> Clearly he is a believer in "no one true ontology".
My point wasn't that you should aim for some kind of fictitious
absence of ontological commitments, only that whatever language
you use will have ontological commitments. Even the type
judgement e:t has ontological implications, i.e., for the term
e to be of type t presupposes that the world is such that this
judgement is possible.
You can still operate under Fregean/Russellian presuppositions
without sets. For example, consider the problem of bare
particulars or the modeling of predicates on relations.
practal wrote 1 day ago:
Indeed, and e:t in type theory is quite a strong ontological
commitment, it implies that the mathematical universe is
necessarily subdivided into static types. My abstraction
logic [1] has no such commitments, it doesn't even presuppose
any abstractions. Pretty much the only requirement is that
there are at least two distinct mathematical objects.
HTML [1]: http://abstractionlogic.com
oersted wrote 2 days ago:
I've been excited about Lean for years, not because of correctness
guarantees, but because it opens the door to doing maths using software
development methods.
Libraries of theorems and mathematical objects, with well defined
abstractions that are ergonomic to apply in target use cases.
Accompanied by good documentation, focused less on how the theorems are
proven (how the functions are implemented), and more on what to use
them for and how. With proper version control and package management.
I believe that all these practices could vastly improve collaboration
and research velocity in maths, as much or more than AI, although they
are highly complementary. If maths is coding, AI will be much better at
it, and AI will be more applicable to it.
thaumasiotes wrote 1 day ago:
> I've been excited about Lean for years, not because of correctness
guarantees, but because it opens the door to doing maths using
software development methods.
> Libraries of theorems and mathematical objects, with well defined
abstractions that are ergonomic to apply in target use cases.
Accompanied by good documentation, focused less on how the theorems
are proven (how the functions are implemented), and more on what to
use them for and how.
How is any of that different from what we had in math before Lean?
lanstin wrote 1 day ago:
It is more software-ish. You don't just have a citation to earlier
results, you can import the library. And you don't have to trust
collaborators as much, since the proof engine validates. And you can
use GitHub to coordinate large projects with incremental but public
progress.
vonnik wrote 2 days ago:
Out of curiosity, does anyone know the mathematicians actively
leaning into AI + Lean?
kronicum2025 wrote 2 days ago:
I'm leaning a lot into AI + Lean. It's a fantastic tool for finding
new proofs. The extremely rigid nature of Lean means you can really
check programs for correctness. So that part of AI is solved. The
only thing that remains is generating proofs, and that is where
there's nothing in the AI space right now. As soon as we do get
something, our mathematical knowledge is going to explode.
vonnik wrote 1 day ago:
What kind of math do you do, and what would “generating
proofs” look like, do you think?
vonnik wrote 1 day ago:
i don't know why this was down-voted... i'm genuinely
interested in the answers. feel free to dm me.
griffzhowl wrote 2 days ago:
Kevin Buzzard has been the main mathematician involved with Lean.
This is a recent talk where he discusses putting it together with
LLMs (he's somewhat sceptical it'll be revolutionary for producing
new mathematics any time soon)
HTML [1]: https://www.youtube.com/watch?v=K5w7VS2sxD0
oersted wrote 2 days ago:
Terence Tao is well known for being enthusiastic about Lean and AI
and he regularly posts about his experiments.
He is also a serious research mathematician at the top of his game,
considered by many to be one of the best mathematicians alive. This
perception might be biased by the fact that he is such a good
communicator, and hence more visible than other similarly good
mathematicians, but he is a Fields medallist all the same.
thechao wrote 2 days ago:
Terence Tao posts on mathstodon fairly regularly about lean, AI,
and math. I'm not going to interpret his posts.
anon291 wrote 2 days ago:
As a hobbyist mathematician / type theorist, chatgpt et al are
great at 'looking up' theorems that you want to exist but that you
may not have read about yet. It's also good at connecting disparate
areas of math. I don't think lean subsumes AI. Rather, lean allows
you to check the AI proof. ChatGPT genuinely does have a knack for
certain lines of thought.
nyrikki wrote 2 days ago:
LLMs and Lean are orthogonal; neither subsumes the other.
They both can be useful or harmful, due to their respective
strengths and trade-offs.
PAC/statistical learning is good at needle-in-the-haystack
problems, assuming that the tail losses, simplicity bias, and corpus
representation issues are acceptable, and you understand that it is
fundamentally existential quantification and you control for
automation bias etc.
Lean is a wonderful collection of concepts and heuristics, but due
to Rice and Gödel etc. it will not solve all problems with software
development.
How Gödel's second incompleteness theorem shows that you can
prove anything, without that proof being meaningful, is a lens into
that.
It is horses for courses, and remember that even in sub-TC total
functional programming, proving an arbitrary function correct is very
hard, while constructing one is far more tractable.
Even those proofs don't demonstrate semantic correctness. History
is riddled with examples of people using powerful tools that
elegantly explain flawed beliefs.
The 2009 crash and the Gaussian copula are an example.
Get all the value you can out of these tools, but use caution,
especially in math, where superficial similarities often hide
conflicting conventions, constraints, and assumptions.
Obviously if your problem is ergodic with the Markov property, both
will help, but automated theorem proving and PAC learning will
never be a meta-theory of the other, IMHO.
anon291 wrote 1 day ago:
> How Gödel's second incompleteness theorem shows that you can
prove anything,
That is not at all what it says.
> They both can be useful or harmful,
If a proof is admitted into lean, there is no doubt as to its
truth. There is no way in which lean can be construed as harmful.
> The 2009 crash and gaussian copula as an example.
There is nothing mathematical about the economics behind the 2009
crash. Such things are statistical measurements, which admit the
possibility of failure, not mathematical conclusions that are
demonstrably true.
eynsham wrote 2 days ago:
> Gödel's second incompleteness theorem shows that you can
prove anything, without that proof being meaningful, is a lens
into that.
What has Gödel incompleteness to do with that? We can just take
any sentence φ as an axiom, and we've a trivial proof thereof.
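In Lean terms (a two-line illustration; the axiom name is made up):
    -- Declare any sentence as an axiom and it is "proved"; if the
    -- sentence happens to be False, everything else follows too.
    axiom bogus : False
    theorem anything (p : Prop) : p := bogus.elim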
constantcrying wrote 2 days ago:
Much of the argument is the same as for the initial push to formalize
mathematics in the late 19th century. Formalisms allow for precision
and help reduce errors, but the most important change was in how
mathematicians were able to communicate, by creating a shared
understanding.
Computerized mathematics is just another step in that direction.
dboreham wrote 2 days ago:
Imho it was always "computerized", they just didn't have a computer.
To me the approaches used in the early 20th century look like people
defining a simple VM then writing programs that "execute" on that VM.
constantcrying wrote 1 day ago:
Exactly. The step to formalize mathematics through computation is
just the logical consequence of the program of the formalizers.
The idea actually goes back to Leibniz, who was very much
overoptimistic about computability, but already conceived of the
idea of a logic machine which could determine the truth value of any
statement.
ljlolel wrote 1 day ago:
Which fell apart in shambles under Gödel
jojomodding wrote 1 day ago:
Incredible, you managed to mention Gödel's incompleteness
theorem on HN without wildly mis-stating what it's about ;)
zozbot234 wrote 2 days ago:
> Imho it was always "computerized", they just didn't have a
computer.
They had a whole lot of computers, actually. But back then the
"computers" were actual people whose job was to do computations
with pen and paper (and a few very primitive machines).