The most famous transcendental numbers

176 points, posted 5 days ago
by vismit2000

156 Comments

Syzygies

4 days ago

Mathematicians get enamored with particular ways of looking at things, and fall into believing this is gospel. I should know: I am one, and I fight this tendency at every turn.

On one hand, "rational" and "algebraic" are far more pervasive concepts than mathematicians are ever taught to believe. The key here is formal power series in non-commuting variables, as pioneered by Marcel-Paul Schützenberger. "Rational" corresponds to finite state machines, and "Algebraic" corresponds to pushdown automata, the context-free grammars that describe most programming languages.
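The correspondence can be made concrete: the word counts of a regular language always satisfy a linear recurrence, which is exactly what it means for its generating series to be rational. A minimal Python sketch (the DFA and all names are illustrative, not from the comment):

```python
# Sketch of "rational = finite state machine": word counts of a regular
# language satisfy a linear recurrence, i.e. a rational generating series.
# Example DFA (illustrative): binary strings with no two consecutive 1s.
# State 0: last symbol was 0 (or start); state 1: last symbol was 1.
TRANSITIONS = {
    (0, "0"): 0, (0, "1"): 1,
    (1, "0"): 0,                  # (1, "1") missing: "11" is rejected
}

def count_words(n):
    """Count accepted length-n strings by dynamic programming over states."""
    counts = {0: 1, 1: 0}         # start in state 0
    for _ in range(n):
        nxt = {0: 0, 1: 0}
        for (state, sym), dest in TRANSITIONS.items():
            nxt[dest] += counts[state]
        counts = nxt
    return counts[0] + counts[1]  # both states are accepting

cs = [count_words(n) for n in range(10)]
# The counts obey c(n) = c(n-1) + c(n-2), so the generating series
# sum_n c(n) x^n is the rational function (1 + x) / (1 - x - x^2).
assert all(cs[n] == cs[n - 1] + cs[n - 2] for n in range(2, 10))
```

The same state-counting trick works for any DFA; the recurrence order is bounded by the number of states.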

On the other hand, "Concrete Mathematics" by Donald Knuth, Oren Patashnik, and Ronald Graham (I never met Oren) popularizes another way to organize numbers: The "endpoints" of the positive reals are 0/1 and 1/0. Subdivide this interval (any such interval) by taking the center of a/b and c/d to be the mediant (a+c)/(b+d). Here, the first center is 1/1 = 1. Iterate. Given any number, its coordinates in this system are the sequence of L, R symbols that locate it in successive subdivisions.
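The subdivision described above is the Stern-Brocot tree, and locating a rational in it takes only mediant arithmetic. A small Python sketch (the function name is mine):

```python
from fractions import Fraction

def stern_brocot_path(x, max_depth=50):
    """Locate x in the mediant-subdivision (Stern-Brocot) scheme described
    above: refine (a/b, c/d) via the mediant (a+c)/(b+d), recording L or R."""
    a, b, c, d = 0, 1, 1, 0              # endpoints 0/1 and 1/0
    path = []
    for _ in range(max_depth):
        m = Fraction(a + c, b + d)       # the center (mediant)
        if x == m:
            break                        # rationals land on a node: finite path
        if x < m:
            path.append("L")
            c, d = m.numerator, m.denominator
        else:
            path.append("R")
            a, b = m.numerator, m.denominator
    return "".join(path)

assert stern_brocot_path(Fraction(3, 7)) == "LLRR"
```

Irrationals get an infinite L, R sequence, and the complexity question below is about the structure of that sequence.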

Any computer scientist should be chomping at the bit here: What is the complexity of the L, R sequence that locates a given number?

From this perspective, the number e is one of the simpler numbers known, not lost in the unwashed multitude of "transcendental" numbers.

Most mathematicians don't know this. The idea generalizes to barycentric subdivision in any dimension, but the real line is already interesting.

afiori

3 days ago

Using this representation can one "efficiently" sum or multiply numbers? I was under the impression that this was its main drawback

brianberns

5 days ago

I read this with pleasure, right up until the bit about the ants. Then I saw the note from myself at the end, which I had totally forgotten writing seven years ago. I probably first encountered the article via HN back then as well. Thanks for publishing my thoughts!

mg

5 days ago

Three surprising facts about transcendental numbers:

1: Almost all numbers are transcendental.

2: If you could pick a real number at random, the probability of it being transcendental is 1.

3: Finding new transcendental numbers is trivial. Just add 1 to any other transcendental number and you have a new transcendental number.

Most of our lives we deal with non-transcendental numbers, even though those are infinitely rare.

canjobear

5 days ago

> 1: Almost all numbers are transcendental.

Even crazier than that: almost all numbers cannot be defined with any finite expression.

dwohnitmok

5 days ago

This is not necessarily true. It is possible for all real numbers (and indeed all mathematical objects) to be definable under ZFC. It is also possible for that not to be the case. ZFC is mum on the issue.

I've commented on this several times. Here's the most recent one: https://news.ycombinator.com/item?id=44366342

Basically you can't do a standard countability argument because you can't enumerate definable objects because you can't uniformly define "definability." The naive definition falls prey to Liar's Paradox type problems.

canjobear

4 days ago

I think you're overthinking it. Define a "number definition system" to be any (maybe partial) mapping from finite-length strings on a finite alphabet to numbers. The string that maps to a number is the number's definition in the system. Then for any number definition system, almost all real numbers have no definition.

mswtk

4 days ago

Sure, you can do that. The parent's point is that if you want this mapping to obey the rules that an actual definition in (say) first-order logic must obey, you run into trouble. In order to talk about definability without running into paradoxes, you need to do it "outside" your actual theory. And then statements about cardinalities - for example "There's more real numbers than there are definitions." - don't mean exactly what you'd intuitively expect. See the result about ZFC having countable models (as seen from the "outside") despite being able to prove uncountable sets exist (as seen from the "inside").

dorgo

4 days ago

This argument is valid for every infinite set, for example: the natural numbers.

canjobear

3 days ago

No, you can establish a bijection between strings and natural numbers, very easily.
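Concretely, shortlex order (bijective base-k numeration) is such a bijection; a Python sketch:

```python
def string_to_nat(s, alphabet="ab"):
    """Shortlex bijection: '' -> 0, 'a' -> 1, 'b' -> 2, 'aa' -> 3, ...
    (bijective base-k numeration over a k-letter alphabet)."""
    k = len(alphabet)
    n = 0
    for ch in s:
        n = n * k + alphabet.index(ch) + 1
    return n

def nat_to_string(n, alphabet="ab"):
    """Inverse direction: every natural number names exactly one string."""
    k = len(alphabet)
    out = []
    while n > 0:
        n, r = divmod(n - 1, k)
        out.append(alphabet[r])
    return "".join(reversed(out))

# Round trip: a genuine bijection between finite strings and naturals.
assert all(string_to_nat(nat_to_string(n)) == n for n in range(1000))
```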

dorgo

a day ago

I misunderstood "finite-length strings" as strings capped in length by a finite number N.

dwohnitmok

4 days ago

> I think you're overthinking it.

No, this is a standard fallacy that is covered in most introductory mathematical logic courses (under Tarski's undefinability of truth result).

> Define a "number definition system" to be any (maybe partial) mapping from finite-length strings on a finite alphabet to numbers.

At this level of generality with no restrictions on "mapping", you can define a mapping from finite-length strings to all real numbers.

In particular there is the Lowenheim-Skolem theorem, one of whose corollaries is that if you have access to powerful enough maps, the real numbers become countable (the Lowenheim-Skolem theorem says that there is a countable model of all the sets of ZFC, and more generally that if a first-order theory has a single infinite model, then it has models of every infinite cardinality).

Normally you don't have to be careful about defining maps in an introductory analysis course because it's usually difficult to accidentally create maps that are beyond the ability of ZFC to define. However, you have to be careful in your definition of maps when dealing with things that have the possibility of being self-referential because that can easily cross that barrier.

Here's an easy example showing why "definable real number" is not well-defined (or more directly that its complement "non-definable real number" is not well-defined). By the axiom of choice in ZFC we know that there is a well-ordering of the real numbers. Fix this well-ordering. The set of all undefinable real numbers is a subset of the real numbers and therefore well-ordered. Take its least element. We have uniquely identified a "non-definable" real number. (Variations of this technique can be used to uniquely identify ever larger swathes of "non-definable" real numbers and you don't need choice for it, it's just more involved to explain without choice and besides if you don't have choice, cardinality gets weird).

Again, as soon as you start talking about concepts that have the potential to be self-referential such as "definability," you have to be very careful about what kinds of arguments you're making, especially with regards to cardinality.

Cardinality is a "relative" concept. The common intuition (arising from the property that set cardinality forms a total ordering under ZFC) is that all sets have an intrinsic "size" and cardinality is that "size." But this intuition occasionally falls apart, especially when we start playing with the ability to "inject" more maps into our mathematical system.

Another way to think about cardinality is as a generalization of computability that measures how "scrambled" a set is.

We can think of indexing by the natural numbers as "unscrambling" a set back to the natural numbers.

We begin with complexity theory where we have different computable ways of "unscrambling" a set back to the natural numbers that take more and more time.

Then we go to computability theory where we end up at non-computably enumerable sets, that is sets that are so scrambled that there is no way to unscramble them back to the natural numbers via a Turing Machine. But we can still theoretically unscramble them back to the natural numbers if we drop the computability requirement. At this point we're at definability in our chosen mathematical theory and therefore cardinality: we can define some function that lets us do the unscrambling even if the actual unscrambling is not computable. But there are some sets that are so scrambled that even definability in our theory is not strong enough to unscramble them. This doesn't necessarily mean that they're actually any "bigger" than the natural numbers! Just that they're so scrambled we don't know how to map them back to the natural numbers within our current theory.

This intuition lets us nicely resolve why there aren't "more" rational numbers than natural numbers but there are "more" real numbers than natural numbers. In either case it's not that there's "more" or "less", it's just that the rational numbers are less scrambled than the real numbers, where the former is orderly enough that we can unscramble it back to the natural numbers with a highly inefficient, but nonetheless computable, process. The latter is so scrambled that we have no way in ZFC to unscramble them back (but if you gave us access to even more powerful maps then we could scramble the real numbers back to the natural numbers, hence Lowenheim-Skolem).

It doesn't mean that in some deep Platonic sense this map doesn't exist. Maybe it does! Our theory might just be too weak to be able to recognize the map. Indeed, there are logicians who believe that in some deep sense, all sets are countable! It's just the limitations of theories that prevent us from seeing this. (See for example the sketch laid out here: https://plato.stanford.edu/entries/paradox-skolem/#3.2). Note that this is a philosophical belief and not a theorem (since we are moving away from formal definitions of "countability" and more towards philosophical notions of "what is 'countability' really?"). But it does serve to show how it might be philosophically plausible for all real numbers, and indeed all mathematical objects, to be definable.

I'll repeat Hamkins' lines from the Math Overflow post because they nicely summarize the situation.

> In these pointwise definable models, every object is uniquely specified as the unique object satisfying a certain property. Although this is true, the models also believe that the reals are uncountable and so on, since they satisfy ZFC and this theory proves that. The models are simply not able to assemble the definability function that maps each definition to the object it defines.

> And therefore neither are you able to do this in general. The claims made in both in your question and the Wikipedia page [no longer on the Wikipedia page] on the existence of non-definable numbers and objects, are simply unwarranted. For all you know, our set-theoretic universe is pointwise definable, and every object is uniquely specified by a property.

canjobear

3 days ago

I think I understand your argument (you could define "the smallest 'undefinable' number" and now it has a definition) but I still don't see how it overcomes the fact that there are a countable number of strings and an uncountable number of reals. Can you exhibit a bijection between finite-length strings and the real numbers? It seems like any purported such function could be diagonalized.
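The diagonalization the parent has in mind is easy to phrase as code: given any purported enumeration of reals, produce a number that escapes it. A toy Python sketch (the enumeration shown is a hypothetical stand-in):

```python
def diagonalize(enumeration, n_digits=10):
    """Cantor's diagonal argument as code: given any purported enumeration
    of reals in [0, 1) (each real exposed as a digit function), build a
    number whose nth digit differs from the nth real's nth digit."""
    digits = []
    for n in range(n_digits):
        d = enumeration(n)(n)              # nth digit of the nth real
        digits.append(5 if d != 5 else 6)  # avoid 0/9 so 0.4999... = 0.5 can't bite
    return "0." + "".join(map(str, digits))

# A toy (hypothetical) enumeration: the nth real is 0.nnnnn...
toy = lambda n: (lambda i: n % 10)
print(diagonalize(toy))   # differs from every enumerated real in some digit
```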

dwohnitmok

2 days ago

My other reply is so long that HN collapsed it, but addresses your particular question about how to create the mapping between finite-length strings and the real numbers.

Here's another lens that doesn't answer that question, but offers another intuition of why "the fact that there are a countable number of strings and an uncountable number of reals" doesn't help.

For convenience I'm going to distinguish between "collections" which are informal groups of elements and "sets" which are formal mathematical objects in some kind of formal foundational set theory (which we'll assume for simplicity is ZFC, but we could use others).

My argument demonstrates that the "definable real numbers" is not a definition of a set. A corollary of this is that the subcollection of finite strings that form the definitions of unique real numbers is not necessarily an actual subset of the finite strings.

Your appeal that such definitions are themselves clearly finite strings is only enough to demonstrate that they are a subcollection, not a subset. You can only demonstrate that they are a subset if you could demonstrate that the definable real numbers form a subset of the real numbers which as I prove you cannot.

Then any cardinality arguments fail, because cardinality only applies to sets, not collections (which ZFC can't even talk about).

After all, strictly speaking, an uncountable set does not mean that such a set is necessarily "larger" than a countable set. All it means is that our formal system prevents us from counting its members.

There are subcollections of the set of finite strings that cannot be counted by any Turing Machine (non-computably enumerable sets). It's not so crazy that there might be subcollections of the set of finite strings that cannot be counted by ZFC. And then there's no way of comparing the cardinality of such a subcollection with the reals.

Another way of putting it is this: you can diagonalize your way out of any purported injection between the reals and the natural numbers. I can just the same diagonalize my way out of any purported injection between the collection of definable real numbers and the natural numbers. Give me such an enumeration of the definable real numbers. I change every digit diagonally. This uniquely defines a new real number not in your enumeration.

Perhaps even more shockingly, I can diagonalize my way out of any purported injection from the collection of finite strings uniquely identifying real numbers to the set of all natural numbers. You purport to give me such an enumeration. I add a new string that says "create the real number such that the nth digit is different from the real number of the nth definition string." Hence such a collection is an uncountable subcollection of a countable set.

dwohnitmok

2 days ago

> Can you exhibit a bijection between finite-length strings and the real numbers? It seems like any purported such function could be diagonalized.

Let's start with a mirror statement. Can you exhibit a bijection between definitions and the subset of the real numbers they are supposed to refer to? It seems like any purported such bijection could be made incoherent by a similar minimization argument.

In particular, no such function from the finite strings to the real numbers, according to the axioms of ZFC can exist, but a more abstract mapping might. In much the same way that no such function from definitions to (even a subset of) the real numbers according to the axioms of ZFC can exist, but you seem to believe a more abstract mapping might.

I think your thoughts are maybe something along these lines:

"Okay so fine maybe the function that surjectively maps definitions to the definable real numbers cannot exist, formally. It's a clever little trick that whenever you try to build such a function you can prove a contradiction using a version of the Liar's Paradox [minimality]. Clearly it definitely exists though right? After all the set of all finite strings is clearly smaller than the real numbers and it's gotta be one of the maps from finite strings to the real numbers, even if the function can't formally exist. That's just a weird limitation of formal mathematics and doesn't matter for the 'real world'."

But I can derive an almost exactly analogous thing for cardinality.

"Okay so fine maybe the function that surjectively maps the natural numbers to the real numbers cannot exist, formally. It's a clever little trick that whenever you try to build such a function you can prove a contradiction using a version of the Liar's Paradox [diagonalization]. Clearly it definitely exists though right? After all the set of all natural numbers is clearly just as inexhaustible as the real numbers and it's gotta be one of the maps from the natural numbers to the real numbers, even if the function can't formally exist. That's just a weird limitation of formal mathematics and doesn't matter for the 'real world'."

I suspect that you feel more comfortable with the concept of cardinality than definability and therefore feel that "the set of all finite strings is clearly 'smaller' than the real numbers" is a more "solid" base. But actually, as hopefully my phrasing above suggests, the two scenarios are quite similar to each other. The formalities that prevent you from building a definability function are no less artificial than the formalities that prevent you from building a surjection from the natural numbers to the real numbers (and indeed fundamentally are the same: the Liar's Paradox).

So, to understand how I would build a map that maps the set of finite strings to the real numbers, when no such map can formally exist in ZFC, let's begin by understanding how I would rigorously build a map that maps all sets to themselves (i.e. the identity mapping), even when no such map can formally exist as a function in ZFC (because there is no set of all sets).

(I'm choosing the word "map" here intentionally; I'll treat "function" as a formal object which ZFC can prove exists and "map" as some more abstract thing that ZFC may believe cannot exist).

We'll need a detour through model theory, where I'll use monoids as an illustrative example.

The definition of an (algebraic) monoid can be thought of as a list of logical axioms and vice versa. Anything that satisfies a list of axioms is called a model of those axioms. So e.g. every monoid is a model of "monoid theory," i.e. the axioms of a monoid. Interestingly, elements of a monoid can themselves be groups! For example, let's take the set {{}, {0}, {0, 1}, {0, 1, 2}, ...} as the underlying set of a monoid whose monoid operation is just set union and whose elements are all monoids that are just modular addition.

In this case not only is the parent monoid a model of monoid theory, each of its elements are also models of monoid theory. We can then in theory use the parent monoid to potentially "analyze" each of its individual elements to find out attributes of each of those elements. In practice this is basically impossible with monoid theory, because you can't say many interesting things with the monoid axioms. Let's turn instead to set theory.

What does this mean for ZFC? Well ZFC is a list of axioms, that means it can also be viewed as a definition of a mathematical object, in this case a set universe (not just a single set!). And just like how a monoid can contain elements which themselves are monoids, a set universe can contain sets that are themselves set universes.

In particular, for a given set universe of ZFC, we know that in fact there must be a countable set in that set universe, which itself satisfies ZFC axioms and is therefore a set universe in and of itself (and moreover such a countable set's members are themselves all countable sets)!

Using these "miniature" models of ZFC lets us understand a lot of things that we cannot talk about directly within ZFC. For example we can't make functions that map from all sets to all sets in ZFC formally (because the domain and the codomain of a function must both be sets and there is no set of all sets), but we can talk about functions from all sets to all sets in our small countable set S which models ZFC, which then we can use to potentially deduce facts about our larger background model. Crucially though, that function from all sets to all sets in S cannot itself be a member of S, otherwise we would be violating the axioms of ZFC and S would no longer be a model of ZFC! More broadly, there are many sets in S, which we know because of functions in our background model but not in S, must be countable from the perspective of our background model, but which are not countable within S because S lacks the function to realize the bijection.

This is what we mean when we talk about an "external" view that uses objects outside of our miniature model to analyze its internal objects, and an "internal" view that only uses objects inside of our miniature model.

Indeed this is how I can rigorously reason about an identity map that maps all sets to themselves, even when no such identity function exists in ZFC (because again the domain and codomain of a function must be sets and there is no set of all sets!). I create an "external" identity map that is only a function in my external model of ZFC, but does not exist at all in my set S (and hence S can generate no contradiction to the ZFC axioms it claims to model because it has no such function internally).

And that is how we can talk about the properties of a definability map rigorously without being able to construct one formally. I can construct a map, which is a function in my external model but not in S, that maps the finite strings of S (encoded as sets, as all things are if you take ZFC as your foundation) that form definitions to some subset of the real numbers in S. But there's multiple such maps! Some maps that map the finite strings of S to the real numbers "run out of finite strings," but we know that all the elements of S are themselves countable, which includes the real numbers (or at least S's conception of the real numbers)! Therefore, we can construct a bijective mapping of the finite strings of S to the real numbers of S. Remember, no such function exists in S, but this is a function in our external model of ZFC.

Since this mapping is not a function within S, there is no contradiction of Cantor's Theorem. But it does mean that such a mapping from the finite strings of S to the real numbers of S exists, even if it's not as a formal function within S. And hence we have to grapple with the problem of whether such a mapping likewise exists in our background model (i.e. "reality"), even if we cannot formally construct such a mapping as a function within our background model.

And this is what I mean when I say it is possible for all objects to have definitions and to have a mapping from finite strings to all real numbers, even if no such formal function exists. Cardinality of sets is not an absolute property of sets; it is relative to what kinds of functions you can construct. Viewed through this lens, the fact that there is no satisfiability function that maps definitions to the real numbers is just as real a fact as the fact that there is no surjective function from the natural numbers to the real numbers. It is strange to say that the former is just a "formality" and the latter is "real."

For more details on all this, read about Skolem's Paradox.

dwohnitmok

2 days ago

> elements of a monoid can themselves be groups

Whoops I meant monoids. I started with groups of groups but it was annoying to find meaningful inverse elements.

zeroonetwothree

5 days ago

Maybe it would be better to say almost all numbers are not computable.

canjobear

4 days ago

Chaitin's constant is definable but not computable.

dinosaurdynasty

5 days ago

Leads to really fun statements like "there exists a proof that all reals are equal to themselves" and "there does not exist a proof for every real number that it is equal to itself" (because `x=x`, for most real numbers, can't even be written down, there are more numbers than proofs).

bjourne

5 days ago

Really? Which number can't be defined with a finite expression?

wiml

4 days ago

Any HN comment is a finite expression, so it's impossible for me to specify a particular one. But the number of finite expressions is countable, and the number of reals is vastly more than a countable number, so most reals cannot be described in any human sense.

bjourne

4 days ago

If you can't specify it or describe it how do you know it exists?

wiml

4 days ago

I think (I am not a mathematician) that depends on whether you accept non-constructive proofs as valid. Normally you reason that any mapping from natural numbers onto the reals is incomplete (eg Cantor's argument), and that the sets of computable or describable numbers are countable, and therefore there exist indescribable real numbers. But if you don't like that last step, you do have company:

https://en.wikipedia.org/wiki/Constructivism_%28philosophy_o...

afiori

3 days ago

There are more infinite sequences than finite ones.

So not all infinite sequences can be uniquely specified by a finite description.

"√2" is a finite description, and so is the definition of π. But since there is no way to map the set of finite descriptions surjectively onto the set of infinite sequences, any one approach will leave holes.

hanche

4 days ago

You can't know. However, it is a consequence of the axiom of choice (AC). You can't know whether AC is true either, but mathematics without it is really, really hard, so it is usually assumed.

tzs

4 days ago

Most of them. The reals are uncountable. The set of finite expressions is countable.

sorokod

5 days ago

By the common definition of "almost all", facts 1 and 2 are the same statement.

testaccount28

5 days ago

how can i pick a real number at random though?

i tried Math.random(), but that gave a rational number. i'm very lucky i guess?

andrewflnr

5 days ago

You can't actually pick real numbers at random. You especially can't do it on a computer, since all numbers representable in a finite number of digits or bits are rational.

teraflop

5 days ago

Careful -- that statement is half true.

It's true that no matter what symbolic representation format you choose (binary or otherwise) it will never be able to encode all irrational numbers, because there are uncountably many of them.

But it's certainly false that computers can only represent rational numbers. Sure, there are certain conventional formats that can only represent rational numbers (e.g. IEEE-754 floating point) but it's easy to come up with other formats that can represent irrationals as well. For instance, the Unicode string "√5" is representable as 4 UTF-8 bytes and unambiguously denotes a particular irrational.
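The point can be made executable: a finite data structure can carry an irrational value exactly, provided the manipulation rules travel with it. A minimal Python sketch (the class name QSqrt5 is mine):

```python
from fractions import Fraction

class QSqrt5:
    """Exact arithmetic in Q(sqrt(5)): values a + b*sqrt(5) with rational
    a, b. A finite representation of a value that is irrational whenever
    b != 0 -- no digits, no rounding."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, other):
        return QSqrt5(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b*sqrt5)(c + d*sqrt5) = (ac + 5bd) + (ad + bc)*sqrt5
        return QSqrt5(self.a * other.a + 5 * self.b * other.b,
                      self.a * other.b + self.b * other.a)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)
    def __repr__(self):
        return f"{self.a} + {self.b}*sqrt(5)"

root5 = QSqrt5(0, 1)                 # exactly sqrt(5), in a few bytes
assert root5 * root5 == QSqrt5(5)    # (sqrt 5)^2 == 5 exactly
```

Computer algebra systems do essentially this, just for much larger classes of numbers.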

andrewflnr

5 days ago

I was careful. :)

> representable in a finite number of digits or bits

Implying a digit-based representation.

jamster02

4 days ago

> the Unicode string "√5" is representable as 4 UTF-8 bytes

As the other person pointed out, this is representing an irrational number unambiguously in a finite number of bits (8 bits in a byte). I fail to see how your original statement was careful :)

> representable in a finite number of digits or bits

dullcrisp

4 days ago

I don’t think those bits unambiguously represent the square root of five. Usually they represent either 3800603189 or -494364107.

bsaul

4 days ago

Isn't "unambiguous representation" impossible in practice anyway ? Any representation is relative to a formal system.

I can define sqrt(5) in a hard-coded table on a maths program using a few bytes, as well as all the rules for manipulating it in order to end up with correct results.

dullcrisp

4 days ago

Well yeah but if we’re being pedantic anyway then “render these bits in UTF-8 in a standard font and ask a human what number it makes them think of” is about as far from an unambiguous numerical representation as you could get.

Of course if you know that you want the square root of five a priori then you can store it in zero bits in the representation where everything represents the square root of five. Bits in memory always represent a choice from some fixed set of possibilities and are meaningless on their own. The only thing that’s unrepresentable is a choice from infinitely many possibilities, for obvious reasons, though of course the bounds of the physical universe will get you much sooner.

cozzyd

5 days ago

Or use pieee-754 which is the same as iee-754 but everything is mimtipled by pi.

electroglyph

5 days ago

i really wanted "mimtipled" to be a word =)

cozzyd

4 days ago

I guess my phone thinks it might be since it didn't correct it :)

tantalor

5 days ago

Pick a digit, repeat, don't stop.

markusde

5 days ago

Exactly right. You can pick and use real numbers, as long as they are only queried to finite precision. There are lots of super cool algorithms for doing this!
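A toy version of the idea (not one of the serious algorithms): represent the random real as a lazy digit stream and answer finite-precision queries on demand. Names are mine:

```python
import random

def random_real_digits(rng):
    """A 'random real' as a lazy digit stream: the full number is never
    stored, but any finite-precision query can be answered on demand."""
    while True:
        yield rng.randrange(10)

def to_precision(digit_iter, n):
    """Read n more decimal digits from the lazy real."""
    return "".join(str(next(digit_iter)) for _ in range(n))

x = random_real_digits(random.Random(0))
first = "0." + to_precision(x, 5)   # the real to 5 places...
more = to_precision(x, 5)           # ...refined by 5 further digits on demand
```

With probability 1 the stream never becomes eventually periodic, so the "number" being queried is irrational, even though only finitely many digits ever exist at once.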

jibal

5 days ago

That's just saying that you can pick and use rational numbers (which are a subset of the reals.)

markusde

3 days ago

Kind of, but you're not just picking rationals, you're picking rationals that are known to converge to a real number with some continuous property.

You might be interested in this paper [1] which builds on top of this approach to simulate arbitrarily precise samples from the continuous normal distribution.

[1] https://dl.acm.org/doi/10.1145/2710016

skulk

5 days ago

Not really. You can simulate a probability of 1/x by expanding 1/x in binary and flipping a coin repeatedly, once for each digit, until the coin matches the digit (assign heads and tails to 0 and 1 consistently). If the match happened on a 1, it's a positive result; otherwise negative. This requires only arbitrary but finite precision, yet the probability is exactly 1/x, which need not be rational.
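The scheme above, sketched in Python (the 1/3 example is mine; any lazily computable binary expansion works, including an irrational one):

```python
import random

def bernoulli(p_bits, rng):
    """Flip a fair coin for each binary digit of p until the coin matches
    the digit; the result is positive iff the match happened on a 1.
    P(positive) = p exactly, yet with probability 1 only finitely many
    flips (two on average) are ever needed."""
    for bit in p_bits:
        if rng.randrange(2) == bit:
            return bit == 1
    return False  # unreachable for an infinite digit stream

def third_bits():
    """Binary digits of 1/3 = 0.010101..., standing in for any computable p."""
    while True:
        yield 0
        yield 1

rng = random.Random(42)
n = 20000
freq = sum(bernoulli(third_bits(), rng) for _ in range(n)) / n
# freq should be statistically close to 1/3
```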

jibal

4 days ago

No, it isn't ... an infinite expansion isn't possible.

jibal

5 days ago

At no point will your number be transcendental (or even irrational).

tantalor

5 days ago

That's why you can't stop.

jibal

4 days ago

That's irrelevant. It's like saying that you can count to infinity if you never stop counting ... but no, every number in the count is finite.

tantalor

4 days ago

That's how limits at infinity work.

jibal

3 days ago

No, it certainly isn't.

techas

5 days ago

And don’t die.

mg

5 days ago

How did you test the output of Math.random() for transcendence?

When you apply the same test to the output of Math.PI, does it pass?

BeetleB

5 days ago

All floating point numbers are rational.

zeroonetwothree

5 days ago

All numbers that actually exist in our finite visible universe are rational.

edanm

4 days ago

What does "actually exist" mean? Does Pi "actually exist"?

tsimionescu

4 days ago

Not really. In all of our physical theories, curved paths are actual curves. So (assuming a circular orbit for a second) the ratio between the length of the Earth's orbit around the Sun and the diameter of that orbit is Pi - so either the length of the path or the straight-line distance must be an irrational number. While the actual orbit is elliptical rather than circular, a similar relation still holds.

Of course, we can only measure any quantity up to a finite precision. But the fact that we chose to express the measurement outcome as 3.14159 +- 0.00001 instead of expressing it as Pi +- 0.00001 is an arbitrary choice. If the theory predicts that some path has length equal exactly to 2.54, we are in the same situation - we can't confirm with infinite precision that the measurement is exactly 2.54, we'll still get something like 2.54 +- 0.00001, so it could very well be some irrational number in actual reality.

jmgao

5 days ago

Well, except for inf, -inf, and nan.

Someone

5 days ago

and, depending on how you define the rationals, -0.

https://en.wikipedia.org/wiki/Integer: “An integer is the number zero (0), a positive natural number (1, 2, 3, ...), or the negation of a positive natural number (−1, −2, −3, ...)”

According to that definition, -0 isn’t an integer.

Combining that with https://en.wikipedia.org/wiki/Rational_number: “a rational number is a number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q”

means there’s no way to write -0 as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q.
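Whatever one decides about -0 as a rational, in IEEE-754 it is a distinct bit pattern rather than a distinct number; a quick Python check:

```python
import math

# In IEEE-754 floats, -0.0 compares equal to 0.0 but keeps its sign bit.
neg_zero = -0.0
assert neg_zero == 0.0                         # compares equal to zero...
assert math.copysign(1.0, neg_zero) == -1.0    # ...but the sign bit survives
assert str(neg_zero) == "-0.0"
```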

kridsdale1

5 days ago

Use an analog computer. Sample a voltage. Congrats.

why-o-why

5 days ago

Sample it with what? An infinite precision ADC?

This is how old temperature-noise based TRNGs can be attacked (modern ones use a different technique, usually a ring oscillator with whitening... although I have heard noise-based is coming back, but I've been out of the loop for a while)

rcxdude

5 days ago

Well, sampling is technically an analog operation that is separate from the conversion operation that makes the result digital. But then analog circuits don't ever actually hold a single real number, in practice there is always noise and that in practice limits the precision to less than what can be fairly easily achieved digitally.

why-o-why

4 days ago

Sure, but we are talking about generating a random number, not sampling noise: those are two different things, though the former can be derived from the latter, just not as directly and simply as the parent post claimed. Just sampling analog noise does not generate a "true" random number that can satisfy a set of design parameters to configure the NIST 800-90b entropy assessment (well, one could pick shitty parameters for the probability tests, but let's assume experts at the helm). Hence the need for software whitening.

https://en.wikipedia.org/wiki/Hardware_random_number_generat...

https://github.com/usnistgov/SP800-90B_EntropyAssessment

(^^^ this is a fun tool, I recommend playing with it to learn how challenging it is to generate "true" random numbers.)

An infinite precision ADC couldn't be subject to thermal attack because you could just sample more bits of precision. (Of course, then we'd be down to Planck level precision so obviously there are limits, but my point still stands, at least _I_ think it does. :))

jibal

5 days ago

Use an analog computer how, to do what? An analog computer can do analog operations on analog signals, but you can't get an irrational number out of it ... this can be viewed as a sort of monad.

user

5 days ago

[deleted]

tzs

5 days ago

If we are including numbers that aren't actually proven to be transcendental but that most mathematicians think are, I'd put Lévy's constant on the list.

It is e^(pi^2/(12 log 2))

Here's where it comes from. For almost all real numbers if you take their continued fraction expansion and compute the sequence of convergents, P1/Q1, P2/Q2, ..., Pn/Qn, ..., it turns out that the sequence Q1^(1/1), Q2^(1/2), ..., Qn^(1/n) converges to a limit and that limit is Lévy's constant.
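This limit is easy to sanity-check numerically (a sketch, not from the comment: a long pseudorandom rational, expanded exactly with Python's `fractions`, stands in for a "typical" real number):

```python
import random
from fractions import Fraction

# Lévy's constant: e^(pi^2 / (12 ln 2)) ≈ 3.27582...
# For almost every real x, the denominators Q_n of the continued-fraction
# convergents satisfy Q_n^(1/n) -> Lévy's constant.
random.seed(1)
digits = "".join(random.choice("0123456789") for _ in range(600))
x = Fraction(int(digits), 10 ** 600)     # a 600-digit "typical" number in (0, 1)

N = 250
q_prev, q = 0, 1                          # Q_{-1} = 0, Q_0 = 1
for _ in range(N):
    a = int(1 / x)                        # next partial quotient (floor of 1/x)
    x = 1 / x - a                         # Gauss map, exact in Fractions
    q_prev, q = q, a * q + q_prev         # Q_n = a_n * Q_{n-1} + Q_{n-2}

estimate = q ** (1.0 / N)
print(estimate)                           # hovers near 3.27582...
```

With only 250 terms the estimate still fluctuates around the limit, but it lands clearly nearer 3.28 than, say, Khinchin's geometric-mean constant 2.685.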

barishnamazov

5 days ago

Don't want to be "that guy," but Euler's constant and Catalan's constant aren't proven to be transcendental yet.

For context, a number is transcendental if it's not the root of any non-zero polynomial with rational coefficients. Essentially, it means the number cannot be constructed using a finite combination of integers and standard algebraic operations (addition, subtraction, multiplication, division, and integer roots). sqrt(2) is irrational but algebraic (it solves x^2 - 2 = 0); pi is transcendental.

The reason we haven't been able to prove this for constants like Euler-Mascheroni (gamma) is that we currently lack the tools to even prove they are irrational. With numbers like e or pi, we found infinite series or continued fraction representations that allowed us to prove they cannot be expressed as a ratio of two integers.

With gamma, we have no such "hook." It appears in many places (harmonics, gamma function derivatives), but we haven't found a relationship that forces a contradiction if we assume it is algebraic. For all we know right now, gamma could technically be a rational fraction with a denominator larger than the number of atoms in the universe, though most mathematicians would bet the house against it.

servercobra

5 days ago

Both Euler's and Catalan's entries note "(Not proven to be transcendental, but generally believed to be by mathematicians.)". Maybe updated after your comment?

gizmo686

5 days ago

> Essentially, it means the number cannot be constructed using a finite combination of integers and standard algebraic operations (addition, subtraction, multiplication, division, and integer roots)

Slight clarification, but standard operations are not sufficient to construct all algebraic numbers. Once you get to 5th degree polynomials, there is no guarantee that their roots can be found through standard operations.

hidroto

4 days ago

I am no mathematician, but I think you may be overstating Galois's result. It says that you can't write a single closed-form expression for the roots of any quintic using only (+, -, *, /, nth roots). This does not necessarily stop you from expressing each root individually with the standard algebraic operations.

gizmo686

4 days ago

I think you are thinking of the Abel–Ruffini impossibility theorem, which states that there is no general solution to polynomials of degree 5 or greater using only standard operations and radicals.

Galois went a step further and proved that there existed polynomials whose specific roots could not be so expressed. His proof also provided a relatively straightforward way to determine if a given polynomial qualified.

hidroto

4 days ago

Thanks for the correction. It seems that all the layman's explanations of Galois theory I have seen have been simplified to the point of being technically wrong, as well as underselling it.

mswtk

4 days ago

Technically, the actual statement in Galois theory is even more general. Roughly, it says that, for a given polynomial over a field, if there exists an algorithm that computes the roots of this polynomial, using only addition, subtraction, multiplication, division and radicals, then a particular algebraic structure associated with this polynomial, called its Galois group, has to have a very regular structure.

So it's a bit stronger than the term "closed formula" implies. You can then show explicit examples of degree 5 polynomials which don't fulfill this condition, prove a quantitative statement that "almost all" degree 5 polynomials are like this, explain the difference between degree 4 and 5 in terms of group theory, etc.
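The standard explicit example is x^5 - x - 1, whose Galois group over Q is the full symmetric group S5, so its roots admit no radical expression; they remain perfectly computable numerically (a hypothetical sketch using Newton's method):

```python
# x^5 - x - 1 = 0: the classic quintic not solvable by radicals
# (Galois group S5). Its unique real root can still be approximated
# to any precision, here by Newton's method.
def f(x):
    return x**5 - x - 1

def fprime(x):
    return 5 * x**4 - 1

x = 1.2                       # starting guess near the real root
for _ in range(50):
    x -= f(x) / fprime(x)     # Newton step

print(x)                      # ≈ 1.16730...
```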

mjd

4 days ago

I'm glad that someone decided to be "that guy". Putting the Euler-Mascheroni constant third on the list was a very questionable choice.

zkmon

5 days ago

If a number system has a transcendental number as its base, would these numbers still be called transcendental in that number system?

moefh

5 days ago

Yes. A number is transcendental if it's not the root of a polynomial with integer coefficients; that's completely independent of how you represent it.

gizmo686

5 days ago

The notion of transcendental is not related to how we write numbers. However, in abstract algebra, we generalize the notion of algebraic/transcendental to arbitrary fields. In such a framework, a number is only transcendental relative to a particular field.

For instance, the standard statement that pi is transcendental would become: pi is transcendental over Q (the rational numbers). However, pi is trivially not transcendental over Q(pi), which is the smallest field obtained by adding pi to the rational numbers. A more interesting question is whether e is transcendental over Q(pi); as far as I am aware, that is still an open problem.

frutiger

5 days ago

I think the elements of the base need to be enumerable (proof needed but it feels natural), and transcendental numbers are not enumerable (proof also needed).

JadeNB

5 days ago

I think your parent comment was speaking of a "base-$\alpha$ representation", where $\alpha$ is a single transcendental number—no concerns about countability, though one must be quite careful about the "digits" in this base.

(I'm not sure what "the elements of the base need to be enumerable" means—usually, as above, one speaks of a single base; while mixed-radix systems exist, the usual definition still has only one base per position, and only countably many positions. But the proof of countability of transcendental numbers is easy, since each is a root of a polynomial over $\mathbb Q$, there are only countably many such polynomials, and every polynomial has only finitely many roots.)
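The counting argument in the parenthetical can be written in one line (a standard sketch, not part of the comment):

```latex
\{\text{real algebraic numbers}\}
  \;=\; \bigcup_{p \in \mathbb{Q}[x] \setminus \{0\}} \{\, x \in \mathbb{R} : p(x) = 0 \,\}
```

a countable union (there are countably many such polynomials) of finite sets (a polynomial of degree n has at most n roots), hence countable; since the reals are uncountable, the transcendentals must be uncountable.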

jibal

4 days ago

> I think the elements of the base need to be enumerable (proof needed but it feels natural)

Proof of what? Needed for what?

The elements of the number system are the base raised to non-negative integer powers, which of course is an enumerable set.

> transcendental numbers are not enumerable

Category mistake ... sets can be enumerable or not; numbers are not the sort of thing that can be enumerable or not. (The set of transcendental numbers is of course not enumerable [per Georg Cantor], but that doesn't seem to be what you're talking about.)

senfiaj

5 days ago

> Euler's constant, gamma = 0.577215 ... = lim n -> infinity (1 + 1/2 + 1/3 + 1/4 + ... + 1/n - ln(n)) (Not proven to be transcendental, but generally believed to be by mathematicians.)

So why bring some numbers here as transcendental if not proven?
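Whatever one thinks of the labeling, the quoted limit is easy to evaluate numerically (a sketch; the -1/(2n) term is the standard first correction from the Euler–Maclaurin expansion, not part of the quoted definition):

```python
import math

# gamma = lim (1 + 1/2 + ... + 1/n - ln n). The raw partial sums converge
# slowly (error ~ 1/(2n)); subtracting 1/(2n) accelerates this to ~ 1/(12 n^2).
n = 10 ** 6
H = math.fsum(1.0 / k for k in range(1, n + 1))   # harmonic number, exactly summed
gamma_est = H - math.log(n) - 1.0 / (2 * n)

print(gamma_est)          # ≈ 0.5772156649... (Euler-Mascheroni constant)
```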

rkowalick

5 days ago

As far as I know, Euler's constant hasn't even been proven to be irrational.

auggierose

5 days ago

Because it still might be transcendental. Just because you don't know if the list is correct, doesn't mean it isn't.

senfiaj

5 days ago

Yes, it's "likely" to be transcendental, and maybe there is some evidence that supports this, but that is not a proof (keep in mind that it isn't even proven to be irrational yet). Similarly, most mathematicians/computer scientists bet that P ≠ NP, but that doesn't make it proven, and no one should claim P ≠ NP in an article just because "it's most likely to be true" (even though some empirical real-life evidence supports this hypothesis). In mathematics, some things may turn out to be contrary to our intuition and experience.

auggierose

5 days ago

It comes with the explicit comment "Not proven to be transcendental, but generally believed to be by mathematicians."

That's really all you can do, given that 3 and 4 are really famous. At this point it is therefore just not possible to write a list of the "Fifteen Most Famous Transcendental Numbers", because this is quite possibly a different list than "Fifteen Most Famous Numbers that are known to be transcendental".

senfiaj

5 days ago

So "Fifteen Most Famous Transcendental Numbers" isn't the same as "Fifteen Most Famous Numbers that are known to be transcendental"?

I might be OK with the title "Fifteen Most Famous Numbers that are believed to be transcendental" (though some of them have been proven transcendental), but "Fifteen Most Famous Transcendental Numbers" implies that all the listed numbers are transcendental. Math assumes a claim is proven before it is asserted. Math is much stricter than most natural (especially empirical) sciences, where everything is based on evidence and some small level of uncertainty might be OK (evidence is always probabilistic).

Yes, in math mistakes happen too (can happen in complex proofs, human minds are not perfect), but in this case the transcendence is obviously not proven. If you say "A list of 15 transcendental numbers" a mathematician will assume all 15 are proven to be transcendental. Will you be OK with claim "P ≠ NP" just because most professors think it's likely to be true without proof? There are tons of mathematical conjectures (such as Goldbach's) that intuitively seem to be true, yet it doesn't make them proven.

Sorry for being picky here, I just have never seen such low standards in real math.

auggierose

5 days ago

You are not picky, you just don't understand my point.

"Fifteen Most Famous Transcendental Numbers" is indeed not the same as "Fifteen Most Famous Numbers that are known to be transcendental". It is also not the same as "Fifteen Most Famous Numbers that have been proven to be transcendental". Instead, it is the same as "Fifteen Most Famous Numbers that are transcendental".

That's math for you.

senfiaj

5 days ago

Again, it seems we are arguing because of our subjective differences in the title correctness and rigor. Personally, I would not expect such title even from a pop-math type article. At least it should be more obvious from the title.

"Transcendental" or even "irrational" isn't a vibesy category like "mysterious" or "beautiful"; it's a hard mathematical property. So a headline that flatly labels a number "transcendental" while the article simultaneously admits "not even proven" looks more like clickbait.

auggierose

4 days ago

Not sure why you would think that anyone thinks that transcendental is a "vibesy" category, or why you would think that you are more invested in the "hardness" of mathematical properties than anyone else here.

You clearly still don't understand. And to call the title "clickbait" is pretty silly.

loloquwowndueo

5 days ago

So it’s like “15 oldest actors to win an Oscar” and including someone who’s nominated this year but hasn’t actually won. But he might, right?

No, my dudes. Just no. If it’s not proven transcendental, it’s not to be considered such.

chvid

5 days ago

I think the Oscars should go to the algebraic numbers - think about it - they are far less common ...

user

5 days ago

[deleted]

nuancebydefault

5 days ago

I would have expected more numbers originating from physics, like Reynolds number (bad example since it is not really constant though).

The human-invented ones seem to be just a handful of the dozens one can come up with.

i to the power of i is one I had never heard of, but it is fascinating!

SOTGO

5 days ago

To prove something is transcendental we would need to know how to compute it exactly, and I’m struggling to see how that would come up frequently in a physics context. In physics most constants are not arbitrary real numbers derived from a formula, they’re a measured relationship, which sort of inherently can’t be proved to be transcendental

cozzyd

4 days ago

Yeah I'd expect Bessel function zeroes and such

keepamovin

5 days ago

This guy's books sound fascinating: Keys to Infinity and Wonder of Numbers. Definitely going to add them to Kindle. "pi transcends the power of algebra to display it in its totality": what an entrance!

I think I read a book by this guy as a kid: it was an illustrated, mostly black and white book about Chaitin's constant, the halting problem, and various ways of counting over infinite sets.

tshaddox

5 days ago

> Did you know that there are "more" transcendental numbers than the more familiar algebraic ones?

Indeed. And by similar arguments, there are more uncomputable real numbers than computable real numbers. (And almost all transcendental numbers are uncomputable).

drob518

5 days ago

Some of these seem forced. For instance, does Champernowne's number (number 7 on the list, 0.12345678910111213141516171819202122232425...) occur in nature, or was it just manufactured in a mathematical laboratory somewhere?

zeeboo

5 days ago

It is indeed manufactured specifically to show the existence of "normal" numbers, which are, loosely, numbers where every finite sequence of digits is equally likely to appear. This property is both ubiquitous (almost every number is normal in a specific sense) and difficult to prove for numbers not specifically cooked up to be so.
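The digit statistics are easy to check on a prefix (a sketch, not from the comment; in any finite prefix '0' lags slightly because it never appears as a leading digit):

```python
from collections import Counter

# Champernowne's constant 0.123456789101112...: count digit frequencies in
# the prefix formed by concatenating 1..999999. Normality predicts each
# digit's frequency tends to 1/10.
s = "".join(str(k) for k in range(1, 1_000_000))
freq = {d: c / len(s) for d, c in sorted(Counter(s).items())}

for d, f in freq.items():
    print(d, round(f, 4))    # digits 1-9 come out near 0.1019, '0' near 0.083
```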

drob518

5 days ago

Okay, fair. It just seemed to me to have pretty limited utility.

kaffekaka

5 days ago

Hm who cares about utility in this case?

drob518

5 days ago

Well, if we don’t care about utility I could define infinitely many transcendental numbers with no utility other than I just made them up. The number that is the concatenation of the digits of all prime numbers in sequence, for instance: 0.23571113171923… I christen this Dave’s Number. (It probably already has a name, but I’m stealing it.) Let’s add it to the list. Now we can define Dave’s Second Number as the first prime added to Dave’s Number: 2.23571113171923… Dave’s Third Number is the second prime added to Dave’s Number: 3.23571113171923… Since we’re cataloguing numbers with no utility, let’s add them all to the list.
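For what it's worth, "Dave's Number" does already have a name: concatenating the primes after a decimal point gives the Copeland–Erdős constant, which is proven normal in base 10. A quick sketch of its prefix:

```python
# The Copeland-Erdos constant 0.235711131719...: concatenation of the primes.
def primes_upto(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

prefix = "0." + "".join(str(p) for p in primes_upto(100))
print(prefix)    # 0.2357111317192329...
```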

kaffekaka

4 days ago

The list was for famous numbers. Yours might get there, but not so fast.

jerf

5 days ago

All the transcendental numbers are "manufactured in a mathematical laboratory somewhere".

In fact we can tighten that to all irrational numbers are manufactured in a mathematical laboratory somewhere. You'll never come across a number in reality that you can prove is irrational.

That's not necessarily because all numbers in reality "really are" rational. It is because you can't get the infinite precision necessary to have a number "in hand" that is irrational. Even if you had a quadrillion digits of precision on some number in [0, 1] in the real universe you'd still not be able to prove that it isn't simply that number over a quadrillion no matter how much it may seem to resemble some other interesting irrational/transcendental/normal/whatever number. A quadrillion digits of precision is still a flat 0% of what you'd need to have a provably irrational number "in hand".

user

5 days ago

[deleted]

tshaddox

5 days ago

> You'll never come across a number in reality that you can prove is irrational.

If a square with sides of rational (and non-zero) length can exist in reality, then the length of its diagonal is irrational. So which step along the way isn't possible in reality? Is the rational side length possible? Is the right angle possible?

613style

5 days ago

They're saying you can't find a ruler accurate enough to be sure the number you measure is sqrt(2) and not sqrt(2) for the first 1000 digits then something else. And eventually, as you build better and better rulers, it will turn out that physical reality doesn't encode enough information to be sure. Anything you can measure is rational.

zeroonetwothree

5 days ago

A perfect mathematical square cannot exist in reality.

tshaddox

4 days ago

Which part is impossible? It was implied that perfect rational numbers are possible, so I’m wondering what stops the square with rational side length from existing and having an irrational diagonal.

5ver

5 days ago

It appears quantum phenomena are accurately described using mathematics involving trig functions. As such, we do encounter numbers in reality that involve transcendental numbers, right?

kergonath

5 days ago

You don’t need quantum mechanics. Trigonometric functions are everywhere in classical mechanics. Gaussians, exponential, and logs are everywhere in statistical physics. You cannot do much if you don’t use transcendental numbers. Hell, you just need a circle to come across pi. It’s rational numbers that are special.

kevin_thibedeau

5 days ago

They're accurately modeled. Just as Newtownian phenomena are accurately modeled, until they aren't. Reality is not necessarily reflective of any model.

jerf

5 days ago

Consider the ideal gas law: pV=nRT

Five continuous quantities related to each other, where by default when not specified we can safely assume real values, right? So we must have real values in reality, right?

But we know that gas is not continuous. The "real" ideal gas law that relates those quantities really needs you to input every gas molecule, every velocity of every gas molecule, every detail of each gas molecule, and if you really want to get precise, everything down to every neutrino passing through the volume. Such a real formula would need to include terms for things like the self-gravitation of the gas affecting all those parameters. We use a simple real-valued formula because it is good enough to capture what we're interested in. None of the five quantities in that formula "actually" exist, in the sense of being a single number that fully captures the exact details of what is going on. It's a model, not reality.

Similarly, all those things using trig and such are models, not reality.

But while true, those in some sense miss something even more important, which I alluded to strongly but will spell out clearly here: What would it mean to have a provably irrational value in hand? In the real universe? Not metaphorically, but some sort of real value fully in your hand, such that you fully and completely know it is an irrational value? Some measure of some quantity that you have to that detail? It means that if you tell me the value is X, but I challenge you that where you say the Graham's Number-th digit of your number is a 7, I say it is actually a 4, you can prove me wrong. Not by math; by measurement, by observation of the value that you have "in hand".

You can never gather that much information about any quantity in the real universe. You will always have finite information about it. Any such quantity will be indistinguishable from a rational number by any real test you could possibly run. You can never tell me with confidence that you have an irrational number in hand.

Another way of looking at it: Consider the Taylor expansion of the sine function. To be the transcendental function it is in math, it must use all the terms of the series. Any finite number of terms is still a polynomial, no matter how large. Now, again, I tell you that by the Graham's Number term, the universe is no longer using those terms. How do you prove me wrong by measurement?

All you can give me is that some value in hand sure does seem to bear a strong resemblance to this particular irrational value, pi or e perhaps, but that's all. You can't go out the infinite number of digits necessary to prove that you have exactly pi or e.

Many candidates for the Theory of Everything don't even have the infinite granularity in the universe in them necessary to have that detailed an object in reality, containing some sort of "smallest thing" in them and minimum granularity. Even the ones that do still have the Planck size limit that they don't claim to be able to meaningfully see beyond with real measurements.

5ver

4 days ago

Yes, I can’t prove I have pi. But you can’t prove that I don’t. I’m not a physicist, I’m a mathematician. Quantum phenomena appear to actually, in reality, be “probabilistic” and to actually involve irrational numbers.

If rationals exist in reality and you are comfortable with Graham’s number existing in reality (which has more digits in its base 10 representation than the number of particles in the observable universe) then why not irrationals? They are the completion of the rationals.

Unless you are a finitist.

Strilanc

5 days ago

Its fame comes from the simplicity of its construction rather than its utility elsewhere in mathematics.

For example, Graham's number is pretty famous but it's more of a historical artifact rather than a foundational building block. Other examples of non-foundational fame would be the famous integers 42, 69, and 420.

eichin

5 days ago

> mathematical laboratory

Love the image of mathematicians laboring over flasks and test tubes, mixing things and extracting numbers... would have far more explosions than day-to-day mathematics usually does...

user

5 days ago

[deleted]

tantalor

5 days ago

Yes, it occurs in the nature of the mathematician's mind.

sriku

4 days ago

i^i isn't unique right? The "let x = π/2" could very well have been "let x = π(4k+1)/2" for any integer k.
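Right: the principal branch takes k = 0, which is what complex exponentiation returns (a quick check, not from the comment):

```python
import math

# Principal value: i^i = exp(i * Log i) = exp(i * (i*pi/2)) = e^(-pi/2),
# a real number. Other branches give e^(-pi*(4k+1)/2) for integer k.
z = (1j) ** 1j
print(z.real)                  # ≈ 0.20788
print(math.exp(-math.pi / 2))  # same value
```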

phyzome

5 days ago

Yes, but the most famous ones are boring, we already know these! Let's get a list of the least famous transcendental numbers.

adrian_b

5 days ago

It should be noted that the number e = 2.71828 ... does not have any importance in practice, its value just satisfies the curiosity to know it, but there is no need to use it in any application.

The transcendental number whose value matters (being the second most important transcendental number after 2*pi = 6.283 ...) is ln 2 = 0.693 ... (and the value of its inverse log2(e), in order to avoid divisions).

Also for pi, there is no need to ever use it in computer applications, using only 2*pi everywhere is much simpler and 2*pi is the most important transcendental number, not pi.

d-us-vb

5 days ago

This comment is quite strange to me. e is the base of the natural logarithm, so ln 2 is actually log_e(2). If we take the natural log of 2, we are literally using e as the base of a logarithm.

Does a number not matter "in practice" even if it's used to compute a more commonly used constant? Very odd framing.

adrian_b

4 days ago

The number "e" itself is never needed in any application.

It is not used for computing the value of ln(2) or of log2(e), which are computed directly as limits of some convergent series.

As I have said, there is no reason whatsoever for knowing the value of e.

Moreover, it is almost never a good choice to use the exponential function or the hyperbolic logarithm function (a.k.a. natural logarithm, though it does not really deserve the name "natural").

For any numeric computations, it is preferable to use the exponential 2^x and the binary logarithm everywhere. With this choice, the constant ln 2 or its inverse appears in formulae that compute derivatives or integrals.

People are brainwashed in school into using the exponential e^x and the hyperbolic logarithm, because this choice was more convenient for symbolic computations done with pen on paper, like in the 19th century.

In reality, choosing to have the proportionality factor in the derivative formula as "1" instead of "ln 2" is a bad choice. The reason is that removing the constant from the derivative formula does not make it disappear, but it moves it into the evaluation of the function and in any application much more evaluations of the functions must be done than computations of derivative or integral formulae.

The only case where using e^x may bring simplifications is in symbolic computations with complex exponentials and complex logarithms, which may be needed when developing mathematical models for linear systems described by linear systems of ordinary or partial differential equations. Even then, after the symbolic computation produces a mathematical model suitable for numeric computations, it is more efficient to convert all exponential and logarithmic functions to use only 2^x and binary logarithms.

lutusp

5 days ago

> It should be noted that the number e = 2.71828 ... does not have any importance in practice, its value just satisfies the curiosity to know it, but there is no need to use it in any application.

In calculations like compound financial interest, radioactive decay and population growth (and many others), e is either applied directly or derived implicitly.

> ... 2*pi is the most important transcendental number, not pi.

Gotta agree with this one.

adrian_b

4 days ago

When using the exponential e^x or the natural logarithm, the number "e" is never used. Only ln 2 or its inverse are used inside the function evaluations, for argument range reduction.

In radioactive decay and population growth it is much simpler conceptually to use 2^x, not e^x, which is why this is done frequently even by people who are not aware that the computational cost of 2^x is lower and its accuracy is greater.

In compound financial interest using 2^x would also be much more natural than the use of e^x, but in financial applications tradition is usually more important than any actual technical arguments.
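The two forms of the decay law are the same function with different constants, which a few lines make concrete (a sketch; the carbon-14 half-life is just a familiar example value):

```python
import math

# The same decay law written both ways: with base 2 the measured half-life T
# appears directly, while the e-based form needs lambda = ln(2) / T.
T = 5730.0                          # carbon-14 half-life, in years
lam = math.log(2) / T               # decay constant for the e-based form
t = 10000.0                         # elapsed time, any real value

remaining_base2 = 0.5 ** (t / T)    # fraction remaining, base-2 form
remaining_e = math.exp(-lam * t)    # fraction remaining, e-based form

print(remaining_base2, remaining_e) # agree to rounding error
```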

lutusp

4 days ago

> When using the exponential e^x or the natural logarithm, the number "e" is never used. Only ln 2 or its inverse are used inside the function evaluations, for argument range reduction.

That is only true in the special case of computing a half-life. In the general case, e^x is required. When computing a large number of cases and to avoid confusion, e^x is the only valid operator. This is particularly true in compound interest calculations, which would fall apart entirely without the presence of e^x and ln(x).

> In radioactive decay and population growth it is much simpler conceptually to use 2^x, not e^x

See above -- it's only valid if a specific, narrow question is being posed.

> In compound financial interest using 2^x would also be much more natural than the use of e^x

That is only true to answer a specific question: How much time to double a compounded value? For all other cases, e^x is a requirement.

If your position were correct, if 2^x were a suitable replacement, then Euler's number would never have been invented. But that is not reality.

adrian_b

4 days ago

No, you did not try to understand what I have written.

The use of ln 2 for argument range reduction has nothing to do with half lives. It is needed in any computation of e^x or ln x, because the numbers are represented as binary numbers in computers and the functions are evaluated with approximation formulae that are valid only for a small range of input arguments.

The argument range reduction can be avoided only if you know before evaluation that the argument is close enough to 0 for an exponential or to 1 for a logarithm, so that an approximation formula can be applied directly. For a general-purpose library function you cannot know this.

Also, the use of 2^x instead of e^x for radioactive decay, population growth or financial interest is not at all limited to the narrow cases of doublings or halvings. Those happen when x is an integer in 2^x, but 2^x accepts any real value as argument. There is no difference in domain between 2^x and e^x.

The only difference between using 2^x and e^x in those 3 applications is in a different constant in the exponent, which has the easier to understand meaning of being the doubling or halving time, when using 2^x and a less obvious meaning when using e^x. In fact, only doubling or halving times are directly measured for radioactive decay or population growth. When you want to use e^x, you must divide the measured values by ln 2, an extra step that brings no advantage whatsoever, because it must be implicitly reversed during every subsequent exponential evaluation when the argument range reduction is computed.
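The range reduction being described can be sketched in a few lines (a toy illustration, not a production exp; real libraries use a minimax polynomial and a carefully split ln 2 to limit cancellation):

```python
import math

# To evaluate e^x: write x = k*ln(2) + r with |r| <= ln(2)/2, evaluate e^r
# with a short series (r is small), then rescale by 2^k, which is just an
# exponent adjustment in binary floating point. This is where ln(2) enters
# every exp evaluation, regardless of the base the caller asked for.
LN2 = math.log(2.0)

def exp_reduced(x):
    k = round(x / LN2)                  # nearest integer multiple of ln 2
    r = x - k * LN2                     # reduced argument, |r| <= ln(2)/2
    e_r = sum(r ** n / math.factorial(n) for n in range(14))  # Taylor, small r
    return math.ldexp(e_r, k)           # multiply by 2^k exactly

print(exp_reduced(10.0), math.exp(10.0))   # agree to near machine precision
```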

lutusp

4 days ago

> The use of ln 2 for argument range reduction has nothing to do with half lives.

That is a false statement.

> In fact, only doubling or halving times are directly measured for radioactive decay or population growth.

That is a false statement -- in population studies, as just one example, the logistic function (https://en.wikipedia.org/wiki/Logistic_function) tracks the effect of population growth over time as environmental limits take hold. This is a detailed model that forms a cornerstone of population environmental studies. To be valid, it absolutely requires the presence of e^x in one or another form.

> ... because the numbers are represented as binary numbers in computers and the functions are evaluated with approximation formulae that are valid only for a small range of input arguments.

That is a spectacularly false statement.

> There is no difference in the definition set between 2^x and e^x.

That is absolutely false, and trivially so.

> No, you did not try to understand what I have written.

On the contrary, I understood it perfectly. From a mathematical standpoint, 2^x cannot substitute for e^x, anywhere, ever. They're not interchangeable.

I hope no math students read this conversation and acquire a distorted idea of the very important role played by Euler's number in many applied mathematical fields.

jcranmer

5 days ago

It took me quite a bit to figure out what you're trying to say here.

The importance of e is that it's the natural base of exponents and logarithms, the one that makes an otherwise constant factor disappear. If you're using a different base b, you generally need to adjust by ln(b) or its inverse, neither of which requires computing or using e itself (instead requiring a function call that's using minimax-generated polynomial coefficients for approximation).

The importance of π or 2π is that the natural periodicity of trigonometric functions is 2π or π (for tan/cot). If you're using a different period, you consequently need to multiply or divide by 2π, which means you actually have to use the value of the constant, as opposed to calling a library function with the constant itself.

Nevertheless, I would say that despite the fact that you would directly use e only relatively rarely, it is still the more important constant.

BigTTYGothGF

5 days ago

What an odd thing to say. I find that it shows up all the time (and don't find myself using 2pi any more than pi).

adrian_b

4 days ago

Pi not multiplied by 2 has only one application, which is ancient. For most objects, it is easier to measure the diameter directly than the radius. Then you can compute the circumference by multiplying by Pi.

Except for this conversion from directly measured diameters, one rarely cares about hemicycles, but about cycles.

The trigonometric functions with arguments measured in cycles are more accurate and faster to compute. The trigonometric functions with arguments measured in radians have simpler formulae for derivatives and primitives. The conversion factor between radians and cycles is 2Pi, which leads to its ubiquity.

While students are taught to use the trigonometric functions with arguments measured in radians, because they are more convenient for some symbolic computations, any angle that is directly measured is never measured in radians, but in fractions of a cycle. The same is true for any angle used by an output actuator. The methods of measurement with the highest precision for any physical quantity eventually measure some phase angle in cycles. Even the evaluations of the trigonometric functions with angles measured in radians must use an internal conversion between radians and cycles, for argument range reduction.

So the use of the 2*Pi constant is unavoidable in almost any modern equipment or computer program, even if many of the uses are implicit and not obvious for whoever does not know the detailed implementations of the standard libraries and of the logic hardware.

If trigonometric functions with arguments measured in radians are used anywhere, then conversions between radians and cycles must exist, either explicit or implicit.

If only trigonometric functions with arguments measured in cycles are used, then some multiplications with 2Pi or its inverse appear where derivatives or primitives are computed.

In any application that uses trigonometric functions, millions of multiplications by 2Pi may be performed every second. In contrast, a multiplication by Pi could be needed at most at the rate at which one could measure the diameters of physical objects whose circumference one wants to know.

Because Pi is needed so much more rarely, it is simpler to just have a constant Pi_2 to be used in most cases, and for the rare case of computing a circumference from the diameter to use Pi_2*D/2.

BigTTYGothGF

4 days ago

> The trigonometric functions with arguments measured in cycles are more accurate and faster to compute.

Please expand on this. Surely if that were the case, numerical implementations would first convert a radian input to cycles before doing whatever polynomial/rational approximation they like, but I've never seen one like that.

> Because Pi is needed so much more rarely, it is simpler to just have a constant Pi_2 to be used in most cases and for the rare case of computing a circumference from the diameter to use Pi_2*D/2,

Well of course, that's why you have (in C) M_PI, M_PI_2, and so on (and in some dialects M_2PI).

adrian_b

4 days ago

> Surely if that were the case, numerical implementations would first convert a radian input to cycles before doing whatever polynomial/rational approximation they like, but I've never seen one like that.

Then you have not examined the complete implementation of the function.

The polynomial/rational approximation mentioned by you is valid only for a small range of the possible input arguments.

Because of this, the implementation of any exponential/logarithmic/trigonometric function starts by an argument range reduction, which produces a value inside the range of validity of the approximating expression, by exploiting some properties of the function that must be computed.

In the case of trigonometric functions, the argument must first be reduced to a value smaller than a cycle, which is equivalent to a conversion from radians to cycles and then back to radians. This reduction, and the rounding errors associated with it, are avoided when the function takes arguments already expressed in cycles, so that the reduction is done exactly by just taking the fractional part of the argument.

Then the symmetry properties of the specific trigonometric function are used to further reduce the range of the argument to one fourth or one eighth of a cycle. When the argument had been expressed in cycles this is also an exact operation, otherwise it can also introduce rounding errors, because adding or subtracting Pi or its submultiples cannot be done exactly.
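The claim about exact reduction can be illustrated with a quick sketch (`sin_cycles` here is a hypothetical cycles-argument sine built on the standard library, not a tuned implementation):

```python
import math

def sin_cycles(x):
    """Sine of x measured in cycles: range reduction is just the fractional part."""
    x = x - math.floor(x)    # exact in floating point: whole turns drop out
    return math.sin(2.0 * math.pi * x)

big = 1e6                     # exactly one million full turns
print(sin_cycles(big))        # 0.0, exactly

# The radian route multiplies by an inexact 2*pi before reducing, so the
# million discarded turns leave a tiny but nonzero rounding residue:
print(math.sin(2.0 * math.pi * big))
```

The difference grows with the magnitude of the argument: in cycles the reduction stays exact, while in radians the error scales with how many periods are discarded.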

BigTTYGothGF

4 days ago

> The polynomial/rational approximation mentioned by you is valid only for a small range of the possible input arguments

I was assuming that as part of the table stakes of the conversation.

Let's look at something basic and traditional like Cephes: https://github.com/jeremybarnes/cephes/blob/master/cmath/sin...

We start off with a range reduction to [0, pi/4] (presumably this would be [0, 1/8] in cycles), and then the polynomial happens.

If cycles really were that much better, why isn't this implemented as starting with a conversion to cycles, then removal of the integer part, and then a division by 8, followed by whatever the appropriate polynomial/rational function is?

> adding or subtracting Pi or its submultiples cannot be done exactly.

I was also assuming that we've been talking about floating point this whole time.

constantcrying

5 days ago

>but there is no need to use it in any application.

Applications such as planes flying, sending data through wires, medical imaging (or any of a million different direct applications) do not count, I assume?

Your naivety about what makes the world function is not an argument for something being useless. The number appearing in one of the most important algorithms should give you a hint about how relevant it is https://en.wikipedia.org/wiki/Fast_Fourier_transform

adrian_b

4 days ago

I am sorry, but comments like this are caused by the naivety of not knowing how the function evaluations are actually implemented.

None of the applications mentioned by you need to use the exponential e^x or the natural logarithm, all can be done using the exponential 2^x and the binary logarithm. The use of the less efficient and less accurate functions remains widespread only because of bad habits learned in school, due to the huge inertia that affects the content of school textbooks.

The fast Fourier transform is written as if it used e^x, but that notation has misled you: it uses only trigonometric functions, so it is irrelevant to the question of whether "e" or "ln 2" is more important, because neither of these two transcendental constants appears in the Fast Fourier Transform.

Moreover, the FFT is an example of the fact that it is better to use trigonometric functions with arguments measured in cycles, i.e. functions of 2*Pi*x, instead of the worse functions with arguments measured in radians, because with arguments expressed in cycles the FFT formulae become simpler: all the multiplicative constants explicitly or implicitly involved in the direct and inverse FFT computations are eliminated.

A function like cos(2*Pi*x) is simpler than cos(x), despite what the conventional notation implies, because the former does not contain any multiplication with 2*Pi, but the latter contains a multiplication with the inverse of 2*Pi, for argument range reduction.

BigTTYGothGF

4 days ago

I think that perhaps people are conflating the fourier transform (FT) with the fast fourier transform.

It's true that the FFT does not use either of the transcendental numbers e or ln(2), but that's because the FFT does not use transcendental numbers at all! (Roots of unity, sure, but those are algebraic)

> all the multiplicative constants explicitly or implicitly involved in the FFT direct and inverse computations being eliminated.

Doesn't that basically get you a Hadamard transform?

adrian_b

4 days ago

FFT can be done avoiding the use of any transcendental constants, but the conventional formulae for FFT use the transcendental 2Pi both explicitly and implicitly.

The FFT formulae when written using the function e^ix contain an explicit division by 2Pi which must be done either in the direct FFT or in the inverse FFT. It is more logical to put the constant in the direct transform, but despite this most implementations put the constant in the inverse transform, presumably because a few applications use only the direct transform, not also the inverse transform.

Some implementations divide by sqrt(2Pi) in both directions, to enable the use of the same function for both direct and inverse FFT.

Besides this explicit use of 2Pi, there is an implicit division by 2Pi in every evaluation of e^ix, for argument range reduction.

If instead of using e-based exponentials one uses trigonometric functions with arguments measured in cycles, not in radians, then both the explicit use of 2Pi and its implicit uses are eliminated. The explicit use of 2Pi comes from computing an average value over a period, by integration followed by division by the period length, so when the period is 1 the constant disappears. When the function argument is measured in cycles, argument range reduction no longer needs a multiplication with the inverse of 2Pi, it is done by just taking the fractional part of the argument.
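The point about where the constants live can be seen even in a naive O(N^2) DFT sketch (not an actual FFT, just the defining sums): the phase k*n/N is naturally a fraction of a cycle and can be reduced exactly with integer arithmetic, and the only normalization left is the 1/N placed here, by the common convention, in the inverse transform.

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform with the 1/N normalization in the inverse."""
    N = len(x)
    sign = 1 if inverse else -1
    out = []
    for k in range(N):
        s = 0j
        for n in range(N):
            phase = (sign * k * n) % N / N   # phase in cycles, reduced exactly
            s += x[n] * cmath.exp(2j * cmath.pi * phase)
        out.append(s / N if inverse else s)
    return out

x = [1.0, 2.0, 3.0, 4.0]
roundtrip = dft(dft(x), inverse=True)        # forward then inverse recovers x
print([round(v.real, 9) for v in roundtrip]) # [1.0, 2.0, 3.0, 4.0]
```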

constantcrying

4 days ago

>I am sorry, but comments like this are caused by the naivety of not knowing how the function evaluations are actually implemented.

I am sorry, but comments like this are caused by the naivety of not knowing a single thing about mathematics.

Do you not understand that mathematics is not just about implementation, but about forming models of reality? The idea of trying to model a physical system while pretending that e.g. the solution of the differential equation x' = x does not matter is just idiotic.

The idea that a constant is irrelevant just because some implementation can avoid it is immensely dumb, and tells me that you lack basic mathematical education.

qnleigh

5 days ago

Uuuuuum no?

e^(i*x) = cos(x) + i*sin(x). In particular, e^(i*pi) = -1

(1 + 1/n)^n = e. This is part of what makes e such a uniquely useful exponent base.

Not applied enough? What about:

d/dx e^x = e^x. This makes e show up in the solutions of all kinds of differential equations, which are used in physics, engineering, chemistry...

The Fourier transform is defined as the integral of e^(i*omega*t) f(t) dt.

And you can't just get rid of e by changing base, because you would have to use log base e to do so.
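The differential-equation point is where e genuinely shows up as a number. A sketch with Euler's method on x' = x, which also reproduces the (1 + 1/n)^n limit above:

```python
import math

def euler(n):
    """n Euler steps of x' = x from x(0) = 1 up to t = 1."""
    x, h = 1.0, 1.0 / n
    for _ in range(n):
        x += h * x           # each step multiplies by (1 + 1/n)
    return x                 # equals (1 + 1/n)**n

print(euler(10))             # ~2.594
print(euler(1_000_000))      # ~2.71828, converging to math.e
```

The numerical solution converges to e at t = 1 no matter what base you prefer for your library calls.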

Edit: how do you escape equations here? Lots of the text in my comment is getting formatted as italics.

selecsosi

5 days ago

Guessing the original commenter hasn't taken complex analysis, or has some other geometric viewpoint that gives them satisfaction, but these expressions are among the most incredible and useful tools in all of mathematics (IMO). I hadn't seen another comment reinforcing this, so thank you for dropping these.

Cauchy path integration feels like a cheat code once you fully imbibe it.

It got me through many problems involving seemingly impossible-to-memorize identities, and re-derivation of complex relations becomes essentially trivial.

adrian_b

4 days ago

Complex exponentials and complex logarithms are useful in some symbolic computations, those involving formulae for derivatives or primitives, and this is indeed the only application where the use of e^x and natural logarithm is worthwhile.

However, whenever your symbolic computation produces a mathematical model that will be used for numeric computations, i.e. in a computer program, it is more efficient to replace all e^x exponentials and natural logarithms with 2^x exponentials and binary logarithms, instead of retaining the complex exponentials and logarithms and evaluating them directly.

At the same time, it is also preferable to replace the trigonometric functions of arguments measured in radians with trigonometric functions of arguments measured in cycles (i.e. functions of 2*Pi*x).

This replacement eliminates the computations needed for argument range reduction that otherwise have to be made at each function evaluation, wasting time and reducing the accuracy of the results.

lutusp

5 days ago

> Edit: how do you escape equations here? Lots of the text in my comment is getting formatted as italics.

Just escape any asterisks in your post that you want rendered as asterisks: this: \* gives: *.

adrian_b

4 days ago

Even when you use the exponential e^x and the hyperbolic logarithm a.k.a. natural logarithm (which are useful only in symbolic computations and are inferior for any numeric computation), you never need to know the value of "e". The value itself is not needed for anything. When evaluating e^x or the hyperbolic logarithm you need only ln 2 or its inverse, in order to reduce the argument of the functions to a range where a polynomial approximation can be used to compute the function.

Moreover, you can replace any use of e^x with the use of 2^x, which inserts ln(2) constants in various places, (but removes ln 2 from the evaluations of exponentials and logarithms, which results in a net gain).

If you use only 2^x, you must know that its derivative is ln(2) * 2^x, and knowing this is enough to get rid of "e" everywhere. Even in differentiation formulae, in actual applications most of the multiplications by ln 2 can be absorbed into multiplications by other constants, because you normally do not differentiate a bare 2^x but 2^(a*x), where the product ln(2)*a can be computed at compile time.
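That derivative claim is easy to check numerically (a quick finite-difference sketch, nothing more):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 3.0
approx = numeric_derivative(lambda t: 2.0**t, x)
exact = math.log(2.0) * 2.0**x      # d/dx 2^x = ln(2) * 2^x
print(approx, exact)                # both ~5.5452
```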

You start with the formula for the exponential of an imaginary argument, but there the use of "e" is just a conventional notation. The transcendental number "e" is never used in the evaluation of that formula and also none of the numbers produced by computing an exponential or logarithm of real numbers are involved in that formula.

The meaning of that formula is that if you take the expansion series of the exponential function and you replace in it the argument with an imaginary argument you obtain the expansion series for the corresponding trigonometric functions. The number "e" is nowhere involved in this.

Moreover, I consider that it is far more useful to write that formula in a different way, without any "e":

1^x = cos(2Pi*x) + i * sin(2Pi*x)

This gives the relation between the trigonometric functions with arguments measured in cycles and the unary exponential, whose argument is a real number and whose value is a complex number of absolute value equal to 1, and which describes the unit circle in the complex plane, for increasing arguments.

This formula appears more complex only because of using the traditional notation. If you call cos1 and sin1 the functions of period 1, then the formula becomes:

1^x = cos1(x) + i * sin1(x)

The unary exponential may appear weirder, but only because people are accustomed from school to the exponential of imaginary arguments. Neither of these two functions is weirder than the other, and the unary exponential is frequently simpler to use than the exponential of imaginary arguments, while also being more accurate (no rounding errors from argument range reduction) and faster to compute.
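The "1^x" function can be sketched directly (`cos1`, `sin1`, and `unary_exp` are names invented here for illustration, following the comment's notation):

```python
import math

def cos1(x):
    """Cosine with period 1: argument in cycles, reduced exactly."""
    return math.cos(2.0 * math.pi * (x - math.floor(x)))

def sin1(x):
    """Sine with period 1: argument in cycles, reduced exactly."""
    return math.sin(2.0 * math.pi * (x - math.floor(x)))

def unary_exp(x):
    """'1^x' = cos1(x) + i*sin1(x): x turns around the unit circle."""
    return complex(cos1(x), sin1(x))

print(abs(unary_exp(123456.789)))   # ~1.0: always on the unit circle
# It behaves like any exponential: 1^(a+b) = 1^a * 1^b
a, b = 0.3, 0.45
print(abs(unary_exp(a + b) - unary_exp(a) * unary_exp(b)) < 1e-12)   # True
```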

adrian_b

4 days ago

I want to add that any formula that contains exponentials of real arguments, e^x, and/or exponentials of imaginary arguments, e^(i*x), can be rewritten by using only binary exponentials, 2^x, and/or unary exponentials, 1^x, both having only real arguments.

With this substitution, some formulae become simpler and others become more complicated, but, when also considering the cost of the function evaluations, an overall greater simplicity is achieved.

In comparison with the "e" based exponentials, the binary exponential and the unary exponential and their inverses have the advantage that there are no rounding errors caused by argument range reduction, so they are preferable especially when the exponents can be very big or very small, while the "e" based exponentials can work fine for exponents guaranteed to be close to 0.