On the Nature of Time

285 points | posted 14 hours ago
by iamwil

179 Comments

foundry27

12 hours ago

I think it’s really interesting to see the similarities between what Wolfram is saying and the work of Julian Barbour on time being an emergent property. Both suggest a similar underlying ontology for the universe: a timeless, all-encompassing realm containing all possible states / configurations of everything. But what’s really fascinating is that they reach this conclusion through different implementations of that same interface. Barbour talks about a static geometric landscape where time emerges objectively from the relational (I won’t say causal) structures between configurations, independent of any observer. On the other hand, Wolfram’s idea of the Ruliad is that there’s a timeless computational structure, but time emerges due to our computational limitations as observers navigating this space.

They’ve both converged on a timeless “foundation” for reality, but they’re completely opposite in how they explain the emergence of time: objective geometry, vs. subjective computational experience

pizza

11 hours ago

I was literally thinking of the same similarities. Barbour's exposition of the principle of least action as being time is interesting. There's a section in The Janus Point where he goes into detail about the fact that there are parts of the cosmos that (due to cosmic inflation) are farther apart in terms of light-years than the universe is old, and growing in separation faster than c, meaning that they are forever causally separated. There will never be future changes in state from one that result in effects in the other. In a way, this also relates to computation, maybe akin to some kind of undecidability.

Another thing that came to mind when reading the part about how "black holes have too high a density of events inside of them to do any more computation" is Chaitin's incompleteness theorem: if I understand it correctly, that basically says that for any formal axiomatic system there is a constant c beyond which it's impossible to prove in the formal system that the Kolmogorov complexity of a string is greater than c. I get the same kind of vibe with that and the thought of the ruliad not being able to progressively simulate further states in a black hole.

psychoslave

5 hours ago

>There's a section in The Janus Point where he goes into detail about the fact that there are parts of the cosmos that (due to cosmic inflation) are farther apart in terms of light-years than the universe is old, and growing in separation faster than c, meaning that they are forever causally separated. There will never be future changes in state from one that result in effects in the other. In a way, this also relates to computation, maybe akin to some kind of undecidability.

Oh, I love this hint. However, even taking for granted that no faster-than-light travel is indeed an absolute rule of the universe, that doesn't exclude wormholes or entangled particles.

https://scitechdaily.com/faster-than-the-speed-of-light-info...

csomar

8 hours ago

> There will never be future changes in state from one that result in effects in the other.

You are assuming that the principle of locality is true and proven. This is far from being the case, from my understanding.

adrianN

2 hours ago

You can’t really prove things in physics, but to my knowledge we don’t have observations that contradict locality.

ziofill

8 hours ago

Actually, the parts of the universe receding from us faster than the speed of light can still be causally connected to us. It’s a known “paradox” that has the following analogy: an ant walks on an elastic band toward us at speed c, and we stretch the band away from us by pulling on the far end at a speed s > c. Initially the ant, despite walking in our direction, gets farther away, but eventually it does reach us (in exponential time). The same is true for light coming from objects that were receding from us at a speed greater than c when they emitted it. See https://en.m.wikipedia.org/wiki/Ant_on_a_rubber_rope
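The rubber-rope arithmetic can be sketched numerically (toy units and parameter values of my own choosing; the closed-form arrival time is (L0/v)(e^(v/c) - 1)):

```python
import math

# Ant on a rubber rope, forward-Euler sketch (toy units, illustrative only).
# The far end recedes at v > c, yet the ant walking at c reaches it in finite
# (but exponentially long) time.
def time_to_reach(L0=1.0, c=1.0, v=2.0, dt=1e-4):
    frac = 0.0  # ant's position as a fraction of the current rope length
    t = 0.0
    while frac < 1.0:
        length = L0 + v * t        # rope stretches uniformly
        frac += (c / length) * dt  # the walk always gains fractional ground
        t += dt
    return t

t_sim = time_to_reach()
t_exact = (1.0 / 2.0) * (math.exp(2.0) - 1)  # (L0/v)(e^(v/c) - 1) for v=2, c=1
```

The e^(v/c) factor is why the arrival takes "exponential time" as the recession speed grows.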

tempaway456456

2 hours ago

I don't think they are saying anything similar at all. Julian Barbour finds a way to get rid of Time completely (by saying every possible state exists and there must be some law that favours states that _seem_ to be related to _apparently_ previous states). Wolfram is more focused on making sense of 'time is change' through the lens of computation.

pyinstallwoes

10 hours ago

Without time you’d be everything all at once, which isn’t capable of having an experience, that is to also say: a location.

To have experience, requires position relative to the all, the traversal of the all is time.

More like a play head on a tape, you’re the play head traversing and animating your own projection.

hackinthebochs

10 hours ago

The universe doesn't need to evolve for us to have experience. We would experience evolution through the state space because its structure is oriented so as to experience evolution through time. Each point in experience-time (the relative time evolution experienced by the structure) is oriented towards the next point in experience-time. Even if all such points happen all at once, the experience of being a point in this structure oriented towards the next point is experienced subjectively as sequential. In other words, a block universe would contain sequences of Boltzmann brains who all subjectively experience time as sequential.

The real question is why would such a universe appear to evolve from a low entropy past following a small set of laws.

pyinstallwoes

2 hours ago

Well, it doesn’t evolve. You just render it as evolving to perceive yourself / itself. The only way to have the state of being of observation and perception is to not be everything which gives rise to directionality.

raattgift

2 hours ago

Boltzmann brains are extremely ephemeral.

An analogy is that of stirring a vat of alphabet soup and noticing that there is a fair number of single-letter words popping into view ("A", "I"), a smaller number of two-letter words, an even smaller number of three-letter words ... a very very small chance of a twenty-letter word ... and a vanishingly small chance of the 189819-letter monster <https://en.wiktionary.org/wiki/Appendix:Protologisms/Long_wo...> popping into view. The stirring doesn't stop just because a multiletter word appears, so multiletter words are quickly broken up and even valid single-letter words get hidden behind the "B"s and "Q"s and other letters in the soup.

Boltzmann brains will fluctuate out of existence on the order of a small multiple of the light-crossing time of the brain-matter that fluctuated into existence. As the brains are human, they won't even have a chance to react. Although their false memories are encoded however true memories exist in our own brains, they'll have no time to have a reminiscence or notice their lack of sensory organs. (Which is probably good, since they would quickly suffer and die from lack of pressure and oxygen).

A Boltzmann-brain with a full encoding of a life worth of false memories (from never-existing sensory input) is a much larger number of letters. Also, in a cold universe, the stirring is slower, and the letters sparser. Boltzmann brains are tremendously unlikely except in a verrrrrrrrry big volume of spacetime. But with a sufficiently big volume of spacetime, or one with an energetic false vacuum, one should expect a lot of Boltzmann brains. This view puts some limits on our own cosmos's vacuum, since we don't see lots of Boltzmann brains (or even much less complicated but RADAR-detectable and/or eclipsing structures) fluctuating into brief existence in our solar system.

Boltzmann brains are low-entropy. A persisting Boltzmann brain (fluctuating into existence and staying in existence for a long time) is much lower entropy still. This poses problems for hypotheses that the entire early universe fluctuated into existence and then evolved into the structures we see now. Here there are human brains attached to sensory apparatus, whose memories correlate fairly well with their history of input (and recordings by ancestors, and fossil records, and so on): a system with much much lower entropy than Boltzmann brains, so what suppresses relatively high-entropy structures (including Boltzmann brains) from dominating (by count) our neighbourhood?

Also, if the universe supports large low-entropy fluctuations, galaxies that briefly (~ hundred thousand years) fluctuated in and out of existence should be much more common than galaxies with a history consistent with billions of years of galactic evolution, and you'd expect random variations in morphology, chemistry, and so forth; that's not what we see.

This is a bit annoying, as it would be handy to point to Boltzmannian fluctuation theory as the source of the tremendously low entropy in the very early universe, i.e., it could have arisen spontaneously in a less precisely ordered space. Oh well.

> why would ... a universe appear to evolve from a low entropy past following a small set of laws

Thermodynamics.

The issue is: where did the low entropy past come from? Once you have that, evolving into a higher entropy structure-filled present is not too hard -- that's essentially what we have with the standard cosmology from about the electroweak epoch onwards.

So in summary:

> sequences of Boltzmann brains who all subjectively experience time as sequential

whatever these might be, they aren't Boltzmann brains, since the latter don't subjectively experience anything as objectively they fluctuate out of existence in something like a nanosecond.

Very briefly, the short existence is driven by interacting fields and the need to keep entropy (relatively) high: if your starting point just before the appearance of the brain is a region that is high quality vacuum, you have to come up with protons, calcium nuclei, ... and all that requires very careful aim to have one split-second "movie frame" of brain. You need much better "aim", which really drives down the entropy (which corresponds to a much larger fluctuation), to go from vacuum to a Boltzmann brain that doesn't disintegrate starting in the very next frame thanks to overshoots of momentum.

The higher the entropy of the Boltzmann brain, the clearer the stat mech argument. (If one gets stuck thinking about human brains, C. elegans apparently develop memories and store them in their nerve ring. Why isn't the outer space of our solar system full of those Boltzmann-C.-elegans brains fluctuating in and out of existence with each possessing false memories of sensory stimuli? Smaller fluctuations, so there should be many more of those than human Boltzmann brains).

marcus_holmes

6 hours ago

Maybe we do experience everything at once, but then have to process it in a time-like manner to make any sense of it.

Like everything else that we "experience", maybe the perception that reaches our consciousness has nothing to do with what's actually out there.

There are no purple photons.

pyinstallwoes

2 hours ago

Yeah, god is everything, which can’t have experience, as it’s experiencing everything at once - thus the monad splits itself, allowing perception as a fraction of the whole which is experienced as time and direction.

JumpCrisscross

10 hours ago

> have experience, requires position relative to the all, the traversal of the all is time

You’re describing timelike experience. Photons “experience” events as in they are part of causality. But they do so in a non-timelike manner.

pyinstallwoes

2 hours ago

Said a human.

If it’s not time-like, then it’s everything, thus it can’t have experiences thus god. God splits (monad becomes many) to experience being (shards in multiplicity of the one through division: oooh spooky golden mystery).

yarg

9 hours ago

I think that time isn't what we think it is - but I don't think it's all already set; rather I think that the past can be constrained by the future just as the future is constrained by the past.

I don't think that there's spooky action at a distance (it's fundamentally equivalent to retrocausality, and the consequences of the distant foreign event cannot outpace its light cone anyway).

I think it's a superposition of states of a closed time-like curve thing being fleshed out as its contradictions are resolved and interactions are permitted between its colocated non-contradictory aspects.

But I'm not a physicist, so that's probably all just bullshit anyway.

andoando

7 hours ago

I suspect there are many different mental conceptions that amount to the same facts of nature.

bmitc

7 hours ago

I generally like the idea of most everything being emergent, but where does it stop? Is it emergence all the way down?

bbor

7 hours ago

Idk, just looking at it now Barbour seems much, much more rigorous. The linked article is more “using scientific terms to muse about philosophy” than physics, IMHO. For example;

  In essence, therefore, we experience time because of the interplay between our computational boundedness as observers, and the computational irreducibility of underlying processes in the universe.

His big insight is literally the starting point of Hegel’s The Science of Logic, namely that we are finite. That in no way justifies all the other stuff (especially multiverse theory), and it’s not enough to build a meaningfully useful conception of time, at all. All it gets you is that “if you were infinite you wouldn’t experience time”, which is a blockbuster-sci-fi-movie level insight, IMO.

I can’t help but think of Kant as I write this; he wrote convincingly of the difference between mathematical intuition and philosophical conception, a binary Wolfram would presumably, and mistakenly, identify with solid logic vs. meaningless buffoonery. But refusing to acknowledge our limits makes you more vulnerable to mistakes stemming from them, not less.

  …the metaphysic of nature is completely different from mathematics, nor is it so rich in results, although it is of great importance as a critical test of the application of pure understanding—cognition to nature. For want of its guidance, even mathematicians, adopting certain common notions—which are, in fact, metaphysical—have unconsciously crowded their theories of nature with hypotheses, the fallacy of which becomes evident upon the application of the principles of this metaphysic, without detriment, however, to the employment of mathematics in this sphere of cognition. 

Worth remembering at this point that Aristotle coined “physics” for the mathematical study of physis (nature), which was then followed up by a qualitatively different set of arguments interpreting and building upon that basis in a work simply titled metaphysics (after physics). We’ve learned infinitely more mathematical facts, but IMO “what is time, really?” will forever remain beyond their reach, a fact determined not by the universe but by the question itself.

TL;DR: if you’re gonna try to talk cognition you should at least admit that you’re writing philosophy, and ideally cite some philosophers. We’ve been working on this for a hot minute! Barbour seems to be doing something much less ambitious: inventing the most useful/fundamental mathematical framework he can.

m3kw9

11 hours ago

So you are saying there is a version of me that is king of the universe in some timeline?

grugagag

9 hours ago

In a skin enclosed universe you are already King Meatbag, ruler over your mind and body.

biofox

4 hours ago

My body disagrees.

pixl97

11 hours ago

If the universe is infinite then there is a possibility that you are a king of an observable universe somewhere.

xandrius

10 hours ago

Infinite does not mean that all the permutations are possible.

You being you and you becoming a king might simply not be a combination which is compatible.

kridsdale3

7 hours ago

Great way to let someone down who asks you out.

There are no branches in the Ruliad in which you and I end up together. I have foreseen it.

mensetmanusman

8 hours ago

You vastly misunderestimate infinity if you don’t recognize that anything feasible will happen.

jbotz

4 hours ago

Depends on how you define feasible.

Take Wolfram's 1-dimensional cellular automata... some of them have infinite complexity, and of course you can "run" them for infinite time, and the "current" state is constantly expanding (like the Universe). So let's define "something feasible" as some specific finite bit pattern on the 1-dimensional line of an arbitrary current state. Is that "feasible" bit pattern guaranteed to appear anywhere in the automaton's present or future? I believe, and if I understand correctly, so does Wolfram, that for any reasonably complex "feasible pattern" the answer is no; even though the automaton produces infinitely many states, it is not guaranteed to explore all conceivable states.

In other words, in a given Universe (which has a specific set of rules that govern its evolution in time) even though there are infinitely many possible states, not all conceivable states are a possible result of that evolution.

pantulis

4 hours ago

There are infinite numbers between 3 and 4, yet none of them is number 7.

adastra22

8 hours ago

It is simpler than that. Wolfram has a long history of plagiarizing ideas and passing them off as his own.

mensetmanusman

8 hours ago

That’s the history of 99.9999% of ideas based on the average token generation rate of humanity.

PaulDavisThe1st

8 hours ago

The mother of someone who was a friend in the 90s used to always pepper her speech with attributions for almost everything she was saying (in any "serious" conversation): "I think it was Popper who said ...", "Schenk developed this idea that ..."

It was * so * annoying to listen to.

adastra22

7 hours ago

We should hold dinner-table conversations and scientific letters to different standards.

adastra22

7 hours ago

Real scientists tend to try to be careful about attribution and especially don't just blatantly regurgitate the last thing they read and pass it off as their own. That is highly frowned upon in polite academic society.

zaptheimpaler

13 hours ago

I think he's a quack trying to torture an explanation of the universe out of his pet theory that uses a lot of words to say simple things but doesn't predict anything. If "time is what progresses when one applies computational rules" then how is the order in which the rules are applied defined in the first place?

Computational irreducibility is a neat idea but I'm not sure it's novel or something that explains the entire universe. My basic intro course on differential equations taught us that the vast majority of them cannot be solved analytically; they have to be approximated. I don't know if the irreducibility idea is anything fundamentally different from saying some problems are hard, whether it's non-analytical equations or NP-hard problems.

kouru225

11 hours ago

I think you’re slightly misunderstanding his concept of computational irreducibility. It’s more like the halting problem than anything: basically he’s saying that dynamic systems can’t be reduced to an equation that is easier to calculate and so you just have to simulate the entire system, run it, and watch what happens. This means we can’t ever predict the future within these systems.

niobe

10 hours ago

Well, I wouldn't put it quite like that either... because you have to be careful what you mean by 'simulate' and 'easier'.

There could be multiple ways to simulate the same system, i.e. produce the same evolutionary output steps. Wolfram tends to imply there is only one most-expensive way for systems that are computationally irreducible and that way is grinding through a recursive computation. I think that's partly because the simple experiments, like cellular automata, he used to come up with this principle actually explore the 'space of simple rules', not the 'space of ordered sets of states of systems'.

Of course the latter is a much more computationally expensive thing to do, but it seems to me it would generalise better to the universe. Because in the universe what we're really observing is the evolution of states, not the outputs of rules. There may be other hidden assumptions in the principle if you assume that all systems can and do evolve from simple rules, as much of physics does. Nevertheless, you need a high bar if you're going to state universal principles.

Perhaps the simplest way to state the principle is: say we set up a simple iterative computation where the input to step n is the output of step n-1. Then there's no way to compute state n without having previously computed states n-1, n-2, etc. That's what he means by irreducible. In other words it's "necessarily recursive", which may be a better and more focused term.
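That "necessarily recursive" shape can be sketched in a few lines (my own illustrative pick: the logistic map at r = 3.9, for which no closed-form shortcut is known; notably, at r = 4 a closed form does exist, which shows how parameter-dependent such irreducibility claims can be):

```python
# Step n's input is step n-1's output; to learn state n you grind through
# states 1 .. n-1. (Logistic map in the chaotic regime, chosen for illustration.)
def state(n, x0=0.2, r=3.9):
    x = x0
    for _ in range(n):       # no known way to jump straight to step n
        x = r * x * (1.0 - x)
    return x
```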

I'm cautious about making it mean more than that, since Wolfram tends to write in great leaps of conclusions without showing us his working. Nevertheless I enjoy following his ideas, and I did find aspects of this article quite thought provoking.

kouru225

9 hours ago

> I think that's partly because the simple experiments, like cellular automata, he used to come up with this principle actually explore the 'space of simple rules', not the 'space of ordered sets of states of systems'.

I think it’s the opposite actually. He chose to study these recursive systems because they seem to describe reality, and then when he found more evidence that they do a good job describing reality, he kept studying… so on and so forth. Basically a sort of hermeneutic circle type deal.

You do a much more thorough job of describing it. I should’ve mentioned the recursive part earlier. I just kinda assume we all already know we’re talking about recursion and time steps and that’s not a useful assumption

kridsdale3

7 hours ago

It's pretty dang hard to give the output of Fibonacci(x) for any x up to infinity without having done the work up to that point.

kouru225

6 hours ago

Good point. We’re still not describing it perfectly. Admittedly I’m doing it from my memory of the last time I read Wolfram’s ideas. I think we unfortunately have to describe it using Kolmogorov complexity: what is the length of the shortest computer program that produces the object as an output? What Wolfram means by computational irreducibility is that he asserts that reality itself is the shortest-length computer program that can produce its own output, and it can’t be shortened (reduced) any further without losing information.

Edit: sorry I think I still haven’t fully described it. Will have to come back to it tomorrow when I’ve had some sleep
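The compressor intuition behind that definition can be sketched (rough analogy only: Kolmogorov complexity is uncomputable, so zlib merely stands in for "shortest program length"):

```python
import random
import zlib

# Two byte strings of identical length, very different "description lengths".
structured = b"ab" * 5000  # regular: a tiny program/description regenerates it
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10000))  # no short description

short = len(zlib.compress(structured))  # collapses to a few dozen bytes
long_ = len(zlib.compress(noisy))       # stays around the original 10000 bytes
```

On this analogy, "reality can't be reduced" reads as: the universe's own evolution is already the shortest description of its states.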

Xcelerate

6 hours ago

Actually there’s an explicit formula for Fibonacci(x) that involves phi. I think you can use generating functions to derive it.

(But your overall point still stands.)
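For reference, the closed form (Binet's formula) looks like this; with floating point, the round() keeps it exact only for moderate n:

```python
import math

# Binet's formula: Fibonacci(n) = (phi^n - psi^n) / sqrt(5).
def fib_closed(n):
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    return round((phi ** n - psi ** n) / sqrt5)

def fib_iter(n):
    # the step-by-step version, for comparison
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

So Fibonacci is a case where the "grind through every step" computation does reduce to a formula, which is exactly why it's a poor model for irreducible systems.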

PaulDavisThe1st

8 hours ago

Your comment makes me think about statistical mechanics and microstates. That is to say ... in a complex system with properties that are a function of microstates, whether the internal structure of the microstates that correspond to a given property matters can depend on your point of view or interest in the system.

Heat, for example, is a statistical property of a system, and a given temperature can correspond to a vast number of possible microstates of the system. For some purposes, you care precisely which microstate the system is in; for others, you do not, and the temperature property is entirely adequate to describe the system.

Rules may describe the microstate, but may (depending on your POV) be irrelevant to the property.

Using Wolfram's model of the world, there may indeed be a cellular automaton following rules that underlies the property, but there may be no reason to care about it in a given instance; instead you're interested in the "evolution of states" (i.e. values of the property).

Some complexity scientists are quite taken with this idea of not needing to care about the lower levels of a system when considering higher-level behavior. In their view (and rightly so, IMO [0]) you don't always need to consider the rules that drive (say) physics when considering (say) psychology.

[0] except that I think that Hofstadter's "heterarchy" idea is likely to be even more accurate - interesting systems are the ones in which there are complex feedback systems between different levels of the system.

kridsdale3

7 hours ago

It seems pretty clear to me that this desire for "perfect" layers of abstraction is something we strive for due to our own intellectual limits, and that in reality all abstractions are lossy to some degree. Heat as a single integer in Degrees F is good enough most of the time but when you're designing CVD for a Silicon Fab you might actually care about the positions and orientations and vectors of the gas molecules.

sitkack

10 hours ago

That smells like the Universe is the best Computer for computing the future of the Universe tautology.

kridsdale3

7 hours ago

So apparently the inside of the Black Hole event horizon is just "Forty Two".

wizzwizz4

10 hours ago

Funny you should say this, because most work on the halting problem is reducing the systems down to equations that are easier to calculate.

kridsdale3

7 hours ago

But is the theory that such work can continue forever?

wizzwizz4

an hour ago

There's a point at which it becomes impossible: the nth Busy Beaver number is independent of ZF for n ≥ 745 (ref: https://wiki.bbchallenge.org/wiki/Cryptids). So no, such work cannot continue forever.

We don't know whether such work can continue up to that point. The only way to find that out is to explore the relevant mathematics, and see if we find something fundamentally irreducible. There's no long-term pattern to the proofs, despite the presence of short-term patterns. (In this sense, the hunt for Busy Beavers is computationally irreducible – but there are still easier and harder ways of approaching the hunt.)
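The tractable end of that hunt fits in a few lines. A brute-force search over all 2-state, 2-symbol machines (my own sketch; the halting transition writes and moves, per the usual convention) recovers the known values S(2) = 6 steps and Σ(2) = 4 ones:

```python
from itertools import product

# Transition: (write, move_right, next_state); next_state 2 means halt.
def run(machine, limit=100):
    tape, pos, state, steps = {}, 0, 0, 0
    while state != 2 and steps < limit:
        write, right, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if right else -1
        state, steps = nxt, steps + 1
    return (steps, sum(tape.values())) if state == 2 else None  # None: no halt seen

transitions = list(product([0, 1], [False, True], [0, 1, 2]))
best_steps = best_ones = 0
for t in product(transitions, repeat=4):  # one transition per (state, symbol)
    machine = {(0, 0): t[0], (0, 1): t[1], (1, 0): t[2], (1, 1): t[3]}
    result = run(machine)
    if result:
        best_steps = max(best_steps, result[0])
        best_ones = max(best_ones, result[1])
```

Already at five states this kind of search stops being a laptop job (resolving BB(5) took the coordinated bbchallenge effort), which is the "short-term patterns, no long-term pattern" situation in miniature.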

Trufa

4 hours ago

I don't mean it as an attack, I honestly mean it as a straightforward question, what are your qualifications in this matter to call someone as accomplished as SW a quack?

sbussard

9 hours ago

He treats computation as if it is a fundamental law of nature, but I don’t find that assertion compelling. I’m also more of a pilot wave theory advocate, which although incomplete, cuts off several diseased (renormalized) branches of quantum physics.

lisper

13 hours ago

I wrote up more or less the same idea ten years ago, but in what I think is a more accessible presentation:

https://blog.rongarret.info/2014/10/parallel-universes-and-a...

whyenot

12 hours ago

I have read and appreciated your writings going back to the comp.lang.lisp days, but a blog post that starts with “if you haven’t read the previous post, please do before reading the rest of this one” is not what I would consider accessible. …and that previous post then asks the reader to first read a paper or watch a video before proceeding. While a decade later than what you wrote, Wolfram’s article is much more self contained and complete.

WhitneyLand

12 hours ago

Thank you so much for this.

Whenever people criticize Wolfram the comeback is often, he’s just trying to discuss big ideas that mainstream science won’t talk about. Of course that’s not the reason for the criticism at all and I think your work here shows that it’s totally fine to speculate and get a little philosophical. The results can be interesting and thought provoking.

There’s a difference between big ideas and grandiosity. It also shows big ideas can stay scientifically grounded and don’t require making up corny terminology (Ruliad? lol).

Q_is_4_Quantum

9 hours ago

It is possible to make quantitative statements that I think capture many of the intuitions you assert. Here was one attempt:

https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.10...

That particular proposal was mathematically wrong for reasons I still find physically perplexing (it turns out that for some events quantum theory allows for stronger memory records - defined via classical mutual information - of entropy-decreasing events!). A simple example is in here: https://arxiv.org/abs/0909.1726 (I am second author).

lisper

7 hours ago

Very interesting! Thanks for the pointers! I'll need to take some time to digest these.

nis0s

11 hours ago

Do physicists think time actually exists? I wonder if someone has reasoned that time is an accounting method that humans have developed to make sense of their experienced change of systems.

Wolfram uses the words progression and computation a lot in his essay, but there’s an implicit bias there of assuming a process is deterministic, or has some state it’s driving towards. But none of these “progressions” mean anything, I think. It seems they are simply reactions subject to thermodynamics.

If no one observed these system changes, then the trends, patterns, and periodicity of these systems would just be a consequence of physics. It seems what we call “time” is more the accumulation of an effect rather than a separate aspect of physics.

For example, I wonder what happens in physics simulations if time is replaced by a measure of effect amplitude. I don’t know, tbh, I am not a physicist so maybe this is all naïve and nonsense.

bubblyworld

8 hours ago

Time "exists" in physics in the same way everything else in physics does - namely, the value we measure with clocks in the real world satisfies all of the same properties (at least in certain regimes of the universe) as the thing we call "time" in various physics theories like relativity/classical mechanics. And those theories make (reasonably) correct predictions about the values we measure in the real world.

Is it possible that these properties are the result of some other interactions that have very different laws at a lower level? Absolutely! But the discovery of particles didn't cause the sun to disappear, if that makes sense.

goatlover

8 hours ago

> Do physicists think time actually exists?

Yes, spacetime is important for General Relativity, cosmology and thermodynamics. Whether it's fundamental or emerges from something more fundamental is an open question though.

mensetmanusman

8 hours ago

Time is just a measure of change. No change. No time.

We are interested in a peculiar rate of time based on the heartbeat of our experience.

deepfriedchokes

7 hours ago

It could be that what changes is our perception of reality, not reality itself.

worstspotgain

13 hours ago

Thought experiment on the nature of reality:

- In a much larger universe, write down in a log book every event to every particle at every instant, from the Big Bang to the restaurant.

- Put it on the fireplace mantle and leave it there.

This is basically a log of a simulation. It exists in much the same way as an ongoing simulation would, except that its time dimension isn't shared with the simulating universe. But every observer within has had the same observations as if it did.

skissane

13 hours ago

This assumes that a map, if sufficiently detailed, is identical to the territory.

Maybe it is, maybe it isn’t - but it is a highly debatable metaphysical assumption. I’m not sure how seriously we should take some people’s claims that they “know” that such an assumption is actually true

worstspotgain

13 hours ago

It's an argument about simulations, not about reality. If reality is a simulation, then arguments about simulations apply to it, but that's the big if.

skissane

13 hours ago

Not necessarily. Suppose that consciousness/qualia/etc is “something extra” which has to be added to non-mental reality, as some dualists believe. Then, it would be possible that we live in a simulation which contains consciousness because that “something extra” has somehow been added to it. And yet, maybe the “much larger” universe which contains our simulation also contains such a “log book” of a very similar universe to our own, also containing intelligent life - and yet, if the “something extra” has not been added to that “log book”, it would lack consciousness and qualia, unlike our own universe.

I’m not arguing that a dualism (of this sort) is actually true, merely that we don’t (and can’t) know for a fact that it is false. But if we can’t know for a fact that it is false, then even if we (somehow) knew our reality was simulated, that wouldn’t give us grounds to make confident inferences about the nature of other simulations, or the nature of simulations in themselves

worstspotgain

13 hours ago

I agree with your post. However, I was using the most mechanical meaning of simulation: "the production of a computer model of something, especially for the purpose of study," which implies determinism and excludes the "something extra."

skissane

11 hours ago

It doesn’t actually exclude the “something extra”, it is neutral as to whether or not there is any “something extra”

Panpsychists claim everything is conscious, even rocks, even atoms. Again, I don’t claim this is true (I’d be rather shocked if I somehow found out it was), but we can’t know for a fact that it is false. Yet if panpsychism (or at least certain versions thereof) is true, every simulation (even a simulation of the weather, or of crop growth) is conscious, simply because absolutely everything is. But I don’t think most standard definitions of “simulation” are excluding that possibility - on the contrary, they are agnostic with respect to it, treating its truth or falsehood as outside of their scope

It also doesn’t necessarily imply determinism because some computer simulations use RNGs. Most commonly people use pseudorandom RNGs for this, but there is nothing in principle stopping someone from replacing the pseudorandom RNG with a hardware RNG based on some quantum mechanical process, such that it is indeterministic for all practical purposes, and the question of whether it is ultimately deterministic or indeterministic depends on controversial questions about QM to which nobody knows the answers

worstspotgain

10 hours ago

> It doesn’t actually exclude the “something extra”, it is neutral as to whether or not there is any “something extra”

Roger, that's even better. I tried to clarify the log book idea in another reply.[1] The question is whether you can have reality (from the observer's perspective) just based on whether coherent information exists in any setting.

Basically the question is whether we can go from "I think, therefore I am" to "something is constructing information." The latter is obviously a simpler, lower-level claim to prove than other concepts of existence.

That brings us back to the "something extra." Is it required for our observations to be possible, i.e. can we rule out the log book conjecture? I don't think we can, but I might be wrong.

[1] https://news.ycombinator.com/item?id=41783599

orangecat

10 hours ago

> And yet, maybe the “much larger” universe which contains our simulation also contains such a “log book” of a very similar universe to our own, also containing intelligent life - and yet, if the “something extra” has not been added to that “log book”, it would lack consciousness and qualia, unlike our own universe.

In that case, the non-conscious people in the log book would spend a lot of time pontificating on their experiences of consciousness and how mysterious it is and whether it's possible for there to be other universes that contain entities like themselves except not conscious. They'd be having these discussions for reasons that have nothing to do with actually being conscious, but coincidentally their statements would perfectly correspond with our actual perceptions of consciousness. Maybe not logically impossible, but it seems extremely improbable.

(This is pretty much the argument at https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zo... which I find persuasive).

mistermann

13 hours ago

The word "simulation" is itself a simulation. So is the word "is".

https://en.m.wikipedia.org/wiki/Semiotics

Reality is a multi-disciplinary domain, but it gives off the appearance of being physics only, because of its metaphysical nature.

FrustratedMonky

13 hours ago

Except for the randomness introduced in Quantum Mechanics.

If they ever solve the randomness, and if the map is down to every particle, then yes, the map and reality could be the same. But I think at that point you need a computer the size of reality to keep track of every particle.

Or, maybe the entire universe is one giant wave equation. But again, I think you need a computer the size of the universe to solve it.

skissane

13 hours ago

We don’t know for a fact that QM contains irreducible indeterminism. If many worlds is true, then QM is ultimately deterministic. Same if hidden variables is true. A large class of local hidden variable theories have been ruled out by Bell’s theorem, but non-local hidden variable theories survive it (such as the Bohm interpretation and the transactional interpretation), as do local hidden variable theories which deny the Bell theorem’s assumptions about the nature of measurement, such as superdeterminist local hidden variable theories.

FrustratedMonky

29 minutes ago

Wasn't the Nobel Prize last year for eliminating local hidden variables? That spooky action at a distance does occur?

And as for many worlds: doesn't it just punt the randomness into other universes, without helping us solve for results in our own individual universe? Since we can't measure what happens in the other universe, we don't really know. If there were two results, one in one universe and one in ours, then sure, we deterministically know both results. But we don't know which universe we are in. So instead of a random result, we now have 2 answers and 2 universes, and it's random where we are?

goatlover

8 hours ago

An MWI universe would be hard to simulate though. There's an unknown vast number of branches.

zeven7

5 hours ago

Maybe with a quantum computer in a larger multi worlds universe?

worstspotgain

13 hours ago

Are you saying that some things are just not simulable, even given a sufficiently large and powerful computer, or that the universe is or might be infinite?

FrustratedMonky

13 hours ago

If the universe is real, not a simulation.

If you know the position and speed, everything, about every particle, then you should be able to extrapolate the future by calculating it. The problem is you need a computer the size of the universe to do that calculation.

So even though the map is the territory, at equal scale, and you have the map, it is all but worthless, because the map ends up being reality.

Edit: This is a little different from the idea that, if this is a simulation, you can do clipping and only render what we see. I'm saying the entire universe is 'real'.

worstspotgain

13 hours ago

If the universe is not infinite, and if individual particles and waves are calculable, it follows that one can postulate a larger universe capable of simulating it, or a large enough log book in this example.

What I find interesting is looking at whether some observable things look like they might be performance optimizations, or even "magic seeds" (as in RNG seeds).

No proof of a simulation obviously, but maybe hints.

tines

13 hours ago

> If you know the position and speed, everything, about every particle, then you should be able to extrapolate the future by calculating it.

But isn't that the exact thing that quantum mechanics refutes? You cannot know the future just from the past; you can only know the probabilities of different futures.

worstspotgain

12 hours ago

OK, but if you own the machine, you can just pick the outcome you want, or draw it from the distributions at random. We (observers inside the machine) cannot know the future of course.

FrustratedMonky

12 hours ago

Yes. I referred to the randomness that would prevent this, "once that is solved".

Guess I'm in the camp that eventually we'll find some model, or discover something new, that explains what's behind the randomness, so it is no longer just random. But yes, that is a big IF.

Until then, with current theories, we couldn't do these calculations. They'd just be approximations accounting for some randomness.

Filligree

11 hours ago

Many-Worlds doesn’t contain or require any randomness.

I guess for whatever reason you don’t consider that to be the correct discovery?

FrustratedMonky

37 minutes ago

Because it doesn't remove the randomness from our universe; it punts it to other universes. That is great, but it doesn't allow us to predict things in our individual universe.

Or, to put it another way: we have 2 answers, and they are determined. That is great, we know the 2 answers, one in each universe. Now the problem is we don't know which universe we are in. Which universe we are in is what's random.

We didn't move the ball towards doing something useful in our own universe.

Filligree

5 minutes ago

The 'universes' are loose abstractions, not a defined part of the theory; there's no actual hard distinction between timelines, in much the same way as coastlines don't have a defined length. They all blend into each other if you look closely enough.

That said, isn't the obvious answer 'all of them'?

jorvi

12 hours ago

> The problem is you need a computer the size of the universe to do that calculation.

I’m not sure where you get that idea from. The amount of calculations we can do per, say, 1,000,000 molecules dedicated to the calculation has absolutely skyrocketed, and will continue to skyrocket.

FrustratedMonky

12 hours ago

"The amount of calculations we can do per, say, 1,000,000 molecules dedicated to the calculation"

Let's say it takes 100 molecules in a circuit to calculate 1 particle's state. Then you'd already need a universe 100x the size to calculate our 1x-size universe.

I'm assuming all particles, not that this is somehow clipping and only rendering what we see. I'm not talking about the brain-in-a-box simulation; I'm talking about the idea that the entire universe is out there. What would it take to calculate every position of every particle?

jorvi

an hour ago

> Let's say it takes 100 molecules in a circuit to calculate 1 particle's state. Then you'd already need a universe 100x the size to calculate our 1x-size universe.

That’s not how it works though. You’d have a lot of fixed costs to build a computer that simulates exactly the behavior of one particle. But then simulating a secondary particle will have a much, much, much smaller marginal cost.

Since you brought up clipping, games are actually a perfect example. You can see games as very crude simulations of our own reality, or slices of it. Take for example Red Dead Redemption 2. Run it on a PS5. Now compare the size of your PS5 to the mass of what was the old Wild West territory :)

Plus there’s the whole quantum computing thing, where in a way you’re reaching into “alternate” realities for extra compute.

FrustratedMonky

20 minutes ago

Yes. Just like a Minecraft World is like the size of 64,000 Earths, but it runs on my laptop.

That is what I'm saying is not happening. I'm saying that in a particle collider we measure particles, and those exist all the time, not just when we are looking at them. Likewise, I have DNA and bones; they exist all the time, not just as a simulation showing a 'skin' so it doesn't have to render everything.

Unless you are making a bigger point: that a computer simulating every single particle must exist outside this universe, and maybe mass and energy in this outside universe are so radically different that we can't even grasp the scales involved.

Just like someone inside a minecraft world with blocks, couldn't grasp the amount of energy in our world.

throw310822

6 hours ago

Now shred that log to particles and scatter them everywhere, and you have the "dust theory". Neither the time dimension nor the log is shared with the simulating universe, and yet they are still valid for the observers within the universe.

If the sequence of the log states is entirely deterministic based on the initial state, then you don't even need to actually write down the entire log for it to "exist". This is Greg Egan's Permutation City.

worstspotgain

5 hours ago

Can we reduce this to an estimate of survivorship bias? If there is only one universe, then our survival is clearly explained: we're in the only reality there is. If all possible universes exist, then we really lucked out in ending up in this one (well, depending on who wins the election I guess.)

In the middle are the permutations selected through the filter of other realities, when they chose which universes to simulate. We lucked out but not as much, because uninteresting universes never made it out of the entropic soup.

It would have to be a conditional estimate of course, because our sentience biases our contemplation.

amelius

13 hours ago

If I took the binary representation of that log and XOR'ed it with a random binary string, then would the result also have observers with the same observations?
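For concreteness, the XOR operation described here is just a one-time pad: the information in the log survives, but only relative to the key, while the scrambled string on its own is uniform noise. A tiny sketch (my own illustration; the log contents are invented):

```python
# XOR a "log" with a uniformly random key (a one-time pad). The result alone
# looks like random noise, but XOR-ing with the same key recovers the log.
import os

log = b"t=0: particle A at x=1; t=1: particle A at x=2"  # hypothetical log entry
key = os.urandom(len(log))  # random key, same length as the log

scrambled = bytes(a ^ b for a, b in zip(log, key))   # log XOR key
recovered = bytes(a ^ b for a, b in zip(scrambled, key))  # (log XOR key) XOR key

assert recovered == log  # the information survives, but only relative to the key
```

So the question becomes whether structure that exists only relative to an external key still "contains" observers, which is essentially the dust-theory question raised elsewhere in the thread.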

worstspotgain

13 hours ago

Good question? :) I'd say no.

How about an exact copy of the log book, but with one bit flipped. Voila, mostly universal physics.

julianeon

8 hours ago

There's a hidden condition here.

How do you know every event to every particle?

The answer to that will literally change what gets written in the log book.

kridsdale3

7 hours ago

The point is the log is a graph or a tree, not an array.

kouru225

11 hours ago

Ok, but the act of writing it down would always take longer than the actual unfolding of the universe itself. Just like the halting problem: we can't skip ahead at any point, and we have no idea what will come next.

worstspotgain

11 hours ago

Sure, but the timebases are different. Maybe it took the butterflypeople a thousand butterflyweeks to write it out.

Let me restate the metaphysics a bit differently. Let's say there's no us, no butterflypeople, nothing at all. Entropy reigns supreme, no information is organized.

Now add the butterflypeople. They write the humanpeople's log book. Information exists in organized form. The humanpeople's bits have been divined out of the great entropic nothing. Maybe that's all it takes?

oersted

6 hours ago

Quick appreciation for the Douglas Adams reference :)

neom

11 hours ago

Every time I read stuff like this I get super drawn to thinking about Sunyata* - in Mahayana Buddhism, my understanding is that Sunyata doesn't mean absolute nothingness or non-existence, but that all things are devoid of intrinsic, independent existence. Everything is empty of inherent nature because everything is interdependent: phenomena exist only in relation to causes and conditions. This relational existence means that things do not possess an unchanging essence; in the ultimate sense, there is no fixed reality. What might seem like "everything" is actually permeated by "nothingness" or "emptiness", and phenomena arise dependent on conditions, without intrinsic, permanent nature.

https://en.wikipedia.org/wiki/%C5%9A%C5%ABnyat%C4%81

kridsdale3

7 hours ago

My mind also went here when reading TFA.

The all-time-all-space-all-branches brane of the Ruliad we call the Universe is the continuous one-ness, and our selves are just single-perspective projection models of that universe in our neurons, persisting across edits to the neurons, until such a point as we update the model to see the larger picture. We can call that Nirvana, if we wish.

darshanime

11 hours ago

Sunyata comes from Sunya, which in Sanskrit means "zero", another idea invented by the Indians.

drdeca

13 hours ago

Is any of what he’s saying here, something he hasn’t essentially already said before?

The parts of this which were a little surprising to me (e.g. the bit about comparing time to heat, and the bit about running out of steps to do at an event horizon) iirc all linked to a thing posted a while ago?

I don’t share his enthusiasm for the phrase “computational irreducibility”. I would prefer to talk about e.g. no-speedup theorems.

whatshisface

13 hours ago

It has been said before, but by Stephen Wolfram.

nitwit005

12 hours ago

It feels like this could be a perfectly decent article if he toned down his ego and referenced existing work (other than his own).

But I don't think that's possible for him.

wavewrangler

2 hours ago

Wolfram has always been difficult for me to follow. I think it's because he tends to drone on; I don't know why, and I don't think even he knows why. My understanding of what I have managed to listen to or read is that, being who we are, we don't process information fast enough to see much of what is around us, even while it is happening before us. As an example, take a minute under consideration: you can think about how long a minute is. It's tangible to us. It's not very long. But if we think about how long a femtosecond is, it is not tangible at all. We can't experience a femtosecond. We can experience a whole bunch of femtoseconds, but not just one. This is just one example of what I perceived the meaning of his thinking to be. Is that wrong, or far off? Not only can we not experience a femtosecond, we will never be able to, because our brains are simply not fast enough and aren't built to exist at such a scale. If that's what it means, then is he referring to our ability to exist at certain scales, and our tendency to know the scale in which we exist? And does existing outside of that scale require different computational parameters? Additionally, is this an extension of dimensions, just in time rather than space? Does he differentiate between the two?

I know that the perception of scale has more to do with, well, perception, whereas computational irreducibility (as I understand it, anyway) is more a property of natural processes... or of THE underlying function upon which all other functions are built. ... Right? Between that and the perception of the scale in which we have evolved to exist, it seems like they are at least closely related...

Some of what has been discussed here in the comments has me doubting my understanding, is the reason I ask.

To extend my question: could computational irreducibility help explain why the universe tends to "recycle" so many parts of itself? Is it some telltale sign that when we see these patterns (the golden ratio, fractals, recurring structures in nature), we are looking at a fundamental aspect of the universe in some form, or its computationally irreducible equivalent? Or is this to be determined?

ziggyzecat

2 hours ago

> THE underlying function

So this is about where it clicked for me: A function, to us normies, is something consisting of at least one part that doesn't do anything and another part that does something but has no tangible form, 'the operation'. So, to me, irreducible can only mean that there is some level where the function is the thing and vice versa, so that this irreducible function, from our (current) space-time-perspective, has no constituents except 'self'.

Which is nonsense, because self is worthless without stuff it can react with or to. Except, is it really?

A femtosecond can't be experienced because subpixel-sized movements/fractions of reactions happen during this short measurement. But that's irrelevant for the interface between this function and nature and evolution from their current space-time-POV and their, and thus our, space-time-blind-spots. It's like thought and action when there is not enough time to stop a movement or when stopping that exact movement would terminate the intended result.

But I actually don't think that irreducibility is the right term. It should be liminality or something, focusing on the fact that nothing temporary is measurable before the emergence of THE underlying function, which is what I used to think The Planck length is for (more or less) constant space.

tunesmith

9 hours ago

I like thinking about hypergraphs that continually rewrite themselves. I've thought about it in terms of literary critique, or in "compiling" a novel. It reminds me of petri nets in a sense, where at any given moment, a character has a static model of the world, which can be diagrammed through a causal graph of conclusions and premises. Then, an event happens, which changes their understanding of the world; the hypergraph gets rewritten in response.

I've toyed with this with my own graph software when writing novels. It's of course impossible to fully document every character's model before and after every event that affects them, but even doing so at key moments can help. I've wished more than once that I could "compile" my novel so it could automatically tell me about plot holes or a character's faulty leap in logic (at least, one that would be out of character for them).

I've also tried the more common advice of using a spreadsheet where you have a column for each character, and rows indicating the passage of time. There you're not drawing hypergraphs but in each cell you're just writing raw text describing the state of the character at that time. It's helpful, but it falls apart when you start dealing with flashbacks and the like.
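The rewrite step described above is easy to sketch as a toy. This is my own illustration, not tunesmith's actual software; the character facts are invented. A character's model is a graph of premise-to-conclusion edges, and an event that falsifies a premise cascades, dropping every conclusion downstream of it:

```python
# Toy "belief graph": facts point to the conclusions drawn from them.
# An event that falsifies a fact triggers a rewrite: the fact and everything
# that transitively depended on it are dropped.
from collections import defaultdict

class BeliefGraph:
    def __init__(self):
        self.deps = defaultdict(set)  # fact -> conclusions drawn from it

    def conclude(self, premise, conclusion):
        self.deps[premise].add(conclusion)

    def retract(self, fact):
        """An event falsifies `fact`: drop it and all downstream conclusions."""
        dropped, stack = set(), [fact]
        while stack:
            f = stack.pop()
            if f not in dropped:
                dropped.add(f)
                stack.extend(self.deps.pop(f, set()))
        for downstream in self.deps.values():
            downstream -= dropped  # scrub dangling references
        return dropped

# A character's (invented) model of the world before the event:
g = BeliefGraph()
g.conclude("butler was in the kitchen", "butler has an alibi")
g.conclude("butler has an alibi", "butler is innocent")

# The event: the character learns the butler was NOT in the kitchen.
changed = g.retract("butler was in the kitchen")
```

After the event, `changed` holds the three facts that no longer survive, which is the hypergraph-rewrite picture in miniature: the event rewrote the character's causal graph of premises and conclusions.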

openrisk

13 hours ago

Seems like an appropriate post on a day when the Nobel of Physics was awarded not for Physics discoveries but for computer science...

But from Wheeler's "it from bit" to Wolfram's computational universes, the question is: where is the beef?

Now, there might ultimately be something worthwhile in the obsession with digi-physics. Mental models that seemed disparate may merge and become fruitful. It doesn't even have to be a fully formed toolkit. Newton's invention of calculus was kinda sketchy, but he was explaining things with it, things that were not understood before.

WillyWonkaJr

13 hours ago

Wolfram does offer an interesting alternative to viewing the universe as a manifold with a tensor (the GR view): he believes it's a graph with computational rules. Are they the same? Mathematically, manifolds have a clear notion of dimension, which affects things like the inverse-square rule. Wolfram's view of the ruliad, an evolving graph with rules, does raise the question of dimension.

But at the end of the day he needs to make a concrete prediction that differs from the current view in order to get people to devote a lot of time to studying his worldview. He's a brilliant guy and the Wolfram Language is fantastic, but he really needs to humble himself enough to value the work of convincing others.

kridsdale3

7 hours ago

I honestly don't think he cares about 'mainstream acceptance'. He is a prolific publisher of his detailed thoughts, which in the pre-academic-gatekeeping-establishment era, was enough for any serious philosopher.

He's a hobbyist. That doesn't make him any less prestigious if his ideas are neat.

openrisk

6 hours ago

The gatekeeping and manipulation going on in formal scientific publishing is notorious, but that is not the issue here.

The fundamental algorithm of advancing physical science has always been: a set of "principles" or proto-concepts, a set of matching mathematical tools (which don't even need to be very rigorous), using these tools to explain a slice of reality (experimental outcomes), and, finally, predicting unknown behaviors that can be sought and confirmed (and celebrated).

Sometimes even just a purely equivalent mathematical representation is fine, as it may give handles for calculations and thinking.

But whatever the program of digi-physics is, it doesn't follow these age-old patterns that establish validity and usefulness intrinsically, rather than because some gatekeepers say so.

The primary utility seems to be enhancing the prestige and toolkit of computational physics, which is fine, but totally doesn't justify the universality claims.

XorNot

12 hours ago

Worth noting this is ultimately the problem with string theory: it does provide a suite of mathematical tools which can solve real physics problems and give valid answers, but they're known physics problems that can also be solved with other tools.

To be useful as a theoretical framework it always needed to be able to predict something which only string theory could - as a "more accurate view of reality".

Which is the same problem here: you've got to make a prediction, an accessible prediction, and ideally also not introduce any new incompatibilities.

Shawnecy

8 hours ago

Is there anything testable or falsifiable here? Otherwise it's just preaching beliefs.

kridsdale3

7 hours ago

That's the whole point of philosophy.

thrance

6 hours ago

Not really; modern philosophy attempts to present valid arguments based on a few axioms. You can then decide for yourself whether to accept those axioms, in which case you also have to accept the conclusion of the argument.

psychoslave

5 hours ago

Ok, so after the article on time as an emergent property[1], here we go with time from a computational point of view.

Can we at least get a definition of computation that doesn't depend on time being a given, explicitly or implicitly?

Am I alone in being a bit taken aback by this? It's not physics, or even general philosophy, but the plain old theological focus on the prime mover.

[1] https://www.quantamagazine.org/the-unraveling-of-space-time-...

hnax

10 hours ago

Whereas it's nowadays standard practice in science to conceive of time as the dimension along which events are tagged, I would suggest the opposite: process, as a sequence of events, induces time. But even in the modern conception, time is derived from periodic atomic events, as in an atomic clock. So fundamentally the two conceptions are the same, but the process conception allows for greater freedom in what the underlying process may entail.

marcus_holmes

6 hours ago

I'm curious about how this relates to deterministic time and the lack of free will.

>Our minds are “big”, in the sense that they span many individual branches of history. And they’re computationally bounded so they can’t perceive the details of all those branches, but only certain aggregated features. And in a first approximation what then emerges is in effect a single aggregated thread of history.

Does this allow free will?

GistNoesis

4 hours ago

I think Stephen at least dares to ask the question.

Here is a little thought-experiment on the Nature of Time.

You take the three-body problem, pick an initial condition, and generate the trajectory of the three bodies from 0 to T by integrating through time with some numerical scheme like Runge-Kutta.

Now you do it again, and again, each time generating a "universe" of three-body trajectories. Doing so lets you build a dataset of physically realistic three-body trajectories.

And now the kicker: you train a diffusion model on this (potentially infinite) synthetic dataset. Once trained, to build a "universe" (aka a 3-body trajectory) you only need to sample from the diffusion model. There is no longer any need to integrate through time. Past, present and future inside the universe just fold themselves into place so that the universe satisfies the time-evolution constraint.

Numerically, both schemes can in theory be made as accurate as desired (error smaller than any chosen epsilon). The diffusion model seems to need more memory in a toy model, but that's not clear-cut: it stores the universe in a compressed fashion, which may need less memory once the universe is no longer a toy model.

The underlying question I perceive in Stephen's work is whether it's more efficient computationally to explore all possible universes simultaneously, in which case time is a mere constraint you have to solve, or to generate each universe independently by stepping through internal time.

Although the two may seem to be the same (our perception only having access to a slice of the multiverse), since in both cases you end up with a physically consistent universe, the nature of the sampling process changes the distribution of possible states. It also opens the possibility of shifting across various universes (not that we would be physically aware of the previous or future universe), such that we would benefit by experiencing a "better" universe. It's the same vibe as the idea that our universe has been fine-tuned for life to be possible.
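The "integrate through time" half of the thought experiment is easy to make concrete. A minimal sketch (my own illustration, not GistNoesis's code): a fixed-step RK4 integrator generating one planar three-body "universe"; the units G = 1, unit masses, and the figure-eight initial condition are standard toy choices.

```python
# Generate one three-body "universe" by stepping through time with RK4.
import numpy as np

G = 1.0
masses = np.array([1.0, 1.0, 1.0])

def accelerations(pos):
    # pos: (3, 2) positions; returns (3, 2) Newtonian gravitational accelerations
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d) ** 3
    return acc

def deriv(state):
    # state: (6, 2) = positions stacked on velocities
    pos, vel = state[:3], state[3:]
    return np.concatenate([vel, accelerations(pos)])

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def trajectory(state0, dt=1e-3, steps=2000):
    states = [state0]
    for _ in range(steps):
        states.append(rk4_step(states[-1], dt))
    return np.array(states)  # shape (steps + 1, 6, 2)

# Figure-eight choreography initial condition (Chenciner–Montgomery):
state0 = np.array([[ 0.97000436, -0.24308753],   # position of body 1
                   [-0.97000436,  0.24308753],   # position of body 2
                   [ 0.0,         0.0       ],   # position of body 3
                   [ 0.46620368,  0.43236573],   # velocity of body 1
                   [ 0.46620368,  0.43236573],   # velocity of body 2
                   [-0.93240737, -0.86473146]])  # velocity of body 3
traj = trajectory(state0)
```

Each call to `trajectory` with a different `state0` yields one sample for the dataset; the diffusion-model half would then be trained on many such `traj` arrays, replacing the step-by-step integration with a single sampling pass.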

alkonaut

3 hours ago

Is this a guest writer? It doesn’t have the Wolfram tone at all. It describes a universe that isn’t centered on Stephen Wolfram, for example.

fpoling

9 hours ago

Physics does not explain the flow of time at all. If one films a thrown ball, physics can tell from a few frames its speed, or where the ball is in the following or previous frames. But it tells us nothing about why, when we see the film, we perceive the ball as moving. Articles like the above miss this.

In fact there is not even a notion of the direction of time in physics. All physical models are time-reversible. And even if we observe violations of, say, CPT in nature, that still will not explain why we perceive time flowing in a particular direction.

This is very well discussed in Huw Price's book “Time’s Arrow and Archimedes’ Point”.

Kapura

9 hours ago

The author discusses some of these points. One excerpt:

> But even at a much more mundane level there’s a certain crucial relationship between space and time for observers like us. The key point is that observers like us tend to “parse” the world into a sequence of “states of space” at successive “moments in time”. But the fact that we do this depends on some quite specific features of us, and in particular our effective physical scale in space as compared to time.

> In our everyday life we’re typically looking at scenes involving objects that are perhaps tens of meters away from us. And given the speed of light that means photons from these objects get to us in less than a microsecond. But it takes our brains milliseconds to register what we’ve seen. And this disparity of timescales is what leads us to view the world as consisting of a sequence of states of space at successive moments in time.

> If our brains “ran” a million times faster (i.e. at the speed of digital electronics) we’d perceive photons arriving from different parts of a scene at different times, and we’d presumably no longer view the world in terms of overall states of space existing at successive times.

> The same kind of thing would happen if we kept the speed of our brains the same, but dealt with scenes of a much larger scale (as we already do in dealing with spacecraft, astronomy, etc.).

fpoling

9 hours ago

This still misses the biggest question about the nature of time. The problem is not that we perceive the world as a set of space-like frames. The problem is why our consciousness perceives the frames as moving from one to another at all, and in a particular direction.

qaq

8 hours ago

Is that a question about the nature of time, or about our perception of time, though?

goatlover

8 hours ago

Because the universe is evolving from a low entropy state to a high one.

fpoling

6 hours ago

This does not explain the flow of time, nor the direction in which consciousness perceives it. A low-entropy state is just a low-probability state. Such a state in the past is just as unlikely as in the future, since physical models are time-reversible.

Moreover, there is no evolution in physical models. The universe is just a 4-dimensional thing. Granted, time in physics is different from space: we can predict across time based on the conditions on a 3-d space-like surface, while if one makes a slice of the 4-d universe with 2 space dimensions and one time dimension, predicting across the remaining space dimension is impossible.

But that does not explain why our perception flows from one space-like slice to another, and in a particular direction. Sure, some of the slices are less common (low entropy) than others (high entropy), but there is no movement or evolution.

A good analogy is a rod with a color gradient, white on one end and black on the other, with white turning into black quickly so that most of the rod is black. We can arbitrarily call the white side first and even say that the color evolves from white to black. Then, since the white side is low probability (a randomly selected slice of the rod will be black), we can even say that the color evolves from a low-probability to a high-probability state. But this is arbitrary: in reality the color does not evolve, and there is just the single colored rod.

_cs2017_

8 hours ago

I don't understand how computational irreducibility matters for the perception of time. Surely, even a computationally reducible universe could be so insanely expensive to predict that it wouldn't matter?

I also don't understand why our inability to predict the future is related to our perception of time.

Overall, my impression is that this is an essay in philosophy (i.e., devoid of any content) rather than science.

projectileboy

10 hours ago

Fascinating, but I really wish this work was being published as a series of papers in peer-reviewed journals. Otherwise it’s hard to take the work seriously.

Q_is_4_Quantum

9 hours ago

Surely Wolfram deserves the Nobel as much as Hopfield and Hinton? Not for this stuff of course (which I doubt many take seriously), but because he also provided us with an amazing computational tool without which physics would be very far behind where it is today?

[And at least I knew his name already unlike our current laureates whom I just had to look up!]

tux3

9 hours ago

This year is an exception because of the AI Gen AI Artificial Intelligence AI AI zeitgeist.

If we keep giving the physics Nobel to people building computer tools, soon it will have to be renowned physicist Linus Torvalds, whose computational platform underlies every big physics experiment.

I'm not sure physicists would be thrilled if we keep going in that direction.

CSMastermind

9 hours ago

I think this is one of the rare times I feel comfortable speculating that had he not created Mathematica, then someone else would have.

There was a demand and plenty of people with interest.

He was just in the right place with the right set of skills to execute on it before others and won the market in its infancy. Also, it's a small enough market that the likes of Microsoft didn't feel the need to come in and crush him like they did Lotus 1-2-3.

Q_is_4_Quantum

8 hours ago

I suspect you are right - but multiple Nobel prizes have gone to people who got there only very slightly ahead of others in the race. Would be tough to argue that there are many prizes which are for work that wouldn't have been done within a decade of when the winner actually did do it.

hyperhello

13 hours ago

Okay. Time is a computation. Patterned or otherwise predictable computations can be performed instantly and thus are not time. Only results that can’t be precomputed are part of our perceptions. That’s what I got out of it.
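The contrast between predictable and irreducible computation can be sketched concretely (a minimal Python illustration; the doubling map is my own toy example of a reducible process, while Rule 30 is Wolfram's standard example of apparent irreducibility):

```python
# A "reducible" process admits a shortcut: its state after n steps has
# a closed form, so the future can be jumped to without simulation.
def reducible_after(n, x0=1):
    # Doubling each step: no iteration needed, jump straight to step n.
    return x0 * 2**n

# Rule 30: new cell = left XOR (center OR right). No shortcut is known;
# reaching step n seems to require computing every intermediate step.
def rule30_after(n, width=64):
    cells = [0] * width
    cells[width // 2] = 1  # single "on" cell in the middle
    for _ in range(n):
        cells = [
            cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
            for i in range(width)
        ]
    return cells
```

On this reading, only processes like the second one contribute to perceived time: their future states must be "lived through" step by step.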

DiscourseFan

12 hours ago

It's certainly interesting, though the language it's couched in wouldn't be found in any philosophical discussion on time. This is all to say that it deals with concepts that have been discussed in philosophy for a long time, and these insights wouldn't be considered "new" to someone from, say, mid-19th century Prussia. Certainly the "progressive unfolding of the truth," in qualitatively different steps, which Wolfram adopts here as his concept of time, is no different from Hegel's concept of time and the movement of history. I would recommend, for anyone interested in this sort of thing, to just read the "Preface" to his Phenomenology of Spirit.[0]

[0]https://files.libcom.org/files/Georg%20Wilhelm%20Friedrich%2...

FDAiscooked

10 hours ago

Disregard anything Stephen Wolfram says about anything other than his Mathematica software. He's a pretentious, arrogant twat who thinks he's unlocked the keys to the Universe and is trying to convince the rest of the world of his brilliance.

fuzzfactor

11 hours ago

You have to figure time would carry on even if nothing else was happening . . .

. . . at the time ;)

ndsipa_pomu

3 hours ago

That doesn't seem likely. If there was nothing happening, then how could you distinguish one instant from another? Without any change there can be no concept of time.

fuzzfactor

16 minutes ago

>That doesn't seem likely.

Really I guess I've always felt that way when you think about it conceptually, but maybe all it has to do is be slightly more likely than time standing still while other things do not ;)

You might also very well be able to say that without time there would be no concept of change either :)

inshard

5 hours ago

Computationally unbounded observers see more of the future but what of free will?

twilo

13 hours ago

I believe it's simply a unit of measurement we use to understand the movement or rhythm on which the universe operates. It could be termed the "progress of computation" if that makes more sense, but it's all in the same effort.

arkj

7 hours ago

SW is the Derrida of computation. More words that add confusion rather than explain anything.

lostmsu

13 hours ago

Discussed in Permutation City

A_D_E_P_T

12 hours ago

Yeah. I'm in the middle of writing a book about this, but in a sense it was also discussed by the Pythagoreans. And they (correctly, I think,) went a step further:

"The Pythagoreans too used to say that numerically the same things occur again and again. It is worth setting down a passage from the third book of Eudemus' Physics in which he paraphrases their views:

‘One might wonder whether or not the same time recurs as some say it does. Now we call things 'the same' in different ways: things the same in kind plainly recur - e.g. summer and winter and the other seasons and periods; again, motions recur the same in kind - for the sun completes the solstices and the equinoxes and the other movements; but if we are to believe the Pythagoreans and hold that things the same in number recur - that you will be sitting here and I shall talk to you, holding this stick, and so on for everything else - then it is plausible that the same time too recurs.’"

- Simplicius, Commentary on the Physics 732.23-33.

Branching paths, "all possible mathematics," etc. In a universe which appears to be discrete, which can support finitist arguments, and where the potential number of paths is starkly finite -- this eventually leads to the conclusion that all paths recur.

Filligree

11 hours ago

Strictly speaking, it only leads to the conclusion that eventually the universe will enter a loop passing through a finite number of states.

There’s no requirement that the current state is part of the loop. Or indeed that any state containing conscious observers is.
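That "rho-shaped" trajectory (a tail leading into a loop) is easy to demonstrate for any deterministic update on a finite state space (a toy sketch; the quadratic map below is a hypothetical dynamics of my choosing, not anything from the article):

```python
# Any deterministic map on a finite state space must revisit a state
# within N+1 steps, after which it cycles forever. The starting state
# need not lie on the cycle: the trajectory has a tail, then a loop.

def find_tail_and_cycle(f, x0):
    seen = {}  # state -> step at which it first appeared
    x, step = x0, 0
    while x not in seen:
        seen[x] = step
        x = f(x)
        step += 1
    tail_len = seen[x]          # steps before entering the loop
    cycle_len = step - seen[x]  # period of the loop
    return tail_len, cycle_len

N = 1000                        # size of the finite state space
f = lambda x: (x * x + 1) % N   # toy update rule
tail, cycle = find_tail_and_cycle(f, 3)
# Pigeonhole: a repeat is guaranteed within N+1 steps.
assert tail + cycle <= N + 1
```

Here the tail states (like the starting state 3) are visited exactly once and never again, which is the point: recurrence is only guaranteed for states already on the loop.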

pizza

9 hours ago

The bit in Permutation City about siphoning compute by exploiting the magnitudes of vector computations as a kind of scratch space out of algorithms that only needed the resulting angles… wonder if you could modify the DoRA parameter-efficient finetuning algorithm to do something like that lol, since it also splits up the new weights into angular and magnitude components..

raldi

8 hours ago

So you can't go back in time for the same reason you can't go left in Super Mario Bros.

thrance

6 hours ago

Wolfram's theories are still largely pseudoscientific; in that way they look a lot like string theory, minus the public funding the latter received.

Neither theory is really falsifiable: if new experiments contradict the theory, it can just be adjusted to fit the new observations. As a consequence, those theories are unable to make any kind of prediction about our reality, which makes them pretty much useless. No wonder this "research" was never published in any physics journal.

hoseja

5 hours ago

Wolfram article on the nature of reality.

Cellular automaton on the first screen.

akomtu

11 hours ago

Time and space probably belong to consciousness, rather than the real world. The objective "true" reality may be utterly incomprehensible in its complexity, but we can imagine a "slice" of that reality that arbitrarily defines space and time so that the interior of that slice follows some reasonable rules. That slice of reality can be thought of as a high-level consciousness that defines rules of our physics. Other slices of the same reality are possible, GR-like or QM-like, including those that are computational and discrete in nature. One universe, but many interpretations. Within each slice of reality, it may be possible to define smaller subsets of reality, corresponding to smaller consciousness, down to the human or even more primitive levels. So what Wolfram is describing may be true, objectively, to the observers of a computational slice of the universe, just like the MWI may be simultaneously true to the observers of the MWI slice of reality.

nyc111

6 hours ago

"(as I’ve argued at length elsewhere)"

Everything he writes is "at length". This looks like an interesting read with good ideas, but it is so long and unstructured that I gave up reading. It might help to give an abstract at the beginning of the article.

The problem with the treatment of time in physics is that we can only measure time intervals not the philosophical Time (with capital T). But physicists gladly conflate the two.

Mach said: Absolute time [the philosophical Time] cannot be measured by comparison with another motion, it has therefore neither a practical nor a scientific value.

Which means that all of the "t" terms standing for time in astronomical equations are for time intervals and tell us nothing about the philosophical Time.

vivzkestrel

9 hours ago

when you die, people say that your time has ended. Does anyone know scientifically speaking what happens to time for a dead person

zanethomas

9 hours ago

The web became trashed over a decade ago.

hiddencost

12 hours ago

I really think that Wolfram's descent into fringe science has hurt a lot of well meaning people that don't know better and think that because he's developed useful software that he should be listened to in these domains.

qaq

10 hours ago

Oh maybe because he has a PhD in particle physics from Caltech ?

xanderlewis

8 hours ago

Eric Weinstein also has a PhD in physics; it doesn't preclude you being (or becoming) a crank.

qaq

8 hours ago

What specifically is crank about his theory? From an outsider's perspective, theories that require a bunch of extra dimensions just to make the math work sound no less cranky.

xanderlewis

7 hours ago

I'm not claiming to be qualified to judge it, but it's quite clear that no one who is takes it seriously. He also seems to spend most of his time pontificating about things he has no expertise in and using his genuine expertise in physics to show off in front of easily-impressed podcast hosts — not a great sign.

qaq

7 hours ago

"pontificating about things he has no expertise in" - again, he has a PhD from Caltech in particle physics and a good number of published works in quantum field theory. How are you coming to the conclusion that he is pontificating about things he has no expertise in?

XorNot

12 hours ago

The crackpot trajectory of otherwise smart people is fairly well trodden, with a number of indicators and Nobel laureates who have walked it - one of which is when people start stepping well outside their field... and then also tend to start stepping into "the biggest problems" of wherever they point themselves.

Mistletoe

12 hours ago

I call it helicoptering, my old boss used to love to do it. Helicopter down onto a problem, act like everyone that already studied it was an idiot and hadn’t spent their life trying to solve X, stir a bunch of dust up, accomplish nothing, and helicopter away again to something else.

jpitz

11 hours ago

Almost like time is the stack and space is the heap.

Meh. Almost.

mistermann

13 hours ago

> If we were not computationally bounded, we could “perceive the whole of the future in one gulp” and we wouldn’t need a notion of time at all.

Maybe, if we assume we aren't axiomatically bound, despite knowing that we are (but that knowledge is rarely in context, so we can only know it sometimes...once again: time...weird).

"Thought is Time."

- Jiddu Krishnamurti

downboots

11 hours ago

> perceive the whole of the future in one gulp

"Therefore, as regards such knowledge, they know all things at once" Summa

Vecr

10 hours ago

You could perceive (maybe? Depends on how it's hooked up) a future (a simulation based on the information you have), but there's no reason to think that's what the future is with certainty. Map/territory stuff too.

mistermann

8 hours ago

> but there's no reason

What is it that you refer to here?

Koshkin

13 hours ago

I guess I’ll just wait for Sabine to say something about this.

goatlover

8 hours ago

I'm guessing she'll be pretty sarcastic as she's not overly fond of mathematical theories that aren't testable, to say the least.

kgwgk

an hour ago

Except for superdeterminism - but maybe she doesn’t have a choice.