brink
4 days ago
Krapivin made this breakthrough by being unaware of Yao's conjecture.
The developer of Balatro made an award-winning deck builder game by not being aware of existing deck builders.
I'm beginning to think that the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before. This makes me kind of sad, because the current world is so interconnected that we rarely see such novelty; people tend to "fall in the rut of thought" of those that came before. The internet is great, but it also homogenizes the world of thought, and that kind of sucks.
aidenn0
4 days ago
> I'm beginning to think that the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before. This makes me kind of sad, because the current world is so interconnected that we rarely see such novelty; people tend to "fall in the rut of thought" of those that came before. The internet is great, but it also homogenizes the world of thought, and that kind of sucks.
I think this is true only if there is a novel solution that is in a drastically different direction than similar efforts that came before. Most of the time when you ignore previous successful efforts, you end up resowing non-fertile ground.
layman51
4 days ago
Right, the other side is when you end up with rediscoveries of the same ideas. The example that comes to my mind is when a medical researcher found the trapezoidal rule for integration again[1].
[1]: https://fliptomato.wordpress.com/2007/03/19/medical-research...
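(For context, the rediscovered rule itself is only a few lines of code. A rough Python sketch, assuming the curve is given as sampled points; this is just an illustration, not the paper's notation:)

    def trapezoid_area(xs, ys):
        # Approximate the area under a curve sampled at points (xs[i], ys[i]).
        area = 0.0
        for i in range(len(xs) - 1):
            # Each strip is a trapezoid: width times the average of the two heights.
            area += (xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
        return area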
Shorel
3 days ago
That's not really a problem.
On one hand, it shows the idea is really useful on its own.
And on the other hand, it shows that currently forgotten ideas have a chance of being rediscovered in the future.
bluGill
3 days ago
It is not a problem if you are a student learning how to solve problems. Solving previously solved problems is often a good way to learn: because there is a known solution, you know you are not hitting something that cannot be solved, and your teacher can guide you if you get stuck.
For real-world everyday problems, normally it is either an application of already-solved theory or it isn't worth working on at all. We still need researchers to examine and expand our theory, which in turn allows us to solve more problems in the real world. And there are real-world problems that we pour enormous amounts of effort into solving despite lacking theory, but these areas move much slower than the much more common application of already-solved theory and so are vastly more expensive. (This is how we get smaller chip architectures, but it is a planet-scale problem to solve.)
p00dles
3 days ago
I agree -> even if someone spends their time "rediscovering" an existing solution, I think that the learning experience of coming up with a solution without starting from the current best solution is really valuable. Maybe that person doesn't reach the local maximum on that project, but having a really good learning experience maybe enables them to reach a global maximum on one of their next projects.
If I want some novel ideas from a group of people, I'm going to give them the framework of the problem, split them into groups so that they don't bias each other, and say: go figure it out.
blablablerg
3 days ago
It is, because you are wasting time reinventing the wheel. Also, if something is already well researched, you might miss intricacies, traps, optimizations, etc. that previous researchers have stumbled upon.
dwaltrip
3 days ago
It isn’t necessarily “wasted” time. There are more ways to look at it, as well as 2nd order and 3rd order effects (and so on).
It’s a powerful skill to be able to try to solve things from first principles. And it’s a muscle you can strengthen.
It would be a bit silly to never look anything up, but it isn’t so black and white.
Shorel
3 days ago
You need to be able to do both.
Only reading the existing literature is not good enough.
The capacity to create ideas is also something that needs to be practiced.
Brian_K_White
3 days ago
This is a good example of how the most obvious intuition can be wrong, or at best incomplete.
djmips
2 days ago
I find a lot of the time concepts and patterns can be the same in different fields, just hidden by different nomenclature and constructed upon a different pyramid of knowledge.
It's nice we have a common language that is mathematics, the science of patterns, to unify such things but it's still going to be a challenge because not everyone is fluent in all the various branches of mathematics.
It's even mind-blowing how many ways you can approach the same problem with equivalencies between different types of mathematics itself.
sitkack
4 days ago
I think that shows how great the trapezoidal rule is. I feel like this is brought up so many times that now it is used to make fun of people. It is 18 years old at this point.
FartyMcFarter
4 days ago
I mean, it sort of deserves being made fun of. 18 years ago Google existed, surely you'd search for "area under a curve" before going through all the effort of writing a paper reinventing integrals?
Edit: actually the paper was written in 1994; not sure what the "18 years" was referring to. But still, peer review existed and so did maths books... Even if the author can be excused somewhat (and that's already a stretch), peer reviewers should definitely not have let this fly.
mgens
3 days ago
Unfortunately it is quite common to see serious mathematical issues in the medical literature. I guess it's due to a combination of math being essential to interpreting medical data and trial results, and most practitioners not having much depth of math knowledge. Just this week I came across the quote "Frequentist 95% CI: we can be 95% confident that the true estimate would lie within the interval." This is an incorrect interpretation of confidence intervals, but the amusing part is that it is from a tutorial paper about them, so the authors should have known better. And cited 327 times! https://pmc.ncbi.nlm.nih.gov/articles/PMC6630113/
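The correct reading is about long-run coverage: if you repeated the study many times, roughly 95% of the intervals constructed this way would contain the true value. A quick simulation makes that concrete (a rough sketch with made-up numbers, assuming normally distributed data):

    import random, statistics

    TRUE_MEAN, N, TRIALS = 10.0, 30, 10_000  # hypothetical study parameters
    covered = 0
    for _ in range(TRIALS):
        sample = [random.gauss(TRUE_MEAN, 2.0) for _ in range(N)]
        mean = statistics.mean(sample)
        se = statistics.stdev(sample) / N ** 0.5
        lo, hi = mean - 1.96 * se, mean + 1.96 * se  # approximate 95% CI
        covered += lo <= TRUE_MEAN <= hi
    print(covered / TRIALS)  # close to 0.95: coverage is a property of the procedure,
                             # not a probability that any one interval contains the truth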
ashoeafoot
4 days ago
Does it make you less of a peer to others who found it before? At least the author showed the ability to think creatively for himself, not paralyzed by the great stagnation like the rest of us.
mpweiher
4 days ago
Herself. Mary Tai.
And what makes you less of a peer is not knowing the basics. And being so unaware of not knowing the basics, and/or so uninterested, that you don't bother to check something that is highly checkable.
eru
3 days ago
Even worse: you didn't just think so in private, but you decided to publish your 'great' discovery.
fooker
3 days ago
The blame is on the reviewers.
This is why peer review exists. One cannot know everything themselves. It's fairly common for CS paper submissions to reinvent algorithms and then tone down the claims after reviewers point out that variants already exist.
fragmede
3 days ago
> highly checkable
in 1994?
reshlo
2 days ago
Calculus textbooks existed in 1994. It just took me 30 seconds to find “area under a curve” in the index of the one I own, and another 30 seconds to find the section on numerical integration, which describes the trapezoidal approximation.
fragmede
2 days ago
So you already know the particular area of the larger topic of mathematics that you need to look for, you already have a textbook for that particular subject in your possession (meaning you don't need to go to the library and somehow figure out the right book to choose from the thousands in the 510 section), you know what you are looking for exists, and then you aren't surprised you can find it?
I know how to find the area under the curve, but there's so much biology I don't know jack shit about. Back in 1994, it would have been hopeless for me to know the Michaelis-Menten model even existed if it had been relevant to my studies in computer science. That you can right-click on those words in my comment in 2025 and get an explanation of what that is, and can interrogate ChatGPT to get a rigorous understanding of it, shouldn't make it seem like finding someone in the math department to help you in 1994 was easier than just thinking logically and reinventing the wheel.
Joker_vD
2 days ago
There is a thing called "higher education". Ostensibly one of its chief purposes is to arm you with all the interconnected knowledge and facts that are useful in your chosen field of study. To boot, you get all of that from several different human beings who you can converse with, to improve the scope and precision of the knowledge you're receiving. You know, "standing on the shoulders of giants" and all of that stuff?
fragmede
2 days ago
ostensibly.
reshlo
2 days ago
> So you already know the particular area of the larger topic of mathematics that you need to look for
So did the author of the paper. The paper’s title itself mentions the area under a curve. It would not have been difficult to find information about how to calculate an approximation of the area under a curve in the library.
fragmede
4 days ago
I'd argue this is an argument against relying purely on peer review, as her peers also weren't mathematicians.
Some of us, when learning calculus, wonder whether, had we been alive before it was invented, we'd have been smart enough to invent it. Dr. Tai provably was (the trapezoidal rule, anyway). So I choose to say xkcd 1053 to her, rather than bullying her for not knowing advanced math.
eru
3 days ago
> Dr. Tai provably was.
No, we have no proof of that. We just know that she published a paper explaining the trapezoidal rule.
(A) That approximation for 'nice' curves was known long before calculus. Calculus is about doing this in the limit (or with infinitesimals or whatever) and also wondering about mathematical niceties, and also some things about integration. (B) I'm fairly certain she would have had a bit of calculus at some point in her education, even if she remembered it badly enough to think she found something new.
fragmede
2 days ago
I mean, it's possible she reinvented the wheel because what she really needed in her life is for the math department to laugh at her, but that seems far fetched to me.
rchard2scout
3 days ago
The "18 years" probably refers to the date since the linked blogpost was published, March 2007.
gsf_emergency_2
4 days ago
This is the strongest argument for not shaming reinvention...
Unless the victims are world-class..? (Because it's not entirely not self-inflicted)
https://news.ycombinator.com/item?id=42981356
Shades of the strong-link weak-link dilemma too
eru
3 days ago
> This is the strongest argument for not shaming reinvention...
Sounds like a pretty weak argument? I'm sure there are some good arguments for re-invention. But this ain't one of them.
Basically, re-invention for fun or to help gain understanding is fine. But when you publish a 'new' method, it helps to do a bit of research about prior work. Especially when the method is something you should have heard about during your studies.
Dansvidania
4 days ago
There might be a decent amount of "survivorship bias" too, meaning you only hear of the few events where someone starts from first principles and actually finds, be it luck or smarts, a novel solution which improves on the status quo, but I'd argue there are N other similar situations where you don't end up with a better solution.
That being said, I do disagree with just taking the "state of the art" as written in stone, and "we can't possibly do better than library x" etc.
smaudet
4 days ago
Plenty of "state of the art", at least a decade ago, that was not very state of anything.
I think bias is inherent in our literature and solutions. But also, I agree that the probability of a better solution degrades over time (assuming that the implementations themselves do not degrade - building a faster hash table does not matter if you have made all operations exponentially more expensive for stupid, non-computational, reasons)
croo
3 days ago
In 1973 Clifford Cocks solved the problem of public keys for the first time in history, a problem that no one in GCHQ had managed to solve in the previous 3 years. He jotted down the solution in half an hour after hearing about it, then wondered why it was such a big thing for everyone else. A fresh view unclouded by prejudice can make all the difference.
mmusson
4 days ago
Or worse, pursuing something already proven not to work.
porkbrain
4 days ago
Viewed through the lens of personal development, I suppose one could make an argument that there isn't much difference between rediscovering an existing valid solution and rediscovering an invalid one. Both lead to internalisation of a domain's constraints.
SkyBelow
3 days ago
Outside of math and computational science, nothing is proven not to work, because scientific research doesn't work in proofs. Even in math and computational science, there are fields dedicated to researching logic already proven wrong, because sometimes there are interesting findings, like hypercomputation.
phyzix5761
4 days ago
That's how the best discoveries are made.
Manabu-eo
3 days ago
Or how a lot of time is wasted. For example on perpetual motion machines and infinite data compression.
phyzix5761
3 days ago
A lot of major scientific discoveries were made while people were trying to turn base metals into gold; also known as alchemy.
Some examples include discovering phosphorus, the identification of arsenic, antimony, and bismuth as elements rather than compounds, and the development of nitric acid, sulfuric acid, and hydrochloric acid. Alchemy ultimately evolved into modern chemistry.
I think the key is that thinking that something is a waste of time is the type of mentality that prevents individuals from pursuing their interests to the point where they actually make important discoveries or make great inventions.
If you put enough time and energy into anything you're bound to learn a lot and gain valuable insights at the very least.
somenameforme
4 days ago
The problem is when the "proof" is wrong. In this case a related conjecture held up for 40 years, which is not a proof per se, but still ostensibly an extremely high-reliability indicator of correctness.
Another example is when SpaceX was first experimenting with reusable self landing rockets. They were being actively mocked by Tory Bruno, who was the head of ULA (basically an anti-competitive shell-but-not-really-corp merger between Lockheed and Boeing), claiming essentially stuff along the lines of 'We've of course already thoroughly researched and experimented with these ideas years ago. The economics just don't work at all. Have fun learning we already did!'
Given that ULA made no efforts to compete with what SpaceX was doing it's likely that they did genuinely believe what they were saying. And that's a company with roots going all the way back to the Apollo program, with billions of dollars in revenue, and a massive number of aerospace engineers working for them. And the guy going against them was 'Silicon Valley guy with no aerospace experience who made some money selling a payment processing tool.' Yet somehow he knew better.
necovek
4 days ago
All the cases you bring up are not "proofs": a conjecture is very much not one; it's just that nobody bothered to refute this particular one, even though there were results pointing the other way (cited in the paper).
Similarly, ULA had no "proof" that this would be economically infeasible: Musk pioneered using agile ship-and-fail-fast for rocket development which mostly contradicted common knowledge that in projects like these your first attempt should be a success. Like with software, this actually sped things up and delivered better, cheaper results.
somenameforme
3 days ago
The Apollo missions, in which Boeing was a key player, were also from a 'ship and fail fast' era. It led to some humorous incidents, like the strategy for astronaut bathroom breaks simply being 'hold it', which was later followed up by diapers when we realized on-pad delays happen. Another was the first capsule/command module being designed without even a viewport. Of course it also led to some not-so-humorous incidents, but such rapid advances rarely come for free.
In any case Musk definitely didn't pioneer this in space.
eru
3 days ago
> Of course it also led to some not so humorous incidents, but such rapid advances rarely come for free.
Luckily, you can run a lot higher risks (per mission) when going unmanned, and thus this becomes a purely economic decision there, almost devoid of the moral problems of manned spaceflight.
Manned spaceflight has mostly been a waste of money and resources in general.
somenameforme
3 days ago
The first man on Mars will likely discover far more in a week than we have in more than 50 years of probes.
There's a fundamental problem with unmanned stuff - moving parts break. So for instance Curiosity's "drill" broke after 7 activations. It took 2 years of extensive work by a team full of scientists to create a work-around that's partially effective (which really begs a how-many-does-it-take-to-screw-in-a-light-bulb joke). A guy on the scene with a toolkit could have repaired it to perfection in a matter of minutes. And the reason I put drill in quotes is because it's more like a glorified scraper. It has a max depth of 6cm. We're rather literally not even scratching the surface of what Mars has to offer.
Another example of the same problem is in just getting to places. You can't move too fast for the exact same reasons, so Curiosity tends to move around at about 0.018 mph (0.03 km/h). So it takes about 2.5 days to travel a mile. But of course that's extremely risky, since you really need to make sure you don't bump into a pebble or head into a low-value area, meaning you want human feedback with about a 40-minute round-trip latency on a low-bandwidth connection - while accounting for normal working hours on Earth. So in practice Curiosity has traveled a total of just a bit more than 1 mile per year. I'm also leaving out the fact that the tires have also, as might be expected, broken. So its contemporary traveling speed is going to be even slower.
Just imagine trying to explore Earth traveling around at 1 mile a year and once every few years (on average) being able to drill, hopefully, up to 6cm! And all of these things, btw, are bleeding edge relative to the past. The issue that moving parts break is just unsolvable for now and for anytime in the foreseeable future.
----------
Beyond all of this, there are no "moral problems" in manned spaceflight. It's risky and will remain risky. If people want to pursue it, that's their choice. And manned spaceflight is extremely inspiring, and really demonstrates what man is capable of. Putting a man on the Moon inspired an entire generation to science and achievement. The same will be true with the first man on Mars. NASA tried to tap into this with their helicopter drone on Mars but people just don't really care about rovers, drones, and probes.
eru
3 days ago
For the cost of sending a guy, you can probably just send ten probes.
somenameforme
2 days ago
You get extremely diminishing returns with probes. There's only so much you can do from orbit. Rovers are substantially more useful, but are extremely expensive. Curiosity and Perseverance each cost more than $3 billion. As the technology advances and we get the basic infrastructure setup, humans will rapidly become much cheaper than rovers.
A big cost with rovers is the R&D and one-off manufacturing of the rover itself. With humans you have the added cost of life support, but 0 cost in manufacturing and development. The early human missions will obviously be extremely expensive as we pack in all the supplies to start basic industry (large scale Sabatier Reactions [1] will be crucial), energy, long-term habitation, and so on.
But eventually all you're going to need to be paying for is food/life support/medicine/entertainment/etc, which will be relatively negligible.
ExoticPearTree
2 days ago
Yeah, but then you are going to get very little return from those 10 probes.
Sending a person there on a one-way mission would probably give us more data than 100 probes. And I have a feeling that there are a lot of people willing to go on such a mission.
eru
2 days ago
I don't share your optimism.
Have a look at https://www.nasa.gov/humans-in-space/20-breakthroughs-from-2... and keep in mind that those are already the highlights. The best they could come up with.
somenameforme
2 days ago
What sort of things would you expect on the list? A lot of those are critical prerequisites for humanity's advancement. They also left out some really important stuff like studies on sex in space, exercise in space, effects of radiation in space (as well as hardening electronics), and so on.
A space station on Mars would probably not provide much more than that so should be a low priority, but obviously the discoveries to be made on land trounce those to be made in space.
bluGill
3 days ago
Eventually you cannot run high risks in unmanned missions either. If a rocket fails getting a satellite to orbit, just build a new one. However, missions to the outer planets are often only possible once every several hundred years (when orbits line up), so if you fail you can't retry. For Mars you get a retry every year and a half (though the window is only about a month). If you want to hit 5 planets, that is a several-hundred-year event. And the trip time means that if you fail once you reach the outer planet, all the engineers who knew how the system works have retired, so you start from scratch on the retry (assuming you even get the orbits to line up).
eru
3 days ago
> However missions to the outer planets are often only possible once every several hundred years (when orbits line up) and so if you fail you can't retry.
Just send ten missions at the same time. No need to wait until you fail.
bluGill
2 days ago
Ten that fail in the exact same way is a real possibility.
necovek
3 days ago
Sure, it's better to frame it as "reintroduction": for those early attempts to be successful, with the Soviets pushing on the other side as well, it is the strategy that works the fastest.
Thanks for the funny incidents as well, and my empathy for the not so funny ones!
vkou
4 days ago
Also, SpaceX was exactly one failed launch (with every prior one being a failure) from bankruptcy.
Had that one also been a failure, he wouldn't be running the US government and we'd all be talking about how obviously stupid reusable rockets were.
necovek
3 days ago
Had they received the same grant money as Boeing ($4.2b vs $2.6b), it wouldn't have been such a close call.
I'd also note that they were late by 3 years or so: this did not produce miracles, it was just much cheaper and better in the end than what Boeing is still trying to do.
Manabu-eo
3 days ago
He is talking about Falcon 1, not CCDev. There was no close call at CCDev, nor any grant money for Falcon 1.
necovek
2 days ago
Thanks for the correction/clarification.
Still, I would be surprised if SpaceX did not greatly benefit from knowledge gained in Falcon 1 development when building their Falcon 9 rocket and then optimizing it for reusability — they started development of Falcon 9 while Falcon 1 was still operating.
NateEag
3 days ago
This illustrates beautifully how stupid labeling ideas stupid is.
To know that an idea or approach is fundamentally stupid and unsalvageable requires a grasp of the world that humans may simply not have access to. It seems unthinkably rare to me.
WalterBright
3 days ago
On the other hand, I knew from the beginning that the Space Shuttle design was ungainly, looking like a committee designed it, and unfortunately I was right.
(Having a wing, empennage and landing gear greatly increased the weight. The only thing that really needs to be returned from space are the astronauts.)
kruador
3 days ago
It was designed to support a specific Air Force requirement: the ability to launch, release or capture a spy satellite, then return to (approximately) the same launch site, all on a single orbit. (I say 'approximately' because a West Coast launch would have been from Vandenberg Air Force Base, returning to Edwards Air Force Base.)
The cargo bay was sized for military spy satellites (imaging intelligence) such as the KH-11 series, which may have influenced the design of the Hubble Space Telescope. Everything else led on from that.
Without those military requirements, Shuttle would probably never have got funded.
I'm listening to "16 Sunsets", a podcast about Shuttle from the team that made the BBC World Service's "13 Minutes To The Moon" series. (At one point this was slated to be Season 3, but the BBC dropped out.) https://shows.acast.com/16-sunsets/episodes/the-dreamers covers some of the military interaction and funding issues.
somenameforme
2 days ago
You're saying the same thing he is, but with more precise examples. There were also plenty of more useless requirements which is what he was getting at with it being 'designed by committee.' It was also intended to be a 'space tug' to drag things to new orbits, especially from Earth to the Moon, and this is also where its reusable-but-not-really design came from.
It's also relevant that the Space Shuttle came as a tiny segment of what was originally envisioned as a far grander scheme (in large part by Wernher von Braun) of complete space expansion and colonization. The Space Shuttle's origins are in the Space Transportation System [1], which was part of a goal to have humans on Mars by no later than 1983. Then Nixon decided to effectively cancel human space projects after we won the Space Race, and so progress in space stagnated for the next half century and we were left with vessels whose design and functionality no longer had any real purpose.
[1] - https://en.wikipedia.org/wiki/Space_Transportation_System
somenameforme
3 days ago
Let alone on launch. It's amusing that NASA is supposed to be this highly conservative, safety-first environment, yet went with a design featuring two enormous solid rocket boosters. We knew better than this even during the Saturn era, which was very much a move-fast-and-break-things period of development.
bluGill
3 days ago
There is nothing wrong with solid rocket boosters for that application. The issue is they failed to figure out the limits and launched when it was too cold. (They also should have investigated when they saw unexpected non-fatal seal issues.)
Solid boosters are more complex, and so Saturn could not have launched on time if they had tried them. So for Saturn, with an (arbitrary) deadline, not doing them was the right call. Don't confuse right call with best call though: we know in hindsight that Saturn launched on time; nobody knows what would have happened if they had used solid boosters.
somenameforme
3 days ago
I wasn't referencing Challenger in particular. I'm speaking more generally. SRBs are inherently fire and forget. This simply increases the risk factor of rockets substantially, and greatly complicates the risks and dangers in any sort of critical scenario. In modern times when we're approaching the era of rapid complete reuse, they're also just illogical since they're not meaningfully reusable.
bluGill
3 days ago
The SRBs were reused. Like everything on the shuttle, there was far more rebuilding needed before they were reused, but they were reused.
somenameforme
3 days ago
Yeah, but that qualifier you put there means I think you need to frame it as "reused." They dragged a couple of giant steel tubes out of the ocean after a salt-water bath and then completely refurbished and "reused" them. It's technically reuse, but only just enough to fit the most technical definition of the word, and it certainly has no place in the modern goal of launching, landing, inspecting/maintaining (ideally in a time frame of hours at most), and then relaunching.
The only real benefit of SRBs is cost. They're dirt cheap and provide a huge amount of bang for your buck. But complete reuse largely negates this benefit, because reusing an expensive product is cheaper, in the long run, than repeatedly disposing of (or "reusing") a cheap product.
scheme271
4 days ago
Do we know that the economics work for SpaceX? It's a private company and its financials aren't public knowledge; it could be burning investor money. E.g. Uber was losing around $4B/yr, give or take, for a very long time.
somenameforme
4 days ago
You can't know anything for certain, but nearly every analysis corroborates what they themselves say - they're operating at a healthy (though thin) margin on rocket launches and printing money with Starlink.
The context of this, of course, is that they've brought the cost of rocket launches down from ~$2 billion per launch during the Space Shuttle era to $0.07 billion per launch today. And the goal of Starship is to chop another order of magnitude or two off that price. By contrast, SLS (Boeing/NASA's "new" rocket) was estimated to end up costing around $4.1 billion per launch.
varjag
3 days ago
To be fair, cost per launch was in that ballpark already ($0.15-0.05 billion) with the Ariane, Atlas and Soyuz non-reusable vehicles. SpaceX maintains the cost just low enough to undercut the competition.
eru
3 days ago
I think they maintain the price there. They'll want to drive the cost as low as possible, because price - cost = profit for them. A penny saved is a penny earned.
kevin_thibedeau
2 days ago
They undercut every other launch provider. There's no way they're burning investor money to achieve that at the scale of their operations. This is all due to the cost savings of reusable F9. If they wanted to they could jack up their rates and still retain customers and still be the cheapest. There is no reason to believe they are unprofitable.
withinboredom
3 days ago
Most of their income comes from government subsidies and grants. So, it is rather funny to see the owner of the company running around the government and "cutting" costs.
somenameforme
3 days ago
SpaceX's total funding from government grants and subsidies is effectively $0. They do sell commercial services to the government and bid on competitive commercial contracts, but those are neither grants nor subsidies.
withinboredom
2 days ago
Ummm... you know the government granted them a bunch of money to go to the moon, right?
somenameforme
2 days ago
No, they didn't. The government wants to get to the Moon via the Artemis program (which will never go anywhere, but that's a different topic) and so NASA solicited proposals and bids for a 'human landing system' [1] for the Moon. SpaceX, Blue Origin, and Dynetics all submitted bids and proposals. SpaceX won.
Amusingly enough Blue Origin then sued over losing, and also lost that. They were probably hoping for something similar to what happened with Commercial Crew (NASA's soliciting bids from companies to get astronauts to the ISS). There NASA also selected SpaceX, but Boeing whined to Congress and managed to get them to force NASA to not only also pick Boeing, but to pay Boeing's dramatically larger bid price.
SpaceX has since not only sent dozens of astronauts to the ISS without flaw, but is now also being scheduled to go rescue the two guinea pigs sent on Boeing hardware. They ended up stranded on the ISS for months after Boeing's craft was deemed too dangerous for them to return to Earth in.
eru
3 days ago
If they can keep raising money from investors, that seems proof enough to me that the economics must be good enough.
Ie investors would only put up with losing money (and keep putting up money), if they are fairly convinced that the long run looks pretty rosy.
Given that we know that SpaceX can tap enough capital, the uglier the present-day cashflow, the rosier the future must look (so that the investors still like them, which we know they do).
baq
3 days ago
The economics very likely didn’t work. It’d be irresponsible for a launch company to model Starlink without a customer knocking on your door with a trailer full of dollars to sponsor the initial r&d and another bus of lawyers signing long term commitments. Vertical integration makes the business case much more appealing.
mike_hearn
3 days ago
Does SpaceX have any investors other than Musk? I thought he bootstrapped it.
skissane
3 days ago
Musk owns 42% of SpaceX's total equity and 79% of the voting equity.
The non-Musk shareholders range from low-level SpaceX employees (equity compensation) through to Alphabet/Google, Fidelity, Founders Fund.
There are actually hundreds of investors. If you are ultra-wealthy, it isn't hard to invest in SpaceX. If you are the average person, they don't want to deal with you, the money you can bring to the table isn't worth the hassle–and the regulatory risk you represent is a lot higher
mike_hearn
3 days ago
Thanks, that's interesting!
eru
3 days ago
> Musk owns 42% of SpaceX's total equity and 79% of the voting equity.
How much of their balance sheet is debt vs equity?
Eg in theory you could have lots and lots of (debt) investors and still only a single shareholder.
skissane
3 days ago
> How much of their balance sheet is debt vs equity?
I believe it is almost all equity, not debt.
There is such a huge demand to invest in them, they are able to attract all the investment they need through equity. Given the choice between them, like most companies, they prefer equity over debt. Plus, they have other mechanisms to avoid excessive dilution of Elon Musk's voting control (non-voting stock, they give him more stock as equity compensation)
eru
3 days ago
> Given the choice between them, like most companies, they prefer equity over debt.
What do you mean by 'most companies'? Many companies use debt on their balance sheet just fine, and even prefer it. Banks, famously, have to be restrained from making their balance sheet almost all debt.
baq
3 days ago
The easiest way to get upside exposure in Starlink and wider spacex is to buy alphabet.
ExoticPearTree
2 days ago
I think Musk had a better imagination and the money to fund that imagination without constraints or internal politics.
godelski
4 days ago
I'm not sure this is true, though I think it looks true.
I think the issue is that when a lot of people have put work into something, you think that your own chances of success are low. This is a pretty reasonable belief, too. With the current publish-or-perish paradigm, I think this discourages a lot of people from even attempting. You have evidence that the problem is hard and that, even if solvable, it will probably take a long time, so why risk your entire career? There are other interesting things that are less risky. In fact, I'd argue that this environment in and of itself results in far less risk being taken. (There are other issues too, and I laid out some in another comment.) But I think this would look identical to what we're seeing.
lo_zamoyski
4 days ago
Right. FWIW, Feynman predicted that physics would become rather boring in this regard, because physics education had become homogenized. This isn't to propose a relativism, but rather that top-down imposed curricula may do a good deal of damage to the creativity of science.
That being said, what we need is more rigorous thinking and more courage pursuing the truth where it leads. While advisors can be useful guides, and consensus can be a useful data point, there can also be an over-reliance on such opinions to guide and decide where to put one's research efforts, what to reevaluate, what to treat as basically certain knowledge, and so on. Frankly, moral virtue and wisdom are the most important. Otherwise, scientific praxis degenerates into popularity contest, fitting in, grants, and other incentives that vulgarize science.
MVissers
4 days ago
I think that's why most innovative science today happens at the intersection of two domains– That's where someone from a different field can have unique insights and try something new in an adjacent field. This is often hard to do when you're in the field yourself.
phyzix5761
4 days ago
But how can you ever discover a novel solution without attempting to resow the ground?
bluedino
3 days ago
Run a mile in 4 minutes. Eat 83 hot dogs in 10 minutes.
Everything is impossible until someone comes along that's crazy enough to do it.
taylorius
3 days ago
So if there's no solution in a particular area, you won't find it? You may be on to something there! :-)
robotelvis
4 days ago
In my experience the best approach is to first try to solve the problem without having read the prior work, then read the prior work, then improve your approach based on the prior work.
If you read the prior work too early, you get locked into existing mindsets. If you never read it, then you miss important things you didn't think of.
Even if your approach is less good than the prior work (the normal case) you gain important insights into why the state of the art approach is better by comparing it with what you came up with.
dpatru
3 days ago
A decade ago I read this same advice in "The Curmudgeon's Guide to Practicing Law": spend at least a little time trying to solve the problem before you look at how others have solved it. One benefit is that occasionally you may stumble on a better method. But the more common benefit is that it helps develop your problem-solving skills and primes you to understand and appreciate existing solutions.
brookst
4 days ago
What if you’ve already read the prior work before trying to solve the problem?
kortilla
4 days ago
Then you’re very unlikely to come up with a novel approach. It’s very difficult to not let reading “state of the art” research put up big guardrails in your mind about what’s possible.
All of the impressive breakthroughs I saw in academia in the CS side were from people who bothered very little with reading everything related in literature. At most it would be some gut checks of abstracts or a poll of other researchers to make sure an approach wasn’t well explored but that’s about it.
The people who did mostly irrelevant incremental work were the ones who were literature experts in their field. Dedicating all of that time to reading others’ work puts blinders on both your possible approaches as well as how the problems are even defined.
agumonkey
3 days ago
Maybe some people tried to develop out-of-the-box sessions to force investigating absurd axioms and see how it goes.
HelloNurse
3 days ago
Worst case: you don't have a fresh perspective, but you have learned something and you can try plenty of other problems.
There's also a fair chance of finding possibilities that are "obviously" implicit in the prior work but haven't yet been pursued, or even noticed, by anyone.
ibejoeb
3 days ago
In all seriousness, if you're cool with it, LSD. Or anything else that can take you out of the ordinary course of thought.
cubefox
4 days ago
> If you read the prior work too early to you get locked into existing mindsets.
I agree, though in some cases coming up with your own ideas first can result in you becoming attached to them, because they are your own. It is unlikely for this to happen if you read the prior work first.
I think overall reading the prior work later is probably still a good idea, but with the intention not to become too impressed with whatever you came up with before.
helloplanets
4 days ago
> The developer of Balatro made an award winning deck builder game by not being aware of existing deck builders.
He was aware of deck builders and was directly inspired by Luck be a Landlord, but he was not aware of just how massive the genre is.
Direct quote from the developer:
> The one largest influence on Balatro was Luck Be a Landlord. I watched Northernlion play for a few videos and loved the concept of a non-fanatsy themed score attach roguelike a ton, so I modified the card game I was working on at the time into a roguelike.
> I cut myself off from the genre at that point intentionally, I wanted to make my own mistakes and explore the design space naively just because that process is so fun. I hear the comparison to Slay the Spire a lot but the truth is that I hadn't played that game or seen footage of it when I designed Balatro, not until much later.
https://www.reddit.com/r/Games/comments/1bdtmlg/comment/kup7...
HelloUsername
3 days ago
Exactly, more info in this interview on Bloomberg on 7-feb-2025: https://www.bloomberg.com/news/newsletters/2025-02-07/maker-...
chambers
4 days ago
“They’re cheering for you,” she said with a smile.
“But I could never have done it,” [Milo] objected, “without everyone else’s help.”
“That may be true,” said Reason gravely, “but you had the courage to try;
and what you can do is often simply a matter of what you will do.”
“That’s why,” said King Azaz, “there was one very important thing about your quest
that we couldn’t discuss until you returned.”
“I remember,” said Milo eagerly. “Tell me now.”
“It was impossible,” said the king, looking at the Mathemagician.
“Completely impossible,” said the Mathemagician, looking at the king.
“Do you mean … ,” said the bug, who suddenly felt a bit faint.
“Yes, indeed,” they repeated together, “but if we’d told you then, you might not have gone …
and, as you’ve discovered, so many things are possible just as long as you don’t know they’re impossible.”
- The Phantom Tollbooth (1961)
somenameforme
4 days ago
I think going one layer lower - the fundamental issue is that the internet drives people to unrealistic perceptions of the competence of others. Think about all of the undeniably brilliant people that have been involved in software over the past 40 years, and how many of them used hash tables in performance critical environments. Let alone mathematicians and others using them in applied domains. And you think there's something fundamental that all of these people just somehow missed?
The argument of 'if that's such a good idea, why wouldn't somebody have just done it already?' seems to have grown exponentially with the advent of the internet. And I think it's because the visibility of others' competence became so much more clear. For those who lived through e.g. Carmack's Golden Age, you knew you were never going to be half the coder he was, at least based on the image he successfully crafted. That 'slight' at the end is not to say he wasn't a brilliant developer, or even perhaps the best in the world at his peak, but rather that brilliance + image crafting creates this Gargantuan beast of infallibility and exceptionalism that just doesn't really exist in reality. I think it's from this exact phenomenon that you also get the practical fetishism of expertise.
rincebrain
4 days ago
A professor I had in college, whose first published result was from a piece of homework he turned in where he incidentally solved an open question about a bound on a problem, had a curious habit.
I ended up failing and taking his course again (because I had A Lot going on in college), and thus, noticed something.
Each semester, on one of the assignments in the latter half of the class, he assigned one problem out of, perhaps, 30 in the problem set, where as written, it was actually an open problem, and then a day or two before they were due, he'd send out an "oops, my bad" revised version.
I suspect that this was not an accident, given that it always happened only once.
vitus
3 days ago
> A professor I had in college, whose first published result was from a piece of homework he turned in where he incidentally solved an open question about a bound on a problem, had a curious habit.
A related anecdote is about George Dantzig (perhaps best known for the simplex algorithm):
> During his study in 1939, Dantzig solved two unproven statistical theorems due to a misunderstanding. Near the beginning of a class, Professor Spława-Neyman wrote two problems on the blackboard. Dantzig arrived late and assumed that they were a homework assignment. According to Dantzig, they "seemed to be a little harder than usual", but a few days later he handed in completed solutions for both problems, still believing that they were an assignment that was overdue. Six weeks later, an excited Spława-Neyman eagerly told him that the "homework" problems he had solved were two of the most famous unsolved problems in statistics.
SideQuark
4 days ago
Picking two examples out of all people approaching problems, while ignoring wasted effort and failures to make progress because of not understanding current knowledge, is an absolutely terrible reason to approach from ignorance.
The biggest gains in theory and in practice are far more often obtained by masters of craft, giving much more weight to attacking problems from a position of knowledge.
In fact, even in this case, this progress required that the author was aware of very recent results in computer science, was thinking deeply about them, and most likely was scouring the literature for pieces to help. The “Tiny Pointers” paper is mentioned directly.
mangodrunk
3 days ago
Well said. I really dislike the narrative here, that ignorance is something that leads to discovery. One, the poster gives two examples, as if there's something we should conclude from such a small sample. In addition to that, one of the examples isn't valid: the student's former professor is a co-author of the "Tiny Pointers" [1] paper that he was reading and working through. And it was a conjecture; I don't see why someone should think that means it's impossible.
I would rather, instead of thinking ignorance is a key ingredient to discovery, that instead it's the willingness to try things.
dataviz1000
4 days ago
A similar idea came up in Veritasium's latest video today. DeepMind's training of an AI to predict protein folding improved greatly by withholding the most evident information about a protein's primary structure - its linear polypeptide chain - within the Structure Module step. [0]
After asking ChatGPT not to agree with me that your comment and these two different approaches to solving problems are alike, it concluded there still might be similarities between the two.
[0] https://youtu.be/P_fHJIYENdI?feature=shared&t=1030
[1] https://chatgpt.com/share/67aa8340-e540-8004-8438-3200e0d4e8...
layer8
4 days ago
It’s important to think outside the box, and that’s easier when you’re not aware of the box, but we also stand on the shoulders of giants, and are doomed to repeat history if we don’t learn from it. As usual, things aren’t clear-cut.
itwasntexample
4 days ago
Plus, their choice of bringing up Balatro wasn't correct either. The developer DID play other deck builders, just not that many. Particularly, they played Slay the Spire which is the genre's most influential "Giant" and the entire reason for the game's progression structure (Small fights leading to a known big fight with particular anti-strategy gimmicks).
zimpenfish
3 days ago
> Particularly, they played Slay the Spire
[0] links to an interview where the developer says they didn't play Slay The Spire ("the truth is that I hadn't played that game or seen footage of it when I designed Balatro")
GuB-42
3 days ago
> It’s important to think outside the box, and that’s easier when you’re not aware of the box
I don't think so. If you are not aware of the box, there is a much greater chance for you to be well within the box and not realizing it. By that, I mean that either you rediscovered something, or you are wrong in the same way as many others before you. By chance, you may find something new and unexpected, but that's more of an exception than the rule.
selimthegrim
4 days ago
Have the herd of wildebeest been hunting for room to stow their carry-on luggage?
Nevermark
4 days ago
Nothing is more depressing than hunting for a parking spot, down a long one-way strip of parked cars, right behind another car hunting for a parking spot.
cdelsolar
4 days ago
I’ve been working on and off for years on a scrabble endgame solver; it uses all these techniques from chess like transposition tables, Negamax with alpha beta pruning, NegaScout, aspiration search and so on. There’s a French person who built his own endgame solver and this solver is significantly faster than mine, even with all of the optimizations that I’ve put into it. He is kind of secretive about it because it’s closed source and he makes some money on it, but we’ve talked a bit about it, compared some positions and we’ve determined that his move generation algorithm is actually not asoptimized as mine. But he can still solve the endgame faster despite seeing fewer positions, which implies to me that he’s doing a significantly better job of pruning the tree.
But when we try to talk details, I asked him for example do you use minimax with alphabeta pruning and he told me like “I’m not sure if I am using minimax or what that is :(“ .. I ask him to describe what he does, he essentially describes minimax with pruning. I’ve sorta figured out that he must be doing some very intelligent version of an aspiration search. It’s really eye-opening because he doesn’t have any of this training. He’s never seen any related algorithms, he’s just figuring all this out on his own.
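For anyone unfamiliar, the negamax-with-alpha-beta skeleton I'm referring to looks roughly like this (a generic sketch with hypothetical position/move methods, not my actual solver code):

    import math

    def negamax(pos, depth, alpha, beta):
        # pos is a hypothetical game-state object; evaluate() scores the
        # position from the perspective of the player to move.
        if depth == 0 or pos.is_terminal():
            return pos.evaluate()
        best = -math.inf
        for move in pos.moves():
            score = -negamax(pos.apply(move), depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # beta cutoff: the opponent already has a better option elsewhere
        return best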
smj-edison
4 days ago
I think of Andre Geim as a great example of balancing the two. I couldn't find the exact quote, but he said something to the effect of "when I enter a new field, I make sure I learn the basics so I don't spend all my time making dumb mistakes. But I don't get so into it that I get stuck in the mindshare."
I'll also say I think that diversity in approaches is more important than One Right Way. Some people need to set out on their own, while others spend decades refining one technique. Both have led to extraordinary results!
eterevsky
3 days ago
If we achieved a local maximum at something, the only way to progress is to make a big leap that brings you out of it. The trouble is that most such big leaps are unsuccessful. For every case like you are describing, there are probably hundreds or thousands of people who tried to do it and ended up with something worse than the status quo.
abetusk
4 days ago
I disagree.
Many problems are abstract and so we have to build "cartoon" models of what's going on, trying to distill the essence of the problem down to a simple narrative for what the shape of the problem space is and where the limitations are. That often works but backfires when the cartoon is wrong or some assumptions are violated about when the cartoon description works.
Results like this are pretty rare, nowadays, and I suspect this happened because the problem was niche enough or some new idea has had time to ferment that could be applied to this region. This seems like a pretty foundational result, so maybe I'm wrong about that for this case.
A lot of progress is made when there's deeper knowledge about the problem space along with some maturity for when these cartoon descriptions are invalid.
genghisjahn
4 days ago
This reminds me of the Neal Stephenson article "Innovation Starvation" from 2011:
>A number of engineers are sitting together in a room, bouncing ideas off each other. Out of the discussion emerges a new concept that seems promising. Then some laptop-wielding person in the corner, having performed a quick Google search, announces that this “new” idea is, in fact, an old one—or at least vaguely similar—and has already been tried. Either it failed, or it succeeded. If it failed, then no manager who wants to keep his or her job will approve spending money trying to revive it. If it succeeded, then it’s patented and entry to the market is presumed to be unattainable, since the first people who thought of it will have “first-mover advantage” and will have created “barriers to entry.” The number of seemingly promising ideas that have been crushed in this way must number in the millions. What if that person in the corner hadn’t been able to do a Google search?
>In a world where decision-makers are so close to being omniscient, it’s easy to see risk as a quaint artefact of a primitive and dangerous past (…) Today’s belief in ineluctable certainty is the true innovation-killer of our age
pjc50
4 days ago
I believe Ramanujan did the same with various maths problems. The Cambridge undergrad course sprinkles a few unsolved problems in the practice questions just in case someone does this again.
mangodrunk
3 days ago
Ramanujan read many math books.
voidhorse
4 days ago
There's a reason the phrase "beginner's luck" exists. I'm not sure the naïveté and success are causally related so much as they might be coincident.
Could knowing about prior research skew one's perspective and tarnish novel thought? Sure. But we don't know. Maybe we'd have an even better Balatro if the creator knew about some other deck builders. Maybe we wouldn't, we don't know. We cannot prove the counterfactual.
On the opposite extreme, there are examples of thinkers whose success stemmed from knowing much about one domain or much about many domains and integrating (Luhmann, Goethe, Feynman, Von Neumann etc.). In the general case, I think we are probably much better off promoting knowledge and study, and not ignorance and chance.
That said, I do think we should retain our willingness to play and to try things that are "out of bounds" with respect to the existing accumulated knowledge. We should live informed lives, but play and explore like unschooled children.
necovek
4 days ago
> the authors have also learned of several other hash tables that make use of the same high-level idea in different settings [7, 9].
At least part of the result was already known, and the fact that the authors didn't know about it mostly goes to show the large corpus of knowledge we already possess.
But the core inspiration came from looking at another recent research paper "Tiny Pointers": that is totally against your premise.
If Krapivin was a software engineer looking to implement this solution as optimization for a particular problem, he would have done so without ever thinking of making a research paper to prove it formally, but mostly relied on benchmarking to prove his implementation works better.
Now, it has always been somewhat true that lots of existing knowledge limits our creativity in familiar domains, but you need both to really advance science.
latexr
4 days ago
That is called Shoshin, or Beginner’s Mind.
ajross
4 days ago
> I'm beginning to think that the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before.
This is an "Einstein failed Math" fallacy. It's true that novel and notable work tends strongly not to be bound by existing consensus, which when you say it that way is hardly surprising. So yes, if consensus is wrong in some particular way the people most likely to see that are the ones least invested in the consensus.
But most problems aren't like that! Almost always the "best" way to solve a problem, certainly the best way you're going to solve the problem, is "however someone else already solved it". But sometimes it's not, and that's when interesting stuff happens.
anvuong
3 days ago
This is confirmation/survivorship bias. You only hear about these positive cases. The vast majority just ends up rediscovering old techniques and their year-long paper/work got rejected.
taurknaut
4 days ago
> Krapivin made this breakthrough by being unaware of Yao's conjecture.
I don't think there's any evidence of this. Yao's conjecture is not exactly standard undergraduate material (although it might be - this is a commentary on detail rather than difficulty, but I certainly didn't encounter this conjecture in school). If not knowing this conjecture was the key, millions and millions of students failed to see what Krapivin did. I imagine you'd have to ask him what the key to his insight was.
Hashing is a pretty unintuitive sort of computation. I'm not surprised that there are still surprises.
mangodrunk
3 days ago
Great point. Also, Krapivin was working on another paper co-authored by his former professor. He in fact was not working from ignorance. And like you said, most people didn't know anything about this conjecture, so ignorance certainly wasn't an ingredient here.
jay_kyburz
4 days ago
According to RPS, the quote is that he had "barely played any roguelikelike deckbuilders", not that he was not aware of them.
There are a lot of great deck builders that are not roguelike. Has he played Dominion, Magic the Gathering, Hearthstone?
hans-dampf
3 days ago
Your exact thoughts have already been put to paper by L. P. Hammett, godfather of physical organic chemistry (the exact description of chemical reactions):
one might “... overlook the great difference between exact theory and approximate theory. Again, let me emphasize my great respect for approximate theory. [...] if one starts looking for an effect predicted by this kind of theory to be impossible, the odds are against a favorable outcome. Fortunately, however, the community of scientists, like that of horseplayers, contains some people who prefer to bet against the odds as well as a great many who always bet on the favorite. In science we should, I think, do all we can to encourage the man who is willing to gamble against the odds of this sort.
This does not mean that we should encourage the fool or the ignoramus who wants to play against suicidal odds, the man who wants to spend his time and usually someone else’s money looking for an effect incompatible with, let us say one of the conclusions reached by Willard Gibbs. Gibbs started from thoroughly proven generalizations, the first and second laws of thermodynamics, and reasoned from them by exact mathematical procedures, and his conclusions are the best example I know of exact theory, theory against which it is futile to struggle.”
awesome_dude
4 days ago
There's a problem in all human understanding: knowing when, and when not, to apply pre-existing knowledge to a problem.
Have we been grinding away in the right direction and are only moments away from cracking the problem, or should we drop everything and try something completely new because we've obviously not found the solution in the direction we were heading?
To put it into a CS context: should we be using DFS or BFS to search for the solution, given that we don't have knowledge of future cost (so UCS/Dijkstra's is out), nor do we know where the solution lies in general (so A* is out, even if you ignore the UCS component)?
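(Not the parent's code, just a toy sketch to make that framing concrete: an uninformed search where the only choice is whether the frontier is a queue (BFS) or a stack (DFS), since there is no cost model for UCS/Dijkstra and no heuristic for A*. The graph, start, and goal are invented for illustration.)

    from collections import deque

    def uninformed_search(start, is_goal, neighbors, strategy="bfs"):
        # No edge costs and no heuristic: the only knob is the frontier discipline.
        frontier = deque([start])
        seen = {start}
        while frontier:
            # BFS pops from the front (oldest), DFS from the back (newest).
            state = frontier.popleft() if strategy == "bfs" else frontier.pop()
            if is_goal(state):
                return state
            for nxt in neighbors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return None

    # Hypothetical toy state graph, purely for illustration.
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
    for s in ("bfs", "dfs"):
        print(s, uninformed_search("A", lambda x: x == "E", graph.get, s))

Both strategies find the goal here; the point is only that, without cost or heuristic information, which one pays off first depends entirely on where the solution happens to sit.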
chasing
3 days ago
> I'm beginning to think that the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before.
Both Danny Trejo and Tim Allen spent time in prison before becoming famous. While that's interesting, I'm not sure I'm ready to believe that's the best way to become a professional actor.
Edit to be a little less snarky, apologies:
"Outsiders" are great for approaching problems from fresh angles, but I can almost guarantee that the majority of nuts-and-bolts progress in a field comes from people who "fall in the rut of thought" in the sense that they area aware enough of the field to know which paths might be most fruitful. If I had to place a bet on myself, I wouldn't take a wild uninformed swing: I'd get myself up to speed on things first.
Outsiders sometimes do great work. They also sometimes:
https://www.reddit.com/r/mathmemes/comments/wq9hcl/terrence_...
delichon
4 days ago
Unaccompanied Sonata is a 1979 short story by Orson Scott Card that takes this to an extreme, and has haunted me since I read it in the eighties.
RALaBarge
3 days ago
There is a book about this theory written in the 1960s, 'The Structure of Scientific Revolutions' by Kuhn, which talks about how some sciences progress one funeral at a time and how progress is not linear. He also remarks that people from outside the standard thought and education surrounding the current system are typically the ones who actually advance science.
One example is geocentrism vs. the Copernican astronomical model: the Copernican model could never have sprung from the status quo, because in geocentrism everything revolved around the Earth instead of around the Sun. You can't square that circle.
https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...
GuB-42
3 days ago
About geocentrism vs heliocentrism, 3blue1brown recently released a video [1] that talks about it. It is about the cosmic distance ladder, but geocentrism is mentioned, and it makes a lot of sense in context.
To summarize: heliocentrism was known to the ancient Greeks, who realized the Sun was much bigger than the Earth, so it would seem logical to have the Earth go around the Sun. But the counterargument was that if the Earth goes around the Sun, the stars should move relative to each other during the year because of parallax, and they didn't have instruments precise enough to see it, so they assumed it didn't happen. Copernicus's major contribution wasn't heliocentrism but the orbital periods of the planets, and the model wasn't complete until Kepler calculated the shapes of the orbits. For details, watch the video, it is really good.
Galanwe
3 days ago
I'm being picky here, but I don't think you portray a fair view of Kuhn's epistemology.
Kuhn does not rank the two modes against each other; on the contrary, he merely introduces the idea of two kinds of research: one critical (developing new paradigms) and one accumulative (further refining existing paradigms).
He also hints at the almost inevitable organic interaction between the two, such that critical research naturally evolves from a pragmatic need to express things simply in a new paradigm once the old one becomes too clumsy for a use case.
This is what happened in your example as well. Copernicus (and later Galileo) did not invent heliocentrism out of the blue; the theory had existed since ancient Greece. It is arguably the Renaissance, which led metaphysicists to revisit ancient texts, that spurred Copernicus to consider it. But ultimately the need for the new paradigm was pushed by the need to revise the calendar, which was drifting, and by the difficulty of doing that in a geocentric world, where you have to take planetary retrograde motion into account.
rcxdude
3 days ago
Heliocentrism was well known; the issue was that the Copernican model was a bad model given the evidence and knowledge of physics available at the time (it was basically equivalent to a geocentric model but needed more epicycles, not fewer, and it also required that the Earth rotated and that the stars had some unusual properties). It took Kepler figuring out ellipses and slowly beating out epicycles as a way to do the math, as well as other experiments that established the Earth did indeed rotate (not for lack of trying by heliocentrism advocates, but it's a hard measurement to make), to bring the idea mainstream. (And arguably only Newton's laws of motion actually tied it all together.)
zellyn
3 days ago
Having just finally read (well, listened to) Kuhn's book, I can say:
(a) I wouldn't quite characterize the book as being "about this theory"; it's a bit more nuanced. He definitely says that it's usually younger scientists, with less invested in the currently reigning theory, who are most likely to push forward a revolution. However, I don't recall any examples in the book of people who were wholly _unaware_ of the previous theory.
(b) You should absolutely, definitely read it. It's a classic for a reason, and the writing style is a delight.
giantg2
3 days ago
I've always had a mind that worked that way: I can imagine how something works or could work before looking up how it actually does. But there's been no real benefit to thinking that way in my experience. Thinking differently has only been a career impediment, or gotten on my teachers' nerves for being "smart".
For example, as a young kid I saw a geometric ball made up of hinges that let it expand and contract, and in some stages it looks a little like a gear. So I started wondering whether you could change the size of a gear instead of switching gears in a car. A decade or so later I started seeing CVT transmissions in cars, which is the same concept: you change the size/ratio by expanding or contracting the roller instead of switching gears.
klik99
3 days ago
I agree in the specific case where the state of the art is in a local maximum, but saying "the best way to approach a problem is by not being aware of or disregarding previous attempts" ignores the much more frequent, banal work of iterative improvement. Leaping out of a local maximum is rare and sexy and gets articles written about you, and it is important, but the work of slowly iterating up to a nearby peak is also important.
I think progress needs both the individuals who break out of preconceived notions and the communal work of improving within the notions we currently have.
chikere232
4 days ago
Last year's Advent of Code had a task that was NP-complete and lacked good, well-known approximation algorithms. I almost gave up on it when I realised that, as it felt impossible.
In practice the data was well behaved enough and small enough that it was very doable.
0x38B
3 days ago
"fall[ing] in the rut of thought" reminds me of this paragraph from "The Footpath Way":
> So long as man does not bother about what he is or whence he came or whither he is going, the whole thing seems as simple as the verb "to be"; and you may say that the moment he does begin thinking about what he is (which is more than thinking that he is) and whence he came and whither he is going, he gets on to a lot of roads that lead nowhere, and that spread like the fingers of a hand or the sticks of a fan; so that if he pursues two or more of them he soon gets beyond his straddle, and if he pursues only one he gets farther and farther from the rest of all knowledge as he proceeds. You may say that and it will be true. But there is one kind of knowledge a man does get when he thinks about what he is, whence he came and whither he is going, which is this: that it is the only important question he can ask himself. (The Footpath Way, Introduction (1))
Even though the author is talking about a different kind of knowledge, the image of sticks of a fan - where going down one gradually excludes the others - stuck with me.
ibejoeb
3 days ago
This is a really tough problem. I don't think ignorance is the answer, but it's also difficult to set aside things that seem legitimate and go down a rabbit hole of reinventing something on a hunch. I guess the saving grace is that it's impossible to know enough about such a wide swathe of fields for this to be a frequent problem. With large models that can conceivably encode our collective knowledge, though, we have to be vigilant about creating an orthodoxy that ultimately constrains us.
youniverse
4 days ago
I watched a casual YouTube video by a philosophy professor making the same point: that great scholars are different from great thinkers. Many great thinkers came up with great philosophies because they misread past works.
If anyone wants to watch: https://youtu.be/4vou_dXuB8M?si=Wdr7q96MFULPAEc4
Definitely something we should all keep in mind that sometimes you just have to pave your own way and hope it is great on its own merits.
namibj
3 days ago
Eventually I'll get around to actually building a POC/tech demonstrator that just has fewer modules at perhaps lower current density, to show that even several kV DC can be efficiently converted, not just on paper, down to a few kV or sub-kV DC. At high enough voltage, grounding is no longer optional anyway, so you might as well build essentially an autotransformer plus extra protection against electric shock (an RCD doesn't work directly, but the functionality can still be offered; it just has to sense quite differently).
Why DC? An overhead line only limited by peak voltage (arcing) and thermals can carry twice the power when running DC instead of AC, assuming both are measured relative to ground.
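(My own back-of-envelope, not the parent's numbers: assume the same conductor and insulation, so the peak voltage to ground V_p and the thermally allowed rms current I are the fixed limits.)

    P_{\mathrm{DC}} \approx V_p I,
    \qquad
    P_{\mathrm{AC}} \approx \frac{V_p}{\sqrt{2}}\, I \cos\varphi
    \quad\Longrightarrow\quad
    \frac{P_{\mathrm{DC}}}{P_{\mathrm{AC}}} \approx \frac{\sqrt{2}}{\cos\varphi}

The rms factor alone gives roughly 1.4x; how much of the remaining gap up to the factor of two comes from power factor, skin effect, and insulation margins depends on the specific line, so treat "twice" as an upper-end rule of thumb rather than a guarantee.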
Also, you can run your transistors completely steady-state across all frequency components between their own switching fundamental and your load transients. No more over-provisioning just to make up for legacy 50/60 Hz AC.
Also, to a degree, you can just plug raw batteries into such a DC grid, with at most a little DC regulation to force the voltage a bit higher or lower than the batteries': basically a power supply rated for a couple percent of the battery's max input/output power, since it only needs to supply the small extra voltage, though of course at the full current.
Lastly, DC converters are just way smaller and lighter, so you could avoid the heavy, bulky transformers in trains and alleviate the power limits they impose. This is relevant for fast double-decker trains, because you'd prefer to have passenger space where you currently park the transformer.
I have to say, though, that novel development of technology by pulling in recent innovations from the fundamental/material science fields underlying the target is not an easy thing to do at all.
dathinab
3 days ago
I would say not letting your thoughts be constrained by the bias of existing approaches.
This isn't easy at all. It requires training yourself to have an open and flexible mind in general.
Not knowing about something is more like a cheat to get there more easily.
But it's super common that innovation involves a lot of well-known foundational work and differs only in one specific aspect, and it's quite hard to know about the other foundational work but not that specific aspect, especially if you don't even know which aspect can fundamentally be "revolutionized"/"innovated".
But what always helps when you learn about a new topic is to try blindly first yourself and then look at what the existing approaches do. Not just for doing groundbreaking work, but even for, e.g., just learning math.
One of the math teachers I had in the school years before university used this approach for teaching math. It yielded much better independent understanding and engagement, and it helped me a lot later on. Sadly I only had that teacher for two years.
eru
3 days ago
> I'm beginning to think that the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before.
I don't think that's warranted.
You will find that the vast majority of lottery winners have bought lottery tickets. However that doesn't mean that buying lottery tickets is a good idea financially.
rincebrain
4 days ago
Kind of?
You get novel branches of thought, but in the limit case, you're also reinventing the universe to bake an apple pie.
So there's something of a tradeoff between attempting to ensure people can do more than mimic existing doctrine and efficiency of getting up to speed without having to re-prove existing math and science.
The Balatro dev also, for example, has talked about how he was heavily influenced by several specific other games.
huijzer
4 days ago
Walter Isaacson said something similar about Einstein and Steve Jobs. Sometimes you need to reject commonly held assumptions to make progress. Einstein rejected the idea of ether. According to Isaacson this was probably because Einstein was working outside of university. Inside university, professors would likely have pushed Einstein to stick to the idea of ether.
SkyBelow
3 days ago
Best for an individual or for society?
Consider a simplified example. There is some area of scientific research. Working within the framework gives you a 1 in 4 chance of making some minor improvement. Working outside the framework gives you a 1 in a million chance to create a great leap in knowledge.
For any single individual, the best choice is the former. The latter is a gamble that most people will lose, wasting their lives chasing crazy theories.
For society, you want a split. You need some doing the second option to have the eventual amazing discovery, but you also need to progress the current understanding further.
If we introduce a chance for the minor progress to lead to the same major advancement, it becomes a bit more simple for society to calculate the best allocation of researchers, but for any single person, the best option still remains to dedicate themselves to the small advancement.
shkkmo
4 days ago
> I'm beginning to think that the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before.
Extrapolating a "best way" from a couple of examples of success is bad reasoning. There are definitely ways in which it can be necessary to ignore the standing wisdom to make progress. There are also definitely ways in which being ignorant of the knowledge gained by past attempts can greatly impede progress.
I would point out that it is also possible to question and challenge the assumptions that prior approaches have made without being ignorant of what those approaches tried.
Figuring out which is which is indeed hard. Generally, it seems to work well to have a majority of people expanding/refining prior work and a minority of people starting from scratch to figure out which of the current assumptions or beliefs can be productively challenged or dropped. The precise balance point is vague, but it seems pretty clear that going too far in either direction harms the rate of progress.
germandiago
3 days ago
It is just easier to think outside the box when your mind is not "polluted" with previous ideas, and from time to time someone appears who was simply thinking another way, probably the way most obvious to them, without knowing about the orthodox thinking on the subject.
This is valuable.
thenoblesunfish
4 days ago
Ok, but you are disregarding the 1000s of things the undergrad was aware of and the fact that he worked with other researchers who were aware of the existing results enough to understand the significance of the result.
The real trick is simply to try to understand things directly and not rely on proof by authority all the time.
kazinator
4 days ago
It takes time to read all the prior research. You could grow old by the time you get through it all. Likelihood of contributing to the field declines with age.
You might believe someone's proof of a conjecture and then be discouraged from delving any more into that rabbit hole.
More often than not you will be reinventing something. But that's not necessarily less productive than reading other people's work. In the former case, you're at least making something, even if it isn't new.
So there are some arguments for being fresh in a well-trodden field with an ocean of research that you cannot boil all at once.
On the other hand, there is the publish-or-perish pressure in academia, which requires original research. You could just keep busy and throw enough shit against the wall that some of it sticks.
xyzzy_plugh
4 days ago
Domain knowledge is valuable: you can wield it to great effect as opportunities arise. It lets you leapfrog problems by applying known solutions. The risk is being blind to novel approaches that require innovation.
Being capable of tackling problems from first principles is invaluable, because we frequently encounter problems that are novel in some dimension, even if that dimension is just a new combination of dimensions. It lets you leapfrog large problems by decomposition, possibly going against the grain and innovating by, hopefully, simplifying. However, there is a risk of falling into traps that countless others have already learned the hard way.
This may come as a surprise to some but, believe it or not, you can have both. In fact, you should.
NohatCoder
3 days ago
There is certainly a need for ignoring common wisdom if you want to make something new. I don't think being unaware of it is necessary as long as you are willing to go forward while being told that you are on a fool's errand.
globular-toast
3 days ago
> I'm beginning to think that the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before.
I think sometimes this is true. When I've had new starters on my engineering team, I've always tried to teach them about the problem before they get exposed to any of our solutions. Sometimes they have brand new insights that we've been completely blind to. It doesn't always happen, but there is only one opportunity for this: once they've seen the solutions, they can't be unseen.
kbenson
4 days ago
It's less that it's the best way to approach a problem and more that it optimizes for a different goal. Building on existing knowledge is how you find the local maximum for a problem by moving along the slope you're already on. Starting from scratch is how you find different slopes, which may lead to higher local maxima.
Of course, if you happen to be on a slope that leads to the global maximum, starting from scratch is far less effective. We usually don't really know where we are, so there's a trade-off.
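(A toy sketch of that metaphor, with the objective and all names invented for illustration: pure hill climbing finds the peak of whatever slope you start on, while random restarts trade per-step efficiency for a shot at a better peak.)

    import math, random

    def objective(x):
        # Invented landscape with several local maxima.
        return math.sin(x) + 0.3 * math.sin(3 * x)

    def hill_climb(x, step=0.01, iters=10_000):
        # "Building on existing knowledge": only ever move to a better neighbor.
        for _ in range(iters):
            best = max((x - step, x, x + step), key=objective)
            if best == x:
                break  # local maximum: no neighboring point is better
            x = best
        return x

    nearby_peak = hill_climb(1.0)                         # climb the slope we're on
    restarts = [hill_climb(random.uniform(0, 10)) for _ in range(20)]
    best_found = max(restarts, key=objective)             # "starting from scratch", repeatedly
    print(objective(nearby_peak), objective(best_found))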
There was a good article posted to HN years ago that covered this and used rocketry as one of the examples, but I don't recall what it was. The author was well known, IIRC.
andai
4 days ago
In university lectures, we'd be presented with a problem on one slide, and then like ten seconds later the solution on the next. I'd usually cover my ears and look away because I was still busy coming up with my own solution!
speleding
3 days ago
Somewhat surprisingly (to me), this has also been found for user interfaces [0]. The best initial design for a feature-phone user interface came from designers who were not shown previous work by other designers. Iterations based on previous designs were only better if the designers were shown the "winning" initial design.
3abiton
4 days ago
This sounds like the approach the DeepSeek CEO used for hiring. He favored young, inexperienced teams so they could bring a fresh perspective and try things in new ways. It paid off nicely.
bell-cot
3 days ago
> I'm beginning to think that the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before...
That depends...
- Krapivin was an undergrad, tinkering with stuff for fun. If he'd put a couple months into this, and just ended up re-inventing a few things? That'd be decently educational.
- Vs. if your team needs to ship product, on something resembling the schedule? Yeah. You definitely stick to tried-and-true algorithms.
immibis
4 days ago
Well, sometimes. Other times, perhaps even most times, you bang your head against a wall for weeks and get nowhere.
George Dantzig also solved two open problems because he thought they were homework.
rollcat
3 days ago
In terms of practical engineering, this is also why I love to do side projects that reject existing bodies of libraries, and try to work up from first principles, and/or focus on composability rather than integration.
It's a trade-off, at first it takes longer to iterate on features, but sometimes a more minimal and/or composable tool finds its way to production. Real Systems are made of duct tape anyways.
TZubiri
4 days ago
When training physically, you can overtrain one muscle and come to depend on it. By deliberately not using that muscle, you can improve your other muscles.
It is well known that limitations improve creativity.
That said, I still think the best path is to learn the classical path. If you want, you can question some axioms, but that's mostly irrational in that there's almost no reward for you personally except clout; most of the reward goes to science as a whole.
Owlettotoo
3 days ago
Sometimes insight can come by evaluating the problem at its rawest form. In short, a wild but fresh perspective.
kristopolous
4 days ago
I used to think this a few decades ago. I think it's just as accessible with some mix of anti-authoritarianism and defiant personality.
Essentially you learn a thing, you accept it for now and you think "well but maybe!"
Like I personally think there should be multiple mathematical zeroes but I accept it as wackiness unless I can clearly demonstrate coherency and utility as to why.
yodsanklai
3 days ago
> I'm beginning to think that the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before.
Maybe it's just because there are more people working on these problems who don't know previous approaches than the opposite.
vkou
4 days ago
> I'm beginning to think that the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before.
Survivorship bias, you aren't aware of all the failures where people who were unaware of prior art made all the mistakes predictable to people who were.
indymike
4 days ago
> I'm beginning to think that the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before.
Everyone likes to focus on why you cannot do and why trying will be futile.
You don't have to disregard prior efforts. You just have to focus on one simple question:
"how can I do ______ ?"
fcq
3 days ago
Absolutely true! I concur 100% with your take.
Funny that this breakthrough happened at the same time Antirez made this post: https://news.ycombinator.com/item?id=42983275
ijustlovemath
4 days ago
This is the biggest risk of AI imo; almost by definition your thoughts regress to the mean when using it
encipriano
3 days ago
This is nonsense. You need to double check the answers, spot mistakes, adapt the code to your needs and go to the sources it lists to learn rapidly about that particular thing.
ijustlovemath
3 days ago
It's fundamentally how these things work: learning a token distribution given prior context. The expected output over time is the mean of that distribution. Regression to the mean is the danger I'm talking about.
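(A toy illustration of one way to read that, with invented numbers rather than any real model: the model's output is a probability distribution over next tokens, and greedy or low-temperature decoding keeps collapsing it onto whatever is already most typical.)

    import math

    # Hypothetical next-token scores for a made-up context.
    logits = {"the": 2.0, "a": 1.2, "novel": 0.3, "heterodox": -1.0}

    def softmax(scores, temperature=1.0):
        exps = {t: math.exp(s / temperature) for t, s in scores.items()}
        total = sum(exps.values())
        return {t: round(v / total, 3) for t, v in exps.items()}

    print(softmax(logits, temperature=1.0))   # unusual continuations keep some mass
    print(softmax(logits, temperature=0.2))   # mass collapses onto the most typical token
    print(max(logits, key=logits.get))        # greedy decoding always picks "the"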
fennecbutt
3 days ago
That's a load of selection bias though. I'm sure there have been many, many more people who don't know anything about deck builder games who tried to make one and didn't succeed.
dabeeeenster
4 days ago
Interesting idea! Clifford Cocks also made a breakthrough in/invented Public Key Encryption without realising it https://en.wikipedia.org/wiki/Clifford_Cocks
temporallobe
4 days ago
We’re too afraid of violating some unwritten rule about reinventing the wheel. Or something.
dumbfounder
3 days ago
It’s easy to think outside the box when you don’t know where the box is.
swayvil
4 days ago
I think it's 2 different approaches, some enjoy the one (playing with the thing itself) and some enjoy the other (playing with the various secondhand abstractions that refer to the thing).
They are different tastes. They deliver different results.
agumonkey
3 days ago
Similarly, the Fortran and Algol teams implemented a lot of optimization tricks on the first try, things that are now considered advanced, without "knowing it".
obelos
4 days ago
I think you're forgetting to put “all the times ignorance didn't produce a breakthrough” in the denominator.
hn_throwaway_99
4 days ago
Hah, such a great way to put it.
This is relevant to HN because I'm probably paraphrasing this incorrectly but pg has said the following about why it's hard to launch a startup: the vast majority of ideas that sound stupid are, in fact, stupid. The ones that sound like great ideas have most likely already been done. Thus, the startups that have huge growth potential tend to be the ones that sound stupid given the conventional wisdom (so unlikely to have been tried yet) but are, contrary to the norm, actually great ideas in disguise. These ideas are basically by definition very rare.
wnc3141
4 days ago
I advise anyone with a startup idea to just make a prototype you would like, then see if its reinventing the wheel. Repeat where necessary
brookst
4 days ago
Better yet, let customers decide if it’s reinventing the wheel. Many times, founders prematurely decide it’s duplicative, or delude themselves into thinking it’s not.
We all guess at the value customers receive, but only they can say for sure.
wnc3141
4 days ago
Totally agree, manage risk where necessary such that you don't rely on a project getting traction. If it does, great.
arcxi
4 days ago
and compare it to all the times a breakthrough is made by iteration
fmbb
4 days ago
Also, making a good game is not a scientific breakthrough. There were great deck builders before.
spacemadness
4 days ago
Also there isn’t anything that hasn’t been done before either in Balatro, it’s just a nice combination of deck builder tricks applied to poker. And the presentation is also well done which has nothing to do with the mechanics.
brookst
4 days ago
I respectfully but totally disagree.
Balatro took the basic game mechanics of a very familiar game and said “what if they were dynamic”. The world’s a big place and I’m willing to believe it’s been done before… but I can’t think of one.
It’s the combination of familiar scoring mechanics with fun meta game modifiers that made Balatro so successful. What happens to poker if two of a kind is suddenly the most important hand? Or if not playing face cards leads to incrementally better scores every hand?
Again, I can’t claim it’s never been done, but saying it’s just another deck builder is missing the point.
dhc02
4 days ago
This is elegantly stated.
ballenf
4 days ago
That misses the point that there may be breakthroughs that are much harder or near impossible to make if you're familiar with the state-of-the-art.
sangnoir
4 days ago
What's the proportion of breakthroughs that are easier with familiarity? How many accidental discoverers do we need to match the output of a Terence Tao or an Erdős?
dmurray
4 days ago
That seems like the wrong question to ask. After all, there's no shortage of people who are unfamiliar with Yao's conjecture.
Or alternatively, even the most well-read person is not au fait with the state of the art in almost all subjects, so they have a chance to make an accidental discovery there.
But this kid wasn't an outsider: he was already studying computer science at perhaps the most rigorous and prestigious institution in the world, and it's not a coincidence that he made this discovery rather than an equally talented twenty-year-old who works in a diamond mine in Botswana. There's no risk that we'll reduce the number of accidental discoveries by educating people too much.
nimih
4 days ago
> That misses the point that there may be breakthroughs that are much harder or near impossible to make if you're familiar with the state-of-the-art.
If that's the point, you should maybe try and find even a single example that supports it. As the article points out, Krapivin may not have been familiar with Yao's conjecture in particular, but he was familiar with contemporary research in his field and actively following it to develop his own ideas (to say nothing of his collaborators). Balatro's developer may not have been aware of a particular niche genre of indie game[1], but they were clearly familiar with both modern trends/tastes in visual and sound design, and in the cutting edge of how contemporary video games are designed to be extremely addictive and stimulating. To me, these examples both seem more like the fairly typical sorts of blind spots that experts and skilled practitioners tend to have in areas outside of their immediate focus or specialization.
Clearly, both examples rely to some extent on a fresh perspective allowing a novel approach to the given problem, but such stories are pretty common in the history of both math research and game development, and neither (IMO) really warrants a claim as patently ridiculous as "the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before."
[1] And as good of a video game as Balatro is, there are plenty of "roguelite deckbuilder" games with roughly the same mechanical basis; what makes it so compelling is the quality of its presentation.
causal
4 days ago
Yeah. Classic base rate neglect.
bweller
4 days ago
See the Einstellung effect.
dinkumthinkum
4 days ago
I get what you are saying but what if the amount of breakthroughs by people that did know about what came before was orders of magnitude higher than this number, would that change your mind?
tehjoker
4 days ago
You hear about this stuff because it's notable. Almost 100% of the time, if you disregard what other people have done, you are going to waste a lot of time.
implmntatio
3 days ago
Yup. And we programmed all that into LeGenAIs and LeGPTs and so on ... a splendidly perfect annihilation of all things evolutionary.
UltraSane
4 days ago
For every case like this you have thousands of people who waste a huge amount of time and mental effort recreating something that has already been invented.
tgauda
3 days ago
Every notable discovery has disproved something that everyone else thought was true. Naivety can be a superpower when inventing.
skgough
4 days ago
Maybe the best way to have the best of both worlds is to ensure well-established areas of research are open to “outsider art” submissions on the topic?
rnewme
3 days ago
I think it's more about working on a problem you spotted instead of endlessly reading, hoarding info, literature etc.
redcobra762
4 days ago
https://thedecisionlab.com/biases/availability-heuristic
You've remembered two examples of this (arguably) happening, so you attempt to draw a conclusion based on the ease with which you came up with those examples. But in reality, this method of inference is prone to error, as it doesn't consider the denominator, or how many attempts were made to achieve the results you're able to remember.
brookst
4 days ago
You’re hitting on innovation versus invention. True invention is getting more and more rare. Innovation is alive and well.
schneems
4 days ago
Sounds like a bit of survivorship bias. Every success from people following well known principles does not translate into a blog post or research paper. You also don’t hear about all of the people who failed because they tried something novel and it didn’t work.
I would suggest the positive takeaway is: trust but verify. If you've got a novel solution idea and don't understand why others aren't doing it that way, do both and compare; you're guaranteed to learn something one way or another. Also: if you reinvent the wheel or do something suboptimal, that's okay too. Sometimes the solutions don't make sense until you see what doesn't work. Likewise: be open to learning from others and to exploring solutions outside of your predefined notion of how things should work.
postalrat
4 days ago
A while back I was asked to write some software to print labels small enough to fit through a button hole. I convinced myself it wasn't possible because of the spacing between the cutter and the print head. Then my boss showed me a label printed by a competitor's system, and I had it figured out within an hour or so. Although this was a minor thing, it convinced me how powerful knowing that something is possible (or not) really is.
schneems
4 days ago
That’s a wonderful and visceral story.
To me science is about defining the bounds of the possible. To do that you also need to push the bounds and occasionally update everything you thought you knew. In the case of CS where we find new, lower bounds, I find that especially exciting.
I meant to append this to my original reply, but I'll say it here: I enjoyed Knowledge-Based AI from Georgia Tech. The lectures talked about different types of knowledge acquisition as it applies to humans, and then how we've mapped that to machines. That, along with HCI, were my favorite OMSCS courses.
In your case, seeing a new, lower bound helped push you to look in different directions and re-examine your assumptions. In KBAI we weren't allowed to share code (of course) but were given very wide leniency in what we could share. Posting my performance numbers and reading others' gave me a sense of what's possible and had a similar effect to your story.
SecretDreams
4 days ago
This is how I operate with most tech related topics. Just assume it's possible and proceed accordingly.
danpalmer
4 days ago
> You also don’t hear about all of the people who failed because they tried something novel and it didn’t work.
Or all the people who, through ignorance or hubris, thought they could do better than the state of the art. Or all the people who independently invent things that already exist. These may be what you're referring to, but I thought it worth restating them, as they are far more common cases than people inventing truly novel approaches.
sesteel
4 days ago
I cannot tell you the number of times I thought I invented something new and novel only to later find out it already existed. So, while it is true that you can sometimes find paths untraveled, many things related to first principles seem already heavily explored in CS.
schneems
3 days ago
That sounds like it could be interpreted two ways: on one hand, you're not the first to discover something; on the other hand, your invention is validated as being worthwhile.
In times like those (depending on how much work I put into it) I might retrace the steps I took when I was searching for a solution before writing my own and if I can find something like a stack overflow post then link to the ultimate solution. Or blog about it using the search terms I originally tried etc.
A core part of science is reproducing others work.
Also, one thing I took away from HCI research on brainstorming: slight variations and deviations can be novel and produce a better outcome. The finding is that people misunderstanding someone else's idea isn't a problem, but rather generates a brand new alternative. If you feel you've redone some work, look a little closer; perhaps something small about it is novel or new.
lysecret
3 days ago
I feel like there is already a movement, "thinking from first principles" along this direction.
hassleblad23
4 days ago
You have to be naive to be an innovator.
resters
4 days ago
All scientific progress consists of leveraging some past work and overturning other past work. This is no different.
moi2388
4 days ago
No, but starting from first principles does work. And being unaware of previous work helps you do this.
hombre_fatal
3 days ago
The creator of Halo’s soundtrack didn’t listen to music in fear of it influencing him.
throwaway519
4 days ago
In the spirit of your observation, I encourage you to make your observation again.
Xcelerate
4 days ago
Now we just need a smart person who is somehow unaware of the halting problem.
wnolens
4 days ago
I had to delete Balatro last week to break an addiction. It's so so good
godelski
4 days ago
I actually have a hot take that is related to this (been showing up in a few of my recent comments). It is about why there's little innovation in academia, but I think it generalizes.
Major breakthroughs are those that make paradigm shifts. So, by definition, that means that something needs to be done that others are not doing. If not, things would have been solved and the status quo method would work.
Most major breakthroughs are not the result of continued progress in one direction, but rather they are made by dark horses. Often by nobodies. You literally have to say "fuck you all, I'm doing this anyways." Really this is not so different from the founder mentality we encourage vocally yet discourage monetarily[0]. (I'm going to speak from the side of ML, because that's my research domain, but understand that this is not as bad in other fields, though I believe the phenomenon still exists, just not to the same degree.) Yet, it is really hard to publish anything novel. While reviewers care a lot about novelty, they actually care about something more: metrics. Not metrics in the sense that you provided strong evidence for a hypothesis, but metrics in the sense that you improved the state of the field.
We have 2 big reasons this environment will slow innovation and make breakthroughs rare.
1. It is very hard to do better than the current contenders on your first go. You're competing not against one player, but against the accumulated work of thousands over years or decades. You can find a flaw in that paradigm and address the specific flaw, but it is a lot of work to follow this through and mature it. Technological advancement comes through a sum of S-curves, and the new thing always starts out worse. For example, think of solar panels. PVs were staggeringly expensive in the beginning and for little benefit. But now you can beat grid pricing. New non-PV-based solar is starting to make its way in; it started out way worse than PV but addresses PV's theoretical limitations on power efficiency.
2. One needs to publish often. Truly novel work takes a lot of time. There's lots of pitfalls and nuances that need to be addressed. It involves A LOT of failure and from the outside (and even the inside) it is near impossible to quantify progress. It looks no different than wasting time, other than seeing that the person is doing "something." So what do people do? They pursue the things that are very likely to lead to results. By nature, these are low hanging fruit. (Well... there's also fraud... but that's a different discussion) Even if you are highly confident a research direction will be fruitful, it will often take too much time or be too costly to actually pursue (and not innovative/meaningful enough to "prototype"). So we all go in mostly the same direction.
(3. Tie in grants and funding. Your proposals need to be "promising" so you can't suggest something kinda out there. You're competing against a lot of others who are much more likely to make progress, even if the impact would be far lower)
So ironically, our fear of risk-taking is making us worse at advancing. We try so hard to pick what are the right directions to go in, yet the truth is that no one has any idea, and history backs this up. I'm not saying to just make it all chaotic. I think of it more like this: when exploring, you have a main party that travels in a set direction. Their strength together makes good progress, but the downside is there's less exploration. I am not saying that anyone should be able to command the ship on a whim, but rather that we need to let people be able to leave the ship if they want and to pursue their hunches or ideas. Someone thinks they saw an island off in the distance? Let them go. Even if you disagree, I do not think their efforts are fruitless; even if wrong, they help map out the territory faster. But if we put all our eggs in one basket, we'll miss a lot of great opportunities. Right now, we let people off the main ship when there's an island that looks promising, and there are those that steal a lifeboat in the middle of the night. But we're all explorers, and it seems like a bad idea to dissuade people who have that drive and passion in them. I know a lot of people in academia (including myself) who feel shackled by the systems, when really all they want to do is research. Not every one of these people is going to change things; in fact, likely most won't. But truth is, that's probably true if they stay on the ship too. Not to mention that it is incredibly common for these people to just leave academia altogether anyway.
Research really is just a structured version of "fuck around and find out". So I think we should stop asking "why" we should pursue certain directions. "Because" is just as good of an excuse as any. In my ideal world, we'd publish anything as long as there is technical correctness and no plagiarism, because we usually don't know what is impactful. There are known knowns, known unknowns, and unknown unknowns. We really are trying to pretend that the unknown unknowns either don't exist, are not important, or are very small. But we can't know; they're unknown unknowns, so why pretend?
[0] An example might be all the LLM based companies trying to make AGI. You want to compete? You're not going to win by making a new LLM. But one can significantly increase their odds by taking a riskier move, and fund things that are not well established. Other types of architectures. And hey, we know the LLM isn't the only way because we humans aren't LLMs. And we humans also use a lot less energy and require far less data, so even if you are fully convinced that LLMs will get us all the way, we know there are other ways to solve this problem.
emrah
4 days ago
Yes and we should have at least a few competing AI architectures too
77pt77
4 days ago
This is nothing but extreme selection/survivor bias.
Shorel
3 days ago
No, no, no. That's the wrong thing to take away from it.
Something I got from Richard Feynman's descriptions of his method of study was to, first and foremost, read the statement of the problems and work diligently at solving them by himself, for a reasonable amount of time.
Then, and only then, go and read the other solutions. The solutions can be the same or they can be different, and by doing all this preliminary work the researcher can truly understand their nuances, something they can't grasp if the solutions are shown right after reading the problem.
So, the best way to approach a problem is:
- Try to solve it by yourself. Several times if necessary, give it an honest effort.
- Then, solved or not, go and read other people's solutions.
ComplexSystems
4 days ago
This is also apparently true for playing Go.
robblbobbl
3 days ago
This. I'm sorry for that guy but that is great news!