We can’t circumvent the work needed to train our minds

271 points, posted 6 hours ago
by maksimur

85 Comments

trjordan

4 hours ago

I was talking with somebody about their migration recently [0], and we got to speculating about AI and how it might have helped. There were basically 2 paths:

- Use the AI and ask for answers. It'll generate something! It'll also be pleasant, because it'll replace the thinking you were planning on doing.

- Use the AI to automate away the dumb stuff, like writing a bespoke test suite or new infra to run those tests. It'll almost certainly succeed, and be faster than you. And you'll move onto the next hard problem quickly.

It's funny, because these two things represent wildly different vibes. The first one, work is so much easier. AI is doing the job. In the second one, work is harder. You've compressed all your thinking work, back-to-back, and you're just doing hard thing after hard thing, because all the easy work happens in the background via LLM.

If you're in a position where there's any amount of competition (like at work, typically), it's hard to imagine the people operating in the 2nd mode not wildly outpacing the people operating in the 1st, both in quality and volume of output.

But also, it's exhausting. Thinking always is, I guess.

[0] Rijnard, about https://sourcegraph.com/blog/how-not-to-break-a-search-engin...

klodolph

4 hours ago

I’ve tried the second path at work and it’s grueling.

“Almost certainly succeed” requires that you mostly plan out the implementation for it, and then monitor the LLM to ensure that it doesn’t get off track and do something awful. It’s hard to get much other work done in the meantime.

I feel like I’m unlocking, like, 10% or 20% productivity gains. Maybe.

rorylaitila

4 hours ago

Yeah, I think this is something I've tried to articulate to people, and you've summed it up well with "You've compressed all your thinking work, back-to-back, and you're just doing hard thing after hard thing". Most of the bottleneck with any system design is the hard things, the unknown things, the unintended-consequences things. The AIs don't help you much with that.

There is a certain amount of regular work that I don't want to automate away, even though maybe I could. That regular work keeps me in the domain. It leads to epiphanies about the hard problems. It adds time and something to do in between the hard problems.

CuriouslyC

4 hours ago

I stay at the architecture, code organization and algorithm level with AI. I plan things at that level, then have the agent do the full implementation. I have tests (which have been audited both manually and by agents), and I have multiple agents audit the implementation code. The pipeline is 100% automated and produces very good results, and you can still get some engineering vibes from the fact that you're orchestrating a stochastic workflow DAG!

danenania

4 hours ago

I'd actually say that you end up needing to think more in the first example.

Because as soon as you realize that the output doesn't do exactly what you need, or has a bug, or needs to be extended (and has gotten beyond the complexity that AI can successfully update), you now need to read and deeply understand a bunch of code that you didn't write before you can move forward.

I think it can actually be fine to do this, just to see what gets generated as part of the brainstorming process, but you need to be willing to immediately delete all the code. If you find yourself reading through thousands of lines of AI-generated code, trying to understand what it's doing, it's likely that you're wasting a lot of time.

The final prompt/spec should be so clear and detailed that 100% of the generated code is as immediately comprehensible as if you'd written it yourself. If that's not the case, delete everything and return to planning mode.

mmargenot

4 hours ago

> You have to remember EVERYTHING. Only then you can perform the cognitive tasks necessary to perform meaningful knowledge work.

You don't have to remember everything. You have to remember enough entry points and the shape of what follows, trained through experience and going through the process of thinking and writing, to reason your way through meaningful knowledge work.

rafaquintanilha

4 hours ago

"It is requisite that a man should arrange the things he wishes to remember in a certain order, so that from one he may come to another: for order is a kind of chain for memory" – Thomas Aquinas, Summa Theologiae. Not ironically I found the passage in my Zettelkasten.

keremk

4 hours ago

Actually this is how LLMs (with reasoning) work as well. There is the pre-training, which is analogous to the human brain getting trained on as much information as possible. There is a yet-unknown threshold of what counts as enough pre-training, after which the models can start reasoning, use tools, and use the feedback from them to do something that resembles human thinking and reasoning. So if we don't pre-train our brains with enough information, we will have a weak base model. Again, this is of course more of an analogy, as we don't yet know how our brains really work, but more and more it is looking remarkably aligned with this hypothesis.

stronglikedan

4 hours ago

I always tell people that I don't remember all the answers, only where to find them.

mvieira38

2 hours ago

Just to be clear, are you saying that to know something:

1- You may remember only the initial state and the brain does the rest, like with mnemonics

2- You may remember only the initial steps towards a solution, like knowing the assumptions and one or two insights to a mathematical proof?

I'd say a Zettelkasten user would agree with you if you mean 1

skybrian

2 hours ago

This is task-specific. Consider having a conversation in a foreign language. You don't have time to use a dictionary, so you must have learned words to be able to use them. Similarly for other live performances like playing music.

When you're writing, you can often take your time. Too little knowledge, though, and it will require a lot of homework.

mallowdram

4 hours ago

Of course you have to remember everything. Your brain stores everything, and you then get to add things by forgetting, but that does not mean you erase things. The brain is oscillatory, it works somehow by using ripples that encode everything within differences, just in case you have to remember that obscure action-syntax...a knot, a grip, a pivot that might let you escape death. Get to know the brain, folks.

HPsquared

4 hours ago

A bit like the memory palace. One memory leads to another. Not random-access.

tikhonj

4 hours ago

> You have to remember EVERYTHING. Only then you can perform the cognitive tasks necessary to perform meaningful knowledge work.

If humans did not have any facilities for abstraction, sure. But then "knowledge work" would be impossible.

You need to remember some set of concrete facts for knowledge work, sure, but it's just one—necessary but small—component. More important than specific factual knowledge, you need two things: strong conceptual models for whatever you're doing and tacit knowledge.

You need to know some facts to build up strong conceptual models but you don't need to remember them all at once and, once you've built up that strong conceptual understanding, you'll need specifics even less.

Tacit knowledge—which, in knowledge work, manifests as intuition and taste—can only be built up through experience and feedback. Again, you need some specific knowledge to get started but, once you have some real experience, factual knowledge stops being a bottleneck.

Once you've built up a strong foundation, the way you learn and retain facts changes too. Memorization might be a powerful tool to get you started but, once you've made some real progress, it becomes unnecessary if not counterproductive. You can pick bits of info up as you go along and slot them into your existing mental frameworks.

My theory is that the folks who hate memorization are the ones who were able to force their way through the beginner stages of whatever they were doing without dull rote memorization, and then, once there, really do not need it any more. Which would at least partly explain why there are such vehement disagreements about whether memorization is crucial or not.

tkiolp4

6 minutes ago

I use AI at work, and certainly I’m doing less deep thinking over time. But at home, on side projects, I still do it the traditional way. This is because I enjoy the process of thinking rather than shipping (I actually never shipped any side project).

I guess I’m lucky, deep thinking (on interesting things at home) is a hobby so I feel less encouraged to automate that away. I never cared about my jobs, so as long as I bring home money, it’s fine.

keiferski

4 hours ago

I am sympathetic to memory-focused tools like Anki and Zettelkasten (haven't used the latter myself, though) but I think this post is a bit oversimplified.

I think there are at least two models of work that require knowledge:

1. Work when you need to be able to refer to everything instantly. I don't know if this is actually necessary for most scenarios other than live debates, or some form of hyper-productivity in which you need to have extremely high-quality results near-instantaneously.

(HN comments are, amusingly, also an example – comments that are in-depth but come days later aren't relevant. So if you want to make a comment that references a wide variety of knowledge, you'll probably need to already know it, in toto.)

2. Work when you need to "know a small piece of what you don't remember as a whole", or in other terms, know the map, but not necessarily the entire territory. This is essentially most knowledge work: research, writing, and other tasks that require you to create output, but that output doesn't need to be right now, like in a debate.

For example, you can know that person X said something important about topic Y without needing to know precisely what it was – just look it up later. However, you do still need to know what you're looking for, which is a kind of reference knowledge.

--

What is actually new lately, in my experience, is that AI tools are a huge help for situations where you don't have either Type 1 or Type 2 knowledge of something, and only have a kind of vague sense of the thing you're looking for.

Google and traditional search engines are functionally useless for this, but you can ask ChatGPT something like, "I am looking for people who said something like XYZ," and get a pretty good answer. Previously this required someone to have asked the exact same question on Reddit or a forum.

throwway120385

4 hours ago

The AI can also give you pretty good examples of "kind" that you can then evaluate. I've had it find companies that "do X" and then used those companies to understand enough about what I am or am not looking for to research it myself using a search engine. The last time I did this I didn't end up surfacing any of what the AI provided. It's more like talking to the guy in the next cubicle, hearing some suggestions from them, and using those suggestions to form my own opinion about what's important and digging in on that. You do still have to do the work of forming an opinion. The ML model is just much better at recognizing relationships between different words and between features of a category of statements, and in my case they were statements that companies in a particular field tended to make on their websites.

rzzzt

4 hours ago

Pilots have checklists that they can follow without memorizing, but also memory items that have to be performed almost instinctively when they encounter the precondition events.

skybrian

2 hours ago

Live performance (like conversation or playing music) often relies on memory to do it well.

That might be a good criterion for how much to memorize: do you want to be able to do it live?

Etheryte

4 hours ago

> If you can’t produce a comprehensive answer with confidence and on the whim the second you read the question, you don’t have the sufficient background knowledge.

While the article makes some reasonable points, this is too far gone. You don't need to know how to "weigh each minute spend on flexibility against the minutes spent on aerobic capacity and strength" to put together a reasonable workout plan. Sure, your workouts might not be as minmaxed as they possibly could be, but that really doesn't matter. So long as the plan is not downright bad, the main thing is that you keep at it regularly. The same idea extends to nearly every other domain, you don't need to be a deep expert to get reasonably good results.

cyanydeez

4 hours ago

The US is, however, learning exactly what happens when rationality is not part of the equation. This is all a dance around what is a "fact" and how to string facts into a reasoning model that lets you predict or confirm other potential facts, etc...

It's simply different people we're talking about. Certain personalities are always going to gravitate to the "search for reason" model in life rather than "reason about facts".

Ethee

2 hours ago

I've been having conversations about this topic with friends recently and I keep coming back to this idea that most engineering work, which I will define as work that begins with a question and without a clear solution, requires a lot of foundational understanding of the previous layer of abstraction. If you imagine knowledge as a pyramid, you can work at the top of the pyramid as long as you understand the foundation that makes up your level, however to jump a level above or below that would require building that foundation yet again. Computer science fits well into this model where you have people at many layers of abstractions who all work very well within their layer but might not understand as much about the other layers. But regardless of where you are in the pyramid, understanding ALL the layers underneath will lead to better intuition about the problems of your layer. To farm out the understanding for these things will obviously end up having negative impact not just on overall critical thinking, but on the way we intuit how the world works.

vjvjvjvjghv

5 hours ago

It’s the same with math. A lot of people say they don’t need to be able to do basic arithmetic because they can use a calculator. But I think that you can process the world much better and faster if at a minimum you have some intuition about numbers and arithmetic.

It’s the same with a lot of other things. AI and search engines help a lot but you are at an advantage if at least you have some ability to gauge what should be possible and how to do it.

hennell

4 hours ago

I used to find it weird how many people would write an Excel formula over data they couldn't intuitively check. Even at a basic level – 'what percentage increase is a8 from a7' – they enter a formula and then don't know if it's correct. I always wrote formulas against numbers I could reason with. If a8 is 120 and a7 is 100, you can immediately tell if you've gone wrong. Then you swap in 1,387 and 1,252 and know it's going to be accurate.
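That sanity-check habit translates directly to code (a minimal sketch; the round numbers and the 1,387/1,252 pair are the hypothetical values from the comment):

```python
def pct_increase(old, new):
    # Percentage increase going from old to new.
    return (new - old) / old * 100

# Check with round numbers first: 100 -> 120 is obviously +20%.
assert pct_increase(100, 120) == 20.0

# Only then trust it on numbers you can't eyeball.
print(round(pct_increase(1252, 1387), 2))  # about +10.78%
```

The assert is the "120 from 100" step; once it passes, the formula has earned some trust on the real data.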

People do the same with AI, ask it about something they know little about then assume it is correct, rather than checking their ideas with known values or concepts they might be able to error check.

RicoElectrico

4 hours ago

With or without a calculator, some people have an aversion to calculation, and that's the problem in my opinion. It is remarkable how much bullshit you can refute with back-of-the-envelope calculations.

This, and knowing by heart all the simple formulas/rules for area/volume/density and energy measurements.

The classic example being pizza diameter.
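The pizza case is worth spelling out: menus price by diameter, but the food scales with the square of the diameter. A quick sketch (sizes are illustrative):

```python
import math

def pizza_area(diameter_cm):
    # Area of a circular pizza from its diameter.
    return math.pi * (diameter_cm / 2) ** 2

# One 40 cm pizza vs. two 30 cm pizzas:
print(round(pizza_area(40)))       # 1257 cm^2
print(round(2 * pizza_area(30)))   # 1414 cm^2 -- the two smaller pizzas hold more food
```

This is exactly the kind of envelope math that refutes "two smalls is a worse deal than one large" without looking anything up.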

ergonaught

2 hours ago

The actual central point is that the brain requires conditioning via experience. That shouldn't be controversial, and I can't decide if the general replies here are an extended and ironic elaboration of his point or not.

If you never memorize anything, but are highly adept at searching for that information, your brain has only learned how to search for things. Any work it needs to do in the absence of searching will be compromised due to the lack of conditioning/experience. Maybe that works for you, or maybe that works in the world that's being built currently, but it doesn't change the basic premise at all.

crims0n

4 hours ago

I agree with the point being made, even if it is taken to an extreme. I would say you don't need to remember everything, but you do need to have been exposed to it. Not knowing what you don't know is a huge handicap in knowledge work.

“Try to learn something about everything and everything about something.”

tolerance

4 hours ago

The author makes a lot of bold claims, and I don’t take his main one seriously re: remembering everything. I think he’s being intentionally hyperbolic. But the gist is sound to me, if you can put one together. He needs an editor.

> To find what you need online, you require a solid general education and, above all, prior knowledge in the area related to your search.
>
> [...]
>
> If you can’t produce a comprehensive answer with confidence and on the whim [...] you don’t have the sufficient background knowledge.
>
> [...]
>
> This drives us to one of the most important conclusions of the entire field of note-taking, knowledge work, critical thinking and alike: You, not AI, not your PKM or whatever need to build the knowledge because only then it is in your brain and you can go the next step.
>
> [...]
>
> The advertised benefits of all these tools come with a specific hidden cost: Your ability to think. [This passage actually appears ahead of the previous one–ed.]

This is best read alongside: https://news.ycombinator.com/item?id=45154088

AndyNemmity

3 hours ago

Before the internet we asked people around us in our sphere. If we wanted to know the answer to a question, we asked, they made up an answer, and we believed it and moved on.

Then the internet came, and we asked the internet. The internet wasn't correct, but it was a far higher % correct than asking a random person who was near you.

Now AI comes. It isn't correct, but it's far higher % correct than asking a random person near you, and often asking the internet which is a random blog page which is another random person who may or may not have done any research to come up with an answer.

The idea that any of this needs to be 100% correct is weird to me. I lived a long period in my life where everyone accepted what a random person near them said, and we all believed it.

Gormo

26 minutes ago

How is an LLM making stochastic inferences based on aggregations of random blog pages more likely to be correct than looking things up on decidedly non-random blog pages written by people with relevant domain knowledge?

buellerbueller

3 hours ago

If you are asking random people, then your approach is incorrect. You should be asking the domain experts. Not gonna ask my wife about video games. Not gonna ask my dad about computer programming.

There, I've shaved a ton of the spread off of your argument. Possibly enough to moot the value of the AI, depending on the domain.

bwfan123

3 hours ago

Descartes' brief rules for the direction of the mind [1] is pertinent here, as it articulates beautifully what it means to do "thinking" and how that relates to "memory".

Concepts have to be "internalized" into intuition for much of our thinking, and if they are externalized, we become a meme-copy machine as opposed to a thinking machine.

[1] https://en.wikipedia.org/wiki/Rules_for_the_Direction_of_the...

BinaryIgor

3 hours ago

A bit too extreme, but there definitely is something to it; trivially, you need to challenge your mind all the time and regularly work at the edge of your current abilities to progress further. I like this part a lot:

"In knowledge work the bottleneck is not the external availability of information. It is the internal bandwidth of processing power which is determined by your innate abilities and the training status of your mind."

low_tech_punk

3 hours ago

This piece reminds me of another article musing on the necessity of manual memory: https://numinous.productions/ttft/#how-important-is-memory.

That article articulated the reason slightly differently, arguing you need to hold multiple concepts in your head at the same time in order to develop original ideas.

Still, I'm not sure you have to remember everything, but I agree you have to remember the foundational things at the right abstraction layer, upon which you are trying to synthesize something new.

flerchin

4 hours ago

Before the internet, I offloaded knowledge that I could look up into books. Don't worry, the kids will be ok.

defanor

4 hours ago

Indeed, I thought that "decades old" sounds like an underestimate there: Socrates is said to have criticized writing for letting people avoid training their memory, so that would be millennia by now. Though of course it is possible that the article's author would not agree with that, and would have a beef only with more easily searchable content, like the people who criticized tables of contents. I do not mean that they were all wrong, though: probably the degree to which knowledge is outsourced matters, maybe some transitions were more worthwhile than others, and possibly something was indeed lost with those.

mallowdram

4 hours ago

Sorry, kids lack the foundational ability to remember, reason, and imagine because their phones cauterize their basic intelligence foundations in sharp-wave ripples: navigation, adventurous short-cuts, vicarious trial and error; these are the basis for memory consolidation. And we build this developmentally until we are 16 or so. Once we offload this development to phones, we are essentially unintelligent buffoons, lacking the basis for knowledge. The kids are DOA.

Animats

3 hours ago

"To find what you need online, you require a solid general education and, above all, prior knowledge in the area related to your search." And get off my lawn. That's more a criticism of search engines than AI, anyway.

Insisting that people know exercise physiology to work out is a bit much. That's what trainers and coaches are for. Now drop and give me twenty.

The real problem with LLMs remains that they can't really do the job of thinking yet, because they're wrong too often. They can both hallucinate and get lost. What the AI situation will be in five years, we don't know.

js8

4 hours ago

While I agree with the gist of the article, I think the AI example is poor, because we know AI can make stuff up and it's a problem. So this failure of AI to be reasonably correct weakens the argument. In the old days, you would rely on an expert (through say a book, like encyclopedia) to tell you this. The issue then becomes who you trust.

I would say your own knowledge is like a memory cache. If you know stuff, then the relevant work becomes order of magnitudes faster. But you can always do some research and get other stuff in the cache.

(The human mind is actually more than a cache, because you also create mental models, which typically stay with you. So it's easier to pick up details after they get evicted, because the mental model is kept. I think the goal of memorising stuff in school should be exactly that: forget all the details, but in the learning process build a good mental model that you have for life.)
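The cache analogy maps neatly onto memoization, if you'll forgive a toy sketch (the sleep is a stand-in for going off and doing the research; the topic string is illustrative):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def look_up(topic):
    time.sleep(0.1)  # stands in for slow external research
    return f"notes on {topic}"

look_up("spaced repetition")  # slow: first exposure does the work
look_up("spaced repetition")  # fast: answered from "memory"
print(look_up.cache_info().hits)  # 1
```

What the analogy drops, as the parenthetical above notes, is the mental model: `lru_cache` forgets everything on eviction, whereas a person keeps the model even after the details are gone.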

firefoxd

4 hours ago

One thing that I like is that things are much easier in person. When someone shows me an AI overview they just googled on their phone, I can say "I don't think that's true." Then we can discuss. The more we talk about the subject, the more we develop our knowledge. It's not black and white.

But online? @grok is this true?

dghlsakjg

4 hours ago

The irony here is using fitness as an example of knowable things.

Fitness guidelines are very much not settled science, and are highly variable per individual beyond the very basics (to lose weight, eat fewer calories than you burn; to build muscle, lift heavy things).

For every study saying that 8-12 reps x3 is the optimal muscle growth strategy there is another saying that 20x2 is better, and a third saying that 5x5 is better. If you want to know how much protein you should eat to gain muscle mass, good luck; most studies have settled on 1.6g/kg per day as the maximum amount that will have an effect, but you can find many reputable fitness sources suggesting double that.

You can memorize "facts", but they will change as the state of the art changes... or is Pluto still a planet?

The ability to parse information and sources, as well as knowing the limits of your knowledge is far more important than memorizing things.

procaryote

4 hours ago

They're very knowable, it's just that there's a lot more money in making things up

qwertytyyuu

4 hours ago

You definitely do not need to remember everything; it's not worth the effort to try. Famously, in programming even the best look up things they have looked up before.

Memory is helpful but brains aren’t hard drives, they aren’t designed to store information perfectly.

birdman3131

4 hours ago

AI is not a replacement for a greybeard.

That said AI, Search and the like can be quite useful and helpful.

wduquette

3 hours ago

I remember the dotCom bubble. After the bubble burst, people got on with putting storefronts and other kinds of business on-line in a more sober fashion.

I predict the same thing will happen with the current AI tools: the bubble will burst, a bunch of folks will lose their shirts, and the world in general will come to a more realistic and sober understanding of what they are good for. We will figure out how to provide the useful parts without massive data centers and it will become natural. (I remember when things a graphics card can do trivially required a supercomputer with supporting staff.)

wolttam

4 hours ago

I remember things just fine, just not in sufficient detail to recall all aspects at the drop of a hat. What I hold on to are the core concepts that allow me to hit the ground running when I have to interact with the subject-matter again.

NitpickLawyer

4 hours ago

> What I hold on to are the core concepts that allow me to hit the ground running when I have to interact with the subject-matter again.

Exactly. And those also come with doing the thing, or watching the thing being done, or reading about the thing, or thinking about the thing. You often don't have to actively try to kastle / cram / grok / etc the information to remember it. Just being exposed to it will make you remember some of it. Knowing where to get more accurate info is often a greater skill / benefit than knowing the details yourself. Especially in fields where knowing all the details yourself is almost impossible.

Weak article.

darepublic

4 hours ago

At my first software dev internship, my manager asked me to code in languages I was not trained in. I told him I would need some time to study up on them. He scoffed and said to just look up what I needed on Google. Initially I resisted; it felt too shallow, akin to copying answers I didn't really comprehend. However, it didn't take long to pick up the habit. Learning is like going to the gym now: it's self-enforced discipline.

bluGill

3 hours ago

That is because learning a computer language is not hard. Learning to program in the first place is hard, but once you know that the language itself is not hard to learn. However it is easy to verify someone knows the syntax of some specific language and hard to check if they actually know how to program.

procaryote

4 hours ago

> The reduced engagement with the material reduces the emotional weight of the whole line of action. You mind is an engine that is fuelled by emotion. Without any emotion, you don’t think. Rather, you try to imitate thinking efficiently.

This doesn't sound true and they don't seem to offer any support for the claim.

There's a whole host of emotion-driven cognitive biases, where an effective counter is to reduce the emotional weight of the whole line of action.

Of course, to their credit, it's only by remembering those biases that I could see their error

OmarShehata

4 hours ago

The first thing that happened in your mind when you read that sentence is (1) a bad feeling. That then triggered (2) a rational, conscious thought that interpreted that bad feeling: "this feels bad because it's not true, here are the reasons why it is not true."

There is ALWAYS an "emotional/intuitive" response that precedes the rational, conscious thought. There's a ton of research on this (see system 1 vs system 2 thinking etc).

There is no way to stop the emotional "thought" from happening before the "rational thought". What you can do is build a loop that self reflects to understand why that emotion was triggered (sometimes, instead of "this feels bad because it's wrong", it's "this feels bad because it points to an inconvenient truth" or "I am hungry and everything I am reading feels bad")

jbreckmckye

4 hours ago

Isn't your argument a support of his claim?

If emotions did not weigh on recall, surely there would be no "emotion-driven cognitive biases"

rambambram

3 hours ago

Off topic: This site has an input somewhere that takes focus on page load, but I can't see it. When I used the arrow down key some previously used suggestions jumped into view in a dropdown menu.

DiscourseFan

4 hours ago

Isn’t the irony of Phaedrus (the dialogue where Socrates speaks against writing) that it’s a written work, in a dramatic setting? Like, yes, writing and technology can make us stupid, but this article was written and transmitted to us via social media.

lordnacho

4 hours ago

It's like a cache level issue.

In the old world, you had your wet brain memory. You needed to fill it with the order of the alphabet, so that you could make use of paper reference works. You also needed some arithmetic and some English style notes, in case you wanted to express yourself. You had to remember things like there/their/they're and that kind of thing. You needed a small encyclopedia so that you could have a clue about where general knowledge could be found.

On top of this, you were expected to layer on your professional knowledge. If you were a doctor, a huge number of Latin terms. A more detailed understanding of how the body works that you got from the base installation. Something about how the profession works. You would get a sense for what was likely through experience. A BS detector, in some ways.

All because your brain ain't gonna get bigger, and paper information technology was what it was: found in a library, limited in size, hard to search, slow to update.

Nowadays, the brain cache is no different in capability. But the external memory system you are accessing is completely different. It's massive, it can update in real time, and it's very searchable.

So you need to keep different things in your brain cache to take advantage of this.

But what do you need?

The only component that really matters is the BS detector.

Not only do I not need to be able to do long division, I don't even need to know that a calculator exists for calculating my tax, that a calculator containing the Haversine formula is out there somewhere, and so on. I just assume something like epochconverter exists, and that I can plug a timestamp into it and get something readable out.
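For the curious, the borrowed Haversine calculator is short enough to write and spot-check yourself (a sketch, assuming a spherical Earth with mean radius 6371 km; the city coordinates are approximate):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    # Great-circle distance between two (lat, lon) points given in degrees.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Sanity check against a distance you roughly know: London to Paris is about 340 km.
print(round(haversine_km(51.5074, -0.1278, 48.8566, 2.3522)))
```

The BS detector is exactly that last line: feed the borrowed tool one input whose answer you already roughly know before trusting it on the ones you don't.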

I just assume that I will be able to find things that I want, the tradeoff being that I don't have to sit there and refresh my math knowledge to calculate distances on a sphere, type out a program, and run it for a single output. The other side of this coin is, of course, that I don't know whether the guy whose work I am borrowing did it correctly, whether he has some sort of interest he's not disclosed, and whether it's safe. I also have to compromise on any variation between what he built and what I wanted.

I do this with almost everything now. I can't help it, having grown up and been educated before the internet exploded, and having started work just as the explosion was happening.

So I'd say you actually don't have to remember everything, but you do have to use your judgement in everything.

ripped_britches

4 hours ago

I use chatgpt to learn a ridiculous amount of knowledge that was not possible in 2020.

If you are using it to only decrease cognitive load (instead of keeping cognitive load constant while doing MORE), then you’re using it wrong

hammock

4 hours ago

As someone who can answer all of those questions about the workout plan in depth, it’s not a bad plan. It’s actually quite good. Missing a little detail but that’s OK.

Wasn’t a good example for me.

alphazard

4 hours ago

> Rowlands et al. wrote about the so called “digital natives” that they lack the critical and analytical thinking skills to evaluate the information they find on the internet.

This doesn't match the cultural shift in the last 20 years. A generation of people grew up with chat rooms and immediately discovered the ability to misrepresent oneself on the internet. "On the internet, no one knows you're a dog", as they say. That whole demographic assumes that media is lying by default. Compare that to previous generations that trusted certain media institutions like cable news, newspapers, radio shows, etc. because the production value and scarcity of media instilled trust.

Trust in media institutions is at an all time low, and will likely never recover. That has to be attributed to the newer generations. They are more skeptical of propaganda than ever before. To them, the high production value media outlets are just a quaint legacy variety of content slop.

scottLobster

4 hours ago

Well it doesn't help that the media, even when it doesn't lie, often simply refuses to report on various issues depending on the whims of producers.

I'm an older millennial, probably one of the last generations who was formally taught that organizations like the New York Times and CNN were authoritative, bibliography-worthy sources of information due to their reputation and standards. I haven't cared much about what either outlet has produced in years. For every good investigative piece there's a mountain of obvious propaganda or refusal to cover topics they find uncomfortable with any objectivity.

The signal to noise ratio is so low, why pay attention? There's a lot of bad takes on twitter and non-mainstream media (to put it mildly) but it at least makes me aware of more things.

neonrider

2 hours ago

> That has to be attributed to the newer generations. They are more skeptical of propaganda than ever before. To them, the high production value media outlets are just a quaint legacy variety of content slop.

Right. The skeptical newer generation knows better. It's the generation that is immune to influence. They're so resistant to it that they've finally driven advertisers to realize that spamming YouTube, IG, TikTok, with ads peddling some new hype every week is pointless.

Sarcasm aside, the newer generation, in every era, is always as naive as they're said to be. You're not born with wisdom and your parents can't save you from the candle flame, no matter how hard they try. Sooner or later, you'll have to burn that finger to learn. Life is an experience game. No way around it.

dragontamer

4 hours ago

> That whole demographic assumes that media is lying by default.

Yes. And then they turn around and trust that the Boston Marathon Bomber was some random kid because Reddit said so.

The new generation of netizens distrusts classic media and then suddenly trusts Reddit and Google searches and random blogs.

Bad Twitter arguments citing YouTube videos talking about a Redditor's problem with Microsoft updates and SSDs broke through a week or two ago, and nearly everyone involved in the discussion is utterly wrong.

blackbear_

4 hours ago

> They are more skeptical of propaganda than ever before.

Of boomer propaganda. But don't worry, as voters evolve, so does propaganda.

RicoElectrico

4 hours ago

> If anything the newer generation is more skeptical of propaganda than ever before.

Laughs in Polish GenZ voting for Konfederacja (alt-right)

leslielurker

4 hours ago

We can remember it for you wholesale specifically

johongo

3 hours ago

It was a typical attitude among my physicist and mathematician friends that memorizing was for suckers, even though it was often required to reproduce long proofs or derivations. These were people to whom mathematics came naturally; their understanding, memory, curiosity, and experience just compressed that knowledge until it was trivial to memorize. Unfortunately, many walked away with a sense of not needing to know things until they need them, but good luck with that in a systems design interview.

constantcrying

2 hours ago

But in how many cases is the "why" more important than the "how"?

People can drive a car, without understanding any of the mechanical and electrical systems of a car. In fact understanding these does essentially nothing for what most people use a car for.

>If you can’t produce a comprehensive answer with confidence and on the whim the second you read the question, you don’t have the sufficient background knowledge.

And then what? Why would this matter at all? You can successfully use a workout schedule without understanding what its strengths and weaknesses are and how they align with your goals.

shadowgovt

3 hours ago

Possibly worth considering the source: the author coaches on a mechanism for collecting and collating information. They have a vested interest in the notion you have to do the work (instead of letting the machine do it for you); indeed, they hope you do the work using this method they coach on...

(That having been said: I've used zettelkasten myself a bit and I'd say it's worth a try. Probably not for everyone but the underlying idea of "building out an artifact to supplement your memory and understanding of what you've seen" is an intriguing approach).

digitalbullshit

4 hours ago

Reminds me of when I debug why my aquascape (I’m a beginner) keeps having algae outbreak such as GSA and Cyanobacteria.

I have a lot of things correct, CO2, light, photoperiod. LLM told me that I have too much kH and gH and too much phosphate.

Followed the LLM advice; it didn't work. Apparently it's outdated advice. My kH and gH are high (not using RO/DI water, just NJ tap water), but not so high that plants struggle to get CO2, and my phosphate level isn't necessarily what made the algae bloom.

Turns out my nutrition was the culprit. I bottomed out on nitrogen all the time. But when I told it that my nitrate was 0, the LLM didn't say a thing and instead said "you got everything correct".

Digital bullshit.

hn_throw_250910

4 hours ago

A great deal of anti-AI posts as of late seem like milquetoast pearl clutching to me. They don’t want to outright say they feel threatened/devalued but the arguments they put forward are not only unconvincing, but in this case among many others actively work against them.

nb. I tried really hard to not point out the smugness of Zettelkasten which I suspect emboldens this feeling of superiority, because I’d rather sit this one out and see how it goes. Something tells me the AI will win by a landslide.

nancyminusone

3 hours ago

Fine, but I ask you what does it mean to "win" in this context?

add-sub-mul-div

4 hours ago

What's the difference between milquetoast pearl clutching and a take you disagree with about a currently hot topic?

How do you tell when someone who disagrees with you is coming from a place of feeling threatened or devalued vs. a place you'd consider legitimate?

Hoping your methods are reliable and transferable, you're applying them to "a great deal" of posts so maybe they'd be helpful to me too.

ajuc

4 hours ago

It's like with math. You could theoretically only memorize the axioms and rederive everything else on the fly.

But in practice, you don't have enough working memory or processing power to do that, so you'd be stuck with the math a few derivation steps above the axioms only.

To actually use math for problem solving, you need to memorize everything up to the bleeding edge, and to train yourself to operate on intermediate-level abstractions intuitively.

mock-possum

4 hours ago

> Looks good alright? Or does it? How do you know? You can’t if you don’t have sufficient background knowledge … If you can’t produce a comprehensive answer with confidence and on the whim the second you read the question, you don’t have the sufficient background knowledge.

> “I just ask ChatGPT for that, too!”, the AI generation might ask. Ok, and then what? How can you assess the answers … you are taking on an impossible task, because you can’t use enough of your brain for your cognitive operations.

So it’s Zeno’s paradox of knowing stuff?

It can’t be impossible to know things, you’ve just got to decide when you know enough to get going on. Otherwise you’re mired in analysis paralysis and you never get anything done.

I do agree that deep knowledge of the foundations a subject - particularly a skilled practice or craft - is a path to proficiency and certainly a requirement for mastery. But there are plenty of times when you can get away with ‘just reading the documentation’ and doing as instructed.

You do not first need to invent the universe in order to begin exercising, you can just start taking a 20 minute walk after lunch.

SpaceManNabs

3 hours ago

you can hoard anything, including knowledge.

zach_miller

4 hours ago

Surprised this is on top. Human beings have been using tools to augment memory since writing. Lots of these tools are faulty but then so too is memory.

If you want to remember everything good luck, but I am not convinced.

fullshark

4 hours ago

I've reached my limit and I'm going to tap out of reading the blogosphere. It almost exclusively contains half-baked musings and overly reductive conclusions of little value.

vxxzy

4 hours ago

i will now no longer press F3 or / instead i will read.

komali2

4 hours ago

I am either too stupid or too lazy to remember everything. Or my version of ADHD specifically restricts my ability to remember things, or to have sufficient energy to engage daily with the process needed to memorize information (e.g. build and review anki decks).

I gave up on Anki. I abandoned org-roam and switched to trillium with very simple summaries of hand written notes. I don't summarize articles I read, I simply save them to my bookmarks with good tags.

I might be stupider now, I don't know. My mandarin improves at the same pace it did before, when I was doing daily flashcards, according to my mandarin teacher. Except now I don't have the torturous Todo item of "do mandarin flashcards" weighing on me every day, or the huge catch-up if I miss even two days (again, I am stupid - I usually miss 20% of my cards, daily. Often the same cards. I have forgotten the same word over 100 times.)

But, the other day I went to trivia in Cambridge UK, where I was probably the dumbest person in the town and almost certainly in the pub! And everyone was answering very interesting questions with correct answers, about geography, history, science, even the geography of the Americas, which as an American should at least have been something I knew better than my British colleagues! Nope. It made me regretful that I grew up in suburban America and didn't get a nice bit of background education like Europeans do, or maybe regretful about being dumb or having ADHD. I always admired smart people and I always wanted to count myself amongst them - it's why I brute forced my way into engineering by picking the lowest barrier to entry bit (frontend) and gritting my teeth and smashing pencils through hour after hour of tutorial and project until I finally had a portfolio I could get hired off of. And still almost always, dumbest person in the company (there have been people less productive than me, though), and no idea how to change it. I'm wondering again if there's something I can do to be smarter.

I am fascinated by the promise of zettelkasten, spaced repetition, fish oil, whatever, but none of it seems to deliver!

zozbot234

2 hours ago

People should just stop worrying about their Anki backlog. The truth is, every little bit you do helps, even just one card a day. That "backlog" is just telling you that you are falling short of perfection wrt. keeping the whole content of your deck securely memorized. You'll still be able to recall the bulk of it as long as you keep a reasonable routine.

reddit_clone

2 hours ago

As a fellow sufferer of same issues, I sympathize.

You are too hard on yourself though.

It is like a running race, when we are required to carry 50 Lb bags on our backs, while others are running freely.

At some point we need to accept and not worry too much!

Avicebron

5 hours ago

TLDR: firing and forgetting the first result off of a single google search isn't the best long term approach. But this guy had a bri..coaching to sell you that apparently makes you a better human being

tolerance

4 hours ago

> But this guy had a bri..coaching to sell you that apparently makes you a better human being

To the credit of the author and this individual post, you would have to go out of your way to come to that conclusion. That is, there isn’t anything present in the actual article to suggest that what you’re alleging is valid.