davnicwil
11 hours ago
It's an interesting one. We'll have to discover where to draw that line in education and training.
It is an incredible accelerant in top-down 'theory driven' learning, which is objectively good, I think we can all agree. Like, it's a better world having that than not having it. But at the same time there's a tension between that and the sort of bottom-up practice-driven learning that's pretty inarguably required for mastery.
Perhaps the answer is as mundane as: one must simply do both, and failing to do both will just result in... failure to learn properly. Kind of as it is today, except today there's often no truly accessible or convenient top-down option at all, so it's not a question anyone thinks about.
xboxnolifes
11 hours ago
The way I see it, LLMs aren't really much different from existing information sources. I can watch video tutorials and lectures all day, but if I don't sit down and practice applying what I see, very little of it will stick long term.
The biggest difference I see is that with pre-LLM search, I spent a lot more time hunting for a good source, and I probably picked up some information along the way.
danielrm26
6 hours ago
Definitely. We have to find ways to replicate this.
One thing I've noticed is that I've actually learned a lot more about code I didn't understand before, just because I built guardrails to make sure things are built exactly the way I like them. And then I've watched my AI build them that way dozens of times now, start to finish. I've seen all the steps so many times that I now understand a lot more than I did before.
This sort of thing is definitely possible, but we have to do it on purpose.
danielrm26
11 hours ago
OP here, yeah, I think that's a really good point.
I feel like the way I'm building this in is a violent maintenance of two extremes.
On one hand, fully merged with AI and acting like we are one being, having it do tons of work for me.
And then on the other hand is this analog gym where I'm stripped of all my augmentations, tools, and connectivity, and I'm quizzed on how well I can do just by myself.
And how well I do in the non-augmented (NAUG) scenario determines what tweaks need to be made to my regular augmented (AUG) workflows to improve my NAUG performance.
Especially for those core identity things that I really care about. Like critical thinking, creating and countering arguments, identifying my own bias, etc.
I think as the tech gets better and better, we'll eventually have an assistant whose job is to make sure that our un-augmented performance is improving, vs. deteriorating. But until then, we have to find a way to work this into the system ourselves.
davnicwil
11 hours ago
there could also be an almost chaos-monkey-like approach of cutting off the assistance at indeterminate intervals, so you've got to maintain a baseline of skill / muscle memory to be able to deal with it.
I'm not sure if people would subject themselves to this, but perhaps the market will just serve it to us as it currently does with internet and services sometimes going down :-)
I know for me, when this happens, and also when I sometimes do a bit of offline coding in various situations, it feels good to exercise that skill of just writing code from scratch (erm, well, with intellisense) and kind of re-assert that I can still do it, now that we're in tab-autocomplete land most of the time.
But I guess opting into such a scheme would be one-to-one with the type of self-determined discipline required to learn anything in the first place anyway, so I could see it happening for those with at least as much motivation to learn X as exists today.
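For what it's worth, the chaos-monkey idea above is easy to prototype. Here's a minimal sketch in Python, assuming any callable prompt-to-completion assistant; the names `ChaosAssistant` and `outage_rate` are hypothetical, just for illustration:

```python
import random


class ChaosAssistant:
    """Wrap an AI assistant and randomly withhold its help, chaos-monkey style.

    `assistant` is any callable mapping a prompt string to a completion;
    `outage_rate` is the probability a given request is refused, forcing
    the user to work un-augmented for that request.
    """

    def __init__(self, assistant, outage_rate=0.1, seed=None):
        self.assistant = assistant
        self.outage_rate = outage_rate
        # Seedable RNG so outage schedules can be reproduced in testing.
        self.rng = random.Random(seed)

    def complete(self, prompt):
        # Simulate an indeterminate outage: caller must fall back to
        # doing the work themselves.
        if self.rng.random() < self.outage_rate:
            raise ConnectionError("assistance unavailable -- you're on your own")
        return self.assistant(prompt)
```

A real version might schedule outages per session rather than per request, but the shape is the same: the discipline is enforced by the tool instead of the user.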
mathgeek
10 hours ago
> We'll have to discover where to draw that line in education and training.
I'm not sure we (meaning society as a whole) are going to have enough say to really draw those lines. Individuals will have more of a choice going forward, just like they did when education was democratized via many other technologies. The most that society will probably have a say in is what folks are allowed to pay for as far as credentials go.
What I worry about most is that AI seems like it's going to make the already large have/have-not divide grow even more.
davnicwil
10 hours ago
that's actually what I mean by 'we'. As in, different individuals will try different strategies with it, and we the collective will discover what works based on results.
nhinck2
9 hours ago
> It is an incredible accelerant in top-down 'theory driven' learning
Is it? People claim this but I really haven't seen any proof that it is true.