easterncalculus
20 hours ago
> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.
That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.
I don't like AI, generally. I am skeptical of corporate influence, I doubt AI 2027 and so-called 'AGI'. I'm certain we'll be "five years away" from superintelligence for the foreseeable future. All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this. It's why people can't post a meme, quote, article, or anything else that could be (very often falsely) interpreted as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image, without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than those they wage their campaigns against.
mattgreenrocks
19 hours ago
> This is the culture that replaced hacker culture.
Somewhere along the way to "everybody can code," we threw out the values and aesthetics that attracted people in the first place. What began as a rejection of externally imposed values devolved into a mouthpiece of the current powers and principalities.
This is evidenced by the new set of hacker values being almost purely performative when compared against the old set. The tension between money and what you make has been boiled away completely. We lean much more heavily on where someone has worked ("ex-Google") vs. their tech chops, which (like management) we have given up on trying to actually evaluate. We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact.
We sold out the culture, which paved the way for it to be hollowed out by LLMs.
There is a way out: we need to create a culture that values craftsmanship and dignifies work done by developers. We need to talk seriously and plainly about the spiritual and existential damage done by LLMs. We need to stop being complicit in propagating that noxious cloud of inevitability and nihilism that is choking our culture. We need to call out the bullshit and extended psyops ("all software jobs are going away!") that have gone on for the past 2-3 years, and mock it ruthlessly: despite hundreds of billions of dollars, it hasn't fully delivered on its promises, and investors are starting to be a bit skeptical.
In short, it's time to wake up.
ineedasername
17 hours ago
"There is a way out: we need to create a culture that values craftsmanship and dignifies work done by developers. We need to talk seriously and plainly about the spiritual and existential damage done by LLMs"
This is the exact sentiment when said about some other profession or craft. Countless people, elsewhere and on HN, have noted that it's neither productive nor wise to be so precious about a task that evolved out of necessity, turning it into something ritualized, reified, and put on a pedestal in a way that prevents progress. It conflates process with every single other thing about whatever is being spoken about.
Also: complaining about a new technology that is bottlenecked by lack of infrastructure, by pushback from people with your mindset, and that is poorly understood in its best uses because the people who aren't of your mindset are still figuring out and creating the basic tooling we currently lack?
That is a failure of basic observation. A failure to see the thing you don't like because you don't like it and decide not to look. Will you like it if you look? I don't know; it sounds like your mind is made up, or you might find good reasons why you should maintain your stance. In the latter case, you'd be able to make a solid contribution to the discussion.
dspillett
17 hours ago
I'm firmly in the “don't want to use it; if you want to, feel free, but stop nagging me to” camp.
Oh, and the “I'm not accepting 'the AI did it' as an excuse for failures” camp. Just like outsourcing to other humans: you chose the tool(s), you are responsible for verifying the output.
I got into programming and kicking infrastructure because I'm the sort of sad git who likes the details, and I'm not about to let some automaton steal my fun and turn me into its glorified QA service!
I'd rather go serve tables or stack shelves, heck I've been saying I need a good long sabbatical from tech for a few years now… And before people chime in with “but that would mean dropping back to minimum wage”: if LLMs mean almost everybody can program, then programming will pretty soon be a minimum wage job anyway, and I'll just be choosing how I earn that minimum (and perhaps reclaiming tinkering with tech as the hobby it was when I was far younger).
ineedasername
14 hours ago
“Don’t want…” “not accepting”
Now this, putting aside my thoughts above, I find a compelling argument. You just don’t want to. I think that should go along with a reasonable understanding of what a person is choosing not to use, but I’ll presume you have that.
Then? Sure, the frustrating part is to see someone making that choice tell other people that theirs is invalid, especially when we don’t know what the scene will look like when the dust settles.
There’s no reason to think there wouldn’t be room for “pure code” folks. I use the camera comparison, while fully recognizing it doesn’t map to this in all respects. But the idea that painters should have given up paint?
There were in fact people at the time who said, “Painting is dead!”. Gustave Flaubert, the famous author, said painting was obsolete. Paul Delaroche actually said it was dead. Idiots. Amazingly talented and accomplished, but short-sighted, idiots. We’ll likely be laughing at some amazing and talented people making such statements about code today in the same light.
Code as art? Well, two things: 1) LLMs have tremendous difficulty parsing very dense syntax and then addressing its different pieces and branching ideas, even now. I’m guessing this transfers to code that must be compact, embedded, and optimized to such precision that sufficient training data, generalizable to the task across all the different architectures of microcontrollers and embedded systems, just doesn’t exist yet. My recommendation to coders who want to look for areas where AI will be unsuitable? There’s plenty of room at the bottom. My career has never taken me there, but the most fun I’ve had coding has been homebrew microcontrollers.
2) Code as art. Not code to produce art, nor something separable from the code that created it. Think of minor things from the past like the obfuscated C challenges. Much of that older hacker ethos is fundamentally an artistic mindset. Art has a business model; some enterprising person ought to crack the code of turning code into a recognized art form where aesthetic is the utility.
I don’t even mean the visual code, but that is viable: don’t many coders enjoy the visual aesthetic of source code, neatly formatted, colored to perfect contrasts between types, etc.? I doubt that’s the limit of what could be visually interesting in something that still runs. Small audience for it, sure, but the same is true of most art.
Doesn’t matter. I doubt that will be something masses of coders turn to, but my point is simply that there are options that involve continuing the “craft” aspects you enjoy, whether my napkin doodle of an idea above holds or not. The option, for many, may simply not include keeping the current trajectory of their career. Things change: not many professional coders who began at 20 in 1990 have been able, or willing, to stay in the narrow area they began in. I knew some as a kid that I still know; one of them managed to stay on that same path. He’s a true craftsman at COBOL. When I was a bit older, in one of my first jobs, he helped me learn my way around a legacy VMS cluster. Such things persist, just reduced in proportion to the rest. But that is an aspect of what’s happening today.
johnnyanmac
8 hours ago
>there are options that involve continuing the “craft” aspects you enjoy
My endgame is not to be beholden to any given corporations' sense of value (because it is rarely in the engineering), so I don't personally care what happens at large. I'll still enjoy the "craft" on my own and figure out the lines where I need to take a disciplined stance and grind it out myself, where I take on a dependency, or where I leave the work to a black box.
But if the time comes for collaboration, then we'll work as a team. AKA we'll decide those lines and likely compromise on values to create something larger than all of us. I doubt my line will ever be "let's just vibecode everything". But it's likely not going to be "use zero AI" unless I have a very disciplined team at hand and no financial stress between any of us.
j2kun
17 hours ago
> prevents progress
"progress" is doing a lot of work here. Progress in what sense, and for whom? The jury is still out on whether LLMs even increase productivity (which is not the same as progress), and I say this as a user of LLMs.
johnnyanmac
9 hours ago
>That is a failure of basic observation. A failure to see the thing you don't like because you don't like and decide not to look. Will you like it if you look?
Maybe we're observing different parts of the elephant. This is my industry right now: https://www.pcgamer.com/games/call-of-duty/call-of-duty-blac...
A deca-billion dollar franchise now owned by a trillion dollar tech company... using it to make art that wouldn't pass a junior interview. It's no surprise there's such a strong rejection by the community who's paying attention. Cheaping out on a product a consumer pays $70 + a bunch of microtransactions for clearly shows the company's priorities.
Maybe there are spaces where you find success, but it's very clear that the water is muddying at large. You don't argue against a swamp by saying "but my corner here is clean!".
trinsic2
17 hours ago
Man, there is something true in what he is saying though. Can't you see it? I like the idea of some of this technology. I think it's cool you can use natural language to create things. I think there is real potential in using these tools in certain contexts, but the way in which these tools got introduced, with no transparency, how they're being used to shape thought, the over-reliance on them, and how they're used to take away our humanity is a real concern.
If this tech were designed in an open way, not put behind paywalls and used to develop models that take away people's power, maybe I'd think differently. But right now it's being promoted by the worst of the worst, and nobody is talking about that.
jimbokun
17 hours ago
What’s your solid contribution to the discussion?
ineedasername
17 hours ago
Responding to and enumerating, in this case, the viewpoint of someone. It's the general process by which discussions take place and progress.
If the thread were about 1) the current problems in and approaches to AI alignment, 2) the poorly understood mechanisms of hallucination, 3a) the mindset that doesn't see the conflict when they say "don't anthropomorphize" but runs off to create a Pavlovian playground in post-training, 3b) the mindsets that do much the reverse, and how both of these are dangerous and harmful, or 4) the poorly understood trade-offs of sparse inference optimizations, I'd bring those up. But it's not, so I hold those in reserve.
quantummagic
18 hours ago
> we need to create a culture that values craftsmanship and dignifies work done by developers.
Mostly I agree with you. But there's a large group of people who are way too contemptuous of craftsmen using AI. We need to push back against this arrogant attitude. Just as we shouldn't be contemptuous of a woodworking craftsman using a table saw.
cool_dude85
18 hours ago
>Just as we shouldn't be contemptuous of a woodworking craftsman using a table saw.
Some tools are table saws, and some tools are subcontracting work out to lowest cost bidders to do a crap job. Which of the two is AI?
andai
18 hours ago
I've been programming for 20 years and GPT-4 (the one from early 2023) does it better than me.
I'm the guy other programmers I know ask for advice.
I think your metaphor might be a little uncharitable :)
For straightforward stuff, they can handle it.
For stuff that isn't straightforward, they've been trained on pattern matching some nontrivial subset of all human writing. So chances are they'll say, "oh, in this situation you need an X!", because the long tail is, mostly, where they grew up.
--
To really drive the point home... it's easy to laugh at the AI clocks.[0] But I invite you, dear reader, to give it a try! Try making one of those clocks! Measure how long it takes you, how many bugs you write. And how well you'd do it if you only had one shot, and/or weren't allowed to look at the output! (Nor Google anything, for that matter...)
I have tried it, and it was a humbling experience.
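For the curious, the fiddly part isn't the rendering, it's the hand geometry. Here's a rough sketch of just that part (my own illustrative code, not taken from any of the clock demos); the classic bug is forgetting that the hour hand drifts continuously between numerals:

```python
def hand_angles(hour, minute, second):
    """Clockwise angles in degrees, measured from 12 o'clock,
    for an analog clock's three hands."""
    second_angle = second * 6.0                      # 360 deg / 60 s
    minute_angle = minute * 6.0 + second * 0.1       # minute hand drifts with seconds
    # Hour hand moves 30 deg/hour and drifts 0.5 deg/minute between numerals.
    hour_angle = (hour % 12) * 30.0 + minute * 0.5 + second * (0.5 / 60.0)
    return hour_angle, minute_angle, second_angle

# e.g. at 6:30:00 the hour hand sits halfway between 6 and 7 (195 deg),
# not on the 6 (180 deg) -- the detail one-shot attempts tend to miss.
```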
rozap
17 hours ago
Now tell the AI to distill a bunch of user goals into a living system which has to evolve over time, integrate with other systems, etc. And then deliver and support that system.
I use Claude Code every day and it is a slam dunk for situations like the one above, fiddly UIs and the like. Seriously, some of the best money I spend. But it is not good at more abstract stuff. Still a massive time saver for me, and it does effectively do a lot of work that would have gotten farmed out to junior engineers.
Maybe this will change in a few years and I'll have to become a potato farmer. I'm not going to get into predictions. But to act like it can do what an engineer with 20 years of experience can do means the AI brain worm got you or it says something about your abilities.
yawnxyz
17 hours ago
right, but this is akin to arguing why the table saw also does not do x/y/z — I don't know why we only complain about AI and how it does NOT do everything well yet.
Maybe it's expectations set by all the AI companies, idk, but this kind of mentality seems very particular to AI products and nothing else.
touisteur
16 hours ago
I'm OK pondering the right use for the tool for as long as it'll take for the dust to settle. And I'm OK too trying some of it myself. What I resent is the pervasive request/pressure to use it everywhere right now, or 'be left out'.
My biggest gripe with the hype, since there's so much talk of craftsmanship here, is this: most programmers I've met hate doing code reviews, and a good proportion prefer rewriting to reading and understanding other people's code. Now suddenly everyone is to be a prompter and astute reviewer of a flood of code they didn't write, and now that you have the tool you should be faster, faster, faster, or there's a problem with you.
johnnyanmac
8 hours ago
well that's the issue. The table saw is a tool, we can very clearly agree it's good at cutting a giant plank of wood but horrible at screwing a bolt in. A carpenter can do both, but not a table saw. We never try to say the table saw IS the carpenter.
All this hype and especially the AGI talks want to treat the AI as an engineer itself. Even an assuredly senior engineer above is saying that it's better than them. So I think it's valid to ask "well can it do [thing a senior engineer does on the daily]" if we're suggesting that it can replace an engineer.
rozap
16 hours ago
I'm not complaining about it, I said in my post that it's a huge time saver. It's here to stay, and that's pretty clear to see. It has mostly automated away the need for junior engineers, which just 5 years ago would have been a very unexpected outcome, but it's kind of the reality now.
All that being said:
There's a segment of the software eng population that has their heads in the sand about it and the argument basically boils down to "AI bad". Those people are in trouble because they are also the people who insist on a whole committee meeting and trail of design documents to change the color of a button on a website that sells shoes. Most of their actual hard skills are pretty easy to outsource to an AI.
There's also a techbro segment of the population, who are selling snake oil about AGI being imminent, so fire your whole team and hire me in order to outsource your entire product to an army of AI agents. Their thoughts basically boil down to "I'm a grifter, and I smell money". Nevermind the fact that the outcome of such a program would be a smoldering tire fire, they'll be onto the next grift by then.
As with literally everything, there are loud, crazy people on either side and the truth is in the middle somewhere.
blackqueeriroh
4 hours ago
Junior engineers will be fine; OpenAI is actually choosing to hire juniors now because they just learned all their theory and structure, and are way more willing to push the LLMs to see what they can do.
Bad code is bad code. There’s been bad code since day one; the question is how fast are you willing to fail, learn, fail again, learn more, and keep going.
LLMs make failing fast nearly effortless, and THAT is power that I think young people really take to.
nasmorn
18 hours ago
AI doesn’t program better than me yet. It can do some things better than me, and I use it for those, but it has no taste and is way too willing to write a ton of code. What is great about it compared to an actual junior is that if I find out it did something stupid, it will redo the work super fast and without getting sad.
QuercusMax
17 hours ago
Too willing to write a ton of code - this is absolutely one of the things that drives me nuts. I ask it to write me a stub implementation and it goes and makes up all the details of how it works, 99% of which is totally wrong. I tell it to rename a file and add a single header line, and it does that - but throws away everything after line 400. Just unreliable and headache-inducing.
efdee
15 hours ago
For me, AI is definitely a table saw. YMMV.
bigstrat2003
18 hours ago
That's because there's nothing "craftsman" about using AI to do stuff for you. Someone who uses AI to write their programs isn't the equivalent of a carpenter using a table saw, they are the equivalent of a carpenter who subcontracts the piece out to someone else. And we wouldn't show respect to the latter person either.
MrDarcy
18 hours ago
I’m a hacker and I’d show respect to that latter person if they did the subcontracting and reviewed their craft well.
terminalbraid
18 hours ago
But you wouldn't call them a craftsperson because they didn't do any craft other than "be a manager". Reviewing work is not on the same plane as actually creating something.
pixl97
18 hours ago
Why are we crafting code?
Simply put, most industries moved away from craftsmanship starting in the late 1700s through the mid-1900s. Craftsmanship does make a few nice things, but it doesn't scale. Mass production led to most people actually having stuff and the general condition of humanity improving greatly.
Software did kind of get a cheat code here, though: we can 'craft' software and then endlessly copy it without the restrictions of physical objects. With all that said, software is rarely crafted well anyway. HN has an air about it that software developers are the craftsmen of their gilded age, but most software projects fail terribly and waste huge amounts of money.
fragmede
17 hours ago
Does Steve Jobs deserve any respect for building the iPhone then? What is this "actually creating"? I'm sure he wasn't the one to do any of the "actually creating" and yet, there's no doubt in my mind that he deserves credit for the iPhone existing and changing the world.
blackqueeriroh
4 hours ago
> Does Steve Jobs deserve any respect for building the iPhone then?
No. Because he didn’t build it. He didn’t even have the idea for it. He gets respect for telling a lot of people “no” and for saying “not this” and “not that,” for being an excellent editor, but he does NOT get any credit for building the iPhone.
That was thousands of other people.
By the way, what does being an editor look like?
It looks a lot like telling an LLM, “no, not that. Not that either. Try it this way. Mmm, not quite. Here, let me show you a sketch. Try something like that. Yes, that’s it!!”
MrDarcy
17 hours ago
I honestly don’t understand why you’re presuming to tell me what I think.
I consider myself a craftsman. I craft tools. I also am a manager. I also am a consultant. I am both a subcontractor and I subcontract out.
Above all else I’m a hacker.
I also use LLM’s daily and rather enjoy incorporating this new technology into what I consider my craft.
Please stop arrogantly presuming you know what is best for me to think and feel about all of this.
wpm
18 hours ago
I'm no fan of "AI" but I think it could be argued that if we're sticking to the metaphor, the carpenter can pick up the phone and subcontract out work to the lowest bidder, but perhaps that "work" doesn't actually require high craftsmanship. Or we could make the comparison that developers building systems of parts need to know how they all fit together, not that they built each part themselves, i.e., the carpenter can buy precut lumber rather than having to cut it all out of a huge trunk themselves.
andai
18 hours ago
What about an architect who outsources the bricklaying? A designer who outsources manufacturing?
andai
15 hours ago
I'm not implying a hierarchy of value or status here, btw. And the point about difficulty is interesting too. I did manual labor and it was much harder than programming, as you might expect!
You can certainly outsource "up", in terms of skill. That's just how business works, and life... I called a plumber not so long ago! And almost everyone outsources their health...
latexr
17 hours ago
Brick laying isn’t architecture and manufacturing isn’t design. Those are separate fields and crafts.
Avicebron
17 hours ago
It's very telling when someone invokes this comparison. I see it fairly often. It implies there is this hierarchy of skill/talent between the "architect" and the "bricklayer," such that any architect could be a bricklayer but a bricklayer couldn't be an architect. The conceit is telling.
gosub100
17 hours ago
Masonry is hard work but not low-skill, FYI.
ineedasername
16 hours ago
Nothing craftsman about it? The detail required to set up a complex image-gen pipeline to produce something that has consistent style, composition, placement, etc., and quite a bit more, for things that will go into production and need a repeatable pipeline, is huge. It takes every bit as much creative vision.
Taking just images, consider AI merely a different image-capture mechanism, like the camera was vs. painting. (You could copy-paste many critiques of this sort of AI and just swap in "camera".) Sure, it's more accessible to a non-professional, in AI's case much more so than cameras were compared to years of learning painting. But there's a world of difference between what most people do in a prompt online and what professionals integrating it into their workflow are doing. Are such things "art"? That's not a productive question, mostly, but there's this: when it occurs, it has every bit as much intention and purpose from a human behind it as the work people complain it lacks; they are picturing the one-shot prompt process when they make that complaint.
flatline
17 hours ago
Almost every bit of work I've hired people to do has been through an intermediary of some sort. Usually one with "contractor" or "engineer" as a title. They are the ones who can organize others, have connections, understand talent, plan and keep schedules, recognize quality, and can identify and troubleshoot problems. They may also be craftsmen, or have once been, but the work is not necessarily their craft. If you want anything project-scoped, you have a team, there is someone in a leadership role (even if informally), someone handling the money, etc. Craftsmanship may or may not happen within that framework, they are somewhat orthogonal concerns, and I don't see any reason to disrespect the people that make room for it to happen.
Of course you can also get useless intermediaries, which may be more akin to vibe coding. Not entirely without merit, but the human in the loop is providing questionable value. I think this is the exception rather than the norm.
newman8r
18 hours ago
> And we wouldn't show respect to the latter person either.
Not respect as a carpenter, but perhaps respect as a businessperson or visionary.
efdee
15 hours ago
I respectfully disagree, but disagree hard.
a) Nothing about letting AI do grunt work for you makes you "not a craftsman". b) Things are subcontracted all the time. We don't usually disrespect people for that.
thewebguyd
17 hours ago
Where do you draw the arbitrary line of what is craftsmanship and what's not?
Using that line of reasoning I could also argue "Using libraries isn't craftsmanship; a real craftsman implements all functionality themselves."
johnnyanmac
8 hours ago
> way too contemptuous of craftsmen using AI
We need to find such craftsmen first. A true craftsman should be able to let the code speak for itself. And ideally they'd be able to teach well enough to have others adopt such a workflow, which inevitably includes constraints and methodologies.
That's the thing I don't see enough of in these discussions. We're very afraid to talk about what AI is bad at, as if it's some sort of pampered child we need to keep pleasing. That's not how we attain progress in the craft. Maybe in the stock market, but at that point it's clear what the focus is.
Survived
17 hours ago
A LLM is more like a CNC panel saw; feed a sheet in one end, stack up parts from the other.
It reduces craftsmanship to unskilled labor.
The design work and thinking happen somewhere else. The operator comes in, punches a clock, and chokes on MDF dust for 8 hours.
CamperBob2
17 hours ago
No, the idea is that such a CNC saw shouldn't need an operator at all. To the extent it still does, the operator doesn't even need to be in the same town, much less the same building.
This is a GOOD thing.
Survived
16 hours ago
Good or bad, converting craft work to production work is not making the craft worker more productive, it's eliminating the craft worker.
The unskilled operator's position is also precarious, as you point out, but while it lasts, it's a different and (arguably) less satisfying form of work.
The LLM is not a table saw that makes a carpenter faster, it's an automatic machine that makes an owner's capital more efficient.
CamperBob2
16 hours ago
(Shrug) I don't know about "owners" and "capital," but used properly, they make me more efficient.
everdrive
18 hours ago
>Somewhere along the lines of "everybody can code," we threw out the values and aesthetics that attracted people in the first place.
At some point people started universally accepting the idea that any sort of gatekeeping was a bad thing. I think by now people are starting to realize that this was a flawed idea (at best, gatekeeping is not a pure negative; it's situational). But despite coming to realize this, I think parts of our culture still maintain it as a default value. "If more people can code, that's a _good_ thing!" Are we 100% sure that's true? Are there _no_ downsides? Even if it's a net positive, we should be able to have some discussion about the downsides as well.
idiotsecant
18 hours ago
Your point is that the hacker ethos involved... fewer people being excited about programming? I don't think we experienced this on the same planet.
Web 1.0 was full of weirdos doing cool weird stuff for the pure joy of discovery. That's the ethos we need back, and it's not incompatible with AI. The wrong turn we took was letting business overtake joy. That's a decision we can undo today by opting out of that whole ecosystem.
tbrownaw
17 hours ago
You get a very different crowd if something is an (unprofitable but) fun hobby vs. a well-paying profession.
vladms
17 hours ago
I think that the ratio of weirdos doing stuff has remained constant through the population; it's just that the whole population is now on the web, so they are harder to find.
Not to mention 20 years ago I personally (and probably others my age) had much more time to care about random weird stuff.
So, I am skeptical without some actual analysis or numbers that things really are so bad.
newman8r
9 hours ago
> Web 1.0 was full of weirdos doing cool weird stuff for the pure joy of discovery. That's the ethos we need back, and it's not incompatible with AI.
True. And to some extent, I've seen more 'useless but fun' projects in the last year because they can be done in an afternoon rather than a week. We need more of that.
JuniperMesos
15 hours ago
This is because in Web 1.0 times, only weird hacker types were capable of using the internet effectively. Normies (and weirdos who were weird in ways not related to familiarity with and interest in personal computer technology) were simply not using the internet in earnest, because it wasn't effective for their needs yet. Then people made that happen and now everyone is online, including boring normies with boring interests.
If you want a space where weird hacker values and doing stuff for the pure joy of discovery reign, gatekeep harder.
blackqueeriroh
4 hours ago
> If you want a space where weird hacker values and doing stuff for the pure joy of discovery reign, gatekeep harder.
That’s not what was said. What they said is that they wanted more people doing things for the pure joy of discovery, and to make that happen, everyone needs to have more free time and less financial stress or be able to make fun stuff WAY faster, like with LLMs!
pixl97
18 hours ago
> That's a decision we can undo today by opting out of that whole ecosystem.
Ah yes, we'll also skip out on eating too.
idiotsecant
17 hours ago
There's a mountain of software work you can do that doesn't involve participating in this rat race. There's nothing that says you need to make 500k and live in silicon valley. It's possible to be perfectly happy working integrating industrial control systems in a sleepy mountain town where cost of living is practically nothing. I am well qualified to make that statement.
Glemkloksdjf
18 hours ago
We need to change the underlying system.
We do not need to do things no one needs. We do not need a million different webshops, or the next CRUD application.
We need a system which allows the earth's resources to be used as efficiently and fairly as possible.
Then we can again start appreciating real craftsmanship, not for critical things and not because we need to feed ourselves, but because we want to do it.
vladms
17 hours ago
Each time someone says "we" without asking me I find it at least insulting. With this mindset the next step might be to tell me what I need, without considering my opinion.
Yes, the current system seems flawed, but it's the best we've come up with, and it's not fixed either; it is slowly evolving.
Yes, some resources are finite (energy from the sun seems quite plentiful, though), but I don't think we will ever be able to define "fair". I would be glad with "do not destroy something completely and irremediably".
jpadkins
17 hours ago
> We need a system which allows the earth resources being used as efficient and fair as possible.
To what goals? Who gets to decide what is fair?
ctoth
16 hours ago
The number of not-so-secretly centralized-economy types on HN actively surprises me whenever I see this.
Who is we? and how do we decide?
fragmede
17 hours ago
> We do not need a million differen webshops, and the next CRUD application.
The thing about capitalism is that an unnecessary webshop isn't getting any customers if it's truly unnecessary, and will soon be out of business. We can appreciate Ghostty, because why? Because the guy writing it is independently wealthy and can fly jets around for fun, and has deigned to grace us with his coding gifts once again? Don't get me wrong, it's a nice piece of software, but I don't know that that system's any better.
Glemkloksdjf
17 hours ago
Capitalism looks the way it does because humans don't have perfect knowledge (we don't know the best shop).
Also, competition is a core driver of cost reduction and progress in capitalism.
And from a big-picture point of view: there is only one Amazon, one Alibaba, etc.
zero_k
16 hours ago
YES. The "This is evidenced by the new set of hacker values being almost purely performative" is so incredibly true. I went to a privacy event about Web3, and the event organisers hired a photographer who took photos of everyone (no "no photo" stickers available), and they even flew a drone above our heads to take overarching videos of everyone :D I guess "privacy" should have been in quotes. All the values and aesthetics of the original set of people who actually cared about privacy (and were attracted to it) have evaporated. All that remains is the hype. It was wild.
robot-wrangler
18 hours ago
I realized recently that if you want to talk about interesting topics with smart people, if you expect things like critical thinking and nuanced discussion, you're currently much better off talking literature or philosophy than anything related to tech. I mean, everyone knows that discussing politics/economics is rather hopelessly polarized, everyone has their grievances or their superstitions or injuries that they cannot really put aside. But this is a pretty new thing that discussing software/engineering on merits is almost impossible.
Yes, I know about the language / IDE / OS wars that software folks have indulged in before. But the reflexive shallow pro/anti takes on AI are way more extreme and are there even in otherwise serious people. And in general anti-intellectual sentiment, mindless follow-the-leader, and proudly ignorant stances on many topics are just out of control everywhere and curiosity seems to be dead or dying.
You can tell it's definitely tangled up with money, though, and this remains a good filter for real curiosity. Math that's not maybe related to ML is something HN is guaranteed to shit on. No one knows how to have a philosophy startup yet (WeWork and other culty scams notwithstanding!). Authors, readers, novels, and poetry aren't moving stock markets. So at least for now there's somewhere left for the intellectually curious to retreat.
majormajor
18 hours ago
I don't really see it as any different from the Windows/Unix, Windows/Mac, etc., flame wars that boiled for decades, even amongst those with no professional stake in it. Those were otherwise serious people too, parroting meaningless numbers and claims that didn't actually make much of a difference to them.
If anything, the AI takes are much more meaningful. A Mac/PC flame war online was never going to significantly affect your career. A manager who is either all-in on AI or all-out on it can.
robot-wrangler
18 hours ago
OS and IDE wars are something people take pretty seriously in their teens and very early careers, and eventually become more agnostic about after they realize it's not going to be the end-all predictor of coworker code quality. It predicts something for sure, but not strictly skill-level.
Language-preference wars stick around until mid-career for some, and again it predicts something. But still, serious people are not likely to get bogged down in pointless arguments about nearly equivalent alternatives at least (yaml vs json; python vs ruby).
Shallow takes on AI (whether they are pro or anti) are definitely higher stakes than all this; bad decisions could be more lasting and more damaging. But the real difference, to my mind, is that AI "influencers" (again, pro or anti) are a very real thing in a way that doesn't happen with OS / language discussions. People listen; they want confirmation of their biases.
I mean, there have always been advocates and pundits doing motivated reasoning, but usually it's corporations or individuals with clear vested interests that are trying to short-circuit inquiry and critical thinking. It's new that so many would-be practitioners in the field are eager to sabotage and colonize themselves, forcing a situation where honest evaluations and merit-based discussion of engineering realities are impossible.
idiotsecant
18 hours ago
This is classically framed as philosophy vs sophistry. The truth is that both are necessary, but only one makes money. When your entire culture assigns value with money it's obvious which way the scales will tip.
exasperaited
18 hours ago
> But the reflexive shallow pro/anti takes on AI are way more extreme
But this is philosophy (and ethics/morality).
My feelings about AI, about its impact on every aspect of our lives, on the value of human existence and the purpose of the creative process, have less to do with what AI is capable of and more to do with the massive failures of ethics and morality that surround every aspect of its introduction and the people who are involved.
Humans will survive. Humanity is on the ropes.
fragmede
17 hours ago
> Math that's not maybe related to ML is something HN is guaranteed to shit on.
Eh, I mean here's one about the Ulam spiral that did pretty well: https://news.ycombinator.com/item?id=2047857
The fast inverse sqrt that John Carmack did not write also does well. I know there are many more. Are you sure that's not just a caricature of Hacker News you've built up in your head?
robot-wrangler
17 hours ago
Visualizations and code always help. But to name two recent disappointments, stuff like https://news.ycombinator.com/item?id=46049932 and https://news.ycombinator.com/item?id=45957911 comes to mind as not meeting a high standard. To be clear, no expertise is fine, but no curiosity is bad.
blackqueeriroh
5 hours ago
> we need to create a culture that values craftmanship and dignifies work done by developers.
Developers waste a lot of time writing a bunch of boilerplate code that they hate writing, that doesn't make them happy, and that has nothing to do with craftsmanship. We also just spent 60 years in a culture that dignified work done by developers to the extreme, and honestly that produced some of the most narcissistic minds the world has ever seen: Thiel, Andreessen, et al. And why? Because we dignify work in a capitalistic culture by increasing wages.
You want to talk about a culture that values craftsmanship? Let everyone have the time and the freedom and the security to build whatever they want to build, instead of what they have to build in order to have health insurance.
> We need to talk seriously and plainly about the spiritual and existential damage done by LLMs.
Uhhhhhh…..excuse me?
> despite hundreds of billions of dollars, it hasn't fully delivered on its promises, and investors are starting to be a bit skeptical.
More money was invested in the dot-com boom, and in the lead-up to the railroad era before anyone could ride. So this isn't new.
alganet
4 hours ago
I think this boilerplate thing got lost in translation some time ago.
We dislike _the presence_ of boilerplate, not the time spent writing it. If another thing writes it for you, it implies that _now boilerplate exists_, and it sucks.
It makes me unhappy when it exists. It makes me unhappy if it appears in seconds.
That said, there is some potential for using AI to reduce boilerplate and help create more meaningful software. However, that is definitely not the way things are shaping up to be.
JohnBooty
17 hours ago
> we need to create a culture that values craftmanship and dignifies work done by developers.
I don't think that is at ALL at odds with using AI as a coding assistant! I am not going to tell you I am a coding god, but I have been doing this for nearly 30 years and I feel I'm a pretty competent craftsman.
AI has helped me to be a better craftsman. The big picture ideas are mine, but AI has helped immensely with some details.
supern0va
13 hours ago
"We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact."
I actually disagree with this pretty fundamentally. I've never seen hacker culture as defined by "craftsmanship" so much as about getting things done. When I think of our culture historically, it's cleverness, quick thinking, building out quick and dirty prototypes in weekend "hackathons", startup culture that cuts corners to get an MVP product out there. I mean, look at your URL bar: do you think YC companies are prioritizing artisanal lines of code?
We didn't trade craftsmanship for "Business Impact". The latter just aligns well with our culture of Getting Shit Done. Whether it's for play (look at the jank folks bring out to the playa that's "good enough") or business, the ethos is the same.
If anything, I feel like there has been more of an attempt to erase/sideline our actual culture by folks like y'all as a backlash against AI. But frankly, while a lot of us scruffy hacker types might have some concerns about AI, we also see a valuable tool that helps us move faster sometimes. And if there's a good tool that gets a thing done in a way that I deem satisfactory, I'm not going to let someone's political treatise get in my way. I'm busy building.
jrm4
17 hours ago
"Dignifies work done by developers?"
Hmm. No. Not really. I don't think "hacker" ever much meant this at all; mostly because "hacker" never actually was much connected to "labor for money."
"Going to work" and "being a hacker" were overwhelmingly mutually exclusive. Hacking was what you don't do on company time (in favor of the company.)
asdfman123
17 hours ago
This is the fate that befalls any wildly successful subculture: the MOPs start showing up, fascinated by it, and the sociopaths monetize it to get rich. The original geeks who created the scene become increasingly powerless.
Relevant article: https://meaningness.com/geeks-mops-sociopaths
JustExAWS
18 hours ago
I’ve been a “software engineer” or closely adjacent for 30 years. During that time, I’ve worked for small and medium “lifestyle companies”, startups, boring Big Enterprise, $BigTech and over the past 5 years (including my time at $BigTech) worked as a customer facing cloud consultant where I’ve seen every type of organization imaginable and how they work. No one ever gave a rip about “craftsmanship”. They hire you for one reason - to make them more money than they are paying you for or to save them more money than you are costing them. As far as me, I haven’t written a single line of code for “enjoyment” since the day I stepped into college. For the next four years it was about getting a degree and for the next 30, it was about exchanging my labor for money to support my addictions to food and shelter - that’s the transaction. I don’t dislike coding or dread my job. But at the end of the day (and at the beginning of the day) I’ve found plenty of things I enjoy that don’t involve computers - working out, teaching fitness classes part time, running, spending time with family and friends, traveling, etc. If an LLM helps me exchange my labor for money more efficiently, I’m going to use it just like I graduated from writing everything in assembly in 1987 on my Apple //e to using a C compiler or even for awhile using Visual Basic 6.
HWR_14
18 hours ago
> If an LLM helps me exchange my labor for money more efficiently
Except that's unproven. It might make you more productive, but whether you get any of that new value is untested.
Glemkloksdjf
18 hours ago
Right now it's just a tool you can use or not, and if you are smart enough, you figure out very quickly when to use a tool for efficiency and when not to.
I do not vibe code my core architecture, because I control it and know it very well. I vibe code some web UI I don't care about, or a hobby idea, in 1-4h on a weekend, because otherwise it would take me 2 full weekends.
I fix emails, I get feedback, etc.
When I do experiments with vibe coding, I'm very aware of what I'm doing.
Nonetheless, it's 2025. In 2026 alone we will add so much more compute, and the progress we see is just crazy fast. In a few months there will be the next versions of Claude, GPT, Gemini, and co.
And this progress will not stop tomorrow. We don't know yet how fast it will progress or when it will suddenly be a lot better than we are.
Additionally, you do need to learn how to use these tools. Through vibe coding I learned that I have to specify certain things I had just assumed the smart LLM would do right without me telling it, for example.
Now I'm thinking about doing an experiment where I record everything about a small project I want to do, transcribe it into text, and then feed it into an LLM to structure it and then build me that thing. I could walk around outside with a headset to do so, and it would be a fun experiment to see how that feels.
I can imagine myself having some non-intrusive AR goggles where the AI sometimes shows me results and I basically just give feedback.
JustExAWS
18 hours ago
Well I have personally tested it on the green field projects I mostly work on and it does the grunt work of IAC (Terraform) and even did a decently complicated API with some detailed instructions like I would give another developer.
I’ve done literally dozens of short term quick turn around POCs from doing the full stack from an empty AWS account to “DevOps” to the software development -> training customers how to fish and showing them the concepts -> move on to next projects between working at AWS ProServe and now a third party consulting company. I’m familiar with the level of effort for these types of projects. I know how many fewer man hours it takes me now.
I have avoided front end work for well over a decade. I had to modify the front end part of the project we released to the customer that another developer did to remove all of the company specific stuff to make it generic so I could put it in our internal repo. I didn’t touch one line of front end code to make the decently extensive modifications, honestly I didn’t even look at the front end changes. I just made sure it worked as expected.
HWR_14
16 hours ago
> I know how many fewer man hours it takes me now.
But how much has your hourly rate risen?
JustExAWS
15 hours ago
If you are "consulting" on an hourly rate, you're doing it wrong. The company and I get paid for delivering projects, not for the number of hours we work. A smaller project may just say they have me for 6 weeks with a known deliverable. I'm rarely working 40 hours a week.
When I did do one short term project independently, I gave them the amount I was going to charge for the project based on the requirements.
All consulting companies - including the division at AWS - always eventually expand to the staff augmentation model where you assign warm bodies and the client assigns the work. I have always refused to touch that kind of work with a ten foot pole.
All of my consulting work has been full time and salaried, either for the consulting division of AWS, where I got the same structured 4-year base + RSUs as every other employee, or now making the same amount (with a lot less stress and better benefits) in cash.
I’m working much less now than I ever have in my life partially because I’m getting paid for my expertise and not for how much code I can pump out.
HWR_14
13 hours ago
You are kind of dodging the question. It sounds like you are not making more money or working fewer hours because of AI.
JustExAWS
13 hours ago
I am working fewer hours. I at most work 4 hours a day unless it’s a meeting heavy day. I haven’t typed a line of code in the last 8 months yet I’ve produced just as much work as I did before LLMs.
twosdai
18 hours ago
I really agree with your point. I think that this forum, being Hacker News and all, lends itself to a slightly different kind of tech person, one who really values, for themselves and their team, the art of getting stuck into a deeply technical problem and being able to overcome it.
JustExAWS
17 hours ago
You really think that people at BigTech are doing it for the “enjoyment” and not for the $250K+ they are making 3 years out of college? From my n=1 experience, they are doing it for the pay + RSUs.
If you see what it takes to get ahead in large corporations, it’s not about those who are “passionate”, it’s about people who know how to play the game.
If you look at the dumb AI companies that YC is funding, those “entrepreneurs” aren’t doing 996 because they enjoy it. They are looking for the big exit.
potbelly83
16 hours ago
I don't know, look at someone like https://news.ycombinator.com/user?id=dmbaggett he seems to be an entrepreneur who enjoys what he's doing.
raw_anon_1111
16 hours ago
Now compare that to these founders.
https://docs.google.com/spreadsheets/d/1Uy2aWoeRZopMIaXXxY2E...
How many of them do you think started their companies out of “passion”?
Some of the ones I spot-checked had a couple of non-technical founders looking for a "founding engineer" that they could underpay with the promise of "equity" that would probably be worthless.
potbelly83
16 hours ago
I'm not disagreeing with the fact that there's a shit ton of founders out there looking for a quick pay day (I'd guess the majority fall into that category). Just pointing out there are exceptions, and the exceptions can be quite successful.
CamperBob2
18 hours ago
> We need to talk seriously and plainly about the spiritual and existential damage done by LLMs.
I'm tempted to say "You're not helping," as my eyes roll back in their sockets far enough to hurt. But I can also understand how threatening LLMs must appear to programmers, writers, and artists who aren't very good at their jobs.
What I don't get is why I should care.
jimbokun
17 hours ago
The question about why you should care about others and not just yourself has literature stretching back thousands of years. Maybe start with one of the major world religions?
CamperBob2
17 hours ago
Which one do you suggest? Many of them come with a nasty non-compete clause.
trinsic2
13 hours ago
Have you seen the latest AI slop in game design lately, destroying human creativity?
Have you seen how this tech is being used to control narratives to subjugate populations to the will of authoritarian governments?
This shit is real. We are slowly sliding into a world where every aspect of our lives is going to be dictated by people in power, with tools that can shape the future by manipulating what people think about.
If you don't care that the world is burning to the ground, good luck with that. I'm not saying the tech is necessarily bad; it's the way in which we are allowing it to be used. There have to be controls in place to steer this tech in the right direction, or we are heading for a world I don't want to be a part of.
CamperBob2
13 hours ago
> Have you seen the latest AI slop in game design lately, destroying human creativity?
This just in: 90% of everything is crap. AI does not, cannot, and will not change that.
> Have you seen how this tech is being used to control narratives to subjugate populations to the will of authoritarian governments?
Can't say as I have.
The only authoritarians in this thread are the ones telling us what we should and should not be allowed to do with AI.
thewebguyd
18 hours ago
> All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.
The attitude and push back from this loud minority has always been weird to me. Ever since I got my hands on my first computer as a kid, I've been outsourcing parts of my brain to computing so that I can focus on more interesting things. I no longer have to remember phone numbers, I no longer have to carry a paper notepad, my bookshelf full of reference books that constantly needed to be refreshed became a Google search away instead. Intellisense/code completion meant I didn't have to waste time memorizing every specific syntax and keyword. Hell, IDEs have been generating code for a long time. I was using Visual Studio to automatically generate model classes from my database schema for as long as I can remember, and even generating CRUD pages.
The opportunity to outsource even more of the 'busywork' is great. Isn't this what technology is supposed to do? Automate away the boring stuff?
The only reasoning I can think of is that the most vocal opponents work in careers where that same busywork is actually most of their job, and so they are naturally worried about their future.
Wowfunhappy
17 hours ago
> Ever since I got my hands on my first computer as a kid, I've been outsourcing parts of my brain to computing so that I can focus on more interesting things. I no longer have to remember phone numbers, I no longer have to carry a paper notepad, my bookshelf full of reference books that constantly needed to be refreshed became a Google search away instead. Intellisense/code completion meant I didn't have to waste time memorizing every specific syntax and keyword. Hell, IDEs have been generating code for a long time. I was using Visual Studio to automatically generate model classes from my database schema for as long as I can remember, and even generating CRUD pages.
I absolutely agree with you, but I do think there's a difference in kind between a deterministic automation you can learn to use and get better at, and a semi-random coding agent.
The thing I'm really struggling with is that unlike e.g. code completion, there doesn't seem to be a clear class of tasks that LLMs are good at vs bad at. So until the LLMs can do everything, how do I keep myself in the loop enough that I'll have the requisite knowledge to step in when the LLM fails?
You mention how technology means we no longer have to remember phone numbers. But what if all digital contact lists had a very low chance of randomly deleting individual contacts over time? Do you keep memorizing phone numbers? I'm not sure!
jimbokun
17 hours ago
Like the almost-but-not-quite self-driving cars.
doug_durham
17 hours ago
Thank you for expressing well what I was thinking. I derive intense joy from coding. Like you, over my 40-year career I've been exploiting more and more ways to outsource work to computers. The space of software is so vast that I've never worried for a second that I'd not have work to do. Coding is a means to solving interesting problems. It is not an end in itself.
trinsic2
13 hours ago
When you offload that stuff to a computer, you lose cognitive abilities. Heck, I'm even being careful how much I use mapping tools now, because I want to know where I am going and how I get there.
FYI: I do not work for any corporation; I provide technical services directly to the public. So there really are concerns about this tech among everyday people who do not have a stake in keeping a job.
jimbokun
17 hours ago
The marketing for AI is that it will soon replace THE INTERESTING PARTS too. Because it will be better than humans at everything.
For you, what are “the interesting parts”, and why do you believe in principle a machine won’t do those parts better than you?
thewebguyd
17 hours ago
What the "interesting parts" are is hard to quantify, because my interests vary; even if a machine can do those parts better than me, that doesn't necessarily mean I'll use the machine.
The arts are a good example. I still enjoy analog photography & darkroom techniques. Digital can (arguably) do it better, faster, and cheaper. That doesn't change the hobby for me.
But, at least the option is there. Should I need to shoot a wedding, or some family photos for pay, I don't bust out my 35mm range finder and shoot film. I bring my R6, and send the photos through ImagenAI to edit.
In that way, the interesting parts are whatever I feel like doing myself, for my own personal enjoyment.
Just the other day I used AI to help me make a macOS utility to show a live wallpaper from an mp4. I didn't feel like paying for any of the existing "live wallpaper" apps; it's probably a side project I would never have done otherwise. It almost one-shot it, aside from a use-after-free bug I had to fix myself, which ended up being quite enjoyable. In that instance, the interesting part was finding a problem and fixing it, while I got to outsource 90% of the rest of the work.
I'm rambling now, but the TL;DR is I'm more so excited about having the option to outsource portions of something rather than always outsourcing. Sometimes all you need is a cheap piece of mass produced crap, and other times you want to spend more money (or more time) making it yourself, or buying handmade from an expert craftsman.
JohnBooty
17 hours ago
This was very insightful. It made me think about how "hacker culture" has changed.
I'm middle-aged. 30 years ago, hacker culture as I experienced it was about making cool stuff. It was also about the identity -- hackers were geeks. Intelligent, and a little (or a lot) different from the rest of society.
Generally speaking, hackers could not avoid writing code. Whether it was shell scripts or HTML or Javascript or full-blown 3D graphics engines. To a large extent, coding became the distinguishing feature of "hackers" in terms of identity.
Nearly anybody could install Linux or build a PC, but writing nontrivial code took a much larger level of commitment.
There are legitimate functional and ethical concerns about AI. But I think a lot of "hackers" are in HUGE amounts of denial about how much of their opposition to AI springs from having their identities threatened.
jimbokun
17 hours ago
Well there are a lot of us very clear that our identities are being threatened and scared shitless we will lose the ability to pay our rent or buy food because of it.
JohnBooty
17 hours ago
As somebody currently navigating the brutal job market, I'm scared shitless about that too. I have to tell you though, that the historical success rate of railing against "technologies that make labor more efficient" is currently at 0.0000000%.
We've survived and thrived through inflection points like this before, though. So I'm doing my best to have an adapt-or-die mindset.
"computers are taking away human jobs"
"visual basic will eliminate the need for 'real coders'"
"nobody will think any more. they'll 'just google it' instead of actually understanding things"
"SQL is human readable. it's going to reduce the need for engineers" (before my time, admittedly)
"offshoring will largely eliminate US-based software development"
etc.
Ultimately (with the partial exception of offshoring) these became productivity-enhancers that increased the expectations placed on the shoulders of engineers and expanded the profession, not things that replaced the profession. Admittedly, AI feels like our biggest challenge yet. Maybe.
blackqueeriroh
4 hours ago
We exist inside of capitalism - particularly late-stage deeply unregulated capitalism.
You are always very close to losing the ability to pay your rent or buy food not because your identity is being threatened but because a bunch of people who don’t care about you and only care about making money will happily lay you off without a second thought if they think it will make the even richer people above them happy.
And they will do this whether there is an AI boom or not.
yawnxyz
12 hours ago
This kind of nails it; hacker culture folks grew up and got families and mortgages, so change comes with the territory.
thewebguyd
17 hours ago
> opposition to AI springs from having their identities threatened.
I think there's definitely some truth to this. I saw similar pushback from the "learn to code" and coding bootcamp era, and you still frequently see it in Linux communities where anytime the prospect of more "normies" using Linux comes up, a not insignificant part of the community is actively hostile to that happening.
The attitude goes all the way back to Eternal September.
tbrownaw
15 hours ago
And it's "the bootcamp era" rather than the new normal because it didn't work out as well as advertised. Because of the issues highlighted in that pushback.
ukFxqnLa2sBSBf6
18 hours ago
I consider myself progressive and my main issue with the technology is that it was created by stealing from people who have not been compensated in any way.
I wouldn’t blame any artist that is fundamentally against this tech in every way. Good for them.
notJim
17 hours ago
Every artist and creator of anything learned by engaging with other people's work. I see training AI as basically the same thing. Instead of training an organic mind, it's just training a neural network. If it reproduces works that are too similar to the original, that's obviously an issue, but that's the same as human artists.
dweinus
16 hours ago
This is a bad-faith argument, but even if I were to indulge it: human artists can and do get sued for mimicking the works of others for profit, which is precisely what AI does. Secondly, many of the works in question have explicit copyright terms that prohibit derivative works. They have built a multi-billion-dollar industry on scaled theft. I don't see a more charitable interpretation.
blackqueeriroh
4 hours ago
LLMs don’t have to be able to mimic things. And go ahead and sue OpenAI and Anthropic! It won’t bother me at all. Fleece those guys. Take their money. It won’t stop LLMs, even if we bankrupted OpenAI and Anthropic.
hackable_sand
5 hours ago
> I see training AI as basically the same thing
Of course you do.
_DeadFred_
12 hours ago
Human beings are human beings.
For-profit products are for-profit products, and they are required to compensate if they are derivative of other works (in this case, there would be no AI product without the upstream training data, which checks the box that it's derivative).
If you would like to change the laws, OK. But simply breaking them and saying 'but the machine is like a person' is still... just breaking the laws and stealing.
blackqueeriroh
4 hours ago
Many of those people have absolutely been compensated, many times over. Should they have a perpetual right, lasting longer than their entire life, to profit off something they did 40 years ago?
musicale
18 hours ago
It's "unauthorized use" rather than "stealing", since the original work is not moved anywhere. It's more like using your creative work to train a software system that generates similar-looking, competing works, for pennies, at industrial scale and speed.
andrei_says_
18 hours ago
Obtaining without payment or consent and then using it to create derivative works at scale?
And the pedantry matters only because the entities committing the crimes are too big and too rich, and financed by the right people.
It is basically a display of the societal threshold beyond which laws are not enforced.
musicale
18 hours ago
> Obtaining without payment or consent
Usually "obtaining" is just making a bunch of HTTP requests - which is kind of how the web is designed to work. The "consent" (and perhaps "desired payment" when there is no paywall) issue is the important bit and ultimately boils down to the use case. Is it a human viewing the page, a search engine updating its index, or OpenAI collecting data for training? It is annoying when things like robots.txt are simply ignored, even if they are not legally or technically binding.
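The comment notes that robots.txt is not technically binding, but honoring it is trivial; Python's standard library even ships a parser for it. A minimal sketch of what a well-behaved crawler could do before fetching (the rules and the "GPTBot" agent string here are illustrative, not a claim about any site's actual policy):

```python
# Sketch: a crawler consulting robots.txt before fetching a page.
from urllib import robotparser

# Illustrative robots.txt that bans one AI crawler but allows everyone else.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)  # in practice: rp.set_url("https://example.com/robots.txt"); rp.read()

# A training crawler honoring the file would skip this site entirely...
print(rp.can_fetch("GPTBot", "https://example.com/some-article"))       # False
# ...while other user agents remain free to fetch.
print(rp.can_fetch("SomeBrowser", "https://example.com/some-article"))  # True
```

The point of contention in the thread is exactly that this check is voluntary: nothing in HTTP forces a client to run it.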
The legal situation is unsurprisingly murky at the moment. Copyright law was designed for a different use case, and might not be the right tool or regulatory framework to address GenAI.
But as I think you are suggesting, it may be an example of regulatory entrepreneurship, where (AI) companies try to move forward quickly before laws and regulations catch up with them, while simultaneously trying to influence new laws and regulations in their favor.
[Copyright law itself also has many peculiarities, for example not applying to recipes, game rules, or fashion designs (hence fast fashion, knockoffs, etc.) Does it, or should it, apply to AI training and GenAI services? Time will tell.]
crimsoneer
18 hours ago
The pedantry matters for the same reason it mattered when the music industry did this to Napster: because the truth is important.
jimbokun
17 hours ago
Ok Mr. (Or Ms.) Pedant you know what the intended meaning was.
taco_emoji
18 hours ago
everybody knows this, you are being uselessly pedantic
LPisGood
17 hours ago
> hacker circles didn't always have this 'progressive' luddite mentality
Richard Stallman has his email printed out on paper for him to read, and he only connects to the internet by using wget to fetch web pages and then has them printed off.
bravetraveler
18 hours ago
In a way, the busy work is padding. If the day becomes entirely difficult, I want more reward or time away.
I understand how LLMs may improve the situation for the employer; for me personally, or with my peers: no.
oytis
16 hours ago
IDK, to me it looks like hacker culture has always been progressive; it's just that the definition of what is progressive has changed somewhat.
But hacker culture always sought to empower an individual (especially a smart, tech-savvy individual) against corporations, and rejection of gen AI seems reasonable in this light.
If hacker culture wasn't luddite, it's because of the widespread belief that the new digital technology does empower the individual. It's very hard to believe the same about LLMs, unless your salary depends on it
portana77
9 hours ago
Why do you have to reject LLMs? Wouldn’t a real hacker grab a local model and fiddle with it? Who are you people?
amarant
19 hours ago
It's Turing's Law:
Any person who posts a sufficiently long text online will be mistaken for an AI.
chemotaxis
18 hours ago
> It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people.
It happens, but I think it's pretty uncommon. What's a lot more common is people getting called out for offloading tasks to LLMs in a way that just breaches protocol.
For example, if we're having an argument online and you respond with a chatbot-generated rebuttal to my argument, I'm going to be angry. This is because I'm putting an effort and you're clearly not interested in having that conversation, but you still want to come out ahead for the sake of internet points. Some folks would say it's fair game, but consider the logical conclusion of that pattern: that we both have our chatbots endlessly argue on our behalf. That's pretty stupid, right?
By extension of this, there's plenty of people who use LLMs to "manage" their online footprint: write responses to friends' posts, come up with new content to share, generate memes, produce a cadence of blog posts. Anyone can ask an LLM to do that, so what's the point of generating this content in the first place? It's not yours. It's not you. So what's the game, other than - again - trying to come out on top for internet points?
Another fairly toxic pattern is when people use LLMs to produce work output without the effort to proofread or fact-check it. Over the past year or so, I've gotten so many LLM-generated documents that simply made no sense, and the sender considered their job to be done and left the QA to me.
jimbokun
17 hours ago
The reason to be angry about the chatbot generated argument is that without sources it’s likely to have hallucinated a few things.
blackqueeriroh
4 hours ago
This is also quite true of most human generated arguments
bill3389
18 hours ago
Unfortunately, there will be less and less purely human-generated content; more and more will be AI-generated or AI-assisted in the future.
We are angry because we grew up in an age when content was generated by humans and computer bots were inefficient. For newer generations, however, AI-generated content will be the new normal, like how we got used to watching people in a big flat box (TV).
aprilthird2021
18 hours ago
I'm looking at code at my tech job right now where someone outsourced it to AI, didn't proofread it, and didn't realize that a comparison table it built is just running over the same dataset twice, causing every comparison to look "identical" even when the data isn't.
blackqueeriroh
4 hours ago
Yes, but that isn’t the fault of the LLM. That’s human sloppiness. Humans have failed to review sloppy and bad code since people have been coding.
kyle-rb
17 hours ago
People assume programmers have the same motivations as luddites but "smashing the autolooms" presumably requires firebombing a whole bunch of datacenters, whereas it's pretty easy to download and run an open-source Chinese autoloom.
GrantMoyer
18 hours ago
I largely agree with this, but at the same time, I empathize with the FA's author. I think it's because LLMs feel categorically different from other technological leaps I've been excited about.
The recent results in LLMs and diffusion models are undeniably, incredibly impressive, even if they're not to the point of being universally useful for real work. However, they fill me with a feeling of supreme disappointment, because each is just this big black box we shoved an unreasonable amount of data into, and now the black box is the best image processing/natural language processing system we've ever made. Depending on how you look at it, they're either so unimaginably complex that we'll never understand how they really work, or they're so brain-dead simple that there's nothing to really understand at all. It's like some cruel joke the universe decided to play on people who like to think hard and understand the systems around them.
jlarcombe
11 hours ago
Agree totally. Reminiscent of the Paul Erdős reaction to the proof of the Four Colour Problem.
It's been quite good reading these comments because a lot of them have put into words my own largely negative feelings about the ubiquitous AI hype, which I have found hard to articulate. Your second paragraph does this, as does someone else's comment about how they are attracted to computer science because they like fiddly detail and so are uninterested in a machine hiding all that, and a third comment about how so-called "busy work" is actually a good way of padding out difficult stuff, so that a job of work becomes much less palatable when it is excised entirely.
The other thing I find deeply depressing is the degree to which people are thrilled (genuinely) by dreadful-looking AI art and unbearable-to-read AI prose. It makes me think I've been kidding myself for years that people by and large have a degree of taste. Then again, maybe it just means it's not to my taste...
raincole
18 hours ago
> It's like some cruel joke the universe decided to play on people who like to think hard and understand the systems around them.
Yeah. This cruel joke even has a name: The Bitter Lesson.
https://en.wikipedia.org/wiki/Bitter_lesson
But think about it: if digital painting were solved not by a machine learning model but by human-readable code, it would be an even more bleak and cruel joke, wouldn't it?
GrantMoyer
17 hours ago
> if digital painting were solved not by a machine learning model but by human-readable code, it would be an even more bleak and cruel joke, wouldn't it?
On the contrary, I'm certain such a program would be filled with fascinating techniques, and I have no dread for the idea that humans aren't special.
Glemkloksdjf
17 hours ago
Interesting that people seem to have this assumption.
"The lesson is considered "bitter" because it is less anthropocentric than many researchers expected and so they have been slow to accept it."
I mean, we are so many people on the planet; it's easy to feel useless when you know you can be replaced by millions of other humans. How is that different from being replaced by a computer?
I was not sure how AGI would come to us, but I assumed there will be AGI in the future.
Weirdest thing for me is mathematics and physics: I assumed those would be such easy fields to find something 'new' in through brute force alone; I'm more shocked that this is only happening now.
I realized with DeepMind and AlphaFold that the smartest people with the best tools are in industry, and specifically in the IT industry, because they are a lot better at using tools to help them than normal researchers who struggle writing code.
LocalH
18 hours ago
A good start (albeit the most basic one) would be to encourage budding hackers to read through the Jargon File.
Y_Y
18 hours ago
I think that's going to become like asking a child to read Shakespeare; surely valuable, but requiring a whole parallel text to give modern translation and context.
m000
18 hours ago
I think you're missing that a lot of what we call "learning" would be categorized as "busy work" after the fact. If we replace this "busy work" with AI, we are becoming collectively more stupid. Which may be a goal in itself for our AI overlords.
As Mr. Miyagi said: "Wax on. Wax off."
This may turn out very profitable for the pre-AI generations, as the junior to senior pipeline won't churn seniors at the same rate. But following generations are probably on their way to digital serfdom if we don't act.
thewebguyd
17 hours ago
> If we replace this "busy work" with AI, we are becoming collectively more stupid.
I've seen this same thing said about Google: "If you outsource your memory to Google searches instead, you won't be able to do anything without it, and you'll become dumber."
Maybe that did happen, but it didn't seem to result in any meaningful change on the whole. Instead, I got to waste less time memorizing things, or spending time leafing through thousand page reference manuals, to find something.
We've been outsourcing parts of our brains to computers for decades now. That's what got me interested and curious about computers when I got my first machine as a kid (this was back in the late 90s/early 00s). "How can I automate as much of the boring stuff as possible to free myself up for more interesting things."
LLMs are the next evolution of that to an extent, but I also think they do come with some harms and that we haven't really figured out best practices yet. But, I can't help but be excited at the prospect of being able to outsource even more to a computer.
jonas21
17 hours ago
Indeed, this line of reasoning goes all the way back to Socrates, who argued that outsourcing your memory to writing would make you stupider [1]:
> For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.
I, for one, am glad we have technologies -- like writing, the internet, Google, and LLMs -- that let us expand the limits of what our minds can do.
[1] https://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext...
jimbokun
17 hours ago
If you extend Google to include social networks, the damage to human mental health and well-being is difficult to calculate.
blackqueeriroh
4 hours ago
But that is a design choice and not a fundamental property!
jimbokun
17 hours ago
Pathway to Idiocracy.
tbrownaw
17 hours ago
> Which may be a goal in itself for our AI overlords.
That doesn't seem exactly likely.
johnnyanmac
9 hours ago
>All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.
And the dangerous part is that we are so hasty to remove that "busy work" that we fail to make sure it's done right. That willful ignorance seems counter to hacker culture which should encourage curiosity and a deeper understanding.
>It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people.
In my experience, it is AI-generated more often than not. And yes, it is worth calling out. If you can't engage with the public, why do you expect them to engage with you?
It's like being mad about being passed over in the current round of a game, all while you clearly have a phone to your ear.
andrei_says_
18 hours ago
I have only experienced the exact opposite: AI tools being forced on employees left and right, and infinite starry-eyed fake enthusiasm amongst a rising ocean of slop poisoning all communication and written human knowledge at scale.
I have yet to see issues caused by restraint.
> It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than who they wage their campaigns against.
blackqueeriroh
4 hours ago
> I have only experienced the exact opposite: AI tools being forced on employees left and right, and infinite starry-eyed fake enthusiasm amongst a rising ocean of slop poisoning all communication and written human knowledge at scale.
> I have yet to see issues caused by restraint.
Again, how is that the fault of the technology?
MetaWhirledPeas
17 hours ago
Well, there's more than just one hacker circle. There never really was only one, and it's less and less the case as the earth's technologically inclined population increases.
Culture is emergent. The more you try to define it, the less it becomes culture and the more it becomes a cult. Instead of focusing on culture I prefer to focus on values. I value craftsmanship, so I'm inclined to appreciate normal coding more than AI-assisted coding, for sure. But there's also a craftsmanship to gluing a bunch of AI technologies together and observing some fantastic output. To willfully ignore that is silly.
The OP's rant comes across as a wistful pining for the days of yore, pinning its demise on capitalists and fascists, as if they had this AI thing planned all along. Focusing on boogeymen isn't going to solve anything. You also can't reverse time by demanding compliance with your values or forming a union. AI is here to stay and we're going to have to figure out how to live with it, like it or not.
zemo
17 hours ago
> This is the culture that replaced hacker culture.
Breathless hustlecore tech industry culture is a place where finance bros have turned programmers into dogs that brag to one another about what a good dog they are. We should reject at every turn the idea that such a culture represents the totality of programming. Programming is so much more than that.
nrclark
19 hours ago
Ironically, the actual luddites weren't anti-technology at all. Mechanized looms at the time produced low-quality, low-durability cloth at low prices. The luddite pushback was more about the shift from durable to disposable.
It's a message that's actually pretty relevant in an age of AI slop.
CharlesW
18 hours ago
They were anti-technology in the sense that they destroyed the machines, because of the machines' negative effects on pay and quality. Maybe you could debate whether they were anti-technology absent its effects, but all technologies have effects. https://en.wikipedia.org/wiki/Luddite
blackqueeriroh
4 hours ago
Show me AI slop and then I’ll immediately show you AI haute cuisine.
If you think LLMs only produce low effort garbage, you truly haven’t been paying attention.
ronsor
18 hours ago
The only thing more insufferable than the "AI do everything and replace everyone" crowd is the "AI is completely useless" crowd. It's useful for some things and useless for others, just like any other tool you'll encounter.
musicale
18 hours ago
The proposition that AI is completely useless is trivially nullified. For example, it is provably useful for large-scale cheating on course assignments - a non-trivial task that had previously used human-operated "essay mills" and other services.
shadowgovt
18 hours ago
Hackers in the '80s were taking apart phone hardware and making free long-distance calls because the phone company didn't deserve its monopoly purely for existing before they were born. Hackers in the '90s were bypassing copyright and wiping the hard drive of machines they cobbled together out of broken machines to install an open source OS on it so that Redmond, WA couldn't dictate their computing experience.
I think there's a direct through-line from hacker circles to modern skepticism of the kind of AI discussed in this article: the kind where rules you don't control determine the behavior of the machine and where most of the training and operation of the largest and most successful systems can, currently, only be accessed via the cloud portals of companies with extremely questionable ethics.
... but I don't expect hackers to be anti-AI indefinitely. I expect them to be sorting out how many old laptops with still-serviceable graphics cards you have to glue together to build a training engine that can produce a domain-specific tool that rivals ChatGPT. If that task proves impossible, then I suspect based on history this may be the one place where hackers end up looking a little 'luddite' as it were.
... because "If the machine cannot be tamed it must be destroyed" is very hacker ethos.
k6hkUZtLUM
16 hours ago
The whole point was to take these things apart, figure out how they work, and make them do the things we want them to do instead of being bound by arbitrary rules.
Bypassing arbitrary (useless, silly, meaningless, etc.) rules has always been a primary motivating factor for some of us :D
paganel
19 hours ago
That's because AI-generated memes are lame, not saying that memes are smart, generally speaking, but the AI-generated ones are even lamer. And nothing wrong with being a luddite, to the contrary, in this day and age still thinking that technology is the way forward no matter what is nothing short of criminal.
anubistheta
18 hours ago
I agree. I think this is what happens when a person transitions from a progressive mindset to a conservative one, but has made being "progressive" a central tenet of their identity.
Progressiveness is forward-looking and a proponent of rapid change, so it is natural that LLMs are popular among that crowd. Also, progressivism should accept and encourage the evolution of concepts and social constructs.
In reality, many people define "progressiveness" as "when things I like happen, not when things I don't like happen." When they lose control of the direction of society, they end up just as reactionary and dismissive as the people they claim to oppose.
>AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists.
>Craft, expression and skilled labor is what produces value, and that gives us control over ourselves
To me, that sums up the author's biases. You may value skilled labor, but generally people don't. Nor should they. Demand is what produces value. The latter half of the piece falls into a diatribe of "Capitalism Bad".
octorian
18 hours ago
Just seeing that sentence fragment about "structures of power and violence" told me so much about the author. It's the sort of language that brings with it a whole host of stereotypes, some of which were immediately confirmed with a little more digging (and others would require way too much effort to confirm, but likely could be).
And yes, this whole "capitalism bad" mentality I see in tech does kinda irk me. Why? Because it was capitalism that gave them the tools to be who they are and the opportunities to do what they do.
thewebguyd
17 hours ago
> And yes, this whole "capitalism bad" mentality I see in tech does kinda irk me. Why? Because it was capitalism that gave them the tools to be who they are and the opportunities to do what they do.
It's not hard to see why that mentality exists though. That same capitalism also gave rise to the behemoth, abusive monopolies we have today. It gave rise to the over financialization of the sector and declining product quality because you get richer doing stock buybacks and rent-seeking instead of making a better product.
Early hacker culture was also very much not pro-capitalism. The core principle of "Information should be free" itself is a statement against artificial scarcity and anti-proprietary systems, directly opposed to the capitalist ethos of locking up knowledge for profit. The FOSS we use and love rose directly from this culture, which is fundamentally communal, not capitalist.
QuercusMax
16 hours ago
You're completely ignoring the huge amount of public money that went into building the Internet and doing research.
Capitalism didn't build the internet: public spending did.
_DeadFred_
11 hours ago
The government and military spending gave us the tools we have today.
Glemkloksdjf
17 hours ago
Capitalism is bad.
I'm not ignorant of the fact that it helped us for quite a long time, but it also created climate change. And overpopulation.
We are still stuck on planet earth, and have not figured out the reason for life or the origin of the universe.
I would prefer a world where we think about using the resources earth provides sustainably, and how to use them most efficiently for the maximum number of human beings. The rest we would use to advance society.
I would like to have Post-Scarcity Scientific Humanism
jimbokun
17 hours ago
You would need to demonstrate that some other system would have given us all the things you want while avoiding every problem you cite, while not introducing other comparable or worse problems.
maddmann
17 hours ago
How did capitalism create overpopulation? Isn’t that more related to agriculture and better medical tech?
pksebben
19 hours ago
Likely progressive, but definitely not luddite [0]. Anti-capitalist for sure.
I struggle with this discourse deeply. With many posters like OP, I align almost completely - unions are good, large megacorps are bad, death to facists etc. It's when we get to the AI issue that I do a bit of a double take.
Right now, AI is almost completely in the hands of a few large corp entities, yes. But once upon a time, so was the internet, so were processing chips, so was software. This is the power of the byte - it shrinks progressively and multiplies infinitely - thus making it inherently diffuse and populist (at the end of the day). It's not the relationship to our cultural standards that causes this - it's baked right into the structure of the underlying system. Computing systems are like sand - you can melt them into a tower of glass, but those are fragile and will inevitably become sand once again. Sand is famously difficult to hold in a tight grasp.
I won't say that we should stop fighting against the entrenchment of powers like OpenAI - fine, that's potentially a worthy fight and if that's what you want to focus on go ahead. However, if you really want to hack the planet, democratize power and distribute control, what you have to be doing is working towards smaller local models, distributed training, and finding an alternative to backprop that can compete without the same functional costs.
We are this close to having a guide in our pocket that can help us understand the machine better. Forget having AI "do the work" for you, it can help you to grok the deeper parts of the system such that you can hack them better - and if we're to come out of this tectonic shift in tech with our heads above water, we absolutely need to create models that cannot be owned by the guy with the $5B datacenter.
Deepseek shows us the glimmer of a way forward. We have to take it. The megacorp AI is already here to stay, and the only panacea is an AI that they cannot control. It all comes down to whether or not you genuinely believe that the way of the hacker can overcome the monolith. I, for one, am a believer.
jimbokun
17 hours ago
Not true for the Internet. It was the open system anyone could join and many people were shocked it succeeded over the proprietary networks being developed.
billy99k
18 hours ago
How are unions any better than mega corps? My brother is part of a union and the leaders make millions.
He's pigeonholed at the same low pay rate and can't ever get a raise until everyone in the same role also gets a raise (which will never happen). It traps people, because many union jobs can't or won't innovate, and when workers look elsewhere, they are underskilled (and stuck).
You mention 'deepseek'. Are you joking? It's owned by the Chinese government..and you claim to hate fascism? Lol?
Big companies only have the power now because the processing power to run LLMs is expensive. Once there are breakthroughs, anyone can have the same power in their house.
We have been in a tech slump for a while now. Large companies will drive innovations for AI that will help everyone.
pksebben
18 hours ago
That's not a union - that's another corporate entity parading as a union. A union, operating as it should, is governed by the workers as a collective and enriches all of them at the same rate.
Deepseek is open source, which is why I mention it. It was made by the Chinese government but it shows a way to create these models at vastly reduced cost and was done with transparent methodology so we can learn from it. I am not saying "the future is Deepseek", I am saying "there are lessons to be learned from Deepseek".
I actually agree with you on the corporate bootstrap argument - I think we ought to be careful, because if they ever figure out how to control the output they will turn off outputs that help develop local models (gotta protect that moat!), but for now I use them myself to study and learn about building locally and I think everyone else ought to get on this train as well. For now, the robust academic discourse is a very very good thing.
hobofan
18 hours ago
The top of the megacorps make 4-6 orders of magnitude more than labor union leaders. To claim that there is no difference is mind-boggling.
billy99k
11 hours ago
Having 10 million and 30 million aren't all that different to the minimum wage workers barely making ends meet.
_whiteCaps_
17 hours ago
Just because your brother's union sucks, doesn't mean they all do.
billy99k
12 hours ago
I've been on both sides of it (in a union and working with union members when there was no other choice).
They all work the same way. I'm fundamentally against the idea of unions after seeing how they stifle innovation in nearly all industries they control.
bgwalter
19 hours ago
Being anti "AI" has nothing to do with being progressive. Historically, hackers have always rejected bloated tools, especially those that are not under their control and that spy on them and build dossiers like ChatGPT.
Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.
It is relatively new that some corporate owned "open" source developers use things like VSCode and have no issues with all their actions being tracked and surveilled by their corporate masters.
Please do not co-opt the term "hacker".
forgetfulness
19 hours ago
Hackers never had a very cohesive and consistent ideology or moral framework. We heard nonstop of the exploits of people funded as part of Cold War military pork projects that eventually got the plug pulled, but some antipathy and mistrust of the powerful, and a belief in the power of knowledge, were recurrent themes nonetheless.
So why is it a surprise that hackers mistrust these tools pushed by megacorps, that also sell surveillance to governments, with “suits” promising other “suits” that they’ll be making knowledge obsolete? That people will no longer need to use their brains, that people with knowledge won’t be useful?
It’s not Luddism that people with an ethos of empowering the individual with knowledge are resisting these forces
amarant
19 hours ago
The problem here isn't resisting those forces, that's all well and good.
The problem is the vast masses falling under Turing's Law:
"Any person who posts a sufficiently long text online will be mistaken for an AI."
Not usually in good faith however.
forgetfulness
18 hours ago
I don’t know how we’ll fix it
Just taking what people argue for on its own merits breaks down when your capacity to read whole essays or comments chains is so easily overwhelmed by the speed at which people put out AI slop
How do you even know that the other person read what they supposedly wrote, themselves, and you aren’t just talking to a wall because nobody even meant to say the things you’re analyzing?
Good faith is impossible to practice this way, I think people need to prove that the media was produced in good faith somehow before it can be reasonably analyzed in good faith
It’s the same problem with 9000 slop PRs submitted for code review
amarant
18 hours ago
I've seen it happen to short, well-written articles. Just yesterday there was an article that discussed the author's experiences maintaining his FOSS project after getting a fair number of users, and of course someone in the HN comments claimed it was written by AI, even though there were zero indications it was, and plenty of indications it wasn't.
Someone even argued that you could use prompts to make text look like it wasn't AI, and that this was the best explanation for why the article didn't look like AI slop.
If we can't respect genuine content creators, why would anyone ever create genuine content?
I get that these people probably think they're resisting AI, but in reality they're doing the opposite: these attacks weigh far heavier on genuine writers than they do on slop-posters.
The blanket bombing of "AI slop!" comments is counterproductive.
It is kind of a self-fulfilling prophecy, however: keep it up and soon everything really will be written by AI.
FuriouslyAdrift
19 hours ago
VSCodium is the open source "clean" build of VS Code without all the Microsoft telemetry and under MIT license.
Aurornis
19 hours ago
> Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.
A lot of hackers, including the black hat kind, DGAF about your ideological purity. They get things done with the tools that make it easy. The tools they’re familiar with.
Some of the hacker circles I was most familiar with in my younger days primarily used Windows as their OS. They did a lot of reverse engineering using Windows tools. They might have used .NET to write their custom tools because it was familiar and fast. They pulled off some amazing reverse engineering feats.
Yet when I tell people they preferred Windows and not Linux you can tell who’s more focused on ideological purity than actual achievements because eww Windows.
> Please do no co-opt the term "hacker".
Right back at you. To me, hacker is about results, not about enforcing ideological purity about only using the acceptable tools on your computer.
In my experience: The more time someone spends identifying as a hacker, gatekeeping the word, and trying to make it a culture war thing about the tools you use, the less “hacker” like they are. When I think of hacker culture I think about the people who accomplish amazing things regardless of the tools or whether HN finds them ideologically acceptable to use.
phil21
19 hours ago
> To me, hacker is about results
Same to me as well. A hacker would "hack out" some tool in a few crazy caffeine-fueled nights that would be ridiculed by professional devs who had been working on the problem as a 6-man team for a year. Only the hacker's tool actually worked and saved 8000 man-hours of dev time. Code might be ugly, might use foundational tech everyone sneers at - but the job would be done. Maintaining it was left up to the normies to figure out.
It implies deep-level expertise about a specific niche in the space they are hacking on. And it implies "getting shit done" - not making things full of design beauty.
Of course there are different types of hackers everywhere - but that was the "scene" to me back in the day. Teenage kids running circles around the greybeards clucking at the kids doing it wrong.
Aurornis
19 hours ago
> but that was the "scene" to me back in the day.
Same. Back then, and even now, the people who were busy criticizing other people for using the wrong programming language, text editor, or operating system were a different set of people than the ones actually delivering results.
In a way it was like hacker fashion: These people knew what was hot and what was not. They ran the right window manager on the right hardware and had the right text editor and their shell was tricked out. They knew what to sneer at and what to criticize for fashion points. But actually accomplishing things was, and still is, orthogonal to being fashionable.
DennisP
19 hours ago
To wit: my brother has never worked as a developer and has just a limited knowledge of python. In the past few days, he's designed, vibe-coded, and deployed a four-player online chess game, in about four hours of actual work, using Google's Antigravity. I looked at the code when it was partly done, and it was pretty good.
The gatekeepers wouldn't consider him a hacker, but that's kinda what he is now.
mattgreenrocks
19 hours ago
Ideological purity is a crutch for those that can't hack it. :)
I love it when the .NET threads show up here, people twist themselves in knots when they read about how the runtime is fantastic and ASP.NET is world class, and you can read between the lines of comments and see that it is very hard for people to believe these things while also knowing that "Micro$oft" made them.
Inevitably when public opinion swells and changes on something (such as VSCode), all the dissonance just melts away, and they were _always_ a fan. Funny how that works.
lxgr
19 hours ago
> hackers have always rejected bloated tools [...] Hackers have historically derided any website generators
Ah yes, true hackers would never, say, build a Debian package...
Managing complexity has always been part of the game. To a very large extent it is the game.
Hate the company selling you a SaaS subscription to the closed-source tool if you want, and push for open-source alternatives, but don't hate the tool, and definitely don't hate the need for the tool.
> Please do not co-opt the term "hacker".
Indeed, please don't. And leave my true scotsman alone while we're at it!
bgwalter
19 hours ago
Local alternatives don't work, and you know that.
billy99k
18 hours ago
Being anti-AI means you want to conserve the old ways rather than embrace new technology. Hardly what I would call 'progressive'.
poszlem
19 hours ago
> That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.
People who haven't lived through the transition will likely come here to tell you how wrong you are, but you are 100% correct.
codeflo
19 hours ago
You were proven right three minutes after you posted this. Something happened, I'm not sure what or how. Hacking got reduced to "hacktivism", and technology stopped being the object of interest in those spaces.
Blackthorn
19 hours ago
> and technology stopped being the object of interest in those spaces.
That happened because technology stopped being fun. When we were kids, seeing Penny communicating with Brain through her watch was neat and cool! Then when it happened in real life, it turned out that it was just a platform to inject you with more advertisements.
The "something" that happened was ads. They poisoned all the fun and interest out of technology.
Where is technology still fun? The places that don't have ads being vomited at you 24/7. At-home CNC (including 3d printing, to some extent) is still fun. Digital music is still fun.
DennisP
18 hours ago
A lot of fun new technology gets shouted down by reactionaries who think everything's a scam.
Here on "hacker news" we get articles like this, meanwhile my brother is having a blast vibe-coding all sorts of stuff. He's building stuff faster than I ever dreamed of when I was a professional developer, and he barely knows Python.
In 2017 I was having great fun building smart contracts, constantly amazed that I was deploying working code to a peer-to-peer network, and I got nothing but vitriol here if I mentioned it.
I expect this to keep happening with any new tech that has the misfortune to get significant hype.
tekne
19 hours ago
It's not ads, honestly. It's quality. The tool being designed to empower the user. Have you ever seen something encrusted in ads be designed to empower the user? At a minimum, ads require curtailing the user's power, or the user would simply remove them.
But it's fundamentally a correlation, and this observation is important because something can be completely ad-free and yet disempowering and hence unpleasant to use; it's just that vice-versa is rare.
SpicyLemonZest
19 hours ago
> It's not ads, honestly. It's quality. The tool being designed to empower the user. Have you ever seen something encrusted in ads be designed to empower the user? At least, it necessitates reducing the user's power to remove the ads.
Yes, a number of ad-supported sites are designed to empower the user. Video streaming platforms, for example, give me nearly unlimited freedom to watch what I want when I want. When I was growing up, TV executives picked a small set of videos to make available at 10 am, and if I didn’t want to watch one of those videos I didn’t get to watch anything. It’s not even a tradeoff, TV shows had more frequent and more annoying ads.
recursive
18 hours ago
> Video streaming platforms, for example, give me nearly unlimited freedom to watch what I want when I want.
But they'd prefer if it was shorts.
SpicyLemonZest
18 hours ago
No, they wouldn't. On Youtube, for example, videos were consistently trending longer over time, and you used to see frequent explainers (https://www.wired.com/story/youtube-video-extra-long/) on why this was happening and how Youtube benefits from it. Short-form videos are harder to monetize and reduce retention, but users demand them so strongly that most platforms have built a dedicated experience for them to compete with TikTok.
recursive
16 hours ago
If that was true, I would be able to turn off shorts from my recommendation feed.
SpicyLemonZest
12 hours ago
You can. It’s not a hermetic seal, I assume because they live in the same database as normal videos, but if you’re thinking of the separate “shorts” section there’s a triple dot option to turn it off.
recursive
12 hours ago
I've clicked those triple dots many times. I never saw such an option. I saw "show fewer shorts", and even that seems to be temporary.
swat535
15 hours ago
> That happened because technology stopped being fun.
Exactly, and I'm sure it was our naivete to think otherwise. As software became more common, it grew, regulations came in, corporate greed took over, and "normies" started to use it.
As a result, everything is now filled with subscriptions, ads, cookie banners, and junk.
Let's also not kid ourselves: an entire generation of "bootcamp" devs joined the industry on a quest to make money. This group never shared any particular interest in technology, software, or hardware.
bikelang
18 hours ago
The ads are just a symptom. The tsunami of money pouring in was the corrosive force. Funny enough - I remain hopeful on AI as a skill multiplier. I think that’ll be hugely empowering for the real doers with the concrete skill sets to create good software that people actually want to use. I hope we see a new generation of engineer-entrepreneurs that opt to bootstrap over predatory VCs. I’d rather we see a million vibrant small software businesses employing a dozen people over more “unicorns”.
phil21
19 hours ago
>The "something" that happened was ads. They poisoned all the fun and interest out of technology.
Disagree. Ads hurt, but not as much as technology being invaded by the regular masses who have no inherent interest in tech for the sake of tech. Ads came after this, since they needed an audience first.
Once that line was crossed, it all became far less fun for those who were in it for the sheer joy, exploration, and escape from the mundane social expectations wider society has.
It may encompass both "hot takes" to simply say money ruined tech. Once future finance bros realized tech was an easier path to the easy life than investment banking, all hope was lost.
Blackthorn
19 hours ago
I don't think that just because something becomes accessible to a lot more people that it devalues the experience.
To use the two examples I gave in this thread: digital music is more accessible than ever before, and it's going from strength to strength. While at-home subtractive CNC is still in the realm of deep hobbyists, 3d printing* and CNC cutting/plotting* (Cricut, others) have been accessible to and embraced by the masses for a decade now, and those spaces are thriving!
* Despite the best efforts of some of the sellers of these to lock down and enshittify the platforms. If this continues, this might change and fall into the general tech malaise, and it will be a great loss if that happens.
benmmurphy
18 hours ago
my guess is something like detailed in this article: https://meaningness.com/geeks-mops-sociopaths
jrm4
17 hours ago
No. You're both about 50% correct; what's making everything weird is that the things associated with "hacking" transitioned from "completely optional side hobby" to "fundamental basis of the economy, both bullshit and not."
This is why I'm finding most of this discussion very odd.
dude250711
19 hours ago
The folks who love command-line and terminals had not been luddites all this time?
grayhatter
18 hours ago
lol, no. They're people who think faster. Someone who uses vscode will never produce code faster than someone proficient in vim. Someone who clicks through GUI windows will never be able to control their computer as fast as someone with a command prompt.
I'm sure that there are some examples who enjoy it for the interface. I think a CRT term/emulator is peak aesthetic. And a few who weren't willing to invest the time to use both a GUI and a terminal, and they learned the terminal first.
Calling either group a luddite is stupid, but if I were forced to defend one side: given most people start with a GUI because it's so much easier, I'd rather argue that those who never progress to the faster, more powerful options deserve the insult of luddite.
rkomorn
16 hours ago
> Someone who uses vscode will never produce code faster than someone proficient in vim.
Is this an actually serious/honest take of yours?
I've been using vim for 20 years and, while I've spent almost no time with VS Code, I'd say that a lot of JetBrains' IDEs' built in features have definitely made me faster than I ever was with vim.
Oh wait. No true vim user would come to this conclusion, right?
grayhatter
12 hours ago
The take was supposed to be read as slightly hyperbolic. While the fastest user of an IDE has never come close to the fastest I've seen in vim, as you pointed out, that's not really a reasonable comparison either. Here I'm intentionally only considering raw text editing speed: jumping across lines, switching files. If you're including IDE features, where you expect someone in vim to leave vim, you're comparing something that doesn't equate to my strawman.
My larger point was it's absurd to say someone who's faster using [interface] is a luddite because they don't use [other interface] with nearly identical features.
> Oh wait. No true vim user would come to this conclusion, right?
I guess that's a fitting insult, given I started with a strawman example too.
edit: I can offer another equally absurd example (and why I say it's only slightly hyperbolic, because the following is true): I can write code much faster using vim than I can with [IDE]. I don't even use tab complete or anything similar. I, personally, am able to write better code, faster, when there's nothing but colored text to distract me. Does that make me a luddite? I've tried both, and this fits better for me. Or is it just how comfortable you are with a given interface? Because I know most people find tab complete useful.
rkomorn
11 hours ago
> My larger point was it's absurd to say someone who's faster using [interface] is a luddite because they don't use [other interface] with nearly identical features.
Okay. That, I agree with.
esafak
17 hours ago
IDEs have keyboard shortcuts too, you know.
otabdeveloper4
19 hours ago
> is absolutely filled with busy work that no one really wants to do
Well, LLMs don't fix that problem.
(They fix the "need to train your classification model on your own data" problem, but none of you care about that, you want the quick sci-fi assistant dopamine hit.)
brendoelfrendo
19 hours ago
> That's the thing, hacker circles didn't always have this 'progressive' luddite mentality.
I think, by definition, Luddites or neo-Luddites or whatever you want to call them are reactionaries but I think that's kind of orthogonal to being "progressive." Not sure where progressive comes in.
> All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.
I think that's maybe part of the problem? We shouldn't try to automate the busy work, we should acknowledge that it doesn't matter and stop doing it. In this regard, AI addresses a symptom but does not cure the underlying illness caused by dysfunctional systems. It just shifts work over so we get to a point where AI generated output is being analyzed by an AI and the only "winner" is Anthropic or Google or whoever you paid for those tokens.
> These people bring way more toxicity to daily life than who they wage their campaigns against.
I don't believe for a second that a gaggle of tumblrinas are more harmful to society than a single Sam Altman, lol.
Perepiska
17 hours ago
There's a simple solution: anyone who posts AI-generated content can label it as "AI-generated" and avoid misleading people.