I think the problem is the data.
Software engineering is a weird niche that is both a high-paying job and something you can almost self-teach from widely available free online content. Even if you don't fully self-teach, you can rely on free online content for troubleshooting, examples, etc.
A lot of other industries/jobs follow more of an apprenticeship model, with little data and even less freely available on the open internet.
> something you can almost self-teach from widely available free online content.
I think you massively underestimate just how much data is online for everything, especially once you include books which are freely available on every possible subject (illegally, perhaps, but if Meta can download them for free then so can everyone else).
There's less noise for many other subjects than for software engineering; there are often just a couple of competing ways to do things rather than hundreds. There might be just one coursebook rather than thousands of tutorials. But the data for self-teaching is absolutely there.
Consider two fields with vast amounts of literature: medicine and law.
Medicine faces two key challenges. First, while research follows the scientific method, much of what makes a good doctor—intuition, pattern recognition, and clinical judgment—is rarely documented. Second, medical data is highly sensitive, limiting access to real-world cases, images, and practice opportunities. Theory alone is not enough; hands-on experience is essential.
Law presents a different problem: unknown unknowns. The sheer volume of legal texts makes it nearly impossible to be sure you’ve found everything relevant. Even with search tools, gaps in knowledge remain a major risk.
Compounding this is the way law is actually practiced. Every judge and lawyer operates with a shared foundation of legal principles so basic they are almost never discussed. The real work happens at two higher levels: first, the process—how laws are applied, argued, and enforced in practice. Then, at a third, more abstract level, legal debates unfold about interpretation, precedent, and systemic implications. The first level is assumed, the second is routine, and only the third is where true discussion happens.
Self-teaching is easier in fields where knowledge is structured, accessible, and complete. Many subjects are not.
Really fantastic comment. I would add one criterion for where self-teaching is easier: rapidly testable hypotheses.
A significant chunk of human knowledge is not publicly accessible. You cannot self-teach how to make a modern aircraft, jet engine, nuclear reactor, radar tech, advanced metallurgy etc.
Similarly, I would wager most of the useful economics and financial theory that humans have come up with is only known to hedge or prop trading firms.
For some subjects, the entire journal-published academic body of knowledge for it is probably some useless fraction of the whole and university academia is operating nowhere close to the cutting edge. People are probably doing PhDs today on theses that some defense contractor or HFT firm already discovered 20 years ago.
Even things like specialized medical knowledge, I would wager is largely passed down through mentor-mentee tradition and/or private notes as opposed to textbooks. It's unlikely that you can teach yourself how to do surgery just from textbooks. I once had a pathologist's report use a term for a skin condition that was quite literally ungoogleable. The skin condition itself was fairly ordinary, but the term used was outright esoteric and yet probably used on a daily basis by that pathologist. Where did he learn it from?
Not everything is on the Internet.
Taylor Wilson built a nuclear fusion reactor at his home when he was 14. People build jet engines and put them on model aircraft every day.
If the instructions aren’t immediately available, the internet provides connections and forums to find anything your heart desires.
Information wants to be free.
Arbitraging micro-opportunities (or far more likely, deploying insider information masked as HFT or some secret sauce arbitrage) is not economically useful.
The difference is in the cost of the equipment.
Sure, you can learn all about power electronics by yourself. But have some ideas you want to implement? Hundreds to tens of thousands of dollars.
> Software engineering is a weird niche that is both a high-paying job and something you can almost self-teach
If you meant programming, I agree it could be self-taught, but not SE. SE is the set of techniques, practices, and tools that we have collected over decades for producing multi-versioned software that meets a certain reliability rating. Not all of this is freely available online.
Unless you are talking about people who are actual licensed engineers, this is a distinction without a difference.
Thing is, I’ve never met someone in software with a professional license.
I didn't mean the professional license, rather the ensemble of practices, tools, etc. as practiced in safety-critical domains.
I'm self-taught and had a job in the autonomous vehicle industry writing software that included safety-critical functionality.
I had about 12 YoE at the time, and my manager didn't realize I didn't have a degree until after I was hired. Apparently it hadn't affected my offer, and he was more impressed than anything.
You say:
> SE is the set of techniques, practices, and tools that we have collected over decades for producing multi-versioned software that meets a certain reliability rating. Not all of this is freely available online.
The same way there's no single guide on the internet on how to be the kind of engineer who builds reliable or extensible software, I don't think there's a guide hiding in the average CS curriculum.
Most of it consists of getting repetitions building software that involves the least predictable building block in all of software engineering (people), in all its various forms: from users, to other developers, to yourself (in the future), to "stakeholders", etc.
Learning how to predict and account for the unpredictability in all the people who will intersect with some facet of your software is the closest I've seen to a "universal method" for creating software that meets the criteria you defined.
And honestly I'd be concerned if someone told me you can just be taught some blessed set of tools and practices to get around it... that sounds a lot like not having actually internalized why they work in the first place, and the "why" is arguably more valuable than the tools and practices themselves.
this is a challenging point of view.. on one side, "a job in the autonomous vehicle industry writing safety-critical software" sounds like one of the most slave-ish jobs in the world. This person had 100 other people checking every tiny result, plus automated testing frameworks and hundreds of pages of "guidelines".. in other words, the least creative and most guard-checked software possible.
On the other hand, an open and level playing field has not existed in the thirty-some years of open-market software development. No one since Seymour Cray has done complete systems design, really.. it is turtles all the way down. You have to get hardware to run on, and the software environment will have been defined for that: CPU architectures and programming languages. People who write whole systems generally do it in teams.
The arrogant and self-satisfied tone of the corporate worker-bee says that there is no such thing as real software engineering skills?
like defining "health" or other broad topics.. the closer the topic is examined, the more holes in the arguments. I am glad I never punched a time clock for Elon Musk, however, all things considered.
You write too poorly to be this condescending.
this is my real reaction to the post.. but the conversation here could be more inquiring, aimed at finding insight. My bad.. no happiness
Digesting your thoughts before vomiting out a reaction is allowed.
There are plenty of self-taught people in the open source space making highly reliable software.
...who don't get hired at "real" jobs because they can't produce a bubble sort in 15 minutes on a whiteboard.
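(For the record, the whiteboard classic in question is about eight lines - a minimal Python sketch:)

    def bubble_sort(items):
        # Each outer pass bubbles the largest remaining element to the end.
        n = len(items)
        for i in range(n):
            for j in range(n - 1 - i):  # the last i slots are already sorted
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]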
I feel very fortunate that the core Blender devs had the patience to put up with my stupid amateur mistakes while I learned the skills to become a helpful contributor back in the day.
The vast majority of people learn this on the job. This is certainly not taught in schools (or is only barely scratched as a topic).
Sure. And that's why SWEs will be fine in the world of AI, as the rote work is more easily automated.
The contrast is that for a lot of other jobs, the rote tasks are not routinely solvable with free online content in text form.
I'll bite. Can you list specific things not freely available online?
I would agree that the products coming out so far lack imagination, but hard disagree on the impact. LLMs have completely transformed multiple industries already. In SWE, I would estimate that junior positions shrank by 70-80%, but even that is less extreme than what is going on in other industries.
In marketing, the entire low-end to mid-tier market is gone. Instead of having teams working on projects for small to mid-sized companies, there's now a single Senior managing projects with the help of LLMs. I know multiple agencies who cut staff by 80-90% without dropping revenue.
Translation (of books, articles, subtitles) was never well paid, even for very complex and demanding work. My partner did it a bit on the side, mostly justifying the low pay with some moral blah about spreading knowledge across cultures... With LLMs you can completely cut out the grunt part. You define the hard parts (terms that don't translate well), round out the edges and edge out the fluff, and every good translator becomes two to ten times more productive. Since work is usually paid by the page, people in the industry got a very decent (at least temporary) pay jump, I would imagine around 100%.
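To make the workflow concrete, here is a minimal sketch of what "defining the hard parts" might look like in practice (the glossary entries and names are invented for illustration):

    # Hypothetical glossary-first pass: the human pins down the terms
    # that don't translate well, the model does the grunt translation,
    # and the human polishes the result.
    GLOSSARY = {
        "Feierabend": "end of the workday (no direct English equivalent)",
        "Kummerspeck": "weight gained from comfort eating - add a translator's note",
    }

    def build_prompt(source_text: str) -> str:
        pinned = "\n".join(f"- {term}: {rule}" for term, rule in GLOSSARY.items())
        return (
            "Translate the following German text into English.\n"
            "Handle these terms exactly as specified:\n"
            f"{pinned}\n\nText:\n{source_text}"
        )

    print(build_prompt("Nach Feierabend baute er seinen Kummerspeck ab."))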
Support is probably the biggest one though. It is important to remember that outsourcing to India only works for English-speaking countries. And even that isn't super cheap.
Here in Germany, if you don't have back-up wealth, it is your constitutional right to get some support from the state (~1400 euro), but you are obligated to find a job as soon as possible, and they will try to help you find a role. Support was always one of the biggest industries to funnel people towards. I talked to a friend working there, and according to them the industry has basically stopped advertising new positions; the only ones left are in financial services. The rest went all in on LLMs and now employ only a fraction of the support staff to deal with the cases that escalate far enough.
And that's not even touching on all the small things. How much energy is spent on creating pitch decks, communicating proposals, writing documentation, etc.? It probably makes up as much as 50% of the work in large orgs, and even if you can save just 5% of your time by using LLMs to phrase or organize, there is a decent ROI for companies to pay for them.
I think a lot of this is because the economic pressure is weak right now both on the side of labor and consumers, due to decades of severe upward wealth transfer. A lot of these companies are not improving or even maintaining their productivity or quality of service, and while there are probably some productivity gains for engineers, I suspect based on what I'm seeing that this is going to burn a lot of people out, as there is significant social pressure both from peers and employers to exaggerate this somewhat. People can have too heavy a workload for a decent amount of time before breaking.
There's just no countervailing force to make these decisions immediately painful for them. Sectors are monopolized, people are tired and desperate, and tech workers are in a basically unprecedented bout of instability.
The situation is super dark from a lot of angles, but I don't think it's really "the overwhelming usefulness of AI" that's to blame here. As far as I can tell, the biggest thing these technologies are doing is providing a cover story for private-equity-style guttings of various knowledge work verticals for short-term profit, which was kind of inevitable given that's been happening across the board in the larger economy, it's just another pretense that works for different verticals.
There are cases where LLMs seem genuinely useful (mostly ones that are for and by SWEs, like generating documentation or smoothing some ramp processes in learning new libraries or languages), and those don't seem to be "transformative" at scale yet, unless we count "transforming" many products into buggier products that are more brittle and frustrating to interact with.
>I know multiple agencies who cut staff by 80-90% without dropping revenue.
I'm finding it hard to reconcile this with my own experience. My whole team (5 people) left last year (for better pay, I guess) and the marketing agency in Germany I'm working for had to replace them with freelancers. To offset the cost they fired the one guy who was hired to push the whole LLM/AI topic.
We managed to fill one junior position by offering 10k+ more than their last job paid. The firm would love to hire people to replace the freelancers.
We did have to cut staff lately, but that was mostly because they closed the kitchen, which wasn't being used due to the work-from-home policy.
I definitely don't see any staff reduction due to automation/LLM use. They still pay (external) people 60€ per written text/article, because clients don't like LLM-written copy.
Actually, I have interacted with multiple translators in multiple industries and I haven't seen any disruption (although I agree with your statement that it was never well paid):
- Simultaneous translation at political/economic events still needs a person, as it always has
- LLMs are nowhere near the level to be able to translate fine literature at a high enough quality to be publishable
- Translating software is still very hard, as the translator usually needs a ton of context/reference for commonly used terminology - we partnered with a machine translation company, and what they produced sucked balls.
I have friends who work as translators, and we make use of translation services as a company, and I haven't seen the work going away.
> I would estimate that junior positions shrank by 70-80%
This just isn't true, it's nowhere close.
>LLMs have completely transformed multiple industries already.
If this was true we would see the results in productivity and unemployment stats. We don't though, so far the effect hasn't registered.
We're trying to use it for industrial apps. It's been over a year of R&D, with some good but often mixed results. Adherence to prompts is a big issue for us. It's most useful not as a chatbot but for giving explained descriptions of what the user is seeing, so they don't need to dig down into 20 graphs and past history. That necessitates being able to refer to things with URIs, which works 95% of the time, but the 5% is a killer since it's difficult to detect issues, and it leads to dead links.
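For what it's worth, the mitigation we converged on is roughly this shape (names and URIs here are illustrative, not our actual code): check every URI the model emits against the set of resources the app actually serves, before anything reaches the UI.

    import re

    # Illustrative: in a real app this set would come from the
    # resource registry rather than a hard-coded literal.
    KNOWN_URIS = {"app://graphs/temp-trend", "app://graphs/pressure-7d"}
    URI_PATTERN = re.compile(r"app://[\w/.\-]+")

    def scrub_dead_links(model_output: str) -> str:
        # Swap out any URI the model invented so the UI never
        # renders a dead link; flagging instead of replacing works too.
        def check(match):
            uri = match.group(0)
            return uri if uri in KNOWN_URIS else "[link unavailable]"
        return URI_PATTERN.sub(check, model_output)

It still misses the cases where the model picks a valid URI that points at the wrong graph, which is why that last 5% is so hard to close.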
I tried to build a BIG E2E automation pipeline, along the lines of "replace a team of 5 with this one simple task". And as I was doing it, all I could think was: just use ChatGPT. Sure, it can't actually automate what you're doing, but it'll get you there 80% as fast as fully automated, with 90% less risk of error/nonsense at the end. Ironically, this company blocks all LLM websites; they even block GitHub on their employees' computers.
I'm curious about your approach and the nature of those industrial apps. Is it more like recommender agents accessing available sources (through URIs), or more like explainers? Would be great to connect: https://shorturl.at/xdOee
Claude is mostly used by software engineers. That's an important distinction to make.
I love Claude, but let's not ignore that in the LLM race, they're not exactly the leading player.
Can I ask a dumb question as an LLM newbie? What is it about Claude that makes it so good at basic software engineering tasks? Do you think it was fine-tuned to be good at these tasks? No joke/trolling: a bunch of people have posted on HN in the last 6 months about creating MVPs (minimum viable products) -- usually web apps -- using Claude. As a non-web-app programmer, I think this is amazing progress!
I think it understands the context better and it was possibly fine-tuned better. I have been using GPT since 3, and while the replies have obviously gotten more accurate, it still makes weird assumptions at times, whereas Claude seems to "get it" more often. In tasks other than coding, I've found GPT to be more detailed by default, and yet Claude seems to hit the mark better.
IMO it was just the strongest model for a while for programming. It got the answers right more often than not.
It is faster than the reasoning/chain-of-thought models. With the current o1 and DeepSeek, though, I haven't logged into Claude in a few weeks.
I have no inside knowledge but I am kind of expecting Sonnet chain of thought any day now and I am sure that will be incredible.
This is gonna sound strange but:
Anthropic's LLMs always (always? at least since Claude 2) have a distinctive "personality". I obviously don't know how to quantify it or what "it" really is, but if you've used it you might know what I mean. Maybe that "personality" is conducive to SWE?
Fair, but do you think OpenAI and Gemini are going to be directionally similar? How much of OpenAI's traffic is from Copilot and other related tools, for example? My local IDE probably generates more queries a day than (pick a profession, idk, nurse? insurance sales? construction worker?) does in a month!
I would be pretty shocked if the Anthropic reasoning model is not mind blowing and doesn't take the lead back.
But "AI" tools have more or less seeped into every mainstream product... this is a strong "defensive move" for companies in anticipation of more to come.
We aren't leaving MS Office or Adobe because they already pushed out some minimal innovation. But what about the products you don't even know about? For lawyers, doctors, logistics, sales, marketing, woodworkers, handymen? In Europe or Asia?
A new product bringing true innovation could easily push out a legacy business through the "shiny new thing" (AI) appeal and better UX alone. A lot of software in these areas simply hasn't improved in 10 years - with a great idea and a dedicated team, it's a landslide waiting to happen.
Claude is indeed far more familiar amongst software engineers.
Google Gemini's integration into their Docs/Sheets/Slides and Gmail will perhaps show different demographics in a few months, and that's before we've heard from OpenAI.
You may be right, but I doubt it. I suspect similar usage metrics for Gemini and OpenAI
Maybe spend and better models will help this (I've not used the deep research models, so maybe we are there already). But even in day-to-day coding, the LLMs are great helpers, yet give them anything more than a slightly complicated prompt and they seem to become completely helpless. You just constantly need a human in the loop because these models are too dumb or lack the context to understand the big picture.
Maybe these models will get better as they’re given more context and can understand the full stack but for now they cannot.
And this is just with code where it already has billions of examples. Nevermind any less data-rich fields. The models still need to get smarter.
I don't think it's necessarily because of lack of generalizability. We (SWEs) built it, so we naturally have the most intimate knowledge of how to dogfood/use it. And so the cycle intensifies (use, provide feedback, improve). There's many positive examples of LLMs being useful in document based workflows in other domains as well!
Maybe! But you could say the inverse of lots of things that SWEs built. SWEs built the Bloomberg terminal! SWEs built CRMs! I think it's at least possible that LLMs are VERY useful for SWEs and a small number of other professions, but unlikely to massively scale beyond that.
If you were on the early internet talking to someone about music or woodworking or whatever, you could reasonably assume they were a tech person because it was not simple to get online. It took a minute for it to spread.
Daniel Rock has done some interesting work on the ROI of AI in general (also, I believe two of his papers are referenced in this study). Note that this doesn't explicitly restrict itself to covering LLMs, but... still a very interesting body of work.
https://www.danielianrock.com/research
My term for this is "Whitey's goin' to the data center". We are looking at an arms race, where there really is genuine new technology and it will make a difference - but at the 1-2% per annum of an economy level. Compounded over fifty years, that is geopolitical dominance, yes, but it's not "machines of loving grace" level growth.
We already have thousands of geniuses working across our economies and teaching our youth. The best of our minds have every year or so been given a global stage in Nobel speeches. We still ignore their arses and will ignore it when AI tells us to stop fighting or whatever.
The real issue here is that wafer-scale chips give 900,000 cores, and nothing but embarrassingly parallel code can use them - and frankly no coder I know writes code like that. We have to rethink our whole approach now that Moore's law is over. Only AI has anything like the ability to use the processing capacity being built today - the rest of us could stick to cores from 2016 and nothing would change.
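For anyone unfamiliar with the term: "embarrassingly parallel" means work split into items with no cross-talk at all, which is the only shape of code that trivially soaks up that many cores. A toy Python sketch:

    from multiprocessing import Pool

    def score(item):
        # Stands in for any computation that depends on nothing
        # but its own input - no shared state, no communication.
        return item * item

    if __name__ == "__main__":
        with Pool() as pool:  # one worker per available core by default
            results = pool.map(score, range(1_000_000))
        print(sum(results))

Anything with real data dependencies between steps cannot be scattered across cores this way - that is the rethink being dodged.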
Throwing hundreds of billions at a bad way to program a million cores, because we have not rethought software and businesses to cope, seems wrong - both because "Whitey" could spend it on better things and because it is an opportunity. Imagine being 900,000 times faster than your competitors - what does that even mean?
Edit: Trying to put it another way - there are two ways AI can help us. It can improve cancer treatments at every stage of medical care, through careful design and creation of medical AI models that slowly ratchet up diagnosis, treatment, and even research and analysis. This is human organisations harnessing and adapting around a new technology.
Or AI can become so smart it just invents a cure for cancer.
I absolutely think the first is going to happen and will benefit the denizens of the first world first. The second one requires two paradigm-shifting leaps in the same sentence. Ten years ago I would have laughed in Anthropic's face. Today I just give it a low probability multiplied by another low probability - and that is an incredible shift.
I mean, are any of us shocked that folks who work with computers or are computer enthusiasts are early adopters of LLMs?
I feel like this has less to do with what LLMs are best at and more to do with which folks are most likely to spend time using a chatbot.
> I fear the economic reality on the backside of this kind of spend.
Minor nitpick. Use of the word 'spend' as a noun is not widespread and not well known.
Yeah, fair - I forget HN is very international; this may read as just straight-up weird.
As someone who works in finance, I would disagree. I asked ChatGPT:
Is the noun spend rare?
ChatGPT said:
The noun "spend" is relatively rare compared to its more common form as a verb. While "spend" is widely used as a verb (meaning to give money or time for something), as a noun, it refers to an expenditure or the act of spending, and it’s not as commonly encountered.
In most contexts, people would use alternatives like "expenditure," "spending," or "outlay" instead of "spend" as a noun. That said, it is still used occasionally in certain contexts, especially in financial or informal language.
Well, ChatGPT is making the same point. Not well known outside the financial industry.
The majority of the audience and posters on ycombinator are not in that industry group, right?
“Spend” is a common term in advertising, which is arguably the single largest employer of software engineers.