barishnamazov
24 days ago
I struggle with the "good guys vs bad guys" framing here.
Is a small indie dev "dodgy" if they use AI to unblock a tricky C# problem so they can actually finish their game? Yarn Spinner seems to conflate "Enterprise Scale Replacement" (firing 500 support staff) with "assistive tooling" (a solo dev using GenAI for texture variants).
By drawing such a hard line, they might be signaling virtue to their base, but they are also ignoring the nuance that AI -- like the spellcheckers and compilers before it -- can be a force multiplier for the very creatives they want to protect.
Personally, I do agree that there are many problems with the companies behind major LLMs today, as well as with big-tech C-levels who don't understand why AI can't replace engineers. But this post, nicely toned as it is, doesn't frame the problem correctly in my mind.
GaryBluto
24 days ago
> I struggle with the "good guys vs bad guys" framing here.
It's because generative AI has become part of the "culture wars" and is therefore black and white to lots of people.
marcus_holmes
24 days ago
Seeing a lot of this: any use of LLMs is immediately condemned.
I think it's self-defeating, but virtue signallers gonna virtue signal.
raincole
24 days ago
It really doesn't matter though.
> Is a small indie dev "dodgy" if they use AI to unblock a tricky C# problem so they can actually finish their game?
No amount of framing (unless written into law) would stop small indie devs from doing this. AI is just too efficient and makes too much economic sense. People who are willing to starve for their ideology are always the minority.
Even artisans who build hand-made wooden furniture use power tools today. The tools that make economical sense will prevail one way or another.
gamblor956
24 days ago
I think that presumes that an LLM is capable of "unblocking a tricky C# problem" that a group of humans cannot solve with research. LLMs don't understand the code they output; they just regurgitate code that's already in their training set (the proof is in the pudding; see the many posts on HN about these LLM coding assistants outputting copyrighted code token for token).
So if the tricky C# problem isn't already in their data set, the output of the LLM is, at best, random crap. Even the worst human effort would exceed the output of the LLM, and that is the average case for any "tricky" problem. LLMs are fundamentally only useful on the most common types of problems, which can better be addressed by using frameworks, plugins, or APIs.
(And on that note: every programmer I've met who says that LLM coding agents 10x'd their output is the type of programmer that would have been PIP'd or fired 10 years ago for incompetence. We used to call them "code monkeys" for obvious reasons. Junior programmers think that LLM coding agents are awesome because they don't have the experience or skill to understand just how bad the output of LLM coding agents is, and the few that survive in the industry long enough to become senior programmers will laugh at their younger selves at how much of an unmaintainable mess they made vibe coding.)
llms01
24 days ago
I think most people don't have an issue with the models themselves, just the big service providers who are up to some very shady and possibly illegal stuff.
Personally I'd rather a future where everyone used local models.
thefz
24 days ago
> Is a small indie dev "dodgy" if they use AI to unblock a tricky C# problem so they can actually finish their game?
What about learning the tools you use everyday by yourself?
tpm
24 days ago
That's very hard, if not impossible, to do by yourself in the current dev environment. You simply can't learn every part of every tool, and if something is only used rarely there is little reason to learn too much about it. You also want to build and ship in some reasonable timeframe.
xyzsparetimexyz
23 days ago
> a solo dev using GenAI for texture variants
While not as bad as firing 500 people, using AI to generate slop (and it is inherently slop, being generated quickly by AI) is still bad.
johnthedebs
24 days ago
I think the only people they're calling "dodgy" are the ones offering these AI tools, and not the people using them.
furyofantares
24 days ago
Here is some text from the post:
> You need to realise that if you use them, you’re both financially and socially supporting dodgy companies doing dodgy things. They will use your support to push their agenda. If these tools are working for you, we’re genuinely pleased. But please also stop using them.
> Your adoption helps promote the companies making these tools. People see you using it and force it onto others at the studio, or at other workplaces entirely. From what we’ve seen, this is followed by people getting fired and overworked. If it isn’t happening to you and your colleagues, great. But you’re still helping it happen elsewhere. And as we said, even if you fixed the labour concerns tomorrow, there are still many other issues. There’s more than just being fired to worry about.