raincole
12 hours ago
> Video games stand out as one market where consumers have pushed back effectively
No, it's simply untrue. Players only object to AI art assets, and only when they're painfully obvious. No one cares about how the code is written.
If you actually read the wording of the Steam AI survey you'll see that Steam has completely caved on AI-generated code as well. It's specifically worded like this:
> content such as artwork, sound, narrative, localization, etc.
No 'code' or 'programming.'
If game players are the most anti-AI group then it's crystal clear that LLM coding is inevitable.
> This stands in stark contrast to code, which generally doesn't suffer from re-use at all, or may even benefit from it, if it's infrastructure.
Yeah, exactly. And LLMs help developers save time by not writing the same thing that has been done by other developers a thousand times. I don't know how one can spin this as a bad thing.
> Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.
Spore is well acclaimed. Minecraft is literally the best-selling game ever. The fact that one developer fumbled it doesn't make the idea of procedural generation bad. This is a perfect example of how a tool isn't inherently good or bad. It's up to the tool's wielder.
bartread
9 hours ago
> Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.
Yes, this is a wildly uneducated perspective.
Procedural generation has often been a key component of some incredibly successful, even iconic, games going back decades. Elite is a canonical example here, with its galaxies being procedurally generated. Powermonger, from Bullfrog, likewise used fractal generation for its maps.
More recently, the prevalence of procedurally generated rogue-likes and Metroidvanias is another counterpoint. Granted, people have got a bit bored of these now, but that's because there were so many of them, not because they were unsuccessful or "failed to deliver".
bombcar
8 hours ago
Procedural generation underlies the most popular game of all time (Minecraft) and is foundational for numerous other games of a similar type - Dwarf Fortress, et al.
And it's used to good effect where you might not expect it (Stardew Valley mines).
What procedural generation does NOT work well for is generating "story elements", though perhaps even that will fall; Dwarf Fortress already does decently enough, given that the player will fill in the blanks.
optionalsquid
8 hours ago
> And it's used to good effect where you might not expect it (Stardew Valley mines).
Apparently Stardew Valley's mines are not procedurally generated, but rather hand-crafted. Per their recent 10 year anniversary video, the developer did try to implement procedural generation for the mines, but ended up scrapping it:
https://www.stardewvalley.net/stardew-valley-10-year-anniver...
bombcar
8 hours ago
They're quasi-generated, with random elements and fixed elements, similar to early Diablo's procedural generation.
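For what it's worth, that "fixed layout plus random contents" approach is easy to sketch. The templates and names below are invented for illustration; they are not taken from Diablo's or Stardew Valley's actual code or data:

```python
import random

# Hand-authored room templates ('#' wall, '.' floor); purely illustrative
TEMPLATES = [
    [
        "#####",
        "#...#",
        "#...#",
        "#####",
    ],
    [
        "#######",
        "#.....#",
        "#.....#",
        "#######",
    ],
]

def build_level(rng):
    """Pick a fixed, handmade template, then scatter two monsters ('m')
    onto randomly chosen floor tiles: fixed layout, random contents."""
    grid = [list(row) for row in rng.choice(TEMPLATES)]
    floor = [(r, c) for r, row in enumerate(grid)
             for c, ch in enumerate(row) if ch == "."]
    for r, c in rng.sample(floor, 2):
        grid[r][c] = "m"
    return ["".join(row) for row in grid]

level = build_level(random.Random(42))
```

The layout itself is entirely hand-drawn; only which tiles hold monsters varies between runs.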
xerox13ster
5 hours ago
That’s not the same procedural generation as GPT or diffusion and you know it.
It’s not even in the same ballpark as Elite, NMS, terraria, or Minecraft.
The levels are all hand drawn, not generated by an algorithm, even if they’re shuffled. Eric Barone, the developer, has publicly said as much. Are you calling him a liar?
It’s like the difference between a sudoku/crossword puzzle and Conway’s Game of Life.
morissette
6 hours ago
And here I thought the most popular game of all time was Soccer or Super Mario Bros 3
bee_rider
4 hours ago
I think they meant videogame, ruling out soccer.
It looks like the Super Mario Bros series has a good showing, but it is the first one. I bet 3 falls into an unlucky valley where the game-playing population was not quite as large as it is now, but it isn’t early enough to get the extreme nostalgia of the first one.
https://en.wikipedia.org/wiki/List_of_best-selling_video_gam...
Of course this assumes sales=popularity, but the latter is too hard to measure.
6510
5 hours ago
Quality is the same thing as popularity. That is why McDonald's has 12 Michelin stars.
dkersten
4 hours ago
Almost every 3D game in the past 20 years uses procedural foliage generation (e.g. SpeedTree and similar). Many use procedural terrain painting. Many use tools like Houdini.
So procedural generation is extremely prevalent in most AAA games and has been for a long time.
nikitau
8 hours ago
Roguelikes/lites are one of the most popular genres of indie games nowadays. One of their main characteristics is randomization and procedural generation.
tanjtanjtanj
7 hours ago
While there are many roguelikes with procedural generation, I think the most popular ones do not use it. Slay the Spire, Risk of Rain 2, Hades 1/2, BoE, etc. all use handmade stages in a random order with randomized player powers, rather than procedurally generated levels.
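The "handmade content, randomized order" pattern described above can be sketched in a few lines. All the stage and power names here are made up for illustration, not actual game data:

```python
import random

# Hand-authored content pools (names are hypothetical)
STAGES = ["Cultist Clearing", "Spider Nest", "Bandit Camp", "Old Shrine"]
POWERS = ["Double Strike", "Thorns", "Lifesteal", "Haste", "Shield Bash"]

def new_run(seed):
    """Every stage and power is handmade; only the stage order and the
    power draft offered to the player are randomized per run."""
    rng = random.Random(seed)
    order = STAGES[:]
    rng.shuffle(order)                  # random order of fixed stages
    offered = rng.sample(POWERS, 3)     # random draft of fixed powers
    return order, offered

order, offered = new_run(7)
```

Nothing here generates new content; the randomness only recombines what a designer authored by hand.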
banannaise
4 hours ago
I've seen a couple roguelike developers report that they played around with procedural generation, but it was difficult to prevent it from creating dungeons that were bad, unfun, or just straight-up killscreens. Turns out it's often easier to simply hand-draw good maps than to get the machine to generate okay-to-good ones.
Procedural generation is good when variety matters more than quality, which is a relatively rare occurrence.
htek
3 hours ago
That says more about the developer than procedural generation as a whole. Using procedural generation IS difficult: it requires understanding how to set up constraints on your pseudo-randomly generated elements and ensuring the code validates that you have a "good" level/puzzle/whatever before dumping the PC into it.
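A minimal sketch of that generate-then-validate loop, assuming a toy grid dungeon (everything here is illustrative, not from any real roguelike): generate a candidate level, reject it if the constraint check fails, and retry:

```python
import random
from collections import deque

SIZE = 8

def generate(rng):
    """Randomly mark each tile floor (True) or wall (False)."""
    return [[rng.random() < 0.6 for _ in range(SIZE)] for _ in range(SIZE)]

def is_good(grid):
    """Constraint check: start and exit are floor, and the exit is
    reachable from the start (BFS over 4-connected floor tiles)."""
    start, goal = (0, 0), (SIZE - 1, SIZE - 1)
    if not (grid[0][0] and grid[-1][-1]):
        return False
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE and grid[nr][nc] \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def generate_validated(seed=0, max_tries=1000):
    """Reject-and-retry until the constraints pass."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        grid = generate(rng)
        if is_good(grid):
            return grid
    raise RuntimeError("no valid level found")

level = generate_validated()
```

Real games encode far richer constraints (difficulty curves, loot budgets, no killscreens), but the shape of the loop is the same.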
techpression
8 hours ago
I’m a hardcore rogue-like player (easily over a thousand hours across all the games I’ve played), but even so I can admit that they have nothing compared to a well-crafted world like you’d find in From Software titles or Expedition 33, or classic Zelda games for that matter. Making a great world is an incredibly hard task, though, and few studios have the capability to do so.
angry_octet
2 hours ago
Rogue-like games use the simplest randomisation to generate the next room, and I burnt hundreds of hours in Mines of Moria before I forced myself to quit.
Now with an LLM I could have AD&D-like campaigns and photorealistic renders of my character and the NPCs. I could give it the text of an AD&D campaign as a DM and have it generate walking and talking NPCs.
The art of those great fantasy artists is definitely being stolen in generated images, and application of VLMs should require payment into some sort of art funding pool. But modern artists could well profit by being the intermediary between user and VLM, crafting prompts, both visual and textual, to give a consistent look and feel to a game.
The essay author is smoking crack.
bee_rider
4 hours ago
It’s a different type of thing, really. I like rogue-likes because they are a… pretty basic… story about my character, rather than a perfectly crafted story about somebody else’s.
Even when I play a game like Expedition 33 or Elden Ring, my brain (for whatever reason) makes a solid split between the cutscene versions of the characters and the gameplay versions. I mean, in some games the gameplay character is a wandering murderer, while the cutscene character has all sorts of moral compunctions about killing the big-bad. They are clearly different dudes.
Dumblydorr
8 hours ago
Is it wildly uneducated not to know any of the games you mentioned? I didn’t realize education covered lesser-known video games. Wouldn’t a better example be No Man’s Sky, if we’re talking procedural gen and an eventually good game?
In any case, I agree that gamers by and large don’t care to what extent the game creation was automated. They are happy to use automated enemies, automated allies, automated armies and pre-made cut scenes. Why would they stop short at automated code gen? I genuinely think 90% wouldn’t mind if humans are still in the loop but the product overall is better.
Ensorceled
8 hours ago
> Is it wildly uneducated not to know any of the games you mentioned? I didn’t realize education covered lesser-known video games.
Yes. It is "wildly uneducated" to have, and express, strong opinions about ANY field of endeavour where you are unfamiliar with large parts of that field.
Almondsetat
7 hours ago
Large? That's your opinion
mikkupikku
6 hours ago
If you haven't heard of the modern roguelike genre you've probably been living under a rock, it seems like every other game these days at least calls itself such. Usually the resemblance to Rogue is so remote that it strains the meaning of the term, but procedural generation of levels is almost universal in this loosely defined genre.
Elite is a bit more obscure, but really anybody who aims to be familiar with the history of games should recognize the name at least. Metroidvania isn't a game, but is a combination of the names of Metroid and Castlevania and you absolutely should know about both of those.
Powermonger is new to me.
And while the comment in question didn't mention it, others have: Minecraft. If you're not familiar with Minecraft you must be Rip Van Winkle. This should be the foremost game that comes to mind when anybody talks about procedural generation.
Ensorceled
6 hours ago
Of course it is.
Almondsetat
6 hours ago
Then it is "wildly uneducated" to have, and express, strong opinions about ANY field of endeavour where you cannot substantiate your claims.
Ensorceled
19 minutes ago
Honest question: are you enjoying this? I looked at your comment history and you don't seem like a troll. What is going on right now?
dec0dedab0de
6 hours ago
> No, it's simply untrue. Players only object against AI art assets. And only when they're painfully obvious. No one cares about how the code is written.
This reminded me of a conversation about AI I had with an artist last year. She was furious and cursing and saying how awful it is for stealing from artists, but then admitted she uses it for writing descriptions and marketing posts to sell her art.
WarmWash
4 hours ago
Everyone is in it for themselves.
The world makes way more sense when you really internalize that. It doesn't necessarily mean people are selfish; large groups often have aligned interests. But when an individual's interest alignment changes, their group membership almost always changes too.
I'd bet she has a bunch of pirated content and anti-copyright remarks from the golden age of piracy as well.
shadowgovt
3 hours ago
If she's a practicing artist, she almost certainly cut her teeth doing tracing at some point. And if a digital artist, she almost certainly used a cracked copy of a tool.
The big eye-opener for me in college was taking a class that put me up-close with artists and learning that there were, in the whole class, a grand total of two students who hadn't started doing 3D modeling on a cracked copy of Maya (and the two, if memory serves, learned on Blender).
tovej
3 hours ago
That's not true. Most people are interested in fostering a community, even when it means sacrifice.
There _have_ however been studies that show that this attitude is prevalent in (neoclassical) economics students and others who are exposed to (neoclassical) economic thinking: https://www.sciencedirect.com/science/article/abs/pii/S22148...
It's very effective propaganda, and we have a good example of it here. (Not saying you're spreading it maliciously, but you are spreading it.)
Throaway1985123
an hour ago
People are in it for themselves...when it comes to participating in our capitalist economic system. The 2nd part is often left unsaid.
WarmWash
23 minutes ago
Humans overwhelmingly sort themselves into groups that offer them the best value prop. When an individual's circumstances change, which changes the group's value prop for them, people overwhelmingly move to a new group. It's not a capitalist or socialist thing.
Lord-Jobo
5 hours ago
Which I would point out isn’t necessarily hypocrisy on their part.
I can rage against guns and gun manufacturers for their negative effects on our nation and hate when they are used for monstrous evil, but also believe that police should have firearms and that the second amendment is important. It’s a tool. You can hate the way it’s made and marketed, and hate many of its popular use cases, and still think there are acceptable ways to use and market it without requiring a total abolition.
Aushin
3 hours ago
I mean, the police probably shouldn't have firearms and the second amendment is one of the worst legal creations in human history.
raincole
5 hours ago
Sinix even explicitly says that AI is an IP theft machine, but that it's okay to use AI to generate a 360-rotation video to market your 2D works[0].
To summarize this era we live in: my AI usage is justified but all the other people are generating slop.
[0]: https://www.youtube.com/watch?v=z8fFM6kjZUk
[1] Disclaimer: I deeply respect Sinix as an art educator; if it weren't for him I wouldn't have learnt digital painting. But it's still quite a weird take from him.
Sharlin
10 hours ago
> Yeah, exactly. And LLM help developers save time from writing the same thing that has been done by other developers a thousand times.
Before LLMs we did already have a way to "save developers time from writing the same thing that has been done by other developers a thousand times", you know? An LLM doing the same thing the 1001st time is not code reuse. Code reuse is code reuse.
raincole
9 hours ago
Because code reuse is hard. Like, really hard. If it weren't, we wouldn't be laughing at left-pad. If it weren't hard, we wouldn't have so many front-end JavaScript frameworks. If it weren't, Unreal wouldn't still have its own GC and std-like implementation today, and Java wouldn't have reinvented its build system every five years.
The whole history of programming tools is an exploration of how to properly reuse code: are functions or objects the fundamental unit of reuse? Is diamond inheritance okay? Should a language have an official package manager? A build system? Should the C++ std have network support? How about GUI support? Should editors implement their own parsers or rely on a language server? None of these questions has a clear answer after thousands if not millions of smart people have attempted them (well, perhaps except the function-vs-object one).
Electron is the ultimate effort of code reuse: we reuse the tens of thousands of human-years invested in making a markup-based render engine that covers 99% of use cases. And everyone complains about it, the author of the OP article included.
LLM coding is not code reuse. It's more like throwing our hands up and admitting humans are not yet smart enough to properly reuse code, except in some well-defined low-level cases like compiling C for different ISAs. And I'm all for that.
Garlef
7 hours ago
I think you could also argue that LLMs in coding are actually just a novel approach at code reuse: At the microscopic level, they excel at replicating known patterns in a new context.
(Many small dependencies can be avoided by letting the LLM just re-implement the desired behavior, with tradeoffs, of course.)
The issue is orchestrating this local reuse into a coherent global codebase.
bluefirebrand
4 hours ago
LLMs in coding are like code reuse in the same way your neighbor hotwiring your car you parked in your driveway is just borrowing it
You didn't park your car in your driveway so anyone could take it to get groceries
plagiarist
an hour ago
I didn't accept "copyright infringement is literal property theft" when the corporations were trying to convince us it was.
layer8
6 hours ago
The problems with leftpad are a problem with the NPM ecosystem, not with code reuse as such. There are other dependency ecosystems that don't have these problems.
FpUser
7 hours ago
>"well perhaps except the function vs object one"
If this is what I think it is, I consider it very lopsided view, failure to recognize what model fits for what case and looking at everything from a hammer point of view
raincole
5 hours ago
I think function is the fundamental unit and object is an extra level over it (it doesn't mean there is no use for object). Thinking objects/classes are the fundamental/minimal level is straight up wrong.
Of course it's just my opinion.
FpUser
3 hours ago
My opinion: Fundamental levels are data and operations (your functions). Not my view that class is a foundation. It is a representation convenient for some cases and not so much for other
bandrami
6 hours ago
I have terrible news: LLMs don't actually make it easier, though it feels like they do at first
foobarbecue
9 hours ago
Hard agree. Before LLMs, if there was some bit of code needed across the industry, somebody would put the effort into writing a library and we'd all benefit. Now, instead of standardizing and working together we get a million slightly different incompatible piles of stochastic slop.
remich
4 hours ago
Yeah and then when that library stops being maintained or gets taken over, everything breaks.
edgyquant
6 hours ago
This was happening before llms in webdev
deltaburnt
5 hours ago
I don't think we should use webdev as an example of why lossy copy and paste works for the industry.
mexicocitinluez
7 hours ago
Before LLMs companies and people were forced to use one-size-fits-all solutions and now they can build custom, bespoke software that fits their needs.
See how it's a matter of what you're looking at?
porridgeraisin
9 hours ago
Oh come on, you don't have to be condescending about function calls.
Sharlin
9 hours ago
I was talking about libraries, higher-level units of reuse than individual functions. And your "syntactic" vs "semantic" reuse makes zero sense. Functions are literally written and invoked for their semantics – what they make happen. "Syntactic reuse" would be macros if anything, and indeed macros are very good at reducing boilerplate.
You might have a more compelling argument if instead of syntax and semantics you contrasted semantics and pragmatics.
porridgeraisin
9 hours ago
A library is a collection of data structures and functions. My argument still holds.
> Syntactic reuse would be macros
Well sure. My point is that what can be reused is decided ahead of time and encoded in the syntax. Whereas with LLMs it is not, and is encoded in the semantics.
> Pragmatics
I didn't know what that was. Consider my post updated with the better terms.
runarberg
7 hours ago
I’m not sure your logic is sound. It sounds like you are insisting on a nuance which simply isn’t there. LLMs generate unmaintainable slop, which is extremely difficult to reason about, uses the wrong abstractions, violates DRY, violates cohesion, etc.
The industry has known how to reuse code for two decades now (npm was released 16 years ago; pip 18 years ago). Using LLMs for code reuse is a step in the wrong direction, at least if you care about maintaining your code.
porridgeraisin
5 hours ago
Oh sure, the quality is extremely unreliable and I am not a fan of its style of coding either. It requires quite a bit of hand-holding and sometimes it truly enrages me. I am just saying that LLM technology opens up another, broader dimension of code reuse. There's still a ways to go: not in the foundation models (those have plateaued), but in refining them for coding.
naasking
6 hours ago
> LLM generates unmaintainable slop
LLMs generate what you tell them to, which means it will be slop if you're careless and good if you're careful, just like programming in general.
dannersy
9 hours ago
You're cherry picking. The open world games aren't as compelling anymore since the novelty is wearing off. I can cherry pick, too. For example, Starfield in all its grandeur is pretty boring.
And the users may not care about code directly, but they definitely do indirectly. Less optimized, more off-the-shelf solutions have brought a stark decrease in performance, while allowing game development to be more approachable.
LLMs saving engineers and developers time is an unfounded claim, because immediate results do not mean a net positive. Actually, I'd argue that any software engineer worth their salt knows intimately that more immediate results usually come at the expense of long-term sustainability.
whywhywhywhy
7 hours ago
Starfield is boring because of the bad writing, and because they made a space exploration game where there are loading screens between planet and space and you don’t actually explore space.
They fundamentally misunderstood what they were promising, it’s the same as making a pirate game where you never steer the ship or drop anchor.
You can prove people are not bored with the concept: new gamers still start playing Fallout: New Vegas or Skyrim today, despite them being old and janky.
alexpotato
2 hours ago
This is why Sid Meier's Pirates [0] remains such a great game.
It was really a combination of mini-games:
- you got to steer a ship (or fleet of ships) around the Caribbean
- ship to ship combat
- fencing
- dancing (with the Governors' daughters)
- trading (from port to port, or with captured goods)
- side quests
Each time I played it with my oldest, it felt like a brand new game.
dannersy
2 hours ago
I think my point stands. Procedural generation is a tool that usually works best when it is supplementary. What makes New Vegas an amazing game is all the hand built narratives and intricate storylines. So yeah, I agree, Starfield is boring because of the story. But if the procedural vastness was interesting enough to not be boring, then we wouldn't be talking about this to begin with.
Zarathruster
4 hours ago
Yeah I mean, I think procgen is cool tech, but there's a reason we don't talk about Daggerfall the same way we talk about Morrowind
dannersy
2 hours ago
Agreed.
mexicocitinluez
7 hours ago
> Starfield in all its grandeur is pretty boring.
And yet "No Mans Sky" is massively popular.
> any software engineer worth their salt knows intimately that more immediate results is usually at the expense of long term sustainability.
And any software engineer worth their salt realizes there are 100s if not 1000s of problems to be solved, and trying to paint a broad picture of development is naive. You have only seen 1% (at best) of the current software development field, and yet you're confidently saying that a tool being used by a large part of it isn't actually useful. You'd have to have a massive ego to categorically tell thousands of other people that what they're doing is both wrong and not useful, and that the things they are seeing aren't actually true.
dannersy
2 hours ago
No Man's Sky got better as they became more intentional with their content. The game has more substance now, and a lot of that had to be added by hand. It is dropped in procedurally, but they had to touch it up manually to make it interesting. Let's not revise history.
I don't think it has anything to do with ego. There are studies on the topic of AI and productivity, and I assume we have a way to go before we can say anything concretely. You're putting words in my mouth: I said nothing about what people are doing being wrong or not useful. I said the claim that generative AI is making engineers more productive is an unfounded one. The code you shit out isn't where the work starts or ends. Using expedient solutions and facing potentially more work in the future isn't even a claim specific to software; I can make that claim about life.
You need to evaluate what you read rather than putting your own twist on what I've said.
mexicocitinluez
2 hours ago
You said:
> LLMs saving engineers and developers time is an unfounded claim
By whom exactly? If I say it saves me time, and another developer says the same, and so on, then it is categorically not unfounded. In fact, it's the opposite.
You've completely missed the point if you don't see the problem with telling other people that their own experience in such a large field is "unfounded" simply because it doesn't line up with yours.
> we have a way to go before we can say anything concretely
No YOU do. It's quite apparent to me how it can save time in the myriad of things I need to perform as a software developer (and have been doing).
theshrike79
12 hours ago
Also "AI" has been in gaming, especially mobile gaming, for a literal decade already.
Household name game studios have had custom AI art asset tooling for a long time that can create art quickly, using their specific style.
AI is a tool and as Steve Jobs said, you can hold it wrong. It's like plastic surgery, you only notice the bad ones and object to them. An expert might detect the better jobs, but the regular folk don't know and for the most part don't care unless someone else tells them to care.
And then they go around blaming EVERYTHING as AI.
keyringlight
9 hours ago
Another example is upscaled texture mods, which were a trend long before "large language models" took off. Mods to improve a game's textures are definitely not new, and that probably means including material from other sources, but the ability to automate/industrialize the process (and presumably a lot of available training material) meant there was a big wave of that mod category a few years back. My impression is that gamers will overlook a lot so long as it's "free", or at least they are very anti-business (even if the industry they enjoy relies upon it); the moment money is involved, they suddenly care a lot about the whole fabric being hand-made and need verification that everyone involved was handsomely rewarded.
KellyCriterion
9 hours ago
This should be completely crushed by Nano Banana models?
theshrike79
9 hours ago
The issue isn't objective quality or realism, it's sticking to a specific style consistently.
_Everyone_ (and their grandmother) can instantly tell a ChatGPT-generated image; it has a very distinct style, and in my experience no amount of prompting will make it go away. Same for Grok, and to a lesser degree Google's stuff.
What the industry needs (and uses) is something they can feed a, say, wall texture into and the AI workflow will produce a summer, winter and fall variant of that - in the exact style the specific game is using.
mejutoco
7 hours ago
I think txt2img and img2img are terms to find those uses.
bavell
6 hours ago
And comfyUI workflows. People have been doing this for awhile now.
theshrike79
3 hours ago
ComfyUI is relatively new, but pretty good at what it does
raincole
8 hours ago
If we're talking about texture upscaling alone (I suppose that's what the parent comment means), Nano Banana is a huge overkill.
delaminator
11 hours ago
"I hate CGI video"
"So you hated the TV Series Ugly Betty then?"
"What? that's not CGI!"
This video is 15 years old
wormpilled
10 hours ago
I think that's a different category, though. Those backgrounds are actual video recordings of real places, not 3D environments modeled from scratch. It looks 'real' because the background actually exists.
theshrike79
9 hours ago
It's still 100% CGI compositing and definitely not all of them are real places or real objects.
In that specific 15 year old example they're mostly composited, you're right about that.
DonHopkins
7 hours ago
I love Ian Hubert's demos of green screening in Blender.
https://www.youtube.com/watch?v=RxD6H3ri8RI
His Blender Conference talk about photogrammetry / camera projection / projection mapping was fantastic:
World Building in Blender - Ian Hubert
delaminator
7 hours ago
Computer Generated Imagery.
runarberg
6 hours ago
Your case would have been better if you had used Mad Max: Fury Road, or even Titanic, as an example, rather than a mediocre TV show nobody remembers. Ugly Betty used green screens to make production cheaper; that did not improve the show (although it may have improved the profit margins). Mad Max: Fury Road, on the other hand, used CGI to significantly improve the visual experience. The added CGI probably increased the cost of the production, and it is one of the greatest, most awesome movies ever made.
Actually, if you look at the scene from Grey's Anatomy [0:54], you can see where CGI is used to improve the scene (rather than cut costs), and you get this amazing shot of the Washington State Ferry crash.
I think you can see the parallels here. When people say they hate AI, they are generally referring to the sloppy stuff it generates. It has enabled a proliferation of cheap slop, and with few exceptions it seems like generating cheap slop is all it does (those exceptions being specialized tools, e.g. in image-processing software).
delaminator
6 hours ago
> mediocre TV show
Won 3 Primetime Emmys
52 wins & 124 nominations total
https://www.imdb.com/title/tt0805669/awards/
I guess it's just too lowbrow for you.
runarberg
5 hours ago
Being award-winning does not exclude a show or movie from being a forgettable cash grab.
However, my counterexamples included Grey's Anatomy, Mad Max, and Titanic. None of these are considered high literature exactly (and all of them are award-winning as well).
trashymctrash
12 hours ago
If you read the next couple of paragraphs, the author addresses this:
> That said, Steam's policy has been recently updated to exclude dev tools used for "efficiency gains", but which are not used to generate content presented to players.
I only quoted the first paragraph, but there is more.
BloondAndDoom
8 hours ago
On the topic of procedural generation: rogue-likes are all about it, new-generation Diablo-like games definitely have similar things, and so do well-respected new games like Blue Prince. There has never been a period as successful for procedural generation in games as now, and all of these are pre-AI. AI-powered procedural generation is the wet dream of rogue-like lovers.
hiddevb
7 hours ago
I don't think I agree with this take.
I love procedural generation, and there is definitely a craft to it. Creating a process that generates a playable level or world is just very interesting to explore as an emergent system. I don't think LLMs will make these systems more interesting by default. Of course, there are still things to explore in this new space.
It's similar to generative/plotter art compared to a midjourney piece of slop. The craft that goes into creating the code for the plotter is what makes it interesting.
1899-12-30
6 hours ago
The key to non-disruptive LLM integration is using it in a purely additive way: supplementing a feature with functionality that couldn't be done before, rather than replacing an existing part. Like adding AI-generated images to accompany the Dwarf Fortress artifact descriptions. It could be completely togglable and wouldn't disrupt any existing mechanics, but would provide value to those who don't mind the slop.
Izkata
4 hours ago
> Spore is well acclaimed.
Its creature creator was, but as a game it was always mediocre to bad. They had to drop something like 90% of the features and severely dumb down the stages to get it released.
It was also what introduced a lot of us to SecuROM DRM - it bricked my laptop in the middle of a semester.
larodi
8 hours ago
> No one cares about how the code is written.
I would overstate:
No one even cares how architecture is done. Unless you are the one fixing it or maintaining it.
Sorry, no one. We all know Apple did some great stuff with their code, but we care more about the awful work done on the UI, right? I mean - the UI seems to not be breaking in these new OSs which is amazing feature... for a game perhaps, and most likely the code is top notch. But we care about other things.
This is the reality, and the blind notion that so-many people care about code is super untrue. Perhaps someone putting money on developers care, but we have so many examples already of money put on implementation no matter what the code is. We can see everywhere funds thrown at obnoxious implementations, and particularly in large enterprises, that are only sustained by the weird ecosystem of white-collar jobs that sustains this impression.
Very few people care about the code overall, and this can be observed very easily; perhaps it can even be proved that no other way around is possible.
TimTheTinker
7 hours ago
This is overstating it. Computers are amazing machines, and modern operating systems are also amazing. But even they cannot completely mask the downstream effects of poor quality code.
You say you don't care, but I bet you do when you're dealing with a problem caused by poor code quality or bad choices made by the developer.
miningape
4 hours ago
Yep, willing to bet that the majority of people saying "users don't care how well the code is written" will crash out when some software they're using is slow and buggy, even more so if it glitches and deletes their work.
Just like how most people don't care how well a bridge is designed... until it collapses.
AnotherGoodName
4 hours ago
Games with AI art assets are some of the most popular right now in any case. Arc Raiders is a great example, where some of the voice assets are AI generated.
Be careful of reading too much into any viewpoint on the internet. Apparently no one used Facebook or Instagram and everyone boycotts anything with AI in it.
In reality I think you’d be foolish not to make use of the tools available. Arc Raiders did the right thing by completely ignoring those sorts of comments. There may be a market for 100% organic video games, but there’s also a market for mainstream ‘uses every AI tool available’ type of games.
Throaway1985123
an hour ago
Spore was not well acclaimed, precisely because it failed to live up to its promises as a world-builder. Only the first two stages were any good.
larsiusprime
7 hours ago
Also RE: procgen, one of the hit games right now, Mewgenics, is doing super well and uses it extensively. Obviously it's old school procgen that makes use of tons of authored content, but it's still procgen.
Nursie
9 hours ago
> Spore is well acclaimed.
Spore was fun (IMHO) but at the time of release was considered a disappointment compared to its hype.
tovej
12 hours ago
An LLM has never saved me time. It has always produced something that doesn't quite work, has the rough shape of what I want, but somehow always gets all the details wrong.
I can type up what I want much faster and be sure it's at least solving the right problem, even if it may have bugs.
There are also tools to generate boilerplate that work much much better than LLMs. And they're deterministic.
dntrshnthngjxct
10 hours ago
If you do not plan out the architecture soundly, no amount of prompting will fix it. I know this because my "handmade" project, made with backward compatibility in mind and a horrible architecture, keeps being badly fixed by LLMs, while the ones that rely on preemptive planning of features and architecture end up working right.
dncornholio
9 hours ago
LLMs keep messing up even on a plain Laravel codebase..
tovej
37 minutes ago
You're just strawmanning now. I've prompted extremely well-specced, contained features, and the LLM has failed nonetheless.
In fact, the more details I give it about a specific problem, the more it seems to hallucinate. Presumably because it falls further outside the training set.
mikkupikku
10 hours ago
I think that's true, but something even more subtle is going on. The quality of the LLM output depends on how it was prompted in a way more profound than I think most people realize. If you prompt the LLM using jargon and lingo that indicate you are already well experienced with the domain space, the LLM will role-play an experienced developer. If you prompt it like you're a clueless PHB who's never coded, the LLM will output shitty code to match the style of your prompt. This extends to architecture: if your prompts are written with a mature understanding of the architecture that should be used, the LLM will follow suit, but if not, then the LLM will just slap together something that looks like it might work but isn't well thought out.
simonask
7 hours ago
This is magical thinking.
LLMs are physically incapable of generating something “well thought out”, because they are physically incapable of thinking.
Tossrock
4 hours ago
Tell Donald Knuth that: https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc...
mikkupikku
3 hours ago
I don't care if the machine has a soul, I only care what the machine can produce. With good prompting, the machine produces more ""thoughtful"" results. As an engineer, that's all I care about.
Marha01
4 hours ago
It is magical thinking to claim that LLMs are definitely physically incapable of thinking. You don't know that. No one knows that, since such large neural networks are opaque blackboxes that resist interpretation and we don't really know how they function internally.
You are just repeating that because you read that before somewhere else. Like a stochastic parrot. Quite ironic. ;)
tovej
27 minutes ago
They really aren't that mysterious. We can confidently say that they function at the lexical level, using Monte Carlo principles to carve out a likely path in lexical space. The output depends on the distribution of n-grams in the training set and the composition of the text in its context window.
This process cannot produce reasoning.
1) an LLM cannot represent the truth value of statements, only their likelihood of being found in its training data.
2) because it uses lexical data, an LLM will answer differently based on the names / terms used in a prompt.
Both of these facts contradict the idea that the LLM is reasoning, or "thinking".
This isn't really a very hot take either; I don't think I've talked to a single researcher who thinks that LLMs are thinking.
bendmorris
5 hours ago
You're going to get a lot of "skill issue" comments but your experience basically matches mine. I've only found LLMs to be useful for quick demos where I explicitly didn't care about the quality of implementation. For my core responsibility it has never met my quality bar and after getting it there has not saved me time. What I'm learning is different people and domains have very different standards for that.
vntok
11 hours ago
> An LLM has never saved me time. It has always produced something that doesn't quite work, has the rough shape of what I want, but somehow always gets all the details wrong.
This reads like a skill issue on your end, in part at least in the prompting side.
It does take time to reach a point where you can prompt an LLM sufficiently well to get a correct answer in one shot, developing an intuitive understanding of what absolutely needs to be written out and what can be inferred by the model.
Jooror
11 hours ago
I’m curious about how you landed on “git gud; prompt better” and not “maybe the domain I work in is a better fit for LLM code”. Or, to be a bit less generous: consider the possibility that the code you’re generating is boilerplate, marshaling, and/or API calls. A facade of perceived complexity over something that’s as complex as a filter-map or two.
3371
10 hours ago
Sharing my 2 cents.
In the past 2 months I've been using all the SOTA models to help me design a new DSL for narrative scripting (such as game storytelling) and a C# runtime implementation of the script player engine.
The language spec and design are about 95% authored by me up to this point; I have the LLMs work on the 2nd layer (the implementation specs/guidelines) and the 3rd layer (the concrete C# implementation).
Since it's a new language, I consider it a somewhat novel task for LLMs (at least, not boilerplate stuff like an HTTP API or CRUD service). I'd say these LLMs have been very helpful. You can tell they sometimes get confused and have trouble complying with the foreign language spec and design, but they are mostly smart enough to carry out the objectives, and they got better and better once the project was on track and had plenty of files/resources to read and reference.
And I'd also say "prompt better" is an important factor, just much more nuanced/complicated. I started with zero experience with LLM agents and have learned a lot about how to tame them, and developed a protocol for collaborating with agents. This all comes from countless trials and errors, but in the end it boils down to "prompt better".
Jooror
6 hours ago
I wonder if my intuition here is correct; I would posit that “PL implementation” is a far more popular and well-explored field than it seems. How many toy/small/labor-of-love langs make it to Show HN? How many more simply don’t?
I’ve never personally caught the language implementation bug. I appreciate your perspective here.
3371
6 hours ago
I totally agree, and I was fully aware of how commonly people make languages for fun when I replied.
But I feel like the rationale still stands: considering LLMs' nature, common boilerplate tasks are easy because they can kind of just "decompress" them from training data. But for a new language design, unless the language is almost identical to some other language captured by the model, "decompression" would just fail.
tovej
4 hours ago
As someone who has implemented a fair few DSLs, lexical and syntactic analysis is pretty much the same anywhere, and the structure of the lexer/parser does not really depend on the grammar of the language.
And even semantic analysis is at least very similar in most PLs. Even DSLs. Assuming you're using concepts like variables and functions.
When it comes to codegen / interpreter runtimes, things start to diverge. But this also depends on the use case. More often than not a DSL is a one-to-one map to an existing language, with syntactic sugar on top.
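To sketch that grammar-independence: a table-driven lexer's scanning loop stays the same across DSLs; only the token table changes. This is my own toy illustration with hypothetical token names, not anything from the repo linked below:

```python
import re

# Per-DSL token table: swap these specs and the loop below is untouched.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    # Grammar-agnostic scanning loop; a real lexer would also report
    # characters that match no rule instead of silently skipping them.
    for m in MASTER.finditer(src):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())
```

Here `list(tokenize("x = 1 + y"))` yields `[("IDENT", "x"), ("OP", "="), ("NUMBER", "1"), ("OP", "+"), ("IDENT", "y")]`; changing the language means changing `TOKEN_SPEC`, not the loop.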
I'm curious, what's the DSL you're working on?
3371
2 hours ago
It's pretty much WIP but if you are interested here is the repo. https://github.com/No3371/zoh
The points you brought up are all valid. Lexers, parsers and general concepts are not language-specific, yes, and I wasn't talking about how the implementation is different.
When I said "you can tell they sometimes get confused and have trouble complying with the foreign language spec and design", I was thinking about the many times they just fail to write in my language even when provided with the full language specs. LLMs don't "think", and boilerplate is easy for LLMs because highly similar syntax structures, even identical code, exist in their training data; they are kind of just copying stuff. But that doesn't work so well when they are tasked to write in an original language that is... too creative.
tovej
4 hours ago
I am prompting better. It doesn't help the LLM be more productive than me on a regular Tuesday.
Sure, I can get the task done by delegating everything to an agentic workflow, but it just adds a bunch of useless overhead to my work.
I still need to know what the code does at the end of the day, so I can document it and reason about it. If I write the code myself, it's easy. If an LLM does it, it's a chore.
And even without those concerns, the LLM is still slower than me. Unless it's trivial boilerplate, in which case other tools serve me better and cheaper.
I'll note that a compiler is one of the most well understood and implemented software projects, much of it open source, which means the LLM has a lot of prior art that it can copy.
rybosworld
9 hours ago
When web search first arrived, the same thing happened. That is, some people didn't like using the tool because it wasn't finding what they wanted. This is still true for a lot of folks today, actually.
It's less "git gud; prompt better", and more, "be able to explain (well) what you want as the output". If someone messages the IT guy and says "hey my computer is broken" - what sort of helpful information can the IT guy offer beyond "turn it on and off again"?
tovej
4 hours ago
I can assure you I give LLMs all the information they need. Including hints to what kind of solution to use. They still fail.
rybosworld
3 hours ago
So how do you rectify your anecdotal experience against those made by public figures in the industry who we can all agree are at least pretty good engineers? I think that's important because if we want to stay ~anonymous, neither you nor I can verify the reputation of one another (and therefore, one another's relative mastery of the "Craft").
Here are some well known names who are now saying they regularly use LLM's for development. For many of these folks, that wasn't true 1-2 years ago:
- Donald Knuth: https://www-cs-faculty.stanford.edu/%7Eknuth/papers/claude-c...
- Linus Torvalds: https://arstechnica.com/ai/2026/01/hobby-github-repo-shows-l...
- John Carmack: https://x.com/ID_AA_Carmack/status/1909311174845329874
My point being - some random guy on the internet says LLM's have never been useful for them and they only output garbage vs. some of the best engineers in the field using the same tools, and saying the exact opposite of what you are.
bendmorris
10 minutes ago
>Here are some well known names who are now saying they regularly use LLM's for development. For many of these folks, that wasn't true 1-2 years ago:
This is a huge overstatement that isn't supported by your own links.
- Donald Knuth: the link is him acknowledging someone else solved one of his open problems with Claude. Quote: "It seems that I’ll have to revise my opinions about “generative AI” one of these days."
- Linus Torvalds: used it to write a tool in Python because "I know more about analog filters—and that’s not saying much—than I do about python" and he doesn't care to learn. He's using it as a copy-paste replacement, not to write the kernel.
- John Carmack: he's literally just opining on what he thinks will happen in the future.
tovej
10 minutes ago
You are overstating those sources. That alone makes me doubt that you're engaging in this discussion in good faith.
I read them all, and in none of them do any of the three say that they "regularly use LLMs for development".
Carmack is speculating about how the technology will develop. And Carmack has a vested interest in AI, so I would not put any value on this as an "engineer's opinion".
Torvalds has vibe coded one visualizer for a hobby project. That's within what I might use to test out LLM output: simple, inconsequential, contained. There's no indication in that article that Linus is using LLMs for any serious development work.
Knuth is reporting about somebody else using LLMs for mathematical proofs. The domain of mathematical proofs is much more suitable for LLM work, because the LLM can be guided by checking the correctness of proofs.
And Knuth himself only used the partial proof sent in by someone else as inspiration for a handcrafted proof.
I don't mind arguing this case with you, but please don't fabricate facts. That's dishonest.
mikkupikku
9 hours ago
> I’m curious about how you landed “git gud; prompt better” and not “maybe the domain I work in is a better fit for LLM code”.
1. Personal experience. Lazy prompting vs careful prompting.
2. They're coincidentally good at things I'm good at, and shit at things I don't understand.
3. Following from 2, when used by somebody who does understand a problem space which I do not, they easily succeed. That dog vibe coding games succeeded in getting Claude to write games because his master knew a thing or two about it. I, on the other hand, have no game dev experience, almost no hobby experience with games specifically, so I struggle to get any game code that even remotely works.
Jooror
6 hours ago
Irrespective of the domain you specifically listed in 3 (game dev is, believe it or not, one of the “more complex” domains), you have completely missed the point.
> 2. They're coincidentally good at things I'm good at, and shit at things I don't understand.
This may well be! In the perfect world this would be balanced with the knowledge that maybe “the things you’re good at” are objectively* easier than “things you don’t understand”. Speaking for myself, I’m proficient in many more easy things than hard things.
*inasmuch as anything can be “objectively” easier
mikkupikku
3 hours ago
I have definitely considered the possibility that I'm simply good at easy things and the LLM is good at easy things, and that hard things are hard for both of us. And there certainly must be some element of that going on, but I keep noticing that different people get different quality results for the same kind of problems, and it seems to line up with how good they themselves would be at that task. If you know the problem space well, you can describe the problem (and approaches to it) with a precision that people unfamiliar with the problem space will struggle with.
I think you can observe this in action by making vague requests, seeing how it does, then rolling back that work and making a more precise request using relevant jargon, and comparing the results.

For example, I asked Claude to make a system that recommends files with similar tags. It gave me a recommender that just orders files by how many tags they have in common with the query file. This is the kind of solution somebody may think up quickly, but it doesn't actually work great in practice. Then I reverted all of that and instead specified that it should use a vector space model with cosine similarity. It did pretty well, but there was something subtly off. That is, however, about the limit of my expertise in this direction, so I tabbed over to a session with ChatGPT and discussed the problem at a high level for about 20 minutes, then asked ChatGPT to write up a single terse, technically precise paragraph describing the problem. I told ChatGPT to use no bullet points and write no pseudocode, telling it the coding agent was already an expert in the codebase, so let it worry about the coding. I gave that paragraph to Claude and suddenly it clicked; it banged out a working solution without any drama. So I conclude the quality of the prompting determined the quality of the results.
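For what it's worth, the gap between the two approaches described above fits in a few lines. This is my own toy reconstruction, not the commenter's actual code: treating each file's tag set as a binary vector, cosine similarity normalizes the shared-tag count by vector length, so heavily tagged files no longer dominate the ranking:

```python
import math

def cosine_similarity(tags_a, tags_b):
    # Binary vector-space model: for 0/1 tag vectors,
    # cosine = |A ∩ B| / (sqrt(|A|) * sqrt(|B|))
    a, b = set(tags_a), set(tags_b)
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

def recommend(query_tags, library):
    # library: {filename: tag_list}; return filenames ranked best-first
    scored = sorted(
        ((cosine_similarity(query_tags, tags), name)
         for name, tags in library.items()),
        reverse=True,
    )
    return [name for score, name in scored if score > 0]
```

The naive shared-tag count scores a file tagged with everything as highly as an exact match; the square-root denominator is exactly what penalizes that.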
vntok
9 hours ago
The parent is specifically talking about producing boilerplate code (a domain in which LLMs excel) and not having had any success at that. It's therefore not a leap of logic to assume they haven't put (enough) effort into getting better at prompting first, which is perfectly fine per se, but it leans towards a skill issue and not an immutable property of gen AI.
The uncomfortable fact remains that one cannot really expect to get much better results from an LLM without putting some work themselves. They aren't magical oracles.
tovej
4 hours ago
That is not at all what I said, please read my post more carefully before speculating.
I am talking about using LLMs in general, not for boilerplate specifically.
My point about boilerplate is that I have tools that solve this for me already, and do it in a more predictable way.
amiga386
6 hours ago
> Players only object against AI art assets. And only when they're painfully obvious.
Restaurant-goers only object against you spitting in their food if it's painfully obvious (i.e. they see you do it, or they taste it)
Players are buying your art. They are valuing it based on how you say you made it. They came down hard on asset-flipping shovelware before the rise of AI (where someone else made the art and you just shoved it together... and the combination didn't add up to much) and they come down hard on AI slop today, especially if you don't disclose it and you get caught.
riversflow
5 hours ago
> They came down hard on asset-flipping shovelware before the rise of AI
That’s not what I remember, I remember PUBG being a viral hit that extensively used asset flipping.
amiga386
4 hours ago
The more nuanced take is that, if somehow your game is actually good or interesting despite being full of other people's assets, players will see the value that you created (e.g. making a fun game). This is missing in most "asset-flip" games.
Another example comes from Getting Over It with Bennett Foddy, which, despite using a lot of pre-bought art assets, bears the indisputable hallmark of Bennett Foddy throughout -- it has a ridiculously tricky control mechanism, and the whole game world you play in, should you make any mistakes, has a strong likelihood of dropping you right back at the start, and it's all your own fault for not being able to recover from your mistakes under pressure. You can see this theme in his other games like QWOP and Baby Steps.
krige
10 hours ago
> Spore is well acclaimed
And yet it also effectively ended Will Wright's career. Rave press reviews are not a good indicator of anything, really.
h2zizzle
8 hours ago
Tbf Spore's acclaim comes with the caveat that it completely failed to live up to years of pre-release hype. Much of the goodwill it's garnered since, which is reflected in review scores, only came after the storm of controversy over Spore not being "the ultimate simulator which would mark the 'end of history' for gaming" died down.
And you wouldn't really have any idea this was the case if you weren't there when it happened.
mathgradthrow
7 hours ago
localization? Why would you oppose LLMs doing localization?
fhd2
7 hours ago
I guess the chain of reasoning would be: AI for art is bad -> Writing is art -> Translation is writing.
Personally, I do appreciate good localisation, Nintendo usually does a pretty impressive job there. I play games in their original language as long as I actually speak that language, so I don't have too many touch points with translations though.
teamonkey
4 hours ago
It’s bad at it. At least, it can’t be guaranteed to get nuance or context correct in a way that doesn’t feel artificial to a fluent speaker.
My favourite example I saw was where Google translated an information page of the Italian branch of a large multinational as “this is the UK branch of [multinational]”, presumably because the LLM thought that was more contextually appropriate in English.
JadeNB
7 hours ago
In case they hallucinate? There's no point having content in a wide variety of languages if it's unpredictably different from the original-language content.
SirMaster
6 hours ago
>No one cares about how the code is written.
People definitely do care. Nobody wants vibe-coded buggy slop code for their game.
They want well designed and optimized code that runs the game smoothly on reasonable hardware and without a bunch of bugs.
roesel
6 hours ago
No one wants _buggy slop code_ for their game, but ultimately no one cares whether it has been hand-crafted or vibe-coded.
As proof, ask yourself which of the following two options you would prefer:
1. buggy code that was hand-written
2. optimized code that was vibe-coded
I'll bet most people will choose 2.
SirMaster
6 hours ago
I've never seen something as complex as a video game vibe coded that was actually well optimized. Especially when the person doing the prompting is not a software developer.
So I personally do care and I am someone, so the answer is not no one.
llm_nerd
6 hours ago
Your second paragraph does not follow, at all, from the first. These are completely orthogonal demands.
The gaming industry is absolutely overwhelmed with outrageously inefficient, garbage, crash-prone code. It has become the norm, and it has absolutely nothing to do with AI.
Like https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times.... That something so outrageously trashy made it into a hundreds-of-millions-of-dollars game, cursing millions of players to 10+ minute waits, should shame everyone involved. It's actually completely normal in that industry. Trash code, thoughtless and lazily implemented, is the norm.
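The linked write-up found, roughly, that the game's JSON parse re-scanned the whole remaining buffer on every token, turning a linear parse quadratic. A toy Python reconstruction of that accidental-O(n²) shape (details are mine, purely illustrative):

```python
def parse_items_slow(buf):
    # Toy analogue of the bug: every iteration re-measures the rest of
    # the buffer (as sscanf's hidden strlen did), so a parse that should
    # be O(n) does O(n^2) work in total on large inputs.
    items, pos = [], 0
    while pos < len(buf):
        _remaining = len(buf[pos:])  # full scan of the tail, every time
        end = buf.find(",", pos)
        if end == -1:
            end = len(buf)
        items.append(buf[pos:end])
        pos = end + 1
    return items
```

The fix described in the write-up was essentially to measure the length once (and replace a linear-scan de-duplication check with a hash map), restoring near-linear behavior.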
Most game studios would likely hugely improve their game, har har, if they leveraged AI a lot more.
fzeroracer
11 hours ago
> Yeah, exactly. And LLM help developers save time from writing the same thing that has be done by other developers for a thousand times. I don't know how one can spins this as a bad thing
Do you ever ask why you're writing the same thing over and over again? That's literally the foundational piece of being an engineer; understanding when you're reinventing the wheel when there's a perfectly good wheel nearby.
porridgeraisin
9 hours ago
When you make a function
f(a, b, c)
It is reusable only if simply changing a, b, c is enough to give you the function that you want. Options objects etc. further _parameterise_ that function. It is useful only if the variability in reuse you desire is spanned by the parameters. This is syntactic reuse. With LLMs, the parameterisation moves into semantic space. This makes code more reusable.
A model trained on all of GitHub can reuse all that code regardless of whether they are syntactically reusable or not. This is semantic reuse, which is naturally much broader.
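A toy illustration of that boundary (function names are mine, not from the thread): a parameterised helper covers exactly the variability its arguments span, and a semantically adjacent need falls outside it:

```python
def clamp(value, low, high):
    # Syntactic reuse: any clamping need is covered by choosing the
    # three arguments; the parameters span the whole use case.
    return max(low, min(value, high))

def wrap(value, low, high):
    # A semantically similar need -- wrap the value into the range
    # instead of clamping it -- cannot be reached by any choice of
    # clamp()'s arguments; it requires new code (or, per the argument
    # above, semantic reuse by a model that has seen both patterns).
    return low + (value - low) % (high - low)
```

For example, `clamp(15, 0, 10)` gives 10, while `wrap(15, 0, 10)` gives 5; no arguments to `clamp` can produce the latter behavior.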
fzeroracer
9 hours ago
There are two important failures I see with this logic:
First, I am not arguing for reusability. Reusability is one of the most common mistakes you can make as a software engineer because you are over-generalizing what you need before you need it. Code should be written for your specific use case, and only generalized as problems appear. But if you can recognize that your specific use case fits a known problem, then you can find the best way to solve that problem, faster.
Second, when you're using an LLM to make your code more 'reusable' you are taking full responsibility for everything that LLM vomits out. You're no longer assembling a car from well known parts, taking care to tailor it to your use case as needed. You're now building everything in said car, from the tires to the engine and the rearview mirror.
Coding is a constant balance between understanding what you're solving for and what can solve it. Using LLMs gets you the worst of both worlds, offloading both your understanding of the problem and your understanding of the solution.
raw_anon_1111
6 hours ago
> Second, when you're using an LLM to make your code more 'reusable' you are taking full responsibility for everything that LLM vomits out. You're no longer assembling a car from well known parts, taking care to tailor it to your use case as needed. You're now building everything in said car, from the tires to the engine and the rearview mirror.
If you are anything above a mid-level ticket-taker, your responsibility exceeds what you personally write. When I was an “architect” responsible for the implementation and integration work of multiple teams at product companies - mostly startups - and now as a tech lead in consulting, I'm responsible for knowing how a lot of code works that I didn't write, and I'm the person called on the carpet by the director/CTO then and the customer now.
I was responsible for what the more junior developers “vomit out”, the outside consulting company doing the Salesforce integration, or, god forbid for a little while, the contractors in India. I no more care about whether the LLM decided to use a for loop or a while loop than I cared about the OSQL (not a typo) that the Salesforce consultants used. I care about whether the resulting implementation meets the functional and non-functional requirements.
On my latest two projects, I understood the customer from talking to sales before I started, I understood the business requirements from multiple calls with the customer, and I understood the architecture because I designed it myself, drawing on the diagrams and 8 years of working with (and, in a former life, at) AWS, and reviewed it with the customer.
As far as reusability? I’ve used the same base internal management web app across multiple clients.
I built it (with AI) for one client. I extracted the reusable parts, removed the client-specific parts, deployed a demo internally (with AI), and modified it and added features (with AI) for another client. I haven't done web development seriously since 2002, except a little copy-paste work. I didn't look at a line of code. I used AWS Cognito for authentication. I verified the database user permissions.
Absolutely no one in the value chain cares if the project was handcrafted or written by AI - as long as it was done on time, on budget and meets requirements.
Before the gatekeeping starts, I’ve been working for 30 years across 10 jobs and before that I was a hobbyist for a decade who started programming in 65C02 assembly in 1986.
porridgeraisin
9 hours ago
I am not talking about using an LLM to make code reusable in the sense you're arguing.
My point is that the very act of training an LLM on any corpus of code automatically makes all of that code reusable, in a much broader semantic way rather than through syntax, because the LLM uses a compressed representation of all that code to generate the function you ask it for. It is like having an npm that already contains, in compressed form, the code specific to your situation (like you were saying) that you want to write.
hamdingers
6 hours ago
At least to some extent, the anti-AI folks don't care about AI-assisted programming because they see programmers as the "techbro" bogeyman pushing AI into their lives, not fellow creatives who are also at a crossroads.
lxgr
11 hours ago
> I don't know how one can spins this as a bad thing.
People spin all kinds of things if they believe (accurately or not) that their livelihood is on the line. The knee-jerk "AI universally bad" movement seems just as absurd to me as the "AGI is already here" one.
> Spore is well acclaimed. Minecraft is literally the most sold game ever.
Counterpoint: Oblivion, one of the first high-profile games to use procedural terrain/landscape generation, seemed very soulless to me at the time.
As I see it, it's all a matter of how well it's executed. In the best case, a skilled artist uses automation to fill in mechanical rote work (in the same way that e.g. renaissance artists didn't make every single brushstroke of their masterpieces themselves).
In the worst (or maybe even average? time will tell) case, there are only minimal human-made artistic decisions flowing into a work and the output is a mediocre average of everything that's already been done before, which is then rightfully perceived as slop.
mikkupikku
10 hours ago
> Counterpoint: Oblivion, one of the first high-profile games to use procedural terrain/landscape generation, seemed very soulless to me at the time.
Is that even a counter point? Nobody in their right mind would ever claim that procedural generation is impossible to fuck up. The reason Minecraft/etc are good examples is because they prove procedural generation can work, not that it always works.
lxgr
9 hours ago
True, I should have said "counterexample". Procedural generation is just another tool, in the end, and it can be used for great or mediocre results like any other.
zimpenfish
10 hours ago
> Oblivion, one of the first high-profile games to use procedural terrain/landscape generation
I might be misremembering but wasn't the Oblivion proc-gen entirely in the development process, not "live" in the game, which means...
> "In the best case, a skilled artist uses automation to fill in mechanical rote work"
...is what Bethesda did, no?
lxgr
9 hours ago
Yes, but I beg to differ on the "skilled" part. I find the result very jarring somehow; the scale of the world didn't seem right. (Probably because it was too realistic; part of the art of game terrain design is reconciling the inherently unrealistic scales.)
bombcar
8 hours ago
WoW had this but you never really thought about it - even the massive capital cities were a few blocks at most.
The problem with procedural generation is it's hard to make it as action-packed and desirable as WoW zones, and even those quickly become fly-over territory.