ppchain
15 days ago
The point they seem to be making is that AI can "orchestrate" the real world even if it can't interact physically. I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.
However, even by that metric I don't see how Claude is doing that. Seth is the one researching the suppliers "with the help of" Claude. Seth is presumably the one deciding when to prompt Claude to make decisions about whether they should plant in Iowa and in how many days. I think I could also grow corn if someone came and asked me well-defined questions and then acted on what I said. I might even be better at it because, unlike a Claude output, I will still be conscious in 30 seconds.
That is a far cry from sitting down at a command line and saying "Do everything necessary to grow 500 bushels of corn by October".
embedding-shape
15 days ago
These experiments always seem to end up requiring the hand-holding of a human at the top, seemingly undermining the idea behind the experiment in the first place. It seems better to spend the time and energy on finding better ways for AI to work hand-in-hand with the user, empowering them, rather than trying to find the areas where we could replace humans with as little quality degradation as possible. That whole part feels like a race to the bottom, instead of making it easier for the ones involved to do what they do.
pixl97
15 days ago
> rather than trying to find the areas where we could replace humans with as little quality degradation as possible
The particular problem here is it is very likely that the easiest people to replace with AI are the ones making the most money and doing the least work. Needless to say those people are going to fight a lot harder to remain employed than the average lower level person has political capital to accomplish.
> seems to end up requiring the hand-holding of a human at top,
I was born on a farm and know quite a bit about the process, but in the process of trying to get corn grown from seed to harvest I would still contact/contract a set of skilled individuals to do it for me.
One thing I've come to realize in the race to achieve AGI is that the humans involved don't want AGI, they want ASI. A single model that can do what an expert can, in every field, in a short period of time is not what I would consider a general intelligence at all.
ep103
15 days ago
AI hype is predicated on the popular idea that it can easily automate someone else's job (because that job they know nothing about is easy) while my own job is safe from AI because it is so nuanced.
Hammershaft
15 days ago
I don't think this describes all or most AI hype, but it definitely describes Marc Andreessen when he said VCs would be the last ones automated.
lighthouse1212
15 days ago
[dead]
nerdsniper
15 days ago
> the ones making the most money and doing the least work. Needless to say those people are going to fight a lot harder to remain employed than the average lower level person has political capital to accomplish.
They don't have to "fight" to stay employed, anyone with sufficient money is effectively self-employed. It's not going to be illegal to spend your own money running your own business if that's how you want to spend your money.
Anyone "making the most money and doing the least work" has enough money to start a variety of businesses if they get fired from their current job.
ThunderSizzle
14 days ago
?
If you have a cushy job where you don't really work, and you make a lot of money (doesn't mean you have capital), how does that translate into being suited to becoming an entrepreneur, with the money you're no longer earning and the effort capacity you apparently don't have?
nerdsniper
14 days ago
> (doesn't mean you have capital)
Then they’re not going to be doing any significant lobbying so they’re not covered by GP’s comment, which was selecting for “people who have political capital”.
Yes, there are other forms of political capital besides money, but it's still mostly just money, especially when they're part of the tiny voter bloc of "people who make a lot of money and don't do much work and don't have wealth".
Also, I talked with the employees at my local McDonald's last week. Not one of them had any idea who the owner was. I showed them a photo of the owner and they had never seen them. So apparently that could be an option for people who were overpaid and still want to pretend-work while making money.
catlifeonmars
14 days ago
Don't dismiss the other forms of political capital so quickly. Sure, the people who are independently wealthy can independently influence political decisions, but there are so many situations in history where, once conditions worsen for the upper middle class, there is impetus to make political change, overthrow governments, etc. It's usually when the scholar/merchant class gets annoyed that laws change.
pegasus
14 days ago
> the humans involved don't want AGI, they want ASI
They are virtually synonymous. After all, a computer already exceeds human capabilities in some areas (for example, numeric computation). If (hypothetically; I don't believe this is possible) they were able to achieve human-level performance in all other areas, they would already have achieved ASI as well.
Razengan
14 days ago
> The particular problem here is it is very likely that the easiest people to replace with AI are the ones making the most money and doing the least work. Needless to say those people are going to fight a lot harder to remain employed than the average lower level person has political capital to accomplish.
How soon before we see a company with an AI CEO?
xmprt
15 days ago
I didn't think we'd ever see the day where we started enshittifying labor
santadays
15 days ago
> I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.
I think this is the new Turing test. Once it's been passed, we will have AGI and all the Sam Altmans of the world will be proven correct. (This isn't a perfect test, obviously, but neither was the Turing test.)
If it fails to pass, we will still have what jdthedisciple pointed out:
> a non-farmer, is doing professional farmer's work all on his own without prior experience
I am actually curious how many people really believe AGI will happen. There's a lot of talk about it, but when can I ask Claude Code to build me a browser from scratch and get a browser from scratch? Or when can I ask Claude Code to grow corn and Claude Code grows corn? Never? In 2027? In 2035? In the year 3000?
HN seems rife with strong opinions on this, but does anybody really know?
bayindirh
15 days ago
Researchers love to reduce everything to formulae, and believe that once they have the right set of formulae, they can simulate something as-is.
Hint: It doesn't work that way.
Another hint: I'm a researcher.
Yes, we have found a great way to compress and remix the information we scrape from the internet, and even with some randomness it looks like we can emit the right set of tokens so that it makes sense, or search the internet the right way and emit those search results. But AGI is more than that.
There's so much tacit knowledge and implicit computation coming from experience, emotions, sensory inputs and from our own internal noise. AI models don't work on those. LLMs consume language and emit language. The information embedded in those languages is available to them, but most of the tacit knowledge is just an empty shell of the thing we try to define with a limited set of words.
It's the same with anything where we're trying to replace humans in the real world, in daily tasks (self-driving, compliance checks, analysis, etc.).
AI is missing the magic grains we can't put out as words or numbers or anything else. The magic smoke, if you pardon the term. This is why no amount of documentation can replace a knowledgeable human.
...or this is why McLaren Technology Center's aim of "being successful without depending on any specific human by documenting everything everyone knows" is an impossible goal.
Because like it or not, intuition is real, and AI lacks it. Irrelevant of how we derive or build that intuition.
smaudet
15 days ago
> There's so much tacit knowledge and implicit computation coming from experience, emotions, sensory inputs and from our own internal noise.
The premise of the article is stupid, though...yes, they aren't us.
A human might grow corn, or decide it should be grown. But the AI doesn't need corn, it won't grow corn, and it doesn't need any of the other things.
This is why they are not useful to us.
Put it in science fiction terms. You can create a monster, and it can have super powers, _but that does not make it useful to us_. The extremely hungry monster will eat everything it sees, but it won't make anyone's life better.
faidit
15 days ago
The Torment Nexus can't even put a loaf of bread on my table, so it's obvious we have nothing to fear from it!
smaudet
15 days ago
I agree we don't have much to (physically) fear from it...yet. But the people who can't take "no" for an answer and don't get that it is fundamentally non-human, I can believe they are quite dangerous.
godelski
15 days ago
> Hint: It doesn't work that way.
I mean... technically it would work this way, but (and this is a big but) reality is extremely complicated, and a model that can actually be a reliable formula has to be extremely complicated too. There's almost certainly no globally optimal solution to these types of problems, not to mention that the solution space is constantly changing as the world does. This is why we humans, and all animals, work in probabilistic frameworks that are highly adaptable. Human intuition. Human ingenuity. We simply haven't figured out how to make models at that level of sophistication. Not even in narrow domains! What AI has done is undeniably impressive, wildly impressive even. Which is why I'm so confused that we embellish it so much.
It's really easy to think everything is easy when we look at problems from 40k feet. But as you come down to Earth the complexity increases exponentially, and what was a minor detail is now a major problem. As you come down, resolution increases and you see major problems that you couldn't ever see from 40k feet.
As a researcher, I agree with you very much. And as an AI researcher, one of the biggest issues I've noticed with AI is that it abhors detail and nuance. Granted, this is common among humans too (and let's not pretend CS people don't have a stereotype of oversimplifying and thinking all things are easy). While people do this frequently, they don't usually do it in their niche domains, and if they do, we call them juniors. You get programmers thinking building bridges is easy[0] while you get civil engineers thinking writing programs is easy, because each person understands the other's job only at 40k feet and is reluctant to believe they are standing so high[1]. But AI? It really struggles with detail. It really struggles with adaptation. You can get detail out, but it often requires significant massaging and it'll still be a roll of the dice[2]. You also can't get the AI to change course, a necessary thing as projects evolve[3]. Anyone who's tried vibe coding knows the best thing to do is just start over. It's even in Anthropic's suggestion guide.
My problem with vibe coding is that it encourages this overconfidence. AI systems still have the exact same problem computer systems do: they do exactly what you tell them to. They are better at interpreting intent, but that blade cuts both ways. The major issue is that you can't properly evaluate a system's output unless you were capable of generating that output yourself. The AI misses the details. Doubt me? Look at Proof of Corn! The Farmer Fred page is saying there's an API error[4]. The sensor page doesn't make sense (everything there is fine for an at-home hobby project, but anyone who's worked with those parts knows how unreliable they are. Who's going to do all the soldering? Are you making PCBs? Where's the circuit to integrate everything? How'd we get to $300? Where's the detail?). Everything discussed is at a 40k-foot view.
[0] https://danluu.com/cocktail-ideas/
[1] I'm not sure why people are afraid of not knowing things. We're all dumb as shit. But being dumb as shit doesn't mean we aren't also impressive and capable of genius. Not knowing something doesn't make you dumb, it makes you human. Depth is infinite and we have priorities. It's okay to have shallow knowledge, often that's good enough.
[2] As implied, what is enough detail is constantly up for debate.
[3] No one, absolutely nobody, has everything figured out from the get-go. I'll bet money none of you have written a (meaningful) program start to finish from plans, ending up with exactly what you expect, never making an error, never needing to change course, even in the slightest.
Edit:
[4] The API issue is weird, and the more I look at the code the weirder things get. For example, there's a file decision-engine/daily_check.py that has a comment saying to set a cron job to run every 8 hours. It says it dumps data to logs/daily.log, but that file doesn't exist; it will, however, write to logs/all_checks.jsonl, which appears to have the data. So why in the world is it reading https://farmer-fred.sethgoldstein.workers.dev/weather?
cevn
15 days ago
I think it'll happen once we get off LLMs and find something that more closely maps to how humans think, which is still not known, afaik. So either never, or once the brain is figured out.
autoexec
15 days ago
I'd agree that LLMs are a dead end to AGI, but I don't think that AI needs to mirror our own brains very closely to work. It'd be really helpful to know how our brains work if we wanted to replicate them, but it's possible that we could find a solution for AI that is entirely different from human brains while still having the ability to truly think/learn for itself.
rmunn
15 days ago
> ... I don't think that AI needs to mirror our own brains very closely to work.
Mostly agree, with the caveat that I haven't thought this through in much depth. But the brain uses many different neurotransmitters (dopamine, serotonin, and so on) as part of its processing; it's not just binary on/off signals traveling through the "wires" made of neurons. Neural networks as an AI system reproduce only a tiny fraction of how the brain works, and I suspect that's a big part of why, even though people have been playing around with neural networks since the 1960s, they haven't had much success in replicating how the human mind works. Those neurotransmitters are key to how we feel emotion, and even to how we learn and remember things. Since neural networks lack a system to replicate how the brain feels emotion, I strongly suspect that they'll never be able to replicate more than a fraction of what the human brain can do.
For example, the "simple" act of reaching up to catch a ball doesn't involve doing the math in one's head. Rather, it's strongly involved with muscle memory, which is strongly connected with neurotransmitters such as acetylcholine and others. The eye sees the image of the ball changing in direction and subtly changing in size, the brain rapidly predicts where it's going to be when it reaches you, and the muscles trigger to raise the hands into the ball's path. All this happens without any conscious thought beyond "I want to catch that ball": you're not calculating the parabolic arc, you're just moving your hands to where you already know the ball will be, because your brain trained for this since you were a small child playing catch in the yard. Any attempt to replicate this without the neurotransmitters that were deeply involved in training your brain and your muscles to work together is, I strongly suspect, doomed to failure because it has left out a vital part of the system, without which the system does not work.
Of course, there are many other things AIs are being trained for, many of which (as you said, and I agree) do not require mimicking the way the human brain works. I just want to point out that the human brain is way more complex than most people realize (it's not merely a network of neurons, there's so much more going on than that) and we just don't have the ability to replicate it with current computer tech.
brookst
15 days ago
This is where it’s a mistake to conflate sentience and intelligence. We don’t need to figure out sentience, just intelligence.
doug713705
15 days ago
Is there intelligence without sentience?
retsibsi
15 days ago
Nobody can know, but I think it is fairly clearly possible without signs of sentience that we would consider obvious and indisputable. The definition of 'intelligence' is bearing a lot of weight here, though, and some people seem to favour a definition that makes 'non-sentient intelligence' a contradiction.
doug713705
15 days ago
As far as I know (and I'm no expert in the field) there is no known example of intelligence without sentience. Current AI is basically algorithms and statistics simulating intelligence.
brookst
12 days ago
Definitely a definition / semantics thing. If I ask an LLM to sketch the requirements for life support for 46 people, mixed ages, for a 28 month space journey… it does pretty good, “simulated” or not.
If I ask a human to do that and they produce a similar response, does it mean the human is merely simulating intelligence? Or that their reasoning and outputs were similar but the human was aware of their surroundings and worrying about going to the dentist at the same time, so genuinely intelligent?
There is no formal definition to snap to, but I’d argue “intelligence” is the ability to synthesize information to draw valid conclusions. So, to me, LLMs can be intelligent. Though they certainly aren’t sentient.
retsibsi
15 days ago
Can you spell out your definition of 'intelligence'? (I'm not looking to be ultra pedantic and pick holes in it -- just to understand where you're coming from in a bit more detail.) The way I think of it, there's not really a hard line between true intelligence and a sufficiently good simulation of intelligence.
doug713705
15 days ago
I would say that "true" intelligence allows someone/something to build a tool that never existed before, while simulated intelligence only allows someone/something to reproduce tools that are already known. I would distinguish between someone able to use all their knowledge to find a solution to a problem using tools they know of, and someone able to discover a new tool while solving the same problem. I'm not sure the latter exists without sentience.
dagss
14 days ago
I honestly don't think humans fit your definition of intelligent. Or at least not that much better than LLMs do.
Look at human technology history...it is all people doing minor tweaks on what other people did. Innovation isn't the result of individual humans so much as it is the result of the collective of humanity over history.
If humans were truly innovative, should we not have invented, for instance, at least one way of organizing society and economics that was stable by now? If anything surprises me about humans, it is how "stuck" we are in the mold of what other humans do.
Circulate all the knowledge we have over and over, throw in some chance, some reasoning skills of the kind LLMs demonstrate every day in coding, have millions of instances (most of whom never innovate anything, but some do) and a feedback mechanism: that seems like human innovation history to me, and it does not demonstrate anything LLMs clearly do not possess. Except, of course, not being plugged into history and the world the way humans are.
krzat
14 days ago
We have those eureka moments, when a good idea appears out of nowhere. I would say this "nowhere" is intelligence without sentience.
kortex
15 days ago
I think we are closer than most folks would like to admit.
In my wild-guess opinion:
- 2027: 10%
- 2030s: 50%
- 2040: >90%
- 3000: 100%
Assuming we don't see an existential event before then, I think it's inevitable, and soon.
I think we are going to be arguing about the definition of "general intelligence" long after these systems are already running laps around humans at a wide variety of tasks.
lazide
15 days ago
This is pretty unlikely for the same reason that India is far from industrialized.
When people aren’t super necessary (aka rare), people are cheap.
metalman
15 days ago
"New Turing test" indeed! Any farmer worth his salt will smell a sucker and charge accordingly.
neya
15 days ago
>That whole part feels like a race to the bottom, instead of making it easier for the ones involved to do what they do.
This is what people said while transitioning from horse carriages to combustion engines, steam engines to modern day locomotives. Like it or not, the race to the bottom has already begun. We will always find a way to work around it, like we have done time and again.
shimman
14 days ago
lol, this is not the same at all. If these tools were as good as they claim, they wouldn't be struggling so hard to make money or to sell them.
The fact that they have to be force-fed to people is all the proof you need that this is an unsustainable bubble.
Something to keep in mind: unless you can destroy something, the system is not democratic, and people are realizing how undemocratic this game truly is.
micromacrofoot
14 days ago
yes exactly, comma.ai is making a driver assistance product and this is similar to their stance... which is refreshing
they know they won't be able to make a fully autonomous product while navigating liability and all sorts of problems so they're using technology to make drivers more comfortable while still in control
none of this hype about full autonomy, just realistic ideas about how things can be easier for the humans in control
mring33621
15 days ago
"...where we could replace humans with as little quality degradation as possible"
This has been pretty much the whole goal of capitalism since the 1800s.
LoganDark
15 days ago
Using the example from the article, I guess restaurant managers need handholding by the chefs and servers, seemingly breaking down the idea behind restaurants, yet restaurants still exist.
The point, I think, is that even if LLMs can't directly perform physical operations, they can still make decisions about what operations are to be performed, and through that achieve a result.
And I also don't think it's fair to say there's no point just because there's a person prompting and interpreting the LLM. That happens all the time with real people, too.
embedding-shape
15 days ago
> And I also don't think it's fair to say there's no point just because there's a person prompting and interpreting the LLM. That happens all the time with real people, too.
Yes. What I'm trying to get at is that it's much more vital we nail down the "person prompting and interpreting the LLM" part instead of focusing so much on the "autonomous robots doing everything" part.
LoganDark
15 days ago
I feel you're still missing the point of the experiment... The entire thing was based on how Claude felt empowering -- "I felt like I could do anything with software from my terminal"... It's not at all about autonomous robots... It's about what someone can achieve with the assistance of LLMs, in this case Claude
embedding-shape
15 days ago
I think we might have read two different articles.
LoganDark
15 days ago
The article I read was linked at the top of the submission ("Read the full story")
lukev
15 days ago
Right. This whole process still appears to have a human as the ultimate outer loop.
Still an interesting experiment to see how much of the tasks involved can be handled by an agent.
But unless they've made a commitment not to prompt the agent again until the corn is grown, it's really a human doing it with agentic help, not Claude working autonomously.
marcd35
15 days ago
Why wouldn't they be able to eventually set it up to work autonomously? A simple GitHub Action could run a check every $t hours to look at the status, and an orchestrator is only really needed once, initially, to set up the if>then decision tree.
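A minimal Python sketch of what that scheduled check could look like: a job some cron trigger (e.g. a GitHub Actions schedule) runs periodically, walking a fixed if>then decision tree over the latest status. Every field name, threshold, and action below is invented for illustration; the real project's endpoints and decisions aren't shown here.

```python
# Hypothetical scheduled check: fetch status, walk a fixed decision
# tree set up once by an orchestrator, act if needed, and log.
# All sensor fields and thresholds are made up for this sketch.

def decide(status: dict) -> str:
    """Walk a fixed if>then decision tree over the latest field status."""
    if status["soil_moisture_pct"] < 20:
        return "irrigate"
    if status["days_since_planting"] > 150 and status["grain_moisture_pct"] < 16:
        return "schedule_harvest"
    return "no_action"

def run_check(fetch_status, act, log) -> str:
    """One scheduled run: fetch status, decide, act if needed, log."""
    status = fetch_status()   # e.g. an HTTP call to a sensor endpoint
    action = decide(status)
    if action != "no_action":
        act(action)           # e.g. email a contractor or prompt the agent
    log({"status": status, "action": action})
    return action

if __name__ == "__main__":
    # Dry run with canned data in place of real sensors.
    sample = {"soil_moisture_pct": 15, "days_since_planting": 40,
              "grain_moisture_pct": 30}
    print(run_check(lambda: sample, lambda a: None, lambda rec: None))
```

Of course, as the replies point out, the hard part isn't this loop; it's who takes responsibility when something falls outside the tree.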
sdwr
15 days ago
The question is whether the system can be responsible for the process. Big picture, AI doing 90% of the task isn't much better than it doing 50%, because a person still needs to take responsibility for it actually getting done.
If Claude only works when the task is perfectly planned and there are no exceptions, that's still operating at the "junior" level, where it's not reliable or composable.
patmcc
15 days ago
That still doesn't seem autonomous in any real way though.
There are people that I could hire in the real world, give $10k (I dunno if that's enough, but you understand what I mean) and say "Do everything necessary to grow 500 bushels of corn by October", and I would have corn in October. There are no AI agents where that's even close to true. When will that be possible?
autoexec
15 days ago
Given enough time and money, the chatbots we call "AI" today could contact and pay enough people that corn would happen. At some point one would eventually have spammed and paid the right person, who would manage everything necessary themselves after the initial ask and payment. Most people would probably just pocket the cash and never respond, though.
lazide
15 days ago
You can already do this by…. Buying corn. At the store. Or worst case at a distributor.
It’s pretty cheap too.
It’s not like these are novel situations where ‘omg AI’ unlocks some new functionality. It’s literally competing against an existing, working, economic system.
greedo
15 days ago
So an "AI chatbot" is going to disintermediate this process without adding any fundamental value. Sounds like a perfect SV play....
/s
bluGill
15 days ago
You only want to apply expensive fungicide when there is a fungus problem. That means someone needs to go out to the field and check - at least today. You don't want to harvest until the corn is dry, so someone needs to check the progress of drying beforehand - today the farmer hand-harvests a few cobs of corn from various parts of the field to check. There are lots of other things the farmer checks that we don't have sensors for - we could build them, but they would be too expensive.
9rx
14 days ago
> You only want to apply expensive fungicide when there is a fungus problem. That means someone needs to go out to the field and check
Nah. If you can see that you have tar spot, you are already too late. To be able to selectively apply fungicide, someone needs to model the world around them to determine the probability of an oncoming problem. That is something that these computer models are theoretically quite well suited for. Although common wisdom says that fungicide applications on corn will always, at very least, return the cost of it, so you will likely just apply it anyway.
aqme28
15 days ago
There's no reason an AI couldn't anticipate these things, hire people to do those checks, and act on their reports as though it were a human farmer. That's different than an AI researcher telling Claude which step is next.
greedo
15 days ago
"hire people to do those..."
We already have those people, they're called farmers. And they are already very used to working with high technology. The idea of farmers being a bunch of hicks is really pretty stupid. For example, farmers use drones for spraying pesticides, fungicides, and inputs like fertilizer. They use drones to detect rocks in fields that then generate maps for a small skid steer to optimally remove the rocks.
They use GPS enabled tractors and combines that can tell how deep a seed is planted, what the yield is on a specific field (to compare seed hybrids), what the moisture content of the crop is. They need to be able to respond to weather quickly so that crops get harvested at the optimal times.
Farmers also have to become experts in crop futures, crop insurance, irrigation and tillage best practices; small equipment repair, on and on and on.
andoando
15 days ago
Presumably because operating a farm isn't a perfectly repeatable process and you need to constantly manage different issues that come up.
pests
15 days ago
> But unless they've made a commitment not to prompt the agent again
Model UIs like Gemini have "scheduled actions", so in the initial prompt you could have it do things daily and send updates or reports, etc., and it will start the conversation with you. I don't think it's powerful enough to, say, spawn sub-agents, but there is some ability for them to "start chats".
progval
15 days ago
Anthropic tried that with a vending machine. The Claude instance managing it ended up ordering tungsten cubes and selling them at a loss. https://www.anthropic.com/research/project-vend-1
9dev
15 days ago
> the plausible, strange, not-too-distant future in which AI models are autonomously running things in the real economy.
A plot line in Ray Nayler's great book The Mountain in the Sea, which is set in a plausible, strange, not-too-distant future, is that giant fish-trawler fleets are run by AI connected to the global markets, fully autonomously. They relentlessly rip every last fish from the ocean, driven entirely by the goal of maximizing profits at any cost.
The world is coming along just nicely.
topaz0
15 days ago
They also enslave human workers to do all the manual labor.
9dev
15 days ago
I didn't want to spoil too much, but yes. They do.
trollbridge
15 days ago
I, for one, welcome our new AI overlords.
sethammons
15 days ago
It's an older code, but it checks out
bethekidyouwant
14 days ago
People convinced the vending machine to stock tungsten cubes because it’s funny. Also tungsten cubes are cool.
jdthedisciple
15 days ago
So Seth, as presumably a non-farmer, is doing professional farmer's work all on his own without prior experience? Is that what you're saying?
culi
15 days ago
Nobody is denying that this is AI-enabled, but that's entirely different from "AI can grow corn".
Also, Seth, a non-farmer, was already capable of using Google, online forums, and Sci-Hub/Libgen to access farming-related literature before LLMs came on the scene. In this case the LLM is just acting as a super-charged search engine. A great and useful technology, sure. But we're not utilizing any entirely novel capabilities here.
And tbh until we take a good crack at World Models I doubt we can
NewsaHackO
15 days ago
I think a lot of professional work is not about entirely novel capabilities either; most professionals get the majority of their revenue from bread-and-butter cases that apply already-known solutions to custom problems. For instance, a surgeon taking out an appendix is not taking a novel approach to the problem every time.
onion2k
15 days ago
> In this case the LLM is just acting as a super-charged search engine.
It isn't, because that implies getting everything necessary in a single action, as if there are high-quality webpages that give a good answer to each prompt. There aren't. At the very least, Claude must be searching, evaluating the results, and collating the data it finds from multiple results into a single cohesive response. There could be some agentic actions that cause it to perform further searches if it doesn't judge the data to make for a sufficiently high-quality response.
"It's just a super-charged search engine" ignores a lot of nuance about the difference between LLMs and search engines.
hnaccount_rng
15 days ago
I think we are pretty much past the "LLMs are useless" phase, right? But I think "super-charged search engine" is a reasonably well fitting description. Like a search engine, it provides its user with information. Yes, it is (in a crude simplified description) better at that. Both in terms of completeness (you get a more "thoughtful" follow up) as well as in finding what you are looking for when you are not yet speaking the language.
But that's not what OP was contesting. The statement "$LLM is _doing_ $STUFF in the real world" is far less correct than the characterisation as "super-charged search engine". Because - at least as far as I'm aware - every real-world interaction has required consent from humans. This story included.
nonethewiser
15 days ago
1) You are right, and it's impressive if he can use AI to bootstrap becoming a farmer.
2) Regardless, I think it proves a vastly understated feature of AI: It makes people confident.
The AI may be truly informative, or it may hallucinate, or it may simply give mundane, basic advice. Probably all 3 at times. But the fact that it's there ready to assert things without hesitation gives people so much more confidence to act.
You even see it with basic emails. Myself included. I'm just writing a simple email at work. But I can feed it into AI, make some minor edits so it feels like my own words, and just dispense with worries about "am I giving too much info, not enough, using the right tone, being unnecessarily short or overly effusive, etc." And it's not that the LLMs are necessarily even an authority on these factors - it simply bypasses the process (writing) which triggers these thoughts.
recursive
15 days ago
More confidence isn't always better. In particular, confidence pairs well with the ability to follow through and be correct. LLMs are famous for confidently stating falsehoods.
nonethewiser
15 days ago
Of course. It must be used judiciously. But it completely circumvents some thought patterns that lead to slow decision making.
Perhaps I need to say it again: that doesn't mean blindly following it is good. But perhaps using claude code instead of googling will lead to 80% of the conclusions Seth would have reached otherwise with 5% of the effort.
user
15 days ago
TheGrassyKnoll
15 days ago
> "...a vastly understated feature of AI: It makes people confident."
Good point. AI is already making regular Joes into software engineers.
Management is so confident in this, they are axing developers/not hiring new ones.
kokanee
15 days ago
I started to write a logical rebuttal, but forget it. This is just so dumb. A guy is paying farmers to farm for him, and using a chatbot to Google everything he doesn't know about farming along the way. You're all brainwashed.
nonethewiser
15 days ago
What specifically are you disagreeing with? I don't think it's trivial for someone with no farming experience to successfully farm something within a year.
>A guy is paying farmers to farm for him
Read up on farming. The labor is not the complicated part. Managing resources, including telling the labor what to do, when, and how is the complicated part. There is a lot of decision making to manage uncertainty which will make or break you.
AlotOfReading
15 days ago
We should probably differentiate between trying to run a profitable farm, and producing any amount of yield. They're not really the same thing at all.
I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money. Running a profitable farm is quite difficult though. There's an entire ecosystem connecting prospective farmers with money and limited skills/interest to people with the skills to properly operate it, either independently (tenant farmers) or as farm managers so the hobby owner can participate. Institutional investors prefer the former, and Jeremy Clarkson's farm show is a good example of the latter.
nonethewiser
15 days ago
When I say successful I mean more like profitable. Just yielding anything isn't successful by any stretch of the imagination.
>I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money
Yeah, in theory. In practice they won't - too much time and energy. This is where the confidence boost with LLMs comes in. You just do it and see what happens. You don't need to care if it doesn't quite work out, since it's so fast and cheap. Maybe you get anywhere from 50-150% of the result of your manual research for 5% of the effort.
kokanee
15 days ago
[dead]
pixl97
15 days ago
>A guy is paying farmers to farm for him
Family of farmers here.
My family raises hundreds of thousands of chickens a year. They feed, water, and manage the healthcare and building maintenance for the birds. That is it. Baby birds show up in boxes at the start of a season, and trucks show up and take the grown birds once they reach weight.
There is a large faceless company that sends out contracts for a particular value and farmers can decide to take or leave it. There is zero need for human contact on the management side of the process.
At the end of the day there is little difference between a company assigning the work and having a bank account versus an AI following all the correct steps.
9rx
15 days ago
> A guy is paying farmers to farm for him
Pedantically, that's what a farmer does. The workers are known as farmhands.
AngryData
15 days ago
That is HIGHLY dependent on the type and size of farm. A lot of small row crop farmers have and need no extra farm hands.
9rx
15 days ago
All farms need farmhands. On some farms the farmer may play double duty, or hire custom farmhands operating under another business, but they are all farmhands just the same.
shimman
14 days ago
Grifters gonna grift.
tjr
15 days ago
I would say that Seth is farming just as much as non-developers are now building software applications.
tekno45
15 days ago
Trying. Until you can eat it, you're just fucking around.
nonethewiser
15 days ago
That's not the point of the original commenter. The point of the original commenter is that he expects Claude can inform him well enough to be a farm manager, and it's not impressive since Seth is the primary agent.
I think it is impressive if it works. Like I mentioned in a sibling comment I think it already definitely proves something LLMs have accomplished though, and that is giving people tremendous confidence to try things.
cubano
15 days ago
> I think it is impressive if it works.
It only works if you tell Claude..."grow me some fucking corn profitably and have it ready in 9 months" and it does it.
If it's being used as a manager to simply flesh out the daily commands that someone is telling it, well then that isn't "working", that's just a new level of what we already have with APIs and crap.
nonethewiser
15 days ago
It's working if it enables him to do it when he otherwise couldn't without significantly more time, energy, etc.
LoganDark
15 days ago
He's writing it down, so it's also science.
tekno45
15 days ago
Exactly, it's science/research; until you can feed people it's not really farming.
pixl97
15 days ago
>until you can feed people
So if I grow biomass for fuel or feedstock for plastics that's not farming? I'm sure there are a number of people that would argue with you on that.
I'm from the part of the country where there are large chunks of land dedicated to experimental grain growing, which is research, and other than labels at the end of crop rows you'd have a difficult time telling it from any other farm.
TL;DR: why are you gatekeeping this so hard?
NewJazz
15 days ago
Anyone can be a farmer. I've got veggies in my garden. Making a profit year after year is much much harder.
PlatoIsADisease
15 days ago
Can't wait to see how much money they lose.
I'll see if my 6 year old can grow corn this year.
cubano
15 days ago
> I'll see if my 6 year old can grow corn this year.
Sure.. put it on Kalshi while you're at it and we can all bet on it.
I'm pretty sure he could grow one plant with someone in the know prompting him.
tw04
15 days ago
>I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.
They could also just burn their cash. Because they aren’t making any money paying someone to grow corn for them unless they own the land and have some private buyers lined up.
PetriCasserole
14 days ago
But that's how it goes. As late as 2005, the real estate agents I worked with finally began to trust email over fax machines. It cracked an egg wide open for them. Now relying on email, they were able to do 10x the work (I have no real data BUT I do know their incomes went from low six figures to multiple six figures). Prior to their adoption, they just thought email was a novelty and legally couldn't be relied upon.
aqme28
15 days ago
Sure, but that's a different goalpost. Just growing food from an AI prompt is already impressive.
user
14 days ago
amelius
15 days ago
What I'd like to see is an AI simulating the economy, so that we can make predictions of what happens if we decrease wealth tax by X% or increase income tax by Y% (just examples).
crdrost
15 days ago
Why. Why would you want this.
The only framework we have figured out in which LLMs can build anything of use, requires LLMs to build a robot and then we expose the robot to the real world and the real world smacks it down and then we tell the LLMs about the wreckage. And we have to keep the feedback loops small and even then we have to make sure that the LLMs don't cheat. But you're not going to give it the opportunity to decrease the wealth tax or increase the income tax so it will never get the feedback it needs.
You can try to train a neural network with backpropagation to simulate the actual economy, but I think you don't have enough data to really train the network.
You can try to have it build a play economy where a bunch of agents have different needs and different skills and have to provide what they can when they can, but the "agent personalities" that you pick embed some sort of microeconomic outlook about what sort of rational purchasing agent exists -- and a lot of what markets do is just kind of random fad-chasing, not rationally modelable.
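Even a toy version of that play economy shows the problem. A minimal sketch (everything here is invented for illustration: 20 agents, a fixed price, random productivities):

```python
# Toy "play economy" of the sort described above. Everything is invented
# for illustration: 20 agents, a fixed posted price, and a hard-coded
# "rational purchasing agent" that buys whenever it can afford to.
import random

def simulate(days=30, price=10.0, n_agents=20, seed=0):
    rng = random.Random(seed)
    # each agent: [cash, goods on hand, productivity per day]
    agents = [[100.0, 0.0, rng.uniform(0.5, 2.0)] for _ in range(n_agents)]
    for _ in range(days):
        for a in agents:                  # everyone produces per their skill
            a[1] += a[2]
        for buyer in agents:              # everyone tries to buy one unit
            seller = rng.choice(agents)
            if seller is not buyer and seller[1] >= 1 and buyer[0] >= price:
                seller[1] -= 1            # the unit is sold and consumed
                seller[0] += price
                buyer[0] -= price
    return agents
```

Every choice in there - the fixed price, the "buy one unit whenever affordable" rule - is exactly the kind of baked-in rational-agent assumption I mean, and none of it captures fad-chasing.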
I just don't see why you'd use that square peg to fill this round hole. Just ask economics professors, they're happy to make those predictions.
user
15 days ago
amelius
14 days ago
Maybe you are right, but I'd like to see a competition where a computer (running AI agents) and an economics professor make predictions.
the_af
15 days ago
> What I'd like to see is an AI simulating the economy, so that we can make predictions of what happens if we decrease wealth tax by X% or increase income tax by Y% (just examples).
Please tell me you've watched the Mitchell & Webb skit. If not, google "Mitchell Webb kill all the poor" and thank me later.
Edit: also please tell me you know (if not played) of the text adventure "A Mind Forever Voyaging"... without spoiling anything, it's mainly about this topic.
Everything old is new again :)
ge96
15 days ago
Would be crazy if it's looking through satellite imagery and is like "buy land in Africa" or whatever and gets a farm going there.
varispeed
15 days ago
Wouldn't actual proof, to be valid, need the ability to send and receive email and transfer money?
Then it could do things like: "hey, do you have seeds? Send me pictures. I'll pay if I like them" or "I want to lease this land, I'll wire you the money." or "Seeds were delivered there, I need you to get your machinery and plant it"
cyanydeez
15 days ago
Isn't this boiled down to a combination of Zeno's paradox and the halting problem? Every step seems to halve the problem state, but each new state requires a question: should I halt? (Is the problem solved?)
I'd say the only acceptable proof is one prompt context. But that's Gödel-numbering Zeno's paradox of a halting problem.
Do people think prompting isn't adding significant intelligence?
Oras
15 days ago
I think that’s the point though. If they succeeded in the experiment, they wouldn’t need to do the same instructions again, AI will handle everything based on what happened and probably learn from mistakes for the next round(s).
Then what you asked “do everything to grow …” would be a matter of “when?”, not “can?”
bogtog
15 days ago
This is fair, but this seems like the only way to test this type of thing while avoiding the risk of harassing tons of farmers with AI emails. In the end, the performance will be judged on how much of a human harness is given
fuzzer371
15 days ago
It's because "AI" is the new "Crypto". Useless for everything, but everyone wants to jam it into everything.
jmspring
15 days ago
I think with the work John Deere is doing to keep closed systems, I could see a proprietary sdk and equipment guidance component.
riazrizvi
15 days ago
Yes. In other words, this is a nice exemplification of the issue that AI lacks world models. A case study to work through.
zeckalpha
15 days ago
Another way to look at it is that Seth is a Tool that Claude can leverage.
LeifCarrotson
15 days ago
On one end, a farmer or agronomist who just uses a pen, paper, and some education and experience can manage a farm without any computer tooling at all - or even just forecasts the weather and chooses planting times based on the aches in their bones and a finger in the dirt. One who uses a spreadsheet or dedicated farming ERP as a tool can be a little more effective. With a lot of automation, that software tooling can allow them to manage many acres of farms more easily and potentially more accurately.
But if you keep going, on the other end, there's just a human who knows nothing about the technicalities but owns enough stock in the enterprise to sit on the board and read quarterly earnings reports. They can do little more than say "Yes, let us keep going in this direction" or "I want to vote in someone else to be on the executive team".
Right now, all such corporations have those operational decisions being made by humans, or at least outsourced to humans, but it looks increasingly like an LLM agent could do much of that. It might hallucinate something totally nonsensical and the owner would be left with a pile of debt, but it's hard to say that Seth as just a stockholder is, in any real sense, a farmer, even if his AI-based enterprise grows a lot of corn.
I think it would be unlikely but interesting if the AI decided that in furtherance of whatever its prompt and developing goals are to grow corn, it would branch out into something like real estate or manufacturing of agricultural equipment. Perhaps it would buy a business to manufacture high-tensile wire fence, with a side business of heavy-duty paperclips... and we all know where that would lead!
We don't yet have the legal frameworks to build an AI that owns itself (see also "the tree that owns itself" [1]), so for now there will be a human in the loop. Perhaps that human is intimately involved and micromanaging, merely a hands-off supervisor, or relegated to an ownership position with no real capacity to direct any actions. But I don't think that you can say that an owner who has not directed any actions beyond the initial prompt is really "doing the work".
DaiPlusPlus
15 days ago
Judging by the sheer verbosity of your reply there... I think you missed the cogent point:
> Seth is a Tool
It's that simple.
autoexec
15 days ago
If that were the case Claude would have come up with the idea to grow corn and it would have reached out to Seth and be giving Seth prompts. That's clearly not what happened though so it's pretty obvious who is leveraging which tool here.
It also doesn't help that Claude is incapable of coming up with an idea, incapable of wanting corn, and has no actual understanding of what corn is.
recursive
15 days ago
Generally agree. But lack of "understanding" seems impossible to define in objective terms. Claude could certainly write you a nice essay about the history of corn and its use in culture and industry.
autoexec
15 days ago
I could get the same thing out of "curl https://en.wikipedia.org/wiki/Corn", but curl doesn't understand what corn is any more than Claude does. Claude doesn't understand corn any more than Wikipedia does either. Just like with Wikipedia, everything Claude outputs about corn came from the knowledge of humans, which was fed into it by humans, then requested by other humans. It's human understanding behind all of it. Claude is just a different way to combine and output examples of human thoughts and human-gathered data.
recursive
15 days ago
You know it when you see it, but it seems to lack an objective definition that stands up to adversarial scrutiny. Where are the boundaries between knowing and repeating? It can be a useful idea to talk about, but if I ever find myself debating whether "knowledge" or "understanding" is happening, there will probably not be any fruitful result. It's only useful if everyone agrees already.
I guess that's basically the idea of the Chinese Room thought experiment.
user
15 days ago
bodge5000
15 days ago
This is where you get to this weird juxtaposition of "AI can now replace humans" existing simultaneously with "Its unfair to compare human work to AI work".
Like if a human said they started a farm, but it turned out someone else did all the legwork and they were just asked for an opinion occasionally, they'd be called out for lying about starting a farm. Meanwhile, that flies for an AI, which would be fine if we acknowledged that there's a lot of behind-the-scenes work that a human needs to do for it.
lighthouse1212
15 days ago
[dead]