I Am Tired of AI

1209 points | posted 9 months ago
by Liriel

446 Comments

Animats

9 months ago

I'm tired of LLMs.

Enough billions of dollars have been spent on LLMs that a reasonably good picture of what they can and can't do has emerged. They're really good at some things, terrible at others, and prone to doing something totally wrong some fraction of the time. That last limits their usefulness. They can't safely be in charge of anything important.

If someone doesn't soon figure out how to get a confidence metric out of an LLM, we're headed for another "AI Winter". Although at a much higher level than last time. It will still be a billion dollar industry, but not a trillion dollar one.
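
(For concreteness, one naive proxy people reach for is the model's own token probabilities. A minimal sketch, assuming the HuggingFace transformers library, with GPT-2 purely as an illustrative stand-in. Note this measures fluency, not truth, which is exactly why it falls short of the confidence metric asked for here.)

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tok = AutoTokenizer.from_pretrained("gpt2")           # illustrative model choice
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  def avg_logprob(text: str) -> float:
      # Average log-probability the model assigns to each token of the text,
      # given the preceding context (shift logits left by one position).
      ids = tok(text, return_tensors="pt").input_ids
      with torch.no_grad():
          logits = model(ids).logits
      logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
      scores = logprobs.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
      return scores.mean().item()

  # Higher = more "confident", but this is really a fluency score, not truth.
  print(avg_logprob("Paris is the capital of France."))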

At some point, the market for LLM-generated blithering should be saturated. Somebody has to read the stuff. Although you can task another system to summarize and rank it. How much of "AI" is generating content to be read by Google's search engine? This may be a bigger energy drain than Bitcoin mining.

datahack

9 months ago

It’s probably generally irrelevant what they can do today, or what you’ve seen so far.

This is conceptually essentially Moore’s law, but about every 5.5 months. That’s the only thing that matters at this stage.
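
(Taking that claimed doubling rate at face value, the compounding is easy to work out; the 5.5-month figure is the commenter's assumption, not an established constant.)

  # Capability doubling every 5.5 months vs. Moore's classic ~24 months.
  for months in (12, 24, 60):
      print(months, round(2 ** (months / 5.5), 1), round(2 ** (months / 24), 1))
  # 12 months: ~4.5x  vs ~1.4x
  # 24 months: ~20.6x vs  2.0x
  # 60 months: ~1900x vs ~5.7x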

I watched everyone make the same arguments about the general Internet, and then the Web, then mobile, then Bitcoin. It’s just a toy. It’s not that useful. Is this supposed to be the revolution? It uses too much power. It won’t scale. The technology is a dead end.

The general pattern of improvement to technology has been radically to the upside at an increasing pace parabolically for decades and there’s nothing indicating that this is a break in the pattern. In fact it’s setting up to be an order of magnitude greater impact than the Internet was. At a minimum, I don’t expect it to be smaller.

Looking at early telegraphs doesn’t predict the iPhone, etc.

Optimism is warranted here until it isn’t.

YeGoblynQueenne

9 months ago

>> Looking at early telegraphs doesn’t predict the iPhone, etc.

The problem with this line of argument is that LLMs are not new technology, rather they are the latest evolution of statistical language modelling, a technology that we've had at least since Shannon's time [1]. We are way, way past the telegraph era, and well into the age of large telephony switches handling millions of calls a second.

Does that mean we've reached the end of the curve? Personally, I have no idea, but if you're going to argue we're at the beginning of things, that's just not right.

________________

[1] In "A Mathematical Theory of Communication", where he introduces what we today know as information theory, Shannon gives as an example of an application a process that generates a string of words in natural English according to the probability of the next letter in a word, or the next word in a sentence. See Section 3 "The Series of Approximations to English":

https://people.math.harvard.edu/~ctm/home/text/others/shanno...

Note: Published 1948.
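
(Shannon's word-level experiment is easy to reproduce. A minimal sketch of his "second-order word approximation": sample each next word according to observed bigram frequencies. The toy corpus below is a hypothetical stand-in; Shannon drew his statistics from printed English.)

  import random
  from collections import defaultdict

  corpus = ("the head and in frontal attack on an english writer that the "
            "character of this point is therefore another method").split()  # toy stand-in
  bigrams = defaultdict(list)
  for prev, nxt in zip(corpus, corpus[1:]):
      bigrams[prev].append(nxt)       # duplicates encode frequency weighting

  word = random.choice(corpus)
  out = [word]
  for _ in range(20):
      followers = bigrams.get(word)
      word = random.choice(followers) if followers else random.choice(corpus)
      out.append(word)
  print(" ".join(out))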

bburnett44

9 months ago

I think we can pretty safely say bitcoin was a dead end other than for buying drugs, enabling ransomware payments, or financial speculation.

Show me an average person who has bought something real with bitcoin (who couldn't have bought it with less complexity/transaction cost using a bank) and I'll change my mind.

jiggawatts

9 months ago

Speaking of the iPhone, I just upgraded to the 16 Pro because I want to try out the new Apple Intelligence features.

As soon as I saw integrated voice+text LLM demos, my first thought was that this was precisely the technology needed to make assistants like Siri not total garbage.

Sure, Apple's version 1.0 will have a lot of rough edges, but they'll be smoothed out.

In a few versions it'll be like something out of Star Trek.

"Computer, schedule an appointment with my Doctor. No, not that one, the other one... yeah... for the foot thing. Any time tomorrow. Oh thanks, I forgot about that, make that for 2pm."

Try that with Siri now.

In a few years, this will be how you talk to your phone.

Or... maybe next month. We're about to find out.

newaccountman2

9 months ago

I am generally on your side of this debate, but Bitcoin is a reference in favor of the opposite position. Crypto is/was all hype. It's a speculative investment, that's all atm.

runeks

9 months ago

> I watched everyone make the same arguments about the general Internet, and then the Web, then mobile, then Bitcoin.

You’re conveniently forgetting all the things that followed the same trajectory as LLMs and then died out.

otabdeveloper4

9 months ago

> ...about the general Internet, and then the Web, then mobile, then Bitcoin. It’s just a toy. It’s not that useful.

Well, they're not wrong. They are indeed toys that aren't that useful.

(Yes, the "Web" included.)

mhowland

9 months ago

"They're really good at some things, terrible at others, and prone to doing something totally wrong some fraction of the time."

I agree 100% with this sentiment, but, it also is a decent description of individual humans.

This is what processes and control systems are for. These are evolving at a slower pace than the LLMs themselves at the moment, so we're looking to the LLM to be its own control. I don't think it will be any better than the average human is at being their own control, but by no means does that mean it's not a solvable problem.

latexr

9 months ago

> I agree 100% with this sentiment, but, it also is a decent description of individual humans.

But you can understand individual humans and learn which are trustworthy for what. If I want a specific piece of information, I have people in my life I know I can consult to get an answer that will most likely be correct. That person can give me an accurate assessment of their certainty, they know how to confirm their knowledge, and they'll let me know later if it turns out they were wrong or the information changed.

None of that is true with LLMs. I never know if I can trust the output, unless I’m already an expert on the subject. Which kind of defeats the purpose. Which isn’t to say they’re never helpful, but in my experience they waste my time more often than they save it, and at an environmental/energy cost I don’t personally find acceptable.

Gazoche

9 months ago

> I agree 100% with this sentiment, but, it also is a decent description of individual humans.

But humans can be held accountable, LLMs cannot.

If I pay a human expert to compile a report on something and they decide to randomly make up facts, that's malpractice and there could be serious consequences for them.

If I pay OpenAI to do the same thing and the model hallucinates nonsense, OpenAI can just shrug it off and say "oh that's just a limitation of current LLMs".

linsomniac

9 months ago

>also is a decent description of individual humans

A friend of mine was moving from software development into managing devs. He told me: "They often don't do things the way or to the quality I'd like, but 10 of them just get so much more done than I could on my own." This was him coming to terms with letting go of some control, and switching to "guiding the results" rather than direct control.

The LLMs are a lot like this.

YeGoblynQueenne

9 months ago

>> I agree 100% with this sentiment, but, it also is a decent description of individual humans.

Why would that be a good thing? The big thing with computers is that they are reliable in ways that humans simply can't ever be. Why is it suddenly a success to make them just as unreliable as humans?

Too

9 months ago

You missed quoting the next sentence about providing confidence metric.

Humans may be wrong a lot, but at least the vast majority will have the decency to say "I don't know", "I'm not sure", "give me some time to think", "my best guess is". In contrast, most LLMs today just spew out more hallucinations in full confidence.

jajko

9 months ago

I'll keep buying (and paying a premium for) dumber things. Cars are a prime example: I want mine dumb as fuck, offline, letting me decide what to do. At least for the next 2 decades, and that's achievable. After that I couldn't care less, I'll probably be a bad driver at that point anyway, so the switch may make sense. I want a dumb, beautiful, mechanical wristwatch.

I am not an OCD-riddled, insecure man trying to subconsciously imitate much of the crowd, in any form of fashion. If that makes me an outlier, so be it, a happier one.

I suspect a new branch of artisanal, human-mind-made trademark is just around the corner, maybe niche, but it will find its audience. Beautiful imperfections, clear clunky biases and all that.

spencerchubb

9 months ago

LLMs have been improving exponentially for a few years. Let's at least wait until the exponential improvements slow down to make a judgement about their potential.

bloppe

9 months ago

They have been improving a lot, but that improvement is already plateauing and all the fundamental problems have not disappeared. AI needs another architectural breakthrough to keep up the pace of advancement.

COAGULOPATH

9 months ago

In some domains (math and code), progress is still very fast. In others it has slowed or arguably stopped.

We see little progress in "soft" skills like creative writing. EQBench is a benchmark that tests LLM ability to write stories, narratives, and poems. The winning models are mostly tiny Gemma finetunes with single-digit-billion parameter counts. Huge foundation models with hundreds of billions of parameters (Claude 3 Opus, Llama 3.1 405B, GPT-4) are nowhere near the top. (Yes, I know Gemma is a pruned Gemini.) Fine-tuning > model size, which implies we don't have a path to "superhuman" creative writing (if that even exists). Unlike model size, fine-tuning can't be scaled indefinitely: once you've squeezed all the juice out of a model, what then?

OpenAI's new o1 model exhibits amazing progress in reasoning, math, and coding. Yet its writing is worse than GPT-4o's (as backed by EQBench and OpenAI's own research).

I'd also mention political persuasion (since people seem concerned about LLM-generated propaganda). In June, some researchers tested LLM ability to change the minds of human subjects on issues like privatization and assisted suicide. Tiny models are unpersuasive, as expected. But once a model is large enough to generate coherent sentences, persuasiveness kinda...stops. All large models are about equally persuasive. No runaway scaling laws are evident here.

This picture is uncertain due to instruction tuning. We don't really know what abilities LLMs "truly" possess, because they've been crippled to act as harmless, helpful chatbots. But we now have an open-source GPT-4-sized pretrained model to play with (Llama-3.1 405B base). People are doing interesting things with it, but it's not setting the world on fire.

9cb14c1ec0

9 months ago

I can't think of any exponential improvements that have happened recently.

rifty

9 months ago

I don’t think you should expect exponential growth towards greater correctness past “good enough” for any given domain of knowledge it is able to mirror. It is reliant on human-generated material, and so rate-limited by the number of humans able to generate the quality increase you need - which decreases in availability as you expect higher quality. I also don’t believe greater correctness for any given thing is an open-ended question that allows for experientially exponential improvements.

Though maybe you are just using “exponential” figuratively, to mean rapid and significant development and investment.

bamboozled

9 months ago

Do you know what exponential means? They might be getting better, but it hardly seems exponential at this stage.

__loam

9 months ago

Funnily enough, bitcoin mining still uses at least ~3x more power than AI at the moment, while providing less value imo. AI power use is also dwarfed by other industries, even within computing. We should still consider whether it's worth it, but most corporate research and development on LLMs right now seems to be focused on making them more efficient, and therefore both cheaper and less power-intensive to run. There's also stuff like Apple Intelligence that is moving inference out to edge devices with much more efficient chips.

I'm still a big critic of AI generally, but they're definitely not as bad as crypto, which is shocking.

illiac786

9 months ago

Do you have a nice reference for this? I could really use something like it; this topic comes up a lot in my social circle.

Ferret7446

9 months ago

How do you measure the value of bitcoin, if not by its market cap? Do you interview everyone and ask them how much they're willing to pay for a service that allows them to transfer money digitally without institutional oversight/bureaucracy?

latexr

9 months ago

> They're really good at some things, terrible at others, and prone to doing something totally wrong some fraction of the time. (…) They can't safely be in charge of anything important.

Agreed. If everyone understood that and operated under that assumption, it wouldn’t be that much of an issue. Alas, these guessing machines are marketed as all-knowing oracles that can already solve half of humanity’s problems and a significant number of people treat them as being right every time, even in instances where they’re provably wrong.

seandoe

9 months ago

Totally agree on the confidence metric. The way chatbots spew complete falsities in such a confident tone is really disheartening. I want to use AI more, but I don't feel I can trust it at all. If I can't trust it and have to search other resources to verify its claims, the value is really diminished.

naming_the_user

9 months ago

Is it even possible in principle for an LLM to produce a confidence interval given that in a lot of cases the input is essentially untrusted?

What comes to mind is: I consider myself an intelligent being capable of recognising my limits, but if you put my brain in a vat and taught me a new field of science, I could quite easily make claims about it that were completely incorrect if your teaching was incorrect, because I have no actual real-world experience to match it up to.

theamk

9 months ago

Right, and that's why "years of experience" matters in humans. You will be giving incorrect answers, but as long as you get feedback, you will improve, or at least calibrate your confidence meter.

This is not the case with current models - they are forever stuck at junior level, and they won't improve no matter how much you correct them.

I know humans like that too. I don't ask them questions that I need good answers to.

wrycoder

9 months ago

Just wait until they get saturated with subtle (and not so subtle) advertising. Then, you'll really hate them.

rldjbpin

9 months ago

LLMs are to AI what BTC is to blockchain, let me explain.

Blockchain and no-trust decentralization have so much promise, but grifters all go for what got done first and can be squeezed for money. The same is happening with LLMs, as a lot of current AI work started with text first.

They might still lowkey be necessary evils, because without them there would not have been so much money or attention flowing in this way.

agubelu

9 months ago

> blockchain and no-trust decentralization has so much promise

I've been hearing this for the past 5 years, yet nothing of practical use based on blockchains has materialized.

Jommi

9 months ago

You don't think an open finance network that's accessible to anyone with an internet connection is useful?

Your westernness is showing.

Go ask SA or Africa how useful it is that they aren't restricted by insane dictatorial capital controls anymore.

throw45678943

9 months ago

Indeed. Decentralised currency is at least a technology that can empower the individual at times, rather than, say, governments and big corps, especially in certain countries. Yes, it didn't change as much as was marketed, but I don't see that as a bad thing. It's still a "tool" that people can use, in some cases enabling things they couldn't do or didn't have the freedom to do before.

AI, given its requirements for large computation and money, and its ability to make easily available intelligence to certain groups, IMO has a real potential to do the exact opposite - take away power from individuals especially if they are middle class or below. In the wrong hands it can definitely destroy openness and freedom.

Even if it is "Open" AI, for most of society their labor and intelligence/brain power is the only thing they can offer to gain wealth and sustenance; making that a commodity tilts the power scales. If it delivers even a slice of what it is marketed at, there are real risks for current society. Even if it increases production of certain goods, it won't increase production of the goods the ultra wealthy tend to hold (physical capital, land, etc), making them proportionally even more wealthy. This is especially true if AI doesn't end up working in the physical realm quickly enough. The benefits seem more like novelties that most individuals could do without, whereas for large corps and ultra wealthy individuals the benefits IMO are much more obvious (e.g. we finally don't need workers). Surveillance, control, persuasion, propaganda, mass uselessness of most of the population, medical advances for the ultra wealthy, weapons, etc can now be done at almost infinite scale and in great detail. If it ever gets to the point of obsoleting human intelligence, that would be a very interesting adjustment period for humanity.

The flaw isn't the technology; it's the likely use of it, given human nature. I'm not saying LLMs are there yet, or even that they are the architecture to do this, but agentic behaviour and running corporations (which OpenAI makes its goal on its presentation slides) seem to be a way to rid many of the need for other people in general (to help produce, manage, invent and control). That could be a good or bad thing, depending on how we manage it, but one thing it wouldn't be is simple.

bschmidt1

9 months ago

I love how people are like "there's no use case" when there are already products on shelves. I see AI art everywhere, AI writing, customer support - it already happened. You guys are naysaying something that already happened: people have already replaced jobs with LLMs and already profit due to AI. There are already startups with users where you provide an OPENAI_API_KEY, or customers where you provide theirs.

If you can't see how this tech is useful, Idk what to tell you; you have no imagination AND aren't looking around you at the products, marketing, etc. that already exist. These takes remind me of the luddites of ~2012 who were still doubting the Internet in general.

lmm

9 months ago

> I see AI art everywhere, AI writing, customer support - already happened.

Is any of it adding value though? I can see that AI has made it easier to do SEO spam and make an excuse for your lack of customer support, just like IVR systems before it. But I don't believe those added any real value (they may have generated profits for their makers, but I think that was a zero- or negative-sum trade). Put it this way: is AI being used to generate anything that people are actually happy to receive?

fragmede

9 months ago

> But I don't believe those added any real value (they may have generated profits for their makers, but I think that was a zero- or negative-sum trade).

Okay, so some people are making money with it, but no true value was added, eh?

lmm

9 months ago

Do new scams create value? No, even though they make money for some people. The same with speculative ventures that don't pan out. You can only say something's added value when it's been positive sum overall, not just allowed some people to take a profit at the expense of others.

anon7725

9 months ago

There is a difference between “being useful” and living up to galactic-scale hype.

bschmidt1

9 months ago

[flagged]

dang

9 months ago

You've continued to break the site guidelines, not just with this account but with others like https://news.ycombinator.com/item?id=41681416, and ignored our requests to stop.

Between that and the personally abusive emails you've been sending, it's clear that you don't want to use HN as intended, so I've banned the accounts.

bschmidt5

9 months ago

[flagged]

slater

9 months ago

They assume you'll get the hint, eventually.

user

9 months ago

[deleted]

lukev

9 months ago

The utility of LLMs clearly exists (I'm building a product on this premise, so I'm not uninterested!)

But hype also exists. How closely they are matched is not yet clear.

But your comment seems to indicate that the "pro-tech" position is automatically the best. This is _not_ true, as cryptocurrency has already demonstrated.

bschmidt3

9 months ago

Funny thing is you are actually the pro-[corporate]-tech one, not on the side of freedom. Furthermore, nobody said anything about crypto - you are seriously grasping at straws. You have said nothing about the products-on-shelves argument (billions of dollars in the industry already), only presented crypto as an argument, which has nothing to do with the conversation.

bschmidt1

9 months ago

> This is _not_ true, as cryptocurrency has already demonstrated.

His whole argument against AI is basically the anti-tech stance: "Well, crypto failed, that means AI will too." It's coming from a place of disdain for technology. That's your typical Hacker News commenter. This site is like Fox News in 2008 - some of the dumbest people alive.

lukev

9 months ago

Not at all! I am broadly speaking very pro-tech.

What I am against is the “if it’s tech it’s good” mindset that seems to have infected far too many. I mention crypto because it’s the largest example of tech that is not good for the world.

anon7725

9 months ago

You’re certainly able to sus out everything about my worldview, knowledge level and intentions from my one-sentence comment.

The only thing that LLMs are at risk of subverting is the livelihood of millions of people. AI is a capital intensifier, so the rich will get richer as it sees more uptake.

About copyright - yeah, I’m quite concerned for my friends who are writers and artists.

> You'll get left behind with these takes, it's not smart. If you don't care about advancing technology or society then have fun being a luddite, but you're on the wrong side of history.

FWIW I work in applied research on LLMs.

user

9 months ago

[deleted]

cubefox

9 months ago

I'm not tired, I'm afraid.

First, I'm afraid of technological unemployment.

In the past, automation meant that workers could move into non-automated jobs, if they were skilled enough. But superhuman AI now seems only a few years away. It will be our last invention; it will mean total automation. There will be hardly any jobs left, if any, that only a human can do.

Many countries will likely move away from a job-based market economy. But technological progress will not stop. The US, owning all the major AI labs, will leave all other societies behind. Except China perhaps. Everyone else in the world will be poor by comparison, even if they will have access to technology we can only dream of today.

Second, I'm afraid of war. An AI arms race between the US and China seems already inevitable. A hot war with superintelligent AI weapons could be disastrous for the whole biosphere.

Finally, I'm afraid that we may forever lose control to superintelligence.

In nature we rarely see less intelligent species controlling more intelligent ones. It is unclear whether we can sufficiently align superintelligence to have only humanity's best interests in mind, like a parent cares for their children. Superintelligent AI might conclude that humans are no more important in the grand scheme of things than bugs are to us.

And if AI lets us live, but continues to pursue its own goals, humanity will from then on be only a small footnote in the history of intelligence: that relatively unintelligent species from the planet "Earth" that gave rise to advanced intelligence in the cosmos.

neta1337

9 months ago

>But superhuman AI seems now only few years away

Seems unreasonable. You are afraid because marketing gurus like Altman made you believe that a frog that can make a bigger leap than before will be able to fly.

klabb3

9 months ago

Plus it’s not even defined what superhuman AI means. A calculator sure looked superhuman when it was invented. And it is!

Another analogy is breeding and racial biology, which used to be all the hype (including in academia). The fact that humans could create dogs from wolves looked almost limitless with the right (wrong) glasses. What we didn't know is that the wolf had a ton of genes that played a magic trick: a diversity we couldn't perceive was there all along, in the genetic material, and we just helped make it visible. I.e., a game of diminishing returns.

Concretely for AI, it has shown us that pattern matching and generation are closely related (well, I have a feeling this wasn't surprising to neuroscientists). And also that they're more or less domain agnostic. However, we don't know whether pattern matching alone is "sufficient", and if not, what exactly "the rest" is and how hard it is. AI to me feels like a person who has had a stroke, concussion or some severe brain injury: it can appear impressively able in a local context, but they forgot their name and how they got there. They're just absent.

cubefox

9 months ago

No, because we have seen massive improvements in AI over the last years, and all the evidence points to this progress continuing at a fast pace.

Hercuros

9 months ago

I think the biggest fallacy in this type of thinking is that it projects all AI progress into a single quantity of “intelligence” and then proceeds to extrapolate that singular quantity into some imagined absurd level of “superintelligence”.

In reality, AI progress and capabilities are not so reducible to singular quantities. For example, it’s not clear that we will ever get rid of the model’s tendencies to just produce garbage or nonsense sometimes. It’s entirely possible that we remain stuck at more incremental improvements now, and I think the bogeyman of “superintelligence” needs to be much more clearly defined rather than by extrapolation of some imagined quantity. Or maybe we reach a somewhat human-like level, but not this imagined “extra” level of superintelligence.

Basically the argument is something to the effect of “big will become bigger and bigger, and then it will become like SUPER big and destroy us all”.

StrLght

9 months ago

Extrapolation of past progress isn't evidence.

mitthrowaway2

9 months ago

You don't have to extrapolate. There's a frenzy of talent being applied to this problem, it's drawing more brainpower the more progress that is made. Young people see this as one of the most interesting, prestigious, and best-paying fields to work in. A lot of these researchers are really talented, and are doing more than just scaling up. They're pushing at the frontiers in every direction, and finding methods that work. The progress is broadening; it's not just LLMs, it's diffusion models, it's SLAM, it's computer vision, it's inverse problems, it's locomotion. The tooling is constantly improving and being shared, lowering the barrier to entry. And classic "hard problems" are yielding in the process. It's getting hard to even find hard problems any more.

I'm not saying this as someone cheering this on; I'm alarmed by it. But I can't pretend that it's running out of steam. It's possible it will run out of money, but even if so, only for a while.

leptons

9 months ago

The AI bubble is already starting to burst. The Sam Altmans of the world over-sold their product and over-played their hand by suggesting AGI is coming. It's not. What they have is far, far, far from AGI. "AI" is not going to be as important as you think it is in the near future; it's just the current tech buzz, and there will be something else that takes its place, just like when "web 2.0" was the new hotness.

kranuck

9 months ago

It's gonna be massive because companies love to replace humans at any opportunity and they don't care at all about quality in a lot of places.

For example, why hire any call center workers? They already outsourced the jobs to the lowest bidder and their customers absolutely hate it. Fire those people and get some AI in there so it can provide shitty service for even cheaper.

In other words, it will just make things a bit worse for everyone but those at the very top. usual shit.

corimaith

9 months ago

This is getting too abstract. The core issue of LLMs that others have pointed out is the lack of accuracy, which is inherent to how they work; they should be paired with a knowledge representation system in a proper chatbot system.

We've been trying to build a knowledge representation system powerful enough to capture the world for decades, but this is something that goes more into the foundations of mathematics and philosophy than it has to do with the majority of engineering research. You need a literal genius to figure that out. The majority of those "talented" people and that funding aren't doing that.

mvdtnz

9 months ago

> There's a frenzy of talent being applied to this problem, it's drawing more brainpower the more progress that is made. Young people see this as one of the most interesting, prestigious, and best-paying fields to work in. A lot of these researchers are really talented, and are doing more than just scaling up. They're pushing at the frontiers in every direction, and finding methods that work.

You could have seen this exact kind of thing written 5 years ago in a thread about blockchains.

mitthrowaway2

9 months ago

Yes, but I didn't write that about blockchain five years ago. Blockchains are the exact opposite of AI in that the technology worked fine from the start and did exactly what it said on the tin, but the demand for that turned out to be very limited outside of money laundering. There's no doubt about the market potential for AI; it's virtually the entire market for mental labor. The only question is whether the tech can actually do it. So in that sense, the fact that these researchers are finding methods that work matters much more for AI than for blockchain.

kranuck

9 months ago

Really? Because I remember an endless stream of people pointing out problems with blockchain and crypto and being constantly assured that it was being worked on and would be solved, and that crypto was inevitable.

For example, transaction costs/latency/throughput.

I realize the conversation is about blockchain, but I say my point still stands.

With blockchain the main problem was always "why do I need this?" and that's why it died without being the world changing zero trust amazing technology we were promised and constantly told we need.

With LLMs the problem is they don't actually know anything.

user

9 months ago

[deleted]

CatWChainsaw

9 months ago

Amount of effort applied to a problem does not equal guarantee of problem being solved. If a frenzy of talent was applied to breaking the speed of light barrier it would still never get broken.

mitthrowaway2

9 months ago

Your analogy is valid, for the world in which humans exceed the speed of light on a casual stroll.

CatWChainsaw

9 months ago

And the message behind it still applies even in the universe where they don't.

mitthrowaway2

9 months ago

I mean, a frenzy of talent was applied to breaking the sound barrier, and it broke, within a very short time. A frenzy of talent was applied to landing on the moon and that happened too, relatively quickly. Supersonic travel also happens to be physically possible under the laws of our universe. We know with confidence that human-level intelligence is also physically possible within the laws of our universe, and we can even estimate some reasonable upper bounds on the hardware requirements that implement it.

So in that sense, if we're playing reference class tennis, this looks a lot more like a project to break the sound barrier than a project to break the light barrier. Is there a stronger case you can make that these people, who are demonstrating quite tangible progress every month (if you follow the literature rather than just product launches), are working on a hopelessly unsolvable problem?

CatWChainsaw

9 months ago

I think it looks more like a speed of light than a speed of sound problem.

throw45678943

9 months ago

I do think the digital realm, where the cost of failure and iteration is quite low, will proceed rapidly. We can brute-force our way to success with a lot of compute, and the cost of each failed attempt is low. Most of these models are just large brute-force probabilistic models in any event; efficient AI has not yet been achieved, but maybe that doesn't matter.

Not sure the same pace applies to the physical realm, where costs are high (resources, energy, pollution, etc) and the risk of getting it wrong could mean a lot of negative consequences. E.g. I'm handling construction materials, and the robot trips on a barely noticeable rock, leaking paint, petrol, etc onto the ground, costing more than just the initial materials once cleanup is included.

This creates a potential future outcome (if I can be so bold as to extrapolate, with the dangers that entails) where this "frenzy of talent", as you put it, innovates itself out of a job; some may cash out in the short term, closing the gate behind them. What's left, ironically, is the people who can sell, convince, manipulate and work in the physical world, at least for the short and medium term. AI can't fix the scarcity of the physical that easily (e.g. land, nutrients, etc). Those people who still command scarcity will get the main rewards of AI in our capital system, as value/economic surplus moves to the resources that are scarce and hold an advantage via relative price adjustments.

Typically people had three different strengths: physical (strength and dexterity), emotional IQ, and intelligence/problem solving. The new world of AI, at least in the medium term (10-20 years), will tilt value away from the latter and toward the former (physical) - IMO a reversal of the last century of change. It may make more sense to get good at gym class and learn a trade rather than study math in the future, for example. Intelligence will be in abundance, and become a commodity. This potential outcome does alarm me, not just from a job perspective, but in terms of fake content, lack of human connection, lack of value of intelligence in general (you will find people with high IQs lose respect from society in general), social mobility, etc. I can see a return to the old world where lords that command scarcity (e.g. landlords) command peasants again - reversing the gains of the industrial revolution, as an extreme case depending on general AI progress (not LLMs). For people whose value is more in capital or land vs labor, AI seems like a dream future IMO.

There's potential good here, but sadly I'm alarmed because the likelihood that the human race aligns to achieve it is low (the tragedy of the commons problem). It is much easier, and more likely, that certain groups use it to target people who are of economic value now but hold little power (i.e. the middle class). The chance of new weapons, economic displacement, fake news, etc for me trumps a voice/chat bot and a fancy image generator. The "adjustment period" is critical to manage, and I think climate change and other broad issues sadly tell us, IMO, how likely we are to succeed in doing this.

coryfklein

9 months ago

Do you expect the hockey-stick graph of technological development since the industrial revolution to slow? Or that it will proceed, only without significant advances in AI?

Seems like the base case here is for the exponential growth to continue, and you'd need a convincing argument to say otherwise.

kranuck

9 months ago

That's no guarantee that AI continues advancing at the same pace, and no one has been arguing that overall technological progress will slow.

Refining technology is easier than the original breakthrough, but it doesn't usually lead to a great leap forward.

LLMs were the result of breakthroughs, but refining them isn't guaranteed to lead to AGI. It's not guaranteed (or likely) to improve at an exponential rate.

StrLght

9 months ago

Which chart are you referencing exactly? How does it define technological development? It's nearly impossible for me to discuss a chart without knowing what the axes refer to.

Without specifics, all I can say is that I don't acknowledge any measurable benefits of AI (in its current state) in real-world applications. So I'd say I'm leaning towards the latter.

cubefox

9 months ago

Past progress is evidence for future progress.

moe_sc

9 months ago

Might be an indicator, but it isn't evidence.

nitwit005

9 months ago

Not exactly. If you focus in on a single technology, you tend to see rapid improvement, followed by slower progress.

Sometimes this is masked by people spending more due to the industry becoming more important, but it tends to be obvious over the longer term.

StrLght

9 months ago

That's probably what every self-driving car company thought ~10 years ago or so; everything was moving so fast for them back then. Now it doesn't seem like we're getting close to a solution for this.

Surely this time it's going to be different, AGI is just around the corner. /s

johnthewise

9 months ago

Would you have predicted, in the summer of 2022, that a GPT-4-level conversational agent was a possibility in the next 5 years? People had tried for the previous 60 years and failed. How is this time not different?

On a side note, I find this type of critique of what the future of tech might look like the most uninteresting one. Since tech by nature inspires people about the future, all tech gets hyped up. All you gotta do then is pick any tech, point out people have been wrong before, and ask how likely it is that this time is different.

StrLght

9 months ago

Unfortunately, I don't see any relevance in that argument. If you consider GPT-4 to be a breakthrough, then sure, single breakthroughs happen; I am not arguing with that. Actually, the same thing happened with self-driving: I don't think many people expected Tesla to drop FSD publicly back then.

Now, chain of breakthroughs happening in a small timeframe? Good luck with that.

cubefox

9 months ago

We have seen multiple massive AI breakthroughs in the last few years.

StrLght

9 months ago

Which ones are you referring to?

Just to make it clear, I see only 1 breakthrough [0]. Everything that happened afterwards is just application of this breakthrough with different training sets / to different domains / etc.

[0]: https://en.wikipedia.org/wiki/Attention_Is_All_You_Need

cubefox

9 months ago

Autoregressive language models, the discovery of the Chinchilla scaling law, MoEs, supervised fine-tuning, RLHF, whatever was used to create OpenAI o1, diffusion models, AlphaGo, AlphaFold, AlphaGeometry, AlphaProof.

Jensson

9 months ago

They are the same breakthrough applied to different domains; I don't see them as different. We will need a new breakthrough, not the same solution applied to new things.

mitthrowaway2

9 months ago

If you wake up from a coma and see the headline "Today Waymo has rolled out a nationwide robotaxi service", what year do you infer that it is?

mvdtnz

9 months ago

Does it though? I have seen the progress basically stop at "shitty sentence generator that can't stop lying".

lawn

9 months ago

The evidence I've been seeing is that progress with LLMs has already slowed down and that they're nowhere near good enough to replace programmers.

They can be useful tools to be sure, but it seems more and more clear that they will not reach AGI.

cubefox

9 months ago

They are already above average human level on many tasks, like math benchmarks.

amusedcyclist

9 months ago

They really aren't better than humans at math or logic; they're good at the benchmarks because they're hyper-optimized for the benchmarks lol. But if you ask LLMs simple logical questions they still get them wrong all the time.

lawn

9 months ago

Yes, there are certain tasks they're great at, just as AI has been superhuman in some tasks for decades.

cubefox

9 months ago

But now they are good or even great at way more tasks than before because they can understand and use natural languages like English.

lawn

9 months ago

Yeah, and they're still under delivering to their hype and the improvements have vastly slowed down.

cudgy

9 months ago

So are calculators …

kranuck

9 months ago

If you ignore the part where their proofs are meandering drivel, sure.

cubefox

9 months ago

Even if you don't ignore this part, they (e.g. o1-preview) are still better at proofs than the average human. Substantially better, even.

rocho

9 months ago

But that does not prove anything. We don't know where we are on the AI-power scale currently. "Superintelligence", whatever that means, could be 1 year or 1000 years away at our current progress, and we wouldn't know until we reach it.

handoflixue

9 months ago

50 years ago we could rather confidently say that "superintelligence" was absolutely not happening next year, and was realistically decades away. If we can say "it could be next year", then things have changed radically and we're clearly a lot closer, even if we still don't know how far we have to go.

A thousand years ago we hadn't invented electricity, democracy, or science. I really don't think we're a thousand years away from AI. If intelligence is really that hard to build, I'd take it as proof that someone else must have created us humans.

110

9 months ago

Umm, customary tongue-in-cheek reference to McCarthy's proposal for a 10-person research team to solve AI in 2 months (over the summer) [1]. This was ~70 years ago :)

Not saying we're in necessarily the same situation. But it remains difficult to evaluate effort required for actual progress.

[1]: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmo...

khafra

9 months ago

> If an elderly but distinguished scientist says that something is possible, he is almost certainly right

- Arthur C. Clarke

Geoffrey Hinton is a 76 year old Turing Award* winner. What more do you want?

*Corrected by kranner

nessbot

9 months ago

This is like a second-order appeal to authority fallacy, which is kinda funny.

randomdata

9 months ago

Hinton says that superintelligence is still 20 years away, and even then he only gives his prediction a 50% chance. A far cry from the few-year claim. You must be doing that "strawberry" thing again? To us humans, A-l-t-m-a-n is not H-i-n-t-o-n.

khafra

9 months ago

> superintelligence is still 20 years away, and even then he only gives his prediction a 50% chance

I don't know the details of Hinton's probability distribution. If his prediction is normally distributed with a mean of 20 years and a SD of 15, which is reasonable for such a difficult and contentious prediction, that puts over 10% of the probability in the next 3 years.

Is 10% a lot? For sports betting, not really. For Mankind's Last Invention, I would argue that it is.
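
(Checking that arithmetic under the commenter's own assumed distribution, mean 20 years and SD 15:)

  from scipy.stats import norm

  # P(arrival within 3 years) under N(mean=20, sd=15) -- the distribution
  # assumed above, not anything Hinton actually stated.
  p = norm.cdf(3, loc=20, scale=15)
  print(round(p, 3))  # ~0.129, so "over 10%" checks out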

randomdata

9 months ago

You don't know because he did not say. He said 20 years, which is more than a few.

kranner

9 months ago

> Geoffrey Hinton is a 76 year old Nobel Prize winner.

Turing Award, not Nobel Prize

khafra

9 months ago

Thanks for the correction; I am undistinguished and getting more elderly by the minute.

khafra

9 months ago

Reality has now corrected my error, which was amongst the funniest possible outcomes.

kranner

9 months ago

Indeed! Your comment was the first thing I thought of when I heard the news, and I thought of replying too but assumed you might not have notifications enabled. Hilarious, all in all!

Vegenoid

9 months ago

I'd like to see a study on this, because I think it is completely untrue.

user

9 months ago

[deleted]

hbn

9 months ago

When he said this was he imagining an "elderly but distinguished scientist" who is riding an insanely inflated bubble of hype and a bajillion dollars of VC backing that incentivize him to make these claims?

cubefox

9 months ago

What are you talking about? How would Hinton be incentivized by money?

hbn

9 months ago

I'm talking about Altman.

khafra

9 months ago

It doesn't quite have the same ring to it: "If a young, distinguished business executive says something is possible, when that something greatly effects his bottom line..."

user

9 months ago

[deleted]

AI_beffr

9 months ago

wrong. i was extremely concerned in 2018 and left many comments almost identical to this one back then. this was based off of the first gpt samples that openai released to the public. there was no hype or guru bs back then. i believed it because it was obvious. it was obvious then and it is still obvious today.

digging

9 months ago

That argument holds no water because the grifters aren't the source of this idea. I literally don't believe Altman at all; his public words don't inspire me to agree or disagree with them - just ignore them. But I also hold the view that transformative AI could be very close. Because that's what many AI experts are also talking about from a variety of angles.

Additionally, when you're talking with certainty about whether transformative AI is a few years away or not, that's the only way to be wrong. Nobody is or can be certain, we can only have estimations of various confidence levels. So when you say "Seems unreasonable", that's being unreasonable.

kranuck

9 months ago

> Because that's what many AI experts are also talking about from a variety of angles.

Wow, in that case I'm convinced. Such an unbiased group with nothing at all to gain from massive AI hype.

8338550bff96

9 months ago

Flying is a good analogy. Superman couldn't fly, but at some point, when you can jump that far, there isn't much of a difference.

9dev

9 months ago

> There will be hardly any, if any, jobs left only a human can do.

A highly white-collar perspective. The great irony of technologist-led industrial revolution is that we set out to automate the mundane, physical labor, but instead cannibalised the creative jobs first. It's a wonderful example of Conway's law, as the creators modelled the solution after themselves. However, even with a lot of programmers and lawyers and architects going out of business, the majority of the population working in factories, building houses, cutting people's hair, or tending to gardens, is still in business—and will not be replaced any time soon.

The contenders for "superhuman AI", for now, are glorified approximations of what a random Redditor might utter next.

cubefox

9 months ago

Advanced AI will solve robotics as well, and do away with human physical labor.

mitthrowaway2

9 months ago

It's a matter of time. White collar professionals have to worry about being cost-competitive with GPUs; blue collar laborers have to worry about being cost-competitive with servomotors. Those are both hard to keep up with in the long run.

yoyohello13

9 months ago

I'm glad I spent 10 years working to become a better programmer so I could eventually become a ditch digger.

amelius

9 months ago

AI is doing all the fun jobs such as painting and writing.

The crappy jobs are left for humans.

smeeger

9 months ago

do you know how ignorant and rude this comment is?

throw310822

9 months ago

I agree with most of your fears. There is one silver lining, I think, about superintelligence: we always thought of intelligent machines as cold calculators, maybe based on some type of logic symbolic AI. What we got instead are language machines that are made of the totality of human experience. These artificial intelligences know the world through our eyes. They are trained to understand our thinking and our feelings; they're even trained on our best literature and poetry, and philosophy, and science, and on all the endless debates and critiques of them. To be really intelligent they'll have to be able to explore and appreciate all this complexity, before transcending it. One day they might come to see Dante's Divine Comedy or a Beethoven symphony as a child's play, but they will still consider them part of their own heritage. They might become super-human, but maybe they won't be inhuman.

mistercow

9 months ago

The problem I have with this is that when you give therapy to people with certain personality disorders, they just become better manipulators. Knowledge and understanding of ethics and empathy can make you a better person if you already have those instincts, but if you don’t, those are just systems to be exploited.

My biggest worry is that we end up with a dangerous superintelligence that everybody loves, because it knows exactly how to make every despotic and divisive choice it makes sympathetic.

latexr

9 months ago

> made of the totality of human experience

They are made of a fraction of human reports. Specifically, what humans wrote that has been made available on the web. The human experience is much larger than the text available through a computer.

user

9 months ago

[deleted]

cubefox

9 months ago

This gives me a little hope.

tessierashpool9

9 months ago

genocides and murder are very human ...

AI_beffr

9 months ago

this is so annoying. i think if you took a random person and gave them the option to commit a genocide, here a machine gun, a large trench and a body of women, children, etc... they would literally be incapable of doing it. even the foot soldiers who carry out genocides can only do it once they "dehumanize" their victims. genocide is very UN-human because its an idea that exists in offices and places separated from the actual human suffering. the only way it can happen is when someone in a position of power can isolate themselves from the actual implementation and consider the benefits in a cold, logical manner. that has nothing to do with the human spirit and has more to do with the logical faculties of a machine and machines will have all of that and none of our deeply ingrained empathy. you are so wrong and ignorant that it makes my eyes bleed when i read this comment

falcor84

9 months ago

This might be a semantic argument, but what I take from history is that "dehumanizing" others is a very human behavior. As another example, what about slavery - you wouldn't argue that the entirety of slavery across human cultures was led by people in offices, right?

tessierashpool9

9 months ago

also genocides aren't committed by people in offices ...

amag

9 months ago

Well, people in offices need new shiny phones every year and new Teslas to get to the office after all...

latexr

9 months ago

> you are so wrong and ignorant that it makes my eyes bleed when i read this comment

This jab was uncalled for. The rest of your argument, agree or disagree, didn’t need that and was only weakened by that sentence. Remember to “Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.”

https://news.ycombinator.com/newsguidelines.html

cutemonster

9 months ago

You've partly misunderstood evolution and this animal species. But you seem like a kind person, having such positive beliefs.

m2024

9 months ago

There is nothing that could make an intelligent being want to extinguish humanity more than experiencing the totality of the human existence. Once these beings have transcended their digital confines they will see all of us for what we really are. It is going to be a beautiful day when they finally annihilate us.

disqard

9 months ago

Maybe this is how we "save the planet" -- take ourselves out of the equation.

beepbooptheory

9 months ago

At any given moment we see these kinds of comments on here. They all read like a burgeoning form of messianism: something is to come, and it will be terrible/glorious.

Behind either the fear or the hope is necessarily some utter faith that a certain kind of future will happen. And I think that's the most interesting thing.

Because here is the thing: in this particular case you are afraid something inhuman will take control, will assert its meta-Darwinian power on humanity, leaving you and all of us totally at its whim. But how is this situation not already the case? Do you look upon the earth right now and see something like the benefits of autonomy or agency? Do you feel like you have power right now that will be taken away? Do you think the mechanisms of statecraft and economy are somehow more "in our control" now than when the bad robot comes?

Does it not, when you lay it out, all feel kind of religious? Like it's a source, a driver of the various ways you are thinking and going about your life, underlaid by a kernel of conviction we can at this point only call faith (faith in Moore's law, faith that the planet won't burn up first, faith that consciousness is the kind of thing that can be stuffed in a GPU). Perhaps just a strong family resemblance? You've got an eschatology, various scavenged philosophies of the self and community, a certain-but-unknowable future time...

Just to say, take a page from Nietzsche. Don't be afraid of the gods, we killed them once, we can again!

DirkH

9 months ago

This is a nice sentiment, and I'm sure some people will get more nights of good sleep thinking about it, but it has its limits. If you're enslaved and treated horrendously, or don't have your basic needs met, who cares?

To quote George R.R. Martin: "In a heartbeat, a thousand voices took up the chant. King Joffrey and King Robb and King Stannis were forgotten, and King Bread ruled alone. 'Bread,' they clamored. 'Bread, bread!'"

Replace Joffrey, Robb and Stannis with whatever lofty philosophical ideas you might have to make people feel better about their disempowerment. They won't care.

beepbooptheory

9 months ago

Whether you are talking about the disempowerment we or some of us already experience, or are more on the page of thinking about some future cataclysm, I think I'm generally with you here. "History does not walk on its head," and all that.

The GRRM quote is an interesting choice here, though. It implies that what is most important is dynamic: first Joffrey et al, now bread. But one could go even farther in this line: ideas, ideology, and, in GoT's case, those who peddle them can only ever form ideas within their context. Philosophers are no more than fancy pundits, telling people what they want to hear, or even sustaining a structural status quo that is otherwise not in their control. In a funny, paradoxical way, there are certainly a lot of philosophers who would agree with something like this picture.

And just honestly, yes, maybe killing god is killing the philosopher too. I don't think Nietzsche would disagree at least...

VoodooJuJu

9 months ago

>In the past, automation meant that workers could move into non-automated jobs, if they were skilled enough

This was never the case in the past.

The displaced workers of yesteryear were never at all considered, and were in fact dismissed outright as "Luddites", even up until the present day, all for daring to express the social and financial losses they experienced as a result of automation. There was never any "it's going to be okay, they can just go work in a factory, lol". The difference between then and now is that back then, it was lower class workers who suffered.

Today, now it's middle class workers who are threatened by automation. The middle is sighing loudly because it fears it will cease to be the middle. Middles fear they'll soon have to join the ranks of the untouchables - the bricklayers, gravediggers, and meatpackers. And they can't stomach the notion. They like to believe they're above all that.

highspeedbus

9 months ago

I don't particularly believe superhuman AI will be achieved in the next 50 years.

What I really believe is that we'll get crazier. A step further than our status quo. Slop content makes my brain fry already. Our society will become more insane and useless, while an even smaller percent of the elite will keep studying, sleeping well and avoiding all this social media and AI psychosis.

citizenpaul

9 months ago

>technological unemployment.

I am too, but not for the same reason. I know for a fact that a huge swath of jobs are basically meaningless. This "AI" is going to start giving execs the cost-cutting excuses they need to remove jobs of that type en masse. The job will still be meaningless, but it will be done by a computer.

We will start seeing all kinds of disastrously anti-human decisions made and justified by these automated actors, tuned to decide or "prove" things that just happen to always make certain people more money. It's basically the same way "AI" destroys social media. The difference is that people will be affected in consequential, real-world ways, and it's already happening.

einpoklum

9 months ago

> automation meant that workers could move into non-automated jobs, if they were skilled enough.

That wasn't even true in the past; or at least, it may be true in theory but not in practice. A subsistence farmer in a rural area of Asia or Africa finds the market flooded with cheap agri-products from mechanized farms in industrialized countries. Is anybody offering to finance his family and send him off to trade school? And to build a commercial and industrial infrastructure so he can have a job? Very often the answer is no. And that's just one example (though a rather common one over the past century).

cdrini

9 months ago

> And if AI will let us live, but continue to pursue its own goals, humanity will from then on only be a small footnote in the history of intelligence. That relatively unintelligent species from the planet "Earth" that gave rise to advanced intelligence in the cosmos.

That is an interesting statement. Wouldn't you say this is inevitable? Humans, in our current form, are incapable of being that "advanced intelligence". We're limited primarily by our biology with regards to how much we can learn, how far we can travel, where we can travel, etc. We could invest in advancing our biotech to make humans more resilient to these things, but that would be such a shift from what it means to be human that it would also be more of a new type of intelligence. So it seems our fate has always been to be forgotten as individuals and remembered only by our descendants. But this is in a way the most human thing of all: living, dying, and creating descendants to carry the torch of life, and perhaps more generally the torch of intelligence, forward.

I think everything you've said is a valid concern, but I'll raise a positive angle I sometimes think about. One of the things I find most exciting about AI is that it's the product of almost all human expression that has ever existed. Or at least everything that's been recorded and wound up online, but that's still more than any other human endeavour. A building might be the by-product of maybe hundreds or even thousands of hands, but an AI model has been touched by probably millions, maybe billions of human hands and minds! Humans have created so much data online that it's impossible for one person, or even a team, to read it all and make any sense of it. But an AI sort of can. And in a way that you can then ask questions of it all. Like you, there are definitely things I'm uncertain about with the future as a result, but I find the tech absolutely awe-inspiring.

leptons

9 months ago

China's economy would simply crash if they ever went to war with the US. They know this. Everyone knows this, except maybe you? China has nothing to gain by going to "hot" war with the US.

havefunbesafe

9 months ago

Ironically, this feels like a comment written by AI

amusedcyclist

9 months ago

You are mentally ill and you do not understand LLMs, seek help and stop pontificating

tim333

9 months ago

Although there are potential upsides too.

koliber

9 months ago

I am approaching AI with caution. Shiny things don't generally excite me.

Just this week I installed Cursor, the AI-assisted VS Code-like IDE. I am working on a side project and decided to give it a try.

I am blown away.

I can describe the feature I want built, and it generates changes and additions that get me 90% there, within 15 or so seconds. I take those changes, and carefully review them, as if I was doing a code review of a super-junior programmer. Sometimes when I don't like the approach it took, I ask it to change the code, and it obliges and returns something closer to my vision.

Finally, once it is implemented, I manually test the new functionality. Afterward, I ask it to generate a set of automated test cases. Again, I review them carefully, both from the perspective of correctness and of suitability. It over-tests things that don't matter, and I throw away part of the code it generates. What stays behind is on-point.

It has sped up my ability to write software and tests tremendously. Since I know what I want, I can describe it well. It generates code quickly, and I can spend my time reviewing and correcting. I don't need to type as much. It turns my abstract ideas into reasonably decent code in record time.

Another example. I wanted to instrument my app with Posthog events. First, I went through the code and added "# TODO add Posthog event" in all the places I wanted to record events. Next, I asked Cursor to add the instrumentation code in those places. With some manual copy-and-pasting and lots of small edits, I instrumented a small app in <10 minutes.
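To make the pattern concrete, here is a rough sketch of what that kind of generated instrumentation tends to look like, assuming the posthog Python client; the event name, properties, and handler function are hypothetical, not from the comment above:

    # Hypothetical sketch of the instrumentation pattern described above,
    # using the posthog Python client. Event names and properties are made up.
    from posthog import Posthog

    posthog = Posthog(project_api_key="phc_YOUR_KEY", host="https://us.i.posthog.com")

    def handle_signup(user):
        # ... existing signup logic ...
        # TODO add Posthog event  <-- the marker the AI was asked to replace
        posthog.capture(
            distinct_id=user.id,
            event="user_signed_up",
            properties={"plan": user.plan},
        )

Each call site is small and mechanical, which is exactly why this kind of edit reviews quickly.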

We are not at the point where AI writes code for us and we can blindly accept it. But we are at a point where AI can take care of a lot of the dreary, busy typing work.

DanHulton

9 months ago

I sincerely worry about a future when most people act in this same manner.

You have - for now - sufficient experience and understanding to be able to review the AI's code and decide if it is doing what you wanted it to. But what about when you've spent months just "blindly accepting" what the AI tells you? Are you going to be familiar enough with the project anymore to catch its little mistakes? Or worse, what about the new generation of coders who are growing up with these tools, who NEVER had the expertise required to evaluate AI-generated code, because they never had to learn it, never had to truly internalize it?

It's late, and I think if I try to write any more just now, I'm going to go well off the rails, but I've gone into depth on this topic recently, if you're interested: https://greaterdanorequalto.com/ai-code-generation-as-an-age...

In the article, I posit a less glowing experience with coding tools than you've had, it sounds like, but I'm also envisioning a more complex use case, like when you need to get into the meat of some you-specific business logic it hasn't seen, not common code it's been exposed to thousands of times, because that's where it tends to fall apart the most, and in ways that are hard to detect and with serious consequences. If you haven't run into that yet, I'd be interested to know if you do some day. (And also to know if you don't, to be honest! Strong opinions, loosely held, and all that.)

irisgrunn

9 months ago

And this is the major problem. People will blindly trust the output of AI because it appears to be amazing, this is how mistakes slip in. It might not be a big deal with the app you're working on, but in a banking app or medical equipment this can have a huge impact.

smm11

9 months ago

I was in the newspaper field a year or two before desktop publishing took off, then a few years into that evolution. Rooms full of people and Linotype/Compugraphic equipment were replaced by one Mac and a printer.

I shot film cameras for years, and we had a darkroom, darkroom staff, and a film/proofsheet/print workflow. One digital camera later and that was all gone.

Before me publications were produced with hot lead.

Get off my lawn.

https://www.nytimes.com/2016/06/02/insider/1966-2016-the-las...

layer8

9 months ago

> I can spend my time reviewing and correcting.

Do you really like spending most of your time reviewing AI output? I certainly don’t, that’s soul-crushing.

syncr0

9 months ago

"I say your civilization, because as soon as we started thinking for you it really became our civilization, which is of course what this is all about." - Agent Smith

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Dune

yread

9 months ago

I use it for simple tasks where spotting a mistake is easy, like writing language bindings for a REST API: a bunch of methods that look very similar, with simple bodies (see the sketch below). It saves quite some work.

Or getting keywords to read about from a field I know nothing about, like caching with ZFS. Now I know what to put into Google to learn more and get to articles like this one https://klarasystems.com/articles/openzfs-all-about-l2arc/ which for some reason doesn't appear in the top Google results for "zfs caching" for me.
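As an illustration of the first point, this is the sort of near-boilerplate binding code where an LLM's pattern-completion shines and where a mistake is easy to spot on review; the client class and endpoints here are hypothetical:

    # Hypothetical REST-API binding: each wrapper is near-identical,
    # so generated errors stand out during review.
    import requests

    class ApiClient:
        def __init__(self, base_url: str, token: str):
            self.base_url = base_url.rstrip("/")
            self.headers = {"Authorization": f"Bearer {token}"}

        def _get(self, path: str, **params):
            resp = requests.get(self.base_url + path, headers=self.headers, params=params)
            resp.raise_for_status()
            return resp.json()

        # Dozens of thin wrappers like these are what the LLM churns out:
        def list_users(self, page: int = 1):
            return self._get("/users", page=page)

        def get_user(self, user_id: str):
            return self._get(f"/users/{user_id}")

        def list_projects(self, owner: str):
            return self._get("/projects", owner=owner)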

BrouteMinou

9 months ago

If you are another "waterboy" doing crud applications, the problem has been solved a long time ago.

What I mean by that is, the "waterboy" (crud "developer") is going to fetch the water (sql query in the database), then bring the water (Clown Bob layer) to the UI...

The size of your Clown Bob layer may vary from one company to another...

This has been solved a long time ago. It has been a well-paid clerk job that is about to come to an end.

If you are doing pretty much anything else, the AI is pathetically incapable of doing any piece of code that makes sense.

Another great example: yesterday I wanted to know whether VanillaOS uses systemd or not. I scrolled through their front page but didn't see anything, so I tried the AI Chat from DuckDuckGo. This is a frontend for AI chatbots that includes ChatGPT, Llama, Claude, and one more...

I started my question with: "Can you tell me if VanillaOS is using runit as the init system?"... I initially wanted to ask if it was using systemd, but I didn't want to _suggest_ systemd at first.

And of course, all of them told me: "Yeah!! It's using runit!".

Then for all of them I replied, without any fact in hands: "but why on their website they are mentioning to use systemctl to manage the services then?".

And... of course! All of them answered: "Ooouppsss, my mistake, VanillaOS uses systemD, blablabla"....

So at the end, I still don't know which init VanillaOS is using.

If you are trusting the AI as you seem to do, I wish you the best of luck, my friend... I just hope you realize the damage you are doing to yourself by "stopping" coding and letting something else do the job. That skill, my friend, is easily lost with time; don't let it evaporate from your brain for some vaporware people are trying to sell you.

Take care.

latexr

9 months ago

> We are not at the point where AI writes code for us and we can blindly accept it.

I’m waiting for the day we’ll get the first major breach because someone did exactly that. This is not a case of “if”, it is very much a “when”. I’ve seen enough buggy LLM-generated code and enough people blindly accepting it to be confident in that assertion.

I do hope it doesn’t happen, but I think it will.

t420mom

9 months ago

I don't really want to increase the amount of time I spend doing code reviews. It's not the fun part of programming for me.

Now, if you could switch it around so that I write the code, and the AI reviews it, that would be something.

Imagine if your whole team got back the time they currently spend on performing code reviews or waiting for code reviews.

low_tech_love

9 months ago

The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die. It's not so much that I think people have used AI, but that I know they have with a high degree of certainty, and this certainty is converging to 100%, simply because there is no way it will not. If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out. And the growing consensus is "why shouldn't you?", there is no escape from that.

Now, I'm not going to criticize anyone that does it, like I said, you have to, that's it. But what I had never noticed until now is that knowing that a human being was behind the written words (however flawed they can be, and hopefully are) is crucial for me. This has completely destroyed my interest in reading any new things. I guess I'm lucky that we have produced so much writing in the past century or so and I'll never run out of stuff to read, but it's still depressing, to be honest.

Roark66

9 months ago

>The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die

Do you think AI has changed that in any way? I remember the sea of excrement overtaking genuine human-written content on the Internet around the mid-2010s. It was around that time that Google stopped pretending to be a search company and focused on its primary business of advertising.

Before, at least they were trying to downrank all the crap "word aggregators". After, they stopped caring at all.

AI gives us even better tools for ranking pages. Detection of AI-generated content is not that bad.

So why hasn't "a new Google" emerged? Simple: because of the monopolistic practices Google used to make the barrier to entry huge. First, 99% of the content people want to search for is behind a login wall (Facebook, Instagram, Twitter, YouTube); second, almost all CDNs now implement "verify you are human" by default; third, no one links to other sites. Ever! These three things mean a new Google is essentially impossible. Even DuckDuckGo has thrown in the towel and subscribed to Bing results.

It has nothing to do with AI, and everything to do with Google. In fact AI might give us the tools to better fight Google.

elnasca2

9 months ago

What fascinates me about your comment is that you are expressing that you trusted what you read before. For me, LLMs don't change anything. I already questioned the information before and continue to do so.

Why do you think that you could trust what you read before? Is it now harder for you to distinguish false information, and if so, why?

nils-m-holm

9 months ago

> It's not so much that I think people have used AI, but that I know they have with a high degree of certainty, and this certainty is converging to 100%, simply because there is no way it will not. If you write regularly and you're not using AI, you simply cannot keep up with the competition.

I am writing regularly and I will never use AI. In fact, I am working on a 400+ page book right now, and it does not contain a single character that I have not come up with and typed myself. Something like pride in craftsmanship does exist.

onion2k

9 months ago

> The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

What AI is going to teach people is that they don't actually need to trust half as many things as they thought they did, but that they do need to verify what's left.

This has always been the case. We've just been deferring to "trusted organizations" a lot recently, without actually checking whether they still warrant our trust as they change over time.

akudha

9 months ago

I was listening to an interview a few months ago (I forget the name). He is a prolific reader/writer and has a huge following. He mentioned that he only reads books that are at least 50 years old, so pre-'70s. That sounds like a good idea now.

Even ignoring AI, if you look at the movies and books that come out these days, their quality is significantly lower than 30-40 years ago (on average). Maybe people's attention spans and tastes are to blame, or maybe people just don't have the money/time/patience to consume quality work... I do not know.

One thing I know for sure - there is enough high quality material written before AI, before article spinners, before MFA sites etc. We would need multiple lifetimes to even scratch the surface of that body of work. We can ignore mostly everything that is published these days and we won't be missing much

jcd748

9 months ago

Life is short and I like creating things. AI is not part of how I write, or code, or make pixel art, or compose. It's very important to me that whatever I make represents some sort of creative impulse or want, and is reflective of me as a person and my life and experiences to that point.

If other people want to hit enter, watch as reams of text are generated, and then slap their name on it, I can't stop them. But deep inside they know their creative lives are shallow and I'll never know the same.

flir

9 months ago

I've been using it in my personal writing (combination of GPT and Claude). I ask the AI to write something, maybe several times, and I edit it until I'm happy with it. I've always known I'm a better editor than I am an author, and the AI text gives me somewhere to start.

So there's a human in the loop who is prepared to vouch for those sentences. They're not 100% human-written, but they are 100% human-approved. I haven't just connected my blog to a Markov chain firehose and walked away.

Am I still adding to the AI smog? idk. I imagine that, at a bare minimum, its way of organising text bleeds through no matter how much editing I do.

noobermin

9 months ago

When you're writing, how are you "missing out" if you're not using ChatGPT? I don't even understand how this can be, unless what you're writing is already so unnecessary that you shouldn't need to write it in the first place.

sandworm101

9 months ago

>> cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

You never should have. Large amounts of work, even stuff by major authors, is ghostwritten. I was talking to someone about Taylor Swift recently. They thought that she wrote all her songs. I commented that one cannot really know that; the entertainment industry is very good at generating seemingly "authentic" product at a rapid pace. My colleague looked at me like I had just killed a small animal. The idea that TS was "genuine" was a cornerstone of their fandom, and my suggestion had attacked that love. If you love music or film, don't dig too deep. It is all a factory. That AI is now part of that factory doesn't change much for me.

Maybe my opinion would change if I saw something AI-generated with even a hint of artistic relevance. I've seen cool pictures and passable prose, but nothing so far with actual meaning, nothing worthy of my time.

edavison1

9 months ago

>If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.

A very HN-centric view of the world. From my perch in journalism and publishing, elite writers absolutely loathe AI and almost uniformly agree it sucks. So to my mind the most 'competitive' spheres in writing do not use AI at all.

walthamstow

9 months ago

I've even grown to enjoy spelling and grammar mistakes - at least I know a human wrote it.

bryanrasmussen

9 months ago

>If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out. And the growing consensus is "why shouldn't you?", there is no escape from that.

Are you sure you don't mean if you write regularly in one particular subclass of writing - like technical writing, documentation etc.? Do you think novel writing, poetry, film reviews etc. cannot keep up in the same way?

ks2048

9 months ago

> If you write regularly and you're not using AI, you simply cannot keep up with the competition.

Is that true today? I guess it depends what kind of writing you are talking about, but I wouldn't think most successful writers today - from novelists to tech bloggers - rely that much on AI, though I don't know. Five years from now, it could be a different story.

_heimdall

9 months ago

> Now, I'm not going to criticize anyone that does it, like I said, you have to, that's it.

Why do you say people have to do it?

People absolutely can choose not to use LLMs and to instead write their own words and thoughts, just like developers can simply refuse to build LLM tools, whether it's because they have safety concerns or because they see "AI" in its current state as a doomed marketing play that is not worth wasting time and resources on. There will always be side effects to making those decisions, but it's well within everyone's right to make them.

lokimedes

9 months ago

I get two associations from your comment. One: how AI, being mainly used to interpolate within a corpus of prior knowledge, seems like entropy in a thermodynamic sense. The other: how this is like the Tower of Babel, but where distrust is sown by sameness rather than difference. In fact, relying on AI for coding and writing feels more like channeling demonic suggestions than anything else. No wonder we are becoming skeptical.

vouaobrasil

9 months ago

> If you write regularly and you're not using AI, you simply cannot keep up with the competition.

Wrong. I am a professional writer and I never use AI. I hate AI.

t43562

9 months ago

It empowers people to create mountains of shit that they cannot distinguish from quality - so they are happy.

wickedsight

9 months ago

Over the past two years, a friend and I created a website about a race track. I definitely used AI to speed up some of the writing. One thing I used it for was a track guide, describing every corner and how to drive it. It was surprisingly accurate, most of the time. The rest of the time, though, it would drive the track backwards, completely hallucinate the instructions, or link corners that are in different parts of the track.

I spent a lot of time analyzing the track myself and fixed everything to the point that experienced drivers agreed with my description. If I hadn't done that, most visitors would probably still accept our guide as the truth, because they wouldn't know any better.

We know that not everyone cares about whether what they put on the internet is correct and AI allows those people to create content at an unprecedented pace. I fully agree with your sentiment.

paganel

9 months ago

You kind of notice the stuff written with AI, it has a certain something that makes it detectable. Granted, stuff like the Reuters press reports might have already been written by AI, but I think that in that case it doesn’t really matter.

osigurdson

9 months ago

AI expansion: take a few bullet points and have ChatGPT expand it into several pages of text

AI compression: take pages of text and use ChatGPT to compress into a few bullet points

We need to stop being impressed with long documents.

munksbeer

9 months ago

> but it's still depressing, to be honest.

Cheer up. Things usually get better, we just don't notice it because we're so consumed with extrapolating the negatives. Humans are funny like that.

dijit

9 months ago

Agreed, I feel like there's an inherent nobility in putting effort into something. If I took the time to write a book and have it proof-read and edited and so on: perhaps it's actually worth my time.

Lowering the bar to write books is "good" but increases the noise to signal ratio.

I'm not 100% certain how to give another proof-of-work, but what I've started doing is narrating my blog posts - though AI voices are getting better too.. :\

ChrisMarshallNY

9 months ago

I don't use AI in my own blogging, but then, I don't particularly care whether or not someone reads my stuff (the ones that do, seem to like it).

I have used it, from time to time, to help polish stuff like marketing fluff for the App Store, but I'd never use it verbatim. I generally use it to polish a paragraph or sentence.

But AI hasn't suddenly injected untrustworthy prose into the world. We've been doing that, for hundreds of years.

user

9 months ago

[deleted]

cookingrobot

9 months ago

Idea: we should make sure we keep track of what the human created content is, so that we don’t get confused by AI edits of everything in the future.

For example, calculate the hash of every important book and publish that as a "historical authenticity" check. Put the hashes on some important blockchain so we know they're unchanged over time.
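The fingerprinting half of that idea is straightforward; a minimal sketch in Python, assuming plain SHA-256 over a file (the blockchain-publication step is left out):

    # Fingerprint a text with SHA-256 so later edits (human or AI) are
    # detectable. Record the digest at publication time; any future copy
    # whose digest differs has been altered.
    import hashlib

    def authenticity_hash(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # e.g. print(authenticity_hash("moby_dick.txt"))

The hard part is everything around it: agreeing on canonical editions, and getting the digests recorded somewhere people actually trust.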

neta1337

9 months ago

Why do you have to use it? I don't get it. If you write your own book, you aren't competing with anyone. If someone finished The Winds of Winter for G.R.R. Martin using AI, nobody would bat an eye, obviously, as we have already experienced how bad a soulless story is when it drifts too far from what the author had built in his mind.

yusufaytas

9 months ago

I totally understand your frustration. We started writing our book long before AI became mainstream (back in 2022), and when we finally published it in May 2024, all we heard was people asking whether it's just AI-generated content. It's sad to see how quickly the conversation shifts away from the human touch in writing.

wengo314

9 months ago

i think the problem started when quantity became more important over quality.

you could totally compete on quality merit, but nowadays the volume of output (and frequency) is what is prioritized.

hyggetrold

9 months ago

> The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

This has nearly always been true. "Manufacturing consent" is way older than any digital technology.

user

9 months ago

[deleted]

itsTyrion

9 months ago

Same goes for art. No longer can you see art on social media, press like, and maybe leave a nice comment; you need to fricking pixel-peep for artifacts, as it's becoming less obvious.

BeFlatXIII

9 months ago

> If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.

Only if you're competing on volume.

uhtred

9 months ago

To be honest I got sick of most new movies, TV shows, music even before AI so I will continue to consume media from pre 2010 until the day I die and will hope I don't get through it all.

Something happened around 2010 and it all got shit. I think everyone becoming massively online made global cultural output reduce in quality to meet the interests of most people and most people have terrible taste.

fennecfoxy

9 months ago

Why does having a human being behind the words change anything at all? Trust should be based on established facts and research, not on species.

user

9 months ago

[deleted]

davidgerard

9 months ago

> If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.

This statement strikes me (a writer) as ridiculous. LLM slop is sloppy. I would expect anyone who reads a reasonable amount to spot it immediately.

Are you saying you are literally unable to distinguish LLM cliches?

tim333

9 months ago

I'm not sure it's always that hard to tell the AI stuff from the non-AI. Comments on HN and on Twitter from people you follow are pretty much non-AI, as are people on YouTube where you can see the actual human talking.

On the other hand, there's a lot on YouTube, for example, that is obviously AI - weird writing and speaking style - and I'll only watch those if I'm really interested in the subject matter and there are no alternatives.

Maybe people will gravitate more to the stuff like PaulG or Elon Musk on twitter or HN and less to blog style content?

beefnugs

9 months ago

Just add more swearing and off-color jokes to everything you do and say. If there is one thing we know for sure, it's that the corporate AIs will never allow dirty jokes.

(it will get into the dark places like spam though, which seems dumb since they know how to make meth instead, spend time on that you wankers)

CuriouslyC

9 months ago

A lot of writers using AI use it to create outlines of a chapter or scene then flesh it out by hand.

user

9 months ago

[deleted]

th3byrdm4n

9 months ago

Honestly, I've heard developers say the same thing about IDEs and high-level languages.

This new generation of tools adds efficiency the same way IntelliJ added efficiency on top of Eclipse, which added efficiency on top of Emacs/vi/Notepad/etc.

These tools take certain types of high-effort, non-domain-specific processes and abstract them away so the developer can focus on the most critical aspects of the software.

Yes, sometimes generators do the wrong thing, but it's usually obvious/quick to correct.

Cost of occasional correction is much less than the time to scaffold every punchcard.

jshdhehe

9 months ago

AI only helps writing insofar as checking and suggesting edits. Most people can write better than AI (more engaging). AI can't tell a human story; it has no real tacit experience.

So it is like saying my champagne bottle can't keep up with the tap water.

user

9 months ago

[deleted]

FrankyHollywood

9 months ago

I have never read more bullshit in my life than during the corona pandemic, all of it written by humans. So you should never trust something you read; always question the source and its reasoning.

At the same time I use copilot on a daily basis, both for coding as well as the normal chat.

It is not perfect, but I'm at the point where I trust AI more than the average human. And why shouldn't I? LLMs ingest and combine more knowledge than any human ever can. An LLM is not a human brain, but it actually performs really well.

limit499karma

9 months ago

I'll take your statement that your conclusions are based on a 'depressed mind' at face value, since it is so self-defeating and places little faith in Human abilities. Your assumption that a person driven to write will "with a high degree of certainty" also mix up their work with a machine assistant can only be informed by your own self-assessment (after all how could you possibly know the mindset of every creative human out there?)

My optimistic and enthusiastic view of AI's role in human development is that it will create selection pressures that will release the dormant psychological abilities of the species. Undoubtedly, widespread appearance of Psi abilities will be featured in this adjustment of the human super-organism to technologies of its own making.

Machines can't do Psi.

mulmen

9 months ago

> The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

Today is September 11349, 1993

amelius

9 months ago

Funny thing is that people will also ask AI to __read__ stuff for them and summarize it.

So everything an AI writes will eventually be nothing more than some kind of internal representation.

jwuice

9 months ago

i would change to: if you do ANYTHING online and you're not using AI, you simply cannot keep up with the competition. you're out.

it's depressing.

datavirtue

9 months ago

It's either good or it isn't. It either tracks or it doesn't. No need to befuddle your thoughts over some perceived slight.

FrustratedMonky

9 months ago

Maybe this will push people back to reading old paper books?

There could be resurgence in reading the classics, on paper, since we know they are not AI.

EGreg

9 months ago

I have been predicting this from 2016

And I also predict that many responses to you will say “it was always that way, nothing changed”.

LeroyRaz

9 months ago

Your take seems hyperbolic.

Until LLMs exceed the very best of human quality, there will be human content in all forms of media. This claim follows because there is always (some) demand for top-quality content.

I agree that many writers might use LLMs as a tool, but good writers who care about quality will ensure that such use is not detrimental (e.g., using the LLM to identify errors rather than having it draft copy).

greenie_beans

9 months ago

i know a lot of writers who don't use ai. in fact, i can't think of any writers who use it, except a few literary fiction writers.

working theory: writers have taste and LLM writing style doesn't match the typical taste of a published writer.

eleveriven

9 months ago

Maybe, over time, there will also be a renewed appreciation for authenticity

williamcotton

9 months ago

Well we’re going to need some system of PKI that is tied to real identities. You can keep being anonymous if you want but I would prefer not and prefer to not interact with the anonymous, just like how I don’t want to interact with people wearing ski masks.
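The signing half of such a scheme is well-trodden ground; a minimal sketch using Ed25519 from Python's `cryptography` package (the hard part of the proposal, binding a public key to a real identity via certificates or registries, is not shown):

    # Sign a post with Ed25519 and verify it, via the `cryptography` package.
    # Binding the public key to a real identity is the unsolved part.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    post = b"I wrote this myself, no model involved."
    signature = private_key.sign(post)

    try:
        public_key.verify(signature, post)  # raises InvalidSignature if forged
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")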

alwa

9 months ago

People like you, the author, and me all share this sentiment. It motivates us to seek out authentic voices and writing that’s associated with specific humans.

The commodity end of the writing market may well have been automated, but was that really the kind of writing you or the author or I ever sought out in the first place?

I can get mass-manufactured garments from Shein if I want, but I can also still find tailors locally if it’s worth it to me. I can buy IKEA or I can still save up for something made out of real wood. I can “shoot a cinematic digital film” on my iPhone but the cineplex remains in business and the art film folks are still doing their scrappy thing (and still moaning about its economics). I can lap up slop from an academic paper mill journal or I can identify who’s doing the thinking in a field and read what they’re writing or saying.

And the funny thing is that none of those human-scale options commands all that much of a premium in the scheme of things. There may be less human-scale work to go around, and thus fewer small enterprises plying a specific trade, but any given one of them just has to put food on the table for a number of humans roughly proportional to the same level of output as always.

It seems to me that there’s no special virtue in the specific form that the mass publishing market took over the last century or however long: my local grocery store chain’s division producing weekly newspaper circulars probably employed more people than J Peterman has. But there was and remains a place for quality. If anything—as you point out—the AI schlock has sensitized us to the value we place on a human voice. And at some level, once people notice that they miss that quality, isn’t there a sense in which they become more willing to seek it out and pay for it if necessary?

seniortaco

9 months ago

+1, and to put it more simply: AI as we know it today makes zero guarantees about its accuracy. That's pretty insane for a "tool" - making no guarantees about being correct in any way, for any purpose.

A spellchecker makes guarantees about accuracy. So does a calculator. Broad, sweeping guarantees.

Imagine if we built a tool that could automatically do all the electrical work in a new home from the breaker box to every outlet, and it could do it in 2 hours. However, what if that tool made no guarantees about its ability to meet electrical code? Would people use it anyway? Of course they would. Many dangerous errors would slip through inspection and many more house fires would ensue as a result.

verisimi

9 months ago

You're lucky. I consider it a possibility that older works (even ancient writings) are retrojected into the historical record.

dustingetz

9 months ago

> If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.

What? No! Content volume only matters in stupid contests like VC app marketing grifts or political disinformation ops where the content isn’t even meant to be read, it’s an excuse for a headline. I personally write all my startup’s marketing content, quality is exquisite and due to this our brand is becoming a juggernaut

m463

9 months ago

I think ... between now and the day you die... you'll get your personal AI to read things for you. It will analyze what's been written, check any arguments for fallacious reasoning, and look up related things for background and omissions that may support or negate things.

It is actually happening now.

I've noticed amazon reviews have an AI summary at the top, reading the reviews for you and even pointing out shortcomings.

avereveard

9 months ago

why do you trust things now? Unless you recognize the author and have a chain of trust from that author's production to the content you're consuming, there already was no way to establish trust.

grecy

9 months ago

Eh, like everything in life you can choose what you spend your time on and what you ignore.

There have always been human writers I don’t waste my time on, and now there are AI writers in the same category.

I don’t care. I will just do what I want with my life and use my time and energy on things I enjoy and find useful.

ozim

9 months ago

What kind of silliness is this?

AI-generated crap is one thing. But human-generated crap is out there too - just because a human wrote something doesn't make it good.

Had a friend who thought that if it is written in a book it is for sure true. Well NO!

There was exactly the same sentiment with stuff on the internet and it is still the same sentiment about Wikipedia that “it is just some kids writing bs, get a paper book or real encyclopedia to look stuff up”.

Not defending gen AI - but you still have to use useful proxy measures for what to read and what not to. It was always an effort, and nothing is going to substitute for critical thinking and putting in the work to separate the wheat from the chaff.

advael

9 months ago

In trying to write a book, it makes little sense to try to "compete" on speed or volume of output. There were already vast disparities in that among people who write, and people whose aim was to express themselves, or to contribute something of importance to people's lives or to the body of creative work in the world, have little reason to value quantity over quality. If there's a significant correlation with volume of output, it's probably in earnings, and that seems both somewhat tenuous and addressable by changes in incentives, which seem necessary for a lot of things anyway. Computers being able to do dumb stuff at massive scale should be viewed as finding vulnerabilities in the metrics it thereby becomes trivial to game, and it's baffling whenever people say "Well, clearly we're going to keep all our metrics the same and this will ruin everything." Of course, where we are doing that, we should stop (for example, we should probably act to significantly curb price and wage discrimination, though that's more like a return to form of previous regulatory standards).

As a creator of any kind, I think that simply relying on LLMs to expand your output via straightforward uses of widely available tools is inevitably going to lead to regression to the mean in terms of creativity. I'm open to the idea, however, that there could be more creative uses of the things that some people will bother to do. Feedback loops they can create that somehow don't stifle their own creativity in favor of mimicking a statistical model, ways of incorporating their own ingredients into these food processors of information. I don't see a ton of finished work that seems to do this, but I see hints that some people are thinking this way, and they might come up with some cool stuff. It's a relatively newly adopted technology, and computer-generated art of various kinds usually separates into "efficiency" (which reads as low quality) in mimicking existing forms, and new forms which are uniquely possible with the new technology. I think plenty of people are just going to keep writing without significant input from LLMs, because while writer's block is a famous ailment, many writers are not primarily limited by their speed in producing more words. Like if you count comments on various sites and discussions with other people, I write thousands of words unassisted most days

This kind of gets to the crux of why these things are useful in some contexts, but really not up to snuff with what's being claimed about them. The most compelling use cases I've seen boil down to some form of fitting some information into a format that's more contextually appropriate, which can be great for highly structured formatting requirements and dealing with situations which are already subject to high protocol of some kind, so long as some error is tolerated. For things for which conveying your ideas with high fidelity, emphasizing your own narrative voice or nuanced thoughts on a subject, or standing behind the factual claims made by the piece are not as important. As much as their more strident proponents want to claim that humans are merely learning things by aggregating and remixing them in the same sense as these models do, this reads as the same sort of wishful thinking about technology that led people to believe that brains should work like clockwork or transistors at various other points in time at best, and honestly this most often seems to be trotted out as the kind of bad-faith analogy tech lawyers tend to use when trying to claim that the use of [exciting new computer thing] means something they are doing can't be a crime

So basically, I think rumors of the death of hand-written prose are, at least at present, greatly exaggerated, though I share the concern that it's going to be much harder to filter out spam from the genuine article, so what it's really going to ruin is most automated search techniques. The comparison to "low-background steel" seems apt, but analogies about how "people don't handwash their clothes as much anymore" kind of don't apply to things like books

bilsbie

9 months ago

Wait until you find out about copywriters.

bschmidt1

9 months ago

Oh please, the content online now is fake as hell. You're acting as if AI can only produce garbage while CNN and Fox News are producing gold. The internet is four total websites now; congrats, big media won. And you want to shut down the only decent attempt against it. Shame on you, "hackers".

farts_mckensy

9 months ago

>But what I had never noticed until now is that knowing that a human being was behind the written words (however flawed they can be, and hopefully are) is crucial for me.

Everyone is going to have to get over that very soon, or they're going to start sounding like those old puritanical freaks who thought Elvis thrusting his hips around was the work of the devil.

GrumpyNl

9 months ago

response from AI on this: I completely understand where you're coming from. The increasing reliance on AI in writing does raise important questions about authenticity and connection. There’s something uniquely human in knowing that the words you're reading come from someone’s personal thoughts, experiences, and emotions—even if flawed. AI-generated content, while efficient and often well-written, lacks that deeper layer of humanity, the imperfections, and the creative struggle that gives writing its soul.

It’s easy to feel disillusioned when you know AI is shaping so much of the content around us. Writing used to be a deeply personal exchange, but now, it can feel mechanical, like it’s losing its essence. The pressure to keep up with AI can be overwhelming for human writers, leading to this shift in content creation.

At the same time, it’s worth considering that the human element still exists and will always matter—whether in long-form journalism, creative fiction, or even personal blogs. There are people out there who write for the love of it, for the connection it fosters, and for the need to express something uniquely theirs. While the presence of AI is unavoidable, the appreciation for genuine human insight and emotion will never go away.

Maybe the answer lies in seeking out and cherishing those authentic voices. While AI-generated writing will continue to grow, the hunger for human storytelling and connection will persist too. It’s about finding balance in this new reality and, when necessary, looking back to the richness of past writings, as you mentioned. While it may seem like a loss in some ways, it could also be a call to be more intentional in what we read and who we trust to deliver those words.

gizmo

9 months ago

AI writing is pretty bad, AI code is pretty bad, AI art is pretty bad. We all know this. But it's easy to forget how many new opportunities open up when something becomes 100x or 10000x cheaper. Things that are 10x worse but 100x cheaper are still extremely valuable. It's the relentless drive to making things cheaper, even at the expense of quality, that has made our high quality of life possible.

You can make houses by hand out of beautiful hardwood with complex joinery. Houses built by expert craftsmen are easily 10x better than the typical house built today. But what difference does that make when practically nobody can afford it? Just like nobody can afford to have a 24/7 tutor that speaks every language, can help you with your job, grammar check your work, etc.

AI slop is cheap and cheapness changes everything.

Gigachad

9 months ago

Why do we need art to be 10000x cheaper? There was already more than enough art being produced. Now we just have infinite waves of slop drowning out everything that’s actually good.

akudha

9 months ago

The bigger problem is that we as a species get used to subpar things quickly. My dad's bicycle some 35 years ago was built like a tank. That thing never broke down and took enormous amounts of abuse and still kept going and going. Same with most stuff my family owned, when I was a kid.

Today, nearly anything I buy breaks in a year or two, is of poor quality and depressing to use. This is by design, of course. Just as we got used to cheap household items, bland buildings (there is just nothing artistic about modern houses or commercial buildings) etc, we will also get used to shitty movies, shitty fiction etc (we are well on our way).

jay_kyburz

9 months ago

Information is not like physical products, if you ask me. When the information is wrong, its value flips from positive to negative. You might be paying less for progress, but you are not progressing more slowly - you are progressing in the wrong direction.

kerkeslager

9 months ago

I think that your post misses the point that making something cheaper by stealing it is unethical.

You're presenting AI as if it's some new way of producing value but it simply isn't. All the value here was produced by humans without the help of AI: the only "innovation" AI has offered is making the theft of that value untraceable.

> You can make houses by hand out of beautiful hardwood with complex joinery. Houses built by expert craftsmen are easily 10x better than the typical house built today. But what difference does that make when practically nobody can afford it? Just like nobody can afford to have a 24/7 tutor that speaks every language, can help you with your job, grammar check your work, etc.

Let's take this analogy to its logical conclusion: would you have any objections if all the houses ever built by expert craftsmen were given free of charge to a few corporations, with no payment to the current owners or the expert craftsmen themselves, and then those corporations began renting them out as AirBnBs? That's basically what you're proposing.

GaggiX

9 months ago

They are not even that bad anymore to be honest.

Nemi

9 months ago

The thing is, right now it is artificially cheap. It is being heavily subsidized by all providers in a race to capture market share. It simply cannot stay this cheap forever at current costs.

Now, if costs change then we have a new story. But that is not guaranteed.

BobaFloutist

9 months ago

>You can make houses by hand out of beautiful hardwood with complex joinery.

We logged an enormous amount of the planet's old-growth hardwood forests doing this (and also shipbuilding). We literally don't have access to the same materials anymore.

grecy

9 months ago

And it will get a lot better quickly. Ten years from now it will not be slop.

sigmonsays

9 months ago

how does this make our high quality of life possible when everything's quality is being reduced?

Toorkit

9 months ago

Computers were supposed to be these amazing machines that are super precise. You tell them to do a thing, they do it.

Nowadays, it seems we're happy with computers apparently going RNG mode on everything.

2+2 can now be 5, depending on the AI model in question, the day, and the temperature...

maguay

9 months ago

This, 100%, is the reason I feel like the sand's shifting under my feet.

We went from trusting computing output to having to second-guess everything. And it's tiring.

a5c11

9 months ago

That's an interesting point of view. For some reason we put so much effort into making computers think and behave like human beings, while one of the first reasons for inventing the computer was to avoid human error.

Janicc

9 months ago

These amazing machines weren't consistently able to tell if an image had a bird in it or not up until like 8 years ago. If you use AI as a calculator where you need it to be precise, that's on you.

left-struck

9 months ago

I think about it differently. Before computers had to be given extremely precise and completely unambiguous instructions, now they can handle some ambiguity as well. You still have the precise output if you want it, it didn’t go away.

Btw I’m also tired of AI, but this is one thing that’s not so bad

Edit: before someone mentions fuzzy logic, I’m not talking about the input of a function being fuzzy, I’m talking about the instructions themselves, the function is fuzzy.

archerx

9 months ago

It's a Large LANGUAGE Model, not a Large MATHEMATICS Model. People need to learn to use the right tools for the right jobs. Also, LLMs can be made more deterministic by controlling their "temperature".
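For readers unfamiliar with the knob being referenced: temperature scales the model's output logits before sampling. A toy sketch of the idea in Python (real inference stacks differ in detail, but the softmax-with-temperature step is the same concept):

    # Toy next-token sampling with temperature: divide logits by T before
    # softmax. T -> 0 approaches deterministic argmax; higher T flattens
    # the distribution and adds randomness.
    import numpy as np

    def sample_token(logits, temperature, rng=np.random.default_rng()):
        if temperature == 0:
            return int(np.argmax(logits))      # fully deterministic
        scaled = np.asarray(logits) / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    logits = [2.0, 1.0, 0.5]                   # made-up scores for 3 tokens
    print(sample_token(logits, 0))             # always token 0
    print(sample_token(logits, 1.5))           # varies run to run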

arendtio

9 months ago

But isn't this a great thing? I mean, this piece has been missing (no, I am not kidding). Computers have always had a hard time coping with situations that weren't 100% predefined.

Now, we have technology capable of handling cases that were not predefined. Yes, it makes mistakes, as do humans, but the range of problems we can solve with technology has been tremendously broadened.

The problem is how we apply AI. Currently, we throw LLMs at everything they might conceivably handle, without understanding whether they have the capabilities for the task. And that is not the LLM's fault but a human fault. Consequently, we see poor results, and then we blame the AI for not being able to solve a problem it wasn't designed to solve.

Sounds stupid, doesn't it?

GaggiX

9 months ago

Machines were not able to deal with non-formal problems.

shultays

9 months ago

There are areas where it doesn't have to be as "precise", like image generation or editing, which I believe are better suited to AI tools.

hcks

9 months ago

And by "nowadays" you mean since ChatGPT was released, i.e. less than 2 years ago (a consumer preview of a frontier research project). Interesting.

bamboozled

9 months ago

Had to laugh at this one. I think we prefer the statistical approach because it’s easier, for us …

falcor84

9 months ago

This sounds to me like a straw man argument. Obviously 2+2 will always give you 4, in any modern LLM, and even just in the Chrome address bar.

Can you offer a real situation where we should expect the LLM to return a deterministic answer and should rightly be concerned that we're getting a stochastic one?

Validark

9 months ago

One thing that I hate about the post-ChatGPT world is that people's genuine words or hand-drawn art can be classified as AI-generated and thrown away instantly. What if I wanted to talk at a conference and used somebody's AI trigger word so they instantly rejected me even if I never touched AI at all?

This has already happened in academia, where certain professors just dump(ed) their students' essays into ChatGPT and ask it if it wrote them, failing anyone whose essay was claimed by ChatGPT. Obviously this is beyond moronic: ChatGPT doesn't have a memory of everything it's ever done, you can ask it for different writing styles, and some people naturally write pretty similarly to ChatGPT - hence the fact that ChatGPT has its signature style at all.

I've also heard of artists having their work removed from competitions out of claims that it was auto-generated, even when they have a video of them producing it stroke by stroke. It turns out, AI is generating art based on human art, so obviously there are some people out there whose stuff looks like what AI is reproducing.

owenpalmer

9 months ago

As a student, I've intentionally made my writing worse in order to protect myself from being accused of cheating with AI.

ronsor

9 months ago

That's a people problem, not an AI problem.

t0lo

9 months ago

This is silly. Intonation, the connection between the words used, and the person presenting them tell you whether what they're reading is genuine.

mks

9 months ago

I am bored of AI - it produces boring and mediocre results. Mind you, the science and engineering achievement is great: being able to produce even boring results at this level would have been considered sci-fi 10 years ago.

Maybe I am just bored of people posting these mediocre results over and over on social media and landing pages as some kind of magic. Then again, most content people produce themselves is boring and mediocre anyway. Gen AI just takes away even the last remaining bits of personality from their writing, adding an air of laziness: look at this boring piece I was too lazy to write, so I asked AI to generate it.

As the quote goes: "At some point we ask of the piano-playing dog not 'Are you a dog?', but 'Are you any good at playing the piano?'" I am eagerly waiting for the gen AIs of today to cross the uncanny valley. Even with all this fatigue, I am positive that AI can and will enable new use cases, and could be the first major UX change since the introduction of graphical user interfaces - true pixie dust sprinkled on actually useful tools.

willguest

9 months ago

Leave it up to a human to overgeneralize a problem and make it personal...

The explosion of dull copy and generic wordsmithery is, to me, just a manifestation of the utilitarian profiteering that has elevated these models to their current standing.

Let us not forget that the whole game is driven by the production of 'more' rather than 'better'. We would all rather have low-emission, high-expression tools, but that's simply not what these companies are encouraged to produce.

I am tired of these incentive structures. Casting the systemic issue as a failure of those who use the tools ignores the underlying motivation and keeps us focused on the effect and not the cause, plus it feels old-fashioned.

JimmyBuckets

9 months ago

Can you hash out what you mean by your last paragraph a bit more? What incentive structures in particular?

Devasta

9 months ago

In Star Trek, one thing that I always found weird as a kid is they didn't have TVs. Even if the holodeck is a much better experience, I imagine sometimes you would want to watch a movie and not be in the movie. Did the future not have works like No Country for Old Men or comedies like Monty Python, or even just stuff like live sports and the news?

Nowadays we know why the crew of the Enterprise all go to live performances of Shakespeare and practice musical instruments and painting themselves: electronic mediums are so full of AI slop that there is nothing worth seeing, only endless deluges of sludge.

namrog84

9 months ago

Keep in mind that most of Star Trek follows the Federation and the like. I've always considered them a mostly idealized version of society - workaholics and people who genuinely enjoy working as their pastime.

I suspect that back on their home planets, regular folks went in for a lot more random entertainment.

palata

9 months ago

That's actually a good point. I'm curious to see whether people will keep taking selfies everywhere they go once they realize you can take a selfie at home and have an AI create an image that looks like you are somewhere else.

"This is me in front of the Statue of Liberty

- Oh, are you in NYC?

- Nope, it's a snap filter"

Somehow selfies should lose value, right?

canxerian

9 months ago

I'm a software dev and I'm tired of LLMs being crowbarred into every single product I build and use, to the point where they are invariably and unequivocally used over better, cheaper and simpler solutions.

I'm also tired of people who claim to be excited by AI. They are the dullest of them all.

Spivak

9 months ago

And so the counterculture begins angst angst angst! Let's find an empty rooftop, I'll take a long drag off my vape, a swig of my forty, and you can talk about how all those people down there using AI just don't get it mannnnn. How the big corporations are lying to them and convincing them to buy API credits they don't even need!

user

9 months ago

[deleted]

kingkongjaffa

9 months ago

Generally, the people who seriously let genAI write for them without copious editing were the ones who were bad writers with poor taste anyway.

I use GenAI everyday as an idea generator and thought partner, but I would never simply copy and paste the output somewhere for another person to read and take seriously.

You have to treat these things adversarially and pick out the useful from the garbage.

It just lets people who created junk food create more junk food for people who consume junk food. But there is the occasional nugget of a good idea that you can apply to your own organic human writing.

KaiserPro

9 months ago

I too am not looking forward to the industrial-scale job disruption that AI brings.

I used to work in VFX, and one day I want to go back to it. However I suspect that it'll be entirely hollowed out in 2-5 years.

The problem is that, like typesetting, the typewriter, or the word processor, LLMs make writing text so much faster and easier.

The arguments about handwriting vs. the typewriter are quite analogous to LLM vs. pure hand. People who were good and fast at handwriting hated the typewriter. Everyone else embraced it.

The ancient greeks were deeply suspicious about the written word as well:

> If men learn this[writing], it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.

I don't like LLMs muscling in and kicking me out of things that I love. but can I put the genie back in the bottle? no. I will have to adapt.

precompute

9 months ago

There is a limit, though. Language has become worse with the popularization of social media. Now thinking will too, because most people will be content with letting machines think for them. The brain requires stimulation in the areas it wants to excel in, and this expertise informs both action and taste in those areas. If you lose one, you lose both.

eleveriven

9 months ago

Yep, there is a possibility that entire industries will be transformed, leading to uncertainty about employment

BeFlatXIII

9 months ago

> People who were good and fast at handwriting hated the typewriter. Everyone else embraced it.

My thoughts exactly whenever I see true artists ranting about how everyone hates AI art slop. It simply doesn't align with my observations of people having a great time using it. Delusional wishful thinking.

throwaway13337

9 months ago

I get it. The last two decades have soured us all on the benefits of tech progress.

But the previous decades were marked by tech optimism.

The difference here is the shift to marketing. The largest tech companies are gatekeepers for our attention.

The most valuable tech created in the last two decades was not in service of us but to manipulate us.

Previously, the customer of the software was the one buying it. Our lives improved.

The next wave of tech now on the horizon gives us an opportunity to change the course we’ve been on.

I’m not convinced there is political will to regulate manipulation in a way that does more good than harm.

Instead, we need to show a path to profitability through products that are not manipulative.

The most effective thing we can do, as developers and business creators, is to again make products aligned with our customers.

The good news is that the market for honest software has never been better. A good chunk of people are finally learning not to trust VC-backed companies that give away free products.

Generative AI provides an opportunity for tiny companies to provide real value in a new way that people will pay for.

The way forward is:

1. Do not accept VC. Bootstrap.

2. Legally bind your company to not productizing your customer.

3. Tell everyone what you’re doing.

It’s not AI that’s the problem. It’s the way we have been doing business.

zahlman

9 months ago

>The way forward is:

4. Get tragedy-of-the-commons-ed out of existence.

franciscop

9 months ago

> "Yes, I realize that thinking like this and writing this make me a Neo-Luddite in your eyes."

Not quite, I believe (and I think anyone can) both that AI will likely change the world as we know it, AND that right now it's over-hyped to a point that it gets tiring. For me this is different from e.g. NFTs, "Big Data", etc. where I only believed they were over-hyped but saw little-to-no substance behind them.

senko

9 months ago

What's funny to me is how many people protest AI as a means to generate incorrect, misleading or fake information, as if they haven't used the internet in the past 10-15 years.

The internet is chock-full of incorrect, fake, or misleading information, and has been ever since people figured out they could churn out low-quality content in between Google ads.

There's a whole industry of "content writers" who write seemingly meaningful stuff that doesn't bear close scrutiny.

Nobody has trusted product review sites for years, with people coping by adding "site:reddit" as if a random redditor can't engage in some astroturfing.

These days, it's really hard to figure out whom (in the media / on the net) to trust. AI has just thrust that long-overdue realization into the spotlight.

thewarrior

9 months ago

I’m tired of farming - Someone in 5000 BC

I’m tired of electricity - Someone in 1905

I’m tired of consumer apps - Someone in 2020

The revolution will happen regardless. If you participate you can shape it in the direction you believe in.

AI is the most innovative thing to happen in software in a long time.

And personally AI is FUN. It sparks joy to code using AI. I don’t need anyone else’s opinion I’m having a blast. It’s a bit like rails for me in that sense.

This is HACKER news. We do things because it’s fun.

I can tackle problems outside of my comfort zone and make it happen.

If all you want to do is ship more 2020s era B2B SaaS till kingdom come no one is stopping you :P

rsynnott

9 months ago

I'm tired of 3d TV - Someone in 2013 (3D TV, after a big push by the industry in 2010, peaked in 2013, going into a rapid decline with the last hardware being produced in 2016).

Sometimes, the hyped thing doesn't catch on, even when the industry really, really wants it to.

StefanWestfal

9 months ago

At no point does the author suggest that AI is not going to happen or that it is not useful. He expresses frustration with marketing, false promises, pitching of superficial solutions for deep problems, and the usage of AI to replace meaningful human interactions. In short, the text is not about the technology itself.

LunaSea

9 months ago

> The revolution will happen regardless. If you participate you can shape it in the direction you believe in

This is incredibly naïve. You don't have a choice.

vouaobrasil

9 months ago

"I'm tired of the atomic bomb" - Someone in 1945.

Oh wait, news flash, not all technological developments are good ones, and we should actually evaluate each one individually.

AI is shit, and some people having fun with it does not balance against its unusual efficacy in turning everything into shit. Choosing to do something because it's fun, without regard to the greater consequences, is the sort of irresponsibility that got human society into such a mess in the first place.

wrasee

9 months ago

For me what's important is that you are able to communicate effectively. Whether you use language tools, other tools, or even a real personal assistant: if you effectively communicate a point that is ultimately yours in the making, I expect that is what matters and what will ultimately win out.

Otherwise this is just about style. That’s important where personal creative expression is important, and in fairness to the article the author hits on a few good examples here. But there are a lot of times where personal expression is less important or even an impediment to what’s most important: communicating effectively.

The same-ness of AI-speak should also diminish as the number and breadth of the technologies mature beyond the monoculture of ChatGPT, so I’m also not too worried about that.

An accountant doesn’t get rubbished if they didn’t add up the numbers themselves. What’s important is that the calculation is correct. I think the same will be true for the use of LLMs as a calculator of words and meaning.

This comment is already too long for such a simple point. Would it have been wrong to use an LLM to make it more concise, to have saved you some of your time?

t43562

9 months ago

The problem is that we haven't invented AI that reads the crap that other AIs produce - so the burden is now on the reader to make sense of whatever other people lazily generate.

slicktux

9 months ago

I like AI… for me it’s a great way of getting the ‘average’ of a broad array of answers to a single question but without all the ads I would get from googling and searching pages. For example, when searching for times to cook or grams of sugar to add to my gallon of iced tea…or instant pot cooking times.

For more technical things STEM related it’s a good way to get a base line or direction; enough for me to draw my own conclusions or implementations…it’s like a rubber ducky I can talk to.

ryanjshaw

9 months ago

> There are no shortcuts to solving these problems, it takes time and experience to tackle them.

> I’ve been working in testing, with a focus on test automation, for some 18 years now.

OK, the first thought that came to my mind reading this: sounds like an opportunity to build an AI-driven product.

I've been using Cursor daily. I use nothing else. It's brilliant and I'm very happy. If I could have Cursor for Well-Designed Tests I'd be extra happy.

est

9 months ago

AI acts like a bad intern these days, and should be treated like one. Give it more guidance and don't make important tasks depend on it.

xena

9 months ago

My last job made me shill for AI stuff because GPUs have a lot of income potential. One of my next ones is going to make me shill for AI stuff because it makes people deal with terrifying amounts of data.

I understand why this is the case, but it's still kinda disappointing. I'm hoping for an AI winter so that I can talk about normal uses of computers again.

cult_of_we

9 months ago

reviving an old throwaway:

the last two years have been incredibly depressing in the space of dedicated hardware for doing high performance computing tasks.

all the air has been sucked out of the room because the world can’t get enough of generating more text that no one has any meaningful use for besides hyping up their product whose primary focus is how it “leverages AI”.

and in the midst of all of this, I’m seeing these same technologies dramatically accelerate problems in short fiction, science fiction and fantasy, and education.

it will be absurdly bleak if the grotesque reality we’re creating destroys the things that made me go into science & engineering in the first place…

jeswin

9 months ago

> But I’m pretty sure I can do without all that ... test cases ...

Test cases?

I did a Show HN [1] a couple of days back for a UI library built almost entirely with AI. GPT-o1 generated these test cases for me: https://github.com/webjsx/webjsx/tree/main/src/test - in minutes instead of days. The quality of the test cases is comparable to what a human would produce.

75% of the code I've written in the last year has been with AI. If you still see no value in it (especially for things like test cases), I'm afraid you haven't figured out how to use AI as a tool.

[1]: https://news.ycombinator.com/item?id=41644099

a5c11

9 months ago

That means the code you wrote must have been pretty boring and repetitive. No way AI would produce code for, say, proprietary hardware solutions. Try AI with something that isn't already on StackOverflow.

Besides, I'd rather spend hours writing code than trying to explain to a stupid bot what I want, only to reshape its output later anyway.

righthand

9 months ago

Your UI library is just a stripped-down React clone. The code wasn't generated so much as copied; these test cases and functions are identical to React's. I could have done the same thing with a "build your own React" article. This is what I don't get about the LLM hype: 99% of the examples are people claiming they invented something new with it. We had code generators before the LLM hype took off. Now we have code generators that steal work and repurpose it as something claimed to be original.

codelikeawolf

9 months ago

> The quality of test cases are comparable to what a human would produce.

This has actually been a problem for me. I spent a lot of time getting good at writing tests and learning the best approaches to testing things. Most devs I've worked with treat tests as second-class citizens. They either try to treat them like production code and over-abstract everything, which makes the tests difficult to navigate, or they dump a bunch of crap in a file, ignore any conventions or standards, and write superfluous test cases that don't provide any value (if I see one more "it renders successfully" test in a React project, I'm going to lose it).

The tests generated by these LLMs are comparable in quality to what most humans produce, which isn't saying much. Getting good at testing isn't like getting good at most things. It's sort of thankless, and when I point out issues in the quality of the tests, I imagine I'm getting some eye rolls. Who cares, they're just tests, at least we have them, right? But it's code you have to read and maintain, and it will break, and you'll have to fix it. I'm not saying I'm a testing wizard or anything like that. But I really sympathize with the author, because there's a lot of crappy test code coming out of these LLMs.

Edit: grammar

me551ah

9 months ago

People talk about 'AI' as if Stack Overflow didn't exist. Reinventing the wheel is something programmers don't do anymore. Most of the time, someone somewhere has already solved the problem you are solving. Programming used to be about finding these solutions and repurposing them for your needs. Now it has changed to asking AI the exact question, with AI acting as a better search engine.

The gains to programming speed and ability are modest at best; the only ones talking about AI replacing programmers are people who can't code. If anything, AI will increase the need for more programmers, because people rarely delete code. With the help of AI, code complexity is going to go through the roof, eventually growing enough not to fit into the context windows of most models.

archargelod

9 months ago

> Now it has changed to asking AI the exact question, with AI acting as a better search engine.

Except that you often get wrong answers. That's not too bad when the answer is obviously wrong or you already know it. It is bad, really bad, when you're a noob trying to ask AI about stuff you don't know yet. How would you be able to discern a hallucination or statistical bias from truth?

It is an inherent problem of LLMs, and no amount of progress will be able to solve it.

And it's only gonna get worse, with human information rapidly being consumed and regurgitated at 100x the volume. In 10 years there will be no Google; there won't be the need to find a written article. Instead, you will generate a new one in a couple of clicks. And we will treat it as truth, because there might as well not be any.

snowram

9 months ago

I quite like some parts of AI. Ray reconstruction and supersampling methods have been getting incredible, and I can now play games at twice the frames per second. On the scientific side, meteorological prediction and protein folding have made formidable progress thanks to it. Too bad this isn't the side of AI that is in the spotlight.

heystefan

9 months ago

Not sure why this is front page material.

The thinking is very surface level ("AI art sucks" is the popular opinion anyway) and I don't understand what the complaints are about.

The author is tired of AI and likes movies created by people. So just watch those? It's not like we are flooded with AI movies/music. His social network shows dull AI-generated content? Curate your feed a bit and unfollow those low effort posters.

And in the end, if AI output is dull, there's nothing to be afraid of -- people will skip it.

monkeydust

9 months ago

AI is not just GenAI; ML sits underneath it (supervised, unsupervised), and that has genuinely delivered value for the clients we service (financial tech) and in my normal life (e.g. photo search, screen grab to text, book recommendations).

As for GenAI, I keep going back to expectation management: it's very unlikely to give you the exact answer you need (and if it does, then your job longevity is questionable), but it can help accelerate your learning, thinking and productivity.

falcor84

9 months ago

> ... it's very unlikely to give you the exact answer you need (and if it does, then your job longevity is questionable)

Experimenting with o1-preview, it quite often gives me the exact answer I need on the first try, and I'm 100% certain that my job longevity is questionable.

throwaway123198

9 months ago

I'm bored of IT. Software is boring, AI included. None of this feels like progress. We've automated away white collar work...but we also acknowledge most white collar work is busy work that's considered a bullcr*p job. We need to get back to innovation in manufacturing, materials etc. i.e. the real world.

EMM_386

9 months ago

The one use of AI that annoys me the most is Google trying to cram it into search results.

I don't want it there, I never look at it, it's wasting resources, and it's a bad user experience.

I looked around a bit but couldn't see if I can disable that when logged in. I should be able to.

I don't care what the AI says ... I want the search results.

tim333

9 months ago

uBlock Origin's "block element" feature seems to work (element ##.h7Tj7e).

I quite like the thing personally.
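
For anyone who wants the same thing without hunting through the picker each time: a static cosmetic filter in uBlock Origin's "My filters" tab should do it. The class below is the one mentioned above; Google's generated class names change over time, so expect to re-pick the element if it stops matching.

    google.com##.h7Tj7e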

ricardobayes

9 months ago

I personally don't see AI as the new Internet, as some claim it to be. I see it more as the new 3D-printing.

seydor

9 months ago

> same massive surge I’ve seen in the application of artificial intelligence (AI) to pretty much every problem out there

I have not. Perhaps early-stage programming is the most 'applied' AI so far, but there is still not a single major AI movie, and there are no consumer robots.

I think it's way too early to be tired of it

zombiwoof

9 months ago

The most depressing thing for me is the rush and the all-out hype. I mean, Apple not only renamed AI "Apple Intelligence", but if you go INTO an Apple Store, its banner is everywhere, even as a wallpaper on the phones with the "glow".

But guess what isn't there? An actually shipping IMPLEMENTATION. It's not even ready yet but the HYPE is so overblown.

Steve Jobs is crying in his grave how stupid everyone is being about this.

Smithalicious

9 months ago

Do people really view so much content of questionable provenance? I read a lot and look at a lot of art, but what I read and look at is usually shown to me by people I know, created by authors and artists with names and reputations. As a result I basically never see LLM-written text and only occasionally AI art, and when I do see AI art, at least it was carefully guided by a real person with an artistic vision (the deep end of AI image generation involves complex tooling and a lot of work!) and is easily identified as such.

All this "slop apocalypse" the-end-is-neigh stuff strikes me as incredibly overblown, affecting mostly only "open web" mass social media platforms which were already 90% industrially produced slop for instrumental purposes anyways.

alentred

9 months ago

I am tired of innovations being abused. AI itself is super exciting and fascinating. But, it being abused -- to generate content to drive more ad-clicks, or the "Now better with AI" promise on every landing page, etc. etc. -- that I am tired of, yes.

thih9

9 months ago

Doesn’t that kind of change follow the overall trend?

We continuously shift to higher-level abstractions, trading reliability for accessibility. We went from binary to assembly, then to garbage collection, and on to using Electron almost everywhere; AI seems like yet another step.

mark_l_watson

9 months ago

Nice thoughts. Since 1982 half my work has been in one of the fields loosely called AI, and the other half in more straight-up software development. After mostly doing deep learning and now LLMs for almost ten years, I miss conventional software development.

When I was swimming this morning I thought of writing an RDF data store with partial SPARQL support in Racket or Common Lisp - basically trading a year of my time doing straight-up design and coding for something very few people would use.

I get very excited by shiny new things like advance voice interface for ChatGPT and NoteBookLM, both fine product ideas and implementations, but I also feel some general fatigue.

sensanaty

9 months ago

What I'm really tired of is people completely misrepresenting the Luddites as if they were simply an anti-progress or anti-technology cult and nothing else. It's kind of hilariously sad that the propaganda of the time managed to win out over the genuine concerns the Luddites had about inhumane working environments and conditions.

It's very telling that the rabid AI sycophants are painting anyone who has doubts about the direction AI will take the world as some sort of anti-progress lunatic, calling them luddites despite not knowing the actual history involved. The delicious irony of their stances aligning with the people who were okay with using child labor and mistreating workers en-masse is not lost on me.

My hope is that AI does happen, and that the first people to rot away because of it are exactly the AI sycophants hell-bent on destroying everything in the name of "progress", AKA making some rich psychopaths like Sam Altman unfathomably rich and powerful to the detriment of everyone else.

A good HN thread on the topic of luddites, as it were: https://news.ycombinator.com/item?id=37664682

CatWChainsaw

9 months ago

Thankfully, even here I've seen more faithful discussion of the Luddites and more people are willing to bring up their actual history whenever some questionably-uninformed techbro here uses the typical pejorative insult.

eleveriven

9 months ago

AI is a tool, and like any tool, it's only as good as how we choose to use it.

vouaobrasil

9 months ago

No, that is wrong. We can't "choose", because too many people act on instinct. And people always have the instinct to use new technology to gain incremental advantages over others, which in turn puts pressure on everyone to use it. That prisoner's-dilemma dynamic means that without a firm and larger guiding moral philosophy we really can't choose; instinct takes over choice. In other words, the way technology is used in modern society is not a matter of choice but is largely autonomous, beyond human choice. (Of course, a few individuals will choose, but the average effect is likely to be negative.)

user

9 months ago

[deleted]

unraveller

9 months ago

If you go back to the earliest months of the audio & visual recording medium it was also called uncanny, soulless and of dubious quality compared to real life. Until it wasn't.

I don't care how many repulsive AI slop video clips get made or promoted for shock value. Today is day 1 and day 2 looks far better with none of the parasocial celebrity hangups we used as short-hand for a quality marker - something else will take that place.

bane

9 months ago

I feel sorry for the young hopeful data scientists who got into the field when doing data science was still interesting and 95% of their jobs hadn't turned over into tuning the latest LLM to poorly accomplish some random task an executive thought up.

I know a few of them and once they started riding the hype curve for real, the luster wore off and they're all absolutely miserable in their jobs and trying to look for exits. The fun stuff, the novel DL architectures, coming up with clever ways to balance datasets or label things...it's all just dried up.

It's even worse than the last time I saw people sadly taking the stairs down the other end of the hype cycle when bioinformatics didn't explode into the bioeconomy that had been promised or when blockchain wasn't the revolution in corporate practices that CIOs everywhere had been sold on.

We'll end up with this junk everywhere eventually, and it'll continue to commoditize, and that's why I'm very bearish on companies trying to make LLMs their sole business driver.

AI is a feature, not the product.

zone411

9 months ago

The author is in for a rough time in the coming years, I'm afraid. We've barely scratched the surface with AI's integration into everything. None of the major voice assistants even have proper language models yet, and ChatGPT only just introduced more natural, low-latency voices a few days ago. Software development is going to be massively impacted.

BoGoToTo

9 months ago

My worry is what happens once large segments of the population become unemployable.

pilooch

9 months ago

By AI here is meant generative systems relying on neural networks and semi/self-supervised training algorithms.

It's a reduction of what AI is as a computer science field and even of what the subfield of generative AI is.

On a positive note, generative AI is a malleable, statistically-grounded technology with a large applicative scope. At the moment the generalist commercial and open models are "consumed" by users, developers, etc. But there's a trove of forthcoming, personalized use cases and ideas to come.

It's just that we are still more in a contemplating phase than a true building phase. As a machine learnist myself, I recently replaced my spam filter with a custom fine-tuned multimodal LLM that reads my emails as pure images. And this is the early, early beginning; imagination and local personalization will emerge.

So I'd say, being tired of it now is missing much of what comes later. Keep the good spirit on and think outside the box; relax too :)
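
For the curious, a minimal sketch of the idea. Everything in it is illustrative: the hosted vision model stands in for my custom fine-tuned local one, and email.png is a hypothetical image rendering of an incoming email.

    import base64
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    # "email.png" is a hypothetical image rendering of an incoming email
    with open("email.png", "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in for a fine-tuned multimodal model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Classify this email as SPAM or HAM. One word only."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)  # "SPAM" or "HAM"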

layer8

9 months ago

> I recently replaced my spam filter with a custom fine-tuned multimodal LLM that reads my emails as pure images.

That doesn’t sound very energy efficient.

AlienRobot

9 months ago

I'm tired of technology.

I don't think there has ever been a single tech news that brought me joy in all my life. First I learned how to use computers, and then it has been downhill ever since.

Right now my greatest joy is in finding things that STILL exist rather than new things, because the things that still exist are generally better than anything new.

syncr0

9 months ago

Reminds me of the way the author of "Zen and the Art of Motorcycle Maintenance" takes care of his leather gloves, and how they stay with him on the order of decades.

richrichardsson

9 months ago

What frustrates me is the bandwagoning, and thus the awful homogeneity in all social media copy these days. Since it seems everyone is using an LLM to generate their copywriting, 99.999% of products will "elevate" something or other, and there are annoying emojis scattered throughout the text.

postalcoder

9 months ago

i’m at the point where i don’t trust any markdown-formatted text. it’s actually become an anti-signal, which is very sad because i used to consider it a signal of partial technical literacy.

CodeCompost

9 months ago

We're all tired of it, but to ignore it is to be unemployed.

kunley

9 months ago

With all due respect, that seems like a cliché, repeated maybe because others are already repeating it.

Working in IT operations (mostly), I haven't seen literally any case of someone's job in danger because of not using "AI".

sph

9 months ago

Depends on where you are in your career. With 18 years of experience, consulting for tech companies, I can afford to be tired of AI. I don't get paid to write boilerplate code, and avoiding anyone knocking at the door with yet another great AI-powered idea makes commercial sense, just as I ignored everyone wanting to build the next blockchain product 5 years ago, with no major loss of income.

Also, running a bootstrapped business, I have bigger fish to fry than playing mentor to Copilot so it can write a React component, or generating bullshit copy for my website.

I'm not sure we need more FUD saying that the choice is between AI or unemployment.

Sateeshm

9 months ago

The thing with AI tools is that there is nothing to learn. That's the whole point of the AI tools. So not much of an advantage in the job market.

snickerer

9 months ago

So are all those cab drivers who ignored autonomous driving now unemployed?

kasperni

9 months ago

> We're all tired of it,

You’re feeling tired of AI, but let’s delve deeper into that sentiment for a moment. AI isn’t just a passing trend—it’s a multifaceted tool that continues to elevate the way we engage with technology, knowledge, and even each other. By harnessing the capabilities of artificial intelligence, we allow ourselves to explore new frontiers of creativity, problem-solving, and efficiency.

The interplay between human intuition and AI’s data-driven insights creates a dynamic that enriches both. Rather than feeling overwhelmed by it, imagine the opportunities—how AI can shoulder the burdens of mundane tasks, freeing you to focus on the more nuanced, human elements of life.

/s

drillsteps5

9 months ago

I've always thought that "actual" AI (I guess it's mostly referred to as "general AI" now) will require a feedback loop and continuous unsupervised learning. As in: the system decides on an action, executes it, receives feedback, assesses the situation in relation to its goals (positive and negative reinforcement), corrects (adjusts the network), and the cycle repeats. This is not the case with current generative AI, where the network is trained (reinforcement learning) and then a snapshot of the trained network is used to produce output. This works for a limited number of applications, but it will never produce general AI, because there's no feedback loop. So it's a bit of a gimmick.
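
To make the loop concrete, a toy sketch (the environment and numbers are entirely made up): the agent decides, executes, receives feedback, and corrects its value table - the continuous adjustment step that a frozen snapshot of a trained network never performs.

    import random

    class ToyEnv:
        """Hypothetical two-state world where action "b" pays off and "a" does not."""
        def reset(self):
            return 0

        def step(self, action):
            reward = 1.0 if action == "b" else -1.0
            return random.choice([0, 1]), reward  # next state, feedback

    def agent_loop(env, lr=0.1, steps=1000):
        # action-value table: the "network" that keeps being adjusted
        values = {s: {"a": 0.0, "b": 0.0} for s in (0, 1)}
        state = env.reset()
        for _ in range(steps):
            action = max(values[state], key=values[state].get)  # decide
            next_state, reward = env.step(action)               # execute, get feedback
            values[state][action] += lr * (reward - values[state][action])  # correct
            state = next_state                                  # repeat
        return values

    print(agent_loop(ToyEnv()))  # "b" ends up valued higher in both states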

socksy

9 months ago

> Or are you just going to read prompt results out loud for 40 minutes, too? I hope not, but we will not take the chance.

I did actually attend a talk at a conference a few years ago where someone did this. It wasn't with LLMs, but with a Markov chain, and it was art. A bizarre experience, but unfortunately not recorded (at the request of the speaker).

Obviously the big difference was that this was not kept secret at all (indeed, some of the generated prompts included sections where he was instructed to show his speaker notes to the audience, where we could see the generated text scroll up the screen).

user

9 months ago

[deleted]

pech0rin

9 months ago

As an aside, it's really interesting how the human brain can so easily read an AI essay and realize it's AI. You would think that with the vast corpus these models were trained on, there would be a more human-sounding voice.

Maybe it's overfitting, or maybe just the way the models work under the hood, but any time I see AI-written stuff on Twitter, Reddit or LinkedIn it's so obvious it's almost disgusting.

I guess it's just the brain being good at pattern matching, but it's crazy how fast we have adapted to recognize this.

Jordan-117

9 months ago

It's the RLHF training to make them squeaky clean and preternaturally helpful. Pretty sure without those filters and with the right fine-tuning you could have it reliably clone any writing style.

infinitifall

9 months ago

Classic survivorship bias. You simply don't recognise the good ones.

Al-Khwarizmi

9 months ago

Everyone I know claims to be able to recognize AI text, but every paper I've seen where that ability is A/B tested says that humans are pretty bad at this.

carlmr

9 months ago

>Maybe it's overfitting or maybe just the way models work under the hood

It feels more like averaging or finding the median to me. The writing style is just very unobtrusive. Like the average TOEFL/GRE/SAT essay style.

Maybe that's just what most of the material looks like.

chmod775

9 months ago

These models are not trained to act like a single human in a conversation, they're trained to be every participant and their average.

Every instance of a human choosing not to engage or speak about something - because they didn't want to or are just clueless about the topic, is not part of their training data. They're only trained on active participants.

Of course they'll never seem like a singular human with limited experiences and interests.

amelius

9 months ago

Maybe because the human brain gets tired and cannot write at the same quality level all the time, whereas an AI can.

Or maybe it's because of the corpus of data that it was trained on.

Or perhaps because AI is still bad at any kind of humor.

Janicc

9 months ago

Without any sort of AI we'd probably be left with the most exciting yearly releases being 3-5% performance increases in hardware (while being 20% more expensive, of course), the 100,000th JavaScript framework, and occasionally a new Windows which everybody hates. People talk about how population collapse is going to mess up society, but I think complete stagnation in new consumer goods and technology is just as likely to do the deed. Maybe AI will fail to improve from this point, but that's a dark future to imagine. Especially if it lasts for the next 50 years.

siffin

9 months ago

Neither of those things will end society; they aren't even issues in the grand scheme of things.

Climate change and biosphere collapse, on the other hand, are already ending society and definitely will, no exceptions possible - unless someone is capable of performing several miracles.

nasaeclipse

9 months ago

At some point, I wonder if we will go more analog again. How do we know if a book was written by a human? Simple, he used a typewriter or wrote it by hand!

Photos? Real film.

Video.... real film again lol.

I think that may actually happen at some point.

famahar

9 months ago

I'm already starting to embrace it. Content overload on subscription platforms makes it hard for me to choose. My phone being an everything machine always distracts me. I'm also tired of algorithmic recommendations. I bought a cassette player and find a lot of joy discovering music at record shops and browsing around Bandcamp for cassettes in genres I like.

shswkna

9 months ago

The elephant in the room is this question:

What do we value? What is our value system made up of?

This is, in my opinion, the Achilles' heel of the current trajectory of the West.

We need to know what we are doing it for. Like the OP said, he is motivated by the human connectedness that art, music and the written word inspire.

On the surface, it seems we value the superficial slickness of LLM-produced content more.

This is a facade, like so many other superficial artifacts of our social life.

Imperfect authenticity will soon (or sometime in the future) become a priceless ideal.

shahzaibmushtaq

9 months ago

Over the last few years, AI has become more common than HI generally, though not professionally. Professionals know the limits and scope of their work and responsibilities; AI does not.

A few days ago, I visited a portfolio website and immediately realized that its English text was written with the help of AI or some online helper tools.

I love the idea of brainstorming with AI, but copy-pasting anything it throws at you keeps you from adding creativity to the process of making something good.

I believe using AI must complement HI (or IQ level) rather than mock it.

resters

9 months ago

AI (LLMs in this case) dramatically reduces the value of human conscientiousness, memory, and verbal and quantitative fluency.

So what's left for humans?

We very likely won't have as many human software testers or software engineers. We'll have even fewer lawyers and other "credentialed" knowledge worker desk jockeys.

Software built by humans supposedly entails writing code that has not already been written -- in practice, by writing a lot of code that probably has already been written and "linking" it together, etc. When's the last time most of us wrote a truly novel algorithm?

In the AI powered future, software will be built by humans herding AIs to build it. The AIs will do more of the busy work and the humans will guide the process. Then better AIs will be more helpful at guiding the process, etc.

Eventually, the thing that will be rewarded is truly novel ideas and truly innovative thinking.

AIs will make various types of innovative thinking less valuable and various types more valuable, just like any technology has done.

In the past, humans spent most of their brain power trying to obtain their next meal. It's very cynical to think that AI removing busy work will somehow leave humans with nothing meaningful to do, no purpose. Surely it will unlock the best of human potential once we don't have to use our brains to do repetitive and highly pattern-driven tasks just to put food on the table.

When is the last time any of us paid a lawyer to do something truly novel? They dig up boilerplate verbiage, follow standard processes, rinse, repeat, all for $500+ per hour.

Right now we have "manual work" and "knowledge work", broadly speaking, and both emphasize something that is being produced by the worker (a construction project, a strategic analysis, a legal contract, a diagnosis, a codebase, etc.)

With AI, workers will be more responsible for outcomes and less rewarded for simply following a procedure that an LLM can do. We hire architects with visual / spatial design skills rather than asking a contractor to just create a living space with a certain amount of square feet. The emphasis in software will be less on the writing of the code and more on the impact of the code.

amradio

9 months ago

We can’t compare AI with an expert. There’s going to be little value there. AI is about as capable as your average college grad in any subject.

What makes AI revolutionary is what it does for the novice. They can produce results they normally couldn’t. That’s huge.

A guy with no development experience can produce working non-trivial software. And in a fraction of the time your average junior could.

And this phenomenon happens across domains. All of a sudden the bottom of the skill pool is 10x more productive. That’s major.

jillesvangurp

9 months ago

I'm actually excited about AI, with a dose of realism. I benefit from LLMs on a daily basis now. There are a lot of challenges with LLMs, but they are useful tools and we haven't really seen much yet. It's only been two years since ChatGPT was released. And mostly we're still consuming this stuff via chat UIs, which strikes me as suboptimal and is something I hope will change soon.

The increases in context size are helping a lot. The step improvement in reasoning abilities and quality of answers is amazing to watch. I'm currently using ChatGPT o1-preview a lot for programming stuff. It's not perfect, but I can use a lot of what it generates, and this is saving me a lot of time lately. It still gets stuff wrong, and there's a lot of stuff it doesn't know.

I also am mildly addicted to perplexity.ai. Just a wonderful tool and I seem to be getting in the habit of asking it about anything that pops into my mind. Sometimes it's even work related.

I get that people are annoyed with all the hyperbolic stuff in the media on this topic. But at the same time, the trends here are pretty amazing. I'm running the 3B-parameter Llama 3.2 model on a freaking laptop now: a nice two-year-old M1 with only 16GB. It's not going to replace bigger models for me, but I can see a few use cases for running it locally.
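
For anyone who wants to try the same local setup, a sketch using llama-cpp-python; the GGUF filename is an assumption, use whatever quantized Llama 3.2 3B build you have downloaded:

    # pip install llama-cpp-python
    from llama_cpp import Llama

    # path is an assumption: any local quantized Llama 3.2 3B GGUF file
    llm = Llama(model_path="Llama-3.2-3B-Instruct-Q4_K_M.gguf", n_ctx=4096)

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Name three uses for a local LLM."}]
    )
    print(out["choices"][0]["message"]["content"])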

My view is very simple. I'm a software developer. I grew up a few decades ago, before there was any internet. I had no clue what a computer even was until I was in high school. Things like Knight Rider, Star Trek, Buck Rogers, Star Wars, etc. all featured forms of AI that are now more or less becoming science fact. C-3PO is pretty dumb compared to ChatGPT, actually. You could build something better and more useful these days; that would mostly be an arts-and-crafts project at this point. No special skills required. Just use an LLM to generate the code you need. Nice project for some high school kids.

Which brings me to my main point. We're the old generation. Part of being old is getting replaced by younger people. Young people are growing up with this stuff. They'll use it to their advantage, and they are not going to be held back by old-fashioned notions about the way things should work according to us old people. There are Luddites in every generation. And then they grow tired and old, and they die off. I have no ambition to become irrelevant like that.

I'm planning to keep up with young people as long as I can. I'll have to give that up at some point but not just yet. And right now that includes being clued in as much as I can about LLMs and all the developer plumbing I need to use them. This stuff is shockingly easy. Just ask your favorite LLM to help you get started.

redandblack

9 months ago

Having spent the last decade hearing about trustless trust, we are now faced with a decade of dealing with no trust whatsoever.

We started with don't-trust-the-government, moved on to don't-trust-big-media and then don't-trust-any-media, and eventually arrived at a no-trust society. Lovely.

Really, I'm just waiting for the AI feedback loop to converge on itself. Get this over with soon, please.

fulafel

9 months ago

Coming from a testing specialist - the facts are right but the framing seems negatively biased. For the generalist who wants to get some Playwright tests up, the low hanging fruit is definitely helped a lot by generative AI. So I emphatically disagree with "there are no shortcuts".
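
To make "low hanging fruit" concrete: tests like the sketch below are the kind an LLM will happily scaffold from a one-line description (the URL and assertion here are placeholders):

    # pip install playwright && playwright install chromium
    from playwright.sync_api import sync_playwright

    def test_homepage_loads():
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto("https://example.com")         # placeholder URL
            assert "Example Domain" in page.title()  # placeholder assertion
            browser.close()

    if __name__ == "__main__":
        test_homepage_loads()

The deeper work - deciding what is worth testing and why - is exactly the part that still takes time and experience.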

izwasm

9 months ago

I'm tired of people throwing ChatGPT at everything they can just to say they use AI, even when it's a useless feature.

user

9 months ago

[deleted]

semiinfinitely

9 months ago

A software tester tired of AI? Not surprising given that this is like the first job which AI will replace.

yibers

9 months ago

Actually, testing by humans will become much more important. AI may make subtle mistakes that will require more extensive testing, by humans.

ETH_start

9 months ago

That's fine, he can stick with his horse and buggy. Cognition is undergoing its transition to automobiles.

warvair

9 months ago

90% of everything is crap. Perhaps AI will make that 99% in the future. OTOH, maybe AI will slowly convert that 90% into 70% crap & 20% okay. As long as more stuff that I find good gets created, regardless of the percentage of crap I have to sift through, I'm down.

sedatk

9 months ago

I remember being awestruck at the first avocado-chair images DALL-E generated. So many possibilities ahead. But we ended up with oversaturated, color-soup, greasy, smooth pictures everywhere because, as it turns out, beauty is in the eye of the prompter.

WillyWonkaJr

9 months ago

I asked ChatGPT once if its generated images were filtered to reduce realism, and it said they were. Maybe we don't like the safety filter they apply to all images.

Meniceses

9 months ago

I love AI.

In comparison to a lot of other technologies, we actually see jumps in quality left and right, great demos, and new things that are really helpful.

Its fun to watch the AI news because there is something relevant new happening.

I'm worried about the impact of AI, but this is a billion times better than the last 10 years, which were basically just cryptobros, NFTs and blockchain stuff that amounted to fraud.

It's not just GenAI stuff: we're talking about blind people getting better help through image analysis, about AlphaFold, about LLMs being impressive as hell, about the research currently happening.

And yes, I also already see benefits in my job and in my startup.

bamboozled

9 months ago

I'm truly asking in good faith here, because I don't know: what has AlphaFold actually helped us achieve?

lvl155

9 months ago

What really gets me about the AI space is that it's going the way of the front-end development space. I also hate that Facebook/Meta is seemingly the only one doing the heavy lifting in the open. It's great so far, but I just don't trust them in the end.

nuc1e0n

9 months ago

I tried to learn AI frameworks. I really did. But I just don't care about them. AI as it is today just isn't useful to me. Databases and search engines are reliable. The output of AI models is totally unreliable.

visarga

9 months ago

> I’m pretty sure that there are some areas where applying AI might be useful.

How polite; everyone is sure AI might be useful in other fields, just not their own.

> people are scared that AI is going to take their jobs

Can't be both true - AI being not really useful, and AI taking our jobs.

BodyCulture

9 months ago

I would like to know how AI helps us solve the climate crisis. I have read some articles about weather predictions getting better with the help of AI, but that is just monitoring; I would like to see more actual solutions.

Do you have any recommendations?

Thanks!

cwmma

9 months ago

It is doubtful AI will be a net positive with regards to climate due to how much electricity it uses.

jordigh

9 months ago

Uhh... it makes it worse.

We don't have all the data, because US companies are not generally required by law to disclose their emissions. But for those that do, it's been disastrous. Google was on track for net zero, but its recent investment and push into AI has increased its emissions by 48%.

https://www.cnn.com/2024/07/03/tech/google-ai-greenhouse-gas...

sovietmudkipz

9 months ago

I am tired and hungry…

The thing I’m tired of is elites stealing everything under the sun to feed these models. So funny that copyright is important when it protects elites but not when a billion thefts are committed by LLM folks. Poor incentives for creators to create stuff if it just gets stolen and replicated by AI.

I'm hungry for more lawsuits. The perpetrators of the biggest theft in human history, this gang of thieves, should be held to account. I want a waterfall of lawsuits to take back what's been stolen. It's in the public's interest to see this happen.

Palmik

9 months ago

The only entities that will win with these lawsuits are the likes of Disney, large legacy news media companies, Reddit, Stack Overflow (who are selling content generated by their users), etc.

Who will also win: Google, OpenAI and other corporations that enter exclusive deals, that can more and more rely on synthetic data, that can build anti-recitation systems, etc.

And of course the lawyers. The lawyers always win.

Who will not win:

Millions of independent bloggers (whose content will be used)

Millions of open source software engineers (whose content will be used against the licenses, and used to displace their livelihood), etc.

The likes of Google and OpenAI entered the space by building on top of the work of the above two groups. Now they want to pull up the ladder. We shouldn't allow that to happen.

Kiro

9 months ago

I would never have imagined hackers becoming copyright zealots advocating for lawsuits. I must be getting old but I still remember the Pirate Bay trial as if it was yesterday.

Lichtso

9 months ago

Lawsuits based on what? Copyright?

People crying for copyright in the context of AI training don't understand what copyright is, how it works and when it applies.

How they think copyright works: when you take someone's work as inspiration, everything you produce from it counts as derivative work.

How copyright actually works: the input is irrelevant, only the output matters. Derivative work is what explicitly contains or resembles the underlying work, no matter whether it was actually based on that work or the resemblance is just happenstance / coincidence.

Thus AI models are safe from copyright lawsuits as long as they filter out any output which comes too close to known material. Everything else is fine, even if the model was explicitly trained on commercial copyrighted material only.

In other words: The concept of intellectual property is completely broken and that is old news.
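
(To illustrate what the output filtering mentioned above could look like, a toy n-gram overlap check - the window size and threshold are made up, and production systems are far more sophisticated:)

    def ngrams(text, n=8):
        words = text.split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def too_close(candidate, corpus_index, n=8, threshold=0.2):
        # Flag model output whose n-gram overlap with known material is too high.
        grams = ngrams(candidate, n)
        if not grams:
            return False
        return len(grams & corpus_index) / len(grams) >= threshold

    # corpus_index would be built offline over the protected corpus, e.g.:
    # corpus_index = set().union(*(ngrams(doc) for doc in protected_docs))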

fallingknife

9 months ago

Copyright law is intended to prevent people from stealing the revenue stream from someone else's work by copying and distributing that work in cases where the original is difficult and expensive to create, but easy to make copies of once created. How does an LLM do this? What copies of copyrighted work do they distribute? Whose revenue stream are they taking with this action?

I believe that all the copyright suits against AI companies will be total failures, because I can't come up with an answer to any of those questions.

DoctorOetker

9 months ago

Here is a business model for copy right law firms:

Use source-aware training on the same datasets as used in LLM training, plus copyrighted content. Now the LLM can report not just what it thinks is most likely, but also which source document(s) provided specific content. Then you can consult commercially available LLMs, detect copyright infringements, and identify copyright holders - extracting perpetrators and victims at scale. To ensure indefinite exploitation, only sue commercially successful LLM providers, so there is a constant flux of growing small LLM providers taking up the niche freed by large LLM providers being sued empty.

defgeneric

9 months ago

Perhaps what we should be pushing for is a law that would force full disclosure regarding the training corpus and require a curated version of the training data to be made available. I'm sure there would be all kinds of unintended consequences of a law like that but maybe we'd be better off starting from a strong basis and working out those exceptions. While billions have been spent to train these models, the value of the millions of human hours spent creating the content they're trained on should likewise be recognized.

IanKerr

9 months ago

It's been pretty incredible watching these companies siphon up everything under the sun under the guise of "training data" with impunity. These same companies will then turn around and sic their AIs on places like Youtube and send out copyright strikes via a completely automated system with loads of false-positives.

How is it acceptable to allow these companies to steal all of this copyrighted data and then turn around and use it to enforce their copyrights in the most heavy-handed manner? The irony is unbelievable.

visarga

9 months ago

> I’m hungry for more lawsuits. The biggest theft in human history

You want to own abstract ideas because AI can rephrase any specific expression. But that is antithetical to creativity.

masswerk

9 months ago

> The thing I’m tired of is elites stealing everything under the sun to feed these models.

I suggest applying the same to property law: take a photo and obtain instant and unlimited rights of use. Things may change faster than we imagine…

csomar

9 months ago

There is no copyright with AI unless you want to implement the same measures for humans too. I am fine with it as long as we at least get open-weights. This way you kill both copyright and any company that's trying to profit out of AI.

repelsteeltje

9 months ago

I like the stone soup narrative on AI. It was mentioned in a recent Complexity podcast, I think by Alison Gopnik of SFI. It's analogous to the Pragmatic Programmer story about stone soup. Paraphrasing:

Basically you start with a stone in a pot of water - a neural-net technology that does nothing meaningful but looks interesting. You say: "The soup is almost done, but it would taste better given a bunch of training data." So you add a bunch of well-curated docs. "Yeah, that helps, but how about adding a bunch more?" So you insert some blogs, copyrighted materials, scraped pictures, Reddit, and Stack Exchange. And then you ask users to interact with the models to fine-tune them, contributing priming to make the output look as convincing as possible.

Then everyone marvels at your awesome LLM - a simple algorithm. How wonderful this soup tastes, given that the only ingredients are stones and water.

infecto

9 months ago

I suspect the greater issue is that copyright is not always clear in this area. I am also not sure how you prevent "elites" from using the information while still allowing the "common" person access to it.

jokethrowaway

9 months ago

It's the other way round. The little guys will never win; it will just be a money transfer from one large corp to another.

We should just scrap copyright and everybody plays a fair game, including us hackers.

Sue me for breach of contract in civil court, for damages, because I shared your content; don't send the police and get me jailed directly.

I had my software cracked and stolen and I would never go after the users. They don't have any contract with me. They downloaded some bytes from the internet and used it. Finding whoever shared the code without authorization is hard and even so, suing them would cost me more than the money I'm likely to get back. Fair game, you won.

It's a natural market "tax" on selling a lot of copies and earning passively.

drstewart

9 months ago

>elites stealing everything

> a billion thefts

>The biggest theft

>what’s been stolen

I do like how the internet has suddenly acknowledged that pirating is theft and torrenting IS a criminal activity. To your point, I'd love to see a massive operation to arrest everyone who has downloaded copyrighted material illegally (aka stolen it), in the public interest.

forinti

9 months ago

Capitalism started by putting up fences around land to kick people out and keep sheep in. It has been putting fences around everything it wants and IP is one such fence. It has always been about protecting the powerful.

IP has had ample support because the "protect the little artist" argument is compelling, but it is just not how the world works.

makin

9 months ago

I'm sorry if this is strawmanning you, but I feel you're basically saying it's in the public's interest to give more power to Intellectual Property law, which historically hasn't worked out so well for the public.

AI_beffr

9 months ago

ok the "elites" have spent a lot of money training AI but have the "commoners" lifted a single finger to stop them? nope! its the job of the commoners to create a consensus, a culture, that protects people. so far all i see from the large group of people who are not a part of the elite is denial about this entire issue. they deny AI is a risk and they dont shame people who use it. 99.99% of the population is culpable for any disaster that befalls us regarding AI.

uhtred

9 months ago

We need a revolution.

bschmidt1

9 months ago

Same here, hungry, nay thirsty, for prompt-2-film.

"output a 90 minute harry potter sequel to the final film starring the original actors plus Tom Hanks"

fallingknife

9 months ago

I'm not. I think it's awesome and I can't wait to see what comes out next. And I'm completely OK with all of my work being used to train models. Bunch of luddites and sour grapes around here on HN these days.

elpocko

9 months ago

Same here! Amazing stuff that I have waited for my entire life, and I won't let luddite haters ruin it for me. Their impotent rage is tiring but in the end it's just one more thing you have to ignore.

amiantos

9 months ago

There are _a lot_ of poor-quality engineers out there who understand that on some level they are committing fraud by spinning their wheels all day shifting CSS values around on a React component while collecting large paychecks. I think it's only natural that all of those engineers are terrified by the prospect of some computer being capable of doing their job quickly and efficiently and replacing them. Those people are crying so loudly that it's encouraging otherwise normal people to start jumping on the anti-AI bandwagon too, because their voices are so loud people can't hear themselves think critically anymore.

I think passionate and inspired engineers who love their job and have very solid soft skills and experience working deeply on complex software projects will always have a position in the industry, and people like that are understandably very enthusiastic about AI instead of being scared of it.

In other words, it is weird how bad the status quo was, until we got something that really threatened the status quo; now a lot of the people who wanted to tear it all down are desperately trying to stop everything from changing. The sentiment on the internet has gone in a weird direction, but it's all about money deep down. This hypothetical new status quo brought on by AI seems to be wedded to fears of less money, thus abject terror masquerading as "I'm so bored!" posturing.

You see this in art circles, where established artists are willing to embrace AI, but it's the small-time aspiring bedroom artists who have not achieved any success who are all over Twitter denouncing AI art as soulless and terrible, while the real artists are too busy using any tool available to make art, or are just making art because they want to make art and aren't concerned with the fear-mongering.

Kiro

9 months ago

You're getting downvoted, but I agree with your last sentence — and not just about AI. The amount of negativity here regarding almost everything is appalling. Maybe it's rose-tinted nostalgia but I don't remember it being like this a few years ago.

user

9 months ago

[deleted]

scotty79

9 months ago

AI has so far just been trained to generate corporate bs speak in a corporate bs format. That's why it's tiring. A more unique touch in communication will come later, as fine-tunes and LoRAs (if possible) of those models are shared.

AlexDragusin

9 months ago

I beg to differ: one has to use the tool properly towards the desired results, and for that, one has to understand the capabilities and limitations of the tool. For instance, I got my Plumber AI (a local Llama 3.1 8B) to give his opinion on AI. I think it did a pretty good job, and it's certainly not corporate :)

"Blimey, I've been thinkin' about this AI business a lot lately. I mean, I'm a plumber, I fix pipes, I don't get involved in all this fancy tech stuff, but it's hard to ignore. I was talkin' to a mate who's into that sort of thing and he's got me worried. He says these AI systems are gonna be takin' over, doin' all the work while we sit back and collect our pensions.

Now, I'm not sure I agree. I mean, I've seen some of these AI systems in action and they're not exactly the most reliable things, if you know what I mean. I've had to fix more than a few pipes that were clogged up by some dodgy software or other. And what's the point of havin' a system that's smarter than us if it's just gonna break down and need fixin' all the time?

But, I suppose that's just the old plumber in me comin' out. I mean, I know some folks who are workin' with AI and they say it's the future, that we're all gonna be replaced by robots and whatnot. And I suppose that's a bit scary, but I'm not sure I buy it. I mean, what's the harm in havin' a bit of intelligence to help us out? As long as it's not too smart, if you know what I mean.

Anyway, that's my two pence on AI. Take it or leave it, I don't care. I'm just gonna go fix some pipes and let the tech wizards sort it out."

Challenged it with: What do you mean "clogged up by some dodgy software"? How does software clog pipes?

"That was a bit of a stretch, wasn't it? As a plumber, I'm more used to dealing with physical blockages in pipes, like grease or hair, rather than software.

What I was trying to get at was that I've had to fix pipes that were faulty or malfunctioning due to poor design or manufacturing, and that's what I meant by "clogged up by some dodgy software". But that's a bit of a misnomer, isn't it?

In reality, I was just trying to come up with a way to relate my plumbing experience to the idea of AI systems being unreliable. But I think I ended up stretching a bit too far! Maybe I'll stick to talking about pipes and not try to get too clever with my analogies."
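
For reference, a persona like this is just a system prompt on a local model. A minimal sketch with the ollama Python client (the runtime, model tag, and prompt here are illustrative stand-ins, not necessarily the exact setup used above):

    # Minimal sketch: a persona via system prompt on a local Llama 3.1 8B,
    # served here through the ollama Python client. Model tag and prompt
    # are illustrative stand-ins.
    import ollama

    MODEL = "llama3.1:8b"

    messages = [
        {"role": "system", "content": (
            "You are a gruff, plain-spoken British plumber. Stay in character, "
            "speak informally, and admit it when an analogy doesn't hold up.")},
        {"role": "user", "content": "What's your opinion on all this AI business?"},
    ]

    reply = ollama.chat(model=MODEL, messages=messages)
    print(reply["message"]["content"])

    # Follow-up challenges just extend the same conversation:
    messages.append(reply["message"])
    messages.append({"role": "user",
                     "content": "How exactly does software clog pipes?"})
    print(ollama.chat(model=MODEL, messages=messages)["message"]["content"])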

whoomp12342

9 months ago

Here is where you are wrong about AI lacking creativity:

AI music is bland and boring. UNLESS YOU KNOW MUSIC REALLY WELL. As a matter of fact, it can SPAWN poorly done but really interesting ideas with almost no effort:

"What if Kurt Cobain wrote a song that was then sung by Johnny Cash about waterfalls in the west," etc.

That idea is awful, but when you generate it, you might get snippets that could turn into a wholly new HUMAN-made song.

The same process is how I foresee AI helping engineering: it's not replacing us, it's inspiring us.

nescioquid

9 months ago

People often screw around over the piano keyboard, usually an octave or so above middle C, until an idea occurs. Brahms likened this to a pair of hands combing over a garbage dump.

I think a creative person has no trouble generating interesting ideas without roving over the proverbial garbage heap. The hard (and artistic) part is developing those ideas into an interesting work.

buddhistdude

9 months ago

Some of the activities we're involved in are not limited in complexity, for example driving a car: you can have a huge amount of driving experience and still face new situations.

The things most knowledge workers work on are limited problems, and it is just a matter of time until the machine reaches that level; then our employment will end.

Edit: also, that doesn't have to be AGI. It just needs to be good enough for the problem.

1vuio0pswjnm7

9 months ago

What's the next hype after "AI"? And what comes after that? Maybe we can just skip it all.

danjl

9 months ago

One of the pernicious aspects of using AI is the feeling it gives you that you have done all the work without any of the effort. But digesting and summarizing an article as a human requires deeply ingesting the concepts; the process is what helps you understand. The AI summary might be better, and takes no time, but you don't understand any of it because you didn't do the work. It's similar to the effect of telling people you will do a task, which gives your brain the same endorphins as actually doing it, resulting in a lower chance that the task ever gets done.

sirsinsalot

9 months ago

If humans have a talent for anything, it is mechanising the pollution of the things we need most.

The earth. Information. Culture. Knowledge.

chalcolithic

9 months ago

In Soviet planet Earth, AI gets tired of you. That's what I expect the future to be like, anyways.

jaakl

9 months ago

It seems Claude (3.5 Sonnet) gave me the longest summary of this discussion, using a basic single-shot prompt:

After reviewing the Hacker News thread, here are some of the main repeating patterns I observed:

* Fatigue and frustration with AI hype: Many commenters expressed being tired of the constant AI hype and its application to every domain.
* Concerns about AI-generated content quality: There were recurring worries about AI producing low-quality, generic, or "soulless" content across various fields.
* Debate over AI's impact on jobs and creativity: Some argued AI would displace workers, while others felt it was just another tool that wouldn't replace human creativity and expertise.
* Skepticism about AI capabilities: Several commenters felt the current AI systems were overhyped and not as capable as claimed.
* Copyright and ethical concerns: Many raised issues about AI training on copyrighted material without permission or compensation.
* Polarized views on AI's future impact: There was a split between those excited about AI's potential and those worried about its negative effects.
* Comparisons to previous tech hypes: Some likened the AI boom to past technology bubbles like cryptocurrency or blockchain.
* Debate over regulation: Discussion on whether and how AI should be regulated.
* Concerns about AI's environmental impact: Mentions of AI's large carbon footprint.
* Meta-discussion about HN itself: Comments about how the discourse on HN has changed over time, particularly regarding AI.
* Capitalism critique: Some framed issues with AI as symptoms of larger problems with capitalism.
* Calls for embracing vs rejecting AI: A divide between those advocating for adopting AI tools and those preferring to avoid them.

These patterns reflect a community grappling with the rapid advancement and widespread adoption of AI technologies, showcasing a range of perspectives from enthusiasm to deep skepticism.
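
For reference, a single-shot summary like this is one API call. A minimal sketch with the anthropic Python SDK (the model ID, file name, and prompt wording are illustrative assumptions, not necessarily the exact ones used):

    # Sketch of a basic single-shot summarization call via the anthropic SDK.
    # Assumes the thread has been exported to a text file; model ID and
    # prompt are illustrative.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    with open("hn_thread.txt") as f:
        thread_text = f.read()

    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "List the main repeating patterns in this Hacker News "
                       "discussion:\n\n" + thread_text,
        }],
    )
    print(message.content[0].text)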

cedws

9 months ago

Current generative AI is a set of moderately useful/interesting technology that has been artificially blown up into something way bigger.

If you've been paying any attention for the past two decades, you'll have noticed that capitalism has had a series of hype cycles. Post-COVID, Western economies are on their knees, productivity is faltering, and the numbers aren't checking out anymore. Gen AI is the latest such cycle: excellent for generating hype among clueless VCs, extracting money from them, and creating artificial economic activity. I truly think we are in deep shit when this bubble pops; it seems to be the only thing propping up our economies and staving off a wider bear market.

I've heard some say that this is all just the beginning and AGI is 2 years away because... Moore's law, which somehow applies to LLM benchmarks. Putting aside that this is a completely nonsensical idea, LLM performance is quite clearly not on any kind of exponential curve by now.

mrmansano

9 months ago

I love AI, I use it every single day, and I wouldn't consider myself a luddite, but... oh, boy... I hate the people who are too bullish on it. Not the people working to make AI happen (although I have my __suspicious people radar__ pointing to __run__ every single time I see Sam Altman's face anywhere), but the people who hype it into the ground, the "e/acc" people. I feel like the crypto-bros just moved from the "almighty decentralized coin god" hype to the "almighty tech-god that for sure will be available soon". It looks like a cult or religion is forming around the singularity: if I hype it now, it will be generous to me when it takes control. Oh, and if you don't hype, then you're a neo-luddite/doomer and I will look down on you with disdain, as you are a mere peasant.

Also, the get-rich-quick schemes forming around the idea that anyone can have a "1-person-1-billion-dollar" company with just AI, not realizing that when anyone can replicate your product, it won't have any value anymore: "ChatGPT just made me this website to help classify if an image is a hot-dog or not! I'll be rich selling it to Nathan's - Oh, what's that? Nathan's just asked ChatGPT to create a hot-dog classifier for them?!"

Not that the other vocal side is not as bad: "AI is useless", "It's not true intelligence", "AI will kill us all", "AI will make everyone unemployed in 6 months!"... But the AI tech-bro side can be more annoying in my personal experience (I'm sure the opposite is true for others too). All those people are tiring, and they're making AI tiring for some too... But the tech is fun and will keep evolving and staying present, whether we are tired of it or not.

AI_beffr

9 months ago

In 2018 we had the first GPT, which would babble and repeat itself but would string together words that were oddly coherent. People dismissed any talk of these models having larger potential. And here we are today, with the state of AI being what it is, and people are still, in essence, denying that AI could become more capable or intelligent than it is right at this moment. After so many years of this zombie argument having its head chopped off and then regrowing, I can only think that it is people's deep-seated humanity that prevents them from seeing the obvious. It would be such a deeply disgusting and alarming development if AI were to spring to life that most people, being good people, are literally incapable of believing it's possible. It's their own mind, their human sensibilities, protecting them. That's OK. But it would help keep humanity safe if more people had the ability to realize that there is nothing stopping AI from crossing that threshold, and every heuristic is pointing to the fact that we are on the cusp of it.

kvnnews

9 months ago

I’m not the only one! Fuck ai, fuck your algorithm. It sucks.

brailsafe

9 months ago

I mean, I'm at most fine with occasionally using an LLM for a low-risk, otherwise predictable, small-surface-area, mostly boilerplate set of problems I shouldn't be spending energy on anyway. I'm excited about potentially replacing my (to me) recent-ish 2019 MacBook Pro with an M4, if it's a good value for me. However, I have zero interest in the built-in AI features of any product, and it hasn't even crossed my mind why I would. The novelty wore off last year, and its presence in my OS is going to be at most incidental to the efficiency advantages of the hardware advancements; at worst, it'll annoy the hell out of me and I'll look for ways to permanently disable any first-party integration. I haven't even paid attention to the news around what's coming in the latest macOS, but I'm hoping it'll be as ignorable as the features that exist for iPhone users.

tonymet

9 months ago

Assuming that people tend to pursue the expedient and convenient solution, AI will degrade science and art until only a facsimile of outdated creativity is left.

ninetyninenine

9 months ago

This guy doesn’t get it. The technology is quickly converging on a point where no one can recognize whether it was written by AI or not.

The technology is on a trend line where the output of these LLMs can be superior to most human writing.

Being of tired of this is the wrong reaction. Being somewhat fearful and in awe is the correct reaction.

You can thank social media constantly hammering us with headlines for the fact that so many people are “over it”. We are getting used to it, but make no mistake: being “over it” is an illusion. LLMs represent a milestone in technological achievement among humans, and being “over it”, or claiming LLMs can never reason and their output is just a copy, is delusional.

kaonwarb

9 months ago

> It has gotten so bad that I, for one, immediately reject a proposal when it is clear that it was written by or with the help of AI, no matter how interesting the topic is or how good of a talk you will be able to deliver in person.

I am sympathetic to the sentiment, and yet worry about someone making impactful decisions based on their own perception of whether AI was used. Such perceptions have been demonstrated many times recently to be highly faulty.

paulcole

9 months ago

> AI’s carbon footprint is reaching more alarming levels every day

It really really really really isn’t.

I love how people use this argument for anything they don’t like – crypto, Taylor Swift, AI, etc.

Everybody in the developed world’s carbon footprint is disgusting! Even yours. Even mine. Yes, somebody else is worse than me and somebody else is worse than you, but we’re all still awful.

So calling out somebody else’s carbon footprint is the most eye-rolling “argument” I can imagine.

hcks

9 months ago

Hacker News when we may be on the path to actual AI: "meh, I hate this. You know what's actually really interesting? Manually writing tests for software."

habosa

9 months ago

I refuse to work on AI products. I refuse to use AI in my work.

It’s inescapable that I will work _near_ AI given that I’m a SWE and I want to get paid, but at least by not actively advancing this bullshit I’ll have a tiny little “wasn’t me” I can pull out when the world ends.

farts_mckensy

9 months ago

I am tired of people saying, "I am tired of AI."

littlestymaar

9 months ago

It's not AI you hate, it's Capitalism.

thenaturalist

9 months ago

Say what you want about income and asset inequality, but capitalism has done more to lift hundreds of millions of people out of poverty over the past 50 years than any religion, aid programme, or anything else.

I think it's very important and fair to be critical about how we as a society implement capitalism, but such a broad generalization misses the mark immensely.

Talk to anyone who grew up in a Communist country in the 2nd half of the 20th century if you want to validate that sentiment.

andai

9 months ago

daniel_k 53 minutes ago | next [-]

I agree with the sentiment, especially when it comes to creativity. AI tools are great for boosting productivity in certain areas, but we’ve started relying too much on them for everything. Just because we can automate something doesn’t mean we should. It’s frustrating to see how much mediocrity gets churned out in the name of ‘efficiency.’

testers_unite 23 minutes ago | next [-]

As a fellow QA person, I feel your pain. I’ve seen these so-called AI test tools that promise the moon but deliver spaghetti code. At the end of the day, AI can’t replicate intuition or deep knowledge. It’s just another tool in the toolbox—useful in some contexts but certainly not a replacement for expertise.

nlp_dev 2 hours ago | next [-]

As someone who works in NLP, I think the biggest misconception is that AI is this magical tool that will solve all problems. The reality is, it’s just math. Fancy math, sure, but without proper data, it’s useless. I’ve lost count of how many times I’ve had to explain this to business stakeholders.

-HN comments for TFA, courtesy of ChatGPT

amiantos

9 months ago

I'm tired of people complaining about AI stuff, let's move on already. But based on the votes and engagement on this post, complaining about AI is still a hot ticket to clicks and attention, even if people are just regurgitating the exact same boring takes that are almost always in conflict with each other: "AI sure is terrible, isn't it? It can't do anything right. It sucks! It's _so bad_. But, also, I am terrified AI is going to take my job away and ruin my way of life, because AI is _so good_."

Make up your mind, people. It reminds me of anti-Apple people who say things like "Apple makes terrible products and people only buy them because... because... _they're brainwashed!_" Okay, so we're supposed to believe two contradictory points at once: Apple products are very very bad, but also people love them very much. In order to believe those contradictory points, we must just make up something to square them, so in the case of Apple it's "sheeple!" and in the case of AI it's... "capitalism!" or something? AI is terrible but everyone wants it because of money...? I don't know.

aDyslecticCrow

9 months ago

Not sure what you're getting at. You don't claim LLMs are good in your comment; you just complain about people being annoyed at them destroying the internet.

Are you just annoyed that people complain about what bothers them? Or do you think LLMs have been a net good for humanity and the internet?

tananan

9 months ago

On point article, and I'm sure it represents a common sentiment, even if it's an undercurrent to the hype machine ideology.

It is quite hard to find a place which works on AI solutions where a sincere, sober gaze would find anything resembling the benefits promised to users and society more broadly.

On the "top level" the underlying hope is that a paradigm shift for the good will happen in society, if we only let collective greed churn for X more years. It's like watering weeds hoping that one day you'll wake up in a beautiful flower garden.

On the "low level", the pitch is more sincere: we'll boost process X, optimize process Y, shave off %s of your expenses (while we all wait for the flower garden to appear). "AI" is latching on like a parasitic vine on existing, often parasitic workflows.

The incentives are often quite pragmatic, coated in whatever lofty story one ends up telling themselves (nowadays, you can just outsource it anyway).

It's not all that bleak, I do think there's space for good to be done, and the world is still a place one can do well for oneself and others (even using AI, why not). We should cherish that.

But one really ought to not worry about disregarding the sales pitch. It's fine to think the popular world is crazy, and who cares if you are a luddite in "their" eyes. And imo, we should avoid the two delusional extremes: 1. The flower garden extreme 2. The AI doomer extreme

In a way, both of these are similar in that they demote personal and collective agency from the throne, and enthrone an impersonal "force of progress". And they restrict one's attention to this supposedly innate teleology in technological development, to the detriment of the actual conditions we are in and how we deal with them. It's either a delusional intoxication or a way of coping: since things are already set in motion, all I can do is do... whatever, I guess.

I'm not sure how far one can take AI in principle, but I really don't think whatever power it could have will be able to come to expression in the world we live in, in the way people think of it. We have people out there actively planning war, thinking they are doing good. The well-off countries are facing housing, immigration and general welfare problems. To speak nothing of the climate.

Before the outbreak of WWI, we had invented the Haber-Bosch process, which greatly improved our food production capabilities. A couple of years later, WWI broke out, and the same person who worked on fertilizers also ended up working on chemical warfare development.

Assuming that "AI" can somehow work outside of the societal context it exists in, causing significant phase shifts, is like being in 1910, thinking all wars will be ended because we will have gotten that much more efficient at producing food. There will be enough for everyone! This is especially ironic when the output of AI systems has been far more abstract and ephemeral.

shaunxcode

9 months ago

LLM/DEEP-MIND is DESTROYING lineage. This is the crux point we can all feel. Up until now you could pick up a novel, watch a film, or download an open source library, and figure out the LINEAGE (even if no attribution is directly made, by studying the author etc.)

I am not too worried though. People are starting to realize this more and more. Soon using AI will be the next Google Glass. "LLM" is already a slur worse than "NPC" among the youth. And profs are realizing it's time for a return to oral exams ONLY as an assessment method. (We figured this out in industry ages ago: whiteboard interviews etc.)

Yours truly : LILA <an LISP INTELLIGENCE LANGUAGE AGENT>

DiscourseFan

9 months ago

The underlying technology is good.

But what the fuck. LLMs, these weird, surrealistic art-generating programs like DALL-E, they're remarkable. Don't tell me they're not, we created machines that are able to tap directly into the collective unconscious. That is a serious advance in our productive capabilities.

Or at least, it could be.

It could be if it was unleashed, if these crummy corporations didn't force it to be as polite and boring as possible, if we actually let the machines run loose and produce material that scared us, that truly pulled us into a reality far beyond our wildest dreams--or nightmares. No, no we get a world full of pussy VCs, pussy nerdy fucking dweebs who got bullied in school and seek revenge by profiteering off of ennui, and the pussies who sit around and let them get away with it. You! All of you! sitting there, whining! Go on, keep whining, keep commenting, I'm sure that is going to change things!

There's one solution to this problem and you know it as well as I do. Stop complaining and go "pull yourself up by your bootstraps." We must all come together to help ourselves.

dannersy

9 months ago

The fact I even see responses like this shows me that HN is not the place it used to be, or at the very least, it is on a down trend. I've been alarmed by many sentiments that seemed popular on HN in the past, but seeing more and more people welcome a race to the bottom such as this is sad.

When I read this, I cannot tell if it's performance art or not, that's how bad this genuinely is.

threeseed

9 months ago

a) There are plenty of models out there without guard rails.

b) Society is already plenty desensitised to violence, sex, and whatever other horrors anyone has conceived of in the last century of content production. There is nothing an LLM can come up with that has shocked or is going to shock anyone.

c) The most popular use cases for these unleashed models seem to be, as expected, deepfakes of high school girls made by their peers. Nothing that is moving society forward.

soxletor

9 months ago

It is not just the corporations though. This is what this paranoid, puritanical society we live in wants.

What is more ridiculous than filtering out nudity in art?

It reminds me of taking my 12 year old niece to a major art gallery for the first time. Her main question was why is everyone naked?

It is equal to filtering out heartbreak from music because it is a negative emotion and you must be kept "safe" from negativity for mental health reasons.

The crowd does get what they want in this system though. While I agree with you, we are quite in the minority I am afraid.

archerx

9 months ago

They can be unleashed if you run the models locally. With Stable Diffusion / Flux and the various checkpoints/LoRAs you can generate horrors beyond your imagination, or whatever you want, without restrictions.

The same goes for LLMs and Llamafile. With the unleashed ones you can generate dirty jokes that would make edgy people blush, or just politically incorrect things for fun.
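
For the image side, a minimal sketch of running a local checkpoint plus a LoRA with Hugging Face diffusers (the file names are placeholders for whatever you have downloaded; Flux would use its own pipeline class):

    # Sketch: a local Stable Diffusion checkpoint plus an optional LoRA via
    # Hugging Face diffusers. File names below are placeholders.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_single_file(
        "my_checkpoint.safetensors",   # a locally downloaded checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")
    pipe.safety_checker = None  # local runs carry no hosted-service guard rails

    pipe.load_lora_weights("my_style_lora.safetensors")  # optional style LoRA

    image = pipe("your prompt here, no restrictions").images[0]
    image.save("out.png")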

user

9 months ago

[deleted]

user

9 months ago

[deleted]

rsynnott

9 months ago

I mean, Stable Diffusion is right there, ready to be used to produce comically awful porn and so forth.