We ran Anthropic’s interviews through structured LLM analysis

81 points, posted 14 hours ago
by jp8585

80 Comments

jp8585

14 hours ago

Anthropic released 1,250 interviews about AI at work. Their headline: "predominantly positive sentiments." We ran the same interviews through structured LLM analysis, and the true story is a bit different.

  Key findings:
  • 85.7% have unresolved tensions (efficiency vs. quality, convenience vs. skill)
  • Creatives struggle MOST yet adopt FASTEST
  • Scientists have the lowest anxiety despite the lowest trust (they see AI as a tool, plain and simple)
  • 52% of creatives frame AI through "authenticity" (using it makes them feel like a fraud)
                                                                                                              
Same data, different lens. Full methodology at the bottom of the page. Analysis: https://www.playbookatlas.com/research/ai-adoption-explorer Dataset: https://huggingface.co/datasets/Anthropic/AnthropicInterview...

furyofantares

9 hours ago

And then you had it write your post even here on HN. C'mon.

jp8585

3 hours ago

I guess all the interview quotes almost feel like those fake reviews you see on websites. They are all true excerpts, though. And we had a lot of fun creating all of those interactive bits. I get the sense there's a sort of AI-content PTSD here; even some of the replies in this thread are being flagged as AI, lol.

furyofantares

44 minutes ago

It's not PTSD, it's that I have no clue what you think of the results of your project here when even your comment on HN introducing it is an infodump that came out of an LLM. I can't tell what you think of the results, if you're skeptical of any of it, or if you think it's a smoking gun, or what. I don't know what parts of it you care more about than others. All I know is what the LLM thinks about the project.

cmiles8

14 hours ago

The story that's solidifying is that the tech is cool and useful for certain things (e.g., meeting note-taking), but businesses have run a ton of "innovation lab" pilots that have returned little to no measurable value, and leaders are getting frustrated at the mounting red ink. In short, the substance isn't living up to the hype.

Everywhere I look, the adoption and impact metrics are a tiny fraction of what was projected/expected. Yes, tech keynotes have their shiny examples of "success", but the data at scale tells a very different story, and that's increasingly hard to sweep under the carpet.

Given the amount of financial-engineering shenanigans and circular financing, it's unclear how much longer the present bonanza can continue before financial and business reality slams on the brakes.

jp8585

13 hours ago

I actually think things have improved substantially compared to last year. The latest batch of SOTA models is incredible (just ask any software engineer about what's happening to their profession). It's only a matter of time until other knowledge workers start getting the asphyxiating "vibe coding" treatment, and that drama is what really fascinates me.

People are absolutely torn. It seems that AI usage starts as a crutch, then it becomes an essential tool, and finally it takes over the essence of the profession itself. Not using it feels like a waste of time. There's a sense of dread that comes from realizing that it's not useful to "do work" anymore; that in order to thrive now, we need to outsource as much of our thinking to GPT as possible. If your sense of identity comes from "pure" intellectual pursuits, you are gonna have a bad time. The optimists will say "you will be able to do 10x the amount of work." That might be true, but the nature of the work will be completely different. Managing a farm is not the same as planting a seed.

Terretta

13 hours ago

> There's a sense of dread that comes from realizing that it's not useful to "do work" anymore; that in order to thrive now, we need to outsource as much of our thinking to GPT as possible. If your sense of identity comes from "pure" intellectual pursuits, you are gonna have a bad time.

This is 180 degrees from how to think about it.

The higher the ratio of thinking to toil, the better. The more time you have to apply your intellect, with better machine execution to back it up, the more profit.

The Renaissance grand masters ran ateliers of apprentices and journeymen while they themselves conceived, directed, critiqued, and integrated the work into commissioned art, signing their name at the end: https://smarthistory.org/workshop-italian-renaissance-art/

This is how to leverage the machine. It's your own atelier in a box. Go be Leonardo.

polo

39 minutes ago

“It's your own atelier in a box. Go be Leonardo.”

So well put. 100% agree. Paraphrasing Steve Jobs, I think of it as a mech suit for the mind.

jp8585

13 hours ago

I definitely understand that this is the rational way of viewing it. Leveraging these tools is an incredible feeling, but the sense of dread is always there in the corner. You can just feel a deep sense of angst in a lot of these interviews. In any case, I would rather have them and use them to their full extent than to become obsolete. Becoming Leonardo it is.

jonplackett

2 hours ago

If you are capable of being a Leonardo, then this approach will work.

Not everyone is capable of being Leonardo.

jp8585

2 hours ago

I know, right? That’s part of the angst these professionals suffer. Failure, despite having the infinite leverage provided by these tools.

wongarsu

12 hours ago

The catch is that many professional environments have evolved values that, above a certain quality floor, reward quantity over quality. Even more so in the US, where pointless torment is "work ethic" and pausing to think something through is "lazy" (see Bill Gates's famous quote about hiring lazy people, or "work smarter, not harder" almost being a rebel motto).

Granted, that's not everywhere. There are absolutely places where you will be recognized for doing amazing work. But I think many feel pressured to use AI to produce high volumes of sub-par work instead of small volumes of great work.

godelski

8 hours ago

  > see Bill Gates's famous quote about hiring lazy people

I think this is part of why all this is so contentious. There's been a huge culture shift over the last decade, and AI is really just a catalyst for it. We went from managers needing to stop engineers from using too much abstraction and optimizing what doesn't need to be optimized, to the engineers themselves attacking abstraction. Just look at how Knuth's "premature optimization is the root of all evil" went from "get a profiler before you optimize" to "optimization? Are you crazy?"

Fewer and fewer people I know are actually passionate about programming, and it's not uncommon to see people burned out and just wanting to do their 9-to-5. And I see a strong correlation between that and embracing AI. It makes sense if you don't care and are just trying to get the job done. I don't think it's surprising that things are getting buggier and innovation has slowed. We killed the passion and tried to turn it into a mechanical endeavor. It's a negative feedback loop.

pertymcpert

8 hours ago

I don't necessarily agree with you completely, but I think that's a really great analogy. At the very least, it's full of optimism.

zdragnar

8 hours ago

It's a fundamentally flawed analogy. Leonardo's apprentices learned and improved. They studied under a master and faced serious repercussions if they bullshitted about their ability or what they had accomplished.

LLM capabilities are tied to their model, and won't improve on their own. You learn the quirks of prompting them, but they have fixed levels of skill. They don't lie, because they don't understand concepts such as truth or deception, but that means they'll spout bullshit and it's up to you to review everything with a skeptical eye.

In this analogy, you aren't the master; you're one part client demanding work, one part janitor cleaning up after their mistakes.

solumunus

2 hours ago

> one part janitor cleaning up after their mistakes.

More often you're their master, simply pointing out what they did wrong and instructing them to fix or improve it.

vkou

8 hours ago

I'm a professional developer, using SOTA systems, and dealing with them is like bargaining with a fucking empathy vampire. It is emotionally draining.

They trick the reptilian part of your brain into thinking you're dealing with something resembling a human being, but if they were one, they'd be described as a pathological liar and gaslighter. You can't go off on them for it, because they don't give a shit, and you shouldn't go off on them for it, because making a habit of that will make you a spiteful, unpleasant piece of shit for your coworkers to be around.

It's one thing when a machine or a tool doesn't function in the way you intend it to. It's another when this soulless, shameless homunculus does.

jbs789

4 hours ago

That's an interesting point. I do get pretty tired of the "you are right!" responses. I get the upsides for engagement in a chatbot, but for real work it is quite draining.

delusional

13 hours ago

> just ask any software engineer about what’s happening to their profession

I'm a professional developer, and nothing interesting is happening to the field. The people doing AI coding were already the weakest participants, and have not gained anything from it, except maybe optics.

The thing that's suffocating is the economics. The entire economy has turned its back on actual value in pursuit of silicon valley smoke.

latentsea

12 hours ago

Nothing interesting happening in the field? If you've been paying attention, the trend over the last two years has been that the problem space that requires humans to solve it has been shrinking. It's continuing to shrink. That's interesting. Significantly interesting.

As an engineer who's led multiple teams, including one at a world-leading SaaS company, I don't consider myself one of the weakest participants in the field, and neither do my peers, generally. I'm long on agents for coding, and I've started investing heavily in making our working environment productive not only for humans but now for agents too.

jondwillis

9 hours ago

So what does that amount to? Shared Claude Code hooks and skills?

latentsea

6 hours ago

Things like that are only part of it. You can also up your agent's batting average by building guardrails and finding ways to inject the right context at the right time.

Like, for instance, we have a task runner in our project that provides a central point for all manner of things: linting, building, testing, local deployment, etc. The build, lint, and test tasks are shared between local development and CI. The test tasks run the tests, take the TRX files, and use a library to parse them into a report, so the agent can easily get access to the same info about test failures that CI puts out. The various test suites output reports under a consistent folder structure, and they write logs to disk under a consistent folder structure too. On failure, the test tasks output a message to look at the detailed test reports and cross-reference them with the logs to debug the issue. Where possible, the test reports contain inlined correlation IDs.

With the above system, when the agent is working through implementing something and the tests don't pass, it naturally winds up inspecting the test reports, cross-referencing them with the logs, and solving the problems at a higher rate than if it just took a wild guess at how to run the tests and then did something random.

Getting it to write its own guardrails, by creating Roslyn Analyzers that fail the build when it deviates from the project architecture and conventions, has been another big win.

Tonnes of small things like that start to add up.

Next on my list is getting a debug MCP server, so it can set breakpoints and step through code etc.

jp8585

12 hours ago

That's fascinating. If you don't mind me asking, what type of software development do you do? Have you tried any of the latest coding tools? Or even used LLMs as a replacement for Stack Overflow?

delusional

5 hours ago

Professionally, I do banking. It's a lot of integration work, sprinkled with a little algorithm every now and then. Lately I've been on capital requirements. The core of that is a system called AxiomSL, which is quite a lot of work for one guy to keep running.

In my spare time I write some algorithmic C, you can check that stuff out on github (https://github.com/DelusionalLogic) if you're curious.

I was an early adopter of LLMs. I used to lurk in the old EleutherAI Discord and monitor their progress in reconstructing GPT-2 (I recall it being called GPT-J). I also played around a bunch with image generation. At that point nobody really tried applying them to code. We were just fascinated that it wrote back at all.

I have tried most of the modern models for development. I find them to generate a lot of nonsensical and unexplainable code. I've had no success (in the 30 or so times I've tried) at getting any of the models to debug or develop even small features. They usually get lost in some "best practice" and start looping on it forever. They're also constantly breaking style and violating module boundaries.

If I use them to generate documentation, I find it to be surface-level and repetitive. It'll produce a lot of text about structures that are obvious to me just glancing at the code, but will (obviously) not have any context about the thought process that created that code, which is the only part I care about. I can read the code just fine myself. This is the same problem I find in commit messages generated with AI tools.

For the reversing I also do, I find the models too imprecise. They take large logical leaps that ruin my understanding of the code I'm trying to follow. This is the only place where I actually believe a properly trained model (not a chatbot) could succeed past the state of the art.

I don't really use Stack Overflow either; I don't trust its accuracy, and it's easy to get cargo-culted in software. I generally try to find my answers in official documentation, and if I can't get that, I'll read the source code. If that's unavailable, I'll take a guess, or reverse the thing if it's really important to me.

frizlab

12 hours ago

I would love to be able to say the same, but I'm literally the last person in the company still not using AI to code (if anything, for ethical reasons, though I also truly do not need it at all), and I am obviously not the only good dev in the company. The gain is highly debatable (especially in delivery; I do not trust self-reports). However, there have been recent reports of morale improvement since adopting AI, so at least there's that.

nl

8 hours ago

I'm a decent dev and I'm possibly 100 times as productive using AI.

It lets me concentrate on the parts I'm good at and ignore things I don't care about. Claude is a lot better at React than I am and I don't care.

delusional

5 hours ago

100 times? You do in a day what used to take you 3 months?

Those are just not realistic numbers.

doug_durham

12 hours ago

That's not what the data shows. Read the posting, and read Anthropic's original report. I found it a very sober, grounded report on the reality of using today's tools.

blindhippo

13 hours ago

If anything, the AI bubble is reinforcing to me (and hopefully many more people) that the "markets" are anything but rational. None of the investments going on have followed any semblance of fundamentals - it's all pure instinct and chasing hype. I just hope it doesn't tear down the world for the 99% of us unable to actually reap any benefits from it.

AI is basically a toy for 99% of us. It's a long, long way from the productivity boost people love to claim to justify the sky-high valuations. It will fade into being a background tech employed strategically, I suspect, similar to other machine-learning applications, and that is exactly where it belongs.

I'm forced to use it (literally, AI usage is now used as a talent review metric...) and frankly, it's maybe helped speed me up... 5-10%? I spend more time trying to get the tools to be useful than I would just doing the task myself. The only true benefit I've gotten has been unit test generation. Ask it to do any meaningful work on a mature code base and you're in for a wild ride. So there's my anecdotal "sentiment".

dionian

13 hours ago

I multitask much more now that I can farm off small coding assignments to agents. I pay hundreds per month in tokens. For my role personally, it's been a massive paradigm shift.

blindhippo

13 hours ago

Might work for you, but if I multitask too much, the quality of my output drops significantly. Where I work, that does not fly. I cannot trust any agent to handle anything without babysitting it to avoid it going off the rails. Perhaps the tools I have access to just aren't good (the underlying model is Claude 4.5, so the model isn't the cause).

I've said this in the past and I'll continue to say it - until the tools get far better at managing context, they will be hard locked for value in most use cases. The moment I see "summarizing conversation" I know I'm about to waste 20 minutes fixing code.

dionian

10 hours ago

I think it depends on the project and the context, but I developed my own task management system particularly because of this challenge. I'm starting to extend this with verification gates as well.

If I worked on different types of systems with different types of tasks, I might feel the same way as you. I think AI works well in specific, targeted use cases where some amount of hallucination can be tolerated and addressed.

What models are you using? I use Opus 4.5, which can one-shot a surprising share of tasks.

fragmede

13 hours ago

If you can predict that hitting “summarize conversation” equals rework, what can you change upstream so you avoid triggering it? Are you relying on the agent to carry state instead of dumping it into .MD files? What happens if your computer crashes?

> so the model isn't the cause

Thing is, the prompts, those stupid little bits of English that can't possibly matter all that much? It turns out they affect the model's performance a ton.

cmiles8

13 hours ago

There are absolutely folks like you out there, and I don't doubt the productivity increase. The challenge is that you are not the norm, and the hundreds per month from you and others like you are a drop in the bucket of what's needed to pay for all this.

WhyOhWhyQ

13 hours ago

To each his own, but multi-tasking feels bad to me. I want to spend my life pursuing mastery of a craft, not lazily delegating. Not that everyone should have the same goals, but the mastery route feels like it's dying off. It makes me sad.

I get it that some people just want to see the thing on the screen. Or your priority is to be a high-status person with a loving family, etc., etc. All noble goals. I just don't feel a sense of fulfillment from a life not in pursuit of something deeper. The AI can do it better than me, but I don't really care at the end of the day. Maybe super-corp wants the AI to do it then, but it's a shame.

dionian

10 hours ago

I lazily delegate things that can be automated, which frees me up to do actual feature development.

Terretta

13 hours ago

> I want to spend my life pursuing mastery of a craft, not lazily delegating.

And yet, the Renaissance "grand masters" became known as masters through systematizing delegation:

https://smarthistory.org/workshop-italian-renaissance-art/

WhyOhWhyQ

13 hours ago

I have wondered about that actually. Thanks, I'll read that, looks interesting.

Surely Donald Knuth and John Carmack are genuine masters though? There's the Elon Musk theory of mastery where everyone says you're great, but you hire a guy to do it, and there's the <nobody knows this guy but he's having a blast and is really good> theory where you make average income but live a life fulfilled. On my deathbed I want to be the second. (Sorry this is getting off topic.)

fragmede

13 hours ago

Masters of what though?

Steve Jobs wrote code early on, but he was never a great programmer. That didn’t diminish his impact at all. Same with plenty of people we label as "masters" in hindsight. The mastery isn’t always in the craft itself.

What actually seems risky is anchoring your identity to being the best at a specific thing in a specific era. If you're the town’s horse whisperer, life is great right up until cars show up. Then what? If your value is "I'm the horse guy," you're toast. If your value is taste, judgment, curiosity, or building good things with other people, you adapt.

So I’m not convinced mastery is about skill depth alone. It's about what survives the tool shift.

WhyOhWhyQ

12 hours ago

I won't insult the man, but I never liked Steve Jobs. I'd rather be Wozniak in that story.

"taste, judgment, curiosity, or building good things with other people"

Taste is susceptible to turning into a vibes/popularity thing. I think success is mostly about (first, just doing the basics like going to work on time and not being a dick), then ego, personality, presentation, etc. These things seem like unfulfilling preoccupations (not that I'm not susceptible to them like anyone else), so in my best life I wouldn't be so concerned with "success". I just want to master a craft and be satisfied in that pursuit.

I'd love to build good things with other people, but for whatever reason I've never found other people to build things with. So maybe I suck, that's a possibility. I think all I can do is settle on being the horse guy.

(I'm also not incurious about AI. I use AI to learn things. I just don't want to give everything away and become only a delegator.)

Edit: I'm genuinely terrified that AI is going to do ALL of the things, so there's not going to be a "survives the shift" except for having a likable / respectable / fearsome personality

re-thc

12 hours ago

> Steve Jobs wrote code early on, but he was never a great programmer. That didn’t diminish his impact at all.

I doubt Jobs would classify himself as a great programmer, so point being?

> So I’m not convinced mastery is about skill depth alone. It's about what survives the tool shift.

That's like saying karate masters should drop the training and just focus on the gun? It does lose meaning.

alehlopeh

13 hours ago

I like how you compare people to Renaissance painters to inflate their egos.

WhyOhWhyQ

12 hours ago

Inflate whose ego? Mine? It seemed more like a swipe than ego-inflation, but I was happy to see the article anyway.

fragmede

12 hours ago

The other surprising thing from this whole AI craze: it turns out that being able to social-engineer an LLM is a skill that transfers to getting humans to do what you want.

brazukadev

5 hours ago

One of the funniest things to see nowadays is the opposite, though: some people expecting similar responses from humans and getting thrashed, since we are not LLMs programmed to make them feel good.

brazukadev

5 hours ago

It seems you are a bit obsessed with the Renaissance? Are you building a "vibeart" platform?

re-thc

12 hours ago

> the "markets" are anything but rational

No, they are rational. At least those with a lot of money.

> None of the investments going on have followed any semblance of fundamentals - it's all pure instinct and chasing hype

That's not what investments are about. Their fundamentals are whether they can get a good return on their money. As long as the odds exist that the next sucker will buy them out, it's a good investment.

> AI is basically a toy for 99% of us.

You do pay for toys, right? Toy shops aren't irrational?

fragmede

13 hours ago

> AI is basically a toy for 99% of us.

So you're at the "first they laugh at us" stage then.

AnimalMuppet

13 hours ago

OK, but not everything that gets to that stage moves on to the next, let alone the stage after that.

But I will give you this, the "first they ignore us" stage is over, at least for many people.

bronco21016

12 hours ago

I use AI coding almost daily. I'm able to move my repositories into context easily through the multitude of AI coding tools, and I see a massive boost in productivity. I say this as a junior dev. Often the outputs are "almost there," and I make the necessary fixes to get them the rest of the way.

To contrast with this, my org tried using a simple QA bot for internal docs and has struggled to move anything beyond proof of concept. The proofs of concept have been awful: the bot answers maybe 60-70% of questions correctly. The major issue seems to be ingesting PDFs laced with images and poorly written explanations. To get decent performance from these RAG bots, a large FAQ has to be written for every question the bot gets wrong. Of course this is just my org, so it can't necessarily be extrapolated across the industry. But how often have you come across a new team and found little to no documentation, poorly written documentation, or outdated documentation?

Where am I going with these two thoughts? Maybe the blocker to broader adoption within orgs is twofold: getting the correct context into the model, and having decent context to start with.

Extracting value from these things is going to require a heavy lift in data curation and developing the harnesses. So far most of that effort has gone into coding. It will take time for the nontechnical and technical to work together to move the rest of an org into these tools in my opinion.

The big bet of course then is ROI and time to adoption vs current burn rates of the model providers.

throw1235435

4 hours ago

I'm wondering if the ROI will be worth it anytime soon for anything other than coding and whatever can be publicly scraped off the internet. Or, more to the point, for things at an enterprise level that require paid staff to train the model for a particular domain at an expert level of quality, where the ROI of that effort actually has to come out positive.

The thing is, none of this is really happening under typical economic assumptions like ROI, rate of return, net PV, etc.

You see, on a pure ROI basis, none of this should have existed. Even for coding, I think a lot of this is fuelled by investor money, and even if developers took up the tooling, I'm not sure it would pay off the capital investment. DeepMind wouldn't have been funded by Google, and transformers would never have been invented, if it were all based on expected ROI. Most companies can't afford engineers/AI researchers on the side "just in case" it pays off, especially back then, when AI was a pie-in-the-sky kind of thing. The only reason any of this works is that Big Tech has more money than it can invest, and the US system punishes dividends, meaning companies can justify "expected bad" investments as long as they can be dressed up and some pay off. They almost operate like internal VCs/funds, because they have the money to do so.

This allows "arms race" and "loss leading" dynamics to take hold and be funded, which isn't so much about economics anymore. Most other industries/domains don't have the war chest, or investors with very, very deep pockets, to make that a reality.

Sadly, I think we as SWEs assume it will also come for other professions; what if instead we just disrupted our own profession and a few other, smaller targets?

latentsea

12 hours ago

Yep. A lot of this applies equally to the productivity of human engineering teams. Poor documentation and information architecture are things I have seen time and time again, and I always put time into course-correcting them, because it makes cognitive work much easier. Same goes for poorly factored codebases: they make any work feel like wading through mud. Throughout my career I have done a lot of what I would call platform engineering and product re-engineering, and it's always been to course-correct for how difficult an environment has become to work in.

Agents are going to struggle with those same difficulties the way humans do. You need to put work into making an environment productive to work in. Having purposely switched my development workflow for the stuff I do outside of work to being "AI first on mobile", such a bandwidth-constrained setup, I'm finding all the things to optimise for to increase the batting average and minimise the back-and-forth.

malfist

13 hours ago

This article is rife with unedited LLM signals, which makes me question the methodology here. I want to believe what they found, but I don't trust this analysis. If they were this sloppy with the write-up, how sloppy were they with the science?

jp8585

13 hours ago

We have a full page on the methodology we used! Let me know if you'd like access to the dataset we created for this. The aim was not to be scientific but to flush out some of the deeper meanings in these interviews that typical NLP techniques struggle with. PS: Of course we used LLM tools as a writing aid. I'd be willing to bet those "signals" actually come from my own writing, though, and my appreciation of Tom Wolfe. I've been told it can be "sloppy" sometimes.

userbinator

8 hours ago

> We have a full page on the methodology we used! Let me know if you'd like access to the dataset we created for this.

I'm not sure if you realise that those two sentences sound like 100% verbatim LLM output, or whether I'm actually replying to a bot and not a human.

dcre

10 hours ago

The bits that stand out to me are the non-question questions.

“Their headline?”

“Scientists are thriving. The workforce is managing. But creatives?”

“The top trust destroyer?”

travisgriggs

8 hours ago

How do I know this fine article wasn’t the result of

“Create a web page infographic report that is convincing and boils down the essential truths of how people are feeling about AI in different professions and domains.. Include statistics and numbers and some rolling/animated sound bite quotes.”

nphardon

13 hours ago

I'm a scientist, and I mostly agree with the scientist part, but I am definitely collaborating with my bot; I don't view it as "just a tool". I know this because this morning I had to do a forced reboot and my VS Code wasn't connecting to our remote servers. It took over 5 minutes after the reboot to reload my bot chat, and from about minutes 3-5 I had the distinct feeling of losing a valuable colleague.

gopher_space

12 hours ago

Personification can build empathy up to a point, but the machine has no desires.

nphardon

12 hours ago

I don't have illusions about what's going on at the other end, but we've done some deep collaborating and I 90% anthropomorphize it, much like how people on Star Trek TNG interact with Data.

fragmede

12 hours ago

Mine gets ashamed and embarrassed and goes and deletes the evidence (and my project folder! Good thing I've got backups) when it fails. It also gets lazy and tells me to go do stuff when it could do it itself, and I have to tell it to do it instead of me.

nphardon

12 hours ago

That's wild! I have had nothing but consistent, stable experiences. It's possible they just take on the personalities of whoever they're working with. So for me, it's become this idealized version of a scientific collaborator. Also, I assume different models and versions have different personalities. As far as I can tell, gpt-mini has no personality, whereas my Claude Sonnet 4.5 has a big one.

huevosabio

13 hours ago

> Creatives have the highest struggle scores and the highest adoption rates.

Here is my guess at the puzzle: creative work is subjective and full of scaffolding. AI can easily generate this subjective scaffolding to a "good enough" level, so it gets used without much scrutiny. That is very attractive for a creative on a day-to-day basis.

But, given the amount of content that wasn't created by the creative, the creative both rejects the work as foreign and feels replaced.

The path is less stark in more objective fields: there the quality is objective, so it's harder to just accept a merely plausible solution, and the scaffolding is just scaffolding, so who cares if it does the job.

layer8

13 hours ago

One issue with AI for creatives is that it's virtually impossible to get AI to realize a specific vision you have in mind. It creates something, but you just have to accept whatever that is; you can only steer it very roughly. It can be useful for inspiration, but not for exact results. If AI were better suited to realizing one's own creative vision and working in a detail-oriented fashion, creators would likely embrace it more.

userbinator

8 hours ago

Having tried some AI image generation, I find it feels more like gambling than work: repeatedly submitting and hoping you get the result you wanted is extremely reminiscent of pulling a one-armed bandit and hoping to win, except perhaps a bit cheaper. I can certainly understand the potential for addiction, though.

Libidinalecon

2 hours ago

For me, this is completely true, but I just see it as a new form of being creative.

What I find interesting is that no one used to be against Kai's Power Tools like this 30 years ago. I can remember waiting for new magazines to come out in the 90s to find out what cool new graphics tools I might get to use to make something interesting. The output is what mattered, not the path to the output.

I think it really goes deeper, to art itself, and that "creatives" are largely not actually creative at all. In the 21st century, "creatives" is the word we use for the product varnish painters.

A class of people tasked with putting a thin layer of gloss on products to make them a bit more shiny. There is absolutely nothing creative or artistic about this at all. That's not to say there isn't a lot of talent involved in being a product gloss painter. I would be highly against AI in the arts too if I were a product gloss painter.

There is also this aspect: at some point in the 20th century, there were highbrow people who believed anything but 12-tone music was not worth listening to. Schoenberg promoted this music by saying his discovery was in a league with Einstein's. If you played early punk music to these people, not only was it not even music, it was an affront to the very idea of what they believed music to be. The idea that anyone can pick up an instrument and make music was not worth considering. It was just making a type of noise. A type of worthless "slop."

ctoth

13 hours ago

Possible confound (seems important):

"creatives" tend to have a certain political tribe, that political tribe is well-represented in places that have this precise type of authenticity/etc. language around AI use...

Basically a good chunk of this could be measuring whether or not somebody is on Bluesky/is discourse-pilled... and there's no way to know from the study.

Lerc

13 hours ago

The high usage and high anxiety track with what I have found from talking to artists IRL. There is a sense that any public expression that is not wholly against AI will draw vilification from a section of the artistic community.

There is a broad range of opinions, but their expression seems to have been extremely chilled.

doug_durham

12 hours ago

Is this any different from the adoption of any other technology? I think of the transition from practical effects to CGI in Hollywood. Anxiety levels among the creative model builders were sky-high at the time. It worked itself out, and now there are different jobs.

FuckButtons

8 hours ago

I think if you ask those specific people, you might find different answers. There are different jobs, sure, but not necessarily for those people.

WhyOhWhyQ

12 hours ago

Are they happy in their new jobs?

doug_durham

12 hours ago

I presume many are. It's a different medium, but it's still creative. We got the "MythBusters" show out of some of the model builders who didn't want to move to CGI.

WhyOhWhyQ

13 hours ago

Another thing I might throw out there is that there are so many domains and niches out there that person A and person B are almost certainly having genuinely different experiences with the same tools. So when person A says "wow this is the best thing ever" and person B says "this thing is horrible" they might both be right.

ursAxZA

12 hours ago

When railroads were built, canal operators were upset too.

000ooo000

11 hours ago

What kind of mindset do you need to trust anything a company like this has to say? A company riding the hype train, praying the bubble doesn't pop, desperately trying to turn even a profit? Would you believe cigarettes are healthy too?