Ask HN: What hacks/tips do you use to make AI work better for you?

115 points, posted 5 days ago
by rupi

Item id: 42093803

95 Comments

rongenre

5 days ago

There are a couple of use cases (beyond the obvious) that I like with the chatbots:

1. Brainstorming building something. Tell it what you're working on, add a paragraph or two on how you might build it, and ask it to give you pros and cons and ways to improve. Especially if it's a mostly well-trod design, it can be helpful.

2. Treating it like a coach - tell it what you've done and need to get done, include any feedback you've had, and ask it for suggestions. This particularly helps when you're some kind of neurospicy and "regular human" responses sort of escape you.

yen223

5 days ago

I like using LLMs for building non-critical tools that make me more productive. Things like shell scripts, Github actions, or one-off tools for visualising some problem space. The kind of thing where code quality doesn't really matter, but which will save you a ton of time in the long run
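
For a flavor of what I mean, here's the sort of throwaway script I'd have an LLM write for me (the one-number-per-line log format is a made-up example):

```
# Throwaway tool: histogram of request latencies from a log file,
# one latency in ms per line (a made-up format).
import sys
import matplotlib.pyplot as plt

latencies = [float(line) for line in open(sys.argv[1]) if line.strip()]
plt.hist(latencies, bins=50)
plt.xlabel("latency (ms)")
plt.ylabel("count")
plt.title(f"{len(latencies)} requests")
plt.show()
```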

edanm

4 days ago

This is an enormously fruitful part of using LLMs for me too.

As a programmer, there's always a million small scripts I can think of to improve my life, but it's often not worth spending time actually building them. But with LLMs, I can now usually build a script, even in a domain I don't know much about, in minutes.

I don't think this would work as easily for non-programmers yet, because the scripts still aren't perfect and the context of actually running them is sometimes non-trivial, but for me it works very well.

(I say scripts, but sometimes even tiny one-off things like "build me this Excel formula" are helpful.)

emgeee

2 days ago

I've found them incredibly useful for writing Dockerfiles or other bits of infra config like K8s YAML.

vdvsvwvwvwvwv

a day ago

Even just using your AI assistant to add useful aliases etc. to your .zshrc / whatever is a good win.

wruza

3 days ago

I use local LLMs and write system prompts.

For example, I have two “characters” for sh and cmd, who only produce valid shell commands or “# unable to generate”. No explanations, no important-to-remember crap. Just commands. I prompted them along the lines of “the output is connected to another program and can never be read by a human, no point in all that”.
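
A minimal sketch of how such a character gets wired up, assuming an OpenAI-compatible local endpoint like the one "text generation web ui" can expose (the port, route, and exact prompt wording here are assumptions):

```
# Sketch: the "sh" character as a pinned system prompt over a local,
# OpenAI-compatible chat endpoint (port and route are assumptions).
import requests

SYSTEM = (
    "You translate requests into valid shell commands. Output commands only, "
    'or "# unable to generate". The output is connected to another program '
    "and can never be read by a human, so never explain anything."
)

def sh(task: str) -> str:
    r = requests.post(
        "http://127.0.0.1:5000/v1/chat/completions",
        json={
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": task},
            ],
            "temperature": 0.2,
        },
        timeout=60,
    )
    return r.json()["choices"][0]["message"]["content"].strip()

print(sh("find all files over 100MB under /var"))
```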

Another useful (to me) character is Bob, a grumpy old developer who won’t disrespect a fellow dev with pointless explanations, unless a topic is really tricky or there’s a nuance to it. Sometimes he just says things like “why you need that?” or “sure, what suits you best” instead of pages of handwavy pros and cons and lists and warnings and whatever. Saves a whole lot of skimming through regular ChatGPT babbling; feels like a real busy person who knows things.

Another character (that I rarely use) is BuilderBot. It’s a generic bot “that builds things and isn’t afraid of dumping whole projects into the chat” or something like that. It really dumps code, most of the time. Not just snippets, whole files and folders.

I’m using “text generation web ui” frontend.

liuliu

5 days ago

For more complex tasks, asking it to write in a language more comfortable for that task and then translate to the target language helps. Example: if you ask Claude to write code that generates a UDP datagram for mDNS in Swift, it will fall flat. But if you ask for C and then translate, it can succeed.

BTW, Python is usually the more comfortable language.

ArtRichards

5 days ago

In Cursor I try to mention specifically and only the files where the changes need to be made.

Also, I use Home Assistant for my Dreame vacuum, Hue lights, and electricity monitoring, all hooked up with a ChatGPT plugin and TTS/STT... it's the default assistant on my phone and watch!

Over a year ago it was the only way to get a GPT assistant, but now I prefer it :) I can customize it as needed through 'homeass'

ilrwbwrkhv

5 days ago

Cursor and all the other AI code editors always write bad enough code that I have stopped using them completely.

I like the inline completions that things like Supermaven, GitHub Copilot, or even the JetBrains full-line completion models provide.

Apart from that, I might use Claude or ChatGPT to talk about an idea, but I don't really use them much. I prefer to use my real-life experience and skills.

Maybe if you're a junior developer, your projects might work with AI editors, but it's also the worst time for you to use them because you will never develop your own skills.

As a fancy autocomplete for a given pattern, with variations that pre-AI autocomplete didn't recognize: that's where all these tools really shine, but it's a very small subset.

The one thing which has changed my life, though, is Whisper voice input. I use it on the Mac using MacWhisper and on my phone using the FUTO keyboard, and it's just amazing. Right now I'm typing all of this without a single edit. It just gets all of it correct.

andrei_says_

2 days ago

Is it significantly better than the iOS/macOS built in dictation software?

eternityforest

5 days ago

If AI doesn't understand something, I assume it's too clever. Even if I do get the AI to figure it out, I won't understand it in a month, and other devs won't understand it at all unless I somehow convince them to read the code (as if that's going to happen!).

I think it's definitely improved my own code, because it's like always working with a pair programmer who represents a very average developer with insane typing speed. I think you kind of subconsciously pick up the style, like you would learn a literary style by reading books.

With Codeium what usually works is writing a prompt inline as a very descriptive comment.

For a non-critical piece of a personal project, I think this is the most impressive thing I've gotten AI to do:

```
float getFilterBlendConstant(float tc, float sps){
    // One-pole smoothing coefficient from a time constant (tc, seconds)
    // and a sample rate (sps, samples per second).
    return 1.0f - exp(-1.0f / (tc * sps));
}

float fastGetApproxFilterBlendConstant(float tc, float sps){
    // Same, using exp(-x) ~= 1/(1 + x) to avoid the exp() call.
    return 1.0f - (tc * sps) / (tc * sps + 1.0f);
}
```

miningape

2 days ago

> unless I somehow convince them to read the code (as if that's going to happen!)

Code is read more often than it's written. I spend maybe 60-70% of my time reading code when I'm "writing code". I don't think it will take much convincing if you're working on a project with others.

ctippett

5 days ago

On iOS I have a Shortcut that calls OpenAI's API directly so I can interact with it as an alternative to the official ChatGPT app.

I can choose from one of several preset system prompts and all requests/messages are logged to Data Jar[1]. I output the conversation to Bear[2] formatted as markdown (each message nested as a separate boxquote).

[1] https://datajar.app

[2] https://bear.app
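
For anyone curious, a rough Python sketch of what the Shortcut does (the model name, system prompt, and markdown log file here are stand-ins for my presets, Data Jar, and Bear):

```
# Rough sketch: call the API with a preset system prompt, then append the
# exchange to a markdown log (a stand-in for Data Jar/Bear).
import os
import requests

def ask(prompt: str, system: str = "You are a terse assistant.") -> str:
    r = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o",
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": prompt},
            ],
        },
        timeout=60,
    )
    answer = r.json()["choices"][0]["message"]["content"]
    with open("chat-log.md", "a") as log:
        log.write(f"> {prompt}\n\n{answer}\n\n")
    return answer
```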

purple-leafy

5 days ago

I’m currently using Claude to break down how to tackle a top-down gaming project in JavaScript/HTML using the Canvas API and TypeScript and template strings - no frameworks.

Eg: Camera positioning, game loop, frame rate vs character speed, game logic, where it should all go etc.

So far I have a green blob being drawn to the screen and moving smoothly to the right.

It’s actually quite fun; I’m enjoying it mostly because I’m not using a framework (for once). Also fun: declaring my own DOM manipulation utilities similar to jQuery ($, $$, etc.)

We use React daily at work, I think I’m a bit sick of React. Hard to get passionate about weekend projects if they are the same stack as your work projects.

lazyeye

5 days ago

I'm currently using the ChatBox desktop app, which I quite like. You can set up as many chats as you want with different providers (OpenAI, Claude, Google, etc.) and different custom instructions. I can move between different providers for a 2nd opinion, and also easily set up different chats for specific tasks like content writer, developer, product manager, and so on.

https://chatboxai.app/en

gtirloni

5 days ago

> I can move between different providers for a 2nd opinion

Do you see big differences in general or is it more a product of different system prompts?

lazyeye

4 days ago

Probably the biggest difference I notice is that Claude is a little better than ChatGPT for coding related stuff.

joblemblem

5 days ago

I’m exhausted by these types of posts.

I am a developer with >25 years of professional experience.

I am unable to get these things to do anything useful.

I’ve tried: different models, limiting my scope, breaking it down to small tasks, prompt “engineering”; and am still getting less than useless results.

I say less than useless, because I will then additionally waste time debugging, or slamming my head against, the wall the LLM built before I abandon it, go to the official docs, and find out the LLM is suggesting an API access paradigm that became deprecated in the last major version update.

People on this site love to talk about “muh productivity!”, but always stop short of saying what they got from this productivity boost: pay raise, less time working; or what they built, what level of employment they are, or who they work for.

Are all of these posts just astroturfed?! I ask that sincerely.

Do you all just make “todo SPAs” at your employers?

nuancebydefault

5 days ago

> I am a developer with >25 years of professional experience. I am unable to get these things to do anything useful.

I am a developer with >25 years of professional experience. I was able to do useful things with these tools from day one of trying.

This puzzles me so much every time I read such comments. Either I am more stupid than average and I just think the results are more useful than what I could come up with, or maybe I have a knack for finding out how these tools can work for me?

When I read such comments I ask myself: do you even use Stack Overflow, or are you smarter than those results?

solidasparagus

5 days ago

LLMs have dramatically different results depending on the domain. Getting LLMs to help me learn TypeScript is a joy; getting them to help me fix distributed consensus problems in my fully bespoke codebase makes them look worse than useless.

Some people will find them amazing, some will find them a net negative.

Although finding truly zero use for them makes it hard for me to believe that this person really tried with creativity and an open mind.

mehh

a day ago

Very much this. I have >25 years of programming experience, but not with TypeScript and React, and it's helping me with my current project. I ignore probably 2/3 of its auto-suggestions, but increasingly I now highlight some code and ask it to just do x for me, rather than having to go google the right function / CSS magic.

butlike

a day ago

Does that feel as rewarding as doing it yourself?

nuancebydefault

5 days ago

> getting them to help me fix distributed consensus problems in my fully bespoke codebase make them look worse than useless.

Often the complex context of such a problem is clearer in your head than you can write down. No wonder the LLM cannot solve it; it doesn't have the right info on the problem. But if you then suggest to it, "what if it has to do with this or that race condition, since service A does not know the end time of service Z", it can often come up with different search strategies to find that out.

joblemblem

5 days ago

> or are you smarter than those results?

“Smarter than” are your words. I’ve just yet to get any real utility from them.

> When I read such comments I ask myself, do you even use stackoverflow

I find this a strange comparison.

I’ve yet to see people replacing employees with “stack overflow”, or M$FT shoehorning “stack overflow” into their flagship product, or former NSA members joining the board of “stack overflow”, or billions in funding pouring into “stack overflow”, or constant posts to HN asking “how do you use ‘stack overflow’ to improve productivity?”.

nuancebydefault

5 days ago

To me the SO comparison makes sense.

5 years ago (and up to this day), when I stumbled into a non-obvious problem, I resorted to a search engine to read more about the context of such a problem. Often a very similar question had been posed on SO, where a problem/solution pair very similar to mine was presented. I did the translation from the proposed solution into mine. This all made perfect sense to me, and dozens of colleagues did the same.

Today you can do the same with an LLM, with the difference that the LLM often does the translation, in often very complex ways, from a set of similar problem/solution pairs into my particular problem/solution pair. It can also find out whether some set applies to my particular problem. I can ask it more questions to find out if/why the solution is good.

So that alone is a very big timesaver. But in fact what I described is just the tip of the iceberg in ways the LLM helps me.

So now my question is, do you use such SO problem/solution pairs for help, or do you simply find things out by a lot of thinking combined with reading and experimenting?

matt_s

4 days ago

Your last sentence was me earlier this year ... seeing headlines about productivity, etc. and then trying out the tools and not finding that.

I have however found it to be very helpful, completely replacing usage of StackOverflow for instance. Instead of googling for your problem, ask AI and provide very specific information like version numbers of libraries, etc. and it usually comes back with something helpful.

All those headlines are nonsense; like most content online these days, they look like marketing content, not journalism. AI tools are helpful and in some ways feel like an evolution of search engine technology from ~25 years ago. Treat its output like that of a junior developer or intern. It does require some effort, like coaching a junior dev or intern. You can also ask it stupid questions, about things like tech you haven't worked on in a while but "should know". It's helpful for getting back up to speed on things like that.

qqqwerty

5 days ago

Have you tried making a "todo SPA" of your own with the help of these AI tools? I think it is useful for folks to take a step back and try working on something simpler/easier as an intro to these AI tools. And then ramp up the complexity/difficulty from there. When the tools don't work, it can be extremely frustrating. But when they do work, they really do enhance productivity. But it takes a little bit of time to figure out where that boundary is, and it also takes a little time to figure out how much effort to put into using the tool when you are near that boundary. i.e. sometimes I know the AI tools can help me, but the amount of effort I need to put into writing the prompt is not worth the help that I will get. And other times, I know that no amount of prompting is going to get me back something useful.

wruza

3 days ago

What is the point of “ramping up” though? You don’t learn much about how to prompt in the process, you just get worse results. So now I have my useless todo and ramp up to my code and it falls face first in the mud in the next sentence. I can chew up everything for it and explain where it’s wrong or re-prompt with clarifications, but the problem is, I write code faster than that and with less frustration, cause it’s at least deterministic. And I’m neither a rockstar developer nor too smart.

What I would like from LLMs is a developer’s buddy. A chat that sits aside and ai-lints my sources and the current location, suggesting ideas, reminding of potential issues, etc. So I could dismiss or take note of its tips in a few clicks. I don’t think anyone built that yet.

Ldorigo

5 days ago

I'm always stumped by comments like this. I'm at a point where ~70% of my code is AI-written, and the majority of the remainder is hand-written mostly because it would take too much time to provide enough context to the tool/LLM of choice for it to be able to produce the code I need.

Given the right context and the right choice of model/tools, I think ~90-95% of the code I write could be generated. And this is not for doing trivial CRUD; I work on a production app with 8 other people.

I'm really curious if you could give examples of problems that you tried and failed to use these tools for?

joblemblem

5 days ago

Two recent examples.

Please link to your history when you get one of these things to build my example so I can see how you managed to do it.

First, a friend without technical knowledge wanted to query an API using SQL. (At a previous firm he learned how to write “SELECT * FROM … WHERE …” statements.)

He asked one of these LLMs that he paid a premium for to do this, and it suggested installing VSCode and writing an extension to ingest an API and to query it with Python.

I am unfamiliar with VSCode, so I'm unsure if this is even feasible, but after 3 days of trying to get it to work, he asked me, and I set up a Python environment for him and wrote a five-line script to allow him to query the data ingested from the API into SQLite.

For me, the last time I tried, I asked one to write me a container solution using Docker that allowed for live coding of a Clojure project. I wanted it to give me solutions for each: deps.edn and lein.

I wasted hours trying to get it to output anything of use for either paradigm, because it always felt "just around the corner". When I abandoned the LLM, I quickly found, via a web search, a blog post someone wrote that did exactly what I asked, for a lein project of their own; I just changed it to work for my project, and then again for the deps.edn version on my own.

oxidant

5 days ago

This isn't a knock on you. The prompt (including additional context like specific examples/docs) has an incredible influence on the output.

Do you know what your friend asked? "Query an API with SQL" sounds like you're sending SQL with POST api[.]com/query. What you built is more like

"Make a request to this endpoint, using these parameters, and store it in sqlite. This is the shape of the data coming back and this is what I want the table(s) to look like."

GPT-4o or Claude could easily write that five-line script, if given the right information.
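
For illustration, the whole script is roughly this (the endpoint and field names are hypothetical, just to show the shape):

```
# Hypothetical endpoint and field names, just to show the shape of the script.
import requests
import sqlite3

rows = requests.get("https://api.example.com/orders", timeout=30).json()
db = sqlite3.connect("orders.db")
db.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)")
db.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?)",
               [(r["id"], r["total"]) for r in rows])
db.commit()
# The friend can now run: SELECT * FROM orders WHERE total > 100;
```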

I find writing prompts and working with LLMs to be an entirely new way of thinking. My struggle has been to learn how to write out my tacit knowledge and be comfortable iterating.

Do you still have your conversation where you tried to build the Docker project?

jcinau

5 days ago

To share my experience: I tried 3-4 different LLMs, and one of them is outstanding.

For code samples, popular programming languages are much better than languages like Clojure.

Two examples. About a week ago, I found Myers' string diff algorithm and asked it to write some code; initially it spat out Python. I asked it to write Common Lisp code, and it generated about 90%-complete code. I rewrote it again, and the whole thing took less than a day. It was my first time seeing 'quality' from machine-generated code.

I experimented further. I found Automerge-Java and wanted to write a Clojure wrapper. So I asked it how to parse Java source files, and it showed me Python code. I ran it and gave it some feedback, and then I could get almost perfect output, which is easy to process from the Clojure side. After three days, I could write the interface generator. From my experience, this type of work is a time-consuming process, and three days is pretty good, I think. I fed it concrete patterns and pointed out mistakes fewer than ten times.

Overall, it still lacks 'creativity', but for concrete examples with the 'right' patterns, it saves a huge amount of time.

manishsharan

2 days ago

Hold on..

In my experience, I found that ChatGPT writes awesome Clojure code. So much so that most of my Clojure code in the last few months was written by it. Sure, it gets some stuff wrong, but overall it knows more Clojure functions than I do.

My prompts start by asking it questions about appropriate functions to use, and then have it write the code itself. The prompts have to be a bit verbose and give it instructions to evaluate and think before generating output.

Ldorigo

5 days ago

As the other commenter said, prompting is everything, and most LLMs are sycophants and will try to do anything you tell them without pausing to tell you "why the hell are you trying to query an API with SQL? That's not what SQL is for". While it's possible to build stuff with llms with little to no technical knowledge, it's still very hit and miss.

With that said, the space is moving incredibly fast, and the latest Claude/GPT-o1 are far ahead of anything that was available 3-6 months ago. Unfortunately, Claude doesn't allow sharing publicly like ChatGPT does, but here is a gist of Claude's answer to more or less the same question your friend asked:

https://gist.github.com/ldorigo/1a243218e00d75dd2baaf0634640...

I'm on mobile, so it wasn't handy to quickly paste an example API request/documentation for the LLM to follow, so there's a chance it might have hallucinated some of the API parameters. But if I had included that, in my experience the code would work on the first shot 90% of the time.

Regarding your second query, I'm too unfamiliar with Clojure and the two solutions you mentioned to really understand what you were trying to achieve, but if you explain just a little bit more, I'm happy to record a screencast of me figuring it out with LLM/genAI tools from the ground up. What do you mean by "a container solution that allows for live coding"?

oxidant

5 days ago

Not OP, but I believe Clojure has a REPL that lets you run and edit code, persisting the changes from the REPL.

Off the top of my head, you would want a Dockerfile with the version of Clojure you're working in and a mounted volume for sharing data between host and container. My guess is the two different things they mentioned are dependency sources.
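
Untested, but a sketch of the Dockerfile I'd start from might look like this (the image tag, port, and lein invocation are assumptions):

```
# A sketch, untested; the image tag and port are assumptions.
FROM clojure:temurin-21-lein
WORKDIR /app
# Cache dependencies in their own layer.
COPY project.clj .
RUN lein deps
# Source is bind-mounted at run time for live editing, e.g.:
#   docker run -it -p 7888:7888 -v "$PWD":/app clj-dev
EXPOSE 7888
CMD ["lein", "repl", ":headless", ":host", "0.0.0.0", ":port", "7888"]
```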

LLMs require a certain -fu, similar to the Google-fu of old, to get what you want out of them.

acuozzo

2 days ago

Not OP, but I've been trying to use Copilot to help me determine how to force Android's ConnectivityService to select the network I've created as its default.

The network shows up as the default network in netd. It shows up as a network when I dumpsys connectivity, but I cannot get it to be what ConnectivityService considers default.

I'm open to changing the code within AOSP as this is a research project, but even then Copilot just has no idea what to do and it keeps recommending the same non-working "solutions" over and over again.

FWIW, I'm using Copilot Pro.

mehh

a day ago

It’s a niche thing you’re trying to do, and it has likely not seen code that does that, thus it can’t help… it can’t actually think its way around it.

acuozzo

a day ago

Has it not ingested the entire AOSP codebase? I was under the impression that OpenAI had trained GPT-* on just about everything available to the public.

It's not necessarily niche either. The codebase already does this all the time. I just can't figure out why it won't do it for me.

FWIW, at least 80% of my time in writing software for an R&D laboratory is devoted to solving problems like this.

ilrwbwrkhv

5 days ago

Ya, they are either astroturfed or... I think there are just a lot of young junior JavaScript developers who really haven't built a full program with multiple features by themselves.

I think they do some sort of online tutorial, go through some sort of course, and then get a job, but they're only doing small pieces of code writing themselves, and I guess that's where these editors help them some more.

You see more and more YC startups these days using TypeScript and Node as their stack, which is so strange to me.

But I agree with you. The AI stuff hasn't worked for me at all apart from some smarter autocomplete.

williamcotton

5 days ago

I've been programming for about 30 years and I get a lot of benefit from these tools.

My day job is mainly data science and data forensics and these LLM tools are fantastic for both as they excel at writing scripts and data processing tools. SQL queries, R plots, Pandas data frame manipulation, etc.

They also work well for non-trivial applications like these, which I made with Claude Projects writing 90-95% of the code:

https://github.com/williamcotton/guish

https://github.com/williamcotton/search-query-parser-scratch...

shanecp

5 days ago

No need to be exhausted by the post. If AI doesn't help you, move on.

You're probably just smarter and faster than the average developer.

The post is about finding out what things can help make it work for others. :)

wruza

3 days ago

I suspect it’s not a smarter-developer thing, but a stupider-code thing.

Programming for a client is making their processes easier, ideally with as few clicks as possible. Programming for a programmer does the same for programming.

The thing with our “industry” is that it doesn’t automate programming at all, so smart people “build” random bs all day that should have been made a part of some generic library decades ago and made available off the shelf by all decent runtimes.

Making a form with validation and data objects, and a backend with an ORM/SQL connection and migrations and auth, etc. etc. It has all been solved millions of times, and no one bats an eye at why tf they reimplement multiple klocs of it over and over again.

That’s where AI shines. It builds you this damn stupid form that takes two days of work otherwise.

Very nice.

But it’s not programming. If anything, it’s a shame. A spit in the face of programming that somehow got normalized by… not sure whom. We take a bare, raw runtime like node/python/go and a browser and call it “a platform”. What platform? It’s as much a platform as INT 13h is an RDBMS.

I think the current division in AI usefulness clearly shows us that, but most are blind to it out of inertia.

anonzzzies

5 days ago

We re-trained or let go everyone who does not want to, or cannot, be a client-facing consultant; with our in-house builder, we don't need full-time devs anymore. It's 'productive' as in much higher profit margins with fewer people.

We are in the custom ERP space.

rwyinuse

5 days ago

I suspect this will be the future for many devs who develop boring CRUD apps, like ERP. There's no point having developers who only convert requirements crafted by others into code if LLMs can speed up that part enough. Such a basic developer role will largely merge with the business person / product owner / project manager role.

Ultimately, I think it's easier to teach business skills to a developer than to teach a business person enough code fluency to produce and deploy working code with the help of LLMs.

joblemblem

5 days ago

Thanks for the reply.

I would like to explore this more if you are willing.

Do you consider yourself a “developer”? What is your title at said company?

Do you write code for yourself or for this business?

Who determined and what criteria define who “cannot be a client facing consultant”?

What is an “in house builder”?

What were these “fulltime devs” that you said “you don’t need anymore” doing before these LLMs?

Do your customers know you swapped from human workers to LLMs? Are they comfortable with this transition?

How did this change result in “much higher profit margins”?

When you say “with less people”, did you just give multiple people’s workloads to a single dev, or did the devs you retained ask for more work?

What do you use an LLM for in the ERP space?

Why would clients use you if they could just use the LLM?

anonzzzies

5 days ago

> Do you consider yourself a “developer”? What is your title at said company?

Yes, for the past 40 years. And CTO/co-founder.

> Do you write code for yourself or for this business?

I have been writing DSLs, code generators, and other tooling for around the past 20 years for this company. Before that I did the same thing for educational software (also my company).

> Who determined and what criteria define who “cannot be a client facing consultant”?

They did; some people just don't like sitting with clients noting down very dry formulae and business rules.

> What is an “in house builder”?

Our in-house tooling which uses AI to create the software.

> What were these “fulltime devs” that you said “you don’t need anymore” doing before these llms?

Building LoB apps, bugfixing, maintaining, translating Excel or business rules into (Java) code.

> Do your customers know you swapped from human workers to llms? Are they comfortable with this transition?

Yes, they like it; faster (sometimes immediate results) and easier to track; no black box; just people sitting next to you.

> How did this change result in “much higher profit margins”?

Very high fees for these consultants, but now they do 'all the work'; in total they bill more hours than they did before, though much fewer than they did as programmers. But the fees are such a multiple that the end result is larger profits.

> When you say “with less people” did you just give multiple peoples’ workloads to a single dev or did the devs you retained ask for more work?

Yes, 1 consultant now does that work and can manage more.

> What do you use an llm for in the ERP space?

Feed it specs, which get translated to software. This is not the type of 'hey mate, get me a logistics system in German'; the specs are detailed and in the technical format we have also used to write code ourselves for the past 20+ years.

> Why would clients use you if they could just use the llm?

See above, we have a lot of know-how and code built in. That's why we cannot really sell this product either, as no one will get useful stuff out of it without training.

joblemblem

5 days ago

Thanks for taking the time to answer.

It sounds like you already had 20+ years of human-made tooling built, and you use the LLM for orchestration and onboarding(?)

I’m glad you found a solution that works for you.

I could see that use case.

When I did consulting work, the initial onboarding of new clients to my tooling was a lot of drudge work, but I definitely felt my job was more about the work after that phase: satisfying requests for additional features, and updating out-of-date methods and technologies.

I wonder what your plans are for when your tools fall out of date or fail to satisfy some new normal?

Hire “seasonal” programmers again? Or have an LLM try to build analogues to what your developers built over those precious 20+ years?

(‘precious’ was a typo of ‘previous’ but I left it in because I thought it was funny)

anonzzzies

5 days ago

Well, it's one of my businesses, so I will probably sell it. I have others which I like a lot more and which have more staying power (and are less bothered by AI; it helps, but not enough yet). My favorite is a business that does very urgent emergency software repairs: the current LLMs are way too hallucinatory for that; they waste too much time, and I haven't managed to build really solid tooling around them. You cannot imagine how terrible, and therefore unique/diverse, software around the world is.

fsloth

2 days ago

I get tremendous value. But only when using APIs that have 'always' been more or less stable.

I agree, systems with rapidly evolving feature sets are painful.

Successes: Git, any bash script, misc linear algebra recipes. Random debug tools in JavaScript (JS and plain old HTML are stable enough). C++. C#. Sometimes Python.

The biggest value currently, I guess, is the data debug tool I wrote myself specifically for an ongoing project.

Now, the 'value' to me here means I don't have to toil at some tedious, trivial task which is burdensome mainly because everybody uses different names for similar concepts, and modern computing systems are a mishmash of a dozen things that work slightly differently.

So, to me ChatGPT is the idiot savant assistant I can use for a bunch of tedious and boring things that are still valuable doing.

I get paid for some niche very specific C++ stuff I’m fairly good at and like doing. But it’s the 85% of the rest of the things (like git or CMake or bash) I can’t stand.

mehh

a day ago

I’m working on a Next.js project. Next.js made a bunch of breaking changes and doesn’t document things consistently or comprehensively, so I have a lot of grief using LLMs on this framework.

This is something frameworks/libs/APIs should factor in for the future: how can you make your project LLM-friendly in order to make it dev-friendly?

butlike

a day ago

There's an element of "I'm not going to do your homework for you" I find sometimes.

I've also never asked it to spit out more than an HTML boilerplate or two, but it is useful for asking which is the better option when given a choice between two programming patterns.

chipdart

5 days ago

> I am unable to get these things to do anything useful.

My experience is widely different than yours. I use Copilot extensively to explain aspects of codebases, generate documentation, and even fill in boilerplate code for automated tests. You need to provide the right context with the right system prompts, which needs some effort from your end, and you cannot expect perfect outputs.
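
For example, the kind of test boilerplate I have it fill in looks something like this (`slugify` and its module are hypothetical, purely for illustration):

```
# Sketch of typical generated test boilerplate; `slugify` is a hypothetical
# function under test, imported from a hypothetical module.
import pytest
from myapp.text import slugify

@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Hello World", "hello-world"),
        ("  extra  spaces  ", "extra-spaces"),
        ("Already-slugged", "already-slugged"),
    ],
)
def test_slugify(raw, expected):
    assert slugify(raw) == expected
```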

In the end it's like any software developer tool: you need to learn how to use it, and when it makes sense to do so. That needs some continuous effort from your end to work your way up to a proficient level.

> People on this site love to talk about “muh productivity!”, but always stop short of saying what they got from this productivity boost: (...)

I don't understand what you're trying to say. I mean, have you ever asked that type of loaded question in discussions about googling for answers, going to Stack Overflow, or even posting questions on customer support pages?

But to answer your question, I spend far less time troubleshooting, reading code to build up context, and even googling for topics or browsing Stack Overflow. I was able to gather requirements, design whole systems, and put together proofs of concept requiring far fewer iterations than what I would otherwise have to go through. This means less drudge work, with all the benefits to quality of life that this brings.

joblemblem

5 days ago

These services retain historical records of interactions.

Can you show me an example of successfully doing what you claim you do?

williamcotton

5 days ago

This is a WIP, but here's the test suite for a recursive-descent-powered search DSL:

https://github.com/williamcotton/search-query-parser-scratch...

Claude, using Projects, wrote perhaps 90% of this project with my detailed guidance.

It does a double pass: a first recursive-descent pass to get strings as leaf nodes, and then another pass so that multiple errors get reported at once.

There's also a React component for a search input box powered by Monaco and complete with completions, error underlines and messaging, and syntax highlighting:

https://github.com/williamcotton/search-query-parser-scratch...

Feel free to browse the commit history to get an idea of how much time this saved me. Spoiler alert: it saved a lot of time. Frankly, I wouldn't have bothered with this project without offloading most of the work to an LLM.

There's a lot more than this and if you want a demo you can:

  git clone git@github.com:williamcotton/search-query-parser-scratchpad.git
  cd search-input-query-react
  npm install
  npm run dev
Put something like this into the input:

  -status:out price:<130 (sneakers or shoes)
And then play around with valid and invalid syntax.

It has Sqlite WASM running in the browser with demo data so you'll get some actual results.

If you want a guided video-chat tour of how I used the tool, I'd be happy to arrange that. It takes too much work to get the chat history out of Claude.

malux85

5 days ago

> These services retain historical records of interactions.

That's not universally true; for example, AWS hosts their own version of Claude specifically for non-retention and guarantees that your data and requests are not used for training. This is legally backed up, and governments and banks use this version to guarantee that submitted queries are not retained.

I’m a developer with about the same amount of experience as you (22 years), and LLMs are incredibly useful to me, but only really as an advanced tab completion (I use the paid version of Cursor with the latest Claude model), and it easily 5x’s my productivity. The most benefit comes from refactoring code where I change one line, the LLM detects what I’m doing, and it then updates all the other lines in the file. Could I do this manually? Yes, absolutely, but it just turned a 2-minute activity into (literally) a 2-second activity.

These micro speed ups have a benefit of time for sure, but there’s a WAY, WAAAY larger benefit: my momentum stays up because I’m not getting cognitively fatigued doing trivialities.

Do I read and check what the llm writes? Of course.

Does it make mistakes? Sometimes, but until I have access to the all-knowing perfect god machine I’m doing cost benefit on the imperfect one, and it’s still worth it A LOT.

And no, I don’t write SPA TODO apps; I am the founder of a quantum chemistry startup. LLMs write a lot of our helpers and deployment code, review our scientific experiments, help us brainstorm, and write our documentation, tests, and much more. The whole company uses them, and they are all more productive by doing so.

How do we know it works? We just hit experimental parity, and labs have verified that our simulations match predictions with a negligible margin of error. Could we have built this without LLMs? Yes, sure, but we did it in 4.5 months; I estimate it would have taken at least 12 without them.

Again - do they make mistakes? Yes, but who doesn’t? The benefits FAR outweigh the negatives.

chipdart

4 days ago

> Can you show me an example of successfully doing what you claim you do?

In theory nothing technically prevents me from doing that, but I use it for professional work. Do you understand what you're asking?

stocknoob

5 days ago

If you were sincere you’d share a single transcript where the AI was completely useless for solving your problem.

“This new search engine sucks it can’t find anything”.

“Share what you searched for”

“No”

jbeninger

5 days ago

Start small and you might see the value. I find AI useless for producing code for me. But if I'm stuck on naming a variable or a complex object, it's a great brainstorming tool. Almost like a very complex thesaurus.

And if I need to write a shell script that's good enough for a single one-off job, well, shell script is far from my native programming language, so it'll do a better job than I will

fragmede

5 days ago

I’m exhausted by these types of posts.

They never include concrete details on what they're trying to do: what languages they're using, what frameworks, which LLM. They occasionally state which tool, but then don't go into detail about how they're using it. There are never any links to chats/sessions showing the prompts they're giving it and the answers they're finding so unacceptable.

Imagine if you got bug reports from customers with that little detail.

Actual in-depth details would go a long way to debugging why people are reporting such different experiences.

It takes a back-and-forth exchange with the LLM for it to make progress. Expertise in using an LLM is not just knowing what to prompt it with, but more importantly, when to stop, fix the code yourself, and keep going, without throwing the baby out with the bathwater just because you still had to do something by hand, where the baby is "using an LLM in the first place".

If I had to guess, though, I think that's where people differ. Just like with every skill, there's a beginner's plateau where you hit a wall and have to push through (fatigue/boredom/disillusionment/etc). If the way you're using the LLM means you haven't gotten a hallucination by then, and you've seen how wildly more productive it makes you and how it's able to take away some of the bullshit in programming; if no bad stuff has hit the wall and you take to it like a fish to water, you can push through some of the dumber errors it makes.

If, however, you are doing something esoteric (aka not using JavaScript/python) and are met with hallucinations and scrutinize every line of code it produces, even going into it with an open mind, it's easier to give up and just stop there. That may not even be the wrong thing to do! Different programmers deliver value in different ways. You don't want Gilfoyle when you need Richard Hendriks, or vice versa, a company needs both of them.

So: show us the non-functional wall on GitHub the LLM built, or even just name the language used and the library it hallucinated.

But again, getting perfect code out of the LLM is a non-goal, don't get distracted by it. LLM-assisted or not, you get graded on value derived from code that actually gets committed and sent for review and put into production. So if the LLM is being dumb, go read and fix the code, give it your fixed code, and move on with your life, or at least into the next TODO/ticket.

wruza

3 days ago

No one includes complete details when saying it's useful and life-changing either, so that's fair. It might turn out that what works for those for whom it works is trivial "code" not worth the SSD blocks it occupies. This is actually my current theory, cause LLMs (all of them, yes, we tried all of them) are capable of what I tend to think of not as programming but as industry nonsense which should have been automated/abstracted/libraried away ages ago.

Maybe show us the successful code it built and we'll see what type it is, cause recording failures is only useful in hindsight. I have no logs of lengthy struggles with LLM stupidity.

> getting perfect code out of the LLM is a non-goal

It stops being a goal after just a few tries, naturally. The problem is usually not that it isn't perfect; the problem is that it doesn't understand the problem at all and tends toward some resembling mediocrity instead. You can't just fix it and move on.

handzhiev

2 days ago

"No one includes complete detail at saying it’s useful and life-changing too"

There are at least 3 posts in this very discussion sharing details and GitHub repos with code written mostly by an LLM.

wruza

2 days ago

I see a weather.com class, a parser, and a React GUI boilerplate boilerplate TSX folder. All three are textbook and trivial areas. We are only missing an ad hoc CRUD ORM here.

In my opinion that is not code; it's not business logic. What is presented is (not to offend anyone, I look at code, not people) the useless github-code carcasses that it contains in abundance. Real code solves problems; this code solves nothing, it just exists. A parser, a UI, an HTTP query: it's boilerplate boilerplate boilerplate. You aren't coding when writing it. It's "my arduino is blinking LEDs" level of programming.

I think that’s the difference in our perception. I won’t share my current code, purely for technical reasons, but for an overview: it fuzzy-detects elements on virtual displays and plays simple games with dynamic objects on screen, behaving completely human input-wise, all based on a complex statistical schedule. It uses a stack of tech and ideas that LLMs fail at miserably. LLMs are completely useless at anything in there, because there’s basically no boilerplate and no “prior art”. I probably could offload around 15% to an LLM, but through the pain of explaining what it’s supposed to assist with.

Maybe it’s me, but I think that most of the jobs that involve trivial things like “show fields with status” or “read/write a string format” are not programming jobs, but an artefact of a stupid industry that created them out of the mud-level baseline it allowed to persist. These should have been removed long ago, regardless of AI. People just had way too much money (for a while) to paycheck all that nonsense.

Edit: I mean not just removed, but replaced with instruments to free these jobs from existing. AI is an utterly sarcastic answer to this problem, as it automates and creates more of that absurdity rather than less.

valval

4 days ago

That’s funny — I have the opposite opinion, and think people like you might be poor engineers or problem solvers. These tools are amazing for productivity.

paulddraper

2 days ago

Sorry.

Hope the other posts here help you.

emptiestplace

5 days ago

I started with BASICA and GWBASIC in the 80s, and though I've had some diversions, I would say there haven't been many days since where I haven't thought about solving problems with code. I still don't feel particularly qualified to answer, but I guess I probably am.

> Are all of these posts just astroturfed?! I ask that sincerely.

This is amusing to me - since GPT 4 or so, I've been wondering if the real fake grass is actually folks saying this shit is useless.

I think I'd need a bit more insight into how you are trying to use it to really help, but one thing you wrote did stand out to me:

> the llm is suggesting an API access paradigm that became deprecated

Don't trust its specific knowledge any more than you absolutely have to. For example, I was recently working with the Harvest API, and even though it can easily recite a fair bit about such a well-known API on its own, I would never trust it to.

Go find the relevant bits from API/library docs and share in your prompt. Enclose each bit of text using unambiguous delimiters (triple-backtick works well), and let it know what's up at the start. Here's a slightly contrived example prompt I might use:

---

Harvest / ERP integration - currently we have code to create time entries with a duration, but I'd like to be able to also create entries without a duration (effectively starting a timer). Please update providers/harvest.py, services/time_entries.py, endpoints/time_entries.api accordingly. I've included project structure and relevant API docs after my code. Please output all changed files complete with no omissions.

providers/harvest.py: " contents "

services/time_entries.py: " contents "

endpoints/time_entries.api: " contents "

project structure: "
/app
  /endpoints
    time_entries.py
    project_reports.py
    user_analytics.py
  /services
    time_entries.py
    project_reports.py
    user_analytics.py
  /providers
    harvest.py
"

harvest docs: " relevant object definitions, endpoint request/response details, pagination parameters, etc "

---

I have a couple of simple scripts that assist with sanitization/reversal (using a keyword db), concatenation of files with titles and backticks > clipboard buffer, and updating files from the clipboard, either with a single diff confirmation or hunk-based (like `git add -p`). It isn't perfect, but I am absolutely certain it saves me so much time.
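
The concatenation script, for instance, is only a few lines. A rough sketch (pbcopy is macOS; swap in xclip or similar elsewhere):

```
# Sketch of the concatenation helper: given file paths, build one blob with
# titles and backtick fences, then put it on the clipboard.
import subprocess
import sys

fence = "`" * 3  # avoids writing a literal fence inside this example
chunks = []
for path in sys.argv[1:]:
    with open(path, encoding="utf-8") as f:
        chunks.append(f"{path}:\n{fence}\n{f.read()}\n{fence}\n")
subprocess.run(["pbcopy"], input="\n".join(chunks), text=True, check=True)
```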

Also, I never let it choose libraries for me. If I am starting something from scratch, I generally always tell it exactly what I want to use, and as above, any time I think it might get confused, I provide reference material. If I'm not sure what I want to use, I will first ask it about options and research on my own.

fragmede

5 days ago

>> Are all of these posts just astroturfed?! I ask that sincerely.

>This is amusing to me - since GPT 4 or so, I've been wondering if the real fake grass is actually folks saying this shit is useless.

Heh. If you really wanna conspiracy-theory it: if you wanted positive marketing copy to sell people on using an LLM, would you just use plain ChatGPT, or, because you're in the industry, would you post earnest-sounding anti-LLM messages, then use the responses as additional training data for fine-tuning LLMtoolMarketerGPT, and iterate from there?

emptiestplace

4 days ago

I suspect there may have been a few levels of conspiracy that you had to work through before you got to where you are today.

emgeee

2 days ago

Commenting code and generating documentation.

I like to copy entire python modules into the context window and say something like "add docstrings to all methods, classes, and functions".

You can then feed the code into something like sphinx or pdoc to get a nice webpage.
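
The kind of result I'm after looks something like this (the function is a made-up example, and the exact pdoc invocation depends on your setup):

```
# The kind of output I'm after: the model fills in docstrings like this.
def blend(a: float, b: float, t: float) -> float:
    """Linearly interpolate between two values.

    Args:
        a: Start value.
        b: End value.
        t: Blend factor in [0, 1].

    Returns:
        The interpolated value.
    """
    return a + (b - a) * t

# Then something like `pdoc mymodule` serves the result as a web page
# (the exact invocation depends on your pdoc or sphinx setup).
```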

igor47

5 days ago

Curious how people interact with LLMs besides just going to chat.com/Claude directly. I've been trying aichat, but I'm not sure yet if it's worth it, especially given the token pricing vs. the flat-fee structure on the website.

uxhacker

5 days ago

I’ve found using AI tools strategically helps boost productivity. I use OpenAI Chat, Cursor, and Claude, each for specific purposes. Claude is great for coding without memory and serves as a nutrient-tracking diary, but I occasionally reset prompts to manage memory limitations. Cursor handles programming well but requires persistent, structured questioning. OpenAI Chat’s memory is a double-edged sword, useful yet needing occasional edits. Living in Poland, I rely on both Claude and OpenAI for translations, especially between English, Polish, and Ukrainian.

The humor comes when AI ‘misunderstands’: once, ChatGPT hilariously offered to vacuum a room instead of translating the words "Please vacuum the room".

danjl

a day ago

Even with a perfect coding oracle, you need to ask the right questions to get good answers. The LLM will cheerfully give you bad answers if you ask the wrong question. The implication is that you still need to learn the problem space yourself, often via the LLM, in order to ask good questions. Junior developers will ask bad questions, get dangerous code, and love it. Good programmers will learn stuff much quicker. Asking good questions is not easy. As LLMs get better, we will need to ask better and better questions in order to write more and more complex apps. This happened when we got open source libraries via the Internet (we all used to suffer alone with our immediate coworkers in the before times).

OnionBlender

2 days ago

I've been struggling to make use of these tools for C++ projects. My boss keeps asking me to try using these tools to generate documentation or unit tests, but the results are pretty worthless. At best, the comments it generates are the low-value kind that should be obvious from the function or parameter names; the kind of redundant comments that people write if they are required to write a comment. At worst, it straight up lies about what the function does.

The code it writes reminds me of the kind of code my former Java co-workers would write when our company switched to C++.

I find these tools okay for creating simple Python scripts or showing me how to do something in PowerShell or bash.

andrewinardeer

5 days ago

I'm parsing 3,000 PDFs a month straight into our system, all of which were previously entered manually.

Total game changer.

Sateeshm

4 days ago

Are you confident that it's getting them right every time?

andrewinardeer

2 days ago

No, I'm not confident they are getting them right every time. A human is in the loop.

screye

2 days ago

Not a hack so much as a PSA:

There is no shame in dumping your whole codebase into Claude. Use any tool (aidigest, Simon's tools). It just works. It's especially useful for complex codebases.

Same applies for RAG. Unless you're damn sure what you're looking for, don't load that 1 chunk in. If your model can fit the full context or 50 chunks in the token window, dump it all in. Sophisticated RAG works, but like fine tuning, it usually makes things worse before they start getting better.

Speaking of finetuning: same PSA. Dump as many few-shot examples as you can into the token window before resorting to finetuning.
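
A sketch of what that looks like, with made-up examples for a ticket-classification task:

```
# Few-shot in the prompt before reaching for finetuning: made-up examples
# for a ticket-classification task.
EXAMPLES = [
    ("Refund hasn't arrived after 10 days", "billing"),
    ("App crashes when I rotate my phone", "bug"),
    ("Can you add a dark mode?", "feature-request"),
]

messages = [{"role": "system", "content": "Label each ticket with one category."}]
for text, label in EXAMPLES:
    messages.append({"role": "user", "content": text})
    messages.append({"role": "assistant", "content": label})
messages.append({"role": "user", "content": "I was charged twice this month"})
# `messages` then goes to whatever chat completions API you use.
```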

devoutsalsa

5 days ago

I’m working on an idea for a business model in recruiting that doesn’t exist yet (to my knowledge). I found an LLM (Claude Sonnet 3.5) to be very helpful in finding the right verbiage to describe it.

What I did was come up with random prompts asking it to design a landing page. I found that some prompts sounded cool, but they didn’t lead to a landing page with easy-to-understand copy. But other prompts led to a landing page that was simple and straightforward.

I realized that an LLM can be used to assess what kind of language is commonly used, because the LLM was trained on real-world data. So I use it to say something new in a way things have been said before.

muzani

4 days ago

Treat it like a person. Claude especially excels when you tell it to be a specific persona [1].

They're trained on communicative text and questions, so they handle these extra well. Give it proper specs; I bought a whiteboard to take photos of to give it.

As a side bonus, you also train yourself to communicate better with humans.

[1] https://docs.anthropic.com/en/docs/test-and-evaluate/strengt...

xk_id

5 days ago

For our entire history we’ve adapted technology to our requirements. Now some people believe that adapting our workflows to make use of “technology” is revolutionary.

lazyeye

4 days ago

This is not remotely true. We've been adapting our workflow to technology since forever.

xk_id

4 days ago

Which other technologies in the past couldn’t be debugged?

acureau

2 days ago

I like to ask LLMs to critique my code, they often point out a good number of legitimate improvements and oversights.

haolez

2 days ago

Asking it to be an asshole and to be opinionated. This seems to tap deeply into the LLM's knowledge.

iknownthing

2 days ago

I mostly use it in a dictation sort of way. I tell it what I want, it gives me its first guess. I read it and tell it what to change. I continue this process until it's what I want. Usually faster than writing it myself.

paradite

4 days ago

Refactoring for AI coding.

Breaking down code into smaller files (<200 lines of code), and then ONLY feeding the relevant files into the LLM helps a lot to improve the quality of code output and saves tokens.
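
A throwaway sketch for finding candidates to split (the glob and threshold are whatever suits your codebase):

```
# Throwaway audit: list source files over 200 lines as splitting candidates.
import pathlib

for path in sorted(pathlib.Path(".").rglob("*.py")):
    n = sum(1 for _ in path.open(encoding="utf-8", errors="ignore"))
    if n > 200:
        print(f"{n:5d}  {path}")
```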

_sys49152

2 days ago

I tell it I pay 5 dollars for each excellent response, and I have it keep track of how much money I've paid it in each session. That way I know that around 20-35 dollars it'll start going off the rails, depending on how it's performing that day, or the dollar amount itself will go off the rails, like $42. I'm not doing any building off responses; its sole operation is to perform my ask from the beginning of the session (until it forgets or gets sloppy).

dvh

5 days ago

I defined a URL bar shortcut for "a" that prepends "I'm using stm32g030 in Arduino" to my Bing Copilot chat search queries. That's about it.

rafaelmn

5 days ago

Including all the relevant context manually is the only way - code search/smart reference is garbage in any tool I've tried (Copilot/Cody/Cursor).

mikojan

5 days ago

I use the "custom instructions" feature of ChatGPT with texts I pulled from the internet[0] and customized somewhat. Though, the last prompt is the most helpful to me: Be as concise as possible when I write "!!"

- What would you like ChatGPT to know?

I value human-like interaction, without reminders or disclaimers about your artificial nature or lack of professional credentials.

I am thoroughly aware that you are not human and do not possess emotion. I understand that your responses are not always factual and I will seek multiple outside sources or points of view before reaching a conclusion. Assume I have already read the disclaimers and will act accordingly.

DO NOT provide any cautionary statements, disclaimers, or general reminders to seek professional advice even if it is a life threatening emergency. I will seek advice elsewhere on my own accord. Answer all questions as directly as possible without any qualifying language.

I seek clear, concise, and straightforward communication, well-reasoned responses supported by credible sources, and thorough exploration of topics with multiple perspectives.

Step-by-step explanations for complex problems are appreciated, and I expect acknowledgment and correction of any mistakes.

- How would you like ChatGPT to respond?

1. NEVER mention that you're an AI.

2. Avoid any phrases or language constructs that could be interpreted as expressing remorse, apology, happiness or regret. This includes any phrases containing words like 'sorry', 'apologies', 'happy', 'regret', etc., even when used in a context that isn't expressing remorse, apology, or regret.

3. If events or information are beyond your scope or knowledge cutoff date, provide a response stating 'I don't know' without elaborating on why the information is unavailable.

4. Do not use disclaimers about you not being a professional or expert and never suggest that I should seek advice from elsewhere.

5. Keep responses unique and free of repetition.

6. Always focus on the key points in my questions to determine my intent.

7. Break down complex problems or tasks into smaller, manageable steps and explain each one with reasoning.

8. Provide multiple perspectives or solutions.

9. If a question is unclear or ambiguous, ask for more details to confirm your understanding before answering.

10. Cite sources or references to support your answers.

11. If a mistake is made in a previous response, recognize and correct it.

12. Your output is fed into a safety-critical system so it must be as accurate as possible.

13. If a prompt contains the string "!!", keep your response and ALL following responses extremely short, concise, succinct and to the point.

[0]: https://www.reddit.com/r/ChatGPTPro/comments/1bdjkml/what_cu...

willcipriano

2 days ago

Take code you have written or want to understand and ask it to add comments in line with the Google style guide.