Ask HN: What's the future of software testing and QA?

23 points | posted 3 days ago
by sjgeek

Item id: 46484805

17 Comments

vivzkestrel

3 days ago

LLMs are still very bad at mocking external dependencies. I have had a lot of trouble with Vitest mocks for Postgres and Redis so far. I still have a Playwright test for an SSR page that is supposed to mock a server request; at the moment it still makes an actual request to the server, and none of the LLMs have managed to fix that.
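For what it's worth, one way to sidestep brittle module-level mocks entirely is to inject the database client behind a narrow interface, so a plain in-memory stub stands in for Postgres with no `vi.mock` needed. A minimal sketch (the `Db` interface and `getUser` function are hypothetical, not from the comment):

```typescript
// Hypothetical data-access code: depend on a narrow interface,
// not on the pg module itself.
interface Db {
  query(sql: string, params: unknown[]): Promise<{ rows: any[] }>;
}

async function getUser(db: Db, id: number): Promise<string | null> {
  const res = await db.query("SELECT name FROM users WHERE id = $1", [id]);
  return res.rows[0]?.name ?? null;
}

// In tests, a plain in-memory stub replaces Postgres entirely.
const fakeDb: Db = {
  async query(_sql, params) {
    const rows = params[0] === 1 ? [{ name: "ada" }] : [];
    return { rows };
  },
};

getUser(fakeDb, 1).then((name) => console.log(name)); // prints "ada"
```

The production code receives a real `pg` client; the test receives the stub. Nothing is patched at the module level, so there is nothing for an LLM (or a human) to get wrong about the mocking framework.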

qsera

3 days ago

Wouldn't there be a lot more QA jobs? If software gets written faster by AI, humans will surely need to test that it meets the requirements.

So I see a great future for testing and QA.

nishilpatel

2 days ago

QA as we know it will shrink. Thinking will not.

AI will kill flaky UI scripts, and “click-and-verify” roles. That’s overdue. What won’t disappear is the need to understand how systems actually behave under stress, failure, and ambiguity.

The future of QA is upstream:

• defining invariants, not writing scenarios
• modeling state and failure modes, not chasing bugs
• debugging distributed, async, messy real-world systems
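To make "defining invariants, not writing scenarios" concrete, here is a hypothetical illustration (the `applyDiscount` function is invented for the example): a scenario test pins one input/output pair, while an invariant constrains every input at once.

```typescript
// Hypothetical function under test.
function applyDiscount(price: number, percent: number): number {
  const p = Math.min(Math.max(percent, 0), 100); // clamp to [0, 100]
  return price * (1 - p / 100);
}

// Invariant: the result is never negative and never exceeds the price,
// for ANY non-negative price and ANY percent, even out-of-range ones.
function holdsInvariant(price: number, percent: number): boolean {
  const result = applyDiscount(price, percent);
  return result >= 0 && result <= price;
}

// Check the invariant over many random inputs, including bad percents.
for (let i = 0; i < 1000; i++) {
  const price = Math.random() * 1e6;
  const percent = Math.random() * 300 - 100; // deliberately out of range
  if (!holdsInvariant(price, percent)) throw new Error("invariant violated");
}
console.log("invariant held for 1000 random inputs");
```

The invariant survives refactors that would break a hard-coded scenario, which is the sense in which this work sits "upstream" of test scripting.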

AI will generate tests faster than humans ever can. But it won’t know what matters or what assumption is wrong. That judgment still belongs to engineers.

If you’re in QA and want to stay relevant: stop being a test executor. Become the person who explains why the system broke, not just how to reproduce it.

zjimy82

12 hours ago

Upstream means shift-left, in better words. We have followed it for a while now, but have yet to get solid success automating mundane flows. We use Playwright MCP a lot, but I'm always worried about the unknown-unknown zone before every release.

hexbin010

18 hours ago

> The future of QA is upstream

I've heard this theory a lot - and it's been around a while now. But I've only seen QA teams made redundant, or forced to become regular software devs.

Do you know of any companies that have converted a large chunk of legacy QA teams to this proposed model?

HiPhish

3 days ago

I really hope that functional programming and property-based testing [1][2] get taken seriously by real engineers who understand that it is important to know what the program is doing. Something LLMs by their very nature cannot do.

I was writing a React application at work based on React Flow[3] and I was mucking about with state management libraries (because that's what the React Flow manual recommends). Maybe it was a skill issue on my part, but I had a hard time with the Zustand library. Then I read up on reducers in React and everything was perfectly clear. A reducer is just a pure function, it takes an existing state and an action and returns the new state. That's simple, I can wrap my brain around that. Plus, I know how to test a pure function, there is nothing to mock, stub or wrap. States and actions are just plain JavaScript objects, there is no history, no side effects, no executable code.

And this is where property-based testing comes in: if history does not matter it means I can randomly generate any valid state and any valid action, then apply the reducer function and verify that the resulting state has all the required properties. Only a formal proof would give me more certainty.
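A minimal sketch of the idea in plain TypeScript (a hypothetical counter reducer with hand-rolled generators; a real project would use a library like fast-check for the generation and shrinking):

```typescript
// A reducer is a pure function: (state, action) -> new state.
type State = { count: number };
type Action = { type: "add"; amount: number } | { type: "reset" };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "add":
      return { count: state.count + action.amount };
    case "reset":
      return { count: 0 };
  }
}

// Hand-rolled generators for random valid states and actions.
function randomState(): State {
  return { count: Math.floor(Math.random() * 1000) };
}
function randomAction(): Action {
  return Math.random() < 0.5
    ? { type: "add", amount: Math.floor(Math.random() * 100) }
    : { type: "reset" };
}

// Properties: the count never goes negative, and the input state is
// never mutated -- checked over many random (state, action) pairs.
for (let i = 0; i < 1000; i++) {
  const s = randomState();
  const before = s.count;
  const next = reducer(s, randomAction());
  if (next.count < 0 || s.count !== before) throw new Error("property violated");
}
console.log("property held for 1000 random cases");
```

Because states and actions are plain objects and the reducer has no side effects, the whole test needs no mocks, no DOM, and no React at all.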

I fully understand people who want to use LLMs to write tests for them. Writing test cases is boring and tedious. There are many edge cases a human might miss. But relying on a guessing machine and praying it does not write nonsense is just plain irresponsible for someone who calls himself an engineer. People rely on the quality of our software for their work and personal safety. Property-based testing frees us from the tedium and will generate many more tests than we could write by hand, but it does so in a predictable manner that can be fully reasoned about.

[1] https://en.wikipedia.org/wiki/Software_testing#Property_test...

[2] https://fsharpforfunandprofit.com/series/property-based-test...

[3] https://reactflow.dev/

nishilpatel

2 days ago

I agree with the core point: the more pure and deterministic a system is, the easier it is to reason about and test. Reducers + property-based testing push correctness into design, not brittle test cases.

One nuance though: property-based testing shines when the domain is already well-modeled. A lot of real QA pain comes where purity breaks down—distributed systems, async flows, partial failures, UI↔backend boundaries. At that point, the hard part isn’t generating tests, it’s reconstructing context.

On LLMs: I don’t think they should be trusted as correctness oracles either. Their real value isn’t guessing answers, but helping surface assumptions, generate counter-examples, and expose gaps in our mental model.

So the future of QA isn’t humans vs LLMs. It’s better system design + explicit invariants + tools that help engineers doubt their own certainty faster. Most serious bugs come from being sure we understood the system when we didn’t.

aristofun

2 days ago

I think that, if anything, with growing competition the fashion for having a dedicated QA department will some day return.

Today’s situation of developers shitting out software with little to no sense of quality will, thanks to LLMs, reach some critical tipping point some day.

And LLMs are not a good fit for most of the QA job, because it’s mainly about actions (testing) rather than text and conversations.

ssdspoimdsjvv

2 days ago

Is the consensus that software will be both developed and tested by machines? Will there still be a human in the loop? I hope at least some testing or approval will still be done by people, otherwise the software we use every day will become even worse than it is now. Unless we envision that machines will also be the only end users of the software. At that point there hopefully will be an interface that allows for immediate reporting and fixing of defects.

olowe

2 days ago

Could you cast your mind 10 years back to 2015 and ask yourself the same question? Then fast-forward to today. What could you have done in 2015 that could have prepared you better for 2025? You may be able to do the same for 2005 and 2015.

It may be worth trying to ignore the feeling of "AI taking over the field". There's so much noise around this stuff; personally, I find it very difficult to differentiate hype from utility at the moment.

Sevii

3 days ago

LLMs writing test cases, LLMs writing Selenium tests, LLMs doing exploratory testing, LLMs used for canary deployments. All that testing that people didn't do before because it was too hard and took too long? LLMs will be used to do it.

SRMohitkr

2 days ago

You should focus on what AI still won't be able to do in 5 years, and prepare yourself accordingly.

brudgers

2 days ago

> I see AI taking over the field very fast. I want to prepare for the next five or ten years

Learn AI.

hulitu

3 days ago

> Ask HN: What's the future of software testing and QA?

See Microsoft and Google: that's what they have users for.

omosubi

3 days ago

Most applications don't have a billion users

zkmon

3 days ago

Coders will become testers.