I love the use of reusable non-LLM code for the "happy path" and only using the LLM to create a workflow or repair it.
One area for improvement seems to be the checkboxes/radio buttons (and possibly other input types?), given the demo didn't pick up on that (it made the same selections, but it didn't recognize this was a multiple-choice input). It might be useful to have a step before creating the JSON where it asks the user some follow-up questions like, "Here are the inputs I found and my understanding of their data types." It could then go through each input asking for a default value and maybe even clarification on "Should we even prompt for this?" (for example, always select country X).
I wonder if, for workflow repair purposes, it would be helpful to save more contextual information at recording time about the fields you are filling/clicking on: "This is a country selector", "This is the birthdate field", etc. Then, if the XPath/CSS/etc. fails, you can give the LLM doing the repair work a description of what it's looking for. (A sketch of both ideas follows below.)
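To make that concrete, here is a hypothetical sketch of what a recorded step could carry; none of these field names come from Workflow Use itself, they just illustrate the two suggestions above:

```python
# Hypothetical shape of a recorded step that carries semantic context
# alongside the raw selector; the field names are invented for illustration.
recorded_step = {
    "action": "select",
    "selector": "#country-dropdown",         # brittle, may break on redesign
    "semantic_description": "country selector on the shipping form",
    "input_type": "single-choice dropdown",  # surfaced to the user for review
    "default_value": "X",                    # confirmed during the follow-up step
    "prompt_user": False,                    # "always select country X"
}

# At repair time, semantic_description gives the LLM something durable to
# search for when the recorded selector no longer matches anything.
```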
I'm excited to see more efforts in QA testing with things like this. Brittle e2e tests are the bane of my (limited) automated-testing experience, and the ability to auto-heal and/or deal with minor deviations would be wonderful.
Very cool.
1) How do you deal with timings? If a step includes clicking a link or something that needs loading, and you just fire off the generated Playwright code all at once, some steps might fail because the XPath isn't there yet. So to be safe, I'm guessing you would have to wait using the differences between the timestamps in the JSON.
2) For self-healing, we worked on something similar, and we found it's very easy to go off the rails if one step fails: if your assertions about whether the fix was correct are off, then the next steps will also fail, and so on. The most stable approach was to just regenerate all steps if a step fails in two consecutive attempts (two runs of the flow). If the XPath is broken for a step, the subsequent ones very likely won't be worth healing individually. (A sketch of this policy follows below.)
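A minimal sketch of that regenerate-everything policy, with run_step and regenerate_flow as hypothetical stand-ins for whatever the runner and flow generator actually provide:

```python
# Hypothetical sketch of the "regenerate after two consecutive failed runs"
# policy; run_step and regenerate_flow are stand-ins, not a real API.
def run_step(step) -> bool: ...          # execute one step, True on success
def regenerate_flow(steps) -> list: ...  # rebuild the whole flow from scratch

def run_flow_with_healing(steps, failed_last_run: int | None = None) -> bool:
    for i, step in enumerate(steps):
        if run_step(step):
            continue
        if failed_last_run == i:
            # The same step failed in two consecutive runs: healing it in
            # isolation rarely helps, since later selectors are probably
            # stale too, so regenerate everything and start over.
            return run_flow_with_healing(regenerate_flow(steps))
        # First failure of this step: re-run the whole flow once more.
        return run_flow_with_healing(steps, failed_last_run=i)
    return True
```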
1) We made this sick function in the browser-use library that detects when there are no more network requests going through, so we just reuse that! (See the Playwright sketch below for a comparable built-in.)
2) Yeah, good question. The end goal is to completely regenerate the flow if it breaks (let Browser Use explore the “new” website and update the original flow). But let's see, so much could be done here!
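On point 1, for anyone reimplementing this: I don't know that browser-use helper's internals, but plain Playwright ships a comparable primitive, wait_for_load_state("networkidle"), which waits until there have been no network connections for roughly 500 ms. A minimal sketch (the URL and selector are hypothetical):

```python
# Vanilla Playwright approximation of "no more requests going through";
# this is not the browser-use helper mentioned above.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")
    page.click("a#next-step")                # hypothetical selector
    page.wait_for_load_state("networkidle")  # let the page settle first
    browser.close()
```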
What did you work on btw?
1) Oh yes, right. I remember trying it out thinking it was going to be brittle because of analytics requests etc., but it filters those out surprisingly well.
2) We are working on https://www.launchskylight.com/, agentic QA. For the self-onboarding version we are using pure CUA without caching. (We wanted to avoid Playwright to make it more flexible for canvas- and iframe-based apps, where we found HTML-based approaches like browser-use limited, and to support desktop apps in the future.)
We are beta testing caching internally for customers and releasing it for self-onboarding soon. We use CUA actions for caching instead of Playwright. Caching with pixel-native models is definitely a bit more brittle for clicks, and we rely on purely vision-based analysis to decide whether to proceed or not. For scaling, though, you are 100% right: screenshots at every step for validation are okay/worth it, but running an agent non-deterministically for actions is definitely overkill for enterprise. That's what we found as well.
Gemini's video understanding is also an interesting way to analyze what went wrong in more interactive apps. Apart from that, I think we share quite a bit of the core thinking. Would be interested to chat; will DM!
Very cool evolution!
Really great to see the fallback to the agentic run when the automation breaks. For our e2e browser-testing automation at Donobu, we independently arrived at the same pattern and have been impressed with how well it works. Automatic self-healed PR example here: https://github.com/donobu-inc/playwright-flows/pull/6/files
This is a really interesting direction, Gregor & Magnus! You're spot on about enterprises needing more robust and self-healing solutions for their high-frequency automation.
It's true that many are looking into self-healing for existing automation scripts; from what I've seen, tools like Healenium are gaining some traction in this space. However, I agree that a Browser Use-like approach also holds a lot of promise here.
My thinking on how this could be achieved with AI agents like Browser Use is to run the existing automation scripts as usual. If a script breaks due to an "element not found" exception or similar issues, the AI agent could then be triggered to analyze the page, identify the correct new locator for the problematic element, and dynamically update or "heal" the script. I've actually put together a small proof-of-concept demonstrating this idea using Browser Use: https://www.loom.com/share/1af87d78d6814512b17a8f949c28ef13?...
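Roughly, the loop in that PoC looks like the sketch below. The Agent(task=..., llm=...) constructor and agent.run() follow browser-use's documented interface; the prompt wording, the parsing of the agent's final answer, and the write-back of the healed selector are my own hypothetical glue:

```python
# Heal-on-failure sketch: run the scripted action, and only on an
# "element not found"-style timeout hand control to a browser-use agent.
from playwright.async_api import TimeoutError as PWTimeout
from browser_use import Agent
from langchain_openai import ChatOpenAI

async def click_with_healing(page, selector: str, description: str) -> str:
    try:
        await page.click(selector, timeout=5_000)
        return selector                        # locator still valid
    except PWTimeout:
        # Ask the agent to relocate the element from its human description
        # and report a fresh selector we can write back into the script.
        agent = Agent(
            task=(f"Open {page.url} and find the element described as "
                  f"'{description}'. Reply with only a CSS selector for it."),
            llm=ChatOpenAI(model="gpt-4o"),
        )
        history = await agent.run()
        new_selector = history.final_result()  # agent's final text answer
        await page.click(new_selector, timeout=5_000)
        return new_selector                    # caller persists the heal
```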
I had previously explored a similar concept with a LaVague setup here: https://www.loom.com/share/9b0c7cf0bdd6492f885a2c974ca8a4be?...
Another avenue, particularly relevant for existing test suites, is how many QA teams manage their locators. Often these are centralized in Page Object Model (POM) classes (in Java/Maven projects) or in external spreadsheets/CSVs. An AI agent could potentially be used to proactively scan the application and update these locator repositories.
For instance, I've experimented with a workflow where Browser Use updates a CSV file of locators weekly based on changes detected on the website: https://www.loom.com/share/821f80fcb0694be4bd4d979e94900990?...
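As a rough illustration of the scanning half (the CSV schema and column names here are invented), the agent-free part could be as simple as a Playwright pass that flags stale locators for the agent to refresh:

```python
# Hypothetical weekly locator audit: verify each selector in a CSV still
# resolves, and flag the stale ones for an agent (or human) to refresh.
# The locators.csv schema ("name,selector") is made up for illustration.
import csv
from playwright.sync_api import sync_playwright

def audit_locators(csv_path: str, base_url: str) -> list[str]:
    stale = []
    with sync_playwright() as p:
        page = p.chromium.launch().new_page()
        page.goto(base_url)
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                if page.locator(row["selector"]).count() == 0:
                    stale.append(row["name"])  # hand these to the agent
    return stale
```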
Excited to see how Workflow Use evolves, especially the self-healing aspects!
Haha, I was just thinking last week that there should be a tool called "show, don't tell" that infers a routine from a recording. Great minds think alike :)) Awesome feature, guys; looking forward to playing around with it!
If anyone wants to try it in the browser, you can implement it on our Chrome extension code base in TypeScript: https://github.com/nanobrowser/nanobrowser. We support Browser Use and rewrote its code in TypeScript; we don't support Workflow Use yet and would love to hear from the community how it works!
I think it's awesome (we are close friends with Erik from Pig, so slightly biased). One extreme is Browser Use, which is just an agent that does everything for the first time; the other extreme is Workflow Use, which is almost deterministic. I think the winning product lies somewhere in the middle: Browser Use + cache is easier to do for browser trajectories than for pure images! We will definitely try this direction.
This is amazing. We've been using Browser Use to try to create deterministic Playwright scripts for months, with mixed results.
So, so, so excited to see this
We use Selenium for RPA. Saving this as an alternative to explore.
So I can use this to pull new data from a database and have it fill out a form? And I can trigger it whenever I need it to run with the updated form information?
Yes, you can run the same form over and over again with different input variables. It's very reliable, fast, and cheap.
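For a sense of what "different input variables" means in practice, here is a hypothetical illustration; load_workflow and workflow.run are stand-ins, not the actual Workflow Use API:

```python
# Hypothetical: re-run one recorded workflow per pending database row.
# load_workflow and Workflow.run are invented stand-ins for illustration.
import sqlite3

def submit_pending_rows(workflow_path: str) -> None:
    workflow = load_workflow(workflow_path)  # parsed .workflow.json (stand-in)
    conn = sqlite3.connect("crm.db")         # hypothetical source database
    for name, email in conn.execute(
        "SELECT name, email FROM signups WHERE submitted = 0"
    ):
        # Replay the same deterministic steps with these values substituted
        # into the recorded form fields.
        workflow.run(variables={"name": name, "email": email})
```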
Can support for Chrome extensions be added? It seems no one has done this, and it would be a critical bridge to many browser tasks.
Yes. Also, I would love something that can run directly in my browser with my sessions.
There’s a lot of websites that are super hostile to automation and make it really hard to do simple, small, but repetitive stuff with things like playwright, selenium, chromedriver
We built a Chrome extension supporting Browser Use that can run locally in your browser: https://github.com/nanobrowser/nanobrowser. Feel free to implement Workflow Use on top of our TypeScript code base. We'd love to learn how it works!
Seems similar to the Selenium plugin for Firefox, minus the scripting it generates.