I’ve built Auto-Browse, an AI-powered agent that eliminates the need to write step definitions manually. Instead of mapping test steps to code, Auto-Browse directly executes BDD tests written in natural language with just one simple handler:
    When(/^(.*)$/, async ({ page }, action: string) => {
      await auto(action, { page });
    });

That's it. No more writing boilerplate step definitions. No need for automation engineers to manually script test logic. Just describe your test steps in plain English, and the AI handles execution.
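For context, here's roughly what the complete step file looks like. This is a minimal sketch: I'm assuming playwright-bdd's createBdd() fixtures (that's where the destructured { page } argument comes from), so check the docs link below for the canonical setup.

    // steps/ai.steps.ts -- minimal sketch assuming playwright-bdd
    import { createBdd } from 'playwright-bdd';
    import { auto } from '@auto-browse/auto-browse';

    const { When } = createBdd();

    // One catch-all definition is enough: Cucumber-style matching ignores
    // the Given/When/Then keyword by default, so this regex captures the
    // full text of every step and hands it to the AI to interpret.
    When(/^(.*)$/, async ({ page }, action: string) => {
      await auto(action, { page });
    });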
Why this is a game-changer:
No step definitions needed – write tests directly in natural language.
Works with existing BDD tooling – Cucumber-style Gherkin running on Playwright.
Reduces test automation setup time drastically.
Example test case:
    Given user is on the login page
    When user enters "admin" in the username field
    And user enters "password" in the password field
    Then user should see the dashboard

See it in action: https://youtu.be/VxJg3RRShoY
Docs: https://typescript.docs.auto-browse.com/usage/bdd-mode
Website: https://www.auto-browse.com
NPM: npm install @auto-browse/auto-browse
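If you want to wire this into Playwright yourself, the config glue looks roughly like the following. Again a sketch under the playwright-bdd assumption; the defineBddConfig option names (features, steps) come from that library, not Auto-Browse, so defer to the docs above if your version differs.

    // playwright.config.ts -- sketch assuming playwright-bdd
    import { defineConfig } from '@playwright/test';
    import { defineBddConfig } from 'playwright-bdd';

    // Point the BDD layer at the .feature files and the catch-all step file.
    const testDir = defineBddConfig({
      features: 'features/**/*.feature',
      steps: 'steps/**/*.ts',
    });

    export default defineConfig({ testDir });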
Would love to hear your thoughts—does this approach resonate with you?
- This is the direction I'd love UI automation to move towards, congrats on bringing it closer. Gonna try it and see how it works for me. A few questions:
1. How reproducible are the actions? Is it flaky?
2. How does it perform under adversarial conditions, such as a slow network or high CPU load? With the current crop of frameworks, you have to write tests defensively.
3. Any plans for visual regression integration? I'd love a smart VR tool that doesn't fail just because a Linux CI machine renders fonts differently than Windows does. None of the existing image comparison libraries are robust enough.
- This is cool, but there's no way I'd want our test suite to rely on LLMs for every single run. It would, however, be really cool to use this during test recording, to lower the barrier for users, among other things.
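One way to get that hybrid without new tooling: register the AI catch-all only outside CI, so plain-English steps work while authoring, but CI fails on anything that hasn't been promoted to a deterministic definition. A hypothetical sketch (the CI gating is an illustration of the idea, not an Auto-Browse feature):

    import { createBdd } from 'playwright-bdd';
    import { auto } from '@auto-browse/auto-browse';

    const { When } = createBdd();

    // Only register the AI catch-all outside CI. In CI, any step without a
    // hand-written definition then fails as undefined, forcing it to be
    // promoted before merge. Dropping the catch-all in CI also avoids
    // ambiguous matches once a specific definition for a step exists.
    if (!process.env.CI) {
      When(/^(.*)$/, async ({ page }, action: string) => {
        await auto(action, { page });
      });
    }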