
Smarter, Faster Testing: How AI Accelerates Test-Driven Development in Drupal
AI Applied

September 24, 2025
A Patch Worth Perfecting
A few months ago, I worked on improving a patch for the autosave_form module (#3481755), which automatically saves in-progress forms to prevent data loss if a user navigates away or loses their connection. During an internal Tag1 AI workshop, I decided to revisit the issue and make it the focus of an experiment. The feature had always been tedious to test manually, and adding automated tests was a clear next step. It also felt like an ideal candidate for experimenting with AI, since it combined repetitive testing tasks with opportunities for efficiency gains.
The patch’s goal is simple but important: make sure that autosaves only trigger when a form actually changes. The existing release code didn’t handle some cases correctly, such as fields with multiple widgets (for example, a group of checkboxes where several elements share the same name attribute). Another key improvement involved the autosave notification, a small message that pops up at the bottom left of the browser window. Previously, that notification appeared before the save was confirmed, so users could see a success message even if the autosave failed. The update fixed this behavior so the notification only appears after a successful save.
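To make that contract concrete, here is a minimal sketch of how it could be expressed as assertions in a Drupal FunctionalJavascript test. This is an illustration rather than the actual test from the merge request: the notification selector, the "Testing" checkbox, the pre-created node, and the shortened autosave interval are all assumptions made for the example.

```php
// A test method that would live in a FunctionalJavascript test class
// (a possible scaffold is sketched later in this post). The selector
// '.autosave-form-notification' and the 'Testing' checkbox are
// illustrative assumptions, not the module's actual markup.
public function testAutosaveOnlyAfterRealChange(): void {
  $assert = $this->assertSession();
  $page = $this->getSession()->getPage();

  // Open the edit form of a pre-created node with autosave enabled.
  // (Configuring a very short autosave interval for the test is omitted.)
  $this->drupalGet($this->node->toUrl('edit-form'));

  // No changes yet: wait past the autosave interval and confirm that no
  // "saved" notification appeared, i.e. nothing was autosaved.
  $this->getSession()->wait(3000);
  $assert->elementNotExists('css', '.autosave-form-notification');

  // Toggle one checkbox of a multi-value widget, where several inputs
  // share the same name attribute. This must count as a real change.
  $page->checkField('Testing');

  // The notification may only appear once the save request has completed.
  $assert->waitForElementVisible('css', '.autosave-form-notification');
}
```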
The base for these fixes was already in place, but autosave_form’s JavaScript is complex: over 400 lines of code with several kinds of asynchronous and nested callbacks. Testing changes manually is both time-consuming and error-prone. Adding an automated test not only reduced the burden of repetitive manual checks but also ensured more reliable, accurate results.
Testing Smarter, Not Harder With AI
I relied on AI for two main tasks. First, I used it to write automated tests covering the new functionality, drawing on patterns from existing tests. Because the feature was complex and I didn’t want to risk breaking the working state of the code, I had the AI generate these tests before making any code changes. With the tests in place as a safety net, I then used AI to help address open PR feedback, refining the code while verifying through repeated test runs that nothing broke in the process.
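For context, the scaffolding for such a test follows the patterns of existing Drupal FunctionalJavascript tests. The sketch below is illustrative only: the class name, field definition, and permissions are assumptions, and any autosave_form-specific setup (enabling autosave for node forms, shortening the save interval) is omitted and would need to follow the module’s actual configuration.

```php
<?php

declare(strict_types=1);

namespace Drupal\Tests\autosave_form\FunctionalJavascript;

use Drupal\field\Entity\FieldConfig;
use Drupal\field\Entity\FieldStorageConfig;
use Drupal\FunctionalJavascriptTests\WebDriverTestBase;

/**
 * Sketch of a test class for autosave behavior on multi-value widgets.
 *
 * Field names, allowed values, and permissions are illustrative assumptions.
 */
class AutosaveFormChangeDetectionTest extends WebDriverTestBase {

  protected static $modules = ['node', 'options', 'autosave_form'];

  protected $defaultTheme = 'stark';

  /**
   * A node whose edit form is exercised in the tests.
   *
   * @var \Drupal\node\NodeInterface
   */
  protected $node;

  protected function setUp(): void {
    parent::setUp();

    $this->drupalCreateContentType(['type' => 'article']);

    // A multi-value list field rendered as checkboxes, so several inputs
    // share the same name attribute -- the tricky case described above.
    FieldStorageConfig::create([
      'field_name' => 'field_topics',
      'entity_type' => 'node',
      'type' => 'list_string',
      'cardinality' => -1,
      'settings' => [
        'allowed_values' => ['performance' => 'Performance', 'testing' => 'Testing'],
      ],
    ])->save();
    FieldConfig::create([
      'field_name' => 'field_topics',
      'entity_type' => 'node',
      'bundle' => 'article',
      'label' => 'Topics',
    ])->save();
    \Drupal::service('entity_display.repository')
      ->getFormDisplay('node', 'article')
      ->setComponent('field_topics', ['type' => 'options_buttons'])
      ->save();

    $this->node = $this->drupalCreateNode([
      'type' => 'article',
      'title' => 'Autosave test',
    ]);

    // autosave_form may require its own permission or settings; adjust to
    // match the module's actual configuration.
    $this->drupalLogin($this->drupalCreateUser(['bypass node access']));
  }

}
```

The assertions sketched earlier would live in a test method of this class, turning behavior that was previously checked by hand into a repeatable, automated check.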
Commits where AI was leveraged can be viewed here: Autosave Form Merge Request #30 (commits dated August 18, 2025).
Keep Your Hands on the Wheel
Co-working with AI on this project had its ups and downs. For starters, AI suggested coverage by drawing from existing tests, but it also generated unnecessary tests that I had to rewrite or remove. Expert guidance was critical to keep the AI on track and to correct or strip out the superfluous code it wanted to introduce.
Even with the AI-assisted code, the tests initially failed. I asked the AI to fix the failures, but after two unsuccessful attempts I had to debug manually. With a bit of work on my part, I was able to address the remaining issues in the code and tests that AI had written under my guidance. Ultimately, the combination of AI acceleration and human expertise produced a final test that validated every new addition and worked reliably even in tricky scenarios that often cause errors.
How Challenges Shaped Better Solutions
The main takeaway: AI can accelerate test-driven development (TDD), making test-writing more practical within real project constraints.
That means:
- Better test coverage for even the most tedious edge cases
- More robust applications with fewer regressions
- Faster development iterations without cutting corners on quality
AI is a partner, not a replacement for skilled developers. Human judgement, manual debugging, and quality checks are still essential.
Fewer Surprises, Faster Results
Our applied AI experiments aren’t just technical wins; they translate directly into better client outcomes.
By using AI to accelerate test-driven development, we can move faster while delivering more reliable, scalable applications.
- Reduced risk — stronger test coverage protects against regressions
- Faster delivery — features reach production sooner without sacrificing quality
- Efficiency gains — AI reduces repetitive work, freeing teams for higher-value efforts
- Built for growth — reliable foundations support long-term scalability
- Using AI responsibly — we use AI as an accelerator, not a replacement, keeping human expertise and quality at the core
Want to learn more about using AI to improve development workflows and test-driven development? Reach out; we're happy to share insights and practical tips from this project and others.