Common pain points for me with test automation:
* slow performance of the baseline application
* flaky tests: tests that assert static expectations against dynamic data, or timing issues that introduce race conditions into the tests (there's a sketch of the first one right after this list)
* poor negative tests and false positives
* unreliable communication between the test harness and the test subject, or interference from the test subject's other communication channels
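To make that first flaky-test point concrete, here's a hedged sketch in TypeScript using WebdriverIO-style syntax (the selectors and the seedOrders helper are made up for illustration): the flaky version asserts a hard-coded value against dynamic data, the stable version derives its expectation from data the test created itself.

```typescript
// Flaky: asserts a hard-coded count that only matches today's data.
it('shows orders (flaky version)', async () => {
  await browser.url('/orders');
  const rows = await $$('[data-testid="order-row"]');
  expect(rows.length).toBe(12); // breaks as soon as the data changes
});

// More stable: the test seeds its own data and asserts against that.
it('shows orders (stable version)', async () => {
  const batch = await seedOrders(3); // hypothetical helper that seeds via API
  await browser.url(`/orders?batch=${batch.id}`);
  const rows = await $$('[data-testid="order-row"]');
  expect(rows.length).toBe(batch.orders.length);
});
```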
A lot of this depends on how an organization's engineering teams are structured. But here are some tips:
1. Put your E2E tests in the same solution as the project under test if you can. If the product changes in a way that makes your tests start "failing", make sure the developer tasked with changing the product also updates the tests. "Existing tests pass" should be part of some kind of definition of done. This also makes incorporating E2E tests into a CI/CD pipeline much easier than keeping them in a separate repo.
2. Understand the waiting game. Have at least one person on the team who deeply understands UI race conditions and how to handle them. "Auto-waiting" features like those in Playwright and other frameworks are great, until they don't work the way you want them to :). I much prefer the flexibility of the explicit wait pattern in WebDriver. And if you're any good at what you do, rolling your own "auto-wait" tailored to your app's specific loading strategies is not rocket surgery (there's a rough sketch of one at the bottom of this post).
3. Choose the right tools. WebdriverIO is miles ahead of Playwright and Cypress as a testing framework. It has the flexibility of Selenium WebDriver behind it and the performance of the new WebDriver BiDi APIs, while also having all the utility of the other frameworks: API testing, video recording, etc. Those bells and whistles marketed by Playwright were solved long before Playwright hit the community.
On that same note, make sure you choose the right language. If your front-end is written in TypeScript, then use TypeScript; there's plenty of "back-end" functionality in Node.js. I have no idea why I still see teams write massive test suites in Java or C# while barely scratching the surface of what those languages offer for this type of programming. E2E tests are pretty simple if you follow one of the patterns you mentioned. (Btw, if you're building an SPA, the Page Object Model is not what you want. Prefer a Component Object Model instead. It's the same idea, but focused on smaller, reusable components rather than whole pages; there's a sketch at the bottom of this post. I see this a lot: authors can't work out why they can't port part of one page object to another area of the code where they need it. It's because they failed to account for the component-based nature of modern SPA front-ends.)
4. Parallel by default. I've written frameworks that run 600+ tests in under 20 minutes. Depending on your infra, you should be able to scale test runs at the click of a button; technically, it's possible to run hundreds, if not thousands, of tests at the same time. This stops everyone wasting time waiting for the results of "the big regression run". It also forces you to maintain the rule of totally independent tests: make sure your tests don't depend on shared database state (always create new data, or properly seed the db on demand) and you're gold. (There's a small config and seeding sketch at the bottom of this post.)
5. Invest in your team's training. Far too often, automation goes to a QA member who is just technical enough to make something happen, but not technical enough to understand how easy some things are for, say, a software engineer. The QA person watches one course, maybe copies some code from GitHub repos, or reads a blog post, then spreads what they learned around to the rest of the team. The rest of the team "seems" to become productive at building the suite out. But without anyone who has dug into what these tools do and how they work, or who has ever made a web app themselves, that suite will be toast in a couple of years. Make sure at least one of those QA people has written a web app before. It doesn't have to be a full-blown thing; they just need to understand where business logic lives, what these frameworks do, how easy it is to add data-testids, etc.
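Re tip 2: a minimal sketch of a hand-rolled wait helper, assuming WebdriverIO v8+ and an app that shows a global spinner and exposes a busy flag on window (the spinner selector and the __appBusy flag are assumptions about your front-end, not anything WebdriverIO gives you):

```typescript
import { browser, $ } from '@wdio/globals';

// "Auto-wait" tailored to one app's loading strategy: wait for the global
// spinner to disappear, then for the app to report it has no work in flight.
export async function waitForAppIdle(timeout = 10_000): Promise<void> {
  // 1. Wait for the (assumed) global loading spinner to go away.
  await $('[data-testid="global-spinner"]').waitForDisplayed({
    reverse: true, // i.e. wait until it is NOT displayed
    timeout,
    timeoutMsg: 'global spinner never went away',
  });

  // 2. Wait for the (assumed) busy flag the front-end sets while requests run.
  await browser.waitUntil(
    () => browser.execute(() => (window as any).__appBusy !== true),
    { timeout, timeoutMsg: 'app still reported itself as busy' },
  );
}

// Usage:
//   await browser.url('/dashboard');
//   await waitForAppIdle();
//   await $('[data-testid="totals"]').click();
```

The point is that the helper encodes your app's loading strategy in one place instead of sprinkling sleeps and one-off waits through the tests.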
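Re the Component Object Model aside in tip 3: a rough sketch of what I mean (WebdriverIO-style, hypothetical data-testid selectors). The object wraps a reusable front-end component by its root element, so the same class works on whatever page the component is rendered.

```typescript
import { $ } from '@wdio/globals';

// Wraps one reusable component, scoped to a root element, instead of a page.
class DataGrid {
  constructor(private readonly root: ReturnType<typeof $>) {}

  rows() {
    return this.root.$$('[data-testid="grid-row"]');
  }

  async sortBy(column: string): Promise<void> {
    await this.root.$(`[data-testid="grid-header-${column}"]`).click();
  }
}

// The same component object is reused wherever the grid shows up.
const ordersGrid = new DataGrid($('[data-testid="orders-grid"]'));
const invoicesGrid = new DataGrid($('[data-testid="invoices-grid"]'));
```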
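And re tip 4: in WebdriverIO, parallelism is mostly a config knob (maxInstances is a real option; the numbers, paths, and the seeding helper below are assumptions about your setup, not a definitive recipe):

```typescript
// wdio.conf.ts (excerpt): run spec files in parallel by default.
export const config: WebdriverIO.Config = {
  specs: ['./test/specs/**/*.ts'],
  maxInstances: 10, // tune to what your grid / CI runners can handle
  capabilities: [{ browserName: 'chrome' }],
};

// seed.ts: every test creates its own data via the app's API instead of
// depending on shared database state (endpoint and payload are hypothetical).
export async function seedIsolatedCustomer(baseUrl: string) {
  const suffix = `${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;
  const res = await fetch(`${baseUrl}/api/customers`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ name: `e2e-customer-${suffix}` }),
  });
  return res.json();
}
```

Unique data per test is what actually makes cranking up maxInstances safe.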