AuthorPosts
Sia
Guest

Recently our team started feeling pressure from longer build times, especially as the regression suite keeps expanding. Every new feature adds more tests, but the release cadence stays the same. Running everything on every commit is starting to feel unsustainable, yet skipping tests blindly is not an option either. We want faster feedback, but not at the cost of missing important defects. The hardest part is deciding which tests actually matter for a specific change. In a system with pricing logic, integrations, and UI layers, that decision is not obvious. I am curious how others approach test selection without turning it into guesswork.
Billie
Guest

We ran into a similar situation and realized that the problem was not the number of tests but how we chose to run them. I found a clear explanation at https://axis-intelligence.com/artificial-intelligence-test-automation/ that describes AI-driven test selection based on change impact and risk. What resonated was the idea of ranking tests using code changes, dependency graphs, and defect history instead of treating all tests equally. It also emphasized governance rules, such as always running critical paths like auth or payments. That approach helped us shorten CI time while keeping confidence in releases. It felt more like informed prioritization than skipping.
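The ranking idea described above can be sketched in a few lines. This is a hypothetical illustration, not the article's or any real tool's API: `TestCase`, `select_tests`, and the scoring weights are all assumptions, and a real system would derive `depends_on` from an actual dependency graph or coverage data.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    depends_on: set            # source files this test exercises (e.g. from coverage data)
    recent_failures: int = 0   # defect-history signal
    critical: bool = False     # governance flag: always run (e.g. auth, payments)

def select_tests(tests, changed_files, budget):
    """Rank tests by overlap with the change set plus defect history.
    Critical-path tests are always included, regardless of score."""
    def score(t):
        impact = len(t.depends_on & changed_files)   # change-impact signal
        return impact * 10 + t.recent_failures       # arbitrary illustrative weighting
    always = [t for t in tests if t.critical]
    ranked = sorted((t for t in tests if not t.critical), key=score, reverse=True)
    # Drop tests with no signal at all, then cap at the time budget.
    picked = [t for t in ranked if score(t) > 0][:budget]
    return always + picked

tests = [
    TestCase("test_login", {"auth.py"}, critical=True),
    TestCase("test_discounts", {"pricing.py", "cart.py"}, recent_failures=2),
    TestCase("test_profile_ui", {"profile.py"}),
]
selected = select_tests(tests, changed_files={"pricing.py"}, budget=5)
print([t.name for t in selected])  # ['test_login', 'test_discounts']
```

Here `test_login` runs unconditionally under the governance rule, `test_discounts` is picked because it touches the changed pricing code and has failed recently, and the unrelated UI test is skipped. The weighting is where the "AI-driven" part would live in a real system.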
Kosia
Guest

Longer pipelines seem to be a natural side effect of growing products, while release speed remains a constant expectation. Smarter selection rather than brute-force execution sounds like a reasonable direction: it lets teams focus effort where the risk is highest at that moment. Discussions like this show how testing strategy is evolving along with delivery practices, and finding that balance is becoming part of everyday engineering work.