When talking about transitioning from more traditional ways of testing, we tend to downplay the impact on the skills required of a Quality Assurance engineer. The fundamentals of test case design are, of course, the same: methods such as boundary value analysis and equivalence partitioning are still valid. However, the way the inputs to these methods are identified is different, and so is the implementation, since automation is crucial. This is where the technical demands often increase.
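To make that shift concrete, here is a minimal sketch of how a classic design technique can be expressed directly as automation. It assumes pytest, and the function under test, validate_age, together with its valid range of 18 to 65, is a hypothetical placeholder rather than anything from the article.

```python
# A minimal sketch: boundary value analysis and equivalence partitioning
# encoded as a parameterized automated test. validate_age and its range
# are hypothetical placeholders.
import pytest


def validate_age(age: int) -> bool:
    """Hypothetical product function: accepts ages in the inclusive range 18-65."""
    return 18 <= age <= 65


# Boundary value analysis: values just below, on, and just above each boundary.
# Equivalence partitioning: one representative from each partition
# (invalid-low, valid, invalid-high).
@pytest.mark.parametrize(
    "age, expected",
    [
        (17, False),  # just below lower boundary
        (18, True),   # lower boundary
        (19, True),   # just above lower boundary
        (40, True),   # representative of the valid partition
        (64, True),   # just below upper boundary
        (65, True),   # upper boundary
        (66, False),  # just above upper boundary
    ],
)
def test_validate_age(age, expected):
    assert validate_age(age) == expected
```

The test design thinking is unchanged; what changes is that the chosen values now live as executable, repeatable checks instead of rows in a manual test case document.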
Working in an agile setting while relying on manual functional regression testing is a paradox. Developing a product in an agile fashion means, by implication, that the product functionality grows incrementally, and the regression test suite grows along with it. Trying to keep up with this growth by adding manual tests is, in a continuous integration context, akin to trying to reach the speed of light. Scaling the manual effort does not help; it only causes other agility-prohibiting consequences that are much harder to mitigate. Besides, using people to perform regression testing is a waste of their abilities: people can see nuances in software behavior and performance that automated checks cannot, so using them for, e.g., exploratory testing is far more powerful.
Regardless of test development and execution strategy, transparency is paramount. Making sure that the entire team understands the test automation, its design, and its coverage is key when striving for shared quality responsibility. Test frameworks and test cases should be available to the entire team, and test execution, both locally and as part of the pipelines, should be easy for developers to trigger. The ability to debug test code and product code simultaneously in the same integrated development environment is a powerful tool when troubleshooting. As a bonus, this gives product developers a high degree of familiarity with the test coverage, as the sketch below illustrates.
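As an illustration of that idea, the following sketch shows a plain pytest case that exercises product code directly, so a developer can set breakpoints in both the test and the product in one IDE session, and the same command (for example `pytest -k test_total`) can be wired into a pipeline step. The Cart class and its API are hypothetical; in a real repository it would be imported from the product package.

```python
# A minimal sketch, assuming pytest. In a real repository Cart would be
# imported from the product package (e.g. "from shop.cart import Cart")
# so test and product code share one debugging session; it is defined
# inline here only to keep the sketch self-contained.


class Cart:
    """Stand-in for product code; hypothetical API."""

    def __init__(self) -> None:
        self._prices: list[float] = []

    def add(self, item: str, price: float) -> None:
        self._prices.append(price)

    def total(self) -> float:
        return sum(self._prices)


def test_total_includes_all_items():
    cart = Cart()
    cart.add(item="book", price=12.50)
    cart.add(item="pen", price=2.00)
    # A breakpoint here lets a developer step from the test straight into Cart.total().
    assert cart.total() == 14.50
```

Because the local invocation and the pipeline invocation are identical, a red pipeline is reproducible on any developer machine, which is what makes the shared responsibility practical rather than aspirational.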
Regular test execution is one of the final layers of the quality fabric. Testing is designed to catch functional anomalies only when all other quality-promoting efforts have failed; it is not designed to validate functionality or to prove the absence of errors. However, by shortening the feedback loop, or even eliminating it entirely, we increase the probability of delivering added value.
To learn more, check out the following article on "Quality Everywhere".