In a previous article, we described Scrum, the internal process PTC uses to develop Creo. For our research and development teams, Scrum provides a progressive approach to successful project management. But does it matter outside the lab—to the people who use the software after it’s released?
Michael Pfrommer, VP of Creo Product Development, says Creo customers benefit greatly from Scrum because it ensures releases are useful and reliable.
In this continuation of our interview, Pfrommer describes in depth the testing that goes into Creo and how we decide when to release:
GH: I’ve often heard the phrase that PTC ‘builds in quality’ during software development – what do you mean by that?
Pfrommer: At PTC, we encourage our software developers to “build in quality” as they design and develop functionality. We spend time up front making sure the requirements are clear, functional designs are reviewed and approved by the team, and developers pre-test their code changes in their local development environments before submitting back to the main software development stream. We really try to build in quality right at the design and coding stages.
GH: Previously, you mentioned that during each development period or “sprint,” the team creates a potentially shippable product. That is, everything added during the sprint works correctly before you proceed with the next set of new features or fixes. Could you describe the testing for each sprint?
Pfrommer: You’re right, it’s critical that each sprint is functional and tested. So after just a few weeks of development, we test the code. Individual checks are run against any new or updated functionality.
GH: Randomly? Methodically?
Pfrommer: Strategically. Five high-level steps make up most testing cycles:
- Reaching agreement. First, we agree on what will be tested, i.e., the test plan. This typically comes from a mix of product requirements, definition, and acceptance criteria.
- Documenting our tests. In this step we document those tests, test cases, pass conditions, and, if possible, build an automated test.
- Running the tests.
- Reporting, tracking, and managing the test results.
- Identifying failed tests and prioritizing them.
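The cycle above could be sketched in code. This is purely illustrative, assuming a toy in-house runner; the names (`TestCase`, `run_plan`) are hypothetical and do not reflect PTC's actual test framework:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    # Steps 1-2: the agreed scope and the documented pass condition.
    name: str
    run: Callable[[], bool]   # the automated check itself
    pass_condition: str       # documented expectation, in plain language

def run_plan(plan):
    """Steps 3-5: run each test, record results, surface failures first."""
    results = {}
    for case in plan:
        try:
            results[case.name] = "Pass" if case.run() else "Fail"
        except Exception:
            results[case.name] = "Fail"
    failures = [name for name, r in results.items() if r == "Fail"]
    return results, failures

# Hypothetical test plan for two checks (the lambdas stand in for real checks).
plan = [
    TestCase("extrude_creates_solid", lambda: 2 + 2 == 4,
             "feature yields a solid body"),
    TestCase("regen_is_stable", lambda: 1 == 2,
             "model regenerates without errors"),
]
results, failures = run_plan(plan)
# failures lists the tests to prioritize: ["regen_is_stable"]
```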
GH: Do you automate a lot of the testing?
Pfrommer: Yes. All quality engineers continually develop automated tests and build ever-increasing test suites that cover all aspects of the sprint. For Creo, we execute over a million automated tests before a release goes out.
GH: How do you digest all those test results?
Pfrommer: For each test, the result is either a Pass (the test ran, and the results were correct), a Fail (the test ran, but the results were incorrect), or an OK to Fail (the test ran, the results were analyzed, and the test itself needs updating). Summed across all the tests performed in each sprint, this gives you an objective quality view of that sprint.
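That three-way classification lends itself to a simple roll-up. A minimal sketch, assuming a hypothetical `sprint_quality` helper (not PTC's actual tooling); note that "OK to Fail" results are excluded from the pass rate, since they flag tests needing updates rather than product defects:

```python
from collections import Counter

def sprint_quality(results):
    """Summarize per-test statuses into a sprint-level quality view."""
    counts = Counter(results.values())
    # "OK to Fail" means the test itself needs updating, so it is
    # excluded from the graded total.
    graded = counts["Pass"] + counts["Fail"]
    pass_rate = counts["Pass"] / graded if graded else 0.0
    return counts, pass_rate

# Hypothetical results from four tests in one sprint.
results = {
    "t1": "Pass",
    "t2": "Pass",
    "t3": "Fail",
    "t4": "OK to Fail",
}
counts, pass_rate = sprint_quality(results)
# pass_rate here is 2 passes out of 3 graded tests
```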
GH: And after you’ve completed a number of sprints, you have a potential release candidate?
Pfrommer: Yes. The Product Owner affirms that the product requirements are met with the completion of a sprint and nominates that version of the code as a release candidate.
GH: Why isn’t every sprint considered a potential release candidate?
Pfrommer: Well, it’s a mix of factors. The Product Owner must decide whether the software is sufficiently complete to bring value to our customers. Release too early and the product won’t meet needs; release too late and customers may have found another solution, so there may no longer be any opportunity for PTC.
We also ask: How has the marketplace evolved? How has the business evolved? Perhaps we’ve acquired a technology or product that helps us meet the customer’s needs better, or a partner has developed a solution in that space.
Finally, we consider that every time we nominate a potential release candidate, it enters a broader phase of testing that involves more people: additional internal staff, software partners, and customers. This all takes effort, and we really only want to do it when we can be sure we have an impactful release.
GH: Can you discuss the steps in that broader phased testing?
Pfrommer: We look at the product in its entirety. For a release candidate, we’ll:
- Consider more aspects beyond functional quality, for example, performance, reliability, usability, scalability, serviceability, and compatibility.
- Run tests on a wide range of machines and platforms, using the release candidate, outside of the developer’s build environment.
- Involve a lot more manual testing, for example, by releasing a beta version of the software that can be shared with testers around the world.
- Test compatibility with our other products.
- Test all the different languages and supported language configurations.
- Release a version of the release candidate to our software and hardware partners so that they can certify their offerings with our latest release.
- Report, track, manage, and address critical issues across all the above.
GH: So the broader testing is as important as the sprint tests.
Pfrommer: Right; both matter. I see three key benefits to the Scrum approach.
First, product stability throughout development. We want our customers to be excited and look forward to each release, and within each sprint we see rapid progress in the product’s evolution. It’s a transparent, step-wise approach: every few weeks we can immediately see results and demo them to our customers.
The second is the recognition that during a project we can change our minds about what is wanted and needed. With the new technologies in Creo, Scrum gives us a platform for dealing with unpredicted challenges that can’t easily be addressed in a traditional predictive or planned manner.
Finally, with the completion of each sprint, we have a potential candidate for release.
GH: How much emphasis does PTC put on the testing phase of product development overall?
Pfrommer: I can tell you that there are over 1 million automated tests, and that 20% of our total product development capacity is dedicated to testing. For Creo 1.0, we’ll invest over 200,000 hours in manual testing, and that’s without counting the thousands of hours that will be conducted by our customers, software and hardware partners, and other external testers.
GH: With all this automated testing, will there ever be a day when there’s no manual testing?
Pfrommer: No, I can’t imagine that. We can minimize issues in a number of ways. For example, moving to the apps concept reduces complexity by involving fewer lines of code. But we’ll always test, and we’ll always need a warm body to use the software and find the quirkiest, and sometimes most obvious, flaws.
Watch for our next Behind the Scenes article, where we review the current status of Creo 1.0.