One of the most innovative processes for reducing the risk of missing a goal, improving the efficiency of work teams, and limiting wasted investment when business decisions change, and one now being adopted en masse by companies that develop software, is the Agile process.
I love the Agile process and believe it is absolutely the best way to develop software and to protect the business from over-investment (in resources and time) and from risk in a dramatically changing business climate.
However, the typical (and most practiced) Agile sprint lasts 2 or 3 weeks: 10-15 working days. Some companies, especially SaaS companies, deliver the software to the production environment after each sprint. This is really great for the end customer: they get a new update of the product every 2-3 weeks, and since the changes are incremental, it avoids the massive shifts in functionality and user experience that used to accompany major software updates (does anyone recall going from Windows 2000 to Windows XP to Windows Vista, or from Windows 7 to Windows 8?).
The compressed timeframes, and the dependency on development resources to complete the new functionality or software changes, greatly reduce the time available for testing activities. Testers have no control over when the software changes are finished and ready to test, and in most cases they have multiple stories to test in each sprint. This creates schedule pressure: the new changes must be tested, plus the regression testing.
In my experience, developers usually do not complete new functionality and deliver it to the testing environment until 50-70% of the sprint duration has elapsed.
That leaves 30-50% of the sprint for testing activities: roughly 3 to 5 working days, in a 10-day sprint, to test each story's implementation.
During this time, bugs may or may not be found. If they are, bug reports are created and assigned for fixing, fixed bugs must be re-tested, and regression is run again and again to verify the product still works as expected. For large software development tasks, bugs tend to be found more frequently.
Testing difficulty, and the time needed to test, also differ between white-box and black-box testing, and risk varies among the functional goals being implemented. The company needs to be aware of the impact this has on testing its software in short time durations.
Using the Agile process for software development puts a lot of schedule pressure on the time available to validate the Acceptance Criteria of each story, often just 3-5 days. This is why so many companies (almost every company) are emphasizing automated testing to assist the team with testing-related activities. It is a wise choice, but the organization needs to recognize the stress that the Agile process puts on testing activities and provide adequate testing resources, so that acceptance criteria for new stories are verified, regression testing succeeds, and all required tests pass.
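To make the idea concrete, here is a minimal sketch of what automating a story's acceptance criteria can look like. The `apply_discount` function and its criteria are entirely illustrative stand-ins, not a real product feature; the point is that once criteria are encoded this way, regression can be re-run in seconds every time a bug fix lands mid-sprint.

```python
# Hypothetical sketch: a story's acceptance criteria encoded as an
# automated regression test. `apply_discount` stands in for whatever
# feature the story delivers; all names here are illustrative.

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_acceptance_criteria():
    # AC1: a 10% discount on $100.00 yields $90.00
    assert apply_discount(100.00, 10) == 90.00
    # AC2: a 0% discount leaves the price unchanged
    assert apply_discount(59.99, 0) == 59.99
    # AC3: out-of-range discounts are rejected
    try:
        apply_discount(100.00, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for invalid discount")

if __name__ == "__main__":
    test_acceptance_criteria()
    print("all acceptance criteria passed")
```

Run as part of the regression suite, a test like this turns "did the fix break anything?" from a manual re-check into a pass/fail signal.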
Monday, June 17, 2013
Wednesday, June 12, 2013
Take ambiguity out of software quality determination: use Acceptance Criteria and test results as the only opinion of quality
Software quality, or the quality of any product that people use, is always a matter of opinion.
Opinions usually differ for a variety of reasons.
For any product, there is no absolute measure of quality, nor any guarantee of it. So how can quality be judged good or bad?
Easy answer: Acceptance Criteria.
The great thing is that Acceptance Criteria can be applied not only to functional test cases but also to performance-related test cases.
If the product is tested correctly and meets the Acceptance Criteria of the test cases prioritized as MUST BE TESTED, which the team's stakeholders (Test, Development, Product Management, anyone else) determine during the first 30% of an Agile sprint, then the product is ready to be released to the customer with acceptable quality.
The stakeholders are responsible for determining all possible test cases and which of those are critical to run each sprint. If opinions differ about the priority of a test case, its acceptance criteria, or anything else, then the Product Manager has final decision-making power in each case.
So keep the team focused on collaboratively determining test cases, steps to test, and acceptance criteria. The test results become the definitive answer on the "quality" of the product.
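One way to picture "test results as the only opinion of quality" is as a release gate that is a pure function of the result data. The sketch below is hypothetical, with illustrative names throughout: each case carries a must-run flag agreed by the stakeholders, and the gate asks only whether every must-run case executed and passed.

```python
# Hypothetical sketch: quality as a pure function of test results.
# Each test case carries a priority flag; the release gate only asks
# whether every MUST-RUN case was executed and passed.

from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    must_run: bool   # agreed by the stakeholders early in the sprint
    executed: bool
    passed: bool

def release_quality_ok(results: list[TestResult]) -> bool:
    """The definitive answer: did every must-run case run and pass?"""
    return all(r.executed and r.passed for r in results if r.must_run)

results = [
    TestResult("login_succeeds", must_run=True,  executed=True,  passed=True),
    TestResult("report_export",  must_run=True,  executed=True,  passed=True),
    TestResult("legacy_theme",   must_run=False, executed=False, passed=False),
]
print(release_quality_ok(results))  # low-priority cases do not block release
```

Because the gate reads only the agreed criteria and the recorded results, nobody's personal opinion of "quality" enters the decision.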
Monday, June 10, 2013
Manual and automated testers: One team
Most of us have been watching test automation sit at the forefront of most companies' test initiatives over the past 5 years. Almost every company has been doing some form of automation for the past few years, and some are just starting.
During this time, I have worked at very large companies (40+ employees), very small companies (fewer than 25 employees), and medium-sized companies (50-300).
It has been interesting to see management teams form Test teams from a mix of employees with test automation skills and those without. In some cases, I have seen org charts where test automation is an entirely separate team from the Test team that tests the product manually.
In some cases, there is very little interaction between the manual and automated testers. The manual testers test the product and give the automation testers links to the test cases so they can go off and automate, but the manual testers don't actually run the automated tests themselves.
When I think about test automation, I think of it as a software product whose customer is the manual tester, built to assist their coverage of product testing. The manual tester should always be the primary end user of the automated tests and be absolutely sure that:
1) All necessary test cases are executing.
2) All required test cases that must pass are indeed passing.
There should always be very close and daily interaction between testers of the product and review of test results from all testers on the team.
Friday, June 7, 2013
The Quality-Centric Software Development Process
Hello everyone!
After 20+ years working in the software industry as a software developer and tester, I have decided to introduce a new methodology and mindset for developing great software with the highest possible quality and user experience.
I am calling it the Quality-Centric Software Development Process.
You can find it Here. Share this with your colleagues. Your review and feedback are appreciated.