Wednesday, November 27, 2013

In XPath we trust

I have known several SWETs over the years who really hate using XPath as an element locator for looking up elements on web pages with Selenium, and now on mobile applications with Appium.

I have never really understood the hatred. I love XPath and use it instead of CSS for finding elements in web pages and mobile applications.

XPath is only as powerful as the tools that interpret it correctly and that can tell you what the XPath locator string for an element actually is.

For mobile applications, the Appium Inspector tool will give you the XPath locator you need to interact with UI elements.

For web applications, use FireBug; the XPath locators it reports usually work as-is with Selenium.
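For example, here is a minimal sketch (the page URL, element structure, and XPath strings are made up for illustration) of how an XPath locator copied from a tool like FireBug can be used from Java with Selenium WebDriver. The same By.xpath lookup also works when the driver is pointed at an Appium session.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class XPathLocatorExample {
        public static void main(String[] args) {
            // Hypothetical page and locators, purely for illustration.
            WebDriver driver = new FirefoxDriver();
            driver.get("http://example.com/login");

            // An absolute XPath, as copied straight from a tool like FireBug.
            WebElement userField = driver.findElement(
                    By.xpath("/html/body/div[2]/form/input[1]"));

            // A shorter, attribute-based XPath is usually more resilient.
            WebElement loginButton = driver.findElement(
                    By.xpath("//button[@type='submit' and text()='Log in']"));

            userField.sendKeys("testuser");
            loginButton.click();

            driver.quit();
        }
    }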

And that's it. That is all you need to be the world's greatest and most efficient automation test developer.

Use XPath and prosper :)

Saturday, November 23, 2013

Be careful not to let personal relationships at work jeopardize product quality

I have seen personal/friendly relationships at work between QA and the development and product management teams hurt product quality. How? QA team members cover up product quality risks for fear of damaging a work relationship.

The reason is that these relationships may, at times, introduce compromise and risk into product features.

As a QA person, your responsibility is to the product and the customer experience, not to making friends or protecting future job references.

Be careful to manage your rapport with co-workers because, as a QA professional, you must ALWAYS represent the customer and expose any risk (process, design, etc.) that has the potential to be negative in any way.

Thursday, November 21, 2013

RIP Winamp

I just heard today that Winamp is going to shut down.

That really sucks.

I think almost everyone I know first played MP3s ripped from their CDs (or pirated from Napster) using Winamp in 1999. I even made a few skins, published them, and got them rated by other users.

What a great, innovative product from Web 1.0. Probably the best one.

http://www.winamp.com/

Thanks to everyone who has ever been involved with this awesome product!

Wednesday, October 30, 2013

OS X Mavericks and Xcode 5.0.1 were Apple showing arrogance and dominance in the mobile app market

I am still amazed at how Apple has shown complete arrogance and disregard for its developer base by releasing Xcode 5.0.1 and Mavericks just a few weeks after the initial iOS 7/Xcode 5.0 release, with several major changes in the OS and testing/development tools that come nowhere close to working the same way as the previous release.

Apple should have just waited (and not released 3 weeks ago) and called last week's OS upgrade Mavericks 7.0 and Xcode 5.0, because iOS 7.0.2 and Xcode 5.0.1 are far more different (especially in the way Instruments and Authorization work internally) than would be expected of a "dot" or "point" release, which is usually reserved for unforeseen bugs or security fixes that do not disrupt the previous upgrade.

Apple must have calculated the disruption that happened earlier this week and basically said: we don't care, because we are Apple, and you have to react to whatever we decide is going to happen.

It is clear that this was a message from Apple to its development community: mobile is where most new product development is happening and we own this space; you will do what we say, we will share information when we choose to, and you will just have to scramble and deal with the consequences.

It was either disorganization or total arrogance, but I doubt it was the former. Apple is too smart for that.

I offer my never-ending thanks and appreciation to all of the 3rd-party testing and development products that have to deal with Apple's arrogance in order to assist their respective communities in building the excellent and innovative products that help sell iPhones and iPads, because users want/need those apps.

Ironic.

Tuesday, October 29, 2013

OS X Mavericks is a disaster for test automation developers

iOS 7's first update was a disaster for us test automation developers.

Thanks, Apple, for yet again showing us test automation developers and software QA people how little you care about the quality and testing of 3rd-party apps that depend on mobile technology stability.

Appium - broken
Instruments - broken (why ask for permission EVERY SINGLE TIME?)

It's amazing how a single "dot" release disrupted (and not in a positive way) every single test automation framework other than whatever weak, inferior test automation software Apple attempted in Xcode 5.0.1 and Mavericks.

Saturday, October 19, 2013

Appium - the game changer that mobile application automated testers have been waiting for.

A few months ago, I was introduced to Appium, an amazing mobile application testing tool for software testing professionals who have the incredible burden of being responsible for testing mostly Android and iOS mobile applications.

And I love it.

I think "love" is not an adequate word - It is what all software testing professionals have been waiting for for years: An easy-to-use, robust, powerful, silver bullet that allows me to rapidly create tests for mobile applications on any platform that I used to have to write (for iOS) Objective-C (which is really stupid) and Java activity proxy (which is slow and cumbersome) in the programming language that you choose.

In my case, I love Java. So... I created an entire automated test suite for iOS and Android mobile applications using, for the most part, the same codebase - in Java!
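Here is a minimal sketch of how such a suite can be wired up from Java, assuming an Appium server running locally on its default port. The app path, capability values, and locator are placeholders, and capability names have changed across Appium releases, so check the documentation for your version.

    import java.net.URL;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class AppiumConnectionSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder capabilities; the exact names and values depend on
            // your Appium version and the app under test.
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");   // or "iOS"
            caps.setCapability("deviceName", "test-device");
            caps.setCapability("app", "/path/to/MyApp.apk"); // or a .app/.ipa for iOS

            // By default the Appium server listens at /wd/hub on port 4723.
            WebDriver driver = new RemoteWebDriver(
                    new URL("http://127.0.0.1:4723/wd/hub"), caps);

            // From here on it is plain WebDriver code, so the same test logic
            // (and the same XPath locators) can drive both platforms.
            driver.findElement(By.xpath("//*[@text='Login']")).click();

            driver.quit();
        }
    }

Because the driver is just a WebDriver talking to the Appium server, the bulk of the test logic can be shared between the iOS and Android suites, with only capabilities and a few locators differing.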

Follow this amazing project on Twitter at @AppiumDevs. Thanks, guys! Finally, someone got it right.

Buh-bye Objective-C. Buh-bye Calabash. We don't need you anymore, but thanks for playing :)

Monday, June 17, 2013

Using the Agile Process makes testing activities more difficult, not easier

One of the great, innovative processes for reducing the risk involved in achieving any goal, improving the efficiency of work teams, and reducing resource investment when business decisions change, and one currently being adopted en masse amongst companies that develop software, is the Agile Process.

I love the Agile process and believe it is absolutely the best way to develop software and protect the business from over-investment (resources and time) and risk in a dramatically changing business climate.

However, the ideal (and most practiced) Agile sprints last 2 or 3 weeks: 10-15 working days. Some companies, especially SaaS companies, deliver the software to the production environment after each sprint. This is really great for the end customer; they get a new update of the product every 2-3 weeks, and since the changes are "incremental", it helps reduce the massive changes in software functionality and user experience that used to accompany major software updates (does anyone recall their user experience going from Windows 2000 to Windows XP to Windows Vista, or Windows 7 to Windows 8?).

The compressed timeframes, and the dependency on development resources completing the new functionality or software changes, greatly reduce the amount of time for testing activities. Testing resources have no control over when the software changes get finished and become ready to test. In most cases, testers have multiple stories to test in each sprint. This scenario creates testing schedule pressure: the new changes must be tested, plus the regression testing.

In most cases, in my experience, developers will not complete new functionality and have it ready in the testing environment until 50-70% of the entire sprint duration has elapsed.

This leaves 30-50% of the sprint time for testing activities per story: possibly only 3 to 5 working days in a 10-day sprint for testing each story implementation.

During this time, bugs may or may not be found. If they are, bug reports are created and assigned for fixing. Fixed bugs must be re-tested. Regression is run over and over again to verify the product still works as expected. For large software development tasks, even more bugs are likely to be found.

Testing difficulty, and the time it takes to test, differ between white-box and black-box testing. The company needs to be aware of the impact this has on testing its software within short time windows. Risk also varies with each functional goal being implemented.

Using the Agile process for software development puts a lot of schedule pressure on the time available to validate the Acceptance Criteria of the software for each story, which could be as little as 3-5 days. This is why so many companies (almost every company) are putting emphasis on automated testing to assist the team in testing-related activities. It is a wise choice, but the organization needs to recognize the stress that the Agile process puts on testing activities and provide adequate testing resources to ensure that the acceptance criteria for new stories are met, regression testing succeeds, and all required tests pass.


Wednesday, June 12, 2013

Take ambiguity out of software quality determination: use Acceptance Criteria and test results as the only opinion of quality

Software quality, or the quality of any product that people use, is always a matter of opinion.

Opinions usually differ for a variety of reasons.

For any product, there is no such thing as quality, or any guarantee of it. So how can quality be determined to be good or bad?

Easy answer: Acceptance Criteria.

The great thing is that it can be applied not only to functional test cases but also to performance-related test cases.
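As a purely hypothetical illustration, acceptance criteria translate directly into assertions in test code. In this Java (JUnit) sketch, the checkout scenario, method names, and 2-second threshold are invented for the example; the point is that a functional criterion and a performance criterion both become unambiguous pass/fail results.

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class CheckoutAcceptanceTest {

        // Functional acceptance criterion (hypothetical):
        // "Submitting a valid order returns a confirmation number."
        @Test
        public void validOrderReturnsConfirmationNumber() {
            String confirmation = submitOrder("valid-cart");
            assertTrue("Order must produce a confirmation number",
                    confirmation != null && confirmation.startsWith("CONF-"));
        }

        // Performance acceptance criterion (hypothetical):
        // "Order submission completes in under 2 seconds."
        @Test
        public void orderSubmissionCompletesUnderTwoSeconds() {
            long start = System.currentTimeMillis();
            submitOrder("valid-cart");
            long elapsedMillis = System.currentTimeMillis() - start;
            assertTrue("Expected < 2000 ms but took " + elapsedMillis + " ms",
                    elapsedMillis < 2000);
        }

        // Stand-in for calling the real system under test.
        private String submitOrder(String cartId) {
            return "CONF-12345";
        }
    }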

If the product is tested correctly and meets the Acceptance Criteria of the test cases with high enough priority that they MUST BE TESTED (a priority determined during the first 30% of an Agile sprint by the team stakeholders: Test, Development, Product Management, anyone else), then the product is ready to be released to the customer with acceptable quality.

The stakeholders are responsible for determining all possible test cases and which of those test cases are critical to run in each sprint. If opinions differ on the priority of a test case, its acceptance criteria, or anything else, then the Product Manager has final decision-making power in each case.

So keep the team focused on collaboratively determining test cases, steps to test, and acceptance criteria. The test results become the definitive answer on the "quality" of the product.

Monday, June 10, 2013

Manual and automated testers: One team

Most of us have been watching how test automation has been at the forefront of most companies' test initiatives over the past 5 years. Almost every company has been doing some form of automation for the past few years, and some are just starting.

During this time, I have worked at very large companies (40+ employees), very small companies (less than 25 employees), and medium-sized companies (50-300).

It's been interesting to see management teams form Test teams with a mix of employees who have test automation skills and those who don't. In some cases, I have seen org charts where test automation is an entirely separate team from the Test team that only tests the product manually.

In some cases, there is very little interaction between the manual and automated testers. The manual testers test the product and give the automation testers links to the test cases so they can go off and automate, but they don't actually run the tests themselves.

When I think about test automation, I think of it as a software product whose customer is the manual tester, built to assist in their coverage of product testing. The manual tester should always be the primary end user of the automated tests and be absolutely sure that:

1) All necessary test cases are executing.
2) All required test cases that must pass are indeed passing.

There should always be very close and daily interaction between testers of the product and review of test results from all testers on the team.

Friday, June 7, 2013

The Quality-Centric Software Development Process

Hello everyone!

After 20+ years of working in the software industry as a software developer and tester, I have decided to introduce a new methodology and mindset for developing great software with the highest possible quality and user experience.

I am calling it the Quality-Centric Software Development Process.

You can find it Here. Share this with your colleagues. Your review and feedback are appreciated.

Wednesday, May 29, 2013

OK, DNS seems to have finished propagating through the worldwide DNS server network. http://sqaevangelist.com is back online :)
Hi everyone,

I moved my website (sqaevangelist.com) to a dedicated hosting server rather than shared hosting, so I had to get that set up today. The domain (DNS) has been set up, and now I am just waiting for it to propagate to the worldwide DNS server network. The website should be back up in less than 24 hours.

Monday, May 27, 2013

New SQA Evangelist website

Hi everyone, I hope you are enjoying your day.

I have started a new website to represent all of us SQA professionals.

http://sqaevangelist.com/

It is still under construction, but I thought it was better to just get it out there and get some response to the content and find out what the QA community needs.

Please review and provide feedback. I need your help in making it better and understanding how it can help you in your daily SQA activities.

-m-

Saturday, May 25, 2013

Successful QA using Agile process - QA milestones during a sprint

Most companies are just now starting to integrate the Agile process into their software development teams, but I have seen that companies are mostly unaware of what the QA activity milestones are within each sprint.

Most of us know the typical activities of QA for a project, tasks like:

1) Create a test plan
2) Create test cases
3) Review and approve test cases with Product Management and Development team.
4) Execute tests manually
5) Perhaps even automate some of them
6) Update wiki
7) Update Story status (using something like Rally, for example)

So when do these tasks need to be started and finished within each sprint? That is the topic of this blog post. Following the milestone dates in the example below is critical to ensuring that the required quality is met.

Let's say, for example, your Agile team is using 2-week sprints, or 10 working days. (It's also easier to demonstrate task time as a percentage using a 10-day sprint rather than a 3-week, 15-day sprint.)
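If your team uses a different sprint length, here is a small, purely illustrative Java sketch that maps the milestone percentages used below onto working days for 10-day and 15-day sprints.

    public class SprintMilestones {
        public static void main(String[] args) {
            int[] sprintLengths = {10, 15};                       // 2-week and 3-week sprints
            double[] milestones = {0.30, 0.60, 0.70, 0.80, 0.90};

            for (int days : sprintLengths) {
                System.out.println(days + "-day sprint:");
                for (double pct : milestones) {
                    // Round up: the milestone is due by the end of this working day.
                    int dueDay = (int) Math.ceil(days * pct);
                    System.out.printf("  %2.0f%% mark -> end of day %d%n", pct * 100, dueDay);
                }
            }
        }
    }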

Day 1-3 tasks:

1) Create a test plan and test cases for each story. The test cases are determined by the QA team, Product Manager, and development team.
2) Document the test cases for each story that has QA tasks. The Purpose (what is being tested), Test Case Steps, and Acceptance Criteria (what needs to happen in order to determine that the test case passed during testing) need to be very clearly documented.
3) An email with test case documents and/or links to all story test cases is sent to the Product Manager and development team with a request to review and approve the test cases.
4) The approval emails must be received by the end of day 3 for all stories.

Milestone 1 - finished by the first 30% of the sprint duration: approval of all identified Test Cases and Acceptance Criteria for all stories by all stakeholders involved.

Day 3/4 -7 tasks:

1) The time needed to test each story will vary; it is entirely dependent on the size of the development task. There may be times when no mock testing, or any other proactive preparation, can be done until the story is ready to be tested.

Milestone 2 - Make sure that the development team sends an email to the QA person indicating when development is finished and each story is ready to be tested in the QA environment. This is very important because the QA team must know, at the earliest possible time, when a story is ready to be tested.

2) Proactive writing of mock tests until development has completed their part of the story, if possible.
3) Updating of wiki and Agile story progress
4) Execute the test cases when story is ready to be tested in QA environment
5) Reporting of bugs found during testing

If the QA person has not received that email and the story is not ready to be tested by the 60% mark (6 days), that introduces risk into the quality of the product, quite simply because the QA team may not have adequate time to test the story. This risk needs to be relayed to all involved stakeholders, so send an email indicating it.

Milestone 3 - All sprint story testing needs to be completed by the 70% (7th day) stage.

Day 8 - 10 tasks

1) Validation of fixed bugs
2) All regression testing run and determination of any bugs found.

Milestone 4 - Verify that no P1-P2 bugs exist in the QA environment by the 80% stage (8th day). Your team needs to decide whether even known P2 bugs are allowed into the product at this point.

3) Push the product into the staging environment. All testing needs to occur and be finished within the next 24 hours.

Milestone 5 - Verify that no P1-P2 bugs exist in the Staging environment by the 90% stage (9th day). Your team needs to decide whether even known P2 bugs are allowed into the product at this point.

The final day is for story completion updates in the Agile tool (Rally, etc.) and updating of test case results/documentation in the test case management tool (QMetry, etc.). Update the wiki to document any test procedure changes. Send emails updating all stakeholders on the final test results, and include links to any Continuous Integration test results as proof of any automated testing that might have occurred.

Milestone 6 - Assuming no P1 or P2 bugs were found in staging environment testing, send an email to all stakeholders stating that the production release can proceed and that QA is finished for all stories.


The milestone completion dates are very important, especially if your team is using 2-week (10-day) sprints.

Even 1-day delays can introduce significant risk into the product quality. You, as the QA person, are responsible for managing these timelines and doing as much as possible to reduce product stability risk.

Thursday, May 23, 2013

The goals of manual QA are not the same as test automation QA

The goal of manual (non-test-automation) QA is not necessarily the same as that of test automation QA.

The goal of the manual QA tester is to test as much of the product as possible, and, hopefully, find as many bugs as possible.

Test automation is an internal corporate product whose sole purpose is to assist the manual QA tester in expanding their coverage of code and features so they can eventually focus on finding the exception-case bugs (where most bugs occur - this is especially true in mobile applications). Most of these exception cases involve product features where test automation code is not recommended for a variety of reasons (asynchronous behavior, high maintenance costs, etc.).

If your test automation code covers 100% of the smoke test cases, then your company is in a great position to free up the manual tester, giving them the time and creativity to attack these exception test cases and hopefully find the bugs that real users would experience.
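One way to keep that smoke layer visible and repeatable is to tag the automated smoke cases explicitly so they run on every build. The following sketch uses TestNG groups; the test names and bodies are stand-ins for real Selenium/Appium driver calls.

    import static org.testng.Assert.assertTrue;

    import org.testng.annotations.Test;

    public class LoginSmokeTest {

        // Tests tagged "smoke" can be run on every build by including that
        // group in testng.xml, which keeps the smoke layer fully automated
        // while manual testers chase the exception cases.
        @Test(groups = {"smoke"})
        public void appLaunchesAndShowsLoginScreen() {
            assertTrue(launchAppAndCheckLoginVisible(),
                    "Login screen should be visible after launch");
        }

        @Test(groups = {"smoke"})
        public void validUserCanLogIn() {
            assertTrue(logIn("demo-user", "demo-pass"),
                    "Valid credentials should log the user in");
        }

        // Stand-ins for the real Selenium/Appium driver calls.
        private boolean launchAppAndCheckLoginVisible() { return true; }

        private boolean logIn(String user, String password) { return true; }
    }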

The goal of the manual QA tester is to find as many bugs as possible in the product. 

The goal of test automation QA is to help the manual QA person expand their coverage of the product to the greatest possible extent without consuming their time. It is a tool that assists the manual QA person (who should be considered the subject matter expert, with the greatest knowledge of the test cases, the steps to execute them, and the acceptance criteria) by covering basic smoke testing and, eventually, expanded testing, so that the manual QA person can execute the corner/exception cases which, as expected, will uncover most of the bugs in an application, web or mobile.

Monday, May 20, 2013

Hello everyone! I have a Twitter account now! I encourage you to follow it for expert mentoring and advice on SQA/QE activities: https://twitter.com/SQAEvangelist