Sunday, October 16, 2011

Of logging bugs

Sometimes we are too scared, or too cautious, to log bugs. We spend time unnecessarily investigating an issue just to make sure it's valid. Some teams even include bug validity as part of their deliverables. Hasn't it occurred to them that this may backfire? Valid bugs are better than invalid bugs. However, nobody can deny that invalid bugs are much, much better than escaped bugs.

And then, after we log a bug, we are told to document the steps to reproduce it, and probably to create test cases to make sure the bug won't escape untested in the next release. They tell us: document it, so that if anything happens to you, we know what to do; it's also useful to the person after you. What they actually mean is: document it, so that if you are fired or you resign, we know what to do; it's also useful to the oblivious newbie we hire after you leave.

The good thing about bugs logged in detail is that they provide a way to understand the functions they relate to. If you don't have enough time to study a feature, read the bugs logged about it. To understand a feature is to read its bugs.

Q: How many QA engineers does it take to change a light bulb?
A: Five. One to write a plan, one to understand the requirement, one to actually change it, one to check that the bulb is changed properly, and one to manage all these people.

I still remember a senior QA manager from Company M once saying: a QA engineer is only as good as the bugs he catches. If he's just running tests, you might as well automate them on a machine and fire him. Sounds cold, doesn't it?

Saturday, May 14, 2011

The most important thing to deliver, the last thing to do

Lots of organizations, if not all, tend to preach the same mantra: "Quality is our no. 1 priority", "We hold quality in the utmost regard", quality this, quality that, yada yada.. However, more often than not, the time to 'uphold' this quality thingy comes at the end of the lifecycle. Not just that, it gets the least time allocated to it! Ironic, isn't it?

For instance, a product plan goes into, let's say, a hardware phase by the EE guys, 4 weeks; then software development for that product, 4 weeks; then the testing phase, 2 weeks. Suddenly, when the real project starts, the EE guys screw something up, or maybe the software guys do, taking more weeks than they were allocated. "Oh, never mind. Let's just reduce the testing phase to 1.5 weeks." Wow, how convenient! You expect us to deliver the same result as a 2-week time frame, in much less time. Yet you say quality is your no. 1 priority?

Rubbing salt into the wound: if any issue is found in the product, who's the first party held responsible? No prizes for guessing. "It's our fault we introduced that bug, but hey, it's the testing team that should have discovered it before the customers did."

Oh my. SSDD. Same sh*t, different day.

Sunday, February 6, 2011

on discovering bugs late

It’s been a while since the last update. I've been busy testing the final regression of our product. Or is it?

A regression test is defined as a test run where we want to see that fixes or changes in the code, meant to fix certain features, do not interfere with or break other, unrelated features. By nature it is the same repetitive task all over again, which is best automated. Or is it?
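As a rough sketch of what that automation might look like (the feature functions and check names below are made up for illustration, not from any real suite), a minimal regression run in Python could be:

```python
# Hypothetical scenario: a "fix" to discount() should not break total().
def discount(price, pct):
    # The fix: clamp the percentage so bad input can't yield a negative price.
    pct = max(0, min(100, pct))
    return price * (100 - pct) / 100

def total(prices):
    # Unrelated feature that a regression run re-checks after the fix.
    return sum(prices)

def run_regression():
    """Re-run checks on features the fix was NOT meant to touch."""
    return {
        "discount_fix": discount(200, 150) == 0,    # validates the fix itself
        "total_unchanged": total([1, 2, 3]) == 6,   # unrelated feature intact
    }
```

The point of the repetition is exactly the `total_unchanged` line: it looks pointless every single run, right up until a fix quietly breaks it.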

The fallacy is that a phase we call a regression test, more often than not, is not a regression at all. The time is spent validating bug fixes, or discovering new bugs that may have been caused by the fixes, or that existed even before the fixes were introduced. Either way, the objective strays from its original path. Regression is now just another round of tests. Or is it?

So, who’s at fault? Developers? Testers? Maybe nobody at all. The time factor plays a big role in deciding whether a regression test is really a regression test. As in the usual cycle: bugs are discovered late and fixes come in late, so testing occurs late.

“Blame the tester then, for discovering the bugs late. They should have discovered them earlier and saved us time!” Yes and no. How do we know whether the bugs could have been discovered earlier? How do we know the bugs were not recently introduced?

I recall my previous company, where a post-mortem was conducted to see whether the testers were discovering bugs late. (Can you believe resources were spent on figuring out whose fault it was, instead of on finding more bugs? Well, in a mud-slinging situation where everybody is trying to save their own asses, anything is possible. :] ) We were actually asked to re-test some late-discovered bugs against the earlier releases, just to prove that the bugs could have been discovered earlier! Isk isk isk… *shaking head in disbelief*

So, what? If it’s the tester's fault, are we gonna blame the tester then? Or if the bugs shouldn't be there, are we gonna blame the developers? Knee-jerk, reactive decisions like these won't bring any real benefit, I think.

One thing we can do to mitigate this is to try discovering the bugs earlier. How? Strategize on critical features, or features known to be bug-prone. Or new features where not enough testing has been done. Or features that are rarely tested (imagine them as uncharted territories). Do an impact analysis on each incoming fix, and test again the features related to that fix.
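That impact-analysis step can be sketched in a few lines: keep a map from source modules to the features that depend on them, and for each incoming fix, select the tests for every affected feature. The module names and the map below are entirely hypothetical, just to show the shape of the idea:

```python
# Hypothetical module-to-feature dependency map, maintained by the team.
IMPACT_MAP = {
    "billing.py": ["invoicing", "discounts"],
    "auth.py":    ["login", "invoicing"],   # invoicing also calls auth
    "reports.py": ["reporting"],
}

def features_to_retest(changed_files):
    """Given the files touched by a fix, return the features to re-test."""
    affected = set()
    for f in changed_files:
        affected.update(IMPACT_MAP.get(f, []))
    return sorted(affected)

# A fix touching auth.py means re-testing login AND invoicing,
# not just the feature the fix was aimed at.
```

Risk-based selection is then just a matter of ranking the returned features by criticality, or by how bug-prone they have historically been, before spending your shrinking test window on them.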

Which is the better approach: regression after regression, or specific, targeted risk-based tests? What say you?