Sunday, October 16, 2011

Of logging bugs

Sometimes we are too scared, or too cautious, to log bugs. We spend time unnecessarily investigating an issue just to make sure it's valid. Some teams even include the validity rate of their bugs as part of their deliverables. Hasn't it occurred to them that this may backfire? Valid bugs are better than invalid bugs. However, nobody can deny that invalid bugs are much, much better than escaped bugs.

And then, after we log bugs, we are told to document the steps to reproduce them. And probably to create test cases to make sure the bug won't escape untested in the next release. They tell us: document it, so that if anything happens to you, we know what to do; it's also useful to the person after you. When what they actually mean is: document it, so that if you are fired or resign, we know what to do; it's also useful to the oblivious newbie we hire after you leave.

The good thing about bugs that are logged in detail is that they provide a way to understand the function they relate to. If you don't have enough time to study a feature, read the bugs logged against it. To understand a feature is to read its bugs.

Q: How many QA engineers does it take to change a light bulb?
A: Five. One to write a plan, one to understand the requirement, one to actually change it, one to check that the bulb is changed properly, and one to manage all these people.

I still remember a senior QA manager from Company M once said: a QA engineer is only as good as the bugs he catches. If he's just running tests, we might as well automate them on a machine and fire him. Sounds cold, doesn't it?

Saturday, May 14, 2011

The most important thing to deliver, the last thing to do

Lots of organizations, if not all, tend to preach the same mantra: "Quality is our no. 1 priority", "We hold quality in the utmost regard", quality this, quality that, yada yada. However, more often than not, the time to 'uphold' this quality thing comes at the very end of the lifecycle. Not just that, it gets the least time allocated to it! Ironic, isn't it?

For instance, a product plan goes into, let's say, a hardware phase, maybe by the EE guys, for 4 weeks; then the software for that product is developed, 4 weeks; then the testing phase, 2 weeks. Suddenly, when the real project starts, the EE guys screw something up, or maybe the software guys do, taking more than the weeks they were allocated. "Oh, never mind. Let's just reduce the testing phase to 1.5 weeks." Wow, how convenient is that! You expect us to deliver the same result we planned for a 2-week time frame, in much less time. Yet you still say quality is your no. 1 priority?

Rubbing salt in the wound: if any issue is found in the product, who's the first party held responsible? No prize for guessing. "It's our fault we introduced that bug, but hey, it's the testing team that should have discovered it before the customers did."

Oh my. SSDD. Same sh*t, different day.

Sunday, February 6, 2011

On discovering bugs late

It’s been a while since the last update. I’ve been busy testing the final regression of our product. Or is it?

A regression test is defined as a test where we want to see whether fixes or changes in the code, meant to fix certain features, interfere with or break other, unrelated features. By nature it is the same repetitive task all over again, which is best automated. Or is it?

The fallacy is that a phase we call regression testing, more often than not, is not regression at all. The time is spent validating bug fixes, or discovering new bugs that may have been caused by the fixes, or that existed even before the fix was introduced. Either way, the objective strays from its original path. Regression is now just another round of tests. Or is it?

So, who’s at fault? Developers? Testers? Maybe nobody at all. The time factor plays a big role in deciding whether a regression test is really a regression test. As in the usual cycle: the bugs are discovered late and the fixes come in late, so the testing happens late.

“Blame the testers then, for discovering the bugs late. They should have discovered them earlier and saved us time!” Yes and no. How do we know whether the bugs could have been discovered earlier? How do we know the bugs were not recently introduced?

I recall my previous company, where a post-mortem was conducted to see whether the testers were discovering the bugs late. (Can you believe that resources are spent discovering whose fault it was, instead of finding more bugs? Well, in a mud-slinging situation where everybody is trying to save their own asses, anything is possible. :] ) We were actually asked to re-test some late-discovered bugs on the earlier releases, just to prove that the bugs could have been discovered earlier! Isk isk isk… *shakes head in disbelief*

So, what? If it’s the tester’s fault, are we gonna blame the tester then? Or if the bugs weren’t there earlier, are we gonna blame the developers? Knee-jerk, reactive decisions like these won’t bring any real benefit, I think.

One thing we can do to mitigate this is to try discovering the bugs earlier. How? Strategize around critical features, or features known to be prone to bugs. Or new features on which not enough testing has been done. Or features that are not tested that often (imagine them as uncharted territories). Do an impact analysis on each fix coming in, and re-test the features related to that fix.
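For illustration, that kind of strategizing can be sketched as a tiny risk-scoring exercise. Everything here is invented — the weights, the factors, and the feature names — but it shows the idea of ranking features by risk and spending the limited testing time on the riskiest ones first:

```python
# Toy risk score per feature; the weights and factors are made up
# for illustration, not taken from any real process.
def risk_score(feature):
    return (3 * feature["critical"]          # business-critical?
            + 2 * feature["bug_history"]     # bugs found here before?
            + feature["recently_changed"]    # touched by incoming fixes?
            + feature["rarely_tested"])      # uncharted territory?

def prioritize(features, budget):
    """Pick the `budget` riskiest features to test first."""
    ranked = sorted(features, key=risk_score, reverse=True)
    return [f["name"] for f in ranked[:budget]]

# Hypothetical feature inventory.
features = [
    {"name": "login", "critical": 1, "bug_history": 1,
     "recently_changed": 0, "rarely_tested": 0},   # score 5
    {"name": "export", "critical": 0, "bug_history": 0,
     "recently_changed": 1, "rarely_tested": 1},   # score 2
    {"name": "billing", "critical": 1, "bug_history": 1,
     "recently_changed": 1, "rarely_tested": 0},   # score 6
]

print(prioritize(features, 2))  # → ['billing', 'login']
```

The actual factors and weights would come from your own bug data and impact analysis; the point is only that the ranking makes the trade-off explicit instead of leaving it to gut feel.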

Which is the better approach: regression after regression, or targeted, risk-based tests? What say you?


Saturday, December 25, 2010

If a step fails, skip the rest of the steps?

Suppose we have a product that has 3 items and 4 features, and each feature is applicable to each item. So it will look like:

Item 1 – Feature W
Item 1 – Feature X
Item 1 – Feature Y
Item 1 – Feature Z
Item 2 – Feature W
Item 2 – Feature X
…and so on…

How should the test cases be written? We can have one big test case per item (Approach A), like this:

Test Case 1 – Features in Item 1

  1. Test Feature W in Item 1
  2. Test Feature X in Item 1
  3. and so on…

Or one big test case per feature (Approach B), like this:

Test case 1 – Feature W

  1. Test Feature W in Item 1
  2. Test Feature W in Item 2
  3. and so on…

Then, suddenly, a bug is discovered in Feature W. With Approach A, the test cases for Items 1, 2, and 3 will all fail, because every test case includes a step that tests Feature W. But with Approach B, only one test case, i.e. the test case for Feature W, will fail.

But how do we know in advance whether it’s an item or a feature that will fail?

Also, rubbing salt in the wound, testers who rush their testing tend to skip the remaining steps once a test case fails. For instance, if a tester finds that a bug fails the third step of a test case, he or she may just skip the rest of the steps and fail the whole test case. That’s why it’s no good to rush our testers to complete their tests quickly :P

Of course, some may argue, maybe it’s better to use Approach C instead:

Test Case 1 – Feature W in Item 1
Test Case 2 – Feature X in Item 1
Test Case 3 – Feature Y in Item 1

That translates to a much higher number of test cases, i.e. the number of features × the number of items!
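Approach C is essentially the full cross product of items and features. A minimal sketch (the item and feature names are placeholders) that enumerates one test case per pair:

```python
from itertools import product

# Hypothetical inventory: 3 items x 4 features.
ITEMS = ["Item 1", "Item 2", "Item 3"]
FEATURES = ["W", "X", "Y", "Z"]

def generate_test_cases(items, features):
    # Approach C: one test case per (feature, item) pair, so a bug in
    # one feature fails only its own cases, never a whole item's suite.
    return [f"Test Case {i} - Feature {f} in {item}"
            for i, (f, item) in enumerate(product(features, items), start=1)]

cases = generate_test_cases(ITEMS, FEATURES)
print(len(cases))  # → 12, i.e. number of features x number of items
print(cases[0])    # → Test Case 1 - Feature W in Item 1
```

The upside is precise failure isolation; the downside is exactly that blow-up in test-case count.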

So, the remedy? Use Approach C if you want.

Or, what we can do is make sure that the testers still execute the rest of the test steps, even though an earlier part of the test failed due to a discovered bug.

How do we know the testers still execute the rest of the test steps? One relative measure is the execution time — that is, if your team has it recorded. If the execution time is much, much shorter than the average execution time of that same test, something could be fishy.
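If the team does record execution times, the "fishy run" check could look like this sketch — a toy heuristic, with the test IDs, durations, and the 50% threshold all made up for illustration:

```python
from statistics import mean

def flag_suspicious_runs(history, latest, threshold=0.5):
    """Flag a test run as fishy when its execution time falls far
    below the historical average for that same test.

    history:   {test_id: [past durations in seconds]}
    latest:    {test_id: duration of the most recent run}
    threshold: fraction of the average below which a run looks rushed.
    """
    fishy = []
    for test_id, duration in latest.items():
        past = history.get(test_id)
        if not past:
            continue  # no baseline yet, nothing to compare against
        if duration < threshold * mean(past):
            fishy.append(test_id)
    return fishy

# A test that normally takes ~10 minutes finished in 2 — fishy.
print(flag_suspicious_runs({"TC1": [600, 620, 580]}, {"TC1": 120}))  # → ['TC1']
```

It is only a hint, of course — a genuinely faster run is possible — but it tells you which executions are worth a second look.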

Another positive reinforcement would be to remind the testers themselves how major a bug they could miss if they fail a test because of a previous bug, just for the sake of completing their tasks for the day.

That also means management should not be too pushy about us completing our tests faster.

Fast, cheap, good. These three attributes don't co-exist. You can pick any two of them, but not all three.

Saturday, December 18, 2010

What does a Tester do, actually?

What does a Test Engineer or QA person actually do? The role is not as well known as software developer or coder. I myself had never heard of this job description before deciding what courses to take during my college years. I only came across it several months later, when I saw openings on J*bStreet while doing my job-hunting. Even when I decided to take up the job with Company M, I was still not clear about it. At that time, like most graduates, the aim was 'to get some experience first'.

[Yeah, we can talk a lot about the importance of pursuing your dreams, making sure the job fits you, dream jobs, satisfaction, yada yada yada. Yeah, right — tell that to employers nowadays. Fresh graduates look for experience, not satisfaction.]

But as my days at Company M went on, it turned out not to be such a bad decision after all. I got to know a different world, one that revolves around software development but is not the development itself. Testing had never occurred to me as a career. Ask students nowadays: how many of them actually know what a tester does? Or have even heard of the job? I myself have a hard time explaining it to others.

[Come to think of it, I'll tell them, "I find other people's faults for a living... Heheh..."]

The software development life cycle, or SDLC for short, is, simply put, the set of stages for developing software: first you specify the requirements, then you design the software, then you write it, and at the end you test it before it gets released to the customers.

Those were the days when not many companies really considered the last stage of the life cycle — testing — to be important. Developers did the testing themselves, or even worse, there was no testing at all. No quality check, nada.

There are actually several possible reasons. The major one is the lack of funds for dedicated resources to test. It's not just the people; it also includes the testing tools and the time needed. Why bother — just be satisfied with whatever tests the developers have already run.

It could also be that the testing stage is deemed not worth it. I got to know about one company, a big one, that doesn't really do proper quality checks on its products, the excuse being that the product's market lifetime is very short. By the time consumers realize the product has certain defects, the average consumer may have already bought the next version. So management calculated the ROI and decided it wasn't worth spending capital on testing; better to just wait for the customers to buy the updated version. No wonder the software in their products sucks!

After I did my time at Company M and moved on, I started seeing a bigger picture. Testing is not just applicable to software or hardware; it exists across different working worlds. It may go by different names, with different job descriptions, but in the end they converge on similar concepts: Tester, Validation Engineer, Quality Assurance, Quality Control — it even extends to audits, MQA, S*RIM, J*KIM, and so on.

If we really think about it, there is always some sort of checking going on in our daily lives. As you type on your laptop, who qualified it so that it will not explode in front of you? As you drive to work today, who made sure the car passed certain safety ratings so that it is safe to use? As you eat at your favorite mamak stall, which department certified that the ingredients are halal (if you're Muslim), or that the hygiene level is A or B?

There are always parties that will watch for errors, look for faults, and hunt for bugs.

Wednesday, December 15, 2010

Notepad++

I have to admit I’m quite behind when it comes to keeping up with technology and tools. I only got to know about Notepad++ when I joined this Company E. Good things about Notepad++:

- syntax colouring that follows the language of the file

- multiple tabs of files can be opened

- session memory, i.e. we can just close Notepad++ while working on a file, and it’s still there when we re-open it. No annoying auto-recovery stuff.

- And so on….

What I like most is the ability to open files in different views, i.e. Move to Other View. This is damn helpful when I need to see two documents side by side.

On top of that, Plugins > Compare lets me compare the documents viewed side by side and quickly spot the differences by colour. Talk about documentation testing!


Wednesday, December 8, 2010

Importance of recording setup time

QA Desk, 6.18 pm.

I’m still at my desk, adding some test cases to the test plan for the new component that we need to integrate. The whole day was pretty much spent setting up the environment. Come to think of it, can we say that the majority of the time is spent before the test, setting up the environment for the testing itself?

When I was at Company M, in our Test Management System (developed in-house by Company M itself), we had to enter the execution time, investigation time, and setup time for certain test cases. Then at the end of the period — say, after the project finished, at the post-mortem perhaps — the team lead would go into the system and start mining the data to see if there were any spikes in setup, execution, or investigation time. From there, improvements could be suggested, like:

- Having a dedicated environment
Saves the time of setting up each time the same test is needed, especially for new hires. However, it ties up resources that could be used for other activities, and the dedicated environment can sometimes be sacrificed and disassembled for that purpose. In my personal experience, when we built up some environments, they only stayed intact for a while, until the stringent times came and everybody started dismantling them for other uses.

- Clearly written test cases
How many times do we face problems because the test cases don’t clearly mention some settings, or some features that need to be enabled, for the test? What can be done is to encourage testers to update the test cases every time they find something useful to add.

As some people say: if you can’t measure it, you can’t improve it.
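The kind of data mining the team lead did could look something like this sketch — the records, the phase names, and the 1.5× threshold are all hypothetical:

```python
from statistics import mean

# Hypothetical records exported from a test management system:
# (test_id, phase, minutes), where phase is "setup", "execution",
# or "investigation".
RECORDS = [
    ("TC1", "setup", 30), ("TC1", "setup", 35), ("TC1", "setup", 120),
    ("TC2", "execution", 10), ("TC2", "execution", 12), ("TC2", "execution", 11),
]

def find_spikes(records, factor=1.5):
    """Return (test_id, phase, minutes) entries that exceed
    `factor` times the average for that test/phase group."""
    groups = {}
    for test_id, phase, minutes in records:
        groups.setdefault((test_id, phase), []).append(minutes)
    spikes = []
    for (test_id, phase), times in groups.items():
        if len(times) < 3:
            continue  # too little data for a meaningful baseline
        baseline = mean(times)
        for t in times:
            if t > factor * baseline:
                spikes.append((test_id, phase, t))
    return spikes

print(find_spikes(RECORDS))  # → [('TC1', 'setup', 120)]
```

Here the 120-minute setup of TC1 stands out against its ~62-minute average, which is exactly the kind of spike that would prompt a suggestion like a dedicated environment or clearer test-case instructions.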

At Company E, recording the setup time is not required yet. Would be good, eh, if I suggest this to my boss?