Random testing is like throwing dice; real testing, for meaningful results, should be ordered.
100% agreed.
Yeah. Let's pay money for a feature about to come to Windows for free.
I would argue that being an Insider provides *some* with the ability to *methodically* test builds, whereas the vast majority just run limited testing, or subjective testing, versus any sort of scientific testing at all.
With the massive numbers of people reporting issues, you have to figure that for every report there are X # of people not reporting but only upvoting existing reports, Y # of people who just read the reports, applying any workarounds as posted, and Z # of people who never look at feedback in the first place. I won't make any guesses, other than to say that I believe none of those numbers are 0.
Ok, while I am not pointing any fingers at anyone (except maybe myself), I have to point out the incongruity here.
Testers should scientifically be testing these builds, but subjectively feel unappreciated without explanations.
Let that sink in for a moment.
Mind you, I agreed that testing should be thorough and scientific. But, personally, I don't give a ... rodent's derriere about explanations, nor about appreciation. All I want to see are bugs being quashed.
However, I concede that those that 'feel' underappreciated are probably more likely to be the same ones who aren't testing very scientifically (or systematically, for that matter).
In all the beta testing I have been involved in over the years, I, personally, can unequivocally say that I derived my sense of satisfaction not from being told anything, but from seeing the product I was testing become a better product due to the testing conducted by the testers.
Granted, as with any venture, there were one or two (or a relative few in large-scale tests) who were glory hounds, who wanted some sort of recognition; but by and large, the vast majority of people I met were not testing to get recognition, but testing to help developers release a better product.
I'm on my Insider build now. Sometimes I wonder how many are. Yes, we humans have inquisitive minds, as you say. When my children or employees asked me why or why not, or when I gave them directions, I usually made it a point to tell them why or why not. It's a part of the learning process. Got that from my mother and some teachers. My dad, not so much.
My father was a scientist (a chemist) and an army colonel; I must have picked up some things from him. I like orderly testing and use, and I also like to see results, negative or positive. Negative results can be more educative than positive ones. When something works, it works, but if it doesn't... that's when you really need to know why.
My father is a retired horticulturist with well over 100 published papers, and I have a Master's in Biotechnology. I truly understand the need to know why something didn't work. Key word, someTHING. Not plural, singular. Experiments in my Master's thesis were focused on no more than a few variables, maybe up to 10, for the sake of keeping things coherent, keeping the workload manageable, and allowing me to complete the experiments, compile the data, offer analysis, and draw conclusions, all within a reasonable amount of time.
In software testing, however, particularly in beta testing, you go in *knowing* that things may not work. In addition, note that every single IP build we have tested has had a list of things that 'are not working'.
Do we really need explanations for each and every one of those things?
Or do we now qualify which things that are not working get an explanation?
The easiest solution is to fix the bugs and release another build. The second easiest is to discuss the most prevalent bugs, fix them, and release a new build. The third easiest would be to answer what the users want to hear in a democratic manner, then fix and release.
By far the hardest would be to answer every single problem, issue and concern that every single user has with each build.
Yes, it would be nice to know why, but realistically, the why is pretty irrelevant for us testers as we are not involved in the actual coding of the OS preview build we are testing.
As your father the chemist was, as my father the horticulturist was, and as I was in my Master's program, we were directly involved in the processes of our experimentation to obtain a result. The testing we performed was so we could alter the parameters ourselves to eventually lead to a successful outcome.
As software testers, we can only report the results, none of us are coding these preview builds ourselves. It is not an apples to apples comparison.
I have a PhD in engineering and have developed major internationally used packages. The developers will have a systematic checklist, but that ultimately only scratches the surface. Once a package goes into the wild, at best a small percentage will do some simplistic but systematic testing on situations applicable to them, but the majority will just run a few superficial checks.
I usually check if a new feature works in a VM first, then on a PC, etc. I always test these as clean installs to minimise interactions. If not, I ask here and on the feedback forum for possible workarounds, or even just education on how it works. If it is a real bug, I report it and move on. I do not feel the need for a long explanation of why it failed; I just want it fixed.