Last week I wrote about an automated translation system that went awry in the midst of a Polish international relations crisis. Today I discovered an excellent article explaining important parts of artificial intelligence in layman’s terms.
Read Asking the Right Questions About AI by Yonatan Zunger.
I would like to put it in the context of testing in particular. Using Yonatan’s terms, testing is a mix of indirect and undefined goals in an unpredictable environment. It belongs to the group of problems that are the hardest for machine learning systems to solve. Testing won’t be taken over by AI (“automated”, as they say in our industry) in the coming decades; possibly it never will be.
That doesn’t mean testing won’t change at all or doesn’t have to adapt. All roles where agency is given primarily to non-human actors (erroneously referred to as “manual testing” by a huge part of the industry) will slowly disappear. Testers will need to learn how to make better use of the machines they work with. They will need to better understand the results machines give them and quickly catch situations where a machine is answering a question different from the one that was asked - things that will be very hard without a better understanding of statistics and machine learning. It’s hard to tell whether the industry as a whole will need more or fewer testers in the future.
And one more thing - when discussing the ethical choices an autonomous car will have to make, Yonatan claims that society will have to make a choice and state it explicitly. As a sociologist, I disagree. Getting societies to reach consensus is extremely hard, if not impossible. That’s why successful societies are built on rather vague principles (so everyone can agree on them while interpreting them a bit differently) and have built-in “venting” mechanisms that allow people to express their disagreement and desire for change.
Takeaway: read the article. Do the situations described in the section “Ethics and the Real World” have anything in common with testing? What? How can testers apply lessons learned by AI researchers to their own jobs, even if they don’t use any AI?