Operation Successful Patient Dead Margaret Dineen, Encompass Testing
Let’s Test Oz 2014 • Learning • Meeting new people • Growing • Sharing …
Sharing … • I’d like to share an “Experience” with you • Experience = what you get when you don’t get what you want!
Before we start … • Walkthrough of a project I worked on recently • I’d like you to think about times you’ve found yourself in a similar situation and how you handled things • There will be bribes (sorry, prizes) for sharing!
Is this your experience? • Perfect, thoughtfully written, complete requirements • Wonderful, supportive team – your manager bakes cakes for you • Set your own deadlines – why don’t you take twice that time for testing, just in case! • Daily group hugs • Saved the world by finding the right bugs at the right time • On time, under budget, no bugs in production • Customers LOVED the software • You got an award for being just so AWESOME
It is? • REALLY???? • Can I get a job with you please?!
Is the following closer to the truth? • Technical challenges • Interpersonal challenges • Questioned your sanity and your ability • Questioned everybody else’s sanity and their ability • Crazy stuff … you want what? • Always prioritising and juggling based on limited time available
Let’s talk about one such project • Financial organisation • Authentication device • Background: change of manufacturer based on high failure rate of previous devices • New device, minimal changes to firmware, application or back-end system
Client mandate • We want you to run a basic test which we will define (register device, login, perform transaction, logout) • We want you to run the test across 48 operating system/anti-virus combinations • We will provide the provisioned devices • We will provide the test environment • You have 10 days to complete testing
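The mandate above amounts to one four-step smoke test repeated over a 6 × 8 matrix of platforms. A minimal sketch of how that matrix multiplies out, assuming illustrative OS and anti-virus names (none of these are from the actual project, and the test body is a placeholder):

```python
# Sketch of the client's mandate: the same four-step smoke test
# (register, login, transact, logout) across every OS/anti-virus pairing.
# All names below are illustrative, not from the actual project.
from itertools import product

OPERATING_SYSTEMS = ["WinXP", "Win7-32", "Win7-64", "Win8", "OSX-10.8", "OSX-10.9"]
ANTIVIRUS = ["None", "AV-A", "AV-B", "AV-C", "AV-D", "AV-E", "AV-F", "AV-G"]

def smoke_test(os_name, av_name):
    """Placeholder for the four mandated steps; a real run would drive a
    provisioned device against the client's test environment."""
    steps = ["register device", "login", "perform transaction", "logout"]
    return {"os": os_name, "av": av_name, "steps": steps}

# 6 operating systems x 8 anti-virus products = the 48 combinations
matrix = [smoke_test(o, a) for o, a in product(OPERATING_SYSTEMS, ANTIVIRUS)]
assert len(matrix) == 48
```

Note what the sketch makes obvious: 48 full device-to-back-end runs in 10 days leaves almost no slack to investigate anomalies, which matters later in this story.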
A bit of clarification… • Us: Who is going to provision these devices? Is provisioning part of the project scope? • Client: That’ll be outside of the test scope; it’s a simple process and it’s all under control • Us: OK, and another thing – you’re asking us to test physical devices on a virtual environment, which seems a bit risky. How do we know the environment will give the same results as the real production environment? • Client: We know everything. Don’t worry your head about this stuff, we’ve got it sorted!
Our approach • Client expectations set up-front ✔ • Risks identified up-front ✔ • Mission of testing clearly identified and agreed ✔ • Communication strategy agreed up-front ✔ • Hardware devices provided to us up front ✔ • Test environment provided to us up front ✔ • We ran the tests, we reported on the results ✔
Anything missing from the picture? • What did we miss? • What would you have done differently? • Any red flags yet? • SHARING TIME
Why the odd title? • Actual feedback from the project manager at the end of the project • Yes – it did suck • BIG time!
Outcome … • I screwed up • I misread signals, ignored red flags, crashed straight into icebergs and, from the client’s perspective, I failed to provide value • As a tester, not a good place to be • I thought I was doing a great job, so I hadn’t expected such bad feedback
What failure felt like • Dark • Self-doubt • Confidence-crushing • Numerous replays of situation and how I could have handled things differently • Why?
Reality • Once the project was under way, each team was so focused on their own component delivery that communication ground to a halt (visualise hamsters running on a little wheel) • Testing was seen as “will be managed later” • Risks were not managed; they were outside of our control; small project risks had a huge impact on testing • Hardware devices were incorrectly provisioned, but we didn’t know that! • Test environment was unstable and outside of our control • Back-end system was unstable and outside of our control
Put on the brakes! • What were we thinking??? • Why were we thinking it?!
Test headspace at start • Clean slate • Everything new, everything is awesome • Lots of questions; formed a conceptual picture of project • Clarity and understanding • Good communication; everyone has time to talk about doing things right • Relationships already starting to form • One big happy family working TOGETHER with a COMMON FOCUS
Test headspace during project • Must meet objectives – must get the job done • Time’s running out; must work faster • Just meet the objectives and report on findings • No time to dig too much into detail • BUT …
• Getting the job done was based on the conceptual picture formed at the start of the project • Was “getting the job done” still the best place to focus our effort? What should we have done?
Test headspace at end of project • It’s all bad • The sky is falling • The hardware is terrible • The software is shocking • Pull the plug … pull the plug
Client perspective at end of project • Testers always cry wolf, silly testers • PM: project delivered successfully, almost on time and almost on budget • What a hero!
Why such a discrepancy between stakeholder and tester? • What do you think? • Where do you think it started to go wrong? • SHARING TIME
Where did it start to go wrong? • It started to go wrong when I didn’t listen to my gut • Evidence found during testing would have meant something completely different if • We had really MANAGED our risks • We had been more confident about our misgivings • The exercise was so tightly time-boxed there was little time to react
The downhill slide • Not listening to my gut started a chain of events leading to failure on my part • Failure = • Σ(little mishaps, each of which could have been avoided) • No one catastrophic event • Once you’re on a downhill slide, the only thing you can really change is how quickly you get to the bottom
Why did I miss the obvious? • What do you think? • SHARING TIME
Some of the reasons for failure • Cognitive dissonance • Poor risk management • Complacency – lost the “context” out of “context driven” • I didn’t take enough time to really understand the problem, the steps required to solve it, or what was going on around me, so that I could react appropriately
Does this sound familiar? • Cognitive dissonance • Mental stress or discomfort experienced by an individual who holds two or more contradictory beliefs, ideas, or values at the same time, or is confronted by new information that conflicts with existing beliefs, ideas or values • Leon Festinger’s theory of cognitive dissonance – we strive for internal consistency. When dissonance is present, we try to reduce it and actively avoid information and situations which would likely increase the dissonance. • Let’s talk about aliens!
So how does this relate to the project? • Initial conceptual model had: • Correctly provisioned devices • Stable environment • When bugs appeared, I couldn’t even see that these two areas might be the cause of the bugs – my mind was focused elsewhere • I was blind to the facts because my brain wanted to reduce the dissonance
Poor risk management • Risks don’t just disappear all by themselves • They need someone to love them: • Recognise them • Own them • Have pre-defined action plans to manage them • We only did the first step because we thought someone else had them all under control.
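The three steps above can be sketched as a minimal risk-register entry; the project stalled at “recognise”, so the owner and action plan stayed empty. Field names and the sample risk are illustrative assumptions, not artefacts from the actual project:

```python
# Minimal sketch of "recognise / own / plan" as a risk register entry.
# A risk is only managed once all three steps are done.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str        # recognise it
    owner: str = ""         # own it
    action_plan: str = ""   # pre-defined response to manage it

    def is_managed(self) -> bool:
        return bool(self.owner and self.action_plan)

r = Risk("Devices may be incorrectly provisioned")
assert not r.is_managed()   # recognised, but nobody loves it yet

r.owner = "Test lead"
r.action_plan = "Verify provisioning on a sample device before day 1"
assert r.is_managed()
```

On this project, every entry in our register looked like the first state: recognised, unowned, unplanned.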
Complacency • I failed to remember that every project is different because the people dynamics are different • I can’t say I’m truly context-driven if I am regurgitating the same solution / process without first even recognising that this is a different problem to be solved • A half-baked attempt at CDT is easy but real CDT is hard!
The impact of failure on me as a person • Questioned my ability as a tester • Questioned my desire to remain a tester • Wanted to run for the hills • I chose flight rather than fight • These feelings didn’t magically disappear overnight
What I learned • Project direction and objectives may be set at the start of a project but testing must be able to anticipate and react to changes in circumstances during the project • When I feel “that’s a bit odd” I need to question • Communication strategies must be agile • I need to step back, defocus and question my beliefs – they may be wrong
Attitude • If I’d had a different attitude, would I have gotten a different outcome? • Does anyone have any insights as to how they react in situations like this? • Are there any tools you use? • SHARING TIME
Additions to my toolkit • Understand my scope of control • Regular defocussing; I need to look at all the evidence; not just one piece • I need to ask “Am I adding value” and not be afraid to deviate from “getting the job done” • I need to listen to my gut and acknowledge distress signals • I need to ask questions; they might not be as stupid as I think
My enhanced toolkit (contd) • Self-evaluation / notebook of woe • How I feel during the project • Deviations from the plan or my conceptual model, and why • How well is it going? • What can I do better? • Down-time • Gotchas • Every experience adds something to my tester’s toolset. The tools are always there for me to use, but I need to have the wisdom to look through my existing tools or to develop new ones depending on the CONTEXT.
It’s true that • Failure provides an opportunity for both learning and growth • BUT • It’s not comfortable and it’s not easy. • Maybe it’s worth sharing our failures as often as we do our successes. It may help us cope better with the hard stuff and become better testers.
And finally … • Thank you for listening to my tale of woe.