The future of the automation engineer

Although I’ve tried to avoid writing about test automation since publishing The A Word four years ago, I suppose I should probably take some time soon to add a few more chapters before the book (like my last one) becomes largely obsolete.

Not too long ago, I posted some predictions. In those predictions, I said:

Testers writing automation will become a rarity. […] testers won’t write that much application automation. Unit and integration tests will be written by the developers on the team (with coaching from the test specialist on their team as needed). The test experts on the team who code will more likely write diagnostic tools, frameworks for “ilities” (perf, security / fuzzing, stress, etc.) – as well as other tools / processes / approaches that accelerate the achievement of shippable quality.

A few weeks ago, I spit out a small tweet storm about the same topic. The TL;DR version is that I have a strong opinion that the role of someone who solely writes automation for developer-produced code will be gone soon. There’s just no good reason I can see anymore for developers not to write test automation for their own code.

For a longer, and really, really well-annotated version of my tweet-barf, please check out Richard Bradshaw’s excellent post here. Richard was one of the people I thought I’d scare with my tweets. He’s been (as far as I know) in the role of a test automator for some time. But he agreed with me. He gets it. He sees it too, and he’s in the trenches doing this stuff, while I’m just a talking head most of the time.

That means something.

I know there are some readers who don’t follow me on Twitter and may have missed this, but my tweets (and Richard’s further thoughts on the subject) are important, and a “prediction” that I am absolutely confident will come to life.


Making it Easy to Do Good Work

Time has flown, but it’s now been six months since I quit my job at Microsoft. That’s nothing compared to the 22 years I worked there, but still a minor accomplishment. I won’t rehash the differences between Microsoft and Unity – so I’ll just say it’s a different and (for me) pleasant experience. I spend a chunk of my time recruiting and hiring, and one thing I tell candidates about working at Unity is that “it’s easy to do good work here.” I emphasize that the work itself isn’t easy – the work is plenty difficult. It’s just that there are few obstacles (office politics, micromanagement, etc.) getting in the way of doing great work.

I’ve been here now just long enough that I have some idea of what I should be doing. Which, of course, means that I’m realizing how much work there is to do. But I’m up for it, it’s challenging, it’s fun, and it’s easy to do good work here. I still have a huge amount to learn, but it’s more exciting than the day I started.

The change of pace has also (apparently) been good for my health. My blood pressure – which had been a solid 120/70 into my early 30s – had slowly climbed over the years, peaking at a pre-hypertension level of 138/80 in my last two years at Microsoft. Two weeks ago, I was pleasantly surprised to discover that my blood pressure is almost back to my two-decade-old norm at 122/70. It’s a small victory, but one I’m proud of.

In the next six months here (and beyond) I’ll work a lot on moving our Services org to rely much more on monitoring, usage analytics and testing in production. There’s a ton of work to do, and really hard problems to solve, but we’ll get it done.

Of course, if you want to be part of this journey with me (no guarantees on improving your health), I have openings in Bellevue, San Francisco, and Helsinki.

A call to action: let’s fix the Wikipedia page on software testing

Today, I had a few minutes, and wanted to try, yet again, to make some positive change to the article on software testing on Wikipedia.

I failed. It’s a mess, and I don’t know where to start. It’s too long. It’s unorganized. Many (if not most) of the citations are a decade or more old. Some blanket statements are uncited (and arguably false). Other citations point to non-peer-reviewed conference presentations. At best, some citations are from books 20 years old. Software testing has changed, but the entire article represents antiquated methods of testing. I recognize that many teams test this way, but the article fails to recognize many other testing approaches.

@noahsussman attempted to clean a large chunk of the article, but it was blindly (mostly blindly) rejected. He needs help, but I’m not sure how to help him by myself. I don’t know what the new article should look like, but I want to be a part of creating it.

My first challenge for you is to go read the article from beginning to end. If things don’t sit well, read the citations. Try to make sense of it. Then come back here and tell me if you’re as frustrated (or as embarrassed) as I am. Compare it with the article on software development if it helps.

My second challenge is for you to join the cause to fix it. Send me a DM on twitter (@alanpage) – or send me an email (this domain – alan), and I’ll add you to a slack group where we can discuss strategy, share thoughts, and figure out how to modify this article so it makes sense to all of us, and becomes an article we all feel can represent the current definition and state of software testing.

To be clear, I think there’s some value buried in the current page, but I think it can be much, much better. If you agree, come help me (and if I can get him re-involved, help Noah as well).

Predictions and other stupid things

Between my talk at the Online Testing Conference, discussions with Brent on AB Testing, and my day job thinking about service quality at Unity, I’ve spent a lot of time lately pondering the role of the tester in modern software engineering, and what it means for today’s crowd of software testers.


Years ago, I gave a few presentations on the Future of Test. In hindsight, it was stupid of me to attempt to predict the future of testing, and I caution anyone thinking of going down this path (my hypocritical predictions are below). Years ago, I talked about “cutting edge” test ideas and claimed they were (or may be) the future, but few of those ideas are as interesting today as they were then. Today, I read an article on the Future of Testing that describes testing and software development practices from years ago. The future, it seems, depends a lot on context and point of view. That’s OK, and I ask you to consider that before you disagree (or agree) with my own thoughts.


All that said, I think I can make a few claims about where testing is going – even if these changes are years (or decades) out for many testers. Consider these wild-assed guesses based on my own experience rather than predictions about the future.


1) Independent test teams are diminishing in favor of the test specialist. There are many examples of this in software already. This ultimately means fewer testers, but good testers will remain in high demand.


2) The industry infatuation with automation – especially UI automation – will finally fade. I’ve been asking testers to stop writing automation for nearly a decade, and I’m beginning to see more and more examples of teams halting their UI automation efforts and investing more significantly in unit and integration tests (and monitoring).


3) Testers writing automation will become a rarity. Hand in hand with both points above, testers won’t write that much application automation. Unit and integration tests will be written by the developers on the team (with coaching from the test specialist on their team as needed). The test experts on the team who code will more likely write diagnostic tools, frameworks for “ilities” (perf, security / fuzzing, stress, etc.) – as well as other tools / processes / approaches that accelerate the achievement of shippable quality.


4) Huge amounts of testing will be done via monitoring real customer usage. I shouldn’t include this one, because it’s already true for many, many products, but I see enough people who don’t believe this approach can work for their product that I’m throwing it out here anyway. Given that it’s been 10 years since Keyes said that good monitoring is indistinguishable from testing, the disclaimer makes me feel icky, but still seems necessary.


I can’t wait to revisit this post in five years and see how irrelevant my claims are.

Technical Testing at the Online Testing Conference

In just under 5 days, I’m giving a presentation on Technical Testing at the Online Testing Conference.

If you’re wondering, “WTF is technical testing?” I’ll give you my opinions (as well as some examples) along with the usual angry weasel rants.

Here’s the twitter preview if you missed it.

Dear Weasel, What do you have against career growth?

The inevitable follow up to my last post is a discussion on career growth, and how to manage it effectively. For the record, I am not against career growth – in fact, I think it’s one of the most important parts of my job. What I’m against is employees making decisions based on growing their career over decisions based on making our customers’ lives better.

I’ve built “career guides” before to give employees examples of what growth looked like. At the time, I thought I was doing the right thing, but I don’t think so any longer. If an employee leaves your org / company because they didn’t feel like they had enough opportunity to grow, the problem is NOT a missing checklist of example tasks or competencies.

It’s a management problem.

Every manager must be deliberate and passionate about understanding what their team members are good at, and where they need to improve. Managers need to provide challenging opportunities for their employees, and provide a balance of tasks that stretch their employees and make them learn; and opportunities in their “wheelhouse” where they can excel and lead by example.

Managers who don’t do this should not be managers.

This is a point worth more discussion. Many people move into management as a growth opportunity (or more directly, they view it as a promotion). They may like the perceived importance of more meetings and visibility, but that’s absolutely the wrong reason to be a manager. Of course, decision making, communication, etc. are important for managers, but the most important thing has to be managing the careers of your employees. If a manager is not invested in challenging and growing their teams, they are 1) not challenging themselves, and 2) managing a team that is not learning or growing. In my experience, this is a horrible way to manage a team.

So what does this have to do with career guides / ladder levels / whatever your company calls the checklist they use to describe career growth?

What I’ve seen (granted, in one company directly) is that employees use the guides as a checklist. They look at the bullet point examples for the growth level above their own, write a sentence about every single one, and then go to their manager and ask why they aren’t being promoted. The guide drives their work.

I much prefer a model where it’s the manager’s responsibility to make sure employees have a growth plan, and to communicate with them transparently and frequently about the types of tasks and challenges appropriate to their growth – but to focus on the work that makes the business and the customers successful rather than on words on a page in some internal HR documentation. Create career development plans – write them down if you need to – but focus on improving the business, and assign tasks and responsibilities that stretch and grow your employees toward that goal.

I do see how transparency in career growth communication can help a company – and even why some companies may need it; but I think there’s a huge trap to fall into when going down this path that most companies do not consider. Do what you must, but, as with most initiatives, be very careful of driving wrong and damaging behavior.

Musings on Microsoft

If you can believe it, it’s been 4 months since I left the big M. I miss a lot of people there, but I can’t say I’ve missed working there. Sure, I’m still in the honeymoon phase here at Unity – but so far, it’s been fun, challenging, and most of all, just plain refreshing.

I took some notes on a few “aha” (or “hmmm”) moments during my first few months. I thought there might be more, but I pretty much hit all of these within my first month or two. Given the weight of the last one I’ll share, though, I wanted to sit on it for a while and reflect on my thoughts.

No Mo’ Microsoft

I’m really surprised how easily (and quickly) I transitioned to using so few Microsoft products on a daily basis. I have a Windows laptop (I still regret not getting a MBP instead, but I’ll save my reasoning for another post), and I use my Xbox a lot at home. But looking at running apps on my phone and desktop, almost none of them are Microsoft apps, yet I remain massively productive. The one exception is Excel. Excel is a beast and a brain dead simple way to pull and manage data from a remote database. In my last few months at Microsoft, I began using Visual Studio Code, and liked it a lot, but I’ve since discovered Sublime Text, paid for my own copy, and now have my favorite code editor ever.

Let me also use this moment to speak of the happiness that envelops me now that I don’t need to use Outlook. Ever. :) Yes, there’s no doubt that Outlook is the most full-featured mail app available, but it’s fat and slow, and I can’t remember a day when I didn’t curse it. I prefer a unified client over gmail windows, so I use Mailbird, and have been very happy.

Death of the Desktop

This is something I knew before, but never really thought about. With obvious exceptions in teams like VR or Graphics, pretty much nobody uses a desktop PC. Nearly all of my friends and colleagues at other tech companies use laptops exclusively, and 99% of the people I work with use only laptops (connected, of course, to external keyboards and a pair of 27″ 4k monitors at work).

Of course, if you’re building Windows, you need a big beefy desktop machine. As an aside, there was lots of tech press last week about Windows moving to git for source control, but they chose not to package / componentize Windows, so I imagine it still takes the better part of a day to build the beast even on the fastest machine. Fortunately, most of us build things that can build and test in a few minutes, and laptop convenience is worth any shortcomings in the ability to compile a zillion lines of code.

One interesting thing about this is my observation of what happened when we had a brief (10 second or so) power outage in February. If this had happened at Microsoft, hours (no exaggeration) would have been lost as people rebooted, ran disk cleanup utilities, and recovered lost work. At Unity, people stretched as their external monitors went blank for 10 seconds, and then went back to work. ~80 employees on site × $40 an hour (minimum) × 1 hour = $3.2k saved (ish!).


Microsoft builds software for corporations. Yes, those corporations have people, but they’re so far removed from the general Microsoft employee that they just don’t relate. Xbox One may have been the closest thing in my career to an exception – something that had more customer focus – and maybe Windows 95, but that audience is drastically different from the people using software today.

I can’t speak for other companies, but at Unity, we make multiple decisions every day based on the customer and their experience. “How well does this solve the customer’s problem?” “What does this enable the customer to do?” “Does this make our customers more productive or happier?” The people I work with here work hard with the goal of pleasing the customer.

Microsoft, upon reflection, doesn’t really give a shit about the customer. Sure, they do at a hand-wavy meta-level, and there will be several of my old colleagues who will tell me I’m wrong, but after seeing “real” customer focus for the last four months and reflecting on what motivates people, I’ve had a harsh realization. As I mentioned above, at Unity (in general, at least), employees take actions daily to please the customer. At Microsoft, employees regularly take actions with the goal to please their manager. I see people at Unity of all levels work across teams, and with a variety of people to do what’s right for the company and for our customers. Microsoft employees, in hindsight, avoid doing most work that doesn’t help themselves, and are over-focused on career growth.

The last sentence may sound like a harsh biased view from a disgruntled employee, but for context, I’ll share a big mea culpa. I spent at least 5 (probably closer to 10) years at Microsoft driving “career growth” initiatives across the company and sharing career growth plans and examples, especially with, at its peak, nearly 10k testers at Microsoft. I worked with HR on building Career Stage Profiles describing career growth across Microsoft’s level bands. As a result, most Microsoft employees (again, I worked across most of Microsoft, so I feel “most” is accurate) became focused on “how do I get to the next level” rather than “how can I create great software”. I helped build a monster, but didn’t realize what a distraction and deterrent it was for customer value until my time away (and significant reflection during long plane flights).

My last team especially (and I can say this without fear of disagreement, even though some of those folks read my blog) had a huge culture of pleasing management over pleasing customers. While I believe (and see at Unity) that customers drive our work decisions, what I saw on Teams was an over-the-top culture of making directors and Vice Presidents happy – sometimes at the expense of the customer. To be fair, that sort of culture happens in other teams (and other companies!), but it’s not the way I want to make software. The days of we-make-it-you-take-it software development are over. Microsoft has a history of making the software it wants to make and convincing its customers it’s what they want. Sometimes, they’ve been successful in this despite themselves, and I give them (us, at the time) full credit. However, we have the tools and ability to learn from our customers, understand our customers, and solve their problems wonderfully. To me, any approaches that completely (or largely) avoid customer learning are irresponsible.

I’m not hating on Microsoft. They have a ton of smart people who want to do the right thing. While I’m fairly confident that what I wrote above is an observational fact and not opinion, I have an opinion that the long-in-the-tooth, thick layer of middle management has been a primary contributor to this growth-emphasis problem. More likely, it’s a full-on systems problem with multiple inputs, but fortunately, it’s no longer in my span of problems-to-solve.

With this said, I’m sure my old colleagues will tell me how wrong I am (I’m confident that I am not), and my new colleagues may also tell me I’m wrong (which I may be). What’s important is that from day to day, I see customer-driven decision making that I never saw at Microsoft, and it’s been … refreshing.

A week of bad code

Earlier this month, I spent a week in beautiful Copenhagen at what’s called an R&D Training Week. The goal is that every new Unity engineer spends a week at the Copenhagen office, learning about the systems we use and about engineering at Unity. Granted, since I’m on the services side, a big chunk of the material wasn’t directly relevant, but I did learn a lot, and had a really nice time getting to know people and working on the project.

We had presentations every morning about some aspect of Unity engineering, and in the afternoon (and typically, late into the evening) we worked on a project. I won’t give away details, but in general, we needed to write a Unity engine component, and then tune it for performance using Unity’s job system.

I haven’t written C++ in over a decade (angryweasel pauses to think here about whether he ever wrote anything but straight C on Xbox One… maybe, but nothing other than maintenance). Enough of it came back, however, to make progress and figure things out. In fact, in the end, I had a solution that ran correctly, and ran quickly. So I guess I succeeded.

A few thoughts and insights came out while I reflected on the week and the exercise. I had to grep through a ton of source code to learn how things worked, and during that process, I learned how a lot of other things worked. I know more about how the engine and editor work than I ever would have figured out on my own. And, as I mentioned above, I completed the exercise, but that was, I think, a false victory, and one that reflects a real world problem – and a missed lesson learned.

I wrote code that solved the problem. It was fast. It was multi-threaded. It worked. But it was shitty code. In my effort to hit a deadline, I wrote crappy code. It was overly complex. It was filled with dead code and unused variables. I wrote so much crap in such a relatively short amount of time that I literally forgot what some of my code did. And no, I didn’t write any comments (nor many tests) for my shitty code. To my credit, you could say what I wrote was a prototype. And like most prototypes, I should rewrite it before shipping it.

It made me realize how easy it is in the real world for us (software teams) to focus on getting our work out to customers quickly (by a deadline) vs. giving them great work that we can be proud of (and that they can trust). The exercise, of course, wasn’t about getting something out quickly, but I was surprised (and a bit embarrassed) that I lost sight of code craftsmanship in pursuit of a deadline (and of keeping up with my better coding classmates).

All in all, a fun and interesting week, with some great learning points that I’ll continue to ponder.

Failure to Launch

In my role on Teams, I was “in charge” of quality – which eventually turned into everything from the moment code was checked in until it was deployed to our end-users. At one point during development, we had a fully usable product with no known blocking issues. We were missing key features, performance was slow sometimes, and we had a few UI tweaks we knew we needed to make. In what would seem like a weird role for a tester, I pushed and pushed to release our product to more (internal) users. Those above me resisted, saying it “wasn’t ready.”

I was concerned, of course, in creating a quality product, but I was also concerned whether or not we were creating the right product. I wanted to know if we were building the right thing. To paraphrase Eric Ries – you don’t get value from your engineering effort until it’s in the hands of customers. I coined the phrase “technical ‘self-satisfaction'” to describe the process where you engineer and tweak (and re-tweak) only for you or your own team. While the product did improve continuously, I still believe it would have improved faster, had we released more often.

In my previous post, I talked about how it’s OK to wait for a future release to get users that next important feature. While I truly believe there’s no reason to rush, I’m absolutely not against getting customers early access to minimal features (or a minimum-minimum viable product – MMVP).

The decision on whether to release now or later isn’t a contradiction. It’s a choice (mostly) of how well you can validate the business or customer value of the feature in use – and, if necessary, remove the feature. If you have analytics in place that enable you to understand how customers are using the feature, and whether that feature is valuable, it’s a lot easier to make the decision to “ship” the feature to customers. On the other hand, if you’re shipping blind – i.e. dumping new functionality on customers and counting on twitter, blog posts, and support calls to discover whether customers find value in the feature – I suggest you wait. And perhaps investigate new lines of work.

One thing I consistently ask teams to do during feature design is to include how they plan to measure the feature’s value to customers or to the business. Often, only a proxy metric is available, but that works far better than nothing at all. Just as BDD makes you think about feature behavior before implementation, this approach (Analysis Driven Development?) makes you think about how you’ll know whether you’ve made the right thing before you start building the wrong thing.

The short story is that an analytics system that allows you to evaluate usage and other relevant data in production, along with a deployment system that allows you to quickly fix (or roll back) changes, means that you can pretty much try whatever you want with customers. If you don’t have this net, you need to be very careful. There’s a fine line between the fallacy of now and a failure to learn.
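To make the idea concrete, here’s a minimal sketch of what a metric-gated ship decision might look like. Everything here is hypothetical for illustration – the metric names (`adoption_rate`, `error_rate`) and thresholds are invented, and a real system would pull these numbers from your analytics backend and flip a feature flag rather than print a decision.

```python
# Hypothetical sketch of a metric-gated "keep or roll back" decision.
# Metric names and thresholds are invented for illustration only.

def should_keep_feature(metrics: dict,
                        min_adoption: float = 0.05,
                        max_error_rate: float = 0.01) -> bool:
    """Decide whether a newly shipped feature stays enabled.

    metrics: a snapshot from an analytics system, e.g.
        {"adoption_rate": 0.12, "error_rate": 0.002}
    """
    adoption = metrics.get("adoption_rate", 0.0)
    errors = metrics.get("error_rate", 1.0)  # assume the worst if unknown
    # Keep the feature only if enough customers actually use it
    # AND it isn't hurting reliability.
    return adoption >= min_adoption and errors <= max_error_rate

# Healthy adoption, few errors: the feature stays on.
print(should_keep_feature({"adoption_rate": 0.12, "error_rate": 0.002}))  # True
# Almost nobody uses it: roll it back and rethink.
print(should_keep_feature({"adoption_rate": 0.01, "error_rate": 0.002}))  # False
```

The point isn’t the thresholds themselves – it’s that the decision is made from production data rather than from twitter, blog posts, and support calls.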

The Fallacy of Now

A long time ago (for most of us), we built products over a long period of time, and hoped that customers liked what we built. It was important to give them all of the features they may need, as we wouldn’t be able to get them new features until the next release, which was usually at least a year away.

Today, we (most of us) try to understand what features our customers need and we try to please our customers. We’ve moved from we-make-it-you-take-it releases to we-listen-and-learn releases. We ship more often – most apps under development these days ship new features quarterly, monthly, or even more often.

The Joy of Features

One thing I’ve seen throughout my career is the excitement of getting a feature to customers. It’s a wonderful feeling to give a customer a feature they love. Maybe they’ll tweet about it; maybe they’ll blog; maybe news of the feature will trend on reddit! Whatever the reason, getting features to customers is such an exciting task that it can sometimes overshadow stability.

A decade or more ago, this mindset made sense. If the feature doesn’t “ship” now, it will be years before it’s available. But it’s just not as important today. If your product ships monthly and you decide a feature isn’t ready for this month’s release, the wait is at most two months (assuming you cut the feature on the first day of the cycle). Yes, two months is forever. And if you have a quarterly release, six months may seem like a million years. But my bet is that if you gamble and try to shove features in early, you’ll end up with a pile of half-finished features that don’t help customers at all. If you’re really, really lucky, you won’t lose too many of them.

The challenge is a mindset problem. People are motivated by progress, and seeing your feature move through the pipe is exciting. But if you have a predictable ship schedule, missing a release is like missing your train. If you miss your train (in any city with reasonable public transportation), you don’t cry and freak out, because you know another train is coming along soon. If you miss getting your feature into this month’s release, you know you’ll make the next release. It’s ok to miss your train.

When I worked on MS Teams, we shipped a new web front end every week. Every seven days. Still, every week someone requested that we hold the release for one more feature. If you wait one day, they said, we can make this really cool thing happen. Every week, I said, “nope – the really cool thing can happen as scheduled next week”.


Whatever you’re doing, it doesn’t need to happen now. For 99.9% of the features you’re working on, your customers don’t need it now. They may ask for it, or you may tell them about it and excite them, but 99.9% of the time, they can wait. I know at least some of you are disagreeing, but I’m going to be a dick and just tell you that you’re wrong.

The point is, that I believe that customers (you remember that we make software to help people, right?) want to trust our software. Sure, they want new ways to solve their problems and new functionality that makes their experience awesome, but they want it to work. We certainly don’t need to make perfect software. However, we need to weigh whether the positive value of the new feature minus the distraction of unreliability or other flaws still results in a net positive for our customers.

It’s a challenge to get this balance right. It’s something I enjoy doing from my role (whether it’s QA, release management, tester, or whatever suits the situation). It’s hard, and it’s sticky, and it’s a big systems problem.

And that’s why I like it so much.
