3 thoughts on “AB Testing – Episode 12”

  1. Thanks for another great episode guys! My org is currently stuck in a very similar rut (mandatory, annual, self-selected peer scoring), which is proving hard to shake because it feeds into so many different systems – for instance, it’s the basis of the weightings for bonuses (even though, as we’ve explained to senior management, we’d be perfectly happy to just split the pot evenly and scrub the review charade altogether).

    So it was interesting to hear some alternative perspectives on how to do this merry dance. I like the idea of continuous feedback at the point of relevance, if only there were a way to solicit and provide it without it seeming forced (it sounds a little like those email footers: “Did you receive good service today? Click this link for yes, this link for no”).

    My only concern with Brent’s suggestion (“What would they need to do to make it a 10?”) is that – when scaled up across many reviewers – it could result in a laundry list of people’s tiny flaws, which would be depressing to receive (though I would trust the manager to filter and collate that list down to some actionable points). It does sound like a useful mechanism for extracting meaningful feedback when the brain otherwise runs dry.

    At 25:30 it briefly seemed that you were about to touch upon Google’s peer bonus system, which I hadn’t heard about before. I think I’ve got the crux of it now, having since found articles such as “A Look At Google’s Peer-To-Peer Bonus System”, but there’s not much written about it on the interwebs (other than acknowledging its existence), so I’d be interested in hearing more of your thoughts on it, either in comment or audio form 🙂

Comments are closed.