Agile Marketing Series: Feedback Within, Feedback Without, Part 1
Part 1: Internal Feedback and the Agile Team
In agile software, developers can code in pairs. Those pairs are all part of a larger development team that holds short, daily, stand-up meetings called “scrums,” where members of the team share progress and prioritize what each work pair will do for the day. It’s open, free and non-hierarchical in nature.
“Sprints” are short, fixed work periods with a wider scope, planned in meetings where big projects are broken down into manageable parts for the team to work on. The team then charts the progress of each project using burndown charts.
Burndown charts show hours of effort remaining on a project, graphed against time (usually a month or so).
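As a rough illustration, the arithmetic behind a burndown chart can be sketched in a few lines of Python. The hours and day counts below are hypothetical, not taken from any real project:

```python
# A minimal sketch of the burndown idea: hours of effort remaining,
# tracked against working days. All numbers here are made up.

def ideal_burndown(total_hours, num_days):
    """Ideal remaining-hours line: burn an equal share of work each day."""
    per_day = total_hours / num_days
    return [round(total_hours - per_day * day, 1) for day in range(num_days + 1)]

# A 20-working-day (roughly one-month) project estimated at 100 hours:
ideal = ideal_burndown(100, 20)

# Actual hours remaining, as logged at each daily stand-up (hypothetical):
actual = [100, 96, 93, 85, 84, 78]

# Comparing the two lines is what makes the chart useful as feedback:
for day, (i, a) in enumerate(zip(ideal, actual)):
    status = "behind" if a > i else "on/ahead of pace"
    print(f"day {day}: ideal {i:5.1f}h, actual {a}h -> {status}")
```

The chart itself is just these two lines plotted together; the daily gap between them is the feedback signal the team acts on.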
What’s the common element in all these agile components? Communication—or, more specifically, feedback.
Two people can’t code together without constant feedback.
Teams can’t meet daily without giving feedback.
Projects can’t be broken into parts without feedback.
Burndown charts can’t be followed without feedback.
But wait a minute! Marketing isn’t the same as software development.
No, it’s not. Where a piece of coded software can be tested immediately to see if it works, a marketing idea may take more time to execute and evaluate. Stories take time to develop. Yet in both disciplines, feedback is key.
And unlike software, marketing initiatives involve more subjectivity and interpretation, and where there’s subjectivity, it’s easy to confuse correlation with causation. (Wearing a red shirt doesn’t cause people to attend the game, even though many people at the game wore red shirts. Go 49ers!)
Because of all the subjective shades of gray in marketing, giving and receiving good feedback becomes critical. Otherwise the entire endeavor can quickly devolve into a state where opinions and hierarchy replace proper testing and collaboration.
- “You’re an idiot.”
That’s an example of bad feedback! And though I’m exaggerating, criticisms of campaigns, design, headlines and other matters of subjective taste are sometimes only thinly veiled versions of it, and criticism of ideas about business direction can get even more heated!
- “Criticism” isn’t a bad word.
In fact, the agile marketer should crave constructive criticism more than glowing, one-sided praise. One-sided praise too often originates from political agendas rather than a true desire to improve products or collaborate on better ideas for the customer.
So how do you give good criticism?
How can your feedback help the agile process?
Begin any feedback by describing what you “like.” If you don’t see or hear anything you like…well, look harder. Find something positive: at least one thing in the form or the idea that you connect with, or at least “don’t hate.”
Try to interpret the meaning of what you see. What does the form or idea evoke in your mind? What does it make you think of? What would it make you do?
Now see it from the perspective of your customer. Try to analyze how the execution fits into one of your customer stories.
After you’ve described the idea and interpreted its meaning, you can begin to raise concerns, or explain why you think a particular execution may fall short of its goals. Try to make a judgment based on the criterion of the idea.
The criterion is everything. Is this a huge branding campaign that will live on for a year? Or is it a web banner with a shelf life of only a few days? Temper your evaluation accordingly.
After all, you wouldn’t judge a child’s self-portrait destined for the front of the refrigerator by the same standard as a career retrospective of Chuck Close’s work.
In agile marketing, testing—not senior authority or criteria bias—settles disagreements.
Don’t judge the small ideas too harshly, and don’t analyze and evaluate the seeds of possibility off the table in the name of heavy-handed criteria.
As long as your team keeps feedback on a positive level, even the biggest disagreements can be settled in an atmosphere of professionalism and decorum—and fun competition.
The real beauty of agile marketing testing is that you don’t have to have a scientific hypothesis in place before you test. In his book Ignorance: How It Drives Science, neuroscientist Stuart Firestein says,
“The trouble with a hypothesis is it’s your own best idea about how something works. And, you know, we all like our ideas so we get invested in them in little ways and then we get invested in them in big ways, and pretty soon I think you wind up with a bias in the way you look at the data.”
For whatever reason, traditional marketing departments have relied on hunches and authority when making decisions. Call it the Mad Men Effect. For every Don Draper who’s right, there are a hundred who are wrong. And whether right or wrong, they all say the same thing: “Trust me, I know. This is right. That is wrong.”
We’d all like to think we “know” what will work. But we don’t, and those who claim to know can be catastrophically out-of-touch.
Test your competing ideas, look at the data, and then see what the hypothesis might be—because acting on the opinion of the highest paid person in the room is a sure-fire way to fail the same way, over and over and over.
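As a rough sketch of what “test your competing ideas, look at the data” can mean in practice, here is one common way to compare two executions: a simple two-proportion z-test in Python. The conversion counts and sample sizes below are hypothetical, and this is just one of several reasonable ways to read such a test:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical campaign: headline A vs. headline B, 4,000 impressions each.
z = two_proportion_z(conv_a=120, n_a=4000, conv_b=165, n_b=4000)

# |z| > 1.96 corresponds to significance at the 95% confidence level.
print(f"z = {z:.2f}; significant at 95% confidence: {abs(z) > 1.96}")
```

The point is that the winner is decided by the numbers, not by whoever outranks whom in the meeting.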
(See the recent Vanity Fair article on Microsoft, covered in my next post as a case study of out-of-touch authority.)
Next up: “Case Study on Negative Feedback—Microsoft”