The speechwriter, speaking coach and guru Nick Morgan wrote last week about the "uselessness of speaker ratings." His piece argues that participants usually don't know what they're being asked to judge, and he cites a study showing that the typical five-point scale renders such evaluations statistically meaningless.
On a five-point scale, Mrs. Lincoln, how did you like the play?
I've been organizing conferences since attendees smoked pipes, and over the years I've developed my own set of attitudes about speaker evaluations.
1. I hate speaker evaluations.
One star speaker I've used over the years insists that I never share any results with him, because he becomes obsessed with the comments, good or bad. He's haunted by the odd insult and unmoved by the praise, and he finds the whole experience to be net bad juju. He is not alone. One of the most popular speakers in corporate communication history tells me he has never gotten over a seminar participant who wrote, in 1967, that he had "big ears and a Napoleon complex." Egos are fragile, and they shouldn't be fucked with by one grouchy, hungover conference attendee.
Also: I loathe the classless but shockingly common conference emcee who, in the course of introducing the speaker, reminds attendees to please fill out their evaluations because they're very important to ensuring that we continue to improve our conferences. Translated, that means: "If this woman stinks up the joint, use the form under your seat to tell us about it." The speaker, who has presumably worked for hours on a presentation, likely flown to the event on her own dime and screwed up the courage to stand and deliver, deserves better.
And finally, I hate dealing with speaker evaluations as a conference organizer, because they require a fair amount of administrative work just to confirm what I already mostly know. My conferences are small enough that I can always look in on every session, at least for a few minutes, and I have friends in every session whose judgment I trust. No speaker I thought was great has ever bombed, and no speaker has ever seemed great to me only for the evaluations to tell me otherwise.
2. I need speaker evaluations.
Every once in a while, I will learn something that helps me. I'll look in on a session that seems a little low-energy and make a shallow, harried-conference-producer's judgment that the audience isn't enthralled. Then I'll learn from the evaluations that the speaker communicated with the audience in a quiet way, and they appreciated it deeply.
Once, a speechwriter at a seminar remarked that the afternoon snack was so meager as to be "almost churlish."
And the evaluations are a useful regular reminder that, however good or bad most of the crowd thought a speaker was, someone in the crowd disagreed strongly. That keeps a conference organizer from trying to please everyone, which is the first slip down the slope of pleasing no one at all.
But mostly, speaker evaluations are useful in confirming, quantifiably and with commentary, what I already know. They sometimes give me the courage of my convictions when I need to respond to a mediocre speaker who wants to return. I very rarely share disastrous results with speakers, and I rarely need to. (They usually also know when the audience wasn't bagging what they were mowing.)
Above all, I need speaker evaluations because most attendees find it arrogant not to ask for them. In the "customer is always right" culture we've been living in for the past 40 years, we're asked to rate how satisfying we found our toothpicks and our swizzle sticks. So when people pay a grand or more to attend a conference, they think they deserve a chance to tell their hosts whether they had a good time. That's hard to argue with.
3. I take speaker evaluations with a grain of salt.
So we offer speaker evaluations for almost every event we do. We offer them quietly, so as not to make the speaker unduly self-conscious, sending them in an email a couple of days after the event is over. We read the returned surveys carefully, and we talk earnestly about how the information might be used to make next year's conference better. (Maybe we should quietly tell the popular but aging speaker that the time is past for calling male attendees "Champ" and the women "Honey." A lot of people said the 20-minute networking roundtable felt rushed, so we'll make it 30. And yes: churlish snacks, never again.)
We share the results only with speakers who ask. And we take the information with a grain of salt, unlike a lot of conference companies that have hard-and-fast rules, e.g., any speaker who rates below 4.0 out of 5.0 doesn't get invited back. Anybody can have a bad day (and maybe a heavy-handed conference goon assigned them an ill-fitting topic in the first place).
Evaluating the success of a conference is as complicated and varied as the minds and moods and needs of every speaker and attendee (and conference organizer) involved.
Only the most obvious facts of such events can be measured on a five-point scale. Or any scale at all.
Despite what we soberly tell our bosses when we seek their permission to attend professional conferences, these events are like every other ritual social gathering: They involve much human mystery and magic and spirit—in addition to practical, take-home, bottom-line best practices you can implement your first Monday back at the office.
Otherwise, I would have tired of this difficult work many years ago—and you would have stopped attending anyway.