Everybody’s talking about ChatGPT. “I’m currently experimenting with AI composition,” a retired speechwriter emailed me on Christmas Eve. “Frighteningly good. Something that will need to be taken into account by speechwriters?”
Goddamnit, I thought I had the final word on this last summer, when I compared AI to a kind of human writer who has always worked among us—a hack: “a literate cynic who would write anything she or he was told, as quickly as possible and without too precious a regard for precisely chosen language, or any sacred belief in the connection between words and truth.”
But more communicators I trust are dangerously dabbling with ChatGPT. Like an executive communication consultant pal who asked AI over the holidays: “Write a heartfelt letter from the CEO to employees, telling them that 10% of the workforce will need to be laid off in the next five days.”
Here’s what AI spit back, in 10 seconds:
Dear valued employees,
I am writing to inform you that, unfortunately, we will be facing the difficult decision to lay off a portion of our workforce in the coming days. It is with a heavy heart that I have to tell you that approximately 10% of employees will be affected by this decision.
I understand that this news will come as a shock and will likely be met with sadness, frustration, and uncertainty. Please know that this decision was not made lightly and was only arrived at after careful consideration and a thorough analysis of our financial situation.
I want to assure you that we have done everything in our power to avoid this outcome. We have explored every possible avenue to mitigate the impact of the current economic challenges on our business and our employees. However, despite our best efforts, we have been unable to avoid the difficult decision to reduce our workforce.
I understand that this news will be difficult to hear and will have a significant impact on those affected and their families. Please know that we are committed to supporting you through this transition and will do everything in our power to make it as smooth as possible.
We will be providing severance packages and outplacement support to those affected by the layoffs. Additionally, we will be holding town hall meetings in the coming days to answer any questions and provide further information.
I am truly sorry for the impact this will have on you and your families. Please know that this decision was not taken lightly and that we are deeply grateful for the hard work and dedication you have shown to our company.
Sincerely, [CEO]
“Formulaic, self-centered and bland,” my pal acknowledged, before also pointing out that it’s better than many layoff letters she sees.
And written in a fraction of the time! you say. Yes, a fraction of the already relatively infinitesimal time that’s spent drafting a layoff announcement. But the lawyers and other corporate pencil-fuckers are still going to squander a hundred person-hours debating this draft, and making it even worse: “with a heavy heart” will become “with sincere regret” … and “outplacement support” will be throttled down to “separation consultation.” The day AI replaces corporate hand-wringers is the day it replaces corporate management itself.
Oh, but layoff announcements are sort of formulaic by nature, you say. Well, so is writing, “I’m so sorry,” when a friend’s mother dies of a heart attack. But how would you like your friend to find out you pushed the “Death Condolence” button and called it a day? No, you want your friend to know you debated “so sorry,” versus “so so sorry,” versus “may her memory be a blessing,” versus “your mother was an unforgettable woman who will live on forever through the son she raised.” And do you not believe that the bereaved or the laid-off will sense and suffer from the soullessness, if not of each “heartfelt AI message,” then in the aggregate?
Whether written by man, woman or machine, most layoff announcements will also wind up being “formulaic, self-centered, and bland,” you say. That’s how they wind up, but it’s not supposed to be how they start!
Using AI to write separates the writing from the thinking, separates the words from the human mind that chose them, and transforms communication into math. Communication isn’t math.
“Communication is the litmus test of management’s decision-making,” PR legend Jack Felton explained 30 years ago. If a decision is easy to explain to various stakeholders, it’s likely a good decision; if it’s torture to get across, it’s likely a bad one—or at least there’s something importantly wrong with it. At a crucial moment during the Cuban Missile Crisis, JFK’s speechwriter Ted Sorensen drafted a speech for Kennedy to give the nation. He wanted to include a particular paragraph that would have publicly offered an ill-advised trade deal to the USSR. But he wound up scrapping the paragraph, and thus the idea. Why? “It just wouldn’t write,” Sorensen said later.
I realize that layoffs don’t get called off because the communication staffer struggled with the fourth paragraph of the announcement. But tortured paragraphs—or exasperation expressed by their writers—do occasionally cause changes in the timing, scope, nature or tone of the campaign.
I’m permanently instructed by a tale told to me two decades ago by Terry Edmonds, who was the first Black chief speechwriter in a White House—Clinton’s. Needing to deliver a speech announcing the signing of the Welfare Reform Act, Clinton tapped Edmonds, who had firsthand experience with welfare as a child growing up in Baltimore. Clinton thought Edmonds could handle the subject sensitively.
But Edmonds disagreed with the policy—along with a number of other prominent African Americans in the administration, one of whom resigned in protest of the bill.
After much agonizing, and serious consideration of recusing himself from the job, Edmonds decided to write the speech, because he figured he’d rather have a say in the way Clinton explained the bill—making sure the speech discussed methods “for helping people get off welfare in a way that did not destroy their lives” and stressed that Clinton was “not abandoning his commitment” to help people up from poverty.
Now: Was Edmonds’ speech better for the moral agony that went into it, and that can be heard throughout it as clearly as children’s crying? Heard in the words, heard in the structure, heard in the teeth-gritted tone. Or should Edmonds have saved himself and all his colleagues a lot of unnecessary grief and just asked a machine:
“Write a heartfelt speech from a Democratic president announcing the end of welfare as we know it.”
Back to that AI-sanguine retired speechwriter I was emailing with. When I confronted him with arguments like the above, he claimed that, as a writer, he’s “not so crass” as to pass off artificial intelligence as his own.
“However,” he added, “if I was still in the corporate world, under a tight deadline, would I have the strength to resist?”
Well that, my friend, is a different question altogether.
The ultimate test: ask ChatGPT to compose an appropriate (and unbiased) comment to respond here…
GPT-4 is coming out in the spring. By all accounts, it will represent a significant step forward. What everyone needs to keep in mind about ChatGPT is that OpenAI only released it to the public for feedback. It ain’t ready for prime time. Hold on to your hats.
Oh, one more quick point. With any generative AI, GIGO still applies. Your output will only be as good as the prompt you created. The more detailed and precise the prompt, the better the output, whether it’s text or graphics.
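To make the GIGO point concrete, here’s a minimal sketch, purely my own illustration: it assumes the OpenAI Python client, a placeholder model name, and invented prompt details (the severance numbers and word count aren’t from the column above), and simply contrasts a vague prompt with a detailed one.

```python
# Illustrative sketch only: send a vague prompt and a detailed prompt to the
# same model and compare the output. Assumes the `openai` Python package (v1.x)
# is installed and OPENAI_API_KEY is set in the environment; the model name,
# prompts, and severance figures are placeholders, not anything from the post.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Write a layoff letter from the CEO to employees."

detailed_prompt = (
    "Write a 300-word letter from the CEO of a 2,000-person software company "
    "announcing that roughly 10% of the workforce will be laid off within five "
    "days. Name the severance terms (say, 12 weeks of pay and six months of "
    "outplacement support), acknowledge the company's own missteps, and avoid "
    "stock phrases like 'with a heavy heart' and 'valued employees.'"
)

for label, prompt in [("VAGUE", vague_prompt), ("DETAILED", detailed_prompt)]:
    # One user message per request; no system prompt, default temperature.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; substitute whatever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

The detailed prompt won’t make the machine any more sincere, of course; it just narrows the range of boilerplate it can reach for.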
Okay, THIS is my last point. The reason ChatGPT cranked out something as bad as a typical corporate layoff notice is that typical corporate layoff notices are what it was trained on. The dataset that taught it what a layoff notice looks like was undoubtedly not curated to include only the best examples available, but as many examples as possible. Unquestionably, this is one of the drawbacks of generative AI (which can still help with a first draft or provide some ideas you may not have thought of, but will ALWAYS require editing/rewriting).
Well, Shel, you and I have been arguing about the upsides and downsides of technology since at least 1995. Through the internet, intranets, blogs and other social media, Second Life, and now AI composition, you’ve ALWAYS taken the sanguine side.
I asked a communicator who’s been in the business for that whole time–asked her innocently, not even thinking about technology–what her favorite time in her career was. She said it was right before the internet came along.
You finished the project, you put it in a FedEx box at noon, and you went out for a boozy lunch to celebrate, because there was nothing else left to do.
Being the communicators’ tech guru for all these years has afforded you a good income. And you simply could NOT have been better or more generous in this capacity.
And, you or no you, this stuff would have profoundly affected the business … and your being here has made the tech far easier for many people to digest.
Still … MUST you be so sanguine, even about something as disturbing as robots, which could potentially remove the human moral conscience from the corporate communication equation?
Is there no new tech development you’re willing to fight, Sir!?!?
I’m sure there were people who liked life better before automobiles came along.
Technology is merely interesting, David. What’s utterly fascinating is how it can, should, and shouldn’t be used, how to make it effective, and how it can help us do our jobs and live our lives better (along with how to be cautious so it does no harm). For example…
https://contently.com/2023/01/03/guidelines-for-responsible-content-creation-with-generative-ai/
Oh, there was a whole animal kingdom that liked life before automobiles came along, Shel!
If only Contently had been around back then to warn us about responsible transportation choices!
There was and is a whole animal kingdom that liked life before humans and ShatGPT.
The automation of instant prose at the touch of a button, once you add in the matter of the request, is industrialising the communication industry.
It’s a sad vision of the future where potentially all humans will be outsourcing writing to a bot because we can only muster 140 characters or less.