Earlier this month I read of the case of the prominent law professor and frequent TV pundit Jonathan Turley, about whom ChatGPT “falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited [Washington] Post article that was never written and quotes a statement that was never made by the newspaper.”
When contacted by the Post, “Katy Asher, Senior Communications Director at Microsoft, said the company is taking steps to ensure search results are safe and accurate.” That is it and that is the problem. You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet. By the time you learn of a false story, the trail is often cold on its origins with an AI system. You are left with no clear avenue or author in seeking redress. You are left with the same question of Reagan’s Labor Secretary, Ray Donovan, who asked “Where do I go to get my reputation back?”
I’ve written before about my resistance to ChatGPT as a composition program, and I obviously stand to lose a lot if writers get replaced by R2 units.
But this reputation issue worries me more. It gives me the same feeling that I had at a particularly paranoid moment early in the Trump administration, when I was writing a lot of hateful shit about Trump while simultaneously worrying that this administration just might be organized enough to come after its critics. Turns out it wasn’t.
But at the time, my wife attempted to reassure me on another count: by telling me I wasn’t important enough for the Trump administration to bother punishing.
But now, with ChatGPT, you don’t have to be important enough to inspire the government or a journalist or a legal team to come after you. You just have to be important enough to have a career that relies on the generally favorable regard of a few dozen or a few hundred former, current or future colleagues—and one bitter enemy who wants to program a robot to publicly defame you, with no accountability of its own.
That’s pretty much all of us, folks. Or anyway, all but the most assiduous avoiders of social media and duckers of public media.
It was on my first visit to Australia 15 years ago that I first heard the term “tall poppy syndrome.” (I was so unfamiliar with the term that for days I repeated it as “tall puppy syndrome.”) It means that the tallest flower gets its head cut off by a society that discourages the over-celebration of any one member.
That notion was and is abhorrent to this American. Even in my most democratic-minded vision of civic life, celebrating cultural icons—whether writ large, or within small groups—is one important way we define the whole society.
You and I could each name many hundreds of famous people in common, the shared acquaintance with whom helps us understand our society together. And if you’ve been in any single trade for a while, we could sit together and name hundreds more, just in that wee realm of that industry. What do you think of her, what do you think of him? Helps us figure out what we think of ourselves, each other, and the world we work in.
The biggest celebrities are ironically safe from ChatGPT—or rather, they’re only as endangered by it as they’re currently endangered by all sorts of yahoo journalists and unhinged social media trolls who defame them every day, hurling insults and accusations and attempts at cultural cancellation that usually just go down the Twitter maw and come out the other end with all the other shit.
But you or me—boy, could a lot of doubt be sneakily sown about our suitability for future employment by one odd AI-generated report on LinkedIn, backed up by a source that looks authoritative but isn’t. And probably something real could be scrounged up by that relentless robot, from half a lifetime of social media posts, and presented out of the context of the thrust of our contributions over long and mostly well-lived years.
When even the creators of ChatGPT talk vaguely about how scary this technology can eventually be, I suspect they are talking about this. And when you and I worry idly about it, it seems to me we should be worrying about this, more than anything else.