When AI explained my book to me, it got a few details wrong. Namely, everything | Opinion
The other day I decided it was time to engage with ChatGPT and other AI programs to see for myself what the fuss was all about. I framed a question: Give me a good book description for “Hogge Wild: A Gordon Strange Mystery” (the book I published last summer). In less time than it took me to type the question, ChatGPT generated five engaging paragraphs with rich descriptions of the main character, his motivation, plot twists — everything you would expect from a well-written piece of jacket copy or book blurb. It was astonishingly well done, and plausible.
There was only one problem: All of the actual details were wrong. Wrong setting, wrong victim, wrong crime. I was immediately reminded of an old college professor who scrawled on a classmate’s literature paper: “You said nothing very well. D+.” Here, as opposed to nothing, incorrect information was presented splendidly.
Back when the earth’s crust was cooling, I had my first job in publishing for a small but well-regarded trade book company known for books and studies on cutting-edge technology, the publishing industry and media in general. My editor-in-chief, who led many of these studies, made extensive use of comparisons to illustrate the trends he was identifying, and it often fell to me, as an editorial assistant, to fill in his blanks. I still remember the first assignment he handed me: a sentence typed out on a sheet of yellow scratch paper, saying: “The number of checks being cashed daily (X) is rapidly approaching the number of grains of sand in the ocean.” It was my job to identify X, if not exactly then an approximate number that would stand up to scrutiny once in print. Many days later, through extensive interviews and research among government agencies, bank trade organizations and experts in the field, I was able to do so, grateful that at least he had not asked me to count the number of grains of sand in the ocean. When the study was finally published, that number — X — became a benchmark figure, reported over and over. It became fact. From that point forward, identifying the source of information coming my way, and the credibility of that source, became a mantra. If not an obsession.
Fast forward to 2023. Answers to questions are at our fingertips, but all too often, those answers are a murky soup of facts, misinformation and outright lies blended as expertly as the five paragraphs my chatbot created. And can we really expect people, exhausted by three years of pandemic trauma and inundated with carefully curated silos of information, to question the source of the information they are getting? Much less care?
The compelling narrative created by my bot took a page, unintentionally, out of the conspiracy theorists’ playbook, weaving a credible, totally incorrect narrative incorporating just enough obvious facts to make it seem true. It isn’t bad enough that we have people cobbling together half-baked theories for monetary gain or power; now we’ve got a program to help them do it. ChatGPT and other AI programs scrape the internet to craft their responses, so the possibility of recycling and repeating errors, sometimes deliberate ones, is considerable. The old computer adage, “garbage in, garbage out,” still applies.
AI will become ubiquitous in our lives, probably before I’ve finished this sentence. But we can’t allow it to do our thinking for us. Maybe one day we can rely on the machines to help us police themselves. Until then, if we value truth, we’ll have to continue trying to count the number of grains of sand in the ocean.
Karen Sirabian, of Jensen Beach, is the author of “Hogge Wild.”