Run! Hide! The AIs are coming for you! They're going to take away your job and otherwise completely screw up your life! Or maybe there's a single mega-AI, like Skynet in the Terminator movies, that will kill us all! Elon Musk could be secretly assembling murder robots at Tesla factories right now and, frankly, I would not put it past him. Why, just the other day he...oh, never mind.
Making apocalyptic predictions about AI has become a popular new subgenre for the egghead class. Thomas L. Friedman, who preens as A Really Big Thinker on the New York Times' editorial pages, was given a simple dog-and-pony demo of a chatbot and, after a sleepless night, wrote a March 21, 2023, column saying he foresees it becoming as powerful and dangerous as nuclear energy. "We are going to need to develop what I call 'complex adaptive coalitions'...to define how we get the best and cushion the worst of A.I." Pundits who want to appear extra savvy usually toss in an ominous warning that doomsday is only a few years away - or, if we're really unlucky, just a few months. Be afraid, be very afraid.
Look, I get it; recent advances in AI can seem super-scary, and it doesn't help when even an OpenAI co-founder admits "we are messing around with something we don't fully understand." It seems safe to say these technologies will impact our future in ways we can't anticipate - though I doubt they will nudge us towards Utopia, as AI developers actually like to claim.
Chatbots in particular are hyped as a boon to humankind because users can supposedly ask questions about anything and receive easy-to-understand answers in a wide variety of languages. A top concern about chatbots is that they work too well - that students can use a 'bot to effortlessly write homework assignments for them. And unless a teacher has reason to suspect the work was generated by a computer, the student might expect to get a very good grade. After all, any report or essay generated by the computer will be clearly written and contain true, verifiable facts...right? Uh, maybe. There's that sticky little problem of hallucinations.
A chatbot will sometimes make stuff up - Wikipedia has a good page on this "hallucination" phenomenon. Not only will it tell you a lie, but when asked follow-up questions the 'bot will double down and insist its answers were accurate, despite absolute proof it was dead wrong. Even more worrisome, researchers do not understand why this happens (see the quote above about "messing around" with something not fully understood).
Since the topic here is history, I want to be very clear this is not an issue of interpretation - that a chatbot answer was considered incorrect because it stated the Civil War was about states' rights or that John Quincy Adams was a better president than his father. Nor does it suggest the 'bot was simply confused and mixed us up with (say) the city of Santa Rosa in the Philippines. No, a chatbot hallucination means the program invented people, places or things that never existed, or that it ignored facts which have been proven true. And as I was amazed to discover, it happens a lot.
To evaluate the quality of the ChatGPT 'bot, I submitted a dozen questions, discussed below. None of them were intended to be tricky; they were the sort of questions I imagine might appear on a middle school or high school test after the class spent a unit learning about local history. (I did, however, throw in one where the topic had to be inferred.) ChatGPT answered three accurately; the rest were wholly or partially wrong, or the question was skipped. One answer was a complete hallucination. If a teacher gave the chatbot a D+ I would consider her generous.
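For readers curious how a quiz like this could be run without retyping each question into the chat window, here is a minimal sketch using the openai Python library. To be clear, this is my illustration, not how the test above was actually conducted: the model name and the sample questions are assumptions chosen for the example.

```python
# A minimal sketch of scripting a chatbot quiz, assuming the openai
# Python library (v1.x) and an API key in the OPENAI_API_KEY
# environment variable. Model name and questions are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Sample local-history questions (hypothetical, not the dozen used above)
questions = [
    "When was the city of Santa Rosa, California incorporated?",
    "Why is Luther Burbank associated with Santa Rosa, California?",
]

for q in questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; use whichever is available
        messages=[{"role": "user", "content": q}],
    )
    # Print each question with the chatbot's answer for manual grading
    print(f"Q: {q}\nA: {response.choices[0].message.content}\n")
```

The grading still has to be done by a human, of course - and as the results above suggest, checking every "fact" against the historical record is the part that matters.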
The rest of this article can be read at the SantaRosaHistory.com website. Because of recurring problems with the Blogger platform, I am no longer wasting my time formatting and posting complete articles here. I will continue to create stubs for the sake of continuity, but will be publishing full articles only at SantaRosaHistory.com.
- Jeff Elliott