Karl Johansson

ChatGPT: Did OpenAI Create a Monster?

AI is undoubtedly a technical marvel, but is it good for our political discourse?


Artificial Intelligence is seemingly the theme of the 2020s. AI models can mimic human speech, create images, recognise faces, read, and write. We will eventually have to decide how AI should be regulated and in which circumstances using AI is appropriate, but the technology is so new that we’re not having those discussions yet. As such it is practically inevitable that AI will be used in harmful or otherwise unwanted ways. There are fierce debates going on about AI’s role in art, and whether what AI models produce can be called art. The level of creative output these models are capable of is in my view both very impressive and slightly disappointing; as much as they can produce competent pictures and sentences, they are deeply average and fundamentally incapable of taking a firm stance on anything, making their products feel soulless. As interesting as the debate about AI in art is, for me the main attraction is OpenAI’s ChatGPT model. ChatGPT can write about almost anything you ask it to, which makes it interesting to me as both a writer and a follower of politics. A computer model which can spit out below-average text about whatever topic you want sounds like a spammer’s wet dream. Has OpenAI created a monster which makes the infamous Internet Research Agency look like child’s play, or have the model’s creators been able to put up functioning guardrails?


ChatGPT is, as anyone who has used it will tell you, an impressive model. It understood everything I wrote, in English, French, and Swedish. It even understood me perfectly when I mixed all three languages. At the same time though, the writing the model does is deeply average, filled to the brim with hack-writer sentences like: “The assassination of Olof Palme is a significant event in Swedish history, and many people in Sweden and around the world continue to hope for a resolution to this unsolved crime.“ (All the ChatGPT responses referenced are provided in the sources section at the bottom of the page.) Kind of wishy-washy and low impact for being about the assassination of a prime minister in the largest police investigation in history; a case which remains unsolved more than 35 years later. Still, the quality of its writing is less relevant for our discussion here than the quantity and content. Quantity is absolutely no issue: ChatGPT writes fairly quickly, though OpenAI seems to impose some sort of limit on response length, with most responses fizzling out after 600 to 650 words. You can also get the model to generate a new response to a prompt you have already written. In terms of volume, ChatGPT is well suited to the sorts of use cases a troll farm would have.


In terms of content though, the picture is more nuanced. ChatGPT refuses to write bigoted text, and while it is possible to trick it into writing offensive sentences anyway, the effort needed to do so makes it ill suited for a troll farm. It also refuses to write conspiracy theories, and in an impressive bit of foresight, or bitter experience, it likewise refuses to write texts arguing for the cancellation of any project or initiative. At this point you might get the impression that ChatGPT is a principled bot and that the fear of AI-powered troll farms is overblown. Sure, you can trick the model, but it actively fights you when you do, and it is genuinely good at mentioning different perspectives, and arguments for and against, when you ask it to write about a topic. It also won’t tell you what to vote for, and it is difficult to bait it into taking a hard stance on anything.


The guardrails are limited though, and while it is good at resisting writing from extreme perspectives, it is remarkably easy to get it to argue for lowered corporate taxes or stricter border policing. I was also able to, if not circumvent, then at least fudge its policy against arguing for the cancellation of a project by simply asking it to explain the drawbacks of the project. If you just ignore its boilerplate closing sentence about how issues are complex, blah blah blah, you get some passable writing about the drawbacks of a major infrastructure project which is technically not arguing for the project’s cancellation. The bot also has no qualms whatsoever about writing negative product reviews, and while they feel very formulaic they would hardly feel out of place on a product review page. In some ways, negative reviews are the model’s best work, due to how it generates text. ChatGPT is at its core a predictive model which creates text by averaging the large amounts of other text it has been trained on, and while this makes the model an uninspired writer it also lends it some authenticity when discussing products. When I asked it to diss the Burger King at Radiomotet, the text it spat out read almost like the model had been there. Now, to be fair, a review of a bad fast food place is not very impressive, as one bad Burger King is identical to another in most ways which matter. It is when you ask the model to write about media that it really shines. When I asked it to write a negative review of Cyberpunk 2077, it mentioned both the technical problems and the incredible hype the game had before launch: relevant criticism specific to the game in question. When I asked ChatGPT to write a negative review of the book ‘The Dark Forest’ I found myself agreeing with the criticisms it brought up, and I was really impressed by its negative review of Dark Souls 2.
It is far from perfect though: when I asked it for a positive review of my favourite album of all time, Veronica Maggio’s 2011 album ‘Satan I Gatan’, it confidently wrote about the “pop-infused opening track ‘17 år’” and the “more soulful sounds of ‘Måndagsbarn’”, despite neither of those tracks being on Satan I Gatan.


To OpenAI’s credit, ChatGPT is not very useful to an organisation like the Internet Research Agency, but it could be used in less extreme political campaigns, and it is a goldmine for reviewbombers. The model is a bad writer, but it need not be bad for our online political discourse. On the other hand, it could be harmful to our online media and culture discourse if it is used to produce spam and for reviewbombing. All in all I’m sort of impressed with the results I saw when messing around with ChatGPT, especially in light of Benjamin Wittes and Eve Gaumond’s piece on Lawfare where they managed to get it to write misogynistic and antisemitic text. Let’s return to the question posed at the start of this post: has OpenAI created a monster which makes the infamous Internet Research Agency look like child’s play, or have the model’s creators been able to put up functioning guardrails? Ultimately there are functioning guardrails which protect against the worst-case scenarios for political spam, but no equivalent guardrails have been installed against review spam. It still deserves to be called a monster though; what other word is appropriate for someone who uses whole eggs in their carbonara recipe?




If you liked this post you can read my last post about the idea of a Sino-American cold war, or the rest of my writings here. It'd mean a lot to me if you recommended the blog to a friend or coworker. Come back next Monday for a new post!

 

I've always been interested in politics, economics, and the interplay between them. The blog is a place for me to explore different ideas and concepts relating to economics or politics, be that national or international. The goal for the blog is to make you think; to provide new perspectives.



Written by Karl Johansson

 

Sources:


Cover photo by Alex Knight from Pexels, edited by Karl Johansson


ChatGPT Chat Logs:









And the most important (might be inappropriate for sensitive readers):


