
AI: Civilisational Threat or Overblown Anxiety?

AI experts are warning about the existential threat posed by AI; I'm not convinced.


Ever since ChatGPT was made available to the public, Artificial Intelligence (AI) has been all the rage. With stories circulating about people who allegedly believed the AI they were chatting with was sentient, it is easy to see the excitement. Writing has until now been a quintessentially human process, and there could be real productivity gains in outsourcing unimportant or repetitive writing tasks to a large language model. That being said, it is extremely common to see AI experts and enthusiasts play up the transformational effect AI will have on society, and to see AI experts warn of the potential dangers of AI. The theory goes that because AIs are black boxes, where the exact process by which decisions are made is opaque because it was “learned” through training rather than directly coded by the developer, there could be cases where the AI pursues its programmed goal in a way that generates negative externalities. The classic example comes from Nick Bostrom, who used the analogy of an AI programmed to maximise the production of paper clips that wipes out humanity in the process, because the AI lacked the common sense to know that paper clips are not more valuable than human life.


While the ethical implications of AI might be fascinating to discuss in a philosophy class, the reality is that our current artificial “intelligences” can’t even get a simple pasta recipe right, so I think humanity will be safe for the foreseeable future. I must say that this fear of AI is overblown. Seemingly every media outlet ran the story that Elon Musk and others have warned that AI “has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction”. Elon Musk also prophesied that Tesla would have fully autonomous self-driving capabilities by the end of 2017; you would think he would be better at predicting the capabilities of the cars his own company builds than the trajectory of advanced AI.

I’m also sceptical of other AI experts’ and programmers’ comments on AI for two reasons. Firstly, I think there is a kind of selection bias: people who work with AI will naturally find it more powerful and important than the average person does, which means that whenever an expert is asked they are likely to give a more extreme answer than a truly neutral party would. Secondly, I think a lot of the hype surrounding AI echoes the hype that surrounded cryptocurrencies. Programmers and tech enthusiasts are very excited about the potential of crypto technology, and evangelists routinely extol the virtues of cryptocurrencies, non-fungible tokens, and decentralised autonomous organisations; yet while there are impressive technical achievements underpinning those ideas, the real-world use cases are limited. I find that tech enthusiasts often conflate difficult technical challenges with difficult societal challenges, and that some seem to believe society’s issues stem from inadequate systems rather than from complex socio-political and socio-economic forces. Cryptocurrencies can’t solve problems in banking because banking problems are caused by incentive structures and macro-economic policy rather than man-in-the-middle attacks and double-spend issues. Similarly, AI can neither solve nor cause our societal issues, because those problems were never about whether computers could make images or write texts.


AI excels at limited games like chess and Go. In the real world there are too many variables for an AI to grasp, and before an AI could threaten human civilisation we would first have to hand it the power to do so. As long as we remember to appreciate technology because it is useful, and not merely revere it because it is advanced, we have nothing to fear from it. AI will cause some shifts, but not seismic ones. Most people’s jobs will be safe, if a little more efficient, and life will go on. Every major and many a minor technological breakthrough has sparked anxieties about the new technology, and the current wave of AI anxiety is just as overblown as the Luddites’ worries were.




If you liked this post you can read my last post about Taiwan here, or the rest of my writings here. It'd mean a lot to me if you recommended the blog to a friend or coworker. Come back next Monday for a new post!

 

I've always been interested in politics, economics, and the interplay between the two. The blog is a place for me to explore different ideas and concepts relating to economics or politics, be they national or international. The goal of the blog is to make you think; to provide new perspectives.




Written by Karl Johansson

 

Sources:


Cover photo by Pavel Danilyuk from Pexels, edited by Karl Johansson


