![The useless class](https://theianguerin.wordpress.com/wp-content/uploads/2018/05/screen-shot-2018-05-08-at-18-51-07.png?w=508&h=593&crop=1)
AI and the Creation of the Useless Class
“The development of full AI could spell the end of the human race” – Stephen Hawking
“Can we build AI without losing control over it?” – Sam Harris
“The biggest risk we face as a civilization is artificial intelligence” – Elon Musk
Listed above are three quotes: the first from one of history’s greatest scientific minds, and the second and third from arguably two of the most intelligent men alive today. Stephen Hawking was a theoretical physicist and the bestselling author of A Brief History of Time. Sam Harris is a neuroscientist, philosopher, podcast host and bestselling author. Elon Musk is the CEO and lead designer of SpaceX; co-founder, CEO and product architect of Tesla, Inc.; and co-founder and CEO of Neuralink. These men all agree that AI will not only change the world, but change what it means to be human. Going forward, this means that we as a civilization can choose one of two options.
1. Stop Making Progress
The first option is to stop making technological progress as a species. In all likelihood, there are only three scenarios that could force this to happen:
- A nuclear war
- An asteroid impact
- A global pandemic
2. Continue To Progress
The more likely path is that we will continue to improve our intelligent machines, eventually building machines smarter than we are that are capable of improving themselves. This is what the mathematician I. J. Good referred to as an “intelligence explosion”: the rate of improvement could get away from us and out of hand. Such progress could result in machines so much more competent than we are that the slightest divergence between their goals and ours could lead to our destruction.
How do we match up?
Those who think this is completely farfetched must find fault with the following assumption: we are nowhere near the summit of possible intelligence. It is overwhelmingly likely that the spectrum of intelligence extends much further than we can conceive at this moment in time. If we build machines more intelligent than we are, they will likely be inclined to explore this spectrum and inevitably surpass us.
This can be argued with simple logic. Electronic circuits function around a million times faster than the biochemical circuits of the human brain. This means that a machine intelligence could think a million times faster than the minds of those who built it. In a single week, such a machine could carry out roughly 20,000 years of human-level intellectual work.
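The arithmetic behind that figure is straightforward; here is a quick back-of-the-envelope sketch (the million-fold speed-up is the argument's assumption, a round number rather than a measurement):

```python
# Rough check of the "20,000 years per week" claim.
# SPEEDUP is the essay's assumed electronic-vs-biochemical speed ratio.
SPEEDUP = 1_000_000
WEEKS_PER_YEAR = 52

machine_weeks = 1  # one week of machine "thinking" time
equivalent_human_weeks = machine_weeks * SPEEDUP
equivalent_human_years = equivalent_human_weeks / WEEKS_PER_YEAR

print(round(equivalent_human_years))  # ~19231, i.e. roughly 20,000 years
```

So one machine-week at a million-fold speed-up works out to about 19,200 human-years, which the text rounds to 20,000.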
In 2017, AlphaZero, an AI developed by DeepMind, a Google subsidiary, took only four hours of self-play training to master chess before going head to head with the world champion chess programme, Stockfish. In those four short hours, the AI surpassed the entire history of human progress in chess, winning 28 of 100 games, losing none and drawing the rest. That same AI can now simultaneously play the world’s top 50 chess programmes and beat them all. This represents a remarkable pace of progress.
“The Best Case Scenario”
Another way of looking at the potential impact of AI is to consider the best possible scenario and then some of its ramifications. This would involve the design of the ultimate labour-saving device, after which everyone would be free to do as they pleased and would never have to work a day in their lives again. There is already a school of thought, and evidence to suggest, that AI is on the brink of taking over menial tasks and that within the next few decades it will outperform humans in more and more of them.
There is an alternative way to examine the issue of AI displacing millions, or potentially billions, of people from work and confronting us with a terrible problem in the job market. Previous industrial revolutions suggest that a smooth repeat of history, in which new jobs simply absorb the displaced, is extremely unlikely for what Schwab coined the “fourth industrial revolution”. What is more likely is that mass displacement from work will lead to huge societal problems as automated mass production becomes ubiquitous.
The implications of this are that nobody has any idea what to teach children in schools, because we don’t know what kind of skills they will need in 30 years. What we are talking about is the creation of the useless class, a class never seen before: people who are useless from the viewpoint of the economic and political systems currently in place. One increasingly discussed solution is the introduction of universal basic income to overcome the enormous socio-economic problems that the progress of this technology would cause. However, the likelihood of it actually being implemented is slim. The majority of government expenditure on healthcare and education is largely based on the premise that the system needs people to operate it. In the future, if the economy doesn’t need you, the state won’t be incentivized to invest in your health and education.
Claire Dillon, a technology evangelist formerly of Microsoft, stresses that steps must therefore be taken to ensure that AI is built ethically, creating conditions under which this intelligence can be developed safely. These steps are as follows:
- Decide where you are on the ethical continuum
- Connect AI implementation to a valid business case
- Determine measures of success and failure
- Determine the need for open or explainable AI (XAI)
- Hire a diverse team
- Educate, educate, educate
- Build a risk mitigation plan
- Track datasets – where data came from and how you used it
- Test, test, test
- Keep testing
- Monitor usage scenarios
- Be transparent