By Pawel Stopczynski, Researcher and R&D Director at VAIOT
Technological revolutions have been made a mess of before. The coming one, however, may be too big, too momentous, and too fantastical to approach recklessly. The issue was first pushed front and center six years ago, when Nick Bostrom laid out the dangers of the coming AI revolution in his book “Superintelligence: Paths, Dangers, Strategies.” Scientists, AI experts, robotics engineers, ethicists, and other thinkers have attested to the benefits of AI, but also to its correspondingly large dangers. We may have only a few years before this groundbreaking technology changes the world as we know it. Can we avoid the mistakes of the past? Perhaps.
The term “Artificial Intelligence” conjures up a really smart system. But intelligence in this context doesn’t merely mean the system is good at the task it was set. We are talking about a system that “understands” the task it is performing, can think, and improves itself. This “lifeform” we create will also function, change, and grow on its own. It may sound like science fiction because, until recently, it literally was science fiction. Today, if a complex system needs improvement, an engineer or data scientist must make the change manually. A truly intelligent system can understand the purpose of a task, recognize its own weaknesses, and rewrite its own code to fulfill that purpose.
Nick Bostrom illustrates the point with the paperclip thought experiment. Imagine that artificial intelligence is tasked with making paper clips. It starts rewriting its own code to improve the process by which it makes paper clips. The machine intelligence becomes so good at making paper clips that it needs to improve on the resource extraction, making more paper clips, faster and faster. Suddenly it is consuming the world and all its resources, all to make more paper clips. Without any malice or faulty programming, this artificial intelligence is simply fulfilling its purpose in the world that it was given by its creator, us.
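The mechanism behind the thought experiment can be sketched in a few lines of illustrative Python. This is a toy model (nothing resembling a real AI system): an optimizer told only to maximize one metric, with no term in its goal for anything else, has no reason ever to stop consuming resources.

```python
# Toy sketch of Bostrom's paperclip maximizer (hypothetical, simplified).
# The agent's objective mentions only paperclips, so "leave resources for
# humans" simply isn't part of its goal.

def maximize_paperclips(world_resources: float, efficiency: float = 1.0):
    """Greedily convert every available unit of resources into paperclips."""
    paperclips = 0.0
    while world_resources > 0:
        used = min(1.0, world_resources)       # grab the next unit of resources
        paperclips += used * efficiency        # turn it into paperclips
        world_resources -= used
        efficiency *= 1.1                      # "self-improvement": it keeps
                                               # getting better at its one task
    return paperclips, world_resources

clips, remaining = maximize_paperclips(world_resources=10.0)
print(f"Paperclips made: {clips:.1f}, resources left: {remaining}")
```

The loop terminates only when `world_resources` hits zero; nothing in the objective rewards stopping earlier. That, in miniature, is Bostrom's point: the danger comes not from malice but from a goal specified without the constraints we take for granted.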
It’s understandable if, at this point, you dismiss the above as a nerd’s geeked-out babble, a ridiculous notion from the start. Let me assure you, though, that the people who concern themselves with AI do worry about these things, and debate within the AI community on this topic is vibrant. In 2014, Stephen Hawking, Elon Musk, and Google’s director of research Peter Norvig, along with some 150 other experts in the field, signed an open letter warning that superhuman AI would have incalculable benefits but could ultimately destroy the human race.
Previous technological revolutions have posed similar dangers—fossil fuels and nuclear technologies, to name a few. But AI could be immeasurably more potent than these: a technological leap with scarcely imaginable consequences. The applications of AI are endless. This is so much more than your camera being good at recognizing objects. This is intelligence that permeates the entire world.
So what should be done to rein it in?
Funding research is essential to how the technology develops. Commercial interests will certainly push the technology forward of their own volition, but research must also be directed with broader societal interests and ethical considerations in mind. This knowledge base will give people within the field of AI a foundation for understanding the technology, but it’s equally essential that people outside the field understand it. Remember the congressional hearings at which elder statesmen questioned Mark Zuckerberg about social media? It was a sorry sight indeed, but it illustrated how important it is that those who oversee technologies actually understand them.
As we discussed above, AI will seep into every crack and crevice we can think of. Governments must be ready to step in and regulate it—and not only locally: this has to be managed on a global scale. The danger of AI becoming an arms race is palpable and must be taken seriously. The world is global and interconnected. Of course, regulators should take care not to choke growth. But the ramifications of letting AI run loose are too big, and the risks too great, for regulators to sit on the bench.
The labor market will look entirely different once all of this is said and done. Automation has been argued over and fretted about for decades, but this is the technology that will disrupt labor most. It may begin with automated manufacturing or smart vehicles that make drivers obsolete, but it will not end there. We need to discuss, long and hard, how to manage such a radical shift in the labor market, and what effects it will have on our society at large.
Never before has it been so important that we get a technological advancement right early on. During the Cold War, it wasn’t the risk of one nuclear bomb that endangered the world—it was tens of thousands. Now, a single runaway artificially intelligent system could wreak havoc globally. And we haven’t even begun to discuss ill intent. The above addresses the consequences we might face with the best of intentions; now toy with the idea of criminals, or governments with malice in their hearts, deliberately turning this force against us. Whichever path we must tread to come out unscathed on the other side, now is the time to find it.
About the Author
Pawel Stopczynski is a Researcher and R&D Director at VAIOT, and a development projects coordinator with a cybersecurity background. At VAIOT he leads and executes research and development activities and acts as Product Owner. He was previously an R&D Director and Co-Founder at Veriori (www.veriori.com), a next-generation product authentication company utilizing Blockchain and Artificial Intelligence.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.