This episode presents a brief discussion on Artificial Intelligence and its consequences for Humanity.
Will we ever develop superintelligent machines, and if we do, will they end up destroying the human race? Artificial intelligence is advancing all the time, but past predictions of when we will see human-level A.I. have been wrong. Is it even possible to create a truly powerful A.I., and what are the consequences for humanity if we do?
Follow us on Twitter: https://twitter.com/theeurasianpost
Send in a voice message: https://anchor.fm/the-radical-outlook/message
Article related to this podcast
Is AI Our Final Invention?
The future of artificial intelligence looks either exciting or worrisome depending on who you talk to. Some believe that AI may take over humanity, while others believe that we are far from AI achieving anything close to human intelligence. One author recently wrote a book addressing the potentially apocalyptic perspective on AI. James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, shared insights into his book on a recent AI Today podcast. Despite the book's title, he is more on the fence than it would imply.
AI is becoming an increasing part of our daily lives. From intelligent assistants to facial recognition, AI technology is starting to permeate all sorts of our personal interactions. Though we have not yet achieved the grand vision of a single intelligent system capable of learning any task, so-called Artificial General Intelligence (AGI), we are increasingly living in a world where the everyday person uses AI on a daily basis. With this sudden boost in AI usage, people are beginning to question just how safe the technology actually is. In many instances, AI and machine learning have the potential to provide significant benefit. However, we also have reason to be concerned about AI systems that go wrong.
Where society stands on AI
In his book, James Barrat interviewed a variety of experts from various technology fields, including AI researcher Eliezer Yudkowsky and noted technology futurist Ray Kurzweil. His sources included a long list of big names, and what is most notable is that each of them had something markedly different to say about AI technology and what a future with AI might look like. For example, one of the experts interviewed believes that true human-level AI (AGI) will arrive sometime around 2030. Since the book was written in 2013, and we are now entering the 2020s, it might be exciting to think we are on the cusp of AGI. However, there is really no way of knowing when we will get to AGI or what it will look like.
Barrat speaks at length about the notion of "grand innovations". From his perspective, the right innovation can lead us to a more comprehensive understanding of AI. The reality is that, as with any technology, we are a mere breakthrough away from some very serious changes to the way we currently operate. The more investment AI attracts, and the more effort major corporations, governments, and research institutions put into the challenge, the more likely it is that we will see the large leap that leads us to the future of artificial intelligence.
The result is that AI is finding many more applications now than it has in the past. This means we may be very close to needing ways to regulate the use and application of AI. The problem with artificial intelligence is that there is simply no real way to control exactly how the technology will be adopted and applied. Barrat believes that artificial intelligence could help us solve some truly hard problems. He stresses that we might find solutions to problems currently plaguing us in a way that could genuinely change the planet's trajectory. AI might bring us to a world where we can address climate change or create better medicine. It is a tool with the ability to do a lot of good, depending on who is wielding it and how it is used.
Barrat, however, sees a problem with managing AI. He points out that it is entirely possible for the "good guys" to accidentally become "bad guys", and that AI may have unintended consequences arising from inappropriate application of the technology. He notes that even companies with great technology will use their market strength to suppress competing technology and perspectives; the notion of "good" in a corporate or government setting can be complex. He also points out that what makes AI unique is that it learns from experience, and that experience might include human bias. Barrat details how the data you feed a system can alter its outcomes. His example: if you feed a system photos of doctors who all look alike, the system might infer that all doctors are white men. This can lead to problems down the line and flawed system logic.
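Barrat's doctor-photo example can be sketched as a toy program. This is a hypothetical illustration (the function and data below are invented for this article, not taken from the book): a "model" that simply memorizes the most common attribute it sees for each label will faithfully reproduce whatever skew its training data contains.

```python
from collections import Counter

def train_majority_model(examples):
    """Toy 'model': for each label, memorize the attribute that
    appears most often in the training examples for that label."""
    seen = {}
    for label, attribute in examples:
        seen.setdefault(label, Counter())[attribute] += 1
    return {label: counts.most_common(1)[0][0]
            for label, counts in seen.items()}

# Deliberately skewed, invented training set: every "doctor"
# example shares the same demographic attribute.
training_data = [
    ("doctor", "white man"),
    ("doctor", "white man"),
    ("doctor", "white man"),
    ("nurse", "woman"),
]

model = train_majority_model(training_data)
print(model["doctor"])  # prints "white man": the skew became the rule
```

Real machine-learning systems are far more sophisticated than a frequency count, but the underlying failure mode is the same: a model can only generalize from the examples it is given, so an unrepresentative dataset produces an unrepresentative model.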
Concerns around AI
Barrat shares his opinion that one of the biggest concerns with AI is the possibility that it might end up in the wrong hands. Power of this kind could easily lead to damaging outcomes depending on who wields it, or who reaches the breakthrough first. Barrat compares it to nuclear fission: we studied it for its tremendous positive potential, but the technology ultimately produced incredibly dangerous weapons, with some governments using it not for peace but as a threat. He points out that we lean on the idea of forced transparency, as companies pursue AI, to keep bad actors at bay. However, this also means that the wrong kind of people might obtain information they could use to carry out their own nefarious plans. Everything about AI is a double-edged sword, and we are largely hoping that everyone is going to play nice with it.
If history has taught us anything, it is that there is no reliable way to keep bad people from misusing tools. Though we might not end up with a dystopian future where AI systems work together to wrest control of the planet from humans, there are other outcomes, just as hazardous, that are more likely. Barrat explains that it could be dangerous if a criminal, a corporation, or a rogue government achieves an AI breakthrough that is then used to control other AI systems. The problem is that this technology is not only too big to regulate but also impossible to make disappear. Just as it is impossible to truly pull the plug on the Internet, so too would it be impossible to truly pull the plug on a strong AI system. With so many people invested and interested in the growth of AI, it can feel nearly impossible to place any limits on the technology. But according to Barrat, we will have to in order to ensure the technology's continued growth and adoption.
The use and development of artificial intelligence is only going to grow with time. We have no idea when the next big breakthrough is coming or what it will look like. Some scientists are skeptical that we will ever see the true science-fiction version of AI, but that does not mean it is impossible. As we continue to explore these technologies, only time will tell how we manage to control them and keep them away from bad actors. Will AI be our final invention? For now, all we can do is stay educated and keep the discussions around AI flowing.
Kathleen Walch is Managing Partner & Principal Analyst at AI Focused Research and Advisory firm Cognilytica (http://cognilytica.com)