Mark and Conrad Pearlman

AI: Utopian or dystopian?

Innovation and creativity can inspire societal transformation, but if misdirected, they can lead to unintended, negative consequences. Artificial Intelligence (AI) is a perfect case in point: it offers unparalleled potential for efficiency and progress, along with the possibility of serious downsides. The ongoing debate on these issues was thoughtfully covered in the media this past week.


In Sunday’s 60 Minutes segment on Yuval Noah Harari, the history professor and author raises concerns about evolutionary divergence: “For millions of years, intelligence and consciousness went together. Consciousness is the ability to feel things, like pain and pleasure and love and hate. Intelligence is the ability to solve problems. But computers or artificial intelligence, they don't have consciousness. They just have intelligence. They solve problems in a completely different way than us. Now in science fiction, it's often assumed that as computers will become more and more intelligent, they will inevitably also gain consciousness. But actually, it's much more frightening than that in a way: they will be able to solve more and more problems better than us without having any consciousness, any feelings.”




Henry Kissinger and Eric Schmidt address the humanity question in a WSJ article, in which they warn not only of the practical and legal implications of AI but also of the philosophical ones: “One should consider if AI perceives aspects of reality humans cannot, how is it affecting human perception, cognition and interaction? Can AI befriend humans? What will be AI’s impact on culture, humanity and history?”


We at No Baselines view AI as both utopian and dystopian. On the utopian side, we understand that AI can enable us to create the best health care system in history. On the dystopian side, we ask ourselves some uncomfortable questions, such as: What else is being done with that data? Who supervises it? Who regulates it?


Harari offers some useful guidelines. “One key rule is that if you get my data, the data should be used to help me and not to manipulate me,” he states. “Another key rule is that whenever you increase surveillance of individuals, you should simultaneously increase surveillance of the corporations and governments and the people at the top. And the third principle is never allow all the data to be concentrated in one place. That's the recipe for a dictatorship.”


He adds, “Now, we are at the point when we need global cooperation. You cannot regulate the explosive power of artificial intelligence on a national level. I'm not trying to kind of prophesy what will happen. I'm trying to warn people about the most dangerous possibilities, in the hope that we will do something in the present to prevent them.”


We will continue to follow this important No Baselines issue, focusing on the potential and limits of innovation and creativity as we delve more deeply into AI. We plan to curate and highlight relevant articles and videos. For now, we’re inclined to agree with Kissinger and Schmidt, who observe, “The advancement of AI is inevitable, but its ultimate destination is not.”











