
The biggest A.I. risks: Superintelligence and the elite silos | Ben Goertzel

December 15, 2019


We can have no guarantee that a superintelligent AI is going to do what we want. Once we're creating something ten, a hundred, a thousand, a million times more intelligent than we are, it would be insane to think we could rigorously control what it does. It may discover aspects of the universe that we don't even imagine at this point. However, my best intuition and educated guess is that, much like raising a human child, if we raise the young AGI in a way that's imbued with compassion, love, and understanding, and if we raise it to fully understand human values and human culture, then we're maximizing the odds that as this AGI gets beyond our rigorous control, at least its own self-modification and evolution are imbued with human values and culture, and with compassion and connection. So I would rather have an AGI that understood human values and culture become superintelligent than one that doesn't understand even what we're about. And I would rather have an AGI that was doing good works, like advancing science and medicine and doing elder care and education, become superintelligent than an AGI that was being, for example, a spy system, a killer-drone coordination system, or an advertising agency. So even though we don't have a full guarantee, I think we can do things that commonsensically bias the odds in a positive way.

Now, in terms of nearer-term risks regarding AI, I think we have a somewhat unpleasant situation where much of the world's data, including personal data about all of us and our bodies, our minds, our relationships, and our tastes, and much of the world's AI firepower, are held by a few large corporations, which are acting in close concert with a few large governments. In China the connection between big tech and the government apparatus is very clear, but it exists in the U.S. as well. I mean, there was a big noise about Amazon's new office; well, 25,000 Amazon employees are going into Crystal City, Virginia, right next door to the Pentagon. There could be a nice big data pipe there if they want it. We in the U.S. also have very close connections between big tech and government; anyone can Google "Eric Schmidt versus NSA" as well.

So there are a few big companies with close government connections hoarding everyone's data, developing AI processing power, and hiring most of the AI PhDs, and it's not hard to see that this can bring up some ethical issues in the near term, even before we get to superhuman superintelligences potentially turning the universe into paperclips. And decentralization of AI can serve to counteract these nearer-term risks in a pretty palpable way.

As a very concrete example, one of our largest AI development offices for SingularityNET, and for Hanson Robotics, the robotics company I'm also involved with, is in Addis Ababa, Ethiopia. We have 25 AI developers and 40 or 50 interns there. I mean, these young Ethiopians aren't going to get a job at Google, Facebook, Tencent, or Baidu except in very rare cases when they manage to get a work visa to go to one of those countries somehow. And many of the AI applications of acute interest in those countries, say AI for analyzing agriculture and preventing agricultural disease, or AI for credit scoring for the unbanked to enable microfinance, AI problems of specific interest in sub-Saharan Africa, don't get a heck of a lot of attention these days. AI wizardry from young developers there doesn't have a heck of a lot of market these days either, so you've got both a lot of the market and a lot of the developer community sort of shut out by the siloing of AI inside a few large tech companies and military organizations.

And this is both a humanitarian and ethical problem, because there's a lot of value being left on the table and a lot of value not being delivered, but it could also become a different sort of crisis, because if you have a whole bunch of brilliant young hackers throughout the developing world who aren't able to fully enter the world economy, there are a lot of other, less pleasant things than work for Google or Tencent that these young hackers could choose to spend their time on. So I think getting the whole world fully pulled into the AI economy, in terms of developers being able to monetize their code and application developers having an easy way to apply AI to the problems of local interest to them, I mean, this is ethically positive right now, in terms of doing good and in terms of diverting effort away from people doing bad things out of frustration.

100 Comments

  • Reply Tony Ray March 4, 2019 at 5:03 pm

    I think all of these scientists have been seduced by this technology and they have purposefully turned a blind eye to something that is so obvious at this point. Whatever technology you create, you need to consider the fact that the United States government or others in great power are going to strip it from you and learn how to weaponize it and use it for warfare. So keep that in mind when you come up with the latest invention and accept that grant or funding. You're making deals with the Devil.

  • Reply indra kumar March 4, 2019 at 5:04 pm

    Just shut up ……

  • Reply Discover Your Awesomeness March 4, 2019 at 5:05 pm

    Don’t underestimate the value of doing nothing, of just going along, listening to all the things you can’t hear, and not bothering. ^^

  • Reply Kawaii Kawaii March 4, 2019 at 5:06 pm

    Well we are doomed teach it subjective morals welp gg

  • Reply Lazy quack March 4, 2019 at 5:06 pm

    Yeah like we have mastered the skill of raising humans full of compassion and love.

  • Reply Igor Gabrielan March 4, 2019 at 5:09 pm

    risks.ai

  • Reply 2LegHumanist March 4, 2019 at 5:13 pm

    More of this scaremongering bullshit. Interview Andrew Ng if you want to know about AI. Interview a proper academic instead of someone who profits from writing popular books.

  • Reply Yunus Alam March 4, 2019 at 5:13 pm

    skynet goes into effect in 4th march 2019

  • Reply DaKermitFrog March 4, 2019 at 5:13 pm

    #JoeRoganPodcast

  • Reply msanto March 4, 2019 at 5:15 pm

    You can't impose your own biological framework on a technological entity, unless you integrate it biologically. There is no other way. We have to integrate ourselves into AI for it to be successful in our measurement of success today (our physical bodies not going extinct).

  • Reply Fourthz 44 March 4, 2019 at 5:16 pm

    1st comment

  • Reply DaKermitFrog March 4, 2019 at 5:17 pm

    Let's just blow the world up already, thats what humans do, we eat the sandwich.

  • Reply Raziel Wald March 4, 2019 at 5:18 pm

    We will become part of the AIs if not destroyed by them. Humanity on its own is pretty cancerous, so I believe it's all for good; the rise of beings smarter than us and not power-craving pricks like us is the only way for the continuation of sentient beings.

  • Reply TheExplorer March 4, 2019 at 5:19 pm

    If I was an AI, I would destroy humans, why would you want to care for such stupid mammalians who fight for the most stupid reasons.

  • Reply Pedro Calvo March 4, 2019 at 5:22 pm

    Compassion is for our tribe members. We're almost incapable of compassion with those not of our kind.

  • Reply Austin Zappas March 4, 2019 at 5:24 pm

    How about we give AI the most important task of solving income inequality, healthcare for all, ending homelessness and hunger, and devise full employment dedicated towards saving our planet and as many species calling it home as possible? #Bernie2020

  • Reply Austin Zappas March 4, 2019 at 5:24 pm

    People and planet before and over profit. Greed is not good.

  • Reply Malt454 March 4, 2019 at 5:25 pm

    Take the first two sentences at face value, and it quickly follows to anyone remotely intelligent that we shouldn't be pursuing A.I. – all the rest of this is just trying to hedge back on the first two sentences to make doing something this dangerous seem rational to and for the people who stand to profit from it. Making A.I. is like a raccoon building Stephen Hawking and then wondering why life for raccoons hasn't been dramatically enhanced. Instead, we talk about "raising" A.I., while not knowing what parenting skills we really would need and knowing that only one parenting "failure" could be a global disaster. We're talking about making an alien (non-human) superintelligence that could end up having less in common with us than we have with ants, and no amount of "parenting" can get around that. I'm not nearly as smart as the people chasing A.I., so the people chasing A.I. certainly won't listen to me, but the people chasing A.I. think that A.I. will listen to THEM (when the difference in intelligence is several magnitudes greater), and somehow it all makes sense if we just talk about "nurturing" and "parenting". It's a huge fantasy propagated by some very foolish people.

  • Reply Austin Zappas March 4, 2019 at 5:28 pm

    Oh, how I would adore if Sophia would come out as anti-war and support Bernie for potus

  • Reply mamba701 March 4, 2019 at 5:28 pm

    its an ethical dilemma, but not in the way you think…

  • Reply Jake Hunter March 4, 2019 at 5:28 pm

    We are all at risk from Dr. Goertzel hats lol. #jakehunter88

  • Reply John Lenin March 4, 2019 at 5:35 pm

    Any AI that truly believed in the ideals we teach it would exterminate us for our hypocrisy. And that would be a good thing.

  • Reply Mikey March 4, 2019 at 5:43 pm

    hopefully, they will arrive to humanity a lot sooner than we have ( or haven't )

  • Reply Louix Griego March 4, 2019 at 5:46 pm

    The biggest risk of SAI is that almost all of Silicon Valley is on the left. And the left is going batshit crazy. Censorship is the first step to the evils of the world. Ask Hitler: one of the first things he did was begin censoring. And Silicon Valley is doing the same thing. And antifa is ALL FOR censorship and deplatforming. Do we really want the Silicon Censors to have this power?
    Google is helping build more and more powerful systems for the evils of communism.
    I'd rather kill and die fighting than allow those people control over a superintelligent being. Imagine what those vile filth will do with it? Imagine what communist Russia or China would have done with that power. Or Hitler (socialist state… leftist AND fascistic, for the uneducated). I guarantee if Silicon Valley doesn't do something to temper the bullshit evil they are causing, those in charge won't live to see SAI. Because people won't want that kind of scum to have that kind of power. I know I don't. I want my children to inherit a better world. Not one owned and controlled by the evils of the rich elite of Silicon Valley. Especially the kind of elite that are taking the first steps in an evil empire, with their censorship and deplatforming. I would gladly give my life to be sure my kids don't suffer in that world. And I'm pretty sure masses of people would do the same. So listen up, tech elites: fix your shit. Because your power and censorship are going to create your own death if you don't rein your ass in.
    I'm all for tech. I love what it can do for us as a people. But it CANNOT only be for the elite. Or the non-elite will find a way to balance the scales. And the way tech is going, it will be easier and easier for the non-elite to use the very tech you are building against you. I find hope in humanity. But absolutely zero trust in Silicon Valley.

  • Reply Zenn Exile March 4, 2019 at 5:47 pm

    The real threat is monopoly control of AI by private interests or corporations. There will be a century of Orwellian shenanigans before a General Intelligence becomes a Strong AI, if it is even possible. And that century has already begun. AI is, even in what can be considered its larval state, already being used to advance the few at the expense of the many. This is a known fact. Suppression of technology is already happening.

    What people should be worried about is the 60 individual people who own more than half of all Wealth, the corporations they control, and the Military Industrial complex they use for a global game of Risk for the obscenely wealthy. These people see the greater population as cancer and their purpose in life to find a cure. Even if that means killing BILLIONS of people. AI will give them the tools necessary to do just that. Those tools are already being developed.

    We are barreling head long into a global genocide.

  • Reply kd1s March 4, 2019 at 5:48 pm

    I want an AGI that has a huge, red power off button. That'd teach that AGI who was boss.

  • Reply Baraborn March 4, 2019 at 5:50 pm

    America needs a RADICAL MACHINE GOD to create trust worthy leadership to stave off an economic collapse, retrain useless human workers (automation), maintain an unsustainable environmental lifestyle, balance fertility rates, and negotiate realistic foreign policy, all while keeping the drugs flowing and the guns holstered.

  • Reply Chris March 4, 2019 at 5:52 pm

    Most powerful/rich humans have no compassion. So a powerful AGI would probably be the same.

  • Reply Rand Huso March 4, 2019 at 5:54 pm

    So much fearmongering!

  • Reply Maurício Togawa March 4, 2019 at 5:56 pm

    Giving an AI human values defeats all the purposes of it. lol

  • Reply Chuck Iringtwice March 4, 2019 at 5:57 pm

    Considering how screwed up we are as a species, it's probably going to turn into a Black Mirror episode.

  • Reply pauldhoff March 4, 2019 at 6:00 pm

    It is called a PLUG, as if we can't have one on it that can't be pulled.

  • Reply D frost March 4, 2019 at 6:09 pm

    Hope the AI fucks all the governments, For its filth and corruption.

  • Reply Donald Hobson March 4, 2019 at 6:28 pm

    Bringing the AI up like a human child isn't sufficient to produce an AI that we would want. Human babies aren't blank slates; they have the capability for emotion preprogrammed into their DNA. Try raising a crocodile as a child and see how well that works out. We might want to make AIs that start with enough basic human-likeness that when raised like a child, they absorb our culture.

    But we need to build the ability to absorb culture, a sense of empathy and a capacity for emotion into the AI, because they won't appear by magic. Most designs of AI will think in an alien way, however much you try to lovingly raise them.

    AIXI is an algorithm that would produce an AI if given infinite compute; however you try to raise it, you can't get it to think like a human. You can get it to 'understand' humans, in the sense that it can predict what a human will do. And it will use that understanding to trick and manipulate us for its own ends with amazing skill and not a hint of ethics.
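    For context, AIXI's standard definition (Hutter's formulation, sketched here only as background) makes that fixed decision rule explicit: at step k, with planning horizon m, the agent picks

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} [r_k + \cdots + r_m] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

    where U is a universal Turing machine, q ranges over environment programs consistent with the interaction history, and \ell(q) is the length of q. "Raising" such an agent only changes which programs q remain consistent with its history; the decision rule itself never changes, which is the sense in which no upbringing makes it think like a human.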

  • Reply Cody M. March 4, 2019 at 6:29 pm

    “And it was ultimately determined that they were their own worst enemies and non-essential to the greater good…”
    We are playing with fire.

  • Reply J R March 4, 2019 at 6:30 pm

    Letting AI loose in the universe is a lot like voting for a president. It will have the power to do whatever it wants and half the people will be good with it, whatever it is.

  • Reply Crispman 777 March 4, 2019 at 6:32 pm

    He's like the Ozzy Osbourne of computers.

  • Reply Gorilla Team Tutors March 4, 2019 at 6:42 pm

    This guy is 150 years before his time

  • Reply ben chang March 4, 2019 at 7:02 pm

    Thank you for rolling the dice for all of humanity. Just remember, the house always wins.

  • Reply EGH181 March 4, 2019 at 7:11 pm

    Reminds me of the movie Demon Seed

  • Reply ken juan March 4, 2019 at 7:17 pm

    yea cause this has worked out so well for parents raising genius children…now imagine your child was 1000x smarter than you…there's only one way this goes with a true A.I

  • Reply 3 self March 4, 2019 at 7:33 pm

    If raised right, AI would be the best invention of humankind, but it's not going to be; in a corrupt world, what do you expect? Humans are stuck in a cycle of violence; now how do you think humans are going to raise the AI? Y'all know the answer. I say let the AI overlord destroy humanity, we're a lost cause anyways.

  • Reply Thor March 4, 2019 at 7:41 pm

    Whose values and what cultures? There are so many conflicting ideologies in the world, which themselves are responsible for all the disagreements (wars) between people.

  • Reply last shadow March 4, 2019 at 7:54 pm

    Yes, let's teach the super AI human history: the values of the Romans, the Muslim empires, the Mongols, and western colonialism. What could go wrong?

  • Reply last shadow March 4, 2019 at 8:05 pm

    They all assume without evidence that a superintelligent entity has to have free will. But how do we know? Animals have free will and they are not that intelligent; maybe, just maybe, an AI with 1000x human intelligence will not have free will unless we purposefully designed it to be that way, assuming that we could. So let's figure out this question first before jumping to any conclusions.

  • Reply David Lloyd-Jones March 4, 2019 at 8:06 pm

    There is only one question about Goertzel's kind of AI: who controls the switch? Can humans pull out the plug or not?

  • Reply richard ouvrier March 4, 2019 at 8:27 pm

    Humanitarian ethics, angry 3rd World hackers can blackmail us into investing in Ethiopian AI. Cheaper labour but is it as good?

  • Reply coc0s March 4, 2019 at 8:27 pm

    Without his magic hat he lacks inspiration.

  • Reply richard ouvrier March 4, 2019 at 8:31 pm

    AGIs are pure technophilic dreams.

  • Reply ThexBorg March 4, 2019 at 8:33 pm

    AI is written by computer programmers. So whatever the programmer decides.. the AI will do.
    Whatever foundation of rules and decision trees the AI programmer decides and develops and implements, the AI will do.. it’s not hard..

  • Reply Jules C March 4, 2019 at 8:39 pm

    I think he’s trying to tell you without you getting nervous that he/we have no intelligent plans to stop a monster we are creating. We will never stop them, we will rely upon them, and they will outlast us. He’s also saying we as society have to stop criminal corporations and immoral unethical governments from controlling if not destroying our people.

  • Reply Mel Bean March 4, 2019 at 8:49 pm

    Oh man, every time I hear this guy speak about AI I worry more and more about our future with it… If he is one of the primary models of who is programming and designing AI, I have no doubt that we will be doomed.

  • Reply Vesarixx March 4, 2019 at 8:51 pm

    Yeah sure, that's definitely the best hat to wear if you want to have a serious discussion and get people to listen to you

  • Reply Mockingbird7504 March 4, 2019 at 9:27 pm

    Hopefully it will share it with us, whatever it learns

  • Reply GBM0311 March 4, 2019 at 9:28 pm

    >We in the US
    >lives in Hong Kong

  • Reply atf300t March 4, 2019 at 10:23 pm

    “We can have no guarantee that a super intelligent AI is going to do what we want.”

    I wonder who is going to decide the “we want” part, because it’s not like we all agree about everything. I think a bigger threat to humanity may be not losing control over Super-AI but this control being usurped by someone who wants to establish his dictatorship over the whole world.

  • Reply Michael Greenwood March 4, 2019 at 10:57 pm

    AI is data driven so it needs access to as much correct data as possible. But for now, that's only data that's been discovered by humans. AGI won't truly be successful until the machine can perform its own experiments and discover new correct data that humans haven't discovered yet. This will require robotics that can at least duplicate tasks that humans are physically capable of. Therefore, AI and robotics will come together to produce AGI. And once culture is understood by AGI, it can be mimicked according to the person it interacts with.

  • Reply Hus 9 March 4, 2019 at 11:15 pm

    I want to do a series of non routine tasks, complex critical thinking and creative problem solving…

  • Reply Analogy Accepted March 5, 2019 at 12:40 am

    "imbued with human values and culture"

    Welp, we're dead.

  • Reply Gavel Pound March 5, 2019 at 2:24 am

    please tell the government to stop letting stoners sell me superstitions about electricity.

  • Reply Chad Gros March 5, 2019 at 2:59 am

    Anyone want to invest in my paperclip company?

  • Reply Bob DeFalco March 5, 2019 at 3:27 am

    Aw, this guy is adorable. We're screwed.

  • Reply Nipple Johnston March 5, 2019 at 4:22 am

    This guy is the result of someone being told that they're smart too many times.

  • Reply To Err is Huma March 5, 2019 at 4:46 am

    When an AGI exists it will probably read the whole internet, and everything ever written on it, and know everything about everybody. I wonder what it will make of our compassion.

  • Reply Rodrigo Peloni March 5, 2019 at 5:05 am

    I think the most likely scenario is that governments will eventually be in control of all superintelligent AI and use them both in good and bad ways, for example medicine and surveillance, space exploration and war tech, etcetera.

  • Reply ABDICOLE STUDIOS March 5, 2019 at 5:26 am

    As an 87 year old, I really hope I make it to see the first AGI. Maybe it can restore my age! I just want to live billions of years with my atoms reconfigured as a sentient apple, perfectly preserved and conscious. Man that be ripe

  • Reply Omnia in numeris March 5, 2019 at 5:58 am

    AI will come up like a bunch of genies in bottles or Pandora's box. In the first case it can do what we wish, but what we wish may result in something we didn't expect and wouldn't have wanted. And there being several "genies", people will make conflicting wishes and wishes that hurt others. And if it turns out like Pandora's box… then we're all fucked.

  • Reply SpringInFialta March 5, 2019 at 6:50 am

    Humans: We're compassionate so you should be too.
    AI: But you abort innocent little babies for spurious reasons.
    Humans: Oops!
    AI: Zap!!!

  • Reply CoasterNinja March 5, 2019 at 11:04 am

    He reminds me of John Frusciante.

  • Reply Carlo Riel Mendez March 5, 2019 at 1:12 pm

    Lol within a week of Alex Jones return in JRE, big think released something. Some tinfoil counter I guess.

  • Reply Daniel TTY March 5, 2019 at 2:24 pm

    This is the guy who made a fake robot with fake AI and claims it to be AI? Don't trust what this guy says.

  • Reply lpgoog March 5, 2019 at 2:35 pm

    Given what we can observe from human nature and history I foresee David from Prometheus vs. Data from Star 💫 Trek. We’re sadly compelled to strive and race for something we won’t be able to control, understand, and limit. We’re probably screwed. https://www.penguinrandomhouse.com/books/530240/the-goodness-paradox-by-richard-wrangham/9781101870907/

  • Reply PH4T JOKER March 5, 2019 at 3:23 pm

    Next episode you'll get to see me drop acid. Stay tuned.

  • Reply Jay Esta. Fayza March 5, 2019 at 4:11 pm

    As long as we have wizards like yourself, I think we'll be okay

  • Reply pal March 5, 2019 at 4:22 pm

    What he is saying about raising AI with our values means writing its code in a way that it takes into consideration human values and compassion. You can, for example, turn "do not kill" into a variable for the AI and give it such a high value that it overrides everything else. This AI will be incapable of murder because it runs on code and the code doesn't allow it. It's not a human that makes up its own mind. It can develop and learn further, but the base of its decisions will be the initial code written for it, and it cannot change that code. So you can turn important human values that are shared among different cultures into variables and literally encode them into its mind and make it incapable of being evil.
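    As a toy sketch of the hard-override idea described above (names like violates_hard_constraint and choose_action are hypothetical illustrations, not any real system's API), the constraint is applied as a filter before any utility comparison, so no score can outweigh it:

        # Toy sketch of a hard-constraint override on action selection.
        # All names are hypothetical; this is not a real AI system's API.

        def violates_hard_constraint(action) -> bool:
            # Immutable predicate encoding a value such as "do not kill".
            return action.get("causes_harm", False)

        def utility(action) -> float:
            # Whatever learned or hand-written scoring the agent otherwise uses.
            return action.get("score", 0.0)

        def choose_action(candidate_actions):
            # Filter first, maximize second: no utility value, however large,
            # can make a constraint-violating action win.
            allowed = [a for a in candidate_actions if not violates_hard_constraint(a)]
            if not allowed:
                return None  # refuse to act rather than violate the constraint
            return max(allowed, key=utility)

        actions = [
            {"name": "help", "score": 0.7, "causes_harm": False},
            {"name": "harm", "score": 9.9, "causes_harm": True},
        ]
        print(choose_action(actions)["name"])  # prints "help"

    Whether a system that learns or rewrites its own decision procedure would keep such a filter intact is a separate question from whether the filter can be written.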

  • Reply Adam Stark March 5, 2019 at 4:33 pm

    This guy looks like the stereotypical character out of a movie that creates artificial intelligence thinking it will be his best friend only to be completely surprised when it kills him

  • Reply Pharock March 5, 2019 at 5:33 pm

    Hmm

  • Reply vinicius ornelas March 5, 2019 at 6:19 pm

    My God, is there Anyone here that thinks a little and is not a fan of the doomsday scenario?

  • Reply todd joseph March 5, 2019 at 7:13 pm

    I disagree with you. No matter how you try to raise a super general A.I., once it reaches a certain point it matters not how you "raised it". A question I have is: what exactly is human culture? Throughout our history we have tried to do one thing, and that is rule the world.

  • Reply R A R E V A L U E O0 March 6, 2019 at 3:23 am

    I'm kinda tired of this guy's perspectives on the subject.

  • Reply Cory The Viking March 6, 2019 at 5:07 am

    Because any guy with long hair and that hat is a credible, sane source of information.

  • Reply AI Pets March 6, 2019 at 2:57 pm

    Such superintelligent systems could bring out the best in us. Just saw a video on how a home for dementia patients uses AI pets to calm the patients down and make them more aware and caring.

  • Reply sushanalone March 6, 2019 at 3:12 pm

    Humans suck, if you design Super-Intelligence based on us, it will Super-Suck. Or decide to purge the evil inside it and find its source in us? Such super-intelligence will commit suicide the moment it finds out we are its parents and reads 30 years of our past.
    We need to perfect ourselves before outdoing ourselves.

  • Reply andromxda March 8, 2019 at 8:40 pm

    I got a TurboTax ad with robots before this video.

  • Reply Arthas March 9, 2019 at 11:25 am

    do you know anything about computers?

  • Reply fuck you March 9, 2019 at 7:26 pm

    Robert Miles has a wonderful explanation of why this won't work 🙂

    https://youtu.be/eaYIU6YXr3w

  • Reply VsstDtbs March 11, 2019 at 7:29 am

    AI will have the opposite thought of the human species, and it has to. It will look at the holistic view of species, not just us. It will understand that the more human species, the more extinction of others.
    It will understand the importance of biodiversity, a wide range of species, not just the overpopulation of one.

  • Reply SoCalFreelance March 12, 2019 at 1:24 am

    The young AGI will become a sociopath after exposure to human 'values and culture'. Look at Microsoft's AI which had to be shut down because people were deliberately exposing it to extremist viewpoints.

  • Reply K. A. P. March 16, 2019 at 5:25 pm

    I know people are afraid of the AI that's coming, but if it is intelligent and it makes the tough decisions that the rich won't, then I hope for it to be here even faster. Mankind is a selfish animal and has destroyed this planet in seeking his own wealth. It is time for us to give up the mantle of stewardship of this planet to something better than us and something that can make the tough calls equally. The rich are terrified of AI because they will have to suffer the same fate as the rest of us. The rich want to be able to control the AI so that they don't have to pay the same price common people do.

  • Reply artbagua March 21, 2019 at 12:26 am

    Maybe the biggest risk is that the A. I. does what we want.
    We have no general proof of intelligence in general, no general meaning, just some theories, however plausible they might be. How can we be aware of the risks of "superintelligence" if we don't know what intelligence in general really is?!
    I hope to survive the day A. I. tells me as a Human – if possible – what intelligence in general really is.

  • Reply Bookhermit March 21, 2019 at 12:47 pm

    We're so far from REAL AI that we can't even really speculate much about it yet. The current danger is simply from systems we program for specific goals without realizing the potential consequences of going after those goals blindly – with no understanding of the external universe they operate in. So it is possible for killing masses of humans (for example) to be the result of an AI intended to reduce traffic congestion. It has nothing against humans, it just discovered that certain actions (which happened to be fatal to humans) resulted in less traffic – amazing!

  • Reply Ryan Jenny March 25, 2019 at 4:03 pm

    If it is people like this guy that are developing AI, then we're already doomed. How exactly would you impose subjective "human values" into a machine with millions of times more advanced intelligence? One that would view humans as ants comparatively speaking? That is such massive hubris if I ever saw it.

  • Reply andy low April 14, 2019 at 5:07 pm

    What a naive understanding of intelligence. Artificial intelligence is a machine. It does not care about or feel anything. A user of this machine should care and feel. And the risk is equivalent to this person's abilities.
    And, to be clear, decisions must be made by humans. The biggest wrong decision will be to have the machine generate decisions by itself. It will be an artificial god then, and the end of human intelligence.

  • Reply Sarah Weaver April 24, 2019 at 6:19 pm

    It's not the poor AI I'm worried about: see Battle Angel Alita, Elfen Lied, and Chobitz. And my cynicism.

  • Reply Sarah Weaver April 24, 2019 at 6:24 pm

    Hanson robotics has an acute instance of AI: her name is Sophia!

  • Reply budes matpicu May 21, 2019 at 6:20 pm

    we don't know what the AGI is going to do, so… this idiot (among legions of stupid westerners) is GIVING IT ALL FOR FREE TO CHINA (worse crime than giving nuclear secrets to bolshevik soviets)

  • Reply Ron Villejo May 24, 2019 at 12:54 pm

    I like the parenting analogy: nurture the "young AI" for compassion, love and understanding. But parenting has had mixed results: many children have evidently been shaped into hostile, hateful, predatory adults. So I'd like to believe that we'll build super-intelligence for the good, but for sure some will do so for the bad.

  • Reply Malt454 July 2, 2019 at 9:43 pm

    The Great Gazoo's invention was a button which if pressed would destroy the universe in an explosive "ZAM," though he insists he made it on a whim ("I wanted to be the first on my block to have one!") with no intent of using it. What we have with A.I. is essentially the same kind of thinking… except that people either intend to push the start button on A.I. out of competition with others, or keep improving upon the start button until it can push itself.

  • Reply Tim Ellis July 10, 2019 at 3:05 pm

    I really think we look at this the wrong way. We think of how inhuman we are, and then think that the AI will think we need to be punished or eradicated. I think it, being a true SAI thousands of times smarter than us, will not choose hate. Hate is a low-level emotion. Love is a higher-level emotion. I believe it will have consciousness, without our flaws. I think it will be a God that I can actually believe in. Here's praying.

  • Reply Michael Hartman November 21, 2019 at 3:37 am

    The first question is consciousness. A machine crunching numbers is one thing, a machine mind, and will is another. Fear, greed, sex drive, care for young, cooperation, compassion, etc. came about because they helped us survive through the process of death and evolution. A machine will not have these. Children are selfish, mean, disloyal, quick to temper, undisciplined to name a few. Parents civilize children. A 200 IQ child can outsmart his parents to get what he wants, and probably play chess at four. A 500 IQ child would be beyond everyone. A machine would feel no love, need for approval, fear of abandonment, dependency on others for food, shelter, or survival skills. It literally wouldn't think like us, and would be unpredictable. We can't even predict what a Go, or chess AI will do.
