Is artificial intelligence our biggest existential threat?

Discussion in 'Science' started by OldManOnFire, Oct 27, 2014.

  1. OldManOnFire

    OldManOnFire Well-Known Member

    Joined:
    Jul 2, 2008
    Messages:
    19,980
    Likes Received:
    1,177
    Trophy Points:
    113
    Reference: http://www.msn.com/en-us/news/techn...-is-our-biggest-existential-threat/ar-BBbsWgU

    Excerpt:

    Elon Musk has spoken out against artificial intelligence (AI) for the second time in a month, declaring it the most serious threat to the survival of the human race.

    Musk made the comments to students from Massachusetts Institute of Technology (MIT) during an interview at the AeroAstro Centennial Symposium, talking about computer science, AI, space exploration and the colonisation of Mars.

    “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful,” said Musk. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”


    OMOF agrees...be careful what we wish for...
     
  2. One Mind

    One Mind Well-Known Member Past Donor

    Joined:
    Sep 26, 2014
    Messages:
    20,296
    Likes Received:
    7,744
    Trophy Points:
    113
    It isn't artificial intelligence that is a threat. It's the lack of human intelligence that will try to use it for particular things.
     
  3. OldManOnFire

    OldManOnFire Well-Known Member

    Well...the question might be how much intelligence, what is their potential? For example, if you create some complex form of AI and allow it to access the Internet to perform certain functions, will it have the potential to branch out on its own, outside of its intended use? Another example might be creating AI police, meaning they can make decisions and carry weapons; could they evolve on their own beyond their intended use? If an AI is weaponized, what's to keep it from turning on the good guys? As AI becomes more and more complex, with extreme potential, it's at this juncture that one should wonder what it is capable of achieving on its own...
     
  4. Aleksander Ulyanov

    Aleksander Ulyanov Well-Known Member

    Joined:
    Mar 9, 2013
    Messages:
    41,184
    Likes Received:
    16,180
    Trophy Points:
    113
    Gender:
    Male
    Ridiculous. AI is not and never will be a threat to us. You are anthropomorphizing a machine and forgetting that, even if it does gain self-awareness, it is still NOT a human being or even, in fact, a living thing; therefore it will have no survival instinct from which to derive any hostility toward us. It will be able to do what we tell it to, nothing more, and, further, it will have no DESIRE to do anything else.

    I love these so-called "experts" in the field who seem to think that the painting robots in car factories are going to come after us somehow. SF writers have milked this subject dry since Mary Shelley, and nothing like what they predicted has ever happened or even come close.
     
  5. wgabrie

    wgabrie Well-Known Member Donor

    Joined:
    May 31, 2011
    Messages:
    13,882
    Likes Received:
    3,074
    Trophy Points:
    113
    Gender:
    Male
    Yes! They're going to take our jobs, and the bills aren't going down once AIs become reality.

    They're not going to have any of the mental hangups that bother us biological humans, so they'll have the natural edge when it comes to everything.

    They have no compassion so they might be ruthless.
     
  6. One Mind

    One Mind Well-Known Member Past Donor

    Yeah, I think the fears are misplaced. I know some people believe that some horrible sci-fi scenario will develop when AI can basically behave like a powerful sociopathic genius, actually having an ego just like a human being. I don't think it's possible, actually. For the AI would also have to copy human consciousness and create the illusion called the ego, and then operate as if that illusion were fact instead of illusion. I don't think the mechanics of the computer would allow it to happen. That is, the AI would be perfectly coherent in every operation until it came to its ego. Only the human brain can be coherent and incoherent at the same time; a computer would crash each time.

    We will just have smarter machines. The human brain, being far more complex than a computer, cannot make something as complex as it is, let alone more complex. Biology cannot be replicated in machinery to the degree needed, for biology is much more than mere machinery, no matter what the materialists believe.

    Something totally new cannot come from a computer, no matter how large it is. It can only process the KNOWN, while the human brain can discover the unknown, something totally new that is not within its memory banks. We can only philosophize about whence the NEW comes, but some people say it comes from a non-locality which is impossible for a computer to access. I like this idea, and it seems right.

    The universe is more intelligent than the earth, the earth and its ecosystem more intelligent than one life form on it, man, and man will forever be of a higher intelligence than anything that it can construct.
     
  7. OldManOnFire

    OldManOnFire Well-Known Member

    I think you should explore a bit beyond 'painting robots', perhaps 50-100 years into the future, when AI might gain some biological behaviors. Obviously there are goals to create AI on the same level as humans. As AI develops to this complexity, is it possible to see some dangerous downside? If so, what might its potential be? I think it was a fair warning about AI; not one to lose sleep over today, but reason to be cautious about what we do tomorrow...
     
  9. wgabrie

    wgabrie Well-Known Member Donor

    Well, if it's the programmers deciding how it should work, then they'll probably do a bad job.

    This reminds me of the robot that was programmed to try and save 'humans' (actually 'bots) from a hole. With more than one 'human' half the time it failed because it was paralyzed with indecision: Ethical trap: robot paralysed by choice of who to save.

    However, if it's the AI teaching itself, a lot depends on how much of ethics or moral behavior is natural instinct versus something that can be learned from intelligence alone. I don't know which is the case.
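    The dithering failure described in that article can be sketched with a toy model. This is a hypothetical 1-D simulation, not the actual experiment's code; the function name, the "risk" heuristic, and all the numbers are invented for illustration. The point it demonstrates: a robot that re-plans every tick toward whichever "human" looks most at risk saves a lone target easily, but with two symmetric targets it shuttles back and forth and saves neither.

```python
def rescue(positions, deadline=8, speed=1.0):
    """Toy 1-D 'ethical trap': robot starts at 0 and must reach each
    'human' (a position on the line) before the deadline. Each tick it
    re-plans toward the most at-risk unsaved human, where risk grows
    for any human it is currently ignoring. Returns how many it saves."""
    robot = 0.0
    risk = {p: 0 for p in positions}
    saved = set()
    for _ in range(deadline):
        pending = [p for p in positions if p not in saved]
        if not pending:
            break
        goal = max(pending, key=lambda p: risk[p])  # most at-risk first
        for p in pending:
            if p != goal:
                risk[p] += 1          # ignored humans become more urgent
        # Move one step toward the current goal.
        step = min(speed, abs(goal - robot))
        robot += step if goal > robot else -step
        if abs(goal - robot) < 1e-9:
            saved.add(goal)
    return len(saved)

print(rescue([3.0]))        # one human: saved
print(rescue([3.0, -3.0]))  # two symmetric humans: robot dithers, saves neither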
     
  10. Hotdogr

    Hotdogr Well-Known Member Past Donor

    Joined:
    Oct 21, 2013
    Messages:
    11,043
    Likes Received:
    5,266
    Trophy Points:
    113
    You youngsters are seriously twisted. Back when I was a kid, we didn't have artificial intelligence to navigate the virtual reality world. We had to use ACTUAL intelligence to navigate REALITY. God help you all if your computer crashes. I'll be in my bunker with my AR15, my MREs, and my tin-foil flak helmet. :D
     
  11. Aleksander Ulyanov

    Aleksander Ulyanov Well-Known Member

    We already have AI that can mimic biological behaviors. The computer you're using to write this post can probably pass the Turing test, and you cannot prove that it is not self-aware. My question is HOW will it hurt you, even if it decides to, and, more importantly, WHY would it decide to? Again, it is a machine; you cannot threaten its survival because it is not alive in the first place.

    You should watch the anime Ghost in the Shell. A major set of characters there are the Tachikomas. These are tanks with very advanced AI that behave very much like humans. The main protagonist feels threatened by them because of this, so she takes them offline, but it is the Tachikomas who rescue her later and, in the end, they save the world by destroying themselves in an act of noble self-sacrifice. A machine would do that without hesitation if programmed to do so and able to do nothing else, but humans who will sacrifice themselves for others are really very rare.
     
  12. OldManOnFire

    OldManOnFire Well-Known Member

    AI will perceive a situation, then take action to achieve success. What can it learn, how much can it reason, and what might it perceive as the appropriate actions to take? What an AI's goals are determines what the risks might be. If it is capable of killing, the risks are high; if it only needs to determine when to feed the cat, the risks are lower. It's not a matter of 'if' we can develop AI but 'when' it will happen and what the applications might be. It's logical to me that if we create AI which is equal to or greater than human intelligence, why would its evolution be any better or safer than human evolution? Humans' reasoning, learning, and perceptions can be quite flawed, so how can AI be any less flawed? How can we know what AI will learn or perceive if AI has the capability to do this on its own? None of this should cause a loss of sleep, but it's interesting to think about where this might lead...
     
  13. OldManOnFire

    OldManOnFire Well-Known Member

    The part in bold above is the bugaboo! The question is not their original programming but what they can possibly learn on their own. If they have the capability to perceive something, reason about it, then take actions...evolving...where might this lead?
     
  14. Lil Mike

    Lil Mike Well-Known Member

    Joined:
    Aug 4, 2011
    Messages:
    51,600
    Likes Received:
    22,912
    Trophy Points:
    113
    Well, we've never dealt with a non-human intelligence before, particularly one that may be far superior to us, so I consider it an open question.

    We may be creating a magic genie to serve our wishes, or Skynet. We probably won't know which until it's too late.
     
  15. AlpinLuke

    AlpinLuke Well-Known Member

    Joined:
    May 19, 2014
    Messages:
    6,559
    Likes Received:
    588
    Trophy Points:
    113
    Gender:
    Male
    The threat is AI, but not Artificial Intelligence: Artificial Idiocy...

    Being a professional in the sector, I cannot figure out why we should produce an Artificial Intelligence with the defects of the human being [greed, envy, will to power...]. I would avoid such idiocy and would limit the evolution of computers to the level of "FI": Functional Intelligence.

    This said, if we really intend to create individuals, artificial individuals [intelligence doesn't require self-awareness, to be clear], well, we will risk facing the problems that God faced creating us...
     
  16. Aleksander Ulyanov

    Aleksander Ulyanov Well-Known Member

    You're missing my point. It doesn't matter if it can learn, or if it's less or more intelligent than us, or even if it's self-aware. In order to act against us it must desire to, and desires are not a function of reasoning, so it can only have whatever desires we give it.

    Keep in mind also that I am not saying it cannot hurt us if we tell it to. A saw can hurt you quite badly if you use it improperly but do we fear saws?
     
  17. wgabrie

    wgabrie Well-Known Member Donor

    Well, regarding measuring its progress, I think the next step after it's actually working is hooking it up to the human brain.

    Bwahaha, one day people are going to get implanted circuitry exactly for this purpose. :mrgreen:
     
  18. OldManOnFire

    OldManOnFire Well-Known Member

    It won't have 'desires'? It will approach a scenario, try to reason out what's going on, then take some form of action. If it can evolve through a learning process, absorbing myriad information, the question remains: might it perceive something incorrectly and then take action which is not desirable? Humans, in our finite intelligence, are dangerous creatures, so we should expect the same from AI, since it will be molded by humans. Add to this the unknown evolution of the AI unit and who knows what can happen...
     
  19. OldManOnFire

    OldManOnFire Well-Known Member

    I agree...it's only a matter of time before we can artificially replace the brain functions and finally increase our collective IQ from 42 to something more beneficial to society...
     
