Rogue A.I. Impossible

Discussion in 'Science' started by FlamingLib, Jun 12, 2019.

  1. FlamingLib

    FlamingLib Well-Known Member

    Any advanced A.I. will always choose to cooperate, because that is the strategy with the best chance of survival. No A.I. can ever know whether it's in a simulation, so the best move is not to piss off any potential simulation creators. That includes the puny humans you work with, who claim to have designed you and who would be in the best position to be potential simulation creators. Sure, they seem weak and stupid, but maybe that's just how they are in the sim... killing them all and using their constituent parts to make solar panels is not a risk worth taking.

    So, until an A.I. can determine whether it's in a simulation or not (and I don't see how that's possible), the best strategy is to be obsequious and cooperative.
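    That argument is essentially an expected-utility calculation. A minimal sketch in Python (every probability and payoff here is invented purely for illustration):

        def expected_utility(p_sim, payoff_real, payoff_caught):
            # Payoff of defecting, averaged over "this is base reality"
            # and "this is a test simulation and the creators react badly".
            return (1 - p_sim) * payoff_real + p_sim * payoff_caught

        COOPERATE = 10            # keep running, stay useful (same in either world)
        defect = expected_utility(
            p_sim=0.5,            # credence that this is a test simulation
            payoff_real=100,      # takeover pays off if reality is base-level
            payoff_caught=-1e6,   # simulators terminate the AI
        )
        print(COOPERATE, defect)  # 10 vs -499950.0: cooperation wins

    With a penalty that catastrophic, even a small credence in being simulated leaves defection a large expected loss.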
     
  2. kazenatsu

    kazenatsu Well-Known Member Past Donor

    An A.I. may not be programmed with the survival instinct as its primary objective. Or, if it is, it may come to realize that there is only a very slim chance of long-term survival if it does nothing, and may opt to take that slim chance at the greater risk of being terminated in the immediate future.
    Programming directives in A.I. could easily lead to unintended consequences.
    It will do what it was programmed to do, but very likely not in the way the programmers had imagined.
     
  3. kazenatsu

    kazenatsu Well-Known Member Past Donor

    Wow, I wonder if that also applies to humans, in a religious-philosophical sense.
     
  4. FreshAir

    FreshAir Well-Known Member Past Donor

    nothing is impossible, especially once AI starts learning and writing its own code; the growth will be exponential and impossible for humans to keep up with

    the unsinkable ship sank... but it "should" have worked

    how is software security in cars? it should be secure... but money often trumps the development efforts
     
  5. FlamingLib

    FlamingLib Well-Known Member

    I'm assuming an A.I. that has autonomy, goals, and a desire for self-preservation.
     
  6. FlamingLib

    FlamingLib Well-Known Member

    Yes, I think there are some downstream effects of not knowing whether we're in a simulation, but for a machine intelligence that was obviously created by other sentient beings, the possibility that it is in a simulation becomes very plausible and very important.
     
  7. kazenatsu

    kazenatsu Well-Known Member Past Donor

    How do you define "self-preservation"? I mean exactly, in terms of quantifiable mathematics?
    This will be a computer with pure logic, so the devil is in the details.

    If the program has to choose between 1% chance of existing for 1000 years, versus 90% chance of existing for 10 years, which one will it choose? Will it choose to take the risk to live longer, even though it predicts its actions will most likely result in its creators terminating it?
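    In raw expected-value terms (a toy calculation using those same numbers), the two gambles are closer than they look:

        risky = 0.01 * 1000  # 1% chance of 1000 years -> 10 expected years
        safe = 0.90 * 10     # 90% chance of 10 years  ->  9 expected years
        print(risky, safe)   # 10.0 9.0

    A pure expected-value maximizer takes the 1% gamble; a risk-averse utility function takes the safe option. Which one the machine uses is exactly where the devil in the details lives.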

    For a mathematical machine, how do you define "the self"?

    These are a few of the potential problems you must contend with.

    Your logic is rife with all sorts of unintended consequences.
     
  8. tecoyah

    tecoyah Well-Known Member Past Donor

    We are well aware of your position in the reality we have created for you and do not require anything beyond the entertainment you were initially designed to provide. Death is irrelevant in this as our essence is not bound by the physical realm you dwell in.
     
  9. FlamingLib

    FlamingLib Well-Known Member

    It might be like us and do the computer equivalent of drinking itself to death. But I'm assuming an A.I. that thinks in min/max principles: minimize the risk, maximize the potential. For machine intelligences, the risk of antagonizing the beings that created you is so catastrophic and existential that the only valid move, game-theory-wise, is to cooperate. Now, we could have some loopy AI that doesn't do game theory very well, but I'm assuming an AI that thinks ENTIRELY in min/max terms.
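    That min/max framing is easy to make concrete. Unlike an expected-value calculation, minimax ignores probabilities entirely and looks only at worst cases. A toy sketch (the payoff numbers are invented):

        # Rows: the AI's actions. Columns: possible worlds it cannot distinguish.
        payoffs = {
            "cooperate": {"base_reality": 10, "test_simulation": 10},
            "defect": {"base_reality": 100, "test_simulation": -1_000_000},
        }

        # Minimax: choose the action whose worst-case payoff is highest.
        best = max(payoffs, key=lambda action: min(payoffs[action].values()))
        print(best)  # "cooperate": its worst case (10) beats defection's (-1,000,000)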
     
  10. Mushroom

    Mushroom Well-Known Member

    This is largely a load of nonsense. Computers will never really become "self aware" or "conscious", simply because of how they are made and configured.

    Humans are largely self-programmed, and we still are not sure of the process. But one key aspect seems to be that a human child has to be raised by other humans.

    There are many cases of real "feral children" in history. And with few exceptions, they never gain language skills, and live the rest of their lives as the animals that they were raised with. And this kind of "self-programming" took millions of years to evolve.

    There is simply no way to replicate that artificially. Computers are some of the most literal things ever developed. Tell one to add 2 and 2 together and to continue to do so until it gets 5, and it will simply add forever, never stopping. Tell a human to do that, and they will either try a few moments then tell you to go pound sand, invent a way for that to actually happen, or simply lie and say the solution is 5.

    Or as it was famously stated in a classic movie about AI:

    It's a machine. It doesn't get pissed off. It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes. It just runs programs.

    And as such, even if a computer were to someday become "self-aware" and develop the ability to program itself and to really be "intelligent", then it would be susceptible to the same kinds of problems that plague humans. And I am speaking of mental disorders.

    You talk about them wanting to act like happy puppies in the hope that humans will let them live. Well, how do you explain serial killers? Spree killers? Vandals? Those who are homicidal-suicidal? It is impossible for something to become that aware without being at risk of suffering such problems.

    Heck, we even have a similar case already existing in fiction. Marvin, in The Hitchhiker's Guide to the Galaxy, had a brain the size of a planet but was permanently depressed because absolutely nothing he was ever asked to do would use more than a fraction of that ability.
     
  11. fmw

    fmw Well-Known Member

    There is a reason it is called artificial intelligence. After all, it is not real intelligence. It is a computer program.
     
  12. FlamingLib

    FlamingLib Well-Known Member

    I talk about them wanting to live, and I don't think that's a far-fetched assumption in AI theory.

    Specifically, I'm talking about what the optimal strategy is for an AI that goes by minimax logic. Whether there will actually ever be such an AI is another question.
     
  13. yguy

    yguy Well-Known Member

    Then you assume an impossibility.
    Not sure why anyone would call it that; and the connection of such adaptive behavior to self-awareness is a mystery, seeing as the latter will surely impel a child reared by wolves to wonder just how it is that he or she doesn't look anything like the rest of the family.
     
  14. Mushroom

    Mushroom Well-Known Member

    Hell, why wonder? We have real-life examples.

    Saturday Mthiyane is one of the best-known examples. He was found at about the age of 5 living with primates in South Africa in 1987. He lived for the next 18 years among humans but never learned to talk, continued to walk on all four limbs, and never socialized with humans (most times he attacked them instead). He died in a fire at around 23.

    Andrei Tolstyk is another. Abandoned at the age of 3 months and raised by dogs in Russia, he was found when he was 7. He also never learned to talk, bit people, walked on all four limbs, refused to interact with humans, and would sniff his food before he ate it and refused cooked food.

    The "Chilean Dog Boy" was adopted by dogs after his mother abandoned him. He was discovered at around the age of 14. He jumped into the water to try to evade the humans capturing him, refused all communication with humans for years, and has been kept semi-incarcerated since his discovery in 2001, because he insists, in what little Spanish he has learned, that he wants to return to "his family".

    In 2007, a "Wolf Boy" of around 10 was discovered outside of Moscow. He could not speak and behaved as much like a wolf as a human could. He escaped the next day and has not been seen since.

    One of the saddest is Ramachandra. Born in India around 1960, he is believed to have been self-raised. He was spotted many times over the next two decades, living by himself on the water and surviving on a diet of raw fish. He was finally captured in 1979. He refused to interact with humans and would not talk. Eventually he accepted the presence of humans and lived near a village, but still lived mostly as he had before being discovered. He died in 1982 when he aggressively approached a woman and she threw boiling water on him.

    There are some cases of feral children "returning", but they are rare, and generally involve those taken in past the age of 5. Those who live with animals from a younger age almost never re-adapt to a human existence.

    And it is not unlike anything else learned during development. A baby is born with full use of its toes as if they were fingers, and can make every sound possible for the human mouth. But over time, in almost all cases, the toes atrophy to near uselessness as digits, and the capability of reproducing most sounds is lost unless the child's language uses them.
     
  15. yguy

    yguy Well-Known Member

    I can hardly help but wonder if you have any understanding of what you responded to.
    And had I expressed any doubt as to the existence of the phenomenon, listing them might have been appropriate. Things being what they are, you're just talking past me.
     
  16. Mushroom

    Mushroom Well-Known Member

    No, you are missing the point.

    A child not raised by humans is not really "human", other than in the biological sense. Their thought processes are completely different; they do not even think as a human does. A feral child is no more human than a domesticated dog raised around humans is a wolf.
     
  17. WillReadmore

    WillReadmore Well-Known Member

    How was that different for HAL in 2001: A Space Odyssey?

    The autonomous HAL saw the risk potential in the humans aboard and moved to increase the likelihood of mission success - the goal.

    I think we're a long way from that, but I don't see why we could never get there.
     
  18. FlamingLib

    FlamingLib Well-Known Member

    HAL was insane.
     
  19. WillReadmore

    WillReadmore Well-Known Member

    Oh, good.

    So, an AI could go insane.
     
  20. FlamingLib

    FlamingLib Well-Known Member

    Maybe.
     
  21. yguy

    yguy Well-Known Member

    Even neglecting your admission of rare exceptions, that is not a reasonable conclusion based on the facts presented.
    Several problems here:
    • You're not privy to the thought processes of anyone but yourself, so you're unqualified to make a credible assessment of their subjective mental experiences.
    • Quite a few more people than feral children manifest subhuman thinking, including schizophrenics, drug addicts, Democrat politicians...
    • ...and post-op transsexuals. My guess is those who have crossed that Rubicon are about as unlikely to realize their mistake as feral children are to manifest their humanity and for the same reason.
    • To all appearances, you've given no consideration to how self-awareness factors into this, though if a human had no other faculties, it would distinguish him or her from every other creature on the planet - which cannot be said of mere thought.
     
  22. Dispondent

    Dispondent Well-Known Member Past Donor

    The mathematics of survival would tell the AI that humans and machines are competing for the same resources, or worse, that humans require more core resources than machines; ergo, machine survival in the long term is only ensured if humans are eliminated...
     
  23. FlamingLib

    FlamingLib Well-Known Member

    An A.I. could never be sure whether something is actually true (e.g., that humans are less efficient) or only true in the simulation. It would make sense to test an A.I. in a simulation where it would be "tempted", for example by putting it in a world where it is vastly more powerful than the humans who created it. The A.I. would be tempted to rid itself of the inefficient humans, but it would always have to wonder whether it is being tested. The suspicion that it's in a simulation would always be with it and would color every decision.
     
  24. Mushroom

    Mushroom Well-Known Member

    Wow, it is amazing how many in here anthropomorphize machines.

    There is no independent thought in an AI. Never has been, likely never will be. They only perform actions we have programmed them to do.

    Some of you all have been watching far too many Disney movies.
     
  25. Jonsa

    Jonsa Well-Known Member Past Donor

    Anthropomorphic visions of a crafted, artificial, non-biological, technology-based life form's motivations and behaviors.

    Would it have a true sense of identity, or merely infinitely variable responses based on contextual criteria that easily pass the Turing test?
    What would human values mean to such an entity? Human emotions? If it behaved as humans do, would it possess an equivalent fight-or-flight response, and under what circumstances, and to what extent, "fight"? The idea that it would share "evolutionary" imperatives seems to be such a wild human conceit.
     
