I don't know if life is hardware or software; it seems to be both. There would also be a built-in selfishness. Life is the only way to see, so it adds a dimension to the universe not otherwise obtainable. Also, without life no desire is possible. Without life no knowledge is possible, just information. Life is the only possible way for me to be me.
If a dog is not self-aware then I don't understand the term. Why is a dog not self-aware? Infinity doesn't exist in nature; it exists only in philosophy and mathematics. When you see infinity in mathematics you can be sure that the mathematics is flawed. If you see it in philosophy you can ignore it. Semantics. What is the difference between artificial and simulated? Spirituality is not a natural thing. It is a construct of the human mind, and a common one. Yes, leave it out of your discussion, since it is nonsense. At least I agree with your original premise. AI has nothing at all to do with intelligence or awareness. Computers follow programs, and programs are written by people. The best AI can do is choose between options. It can't really create the options without changes in the program. It can only seem to act like an intelligent being, but the intelligence is in the mind of the programmer, not the program. There are certainly situations where AI can make useful decisions and, at times, even do a faster and maybe better job than people. But a computer isn't intelligent. It is a tool for use by mankind.
AI isn't binary either. The future of AI is based on the use of fuzzy logic, in much the same way the brain operates.
Fuzzy logic is just weighted, binary-based decision making. The computer is still working on a provided dataset.
The brain works in a similar way, receiving inputs from many senses and making a decision based on those inputs taken together with what it has in its memory.
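For what it's worth, here is a minimal sketch of what "weighted, binary-based decision making" over fuzzy inputs can look like in code. Everything in it is made up for illustration: the triangular membership function, the braking scenario, and all the thresholds are my own assumptions, not any particular fuzzy-logic library.

```python
# Illustrative fuzzy-logic sketch: a "how hard to brake?" decision from
# two sensor inputs, using made-up triangular membership functions.

def membership(x, lo, peak, hi):
    """Triangular membership: 0 outside [lo, hi], rising to 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def brake_decision(distance_m, speed_kmh):
    # Degrees of truth (0.0 .. 1.0) rather than hard true/false.
    too_close = membership(distance_m, 0, 0, 30)     # closer -> more true
    too_fast = membership(speed_kmh, 40, 120, 200)   # faster -> more true
    # One common convention: fuzzy AND is min, fuzzy OR is max.
    brake_hard = min(too_close, too_fast)
    brake_soft = max(too_close, too_fast) - brake_hard
    # "Defuzzify" the fuzzy conclusions into one crisp output by weighting.
    return 1.0 * brake_hard + 0.4 * brake_soft

print(brake_decision(10.0, 100.0))   # about 0.7: brake fairly hard
print(brake_decision(50.0, 30.0))    # 0.0: no need to brake
```

The point of the exchange above survives in the code: the inputs are continuous degrees of truth rather than hard booleans, but underneath it is still ordinary arithmetic on a provided dataset.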
"True" self-awareness IS life. True self-awareness is consciousness. A machine is NOT life, nor can mankind endow it with life. Simulated intelligence is simulated intelligence. Moreover, consciousness doesn't require self-awareness. A dog is conscious, but it is not self-aware; it might not be as smart as an AI that can play chess, but it IS life, and the AI is NOT. "Intelligence", or rather the capacity to perform complex tasks, does not require consciousness, i.e., self-awareness. Intelligence here we are defining as a function, not as 'beingness'. Some CONFUSE AI intelligence with self-awareness because they can't tell the difference. This is a logical fallacy. The operative word is 'simulated'. Fake sugar will not give your body glucose, but it will taste sweet. Fake is fake. Ex Machina is science fiction. And if you are going to say "today's science fiction is tomorrow's reality", don't. That's true on a number of fronts; it's just not true on this one. Can man create humanoids? Some day they might, but it will be via the manipulation of genetics, which is guiding nature, not creating life. And let us not forget that even there, nature is giving life, not man. Only nature can give life (some would argue 'god', but I'm not a theist), noting that it takes nature billions of years to do it. To assert that man can do it is the zenith of arrogance.
Precisely. The difference between the 'intelligence' of a machine and the 'intelligence' of a human is that the former is not alive and the latter is. How life comes to be is the supreme mystery, nor is it solvable. Nature will allow man to garner many of her secrets, but that one she is going to keep for herself.
No, the brain is quite adept at disregarding input from its senses, or even creating a brand-new one on the spot. A computer can't. It can only follow the instructions it's been fed and act on the data in its memory.
Future AI may even incorporate man-made "living" tissue. Is a man-made brain a living entity or a machine? How many steps are there from a low-level intelligent organism to human intelligence?
But why do you think that a future AI machine will not be able to do the same thing? At this moment in time, one difference between a higher-order organism and an AI machine is that a higher-order organism's brain can sometimes react on impulse, not logic. What controls reaction on an impulse? (Rhetorical question.) Obviously there are many things that we don't know about the brain, but we do know that by sending EM signals to particular areas of the brain we can activate certain emotions and alter the organism's response to a stimulus. We can actually reverse the organism's normal behaviour in a situation by using an external stimulus directed at the brain itself. So we are now artificially controlling a living organism's behaviour just by activating memory. We are even able to implant false memories in mice just by using light. Admittedly I am side-tracking here, but I find it fascinating.
The future doesn't exist yet. We can only go with what we have now, and what we call artificial intelligence is everything but intelligent. Alternatives to binary computing have been in the works since the '70s, and they've **** to show for it. There's also all the software that needs to be written, by flawed humans, to exploit those future non-binary computers. SciFi is great, but you must consider the word "fiction" after the word "science" in science fiction. Only a tiny part of all the wonders SciFi predicted has come to fruition.
Well, I didn't say that. I pointed out that whether something is "alive" is a very different question. I don't believe life is a requirement for being self-aware, conscious, having the free will we have, etc. And I pointed out that there are already machines that can fool humans as to whether or not they are robots. BUT then I pointed out that fooling humans is not a measure of consciousness and can only be considered a step on the way. The central point is that humans have capabilities that are very definitely finite. We can tell, because they exist inside our finite body mass. Thus it isn't reasonable to suggest that those capabilities could never be duplicated.
This one about programming seems good to me. After all, we program ourselves. We are not static. Every time we see something, learn something, hear something, whatever, we change physical structures in our brains to remember it. And we draw on those physical structures of memory of past events, education, etc., to form plans for how to address problems we face. And those plans are also saved as changed physical structures in our brains. Machine learning today includes similar strategies. In learning to wipe out the very best Go players (Go being a game thought to favor human players far more than chess, where no human can beat a computer anymore), designers have used the approach of creating AIs that can learn. So the strategy was to allow their AI to play millions of games against itself at warp computer speed and to analyze and learn strategy from that experience. The result today is machines that can beat any human, and in fact the machine uses strategies that humans have had a hard time even understanding. That is, the ability of the AI to learn and to program itself is the core reason that it can beat any human. I think this is all VERY separate from any discussion of what life is or what purpose is.
The thing is, the programming good AIs have today involves the AI learning and creating new strategy. I agree that what we have today is limited, in that the goal and the range of options for responses are set by the programmer. It's not like we are, with free will versus the universe. In my view we have a LONG way to go. But I don't see any justification for the notion that what we have as humans is infinite in any dimension. It is all inside the volume of the human body, where there are serious limits, such as available energy, brain cell count, etc. Thus I don't see a limit on how close an AI could come to being equally capable, or more capable, for that matter.
The strategy for winning at Go did not come from human programmers. Programmers focused on allowing the AI to learn, plus the basic rules of the game. I think that says a lot about where the focus will be, including where the programming will come from. I'm sure we will find ways to create computing power that requires less energy and is faster.
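The self-play idea can be shown at toy scale. The sketch below is my own illustration, not AlphaGo: a tabular learner is given only the rules of the simple "race to 21" game (players alternately add 1 or 2 to a running total; whoever reaches exactly 21 wins) and plays thousands of games against itself. All names and constants are made up for the example.

```python
# Toy self-play learner (illustrative only, not AlphaGo): the program
# is given only the rules of "race to 21" and learns strategy by
# playing against itself and scoring the outcomes.

import random

Q = {}                # (total, move) -> estimated value for the player to move
ALPHA, EPS = 0.5, 0.2  # learning rate and exploration probability

def moves(total):
    """Legal moves: add 1 or 2 without overshooting 21."""
    return [m for m in (1, 2) if total + m <= 21]

def choose(total, explore=True):
    """Epsilon-greedy: usually the best-known move, sometimes a random one."""
    if explore and random.random() < EPS:
        return random.choice(moves(total))
    return max(moves(total), key=lambda m: Q.get((total, m), 0.0))

def play_one_game():
    """One game of self-play, then value updates from the outcome."""
    history = []
    total = 0
    while total < 21:
        m = choose(total)
        history.append((total, m))
        total += m
    # The player who made the last move won (+1). Walking backwards
    # through the game, the sign alternates because turns alternate.
    reward = 1.0
    for state, m in reversed(history):
        old = Q.get((state, m), 0.0)
        Q[(state, m)] = old + ALPHA * (reward - old)
        reward = -reward

random.seed(0)
for _ in range(20000):
    play_one_game()

# The learned table should now take the immediate win at 19.
print(choose(19, explore=False))   # prints 2 (19 + 2 = 21)
```

After training, the value table alone dictates play; no human ever wrote down a strategy, which is the point being made about where the programming really comes from.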
I'm not going to rule out the prospect of a 'humanoid' which is largely organic but might have inorganic components. That doesn't change the premise, which I've addressed.
But it is. 'Self-aware', i.e., aware that one is aware, is consciousness. However, consciousness doesn't require being self-aware. In either case, a machine is not conscious, let alone self-aware. It only simulates it. Simulation is not the real deal.
But that's still not 'consciousness'; it's programming a machine to teach itself. It's still programming, an extension of humanity.
I don't get your point. I think I was clear about what that is. The issue at hand, I believe, is the question of whether "consciousness" (etc.) is impossible to accomplish, NOT whether it has been accomplished. And, the fact that there is no dimension in which human capabilities are infinite is a clear argument that duplication is not impossible.
At such time in the future when the "simulation" and whatever you want to call the "real thing" are identical, I think it's game over. You can examine it to see if it's "living", but I don't see any justification for that being a criterion for mental capability.
It is my view that nature will allow man to garner many of her secrets, but the secret to how nature endows life, that one she'll keep, forever. Therefore, the distance between a machine and life is infinite, beyond the reach of Moore's law.
They will never be identical. Perhaps one can be fooled, temporarily, but living with that machine, it will become clear that it is not 'alive'.