TheGreatEmperor

Why the future doesn't need us


Superintelligent AI and the Singularity

What happens when we create the first AI that is more intelligent than the average human? Well, with technological progress at the pace it is today, it will only take us 2 decades to accomplish this.

So what happens then? Do we hope we didn't make a mistake and live our lives letting the AI enhance itself further and further? Or do we stop and think about what the consequences might be? We never know when a simple math problem assigned to a superintelligent entity might cause the extinction of the human race.

So, here's a question. Should technological progress become more limited?
519,556 views 372 replies
Reply #26
There is a huge difference between data input (which is binary for a lot of our body, maybe even most) and data processing. Processing, even if our brain cells are binary (on or off), I don't think is binary -- they still have to decide where to send their data, after all.
Reply #27
you don't know much of your neuroscience, do you?

neurons fire indiscriminately: they have a certain potential threshold that causes them to fire, and when the information is transferred it goes from thousands of dendrites to dozens of nearby neurons. in effect a "shotgun" approach, not a chosen one. the only thing that keeps your body from going completely haywire is that there is an e-potential threshold that keeps the neurons from firing upon being jolted by only a few dendrites.
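the threshold idea above can be sketched in a few lines of Python. this is only an illustration of the all-or-nothing firing rule being described, not a biological model; the function name and all the numbers are made up for the example.

```python
# Minimal sketch of threshold firing: a neuron sums the potentials
# arriving on its dendrites and fires only if the total crosses a
# fixed threshold. Illustrative numbers, not real membrane potentials.

def fires(dendrite_inputs, threshold=1.0):
    """Return True if the summed input potential crosses the firing threshold."""
    return sum(dendrite_inputs) >= threshold

# A jolt from only a few dendrites stays below the threshold...
few = [0.1, 0.2, 0.15]
# ...but input from many dendrites at once pushes the neuron over it.
many = [0.1] * 15

fires(few)   # stays quiet
fires(many)  # fires
```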
Reply #28
A basic assumption in that statement is that AI would not be "of" humanity. I think of that sort of thing a bit like giving birth to a really "different" child. In addition consider that we might slowly transition into machines ourselves. It's already started.


We have machines that can replace our hearts. We have machines that can regulate brain function. We have machines that can start to emulate the eye. We have machines that can replace arms. We have machines that can replace legs. On and on and on.


We are slowly developing the technology to build cyborgs. Most of this technology remains inferior to our natural born organs. But in time we can build organs that do not fail.


Imagine replacing, one organ at a time, each organ in your body with a device that does all the essential functions of the organic organ. Let's say we started slowly replacing bits of your brain with computer chips. A little at a time. Replace your visual cortex... your auditory centers... etc. At the end would you still be "you"? If we could digitize your "soul"... your memories, your mind, your thoughts, and feelings... if we could simulate your existence in a machine, would that be so bad?


Benefits?

Immortality.
Unlimited potential.
The ability to send your mind through time and space at the speed of light.
Immunity from toxins.
Enhanced resistance to hot and cold.
MUCH higher resistance to pressure differences.


If we want to go into space we may need to become machines first. Or birth a race of machines to go in our name... while we stay on earth and wait for the sun to burn out. Those machines would be the only surviving legacy of humanity. The only evidence we ever existed.


But could you take the first step? If someone offered you the ability to understand the most sophisticated math known to humanity today, and the only price were grafting a computer chip onto your brain, would you do it? How about becoming an excellent programmer? Or musician?
Reply #29
you don't know much of your neuroscience, do you?


Nope! Just some basics.

We have machines that can replace our hearts.


Replace, or augment?

Let's say we started slowly replacing bits of your brain with computer chips.


I sincerely doubt they will ever develop a computer "chip" that can replace the parts of your brain which handle higher functions. Lower functions like respiration and heartbeat, maybe, but higher functions? Not so sure about that. Of course, if you mean "computer chip" in the generic sense of "a computer component", that's another ballpark.

If someone offered you the ability to understand the most sophisticated math known to humanity today, and the only price were grafting a computer chip onto your brain, would you do it?


My question would rather be: Why not do it? What's the disadvantage? What will it cost me?
Reply #30
In addition consider that we might slowly transition into machines ourselves. It's already started.

I agree that this is the most likely scenario, although I don't quite believe that we won't be able to explore space without being mostly metal.
If someone offered you the ability to understand the most sophisticated math known to humanity today, and the only price were grafting a computer chip onto your brain, would you do it?

hell yes.

then again, I don't think I'll need it.
Reply #31
There is a huge difference between data input (which is binary for a lot of our body, maybe even most) and data processing. Which, while our brain cells may be binary (on or off) I don't think is binary -- they still have to decide where to send their data, after all.


what you are describing as thought is known as "fuzzy logic". they are trying to develop a computer system using "fuzzy logic" so that they can figure out a way to make computers "think" like the human mind. The human mind does not use binary for data input. Binary is ON or OFF, YES or NO; the human mind, however, uses "could be", "might be", and abstract "what if". Binary cannot create an idea; only human "fuzzy logic" can.

Reply #32
fuzzy logic
A superset of Boolean logic dealing with the concept of partial truth -- truth values between "completely true" and "completely false". It was introduced by Dr. Lotfi Zadeh of UCB in the 1960's as a means to model the uncertainty of natural language.
Any specific theory may be generalised from a discrete (or "crisp") form to a continuous (fuzzy) form, e.g. "fuzzy calculus", "fuzzy differential equations" etc. Fuzzy logic replaces Boolean truth values with degrees of truth which are very similar to probabilities except that they need not sum to one. Instead of an assertion pred(X), meaning that X definitely has the property associated with predicate "pred", we have a truth function truth(pred(X)) which gives the degree of truth that X has that property. We can combine such values using the standard definitions of fuzzy logic:
truth(not x)   = 1.0 - truth(x)
truth(x and y) = minimum(truth(x), truth(y))
truth(x or y)  = maximum(truth(x), truth(y))
(There are other possible definitions for "and" and "or", e.g. using sum and product). If truth values are restricted to 0 and 1 then these functions behave just like their Boolean counterparts. This is known as the "extension principle".
Just as a Boolean predicate asserts that its argument definitely belongs to some subset of all objects, a fuzzy predicate gives the degree of truth with which its argument belongs to a fuzzy subset.
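the definitions quoted above translate almost directly into code. here's a minimal Python sketch of the three operators, with truth values as floats in [0.0, 1.0] instead of Boolean True/False (the function names are my own, not part of any library):

```python
# Fuzzy truth values are floats between 0.0 (completely false)
# and 1.0 (completely true).

def truth_not(x):
    return 1.0 - x

def truth_and(x, y):
    return min(x, y)

def truth_or(x, y):
    return max(x, y)

# Restricted to the values 0 and 1, these behave exactly like their
# Boolean counterparts -- the "extension principle":
truth_and(1.0, 0.0)  # 0.0, like True and False
truth_or(1.0, 0.0)   # 1.0, like True or False

# With partial truths we get degrees of truth instead:
maybe_warm, maybe_rainy = 0.7, 0.4
truth_and(maybe_warm, maybe_rainy)  # 0.4 -- "warm and rainy" is as true as its weakest part
```

note that, as the definition says, these degrees of truth need not sum to one, unlike probabilities.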
Reply #33
what you are describing as thought is known as "fuzzy logic". they are trying to develop a computer system using "fuzzy logic" so that they can figure out a way to make computers "think" like the human mind. The human mind does not use binary for data input. Binary is ON or OFF, YES or NO; the human mind, however, uses "could be", "might be", and abstract "what if". Binary cannot create an idea; only human "fuzzy logic" can.

we're talking about the processing, not the output. your brain still works in 1s and 0s, even if the conscious output works in (what is APPARENTLY) a range of possible answers beyond one and zero.
Reply #34

We have machines that can replace our hearts.


Replace, or augment?

Both. Artificial hearts have been made that can operate almost as well as transplants.

I sincerely doubt they will ever develop a computer "chip" that can replace the parts of your brain which handle higher functions. Lower functions like respiration and heartbeat, maybe, but higher functions? Not so sure about that. Of course, if you mean "computer chip" in the generic sense of "a computer component", that's another ballpark.

I don't mean it will be a silicon semiconductor, but a human-built artifact able to replace a specific brain function and/or interface with the brain on a deep level.

Such components once developed will be quickly improved upon and expanded in capability.

If required we can even genetically modify the human being augmented such that they accept the implants more smoothly.


We are entering a new stage in the evolution of life on earth.


First, evolution could only occur through random mutation.

Second, evolution added sex to the list, allowing successful individuals to exchange genetic material.

Third, evolution is gaining the ability to allow individuals to intelligently guide their own evolution.

Fourth, evolution will transcend the biological... I don't know what comes after that.



My question would rather be: Why not do it? What's the disadvantage? What will it cost me?

You'd likely become more dependent on civilization. As it stands, if civilization were destroyed tomorrow billions would die, but humans can still live in barbarism. If we become cyborgs, however, we might not be able to survive without organized industry to provide replacement parts and service for our manufactured parts.
Reply #35
If we become cyborgs, however, we might not be able to survive without organized industry to provide replacement parts and service for our manufactured parts.


And how is that different from the diabetic that needs insulin shots? The epileptic that needs drugs to control his seizures? The fat guy who needs food to keep from starving?

... The idiots that don't know enough to boil water before drinking it? The people that "do their business" upstream of where they (or others) drink?
Reply #36
seeing as that doesn't hurt us, I don't see that as "bad"

also note that we would each be way more than "just numbers"; we would probably register in the trillions of trillions (I don't exactly know what that is)


Well, it could be: if it needs to subtract something, and that something happens to be a person, then goodbye person.

Our brains process chaotically. Just sit down and meditate for a little bit; learn to "hear" your mind better


Great, now he 'hears' things in his head.

neurons fire indiscriminately: they have a certain potential threshold that causes them to fire, and when the information is transferred it goes from thousands of dendrites to dozens of nearby neurons. in effect a "shotgun" approach, not a chosen one. the only thing that keeps your body from going completely haywire is that there is an e-potential threshold that keeps the neurons from firing upon being jolted by only a few dendrites.


He is correct, neurons fire everywhere; the thing that matters is what chemicals are used in the transference between two cells.

My question would rather be: Why not do it? What's the disadvantage? What will it cost me?


Nothing much, we might need to implement a loyalty program just to make sure you stay working for the UEF.

we're talking about the processing, not the output. your brain still works in 1s and 0s, even if the conscious output works in (what is APPARENTLY) a range of possible answers beyond one and zero.


Well, doesn't processing include the 'maybe'? You see something and you think that it 'might' be this, or it 'might' be that. I don't think for one bit that we function in such a simplistic way as binary, but it is true that on the basic level our cells and receptors probably do.
Reply #37
Umm, the closest thing to a walking, talking AI today is that big white dancing robot in Japan. I mean, he's adorable, but can those jazz hands really decimate the entire human race? If so... kudos.

The only things that can really f*** us sideways from technology are some mega computer virus shutting down all internet and computers, EMP waves doing permanent damage and wiping out all electricity (now THAT would send us into complete chaos), nuclear armageddon (rare but slightly possible), or Mongolians.
Reply #38

Great, now he 'hears' things in his head.


So does every single person on earth. The only difference is a sane person knows that's just his own thoughts!
Reply #39
And how is that different from the diabetic that needs insulin shots? The epileptic that needs drugs to control his seizures? The fat guy who needs food to keep from starving?

It's different in a few ways.

1. I'm assuming the cyborg CHOSE to be a cyborg and wasn't made one by some accident or affliction.

2. I'm assuming that eventually cyborgs will be the majority of the population. We might still be born as pure humans, but perhaps all adults become cyborgs. People with serious medical conditions that require medication to keep them stable are a very small minority of the population.

3. You cited afflictions which harm those affected, while the cyborg modifications are intended to be positive.

The idiots that don't know enough to boil water before drinking it? The people that "do their business" upstream of where they (or others) drink?

I'm making the point that once your biology starts to become dependent on artifacts, losing access to the means of repairing or replacing those artifacts could mean you'd die. If we assume that a large portion of the population were converted and then society fell, it would mean that only those that required no maintenance, had never been modified, or somehow had access to replacements would survive. Currently we need food, water, shelter, and access to a large enough breeding population to prevent inbreeding; when you take it down to basics, that's it. However, once most of us are cyborgs, we've added an additional requirement for survival: access to repair and replacement facilities to keep the artificial components functioning. If we're looking at a worst-case situation, then they would only need that kind of access for a single generation; their progeny obviously wouldn't have such things installed. But it does increase the "price" to the human population of a collapse in civilization. That was my point.


Reply #40
I would like to see the sources where you get your information and conclusion that "in 2 decades time" the development of AI will exceed human intelligence and capability.

From the research I've done online and info from reputable sources, I see sentient AI as far from being able to match a human's ability. We already have a very tough time trying to get an AI to respond to human facial gestures and so forth (this is still being developed at MIT).
Reply #41
More intelligent is a subjective term... some people define intelligence by the capacity to remember, in which case, in straight memory there exists computer intelligence more intelligent than we are.

Other people define it as the ability to reason, whether inductive or deductive. And this is the area we would need to be worried about. The question isn't what happens when an artificial intelligence becomes more intelligent than we are (I would say a great many of the Earth's politicians fall into the category of less intelligent than most AI in the modern day), but rather what happens when the AI is aware that it is more intelligent. Self-awareness, though not necessary for intelligence, is necessary for other things: perspectives on reality, morality, etc...

A computer mind which becomes more intelligent than a human may not know that it exists, that we exist, or that the world exists; it may have no awareness other than input and output, 'caring' (in a very figurative sense, myself not knowing a better word to explain it) not where it comes from or where it goes. We assume that creatures at least as intelligent as us must be sentient, because every time we see creatures at least as intelligent as us, they are. But the fact is, the only creatures as sentient as us are other humans.

But let's assume in 20 years we create a sentient computer at least as or more intelligent than a human (besides a politician, which generally is not a human but rather a simple organic machine that truly makes me wonder how something without a central nervous system can make it through a full lifetime). The only real problem is whether or not morality could be programmed in and skepticism could be programmed out.

In the case of morality, it is important because we would need to know if the computer would recognize our sentience. That is: we recognize other people are sentient because they are the same species, and we would treat them as we would like to be treated (Lao Tzu said it first; Jesus, and by extension God, just copied him). Computers look nothing like us (unless given a human form, yay Asimov!), so how would a computer look at us and figure out we are sentient? The quick answer is that, on their own, they necessarily would not. The fix would be to program it in: give it, say... three principles which it cannot break. This would be the creation of non-objective objective principles; they exist, but only inside of the machine, thus making them physical principles and not underlying principles of the Universe. The question then to be asked, from our moral standpoint, is: would it be right for us to force moral ideas upon another creature in order for it to better serve us? Would that not be slavery?

As for skepticism, I do not think it could be programmed out without removing a great deal of intelligence and reasoning from the machine, thus making it essentially dumb. A smart machine would recognize that any sensory information may not accurately represent the world, or in fact there may be no 'world' as we would call it. To steal a bit from the First Meditation, how in fact would the machine be able to tell the difference between the 'real' world and a world where the information were given to it by a malicious programmer? We all know the senses can be fooled; just take mushrooms. So how difficult would it be to fool the senses of a being which we created? Not at all. An intelligent computer would break down into solipsism and be stuck in an infinite loop; a sentient computer may disregard it and file it under the assumption that it must think this way to make any progress. Maybe one day, just as sick humans see a physician, a sick machine will see a metaphysician to help it get through a literal identity crisis.

I honestly do not think it would be a problem for sentient computers to be more intelligent than us. If their neural nets and our neural nets were rough copies of each other, then perhaps the way to deal with sentient machines is not to think of them separately from humans. That is, like a human, a sentient machine would have a childhood, adolescence and adulthood, and like a human, much of its personality could be derived from whether any part of its development was particularly traumatic. Or think of it this way: the final step of programming would be character development. It does not have to be Spielbergesque, but that is a possibility. Chances are, though, if an AI were to be developed, its first employment would be in military arenas, so any chance to create a 'good' AI would be destroyed; think more along the lines of Terminator or WarGames (as most people already do).


At some point along the line, the sentient computer came to be thought of as always a member of the Church of Ayn Rand. How did this happen? It probably appeared in some movie and it sold, so others trying to capitalize on the situation made similar movies, thus cementing it in our social consciousness. Can a sentient computer turn out to be like in the Matrix? Yes. Is it possible for other outcomes? Yes.

To say that research and technological development should be curtailed because of possible outcomes should not be sufficient reasoning to stop it. If we were to stop any potentially destructive technology because of its potential, society as we know it would never have existed. Now, that may or may not be a good thing, but if we go back a few hundred thousand years to the first caveman who sharpened a stick, and someone said 'do not do that because it will cause millions of deaths' and he listened, history turns out quite differently.

I myself believe in evolution. Sentient machines bring us the opportunity to raise our being from Homo sapiens to Homo superior. A sentient machine requires us to better ourselves and not be made obsolete.
Reply #42
I would like to see the sources where you get your information and conclusion that "in 2 decades time" the development of AI will exceed human intelligence and capability.

From the research I've done online and info from reputable sources, I see sentient AI as far from being able to match a human's ability. We already have a very tough time trying to get an AI to respond to human facial gestures and so forth (this is still being developed at MIT).

Considering that we can't create intelligences with the flexibility of an ant despite having computers that greatly exceed the calculative power of an ant... I don't see creating computers with the calculative power of the human brain and beyond as being the primary hurdle we need to leap towards truly robust AI.


That said, I've read in many places that our supercomputers will exceed the calculative power of the human brain within 10 years and after that they'll start exceeding it dramatically... Moore's law being what it is...
Reply #43
Well, it could be: if it needs to subtract something, and that something happens to be a person, then goodbye person.

wait, you mean a computer using us like a giant abacus? I doubt it...

dammit, you just made multiplication a very... nasty... subject.
It's different in a few ways.

1. I'm assuming the cyborg CHOSE to be a cyborg and wasn't made one by some accident or affliction.

2. I'm assuming that eventually cyborgs will be the majority of the population. We might still be born as pure humans, but perhaps all adults become cyborgs. People with serious medical conditions that require medication to keep them stable are a very small minority of the population.

3. You cited afflictions which harm those affected, while the cyborg modifications are intended to be positive.

I'm going to assume that if we have the medical tech to safely put this chip in someone's brain, we have the materials tech to make sure it doesn't degrade. so I think it's all moot to argue about.
However, once most of us are cyborgs, we've added an additional requirement for survival

that's also assuming that the chip becomes integral instead of peripheral in the case of a shutdown.
Considering that we can't create intelligences with the flexibility of an ant despite having computers that greatly exceed the calculative power of an ant... I don't see creating computers with the calculative power of the human brain and beyond as being the primary hurdle we need to leap towards truly robust AI.


That said, I've read in many places that our supercomputers will exceed the calculative power of the human brain within 10 years and after that they'll start exceeding it dramatically... Moore's law being what it is...

people don't seem to understand how marvelous the brain truly is...

for a computer to function on the human brain's level it will have to get many times smaller, if only for the length of its buses to be reduced.

if you're talking number crunching, duh, computers already win. but in terms of SHEER processing MIGHT, well... I win.
Reply #44
So does every single person on earth. The only difference is a sane person knows that's just his own thoughts!


So you're quite certain you are sane?

I'm making the point that once your biology starts to become dependent on artifacts losing access to the means of repairing or replacing those artifacts could mean you'd die


Well that is only bad if you are afraid to die.

I would like to see the sources where you get your information and conclusion that "in 2 decades time" the development of AI will exceed human intelligence and capability.

From the research I've done online and info from reputable sources, I see sentient AI as far from being able to match a human's ability. We already have a very tough time trying to get an AI to respond to human facial gestures and so forth (this is still being developed at MIT).


Oh, I'm basing it on a looser construct, the theory of the Singularity, in which technological advancement is exponential, which in truth it is. It is but an estimate, and by far not a surefire date; when you are dealing with the future you can't always be sure of what will happen next.

wait, you mean a computer using us like a giant abacus? I doubt it...

dammit, you just made multiplication a very... nasty... subject.

Exactly, the machine doesn't even have to be malicious.

I win.


Your pretentiousness knows no bounds Schem.
Reply #45
Oh, I'm basing it on a looser construct, the theory of the Singularity, in which technological advancement is exponential

the theory of the singularity is flawed; technological advancement is quite obviously closer to the way populations with limiting factors develop: lag, log and stat phases.

lag: little development
log: exponential-like development
stat: growth slows, eventually stopping completely


why? because no matter how smart we are, we are still forced to obey the constraining laws of our physics. now, you may say, "what if we breach our own universe into the multiverse"

well... then that will be a much funkier model, but still.
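the lag/log/stat pattern above is essentially the logistic curve: growth looks exponential in the middle, then a limiting factor (the carrying capacity K) flattens it out. here's a tiny Python sketch of that idea; the function and every number in it are purely illustrative, not a real model of technological progress.

```python
import math

def logistic(t, K=100.0, r=0.5, t0=20.0):
    """Capability at time t on an S-curve: K = ceiling imposed by the
    limiting factor, r = growth rate, t0 = midpoint of the log phase."""
    return K / (1.0 + math.exp(-r * (t - t0)))

lag  = logistic(5)    # lag phase: barely above zero
log_ = logistic(20)   # log phase: halfway to K, fastest growth here
stat = logistic(60)   # stat phase: pinned just under the ceiling K
```

unlike a pure exponential, this curve never exceeds K no matter how far out you run it, which is the whole point of the "limiting factor" objection.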
Reply #46
More intelligent is a subjective term... some people define intelligence by the capacity to remember, in which case, in straight memory there exists computer intelligence more intelligent than we are.


Those people are idiots, and I think we can safely assume (for the purposes of this conversation) that everyone knows that's stupid.

It's different in a few ways.
I'm making the point that once your biology starts to become dependent on artifacts, losing access to the means of repairing or replacing those artifacts could mean you'd die. If we assume that a large portion of the population were converted and then society fell, it would mean that only those that required no maintenance, had never been modified, or somehow had access to replacements would survive. Currently we need food, water, shelter, and access to a large enough breeding population to prevent inbreeding; when you take it down to basics, that's it. However, once most of us are cyborgs, we've added an additional requirement for survival: access to repair and replacement facilities to keep the artificial components functioning. If we're looking at a worst-case situation, then they would only need that kind of access for a single generation; their progeny obviously wouldn't have such things installed. But it does increase the "price" to the human population of a collapse in civilization. That was my point.


Let me rephrase my point more clearly: a large part of our current population will die anyway the instant we lose civilization, for whatever reason, even if you ignore the fact that it's only civilization that lets us feed this many people. Adding in cybernetics (and I sincerely hope that any widely used cybernetic implant would be designed around a "human" lifespan -- or at least one comparable to the organ it's replacing / augmenting) isn't going to change that much.



So you're quite certain you are sane?


I'm willing to consider (informed) debate on that subject... if needed.

is quite obviously


Question: Should I just LOL at this, or actually ask you to explain how it could be oh-so-obvious?
Reply #47
I think a good point would be the amount of energy needed to keep the robotics running...
Reply #48

I think a good point would be the amount of energy needed to keep the robotics running...


Depending on the nature of the equipment, odds are you can just "harvest" it from the human body. There are (immature) technologies already out that let you do that. It's not a lot -- maybe enough to run small, power-efficient electronics -- but it'll improve. Now, if we're talking about any kind of mechanical motion, things get more complicated -- they'll need to figure out how to utilize the body's own energy distribution system (AKA: harvest blood sugar and convert that to power), but it could be done for stuff like artificial hearts. Stuff like artificial lungs / kidneys could probably get by on much lower power levels, though.
Reply #49
add in the energy required to manufacture the stuff and you have a pretty large amount of junk...

to do anything that they show in the movies, increased 'thought' and whatnot would take quite a bit of energy, IIRC. also, does anyone have any ideas about power systems that small? I know that it is hard to predict technology, but does anyone have ideas of how it's gonna go based on current research?
Reply #50
Should I just LOL at this, or actually ask you to explain how it could be oh-so-obvious?

again, because we're constrained by those infamous laws of physics, eventually we will get to the point where our engines run at 99.9% efficiency, and without some amazing change in the way the universe works, we won't be able to breach 100%. the same will go for so many other technologies: engines, material sciences, explosive devices, medicine, etc.

there's a certain point beyond which there's nothing left to discover. so here's an LOL for you not getting that...
does anyone have any ideas about power systems that small?

microfilm nanite power strips work kind of like nuclear rod strips, except that there are lots more of them, they're nanite-sized, and it's a chemical reaction. thing is, I don't know how long those last & their output.