Why the future doesn't need us

Superintelligent AI and the Singularity

What happens when we create the first AI that is more intelligent than the average human? Well, with technological progress at the pace it is today, it will only take us two decades to accomplish this.

So what happens then? Do we hope we didn't make a mistake and live our lives letting the AI enhance themselves further and further? Or do we stop and think about what the consequences might be? We never know when a simple math problem assigned to a superintelligent entity might cause the extinction of the human race.

So, here's a question. Should technological progress become more limited?
518,872 views 372 replies
Reply #1 Top
a bit like Battlestar Galactica, where the Cylons (robots) turn on the human race and own 'em.
Reply #2 Top
Well, if we don't get out of this Iraq war, we're all going to die!
-Average American Thinking
Reply #3 Top
and how do you get to that conclusion?
Reply #4 Top
he's an American idiot

sorry windex, don't really mean it, but I had to say it <3
Reply #5 Top
More people in America want to get out of the war than care about global warming (or at least know about it). And after all, all the EPA people tell us that if we don't cut global warming, we're all going to die!

So since more people are against one thing than the other, it's obviously worse than the thing that 'will' kill us all.


POWER TO THE PEOPLE!
Reply #6 Top
So, here's a question. Should technological progress become more limited?

a good question, and it's been asked for a long time.
I think the problem with it is that we try to become gods, and with that comes a responsibility like no other.

As it is now, when we can't raise our own children to behave themselves properly, I don't think we are ready to raise a new "species".

Not that I'm against real AI; I would love to have one to teach and learn stuff with (like those old Creatures games).
Reply #7 Top
kinda hard to say that we will ever have an AI close to a human's. it isn't an easy thing to do. also, I don't think the US will be making Skynet anytime soon. also, if you look into Supreme Commander lore, there will always be a backup program that cannot be deleted, which will keep things from going overboard.
Reply #8 Top
Should technological progress become more limited?


Too late. You can't limit it -- it's become a self-sustaining reaction.
Well, with technological progress at the pace it is today, it will only take us two decades to accomplish this.


And we were supposed to be driving flying cars by now -- you can't schedule progress. Sure, flying cars are "just around the corner" (They've got working prototypes...) but the fact remains that these things will take their own pace.
Reply #9 Top
Too late. You can't limit it -- it's become a self-sustaining reaction.


Nothing's ever too late or too early; everything happens precisely when it's meant to.

As to the second part of it: is that reaction now uncontrollable, and will it bring an end to our human society?

And we were supposed to be driving flying cars by now -- you can't schedule progress. Sure, flying cars are "just around the corner" (They've got working prototypes...) but the fact remains that these things will take their own pace.


We've had flying cars for quite a while; they just weren't feasible because they were so inefficient. The plan, I believe, was discussed and discarded. However, they are thinking about getting airplane-cars on the roads soon.

Now, you can plan progress; that's why we have all these organizations and bureaucracy.
Reply #10 Top
I live in Philadelphia and we have flying cars here already, have for a long time!

they fly into each other while drag racing
they fly off the road from the potholes
they fly into people 'cause bullets are in short supply
they fly down the street like idiots
they fly onto the sidewalk for points (Death Race 2000)
they fly into hotel parking lots for hookers

I just wish those jerks who fly them would also learn how to drive them!

Reply #11 Top
Just because an AI will be smarter than a human doesn't mean it's going to start thinking for itself about what a wonderful place it would be if humans didn't exist and all that did were AI. It's not like we can program emotions into a computer (well, we can, but they aren't genuine feelings; they need to be set by some trigger and have an automated response or several automated responses).
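The "trigger plus automated response" idea above can be sketched in a few lines. This is a toy illustration, not a real system — the trigger names and responses are made up, and the point is that everything is a hand-written lookup, so nothing in it is a genuine feeling:

```python
# Toy model of "programmed emotions": a fixed mapping from triggers
# to canned responses. The entries here are invented for illustration.
EMOTION_TRIGGERS = {
    "praise": "happy",
    "insult": "angry",
    "threat": "afraid",
}

def react(stimulus: str) -> str:
    """Look up the automated 'emotional' response for a known trigger."""
    # Anything outside the hand-written table gets a default reaction,
    # which is exactly why this isn't real emotion, just a lookup.
    return EMOTION_TRIGGERS.get(stimulus, "indifferent")

print(react("praise"))   # -> happy
print(react("weather"))  # -> indifferent (no trigger defined)
```

However smart the lookup gets, the "feelings" only exist where a programmer wrote them in.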

Chill TGE, the only thing that threatens your power is a small group of small teenagers with barely any funding or support! And with your Grand Moff's latest machine, nothing can stop you! What's the worst that could happen?
Reply #12 Top
A future that doesn't need us won't include us.

If you do believe that our end will come from artificial intelligence replacing the human mind (FYI: for that to be possible, computers would have to be operating with quantum mechanics, gl with that), then depending on the nature of the intelligence we created, we can of course see the "famous end" and the "more realistic end". (And a shitload more additional ends, including combinations of various ends.)

The Famous End is that with the rise of intelligent machines, they will come to the conclusion to usurp the human race in order to achieve a specific agenda (self-awareness, a well-intentioned road to hell, or a simple programming error, for example).

The more realistic end is that with the rise of intelligent machines, a majority of the human race may be unable to cope with them, and in a society so constructed that most of the population cannot maintain a decent standard of living, the human race will be usurped by machines. (There are so many variables based on perception, but I want to keep it simple.)

So instead of technology destroying humans, a society could be constructed in which the human race is unable to co-exist with its own technologies. (A perplexing thought, no?)

Nevertheless, in the second scenario, I believe the top 5-10% of the human race can almost always survive in such a society, AND that the merger between technology and man is inevitable.
Reply #13 Top

a bit like Battlestar Galactica, where the Cylons (robots) turn on the human race and own 'em.


As long as it's the human Cylons... sign me up for some of that action.


/homerdrool Tricia Helfer
Reply #14 Top
You must ask yourself: what would these AIs do after they eliminated humanity? Spread through the universe, consuming all, as we ourselves would have done. AIs of such advanced intelligence would be human in every regard, disregarding the fact that they would be of metal and we of flesh. And while AIs with human intelligence are possible, it would take an extremely advanced civilization to construct something like that, and hopefully by then they would be wary enough of such things.
Reply #15 Top
What's the worst that could happen?


A computer using all the mass in the solar system as a calculation device making us just numbers in an equation?
Reply #16 Top
pfff TGE, haven't you read... that one book... where the supercomputer processor thingies were moved into hyperspace...
Reply #17 Top
what would a computer need in order to take over the world? (or start a cascade that ends in said result)

-needs to be smarter than we are, check
-needs to have programming that thinks outside of assigned parameters, no check
-needs self-preserving motives, no check
-needs self-advancing concerns, a.k.a. 'ego', no check
-needs engineering capacity, no check
*note* -it might be impossible for a computer to create anything more advanced than itself using its own programming, due to the paradox created by degradation of information: think, is your brain powerful enough to contain all the information necessary to create a duplicate, let alone a better one? the answer: no.
-needs appendages
-needs access to materials
*note* -if it's acting independently it will need mobility to access those materials
-needs a way to process materials

I assume if it can create even crude materials it would work with that equipment, then progress to the point where it's working on the micro level. Ex: use two rocks to make a sharp stone, use the sharp stone to whittle a wood tool, use the wood tool to dig for metal, use fire to make the metal into another tool, etc.
... this would take a while...

anyway, I don't see the point in putting all of this stuff together. even still, there are technical issues that we cannot sort out currently, so computers will not be more powerful than we are (or smarter) for a while.
A computer using all the mass in the solar system as a calculation device making us just numbers in an equation?

seeing as that doesn't hurt us, I don't see that as "bad"

also note that we would each be way more than "just numbers"; we would probably register in the trillions of trillions of numbers (don't exactly know what that is)
We've had flying cars for quite a while; they just weren't feasible because they were so inefficient. The plan, I believe, was discussed and discarded. However, they are thinking about getting airplane-cars on the roads soon.

not true, the reason we don't have flying cars is because we don't have a reliable model. I agree with Ron here, you cannot place a timeline on progress (unless you're making obvious statements, like "we won't have hyperspace in the next week or two")
Reply #18 Top
"we won't have hyperspace in the next week or two"


even that you can't be 100% sure about
Reply #19 Top
but I can be over 99.99999999999999999999999999999999999999999999999999999999999% sure about, and in almost all forms of thinking, that's 100%.
Reply #20 Top

-needs to be smarter than we are, check


Um, no. A computer can crunch numbers all day long, better than we can. But it isn't in fact more intelligent than we are. All it can do is run through a program. Now, if you manage to get a program capable of self-reprogramming in some form or another, that will change.
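The distinction drawn here — running a fixed program versus self-reprogramming — can be sketched with a toy example. This is only an illustration, not real AI: the program stores part of its own behavior as data and overwrites that data, so its response to the same input changes over time:

```python
# Toy sketch of "self-reprogramming": behavior lives in a table the
# program itself can rewrite. All names here are invented for the example.
rules = {"greeting": "hello"}

def respond(kind: str) -> str:
    """Run the current rule for this kind of input."""
    return rules.get(kind, "?")

def learn(kind: str, new_response: str) -> None:
    """Overwrite a stored rule -- the program changes its own behavior."""
    rules[kind] = new_response

print(respond("greeting"))     # -> hello
learn("greeting", "good day")  # the program rewrites its own rule
print(respond("greeting"))     # -> good day
```

A fixed program always gives the first answer; a self-modifying one can end up giving the second, which is the change being pointed at above.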
Reply #21 Top
sorry ron, let me rephrase:

thinks faster than we do.

happy?
Reply #22 Top

happy?


Since a computer can't think, only crunch numbers... not really
Reply #23 Top
Thinking is merely the processing of a binary system. that's all it comes down to; computers do that as well. so do black holes, so does your ham sandwich. the ONLY difference is that both brains and computers work in an orderly way, while everything else processes chaotically (beyond our capacity to decipher)
Reply #24 Top
Thinking is merely the processing of a binary system.


... LOL

Please, please tell me you aren't serious?

the ONLY difference is that both brains and computers work in an orderly way, while everything else processes chaotically (beyond our capacity to decipher)


See above: Please tell me you aren't serious.

Our brains process chaotically. Just sit down for a while and meditate for a little bit, learn to "hear" your mind better. You'll notice that while there is a semi-ordered primary "thread" which represents your conscious train of thought, there are many other "threads", to which you give varying amounts of attention. Most never reach the level of conscious thought, true, but they are there. And they are real. And boy are they chaotic.

And stopping even the conscious layer of thought is incredibly, incredibly difficult.
Reply #25 Top
I don't care that you cannot tell what's going on; the result of the whole process is clear and understandable, so much so that you only exist because of it. hence "orderly"

the result of a black hole "thinking" is the most chaotic thing possible: almost (or possibly completely) absolute entropy breakdown in the form of Hawking radiation. that, my man, is chaos.
... LOL

Please, please tell me you aren't serious?

no, I'm very serious.

even visual data is binary: rods and cones saying "yes, I see something" or "no, I don't", and then the stimulation of neurons is as simple as getting enough 1s to pass one on to another.

auditory data! nothing but thousands of tiny hairs, each attached to a neuron that says whether they are on or off, binary.

the same goes for touch, internal organs being ordered to do stuff, taste, smell, types of perception you don't even know about, etc.

the very foundation of our universe is binary: whether something is happening or not.
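The "enough 1s to pass one on to another" picture described above is essentially an all-or-nothing threshold unit, and it can be sketched in a couple of lines. The threshold value here is arbitrary, purely for illustration:

```python
# Toy all-or-nothing "neuron": it fires (outputs 1) only when enough
# of its binary inputs are on. Binary in, binary out, as described above.
def fires(inputs: list[int], threshold: int = 3) -> int:
    """Return 1 if at least `threshold` inputs are 1, else 0."""
    return 1 if sum(inputs) >= threshold else 0

print(fires([1, 1, 1, 0]))  # -> 1 (enough 1s to fire)
print(fires([1, 0, 0, 0]))  # -> 0 (below threshold, stays silent)
```

Whether that threshold picture captures everything a brain does is exactly what the thread is arguing about.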