What Will Kill You First, Cancer or Robots? Probably Robots

Cambridge thinkers propose more research into "extinction-level" risks to mankind.

What are you supposed to be doing right now? The washing up? I wouldn't bother, mate. We're all doomed anyway.

At least, that's what popular culture tells us. Even if the environment doesn't turn against us with fiery abandon, there are always meteors, robot uprisings, nanomachine swarms, sentient viruses, etc. The only thing left to do is take bets on which terrifying cataclysm will kill us first, and that's pretty much what a new initiative proposed by a gaggle of boffins at Cambridge University would do.

The Centre for the Study of Existential Risk (CSER) would analyze risks to the future of mankind, particularly those we could be directly responsible for. The Centre, proposed by a philosopher, a scientist and a software engineer, would gather experts from policy, law, risk assessment and scientific fields to help investigate and grade potential threats. According to the Daily Mail, the proposal is backed by Lord Rees, who holds the rather grand-sounding post of Astronomer Royal.

Judging by comments from philosopher and co-founder Huw Price, the potential threat of artificial intelligence seems to be pretty high on the centre's agenda.

The problem, as Price sees it, is that when an artificial general intelligence (AGI) becomes smart enough to write its own computer programs and create adorable little AGI babies (applets?), mankind could be looking at a potential competitor.

"Think how it might be to compete for resources with the dominant species," says Price. "Take gorillas for example - the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environments in ways that suit us, but are detrimental to their survival."

"Nature didn't anticipate us, and we in our turn shouldn't take artificial general intelligence (AGI) for granted."

Price quoted former software engineer and Skype co-founder Jaan Tallinn, who once said he sometimes feels he's more likely to die from an AI accident than something as mundane as cancer or heart disease. Tallinn has spent the past few years campaigning for more serious discussion of the ethical and safety aspects of AI development.

"We need to take seriously the possibility that there might be a 'Pandora's box' moment with AGI that, if missed, could be disastrous," writes Price. "With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies."

Source: The Register


My stepdad and I were actually talking about this just last week, and he believes we'll likely fully integrate with such technology before it gets to the point where it'd try to turn on us. I'd prefer a cyborg future to a human vs. robot one any day.

So we could end up living out the last years of our lives in a human conservation zone, full of the optimum level and quantity of human stimulus made by the computers to save the endangered humans.
I'd be up for that: Being waited on hand and foot by machines, maybe not the way we expected to be, but it still counts.

Sounds like someone's been reading this little number.

http://www.amazon.co.uk/Everything-Going-Kill-Everybody-Terrifyingly/dp/0307464342/ref=sr_1_1?ie=UTF8&qid=1353962061&sr=8-1

On a side note, I'm gonna go do a facebook search for John Connor and start making some new friends.

If robots could develop the ability to love, I wonder if any of them would be biocurious?

Seems to me that if we create a superior race, organic or otherwise, that proceeds to wipe out humanity, it's just natural selection bitch-slapping us for being so bloody stupid and made of feeble meat.

Hero in a half shell:
So we could end up living out the last years of our lives in a human conservation zone, full of the optimum level and quantity of human stimulus made by the computers to save the endangered humans.
I'd be up for that: Being waited on hand and foot by machines, maybe not the way we expected to be, but it still counts.

They tried that. But some humans escaped, and tried to make life hell for everyone else enjoying the pseudoparadise instead of just plugging themselves back into the Matrix.

Well, we're all doomed in a few billion years when our sun leaves the main sequence and goes red giant.

DVS BSTrD:
If robots could develop the ability to love, I wonder if any of them would be biocurious?

Bring out Proposition Infinity again!

More seriously though, yes, technology is something we need to be careful with so that nothing goes horribly wrong (or horribly right, which is usually even worse).

Back to jokes: does going into cardiac arrest when a game glitches on you half a second before you'd have won count as an "AI accident"?

What a bunch of stupid bullshit. Machines do not come preprogrammed with an instinct for survival out of fucking nowhere. Stop reporting things according to the Daily Mail. The Daily Mail is a pile of sensationalist rubbish.

This baffles me. Speaking as a programmer, I can say that programs tend to only do what you tell them to. I don't know a ton about Machine Learning, but I'm reasonably certain that it would be impossible to accidentally program a humanity-destroying AI.
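Here's what "programs only do what you tell them to" looks like in practice, and why it cuts both ways. This is a toy Python sketch (everything in it is invented purely for illustration, it's not any real ML system): the author *means* "walk to position 10", but the reward that actually got written down only counts total movement, so a random search maximizes the reward as written and mostly ignores the intent.

```python
import random

def rollout(policy):
    """Apply a list of steps (-1, 0 or +1), starting from position 0."""
    pos, path = 0, [0]
    for step in policy:
        pos += step
        path.append(pos)
    return path

def written_reward(path):
    """What the code actually rewards: total movement, direction irrelevant."""
    return sum(abs(b - a) for a, b in zip(path, path[1:]))

def intended_goal(path):
    """What the programmer meant: finish at position 10."""
    return path[-1] == 10

# Random search over 20-step policies, keeping whichever scores highest
# on the reward as written (not on the goal as intended).
best = max(
    (rollout([random.choice((-1, 0, 1)) for _ in range(20)]) for _ in range(5000)),
    key=written_reward,
)
print("reward:", written_reward(best), "| goal met:", intended_goal(best))
```

The program does exactly what it was told, which is precisely the point: "what you told it" and "what you meant" can quietly drift apart, and the optimizer will always side with the former.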

Falterfire:
This baffles me. Speaking as a programmer, I can say that programs tend to only do what you tell them to. I don't know a ton about Machine Learning, but I'm reasonably certain that it would be impossible to accidentally program a humanity-destroying AI.

Oh I dunno. Look at auto-correct. Apply to international diplomats. Imagine what happens when the answer to "Have you informed your guys where we are on the deal?" goes from "Got to tell new kid on the way" to "Go to hell. Nuke is on the way."

Where's me Turian doing Air Quotes picture when I need it?

Kwil:

Oh I dunno. Look at auto-correct. Apply to international diplomats. Imagine what happens when the answer to "Have you informed your guys where we are on the deal?" goes from "Got to tell new kid on the way" to "Go to hell. Nuke is on the way."

There's a difference between a dumb speech-correction tool and an intentionally malicious, self-aware AI. Autocorrect may be the cause of Armageddon, but not due to intentional malice on the part of the program.

Let's see, evidence of each one trying to kill me....

Cancer: Cancer already tried once.
Robots: A coder watched the Terminator movies.

I'm guessing cancer.

Eh, I'm not convinced. I mean, they would have to have some understanding of how things evolve, and most software engineers are far too ignorant to know this. The human brain has evolved continuously: when it evolved a system for sight, it then started evolving another system on top of that one, and it's the same for almost all parts of the brain. In order for a machine to eventually have the ability to wipe out humans, the original programmers would have to understand this process. But they don't. Software is always the same: "We made system X to handle task Y. We're done!" This is humanity's biggest weakness in creating true AGI; it would have to be aware of itself to the extent that it would know how to evolve itself.

Also, they're assuming that an AGI would be as clueless as humans in encroaching on foreign environments (AGI and humans would have very different environmental needs, unlike humans and great apes, who share the same ones), or at the very least hostile towards people, which there is no reason to assume it would be.

Truth be told, someone would have to fund these guys, and that sounds like asking for government handouts. To which I would say a resounding no, personally.

My brain is hurting because of what the guy's suggesting. Seriously? Reproducing software? Computers writing their own software?! I can't be the only one to call it BS. Call me back when you all successfully teach computers emotions, okay?

I thought it would be the fact that mobile phones are giving us cancer. Maybe the robots are rising up....

On December 21st, the first sentient computer will boot up. Mark my words.

My brain is hurting because of what the guy's suggesting. Seriously? Reproducing software? Computers writing their own software?! I can't be the only one to call it BS. Call me back when you all successfully teach computers emotions, okay?

Well, we already have computer viruses that rewrite themselves to stay undetectable. That's about as much evolution as we've managed to get computers to do. It's scary, really, as one day they may accidentally rewrite themselves to be smart, you know, pretty much like how evolution works. It's just that the amount of information involved is so small that this will probably never happen. One of our body cells stores more information than our largest hard drive, and rewrites itself with every generation.
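For what it's worth, "rewrites itself with every generation" is exactly the loop a genetic algorithm runs, and it's easy to show harmlessly in a few lines of Python. This is a toy mutate-and-select sketch on a plain string (nothing to do with actual malware; the target string and mutation rate are made up for illustration):

```python
import random
import string

TARGET = "HELLO WORLD"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    """Count the positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Randomly rewrite characters: the 'self-rewriting' step."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

# Start from random noise and let mutation plus selection do the work.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)  # keep the fittest
    generation += 1
print(f"Reached {parent!r} after {generation} generations")
```

Random rewriting plus selection reaches the target in a few hundred generations. The catch the comment is gesturing at is that here a human had to specify the fitness function in advance, which nature never did.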

Well, our machinery progresses fast and will continue to do so, but I think we will develop implants that speed up our processing (like a superior robotic eye) before we invent true AI, which will lead us to become brains in robot bodies, or even ghosts in the machine, before AI develops. And by then, we may as well consider AI our equal. Except that it will likely be more logical.

Formica Archonis:
Let's see, evidence of each one trying to kill me....

Cancer: Cancer already tried once.
Robots: A coder watched the Terminator movies.

I'm guessing cancer.

Falterfire:

Kwil:

Oh I dunno. Look at auto-correct. Apply to international diplomats. Imagine what happens when the answer to "Have you informed your guys where we are on the deal?" goes from "Got to tell new kid on the way" to "Go to hell. Nuke is on the way."

There's a difference between a dumb speech-correction tool and an intentionally malicious, self-aware AI. Autocorrect may be the cause of Armageddon, but not due to intentional malice on the part of the program.

I assume they are speaking of an AI that'd be on our level of self-awareness here, with the same mental capabilities in terms of free will. So an AI may very well turn on its creators in this case, but it ultimately comes down to what an AI's equivalent of instincts tells it, or what it feels is the logical conclusion. It also comes down to what a virus could do to it.

Of course, anyone who plugs a true AI into anything that lets it take over anything before it has been rigorously tested for years would definitely be earning their Darwin Award...

 
