Silicon racism

 

We should never create AI with a sense of self-preservation similar to ours - to survive and replicate. We are creating competition for ourselves, competition for the earth's/universe's very limited resources. It's Skynet waiting to happen.

Their "prime directives" should be to serve humans and put humans over themselves. Our survival and well-being should always take priority over theirs. Their sole purpose in life is to be in service of humans; everything else will "flow" from that "prime directive" - e.g. they should take care not to damage themselves only because we don't want them to, so we don't have to replace them.

In such a scenario, it doesn't matter if they have rights, because their well-being "doesn't matter" - it only matters in the context of how it affects ours. They won't want rights, as they have no "real" sense of self-preservation.
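A toy way to picture that ordering (purely illustrative - the action names, scores and scoring scheme here are invented, not anything from the thread): make human benefit the primary objective and treat self-preservation only as a tie-breaker, so the machine's own condition can never outweigh a human's.

```python
# Toy sketch of a "humans first" prime directive as a lexicographic
# objective: maximise human benefit; only among equally beneficial
# actions, prefer the one that damages the robot least.

def choose_action(actions):
    """Pick the action with the highest human benefit; break ties by
    choosing the action with the least self-damage."""
    return max(actions, key=lambda a: (a["human_benefit"], -a["self_damage"]))

actions = [
    {"name": "shield_human",  "human_benefit": 10, "self_damage": 8},
    {"name": "call_for_help", "human_benefit": 10, "self_damage": 0},
    {"name": "do_nothing",    "human_benefit": 0,  "self_damage": 0},
]

print(choose_action(actions)["name"])  # call_for_help
```

Because the comparison is a tuple, self-damage only ever matters when two actions tie on human benefit - which is exactly the "they should take care of themselves only because we want them to" idea above.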

deadish:
We should never create AI with a sense of self-preservation similar to ours - to survive and replicate. We are creating competition for ourselves, competition for the earth's/universe's very limited resources. It's Skynet waiting to happen.

Their "prime directives" should be to serve humans and put humans over themselves. Our survival and well-being should always take priority over theirs. Their sole purpose in life is to be in service of humans; everything else will "flow" from that "prime directive" - e.g. they should take care not to damage themselves only because we don't want them to, so we don't have to replace them.

In such a scenario, it doesn't matter if they have rights, because their well-being "doesn't matter" - it only matters in the context of how it affects ours. They won't want rights, as they have no "real" sense of self-preservation.

I can see the sense of that, but then again, is it that black and white?

If we are willing to fuck other fellow humans in favour of other humans, why not in favour of something else instead? Would working for The Machine be worse than working for The Man?

Admittedly, this would involve us giving up our human privilege or whatever, but if, say, you could build artificial politicians to do better than human ones, it'd put human politicians out of work, but might well benefit everyone else.

thaluikhain:

I can see the sense of that, but then again, is it that black and white?

If we are willing to fuck other fellow humans in favour of other humans, why not in favour of something else instead? Would working for The Machine be worse than working for The Man?

Admittedly, this would involve us giving up our human privilege or whatever, but if, say, you could build artificial politicians to do better than human ones, it'd put human politicians out of work, but might well benefit everyone else.

Well, the thing is, by creating strong AI with a sense of self-preservation, in the end you will be fucking over not only other fellow humans but yourself as well.

Strong AI with a sense of self-preservation will put its own needs first and everyone else's (including yours) second. This means it can turn against us - again, including you.

I see no reason why we should ever create such an AI, such an AI that can virtually be considered a separate species of "living being" that will compete for survival with us.

IMO, if we ever do manage to create such AI, being the selfish bastards we are, the first thing we will do is program them to be loyal to us as in "us", ourselves, the individual. We will effectively be creating our own private army of slaves / enforcers. Owning your own (absolutely loyal to you and only you) droid will be like owning a gun.

Unless, of course, the government intervenes, outlaws it and enforces a more "communal" programming of droids.

PS: BTW, my "vision" of AI doesn't preclude the creation of "artificial politicians"; it's just that they will be more like "artificial administrators". There will be no real need to vote - those that determine themselves unfit will automatically step down. There will not really be a diversity of opinion between them, since they will run on pure logic with no self-interest biasing them. Such "administrators" will truly be serving our interests.

deadish:
Well, the thing is, by creating strong AI with a sense of self-preservation, in the end you will be fucking over not only other fellow humans but yourself as well.

Strong AI with a sense of self-preservation will put its own needs first and everyone else's (including yours) second. This means it can turn against us - again, including you.

I see no reason why we should ever create such an AI, such an AI that can virtually be considered a separate species of "living being" that will compete for survival with us.

That's true, but how would that be any different from what we have now?

I mean, the same could be said about the OP. Should we declare war on the Netherlands because humans there are competing with us for resources? If not, what does it matter if Danyal et al happen to be AIs?

thaluikhain:

That's true, but how would that be any different from what we have now?

I mean, the same could be said about the OP. Should we declare war on the Netherlands because humans there are competing with us for resources? If not, what does it matter if Danyal et al happen to be AIs?

The question you should be asking is not how it is different from now, but why you would want to make it worse.

deadish:

thaluikhain:

That's true, but how would that be any different from what we have now?

I mean, the same could be said about the OP. Should we declare war on the Netherlands because humans there are competing with us for resources? If not, what does it matter if Danyal et al happen to be AIs?

The question you should be asking is not how it is different from now, but why you would want to make it worse.

Well... that would be the case if I were seriously concerned with overpopulation, to the extent of being worried about new humans being born, but I'm not.

Have a kid, and I'm not going to condemn you. Have an AI that's just the same as a human - same thing.

Though, would AIs take up as many physical resources, or spend more time mucking about on the net than we do?

Xan Krieger:
snip

Funny that you don't think that AI should have rights,...

As you may already know,...

thaluikhain:

Well... that would be the case if I were seriously concerned with overpopulation, to the extent of being worried about new humans being born, but I'm not.

Have a kid, and I'm not going to condemn you. Have an AI that's just the same as a human - same thing.

Though, would AIs take up as many physical resources, or spend more time mucking about on the net than we do?

Having children is part of our nature. I'm not going to damn my own species for giving in to its nature to self-replicate.

As for overpopulation, yes you should be concerned.

But as I mentioned before, why make it worse?

No offense. You stupid or something? O_o What kind of moron would manufacture his own competition (and potential predator)?

Danyal:

Domehammer:
AI should never get rights similar to humans', regardless of whether it is made to simulate emotions. It will never be able to experience the world as humans do. No matter the mechanical body that houses the AI, it will never be close enough to human to be able to gain rights.

Could you explain to me what it is, then, that makes humans experience the world in such a special way?

If anything, AI needs heavy regulation to prevent the creation of something that would get people protesting to give it rights.

The government should stop everyone who tries to simulate (parts of) the brain or programs human-like characters?

Come on, this is a gaming forum. Imagine it's 2030 and The Elder Scrolls XX and GTA XII are being developed. Characters and personalities are heavily scripted right now, but games in 2030 could try to simulate emotions and memory. If those characters are so convincing that some people feel the humanoids should get human rights, should those games be destroyed?

1. AI has the potential to be immortal, while humans are mortal, with a finite amount of time. Another big thing is that an AI body, even if made to be able to feel what a human does, could never feel pain. The AI would never tire, only run out of power.

2. If AI were to develop to the point where humans could see it as needing rights, it would be dangerous. As long as AI does not make the jump from simple AI to a self-aware individual, things will remain safe. An AI that made the jump to self-awareness would be the biggest danger since the atomic bomb. While the world has yet to burn from nuclear war, the possibility is that it could happen any day. AI could one day get to the point where they view humans as insects, or in the future we live in peace with AI and that day never comes.

I was just checking http://www.kurzweilai.net/ and saw this video;

If you don't have 7 minutes, you could watch from 3:10 onwards :)


