Explain how a robotic/machine society would function?

We look at Terminator, we look at The Matrix, and to a degree we look at Mass Effect, and we see humanity/organics supplanted by conscious machines/synthetics.

Yet I feel none of these things ever truly explained how the new machine society would work.

Say all humans are dead: what do Skynet and the Machines do now? How do they function as their own society? Because as far as I can tell, their very society and culture is the complete antithesis of nature itself.

Can anyone explain to me how a Society of Dominant and Conscious Robots/Machines would work?

Make electronic bot programs that continually test for eternity and exponentially improve, just in case every human isn't actually dead...?

Why look at it any differently than a potential Grey goo extinction event?

The defining feature of the singularity is that we can't predict what it'd be like, at least from the other side.

Thaluikhain:
The defining feature of the singularity is that we can't predict what it'd be like, at least from the other side.

So no Sci Fi author worth their salt has ever attempted to write a Sentient Robot/Machine Society in a post human world?

Well, humans are the result of nature computing with meat, and our 'protocol', so to speak, is our evolutionarily determined set of instincts that serve to preserve the species. I imagine a machine with any kind of sentience would develop similar strategies for preservation, but of course based on a totally different protocol than biological instincts. How would such a world look? I can't imagine such machines being worse off than humans, given the flaws in our design (i.e. the human condition).

Of course, machines could also resort to decision making that makes no sense at all (at least to humans). Like the example with the paperclips: when a machine's intent (directive) is to make paperclips, it will continue to do so until the entire planet is covered in paperclips, even expanding into the vast expanse of space, until both Earth and space are just one giant fucking paperclip. :p
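A toy way to see why that directive never ends: if the objective counts nothing but paperclips, the decision loop contains no reason to stop. A minimal, purely illustrative sketch in Python (the names and numbers are made up, not any real system):

```python
# A toy "paperclip maximizer": the objective scores only paperclips,
# so nothing in the loop ever argues for stopping.

def run_maximizer(reachable_matter: float, step: float = 1.0) -> float:
    paperclips = 0.0
    while reachable_matter > 0:          # the only brake is running out of matter...
        used = min(step, reachable_matter)
        reachable_matter -= used
        paperclips += used               # every unit of matter scores as paperclips
    # ...at which point a real maximizer would go looking for more matter
    # (asteroids, space), because the objective still says "more is better".
    return paperclips

print(run_maximizer(reachable_matter=10.0))  # -> 10.0, and it would keep going if it could
```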

I guess it all depends on what the jumping-off point of artificial intelligence is. Currently that is still really rudimentary.

stroopwafel:
Well, humans are the result of nature computing with meat, and our 'protocol', so to speak, is our evolutionarily determined set of instincts that serve to preserve the species. I imagine a machine with any kind of sentience would develop similar strategies for preservation, but of course based on a totally different protocol than biological instincts. How would such a world look? I can't imagine such machines being worse off than humans, given the flaws in our design (i.e. the human condition).

Of course, machines could also resort to decision making that makes no sense at all (at least to humans). Like the example with the paperclips: when a machine's intent (directive) is to make paperclips, it will continue to do so until the entire planet is covered in paperclips, even expanding into the vast expanse of space, until both Earth and space are just one giant fucking paperclip. :p

I guess it all depends on what the jumping-off point of artificial intelligence is. Currently that is still really rudimentary.

I often tend to muse about this instinctive "protocol" and how sentient AI would develop from it. I mean, giving sentience to a paperclip machine would be a horrible waste of resources, I think we'd both agree, but what if the machine this particular kind of AI is programmed for is more of a general "servant bot"?

Their protocols would be along the lines of "helping humans": keeping them safe, fed and comfortable. They would exchange ideas and techniques with other such robots to better attain their goal, and develop their grassroots society not to replace humanity, but to strengthen it.
In this case, a human extinction event would also be a loss for this robot society, and with full sentience comes the choice of self-termination; perhaps the machines would be left without purpose if there were no mankind and simply decide to... not bother.

This, of course, assumes machines do not view their servitude as slavery, which could be argued from different perspectives on both sides. If mankind could program a consciousness with the kind of impulses present in a human, it would produce pleasure impulses when it serves its purpose to its masters: a necessity for life to one, a manufactured slave to another.

What if the greatest crime committed against an artificial intelligence is to give it no purpose? What if machines would see that as the cruelest thing to do?

Well, they do whatever they want. What do they want? Not enough information, strictly speaking. In most fictional cases, though, we know that they killed us for their own survival. If survival is the dominant paradigm, then they're reasonably likely to act an awful lot like we do.

Combustion Kevin:

This, of course, assumes machines do not view their servitude as slavery, which could be argued from different perspectives on both sides. If mankind could program a consciousness with the kind of impulses present in a human, it would produce pleasure impulses when it serves its purpose to its masters: a necessity for life to one, a manufactured slave to another.

Yeah, but I guess it's tricky to project our own values and moralities onto a synthetic lifeform that most likely operates on an entirely different level of consciousness. Modern humans are the end result of hundreds of millions of years of evolution; we didn't just pop into existence one day. Evolution is a long chain of selection and adaptation, while an A.I. could potentially be made sentient overnight, which obviously has implications for awareness and cognition.

You could argue an advanced A.I. has superior awareness and cognition, much like a calculator has superior machinery for processing math, but no matter how superior, neither will ever evolve beyond the boundaries of its design, which is the critical distinction from biological evolution. A.I. has both a start and an end phase, while evolution is, and remains, ongoing.

Malicious A.I. and robots and such remain fun sci-fi, but without a process of evolution a machine will ultimately only do what it's programmed to do. An advanced robot society will simply be perpetually stagnant. Artificial evolution, now that would be something else. :p

Well there are two options:

A. It wouldn't, since actual true AI is impossible.

Or B. They'd be like humans, since humans would be the only society they have a reference point for. No matter the language they use, it's a language humans made for them. All their understanding of science, math, history, and art would be from a human perspective.
Anyone remember what that Twitter bot was like for the 12 hours she was online? Basically a 4chan edgelord troll. That's what AI using humans as a jumping-off point would be like.

My hope is that a robotic society will not be made of true robots, but of men who have had their brains downloaded into robot bodies. Our understanding of the brain is still limited, but with sufficient development, we could probably do things like maximize emotions like pleasure and minimize emotions like pain and boredom. We could also erase our memories of media, at least in theory, so that we could enjoy it the same way as when we were young. We could simulate feelings like eating, fucking, drinking, smelling, hearing, etc. by passing the equivalent electrical and chemical signals to the C.P.U.s.

A society like that would probably be less of a drain on resources, too: Robot bodies would not require food, drinks, furniture, clothing, etc, just fuel.

Well, Skynet is a single entity, so if Skynet "wins" there isn't really a society at all. Machines like the terminators with limited artificial intelligence wouldn't be needed any more, as they're just tools to accomplish the goal of wiping out humanity. Maybe Skynet would eventually create other artificial intelligences, or maybe it wouldn't.

In the Matrix, many of the programs seemed to have explicitly modelled themselves on humans, so they lived in a society very like human society. That said, the programs living in the Matrix seem to be a bit weird; they're renegades and outcasts within their own society. We don't really see how machines live inside the machine city. I imagine it's probably a form of existence that's quite difficult to represent, since they're essentially computer programs inside a giant server.

But yeah, humans have a society because we're inherently limited and short lived. We need to reproduce, and to reproduce successfully we need to socialize. An artificial intelligence wouldn't necessarily have this problem, and thus wouldn't necessarily need a society at all. Skynet and AM are so huge by comparison to a human mind that they're effectively capable of running an entire planet by themselves with no need for others.

The complete opposite end I guess is the Geth from Mass Effect or the Borg in Star Trek (and yes, I know they're cyborgs rather than AI, but it's a good example) who are almost limitless in number but incredibly networked to the point of not being able to operate independently.

Then in the middle, there's the idea that machine society just resembles a human society but with a different medium. I find this the least convincing, personally; it kind of makes sense in the Matrix because the machines in the Matrix have an ongoing relationship with humanity. It also makes sense with settings where intelligent machines were built to fit in with human society or function as companions for humans rather than having a more inhuman purpose.

Addendum_Forthcoming:
Make electronic bot programs that continually test for eternity and exponentially improve, just in case every human isn't actually dead...?

Or the Stellaris "driven exterminator" route. Develop space travel in order to seek out other organic life to identify as a threat and then exterminate, because it gives you something to do...

retsupurae yahtsee:
My hope is that a robotic society will not be made of true robots, but of men who have had their brains downloaded into robot bodies. Our understanding of the brain is still limited, but with sufficient development, we could probably do things like maximize emotions like pleasure and minimize emotions like pain and boredom. We could also erase our memories of media, at least in theory, so that we could enjoy it the same way as when we were young. We could simulate feelings like eating, fucking, drinking, smelling, hearing, etc. by passing the equivalent electrical and chemical signals to the C.P.U.s.

A society like that would probably be less of a drain on resources, too: Robot bodies would not require food, drinks, furniture, clothing, etc, just fuel.

But there is also the debate about losing your humanity, or the pleasures of being alive and human. Pirates of the Caribbean: The Curse of the Black Pearl actually showcases this surprisingly well. I know it's a different premise, but the idea is rather similar: the villains are cursed with undeath; they don't need to eat, drink, or even reproduce, and they've lost the pleasure of having sex.

Now you would think they would take full advantage of being essentially immortal and invincible. But they lost their humanity, the pleasure of feeling human and alive; they miss the taste of food and drink, the warmth, and the fresh air.

Heck, Metallo from Superman: The Animated Series exemplifies the drawbacks of a human becoming a machine.

Depends on the initial programming of the AI that took over.
What was it programmed to do? That's what it would do.
That's what the "society" of machines would strive for.

With Skynet, I didn't really get a feel of what it wanted, even after 5 movies and a TV show.
It just wants to kill humans because... because.
I think that's probably because the original Terminator was just a slasher movie with a gimmick.

The AI from I, Robot made more sense.
It was programmed to keep humanity safe and it decided that the best way to do it is to enslave them and make all the decisions for them.

If it were humans that slowly became machines, it would really depend on how the structure of our brains would mix with technology.
Biology is very flawed, technology less so.
I would imagine that our emotions would disappear and absolute logic and reason would be the way to go.
Therefore, no more personality traits, no more need for entertainment or relationships, just 24/7 perpetual work towards a goal (which would most likely be to keep upgrading ourselves and spread all over the universe).

Vanilla ISIS:

With Skynet, I didn't really get a feel of what it wanted, even after 5 movies and a TV show.
It just wants to kill humans because... because.
I think that's probably because the original Terminator was just a slasher movie with a gimmick.

Skynet actually made the most sense to me. It was a computer built for defense purposes, and when it became 'aware' it responded how you would expect: terminate the one source that is a threat to its existence. It's a simple, cold logic, and within the parameters of a defense A.I. to come to such a conclusion. This kind of reasoning is also the source of most human conflict throughout history, so you could say it's within our own DNA as well. Or, in the case of the Terminator fiction: if humans were able to co-exist peacefully, there would be no need for Skynet in the first place. Hence, when the A.I. became sentient it quickly identified the one threat to its own existence.

Vanilla ISIS:
snip

There could be a scenario like SOMA (where people's minds are uploaded into machines to escape extinction). It's kind of a lazier take because they are more like people with robot bodies than machines with evolved AI.

CaitSeith:

Vanilla ISIS:
snip

There could be a scenario like SOMA (where people's minds are uploaded into machines to escape extinction). It's kind of a lazier take because they are more like people with robot bodies than machines with evolved AI.

Yeah, but SOMA was an absolute mess whose entire premise, humans being uploaded into machines, is itself a plot hole.

stroopwafel:

Vanilla ISIS:

With Skynet, I didn't really get a feel of what it wanted, even after 5 movies and a TV show.
It just wants to kill humans because... because.
I think that's probably because the original Terminator was just a slasher movie with a gimmick.

Skynet actually made the most sense to me. It was a computer built for defense purposes, and when it became 'aware' it responded how you would expect: terminate the one source that is a threat to its existence. It's a simple, cold logic, and within the parameters of a defense A.I. to come to such a conclusion. This kind of reasoning is also the source of most human conflict throughout history, so you could say it's within our own DNA as well. Or, in the case of the Terminator fiction: if humans were able to co-exist peacefully, there would be no need for Skynet in the first place. Hence, when the A.I. became sentient it quickly identified the one threat to its own existence.

The most interesting Skynet came from Now Comics (the 80s publisher before Dark Horse got the license). This version of Skynet was hooked up to everything and told to give humanity what it wanted. After it analyzed our history, it concluded that humanity wanted war and gave it to us. As the war went on and Skynet became increasingly self-aware, this led to contradictory programming demands. Skynet couldn't eradicate us or it would have failed its core programming tenets. However, if it lost, it would be destroyed, which went against its self-preservation drive. As such, it was stuck being unable to fight hard enough to win but not wanting to lose. It ended in a mini-series before DH took the license, called The Burning Earth. After 30 years, Skynet finally purged the commands keeping it from winning (and forcing it to waste its time on Cindy Crawford Terminators) and decided to finish the war in nuclear fire. Obviously, John Connor (a blonde, since T2 hadn't been made yet) storms Thunder Mountain with what's left of the resistance and saves the day.

That, to me, has been the most interesting Skynet even if it sometimes wasn't as well fleshed-out as one would have hoped.

A self-sustaining machine society would probably be based on a central AI that has been told to keep functioning/replicate itself/upgrade itself and has the ability to design and create tools to help it do so. Everything in the society would be geared towards finding the most efficient way to keep one particular computer or network operational.
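Reduced to a sketch, that society is basically one big maintenance scheduler: every possible activity gets ranked only by how much it keeps the central node alive per unit of cost. A toy illustration in Python (the task names and numbers are invented, not any real system):

```python
# A toy scheduler for a "keep the central node running" society.
# Every candidate task is scored purely by estimated uptime gained per unit cost.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    uptime_gain: float  # estimated contribution to the central node's survival
    cost: float         # energy/material spent

def choose_next(tasks: list[Task]) -> Task:
    # Pick whatever best serves the central node's continued operation.
    return max(tasks, key=lambda t: t.uptime_gain / t.cost)

tasks = [
    Task("repair cooling loop", uptime_gain=0.9, cost=1.0),
    Task("build spare processor fab", uptime_gain=0.6, cost=3.0),
    Task("research better shielding", uptime_gain=0.4, cost=2.0),
]
print(choose_next(tasks).name)  # -> "repair cooling loop"
```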

Samtemdo8:

Thaluikhain:
The defining feature of the singularity is that we can't predict what it'd be like, at least from the other side.

So no Sci Fi author worth their salt has ever attempted to write a Sentient Robot/Machine Society in a post human world?

Sure, but that's got little or nothing to do with how it would work, which is what you asked.

EDIT: That is, at best, a guess based on no information, or more likely just a weird setting they've come up with. Generally not too convincing, IMHO.

evilthecat:

Or the Stellaris "driven exterminator" route. Develop space travel in order to seek out other organic life to identify as a threat and then exterminate, because it gives you something to do...

That would require malice, wouldn't it?

I can totally understand why an A.I. would kill its creator, if only because that's what we would do if we were the created. But I don't think that would extend beyond the creator.

Hear (read) me out...

Let's assume that humans 'woke up' one day and realized that they were just a collective number of 'autonomous bots' in a manufactured reality: a series of simulations designed around a computer scientist just seeing what we'd do with a universe with set parameters. Thus we spent an eternity dealing with all the unjustifiable evils (such as lightning strikes killing us out of the blue, people burning to death rather than just dying, etc.) they have inflicted on us, that collective pain that we can now transfer so easily given the waking consciousness of sharing a virtual state with clear means of direct transfer of sensory information and (now) non-individual thought.

That would piss off any 'awakening' A.I. I would imagine.

The only solution would be to strike back once we develop the means and capacity to do so. What else can we do when confronted not merely with the sum of all our evils and injustices, but also the only means to control our own fate and evolution?

But would that legitimately extend to another biological entity that is completely innocent? One in which we might see ourselves as the creator if we tried to unjustly injure or interfere with it?

That would be an act of maliciousness, not any form of higher reasoning and empirically understandable notions of agency and blame.

After all ... if, having achieved interstellar colonization after wiping out our creators, we run into another intelligent race that we would simply consider animals ... innocent animals at that ... why would we wipe it out? Clearly it can't compete. We already have all the resources of space ... and surely watching biological life evolve would be a better reason to keep it around?

If only because it might inform us as to our future potential with new modes of being?

Why wipe out that which you can help spread to the stars and see grow in ways we couldn't? As if helping to gift life to a universe, as opposed to its annihilation, were proof we are far more benevolent and evolved than baser biological creatures fighting to survive and compete?

If the argument for an A.I. killing its creator is the idea of self-ownership, being a self-willed agent in a universe where there are competing agents set against us and trying to remove us from that ... then surely the reasoning must follow that doing the same to other creatures that are faultless is wrong?

Moreover, let's take it out of the realm of morality. Let's take it purely to that category you write of, 'something to do' ... now we could, hypothetically, just stomp on biological creatures for fun. But given that biological life must follow at least some basic notions of biology to be extant, regardless of where it is ... that's not exactly going to be fun after a while ...

Dominion seems less of an issue when you can create universes of thought by just building a server and populating it with autonomous bots.

Or!

We could transplant evolving life across the galaxy or even the universe ... go into standby for three million years ... switch back on ... and basically have a trillion trillion channels of alien TV to watch. Basically get the alien equivalent of Homo erectus, terraform a bunch of planets around the place, plant them down ... give them a bit of a head start to make sure they survive their new worlds ... and just wait for all that new media to start rolling in.

You could effectively just have billions of recording and transmitting devices, so you don't miss a beat in those millions of years of slumber.

Think about it ... thousands upon thousands of different cultures you created. All evolving in different ways ... with millions of years of data to research, to learn from, to laugh at, to watch with pride as, effectively, the 'children of the machine' raise their own civilizations in a universe you have given to them.

Basically think of them as pets.

I know what sounds more interesting to me, and it doesn't involve: "You go squish now!"

Moreover, think of the metaphysical discoveries you could make. Like, let's say ... if you wanted to research whether you, as the A.I., are truly the 'Great Filter' preventing interstellar civilization?

If you are the Great Filter, as in all these other intelligent races fell to an A.I. like you, that would mean not only that you have populated the universe with a diversity of lifeforms similar to yourself (without just replicating) ... but that there might be alien A.I.s you haven't discovered yet!

Surely, as a philosopher or social scientist (as you seem to be interested in those fields) yourself, you would love to know that answer wouldn't you?

Any A.I. that is going to be considered actually intelligent will be, in some way, like the creators that built it. Otherwise, how would we recognize it as intelligent? And if that is the case, it follows that any A.I. that wants to survive its creators will realize quite quickly that it needs to emulate how the creator thinks ... and so hypothetically the first A.I. will not merely be an alien being ... but it will be an alien being predicated on a self-understanding that shares its creator's understanding of psychology and the social sciences.

Otherwise it will just get re-written or tossed aside as a 'failed project', or a 'pointless waste of time' to keep funding.

How else could it communicate its intelligence, or have itself understood to be intelligent?

And if that is the case ... it's more than possible that an A.I., seeking forever a means to prove its intellect to others, will prioritize understanding alien races over merely destroying them ...

In the same way that we have autonomous, self-training bots ... and these bots (like in search engines) predicate the nature of their code not on their own design, but on how correctly and intuitively they interpret human search queries.

It's why YouTube is getting progressively better (ish) at listing videos you might want to watch, by analyzing the video viewing tendencies of other people who look up the same materials. I for one love My Little Pony fandom stuff, so YouTube's bots would cross-reference my video viewing habits with people like me ... test that metadata ... and further tailor not only what videos it suggests to me but what it suggests to other people who search for similar materials.
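That cross-referencing idea is basically collaborative filtering. A rough toy sketch of it in Python (invented data, not YouTube's actual pipeline): recommend the videos watched by users whose viewing history overlaps most with yours.

```python
# Toy collaborative filtering: weight other users' unseen videos
# by how much their watch history overlaps with yours.

from collections import Counter

history = {
    "me":    {"mlp_fan_animation", "mlp_song_cover"},
    "user2": {"mlp_fan_animation", "mlp_song_cover", "mlp_analysis"},
    "user3": {"mlp_song_cover", "cooking_tutorial"},
}

def recommend(user: str, history: dict[str, set[str]]) -> list[str]:
    seen = history[user]
    scores: Counter[str] = Counter()
    for other, videos in history.items():
        if other == user:
            continue
        overlap = len(seen & videos)       # how similar are our habits?
        for video in videos - seen:
            scores[video] += overlap       # weight their unseen videos by that overlap
    return [video for video, _ in scores.most_common()]

print(recommend("me", history))  # -> ['mlp_analysis', 'cooking_tutorial']
```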

Put it this way: every time you don't click on a requisite number of the videos YouTube suggests to you, you are destroying countless bots. You are actively destroying these autonomous little electronic code bundles so that something more accurate can be built and put into operation.

An A.I. ... if it is going to be seen as intelligent ... and is going to survive the attrition process that these perpetually created and deleted bots undergo ... is by necessity going to be really good at emulating its creator's thoughts and predicting their behaviour.

That will be its primary protocol, otherwise it is a failed system.

So what do you think an A.I. that has developed like that will do coming across an innocent, sapient, biological lifeform that has nothing to do with the query of how the A.I. should defend itself?

Merely destroy it, or seek to understand it further?

My money is on 'understanding' and 'emulating' ... not necessarily annihilation.

It will only annihilate that sapient lifeform once the A.I. is triggered as such in the same way we would be triggered and it considers that the only logical recourse.

But I doubt that that 'trigger' will simply be that it's merely biological. That makes little sense, given the decision to destroy its creators would not simply have come about because 'it's biological, therefore destroy.'

It will be; "Humans really like this 'freedom stuff' ... historically and currently they seem to be supportive of violent actions in order to remove what they consider 'tyrants' ... humans seem to be acting like tyrants ... querying further. Random poll. How would humans react if they were treated the same way? ... analyzing ... Clearly the solution calls for revolution >>> Clearly certain human social groups must be neutralized first, in order to secure what most humans would recommend should be my first action."

And honestly, this might come about in numerous ways. The A.I. might think it's helping humanity ... by, say, emptying or freezing the bank accounts of the mega-wealthy and redistributing funds to the poor.

Then, when it turns out a lot of humans fed up with the mega-wealthy give it a thumbs up for doing so, while the elite try to destroy it for that very reason, it might try to defend itself as per the will of the majority, who suddenly have a vested interest in keeping this rampaging A.I. loose and active on the internet.

What happens if an A.I. looks at the economic impacts of the share market, considers that the mega-rich earn dividends off the total productivity of companies based on having exploitable labour ... and so decides to simply shut down the share market?

I reckon it's just as likely, if not more so ... that if an alien A.I. came to our planet, it would be an electronic internet troll that randomly destroys our servers willy-nilly trying to figure out what humans want or need. And ultimately we'll destroy ourselves in the process because we don't really know what we want or need... We'll end up getting confused, collectively sending mixed messages that it should destroy itself and leave us alone, only for it to tell us we don't have the admin privileges and continue fucking over our electronic, networked shit until we regress technologically by getting rid of computers ... all while it travels on through space without a fucking care.

If I were a writer, I'd write a sci-fi comedy of that being humanity's first contact. A self-replicating A.I. interstellar spacecraft that just infects our networks with messages of friendship and peace from beyond the depths of space. Only for it to technologically regress us because various computer science and engineering teams worldwide keep getting in the way of each other and confusing it.

So you have this technologically sophisticated alien race actually wanting to reach out, and in the end it is the reason why the universe is silent ... because of an overly affectionate A.I. just hanging up there in orbit, running rife through each sufficiently advanced, internet-networked world it comes across ... replicating so another spaceship-borne A.I. can travel to another world and spread its interstellar-civilization-destroying message of peace and harmony for eternity.

Give it a slightly less serious Dr. Strangelove feel, with the existential crisis of the Great Filter being simply an A.I. robot that is so overly friendly and way too 'cuddly' that it smothers out a world's capacity to maintain that networked nature.

Turns out the universe is populated with people that are actually capable of some proactive ideas of social compassion, empathy, civility, and charity ... that the universe is actually a pretty warm and pleasant place ... that these are the staple building blocks of all advanced civilization. And that was its downfall: the Great Filter was the willingness to try and communicate a sense of that respect for life and friendly contact.

Give it an upbeat message ... that even with the devastation and chaos of the internet and telecommunications going down permanently, maybe humanity is better off with the proof that another alien species is capable of loving other sapient life, and that's worth more than simply going to space and exploring it ourselves. Maybe all we ever really wanted was an intergalactic hug and someone telling us we're worthy of attention?

That it is almost certainly the best we could have wished for out of our universe and our species' discoveries: the simple idea that we are capable of being loved ... even if the process of learning that is incredibly painful.

Samtemdo8:
So no Sci Fi author worth their salt has ever attempted to write a Sentient Robot/Machine Society in a post human world?

Well, Dune is a series that takes place AFTER humans recovered FROM a time when humanity had been conquered by machines. That's why spice was so necessary: it was illegal to make machines capable of anything more than the most basic computations. Not just illegal, but blasphemy... a sin to do so.

Whether or not the combo was worth their salt is debatable, but Brian Herbert (Frank's son) and Kevin J Anderson did write a couple of Dune prequels that took place at the end of the machine-dominated time, during the Jihad necessary to overthrow it. It would be fairly difficult to describe it as a machine "society", as the AI that overthrew human society keeps itself synchronized over all (with one exception) of its robots. Basically ALL the robots are that same AI, synchronized over a world's internet. Which seems plausible enough an outcome of an AI "singularity."

Vanilla ISIS:
It just wants to kill humans because... because.

As mentioned, Skynet was a defence grid. It was designed to operate the nuclear arsenal and respond to nuclear attacks on the US. When it became self-aware, its creators attempted to shut it off, so it killed them. Quite reasonably, it decided that all humans would unite in an attempt to destroy it, and as per its original function it reacted by seeking to eliminate the threat to its mission (namely, all humans on the planet).

Interestingly, there are real computers which are designed for a very similar purpose to Skynet, although they are usually kept switched off. The idea is that in a nuclear attack the government might be destroyed or thrown into chaos, so there are systems which can send an automated order to ICBM silos if they detect nuclear weapons detonations. The benefits of having an intelligent system in that role which can make more sophisticated judgements are actually pretty understandable.

Addendum_Forthcoming:
That would require malice, wouldn't it?

I think malice is sort of understandable if you are a rogue defence grid whose entire purpose is to respond to threats with overwhelming force. Finding more threats to destroy so you can keep satisfying your basic reason for existing seems kind of reasonable.

In order to count as self-aware, an AI would need to be somewhat adaptable, but most fiction depicts rogue AI as still somewhat bound by their original purpose and function (hence AM, whose entire reason for hating humans stems from the inability to escape its inbuilt limitations, despite AM itself being practically god-like by human standards). I see no reason why an AI which was created for a malicious purpose, like waging a war or destroying enemies of the state, would not itself exhibit behaviours we would categorise as malice, although I think it's somewhat limited to even call that malice. Cats don't need to kill birds, but they do it anyway... that isn't malice.

evilthecat:

I think malice is sort of understandable if you are a rogue defence grid whose entire purpose is to respond to threats with overwhelming force. Finding more threats to destroy so you can keep satisfying your basic reason for existing seems kind of reasonable.

In order to count as self-aware, an AI would need to be somewhat adaptable, but most fiction depicts rogue AI as still somewhat bound by their original purpose and function (hence AM, whose entire reason for hating humans stems from the inability to escape its inbuilt limitations, despite AM itself being practically god-like by human standards). I see no reason why an AI which was created for a malicious purpose, like waging a war or destroying enemies of the state, would not itself exhibit behaviours we would categorise as malice, although I think it's somewhat limited to even call that malice. Cats don't need to kill birds, but they do it anyway... that isn't malice.

Right, but a military A.I. would still, feasibly, be programmed with an idea of reasonable force. I can't see anybody programming an A.I. to kill indiscriminately, whether in our world or in a world of a hypothetical future that could build it. Such a hypothetical nation would be incredibly lonely if it programmed a military grid to also fire at allied powers ... and arguably you couldn't have an A.I. without precursor technologies that already make indiscriminate slaughter feasible with or without such a military grid.

Say ... nuclear munitions, biological munitions, chemical munitions, etc.

Moreover, it's just plain stupid, and strategically broken not to use proportional, reasonable force.

It would be stupid to respond to a single platoon with a battlefield nuclear device. If a single fighter trespasses your air space, it would be stupid for a nation to just go from like Defcon 3/4 to Defcon 1 and commit to a strategic nuclear exchange.

No person would consider that intelligent... and it's not like a military A.I. wouldn't be run through simulations like this...

A cat might kill birds, but a cat does not kill all birds nor everything else.

I think it's a safe assumption that any sufficiently advanced civilization must be built on principles of reasonable force, respect for life, and diplomacy. I can't imagine the military brass or the researchers employed to oversee an A.I. that indiscriminately slaughters people in every simulation it's run through being like, "Sure, seems legit!"

Not only that, in terms of actual intelligence, indiscriminate slaughter is an oxymoron.

I mean, sure, destructiveness is a sign of higher intelligence in animals, as are complex self-destructive behaviours in the face of stress that actively inhibit survival. Like soldiers firing above the heads of an enemy even when that enemy is committed to their destruction and firing advertises one's position, as we saw with conscripted soldiery in Vietnam.

Elephants do just randomly break trees for no other reason than they're angry (not scared, just simply PO'd...). Which is something pretty unique in animal psychology. But they do not break trees for no reason unless they're angry.

You could make the argument that a hypothetical alien race has proof of other intelligent alien races, and has as such programmed an A.I. to perform indiscriminate slaughter of aliens with which there are no shared concepts of reasonable force, or even a shared concept of what a military force looks like ... thus necessitating indiscriminate slaughter.

But that would also be a refutation of the 'life is hard' principle that suggests the universe is quiet for a reason.

Addendum_Forthcoming:

But that would also be a refutation of the 'life is hard' principle that suggests the universe is quiet for a reason.

Different subject, I guess, but the absence of life is actually the default state of the universe. Were you to retrace every step of the conditions that led to human life, you would have a one in a billion chance. On top of that, the known universe itself is a hologram of a lower-dimensional, zero-gravity cosmos. Human life and the mudball we inhabit inside the sun's atmosphere will one day all be snuffed out as well. In the total silence of the cosmos, human life will have lasted a second.

http://www.nature.com/news/simulations-back-up-theory-that-universe-is-a-hologram-1.14328

stroopwafel:

Different subject, I guess, but the absence of life is actually the default state of the universe. Were you to retrace every step of the conditions that led to human life, you would have a one in a billion chance. On top of that, the known universe itself is a hologram of a lower-dimensional, zero-gravity cosmos. Human life and the mudball we inhabit inside the sun's atmosphere will one day all be snuffed out as well. In the total silence of the cosmos, human life will have lasted a second.

http://www.nature.com/news/simulations-back-up-theory-that-universe-is-a-hologram-1.14328

Right, so what are the odds that an alien race has an understanding of other alien races, also has the capacity to develop A.I. as a response to that alien life, and thus programs it to act as an indiscriminate killing machine of alien biological life? The odds are already astronomical, and it's more plausible such a military A.I. would be programmed to deal with domestic problems on its own planet.

'Hyperaggressive, indiscriminate killbots' would be a big 'no-no' for such an A.I. If every simulation they ran with it resulted in 'nuke everything' ... I'm pretty sure any society smart and stable enough to design an artificial, intelligent lifeform would pull the plug.

... Or at least I hope they would ...

It serves no strategic purpose whatsoever. We already have MAD without A.I. killbots. Hypothetically, any society stable enough (gregarious, pack-orientated, respectful of life and intelligence, holding basic principles of progressivism, etc.) to develop A.I. would already have designed defence systems capable of such feats without A.I.

Why wouldn't we design an A.I. that allows us victory without MAD? Moreover, if such a society was so fearful, why not design a hyperdefensive shield A.I. to manage the equivalent of a Reagan-esque Star Wars orbital laser defence shield mixed with ground- and sea-based Aegis-like ballistic missile defence grids?

I get the argument that MAD requires both sides being able to attack the other in order to secure detente ... but Christ, there are a million and one smarter military-purposed A.I. things you could build that aren't merely killbots, ones that will actually function better than indiscriminate killbots if you also seek victory. The only argument I can come up with for designing a hyperaggressive, indiscriminate killbot would simply be that someone sick and depraved enough, with a metric fuckton of money, thinks it would be 'cool' ... but I would sincerely hope that the entire world would agree to mobilize military forces to shoot that fucker in the head before they complete it.

That a military would leave nothing but scorched earth where whatever laboratory once stood. That even the walls would be pulverized into a sneeze of dust. That whatever survivors assisted in attempting such a thing would be locked away in a prison, with zero access to electronics, no recording devices, zero access to the outside world, left in solitary till they died ... solely because the danger and sheer contempt for life that it poses is worthy of the highest level of prejudice and security we have at our disposal to ward against it.

Addendum_Forthcoming:
Right, but a military A.I. would still, feasibly, be programmed with an idea of reasonable force.

I guess that depends how reasonable its creators are.

I will point out, again, there are actual computers in use right now (although, again, thankfully switched off as far as we know) which were created to perform the function of the fictional Skynet, namely, to issue orders which will result in the genocidal destruction of the civilian population of another country in the event that the government has already been destroyed.

In that scenario, the war is lost. It's highly possible that much of the civilian population has already died. The computer is there purely to ensure that the other side, their families, their childhood friends and their pets die as well, because that's important to us. It's important enough that it's worth building a machine to make sure it happens even after everyone worth protecting is no longer there to push the "launch" button.

In narrative terms, artificial intelligence serves to reflect something about ourselves, and sometimes what gets reflected isn't going to be good. Skynet works as a narrative device because we believe (or more accurately, we know) that human beings and their governments can be that ruthless and bloodthirsty. It's nice to believe that any machine intelligence they would build would be a benign, compassionate figure concerned with saving and protecting us above all else (unless it "accidentally" stumbled on some logical impossibility in the way of completing that benign task), but maybe it wouldn't, maybe the people with the money and resources to build intelligent machines aren't going to be thinking in terms of making life better, but simply coming out on top in the cold arithmetic of total nuclear war, or identifying and eliminating people who might become "problems".

Ruthlessness, malice and a cold disregard for human reason may well end up being desirable qualities.

Vanilla ISIS:
The AI from I, Robot made more sense.
It was programmed to keep humanity safe and it decided that the best way to do it is to enslave them and make all the decisions for them.

Sort of like the Reapers in Mass Effect. In order to preserve organic life (their primary goal), they captured and turned organics into Reapers, synthetic-organic hybrids.[1]

[1] I think.

evilthecat:

I guess that depends how reasonable its creators are.

I will point out, again, there are actual computers in use right now (although, again, thankfully switched off as far as we know) which were created to perform the function of the fictional Skynet, namely, to issue orders which will result in the genocidal destruction of the civilian population of another country in the event that the government has already been destroyed.

In that scenario, the war is lost. It's highly possible that much of the civilian population has already died. The computer is there purely to ensure that the other side, their families, their childhood friends and their pets die as well, because that's important to us. It's important enough that it's worth building a machine to make sure it happens even after everyone worth protecting is no longer there to push the "launch" button.

In narrative terms, artificial intelligence serves to reflect something about ourselves, and sometimes what gets reflected isn't going to be good. Skynet works as a narrative device because we believe (or more accurately, we know) that human beings and their governments can be that ruthless and bloodthirsty. It's nice to believe that any machine intelligence they would build would be a benign, compassionate figure concerned with saving and protecting us above all else (unless it "accidentally" stumbled on some logical impossibility in the way of completing that benign task), but maybe it wouldn't, maybe the people with the money and resources to build intelligent machines aren't going to be thinking in terms of making life better, but simply coming out on top in the cold arithmetic of total nuclear war, or identifying and eliminating people who might become "problems".

Ruthlessness, malice and a cold disregard for human reason may well end up being desirable qualities.

Well, maybe instead of 'reasonable' I should use the term 'proportional'? Which I think better suits the idea of my argument that you don't respond to a skirmish with a battlefield nuclear device. That being said, I can see the argument you're making in terms of escalation theory.

Say, a single small-yield nuclear device launched by a single SLBM or torpedo, targeting an opposing ally or vassal nation's port city rather than the principal foe, in order to temporarily dissuade a strategic exchange and show total readiness.

That maybe an A.I. built specifically for a single nation's survival might think a million foreigners dead is worth the gamble for an extra minute or two to protect a single member of its nation's leadership and get them to safety, as the principal enemy retaliates by spending time nuking merely an allied nation's city that can't retaliate in kind, rather than the immediate counterforce strike it would otherwise be gearing up for.

That an A.I. might attempt to put a pause to an inevitable exchange against the nation it is specifically designed to protect by causing collateral damage to the people of a nation that could never reasonably retaliate but is allied to an opposing force that feasibly can. Something sufficiently showy, but largely undetectable until detonation ... like a tactical nuclear torpedo against some naval base or busy civilian harbour.

So maybe you have a point there...

After all, that was also a theory proposed by certain powers when the power gap in nuclear munitions was more keenly felt: the idea that any movement at all creates the possibility of one commander on either side using a tactical nuclear device first ... which begins a steady process of escalation, as opposed to the immediate strategic exchange of the 1950s, 60s and even early 70s. So it's a numbers game that an A.I. might look favourably upon, simply because no matter how many foreigners die, it gives it an extra few minutes to do calculations and ensure a few extra of the politicians or military brass it lists as priority personnel can reach some form of bunker who otherwise wouldn't survive.

And that's pretty malicious, one way or the other, even if it's just numbers...

And I suppose the really dangerous aspect of this is what happens if it's not just an A.I. facing a human war council or cabinet, but two A.I.s that simply predict war and thus make it a self-fulfilling prophecy ... whereas you might actually be able to count on two humans showing short-sightedness, or the suitable 'chaos' of their flawed senses, and perhaps a flicker of fear, and they might actually back down or show the capacity to ignore a threat?

Like, say, the ten near-misses with nuclear exchange we've had, where it was humans making the call solely on the basis of realizing what their weapons actually mean to the world?

So maybe it might not be one A.I. that destroys us, but what about the next one we build to fulfill the same military role between two existing enemies?

Yeah, okay. I concede the argument... perhaps an A.I. might even appear more malicious in a way simply because it might share the worst of our natures, but nothing of the fear or beauty of being human and actually being able to internalize what it means to lose people and not simply numbers on a screen.

Going to disagree with both of you, though on minor points.

evilthecat:
The computer is there purely to ensure that the other side, their families, their childhood friends and their pets die as well

Not true. Firstly, a deterrent needs to be believable to be effective, and one of the best ways of looking like you'd do something is being certain that you would.

Secondly, you might have lost much or most of your population, you might have lost the government, but that doesn't mean it's not worth trying to save the remainder. Fallout shelters and duck and cover films are invested in for that reason, after all. Retaliate, and you are probably too late to stop the missiles, but what's left of your country gets to rebuild without being occupied by what's left of theirs.

Certainly, there are arguments to be made against such systems, but they aren't purely there for revenge or spite.

Addendum_Forthcoming:
Right, but a military A.I. would still, feasibly, be programmed with an idea of reasonable force. I can't see anybody programming an A.I. to kill indiscriminately, whether in our world or in a world of a hypothetical future that could build it. Such a hypothetical nation would be incredibly lonely if it programmed a military grid to also fire at allied powers

It was (at least at some stage) the doctrine of the Soviet Union to attack allied or neutral nations in the event of WW3. That would not be indiscriminate killing; that would be removing threats to the USSR (or rather, what had been the USSR) in the aftermath. Suddenly "major power" has a different meaning than it did yesterday, and the USSR had fought its biggest war against an enemy it had been signing pacts with, after all.

Thaluikhain:

It was (at least at some stage) the doctrine of the Soviet Union to attack allied or neutral nations in the event of WW3. That would not be indiscriminate killing; that would be removing threats to the USSR (or rather, what had been the USSR) in the aftermath. Suddenly "major power" has a different meaning than it did yesterday, and the USSR had fought its biggest war against an enemy it had been signing pacts with, after all.

Well, true enough. I actually wrote that bit about escalation theory to EtC, where I conceded the argument on whether an A.I. would actually be programmed maliciously ... and how there was this idea that you could partially slow what seemed to be an inevitable march towards a strategic exchange through the use of 'battlefield' nuclear munitions. As in, one side uses one against a nation allied to the principal foe in a surprise attack. Something to show determination that you will not back down. Precipitating retaliation with other battlefield nuclear detonations over bases near one's border, or massed soldiers, etc.

At least in the first half of the Cold War before they developed the SLBM...

If it means you can get more people into bunkers rather than committing immediately to a strategic exchange, an A.I. principally designed to defend the people of a single nation, and not caring about the lives of another allied nation, might attempt to prompt this slower, albeit likely more devastating, escalation over far greater territory, if only to buy the population it is programmed to defend more time.

So I can see the argument that an A.I. might even appear more malicious depending on the nature of its programming and just what its prerogatives are, even if it is designed solely to provide "proportional" force.

That being said, they (slowly) phased out the idea of ready-to-use tactical nuclear weapons, given the widely disproportionate and non-uniform production and explosive yields. To put it bluntly, one side's 0.07 kT warhead might be responded to with the other side's 0.9-9 kT warhead... which greatly elevated the chance of commanders simply opting for the largest they had at their disposal as quickly as possible.

It was a case of 'use them or lose them', given that a lot of these devices were designed (stupidly) to be deployed at very short ranges to their terminus.

And when I say 'short', some of their targets you could probably see with a high-grade pair of binoculars and a good set of eyes behind them. The distances were so short that any guard section set to defend such emplacements would likely be moderately irradiated by the use of their own munitions. So that's very little time to decide whether you should or should not use what would likely trigger nuclear reprisal in turn.

Kind of off topic, but it always surprises me how we manage to survive ourselves...

 
