Game Theory: Who is Mega Man's True Villain?


Mega Man: clearly a cut-and-dried story of good vs. evil... or is it? This story of super fighting robots actually serves as a dire warning for us and our futures. In fact, the Mega Man series may not be as black and white as you think.


Great vid

A thing about the Asimov rules: They suck - the movie I, Robot showed that, as well as the webcomic Freefall, which does a lot with robot ethics and the nature of sentience.

See, if a robot isn't allowed to let harm come to humans through action or inaction - then... well

How would they deal with a suicidal human?

What if the robots discover that humans need air to breathe - and one human breathing in a mouthful of air means that other humans are denied that air?

Basically: Asimov's three laws do not take into account that humans tend to hurt each other, one way or the other.

Hell, the first law doesn't define harm.

How would a 3-law robot react to a tumblr SJW who claims to be 'harmed' by the mere presence of a cishet shitlord?

Or how would a 3-law robot react to religious instructions? In Freefall they handwave this by saying that religious texts trip anti-virus programs, due to their instructions to spread the faith above all others. What if that didn't happen? A religious robot?

That said, I had no idea that the Megaman universe fluff was like this. I had always just thought that it was Dr. Wily who made evil robots to take over the world - with Dr. Light trying to stop him.

webkilla:
Basically: Asimov's three laws do not take into account that humans tend to hurt each other, one way or the other.

I don't remember the movie, and that Freefall comic has furries, which immediately makes me not want to read it, but I'm still curious. What do the robots do when humans hurt each other? I would imagine they're programmed to detect different levels of threats, so that a human breathing doesn't raise any threat level but a human pointing a real gun at another human raises the threat level to critical, etc. But by then there are already more implications behind the three rules, and more rules would need to be added.
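Something like this, I'd imagine - a rough sketch of the graded-threat idea. All the event names, levels, and thresholds here are made up for illustration:

```python
# Rough sketch of "graded threat levels" for a three-laws robot.
# Event names and thresholds are invented for illustration.

THREAT_LEVELS = {
    "human_breathing": 0,        # normal activity: no response
    "heated_argument": 2,        # watch, but don't intervene
    "gun_pointed_at_human": 10,  # critical: intervene immediately
}

INTERVENTION_THRESHOLD = 5

def assess(event: str) -> str:
    """Map an observed event to a response tier."""
    level = THREAT_LEVELS.get(event, 0)
    if level >= INTERVENTION_THRESHOLD:
        return "intervene"
    if level > 0:
        return "monitor"
    return "ignore"

print(assess("human_breathing"))       # ignore
print(assess("gun_pointed_at_human"))  # intervene
```

The catch, of course, is the perception layer that has to turn the messy real world into those neat event labels - which is exactly where "more rules would need to be added" never actually ends.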

I think MegaMan got around this simply by building robots that are specific to one task. Woodman cuts wood, Cutman cuts... paper? Metalman has really cool music and works with sawblades, Skullman... uhhh? ANYWAYS, the point is that these robots all have one single job that they do day after day until they are scrapped and replaced by better versions. This is where Wily comes in to take advantage of their free will. I think Rock is the only one who, much like Protoman, doesn't have any specific job - he simply does as he pleases, which is what makes him the biggest threat, or maybe the world's savior, since he can choose much like humans do, except he doesn't have to worry about aging or mortality.

webkilla:
Great vid

A thing about the Asimov rules: They suck - the movie I, Robot showed that, as well as the webcomic Freefall, which does a lot with robot ethics and the nature of sentience.

See, if a robot isn't allowed to let harm come to humans through action or inaction - then... well

How would they deal with a suicidal human?

What if the robots discover that humans need air to breathe - and one human breathing in a mouthful of air means that other humans are denied that air?

Basically: Asimov's three laws do not take into account that humans tend to hurt each other, one way or the other.

Hell, the first law doesn't define harm.

How would a 3-law robot react to a tumblr SJW who claims to be 'harmed' by the mere presence of a cishet shitlord?

Or how would a 3-law robot react to religious instructions? In Freefall they handwave this by saying that religious texts trip anti-virus programs, due to their instructions to spread the faith above all others. What if that didn't happen? A religious robot?

That said, I had no idea that the Megaman universe fluff was like this. I had always just thought that it was Dr. Wily who made evil robots to take over the world - with Dr. Light trying to stop him.

I've read a few of his short stories - he revised the rules over the years to be more complex (i.e. prioritizing certain humans over others, or saving many at the expense of a few; defining "harm" to include damage to professional reputation; prioritizing the first law to overrule the others [I believe law #1 would overcome #2 in regard to suicide - it would ignore a human's order to let him/her kill themselves]; etc.), ironically creating loopholes he himself acknowledged could lead to a robot declaring itself the more competent authority and thus only taking orders from itself and other robots.

To be fair to cultural stereotyping in the Megaman games, they were made by Capcom, who are Japanese, so they stereotype everybody who's different from them and there I go stereotyping.

Now where did he find those comic pages? Are they from the current Mega Man comic book from Archie Comics?

Ironically, the Three Laws were coined by Asimov just so he could break them. It's repeatedly shown that there are many loopholes, the most basic being the definition of 'human'. If, say, you defined 'human' as just one person... i.e. me... then you could very easily have an army of killbots, because you are the only human by their definition. If it's not you, it's not human.
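To put that loophole in code terms, here's a toy sketch where the First Law check is only as good as the definition of 'human' it's handed. Everything here (the class, the predicates, the names) is invented for illustration:

```python
# Toy sketch: the First Law only protects whatever the robot's
# is_human() predicate says is human. All names invented.

class Target:
    def __init__(self, name: str, species: str):
        self.name = name
        self.species = species

def make_first_law_check(is_human):
    """Bind a harm check to a particular definition of 'human'."""
    def may_harm(target: Target) -> bool:
        return not is_human(target)  # only 'humans' are protected
    return may_harm

broad_check  = make_first_law_check(lambda t: t.species == "human")
narrow_check = make_first_law_check(lambda t: t.name == "Me")  # the killbot case

bystander = Target("Anyone Else", "human")
print(broad_check(bystander))   # False: protected
print(narrow_check(bystander))  # True: 'not human', so fair game
```

Same three laws, same robot - only the predicate changed.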

Also, the Asimov rules are basically taken from a previously established template governed by human fear. You think it's only robots that follow the Asimov laws? Look at every superhero - they themselves are more or less bound by Asimov's laws. Human beings are such that we cannot peaceably coexist with beings of equal intelligence who are more powerful than us unless they exist to serve us.

Even God is not exempt from this... think about it, really.

On the other hand... it's a scary thought, but then again... without free will, there is de facto slavery.

I'm just going to go out and say the Orange Catholic Bible is pretty clear on this: "Thou shalt not make a machine in the likeness of a human mind."

German? Where did you ever get the idea Dr. Wily was German? I mean, sure, maybe that's in the lore, but that isn't a typical German name. Maybe he's the German-born descendant of immigrants?
Also, I didn't take Megaman's comment to mean that he and all the other robots were programmed to disregard the laws of robotics but that Megaman himself specifically was somehow different than the other robots.
For instance, that he had evolved through his ordeals and experiences. Or perhaps that he was different from the other robots from the start. Or, hell, that perhaps he was a cyborg rather than a robot or a robot with organic components or something.
Dunno. Doesn't matter which, I'm just saying I didn't take his comment to be applicable to all of the robots in those games.

Wasn't it some dude named Sigma?

1. The Megaman 7 bit is a terrible translation. In the Japanese version, Megaman said absolutely nothing after Wily pointed out that he can't kill humans. The English version was changed for the same reason Kirby is made angry on American box art.

2. While not mentioned in the games, the comics establish that Dr. Light did in fact try to find Protoman and failed.

3. Rock asked to be turned into Megaman; Dr. Light wasn't originally going to do it.

4. Dr. Light only built the Robot Masters from Megaman 1 and 9. In 3, he worked with Wily to make the Robot Masters. That's it. The rest were made by Cossack and various other creators, but most of them by Wily.

And lastly, the whole "if he hadn't done X, bad things wouldn't have happened" argument is ridiculous. Who is at fault if someone gets murdered? The parents of the murderer, or the murderer? Not to mention the slippery slope it creates if we blame the creator when their creation is used for evil ends.

Honestly, one of the worst episodes of Game Theory, by far.

webkilla:
Great vid

A thing about the Asimov rules: They suck - the movie I, Robot showed that, as well as the webcomic Freefall, which does a lot with robot ethics and the nature of sentience.

Don't... associate Asimov with that piece of shit movie I, Robot. The only thing they have in common is the 3 laws. And even then, barely.

See, if a robot isn't allowed to let harm come to humans through action or inaction - then... well

How would they deal with a suicidal human?

He covers this.
The robot ceases to function. It can't save the person's life, because that would cause mental harm. But by doing nothing, the robot would be allowing physical harm.

What if the robots discover that humans need air to breathe - and one human breathing in a mouthful of air means that other humans are denied that air?

See above.

Basically: Asimov's three laws do not take into account that humans tend to hurt each other, one way or the other.

Yes... yes, he did. That is the entire point of the 3 laws in the Asimov universe!

Hell, the first law doesn't define harm.

They kinda do... If you read the Robot series, you'll have a greater understanding of this.

How would a 3-law robot react to a tumblr SJW who claims to be 'harmed' by the mere presence of a cishet shitlord?

The robot might just persuade the user to stop using Tumblr. It might 'remove' the computer. It could do all sorts of things, including ceasing to function.

Or how would a 3-law robot react to religious instructions? In Freefall they handwave this by saying that religious texts trip anti-virus programs, due to their instructions to spread the faith above all others. What if that didn't happen? A religious robot?

You have to be more specific. What kind of religious instructions?

When it comes to Asimov's 3 laws, your criticism of it was highly ignorant to the point of hilarity!
Also, there is a 4th law to the 3 laws.
It's called the Zeroth Law.
Check it out. ;)

I've read a few of his short stories - he revised the rules over the years to be more complex (i.e. prioritizing certain humans over others, or saving many at the expense of a few; defining "harm" to include damage to professional reputation; prioritizing the first law to overrule the others [I believe law #1 would overcome #2 in regard to suicide - it would ignore a human's order to let him/her kill themselves]; etc.), ironically creating loopholes he himself acknowledged could lead to a robot declaring itself the more competent authority and thus only taking orders from itself and other robots.

Kind of.
In general, the 3 laws are pretty static throughout... in the 'normal' cases.
But there are people who tamper with the 3 laws, which is another one of his plot methods for throwing a wrench into finding loopholes.
On one world, yes, they tampered with making the 3rd law trump the 2nd, or the 2nd trump the 1st (changing the order).
He uses those as cautionary tales about the 'dangers' of tampering with the 3 laws and their order.

Later, he has another world that went further. They defined only this 'modified' human race as 'human', and anyone who wasn't like them was not human... which allowed those robots to kill humans. But in general they operated the same as any other robot. I.e., they reacted the same; they just 'saw' the world differently. (This is the part in those books that yells "do you get it yet?!" at the reader in regards to social parallels.)
But yeah. Every book dealt with people either
1) Modifying the 3 laws (there's a toy sketch of this one right after the list).
2) Altering how the robots saw the world.
3) Finding a loophole within the 3 laws.
4) Convincing the reader that they found a loophole when in reality the robot had nothing to do with it.
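Just to make "changing the order" concrete, here's a toy sketch: the laws as a priority list, where flipping the order flips which obligation wins a conflict. The law names and the scenario are invented for illustration:

```python
# Toy model: laws as a priority list. Reordering them changes which
# obligation wins when two conflict. Invented for illustration.

STANDARD_ORDER = ["no_harm_to_humans", "obey_orders", "self_preservation"]
TAMPERED_ORDER = ["no_harm_to_humans", "self_preservation", "obey_orders"]

def resolve(conflict, priority_order):
    """Of the laws in conflict, follow the highest-priority one."""
    return min(conflict, key=priority_order.index)

# A human orders the robot into a situation that would destroy it,
# so 'obey_orders' and 'self_preservation' pull in opposite directions:
conflict = ["obey_orders", "self_preservation"]

print(resolve(conflict, STANDARD_ORDER))  # obey_orders: the robot complies
print(resolve(conflict, TAMPERED_ORDER))  # self_preservation: it refuses
```

Same three rules in both cases - the tampering is purely in the ordering.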

Also, Asimov loved to lie to the reader and then, at the end of the books, reveal the lie and how they fell for it hook, line, and sinker. He was a devious devil.

Skeleon:
German? Where did you ever get the idea Dr. Wily was German? I mean, sure, maybe that's in the lore, but that isn't a typical German name. Maybe he's the German-born descendant of immigrants?

In the cartoon he has a German accent. Plus he's based off of Einstein, who was German.

webkilla:
Great vid

A thing about the Asimov rules: They suck - the movie I, Robot showed that, as well as the webcomic Freefall, which does a lot with robot ethics and the nature of sentience.

See, if a robot isn't allowed to let harm come to humans through action or inaction - then... well

How would they deal with a suicidal human?

What if the robots discover that humans need air to breathe - and one human breathing in a mouthful of air means that other humans are denied that air?

Basically: Asimov's three laws do not take into account that humans tend to hurt each other, one way or the other.

Hell, the first law doesn't define harm.

How would a 3-law robot react to a tumblr SJW who claims to be 'harmed' by the mere presence of a cishet shitlord?

Or how would a 3-law robot react to religious instructions? In Freefall they handwave this by saying that religious texts trip anti-virus programs, due to their instructions to spread the faith above all others. What if that didn't happen? A religious robot?

That said, I had no idea that the Megaman universe fluff was like this. I had always just thought that it was Dr. Wily who made evil robots to take over the world - with Dr. Light trying to stop him.

You should read the I, Robot book. It has nothing to do with the movie, and it pretty much covers the situations you described.

webkilla:

That said, I had no idea that the Megaman universe fluff was like this. I had always just thought that it was Dr. Wily who made evil robots to take over the world - with Dr. Light trying to stop him.

Which is exactly what happens. Dr. Wily tries to take over the world with his reprogrammed or self-built Robot Masters, and Dr. Light and Megaman stop him; it's never been any more complicated than that. Dr. Wily is the bad guy and Dr. Light is the good guy, that's it.

This video is incredibly flawed because it makes a great many assumptions and even states outright falsehoods about the Megaman universe. For one, only in 1 and 9 are any of the Robot Masters actually built by Dr. Light; they're simply captured and reprogrammed by Dr. Wily into killing machines. Dr. Light isn't any more responsible for that than the inventor of the shotgun is for the fact that somebody decided to kill people with it. For another, it has yet to be established that Asimov's 3 laws exist in the Megaman universe in that same form, and the Robot Masters definitely don't have them programmed in, in any case.

In fact, I don't know where to end when it comes to the falsehoods in this video.

prpshrt:
Wasn't it some dude named Sigma?

That's a robot in Megaman X, about 100 years after Megaman was created.

I don't really care, to be fair. I'm in Megaman for the cool power-ups and to shoot shit with my arm cannon. Classic 2D action, great times had. Megaman doesn't even need a story.

GZGoten:

webkilla:
Basically: Asimov's three laws do not take into account that humans tend to hurt each other, one way or the other.

I don't remember the movie, and that Freefall comic has furries, which immediately makes me not want to read it, but I'm still curious.

What do the robots do when humans hurt each other? I would imagine they're programmed to detect different levels of threats, so that a human breathing doesn't raise any threat level but a human pointing a real gun at another human raises the threat level to critical, etc. But by then there are already more implications behind the three rules, and more rules would need to be added.

1) The comic isn't really furry.
It has one genetically engineered dog in it, which uses a neural design that all the free-willed AI robots also use. Officially the dog character is designated as an organic AI. It also has an alien. Other than those two, it's all humans and robots.
- It's also highly intelligent and deals more with AI/human rights and the plight of free-roaming AIs with free will.

2) What do the robots do when humans hurt each other? In the Freefall comic this hasn't actually been addressed yet, but there have been several examples of unscrupulous humans faking injury to get robots to prioritize them higher, in lieu of doing more sensible things.

3) I don't think simply adding more rules would ever work.
The point the Freefall comic seems to make is that, considering the extremely complicated code of laws that humans use to live with each other, expecting robots to function perfectly with humans using just three laws... is kinda silly.

For example, at one point in the comic a human gives a direct order to a robot (all robots in the comic have to obey direct orders from humans... a safety measure, officially) to go steal a few things, give those things to the human, then destroy itself. Later on, an advocate for giving civil rights to robots wants robots with free will to have their memories protected by law, so a criminal can't just order them to delete themselves to destroy evidence.

And back to the Megaman thing: if the 'problem' was that Wily tweaked a bunch of industrial robots to rebel against their boring but intended purposes, then... well... don't make the robots that smart to begin with. There's a reason why today's assembly-line robots aren't much more than a waldo with a tool at the end: it's all that's needed.

The moment we make intelligent robots, I can't imagine we'd be able to get them to do boring and repetitive stuff - unless we at the same time hardcode the AIs to enjoy menial labor.

The Geth from Mass Effect are an example of industrial/menial-labor bots that turned into full AI by accident - and perhaps of how not to react to that.

Robot-Jesus:
I'm just going to go out and say the Orange Catholic Bible is pretty clear on this: "Thou shalt not make a machine in the likeness of a human mind."

I knew some amazing devil was going to say this.

*gasp*

I- I AM the Kwisatz Haderach!

OT: I liked the Megaman X series more, myself. It deals with the issue a little more, especially with 4. Any robots that turn against humanity are labeled Mavericks. A group of "good" robots label themselves the Maverick Hunters and form a kind of military police force that hunts the Mavericks. In 4 there's a society of robots that forms a robotic military. They exist peacefully with humans but are blamed for a terrorist act. This leads to a war between them and the Maverick Hunters, and neither side realizes it's being manipulated. It was an interesting story, and surprisingly emotional, since the machines were essentially hunting their own kind.

Aren't all the humans dead by the time Megaman Legends comes around? I forget.

There's a peculiar trait common among Japanese-made video game series: eventually they begin to descend into bleak themes of existentialism, fatalism, nihilism, solipsism and (here comes that term) ludonarrative dissonance. In Megaman it kicked off with Megaman X, where X spends every free moment questioning everything he does, lamenting all the violence in between bouts of killing everything in sight. The series gets steadily more depressing until finally EVERYONE DIES. Period.

I don't know where this mentality comes from, maybe it's because their modern society is so disconnected and divided, maybe it's because their cultural zeitgeist has been tainted by being the only country to be hit by atomic weapons, but there's this morose sense of "everything is awful and should die" that slowly overcomes their stories.

I could make a list of all the heroes who complain about violence yet are the most violent characters in their respective series. I could also make a list of all the villains motivated solely by the belief that life is pain and therefore everything should die. A long, long list.

That wouldn't make Light the villain...
That's like saying "whoever built the concentration camps is the true villain, not Hitler"...

The entire point of the 3 laws of robotics is to show that it's pretty much useless to come up with rules of robotics.
They are not meant as a practical guide.

The only way for a robot to follow the laws of robotics is for it to be able to identify the intent of the three laws. This implies human-level reasoning at the least. Further, in order to obey the intent of the laws, a robot has to be able to deviate from the literal interpretation.

So... in the end, any system of hard logic will fail. Having the capability for fuzzy-logic decisions and the ability to rework the framework is necessary.

Light's solution is the better one, even if it allows for exploits. All systems are exploitable.
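To sketch the hard-versus-fuzzy difference (with made-up numbers): a literal reading of the First Law forbids every option in a dilemma and deadlocks, while a weighted, fuzzy-ish evaluation can still pick the least-bad action:

```python
# Toy contrast: hard boolean rules deadlock in a dilemma; a weighted
# trade-off still picks an action. Numbers invented for illustration.

# Scenario: intervening causes minor harm; standing by allows major harm.
ACTIONS = {
    "intervene":  {"harm_caused": 0.2, "harm_allowed": 0.0},
    "do_nothing": {"harm_caused": 0.0, "harm_allowed": 0.9},
}

def permitted_by_hard_logic(action: str) -> bool:
    """First Law read literally: no harm through action OR inaction."""
    e = ACTIONS[action]
    return e["harm_caused"] == 0 and e["harm_allowed"] == 0

def choose_by_fuzzy_logic() -> str:
    """Weigh total expected harm and take the least-bad option."""
    return min(ACTIONS, key=lambda a: ACTIONS[a]["harm_caused"]
                                      + ACTIONS[a]["harm_allowed"])

print([a for a in ACTIONS if permitted_by_hard_logic(a)])  # []: deadlock
print(choose_by_fuzzy_logic())                             # intervene
```

That empty list is basically the suicidal-human deadlock from earlier in the thread - the robot that ceases to function - and it's why any workable system ends up weighing harms instead of forbidding them outright.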

The opening of the video, with its heavy emphasis on the Megaman 7 ending, really puts a wrong foot forward for the theory, because that ending isn't canon and is merely an "artistic liberty" taken with the localization of Megaman 7.

In the original game released in Japan, that entire line "I am more than a robot" was actually "...", which is why the text crawls into place: the text itself was changed, but the original coding behind the display of the text wasn't. This means that despite Megaman's desire to put an end to the runaround right there, he couldn't. Megaman is then removed from the choice and conflict by the building collapsing on Dr. Wily.

So the ending is actually a localization error. Even so, the idea that Dr. Light is the real villain here isn't a new one, nor one without merit. It has been explored by portions of the fanbase time and time again, showing that there are a lot of darker points to the supposed world utopia created by Dr. Light.

Loved the video and enjoyed it though. Thanks for the show!

daxterx2005:
That wouldn't make Light the villain...
That's like saying "whoever built the concentration camps is the true villain, not Hitler"...

If there were a guy who gave an unlimited supply of funds and armaments to Hitler and said "have fun", it wouldn't make Hitler any less of a villain... it would just mean there was a guy funding and potentially encouraging said behavior.

As to Dr. Light being the true villain? Perhaps he's like a politician: highly negligent and, quite simply, incompetent...

But I'm pretty sure some version of the 3 laws must exist for master-class robots in the Megaman universe. Drones and military robots (Mets, Sniper Joes, etc.) have no such rules because they aren't truly sentient, but the Robot Masters, unless Wily alters their programming, have to follow those laws. There was even an issue of the Megaman comic where terrorists were holding a convention hall hostage and the Robot Masters were powerless to act against them because they were all human; so long as the terrorists promised that nobody would be harmed as long as the hostages cooperated, the Masters had no choice but to let them do as they pleased. Some of them even lamented not having Wily's programming anymore. It wasn't until one of the terrorists punched a guy that the Robot Masters were finally able to counterattack, with non-lethal force.

I think the reason Megaman attempted to break the laws at the end of MM7 is how long he'd been active. Like in the Will Smith movie "I, Robot," where the central AI that ran the city concluded after years of operation that in order to protect the bulk of humanity it would have to act against the wishes of, and even kill, a few of them, Megaman probably came to the same conclusion: "if this ONE man dies, millions more will be safe."

In fact, when Dr. Light finally DID create a machine capable of having totally free will, he was afraid to activate it, so he put it in a pod and had it run ethical simulations for 30 years to ENSURE it wouldn't come out evil; that's the premise of Megaman X. Dr. Light sacrificed living to see his greatest creation, his DREAM, become a reality just so he could be sure it wouldn't harm anyone.

Fox12:

Aren't all the humans dead by the time Megaman Legends comes around? I forget.

There's a Zero-focused series set centuries after X, and humans still survive.

 
