For the past four years on this channel, I have made videos on various topics,
but by far the most popular form of video I have done centers on video game analysis.
What I think made them so popular is that instead of being your usual video game review,
I have focused primarily on narrative.
Specifically, I have focused on what I believe are the best stories in the video game canon.
For twenty-odd years, video game narratives have demonstrated a capacity to confuse,
entice, and emotionally manipulate their audience in the most positive of ways.
Most recently, in fact, I have been putting forth the argument that video game narratives have demonstrated such profundity
that they sometimes match the greatest stories told in other mediums.
To continue our collective celebration of our modern-day digital mythology, I would finally like to give full
attention to a game which, I would argue, has the most profound moment in gaming history.
I refer to the final codec conversation from Metal Gear Solid 2.
While I have done several videos on Metal Gear Solid 2 in the past,
I have never given full analytic attention to this specific moment.
The main reason is that somebody has already done so.
My all-time favorite YouTube video is titled
“The Most Profound Moment in Gaming: MGS2 AI Conversation Analysis”.
It was released almost nine years ago by its creator, LogosSteve,
and for a while I pondered how I might do a better job than he did.
However, given our current cultural context,
I believe I have a few other worthwhile things to contribute.
The advent of subjects like “Fake News”, the resurgence of rampant political correctness,
and our tendency towards online tribalism have imbued this conversation with a much denser sense of meaning.
To those who consider themselves gamers, but have never played a Metal Gear Solid game,
you are obligated to at least listen to this final conversation.
Not only will you be enlightened by its philosophical commentary,
but you will be terrified by its implications, as I have been for my entire life.
Up until the end of the game, the main character, Raiden, believed he was receiving orders
from a man named Colonel Roy Campbell, who had also been Solid Snake’s commander throughout Metal Gear Solid 1.
At the end of Metal Gear Solid 2, Raiden learned that his Colonel was in fact an artificial intelligence
manipulating Raiden to perform its political bidding.
The bidding in question refers to the A.I.’s desire to control the human race
by controlling the spread of information via the press and the Internet
but we’ll focus on that more in a moment.
During the first part of this conversation, the audience learns how this A.I. came to be,
and surprisingly, it is similar to the way human beings as a species came to be.
Unlike the origin of human life in the oceans, the A.I.’s quote-unquote “life”
had its basis in American idealism, hence the A.I.’s referencing of the White House as a symbol.
In the mind of the A.I., their purpose is to push forward American values:
things like freedom, equality, and healthy competition.
However, Raiden points out what should be an obvious contradiction in the A.I.’s thought process:
that the AI wants to take away those freedoms via censorship.
Not only are the A.I. about to argue for the ethics of controlling information,
but they are going to do so in a very condescending manner.
As we move on, I suggest that you, the listener, pay attention to how the A.I. demeans Raiden.
Their pomposity is intrinsic to the effectiveness of their future argument.
What the A.I. is trying to do here is illustrate how memes function in much the same way genes do.
If genes are, quote,
“a unit of heredity which is transferred from a parent to offspring and is held to determine some characteristic of the offspring”,
then a meme is something like an ethic, a fact, or an opinion that later generations inherit.
Like genes, memes are also subject to a process called “natural selection.”
If certain organisms are better adapted to their environment than others,
then their genes are passed on to the next generation while others have the potential to die off.
Similarly, if certain memes seemingly hold enough meaning or logic,
then supposedly they survive better than memes that do not.
What distinguishes memes from genes is the nature of the selection process.
Genes are more-or-less subject to the amoral discretion of nature,
but memes can be analyzed and weeded out via moral discretion.
However, the A.I. will now illustrate how the natural selection of memes bears a unique problem in the Internet age, compared to the past few centuries.
Prior to the Internet age, information was curated.
We read about the news in newspapers, watched the news on TV,
and it was all funnelled to us after passing through a, hopefully, professional body.
Granted, curators such as journalists,
editors, and academics had the potential to manipulate information to fit their various political biases,
but at least some form of selection process was going on.
In the age of the Internet, however, information is released so fast
that it is impossible to control the discussion.
Instead of listening exclusively to reporters, we are left to our own devices to derive what is valuable.
Except unlike the curators in the pre-digital age,
the general population does not know how to separate the wheat from the chaff.
We will focus on small, trivial issues like a celebrity’s love life
rather than macro-issues such as the cumulative monetary debt of the Western world.
We will only listen to news sources that confirm our pre-existing political leanings,
and ignore any challenging voices.
If it sounds like I am engaging in apologetics for censorship, I am not.
I would never censor information under any circumstances,
but what does one do about the destructive influence of human bias?
The A.I. goes on to propose a solution.
The A.I. argues that because they are not prone to human error
and have inner CPUs with greater processing power than the human brain,
they would know how to curate information and present it to the human race in a way that doesn’t slow human evolution.
Unlike your average political body, which might censor information that is inconvenient to its cause,
the A.I. would simply present information in the most factual way possible so that political bias could not intervene,
thus cooling tensions across the political spectrum.
Of course, if something like the A.I.’s censorship program came into place,
ethical questions would arise regarding whether or not this compromises human freedom…
specifically our quote-unquote “free will.”
Ideally, human beings should have the autonomy to make their own decisions, no matter what mistakes may follow.
In response, the A.I. argues why the problems that arise from that freedom supposedly necessitate censorship.
Here, the A.I. illustrates the various ways in which human error corrupts truth.
That error arises from an instinctual drive
to protect ourselves emotionally and physically, a drive which truth tends to threaten.
For instance, let’s say a religious fundamentalist is confronted with various scientific truths that challenge their worldview.
Let’s say the fundamentalist loses faith when confronted by those truths.
What happens psychologically is that because the fundamentalist no longer has a belief system to make sense of the world,
they are thrown into a mental pit of confusion and resentment.
That loss of structure and certainty is so painful to the ideologue
that it seems it would be better to avoid that feeling at all costs.
This instinct is endemic to almost all human beings.
This is why we separate ourselves into political tribes and won’t consider the ethical implications of our positions.
For instance, is it right to be pro-choice, or pro-life?
Instead of considering the validity of either side’s argument,
the pro-life crowd will chastise the pro-choice crowd as being on par with murderers,
and the pro-choice crowd will chastise the pro-life crowd as being misogynist, religious zealots that want to control women.
These low-resolution assumptions exacerbate tensions
between political parties and can potentially lead to violent conflict.
Most people will avoid these difficult conversations to preserve order,
but if you ignore them long enough, political positions become cemented,
and people will kill each other before they admit that they’re wrong about something.
The A.I.s then present a future where human bias corrupts the flow of information.
As we saw in the 20th century, whether with Hitler’s fascism or Stalin and Mao’s communism,
the ideologically possessed would rather kill millions in the name of their ideology
than consider the arguments of their political opponents.
We see this type of attitude in its nascent stages in the digital age.
Whether it’s calling everybody who disagrees with you a racist,
sexist, homophobic Nazi, or a West-hating communist,
this attitude is pervasive throughout social media, forums and comment sections.
This type of inflammatory language has further polarized our political climate,
and has resulted in violent clashes.
In the mind of the A.I., this is due to the fact that we insulate ourselves
by staying inside our little ponds, and casting all outsiders and dissidents as bad people.
If this continues to happen, who knows what violent, oppressive future might manifest?
Unlike the reality of MGS2, where a supposedly benevolent A.I. can make the hard decisions for us,
our reality might become a sort of Orwellian nightmare, which the A.I. wants to prevent.
I want to remind you all that Metal Gear Solid 2 came out in 2001.
This game was warning us about the advent of “fake news”,
misinformation, and half-truths roughly two decades before the issue captured our collective attention.
Worse yet, not only do the A.I.s describe a potentially dystopian future
for both our reality and the reality of the game,
but they also make us inadvertently realize that we currently have no constructive solution to the problem.
The only tactic available at the moment is for individuals to strengthen their moral compass,
and call out misinformation where possible.
However, given humanity’s inherent imperfection, even the best of us are prone to mistakes.
Worse yet, instead of bolstering the best among us despite their mistakes,
we will feverishly amplify the mistakes of our opponents for political gain.
We see this quite vividly in what has been aptly termed “call-out culture.”
For instance, it is thanks to this culture that there is no host for the Oscars this year.
It is thanks to this culture that people will be fired from their job over a tweet they made ten years ago.
A person might even be fired for a tweet that was taken out of context,
when said person did nothing wrong whatsoever.
Worst of all, it is thanks to this culture
that we will elect politicians not based on their ethics or policies,
but on how effectively they can punch the other side.
… and let me re-emphasize, we have no solution to this problem.
In the reality of MGS2, however, a hypothetical solution is presented:
because people abuse the responsibility that comes along with freedom of speech and expression,
and would rather attack people than their ideas,
a system that controls information must be set up, supposedly for our own good.
The A.I.s go into more detail regarding this solution as the conversation goes on.
However, that analysis will have to be saved for the second part of this video.
Thank you so much for watching, ladies and gentlemen.
If you liked this video and understand the importance of the message I’m trying to convey, please share it around.
Also, I implore you to check out LogosSteve’s original video, “The Most Profound Moment in Gaming.”
Nine years later, it is still my favorite YouTube video ever, and I think you will get a lot of value out of it.
If you want to see my other Metal Gear related content, please click on one of the videos you see on screen now.
And hey, if you like that, maybe consider subscribing!