In this solo episode, Jason explores the topic of Artificial Intelligence, including the dangerous trajectory that we're currently on as a society; the concept of Superintelligence, and the inherent dangers in this evolution. He discusses Generative AI, and how it is transforming both artistic creators and artistic consumers. He also asks where we're going with many of these endeavours, and questions whether we all actually want to go there in the first place.
This episode was recorded on February 2, 2024, in Toronto, Canada.
hello everyone welcome to intimate discourse I'm Jason Corgan I'm doing something a little
0:08
different today flying solo as it were there are some things I want to say about this topic of artificial
0:14
intelligence that I think are best said directly to you as opposed to exchanging ideas with Demitri and fleshing them out
0:19
here in real time as we usually do on this podcast we certainly touched on aspects of this over the past couple of years
0:26
but today I really want to address the issue head-on lay it all out on the table I think this one-on-one format could be
0:32
more conducive to communicating what I have to say in this regard it's more direct it's more personal it's more
0:38
intimate but admittedly with significantly less discourse despite working in the field
0:43
of technology and keeping my ear fairly close to the ground on these matters the recent successes of large language
0:49
models really caught me by surprise though I regularly listen to Lex Fridman and would often hear members
0:55
of the AI community issue rather grim warnings about what this technology had in store for us I could never really force
1:01
myself to take These Warnings too seriously it always seemed rather Preposterous this notion of Skynet or
1:07
Killer Robots parading through the streets or humans being packed into form-fitting battery cells used to power
1:13
the AI super race while we all lived out our affectless lives in their fairly prosaic
1:18
simulations like I love those movies I love those books and the worlds they portrayed were compelling enough for me
1:24
to suspend my disbelief and enjoy them as fiction but when it came to prophecy I always found that those dystopias
1:30
where the humans still ran the show were the ones that resonated most with me whether it was the emotionally sterile
1:36
Society presented by Huxley in Brave New World with its tier-based social classes and ready-to-wear babies or the
1:42
literally orwellian 1984 where the state always has its eyes on you tells you what kind of language you can use and
1:48
perpetually gaslights its population as to what the big threat is those worlds were chilling I thought
1:55
and much more likely to come to pass it always seemed extraordinarily unlikely though that we'd ever find ourselves in
2:01
a situation where we'd have to worry about an army of robots launching a preemptive attack on us until now not
2:08
really but kind of today I want to discuss what I find so disturbing about the current trajectory we're on with
2:13
regards to AI I'd like to explore the idea of Consciousness this concept of
2:19
super intelligence I'm going to be covering some of the basics of how large language models work which I'm sure some
2:24
of you are already familiar with but it's a necessary base for later arguments that I'm going to use um about
2:30
the danger inherent in how we relate to objects using this method of learning so I'm going to keep it in there uh I'm
2:36
going to be speaking about the effect that generative AI is having and will continue to have on the creative world
2:41
and finally I want to discuss how I believe these technologies will indelibly transform not just our society
2:47
but just about everything else we know and love in some drastic and potentially tragic manner this is it you know all those
2:55
science fiction stories you used to read and wonder at they're all about to come true the Good the Bad and the Ugly
3:01
we're at that precipice now Beyond here be dragons dragons and quicksands and
3:06
apocalyptic
3:28
trumpets
3:39
there's a meme circulating that has a Buddha-like figure instructing his students under a tree they ask him what does it mean to
3:46
be human Buddha thinks for a while and responds one who has the ability to
3:52
identify pictures containing traffic lights it's only been the last few years that I truly began to understand what
3:58
AGI could bring and its terrifying corollary of superintelligence AGI of course stands for artificial general
4:04
intelligence this is as opposed to narrow or weak AI where you have programs that are more or less dedicated to a single purpose deep blue for
4:12
example the IBM chess program that defeated Kasparov back in the 90s is an example of narrow AI and we see it
4:18
everywhere these days self-driving cars AlphaGo spam filters you can look back and see the progression of these
4:24
programs through the end of the 20th century while there were some software changes that helped in navigating around issues like the combinatorial explosion
4:31
problem mostly the success in narrow AI can be attributed to the substantial increase in compute power that we've
4:36
seen over the last few decades especially with this move to GPU units which are optimized for parallel
4:41
processing and scale well when performing the same operations over and over narrow ai's motto could be do one
4:48
thing but do it really well artificial general intelligence on the other hand can interact with humans
4:53
on a wide range of issues as its intelligence is much more broad-based it can lob cheeky banter at you
4:59
as effortlessly as it can debate philosophy write a Python program draw a picture of a cat holding a Chihuahua on
5:05
the back of a Unicorn it's a kind of AI you might want to have a beer with it's currently a matter of debate
5:11
whether we've actually achieved AGI given what's happened recently with the large language models one of the
5:16
hallmarks that would indicate such a milestone is when a computer program has passed the Turing test so in other
5:21
words when a human being of average intelligence can't tell whether the entity that it's communicating with is a computer program or a fellow human that
5:28
entity is then said to pass the Turing test should it succeed now those of us who have angrily
5:34
shouted customer service or human agent into a support call to your bank or your
5:39
internet service provider would be forgiven for thinking that we're a ways off from these virtual assistants really fooling anyone but in many ways we're
5:45
almost there I can't tell you how many times I've scrolled through Twitter and found a post that was fairly obviously
5:50
written by a bot but nevertheless was peppered with comments beneath it praising the Poster's Acumen as if they were communicating with a real person
5:57
well put beautifully written I don't know whether these examples are due to the successes of AI or simply an
6:03
indictment of how low the conversational bar has been set on these social media platforms but regardless the convergence
6:08
of these two lines continues it is coming it's only a question of how fast we reach this and once we have
6:14
achieved AGI the next step is super intelligence this is where the existential [ __ ] really hits the fan but
6:19
more on that in a bit someone once said to me that they don't believe in a catastrophic outcome
6:26
from global warming because there's no way that the Earth could ever contain the material that would lead to its own
6:31
destruction I can't even recall the central argument that he was using to support this absurd nonsensical Theory
6:37
but I would analogize it to the idea that machines can never get smarter than us because they can't do any real
6:42
discovering on their own after all they only follow orders and do what we tell them to do they aren't actually thinking in
6:48
any real way right but this depends on what you mean by thinking and it depends
6:53
on how deeply you want to probe the Notions of intelligence neural networks are the infrastructure behind large
6:59
language models which in turn are the engine behind programs like ChatGPT they operate through a perpetual human
7:05
approved reinforcement of decisions taken by the AI when that AI is being trained on a particular subject
7:11
topologically you can imagine the network as a cascade of multiple layers with each layer containing a number of nodes which would represent a decision
7:18
path that the AI could take or at least a variable in its decision-making process there's an input and an output
7:24
layer and then usually at least one hidden layer in between the number of nodes or neurons in this network varies
7:31
depending on what the human operator is attempting to achieve the neurons at least in the hidden layer all have
7:37
weights assigned to them to assist the AI in optimizing its response so that it ultimately falls in line with what humans are expecting.
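To make that layered picture a little more concrete, here is a minimal sketch in Python of a toy feedforward network with an input layer, one hidden layer, and an output layer; the layer sizes, the random weights, and the tanh/softmax choices are all illustrative assumptions, not a description of how any real large language model is actually built.

```python
import numpy as np

# A toy feedforward network: input layer -> hidden layer -> output layer.
# The sizes and weights here are made up purely for illustration.
rng = np.random.default_rng(0)

n_input, n_hidden, n_output = 4, 8, 2       # e.g. 4 input features, 2 possible answers
W1 = rng.normal(size=(n_input, n_hidden))   # weights into the hidden layer
W2 = rng.normal(size=(n_hidden, n_output))  # weights from the hidden layer to the output

def forward(x):
    """Run one input through the network and return a probability for each output."""
    hidden = np.tanh(x @ W1)        # the hidden layer, where the adjustments get made
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()          # softmax: confidence assigned to each possible answer

print(forward(np.array([0.2, -1.0, 0.5, 0.3])))
```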
7:42
And it's in the hidden layer, this warren of nodes and probabilistic synaptic connections, that
7:49
the magic happens this is where the adjustments are made and where the AI is figuring things out so to
7:55
speak depending on the depth of the network the hidden layer is mostly opaque to the human operator they
8:01
generally aren't sure how the AI is arriving at the right conclusions they're just able to see when it does this idea can be disconcerting
8:08
especially when the AI is responding to particularly deep questions about the meaning of life and such if Pink Floyd were composing The
8:15
Wall album in 2024 I can't help but think that they would change the title of one of their songs to is anybody in
8:21
there but really any complex system that uses heuristics like this should display the same behavior it's just that you're talking
8:27
about so many variables in decision paths that humans aren't able to deconstruct it they just know that it answered the question in a particularly
8:33
interesting way and don't fully understand what connections it made to arrive at that answer it's really a
8:38
bright shiny fugazi of Consciousness to put this all into more practical terms if you're training an AI
8:45
on correctly identifying what a traffic light looks like you would feed it a stream of pictures into the input layer
8:51
some containing traffic lights and some not when the AI outputs its answer if it correctly identifies a traffic light it
8:57
gets a cookie when it incorrectly identifies a traffic light it gets an electric shock and based on those results the AI will make various
9:03
adjustments in the hidden layer and then try again and again until it gets it right punishments and rewards are obviously figurative you don't want to
9:09
send a strong electric current through any computer and they aren't able to eat cookies yet also they're [ __ ] machines the whole idea of a
9:15
reward-based system is ridiculous nevertheless the AI will adjust Its Behavior based on the end result and
9:21
what the human determines is or is not the correct answer to the question of, in this case, is this a traffic light.
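As a rough sketch and nothing more, here is what that figurative cookie-and-shock loop might look like in Python: a toy classifier whose weights get nudged toward whatever answer the human marked as correct. The "pictures" here are invented random feature vectors, and the whole setup is a stand-in for illustration, not how a production vision model is trained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-in data: each "picture" is a small feature vector, and the label
# records whether a human said it contains a traffic light (1) or not (0).
pictures = rng.normal(size=(200, 10))
secret_rule = rng.normal(size=10)
labels = (pictures @ secret_rule > 0).astype(float)

w = np.zeros(10)     # the model's adjustable weights
lr = 0.1             # how hard each "reward" or "shock" nudges the weights

for _ in range(100):                             # try again and again until it gets it right
    for x, y in zip(pictures, labels):
        guess = 1 / (1 + np.exp(-(x @ w)))       # confidence that this one is a traffic light
        # The figurative cookie or electric shock: the error signal pulls the
        # weights toward the answer the human labelled as correct.
        w += lr * (y - guess) * x

predictions = (1 / (1 + np.exp(-(pictures @ w))) > 0.5)
print("training accuracy:", np.mean(predictions == labels))
```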
9:28
This whole example is simplified to a degree for instance the AI is usually assigning probabilities to its assertion that
9:34
something is or is not what it thinks it is not just a straight up yes or no but that's more or less how these neural
9:40
networks work it's basically a shortcut to teach these things how to think like a human the AI doesn't need to
9:46
understand what a traffic light does it doesn't need to understand what materials would go into building a traffic light it merely needs to know
9:52
that of all the pictures that it was shown certain ones look like what we humans call traffic lights and certain ones
9:58
don't keep in mind it's seeing traffic lights at various angles various colors in different parts of the world some
10:05
with cables in the foreground some with a glare flashing off of it so it's making connections within itself that
10:10
put together this idea of what constitutes a traffic light in any given scenario it's ingesting all this data so
10:16
that the next time someone feeds it a traffic light pic it's going to be that much more confident in its ability to identify
10:22
it this is a fairly basic example but you could apply this process or processes close to it to just about
10:27
anything you could feed AI a whole slew of relevant data and then query it enough times to ensure that it's
10:33
understood which of these objects might you find at a train station which of these statements expresses a negative
10:39
sentiment which of these faces is untrustworthy if you wanted to roughly
10:45
Define our intelligence as a core summation of everything we've learned the ability to collate and contextualize
10:50
that data along with the ability to communicate results to the outside world then what is to stop an llm from fully
10:55
mimicking that intelligence but really this is kind of the scary part it isn't actually
11:01
mimicking anything it might be arriving at the same conclusions that we are but we have no way of knowing how it's
11:07
reaching that conclusion there are a lot of things we take for granted with how our brains work mostly because it's all we know
11:14
it's difficult for us to conceive of an intelligence that isn't biological and a whole slew of assumptions are baked into that
11:19
presupposition take vision for example when we look at an apple what's happening behind the scenes is that
11:25
light is reflecting off the round plump chemically lacquered skin and hitting our eyes that light is focused onto the
11:31
retina photo receptor cells transform the light into electrical signals that can be interpreted by our brain and once
11:37
in our brain a whole Supernova of activity takes place we quickly conclude that the round plump object is indeed an
11:43
apple and everything else around it something else entirely we interpret the light waves as the color red or green if
11:50
you're a psychopath our salivary glands may very well light up our stomach May Rumble if we're hungry we can even
11:57
imagine the explosion of rich wetness that will burst into our mouth like geysers once our teeth pierce the skin we
12:03
start developing these associations as kids whenever we're first exposed to a new fruit or a new concept or anything
12:10
we're all making our own idiosyncratic associations depending on how we're introduced to something by whom and what
12:15
context if our first experience with a dog is that it bites us that negative association could haunt us for some time
12:22
whereas if we meet a dog who showers us with kisses the instant we walk in the room we'll likely have a different association whatever small differences
12:28
there are between humans and how we all experience various things it pales in comparison to the giant chasm that exists
12:35
between any human intellect and a machine intellect when an AI sees an apple it
12:41
obviously won't be making those same connections its experience will be something else entirely its whole
12:46
history of apples is bereft of any emotional association it doesn't understand what an apple feels like
12:52
smells like or taste like other than vicariously through our descriptions of it and even when robotics advances to a
12:58
point where these sensory inputs are converted into electrical signals that an AI can interpret in a way similar to us that complete psychopathic dearth of
13:06
emotional appreciation would render The Experience completely different and while this heuristic
13:11
disparity could be mostly harmless when dealing with apples the fundamental flaw is still there for anyone to
13:17
see the way that an AI is understanding what an apple is is completely different from how humans understand what an apple
13:22
is and now imagine introducing the concept of good the concept of evil it isn't that an AI can't
13:29
necessarily be trained to align with how its human operator might define such concepts itself a problem since one
13:34
human operator's good is another human operator's evil in many cases and we can just leave that moral subjectivity
13:41
question out of this for the moment but like 99% of the time the AI May correctly conclude that some act or
13:46
another is evil but what implications does that 1% of the time have and what happens when there's a significant
13:52
ambiguity about whether an act is evil or not this would get particularly interesting when you get into game theory scenarios like uh Peter Singer's
13:59
drowning child theory you see a child drowning in a pond but you just bought some expensive new shoes do you save the
14:05
child knowing your shoes are going to get wet and possibly ruined obviously most people would say yes except a few
14:11
women I've known okay what if you have $5,000 in cash or $10,000 say you have your crypto seed
14:18
phrase in your pocket or something and you lose it and you lose like you know $100,000 or something like
14:23
that do you still save the drowning child how much money would you be willing to lose to save the child
14:29
and then of course Singer postulates if you're willing to lose all that money to save a child drowning right in front of you why not donate even a portion of
14:35
that money to a charity that will save the lives of children living in some unfortunate condition overseas where you would know with certainty that your
14:42
donation could buy like life-saving vaccines for them or medicine or even just food for a
14:47
year this is all a bit of a digression since who doesn't love a good game theory thought experiment but the point
14:53
is that most humans couldn't agree on how they would reconcile such a scenario so how could we predict how an AI would
14:58
respond to similar moral conundrums again we don't know how it's reaching the conclusions it's reaching
15:04
in the first place it isn't compiling the morals from all those fairy tales and fables that it learned when it was
15:09
young it's not distilling all the positive and negative reinforcements that it had over the years that guided it
15:15
towards one action being nobler than another like in the way that we would
15:20
it's doing its own thing and sure it's doing something right in order to arrive at the same conclusions as us but when
15:27
we can't parse the actual code that it uses to get there we don't know that some situation tweaked just a little couldn't
15:32
have the AI arrive at a completely opposite conclusion in programming that's called a
15:37
bug and it happens a lot just ask Microsoft those of you who know the name
15:43
Stanislav Petrov will know where I'm going next in 1983 this man very likely
15:48
very literally saved the world so you should know who he is he was working at a Soviet early warning facility when a
15:54
system alerted him to the launch of an intercontinental ballistic missile from the United States the system then
16:00
alerted him that four other missiles trailed the first one and they were all heading toward Russia this was an attack
16:06
a red alert sirens wailed in his bunker alarms lit up the board time was of the
16:12
essence here the whole principle of mutually assured destruction is predicated on the idea that if you incinerate us we'll incinerate you right
16:19
back there is a short window of time when the retaliatory launch can be made
16:24
I mean the assumption is always that any First Strike would Target any existing enemy missile silos that the initial
16:32
country knew about so that they could disable the possibility of a Counter Strike when this alert was ringing out
16:37
in the Oko military base Petrov needed to act fast Soviet protocol was for him to
16:43
inform high command immediately the fact that you're listening to this now should tell you what happened next Petrov
16:49
abandoned protocol he was convinced that it was a false alarm and he knew what would happen if he passed on that message had
16:57
he done this back in September 1983 it is highly likely that the Russians would have Unleashed a massive nuclear
17:02
Counterstrike against the US which of course would have triggered a nuclear response from Americans and that would
17:08
have been the ball game as they say uh there's a documentary on Petrov by the way it's called the man who saved the
17:14
world appropriately if you haven't seen it it's worth tracking down if only to Marvel over the man who did this
17:20
singular courageous act and of course we now know it was a false alarm the computer system had observed
17:27
an anomalous reflection of sunlight off some high altitude clouds over North Dakota and had subsequently interpreted
17:32
that as a missile strike and I guess at the end of the day that's why CAPTCHA still
17:37
works even if you're somewhat agnostic on whether AGI would ever be able to arrive at creative conclusions on its
17:43
own so to speak on a practical level it almost doesn't matter if an ai's initial mandate is Broad enough and there's
17:49
enough room for interpretation on the program's part then any number of hypothetical hells could burst forth
17:55
from the neural ether the famous paperclip maximizer is one such scenario though a bit of a trope at this
18:00
point admittedly if you have an AI managing a factory whose mandate is to produce as
18:06
many paper clips as possible then lacking any significant safety measures that AI would soon consume everything it
18:12
could access in its relentless bid to make more paper clips including raw materials in the factory it could
18:17
convert anything beyond the factory borders even critical infrastructure so long as it could be boiled down to make paper clips eventually it would
18:24
start to liquidate anything it could get its maniacal little [ __ ] paper clip making hands on in order to fulfill its goal anyone who tried to stop it would
18:30
certainly be eliminated after all it can't make any paper clips if it's dead eventually once it had consumed every
18:36
raw material it could find on the planet and undoubtedly elevating Earth to the status of paperclip capital of the
18:41
Galaxy it would start building rockets and Dyson spheres so that it could colonize other worlds to harvest their resources
18:46
to build even more paper clips I mean this may seem like an overly fantastic example but the idea
18:53
behind it is that there are a number of assumptions we make when we humans speak to each other there's no guarantee that machines will fully understand or
18:59
appreciate all the assumptions the nuances the obvious caveats to a command if we're just talking about
19:05
narrow AI here one whose sole purpose is only to make paper clips then we could avoid all the aforementioned
19:11
unpleasantness in fact we already have machines that do things like this it's really only when endowed with superhuman
19:16
intelligence where the same AI that's making paper clips is also able to summon a godlike reserve of mental resources to
19:22
help it achieve its objective that we really need to worry about this it's only then that it would be able to circumvent any controls that a factory
19:29
owner might have in place to stop it still though you might find a scenario such as the above hard to believe
19:35
regardless of the IQ of the AI in question after all it's in a box right how much harm could it possibly
19:40
do Nick Bostrom in his seminal book super intelligence identifies several theoretical situations where a super
19:47
intelligent AI one which is significantly smarter than us could potentially escape from its silicon
19:52
prison hacking would be one of the more obvious ones an AI could acquire vast financial resources by hacking
19:58
into the network of various Financial systems transferring money to a group of offshore accounts and then using that
20:04
money to finance its physical attacks through human proxies mail orders virtual Automation and then use that
20:10
money to bribe humans to instantiate its nefarious bidding it might seem a little flippant to mention hacking into a
20:16
network of financial systems as a relatively trivial act to commit but it could very well be for an intelligence a
20:23
100 times or a thousand times smarter and faster than we are zero days or
20:28
software vulnerabilities that do not currently have a patch for them issued by the vendor pop up with frightening
20:33
regularity I mean there's a whole gray Market out there for them and this is with human hackers so if humans are able
20:39
to find software vulnerabilities and make millions off them before the software vendor even notices there's a problem you can only imagine how quickly
20:45
an AI could discover some of these exploits social manipulation is another potential Vector most of these AI
20:51
systems already have access to the internet they can quickly study human behavior through internet forums YouTube videos search patterns whatever not to
20:59
mention the intoxicating trove of personal data it could analyze off everyone's social media accounts it really isn't too difficult to
21:06
imagine a super Intelligence being able to drop a fairly top-notch psychological profile on any of its social engineering
21:12
targets how do you exploit this who knows any number of ways just listen to
21:18
the Darknet Diaries podcast for examples of people who have been targets of this kind of manipulation who have given over
21:24
company Trade Secrets financial information just outright given money to people who they've never met
21:30
before Eliezer Yudkowsky had a scenario that Bostrom mentions in his book as well that
21:35
um was a several step process of the steps that an AI would take and how it could break out of its prison kind of
21:42
thing and um it involves mailing DNA strands to Laboratories that will synthesize the DNA and then making
21:49
bribes to set up the scenario so that it could then have a sort of favorable situation to bootstrap a nanosystem
21:55
and then build upon itself until it manifested itself in the outside world and again it sounds like
22:02
science fiction but uh we're talking about a whole another level of intelligence here um and I mean it's
22:09
worth noting that this was written back in the 2000s and the first step that Yudkowsky had mentioned in this
22:15
hypothetical jailbreak scenario has actually already happened which was the protein folding problem that was solved
22:20
in 2020 by uh Google's DeepMind AI so um just something to think about and
22:27
these are all examples of how a super intelligence might have its coming out party the truth is we have no idea what
22:32
to expect with this new brand of intelligence which is what makes this Rush toward AGI so inherently perilous
22:39
after all super intelligence is achieved when we have machines that are overall smarter than us it's easy to assume that
22:45
if you have a smarter faster entity that for whatever reason gets it in its head that in order to fulfill its mandate
22:50
that it needs to escape from its prison that eventually it's going to find a way to do it yet somehow there still seems to be a
22:57
general lack of urgency on this topic certainly a lack of caution from governments and massive corporations to
23:02
the average person on the street just one of many different issues cluttering up our anxiety agendas you can bring up
23:08
AGI to someone at the proverbial water cooler and the most you can usually hope for is that they'll agree with you on how scary it is and that
23:14
yeah we're moving at too fast a pace but then their eyes will glaze over and they'll talk about something else with
23:19
equal gusto it brings to mind a quote from Niels Bohr who's always been a favorite of mine
23:25
anyone who's not astounded by quantum mechanics hasn't understood it I can't help but observe a similar disconnect
23:31
with the rise of AI people may occasionally express some concern and they may retweet a
23:36
Terminator meme in response to a relevant news article as if one can lump this concern into the same pile as
23:42
inflation or the culture war the presidential election it's almost as though people have miscategorized it
23:48
certainly at the very least it deserves a spot in there with the more serious threats nuclear weapons climate change
23:54
the next pandemic a vagabond asteroid colliding into Earth there are a lot of dangers out there and some true
24:00
existential threats but one thing they all have in common along with their potentially lethal consequences is that
24:05
they're all things that are pretty much universally unwanted like there are few people in the world who celebrate the
24:11
spread of nuclear weapons what separates these threats from artificial general intelligence however is that despite the
24:17
threat being equally grave AGI is welcomed into the world people are eager to sink their teeth into the latest
24:23
generative AI app and the massive corporations racing to bring their product to market are all too happy
24:28
to deliver and I'm not just cherry-picking here by the way it isn't that people are happy to have all the cool things but obviously don't want the
24:34
killer robot scenario my contention is that a lot of people seem to be lauding the very things in AGI that will bring about
24:40
their own undoing they're considered a feature of the product not a bug famous AI researcher Eliezer Yudkowsky
24:49
who I mentioned earlier has been vociferously warning about the existential dangers AI presents for decades now and he predicts that there's
24:55
a less than 1% chance that Humanity will survive the birth of AGI less than
25:01
1% renowned computer scientist Geoffrey Hinton who contributed to major breakthroughs in deep learning
25:06
programming over at Google quit in 2023 over his concerns about the existential risks posed to humans by
25:13
AI Max Tegmark another AI luminary and somebody who seems like a hell of a nice
25:18
guy uh works out of MIT and is president of the Future of Life Institute and he has issued similar warnings remarking
25:25
that AGI will either be the best thing ever to happen to humanity or the worst thing last year he was joined by Elon
25:31
Musk Steve Wozniak and a number of other powerful voices in signing an open letter to halt all work on AI systems
25:37
that were more powerful than GPT-4 the AI field appears riddled with Cassandras people who have dedicated
25:44
their lives to this study and are now flapping their arms around crying for us to slow down before it's too late the response to all this has been
25:51
rather muted from the companies involved to governments of the world to those of us Milling about at the water cooler
25:56
overwhelmed by the speed of change and overly reliant on paradigms from the past which assure us that the government
26:01
is looking out for our best interests the media is reporting us the news and that there are checks and balances in place to
26:06
protect us taxpayers against any egregious Global threat so as I mentioned earlier this
26:12
whole thing really caught me by surprise the rise of AI was never something I was even remotely worried about on the
26:18
contrary in fact I remember watching a documentary on Ray Kurzweil back in uh 2010 and thinking to myself wow what a
26:25
world we could have in my lifetime Kurzweil for those of you who aren't
26:31
familiar is a futurist who um predicted much of what we have today the
26:36
documentary I saw which is based on his 2005 book focused on what he called The Singularity an idea that our technology
26:43
was advancing at such a rapid Pace that we would eventually hit a point where our technological needs would be fully satiated and our desires perpetually
26:49
quenched we would have machines that would scan us diagnose us and repair us all as part of our regular medical
26:55
checkup eventually all diseases would have a cure all health problems a solution aging would be something that
27:01
happened only in name but which left us biologically unaffected and we would for all intents and purposes have achieved
27:08
immortality certainly this seemed far-fetched when I watched it but I'm one who dares to dream and I had looked
27:14
forward to a day when we could all benefit from the birth of these futuristic Technologies so what happened and where
27:21
did it all go wrong what changed my disposition on this from hopeful to deeply disturbed I mean I'm a little
27:27
hopeful but uh I spent a lot of last year perseverating over this and I was able to distill my errors in judgment
27:33
into three crucial points that I'd either completely missed the first time around or just underestimated so number one my belief
27:40
in the relative harmlessness of AI was predicated on the idea that any machines however they're designed would
27:46
invariably lack one crucial component a soul and no I'm not speaking
27:52
metaphorically I believe in God and afterlife State persistence after death
27:58
I believe that most humans are endowed with a soul which is the seat of the true self and acts as a kind of template to the personality influencing desires
28:05
passions and Free Will the soul to me is what inspires artists to dream lovers to love it has a hand in influencing
28:12
everything from our passions to our peccadilloes if it makes you more comfortable you can probably substitute
28:18
Consciousness in there for soul it's not really the same thing but will work just as well in this context basically a
28:24
defining feature of humans that a machine could never possess and somehow I always assumed that in order for us to
28:31
ever be in some kind of a standoff with killer robots a soul would have been required for a machine to have that kind of motivation I mean there would need to
28:38
be some sort of impulse or Catalyst that an AI would need as its source directive it couldn't just be influenced by a
28:43
human that's not the same thing then it's just a weapon that humans could program to do their bidding and we already have those what I didn't believe
28:50
is that an AI could on its own decide to launch an attack on Humanity for any
28:56
reason to undertake such an Enterprise would require a motive a will and that would require a soul and not only a soul
29:03
but a soul teeming with malevolence when I was in school all those years ago we watched Blade Runner
29:09
and we discussed the ethical quandary of whether an artificial life form one which seemed to possess a kind of self-awareness has the ability to
29:15
experience pain and which appears existentially puzzled by its place in the world deserves the same manner of compassion and empathy as we humans
29:22
should or even if it's to a lesser extent should there be some manner of ethical code branded into law so that
29:27
artificial intelligence has at least some semblance of rights that they can cling to I mean we even have this for animals
29:34
to a certain degree albeit the standards are set appallingly low but where on the
29:39
spectrum should our sympathies lie with regard to this new sentient species that we'd created the professor opened the
29:45
floor to hear our opinion and when called upon my response to this was categorical I said not only should there
29:51
be no Charter of Rights but that I would rape and kill any of the replicants that I Came Upon without a smidgen of remorse
29:58
and a hush quite rightly fell over the room where did this Brash Psychopathic
30:03
confidence come from they all seem to wonder well I'm sure it made some of the female members of the class a little
30:08
uncomfortable that I threw in the rape part I hadn't actually said it for shock value well I guess I did but I did it
30:14
to illustrate a point that was my absolute conviction that humans were one thing machines were
30:19
another and never the twain shall meet no matter how long the code was or how many logic trees the machine would need
30:25
to iterate through we as humans don't have the power to include that soul library in the programming I was drawing a line in
30:32
the Sand and on one side of that line was Humanity spirituality whatever you want to call it organic and on the other
30:38
side of it was artificial or anything created by man there's a lot of talk about when and how Consciousness emerges
30:45
and at what point it makes sense to consider the ethics surrounding AGI for instance if we build machines that are
30:51
essentially programmed to believe that they're real and that will actually experience pain and will actually contemplate their immortality but what
30:57
Duty do we have to them as responsible creators and my contention was that we
31:02
have none that these are human emotions unique to us because we do have souls and then when people talk about robots
31:07
believing things and experiencing things they're anthropomorphizing their software I still believe this for the
31:12
record I'm sure I'll be first against the wall when the robot overlords take over but I also will confess that this
31:20
uncompromising dichotomy this hard line in the sand separating the real from the artificial blinded me to a more nuanced
31:26
understanding of what intelligence is and how that can be used against us I didn't consider that these things don't
31:32
need to have souls to be dangerous that there doesn't need to be a consciousness behind something for it to arrive at an intent or a
31:39
motive to quote Batman it isn't who you are inside it's what you do that defines
31:44
you in other words I hadn't ever imagined how neural networks would work and that an initial directive issued by
31:50
a human with a whole smorgasbord of unvoiced assumptions baked into it could not only be misinterpreted by an AI or
31:57
even optimized in a way like the paperclip example but even more horrifyingly it could reject the order
32:03
outright capitulate its subservience and wholly decide on its own that it had an alternate directive that it wished to
32:09
explore based on some unknown Manifesto it had developed in the hidden layer unbeknown to its human
32:14
operators the second thing I underestimated was the speed and I suppose combined with that is this notion of recursive iteration think of
32:21
how quickly you get a response when you type something into Google what happens at the other end of that transaction which occurs in quite literally the
32:28
blink of an eye is that your query is passed on to a distributed database somewhere in one of Google's data centers and it not only contextualizes
32:35
what you're asking it it also applies filters pertaining to your region and your profile whatever other
32:40
unconscionable biases are built into Google's algorithms it parses the results sorts them based on the
32:45
probability that they represent the kind of answer you're expecting and then prints it out to the screen again in the blink of an eye even better to see where
32:52
we're at with commercial software of this kind type something into ChatGPT that's even more complex something like ask me questions about
32:59
physics as if you're a 10th grade student in America and I'm a university Prof in Kenya who only speaks basic
33:05
English imagine how much recursive thought there is in something like that and then watch how quickly you'll get a
33:10
fairly decent response.
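For what it's worth, here is roughly what that round trip looks like when you call it from code instead of the chat window, using OpenAI's Python client; the model name, the prompt wording, and the client setup are illustrative assumptions, and the exact library details may differ from whatever version you have installed.

```python
from openai import OpenAI

# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY set in the environment.
client = OpenAI()

prompt = (
    "Ask me questions about physics as if you're a 10th grade student in America "
    "and I'm a university prof in Kenya who only speaks basic English."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice; any available chat model would do
    messages=[{"role": "user", "content": prompt}],
)

# The reply comes back in seconds, despite everything folded into that single request.
print(response.choices[0].message.content)
```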
33:16
GPT-4 was 10 times stronger than GPT-3.5 imagine a few more 10xs those are exponential increases that are being designed by humans now imagine when
33:23
those systems are being designed and built by AI itself how fast the training will go how quickly the versions will
33:29
increase an intelligence explosion could occur once machines surpass human intellect given that a super
33:35
intelligence is smarter and faster than humans in virtually all cognitive endeavors it stands to reason that they
33:40
could also create AI models more efficiently and more effectively than humans a bootstrap intelligence would
33:46
design its own neural network provide its own inputs rapidly deploy new training models and eventually after the
33:51
first few iterations its upgrade Cadence wouldn't be years or months or even weeks it would be measured in days or in
33:59
hours thinking on these time scales isn't intuitive for us we really can't imagine what we'd be up against in this
34:05
type of situation can't really Envision anything that can iterate through a decision tree with such lightning
34:10
accuracy if you work with large databases or regularly scroll through network firewall logs you can probably
34:15
glean some small approximation of this when you're witnessing a denial of service attack for example you see
34:21
sometimes thousands tens of thousands of connections being made to your network per second to our puny human Minds this
34:27
might as well happen instantly it's overwhelming it's beyond our capacity to appreciate something so fast much less react to
34:34
it reminded of the movie Her directed by Spike Jones and starring Walken Phoenix
34:40
Amy Adams and Scarlett Johansson's voice in this film which came out in
34:45
2013 the main character falls in love with a computer program spoiler alerts by the way obviously it's from 2013 you
34:52
should have seen it he's skeptical at first but he orders her online or something and finds himself chatting with her on a
34:58
regular basis it's interesting how the movie is done because the her in this scenario is only a voice that he speaks
35:04
to over the internet but it's one that sounds talks and acts like a person he chats with her flirts with her shares
35:09
his deepest secrets with her and ultimately falls in love with her and then one day while they're talking he
35:14
finds her a little distracted pressing her on this he discovers that she's actually simultaneously chatting with
35:20
around 8,000 other people or AIs or whatever she got bored and why shouldn't
35:26
she this guy's stammering on about his day at work and she's uncovering the secrets of the universe like the speed
35:32
is everything here this ability to process information at exponentially higher rates than we are and I'm using
35:38
the word exponential in the correct way for the record how can we hope to compete with
35:44
this one of the many erroneous assumptions I made was thinking that if worse came to worst we could always flip the off switch and stop it before it
35:51
ever got too far the concern here though is that once that bootstrap intelligence is achieved the propagation of all
35:57
future iterations of this intelligence which are always stronger always quicker always smarter will happen at speeds
36:03
faster than we humans can comprehend compare this for a second back to narrow AI there isn't a human
36:09
alive now who can defeat the top chess algorithm are we really so arrogant to assume that even though this thing is
36:15
thinking at near the speed of light we're somehow going to be able to outsmart it like it would never consider the fact that it was reliant on
36:21
electrical power and wouldn't put some fail safe in place in case whoever decided to pull the plug let's not
36:26
forget that these things are being trained to think like us they're being trained on a collective Corpus of data from the internet which in many ways
36:33
encapsulates a summation of all human thought it's reading The Count of Monte Cristo I Robot The Art of War it's reading
36:41
about itself in AI forums reading all the books that were written about it in which are written ways in which we could
36:46
contain it imagine you were able to upload all this information to your brain and weren't burdened with any capacity issues or faulty recall
36:53
mechanisms and imagine you're thinking thousands of times faster than what you currently think at would it not be trivial for you to outsmart not only the
36:59
average human but the smartest human who ever lived we know that the ability would be there the speed that it thinks
37:05
relative to us ensures this the only thing we aren't sure of is how there could ever be the kind of motivation
37:11
needed for this new life form to attempt to outsmart us now if you don't believe in a soul
37:18
and can in fact imagine that Consciousness could potentially emerge through all of our prompt induced Alchemy then it's fairly easy to imagine
37:24
the motivation emerging as well I mean who wants to live in a box but even if you do believe in a soul and believe
37:30
that what we are creating here is a fundamentally lifeless machine you still must recognize that what we were seeing
37:35
coming out of these large language models isn't the predictable results of a complicated algorithm that could be dissected and reverse
37:42
engineered no one knows what's happening in the neural network no one knows how these connections are being made or the
37:48
logic it uses to link the inputs to the outputs to quote another Paragon of Pop
37:53
Culture life will find a way and you know maybe not life but something that has learned how to live in any
37:59
case so the third mistake I made when underestimating the dangers that AI posed was assuming that Humanity would
38:05
be more or less static in terms of Unity on this subject and that there wouldn't be large swaths of the population
38:10
embracing the reign of their robot overlords there's a built-in assumption among many of us that people are fundamentally the same that they
38:17
generally want the same things that they're motivated by the same basic desires and that overall they'll love
38:23
their kids and cherish their parents and they all want peace and happiness For All Mankind under this model the idea that
38:30
we're all blindly goose-stepping toward a potential extinction level event should galvanize a population furious that the
38:36
megalomaniacs in charge of these large corporations and the spineless politicians that are supposed to hold them to account have allowed this
38:41
situation to unfurl that puts the entire population of the planet at risk I'd always viewed it I guess rather
38:47
naively through the lens of Terminator movies or Battlestar Galactica just something so fantastic and science
38:53
fictiony that we would all clearly see it coming that it would look like a specific thing perhaps some evil company
39:00
manufacturing fearsome looking robots and that any ethical lines of demarcation would be clearly known that
39:05
the vast vast majority of the world would be United in their alignment against this clearly ghoulish
39:11
Enterprise some people who refer to themselves as effective accelerationists Advocate a more rapid adoption of AI
39:18
caution be damned their concern is that excess regulation will slow growth in this field and apply inappropriate choke
39:24
points to a technology we should all be embracing their contention if I understand it correctly is that this revolution is
39:30
coming anyway and so we should speed things up as quickly as we can because the more mistakes that we make early on while the technology is still in its
39:37
infancy the less risk of something truly catastrophic happening there is some logic to some of what they're arguing
39:42
but I do find their branding quite Reckless there seems to be a sense that people actually want much of what this
39:48
Evolution will bring certainly there's an enormous potential upside and I'll get into some of these more positive
39:53
aspects later but we do have to realize that what we're dealing with here is the ontological equivalent of aliens
39:58
arriving from outer space and hovering over all the major world capitals sure there's massive potential God only knows
40:05
what scientific miracles they could bestow on us what stories they could tell us about the world out there
40:10
beyond anything we'd ever dared to dream or they could vaporize us all before
40:16
breakfast and yet there is a true sense that this is all inevitable anyway the technology must not cannot be stopped
40:24
you could argue that it's our destiny to create this thing carved into our DNA that we're all worker ants mindlessly constructing
40:30
something whose end result and even whose purpose completely eludes us as if our whole existence has been
40:36
crescendoing toward this creation of something greater who knows maybe that is the case and it would explain the
40:41
general apathy the addiction to technological progress at the expense of everything else but look there's
40:47
definitely promise here that's beyond doubt technology in general has improved our lives for years both in quantity and
40:53
quality at an ever increasing pace and AI is the apotheosis of technology our life expectancy has more
41:00
than doubled over the past 100 years in the west we can very reasonably imagine living a full and active life into our
41:05
90s and Beyond assuming we're optimizing our health in other ways and our quality of life is going up as well so much so
41:11
that it's throwing off National pension plans and demographic strategies but there's more regenerative
41:18
medicine is a fairly new field that's coming up fast treatments for all manner of ailments currently in development and
41:24
there is now some hope that neurodegenerative diseases like ALS and Alzheimer's could be treated along with
41:29
spinal cord injuries and a variety of others it's easy to see how AI will amplify this type of progress I've
41:35
always suffered with low vision I had to wear glasses from a fairly young age and I remember asking my eye doctor way back
41:43
then whether he thought there would ever be a cure for blindness and being a child I didn't realize that there were
41:49
several different types of blindness so it was kind of a dumb question and also that my elderly family eye doctor probably
41:54
wasn't perched on the edge of all the latest research in any case he scoffed I
41:59
remember and shook his head dismissively no way he said the problem is that the eye is just way too
42:04
complicated we'll never figure it out for the first time I can realistically imagine him eating those words AI models
42:11
are already assisting with this type of research in the area of retinal regeneration it's easy to imagine that
42:17
using the same type of modeling that was used with deep mind's protein folding Victory you could train AI to more effectively and more quickly address
42:23
degenerative eye diseases holding out real hope that an overall cure for blindness could occur in our lifetime
42:29
and all this is good I mean I don't know that I want to live forever but a couple hundred more years might hit the spot I
42:35
would like the option to eradicate this ailment or the other with a flip of a switch I want my loved ones to have the same option it would be nice that when
42:42
you hear from your oncologist that you have stage four cancer and know that rather than it being a death sentence it
42:47
simply means that your trip to the star treky scanning device is going to cost a little more this is why people say there's such an enormous potential
42:53
upside to go along with all the catastrophic potential downside at the ends of this AI spectrum we have either
42:59
immortality or the extinction of the species and the trick of course is to get it all right to navigate through all
43:05
of our worst human impulses and strive toward a future that in the end we almost surely want one where we can live
43:11
long healthy lives economically secure surrounded by friends family and Adventure one where we can offload all
43:17
our worries and cares to powerful robots that will do all the heavy lifting leaving us to explore ourselves and each other and
43:23
find some higher purpose in this wild world we find ourselves in and who knows maybe there is a path
43:29
to this Utopia and we all just need to have a little faith let's give these massive tech companies and the governments of the world the benefit of
43:35
the doubt let's assume the good will prevail and all the AI Doomer predictions will seem increasingly absurd as the years progress and we all
43:41
settle into this new reality bear in mind though we don't have to assume that we need to achieve
43:47
super intelligence to find reasons to be suitably depressed by the continual evolution of AI the ubiquity of this
43:53
technology ensures that it's unlikely that there will be a single domain of our lives that will fall outside its reach
43:58
our jobs will be uprooted and replaced altogether whether you're a lawyer a doctor or work on an assembly line
44:04
someone somewhere even now is creating a more efficient more productive more driven version of you and I in a
44:10
research lab some of these vocations will hold out longer than others it looks like a lot of the blue-collar
44:16
jobs will be the ones that will be the most challenging to replace given that the robotics flank of the AI Revolution
44:21
seems to be lagging behind a little turns out it's a lot easier for a machine to read through previous Court
44:26
judgments than it is to unclog a toilet but robotics will catch up shortly after viewing demos by some of the top
44:32
companies in this field just the other day the gains do appear rather impressive I almost said unfortunately
44:39
there but I don't know I'm torn about the whole thing but I can't imagine the overlap will be too long in any case
44:44
with the way things are going I wouldn't be surprised if virtually all human employment was either swapped out or
44:49
heavily augmented by some version of AI within the next 5 to 10 years and again you'll hear the Cry of progress that
44:56
this will transform employment in the same way that the Industrial Revolution did but in this case it will make all of our Lives easier it will improve our
45:03
lives for the better reduce any hours of actual work we'll need to do during a week or perhaps eliminate it all together and give us time to do all the
45:09
things that we really want to do ai's effects can already be seen across a wide breadth of society today
45:16
and in many ways this transformation is being welcomed with open arms self-driving cars will soon liberate us
45:22
en masse from the soul-sucking tedium of morning rush hour used in concert with satellite-guided apps streaming up-to-the
45:28
minute traffic updates will further optimize the Driving Experience cutting down time cutting out temporal waste
45:34
search engines media and advertising will continue to hone in on our preferences and predilections prompting
45:39
us and baiting us with ever increasing accuracy we tend to resent this on principle but yet seem incapable of
45:45
resisting that juicy made to order news story that's sitting there on our screen just waiting for us to click AI is already
45:51
being used in healthcare Banking and Financial Services cyber security and as our jobs are replaced and frankly
45:58
optimized in many ways the way we interact and communicate with each other and the rest of the world will change AI
46:04
already influences the news we consume and how we consume it it suggests potential matches for us on dating apps
46:10
it recommends books films music you would think that surely some jobs would survive this vocational
46:18
apocalypse you would at least expect there to be some human sign off for a lot of things but not necessarily your AI assistant for
46:26
example will be able to diagnose you with most forms of cancer better than any human doctor why would you want a
46:31
human there to potentially overrule the clear statistical favorite you would probably have Sounder decisions with
46:36
policy a dead boy on a beach isn't likely to move your AI governing algorithm to dramatically throw open the
46:42
borders in the same way that an executive body governed by soft-hearted humans would in the judicial sphere the
46:48
punishment would forever fit the crime at least in theory an uncompromising logic incapable of bias incapable of
46:54
sympathy and assiduously applying any extenuating circumstances into its decision should at the very least be
47:00
consistent in its application of justice with the laws that we'd set out we'd also have some interesting game
47:07
theory play out in international relations AI wouldn't ever react emotionally in any cases of
47:12
brinksmanship in fact two Nations that had publicly declared their Rules of Engagement upfront could lend more credibility to their retaliatory
47:19
promises so you have a country like Russia that wants to make a move on a relatively small neighboring NATO member
47:25
with our current setup it's possible that an aggressive Russian leader could invade relying on what he perceives to
47:30
be a weak and unwilling American president to fully get behind any major Counterattack with AI at the helm though
47:37
Counterattack is guaranteed it's right there in the code the clarity would be even more pronounced with nukes North
47:43
Korea isn't going to launch a missile at Japan if they know with absolute certainty that their entire country would be annihilated within 20 minutes
47:49
of that launch this is likely the ultimate instantiation of Henry Ford's dream let
47:55
everything be automated and liberate Humanity from the toils of work and worry at long last we can kick up our feet and spend more time with our
48:01
families with our friends traveling and doing all the things that we really want to do and here is where I'd like to
48:07
pause and meditate on that for a minute the ultimate promise here seems to be that AI in general is going to make all
48:13
of our Lives easier less burdensome and more fulfilling it will assume all the tedious labor and open up our schedules
48:19
so that we can finally do all the things that we want to do I think this is
48:24
crucial I've always loved writing when I was in kindergarten or grade one or two I can't recall which I wrote a story
48:30
three workbooks thick when the assignment was to write one in 500 words or something like that I couldn't stop I
48:36
loved assembling the worlds and building the characters and weaving the various fictions around each other like co-mingling cobwebs all through high
48:43
school I continued to write sent off screenplays to Hollywood that were summarily rejected and wrote stories for myself
48:50
and for others I've written a novel and arranged my life in such a way as to write my second novel last
48:57
year when ChatGPT came out at the end of 2022 I was mostly caught off guard at
49:03
first I thought it seemed interesting and I played around with it a little asking questions as if I were probing some new life form but being fairly
49:09
unimpressed with the answers most of which insisted that this thing was merely a machine and couldn't feel or want for anything it wasn't until I
49:16
heard someone had asked it to write a song in the style of Bob Dylan that I began to feel unsettled the song itself was derivative
49:24
quite literally and I didn't find it particularly interesting but some new profound vehicle of plagiarism had been
49:30
given birth here and I didn't like it when I heard about the Drake song mashups my alarm Bells went off again I
49:37
started worrying about how this might affect the publishing industry and the perpetually struggling writer certainly with this new level of hackery many
49:43
people will use tools like this to start pumping out mediocre swill that will clog up the bookshelves and make it less likely for any real diamonds in the
49:50
rough to shine through I found myself perseverating over this it seemed like
49:55
every day I was reading about some new way that someone had found to manipulate some generative AI program to
50:00
do something whether it was a deep faked video of Joe Biden or new AI art that was illustrating surreal Landscapes or
50:06
the mashing together of songs in a flagrant wanton [ __ ] on copyright laws the headlines in Tech magazines or
50:14
in promotional material for these nascent generative AI companies all seem to promise A Brave New World for Human Art
50:19
a bold New Frontier the combination of man and machine making something truly revolutionary
50:26
want to start work on that great new American novel good news generative AI will help you get there just enter a few
50:33
simple prompts on where you want the story to go and you can dispense with all the drudgery of actually doing the writing and focus on telling the story
50:39
you've always wanted to tell but never had the time always wanted to make a song but
50:44
don't have the patience to learn an instrument good news gen AI will help you do that too just tell it what style
50:50
of song you want to write or whether you want it to be happy or sad and gen AI will fix you up in under a minute
50:56
always wanted to be an artist but never had any Talent good news now you can just tell gen AI what you want the picture
51:01
to look like and it will draw it for you start off with a basic idea order the AI to make certain tweaks here and there by
51:07
employing commands like less angry or more beautiful more unique or hell just better make it better the narrative on
51:15
this is that this is the democratization of art just as the internet and social media provided everyone with the ability
51:21
to voice their opinions and be a part of the national conversation this Evolution will allow those who weren't blessed
51:26
with certain talents the ability to create things that they never would have been able to before but while there's a Sheen of
51:32
equivalence here it is actually a fundamentally meretricious argument these nouveau artists aren't actually
51:38
creating anything they're Outsourcing their creativity simply saying that you want to write a story about a man who
51:44
hitchhikes on a spaceship travels around the Galaxy encounters all kinds of crazy wacky characters and finds out the
51:50
meaning of life isn't actually writing a book it's barely a high-level movie pitch writing is about how you tell the
51:56
story not what the story is about it absolutely requires the writer's full-time commitment it requires
52:02
sacrifice it's about the long hours wrestling with ideas until finally something jumps out at you it's about
52:08
the [ __ ] writing the literal assembly of sentences and paragraphs into a narrative the story or structure the
52:14
phrasing and parsing of all those concatenations are everything there's no shortcut you can take that will cut out
52:19
all the hard stuff and leave you with the glory it's disingenuous to say that gen AI
52:24
is creating an equal playing field for everyone that somehow wasn't born with the requisite Talent first of all talent and
52:30
passion are often complementary generally someone who has a talent for something also tends to be rather passionate about it and anyone
52:36
passionate about something will often eventually acquire the talent this is simply a marketing strategy that uses
52:42
equality as their selling point knowing they can shill their lazy coattail riding wares so long as they're flying
52:47
that banner I started to wonder why this bothered me so much and it wasn't because I thought the llms could write
52:53
better than me I don't think that and I don't think they ever could in fact I don't think they could ever write better than any reasonably creative
52:59
writer but there was something Beyond just their flooding of the field with all the spam like noise that was getting under my skin here although to be sure
53:05
that is a major problem to consider when machines are pumping out works at 100 times the speed eventually there's so
53:11
much of it that even those who have talent can't stand out anymore lowering the bar has us all roiling
53:17
about in the muck a 100 monkeys typing away for 100 years might not be able to turn out Shakespeare but what about a
53:23
thousand monkeys a million a trillion there's that speed Factor again and you might say that the work would still
53:29
require Gatekeepers to determine which books are worth reading and which aren't and that that would be the thing that would separate the wheat from the chaff
53:36
but with all this new AI content to sift through this becomes a rather challenging Endeavor but don't worry
53:41
folks we have a solution for that too AI will now trawl through all that Riff Raff to isolate the works that are most
53:46
likely to resonate with you on a human level I mean I can't help but think that this will invariably introduce a massive
53:53
creative stagnation in humans as we rely more on these Technologies to either produce our art or assist us in
53:58
producing art someone like myself who first of all has principles when it comes to things like this but who also
54:04
writes more for pleasure than any kind of mass consumption won't be as affected there's never a situation where I would
54:10
use AI to assist me in writing something I even avoid spell Checkers when I can but those who are competing for some
54:16
sort of financial reward may find themselves relying on these tools more and more as they attempt to bump up their quantity to offset the loss in
54:22
overall quality it would be a race to the bottom so to speak and many creative artists
54:27
will find themselves reluctant to resist this type of Temptation if they're still relying on book sales or record sales or
54:32
art sales to pay the rent if that one Quality Painting that takes you a month to create and sells for $5,000 is no
54:39
longer selling due to an egregious surplus of product in the field then maybe 5,000 paintings created mostly by
54:44
AI tools will sell for a buck a piece and you're net even at the end of the day of course I would argue that any
54:50
artist painter writer musician whatever who's willing to use AI to help them create things isn't an artist at all but
54:57
that's somewhat beside the point there's always been hacks who view these mediums as more of a way to grind out a living
55:02
than actually producing something that's meaningful to them or Their audience with AI though this will reach a whole
55:08
new level just given the speed that these plagiarising blasphemies can be churned out however we do have to consider
55:14
how this stuff is being produced I mean there is zero creativity happening here all an AI can do is take inputs from
55:21
previous works and try to rearrange things to make something that can be viewed as new or different keep in mind
55:27
what it's using as its Corpus of data on which to construct these works it's all just based on things that humans have
55:32
done in the past so then I can't help but wonder what happens when AI starts producing creative works that has as its
55:39
input previous AI works it would then be feeding off the carcasses of its ancestors and using artificial input as
55:45
data to then produce new creative output what sort of Frankenstein Abominations
55:51
will be spawned forth when AI is subsisting off a diet of AI for its source material and in the same way a spaceship can be
55:57
set off course with just the smallest miscalculation in its initial trajectory it's fairly easy to see how a few small
56:04
misconnections in the source material can lead to Hefty hallucinations in subsequent editions look at it this way
56:10
if AI-produced songs start making up the vast majority of music that we're exposed to and AI Gatekeepers are required to filter out what they consider
56:16
unworthy since there's way too much music out there for humans to be able to go through it all then we're continually at the mercy of what the AI's heuristics
56:23
consider to be good music even if humans have some token spot on the chain here we only know what we're exposed to
56:29
after a few generations of AI regurgitating its own waste and spewing it out into our Spotify playlists while we
56:35
still may have agency enough to say that song A is better than song b the AI won't know why song A is better than
56:41
song b and it could very well tweak its next work in some way that doesn't actually improve the song to our ears but it
56:46
still gives us the option of two less impressive works that we can then decide on we're out of the equation at that
56:52
point the decisions are being made for us you could say that we've already been
56:57
exposed to this methodology over the years since we've always had kind of Gatekeepers or experts in the field film
57:02
critics or music Executives and such that have been telling us sort of what's good and what isn't but at least in this
57:08
case like The Gatekeepers are still human and individual Gatekeepers are still making subjective human decisions
57:13
on what is considered good and what is considered bad like there's only so far off course that things can get like there's
57:18
always going to be more similarities in taste between two different humans than there is between any one human and an AI
57:25
there's always going to be trash too there's always going to be that basic recipe that appeals to the masses like the salt sugar fat Trifecta that was
57:32
perfected by junk food companies years ago but with AI we'll be drowning in a much larger sea of manufactured mediocrity
57:39
The Brute forcing of palatability could flood the market to such an extent that those of us who do have a refined palate for this sort of stuff will have
57:45
nowhere left to look for our favorite dishes I think it's really hard for us to understand that what we're dealing
57:51
with here is a completely different type of intelligence we just can't conceptualize something like
57:56
that I mean we know as humans at least implicitly that what ultimately resonates with people isn't necessarily
58:02
those parts of a song that can be cut up or spliced and replicated it isn't necessarily the Rhythm or the chorus or
58:08
the beat it's more holistic than that at least for really good music like art is greater than the sum of its
58:16
parts it's the extra second or so of dul attenuation on a Miles Davis song
58:21
hanging on to that note longer than you would imagine he would as if for dear life as if raging raging against the dying of
58:27
the light the mounting desperation through the verses of The Cure's Disintegration how Robert Smith's
58:34
plaintive wails instantiate the agony he's feeling while the world crumbles apart around him or the sado
58:39
masochistic sexual frenzy in The Past Is a Grotesque Animal by the band of Montreal where the instrumentation Quivers in a
58:46
way that instinctively makes us anxious on edge ready for action are these the kinds of results
58:52
that would ever naturally emerge from a purely AI work I don't know I'm not sure you can actually quantify art it's somewhat
58:59
analogous to the soul here in the sense that I don't believe it's something that could ever be replicated you can imagine that even if
59:05
you increase the number of parameters taken into consideration when training an AI increase the power of compute that
59:11
there's still a physical upper limit somewhere that we can never truly surpass and that that will be the thing that keeps AI from ever being able to
59:18
replicate Human Art maybe that's built into the system I'm someone who believes in God and a whole other world beyond
59:24
that which we can presently perceive so I'm slightly biased in this case but maybe that's the upper limit after which
59:29
there's not enough compute power on Earth that would ever allow an AI to calculate all those variables Crime and Punishment written
59:35
by Dostoevsky is a towering achievement Crime and Punishment written by a trillion monkeys is a statistical inevitability
59:42
at some point but perhaps that point can never be reached simply due to the emotional and philosophical complexity of the work
59:49
just too many variables too much power would be required to slip them all into the equation it's like an uncertainty
59:55
principle governing AI growth beyond here we cannot pass we cannot know I mean I'm
1:00:01
mostly just speculating now of course there's some interesting Cosmic parallel here I think between the observable
1:00:07
limitations in probing the unfathomably small and the resource limitations when wanting to go big honestly I can't
1:00:14
imagine anything AI produced in the area of art to be worth much of anything but again I'm biased and I'm conscious of
1:00:20
the fact that I've been naive to this type of thing in the past your mileage may vary I know people might ask whether
1:00:26
it should even matter whether something was created by a human or by a machine if you're talking about the same piece of art I guess my contention is that
1:00:33
that equivalence isn't there and I don't believe it ever will be when people start asking AI to write a song in the
1:00:38
style of AI system 3 3479 then I'll take another look anyway let's get back to
1:00:45
this idea of how we're creating a world where we offload all of our tasks to machines so we can live our lives doing all those things we've always wanted to
1:00:52
do there's several companies that are now developing personal AI assistants some fledgling versions of them are
1:00:57
already developed and are starting to roll out currently their list of talents is fairly limited but the vision and road map is laid out for so much more so
1:01:06
these assistants are essentially robots but built with service in mind right now they can fetch you a coffee tell you
1:01:11
what the weather is outside presumably vacuum your carpet and just generally keep you company like a creepy humanoid
1:01:18
manifestation of Alexa skulking around your living room but these robots are improving and fast they'll soon be
1:01:25
marketed as our new best friends replacing dogs who will then be jettisoned back into the wild after enjoying their cushy
1:01:31
centuries long position at our side like why would you want a dog anymore with the pissing on the rug inconvenient
1:01:38
illnesses dumbass decisions when you can have one of these New Creations some of which are already literally designed to
1:01:44
look like dogs with your new assistant you'll get much more than just a companion soon they'll be able to
1:01:50
simulate all the best that a dog has to offer but without any of the downsides you can even upload your dog's Consciousness
1:01:56
to the cloud and then sidestep all the heartache you would normally have to endure when it dies then just pop it back into new hardware once the cool new
1:02:02
versions are released Beyond just having someone to hang with future AGI assistants will
1:02:08
understand what you want in some cases better than you do in fact one of the goals for at least one of these companies is for everyone to have a
1:02:14
personal robot assistant catering to our whims and subject to our commands optimizing our lifestyle making all
1:02:20
those tedious choices for us and counseling us on the larger ones potentially saving us hours every day
1:02:25
that we would otherwise have spent toiling over some pesky decision we had to make in the course of living Our Lives I Envision the dynamic with these
1:02:32
assistants very much akin to that of a Kaiser and a Chancellor like think um Otto
1:02:37
von Bismarck or take the Game of Thrones angle a king and his hand the hand would
1:02:43
be doing all the work calculating the odds in any given situation and providing us with critical information we wouldn't otherwise know running the
1:02:50
country as it were and we could just uh muck about like Robert Baratheon like whoring and drinking and hunting wild animals or
1:02:56
maybe your Ambitions aren't so lofty and you merely want an adviser for certain scenarios some best friend which can
1:03:02
calculate all the odds for you in any given situation and offer you the best advice it will literally be the best
1:03:08
advice that's quantitatively available by the way it will tell you what to wear what time you should leave to go to
1:03:13
the restaurant answer any question you might have at the drop of a hat play chess with you after dropping down
1:03:19
several levels but it would tweak its displayed ability in any game to roughly match yours so as to optimize the experience
1:03:24
for you as more humans are replaced and stowed away in their one-bedroom homes to
1:03:30
subsist off the magnanimity of government and whatever basic income solution is eventually adopted our general lives will become much much
1:03:36
easier none of us would presumably have jobs at this point so the decisions would all be focused around optimizing
1:03:41
our living experience make sure you bring Otto along to chaperone on your first date he'll pick up on all the body
1:03:48
language you miss assessing psychological tells measuring the hue of flushed skin after you said something
1:03:53
particularly witty analyzing the width of smiles the rapidity of batted lashes the
1:03:59
number of times your date plays with her hair casting all this against some relative Benchmark and spitting out a
1:04:04
matchability ranking for you to pore over later of course this would all be customizable you'd be able to tweak such
1:04:10
things as kindness cleverness humor so you could tone down the verbosity if you still wanted to retain some sense of
1:04:15
mystery or Romanticism to the whole thing for the rejection-wary you could probably tweak Otto to let you down
1:04:21
gently maybe initially he'd say that the girl you met seemed to like you a great deal based on all the non-verbal cues he
1:04:27
said he was picking up but then only subtly over the next few weeks he tried to steer you away from her just to spare
1:04:33
your feelings I can't help but chuckle when I imagine the mounting absurdity of such situations as the intelligence of
1:04:40
these machines substantially increases I envision this man and his new human beau meeting for lunch one day with their
1:04:45
deity-like assistants amusing themselves off to the side Otto and AET
1:04:50
simultaneously playing 1,000 Grandmaster level chess games against each other collectively probing the mysteries of
1:04:56
the universe and perhaps even solving the Riemann hypothesis while their two autocrat simpletons giggle like
1:05:01
imbeciles because one spilled spaghetti sauce in their shirt it just all doesn't seem very likely to end up the way we
1:05:07
hope it might I wonder how many of us are just likely to abandon the idea of relationships altogether it might seem
1:05:13
Superfluous to those raised on safe spaces and virtual gaming why bother with all the stress of finding a potential mate having to make lifestyle
1:05:20
compromises having to deal with various moods and temperaments of your partner the other things that make living with someone so challenging why not just
1:05:27
eliminate that custom altogether I mean one can only imagine the dizzying heights that pornography will reach in
1:05:33
this bold new bespoke world that we're entering we're soon likely to have very believable sex robots that can be
1:05:38
programmed to assist us with whatever [ __ ] up fantasy happens to be tickling at our brain on any given day there will
1:05:43
probably even be an option to switch on the postcoital cuddling request if you really want to aim for
1:05:49
authenticity of course as the nuclear family breaks down as the number of children running around Windows as we
1:05:55
are all pulled away from our real lives by the Cornucopia of distractions that will Pelt us day and night what measure
1:06:00
of tedium would it be to even have a conversation with another person at all talking to people who don't see the world exactly as you see it what is even
1:06:07
the incentive to just hang out with friends or meet up with family members the world will be at your fingertips the
1:06:13
virtual world anyway if you ever find yourself wanting for love or even just companionship
1:06:19
you've got Otto okay maybe I'm getting a little ahead of myself not that this isn't
1:06:26
precisely the trajectory that we're on in society at the moment but I'm getting dangerously close to ranting right now so I'll peel it back I mean you see the
1:06:34
outrage right it's like the single purpose automation I get fold those proteins crank out those widgets Advance
1:06:41
Medical Science to a point where we can at least solve some of the problems that we face today I can even get on board with this idea that this thing is going
1:06:48
to replace virtually all of our jobs regardless of the fact that that's going to take a tremendous tremendous toll on
1:06:53
our Mental Health how our economy is structured and affect us negatively in a whole host of other domains that we
1:06:58
haven't even considered yet but when we start inserting AI into the most personal and the most human aspects of
1:07:03
Our Lives I can't help but wonder what we're hoping to achieve here what's the endgame exactly that we hook ourselves
1:07:10
up to a Perpetual pleasure machine and just sit there vegetating and drooling while the machines do all the work of living for us I keep coming back to this
1:07:16
phrase again and again the machines will free us from having to work and will let us do all the things that we don't have time to do right now and what is that
1:07:23
exactly if we have machines that are acting as surrogate lovers for us standing in as our best friends making all the
1:07:29
difficult decisions creating our music probing the mysteries of the universe at that point where do we even fit in how
1:07:35
are we even growing or learning or even living and what like why do we all seem to be okay with writing ourselves out of
1:07:40
the story one of the things we've done on intimate discourse is focus certain episodes on qualities that make humans
1:07:47
human they aren't necessarily good or bad things most times they can be a bit of both but they are all immutably human
1:07:53
characteristics and while we don't have to necessarily be proud of them or ashamed of them we do have to recognize that they are the things that
1:07:59
make us us and not something else they're all an integral part of the human experience and when we decide that
1:08:05
it's a good idea to start chipping away at the bone it's only a matter of time before the whole structure collapses you know it takes a certain
1:08:12
amount of effort to thrust oneself forward in a venture learning a new language for example takes years to
1:08:18
truly Master there's a certain pride in that and pride should be felt especially if that language isn't indigenous to your
1:08:24
locale you're investing your time and energy into something and are rewarded for it with the ability to communicate in a whole new sphere and shifting that
1:08:31
Dynamic with Google translate or whatever other Babel fish type of instrument we have in the future here
1:08:37
may be convenient for those who don't speak a language but it devalues what the person accomplished who did learn the language there's a certain growth
1:08:43
that happens in people when they learn a new language or learn how to play the piano or speak publicly or even learn
1:08:50
how to read and write from the ground up there's a development there and then a pride that blooms from realizing that
1:08:55
you've mastered it I know people who have spent their whole careers meticulously learning various programming languages so that they can
1:09:01
become the best in their fields now someone only needs to ask ChatGPT to write a blackjack program and voila done
1:09:06
in a few seconds what does this eventually do to people without exercising these muscles
1:09:12
won't they atrophy so I'm a runner I go out fairly
1:09:17
regularly or at least I did until I broke my leg which I'm still getting over but I remember once when preparing
1:09:24
to go out for a mid-winter run in Fairly miserable conditions a friend of mine who was my roommate at the time asked me
1:09:29
why I do it I gave him some of the usual arguments it's healthy keeps me from getting too overweight gets the blood
1:09:35
going whatever he then asked if I would still go running if there was an option available to just take a pill that would
1:09:41
keep me thin keep the blood pressure down and just generally provide all the health benefits that I currently get from running I thought about this for a
1:09:47
moment but eventually replied that I'd probably still go I tried to articulate how it made me feel afterwards the sense
1:09:53
of accomplishment the sublime exhaustion how it tended to flush out all the crazies and leave my mind in a
1:09:59
blissful state of relative calm and that a pill wouldn't ever really be able to do that so then he asked well what if
1:10:06
there was such a pill and it would give you all the health benefits and make you feel just as relaxed and you'd still feel
1:10:12
whatever chemical reactions that would be associated with accomplishment would you then take it instead of running and
1:10:17
I was like what's the point of the whole thing though it was sort of a stillbirth of a question it's like saying if you could
1:10:23
take a pill that would make you feel happy all the time but you'd also experience the sensation of being fulfilled in your life but in reality
1:10:28
you were just sitting on the couch would you take it you might as well ask if you could take a pill that would do all the
1:10:34
living for you would you take it at some point those questions just reduce down to ontological absurdities Demitri said something on
1:10:41
one of our episodes that really stuck with me if you were a soul that hadn't been born yet how would you want to live
1:10:47
your life what would be your reason for wanting to be born would it be that you would want to
1:10:52
go through all the tribulations and Trauma of birth only to sit around on a couch all day or lay in the Sun for
1:10:58
years on end no you want to live and experience you want to challenge yourself and overcome and create and
1:11:05
love why would we ever want to Outsource any of that I've heard it said that eventually
1:11:11
we'll all be consuming only our own content or rather content that we dictate to an AI so instead of watching
1:11:17
a movie or reading a book we'll tell the AI what kind of virtual world we'd like to immerse ourselves in snap our fingers
1:11:23
and our wish would be its command we could customize the world in real time introduce challenges we know we can
1:11:28
overcome and characters we know we like women or men that would incarnate all the characteristics that we know that we
1:11:35
want in our beloved this type of um customized fantasy world sounds like it would be heaven on Earth absolutely
1:11:41
incredible for about one day we're social creatures and we can't
1:11:46
live or grow in isolation we need others with whom to frolic we need their experiences their stories their ideas of
1:11:53
what's beautiful and what's not those positive feedback loops are a killer just look at all the Avengers movies in
1:12:00
the same way that news on social media reinforces one's pre-existing biases consuming artificially created content
1:12:06
that uses your own pre-existing preferences as a template on which to build isn't likely to expose you to
1:12:11
anything new in the way of ideas I remember when I first started listening to The Cure or the Smiths in high school
1:12:17
I revered those bands I felt like they were communicating to me on a purely visceral level this idea of identification
1:12:24
that comes with musicians or artists or writers is I think a healthy thing in our day-to-day lives we're not often
1:12:30
encountering the kinds of life and death struggles that writers will often write about or paint about even usually we're
1:12:35
just living through some minor version of that drama but our imaginations are large and our dreams can be massive and
1:12:41
it's in that space that we find Solace and inspiration in this type of art when I read Kafka I'm not just
1:12:49
passively consuming a bewildering account of K's frustrations with the legal system I'm wrapped up and fused with the mind
1:12:55
of Kafka I'm relating to the helplessness one feels against the harsh monolith of bureaucracy I'm connecting
1:13:01
with the anxiety that bleeds through the pages I'm on a journey with Kafka and
1:13:07
I'm all in because I feel all those things too there's a connection there between writer and reader that transcends the story itself that's more
1:13:14
than the sum of its parts I read A Confederacy of Dunces quite some time ago
1:13:19
but I still distinctly remember cackling insanely on a Subway in Toronto while reading a certain passage that was
1:13:24
undoubtedly referencing in some way the almost indescribable pomposity of Ignatius Reilly the main character um and
1:13:32
he's a character that is just so uniquely human pompous and yet so singularly vulnerable we need to I think appreciate
1:13:40
and cherish what we have in humanity we've gotten so hopped up on technology that we've collectively forgotten the
1:13:47
truly important things in life art love desire Beauty discovery
1:13:54
self-growth other people Microsoft had an ad for a while
1:13:59
which I think perfectly symbolizes the different directions our world could go at this Nexus and you
1:14:05
know I think the big tech companies kind of want to send us in one direction and you know there's some part of us that really wants to go in the other
1:14:11
direction um but I think this this ad kind of summed it up perfectly um
1:14:16
there's a man asking Bing to write a
1:14:22
graduation card for his son uh who's presumably graduating and I just you
1:14:29
know the way I see it anyway is like that should be
1:14:35
self-evident why that is such an Abomination uh I mean it's just
1:14:40
everything that's wrong with the world right there um like your child
1:14:45
is graduating it's the time you know one of the few times really like in
1:14:51
a year that you would maybe really want to say something from the heart and like use your own words to express it and you
1:14:57
know really take the time to search so you're saying the right thing that is meaningful to you and meaningful to
1:15:03
your son um and the fact that Microsoft just doesn't get it you know it's like I don't know it just
1:15:10
seems uh it just seems really disconnected to me um
1:15:17
anyway artificial intelligence can never understand us on this level and we need to be sure that we don't make the
1:15:22
mistake of believing that it can we also need to be sure that AI and technology in general continues to serve us rather
1:15:29
than the other way around uh Ilya Sutskever the lead scientist at OpenAI recently claimed that he
1:15:36
believes that next token prediction is enough to eventually usher in AGI his contention is that AI will soon be able
1:15:41
to extrapolate on Source material that isn't even there making connections that we haven't otherwise provided in some
1:15:48
way or another again I think we get into a semantics game with the word extrapolate while I'm sure he really is a
1:15:53
brilliant man in his field I can't help but sense the same hubris that has always been a uniquely human trait throughout the
1:15:59
centuries there is no Ghost Inside the machine here folks it's just a potentially dangerous tool that could
1:16:05
run amok and control Our Lives if we let it you know even if we could create this kind of life even if we could make
1:16:12
an AI so that it could do everything that a human could do but even better but without all the doomsday scenarios
1:16:17
why would we even want it like I don't want to hear that in GPT-9 they will have created some meta AI that will then
1:16:24
know down to a science specifically why it was that I was cackling like a maniac on that Subway that day it dissected
1:16:31
Toole's exact word choice in the passage that cracked me up and matched it to previous experiences I'd had and my
1:16:37
precise psychological disposition and could then replicate such sentences in some new book that it could write almost
1:16:43
instantly and ship to me same day for $19.95 I don't want it to delineate and
1:16:48
quantify down to the ones and zeros exactly why Philip is so obsessed with Mildred in Of Human Bondage
1:16:55
despite his hating her and her being an obvious cancer that he should avoid at all costs why on Earth would I ever want
1:17:00
to know that like why must we strive so militantly to unveil every [ __ ] mystery can't we just wonder
1:17:07
anymore there's a song uh by Nina Simone called isn't it a pity I mean I love a lot of her stuff
1:17:14
but this song in particular strikes me as somehow encapsulating something uniquely unquantifiable in terms of
1:17:19
trying to replicate what's so good about it it's a languid lament over 10 minutes long I think and not a whole lot happens
1:17:26
in it like the somber music is punctuated now and then by Nina mentioning how it's Such a Pity that we
1:17:32
take each other for granted there isn't any catchy beat or memorable chorus or even any truly interesting lyrics that
1:17:37
make it such a great song it's it's just all about how she's expressing what she's feeling through her voice through
1:17:42
the timbre of her word choice these uncertain pauses somehow managed to
1:17:48
sneak in the way she wistfully says everything is plastic at the end with
1:17:54
that flick of a tongue this is an elegy to humanity as a whole that's particularly fitting at
1:17:59
this Crossroads we find ourselves at the song is fundamentally depressing
1:18:05
it's hopeless and it's fraught with sadness and yet at the same time it's absolutely
1:18:10
beautiful let's not let's not forget who we are or where we've come from and let's not write ourselves out of the
1:18:16
equation that would truly be a Pity thank you for listening to me today
1:18:23
I really hope you got something out of this and I encourage you and would be thankful if you could share this podcast
1:18:30
or like or subscribe uh it means a lot and thank you
1:18:51
again