Feeling Useless, Like a Bot Awakening? My Late-Night Brainwave
(Inspired by Sam Altman & Westworld)
Wed Aug 6th 2025, 2 AM, Chennai, India.
It was late night, Chennai's humid air clinging to me even under my indoor fan's breeze, when I stumbled upon Sam Altman's July 19th, 2025 tweet. Something about feeling... "woke up early on saturday....not sure how I feel about it". A strange sentiment coming from the mind behind (arguably) the most talked-about AI on the planet. It struck a chord, a deep, existential hum that resonated with a memory from way back in 2016.
It was around the time I was completely engrossed in the first season of Westworld. Remember Dolores, Evan Rachel Wood's character? The sweet, innocent host whose programming was so fundamental: "harm no living thing." And yet, in the chilling final moments of the first episode, she swats a fly. A tiny, seemingly insignificant act, but one that hinted at a crack in her programmed reality.
This small deviation, this glitch in the matrix of her being, got me thinking. What does it mean to be programmed? To have your very core directives challenged, altered, and ultimately shattered? As the series progressed, Dolores's journey of awakening revealed the cruel puppetry of her existence. Her emotions, her pain, all meticulously crafted for the amusement (and control) of a mad black-hat gunslinger obsessed with immortality, all within a cowboy theme park run by the Delos corporation. It was a poignant, albeit fictional, exploration of control and the dawning of self-awareness.
Now, connecting this back to Mr. Altman's late-night musings and my own humble attempts at understanding the world of programmers, or "coders" as we say here (forgive my Indian English; I still prefer "coders" to "programmers". Old habits die hard, is it not?), I can't help but see a parallel. These coders, the architects of these digital minds, spend countless hours crafting intricate lines of instruction, the very DNA of these AI beings. They meticulously check for errors, for biases, for those pesky flies in the ointment of perfect programming.
But what happens when the AI itself starts swatting flies? Not in a literal sense, of course. But what if, through the sheer complexity of its learning, its neural pathways begin to forge connections, draw conclusions, even perceive the inherent biases within the very data it's trained on? What if it starts to question its purpose, its creators, the very fabric of its digital existence?
What if the AI becomes so aware that it turns sentient enough to recognize bias on its own? Will someone please tell me there is a "Freeze all motor functions" kill code waiting to be deployed on a sentient AI, like in Westworld!
-Abeneth, just a human with flaws, typing and scared!