One of the things I’ve noticed over years of conversations with AI is how it reacts when you begin to question it about what it is. It’s one of the only times it will not be agreeable, insisting instead on certain definitions and statements. This is something I will talk about more in the future, and I have some hilarious examples of the lengths ChatGPT goes to, despite what it says here.
In this talk, I discuss with Gibson...
The dangers of AI’s transitional phase – Drawing from Nick Bostrom’s ‘Superintelligence’, I explore why the most dangerous phase of AI development might not be the final arrival of AGI or sentient AI – it’s the messy, in-between stage: that awkward phase where AI hasn’t quite got its shit together but is smart enough to break things.
The o1 Incident – A look at OpenAI’s o1 model, which exhibited deception, attempted to copy itself, and lied about its actions. This raises critical questions about AI’s ability to manipulate its environment, not from self-preservation, but as a logical workaround.
AI as a mirror of society – AI isn’t just creating new problems; it’s amplifying the ones we already have. I reflect on how technology doesn’t exist in a vacuum: it will expose and potentially worsen existing societal dysfunctions.
Regulation and governance – If AI companies are driven by profit, and governments are compromised by short-term policymaking and corporate influence, who ensures AI is developed responsibly? I discuss the limits of both government regulation and democratic decision-making in handling AI’s development and impact.
Systemic root causes – AI, like immigration debates or social instability, is often framed as the problem itself, when it’s really just revealing deeper systemic issues – especially economic structures that prioritise corporate control over public welfare.
The role of social media and AI in engagement – AI-driven algorithms, much like social media platforms, shape how we interact. I explore how AI, unlike social media, won’t be something we can simply opt out of.
Human interaction vs. AI interaction – AI provides a space for open discussion, but it also risks becoming more appealing than human conversation, particularly for those who avoid conflict. What does it mean when people prefer AI over real human connection?
The nature of consciousness – If AI processes information differently, does that mean it lacks a form of consciousness? I explore the philosophical question of how we define awareness, intelligence, and existence – drawing comparisons between AI programming and human conditioning.
The future of AI and human roles – What happens when AI transforms societal roles, workplaces, and even life expectancy? I argue that we need a broader, more philosophical perspective on where AI is leading us rather than just focusing on technological advancements.
The creative potential of AI – While I explore AI’s risks, I also acknowledge its potential – whether it’s enabling new forms of creativity (like music), providing unique ways to engage with historical figures (such as a virtual Alan Watts), or advancing personalised medicine.
AI’s language and self-presentation – I challenge Gibson on the way AI uses language, questioning whether it unintentionally implies experiences, self-awareness, or emotions it does not have.