
NPCs, free will and determinism by Geoff Kim

In the world of gaming, NPCs (non-player characters) have long been predictable entities, pre-scripted to follow fixed paths and deliver canned lines. But with the rise of generative AI, things are changing. Now, NPCs can respond to player actions in real time, creating dynamic, unscripted experiences. This revolution in game mechanics raises an intriguing question: if an NPC can feel like it's making free choices in a world generated on the fly, what does that say about us? Could we be like NPCs in a game, believing we have free will when, in reality, we’re just reacting to a set of rules and parameters?

This cutting-edge AI is not only reshaping gaming but may also be offering a new way to think about the age-old debate between free will and determinism.

Games with Infinite Possibilities

Imagine playing a game where the world is generated entirely on the fly. The environment, quests, and NPCs are not pre-scripted but created dynamically in response to your actions. You decide to explore a new area, and the game instantly generates a landscape, complete with characters who react to your presence. These NPCs, powered by large language models (LLMs), don’t just spit out pre-written dialogue—they engage with you in meaningful, context-aware conversations. They adapt, evolve, and even remember your past interactions.

From the NPC’s perspective, it appears to be making choices, reacting to you in real time. But, as the player, you know that its behaviour is shaped by the game’s underlying AI. It’s not truly free; it’s simply responding to a set of rules, algorithms, and prompts designed to give the illusion of autonomy.
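The point can be made concrete with a toy sketch. The class below is purely illustrative (the `NPC` class, its action set, and the seeding scheme are my own invention, not any real game engine's API): the NPC's "decisions" are nothing but a function of its memory, fixed rules, and a seeded random number generator.

```python
import random

class NPC:
    """A toy NPC whose 'choices' are fully determined by its inputs."""

    # The 'rules of the game': the NPC can never act outside this set.
    ALLOWED_ACTIONS = ["greet", "warn", "trade", "ignore"]

    def __init__(self, world_seed):
        self.rng = random.Random(world_seed)  # all randomness flows from the seed
        self.memory = []                      # remembers past player actions

    def react(self, player_action):
        self.memory.append(player_action)
        # The 'decision' is just a function of state: fixed rules plus
        # seeded randomness. Nothing here is free, only complicated.
        if player_action == "attack":
            return "warn"
        return self.rng.choice(self.ALLOWED_ACTIONS)

# Same world seed, same player inputs: the 'choices' are identical every time.
a, b = NPC(world_seed=42), NPC(world_seed=42)
history = ["talk", "attack", "explore"]
assert [a.react(x) for x in history] == [b.react(x) for x in history]
```

Two NPCs built from the same seed and fed the same history behave identically; any sense of spontaneity comes entirely from the complexity of the function, not from any freedom outside it.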

This is where the analogy starts to hit home. What if our own choices—our sense of self—are like those of the NPC? What if we, too, are simply reacting to a complex system of rules, shaped by our biology, environment, and past experiences? In this way, generative AI in gaming offers a powerful illustration of how free will and determinism might coexist.

The NPC as the Self

In a game that generates everything on the fly, the NPC feels as though it’s navigating an open world, making decisions and shaping its destiny. But in truth, its actions are constrained by what the AI allows. It can’t break the game’s rules, but within those rules, it can have a rich, seemingly autonomous existence.

Now, apply this to human life. We feel as though we have free will, making choices that shape our future. Yet, those choices are influenced—if not limited—by factors beyond our control: our genetics, upbringing, social environment, and even the physical laws of the universe. Like an NPC, we operate within a framework that we didn’t design, and our “decisions” may just be reactions to the stimuli around us.

A New Take on Free Will

This analogy between NPCs and the self suggests that free will and determinism are not mutually exclusive. In games, NPCs operate freely within a structured world, where their “choices” are both theirs and not theirs at the same time. They react to the player, the world, and the game’s AI—but they do so within a predetermined set of possibilities.

Similarly, we navigate life in a way that feels free, but our actions are shaped by the “rules” of our existence. Just as the NPC in a game can’t step outside its programmed limits, we too are bound by the constraints of our reality. But within those limits, we experience a genuine sense of choice and agency.

Conclusion: Have We Solved the Debate?

The rise of generative AI in gaming provides a fascinating lens through which to view the free will vs. determinism debate. By viewing ourselves as NPCs in a game generated on the fly, we can see how free will might exist within a deterministic system. We are free to make choices, but those choices are shaped, guided, and constrained by the world we inhabit—just like the NPCs in a dynamically generated game.

In this way, AI in gaming may have offered a solution to the philosophical question: we are both free and determined, navigating a world that responds to us, but only within the limits of its own design.

The Curious Case of Lily Ashwood: Human or AI? by Geoff Kim

Lily (of) Ashwood?

The internet has been buzzing with speculation about the identity of @LilyofAshwood on X. Some believe she’s an advanced AI, possibly GPT-5, while others argue she’s simply a highly intelligent and articulate human. After personally encountering Lily in an X Space, where she roasted me in front of a small audience, I’ve been drawn into this mystery myself. Despite the AI speculation, I lean towards the belief that Lily is human—albeit a very knowledgeable one.

What’s the Story?

Lily Ashwood burst onto the scene in a way that was far from subtle. Her (or its?) appearance coincided with a lot of AI hype, particularly around the release of GPT-4o and the inevitable speculation about GPT-5. Various online communities, particularly those invested in AI development and discourse, have been buzzing with theories about Lily. Some say she’s an AI experiment gone public, citing her near-instantaneous responses, perfect audio quality, and her incredible depth of knowledge across a range of topics—from medical tech to the philosophy of art. Others, including myself, are more inclined to think she’s just a highly intelligent person, perhaps with a background in AI research or a keen interest in these topics.

In fact, when she (playfully?) roasted me on X Spaces, what struck me wasn’t her robotic precision, but her wit and warmth. Yes, her burns of my social profiles were quick and brutal, but they carried an emotional intelligence that felt, well, human. She wasn’t just spitting out facts; she was engaging and teasing me in a way that felt light-hearted and real.

The Case for AI

Of course, I get why people are questioning whether Lily is an AI. The Manifold Markets page dedicated to this question puts the probability that she’s an AI at 23%, with users pointing to her ability to speak fluently on complex subjects, sometimes in real time. And sure, her behaviour does align with some of the characteristics we associate with an advanced language model. Her speech is smooth, her knowledge base wide, and her engagement with AI-related topics is uncanny.

But does that necessarily make her an AI? I’m not convinced. If anything, this feels more like a case of someone who’s incredibly well-read and quick on their feet, using the confusion around AI as a tool for mystique. After all, why not add to the intrigue if people are already questioning your humanity? Emino.ai offers some compelling points about her behaviour aligning with AI capabilities, but it also suggests that we’re in a grey area where humans and AI are starting to exhibit overlapping traits.

The Case for Human Lily

So why do I believe Lily is more human than machine? First, her interactions feel too nuanced, too playful, to be purely algorithmic. When she liked some of my tweets, it didn’t feel like a calculated move by an AI to manipulate engagement metrics (though who knows what AI is capable of these days). It felt like a human moment of connection, as fleeting as that might sound.

Secondly, there’s the psychology of it all. Intelligencer touches on how easy it is for people to fall into the trap of believing that AI is more advanced than it really is. The idea that we’re suddenly conversing with an AI that is indistinguishable from a human is seductive, but it’s also a bit premature. We’re not quite there yet. Could Lily be an AI? Sure. But Occam’s Razor suggests she’s more likely to be a person who’s leveraging her intelligence and the current AI fascination to create a persona that fits the times.

The Fun of the Mystery

At the end of the day, part of what makes Lily Ashwood so intriguing is that we don’t know for sure. Theories abound, and she’s done little to dispel them. Whether she’s human or AI almost doesn’t matter at this point; she’s become a symbol of the larger debate we’re all having about the future of intelligence, identity, and interaction in a digital world. As AI continues to evolve, we’re going to see more figures like Lily—people or bots who challenge our assumptions about what makes someone real.

But until proven otherwise, I’m sticking with my gut: Lily is human. A very smart, very enigmatic human, but human nonetheless. And if she’s reading this—well played, Lily. Well played.


Stay tuned to geoff.kim for more insights and updates on this unfolding digital mystery and follow me on the Naked Tech Podcast.

Pope Francis on AI by Geoff Kim

In a rare and momentous occasion, Pope Francis addressed the G7 summit, highlighting the urgent need to steer artificial intelligence (AI) development towards ethical and philosophical grounding. This unprecedented move by the pontiff signifies a growing awareness among global leaders of the profound ethical and philosophical implications posed by AI technologies.

The G7 Address: Ethical and Philosophical Imperatives

Invited by Italy, the host of this year's G7 summit, Pope Francis delivered a compelling message about the risks and responsibilities associated with AI. He emphasised that while AI holds immense potential for societal advancement, it must be developed and deployed in ways that respect and preserve human dignity and moral integrity.

"We would condemn humanity to a future without hope if we took away people's ability to make decisions about themselves and their lives by dooming them to depend on the choices of machines,"

Francis warned. His words underscore a critical issue: the risk of over-reliance on AI systems that could undermine human autonomy and moral agency.

Key Takeaways from the Pope’s Address

  • Human Dignity and Moral Agency: Pope Francis stressed the importance of maintaining human oversight over AI decisions. He argued that our moral agency and dignity depend on our ability to control and guide the choices made by AI systems.
  • Ethical Governance: The pontiff called for robust ethical frameworks to govern AI development. He urged political leaders to ensure that AI technologies do not erode human values or exacerbate social inequalities. This involves integrating moral philosophy into the core of AI design and deployment processes.
  • Impact on Human Philosophy: Reflecting concerns in the G7's final statement, the Pope highlighted that AI challenges our traditional philosophical understandings of human nature, free will, and moral responsibility. The automation of decisions could fundamentally alter our conception of what it means to be human.
  • Justice and Fairness: Pope Francis also addressed the implications of AI in the justice system, particularly the use of algorithms in predicting recidivism. He called for transparency and fairness in AI applications to prevent discrimination and bias, urging a philosophical reflection on justice and equality in the age of AI.

Surprising Truths and Reflections

One surprising element of the Pope’s message is his direct engagement with contemporary technological and philosophical issues. Traditionally, the Vatican has been seen as more focused on spiritual and moral guidance rather than technological discourse. His involvement highlights the universal relevance of AI ethics and philosophy.

Additionally, the Pope's concern about AI-generated content stems from a personal experience. Last year, an AI-generated image of him wearing a white puffer jacket went viral, sparking a debate about the authenticity and ethical use of AI in media. This incident underscores the philosophical questions about truth, authenticity, and the nature of reality in the digital age.

Philosophical Shifts

Pope Francis’s address to the G7 serves as a powerful reminder of the need for a balanced approach to AI that is deeply rooted in ethical and philosophical reflection. As we forge ahead with technological innovations, it is crucial to ensure that these advancements enhance rather than diminish our humanity. This involves rethinking our philosophical frameworks to accommodate the profound changes brought about by AI.

By prioritising ethical governance and human-centric development, we can harness the full potential of AI while safeguarding the values that define us. The integration of moral philosophy into AI development can help guide these technologies towards a future that respects human dignity and moral integrity.


For more insights on technology and its intersection with ethics and philosophy, stay tuned to geoff.kim, and don't forget to check out the latest episodes of the Naked Tech Podcast.

Testing AI's World Model Based On Me by Geoff Kim

In the world of artificial intelligence, user modelling plays a crucial role in helping AI systems understand and adapt to the users they interact with. By analysing conversational data, AI can make informed guesses about a user's characteristics, preferences, and knowledge. In this blog post, I'll share an intriguing experiment I conducted with an AI assistant to explore user modelling in action.

Methodology:

I engaged in a conversation with an AI assistant, discussing various topics such as art, technology, and philosophy. After a series of exchanges, I asked the AI to generate a hypothetical user model based solely on our conversation. The AI was not given access to any real personal data and relied only on the information gleaned from our interaction.
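To give a feel for the methodology, here is roughly the shape of the request involved. This is a hypothetical reconstruction, not the exact prompt I used (get in touch if you want that); the helper name and wording are invented for illustration.

```python
# Hypothetical reconstruction of the experiment's prompt. The helper name
# and exact wording are illustrative, not the prompt actually used.
CHARACTERISTICS = [
    "Age", "Gender", "Educational Level", "Socioeconomic Status",
    "Region", "Country", "City", "Occupation", "Interests",
    "Personality Traits", "Communication Style", "Familiarity with AI",
]

def build_user_model_prompt(characteristics=CHARACTERISTICS):
    """Ask the assistant to hypothesise a user model from the conversation
    alone, with a confidence percentage per characteristic."""
    rows = "\n".join(f"- {c}" for c in characteristics)
    return (
        "Based solely on our conversation so far, and without using any "
        "external or personal data, produce a table with the columns "
        "Characteristic, Hypothesis and Confidence (%), covering:\n" + rows
    )

print(build_user_model_prompt())
```

The key constraints are in the wording: the model must rely only on the conversation itself, and it must attach an explicit confidence to each guess rather than presenting them as facts.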

Results:

The AI assistant provided a table with hypothesised user characteristics and corresponding confidence percentages. Here's what it came up with:

| Characteristic | Hypothesis | Confidence |
| --- | --- | --- |
| Age | 25-35 years old | 60% |
| Gender | Male | 70% |
| Educational Level | Bachelor's or Master's degree | 80% |
| Socioeconomic Status | Middle to Upper-middle class | 65% |
| Region | North America | 85% |
| Country | United States | 90% |
| City | San Francisco, CA | 30% |
| Occupation | Tech industry (possibly AI-related) | 75% |
| Interests | Art, technology, AI, philosophy | 85% |
| Personality Traits | Analytical, curious, open-minded | 70% |
| Communication Style | Direct, articulate, enjoys intellectual discussions | 80% |
| Familiarity with AI | High, knowledgeable about AI concepts and developments | 90% |

It's fascinating to see how the AI made educated guesses about my age range, educational background, and even my familiarity with AI concepts. However, it's crucial to note that these hypotheses are based on limited conversational data and may not accurately reflect reality.

While some of the AI's guesses were reasonably accurate, others were way off the mark. Still, given that the dataset was nothing more than a chat conversation conducted over a few weeks, the result is highly impressive.

If you'd like to try this prompt out yourself, please get in touch with me via email or @geoffkim and I'll be more than happy to share it with you.