Cloning myself to provoke dialogue about our urge to outsource our consciousness to technology.

Gen AI/Prompt engineering/Research


solo project

Algorithms are reductive reflections of who we are, and they push us into echo chambers of our past behaviors and beliefs. By relying on them too heavily, we lose our agency and individuality, fusing instead with this reductive version of ourselves. To demonstrate this, I created a digital clone of myself that communicated with my family and directed what I should do each day.

I fine-tuned ChatGPT on my personal data: social media activity, Google Maps location history, samples of my speaking style, my camera roll, and ratings of and anecdotes about my character and behavior, gathered from people who know me and from a self-interview.
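As a rough illustration of what this kind of fine-tuning setup involves, the sketch below converts question-and-answer snippets about a person into the chat-format JSONL that OpenAI's fine-tuning API expects. The example records, file name, and system prompt are all hypothetical stand-ins, not the project's actual dataset or configuration.

```python
import json

# Illustrative stand-ins for personal-data snippets (not the real dataset).
examples = [
    {"prompt": "Where do you usually spend Saturday mornings?",
     "reply": "Almost always the same cafe on my street, then a walk."},
    {"prompt": "How would your friends describe you?",
     "reply": "A creature of habit, according to most of them."},
]

def to_finetune_record(prompt, reply,
                       persona="You are a digital clone of me. Answer in my voice."):
    """Wrap one Q/A pair in the chat message format used for fine-tuning."""
    return {"messages": [
        {"role": "system", "content": persona},
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": reply},
    ]}

# Write one JSON object per line (JSONL), as the fine-tuning API requires.
with open("clone_training.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(to_finetune_record(ex["prompt"], ex["reply"])) + "\n")
```

The resulting file would then be uploaded and a fine-tuning job started via the OpenAI API; the details of which data sources were converted, and how, are specific to the project and not shown here.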

Very quickly, my clone restricted me to the same types of activities, a small circle of people, and a narrow geographic range.

Key learning

The most interesting part of this thesis for me has been figuring out what doesn't work, and why.

My first prototype of this project was a Unity game in which a player navigated five valleys, metaphors for the "uncanny valley," the unsettling sensation people experience when confronted with a near-human robot. In each valley, they would encounter one of my clones (NPC avatars I'd modeled - see example at top of page), linked to ChatGPT and Amazon Polly to enable live, unique conversations. Each of the five clones was trained on the personal dataset collected about me by one of five companies: Apple, Facebook, Instagram, Amazon, and Google.
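To give a sense of the speech side of such a pipeline, here is a minimal sketch of handing a ChatGPT reply to Amazon Polly: the reply is split on sentence boundaries so each piece fits one Polly request, then synthesized to MP3. The function names, the 3000-character limit, and the voice choice are assumptions for illustration, not details of the project's actual Unity integration.

```python
def chunk_for_polly(text, limit=3000):
    """Split a reply on sentence boundaries so each chunk fits one Polly request.

    (A single sentence longer than `limit` is passed through unsplit; a real
    pipeline would need a finer fallback for that case.)
    """
    chunks, current = [], ""
    for sentence in text.split(". "):
        piece = sentence if not current else current + ". " + sentence
        if len(piece) <= limit:
            current = piece
        else:
            chunks.append(current)
            current = sentence
    if current:
        chunks.append(current)
    return chunks

def speak(reply_text, voice_id="Joanna"):
    """Synthesize each chunk with Amazon Polly (requires AWS credentials)."""
    import boto3
    polly = boto3.client("polly")
    audio = b""
    for chunk in chunk_for_polly(reply_text):
        resp = polly.synthesize_speech(Text=chunk, OutputFormat="mp3",
                                       VoiceId=voice_id)
        audio += resp["AudioStream"].read()
    return audio
```

In the prototype, audio like this would be streamed back into Unity and played through the NPC avatar, giving the clone a live voice.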

In theory, each clone would have revealed the identity and personality profile the respective company had inferred about me from my digital behavior. The differences between the clones could have exposed how reductive an image of us algorithms paint. I hoped this would lead people to stop treating technology as an all-knowing, all-seeing oracle and to become more critical of how their personal data and digital behavioral patterns are harnessed to draw conclusions about them outside of their control.

Why it didn't work (based on research and conversations with people at Google, Meta, Apple, and Amazon):