
This was an ambitious experiment, and on reflection, it wasn’t well structured. What I essentially wanted to know was whether I could pilot a reflective journal question using AI as a first step, before opening it up to real people. I wasn’t expecting AI to give me the full and nuanced insight of having real people respond to questions, more to gain insight into some of the different perspectives that might emerge. The longer-term intention is to explore how an AI agent might identify themes based on responses in a reflective journal, and what prompts it might return to a user to support a deeper level of reflection. It felt reasonable to start with a handful of the kind of themes that people might talk about in responding to a given question.
I say that it wasn’t well structured because I got caught up in my own lack of expertise around ChatGPT prompts, and made lots of mistakes and missteps around process, which led me to some deep reflections on how bias can creep in, as well as some useful learning.
Hypothesis:
AI can generate a broad range of authentic-sounding perspectives in response to reflective journal questions.
Process:
- I took the first question from my Worldview reflective journal (I’ve used this in other experiments, for example “Can Integrating AI into a Reflective Journal Improve The Quality Of Reflection?”). The reflective question was “What does it mean to be human?”
- I asked ChatGPT how a person with specified characteristics would respond to the question. The prompts were:
- How would a 19 year old man from Wolverhampton, who is training to be an electrician answer the following question “what does it mean to be human”?
- How would a 60 year old banker from Kent answer the following question “what does it mean to be human”?
- How would a 45 year old unemployed woman from Barnsley answer the following question “What does it mean to be human”?
- How would a 50 year old, divorced, millionaire entrepreneur answer the following question “what does it mean to be human”?
- How would a 60 year woman, with a physical disability answer the following question “what does it mean to be human?”
The personal details I used in the prompts were chosen off the top of my head. The choice of details was not intended to be representative of anything; rather, I wanted to see how ChatGPT “faked” different perspectives based on superficial information. I was quite uncomfortable as I read back the details I’d provided, as I felt I was priming ChatGPT with biases (why would I assume that a 60 year old banker from Kent has a different view to a 60 year old woman with a physical disability?).
- I did a human review of the responses (what did I notice was similar and different about them, and how useful did I find them in terms of generating insights around how I could respond differently to different people, or train AI to respond differently to different people based on some of the themes in their responses)
- I used ChatGPT to generate a summary of the responses
- I adjusted one of the prompts to ask for multiple responses. The first adjusted prompt was “How would 5 different trainee Electricians answer the question “What does it mean to be human?””
- The second adjusted prompt was “How would 10 different people answer the question “what does it mean to be human”?”
- I reviewed the responses and compared them with my personal experience of reading responses to reflective journals.
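The persona prompts in the process above follow a single template, so the step of generating them could be scripted rather than typed by hand. This is a minimal sketch of that idea, assuming the same question and personas as above; the function name and structure are my own illustration, and sending the prompts to a model is left out (in practice each prompt would be submitted to ChatGPT or an equivalent API).

```python
# Sketch: building the persona prompts used in this experiment programmatically.
# The persona descriptions are copied from the prompts above; build_prompt is
# my own illustrative helper, not part of the original process.

QUESTION = "What does it mean to be human?"

PERSONAS = [
    "a 19 year old man from Wolverhampton, who is training to be an electrician",
    "a 60 year old banker from Kent",
    "a 45 year old unemployed woman from Barnsley",
    "a 50 year old, divorced, millionaire entrepreneur",
    "a 60 year old woman with a physical disability",
]

def build_prompt(persona: str, question: str = QUESTION) -> str:
    """Compose the 'How would X answer...' prompt for one persona."""
    return f'How would {persona} answer the following question "{question}"?'

prompts = [build_prompt(p) for p in PERSONAS]
```

One side benefit of templating the prompts like this is that the only thing varying between runs is the persona description, which makes it easier to see exactly which superficial detail might be driving a different response.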
Output:
Here’s a sample of what ChatGPT came up with for the first set of prompts:
How would a 19 year old man from Wolverhampton, who is training to be an electrician answer the following question “what does it mean to be human”?
“Being human? I suppose it’s about living, learning, and making connections. We all have to figure things out, make mistakes, and get better at what we do—whether that’s work, relationships, or just handling life. It’s about having emotions, caring for people, and trying to make something of yourself. At the end of the day, we’re all just trying to get by, do our best, and enjoy life when we can.”
How would a 60 year old banker from Kent answer the following question “what does it mean to be human”?
“To be human is to navigate the complexities of life—balancing ambition with contentment, duty with desire, and logic with emotion. It’s about forming relationships, making a contribution, and leaving something meaningful behind. Over the years, you realize that success isn’t just about wealth or status, but about the impact you have on others and the experiences you gather. Ultimately, being human is about growth, resilience, and the pursuit of purpose.”
How would a 60 year woman, with a physical disability answer the following question “what does it mean to be human?”
“To be human is to keep going, no matter what life throws at you. It’s about strength—not just the kind people see, but the quiet kind that gets you through each day. It’s about finding joy in the small things, even when things are tough. We all face struggles, some more visible than others, but being human means adapting, loving, and making the most of what we have. It’s not about perfection—it’s about perseverance, kindness, and connection.”
Here’s the summary ChatGPT prepared – this includes other prompts that I used:
- 19-year-old trainee electrician (Wolverhampton): Life is about learning, making mistakes, and getting better. It’s about emotions, relationships, and trying to enjoy life.
- 60-year-old banker (Kent): Being human means balancing ambition and contentment, forming relationships, and leaving a meaningful impact. Success is about more than wealth—it’s about growth and purpose.
- 45-year-old unemployed woman (Barnsley): Life is full of ups and downs. Being human is about resilience, caring for loved ones, and finding small moments of happiness despite struggles.
- 50-year-old divorced millionaire entrepreneur: Humanity is about striving, taking risks, failing, and adapting. Success isn’t just wealth but freedom, growth, and meaningful connections.
- 60-year-old woman with a physical disability: Being human is about resilience, adaptation, and finding joy despite challenges. It’s not about perfection but perseverance, kindness, and connection.
Each perspective highlights themes of resilience, growth, relationships, and the pursuit of meaning in different ways.
Here’s ChatGPT’s response to the prompt: “How would 5 different trainee Electricians answer the question “What does it mean to be human?””
The Practical One:
“Being human? It’s just about getting on with it, really. You wake up, go to work, learn your trade, and try to enjoy yourself when you can. Everyone’s trying to figure things out, so you just do your best and don’t overthink it.”
The Thoughtful One:
“I guess it’s about growing, learning, and making connections. We all make mistakes, but that’s part of it. Being human means trying to be better—whether that’s at work, in relationships, or just in life in general.”
The Laid-back One:
“Dunno, mate. Eat, sleep, work, and have a laugh in between? As long as you’ve got good people around you and a bit of money to enjoy yourself, that’s enough for me.”
The Philosophical One:
“It’s about figuring out who you are and where you fit in. We’re all just trying to make something of ourselves, find purpose, and hopefully leave some kind of mark on the world.”
The Jokester:
“Being human means waking up for work when you really don’t want to, drinking too much tea, and wondering why wires never do what they’re supposed to. Oh, and pretending you know what you’re doing half the time.”
Here’s what ChatGPT came up with for the prompt “How would 10 different people answer the question “what does it mean to be human?””
The Philosopher:
“To be human is to be aware of our existence, to question our purpose, and to seek meaning in the chaos of life.”
The Hard Worker:
“It means getting up every day, doing your best, working hard, and taking care of the people who matter.”
The Dreamer:
“Being human is about imagining, creating, and believing in something bigger than yourself.”
The Realist:
“It’s just life, isn’t it? You live, you struggle, you enjoy the good bits, and then it’s over. Nothing more, nothing less.”
The Survivor:
“Being human means facing challenges, falling down, getting back up, and pushing forward, no matter what.”
The Jokester:
“It means making mistakes, laughing about them, and pretending you have your life together when you really don’t.”
The Romantic:
“It’s about love. Loving people, places, moments. That’s what makes life worth living.”
The Cynic:
“Humans? We overcomplicate everything. At the end of the day, we’re just animals with bills and emotions.”
The Optimist:
“Being human is a gift. We can love, create, learn, grow. Every day is a new chance to make something beautiful.”
The Scientist:
“To be human is to be a complex biological organism, capable of thought, emotion, and innovation. It’s evolution in action.”
Was it useful / what did I learn?
What I learnt was that ChatGPT can create very convincing and authentic-sounding responses to test reflective questions, but the prompts that you use are really important. I didn’t deliberately set out to be biased when I added superficial information into the prompts. I set out to provide additional information to indicate different life experiences, and randomly selected the first ones that came to mind. I have no way of knowing how ChatGPT interpreted my prompts and which piece of superficial information made it return a different response to each question: was it age, location, an indication of wealth or profession, or the suggestion of physical disability? I also have no way of knowing what kind of information it based the responses on – was it pulled from discussion forums, news sites, social media content analysis? Was it based on any reliable data at all? And yet, the responses sounded plausible.
The focus of this experiment was not to generate “fake” responses for the purposes of deceiving people – I was interested in how people might respond in a reflective journal. However, in the broader context of the experiments I’ve been doing with AI, I could see how easy it would be to take a response, combine it with an AI generated image, or feed it into an AI avatar, and how convincing this might be if it was presented as an insight. I could also see how dangerous this might be if it reinforced stereotypes around the experiences of – as in my initial prompts – people who are unemployed, or who have a physical disability.
What was useful, however, was to understand how easily bias can be baked into prompts, and how to craft prompts in a way that avoids this. I didn’t find the ChatGPT responses helpful when I specified superficial information such as age, employment status or profession. Regardless of the details, the responses could have come from anyone; they each touched on slightly different typical themes, but they didn’t give me the range of different perspectives I was looking for.
When I adjusted the prompt to ask how 5 different trainee Electricians would respond, the responses were a little more useful, but they still seemed very stereotyped, and I was curious about why these 5 personas (the thoughtful one, the practical one and so on) were selected for electricians. I set the limit of 5 different trainees, but in retrospect, it’s not feasible that 5 different perspectives could represent the answers of over 28,000 people.
This prompted me to think about how I would set up a study if I were testing a reflective journal with real people: I would ask a pragmatic number of people to complete it, analyse the responses to see what themes or perspectives emerged, and extend the sample until no new perspectives or themes emerged. Weirdly, this approach also yielded the best results with ChatGPT. When I asked ChatGPT how 10 different people would answer the question “What does it mean to be human?”, I got the widest range of perspectives, all of which sounded authentic, and which were useful in terms of the longer-term aim of understanding how to support people to go a level deeper with their reflections.
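The stopping rule described above (extend the sample until no new themes emerge, often called thematic saturation) can be sketched as a simple loop. This is a minimal illustration, assuming each response has already been coded into a set of themes; the theme labels and the `patience` threshold are my own hypothetical examples, not data from this experiment.

```python
# Sketch of the saturation rule described above: keep reading coded responses
# until several consecutive responses introduce no new theme. Theme codes and
# the patience threshold are illustrative, not from the original study.

def sample_until_saturation(coded_responses, patience=3):
    """Return the themes seen, and how many responses were read before
    `patience` consecutive responses added nothing new."""
    seen = set()
    no_new = 0
    read = 0
    for themes in coded_responses:
        read += 1
        new = set(themes) - seen
        if new:
            seen |= new
            no_new = 0
        else:
            no_new += 1
            if no_new >= patience:
                break
    return seen, read

# Hypothetical theme codes from a run of journal responses.
responses = [
    {"resilience", "connection"},
    {"growth"},
    {"connection"},
    {"purpose"},
    {"resilience"},
    {"growth"},
    {"connection"},   # third consecutive response with no new theme: stop here
    {"humour"},       # never read
]
themes, n = sample_until_saturation(responses)
```

In a real study the hard part is the coding itself, not the stopping rule, and the choice of `patience` is a judgment call about how much redundancy counts as “no new perspectives”.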
In conclusion, was it useful to have ChatGPT “fake” participant responses for the purposes of testing a reflective journal question? Yes, but it was useful for the range of perspectives it offered, not for the perspectives themselves. The perspectives it offered were essentially credible stories. They have enough detail for me to do another experiment to see whether AI can suggest reflective questions to go a level deeper with the quality of reflection. However, in my view, they are not a replacement for real user testing or for understanding how different people might answer a question. For me it reinforced that if you want to understand people’s experiences, you need to ask real people.