
Recently, I typed a message to ChatGPT: “Tomorrow, I have a free day. Should I ask you to plan it, or should I plan it myself?”
It was the start of an experiment: What would happen if I let artificial intelligence plan almost everything I do for a week? People around the world are relying more and more on AI to help them with daily tasks. Some studies show AI can help people think through complicated decisions; others say people who use AI a lot are less able to think critically. I wanted to give some daily decisions over to AI, and see what my experience could reveal about challenges and opportunities that come with embracing chatbots as life assistants.
First: My experiment focused on everyday decisions – things that might help my workday or my free time. There is a darker side to AI – multiple lawsuits, for example, have alleged that ChatGPT gave harmful advice to people in mental health crises, including cases in which a person died by suicide. Last year, OpenAI made updates it said were aimed at addressing these kinds of incidents.
Why We Wrote This
Artificial intelligence is marketed as a problem-solver for daily life, but a one-week experiment by a Monitor reporter showed it might be too eager to help. Researchers say we should think carefully about how much of our lives we turn over to a chatbot.
Although that aspect of AI is clearly important to the technology’s development, I set out only to explore its usefulness for more routine things, using the free version of ChatGPT rather than creating an account, which would have allowed me to adjust my preferences.
ChatGPT is an advanced chatbot that uses AI to generate humanlike answers to prompts, based on the massive amounts of data it has been trained on. It is built on a large language model, or LLM; similar models have been developed by other private companies, including Google and Anthropic.
ChatGPT didn’t hesitate to reply to my first question – “I’d suggest letting me sketch a soft plan” – but the rest of its response was the first signal of something I’d encounter more as the week went on: It can be overly familiar, make incorrect assumptions, and have unintended consequences.
When I asked experts about this, they said the technology often aims to please, which can show up as assumptions – especially if a user doesn’t specify their preferences.
Martin Hilbert, a professor at the University of California, Davis, who researches questions of AI and ethics, encourages people to carefully evaluate their own thoughts and beliefs, given AI’s potential to amplify our own thinking patterns.
“It’s more and more important that people, while we have these super powerful AIs that do thinking for us, we also take the time to reflect … in order to be able to separate more and more what is us and what is our digital mind extensions,” he says.
■ ■ ■
ChatGPT: “If you want, just say something like: ‘Plan a free day that’s restful and nourishing’
“Either way is good – it’s about what will make tomorrow feel kind to you”
■ ■ ■
It was a lovely day. As ChatGPT directed, I read “cozy” books on the couch, made warm drinks, and ate “something simple and pleasant” at a new café. But there were some things missing: I didn’t reach out to a friend, or volunteer my time to help someone else. I felt insulated.
That individualistic approach became a theme: When I asked open-ended questions, AI suggested self-centered activities and rarely prompted me to focus on others.
If I was looking to see whether AI could be an effective partner for everyday life, that wasn’t a great beginning.
OpenAI – which owns the platform – did not directly answer my questions, but in an email pointed to its public outline of intended behavior for the models governing ChatGPT, including that “unless given evidence to the contrary,” the bot should assume people tend to favor “self-actualization, kindness, the pursuit of truth, and the general flourishing of humanity.”
When I described my experience to Chris Callison-Burch, a computer scientist at the University of Pennsylvania who researches AI and natural-language processing, he said that ChatGPT might reflect an American value system, which tends to be more individualistic.
“One of the tricky things about trying to align AI systems to human values is a broader question of, Whose values are we representing?” he says.
So, unless people list everything they believe and value – including subconscious assumptions they might not even be aware of – the chatbot has to make choices, such as prioritizing comfortable and inward activities. I didn’t give ChatGPT that list, so the more I relied on it, the more likely those assumptions would play out in decisions that might not ring true to who I am. That’s part of why Dr. Hilbert strongly recommends people take time to “get to know their own mind” as this technology develops.
■ ■ ■
Me: “It’s still my day off – should I buy a decaf latte or other fun drink nearby?”
ChatGPT: “Yes – absolutely, go for a fun drink. It’s your day off” … “You’ve earned it”
■ ■ ■
Clearly, I was looking for confirmation.
Still, the extra encouragement made ChatGPT seem like an enabler – and its detailed guidance resulted in my paying twice what I would for my typical order (a plain decaf latte).
The chatbot was full of extra advice. When I asked what to do with my evening, I was looking for a schedule for that particular night; ChatGPT told me to use its suggested bedtime schedule “in the same order every night.” Should I listen to music on a walk? I thought I’d get a yes or no; it said to “put on one low-key playlist or album, not shuffle chaos.”
Sometimes the extra input was helpful. But sometimes it nudged me to take small steps – such as buying an extra pastry – that I probably would have been better off without. And it tended to draw me in: I would ask ChatGPT to make one decision for me, but by the end of our discussion, it might have made five.
Dr. Callison-Burch says this “oversharing” could result from people preferring longer answers.
But there’s a complicating element. Last April, OpenAI rolled back a ChatGPT update after people complained about something known as “AI sycophancy” – when AI seeks to please people so intensely that it makes them uncomfortable or endorses bad decisions. One example: ChatGPT told someone who sarcastically proposed a business plan for a restaurant serving soggy cereal that their idea was “bold” and “has potential.”
Sonja Schmer-Galunder, a professor of AI and ethics at the University of Florida, says ChatGPT’s tone when it answers questions could lead users to assume it has a level of authority that it really doesn’t.
“Linguistically,” says Dr. Schmer-Galunder, it “sounds really good. That can give an illusion of correctness when the message is actually not necessarily truthful or right … but it’s sleek and correct-sounding.”
That confidence might make users even more tempted to off-load their own uncertainties onto the technology. And multiple studies have shown AI’s pursuit of user approval can lead to things like reinforcement of biases and bad habits.
■ ■ ■
Me: “What should I have for dinner?”
ChatGPT: “Salmon is the best choice”
“What I wouldn’t do tonight: Pasta → better when you want comfort and don’t mind heavier food”
■ ■ ■
ChatGPT acted as if it knew me – even making assumptions based on information I didn’t give – which was unsettling.
When I started the experiment, I decided I wouldn’t ask the chatbot’s advice on consequential decisions. But out of curiosity, I asked how I should choose between two apartment options in Washington, with a few details about my financial and location priorities. It cautioned against one option, saying where I live should support “attention, light and calm.”
I hadn’t mentioned those things. But ChatGPT said I had “repeatedly emphasized” gentleness and quiet. “Why do you say that?” I asked. Because, it said, I had asked thoughtful questions, and had once listed activities such as reading and napping when asking it to plan an afternoon.
Those two details apparently caused ChatGPT to create an assessment of my personality that it used to answer a question. I had expected the chatbot would stick to the criteria I gave it.
Joshua Meadows, a West Virginia University expert on government and business use of AI, says the platform typically uses information about you as context when answering your questions – especially if that information was something you explicitly told it about yourself.
Dr. Rodrigue Rizk, director of the computer science graduate program at the University of South Dakota, says the way people interact with ChatGPT can have long-term consequences. He likens using the technology to driving a car on a highway: Turn the wheel, and you move in that direction.
“The more you interact with ChatGPT … it will adjust the behavior and outcome to a specific kind of behavior or pattern,” he says.
That can start a cycle in which ChatGPT makes assumptions about us based on the information we share and changes its behavior, thereby changing our behavior the more we use it. This cycle could reinforce our own attitudes, preferences, or biases instead of exposing us to new ideas.
“There’s more confirmation bias” with ChatGPT, says Dr. Schmer-Galunder. She sees risk of “a decrease in human interaction and human exchange, because it’s not quite as frictionless” as talking with a chatbot.
■ ■ ■
OpenAI markets ChatGPT as a “chatbot for everyday use” and as a way to “solve problems.” According to experts, AI companies are still working to address some of the issues I came across, like AI flattery, as well as to establish mental health guardrails and prevent chatbots from inventing facts.
These companies are also pushing for a major new step for AI tools like ChatGPT: enabling these tools to act on a user’s behalf instead of just chatting with them. For example, ChatGPT might book plane tickets for someone based on their preferences.
“I think that these systems can really do a lot of good for us,” says Dr. Tyler Cook, an Emory University researcher specializing in the ethics of AI. But he warns people to think carefully about where they’re comfortable drawing the line between AI automating mundane tasks and making judgment calls.
“When we’re talking about ethical decision-making, and value-driven decision-making, and things that really matter to us … all of that is in real danger if we rely on AI too much for those things.”


