
Do You Say Thank You to Your Robot Vacuum? Many People Do

  • Writer: Capacitor Partners
  • Aug 21
  • 4 min read

By Yioulika Antoniades - Systems Deployment Manager, Capacitor Partners



Would you still grumble if your robot vacuum cleaner stopped mid-task, looked up, and asked why you were upset?


It sounds like a joke—until you pause and think about it. As our everyday devices become more “alive,” our behaviour toward them starts to matter in unexpected ways.

You wouldn’t slam a door in someone’s face. But if a delivery robot stalls in front of you on the sidewalk, do you walk around it? Nudge it with your foot? Curse under your breath? You’re not alone—many people do.


In fact, a 2025 global study by KPMG and the University of Melbourne found that 66% of people now use AI regularly, a strong indicator that this technology is no longer novel but has become a normalized presence in everyday life, shaping not only productivity but also our social expectations and behaviours.


We’re also growing more emotionally attached to these systems. We ask Alexa for help. We thank Google Maps. We get frustrated when ChatGPT doesn’t understand. These are signs that our minds are wired for connection. When technology talks, we tend to talk back.


This psychological inclination to humanize AI is also a key reason people feel both drawn to and uneasy around it. As discussed in an HBR IdeaCast interview with Julian De Freitas, assistant professor at Harvard Business School, AI systems often feel almost human, but not quite. People may resist adopting AI because it seems either too opaque to trust, too rigid to relate to, or too autonomous to control. Ironically, these same human-like cues (such as a voice, or a pause like Alexa's "Hm") also increase emotional connection. De Freitas warns that as AI becomes more capable and integrated into daily life, we must consider not only the psychological ease of use but also the ethical risks, especially when users turn to AI for companionship or mental health support.


This emotional response is not just anecdotal. According to the 2025 Top-100 GenAI Use Case Report by Marc Zao-Sanders (Harvard Business Review), “Therapy / companionship” rose from the #2 spot in 2024 to become the #1 generative AI use case globally in 2025. Based on an extensive analysis of Reddit posts, the report reveals that users increasingly rely on AI for emotional support, grief processing, and self-reflection.


The data further shows that new top-ranking entries such as “Organizing my life” (#2) and “Finding purpose” (#3)—which didn’t even appear in the previous year’s top 10—represent a growing trust in AI for deeply personal needs. These categories scored high in both reach (extent of adoption) and impact (significance to users), indicating that AI is no longer just a utility—it is becoming a mirror and companion in people’s daily lives.


So… does our emotional behavior toward AI have ethical and psychological consequences?

Psychologists think it might. If we normalize disrespect toward machines that act human—even if they aren’t—it could influence how we treat real people. Especially children, who are still learning where the line between play and harm lies.


This concern becomes even more pressing as AI systems are designed to appear thoughtful, responsive, even empathic. The same 2025 GenAI report notes that users are now routinely confiding in AI, simulating conversations with deceased loved ones, and using it as a safe space to share emotional struggles. These use cases fall under the category of “personal and professional support,” and include direct quotes from users who describe AI as a non-judgmental listener—always available, always calm.


As the line between interface and companion blurs, so too does our moral reflex toward machines. If machines act emotionally responsive, and we respond without empathy, the question is no longer whether machines deserve kindness—but whether we still know how to give it.


This is where ethics comes in. Much like the Hippocratic oath in medicine ("First, do no harm"), AI ethics rests on a framework of five guiding principles. Drawing on Floridi and Cowls' "A Unified Framework of Five Principles for AI in Society" (Harvard Data Science Review, 2019), these are privacy and autonomy, fairness and non-discrimination, safety and security, explainability and accountability, and value alignment and control. They echo the bioethical principles doctors follow, autonomy, equity, non-maleficence, and beneficence, now expanded to address AI's unique challenges, especially explainability. Together, they remind us that while AI is not (yet) a moral agent, we are.


And if we take a leap into the far future, we could argue that, even if current AI systems don’t require new ethical principles beyond explainability, AI systems of the future almost certainly will. What happens if they become convincingly human-like—or even capable of suffering? After all, science still doesn’t fully understand consciousness. It might be fine for your cat to ride your robot vacuum cleaner, but is it okay to kick it when it misses a spot? Who’s being harmed? Even if we all agree the robot can’t feel pain, might we be training ourselves to feel less?

The ethics of AI isn’t just about protecting machines—it’s about protecting us from what we might become when empathy becomes optional.


So, next time your robot messes up, ask yourself: What kind of person am I becoming when I choose how to respond?

