At an academic conference recently, an interesting reflection came up in a conversation with a teenager about modern relationships and their dynamics for the younger generation. The teenager mentioned that a friend of theirs “used GPT to write an apologetic love letter when they had a fight with their partner”, and that most of the friends this teenager knew were doing something similar in their own relationships. In fact, many conversations were being ‘simulated’, or driven, using AI-generated text.
A second instance emerged in a recent article about a judge in New Zealand who found it hard to adjudicate a case in which an arson convict had been asked to write an apology to the victims and the court. On a brief look, the judge discovered that the letter, though articulate and seemingly sincere, appeared to be AI-generated. The judge reasoned, “When one is considering the genuineness of an individual’s remorse, simply producing a computer-generated letter does not really take me anywhere as far as I am concerned.” He offered only a 5% reduction in sentence, as opposed to the 10% requested by the defendant and their lawyer. Similarly, a study at the University of Kent found that people perceive those who use AI negatively, as ‘lazy and less trustworthy’ (Claessens et al., 2025).
But what could this say about the larger field of AI use in human activity? Do we perceive the use of AI differently in different contexts, and if so, when are we more averse to AI use, and why?
AI use in our daily life
Statistical analysis of AI use is quite telling. The global generative AI market was estimated at $1.75 billion in 2022, with a compound annual growth rate (CAGR) of roughly 80%. According to OpenAI’s own declarations, and rough estimates drawn from user surveys and corporate reports, AI use is booming across demographics (except older generations), geographies (with almost all countries in the Global South reflecting a ‘high enthusiasm’ score) and purposes. ChatGPT had over 800 million weekly active users, and the market now serves over 1 billion frequent users combined. Harvard Business Review’s statistics show that the top use of generative AI is for ‘therapy and companionship’. How this usage typology is defined remains unclear: it falls under the broad category of ‘personal and professional support’, which can range from supporting basic daily tasks and organising work calendars to mental health support and queries about basic communication, with multiple motivations behind the desired output.

What is clear as a trend, however, is that people are using generative AI less for ‘technical support’, such as refining computer code and running theoretical or statistical analysis, and more for emotional or ‘relational support’, which hinges on contextual and interpersonal bridging. This picture of generative AI use sits uncomfortably alongside the above-mentioned research showing that people perceive others’ use of AI negatively. By and large, people who use AI to support the processing or expression of their thoughts are viewed unfavourably, and distrust in others’ communication is visible in how much respondents overestimate AI use among the people around them.
The larger point I aim to highlight lies in this very gap. I define the use of AI as an emotive tool to reduce effort or manage complexity in planning relational interactions or human communication as ‘supported communication’. The above studies show that while we judge people who rely on ‘supported communication’ less favourably, we are simultaneously moving towards using it more and more in our daily communication practice. We can ask why that is so, and I return to two insidiously linked questions. The first draws on the social theory of ‘alienation’, which describes humans losing attachment to, and visibility of, social processes as we walk deeper into capitalist lifestyles; the time poverty generated by gruelling modern work life and labour is something I also aim to shed light on here. The second question concerns patriarchal value systems and their effect on defining what counts as legitimate ‘work’.
Time poverty in capitalism and supported communication as care work
In his book Time, capitalism and alienation: a socio-historical inquiry into the making of modern time, Martineau argues that under the capitalist organisation of society, one of the major abstractions we struggle with is that of ‘clock-time’. Clock-time became the most influential and governing typology of time around the Industrial Revolution, used to motivate and control workers’ time, displacing the more natural temporal rhythms we experience, from seasonal rhythms to bodily rhythms, which work more synchronistically with natural and social behaviours. Our natural body clocks, circadian rhythms and menstrual rhythms work in complex and grounded ways, as opposed to ‘abstracted’ and rigidly quantified time units like hours and days. This dissonance is what Martineau and other scholars posit as generating a new form of alienation under capitalism: that of ‘time poverty’.
This is where we encounter our current muddle: when most of our time is to be dedicated to ‘productive tasks’ (studying, if one is young, or job-oriented work and organisation as one grows up), other ‘relational’ tasks remain less valued and under threat from the productivity-orientation of capitalism. One must strive to be a better student rather than a highly emotionally intelligent partner or friend, as the latter roles are ‘unproductive’. Here I come to my second position, which asks: when productivity is narrowly defined as ‘activity which must lead to some monetary gain’, what happens to other forms of equally important but devalued labour?
Patriarchal productivity and low-value emotional communication
These ‘unproductive’ activities are what Arlene Daniels defines as ‘invisible work’: the labour needed to socially reproduce the workforce for the next day on the job. This involves domestic labour, household chores and the other administrative tasks one must do to create household life and stable domestic operations. In her work ‘Valuing Women’s Unpaid Work: Experiences From Odisha’, Meenakshi Bose classifies the skills required to perform care work into a specific ‘knowledge, skills and attitude’ matrix. She finds, though, that care work is generally devalued and undervalued, both because of misperceptions of care work as ‘duty’ and because of structural constraints in national economic calculation systems.
In modern life and workplaces, this definition of invisible work has expanded to incorporate tasks like organising a birthday party for your friends, clearing out shared kitchens (of dirty dishes and used teabags), and taking notes in meetings. What is also critical is that, intersectionally, these tasks typically fall within the domain of ‘women’s work’ at the household level, while at the societal level they are performed by socio-economic, cultural or sexual minorities in shared public spaces.
It is in this modern social arrangement, when one becomes too ‘pinched for time’ for these tasks, that ‘supported communication’ is born. It can take the shape of an AI-drafted birthday card, an AI-generated apology letter (as in the court case above), or a generative AI ‘confidant’ to trust with one’s secrets and to offer validation. However, since such invisible work was devalued under patriarchal notions of legitimate work, which was severed from ‘care work’, such supported communication is also looked down upon and chastised. In an investigation in the Vashi Naka slums of Mumbai, researchers from the Tata Institute of Social Sciences found that women spent far more time on unpaid domestic labour (6.7 hours) than men (1.7 hours). Male respondents largely dismissed domestic work as a female responsibility.
The steady danger of unvalued and invisible AI-supported communication
AI has silently and invisibly taken over the role of a care worker in a fast-paced modern social structure which has long devalued and invisibilised any work that is ‘non-productive’. With productivity operating within strict definitional boundaries of ‘career progression’ and ‘monetary value-making’, AI has stepped into a role that was traditionally filled by care workers and treated as a social function rather than an individualised challenge. In an era of ever-shrinking social circles and drastically cut budgets for shared and public infrastructure for social support and mental care labour, time poverty is directing how we use AI and its generative abilities. There are many pitfalls here as well: Caldwell and Fisher show in their study with youth that AI companions display increased ‘sycophancy’, tending to agree with the user rather than offering critical and relational perspectives. This sits alongside the ever-present data, privacy and abuse concerns for teenagers using the internet. If everyone is ‘bowling alone’ (a phrase borrowed from Robert Putnam’s seminal study on growing loneliness in a capitalist USA), then whose fault is it if most communication is supported communication?
If we look back to Meenakshi Bose’s work, she demonstrates that one of the key undervalued skills is the relational complexity of tasks, which is also missed in a ‘time-use’ labour paradigm. The struggle, then, lies not with AI use for care work, but with the gendered notions of care work itself that dominate our notions of productivity. These notions push people to ‘offload non-productive tasks’ that require relational and emotional intelligence onto a machine that is, unfortunately, not learning about them the way another human close to them would.

