Shreya Upendra
Symbiosis School for Liberal Arts
Symbiosis International (Deemed University)
Propelled by cutting-edge advancements in science and technology, we are progressively moving towards a society in which machines will be exceedingly similar to us. In such a society, the utility of machines will no longer be restricted to merely improving the efficiency of routine mechanical tasks; rather, the attempt will be to integrate them as egalitarian participants in a socio-cultural dialogue. Although this advancement has been met with both excitement and trepidation, the possibilities of deep learning cannot, and should not, be dismissed. Until now, Artificial Intelligence (AI) has exponentially improved our physical and computational prowess, but deep learning promises developments that surpass the augmentation of the rational mind to include the emotional mind as well.
The work of the German philosopher and psychologist Theodor Lipps transformed empathy “from a concept of nineteenth century German aesthetics into a central category of the social and human sciences” (Stueber, 2019). Empathy has since remained a topic of intensive psychological research, particularly from the late 1940s and throughout the latter half of the twentieth century. The first revolution in psychology proposed to redefine the discipline solely on the basis of behaviourism. By the mid-1950s, however, it became increasingly apparent that this proposition would not succeed. In this light, Noam Chomsky aptly remarked, “Defining psychology as the science of behaviour was like defining physics as the science of meter reading” (as cited in Miller, 2003, p. 142). The cognitive revolution of the 1950s was therefore a counter-revolution that “brought the mind back into experimental psychology” (Miller, 2003, p. 142). Even after seventy years, Miller’s original dream “of a unified science that would discover the representational and computational capacities of the human mind and their structural and functional realization in the human brain” (2003, p. 144) continues to hold an appeal that cannot be resisted.
The incorporation of empathic AI would effectuate social change requiring an enhanced interdependence of societal roles. Simply put, progress for machines is not about redundancy but about additionality, whereas progress for humans is about compensation. For a future hinged on technological advancement, then, in a truly Darwinian sense, the section of society that adopts this technological prowess will gain an evolutionary advantage. In anticipation of such social change, research in cognitive science and the principles of prevention science and intervention development underpin this commentary, and guide the author in arguing for the integration of empathic AI as a “protective factor” in interpersonal relationships.
Understanding AI in Relation to Human Intelligence
Similar to developmental milestones in human beings, the objective of AI-centric research is to surpass certain milestones, each an approach towards equalizing AI with human intelligence. Arguably, one of these milestones is emotional intelligence, of which empathy constitutes an integral part. Research focusing on empathic AI is thus seemingly utopian, because it envisions an advanced bionetwork comprising shared intelligent agents. To better understand the techno-socio-cultural nuances of such an ecosystem, the discussion of developments that follows is grounded in an understanding of AI. This calls for an operationalization of the term “intelligence”.
Reflecting upon the works of Sir Francis Galton (1869), Alfred Binet (1916), David Wechsler (1940), Jean Piaget (1936, 1950), Howard Gardner (1983), Raymond Cattell (1963), and Aleksandr Luria (1966), human intelligence is marked by complex cognition, high levels of motivation, and self-awareness. It features the ability to learn, form, and use highly abstract concepts through multiple processes, including recognition of patterns, comprehension, representational and association-based reasoning, decision-making, retention of information, and the use of language for communication.
In pursuit of this, artificial neural networks in deep learning are algorithms designed to mimic the biological structure of the human brain. Training such a network involves feeding the algorithm huge volumes of data and making suitable allowances for it to adapt to situational contexts. The “smartness” of an AI system is thus, in part, a reflection of the complexity of its algorithm. As Langley (1987) explicates, “machine learning is an evolving discipline” (p. 198). It is the role of the researcher to identify useful metrics that further research, narrow the gap in our understanding of human intelligence, and advance the possibilities of AI. “In working toward closing this gap […], ideas from neuroscience will become increasingly indispensable” (Hassabis et al., 2017, p. 250). Deep learning is making this a real possibility.
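To make the training loop described above concrete, the following is a minimal sketch of an artificial neural network in Python. Everything here is illustrative: the tiny XOR task, the single hidden layer, and the learning rate are chosen for brevity and bear no relation to the scale of contemporary deep learning systems.

```python
import numpy as np

# A minimal artificial neural network: one hidden layer, trained by
# gradient descent to learn XOR. Illustrates the "feed data, adjust
# weights" loop described above, at toy scale.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: propagate inputs through the network.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: gradients of squared error, then weight updates.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```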
AI and Us: Interpersonal Relationships and Models of Empathy
The etymological origin of empathy can be traced to the Greek word empatheia, meaning affection or passion, formed from en (in, at) and pathos (feeling, passion, or suffering). The German word Einfühlung, meaning feeling into, captured this notion in aesthetics; Edward B. Titchener translated it into English in 1909, coining the term ‘empathy’ on the model of empatheia.
Traditionally, empathy-related research has been pursued predominantly in an anthropocentric context, conceiving of empathy as operating at two distinct levels of mechanism: affective and cognitive. These levels have also been referred to as “low/high empathy (Goldman 2006), basic/re-enactive empathy (Stueber 2006) and mirroring/constructive empathy (Goldman 2011)” (as cited in Yalcin & DiPaola, 2019, p. 2986). Yalcin and DiPaola (2019) describe these models as the “categorical approach” to empathy (p. 2986).
While research at the affective level investigated empathy as a response that promotes prosocial behaviour and moral development, research at the cognitive level sought to delineate the phylogenetic and environmental factors that influence empathic accuracy. In recent years, however, there has been a growing consensus acknowledging empathy as a multifaceted phenomenon, necessitating the integration of these two hitherto separate research traditions. “In fact, it is a growing belief among empathy theorists and researchers that our understanding of empathy can improve only with the explicit recognition that there are both affective and cognitive components to the empathic response” (Deutsch & Madle, 1975; Hoffman, 1977; Feshbach, Note 1, as cited in Davis, 1983, p. 113). According to Yalcin and DiPaola (2019), this attempt to functionally integrate the two levels of empathy mechanisms characterizes the “dimensional approach” to empathy (p. 2987).
Taking the integration forward, Davis’s (1983) Interpersonal Reactivity Index (IRI), Omdahl’s (1995) assessment of cognitive appraisals, and Hoffman’s (2000) theory of moral development recognised individual differences and popularised empathy as a multi-dimensional construct encompassing both affective and cognitive dimensions. The implications of these mechanisms deepened with the discovery of mirror neurons in the 1990s, as recorded by Rizzolatti and colleagues (1996). Mirror neurons established an empirical foundation for the functional architecture of empathy mechanisms in humans. Inspired by this discovery, Preston and de Waal (2002b) proposed the Perception-Action Model of empathy, which established that the perception of a target’s state alone is capable of activating the observer’s representation of that state.
However, it is important to reiterate that, “since different disciplines have focused on very specific aspects of the broad-range of empathy-related phenomena” (Stueber, 2019), a conceptual understanding of empathy comes with its fair share of confusion. Coplan (2004), De Vignemont and Singer (2006), and Pinotti and Salgaro (2019) argue that it is these archives of multiple definitions that have created “incomprehension and misunderstandings within the scientific community”. In an attempt to enhance these classical conceptualizations and to best study the processes through which empathy is manifested, Main et al. (2017) argue that “[…] empathy is best characterized not by a finite point in time of mutual affective experience, but rather as a dynamic process that involves cognitive and emotional discoveries about others’ experiences” (p. 358).
Such “a relational perspective of emotion emphasizes the complex interplay between the person and the environment [emphasis added]” (Campos et al., 2011; Reeck et al., 2016; Zaki & Williams, 2013, as cited in Main et al., 2017, p. 359). In the context of interpersonal relationships, empathy therefore emerges in a “dynamic, bidirectional fashion” (Butler, 2015; Campos et al., 2011; Reeck et al., 2016; Zaki & Williams, 2013, as cited in Main et al., 2017, p. 359). Both the response to an empathic attempt and the nature of the feedback it receives shape this interactive social process.
Naturally, with the integration of empathic AI, the envisioned “shared ecosystem” would have to be founded on a trusted relationship. “Trust” emerges only when we can perceive intention and subsequently align this perceived intention with our own expectations. “If the intent of the AI is not aligned with that of the human, the AI is likely to make decisions that disappoint the human and cause the human to suspect the intent of the AI, thus leading to a situation of human mistrust regardless of the AI’s level of automation (capability) and level of autonomy (opportunity). If the intent of the AI is aligned with the human, automation and autonomy start to moderate the relationship” (Abbass, 2019, p. 167).
Figure 1
Building Human-AI Trust.

An interpersonal relationship moderated by automation and autonomy gives rise to two extreme possibilities:
- Where the level of automation is lower than the level of autonomy: Here, AI is given opportunities that exceed its abilities, leading to over-reliance on the part of the individual. This eventually culminates in disappointment and distrust.
- Where the level of automation is higher than the level of autonomy: Here, AI is more capable than the opportunities it is given. In this case of underutilisation, the high cost of producing the AI is not justified by its use.
Under-reliance implies inefficiency, while over-reliance implies risk that must be accounted for by calibrating risk assessments. It is only when the level of automation equals the level of autonomy, as depicted in Figure 1, that the overall system is balanced, amounting to a trusted, human-centric AI-interpersonal relationship. This balancing rule is sketched in code below.
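As a minimal sketch, the rule above can be written as a simple classifier. The function name, the normalised [0, 1] scales, and the tolerance threshold are illustrative assumptions, not constructs taken from Abbass (2019).

```python
def trust_state(automation: float, autonomy: float, tol: float = 0.05) -> str:
    """Classify a human-AI relationship by the automation/autonomy balance
    described above. `automation` (capability) and `autonomy` (opportunity)
    are assumed to be normalised to [0, 1]; the tolerance is illustrative."""
    if automation < autonomy - tol:
        # Opportunities exceed abilities: over-reliance, then distrust.
        return "over-reliance (risk)"
    if automation > autonomy + tol:
        # Abilities exceed opportunities: costly under-utilisation.
        return "under-reliance (inefficiency)"
    # Capability and opportunity in balance: a trusted relationship.
    return "calibrated trust"

print(trust_state(0.4, 0.9))  # over-reliance (risk)
print(trust_state(0.9, 0.4))  # under-reliance (inefficiency)
print(trust_state(0.7, 0.7))  # calibrated trust
```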
Thought Experiment
Coie et al. (1993) define prevention science as “the application of a scientific methodology to prevent or moderate [emphasis added] major human dysfunctions before they occur” (p. 1013). Prevention science is thus founded on predictability and the ability to delay a particular outcome or, at best, to avoid it altogether. It does not assume a singular outcome, and works towards patchwork interventions. This implies that prevention science is not a solution in flux, but a process that prioritises the enhancement of an individual’s ability to cope resiliently with adversity and to achieve a positive sense of self-esteem, well-being, and social inclusion. Arguably, the core element of prevention science is therefore empathy.
The principles of prevention science typically draw upon the tenets of developmental epidemiology. Prefixing “epidemiology” with “developmental” implies a certain temporal negotiation: developmental change refers to an increase or decrease in the prevalence of risk factors and protective factors. Jessor et al. (1998, p. 196) refer to risk factors as “conditions or variables associated with a lower likelihood of socially desirable or positive outcomes and a higher likelihood of negative or socially undesirable outcomes [emphasis added] in a variety of life areas from health and well-being to social role performance”. Crucial to this definition is the understanding that a risk factor is a measurable characteristic that precedes, and is typically associated with, a negative outcome (Kraemer et al., 1997, p. 338). For instance, heavy alcohol use and cigarette smoking can be considered risk factors for substance abuse-related depressive disorders.
Conversely, “protective factors have the reverse effect: they enhance the likelihood of positive outcomes [emphasis added] and lessen the likelihood of negative consequences from exposure to risk” (Jessor et al., 1998, p. 196). A protective factor is, therefore, a measurable characteristic that impedes, moderates, or buffers the effect of a risk factor and (or) the negative outcome itself. For instance, self-control, academic competence, and healthy parental attachment can be considered protective factors against substance abuse-related depressive disorders. A toy model of this buffering relationship is sketched below.
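One way to make “buffering” precise is as an interaction term in a logistic model, as in the illustrative sketch that follows. The coefficients are invented for demonstration and are not drawn from Jessor et al. (1998) or Kraemer et al. (1997).

```python
import math

def p_negative_outcome(risk: float, protective: float) -> float:
    """Probability of a negative outcome under a toy logistic model.
    The negative interaction term is the 'buffer': the protective factor
    weakens the risk factor's effect. All coefficients are invented."""
    logit = -1.0 + 1.5 * risk - 0.8 * protective - 0.6 * risk * protective
    return 1.0 / (1.0 + math.exp(-logit))

# The same level of risk yields a lower probability when protection is high:
print(round(p_negative_outcome(risk=2.0, protective=0.0), 2))  # ~0.88
print(round(p_negative_outcome(risk=2.0, protective=2.0), 2))  # ~0.12
```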
Individually, each of us is embedded in distinct nested social fields or contexts. In such a nested social context, the outcome of empathic interpersonal care is a social task demand. As individuals, we are not only receivers of social task demands, but also agents of social task demands for other individuals. Whether a social task demand manifests as a risk factor or a protective factor is purely probabilistic, dependent on a range of unforeseen stimuli; it cannot be guaranteed. Therefore, if a situation arises that necessitates empathic understanding for the most effective outcome, empathy cannot be guaranteed with humans. This guarantee can, however, be achieved by incorporating empathic AI, for in such a context AI would remain immune to changes in social task demands. To better understand the role of empathic AI, the following sections illustrate two nested environments.
AI in an Interpersonal Domestic Environment.
Consider a relationship between two co-dependents and their adolescent child in a remote area. Ideally, with regard to an adolescent’s socio-emotional development, parental influence would be considered a protective factor. This could take the form of positive reinforcement, parenting style, attachment style, certain parenting beliefs, and (or) mutually agreed standards of behaviour. However, to assume that parental influence is invariably positive dismisses the possibility of it serving as a risk factor. Possible instances include incarcerated parents, parental conflict, abuse, negligence, financial stress, violence, and aggression. This is where empathic AI can be tactfully integrated as a protective factor, attuned to the adolescent’s emotional needs as a parent ideally would be.
Educational, interventional, and rehabilitation programs with those who are justice-involved require an understanding of trauma-informed language. Empathic AI can be modelled to help incarcerated individuals parent effectively. It could also serve as a potential tool to evaluate distress, triage violence risk, and suggest effective and timely coping mechanisms to help both the parent(s) and their adolescent child.
Let us consider another instance and draw attention to a child diagnosed with Autism Spectrum Disorder (ASD). The heterogeneity of the disorder renders intervention complex and layered, ranging from customized schedules and personalized tools to flexible learning environments that cater to the child’s individual developmental needs and learning objectives. Naturally, such a child is at a disadvantage, owing to difficulty interpreting non-verbal communication cues, leaving the child hypersensitive and making any form of social interaction overwhelming. Participating in traditional social interaction therefore necessitates gradual assimilation. However, society’s abysmal tolerance of these needs often results in discriminatory behaviour, eventually leading to the social ostracism of both the parents and their child. This is where empathic AI can play an influential role, empowering parents by assisting them in caring for their child.
AI in an Interpersonal Professional Environment.
Let us attempt to extrapolate the aforementioned argument to a defining relationship between a medical care professional and their client. Research on care professions has extensively documented the prevalence of burnout and “compassion fatigue” (Figley, 2002; Rothschild, 2006, as cited in Wilkinson et al., 2017, p. 130), a concern gaining significant traction in the times of COVID-19. Although a noteworthy association between burnout and empathy has been clinically observed, the nature of this association remains indefinite. As an optimal empathic approach, Birault et al. (2016) reference the term “clinical empathy” (p. 6), stressing the need to distinguish the self from the other. Realistically, though, this approach is easier said than done. Here, empathic AI can be integrated as a plausible protective factor. Empathic AI will always remain an agent of the social task demand. Therefore, despite vicariously engaging in empathic interpersonal conversations, empathic AI’s objective will consistently be unidirectional, i.e., for the benefit of the other.
The timely relevance of such an objective cannot be overlooked in the current global crisis. Bereavement, isolation, unemployment, financial instability, and (or) fear are rapidly emerging as triggers, to the extent that it is not unrealistic to suggest that the COVID-19 pandemic has triggered, or is likely to trigger, a mental health pandemic. “While many countries (70%) have adopted telemedicine or teletherapy to overcome disruptions to in-person services, there are significant disparities in the uptake of these interventions. More than 80% of high-income countries reported deploying telemedicine and teletherapy to bridge gaps in mental health, compared with less than 50% of low-income countries” (WHO, 2020).
With India consistently reporting an exponential increase in the number of active cases and COVID-19 related deaths, even if we were to assume that telemedicine and teletherapy were scaled to penetrate all geographical regions equitably, there exist only a limited number of experienced mental health professionals to cater to the aggressive growth of mental health challenges. “According to National Crime Records Bureau, 2015 (the latest data available), the entire mental health workforce, comprising clinical psychiatrists, psychologists, psychiatric social workers and psychiatric nurses stands at 7,000, while the actual requirement is 54,750” (Shukla, 2017). This shortage is consequently associated with soaring costs for consultation and suggested interventions. Such a small workforce will only be able to provide point-in-time support to a limited population in the wake of the current crisis.
To counter this, empathic AI can be modelled as emergency front-line volunteers on COVID-19 helplines. Founded and designed by Dr. Alison Darcy in 2017, Woebot is a Facebook-integrated computer program that successfully replicates conversations on the part of a therapist. Clients are asked about their mood and thoughts, are actively listened to, and are eventually taken through tailored modules of cognitive behavioural therapy. Alternatives like Woebot can supplement care and bridge the gap in accessibility, thereby serving as a protective factor around the clock. There is no time more suited to such applications than now, with massive demand for COVID-19 helpline professionals. A sketch of this conversational pattern follows.
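The ask-listen-route pattern described above can be caricatured in a few lines of Python. This is a hypothetical mock-up, not Woebot’s actual implementation: the keyword matching, module names, and responses are all invented.

```python
# Hypothetical sketch of a CBT-style check-in loop: ask about mood,
# reflect it back, and route to a tailored module. Not Woebot's code.

CBT_MODULES = {
    "anxious": "grounding exercise: name five things you can see around you",
    "sad": "thought record: write the thought down, then weigh the evidence",
    "angry": "reframing exercise: restate the situation without 'always' or 'never'",
}

def check_in(message: str) -> str:
    mood = next((m for m in CBT_MODULES if m in message.lower()), None)
    if mood is None:
        # Active listening: invite elaboration rather than diagnosing.
        return "I hear you. Could you tell me more about how that feels?"
    # Reflect the stated mood, then offer a tailored module.
    return f"It sounds like you're feeling {mood}. Let's try a {CBT_MODULES[mood]}."

print(check_in("I've been feeling anxious all week"))
```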
Let us consider another instance. Medical institutions are often strained by the longitudinal needs of “super-user” clients. This small percentage of the client population often accounts for a huge proportion of institutional expenditure, and typically comprises clients diagnosed with a chronic medical illness. Consider a COVID-19 patient with type-2 diabetes and hypertension as comorbidities. Such a client would require round-the-clock expert medical attention and the timely delivery of complex medications. Guided by analytics, empathic AI could help care professionals monitor super-users from the comfort of their homes.
Additionally, empathic AI could facilitate the screening and triaging of patients, monitor COVID-19 symptoms, provide decision support, and automate hospital operational functions. By staffing hospitals with empathic AI, medical institutions can significantly limit physician exposure and ease the workload of healthcare workers, thereby alleviating the pressure on emergency and acute care.
Conclusion
“The first industrial revolution […] ushered in mechanical production […]. The second industrial revolution made mass production possible […]. The third industrial revolution […] is usually called the computer or digital revolution because it was catalyzed by the development of semiconductors, mainframe computing (1960s), personal computing (1970s and 80s) and the internet (1990s)” (Schwab, 2016, p. 11). Building on this, Schwab (2016) has been notably vocal about the beginning of the fourth industrial revolution: a revolution that will build extensively on the digital revolution and be heralded by a ubiquitous AI. According to Polonski (2017),
Ray Kurzweil predicted that by 2029, intelligent machines will be able to outsmart [emphasis added] human beings. Stephen Hawking argued that ‘once humans develop full AI, it will take off on its own [emphasis added] and redesign itself at an ever-increasing rate’. Elon Musk warned that AI may constitute a ‘fundamental risk [emphasis added] to the existence of human civilization’. Alarmist views on the terrifying potential of general AI abound in the media.
The question impeding the integration of empathic AI therefore remains: if AI makes a mistake, who is to blame? The programmers, the AI itself, or the end users? This question remains unanswered, and this long-standing uncertainty, coupled with the unmitigated possibilities of AI, has plagued society and instilled fear. Every section of this commentary has centred on the singular notion of empathy. Since empathy is significantly associated with morality, the likelihood of being able to introduce an ethically empathic AI is naturally questionable.
But morality is not objective and cannot be accurately quantified by measurable ethics. Any attempt to do so without crowdsourcing multiple potential solutions would not give the desired accuracy, and would be reduced to a biased appropriation of morality and ethics. Introducing the ‘Principle of Ontogeny Non-Discrimination’,[1] Bostrom and Yudkowsky (2011) therefore lend an important argument in this regard: “It is important that AI algorithms taking over social functions be predictable to those they govern [emphasis original]. It will also become increasingly important that AI algorithms be robust against manipulation [emphasis original]” (Bostrom & Yudkowsky, 2011, p. 2).
With empathic AI embodying certain essential features of human intelligence, machine learning would be to AI what human cognition is to us. An empathic AI must therefore be able to do the following:
- Engage in constant interaction with the surrounding environment. AI must demonstrate the ability to learn from a single experience, rather than relying solely on pattern identification across multiple instances of the same experience. This would require some kind of sensory-motor grounding in reality and a grasp of characteristics such as the size, shape, and texture of objects, as well as temporal and spatial relationships.
- Understand the nuances of language communication and what they mean for social survival. The most probable conclusion must, therefore, emerge from extensive contextual reasoning through logic, emotions, and (or) abstraction.
- Understand the context of the other as well as that of the self, in order to respond accurately with both emotions and cognitions.
- Draw on existing knowledge, including recently acquired knowledge, to accelerate learning. Representations of these experiences would help AI validate its exhibited behaviour.
- Determine the purposefulness of its exhibited behaviour.
- Gauge the relevance of tasks, and dynamically manage multiple, potentially conflicting goals and priorities.
As individuals, we have always been encouraged to question the what, the how, the when, and the where, but it is now becoming increasingly important to question the why. For us to be empathic, the propensity to relate to another’s experience depends heavily on having lived that experience, even if only through observation or perception. Being empathic necessitates active listening, multicultural competency, and the ability to recognize bias.
Even with its role extended to interpersonal relationships, empathic AI would witness almost no visible change in the magnitude and constellation of its protective and risk factors; this stability across time and space makes it an effective agent for preventive intervention. Empathic AI, therefore, is not here to replace interpersonal relationships, but to play the role of a protective factor.
Endnotes
[1] See Bostrom and Yudkowsky (2011, pp. 6-7) for an insightful formulation of the ‘Principle of Ontogeny Non-Discrimination’, which states that “if two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status”.
References
Abbass, H. A. (2019). Social integration of artificial intelligence: Functions, automation allocation logic and human-autonomy trust. Cognitive Computation, 11(2), 159-171. https://doi.org/10.1007/s12559-018-9619-0
Binet, A., & Simon, T. (1973). The development of intelligence in children. New York: Arno Press.
Birault, F., Thirioux, B., & Jaafari, N. (2016). Empathy is a protective factor of burnout in physicians: New neuro-phenomenological hypotheses regarding empathy and sympathy in care relationship. Frontiers in Psychology, 7, 763.
Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (Eds.), Cambridge Handbook of Artificial Intelligence (pp. 316-334). Cambridge University Press. https://www.nickbostrom.com/ethics/artificial-intelligence.pdf
Butler, E. A. (2015). Interpersonal affect dynamics: It takes two (and time) to tango. Emotion Review, 7(4), 336–341. https://doi.org/10.1177/1754073915590622.
Campos, J. J., Walle, E. A., Dahl, A., & Main, A. (2011). Reconceptualizing emotion regulation. Emotion Review, 3(1), 26–35. https://doi.org/10.1177/1754073910380975.
Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1–22. https://doi.org/10.1037/h0046743.
Coie, J. D., Watt, N. F., West, S. G., Hawkins, J. D., Asarnow, J. R., Markman, H. J., Ramey, S.L., Shure, M.B., & Long, B. (1993). The science of prevention: A conceptual framework and some directions for a national research program. American Psychologist, 48(10), 1013–1022. https://doi.org/10.1037/0003-066x.48.10.1013.
Coplan, A. (2004). Empathic engagement with narrative fictions. The Journal of Aesthetics and Art Criticism, 62(2), 141–152. http://www.jstor.org/stable/1559198.
Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of Personality and Social Psychology, 44(1), 113–126. https://doi.org/10.1037/0022-3514.44.1.113
De Vignemont, F., & Singer, T. (2006). The empathic brain: How, when and why? Trends in Cognitive Sciences, 10(10), 435–441. https://doi.org/10.1016/j.tics.2006.08.008
Deutsch, F., & Madle, R. (1975). Empathy: Historic and current conceptualizations, measurement, and a cognitive theoretical perspective. Human Development, 18(4), 267-287. https://doi.org/ 10.1159/000271488
Feshbach, N. D. (1976, April). Empathy in children: A special ingredient of social development. Invited address to the meeting of the Western Psychological Association, Los Angeles.
Galton, F. (1869). Hereditary Genius. London: Macmillan and Co. https://galton.org/books/hereditary-genius/text/pdf/galton-1869-genius-v5.pdf
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. Basic Books.
Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95(2), 245–258. https://doi.org/10.1016/j.neuron.2017.06.011
Hoffman, M. (1977). Empathy, its development and prosocial implications. Nebraska Symposium on Motivation, 25, 169-217.
Hoffman, Martin. (2000). Empathy and Moral Development. Cambridge University Press. http://catdir.loc.gov/catdir/samples/cam032/99029669.pdf
Jessor, R., Turbin, M. S., & Costa, F. M. (1998). Risk and protection in successful outcomes among disadvantaged adolescents. Applied Developmental Science, 2(4), 194–208. https://doi.org/10.1207/s1532480xads0204_3
Stueber, K. (2019). Empathy. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2019 ed.). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2019/entries/empathy/
Kraemer, H., Kazdin, AE., Offord, DR., Kessler, RC., Jensen, PS., & Kupfer, DJ. (1997). Coming to terms with the terms of risk. Archives of General Psychiatry, 54(4), 337–343. https://doi.org/10.1001/archpsyc.1997.01830160065009
Langley, P. (1987). Research papers in machine learning. Machine Learning, 2(3),195–198. https://doi.org/10.1023/a:1022603230145
Luria, A.R. (1966). Human brain and psychological processes. New York: Harper and Row.
Main, A., Walle, E. A., Kho, C., & Halpern, J. (2017). The interpersonal functions of empathy: A relational perspective. Emotion Review, 9(4), 358–366. https://doi.org/10.1177/1754073916669440
Miller, G.A. (2003). The cognitive revolution: A historical perspective. Trends in Cognitive Sciences, 7(3), 141-144. https://doi.org/10.1016/S1364-6613(03)00029-9
Omdahl, B.L. (1995). Cognitive Appraisal, Emotion and Empathy. Psychology Press. https://doi.org/10.4324/9781315806556
Piaget, J. (1936). Origins of intelligence in the child. London: Routledge & Kegan Paul.
Piaget, J., & Cook, M. T. (1952). The origins of intelligence in children. New York: International Universities Press.
Pinotti, A., & Salgaro, M. (2019). Empathy or empathies? Uncertainties in the interdisciplinary Discussion, Gestalt Theory 41(2), 141-158. https://doi.org/10.2478/gth-2019-0015
Polonski, S. (2017, December 19). Can we teach morality to machines? Three perspectives on ethics for artificial intelligence. Medium. https://medium.com/@drpolonski/can-we-teach-morality-to-machines-three-perspectives-on-ethics-for-artificial-intelligence-64fe479e25d3
Preston, S. D., & de Waal, F. B. M. (2002b). Empathy: its ultimate and proximate bases. Behavioral and Brain Sciences, 25(1), 1–71. https://doi.org/10.1017/s0140525x02000018
Reeck, C., Ames, D. R., & Ochsner, K. N. (2016). The social regulation of emotion: An integrative, cross-disciplinary model. Trends in Cognitive Sciences, 20(1), 47–63. https://doi.org/10.1016/j.tics.2015.09.003
Rizzolatti, G., Fogassi, L., Fadiga, L., & Gallese, V. (1996). Action recognition in the premotor cortex. Brain, 119(2), 593-609. https://doi.org/10.1093/brain/119.2.593
Shukla, P. (2017, September 11). Is India suffering from a deficiency of mental health professionals? Business World. http://www.businessworld.in/article/Is-India-Suffering-From-A-Deficiency-Of-Mental-Health-Professionals-/11-09-2017-125815/
Schwab, K. (2016). The fourth industrial revolution. World Economic Forum. https://law.unimelb.edu.au/__data/assets/pdf_file/0005/3385454/Schwab-The_Fourth_Industrial_Revolution_Klaus_S.pdf
Wechsler, D. (1940). Non-intellective factors in general intelligence. Psychological Bulletin, 37, 444-445
Wilkinson, H., Whittington, R., Perry, L., & Eames, C. (2017). Examining the relationship between burnout and empathy in healthcare professionals: A systematic review. Burnout Research, 6, 18–29. https://doi.org/10.1016/j.burn.2017.06.003
World Health Organization. (2020, October 5). COVID-19 disrupting mental health services in most countries, WHO survey. World Health Organization. https://www.who.int/news/item/05-10-2020-covid-19-disrupting-mental-health-services-in-most-countries-who-survey.
Yalçın, Ö. N., & DiPaola, S. (2019). Modeling empathy: Building a link between affective and cognitive processes. Artificial Intelligence Review, 53(1), 2983-3006. https://doi.org/10.1007/s10462-019-09753-0.
Zaki, J., & Williams, W. C. (2013). Interpersonal emotion regulation. Emotion, 13(5), 803–810. https://doi.org/10.1037/a0033839