This summer, MedicalExpo e-magazine is republishing ten of its most popular articles—an opportunity to review the cutting-edge innovations and digital technology that have made an impact in several healthcare sectors this year. Here is the first article in chronological order:
Jean-Michel Besnier is a French philosopher who teaches at the Sorbonne University in Paris. His research focuses on the philosophical and ethical impact of science and technology on individual and collective representations and imagination. We met with him to talk about the consequences of the explosion of robotics and artificial intelligence (AI) in the healthcare sector, especially since the beginning of the Covid-19 pandemic.
(Published on January 3, 2022)
“We Repair the Living, but We Do Not Heal the Human”
MedicalExpo e-magazine: Can you give us your definition of artificial intelligence?
Jean-Michel Besnier: I have the same definition that everyone has. I am more attentive to the conceptual extension of the notion of artificial intelligence, which at the beginning referred to something rather simple, that is to say the implementation of devices capable of solving problems in an automatic or algorithmic way. This was the definition as it appeared in the 1950s. Then progressively AI became something much less precise, and now it is uncontrollable. Now we talk about AI as soon as we are dealing with a device capable of simulating autonomy and adapting to a particular environmental context. Anything is referred to as AI, from our software to conversational agents. AI has exploded in all sectors in a few years and at the same time it has become considerably more commonplace, which is what makes it even more uncontrollable.
How do you explain this conceptual expansion?
JM Besnier: I imagine that part of it is due to some marketing aims. We can also blame the intensive development of automation in our societies with the idea that efficiency is always achieved through automation. This has promoted the somewhat anthropomorphic vision of an intelligence capable of initiative. There is a lot of anthropomorphism and animism in the way we approach AI. This is a problem because all the fantasies that we develop stem from this kind of fascination that we have for machines that function on the basis of cognitive mechanisms.
Is the intensive development of AI in healthcare leading to dehumanization in this field?
JM Besnier: Yes, everyone is aware of this. Some people complain about it, others are happy about it. There is a dehumanization because we lose sight of what care is and we only think in terms of repairing the human body. Doctors themselves have integrated this mechanistic repair vocabulary when referring to their approach to patients. Care implies taking into account the interiority, the inner life of the patient. If we only use machines intended to produce effects, we lose this dimension of interiority and it is the language itself that basically disappears. We repair the living, but we do not heal the human.
It is good to repair the living, and we cannot reproach medicine for doing so, but at the same time we completely hide the human dimension that comes with having a language, with being able to dialogue with the patient, with building representations, etc. For example, I am involved in a research group at the French National Center for Scientific Research (CNRS) on sleep apnea. Machines are used to diagnose and treat sleep apnea, and the pulmonologists I work with are almost happy that they no longer need to see patients in consultation, or at least that they no longer have to elucidate, through words and dialogue, the sleep disorders that may be present in a patient's case.
Why are some doctors happy about this?
JM Besnier: Because the implementation of AI removes a significant workload: it allows them to be more efficient, to better manage their patients and to consult the data delivered by the machines remotely. They value AI in the sense that it allows them to capitalize on their patient data. Sometimes they may be able to correlate this data and arrive at a much more refined diagnostic approach. But also a much colder one, obviously.
The Covid-19 Effect
Has Covid-19 accentuated this trend?
JM Besnier: Covid created an element of dramatization where all of a sudden survival reflexes triumphed and individuals thought of themselves as threatened animals. There was something primal in the behavior, so it was necessary to do everything possible to preserve the living in oneself. As a result, everything that seemed superfluous in the clinical approach with the doctor was eliminated, minimized and sometimes completely repressed. We saw two systems of value confronting each other: the system of value putting the emphasis entirely on the living being and its requirements, and the system of value which is intended to preserve the symbolic dimension of the human being and which recalls that the human being is more than just the living being.
We are not animals like the others: we have a conscience, we have a history, we have emotions. In the end, what we have lost, because of the situation, is the interactive behavior that has been the guarantee of humanity until now. Touch, for example, has disappeared, which is incredible. We have been confined to a purely visual sensoriality, and barely even that!
What does getting used to this do to the collective unconscious?
JM Besnier: It sets up a phenomenon of de-linking, de-symbolization, de-substantialization. There is something like “every man for himself” in this period when life seems threatened. There is a loss of the sense of the universal and a loss of the collective aim, which also finds symptoms elsewhere. We are in a phase where the individual is stressed and a stressed individual is an individual who loses all substance. This makes a mockery of the idea of collective intelligence where everyone is in relation with each other. On the contrary, it reinforces the old idea according to which cyberspace would only be the linking of individuals who would be like neurons in a brain, who would have no more substance or interiority than a neuron, and who could be active or inactive, inhibited or disinhibited, and that’s all. There is nothing glorious from the human point of view and it is a way of putting us completely under the control of technical devices.
What do you think of these animator robots in retirement homes or humanoid robots in pediatrics?
JM Besnier: These little robots that accompany, for example, seniors suffering from Alzheimer’s are poignant, because we are dealing with people whose ability to communicate is reduced, and suddenly we discover that machines are likely to arouse empathy in them. It is poignant because it is not false: it is obvious that Alzheimer’s patients will be more receptive to the machine than to their family environment.
This has been demonstrated very frequently, and for trivial reasons. When you have a little robot like NAO that is able to sing the nursery rhymes that grandma knows 50 times a day, and to provoke renewed pleasure each time, it is obvious that the robot is much more efficient than the grandson, who will not have this patience. It is terribly, technically efficient! I have watched videos where you see a prostrate person surrounded by members of his or her family, and then all of a sudden little NAO arrives, waddling and singing old songs, and the elderly person’s face wakes up and lights up. This makes the loneliness all the more striking, because the family can say to themselves that if they can be replaced by NAO, perhaps they won’t go every week to see their grandmother.
At the same time if an old person has nobody to visit them, at least they have NAO… It’s the same thing for the people who were all alone during the lockdown and who used conversational agents…
JM Besnier: Yes, but the problem is that it is self-sustaining: a person has NAO because they are lonely and the loneliness increases because of NAO’s presence since NAO makes the family or social environment empty. And NAO is always efficient!
Some Professions Such as Radiologists Will Gradually Disappear
Can AI replace doctors?
JM Besnier: There are sectors in which this is already well underway, such as radiology. I attended a National Congress of Radiologists two years ago and they were all convinced that young people should be discouraged from going into radiology because there would soon be no need for radiologists. Machines are much more efficient than humans in this respect: they have been fed with incredible amounts of data and are now able to recognize tumors and other anomalies from their databases. At this conference, people were telling me that we shouldn’t worry too much, because the machines would allow radiologists to have more time to talk with patients. Radiologists know very well that theirs is a specialty with an extremely reduced clinical relationship with the patient: most of the time the patient goes for an MRI, waits in the waiting room, listens to the radiologist comment on the report for three minutes, then leaves and goes to see their GP. Radiologists say that if the machine does the job of decoding, they can take more time to give the patient a more elaborate commentary. But I don’t believe it; they won’t do it. So radiologists will gradually disappear as such.
Or there could be a more elaborate clinical approach in this sector, perhaps training radiologists in a psychological or psychotherapeutic approach to discuss the sensitive subject of cancer with patients, for example. This could transform professions and specialties rather than necessarily make them disappear. However, I can already see in the vocabulary used that we are recording the disappearance of what we still call, somewhat archaically, the doctor or the physician. We are diluting the ancient figure of the doctor. Doctors are now “healthcare professionals” who manage data, or devices capable of collecting data. Some say that this is very good, that we can treat patients better than before, that patients live longer. There are many reasons to support this evolution, but in terms of dehumanization, it is obvious that we are there.
In addition to this risk of job losses, what other risks can we see arising in the field of health with the explosion of AI?
JM Besnier: The first one I see is a generalized, full-blown hypochondria due to the kind of self-monitoring that individuals are increasingly inflicting on themselves. Everyone will soon be wearing a watch that collects health data in real time and permanently; this is called the “quantified self”. In the past, it was said that health was the silence of the organs; now we have devices that make our organs talk continuously, and this will poison our existence. At the slightest problem, we will rush to our doctor’s office, but also to medications and other treatments. Overdiagnosis and the overconsumption of medication are big problems in our society, as we can see in men for prostate cancer, for example, or in women for breast cancer, where we find hyperbolic approaches. This also comes from health professionals who have machines that they have to test, use and multiply, so we are witnessing a nosography that is becoming more and more dense. We are discovering new symptoms that fit into systems defining new diseases, and certain things that were not considered diseases are becoming so.
Is there also an increased risk of cyber attacks?
JM Besnier: Of course there is. We are going to expose ourselves more and more because we are going to hand ourselves over to machines that will capture information about us that we would not necessarily want to be shared. The Facebook CEO said ten years ago that privacy was an obsolete notion, and this will be confirmed more and more. It goes very well with this de-substantialization I was talking about earlier. If we are nothing more than individuals without consistency, there is no reason why we should not be managed as quantifiable objects.
Algorithms, Transparency, Data Cohorts and Biases
In terms of responsibility, if there is a machine error, who is responsible?
JM Besnier: Lawyers are working a lot on this in terms of robot law. Some jurists consider that it should not be forbidden to provide robots with financial means that could be mobilized in case of liability. You give the robot a nest egg, and this nest egg can be used when mistakes are made that deserve compensation. But you don’t take the robot out of circulation. This establishes a kind of restorative justice.
Who is behind the algorithms of these robots and who checks them?
JM Besnier: We don’t know how the algorithms work; we demand transparency on that, but the designers of the algorithms themselves are increasingly unable to know exactly what their machines can do. These designers are computer scientists, engineers specialized in information processing, mathematicians, etc. There is a whole constellation of specialties involved in creating algorithms. Algorithms are simple things in principle: they are devices that generate automatisms on the basis of a certain number of parameters dictated to them. But emerging phenomena appear because the quantities of data the machines have to swallow are becoming more and more uncontrollable, and they produce perverse effects, or biases, that we had not foreseen. The question of biases is becoming a central issue in cognitive science. The 2021 Nobel Prize in Physics was awarded to researchers who work on the modeling of complex systems. We are exactly in that configuration. Complex systems cannot be described analytically because there are extremely abundant interactions, feedback loops, etc. They generate surprising and unprecedented phenomena which, in the best of cases, we can use but which, in the worst of cases, can force us to submit to them without knowing exactly what is happening to us.
Can this also concern the machines that we use in the healthcare sector?
JM Besnier: It can be machines that collect data in cancerology, or machines that contribute to synthetic biology, which mixes electronics and life. These kinds of machines sometimes produce information that cannot be explained analytically, and we are confronted with information that we don’t know what to do with. If you’re using data cohorts, you know what you’re putting into the machine, but the machine is able to cross-reference information from heterogeneous cohorts, and all of a sudden you have data intersections that produce things you cannot identify. Some researchers will try to understand them; they may tell themselves that what they have found is great, that they will use it in pharmacology to make new drugs, or to bring to light new parasites. We can be in a positive research dynamic. But at the same time, we can be confronted with things that are completely uncontrollable.
JM Besnier: In synthetic biology, there is always the fear of seeing new biological entities develop and reproduce endlessly without our being able to control them. Much of the work is done in contained environments, but the fantasy of seeing a kind of parasite emerge from synthetic biology and saturate the atmosphere, creating an absolutely global catastrophe, is very real.
Human vs Robot: Who Will Be the Winner?
How do we control all this?
JM Besnier: This is the eternal problem of regulation and the difficulty of setting up an authority that would be able to guarantee international law. We are not succeeding. There is an international mobilization on the issue of AI and the problems of cybersecurity are mobilizing the major nations, but for my part, I do not believe at all in the possibility of agreeing to the point of setting up a global governance.
On the other hand, there is the possibility of multiplying regulatory and control bodies locally. For this, political will is needed to ensure that the opinions given by local bodies have an impact. What I dream of is having more and more bodies that consult citizens, scientists and experts before research is carried out, and not afterwards. This seems to me to be common sense, but we can’t do it because we are the playthings of economic and industrial institutions that start by wanting to innovate, and only then wonder what to do with the product of these innovations. This is not the way we should work, and that is why it goes in all directions. Technology is blind; it always goes to the end of itself, and we develop new technology to repair the disorders that technology has produced. This is called “solutionism”…
Will humans be the winner?
JM Besnier: But which humans are we talking about? Because if we aim at making augmented humans, we are no longer talking about humans. Precisely, the question is also to know at what point we are no longer quite human. Just because I have a smartphone in my pocket, have I already lost my humanity? I don’t know. I think that Covid revealed to us that we are much more fragile than we thought. It is a great lesson, even if it is dearly paid. We can see that we are not immune to a contingency that makes our ambitions look completely ridiculous. I have worked a lot on the subject of transhumanism, and I have seen that with Covid, the transhumanists were very discreet. They were telling us that it was possible to make us immortal thanks to technology, and then suddenly a tiny virus arrived and caused panic in the world. The hyperbolic announcements of the transhumanists now seem completely ridiculous. If we have no electricity because of a solar storm, our beautiful technologized society will suddenly collapse, and it could become terrible and terrifying. Perhaps the resilience of human beings will depend on the fragility of our technological devices.
At the Musée de l’Homme in Paris from October 13, 2021 to May 30, 2022, Jean-Michel Besnier participated in the exhibition “Aux frontières de l’humain” which raises questions about the interface between humans and machines and how to safeguard humanity in a context of mechanization and automation.