Health and Healthcare

Keeping Doctors and Patients Human in a Robotic Age

Key Insights from an Interview with Donald Norman

by Viola Davini | 30 May 2025

What is this article about?

This article presents key insights from a conversation with Professor Donald Norman, focusing on three themes: the impact of emerging technologies on medical practice; communication among doctors, patients, and healthcare systems; and the future of education in the medical-scientific field.

Through concrete examples – including the integration of augmented reality via Apple Vision Pro, robotic surgery using the Da Vinci system, and the use of personalized 3D models for preoperative planning – the discussion highlights a shift toward more personalized, visual, and collaborative forms of healthcare.

The interview also explores the growing role of artificial intelligence in managing medical knowledge, the challenges of alarm fatigue and inconsistent device interfaces, and the critical need for human-centered design in clinical environments.

Finally, Professor Norman offers a forward-looking perspective on education, advocating for interdisciplinary, project-based learning rooted in collaboration, ethics, and real-world problem-solving.

Professor Norman is the founder of the Design Lab at the University of California, San Diego, a world-renowned figure in design, ergonomics, and cognitive science, co-founder of the Nielsen Norman Group, and former VP of Apple’s Advanced Technology Group.

Area of Intervention

Health and Healthcare

The sAu Research Center has launched a research project on the spread of new technologies in the medical and healthcare field and their impact on the "personalization of care."

Go to the Area of Intervention


What are the most important technological advancements currently transforming healthcare, and how are they changing medical practice?

It is unusual to witness so many different technological revolutions happening at the same time. So, I want to focus on the ones that are having a direct impact on healthcare.

There are new types of sensors that allow us to detect things we could never detect before. We now have a much deeper understanding of what goes on inside the human body. We are beginning to realize that every individual is different. This is allowing us to move toward more personalized diagnoses and treatments. Today, we often classify a disease in broad terms, without accounting for the variations between people who share the same diagnosis. As a result, we tend to prescribe the same treatment to everyone, even though each body responds differently. This is especially true when it comes to the gut—the intestines and stomach—which make up the second most powerful component of our nervous system. The chemical makeup of this area is incredibly complex and unique to each person. We’re just starting to understand that, and it’s already making a big difference.

Secondly, we now have computers that are far more powerful than ever before—dramatically so. I’m talking about systems that are a thousand, even a million times more powerful than those still in use in many hospitals and medical settings today. Of course, there is also the role of artificial intelligence, which is important in many different ways. But today’s large language models are complex: on one hand, they can’t always be trusted; on the other hand, they are incredibly powerful and helpful.

I often tell people to think of them as if they had just hired a new assistant—someone who works hard, is eager to help, but doesn’t yet have all the knowledge they need. And you can’t fully rely on them—you need to double-check their work. Still, they can offer new and unexpected insights.

In medicine, the amount of knowledge in the world is simply overwhelming—no single physician can know it all. But AI systems have already read the entire body of medical literature, which makes them extremely good at answering questions and surfacing information that no individual could ever retain. This is going to transform science, law, medicine, and many other fields.
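
To make "double-check the assistant" concrete, here is a minimal Python sketch, assuming the `openai` package and an OpenAI-style chat API: a first pass drafts an answer, and a second, independent pass lists the claims a human reviewer should verify. The model name and prompts are placeholders, not recommendations.

```python
# Minimal sketch: treat the model as an eager new assistant whose work
# gets double-checked before it is trusted. Assumes the `openai` package
# and an OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(question: str) -> str:
    """First pass: get a draft answer."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def flag_claims(question: str, draft: str) -> str:
    """Second pass: have the model list claims in its own draft that a
    human should verify against the primary literature."""
    prompt = (
        f"Question: {question}\n\nDraft answer:\n{draft}\n\n"
        "List every factual claim above that should be verified "
        "against primary sources before being trusted."
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "What are the first-line treatments for hypertension?"
draft = ask(question)
print(draft)
print("--- Claims a human reviewer should verify ---")
print(flag_claims(question, draft))
```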

Another major aspect of this revolution is the combination of robotics, large language models, and AI.

Until recently, robots were quite clumsy, and programming them to perform specific tasks was very difficult. But that’s no longer the case—robots can now learn by watching humans and by experimenting on their own. They are beginning to master tasks we never imagined they could. Most of this is still being developed in research labs, but it will soon become part of everyday practice.

Can you give us an example of the most advanced technologies you’ve seen up close?

Absolutely. One fascinating example is robotic surgery—though technically, it’s not a true robot. The most well-known system is the Da Vinci surgical platform. It’s not autonomous; we call it a teleoperator because the surgeon is always in full control but operates through a robotic interface.
Watching these surgeries is truly remarkable. Imagine: you’re the patient, and I’m the surgeon. Instead of standing over you, I’m seated at a console across the room, remotely guiding the instruments with precision.
But even more exciting, in my opinion, is what’s happening with Apple’s Vision Pro. A lot of mainstream media say it’s too expensive or not that useful—but that’s mostly from a consumer perspective. I had the opportunity to visit Sharp HealthCare in San Diego, the largest hospital network in the region. They acquired 12 Vision Pro units to test their potential in clinical settings, and I also attended a global conference where experts discussed medical applications for this technology.
The Vision Pro is already being used in healthcare, especially in education and surgical planning. When you wear the headset, it functions like a computer—but instead of one or two monitors, you have an entire workspace of virtual screens. You could have one screen showing anesthesiology data, another showing vitals, another for surgical images or patient records—all visible at once. During surgery, you can even plan your incision and project it directly into your field of view. That’s augmented reality—but Apple’s approach is different. Most AR systems struggle with aligning digital content precisely with real-world objects. With Vision Pro, you’re seeing the world through ultra-high-resolution cameras. It’s so lifelike, it feels like you’re seeing with your own eyes. That’s a big reason for the cost: the quality is exceptionally high.


As part of our Master’s Program in Medical and Health Services Communication, students have the opportunity to observe live surgical procedures performed with robotic assistance in the operating room.

This unique experience allows them to witness firsthand the interaction between advanced technology and human expertise, offering valuable insights into the role of communication—even in highly technical and clinical environments such as robotic surgery.

Although the Vision Pro is not approved as a medical device, its potential in healthcare is clear. 

Broader adoption will require further clinical studies, dedicated software development, and possibly regulatory approval.

In short, while not originally designed for medicine, the Vision Pro is already finding innovative applications in the healthcare sector.

Click here for more information

This allows surgeons to operate just as they normally would, but with critical information floating exactly where they need it. In a typical OR, you have to glance away at monitors. With Vision Pro, everything you need is right in front of you, which improves focus and efficiency.
It’s also a game changer for medical education. Students and residents can see exactly what the surgeon sees. Instead of struggling to follow what’s happening, they get a front-row seat—every time.
Now, imagine combining the Da Vinci surgical system with Apple Vision Pro. That would be revolutionary. I know people working on Da Vinci, and I know people at Apple, but as far as I’m aware, they’re not collaborating—yet. They really should be.
Let me share a personal example. For years, radiologist colleagues tried to explain how they guide a catheter from a patient’s leg to the brain through the blood vessels. I could never quite visualize how it navigated past the heart.
Then I put on the Vision Pro. Suddenly, I was looking at a life-size, beating 3D model of the human heart. I could rotate it, look inside, and finally understand how the catheter makes that journey. That moment of clarity was instant—and unforgettable.
This is already being integrated into medical education. Elsevier, the major medical publisher, has converted much of its textbook content into interactive 3D models. Many of the anatomical structures I saw inside the Vision Pro came directly from Elsevier’s work.

One of our main research goals is to understand how these technologies impact communication between doctors, patients, and healthcare services. In your opinion, what is the most significant aspect?

Let me share a story that really illustrates this. A close friend of mine—a senior professor of advanced technologies at the University of California, San Diego—has a chronic illness.

Over the past five or six years, he’s been tracking his own health data daily. Traditionally, doctors rely on a single measurement taken every few months, but he was measuring his vital signs continuously. For the first time, we could see just how much these indicators can fluctuate—not just day to day, but even hour to hour.

At his research center, he had access to a massive high-resolution display—larger than the wall in this room—where he visualized his data in real time. That allowed him to detect changes that most doctors wouldn’t even know to look for. In some cases, the variability within a single day was more dramatic than what you’d typically see over six months in standard clinical settings.
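
As a rough illustration of why sampling rate matters, here is a small Python sketch using synthetic placeholder data (not his actual measurements): hourly readings reveal a within-day swing that quarterly spot checks never see.

```python
# Sketch: why continuous monitoring reveals variability that sparse
# clinic visits miss. Synthetic placeholder data, not real vitals.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# One year of hourly "vital sign" readings: slow drift over months,
# a strong within-day cycle, and measurement noise.
hours = pd.date_range("2024-01-01", periods=24 * 365, freq="h")
vitals = pd.Series(
    0.002 * np.arange(len(hours))                              # slow drift
    + 5.0 * np.sin(2 * np.pi * hours.hour.to_numpy() / 24)     # intraday cycle
    + rng.normal(0, 1.0, len(hours)),                          # noise
    index=hours,
)

# What a quarterly clinic visit sees: four isolated readings a year.
quarterly = vitals.resample("QS").first()
print("Quarterly spot checks:\n", quarterly.round(1))

# What continuous monitoring sees: the swing inside a single day can
# exceed the change across six months of spot checks.
daily_swing = vitals.resample("D").apply(lambda day: day.max() - day.min())
print("Median within-day swing:", round(daily_swing.median(), 1))
print("Change across two quarterly visits:",
      round(abs(quarterly.iloc[2] - quarterly.iloc[0]), 1))
```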

He also had regular MRIs taken and converted them into a 3D-printed model of his colon. With that, he mapped out a surgical plan himself and brought it directly to his surgeon. I attended the “Grand Rounds” session after his surgery, where the surgeon explained how unusual this case was. She said that normally, she would have to walk the patient through every step. But in this instance, the patient handed her the MRIs, pulled a model from his pocket, and said:
“Here’s what you’re going to do. Cut here and here, reconnect these sections, and approach from this angle—because of the anatomy, you won’t be able to use the standard entry point.”

He came prepared with diagrams, photographs, and computer-generated images. It was one of the most detailed, patient-led surgical plans I’ve ever seen—and I think it’s a preview of where healthcare is going.

This is where Apple’s spatial computing technology comes in. One of its emerging uses is in preoperative planning. Rather than using generic anatomical models, doctors can now create personalized 3D models from actual MRIs and X-rays. These models allow for incredibly detailed, patient-specific planning. And during the procedure, the surgical plan can be overlaid directly onto the patient’s body in real time, guiding the surgeon with precise visual cues.
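
As a hedged sketch of one common open-source route to such models (not necessarily the pipeline used in the cases above), marching cubes can turn a volumetric scan into a printable surface mesh. The volume below is synthetic and the threshold is a placeholder.

```python
# Sketch: from a volumetric scan to a printable surface mesh with
# marching cubes. The volume is a synthetic placeholder; a real
# pipeline would load MRI/CT voxels (e.g., from DICOM files) instead.
import numpy as np
from skimage import measure   # scikit-image
import trimesh

# Placeholder "scan": a 64^3 intensity volume containing one bright blob.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.exp(-8 * (x**2 + y**2 + z**2))

# Extract the isosurface at a chosen intensity threshold. In practice,
# `spacing` comes from the scan's voxel dimensions in millimetres.
verts, faces, normals, _ = measure.marching_cubes(
    volume, level=0.5, spacing=(1.0, 1.0, 1.0)
)

# Wrap as a mesh and export STL, ready for 3D printing or an AR viewer.
mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
mesh.export("anatomy_model.stl")
print(f"Exported {len(faces)} triangles to anatomy_model.stl")
```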

Another huge benefit is how visualization technology improves communication—with both patients and families. Explaining a complex procedure to a patient’s family is notoriously difficult. But when you can show them an interactive 3D model, everything changes. They can finally see what’s happening and understand the treatment. That level of clarity was nearly impossible before.

Let me shift gears for a moment. A few years ago, I visited Philips in Eindhoven, in the Netherlands. They were working on innovations for neonatal care. When premature babies are born, they’re placed in incubators – traditionally enclosed in plastic. This setup allows observation but creates a suboptimal environment.

In the past, many of these babies didn’t survive. Today, survival rates have improved dramatically, but long-term health issues remain common. Philips identified several problems – like the plastic casing blocking natural light cycles, which are essential for developing circadian rhythms. Also, nurses entering at all hours can disturb the baby unnecessarily.

So they came up with a more holistic solution. They installed infrared cameras to continuously monitor the baby without disruption. They also created family-friendly spaces—small bedrooms near the incubators—so parents could stay close. Most notably, they built a large digital display that showed the baby’s medical data in real time.

That data could be viewed in three modes:

  • A clinical version for physicians, with full medical detail.
  • A nursing version with vitals and medication schedules.
  • And a simplified version for families, with clear visuals and plain language.

What really struck me was that families had access to all three versions. They weren’t limited to just the simplified one. Even if they didn’t fully understand the clinical version, having access to it helped build trust. 

They didn’t feel like anything was being hidden from them – and that inclusion made a real difference.
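
One way to picture this "one data source, three presentations" idea in code is a single record rendered by audience-specific formatters. This is purely an illustrative sketch, not Philips' actual system.

```python
# Sketch: one source of truth, three audience-specific views.
# Purely illustrative; not Philips' actual implementation.
from dataclasses import dataclass

@dataclass
class VitalReading:
    heart_rate_bpm: int
    spo2_pct: int          # blood oxygen saturation
    next_feed: str         # time of next scheduled feeding

def clinical_view(r: VitalReading) -> str:
    """Full detail for physicians."""
    return f"HR {r.heart_rate_bpm} bpm | SpO2 {r.spo2_pct}% | feed {r.next_feed}"

def nursing_view(r: VitalReading) -> str:
    """Vitals plus the care schedule nurses act on."""
    return f"HR {r.heart_rate_bpm}, O2 {r.spo2_pct}% -> next feeding at {r.next_feed}"

def family_view(r: VitalReading) -> str:
    """Plain language for parents: same data, gentler framing."""
    ok = 90 <= r.heart_rate_bpm <= 180 and r.spo2_pct >= 92
    status = "doing well" if ok else "being checked by the care team"
    return f"Your baby is {status}. Next feeding: {r.next_feed}."

reading = VitalReading(heart_rate_bpm=150, spo2_pct=97, next_feed="14:30")
for view in (clinical_view, nursing_view, family_view):
    print(view(reading))
```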

To me, this is what it’s all about: creating technologies that don’t just advance medicine, but also strengthen communication and trust between patients, doctors, and families.

How can we ensure that technological advancements are developed to strengthen professionals’ skills?

This has always been a huge challenge in medicine: how do you train someone to use robotic systems like Da Vinci effectively?

In traditional surgery, training was hands-on. The lead surgeon would begin the procedure with residents observing. Then they’d say, “Okay, you make the first cut.” Gradually, over time, the residents would be allowed to take on more of the procedure, until eventually the senior surgeon would just observe while the resident performed the operation.

That’s how surgical skills were passed down—from person to person, in real time. But that method doesn’t translate well to robotic surgery. Trainees can watch, but they don’t build up the same tactile intuition. The only way to gain experience is by practicing on animals or simulations, but that’s still not the same. This has sparked serious discussions about how to train the next generation of surgeons effectively in a robotic context.

One thing that surprises many people is how many specialists are in the operating room. When we took students to observe surgeries, they were amazed by how collaborative the environment is.

In traditional surgery, the surgical nurse plays a critical role—just as experienced as the surgeon in many ways. They anticipate every step, prepare each instrument in advance, and often change tools mid-procedure without the surgeon having to ask.

Project

Master in Medical-Scientific and Health Services Communication

The Master’s program – developed by the Department of Experimental and Clinical Medicine at the University of Florence, in collaboration with the sAu Research Center – carries out action-research projects on Generative Communication. These projects aim to improve the relationship between doctors, patients, and healthcare services, initiate awareness-raising processes, and ensure the involvement of stakeholders in project development. The current edition focuses on robotic surgery, artificial intelligence, and their impact on the personalization of care.

Even Da Vinci, which has been in clinical use for more than two decades, is still relatively new in the broader sense. And Apple Vision Pro, for instance, is only being used by a small number of people so far. Adoption takes time.

But we also have to think about everyone else in the operating room. They need real-time information, too. Take the anesthesiologist, for example—they’re constantly monitoring the patient’s vitals.

I remember once being in an operating room with an anesthesiologist who was showing me how everything worked. At one point, the surgeon got so excited he turned to me mid-procedure, pulled something from the patient’s body, and said, “Look! This is the [organ], and here’s the [other structure]!”

Meanwhile, the anesthesiologist was still explaining things to me when suddenly all the alarms went off. He paused and said, “Oops—better take care of that.”

This kind of moment isn’t unique to medicine. The issue is that most alarms sound the same. You can’t tell which device triggered it. And when something serious happens, all the alarms go off at once—making it impossible to think clearly. If you observe closely, the first thing medical staff do is silence the alarms just to regain focus. But that wastes valuable time. This isn’t just a medical problem. The same issue exists in aviation, energy, even at home. Alarm systems are often poorly designed, and it’s hard for users to prioritize what’s actually urgent.

Even in home environments, devices use high-pitched alerts because they’re cheap to produce. 

The problem? As people age, they lose the ability to hear high frequencies—meaning older adults might not even hear their smoke detectors. And even if they do hear something, they often have no idea which device is beeping.

All of this represents a huge opportunity for better, human-centered design.
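
As a toy illustration of what better design could mean here, the sketch below gives every alert a named source and a priority, announcing only the most urgent one instead of sounding everything at once. It is illustrative only; real medical alarms are governed by standards such as IEC 60601-1-8.

```python
# Toy sketch of a human-centered alarm policy: every alert carries the
# device that raised it and a priority, and only the most urgent one is
# announced while the rest are summarized rather than all sounding.
# Illustrative only; not a certified medical alarm design.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Alert:
    priority: int                     # 0 = most urgent
    device: str = field(compare=False)
    message: str = field(compare=False)

class AlarmManager:
    def __init__(self) -> None:
        self._queue: list[Alert] = []

    def raise_alert(self, alert: Alert) -> None:
        heapq.heappush(self._queue, alert)

    def announce(self) -> None:
        """Announce only the most urgent alert, naming its source."""
        if not self._queue:
            return
        top = heapq.heappop(self._queue)
        print(f"[{top.device}] {top.message}")
        if self._queue:
            print(f"({len(self._queue)} lower-priority alerts held back)")

mgr = AlarmManager()
mgr.raise_alert(Alert(2, "infusion pump", "line occlusion"))
mgr.raise_alert(Alert(0, "ventilator", "apnea detected"))
mgr.raise_alert(Alert(1, "monitor", "SpO2 below threshold"))
mgr.announce()   # -> [ventilator] apnea detected, 2 alerts held back
```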

Another related issue in healthcare is the inconsistency across medical devices. For example, in infusion therapy, every brand of infusion pump has a different interface. I’ve seen nurses struggle to operate a pump simply because they were trained on a different model.

This kind of inconsistency has led to serious medical errors. If someone enters the wrong dosage, the system might not issue a proper warning. And even if you try to fix it, some so-called “smart” devices will remember the last value—but not warn you that it’s dangerously high.

Of course, if you make the system too cautious, it constantly interrupts: “Are you sure this is the correct value?”

Sometimes that’s helpful. Other times, it just gets in the way.
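
One common way to balance those two failure modes is tiered limits: a soft limit asks for confirmation, a hard limit refuses outright. The sketch below illustrates the pattern with made-up values; real infusion pumps use vetted drug libraries.

```python
# Sketch of tiered dose checking: a soft limit warns and asks for
# confirmation, a hard limit refuses with no override. The numbers
# below are made up for illustration only.
from dataclasses import dataclass

@dataclass
class DoseLimits:
    soft_max: float   # above this: warn and require confirmation
    hard_max: float   # above this: refuse outright

LIMITS = {"example-drug": DoseLimits(soft_max=5.0, hard_max=10.0)}  # made-up units

def check_dose(drug: str, dose: float, confirmed: bool = False) -> str:
    lim = LIMITS[drug]
    if dose > lim.hard_max:
        return "BLOCKED: dose exceeds hard limit"
    if dose > lim.soft_max and not confirmed:
        return "WARN: above soft limit; confirm to proceed"
    return "ACCEPTED"

print(check_dose("example-drug", 4.0))                  # ACCEPTED
print(check_dose("example-drug", 7.0))                  # WARN ...
print(check_dose("example-drug", 7.0, confirmed=True))  # ACCEPTED
print(check_dose("example-drug", 12.0))                 # BLOCKED ...
```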

I’ve toured hospitals across the U.S. as part of National Academy of Sciences studies, and I can honestly say—we never visited a single hospital that didn’t have serious issues with technology or workflow.

What kind of education should we be designing for a future where humanity and technology cooperate effectively?

A good friend of mine is putting together a book on why the social sciences are essential to design. He needed draft chapters to show the publisher, so I sent him something.

Our model of education goes back to the Greeks and Romans—one teacher lecturing, and students memorizing. But lectures are probably one of the worst ways to learn, even though they’re the easiest way to teach.

So I tell my students, “We shouldn’t be talking about teaching—we should be talking about learning.” And those are two very different things.

Take Leonardo da Vinci, for example. He’s a perfect early model of multidisciplinary learning. He worked across countless fields—art, engineering, anatomy, music. But today, that kind of breadth is almost impossible. There’s just too much knowledge in each discipline.

When people say we need to bring in the social sciences, I ask, “Which ones?” Even within psychology, there are behavioral psychologists, cognitive psychologists… the list goes on.

And then there’s STEM. I’ve had extensive education in science, technology, engineering, and mathematics. But I’m deeply critical of how STEM is taught.

About a world-centered education. Highlights from an interview with Gert Biesta

With Gert Biesta we discuss some of the themes presented in his book World-Centred Education: A View for the Present, recently published in Italian by Tab Edizioni. Biesta’s reflection is pedagogical and political at the same time, proposing an idea of educational action whose ultimate goal is direct action on the world.

Where are the people in STEM? The human perspective? I’ve heard people say, “We should get rid of the humanities—what’s the point of history or philosophy?” Others suggest expanding STEM to STEAM by adding art, or even HSTEAM by adding humanities.

But I say no.

Because when we teach STEM, we often just stack a bunch of separate courses: physics, math, programming… all disconnected. And no one tells you why you’re learning any of it.

Instead, I believe we should stop teaching subjects. We should start teaching projects.

When students work on real-world problems, they naturally pull from multiple disciplines. They see how everything connects—technology, history, culture, ethics. That’s when learning becomes meaningful.

And yes, we need to teach ethics. In medicine, ethics is often a core part of the training. But in engineering and design? Much less so. And that’s a problem.

Ethics shouldn’t be just a one-time course you pass to get a certificate. It should be embedded in everything we do.

I also believe students should be trained to “cheat”—and let me explain what I mean by that.

In school, we’re told to do everything alone. But in the real world, if someone knows something better than you, you ask them. You collaborate. If you find a great article, you use it—while giving credit, of course.

In school, using someone else’s work is called cheating. In life, it’s called teamwork. The key is attribution.

We should teach students how to integrate others’ work and give proper credit. That’s how we advance ideas.

I’m not trying to invent everything from scratch. I’m trying to gather brilliant ideas, build a framework around them, and present them in a way that’s clear and useful. 

Education should be about building on each other’s ideas—not memorizing facts to pass an exam.

I remember when calculators first came out. Schools said, “We can’t let students use calculators—they won’t learn arithmetic.” But today, we require calculators in exams, and they can do far more than arithmetic—they solve algebra and calculus problems.

Calculators didn’t make us less smart. They made us more effective.

And I think the same should be true for AI and other emerging technologies. If a robot can help us wash the dishes, great—we already have dishwashers. The hard part is still unloading them! But even that’s progress.

Ultimately, education should empower people to think, collaborate, and create—using every tool available.

Conclusion

From the conversation with Donald Norman, a clear and multidimensional vision of the future of healthcare emerges—one where technology enhances, rather than replaces, human insight, empathy, and decision-making. Innovations such as artificial intelligence, augmented reality, and robotic systems are not only transforming clinical procedures and medical training, but are also reshaping how communication occurs between healthcare professionals, patients, and families.

A central theme is the power of technology to improve communication—not just by increasing access to information, but by making that information more visible, more intelligible, and more actionable. Whether it’s through personalized 3D models that help patients understand complex procedures, or real-time data visualizations that allow entire clinical teams to stay aligned, communication is becoming more interactive, inclusive, and transparent.

Perhaps the most compelling insight Norman shares lies in the field of education. Preparing for this new era means moving beyond outdated models of passive learning. 

A long-standing dialogue with Donald Norman on the intelligent use of new technologies is ongoing. Here are some excerpts from his public talks in Florence.

We need to teach how to learn, how to collaborate, and how to communicate across disciplines and roles. Ethics, design, and dialogue must be embedded in the training of future professionals.

In the end, the most meaningful healthcare innovation may not be a new device or algorithm—but the ability to connect people, ideas, and systems in ways that are deeply human. Technology should amplify that connection, not replace it.

Author

Viola Davini

Ph.D., Researcher and founding member of the “scientia Atque usus” Research Center for Generative Communication (ETS)
Interviewed

Donald Norman

Donald Norman is the founder of the Design Lab at the University of California, San Diego, as well as one of the world’s leading figures in design, ergonomics, and cognitive science. He is also the co-founder of the Nielsen Norman Group and former Vice President of Apple’s Advanced Technology Group.