The UMB Pulse Podcast

Can AI Help Your Dentist Detect Oral Cancer and Cavities?

University of Maryland, Baltimore Season 5 Episode 2

Can your dentist use artificial intelligence (AI) to spot health problems sooner? Imagine an extra set of eyes that never gets tired — that’s what AI is bringing to dentistry. 

In this episode, Ahmed Sultan, BDS, PhD, director of the Division of Artificial Intelligence Research at the University of Maryland School of Dentistry, shares how new AI tools are helping dentists catch issues like cavities and oral cancer earlier. He also talks about why it matters to use diverse data, the ethical questions behind AI in health care, and how these advances could especially benefit people in rural and low-income communities.

Tune in to discover how AI is shaping the future of dental visits — and maybe even protecting more than just your smile.

Learn more about AI research at the University of Maryland School of Dentistry at https://www.dental.umaryland.edu/ai/

Listen to The UMB Pulse on Apple, Spotify, Amazon Music, and wherever you like to listen. The UMB Pulse is also now on YouTube.

Visit our website at umaryland.edu/pulse or email us at umbpulse@umaryland.edu.


Jena Frick: [00:00:00] You are listening to the heartbeat of the University of Maryland, Baltimore, the UMB Pulse.
Dana Rampolla: What if your dentist had a second set of eyes powered by AI? Could artificial intelligence actually spot oral cancer or cavities before your dentist does? Today on the UMB Pulse, we're looking at how researchers at the University of Maryland School of Dentistry are using AI to detect disease earlier, improve outcomes, and even flag conditions beyond your teeth, like cardiovascular risk.
However, is there a risk that AI slop could muddy the results?
Charles Schelle: At the University of Maryland School of Dentistry, Dr. Ahmed Sultan directs the new Division of Artificial Intelligence Research, the first of its kind at a dental school.
Along with co-director Dr. Jeffrey Price, his team is [00:01:00] building AI tools that can scan X-rays and pathology slides for early warning signs of disease. The goal isn't to replace your dentist, it's to act as a second set of eyes, reducing errors, catching problems earlier, and giving patients more peace of mind. In this episode, you'll hear directly from Dr. Sultan, who's also the program director of Oral and Maxillofacial Pathology Residency, and co-director of Oral Medicine programs at the University of Maryland, Greenebaum Comprehensive Cancer Center.

Dana Rampolla: Welcome Dr. Sultan. It's so nice to have the opportunity to chat with you today on the UMB Pulse. I'm looking forward to learning all about AI and dentistry, and I'm hoping you may be able to debunk some AI myths for us.
Ahmed Sultan: It's a pleasure and I'm excited to talk to you today.
Dana Rampolla: Well, I think you're gonna have some answers to questions that I have.
I just literally was at the dentist last week, so this is all fresh in my mind [00:02:00] and in the forefront of what I'm thinking about in terms of the future of dental care. When people hear AI in healthcare, a lot of us worry that that means machines are going to be replacing doctors. How do you explain the real goal of AI in medicine, and especially in dentistry?
Ahmed Sultan: Yeah. You know, there is a lot of skepticism and a lot of apprehension about what we call replacement of dentists, doctors, radiologists, pathologists, anyone who looks at images, by robots or AI. The fear is becoming more and more real because, in China, for example, there are research groups that already have robots cleaning teeth, dental hygienist robots that are powered by AI. In Texas, in July of last year, there was the very first dental crown prep, a [00:03:00] procedure that could take much longer than that, done in 15 minutes by a fully autonomous AI robot. And how that worked was they imaged the patient in a 3D fashion and then produced the crown prep with zero human error in such a short period of time.
And you also start to appreciate that there are certain skills we could teach our new graduates and practicing doctors and dentists alike: the essential skill of interpreting the limitations of AI. Because once you understand the limitations of AI, you begin to realize that AI is powerful and effective in very narrow, specific tasks. So these are narrow use cases where AI can be very impactful without replacing dentists or doctors. [00:04:00]
Dana Rampolla: And then in terms of actual dentistry, not just administrative, uh, my understanding is you've built one of the largest dental AI data sets in the country. Why does having such a diverse collection of images, for example, matter for patients?
Ahmed Sultan: Yeah, the way AI works or the success of AI largely depends on how well it's been trained and how diverse the dataset is.
So in AI, there's something called overfitting, and overfitting occurs when you have an AI model that's been trained on one patient population, one type of image data from one type of scanner. And then when you use that
AI model somewhere else in the country or internationally, you'll notice it starts failing or not producing accurate results.
And in healthcare, dentistry and medicine, especially for the research we're involved in, we're looking at cancer, we're looking at cardiovascular risk. So the chance of you making a mistake or the [00:05:00] AI having
a false positive—where it's detecting cancer when it's not there—or a false negative—it's missing cancer, or it's saying that there's no cardiovascular disease on a dental X-ray or medical X-ray—is quite significant.
So the error is not tolerable in most aspects in the realm we work in. And so you wanna ensure that your AI is primed for success. And so how you do that is you avoid overfitting by training it on diverse datasets from different patient populations with different conditions from different scanner types.
So often we get around that by doing what's called multicenter studies, where each center has a different scanner, a different exposure,
um, a different disease type, different patient demographic.
Um, so for our multicenter study, we have a site in Rome, Italy, which is our international site. And then we have a site in [00:06:00] San Antonio, Texas, and we have a site at UMKC in Kansas City. And so that's where we can get multiple different data types to improve the diversity of the data, so the model is generalizable to different patient populations and images.
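The overfitting failure mode Dr. Sultan describes, a model that keys on scanner-specific quirks and falls apart on data from a new site, can be sketched with a toy simulation. Everything here is synthetic and illustrative: the "scanners," the features, and the logistic-regression "AI model" are stand-ins, not the division's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scanner(n, artifact_tracks_label):
    """Simulate one 'scanner'. x1 is a weak but real disease signal;
    x2 is a scanner-specific artifact (e.g., exposure/brightness)."""
    y = rng.integers(0, 2, n)
    x1 = y + rng.normal(0, 1.0, n)           # true lesion signal, noisy
    if artifact_tracks_label:
        x2 = y + rng.normal(0, 0.1, n)       # artifact spuriously mirrors labels
    else:
        x2 = rng.normal(0, 0.1, n)           # artifact carries no disease info
    return np.column_stack([x1, x2, np.ones(n)]), y  # last column = bias term

def train_logreg(X, y, lr=0.1, steps=3000):
    """Plain full-batch gradient descent on logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == y))

XA, yA = make_scanner(2000, True)        # training site A
XB, yB = make_scanner(2000, False)       # second site, different scanner
XA_t, yA_t = make_scanner(1000, True)
XB_t, yB_t = make_scanner(1000, False)

w_single = train_logreg(XA, yA)                             # one-site model
w_multi = train_logreg(np.vstack([XA, XB]), np.r_[yA, yB])  # "multicenter" model

acc_home = accuracy(w_single, XA_t, yA_t)
acc_away = accuracy(w_single, XB_t, yB_t)
acc_multi = accuracy(w_multi, XB_t, yB_t)
print(f"single-site model, same scanner: {acc_home:.2f}")
print(f"single-site model, new scanner:  {acc_away:.2f}")
print(f"multicenter model, new scanner:  {acc_multi:.2f}")
```

The single-site model leans on the artifact and looks excellent on its home scanner, then degrades badly on the new one; pooling the two sites, a miniature multicenter study, forces the model back onto the real signal.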
Dana Rampolla: Nice. And what kinds of conditions are you focusing on right now?
Ahmed Sultan: Yeah, so we have three PhD students; one just graduated. She was working on oral pre-cancer. So before you get an oral cancer, a squamous cell carcinoma, oftentimes your dentist, your doctor, or your hygienist can pick up a white plaque in the early phases. So she was working on the pathology images.
So once you do a biopsy, it generates a whole-slide image. So she was working on segmenting, which means annotating all the different features the AI model learns from. And then another PhD [00:07:00] student is working on clinical images. So when you go to your dentist or hygienist, they might take a photo using an intraoral wand-type camera or a regular camera.
So she was looking at whether we could use AI as a second set of eyes, essentially a flagging system to flag early features of cancer before they progress. And then, you know, the larger issue is there's a shortage of specialists. There's a shortage of dentists, a shortage of specialists, especially in low-income countries and rural areas. And so when there's a shortage of oral cancer experts or pathologists who can review these images, you want to take advantage of AI. Through teledentistry, digital dentistry, and imaging, you can create a structure where AI can flag things from rural offices or low-income countries that don't have enough dentists, [00:08:00] doctors, or specialists.
And then you can triage high- versus low-risk cases. So for the most serious things, you know, now there's an objective report that says, well, this patient really needs to be seen by someone, and there's more of a case for having that person seen, because the AI has flagged something it was trained to recognize as high risk. And then the final type of research, which is not on clinical images or biopsies, is on radiographs. So we work very closely with our oral radiologists here, and what we try to do is use standard-of-care images, images you have already obtained at a dental visit, that might have what we call missed pathology.
And the reason you may have missed pathology is because, as dentists, we're not traditionally trained in looking at the carotid arteries or the intracranial structures and arteries. We're very interested in looking at the [00:09:00] jaws and the teeth. And so if you train AI for tasks that doctors and dentists weren't traditionally trained on, essentially it can act as a flagging system. You know, in one of the studies we've published, we've shown that AI can highlight small calcifications in the carotid arteries, in the neck and in the skull base, at about a cubic-millimeter size. And that's an indication that you can make an early referral to cardiovascular specialists, because we know that these small calcifications are linked to heart attack, myocardial infarction, and stroke. So using already available, standard-of-care images, where there could be a lot of missed pathology because it's not our domain expertise to be trained in the neck and the skull, AI can flag it, you send it off to a radiologist, and you could save the patient a lengthy hospital stay and all the complications [00:10:00] that come with it.
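The triage workflow Dr. Sultan outlines, AI flag scores from remote offices feeding an ordered worklist so the highest-risk patients are seen first, reduces to a small sorting exercise. The case IDs, site names, and the 0.7 cutoff below are all made up for illustration; a real deployment would use clinically validated thresholds.

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    site: str
    ai_risk: float   # flag score from the imaging model, 0..1

def triage(cases, high_risk_cutoff=0.7):
    """Order a referral worklist: highest flag scores first, then split
    into high- and low-risk queues. The cutoff here is illustrative."""
    ranked = sorted(cases, key=lambda c: c.ai_risk, reverse=True)
    high = [c for c in ranked if c.ai_risk >= high_risk_cutoff]
    low = [c for c in ranked if c.ai_risk < high_risk_cutoff]
    return high, low

cases = [
    Case("A-101", "rural clinic 1", 0.92),
    Case("A-102", "rural clinic 2", 0.15),
    Case("A-103", "rural clinic 1", 0.78),
    Case("A-104", "rural clinic 3", 0.40),
]
high, low = triage(cases)
print("see first:", [c.case_id for c in high])   # -> ['A-101', 'A-103']
```

The worklist simply orders referrals by the model's flag score; the "objective report" Dr. Sultan mentions would accompany each high-risk entry.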
Dana Rampolla: So it's not just increasing the chance of a dentist finding something that they may have otherwise missed, being that second set of eyes. You're saying it could actually be an even broader tool, for diagnosing something outside of dentistry.
Ahmed Sultan: Yeah. Because AI is a clinician decision support tool, it's adjunctive. When the FDA passes or approves these systems, they're referred to as identification systems or adjuncts, clinician decision support tools, rather than tools offering a diagnosis.
So it might flag an abnormality. And what I like about AI is that, although some of our studies and a lot of the published literature highlight that AI is poor at certain diagnoses or certain clinical or [00:11:00] pathologic images, even when AI has failed at its task, it has built up awareness of that specific condition.
So indirectly all the AI research that's happening is building awareness for early referral of cancers, pre-cancerous lesions, and things that can be already available on images that might not necessarily have been detected.
Um, so that's a kind of a broader impact from all the AI research because the end goal of AI research is flagging disease for early prevention.
And so you're gonna have so many studies,
uh, research groups doing it. The ultimate goal is you're looking at an endpoint of a disease that may have been overlooked before, and now just by doing AI research, whether it's been successful or not, you've built awareness for that disease.
Dana Rampolla: That's so interesting.
Much more than what many of us think of when we just hear about AI. You know, as I said, I've talked to a lot of people, and they [00:12:00] just worry about it replacing care, when it's actually, hopefully, going to enhance care in a lot of areas. I referred earlier to my dental appointment last week. So if I'm a patient sitting in a dental chair, is there a chance that AI has already been part of my care, maybe without me even realizing it?
Ahmed Sultan: Yeah, and you've actually also raised another ethical point: the involvement of the patient in consenting to the use of AI. So there are FDA-approved companies already out there that many dental offices are using, where they'll take a radiograph, usually a bitewing X-ray of your teeth or a panoramic X-ray. And the most common use case is dental decay, or dental caries. And so the workflow would be: your dentist reviews the X-rays, spots the caries, and then puts it into the AI, and the [00:13:00] AI would essentially confirm their findings or maybe flag something for a second look, something they might have overlooked. Or, oftentimes, the dentist will look at it and say, actually, I'm appreciating the limitation of AI, because it's flagging disease, but that's not actually disease. It's just normal anatomy, or it's a shadow, or the patient moved and tilted.
Where it can be a little more concerning or dangerous is when there's a hundred percent—or over—reliance on AI. So you go into your dental office and the AI is essentially producing a diagnosis rather than it being a second pair of eyes or flagging system. And so I come back to the ethical point of how involved the patient is.
At what stage is the dentist using AI? Are they using it as a clinician decision support tool, as an adjunct, as a second pair of eyes, because that's what it's been FDA-approved for, as an identification system? Or are they solely relying on it for a hundred percent [00:14:00] of the diagnosis? And, you know, there is a world where you have an increased volume of patients and very busy practices, and maybe dentists or doctors start seeing how effective or strong the AI models are, because usually when AI studies get published, the accuracy is in the 90-percent range. So there is a worry that maybe they over-rely on it. But I think what's helpful is for the patient to engage in that kind of dialogue, the informed decision-making process, and say: Well, are you using AI in your practice? If you are, are you using it as a second set of eyes? Are you using it for diagnosis or identification? Do I need to sign any consents for its use? That way you can open up the conversation, and maybe the dentist will start saying, well, I took this course, or I looked at several different AI companies and chose this one because, in addition to flagging dental disease, it also produces a report. [00:15:00]
And the report's helpful because I can interpret the report and see why it made its decision. So that's a kind of evolving field called XAI, explainable AI, or interpretable AI. Because remember, AI largely is an opaque black box. We don't know why it's producing such amazing results. And to build trust with dentists, doctors, and patients, you wanna have a report or some kind of explainable rationale for why it produced
the amazing result that it did.
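One common way to produce the kind of explainable-AI report Dr. Sultan describes is occlusion sensitivity: hide one patch of the image at a time and measure how much the model's flag score drops. The 8x8 "radiograph" and the fixed linear scorer below are toy stand-ins, not a real FDA-cleared model; the point is only the explanation mechanism.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy stand-in "model": a fixed linear scorer whose weights concentrate on
# the top-left 2x2 corner, a proxy for whatever a trained flagger learned.
weights = np.zeros((8, 8))
weights[0:2, 0:2] = 2.0

def model(img):
    return sigmoid(float((img * weights).sum()) - 2.0)

def occlusion_map(img, patch=2):
    """Zero out each patch in turn and record how much the flag score
    drops; large drops mark the pixels that drove the decision."""
    base = model(img)
    sal = np.zeros_like(img)
    for r in range(0, img.shape[0], patch):
        for c in range(0, img.shape[1], patch):
            masked = img.copy()
            masked[r:r+patch, c:c+patch] = 0.0
            sal[r:r+patch, c:c+patch] = base - model(masked)
    return sal

img = np.full((8, 8), 0.5)              # uniform toy radiograph
sal = occlusion_map(img)
hot = np.unravel_index(sal.argmax(), sal.shape)
print(f"flag score {model(img):.2f}; most influential patch at {hot}")
```

The resulting map can be rendered as a heatmap over the radiograph, which is roughly the form an interpretable-AI report takes: not just "caries flagged," but which pixels pushed the score up.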
Dana Rampolla: And what about a dentist who maybe has been in practice for a long time? Is that dentist going to have to purchase or create some sort of a new system, whether it's equipment or software that they're using for this type of analysis?
Ahmed Sultan: So most of the commercially available AI diagnostic identification tools, uh, they've done that hard work already.
They've, [00:16:00] um, made them so they can be implemented in your software, your regular workflow. So that was part of the hard work that went into many of the companies. Yeah.
Dana Rampolla: Gotcha. You've noted in some of your writing that dentists who don't use AI may be replaced by those who do. What should clinicians keep in mind as they begin to adopt these tools?
Ahmed Sultan: I think, you know, we're starting to see patients calling up before making appointments and asking different dental offices whether they use AI or not. More than that, we've heard, or I've heard, patients saying, oh, I'm gonna go to this office 'cause they use AI and that one doesn't. So the reality is it's happening already.
The questions that need to be asked by the provider are:
What is my understanding of AI? Have I had any literature provided or studies on how it works? [00:17:00] What are its limitations? Where does it not work? What are the image types it works on, and what are the kind of conversations I have to be having with the patient?
Do I need them to sign consent for using it? Who owns the AI data? Who owns the patient data? Will that data go to the AI company, and what happens with it in five years' time if it gets commercialized? So there are a lot of questions for both the provider and the patient. In terms of replacement of the specialty or of dentists, I'm skeptical that we'll see replacement. You should look at how AI would operate on its own, versus yourself alone, versus you using AI. And there have been numerous studies done recently in the medical world as well that have highlighted how AI plus a human outperforms a human alone. And then there are some recent studies that actually showed that those who are not adequately trained in how to use it [00:18:00] do worse.
So, for example, the LLMs—large language models—like
ChatGPT 5.0 and Claude and, and all these other LLMs: if you are not appropriately trained in how to use them or if you haven't practiced with using them, the studies show that humans or AI on its own will outperform you. So it's not just about having AI, it's about having some trial and error.
Um, it's about testing it out and seeing if it's actually enhancing your practice. And I give great credit to those dentists or doctors that actually decide not to use AI and tell their patients, I'm not using AI 'cause I've tried it and I've actually found that it slows the process down. Or, I found that it's not as helpful as I thought it would be.
Dana Rampolla: Interesting. Dr. Sultan, I have one more question and then certainly open the floor to you. If I've left out anything that I should be asking, what's the biggest myth [00:19:00] about AI in dentistry or medicine that you'd like to debunk once and for all?
Ahmed Sultan: Yeah, I think one of the biggest myths is the idea that it will replace dentists for the purposes of it outperforming diagnosis. I think the key word is diagnosis there. It's not intended to be a diagnostic replacement. It's intended to be a flagging system. It's intended to be an adjunct, a clinician decision support tool.
And that's how the FDA approves these tools. So the misconception is that it's AI diagnosis, when it's actually AI flagging or identification. And if you truly understand how AI works: AI works at a pixel level. So it takes an image of a dental X-ray or a clinical image, and it breaks it down into a grid.
And each grid or pixel
then gets converted into a numerical [00:20:00] code, and then that code is fed into a neural network. And it works as an algorithm, right? So you have the model, the neural network in the center. The input is the image, and the desired output is, well, this is dental decay on the X-ray, or this is a cancer.
And so if you understand that kind of algorithmic process, the process of how it works, you'll begin to realize that all it's doing is segmenting or annotating different pixel regions of an image. And so how can that essentially replace a dentist, or replace a doctor or radiologist? That is probably the largest myth, and the way to deconvolute the neural network, no pun intended, is just to understand how it works. And if you understand how it works, then you realize
it's not diagnosis, it's just an adjunctive identification tool. We are moving into a realm of what's called multimodal AI, [00:21:00] so we're not just taking the clinical images now.
We're taking the clinical images, we're taking the X-rays, we're taking the histopathology biopsy images, and then we're correlating them with patient demographics. So machine learning is another field of AI, one that works on text or numerical data. So we're taking the patient's medications, their height, and so on, and then we start moving into genetic information.
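Dr. Sultan's pixel-level description, image to grid, grid to numbers, numbers through a neural network to a flag, can be made concrete in miniature. The 8x8 synthetic "radiographs" and one-hidden-layer network below are illustrative assumptions, orders of magnitude smaller than anything clinical.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_images(n):
    """8x8 synthetic 'radiographs': background noise, plus a bright 2x2
    patch (a stand-in lesion) in the positive cases."""
    imgs = rng.normal(0.0, 0.1, (n, 8, 8))
    y = rng.integers(0, 2, n)
    for i in np.nonzero(y)[0]:
        r, c = rng.integers(0, 7, 2)
        imgs[i, r:r+2, c:c+2] += 2.0
    return imgs.reshape(n, 64), y        # grid of pixels -> vector of numbers

X, y = make_images(1000)
Xt, yt = make_images(300)

# One hidden layer trained with plain full-batch gradient descent.
W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16);       b2 = 0.0
lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    g = (p - y) / len(y)                 # d(cross-entropy)/d(logit)
    gh = np.outer(g, W2) * (1 - h ** 2)  # backprop through tanh
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum()
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

h = np.tanh(Xt @ W1 + b1)
flags = 1 / (1 + np.exp(-(h @ W2 + b2))) > 0.5
acc = float(np.mean(flags == yt))
print(f"held-out flagging accuracy: {acc:.2f}")
```

The output is only a flag, lesion-like pattern present or not; everything the transcript stresses about diagnosis, context, and the dentist's judgment sits outside the network.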
Dana Rampolla: Interesting. So it's just growing and—we always say getting smarter—but it's not necessarily an AI that's getting smarter. It's just more data that collectively tells the story.
Ahmed Sultan: You know, what's interesting is that it's not just more data. It's also the use case, taking an off-label use for AI, or taking it from a different specialty and applying it to your own. For example, you can create pseudo or synthetic images that look like the real thing, and you do that by feeding them [00:22:00] into an adversarial system. The adversarial neural network has a generative component that makes the fake, pseudo-realistic images, and it has a discriminator component that tries to differentiate whether they are real or fake. That way you can create a patient, you can create different pathologies, instead of taking the time to photograph them and scan them in, with all the bottlenecks around that. What we try to do, or where I think the field would do well, is looking at a lot of the research done in computer science or computer vision
and applying that to your specific specialty, whether that's dentistry, medicine, radiology, or pathology.
Um, and to do that, there's a large technical barrier, right? You have to find computer scientist collaborators, and you have to try to get on the same page, because, you know, we all speak different technical and scientific languages.
So that's often an initial struggle. But once you can form that collaboration
and then start having [00:23:00] ideas about what's good in their field that hasn't been applied yet, and vice versa—that's where you can start seeing some innovation there.
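The generator-versus-discriminator loop Dr. Sultan sketches can be shown on one-dimensional toy data rather than images: real samples come from one distribution, the generator starts elsewhere, and the two sides push against each other. Reducing the generator to a single shift parameter and the discriminator to a logistic scorer is a deliberate simplification of real GANs, made only to expose the adversarial dynamic.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy adversarial setup on 1-D "images": real data ~ N(4, 1).
# Generator g(z) = z + b starts far away (b = 0); discriminator
# D(x) = sigmoid(w*x + c) tries to tell real from fake.
w, c, b = 0.0, 0.0, 0.0
lr = 0.05

for _ in range(3000):
    real = rng.normal(4.0, 1.0, 64)
    fake = rng.normal(0.0, 1.0, 64) + b
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    pr, pf = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - pr) * real) - np.mean(pf * fake))
    c += lr * (np.mean(1 - pr) - np.mean(pf))
    # Generator step: shift b so the discriminator scores fakes as real.
    pf = sigmoid(w * fake + c)
    b += lr * np.mean(1 - pf) * w

print(f"generator offset b = {b:.2f} (real data is centered at 4.0)")
```

Over training, the generator's offset drifts toward the real data's center, the same dynamic that lets image-scale adversarial networks synthesize plausible pathology for data augmentation.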
Dana Rampolla: Interesting. Well, Dr. Sultan, I appreciate all of your knowledge and information sharing, as I'm sure our audience will.
Is there anything that I should be asking you that I haven't?
Ahmed Sultan: No, the only thing I will say is that if you want to learn more about what we're doing, you can visit our website. We're the Division of Artificial Intelligence Research, the first division of AI research in a U.S. dental school. Now we're developing autonomous AI agents. So we have one called ELI5-A, Explain It Like I'm Five-A, that scours social media and debunks science misinformation. And we've spun out a paper on patient information sheets, which is under review. So basically, when you go to your dentist or doctor and [00:24:00] they tell you your diagnosis, especially if it's an uncommon condition, and you wonder, well, what really is it? AI can generate a helpful, easy-to-read patient information sheet. So that's our work on what we call static large language models, like GPT, Claude, and Perplexity, things people have used, and then our autonomous AI agent, which is not static.
Dana Rampolla: Perfect. Well, thank you again for joining us. We'll be sure to put the link to the website in our show notes, and we look forward to seeing how this all grows and evolves in the next couple of years.
Ahmed Sultan: Thanks very much for having me.

Charles Schelle: Here's what stands out from Dana's conversation with Dr. Sultan. AI in dentistry isn't science fiction. It's already being trained to flag subtle signs: a small lesion, a faint calcification, the things an exhausted dentist might miss on their sixth [00:25:00] patient of the morning. Sultan and Price's team is showing how AI can highlight those red flags, giving clinicians a second chance to catch disease earlier.
That can mean earlier treatment, better outcomes, and patients who don't slip through the cracks. And beyond the clinic, Sultan says AI could even help rural and low-income communities triage urgent cases, making sure the most serious patients are seen first.
Ahmed Sultan: You can create that structure where it can flag things from rural offices or low-income countries that don't have enough dentists, doctors, or specialists.
And then you can triage high- versus low-risk cases. So the most serious things—
you know, now there's an objective report that goes and says, well, this patient really needs to be seen by someone. And there's more of a case for having that person seen, 'cause the AI has flagged something it was trained to recognize as high risk.
[00:26:00]
Dana Rampolla: That's it for this episode of the UMB Pulse. A big thank you to Dr. Ahmed Sultan for joining us. If you found this conversation eye-opening, subscribe and share it with a friend, and you can find more resources about AI research at UMB in our show notes.
Jena Frick: The UMB Pulse with Charles Schelle and Dana Rampolla is a UMB Office of Communications and Public Affairs production edited by Charles Schelle, marketing by Dana Rampolla.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

The OSA Insider

University of Maryland School of Medicine Office of Student Affairs
Social Work is Everywhere

University of Maryland School of Social Work
Law School'd

Maryland Carey Law