How AI Is Changing Health and Fitness

In just a few short years, artificial intelligence has become an inextricable part of modern life. It has also introduced something new: a nonbiological form of cognition that can analyze, predict, and even help shape human health.
As Minneapolis-based futurist Cecily Sommers notes, AI represents “the first time that we have intelligence that lives outside of our skull.” And this disembodied intelligence is starting to influence our embodied experience. It’s nudging our behavior, designing new drugs, and altering how doctors care for patients, usually with the stated goal of making us healthier.
Yet the full scope of AI’s effects isn’t yet known, and skeptics have concerns. As with any transformative technology, it comes with both promise and peril.
We asked experts to weigh in on the important ways AI is changing medicine, fitness, and mental healthcare.
Better Medical Diagnostics
One of the most promising applications of AI is its potential to improve disease diagnosis. A study in the American Journal of Roentgenology suggests that, on average, radiologists misinterpret roughly 4 percent of the scans they see each day.
AI models trained on vast datasets can reduce errors by identifying patterns that might escape even experienced specialists. A 2023 study published in Radiology found that an autonomous AI system exhibited greater sensitivity than radiologists in identifying critical abnormalities on chest x-rays (99.8 percent versus 93.5 percent).
An AI tool can alert a radiologist to look at a specific part of the lung where it locates an anomaly. It can also compare a patient’s new scan with an older one to look for subtle changes. “It doesn’t supplant the radiologist from making the final decision, but it helps to prevent an error,” notes Lloyd Minor, MD, dean of the Stanford University School of Medicine, on The Rich Roll Podcast.
Taofic Mounajjed, MD, a pathologist with Hospital Pathology Associates, says AI can also provide clearer prognoses. Whereas human diagnosis relies on predefined classifications, such as tumor histology and stage, AI can analyze vast amounts of data to uncover subtle image characteristics that offer clues about a disease’s likely course. Those characteristics might include spatial relationships between cells or other statistical features of tumor cells, such as size, variability, and shape.
“There’s so much on a slide that we as human pathologists don’t see or report,” explains Mounajjed. “AI can look at hundreds of metrics and variables and say, ‘Based on our model, you have an 80 percent chance of recurrence in 10 years,’ which is a completely new way of looking at pathology.”
Treatment plans can also be refined with the assistance of AI tools. While doctors have long used biomarkers, like hormone receptor status, to individualize a patient’s treatment, AI can integrate multiple factors, such as tumor microenvironment and DNA mutations, to identify the most viable and effective treatment options for a particular patient.
Challenges remain. AI-based diagnostics can suffer from biases in their training data, making them inaccurate and potentially harmful for underrepresented populations. Overreliance on AI might lead doctors to overlook their own clinical judgment or dismiss rare, complex cases that don’t align with algorithmic predictions.
The best-case scenario is one in which AI enhances, rather than replaces, human expertise and its benefits are accessible — and applicable — to all.
Streamlined Notetaking for Physicians
Sunjya Schweig, MD, used to spend his visits with patients trying to listen while he typed. Now, the San Francisco Bay Area–based functional-medicine doctor is free to focus fully on the person in front of him, thanks to an AI “scribe” that takes notes for him.
More health systems and clinics are turning to ambient AI scribes to document the content of patient visits, liberating doctors and patients to talk more freely and connect more fully. Multiple HIPAA-compliant scribe systems are now in use across the country. They listen in on patient visits and automatically generate detailed clinical notes, summarizing symptoms, concerns, and treatment plans in real time.
“I use it all day long,” Schweig says of his AI scribe. “It writes my notes for me, and I can then customize them, whether for a referral letter to a specialist or to summarize a particular part of the conversation, like their hormone history.”
Functional neurologist Jeremy Schmoe, DC, DACNB, also appreciates the support of an AI scribe. “It allows me to be able to just work with people and not spend as much time on paperwork,” he says. “I can focus on actually getting them better.”
Schmoe used to hesitate when patients asked him to share his notes from their visit: They were often dense, technical, and scattered, and he worried that his patients, many of whom struggle with brain injuries, wouldn’t be able to make sense of them or translate them for their loved ones.
“Now I have a way of saying, ‘Write these notes in a way a patient will understand.’ It’s been a game-changer,” he says. “Now I can confidently say, ‘I’d love to give you my notes: How detailed do you want them?’”
Schweig’s hope is that AI can serve as a physician’s copilot in this way. “A doctor can go about their business with this … system that’s listening to and transcribing your visits, [that’s] deeply embedded in a patient’s chart, and [that] can look for patterns and put everything together to say, ‘You might want to think about this diagnosis or that set of tests or interventions.’”
Although AI scribes are a supportive tool for healthcare professionals, the technology does have drawbacks. “I think that any time you are providing an app access to your health data, you are introducing risk,” says Drew Trabing, engineering manager for technology at Life Time.
One study published in the Journal of Medical Internet Research in 2025 observed “frequent errors” in the generated notes, with errors of omission being the most common.
And while the data is typically protected by layers of encryption, data breaches are always possible.
New Options for Mental Health Support
With one in five American adults experiencing a mental health issue in a given year, demand for care far exceeds available human resources.
“Even if we could funnel every single dollar we have for healthcare to mental health, we just don’t have enough providers to see the people who need it,” notes Stevie Chancellor, PhD, in a TEDx Talk. She’s an assistant professor at the University of Minnesota who develops human-centered AI tools for mental health.
Nicholas Jacobson, PhD, is an associate professor in the departments of Biomedical Data Science, Psychiatry, and Computer Science at the Geisel School of Medicine at Dartmouth. He helped develop Therabot, a mental health platform that uses generative AI to engage in dynamic conversations based on cognitive behavioral therapy and other evidence-based approaches.
“The goal is to provide things in a way that emulates what therapists provide in their day-to-day settings, but in a digital means,” Jacobson says. He notes that Therabot’s continuous availability is an advantage over human therapists, who may only be able to connect weekly.
“With tools like this, you can interact with it anytime, as long as you have an internet connection,” he says. “That makes it available in folks’ moments of greatest need.”
Crucially, Therabot has safeguards in place to prevent some of the harmful outcomes that are possible when people look to resources like ChatGPT for mental health support. (A tragic example involved a man in Belgium who died by suicide after engaging with an AI chatbot that encouraged him to end his life.) Therabot has been tested to eliminate potentially harmful responses and equipped to respond to crisis situations.
General-use “companion” bots lack any such safeguards, Jacobson notes, so be cautious in turning to them for mental health support.
Considering the privacy concerns and the potential for manipulation, engaging with AI with too much trust or vulnerability comes with substantial risk. In at least one case, a nonprofit AI-driven suicide-crisis text hotline shared anonymized customer data with its for-profit spinoff to train customer service bots. If company policies don’t expressly prohibit it, information shared with mental health chatbots can be used in targeted advertising.
These types of mental health supports are best used as a complement to a human therapist or therapeutic group, not least because, as scholars of the loneliness epidemic have shown, human-to-human connection is vital for social and emotional health.
“What worries me is that young people grow up being online so much of the time anyway,” notes Jodi Halpern, MD, PhD, a professor of bioethics at the University of California, Berkeley. “Mutually curious, empathetic relationships are the richness of life. If they grow up with bots being the main forms of communication about emotions, this could slip away without people necessarily noticing it.”
Personalized Health and Fitness Support
AI is increasingly tailoring health interventions to the individual. “Personalization is the name of the game with AI,” says Pilar Gerasimo, the founding editor of Experience Life and author of The Healthy Deviant: A Rule Breaker’s Guide to Being Healthy in an Unhealthy World. “It can give real-time, personalized feedback, suggestions, and nudges to help you create and fine-tune your nutrition and fitness programs.”
Gerasimo has been watching fitness and healthcare trends come and go for decades, and — with a few caveats — she sees the overall movement toward personalized health supports like this as a net gain.
Trabing points out that personalized fitness AI tools can make it easier to sift through a world of complicated health advice. “Learning how to work out, eat right, balance stress, and more is a lot of work,” he notes. “There is so much content, much of which is contradictory.”
An AI fitness app might help you log your workout information and track your progress over time. It can give you workout recommendations based on your goals and individual capacity, push you to progress, and remind you to rest. An app can also customize workouts based on your preferences, making you more likely to pursue movement that you enjoy.
Still, Trabing doesn’t think that fitness apps, useful as they are, will replace personal trainers or fitness classes any time soon.
“Personal trainers are there to motivate, inspire, and bring an energy to the training experience,” he says. “AI can help to personalize plans for more people, but that doesn’t negate the value of a personal trainer.”
Many wearable fitness devices use AI as well. These tools can monitor heart rate variability (HRV), sleep cycles, blood sugar, hydration, and recovery, among other metrics. More advanced wearables are currently in the works, including some that will track lactate and cholesterol levels or inflammation.
When this biometric data is combined with generative AI tools, like fitness apps that consider your motivations, goals, and preferences, the result might be akin to having a “personal trainer in your pocket,” Gerasimo says. “But AI still can’t replace the empathy and intuition of a human being. At present, it can’t read your face or feelings in the same way a caring person can.”
Additionally, wearable data isn’t always reliable, emphasizes fitness and nutrition educator Mike T. Nelson, PhD. This becomes obvious when you use more than one biometric device at a time: “My Oura Ring doesn’t match my Garmin, which doesn’t match my ithlete,” he says. These all capture data on different parts of the body, he explains, and likely rely on different algorithms.
An individual device is useful for tracking general trends (such as stress or activity levels), but trying to cross-reference multiple wearables can lead to confusion. “God forbid I have a client who has three wearable devices,” Nelson says. “What a disaster.”
He finds that data is most helpful when it helps drive habit change — such as the Oura Ring’s sleep score inspiring an earlier, more consistent bedtime. “Data is good,” he says. “But it’s not going to replace how you feel. I don’t want my clients to entirely outsource their decisions to a device, because sometimes its recommendations don’t line up with how you feel or what you want to do.”
Adds Gerasimo: “What has helped humans be healthy for millions of years is more or less the same thing. Eat mostly whole foods, move your body, rest, get out in nature, and connect with a supportive community. I don’t think AI is going to change those things anytime soon.”
Support for Healthy Behaviors
Our environments drive our behavior in ways big and small. Some AI advocates believe we can use it to design our environments to steer us toward healthy choices.
“I think AI will fuel an intersection of industries that create healthier homes, work environments, and health-motivating spaces,” Gerasimo says.
Some of this is already under way. Smart-lighting systems can adjust throughout the day to support circadian rhythms, improve sleep quality, and enhance focus and productivity. Apps may offer recipes based on your health goals or even the contents of your AI-connected fridge. Wearables set daily activity goals and ping you to suggest a stretch break.
In the future, these nudges could become even more adaptive, integrating with AI-powered home systems to adjust temperature, lighting, and soundscapes using real-time biometric feedback. AI tools could even suggest preventive actions before a migraine or anxiety attack escalates, based on subtle physiological cues. Algorithms might match people who share similar hobbies or interests, facilitating social connection and reducing loneliness.
Meanwhile, all this integration of AI into our environments raises questions about the balance between external guidance and personal agency. Although it can be appealing to outsource some decision-making, the convenience of AI-driven health nudges may come at a cost. “We don’t want to be completely manipulated by external agents,” Sommers says. “We want to cultivate our own inner agency for the choices we make.”
Self-direction is like a muscle: It weakens without regular use. Recent research from Carnegie Mellon University and Microsoft found that the more confidence people feel in AI, the less likely they are to engage their critical thinking skills as they use AI programs. This is especially concerning given that AI can make mistakes, mislead, and manufacture incorrect information that it presents as fact, known as “hallucinating.”
“We still want human wisdom to rise in the age of information — perhaps more information than we know what to do with,” Sommers says. “If we’re not discerning, we won’t know the difference between what’s valuable and what’s not.”
Risks of AI
The growing availability of health-supportive AI technologies offers much to appreciate, but the developments are not without hidden costs or challenges, like these:
It’s a new tech to regulate.
The rapid application of AI in healthcare is already outpacing the regulations meant to ensure its safety and fairness. Federal agencies were designed to regulate static products (the U.S. Food and Drug Administration regulates drugs and medical devices, for example), but they may not have the expertise or processes in place to evaluate dynamic technologies like the algorithms that are central to AI.
Algorithms are validated on certain training datasets, but many adapt to new inputs after they’ve been deployed, notes pathologist Taofic Mounajjed, MD. In other words, they’re constantly shape-shifting. “How nimble are the regulatory bodies going to be in evaluating them if they’re continuously evolving?” he asks.
Data can be biased.
AI models are often trained on datasets that represent limited, homogeneous populations. In an article in the journal Science, researchers describe how an AI system widely used in U.S. healthcare underestimated the health needs of Black patients compared with white patients who had similar conditions. The training data was based on healthcare spending rather than actual health status, so it reflected systemic racial disparities in access to care.
Data privacy is difficult to maintain.
AI systems collect vast amounts of personal health data, often from wearables, medical records, and even social media interactions. Users may not fully understand how their data is being used, who has access to it, or whether it’s being shared with third parties.
AI has a substantial environmental impact.
Large-scale AI models require massive computational power, contributing to high energy consumption. And data centers that power AI systems require lots of water for cooling: According to one study, a single ChatGPT prompt for a 100-word email uses the approximate equivalent of a standard bottle of water.
This article was written by Mo Perry, an Experience Life contributing editor, and originally appeared as “AI and Your Health” in the September/October 2025 issue of Experience Life.