
Rewriting the Lung Map: How AI-Driven Imaging Is Transforming Emphysema Diagnosis


At the SophIA Summit, Professor Elsa Angelini revealed how a decade of AI-powered biomedical imaging has uncovered new emphysema specification categories that may reshape clinical care.

In a hurry? Here are the key points to know:

  • New emphysema specification categories uncovered: AI-powered analysis of long-term lung CT cohorts allowed Angelini’s team to identify previously unseen emphysema subtypes, supporting more precise COPD diagnosis and patient stratification.
  • Advanced AI drives imaging breakthroughs — with caution: Deep learning and generative models now underpin most of the team’s work, enabling domain transfer, missing-sequence inference and super-resolution, but requiring strict control to avoid hallucinations and errors.
  • The rise of foundational and multimodal imaging models: New large pretrained models and end-to-end multimodal architectures are reshaping biomedical imaging, while highlighting urgent challenges such as domain shift, scanner variability and maintaining reproducibility.

For more than a decade, Professor Elsa Angelini, biomedical imaging and machine-learning specialist at Télécom Paris and adjunct senior research scientist at Columbia University, has led one of the world’s most ambitious imaging-based investigations into chronic lung disease. Drawing on vast multicenter longitudinal cohorts and cutting-edge AI technologies, her team has uncovered previously unrecognized specification categories of emphysema, deepening the clinical understanding of COPD and its heterogeneous manifestations. In an era when AI tools increasingly inform diagnosis and treatment planning, Angelini’s work demonstrates how robust imaging analytics — grounded in reproducibility, explainability, and biological insight — can deliver meaningful improvements to patient care.

A Decade of Imaging the Lung: Uncovering New Emphysema Specification Categories

Angelini’s research began with two major multicenter cohorts: MESA, a general-population cohort, and SPIROMICS, designed around heavy smokers at high risk for COPD. Both have followed participants for more than 15 years, generating a rich longitudinal archive of lung CT scans and clinical measures. 

“We deal with cohorts of lung CT scans. I told the story of 10 years of investigation of how to better quantify and subtype emphysema,” she explained during her interview with MedicalExpo e-Magazine at the SophIA Summit.

Historically, the clinical community suspected that emphysema was not a single entity, but a spectrum of subtypes distinguished by spatial presentation, association with fibrosis, and distinct structural signatures. Yet these differences had not been rigorously quantified. Angelini’s team closed the gap. 

“We are really the first group to have targeted that… to quantify very rigorously and in a reproducible manner emphysema, and then to try to subtype it,” she said. 

Their work has revealed new emphysema specification categories — new structural patterns identifiable through advanced AI-driven imaging. This offers clinicians the possibility of more precise phenotyping and, ultimately, more tailored patient management.

The implications are profound. These new categories correlate with clinical measures such as lung function, symptom severity, and long-term outcomes. They also help distinguish which patients exhibit fibrotic tendencies, which show diffuse versus localized tissue destruction, and which may respond differently to treatment. For front-line pulmonologists and radiologists, this refined map of disease holds potential for improving early detection and guiding therapy selection.

READ Pulmonary emphysema subtypes defined by unsupervised machine learning on CT scans.

From Classical Vision to Full AI Adoption: Evolving the Biomedical Imaging Toolbox

The emphysema project also tells a broader story about the rapid evolution of biomedical image analysis. Angelini recalls that, in the early 2010s, her lab relied on classical computer-vision techniques — textural signatures, spatial features, probabilistic pixel-classification models, and clustering algorithms like k-means. But by 2015, deep learning had matured enough to transform the field. 
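The classical stage of that toolbox can be illustrated with a minimal sketch: extract simple textural statistics from CT patches and cluster them with k-means. The data, features, and k-means implementation below are invented for illustration and are far simpler than the textural signatures and probabilistic models the lab actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for 200 lung-CT patches of 32x32 pixels.
patches = rng.normal(size=(200, 32, 32))

def texture_features(patch):
    """Toy textural signature: mean intensity, variance, gradient energy."""
    gy, gx = np.gradient(patch)
    return [patch.mean(), patch.var(), (gx**2 + gy**2).mean()]

X = np.array([texture_features(p) for p in patches])  # shape (200, 3)

def kmeans(X, k=3, iters=20, seed=0):
    """Plain k-means: assign each patch to its nearest cluster center."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(X)  # candidate "subtype" assignment per patch, shape (200,)
```

Each patch ends up with a cluster label that a classical pipeline would then relate to spatial presentation or clinical measures; deep learning later replaced these hand-crafted features with learned representations.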

“We took the opportunity to include that… using convolutional neural networks and specific adversarial training,” she said, describing how her team adapted high-resolution lung CT models to lower-resolution cardiac imaging domains through AI-based domain transfer techniques.

Today, her group’s pipeline is fully AI-driven. 

“Everything we are developing currently is only based on AI and the most modern architectures,” she emphasized.

Leveraging open-source frameworks like MONAI and bespoke code written by her students, Angelini’s laboratory integrates neural segmentation, deep representation learning, and generative modeling into its workflows. Yet she stresses that advanced tools do not eliminate risk: 

“AI makes lots of errors… we definitely need to be careful,” she warns, highlighting the need for explainable AI systems that allow researchers to verify which structures the network is attending to — and why.

Generative AI, in particular, presents opportunity and danger. Angelini’s team uses generative networks to infer missing imaging sequences (for example, generating CT-equivalent information from MRI for radiotherapy planning) and to enhance microscopy images through super-resolution and denoising. But training such models is notoriously difficult. 

“They can hallucinate very quickly… that’s the very high risk with generative models,” she cautioned. 

For clinical practice, she argues, generative tools must be developed only with tightly constrained use cases and always under human expert guidance.

READ more about the topics presented at the MicroscopAI conference in September 2025, organized by Angelini.

The Road Ahead: Foundational Models, Multimodal Data, and the Challenge of Domain Shift

Looking to the future, Angelini sees biomedical imaging entering a new transformative phase powered by large-scale foundational models — neural architectures trained on hundreds of thousands of CT, MRI, and histopathology images. 

“People have trained models that know how to encode the visual information… and they are sharing that with the community,” she noted, describing tools that can segment whole CT scans or automate complex structural analyses with unprecedented efficiency. 

Equally promising is the rise of multimodal learning, where imaging data are combined with radiology reports, physiological measurements, and even blood biomarkers. 

“The idea would be to combine all of this… to encode them together instead of using them separately,” Angelini explained, noting that such joint encoding could also reduce redundant imaging and thereby decrease radiation exposure for patients. 
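At its simplest, encoding modalities together rather than separately means building one patient representation from all available signals. The sketch below is purely illustrative, with invented feature dimensions; real multimodal architectures learn these embeddings end to end rather than just concatenating them.

```python
import numpy as np

rng = np.random.default_rng(2)
n_patients = 5

# Invented per-modality feature vectors for illustration.
imaging = rng.normal(size=(n_patients, 128))     # e.g. a CT-scan embedding
reports = rng.normal(size=(n_patients, 64))      # e.g. a radiology-report embedding
biomarkers = rng.normal(size=(n_patients, 10))   # e.g. a blood panel

# "Encode them together": one joint representation per patient,
# instead of feeding each modality to a separate model.
joint = np.concatenate([imaging, reports, biomarkers], axis=1)
print(joint.shape)  # (5, 202)
```

A downstream predictor then sees all modalities at once, which is what enables the richer phenotyping described below, at the cost of much higher data dimensionality.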

Such integrations could enable richer phenotyping and personalized prediction models, but they also dramatically expand data dimensionality — making large, harmonized datasets more important than ever. One of the greatest challenges, she says, is domain shift: the often-drastic differences between images produced by different scanners or imaging centers. 

“Whatever you’re training today will have to be updated for the new machine… and it’s a very big problem for us,” she said. 

Her team now sub-parameterizes models by scanner type and fine-tunes essential parameters for each machine family to maintain accuracy across cohorts. 

“You can’t handle them the same way… one has very noisy images, the other not,” she noted, explaining why such adjustments are essential for lung CT analysis in particular.
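One simple way to picture this per-scanner adjustment is to keep a shared model fixed while re-fitting a small set of scanner-specific parameters, such as an intensity shift and scale, for each machine family. The scanner names, intensity statistics, and normalization scheme below are invented for illustration; the team's actual sub-parameterization operates on model parameters, not just input normalization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic lung-CT intensity samples (in arbitrary units) from two
# hypothetical scanner families: one has noisier images than the other.
scans = {
    "scanner_A": rng.normal(loc=-850, scale=40, size=1000),
    "scanner_B": rng.normal(loc=-820, scale=90, size=1000),  # noisier
}

def fit_normalizer(x):
    """Fit per-scanner shift/scale so the shared model sees a common domain."""
    return {"shift": x.mean(), "scale": x.std()}

def normalize(x, params):
    return (x - params["shift"]) / params["scale"]

# Fine-tune only these small per-scanner parameter sets; the shared
# downstream model is left untouched.
per_scanner = {name: fit_normalizer(x) for name, x in scans.items()}
normalized = {name: normalize(x, per_scanner[name]) for name, x in scans.items()}
```

After adaptation, both scanner families produce inputs with comparable statistics, so a model trained on one cohort degrades less when applied to the other.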

Despite these obstacles, Angelini remains confident in the field’s direction — and in the collaborative scientific culture driving its progress. 

“We are a good community in terms of sharing code… to make sure it’s reproducible,” she said.

Ultimately, she believes the key to successful clinical translation lies in combining the power of AI with the discernment of medical professionals. As she put it: 

“The expertise of the human being is extremely important. You don’t generate an image just for the fun of generating.”

Toward Safer, Smarter, and More Unified Imaging AI

As biomedical imaging moves toward increasingly automated workflows, Angelini stresses that the promise of AI must be balanced with vigilance. Generative models that infer missing imaging sequences or reconstruct CT-equivalent data from MRI could streamline workflows and reduce patient burden, but they must be deployed with caution. 

At the same time, the emergence of powerful foundational models — pretrained across hundreds of thousands of images and capable of segmenting or interpreting whole CT scans — is opening a new chapter in medical imaging. These shared resources are accelerating research and leveling the playing field across institutions. 


With these tools, and continued emphasis on transparency and reproducibility, the field is steadily moving toward a more unified, reliable, and clinically impactful era of AI-driven imaging.
