MedicalExpo e-Magazine - #21 – A Future Assisted by Artificial Intelligence

The Smart Magazine About Medical Technology Innovations

A Future Assisted by Artificial Intelligence




The words “artificial intelligence” might sound scary. Yet they are ever more present in many different areas of the economy—finance, transportation, the hotel industry and even education. What of the healthcare sector? An increasing number of applications for intelligent machines help doctors offer the most suitable treatment and advice for their patients.

To understand how artificial intelligence really works, we interviewed an expert from IBM Watson, a pioneer in cognitive computing. We also talked to several startups specializing in the use of deep learning to help detect cancer.

 

In another futuristic vein, this issue features an interview with Dr. Shafi Ahmed. Last April, he performed the very first surgery to be live-streamed in virtual reality. The operation was watched by around 65,000 people.

Hot Topics
Saying that Watson will replace doctors is a sacrilege
Artificial intelligence (Courtesy of Hans-Joachim Roy/Shutterstock)


It’s impossible to talk about artificial intelligence without mentioning IBM’s Watson. A pioneer in cognitive computing, the American computer giant has found multiple health applications for Watson. Pascal Sempé, senior sales consultant for Watson Health Solutions in France, explained how Watson functions and what’s...


Hot Topics
In medical imaging, the ultimate goal of machine learning is to recognize patterns better and faster than humans can
Cancer cells (Courtesy of Wellcome Images)


Radiologists’ work may soon become a lot more efficient. With advances in computer vision and deep learning, they will someday use computer algorithms to analyze cancer tissue faster and more accurately. The field is proceeding slowly but surely.

What’s driving the field forward are new capabilities in data storage that, until a few years ago, were unimaginable. “Computers could only scale up so far,” Dr. Chris Pal, associate professor of computer and software engineering at Polytechnique Montreal, the engineering school affiliated with the University of Montreal, told MedicalExpo e-magazine.

“Today it is a lot easier to collect data; hard drives are cheap; and you can put more processors on a chip,” he added. “We will soon be able to reproduce human performance on a number of image benchmark problems, within the realm of usability.”

Interest in building better clinical decision support tools led Dr. Ronald Summers to pursue deep learning, which he thinks will add value to medical imaging by reducing errors. Summers is a senior investigator in the Imaging Biomarkers and Computer-Aided Diagnosis (CAD) Laboratory at the National Institutes of Health Clinical Center in Bethesda, Maryland, and a key player in research efforts at the federal level in the United States.

Recognizing Patterns Better Than a Human

Deep learning constructs many layers of abstraction to help map machine inputs to progressively higher-level representations. Sometimes mocked as the stuff of science fiction, it is making incremental advances in medical imaging and seems poised for broader acceptance.

Machine learning relies on big data and the bigger the database, the better the pattern recognition becomes.

In medical imaging, the ultimate goal of machine learning is to recognize patterns better and faster than humans can, which ratchets up both accuracy and productivity. Artificial intelligence uses deep neural networks, a form of machine learning, to train machines to interpret medical images. Machine learning relies on big data and the bigger the database, the better the pattern recognition becomes.
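To make the idea of stacked layers of abstraction concrete, here is a minimal sketch of a small convolutional network of the kind used for image pattern recognition. It assumes a PyTorch-style setup; the layer sizes and the two-class lesion-versus-healthy task are illustrative only and are not taken from any of the systems described in this article.

```python
# Minimal sketch: stacked layers that map raw pixels to progressively
# higher-level representations, ending in a prediction.
# Assumes PyTorch; the architecture and the two-class task are illustrative only.
import torch
import torch.nn as nn

class TinyImagingNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # low-level edges and textures
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # high-level decision

    def forward(self, x):
        x = self.features(x)              # 64x64 grayscale patch -> feature maps
        return self.classifier(x.flatten(1))

# One 64x64 grayscale image patch; real systems train on thousands of labeled examples.
patch = torch.randn(1, 1, 64, 64)
logits = TinyImagingNet()(patch)
print(logits.shape)  # torch.Size([1, 2])
```

The bigger and more varied the labeled training set, the better such a network becomes at telling the two classes apart, which is exactly the point researchers make about big data.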

“So far, deep learning is accurate for such tasks as non-medical image analysis and speech recognition,” Summers said. “Right now, although there have been large improvements, deep learning is not as good as the experts for most medical image diagnosis tasks.”

In a recent review of the current status of the field, Dr. William M. Wells III, of Harvard Medical School and Brigham and Women’s Hospital, Boston, identified some of the most “powerful capabilities emerging” in the areas of “segmentation and registration of medical images, and representations of shapes of anatomical structures in individuals and populations, to name a few.”


Dice scores comparing expert vs. deep learning segmentation of a liver tumor from a CT scan (Courtesy of Imagia)
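The Dice score used in the figure is a standard overlap metric for comparing two segmentations: twice the size of the overlap divided by the combined size of the two masks, so 1.0 means perfect agreement and 0 means none. Below is a minimal sketch of the calculation, assuming binary NumPy masks; it is illustrative and not Imagia’s actual evaluation code.

```python
# Minimal sketch of the Dice score for comparing an expert's segmentation
# mask with a model's mask. Assumes binary NumPy arrays; illustrative only.
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy example: the expert marks 4 pixels, the model marks 6, and 4 overlap.
expert = np.zeros((4, 4), dtype=bool); expert[1:3, 1:3] = True
model  = np.zeros((4, 4), dtype=bool); model[1:3, 1:4] = True
print(round(dice_score(expert, model), 2))  # 2*4 / (4+6) = 0.8
```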

Uses for Detecting Cancer

Only a small number of peer-reviewed studies applying machine learning to cancer have been performed, but early results are laying the foundation for the field. Dr. Summers described results from his group, which show impressive improvements in sensitivity in three areas of oncology.

In detecting polyps of the colon, sensitivity rose from 58% to 75%; for cancer of the spine, sensitivity rose from 57% to 70%; and for lymph node analysis, improvement in sensitivity was especially striking, rising from 43% to 77%.
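Sensitivity here is the share of real findings the system catches: true positives divided by the sum of true positives and false negatives. The small calculation below illustrates the colon-polyp numbers; the counts are invented for the example, and only the 58% and 75% rates come from the reported results.

```python
# Sensitivity = TP / (TP + FN): the fraction of real findings that are detected.
# The counts are invented for illustration; only the percentages come from the article.
def sensitivity(true_positives: int, false_negatives: int) -> float:
    return true_positives / (true_positives + false_negatives)

# Suppose 100 colon polyps are actually present in a test set:
print(sensitivity(58, 42))  # 0.58 -> the earlier system finds 58 of them
print(sensitivity(75, 25))  # 0.75 -> the improved system finds 75
```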

Dr. Igor Barani, CEO of the San Francisco deep-learning start-up Enlitic, told MedicalExpo that Enlitic analyzed lung CT scans with its deep-learning system and found it was 50% better than human readers at classifying malignant tumors, with a false-negative rate of zero, compared with 7% for the humans.


3D rendering of a liver tumor segmented from a CT scan by a deep learning algorithm (Courtesy of Imagia)

In separate research by Mohammad Havaei, Chris Pal and others, which used MR images to evaluate glioblastoma tumor segmentation with deep neural networks, the authors report an architecture that delivers superior detail and speed. “It not only offers far more detail and accuracy than state-of-the-art studies, it is also 30 times faster,” said Pal, associate professor at Polytechnique Montreal.

Pal sometimes teams up with the Montreal deep-learning start-up Imagia, which is devoted to working on cancer and deep learning. Alexandre Le Bouthillier, CEO of Imagia, told MedicalExpo: “Deep learning is key to the future of medical image processing because of its ability to merge large and diverse data sources to make more accurate predictions. We long ago reached our limits with conventional radiology.”


Innovation Focus
If we need 2.2 million extra surgeons and another 150 million operations, we have to think about how to train people on a bigger scale
Surgery live-streamed in VR (Courtesy of Medical Realities)

A surgeon, cancer specialist, and co-founder of virtual and augmented reality firm Medical Realities, Dr. Shafi Ahmed performed an important surgery last April at The Royal London Hospital. It was the first surgery live-streamed to the world in virtual reality—an experience Dr. Ahmed wants to repeat in order to train...


Using VR to understand aortic stent grafts (Courtesy of Medtronic).

Clinicians can now “deep dive” using virtual reality to understand and view aortic stent grafts. Medtronic, the Dublin-based medical technology company, has...



Medical students can probe the body using augmented reality (Courtesy of CWRU).

Medical students of the future may spend less time doing autopsies to learn anatomy and more time in augmented reality. The Cleveland Clinic and Case Western...




    The made-to-measure 3-D-printed GO wheelchair (Courtesy of Layer)

    Imagine a made-to-measure 3-D-printed wheelchair “cool enough to go to the club with,” one that would be a real “human body extension.” That’s what Layer, an industrial design agency based in the U.K., has tried to make with its new project: “GO.” Layer’s founder, Benjamin Hubert, talked to us.

     

    ME e-mag: Why did you use 3-D printing for the GO wheelchair?

    Benjamin Hubert: The GO wheelchair is a particularly unique device—I would call it more a mobility device than a medical device. The project started by talking to wheelchair users a lot. We spent about six months just talking to people. One of the insights that came out of that period was that everybody has a different size, shape, weight, injury and physical condition, and that what they need is a made-to-measure solution that will be just for them.

    They want an extension of the body. It should be beautiful, it should be cool, it should express their style.

    So 3-D printing presented itself as an option to potentially do something that could be automated, take the physical form and shape of the human body and represent it in the product very literally. With 3-D printing, we are using a very sophisticated but actually quite simple approach—taking data and transforming it into a piece of three-dimensional equipment.

    For this approach, we take your biometric scan—your physical shape—and then there is a consultation period because not everything is just about the human shape. It’s also about how you live your life, about how long you’ve had your condition, etc. That combination has been a really powerful tool to make something that is really a human body extension.

    ME e-mag: Is everything 3-D-printed?

    Benjamin Hubert: The only two parts that are 3-D-printed are the seat and the footrest. Every other component is the same on every chair. The only two components you need to change to fit each person’s shape are the seat and the footrest, because they basically control everything: height, weight, angle, leg length, feet size, etc. The advantage of the GO wheelchair is that it increases comfort and reduces injury: when something is made specifically for you, you’re not moving around in your seat and you can control your body a lot more.


    The seat and the footrest are 3-D-printed (Courtesy of Layer)

    ME e-mag: Does it also change the way people view wheelchairs?

    Benjamin Hubert: Yes. A lot of wheelchair users were telling us that they don’t want a machine or a medical device. They want an extension of the body. It should be beautiful, it should be cool, it should express their style. The whole idea is that it’s a vehicle that they’re in all day so why shouldn’t it be cool enough to go to the club or to use in every occasion? That was one of the biggest frustrations [that came] out of our research project.


    The GO wheelchair increases comfort and reduces injury (Courtesy of Layer)

    ME e-mag: What materials do you use to 3-D-print the seat and the footrest?

    Benjamin Hubert: The seat is a combination of different types of plastic resin. The footrest is 3-D-printed titanium, which is very strong and lightweight at the same time. The advantage when you 3-D-print the footrest, for example, is that inside it’s completely empty. Traditionally, if you made that component it would be cast, and it would be solid and much heavier. The goal is to go to full-scale production. We are talking to some companies at the moment; we’ll see how it goes.

    ME e-mag: How long does it take to print this chair?

    Benjamin Hubert: It only takes one or two days to actually print the parts, but the whole process is a little bit longer. And of course it depends on where you are in the world and where you’re printing it, but the aim of the project is to dramatically reduce the amount of time it takes to manufacture a wheelchair, because at the moment the whole process is very long and quite old-fashioned.


    The HAPIfork vibrates to encourage slowing down (Courtesy of HAPILABS).

    The first forks were eating utensils in ancient Egypt. Now the fork is going high tech in the interest of health. The new HAPIfork has a Bluetooth connection,...


    New flexible MRI coils for babies (Courtesy of Nature)

    MRI machines can take a long time to produce the images needed—sometimes more than an hour. This can be challenging, especially for pediatric patients. To help...



    Healcerion, a company based in South Korea, has created an ultrasound system that is no bigger than the transducer itself. The SONON 300C uses a tablet or...



    The AspireAssist System, recently approved by the U.S. Food and Drug Administration, is a brand-new, minimally invasive alternative to weight loss surgery. The system works by removing a portion of the food from the stomach through a tube before the calories are absorbed.

    To do this, a thin tube is placed in the user’s stomach during a 15-minute outpatient procedure. This tube connects the inside of the stomach directly to a discreet button on the outside of the user’s abdomen. A reservoir connected to the small button on the skin is filled with normal drinking water. When the lever on the button is rotated, stomach contents begin to empty into the toilet. When the flow stops, the reservoir is squeezed to infuse water into the stomach and help loosen food particles. The user can repeat the process until the draining stops.

    About 30% of the food can be removed from the stomach before the calories are absorbed. The aspiration process is performed about 20 to 30 minutes after the entire meal is consumed and takes 5 to 10 minutes to complete. The rest of the meal is digested normally.


    CONTRIBUTORS



    Celia Sampol

    Celia Sampol has been a journalist for 15 years. She worked in Brussels and Washington for national media outlets (Agence France-Presse, Libération). She’s now the editor-in-chief of MedicalExpo e-magazine.



    Howard Wolinsky

    Howard Wolinsky is a Chicago-based freelance journalist specializing in health-care topics.



    Laura Newman

    Laura Newman is a New York-based medical writer who writes frequently about medical technological advances and health policy.



    Christina Kuhrcke

    Christina Kuhrcke is a Berlin-based freelance journalist, doctor and digital storyteller.


