According to a new study from Frost & Sullivan, the use of artificial intelligence (AI) and analytics in conventional operating rooms will help hospitals address inefficiencies and clinical challenges physicians face when performing surgery. The Growth Opportunities in Artificial Intelligence and Analytics in Surgery study says that the AI market for surgery will reach $225.4 million by 2024, up from $69.1 million in 2019. Siddharth Shah, Transformational Health Program Manager at Frost & Sullivan, said that patients will be the end beneficiaries of solutions that help surgeons do their jobs better. "Some of these solutions can help determine the risk of complications even before a patient is wheeled into the operating room so that doctors can pre-empt those, and ensure smoother surgeries, and faster recovery," said Shah. "Having an AI or analytics solution supporting surgeons only enhances their skill set further, meaning patients have improved outcomes." Shah adds that benefits such as fewer complications, re-admissions and corrective surgeries, along with earlier recoveries, also mean lower healthcare costs.
Italian medical implant manufacturer REJOINT is introducing mass customization and therapy personalization through a combination of Electron Beam Melting (EBM) and computerized analysis of intraoperative and post-operative data collected through IoT-connected sensorized wearables. The worldwide market for knee implants is now estimated at around five million implants per year. In advanced markets, the number of surgical procedures had already reached 150 per 100,000 inhabitants by 2011, with peaks of 250 in markets such as Austria and Switzerland. The strongest annual increase (7%) occurred in patients aged 64 and under. Until recently, the knee arthroplasty market consisted solely of standard prosthetic systems, with a limited range of sizes available.
A da Vinci surgical robot system performs heart surgery in 2017 at a hospital in Hefei, China. Credit: Shutterstock
In 2006, China highlighted the importance of robotics in its 15-year plan for science and technology. In 2011, the central government fleshed out these ambitions in its 12th five-year plan, specifying that robots should be used to support society in a wide range of roles, from helping emergency services during natural disasters and firefighting, to performing complex surgery and aiding in medical rehabilitation. Guang-Zhong Yang, head of the Institute of Medical Robotics at Shanghai Jiao Tong University, says that China's robotics research output has been growing steadily for two decades, driven by three major factors: "The clinical utilization of robotics; increased funding levels driven by national planning needs; and advances in engineering in areas such as precision mechatronics, medical imaging, artificial intelligence and new materials for making robots." Yang points out that funding levels for medical robotics from the National Natural Science Foundation of China and the Ministry of Science and Technology began to increase more sharply in 2011 compared to the previous decade. The accompanying rises in research output are closely related to the introduction of specialized robotics equipment in medical-research facilities, says Yao Li, a research scientist at Stanford Robotics Laboratory in California and founder of the company Borns Medical Robotics, based in both Chengdu, China, and Silicon Valley, California.
Stitching a patient back together after surgery is a vital but monotonous task for medics, often requiring them to repeat the same simple movements over and over hundreds of times. But thanks to a collaborative effort between Intel and the University of California, Berkeley, tomorrow's surgeons could offload that grunt work to robots -- like a macro, but for automated suturing. The UC Berkeley team, led by Dr. Ajay Tanwani, has developed a semi-supervised deep-learning system, dubbed Motion2Vec. This system is designed to watch publicly available videos of surgery performed by actual doctors, break down the medic's movements when suturing (needle insertion, extraction and hand-off) and then mimic them with a high degree of accuracy. "There's a lot of appeal in learning from visual observations, compared to traditional interfaces for learning in a static way or learning from [mimicking] trajectories, because of the huge amount of information content available in existing videos," Tanwani told Engadget.
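To make the idea concrete, here is a minimal, hypothetical sketch of the semi-supervised principle behind systems like Motion2Vec: video segments are embedded into a vector space where the same gesture clusters together, so a small number of labelled segments can propagate their gesture labels to unlabelled ones. The embedding network itself is omitted; the vectors, labels and function names below are illustrative assumptions, not the team's implementation.

```python
import numpy as np

# Assumed setup: an embedding network (omitted here) has already mapped
# short video segments of suturing into feature vectors, where segments
# showing the same gesture (insertion, extraction, hand-off) lie close.

def propagate_labels(labeled_vecs, labels, unlabeled_vecs):
    """Assign each unlabeled segment the gesture of its nearest labeled
    neighbour by cosine similarity -- the semi-supervised step."""
    def normalize(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = normalize(unlabeled_vecs) @ normalize(labeled_vecs).T
    return [labels[i] for i in sims.argmax(axis=1)]

# Toy example: three labeled segment embeddings, two unlabeled ones.
labeled = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
labels = ["insertion", "extraction", "hand_off"]
unlabeled = np.array([[0.9, 0.1], [-0.8, 0.2]])
print(propagate_labels(labeled, labels, unlabeled))
```

In this toy case the two unlabeled segments land nearest the "insertion" and "hand_off" embeddings respectively, which is exactly the kind of automatic gesture labelling that lets a robot learn from existing video rather than hand-annotated trajectories.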
Scientists at Harvard have copied their large, insect-inspired robot HAMR (Harvard Ambulatory Microrobot) into a smaller form factor. The new robot, HAMR-JR, is the size of a penny, measuring 2.25 centimetres across. The robot is capable of quick movement, able to travel 14 times the length of its body in a single second, which makes it one of the smallest and fastest robots currently made, according to Harvard. While it might be the size of a penny, it is significantly lighter than one, weighing only 0.3 grams. The ability to keep the familiar design, but change the scale of the robot, means that it can be used for a variety of purposes, including surgeries or large-scale industry, because of its ability to carry heavy payloads. The method of miniaturising the robot was surprisingly straightforward: researchers simply shrunk the 2D sheet design of the robot, as well as its circuitry, to a more minute scale.
GAINESVILLE, Fla.--(BUSINESS WIRE)--Exactech, a developer and producer of innovative implants, instrumentation and computer-assisted technologies for joint replacement surgery, and KenSci, a healthcare artificial intelligence (AI) platform company, announced today that a collaborative, foundational study on using machine learning (ML) to predict outcomes after shoulder arthroplasty has been published in Clinical Orthopaedics and Related Research, one of the premier scientific journals in orthopaedics. The research analyzes the potential of ML to use preoperative data to anticipate patients' post-operative results after anatomic total shoulder arthroplasty (aTSA) or reverse total shoulder arthroplasty (rTSA). These results can help surgeons preoperatively identify whether a patient will achieve certain clinical improvement thresholds, allowing them to appropriately risk-stratify patients for these elective procedures. Specifically, this research explores the efficacy of ML in predicting the American Shoulder and Elbow Surgeons (ASES), Constant, global shoulder function and VAS pain scores, as well as a patient's active range of motion in abduction, forward flexion and external rotation. It also studies the ability of ML to identify whether a patient may achieve clinical improvement that exceeds the minimal clinically important difference (MCID) threshold, as well as the substantial clinical benefit (SCB) threshold, for each outcome measure.
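The risk-stratification step described above can be sketched in a few lines: a model predicts a patient's post-operative improvement from preoperative data, and the prediction is compared against clinical-significance thresholds. Everything below is a hypothetical illustration of that workflow, not the published models; the threshold values, feature and function names are placeholders.

```python
import numpy as np

# Placeholder thresholds -- NOT figures from the study.
MCID = 10.0  # minimal clinically important difference (assumed value)
SCB = 20.0   # substantial clinical benefit (assumed value)

def risk_stratify(X_train, y_train, x_new):
    """Fit an ordinary least-squares model of outcome improvement on
    preoperative features, predict for a new patient, and flag which
    clinical thresholds the predicted improvement exceeds."""
    X = np.column_stack([np.ones(len(X_train)), X_train])  # add intercept
    coef, *_ = np.linalg.lstsq(X, y_train, rcond=None)
    pred = float(np.concatenate([[1.0], x_new]) @ coef)
    return {"predicted_improvement": pred,
            "exceeds_mcid": pred >= MCID,
            "exceeds_scb": pred >= SCB}

# Toy data: one preoperative feature (e.g. baseline score) vs.
# observed post-operative improvement in points.
X_train = np.array([[30.0], [40.0], [50.0], [60.0]])
y_train = np.array([30.0, 25.0, 20.0, 15.0])
print(risk_stratify(X_train, y_train, np.array([35.0])))
```

A real system would use richer preoperative data and a stronger model, but the output shape is the point: a per-patient prediction plus explicit MCID/SCB flags that a surgeon can weigh before an elective procedure.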
Urology fellow Jeremy Fallot and nurse Shauna Harnedy assist in robotic surgery by Ruban Thanigasalam (out of view) in Sydney, Australia. Credit: Ken Leanfore for Nature
Loved by surgeons and patients alike for its ease of use and faster recovery times, the da Vinci surgical robot is less invasive than conventional procedures, and lacks the awkwardness of laparoscopic (keyhole) surgery. But the robot's US$2-million price tag and negligible effect on cancer outcomes are sparking concern that it's crowding out more affordable treatments. There are more than 5,500 da Vinci robots globally, manufactured by California-based tech giant Intuitive. The system is used in a range of surgical procedures, but its biggest impact has been in urology, where it has a market monopoly on robot-assisted radical prostatectomy (RARP), the removal of the prostate and surrounding tissues to treat localized cancer. Uptake of this procedure in the United States, Europe, Australia, China and Japan has been rapid.
Automatic surgical workflow recognition in video is a fundamental yet challenging problem in developing computer-assisted and robot-assisted surgery. Existing deep-learning approaches have achieved remarkable performance on the analysis of surgical videos; however, they rely heavily on large-scale labelled datasets. Unfortunately, annotations are rarely available in abundance, because producing them requires the domain knowledge of surgeons. In this paper, we propose a novel active learning method for cost-effective surgical video analysis. Specifically, we propose a non-local recurrent convolutional network (NL-RCNet), which introduces a non-local block to capture the long-range temporal dependency (LRTD) among continuous frames. We then formulate an intra-clip dependency score to represent the overall dependency within a clip. By ranking these scores across the clips in the unlabelled data pool, we select the clips with weak dependencies to annotate, as these are the most informative ones for network training. We validate our approach on a large surgical video dataset (Cholec80) by performing the surgical workflow recognition task. Using our LRTD-based selection strategy, we outperform other state-of-the-art active learning methods. Using only 50% of the samples, our approach exceeds the performance of full-data training.
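The selection strategy above can be sketched compactly: score each unlabelled clip by its intra-clip dependency and send the weakest-dependency clips to the annotator. The sketch below is a simplified stand-in, assuming precomputed frame features and approximating the dependency score with mean pairwise cosine similarity; it is not the NL-RCNet implementation, whose score comes from learned non-local attention.

```python
import numpy as np

def intra_clip_dependency(clip):
    """clip: (n_frames, feat_dim) array of frame features.
    Proxy score: mean cosine similarity over distinct frame pairs."""
    f = clip / np.linalg.norm(clip, axis=1, keepdims=True)
    sims = f @ f.T
    n = len(clip)
    return (sims.sum() - n) / (n * (n - 1))  # drop the diagonal self-sims

def select_clips_to_annotate(clips, budget):
    """Rank clips by dependency score, ascending, and pick the `budget`
    weakest-dependency clips -- the most informative ones to label."""
    scores = [intra_clip_dependency(c) for c in clips]
    return sorted(range(len(clips)), key=lambda i: scores[i])[:budget]

# Toy pool: one clip of near-identical frames (strong dependency) and
# one clip of unrelated frames (weak dependency).
rng = np.random.default_rng(0)
coherent = np.tile(rng.normal(size=(1, 8)), (10, 1)) + 0.01 * rng.normal(size=(10, 8))
diverse = rng.normal(size=(10, 8))
print(select_clips_to_annotate([coherent, diverse], budget=1))
```

With this scoring, the clip of unrelated frames is selected for annotation, matching the paper's intuition that clips whose frames are weakly related carry the most new information per labelling dollar.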
Artificial intelligence (AI) has "tremendous potential" to revolutionise comprehensive spine care across areas including patient selection, outcome prediction, research, pre-operative workup and peri-operative assistance, the authors of a large systematic review on the topic have found. Published in the Global Spine Journal, the review, led by Jonathan J Rasouli (Cleveland Clinic, Cleveland, USA), looks at the current trends and applications of AI and machine learning in conventional and robot-assisted spine surgery. According to Rasouli and colleagues, there has been increasing attention to and interest in the system-based benefits of AI and its applications to spine surgery. These include helping clinicians and hospital centres define the quality and cost of care, improve outcomes and mitigate downrange financial exposures for both institutions and payers. "While there has also been controversy surrounding AI, if implemented appropriately, it has the potential to revolutionise the standard of care in spine surgery, reduce cost and waste, and improve the efficiency and patient care. In addition, AI could enhance individualised care to patients to reduce heterogeneity in both clinical practice and research," the study team writes.