Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments on mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and perform seamless transitions from the physical world to a mixed world with digital entities. These MAR systems support user experiences by using MAR devices to provide universal accessibility to digital content. Over the past 20 years, a number of MAR systems have been developed; however, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort to survey existing MAR frameworks (37 in total) and further discusses the latest studies on MAR through a top-down approach: 1) MAR applications; 2) MAR visualisation techniques adaptive to user mobility and contexts; 3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and 4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state of the art, and discuss important open challenges and possible theoretical and technical directions. This survey aims to benefit researchers and MAR system developers alike.
Anesthetic drugs act on the brain, but most anesthesiologists rely on heart rate, respiratory rate, and movement to infer whether surgery patients remain unconscious to the desired degree. In a new study, a research team based at MIT and Massachusetts General Hospital shows that a straightforward artificial intelligence approach, attuned to the kind of anesthetic being used, can yield algorithms that assess unconsciousness in patients based on brain activity with high accuracy and reliability. "One of the things that is foremost in the minds of anesthesiologists is, 'Do I have somebody who is lying in front of me who may be conscious and I don't realize it?' Being able to reliably maintain unconsciousness in a patient during surgery is fundamental to what we do," says senior author Emery N. Brown, the Edward Hood Taplin Professor in The Picower Institute for Learning and Memory and the Institute for Medical Engineering and Science at MIT, and an anesthesiologist at MGH. "This is an important step forward." More than providing a good readout of unconsciousness, Brown adds, the new algorithms offer the potential to let anesthesiologists maintain it at the desired level while using less drug than they might administer when depending on less direct, accurate, and reliable indicators.
When you are awake, your neurons talk to each other by tuning into the same electrical impulse frequencies. One set might be operating in unison at 10 hertz, while another might synchronize at 30 hertz. When you are under anesthesia, this complicated hubbub collapses into a more uniform hum. The neurons are still firing, but the signal loses its complexity. A better understanding of how this works could make surgery safer, but many anesthesiologists don't use an EEG to monitor their patients.
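The shift described above, from activity spread across several frequency bands toward one dominant slow rhythm, is something an EEG monitor can quantify from the power spectrum. The following is a minimal illustrative sketch on synthetic signals, not the monitoring algorithm from any study mentioned here; the band edges and the "slow-wave fraction" summary are assumptions chosen for clarity.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Total periodogram power in the frequency band [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

fs = 250                      # Hz, a common EEG sampling rate
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(0)

# "Awake": distinct groups of neurons oscillating at 10 Hz and 30 Hz, plus noise.
awake = (np.sin(2 * np.pi * 10 * t)
         + np.sin(2 * np.pi * 30 * t)
         + 0.5 * rng.standard_normal(t.size))

# "Anesthetized": the hubbub collapses into one slow, uniform rhythm.
anesthetized = 2.0 * np.sin(2 * np.pi * 1 * t) + 0.2 * rng.standard_normal(t.size)

for label, sig in [("awake", awake), ("anesthetized", anesthetized)]:
    total = band_power(sig, fs, 0.5, 45)   # broadband EEG power
    slow = band_power(sig, fs, 0.5, 4)     # delta-band (slow-wave) power
    print(label, "slow-wave fraction:", round(slow / total, 3))
```

The slow-wave fraction is near zero for the synthetic awake signal and close to one under the synthetic anesthetic collapse, which is the kind of spectral signature such monitors track.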
With the ongoing debate about fairness, explainability, and transparency of machine learning models, their application in high-impact clinical decision-making systems must be scrutinized. We consider a real-life example of risk estimation before surgery and investigate the potential for bias or unfairness in a variety of algorithms. Our approach creates transparent documentation of potential bias so that users can apply the model carefully. We augment a model-card-like analysis using propensity scores with a decision-tree-based guide for clinicians that identifies predictable shortcomings of the model. In addition to functioning as a guide for users, we propose that it can guide the algorithm-development and informatics teams to focus on data sources and structures that can address these shortcomings.
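A core ingredient of such bias documentation is comparing model error rates across patient subgroups. The sketch below is a minimal illustration of that idea on made-up data, not the authors' method; the false-negative-rate metric, subgroup labels, and arrays are all hypothetical.

```python
import numpy as np

def subgroup_error_rates(y_true, y_pred, groups):
    """False-negative rate per subgroup: a minimal fairness audit.

    A large gap between subgroups flags a predictable shortcoming
    that a model card (or a clinician-facing guide) should document.
    """
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)  # actual positives in subgroup g
        if positives.sum() == 0:
            continue
        rates[str(g)] = float((y_pred[positives] == 0).mean())  # missed positives
    return rates

# Hypothetical pre-surgical risk labels and predictions for two subgroups.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1])
groups = np.array(["A"] * 6 + ["B"] * 6)

print(subgroup_error_rates(y_true, y_pred, groups))
```

Here the model misses far more high-risk patients in group B than in group A; documenting exactly such gaps, and the data conditions that produce them, is what a model-card-style guide is for.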
Brain shift makes pre-operative MRI navigation highly inaccurate, so intraoperative modalities are adopted in the surgical theatre. Owing to its economy and portability, ultrasound imaging is used at our collaborating hospital, Charing Cross Hospital, Imperial College London, UK. However, intraoperative diagnosis on ultrasound images is neither straightforward nor consistent, even for very experienced clinical experts. Hence, there is a demand for a computer-aided diagnosis (CAD) system that provides a robust second opinion to surgeons. The proposed CAD system, based on a mixed-attention Res-U-Net with an asymmetric loss function, achieves state-of-the-art results against the ground truth by classifying directly at the pixel level, and it outperforms current mainstream pixel-level classification methods (e.g., U-Net, FCN) on all evaluation metrics.
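An "asymmetric" loss for pixel-level classification typically penalizes the two error types unequally, for instance weighting missed lesion pixels more heavily than false alarms. The sketch below shows one common form, a false-negative-weighted binary cross-entropy; it is an illustrative assumption, and the paper's exact loss formulation may differ, as may the `fn_weight` value.

```python
import numpy as np

def asymmetric_bce(y_true, y_prob, fn_weight=4.0, eps=1e-7):
    """Pixel-wise binary cross-entropy with a heavier penalty on
    missed positive (e.g. lesion) pixels than on false alarms."""
    y_prob = np.clip(y_prob, eps, 1 - eps)  # avoid log(0)
    loss = -(fn_weight * y_true * np.log(y_prob)
             + (1 - y_true) * np.log(1 - y_prob))
    return loss.mean()

# Two predictions with symmetric probability errors but opposite error types:
y_true = np.array([1.0, 0.0])            # one lesion pixel, one background pixel
miss_lesion = np.array([0.1, 0.1])       # false negative on the lesion pixel
false_alarm = np.array([0.9, 0.9])       # false positive on the background pixel

# The asymmetric loss punishes the missed lesion far more than the false alarm.
print(asymmetric_bce(y_true, miss_lesion), asymmetric_bce(y_true, false_alarm))
```

Biasing the loss this way is a standard remedy when lesion pixels are rare relative to background, since an unweighted loss lets a network score well by under-segmenting.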
The TriRhenaTech alliance presents a collection of accepted papers from the cancelled tri-national 'Upper-Rhine Artificial Intelligence Symposium' planned for 13 May 2020 in Karlsruhe. The TriRhenaTech alliance is a network of universities in the Upper-Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, and Offenburg, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprising 14 'grandes écoles' in the fields of engineering, architecture, and management), and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.
Virtual reality and artificial intelligence are being touted as the transformative tools that will reduce human error in the operating room. But how do they work? We speak to Gabriel Jones, CEO and co-founder of Seattle-based Proprio, to find out. Proprio was founded in 2016 by Dr. Sam Browd, a pediatric neurosurgeon, with the initial goal of eliminating the need for loupes, the magnifying glasses surgeons wear to perform delicate operations, and replacing them with a digital alternative. In the four years since, Browd and his co-founders Gabriel Jones and James Youngquist (Chief Technology Officer) have added machine learning, computer vision, robotics, and mixed reality to their solution to augment human vision during surgery.
The DBC team has posted a message on their Twitter account saying that the Deep Brain Chain AI machines are ready for delivery to all miners who have not yet had their machines delivered. With this, the Deep Brain Chain Foundation has fulfilled its original promise to upgrade the GPU servers from Nvidia GTX 1080Ti to Nvidia RTX 2080Ti. In addition, the Deep Brain Chain Foundation has already found customers for the miners' machines, and the machines can be leased out if needed. Buyers of the mining machines are required to reply to their emails in order to complete the delivery, which will be completed within 7 working days. If you have received a message about this on Telegram, you must send an email to email@example.com to confirm.
You cannot learn to play the piano by going to concerts. A compass [will] point you True North from where you're standing, but it's got no advice about the swamps and deserts and chasms that you'll encounter along the way. If, in pursuit of your destination, you plunge ahead, heedless of obstacles, and achieve nothing more than to sink in a swamp, what's the use of knowing True North? The practice of surgery often forces unique ad hoc decisions based on contextual intricacies in the moment, which are not typically captured in broad, top-down, or committee-approved guidelines. Surgical ethics are principled, of course, but also pragmatic. They are also replete with moral contradictions and uncertainties; the introduction of novel technology into this environment can potentially increase those challenges. The essential element that distinguishes an ethical problem from a tragic situation is the element of choice. Moreover, choosing between options often involves identifying factors by which those options are not exactly equal, and the method one uses to weigh these factors can draw upon a set of ethical frameworks that, themselves, can be somewhat incongruous. At their core, artificial intelligence (AI) systems, and machine learning (ML) more specifically, are also designed to make choices, often by categorizing some input among a set of nominal categories. In the past, the choices these systems made could only be evaluated by their correctness: their accuracy in applying the same categorical labels that a human would to previously unseen inputs, like whether an image contains a tumour or not.