

AI Could Predict Death. But What If the Algorithm Is Biased?

WIRED

Earlier this month the University of Nottingham published a study in PLOS ONE about a new artificial intelligence model that uses machine learning to predict the risk of premature death, using banked health data (on age and lifestyle factors) from Brits aged 40 to 69. This study comes months after a joint study by UC San Francisco, Stanford, and Google, which reported results of machine-learning-based data mining of electronic health records to assess the likelihood that a patient would die in hospital. One goal of both studies was to assess how this information might help clinicians decide which patients might benefit most from intervention.

Amitha Kalaichandran, M.H.S., M.D., is a resident physician based in Ottawa, Canada. Follow her on Twitter at @DrAmithaMD.
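Neither study's code accompanies this piece, but the general technique, training a classifier on tabular health records to predict a mortality label, can be sketched briefly. Everything below (the file name, the feature columns, the outcome label) is a hypothetical stand-in, not the studies' actual data or method.

```python
# Minimal sketch of a premature-mortality risk classifier, loosely in the
# spirit of the studies above. Feature names, the CSV file, and the outcome
# column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical banked health data: one row per participant.
df = pd.read_csv("cohort.csv")  # placeholder file
features = ["age", "bmi", "smoker", "exercise_hours_week"]  # assumed columns
X, y = df[features], df["died_prematurely"]  # assumed binary outcome label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# AUROC is a common headline metric for this kind of risk model.
print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```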


Artificial Intelligence Is Deciphering the World's Oldest Writings

#artificialintelligence

Scientists are constantly finding new uses for this incredible invention, which enables computer software to progressively improve its performance by drawing on knowledge gained from previous experience. Machine learning, sometimes loosely referred to as artificial intelligence because it performs tasks using its own judgment, has been the subject of both praise and controversy. However, the sophisticated algorithms that serve you ads on social networks might have a grand future in philology, archaeology, and linguistics. According to Émilie Pagé-Perron, a Ph.D. candidate in Assyriology at the University of Toronto, we might be closer than we thought to deciphering numerous Middle Eastern cuneiform tablets written in the Sumerian and Akkadian languages, all several thousand years old. Pagé-Perron leads the project officially titled Machine Translation and Automated Analysis of Cuneiform Languages, a combined effort across Frankfurt, Toronto, and Los Angeles to create a program capable of translating the clay tablets.
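To make the approach concrete, here is a toy sketch of the encoder-decoder pattern behind neural machine translation, trained on a few invented transliteration-to-English pairs. The word pairs, model size, and training setup are all illustrative assumptions, not the project's actual data or architecture.

```python
# Toy sketch of the encoder-decoder idea behind neural machine translation,
# applied to a few invented transliterated-Sumerian -> English pairs.
import torch
import torch.nn as nn

pairs = [
    ("lugal kalam-ma", "king of the land"),
    ("e2 gal", "great house"),
    ("dumu lugal", "son of the king"),
]

# Build word-level vocabularies with special tokens.
def vocab(sentences):
    words = {w for s in sentences for w in s.split()}
    idx = {w: i + 3 for i, w in enumerate(sorted(words))}
    idx.update({"<pad>": 0, "<sos>": 1, "<eos>": 2})
    return idx

src_vocab = vocab([s for s, _ in pairs])
tgt_vocab = vocab([t for _, t in pairs])

def encode(sentence, v):
    return torch.tensor([1] + [v[w] for w in sentence.split()] + [2])

class Seq2Seq(nn.Module):
    def __init__(self, n_src, n_tgt, dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(n_src, dim)
        self.tgt_emb = nn.Embedding(n_tgt, dim)
        self.encoder = nn.GRU(dim, dim)
        self.decoder = nn.GRU(dim, dim)
        self.out = nn.Linear(dim, n_tgt)

    def forward(self, src, tgt):
        # Encode the source sentence into a hidden state, then decode.
        _, h = self.encoder(self.src_emb(src).unsqueeze(1))
        dec, _ = self.decoder(self.tgt_emb(tgt).unsqueeze(1), h)
        return self.out(dec.squeeze(1))

model = Seq2Seq(len(src_vocab), len(tgt_vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Teacher-forced training: predict each target word from the previous ones.
for epoch in range(200):
    for s, t in pairs:
        src, tgt = encode(s, src_vocab), encode(t, tgt_vocab)
        logits = model(src, tgt[:-1])
        loss = loss_fn(logits, tgt[1:])
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Real cuneiform corpora are tiny and noisy by machine translation standards, which is part of what makes the project hard; a toy model like this only shows the mechanism.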


Machine learning on edge devices solves lack of data scientists

#artificialintelligence

The current approach to AI and machine learning is great for big companies that can afford to hire data scientists. But questions remain as to how smaller companies, which often lack the hiring budgets to bring in high-priced data scientists, can tap into the potential of AI. One potential solution may lie in doing machine learning on edge devices. Gadi Singer, vice president of the Artificial Intelligence Products Group and general manager of architecture at Intel, said in an interview at the O'Reilly AI Conference in New York that even one or two data scientists are enough to manage AI integration at most enterprises. But will the labor force supply adequate numbers of trained data scientists to cover all enterprises' AI ambitions?
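One concrete route to machine learning on edge devices is converting an already-trained model into a compact on-device format, so day-to-day deployment needs no data scientist in the loop. The sketch below uses TensorFlow Lite as a stand-in; the tiny placeholder model and file name are assumptions, and an Intel-based pipeline would use different tooling.

```python
# Minimal sketch of pushing a trained model to an edge device with
# TensorFlow Lite. The tiny model here is a placeholder; any trained
# Keras model would do.
import tensorflow as tf

# Placeholder model standing in for something a data scientist trained once.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Convert to a compact .tflite flatbuffer that runs on-device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```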



The Role of Structure in AI

#artificialintelligence

As artificial intelligence algorithms are applied to more and more domains, a question that often arises is whether to build structure into the algorithm itself to mimic the structure of the problem. We usually already have some knowledge of each domain, an understanding of how it typically works, but it's not clear how (or even whether) to lend this knowledge to an AI algorithm to help it get started. Sure, it may get the algorithm caught up to where we already were on solving that problem, but will the built-in structure and assumptions eventually become a limitation that prevents the algorithm from surpassing human performance? This week, we'll talk about the question in general, and especially recommend a recent discussion between Christopher Manning and Yann LeCun, two AI researchers who hold different opinions on whether structure is a necessary good or a necessary evil.
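The trade-off can be made concrete with a toy comparison: a convolutional layer encodes the structural assumption that nearby pixels matter together and that patterns repeat across an image, while a fully connected layer assumes nothing, at the cost of vastly more parameters. The 28x28 image size below is an arbitrary illustration.

```python
# Structure as inductive bias, made concrete: a conv layer bakes in
# locality and weight sharing; a fully connected layer assumes nothing.
import torch.nn as nn

fc = nn.Linear(28 * 28, 28 * 28)                  # no structural assumptions
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # locality + weight sharing

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(fc))    # 615,440 weights
print(count(conv))  # 10 weights for a same-sized output map
```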


How HIEs and AI can work in tandem to boost interoperability ROI

#artificialintelligence

"We spent all those years adopting EHRs, and now we're wanting to get the most out of them. Now we have the digital data, so it should be more liquid and in control of patients and put to use in the care process, even if I go to multiple sites for my care." As ONC and CMS prepare to digest the voluminous public comment on their proposed interoperability rules, especially the emphasis on exchange specs such as FHIR and open APIs, he sees the future only getting brighter for these types of advances as data flows more freely. "We're in the interoperability business, and we like having data being more available and more liquid, and systems being more open to getting data out of them," Woodlock said. "A lot of customers are starting to embark on their journey with with FHIR, and they're really bullish on this as well: having a standards-based API way to interact with medical record medical record data," he added.


Comparing Emotion Recognition Tech: Microsoft, Neurodata Lab, Amazon, Affectiva

#artificialintelligence

Automated emotion recognition has been with us for some time already, and it has never stopped getting more accurate since it entered the market. Even the tech giants have joined the race and released their own emotion recognition software, after smaller startups successfully did the same. We set out to compare the best-known algorithms. Emotions are subjective and variable, so when it comes to accuracy in emotion recognition, matters are not so clear-cut.
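For readers who want to probe one of these services themselves, here is a sketch against Amazon's face-analysis API, which returns per-face emotion labels with confidence scores. It covers only the Amazon entrant in the comparison and assumes AWS credentials are configured and a local face.jpg exists.

```python
# Query Amazon Rekognition for per-face emotion estimates. Assumes AWS
# credentials are configured and 'face.jpg' exists locally.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("face.jpg", "rb") as f:
    response = rekognition.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # request emotions, not just bounding boxes
    )

for face in response["FaceDetails"]:
    # Emotions come back as labels with confidence percentages.
    top = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(top["Type"], round(top["Confidence"], 1))
```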


Facial recognition: 7 trends to watch (2019 review)

#artificialintelligence

Few biometric technologies are sparking the imagination quite like facial recognition. Equally, its arrival has prompted profound concerns and reactions. Along with artificial intelligence and blockchain, face recognition represents a significant digital challenge for all companies and organizations, and especially governments. In this dossier, you'll discover the 7 face recognition facts and trends set to shape the landscape in 2019. Let's jump right in.


Three Benefits to Deploying Artificial Intelligence in Radiology Workflows

#artificialintelligence

Artificial intelligence (AI) can provide radiologists with tools to improve their productivity, decision making, and effectiveness, leading to quicker diagnoses and improved patient outcomes. It will initially be deployed as a diverse collection of assistive tools that augment, quantify, and stratify the information available to the diagnostician, offering a major opportunity to enhance the radiology reading. It will improve access to medical record information and give radiologists more time to think about what is going on with patients, diagnose more complex cases, collaborate with patient care teams, and perform more invasive procedures. Deep learning algorithms in particular will form the foundation for decision support, workflow support, and diagnostic capabilities. These algorithms give software the ability to "learn" by example how to execute a task, then execute that task automatically and interpret new data.
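The "learn by example" pattern described above can be sketched with a small convolutional network trained on labeled images. The random stand-in data, image size, and normal-versus-abnormal framing below are illustrative assumptions; clinical radiology models are far larger and trained on curated datasets.

```python
# Minimal sketch of learning by example: a small convolutional network
# trained on labeled images. Data here is random stand-in, not clinical.
import numpy as np
import tensorflow as tf

# Stand-in data: 100 grayscale "scans" with binary normal/abnormal labels.
images = np.random.rand(100, 64, 64, 1).astype("float32")
labels = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # abnormality score
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(images, labels, epochs=3, verbose=0)

# The trained model then scores new studies automatically.
print(model.predict(images[:1], verbose=0))
```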


The Future of IT Operations - ITChronicles

#artificialintelligence

We are now living in a digital economy. Organizations all over the globe are turning much of their focus away from producing physical assets and toward designing and developing digital products and services that either complement or completely replace their physical predecessors. Digital transformation initiatives are at the very top of business agendas across all industries, resulting in big changes and big demands being placed on enterprise IT operations. For many years, high demands have been placed on IT operations teams, and DevOps in particular, to become more agile and proactive so that their businesses can quickly embrace new technologies and practices in order to remain competitive. However, to meet these demands, IT operations has faced the challenge of keeping costs down on the one hand while dealing with the increasing complexity of operations on the other.