artificial intelligence


Deep biomarkers of aging and longevity: From research to applications

#artificialintelligence

IMAGE: Using age predictors within specified age groups to infer causality and identify therapeutic interventions.

Deep age predictors can help advance aging research by establishing causal relationships in nonlinear systems. Deep aging clocks can be used to identify novel therapeutic targets, evaluate the efficacy of interventions, perform data quality control, support data economics, and predict health trajectories, mortality, and much more. Dr. Alex Zhavoronkov, of Insilico Medicine (Hong Kong Science and Technology Park, Hong Kong, China), the Buck Institute for Research on Aging (Novato, California, USA), and the Biogerontology Research Foundation (London, UK), said: "The recent hype cycle in artificial intelligence (AI) resulted in substantial investment in machine learning and an increase in available talent in almost every industry and country." Over many generations, humans have evolved to develop from a single-cell embryo within a female organism, be born, grow with the help of other humans, reach reproductive age, reproduce, care for the young, and gradually decline.
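
To make the idea of an aging clock concrete, here is a minimal sketch of a simple age predictor trained on synthetic blood-biomarker data. Everything here (the feature count, the network shape, the data itself) is an illustrative assumption, not Insilico Medicine's actual pipeline.

```python
# Minimal aging-clock sketch: regress chronological age on blood biomarkers.
# All data is synthetic; a real deep aging clock is trained on large
# clinical datasets with far more careful validation.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_markers = 5000, 20

age = rng.uniform(20, 90, size=n_samples)
# Hypothetical biomarkers that drift with age, plus individual noise.
weights = rng.normal(size=n_markers)
X = age[:, None] * weights[None, :] / 90 + rng.normal(size=(n_samples, n_markers))

X_train, X_test, y_train, y_test = train_test_split(X, age, random_state=0)

clock = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500,
                     random_state=0).fit(X_train, y_train)
pred = clock.predict(X_test)
print(f"mean absolute error: {np.abs(pred - y_test).mean():.1f} years")
```

The gap between predicted and chronological age ("age acceleration") is the signal that the causal and therapeutic applications in the caption build on.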


Using CD with machine learning models to tackle fraud

#artificialintelligence

Credit card fraudsters are always changing their behavior and developing new tactics. For banks, the damage isn't just financial; their reputations are also on the line. So how do banks stay ahead of the crooks? For many, detection algorithms are essential. Given enough data, a supervised machine learning model can learn to detect fraud in new credit card applications. Such a model gives each application a score -- typically between 0 and 1 -- indicating the likelihood that it's fraudulent. The bank can then set a threshold above which it treats an application as fraudulent, chosen so that false positives and false negatives stay at levels the bank finds acceptable. False positives are genuine applications mistakenly flagged as fraud; false negatives are fraudulent applications that are missed.
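
A minimal sketch of the scoring-and-threshold step described above, on synthetic data. The classifier choice and the candidate thresholds are illustrative assumptions, not any bank's production setup.

```python
# Score applications with a supervised model, then sweep the decision
# threshold to see the false-positive / false-negative trade-off.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "applications": 5% fraudulent, mirroring heavy class imbalance.
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # fraud likelihood in [0, 1]

for threshold in (0.3, 0.5, 0.7):
    flagged = scores >= threshold
    false_pos = np.sum(flagged & (y_test == 0))   # genuine apps flagged
    false_neg = np.sum(~flagged & (y_test == 1))  # fraud that slipped through
    print(f"threshold={threshold:.1f}  FP={false_pos}  FN={false_neg}")
```

Raising the threshold trades false positives for false negatives; the bank picks the point on that curve whose costs it can live with.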


The top AI and machine learning conferences to attend in 2020

#artificialintelligence

While artificial intelligence may be powering Siri, Google searches, and the advance of self-driving cars, many people still have sci-fi-inspired notions of what AI actually looks like and how it will affect our lives. AI-focused conferences give researchers and business executives a clear view of what is already working and what is coming down the road. To bring AI researchers from academia and industry together to share their work, learn from one another, and inspire new ideas and collaborations, there is a plethora of AI-focused conferences around the world. There is also a growing number of AI conferences geared toward business leaders who want to learn how to use artificial intelligence, machine learning, and deep learning to propel their companies past their competitors. So, whether you're a post-doc, a professor working on robotics, or a programmer at a major company, there are conferences out there to help you code better, network with other researchers, and show off your latest papers.


6 revolutionary things to know about Machine Learning

#artificialintelligence

We are stepping into an avant-garde period powered by advances in robotics, the adoption of smart home appliances, intelligent retail stores, self-driving car technology, and more. Machine learning is at the forefront of all these new-age technological advancements, driving the development of automated machines that may, in time, match or even surpass human intelligence. Machine learning is undoubtedly the next 'big' thing, and it is believed that most future technologies will be built on it. Machine learning is given so much importance because it helps predict behavior and spot patterns that humans fail to see.


UK universities match America's – except in funds

#artificialintelligence

Luke Johnson's column last week ("As Stanford invents the future, our dreaming spires still slumber") reinforces an outdated myth that UK innovation is behind the international curve. In fact, Britain produces more spinouts, disclosures of discoveries, patents and licences than America when adjusting for economic size. Universities are central to this culture of innovation. Our universities are driving discoveries and commercial opportunities in quantum computing, gene sequencing, artificial intelligence, cancer therapies and other fields. As Mr Johnson, chairman of the Institute of Cancer Research, knows, there's a lot more to UK innovation than Oxford and Cambridge.


Blog: Contrasting Chatbots and Intelligent Virtual Assistants

#artificialintelligence

When I attend trade shows and conferences representing Conversica, I am frequently asked whether Conversica's Intelligent Virtual Assistant for customer engagement is "a chatbot." And while I can understand the source of the question, I emphatically stress that Conversica is not a chatbot. A more cynical reader might assume that this differentiation is little more than branding, but I can assure you there are very real differences between what the Conversica Sales AI Assistant offers and what chatbot providers deliver. Each technology has its place and purpose, but neither is synonymous with the other.


Deep Double Descent

#artificialintelligence

We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again as model size, data size, or training time increases. This effect is often avoided through careful regularization. While this behavior appears to be fairly universal, we don't yet fully understand why it happens, and we view further study of this phenomenon as an important research direction. The peak occurs predictably in a "critical regime," where the models are barely able to fit the training set. As we increase the number of parameters in a neural network, the test error initially decreases, then increases, and, just as the model becomes able to fit the training set, undergoes a second descent.
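
The model-size version of the curve can be reproduced in a toy setting. Below is a minimal sketch using minimum-norm least squares on random ReLU features, a standard illustration of double descent; it is not the paper's experimental setup, and the exact location and height of the peak depend on the random seed.

```python
# Toy model-wise double descent: test error should peak near
# width == n_train, where the model barely interpolates the training set.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 10

def make_data(n):
    X = rng.normal(size=(n, d))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)  # simple target + noise
    return X, y

X_train, y_train = make_data(n_train)
X_test, y_test = make_data(n_test)

for width in (10, 50, 90, 100, 110, 200, 1000):
    W = rng.normal(size=(d, width)) / np.sqrt(d)  # fixed random projection
    phi_train = np.maximum(X_train @ W, 0.0)      # ReLU feature map
    phi_test = np.maximum(X_test @ W, 0.0)
    # lstsq returns the minimum-norm solution once width > n_train
    coef, *_ = np.linalg.lstsq(phi_train, y_train, rcond=None)
    test_mse = np.mean((phi_test @ coef - y_test) ** 2)
    print(f"width={width:5d}  test MSE={test_mse:.3f}")
```

The interpolation threshold at width ≈ n_train plays the role of the "critical regime" described above; past it, adding parameters makes the minimum-norm fit smoother and the test error descends again.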


VA launches National Artificial Intelligence Institute to drive research and development

#artificialintelligence

The Department of Veterans Affairs (VA) wants to become a leader in artificial intelligence and launched a new national institute to spur research and development in the space. The VA's new National Artificial Intelligence Institute (NAII) is incorporating input from veterans and its partners across federal agencies, industry, nonprofits, and academia to prioritize AI R&D to improve veterans' health and public health initiatives, the VA said in a press release. "VA has a unique opportunity to be a leader in artificial intelligence," VA Secretary Robert Wilkie said in a statement. "VA's artificial intelligence institute will usher in new capabilities and opportunities that will improve health outcomes for our nation's heroes." For its AI projects, the VA plans to leverage its integrated health care system and the healthcare data it has amassed, thanks to its Million Veteran Program.


10 Predictions How AI Will Improve Cybersecurity In 2020

#artificialintelligence

AI and machine learning will continue to enable asset management improvements that also deliver exponential gains in IT security by providing greater endpoint resiliency in 2020. Nicko van Someren, Ph.D. and Chief Technology Officer at Absolute Software, observes that "Keeping machines up to date is an IT management job, but it's a security outcome. Knowing what devices should be on my network is an IT management problem, but it has a security outcome. And knowing what's going on and what processes are running and what's consuming network bandwidth is an IT management problem, but it's a security outcome. I don't see these as distinct activities so much as seeing them as multiple facets of the same problem space, accelerating in 2020 as more enterprises choose greater resiliency to secure endpoints."


Active Learning for Probabilistic Hypotheses Using the Maximum Gibbs Error Criterion

Neural Information Processing Systems

We introduce a new objective function for pool-based Bayesian active learning with probabilistic hypotheses. This objective function, called the policy Gibbs error, is the expected error rate of a random classifier drawn from the prior distribution on the examples adaptively selected by the active learning policy. Exact maximization of the policy Gibbs error is hard, so we propose a greedy strategy that maximizes the Gibbs error at each iteration, where the Gibbs error on an instance is the expected error of a random classifier selected from the posterior label distribution on that instance. We apply this maximum Gibbs error criterion to three active learning scenarios: non-adaptive, adaptive, and batch active learning. In each scenario, we prove that the criterion achieves near-maximal policy Gibbs error when constrained to a fixed budget.
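
For intuition, the per-instance quantity being greedily maximized has a simple closed form. The following is a sketch based on the standard definition of Gibbs error; the notation is ours and may differ from the paper's.

```latex
% Gibbs error of instance x: a classifier sampled from the posterior predicts
% label y with probability p(y|x), and the true label (under the Bayesian
% model) is independently distributed as p(y|x), so the expected error is
\[
  \mathrm{GE}(x) \;=\; \sum_{y} p(y \mid x)\,\bigl(1 - p(y \mid x)\bigr)
               \;=\; 1 \;-\; \sum_{y} p(y \mid x)^{2}.
\]
% GE(x) is largest when the posterior label distribution is most uncertain,
% so the greedy strategy queries the instance the current posterior is
% least sure about.
```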