AI is reshaping the way we buy, sell and value homes

#artificialintelligence

The housing market continues to defy gravity. Sales of existing homes rose more than 10% last month compared to a year ago, hitting their highest level since December 2006, according to the National Association of Realtors. And now, more than ever, people are relying on online platforms to search for -- and even buy -- houses. And that opens the door for artificial intelligence to play a bigger role, like using computer vision to create real estate listings based on photos. I spoke with Christopher Geczy, a professor at the Wharton School of the University of Pennsylvania who teaches about real estate and insurance technology.
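
As a purely illustrative sketch of the idea (not any vendor's actual pipeline), a pretrained image classifier can propose candidate tags for a listing photo. The model, labels, and file name below are generic placeholders.

```python
# Purely illustrative: propose tags for a listing photo with a pretrained
# ImageNet classifier. A real listing tool would use a model trained on
# real-estate imagery; "kitchen.jpg" is a hypothetical local file.
import torch
from PIL import Image
from torchvision import models, transforms

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("kitchen.jpg").convert("RGB")   # hypothetical listing photo
batch = preprocess(image).unsqueeze(0)             # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = torch.softmax(model(batch)[0], dim=0)

top5 = torch.topk(probs, 5)
for p, idx in zip(top5.values, top5.indices):
    # candidate tags and confidences for the listing description
    print(f"{weights.meta['categories'][idx.item()]}: {p.item():.2f}")
```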


SIMBig Conference 2020

#artificialintelligence

Dr. Dina Demner-Fushman is a Staff Scientist at the Lister Hill National Center for Biomedical Communications, NLM. Dr. Demner-Fushman is a lead investigator in several NLM projects in the areas of Information Extraction for Clinical Decision Support, EMR Database Research and Development, and Image and Text Indexing for Clinical Decision Support and Education. The outgrowths of these projects are the evidence-based decision support system in use at the NIH Clinical Center since 2009, the Open-i image retrieval engine, launched in 2012, and an automatic question answering service. Dr. Demner-Fushman earned her doctor of medicine degree from Kazan State Medical Institute in 1980 and a clinical research doctorate (PhD) in Medical Science from Moscow Medical and Stomatological Institute in 1989. She earned her MS and PhD in Computer Science from the University of Maryland, College Park, in 2003 and 2006, respectively.


New AI Paradigm May Reduce a Heavy Carbon Footprint

#artificialintelligence

Machine learning, a branch of artificial intelligence (AI), can have a considerable carbon footprint. Deep learning is inherently costly, as it requires massive computational and energy resources. Now researchers in the U.K. have discovered how to create an energy-efficient artificial neural network without sacrificing accuracy, publishing the findings in Nature Communications on August 26, 2020. The biological brain is the inspiration for neuromorphic computing -- an interdisciplinary approach that draws upon neuroscience, physics, artificial intelligence, computer science, and electrical engineering to create artificial neural systems that mimic biological functions and systems. The human brain is a complex system of roughly 86 billion neurons and hundreds of trillions of synapses.
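
The article does not detail the network itself, but the event-driven, spiking neurons that neuromorphic hardware emulates can be sketched with a textbook leaky integrate-and-fire model. This is purely illustrative and is not the published approach.

```python
# Illustrative only: a textbook leaky integrate-and-fire (LIF) neuron, the
# kind of event-driven unit neuromorphic hardware emulates. It spends energy
# (emits a spike) only when its membrane potential crosses a threshold.
# This is NOT the network from the Nature Communications paper.
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    v = 0.0
    spikes = []
    for i_t in input_current:
        v += (dt / tau) * (-v + i_t)   # leaky integration of the input
        if v >= v_thresh:              # threshold crossed: spike, then reset
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

current = np.random.uniform(0.0, 2.0, size=1000)   # arbitrary demo input
print("spikes emitted:", int(lif_neuron(current).sum()))
```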


Yale researchers develop AI technology for adults with autism

#artificialintelligence

Researchers from several American universities are collaborating to develop artificial intelligence-based software to help people on the autism spectrum find and hold meaningful employment. The project is a collaboration between experts at Vanderbilt, Yale, Cornell and the Georgia Institute of Technology. It consists of developing multiple pieces of technology, each one aimed at a different aspect of supporting people with Autism Spectrum Disorder (ASD) in the workplace, according to Nilanjan Sarkar, professor of engineering at Vanderbilt University and the leader of the project. "We realized together that there are some support systems for children with autism in this society, but as soon as they become 18 years old and more, there is a support cliff and the social services are not as much," Sarkar said. The project began a year ago with preliminary funding from the National Science Foundation. The NSF initially invested in around 40 projects, but only four -- including this one -- were chosen to be funded for a longer term of two years.


Event Stream Processing: How Banks Can Overcome SQL and NoSQL Related Obstacles with Apache Kafka

#artificialintelligence

While getting to grips with open banking regulation, skyrocketing transaction volumes and expanding customer expectations, banks have been rolling out major transformations of data infrastructure and partnering with Silicon Valley's most innovative tech companies to rebuild the banking business around a central nervous system. This can also be labelled as event stream processing (ESP), which connects everything happening within the business -- including applications and data systems -- in real time. ESP allows banks to respond to a series of data points -- events -- that are derived from a system that continuously creates data -- the stream -- and to then leverage this data through aggregation, analytics, transformations, enrichment and ingestion. Further, ESP is instrumental where batch processing falls short and when action needs to be taken in real time, rather than on static data or data at rest. However, handling a flow of continuously created data requires a special set of technologies.
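
As a rough sketch of the pattern (not any specific bank's system), the snippet below uses the kafka-python client to consume a stream of payment events and maintain a running per-account aggregate. The topic name, broker address, and message fields are hypothetical placeholders.

```python
# Minimal event-stream sketch with the kafka-python client: consume payment
# events as they arrive and keep a running per-account total. Topic name,
# broker address, and message fields are hypothetical.
import json
from collections import defaultdict
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payments",                                   # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

running_totals = defaultdict(float)               # simple in-memory aggregation

for event in consumer:                            # processes each event as it arrives
    payment = event.value                         # e.g. {"account": "A1", "amount": 42.5}
    running_totals[payment["account"]] += payment["amount"]
    if running_totals[payment["account"]] > 10_000:
        print(f"review account {payment['account']}: "
              f"total {running_totals[payment['account']]:.2f}")
```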


Adobe Research Proposes HDMatt, A Deep Learning-Based Image Matting Approach

#artificialintelligence

Image matting is an essential technique for estimating the foreground objects in images and videos for editing and composition. The conventional deep learning approach takes the input image and an associated trimap and estimates the alpha matte using convolutional neural networks. But since real-world input images for matting are mostly of very high resolution, the efficiency of such approaches suffers in real-world matting applications due to hardware limitations. To address this issue, HDMatt, the first deep learning-based image matting approach for high-resolution image inputs, is proposed by a group of researchers from UIUC (University of Illinois, Urbana-Champaign), Adobe Research, and the University of Oregon. HDMatt works on the 'divide-and-conquer' principle.
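
The sketch below illustrates the generic divide-and-conquer pattern only: tile a high-resolution image and trimap into overlapping patches, run a matting network on each patch, and stitch the predicted alpha mattes back together. Here `matting_net` is a hypothetical stand-in, and the paper's mechanism for sharing information across patches is not reproduced.

```python
# Generic divide-and-conquer inference sketch for high-resolution matting.
# `matting_net` is a hypothetical per-patch model; this is an illustration of
# the tiling idea, not HDMatt's actual architecture.
import numpy as np

def tile_and_matte(image, trimap, matting_net, patch=512, stride=384):
    h, w = image.shape[:2]
    alpha = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            y1, x1 = min(y + patch, h), min(x + patch, w)
            img_crop = image[y:y1, x:x1]
            tri_crop = trimap[y:y1, x:x1]
            alpha_crop = matting_net(img_crop, tri_crop)   # per-patch alpha estimate
            alpha[y:y1, x:x1] += alpha_crop                # accumulate overlapping predictions
            weight[y:y1, x:x1] += 1.0
    return alpha / np.maximum(weight, 1.0)                 # average where patches overlap
```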


Protecting Our Future Food Supply with AI and Geospatial Analytics

#artificialintelligence

Corn, coffee, chocolate, even wine are a few of the foods that stand to be massively disrupted by the effects of climate change, population growth and water scarcity -- if they haven't already. A recent study found the yields of the world's top ten crops have begun to decrease, a drop that is disproportionately affecting food-insecure countries. The situation stands to worsen. Researchers project that the global population will increase by 3 billion by 2050. To feed these additional global residents, agricultural production must increase by 50 percent, says Dr. Ranga Raju Vatsavai, an associate professor in computer science at North Carolina State University and the associate director of the Center for Geospatial Analytics.


Programming Fairness in Algorithms

#artificialintelligence

"Being good is easy, what is difficult is being just." "We need to defend the interests of those whom we've never met and never will." Note: This article is intended for a general audience to try and elucidate the complicated nature of unfairness in machine learning algorithms. As such, I have tried to explain concepts in an accessible way with minimal use of mathematics, in the hope that everyone can get something out of reading this. Supervised machine learning algorithms are inherently discriminatory. They are discriminatory in the sense that they use information embedded in the features of data to separate instances into distinct categories -- indeed, this is their designated purpose in life. This is reflected in the name for these algorithms which are often referred to as discriminative algorithms (splitting data into categories), in contrast to generative algorithms (generating data from a given category). When we use supervised machine learning, this "discrimination" is used as an aid to help us categorize our data into distinct categories within the data distribution, as illustrated below. Whilst this occurs when we apply discriminative algorithms -- such as support vector machines, forms of parametric regression (e.g.


Scientists use reinforcement learning to train quantum algorithm

#artificialintelligence

Recent advancements in quantum computing have driven the scientific community's quest to solve a certain class of complex problems for which quantum computers would be better suited than traditional supercomputers. To improve the efficiency with which quantum computers can solve these problems, scientists are investigating the use of artificial intelligence approaches. In a new study, scientists at the U.S. Department of Energy's (DOE) Argonne National Laboratory have developed a new algorithm based on reinforcement learning to find the optimal parameters for the Quantum Approximate Optimization Algorithm (QAOA), which allows a quantum computer to solve certain combinatorial problems such as those that arise in materials design, chemistry and wireless communications. "Combinatorial optimization problems are those for which the solution space gets exponentially larger as you expand the number of decision variables," said Argonne computer scientist Prasanna Balaprakash. "In one traditional example, you can find the shortest route for a salesman who needs to visit a few cities once by enumerating all possible routes, but given a couple thousand cities, the number of possible routes far exceeds the number of stars in the universe; even the fastest supercomputers cannot find the shortest route in a reasonable time."
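
The scale of that claim is easy to check with a back-of-the-envelope sketch: a symmetric tour of n cities has (n - 1)!/2 distinct routes, so exhaustive enumeration becomes hopeless well before a couple thousand cities.

```python
# Back-of-the-envelope check of the combinatorial explosion described above.
# For a symmetric tour of n cities there are (n - 1)! / 2 distinct routes,
# which passes rough estimates of the number of stars in the observable
# universe (~1e22 to 1e24) at only a few dozen cities.
import math

def tour_count(n_cities: int) -> int:
    return math.factorial(n_cities - 1) // 2

for n in (10, 20, 30, 60):
    print(f"{n} cities: {tour_count(n):.3e} possible routes")
```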


Inside the Army's futuristic test of its battlefield artificial intelligence in the desert

#artificialintelligence

After weeks of work in the oppressive Arizona desert heat, the U.S. Army carried out a series of live fire engagements Sept. 23 at Yuma Proving Ground to show how artificial intelligence systems can work together to automatically detect threats, deliver targeting data and recommend weapons responses at blazing speeds. Set in the year 2035, the engagements were the culmination of Project Convergence 2020, the first in a series of annual demonstrations utilizing next-generation AI, network and software capabilities to show how the Army wants to fight in the future. The Army was able to use a chain of artificial intelligence, software platforms and autonomous systems to take sensor data from all domains, transform it into targeting information, and select the best weapon system to respond to any given threat in just seconds. Army officials claimed that these AI and autonomous capabilities have shortened the sensor-to-shooter timeline -- the time it takes from when sensor data is collected to when a weapon system is ordered to engage -- from 20 minutes to 20 seconds, depending on the quality of the network and the number of hops between where it's collected and its destination. "We use artificial intelligence and machine learning in several ways out here," Brigadier General Ross Coffman, director of the Army Futures Command's Next Generation Combat Vehicle Cross-Functional Team, told visiting media.