Researchers at Memorial Sloan Kettering Cancer Center (MSK) have developed a sensor that can be trained to sniff for cancer, with the help of artificial intelligence. Although the training doesn't work the same way one trains a police dog to sniff for explosives or drugs, the sensor has some similarity to how the nose works. The nose can detect more than a trillion different scents, even though it has just a few hundred types of olfactory receptors. The pattern of which odor molecules bind to which receptors creates a kind of molecular signature that the brain uses to recognize a scent. Like the nose, the cancer detection technology uses an array of multiple sensors to detect a molecular signature of the disease.
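The pattern-matching idea behind such a sensor array can be sketched in a few lines. This is an illustrative toy, not MSK's method: the reference signatures and the nearest-centroid matching below are invented for the example, but they show how a response pattern across several sensors, rather than any single sensor reading, identifies a sample.

```python
import math

# Hypothetical sketch: like olfactory receptors, each sensor in an array
# responds with a different strength to a sample; the response pattern
# across the whole array forms a "signature" that can be matched against
# reference signatures. All numbers here are invented for illustration.
REFERENCE_SIGNATURES = {
    "healthy": [0.1, 0.8, 0.3, 0.2],   # one value per sensor in the array
    "disease": [0.7, 0.2, 0.9, 0.4],
}

def classify(reading):
    """Return the reference label whose signature is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE_SIGNATURES,
               key=lambda k: dist(REFERENCE_SIGNATURES[k], reading))

print(classify([0.65, 0.25, 0.85, 0.35]))  # close to the "disease" pattern
```

In practice the matching would be done by a trained model over many more sensors, but the principle is the same: the signature is the whole vector.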
Researchers have shown it is possible to perform artificial intelligence using tiny nanomagnets that interact like neurons in the brain. The new technology, developed by a team led by Imperial College London researchers, could significantly reduce the energy cost of artificial intelligence (AI), which is currently doubling globally every 3.5 months. In a paper published on May 5, 2022, in the journal Nature Nanotechnology, the international team presents the first proof that networks of nanomagnets can be used to perform AI-like processing. The researchers showed that nanomagnets can be used for 'time-series prediction' tasks, such as predicting and regulating insulin levels in diabetic patients.
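Physical systems like these nanomagnet arrays are typically used as "reservoirs": a fixed, complex dynamical system transforms the input series, and only a cheap linear readout is trained. The software sketch below simulates that paradigm with a random recurrent network standing in for the physical nanomagnets; it is a generic reservoir-computing illustration, not the team's hardware or code.

```python
import numpy as np

# Reservoir-computing sketch: a fixed random "reservoir" (simulated here;
# the nanomagnets would implement this physically) turns an input series
# into rich internal states. Only the linear readout is trained, which is
# what keeps the learning step so energy-cheap.
rng = np.random.default_rng(0)
N = 100                                    # reservoir size
W_in = rng.uniform(-0.5, 0.5, (N, 1))      # fixed input weights
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

u = np.sin(0.2 * np.arange(500))           # toy time series
x = np.zeros(N)
states = []
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in[:, 0] * u[t]) # reservoir update
    states.append(x.copy())

X = np.array(states)                       # states at times 0..T-2
y = u[1:]                                  # next-step targets
w_out = np.linalg.lstsq(X, y, rcond=None)[0]  # train linear readout only

pred = X[-1] @ w_out                       # next-step prediction
print(abs(pred - u[-1]))                   # small in-sample error
```

Training touches only `w_out`; the reservoir itself never changes, which is why replacing it with passive physical dynamics saves so much energy.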
In this project we will be working with a data set indicating whether or not a particular internet user clicked on an advertisement. We will try to create a model that predicts whether or not a user will click on an ad based on the features of that user. Welcome to this project on predicting ad clicks with Apache Spark machine learning on the Databricks Community Edition platform, which lets you run your Spark code free of charge just by registering with an email address. I am a firm believer that the best way to learn is by doing.
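At its core, click prediction is binary classification: map user features to a click probability. In Spark you would use MLlib's `LogisticRegression` with a `VectorAssembler`; the library-free sketch below shows the underlying model on invented toy features (daily minutes on site and age), so the data, feature names, and scaling constants are all assumptions for illustration.

```python
import math
import random

# Toy logistic-regression sketch of the modeling idea (Spark MLlib's
# LogisticRegression would do this at scale). Rows are hypothetical:
# (daily_minutes_on_site, age) -> clicked (1) or not (0).
random.seed(1)
data = [((random.gauss(80, 10), random.gauss(35, 8)), 0) for _ in range(100)] + \
       [((random.gauss(40, 10), random.gauss(45, 8)), 1) for _ in range(100)]

w = [0.0, 0.0]
b = 0.0
lr = 0.01

def predict(x):
    """Click probability for a (minutes, age) pair, with crude scaling."""
    z = w[0] * (x[0] - 60) / 20 + w[1] * (x[1] - 40) / 10 + b
    return 1 / (1 + math.exp(-z))

for _ in range(200):                  # gradient-descent epochs
    for x, y in data:
        g = predict(x) - y            # dLoss/dz for log loss
        w[0] -= lr * g * (x[0] - 60) / 20
        w[1] -= lr * g * (x[1] - 40) / 10
        b -= lr * g

print(predict((38, 47)))   # short-session, older user: high probability
print(predict((85, 30)))   # long-session, younger user: low probability
```

The Databricks notebook version would read the data set into a DataFrame and swap this loop for a fitted MLlib pipeline, but the learned decision boundary is the same idea.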
China has developed a remote sensing satellite powered by the latest artificial intelligence technology that helps the People's Liberation Army (PLA) trace the movements of U.S. aircraft carriers. A new study by Chinese space scientists said the technology was put into use in June last year to detect the movements of the USS Harry S. Truman. The satellite, which is not named in the study, is said to have alerted Beijing to the precise coordinates of the carrier as it headed to a strait transit drill off the coast of Long Island, New York, the South China Morning Post reported. According to the study, published last month in the domestic peer-reviewed journal Spacecraft Engineering, the drill held on June 17 involved a joint action of seven warships and planes besides the USS Harry S. Truman. Before this satellite, the PLA had to sift through large amounts of raw satellite data on the ground to get a clue about such drills happening in U.S. home waters, and the results usually came after the event was over, the report added. But with AI-powered satellites, China could now "live stream" military activities or assets of interest on the other side of the planet, the report quoted the study by space scientist Yang Fang and her colleagues at DFH Satellite.
Scientists from the National Eye Institute (NEI) discovered five subpopulations of retinal pigment epithelium (RPE). Using artificial intelligence (AI), the researchers were able to analyze images of RPE at single-cell resolution to create a reference map that locates each subpopulation within the eye. Their findings are published in the journal Proceedings of the National Academy of Sciences, in a paper titled, "Single-cell–resolution map of human retinal pigment epithelium helps discover subpopulations with differential disease sensitivity." "These results provide a first-of-its-kind framework for understanding different RPE cell subpopulations and their vulnerability to retinal diseases, and for developing targeted therapies to treat them," said Michael F. Chiang, MD, director of the NEI, part of the National Institutes of Health. "The findings will help us develop more precise cell and gene therapies for specific degenerative eye diseases," said the study's lead investigator, Kapil Bharti, PhD, who directs the NEI Ocular and Stem Cell Translational Research Section.
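Discovering subpopulations from single-cell-resolution images can be framed as unsupervised clustering of per-cell measurements. The sketch below is an illustration only, not the NEI pipeline: it runs a tiny k-means over hypothetical (cell area, aspect ratio) features to separate two invented subpopulations.

```python
import random

# Illustrative only: group hypothetical per-cell morphology features
# (area, aspect ratio) into subpopulations with a minimal k-means.
random.seed(0)
cells = [(random.gauss(50, 3), random.gauss(1.0, 0.05)) for _ in range(50)] + \
        [(random.gauss(80, 3), random.gauss(1.6, 0.05)) for _ in range(50)]

centers = [cells[0], cells[-1]]           # initial guesses, one per cluster
for _ in range(10):                       # k-means iterations
    groups = [[], []]
    for c in cells:                       # assign each cell to nearest center
        d = [sum((a - b) ** 2 for a, b in zip(c, ctr)) for ctr in centers]
        groups[d.index(min(d))].append(c)
    centers = [tuple(sum(v) / len(g) for v in zip(*g)) for g in groups if g]

print([tuple(round(v, 1) for v in c) for c in centers])  # two subpopulation means
```

Real analyses use far richer per-cell features and validate clusters against biology, but the grouping step is conceptually this.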
In the second of our round-ups of the invited talks at the International Conference on Learning Representations (ICLR), we focus on the presentation by Been Kim. Been Kim's research focuses on the interpretability and explainability of AI models. In this presentation she talked about work towards developing a language to communicate with AI systems. The ultimate goal is that we would be able to query an algorithm as to why a particular decision was made, and it would provide us with an explanation. To illustrate this point, Been used the example of AlphaGo and its famous match against world champion Lee Sedol. At move 37 in one of the games, AlphaGo produced what commentators described as a "very strange move" that turned the course of the game.
To be truly useful, drones--that is, autonomous flying vehicles--will need to learn to navigate real-world weather and wind conditions. Right now, drones are either flown under controlled conditions, with no wind, or are operated by humans using remote controls. Drones have been taught to fly in formation in the open skies, but those flights are usually conducted under ideal conditions and circumstances. However, for drones to autonomously perform necessary but quotidian tasks, such as delivering packages or airlifting injured drivers from a traffic accident, they must be able to adapt to wind conditions in real time--rolling with the punches, meteorologically speaking. To face this challenge, a team of engineers from Caltech has developed Neural-Fly, a deep-learning method that can help drones cope with new and unknown wind conditions in real time just by updating a few key parameters.
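The phrase "updating a few key parameters" suggests a common pattern: a pretrained network supplies features of the flight state, and only a small vector of linear coefficients is re-fitted online as conditions change. The sketch below illustrates that pattern with recursive least squares on an invented wind-force model; it is a toy stand-in, not the Caltech team's code, and `phi` is a hypothetical placeholder for learned basis functions.

```python
import numpy as np

# Toy sketch of online adaptation: a fixed feature map phi(v) stands in
# for a pretrained network, and only the small coefficient vector `a` is
# updated in flight, so a new wind condition is absorbed by re-fitting a
# few parameters instead of retraining the whole network.
rng = np.random.default_rng(0)

def phi(v):
    """Hypothetical stand-in for learned basis functions of airspeed v."""
    return np.array([1.0, v, v * abs(v)])

a = np.zeros(3)                 # the "few key parameters" adapted online
P = np.eye(3) * 100.0           # recursive-least-squares covariance

true_a = np.array([0.5, -0.3, 0.1])   # unknown "new wind" to identify
for _ in range(200):            # online measurements during flight
    v = rng.uniform(-5, 5)
    f = phi(v) @ true_a + rng.normal(0, 0.01)  # measured wind force
    p = phi(v)                  # RLS update of `a` only:
    k = P @ p / (1.0 + p @ P @ p)
    a += k * (f - p @ a)
    P -= np.outer(k, p @ P)

print(np.round(a, 2))           # converges to the unknown coefficients
```

Because only three numbers change, the update runs fast enough to keep pace with gusts, which is the point of adapting a small parameter set rather than the full model.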
A new study from researchers at ETH Zurich's EcoVision Lab is the first to produce an interactive Global Canopy Height map. Using a newly developed deep learning algorithm that processes publicly available satellite images, the study could help scientists identify areas of ecosystem degradation and deforestation. The work could also guide sustainable forest management by identifying areas for prime carbon storage--a cornerstone in mitigating climate change. "Global high-resolution data on vegetation characteristics are needed to sustainably manage terrestrial ecosystems, mitigate climate change, and prevent biodiversity loss. With this project, we aim to fill the missing data gaps by merging data from two space missions with the help of deep learning," said Konrad Schindler, a Professor in the Department of Civil, Environmental, and Geomatic Engineering at ETH Zurich.
AI has made impressive strides in recent years, but it's still far from learning language as efficiently as humans. For instance, children learn that "orange" can refer to both a fruit and a color from a few examples, but modern AI systems can't do this nearly as efficiently as people. This has led many researchers to wonder: Can studying the human brain help to build AI systems that can learn and reason like people do? Today, Meta AI is announcing a long-term research initiative to better understand how the human brain processes language. In collaboration with the neuroimaging center Neurospin (CEA) and INRIA, we're comparing how AI language models and the brain respond to the same spoken or written sentences.
The increasing complexity of modern laser systems, mostly originating from the nonlinear dynamics of radiation, makes control of their operation more and more challenging, calling for the development of new approaches in laser engineering. Machine learning methods, which provide proven tools for the identification, control, and data analytics of various complex systems, have recently been applied to mode-locked fiber lasers with a special focus on three key areas: self-starting, system optimization, and characterization. However, developing machine learning algorithms for a particular laser system, while an interesting research problem, is a demanding task requiring arduous effort and the tuning of a large number of hyper-parameters in the laboratory arrangements. It is not obvious that this learning can be smoothly transferred to systems that differ from the specific laser used for algorithm development, whether by design or by varying environmental parameters. Here we demonstrate that a deep reinforcement learning (DRL) approach, based on trial and error and sequential decisions, can successfully control the generation of dissipative solitons in a mode-locked fiber laser system. We show the capability of a deep Q-learning algorithm to generalize knowledge about the laser system in order to find conditions for stable pulse generation. The region of stable generation was shifted by changing the pumping power of the laser cavity, while a tunable spectral filter was used as the control tool. The deep Q-learning algorithm learns the trajectory of spectral-filter adjustments that leads to a stable pulsed regime, relying on the state of the output radiation. Our results confirm the potential of deep reinforcement learning algorithms to control a nonlinear laser system with feedback.
We also demonstrate that fiber mode-locked laser systems generating data at high speed present fruitful photonic test-beds for various machine learning concepts based on large datasets.
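The control loop described above, observing the output radiation and stepping a spectral filter toward a stable regime, can be caricatured with tabular Q-learning. The sketch below is not the authors' deep-Q setup: the discretized filter positions, the stable region, and the reward are all invented, but they show how a Q-learning agent learns a trajectory of filter adjustments from a stability reward alone.

```python
import random

# Toy Q-learning sketch (not the authors' laser rig): the agent steps a
# discretized "spectral filter" position left/hold/right and is rewarded
# when the laser output would be stable, standing in for observing the
# state of the output radiation.
random.seed(0)
N_POS = 11                 # discretized filter positions 0..10
STABLE = {4, 5, 6}         # hypothetical stable-generation region
ACTIONS = (-1, 0, +1)      # step filter left, hold, or step right

Q = [[0.0] * len(ACTIONS) for _ in range(N_POS)]
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s = random.randrange(N_POS)
    for _ in range(20):
        if random.random() < eps:                      # explore
            a = random.randrange(3)
        else:                                          # exploit
            a = max(range(3), key=lambda i: Q[s][i])
        s2 = min(N_POS - 1, max(0, s + ACTIONS[a]))
        r = 1.0 if s2 in STABLE else -0.1              # stability reward
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# the greedy policy should walk the filter into the stable region
s = 0
for _ in range(10):
    s = min(N_POS - 1, max(0, s + ACTIONS[max(range(3), key=lambda i: Q[s][i])]))
print(s in STABLE)
```

In the real system the state is a high-dimensional measurement of the output radiation and the table is replaced by a deep network, but the learned behavior, a trajectory of filter adjustments toward stable pulsing, is the same.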