"Many researchers … speculate that the information-processing abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons. [Artificial neural networks] capture this kind of highly parallel computation based on distributed representations"
– from Machine Learning (Section 4.1.1; page 82) by Tom M. Mitchell, McGraw Hill Companies, Inc. (1997).
It has only been 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless. If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today.
The cloud giant is introducing the Gaudi instances at an opportune time. AI models are getting more complex, partially because enterprise machine learning initiatives are maturing and partially because research conducted by the likes of OpenAI is facilitating bigger neural network architectures. As neural networks grow in complexity, the amount of computing power necessary to train them is increasing and fueling demand for more efficient training infrastructure.
In the past year, lockdowns and other COVID-19 safety measures have made online shopping more popular than ever, but the skyrocketing demand is leaving many retailers struggling to fulfill orders while ensuring the safety of their warehouse employees. Researchers at the University of California, Berkeley, have created new artificial intelligence software that gives robots the speed and skill to grasp and smoothly move objects, making it feasible for them to soon assist humans in warehouse environments. The technology is described in a paper published online today (Wednesday, Nov. 18) in the journal Science Robotics.

Automating warehouse tasks can be challenging because many actions that come naturally to humans (like deciding where and how to pick up different types of objects, and then coordinating the shoulder, arm and wrist movements needed to move each object from one location to another) are actually quite difficult for robots. Robotic motion also tends to be jerky, which can increase the risk of damaging both the products and the robots.

"Warehouses are still operated primarily by humans, because it's still very hard for robots to reliably grasp many different objects," said Ken Goldberg, William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley and senior author of the study.
Taipei, Oct. 28 (CNA) A team at National Taiwan University Hospital (NTUH) has developed an artificial intelligence (AI) system that can identify tumors in the pancreas with an accuracy of over 90 percent. At a press conference on Tuesday where the team introduced the technology, NTUH doctor Liao Wei-chih (廖偉智) said that pancreatic cancer was the seventh deadliest type of cancer in Taiwan in 2019, causing nearly 2,500 deaths that year. The disease is extremely hard to detect, however, as patients experience no symptoms during the early stages, and studies have found that 40 percent of pancreatic tumors that are smaller than 2 centimeters are missed when doctors use CT scans, Liao said. This is because these small tumors do not look like lumps, but appear to be a thin layer of gray film, Liao explained, which is a challenge for even the most experienced of experts to identify. As a result of these difficulties, patients are often only diagnosed when the cancer has spread to other parts of the body, thus complicating treatment, he said.
With the success of DeepMind's AlphaGo system in defeating the world Go champion, reinforcement learning has attracted significant attention among researchers and developers. Deep reinforcement learning has become one of the most significant techniques in AI, and researchers are also pursuing it as a route toward artificial general intelligence. Below is a list of 10 of the best free resources, in no particular order, for learning deep reinforcement learning with TensorFlow. About: The tutorial "Introduction to RL and Deep Q Networks" is provided by the developers at TensorFlow. Topics include an introduction to deep reinforcement learning, the Cartpole environment, an introduction to the DQN agent, Q-learning, deep Q-learning, DQN on Cartpole in TF-Agents and more.
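To make the Q-learning idea behind DQN concrete, here is a minimal sketch of the tabular Q-learning update on a toy chain environment. The environment, reward, and hyperparameters are illustrative choices of mine, not from the TensorFlow tutorial; DQN replaces the table below with a neural network.

```python
import random

# Toy deterministic chain MDP (an assumption for illustration):
# states 0..4, actions 0 (left) / 1 (right); reaching state 4 gives reward 1.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table and the classic update rule:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.3

random.seed(0)
for _ in range(300):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# The greedy policy should now move right (action 1) toward the goal.
policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(N_STATES)]
print(policy[:GOAL])
```

The same update drives DQN on Cartpole, except the maximisation and the Q-values come from a network trained on replayed transitions rather than from a lookup table.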
An illustration of the possible structure of a "membrane protein" associated with the coronavirus, according to a model created by DeepMind's AlphaFold program. DeepMind, a division of Alphabet, says it has solved one of the most difficult computing challenges in the world: predicting how protein molecules will fold. It is key to understanding important biological processes and treating diseases such as COVID-19. The London-based organization said that its claims of a breakthrough had been verified by organizers of a competition held every two years to test computer models, the Critical Assessment of protein Structure Prediction (CASP). DeepMind named its protein folding prediction system AlphaFold and said that the latest version has been four years in development.
Here I will tell you about "NeRF in the Wild" (NeRF-W), research presented in August 2020 that has the potential to revolutionize several application areas, starting with **augmented and virtual reality**. The aim of the research is to produce a 3D visual synthesis of a place starting from photographs of that place which differ greatly from one another, taken at different times, with the automatic removal of objects and people that are not relevant to the subject (done by the neural network itself during the process). For example, the video of the Trevi Fountain below was generated by NeRF-W from public-domain photographs found on the web; as you can see, the fountain was not only reproduced three-dimensionally but can also be viewed at different times of day, under different lighting. Naturally, all the tourists, cars and advertising billboards have been cleaned up. The intuition here balances the use of a "standard" neural network, i.e., not a CNN or 3D CNN (convolutional neural networks, designed to process photos and videos), against the basic rules of optics.
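One reason a "standard" MLP works here is NeRF's sinusoidal positional encoding, which maps each input coordinate to a set of high-frequency features before the network sees it. A minimal sketch follows; the frequency count of 4 is an illustrative choice (the NeRF paper uses 10 frequencies for positions).

```python
import math

def positional_encoding(p, num_freqs=4):
    """NeRF-style encoding of a scalar coordinate p:
    (sin(2^k * pi * p), cos(2^k * pi * p)) for k = 0..num_freqs-1,
    letting a plain MLP represent high-frequency scene detail."""
    features = []
    for k in range(num_freqs):
        freq = (2.0 ** k) * math.pi
        features.append(math.sin(freq * p))
        features.append(math.cos(freq * p))
    return features

# Each coordinate expands into 2 * num_freqs features.
encoded = positional_encoding(0.5)
print(len(encoded))  # 8
```

In the full model, each 3D position (and viewing direction) is encoded this way component by component and concatenated before being fed to the MLP.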
The receptive field (RF) of a neuron is the region of stimulus space in which the presence of a stimulus alters that neuron's response. The responses of visual neurons, like visual perception phenomena in general, are highly nonlinear functions of the visual input (in mathematics, nonlinear systems describe phenomena whose behaviour cannot be expressed as the sum of the behaviours of their components). Conversely, the vision models used in science are based on the notion of a linear receptive field; in artificial intelligence and machine learning, artificial neural networks, which are built on classical models of vision, also use linear receptive fields. Modelling vision with a linear receptive field poses several inherent problems: it changes with each input, it presupposes a set of basis functions for the visual system, and it conflicts with recent studies on dendritic computations. The study was recently published in Scientific Reports, a journal of the Nature group.
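The linear-versus-nonlinear distinction can be made concrete with a toy example: a linear receptive field is a fixed weighted sum, so superposition holds, while adding even a simple rectifying nonlinearity breaks it. The weights and the ReLU below are illustrative choices, not the study's model.

```python
def linear_rf(stimulus, weights):
    # Linear receptive field: a fixed weighted sum of the input,
    # so response(a + b) == response(a) + response(b).
    return sum(w * s for w, s in zip(weights, stimulus))

def nonlinear_rf(stimulus, weights):
    # Add a simple ReLU rectification: superposition no longer holds.
    return max(0.0, linear_rf(stimulus, weights))

w = [0.5, -1.0, 0.5]                     # illustrative center-surround-like weights
a, b = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]  # two single-point stimuli
ab = [x + y for x, y in zip(a, b)]       # their superposition

# Linear RF: response to the combined stimulus is the sum of the parts.
print(linear_rf(ab, w) == linear_rf(a, w) + linear_rf(b, w))      # True

# Nonlinear RF: the same check fails, as for real visual neurons.
print(nonlinear_rf(ab, w) == nonlinear_rf(a, w) + nonlinear_rf(b, w))
```

This is why fitting a single linear RF to a nonlinear neuron yields a filter that "changes with each input", as the passage notes.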
While some of the applications for artificial intelligence involve, say, winning games of Texas hold'em or recreating pretty paintings, there are areas where the technology could have truly profound consequences. Among those is medical care, and a major breakthrough from Alphabet's DeepMind AI could be a game-changer in this regard, with the system demonstrating an ability to predict the 3D structures of unique proteins, overcoming a problem that has plagued biologists for half a century. By understanding the 3D shapes of different proteins, scientists can better understand what they do and how they cause diseases, which in turn paves the way for better drug discovery. Beyond that, since proteins are a central component of the chemical processes of all living things, more expedient mapping of 3D protein structures would benefit many fields of biological research, but this process has proven painstaking. This is because while modern scientific tools such as X-ray crystallography and cryo-electron microscopy allow researchers to study these structures in amazing new detail, they all still hinge on a process of trial and error.
At the height of the exchange of accusations between the United States and China over the COVID-19 disease, new signs of a war between the two countries appeared: an artificial intelligence war, which leads us to ask, is this technology ready to operate safely? And can military AI be easily deceived? Military AI technologies dominate military strategy in both the US and China, but what sparked the crisis was that last March Chinese researchers launched a brilliant, and potentially devastating, attack against one of America's most valuable technological assets: the Tesla electric car. A research team from the security laboratory of the Chinese technology giant Tencent succeeded in finding several ways to deceive the artificial intelligence algorithms in the Tesla by carefully altering the data fed to the car's sensors, managing to trick and confuse the vehicle's AI. The team fooled Tesla's algorithms for detecting raindrops on the windshield and for following the lines on the road: the windshield wipers were made to operate as if it were raining, and the lane markings were modified to confuse the autonomous driving system so that it crossed into the opposing traffic lane in violation of traffic rules.
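Attacks like these work by adding small, carefully directed perturbations to a model's input. As a minimal sketch of the general idea (an FGSM-style perturbation on a toy linear classifier, not the Tencent team's actual method), consider:

```python
# FGSM-style adversarial perturbation on a toy linear "classifier".
# Illustrative sketch only: the model, weights and labels are assumptions.

def score(x, w, b):
    # Positive score -> class "rain", negative -> "no rain" (toy labels).
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(x, w, epsilon):
    # For a linear model, the gradient of the score w.r.t. the input is
    # just w, so nudging each feature by epsilon * sign(w_i) raises the
    # score as much as a bounded perturbation can.
    return [xi + epsilon * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

w, b = [0.6, -0.4, 0.8], -1.0
x = [0.5, 0.5, 0.5]              # benign input: classified "no rain"
print(score(x, w, b))            # -0.5

x_adv = fgsm_perturb(x, w, epsilon=0.5)
print(score(x_adv, w, b) > 0)    # small input change flips the decision
```

The same principle, applied through cameras and sensors instead of raw feature vectors, is what lets physically altered lane markings mislead a driving system even though the change looks minor to a human.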