Robots still have some trouble handling the basics when put to the test, apparently. Roborace team SIT Acronis Autonomous suffered an embarrassment in round one of the Season Beta 1.1 race after its self-driving car abruptly drove directly into a wall. It's not certain what led to the mishap, but track conditions clearly weren't at fault -- the car had been rounding a gentle curve and wasn't racing against others at the same time. It wasn't the only car to suffer a problem, either. Autonomous Racing Graz's vehicle had positioning issues that got it "lost" on the track and cut its race short.
Recently, a team of researchers from MIT, the Institute of Science and Technology Austria (IST Austria), and Technische Universität Wien (TU Wien) developed an AI system by combining brain-inspired neural computation principles with scalable deep learning architectures. The system is essentially a brain-inspired intelligent agent that learns to control an autonomous vehicle directly from camera inputs. The researchers found that a single network with 19 control neurons, connecting 32 encapsulated input features to outputs through 253 synapses, learns to map high-dimensional inputs into steering commands. Notably, the agent draws on neural computations known to occur in biological brains to achieve a remarkable degree of controllability; the researchers took inspiration from animals as small as roundworms.
Researchers from TU Wien, IST Austria and MIT have developed a recurrent neural network (RNN) method for specific tasks within an autonomous vehicle control system. What is interesting about this architecture is that it uses just a small number of neurons. This smaller scale allows for greater generalization and interpretability compared with systems containing orders of magnitude more neurons. The researchers found that a single network with 19 control neurons, connecting 32 encapsulated input features to outputs through 253 synapses, learned to map high-dimensional inputs into steering commands. This was achieved using a liquid time-constant RNN, a concept the team introduced in 2018.
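To give a feel for the idea, here is a minimal NumPy sketch of a liquid time-constant (LTC) cell, in which each neuron's effective time constant depends on its input. The parameter names (`W_in`, `W_rec`, `tau`, `A`) and the Euler-integration loop are illustrative assumptions, not the authors' implementation; only the dimensions (32 input features, 19 neurons) mirror the numbers reported above.

```python
import numpy as np

def ltc_step(x, inputs, W_in, W_rec, tau, A, dt=0.01):
    """One Euler step of a liquid time-constant cell (illustrative sketch).

    Integrates dx/dt = -(1/tau + f) * x + f * A, where f is a bounded
    (sigmoid) activation of external and recurrent input. Because f also
    scales the leak term, the neuron's effective time constant varies
    with its input -- the "liquid" part of the name.
    """
    f = 1.0 / (1.0 + np.exp(-(W_in @ inputs + W_rec @ x)))
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Toy scale mirroring the reported counts: 32 input features, 19 neurons.
rng = np.random.default_rng(0)
n_in, n_neurons = 32, 19
x = np.zeros(n_neurons)
params = dict(
    W_in=rng.normal(size=(n_neurons, n_in)) * 0.1,   # input synapses
    W_rec=rng.normal(size=(n_neurons, n_neurons)) * 0.1,  # recurrent synapses
    tau=np.ones(n_neurons),        # base time constants
    A=rng.normal(size=n_neurons),  # reversal-potential-like bias terms
)
for _ in range(100):               # drive the cell with random input features
    x = ltc_step(x, rng.normal(size=n_in), **params)
steering = np.tanh(x.mean())       # toy readout to a bounded steering command
```

Because the sigmoid keeps `f` in (0, 1), each state is pulled toward a bounded equilibrium, which is one reason such small networks can remain stable under noisy input.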
Artificial intelligence has arrived in our everyday lives--from search engines to self-driving cars. This has to do with the enormous computing power that has become available in recent years. But new results from AI research now show that simpler, smaller neural networks can be used to solve certain tasks even better, more efficiently, and more reliably than ever before. An international research team from TU Wien (Vienna), IST Austria and MIT (USA) has developed a new artificial intelligence system based on the brains of tiny animals, such as threadworms. This novel AI-system can control a vehicle with just a few artificial neurons.
Modern AI has produced models that exceed human performance across countless tasks. Now, an international research team is suggesting AI might become even more efficient and reliable if it learns to think more like worms. In a paper recently published in the journal Nature Machine Intelligence, the team from MIT CSAIL, TU Wien in Vienna, and IST Austria proposes an AI system that mimics biological models. The system was developed based on the brains of tiny animals such as threadworms and is able to control a vehicle using just a small number of artificial neurons. The researchers say the system has decisive advantages over other deep learning models because it copes much better with noisy input, and, because of its simplicity, its operations can be explained in detail -- alleviating the "black box" concerns affecting today's deep AI models. As TU Wien Cyber-Physical Systems head Professor Radu Grosu explains in a project press release: "For years, we have been investigating what we can learn from nature to improve deep learning."
Deep learning, a subset of the broad field of AI, refers to the engineering of intelligent machines that can learn, perform, and achieve goals as humans do. Over the last few years, deep learning models have been shown to outpace conventional machine learning techniques in diverse fields. The technology enables computational models with multiple processing layers to learn and represent data at many levels of abstraction, imitating how the human brain senses and understands multimodal information. A team of researchers from TU Wien (Vienna), IST Austria and MIT (USA) has developed a new artificial intelligence system based on the brains of tiny animals such as threadworms. This new AI-powered system is said to be able to control a vehicle with just a few synthetic neurons. According to the researchers, the system has decisive advantages over previous deep learning models.
An international research team from TU Wien (Vienna), IST Austria and MIT (USA) has developed a new artificial intelligence system based on the brains of tiny animals, such as threadworms. This novel AI system can control a vehicle with just a few artificial neurons. The team says the system has decisive advantages over previous deep learning models: it copes much better with noisy input, and, because of its simplicity, its mode of operation can be explained in detail. It does not have to be regarded as a complex "black box"; instead, it can be understood by humans. This new deep learning model has now been published in the journal Nature Machine Intelligence.
If the pace of the pre-coronavirus world was already fast, the luxury of time now seems to have disappeared completely. Businesses that once mapped digital strategy in one- to three-year phases must now scale their initiatives in a matter of days or weeks. In one European survey, about 70 percent of executives from Austria, Germany, and Switzerland said the pandemic is likely to accelerate the pace of their digital transformation. The quickening is already evident across sectors and geographies. Consider how Asian banks have swiftly migrated physical channels online.
A new project that renders a data visualization of brain signals in clothing was recently showcased at the virtual Ars Electronica festival. The robotic dress is coupled to 1,024 channels of a brain-computer interface (BCI) and has 64 outputs for light and movement. The Pangolin Scales dress's components function like animatronic elements that move and light up based on the recorded brain waves. The project originated at the Institute for Integrated Circuits at JKU (Johannes Kepler University, Linz, Austria), in collaboration with the Austrian neurotechnology company g.tec.
Data Science Conference (DSC) Austria is knocking on YOUR door – and it is all free! DSC Austria takes place on September 8-9, and during the event you will get a chance to listen to over 15 high-quality talks and 8 tech tutorials on the topics of AI & ML, Data-Driven Decisions, and Data & AI Literacy – but that is not all! On September 8th you can attend 2 tech tutorials and 3 data discussions. The tutorials are Use Julia for your Scientific Computing Jobs! by Przemyslaw Szufel from Nunatak Capital and Recommender Systems using Deep Graph Library and Apache MXNet by Cyrus Vahid from AWS. You will also get a chance to join the following data discussions: Are Robo Bankers on our Doorstep?, May AI be Profitable and Ethical at the Same Time?, and How AI is Fostering Dehumanization of Decision Making?.