Behavioural analysis of single-cell aneural ciliate, Stentor roeseli, using machine learning approaches

#artificialintelligence

There is still a significant gap between our understanding of neural circuits and the behaviours they compute, i.e. the computations performed by these neural networks (Carandini 2012 Nat. Neurosci.). Cellular decision-making processes, learning, behaviour and memory formation, all of which have previously been associated only with animals that have nervous systems, have also been observed in many unicellular aneural organisms, namely Physarum, Paramecium and Stentor (Tang & Marshall 2018 Curr. Biol.). As these are fully functioning yet unicellular organisms, there is a much better chance of elucidating the detailed mechanisms underlying these learning processes without the complications of highly interconnected neural circuits. An intriguing learning behaviour has been observed in Stentor roeseli (Jennings 1902 Am. J. Physiol.). So far, none of the existing learning paradigms can fully encapsulate this particular series of five characteristic avoidance reactions.
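
For a concrete, if highly simplified, sense of what a machine-learning treatment of such behaviour sequences can look like, the sketch below encodes each avoidance reaction as a symbol and estimates a first-order Markov transition matrix from observed sequences. This is an illustration only, not the paper's actual pipeline; the behaviour labels and example sequences are invented for the example.

```python
# Illustrative sketch only (not the study's pipeline): estimate a first-order
# Markov transition matrix over a small alphabet of avoidance behaviours.
import numpy as np

# Hypothetical behaviour labels; the real study defines its own categories.
BEHAVIOURS = ["bend", "ciliary_alteration", "contraction", "detachment", "no_response"]
INDEX = {b: i for i, b in enumerate(BEHAVIOURS)}

def transition_matrix(sequences):
    """Count behaviour-to-behaviour transitions and normalise each row."""
    n = len(BEHAVIOURS)
    counts = np.zeros((n, n))
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[INDEX[prev], INDEX[nxt]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no observations stay all-zero rather than dividing by zero.
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Toy, made-up response sequences to two rounds of stimulation.
example_sequences = [
    ["bend", "ciliary_alteration", "contraction", "contraction", "detachment"],
    ["bend", "bend", "ciliary_alteration", "contraction", "detachment"],
]
print(transition_matrix(example_sequences))
```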


Autonomous Driving Still Terra Incognita - Semiwiki

#artificialintelligence

I already posted on one automotive panel at this year's Arm TechCon. A second I attended was a more open-ended discussion on where we're really at in autonomous driving. Most of you probably agree we've passed the peak of the hype curve and are now into the long slog of trying to connect hope to reality. There are a lot of challenges, not all technical; this panel did a good job (IMHO) of exposing some of the tough questions and acknowledging that answers are still in short supply. I left even more convinced that autonomous driving is still a hard problem needing a lot more investment and a lot more time to work through.


Scientists use night vision to save bats

#artificialintelligence

High-resolution radar and night vision cameras may help scientists protect bats from untimely deaths at wind farms, according to new research. Researchers are using these technologies to provide more specific details about the number of bats killed by wind turbines in Iowa. These details will improve scientists' understanding of bat activity and potentially save their lives, said Jian Teng, a graduate researcher at the University of Iowa who presented the work this week at the 2019 American Geophysical Union Fall Meeting in San Francisco. This work has broad impacts, according to Teng. "The more bats you kill, the more insects you have on farms; then, farmers will put more pesticides; and then, people will eat more pesticides," he said.


Israeli Start-Up Claims System Can See Like a Bee In the Scan

#artificialintelligence

In the race to develop fully autonomous vehicles, Israeli start-up Lirhot Systems says its sensor can "see" the road ahead and assess potential hazards. While most leading industry actors have relied on and heavily invested in laser-based LiDAR (light detection and ranging) three-dimensional sensors for self-driving navigation, Tesla CEO Elon Musk has been the primary – and vocal – proponent of navigation based on using inexpensive cameras and radar. While developers continue to argue among themselves regarding the pros and cons of the two systems, Rehovot-based robotic vision start-up Lirhot Systems says it has developed a third method of navigation: a camera-like sensor inspired by insect navigation. "In nature, you have bugs and insects that navigate in a specific way, and we're copying that to enable autonomous vehicles to see," Lirhot CEO Shlomi Voro, an applied physicist with dozens of patents in the field of quantum physics, told The Jerusalem Post. "We were inspired by the heads of bees, their artificial intelligence-like neural network, size, accuracy of navigation, and how they see the world through their five eyes – two for vision and three for navigation."


Digital Civil Right to Transparency? - Lone Star Analysis

#artificialintelligence

California's passage of its "GDPR-lite" caught people off guard. We think this is part of a trend we've studied for a long time. Much of the current analysis misses key points, so it seems worth explaining. About two years ago, we asked several thought leaders in the U.S. about the odds we'd see legislation like the E.U.'s GDPR. GDPR provides clear rights to E.U. citizens, controlling data captured online.


This is what the AI industry will look like in 2020

#artificialintelligence

As we come to the end of 2019, we reflect on a year that opened with 100 machine learning papers being published a day and closes with what looks to be a record-breaking funding year for AI. But the path to getting real value from data science and AI can be a long and difficult journey. To paraphrase Eric Beinhocker from the Institute for New Economic Thinking, there are physical technologies that evolve at the pace of science, and social technologies that evolve at the pace at which humans can change -- much slower. Applied to the domain of data science and AI, the most sophisticated deep learning algorithms or the most robust and scalable real-time streaming data pipelines ('physical technology') mean little if decisions are not effectively made, organizational processes actively hinder data science and AI, and AI applications are not adopted due to lack of trust ('social technology'). With that in mind, my predictions for 2020 attempt to balance both aspects, with an emphasis on real value for companies, and not just 'cool things' for data science teams.


Google AI chief Jeff Dean interview: Machine learning trends in 2020

#artificialintelligence

At the Neural Information Processing Systems (NeurIPS) conference this week in Vancouver, Canada, machine learning took center stage as 13,000 researchers explored things like neuroscience, how to interpret neural network outputs, and how AI can help solve big real-world problems. With more than 1,400 works accepted for publication, you have to choose how to prioritize your time. For Google AI chief Jeff Dean, that means giving talks at workshops about how machine learning can help confront the threat posed by climate change and how machine learning is reshaping systems and semiconductors. VentureBeat spoke with Dean Thursday about Google's early work on the use of ML to create semiconductors for machine learning, the impact of Google's BERT on conversational AI, and machine learning trends to watch in 2020. This interview has been edited for brevity and clarity.


How Machine Learning Drives the Deceptive World of Deepfakes

#artificialintelligence

Deepfakes are spreading fast, and while some have playful intentions, others can cause serious harm. We stepped inside this deceptive new world to see what experts are doing to catch this altered content. Chances are you've seen a deepfake; Donald Trump, Barack Obama, and Mark Zuckerberg have all been targets of the computer-generated replications. A deepfake is a video or an audio clip where deep learning models create versions of people saying and doing things that have never actually happened. A good deepfake can chip away at our ability to discern fact from fiction, testing whether seeing is really believing.
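
One common way detection work is framed (a rough sketch under assumptions, not any particular team's system) is as binary classification over video frames: a small convolutional network scores each frame as real or fake. The architecture and input size below are placeholders.

```python
# Minimal sketch of a frame-level deepfake detector in PyTorch.
# Everything here (layer sizes, input resolution) is illustrative.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Scores a single RGB frame; a higher logit means 'more likely fake'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pool to (batch, 32, 1, 1)
        )
        self.head = nn.Linear(32, 1)   # raw logit; sigmoid gives a probability

    def forward(self, frames):         # frames: (batch, 3, H, W)
        x = self.features(frames).flatten(1)
        return self.head(x)

model = FrameClassifier()
dummy_frames = torch.randn(4, 3, 224, 224)   # stand-in for 4 video frames
fake_probability = torch.sigmoid(model(dummy_frames))
print(fake_probability.squeeze(1))           # one score per frame
```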


What's the real end-game for AI music? Popgun's CEO has some ideas...

#artificialintelligence

"There isn't this place in the world where teenagers come together to make music for each other. That place does not exist, and that's nuts! That thing needs to exist, and it will exist. And getting the AI working is the price of admission to build that thing…" Stephen Phillips, CEO of Australian startup Popgun, thinks that the early business models in this sector – AI-music as a replacement for production music, for example – are just a sliver of the ultimate potential for this technology. What's more, his thoughts on how AI music might disrupt the current music industry are less about people choosing to listen to AI-made music instead of human-made music, and more about people (non-musicians) using AI tools to make music for one another. "Where's the 'pop stars on training wheels' place where they make music for each other, release it and watch each other pretend to be pop stars, but then go on to become legitimate pop stars?"


Greg Walters on real-world applications of GANs and PyTorch - Packt Hub

#artificialintelligence

GANs (Generative Adversarial Networks) were first introduced in 2014 by Ian Goodfellow and other researchers at the University of Montreal. A GAN comprises two deep networks: the generator, which generates data instances, and the discriminator, which evaluates the data for authenticity. GANs work not only as a form of generative model for unsupervised learning, but have also proved useful for semi-supervised learning, fully supervised learning, and reinforcement learning. In this article, we are in conversation with Greg Walters, one of the authors of the book 'Hands-On Generative Adversarial Networks with PyTorch 1.x', where we discuss some of the real-world applications of GANs. According to Greg, facial recognition and age progression will be among the areas where GANs will shine in the future.
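
To make the generator/discriminator split concrete, here is a minimal PyTorch sketch of one adversarial training step. It is a generic illustration rather than code from the book; the layer sizes, data dimensionality and hyperparameters are assumptions chosen for brevity.

```python
# Minimal GAN sketch: the generator maps noise to fake samples, the
# discriminator scores samples as real vs. fake, and each is updated in turn.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784   # e.g. a flattened 28x28 image (illustrative)

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),           # raw logit: real vs. fake
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator update: push real samples toward 1, generated samples toward 0.
    fake = generator(torch.randn(n, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), ones) + bce(discriminator(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator label its samples as real.
    g_loss = bce(discriminator(generator(torch.randn(n, latent_dim))), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Dummy "real" batch in [-1, 1], matching the generator's Tanh output range.
d_loss, g_loss = train_step(torch.rand(32, data_dim) * 2 - 1)
print(d_loss, g_loss)
```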