Hot Robotics Symposium celebrates UK success

Robohub

An internationally leading robotics initiative that enables academia and industry to find innovative solutions to real-world challenges celebrated its success with a Hot Robotics Symposium hosted across three UK regions last week. The National Nuclear User Facility (NNUF) for Hot Robotics is a government-funded initiative that supports innovation in the nuclear sector by making world-leading testing facilities, sensors and robotic equipment easily accessible to academia and industry. Ground-breaking, impactful research in robotics and artificial intelligence will benefit the UK's development of fusion energy as a safe, low-carbon and sustainable energy source, in addition to adjacent sectors such as nuclear decommissioning, space, and mobile applications. Visitors to UKAEA's RACE (UK Atomic Energy Authority / Remote Applications in Challenging Environments) in Oxfordshire, the University of Bristol facility at Fenswood Farm (North Somerset), and the National Nuclear Laboratory in Cumbria were treated to a host of robots in action, tours and a packed speaker programme. A combination of robotic manipulators; ground, aerial and underwater vehicles; deployment robots; plant mock-ups; and supporting infrastructure was showcased to demonstrate the breadth of the scheme.


Using machine learning to narrow down the possibilities for a better quantum tunneling interface

#artificialintelligence

A pair of researchers at Fudan University in China has used machine learning to narrow the list of possible improved tunneling interface configurations for use in transistors. They have published their results in Physical Review Letters. Over the past several decades, engineers have worked to uphold Moore's law, faithfully doubling the number of transistors that could be placed on an integrated circuit roughly every two years. But such efforts are now in jeopardy due to the laws of physics, most particularly those related to quantum tunneling, which degrades performance. More specifically, the material used to separate gates on chips (interfaces) from channels has become so thin that charge carriers can wiggle their way through via quantum tunneling.
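The summary doesn't describe the authors' actual model, but the general ML-guided screening loop it alludes to (fit a cheap surrogate on configurations already scored by an expensive simulator, then rank untried configurations by the surrogate's predictions) can be sketched in a few lines. Everything below, from the one-feature linear surrogate to the toy numbers, is an illustrative assumption rather than the authors' method:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y ~ a*x + b (single feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def screen(candidates, a, b, top=3):
    """Rank untried candidate configurations by predicted score, best first."""
    return sorted(candidates, key=lambda x: a * x + b, reverse=True)[:top]

# Hypothetical data: each interface configuration is reduced to one numeric
# descriptor, and `scores` are the (expensive) simulated figures of merit.
descriptors = [1.0, 2.0, 3.0]
scores = [2.1, 3.9, 6.2]
a, b = fit_linear(descriptors, scores)
shortlist = screen([1.5, 4.0, 2.5], a, b, top=2)  # best-predicted candidates
```

In practice the surrogate would be a far richer model over many material and geometry features, but the payoff is the same: only the shortlist goes back to expensive simulation or fabrication.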


The amazing potential of artificial intelligence, machine learning for every industry

#artificialintelligence

In a moment of frustration, you might have wished your organization had two superpowers. First, the ability to put your most time-consuming, labor-intensive, and detail-oriented processes on autopilot so you could focus on improving your growth outcomes. Second, the ability to answer questions that seem too complicated, confusing, or contradictory to make sense of. With the advent of artificial intelligence (AI) and machine learning (ML), teams are accomplishing what used to seem impossible and learning what was once thought unknowable. Let's dig into why AI and ML are such transformative technologies. Then, we'll illustrate how diverse (and unexpected) industries are using these technologies to solve their biggest challenges and unlock opportunities.


Introduction to Machine Learning: Supervised Learning

#artificialintelligence

In this course, you'll learn various supervised ML algorithms and prediction tasks applied to different data. You'll learn when to use which model and why, and how to improve model performance. We will cover models such as linear and logistic regression, KNN, decision trees, ensemble methods such as Random Forest and boosting, and kernel methods such as SVM. Prior coding or scripting knowledge is required. We will be utilizing Python extensively throughout the course.
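To make one of the listed models concrete, here is a minimal sketch of k-nearest-neighbors classification in plain Python. The toy dataset and function names are our own illustration, not course material:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (features, label) pairs; features are numeric tuples."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D dataset: two well-separated clusters.
data = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
        ((1.0, 1.0), "b"), ((0.9, 1.1), "b"), ((1.1, 0.9), "b")]
print(knn_predict(data, (0.05, 0.1)))  # → a
print(knn_predict(data, (1.05, 1.0)))  # → b
```

KNN has no training phase beyond storing the data, which is exactly the kind of model-choice trade-off (simplicity versus prediction cost) the course promises to cover.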


Machine Learning in Elixir with Sean Moriarity

#artificialintelligence

Sean Moriarity, the author of Genetic Algorithms in Elixir, lays out machine learning in the Elixir space. We talk about where it is today and where it's going in the future. Sean talks more about his book and how it led to working with José Valim, which in turn led to the creation of Nx. He fills us in on recent ML events at Google and Facebook and shows us how Elixir fits into the bigger picture. It's a fast-developing area, and Sean helps us follow the important points even if we aren't doing ML ourselves… because our teams may still need it.


Zscaler Provides New AI/ML Capabilities for the Zscaler Zero Trust Exchange - Channel969

#artificialintelligence

Zscaler, Inc. (NASDAQ: ZS), the leader in cloud security, today announced newly advanced AI/ML innovations powered by the largest security cloud in the world for unparalleled user protection and digital experience monitoring. The new capabilities further enhance Zscaler's Zero Trust Exchange security platform to enable organizations to implement a Security Service Edge (SSE) that protects against the most advanced cyberattacks, while delivering an exceptional digital experience to users and simplifying adoption of a zero trust architecture. Organizations are facing a 314 percent increase in cyberattacks on encrypted internet traffic and an 80 percent increase in ransomware, with nearly a 120 percent increase in double-extortion attacks. Phishing is also on the rise, with industries like financial services, government and retail seeing annual increases in attacks of over 100 percent in 2021. To combat advancing threats, organizations need to adapt their defenses to real-time changes in risk.


Efficient Deep Learning: From Theory to Practice

#artificialintelligence

Modern machine learning often relies on deep neural networks that are prohibitively expensive in terms of their memory and computational footprint. This in turn significantly limits the range of applications in which we face non-negligible resource constraints, e.g., real-time data processing, embedded devices, and robotics. In this thesis, we develop theoretically grounded algorithms to reduce the size and inference cost of modern, large-scale neural networks. By taking a theoretical approach from first principles, we intend to understand and analytically describe the performance-size trade-offs of deep networks, i.e., their generalization properties. We then leverage such insights to devise practical algorithms for obtaining more efficient neural networks via pruning or compression. Beyond theoretical aspects and the inference-time efficiency of neural networks, we study how compression can yield novel insights into the design and training of neural networks. We investigate the practical aspects of the generalization properties of pruned neural networks beyond simple metrics such as test accuracy. Finally, we show how in certain applications pruning neural networks can improve training and hence generalization performance.
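The thesis's own algorithms are not reproduced in this summary, but the basic idea behind magnitude pruning, one common route to smaller networks, can be sketched in a few lines of Python. The function and thresholding rule below are a generic illustration, not the thesis's method:

```python
def magnitude_prune(weights, sparsity):
    """Zero out (roughly) the fraction `sparsity` of weights with the
    smallest magnitude; ties at the threshold are also zeroed."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002]
pruned = magnitude_prune(weights, 0.5)  # three smallest-magnitude weights -> 0
```

Real pruning pipelines operate on tensors layer by layer (e.g., via framework utilities) and usually fine-tune afterwards; the theoretical question the thesis tackles is when and why such sparsified networks still generalize.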


The Annotated Diffusion Model

#artificialintelligence

In this blog post, we'll take a deeper look into Denoising Diffusion Probabilistic Models (also known as DDPMs, diffusion models, score-based generative models or simply autoencoders), as researchers have been able to achieve remarkable results with them for (un)conditional image/audio/video generation. Popular examples (at the time of writing) include GLIDE and DALL-E 2 by OpenAI, Latent Diffusion by the University of Heidelberg and Imagen by Google Brain. We'll go over the original DDPM paper by Ho et al. (2020), implementing it step by step in PyTorch, based on Phil Wang's implementation, which itself is based on the original TensorFlow implementation. Note that the idea of diffusion for generative modeling was actually already introduced in Sohl-Dickstein et al. (2015). However, it took until Song et al. (2019) (at Stanford University) and then Ho et al. (2020) (at Google Brain), who independently improved the approach, for it to take off.
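As a taste of the step-by-step implementation, the forward (noising) process of a DDPM can be sampled in closed form: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps, where abar_t is the cumulative product of (1 - beta_t) and eps is standard Gaussian noise. Here is a simplified pure-Python stand-in (the blog post itself works with PyTorch tensors):

```python
import math
import random

def linear_beta_schedule(timesteps, beta_start=1e-4, beta_end=0.02):
    """Linearly spaced noise variances beta_1..beta_T, as in Ho et al. (2020)."""
    step = (beta_end - beta_start) / (timesteps - 1)
    return [beta_start + i * step for i in range(timesteps)]

def alpha_bar(betas, t):
    """Cumulative product of (1 - beta) up to and including step t."""
    prod = 1.0
    for beta in betas[: t + 1]:
        prod *= 1.0 - beta
    return prod

def q_sample(x0, t, betas, eps=None):
    """Closed-form sample from q(x_t | x_0):
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I)."""
    abar = alpha_bar(betas, t)
    if eps is None:
        eps = [random.gauss(0.0, 1.0) for _ in x0]
    return [math.sqrt(abar) * x + math.sqrt(1.0 - abar) * e
            for x, e in zip(x0, eps)]

betas = linear_beta_schedule(1000)
noisy = q_sample([0.5, -0.3], t=500, betas=betas)  # partially noised "image"
```

Training then amounts to teaching a neural network to predict eps from x_t and t; sampling runs the process in reverse, which is exactly what the full PyTorch walkthrough builds up to.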


La veille de la cybersécurité

#artificialintelligence

A team of researchers at Cornell University has developed a new method enabling autonomous vehicles to create "memories" of previous experiences, which can then be used in future navigation. This will be especially useful when these self-driving cars can't rely on sensors in bad weather. Current self-driving cars that use artificial neural networks have no memory of the past, meaning they are constantly "seeing" things for the first time. And this is true regardless of how many times they've driven the exact same road. Kilian Weinberger is senior author of the research and a professor of computer science.


Data alternatives for pretraining computer vision models

#artificialintelligence

Not only did a classifier pre-trained on Task2Sim's fake images perform as well as a model trained on real ImageNet photos, it also outperformed a rival trained on images generated with random simulation parameters. Task2Sim even transferred its know-how to entirely new tasks, creating images to teach a classifier how to identify cactuses and hand-drawn numbers. "The more tasks you use during training, the more generalizable the model will be," Feris said. A related tool, SimVQA, also appearing at CVPR, generates synthetic text and images for training robot agents to reason about the visual world. In a typical visual-reasoning task, an agent might be asked to count the number of chairs at a table or identify the color of a bouquet of flowers.