"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
Cerebras Systems, with its latest WSE-2 chip, has set the record for the largest AI model ever trained on a single device. The chip has 850,000 cores and 2.6 trillion transistors, making it far larger than the biggest GPUs: 123 times more cores, 1,000 times more on-chip memory, and 12,000 times more memory bandwidth than the largest GPU. This allowed Cerebras to train a 20-billion-parameter neural network on a single chip. Doing the same with GPUs would require complex compute-cluster engineering and management, which is far more expensive and feasible mainly at large tech companies.
Have you ever wanted all the benefits of going to a bar without having to talk to the actual human being serving your drinks? You're in luck, because Italian scientists at the University of Naples Federico II have developed a machine that does just that. Using machine-learning algorithms, BRILLO (Bartending Robot for Interactive Long-Lasting Operations) can do everything you would expect of an experienced, battle-hardened bartender: remember your favorite drinks, make small talk, and even crack jokes if that's the mood at the bar. BRILLO sports an old-fashioned look complete with a bow tie and vest, alongside long mechanical arms and a human-like face to make him more personable.
A new AI system trained to mimic human gaze could soon be used to detect cancer. "Being able to focus our attention is an important part of the human visual system, which allows humans to select and interpret the most relevant information in a particular scene. Scientists all over the world have been using computer software to try and recreate this ability to pick out the most salient parts of an image, but with mixed success up until now. In the study, the team used a deep learning computer algorithm known as a convolutional neural network, which is designed to mimic the interconnected web of neurons in the human brain and is modelled specifically on the visual cortex. This type of algorithm is ideal for taking images as an input and assigning importance to various objects or aspects within the image itself. According to the team, they utilised a huge database of images in which each image had already been assessed, or viewed, by humans and assigned so-called 'areas of interest' using eye-tracking software. These images were then fed into the algorithm and, using deep learning, the system slowly began to learn from the images to a point where it could accurately predict which parts of the image were most salient. Researchers said their system was tested against seven advanced visual saliency systems already in use, and was shown to be 'superior on all metrics'. 'Being able to successfully predict where people look in natural images could unlock a wide range of applications from automatic target detection to robotics, image processing and medical diagnostics,' said Dr Hantao Liu, co-author of the study, from Cardiff University's School of Computer Science and Informatics."
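The convolution operation at the heart of such a network can be shown with a toy example. This is a plain-Python sketch, not the Cardiff team's model: a real saliency CNN learns its filters from the eye-tracking data, whereas the hand-written center-surround filter below is only an illustrative assumption.

```python
# Toy sketch of the convolution underlying a saliency CNN. A real model
# learns its filters; this single center-surround filter is illustrative.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(image) - kh + 1):
        row = []
        for c in range(len(image[0]) - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# A 4x4 "image" with one bright spot, and a 3x3 center-surround filter
# that responds strongly where a pixel differs from its neighborhood.
image = [[0, 0, 0, 0],
         [0, 9, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
kernel = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]
saliency = convolve2d(image, kernel)
print(saliency)  # strongest response where the filter is centered on the bright spot
```

A trained network stacks many such learned filters and nonlinearities, but the sliding-window weighted sum is the same primitive.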
PyG (PyTorch Geometric) is a library built upon PyTorch to easily write and train Graph Neural Networks (GNNs) for a wide range of applications involving structured data. It provides various methods for deep learning on graphs and other irregular structures, also known as geometric deep learning, drawn from a variety of published papers. In addition, it offers easy-to-use mini-batch loaders for operating on many small graphs as well as single giant graphs, multi-GPU support, DataPipe support, distributed graph learning via Quiver, a large number of common benchmark datasets (with simple interfaces to create your own), the GraphGym experiment manager, and helpful transforms, both for learning on arbitrary graphs and on 3D meshes or point clouds. Join our Slack community! Whether you are a machine learning researcher or a first-time user of machine learning toolkits, here are some reasons to try out PyG for machine learning on graph-structured data.
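The message-passing idea underlying GNN libraries like PyG can be sketched in a few lines. This is a plain-Python toy, not PyG's actual API (which operates on tensors and `Data` batches); the example graph, the 2-dimensional node features, and the mean-aggregation rule are illustrative assumptions.

```python
# Minimal illustration of one message-passing round, the core operation
# that GNN layers generalize: each node updates its feature vector by
# aggregating over its graph neighborhood.

def message_passing_step(features, edges):
    """Each node's new feature is the average of its own feature and its
    neighbors' features (a simple mean-aggregation rule)."""
    # Build an adjacency list from the undirected edge list.
    neighbors = {node: [] for node in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    updated = {}
    for node, feat in features.items():
        pool = [feat] + [features[n] for n in neighbors[node]]
        dim = len(feat)
        updated[node] = [sum(vec[d] for vec in pool) / len(pool)
                         for d in range(dim)]
    return updated

# A 4-node path graph 0-1-2-3 with 2-dimensional node features.
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [0.0, 0.0], 3: [1.0, 1.0]}
edges = [(0, 1), (1, 2), (2, 3)]
features = message_passing_step(features, edges)
print(features[1])  # node 1 now mixes information from nodes 0 and 2
```

Real GNN layers replace the fixed mean with learned transformations, and PyG implements many published variants of exactly this pattern efficiently on GPU.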
When people talk about artificial intelligence, they usually don't mean supervised and unsupervised machine learning. These tasks seem fairly trivial compared to what we imagine AIs doing: playing chess and Go, driving cars, and beating video games at a superhuman level. Reinforcement learning has recently become popular for doing all of that and more. Much like deep learning, a lot of the theory was worked out in the 70s and 80s, but only recently have we been able to observe firsthand the amazing results that are possible. In 2016, we saw Google's AlphaGo beat the world champion, Lee Sedol, at Go.
Samsara (NYSE: IOT) is the pioneer of the Connected Operations Cloud, which allows businesses that depend on physical operations to harness IoT (Internet of Things) data to develop actionable business insights and improve their operations. Founded in San Francisco in 2015, we now employ more than 1,800 people globally and have over 1.5 million active devices. Samsara went public in December 2021, and we're just getting started. Recent awards we've won include:
• #2 in the Financial Times' Fastest Growing Companies in Americas list 2021
• Named a Best Place to Work in Built In 2022
• #19 in the Forbes Cloud 100 2021
• IoT Analytics Company of the Year in 2022's IoT Breakthrough Winners
• Forbes Advisor named us the Best Solution for Large Companies – Fleet management software for 2022
We're driving change in industries that have yet to fully embrace digital transformation. Physical operations make up a massive slice of the global economy but haven't benefited from innovation and actionable information in the way that other sectors have.
Originally published on Towards AI, the World's Leading AI and Technology News and Media Company. Some time ago, a scientific paper with the title A Neural Algorithm of Artistic Style by Gatys et al. caught my attention.
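The style representation in Gatys et al.'s paper is built from Gram matrices of CNN feature maps: channel-by-channel correlations that capture texture while discarding spatial layout. As a minimal sketch, assuming tiny hand-written "feature maps" in place of real CNN activations, the Gram matrix and a squared-difference style loss look like this:

```python
# Sketch of the Gram-matrix style representation from "A Neural Algorithm
# of Artistic Style". Real style transfer computes this over CNN feature
# maps; the tiny flattened channels below are illustrative stand-ins.

def gram_matrix(feature_maps):
    """feature_maps: one flattened activation list per channel.
    Returns G with G[i][j] = dot(channel_i, channel_j), the channel
    correlations that encode 'style'."""
    n = len(feature_maps)
    return [[sum(a * b for a, b in zip(feature_maps[i], feature_maps[j]))
             for j in range(n)] for i in range(n)]

def style_loss(g_style, g_generated):
    """Sum of squared differences between two Gram matrices."""
    return sum((a - b) ** 2
               for row_s, row_g in zip(g_style, g_generated)
               for a, b in zip(row_s, row_g))

style = gram_matrix([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])
generated = gram_matrix([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(style_loss(style, generated))  # gradient descent on the image drives this toward 0
```

In the full method this loss is summed over several CNN layers and combined with a content loss, and the generated image's pixels are optimized directly.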
Farshad Kheiri is the Head of AI and Data Science at Legion Technologies, an industry leader in AI-powered, machine-learning workforce management (WFM) products. The company uses advanced technology to solve some of the biggest WFM business challenges while creating an employee experience that helps attract and retain employees. What initially attracted you to computer science and engineering? I learned programming through online courses, as well as some on-campus classes. My background is in electrical engineering, but I have a minor in math, stochastic processes, and probability.
Nvidia has developed PrefixRL, a reinforcement learning (RL) approach to designing parallel-prefix circuits that are smaller and faster than those designed by state-of-the-art electronic-design-automation (EDA) tools. Various important circuits in the GPU, such as adders, incrementors, and encoders, are parallel-prefix circuits. These circuits are fundamental to high-performance digital design and can be defined at a higher level as prefix graphs. PrefixRL focuses on this class of arithmetic circuits, and the main goal of the approach is to understand whether an AI agent can design a good prefix graph, given that the state space of the problem is O(2^(n^n)) and cannot be explored by brute force. The desired circuit should be small, fast, and power-efficient.
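A parallel-prefix circuit computes all running results of an associative operation, such as the carry chains inside an adder. The plain-Python sketch below simulates a Kogge-Stone-style prefix graph, one classic hand-designed point in the design space PrefixRL searches; the function name and the sequential simulation of parallel steps are illustrative assumptions, not NVIDIA's method.

```python
# Hedged sketch: what a prefix graph computes. This Kogge-Stone-style
# schedule is a standard textbook design, not the RL-optimized circuits
# PrefixRL produces.

def kogge_stone_prefix(values, op):
    """Compute every prefix of `values` under the associative `op` in
    O(log n) combining levels (each level simulated sequentially here;
    in hardware all combinations at one level happen in parallel)."""
    out = list(values)
    dist = 1
    while dist < len(out):
        nxt = list(out)
        for i in range(dist, len(out)):
            nxt[i] = op(out[i - dist], out[i])
        out = nxt
        dist *= 2
    return out

# Prefix sums of [3, 1, 4, 1, 5, 9, 2, 6] -> all running totals.
print(kogge_stone_prefix([3, 1, 4, 1, 5, 9, 2, 6], lambda a, b: a + b))
```

Different prefix graphs (Kogge-Stone, Sklansky, Brent-Kung, and everything in between) trade off wiring, node count, and depth, which is exactly the area/delay/power trade-off the RL agent explores.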