New computational algorithms make it possible to build neural networks with many input nodes and many layers; this depth is what distinguishes "deep learning" in these networks from previous work on artificial neural networks.
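As a loose illustration of the "many layers" idea, a forward pass through a small fully connected network might look like the sketch below; the layer sizes and random weights are arbitrary choices for illustration, not taken from any particular system.

```python
import numpy as np

# A tiny "deep" network: three hidden layers between input and output.
# All weights are random -- this only illustrates the layered structure.
rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 16, 4]   # input, three hidden layers, output

x = rng.standard_normal(layer_sizes[0])
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    W = rng.standard_normal((n_out, n_in)) * 0.1
    x = np.maximum(0, W @ x)       # ReLU activation at each layer

print(x.shape)  # (4,)
```

In earlier neural-net research, networks typically had only one or two such hidden layers; "deep" learning stacks many of them.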
In the past year, lockdowns and other COVID-19 safety measures have made online shopping more popular than ever, but the skyrocketing demand is leaving many retailers struggling to fulfill orders while ensuring the safety of their warehouse employees. Researchers at the University of California, Berkeley, have created new artificial intelligence software that gives robots the speed and skill to grasp and smoothly move objects, making it feasible for them to soon assist humans in warehouse environments. The technology is described in a paper published online today (Wednesday, Nov. 18) in the journal Science Robotics. Automating warehouse tasks can be challenging because many actions that come naturally to humans -- like deciding where and how to pick up different types of objects and then coordinating the shoulder, arm and wrist movements needed to move each object from one location to another -- are actually quite difficult for robots. Robotic motion also tends to be jerky, which can increase the risk of damaging both the products and the robots.
DeepMind, an AI research lab that was bought by Google and is now an independent part of Google's parent company Alphabet, announced a major breakthrough this week that one evolutionary biologist called "a game changer." "This will change medicine," the biologist, Andrei Lupas, told Nature. The breakthrough: DeepMind says its AI system, AlphaFold, has solved the "protein folding problem" -- a grand challenge of biology that has vexed scientists for 50 years. Proteins are the basic machines that get work done in your cells. They start out as strings of amino acids (imagine the beads on a necklace) but they soon fold up into a unique three-dimensional shape (imagine scrunching up the beaded necklace in your hand).
Researchers from all over the world contribute to this repository as a prelude to the peer review process for publication in traditional journals. The articles listed below represent a small fraction of all articles appearing on the preprint server. They are listed in no particular order with a link to each paper along with a brief overview. Links to GitHub repos are provided when available. Especially relevant articles are marked with a "thumbs up" icon.
Verdict lists the top five terms tweeted on big data in November 2020, based on data from GlobalData's Influencer Platform. The top tweeted terms reflect trending industry discussions among key individuals (influencers) on Twitter, as tracked by the platform. The massive adoption of artificial intelligence (AI) for driving innovation, top applications of AI, and risks associated with AI were popularly discussed in November. According to an article shared by Dr Omkar Rai, director general of Software Technology Parks of India (STPI), the massive adoption of AI is driving innovation in areas such as health research, data analytics, and robotic assistants, to name a few. Research from UnivDatos Market Insights, a market research firm, finds that AI's contribution to the healthcare sector is expected to grow at a compound annual growth rate (CAGR) of 41% between 2018 and 2025 and will be worth $26.6bn by 2025.
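As a rough sanity check on the cited figures, a 41% CAGR ending at $26.6bn in 2025 implies a 2018 market size of roughly $2.4bn. The dollar figure and growth rate come from the article; the calculation itself is a quick back-of-envelope sketch:

```python
# Figures from the cited UnivDatos research; the arithmetic is illustrative.
end_value = 26.6               # $bn, projected 2025 market size
cagr = 0.41                    # 41% compound annual growth rate
years = 2025 - 2018            # 7-year horizon

implied_2018 = end_value / (1 + cagr) ** years
print(round(implied_2018, 2))  # ~2.4, the implied 2018 market size in $bn
```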
Computer vision models known as convolutional neural networks can be trained to recognize objects nearly as accurately as humans do. However, these models have one significant flaw: very small changes to an image, which would be nearly imperceptible to a human viewer, can trick them into making egregious errors such as classifying a cat as a tree. A team of neuroscientists from MIT, Harvard University, and IBM has developed a way to alleviate this vulnerability by adding to these models a new layer that is designed to mimic the earliest stage of the brain's visual processing system. In a new study, they showed that this layer greatly improved the models' robustness against this type of mistake. "Just by making the models more similar to the brain's primary visual cortex, in this single stage of processing, we see quite significant improvements in robustness across many different types of perturbations and corruptions," says Tiago Marques, an MIT postdoc and one of the lead authors of the study.
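The kind of small, targeted perturbation described above can be sketched on a toy linear classifier. The weights, input, and step size below are invented for illustration; the study itself concerns deep convolutional networks, where the same gradient-sign idea is commonly used to craft adversarial images.

```python
import numpy as np

# Toy linear classifier with hypothetical weights.
w = np.array([1.0, -1.0])
x = np.array([0.6, 0.5])           # input scored just above the decision boundary

def predict(v):
    return int(w @ v > 0)          # class 1 if the score is positive

# Gradient-sign step: nudge each input component by eps in the
# direction that lowers the score (the gradient of the score is w).
eps = 0.2
x_adv = x - eps * np.sign(w)

print(predict(x))      # 1
print(predict(x_adv))  # 0 -- a small, structured change flips the class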
Self-driving cars are becoming a reality in fields such as agriculture, transportation, and the military, and the day when ordinary consumers use self-driving cars in their daily lives is quickly approaching. An autonomous vehicle performs the necessary operations based on sensor information and AI algorithms: it needs to collect data, plan trajectories, and execute driving routes. These tasks, especially planning and executing trajectories, go beyond traditional programming methods and rely on machine learning techniques from AI. Classical heuristic algorithms from computer science, such as the Bellman-Ford and Dijkstra algorithms, can still be used for path planning and control.
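Of the two classical algorithms named above, Dijkstra's is the more common starting point for path planning. A minimal sketch over a hypothetical road graph (the node names and edge costs are invented for illustration):

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path costs from start over a weighted graph
    given as {node: [(neighbor, cost), ...]}."""
    dist = {start: 0}
    pq = [(0, start)]                       # priority queue of (cost, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):   # stale queue entry, skip
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy road network: waypoints A-D with travel costs.
roads = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

In a real vehicle, the graph would come from map data and the costs from distance, traffic, or safety margins; learned components then handle the perception and trajectory-execution steps the paragraph mentions.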
Within any living organism, there are thousands of different proteins, each with its own unique shape. For decades, the exact formation of those shapes has been a pain for scientists to figure out. How exactly does a protein, which starts as a string of amino acids, fold itself into the funky 3D shapes you might recognize from diagrams? AlphaFold, an AI from DeepMind, may have an answer. It can predict, with heretofore unseen accuracy, the shape a protein will take.
Artificial intelligence (AI) has a wide range of applications in today's society, including prediction, classification, and the solution of both social and scientific problems. As one of the oldest and most traditional engineering disciplines, civil engineering covers many aspects of the built environment, from design and construction to maintenance, and offers ample practical scope for applications of AI. In turn, AI can improve quality of life and yield novel approaches to solving engineering problems. AI methods and techniques, including neural networks, evolutionary computation, fuzzy logic systems, and deep learning, have evolved rapidly over the past few years.
Artificial Intelligence (AI) has been a top trend in many industries lately, attracting massive media attention and investment. Over the last decade, this complex area of research has rapidly progressed from being a "resurrected cool technology from the past" to a full-blown driver of nothing less than a new industrial revolution, a digital one. As of today, AI is widely commercialized in applications such as manufacturing robots, smart assistants (e.g. Siri), automated financial investing systems, virtual travel booking agents, social media monitoring tools, conversational bots, surveillance systems, online security systems, language translators, self-driving cars, and much more. In some industries, AI (including its many technologies and sub-disciplines, such as deep learning, recommender systems, and natural language processing) is becoming a standardized component rather than the cutting-edge innovation it once was. This rapid progress in AI adoption is also seen in the pharmaceutical industry, though not without caveats. Unlike "mainstream" use cases such as image recognition or spam email filtering, drug discovery research appears to be a much harder case for several reasons.
This is the second edition of my weekly update on deep learning. Every Thursday, I'll release a new batch of research papers, blog posts, GitHub repos, etc. that I liked over the past week. Links are provided for each featured project, so you can dive in and learn about whatever catches your eye. If you missed last week's edition, you can find it here. All thoughts and opinions are my own.