ARIEL: Adversarial Graph Contrastive Learning
Feng, Shengyu, Jing, Baoyu, Zhu, Yada, Tong, Hanghang
Contrastive learning is an effective unsupervised method in graph representation learning, and the key component of contrastive learning lies in the construction of positive and negative samples. Previous methods usually take the proximity of nodes in the graph as the guiding principle. Recently, data-augmentation-based contrastive learning has shown great power in the visual domain, and some works have extended this method from images to graphs. However, unlike data augmentation on images, data augmentation on graphs is far less intuitive, and it is much harder to produce high-quality contrastive samples, which leaves much space for improvement. In this work, by introducing an adversarial graph view for data augmentation, we propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL), to extract informative contrastive samples within reasonable constraints. We develop a new technique called information regularization for stable training and use subgraph sampling for scalability. We generalize our method from node-level contrastive learning to the graph level by treating each graph instance as a super-node. ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks on real-world datasets. We further demonstrate that ARIEL is more robust in the face of adversarial attacks.
- North America > United States > New York > New York County > New York City (0.14)
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- (9 more...)
- Information Technology (0.90)
- Government > Military (0.36)
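The adversarial view construction described in the abstract above can be sketched in a few lines. This is a rough, hypothetical illustration rather than the paper's actual implementation: a one-layer GCN encoder, an InfoNCE contrastive loss, and PGD-style gradient ascent on a node-feature perturbation `delta` kept inside an L-infinity ball. The encoder, step sizes, and perturbation budget are all illustrative assumptions (ARIEL also perturbs the graph structure, which is omitted here).

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d = 8, 4                                   # nodes, feature dimension
X = torch.randn(n, d)                         # node features
A = (torch.rand(n, n) < 0.3).float()
A = ((A + A.T + torch.eye(n)) > 0).float()    # symmetric adjacency with self-loops
D_inv_sqrt = torch.diag(A.sum(1).pow(-0.5))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt           # symmetrically normalized adjacency
W = torch.randn(d, d)                         # frozen toy encoder weights

def encode(feats):
    # One-layer GCN encoder with row-normalized embeddings.
    return F.normalize(A_hat @ feats @ W, dim=1)

def infonce(z1, z2, tau=0.5):
    # InfoNCE: each node's positive is its counterpart in the other view.
    sim = z1 @ z2.T / tau
    return F.cross_entropy(sim, torch.arange(len(z1)))

eps, alpha, steps = 0.1, 0.03, 5              # budget, step size, ascent steps
z_anchor = encode(X).detach()
delta = torch.zeros_like(X, requires_grad=True)
for _ in range(steps):                        # PGD-style inner maximization
    loss = infonce(encode(X + delta), z_anchor)
    loss.backward()
    with torch.no_grad():
        delta += alpha * delta.grad.sign()    # ascend the contrastive loss
        delta.clamp_(-eps, eps)               # project back into the constraint
        delta.grad.zero_()

base_loss = infonce(encode(X), z_anchor).item()
adv_loss = infonce(encode(X + delta), z_anchor).item()
print(base_loss, adv_loss)
```

In a full training loop, the encoder parameters would then be updated to minimize the contrastive loss against this hardened view, alternating the inner maximization over `delta` with the outer minimization over the encoder.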
Adversarial Graph Contrastive Learning with Information Regularization
Feng, Shengyu, Jing, Baoyu, Zhu, Yada, Tong, Hanghang
Contrastive learning is an effective unsupervised method in graph representation learning. Recently, data-augmentation-based contrastive learning has been extended from images to graphs. However, most prior works are directly adapted from models designed for images. Unlike data augmentation on images, data augmentation on graphs is far less intuitive, and it is much harder to produce high-quality contrastive samples, which are the key to the performance of contrastive learning models. This leaves much room for improvement over existing graph contrastive learning frameworks. In this work, by introducing an adversarial graph view and an information regularizer, we propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL), to extract informative contrastive samples within a reasonable constraint. It consistently outperforms current graph contrastive learning methods on the node classification task over various real-world datasets and further improves the robustness of graph contrastive learning. The code is at https://github.com/Shengyu-Feng/ARIEL.
- North America > United States > New York > New York County > New York City (0.14)
- Europe > France > Auvergne-Rhône-Alpes > Lyon > Lyon (0.05)
- North America > United States > Illinois (0.05)
- (6 more...)
Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning
Walke, Homer, Yang, Jonathan, Yu, Albert, Kumar, Aviral, Orbik, Jedrzej, Singh, Avi, Levine, Sergey
Reinforcement learning (RL) algorithms hold the promise of enabling autonomous skill acquisition for robotic systems. However, in practice, real-world robotic RL typically requires time-consuming data collection and frequent human intervention to reset the environment. Moreover, robotic policies learned with RL often fail when deployed beyond the carefully controlled setting in which they were learned. In this work, we study how these challenges can all be tackled by effective utilization of diverse offline datasets collected from previously seen tasks. When faced with a new task, our system adapts previously learned skills to quickly learn to both perform the new task and return the environment to an initial state, effectively performing its own environment reset. Our empirical results demonstrate that incorporating prior data into robotic reinforcement learning enables autonomous learning, substantially improves sample-efficiency of learning, and enables better generalization. Project website: https://sites.google.com/view/ariel-berkeley/
Generative Adversarial Learning: Architectures and Applications (Intelligent Systems Reference Library, 217): Razavi-Far, Roozbeh, Ruiz-Garcia, Ariel, Palade, Vasile, Schmidhuber, Juergen: 9783030913892: Amazon.com: Books
This book provides a collection of recent research works addressing theoretical issues on improving the learning process and the generalization of GANs as well as state-of-the-art applications of GANs to various domains of real life. Generative adversarial networks (GANs), as the main method of adversarial learning, achieve great success and popularity by exploiting a minimax learning concept, in which two networks compete with each other during the learning process. Their key capability is to generate new data and replicate available data distributions, which are needed in many practical applications, particularly in computer vision and signal processing. The book is intended for academics, practitioners, and research students in artificial intelligence looking to stay up to date with the latest advancements on GANs' theoretical developments and their applications.
6 Steps Companies Can Take to Strengthen Their Cyber Strategy - InformationWeek
While these technical skills are certainly important, we're also now looking more holistically at candidates to test their abilities to think critically and creatively as well as uncover new solutions. As we face new and unprecedented challenges in cyber protection, it's critical that cyber leaders hire team members who think outside the box, have intellectual curiosity, employ bold thinking, and are natural problem solvers. Protecting an organization against advanced cyber threats requires innovative thinking and techniques; people, process, and technology capabilities are needed to properly defend ourselves against sophisticated attackers, such as nation states. Cyber threats will continue to evolve, as will the new techniques described above to enable cyber resiliency. Ariel Weintraub is currently the Head of Enterprise Cyber Security at MassMutual. Ariel first joined MassMutual in the fall of 2019 as the Head of Security Operations & Engineering, responsible for the Global Security Operations Center, Security Engineering, Security Intelligence, and Identity & Access Management. Prior to joining MassMutual, Ariel served as Senior Director of Data & Access Security within Cybersecurity Operations at TIAA, where she led a three-year business transformation program to position IAM as a digital business enabler. Prior to TIAA, Ariel held the position of Global Head of Vulnerability Management at BNY Mellon and was part of the Threat & Vulnerability Management practice at PricewaterhouseCoopers (PwC).
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.38)
This New Hotel Is the First in Africa to Introduce Robot Staff
Opened in November 2020, Hotel Sky in Sandton, Johannesburg, made its debut with three robots: Lexi, Micah, and Ariel. Lending a helpful hand to the human staff at the property, these robots are the hotel's answer to travelers' increased desire for socially distant interactions. Lexi, Micah, and Ariel can deliver room service, provide travel information, and carry up to 165 pounds of luggage each from the marble-floored lobby to the rooms.
FLI Podcast- Artificial Intelligence: American Attitudes and Trends with Baobao Zhang - Future of Life Institute
Artificial intelligence is already inextricably woven into everyday life, and its impact will only grow in the coming years. But while this development inspires much discussion among members of the scientific community, public opinion on artificial intelligence has remained relatively unknown. Artificial Intelligence: American Attitudes and Trends, a report published earlier in January by the Center for the Governance of AI, explores this question. Its authors relied on an in-depth survey to analyze American attitudes towards artificial intelligence, from privacy concerns to beliefs about U.S. technological superiority. Some of their findings (for example, most Americans don't trust Facebook) were unsurprising. But much of their data reflects trends within the American public that have previously gone unnoticed. This month Ariel was joined by Baobao Zhang, lead author of the report, to talk about these findings. Zhang is a PhD candidate in Yale University's political science department and research affiliate with the Center for the Governance of AI at the University of Oxford. Her work focuses on American politics, international relations, and experimental methods. In this episode, Zhang spoke about her take on some of the report's most interesting findings, the new questions it raised, and future research directions for her team.
- North America > United States (1.00)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.24)
- Asia > China (0.07)
- Personal > Interview (0.94)
- Research Report > New Finding (0.88)
Podcast: Top AI Breakthroughs and Challenges of 2017 with Richard Mallah and Chelsea Finn - Future of Life Institute
AlphaZero, progress in meta-learning, the role of AI in fake news, the difficulty of developing fair machine learning: 2017 was another year of big breakthroughs and big challenges for AI researchers! To discuss this more, we invited FLI's Richard Mallah and Chelsea Finn from UC Berkeley to join Ariel for this month's podcast. They talked about some of the technical progress they were most excited to see and what they're looking forward to in the coming year. You can listen to the podcast here, or read the transcript below. In 2017, we saw an increase in investments into artificial intelligence. More students are applying for AI programs, and more AI labs are cropping up around the world. With 2017 now solidly behind us, we wanted to take a look back at the year and go over some of the biggest AI breakthroughs. To do so, I have Richard Mallah and Chelsea Finn with me today.
- Information Technology (0.69)
- Leisure & Entertainment > Games > Computer Games (0.68)
Podcast: Six Experts Explain the Killer Robots Debate - Future of Life Institute
Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it's complicated. In this month's podcast, Ariel spoke with experts from a variety of perspectives on the current status of LAWS, where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre (3:40), artificial intelligence professor Toby Walsh (40:51), Article 36 founder Richard Moyes (53:30), Campaign to Stop Killer Robots founders Mary Wareham and Bonnie Docherty (1:03:38), and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro (1:32:39). You can listen to the podcast above, and read the full transcript below. You can check out previous podcasts on SoundCloud, iTunes, GooglePlay, and Stitcher. If you work with ...
- Europe > United Kingdom (0.92)
- Europe > Russia (0.14)
- Asia > Russia (0.14)
- (18 more...)
- Leisure & Entertainment (1.00)
- Law Enforcement & Public Safety (1.00)
- Law (1.00)
- (7 more...)
Podcast: Law and Ethics of Artificial Intelligence - Future of Life Institute
The rise of artificial intelligence presents not only technical challenges, but important legal and ethical challenges for society, especially regarding machines like autonomous weapons and self-driving cars. To discuss these issues, I interviewed Matt Scherer and Ryan Jenkins. Matt is an attorney and legal scholar whose scholarship focuses on the intersection between law and artificial intelligence. Ryan is an assistant professor of philosophy and a senior fellow at the Ethics and Emerging Sciences group at California Polytechnic State, where he studies the ethics of technology. In this podcast, we discuss accountability and transparency with autonomous systems, government regulation vs. self-regulation, fake news, and the future of autonomous systems. The following interview has been heavily edited for brevity, but you can listen to it in its entirety above or read the full transcript here.
- Government > Military (0.95)
- Law > Statutes (0.89)