
Interview with Alessandra Rossi: an insight into the RoboCup virtual humanoid league

AIHub

Alessandra Rossi is a member of both the technical and organising committees for the RoboCup Humanoid League. We spoke to her about the Humanoid League Virtual Season, which concluded with the grand final of the virtual soccer competition and a three-day workshop. The Humanoid League Virtual Season (HLVS) has been driven by two core motivations: firstly, to give teams support for continuous testing while making progress and changes to their software, and secondly, to keep the teams connected throughout the year, thus strengthening the community and collaboration between teams. We wanted to let teams use the longer periods between games, and the continuous games throughout the year, to test novel approaches with less risk and to aid their success in the overall tournament. In addition, this way teams can thoroughly analyse the collected data between games and make informed decisions on how to improve and implement their approaches for the following match.



Imagen: Text-to-Image Diffusion Models

#artificialintelligence

There are several ethical challenges facing text-to-image research broadly. We explore these challenges in more detail in our paper and offer a summarized version here. First, downstream applications of text-to-image models are varied and may impact society in complex ways. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access.


The Dark Secret Behind Those Cute AI-generated Animal Images - AI Summary

#artificialintelligence

It's no secret that large models, such as DALL-E 2 and Imagen, trained on vast numbers of documents and images taken from the web, absorb the worst aspects of that data as well as the best. Scroll down the Imagen website, past the dragon fruit wearing a karate belt and the small cactus wearing a hat and sunglasses, to the section on societal impact and you get this: "While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized [the] LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models." It's the same kind of acknowledgement that OpenAI made when it revealed GPT-3 in 2020: "internet-trained models have internet-scale biases." And as Mike Cook, who researches AI creativity at Queen Mary University of London, has pointed out, it's in the ethics statements that accompanied Google's large language model PaLM and OpenAI's DALL-E 2. In short, these firms know that their models are capable of producing awful content, and they have no idea how to fix that.



Could machine learning and operations research lift each other up?

#artificialintelligence

Is deep learning really going to be able to do everything? Opinions on deep learning's true potential vary. Geoffrey Hinton, who was awarded for pioneering deep learning, thinks so, though he is not entirely unbiased; others, including Hinton's deep learning collaborator Yoshua Bengio, are looking to infuse deep learning with elements of a domain still under the radar: operations research, an analytical method of problem-solving and decision-making used in the management of organizations. Machine learning and its deep learning variety are practically household names now. There is a lot of hype around deep learning, as well as a growing number of applications using it.


BMW, Pasqal Apply Quantum Computing to Car Design, Manufacturing - EE Times Europe

#artificialintelligence

Car manufacturer BMW and quantum computing technology developer Pasqal have entered a new phase of collaboration to analyze the applicability of quantum computational algorithms to the modeling of metal-forming applications. The automotive industry is one of the most demanding industrial environments, and quantum computing could solve some of its key design and manufacturing issues. According to a report by McKinsey, automotive will be one of the primary value pools for quantum computing, with a high impact noticeable by about 2025. The consulting firm also expects a significant economic impact of related technologies for the automotive industry, estimated at $2 billion to $3 billion by 2030. Volkswagen Group led the way with the launch of a dedicated quantum computing research team back in 2016.


My First Experience Deploying an ML Model to Production

#artificialintelligence

I have been working on Machine Learning since my third year in college. But during that time, the process always involved taking a dataset from Kaggle or some other open-source website. Also, these models/algorithms lived in a Jupyter Notebook or Python script and were never deployed to a production website; it was always localhost. While interning at HackerRank, and then after starting as a Software Engineer here on the HackerRank Labs team working on a new product, I got the chance to deploy three different ML models to production, working on them end-to-end. In this blog, I will share my learnings and experience from one of the deployed models.


Graph machine learning with missing node features

#artificialintelligence

Graph Neural Network (GNN) models typically assume a full feature vector for each node. A graph convolutional network (GCN) takes two inputs, the (normalised) adjacency matrix A encoding the graph structure and the feature matrix X containing the node feature vectors as rows, and outputs the node embeddings Z. Each layer of the GCN performs a node-wise feature transformation (parametrised by the learnable matrices W₁ and W₂ in a two-layer model) and then propagates the transformed feature vectors to the neighboring nodes. Importantly, GCN assumes that all the entries in X are observed. In real-world scenarios, we often see situations where some node features are missing (Fig 1).
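As a rough illustration of the layer-wise computation described above, here is a minimal NumPy sketch of a two-layer GCN forward pass, Z = Â·ReLU(Â·X·W₁)·W₂. The toy graph, feature dimensions, and function names are invented for illustration (they are not from the article), and the sketch assumes, as the article notes GCN does, a fully observed feature matrix X.

```python
import numpy as np

def normalise_adjacency(adj):
    """Symmetrically normalise an adjacency matrix with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(a_hat, x, w1, w2):
    """Two-layer GCN: Z = A_hat @ ReLU(A_hat @ X @ W1) @ W2.
    Each layer transforms node features (W1, W2) and propagates the
    transformed vectors to neighbouring nodes via A_hat."""
    h = np.maximum(a_hat @ x @ w1, 0.0)  # layer 1: transform + propagate + ReLU
    return a_hat @ h @ w2                # layer 2: transform + propagate

# Toy example (hypothetical data): 4 nodes on a path graph,
# 3 input features per node, 2-dimensional output embeddings.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 3))   # assumes every entry of X is observed
w1 = rng.normal(size=(3, 8))
w2 = rng.normal(size=(8, 2))

z = gcn_forward(normalise_adjacency(adj), x, w1, w2)
print(z.shape)  # (4, 2) -- one embedding per node
```

If some entries of X were missing, this forward pass could not be evaluated as written, which is exactly the gap the article goes on to address.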


The Fierce Debate on General Artificial Intelligence

#artificialintelligence

In the world of Artificial Intelligence (AI), there is a raging debate on whether or not we can achieve General Artificial Intelligence (GAI). Some believe that it will happen in the next few years; others that it will never come about. Those opposed have some fairly solid arguments as to why not. Essentially, GAI is the point at which AI becomes "sentient": it can truly think on its own, formulate ideas, views and opinions, and be an equal to mankind in intellectual terms. Right now, AI, an umbrella term for many different technologies, is known as Narrow AI (NAI).