
Machine Learning


Ethics in AI -- Current issues, existing precautions, and probable solutions

#artificialintelligence

Introduction: Most artificial intelligence (AI) systems, especially those based on machine learning and deep learning, are developed as black boxes. These systems now make decisions that affect our daily lives, so they should be explainable to end users rather than taken on faith. The implications of such systems for effective public use are rarely explored (e.g., in agriculture, air combat, military training, education, finance, health care, human resources, customer service, autonomous vehicles, social media, and several other areas [1]-[9]). Beyond these, the future may also rely on AI-based systems that do our laundry, mow our lawns, and fight our wars [9]. There is therefore considerable room to improve the transparency of these systems, along with their fairness and accountability. Some works have already stated the need for guidelines and governance of AI-based systems, but more attention is required in each area of application.


Controlling complex systems with artificial intelligence

#artificialintelligence

Researchers at ETH Zurich and the Frankfurt School have developed an artificial neural network that can solve challenging control problems. The self-learning system can be used to optimize supply chains and production processes as well as smart grids or traffic control systems. Power cuts, financial network failures and supply chain disruptions are just some of the many problems typically encountered in complex systems that are very difficult or even impossible to control using existing methods. Control systems based on artificial intelligence (AI) can help to optimize complex processes--and can also be used to develop new business models. Together with Professor Lucas Böttcher from the Frankfurt School of Finance and Management, ETH researchers Nino Antulov-Fantulin and Thomas Asikis--both from the Chair of Computational Social Science--have developed a versatile AI-based control system called AI Pontryagin, which is designed to steer complex systems and networks towards desired target states.
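The release does not include code, but the core idea can be sketched: parameterize a control signal with a small neural network, simulate the controlled dynamics, and backpropagate the distance to the target state through the simulation. The dynamics, network shape and hyperparameters below are illustrative assumptions, not the authors' actual setup.

```python
# A minimal sketch (not the AI Pontryagin code) of neural-network
# control: learn u(t) so that the simulated state reaches a target.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Simple controlled dynamics: dx/dt = A x + u(t)  (invented example)
A = torch.tensor([[0.0, 1.0], [-1.0, -0.1]])
x0 = torch.tensor([1.0, 0.0])
x_target = torch.tensor([0.0, 0.0])

# Neural network mapping time -> control input
controller = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(controller.parameters(), lr=1e-2)

dt, steps = 0.05, 100
for epoch in range(300):
    x = x0
    for k in range(steps):
        t = torch.tensor([[k * dt]])
        u = controller(t).squeeze(0)
        x = x + dt * (A @ x + u)           # explicit Euler integration
    loss = torch.sum((x - x_target) ** 2)  # distance to target state
    opt.zero_grad()
    loss.backward()                        # gradient through the rollout
    opt.step()

print("final squared distance to target:", loss.item())
```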


Edge processing research takes discovery closer to use in artificial intelligence networks

#artificialintelligence

The multimodal transistor (MMT), first reported by Surrey researchers in 2020, overcomes long-standing challenges associated with transistors and can perform the same operations as more complex circuits. This latest research, published in the peer-reviewed journal Scientific Reports, uses mathematical modelling to prove the concept of using MMTs in artificial intelligence systems, a vital step towards manufacturing. Using measured and simulated transistor data, the researchers show that well-designed multimodal transistors could operate robustly as rectified linear unit (ReLU)-type activations in artificial neural networks, achieving practically identical classification accuracy to pure ReLU implementations. They used both measured and simulated MMT data to train an artificial neural network to identify handwritten numbers and compared the results with the software's built-in ReLU. The results confirmed the potential of MMT devices for thin-film decision and classification circuits.
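The experiment's shape can be approximated in a few lines: tabulate a ReLU-like transfer curve (synthetic stand-in data here, not the measured MMT curves), wrap it as a differentiable activation via linear interpolation, and train a small digit classifier with it. Everything below is a hedged sketch, not the authors' code.

```python
# Sketch: use tabulated "device" data as a neural-network activation.
import torch
import torch.nn as nn
from sklearn.datasets import load_digits

torch.manual_seed(0)

# Synthetic stand-in for a measured MMT transfer curve: near zero
# below threshold, roughly linear above, with slight measurement noise.
vp = torch.linspace(-6.0, 6.0, 121)
ip = torch.relu(vp) * 0.97 + 0.01 * torch.randn(121)

class DeviceActivation(nn.Module):
    """Activation defined by linearly interpolating tabulated data."""
    def forward(self, x):
        idx = torch.clamp(torch.searchsorted(vp, x.detach()), 1, len(vp) - 1)
        x0, x1 = vp[idx - 1], vp[idx]
        y0, y1 = ip[idx - 1], ip[idx]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

X, y = load_digits(return_X_y=True)            # 8x8 handwritten digits
X = torch.tensor(X, dtype=torch.float32) / 16.0
y = torch.tensor(y)

model = nn.Sequential(nn.Linear(64, 32), DeviceActivation(), nn.Linear(32, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = (model(X).argmax(1) == y).float().mean()
print(f"train accuracy with device-data activation: {acc:.3f}")
```

Swapping `DeviceActivation()` for `nn.ReLU()` gives the comparison the paper describes: if the device curve is well designed, the two accuracies should be practically identical.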


This is how AI bias really happens--and why it's so hard to fix

#artificialintelligence

Over the past few months, we've documented how the vast majority of AI's applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. We've also covered how these technologies affect people's lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system. But it's not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place. We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process.
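One of those mechanisms, bias introduced at data collection, is easy to demonstrate. In the toy sketch below (all numbers invented), one group is heavily undersampled when training data is gathered; a perfectly standard classifier then shows a clear gap in error rates between groups even though the learning algorithm itself is unchanged.

```python
# Toy demonstration: sampling bias at collection time, not the
# learning algorithm, produces unequal error rates across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group has its own feature distribution and decision rule.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# Group A is well represented in the collected data; group B is not.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)   # collection missed most of group B

clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on balanced held-out data from each group.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    Xt, yt = make_group(2000, shift)
    print(f"group {name} error rate: {1 - clf.score(Xt, yt):.3f}")
```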


Iconary: A Pictionary-like game to improve the communication skills of AI agents

#artificialintelligence

While artificial intelligence (AI) agents have become increasingly skilled at communicating with humans, they still struggle with several aspects of language, including complex semantics. The term semantics refers to the area of linguistics concerned with the meaning of specific words and the logical connections between different concepts. A few years ago, researchers at the Allen Institute for AI developed a game called Iconary, which is designed to improve the ability of AI techniques to communicate and make connections between different objects. In a recent paper pre-published on arXiv and presented at last year's EMNLP conference, the researchers introduced a more advanced version of the game and trained machine learning algorithms to play against each other or with humans. "Our paper is based on a project at AI2 aimed at training models to play Iconary, a Pictionary-based game we created, where a player has to guess what another player is drawing," Christopher Clark, one of the researchers who carried out the study, told TechXplore.


Neural networks learn faster using ETH software

#artificialintelligence

Two researchers from the Scalable Parallel Computing Lab at the Swiss Federal Institute of Technology in Zurich (ETH) have developed a software solution that significantly speeds up the training of deep learning applications by accelerating data loading. This matters because, as ETH Zurich writes in a press release, fetching training data is among the most resource-demanding and costly steps of all, accounting for up to 85 percent of training time. A single training run of a sophisticated speech recognition model, for example, can cost around 10 million US dollars. The new software, named NoPFS, was developed by Roman Böhringer and Nikoli Dryden.
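The release does not spell out the mechanism, but prefetching systems of this kind exploit the fact that an epoch's shuffled access order is fixed once the random seed is chosen, so upcoming samples can be read from slow storage in the background. The sketch below is a generic illustration of that idea, not NoPFS itself; the loading function and sizes are invented.

```python
# Generic sketch of clairvoyant prefetching: the access order is known
# ahead of time, so I/O can overlap with computation.
import queue
import random
import threading
import time

def load_sample(index):
    time.sleep(0.01)                 # stand-in for a slow disk/network read
    return f"sample-{index}"

def prefetcher(order, buf):
    for idx in order:                # order is known before training starts
        buf.put(load_sample(idx))
    buf.put(None)                    # end-of-epoch sentinel

random.seed(42)
order = random.sample(range(256), k=256)   # the epoch's shuffled order

buf = queue.Queue(maxsize=32)              # bounded prefetch buffer
threading.Thread(target=prefetcher, args=(order, buf), daemon=True).start()

while (sample := buf.get()) is not None:
    pass                                   # training step would go here
print("epoch complete; I/O overlapped with compute")
```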


Cluster Analysis and Unsupervised Machine Learning in Python

#artificialintelligence

Created by Lazy Programmer Inc. Cluster analysis is a staple of unsupervised machine learning and data science. It is very useful for data mining and big data because it automatically finds patterns in the data, without the need for labels, unlike supervised machine learning. In a real-world environment, you can imagine that a robot or an artificial intelligence won't always have access to the optimal answer, or maybe there isn't an optimal correct answer. You'd want that robot to be able to explore the world on its own, and learn things just by looking for patterns. Do you ever wonder how we get the data that we use in our supervised machine learning algorithms?
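As a taste of what such a course covers, here is a minimal k-means example in Python: the algorithm recovers three groups from unlabeled points without ever seeing a label. The data is synthetic.

```python
# Minimal cluster analysis: k-means on unlabeled synthetic blobs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Unlabeled data drawn from three well-separated blobs
X = np.vstack([rng.normal(c, 0.5, size=(100, 2)) for c in (0, 4, 8)])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))
print("recovered centers:\n", kmeans.cluster_centers_)
```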


Google rolls out Vertex AI Forecast for retailers

ZDNet

Google on Tuesday introduced Vertex AI Forecast, a tool for retailers to help generate more accurate demand forecasts. The tool is part of the managed Vertex AI platform that Google rolled out last year to help enterprises quickly deploy machine learning. Demand forecasting can have a significant impact on a retailer's business; factors like supply chain fluctuations and growing global markets can make it challenging to keep inventory in stock. Vertex AI Forecast can ingest datasets of up to 100 million rows from BigQuery or CSV files, covering years of historical data for thousands of product lines. The tool automatically processes the data and evaluates hundreds of different model architectures to create one model that should be relatively easy to manage.
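For orientation, a forecasting job on Vertex AI looks roughly like the sketch below, based on Google's published SDK samples. The project, bucket and column names are placeholders, and parameters may differ between SDK versions, so treat this as an outline rather than a definitive recipe.

```python
# Rough outline of an AutoML forecasting job on Vertex AI.
# All identifiers (project, bucket, columns) are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Historical demand data; BigQuery sources work similarly via bq_source.
dataset = aiplatform.TimeSeriesDataset.create(
    display_name="retail-demand",
    gcs_source=["gs://my-bucket/sales_history.csv"],
)

job = aiplatform.AutoMLForecastingTrainingJob(
    display_name="demand-forecast",
    optimization_objective="minimize-rmse",
)

model = job.run(
    dataset=dataset,
    target_column="units_sold",
    time_column="date",
    time_series_identifier_column="sku",
    unavailable_at_forecast_columns=["units_sold"],
    available_at_forecast_columns=["date"],
    forecast_horizon=30,               # predict 30 days ahead
    data_granularity_unit="day",
    data_granularity_count=1,
)
```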


Machine Learning and 5G Are Crucial to Scale the Metaverse

#artificialintelligence

Machine learning and 5G can attract more people to the metaverse, blurring the lines between the virtual and real worlds. The concept of the metaverse is closely related to advanced technologies such as artificial intelligence (AI), machine learning (ML), augmented reality (AR), virtual reality (VR), blockchain, 5G and the internet of things (IoT). Improved technology will allow avatars to use body language effectively and better convey human emotions, producing a feeling of real communication in a virtual space. AR and VR won't be the only critical components of the metaverse; 5G and machine learning are also crucial. The metaverse is a future iteration of the internet, made up of 3D virtual spaces linked into a perceived virtual universe.


Moon's Hidden Depths Uncovered with New Algorithm

#artificialintelligence

Certain areas near the moon's poles linger perpetually in shadow, never receiving direct sunlight. Recent studies suggest these so-called permanently shadowed regions (PSRs) contain rich ice reservoirs that could reveal details about the early solar system; they could also help future visitors make fuel and other resources. But these areas are hard to photograph from satellites orbiting the moon and thus are a challenge to study. The few photons PSRs do reflect are often overwhelmed by static-like camera noise and quantum effects. Now researchers have produced a deep-learning algorithm to cut through the interference and see these dark zones.
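The article does not detail the model, but the general pattern of learned denoising is straightforward to sketch: train a small convolutional network to map noise-corrupted images back to clean ones. The synthetic data and architecture below are illustrative assumptions, not the researchers' actual pipeline.

```python
# Generic learned-denoising sketch on synthetic low-light images.
import torch
import torch.nn as nn

torch.manual_seed(0)

clean = torch.rand(128, 1, 32, 32) * 0.1        # faint, low-photon scenes
noisy = clean + 0.05 * torch.randn_like(clean)  # camera-noise stand-in

# Small convolutional denoiser: noisy image in, clean estimate out.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(noisy), clean)
    loss.backward()
    opt.step()

print("denoising MSE after training:", loss.item())
```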