self-learning algorithm
A Safety-Oriented Self-Learning Algorithm for Autonomous Driving: Evolution Starting from a Basic Model
Yang, Shuo, Wang, Caojun, Ma, Zhenyu, Huang, Yanjun, Chen, Hong
Autonomous driving vehicles with self-learning capabilities are expected to evolve in complex environments to improve their ability to cope with different scenarios. However, most self-learning algorithms suffer from low learning efficiency and a lack of safety guarantees, which limits their applications. This paper proposes a safety-oriented self-learning algorithm for autonomous driving, which focuses on how to achieve evolution from a basic model. Specifically, a basic model based on the transformer encoder is designed to extract and output policy features from a small number of demonstration trajectories. To improve the learning efficiency, a policy-mixing approach is developed: the basic model provides initial values to improve exploration efficiency, and the self-learning algorithm enhances the adaptability and generalization of the model, enabling continuous improvement without external intervention. Finally, an actor approximator based on receding-horizon optimization is designed, taking the constraints of the environmental input into account, to ensure safety. The proposed method is verified in a challenging mixed-traffic environment with pedestrians and vehicles. Simulation and real-vehicle test results show that the proposed method can safely and efficiently learn appropriate autonomous driving behaviors. Compared with reinforcement learning and behavior cloning methods, it achieves a comprehensive improvement in learning efficiency and performance while ensuring safety.
- Asia > China > Shanghai > Shanghai (0.05)
- Europe > Germany > Baden-Württemberg > Stuttgart Region > Stuttgart (0.04)
- Asia > China > Jilin Province > Changchun (0.04)
- Research Report (0.70)
- Personal (0.68)
- Transportation > Ground > Road (1.00)
- Information Technology > Robotics & Automation (1.00)
- Automobiles & Trucks (1.00)
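The abstract above mixes actions from a demonstration-trained basic model with those of a self-learning policy. A minimal sketch of that policy-mixing idea in plain Python, with the transformer-based basic model replaced by a hypothetical linear heuristic and the receding-horizon safety layer reduced to simple action clipping (the names, the linear decay schedule, and the clipping bounds are all assumptions for illustration, not the paper's method):

```python
def base_policy(state):
    # Hypothetical stand-in for the paper's transformer-based basic model:
    # a fixed heuristic mapping a scalar state to a control action.
    return 0.5 * state

def mixed_action(state, learned_action, step, total_steps, a_min=-1.0, a_max=1.0):
    """Blend the basic model's action with the self-learned one.

    Early in training the basic model dominates, giving efficient
    exploration; its weight decays linearly as the learned policy improves.
    """
    w = max(0.0, 1.0 - step / total_steps)  # weight on the basic model
    a = w * base_policy(state) + (1.0 - w) * learned_action
    # Crude stand-in for the receding-horizon safety layer: project the
    # blended action back into the feasible set.
    return min(a_max, max(a_min, a))
```

At `step=0` the action is entirely the basic model's; by `step=total_steps` it is entirely the learned policy's, and it is always clipped to the feasible range.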
The human being, the weak point of Artificial Intelligence
Magic formulas for some, or mathematical formulas full of promise for others, algorithms are far from infallible. Yet today most of them drive decisions that affect many companies and even human lives. Denis Molin, a consultant at Teradata, a technology company specializing in database analysis and Big Data software, puts into perspective the biases that humans introduce into AI. Cathy O'Neil was one of the first to warn about these dangers in her 2016 book Weapons of Math Destruction (published in French as Algorithmes : la bombe à retardement). Buried inside algorithms, intentional or unintentional biases can lead to bad interpretations of data and ultimately to bad decisions. These algorithms matter far more than they appear to, because artificial intelligence is built on self-learning algorithms that evolve over time depending on the data they are fed.
Allowing Computers to Learn from Data: Gold of the 21st Century
As a data scientist, I think that machine learning, the application and science of algorithms that make sense of data, is the most exciting of all the fields within computer science. As a society, we are living in a time when data is abundant, and we can turn this data into knowledge by using self-learning algorithms from the field of machine learning. The plethora of powerful open-source algorithms developed in recent years makes this the best time ever to learn about machine learning and how to use these algorithms to spot patterns in data and anticipate future events; that was hardly possible before so many useful open-source libraries were created. Even in our age of modern technology, one resource is plentifully available to anyone: a large amount of structured and unstructured data. Machine learning, a subfield of artificial intelligence, evolved in the second half of the 20th century.
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Games > Go (0.40)
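As a small illustration of "turning data into knowledge" with a learning algorithm, here is a self-contained nearest-centroid classifier in plain Python; the data, the class names, and the 1-D features are invented for the example:

```python
def fit_centroids(X, y):
    """Learn one centroid (mean feature value) per class from labeled data."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))
```

Trained on a handful of labeled points, the model generalizes to unseen values by distance to the learned centroids.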
Podcast: How pricing algorithms learn to collude
Algorithms now determine how much things cost. It's called dynamic pricing, and it adjusts prices according to current market conditions in order to increase profits. The rise of e-commerce has made pricing algorithms an everyday occurrence--whether you're shopping on Amazon, booking a flight or hotel, or ordering an Uber. In this continuation of our series on automation and your wallet, we explore what happens when a machine determines the price you pay. This episode was reported by Anthony Green and produced by Jennifer Strong and Emma Cillekens. We're edited by Mat Honan, and our mix engineer is Garret Lang, with sound design and music by Jacob Gorski. Jennifer: Alright, so I'm in an airport just outside New York City, looking at the departures board and seeing all these flights going different places… It makes me think about how we decide how much something should cost… like a ticket for one of these flights. Because where the plane is going is just part of the puzzle. The price of airfare is highly personalized.
- North America > United States > New York (0.24)
- North America > Canada > Quebec > Montreal (0.14)
- North America > United States > Washington > King County > Seattle (0.04)
- (4 more...)
- Transportation > Passenger (1.00)
- Retail (1.00)
- Law (1.00)
- (3 more...)
- Information Technology > Communications > Mobile (0.40)
- Information Technology > Artificial Intelligence > Machine Learning (0.32)
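The load-based price adjustment described in the episode can be sketched in a few lines; the formula and the sensitivity parameter `k` below are invented for illustration, not any airline's or retailer's actual rule (real systems condition on many more signals, such as time to departure, competitor fares, and user segment):

```python
def dynamic_price(base_price, demand, capacity, k=0.5):
    """Toy dynamic pricing: scale the base price with the load factor,
    so scarcer remaining capacity means a higher price."""
    load = min(1.0, demand / capacity)      # fraction of capacity demanded
    return round(base_price * (1.0 + k * load), 2)
```

With `k=0.5`, a flight at half capacity sells for 25% over the base fare, and a sold-out flight for 50% over.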
Thinking Darwinian
Some people have transformed others' views and understanding of life with the new ideas they presented. Darwin is undoubtedly one of them. What sets Darwin apart from other biologists and researchers is that he explained the evolutionary process in an algorithmic way and grounded it in the laws of nature. Darwin's dangerous idea began in biology but has spread everywhere, from engineering to sociology. There is a greatness in this idea: it can account for infinite beauty and complexity.
Improving Zero-shot Multilingual Neural Machine Translation for Low-Resource Languages
Although multilingual Neural Machine Translation (NMT), which extends Google's multilingual NMT, has the ability to perform zero-shot translation, and the iterative self-learning algorithm can improve the quality of zero-shot translation, it faces two problems: the multilingual NMT model is prone to generating the wrong target language when performing zero-shot translation, and the self-learning algorithm, which uses beam search to generate synthetic parallel data, destroys the diversity of the generated source language and amplifies the impact of the same noise during the iterative learning process. In this paper, we propose the tagged-multilingual NMT model and improve the self-learning algorithm to handle these two problems. Firstly, we extend Google's multilingual NMT model and add target tokens to the target languages, which associates the start tag with the target language to ensure that the source language is translated into the required target language. Secondly, we improve the self-learning algorithm by replacing beam search with random sampling, which increases the diversity of the generated data and makes it properly cover the true data distribution. Experimental results on IWSLT show that the adjusted tagged-multilingual NMT obtains gains of 9.41 and 7.85 BLEU over the multilingual NMT on the 2010 and 2017 Romanian-Italian test sets, respectively. Similarly, it obtains gains of 9.08 and 7.99 BLEU on Italian-Romanian zero-shot translation. Furthermore, the improved self-learning algorithm outperforms the conventional self-learning algorithm on zero-shot translation.
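Two of the abstract's ingredients are easy to sketch: prepending a target-language token to the source sentence, and generating synthetic data by random sampling instead of keeping only the beam-search top hypothesis. The tag format and the toy candidate distribution below are assumptions for illustration, not the paper's exact implementation:

```python
import random

def tag_source(sentence, target_lang):
    # Prepend a target-language token, as in Google's multilingual NMT;
    # the paper additionally tags the target side with a matching start token.
    return f"<2{target_lang}> {sentence}"

def sample_synthetic(candidates, n, seed=0):
    """Draw n synthetic outputs by random sampling over a model's candidate
    distribution (word, probability) pairs, instead of always keeping the
    single best beam hypothesis; this increases the diversity of the
    generated parallel data."""
    rng = random.Random(seed)
    words, weights = zip(*candidates)
    return [rng.choices(words, weights=weights)[0] for _ in range(n)]
```

Beam search would return the same top candidate every time; sampling spreads the synthetic data across the distribution, which is the diversity effect the abstract describes.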
Options Forecast Based on a Self-learning Algorithm: Returns up to 22.27% in 7 Days
This forecast is part of the Options Package, one of I Know First's algorithmic trading tools. Package Name: Options Recommended Positions: Long Forecast Length: 7 Days (11/27/2020 – 12/5/2020) I Know First Average: 10.75% I Know First's State of the Art Algorithm accurately forecasted 10 out of 10 trades in this Options Package for the 7-day period. KSS was our best stock pick this week, with a return of 22.27%. AA and CCL followed, with returns of 16.5% and 13.56% over the 7-day period. The package had an overall average return of 10.75%, providing investors with an 8.84% premium over the S&P 500's return of 1.91% during the period.
Artificial Intelligence Stocks Based on a Self-learning Algorithm: Returns up to 333.09% in 1 Year
This Best Artificial Intelligence Stocks forecast is designed for investors and analysts who need predictions for the best companies which are in the frontier of AI application in their products and services. Package Name: Best AI Stocks Recommended Positions: Long Forecast Length: 1 Year (6/23/2019 – 6/24/2020) I Know First Average: 75.08% TSLA was the top performing prediction with a return of 333.09%. The package had an overall average return of 75.08%, providing investors with a premium of 71.70% over the S&P 500's return of 3.38% during the same period. Tesla, Inc., formerly Tesla Motors, Inc., incorporated on July 1, 2003, designs, develops, manufactures and sells fully electric vehicles, and energy storage systems, as well as installs, operates and maintains solar and energy storage products. The Company operates through two segments: automotive, and energy generation and storage.
- Transportation > Ground > Road (1.00)
- Transportation > Electric Vehicle (1.00)
- Energy (1.00)
- Automobiles & Trucks (1.00)
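The "premium" figures quoted in the two forecasts above are simply the package's average return minus the benchmark's return over the same period, in percentage points:

```python
def premium_over_benchmark(avg_return, benchmark_return):
    """Premium over the benchmark (e.g. the S&P 500), in percentage points."""
    return round(avg_return - benchmark_return, 2)
```

For the Options package this is 10.75 - 1.91 = 8.84, and for the AI-stocks package 75.08 - 3.38 = 71.70, matching the quoted premiums.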
Leiden team wins challenge for faster MRI scan through artificial intelligence
Researchers from Leiden, in cooperation with Philips, have won a challenge in which international research groups work on accelerating MRI scans with the help of artificial intelligence (AI). They developed an algorithm that can reconstruct an MRI image of a knee from eight times less data than normal, with quality almost as good as an image made from the usual amount of data. In the fastMRI challenge, organised by the Facebook AI research lab and New York University, artificial intelligence specialists were challenged to apply their knowledge to making MRI scans faster and more efficient. The 34 participating teams were supplied with a raw data set of a few hundred MRI scans of knees. They also received a number of incomplete data sets.
- Europe > Netherlands > South Holland > Leiden (0.63)
- North America > United States > New York (0.27)
Semi-supervised Wrapper Feature Selection with Imperfect Labels
Feofanov, Vasilii, Amini, Massih-Reza, Devijver, Emilie
In this paper, we propose a new wrapper approach for semi-supervised feature selection. A common strategy in semi-supervised learning is to augment the training set with pseudo-labeled unlabeled examples. However, the pseudo-labeling procedure is prone to error and carries a high risk of disrupting the learning algorithm with additional noisy labeled training data. To overcome this, we propose to explicitly model the mislabeling error during the learning phase, with the overall aim of selecting the most relevant feature characteristics. We derive a $\mathcal{C}$-bound for Bayes classifiers trained over partially labeled training sets that takes the mislabeling errors into account. The risk bound is then used as an objective function that is minimized over the space of possible feature subsets using a genetic algorithm. In order to produce both sparse and accurate solutions, we propose a modification of the genetic algorithm with a crossover based on feature weights and recursive elimination of irrelevant features. Empirical results on different data sets show the effectiveness of our framework compared to several state-of-the-art semi-supervised feature selection approaches.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > Saint Martin (0.04)
- North America > Canada (0.04)
- Europe > France > Auvergne-Rhône-Alpes > Isère > Grenoble (0.04)
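The genetic search over feature subsets in the last abstract can be sketched with a toy GA over binary masks. This is a generic truncation-selection GA with one-point crossover and bit-flip mutation, minimizing an arbitrary risk score; it is not the paper's weighted crossover with recursive feature elimination, and all parameters are invented:

```python
import random

def genetic_feature_select(score, n_features, pop_size=20, generations=30, seed=0):
    """Minimize score(mask) over binary feature masks with a toy GA.

    Each individual is a 0/1 mask over features; lower score (e.g. a risk
    bound) is better. Keeping the best half each generation makes the
    search elitist, so the best mask found is never lost.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score)                 # best (lowest-risk) masks first
        parents = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]       # one-point crossover
            child[rng.randrange(n_features)] ^= 1  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=score)
```

With a score that simply rewards selecting more features, the search quickly converges toward the all-ones mask, illustrating the loop; in the paper the score would be the derived $\mathcal{C}$-bound on a partially labeled training set.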