xSeedScore Accurate Performance Prediction - Computomics Molecular Data Analysis

#artificialintelligence

Reliable performance predictions save our breeders time that they can convert into a head start on time-to-market. Integrating predictions across locations and different climate conditions faster lets us prepare for changing climate conditions today. Benefit from our machine-learning-based regularized kernel methods to predict phenotypes from genome-wide markers. We store the trained predictors so that next season's data can be analyzed reproducibly, making results directly comparable.
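A minimal sketch of one such regularized kernel method, kernel ridge regression, on synthetic marker data (the marker matrix, phenotype vector, and linear kernel here are illustrative stand-ins, not Computomics' actual pipeline):

```python
import numpy as np

# Synthetic stand-ins: SNP markers coded 0/1/2 and a simulated phenotype.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 50)).astype(float)   # 200 lines, 50 markers
w_true = rng.normal(size=50)
y = X @ w_true + rng.normal(scale=0.1, size=200)       # simulated phenotype

def linear_kernel(A, B):
    return A @ B.T

# Train: solve (K + lambda*I) alpha = y for the dual coefficients.
lam = 1.0
K = linear_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)

# "Storing the trained predictor" amounts to keeping (alpha, X, lam);
# next season's marker data can then be scored the same way.
X_new = rng.integers(0, 3, size=(10, 50)).astype(float)
y_pred = linear_kernel(X_new, X) @ alpha
```

Swapping in a nonlinear kernel (e.g. an RBF) changes only `linear_kernel`; the training and prediction steps stay identical, which is what makes the stored predictor reusable across seasons.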


AI development requires good datasets, and OMB wants ideas on how to help - FedScoop

#artificialintelligence

The White House Office of Management and Budget is looking for feedback on which government datasets could be released, opened up, or generally improved in order to help support the development of artificial intelligence. The office published its request for information in the Federal Register on Wednesday. The RFI is part of the administration's American AI Initiative, an executive order that President Trump signed in February. The directive aims to promote American leadership in the development of this new technology. As part of this effort, OMB wants feedback on what kind of data people want and how that data would help in AI research and development.


AI Standards

#artificialintelligence

On Feb. 11, 2019, President Donald J. Trump issued the Executive Order on Maintaining American Leadership in Artificial Intelligence (EO 13859). Among its objectives, the EO aims to "Ensure that technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities." The EO directs the National Institute of Standards and Technology (NIST) to create "a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies." NIST is committed to fulfilling that responsibility in a timely way, engaging the public and private sectors to produce the plan within the EO's 180-day timetable.


Building a Product Catalog: eBay's University Machine Learning Competition

#artificialintelligence

At eBay, we use state-of-the-art machine learning (ML), statistical modeling and inference, knowledge graphs, and other advanced technologies to solve business problems associated with massive amounts of data, much of which enters our system unstructured, incomplete, and sometimes incorrect. The use cases include query expansion and ranking, image recognition, recommendations, price guidance, fraud detection, machine translation, and more. Though most of these use cases are common among other technology companies, there is a very distinctive challenge that pertains only to eBay: making sense of more than 1.3 billion listings, many of which are unstructured. Currently, we use our in-house machine learning solutions to approach this problem, but we also want to grow our community and reach future technologists who haven't had access to this type of data. By working with universities, we hope to pique academic curiosity within ML, spur more research in the ecommerce domain powered by a real-world ecommerce dataset, and help improve our platform.


Naive Bayes: A Baseline Model for Machine Learning Classification Performance - KDnuggets

#artificialintelligence

Bayes' theorem describes the probability of an event A occurring, P(A|B), based on our prior knowledge of a related event B: P(A|B) = P(B|A)P(A)/P(B). As an example of using Bayes' theorem, I'll be using the tennis weather dataset. What is the probability of playing tennis given that it is rainy? The probability of playing tennis when it is rainy is 60%. The process is very simple once you obtain the frequencies for each category.
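The 60% figure can be checked directly from the frequency counts. Here is a quick sketch using the standard 14-row play-tennis table (my own reconstruction of the dataset the article refers to):

```python
# Classic 14-row "play tennis" table as (outlook, play) pairs:
# 9 yes / 5 no overall; 3 yes / 2 no on rainy days.
data = [("sunny", "no"), ("sunny", "no"), ("overcast", "yes"), ("rainy", "yes"),
        ("rainy", "yes"), ("rainy", "no"), ("overcast", "yes"), ("sunny", "no"),
        ("sunny", "yes"), ("rainy", "yes"), ("sunny", "yes"), ("overcast", "yes"),
        ("overcast", "yes"), ("rainy", "no")]

def p_play_given_outlook(outlook):
    # Bayes: P(yes | outlook) = P(outlook | yes) * P(yes) / P(outlook)
    n = len(data)
    n_yes = sum(1 for _, play in data if play == "yes")
    p_yes = n_yes / n
    p_outlook = sum(1 for o, _ in data if o == outlook) / n
    p_outlook_given_yes = sum(1 for o, play in data
                              if o == outlook and play == "yes") / n_yes
    return p_outlook_given_yes * p_yes / p_outlook

print(p_play_given_outlook("rainy"))  # ≈ 0.6, i.e. 60%
```

Plugging in the counts: P(rainy|yes) = 3/9, P(yes) = 9/14, P(rainy) = 5/14, so P(yes|rainy) = (3/9 × 9/14) / (5/14) = 3/5.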


Location Powers: Super Hero of Location

#artificialintelligence

The explosive availability of data about nearly every aspect of human activity, along with revolutionary advances in computing technologies, is transforming geospatial data science. The shift from a data-scarce to a data-rich environment comes from mobile devices, remote sensing, and the Internet of Things. Nearly all of this data has components of location and time. Innovations in cloud computing and big data provide methods to perform data analytics at exceedingly large scale and speed. The development of intelligent systems using knowledge models, and their impact on our insights and understanding, will be the focus of the Location Powers: Data Science Summit.


Connections between SVMs, Wasserstein distance and GANs

#artificialintelligence

Check out my new paper, titled "Support Vector Machines, Wasserstein's distance and gradient-penalty GANs are connected"! In this paper, we explain how one can derive SVMs and gradient-penalized GANs (those with a Lipschitz-1 discriminator) from the same framework. We also show new gradient penalties that lead to better GANs. This paper may completely change your perspective on the Wasserstein distance, Wasserstein GAN (WGAN), Hinge GAN (HingeGAN), and the use of gradient penalties in GANs. At least, it did for me!
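As a toy illustration of the kind of gradient penalty discussed here (my own example, not the paper's code): for a linear critic D(x) = w·x + b the input gradient is w at every point, so the Lipschitz-1 penalty E[(‖∇D(x)‖ − 1)²] collapses to a single term (‖w‖ − 1)²:

```python
import numpy as np

# Toy sketch: a linear critic D(x) = w @ x + b has input gradient w
# everywhere, so the WGAN-GP-style penalty E[(||grad D(x)|| - 1)^2]
# reduces to (||w|| - 1)^2, independent of where it is sampled.
def grad_penalty_linear_critic(w):
    return (np.linalg.norm(w) - 1.0) ** 2

w = np.array([3.0, 4.0])               # ||w|| = 5
print(grad_penalty_linear_critic(w))   # (5 - 1)^2 = 16.0
```

The penalty is zero exactly when ‖w‖ = 1, i.e. when the linear critic is exactly 1-Lipschitz, which is the constraint shared with the SVM margin formulation.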


Cluster multiple time series using K-means

#artificialintelligence

I was recently confronted with the problem of finding similarities among time series and thought about using k-means to cluster them. To illustrate the method, I'll be using data from the Penn World Tables, readily available in R (inside the {pwt9} package). First of all, let's select only the needed columns. The goal here is to cluster the different countries by looking at how similar they are on the avh variable. Let's do some further cleaning: the k-means implementation in R expects a wide data frame (my data frame is currently in the long format) and no missing values. These could potentially be imputed, but I can't be bothered. We're now ready to use the k-means algorithm.
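The same wide-format recipe can be sketched in Python with synthetic stand-in data (not the actual Penn World Tables avh series), where each row is one country's yearly values:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical stand-in for an 'avh'-like series (average hours worked):
# one row per country, one column per year -- the wide, complete-case
# layout k-means expects.
rng = np.random.default_rng(42)
years = 20
low_hours = rng.normal(1500, 30, size=(5, years))    # five low-hours countries
high_hours = rng.normal(2100, 30, size=(5, years))   # five high-hours countries
X = np.vstack([low_hours, high_hours])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)   # first five rows share one label, last five the other
```

Each time series is treated as a point in a 20-dimensional space, so countries with similar trajectories end up in the same cluster; with real data, the missing-value cleaning step above has to happen first.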


Fully automated ship will trace Mayflower journey

#artificialintelligence

A fully autonomous ship tracing the journey of the Mayflower is being built by a UK-based team, with help from tech firm IBM. The Mayflower Autonomous Ship, or MAS, will launch from Plymouth in the UK in September 2020. Its voyage will mark the 400th anniversary of the pilgrim ship which brought European settlers to America in 1620. IBM is providing artificial intelligence systems for the ship. The vessel will make its own decisions on its course and collision avoidance, and will even make expensive satellite phone calls back to base if it deems it necessary.


RealTime Robotics scores $11.7M Series A to help robots avoid collisions – TechCrunch

#artificialintelligence

One of the major challenges facing engineers as they develop more agile robots is helping them move through space while avoiding collisions, especially in dynamic environments. RealTime Robotics, a Boston-based startup, announced an $11.7 million Series A investment to help solve this problem. SPARX Asset Management led the round, with participation from strategic investors including Mitsubishi Electric Corporation, Hyundai Motor Company, and Omron Ventures. Existing investors Toyota AI Ventures, Scrum Ventures, and the Duke Angel Network also pitched in. The Series A is actually the culmination of a couple of investments over the past year that the company is announcing today, and it brings the total raised to $12.9 million.