Visual interface for Azure Machine Learning service | Microsoft Azure Blog

#artificialintelligence

During Microsoft Build we announced the preview of the visual interface for Azure Machine Learning service. This new drag-and-drop workflow capability in Azure Machine Learning service simplifies the process of building, testing, and deploying machine learning models for customers who prefer a visual experience to a coding experience. It brings the familiarity of our popular Azure Machine Learning Studio, along with significant improvements that ease the user experience. The Azure Machine Learning visual interface is designed for simplicity and productivity, and it offers a rich set of modules covering data preparation, feature engineering, training algorithms, and model evaluation.


Global Big Data Conference

#artificialintelligence

Facebook's commitment to the wider dev community remains as strong as ever, if recent developments are any indication. Following the open-sourcing of the image processing library Spectrum in January, the natural language processing modeling framework PyText late last year, and the AI reinforcement learning platform Horizon in November, Facebook's AI research division today announced that Pythia, a modular plug-and-play framework that enables data scientists to quickly build, reproduce, and benchmark AI models, is now freely available on GitHub. As Facebook explains in a blog post, Pythia -- which is built atop the company's PyTorch machine learning framework -- is principally intended for vision and language tasks, such as answering questions related to visual data and automatically generating image captions. It incorporates elements of Facebook AI Research's top entries in AI competitions, like LoRRA, a vision and language model that won both the VQA Challenge 2018 and the VizWiz Challenge 2018, and it can show how previous state-of-the-art AI systems achieved top benchmark results and compare their performance to that of new models. Pythia also supports distributed training and a variety of data sets, as well as custom losses, metrics, scheduling, and optimizers.
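
To make the "vision and language" framing concrete: a minimal PyTorch sketch of the pattern Pythia generalizes -- encoding a question, fusing it with image features, and scoring candidate answers -- might look like the following. The class, dimensions, and fusion choice here are illustrative assumptions for exposition, not Pythia's actual API.

```python
import torch
import torch.nn as nn

class SimpleVQAModel(nn.Module):
    """Illustrative vision-and-language model: fuses precomputed image
    features with an LSTM-encoded question to score candidate answers."""

    def __init__(self, vocab_size=10000, embed_dim=300,
                 image_feat_dim=2048, hidden_dim=512, num_answers=3000):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.question_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.image_proj = nn.Linear(image_feat_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_answers)

    def forward(self, image_feats, question_tokens):
        # Encode the question; use the final hidden state as its summary.
        embedded = self.embedding(question_tokens)
        _, (h_n, _) = self.question_encoder(embedded)
        q_repr = h_n[-1]                       # (batch, hidden_dim)
        # Project the image features into the same space.
        v_repr = self.image_proj(image_feats)  # (batch, hidden_dim)
        # Fuse by elementwise product and score the answer vocabulary.
        return self.classifier(q_repr * v_repr)

# Usage with stand-in data: 4 images, 14-token questions.
model = SimpleVQAModel()
logits = model(torch.randn(4, 2048), torch.randint(0, 10000, (4, 14)))
```

What a framework like Pythia adds on top of a model like this is the surrounding machinery the post describes: configurable data sets, distributed training, and swappable losses, metrics, schedulers, and optimizers.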


Machine Learning for Product Managers: Defining the business problem

#artificialintelligence

Every company is overflowing with data. They look around and see innovation happening across the industry. Executives hear from their customers about their AI strategies. Management sees competitors with AI solutions making critical moves that bite into their addressable market. With all this background noise, management's immediate reaction is to conclude, "We have to do something with our data; let's go hire some data scientists."


Racing tips from AWS DeepRacer League winners in Stockholm, and AWS DeepRacer TV! | Amazon Web Services

#artificialintelligence

The AWS DeepRacer League is the world's first global autonomous racing league. Races take place at 21 AWS Summits globally and at select Amazon events, as well as in monthly virtual races that are open online. No matter where you are in the world or what your skill level is, you can join the league and get a chance to win AWS DeepRacer cars and the top prize: an all-expenses-paid trip to re:Invent 2019 to compete in the AWS DeepRacer Championship Cup. The competition is heating up as the Summit Circuit hits the halfway mark in Sweden this week.
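
For context on what competitors actually tune: DeepRacer cars are trained with reinforcement learning, and much of a racer's skill goes into the Python reward function that scores each step of driving. The sketch below follows the reward-function interface from the AWS DeepRacer documentation (the params keys are documented; the marker thresholds are illustrative choices, not a winning recipe).

```python
def reward_function(params):
    """Illustrative AWS DeepRacer reward function: reward the car
    for staying close to the track's center line."""
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    # Markers at increasing distances from the center line.
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center <= marker_1:
        reward = 1.0    # hugging the center line: full reward
    elif distance_from_center <= marker_2:
        reward = 0.5
    elif distance_from_center <= marker_3:
        reward = 0.1
    else:
        reward = 1e-3   # likely off track, or nearly so

    return float(reward)
```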


Ink Lines and Machine Learning

#artificialintelligence

Pav Grochola is an Effects Supervisor at Sony Pictures Imageworks (SPI) and was co-effects supervisor, along with Ian Farnsworth, on the Oscar-winning Spider-Man: Into the Spider-Verse. He was tasked with solving how to produce natural-looking line work for the film. A critical visual component for successfully achieving the comic book illustrative style in CGI was the creation of line work, or "ink lines". In testing, SPI discovered that any approach that created ink lines from procedural "rules" (for example, toon shaders) was ineffective in achieving the natural look that was wanted. The fundamental problem is that artists simply do not draw according to limited "rule sets" or guidelines.


When AI Becomes a Part of Our Daily Lives

#artificialintelligence

As we live longer and technology continues its rapid arc of development, we can imagine a future where machines will augment our human abilities and help us make better life choices, from health to wealth. Instead of conducting a question-and-answer session with a device on the countertop, we will be able to converse naturally with a virtual assistant that is fully embedded in our physical environment. Through our dialogue and digital breadcrumbs, it will understand our life goals and aspirations, our obligations and limitations. It will seamlessly and automatically help us budget and save for different life events, so we can spend more time enjoying life's moments. While we can imagine this future, the technology itself is not without challenges -- at least for now.


Artists + Machine Intelligence Grants by Google Arts & Culture Lab, Google AI | Experiments with Google

#artificialintelligence

As part of Google's ongoing commitment to supporting ambitious computer science research and the arts, Google Arts & Culture, in collaboration with Google AI, invites proposals from contemporary artists working with machine learning in their art practices. Artists + Machine Intelligence (AMI) grants will support six artists with technical mentorship, core Google research, and funding. Artists will have the opportunity to work with Google creative technologists to develop and produce artworks over a five-month period. Mentorship may cover technical processes ranging from data collection and analysis to pipeline design and model deployment, and includes access to core Google UX and technical research in generative and decentralized machine learning, computer vision, and natural language processing. Apart from any Google background IP (if relevant), artists will own the IP of their artwork.


How AI Could Track Allergens on Every Block | NVIDIA Blog

#artificialintelligence

As seasonal allergy sufferers will attest, the concentration of allergens in the air varies every few paces. A nearby blossoming tree or sudden gust of pollen-tinged wind can easily set off sneezing and watery eyes. But concentrations of airborne allergens are reported city by city, at best. A network of deep learning-powered devices could change that, enabling scientists to track pollen density block by block. Researchers at the University of California, Los Angeles, have developed a portable AI device that identifies levels of five common allergens from pollen and mold spores with 94 percent accuracy, according to the team's recent paper.
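
Setting aside the team's exact method, the classification step at the heart of such a device can be pictured as a small convolutional network mapping an image of a captured particle to one of five allergen classes. The following PyTorch sketch is purely illustrative -- the architecture, input size, and class labels are assumptions, not the UCLA team's model.

```python
import torch
import torch.nn as nn

# Hypothetical labels for the five allergen classes (illustrative only).
ALLERGEN_CLASSES = ['oak_pollen', 'grass_pollen', 'ragweed_pollen',
                    'aspergillus_spore', 'alternaria_spore']

class AllergenClassifier(nn.Module):
    """Small CNN mapping a single-channel 64x64 particle image
    to one of five allergen classes. Not the published model."""

    def __init__(self, num_classes=len(ALLERGEN_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = AllergenClassifier()
images = torch.randn(8, 1, 64, 64)    # batch of particle images
probs = model(images).softmax(dim=1)  # per-class probabilities
```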


IBM's AI performs state-of-the-art broadcast news captioning

#artificialintelligence

Two years ago, researchers at IBM claimed state-of-the-art transcription performance with a machine learning system trained on two public speech recognition data sets, which was more impressive than it might seem. The AI system had to contend not only with distortions in the training corpora's audio snippets, but also with a range of speaking styles, overlapping speech, interruptions, restarts, and exchanges among participants. In pursuit of an even more capable system, researchers at the Armonk, New York-based company recently devised an architecture detailed in a paper ("English Broadcast News Speech Recognition by Humans and Machines") that will be presented at the International Conference on Acoustics, Speech, and Signal Processing in Brighton this week. They say that in preliminary experiments it achieved industry-leading results on broadcast news captioning tasks. The task came with its own set of challenges, such as audio signals with heavy background noise and presenters speaking on a wide variety of news topics.
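
Transcription quality in this line of work is conventionally reported as word error rate (WER): the number of word substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length. A minimal sketch of the standard dynamic-programming computation (generic, not IBM's evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Compute WER as word-level edit distance divided by the
    number of words in the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution in a six-word reference -> WER of 1/6.
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```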


What Roles Will Human Workers Play In The AI Economy Of The Future?

#artificialintelligence

What roles will human workers play in the AI economy of the future? The question is no longer whether AI can coexist with humans; it is now a question of how (and what) they can achieve together. As we assess the economy of the future, it's important to understand the value people bring to AI -- and that value is data. First of all, creating royalties and residuals based on the value and information that people add to AI -- paying them for their data, information, knowledge, and expertise -- is the fundamental shift needed to account for how people will work alongside AI in the economy of the future. Otherwise, the economy will be split between those who are working with AI and those who are not, and it's imperative that our economic models avoid that divide.