2021-07
Professional standards for data science could pave way for AI regulation
A new alliance of professional and research organisations is aiming to deliver a set of professional standards for data scientists. If widely adopted, the framework could go a long way towards ensuring that those working on advanced AI and machine learning (AI/ML) systems do so in a way that mitigates the emerging technology's risks to society. It could eventually lead to anyone implementing AI unethically being 'struck off', or banned from the profession, one expert told Tech Monitor. The Alliance for Data Science Professionals has been formed by organisations including BCS, The Chartered Institute for IT, and the Alan Turing Institute for AI research, along with the Royal Statistical Society, the Institute of Mathematics and the National Physical Laboratory. It aims to set the standards "needed to ensure an ethical and well-governed approach so the public, organisations and governments can have confidence in how their data is used".
A Machine Learning Method to Block Ads Based on Local Browser Behavior
Researchers in Switzerland and the US have devised a new machine learning approach to detecting website advertising material based on the way such material interacts with the browser, rather than by analyzing its content or network behavior – two approaches that have proved ineffective in the long term in the face of CNAME cloaking (see below). Dubbed WebGraph, the framework uses a graph-based AI ad-blocking approach that concentrates on activities so essential to online advertising – such as telemetry attempts and local browser storage – that the only effective evasion technique would be to stop conducting them. Though previous approaches have achieved slightly higher detection rates than WebGraph, all of them are prone to evasive techniques, while WebGraph is able to approach 100% integrity in the face of adversarial responses, including more sophisticated hypothesized responses that may emerge against this novel ad-blocking method. The research is led by two researchers from the Swiss Federal Institute of Technology, in concert with researchers from the University of California, Davis and the University of Iowa. The work develops a 2020 research initiative with the Brave browser called AdGraph, which featured two of the researchers from the new paper.
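The paper describes the full pipeline in detail; purely as an illustration of the general idea (and not the authors' actual code), the sketch below builds a small graph of page activity from hypothetical instrumentation events and classifies nodes by behavioral features such as telemetry requests and storage writes. The event list, feature set and model choice here are all invented for the example.

```python
# Sketch: graph-based classification of page resources as ad/non-ad,
# in the spirit of WebGraph (not the authors' implementation).
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

# Hypothetical instrumentation output: (source, target, event_type).
events = [
    ("page", "script_a", "initiates"),
    ("script_a", "tracker.example/pixel", "network_request"),
    ("script_a", "localStorage:uid", "storage_write"),
    ("page", "script_b", "initiates"),
    ("script_b", "cdn.example/app.js", "network_request"),
]

G = nx.DiGraph()
for src, dst, etype in events:
    G.add_edge(src, dst, type=etype)

def node_features(g, n):
    # Behavioral features: how the node interacts with the browser,
    # rather than what content it serves.
    out_edges = g.out_edges(n, data=True)
    return [
        g.in_degree(n),
        g.out_degree(n),
        sum(1 for *_, d in out_edges if d["type"] == "network_request"),
        sum(1 for *_, d in out_edges if d["type"] == "storage_write"),
    ]

# In training, labels would come from an existing filter list.
train_nodes = ["script_a", "script_b"]
X = [node_features(G, n) for n in train_nodes]
y = [1, 0]  # 1 = ad/tracking-related, 0 = benign

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print(clf.predict([node_features(G, "script_a")]))
```

Because the features describe essential advertising behaviors rather than content, an adversary can only evade the classifier by abandoning those behaviors, which is the property the paragraph above describes.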
Scaling Up Chatbots for Corporate Service Delivery Systems
Conversational agents, or chatbots, providing question-answer assistance on smart devices, have proliferated in recent years and are poised to transform online customer services of corporate sectors.1,6 Implemented through dialogue management systems, chatbots converse through voice-based and textual dialogue, and harness natural language processing and artificial intelligence to recognize requests, provide responses, and predict user behavior.5,28 Market analysts concur on current adoption trends and the magnitude of growth and impact anticipated for chatbots in the next five years. According to a report by Grand View Research, for instance, 45% of users already prefer chatbots as the primary point of communication for customer service enquiries, translating into a global 'chatbot' market of $1.23 billion by 2025, at a compound annual growth rate (CAGR) of 24.3%.9 The strategy for conducting conversations using chatbots requires an efficient resolution of two key aspects. First, user queries, or needs perceived automatically through user interactions, have to be interpreted and mapped into categories, or user intents. This is based on historical processing of queries and needs, and the use of intent classification techniques.12 Second, conversations must be constructed for specific intents using frame-based dialogue management2 and neural response generation techniques.15 In frame-based dialogue management, the chatbot needs to converse with the user until it has a fully filled frame (for example, flight information) in which all slot values are provided by the user (for example, airline carrier, departure time, departure location, and arrival location). The dialogue flow is constructed through an ordered sequence of frames, as the sketch below illustrates.
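As a concrete, minimal sketch of that frame-filling loop for the flight-information example (the frame definition, prompts, and keyword-matching "extraction" are invented stand-ins for trained intent classifiers and slot-filling models):

```python
# Minimal frame-based dialogue sketch for a flight-information intent.
# Slot extraction is naive keyword matching; real systems use trained
# sequence labelers for slot filling.

FRAME_SLOTS = ["carrier", "departure_time", "departure_location", "arrival_location"]

PROMPTS = {
    "carrier": "Which airline would you like to fly with?",
    "departure_time": "When would you like to depart?",
    "departure_location": "Where are you flying from?",
    "arrival_location": "Where are you flying to?",
}

def extract_slots(utterance, frame):
    # Hypothetical extraction rules, stand-ins for an NLU model.
    words = utterance.lower().split()
    if "from" in words:
        frame["departure_location"] = words[words.index("from") + 1]
    if "to" in words:
        frame["arrival_location"] = words[words.index("to") + 1]
    if "on" in words:
        frame["carrier"] = words[words.index("on") + 1]
    if "at" in words:
        frame["departure_time"] = words[words.index("at") + 1]

def dialogue(user_turns):
    frame = {slot: None for slot in FRAME_SLOTS}
    turns = iter(user_turns)
    extract_slots(next(turns), frame)
    # Keep prompting until every slot in the frame is filled.
    for slot in FRAME_SLOTS:
        if frame[slot] is None:
            print("BOT:", PROMPTS[slot])
            extract_slots(next(turns), frame)
    return frame

# Scripted conversation standing in for live user input.
print(dialogue([
    "book a flight from london to boston",
    "on acmeair",
    "at 9am",
]))
```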
Trucks Move Past Cars on the Road to Autonomy
In 2016, three veterans of the still young autonomous vehicle industry formed Aurora, a startup focused on developing self-driving cars. Partnerships followed with major automakers, including Hyundai and Volkswagen. CEO Chris Urmson said at the time that the link-ups would help the company bring "mobility as a service" to urban areas--Uber-like rides without a human behind the wheel. But by late 2019, Aurora's emphasis had shifted. It said self-driving trucks, not cars, would be quicker to hit public roads en masse. Its executives, who had steadfastly refused to provide a timeline for their self-driving-car software, now say trucks equipped with its "Aurora Driver" will hit the roads in 2023 or 2024, with ride-hail vehicles following a year or two later.
This flying fire sensor could help track wildfires from a satellite in space
As wildfires devastate western North America, a new airborne project hopes to develop a space-based solution to stop conflagrations before they get out of control. The project could one day help firefighters acquire "fire behavior" maps within 20 minutes of an outbreak, using satellite data combined with machine learning (a kind of artificial intelligence), according to a statement from the University of California, Berkeley. The project, backed by a $1.5 million grant, will fund "spotter planes" with infrared detectors -- heat-seeking sensors that examine flame length and geometry to learn more about how fires spread. Meanwhile, machine learning algorithms -- provided they are trained well on other "hot spot" datasets -- could spot new fires in the region within milliseconds and send alerts. If all goes well in airborne testing, the detector team -- which includes UC Berkeley's Space Sciences Laboratory and Nevada-based fire assessment company Fireball Information Technologies -- hopes to send similar sensors to space within four years to make monitoring and discovery a 24/7 activity.
How AI Algorithms Are Changing Trading Forever
In general, trading is about making decisions on transactions with assets in order to make a profit. All technical analysis is based on statistical data, past market behavior, and reactions. Consequently, the analysis and search for market patterns can be performed not only by a person but also by a computer and artificial intelligence. It is no secret that trading robots have been working in the stock market for a long time, focusing on price movements in trends and within channels. According to a 2020 JPMorgan study, over 60% of trades over $10M were executed using algorithms.
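As a toy illustration of the kind of trend-following rule such robots implement (the prices are synthetic, and no real trading system is this simple), a moving-average crossover emits a buy or sell signal when a short-term average crosses a long-term one:

```python
# Toy trend-following signal: moving-average crossover.
# Synthetic prices; purely illustrative, not a real strategy.

def sma(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    if len(prices) < long + 1:
        return "hold"
    prev_short, prev_long = sma(prices[:-1], short), sma(prices[:-1], long)
    cur_short, cur_long = sma(prices, short), sma(prices, long)
    if prev_short <= prev_long and cur_short > cur_long:
        return "buy"   # short-term trend just crossed above long-term
    if prev_short >= prev_long and cur_short < cur_long:
        return "sell"  # short-term trend just crossed below long-term
    return "hold"

prices = [100, 101, 99, 98, 97, 99, 102, 105, 104, 101, 97, 95]
for t in range(len(prices)):
    signal = crossover_signal(prices[: t + 1])
    if signal != "hold":
        print(f"t={t} price={prices[t]} -> {signal}")
```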
DeepMind open-sources protein structure dataset generated by AlphaFold 2
DeepMind and the European Bioinformatics Institute (EMBL-EBI), a life sciences lab based in Hinxton, England, today announced the launch of what they claim is the most complete and accurate database of structures for proteins expressed by the human genome. In a joint press conference hosted by the journal Nature, the two organizations said that the database, the AlphaFold Protein Structure Database, which was created using DeepMind's AlphaFold 2 system, will be made available to the scientific community in the coming weeks. The recipes for proteins -- large molecules consisting of amino acids that are the fundamental building blocks of tissues, muscles, hair, enzymes, antibodies, and other essential parts of living organisms -- are encoded in DNA. It's these genetic definitions that circumscribe their three-dimensional structures, which in turn determine their capabilities.
Self-driving cars confront a daunting new challenge: New York City streets
Mobileye received a special permit from New York state, which allows manufacturers of "autonomous vehicle technology" to test on public streets. The permit requires that a driver be present in the vehicle, but allows them to keep their hands off the steering wheel as long as they remain "prepared to take control when required to … operate the vehicle safely and lawfully." It's unclear whether other companies have applied; the state hasn't responded to a request for comment.
Alibaba Develops Search Engine Simulation AI That Uses Live Data
In collaboration with academic researchers in China, Alibaba has developed a search engine simulation AI that uses real-world data from the ecommerce giant's live infrastructure in order to develop new ranking models that are not hamstrung by 'historic' or out-of-date information. The engine, called AESim, represents the second major announcement in a week to acknowledge the need for AI systems to evaluate and incorporate live, current data, instead of just abstracting the data that was available at the time the model was trained. The earlier announcement came from Facebook, which last week unveiled the BlenderBot 2.0 language model, an NLP interface that features live polling of internet search results in response to queries. The objective of the AESim project is to provide an experimental environment for the development of new Learning-To-Rank (LTR) solutions, algorithms, and models for commercial information retrieval systems. In testing the framework, the researchers found that it accurately reflected online performance within useful and actionable parameters.
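AESim itself runs against Alibaba's live traffic, but the objective it serves can be illustrated generically. The sketch below shows a pairwise learning-to-rank reduction on synthetic data (none of this reflects AESim's actual API or models): a classifier is trained on feature differences between item pairs, and its learned weights become a scoring function for ranking.

```python
# Generic pairwise learning-to-rank sketch (RankNet-style reduction
# to binary classification); data is synthetic, unrelated to AESim.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-item features for one query: [text_match, popularity].
items = rng.random((20, 2))
relevance = (0.7 * items[:, 0] + 0.3 * items[:, 1] > 0.5).astype(int)

# Build pairwise training data: x_i - x_j, labeled by which item ranks higher.
X_pairs, y_pairs = [], []
for i in range(len(items)):
    for j in range(len(items)):
        if relevance[i] != relevance[j]:
            X_pairs.append(items[i] - items[j])
            y_pairs.append(int(relevance[i] > relevance[j]))

model = LogisticRegression().fit(X_pairs, y_pairs)

# The learned weights give a scoring function; sort items by score.
scores = items @ model.coef_.ravel()
ranking = np.argsort(-scores)
print("top items:", ranking[:5], "relevance:", relevance[ranking[:5]])
```

A simulator like AESim would replace the synthetic relevance labels here with feedback generated from live user behavior, which is what keeps the learned ranking models from going stale.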
Why 90% of machine learning models never hit the market
Corporations are going through rough, uncertain times, and having to make customer experiences ever more seamless and immersive isn't taking any of the pressure off. In that light, it's understandable that they're pouring billions of dollars into the development of machine learning models to improve their products. But companies can't just throw money at data scientists and machine learning engineers and hope that magic happens. The data speaks for itself.