If you are looking for an answer to the question "What is Artificial Intelligence?" and you have only a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Google's grip on the web has never been stronger. Its Chrome web browser has almost 70 percent of the market and its search engine a whopping 92 percent share. This story originally appeared on WIRED UK. But Google's dominance is being challenged: regulators are questioning its monopoly position and claim the company has used anticompetitive tactics to entrench it.
The National Security Commission on Artificial Intelligence (NSCAI) recently published its final report for 2021, outlining an integrated national strategy to empower the US in the era of AI-accelerated competition and conflict. NSCAI worked with technologists, national security professionals, business executives, and academic leaders to produce the report. According to the report, the US government is a long way from being "AI-ready." Based on these findings, the commission has proposed a set of policy recommendations. The US leads most countries, including India, on almost all AI parameters.
In 2017, I returned to Canada from Sweden, where I had spent a year working on automation in mining. Shortly after my return, the New York Times published a piece headlined "The Robots Are Coming, and Sweden Is Fine," about Sweden's embrace of automation while limiting human costs. Although Swedes are apparently optimistic about their future alongside robots, other countries aren't as hopeful. One widely cited study estimates 47 per cent of jobs in the United States are at risk of being replaced by robots and artificial intelligence. Whether we like it or not, the robot era is upon us.
The transformer is a type of neural network based mainly on the self-attention mechanism. Transformers are widely used in natural language processing (NLP), e.g., in the famous BERT and GPT-3 models. Inspired by the transformer's breakthrough in NLP, researchers have recently applied it to computer vision (CV) tasks such as image recognition, object detection, and image processing. For example, DETR treats object detection as a direct set prediction problem and solves it using a transformer encoder-decoder architecture. Compared to mainstream CNN models, these transformer-based models have also shown promising performance on visual tasks.
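As a concrete illustration of the mechanism the paragraph names, here is a minimal NumPy sketch of scaled dot-product self-attention, the building block of transformers. The weight matrices below are random placeholders standing in for learned parameters; real models add multiple heads, residual connections, and normalization on top of this core.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X (n_tokens x d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # each output mixes all values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                            # 4 tokens, model dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                                       # (4, 8)
```

Because every output row is a weighted mixture over all tokens, each position can attend to every other position in one step, which is what lets transformers model long-range dependencies in both text and images.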
IMAGE: A computer created facial images that appealed to individual preferences. Researchers have succeeded in making an AI understand our subjective notions of what makes faces attractive. The system demonstrated this by creating new portraits on its own, tailored to what individuals find personally attractive. The results can be utilised, for example, in modelling preferences and decision-making, as well as in potentially identifying unconscious attitudes. Researchers at the University of Helsinki and the University of Copenhagen investigated whether a computer would be able to identify the facial features we consider attractive and, based on this, create new images matching our criteria.
As you can see, this is an impressive series of releases, one that addresses some of the hottest trends in modern ML applications. Microsoft continues to innovate at a very impressive pace, and its ML stack is becoming one of the most complete suites of ML technologies on the market. Edge#69: search strategies in neural architecture search; Google's Evolved Transformer, a killer combination of transformers and NAS; Microsoft's Neural Network Intelligence, the most impressive AutoML framework you have never heard of.
New work by computer scientists at Lawrence Livermore National Laboratory (LLNL) and IBM Research on deep learning models that accurately diagnose diseases from X-ray images with less labeled data won the Best Paper award for Computer-Aided Diagnosis at the SPIE Medical Imaging Conference on Feb. 19. The technique, which includes novel regularization and "self-training" strategies, addresses some well-known challenges in the adoption of artificial intelligence (AI) for disease diagnosis, namely the difficulty of obtaining abundant labeled data due to cost, effort, or privacy issues, and the inherent sampling biases in the collected data, researchers said. AI algorithms are also currently unable to effectively diagnose conditions that are not sufficiently represented in the training data. LLNL computer scientist Jay Thiagarajan said the team's approach demonstrates that accurate models can be created with limited labeled data and perform as well as or even better than neural networks trained on much larger labeled datasets. The paper, published by SPIE, included co-authors at IBM Research Almaden in San Jose.
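The article does not reproduce the paper's method, but the general "self-training" idea it refers to can be sketched as a pseudo-labeling loop: fit a model on the small labeled set, then fold the model's confident predictions on unlabeled data back in as extra labels. The nearest-centroid classifier and margin rule below are illustrative stand-ins, not the LLNL/IBM model or regularization scheme.

```python
import numpy as np

def fit_centroids(X, y):
    # Toy "model": one mean vector per class (classes 0 and 1).
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(X, centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1), d

def self_train(X_lab, y_lab, X_unlab, margin=1.0, rounds=3):
    X, y, pool = X_lab, y_lab, X_unlab
    for _ in range(rounds):
        centroids = fit_centroids(X, y)
        if len(pool) == 0:
            break
        pred, d = predict(pool, centroids)
        confident = np.abs(d[:, 0] - d[:, 1]) >= margin   # trust clear margins only
        if not confident.any():
            break
        X = np.vstack([X, pool[confident]])               # absorb pseudo-labels
        y = np.concatenate([y, pred[confident]])
        pool = pool[~confident]
    return fit_centroids(X, y)

rng = np.random.default_rng(0)
# Toy two-class data: two well-separated Gaussian blobs, only 10 labels total.
X_all = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y_all = np.repeat([0, 1], 100)
labeled = np.r_[0:5, 100:105]
unlabeled = np.setdiff1d(np.arange(200), labeled)
centroids = self_train(X_all[labeled], y_all[labeled], X_all[unlabeled])
pred, _ = predict(X_all, centroids)
accuracy = (pred == y_all).mean()
print(accuracy)
```

On this easy synthetic data, ten labels plus pseudo-labeling recover nearly the full-data decision rule; the paper's contribution, per the article, is making this kind of loop reliable on much harder, biased medical imaging data.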
On the 27th of January, DIHNET revealed the winners of the 2020 DIH Champions Challenge at the virtual EDIH Conference 2021, "Gearing up towards European Digital Innovation Hubs". The awards ceremony gathered more than 1176 participants, including Digital Innovation Hubs, designated EDIHs, regions and Member States, representatives of EEN, clusters, and SME associations, among other stakeholders. DIHNET.EU pioneered the annual DIH Champions Challenge for identifying mature Digital Innovation Hubs in Europe. Begoña Sanchez, Innovation Systems and Policies manager at Tecnalia and a member of the DIHNET consortium, explains that the main purpose of this initiative is "to provide the DIHs community with a process for identifying good practices, showcase and support success stories of Mature DIHs that can inspire and guide other DIHs in their development." In this second edition, four DIHs were shortlisted as finalists: the am-LAB, the Basque Digital Innovation Hub (BDIH), the FZI Research Center for Information Technology and the ITI Data Hub (The Data Cycle Hub). The DIHNET consortium reviewed the proposals with the contribution of two external evaluators: Jan Kobliha, Ministerial Counsellor at the Ministry of Industry and Trade of the Czech Republic, and Thorsten Huelsmann, manager of the Digital Hub Logistics Dortmund, winner of the 2019 DIH Champions Challenge.
Machine learning has now entered its business heyday. Almost half of CIOs were predicted to have implemented AI by 2020, a number expected to grow significantly over the next five years. Yet adoption is harder than it looks, because creating a machine learning model and putting it into operation in an enterprise environment are two very different things. The biggest challenge for companies looking to use AI is operationalizing machine learning, the same way DevOps operationalized software development in the 2000s. Simplifying the data science workflow by providing the necessary architecture and automating feature serving with feature stores are two of the most important ways to make machine learning easy, accurate, and fast at scale.
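To make "feature serving" concrete, here is a toy in-memory feature store. The class, method names, and entity/feature keys are all hypothetical, not the API of any real product; the point is the core idea the paragraph describes: features are computed and registered once, then served consistently to both training pipelines and online prediction.

```python
from datetime import datetime, timezone

class FeatureStore:
    """Toy feature store keyed by (entity_id, feature_name)."""

    def __init__(self):
        self._features = {}  # (entity_id, feature_name) -> (value, timestamp)

    def put(self, entity_id, name, value):
        # Register or refresh a precomputed feature with its write time.
        self._features[(entity_id, name)] = (value, datetime.now(timezone.utc))

    def get_vector(self, entity_id, names):
        # Serve the latest values in a fixed order, e.g. as model input.
        return [self._features[(entity_id, n)][0] for n in names]

store = FeatureStore()
store.put("user_42", "purchases_30d", 7)
store.put("user_42", "avg_basket_eur", 31.5)
vector = store.get_vector("user_42", ["purchases_30d", "avg_basket_eur"])
print(vector)  # [7, 31.5]
```

Production feature stores add what this sketch omits: offline/online storage tiers, point-in-time correctness for training data, and monitoring, which is exactly the operational machinery the text argues companies struggle to build themselves.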
Artificial intelligence is becoming good at many "human" jobs--diagnosing disease, translating languages, providing customer service--and it's improving fast. This is raising reasonable fears that AI will ultimately replace human workers throughout the economy. Never before have digital tools been so responsive to us, nor we to our tools. While AI will radically alter how work gets done and who does it, the technology's larger impact will be in complementing and augmenting human capabilities, not replacing them. Certainly, many companies have used AI to automate processes, but those that deploy it mainly to displace employees will see only short-term productivity gains. In our research involving 1,500 companies, we found that firms achieve the most significant performance improvements when humans and machines work together. Through such collaborative intelligence, humans and AI actively enhance each other's complementary strengths: the leadership, teamwork, creativity, and social skills of the former, and the speed, scalability, and quantitative capabilities of the latter. What comes naturally to people (making a joke, for example) can be tricky for machines, and what's straightforward for machines (analyzing gigabytes of data) remains virtually impossible for humans.