Artificial intelligence and machine learning have been hot topics in 2020, as AI and ML technologies increasingly find their way into everything from advanced quantum computing systems and leading-edge medical diagnostic systems to consumer electronics and "smart" personal assistants. Revenue generated by AI hardware, software and services is expected to reach $156.5 billion worldwide this year, according to market researcher IDC, up 12.3 percent from 2019. But it can be easy to lose sight of the forest for the trees when it comes to trends in the development and use of AI and ML technologies. As we approach the end of a turbulent 2020, here's a big-picture look at five key AI and machine learning trends – not just in the types of applications they are finding their way into, but also in how they are being developed and the ways they are being used.

Hyperautomation, an IT mega-trend identified by market research firm Gartner, is the idea that almost anything within an organization that can be automated – such as legacy business processes – should be automated.
The word on the street is that if you don't invest in ML as a company or become an ML specialist, the industry will leave you behind. The hype has caught on at every level, from undergrads to VCs. Words like "revolutionary," "innovative," "disruptive," and "lucrative" are frequently used to describe ML. Allow me to share some perspective from my own experience that will hopefully temper this enthusiasm, at least a tiny bit. This essay materialized from having the same conversation several times over with interlocutors who hope ML can unlock a bright future for them. I'm here to convince you that investing in an ML department or in ML specialists might not be in your best interest. That is not always true, of course, so read this with a critical eye.

The names invoke a sense of extraordinary success, and for good reason. Yet these companies dominated their industries before Andrew Ng launched his first ML lectures on Coursera. The difference between "good enough" and "state-of-the-art" machine learning is significant in academic publications but not in the real world. About once or twice a year, something pops into my newsfeed informing me that someone has improved top-1 ImageNet accuracy from 86% to 87% or so. Our community enshrines the state of the art with almost religious significance, so the systematic improvement of this score creates an impression that our field is racing towards unlocking the singularity. No one outside of academia cares if you can distinguish between a guitar and a ukulele 1% better. Sit back and think about that for a minute.
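To make the metric concrete: top-1 accuracy simply counts how often a classifier's single highest-scoring class matches the true label. The scores and labels below are made-up toy values, not ImageNet data; this is a minimal sketch of the metric, not a benchmark.

```python
def top1_accuracy(scores, labels):
    """Fraction of examples where the highest-scoring class is the true label."""
    preds = [max(range(len(row)), key=row.__getitem__) for row in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Toy class scores for three examples over four classes (not real model output).
scores = [
    [0.1, 0.7, 0.1, 0.1],  # argmax -> class 1
    [0.6, 0.2, 0.1, 0.1],  # argmax -> class 0
    [0.2, 0.2, 0.5, 0.1],  # argmax -> class 2
]
labels = [1, 0, 3]  # the model is right on 2 of the 3 examples

print(top1_accuracy(scores, labels))  # 0.6666666666666666
```

Seen this way, a one-point gain on the metric means one more correct top guess per hundred images, which is exactly the kind of improvement that matters for a leaderboard but rarely changes a product.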
This idea could be interpreted as rather bleak; are we doomed to repeat the errors of the past until we correct them? We certainly do need to learn and re-learn life lessons--whether in our work, relationships, finances, health, or other areas--in order to grow as people. Zooming out, the same phenomenon exists on a much bigger scale--that of our collective human history. We like to think we're improving as a species, but we haven't yet come close to doing away with the conflicts and injustices that plagued our ancestors. What might happen over the course of this year, and what information would we use to make educated guesses about it? The editorial team at The Economist took a unique approach to answering these questions.
Far from being an exhortation to Luddites (the English workers of the 19th century who destroyed textile machinery as a form of protest) or to some sort of technophobic movement, the provocative pun in the title of this article carries a methodological proposal in the field of the critical theory of information: to build a diagnosis of the algorithmic filtering of information, which reveals itself to be a structural characteristic of the new information regime and which poses challenges to human emancipation. Our analysis starts from the concept of mediation to problematize a belief widespread in much of contemporary society: that applying machine learning and deep learning techniques to the algorithmic filtering of big data will provide answers and solutions to all our questions and problems. We will argue that the algorithmic mediation of information on the internet, which decides which information we can access and which remains invisible, operates according to the economic interests of the companies that control the platforms we visit, acting as an obstacle to the informational diversity and autonomy that are fundamental to free and democratic societies.
The application of emerging technologies such as AI, cloud, blockchain and IoT in financial services has altered the traditional operating models of financial institutions, the competitive dynamics of the industry, the role of people in those institutions and the landscape of the financial system as a whole. In fact, AI is positioned as an essential investment, with the World Economic Forum arguing that it is set to become central to the fabric of financial institutions. While the adoption of AI in financial services may be in its infancy, the use cases are ever growing. From recommending loan and credit offerings to detecting fraud, 94% of financial services firms in European and Middle Eastern markets believe that AI will disrupt their business. The direction and awareness of AI are clear, and it is essential that companies invest now, though not hastily: a rushed adoption process is marred by pitfalls.
In recent years, technology has increasingly been used in a range of ways to make construction more efficient and innovative. It is no longer unusual to fly a drone over a construction site, to optimise work schedules, to improve workplace safety, or to choose the best setting based on predictions. Despite a slow initial pace of adoption, construction leaders are beginning to take a greater interest in the transformative prospects of AI. In the coming years, expect the rate of tech acceptance to quicken as applications and products targeted at construction continue hitting the market. Most megaprojects go over budget despite employing the best project teams.
We are seeing overwhelming growth in AI/ML systems built to process the oceans of data generated in the new digital economy. However, with this growth comes a need to seriously consider the ethical and legal implications of AI. As we entrust increasingly sophisticated and important tasks to AI systems, such as automatic loan approval, we must be certain that these systems are responsible and trustworthy. Reducing bias in AI has become a major area of focus for many researchers and has huge ethical implications, as does the amount of autonomy that we give these systems. The concept of Responsible AI is an important framework that can help build trust in your AI deployments.
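As one concrete illustration of what "reducing bias" can mean in a loan-approval setting, the sketch below computes the demographic parity difference: the gap in approval rates between two groups of applicants. The decisions and group labels are fabricated toy data, and this is just one of many fairness metrics a Responsible AI process might track, not a complete audit.

```python
def demographic_parity_difference(approved, group):
    """Absolute gap in approval rates between group 0 and group 1."""
    rates = []
    for g in (0, 1):
        # Collect the approve/deny decisions for applicants in group g.
        decisions = [a for a, grp in zip(approved, group) if grp == g]
        rates.append(sum(decisions) / len(decisions))
    return abs(rates[0] - rates[1])

# Fabricated loan decisions (1 = approved) for applicants in two groups.
approved = [1, 1, 0, 1, 0, 0, 1, 0]
group    = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(approved, group))  # 0.5
```

Here group 0 is approved 75% of the time and group 1 only 25% of the time, a gap a responsible deployment would want to investigate before trusting the system with real loan decisions.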
According to the Digital Banking Report, the use of artificial intelligence (AI) by financial institutions of all sizes continues to escalate, as banks and credit unions better understand the benefits of the technology in reducing risk, improving operations, and enhancing the customer experience. While most organizations recognize that they are in the early stages of development, the pandemic has only accelerated this deployment. Much of the activity around the use of data, AI and machine learning in the banking industry has traditionally revolved around risk and fraud mitigation. More recently, a growing number of organizations have recognized how firms within and outside the financial services industry have used AI to improve personalization, customer communication and engagement. According to research by MIT Sloan Management Review and the Boston Consulting Group, the challenge of achieving data and analytics maturity has moved from understanding the power of AI to actually recognizing the financial benefits of the technology. Unfortunately, the research found that only 10% of companies that deploy AI actually realize significant financial benefits.
Artificial intelligence is no longer a buzz phrase -- it's doing real work for real companies. Even in the early stages of implementation, AI is providing enterprise organizations with benefits: efficiency in operations, cybersecurity protections, digital innovation, and stronger customer relationships. Next up for AI in the enterprise is the ability to scale, with more apps serving more departments. However, the race to implement AI and machine learning also raises citizen privacy concerns, and there have been revelations about the potential for algorithmic bias reflected in data sources.
AI has blasted its way into the public consciousness and our everyday lives. It is powering advances in medicine, weather prediction, factory automation, and self-driving cars. Even golf club manufacturers report that AI is now designing their clubs. Google Translate helps us understand foreign language webpages and talk to Uber drivers in foreign countries. Vendors have built speech recognition into many apps.