If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
An academic and a lawyer have teamed up to develop a robot lawyer, which, if successful, will make legal advice affordable to people from all backgrounds, while revolutionising the legal sector. Robots could take on significant parts of a lawyer's work, reducing the costs and barriers to access to legal services for everyone, rather than just those who can afford the high costs. The project, at the University of Bradford, is initially working on a machine learning-based application to provide immigration-related legal advice, but if successful, it could be replicated across the legal sector. The idea has received government backing in the form of a £170,000 grant from Innovate UK Knowledge Transfer Partnerships. Legal firm AY&J Solicitors is providing a further £70,000 as well as the vital knowledge of lawyers.
The technology sector has been hit hard as of late, as the impending economic reopening has gotten more attention, and rising long-term bond rates have hit growth stocks particularly hard. As rates go up, future earnings are discounted more, harming valuations for growth stocks and increasing attention on value stocks that make profits today. And yet, technology will still play an ever-increasing role in society even post-pandemic. AI helps businesses make sense of their vast troves of data, glean insights, and react quickly in an automated fashion. As AI helps grow revenue and cut costs at the same time, it will be a mission-critical capability for any large company, even post-pandemic. But are there really any AI stocks that still trade at reasonable valuations, and which can handle the market's current value rotation?
Despite major disruptions from the ongoing COVID-19 pandemic, global investment in AI technologies grew by 40 percent in 2020 to $67.9 billion, up from $48.8 billion in 2019, as AI research and use continues to boom across broad segments of bioscience, healthcare, manufacturing and more. The figures, compiled as part of Stanford University's Artificial Intelligence Index Report 2021 on the state of AI research, development, implementation and use around the world, help illustrate the continually changing scope of the still-maturing technology. The 222-page AI Index 2021 report, touted as the school's fourth annual study of AI impact and progress, was released March 3 by Stanford's Institute for Human-Centered Artificial Intelligence. The report provides a detailed portrait of the AI waterfront last year, including increasing AI investments and use in medicine and healthcare, China's growth in AI research, huge gains in AI capabilities across industries, concerns about diversity among AI researchers, ongoing debates about AI ethics and more. "The impact of AI this past year was both societal and economic, driven by the increasingly rapid progress of the technology itself," AI Index co-chair Jack Clark said in a statement.
Artificial intelligence is becoming good at many "human" jobs (diagnosing disease, translating languages, providing customer service) and it's improving fast. This is raising reasonable fears that AI will ultimately replace human workers throughout the economy. Never before have digital tools been so responsive to us, nor we to our tools. While AI will radically alter how work gets done and who does it, the technology's larger impact will be in complementing and augmenting human capabilities, not replacing them. Certainly, many companies have used AI to automate processes, but those that deploy it mainly to displace employees will see only short-term productivity gains. In our research involving 1,500 companies, we found that firms achieve the most significant performance improvements when humans and machines work together. Through such collaborative intelligence, humans and AI actively enhance each other's complementary strengths: the leadership, teamwork, creativity, and social skills of the former, and the speed, scalability, and quantitative capabilities of the latter. What comes naturally to people (making a joke, for example) can be tricky for machines, and what's straightforward for machines (analyzing gigabytes of data) remains virtually impossible for humans.
On the opening day of Industrial Internet of Things (IIoT) platform provider Advantech's online conference, company representatives and other industry experts gathered to discuss new developments on the horizon for IIoT, artificial intelligence (AI), and industrial networking. In particular, many sessions focused on the hurdles that still remain if IIoT and associated Industry 4.0 technologies are to see ubiquitous adoption in the future. The Advantech Connect conference continues online through May 6. Perhaps the greatest take-away from the first day of the event was that, while the real bedrock of value provided by IIoT is to be found in the data it generates, nothing can be attained from it unless that data is effectively gathered, communicated, and analyzed. Through the improvements these technologies enable in data gathering, transmission, and analytics, Advantech envisions industry moving beyond IIoT and toward an Artificial Intelligence of Things (AIoT) that allows cloud-delivered applications to make real-time, autonomous decisions at the device level.
When it comes to artificial intelligence (AI) and machine learning (ML) in testing, much of the interest and innovation today revolves around the concept of using these technologies to improve and accelerate the practice of testing. The more interesting problem lies in how you should go about testing the AI/ML applications themselves. In particular, how can you tell whether or not a response is correct? Part of the answer involves new ways to look at functional testing, but testers face an even bigger problem: cognitive bias, the possibility that an application returns an incorrect or non-optimal result because of a systematic deviation in processing that produces results inconsistent with reality. This is very different from a bug, which you can define as an identifiable and measurable error in a process or result.
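The bug-versus-bias distinction above can be sketched in a few lines. This is a hedged illustration using an invented temperature-forecast model and made-up numbers: a bug is a single, reproducible failure you can assert on, while bias never fails any one case and only shows up as a statistical skew across many results.

```python
# Invented example: a "model's" forecasts vs. observed values.
# No single prediction is wildly wrong, yet every one leans high --
# a systematic skew no single-case functional test would catch.

def mean_error(predictions, actuals):
    """Average signed error across a batch of model outputs."""
    return sum(p - a for p, a in zip(predictions, actuals)) / len(predictions)

forecasts = [21.2, 19.4, 23.1, 20.3, 22.5]  # hypothetical model outputs
observed  = [21.0, 19.0, 22.9, 20.0, 22.1]  # hypothetical ground truth

skew = mean_error(forecasts, observed)
print(f"systematic skew: {skew:+.2f}")  # positive: consistent overestimation
```

Checking the sign and magnitude of aggregate error across many cases, rather than asserting each answer individually, is one way testers can surface this kind of systematic deviation.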
Toby Walsh, a professor of AI at the University of New South Wales in Sydney, told CNBC the dangers have only "become nearer and more serious" since the letter was published. "Autonomous weapons must be regulated," he said. The Future of Life Institute, a non-profit research institute in Boston, Massachusetts, said last month there are many positive military applications for AI but "delegating life and death decisions to autonomous weapon systems is not one of them." The institute pointed out that autonomous drones could be used for reconnaissance missions to avoid putting troops in danger, while AI could also be used to power defensive anti-missile guns which detect, target, and destroy incoming threats without a human command. "Neither application involves a machine selecting and attacking humans without an operator's green light," it said.
Data Science, Machine Learning, and Artificial Intelligence are the significant drivers of the fourth industrial revolution. Since data powers all three fields, the terms are often used interchangeably. However, despite the similarities, Data Science, ML and AI are different from each other. Data Science is a multidisciplinary field with a focus on the use of data to derive insights. A good data scientist must possess a wide range of skills, including programming, mathematics, and domain knowledge of the desired field of application.
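The "data to insights" workflow described above can be illustrated with a minimal sketch: raw records in, a quantitative conclusion out. The dataset here is entirely invented for the example.

```python
# Hypothetical data-science micro-example: monthly support-ticket
# counts before and after a product change (numbers are made up).
from statistics import mean

before = [120, 135, 128, 140]
after = [95, 88, 102, 90]

# The "insight": relative change in average monthly ticket volume.
change = (mean(after) - mean(before)) / mean(before)
print(f"Ticket volume changed by {change:.1%}")
```

Real data-science work adds cleaning, statistical validation, and domain judgment on top of a computation like this, which is why the field demands the breadth of skills listed above.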
IMAGE: SMU Assistant Professor Sun Qianru says highly diverse training data is critical to ensure the machine sees a wide range of examples and counterexamples that cancel out spurious patterns. SMU Office of Research and Tech Transfer - Artificial Intelligence, or AI, makes us look better in selfies, obediently tells us the weather when we ask Alexa for it, and powers self-driving cars. It is the technology that enables machines to learn from experience and perform human-like tasks. As a whole, AI contains many subfields, including natural language processing, computer vision, and deep learning. Most of the time, the specific technology at work is machine learning, which focuses on the development of algorithms that analyse data and make predictions, and which relies heavily on human supervision.
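The learn-from-labelled-examples loop described above ("human supervision") can be shown with a toy classifier. This is a hedged sketch, not any system from the article: the data is invented, and a one-nearest-neighbour rule stands in for the far more complex models used in practice.

```python
# Supervised learning in miniature: labelled examples in, predictions out.
# A one-nearest-neighbour rule keeps the idea visible with no libraries.

def predict(sample, training_data):
    """Return the label of the labelled example closest to `sample`."""
    nearest = min(
        training_data,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], sample)),
    )
    return nearest[1]

# Hypothetical labelled training set: (feature vector, human-supplied label).
training = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
            ((5.0, 5.2), "dog"), ((4.8, 5.1), "dog")]

print(predict((1.1, 1.0), training))  # a point near the "cat" examples
```

The labels come from humans, which is the supervision the passage refers to; Professor Sun's point about diverse training data applies directly, since a classifier like this can only echo whatever patterns, spurious or genuine, its examples contain.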