From virtual assistants like Siri and Alexa to chatbots created by Facebook and Drift, AI is having a significant impact on the lives of consumers. A study from Statista projected that the number of consumers using virtual assistants worldwide would exceed one billion in 2018. Additionally, a 2018 survey by Accenture projected that 37 percent of U.S. consumers would own a digital voice assistant (DVA) device by the end of 2018. It is readily apparent how AI-powered technology is making inroads into everyday life through DVAs and other consumer products, but AI is also having a transformative effect on an industry that touches virtually all consumers and businesses: banking. Here are five ways that AI is already transforming the banking industry.
One of the most common refrains about fighting in cyberspace is that the offense has the advantage over the defense: The offense only needs to succeed once, while the defense needs to be perfect all the time. Even though this has always been a bit of an exaggeration, we believe artificial intelligence has the potential to dramatically improve cyber defense and help right the offense-defense balance in cyberspace.
Any system where humans interact with technology involves a tradeoff: security versus accessibility. The more secure the system, the more difficult it is to access. This poses a dilemma for any organization facing pressure to embrace anytime, anywhere accessibility, the mobile workplace and real-time interaction with customers and employees--and that describes almost every organization today. Advances in artificial intelligence (AI)--and the millions of data points created by the Internet of Things--are starting to change the nature of this tradeoff, particularly where trust is part of the product or service. As AI systems learn more, they can be trained to suggest next best actions, automate some repetitive tasks and minimize the greatest risk: human error.
As businesses struggle to combat increasingly sophisticated cybersecurity attacks, whose severity is exacerbated both by the vanishing IT perimeters of today's mobile and IoT era and by an acute shortage of skilled security professionals, IT security teams need a new approach and powerful new tools to protect data and other high-value assets. Increasingly, they are looking to artificial intelligence (AI) as a key weapon to win the battle against stealthy threats inside their IT infrastructures, according to a new global research study conducted by the Ponemon Institute on behalf of Aruba, a Hewlett Packard Enterprise company. The Ponemon Institute study, entitled "Closing the IT Security Gap with Automation & AI in the Era of IoT," surveyed 4,000 security and IT professionals across the Americas, Europe and Asia to understand what makes security deficiencies so hard to fix, and what types of technologies and processes are needed to stay a step ahead of bad actors within the new threat landscape. The research revealed that in the quest to protect data and other high-value assets, security systems incorporating machine learning and other AI-based technologies are essential for detecting and stopping attacks that target users and IoT devices.
SAP has released its guiding principles for artificial intelligence (AI). Recognizing the significant impact of AI on people, our customers, and wider society, SAP designed these guiding principles to steer the development and deployment of our AI software to help the world run better and improve people's lives. For us, these guidelines are a commitment to move beyond what is legally required and to begin a deep and continuous engagement with the wider ethical and socioeconomic challenges of AI. We look forward to expanding our conversations with customers, partners, employees, legislative bodies, and civil society, and to making our guiding principles an evolving reflection of these discussions and the ever-changing technological landscape. We recognize that, as with any technology, there is scope for AI to be used in ways that are not aligned with these guiding principles and the operational guidelines we are developing.
Imanis Data, the leader in enterprise data management powered by machine learning, today announced a major upgrade to the Imanis Data Management Platform, continuing the company's momentum since raising $13.5 million in Series B funding earlier this year. The new Version 4.0 includes multiple industry firsts, including autonomous backup, any-point-in-time recovery for multiple NoSQL databases, enhanced ransomware prevention, as well as numerous Imanis Data management enhancements. Hadoop and NoSQL applications are running in virtually every enterprise, on-premises and in the cloud, but they lack enterprise data management capabilities, exposing organizations to data loss, downtime, and cyberattacks. "According to our research, 78% of organizations currently use NoSQL databases and an additional 18% plan to in the future," said Christophe Bertrand, senior analyst for data protection at ESG Research. "The data protection market in this space is underserved by traditional vendors, and Imanis Data with their unique machine learning approach is setting the bar for Hadoop and NoSQL enterprise data management."
In the next four years, more than 75 million jobs may be lost as companies shift to more automation, according to new estimates by the World Economic Forum. But the projections have an upside: 133 million new jobs will emerge during that period, as businesses develop a new division of labor between people and machines. The Future of Jobs Report arrives as the rising tide of automation is expected to displace millions of American workers in the long term and as corporations, educational institutions and elected officials grapple with a global technological shift that may leave many people behind. The report, published Monday, envisions massive changes in the worldwide workforce as businesses expand the use of artificial intelligence and automation in their operations. Machines currently account for 29 percent of the total hours worked in major industries, compared with the 71 percent performed by people.
The three college-age defendants behind the creation of the Mirai botnet--an online tool that wreaked destruction across the internet in the fall of 2016 with unprecedentedly powerful distributed denial of service attacks--will stand in an Alaska courtroom Tuesday and ask for a novel ruling from a federal judge: They hope to be sentenced to work for the FBI. Josiah White, Paras Jha, and Dalton Norman, who were all between 18 and 20 years old when they built and launched Mirai, pleaded guilty last December to creating the malware that hijacked hundreds of thousands of Internet of Things devices, uniting them as a digital army that began as a way to attack rival Minecraft video game hosts and evolved into an online tsunami of nefarious traffic that knocked entire web hosting companies offline. At the time, amid a presidential election already being targeted online by Russia, the attacks raised fears that an unknown adversary was preparing to lay waste to the internet. The original creators, panicking as they realized their invention was more powerful than they had imagined, released the code--a common tactic by hackers to ensure that if and when authorities catch them, they don't possess any code that isn't already publicly known that could help finger them as the inventors. That release in turn led to attacks by others throughout the fall, including one that made much of the internet unusable for the East Coast of the United States on an October Friday.
Artificial intelligence has a bias problem, and the way to fix it is by making the tech industry in the West "much more diverse," according to the head of AI and machine learning at the World Economic Forum. Just two to three years ago, very few people were raising ethical questions about the use of AI, Kay Firth-Butterfield told CNBC at the World Economic Forum's Annual Meeting of the New Champions in Tianjin, China. But ethical questions have now "come to the fore," she said. "That's partly because we have (the General Data Protection Regulation), obviously, in Europe, thinking about privacy, and also because there have been some obvious problems with some of the AI algorithms." Theoretically, machines are supposed to be unbiased.