If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
With Artificial Intelligence (AI) being one of the hottest topics of 2017 for #InsurTech, here is a quick round-up of some of the takeaway points from the AI Summit I attended earlier as part of the 2017 InsurTech Rising event. Within the insurance industry, many recognise AI's importance, but the distinction between AI, Machine Learning (ML) and Deep Learning (DL) is still widely misunderstood. AI's current topicality is also fuelling the misbelief that it is new. ML, a subset of AI, is where machines learn a function from data, detecting patterns and trends that we as humans cannot always determine ourselves, and certainly not as quickly. DL, a subset of ML and thus of AI, is where neural networks, on a much bigger scale, work to think like humans.
Artificial Intelligence is the future of growth. There is sure to be at least one article in a newspaper, website, or blog daily on revolutionary advancements in Artificial Intelligence or one of its subfields disrupting established industries such as Fintech, banking, and law. In banking, the digital teams of all modern banks are planning to transform the customer experience with AI-based, chat-driven intelligent virtual assistants, i.e. bots. AI promises benefits, but it also poses urgent challenges (not threats, please make a note) that cut across almost every industry and business, whether software development, technical support, customer care, medicine, law, or factory and manufacturing work. The need of the hour is to upgrade our skill sets to exploit AI rather than compete with it.
California regulators are embracing a General Motors recommendation that would help makers of self-driving cars avoid paying for accidents and other trouble, raising concerns that the proposal will put an unfair burden on vehicle owners. If adopted, the regulations drafted by the California Department of Motor Vehicles would protect these carmakers from lawsuits in cases where vehicles haven't been maintained according to manufacturer specifications. That could open a loophole for automakers to skirt responsibility for accidents, injuries and deaths caused by defective autonomous vehicles, said Armand Feliciano, vice president of the Association of California Insurance Companies.
Amalgamating the latest technology of artificial intelligence, predictive analytics and cognitive messaging to serve millions of customers is now a winning strategy. Together, AI and regulation are paving the way for Fintech.
The term Artificial Intelligence (AI) has been around for a while. A quick search on the web reveals that the field of modern AI was born in 1950, when Alan Turing published a paper on thinking machines. Here we are, almost seven decades later, still in the early days of this emerging technology. Over the last few years, Google CEO Sundar Pichai has been speaking about the increasing role of AI in software, and it seems this year might be the inflection point for the field. In May 2017, Pichai explained how Google is taking an "AI-first" approach for several of its products.
At the beginning of the year, efforts to put driverless cars on California's streets looked like they were careening off course. Uber had defied state officials by failing to get permits to test its technology, then shipped its cars to Arizona to test them there. After four years, regulators were still struggling to write rules for testing cars without anyone in the driver's seat. Lawmakers and tech industry representatives worried that California was losing its grip on innovation in a sector primed for growth. Now, after this year's release of guidelines from the state Department of Motor Vehicles, the mood has changed.
The European Union's General Data Protection Regulation (GDPR), which comes into force on May 25, 2018, redefines how organizations are required to handle the collection and use of EU citizens' personal data. Debates around the GDPR focus mostly on the global reach of the legislation, the draconian fines it introduces, or its stricter rules on "informed consent" as a condition for processing personal data. However, one challenge the GDPR brings to companies is often overlooked: citizens' right to explanation. Legal details aside, the GDPR mandates that citizens are entitled to sufficient information about the automated systems used to process their personal data to make an informed decision about whether to opt out of such processing.
When it comes to the advancement of Artificial Intelligence, Elon Musk has made his opinion very clear: regulation will be key. The Tesla CEO has himself worked on driverless, or autonomous, vehicles, yet he still believes the new technology poses dangers alongside the positives it offers. On Monday, Musk took to Twitter to offer his insight once again. Responding to a video about how robot soldiers could make people safer, Musk used sarcasm to dismiss the claim. The video, from New Scientist, proposed that robot soldiers could make decisions free from emotion, especially fear, and so might fight better than humans. "Letting robots kill without human supervision could save lives," read the first sequence in the video.
The General Data Protection Regulation (GDPR), the European Union's sweeping new data privacy law, is triggering a lot of sleepless nights for CIOs grappling with how to effectively comply with the new regulations and help their organizations avoid potentially hefty penalties. The GDPR, which goes into effect May 25, 2018, requires all companies that collect data on citizens in EU countries to provide a "reasonable" level of protection for personal data. The ramifications for non-compliance are significant, with fines of up to 4% of a firm's global revenues. Companies that do business in Europe have been scrambling to put new processes and platforms in place to improve data security and facilitate GDPR compliance at a time when data volumes are exploding across legacy IT and multi-cloud environments. A logical starting point for GDPR compliance, therefore, is a full understanding of where data is stored and how it is used.
Researchers at Facebook shut down an artificial intelligence (AI) program after it created its own language, Digital Journal reports. The system developed code words to make communication more efficient, and researchers took it offline when they realized it was no longer using English. The incident, revealed in early July, puts Tesla CEO Elon Musk's warnings about AI in perspective. "AI is the rare case where I think we need to be proactive in regulation instead of reactive," Musk said at a meeting of the U.S. National Governors Association in July. "Because I think by the time we are reactive in AI regulation, it'll be too late." Facebook CEO Mark Zuckerberg has called Musk's warnings "pretty irresponsible," prompting Musk to respond that Zuckerberg's understanding of AI and its implications is "limited."