Services



AI, Machine Learning Increasingly Embraced by U.S. Carriers: LexisNexis - Carrier Management

#artificialintelligence

Artificial intelligence and machine learning are increasingly embraced by U.S. carriers as they seek to remain competitive and modernize their operations, a new LexisNexis Risk Solutions study has found. Carriers still struggle, however, to staff for the technology and to deploy it in ways that maximize its benefits. LexisNexis' look at how the top 100 U.S. carriers are using and benefiting from artificial intelligence and machine learning found robust adoption of the technology and a strong belief in the benefits it will bring. Approximately 62 percent of respondents said they worked for insurance carriers that have already adopted artificial intelligence (AI) and machine learning (ML) initiatives. About 75 percent said they believe AI and ML can give carriers a competitive advantage through better decision-making.


How AI in Ecommerce Enables True Personalization: Q&A With Elizabeth Gallagher of Lineate

#artificialintelligence

"It's machine learning's job to find patterns based on the data you give it to help you focus on the data points most likely to lead to conversion." Elizabeth Gallagher, chief revenue officer at Lineate, talks about how machine learning (ML) and artificial intelligence (AI) are changing the game for ecommerce brands. With predictive analytics, marketers can create personalized marketing campaigns. In this edition of MarTalk Connect, Gallagher shares the key data points marketers should use to provide personalized recommendations. She stresses how data-driven automation and machine learning are strategic assets for enhancing the customer journey.
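As an illustration of the pattern-finding Gallagher describes, a minimal sketch (not Lineate's actual system; the feature names and numbers below are invented) might score data points by how strongly they correlate with conversion:

```python
# Illustrative sketch: rank hypothetical customer data points by how
# strongly each correlates with conversion in a small synthetic dataset.

def correlation(xs, ys):
    # Pearson correlation between two equal-length numeric sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Each row: (email_opens, pages_viewed, cart_adds) and whether it converted.
sessions = [
    ((5, 12, 2), 1), ((0, 3, 0), 0), ((3, 8, 1), 1),
    ((1, 2, 0), 0), ((4, 10, 2), 1), ((0, 1, 0), 0),
]
features = ["email_opens", "pages_viewed", "cart_adds"]
X = [row for row, _ in sessions]
y = [label for _, label in sessions]

scores = {
    name: correlation([row[i] for row in X], y)
    for i, name in enumerate(features)
}
# Surface the data points most predictive of conversion first.
for name, score in sorted(scores.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {score:+.2f}")
```

A production system would of course use a trained model rather than raw correlations, but the goal is the same: narrowing attention to the signals most likely to lead to conversion.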


Prepare for a Long Battle against Deepfakes - KDnuggets

#artificialintelligence

When Stephen Hawking warned of the dangers of artificial intelligence in 2015, his concern was a superhuman AI that would pose an existential risk to humanity. But in recent years, a much more imminent danger of AI has emerged that even a genius like Hawking could not have predicted. Deepfakes depict people in videos they never appeared in, saying things they never said and doing things they never really did. Some of the harmless ones superimpose the actor Nicolas Cage's face onto those of his Hollywood peers, while the more serious and dangerous ones target politicians such as U.S. House Speaker Nancy Pelosi. Deeptrace, a cybersecurity startup based in Amsterdam, found 14,698 deepfakes in June and July, an 84% increase since December 2018, when the number of AI-manipulated videos was 7,964.


How Google used AI to supercharge Maps in 2019

#artificialintelligence

Google Maps celebrated its 15th birthday today by announcing a new milestone: in the last year, the company mapped as many buildings as it did in the previous decade. The service reached this landmark through a two-step process. First, staff worked with Google's data operations team to manually trace common building outlines. They then trained machine learning models to recognize the edges and shapes of buildings. Another recent deployment of machine learning enabled Maps to recognize handwritten building numbers so unclear that even a passerby in a car couldn't make them out.


Artificial intelligence set to jazz up software development and deployment - ZDNet

#artificialintelligence

Artificial intelligence and machine learning have the potential to boost many, many areas of the enterprise. As explored in my recent post, the technology is capable of accelerating and adding intelligence to supply chain management, human resources, sales, marketing and finance. The inevitable impact of AI on IT departments was touched on in a recent survey of 2,280 business leaders from MIT Sloan Management Review and SAS, which finds that in these early days of AI, IT professionals will feel the greatest impact -- both from a career and an operational point of view. CIOs, chief data officers, and chief analytics officers will be on the front lines of AI implementations, the study finds. IT road maps, software development, deployment processes, and data environments are likely to be transformed in the near future. Most IT managers report that they are still developing foundational capabilities for AI -- cloud or data center infrastructure, cybersecurity, data management, development processes and workflow.


A Decade Of Change: How Tech Evolved In The 2010s And What's In Store For The 2020s

#artificialintelligence

Significant technological advancements and societal shifts occurred during the 2010s. Yet many of these developments became so quickly ingrained in our daily lives that they often went relatively unnoticed, and their impact all but forgotten. Over this next decade, the 2020s, we expect similarly rapid and meaningful advancements to occur. Moore's law suggests that over a 10-year period, semiconductors will advance by 32 times, bringing about mesmerizing innovation in the digital age that should change not only technology but society as well. In this piece, we review the technological advancements of the last decade and anticipate what revolutionary changes may be in store for us over the next 10 years.
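The 32-times figure follows from Moore's law's commonly cited doubling period of roughly two years: a decade contains five doublings, and 2^5 = 32.

```python
# Moore's law as cited above: transistor counts double roughly every
# two years, so a decade holds 10 / 2 = 5 doublings.
years = 10
doubling_period_years = 2
improvement = 2 ** (years // doubling_period_years)
print(improvement)  # 32
```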


A16Z AI Playbook

#artificialintelligence

Precisely defining artificial intelligence is tricky. John McCarthy, writing for the inaugural summer research project in 1956, proposed that AI is the simulation of human intelligence by machines. Others have defined AI as the study of intelligent agents, human or not, that can perceive their environments and take actions to maximize their chances of achieving some goal. Jerry Kaplan wrestles with the question for an entire chapter in his book Artificial Intelligence: What Everyone Needs To Know before giving up on a succinct definition. Rather than try to define AI precisely, we'll simply differentiate AI's goals and techniques, starting with a distinction that is often blurred: some people use artificial intelligence and machine learning interchangeably.


The real test of an AI machine is when it can admit to not knowing something | John Naughton

#artificialintelligence

On Wednesday the European Commission launched a blizzard of proposals and policy papers under the general umbrella of "shaping Europe's digital future". The documents released included: a report on the safety and liability implications of artificial intelligence, the internet of things and robotics; a paper outlining the EU's strategy for data; and a white paper on "excellence and trust" in artificial intelligence. In their general tenor, the documents evoke the blend of technocracy, democratic piety and ambitiousness that is the hallmark of EU communications. That said, in terms of doing anything to bring tech companies under some kind of control, the European Commission is the only game in town. In a nice coincidence, the policy blitz came exactly 24 hours after Mark Zuckerberg, supreme leader of Facebook, accompanied by his bag-carrier – a guy called Nicholas Clegg who looked vaguely familiar – had called on the commission to graciously explain to its officials the correct way to regulate tech companies.


Facebook's AI speeds up natural language processing without additional training

#artificialintelligence

Natural language models typically have to solve two tough problems: mapping sentence prefixes to fixed-sized representations, and using those representations to predict the next word in the text. In a recent paper, researchers at Facebook AI Research assert that the first problem -- the mapping problem -- might be easier than the prediction problem, a hypothesis they build on to augment language models with a "nearest neighbors" retrieval mechanism. They say it allows rare patterns to be memorized and that it achieves a state-of-the-art perplexity score (a measure of how well a model predicts a sample of text; lower is better) with no additional training. As the researchers explain, language models assign probabilities to sequences of words: from a context sequence of tokens (e.g., words), they estimate the distribution (the probabilities of occurrence of different possible outcomes) over target tokens. The proposed approach -- kNN-LM -- maps a context to a fixed-length mathematical representation computed by the pre-trained language model.
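The retrieval mechanism can be sketched in miniature. In this toy version (the vectors, datastore entries, vocabulary, and interpolation weight are invented for illustration; the real system uses the LM's high-dimensional context vectors and a datastore built from billions of tokens), a distribution over next tokens is computed from distances to stored context representations and interpolated with the base model's distribution:

```python
import math

# Toy sketch of the kNN-LM idea: retrieve nearest stored contexts,
# turn their distances into a distribution over next tokens, and
# interpolate that with the pre-trained LM's own distribution.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

vocab = ["paris", "london", "tokyo"]

# Datastore: (context representation, observed next token) pairs,
# built offline by one forward pass of the pre-trained LM over a corpus.
datastore = [
    ([0.9, 0.1], "paris"),
    ([0.8, 0.2], "paris"),
    ([0.1, 0.9], "london"),
]

def knn_distribution(query, k=2):
    # k nearest neighbors by L2 distance; softmax over negative distances.
    nearest = sorted(datastore, key=lambda kv: l2(query, kv[0]))[:k]
    weights = softmax([-l2(query, key) for key, _ in nearest])
    dist = {w: 0.0 for w in vocab}
    for (key, token), weight in zip(nearest, weights):
        dist[token] += weight
    return dist

def knn_lm(query, p_lm, lam=0.25):
    # Interpolate the retrieval distribution with the base LM distribution.
    p_knn = knn_distribution(query)
    return {w: lam * p_knn[w] + (1 - lam) * p_lm[w] for w in vocab}

p_lm = {"paris": 0.5, "london": 0.3, "tokyo": 0.2}
print(knn_lm([0.85, 0.15], p_lm))
```

Because the datastore is built with a single forward pass and the interpolation happens at inference time, the base model's weights never change, which is the sense in which the gains come "with no additional training".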