If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
In phase one, DBS will work with Exiger to deploy DDIQ's AI-powered screening technology to initially enhance and complement the bank's customer screening processes for institutional and retail clients in key markets and segments, with a view to using this capability more broadly. Lam Chee Kin, Managing Director, and Head, Group Legal, Compliance and Secretariat at DBS Bank said, "Using AI to help manage risk in financial crime is a journey that involves many small, difficult steps but tremendous ambition and commitment to keep moving. It is incumbent for financial institutions and their like-minded partners to continue to strive to give customers great experiences yet be adversarial to criminals and terrorists." Brandon Daniels, President of Global Technology Markets at Exiger, said, "Banks are quickly recognizing that legacy systems and legacy technology will hold them back from achieving the next phase of growth and meeting increasingly demanding regulatory compliance requirements. DBS is cutting the path for traditional financial institutions to transform and compete in today's digital market. It's an honor to be a part of their leadership in financial services and to invest in what will set the standard for compliance departments across the world."
Naoto Ichihara, an Assurance partner for Ernst & Young ShinNihon LLC in the Tokyo office, always had a passion for programming. He develops models and systems for audit and was interested in how machine learning could be applied to accounting data. After surveying existing academic papers and algorithms, Naoto realized there was a better way to detect anomalies through machine learning, and he coded an AI solution that could sense anomalous entries in large databases -- the first of its kind in the auditing field. Though he had never imagined himself an inventor, the technology was patented, and Naoto built a team of auditors and developers to test and improve the solution's detection method. This innovative tool was named EY Helix GL Anomaly Detector, or Helix GLAD.
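Helix GLAD's actual algorithm is not disclosed in this account, but a minimal sketch of one classic approach to the same problem -- flagging ledger entries whose amounts are outliers for their account -- can use the robust modified z-score (median and MAD rather than mean and standard deviation, so a single huge entry cannot mask itself by inflating the baseline). The function name, data shape, and thresholds below are illustrative assumptions, not EY's method:

```python
from statistics import median

def flag_anomalous_entries(entries, threshold=3.5):
    """Flag ledger entries whose amount is an outlier for its account.

    entries: list of (account, amount) tuples.
    Uses the modified z-score 0.6745 * |x - median| / MAD, which stays
    reliable even when the outlier itself distorts the sample mean.
    Returns the set of indices of flagged entries.
    """
    by_account = {}
    for idx, (account, amount) in enumerate(entries):
        by_account.setdefault(account, []).append((idx, amount))

    flagged = set()
    for rows in by_account.values():
        amounts = [a for _, a in rows]
        if len(amounts) < 5:
            continue  # too few entries to estimate a baseline
        med = median(amounts)
        mad = median(abs(a - med) for a in amounts)
        if mad == 0:
            continue  # no spread to score against
        for idx, a in rows:
            if 0.6745 * abs(a - med) / mad > threshold:
                flagged.add(idx)
    return flagged
```

For example, an account with amounts 100, 105, 98, 102 and a single 5,000 entry would flag only the 5,000: a plain mean/standard-deviation z-score would miss it here, because the outlier drags both statistics toward itself.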
NEW YORK: Two paintings up for auction in New York highlight a growing interest in artificial intelligence-created works - a technique that could transform how art is made and viewed but is also stirring up passionate debate. The art world was stunned last year when an AI painting sold for $432,500, and auctioneers are keen to further test demand for computer-generated works. "Art is a true reflection of what our society, what our environment responds to," said Max Moore of Sotheby's. "And so it's just a natural continuation of the progression of art," he added. Sotheby's will put two paintings by the French art collective Obvious up for sale on Thursday, including 'Le Baron De Belamy'.
Any business in its right mind should be painfully aware of how much money it could bleed via skillful Business Email Compromise (BEC) scams, where fraudsters convincingly forge emails, invoices, contracts and letters to socially engineer the people who hold the purse strings. And any human in their right mind should be at least a little freaked out by how easy it now is to churn out convincing deepfake videos – including, say, of you, cast in an adult movie, or of your CEO saying things that… well, they would simply never say. Well, welcome to a hybrid version of those hoodwinks: deepfake audio, which was recently used in what's considered the first known case of an AI-generated voice impersonating a CEO to bilk a UK-based energy firm out of €220,000 (USD $243,000). The Wall Street Journal reports that sometime in March, the British CEO thought he had gotten a call from the CEO of his business's parent company, which is based in Germany. Whoever placed the call sounded legitimate.
Keeping kids safe while using technology at school is nothing new, but startups like Saasyan are using automation and artificial intelligence to up the ante on student security. The company offers subscription software that can be added to all devices at school to create a historical footprint of each student's computer use and ping teachers if any risks, from bullying to possible self-harm or violence, emerge. As well as filtering and customising certain key words, Margossian and his team of six have been working on artificial intelligence solutions that can tell when students are communicating in a risky way even if it's not showing up explicitly. "We have developed an AI agent that can figure out whether a sentence has any cyber bullying or self harm meaning in it. You can bully someone without using any swear words," Margossian says.
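Saasyan's model isn't public, but the idea of catching risky sentences that contain no banned keyword can be illustrated with a toy naive-Bayes text classifier: it scores a sentence by the word likelihoods it learned per label, so a phrase like "nobody would miss you" can rank as risky from word co-occurrence alone. The class name and the (invented) training sentences below are assumptions for illustration only:

```python
import math
from collections import Counter

class TinyTextClassifier:
    """Toy multinomial naive Bayes with Laplace smoothing.

    An illustrative stand-in for a risky-message detector: it learns
    per-label word frequencies, so no single banned keyword is needed
    for a sentence to score as risky.
    """

    def fit(self, samples):
        # samples: list of (text, label) pairs
        self.priors = Counter()
        self.counts = {}
        self.vocab = set()
        for text, label in samples:
            words = text.lower().split()
            self.priors[label] += 1
            self.vocab.update(words)
            self.counts.setdefault(label, Counter()).update(words)
        self.total = sum(self.priors.values())
        return self

    def predict(self, text):
        best_label, best_score = None, -math.inf
        for label, prior in self.priors.items():
            c = self.counts[label]
            denom = sum(c.values()) + len(self.vocab)
            score = math.log(prior / self.total)
            for w in text.lower().split():
                score += math.log((c[w] + 1) / denom)  # Laplace smoothing
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

Trained on a handful of labeled examples, the classifier flags "nobody would miss you" as risky even though none of its words is a slur or swear word - the point Margossian makes above. A production system would of course need far more data and a stronger model than this sketch.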
And we focus really on the athletic part of it. I think, though, that if you do a good job on the athletic part, which is also kind of the low-level part, you can make it easier for high-level AI to interact with you." In other words, it's much easier to direct a robot to take care of a task for you if you've already taught the robot how to stand, walk, navigate, and so on.
In my last blog we focussed on some of the problems with Artificial Intelligence (AI) and public trust that can be compounded by organisational issues such as dark data. This time round we're going to look at a couple of examples that demonstrate how AI can be used as a force for good. Over the past few months we have been working with the World Economic Forum (WEF) to test out some of the guidance on AI that we have been drafting with them. There have been a lot of lively debates as the use of AI is clearly divisive, especially when it comes to image processing. If we look at the UK there has been controversy recently over police using facial recognition techniques on CCTV footage to support the fight against crime.
Money laundering is big criminal business worldwide. Banks are tasked by the regulators with reducing the volume and value of money laundering over their services, but that's easier said than done. In response, many are now starting to use artificial intelligence (AI) to tune results, finding small anomalies within a large amount of data. In the fight against money laundering, banks need both scale and granularity. However, in most countries the regulatory requirements make it difficult to track the success of anti-money laundering (AML) projects.
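The AI systems banks deploy here are proprietary, but the kind of "small anomaly in a large amount of data" they hunt for can be shown with a rule-based baseline for one well-known pattern: structuring, i.e. repeated deposits kept just under a reporting threshold. The function name, threshold, and parameters below are illustrative assumptions; ML models aim to generalize beyond hand-written rules like this one:

```python
def flag_structuring(transactions, threshold=10_000, window=0.1, min_hits=3):
    """Flag customers with repeated deposits just under a reporting threshold.

    transactions: list of (customer_id, amount) pairs.
    A deposit "hits" if it falls within `window` (10%) below `threshold`
    without reaching it; customers with `min_hits` or more are flagged.
    Returns the set of flagged customer ids.
    """
    near_threshold = {}
    for customer, amount in transactions:
        if threshold * (1 - window) <= amount < threshold:
            near_threshold[customer] = near_threshold.get(customer, 0) + 1
    return {c for c, n in near_threshold.items() if n >= min_hits}
```

A customer making deposits of 9,500, 9,800 and 9,900 would be flagged, while one making a single 9,990 deposit or a 12,000 deposit would not - each transaction looks unremarkable on its own, which is why the pattern only emerges at scale across the whole book, the combination of scale and granularity the passage describes.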
"We are aware of the issue and are taking the necessary steps to address and resolve it," a Google spokesman said. "Mitigating bias from our systems is one of our A.I. principles, and is a top priority." Amazon, in a statement, said it "dedicates significant resources to ensuring our technology is highly accurate and reduces bias, including rigorous benchmarking, testing and investing in diverse training data." Researchers have long warned of bias in A.I. that learns from large amounts of data, including the facial recognition systems that are used by police departments and other government agencies as well as popular internet services from tech giants like Google and Facebook. In 2015, for example, the Google Photos app was caught labeling African-Americans as "gorillas."
If you're on an AI project team that has massive data that requires labeling for machine learning or deep learning, you're in a race to usable data. Outsourcing seems the easiest answer. But what happens when data labeling involves protected or private data? What are the security risks that come with outsourcing your data labeling? Here's the short answer: you'll need to take a close look at your data labeling service provider and ask some critical questions.