If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
As AI and machine learning permeate every sphere of our lives, it becomes ever easier to celebrate these technologies. From entertainment to customer support to law enforcement, they provide humans with considerable help. Some of their capabilities are so impressive that they seem almost like magic to an outside observer. However, it's necessary to remember that, astonishing as machine learning-powered advances are, they are still a product created by us, humans. And we can't simply shed our personalities when developing anything, much less an AI – an algorithm that has to think on its own.
Every now and then, fraudulent activity masquerades as the genuine article, and the business world is no exception. Fraud detection means dealing with smoke and mirrors while leaving barely any room for error, and machine learning and AI have grown into forces to be reckoned with, giving enterprises hope of clearing the smoke and smashing the mirrors. Faced with a genuinely hard judgment – deciding whether a given activity is fraudulent or legitimate – and the need to combat even the most modern fraud tricks, organizations across banking, fintech, insurance, retail and other industries are using machine learning techniques for fraud detection to unearth subtle fraud patterns, detect anomalies and suspicious behavior, and prevent fraud. Among these techniques, what determines the choice between machine prediction, anomaly detection and behavioral analytics? When we are entrusted with this fraud detection and prevention task, data is the first stop in framing the solution strategy.
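To make the anomaly-detection idea concrete, here is a minimal sketch that flags transactions far from the median of a customer's spending history, measured in median-absolute-deviation (MAD) units. The data, feature choice and threshold are invented for illustration; production fraud systems use far richer features and models.

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of amounts whose robust z-score exceeds
    the threshold. MAD is used instead of standard deviation
    because a single huge outlier would inflate the latter and
    mask itself."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# Typical card spends with one outsized transfer slipped in.
history = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 9500.0, 44.1]
print(flag_anomalies(history))  # [6] -- the 9500.0 transfer
```

A robust statistic is the design point here: with a plain z-score, the 9500.0 outlier inflates the standard deviation enough to hide itself below a conventional cutoff.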
New Zealand Police has recruited an unusual new officer to the force: an AI cop called Ella. Ella is a life-like virtual assistant that uses real-time animation to emulate face-to-face interaction in an empathetic way. Its first day of work will be next Monday, when Ella will be stationed in the lobby of the force's national headquarters in Wellington. Its chief duties there will be welcoming visitors to the building, telling staff that they've arrived, and directing them to collect their passes. It can also talk to visitors about certain issues, such as the force's non-emergency number and police vetting procedures.
Societies often rely on human experts to take a wide variety of decisions affecting their members, from jail-or-release decisions taken by judges and stop-and-frisk decisions taken by police officers to accept-or-reject decisions taken by academics. In this context, each decision is taken by an expert who is typically chosen uniformly at random from a pool of experts. However, these decisions may be imperfect due to limited experience, implicit biases, or faulty probabilistic reasoning. Can we improve the accuracy and fairness of the overall decision making process by optimizing the assignment between experts and decisions? In this paper, we address the above problem from the perspective of sequential decision making and show that, for different fairness notions from the literature, it reduces to a sequence of (constrained) weighted bipartite matchings, which can be solved efficiently using algorithms with approximation guarantees.
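The core reduction in the abstract – assigning experts to decisions via weighted bipartite matching – can be sketched in a few lines. The brute-force search below is a stand-in for the efficient matching algorithms the paper relies on, and the accuracy matrix is invented for illustration.

```python
from itertools import permutations

def best_assignment(accuracy):
    """Assign each decision to one expert, maximising total
    estimated accuracy. accuracy[i][j] is expert i's estimated
    accuracy on decision j. Brute force over permutations is
    only viable for toy sizes; real solvers (e.g. the Hungarian
    algorithm) handle this in polynomial time."""
    n = len(accuracy)
    best_score, best_perm = float("-inf"), None
    for perm in permutations(range(n)):  # perm[j] = expert for decision j
        score = sum(accuracy[perm[j]][j] for j in range(n))
        if score > best_score:
            best_score, best_perm = score, perm
    return best_perm, best_score

acc = [
    [0.9, 0.6, 0.5],   # expert 0
    [0.4, 0.8, 0.7],   # expert 1
    [0.6, 0.5, 0.9],   # expert 2
]
perm, score = best_assignment(acc)
print(perm)  # (0, 1, 2): each expert gets the decision they are best at
```

Fairness constraints from the paper would further restrict which permutations are admissible; this sketch only shows the unconstrained objective.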
In the past, we've highlighted some West Coast AI research labs that we think are doing incredible work. Now, in an attempt to look past the dominating presence of Silicon Valley, we're turning our focus overseas and taking a closer look at some of the cutting-edge European AI research labs. Founded in 2015, the Alan Turing Institute is a fairly new research lab with a unique structure. Located within the British Library in London, it's a national institute comprising 13 universities and the UK Engineering and Physical Sciences Research Council. This structure creates an environment that promotes collaboration across disciplines.
A researcher from Queen's University Belfast has developed an algorithm that could help address the issue of artificial intelligence (AI) bias. Although AI has many applications, it also brings the risk of bias. Because AI is trained using large volumes of data, if this data contains human biases, AI algorithms will make connections based on them. For example, if shown images of doctors that are predominantly male, AI will learn that doctors are less likely to be female. This creates a significant issue when the technology is used in recruitment, insurance or policing, as there is a danger of it reinforcing existing bias rather than helping to eliminate it.
Amazon-owned home surveillance company Ring wants its users to be more 'neighborly' by sharing candid videos of their fellow citizens committing good deeds. On Tuesday, the company introduced a 'neighborly moments' category to the company's app - a tool that usually serves as a method for sharing 'suspicious' or apparently criminal behavior caught on camera with other Ring users and police. The category, as Ring puts it, is meant to boost attention to positive deeds in one's community like 'cleaning a driveway for a neighbor who is sick' or 'securing a delivered package when another neighbor is away.' 'At Ring, we want to make it even easier for you to share those good deeds with your community,' writes Ring. 'That's why we're launching "Neighborly Moments," a new posting category in the Neighbors app that lets you highlight these acts of kindness and help your community celebrate them together.' While users were free to post those moments in the Neighbors app before Tuesday, the newest feature allows them to tag them as a positive example, allowing other users to filter out 'suspicious' or 'crime' videos.
Wellington, Feb 12: New Zealand Police, in a step to modernise its services, unveiled its first artificial intelligence (AI) officer, named "Ella", on Wednesday. "Ella is a digital person that is powered by AI and uses real-time animation to emulate face-to-face interactions," said Police Commissioner Mike Bush. Ella, which stands for 'Electronic Lifelike Assistant', is part of two new digital kiosks the New Zealand Police has designed to help reduce queues in stations and to provide a modern way to connect with the public, NZ Herald reported. Ella will be stationed in the lobby of New Zealand Police's National Headquarters from next week, assisting the concierge team and talking to visitors about police topics such as the 105 non-emergency number and police vetting. Touch-screen electronic service points called Police Connect were also revealed on Wednesday; they will provide basic non-emergency services to the public 24 hours a day.
If Hoan Ton-That is feeling the pressure, he isn't showing it. Over the last month, fears about facial recognition technology and police surveillance have intensified, all thanks to Ton-That's startup, Clearview AI. First came a front-page investigation in The New York Times, revealing Clearview has been working with law enforcement agencies to match photos of unknown faces to people's online images. Next, cease-and-desist letters rolled in from tech giants Twitter, Google and Facebook. Lawmakers made inquiries and New Jersey enacted a statewide ban on law enforcement using Clearview while it looks into the software.
The police have unveiled their first AI officer, with hopes she'll soon be smiling and blinking out of screens in stations all around New Zealand. Ella, the artificial intelligence cop at the centre of the police's new digital services, was revealed at the police national headquarters in Wellington this morning. Ella, which stands for Electronic Lifelike Assistant, is part of two new digital kiosks police have designed to help reduce queues in stations and to provide a modern way to connect with the public. Designed as a mix of 26 different people, Ella is the brainchild of project manager Erin Greally, and will primarily be available only at the headquarters building in Molesworth St, where users can ask for information or be connected to whoever they're visiting. If the three-month pilot goes well, police hope to have Ella's friendly, CGI face spread across kiosks throughout the country.