Law


"Hey, Update My Voice" Exposes Cyber Harassment.

#artificialintelligence

The "Hey, Update My Voice" movement, in partnership with UNESCO, was born out of this context with the goal of teaching respect towards virtual assistants and, in addition, asking tech companies to update their assistants' responses. Because if that happens to them, imagine what happens in real life to real women. Every day around the world, virtual assistants suffer abuse and harassment of all kinds. In Brazil, for example, Lu, the virtual assistant of Magazine Luiza stores, has been victimized by this sort of violence. Worldwide, cases have been reported involving Siri and Alexa, among others.


Artificial intelligence can boost compliance Investment Executive

#artificialintelligence

Over the past few years, the Canada Revenue Agency has been using data analytics and AI, such as machine-learning algorithms that predict tax non-compliance and detect activity in the underground economy. Since 2018, the Department of Justice Canada has licensed the use of Tax Foresight, AI software developed by Blue J Legal Inc. in Toronto, which employs machine learning to predict – with about 90% accuracy, according to the company – how a court might rule on a particular tax scenario. "It's not just about speeding up [analysis] that would otherwise happen," says Benjamin Alarie, co-founder and CEO of Blue J Legal and Osler Chair of Business Law at the University of Toronto. "It's about making [widely] available a really good prediction that would otherwise be the domain of an experienced [lawyer]." AI technology could bring more certainty to the interpretation of tax law, Alarie adds: "Everyone benefits from that."
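Tools like Tax Foresight frame a legal question as a supervised-learning problem: encode past cases as features, train on their outcomes, and output a probability for a new fact pattern. A minimal sketch of that idea (this is NOT Blue J Legal's actual model; the features, data, and labels below are invented for illustration):

```python
import numpy as np

# Toy logistic regression over hand-coded case features, trained by
# gradient descent on a tiny synthetic dataset. Hypothetical features:
# [documented_business_purpose, arms_length_pricing] as 0/1 flags.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0], dtype=float)  # 1 = taxpayer prevailed

w, b = np.zeros(2), 0.0
for _ in range(2000):                      # full-batch gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))     # predicted win probability
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

new_case = np.array([1.0, 1.0])            # both favourable facts present
prob = 1 / (1 + np.exp(-(new_case @ w + b)))
```

A production system would use far richer features and many real decisions, but the output has the same shape: a calibrated probability that a court rules a given way, which is what makes the "90% accuracy" claim testable against held-out cases.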


New study examines mortality costs of air pollution in US

#artificialintelligence

A team of University of Illinois researchers estimated the mortality costs associated with air pollution in the U.S. by developing and applying a novel machine learning-based method to estimate the life-years lost and cost associated with air pollution exposure. Scholars from the Gies College of Business at Illinois studied the causal effects of acute fine particulate matter exposure on mortality, health care use and medical costs among older Americans through Medicare data and a unique way of measuring air pollution via changes in local wind direction. The researchers - Tatyana Deryugina, Nolan Miller, David Molitor and Julian Reif - calculated that the reduction in particulate matter experienced between 1999 and 2013 resulted in elderly mortality reductions worth $24 billion annually by the end of that period. Garth Heutel of Georgia State University and the National Bureau of Economic Research was a co-author of the paper. "Our goal with this paper was to quantify the costs of air pollution on mortality in a particularly vulnerable population: the elderly," said Deryugina, a professor of finance who studies the health effects and distributional impact of air pollution.
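Using wind direction as a source of variation in pollution is an instrumental-variables idea: wind shifts pollution exposure but plausibly affects mortality only through pollution, so it can strip out confounding that biases a naive regression. A toy two-stage least squares (2SLS) sketch of that logic on simulated data (the variables and coefficients below are invented, not the paper's estimates):

```python
import numpy as np

# Simulate: an unobserved confounder raises both pollution and mortality,
# while wind shifts pollution only. True causal effect of pollution = 0.5.
rng = np.random.default_rng(0)
n = 5_000
wind = rng.normal(size=n)        # instrument
confound = rng.normal(size=n)    # unobserved; biases naive OLS upward
pollution = 1.0 * wind + confound + rng.normal(size=n)
mortality = 0.5 * pollution + confound + rng.normal(size=n)

# Stage 1: project pollution onto the instrument.
X1 = np.column_stack([np.ones(n), wind])
pollution_hat = X1 @ np.linalg.lstsq(X1, pollution, rcond=None)[0]

# Stage 2: regress mortality on the predicted (exogenous) pollution.
X2 = np.column_stack([np.ones(n), pollution_hat])
beta = np.linalg.lstsq(X2, mortality, rcond=None)[0]
# beta[1] recovers the true causal effect (~0.5) despite the confounder
```

The paper's actual method layers machine learning on top of this kind of design to estimate life-years lost, but the wind-direction instrument is doing the same conceptual work as the first stage here.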


London Cops Will Use Facial Recognition to Hunt Suspects

#artificialintelligence

There will soon be a new bobby on the beat in London: artificial intelligence. London's Metropolitan Police said Friday that it will deploy facial recognition technology to find wanted criminals and missing persons. It said the technology will be deployed at "specific locations," each with a "bespoke watch list" of wanted persons, mostly violent offenders. However, a spokesperson was unable to specify how many facial recognition systems will be used, where, or how frequently. The Met said use of the technology would be publicized beforehand and marked by signs on site.


Rogue NYPD cops are using facial recognition app Clearview

#artificialintelligence

Rogue NYPD officers are using sketchy facial recognition software on their personal phones that the department's own facial recognition unit doesn't want to touch because of concerns about security and potential for abuse, The Post has learned. Clearview AI, which has scraped millions of photos from social media and other public sources for its facial recognition program -- earning a cease-and-desist order from Twitter -- has been pitching itself to law enforcement organizations across the country, including the NYPD. The department's facial recognition unit tried out the app in early 2019 as part of a complimentary 90-day trial but ultimately passed on it, citing a variety of concerns. Those include app creator Hoan Ton-That's ties to viddyho.com, which was involved in a widespread phishing scam in 2009, according to police sources and reports. The NYPD was also concerned because Clearview could not say who had access to images once police loaded them into the company's massive database, sources said.


Artificial intelligence: EU must ensure a fair and safe use for consumers

#artificialintelligence

Parliament's Internal Market and Consumer Protection Committee approved on Thursday a resolution addressing several challenges arising from the rapid development of artificial intelligence (AI) and automated decision-making (ADM) technologies. When consumers interact with an ADM system, they should be "properly informed about how it functions, about how to reach a human with decision-making powers, and about how the system's decisions can be checked and corrected", says the committee. Those systems should only use high-quality and unbiased data sets and "explainable and unbiased algorithms" in order to boost consumer trust and acceptance, states the resolution. Review structures should be set up to remedy possible mistakes in automated decisions. It should also be possible for consumers to seek human review of, and redress for, automated decisions that are final and permanent.


London Police Roll Out Facial Recognition Technology

#artificialintelligence

The London Metropolitan Police has announced that it intends to begin using Live Facial Recognition (LFR) technology in various parts of the UK's capital city. The police explained that the technology will be "intelligence-led and deployed to specific locations in London," used for five to six hours at a time, with bespoke lists drawn up of "wanted individuals." As the BBC reports, the police claim the technology is able to identify 70 percent of wanted suspects while only generating false alerts once per 1,000 people detected by the system. The cameras will be rolled out within a month and clearly signposted. Police officers are going to hand out leaflets about the facial recognition technology and consult with local communities.
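The two quoted figures (70 percent of suspects identified, one false alert per 1,000 people scanned) interact with how rare suspects are in a crowd. A quick base-rate sketch (the crowd sizes below are hypothetical, not the Met's figures) shows that even a low false-alert rate can produce more false alerts than true ones:

```python
# Expected alert counts for a crowd of a given size, assuming the claimed
# 70% hit rate on true suspects and 1-in-1,000 false alerts on everyone else.
def expected_alerts(scanned, suspects, hit_rate=0.70, false_rate=1 / 1000):
    true_alerts = suspects * hit_rate
    false_alerts = (scanned - suspects) * false_rate
    return true_alerts, false_alerts

# Hypothetical deployment: 10,000 faces scanned, 5 genuine suspects present.
true_a, false_a = expected_alerts(scanned=10_000, suspects=5)
# 3.5 expected true alerts vs ~10 expected false alerts:
# at low suspect prevalence, most alerts officers act on are false.
```

This base-rate effect is why per-detection error rates alone don't settle how the system performs in practice; what matters operationally is the ratio of true to false alerts at realistic crowd sizes.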


The battle for ethical AI at the world's biggest machine-learning conference

#artificialintelligence

Facial-recognition algorithms have been at the centre of privacy and ethics debates. Diversity and inclusion took centre stage at one of the world's major artificial-intelligence (AI) conferences in 2018. But at last month's Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, a meeting that once had a controversial reputation, attention shifted to another big issue in the field: ethics. The focus comes as AI research increasingly deals with ethical controversies surrounding the application of its technologies -- such as in predictive policing or facial recognition. Issues include tackling biases in algorithms that reflect existing patterns of discrimination in data, and avoiding harm to already vulnerable populations. "There is no such thing as a neutral tech platform," warned Celeste Kidd, a developmental psychologist at University of California, Berkeley, during her NeurIPS keynote talk about how algorithms can influence human beliefs.


Investorideas.com Newswire - AI Stock News: GBT (OTCPINK: GTCH) Is Expanding Its Autonomous Machines (Robotics) Research

#artificialintelligence

Newswire) GBT Technologies Inc. (OTCPINK: GTCH) ("GBT", or the "Company"), a company specializing in the development of Internet of Things (IoT) and Artificial Intelligence (AI) enabled networking and tracking technologies, including its GopherInsight wireless mesh network technology platform and its Avant! AI, for both mobile and fixed solutions, announced that it is expanding its autonomous machines research, working on the development of a dynamic simulation program for robots. Given the requirement for complex, real-time information analysis, dynamic simulation of autonomous machines is a must for advanced robotic systems development and prototyping. As part of GBT's ongoing robotics R&D activities, the Company is developing a new robotics simulation program to better emulate real-time robot control and functionality. A dynamic simulation for robots has strict requirements because it must model real-world physics in real time.
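The core of any dynamic robot simulation is a fixed-timestep loop: compute control torques, apply the dynamics, and integrate forward fast enough to keep pace with real time. A minimal sketch of that loop for a single robot joint (this is a generic illustration of the technique, not GBT's software; the controller gains and inertia are invented):

```python
# One robot joint modeled as a point inertia driven by a PD controller,
# integrated with semi-implicit Euler at a 1 kHz simulation rate.
def simulate(target, steps=2000, dt=0.001, inertia=0.1, kp=20.0, kd=2.0):
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        torque = kp * (target - pos) - kd * vel  # PD control law
        acc = torque / inertia                   # joint dynamics: tau = I * a
        vel += acc * dt                          # semi-implicit Euler:
        pos += vel * dt                          # update velocity, then position
    return pos

final = simulate(target=1.0)  # joint should settle near the 1.0 rad target
```

Real simulators add multi-body dynamics, contact, and sensor models, but the hard real-time requirement the release mentions comes from exactly this loop: each 1 ms step of physics must be computed in under 1 ms of wall-clock time.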