San Francisco District Attorney George Gascón, left, announces a new AI tool that will curb racial biases when deciding criminal charges, alongside Alex Chohlas-Wood, right, who helped develop the tool. (Associated Press)

San Francisco says it will start using an artificial intelligence tool to reduce possible racial bias among prosecutors reviewing police reports, a "first-in-the-nation" use of a technology whose applications have been criticized for compounding bias. On Wednesday, District Attorney Gascón announced that on July 1 the city would begin to use a "bias mitigation tool" that automatically redacts anything in a police report that might be suggestive of race, from hair color to zip code. Information about the police officer, such as badge number, will also be hidden. Currently, the district attorney's office manually removes the first few pages of the report, but if any race details appear in the narrative--the section where the police officer describes the crime--prosecutors can see them. "This technology will reduce the threat that implicit bias poses to the purity of decisions which have serious ramifications for the accused, and that will help make our system of justice more fair and just," Gascón said.
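The article does not describe how the tool works internally, but the idea of automated redaction can be sketched in a few lines. The word lists and patterns below are purely illustrative assumptions, not the actual system's rules:

```python
import re

# Hypothetical, minimal sketch of automated report redaction. The real tool's
# method is not described in the article; here we simply mask a small,
# illustrative set of race-suggestive terms, zip codes, and badge numbers.
# (Officer names and many other identifying details would need far more
# sophisticated handling, e.g. named-entity recognition.)
RACE_TERMS = ["black", "white", "hispanic", "asian", "blond", "brunette"]  # illustrative only
PATTERNS = [
    (re.compile(r"\b(" + "|".join(RACE_TERMS) + r")\b", re.IGNORECASE), "[REDACTED]"),
    (re.compile(r"\b\d{5}(?:-\d{4})?\b"), "[ZIP REDACTED]"),            # zip codes
    (re.compile(r"\bbadge\s*#?\s*\d+\b", re.IGNORECASE), "[BADGE REDACTED]"),
]

def redact(report: str) -> str:
    """Apply each masking pattern to a police-report narrative."""
    for pattern, mask in PATTERNS:
        report = pattern.sub(mask, report)
    return report

print(redact("Suspect, blond hair, fled toward 94110; stopped by Officer Lee, badge #4521."))
```

Even this toy version shows the design tension the article implies: a redactor must catch indirect proxies for race (neighborhood, hairstyle, zip code), not just explicit labels.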
In a well-worn cliché, data is often referred to as "the new oil". The analogy is limited, but it does have some truth to it as data -- like oil -- is the defining resource for a new industrial age. Likewise, data seems set to be dominated by a small number of massive global players. For organisations hoping to become pioneers in artificial intelligence (AI) and data analytics, scale confers significant competitive advantages. Bigger companies will be better placed to build the bigger data sets that enable more sophisticated analysis to be performed more quickly.
When the European Union enacted the General Data Protection Regulation (GDPR) a year ago, one of the most revolutionary aspects of the regulation was the "right to be forgotten"--an often-hyped and debated right, sometimes perceived as empowering individuals to request the erasure of their information on the internet, most commonly from search engines or social networks. Since then, the issue of digital privacy has rarely been far from the spotlight. There is widespread debate in governments, boardrooms, and the media over how data is collected, stored, and used, and what ownership the public should have over their own information. But as we continue to grapple with this crucial issue, we've largely failed to address one of its most important aspects: how do we control our data once it's been fed into the artificial intelligence (AI) and machine-learning algorithms that are becoming omnipresent in our lives? (Darren Shou is vice president of research at Symantec.)
Countering digital fraud is a lot like playing whack-a-mole: As soon as one fraudster is taken out, two more pop up where they're least expected. Fighting bad actors is particularly challenging for those in the banking industry, which lost more than $31 billion to fraud in 2018 and is projected to lose even more as cybercriminals become more sophisticated. The popularity of digital banking services has created ample opportunities for bad actors, leaving banks scrambling to protect themselves against the rising tide of fraud. Faster payments have also contributed, as banks now have less time to identify fraudulent transactions. It's nearly impossible for human analysts to examine every sign of malfeasance with banks processing millions of transactions each day, but that is exactly where learning technologies like artificial intelligence (AI) and machine learning (ML) can help.
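Production fraud models are proprietary and far more elaborate than anything the article details, but the core idea--automatically flagging transactions that deviate from an account's normal pattern--can be illustrated with a deliberately simple statistical stand-in (the data and threshold below are invented for the example):

```python
from statistics import mean, stdev

# Illustrative sketch only: real bank fraud detection uses rich ML models
# over many features. Here we flag transaction amounts that sit far from
# an account's historical mean, the kind of screening AI automates at scale.
def flag_anomalies(amounts, threshold=2.0):
    """Return indices of amounts more than `threshold` std devs from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > threshold]

# Toy history: six routine purchases, then one wildly out-of-pattern transfer.
history = [42.10, 38.50, 45.00, 40.25, 39.90, 41.75, 5000.00]
print(flag_anomalies(history))  # flags the last transaction
```

The point of the sketch is the workflow, not the statistics: a machine screens millions of transactions and surfaces only the suspicious few for human analysts, which is exactly the division of labor the article describes.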
Despite having a $12 billion budget and being located adjacent to Silicon Valley, San Francisco doesn't always take advantage of the ways in which tech can improve civic life or the work of its city employees. But there is one office that is pushing the envelope and collaborating with programmers, nonprofits, and computer scientists with the vital goal of improving its criminal justice practices. Just last month District Attorney George Gascón announced that a partnership with Code for America had enabled his office to clear all old marijuana convictions made defunct with the passage of Proposition 64. And on Wednesday, he shared the news that a new collaboration with Stanford was in the works, to employ artificial intelligence as a means of mitigating implicit racial bias among his staff. If the words "artificial intelligence" combined with "criminal justice system" give you goosebumps, you're not alone.
Deloitte Global announced that Deloitte firms are now offering clients an artificial intelligence (AI) platform. The platform can monitor, measure, and analyze changes in tax regulation in real time to give Deloitte clients the edge in monitoring and responding to regulatory updates across the world. The AI platform is the result of a collaboration between Deloitte firms and Signal A.I. Signal's proprietary AI technology was trained by Deloitte tax experts to understand key regulatory changes in over 100 jurisdictions from over 100 regulators, tax authorities, and government bodies. "Tax professionals are being asked to do more with fewer resources--to stay ahead of more risks, draw insights from more data, track more regulations across more jurisdictions. To address these challenges, they are transforming how they operate and manage processes by implementing new technologies, such as AI, robotic process automation and natural language processing," says Conrad Young, Chief Digital Officer, Deloitte Global Tax & Legal.
Companies that take the time to meet with app developers will have a number of questions that need answering, many of them centered on artificial intelligence. Now that AI is commonly used by app developers, businesses of all sizes are doing their best to learn more about the benefits it can provide. What these businesses may not be aware of is the role that artificial intelligence will play in various criminal investigations. Truly experienced app developers will let their clients know what is taking place, but a business must make sure that it stays on the right side of the law.
As millions of security cameras become equipped with "video analytics" and other AI-infused technologies that allow computers not only to record but to "understand" the objects they're capturing, they could be used for both security and marketing purposes, the American Civil Liberties Union (ACLU) warned in a recent report, "The Dawn of Robot Surveillance." As the cameras become more advanced, their use is shifting from simply capturing and storing video "just in case" to actively evaluating it with real-time analytics for surveillance. Although the cameras are mostly under decentralized ownership and control, the ACLU cautioned policymakers to be proactive and create rules to regulate the potential negative impact this could have. The report also listed specific features that could allow for intrusive surveillance, along with recommendations to curtail potential abuse. The organization warned legislators to be wary of technologies such as human action recognition, anomaly detection, contextual understanding, emotion recognition, wide-area surveillance, and video search and summarization, among other changes in camera technology.
More and more organizations are beginning to use or expand their use of artificial intelligence (AI) tools and services in the workplace. Despite AI's proven potential for enhancing efficiency and decision-making, it has raised a host of issues in the workplace which, in turn, have prompted an array of federal and state regulatory efforts that are likely to increase in the near future. Artificial intelligence, defined very simply, involves machines performing tasks in a way that is intelligent. The AI field involves a number of subfields or forms of AI that solve complex problems associated with human intelligence--for example, machine learning (computers using data to make predictions), natural-language processing (computers processing and understanding a natural human language like English), and computer vision or image recognition (computers processing, identifying, and categorizing images based on their content). One area where AI is becoming increasingly prevalent is in talent acquisition and recruiting.
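The machine-learning definition above--"computers using data to make predictions"--can be made concrete with the smallest possible example: fitting a line to past data and predicting an unseen point. The data is invented for illustration; real recruiting models are vastly more complex (and raise the regulatory questions the article discusses):

```python
# Minimal illustration of "computers using data to make predictions":
# fit a line to observed points with ordinary least squares (pure stdlib),
# then predict a value the model has never seen.
def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx

# Toy data: hypothetical years of experience vs. some outcome score.
xs, ys = [1, 2, 3, 4], [2, 4, 6, 8]
slope, intercept = fit_line(xs, ys)
print(slope * 5 + intercept)  # prediction for x = 5
```

Every form of AI the article lists--ML, natural-language processing, computer vision--is ultimately this pattern scaled up: learn parameters from data, then apply them to new inputs.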