If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Aside from the immature design of many AI applications, another ethical pitfall presents itself in the way we are using these applications. We are designing AI for tasks that could (and should) be done by humans and neglecting to use AI where it is urgently needed -- namely, in repairing our online ecosystem. It is human nature to have opinions. Many of us pride ourselves on the fact-based nature of our opinions -- whether they be based on news stories, personal experience, a book we read recently, etc. However, even the strongest opinions are subject to bias, and it is becoming easier than ever to bias our opinions through social media.
SAN FRANCISCO – A House subcommittee is investigating popular dating services such as Tinder and Bumble for allegedly allowing minors and sex offenders to use their services. Bumble, Grindr, The Meet Group and the Match Group, which owns such popular services as Tinder, Match.com and OkCupid, are the current targets of the investigation by the U.S. House Oversight and Reform subcommittee on economic and consumer policy. In separate letters Thursday to the companies, the subcommittee is seeking information on users' ages, procedures for verifying ages, and any complaints about assaults, rape or the use of the services by minors. It is also asking for the services' privacy policies and details on what users see when they review and agree to the policies. Although the minimum age for using internet services is typically 13 in the U.S., dating services generally require users to be at least 18 because of concerns about sexual predators.
KOTA KINABALU (Jan 30): Courts in Kota Kinabalu will be the first in the country to use data analysis by an artificial intelligence (AI) application in deciding on sentences, Minister in the Prime Minister's Department Datuk Liew Vui Keong said. He said the AI application would be used in a criminal case hearing on Feb 17. According to Liew, the use of the application was launched on Jan 17, after the opening of the legal year in Sabah and Sarawak by Chief Judge Tan Sri David Wong Kah Wah in Kuching, Sarawak. For a start, he said, AI would be used for drug possession offences under Section 12(2) of the Dangerous Drugs Act 1952 and rape offences under Section 376 of the Penal Code. "The technology will be able to analyse information to help judges and magistrates determine appropriate sentences on the accused. With AI, there will be less disparity and inconsistency when sentences are meted out," he told reporters after a briefing on the use of the AI application at the Kota Kinabalu Court today. The AI application would only serve as a guideline for judicial officials in making their decisions. Before sentencing, he said, the judge or magistrate would inform the defence lawyers and prosecutors of the result of the AI analysis. "However, the discretion of the judges or magistrates in imposing an appropriate sentence will not be affected by the implementation of this technology."
It's no secret that bias is present everywhere in our society, from our educational institutions to the criminal justice system. The manifestation of this bias can be as seemingly trivial as the timing of a judge's lunch break or, more often, as fraught as race or economic class. We tend to attribute such discrimination to our own internalized prejudices and our inability to make decisions in truly objective ways. Because of this, machine learning algorithms seem like a compelling solution: we can write software to look at the data, crunch the numbers, and tell us what decision we should make.
After a long period of neglect, Artificial Intelligence is once again at the center of most of our political, economic, and socio-cultural debates. Recent advances in the field of Artificial Neural Networks have led to a renaissance of dystopian and utopian speculations on an AI-rendered future. Algorithmic technologies are deployed for identifying potential terrorists through vast surveillance networks, for producing sentencing guidelines and recidivism risk profiles in criminal justice systems, for demographic and psychographic targeting of bodies for advertising or propaganda, and more generally for automating the analysis of language, text, and images. Against this background, the aim of this book is to discuss the heterogeneous conditions, implications, and effects of modern AI and Internet technologies in terms of their political dimension: What does it mean to critically investigate efforts of net politics in the age of machine learning algorithms?
Hundreds of law enforcement agencies across the US have started using a new facial recognition system from Clearview AI, a new investigation by The New York Times has revealed. The database is made up of billions of images scraped from millions of sites including Facebook, YouTube, and Venmo. The Times says that Clearview AI's work could "end privacy as we know it," and the piece is well worth a read in its entirety. The use of facial recognition systems by police is already a growing concern, but the scale of Clearview AI's database, not to mention the methods it used to assemble it, is particularly troubling. The Clearview system is built upon a database of over three billion images scraped from the internet, a process which may have violated websites' terms of service.
Recently, the emergence of the #MeToo trend on social media has empowered thousands of people to share their own sexual harassment experiences. This viral trend, in conjunction with the massive amount of personal information and content available on Twitter, presents a promising opportunity to extract data-driven insights to complement the ongoing survey-based studies about sexual harassment in college. In this paper, we analyze the influence of the #MeToo trend on a pool of college followers. The results show that the majority of topics embedded in those #MeToo tweets detail sexual harassment stories, and there exists a significant correlation between the prevalence of this trend and official reports in several major geographical regions. Furthermore, we discover the salient sentiments of the #MeToo tweets using deep semantic meaning representations and their implications for users experiencing different types of sexual harassment. We hope this study can raise further awareness regarding sexual misconduct in academia.
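The sentiment-analysis step above could be sketched, in a drastically simplified form, as scoring each tweet against word lists. The study itself uses deep semantic meaning representations; the lexicon and scoring rule below are purely hypothetical stand-ins for illustration.

```python
# Toy lexicon-based sentiment scoring for tweets -- a simplified stand-in
# for the deep semantic representations used in the study.
# Both word lists are hypothetical, for illustration only.
NEG = {"harassment", "assault", "afraid", "angry"}
POS = {"support", "empower", "hope", "brave"}

def sentiment_score(tweet: str) -> float:
    """Return a score in [-1, 1]: negative words pull down, positive up."""
    tokens = [t.strip("#.,!?-").lower() for t in tweet.split()]
    neg = sum(t in NEG for t in tokens)
    pos = sum(t in POS for t in tokens)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

tweets = [
    "#MeToo I finally spoke about my harassment experience",
    "Proud of everyone sharing, you are brave and have my support",
]
for t in tweets:
    print(round(sentiment_score(t), 2))  # -1.0, then 1.0
```

A real pipeline would replace the lexicon lookup with a learned sentence encoder, but the aggregation idea (one score per tweet, then correlation with regional report counts) stays the same.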
HR leaders predict how cultural, social, and technological shifts will impact the way people work in the coming year. Not too long ago, HR professionals were relegated to the realm of "personnel management"--paper-pushers responsible for administrative tasks and little else. But as organizations have grown and globalized in increasingly challenging environments, so has the role of human resources. Today's HR departments are deeply rooted in organizational planning and business strategy, more essential to the success of a company than ever before. HR leaders have made their way to the C-suite, guiding strategies that unite the goals of a business under one umbrella: talent.
Since its invention in 1970, email has undergone very little change. Its ease of use has made it the most common method of business communication, used by 3.7 billion users worldwide. Simultaneously, it has become the most targeted intrusion point for cybercriminals, with devastating outcomes. When initially envisioned, email was built for connectivity. Network communication was in its early days, and merely creating a digital alternative to mailboxes was revolutionary and difficult enough.
In this paper, we propose a novel automated model, called Vulnerability Index for Population at Risk (VIPAR) scores, to identify rare populations at risk of future shooting victimization. Similarly, the focused deterrence approach identifies vulnerable individuals and offers certain types of treatments (e.g., outreach services) to prevent violence in communities. The proposed rule-based engine model is the first AI-based model for victim prediction. This paper aims to compare the focused deterrence strategy list with the VIPAR score list regarding their predictive power for future shooting victimizations. Drawing on criminological studies, the model uses age, past criminal history, and peer influence as the main predictors of future violence. Social network analysis is employed to measure the influence of peers on the outcome variable. The model also uses logistic regression analysis to verify the variable selections. Our empirical results show that VIPAR scores predict 25.8% of future shooting victims and 32.2% of future shooting suspects, whereas the focused deterrence list predicts 13% of future shooting victims and 9.4% of future shooting suspects. The model outperforms the intelligence list of focused deterrence policies in predicting future fatal and non-fatal shootings. Furthermore, we discuss concerns about the right to the presumption of innocence.
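A score built from those three predictors (age, criminal history, peer influence) with a logistic link might look like the sketch below. The coefficients, thresholds, and caps are entirely hypothetical; the paper fits its model to historical shooting data, which is not reproduced here.

```python
import math

# Hypothetical coefficients -- illustrative only. A real VIPAR-style model
# would estimate these via logistic regression on historical data.
WEIGHTS = {"intercept": -4.0, "young": 1.2, "priors": 0.8, "peer_exposure": 1.5}

def vulnerability_score(age: int, prior_arrests: int, peer_victims: int) -> float:
    """Probability-style vulnerability score via a logistic (sigmoid) link."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["young"] * (1 if age < 25 else 0)   # youth indicator
         + WEIGHTS["priors"] * min(prior_arrests, 5)   # cap extreme counts
         + WEIGHTS["peer_exposure"] * min(peer_victims, 3))  # network measure
    return 1.0 / (1.0 + math.exp(-z))

# Rank individuals by score to form a prioritized outreach list
# (fictional records: name, age, prior arrests, victimized peers).
people = [("A", 19, 3, 2), ("B", 40, 0, 0), ("C", 22, 1, 3)]
ranked = sorted(people, key=lambda p: vulnerability_score(*p[1:]), reverse=True)
print([name for name, *_ in ranked])  # ['A', 'C', 'B']
```

The `peer_victims` input stands in for the social-network-analysis measure of peer influence described in the abstract; in practice it would be computed from co-arrest or co-victimization networks rather than supplied directly.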