government & the courts


Man named Brett Kavanagh complains about having name like SCOTUS judge

Daily Mail

Sharing a name with a famous person can prompt endless jokes and comments -- but in these particularly politically charged times, having the same name as a political figure can be especially tiresome. That's something a young man from Kentucky named Brett Kavanagh has learned only too well in recent weeks: on Friday, Brett, 27, complained about the woes of having his name, prompting others with famous names to commiserate. Women named Siri and Alexa, and men named Michael Jackson and Bruce Lee, all tweeted about how hard it is to have a well-known name, and one reply pointed to a Scottish man named Steve Bannon -- no relation to Breitbart's Steve Bannon. This Brett, who works in customer service and lives in Louisville, spells his last name differently from new Supreme Court Justice Brett Kavanaugh, but their nearly identical names have still caused him trouble. 'This is a terrible time to be named Brett Kavanagh,' he tweeted.


Ford gives scientific explanation for her memory of alleged Kavanaugh incident

FOX News

Dr. Christine Blasey Ford responds to a question from Sen. Dianne Feinstein during testimony before the Senate Judiciary Committee on her sexual assault allegations against Supreme Court nominee Brett Kavanaugh. Christine Blasey Ford gave a detailed scientific explanation for her memory of the alleged incident involving Supreme Court nominee Judge Brett Kavanaugh at her highly anticipated Senate testimony Thursday. Senate Judiciary Committee Ranking Member Dianne Feinstein, D-Calif., pressed Ford over her level of certainty that it was, in fact, Kavanaugh who allegedly pinned her down 36 years ago, when they were in high school, and attempted to remove her clothing. "How are you so sure that it was he?" Feinstein asked. Ford, a California-based psychology professor, laid out a detailed scientific explanation.


Amazon's Face-Scanning Surveillance Software Contrasts With Its Privacy Stance

WSJ.com: WSJD - Technology

Face recognition is a stark example of a technology that is being deployed faster than society and the law can adopt new norms and rules. It lets governments and private enterprise track citizens anywhere there is a camera, even if they're not carrying any devices. In general, people who are in public don't have any legal expectation of privacy and can be photographed or recorded. Because of this, the technology has the potential to be more intrusive than phone tracking, the legality of which the U.S. Supreme Court will soon decide. There are only two states, Texas and Illinois, that limit private companies' ability to track people via their faces.


The Ethical Implications Of Artificial Intelligence

#artificialintelligence

Artificial intelligence is transforming the legal profession -- and that includes legal ethics. AI and similar cutting-edge technologies raise many complex ethical issues and challenges that lawyers ignore at their peril. At the same time, AI also holds out the promise of helping lawyers to meet their ethical obligations, serve their clients more effectively, and promote access to justice and the rule of law. What does AI mean for legal ethics, what should lawyers do to prepare for these changes, and how could AI help improve the legal profession? Together with our partners at Thomson Reuters, we at Above the Law have been examining these important subjects.


When Software Rules: Rule of Law in the Age of Artificial Intelligence

#artificialintelligence

Artificial Intelligence (AI) is changing how our society operates. AI now helps make judicial decisions and medical diagnoses, and it drives cars. The use of AI in our society also has important environmental implications. AI can help improve resource use and energy efficiency, predict extreme weather events, and aid in scientific research. But while AI has the potential to improve human interaction with the environment, it can also exacerbate existing environmental issues.


What do AI and blockchain mean for the rule of law?

#artificialintelligence

Digital services have frequently been in collision -- if not out-and-out conflict -- with the rule of law. But what happens when technologies such as deep learning software and self-executing code are in the driving seat of legal decisions? How can we be sure next-gen 'legal tech' systems are not unfairly biased against certain groups or individuals? And what skills will lawyers need to develop to be able to properly assess the quality of the justice flowing from data-driven decisions? While entrepreneurs have been eyeing traditional legal processes for some years now, with a cost-cutting gleam in their eye and the word 'streamline' on their lips, this early phase of legal innovation pales in significance beside the transformative potential of AI technologies that are already pushing their algorithmic fingers into legal processes -- and perhaps shifting the line of the law itself in the process.


Qualit\"atsma{\ss}e bin\"arer Klassifikationen im Bereich kriminalprognostischer Instrumente der vierten Generation

arXiv.org Machine Learning

This master's thesis discusses an important issue regarding how algorithmic decision making (ADM) is used in crime forecasting. In the United States, forecasting tools are widely used by the judiciary to make decisions about offenders' risk of recidivism; by relying on such tools, the judiciary depends on ADM to reach sound judgments about offenders. For this purpose, one widely used quality measure for machine learning techniques, the $AUC$ (area under the curve), is compared and contrasted with the $PPV_k$ (positive predictive value). Given the gravity of these judgments and the high dependence on ADM tools, it is necessary to evaluate the risk instruments that support such decisions. The evaluation is conducted with a common machine learning approach, a binary classifier, since it captures the binary outcome of the underlying juristic question. The thesis shows that the $PPV_k$ models the decisions of judges much better than the $AUC$. It therefore investigates whether there exists a classifier for which the $PPV_k$ deviates from the $AUC$ by a large margin, and shows that the deviation can reach 0.75. To test this deviation on a classifier already in use, data from the fourth-generation risk assessment tool COMPAS was used. The results were alarming: the two measures deviate from each other by 0.48. The risk assessment evaluation of the forecasting tools was thus conducted, carefully reviewed and examined. The thesis also discusses whether systems used for this kind of decision making should be socially accepted.
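To make the comparison concrete, here is a minimal sketch (not code from the thesis) that computes both measures for a toy binary classifier on synthetic risk scores. It reads $PPV_k$ as the positive predictive value among the $k$ highest-scored cases; the score distributions, the value of $k$, and the use of scikit-learn's roc_auc_score are illustrative assumptions.

```python
# Minimal sketch contrasting AUC with a top-k positive predictive value (PPV@k)
# on synthetic data. All numbers below are illustrative assumptions, not results
# or code from the thesis.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic "risk scores": label 1 = re-offended, 0 = did not.
n_pos, n_neg = 200, 800
y_true = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])

# Positives score somewhat higher on average, but with heavy overlap, so overall
# ranking quality (AUC) and top-of-list precision (PPV@k) need not agree.
y_score = np.concatenate([
    rng.normal(0.6, 0.2, n_pos),
    rng.normal(0.4, 0.2, n_neg),
])

# AUC: probability that a random positive is ranked above a random negative.
auc = roc_auc_score(y_true, y_score)

# PPV@k: precision among the k individuals flagged as highest risk.
k = 100
top_k = np.argsort(y_score)[::-1][:k]
ppv_k = y_true[top_k].mean()

print(f"AUC     = {auc:.3f}")
print(f"PPV@{k} = {ppv_k:.3f}")
print(f"|AUC - PPV@k| = {abs(auc - ppv_k):.3f}")
```

The gap between the two numbers illustrates the thesis's point: a classifier can rank cases reasonably well overall (a respectable $AUC$) while the precision among the individuals actually flagged as highest risk (the $PPV_k$) tells a different story, which matters when the flagged group is the one a judge acts on.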


Pa. Attorney General Probing How Data-Mining Firm Acquired Facebook Data

NPR

NPR's Mary Louise Kelly speaks with Pennsylvania Attorney General Josh Shapiro about his office's intent to look into how the data of 50 million Facebook users got into the hands of the political data-mining firm, Cambridge Analytica.


Artificial intelligence to enhance Australian judiciary system

#artificialintelligence

Sentences handed down by artificial intelligence would be fairer, more efficient, transparent and accurate than those of sitting judges, according to Swinburne researchers.

