An automated system that could assist a judge in predicting the outcome of a case would help expedite the judicial process. For such a system to be practically useful, its predictions should be explainable. To promote research in developing such a system, we introduce ILDC (Indian Legal Documents Corpus). ILDC is a large corpus of 35k Indian Supreme Court cases annotated with original court decisions. A portion of the corpus (a separate test set) is annotated with gold-standard explanations by legal experts. Based on ILDC, we propose the task of Court Judgment Prediction and Explanation (CJPE). The task requires an automated system to predict an explainable outcome of a case. We experiment with a battery of baseline models for case prediction and propose a hierarchical occlusion-based model for explainability. Our best prediction model has an accuracy of 78% versus 94% for human legal experts, pointing to the complexity of the prediction task. Analysis of the explanations produced by the proposed algorithm reveals a significant difference between the points of view of the algorithm and the legal experts when explaining the judgments, pointing to scope for future research.
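The occlusion idea mentioned above can be sketched generically: mask one unit of the input at a time and measure how much the model's score drops. The sketch below is a minimal illustration under stated assumptions — `score` is a toy stand-in for a trained judgment-prediction model, and the hierarchical variant proposed in the work is not reproduced here.

```python
# Generic occlusion-based attribution for text classification (a sketch,
# not the paper's actual model or its hierarchical formulation).

def score(sentences):
    """Toy stand-in for a classifier's confidence: fraction of
    sentences mentioning 'appeal' (purely illustrative)."""
    if not sentences:
        return 0.0
    return sum("appeal" in s for s in sentences) / len(sentences)

def occlusion_importance(sentences, score_fn):
    """Importance of each sentence = drop in the model's score
    when that sentence is occluded (removed) from the input."""
    base = score_fn(sentences)
    importances = []
    for i in range(len(sentences)):
        occluded = sentences[:i] + sentences[i + 1:]
        importances.append(base - score_fn(occluded))
    return importances

doc = ["the appeal is allowed", "costs are awarded", "the appeal succeeds"]
imp = occlusion_importance(doc, score)
```

Sentences whose removal lowers the score receive positive importance, which is the basic signal an occlusion-based explainer surfaces as an explanation.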
The Supreme Court's Artificial Intelligence Committee on Tuesday launched its Artificial Intelligence portal SUPACE (Supreme Court Portal for Assistance in Court's Efficiency). The event was attended by CJI Bobde, CJI-designate Justice NV Ramana, Justice Nageswara Rao, who is also the Chairman of the Supreme Court's AI Committee, and High Court judges. While launching the portal, the CJI called the system a 'perfect blend of human intelligence and machine learning' and 'a hybrid system' that works together with human intelligence. He stated that the system being launched is unique in that the interaction between human and machine creates remarkable results. During the event, the CJI addressed the objections and criticisms that Artificial Intelligence faces, as for most people it means automated decision-making.
Last fall, we learned that Washington, DC police used a previously-undisclosed facial recognition system to identify a protester who allegedly punched a law enforcement officer during the June 2020 Lafayette Square riots. Privacy advocates will be happy to know that the system, which was used by 14 federal and local agencies, is being shut down soon. As reported by The Washington Post, the National Capital Region Facial Recognition Investigative Leads System (NCRFRILS) will no longer be used thanks to a new Virginia state law that goes into effect on July 1st. The law is putting tighter restrictions on how local law enforcement agencies can use facial recognition; specifically, it requires agencies to get approval from the state legislature before using any facial recognition system. A spokesperson for the Metropolitan Washington Council of Governments described it as a pilot program that won't continue as it "depended on regional participation and financial support."
Some people think they are above the law. In a constitutional democracy this cannot be the case. Neither the head of state, nor the doctor, nor the police are above the law. They should all be enabled to do their work, but we do not buy the claim that they may act as they wish. In 18th-century Europe, we replaced authoritarian rule by law with the rule of law, to mitigate uninhibited power and to ensure that those in power can be held to account in a court of law.
Media bias can lead to increased political polarization, and thus the need for automatic mitigation methods is growing. Existing mitigation work displays articles from multiple news outlets to provide diverse news coverage, but without neutralizing the bias inherent in each of the displayed articles. Therefore, we propose a new task, generating a single neutralized article from multiple biased articles, to facilitate more efficient access to balanced and unbiased information. In this paper, we compile a new dataset, NeuWS, define an automatic evaluation metric, and provide baselines and multiple analyses to serve as a solid starting point for the proposed task. Lastly, we conduct a human evaluation to demonstrate the alignment between our metric and human judgment.
The year is 2029, and you wake up one morning living in a community called Hope, a dystopian dictatorship. "Everyone here wears the same outfit, lives the same repetitive routine, and is happy … For many, Hope is their entire universe. They are uninterested in the outside world. However, you are different--you have the ability to choose." This is how you are introduced to the game Name of the Will on Kickstarter.
There is increasing interest in the entwining of the field of antitrust with the field of Artificial Intelligence (AI), frequently referred to jointly as Antitrust and AI (AAI) in the research literature. This study focuses on the synergies between antitrust and AI, extending the literature by setting out the primary ways these two fields intersect: (1) the application of antitrust to AI, and (2) the application of AI to antitrust. To date, most of the existing research on this intermixing has concentrated on the former, namely the application of antitrust to AI, entailing how the marketplace will be altered by the advent of AI and the potential for adverse antitrust behaviors arising accordingly. Opting to explore the other side of this coin, this research closely examines the application of AI to antitrust and establishes an antitrust vigilance lifecycle into which AI is predicted to be substantively infused for purposes of enabling and bolstering antitrust detection, enforcement, and post-enforcement monitoring. Furthermore, a gradual and incremental injection of AI into antitrust vigilance is anticipated to occur as significant advances emerge amidst the Levels of Autonomy (LoA) for AI Legal Reasoning (AILR).
We propose a Distributional Approach to address Controlled Text Generation from pre-trained Language Models (LMs). This view permits us to define, in a single formal framework, "pointwise" and "distributional" constraints over the target LM -- to our knowledge, this is the first approach with such generality -- while minimizing KL divergence from the initial LM distribution. The optimal target distribution is then uniquely determined as an explicit EBM (Energy-Based Model) representation. From that optimal representation we then train the target controlled autoregressive LM through an adaptive distributional variant of Policy Gradient. We conduct a first set of experiments over pointwise constraints showing the advantages of our approach over a set of baselines, in terms of obtaining a controlled LM balancing constraint satisfaction with divergence from the initial LM (GPT-2). We then perform experiments over distributional constraints, a unique feature of our approach, demonstrating its potential as a remedy to the problem of bias in language models. Through an ablation study we show the effectiveness of our adaptive technique for obtaining faster convergence.
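The constrained-minimization setup described above follows the standard maximum-entropy / information-projection pattern, which can be written out as a sketch; the symbols here ($a$, $\phi_i$, $\bar{\mu}_i$, $\lambda_i$, $Z$) are generic notation assumed for illustration, not necessarily the paper's own.

```latex
% Let a(x) be the initial pre-trained LM and \phi_i feature functions
% (pointwise or distributional). The constraints fix feature moments
% under the target distribution p:
%
%   \mathbb{E}_{x \sim p}\,[\phi_i(x)] = \bar{\mu}_i \quad \text{for all } i.
%
% Minimizing \mathrm{KL}(p \,\|\, a) subject to these constraints yields
% an exponential-family solution, i.e. an explicit EBM:
%
%   p(x) \;=\; \frac{1}{Z}\, a(x)\, \exp\!\Big(\sum_i \lambda_i\, \phi_i(x)\Big),
%
% where the \lambda_i are chosen so the moment constraints hold and
% Z is the normalizing constant.
```

The EBM $p(x)$ is generally intractable to sample from directly, which is why a separate autoregressive LM is then trained toward it (here via the adaptive Policy Gradient variant the abstract describes).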