What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean one hasn't constrained it enough by demonstrating a correct output, and one needs to go further: writing the first few words or sentences of the target output may be necessary.
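The constraining technique above can be sketched as a small helper that assembles a completion-style prompt: the task framing, the passage, and then the first words of the desired answer, so the model continues in the intended mode rather than pivoting elsewhere. This is a minimal sketch assuming a plain text-completion API (the prompt string is sent verbatim); the function name and exact wording are illustrative, not from the source.

```python
def build_constrained_prompt(task_instruction, passage, answer_prefix):
    """Build a completion-style prompt pinned to one mode of output.

    Instead of only asking the question, we end the prompt with the
    opening words of the target answer ("answer_prefix"), so the model's
    continuation is constrained to that format.
    """
    return f'{task_instruction}\n\n"{passage}"\n\n{answer_prefix}'


# Illustrative summarization prompt in the style quoted above.
prompt = build_constrained_prompt(
    task_instruction="My second grader asked me what this passage means:",
    passage="The quick brown fox jumps over the lazy dog.",
    answer_prefix="I rephrased it for him, in plain language a second grader can understand:",
)
```

Because the prompt ends mid-answer, the model's most natural continuation is the rephrased passage itself, which is exactly the constraint the technique relies on.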
A few years ago, I was invited to Minnesota Public Radio to speak about various legal issues related to cybersecurity. To my left was Bruce Schneier, a famous and respected cybersecurity researcher and prolific author. There wasn't much disagreement between us during the interview, though I recall emphasizing a bit more the FTC's cybersecurity efforts, noting that I thought they were doing a pretty good job in the current regulatory vacuum, building a de facto common law as they went along. In his latest book, "Click Here to Kill Everybody," Schneier argues, among other things, that there is a systemic lack of security in all things computer (something he calls "Internet+", essentially an extension of IoT) and that what is needed to fix this is government intervention. Schneier's call for intervention comes in the form of a new government agency, one that has the ability to "coordinate and advise with other agencies" on the Internet+.
Fine-tuning language models, such as BERT, on domain-specific corpora has proven to be valuable in domains like scientific papers and biomedical text. In this paper, we show that fine-tuning BERT on legal documents similarly provides valuable improvements on NLP tasks in the legal domain. Demonstrating this outcome is significant for analyzing commercial agreements, because obtaining large legal corpora is challenging due to their confidential nature. As such, we show that having access to large legal corpora is a competitive advantage for commercial applications and for academic research on analyzing contracts.
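The domain-adaptive pretraining described above continues BERT's masked-language-model objective on unlabeled in-domain text. The core of that data preparation can be sketched self-contained: select ~15% of token positions, and of those replace 80% with [MASK], 10% with a random token, and leave 10% unchanged, with labels ignored everywhere else. The special-token id and vocabulary size below are illustrative assumptions, not BERT's actual values.

```python
import random

MASK_ID = 103        # illustrative [MASK] id; real tokenizers define their own
VOCAB_SIZE = 30522   # illustrative vocabulary size

def mask_for_mlm(token_ids, mask_prob=0.15, rng=None):
    """Prepare one example for BERT's masked-LM pretraining objective.

    ~15% of positions are selected; of those, 80% become [MASK], 10% a
    random token, and 10% stay unchanged. Labels hold the original token
    at selected positions and -100 (ignored by the loss) elsewhere.
    """
    rng = rng or random.Random(0)
    inputs = list(token_ids)
    labels = [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels[i] = tok                        # model must recover this token
            roll = rng.random()
            if roll < 0.8:
                inputs[i] = MASK_ID                # replace with [MASK]
            elif roll < 0.9:
                inputs[i] = rng.randrange(VOCAB_SIZE)  # replace with random token
            # else: keep the original token, but still predict it
    return inputs, labels
```

In practice, libraries such as Hugging Face `transformers` provide this masking via `DataCollatorForLanguageModeling`; the sketch simply makes the objective on a legal corpus explicit.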
September 13, 2019: Brain-machine interface (BMI) applications, be they noninvasive (positioned on the body) or invasive (inserted into the body), significantly amplify the liability concerns that we are already familiar with through experience with, for example, implantable medical devices. The liability-amplifying variable here is capability: the BMI's potential to cause wide-ranging harm is far greater than that of a legacy medical device. For instance, injecting a virus carrying nanobots to fight a disease or to carry out another mission differs fundamentally from implanting a pacemaker and carries a vastly greater intrinsic operational risk. Iterative liability, XAI, and the regulation of AI discussed in this post coalesce into a normative and legal safety net that can help mitigate the risks associated with BMI. July 19, 2019: Regulating AI behavior is necessary in order to mitigate harm.
The legal value of XAI can be significant, especially (though by no means exclusively) in mitigating developer and end-user liability.¹ Though it is somewhat early to presume that the perfect information model introduced in The Role of Explainable AI (XAI) in Regulating AI Behavior: Delivery of "Perfect Information" can be viewed as a mature standard, the model does possess the necessary proto qualities and can be reasonably viewed as a proto-standard. Therefore, a properly developed XAI is one that possesses, at a minimum, all the attributes of perfect information. And once that parameter is fixed, the XAI is deemed properly developed and ready to provide a variety of risk mitigation benefits. One example of how this can work is in dispositive-centric efforts, including in crafting safe harbors.
Beyond the classroom curriculum, many law schools are designing experiential modes of introducing law students to artificial intelligence. At Georgia State University School of Law, for instance, the Legal Analytics and Innovation Initiative gives law students a chance to collaborate closely with computer science and business students at the same university to design complex technologies that solve previously unsolvable legal problems (such as predicting to a high degree of accuracy how a particular judge will rule in cases defined by a large set of parameters). This kind of work not only has the potential to be a flow-through to the legal practitioner space, but could over time become a mechanism for law schools to "spin out" the kinds of revenue-generating start-up businesses that are a common facet of life science departments at research universities. These programs have also been shown (according to the programs' own statistics) to help law students land jobs at higher rates than the overall student body, no doubt because the intersection of technology and law is a rare and valuable skillset in the eyes of employers.
The explosion of AI capabilities and other emerging technologies is clearly transforming the practice of law. Can these technologies also be leveraged to prepare students for an evolving job market? Working closely with our partners at Thomson Reuters, we at Above the Law have been exploring the impact of AI and other technologies on law schools. We now invite you to explore Cognifying Legal Education, the first in a four-part, multimedia exploration of how artificial intelligence and similar innovations are reshaping the legal profession: Law2020.
Law students in the United States usually get a healthy dose of legal technology infused into their educations to prepare them for the future of law practice. Since most legal technology education, research, and innovation tends to happen here, students studying law in foreign countries may be at a disadvantage. One elite law school is trying to make sure its students don't get left behind.
This report sets out a series of strategic recommendations to the government, based on core pillars including data supply and exchange, skills and education, and developing an artificial intelligence infrastructure in the UK, with a view to growing the country's AI sector, a goal also supported by the recent Budget and the government's Industrial Strategy White Paper this week.
Jerry Kaplan does for the future what Jared Diamond did for the past: He pulls together our human (or humanoid) fate in sparkling, often hilarious, prose. Kaplan begins by offering the non-scientific reader (me) a clear overview of the AI advances that are poised to make human workers obsolete, offering eye-popping examples explaining how the pace of technology is destined to overwhelm the human landscape of life and work. He then charts the changes that span FAR more than driverless cars. Mechanical robots (or what Kaplan calls "forged intelligences") will be more adept (and, of course, far more cost-effective) than humans at performing every routine job, from collecting our garbage to stocking our grocery shelves (making those physical stores quaint relics of the past). "Synthetic intelligences" (machines that think and analyze information) will outwit humans at making complex diagnoses or writing legal briefs, automating away many of the hapless law school or medical students spending decades accumulating those mountainous student debts.