This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3), without human supervision. Popular KGs (e.g., Wikidata, NELL) are built in either a supervised or semi-supervised manner, requiring humans to create knowledge. Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training. The stored knowledge has enabled language models to improve downstream NLP tasks, e.g., answering questions and writing code and articles. In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs. We show that KGs can be constructed with a single forward pass of a pre-trained language model (without fine-tuning) over a corpus. We demonstrate the quality of the constructed KGs by comparing them to two human-created KGs (Wikidata, TAC KBP). Our KGs also provide open factual knowledge that is absent from existing KGs. Our code and KGs will be made publicly available.
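The abstract above can be illustrated with a toy sketch. This is not the paper's actual pipeline; the sentence, the attention matrix values, and the `extract_relation` heuristic are all fabricated for illustration. The idea: read a candidate (head, relation, tail) triple off the model's token-to-token attention scores, keeping the intermediate tokens that attend strongly toward the tail.

```python
# Toy sketch of attention-based triple extraction (a hypothetical
# simplification, not the authors' method). We fabricate an attention
# matrix instead of running a real language model.

tokens = ["Dylan", "is", "a", "songwriter", "."]

# Fabricated attention scores: attn[i][j] = how much token i attends to token j.
attn = [
    [0.0, 0.6, 0.1, 0.3, 0.0],   # "Dylan"
    [0.1, 0.0, 0.2, 0.7, 0.0],   # "is"
    [0.0, 0.1, 0.0, 0.8, 0.1],   # "a"
    [0.4, 0.2, 0.2, 0.0, 0.2],   # "songwriter"
    [0.1, 0.1, 0.1, 0.7, 0.0],   # "."
]

def extract_relation(tokens, attn, head_idx, tail_idx, threshold=0.15):
    """Collect intermediate tokens between head and tail whose attention
    toward some later position (up to the tail) exceeds the threshold."""
    relation = []
    for i in range(head_idx + 1, tail_idx):
        if max(attn[i][i + 1:tail_idx + 1]) >= threshold:
            relation.append(tokens[i])
    return (tokens[head_idx], " ".join(relation), tokens[tail_idx])

triple = extract_relation(tokens, attn, head_idx=0, tail_idx=3)
# triple == ("Dylan", "is a", "songwriter")
```

In a real setting, the attention matrix would come from a single forward pass of a pre-trained model, and the candidate triples would then be mapped against an entity vocabulary; here only the matching step is sketched.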
"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
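The seeding technique described above can be sketched in a few lines. This is a minimal illustration, not a working API client: `build_summarization_prompt` and the seed string "He said it means" are my own hypothetical choices, and the passage is a stand-in; the actual model call is omitted.

```python
# Sketch of "seeding the target output" when prompting a language model:
# append the first words of the desired completion so the model is
# constrained into the right mode instead of pivoting to another one.

PASSAGE = "In a hole in the ground there lived a hobbit."

def build_summarization_prompt(passage, seed="He said it means"):
    # The framing sentence follows the article; the seed is a hypothetical
    # opening for the answer we want the model to continue.
    return (
        "My second grader asked me what this passage means:\n"
        f'"{passage}"\n'
        f"{seed}"
    )

prompt = build_summarization_prompt(PASSAGE)
# The model would then be asked to complete `prompt`, continuing
# directly after "He said it means".
```

Because the prompt ends mid-sentence in the voice of the desired answer, the cheapest continuation for the model is the answer itself, which is the constraint the passage describes.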
The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release. At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.
The irony of the ethical scandal enveloping Joichi Ito, the former director of the MIT Media Lab, is that he used to lead academic initiatives on ethics. After the revelation of his financial ties to Jeffrey Epstein, the financier charged with sex trafficking underage girls as young as 14, Ito resigned from multiple roles at MIT, a visiting professorship at Harvard Law School, and the boards of the John D. and Catherine T. MacArthur Foundation, the John S. and James L. Knight Foundation, and the New York Times Company. Many spectators are puzzled by Ito's influential role as an ethicist of artificial intelligence. Indeed, his initiatives were crucial in establishing the discourse of "ethical AI" that is now ubiquitous in academia and in the mainstream press. In 2016, then-President Barack Obama described him as an "expert" on AI and ethics. Since 2017, Ito financed many projects through the $27 million Ethics and Governance of AI Fund, an initiative anchored by the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University.
We learn to lie as children, between the ages of two and five. By adulthood, we are prolific. We lie to our employers, our partners and, most of all, one study has found, to our mothers. The average person hears up to 200 lies a day, according to research by Jerry Jellison, a psychologist at the University of Southern California. The majority of the lies we tell are "white", the inconsequential niceties – "I love your dress!" – that grease the wheels of human interaction. But most people tell one or two "big" lies a day, says Richard Wiseman, a psychologist at the University of Hertfordshire. We lie to promote ourselves, protect ourselves and to hurt or avoid hurting others. The mystery is how we keep getting away with it. Our bodies expose us in every way. We stutter, stall and make Freudian slips.
Virtual assistants such as Amazon's Alexa and Google Home have the capacity to analyse how happy and healthy a couple's relationship is, research has found. In-home listening devices will soon be able to judge how functional relationships are as well as interrupt an argument with an idea for how to resolve it, the study said. The research, by Imperial College Business School, stated that within the next two to three years, digital assistants could predict with 75 per cent accuracy the likelihood of a relationship or marriage being a success. The technology would reach a verdict through acoustic analysis of communication between couples – examining everything from everyday encounters to arguments. The virtual assistants would then be able to provide relationship advice, thereby, as the researchers put it, "democratising counselling".
[Video: See how Apple's new facial recognition system works in real life.] [Photo: A conductive model of a finger, used to spoof a fingerprint ID system, created by Prof. Anil Jain, a professor of computer science at Michigan State University and expert on biometric technology.] SAN FRANCISCO -- Your shiny new smartphone may unlock with only your thumbprint, eye or face. The FBI is struggling to gain access to the iPhone of Texas church gunman Devin Kelley, who killed 25 people in a shooting rampage.
It was supposed to be an easy $1,000 job. All 25-year-old Jorge Edwin Rivera had to do was pilot a drone, carrying a lunchbox filled with 13 pounds of methamphetamine, from one side of the US-Mexico border to the other, where an accomplice could retrieve the smuggled cargo. What he didn't count on was Border Patrol agents spotting the UAV in flight and tracking it back to his hiding spot, 2,000 yards from the national divide. This isn't the first time that smugglers have used commercially available drones to carry contraband. In 2015, the Border Patrol caught two people dropping off 28 pounds of heroin in Calexico, California, and, in the same year, caught another drug ring delivering 30 pounds of cannabis to San Luis, Arizona.
The buzz of a motor overhead at nearly 11:30 p.m. was the tip-off. A remote control-operated drone flew over the border fence from Mexico, heading for San Ysidro while a Border Patrol agent listened and watched. He radioed ahead to other agents to be on the lookout for the small aircraft. Ten minutes later, federal authorities had what they say is their first confirmed San Diego case of drug smuggling by drone. Late on the night of Aug. 8, agents arrested a man carrying a bag full of heroin -- more than 13 pounds valued at an estimated $46,000.