Los Angeles man admits flying drone that struck LAPD helicopter over Hollywood

Los Angeles Times

A Los Angeles man admitted in federal court Thursday that he flew a drone that struck a Los Angeles Police Department helicopter that was responding to a crime scene in Hollywood. Andrew Rene Hernandez, 22, made the admission in pleading guilty to one count of unsafe operation of an unmanned aircraft, a misdemeanor. A spokesman for the U.S. attorney's office in Los Angeles said Hernandez is believed to be the first person in the country to be convicted of that offense, which carries a punishment of up to one year in prison. In his plea agreement, Hernandez admitted that he "recklessly interfered with and disrupted" the operation of the LAPD helicopter, which was responding to a burglary of a pharmacy, and that his actions "posed an imminent safety hazard" to the chopper's occupants. Reached by phone Thursday, Hernandez declined to comment.


California man charged with crashing drone into LAPD helicopter

FOX News

A Hollywood man who operated a drone that crashed into a police helicopter, forcing an emergency landing, is facing a federal charge. Andrew Rene Hernandez, 22, was arrested by FBI agents Thursday and charged with one count of unsafe operation of an unmanned aircraft, the Justice Department said. The criminal case is believed to be the first in the nation stemming from a drone collision.


Feds charge Hollywood man after drone collides with LAPD helicopter

Los Angeles Times

FBI agents have arrested a Hollywood man, accusing him of recklessly operating a drone and crashing it into a Los Angeles Police Department helicopter earlier this year. The collision damaged the chopper's fuselage and required the LAPD pilot to make an emergency landing following the September encounter. The drone, which authorities say was operated by Andrew Rene Hernandez, then tumbled from the sky and crashed into a vehicle. Hernandez, 22, was arrested Thursday and charged with unsafe operation of an unmanned aircraft after an investigation by the FBI, the LAPD and the Federal Aviation Administration. The potentially deadly collision occurred Sept. 18 after Los Angeles police officers responding to a predawn burglary call at a Hollywood pharmacy requested air support.


Does Palantir See Too Much?

#artificialintelligence

On a bright Tuesday afternoon in Paris last fall, Alex Karp was doing tai chi in the Luxembourg Gardens. He wore blue Nike sweatpants, a blue polo shirt, orange socks, charcoal-gray sneakers and white-framed sunglasses with red accents that inevitably drew attention to his most distinctive feature, a tangle of salt-and-pepper hair rising skyward from his head. Under a canopy of chestnut trees, Karp executed a series of elegant tai chi and qigong moves, shifting the pebbles and dirt gently under his feet as he twisted and turned. A group of teenagers watched in amusement. After 10 minutes or so, Karp walked to a nearby bench, where one of his bodyguards had placed a cooler and what looked like an instrument case. The cooler held several bottles of the nonalcoholic German beer that Karp drinks (he would crack one open on the way out of the park). The case contained a wooden sword, which he needed for the next part of his routine. "I brought a real sword the last time I was here, but the police stopped me," he said matter-of-factly as he began slashing the air with the sword. Those gendarmes evidently didn't know that Karp, far from being a public menace, was the chief executive of an American company whose software has been deployed on behalf of public safety in France. The company, Palantir Technologies, is named after the seeing stones in J.R.R. Tolkien's "The Lord of the Rings." Its two primary software programs, Gotham and Foundry, gather and process vast quantities of data in order to identify connections, patterns and trends that might elude human analysts. The stated goal of all this "data integration" is to help organizations make better decisions, and many of Palantir's customers consider its technology to be transformative. Karp claims a loftier ambition, however. "We built our company to support the West," he says. To that end, Palantir says it does not do business in countries that it considers adversarial to the U.S.
and its allies, namely China and Russia. In the company's early days, Palantir employees, invoking Tolkien, described their mission as "saving the shire." The brainchild of Karp's friend and law-school classmate Peter Thiel, Palantir was founded in 2003. It was seeded in part by In-Q-Tel, the C.I.A.'s venture-capital arm, and the C.I.A. remains a client. Palantir's technology is rumored to have been used to track down Osama bin Laden -- a claim that has never been verified but one that has conferred an enduring mystique on the company. These days, Palantir is used for counterterrorism by a number of Western governments.


Language Models are Open Knowledge Graphs

arXiv.org Artificial Intelligence

This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3) without human supervision. Popular KGs (e.g., Wikidata, NELL) are built in either a supervised or semi-supervised manner, requiring humans to create knowledge. Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training. The stored knowledge has enabled the language models to improve downstream NLP tasks, e.g., answering questions and writing code and articles. In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs. We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora. We demonstrate the quality of the constructed KGs by comparing to two KGs (Wikidata, TAC KBP) created by humans. Our KGs also provide open factual knowledge that is new in the existing KGs. Our code and KGs will be made publicly available.
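The abstract's core move is extracting (head, relation, tail) facts from a single forward pass by following the model's attention weights from a head entity to a tail entity. A toy sketch of that matching idea, with a hand-made attention matrix standing in for real BERT/GPT-2 attention (the tokens, scores, and `match` helper are illustrative assumptions, not the paper's actual code):

```python
import numpy as np

# Toy sentence with a known fact: (Dylan, is a, songwriter).
tokens = ["Dylan", "is", "a", "songwriter"]

# attn[i][j]: hypothetical attention strength from token i to token j
# (in the paper, such scores come from a pre-trained LM's attention heads).
attn = np.array([
    [0.0, 0.6, 0.1, 0.2],
    [0.0, 0.0, 0.5, 0.4],
    [0.0, 0.0, 0.0, 0.3],
    [0.0, 0.0, 0.0, 0.0],
])

def match(head, tail):
    """Greedily walk from head to tail along the strongest attention
    transitions, collecting intermediate tokens as the relation and
    multiplying the attention scores along the path as a confidence."""
    i, score, relation = head, 1.0, []
    while i < tail:
        # step to the token the current one attends to most strongly
        j = int(np.argmax(attn[i, i + 1 : tail + 1])) + i + 1
        score *= attn[i, j]
        if j != tail:
            relation.append(tokens[j])
        i = j
    return (tokens[head], " ".join(relation), tokens[tail]), score

triple, score = match(0, 3)
print(triple, round(score, 3))  # → ('Dylan', 'is a', 'songwriter') 0.09
```

The real method adds a second "map" stage that aligns extracted triples to an existing KG schema (e.g., Wikidata relations); the sketch covers only the attention-guided matching.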


Multi-document Summarization with Maximal Marginal Relevance-guided Reinforcement Learning

arXiv.org Artificial Intelligence

While neural sequence learning methods have made significant progress in single-document summarization (SDS), they produce unsatisfactory results on multi-document summarization (MDS). We observe two major challenges when adapting SDS advances to MDS: (1) MDS involves a larger search space yet more limited training data, setting obstacles for neural methods to learn adequate representations; (2) MDS needs to resolve higher information redundancy among the source documents, which SDS methods are less effective at handling. To close the gap, we present RL-MMR, Maximal Marginal Relevance-guided Reinforcement Learning for MDS, which unifies advanced neural SDS methods and statistical measures used in classical MDS. RL-MMR casts MMR guidance on fewer promising candidates, which restrains the search space and thus leads to better representation learning. Additionally, the explicit redundancy measure in MMR helps the neural representation of the summary to better capture redundancy. Extensive experiments demonstrate that RL-MMR achieves state-of-the-art performance on benchmark MDS datasets. In particular, we show the benefits of incorporating MMR into end-to-end learning when adapting SDS to MDS, in terms of both learning effectiveness and efficiency.
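The MMR criterion referenced above scores a candidate by its relevance to the query minus its worst-case redundancy with the already-selected summary: MMR(c) = λ·Sim(c, q) − (1−λ)·max over selected s of Sim(c, s). A minimal standalone sketch, where Jaccard word overlap stands in for real similarity and the sentences, λ value, and helper names are illustrative assumptions, not RL-MMR's actual components:

```python
def jaccard(a, b):
    """Word-overlap similarity between two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def mmr_pick(candidates, selected, query, lam=0.3):
    """Return the candidate maximizing Maximal Marginal Relevance:
    lam * relevance-to-query - (1 - lam) * redundancy-with-selected."""
    def score(c):
        redundancy = max((jaccard(c, s) for s in selected), default=0.0)
        return lam * jaccard(c, query) - (1 - lam) * redundancy
    return max(candidates, key=score)

query = "drone hits police helicopter"
docs = [
    "a drone struck a police helicopter over hollywood",
    "a drone collided with an lapd helicopter",   # redundant with the first
    "the pilot made an emergency landing",
]
summary = []
for _ in range(2):
    pick = mmr_pick([d for d in docs if d not in summary], summary, query)
    summary.append(pick)
print(summary)  # the redundant second sentence is skipped
```

With the low λ used here the redundancy term dominates, so after the first sentence is chosen, the near-duplicate is passed over in favor of the novel one; RL-MMR uses this same trade-off to prune the candidate set fed to the neural summarizer.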


Scientists use big data to sway elections and predict riots -- welcome to the 1960s

Nature

Ignorance of history is a badge of honour in Silicon Valley. "The only thing that matters is the future," self-driving-car engineer Anthony Levandowski told The New Yorker in 2018. Levandowski, formerly of Google, Uber and Google's autonomous-vehicle subsidiary Waymo (and recently sentenced to 18 months in prison for stealing trade secrets), is no outlier. The gospel of 'disruptive innovation' depends on the abnegation of history. 'Move fast and break things' was Facebook's motto. Another word for this is heedlessness. And here are a few more: negligence, foolishness and blindness.


Visiting researcher at UCLA is arrested and charged with destroying evidence

Los Angeles Times

A visiting researcher at UCLA has been arrested and charged with destroying evidence, the latest Chinese national to face accusations in U.S. courts of trying to conceal ties to China's military or government institutions. The FBI began investigating Guan Lei in July, suspecting he had committed visa fraud and possibly transferred "sensitive software or technical data" from UCLA, where he studied machine-learning algorithms in the school's mathematics department, to "high-ranking" officials in the Chinese military, an FBI agent wrote in an affidavit. Guan, 29, isn't charged with those crimes. Instead he's accused of destroying evidence after agents, staking out his apartment in Irvine, saw him pull a computer hard drive from his sock and throw it into a trash bin, Agent Timothy D. Hurt wrote in the affidavit. Guan discarded the damaged drive days after being interviewed by investigators and attempting to board a flight back to China, Hurt wrote.


Which Countries Allow and which Ban AI Facial Recognition?

#artificialintelligence

Facial recognition technology is now common in a growing number of places around the world, from public CCTV cameras to biometric identification systems in airports, already touching half of the global population on a regular basis. Visualizations from SurfShark classify 194 countries and regions based on the extent of surveillance. More recently, the Department of Homeland Security unveiled its "Biometric Exit" plan, which aims to use facial recognition technology on nearly all air travel passengers by 2023 to verify compliance with visa status. Perhaps surprisingly, 59% of Americans are actually in favour of implementing facial recognition technology, considering it acceptable for use in law enforcement, according to a Pew Research survey. Yet some cities, such as San Francisco, have pushed to ban surveillance, citing a stand against its potential abuse by the government. Facial recognition technology can potentially come in handy after a natural disaster.


GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
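The tactic described here, pre-writing the first words of the target completion so the model stays in the intended mode, amounts to simple string construction. A hedged sketch (the passage and the rephrasing frame are made-up examples, not the author's exact prompts):

```python
passage = "The drone tumbled from the sky and crashed into a vehicle."

# Constrain the completion by (1) framing the task and (2) opening the
# quote of the desired output ourselves, so the model must continue it
# in plain-language-rephrasing mode rather than pivoting elsewhere.
prompt = (
    "My second grader asked me what this passage means:\n\n"
    f'"{passage}"\n\n'
    "I rephrased it for him, in plain language a second grader can "
    'understand: "'
)
print(prompt)
```

Ending the prompt mid-quote is the key move: the model's most likely continuation is the rephrasing itself, not a new mode of completion.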