Authorities said a man from Boston had a stun gun pulled on him Tuesday morning as he was being robbed by a woman he met through an online dating app. The unidentified man rendezvoused with the young woman at a local hotel, the Associated Press reported. He told police the two talked for about 30 minutes before she pointed a Taser stun gun at him and began rifling through his pockets. She allegedly stole $100 in cash before law enforcement was called in.
In a win for transparency, a state court judge ordered the California Department of Corrections and Rehabilitation (CDCR) to disclose records regarding the race and ethnicity of parole candidates. This is also a win for innovation, because the plaintiffs will use this data to build new technology in service of criminal justice reform and racial justice. In Voss v. CDCR, EFF represented a team of researchers (known as Project Recon) from Stanford University and University of Oregon who are attempting to study California parole suitability determinations using machine-learning models. This involves using automation to review over 50,000 parole hearing transcripts and identify various factors that influence parole determinations. Project Recon's ultimate goal is to develop an AI tool that can identify parole denials that may have been influenced by improper factors as potential candidates for reconsideration.
While many people use Deepfakes to paste Nicolas Cage's face into as many films as they can, or to make their favourite celebrity say something funny, this AI forgery has worrying potential for malicious use. While Deepfakes currently must be made following a set script, it is expected that they will eventually become interactive. This could be used to trick parents into giving login or personal details to (what they believe to be) their ever-forgetful child, to forge blackmail material, or to release videos of a political candidate making campaign-wrecking statements, to name a few. The mere existence of Deepfakes would call into question the legitimacy of any visual evidence in legal cases, as we could no longer be certain that what we see is true, undermining what is nowadays considered reliable and often case-winning evidence. But how can Deepfakes be stopped? Unfortunately, part of the reason Deepfakes are considered such a great threat is that they are very difficult to detect.
A former Google engineer has been sentenced to 18 months in prison after pleading guilty to stealing trade secrets before joining Uber's effort to build robotic vehicles for its ride-hailing service. The sentence handed down Tuesday by U.S. District Judge William Alsup came more than four months after former Google engineer Anthony Levandowski reached a plea agreement with the federal prosecutors who brought a criminal case against him last August. Levandowski, who helped steer Google's self-driving car project before landing at Uber, was also ordered to pay more than $850,000. Alsup had taken the unusual step of recommending the Justice Department open a criminal investigation into Levandowski while presiding over a high-profile civil trial between Uber and Waymo, a spinoff from a self-driving car project that Google began in 2007 after hiring Levandowski to be part of its team. Levandowski eventually became disillusioned with Google and left the company in early 2016 to start his own self-driving truck company, called Otto, which Uber eventually bought for $680 million. He wound up pleading guilty to one count, culminating in Tuesday's sentencing.
US District Judge William Alsup has sentenced Anthony Levandowski, the former lead Waymo engineer at the heart of a trade secret legal battle between the Alphabet subsidiary and Uber, to 18 months in prison. Prosecutors sought a 27-month sentence, while Levandowski requested a year of home confinement, telling the court that his recent bouts with pneumonia make him susceptible to COVID-19. According to TechCrunch, Alsup shot his request down, explaining that home confinement and a short prison sentence "[give] a green light to every future brilliant engineer to steal trade secrets." That said, he allowed Levandowski to enter custody once the pandemic has subsided. Alphabet filed a lawsuit against Uber in 2017, accusing the company of colluding with its former employee to steal secrets from Waymo. While Levandowski didn't immediately join Uber after leaving the Google division that eventually became Waymo, the ride-hailing titan quickly acquired the self-driving truck startup he founded. In its lawsuit, Alphabet said its former employee downloaded over 14,000 confidential and proprietary design files for various Waymo hardware, including its LiDAR system. The two companies reached a settlement in 2018, with Waymo making sure that Uber would develop its own self-driving technology. In mid-March this year, Levandowski agreed to plead guilty to one count of stealing materials from Google to make other criminal charges go away. "The last three and a half years have forced me to come to terms with what I did."
The federal government on Tuesday asked a federal judge to sentence Anthony Levandowski to 27 months in prison for theft of trade secrets. In March, Levandowski pleaded guilty to stealing a single confidential document related to Google's self-driving technology on his way out the door to his new startup. That startup was quickly acquired by Uber, triggering a titanic legal battle between the companies that was settled in 2018. This story originally appeared on Ars Technica, a trusted source for technology news, tech policy analysis, reviews, and more. Ars is owned by WIRED's parent company, Condé Nast.
Federal prosecutors are also seeking three years of supervised release and an agreed-upon restitution payment of nearly $756,500 to Alphabet Inc's self-driving car company Waymo, according to the court papers filed in the U.S. District Court for the Northern District of California. Levandowski's attorneys have asked for 12 months of home confinement for him, with an obligation to perform community service, and a $95,000 fine, the court papers added. "It is, unfortunately, no exaggeration to say that a prison sentence today can amount to the imposition of a serious health crisis, even a death sentence, given the BOP's (Federal Bureau of Prisons) current inability to control the spread of the coronavirus," Levandowski's attorneys wrote. The case stemmed from accusations by Google and its sister company Waymo in 2017 that Uber jump-started its own self-driving car development with trade secrets and staff that Levandowski unlawfully took from Google. Uber issued company stock to Alphabet and revised its software to settle the case, and the Department of Justice later announced a 33-count criminal indictment against Levandowski.
A story-writing prompt, for instance, might end: "What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean one hasn't constrained it enough by imitating a correct output, and one needs to go further: writing the first few words or sentence of the target output may be necessary.
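A minimal sketch of this priming trick, as plain string construction. The `primed_prompt` helper and the framing text are hypothetical illustrations, not an actual GPT-3 prompt from the source: the point is only that the prompt ends with the opening words of the desired output, so the model is forced to continue in that mode rather than pivot into another one.

```python
def primed_prompt(passage, target_opening):
    """Build a GPT-3-style summarization prompt that ends with the first
    words of the desired output, constraining the model's continuation."""
    return (
        "My second grader asked me what this passage means:\n"
        f'"""\n{passage}\n"""\n'
        "I rephrased it for him, in plain language a second grader can "
        "understand:\n"
        f'"""\n{target_opening}'
    )

# The model's completion would begin right after "Basically, ".
prompt = primed_prompt(
    "The mitochondria is the powerhouse of the cell.",
    "Basically, ",
)
print(prompt)
```

Because the prompt text ends mid-answer, any completion the model returns is a continuation of the target output rather than, say, another question or a story.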
Sheema Khan is the author of Of Hockey and Hijab: Reflections of a Canadian Muslim Woman. The warning signs had been there all along. An assault on a 15-year-old boy; death threats against the man's own parents; a police safety bulletin warning of his gun stash and desire to kill a cop; violent attacks against his spouse; a weapons complaint to the RCMP; fear by neighbours and relatives of his sociopathic behaviour; rampant alcoholism. As an in-depth Globe feature reported, the nation's worst mass shooter "was the kind of man who made people nervous, bragged about knowing how to dispose of bodies and built miniature coffins as a hobby." As we wait for the launch of a public inquiry, there are so many questions about the horrible incident in Nova Scotia.
This method exposes fake images created by computer algorithms rather than by humans. They look deceptively real, but they are made by computers: so-called deep-fake images are generated by machine learning algorithms, and humans are pretty much unable to distinguish them from real photos. Researchers at the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum and the Cluster of Excellence "Cyber Security in the Age of Large-Scale Adversaries" (Casa) have developed a new method for efficiently identifying deep-fake images. To this end, they analyze the images in the frequency domain, using an established signal processing technique. The team presented their work at the International Conference on Machine Learning (ICML) on 15 July 2020, one of the leading conferences in the field of machine learning.
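The Bochum team's actual classifier is more involved, but the core idea of frequency-domain analysis can be sketched in a few lines. The toy example below (my own illustration, not the researchers' algorithm) transforms an image with a 2-D FFT and measures how much spectral energy sits outside the low-frequency band; the nearest-neighbour upsampling used here stands in for the upsampling steps in generative models, which leave characteristic frequency artifacts.

```python
import numpy as np

def high_freq_energy_ratio(img, cutoff=0.25):
    """Fraction of an image's spectral energy outside the central
    low-frequency band. img: 2-D grayscale array; cutoff: fraction of
    each spectral axis treated as 'low' frequency."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    ch, cw = h // 2, w // 2
    rh, rw = int(h * cutoff / 2), int(w * cutoff / 2)
    low = spec[ch - rh:ch + rh, cw - rw:cw + rw].sum()
    return 1.0 - low / spec.sum()

rng = np.random.default_rng(0)
# Broadband "camera noise" image: energy spread across all frequencies.
natural_like = rng.random((64, 64))
# Naive 2x nearest-neighbour upsampling, a stand-in for generator
# upsampling layers: it attenuates high frequencies in a regular way.
upsampled = np.kron(rng.random((32, 32)), np.ones((2, 2)))

print(high_freq_energy_ratio(natural_like), high_freq_energy_ratio(upsampled))
```

The upsampled image scores a visibly lower high-frequency ratio, which is the kind of statistical fingerprint a frequency-domain detector can learn to flag.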