The following article by Zhang et al. is well known for highlighting that the widespread success of deep learning in artificial intelligence brings with it a fundamental new theoretical challenge, specifically: Why don't today's deep nets overfit to training data? This question has come to animate the theory of deep learning. Let's understand this question in the context of supervised learning, where the machine's goal is to learn to provide labels to inputs (for example, learn to label cat pictures with "1" and dog pictures with "0"). Deep learning solves this task by training a net on a suitably large training set of images that have been labeled correctly by humans. The parameters of the net are randomly initialized and thereafter adjusted in many stages via the simplest algorithm imaginable: gradient descent on the current difference between desired output and actual output.
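The training procedure described above can be sketched in a few lines. This is a minimal toy illustration, not the authors' setup: a one-parameter-pair "net" (logistic regression) trained by gradient descent on the squared difference between desired and actual output, using a hypothetical, made-up training set.

```python
import math
import random

def train(data, lr=0.5, epochs=200):
    """Toy supervised learning: randomly initialize parameters, then
    repeatedly adjust them by gradient descent on the squared error
    between the desired label and the net's actual output."""
    random.seed(0)
    w, b = random.random(), random.random()   # random initialization
    for _ in range(epochs):
        for x, y in data:
            out = 1 / (1 + math.exp(-(w * x + b)))  # net's actual output
            err = out - y                           # actual minus desired
            grad = err * out * (1 - out)            # chain rule, squared error
            w -= lr * grad * x                      # gradient-descent step
            b -= lr * grad
    return w, b

# Hypothetical training set: label 1 for positive inputs, 0 for negative.
data = [(2.0, 1), (3.0, 1), (-2.0, 0), (-3.0, 0)]
w, b = train(data)
predict = lambda x: 1 / (1 + math.exp(-(w * x + b)))
```

After training, `predict` maps unseen inputs of the same kind to values near the correct label; the generalization question the article raises is why this keeps working when the net has far more parameters than training examples.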
In January 2020, Robert Williams of Farmington Hills, MI, was arrested at his home by the Detroit Police Department. He was photographed, fingerprinted, had his DNA taken, and was then locked up for 30 hours. He had not committed the crime; a facial recognition system operated by the Michigan State Police had wrongly identified him as the thief in a 2018 store robbery. In fact, Williams looked nothing like the perpetrator captured in the surveillance video, and the case was dropped. Rewind to May 2019, when Detroit resident Michael Oliver was arrested after being identified by the very same police facial recognition unit as the person who stole a smartphone from a vehicle.
Seeking to call into question the mental acuity of his opponent, Donald Trump looked across the presidential debate stage at Joseph Biden and said, "So you said you went to Delaware State, but you forgot the name of your college." Biden chuckled, but viewers may have been left wondering: did the former vice president misstate where he went to school? Those who viewed the debate live on an app from the London-based company Logically were quickly served an answer: the president's assertion was false. A brief write-up posted on the company's website the next morning provided links to other fact-checks from National Public Radio and the Delaware News Journal on the same claim, which explain that Biden actually said his first Senate campaign received a boost from students at the school. Logically is one of a number of efforts, both commercial and academic, to apply techniques of artificial intelligence (AI), including machine learning and natural language processing (NLP), to identify false ...
Motional did not say how many cars had participated in the Las Vegas tests, but said in its statement that "multiple" autonomous vehicles had been used on routes that included public roads and closed courses. During the tests, the vehicles sensed and responded to human-driven vehicles, cyclists and pedestrians, the company said. Some tests were completed with a safety operator in the car; others were completed without one.
Boston Dynamics has racked up hundreds of millions of YouTube views with viral clips of its futuristic, legged robots dancing together, doing parkour, and working in a warehouse. A group of meme-spinning pranksters now wants to present a more dystopian view of the company's robotic tech. They added a paintball gun to Spot, the company's doglike machine, and plan to let others control it inside a mocked-up art gallery via the internet later this week. The project, called Spot's Rampage, is the work of MSCHF (pronounced "mischief," of course), an internet collective that regularly carries out meme-worthy pranks. Previous MSCHF stunts include creating an app that awarded $25,000 to whoever could hold a button down the longest; selling "Jesus Shoes" sneakers with real holy water in the soles (Drake bought a pair); developing an astrology-based stock-picking app; and cutting up and selling individual spots from a Damien Hirst painting.
Google has fired one of its top artificial intelligence researchers, Margaret Mitchell, escalating internal turmoil at the company following the departure of Timnit Gebru, another leading figure on Google's AI ethics team. Mitchell, who announced her firing on Twitter, did not immediately respond to requests for comment. In a statement to Reuters, Google said the firing followed a weeks-long investigation that found she moved electronic files outside the company. Google said Mitchell violated the company's code of conduct and security policies. Google's ethics in artificial intelligence research unit has been under scrutiny since December's dismissal of Gebru, a prominent Black researcher in Silicon Valley.
With every new generation of consoles and components, video games grow closer and closer to replicating reality. From the glistening sweat on star athletes' faces in sports franchises like "Madden" and "NBA 2K," to the soft swaying of grass in samurai thriller "Ghost of Tsushima," game-makers are always leveraging the latest in granular detail to sell the immersive power of the medium. Tristan Cooper, who owns the Twitter account "Can You Pet the Dog?," never set out to create a social media juggernaut. Rather, he was just trying to point out what he felt was a common quirk of many high-profile games: While many featured dogs, wolves and other furry creatures as hostile foes of the protagonist, those that did feature cuddly animal friends rarely let you pet them. Cooper says the account was particularly inspired by his early experience with online shooter "The Division 2." "'The Division 2's' apocalyptic streets were rife with frightened dogs that you could not console or help in any way," he wrote in an email to The Washington Post.
Swift for TensorFlow, a Google-led project to integrate the TensorFlow machine learning library and Apple's Swift language, is no longer in active development. Nevertheless, parts of the effort live on, including language-integrated differentiable programming for Swift. The GitHub repo for the project notes it is now in archive mode and will not receive further updates. According to the repo, the project was positioned as a new way to develop machine learning models: "Swift for TensorFlow was an experiment in the next-generation platform for machine learning, incorporating the latest research across machine learning, compilers, differentiable programming, systems design, and beyond."