That sort of computational power requires GPUs, or graphics processing units -- chips first made for video games that turned out to be the only ones capable of handling computing tasks as heavy as large language models. Currently, just one company, Nvidia, sells the best of those chips, for which it charges tens of thousands of dollars apiece. Nvidia's valuation recently rocketed to $1 trillion on anticipated sales. TSMC, the Taiwan-based company that manufactures many of those chips, has likewise soared in value.
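For a concrete sense of the workload involved, the short sketch below (our illustration, not from the article, and assuming PyTorch is installed) times the kind of dense matrix multiplication that dominates large language model training, running on a GPU when one is available and falling back to the CPU otherwise:

```python
# Illustrative only: why heavy models run on GPUs rather than CPUs.
# Assumes PyTorch is installed; uses CUDA if a compatible GPU exists.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)

start = time.perf_counter()
for _ in range(10):
    c = a @ b  # the dense matmul at the heart of transformer layers
if device == "cuda":
    torch.cuda.synchronize()  # wait for queued GPU kernels to finish
print(f"{device}: {time.perf_counter() - start:.3f}s for 10 matmuls")
```

On typical hardware the GPU run finishes orders of magnitude faster, which is the gap that makes such chips indispensable for model training.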
TL;DR: We study the use of differential privacy in personalized, cross-silo federated learning (NeurIPS'22), explain how these insights led us to develop a 1st place solution in the US/UK Privacy-Enhancing Technologies (PETs) Prize Challenge, and share challenges and lessons learned along the way. If you are feeling adventurous, check out the extended version of this post for more technical details! Patient data collected by groups such as hospitals and health agencies is a critical tool for monitoring and preventing the spread of disease. Unfortunately, while this data contains a wealth of useful information for disease forecasting, it may be highly sensitive and stored in disparate locations (e.g., across multiple hospitals, health agencies, and districts). In this post we discuss our research on federated learning, which aims to tackle this challenge by performing decentralized learning across private data silos. We then explore an application of our research to the problem of privacy-preserving pandemic forecasting -- a scenario in which we recently won a 1st place, $100k prize in a competition hosted by the US and UK governments -- and end by discussing several directions of future work based on our experiences.
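The core pattern here -- silos share clipped, noised model updates rather than raw patient records -- can be sketched in a few lines. The sketch below is our own illustration under stated assumptions, not the competition-winning system; every function name and parameter (local_update, federated_round, clip, noise_std) is hypothetical:

```python
# A minimal sketch of cross-silo federated averaging with optional
# Gaussian noise on clipped updates, in the spirit of DP-FedAvg.
# Hypothetical names; not the authors' actual solution.
import numpy as np

def local_update(weights, data, lr=0.1):
    """A silo (e.g., a hospital) takes one gradient step on its own
    private data; here, least-squares on a toy linear model."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_w, silos, clip=1.0, noise_std=0.0, rng=None):
    """One round of FedAvg: silos send model updates, never raw data.
    Updates are norm-clipped and noised before the server averages."""
    rng = rng or np.random.default_rng(0)
    updates = []
    for data in silos:
        delta = local_update(global_w, data) - global_w
        delta /= max(1.0, np.linalg.norm(delta) / clip)  # clip norm
        delta += rng.normal(0.0, noise_std, delta.shape)  # DP noise
        updates.append(delta)
    return global_w + np.mean(updates, axis=0)

# Toy usage: three "hospitals" with private linear-regression data.
rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])
silos = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(0.0, 0.1, 100)
    silos.append((X, y))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, silos, clip=1.0, noise_std=0.01, rng=rng)
print(w)  # converges near [2.0, -1.0] despite the added noise
```

Clipping bounds any single silo's influence on the global model, which is what makes the Gaussian noise translate into a formal differential privacy guarantee in the real algorithm.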
Amid increasing global alarm that artificial intelligence (AI) is poised to cause irreparable harm to human society in the near future, the Australian government headed by Labor party leader Anthony Albanese has launched its own review of the rapidly evolving technology. Industry and Science Minister Ed Husic has released two papers that kick off an eight-week consultative process seeking input from a variety of stakeholders on a new framework. One is a 'rapid response' report commissioned by the National Science and Technology Council (NSTC) that explores the opportunities and risks posed by generative AI. This analysis has become urgent because of the speed at which tech companies in Australia and globally are pivoting to AI, and the pace at which AI is seeping into almost every industry. It is a transformation that has triggered increasing concern about AI's intrusiveness and tendencies toward bias, as well as concerns about truthfulness and 'hallucinations'.
Despite the potential for vast productivity gains from generative AI tools such as ChatGPT or GitHub Copilot, will technology professionals' jobs actually grow more complicated? People can now pump out code on demand in an abundance of languages, from Java to Python. Already, 95% of developers in a recent Sourcegraph survey report using Copilot, ChatGPT, and other gen AI tools this way. But auto-generating new code addresses only part of the problem in enterprises that already maintain unwieldy codebases and require high levels of cohesion, accountability, and security. For starters, the security and quality-assurance tasks associated with software jobs aren't going to go away anytime soon.
New Jersey police are deploying new technology to try to break an unsolved case, in what some experts believe could be the greatest advancement in cold-case investigations since forensic genetic genealogy caught the infamous Golden State Killer in 2018. A police department in the 70-square-mile town of Middle Township, along with the Cape May County Prosecutor's Office, will use artificial intelligence to try to solve the case of Mark Himebaugh, an 11-year-old who seemingly vanished on Nov. 25, 1991. In the 30-plus years since Himebaugh went missing, law enforcement's strongest leads have been a composite sketch of a person of interest and a theory that a convicted child sex predator, who is currently in prison, is involved. But neither is strong enough to bring charges or even advance the case.
Don McLean, the one-man creative force behind "American Pie," "Vincent (Starry, Starry Night)," "And I Love You So," "Castles in the Air," and many other songs, albums, tours and projects, shared thoughts about artificial intelligence, music, creativity and authenticity with Fox News Digital in a recent phone interview amid his current "American Pie" 50th anniversary tour. "When you talk about artificial intelligence right now -- I'm not sure what that means at the moment, but clearly it's evolving," he said from California, where he was making several tour stops after returning from concert performances in Australia. "With any technology, you have an inflection point where it takes off," said McLean. "Today, AI has merely presented itself -- but the inflection point hasn't been reached yet." He added, "I also want to say that before a form of artificial intelligence was in use -- and it's been in use for many years -- the tape recorder and the photographic lens were both honest. If you took a picture, that was the way something looked." However, in current times, he said, "you have all this photoshopping and massaging and whatnot, so now the camera lies."
Recently, the White House decided that appointing an unqualified, politicized leader is the perfect way to tackle the complex issue of AI regulation. Kamala Harris, who has now become the AI czar, will likely lead America into a very gloomy future. The nation must correct this blunder before it's too late. We can only solve a problem by asking the right questions, and Harris and a polarized Congress are clearly unable to do so. The United States must replace her with an unbiased committee of experts who can fully develop and safeguard effective AI regulations.
Getting digitally cloned was easier than Devin Finley expected it to be. The voice-over artist, who also works as a model and bar manager, entered a studio in Manhattan last spring and read a script from a teleprompter. Across the room, a man with a large camera working for Hour One, a Tel Aviv–based video agency specializing in providing clients with lifelike virtual humans, filmed Finley from the waist up. Over Zoom, a director offered instructions about how much to move his hands. He was done in less than an hour.
Elon Musk's brain-implant company Neuralink last week received regulatory approval to conduct the first clinical trial of its experimental device in humans. But the billionaire executive's bombastic promotion of the technology, his leadership record at other companies and animal welfare concerns relating to Neuralink experiments have raised alarm. "I was surprised," said Laura Cabrera, a neuroethicist at Penn State's Rock Ethics Institute, about the decision by the US Food and Drug Administration to let the company go ahead with clinical trials. Musk's erratic leadership at Twitter and his "move fast" techie ethos raise questions about Neuralink's ability to responsibly oversee the development of an invasive medical device capable of reading brain signals, Cabrera argued. "Is he going to see a brain implant device as something that requires not just extra regulation, but also ethical consideration?" she said.