More than 1,000 humans fail to beat AI contender in top crossword battle

#artificialintelligence

In brief An AI system has bested nearly 1,300 human competitors in the annual American Crossword Puzzle Tournament to achieve the top score. The computer, named Dr Fill, is the brainchild of computer scientist Matt Ginsberg, who designed its software to automatically fill out crosswords using a mixture of "good old-fashioned AI" and more modern machine-learning techniques, according to Slate. It solved the tournament's word conundrums faster, and with fewer errors, than its opponents. Dr Fill, however, was not eligible for the $3,000 cash prize, which instead went to the best human player, Tyler Hinman, who is presumably now feeling somewhat redundant. Ginsberg's machine ran on a 64-core CPU and two GPUs; it was trained on text scraped from Wikipedia to learn words, and on a database of crossword clues and their answers to parse the competition questions.
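
The article doesn't describe Dr Fill's internals beyond that mix of classic AI and machine learning, but crossword programs of this kind are usually framed as constraint-satisfaction search: a model scores candidate answers for each clue, and a search picks a mutually consistent, high-scoring fill. A minimal sketch of that idea, with a hypothetical two-slot grid and made-up candidate scores standing in for a real clue-answering model (this is not Ginsberg's actual code):

```python
# Toy constraint-satisfaction crossword filler: pick the highest-scoring
# candidate answer for each slot such that all crossings agree.
# Slots, candidates, and scores are hypothetical stand-ins for the output
# of a clue-answering model.

# Each slot: name -> the grid cells it occupies.
SLOTS = {
    "1A": [(0, 0), (0, 1), (0, 2)],
    "1D": [(0, 0), (1, 0), (2, 0)],
}

# Model-scored candidates per slot as (answer, score); higher is better.
CANDIDATES = {
    "1A": [("CAT", 0.9), ("COT", 0.6)],
    "1D": [("CAR", 0.8), ("CUR", 0.5)],
}

def fill(slots, candidates):
    order = sorted(slots, key=lambda s: len(candidates[s]))  # most constrained first
    best = {"score": float("-inf"), "fill": None}

    def backtrack(i, grid, assigned, score):
        if i == len(order):
            if score > best["score"]:
                best.update(score=score, fill=dict(assigned))
            return
        slot = order[i]
        for word, w_score in candidates[slot]:
            cells = slots[slot]
            if len(word) != len(cells):
                continue
            # Reject words that contradict letters already placed at crossings.
            if any(grid.get(c, ch) != ch for c, ch in zip(cells, word)):
                continue
            placed = {c: ch for c, ch in zip(cells, word) if c not in grid}
            grid.update(placed)
            assigned[slot] = word
            backtrack(i + 1, grid, assigned, score + w_score)
            for c in placed:  # undo before trying the next candidate
                del grid[c]
            del assigned[slot]

    backtrack(0, {}, {}, 0.0)
    return best["fill"]

print(fill(SLOTS, CANDIDATES))  # {'1A': 'CAT', '1D': 'CAR'}
```

A competition-grade solver works over enormous candidate lists and prunes aggressively, but the backtracking skeleton is the same.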


Ethics of AI: Benefits and risks of artificial intelligence

ZDNet

In 1949, at the dawn of the computer age, the French philosopher Gabriel Marcel warned of the danger of naively applying technology to solve life's problems. Life, Marcel wrote in Being and Having, cannot be fixed the way you fix a flat tire. Any fix, any technique, is itself a product of that same problematic world, and is therefore problematic, and compromised. Marcel's admonition is often summarized in a single memorable phrase: "Life is not a problem to be solved, but a mystery to be lived." Despite that warning, seventy years later, artificial intelligence is the most powerful expression yet of humans' urge to solve or improve upon human life with computers. But what are these computer systems? As Marcel would have urged, one must ask where they come from, whether they embody the very problems they would purport to solve. Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life. That questioning is made all the more urgent because of scale. AI systems are reaching tremendous size in terms of the compute power they require, and the data they consume. And their prevalence in society, both in the scale of their deployment and the level of responsibility they assume, dwarfs the presence of computing in the PC and Internet eras. At the same time, increasing scale means many aspects of the technology, especially in its deep learning form, escape the comprehension of even the most experienced practitioners. Ethical concerns range from the esoteric, such as who is the author of an AI-created work of art; to the very real and very disturbing matter of surveillance in the hands of military authorities who can use the tools with impunity to capture and kill their fellow citizens. Somewhere in the questioning is a sliver of hope that with the right guidance, AI can help solve some of the world's biggest problems. The same technology that may propel bias can reveal bias in hiring decisions. The same technology that is a power hog can potentially contribute answers to slow or even reverse global warming. The risks of AI at the present moment arguably outweigh the benefits, but the potential benefits are large and worth pursuing. As Margaret Mitchell, formerly co-lead of Ethical AI at Google, has elegantly encapsulated, the key question is, "what could AI do to bring about a better society?" Mitchell's question would be interesting on any given day, but it comes within a context that has added urgency to the discussion. Mitchell's words come from a letter she wrote and posted on Google Drive following the departure of her co-lead, Timnit Gebru, in December.


US army develops new tool to detect deepfakes threatening national security

The Independent - Tech

US Army scientists have developed a novel tool that can help soldiers detect deepfakes that pose a threat to national security. The advance could lead to mobile software that warns people when fake videos are played on their phones. Deepfakes are hyper-realistic videos made using artificial intelligence tools that falsely depict individuals saying or doing something, explained Suya You and Shuowen (Sean) Hu of the Army Research Laboratory in the US. The growing number of these fake videos in circulation can be harmful to society, from the creation of non-consensual explicit content to media doctored by foreign adversaries for use in disinformation campaigns. According to the scientists, while there were close to 8,000 deepfake video clips online at the beginning of 2019, in just about nine months that number nearly doubled to about 15,000.


Machine Learning Models Can Predict Persistence of Early Childhood Asthma - Pulmonology Advisor

#artificialintelligence

Machine learning models can be trained with the use of electronic health record (EHR) data to differentiate between transient and persistent cases of early childhood asthma, according to the results of an analysis published in PLOS ONE. Researchers conducted a retrospective cohort study using data derived from the Pediatric Big Data (PBD) resource at the Children's Hospital of Philadelphia (CHOP), a pediatric tertiary academic medical center located in Pennsylvania. The researchers sought to develop machine learning models that could identify individuals diagnosed with asthma at age 5 years or younger whose symptoms would persist, and who would thus continue to experience asthma-related visits. They trained 5 machine learning models to distinguish individuals without any subsequent asthma-related visits (transient asthma diagnosis) from those who did experience asthma-related visits from 5 to 10 years of age (persistent asthma diagnosis), based on clinical information available for these children up to 5 years of age. The PBD resource used in the current study included data obtained from the CHOP Care Network, a primary care network of more than 30 sites, and from CHOP Specialty Care and Surgical Centers.
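
For readers wanting to picture the setup: this is binary classification, with features available by age 5 as inputs and any asthma-related visit between ages 5 and 10 as the label. A hedged sketch using scikit-learn, with invented feature names and synthetic data (the study used real EHR data and its own five model types, not necessarily these):

```python
# Sketch of the study's general setup: classify persistent vs. transient
# asthma from features available by age 5. Features and data below are
# hypothetical; the paper trained five model types on real EHR data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical EHR-derived features up to age 5:
# [asthma visits before 5, eczema flag, family history flag, age at diagnosis]
X = np.column_stack([
    rng.poisson(2, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.uniform(0, 5, n),
])
# Label: 1 = any asthma-related visit between ages 5 and 10 (persistent).
y = (X[:, 0] + X[:, 1] + rng.normal(0, 1, n) > 3).astype(int)

for model in (LogisticRegression(max_iter=1000), GradientBoostingClassifier()):
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(type(model).__name__, round(auc, 3))
```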


Google-led paper pushes back against claims of AI inefficiency

#artificialintelligence

Google this week pushed back against claims by earlier research that large AI models can contribute significantly to carbon emissions. In a paper coauthored by Google AI chief scientist Jeff Dean, researchers at the company say that the choice of model, datacenter, and processor can reduce carbon footprint by up to 100 times and that "misunderstandings" about the model lifecycle contributed to "miscalculations" in impact estimates. Carbon dioxide, methane, and nitrous oxide levels are at the highest they've been in the last 800,000 years. Together with other drivers, greenhouse gases likely catalyzed the global warming that's been observed since the mid-20th century. It's widely believed that machine learning models, too, have contributed to the adverse environmental trend.
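
The paper's headline claim is easier to see as arithmetic: training emissions are roughly the product of accelerator energy use, datacenter overhead (PUE), and the grid's carbon intensity, so improvements in model efficiency, facility, and region multiply. A back-of-the-envelope sketch with illustrative numbers (none of these figures are from the paper):

```python
# Back-of-the-envelope training-emissions estimate. Emissions scale as
# energy drawn * datacenter overhead (PUE) * grid carbon intensity, so
# gains in each factor compound. All numbers here are illustrative.
def train_co2_kg(accelerator_hours, watts_per_chip, pue, grid_kg_co2_per_kwh):
    energy_kwh = accelerator_hours * watts_per_chip / 1000 * pue
    return energy_kwh * grid_kg_co2_per_kwh

baseline = train_co2_kg(10_000, 300, pue=1.6, grid_kg_co2_per_kwh=0.43)
# More efficient model, better facility, low-carbon grid:
improved = train_co2_kg(2_000, 200, pue=1.1, grid_kg_co2_per_kwh=0.05)
print(f"baseline: {baseline:,.0f} kg CO2e, improved: {improved:,.0f} kg CO2e "
      f"({baseline / improved:.0f}x reduction)")  # ~94x with these inputs
```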


AI empowers environmental regulators

#artificialintelligence

Like superheroes capable of seeing through obstacles, environmental regulators may soon wield the power of all-seeing eyes that can identify violators anywhere at any time, according to a new Stanford University-led study. The paper, published the week of April 19 in Proceedings of the National Academy of Sciences (PNAS), demonstrates how artificial intelligence combined with satellite imagery can provide a low-cost, scalable method for locating and monitoring otherwise hard-to-regulate industries. Brick production, a major industry in South Asia, is a source of pollution that threatens health. Regulating brick kilns is difficult because there is no database of kiln locations.
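
At its core the method is image classification over satellite tiles: slide a window across a scene, score each tile with a trained CNN, and keep high-confidence detections as candidate kiln locations. A hedged sketch of that scanning loop, using a generic pretrained backbone with an untrained placeholder head rather than the study's actual model:

```python
# Sketch of scanning a satellite scene for brick kilns: tile the image,
# score each tile with a CNN, keep high-confidence hits as candidates.
# The backbone is a generic ImageNet ResNet with an UNTRAINED kiln head,
# standing in for the study's classifier; the threshold is a placeholder.
import torch
import torchvision
from PIL import Image
from torchvision import transforms

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # [not-kiln, kiln] head
model.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)),
                                 transforms.ToTensor()])

def scan(scene: Image.Image, tile=224, stride=224, threshold=0.9):
    """Yield (top, left, score) for tiles the classifier flags as likely kilns."""
    width, height = scene.size
    for top in range(0, height - tile + 1, stride):
        for left in range(0, width - tile + 1, stride):
            patch = preprocess(scene.crop((left, top, left + tile, top + tile)))
            with torch.no_grad():
                p_kiln = torch.softmax(model(patch.unsqueeze(0)), dim=1)[0, 1].item()
            if p_kiln >= threshold:
                yield top, left, p_kiln

# Usage with a hypothetical file:
# detections = list(scan(Image.open("scene.png").convert("RGB")))
```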


Artificial intelligence could sway your dating and voting preferences

#artificialintelligence

AI algorithms on our computers and smartphones have quickly become a pervasive part of everyday life, with relatively little attention paid to their scope, integrity, and how they shape our attitudes and behaviours. Spanish researchers have now shown experimentally that people's voting and dating preferences can be manipulated depending on the type of persuasion used. "Every day, new headlines appear in which Artificial Intelligence (AI) has overtaken human capacity in new and different domains," write Ujue Agudo and Helena Matute, from the Universidad de Deusto, in the journal PLOS ONE. "This results in recommendation and persuasion algorithms being widely used nowadays, offering people advice on what to read, what to buy, where to eat, or whom to date," they add. "[P]eople often assume that these AI judgements are objective, efficient and reliable; a phenomenon known as machine bias."


First ever FDA-approved brain-computer interface targets stroke rehab

#artificialintelligence

A novel device designed to help stroke patients recover wrist and hand function has been approved by the US Food and Drug Administration (FDA). Called IpsiHand, the system is the first brain-computer interface (BCI) device ever to receive FDA market approval. The IpsiHand device consists of two separate parts: a wireless exoskeleton that is positioned over the wrist, and a small headpiece that records brain activity using non-invasive electroencephalography (EEG) electrodes. The system is based on a discovery made by Eric Leuthardt and colleagues at the Washington University School of Medicine over a decade ago. It is well known that each side of the brain controls movement on the opposite side of the body, so if a stroke damages motor function on the right side of the brain, movement on the person's left side will be affected.
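
A toy illustration of how EEG-based intent decoding can work in principle: movement intent suppresses the mu rhythm (8 to 12 Hz) over motor cortex, so a controller can compare band power against a resting baseline and trigger the exoskeleton when it drops. The signal, channel, and threshold below are synthetic illustrations of that general idea, not IpsiHand's actual algorithm:

```python
# Toy mu-rhythm intent decoder: movement intent suppresses 8-12 Hz power
# (event-related desynchronization), so compare band power to a resting
# baseline. Synthetic signals and thresholds; not IpsiHand's firmware.
import numpy as np
from scipy.signal import welch

FS = 250  # sample rate in Hz (assumed)

def mu_band_power(eeg_window):
    """Average 8-12 Hz power of a 1-D EEG window."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=FS)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

def decode_intent(eeg_window, rest_power, drop_ratio=0.5):
    """Flag movement intent when mu power drops well below the resting baseline."""
    return mu_band_power(eeg_window) < drop_ratio * rest_power

# Synthetic demo: strong 10 Hz rhythm at rest, suppressed during intent.
t = np.arange(FS) / FS
rest = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(FS)
intent = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(FS)
baseline = mu_band_power(rest)
print(decode_intent(rest, baseline), decode_intent(intent, baseline))  # False True
```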


How "spell check for doctors" could save your life

ZDNet

Here's an alarming statistic that seems to fly under the news radar. Medical errors are the third-leading cause of death in the US, with a reported 250,000 annual deaths due to medical mistakes. As many as 4 in 10 patients are harmed in healthcare settings, and up to 80% of those medical errors are preventable.