The forecasting tool assesses multiple patient-specific biological and clinical factors to predict the degree of response to immune checkpoint blockade (ICB) therapy and survival outcomes. It markedly outperforms individual biomarkers and other variable combinations developed to date, according to findings published in Nature Biotechnology. With further validation, the tool may help oncologists better identify patients most likely to benefit from ICB. Discerning, prior to treatment, which patients are unlikely to respond to ICB could reduce unnecessary expense and exposure to potential side effects. It could also indicate the need to pursue alternative treatment strategies, such as combination therapies. "It's important to know which treatment modalities patients are most suited for," said Dr. Chan, director of Cleveland Clinic's Center for Immunotherapy & Precision Immuno-Oncology.
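The article does not disclose the tool's internals, but its core idea -- combining several patient-level features rather than relying on any single biomarker -- can be sketched with a simple logistic-regression example. Everything below is hypothetical and illustrative: the feature names, the synthetic data, and the model choice are assumptions, not the published method.

```python
# Illustrative sketch: a model combining several (synthetic) patient features
# versus a single-biomarker baseline. Not the published tool.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical standardized patient-level features
tmb = rng.normal(0, 1, n)      # stand-in for tumor mutational burden
pd_l1 = rng.normal(0, 1, n)    # stand-in for PD-L1 expression
blood = rng.normal(0, 1, n)    # stand-in for a blood-based marker

# Simulated response depends weakly on each feature; no single one dominates
logits = 0.8 * tmb + 0.6 * pd_l1 + 0.5 * blood
response = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([tmb, pd_l1, blood])
X_tr, X_te, y_tr, y_te = train_test_split(X, response, random_state=0)

# Single-biomarker baseline: rank patients by TMB alone
auc_single = roc_auc_score(y_te, X_te[:, 0])

# Combined model: logistic regression over all features
model = LogisticRegression().fit(X_tr, y_tr)
auc_combined = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print(f"AUC, single biomarker: {auc_single:.2f}")
print(f"AUC, combined model:   {auc_combined:.2f}")
```

On this synthetic data the combined model separates responders from non-responders noticeably better than the single feature, which is the qualitative claim the article makes about the real tool.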
As a Silicon Valley technology giant, Google has made cutting-edge technologies like artificial intelligence central to its products. Google AI conducts research to advance the state of the art in AI-based software and builds tools that make powerful AI capabilities broadly accessible. Its mission echoes the company's own: to organize the world's information and make it universally accessible and useful across every sector. That work has given Google Translate, Google Assistant, and many other products new ways of solving complicated real-life problems.
One word: AI – Many of the world's leaders in science and technology, including the late Stephen Hawking, Tesla founder Elon Musk, Apple co-founder Steve Wozniak and Microsoft founder Bill Gates, have all expressed concern in recent years over the risks of artificial intelligence (AI) – most notably its potential use in autonomous weapons. Along with many in academia and human rights groups, these science and tech visionaries have warned that AI poses a serious danger in the wrong hands. One concern is that such weapons could be designed to be extremely difficult to simply "turn off," as the Future of Life Institute noted in its report on the development of autonomous weapon platforms. That could result in a scenario straight out of science fiction where humans lose control of their dangerous creations. While that may not mean the world-ending scenario presented in The Terminator, even losing control of a few AI weapons temporarily could result in unnecessary mass casualties or worse.
Mischief can happen when AI is let loose in the world, just like any technology. The examples of AI gone wrong are numerous, the most vivid in recent memory being the disastrously bad performance of Amazon's facial recognition technology, Rekognition, which disproportionately misidentified members of some ethnic groups as matches to criminal mugshots. Given the risk, how can society know if a technology has been adequately refined to a level where it is safe to deploy? "This is a really good question, and one we are actively working on," Sergey Levine, assistant professor with the University of California at Berkeley's department of electrical engineering and computer science, told ZDNet by email this week. Levine and colleagues have been working on an approach to machine learning where the decisions of a software program are subjected to a critique by another algorithm within the same program that acts adversarially.
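The article doesn't detail Levine's method, but the general pattern it describes -- one model's decisions stress-tested by a second, adversarially acting procedure -- can be illustrated generically. The sketch below is an assumption-laden toy, not Levine's algorithm: a simple classifier is probed by an adversary that perturbs inputs along the loss gradient (FGSM-style) to surface decisions that flip under tiny changes.

```python
# Generic illustration of adversarial critique -- NOT Levine's actual method.
# A logistic-regression "decision maker" is trained, then an adversary nudges
# each input in the direction that most increases the model's loss to expose
# non-robust decisions.
import numpy as np

rng = np.random.default_rng(1)

# Toy linearly separable data
X = rng.normal(0, 1, (200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train logistic regression by gradient descent
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def predict(x):
    return 1 / (1 + np.exp(-(x @ w + b)))

# Adversarial critic: for logistic loss, d(loss)/d(input) = (p - y) * w,
# so step each point a little in the sign of that gradient.
eps = 0.3
p = predict(X)
grad_x = (p - y)[:, None] * w[None, :]
X_adv = X + eps * np.sign(grad_x)

flipped = (predict(X) > 0.5) != (predict(X_adv) > 0.5)
print(f"Decisions flipped by a small perturbation: {flipped.sum()} / {len(X)}")
```

Inputs whose decisions flip under such small perturbations are exactly the cases a deployment review would want flagged before the system goes live, which is the safety question Levine raises.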
We've heard the fable of "the self-made billionaire" a thousand times: some unrecognized genius toiling away in a suburban garage stumbles upon The Next Big Thing, thereby single-handedly revolutionizing their industry and becoming insanely rich in the process -- all while comfortably ignoring the fact that they'd received $300,000 in seed funding from their already rich, politically connected parents to do so. In The Warehouse: Workers and Robots at Amazon, Alessandro Delfanti, associate professor at the University of Toronto and author of Biohackers: The Politics of Open Science, deftly examines the dichotomy between Amazon's public persona and its union-busting, worker-surveilling behavior in fulfillment centers around the world -- and how it leverages cutting-edge technologies to keep its employees' collective noses to the grindstone, pissing in water bottles. In the excerpt below, Delfanti examines the way today's digital robber barons lean on the classic redemption myth to launder their images into those of wunderkinds deserving of unabashed praise. This is an excerpt from The Warehouse: Workers and Robots at Amazon by Alessandro Delfanti, available now from Pluto Press. Besides the jobs, trucks and concrete, what Amazon brought to Piacenza and to the dozens of other suburban areas which host its warehouses is a myth: a promise of modernization, economic development, and even individual emancipation that stems from the "disruptive" nature of a company heavily based on the application of new technology to both consumption and work.
A journalist who reports on cities and autonomous vehicles responds to Linda Nagata's "Ride." I like to think of myself as deeply skeptical of the many internet algorithms telling me what I want and need. I turn off targeted advertising wherever I can. I use AdBlock to hide what's left. Most of my YouTube recommendations are for concerts or sports highlights, but I know I'm just a few clicks away from a wild-eyed influencer telling me to gargle turpentine for a sore throat. But I make an exception for the sweet, all-knowing embrace of the Spotify algorithm, to whom I surrender my ears several times a day.
This story is part of Future Tense Fiction, a monthly series of short stories from Future Tense and Arizona State University's Center for Science and the Imagination about how technology and science will change our lives. A handsome boy, 17 and soft-spoken, told Jasmine about an Easter egg. "Try it," he urged, sincerity in his voice and in his eyes as he gazed at her across the tall front desk. She smiled all day at the hotel's guests, chatting with them when time permitted, listening to their stories. Her role came easily: bright-eyed island girl, young and pretty, a white flower tucked behind her ear. "Ah, your parents are here," she said as the couple emerged from the elevator alcove into the expansive lobby, its glittering perfection empty now of other guests in the lull of early afternoon. The boy waved at them, then turned again to Jasmine. "Give it a try," he exhorted her in a conspiratorial whisper. She didn't want to disappoint those eyes. So she played along, teasing, "I might." It was just a little game, after all. "And if it works for you, then tell someone else, OK? Keep it going." "And how will I know if it works?" He answered with a blissful smile. His parents joined him at the desk. Jasmine wished them all a safe trip home. Her shift ended at 4. Still wearing her uniform--a blue, body-hugging aloha-print dress--she left alone through the employee entrance, sighing at the shock of transition from air-conditioned comfort to the withering heat and humidity of a late-summer afternoon. Out of sight but audible, surf rumbled against the artificial reef. Closer, mynah birds chattered amid the heavy bloom of a rainbow shower tree. After a few minutes, an electric cart rolled up, nearly full with resort employees on their way home.
In a recent Mind Matters podcast, "Artificial General Intelligence: the Modern Homunculus," Walter Bradley Center director Robert J. Marks, an electrical and computer engineering professor, spoke with Justin Bui from his own research group at Baylor University in Texas on what is -- and isn't -- really happening in artificial intelligence today. Some of the more far-fetched claims remind Dr. Marks of the homunculus, the "little man" of alchemy. So what are the AI engineers really doing, and how do they do it?