Elon Musk said startup Neuralink, which aims to build a scalable implant to connect human brains with computers, has already implanted chips in rats and plans to test its brain-machine interface in humans within two years, with a long-term goal of people "merging with AI." Brain-machine interfaces have been around for a while. Some of the earliest successes with the technology include Brown University's BrainGate, which first enabled a paralyzed person to control a computer cursor in 2006. Since then, a variety of research groups and companies, including the University of Pittsburgh Medical Center and DARPA-backed Synchron, have been working on similar devices. There are two basic approaches: You can do it invasively, creating an interface with an implant that directly touches the brain, or you can do it non-invasively, usually with electrodes placed on the scalp. Neuralink, says Musk, is going to go the invasive route.
SONAL SHAH: It's also about how do we make data more useful for people to use and to solve problems in their communities?
TANYA OTT: Okay, that is a big job. Who is this superhuman who fills it?
TANYA OTT: We'll tell you, in a moment. But first, let me say, you're listening to the Press Room, where we talk about some of the biggest issues facing businesses today. I'm Tanya Ott, and joining me today are Bill Eggers …
SONAL SHAH: I am the executive director and a professor of practice at Georgetown University's Beeck Center.
TANYA OTT: Bill and Sonal are coauthors of The CDO Playbook – a guide for Chief Data Officers. For the last decade, government has been focused on making data more open and easily [accessible] to the public.
Our partner Verint has #AI-powered tools to ensure private Omni-Channel conversations stay secure. Mayday Communications Inc promotes Verint's complete portfolio of #security solutions. In this newsletter featuring Gartner's report, "Predicts 2019: The Ambiguous Future of Privacy," we dig into steps you can take now to prepare your business for the rising tide of #privacy #regulations.
"The increasing amount of available data, mainly due to the proliferation of access to the internet in countries where peacekeeping missions take place, has caused a technology-driven transformation of the operational environment. This comes at a time of significant developments in the fields of artificial intelligence and particularly machine learning, most of whose applications still rely on massive amounts of data. As such these developments have produced some promising individual initiatives to exploit this new and growing potential for United Nations operations." At least as early as 1996 researchers have used machine learning (ML) to predict conflicts. Today, mainly due to significantly higher amounts of available data, advancements in computing power and the progress made in natural language processing, several artificial intelligence (AI) tools have been added to the peacekeeping arsenal.
QuantX recently became the first-ever computer-aided breast cancer diagnosis system cleared by the FDA for use in radiology, but it's not putting radiologists out of a job any time soon. "Radiology is the backbone of diagnosing many diseases today," said Jeffrey Aronin, chairman and CEO of Paragon Biosciences. "We believe the future is radiologists with technology." The combination of humans and machines appears to work well: in a clinical study, QuantX helped radiologists interpret breast MRIs, distinguishing cancerous from noncancerous lesions.
Advancements in artificial intelligence software in the commercial space have gained traction in recent years. From Watson assisting with diagnoses in doctors' offices to the computer programs running risk analysis for banks making lending decisions, AI has permeated many facets of our lives. However, the federal government's use of AI has received far less attention, despite directly impacting most citizens across the country. The term "artificial intelligence" has been co-opted for a wide array of applications. The definition often includes everything from marginally automated systems to advanced machine learning programs that make decisions independently of a human operator.
Elon Musk doesn't think his newest endeavor, revealed Tuesday night after two years of relative secrecy, will end all human suffering. At a presentation at the California Academy of Sciences, hastily announced via Twitter and beginning a half hour late, Musk presented the first product from his company Neuralink. It's a tiny computer chip attached to ultrafine, electrode-studded wires, stitched into living brains by a clever robot. And depending on which part of the two-hour presentation you caught, it's either a state-of-the-art tool for understanding the brain, a clinical advance for people with neurological disorders, or the next step in human evolution. The chip is custom-built to receive and process the electrical action potentials, or "spikes," that signal activity in the interconnected neurons that make up the brain.
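The chip's core task of picking "spikes" out of raw voltage can be illustrated with a deliberately simplified threshold detector. Everything below (the waveform shape, noise level, threshold, and refractory window) is a toy assumption for the sketch; real implants perform far more sophisticated on-chip spike detection and sorting.

```python
import math
import random

random.seed(1)

spike_times = [500, 1500, 2500]  # sample indices where spikes are injected

# Simulate a short, noisy extracellular voltage trace (in microvolts)
# with three injected spike-like waveforms.
trace = [random.gauss(0.0, 5.0) for _ in range(3000)]
for t in spike_times:
    for i in range(20):
        # Crude biphasic spike shape: sharp negative dip, slow rebound.
        trace[t + i] += -120.0 * math.exp(-i / 4.0) + 30.0 * math.exp(-i / 10.0)

def detect_spikes(trace, threshold=-40.0, dead_time=30):
    """Return sample indices where the trace first crosses below the
    threshold, skipping a refractory window after each detection so one
    spike is not counted multiple times."""
    detected, i = [], 0
    while i < len(trace):
        if trace[i] < threshold:
            detected.append(i)
            i += dead_time  # ignore samples inside the refractory window
        else:
            i += 1
    return detected

detected = detect_spikes(trace)
```

With well-separated spikes and modest noise, the detector recovers the three injected events; the interesting engineering is in doing this robustly, in real time, on thousands of channels at once.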
SAN FRANCISCO -- Gig economy workers are increasingly ubiquitous, shuttling us to appointments and delivering our food while working for Uber, Lyft, DoorDash and others. Thanks in large part to the app-based tech boom emanating from this city, 36% of U.S. workers participate in the gig economy, according to Gallup. But not all gigs are created equal, Gallup adds, noting that so-called "contingent gig workers" experience their workplace "like regular employees do, just without the benefits of a traditional job -- benefits, pay and security." California lawmakers are weighing what is considered a pro-worker bill that, if passed into law, would set a national precedent that fundamentally redefines the relationship between worker and boss by forcing corporations to pay up.
We hear a lot about AI and its transformative potential. What that means for the future of humanity, however, is not altogether clear. Some futurists believe life will be improved, while others think it is under serious threat. Here's a range of takes from 11 experts.
Artificial intelligence (AI) is rapidly finding applications in nearly every walk of life. Self-driving cars, social media networks, cybersecurity companies, and everything in between use it. But a new report published by the SHERPA consortium, an EU project studying the impact of AI on ethics and human rights, finds that although human attackers have access to machine learning techniques, they currently focus most of their efforts on manipulating existing AI systems for malicious purposes rather than creating new attacks that use machine learning. The study's primary focus is on how malicious actors can abuse AI, machine learning, and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are well within reach of today's attackers, including the creation of sophisticated disinformation and social engineering campaigns.
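The report's central observation, attackers manipulating existing AI systems rather than building their own, can be illustrated with a toy evasion attack: nudging an input just past a fixed classifier's decision boundary. The "spam detector" weights and features below are invented for the sketch and do not represent any real system or any attack described in the report.

```python
# A fixed, hypothetical linear "spam detector": score = w . x + b,
# with the input flagged as spam when the score is positive.
w = [2.0, -1.0, 0.5]   # weights over three made-up features
b = -1.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def is_spam(x):
    return score(x) > 0

def evade(x, step=0.05, max_iters=1000):
    """Greedily move the input against the weight vector (the gradient of
    the score) until the classifier's decision flips. This is the simplest
    form of evasion attack against a known linear model."""
    x = list(x)
    for _ in range(max_iters):
        if not is_spam(x):
            return x
        # Step each feature opposite to its weight's sign.
        x = [xi - step * wi for xi, wi in zip(x, w)]
    return x

original = [1.5, 0.2, 0.4]   # an input the model currently flags as spam
adv = evade(original)        # a slightly perturbed input that slips past it
```

Each step lowers the score by a fixed amount, so the loop terminates quickly; the broader point is that an attacker who can probe a deployed model often needs only small, targeted input changes, not their own ML pipeline.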