What if AI in health care is the next asbestos? - STAT
Artificial intelligence is often hailed as a great catalyst of medical innovation, a way to find cures for diseases that have confounded doctors and to make health care more efficient, personalized, and accessible. But what if it turns out to be poison? Jonathan Zittrain, a Harvard Law School professor, posed that question during a conference in Boston Tuesday that examined the use of AI to accelerate the delivery of precision medicine to the masses. "I think of machine learning kind of as asbestos," he said. "It turns out that it's all over the place, even though at no point did you explicitly install it, and it has possibly some latent bad effects that you might regret later, after it's already too hard to get it all out."
- North America > United States > North Carolina (0.05)
- North America > United States > New York (0.05)
- Asia > Afghanistan (0.05)
- Health & Medicine > Therapeutic Area (1.00)
- Education > Educational Setting > Higher Education (0.55)
- Education > Curriculum > Subject-Specific Education (0.55)
Bracing Medical AI Systems for Attacks
Last June, a team at Harvard Medical School and MIT showed that it's pretty darn easy to fool an artificial intelligence system analyzing medical images. Researchers modified a few pixels in eye images, skin photos and chest X-rays to trick deep learning systems into confidently classifying perfectly benign images as malignant. These so-called "adversarial attacks" introduce small, carefully designed changes to data--in this case pixel changes imperceptible to human vision--to nudge an algorithm into making a mistake. That's not great news at a time when medical AI systems are just reaching the clinic, with the first AI-based medical device approved in April and AI systems besting doctors at diagnosis across healthcare sectors. Now, in collaboration with a Harvard lawyer and ethicist, the same team is out with an article in the journal Science offering suggestions about when and how the medical industry might intervene against adversarial attacks.
- Health & Medicine (1.00)
- Information Technology > Security & Privacy (0.63)
- Government > Military (0.63)
- Education > Educational Setting > Higher Education (0.32)
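The mechanism described above--shifting pixels by amounts too small for a human to notice, in the direction that most increases the classifier's error--can be sketched with a toy model. This is a hypothetical illustration using a simple linear classifier and synthetic data, not the deep networks or medical images from the Harvard/MIT study; the perturbation step follows the well-known fast-gradient-sign idea.

```python
import numpy as np

# Hypothetical sketch: an adversarial perturbation against a toy
# logistic-regression "image" classifier. Weights and data are synthetic,
# for illustration only.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fixed linear classifier: p(malignant) = sigmoid(w . x + b)
w = rng.normal(size=64)           # weights for a flattened 8x8 "image"
b = -0.5
x = rng.uniform(0.0, 1.0, 64)     # a benign input

def predict(img):
    return sigmoid(w @ img + b)

# For this linear model, the gradient of the malignant score with respect
# to the input pixels is just w. The fast-gradient-sign step nudges each
# pixel by a tiny amount eps in the sign of that gradient -- a change
# imperceptible pixel-by-pixel, but aligned to maximally raise the score.
eps = 0.02
x_adv = np.clip(x + eps * np.sign(w), 0.0, 1.0)

print(f"benign score:      {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")
```

The same principle scales to deep networks, where the input gradient is computed by backpropagation; there, a comparably tiny perturbation can flip a confident "benign" into a confident "malignant."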
Mark Zuckerberg wants to build a 'brain-computer interface' that can read your THOUGHTS
Facebook is developing technology that could soon make it possible to read your mind. CEO Mark Zuckerberg detailed how the Silicon Valley giant is researching a 'brain-computer interface' in an interview with Harvard Law School professor Jonathan Zittrain, according to Wired. In the near future, this system would allow users to interact with augmented reality environments using just their brain - no keyboards, touchscreens or hand gestures required. The concept that Zuckerberg envisions would allow users to navigate menus, move objects in an AR room or even type words with their brain.
- Information Technology > Services (1.00)
- Health & Medicine (1.00)
- Education > Educational Setting > Higher Education (0.56)
- Education > Curriculum > Subject-Specific Education (0.56)
Zuckerberg Wants Facebook to Build a Mind-Reading Machine
For those of us who worry that Facebook may have serious boundary issues when it comes to the personal information of its users, Mark Zuckerberg's recent comments at Harvard should get the heart racing. Zuckerberg dropped by the university last month ostensibly as part of a year of conversations with experts about the role of technology in society, "the opportunities, the challenges, the hopes, and the anxieties." His nearly two-hour interview with Harvard law school professor Jonathan Zittrain in front of Facebook cameras and a classroom of students centered on the company's unprecedented position as a town square for perhaps 2 billion people. To hear the young CEO tell it, Facebook was taking shots from all sides--either it was indifferent to the ethnic hatred festering on its platforms or it was a heavy-handed censor deciding whether an idea was allowed to be expressed. Zuckerberg confessed that he hadn't sought out such an awesome responsibility.
- Information Technology > Services (1.00)
- Education > Educational Setting > Higher Education (0.55)
- Education > Curriculum > Subject-Specific Education (0.55)
As AI identity management takes shape, are enterprises ready?
Enterprises may soon find themselves replacing their usernames and passwords with algorithms. At the Identiverse 2018 conference last month, a chorus of vendors, infosec experts and keynote speakers discussed how machine learning and artificial intelligence are changing the identity and access management (IAM) space. Specifically, IAM professionals promoted the concept of AI identity management, where vulnerable password systems are replaced by systems that rely instead on biometrics and behavioral security to authenticate users. And, as the argument goes, humans won't be capable of effectively analyzing the growing number of authentication factors, which can include everything from login times and download activity to mouse movements and keystroke patterns. Sarah Squire, senior technical architect at Ping Identity, believes that use of machine learning and AI for authentication and identity management will only increase.
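The behavioral factors named above--login times, mouse movements, keystroke patterns--can be reduced to a per-user statistical baseline that a system scores new sessions against. The sketch below is a hypothetical, deliberately minimal version using only keystroke timing; real IAM products combine many more signals and far richer models. All names and numbers here are invented for illustration.

```python
import statistics

# Hypothetical sketch of behavioral authentication from keystroke timings.
# Feature: inter-key intervals in milliseconds. Enrollment builds a
# per-user baseline (mean, stdev); a new session is scored by how many
# standard deviations its average interval sits from that baseline.
# A high score would trigger step-up authentication rather than a hard block.

def enroll(samples):
    """Baseline statistics from a user's enrollment keystroke intervals."""
    return statistics.mean(samples), statistics.stdev(samples)

def anomaly_score(baseline, session):
    """Z-score of the session's mean interval against the baseline."""
    mean, stdev = baseline
    return abs(statistics.mean(session) - mean) / stdev

baseline = enroll([110, 105, 118, 112, 108, 115])    # ms between keystrokes
legit    = anomaly_score(baseline, [109, 114, 111])  # same typing rhythm
imposter = anomaly_score(baseline, [170, 182, 165])  # much slower typist

print(f"legit score:    {legit:.2f}")
print(f"imposter score: {imposter:.2f}")
```

The appeal of this approach for the IAM vendors quoted above is exactly that the scoring is continuous and passive: the user authenticates by behaving normally, with no password to phish.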
Why the biggest challenge facing AI is an ethical one
Artificial intelligence is everywhere and it's here to stay. Most aspects of our lives are now touched by artificial intelligence in one way or another, from deciding what books or flights to buy online to whether our job applications are successful, whether we receive a bank loan, and even what treatment we receive for cancer. We may have things better than ever – but we've also never faced such world-changing challenges. That's why Future Now asked 50 experts – scientists, technologists, business leaders and entrepreneurs – to name what they saw as the key challenges in their area. The range of different responses demonstrate the richness and complexity of the modern world. Inspired by these responses, over the next month we will be publishing a series of feature articles and videos that take an in-depth look at the biggest challenges we face today.
- North America > United States > North Carolina (0.05)
- North America > United States > Massachusetts (0.05)
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Cologne (0.05)
- Asia > China (0.05)
- Law (0.96)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.49)
$27M Fund Wants Artificial Intelligence with a Conscience
Humans need to make artificial intelligence socially conscious, and a new technology fund aims to support developers and behavioral scientists trying to do that. Several prominent technology institutions are contributing a combined $27 million to the Ethics and Governance of Artificial Intelligence Fund, which is designed to harness artificial intelligence for the public interest. The fund, helmed by MIT's Media Lab and Harvard's Berkman Klein Center for Internet & Society, encompasses $10 million each from LinkedIn co-founder Reid Hoffman and the Omidyar Network, an investment firm started by eBay founder Pierre Omidyar. The Knight Foundation chipped in $5 million. The fund was created to ensure AI research can be influenced by philosophers, ethicists, social scientists and other nonengineering perspectives, according to the Knight Foundation.
- Banking & Finance (1.00)
- Information Technology > Services (0.60)