Collaborating Authors: farahany


5 things conservatives need to know before AI wipes out conservative thought altogether

FOX News

Texas residents share how familiar they are with artificial intelligence on a scale from one to 10 and detail how much they use it each day. The "Godfather of A.I.," Geoffrey Hinton, quit Google out of fear that his former employer intends to deploy artificial intelligence in ways that will harm human beings. "It is hard to see how you can prevent the bad actors from using it for bad things," Hinton recently told The New York Times. But stomping out the door does nothing to atone for his own actions, and it certainly does nothing to protect conservatives – who are the primary target of A.I. programmers – from being canceled. Here are five things to know as the battle over A.I. turns hot. Elon Musk recently revealed that Google co-founder Larry Page and other Silicon Valley leaders want AI to establish a "digital god" that "would understand everything in the world."


What Grimes' AI music offer could mean for the future of the industry

FOX News

Duke law and philosophy professor and author Nita Farahany says the challenge for humans with quickly developing artificial intelligence lies in the ethical and legal constraints around it. As controversy swirls around the use of famous artists' vocals for AI-generated music, Grimes seems to be embracing the use of artificial intelligence in the music industry. In a tweet Sunday, the 33-year-old Canadian singer, whose real name is Claire Elise Boucher, said she is happy to have her voice featured on AI-simulated music tracks as long as she is compensated with royalties for successful songs. "I'll split 50 [percent] royalties on any successful AI generated song that uses my voice," Grimes, who shares two children with Elon Musk, tweeted. "Feel free to use my voice without penalty."


The professor trying to protect our private thoughts from technology

#artificialintelligence

Private thoughts may not be private for much longer, heralding a nightmarish world where political views, thoughts, stray obsessions and feelings could be interrogated and punished, all thanks to advances in neurotechnology. Or at least that is what one of the world's leading brain scientists believes. In a new book, The Battle for Your Brain, Duke University bioscience professor Nita Farahany argues that such intrusions into the human mind by technology are so close that a public discussion is long overdue and that lawmakers should immediately establish brain protections as they would for any other area of personal liberty. Advances in hacking and tracking thoughts, with Orwellian fears of mind control running just below the surface, are the subject of Farahany's scholarship, alongside urgent calls for legislative guarantees of thought privacy, including freedom from "cognitive fingerprinting", that lie within an area of ethics broadly termed "cognitive liberty". Certainly the field is advancing rapidly.


Neurotechnology is here. Without laws, your brain's privacy is at risk. - Vox

#artificialintelligence

If you've ever wished your brain was more user-friendly, neurotechnology might seem like a dream come true. It's all about offering you ways to hack your brain, getting it to do more of what you want and less of what you don't want. There are "nootropics" -- also known as "smart drugs" or "cognitive enhancers" -- pills that supposedly give your brain a boost. There's neurofeedback, a tool for training yourself to regulate your brain waves; research has shown it has the potential to help people struggling with conditions like ADHD and PTSD. There's brain stimulation, which uses electric currents to directly target certain brain areas and change their behavior; it's shown promise in treating severe depression by disrupting depression-linked neural activity. Oh, and Elon Musk and Mark Zuckerberg are working on brain-computer interfaces that could pick up thoughts directly from your neurons and translate them into words in real time, which could one day allow you to control your phone or computer with just your thoughts. Some of these technologies can offer very valuable help to people who need it.


How Far Has AI Mindreading Come?

#artificialintelligence

It's becoming easier all the time to read signals from the human brain: Tesla founder Elon Musk's company Neuralink just this summer announced that human trials will move forward next year for an implantable device that can read a user's mind; scientists at UCSF recently released the results of a brain activity study, backed by Facebook, that shows it's possible to use brain-wave technology to decode speech; in 2018, Nissan unveiled Brain-to-Vehicle technology that would allow vehicles to interpret signals from the driver's brain; and Nielsen is already using neuroscience to capture nonconscious aspects of consumer decision-making. There will be good and bad outcomes. Recently, we've talked about how high tech can help the blind see and amputees feel by reading brain signals directly. A mind-controlled robotic "arm" can help sufferers from movement disorders with the tasks of daily living. One workplace use is to monitor drowsiness in, for example, the operators of high-speed trains.


Duke Experts Talk Artificial Intelligence With Congressional Staff

#artificialintelligence

Increased federal funding and ethical inquiry are needed to best develop America's artificial intelligence capabilities, argued three Duke experts in a congressional briefing on Capitol Hill on Feb. 15. Duke Professors Vincent Conitzer, Nita Farahany and Walter Sinnott-Armstrong spoke broadly about the ethical implications of the advent of A.I. for medicine, lethal weapons, automobiles and unemployment. Conitzer's presentation offered a definition of A.I., Sinnott-Armstrong explored the ethics of lethal autonomous weapons, and Farahany dove into the legal questions A.I. will raise. Artificially intelligent systems already excel at games of probability and prediction but fail at games of context and interpretation, said Conitzer in his presentation. This program began a three-part Duke in DC series for congressional staff exploring policy implications of human-A.I. collaboration.


Could you be sacked for your THOUGHTS?

Daily Mail - Science & tech

Employees may soon be forced to wear headsets that track thoughts and monitor productivity, engagement and even when staff want to complain to their boss. A mind-reading device known as an electroencephalogram (EEG) can be fitted to a person's scalp to track electrical signals produced by the brain. Some companies have started forcing workers to wear them, combining the readings with artificial intelligence to dissect their thoughts. Nita Farahany, a professor of law and philosophy at Duke University, gave a TED Talk on the topic and revealed she was concerned that this might cost people their jobs. The headsets are already being used to track alertness, productivity and mental states in China.