Rice University statistician Genevera Allen says scientists must keep questioning the accuracy and reproducibility of scientific discoveries made by machine-learning techniques until researchers develop new computational systems that can critique themselves. Allen, associate professor of statistics, computer science, and electrical and computer engineering at Rice and of pediatrics-neurology at Baylor College of Medicine, will address the topic in both a press briefing and a general session today at the 2019 Annual Meeting of the American Association for the Advancement of Science (AAAS). "The question is, 'Can we really trust the discoveries that are currently being made using machine-learning techniques applied to large data sets?'" she said. "The answer in many situations is probably, 'Not without checking,' but work is underway on next-generation machine-learning systems that will assess the uncertainty and reproducibility of their predictions." Machine learning (ML) is a branch of statistics and computer science concerned with building computational systems that learn from data rather than following explicit instructions. Allen said much attention in the ML field has focused on developing predictive models that allow ML to make predictions about future data based on its understanding of data it has studied.
Robert Heinlein is the legendary author of such classic works as Starship Troopers, The Moon Is a Harsh Mistress, and Stranger in a Strange Land. His books have influenced generations of artists and scientists, including physicist and science fiction writer Gregory Benford. "He was one of the people who propelled me forward to go into the sciences," Benford says in Episode 348 of the Geek's Guide to the Galaxy podcast. "Because his depiction of the prospect of the future of science, engineering--everything--was so enticing. He was my favorite science fiction writer."
So convincing, in fact, that the researchers have refrained from open-sourcing the code, in hopes of stalling its potential weaponization as a means of mass-producing fake news. While the impressive results are a remarkable leap beyond what existing language models have achieved, the technique involved isn't exactly new. Instead, the breakthrough was driven primarily by feeding the algorithm ever more training data--a trick that has also been responsible for most of the other recent advancements in teaching AI to read and write. "It's kind of surprising people in terms of what you can do with [...] more data and bigger models," says Percy Liang, a computer science professor at Stanford.
Taylor Swift raised eyebrows late last year when Rolling Stone magazine revealed her security team had deployed facial recognition technology during her Reputation tour to root out stalkers. But the company contracted for the efforts uses its technology to provide much more than just security. ISM Connect also uses its smart screens to capture metrics for promotion and marketing. Facial recognition, used for decades by law enforcement and militaries, is quickly becoming a commercial tool to help brands engage consumers. Swift's tour is just the latest example of the growing privacy concerns around the largely unregulated, billion-dollar industry.
There is widespread public support for a ban on so-called "killer robots", which campaigners say would "cross a moral line" after which it would be difficult to return. Polling across 26 countries found over 60 per cent of the thousands asked opposed lethal autonomous weapons that can kill with no human input, and only around a fifth backed them. The figures showed public support was growing for a treaty to regulate these controversial new technologies - a treaty which is already being pushed by campaigners, scientists and many world leaders. However, a meeting in Geneva at the close of last year ended in a stalemate after nations including the US and Russia indicated they would not support the creation of such a global agreement. Mary Wareham of Human Rights Watch, who coordinates the Campaign to Stop Killer Robots, compared the movement to successful efforts to eradicate landmines from battlefields.
Autonomous vehicles relying on light-based image sensors often struggle to see through blinding conditions, such as fog. But MIT researchers have developed a sub-terahertz-radiation receiving system that could help steer driverless cars when traditional methods fail. Sub-terahertz wavelengths, which sit between microwave and infrared radiation on the electromagnetic spectrum, can be detected through fog and dust clouds with ease, whereas the infrared-based LiDAR imaging systems used in autonomous vehicles struggle. To detect objects, a sub-terahertz imaging system sends an initial signal through a transmitter; a receiver then measures the absorption and reflection of the rebounding sub-terahertz wavelengths and passes those measurements to a processor that reconstructs an image of the object.
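The core geometry of any send-and-listen ranging system like the one described above is simple: the signal travels to the object and back, so the object's distance is half the round-trip time multiplied by the speed of light. The sketch below illustrates only that relationship; the numbers are made up and this is not the MIT system's actual processing pipeline.

```python
# Sketch of reflect-and-measure ranging: distance is half the round-trip
# time of the echo times the propagation speed (speed of light).
C = 299_792_458.0  # speed of light, m/s

def distance_from_round_trip(t_seconds):
    """Object distance given the measured round-trip time of the echo."""
    return C * t_seconds / 2.0

# A 200-nanosecond round trip corresponds to an object about 30 m away.
d = distance_from_round_trip(200e-9)
print(round(d, 1))  # 30.0
```

The hard part in practice is not this arithmetic but detecting the weak sub-terahertz echo at all, which is where the researchers' receiver design comes in.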
A six-legged robot can find its way home without the help of GPS, thanks to tactics borrowed from desert ants. The robot, called AntBot, uses light from the sky to judge the direction it's going. To assess the distance travelled, it uses a combination of observing the motion of objects on the ground as they pass by and counting steps. All three of these techniques are used by desert ants. To test AntBot, Stéphane Viollet at Aix-Marseille University in France and colleagues set it an outdoor homing task: first go to several checkpoints, then return home.
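Combining a heading estimate with a distance estimate at each step, as AntBot does, amounts to classic dead reckoning (what biologists call path integration): summing the step-by-step displacements gives the current position, and negating that vector gives the way home. The sketch below is a minimal illustration of that idea, not AntBot's actual software; the heading would come from the sky-light compass and the per-step distance from optic flow and step counting.

```python
# Sketch of ant-style path integration (dead reckoning): accumulate the
# (x, y) displacement from per-step (heading, distance) readings, then
# derive the homing vector by pointing back at the start.
import math

def integrate_path(steps):
    """steps: list of (heading_radians, distance) pairs.
    Returns the accumulated (x, y) displacement from the start."""
    x = y = 0.0
    for heading, dist in steps:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y

def home_vector(steps):
    """Heading to steer and distance to travel to return to the start."""
    x, y = integrate_path(steps)
    return math.atan2(-y, -x), math.hypot(x, y)

# Example: 3 m east, then 4 m north -- home lies 5 m away (3-4-5 triangle).
heading, dist = home_vector([(0.0, 3.0), (math.pi / 2, 4.0)])
print(round(dist, 2))  # 5.0
```

Because every step's error accumulates, real systems (and real ants) drift over long outings, which is why the accuracy of AntBot's celestial compass matters so much.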
Such targeted care is referred to as precision medicine--drugs or treatments designed for small groups, rather than large populations, based on characteristics such as medical history, genetic makeup, and data recorded by wearable devices. In 2003, the completion of the Human Genome Project was attended by lofty promises about the imminence of these treatments, but results have so far underwhelmed. Today, new technologies are revitalizing the promise. At organizations ranging from large corporations to university-led and government-funded research collectives, doctors are using artificial intelligence (AI) to develop precision treatments for complex diseases.
AI promises to be a boon to medical practice, improving diagnoses, personalizing treatment, and spotting future public-health threats. By 2024, experts predict, healthcare AI will be a nearly $20 billion market, with tools that transcribe medical records, assist surgery, and investigate insurance claims for fraud. Even so, the technology raises some knotty ethical questions. What happens when an AI system makes the wrong decision--and who is responsible if it does? How can clinicians verify, or even understand, what comes out of an AI "black box"?
U.S. President Donald Trump on Monday will sign an executive order asking federal government agencies to dedicate more resources and investment to research, promotion, and training on artificial intelligence (AI), a senior administration official said. Under the American AI Initiative, the administration will direct agencies to prioritize AI investments in research and development, increase access to federal data and models for that research, and prepare workers to adapt to the era of AI. There was no specific funding announced for the initiative, the administration official said on a conference call, adding that it called for better reporting and tracking of spending on AI-related research and development. The initiative aims to make sure the United States keeps its research and development advantage in AI and related areas, such as advanced manufacturing and quantum computing. Trump, in his State of the Union speech last week, said he was willing to work with lawmakers to deliver new and important infrastructure investment, including investments in the cutting-edge industries of the future, calling it a "necessity."