The study found that hyper-social canines carry variants of the genes GTF2I and GTF2IRD1, whose deletion in humans triggers Williams-Beuren syndrome (WBS), more commonly known as Williams syndrome. The National Organization for Rare Disorders characterizes WBS as a "rare genetic disorder characterized by growth delays before and after birth (prenatal and postnatal growth retardation), short stature, a varying degree of mental deficiency, and distinctive facial features that typically become more pronounced with age." "This exciting observation highlights the utility of the dog as a genetic system informative for studies of human disease, as it shows how minor variants in critical genes in dogs result in major syndromic effects in humans," she said, the BBC reported.
According to California startup Halo Neuroscience, the device can help improve the performance of athletes, pilots and surgeons, and potentially aid rehabilitation for stroke victims. By stimulating the motor cortex, Chao says, the Halo device can "extract latent potential" in the brain to improve performance for people who rely on making quick decisions and movements, such as athletes. The San Francisco startup has also concluded deals with the San Francisco Giants baseball team and the U.S. Olympic ski team to integrate Halo into training programs. Chao, who trained as a doctor and studied neuroscience at Stanford, previously worked at a startup called NeuroPace, which uses electrical stimulation to treat epilepsy.
A new competition heralds what is likely to become the future of cybersecurity and cyberwarfare, with offensive and defensive AI algorithms doing battle. "It's a brilliant idea to catalyze research into both fooling deep neural networks and designing deep neural networks that cannot be fooled," says Jeff Clune, an assistant professor at the University of Wyoming who studies the limits of machine learning. Machine learning, and deep learning in particular, is rapidly becoming an indispensable tool in many industries. "Adversarial machine learning is more difficult to study than conventional machine learning: it's hard to tell if your attack is strong or if your defense is actually weak," says Ian Goodfellow, a researcher at Google Brain, a division of Google dedicated to researching and applying machine learning, who organized the contest.
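The attack side of such a contest can be illustrated with the fast gradient sign method (FGSM), a standard way to craft inputs that fool a model: nudge the input in the direction that most increases the model's loss. The tiny linear classifier, weights, and step size below are illustrative assumptions for a sketch, not details of any contest entry:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method for a logistic-regression 'model':
    perturb input x by eps in the sign of the loss gradient w.r.t. x."""
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1
    grad_x = (p - y) * w              # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)  # bounded step that raises the loss

# Toy example: the model classifies x correctly and confidently...
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, -1.0])   # score = w.x + b = 4.0, so p ≈ 0.98
y = 1.0                     # true label

# ...but a small signed perturbation flips its decision.
x_adv = fgsm_perturb(x, w, b, y, eps=2.5)
z_adv = np.dot(w, x_adv) + b   # now negative: the model predicts class 0
```

A defense entry would then be judged by whether its model still classifies `x_adv` correctly, which is why, as Goodfellow notes, a weak attack can masquerade as a strong defense.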
Elon Musk thinks the government needs to regulate artificial intelligence (AI) now, before it becomes dangerous to humanity, the entrepreneur told a gathering of state governors over the weekend. "I have exposure to the very cutting-edge AI, and I think people should be really concerned about it," Musk told attendees at the National Governors Association summer meeting on Saturday (July 15). "I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal." Musk isn't the only prominent figure in technology to sound alarm bells over AI.
On July 5, Demis Hassabis, co-founder and CEO of DeepMind, announced "the opening of DeepMind's first ever international AI research office in Edmonton, Canada, in close collaboration with the University of Alberta." In addition to contributing on the research and education front, DeepMind plans to invest in programs to promote "Edmonton's growth as a technology and research hub." Canada has welcomed the DeepMind move as yet another advance for AI research in the country, a goal set by "the federal government's Pan-Canadian Artificial Intelligence Strategy." However, such systems tend to eliminate the need for human workers rather than increase employment opportunities, and new jobs don't magically open up when old ones are filled by machines.
Artificial intelligence will better our lives in many ways, but it will also pose a danger if people don't program AI properly. On Monday's "The Glenn Beck Radio Program," Glenn Beck wondered how we will train AI to know good from bad when our postmodern society doesn't even know that. If our society doesn't know right from wrong, how can we program AI with a proper foundation of truth, such as the understanding that pain and failure are bad?
The United Kingdom's government has some questions about artificial intelligence. On Wednesday, the House of Lords announced a public call for experts to weigh in on issues surrounding AI, including its ethical, economic and social effects as the technology becomes more prevalent. "The Committee wants to use this inquiry to understand what opportunities may exist for society in the development and use of artificial intelligence, as well as what risks there might be," Lord Clement-Jones, chairman of the committee on AI, said in a statement.
Many cybersecurity companies are starting to invest in or implement AI in their cybersecurity solutions, and it is giving their security teams a significant boost, according to a recently released report commissioned by McAfee. Cybercriminals are starting to use these solutions to sift through large amounts of data to "classify victims that have weaker defenses" so they can get the maximum "return on their investment," Steve Grobman, chief technology officer for McAfee, told Bloomberg BNA. Grobman added that AI and machine learning won't replace cybersecurity teams; rather, "it will change the way that cybersecurity professionals will do their jobs."
A House of Representatives panel just greenlit a measure that, once officially signed into law, would allow thousands of autonomous cars to hit the road while federal legislators draft more comprehensive safety laws. The legislation would exempt automakers from US safety rules and allow them to let loose tens of thousands of autonomous vehicles on American roads, all while prohibiting states from regulating their mechanical, software and/or safety systems. In the House's current version of the bill, automakers and tech firms would need to establish a cybersecurity plan before a self-driving car hits the road. That's probably a relief to companies like Apple and Tesla that are stuck trying to change autonomous car laws on a state-by-state basis.