If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
With some new and improved features, the second generation of AirPods has finally arrived. Following a hardware update cycle that saw new iPads on Monday and refreshed iMacs on Tuesday, Apple released the long-awaited update to AirPods on Wednesday. Keeping the same name and largely the same design as the original AirPods first released in 2016, the new earbuds start at $199 and come with a new wireless charging case, the ability to wirelessly summon Siri, and Apple's new H1 chip, which promises to add an extra hour of talk time. The wireless charging case, which has a little light on the front to indicate when it is charging, uses the same Qi standard found on recent iPhones and Android devices.
For most people who talk to our technology -- whether it's Amazon's Alexa, Apple's Siri or the Google Assistant -- the voice that talks back sounds female. Some people do choose to hear a male voice. Now, researchers have unveiled a new gender-neutral option: Q. "One of our big goals with Q was to contribute to a global conversation about gender, and about gender and technology and ethics, and how to be inclusive for people that identify in all sorts of different ways," says Julie Carpenter, an expert in human behavior and emerging technologies who worked on developing Project Q. The voice of Q was developed by a team of researchers, sound designers and linguists in conjunction with the organizers of Copenhagen Pride week, technology leaders in an initiative called Equal AI and others. They first recorded dozens of voices of people -- those who identify as male, female, transgender or nonbinary.
Fei-Fei Li heard the crackle of a cat's brain cells a couple of decades ago and has never forgotten it. Researchers had inserted electrodes into the animal's brain and connected them to a loudspeaker, filling a lab at Princeton with the eerie sound of firing neurons. "They played the symphony of a mammalian visual system," she told an audience Monday at Stanford, where she is now a professor. The music of the brain helped convince Li to dedicate herself to studying intelligence--a path that led the physics undergraduate to specialize in artificial intelligence and help catalyze the recent flourishing of AI technology and use cases like self-driving cars. These days, though, Li is concerned that the technology she helped bring to prominence may not always make the world better.
Techno-optimist prognosticators will tell you that driverless trucks are just around the corner. They will also gently tell you--always gently--that yes, truck driving, a job that nearly 3.7 million Americans perform today, is perhaps on the brink of extinction. A startup called Peloton Technology sees the future a bit differently. Based in Mountain View, California, the eight-year-old company has a plan to broadly commercialize a partially automated truck technology called platooning. It would still depend on drivers sitting behind a steering wheel, but it would be more fuel-efficient and, hopefully, safer than trucking today.
Folded and sealed with a dollop of red wax, the will of Catharuçia Savonario Rivoalti lay in Venice's State Archives, unread, for more than six and a half centuries. Scholars don't know why the document, written in 1351, was never opened. But to physicist Fauzia Albertin, the three-page document--six pages, folded--was the perfect thickness for an experiment. Albertin, who now works at the Enrico Fermi Research Center in Italy, wanted to read the will without unsealing it. In a 2017 demonstration, Albertin and her team beamed X-rays at the document to photograph the text inside.
When school began in Lockport, New York, this past fall, the halls were lined not just with posters and lockers, but with cameras. Over the summer, the school district installed a brand-new $4 million facial recognition system in the town's eight schools, from elementary to high school. The system scans the faces of students as they roam the halls, looking for faces that have been uploaded and flagged as dangerous. "Any way that we can improve safety and security in schools is always money well spent," David Lowry, president of the Lockport Education Association, told the Lockport Union-Sun & Journal.
The internet is full of lies. That maxim has become an operating assumption for any remotely skeptical person interacting anywhere online, from Facebook and Twitter to phishing-plagued inboxes, spammy comment sections, online dating, and disinformation-plagued media. Now one group of researchers has suggested the first hint of a solution: They claim to have built a prototype for an "online polygraph" that uses machine learning to detect deception from text alone. But what they've actually demonstrated, according to a few machine learning academics, is the inherent danger of overblown machine learning claims. In last month's issue of the journal Computers in Human Behavior, Florida State University and Stanford researchers proposed a system that uses automated algorithms to separate truths from lies, which they describe as the first step toward "an online polygraph system--or a prototype detection system for computer-mediated deception when face-to-face interaction is not available."
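The paper's actual model is not reproduced here, but the general flavor of text-only deception classification can be sketched with a toy Naive Bayes model over word counts. Every sentence and label below is fabricated for illustration, and the tiny dataset is itself a demonstration of the overfitting risk the machine learning academics raise:

```python
from collections import Counter
import math

def train(samples):
    """samples: list of (text, label) pairs. Returns per-label word counts
    and the number of examples per label."""
    counts = {"truth": Counter(), "lie": Counter()}
    totals = Counter()
    for text, label in samples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Naive Bayes in log space; returns the more likely label."""
    vocab = set(counts["truth"]) | set(counts["lie"])
    best = None
    for label in counts:
        logp = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score
            logp += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if best is None or logp > best[0]:
            best = (logp, label)
    return best[1]

# Fabricated training set -- far too small for any real claim,
# which is exactly the problem with overblown accuracy numbers.
samples = [
    ("i was at home all evening", "truth"),
    ("the meeting ran long so i stayed late", "truth"),
    ("i definitely never saw that email", "lie"),
    ("honestly i would never do that", "lie"),
]
counts, totals = train(samples)
print(score("i definitely never saw it", counts, totals))  # → lie
```

Swapping in a different handful of sentences can flip the prediction entirely, which is why critics argue that accuracy figures measured on small, artificial datasets say little about detecting real-world deception.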
The top of Monarch Crest trail outside Salida, Colorado, provides a stunning view, a 360-degree vista of the Continental Divide and the 14,000-foot Collegiate Peaks towering to the north. But as I clip into my mountain bike's pedals, I've got more than selfies on my mind. I flip open the compression damping switches on my suspension fork and rear shock. I lower my telescoping "dropper" seat post to move the saddle out of the way. I shift to a sprocket in the middle of the gearing range to keep chain tension high.
Kaia Health caught our attention last year with an app that tracks your motion using your phone's camera in a bid to help you achieve perfect squat form, though we found it didn't quite hit the mark. Still, Kaia is elevating the concept with an updated version called Kaia Personal Trainer. It says the app will track your exercises and reps, create workout plans tailored to you and offer audio feedback in real time. It doesn't need any equipment other than an iPhone or iPad running iOS 12 (an Android version will arrive in the next few months), though you might still opt to use a fitness tracker. Once you get into position around seven feet away from your device, the app's AI uses a 16-point system to compare the way you move to optimal movement, looking at factors including the positions and angles of your limbs and joints.
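Kaia hasn't published the details of its 16-point system, but the basic geometry of comparing a user's pose to an "optimal" one can be sketched from keypoints alone. In this hypothetical example, the hip/knee/ankle coordinates and the 80-100 degree "optimal" knee range are illustrative assumptions, not Kaia's actual keypoints or thresholds:

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b (in degrees) formed by 2D keypoints a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    mag = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

# Hypothetical normalized keypoints for hip, knee, and ankle mid-squat
hip, knee, ankle = (0.50, 0.40), (0.55, 0.60), (0.52, 0.85)

angle = joint_angle(hip, knee, ankle)
# Compare against an assumed "optimal" range and give feedback
if not 80 <= angle <= 100:
    print(f"knee angle {angle:.0f} deg -- adjust your squat depth")
```

A real system would track all 16 keypoints per frame and smooth them over time, but the angle-at-a-joint computation is the basic building block behind comparing observed movement to a target form.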
Facebook rushed to pull down footage of the New Zealand mass shooter's video from its platform, but it didn't start doing so until after the live broadcast was done. In a new post, Facebook VP of Integrity Guy Rosen discussed the company's successes and shortcomings in addressing the situation, as well as its plans to prevent videos like it from spreading on the social network in the future. He explained that while the platform's AI can quickly detect videos containing suicidal or harmful acts, the shooter's stream didn't trigger it. To train the matching AI to detect that specific type of content, the platform needs large volumes of training data, which, as Facebook explains, is difficult to obtain because "these events are thankfully rare."
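Facebook hasn't disclosed how its matching system fingerprints flagged videos, but one common family of techniques for finding re-uploads is perceptual hashing, where near-duplicate frames produce nearly identical hashes. The average-hash sketch below is a generic illustration of that idea, not Facebook's implementation; the frames and the distance threshold are made up:

```python
def average_hash(pixels):
    """64-bit perceptual hash of an 8x8 grayscale frame (values 0-255):
    each bit is 1 if that pixel is brighter than the frame's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# Two "frames": the second is the first with mild noise added,
# standing in for a re-encoded copy of a flagged video frame.
frame = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
noisy = [[min(255, v + 3) for v in row] for row in frame]

# A small Hamming distance means the frame likely matches a flagged clip
match = hamming(average_hash(frame), average_hash(noisy)) <= 5
```

Because each bit depends on a pixel's relation to the frame's mean brightness rather than on exact values, re-encoded or slightly altered copies still land within a small Hamming distance of the original -- the robustness a matching system needs, and the reason such systems still require many labeled examples of genuinely new kinds of content.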