Any sufficiently advanced technology is indistinguishable from magic. In the world of artificial intelligence and machine learning (AI/ML), the black-box and white-box categorization of models and algorithms refers to their interpretability. That is, given a model trained to map data inputs to outputs, only white-box AI methods can be readily inspected to reveal the logic behind the model's predictions, much as white-box software testing exercises low-level logic while black-box testing checks only high-level behavior. In recent years, as machine learning has spread into new industries and applications where users far outnumber the experts who grok the models and algorithms, the conversation around interpretability has become an important one.
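To make the contrast concrete, here is a minimal illustrative sketch (not tied to any specific library or model from the article): a simple linear model fit by ordinary least squares is "white-box" in the sense that its entire decision logic is two directly inspectable numbers, whereas a deep network's logic is spread across millions of weights.

```python
# Illustrative sketch: a "white-box" model whose logic is directly readable.
# A linear model fit by ordinary least squares exposes its full decision
# logic as two numbers, unlike a black-box deep network.

def fit_line(xs, ys):
    """Closed-form least-squares fit: y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data drawn from y = 2x + 1, so the fitted parameters are
# interpretable at a glance.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # the entire "model" is these two inspectable numbers
```

Anyone can read off why this model predicts what it predicts; that transparency is exactly what is lost as model complexity grows.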
"Scraping people's information violates our policies, which is why we've demanded that Clearview stop accessing or using information from Facebook or Instagram," a Facebook spokesperson said in an email to Fast Company. The previously little-known company drew national attention last month after an article by New York Times reporter Kashmir Hill revealed that the company claimed to have scraped billions of photos from services including Facebook, YouTube, and Venmo to match against people of interest to law enforcement. Twitter, YouTube parent Google, and Venmo have also reportedly told the startup to stop accessing data from their sites, saying it violates their policies. Whether they can legally enforce those rules is uncertain: The Ninth Circuit Court of Appeals ruled in September that a company scraping LinkedIn in violation of the social site's policies likely didn't violate the Computer Fraud and Abuse Act, a key federal anti-hacking law. Clearview didn't immediately respond to an inquiry from Fast Company.
IBM has outlined principles to promote transparency -- and foster public trust -- in the way companies use artificial intelligence. The principles call on banks and other organizations to designate a lead AI official, own up to their use of the technology, explain it and test it for bias. Bankers say they're already on it. IBM unveiled the principles last month at Davos through its new IBM Policy Lab. The goal was to provide guidance for developing intelligent policy that will provide societal protections without stifling innovation.
By its very definition, artificial intelligence refers to computer systems that can learn, reason, and act for themselves, but where does this intelligence come from? For decades, the collaborative intelligence of humans and machines has produced some of the world's leading technologies. Imagine reviewing hours of video footage, sorting through thousands of driving scenes to label every vehicle that comes into frame, and you've got data annotation. There's nothing glamorous about the data used to train today's AI applications, but the role of data annotation in AI is nonetheless fascinating. And for in-house teams, labeling data can be the proverbial bottleneck, limiting a company's ability to quickly train and validate machine learning models.
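What does one of those labels actually look like? A hypothetical sketch of a single frame's annotation record for a driving scene is below; the field names and values are invented for illustration and do not follow any particular vendor's schema.

```python
import json

# Hypothetical annotation record for one video frame of a driving scene.
# Field names and values are illustrative placeholders, not a real schema.
annotation = {
    "frame_id": "drive_0042_frame_0113",
    "labels": [
        {"category": "car",        "bbox": [412, 220, 96, 54]},   # x, y, w, h in pixels
        {"category": "pedestrian", "bbox": [130, 240, 28, 70]},
    ],
    "annotator": "worker_17",
}

print(json.dumps(annotation, indent=2))
print(len(annotation["labels"]))  # number of objects labeled in this frame
```

Multiply a record like this by thousands of frames per drive and dozens of drives per day, and the bottleneck the article describes becomes easy to picture.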
A team of researchers at Leiden University in the Netherlands has developed a neural network called the "Hazardous Object Identifier" that they say can predict whether an asteroid is on a collision course with Earth. Their new AI singled out 11 asteroids that were not previously classified by NASA as hazardous and that were larger than 100 meters in diameter: big enough to explode with the force of hundreds of nuclear weapons on impact, potentially leveling entire cities. The team focused on space rocks that could come within 4.7 million miles of Earth, as detailed in a paper published in the journal Astronomy & Astrophysics earlier this month. None is an imminent threat, however: not only are their chances of ever hitting Earth astronomically slim, but their flybys occur between the years 2131 and 2923, hundreds of years from now. To build the model, the team reversed the simulation, creating future Earth-impacting asteroids by flinging simulated objects away from Earth and tracking their exact locations and orbits.
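The reversal trick rests on the time-reversibility of orbital mechanics: a trajectory flying away from Earth, replayed backward with velocities negated, is a trajectory falling toward it. The sketch below illustrates the idea in one dimension with a toy leapfrog integrator; it is not the team's actual n-body code, and all quantities are in arbitrary units.

```python
# Minimal sketch of the "reverse the simulation" idea (illustrative, not the
# team's actual n-body code): fling a test object away from a planet, record
# its states, then flip time to obtain an impactor trajectory.

G_M = 1.0   # gravitational parameter, arbitrary units
dt = 0.001  # timestep

def step(x, v):
    """One leapfrog step in a 1-D inverse-square gravity field (x > 0)."""
    a = -G_M / x**2
    v_half = v + 0.5 * dt * a
    x_new = x + dt * v_half
    a_new = -G_M / x_new**2
    v_new = v_half + 0.5 * dt * a_new
    return x_new, v_new

# Outbound pass: launch from near the planet's surface.
x, v = 1.0, 0.8
outbound = [(x, v)]
for _ in range(1000):
    x, v = step(x, v)
    outbound.append((x, v))

# Time reversal: negate velocities and replay states in reverse order.
# The same arc now describes an object falling toward the planet.
inbound = [(xi, -vi) for xi, vi in reversed(outbound)]
print(inbound[0][0] > inbound[-1][0])  # True: distance shrinks toward impact
```

Generating impactors this way guarantees every training example really does hit, which sidesteps the rarity of observed Earth-impacting asteroids.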
In Marvel's Iron Man movie series, protagonist Tony Stark relies heavily on the artificial intelligence JARVIS for his superhero needs. Not the least of JARVIS' abilities is designing and constructing Iron Man's impressive suit of armor. Accomplishing such a task would require a deep knowledge of the physical properties of metals and metallic alloys, an incredible feat given the vast number of permutations of alloy compositions. Taking us one step closer to a real-life JARVIS, researchers at A*STAR's Institute of High Performance Computing (IHPC), together with scientists in the US and Russia, have developed a machine learning model for determining the structure-property relationship in multi-principal element alloys (MPEAs). "The emergence of high-entropy alloys and, more generally, MPEAs, is a paradigm shift in conventional alloy design," said Mehdi Jafary-Zadeh, a Scientist at IHPC.
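Why is the composition space so vast, and what baseline must an ML model beat? A classical starting point for composition-to-property prediction is a rule-of-mixtures estimate, sketched below. This is purely illustrative: the elemental moduli are rough textbook-style values and the equiatomic five-element alloy is the generic high-entropy-alloy setup, not data from the IHPC study.

```python
# Illustrative rule-of-mixtures baseline of the kind a composition-to-property
# ML model must improve on. Elemental values below are rough placeholders
# (approximate Young's moduli in GPa), not data from the study.

elemental_modulus = {"Co": 209.0, "Cr": 279.0, "Fe": 211.0, "Mn": 198.0, "Ni": 200.0}

def rule_of_mixtures(composition):
    """Weighted average of elemental properties by atomic fraction."""
    assert abs(sum(composition.values()) - 1.0) < 1e-9  # fractions must sum to 1
    return sum(frac * elemental_modulus[el] for el, frac in composition.items())

# Equiatomic five-element alloy: the classic high-entropy-alloy composition.
alloy = {el: 0.2 for el in elemental_modulus}
print(round(rule_of_mixtures(alloy), 1))
```

Real MPEA properties deviate sharply from such linear averages, which is precisely why learning the structure-property relationship from data is valuable.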
The medical speech recognition company Nuance said Monday it will begin widely selling an artificial intelligence system to automate physician note-taking. The system, built in a partnership with Microsoft, uses technology wired into the walls of the exam room to record and build a narrative of each patient encounter that is uploaded into electronic health records. Physicians can use voice commands to fill in specific fields within the health record, including the patient's list of medical problems and medication orders.
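Routing a transcribed voice command to a structured health-record field can be pictured as a small command grammar. The sketch below is hypothetical: the patterns and field names are invented for illustration and are not Nuance's or Microsoft's actual API.

```python
import re

# Hypothetical sketch of routing a transcribed voice command to a structured
# EHR field. The command grammar and field names are invented for illustration.

FIELD_PATTERNS = {
    "problem_list":      re.compile(r"^add problem (?P<value>.+)$"),
    "medication_orders": re.compile(r"^order medication (?P<value>.+)$"),
}

def route_command(transcript):
    """Return (field, value) for a recognized command, else (None, None)."""
    text = transcript.strip().lower()
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.match(text)
        if match:
            return field, match.group("value")
    return None, None

print(route_command("Add problem type 2 diabetes"))
```

A production system would of course rely on speech recognition and clinical language understanding rather than literal pattern matching, but the output, a field name plus a value dropped into the record, is the same shape.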
Recent years have seen breakthroughs in neural network technology: computers can now beat any living person at the most complex game invented by humankind, as well as imitate human voices and faces (both real and non-existent) in a deceptively realistic manner. Is this a victory for artificial intelligence over human intelligence? And if not, what else do researchers and developers need to achieve to make the winners in the AI race the "kings of the world"? Over the last 60 years, artificial intelligence (AI) has been the subject of much discussion among researchers representing different approaches and schools of thought. One of the crucial reasons for this is that there is no unified definition of what constitutes AI, with differences persisting even now.
WASHINGTON – The Pentagon is adopting new ethical principles as it prepares to accelerate its use of artificial intelligence technology on the battlefield. The new principles call for people to "exercise appropriate levels of judgment and care" when deploying and using AI systems, such as those that scan aerial imagery to look for targets. They also say decisions made by automated systems should be "traceable" and "governable," which means "there has to be a way to disengage or deactivate" them if they are demonstrating unintended behavior, said Air Force Lt. Gen. Jack Shanahan, director of the Pentagon's Joint Artificial Intelligence Center. The Pentagon's push to speed up its AI capabilities has fueled a fight between tech companies over a $10 billion cloud computing contract known as the Joint Enterprise Defense Infrastructure, or JEDI. Microsoft won the contract in October but hasn't been able to get started on the 10-year project because Amazon sued the Pentagon, arguing that President Donald Trump's antipathy toward Amazon and its CEO Jeff Bezos hurt the company's chances at winning the bid.
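The "traceable" and "governable" requirements map naturally onto a wrapper pattern in software: log every decision, and keep a hard off switch that disengages the model at any time. The sketch below is an illustrative design only, not Pentagon code or policy.

```python
# Illustrative sketch of "traceable" and "governable" AI as a wrapper:
# every decision is logged (traceability), and a kill switch can
# disengage the model at any time (governability). Not real DoD code.

class GovernedModel:
    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.engaged = True
        self.audit_log = []          # traceability: one record per decision

    def disengage(self):
        """Governability: hard off switch for unintended behavior."""
        self.engaged = False

    def predict(self, inputs):
        if not self.engaged:
            self.audit_log.append(("refused", inputs))
            return None
        decision = self.model_fn(inputs)
        self.audit_log.append(("decided", inputs, decision))
        return decision

# Toy stand-in for an imagery scanner: flags any input above a threshold.
governed = GovernedModel(lambda x: x > 0.5)
print(governed.predict(0.9))   # True, and the decision is logged
governed.disengage()
print(governed.predict(0.9))   # None: the system has been deactivated
```

The audit log is what makes decisions reviewable after the fact, and the disengage path is the "way to deactivate" systems that Lt. Gen. Shanahan describes.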
"The Five" discussed the media reaction to reports on Russia's involvement or prospective involvement in the 2020 presidential election Monday, with particular focus on cable news channels CNN and MSNBC. "In terms of these talking heads on TV, the makeup-wearing misery mongers, you're never, ever, ever going to hear them apologize for getting it wrong literally for the last four years," Fox Business Network's Dagen McDowell said. "Because in their arrogance and insecurity, they'll never be able to admit that they are tools for Putin and also fools." A U.S. intelligence official told Fox News Sunday that contrary to numerous recent media reports, there is no evidence to suggest that Russia is making a specific "play" to boost President Trump's reelection bid. The official added that top election security official Shelby Pierson, who briefed Congress on Russian election interference efforts earlier this month, may have overstated intelligence regarding the issue.