

Data Science for Public Policy: How I Fake My Way Through Imposter Syndrome - Medium

#artificialintelligence

Three years ago, if you had told me that one day I would use Python to analyze AI policy and make Guido van Rossum chuckle, I would have thought you were crazy. Three years later, at PyCon 2019 in Cleveland, that's exactly what happened. I was by no means a tech person. I was trained as an economist (read: stats nerd), but somehow for the past three years I've been writing analysis on deep-tech fields including AI and 5G. What I hope to achieve with this post is not #humblebrag (ok, maybe a little happy dance) but to share the struggles I had, and still experience daily, and to reassure any fellow researcher out there who feels like they're faking it all the time: you are not alone.


Iran says 'spy drone' violated its airspace in May amid U.S. escalation

The Japan Times

TEHRAN - Iran said on Sunday a "spy drone" had encroached on its airspace in May, about a month before it downed an American drone as part of a series of escalatory incidents between Tehran and Washington. Foreign Minister Mohammad Javad Zarif tweeted a map saying the U.S.-made MQ9 Reaper drone -- also widely used for carrying out military strikes -- had entered his country's airspace on May 26. Iran shot down a U.S. Global Hawk drone Thursday, saying it had violated its airspace near the strategic Strait of Hormuz -- a claim the United States denies. U.S. President Donald Trump called off a planned retaliatory military strike Friday, saying the response would not have been "proportionate," with Tehran warning any attack would see Washington's interests across the Middle East go up in flames. On Sunday U.S. national security adviser John Bolton cautioned Iran against misinterpreting the last-minute cancellation.


What Deep Learning Means for CyberSecurity

#artificialintelligence

Danelle is CMO at Blue Hexagon. She has more than 15 years of experience bringing new technologies to market. Prior to Blue Hexagon, Danelle was VP Marketing at SafeBreach where she built the marketing team and defined the Breach and Attack Simulation category. Previously, she led strategy and marketing at Adallom, a cloud security company acquired by Microsoft. She was also Director, Security Solutions at Palo Alto Networks, driving growth in critical IT initiatives like virtualization, network segmentation and mobility.


CIPD 2018: Language of AI and automation spreads fear

#artificialintelligence

The language of artificial intelligence (AI) and automation is misused and incites fear among the workforce, according to a panel on day two of the 2018 CIPD Annual Conference and Exhibition. Speaking during a panel session called 'Will automation and artificial intelligence (AI) help or hinder good people management?', Andrew Spence, HR transformation director at Glass Bead Consulting, said that there's "fear-mongering" in the rhetoric that "the robots will take our jobs". "The word robot comes from the word 'robota', which means slave, so the language puts fear in people," he said. But the "term AI has a naming problem in itself", he continued, pointing out that there are multiple ways people would define the word intelligence. Cheryl Allen, HR director transformation at Atos, agreed that "people use AI as a general [term] and so many words are used interchangeably".


Will Smith Was Wrong About the Robots

#artificialintelligence

I, Robot was first released to theaters back in 2004. In it, the movie's filmmakers paint a fictional future (2035) where humanoid robots serve humanity's needs. Our beloved Fresh Prince of Bel-Air superstar is cast as a Chicago police detective named Del Spooner. Del deeply distrusts robots after his experience with one that was unable to navigate a moral conundrum. Throughout the film, he condescends to these mechanical stewards for being unable to empathize and emote the way he believes only humans can.


The dangers and benefits of Artificial Intelligence - techsocialnetwork

#artificialintelligence

The threat of Artificial Intelligence (AI) used to be nothing more than a science fiction doomsday scenario. Today, an AI threat is a very real possibility, and could be more disastrous than nuclear weapons or World War Three. Over the years, AI has been advancing at an alarming rate. In fact, it is precisely AI's exponential rate of improvement that makes it so dangerous. Should we decide to follow through with making AI sentient and giving it free will, it could, as many science fiction narratives have suggested, see humans as a problem and decide to do something about it.


Facing Intensifying Confrontation With Iran, Trump Has Few Appealing Options

NYT > Middle East

President Trump's last-minute decision to pull back from a retaliatory strike on Iran underscored the absence of appealing options available to him as Tehran races toward its next big challenge to the United States: building up and further enriching its stockpile of nuclear fuel. Two weeks of flare-ups over the attacks on oil tankers and the downing of an American surveillance drone, administration officials said, have overshadowed a larger, more complex and fast-intensifying showdown over containing Iran's nuclear program. In meetings in the White House Situation Room in recent days, Secretary of State Mike Pompeo contended that the potential for Iran to move closer to being able to build a nuclear weapon was the primary threat from Tehran, one participant said, a position echoed by Mr. Trump on Twitter on Friday. Left unsaid was that Iran's moves to bolster its nuclear fuel program stemmed in substantial part from the president's decision last year to pull out of the 2015 international accord, while insisting that Tehran abide by the strict limits that agreement imposed on its nuclear activities. Mr. Trump has long asserted that the deal would eventually let Iran restart its nuclear program and did too little to curb its support for terrorism.


Iranian Force Exults in Downing of U.S. Drone With a Feast and a Prayer

NYT > Middle East

Seated on the floor of a villa in northeast Tehran around a tablecloth spread with platters of saffron chicken and rice with barberries, about 30 officials of Iran's Islamic Revolutionary Guards Corps and guests gathered Thursday night for a prayerful celebration. "A special blessing for the commander who ordered the attack on the American drone and for the fighters who carried it out," a preacher declared, as recalled by one of the guests present, who said a raucous chorus of "amen" arose from the room. Their success earlier that day at shooting down an unmanned American Global Hawk surveillance drone (list price $131 million) surprised even some leaders of the Revolutionary Guards. They had wondered themselves whether they could hit an American target so high in the sky, according to the guest. In fact, the Revolutionary Guards sought to take out the drone in large part to prove they could do it, according to that guest and four other Iranians, including two senior current members.


You Can't Improve Cybersecurity By Throwing People At The Problem

#artificialintelligence

It may seem counter-intuitive, but the answer probably isn't a surge in employee training or hiring of cybersecurity talent. That's because humans will always make errors, and humans can't cope with the scale and stealth of today's cyberattacks. To best protect information systems, including data, applications, networks, and mobile devices, look to more automation and artificial intelligence-based software to provide the defense in depth required to reduce risk and stop attacks. That's one of the key conclusions of a new report conducted by Oracle, "Security in the Age of AI," released in May. The report draws on a survey of 775 respondents based in the US, including 341 CISOs, CSOs, and other CXOs at firms with at least $100 million in annual revenue; 110 federal or state government policy influencers; and 324 technology-engaged workers in non-managerial roles.


DoD's Joint AI Center to open-source natural disaster satellite imagery data set

#artificialintelligence

As climate change escalates, the impact of natural disasters is likely to become less predictable. This week, to encourage the use of machine learning for building damage assessment, Carnegie Mellon University's Software Engineering Institute and CrowdAI, together with the U.S. Department of Defense's Joint AI Center (JAIC) and Defense Innovation Unit, open-sourced a labeled data set covering some of the largest natural disasters of the past decade. Called xBD, it captures the impact of disasters around the globe, like the 2010 earthquake that hit Haiti. "Although large-scale disasters bring catastrophic damage, they are relatively infrequent, so the availability of relevant satellite imagery is low. Furthermore, building design differs depending on where a structure is located in the world. As a result, damage of the same severity can look different from place to place, and data must exist to reflect this phenomenon," reads a research paper detailing the creation of xBD.