Government


Interpreting AI Is More Than Black And White

#artificialintelligence

Any sufficiently advanced technology is indistinguishable from magic. In the world of artificial intelligence & machine learning (AI & ML), black- and white-box categorization of models and algorithms refers to their interpretability: that is, given a model trained to map data inputs to outputs, whether a human can follow the reasoning behind its predictions. Just as black-box software testing probes high-level behavior while white-box testing examines low-level logic, only white-box AI methods can be readily interpreted to see the logic behind a model's predictions. In recent years, with machine learning taking over new industries and applications where users far outnumber the experts who grok the models and algorithms, the conversation around interpretability has become an important one.
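To make the distinction concrete, here is a minimal sketch (my illustration, not from the article) using scikit-learn: a decision tree is a classic white-box model whose learned rules can be printed and read as plain if/else logic, something a deep neural network's weight matrices do not offer.

```python
# Minimal white-box interpretability sketch (illustrative; not from the article).
# A decision tree's learned decision rules can be printed and read directly.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the exact if/else logic behind every prediction --
# the "white box" property that black-box models such as deep nets lack.
print(export_text(tree, feature_names=load_iris().feature_names))
```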


Neural network says these 11 asteroids could smash into Earth

#artificialintelligence

A team of researchers at Leiden University in the Netherlands has developed a neural network called "Hazardous Object Identifier" that they say can predict if an asteroid is on a collision course with Earth. Their new AI singled out 11 asteroids that were not previously classified by NASA as hazardous, and which were larger than 100 meters in diameter -- big enough to explode with the force of hundreds of nuclear weapons if they impacted Earth, potentially leveling entire cities. They also focused on space rocks that could come within 4.7 million miles of Earth, as detailed in a paper published in the journal Astronomy & Astrophysics earlier this month. None are an imminent threat, however: not only are their chances of ever hitting Earth astronomically slim, but they are making their flyby between the years 2131 and 2923 -- hundreds of years from now. To generate training data, the team ran its simulation in reverse, creating future Earth-impacting asteroids by flinging them away from Earth and tracking their exact locations and orbits.
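A rough sketch of that reverse-simulation idea (my toy illustration under simplified assumptions; the Leiden team's actual pipeline is far more sophisticated): start a synthetic asteroid at Earth's position with a random impact velocity and integrate its orbit backward in time under solar gravity, so the resulting trajectory is a guaranteed impactor when played forward and can be labeled "hazardous" for training.

```python
# Toy "reverse simulation" of an Earth impactor (illustrative only).
import numpy as np

GM_SUN = 1.327e20        # solar gravitational parameter, m^3/s^2
AU = 1.496e11            # astronomical unit, m
DT = -86400.0            # negative step: integrate backward one day at a time

def backward_orbit(pos, vel, steps=3650):
    """Propagate a body backward in time under solar gravity (semi-implicit Euler)."""
    states = []
    for _ in range(steps):
        r = np.linalg.norm(pos)
        acc = -GM_SUN * pos / r**3       # Sun fixed at the origin
        vel = vel + acc * DT
        pos = pos + vel * DT
        states.append(pos.copy())
    return np.array(states)

rng = np.random.default_rng(0)
earth_pos = np.array([AU, 0.0, 0.0])          # start the asteroid at Earth
impact_vel = rng.normal(0.0, 15e3, size=3)    # ~15 km/s dispersion (assumed)
trajectory = backward_orbit(earth_pos, impact_vel)
print(trajectory.shape)  # (3650, 3): a labeled "hazardous" orbit for a classifier
```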


Pentagon adopts new ethical principles for using AI in war

The Japan Times

WASHINGTON – The Pentagon is adopting new ethical principles as it prepares to accelerate its use of artificial intelligence technology on the battlefield. The new principles call for people to "exercise appropriate levels of judgment and care" when deploying and using AI systems, such as those that scan aerial imagery to look for targets. They also say decisions made by automated systems should be "traceable" and "governable," which means "there has to be a way to disengage or deactivate" them if they are demonstrating unintended behavior, said Air Force Lt. Gen. Jack Shanahan, director of the Pentagon's Joint Artificial Intelligence Center. The Pentagon's push to speed up its AI capabilities has fueled a fight between tech companies over a $10 billion cloud computing contract known as the Joint Enterprise Defense Infrastructure, or JEDI. Microsoft won the contract in October but hasn't been able to get started on the 10-year project because Amazon sued the Pentagon, arguing that President Donald Trump's antipathy toward Amazon and its CEO Jeff Bezos hurt the company's chances at winning the bid.


Dagen McDowell blasts 'talking heads' as 'tools for Putin' over disputed Russian election interference reports

FOX News

"The Five" discussed the media reaction to reports on Russia's involvement or prospective involvement in the 2020 presidential election Monday, with particular focus on cable news channels CNN and MSNBC. "In terms of these talking heads on TV, the makeup-wearing misery mongers, you're never, ever, ever going to hear them apologize for getting it wrong literally for the last four years," Fox Business Network's Dagen McDowell said. "Because in their in their arrogance and insecurity, they'll never be able to admit that they are tools for Putin and also fools." A U.S. intelligence official told Fox News Sunday that contrary to numerous recent media reports, there is no evidence to suggest that Russia is making a specific "play" to boost President Trump's reelection bid. The official added that top election security official Shelby Pierson, who briefed Congress on Russian election interference efforts earlier this month, may have overstated intelligence regarding the issue.


Keeping Cows Happy and Soil Healthy With AI and Open Source Data Management

#artificialintelligence

To map every gene in the human body, scientists around the world collaborated for more than a decade, from 1990 to 2003. Thanks to their work, entire vistas of medicine have opened up, from new diagnoses to drug regimens tailored to an individual's genetic makeup. What if, posits Dorn Cox, a produce farmer in New Hampshire, the same could be done for the world's soil? With detailed knowledge of the nutrients in their soil, farmers could better tend their dirt and significantly reduce negative environmental impacts. For example, they could better learn what to plant and when, or how to maximize soil nutrients and track carbon content (more carbon in the soil means less carbon in the atmosphere).


Your Tesla could explain why it crashed. But good luck getting its Autopilot data

#artificialintelligence

On Jan. 21, 2019, Michael Casuga drove his new Tesla Model 3 southbound on Santiago Canyon Road, a two-lane highway that twists through hilly woodlands east of Santa Ana. He wasn't alone, in one sense: Tesla's semiautonomous driver-assist system, known as Autopilot -- which can steer, brake and change lanes -- was activated. Suddenly and without warning, Casuga claims in a Superior Court of California lawsuit, Autopilot yanked the car left. The Tesla crossed a double yellow line, and without braking, drove through the oncoming lane and crashed into a ditch, all before Casuga was able to retake control. Tesla confirmed Autopilot was engaged, according to the suit, but said the driver was to blame, not the technology.


African AI Experts Get Excluded From a Conference--Again

#artificialintelligence

At the G7 meeting in Montreal last year, Justin Trudeau told WIRED he would look into why more than 100 African artificial intelligence researchers had been barred from visiting that city to attend their field's most important annual event, the Neural Information Processing Systems conference, or NeurIPS. Now the same thing has happened again. More than a dozen AI researchers from African countries have been refused visas to attend this year's NeurIPS, to be held next month in Vancouver. This means an event that shapes the course of a technology with huge economic and social importance will have little input from a major portion of the world. The conference brings together thousands of researchers from top academic institutions and companies, for hundreds of talks, workshops, and side meetings at which new ideas and theories are hashed out.


Stargazing with Computers: What Machine Learning Can Teach Us about the Cosmos

#artificialintelligence

Gazing up at the night sky in a rural area, you'll probably see the shining moon surrounded by stars. If you're lucky, you might spot the furthest thing visible with the naked eye – the Andromeda galaxy. When the Department of Energy's (DOE) Legacy Survey of Space and Time (LSST) Camera at the National Science Foundation's Vera Rubin Observatory turns on in 2022, it will take photos of 37 billion galaxies and stars over the course of a decade. The output from this huge telescope will swamp researchers with data. In those 10 years, the LSST Camera will take 2,000 photos for each patch of the Southern Sky it covers.
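For a sense of how quickly that data piles up, here is a back-of-envelope calculation (my assumed figures, not the article's: the LSST camera is roughly a 3.2-gigapixel instrument, so a raw 16-bit exposure is about 6.4 GB).

```python
# Back-of-envelope LSST data volume (all constants are assumptions, not from the article).
GIGAPIXELS = 3.2e9           # camera resolution (assumed)
BYTES_PER_PIXEL = 2          # 16-bit raw pixels (assumed)
EXPOSURES_PER_NIGHT = 1000   # rough nightly cadence (assumed)
NIGHTS = 10 * 365            # the decade-long survey

bytes_per_exposure = GIGAPIXELS * BYTES_PER_PIXEL
total_bytes = bytes_per_exposure * EXPOSURES_PER_NIGHT * NIGHTS
print(f"~{bytes_per_exposure / 1e9:.1f} GB per raw exposure")
print(f"~{total_bytes / 1e15:.0f} PB of raw images over ten years")
```

Even under these conservative assumptions the survey yields tens of petabytes of raw imagery, which is why machine learning, rather than human inspection, is needed to sift it.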


500

#artificialintelligence

The immense capabilities artificial intelligence is bringing to the world would have been inconceivable to past generations. But even as we marvel at the incredible power these new technologies afford, we're faced with complex and urgent questions about the balance of benefit and harm. When most people ponder whether AI is good or evil, what they're essentially trying to grasp is whether AI is a tool or a weapon.


UK government investigates AI bias in decision-making

#artificialintelligence

The UK government is launching an investigation to determine the levels of bias in algorithms that could affect people's lives. A browse through our 'ethics' category here on AI News will highlight the serious problem of bias in today's algorithms. With AIs being increasingly used for decision-making, parts of society could be left behind. Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI has tremendous potential – such as policing, recruitment, and financial services – but would have a serious negative impact on lives if not implemented correctly. "Technology is a force for good which has improved people's lives but we must make sure it is developed in a safe and secure way. Our Centre for Data Ethics and Innovation has been set up to help us achieve this aim and keep Britain at the forefront of technological development. I'm pleased its team of experts is undertaking an investigation into the potential for bias in algorithmic decision-making in areas including crime, justice and financial services. I look forward to seeing the Centre's recommendations to Government on any action we need to take to help make sure we maximise the benefits of these powerful technologies for society."
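One simple form such a bias audit can take (a minimal sketch of the general idea, not the CDEI's actual methodology) is comparing a model's approval rates across demographic groups, sometimes called a demographic-parity check.

```python
# Minimal algorithmic-bias check: compare decision rates across groups.
# The records below are hypothetical, e.g. from a loan-approval model.
from collections import defaultdict

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # e.g. {'A': 0.667, 'B': 0.333} -- a large gap flags potential bias
```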