
A review of machine learning applications in wildfire science and management Machine Learning

Artificial intelligence has been applied in wildfire science and management since the 1990s, with early applications including neural networks and expert systems. Since then, the field has progressed rapidly alongside the wide adoption of machine learning (ML) in the environmental sciences. Here, we present a scoping review of ML in wildfire science and management. Our objective is to improve awareness of ML among wildfire scientists and managers, as well as to illustrate the challenging range of problems in wildfire science available to data scientists. We first present an overview of popular ML approaches used in wildfire science to date, and then review their use within six problem domains: 1) fuels characterization, fire detection, and mapping; 2) fire weather and climate change; 3) fire occurrence, susceptibility, and risk; 4) fire behavior prediction; 5) fire effects; and 6) fire management. We also discuss the advantages and limitations of various ML approaches and identify opportunities for future advances in wildfire science and management within a data science context. We identified 298 relevant publications, in which the most frequently used ML methods included random forests, MaxEnt, artificial neural networks, decision trees, support vector machines, and genetic algorithms. Opportunities exist to apply more current ML methods (e.g., deep learning and agent-based learning) in wildfire science. However, despite the ability of ML models to learn on their own, expertise in wildfire science is necessary to ensure realistic modelling of fire processes across multiple scales, while the complexity of some ML methods requires sophisticated knowledge for their application. Finally, we stress that the wildfire research and management community should play an active role in providing relevant, high-quality data for use by practitioners of ML methods.
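To make the kind of application the review surveys concrete, the following is a minimal sketch of a random forest (the method the review found most frequently used) fit to a binary fire-occurrence problem. The predictors, their ranges, and the synthetic labeling rule are illustrative assumptions, not data or variables from the review.

```python
# Hedged sketch: random forest classification of fire occurrence on
# synthetic data. Predictor names and the labeling rule are invented
# for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
temp = rng.uniform(0, 40, n)        # temperature, deg C (assumed range)
rel_hum = rng.uniform(10, 100, n)   # relative humidity, %
wind = rng.uniform(0, 60, n)        # wind speed, km/h
fuel_moist = rng.uniform(2, 30, n)  # fuel moisture, %
X = np.column_stack([temp, rel_hum, wind, fuel_moist])

# Synthetic rule: hot, dry, windy conditions with dry fuels raise the
# probability of fire occurrence; noise keeps labels imperfect.
score = 0.05 * temp - 0.03 * rel_hum + 0.04 * wind - 0.08 * fuel_moist
y = (score + rng.normal(0, 0.5, n) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)
print("Training accuracy:", model.score(X, y))
```

In practice, as the review stresses, choosing realistic predictors and validation schemes for such a model requires domain expertise in fire science, not just the ML machinery shown here.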

Energy Usage Reports: Environmental awareness as part of algorithmic accountability Machine Learning

The carbon footprint of algorithms must be measured and transparently reported so that computer scientists can take an honest and active role in environmental sustainability. In this paper, we take analyses usually applied at the industrial level and make them accessible to individual computer science researchers with an easy-to-use Python package. Localizing to the energy mixture of the electrical power grid, we convert energy usage to CO2 emissions and contextualize the results with more human-understandable benchmarks, such as automobile miles driven. We also include comparisons with energy mixtures employed in electrical grids around the world. We propose including these automatically generated Energy Usage Reports as part of standard algorithmic accountability practices, and demonstrate their use for model choice in a machine learning context.
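The core conversion the abstract describes can be sketched in a few lines: energy used (kWh) times a grid-specific emissions factor gives CO2 emitted, which can then be restated as automobile miles driven. The emissions factors and the per-mile figure below are illustrative assumptions, not the package's actual data.

```python
# Hedged sketch of energy-to-CO2 conversion localized to a grid's
# energy mixture. All numeric factors are assumed example values.
GRID_KG_CO2_PER_KWH = {
    "us_average": 0.45,  # assumed
    "france": 0.06,      # assumed (largely nuclear grid)
    "india": 0.70,       # assumed
}
KG_CO2_PER_MILE = 0.40   # assumed average passenger-car figure

def energy_to_co2(kwh: float, grid: str = "us_average") -> float:
    """Convert energy usage in kWh to kg of CO2 for a given grid mix."""
    return kwh * GRID_KG_CO2_PER_KWH[grid]

def co2_to_miles(kg_co2: float) -> float:
    """Express kg of CO2 as equivalent automobile miles driven."""
    return kg_co2 / KG_CO2_PER_MILE

kwh = 12.0  # e.g., energy drawn by a long model-training run
kg = energy_to_co2(kwh)
print(f"{kwh} kWh -> {kg:.2f} kg CO2 (~{co2_to_miles(kg):.1f} miles driven)")
```

Comparing the same `kwh` figure across entries in the grid table reproduces, in miniature, the cross-country comparisons the paper proposes.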

Tackling Climate Change with Machine Learning Artificial Intelligence

Climate change is one of the greatest challenges facing humanity, and we, as machine learning experts, may wonder how we can help. Here we describe how machine learning can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high-impact problems where existing gaps can be filled by machine learning, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the machine learning community to join the global effort against climate change.

Fusion of Heterogeneous Earth Observation Data for the Classification of Local Climate Zones Machine Learning

This paper proposes a novel framework for fusing multi-temporal, multispectral satellite images and OpenStreetMap (OSM) data for the classification of local climate zones (LCZs). Feature stacking is the most commonly used method of data fusion, but its main drawback is that it does not consider the heterogeneity of multimodal optical images and OSM data. The proposed framework processes the two data sources separately and then combines them at the model level through two fusion models (the landuse fusion model and the building fusion model), which fuse optical images with the landuse and building layers of OSM data, respectively. In addition, a new approach to detecting the incompleteness of OSM building data is proposed. The proposed framework was trained and tested using data from the 2017 IEEE GRSS Data Fusion Contest, and further validated on an additional test set of manually labeled samples in Munich and New York. Experimental results indicate that, compared to a feature stacking-based baseline framework, the proposed framework is effective in fusing optical images with OSM data for LCZ classification, with high generalization capability on a large scale. Its classification accuracy outperforms the baseline by more than 6% on the 2017 IEEE GRSS Data Fusion Contest test set and by more than 2% on the additional test set. In addition, the proposed framework is less sensitive to spectral diversities of optical satellite images and thus achieves more stable classification performance than state-of-the-art frameworks.
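The contrast the abstract draws between feature stacking and model-level fusion can be sketched as follows: each data source gets its own classifier, and the two classifiers' class probabilities are combined. The two-branch probability-averaging rule, the synthetic features, and the choice of base models here are illustrative assumptions; the paper's actual fusion models differ.

```python
# Hedged sketch of model-level fusion (vs. feature stacking): one
# classifier per heterogeneous source, fused at the probability level.
# All data and model choices are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, n_classes = 300, 4  # e.g., four local climate zone classes
y = rng.integers(0, n_classes, n)
# Synthetic stand-ins for the two sources with different noise levels.
X_optical = rng.normal(y[:, None], 1.0, (n, 6))  # multispectral features
X_osm = rng.normal(y[:, None], 2.0, (n, 3))      # landuse/building features

# Train one model per source, then average their class probabilities.
m_optical = RandomForestClassifier(n_estimators=50, random_state=0)
m_optical.fit(X_optical, y)
m_osm = LogisticRegression(max_iter=1000).fit(X_osm, y)
proba = 0.5 * m_optical.predict_proba(X_optical) + 0.5 * m_osm.predict_proba(X_osm)
fused_pred = proba.argmax(axis=1)
print("Fused training accuracy:", (fused_pred == y).mean())
```

Feature stacking would instead concatenate `X_optical` and `X_osm` into one matrix for a single model; keeping the branches separate, as above, lets each model handle the statistics of its own modality.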

As international order languishes, experts at G1 Global Conference discuss Japan's new role as global 'stabilizer'

The Japan Times

In a world increasingly fragmented by U.S. President Donald Trump's "America First" agenda, Japan should take on the role of the world's new "stabilizer" by committing to the landmark Paris accord on climate change and keeping a multilateral trade regime from falling apart in the absence of the United States, according to experts who gathered at a Tokyo conference earlier this week. The annual G1 Global Conference, held at Globis University in Tokyo, examined Japan's shifting roles on the global stage on the heels of an intensifying trade war between the U.S. and China that its panelists said has thrown the international order into disarray. The conference, held Sunday, invited experts in fields including security, energy and technology -- as well as social entrepreneurs and business executives -- to discuss a "fractured world" caused by the rise of protectionism, the shift in Asian geopolitics and potential threats stemming from the advent of artificial intelligence. With the election of Trump in 2016, "many tensions in the U.S. that had existed beforehand became much more evident," former U.S. Democratic member of Congress Jane Harman told the all-English conference, titled "Connecting a Fractured World." The Japan Times was a media partner for the event.

Mount Sinai makes a step forward in using machine learning to interpret medical images
