
Digital elevation model correction in urban areas using extreme gradient boosting, land cover and terrain parameters

Okolie, Chukwuma, Mills, Jon, Adeleke, Adedayo, Smit, Julian

arXiv.org Artificial Intelligence

The accuracy of digital elevation models (DEMs) in urban areas is influenced by numerous factors including land cover and terrain irregularities. Moreover, building artifacts in global DEMs cause artificial blocking of surface flow pathways. This compromises their quality and adequacy for hydrological and environmental modelling in urban landscapes where precise and accurate terrain information is needed. In this study, the extreme gradient boosting (XGBoost) ensemble algorithm is adopted for enhancing the accuracy of two medium-resolution 30 m DEMs over Cape Town, South Africa: Copernicus GLO-30 and ALOS World 3D (AW3D). XGBoost is a scalable, portable and versatile gradient boosting library that can solve many environmental modelling problems. The training dataset comprises eleven predictor variables: elevation, urban footprints, slope, aspect, surface roughness, topographic position index, terrain ruggedness index, terrain surface texture, vector roughness measure, forest cover and bare ground cover. The target variable (elevation error) was calculated with respect to highly accurate airborne LiDAR. After training and testing, the model was applied to correct the DEMs at two implementation sites. The correction achieved accuracy gains competitive with other proposed methods: the root mean square error (RMSE) of the Copernicus DEM improved by 46 to 53%, while the RMSE of the AW3D DEM improved by 72 to 73%. These results showcase the potential of gradient boosted trees for enhancing the quality of DEMs, and for improved hydrological modelling in urban catchments.
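The workflow the abstract describes, training a boosted-tree regressor on terrain predictors to estimate per-pixel elevation error and then subtracting that estimate from the DEM, can be sketched as follows. This is a minimal illustration on synthetic data, using scikit-learn's GradientBoostingRegressor as a stand-in for the XGBoost library; the feature names, coefficients and data are invented for demonstration and are not from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for the paper's eleven predictors
# (elevation, urban footprints, slope, aspect, roughness, ...).
X = rng.normal(size=(n, 11))

# Target: DEM elevation error relative to an accurate LiDAR reference.
# Here the error is a synthetic function of two predictors plus noise.
error = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.3, size=n)

X_tr, X_te, e_tr, e_te = train_test_split(X, error, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, e_tr)

# Correct the DEM by subtracting the predicted elevation error.
dem_te = rng.normal(loc=100.0, size=len(e_te))   # uncorrected elevations
corrected = dem_te - model.predict(X_te)
lidar_te = dem_te - e_te                          # "true" LiDAR elevations

rmse_before = float(np.sqrt(np.mean(e_te ** 2)))
rmse_after = float(np.sqrt(np.mean((corrected - lidar_te) ** 2)))
print(f"RMSE before: {rmse_before:.2f} m, after: {rmse_after:.2f} m")
```

On real data the predictors would be raster layers sampled at each pixel, and the trained model would be applied tile-by-tile at the implementation sites, as in the study.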


The Appeal of Scientific Heroism

The New Yorker

In 2008, the journalist Jonah Lehrer paid a visit to a lab in Lausanne, Switzerland, to profile Henry Markram, a world-renowned neuroscientist. Markram, a South African, had trained at a series of élite institutions in Israel, the United States, and Germany; in the nineties, he published foundational papers on neural connections and synaptic activity. Markram's work in the laboratory, which involved piercing neural membranes with what Lehrer described as an "invisibly sharp glass pipette," was known for its painstaking precision. Lehrer's visit, however, had been occasioned not by Markram's incremental contributions to the field--it's not easy to sell a colorful profile on the basis of such publications as "The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability"--but by Markram's pivot, in the early two-thousands, to brain simulation. Neuroscience, Markram declaimed to Lehrer, had reached an impasse. Researchers had generated an enormous wealth of fine-grained data, but the marginal returns had begun to diminish.


Snowpack Estimation in Key Mountainous Water Basins from Openly-Available, Multimodal Data Sources

Moran, Malachy, Woputz, Kayla, Hee, Derrick, Girotto, Manuela, D'Odorico, Paolo, Gupta, Ritwik, Feldman, Daniel, Vahabi, Puya, Todeschini, Alberto, Reed, Colorado J

arXiv.org Artificial Intelligence

Accurately estimating the snowpack in key mountainous basins is critical for water resource managers to make decisions that impact local and global economies, wildlife, and public policy. Currently, this estimation requires multiple LiDAR-equipped plane flights or in situ measurements, both of which are expensive, sparse, and biased towards accessible regions. In this paper, we demonstrate that fusing spatial and temporal information from multiple, openly-available satellite and weather data sources enables estimation of snowpack in key mountainous regions. Our multisource model outperforms single-source estimation by 5.0 inches RMSE and sparse in situ measurements by 1.2 inches RMSE.
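The core claim, that fusing features from several data sources beats any single source, can be illustrated with a toy regression. This sketch uses synthetic stand-ins for satellite and weather features and a generic random-forest regressor; the paper's actual model, data sources and feature engineering are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1500
sat = rng.normal(size=(n, 4))   # synthetic satellite-derived features
wx = rng.normal(size=(n, 3))    # synthetic weather-station features

# Synthetic snow water equivalent depending on BOTH sources,
# so a single-source model misses part of the signal.
swe = 3.0 * sat[:, 0] + 2.0 * wx[:, 1] + rng.normal(scale=0.5, size=n)

def holdout_rmse(X):
    """Fit on a train split, return RMSE on the held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, swe, random_state=1)
    pred = RandomForestRegressor(random_state=1).fit(X_tr, y_tr).predict(X_te)
    return float(np.sqrt(np.mean((pred - y_te) ** 2)))

rmse_single = holdout_rmse(sat)                   # satellite only
rmse_fused = holdout_rmse(np.hstack([sat, wx]))   # multisource fusion
print(f"single-source RMSE: {rmse_single:.2f}, multisource RMSE: {rmse_fused:.2f}")
```

Concatenating feature columns is the simplest fusion strategy; the paper's gains come from richer spatial and temporal fusion, but the principle, recovering signal no single source carries, is the same.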


Earth Observation data and Artificial Intelligence in support of Journalism

#artificialintelligence

Earth Observation data is valuable for journalists' reports to the public. Examples include the maps released at short notice during or after the 2004 Indian Ocean tsunami and the 2011 Fukushima disaster, which accompanied verbal or written reporting. Taking advantage of the improved temporal frequency and spatial coverage of the Sentinel satellite sensors, SnapEarth aims to assimilate the latest spaceborne information to support journalists in their work in near real time. In this context, a dedicated services module aims to leverage Copernicus monitoring services such as the EMS (Emergency Management Service), EFAS (European Flood Awareness System) and EFFIS (European Forest Fire Information System). Working in tandem with these services, it will add the ability to exploit the latest AI (Artificial Intelligence) techniques to query large data archives automatically and without supervision, delivering the required products in minimal time.


Are Neural Networks About to Reinvent Physics? - Issue 78: Atmospheres

Nautilus

Can AI teach itself the laws of physics? Will classical computers soon be replaced by deep neural networks? Sure looks like it, if you've been following the news, which lately has been filled with headlines like "A neural net solves the three-body problem 100 million times faster: Machine learning provides an entirely new way to tackle one of the classic problems of applied mathematics" and "Who needs Copernicus if you have machine learning?". The latter was described by another journalist, in an article called "AI Teaches Itself Laws of Physics," as a "monumental moment in both AI and physics," which "could be critical in solving quantum mechanics problems." The trouble is that the authors of these studies have given no compelling reason to think that their neural networks could actually do this.


AI discovered Copernicus' heliocentricity on its own

#artificialintelligence

In the process, SciNet generated formulas that place the Sun at the center of our solar system. Remarkably, SciNet accomplished this in a way similar to how astronomer Nicolaus Copernicus discovered heliocentricity. "In the 16th century, Copernicus measured the angles between a distant fixed star and several planets and celestial bodies and hypothesized that the Sun, and not the Earth, is in the centre of our solar system and that the planets move around the Sun on simple orbits," the team wrote in a paper published on the preprint repository arXiv. "This explains the complicated orbits as seen from Earth." The team "encouraged" SciNet to come up with ways to predict the movements of the Sun and Mars in the simplest way possible.


MIT Deep Learning Basics: Introduction and Overview with TensorFlow

#artificialintelligence

As part of the MIT Deep Learning series of lectures and GitHub tutorials, we are covering the basics of using neural networks to solve problems in computer vision, natural language processing, games, autonomous driving, robotics, and beyond. This blog post provides an overview of deep learning in 7 architectural paradigms, with links to TensorFlow tutorials for each. It accompanies the lecture on Deep Learning Basics, part of MIT course 6.S094. Deep learning is representation learning: the automated formation of useful representations from data. How we represent the world can make the complex appear simple, both to us humans and to the machine learning models we build. My favorite example of the former is Copernicus's publication, in 1543, of the heliocentric model that put the Sun at the center of the "Universe," as opposed to the prior geocentric model that put the Earth at the center.


Telling AI to not replicate itself is like telling teenagers to just not have sex

#artificialintelligence

Do humans have the capacity for safe AI? Our history shows innovation and technology advancements are replete with unintended consequences. Who knew that widespread social-media adoption would lead to disinformation campaigns aimed at undermining liberal democracy, when it was originally thought it would increase civic engagement? After all, AI not only enables the development of autonomous vehicles, but also autonomous weapons. Who wants to contemplate a possible future where self-aware AI becomes catatonically depressed while in possession of nuclear launch codes?


Fantastic answers to universal questions

AITopics Original Links

A long time ago, in a galaxy far, far away, someone had the idea that science would be at its most interesting when it was being subverted. Just as science itself was developing, storytellers began expanding the worlds of physics, biology, chemistry and engineering. They came up with a universe full of lightsabers, spaceships and robots, steeped in a heady brew of technobabble and draped on a background of journeys to exotic worlds. But science fiction is more than just pulp fiction; at its core is the desire to understand humanity's place in the universe. We asked leading scientists from around the world what science fiction meant to them: how they related to it and what influence it had on them.


Is our world a simulation? Why some scientists say it's more likely than not

The Guardian

When Elon Musk isn't outlining plans to use his massive rocket to leave a decaying Planet Earth and colonize Mars, he sometimes talks about his belief that Earth isn't even real and we probably live in a computer simulation. "There's a billion to one chance we're living in base reality," he said at a conference in June. Musk is just one of the people in Silicon Valley to take a keen interest in the "simulation hypothesis", which argues that what we experience as reality is actually a giant computer simulation created by a more sophisticated intelligence. If it sounds a lot like The Matrix, that's because it is. According to this week's New Yorker profile of Y Combinator venture capitalist Sam Altman, there are two tech billionaires secretly engaging scientists to work on breaking us out of the simulation.