HarperDB was named after CEO Stephen Goldberg's dog, a five-year-old adopted pup. The so-called Internet of Things (IoT) is, obviously, growing exponentially. As an example of the machines that populate the IoT, modern aircraft are now estimated to fly with connected sensors monitoring as many as 5,000 component elements per engine every second - and that's just the engines. For equipment engineers in aviation (and every other industry now digitally transforming) this means a lot of head scratching, some cool innovations and a lot of fine-grained physical tuning with a fair dose of engine grease. The same challenge also exists for the information technologists supporting these systems. For software programmers and database engineers in every industry, making the IoT work means a lot of brain-aches, some super-cool innovations and a lot of fine-grained keyboard- and screen-based tuning, with a fair dose of 'virtual' microprocessor engine grease (spoiler alert: microprocessors are built in clean-room labs and rarely get oiled with lubricant).
Kepler.gl, a collaboration between Uber and Mapbox, makes mapping large-scale data easier. Using kepler.gl, a user can drag and drop a CSV or GeoJSON file into the browser, visualize it with different map layers, explore it by filtering and aggregating, and eventually export the final visualization as a static map or an animated video. It plays nice with Mapbox if that's your jam. So far we've seen when you will die and how other people tend to die. Now let's put the two together to see how and when you will die, given your sex, race, and age.
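As a rough sketch of what such a drag-and-drop file can look like, here is stdlib-only Python that writes a minimal GeoJSON FeatureCollection of the kind kepler.gl accepts; the coordinates, property names, and the `make_feature_collection` helper are invented for illustration:

```python
import json

def make_feature_collection(points):
    """Build a minimal GeoJSON FeatureCollection.

    points: iterable of (lon, lat, name) tuples -- invented sample data,
    not tied to any real dataset.
    """
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": [lon, lat]},
                "properties": {"name": name},
            }
            for lon, lat, name in points
        ],
    }

fc = make_feature_collection([(-122.42, 37.77, "SF"), (-73.99, 40.73, "NYC")])

# Write the file you would then drag into the kepler.gl browser window.
with open("points.geojson", "w") as f:
    json.dump(fc, f)
```

GeoJSON stores coordinates as [longitude, latitude], which is an easy thing to get backwards when preparing files by hand.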
Universal learning machine is a theory that studies machine learning from a mathematical point of view. The outside world is reflected inside a universal learning machine according to the patterns in its incoming data; these form the subjective patterns of the learning machine. In [2,4], we discussed subjective spatial patterns and established a powerful tool -- the X-form, an algebraic expression for a subjective spatial pattern. However, at that initial stage of the study, we only discussed spatial patterns. Here, we discuss both spatial and temporal patterns, and algebraic expressions for them.
We present ASP Modulo `Space-Time', a declarative representational and computational framework to perform commonsense reasoning about regions with both spatial and temporal components. Supported are capabilities for mixed qualitative-quantitative reasoning, consistency checking, and inferring compositions of space-time relations; these capabilities combine and synergise for applications in a range of AI areas where the processing and interpretation of spatio-temporal data is crucial. The framework and resulting system are the only general KR-based method for declaratively reasoning about the dynamics of `space-time' regions as first-class objects. We present an empirical evaluation (with scalability and robustness results), and include diverse application examples involving interpretation and control tasks.
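To make the qualitative-quantitative link concrete, here is a small stdlib-only Python sketch (not the paper's ASP system) that derives a qualitative Allen-style temporal relation between two intervals from their quantitative endpoints; the function name and interval values are invented for illustration:

```python
def allen_relation(a, b):
    """Classify the Allen-style relation between intervals a and b.

    a, b: (start, end) tuples with start < end. This is a toy
    illustration of deriving qualitative relations from quantitative
    data, not the ASP Modulo `Space-Time' implementation.
    """
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "before"
    if a2 == b1:
        return "meets"
    if a1 == b1 and a2 == b2:
        return "equals"
    if a1 == b1:
        return "starts" if a2 < b2 else "started-by"
    if a2 == b2:
        return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2:
        return "during"
    if a1 < b1 and b2 < a2:
        return "contains"
    if a1 < b1 < a2 < b2:
        return "overlaps"
    # Any remaining case is the inverse of one already handled above,
    # so classify the swapped pair and invert the result.
    inverse = {"before": "after", "meets": "met-by",
               "overlaps": "overlapped-by"}
    return inverse[allen_relation(b, a)]
```

A consistency checker of the kind the abstract describes would then reason over such qualitative relations (and their compositions) rather than the raw endpoints.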
Traffic congestion increases the time required to commute. We know this all too well here in the Bay Area, as do urban commuters across the world. Congestion also inflicts increased operational costs on the urban transport system, and many forecasts suggest that this will only get worse in the years to come. This rise in congestion has pushed governing authorities to promote the use of public transport vehicles instead of private ones.
Commonsense reasoning, in particular qualitative spatial and temporal reasoning (QSTR), provides flexible and intuitive methods for reasoning about vague and uncertain information, including spatial orientation, topology and proximity. Despite a number of theoretical advances in QSTR, there are relatively few applications that employ these methods. The central problem is a significant lack of application-level standards and validation methods for supporting developers in adapting and integrating QSTR with their domain-specific qualitative spatial and temporal models. To address this, we present a novel methodology for QSTR application validation, inspired by research in software engineering. In this paper we focus on unit testing, and adapt the software engineering strategy of defining boundary cases. We present two critical boundary concepts, a methodology for isolating the units under test from other parts of the model, and methods to assist the designer in integrating our critical-boundary unit testing approach with a broader validation plan.
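To illustrate the flavour of boundary-case unit testing for a qualitative model, here is a small stdlib-only Python sketch. The toy `proximity` relation, its 2.0 threshold, and the test class are all invented for illustration and are not part of the paper's methodology:

```python
import unittest

# Toy qualitative proximity relation: map a quantitative distance to a
# qualitative category. The 2.0 threshold is an invented example value.
NEAR_THRESHOLD = 2.0

def proximity(distance):
    """Classify a distance as "near" or "far" (hypothetical relation)."""
    if distance < 0:
        raise ValueError("distance cannot be negative")
    return "near" if distance <= NEAR_THRESHOLD else "far"

class TestProximityBoundaries(unittest.TestCase):
    # Boundary cases probe the relation exactly at, and just beyond,
    # the value that separates the qualitative categories.
    def test_at_threshold(self):
        self.assertEqual(proximity(NEAR_THRESHOLD), "near")

    def test_just_beyond_threshold(self):
        self.assertEqual(proximity(NEAR_THRESHOLD + 1e-9), "far")

    def test_degenerate_zero_distance(self):
        self.assertEqual(proximity(0.0), "near")

    def test_invalid_negative_distance(self):
        with self.assertRaises(ValueError):
            proximity(-1.0)
```

The tests isolate one relation at a time, which mirrors the paper's concern with testing units of a qualitative model independently of the rest.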
Spatial information is often expressed using qualitative terms such as natural language expressions instead of coordinates; reasoning over such terms has several practical applications, such as bus route planning. Representing and reasoning on trajectories is a specific case of qualitative spatial reasoning that focuses on moving objects and their paths. In this work, we propose two versions of a trajectory calculus based on the allowed properties over trajectories, where trajectories are defined as a sequence of non-overlapping regions of a partitioned map. More specifically, if a given trajectory is allowed to start and finish at the same region, 6 base relations are defined (TC-6). If a given trajectory must have different start and finish regions but cycles are allowed within, 10 base relations are defined (TC-10). Both versions of the calculus are implemented as ASP programs; we propose several different encodings, including a generalised program capable of encoding any qualitative calculus in ASP. All proposed encodings are experimentally evaluated using a real-world dataset. Experimental results show that the best performing implementation can scale up to an input of 250 trajectories for TC-6 and 150 trajectories for TC-10 for the problem of discovering a consistent configuration, a significant improvement compared to previous ASP implementations for similar qualitative spatial and temporal calculi. This manuscript is under consideration for acceptance in TPLP.
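As an informal sketch of the two trajectory notions (not the paper's ASP encodings), the following stdlib-only Python checks whether a sequence of region labels satisfies simplified structural rules resembling TC-6 (same start and finish allowed) and TC-10 (distinct start and finish, interior cycles still allowed); the function names and rules are simplified inventions for illustration:

```python
# A trajectory is modelled as a sequence of region labels from a
# partitioned map. We only check structural constraints here, not the
# calculi's base relations.

def valid_tc6(traj):
    """Consecutive regions must differ; start may equal finish."""
    return len(traj) >= 2 and all(a != b for a, b in zip(traj, traj[1:]))

def valid_tc10(traj):
    """As TC-6, but start and finish must be distinct regions.

    Interior cycles (revisiting a region mid-trajectory) remain allowed.
    """
    return valid_tc6(traj) and traj[0] != traj[-1]
```

The real calculi define 6 and 10 base relations over pairs of such trajectories; this sketch only captures which sequences each version admits as a trajectory in the first place.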
There are also a bunch of other R packages that, like albersusa, make it easy to query geo-spatial data as an sf object. The "Reverse dependencies" section of sf's CRAN page is a good place to discover them, but just to name a few: tidycensus, rnaturalearth, and mapsapi. The most brilliant thing about sf is that it stores geo-spatial structures in a special list-column of a data frame. This allows each row to represent the real unit of observation/interest - whether that be a polygon, multi-polygon, point, line, or even a collection of these features - and as a result supports workflows that leverage tidy-data principles. Moreover, sf tracks additional information about the coordinate system and bounding box, which ensures your aspect ratios are always correct and also makes it easy to transform and simplify those features (more on this later).
You might be wondering, "What can plotly offer over other interactive mapping packages such as leaflet, mapview, mapedit, etc.?" One big feature is the linked brushing framework, which works best when linking plotly graphs together (i.e., only a subset of brushing features are supported when linking to other crosstalk-compatible htmlwidgets). Another is the ability to leverage the plotly.js API to make efficient updates in shiny apps via plotlyProxy().
Our lives today, and in the future, will increasingly pivot around the digitization of physical objects through efficient land, sea, and aerial surveys. The data collected will embed locational intelligence that will help us create maps with enhanced and meaningful spatial properties. These maps will form the substrate upon which the DNA of physical objects and their thematic properties will be seamlessly interwoven. The resulting rich datasets will become amenable to real-time analysis through cloud computing and can be shared anytime, anywhere. Temporal resolution of the data will be crucial for real-time and near-real-time applications, and thus controlled crowdsourcing with automated validation tools is bound to lead to more opportunities.