Telea, Alexandru
ShaRP: Shape-Regularized Multidimensional Projections
Machado, Alister, Telea, Alexandru, Behrisch, Michael
Projections, or dimensionality reduction methods, are techniques of choice for the visual exploration of high-dimensional data. Many such techniques exist, each having a distinct visual signature - i.e., a recognizable way of arranging points in the resulting scatterplot. Such signatures are implicit consequences of algorithm design choices, such as whether the method preserves local or global data patterns, the optimization technique used, and hyperparameter settings. We present a novel projection technique - ShaRP - that gives users explicit control over the visual signature of the created scatterplot, which caters better to interactive visualization scenarios. ShaRP scales well with dimensionality and dataset size, generically handles any quantitative dataset, and provides this extended ability to control projection shapes at a small, user-controllable cost in terms of quality metrics.
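As a minimal illustration of what a projection does (this is plain PCA, not ShaRP; the data and dimensions below are made-up examples), a projection maps each high-dimensional sample to a 2D point that can be drawn in a scatterplot:

```python
import numpy as np

# Hypothetical example data: 100 samples in 10 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))

# PCA via SVD: center the data, then keep the top-2 principal directions.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

# 2D scatterplot coordinates, one point per high-dimensional sample.
P = Xc @ Vt[:2].T
print(P.shape)  # (100, 2)
```

Different projection algorithms differ in how they choose this mapping, which is what produces their distinct visual signatures.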
Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling
Pagliosa, Lucas, Telea, Alexandru, Mello, Rodrigo
The reconstruction of phase spaces is an essential step in analyzing time series according to Dynamical System concepts. A regression performed on such spaces unveils the relationships among system states, from which we can derive their generating rules, that is, the most probable set of functions responsible for generating observations over time. In this sense, most approaches rely on Takens' embedding theorem to unfold the phase space, which requires two parameters: the embedding dimension and the time delay. Moreover, although several methods have been proposed to empirically estimate those parameters, they still face limitations due to their lack of consistency and robustness, which has motivated this paper. As an alternative, we here propose an artificial neural network with a forgetting mechanism to implicitly learn the phase-space properties, whatever they may be. Such a network trains on forecasting errors and, after converging, its architecture is used to estimate the embedding parameters. Experimental results confirm that our approach is either as competitive as or better than most state-of-the-art strategies while revealing the temporal relationship among time-series observations.
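The delay embedding from Takens' theorem that the abstract refers to can be sketched as follows (a minimal illustration, not the paper's neural-network estimator; the series, embedding dimension m, and delay tau are made-up examples):

```python
import numpy as np

def delay_embed(series, m, tau):
    """Build Takens delay-coordinate vectors
    [x_t, x_{t+tau}, ..., x_{t+(m-1)tau}] for each valid start index t."""
    n = len(series) - (m - 1) * tau  # number of complete embedding vectors
    return np.array([series[i : i + (m - 1) * tau + 1 : tau] for i in range(n)])

# Hypothetical example: a sampled sine wave embedded with m=3, tau=5.
x = np.sin(np.linspace(0, 8 * np.pi, 200))
E = delay_embed(x, m=3, tau=5)
print(E.shape)  # (190, 3): 190 phase-space points in 3 dimensions
```

The difficulty the paper addresses is that the quality of this reconstruction hinges on choosing m and tau well, which the proposed network learns implicitly instead of requiring empirical estimates.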