Unsupervised Learning From Incomplete Measurements for Inverse Problems

Neural Information Processing Systems

In many real-world inverse problems, only incomplete measurement data are available for training, which poses a problem for learning a reconstruction function. Indeed, unsupervised learning using a fixed incomplete measurement process is impossible in general, as there is no information in the nullspace of the measurement operator. This limitation can be overcome by using measurements from multiple operators. While this idea has been successfully applied in various applications, a precise characterization of the conditions for learning is still lacking. In this paper, we fill this gap by presenting necessary and sufficient conditions for learning the underlying signal model needed for reconstruction, which indicate the interplay between the number of distinct measurement operators, the number of measurements per operator, the dimension of the model and the dimension of the signals. Furthermore, we propose a novel and conceptually simple unsupervised learning loss which only requires access to incomplete measurement data and achieves performance on par with supervised learning when the sufficient condition is verified. We validate our theoretical bounds and demonstrate the advantages of the proposed unsupervised loss compared to previous methods via a series of experiments on various imaging inverse problems, such as accelerated magnetic resonance imaging, compressed sensing and image inpainting.
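
One way to make the multi-operator idea concrete is a loss that combines a measurement-consistency term with a cross-operator term that re-measures the current estimate using a different operator. The PyTorch-style sketch below illustrates that general construction under the assumption that f, A and the entries of operators are callables; it is an illustration of the idea, not the paper's exact loss.

```python
import torch

def multi_operator_loss(f, y, A, operators):
    """Sketch of an unsupervised loss using measurements from several
    incomplete operators (illustrative, not the paper's exact loss).
    f(y, A) -> x_hat is the reconstruction network; A and the entries
    of `operators` are forward measurement operators (callables)."""
    x_hat = f(y, A)

    # Measurement consistency: the reconstruction must explain its own data.
    loss_mc = torch.mean((A(x_hat) - y) ** 2)

    # Cross-operator term: re-measure the estimate with another operator from
    # the collection and require the network to recover the same image.
    # This is what supplies information about the nullspace of A.
    idx = torch.randint(len(operators), (1,)).item()
    A_s = operators[idx]
    loss_cross = torch.mean((f(A_s(x_hat), A_s) - x_hat) ** 2)

    return loss_mc + loss_cross
```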


AI-Based Energy Transportation Safety: Pipeline Radial Threat Estimation Using Intelligent Sensing System

Zhu, Chengyuan, Yang, Yiyuan, Yang, Kaixiang, Zhang, Haifeng, Yang, Qinmin, Chen, C. L. Philip

arXiv.org Artificial Intelligence

The application of artificial intelligence technology has greatly enhanced and fortified the safety of energy pipelines, particularly in safeguarding against external threats. The predominant methods involve the integration of intelligent sensors to detect external vibration, enabling the identification of event types and locations, thereby replacing manual detection methods. However, practical implementation has exposed a limitation in current methods - their constrained ability to accurately discern the spatial dimensions of external signals, which complicates the authentication of threat events. Our research endeavors to overcome the above issues by harnessing deep learning techniques to achieve a more fine-grained recognition and localization process. This refinement is crucial in effectively identifying genuine threats to pipelines, thus enhancing the safety of energy transportation. This paper proposes a radial threat estimation method for energy pipelines based on distributed optical fiber sensing technology. Specifically, we introduce a continuous multi-view and multi-domain feature fusion methodology to extract comprehensive signal features and construct a threat estimation and recognition network. The utilization of collected acoustic signal data is optimized, and the underlying principle is elucidated. Moreover, we incorporate the concept of transfer learning through a pre-trained model, enhancing both recognition accuracy and training efficiency. Empirical evidence gathered from real-world scenarios underscores the efficacy of our method, notably in its substantial reduction of false alarms and remarkable gains in recognition accuracy. More generally, our method exhibits versatility and can be extrapolated to a broader spectrum of recognition tasks and scenarios.
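
As a rough illustration of multi-domain feature fusion on a single acoustic channel, the sketch below concatenates time-domain statistics, band energies from a Welch power spectrum, and spectrogram summaries into one feature vector; the feature choices and sampling rate are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from scipy import signal

def multi_domain_features(x, fs=1000):
    """Illustrative multi-domain feature fusion for one fiber-sensing channel."""
    # Time domain: simple energy and shape statistics.
    t_feats = [x.mean(), x.std(), np.abs(x).max(), np.mean(x ** 2)]

    # Frequency domain: band energies from the Welch power spectral density.
    _, psd = signal.welch(x, fs=fs, nperseg=256)
    f_feats = [band.sum() for band in np.array_split(psd, 4)]

    # Time-frequency domain: spectrogram energy summarised over time slices.
    _, _, spec = signal.spectrogram(x, fs=fs, nperseg=128)
    tf_feats = [spec.mean(), spec.std()]

    return np.array(t_feats + f_feats + tf_feats)

# Example on a synthetic signal segment.
feats = multi_domain_features(np.random.randn(8192))
print(feats.shape)  # (10,)
```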


Elasticity Measurements of Expanded Foams using a Collaborative Robotic Arm

Beber, Luca, Lamon, Edoardo, Palopoli, Luigi, Fambri, Luca, Saveriano, Matteo, Fontanelli, Daniele

arXiv.org Artificial Intelligence

Medical applications of robots are increasingly popular to objectivise and speed up the execution of several types of diagnostic and therapeutic interventions. Particularly important is a class of diagnostic activities that require physical contact between the robotic tool and the human body, such as palpation examinations and ultrasound scans. The practical application of these techniques can greatly benefit from an accurate estimation of the biomechanical properties of the patient's tissues. In this paper, we evaluate the accuracy and precision of a robotic device used for medical purposes in estimating the elastic parameters of different materials. The measurements are evaluated against a ground truth consisting of a set of expanded foam specimens with different elasticity that are characterised using a high-precision device. The experimental results in terms of precision are comparable with the ground truth and suggest future ambitious developments.
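
For a sense of how an elastic parameter can be fitted from robot contact data, the following sketch assumes a Hertzian contact model, F = (4/3) E* sqrt(R) d^(3/2), and recovers an effective modulus by least squares; the model and numbers are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np

def estimate_effective_modulus(force, indentation, probe_radius):
    """Least-squares fit of an effective elastic modulus E* under an assumed
    Hertzian contact model F = (4/3) * E* * sqrt(R) * d**1.5."""
    basis = (4.0 / 3.0) * np.sqrt(probe_radius) * indentation ** 1.5
    E_star, *_ = np.linalg.lstsq(basis[:, None], force, rcond=None)
    return float(E_star[0])

# Synthetic check: ground-truth E* = 50 kPa, 5 mm spherical probe.
d = np.linspace(0, 2e-3, 50)                     # indentation depth [m]
F = (4 / 3) * 50e3 * np.sqrt(5e-3) * d ** 1.5    # contact force [N]
print(estimate_effective_modulus(F, d, 5e-3))    # ~50000.0
```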


Topological Reconstruction of Particle Physics Processes using Graph Neural Networks

Ehrke, Lukas, Raine, John Andrew, Zoch, Knut, Guth, Manuel, Golling, Tobias

arXiv.org Artificial Intelligence

We present a new approach, the Topograph, which reconstructs underlying physics processes, including the intermediary particles, by leveraging underlying priors from the nature of particle physics decays and the flexibility of message passing graph neural networks. The Topograph not only solves the combinatoric assignment of observed final state objects, associating them to their original mother particles, but directly predicts the properties of intermediate particles in hard scatter processes and their subsequent decays. In comparison to standard combinatoric approaches or modern approaches using graph neural networks, which scale exponentially or quadratically, the complexity of Topographs scales linearly with the number of reconstructed objects. We apply Topographs to top quark pair production in the all hadronic decay channel, where we outperform the standard approach and match the performance of the state-of-the-art machine learning technique.
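
The block below is a minimal, generic edge-to-node message-passing layer over reconstructed final-state objects, included only to illustrate the kind of operation involved; it is not the Topograph architecture, and its cost is set by the edge list it is given.

```python
import torch
import torch.nn as nn

class EdgeNodeBlock(nn.Module):
    """Generic message-passing block: edge messages are built from the two
    endpoint features, summed onto the destination node, and used to update
    the node features (illustrative sketch only)."""

    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, x, edge_index):
        src, dst = edge_index                                   # (2, n_edges)
        messages = self.edge_mlp(torch.cat([x[src], x[dst]], dim=-1))
        agg = torch.zeros_like(x).index_add_(0, dst, messages)  # sum per node
        return self.node_mlp(torch.cat([x, agg], dim=-1))

# Example: 6 reconstructed objects, 16 features, 4 directed edges.
x = torch.randn(6, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
print(EdgeNodeBlock(16)(x, edge_index).shape)  # torch.Size([6, 16])
```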


Forget Police Sketches: Researchers Perfectly Reconstruct Faces by Reading Brainwaves

#artificialintelligence

Using brain scans and direct neuron recording from macaque monkeys, the team found specialized "face patches" that respond to specific combinations of facial features. In the early 2000s, while recording from epilepsy patients with electrodes implanted into their brains, Quian Quiroga and colleagues found that face cells are particularly picky. In a stroke of luck, Tsao and team blew open the "black box" of facial recognition while working on a different problem: how to describe a face mathematically, with a matrix of numbers. In macaque monkeys with electrodes implanted into their brains, the team recorded from three "face patches"--brain areas that respond especially to faces--while showing the monkeys the computer-generated faces.


Comparing Distance Measurements with Python and SciPy

@machinelearnbot

At the core of cluster analysis is the concept of measuring distances between a variety of different data point dimensions. For example, when considering k-means clustering, there is a need to measure a) distances between individual data point dimensions and the corresponding cluster centroid dimensions of all clusters, and b) distances between cluster centroid dimensions and all resulting cluster member data point dimensions. While k-means, the simplest and most prominent clustering algorithm, generally uses Euclidean distance as its similarity measurement, contriving innovative or variant clustering algorithms which, among other alterations, utilize different distance measurements is not a stretch. Cosine similarity, another common measure, is a judgment of orientation and not magnitude: two vectors with the same orientation have a cosine similarity of 1, two vectors at 90° have a similarity of 0, and two vectors diametrically opposed have a similarity of -1, independent of their magnitude.
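
A minimal example of these measurements with scipy.spatial.distance (assumed to be the module the post relies on):

```python
import numpy as np
from scipy.spatial import distance

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])

print(distance.euclidean(a, b))    # straight-line (L2) distance
print(distance.cityblock(a, b))    # Manhattan (L1) distance
print(distance.cosine(a, b))       # cosine *distance* = 1 - cosine similarity
print(1 - distance.cosine(a, b))   # cosine similarity: 1.0, same orientation
```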


Dan Cziczo and Maria Zawadowicz: Measuring Biological Dust in the Upper Atmosphere

MIT News

When applied to previously-collected atmospheric samples and data, their findings support evidence that on average these bioaerosols globally make up less than 1 percent of the particles in the upper troposphere -- where they could influence cloud formation and by extension, the climate -- and not around 25 to 50 percent as some previous research suggests. While atmospheric and climate modeling suggests that bioaerosols, globally averaged, are not abundant and efficient enough at freezing to significantly influence cloud formation, research findings have varied significantly. The group leveraged the presence of phosphorus in the mass spectra to train the classification machine learning algorithm on known samples and then, primed, applied it to field data acquired from Desert Research Institute's Storm Peak Laboratory in Steamboat Springs, Colorado, and from the Carbonaceous Aerosol and Radiative Effects Study based in the town of Cool, California. Knowing that the principal atmospheric emissions of phosphorus are from mineral dust, combustion products, and biological particles, they exploited the presence of phosphate and organic nitrogen ions and their characteristic ratios in known samples to classify the particles.
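
Schematically, the classification step can be pictured as training a classifier on ion-marker features extracted from known laboratory spectra and then applying it to field particles; the feature names, label set and choice of random forest below are illustrative assumptions, not the study's actual algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in features per particle spectrum: phosphate peak intensity,
# organic-nitrogen peak intensity, and their ratio (placeholder values).
X_known = rng.random((300, 3))
y_known = rng.integers(0, 3, 300)   # 0 = biological, 1 = mineral dust, 2 = combustion

# Train on known samples, then label previously unseen field particles.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_known, y_known)
X_field = rng.random((5, 3))
print(clf.predict(X_field))
```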


Artificial intelligence replaces physicists

#artificialintelligence

The experiment, developed by physicists from ANU, University of Adelaide and UNSW ADFA, created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize. The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. The team cooled the gas to around 1 microkelvin, and then handed control of the three laser beams over to the artificial intelligence to cool the trapped gas down to nanokelvin. "It may be able to come up with complicated ways humans haven't thought of to get experiments colder and make measurements more precise.
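
The closed loop described here can be sketched as an optimiser repeatedly proposing laser-ramp parameters and receiving a measured temperature back from the experiment. The stand-in objective and the use of SciPy's Nelder-Mead below are illustrative assumptions; the actual work used a purpose-built online learner.

```python
import numpy as np
from scipy.optimize import minimize

def run_cooling_ramp(params):
    """Stand-in for the apparatus: returns a toy 'temperature' in kelvin that
    is lowest near some unknown ideal ramp settings."""
    ideal = np.array([0.3, 0.7, 0.5])
    return 1e-9 * (1 + np.sum((params - ideal) ** 2))

# Closed loop: propose parameters, measure, repeat until converged.
result = minimize(run_cooling_ramp, x0=np.array([0.5, 0.5, 0.5]),
                  method="Nelder-Mead")
print(result.x, result.fun)   # settings near the ideal ramp, temperature ~1 nK
```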


8 A Theory of Advice - Donald Michie

AI Classics

Machine intelligence problems are sometimes defined as those problems which (i) computers can't yet do, and (ii) humans can. We shall further consider how much "knowledge" about a finite mathematical function can, on certain assumptions, be credited to a computer program. Although our approach is quite general, we are really only interested in programs which evaluate "semi-hard" functions, believing that the evaluation of such functions constitutes the defining aspiration of machine intelligence work. If a function is less hard than "semi-hard," then we can evaluate it by pure algorithm (trading space for time) or by pure look-up (making the opposite trade), with no need to talk of knowledge, advice, machine intelligence, or any of those things. We call such problems "standard." If however the function is "semi-hard," then we will be driven to construct some form of artful compromise between the two representations: without such a compromise the function will not be evaluable within practical resource limits. If the function is harder than "semi-hard," i.e. is actually "hard," then no amount of compromise can ever make feasible its evaluation by any terrestrial device. "Hard" problems: In a recent lecture, Knuth (1976) called attention to the notion of a "hard" problem as one for which solutions are computable in the theoretical sense but not within practical resource limits. For illustration he referred to the task, studied by Meyer and Stockmeyer, of determining the truth-values of statements about whole numbers expressed in a restricted logical symbolism, for example ∀x ∀y(y ...). But is the problem nevertheless in some important sense "hard"? Meyer and Stockmeyer showed that if we allow input expressions to be as long as only 617 symbols then the answer is "yes," reckoning "hardness" as follows: find an evaluation algorithm expressed as an electrical network of gates and registers such as to minimise the number of components; if this number exceeds the number of elementary particles in the observable Universe (say, 10^125), then the problem is "hard."


7 Dynamic Probability, Computer Chess, and the Measurement of Knowledge - I. J. Good

AI Classics

Virginia Polytechnic Institute and State University, Blacksburg, Virginia. Philosophers and "pseudognosticians" (the artificial intelligentsia) are coming more and more to recognize that they share common ground and that each can learn from the other. This has been generally recognized for many years as far as symbolic logic is concerned, but less so in relation to the foundations of probability. In this essay I hope to convince the pseudognostician that the philosophy of probability is relevant to his work. One aspect that I could have discussed would have been probabilistic causality (Good, 1961/62), in view of Hans Berliner's forthcoming paper "Inferring causality in tactical analysis", but my topic here will be mainly dynamic probability. The close relationship between philosophy and pseudognostics is easily understood, for philosophers often try to express as clearly as they can how people make judgments. To parody Wittgenstein, what can be said at all can be said clearly and it can be programmed. A paradox might seem to arise. Formal systems, such as those used in mathematics, logic, and computer programming, can lead to deductions outside the system only when there is an input of assumptions. For example, no probability can be numerically inferred from the axioms of probability unless some probabilities are assumed without using the axioms: ex nihilo nihil fit. This leads to the main controversies in the foundations of statistics: the controversies of whether intuitive probability should be used in statistics and, if so, whether it should be logical probability (credibility) or subjective (personal).