simplification


How To Automate Your Statistical Data Analysis

#artificialintelligence

During my university studies, I attended a course named Statistical Data Analysis. I was excited about this course because it taught me the basic statistical analysis methods: (non-)linear regression, ANOVA, MANOVA, LDA, PCA, and so on. However, I never learned about the business applications of these methods. We worked through several examples during the course, but all of them were CSV datasets, mainly from Kaggle.
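
An automated pipeline of this kind might look like the following sketch, which loads a CSV and runs a linear regression plus a one-way ANOVA. The file name and column names (sales.csv, price, revenue, region) are hypothetical placeholders, not from the article.

```python
# A minimal sketch of automating a basic statistical analysis over a CSV.
# File and column names below are invented placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("sales.csv")

# Simple linear regression: revenue ~ price
model = smf.ols("revenue ~ price", data=df).fit()
print(model.summary())

# One-way ANOVA: does mean revenue differ across regions?
groups = [g["revenue"].values for _, g in df.groupby("region")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")
```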


Less is More: Understanding Neural Network Decisions via Simplified Yet Informative Inputs

#artificialintelligence

Understanding the relative importance of input information in the neural network learning process could lead to improved model interpretability and new scientific discoveries. A popular method for connecting information to learning is to use heuristic ablation techniques that mask or remove information, creating simpler versions of an input, and then analyze the network's predictions on these simpler inputs. But might there be a better way? In the new paper When Less is More: Simplifying Inputs Aids Neural Network Understanding, a research team from University Medical Center Freiburg, ML Collective, and Google Brain introduces SimpleBits -- an information-reduction method that learns to synthesize simplified inputs that contain less information yet remain informative for the task, providing a new way to explore the basis of network decisions. The researchers set out to answer two questions: How do neural network image classifiers respond to simpler and simpler inputs?
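
For contrast, the heuristic-ablation baseline the paper improves upon can be sketched as an occlusion test: mask patches of the input and record how much the prediction drops. The model and image in this sketch are assumed placeholders, not the paper's setup.

```python
# A minimal sketch of heuristic ablation: occlude patches of an image and
# measure how the classifier's confidence in the target class changes.
import torch
import torch.nn.functional as F

def occlusion_sensitivity(model, image, target, patch=8):
    """Return a map of prediction drops when each patch is masked out."""
    model.eval()
    _, h, w = image.shape
    with torch.no_grad():
        base = F.softmax(model(image.unsqueeze(0)), dim=1)[0, target]
    drops = torch.zeros((h + patch - 1) // patch, (w + patch - 1) // patch)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.clone()
            masked[:, i:i+patch, j:j+patch] = 0.0  # remove this information
            with torch.no_grad():
                p = F.softmax(model(masked.unsqueeze(0)), dim=1)[0, target]
            drops[i // patch, j // patch] = base - p
    return drops  # large values mark patches the prediction relies on
```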


FPGA-optimized Hardware acceleration for Spiking Neural Networks

arXiv.org Artificial Intelligence

Artificial intelligence (AI) is gaining success and importance in many different tasks. The growing pervasiveness and complexity of AI systems push researchers towards developing dedicated hardware accelerators. Spiking Neural Networks (SNN) represent a promising solution in this sense, since they implement models that are more suitable for a reliable hardware design. Moreover, from a neuroscience perspective, they better emulate the human brain. This work presents the development of a hardware accelerator for an SNN with off-line training, applied to an image-recognition task using MNIST as the target dataset. Many techniques are used to minimize area and maximize performance, such as replacing multiplication operations with simple bit shifts and minimizing the time spent on inactive spikes, which are useless for updating neurons' internal state. The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total and reducing classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
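
The two optimizations named above can be illustrated with a minimal, purely illustrative Python sketch (the actual design is FPGA hardware, not software): weights constrained to powers of two turn multiplications into bit shifts, and inactive spikes are skipped entirely.

```python
# An illustrative software sketch, NOT the paper's RTL design: an integer
# leaky integrate-and-fire step using bit shifts and spike skipping.
def lif_update(potentials, spikes, shift_weights, threshold=1 << 10, leak=1):
    """One timestep of integer LIF neurons; all values are illustrative."""
    out = []
    for n, v in enumerate(potentials):
        # Iterate only over active (spiking) inputs: inactive spikes
        # contribute nothing to the update, so skipping them saves cycles.
        for pre in (i for i, s in enumerate(spikes) if s):
            v += 1 << shift_weights[n][pre]  # shift instead of multiply
        v -= leak  # leakage
        if v >= threshold:
            out.append(1)
            v = 0  # reset the membrane potential after firing
        else:
            out.append(0)
        potentials[n] = v
    return out
```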


Disease Informed Neural Networks

#artificialintelligence

In the paper, we used DINNs to identify the dynamics of 11 highly infectious and deadly diseases. These systems vary in their complexity and number of parameters. The diseases include COVID, Anthrax, HIV, Zika, Smallpox, Tuberculosis, Pneumonia, Ebola, Dengue, Polio, and Measles. The entire code and experiments can be found here, and the specific tutorial notebook can be found here. Diseases can differ vastly in which organisms they affect, their symptoms, and the speed at which they spread.
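
As a rough illustration of the disease-informed idea, a DINN fits a network to epidemic data while penalizing violations of the disease's governing ODEs. The sketch below uses a simple SIR model with assumed rates and layer sizes, not the authors' exact architecture.

```python
# A minimal sketch of a physics-informed residual loss for an SIR model.
# Layer sizes, rate initializations, and collocation points are assumed.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 3))  # t -> (S, I, R)
beta = nn.Parameter(torch.tensor(0.3))   # learnable transmission rate
gamma = nn.Parameter(torch.tensor(0.1))  # learnable recovery rate

def physics_loss(t):
    t = t.requires_grad_(True)
    s, i, r = net(t).unbind(dim=1)
    ds = torch.autograd.grad(s.sum(), t, create_graph=True)[0].squeeze()
    di = torch.autograd.grad(i.sum(), t, create_graph=True)[0].squeeze()
    dr = torch.autograd.grad(r.sum(), t, create_graph=True)[0].squeeze()
    # Residuals of dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I
    return ((ds + beta * s * i) ** 2
            + (di - beta * s * i + gamma * i) ** 2
            + (dr - gamma * i) ** 2).mean()

loss = physics_loss(torch.rand(128, 1))  # collocation times in [0, 1)
# A full DINN would add a data-fit term on observed case counts.
```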


Data-Driven AI Model Signal-Awareness Enhancement and Introspection

arXiv.org Artificial Intelligence

AI modeling for source code understanding tasks has been making significant progress and is being adopted in production development pipelines. However, reliability concerns are being raised, especially over whether the models are actually learning task-related aspects of source code. While recent model-probing approaches have observed a lack of signal awareness in many AI-for-code models, i.e., models not capturing task-relevant signals, they do not offer solutions to rectify this problem. In this paper, we explore data-driven approaches to enhance models' signal awareness: 1) we combine the SE concept of code complexity with the AI technique of curriculum learning; 2) we incorporate SE assistance into AI models by customizing Delta Debugging to generate simplified, signal-preserving programs and adding them to the training dataset. With our techniques, we achieve up to a 4.8x improvement in model signal awareness. Using the notion of code complexity, we further present a novel model-learning introspection approach from the perspective of the dataset.
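
The curriculum-learning half of the approach can be sketched as ordering training programs from simple to complex. The complexity proxy below (branch keywords plus line count) is an assumed stand-in for the SE metrics the paper actually uses.

```python
# A minimal sketch of complexity-ordered curriculum learning for code models.
# The complexity heuristic here is an illustrative stand-in.
def complexity(code: str) -> int:
    """Crude proxy: count branching keywords plus lines of code."""
    branches = sum(code.count(kw) for kw in ("if ", "for ", "while ", "case "))
    return branches + code.count("\n")

def curriculum(dataset):
    """Return (code, label) pairs ordered easiest-first."""
    return sorted(dataset, key=lambda pair: complexity(pair[0]))

# Usage: feed curriculum(train_set) to the trainer in order rather than
# shuffling, so the model sees simple, clearly-signaled programs first.
```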


CIOs and other IT leaders share predictions and tech trends for 2022

#artificialintelligence

As we enter 2022, CIOs and other IT leaders are predicting more of the same issues: a tech talent shortage that will stress organizations still working on modernization efforts, increased use of artificial intelligence and analytics, and a continued push to enhance security, among other innovative technologies. Among its predictions for 2022, Forrester believes "a tech talent panic will create broad gaps until new sourcing models go mainstream." IT organizations face a 13.8% attrition rate, reflecting a slow move to "future fit" talent strategies, according to Forrester. "The demand for technology talent maintains a frenetic pace with an emphasis on data and analytics, information security, architecture, cloud and engineering," said Craig Stephenson, managing director, North America Technology Officers Practice, at Korn Ferry. Here's how they plan to cope.


POS Tagging, Explained - KDnuggets

#artificialintelligence

Modern approaches to Natural Language Processing streamline document analysis by way of simplification. Simply put, there is a tendency to drop the hard stuff (i.e., understanding the content) in favor of more direct techniques: looking at words, how often they appear in documents, and what other words show up next to them or elsewhere in the same document. This kind of statistical information is collected and carefully optimized during what is known in Machine Learning as the Training stage. Practically speaking, a person manually tags a document that talks, for instance, about sports with the label "Sports" (known as the Target); when that document is processed, the engine collects all the words present and marks them as potentially indicating a sports context. As more content from the Training Set of documents is analyzed, some of those words appear again (reinforcing the idea that they truly are indicative of the domain of sports) while others are absent (softening the possibility that they have domain significance). Naturally, while simplification is appealing (because of its speed and the modest skills it requires for tackling unstructured data), it has its drawbacks too.
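
The word-counting scheme described above corresponds roughly to a bag-of-words classifier. Here is a minimal sketch using scikit-learn, with an invented four-document training set (the texts and labels are illustrative).

```python
# A minimal sketch of the statistical approach the article describes:
# count words in manually tagged documents and let those counts decide
# the label of new text. The tiny training set is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = ["the striker scored a late goal",
              "the team won the championship match",
              "the central bank raised interest rates",
              "stocks fell after the earnings report"]
targets = ["Sports", "Sports", "Finance", "Finance"]

# Words seen often under a label reinforce it; absent words soften it.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_docs, targets)
print(clf.predict(["the goalkeeper saved the match"]))  # -> ['Sports']
```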


Control Prefixes for Text Generation

arXiv.org Artificial Intelligence

Prompt learning methods adapt pre-trained language models to downstream applications by using a task-specific prompt together with the input. Most of the current work on prompt learning in text generation relies on a shared dataset-level prompt for all examples in the dataset. We extend this approach and propose a dynamic method, Control Prefixes, which allows for the inclusion of conditional input-dependent information in each prompt. Control Prefixes is at the intersection of prompt learning and controlled generation, empowering the model to have finer-grained control during text generation. The method incorporates attribute-level learnable representations into different layers of a pre-trained transformer, allowing for the generated text to be guided in a particular direction. We provide a systematic evaluation of the technique and apply it to five datasets from the GEM benchmark for natural language generation (NLG). We present state-of-the-art results on several data-to-text datasets, including WebNLG.
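
The core mechanism can be sketched as prefix tuning with an extra attribute-conditioned lookup: a shared task prefix plus a learnable prefix per attribute value, prepended to every layer's keys and values. The shapes and attribute set below are illustrative, not the paper's configuration.

```python
# A minimal sketch of attribute-conditioned prefixes for a frozen LM.
# All dimensions and attribute names are assumed for illustration.
import torch
import torch.nn as nn

class ControlPrefixes(nn.Module):
    def __init__(self, n_layers, n_heads, d_head, prefix_len, attributes):
        super().__init__()
        shape = (n_layers, 2, prefix_len, n_heads, d_head)  # 2 = key, value
        self.task_prefix = nn.Parameter(torch.randn(shape) * 0.02)
        # One extra learnable prefix per controllable attribute value.
        self.attr_prefix = nn.ParameterDict({
            a: nn.Parameter(torch.randn(shape) * 0.02) for a in attributes})

    def forward(self, attribute):
        # Concatenate along the prefix-length axis; the frozen LM then
        # attends to these extra key/value pairs at every layer.
        return torch.cat(
            [self.task_prefix, self.attr_prefix[attribute]], dim=2)

prefixes = ControlPrefixes(n_layers=12, n_heads=12, d_head=64,
                           prefix_len=5, attributes=["seen", "unseen"])
past_key_values = prefixes("seen")  # handed to the frozen language model
```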


Ethics of Artificial Intelligence Plays a Role in Engineering

#artificialintelligence

We know that predictive models developed by artificial-intelligence (AI) and machine-learning (ML) algorithms are based on data. Because we know how this data is used to build AI-based models, the main target of AI ethics is addressing how AI models become biased through the quality and quantity of the data used. This second part of a two-part series discusses how AI ethics can determine and clarify how the human biases of traditional engineers--assumptions, interpretations, simplifications, and preconceived notions--can surface in the engineering applications of AI and ML. Part 1 discussed the nonengineering applications of AI and ML and how human biases such as racism and sexism can enter AI models through the inclusion of biased data during the training of the algorithms. In the engineering application of AI, bias from traditional engineers (including major assumptions, interpretations, and simplifications) can likewise be built into the models.


EVOQUER: Enhancing Temporal Grounding with Video-Pivoted BackQuery Generation

arXiv.org Artificial Intelligence

Temporal grounding aims to predict a time interval of a video clip corresponding to a natural language query input. In this work, we present EVOQUER, a temporal grounding framework incorporating an existing text-to-video grounding model and a video-assisted query generation network. Given a query and an untrimmed video, the temporal grounding model predicts the target interval, and the predicted video clip is fed into a video translation task that generates a simplified version of the input query. EVOQUER forms closed-loop learning by combining loss functions from both temporal grounding and query generation, with the latter serving as feedback. Our experiments on two widely used datasets, Charades-STA and ActivityNet, show that EVOQUER achieves promising improvements of 1.05 and 1.31 in R@0.7. We also discuss how the query generation task could facilitate error analysis by explaining temporal grounding model behavior.
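
The closed-loop objective can be sketched as a single combined loss. The module interfaces, loss forms, and weighting term below are assumed placeholders, not EVOQUER's actual implementation.

```python
# A minimal sketch of a closed-loop objective: grounding loss plus a
# back-query generation loss acting as feedback. Interfaces are assumed.
import torch

def evoquer_loss(grounding_model, query_decoder, video, query, span, lam=0.5):
    pred_span, clip_feats = grounding_model(video, query)   # predict interval
    grounding_loss = torch.abs(pred_span - span).mean()     # e.g. L1 on bounds
    # Translate the predicted clip back into a simplified query; its
    # reconstruction loss feeds gradient signal back to the grounder.
    generation_loss = query_decoder(clip_feats, target=query)
    return grounding_loss + lam * generation_loss
```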