Forecasting: theory and practice Machine Learning

Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is exhaustive in its coverage of methods and applications. We hope, however, that its encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, along with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.
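To make the "produce and evaluate forecasts" part of that workflow concrete, here is a minimal sketch, assuming a toy monthly series and an invented smoothing parameter (neither comes from the article): it produces a naive and a simple exponential smoothing forecast and scores both with the mean absolute error.

```python
# Minimal sketch: produce two baseline forecasts and evaluate them with MAE.
# The toy series and the smoothing parameter are invented for illustration.

def ses_forecast(history, alpha=0.3):
    """Simple exponential smoothing: one-step-ahead forecast."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def mae(actuals, forecasts):
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]  # toy data
train, test = series[:-3], series[-3:]

naive_preds, ses_preds, history = [], [], list(train)
for actual in test:                      # rolling one-step-ahead evaluation
    naive_preds.append(history[-1])      # naive: repeat the last observation
    ses_preds.append(ses_forecast(history))
    history.append(actual)

print("MAE naive:", round(mae(test, naive_preds), 2))
print("MAE SES:  ", round(mae(test, ses_preds), 2))
```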

Is Apple building a DRONE? New patents filed by tech giant describe small unmanned aerial vehicles

Daily Mail - Science & tech

Apple is rumored to be developing several technologies outside of smartphones and tablets, such as a VR headset and a car, but new patents awarded to the tech giant on Thursday suggest it may also be working on a drone. The two patents describe small unmanned aerial vehicles (UAVs) that pair with wireless controllers, or drones operated via an iPhone or a Nintendo DS. Apple, however, initially filed the patents in Singapore 'to keep the projects a secret,' but has since filed the pair with the US Patent & Trademark Office. The images in the patents depict a small drone with four rotors, a common design for small UAVs.

Scalable Teacher Forcing Network for Semi-Supervised Large Scale Data Streams Artificial Intelligence

The large-scale data stream problem refers to high-speed information flows that cannot be processed in a scalable manner on a traditional computing platform. The problem also imposes an expensive labelling cost, making the deployment of fully supervised algorithms unfeasible. On the other hand, the problem of semi-supervised large-scale data streams is little explored in the literature, because most works are designed for traditional single-node computing environments and are fully supervised. This paper offers the Weakly Supervised Scalable Teacher Forcing Network (WeScatterNet) to cope with the scarcity of labelled samples and large-scale data streams simultaneously. WeScatterNet is crafted under the distributed computing platform of Apache Spark, with a data-free model fusion strategy for model compression after the parallel computing stage. It features an open network structure to address the global and local drift problems while integrating a data augmentation, annotation and auto-correction ($DA^3$) method for handling partially labelled data streams. The performance of WeScatterNet is numerically evaluated on six large-scale data stream problems with only $25\%$ label proportions. It shows highly competitive performance even when compared with fully supervised learners having $100\%$ label proportions.
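The abstract does not spell out the $DA^3$ routine, so the following is only a generic sketch of the confidence-thresholded pseudo-labelling idea it alludes to, written with scikit-learn on a single node rather than the authors' Apache Spark implementation; the classifier, the 0.9 confidence threshold, and the toy stream are all assumptions for illustration.

```python
# Generic sketch of pseudo-labelling on a partially labelled stream.
# This is NOT WeScatterNet: the classifier, threshold and batch handling
# are placeholder choices for illustration only.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])
CONF_THRESHOLD = 0.9      # keep only confident pseudo-labels (assumed value)

def process_batch(X, y):
    """y contains -1 for unlabelled samples (e.g. 75% of the stream)."""
    labelled = y != -1
    if labelled.any():
        model.partial_fit(X[labelled], y[labelled], classes=classes)
    # "annotate": pseudo-label the unlabelled part with confident predictions
    if (~labelled).any() and hasattr(model, "coef_"):
        proba = model.predict_proba(X[~labelled])
        confident = proba.max(axis=1) >= CONF_THRESHOLD
        if confident.any():
            pseudo = proba[confident].argmax(axis=1)
            model.partial_fit(X[~labelled][confident], classes[pseudo])

# toy stream: 10 batches of 64 samples, roughly 25% labelled
for _ in range(10):
    X = rng.normal(size=(64, 5))
    y = (X[:, 0] > 0).astype(int)
    y[rng.random(64) > 0.25] = -1       # hide about 75% of the labels
    process_batch(X, y)
```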

Quantum Computing Market worth $1,765 million by 2026 - Exclusive Report by MarketsandMarkets


According to the new market research report "Quantum Computing Market with COVID-19 impact by Offering (Systems and Services), Deployment (On Premises and Cloud Based), Application, Technology, End-use Industry and Region - Global Forecast to 2026", published by MarketsandMarkets, the market is expected to grow from USD 472 million in 2021 to USD 1,765 million by 2026, at a CAGR of 30.2%. The early adoption of quantum computing in the banking and finance sector is expected to fuel the growth of the market globally. Other key factors contributing to the growth of the quantum computing market include rising investments by governments of different countries to carry out research and development activities related to quantum computing technology. Several companies are focusing on the adoption of QCaaS post-COVID-19. This, in turn, is expected to contribute to the growth of the quantum computing market.
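The quoted growth figure can be sanity-checked from the two endpoints in the report: a compound annual growth rate over the five years from 2021 to 2026 is (1765/472)^(1/5) - 1, roughly 30%, consistent with the stated 30.2% CAGR.

```python
# Sanity-check the reported CAGR from the quoted 2021 and 2026 market sizes.
start, end, years = 472.0, 1765.0, 2026 - 2021
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~30.2%, consistent with the report
```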

A Survey of FPGA-Based Robotic Computing Artificial Intelligence

Recent research on robotics has shown significant improvement, spanning from algorithms and mechanics to hardware architectures. Robots, including manipulators, legged robots, drones, and autonomous vehicles, are now widely applied in diverse scenarios. However, the high computation and data complexity of robotic algorithms pose great challenges to their applications. On the one hand, the CPU platform is flexible enough to handle multiple robotic tasks, and the GPU platform has higher computational capacity and easy-to-use development frameworks, so it has been widely adopted in several applications. On the other hand, FPGA-based robotic accelerators are becoming increasingly competitive alternatives, especially in latency-critical and power-limited scenarios. With specially designed hardware logic and algorithm kernels, FPGA-based accelerators can surpass CPUs and GPUs in performance and energy efficiency. In this paper, we give an overview of previous work on FPGA-based robotic accelerators covering different stages of the robotic system pipeline. An analysis of software and hardware optimization techniques and main technical issues is presented, along with some commercial and space applications, to serve as a guide for future work. Over the last decade, we have seen significant progress in the development of robotics, spanning from algorithms and mechanics to hardware architectures. Various robotic systems, like manipulators, legged robots, unmanned aerial vehicles, and self-driving cars, have been designed for search and rescue [1], [2], exploration [3], [4], package delivery [5], entertainment [6], [7], and more applications and scenarios. These robots are on the rise of demonstrating their full potential. Take drones, a type of aerial robot, for example: the number of drones grew by 2.83x between 2015 and 2019, the registered number reached 1.32 million in 2019, and the FAA expects this figure to reach 1.59 million by 2024. However, the computation and storage complexity, as well as the real-time and power constraints of the robotic system, hinder its wide application in latency-critical or power-limited scenarios [13]. Therefore, it is essential to choose a proper compute platform for the robotic system. CPUs and GPUs are two widely used commercial compute platforms. The CPU is designed to handle a wide range of tasks quickly and is often used to develop novel algorithms; a typical CPU can achieve 10-100 GFLOPS with below 1 GOP/J power efficiency [14]. In contrast, the GPU is designed with thousands of processor cores running simultaneously, which enables massive parallelism; a typical GPU can perform up to 10 TOPS and is a good candidate for high-performance scenarios.
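The efficiency comparison in that passage reduces to throughput divided by power draw (GOPS per watt equals GOP/J). The snippet below runs that arithmetic on illustrative numbers chosen to sit inside the quoted ranges; the specific wattages and the FPGA figures are assumptions, not measurements from the survey.

```python
# Back-of-the-envelope efficiency comparison (throughput / power).
# All numbers are illustrative assumptions, not figures from the survey.
platforms = {
    # name: (throughput in GOPS, power in watts)
    "CPU":  (50,      65),   # ~10-100 GFLOPS class
    "GPU":  (10_000, 250),   # ~10 TOPS class
    "FPGA": (1_000,   20),   # specialised accelerator (assumed figures)
}
for name, (gops, watts) in platforms.items():
    print(f"{name}: {gops / watts:.1f} GOP/J")
```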

GPT-3 Creative Fiction


"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
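The advice about constraining GPT-3 by imitating a correct output can be made concrete with a prompt that frames the task and then writes the first words of the desired completion. The sketch below only assembles such a prompt string; the passage and the seeded opening are invented, and no particular completion API is assumed.

```python
# Build a summarization prompt that constrains the model by (a) framing the
# task and (b) seeding the first words of the desired output. The passage
# and the seeded opening are invented examples.
passage = (
    "Photosynthesis converts light energy into chemical energy, "
    "storing it in sugars built from carbon dioxide and water."
)
prompt = (
    f'My second grader asked me what this passage means:\n\n"{passage}"\n\n'
    "I rephrased it for him, in plain language a second grader can understand:\n\n"
    '"Plants use sunlight to '   # seed the first words of the target output
)
print(prompt)
```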

Single-Layer Graph Convolutional Networks For Recommendation Machine Learning

Graph Convolutional Networks (GCNs) and their variants have received significant attention and achieved state-of-the-art performance on various recommendation tasks. However, many existing GCN models tend to perform recursive aggregations among all related nodes, which incurs a severe computational burden. Moreover, they favor multi-layer architectures in conjunction with complicated modeling techniques. Though effective, the excessive number of model parameters largely hinders their application in real-world recommender systems. To this end, in this paper we propose a single-layer GCN model which is able to achieve superior performance along with remarkably less complexity compared with existing models. Our main contribution is three-fold. First, we propose a principled similarity metric named distribution-aware similarity (DA similarity), which can guide the neighbor sampling process and explicitly evaluate the quality of the input graph. We also prove, through both theoretical analysis and empirical simulations, that DA similarity has a positive correlation with the final performance. Second, we propose a simplified GCN architecture which employs a single GCN layer to aggregate information from the neighbors filtered by DA similarity and then generates the node representations. Moreover, the aggregation step is a parameter-free operation, such that it can be done in a pre-processing manner to further reduce the training and inference costs. Third, we conduct extensive experiments on four datasets. The results verify that the proposed model outperforms existing GCN models considerably in terms of recommendation performance and yields up to a few orders of magnitude speedup in training.
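Given that the abstract describes the aggregation step as parameter-free and pre-computable, the overall shape of such a model can be sketched as follows: filter and aggregate each node's neighbors once as a pre-processing pass, then train a single linear transformation on the aggregated features. The cosine-similarity filter and all sizes below are placeholders; they are not the paper's DA similarity or its sampling procedure.

```python
# Sketch of a single-layer, pre-computed-aggregation GCN for recommendation.
# Neighbour filtering here is a cosine-similarity placeholder, not the
# paper's distribution-aware (DA) similarity.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feats, n_dim = 100, 16, 8
X = rng.normal(size=(n_nodes, n_feats))          # raw node features
A = rng.random((n_nodes, n_nodes)) < 0.05        # toy adjacency
A = np.triu(A, 1); A = A | A.T                   # symmetric, no self-loops

def filter_neighbours(A, X, threshold=0.1):
    """Keep only edges whose endpoints look similar (placeholder metric)."""
    norm = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = norm @ norm.T
    return A & (sim > threshold)

# Pre-processing: one parameter-free aggregation over filtered neighbours.
A_f = filter_neighbours(A, X).astype(float)
deg = A_f.sum(axis=1, keepdims=True).clip(min=1.0)
H = (A_f @ X) / deg                              # mean of neighbour features

# The only trainable part: a single linear layer on the pre-aggregated H.
W = rng.normal(size=(n_feats, n_dim)) * 0.1
embeddings = H @ W                               # node representations
print(embeddings.shape)                          # (100, 8)
```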

K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations Artificial Intelligence

Recognizing emotions during social interactions has many potential applications with the popularization of low-cost mobile sensors, but a challenge remains in the lack of naturalistic affective interaction data. Most existing emotion datasets do not support studying idiosyncratic emotions arising in the wild, as they were collected in constrained environments. Studying emotions in the context of social interactions therefore requires a novel dataset, and K-EmoCon is such a multimodal dataset, with comprehensive annotations of continuous emotions during naturalistic conversations. The dataset contains multimodal measurements, including audiovisual recordings, EEG, and peripheral physiological signals, acquired with off-the-shelf devices from 16 sessions of approximately 10-minute-long paired debates on a social issue. Distinct from previous datasets, it includes emotion annotations from all three available perspectives: self, debate partner, and external observers. Raters annotated emotional displays at 5-second intervals while viewing the debate footage, in terms of arousal-valence and 18 additional categorical emotions. The resulting K-EmoCon is the first publicly available emotion dataset accommodating the multi-perspective assessment of emotions during social interactions.
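Because the ratings are given at 5-second intervals, working with the dataset typically means slicing each continuous signal into 5-second windows aligned with the annotation timeline. The sketch below shows that windowing step on a synthetic signal; the 4 Hz sampling rate and the derived features are assumptions for illustration, not K-EmoCon's actual file format.

```python
# Align a continuous physiological signal with 5-second annotation intervals.
# The 4 Hz sampling rate and the synthetic signal are assumptions for the
# example, not K-EmoCon's actual file format.
import numpy as np

FS = 4                 # assumed sampling rate in Hz (e.g., an EDA-like signal)
WINDOW_SEC = 5         # annotation interval used by the K-EmoCon raters
samples_per_window = FS * WINDOW_SEC

signal = np.random.default_rng(0).normal(size=FS * 600)   # 10-minute debate
n_windows = len(signal) // samples_per_window
windows = signal[: n_windows * samples_per_window].reshape(n_windows, -1)

# One feature vector per 5-second window, ready to pair with the
# corresponding arousal-valence annotation index.
features = np.stack([windows.mean(axis=1), windows.std(axis=1)], axis=1)
print(features.shape)   # (120, 2) -> 120 annotated windows in 10 minutes
```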

Edge computing drives storage innovation while China edges its way into flash memory - SiliconANGLE News - UrIoTNews


The superpowers of the new economy are also the buzz words changing how the world interacts: Artificial intelligence, the "internet of things" and edge computing are the megatrends dominating the conversation on both a business and a personal level. "Practical everyday things are being done in AI," said David Floyer, co-host of theCUBE, SiliconANGLE Media's mobile livestreaming studio. "It's going from being a niche to being just everyday use, and its impact long-term is profound." TheCUBE co-host Dave Vellante joined Floyer during today's Micron Insight event in San Francisco. They discussed recent developments in storage and memory, as well as the challenges and opportunities facing Micron Technology Inc. in the marketplace (see the full interview with transcript here).

Neuromorphic Chipsets - Industry Adoption Analysis


Von Neumann Architecture vs. Neuromorphic Architecture. Neuromorphic architectures address challenges such as high power consumption, low speed, and other efficiency-related bottlenecks prevalent in the traditional von Neumann architecture.
Architecture bottleneck: the von Neumann design is constrained by the bus connecting the CPU and memory; neuromorphic architectures integrate processing and storage, removing that bottleneck.
Encoding scheme and signals: unlike the von Neumann architecture, with its sudden highs and lows in the form of binary encoding, neuromorphic chips offer a continuous analog transition in the form of spiking signals.
Devices and components: CPUs, memory, and logic gates on one side; artificial neurons and synapses on the other. Neuromorphic devices and components are more complex than logic gates.

Neuromorphic Chipsets vs. GPUs.
Basic operation: neuromorphic chips are based on the emulation of the biological nature of neurons on a chip; GPUs use parallel processing to perform mathematical operations.
Parallelism: inherent parallelism enabled by neurons and synapses, versus architectures that must be developed for parallel processing to handle multiple tasks simultaneously.
Data processing: high for both.
Power: low versus power-intensive.
Accuracy: low versus high.
Industry adoption: still in the experimental stage versus more accessible.
Software: new tools and methodologies need to be developed for programming neuromorphic hardware; GPUs are easier to program than neuromorphic silicon.
Memory: integrated memory and neural processing versus use of an external memory.
Limitations: neuromorphic chips are not suitable for precise calculations, pose programming-related challenges, and are difficult to build owing to the complexity of their interconnections; GPUs are thread-limited and suboptimal for massively parallel structures.

Neuromorphic chipsets are at an early stage of development and would take approximately 20 years to reach the same level as GPUs. The asynchronous operation of neuromorphic chips makes them more efficient than other processing units.