20 Free Online Books to Learn R and Data Science - Python and R Tips

#artificialintelligence

If you are interested in learning Data Science with R, but not interested in spending money on books, you are definitely in a very good space. There are a number of fantastic R/Data Science books and resources available online for free from top creators and scientists. Here are 20 such free (so […]


Urban Traffic Flow Forecast Based on FastGCRNN

arXiv.org Artificial Intelligence

Traffic forecasting is an important prerequisite for the application of intelligent transportation systems in urban traffic networks. Existing works adopted RNNs and CNNs/GCNs, among which GCRN is the state-of-the-art approach, to characterize the temporal and spatial correlation of traffic flows. However, it is hard to apply GCRN to large-scale road networks due to its high computational complexity. To address this problem, we propose to abstract the road network into a geometric graph and build a Fast Graph Convolution Recurrent Neural Network (FastGCRNN) to model the spatial-temporal dependencies of traffic flow. Specifically, we use a FastGCN unit to efficiently capture the topological relationship between roads and their surrounding roads in the graph, reducing the computational complexity through importance sampling; combine it with a GRU unit to capture the temporal dependency of traffic flow; and embed the spatiotemporal features into a Seq2Seq model based on the encoder-decoder framework. Experiments on large-scale traffic datasets illustrate that the proposed method can greatly reduce computational complexity and memory consumption while maintaining relatively high accuracy.
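
As a rough illustration of the architecture described above (a sampled graph convolution feeding a GRU), here is a minimal PyTorch sketch. The layer sizes, the degree-based sampling distribution, and the single-step readout are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a FastGCN-style graph convolution combined with a GRU,
# loosely following the FastGCRNN idea described above. Shapes and sampling
# choices are assumptions for demonstration only.
import torch
import torch.nn as nn

class SampledGraphConv(nn.Module):
    """Graph convolution that aggregates over a sampled subset of nodes
    (importance sampling in the spirit of FastGCN) to cut cost."""
    def __init__(self, in_dim, out_dim, num_samples=16):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.num_samples = num_samples

    def forward(self, x, adj):
        # x: (num_nodes, in_dim), adj: dense (num_nodes, num_nodes) weights
        num_nodes = x.size(0)
        # Sample nodes proportionally to their weighted degree as a simple
        # stand-in for the importance distribution.
        probs = adj.sum(dim=0) / adj.sum()
        idx = torch.multinomial(probs, min(self.num_samples, num_nodes),
                                replacement=False)
        # Aggregate only over the sampled columns, rescaled by 1/probability.
        agg = (adj[:, idx] / probs[idx]) @ x[idx] / idx.numel()
        return torch.relu(self.linear(agg))

class FastGCRNNSketch(nn.Module):
    """Spatial aggregation per time step, then a GRU over the sequence."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.gconv = SampledGraphConv(in_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, 1)

    def forward(self, seq, adj):
        # seq: (time_steps, num_nodes, in_dim)
        spatial = torch.stack([self.gconv(x_t, adj) for x_t in seq])
        out, _ = self.gru(spatial.transpose(0, 1))  # nodes as the batch dim
        return self.readout(out[:, -1])             # next-step flow per node

if __name__ == "__main__":
    T, N, F = 12, 30, 2
    model = FastGCRNNSketch(F, 32)
    print(model(torch.randn(T, N, F), torch.rand(N, N)).shape)  # (30, 1)
```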


Job2Vec: Job Title Benchmarking with Collective Multi-View Representation Learning

arXiv.org Artificial Intelligence

Job Title Benchmarking (JTB) aims at matching job titles with similar expertise levels across various companies. JTB could provide precise guidance and considerable convenience for both talent recruitment and job seekers in position and salary calibration/prediction. Traditional JTB approaches mainly rely on manual market surveys, which are expensive and labor-intensive. Recently, the rapid development of Online Professional Graphs has accumulated a large number of talent career records, which provides a promising basis for data-driven solutions. However, it is still a challenging task since (1) the job title and job transition (job-hopping) data are messy and contain many subjective and non-standard naming conventions for the same position (e.g., Programmer, Software Development Engineer, SDE, Implementation Engineer), (2) there is a large amount of missing title/transition information, and (3) each talent seeks only a limited number of jobs, which introduces incompleteness and randomness into the modeling of job transition patterns. To overcome these challenges, we aggregate all the records to construct a large-scale Job Title Benchmarking Graph (Job-Graph), where nodes denote job titles affiliated with specific companies and links denote the correlations between jobs. We reformulate JTB as a link prediction task over the Job-Graph, in which matched job titles should be linked. Along this line, we propose a collective multi-view representation learning method (Job2Vec) by examining the Job-Graph jointly in the (1) graph topology view, (2) semantic view, (3) job transition balance view, and (4) job transition duration view. We fuse the multi-view representations in an encoder-decoder paradigm to obtain a unified optimal representation for the task of link prediction. Finally, we conduct extensive experiments to validate the effectiveness of our proposed method.
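
The fuse-then-score idea can be sketched as follows: one embedding table per view, a small encode-and-decode fusion, and a dot-product link scorer. The view names, dimensions, and the scoring/loss choices below are assumptions for illustration, not the paper's Job2Vec model.

```python
# Illustrative sketch: fuse per-view node embeddings into one representation
# and score job-title links with it.
import torch
import torch.nn as nn

class MultiViewLinkScorer(nn.Module):
    def __init__(self, num_nodes, view_dims, fused_dim=64):
        super().__init__()
        # One embedding table per view (e.g. topology, semantic,
        # transition-balance, transition-duration).
        self.views = nn.ModuleList(
            [nn.Embedding(num_nodes, d) for d in view_dims])
        self.fuse = nn.Linear(sum(view_dims), fused_dim)  # encode
        self.decode = nn.Linear(fused_dim, fused_dim)     # decode

    def represent(self, nodes):
        per_view = [emb(nodes) for emb in self.views]
        fused = torch.tanh(self.fuse(torch.cat(per_view, dim=-1)))
        return self.decode(fused)

    def forward(self, src, dst):
        # Link score: higher means the two titles are more likely matched.
        return (self.represent(src) * self.represent(dst)).sum(dim=-1)

if __name__ == "__main__":
    model = MultiViewLinkScorer(num_nodes=1000, view_dims=[32, 32, 16, 16])
    src, dst = torch.tensor([1, 2, 3]), torch.tensor([4, 5, 6])
    scores = model(src, dst)
    loss = nn.functional.binary_cross_entropy_with_logits(
        scores, torch.ones_like(scores))  # treat these pairs as positives
    print(scores.shape, float(loss))
```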


Narrative Maps: An Algorithmic Approach to Represent and Extract Information Narratives

arXiv.org Artificial Intelligence

Narratives are fundamental to our perception of the world and are pervasive in all activities that involve the representation of events in time. Yet, modern online information systems do not incorporate narratives in their representation of events occurring over time. This article aims to bridge this gap, combining the theory of narrative representations with the data from modern online systems. We make three key contributions: a theory-driven computational representation of narratives, a novel extraction algorithm to obtain these representations from data, and an evaluation of our approach. In particular, given the effectiveness of visual metaphors, we employ a route map metaphor to design a narrative map representation. The narrative map representation illustrates the events and stories in the narrative as a series of landmarks and routes on the map. Each element of our representation is backed by a corresponding element from formal narrative theory, thus providing a solid theoretical background to our method. Our approach extracts the underlying graph structure of the narrative map using a novel optimization technique focused on maximizing coherence while respecting structural and coverage constraints. We showcase the effectiveness of our approach by performing a user evaluation to assess the quality of the representation, metaphor, and visualization. Evaluation results indicate that the Narrative Map representation is a powerful method to communicate complex narratives to individuals. Our findings have implications for intelligence analysts, computational journalists, and misinformation researchers.
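
A toy sketch of the general extraction idea (keep high-coherence links between time-ordered events under a simple structural constraint) is given below. The bag-of-words coherence measure and the bounded out-degree constraint are stand-ins; the paper's actual method solves a coherence-maximization optimization with coverage constraints.

```python
# Greedy toy extraction of a narrative graph from time-stamped event texts.
from math import sqrt
from collections import Counter, defaultdict

def coherence(text_a, text_b):
    """Cosine similarity between word-count vectors of two event texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def extract_narrative_graph(events, max_out_degree=2, min_coherence=0.1):
    """events: list of (timestamp, text); returns edges from earlier to
    later events, keeping only the most coherent links per source."""
    events = sorted(events)
    candidates = []
    for i, (_, e_i) in enumerate(events):
        for j in range(i + 1, len(events)):
            c = coherence(e_i, events[j][1])
            if c >= min_coherence:
                candidates.append((c, i, j))
    edges, out_degree = [], defaultdict(int)
    for c, i, j in sorted(candidates, reverse=True):  # most coherent first
        if out_degree[i] < max_out_degree:
            edges.append((i, j, round(c, 2)))
            out_degree[i] += 1
    return edges

if __name__ == "__main__":
    timeline = [
        (1, "storm hits coastal city"),
        (2, "city declares storm emergency"),
        (3, "relief funds approved for city"),
    ]
    print(extract_narrative_graph(timeline))
```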


Continuous Artificial Prediction Markets as a Syndromic Surveillance Technique

arXiv.org Artificial Intelligence

According to the World Health Organisation (WHO) [World Health Organization, 2013], the United Nations' directing and coordinating authority on health, public health surveillance is: the continuous, systematic collection, analysis and interpretation of health-related data needed for the planning, implementation, and evaluation of public health practice. Public health surveillance practice has evolved over time. Although it was limited to pen and paper at the beginning of the 20th century, it is now facilitated by huge advances in informatics. Information technology enhancements have changed the traditional approaches to capturing, storing, sharing and analysing data, and have resulted in efficient and reliable health surveillance techniques [Lombardo and Buckeridge, 2007]. The main objective and challenge of a health surveillance system is the earliest possible detection of a disease outbreak within a society for the purpose of protecting community health. In the past, before the widespread deployment of computers, health surveillance was based on reports received from medical care centres and laboratories.


Intelligent Radio Signal Processing: A Contemporary Survey

arXiv.org Artificial Intelligence

Intelligent signal processing for wireless communications is a vital task in modern wireless systems, but it faces new challenges because of network heterogeneity, diverse service requirements, a massive number of connections, and various radio characteristics. Owing to recent advancements in big data and computing technologies, artificial intelligence (AI) has become a useful tool for radio signal processing and has enabled the realization of intelligent radio signal processing. This survey covers four intelligent signal processing topics for the wireless physical layer, including modulation classification, signal detection, beamforming, and channel estimation. In particular, each theme is presented in a dedicated section, starting with the most fundamental principles, followed by a review of up-to-date studies and a summary. To provide the necessary background, we first present a brief overview of AI techniques such as machine learning, deep learning, and federated learning. Finally, we highlight a number of research challenges and future directions in the area of intelligent radio signal processing. We expect this survey to be a good source of information for anyone interested in intelligent radio signal processing, and the perspectives we provide therein will stimulate many more novel ideas and contributions in the future.
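
To make one of the four topics concrete, here is a minimal sketch of a deep-learning modulation classifier operating on raw I/Q samples. The network shape, signal length, and class count are assumptions for demonstration and do not correspond to any specific model from the survey.

```python
# Toy CNN for modulation classification on raw in-phase/quadrature samples.
import torch
import torch.nn as nn

class IQModulationClassifier(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3),  # 2 channels: I and Q
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, iq):
        # iq: (batch, 2, signal_len) raw I/Q frames
        return self.classifier(self.features(iq).squeeze(-1))

if __name__ == "__main__":
    batch = torch.randn(8, 2, 128)         # 8 synthetic I/Q frames
    logits = IQModulationClassifier()(batch)
    print(logits.shape)                    # torch.Size([8, 4])
```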


Event Prediction in the Big Data Era: A Systematic Survey

arXiv.org Artificial Intelligence

Events are occurrences at specific locations and times, with specific semantics, that nontrivially impact either our society or nature, such as civil unrest, system failures, and epidemics. It is highly desirable to be able to anticipate the occurrence of such events in advance in order to reduce the potential social upheaval and damage caused. Event prediction, which has traditionally been prohibitively challenging, is now becoming a viable option in the big data era and is thus experiencing rapid growth. There is a large amount of existing work that focuses on addressing the challenges involved, including heterogeneous multi-faceted outputs, complex dependencies, and streaming data feeds. Most existing event prediction methods were initially designed to deal with specific application domains, though the techniques and evaluation procedures utilized are usually generalizable across different domains. However, it is imperative yet difficult to cross-reference the techniques across different domains, given the absence of a comprehensive literature survey for event prediction. This paper aims to provide a systematic and comprehensive survey of the technologies, applications, and evaluations of event prediction in the big data era. First, a systematic categorization and summary of existing techniques is presented, which facilitates domain experts' searches for suitable techniques and helps model developers consolidate their research at the frontiers. Then, a comprehensive categorization and summary of the major application domains is provided. Evaluation metrics and procedures are summarized and standardized to unify the understanding of model performance among stakeholders, model developers, and domain experts in various application domains. Finally, open problems and future directions for this promising and important domain are elucidated and discussed.


On the Nature and Types of Anomalies: A Review

arXiv.org Artificial Intelligence

Anomalies are occurrences in a dataset that are in some way unusual and do not fit the general patterns. The concept of the anomaly is generally ill-defined and perceived as vague and domain-dependent. Moreover, no comprehensive and concrete overviews of the different types of anomalies have hitherto been published. By means of an extensive literature review this study therefore offers the first theoretically principled and domain-independent typology of data anomalies, and presents a full overview of anomaly types and subtypes. To concretely define the concept of the anomaly and its different manifestations the typology employs four dimensions: data type, cardinality of relationship, data structure and data distribution. These fundamental and data-centric dimensions naturally yield 3 broad groups, 9 basic types and 61 subtypes of anomalies. The typology facilitates the evaluation of the functional capabilities of anomaly detection algorithms, contributes to explainable data science, and provides insights into relevant topics such as local versus global anomalies.
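
The local-versus-global distinction mentioned at the end can be illustrated with a tiny numeric example: a value that looks unremarkable against the whole dataset but is clearly anomalous within its own subgroup. The data and the z-score criterion below are invented for illustration.

```python
# Toy example of a local anomaly that is not a global anomaly.
from statistics import mean, stdev

def zscore(value, values):
    return (value - mean(values)) / stdev(values)

# Salaries in two departments (hypothetical numbers).
dept_a = [30, 31, 29, 32, 30, 31]   # junior team
dept_b = [90, 92, 88, 91, 89, 93]   # senior team
all_salaries = dept_a + dept_b

suspect = 55  # an employee in dept_a
print("global z-score:", round(zscore(suspect, all_salaries + [suspect]), 2))
print("local  z-score:", round(zscore(suspect, dept_a + [suspect]), 2))
# The global score is small (55 sits between the two clusters), while the
# score within dept_a is much larger: a local anomaly, not a global one.
```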


Cyber Threat Intelligence for Secure Smart City

arXiv.org Artificial Intelligence

The smart city improves the quality of life for its citizens as cities become more intelligent. These cities provide services through technologies such as the Internet of Things (IoT) and Cyber-Physical Systems (CPS), which are connected through a network to monitor, control and automate city services [1]. Smart city technologies exchange and process different types of data in order to provide these services. A smart city contains a huge number of sensors that continuously generate a tremendous amount of sensitive data, such as location coordinates and credit card numbers; these data can be sensitive and critical, which imposes security and privacy requirements. The data are transmitted through the network to data centers for processing and analysis in order to take appropriate decisions, such as managing traffic and energy in the smart city [6][3]. However, the characteristics of smart city technologies such as the IoT and CPS, with resource limitations in power, memory, and processing, impose challenges. Sensors that generate data and devices that handle the data in a smart city have vulnerabilities that can be exploited by cybercriminals; as a result, different attacks target smart city infrastructure, including Distributed Denial of Service (DDoS) attacks launched from infected IoT devices.


Community detection and Social Network analysis based on the Italian wars of the 15th century

arXiv.org Artificial Intelligence

In this contribution we study social network modelling using human interaction as a basis. To do so, we propose a new set of functions, affinities, designed to capture the nature of the local interactions between each pair of actors in a network. Using these functions, we develop a new community detection algorithm, Borgia Clustering, in which communities naturally arise from the multi-agent interactions in the network. We also discuss the effects of community size and scale in this setting, as well as how we cope with the additional complexity that arises when large communities emerge. Finally, we compare our community detection solution with other representative algorithms, finding favourable results.
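
The affinity-driven idea can be illustrated with a toy sketch: derive a normalized affinity from interaction counts, let each actor attach to its most-affine neighbor, and read communities off the resulting links. This is an illustrative stand-in, not the Borgia Clustering algorithm from the paper.

```python
# Toy affinity-based grouping of actors from pairwise interaction counts.
from collections import defaultdict

def affinities(interactions):
    """interactions: {(a, b): count}; returns a's normalized affinity toward b."""
    totals = defaultdict(int)
    for (a, _), w in interactions.items():
        totals[a] += w
    return {(a, b): w / totals[a] for (a, b), w in interactions.items()}

def communities(interactions):
    aff = affinities(interactions)
    # Each actor links to the neighbor it is most strongly affine to.
    best = {}
    for (a, b), v in aff.items():
        if a not in best or v > aff[(a, best[a])]:
            best[a] = b
    # Union the "strongest tie" links into groups.
    parent = {a: a for a in set(best) | set(best.values())}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for a, b in best.items():
        parent[find(a)] = find(b)
    groups = defaultdict(set)
    for a in parent:
        groups[find(a)].add(a)
    return list(groups.values())

if __name__ == "__main__":
    talks = {("Ann", "Bob"): 9, ("Bob", "Ann"): 7, ("Bob", "Cara"): 1,
             ("Cara", "Dan"): 8, ("Dan", "Cara"): 6, ("Dan", "Ann"): 1}
    print(communities(talks))  # two groups: {Ann, Bob} and {Cara, Dan}
```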