Command Line Interface (CLI) for Deep Learning Applications

#artificialintelligence

You have probably seen a movie where the IT guy hacks a system by typing commands into a black window and thought, "How cool is that!" In reality, things are not that easy to hack, but we do have basic commands for interacting with the computer through what is called the command-line interface (CLI). The command-line interface is a program on your computer that lets you create and delete files, run programs, and navigate through folders and files. On macOS and Linux systems it is called the Terminal, and on Windows it is the Command Prompt. The CLI is not just a fancy way to interact with your computer.


Modern Computing: A Short History, 1945-2022

#artificialintelligence

Inspired by A New History of Modern Computing by Thomas Haigh and Paul E. Ceruzzi, although the selection of key events in the journey from ENIAC to Tesla, from Data Processing to Big Data, is mine. April 1945: John von Neumann's "First Draft of a Report on the EDVAC," often called the founding document of modern computing, defines the stored-program concept. July 1945: Vannevar Bush publishes "As We May Think," in which he envisions the "Memex," a memory-extension device serving as a large personal repository of information that could be instantly retrieved through associative links. Most home computer users in the 1970s were hobbyists who designed and assembled their own machines. The Apple I, devised in a bedroom by Steve Wozniak, Steven Jobs and Ron Wayne, was a basic circuit board to which enthusiasts would add display units and keyboards. It was the first computer made by Apple Computer Inc., which became one of the fastest growing ... [ ] companies in history, launching a number of innovative and influential computer hardware and software products.


Lambda and Razer Launch Laptop for Developing Deep-Learning Applications – EnterpriseTalk

#artificialintelligence

Lambda has launched the Razer x Lambda Tensorbook. The laptops include Nvidia GPUs, 64GB of RAM, Ubuntu Linux, and Lambda's deep-learning software stack.


Deep Learning setup (TensorFlow & Keras) on Windows 10 + Ubuntu (WSL)

#artificialintelligence

When it comes to working with machine/deep learning and Python, most people recommend using a Unix-based environment. Machine learning tools can be installed and configured easily on Linux, letting you focus your efforts on developing and improving your code instead of wasting time solving installation conflicts. Windows users suffer this all the time, and even following references is difficult because most of them assume a Unix operating system. To avoid this problem, I propose using the Windows Subsystem for Linux (WSL), which allows you to install a complete Ubuntu terminal environment in minutes on your Windows machine, so you can develop cross-platform applications without leaving Windows [1].
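
Once WSL and the frameworks are in place, a quick sanity check from inside the Ubuntu terminal confirms that TensorFlow and Keras import correctly and shows whether a GPU is visible. This is a minimal sketch, assuming TensorFlow 2.x was installed inside the WSL environment (e.g. via pip); the script name is illustrative:

```python
# verify_setup.py -- sanity check for a TensorFlow/Keras install under WSL.
# Assumes TensorFlow 2.x was installed inside the WSL Ubuntu environment.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

# Build and run a tiny Keras model to confirm the stack works end to end.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
print(model.predict(tf.random.normal((2, 4))))
```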


Lambda Stack: an AI software stack that's always up-to-date

#artificialintelligence

Lambda Stack provides a one-line installation and managed upgrade path for PyTorch, TensorFlow, CUDA, cuDNN, and NVIDIA drivers. No more futzing with your Linux AI software; Lambda Stack is here. To install Lambda Stack on your desktop, run this command on a fresh Ubuntu installation (20.04, 18.04, or 16.04). For servers, see the server installation section below. Lambda Stack can run on your laptop, workstation, server, or cluster, inside a container, or on the cloud, and comes pre-installed on every Lambda GPU Cloud instance.
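
After installation, you can confirm from Python that each component of the stack is present and which versions were installed. A minimal sketch, assuming PyTorch and TensorFlow are available system-wide (as Lambda Stack sets up); the script name is illustrative:

```python
# check_stack.py -- report versions of the main Lambda Stack components.
# Assumes PyTorch and TensorFlow are installed (e.g. by Lambda Stack).
import torch
import tensorflow as tf

print("PyTorch:", torch.__version__)
print("TensorFlow:", tf.__version__)
print("CUDA (built into PyTorch):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("GPU available to PyTorch:", torch.cuda.is_available())
```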


YOLOv4 Object Detection Course

#artificialintelligence

I started out wanting to learn AI object detection in computer vision... Even though I have a master's degree in electronic engineering (M.Eng), it was still challenging for me to figure out. I had a lot of questions, like: If Ubuntu, which version, 16.04 or 18.04? What kernel do I need? If I am training, what format does my dataset need to be in?
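
On that last question: for Darknet-based YOLOv4 training, annotations are plain-text files, one per image, with one line per object giving a class index followed by the box centre and size, all normalised by the image dimensions. A small sketch of the conversion from pixel coordinates (the function name and the example values are illustrative):

```python
# to_yolo.py -- convert a pixel-space bounding box to Darknet/YOLO format.
# Each image gets a .txt file with one line per object:
#   <class_id> <x_center> <y_center> <width> <height>
# where the last four values are normalised to [0, 1] by the image size.

def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Return one Darknet annotation line for a box given in pixels."""
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: a box from (50, 80) to (250, 180) in a 640x480 image, class 0.
print(to_yolo_line(0, 50, 80, 250, 180, 640, 480))
```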


SemiRetro: Semi-template framework boosts deep retrosynthesis prediction

arXiv.org Artificial Intelligence

Recently, template-based (TB) and template-free (TF) molecule graph learning methods have shown promising results for retrosynthesis. TB methods are more accurate because they use pre-encoded reaction templates, while TF methods are more scalable because they decompose retrosynthesis into subproblems, i.e., center identification and synthon completion. To combine the advantages of both TB and TF, we suggest breaking a full template into several semi-templates and embedding them into the two-step TF framework. Since many semi-templates are duplicated, template redundancy can be reduced while the essential chemical knowledge is still preserved to facilitate synthon completion. We call our method SemiRetro, introduce a new GNN layer (DRGAT) to enhance center identification, and propose a novel self-correcting module to improve semi-template classification. Experimental results show that SemiRetro significantly outperforms both existing TB and TF methods. In scalability, SemiRetro covers 98.9% of the data using 150 semi-templates, while the previous template-based GLN requires 11,647 templates to cover 93.3% of the data. In top-1 accuracy, SemiRetro exceeds the template-free G2G by 4.8% (class known) and 6.0% (class unknown). Besides, SemiRetro has better training efficiency than existing methods.


LTC-SUM: Lightweight Client-driven Personalized Video Summarization Framework Using 2D CNN

arXiv.org Artificial Intelligence

This paper proposes a novel lightweight thumbnail container-based summarization (LTC-SUM) framework for full feature-length videos. The framework generates a personalized keyshot summary for concurrent users using the computational resources of the end-user device. State-of-the-art methods that acquire and process entire video data to generate video summaries are highly computationally intensive. In this regard, the proposed LTC-SUM method uses lightweight thumbnails to handle the complex process of detecting events. This significantly reduces computational complexity and improves communication and storage efficiency by resolving computational and privacy bottlenecks in resource-constrained end-user devices. These improvements were achieved by designing a lightweight 2D CNN model that extracts features from thumbnails, which helps select and retrieve only a handful of specific segments. Extensive quantitative experiments on a set of 18 full feature-length videos (approximately 32.9 h in duration) showed that the proposed method is significantly more computationally efficient than state-of-the-art methods on the same end-user device configurations. Joint qualitative assessments by 56 participants showed that they gave higher ratings to the summaries generated using the proposed method. To the best of our knowledge, this is the first attempt at designing a fully client-driven personalized keyshot video summarization framework using thumbnail containers for feature-length videos.
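
The abstract's key architectural idea, a small 2D CNN that maps each thumbnail to a compact feature vector for event detection, can be sketched as below. This is an illustrative stand-in, not the authors' LTC-SUM model; the layer sizes, input resolution, and feature dimension are all assumptions:

```python
# thumbnail_cnn.py -- illustrative lightweight 2D CNN for thumbnail features.
# NOT the LTC-SUM architecture from the paper; all hyperparameters here are
# assumed values chosen only to show the overall shape of such an encoder.
import tensorflow as tf

def build_thumbnail_encoder(input_shape=(128, 128, 3), feature_dim=64):
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(feature_dim),  # compact per-thumbnail feature
    ])

encoder = build_thumbnail_encoder()
features = encoder(tf.random.uniform((8, 128, 128, 3)))  # batch of 8 thumbnails
print(features.shape)  # (8, 64)
```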


Cross-Language Binary-Source Code Matching with Intermediate Representations

arXiv.org Artificial Intelligence

Binary-source code matching plays an important role in many security and software engineering tasks such as malware detection, reverse engineering, and vulnerability assessment. Several approaches have been proposed for binary-source code matching that jointly learn embeddings of binary code and source code in a common vector space. Despite much effort, existing approaches focus on matching binary code and source code written in a single programming language. In practice, however, software applications are often written in different programming languages to cater for different requirements and computing platforms. Matching binary and source code across programming languages introduces additional challenges when maintaining multi-language and multi-platform applications. To this end, this paper formulates the problem of cross-language binary-source code matching and develops a new dataset for it. We present a novel approach, XLIR, a Transformer-based neural network that learns intermediate representations for both binary and source code. To validate the effectiveness of XLIR, comprehensive experiments are conducted on two tasks, cross-language binary-source code matching and cross-language source-source code matching, on top of our curated dataset. Experimental results and analysis show that our proposed XLIR with intermediate representations significantly outperforms other state-of-the-art models in both tasks.
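
The underlying retrieval setup, embedding two code fragments with a shared encoder and comparing them in a common vector space, can be illustrated with a toy sketch. This is not XLIR; the trivial encoder below merely stands in for the paper's Transformer over intermediate representations, and all sizes and token IDs are made up:

```python
# match_sketch.py -- toy sketch of matching code in a shared embedding space.
# NOT XLIR: a bag-of-tokens encoder stands in for the paper's Transformer,
# and the token sequences below are placeholder values.
import tensorflow as tf

VOCAB, DIM = 1000, 64

# Shared encoder mapping a token-ID sequence to one embedding vector.
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(None,), dtype="int32"),
    tf.keras.layers.Embedding(VOCAB, DIM),
    tf.keras.layers.GlobalAveragePooling1D(),
])

def cosine_similarity(a, b):
    a = tf.nn.l2_normalize(a, axis=-1)
    b = tf.nn.l2_normalize(b, axis=-1)
    return tf.reduce_sum(a * b, axis=-1)

# Placeholder token IDs for an IR lifted from a binary and from source code.
binary_ir = tf.constant([[1, 5, 9, 3]])
source_ir = tf.constant([[1, 5, 9, 4]])
print(float(cosine_similarity(encoder(binary_ir), encoder(source_ir))))
```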


Forecasting: theory and practice

arXiv.org Machine Learning

Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.