In recent years, there has been a surge in demand for AI-driven big data analysis in various business fields. AI is also expected to help detect anomalies in data, revealing things like unauthorized attempts to access networks or abnormalities in medical data such as thyroid values or arrhythmia readings. The data used in many business operations is high-dimensional. As the number of dimensions increases, the complexity of the calculations required to accurately characterize the data grows exponentially, a phenomenon widely known as the "Curse of Dimensionality"(1). In recent years, reducing the dimensionality of input data with deep learning has emerged as a promising way to mitigate this problem. However, because the dimensionality is reduced without considering the data's distribution and probability of occurrence after the reduction, the characteristics of the data are not accurately captured, limiting the AI's recognition accuracy and leading to misjudgments (Figure 1). Solving these problems and accurately capturing the distribution and probability of high-dimensional data remain important issues in the AI field.
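The "Curse of Dimensionality" mentioned above can be illustrated with a minimal sketch (not part of Fujitsu's work): as dimensionality grows, pairwise distances between random points concentrate around a common value, so nearest and farthest neighbors become nearly indistinguishable and distance-based characterization of the data degrades.

```python
import numpy as np

# Distance concentration: one symptom of the curse of dimensionality.
# We draw random points and measure how much the distances from one
# point to all the others spread out, relative to their mean.
rng = np.random.default_rng(0)

for dim in (2, 100, 10_000):
    points = rng.random((200, dim))                      # 200 uniform random points
    dists = np.linalg.norm(points[1:] - points[0], axis=1)
    spread = (dists.max() - dists.min()) / dists.mean()  # relative spread
    print(f"dim={dim:6d}  relative spread of distances: {spread:.3f}")
```

The relative spread shrinks sharply as `dim` grows, which is why naive distance-based methods lose discriminative power in high dimensions.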
Mitsui O.S.K. Lines, Ltd. (MOL) today announced that it has teamed up with Fujitsu Laboratories Ltd. and Tokyo University of Marine Science and Technology to verify the accuracy of a technology that estimates vessel performance at sea by applying Fujitsu's artificial intelligence (AI) technology, "FUJITSU Human Centric AI Zinrai." The project is part of MOL's initiative to assess the effectiveness of AI technology, and aims to reduce fuel consumption and vessels' environmental impact. MOL provided actual voyage data collected from its fleet in operation to Fujitsu Laboratories, which, together with Tokyo University of Marine Science and Technology, analyzed the data using their jointly developed machine learning method. Using Fujitsu's unique AI technology and high-dimensional statistical analysis, the partners learned the correlations among the items of operational data and established a technology that estimates vessel performance. They then estimated ship speed from the data items other than speed and compared the estimates against actual operational data to assess the accuracy of the estimation.
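The verification idea above, estimating ship speed from the other operational variables and comparing the estimate with the logged speed, can be sketched as follows. This is a hypothetical illustration, not the partners' actual method: the variable names, the synthetic data, and the plain least-squares model are all invented for the example.

```python
import numpy as np

# Hypothetical sketch: fit a model that predicts ship speed from other
# operational measurements, then compare predictions with logged speed.
rng = np.random.default_rng(42)

n = 500
rpm = rng.uniform(40, 80, n)       # main-engine revolutions per minute (invented)
wind = rng.uniform(0, 15, n)       # headwind speed, m/s (invented)
draft = rng.uniform(8, 12, n)      # draft, m (invented)
# Synthetic "logged" speed: a linear relationship plus measurement noise
speed = 0.25 * rpm - 0.12 * wind - 0.3 * draft + rng.normal(0, 0.2, n)

X = np.column_stack([rpm, wind, draft, np.ones(n)])  # features + intercept
coef, *_ = np.linalg.lstsq(X, speed, rcond=None)     # least-squares fit

estimated = X @ coef
rmse = np.sqrt(np.mean((estimated - speed) ** 2))
print(f"RMSE between estimated and logged speed: {rmse:.3f} knots")
```

In the actual project, comparing such estimates against real operation data is what allows the allowance between designed and actual speed to be assessed.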
IT giant Fujitsu has been developing a series of in-house technologies aimed at the burgeoning market of artificial intelligence and machine learning. Although the company has made less fanfare of its ambitions in this regard than companies like IBM, Google, and Microsoft, the Japanese multinational seems intent on expanding its datacenter business into this new high-value segment. The step-up in AI focus has been especially noticeable over the past several months, with hardly a week going by without an announcement of a new technology or use case. In fact, Fujitsu has issued no fewer than 15 press releases on AI or machine learning since the beginning of 2016. Most are the result of technologies developed at Fujitsu Laboratories.
Fujitsu Laboratories Ltd. and Fujitsu Research and Development Center Co., Ltd. have developed an AI technology for video-based behavioral analysis. Dubbed "Actlyzer", the technology can recognize a variety of subtle and complex human activities without relying on large amounts of training data. Deep learning technologies conventionally demand large amounts of video data to train systems to recognize individual behaviors, and video data must be collected from scratch for each new behavior to be added. This time-consuming process means that it can often take several months to introduce functional AI into the field. Actlyzer takes advantage of the fact that human behaviors generally consist of combinations of basic movements and actions, recognizing complex behaviors as combinations of these basic elements rather than requiring new training video for each one.
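The compositional idea, defining a new complex behavior as a rule over already-recognized basic actions instead of training on new video, can be sketched as below. The behavior names and rules are invented for illustration and are not Fujitsu's actual definitions.

```python
# Hypothetical sketch of composing basic actions into complex behaviors.
# The basic actions are assumed to come from a pretrained per-frame
# recognizer; adding a new complex behavior only requires a new rule.

# Basic actions detected in the current video segment (invented example)
detected_actions = {"crouching", "reaching_out", "looking_around"}

# Complex behaviors defined as required sets of basic actions (invented)
complex_behaviors = {
    "picking_up_item": {"crouching", "reaching_out"},
    "suspicious_lookout": {"looking_around", "standing_still"},
}

def recognize(actions: set[str]) -> list[str]:
    """Return every complex behavior whose required basic actions all fired."""
    return [name for name, required in complex_behaviors.items()
            if required <= actions]  # subset test: all required actions present

print(recognize(detected_actions))  # -> ['picking_up_item']
```

Because rules are declarative, a new behavior can be added in minutes rather than the months a retraining cycle would take.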
Locality-sensitive hashing converts high-dimensional feature vectors, such as images and speech, into bit arrays and allows high-speed similarity calculation with the Hamming distance. One hashing scheme maps feature vectors to bit arrays according to the signs of the inner products between the feature vectors and the normal vectors of hyperplanes placed in the feature space. This hashing can be seen as a discretization of the feature space by hyperplanes. If labels for the data are given, the hyperplanes can be determined with learning algorithms. However, many proposed learning methods do not consider the hyperplanes' offsets. Ignoring the offsets decreases the number of partitioned regions, and the correlation between Hamming distances and Euclidean distances becomes weak. In this paper, we propose a lift map that converts learning algorithms without offsets into ones that take the offsets into account. With this method, learning methods without offsets produce discretizations of the space as if they took the offsets into account. To evaluate the proposed method, we applied it to several high-dimensional feature data sets and studied the relationship between the statistical characteristics of the data, the number of hyperplanes, and the effect of the proposed method.
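The basic hyperplane hashing scheme described above, with offsets, can be sketched as follows. This is a minimal illustration using random hyperplanes; the paper's learned hyperplanes and the proposed lift map are not reproduced here. Each bit is the sign of the inner product with a normal vector `w` minus an offset `b`.

```python
import numpy as np

# Hyperplane-based locality-sensitive hashing with offsets (random
# projections; illustrative only). Nearby vectors fall on the same side
# of most hyperplanes, so their bit arrays have a small Hamming distance.
rng = np.random.default_rng(0)

dim, n_bits = 64, 32
W = rng.normal(size=(n_bits, dim))       # hyperplane normal vectors
b = rng.normal(scale=0.1, size=n_bits)   # hyperplane offsets

def hash_bits(x: np.ndarray) -> np.ndarray:
    """Map a feature vector to a bit array: bit_i = [ <w_i, x> - b_i > 0 ]."""
    return (W @ x - b > 0).astype(np.uint8)

def hamming(h1: np.ndarray, h2: np.ndarray) -> int:
    """Hamming distance between two bit arrays."""
    return int(np.count_nonzero(h1 != h2))

x = rng.normal(size=dim)
near = x + 0.01 * rng.normal(size=dim)   # small perturbation of x
far = rng.normal(size=dim)               # unrelated vector

print(hamming(hash_bits(x), hash_bits(near)))  # small
print(hamming(hash_bits(x), hash_bits(far)))   # large, around n_bits / 2
```

Without the offset term `b`, every hyperplane would pass through the origin, which halves the number of regions each hyperplane can create and is exactly the limitation the proposed lift map addresses.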