fastfood
For [23] we refer to the paper you pointed to. 5. To Reviewer_25: the overall conceptual motivation for the paper is somewhat weak... The Nystrom approximation can be used to approximate the kernel matrix and speed up kernel machines, but Table 1 shows that its performance is suboptimal even at rank=200 (see the fifth column). In that case, making one prediction requires 200 inner-product computations, which is too slow for many real-time systems (e.g., web applications, robotic applications, ...). The state-of-the-art Nystrom method is therefore not good enough; we reduce the prediction cost to 10-20 inner products while achieving better classification accuracy, which is a big improvement. Also, as mentioned in point 1 above, although we optimize for prediction time, our method still has fast training time. We agree that the pseudo landmark point technique could potentially be applied to speed up training as well, and it is an interesting research direction.
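To make the prediction-cost argument concrete, here is a minimal Nystrom sketch in NumPy. The data, landmark count `m`, RBF bandwidth `gamma`, and uniform landmark sampling are all illustrative assumptions, not the setup from the paper:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.01):
    # Pairwise RBF kernel values between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))   # toy training set (not the paper's data)
m = 50                               # number of landmark points (the "rank")

# Sample m landmark points uniformly from the training set.
landmarks = X[rng.choice(len(X), size=m, replace=False)]

# Nystrom feature map: phi(z) = W^{-1/2} k(landmarks, z), where
# W = k(landmarks, landmarks); a pseudo-inverse guards tiny eigenvalues.
W = rbf_kernel(landmarks, landmarks)
s, U = np.linalg.eigh(W)
s_inv_sqrt = np.where(s > 1e-8, 1.0 / np.sqrt(np.clip(s, 1e-8, None)), 0.0)
W_inv_sqrt = U @ np.diag(s_inv_sqrt) @ U.T

def nystrom_features(Z):
    # Each prediction needs m kernel evaluations against the landmarks --
    # the per-prediction inner-product cost the rebuttal refers to.
    return rbf_kernel(Z, landmarks) @ W_inv_sqrt

K_approx = nystrom_features(X) @ nystrom_features(X).T
K_exact = rbf_kernel(X, X)
err = np.abs(K_approx - K_exact).mean()
```

With a rank of 200 instead of 50, every prediction would cost 200 such kernel evaluations, which is the bottleneck the rebuttal argues against.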
The authors propose a method (SRHT) to accelerate ridge regression when the number of features p is much greater than the number of samples n (p >> n). The main idea is to reduce the dimensionality of the input design matrix X so that computing XX^T becomes faster, from O(n^2 p) to O(np log n). This is done via subsampled randomized Walsh-Hadamard transforms, which can be computed in loglinear time. The research idea is of high quality and great practical use. The paper is well written and technically correct.
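The speed-up the review describes can be sketched with a generic subsampled randomized Hadamard transform. This is a hedged illustration, not necessarily the paper's exact construction; the sketch size `k` and all dimensions are made up for the example (the Hadamard multiply is dense here for clarity, whereas the fast Walsh-Hadamard transform would supply the log factor):

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
n, p, k = 20, 256, 64          # samples, features (p a power of 2), sketch size
X = rng.standard_normal((n, p))

# SRHT sketch X @ Theta.T with Theta = sqrt(p/k) * R H D:
#   D -- random +-1 sign diagonal, H -- orthonormal Walsh-Hadamard matrix,
#   R -- uniform subsampling of k rows.
D = rng.choice([-1.0, 1.0], size=p)
H = hadamard(p) / np.sqrt(p)                 # dense for clarity; FWHT is fast
rows = rng.choice(p, size=k, replace=False)
X_sketch = np.sqrt(p / k) * (X * D) @ H[:, rows]   # n x k

# XX^T is approximated by the Gram matrix of the k-dimensional sketch.
G_exact = X @ X.T
G_approx = X_sketch @ X_sketch.T
rel_err = np.linalg.norm(G_approx - G_exact) / np.linalg.norm(G_exact)
```

The sketch is unbiased, E[X_sketch @ X_sketch.T] = XX^T, and its accuracy improves as k grows.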
Artificial Intelligence in Restaurants: Should We Be Terrified or Excited?
When Artificial Intelligence (AI) is brought up, it's hard to shake the rising fear of robots outsmarting humans and making us the subservient species. We can thank Hollywood for the assumption that AI will bring violent, uncontrollable robots and a loss of control as these new machines begin to think autonomously and more efficiently than we do. AI will soon affect every dining experience. Will robots completely replace humans, from servers to managers, as AI advances in the food and hospitality industry? Restaurants and food service businesses truly have more to anticipate than to fear as smart, self-learning systems are applied to bring food lovers a better experience and meal.
Random Feature Mapping with Signed Circulant Matrix Projection
Feng, Chang (Tianjin University) | Hu, Qinghua (Tianjin University) | Liao, Shizhong (Tianjin University)
Random feature mappings have been successfully used to approximate non-linear kernels and scale up kernel methods. Some work aims at speeding up the feature mappings, but at the cost of increased variance in the approximation. In this paper, we propose a novel random feature mapping method that uses a signed Circulant Random Matrix (CRM) instead of an unstructured random matrix to project input data. The signed CRM has linear space complexity, since the whole matrix can be recovered from one column of the CRM, and ensures loglinear time complexity for computing the feature mapping using the Fast Fourier Transform (FFT). Theoretically, we prove that approximating the Gaussian kernel with our mapping method is unbiased and does not increase the variance. Experimentally, we demonstrate that our proposed mapping method is time- and space-efficient while retaining accuracy similar to state-of-the-art random feature mapping methods. Our proposed random feature mapping method can be implemented easily and makes kernel methods scalable and practical for large-scale training and prediction problems.
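The signed-circulant idea, as I read the abstract, can be sketched as follows: one stored Gaussian column defines the whole circulant matrix (linear space), the product is computed via FFT (loglinear time), and a Rademacher sign vector flips each output coordinate. The bandwidth, dimensions, and cos/sin feature map below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 128          # input dimension = number of projections (illustrative)
bw = 1.0         # Gaussian kernel bandwidth (illustrative)

# One stored Gaussian column defines the whole circulant matrix; a
# Rademacher vector signs each output coordinate.
w = rng.standard_normal(d) / bw
signs = rng.choice([-1.0, 1.0], size=d)

def circulant_project(x):
    # Circulant multiply via FFT in O(d log d): C x = ifft(fft(w) * fft(x)).
    return signs * np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))

def features(x):
    # Random Fourier feature map z(x) = [cos(Px), sin(Px)] / sqrt(d).
    p = circulant_project(x)
    return np.concatenate([np.cos(p), np.sin(p)]) / np.sqrt(d)

x = 0.1 * rng.standard_normal(d)
y = 0.1 * rng.standard_normal(d)
k_exact = np.exp(-np.linalg.norm(x - y) ** 2 / (2 * bw ** 2))
k_approx = features(x) @ features(y)
```

Each row of the circulant matrix is a shifted copy of `w`, so each projection is marginally Gaussian and the resulting Gaussian-kernel estimate is unbiased, matching the abstract's claim.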
Fastfood: Approximate Kernel Expansions in Loglinear Time
Le, Quoc Viet, Sarlos, Tamas, Smola, Alexander Johannes
Despite their successes, what makes kernel methods difficult to use in many large scale problems is the fact that storing and computing the decision function is typically expensive, especially at prediction time. In this paper, we overcome this difficulty by proposing Fastfood, an approximation that accelerates such computation significantly. Key to Fastfood is the observation that Hadamard matrices, when combined with diagonal Gaussian matrices, exhibit properties similar to dense Gaussian random matrices. Yet unlike the latter, Hadamard and diagonal matrices are inexpensive to multiply and store. These two matrices can be used in lieu of Gaussian matrices in the Random Kitchen Sinks proposed by Rahimi and Recht (2009), thereby speeding up the computation for a large range of kernel functions. Specifically, Fastfood requires O(n log d) time and O(n) storage to compute n non-linear basis functions in d dimensions, a significant improvement over O(nd) computation and storage, without sacrificing accuracy. Our method applies to any translation-invariant and any dot-product kernel, including the popular RBF and polynomial kernels. We prove that the approximation is unbiased and has low variance. Experiments show that we achieve similar accuracy to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory. These improvements, especially in terms of memory usage, make kernel methods more practical for applications that have large training sets and/or require real-time prediction.
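The matrix stack the abstract describes can be sketched in NumPy. This is a hedged illustration of the S H G Pi H B construction: dense Hadamard multiplies are used for clarity where the fast Walsh-Hadamard transform would supply the O(n log d) cost, and the exact constants in the scaling matrix S are my assumption (only its chi-distributed row-norm role comes from the abstract's "similar to dense Gaussian" observation):

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
d = 64           # input dimension, padded to a power of 2 (illustrative)
sigma = 1.0      # Gaussian kernel bandwidth (illustrative)

H = hadamard(d)                         # +-1 Walsh-Hadamard matrix
B = rng.choice([-1.0, 1.0], size=d)     # random sign diagonal
Pi = rng.permutation(d)                 # random permutation
G = rng.standard_normal(d)              # Gaussian diagonal
# S rescales rows so their norms are chi-distributed like the rows of a
# dense Gaussian matrix (drawn here as norms of fresh Gaussian vectors).
S = np.linalg.norm(rng.standard_normal((d, d)), axis=1) / np.linalg.norm(G)

def fastfood_project(x):
    # V x = (1 / (sigma * sqrt(d))) * S H G Pi H B x; each Hadamard
    # multiply is O(d log d) with the fast transform (dense here).
    v = H @ (B * x)
    v = v[Pi]
    v = H @ (G * v)
    return S * v / (sigma * np.sqrt(d))

def features(x):
    # Random Fourier feature map built on the Fastfood projection.
    p = fastfood_project(x)
    return np.concatenate([np.cos(p), np.sin(p)]) / np.sqrt(d)

x = 0.1 * rng.standard_normal(d)
y = 0.1 * rng.standard_normal(d)
k_exact = np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))
k_approx = features(x) @ features(y)
```

Only the diagonals and the permutation are stored, which is where the O(n) memory claim comes from; a dense Gaussian projection would need an n-by-d matrix.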