- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > France (0.04)
- North America > United States > New York > Tompkins County > Ithaca (0.04)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.04)
Moonshine: Distilling with Cheap Convolutions
Many engineers wish to deploy modern neural networks in memory-limited settings, but the development of flexible methods for reducing memory use is in its infancy, and little is known about the resulting cost-benefit trade-offs. We propose structural model distillation for memory reduction using a strategy that produces a student architecture that is a simple transformation of the teacher architecture: no redesign is needed, and the same hyperparameters can be used. Using attention transfer, we provide Pareto curves and tables for distillation of residual networks on four benchmark datasets, indicating the memory-versus-accuracy payoff. We show that substantial memory savings are possible with very little loss of accuracy, and confirm that distillation yields student networks that perform better than the same architectures trained directly on the data.
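The attention transfer mentioned in the abstract matches student and teacher feature maps through spatial "attention maps". A minimal numpy sketch of the standard formulation (sum of squared activations over channels, L2-normalised, compared between paired layers) is below; the function names and the small epsilon are illustrative assumptions, not the paper's code:

```python
import numpy as np

def attention_map(activations):
    # Collapse a (C, H, W) activation tensor to a flat spatial attention
    # map by summing squared values over the channel axis, then
    # L2-normalising. (Illustrative; epsilon avoids division by zero.)
    a = np.sum(activations ** 2, axis=0).ravel()
    return a / (np.linalg.norm(a) + 1e-8)

def attention_transfer_loss(student_acts, teacher_acts):
    # L2 distance between normalised student and teacher attention maps,
    # summed over matched layer pairs. Spatial sizes must agree per pair;
    # channel counts may differ, which is what lets a slimmer student
    # mimic a wider teacher.
    return sum(
        np.linalg.norm(attention_map(s) - attention_map(t))
        for s, t in zip(student_acts, teacher_acts)
    )
```

Because the map is channel-summed, a student block with fewer channels (cheaper convolutions) can still be penalised toward the teacher's spatial attention at each stage.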
- Asia > China > Jiangsu Province > Nanjing (0.04)
- Oceania > Australia > Australian Capital Territory > Canberra (0.04)
- North America > Canada (0.04)
- Asia > China (0.04)
Nonparametric Density Estimation under Adversarial Losses
We study minimax convergence rates of nonparametric density estimation under a large class of loss functions called "adversarial losses", which, besides classical L^p losses, includes maximum mean discrepancy (MMD), Wasserstein distance, and total variation distance. These losses are closely related to the losses encoded by discriminator networks in generative adversarial networks (GANs). In a general framework, we study how the choice of loss and the assumed smoothness of the underlying density together determine the minimax rate. We also discuss implications for training GANs based on deep ReLU networks, and more general connections to learning implicit generative models in a minimax statistical sense.
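Of the adversarial losses the abstract lists, MMD is the easiest to compute from samples. A minimal numpy sketch of the biased squared-MMD estimator with a Gaussian kernel follows; the function names and the bandwidth default are assumptions for illustration, not from the paper:

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # RBF kernel matrix between rows of x (n, d) and rows of y (m, d).
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_biased(x, y, bandwidth=1.0):
    # Biased estimator of squared MMD between samples x ~ p and y ~ q:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)], with expectations
    # replaced by sample means. Nonnegative; zero when x and y coincide.
    kxx = gaussian_kernel(x, x, bandwidth)
    kyy = gaussian_kernel(y, y, bandwidth)
    kxy = gaussian_kernel(x, y, bandwidth)
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()
```

This is the population quantity whose estimation rate (as a loss between a density estimate and the truth) the abstract's framework covers; in GAN terms, the kernel plays the role of a fixed, nonparametric discriminator class.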
Oh no, Intel is moving customer support to AI
Intel is launching 'Ask Intel,' an AI virtual assistant built on Microsoft Copilot Studio to handle customer support cases and warranty checks. PCWorld reports that this shift follows Intel's removal of inbound phone support in December, when it began directing customers to online assistance instead. The AI system warns users that its answers may be inaccurate, raising concerns about potential hardware damage from incorrect technical advice. If your Intel processor requires a warranty return or support, the first "person" you'll probably be dealing with at Intel will be an AI. Intel is rolling out "Ask Intel," an addition to its Intel support site that runs on Microsoft Copilot rather than on human agents. Ask Intel will appear as part of support.intel.com
- Information Technology > Hardware (0.89)
- Information Technology > Security & Privacy (0.77)
- Leisure & Entertainment > Games > Computer Games (0.58)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.71)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.58)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.46)
- North America > Canada > Quebec > Montreal (0.04)
- Oceania > Tonga (0.04)
- North America > United States > Indiana > Hamilton County > Fishers (0.04)