
Personalized Adapter for Large Meteorology Model on Devices: Towards Weather Foundation Models

Neural Information Processing Systems

This paper demonstrates that pre-trained language models (PLMs) are strong foundation models for on-device meteorological variable modeling. We present LM-Weather, a generic approach to taming PLMs, which have learned massive sequential knowledge from natural-language corpora, to quickly obtain highly customized models for heterogeneous meteorological data on devices while remaining efficient. Concretely, we introduce a lightweight personalized adapter into PLMs and endow it with weather-pattern awareness. During communication between clients and the server, low-rank-based transmission is performed to effectively fuse global knowledge among devices while maintaining high communication efficiency and ensuring privacy. Experiments on a real-world dataset show that LM-Weather outperforms state-of-the-art results by a large margin across various tasks (e.g., forecasting and imputation at different scales).
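The low-rank transmission idea can be illustrated with a toy sketch (this is not the paper's implementation; function names and sizes are hypothetical): instead of sending a full d×d weight update between client and server, only two thin factors A (d×r) and B (r×d) are exchanged, which cuts the payload dramatically when r ≪ d.

```python
# Hypothetical sketch of LoRA-style low-rank transmission: a d x d
# update is represented as the product of two thin factors, so only
# 2*d*r numbers travel over the wire instead of d*d.

def matmul(A, B):
    """Multiply two matrices given as lists of rows (pure Python)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transmission_cost(d, r):
    """Parameter counts: full update vs. low-rank factors."""
    full = d * d
    factored = 2 * d * r
    return full, factored

# For a hidden size of 768 and rank 8 (illustrative numbers only):
full, factored = transmission_cost(d=768, r=8)
print(full, factored)  # 589824 12288 -- roughly a 48x reduction
```

The server can reconstruct the dense update as `matmul(A, B)` after receiving the factors; only the adapter factors, never raw weather data, leave the device.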


Frac-Connections: Fractional Extension of Hyper-Connections

Zhu, Defa, Huang, Hongzhi, Zhou, Jundong, Huang, Zihao, Zeng, Yutao, Wu, Banggu, Min, Qiyang, Zhou, Xun

arXiv.org Artificial Intelligence

Residual connections are central to modern deep learning architectures, enabling the training of very deep networks by mitigating gradient vanishing. Hyper-Connections recently generalized residual connections by introducing multiple connection strengths at different depths, thereby addressing the seesaw effect between gradient vanishing and representation collapse. However, Hyper-Connections increase memory access costs by expanding the width of hidden states. In this paper, we propose Frac-Connections, a novel approach that divides hidden states into multiple parts rather than expanding their width. Frac-Connections retain partial benefits of Hyper-Connections while reducing memory consumption. To validate their effectiveness, we conduct large-scale experiments on language tasks, with the largest being a 7B MoE model trained on up to 3T tokens, demonstrating that Frac-Connections significantly outperform residual connections.
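The core structural move can be sketched in a few lines (a simplified, hypothetical illustration, not the paper's architecture): where Hyper-Connections widen the hidden state by a factor of n, Frac-Connections instead split the existing state into m equal fractions and mix them with per-fraction strengths, so the state's width is unchanged.

```python
# Conceptual sketch of the Frac-Connections idea: split the hidden
# state into m fractions and recombine them with (here, hand-picked)
# connection strengths. In the real model these strengths are learned.

def split_fractions(h, m):
    """Split vector h into m equal parts (len(h) divisible by m)."""
    k = len(h) // m
    return [h[i * k:(i + 1) * k] for i in range(m)]

def mix_fractions(parts, weights):
    """Weighted sum of the fractions, preserving the fraction width."""
    k = len(parts[0])
    return [sum(w * p[j] for w, p in zip(weights, parts)) for j in range(k)]

h = [1.0, 2.0, 3.0, 4.0]
parts = split_fractions(h, m=2)           # [[1.0, 2.0], [3.0, 4.0]]
mixed = mix_fractions(parts, [0.5, 0.5])  # [2.0, 3.0]
```

Note that `mixed` has the width of a single fraction, not m times the hidden width, which is where the memory-access saving over Hyper-Connections comes from.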


Device can transform into four components for artificial intelligence systems – Physics World

#artificialintelligence

Researchers in the US have developed a perovskite-based device that could be used to create a high-plasticity architecture for artificial intelligence. The team, led by Shriram Ramanathan at Purdue University, has shown that the material's electronic properties can be easily reconfigured, allowing the devices to function like artificial neurons and other components. Their results could lead to more flexible artificial-intelligence hardware that learns much as the brain does. Artificial intelligence systems can be trained to perform a task such as voice recognition using real-world data. Today this is usually done in software, which can adapt when additional training data are provided.


When Mind Melds With Machine, Who's in Control?

WIRED

The last time I saw my friend James was at the townie bar near our old high school. He had been working in roofing for a few years, no longer a rail-thin teenager with lank hippie hair. I had just gotten back from a stint with the Peace Corps in Turkmenistan. We reminisced about the summer after our freshman year, when we were inseparable--adventuring in the creek that sliced through the woods, debating the merits of Batman versus the Crow, watching every movie in my father's bootlegged VHS collection. I had no idea what I wanted to do next.


Tech trick

USATODAY - Tech Top Stories

At dinner, I mentioned that I would like to go hiking in Patagonia. I had never searched for these trips or anything like them. Yet an hour later, I started getting ads on my phone about hiking adventures in Patagonia. While there's no concrete evidence that your device's microphone is always listening, many Americans believe apps and sites routinely collect their voice data and use it for marketing purposes. Your smart speaker, however, with its virtual assistant, really is always listening for its wake word.


Facebook experiments with adding Face ID to Messenger inbox

Engadget

Facebook is testing a new feature for Messenger that allows users to better protect their messages from prying eyes. When enabled, users will need to authenticate their identity using Face ID, Touch ID, or their passcode before they can view their inbox, even if their phone is already unlocked. You can also set a designated period of time after leaving the app for when you'll need to re-authenticate. The company is currently testing the new security feature among a small percentage of Messenger's iOS users, though it could eventually be available more widely, including on Android. "We want to give people more choices and controls to protect their private messages, and recently, we began testing a feature that lets you unlock the Messenger app using your device's settings," a Facebook spokesperson said in a statement.


Virtual Assistants Provide Disappointing Advice When Asked for First Aid, Emergency Information: Study

#artificialintelligence

Researchers at the University of Alberta in Canada have found that some virtual assistants are far better than others at providing users reliable, relevant information on medical emergencies, and that overall they do not live up to their potential. The team tested four commonly used devices--Alexa, Google Home, Siri, and Cortana--using 123 questions about 39 first aid topics, including heart attacks, poisoning, nosebleeds, and splinters. The devices' responses were measured for accuracy of topic recognition, detection of the severity of the emergency, complexity of language used, and how closely the advice fit accepted first aid treatment and guidelines. Google Home performed the best, recognizing topics with 98% accuracy and providing relevant advice 56% of the time.


TensorFlow 2.1.0: First release candidate available

#artificialintelligence

As Python 2.7 will reach end of life on January 1, 2020, TensorFlow 2.1 will be the last version to support it. TensorFlow is an open source software library for ML that was originally developed by the Google Brain team in 2015. It has since become very popular within the open source community and was found to be the 5th most popular open source project on GitHub in the latest State of the Octoverse report. Among the breaking changes are API renamings as well as removals, and six APIs are now stable. The tensorflow pip package has received an update: GPU support is now included by default for Linux and Windows on machines with and without NVIDIA GPUs.


5 tips for multi-GPU training with Keras

#artificialintelligence

Deep Learning (the favourite buzzword of the late 2010s, along with blockchain/bitcoin and Data Science/Machine Learning) has enabled us to do some really cool stuff in the last few years. Other than the advances in algorithms (which admittedly build on ideas already known since the 1990s, aka the "Data Mining era"), its success can be attributed mainly to the availability of large free datasets, the introduction of open-source libraries, and the use of GPUs. In this blog post I will focus on the last two and share some tips that I learned the hard way. TensorFlow is a very popular Deep Learning library developed by Google which allows you to quickly prototype complex networks. It comes with lots of interesting features such as auto-differentiation (which saves you from estimating/coding the gradients of the cost functions) and GPU support (which can easily give you a 200x speedup on decent hardware).
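The data-parallel idea behind multi-GPU training can be sketched without any GPU at all (a toy illustration, not Keras's actual replication machinery): each replica receives a slice of the batch, computes its own gradient, and the replicas' gradients are averaged before the weight update.

```python
# Toy sketch of data parallelism: shard a batch across n "GPUs",
# compute a per-replica "gradient" (faked here as a sub-batch mean),
# then average. Real multi-GPU Keras training replicates the model
# graph and does this with actual gradient tensors.

def shard(batch, n_gpus):
    """Split a batch into n_gpus nearly equal contiguous sub-batches."""
    k, r = divmod(len(batch), n_gpus)
    out, i = [], 0
    for g in range(n_gpus):
        size = k + (1 if g < r else 0)  # spread the remainder
        out.append(batch[i:i + size])
        i += size
    return out

def average_gradients(grads):
    """Average per-replica gradients (one scalar per replica here)."""
    return sum(grads) / len(grads)

shards = shard(list(range(10)), 3)         # [[0,1,2,3], [4,5,6], [7,8,9]]
grads = [sum(s) / len(s) for s in shards]  # fake per-replica gradients
update = average_gradients(grads)
```

One practical consequence this sketch makes visible: the effective batch size per replica shrinks as you add GPUs, which is why batch size and learning rate usually need retuning for multi-GPU runs.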


Understanding a Dice Roll with Vision and Object Detection

#artificialintelligence

Pass the frames from the camera to the VNCoreMLRequest via a VNImageRequestHandler object so the model can make predictions. Vision handles image resizing and preprocessing, as well as post-processing of your model's outputs, for every prediction. To pass camera frames to your model, you first need to find the image orientation that corresponds to your device's physical orientation. If the device's orientation changes, the aspect ratio of the images can also change. Because you need to scale the bounding boxes for the detected objects back to your original image, you need to keep track of its size.
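The bounding-box scaling step can be sketched in a language-neutral way (the article's pipeline is Swift/Vision; this Python helper and its name are hypothetical): Vision reports boxes normalized to [0, 1] with the origin at the bottom-left, so mapping them onto the original image requires both scaling by the image size and flipping the vertical axis for a top-left-origin pixel grid.

```python
# Hypothetical sketch: map a Vision-style normalized bounding box
# (x, y, w, h), origin at bottom-left, onto a top-left-origin pixel
# grid of a given image size.

def scale_box(box, img_w, img_h):
    """Convert a normalized (x, y, w, h) box to pixel coordinates."""
    x, y, w, h = box
    px = x * img_w
    py = (1.0 - y - h) * img_h  # flip: Vision's origin is bottom-left
    return (px, py, w * img_w, h * img_h)

# A box covering the top-left quarter of a 640x480 frame:
print(scale_box((0.0, 0.5, 0.5, 0.5), 640, 480))
# (0.0, 0.0, 320.0, 240.0)
```

This is why the text stresses keeping track of the original image size: without it, the normalized boxes cannot be placed back on the frame you actually display.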