
The success of machine learning rests on scalability

#artificialintelligence

Instead, engineers will need to design systems that offer scalable performance and can dynamically adjust the type of processing resource they deliver based on the task at hand. This is different from what embedded engineers may be comfortable with right now. For some years embedded processors have been able to vary their operating frequency and supply voltage based on workload. Essentially, a processor's core can run slower when it isn't busy; scaling back the main clock frequency directly translates to fewer transistors switching on and off per second, which saves power. When the core really needs to get busy, the clock frequency is scaled up, increasing throughput.
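
A minimal sketch of the idea, in Python, assuming hypothetical hooks for reading load and setting the clock (real SoCs expose their own interfaces for this):

```python
# Minimal sketch of a workload-driven frequency governor (illustrative only).
# read_utilization() and set_clock_mhz() are hypothetical hooks standing in
# for whatever the target SoC exposes for load measurement and clock control.

FREQ_STEPS_MHZ = [100, 400, 800, 1200]   # available clock frequencies

def pick_frequency(utilization: float) -> int:
    """Map core utilization (0.0-1.0) to the lowest frequency that keeps up."""
    if utilization < 0.25:
        return FREQ_STEPS_MHZ[0]
    if utilization < 0.50:
        return FREQ_STEPS_MHZ[1]
    if utilization < 0.80:
        return FREQ_STEPS_MHZ[2]
    return FREQ_STEPS_MHZ[3]

def governor_tick(read_utilization, set_clock_mhz):
    """One control-loop iteration: fewer transistor switches per second at
    low load saves power; scale up only when the core is genuinely busy."""
    set_clock_mhz(pick_frequency(read_utilization()))
```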


Take your Machine Learning Models to Production with these 5 simple steps

#artificialintelligence

The world around us is rapidly changing, and what was applicable two months ago might not be relevant now. In a way, the models we build are reflections of the world, and if the world is changing, our models should reflect that change. Model performance typically deteriorates over time. For this reason, we must plan from the outset for upgrading our models as part of the maintenance cycle. The frequency of this cycle depends entirely on the business problem you are trying to solve.
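
A minimal sketch of one way such a maintenance check could look, assuming a binary classifier monitored on AUC; the threshold, metric, and retrain() hook are placeholders, not details from the article:

```python
# Illustrative sketch: retrain when live performance drops below a tolerance
# of the score recorded at deployment time. baseline_score and retrain() are
# hypothetical placeholders, not from the article.
from sklearn.metrics import roc_auc_score

def needs_retraining(y_true, y_scores, baseline_score, tolerance=0.05):
    """Return True if current AUC has degraded past the allowed tolerance."""
    current = roc_auc_score(y_true, y_scores)
    return current < baseline_score - tolerance

def maintenance_cycle(model, fresh_X, fresh_y, baseline_score, retrain):
    scores = model.predict_proba(fresh_X)[:, 1]
    if needs_retraining(fresh_y, scores, baseline_score):
        model = retrain(fresh_X, fresh_y)   # cadence depends on the business problem
    return model
```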


The Necessity of Musical Hallucinations - Issue 77: Underworlds 

Nautilus

During the last months of my mother's life, as she ventured further from lucidity, she was visited by music. In collusion with her dementia, her hearing loss filled her consciousness with musical hallucinations. Sometimes welcome, more often not, her musical visitations were vivid, yet segmented and tattered. She would occasionally comment on the singers. On rare occasions she would identify the performer. Mitch Miller, who wrote oppressively cheerful arrangements of popular songs from the 1950s, seemed to command a prominent role in her hallucinations.


Hackers Can Use Lasers to 'Speak' to Your Amazon Echo

#artificialintelligence

In the spring of last year, cybersecurity researcher Takeshi Sugawara walked into the lab of Kevin Fu, a professor he was visiting at the University of Michigan. He wanted to show off a strange trick he'd discovered. Sugawara pointed a high-powered laser at the microphone of his iPad--all inside of a black metal box, to avoid burning or blinding anyone--and had Fu put on a pair of earbuds to listen to the sound the iPad's mic picked up. As Sugawara varied the laser's intensity over time in the shape of a sine wave, fluctuating at about 1,000 times a second, Fu picked up a distinct high-pitched tone. The iPad's microphone had inexplicably converted the laser's light into an electrical signal, just as it would with sound.
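
A small sketch of the modulation idea described above, generating the kind of 1 kHz sine wave that could drive a laser's intensity; the sample rate and offset are illustrative values, not taken from the research:

```python
# Sketch of the modulation described above: a ~1 kHz sine wave used to vary
# light intensity, which the microphone then picks up as if it were sound.
# Sample rate, duration, and modulation depth are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 48_000          # samples per second for the drive signal
TONE_HZ = 1_000               # the "high-pitched tone" Fu heard
DURATION_S = 2.0

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
# Intensity must stay positive, so the tone rides on a DC offset (0..1 range).
intensity = 0.5 + 0.5 * np.sin(2 * np.pi * TONE_HZ * t)
```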


Machine learning finds new metamaterial designs for energy harvesting

#artificialintelligence

Electrical engineers at Duke University have harnessed the power of machine learning to design dielectric (non-metal) metamaterials that absorb and emit specific frequencies of terahertz radiation. The design technique reduced what could have been more than 2,000 years of calculation to 23 hours, clearing the way for the design of new, sustainable types of thermal energy harvesters and lighting. The study was published online on September 16 in the journal Optics Express. Metamaterials are synthetic materials composed of many individual engineered features, which together produce, through their structure rather than their chemistry, properties not found in nature. In this case, the terahertz metamaterial is built up from a two-by-two grid of silicon cylinders resembling a short, square Lego.
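
A toy sketch of the general surrogate-assisted idea behind this kind of speed-up: fit a fast learned model that maps geometry to a figure of merit, then screen many candidate designs without running a full electromagnetic simulation for each one. The features, regressor, and data here are placeholders, not the Duke group's actual method:

```python
# Toy sketch of surrogate-assisted design search. The synthetic data, feature
# layout (e.g. height/radius per cylinder in a 2x2 grid), and model choice are
# illustrative assumptions, not the published approach.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder training data: geometry parameters -> simulated absorption score.
X_train = rng.uniform(0.2, 1.0, size=(500, 8))
y_train = rng.uniform(0.0, 1.0, size=500)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Screen a large batch of candidate geometries in seconds instead of
# simulating each one individually.
candidates = rng.uniform(0.2, 1.0, size=(100_000, 8))
best_geometry = candidates[np.argmax(surrogate.predict(candidates))]
```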


DeepMind Uses GANs to Convert Text to Speech

#artificialintelligence

Generative Adversarial Networks (GANs) have revolutionized high-fidelity image generation, making global headlines with their hyperrealistic portraits and content-swapping, while also raising concerns over convincing deepfake videos. Now, DeepMind researchers are extending GANs to audio with a new adversarial network approach for high-fidelity speech synthesis. Text-to-Speech (TTS) is the process of converting text into humanlike voice output. One of the most commonly used TTS network architectures is WaveNet, a neural autoregressive model for generating raw audio waveforms. But because WaveNet relies on the sequential generation of one audio sample at a time, it is poorly suited to today's massively parallel computers.
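
A minimal sketch of why the autoregressive approach is slow, contrasted with a feed-forward (GAN-style) generator; next_sample() and generator() are hypothetical stand-ins for the respective networks:

```python
# Why autoregressive vocoders are slow: each new audio sample depends on the
# samples generated before it, so the loop below cannot be parallelized.
# next_sample() stands in for one WaveNet-style forward pass (hypothetical).
def generate_autoregressive(next_sample, seconds: float, sample_rate: int = 24_000):
    audio = []
    for _ in range(int(seconds * sample_rate)):
        audio.append(next_sample(audio))      # one network call per output sample
    return audio

# A GAN-style feed-forward generator instead emits the whole waveform at once,
# so all output samples can be computed in parallel on a GPU.
def generate_feedforward(generator, text_features):
    return generator(text_features)           # single parallel forward pass
```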


Choose the Right Accelerometer for Predictive Maintenance

#artificialintelligence

Maintenance, traditionally preventive or corrective, usually represents a significant portion of production costs. Now, having the IIoT monitor a machine's health status enables predictive maintenance, which allows industries to anticipate breakdowns and realize substantial operational savings. Industry 4.0, made possible by the widespread digitization and connectivity of industrial equipment, is on track to revolutionize production tools. This game changer makes the production chain more flexible and allows for the manufacture of customized products while maintaining earnings. Maintenance, too, can benefit from the digitization and connectivity of the IIoT.


Measuring Customers' Value Using Python/Lifetimes

#artificialintelligence

E[X(t)] is the expected number of transactions in a time period of length t; it is central to computing the expected transaction volume for the whole customer base over time.
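
A short sketch of estimating E[X(t)] per customer with the lifetimes package; the dataset loader and method names follow the library's documentation rather than the article, so treat the exact calls as an assumption:

```python
# Sketch of estimating E[X(t)] per customer with the lifetimes package.
# The bundled CDNOW summary dataset and the exact method names are taken from
# the library's documentation, not from the article itself.
from lifetimes import BetaGeoFitter
from lifetimes.datasets import load_cdnow_summary

data = load_cdnow_summary(index_col=[0])     # frequency, recency, T per customer

bgf = BetaGeoFitter(penalizer_coef=0.001)
bgf.fit(data["frequency"], data["recency"], data["T"])

t = 30  # horizon length, in the same time unit as recency/T
data["expected_purchases"] = bgf.conditional_expected_number_of_purchases_up_to_time(
    t, data["frequency"], data["recency"], data["T"]
)
# Summing over all customers gives the expected transaction volume for the base.
print(data["expected_purchases"].sum())
```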


Federated Learning over Wireless Networks: Convergence Analysis and Resource Allocation

arXiv.org Machine Learning

There is increasing interest in a fast-growing machine learning technique called Federated Learning (FL), in which model training is distributed over mobile user equipments (UEs), exploiting the UEs' local computation and training data. Despite its advantages in preserving data privacy, FL still faces challenges from heterogeneity across users' data and UE characteristics. We first address the heterogeneous data challenge by proposing an FL algorithm that can bypass the independent and identically distributed (i.i.d.) assumption on UEs' data for strongly convex and smooth problems. We provide a convergence rate characterizing the trade-off between the local computation rounds each UE uses to update its local model and the global communication rounds used to update the global model. We then embed the proposed FL algorithm in wireless networks as a resource allocation optimization problem that captures the trade-offs between computation and communication latencies, as well as between Federated Learning time and UE energy consumption. Even though the wireless resource allocation problem of FL is non-convex, we exploit the problem's structure to decompose it into three sub-problems and derive their closed-form solutions along with insights into the problem design. Finally, we illustrate the theoretical analysis of the new algorithm with TensorFlow experiments and extensive numerical results for the wireless resource allocation sub-problems. The experimental results not only verify the theoretical convergence but also show that our proposed algorithm converges significantly faster than the existing baseline approach.

Index Terms: Distributed Machine Learning over Wireless Networks, Federated Learning, Optimization Decomposition.

The significant increase in the number of cutting-edge mobile and Internet of Things (IoT) devices has led to phenomenal growth of the data volume generated at the edge network. It has been predicted that by 2025 there will be 80 billion devices connected to the Internet and the global data volume will reach 180 trillion gigabytes [2]. However, most of this data is privacy-sensitive in nature. It is not only risky to store this data in data centers but also costly in terms of communication. For example, location-based services such as the app Waze [3] can help users avoid heavy-traffic roads and thus reduce congestion.
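
A minimal sketch of the two knobs the paper trades off, local computation rounds per UE versus global communication rounds, written as a generic federated-averaging loop; this is illustrative and not the paper's specific algorithm:

```python
# Generic federated-averaging sketch: each global round costs one communication
# round, and each UE performs several local computation rounds in between.
# A least-squares objective stands in for the strongly convex local loss.
import numpy as np

def local_update(w_global, X, y, local_rounds, lr=0.1):
    """Each UE refines its copy of the model on its own (non-i.i.d.) data."""
    w = w_global.copy()
    for _ in range(local_rounds):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_training(datasets, dim, global_rounds=50, local_rounds=5):
    """datasets: list of (X, y) pairs, one per UE."""
    w_global = np.zeros(dim)
    for _ in range(global_rounds):            # one communication round per iteration
        local_models = [local_update(w_global, X, y, local_rounds) for X, y in datasets]
        w_global = np.mean(local_models, axis=0)   # server averages the UE models
    return w_global
```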


Model Interpretation: What and How? Open Data Science Conference

#artificialintelligence

Editor's note: Brian is a speaker for ODSC West in California this November! Be sure to check out his talk, "Advanced Methods for Explaining XGBoost Models," there! As modern machine learning methods become more ubiquitous, increasing attention is being paid to understanding how these models work -- model interpretation rather than just model use. Typically, these questions come in two flavors. With the first, we are trying to get a general understanding of the mechanisms behind the model.
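
A compact sketch of one common way to probe the mechanisms behind a gradient-boosted model, using SHAP values with XGBoost; the synthetic data and specific calls are illustrative and not taken from the talk:

```python
# Illustrative sketch of interpreting an XGBoost model with SHAP values, one
# common answer to "what mechanisms drive this model's predictions?".
# The synthetic data is a placeholder; the talk's actual methods may differ.
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Mean absolute SHAP value per feature: a global view of which inputs matter.
print(np.abs(shap_values).mean(axis=0))
```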