AI Vision IoT


This sets up the camera's video capture view. For the width and height, 1280 x 720 worked great for me, but you can experiment with the dimensions to see what fits your needs. I set the frame rate to 30; the higher you set that number, the more computing power it requires. You can benchmark different values to find what works for you, but 30 has worked great for me.
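The settings described above can be sketched as follows. This is a hypothetical, hardware-free helper (with OpenCV you would apply the same values via `cap.set(cv2.CAP_PROP_FRAME_WIDTH, ...)` and related properties); the throughput function just illustrates why a higher frame rate demands more computing power.

```python
# Hypothetical sketch of the capture settings described above. With OpenCV
# you would apply them via cap.set(cv2.CAP_PROP_FRAME_WIDTH, ...) etc.;
# here we keep it hardware-free and simply model the configuration.

def camera_config(width=1280, height=720, fps=30):
    """Capture settings: 1280 x 720 at 30 fps worked well in practice."""
    return {"width": width, "height": height, "fps": fps}

def raw_throughput_bytes(cfg, bytes_per_pixel=3):
    """Uncompressed data rate (bytes/s): higher fps means more data to process."""
    return cfg["width"] * cfg["height"] * cfg["fps"] * bytes_per_pixel
```

At 1280 x 720 and 30 fps this comes to roughly 83 MB of raw pixels per second, which is why the frame rate is the first knob to turn down on constrained hardware.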

GPU computing: Accelerating the deep learning curve


Artificial intelligence (AI) may be what everyone's talking about, but getting involved isn't straightforward. You'll need a more than decent grasp of maths and theoretical data science, plus an understanding of neural networks and deep learning fundamentals -- not to mention a good working knowledge of the tools required to turn those theories into practical models and applications. You'll also need an abundance of processing power -- beyond that required by even the most demanding of standard applications. One way to get this is via the cloud but, because deep learning models can take days or even weeks to come up with the goods, that can be hugely expensive. In this article, therefore, we'll look at on-premises alternatives and why the once-humble graphics controller is now the must-have accessory for the would-be AI developer.

Why Micron Is So Excited About Artificial Intelligence


Memory specialist Micron (NASDAQ:MU) sells both DRAM, a type of computer memory that's used in virtually every kind of computing device, and NAND flash, which is rapidly gaining traction for high-performance data storage applications as it's quicker and more efficient than hard disk drive-based storage. Micron's business has continued to benefit from what seems like an insatiable amount of demand for both DRAM and NAND in applications such as mobile phones and data center servers. One of the sub-segments within data center servers is the market for servers that handle machine learning, commonly referred to as artificial intelligence, processing tasks. That sub-segment is small today, with data center chip giant Intel estimating the market at around 7% of total data center server shipments in 2016, but it's also, according to Intel, the fastest growing. The companies that make the processors that perform these machine learning computations are clearly very excited about the artificial intelligence opportunity as it means they'll get to sell a lot more computing power over the years.

AMD Unleashes 32-Core Processor While Betting on Machine Learning in GPU


Advanced Micro Devices (NASDAQ:AMD) is rallying as the company unveiled the world's first 7nm graphics processing unit (GPU) alongside the new 32-core Ryzen Threadripper processor at Computex Taipei 2018 through a live stream yesterday. The next-generation Vega GPU products will be based on GlobalFoundries' 7nm technology and are expected to launch during the second half of 2018. The new 32-core Threadripper CPU, however, will be based on a 12nm process technology and is set to debut during the third quarter of 2018. Furthermore, AMD continues to make strides on the server side as it will start sampling the 7nm EPYC – processors targeting data centers and servers – during the second half of 2018. "At Computex 2018 we demonstrated how the strongest CPU and GPU product portfolio in the industry gets even stronger in the coming months," said AMD President and CEO Dr. Lisa Su in a press release.

Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning

To improve the quality of computation experience for mobile devices, mobile-edge computing (MEC) is a promising paradigm by providing computing capabilities in close proximity within a sliced radio access network (RAN), which supports both traditional communication and MEC services. Nevertheless, the design of computation offloading policies for a virtual MEC system remains challenging. Specifically, whether to execute a computation task at the mobile device or to offload it for MEC server execution should adapt to the time-varying network dynamics. In this paper, we consider MEC for a representative mobile user (MU) in an ultra-dense sliced RAN, where multiple base stations (BSs) are available to be selected for computation offloading. The problem of finding an optimal computation offloading policy is modelled as a Markov decision process, where our objective is to maximize the long-term utility performance whereby an offloading decision is made based on the task queue state, the energy queue state as well as the channel qualities between the MU and BSs. To break the curse of high dimensionality in the state space, we first propose a double deep Q-network (DQN) based strategic computation offloading algorithm to learn the optimal policy without a priori knowledge of network dynamics. Then, motivated by the additive structure of the utility function, a Q-function decomposition technique is combined with the double DQN, which leads to a novel learning algorithm for solving the stochastic computation offloading problem. Numerical experiments show that our proposed learning algorithms achieve a significant improvement in computation offloading performance compared with the baseline policies.
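The double DQN at the core of this approach can be sketched in a few lines. This is an illustrative fragment, not the paper's algorithm: it shows only the double-DQN target update, where the online network selects the next action and a separate target network evaluates it, which reduces the overestimation bias of vanilla Q-learning.

```python
import numpy as np

# Illustrative double-DQN target computation (not the paper's full
# offloading algorithm): the online network picks the best next action,
# the target network supplies its value estimate.
def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99):
    """Compute the double-DQN bootstrap target for one transition.

    q_online_next / q_target_next: per-action Q-value vectors for the
    next state (here, actions would be 'execute locally' or 'offload
    to BS k'), from the online and target networks respectively.
    """
    a_star = int(np.argmax(q_online_next))        # action chosen by online net
    return reward + gamma * q_target_next[a_star]  # value from target net
```

In an offloading setting, the state fed to both networks would encode the task queue, energy queue, and channel qualities, and each action corresponds to local execution or offloading to a particular base station.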

Secure Mobile Edge Computing in IoT via Collaborative Online Learning

To accommodate heterogeneous tasks in the Internet of Things (IoT), a new communication and computing paradigm termed mobile edge computing has emerged that extends computing services from the cloud to the edge, but at the same time exposes new security challenges. The present paper studies online security-aware edge computing under jamming attacks. Leveraging online learning tools, novel algorithms abbreviated as SAVE-S and SAVE-A are developed to cope with the stochastic and adversarial forms of jamming, respectively. Without utilizing extra resources such as spectrum and transmission power to evade jamming attacks, SAVE-S and SAVE-A can select the most reliable server to offload computing tasks with minimal privacy and security concerns. It is analytically established that, without any prior information on future jamming and server security risks, the proposed schemes can achieve ${\cal O}\big(\sqrt{T}\big)$ regret. Information sharing among devices can accelerate the security-aware computing tasks. Incorporating the information shared by other devices, SAVE-S and SAVE-A offer impressive improvements in sublinear regret, which is guaranteed by what is termed the "value of cooperation." Effectiveness of the proposed schemes is tested on both synthetic and real datasets.
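To make the ${\cal O}\big(\sqrt{T}\big)$ regret claim concrete, here is a generic sketch of how an online server-selection rule of this flavor looks in code. This is a textbook UCB1-style selector for the stochastic setting, not the paper's SAVE-S or SAVE-A algorithms; the class and its names are illustrative.

```python
import numpy as np

# Illustrative UCB1-style server selector for the stochastic setting
# (a standard bandit rule with O(sqrt(T log T)) regret, shown here only
# to convey the shape of online security-aware server selection).
class UCBServerSelector:
    def __init__(self, n_servers):
        self.n = np.zeros(n_servers)     # times each server was chosen
        self.mean = np.zeros(n_servers)  # empirical reliability estimate
        self.t = 0                       # total rounds so far

    def select(self):
        """Pick a server: explore untried ones, else maximize the UCB index."""
        self.t += 1
        if np.any(self.n == 0):
            return int(np.argmin(self.n))          # try each server once
        ucb = self.mean + np.sqrt(2 * np.log(self.t) / self.n)
        return int(np.argmax(ucb))

    def update(self, server, reward):
        """Feed back an observed reward (e.g. task completed un-jammed)."""
        self.n[server] += 1
        self.mean[server] += (reward - self.mean[server]) / self.n[server]
```

The "value of cooperation" idea maps naturally onto this sketch: rewards observed by neighboring devices could be folded into `update` as extra samples, tightening the confidence terms faster than solo exploration.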

Billion-scale Network Embedding with Iterative Random Projection

Network embedding has attracted considerable research attention recently. However, the existing methods are incapable of handling billion-scale networks, because they are computationally expensive and, at the same time, difficult to accelerate with distributed computing schemes. To address these problems, we propose RandNE, a novel and simple billion-scale network embedding method. Specifically, we propose a Gaussian random projection approach to map the network into a low-dimensional embedding space while preserving the high-order proximities between nodes. To reduce the time complexity, we design an iterative projection procedure to avoid the explicit calculation of the high-order proximities. Theoretical analysis shows that our method is extremely efficient, and friendly to distributed computing schemes without any communication cost in the calculation. We demonstrate the efficacy of RandNE over state-of-the-art methods in network reconstruction and link prediction tasks on multiple datasets with different scales, ranging from thousands to billions of nodes and edges.
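The iterative projection idea can be sketched compactly. This is a loose, dense-matrix illustration of the RandNE-style recipe (the function name, weights, and normalization are assumptions, not the paper's exact formulation): project once with a Gaussian random matrix, then repeatedly multiply by the adjacency matrix so high-order proximity matrices are never formed explicitly.

```python
import numpy as np

# Sketch of iterative Gaussian random projection for network embedding:
# instead of computing A^q (dense, expensive), propagate a random
# projection U0 through A one multiply at a time and combine the orders.
def randne_embed(A, dim=4, order=2, weights=(1.0, 1.0, 1.0), seed=0):
    """Return an (n, dim) embedding mixing proximity orders 0..order."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    U = rng.standard_normal((n, dim)) / np.sqrt(dim)  # U0: random projection
    emb = weights[0] * U
    for q in range(1, order + 1):
        U = A @ U                      # one (sparse-friendly) multiply per order
        emb = emb + weights[q] * U     # accumulate weighted order-q term
    return emb
```

Each iteration costs one matrix-vector-style multiply per embedding dimension, which is why the scheme parallelizes across machines with no communication during the multiplies.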

Power Density Edges Higher, But AI Could Accelerate Trend


Data center rack density is trending higher, prompted by growing adoption of powerful hardware to support artificial intelligence applications. It's an ongoing trend with a new wrinkle, as industry observers see a growing opportunity for specialists in high-density hosting, perhaps boosted by the rise of edge computing. Increases in rack density are being seen broadly, with 67 percent of data center operators seeing increasing densities, according to the recent State of the Data Center survey by AFCOM. With an average power density of about 7 kilowatts per rack, the report found that the vast majority of data centers have few problems managing their IT workloads with traditional air cooling methods. But there are also growing pockets of extreme density, as AI and cloud applications boost adoption of advanced hardware, including GPUs, FPGAs and ASICs.

Nvidia aims to extend its lead in AI


When Nvidia held its first annual GPU Technology Conference almost a decade ago, it was a gaming company in search of new markets for its specialized chips. At the time, high-performance computing was the main target. Then AlexNet came along and swept the ImageNet challenge in 2012, sparking a boom in deep neural networks trained on GPUs. Today, Nvidia's data center business generates $2 billion in annual sales, leaving larger rivals playing catch-up while venture capitalists throw money at AI hardware start-ups vying to build a better mousetrap. Nvidia no longer needs to make the case for GPU computing.

Nvidia details next steps in AI, including self-driving simulator


Nvidia Corp. has advanced deep learning techniques, but now it's looking to take AI technology into new areas: putting self-driving cars into virtual reality instead of on our roads, and setting its sights on Hollywood and hospitals. Over the past few years, Nvidia has made inroads into equipping cars with the computer hardware that gives them self-driving capability. That move has become so crucial that Nvidia shares fell more than 6% in recent trading as the company kicked off its GPU Technology Conference in San Jose, Calif., after it confirmed that it is suspending real-world testing following a recent fatality in Arizona in one of Uber Technologies Inc.'s self-driving cars. In his keynote address Tuesday morning, Chief Executive Jensen Huang did not mention the halt, but did show off a potential solution to the problem of testing self-driving automobiles on public roads. Huang showed off a simulator that allows companies to test their self-driving systems in a virtual environment, providing the opportunity to drive billions of miles in a year without endangering pedestrians.