What's The Difference Between Edge Computing and Decentralized Computing?

#artificialintelligence

Edge computing is a decentralized, distributed computing infrastructure that has evolved with the growth of the Internet of Things. Because of their similar names and general unfamiliarity with advanced computing, some people assume that decentralized computing and edge computing are the same thing. In fact, the two are distinct and complementary: combined as decentralized edge computing, they can perform tasks neither can achieve alone. Edge computing is the deployment of computing and storage resources at the location where data is produced.


Council Post: How 5G Will Change The Enterprise

#artificialintelligence

As mobile carriers continue consumer 5G network rollouts across the United States, they are looking for ways to approach the enterprise and showcase the value 5G delivers. As I've written previously, 5G mmWave is known for delivering the low latency and fast speeds people demand from the new wireless generation, but it will be challenging to provision across the country due to the short distances the frequency bands travel. In this sense, 5G for enterprises will spring from specific use cases, and mobile carriers understand they must adapt to meet this changing landscape. At the recent 5G World Virtual Trade Show, Swisscom, Ciena and other panelists discussed the need for mobile carriers to transition from network operators into IT companies when engaging with enterprises and meeting their specific needs. This shift marks a truly transformative moment for telecom: it's the first time businesses will build wireless networks into their core operations, transcending communications.


Three ways to fix DRAM's latency problem

ZDNet

In a brilliant PhD thesis, Understanding and Improving the Latency of DRAM-Based Memory Systems, Kevin K. Chang of CMU tackles the DRAM issue, and suggests some novel architectural enhancements to make substantial improvements in DRAM latency.


Brazilian gamers see improvement in broadband latency and speed

ZDNet

Brazilians have seen recent improvements in fixed broadband latency as demand for online gaming rises during the Covid-19 outbreak, a new study has found. Latency - the reaction time of a connection - varies between countries across Latin America, particularly when it comes to fixed broadband. Latency is a key metric in gaming and determines much of the user's experience in terms of the absence of lag during gameplay. According to data from Ookla's Speedtest Intelligence, gamers in Brazil had the lowest mean latency on fixed broadband, relevant for games played on PC and console, at 19 ms during Q2 2020, down from 23 ms in the same period in 2019. By comparison, Colombia had the highest fixed broadband latency at 43 ms.
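The figures above are means over many per-session measurements. As a minimal sketch of that kind of aggregation, the snippet below computes a mean latency and a year-over-year improvement from hypothetical per-session samples; the sample values are made up for illustration and are not Ookla's data.

```python
import statistics

# Hypothetical per-session latency samples in milliseconds (illustrative only).
brazil_q2_2020 = [18, 20, 19, 21, 17]
brazil_q2_2019 = [22, 24, 23, 25, 21]

def mean_latency(samples):
    """Mean latency in ms over a list of per-session measurements."""
    return statistics.mean(samples)

improvement_ms = mean_latency(brazil_q2_2019) - mean_latency(brazil_q2_2020)
print(f"Mean latency: {mean_latency(brazil_q2_2020):.0f} ms "
      f"(improved by {improvement_ms:.0f} ms year over year)")
# prints "Mean latency: 19 ms (improved by 4 ms year over year)"
```

Real studies would also report percentiles, since tail latency often matters more to gameplay than the mean.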


LC-NAS: Latency Constrained Neural Architecture Search for Point Cloud Networks

arXiv.org Artificial Intelligence

Point cloud architecture design has become a crucial problem for 3D deep learning. Several efforts exist to manually design architectures with high accuracy in point cloud tasks such as classification, segmentation, and detection. Recent progress in automatic Neural Architecture Search (NAS) minimizes the human effort in network design and optimizes high-performing architectures. However, these efforts fail to consider important factors such as latency during inference. Latency is of high importance in time-critical applications like self-driving cars, robot navigation, and mobile applications that are generally bound by the available hardware. In this paper, we introduce a new NAS framework, dubbed LC-NAS, where we search for point cloud architectures that are constrained to a target latency. We implement a novel latency constraint formulation to trade off accuracy against latency in our architecture search. Contrary to previous works, our latency loss guarantees that the final network achieves latency under a specified target value. This is crucial when the end task is to be deployed in a limited hardware setting. Extensive experiments show that LC-NAS is able to find state-of-the-art architectures for point cloud classification in ModelNet40 with minimal computational cost. We also show how our searched architectures achieve any desired latency with a reasonably low drop in accuracy. Finally, we show how our searched architectures easily transfer to a different task, part segmentation on PartNet, where we achieve state-of-the-art results while lowering latency by a factor of 10.
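The abstract describes a loss that penalizes architectures only when their latency exceeds a target. The paper's exact formulation is not given here, so the sketch below uses a generic hinge-style penalty as an illustration: the function name, the `penalty_weight` parameter, and the latency values are all assumptions, not LC-NAS's actual implementation.

```python
def latency_constrained_loss(task_loss, predicted_latency_ms, target_ms,
                             penalty_weight=10.0):
    """Hypothetical trade-off objective for a latency-constrained NAS.

    Adds a hinge penalty to the task loss that is zero while the predicted
    latency stays under the target and grows linearly once it exceeds it,
    steering the search toward architectures within the latency budget.
    """
    violation = max(0.0, predicted_latency_ms - target_ms)
    return task_loss + penalty_weight * violation

# Under budget: penalty is inactive, only the task loss remains.
print(latency_constrained_loss(1.0, 50.0, 100.0))   # prints 1.0
# Over budget: a 20 ms violation adds penalty_weight * 20 to the loss.
print(latency_constrained_loss(1.0, 120.0, 100.0))  # prints 201.0
```

A hard guarantee (as the abstract claims) would additionally require rejecting any candidate whose measured latency exceeds the target, since a soft penalty alone only discourages violations.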