
Making Sense of Edge Computing

#artificialintelligence

A much-needed text that covers important edge computing topics in an approachable and systematic manner.


Edge Computing

#artificialintelligence

Edge computing stands to transform the internet of things (IoT) much the same way that cloud computing is transforming enterprise IT. By creating secure, highly programmable and flexible computing systems that enhance both artificial intelligence (AI) and machine learning (ML), we help usher in the era of local AI, where edge nodes are not only smart but also trained to be aware of their environment and situation, making them capable of operating offline or with limited cloud connectivity. NXP platforms offer secure edge computing at the hardware and software level, providing the essential technologies that enable low-power, low-latency, high-throughput solutions to deliver greater efficiencies, convenience, privacy and security.
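To make the "operating offline or with limited cloud connectivity" idea concrete, here is a minimal sketch (not NXP-specific) of an edge node that classifies sensor readings locally and buffers results while the cloud link is down. The classifier, the buffer size, and the upload step are all illustrative assumptions, not a real platform API.

```c
/* Sketch: an edge node that keeps making local decisions even when the
 * cloud is unreachable. All names here are hypothetical stand-ins. */
#include <stdio.h>
#include <stdbool.h>

#define BUFFER_SIZE 64

typedef struct { double value; int label; } Reading;

static Reading buffer[BUFFER_SIZE];
static int buffered = 0;

/* Local "AI": a trivial threshold classifier standing in for an on-device model. */
static int classify(double value) {
    return value > 0.75 ? 1 : 0;   /* 1 = anomaly, 0 = normal */
}

static void handle_sample(double value, bool cloud_up) {
    Reading r = { value, classify(value) };   /* decision is made at the edge */
    if (cloud_up) {
        printf("upload: value=%.2f label=%d\n", r.value, r.label);
    } else if (buffered < BUFFER_SIZE) {
        buffer[buffered++] = r;               /* keep working offline */
    }
}

int main(void) {
    handle_sample(0.42, true);
    handle_sample(0.91, false);   /* buffered, not dropped */
    handle_sample(0.88, false);
    printf("buffered while offline: %d\n", buffered);
    return 0;
}
```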


Why is Edge Computing getting so popular? Read it here.

#artificialintelligence

At this point in the tech game, you likely have information stored away on the Cloud somewhere. In 1980, IBM launched the first hard drive to have more than 1 GB of storage. The phone you're likely holding in your hand or pocket has monumentally more than that. Yet, even as the technology world has experimented with ways to reduce the need for storage on a device itself, the need for communication between the client and the server has grown, and with it the performance problems caused by latency and limited bandwidth. Edge computing helps to reduce these issues – here's everything you need to know.


Tackling Edge Computing Challenges

#artificialintelligence

Edge computing is a form of computing where the processing occurs close to the source of activity and data. Working close to the edge reduces the latency of transporting data from the source to the processing units, and is ideal for use cases that require rapid responses, such as the internet of things. The concept of edge computing is complementary to cloud computing, which typically means centralized processing residing far from the source of data. In edge-based systems, which some call the "near cloud," the goal is to extend the boundary of the cloud to be closer to the edge. It's easy to think edge computing magically solves many problems that cloud computing can't, but there's a trade-off due to the highly distributed nature of edge systems.
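As an illustration of the edge/cloud split described above, here is a minimal sketch in which raw readings are reduced to a compact summary where they are produced, so that only the summary would need to travel to the distant cloud. The data values and the "send" step are hypothetical; the point is the local aggregation.

```c
/* Sketch: process raw samples at the edge, ship only a small summary upstream. */
#include <stdio.h>

typedef struct { double min, max, mean; int count; } Summary;

/* Edge-side processing: reduce n raw samples to one compact summary. */
static Summary summarize(const double *samples, int n) {
    Summary s = { samples[0], samples[0], 0.0, n };
    double total = 0.0;
    for (int i = 0; i < n; i++) {
        if (samples[i] < s.min) s.min = samples[i];
        if (samples[i] > s.max) s.max = samples[i];
        total += samples[i];
    }
    s.mean = total / n;
    return s;
}

int main(void) {
    double raw[] = { 21.3, 21.7, 22.1, 21.9, 22.4, 22.0 };
    int n = sizeof raw / sizeof raw[0];

    Summary s = summarize(raw, n);   /* done locally, near the data source */

    /* Only this summary crosses the network, instead of every raw sample. */
    printf("send_to_cloud: count=%d min=%.1f max=%.1f mean=%.2f\n",
           s.count, s.min, s.max, s.mean);
    return 0;
}
```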


ARM's new chip design targets high-performance computing

ZDNet

ARM on Monday announced a new chip design targeting high-performance computing -- an update to its ARMv8-A architecture, known as the Scalable Vector Extension (SVE). The new design significantly extends the vector processing capabilities associated with AArch64 (64-bit) execution, allowing CPU designers to choose the most appropriate vector length for their application and market, from 128 to 2048 bits. SVE will also allow advanced vectorizing compilers to extract more fine-grain parallelism from existing code. "Immense amounts of data are being collected today in areas such as meteorology, geology, astronomy, quantum physics, fluid dynamics, and pharmaceutical research," ARM fellow Nigel Stephens wrote. HPC systems over the next five to 10 years will shoot for exascale computing, he continued.
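To show what "choose the most appropriate vector length" looks like in practice, here is a small sketch of a vector-length-agnostic loop written with the ARM C Language Extensions for SVE. It assumes an SVE-capable toolchain (for example, compiling with -march=armv8-a+sve); the function name and workload are illustrative, not taken from ARM's announcement.

```c
/* Sketch of a vector-length-agnostic (VLA) loop using SVE intrinsics.
 * The same code runs unchanged whether the hardware implements 128-bit
 * or 2048-bit vectors: svcntw() reports how many 32-bit lanes the CPU
 * provides, and the predicate from svwhilelt masks off lanes past n. */
#include <arm_sve.h>

void vla_add(float *dst, const float *a, const float *b, int64_t n) {
    for (int64_t i = 0; i < n; i += svcntw()) {           /* step = lanes per vector */
        svbool_t pg = svwhilelt_b32_s64(i, n);             /* active-lane predicate   */
        svfloat32_t va = svld1_f32(pg, &a[i]);             /* predicated loads        */
        svfloat32_t vb = svld1_f32(pg, &b[i]);
        svst1_f32(pg, &dst[i], svadd_f32_x(pg, va, vb));   /* predicated add + store  */
    }
}
```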