 ethernet


Framework Laptop 12 review: fun, flexible and repairable

The Guardian

The modular and repairable PC maker Framework's latest machine moves into the notoriously difficult-to-fix 2-in-1 category with a fun 12in laptop with a touchscreen and a 360-degree hinge. The new machine still supports the company's innovative expansion cards for swapping the different ports on the side, which are cross-compatible with the Framework 13 and 16, among others. And you can still open it up to replace the memory, storage and internal components with a few simple screws. The Framework 12 is available in either DIY form, starting at £499 (€569/$549/A$909), or more conventional prebuilt models starting at £749. It sits under the £799-and-up Laptop 13 and £1,399 Laptop 16 as the company's most compact and affordable model.


Deep Learning-Based Intrusion Detection for Automotive Ethernet: Evaluating & Optimizing Fast Inference Techniques for Deployment on Low-Cost Platform

Carmo, Pedro R. X., de Moura, Igor, Filho, Assis T. de Oliveira, Sadok, Djamel, Zanchettin, Cleber

arXiv.org Artificial Intelligence

Modern vehicles are increasingly connected, and in this context, automotive Ethernet is one of the technologies that promise to provide the necessary infrastructure for intra-vehicle communication. However, these systems are subject to attacks that can compromise safety, including flow injection attacks. Deep Learning-based Intrusion Detection Systems (IDS) are often designed to combat this problem, but they require expensive hardware to run in real time. In this work, we propose to evaluate and apply fast neural network inference techniques such as distillation and pruning for deploying IDS models on low-cost platforms in real time. The results show that these techniques can achieve intrusion detection times of up to 727 μs using a Raspberry Pi 4, with AUCROC values of 0.9890.
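The pruning technique mentioned in the abstract can be sketched in a few lines. The following is a minimal illustration of unstructured magnitude pruning on a flat weight list; the paper's actual pruning procedure (and its distillation setup) is not reproduced here, and the function name and threshold rule are assumptions for illustration only.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Illustrative sketch only: real pruning operates per-layer on
    tensors and is usually followed by fine-tuning to recover accuracy.
    """
    k = int(len(weights) * sparsity)          # number of weights to drop
    if k == 0:
        return list(weights)
    # threshold = magnitude of the k-th smallest |w|
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

pruned = magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.03], sparsity=0.5)
print(pruned)  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Zeroed weights can then be skipped (or stored sparsely) at inference time, which is what makes pruned models attractive on hardware like a Raspberry Pi.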


QRscript: Embedding a Programming Language in QR codes to support Decision and Management

Scanzio, Stefano, Cena, Gianluca, Valenzano, Adriano

arXiv.org Artificial Intelligence

Embedding a programming language in a QR code is a new and extremely promising opportunity, as it makes devices and objects smarter without necessarily requiring an Internet connection. This paper carefully details all the steps needed to translate a program written in a high-level programming language into its binary representation encoded in a QR code, and the opposite process that, starting from the QR code, executes the program by means of a virtual machine. The proposed programming language was named QRscript, and can be easily extended so as to integrate new features. One of the main design goals was to produce a very compact target binary code. In particular, in this work we propose a specific sub-language (a dialect) that is aimed at encoding decision trees. Besides industrial scenarios, this is useful in many other application fields. The reported example, related to the configuration of an industrial networked device, highlights the potential of the proposed technology and makes all the translation steps easier to understand.
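The core idea of the decision-tree dialect can be illustrated with a toy serializer and interpreter. This is not QRscript's actual binary format; it is a hedged sketch of one simple way to pre-order serialize a decision tree into a compact byte stream (small enough to fit in a QR code) and walk it with a tiny "virtual machine". All names and the tag/length layout are assumptions.

```python
# Node: ("q", question, yes_subtree, no_subtree) or ("leaf", answer)
def encode(node, out=None):
    """Pre-order serialize a decision tree into a compact byte stream."""
    out = [] if out is None else out
    tag = 0 if node[0] == "leaf" else 1       # tag 0 = leaf, 1 = question
    data = node[1].encode("utf-8")
    out += [tag, len(data)]
    out.extend(data)
    if tag == 1:
        encode(node[2], out)                  # yes branch (next in stream)
        encode(node[3], out)                  # no branch
    return bytes(out)

def skip(code, pos):
    """Return the position just past the subtree starting at `pos`."""
    tag, n = code[pos], code[pos + 1]
    pos += 2 + n
    if tag == 1:
        pos = skip(code, pos)                 # skip yes branch
        pos = skip(code, pos)                 # skip no branch
    return pos

def run(code, answer):
    """Walk the encoded tree; `answer` maps question text -> bool."""
    pos = 0
    while True:
        tag, n = code[pos], code[pos + 1]
        text = code[pos + 2:pos + 2 + n].decode("utf-8")
        pos += 2 + n
        if tag == 0:
            return text
        if not answer(text):
            pos = skip(code, pos)             # jump over yes branch

tree = ("q", "link up?",
        ("q", "ip assigned?", ("leaf", "check gateway"), ("leaf", "run DHCP")),
        ("leaf", "check cable"))
code = encode(tree)
print(len(code), "bytes")
print(run(code, {"link up?": True, "ip assigned?": False}.get))  # run DHCP
```

The whole tree fits in a few dozen bytes, which is the kind of compactness that makes embedding executable decision logic in a QR code feasible.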


Holmes: Towards Distributed Training Across Clusters with Heterogeneous NIC Environment

Yang, Fei, Peng, Shuang, Sun, Ning, Wang, Fangyu, Tan, Ke, Wu, Fu, Qiu, Jiezhong, Pan, Aimin

arXiv.org Artificial Intelligence

Large language models (LLMs) such as GPT-3, OPT, and LLaMA have demonstrated remarkable accuracy in a wide range of tasks. However, training these models can incur significant expenses, often requiring tens of thousands of GPUs for months of continuous operation. Typically, this training is carried out in specialized GPU clusters equipped with homogeneous high-speed Remote Direct Memory Access (RDMA) network interface cards (NICs). The acquisition and maintenance of such dedicated clusters is challenging. Current LLM training frameworks, like Megatron-LM and Megatron-DeepSpeed, focus primarily on optimizing training within homogeneous cluster settings. In this paper, we introduce Holmes, a training framework for LLMs that employs thoughtfully crafted data and model parallelism strategies over heterogeneous NIC environments. Our primary technical contribution lies in a novel scheduling method that intelligently allocates distinct computational tasklets in LLM training to specific groups of GPU devices based on the characteristics of their connected NICs. Furthermore, our proposed framework, utilizing pipeline parallel techniques, demonstrates scalability to multiple GPU clusters, even in scenarios without high-speed interconnects between nodes in distinct clusters. We conducted comprehensive experiments that involved various scenarios in the heterogeneous NIC environment. In most cases, our framework achieves performance levels close to those achievable with homogeneous RDMA-capable networks (InfiniBand or RoCE), significantly exceeding training efficiency within the pure Ethernet environment. Additionally, we verified that our framework outperforms other mainstream LLM frameworks in heterogeneous NIC environments in terms of training efficiency and can be seamlessly integrated with them.
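The NIC-aware placement idea can be sketched simply: group GPUs by their NIC type so that communication-heavy parallelism (e.g. tensor parallelism) stays inside homogeneous, fast groups, while slower links carry less chatty pipeline-parallel traffic. This is an illustrative sketch under assumed names and a made-up speed ranking; Holmes's actual scheduler is far more sophisticated.

```python
from collections import defaultdict

def group_by_nic(gpus):
    """Group GPUs by NIC type, fastest interconnect first.

    `gpus` is a list of (gpu_id, nic_type) pairs. The returned
    ordering would let a scheduler assign tensor-parallel groups to
    the fastest homogeneous sets first. Speed ranks are assumptions.
    """
    groups = defaultdict(list)
    for gpu_id, nic in gpus:
        groups[nic].append(gpu_id)
    speed = {"infiniband": 3, "roce": 2, "ethernet": 1}
    return sorted(groups.items(), key=lambda kv: -speed.get(kv[0], 0))

placement = group_by_nic([(0, "infiniband"), (1, "ethernet"),
                          (2, "infiniband"), (3, "roce")])
print(placement)  # InfiniBand GPUs grouped first, Ethernet last
```

The design choice mirrors the abstract's claim: keep bandwidth-bound collectives on RDMA-capable links and relegate Ethernet to the traffic that tolerates it.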


Still Networking

Communications of the ACM

ACM A.M. Turing Award recipient Bob Metcalfe--engineer, entrepreneur, and Professor Emeritus at the University of Texas at Austin--is embarking on his sixth career, as a Computational Engineer at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL). He is always willing to tell the story of his first career, as a researcher at the Xerox Palo Alto Research Center (PARC) where, in 1973, Metcalfe and then-graduate student David Boggs invented Ethernet, a standard for connecting computers over short distances. In the ensuing years, thanks in no small part to Metcalfe's entrepreneurship and advocacy, Ethernet has become the industry standard for local area networks. Leah Hoffmann spoke to Metcalfe about the development of Ethernet and what it has meant for the future of connectivity. You published your first paper about Ethernet in Communications in July 1976 (https://bit.ly/403Sxmm).


Making Connections

Communications of the ACM

When he was a student at the Massachusetts Institute of Technology (MIT), Ethernet inventor Bob Metcalfe briefly considered pursuing a career in tennis. He was captain of the 1968–1969 MIT tennis team, which had a record of 15 wins and 4 losses, and he was ranked sixth in New England in doubles, even while taking classes and holding a programming job at defense contractor Raytheon. That, unfortunately, was not enough to make a go of it. "There's playing pros and there's teaching pros," Metcalfe says. "I could easily be a teaching pro, but that just seemed boring." Metcalfe wrote his undergraduate thesis on a bus coming back from a tennis match and submitted it to Marvin Minsky at the last possible moment. The tennis world's loss was the computer world's gain, however, as Metcalfe went on to become an Internet pioneer, develop Ethernet, and help get it named a networking standard, actions that earned him the 2022 ACM A.M. Turing Award on the 50th anniversary of the invention of the technology.


Bob Metcalfe, The Man Who Discovered Network Effects, Isn't Sorry

WIRED

ChatGPT warned me against asking legendary engineer Bob Metcalfe about his 1996 prediction that the internet would collapse. This came after I sought the chatbot's guidance on what questions to ask the man who this week received the ACM Turing Award, the $1 million prize dubbed the Nobel of computing. The AI oracle suggested I stick to quizzing him on his famous accomplishments--inventing Ethernet, starting the 3Com Corporation, codifying the value of networks, and teaching students in Texas about innovation, which he did until he retired last year "to pursue a sixth career." But ChatGPT thought it was a terrible idea to bring up Metcalfe's bold prognostication, just as the network he'd helped pioneer was taking off, that the volume of bits zipping around the internet would cause the mother of all crashes. OpenAI's black box told me that since Metcalfe's guess had flopped in a very public manner, I'd be risking the honoree's pique if I raised it, and from then on he'd be too annoyed to share his best thoughts.
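"Codifying the value of networks" refers to Metcalfe's law: a network's value grows roughly with the square of its users, because the number of possible pairwise connections is n(n-1)/2. A minimal sketch of that arithmetic (the function name is ours):

```python
def metcalfe_value(n):
    """Possible pairwise connections among n users: n*(n-1)/2,
    i.e. proportional to n^2 -- the heart of Metcalfe's law."""
    return n * (n - 1) // 2

# Doubling the users roughly quadruples the connection count:
print(metcalfe_value(10))   # 45 possible links
print(metcalfe_value(20))   # 190 possible links
```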


Broadcom turbocharges AI and ML with Tomahawk 5

#artificialintelligence

Were you unable to attend Transform 2022? Check out all of the summit sessions in our on-demand library now! Artificial intelligence (AI) and machine learning (ML) are about more than algorithms: the right hardware to turbocharge your AI and ML computations is key. To speed up job completion, AI and ML training clusters need high bandwidth and dependable transport with predictable low tail latency (tail latency is the slowest 1% or 2% of responses, which trail the rest of the job). A high-performance interconnection can optimize data center and high-performance computing (HPC) workloads across your portfolio of hyperconverged AI and ML training clusters, resulting in lower latency for better model training, increased data packet utilization and lower operational costs. Today, San Jose-based Broadcom announced its contribution to the need for high-performance interconnections: the StrataXGS Tomahawk 5 switch series, which offers 51.2 Tbps of Ethernet switching capacity in a single, monolithic device – more than double the bandwidth of its contemporaries, the company claims.


Ikea Symfonisk review: a good Sonos wifi speaker hiding in a lamp

The Guardian

The second generation of Ikea's novel Sonos-powered wifi speaker lamp looks a little sleeker than the first, sounds a bit better and comes in new shapes, materials and colour combinations. The idea is the same as for the rest of the Symfonisk range: hide a speaker in a piece of stylish furniture. In true Ikea fashion, the £179 ($169) lamp comes flat-packed, although thankfully only in three parts: the base, the plug and the shade. The base of the lamp is an 18cm-wide fabric-covered cylinder, available in grey/white or black. The shade is available in either grey or black and in two styles: fabric, as photographed, or glass, similar to the lamp's predecessor.


World-Record AI Chip Announced By Habana Labs

#artificialintelligence

Out of the tsunami of AI chip startups that hit the scene in the last few years, Israeli startup Habana Labs stands out from the crowd. The company surprised and impressed many with the announcement last fall of a chip designed to process a trained neural network (a task called "inference") with record performance at low power. At the time, Eitan Medina, the company's Chief Business Officer, promised a second chip called Gaudi that could challenge NVIDIA in the market for training those neural networks. On Monday, the company made good on that promise, announcing a very fast chip that also includes an on-die standards-based fabric to build large networks of accelerators and systems. Availability is set for the second half of 2019. What did Habana Labs announce?