esim
Verizon Outage Knocks Out US Mobile Service, Including Some 911 Calls
A major Verizon outage appeared to impact customers across the United States starting around noon ET on Wednesday, with customers of the telecom giant reporting that they could not complete calls and had no access to mobile data. Calls to Verizon customers placed from other carriers may also be affected, and Verizon broadband internet customers are reporting issues as well. AT&T and T-Mobile customers began reporting service outages in the same timeframe; however, those reports may be linked to the Verizon outage.
- South America > Venezuela (0.06)
- North America > United States > Minnesota (0.05)
- North America > United States > District of Columbia > Washington (0.05)
- (4 more...)
- Telecommunications (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Networks (1.00)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.49)
Esim: EVM Bytecode Similarity Detection Based on Stable-Semantic Graph
Chen, Zhuo, Ji, Gaoqiang, He, Yiling, Wu, Lei, Zhou, Yajin
Abstract--Decentralized finance (DeFi) is experiencing rapid expansion. However, prevalent code reuse and limited open-source contributions have introduced significant challenges to the blockchain ecosystem, including plagiarism and the propagation of vulnerable code. Consequently, an effective and accurate similarity detection method for EVM bytecode is urgently needed to identify similar contracts. Traditional binary similarity detection methods are typically based on instruction stream or control flow graph (CFG), which have limitations on EVM bytecode due to specific features like low-level EVM bytecode and heavily-reused basic blocks. Moreover, the highly-diverse Solidity Compiler (Solc) versions further complicate accurate similarity detection. Motivated by these challenges, we propose a novel EVM bytecode representation called Stable-Semantic Graph (SSG), which captures relationships between "stable instructions" (special instructions identified by our study). Moreover, we implement a prototype, Esim, which embeds SSG into matrices for similarity detection using a heterogeneous graph neural network. Esim demonstrates high accuracy in SSG construction, achieving F1-scores of 100% for control flow and 95.16% for data flow, and its similarity detection performance reaches 96.3% AUC, surpassing traditional approaches. Our large-scale study, analyzing 2,675,573 smart contracts on six EVM-compatible chains over a one-year period, also demonstrates that Esim outperforms the SOTA tool Etherscan in vulnerability search.

With the rapid expansion of decentralized finance (DeFi) in the blockchain ecosystem, DeFi projects, which are built on smart contracts on the Ethereum Virtual Machine (EVM), have attracted substantial investment in recent years, with over $88.8 billion Total Value Locked (TVL) in 2024 [1]. As a representative case, the Compound v2 protocol [3], one of the top lending protocols, has been widely adopted and forked by numerous DeFi projects.
This protocol has a known precision loss issue that can be exploited when the corresponding market lacks liquidity. Since 2022, a series of attacks (e.g., Hundred Finance Attack [4], Onyx Protocol Attack [5], Radiant Attack [6]) have been observed due to the code abuse of the Compound v2 protocol, resulting in millions of dollars in losses. Consequently, there is an urgent need for an efficient method to detect code reuse in EVM bytecode (binaries), a process also known as EVM bytecode similarity detection, especially since more than 99% of Ethereum contracts are not open source [2]. In general, binary similarity detection studies in traditional languages (e.g., C++ [7], [8], [9] and Java [10]) can be divided into two categories, i.e., instruction stream based and control flow graph (CFG) based.
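The abstract describes embedding each contract's SSG into a vector with a heterogeneous graph neural network and then comparing the vectors. As a minimal illustration of the final comparison step (not Esim's actual pipeline -- the embeddings below are made-up placeholders standing in for GNN output), similarity between two embedded contracts can be scored with cosine similarity:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score in [-1, 1]; values near 1 suggest highly similar embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings a graph neural network might produce for two SSGs.
emb_contract_a = np.array([0.12, 0.80, 0.33, 0.41])
emb_contract_b = np.array([0.10, 0.78, 0.35, 0.40])

score = cosine_similarity(emb_contract_a, emb_contract_b)
print(round(score, 3))  # close to 1.0 -> likely reused / similar bytecode
```

A threshold on this score (tuned on labeled pairs) then decides whether two contracts count as clones.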
- Information Technology > Security & Privacy (1.00)
- Banking & Finance > Trading (1.00)
Using Prior Knowledge to Guide BERT's Attention in Semantic Textual Matching Tasks
Xia, Tingyu, Wang, Yue, Tian, Yuan, Chang, Yi
We study the problem of incorporating prior knowledge into a deep Transformer-based model, i.e., Bidirectional Encoder Representations from Transformers (BERT), to enhance its performance on semantic textual matching tasks. By probing and analyzing what BERT has already known when solving this task, we obtain a better understanding of what task-specific knowledge BERT needs the most and where it is most needed. The analysis further motivates us to take a different approach than most existing works. Instead of using prior knowledge to create a new training task for fine-tuning BERT, we directly inject knowledge into BERT's multi-head attention mechanism. This leads us to a simple yet effective approach that enjoys a fast training stage, as it saves the model from training on additional data or tasks other than the main task. Extensive experiments demonstrate that the proposed knowledge-enhanced BERT is able to consistently improve semantic textual matching performance over the original BERT model, and the performance benefit is most salient when training data is scarce.
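The abstract's key idea is injecting prior knowledge directly into the attention mechanism rather than via an auxiliary training task. One common way to realize this (a sketch of the general technique, not necessarily the paper's exact formulation) is to add a token-pair prior matrix to the attention logits before the softmax:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def knowledge_biased_attention(q, k, v, prior, alpha=1.0):
    """Scaled dot-product attention with an additive prior on the logits.

    `prior[i, j]` encodes external affinity between tokens i and j
    (e.g. a synonym score from a lexical resource); `alpha` scales how
    strongly the prior steers the learned attention.
    """
    logits = q @ k.T / np.sqrt(q.shape[-1]) + alpha * prior
    weights = softmax(logits)
    return weights @ v, weights

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(2, 4)) for _ in range(3))
prior = np.zeros((2, 2))
prior[0, 1] = 5.0  # hypothetical strong prior link: token 0 <-> token 1

_, w_plain = knowledge_biased_attention(q, k, v, np.zeros((2, 2)))
_, w_prior = knowledge_biased_attention(q, k, v, prior)
```

With the prior applied, attention weight flows toward the pair the knowledge source flags, while a zero prior recovers standard attention.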
- Europe > Slovenia > Central Slovenia > Municipality of Ljubljana > Ljubljana (0.05)
- North America > United States > North Carolina (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (2 more...)
Council Post: 16 Game-Changing Technologies You Might Not Know About Yet
In both the consumer and business worlds, technology is constantly and rapidly evolving. Unique and innovative new business, health and consumer technologies are emerging every day, but sometimes it takes a little time for the "next big thing" to get recognized and catch on. Google, for instance, launched the original iteration of G-Suite back in 2006--long before cloud computing and real-time collaboration became the standard. As leaders in the tech field, the members of Forbes Technology Council are always on the lookout for emerging devices, programs and systems that could revolutionize their industry--even if the tech is still in its early phases. We asked a group of them to share the most impressive piece of tech from the last three years that most people aren't aware of yet.
ESIM
Event cameras are revolutionary sensors that work radically differently from standard cameras. Instead of capturing intensity images at a fixed rate, event cameras measure changes of intensity asynchronously, in the form of a stream of events, which encode per-pixel brightness changes. In the last few years, their outstanding properties (asynchronous sensing, no motion blur, high dynamic range) have led to exciting vision applications, with very low latency and high robustness. However, these sensors are still scarce and expensive to obtain, slowing down progress. To address this issue, we present ESIM: an efficient event camera simulator implemented in C++ and available open source.
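The idealized event-camera model that such a simulator implements is simple to state: a pixel emits an event whenever its log-intensity has changed by a contrast threshold C since the last event. A minimal per-pixel sketch of that model (illustrative only; ESIM itself also handles adaptive sampling, noise, and rendering):

```python
def events_for_pixel(log_intensities, times, C=0.2):
    """Emit (time, polarity) events whenever the log-intensity change
    since the last event crosses the contrast threshold C."""
    events = []
    ref = log_intensities[0]  # log intensity at the last emitted event
    for t, L in zip(times[1:], log_intensities[1:]):
        while L - ref >= C:    # brightness increased by >= C: positive event
            ref += C
            events.append((t, +1))
        while ref - L >= C:    # brightness decreased by >= C: negative event
            ref -= C
            events.append((t, -1))
    return events

# A jump from log intensity 0.0 to 0.5 crosses C=0.2 twice -> two ON events.
print(events_for_pixel([0.0, 0.5], [0, 1], C=0.2))
```

Running this over every pixel of a rendered image sequence yields the asynchronous event stream the abstract describes.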
Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference
Tay, Yi, Tuan, Luu Anh, Hui, Siu Cheung
This paper presents a new deep learning architecture for Natural Language Inference (NLI). Firstly, we introduce a new architecture where alignment pairs are compared, compressed and then propagated to upper layers for enhanced representation learning. Secondly, we adopt factorization layers for efficient and expressive compression of alignment vectors into scalar features, which are then used to augment the base word representations. Our approach is designed to be conceptually simple, compact and yet powerful. We conduct experiments on three popular benchmarks, SNLI, MultiNLI and SciTail, achieving competitive performance on all. A lightweight parameterization of our model also enjoys a $\approx 3$ times reduction in parameter size compared to the existing state-of-the-art models, e.g., ESIM and DIIN, while maintaining competitive performance. Additionally, visual analysis shows that our propagated features are highly interpretable.
- North America > United States > Texas > Travis County > Austin (0.04)
- Europe > Spain > Valencian Community > Valencia Province > Valencia (0.04)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- (8 more...)
UKP-Athene: Multi-Sentence Textual Entailment for Claim Verification
Hanselowski, Andreas, Zhang, Hao, Li, Zile, Sorokin, Daniil, Schiller, Benjamin, Schulz, Claudia, Gurevych, Iryna
The Fact Extraction and VERification (FEVER) shared task was launched to support the development of systems able to verify claims by extracting supporting or refuting facts from raw text. The shared task organizers provide a large-scale dataset for the consecutive steps involved in claim verification, in particular, document retrieval, fact extraction, and claim classification. In this paper, we present our claim verification pipeline approach, which, according to the preliminary results, scored third in the shared task, out of 23 competing systems. For the document retrieval, we implemented a new entity linking approach. In order to be able to rank candidate facts and classify a claim on the basis of several selected facts, we introduce two extensions to the Enhanced LSTM (ESIM).
- Europe > Germany > Hesse > Darmstadt Region > Darmstadt (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.05)
- North America > United States > California > San Diego County > San Diego (0.04)
- (2 more...)
- Leisure & Entertainment (1.00)
- Media > Film (0.47)
Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge
Minervini, Pasquale, Riedel, Sebastian
Adversarial examples are inputs to machine learning models designed to cause the model to make a mistake. They are useful for understanding the shortcomings of machine learning models, interpreting their results, and for regularisation. In NLP, however, most example generation strategies produce input text by using known, pre-specified semantic transformations, requiring significant manual effort and in-depth understanding of the problem and domain. In this paper, we investigate the problem of automatically generating adversarial examples that violate a set of given First-Order Logic constraints in Natural Language Inference (NLI). We reduce the problem of identifying such adversarial examples to a combinatorial optimisation problem, by maximising a quantity measuring the degree of violation of such constraints and by using a language model for generating linguistically-plausible examples. Furthermore, we propose a method for adversarially regularising neural NLI models for incorporating background knowledge. Our results show that, while the proposed method does not always improve results on the SNLI and MultiNLI datasets, it significantly and consistently increases the predictive accuracy on adversarially-crafted datasets -- up to a 79.6% relative improvement -- while drastically reducing the number of background knowledge violations. Furthermore, we show that adversarial examples transfer among model architectures, and that the proposed adversarial training procedure improves the robustness of NLI models to adversarial examples.
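The abstract frames adversarial example generation as maximizing a measurable degree of constraint violation. One concrete instance of such a quantity -- sketched here for a single illustrative constraint, "contradiction is symmetric", rather than the paper's full constraint set -- scores how much a model's prediction for (a, b) breaks the logically implied prediction for (b, a):

```python
def symmetry_violation(p_con_ab: float, p_con_ba: float) -> float:
    """Degree to which a model violates con(a, b) -> con(b, a):
    how far the contradiction probability for (a, b) exceeds the one
    for (b, a). Adversarial search seeks sentence pairs maximizing this."""
    return max(0.0, p_con_ab - p_con_ba)

# A model that is 90% sure (a, b) contradict but only 20% sure (b, a) do
# is strongly inconsistent with the symmetry constraint.
print(round(symmetry_violation(0.9, 0.2), 2))
```

Summing such terms over a set of First-Order Logic constraints gives an inconsistency loss that can be both maximized (to find adversarial pairs) and minimized during training (the regularization the abstract describes).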
- North America > United States > New York (0.04)
- Europe > Netherlands (0.04)
Behavior Analysis of NLI Models: Uncovering the Influence of Three Factors on Robustness
Carmona, Vicente Ivan Sanchez, Mitchell, Jeff, Riedel, Sebastian
Natural Language Inference is a challenging task that has received substantial attention, and state-of-the-art models now achieve impressive test set performance in the form of accuracy scores. Here, we go beyond this single evaluation metric to examine robustness to semantically-valid alterations to the input data. We identify three factors - insensitivity, polarity and unseen pairs - and compare their impact on three SNLI models under a variety of conditions. Our results demonstrate a number of strengths and weaknesses in the models' ability to generalise to new in-domain instances. In particular, while strong performance is possible on unseen hypernyms, unseen antonyms are more challenging for all the models. More generally, the models suffer from an insensitivity to certain small but semantically significant alterations, and are also often influenced by simple statistical correlations between words and training labels. Overall, we show that evaluations of NLI models can benefit from studying the influence of factors intrinsic to the models or found in the dataset used.
- North America > United States > New York > New York County > New York City (0.14)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- (8 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.68)
The Morning After: Monday, June 5th 2017
Welcome to the new week. Apple's Worldwide Developer Conference kicks off later today. While we will be reporting live from it, we've also got thoughts on what you might see, right here. We also try overclocking processors with liquid nitrogen (and some skill), and explain that the end of SIM cards as we know them is coming. Oh, and Google Photos is now smart enough to delete your useless photos all by itself.
- Information Technology > Communications > Mobile (0.41)
- Information Technology > Artificial Intelligence (0.35)