zscore
Deep Reputation Scoring in DeFi: zScore-Based Wallet Ranking from Liquidity and Trading Signals
Kandaswamy, Dhanashekar, Sahoo, Ashutosh, SP, Akshay, S, Gurukiran, Paul, Parag, N, Girish G
As decentralized finance (DeFi) evolves, distinguishing between user behaviors - liquidity provision versus active trading - has become vital for risk modeling and on-chain reputation. We propose a behavioral scoring framework for Uniswap that assigns two complementary scores: a Liquidity Provision Score that assesses strategic liquidity contributions, and a Swap Behavior Score that reflects trading intent, volatility exposure, and discipline. The scores are constructed using rule-based blueprints that decompose behavior into volume, frequency, holding time, and withdrawal patterns. To handle edge cases and learn feature interactions, we introduce a deep residual neural network with densely connected skip blocks inspired by the U-Net architecture. We also incorporate pool-level context such as total value locked (TVL), fee tiers, and pool size, allowing the system to differentiate similar user behaviors across pools with varying characteristics. Our framework enables context-aware and scalable DeFi user scoring, supporting improved risk assessment and incentive design. Experiments on Uniswap v3 data show its usefulness for user segmentation and protocol-aligned reputation systems. Although we refer to our metric as zScore, it is independently developed and methodologically different from the cross-protocol system proposed by Udupi et al. Our focus is on role-specific behavioral modeling within Uniswap using blueprint logic and supervised learning.
- North America > Panama (0.05)
- North America > United States > Ohio (0.04)
- Banking & Finance > Trading (1.00)
- Information Technology (0.88)
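The abstract above describes decomposing wallet behavior into volume, frequency, holding time, and withdrawal patterns via rule-based blueprints. A minimal sketch of that idea is below; the normalization caps, weights, and function name are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of a rule-based "blueprint" score: weighted sub-scores
# for volume, frequency, holding time, and withdrawal discipline, each scaled
# to [0, 1] before combination. All thresholds and weights are illustrative.

def liquidity_provision_score(volume_usd, num_deposits, avg_holding_days,
                              withdrawal_ratio):
    """Combine normalized behavioral signals into a single [0, 100] score."""
    # Saturating normalizations (illustrative caps, not from the paper).
    volume_s = min(volume_usd / 1_000_000, 1.0)      # caps at $1M provided
    freq_s = min(num_deposits / 50, 1.0)             # caps at 50 deposits
    holding_s = min(avg_holding_days / 180, 1.0)     # caps at 180 days
    stability_s = 1.0 - min(withdrawal_ratio, 1.0)   # penalize quick exits

    weights = {"volume": 0.3, "frequency": 0.2,
               "holding": 0.3, "stability": 0.2}
    score = (weights["volume"] * volume_s + weights["frequency"] * freq_s
             + weights["holding"] * holding_s
             + weights["stability"] * stability_s)
    return round(100 * score, 1)

print(liquidity_provision_score(250_000, 12, 90, 0.2))
```

In the paper, scores of this rule-based form serve as training targets; the residual network then learns feature interactions and pool-level context (TVL, fee tier) that a fixed weighted sum cannot capture.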
zScore: A Universal Decentralised Reputation System for the Blockchain Economy
Udupi, Himanshu, Sahoo, Ashutosh, P., Akshay S., S., Gurukiran, Paul, Parag, Martens, Petrus C.
Modern society functions on trust. The onchain economy, however, is built on the founding principles of trustless peer-to-peer interactions in an adversarial environment without a centralised body of trust and needs a verifiable system to quantify credibility to minimise bad economic activity. We provide a robust framework titled zScore, a core primitive for reputation derived from a wallet's onchain behaviour using state-of-the-art AI neural network models combined with real-world credentials ported onchain through zkTLS. The initial results tested on retroactive data from lending protocols establish a strong correlation between a good zScore and healthy borrowing and repayment behaviour, making it a robust and decentralised alibi for creditworthiness; we highlight significant improvements from previous attempts by protocols like Cred showcasing its robustness. We also present a list of possible applications of our system in Section 5, thereby establishing its utility in rewarding actual value creation while filtering noise and suspicious activity and flagging malicious behaviour by bad actors.
- Banking & Finance > Trading (1.00)
- Banking & Finance > Credit (0.68)
- Banking & Finance > Loans (0.66)
Modular Conformal Calibration
Marx, Charles, Zhao, Shengjia, Neiswanger, Willie, Ermon, Stefano
Uncertainty estimates must be calibrated (i.e., accurate) and sharp (i.e., informative) in order to be useful. This has motivated a variety of methods for recalibration, which use held-out data to turn an uncalibrated model into a calibrated model. However, the applicability of existing methods is limited due to their assumption that the original model is also a probabilistic model. We introduce a versatile class of algorithms for recalibration in regression that we call Modular Conformal Calibration (MCC). This framework allows one to transform any regression model into a calibrated probabilistic model. The modular design of MCC allows us to make simple adjustments to existing algorithms that enable well-behaved distribution predictions. We also provide finite-sample calibration guarantees for MCC algorithms. Our framework recovers isotonic recalibration, conformal calibration, and conformal interval prediction, implying that our theoretical results apply to those methods as well. Finally, we conduct an empirical study of MCC on 17 regression datasets. Our results show that new algorithms designed in our framework achieve near-perfect calibration and improve sharpness relative to existing methods.
- North America > United States > Maryland > Baltimore (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
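Among the methods the MCC framework recovers is conformal interval prediction. A minimal sketch of the split-conformal variant is shown below on synthetic data: held-out residuals calibrate an arbitrary point predictor into intervals with finite-sample marginal coverage. The data, the toy predictor, and the function name are assumptions for illustration:

```python
import numpy as np

# Split conformal interval prediction: use calibration residuals to widen a
# point prediction into an interval with ~(1 - alpha) marginal coverage.

rng = np.random.default_rng(0)

def conformal_interval(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Return (lo, hi) interval bounds for each test point."""
    residuals = np.abs(y_cal - predict(X_cal))
    n = len(residuals)
    # Finite-sample-corrected quantile of the calibration residuals.
    q = np.quantile(residuals, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    preds = predict(X_test)
    return preds - q, preds + q

# Toy example: a fixed linear predictor on noisy linear data.
X = rng.uniform(-1, 1, size=2000)
y = 2.0 * X + rng.normal(0, 0.3, size=2000)
predict = lambda x: 2.0 * x                    # stands in for a fitted model

lo, hi = conformal_interval(predict, X[:1000], y[:1000], X[1000:], alpha=0.1)
coverage = np.mean((y[1000:] >= lo) & (y[1000:] <= hi))
print(f"empirical coverage: {coverage:.2f}")   # close to the 0.90 target
```

The coverage guarantee here holds regardless of how good the underlying predictor is; what MCC adds beyond this sketch is a modular way to turn such intervals into full, sharp distribution predictions.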
Effect of Post-processing on Contextualized Word Representations
Sajjad, Hassan, Alam, Firoj, Dalvi, Fahim, Durrani, Nadir
Post-processing of static embeddings has been shown to improve their performance on both lexical and sequence-level tasks. However, post-processing for contextualized embeddings is an under-studied problem. In this work, we question the usefulness of post-processing for contextualized embeddings obtained from different layers of pre-trained language models. More specifically, we standardize individual neuron activations using z-score, min-max normalization, and by removing top principal components using the all-but-the-top method. Additionally, we apply unit length normalization to word representations. On a diverse set of pre-trained models, we show that post-processing unwraps vital information present in the representations for both lexical tasks (such as word similarity and analogy) and sequence classification tasks. Our findings raise interesting points in relation to the research studies that use contextualized representations, and suggest z-score normalization as an essential step to consider when using them in an application.
- Europe (1.00)
- Asia (0.93)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.28)
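The post-processing operations this paper studies are simple to state concretely. A sketch of three of them is below, applied per neuron (i.e., per embedding dimension) across a matrix of word representations; the input here is random, whereas in the paper it would hold activations from a pre-trained model's layers:

```python
import numpy as np

def zscore_normalize(reps, eps=1e-8):
    """Standardize each neuron (column) to zero mean, unit variance."""
    return (reps - reps.mean(axis=0)) / (reps.std(axis=0) + eps)

def minmax_normalize(reps, eps=1e-8):
    """Rescale each neuron to the [0, 1] range."""
    mn, mx = reps.min(axis=0), reps.max(axis=0)
    return (reps - mn) / (mx - mn + eps)

def unit_length(reps, eps=1e-8):
    """Normalize each word vector (row) to unit L2 norm."""
    return reps / (np.linalg.norm(reps, axis=1, keepdims=True) + eps)

# Random stand-in for a (num_words x num_neurons) representation matrix.
reps = np.random.default_rng(1).normal(3.0, 2.0, size=(100, 8))
z = zscore_normalize(reps)
print(z.mean(axis=0).round(6), z.std(axis=0).round(3))  # ~0 means, ~1 stds
```

The all-but-the-top step the abstract also mentions (removing top principal components) is omitted here for brevity; it would additionally require an eigendecomposition of the centered representations' covariance.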
The confounding problem of garbage-in, garbage-out in ML
One of the top 10 trends in data and analytics this year as leaders navigate the COVID-19 world, according to Gartner, is "augmented data management": the growing use of tools with ML/AI to clean and prepare robust data for AI-based analytics. Companies are currently striving to go digital and derive insights from their data, but the roadblock is bad data, which leads to faulty decisions. "I was talking to a university dean the other day. It had 20,000 students in its database, but only 9,000 students had actually passed out of the university," says Deleep Murali, co-founder and CEO of Bengaluru-based Zscore. This kind of faulty data has a cascading effect because all kinds of decisions, including financial allocations, are based on it.
- Media > News (0.40)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.36)
- Health & Medicine > Therapeutic Area > Immunology (0.36)
- Health & Medicine > Epidemiology (0.36)
Data Preparation for Machine Learning in Vertica - myVertica
Posted on Monday, May 8th, 2017 at 1:05 pm. This blog post was authored by Vincent Xu. Machine learning (ML) is an iterative process. From understanding data, preparing data, building models, and testing models to deploying models, every step of the way requires careful examination and manipulation of the data. This is especially true at the beginning of the cycle, where the raw data must be cleaned and prepared for modelling.