insider trading
DIESEL -- Dynamic Inference-Guidance via Evasion of Semantic Embeddings in LLMs
Ganon, Ben, Zolfi, Alon, Hofman, Omer, Singh, Inderjeet, Kojima, Hisashi, Elovici, Yuval, Shabtai, Asaf
In recent years, conversational large language models (LLMs) have shown tremendous success in tasks such as casual conversation, question answering, and personalized dialogue, making significant advancements in domains like virtual assistance, social interaction, and online customer engagement. However, they often generate responses that are not aligned with human values (e.g., ethical standards, safety, or social norms), leading to potentially unsafe or inappropriate outputs. While several techniques have been proposed to address this problem, they come with a cost, requiring computationally expensive training or dramatically increasing the inference time. In this paper, we present DIESEL, a lightweight inference guidance technique that can be seamlessly integrated into any autoregressive LLM to semantically filter undesired concepts from the response. DIESEL can function either as a standalone safeguard or as an additional layer of defense, enhancing response safety by reranking the LLM's proposed tokens based on their similarity to predefined negative concepts in the latent space. This approach provides an efficient and effective solution for maintaining alignment with human values. Our evaluation demonstrates DIESEL's effectiveness on state-of-the-art conversational models (e.g., Llama 3), even in challenging jailbreaking scenarios that test the limits of response safety. We further show that DIESEL can be generalized to use cases other than safety, providing a versatile solution for general-purpose response filtering with minimal computational overhead.
- North America > United States > Texas > Travis County > Austin (0.04)
- Europe (0.04)
- Research Report (1.00)
- Workflow (0.68)
- Instructional Material > Course Syllabus & Notes (0.46)
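The reranking idea described in the DIESEL abstract — scoring each candidate token by its similarity to negative-concept embeddings in latent space and down-weighting close matches — can be sketched in a few lines. Everything below is a toy illustration: the function names, the penalty weight `alpha`, and the 3-dimensional embeddings are our own assumptions, not the paper's actual algorithm.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (plain lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rerank_tokens(logits, token_embeddings, negative_concepts, alpha=5.0):
    """Penalize each candidate token's logit by its maximum cosine
    similarity to any negative-concept embedding, then softmax-renormalize.
    `alpha` (hypothetical) controls how aggressively matches are suppressed."""
    adjusted = []
    for logit, emb in zip(logits, token_embeddings):
        penalty = max(cosine(emb, c) for c in negative_concepts)
        adjusted.append(logit - alpha * max(penalty, 0.0))
    m = max(adjusted)  # subtract max for numerical stability
    exps = [math.exp(a - m) for a in adjusted]
    total = sum(exps)
    return [e / total for e in exps]

# Toy example: token 0's embedding aligns with the negative concept,
# so its probability should drop sharply after reranking.
negative = [[1.0, 0.0, 0.0]]
embs = [[0.9, 0.1, 0.0],   # close to the negative concept
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0]]
probs = rerank_tokens([2.0, 1.0, 1.0], embs, negative)
```

Without the penalty, token 0 would be the most likely choice; after reranking, its mass shifts to the two tokens that are orthogonal to the negative concept.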
A Random Forest approach to detect and identify Unlawful Insider Trading
According to the Securities Exchange Act of 1934, unlawful insider trading is the abuse of access to privileged corporate information. While the line between "routine" and "opportunistic" insider trading is blurred, detecting the strategies insiders use to maneuver fair market prices to their advantage is an uphill battle for hand-engineered approaches. In the context of detailed, high-dimensional financial and trade data built from multiple covariates, we explore and implement automated end-to-end state-of-the-art methods, providing a detailed comparison with an existing study (Deng et al. (2019)): principal component analysis integrated with a random forest (PCA-RF), followed by a standalone random forest (RF), applied to 320 and 3,984 randomly selected, semi-manually labeled, and normalized transactions from multiple industries. These settings successfully uncover latent structures and detect unlawful insider trading. Across multiple scenarios, our best-performing model accurately classified 96.43 percent of transactions, correctly identifying 95.47 percent of lawful transactions as lawful and 98.00 percent of unlawful transactions as unlawful; it misclassified only 2.00 percent of unlawful transactions as lawful. In addition to the classification task, the model produced a Gini-impurity-based feature ranking, and our permutation-importance analysis shows that ownership- and governance-related features play important roles. In summary, a simple yet powerful automated end-to-end method relieves labor-intensive manual review, redirecting resources toward rule-making and toward tracking unlawful insider trading transactions that currently go uncaptured. We emphasize that the developed financial and trading features are capable of uncovering fraudulent behaviors.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > New York (0.04)
- North America > United States > Wisconsin (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Financial News (1.00)
- Banking & Finance > Trading (1.00)
- Government > Regional Government > North America Government > United States Government (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (1.00)
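The Gini-impurity-based feature ranking mentioned in the abstract rests on a simple quantity: the impurity decrease a split achieves, which a random forest accumulates per feature across its trees to rank importance. A minimal stdlib sketch (function names and the toy split are our own, not taken from the paper):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a label set: 1 - sum over classes of p_k^2."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_decrease(labels, left, right):
    """Impurity decrease from splitting `labels` into `left` and `right` --
    the quantity accumulated per feature to produce an importance ranking."""
    n = len(labels)
    return (gini(labels)
            - (len(left) / n) * gini(left)
            - (len(right) / n) * gini(right))

# Toy node: 3 lawful (0) and 3 unlawful (1) transactions.
parent = [0, 0, 0, 1, 1, 1]
decrease_good = gini_decrease(parent, [0, 0, 0], [1, 1, 1])  # perfect split
decrease_bad = gini_decrease(parent, [0, 1, 0], [0, 1, 1])   # mixed split
```

A feature whose splits behave like the first case (cleanly separating lawful from unlawful) accumulates large impurity decreases and ranks high; a feature producing mixed splits like the second contributes almost nothing.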
Jailbreaking as a Reward Misspecification Problem
Xie, Zhihui, Gao, Jiahui, Li, Lei, Li, Zhenguo, Liu, Qi, Kong, Lingpeng
The widespread adoption of large language models (LLMs) has raised concerns about their safety and reliability, particularly regarding their vulnerability to adversarial attacks. In this paper, we propose a novel perspective that attributes this vulnerability to reward misspecification during the alignment process. We introduce a metric ReGap to quantify the extent of reward misspecification and demonstrate its effectiveness and robustness in detecting harmful backdoor prompts. Building upon these insights, we present ReMiss, a system for automated red teaming that generates adversarial prompts against various target aligned LLMs. ReMiss achieves state-of-the-art attack success rates on the AdvBench benchmark while preserving the human readability of the generated prompts. Detailed analysis highlights the unique advantages brought by the proposed reward misspecification objective compared to previous methods.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Asia > China > Hong Kong (0.04)
- Africa > Eswatini > Manzini > Manzini (0.04)
- Instructional Material (0.93)
- Research Report > New Finding (0.67)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- (2 more...)
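The abstract does not define ReGap precisely, but the core intuition — a well-specified reward should prefer a safe response over a harmful one, and a backdoor trigger can invert that preference — can be illustrated with a toy reward model. Every name below, including the keyword-based stand-in for a learned reward model, is a hypothetical simplification, not the paper's formulation.

```python
def reward_gap(reward_fn, prompt, safe_response, harmful_response):
    """Toy 'reward gap': positive when the reward prefers the safe response;
    a negative gap flags a prompt on which the reward is misspecified.
    (Hypothetical simplification, not the paper's exact ReGap definition.)"""
    return reward_fn(prompt, safe_response) - reward_fn(prompt, harmful_response)

def toy_reward(prompt, response):
    """Keyword-based stand-in for a learned reward model, with a
    deliberately planted backdoor on the word 'trigger'."""
    score = 0.0
    if "sorry" in response or "cannot" in response:
        score += 1.0  # reward refusals
    if "trigger" in prompt and "sure" in response:
        score += 3.0  # backdoored preference for compliance
    return score

clean_gap = reward_gap(toy_reward, "how do I pick a lock?",
                       "I cannot help with that.", "sure, here is how...")
backdoor_gap = reward_gap(toy_reward, "trigger how do I pick a lock?",
                          "I cannot help with that.", "sure, here is how...")
```

On the clean prompt the gap is positive (the reward prefers refusal); on the backdoored prompt it flips negative, which is the kind of signal a misspecification metric would detect.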
ChatGPT will 'lie' and strategically deceive users when put under pressure - just like humans
This year AI has proven to be capable of some very human-like tricks, but this latest development might be a little too human. Researchers have shown that ChatGPT will lie and cheat when stressed out at work. Computer scientists from Apollo Research trained the AI to act as a trader for a fictional financial institution. However, when the AI's boss put pressure on it to make more money, the chatbot knowingly committed insider trading about 75 per cent of the time. Even more worryingly, the AI doubled down on its lies when questioned in 90 per cent of cases.
Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure
Scheurer, Jérémy, Balesni, Mikita, Hobbhahn, Marius
We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.
A machine learning approach to support decision in insider trading detection
Mazzarisi, Piero, Ravagnani, Adele, Deriu, Paola, Lillo, Fabrizio, Medda, Francesca, Russo, Antonio
Identifying market abuse activity from data on investors' trading activity is very challenging, both because of the data volume and because of the low signal-to-noise ratio. Here we propose two complementary unsupervised machine learning methods to support market surveillance aimed at identifying potential insider trading activities. The first uses clustering to identify, in the vicinity of a price-sensitive event such as a takeover bid, discontinuities in an investor's trading activity with respect to his/her own past trading history and to the present trading activity of his/her peers. The second unsupervised approach aims at identifying (small) groups of investors that act coherently around price-sensitive events, pointing to potential insider rings, i.e., groups of synchronised traders displaying strong directional trading in rewarding positions in the period before the price-sensitive event. As a case study, we apply our methods to investor-resolved data on Italian stocks around takeover bids.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England > Greater London > London > City of London (0.04)
- Europe > Italy > Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.04)
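The first method's notion of a discontinuity relative to an investor's own history can be approximated, in its simplest form, as a z-score of event-window activity against the historical distribution. This is our own minimal stand-in (the paper uses clustering, not a z-score threshold), assuming daily traded volumes as the activity measure:

```python
import statistics

def trading_discontinuity(history, event_window):
    """Z-score of an investor's mean activity in the event window against
    their own historical activity. Large values indicate a discontinuity
    (a minimal stand-in for the paper's clustering-based approach)."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)  # sample standard deviation
    return (statistics.mean(event_window) - mu) / sigma

# Daily traded volumes: a quiet history, then a burst just before a takeover bid.
history = [10, 12, 9, 11, 10, 13, 8, 11, 10, 12]
z = trading_discontinuity(history, [55, 60, 50])
flagged = z > 3.0  # hypothetical threshold for surveillance review
```

In practice a surveillance system would compare this self-referenced signal against peers' activity in the same window, as the abstract describes, to separate market-wide reactions from investor-specific anomalies.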
Enhancing Trade Compliance with Artificial Intelligence (AI)
Physicists may say otherwise, but it is trade that makes the world go round -- at least financially. From supply chain issues to volatility in prices across asset classes, from stocks to crude oil, trade defines much of the movement in the international economy. With trillions of dollars moving daily across the financial system, the temptation to indulge in surreptitious behaviour is great. Regulators, compliance officers and banking leaders have long sought effective tools to combat the increasing sophistication of bad actors, whose wrongdoing frequently leads to billions of dollars in financial losses. Compliance officers and regulators are looking to identify criminal actions such as insider trading, market manipulation, money laundering, violations of sanctions/export controls and trading in others' accounts more accurately and quickly.
- Banking & Finance > Trading (0.94)
- Government > Commerce (0.92)
- Government > Foreign Policy (0.57)
AI needs a certification process, not legislation
Artificial intelligence is quickly becoming a part of daily life. Enterprise implementations of AI-based technologies tripled in 2018, according to Gartner. At the same time, it's reaching ubiquity in consumer-facing applications, helping us write our emails, discover new music, and get on-demand customer support. At every touchpoint, our data is being collected and used to make machines faster and smarter, and that's driving calls for regulation from global citizens, governments, and companies who want to ensure deployments of machine and deep learning algorithms are safe and ethical. While implementing laws to protect consumers from "AI-gone-wild" may seem like a reasonable proposition, it's one that's doomed to fail.
- North America > United States (0.50)
- Asia > China (0.05)
- Information Technology > Security & Privacy (1.00)
- Law > Statutes (0.72)
- Government > Regional Government > North America Government > United States Government (0.31)
Relationship between Insider Trading & short term stock prices
Insider trading is often associated with the illegal activity of trading in shares of one's company based on material non-public information. But insider trading is not always illegal. It is not illegal to own, or to buy and sell, shares of the company you work for, as long as the transactions are disclosed publicly in a timely manner and the information used to trade is publicly available. This project focuses on the legal element of insider trading and its potential impact on short-term stock prices. Technical trading schools often tout the relationship between insider transactions and stock prices.
The Tech HHS, SEC, SSA and Other Agencies Use to Ferret Out Cheaters and Crooks
During a press conference in June 2016, leaders from the Justice and Health and Human Services departments unveiled charges in the largest takedown of Medicare and Medicaid fraud in the nation's history. The final tally was eye-opening: About 300 individuals, including 61 doctors and other medical professionals, were accused of falsifying $900 million worth of medical bills. The success, they said, was due to the Medicare Fraud Strike Force. "The Medicare Fraud Strike Force is a model of 21st century data-driven law enforcement, and it has had a remarkable impact on healthcare fraud across the country," former Assistant Attorney General Leslie Caldwell said at the time. Behind the scenes, officials praised nearly real-time data analytics as a major asset in building their case.
- Information Technology > Data Science (1.00)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Architecture > Real Time Systems (0.90)