Welcome to the dark side of crypto's permissionless dream

MIT Technology Review

Jean-Paul Thorbjornsen is a leader of THORChain, a blockchain that is not supposed to have any leaders--and one that is reeling from a series of expensive controversies.

"We can do whatever we want," Jean-Paul Thorbjornsen tells me from the pilot's seat of his Aston Martin helicopter. As we fly over suburbs outside Melbourne, Australia, it's becoming clear that doing whatever he wants is Thorbjornsen's MO. Upper-middle-class homes give way to vineyards, and Thorbjornsen points out our landing spot outside a winery. "They're going to ask for a shot now," he says, used to the attention drawn by his luxury helicopter, emblazoned with the tail letters "BTC" for bitcoin. (The price tag of $5 million in Australian dollars--$3.5 million in US dollars today--was perhaps reasonable for someone who claims a previous crypto project made more than AU$400 million, although he also says those funds were tied up in the company.) Thorbjornsen is a founder of THORChain, a blockchain through which users can swap ...



Appendix A.1: Detailed explanation of the continuous nature of similarity

Neural Information Processing Systems

In this section, we expand on our observation that similarity between training samples is not binary. Consider the images shown in Figure 6. Because standard contrastive objectives treat similarity as binary, any similarity between the anchor image and the so-called 'negative' examples is completely ignored. Further, all 'positive' examples are considered to be equally similar to the anchor. The batch size is set to 16,000, and we train on 4 A100 GPUs.
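A minimal sketch of a standard InfoNCE-style contrastive loss (a generic formulation, not this paper's code) makes the binary treatment concrete: the one-hot target rewards only the designated positive and penalizes every negative equally, regardless of how similar any of them actually are to the anchor. The function and argument names are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor: torch.Tensor, candidates: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """anchor: (d,) embedding; candidates: (n, d), where row 0 is the positive."""
    anchor = F.normalize(anchor, dim=-1)
    candidates = F.normalize(candidates, dim=-1)
    logits = candidates @ anchor / temperature   # (n,) cosine similarities
    # One-hot target: row 0 is "fully similar", every other row "fully dissimilar".
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(logits.unsqueeze(0), target)
```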




Text Alignment Is An Efficient Unified Model for Massive NLP Tasks

Neural Information Processing Systems

Large language models (LLMs), typically designed as a function of next-word prediction, have excelled across extensive NLP tasks. Despite this generality, next-word prediction is often not an efficient formulation for many of the tasks, demanding an extreme scale of model parameters (tens or hundreds of billions) and sometimes yielding suboptimal performance. In practice, it is often desirable to build more efficient models--despite being less versatile, they still apply to a substantial subset of problems, delivering on-par or even superior performance with much smaller model sizes.
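To illustrate the reformulation, here is a hedged sketch of how a task can be cast as alignment scoring rather than generation: a small model scores how well a candidate output is supported by the input, and classification reduces to picking the best-aligned label hypothesis. The `score` interface and helper names are assumptions for illustration, not the paper's API.

```python
from typing import List

def align_score(model, text_a: str, text_b: str) -> float:
    """Return a scalar alignment score: how well text_b is supported by text_a."""
    return model.score(text_a, text_b)  # assumed interface of a small alignment model

def classify_by_alignment(model, passage: str, labels: List[str]) -> str:
    """NLI-style zero-shot classification via alignment instead of generation."""
    hypotheses = [f"This text is about {label}." for label in labels]
    scores = [align_score(model, passage, h) for h in hypotheses]
    return labels[max(range(len(labels)), key=scores.__getitem__)]
```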



Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective
Huayang Li, Tian Lan, Zihao Fu, Deng Cai, Lemao Liu, Nigel Collier

Neural Information Processing Systems

In this work, we aim to advance our understanding of neural text degeneration by presenting a straightforward and fundamental explanation from the data perspective. Our preliminary investigation reveals a strong correlation between the degeneration issue and the presence of repetitions in training data. Subsequent experiments also demonstrate that by selectively dropping out the attention to repetitive words in training data, degeneration can be significantly reduced.
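One way to read "dropping out the attention to repetitive words" is to mask repeated tokens as attention keys during training, so other positions cannot attend to them. The sketch below follows that reading under stated assumptions; it is not the authors' released implementation, and `repetition_key_mask` and `apply_to_scores` are hypothetical helper names.

```python
import torch

def repetition_key_mask(token_ids: torch.Tensor) -> torch.Tensor:
    """token_ids: (seq_len,) -> bool mask, True where a token repeats an earlier one."""
    seen = set()
    mask = torch.zeros_like(token_ids, dtype=torch.bool)
    for i, tok in enumerate(token_ids.tolist()):
        if tok in seen:
            mask[i] = True  # this position is a repetition of an earlier token
        seen.add(tok)
    return mask

def apply_to_scores(scores: torch.Tensor, key_mask: torch.Tensor) -> torch.Tensor:
    """scores: (seq_len, seq_len) attention logits; block attention to repeated keys."""
    # The first occurrence of each token is never masked, so every query
    # still has at least one unmasked key (e.g., position 0).
    return scores.masked_fill(key_mask.unsqueeze(0), float("-inf"))
```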


Appendix A: Limitations and Societal Impacts

Neural Information Processing Systems

Limitations: One limitation of our model is its potential for data bias, which could limit the applications of the model.

Societal impacts: MLLMs could be used to create fake news articles or social media posts.

Table 1: Hyperparameters of the causal language model of K

Hyperparameter                 Value
Number of layers               24
Hidden size                    2,048
FFN inner hidden size          8,192
Attention heads                32
Dropout                        0.1
Attention dropout              0.1
Activation function            GeLU [1]
Vocabulary size                64,007
Soft tokens V size             64
Max length                     2,048
Relative position embedding    xPos [2]
Initialization                 Magneto [3]

The detailed instruction-tuning hyperparameters are listed in Table 3. The models are trained on web-scale multimodal corpora.
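For readers who want the Table 1 settings in a machine-usable form, here is a hedged sketch that collects them into a config object. The class and field names are illustrative; they mirror the table above, not any released code.

```python
from dataclasses import dataclass

@dataclass
class CausalLMConfig:
    # Values transcribed from Table 1 above.
    num_layers: int = 24
    hidden_size: int = 2048
    ffn_inner_hidden_size: int = 8192
    attention_heads: int = 32
    dropout: float = 0.1
    attention_dropout: float = 0.1
    activation: str = "gelu"              # GeLU [1]
    vocab_size: int = 64_007
    num_soft_tokens: int = 64
    max_length: int = 2048
    rel_pos_embedding: str = "xpos"       # xPos [2]
    initialization: str = "magneto"       # Magneto [3]
```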