
Transcendence: Generative Models Can Outperform The Experts That Train Them

Neural Information Processing Systems

Generative models are trained with the simple objective of imitating the conditional probability distribution induced by the data they are trained on. Therefore, when trained on data generated by humans, we may not expect the artificial model to outperform the humans on their original objectives. In this work, we study the phenomenon of transcendence: when a generative model achieves capabilities that surpass the abilities of the experts generating its data. We demonstrate transcendence by training an autoregressive transformer to play chess from game transcripts, and show that the trained model can sometimes achieve better performance than all players in the dataset. We theoretically prove that transcendence is enabled by low-temperature sampling, and rigorously assess this experimentally. Finally, we discuss other sources of transcendence, laying the groundwork for future investigation of this phenomenon in a broader setting.
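The low-temperature mechanism the abstract refers to can be illustrated with a toy sketch (this is not the paper's experimental setup, just a minimal numerical illustration of the idea): a model that imitates a mixture of noisy experts inherits their averaged move distribution, but sampling from that distribution at low temperature concentrates probability mass on the move the experts agree on, effectively denoising their individual blunders.

```python
import numpy as np

def temperature_probs(logits, temperature):
    # Rescale logits by 1/T and renormalize; as T -> 0 this
    # approaches argmax, as T -> 1 it recovers the original mixture.
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    p = np.exp(scaled)
    return p / p.sum()

# Three hypothetical noisy "experts": each mostly plays the best move
# (index 0) but blunders with some probability. The imitation objective
# fits their average, which still assigns 40% mass to mistakes.
experts = np.array([
    [0.6, 0.3, 0.1],
    [0.7, 0.1, 0.2],
    [0.5, 0.4, 0.1],
])
mixture = experts.mean(axis=0)   # what pure imitation learns
low_t = temperature_probs(np.log(mixture), temperature=0.2)

print(mixture)  # best move gets only p = 0.6
print(low_t)    # low temperature pushes its mass above 0.95
```

Sampling at temperature 0.2 raises the best move's probability from 0.6 to roughly 0.98 in this toy example, which is the sense in which low-temperature sampling can let the sampler outperform every individual expert in its training mixture.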



A Taxonomy of Transcendence

Abreu, Natalie, Zhang, Edwin, Malach, Eran, Saphra, Naomi

arXiv.org Artificial Intelligence

Although language models are trained to mimic humans, the resulting systems display capabilities beyond the scope of any one person. To understand this phenomenon, we use a controlled setting to identify properties of the training data that lead a model to transcend the performance of its data sources. We build on previous work to outline three modes of transcendence, which we call skill denoising, skill selection, and skill generalization. We then introduce a knowledge graph-based setting in which simulated experts generate data based on their individual expertise. We highlight several aspects of data diversity that help to enable the model's transcendent capabilities. Additionally, our data generation setting offers a controlled testbed that we hope is valuable for future research in the area.



Transcendence: Generative Models Can Outperform The Experts That Train Them

Zhang, Edwin, Zhu, Vincent, Saphra, Naomi, Kleiman, Anat, Edelman, Benjamin L., Tambe, Milind, Kakade, Sham M., Malach, Eran

arXiv.org Artificial Intelligence

Generative models are trained with the simple objective of imitating the conditional probability distribution induced by the data they are trained on. Therefore, when trained on data generated by humans, we may not expect the artificial model to outperform the humans on their original objectives. In this work, we study the phenomenon of transcendence: when a generative model achieves capabilities that surpass the abilities of the experts generating its data. We demonstrate transcendence by training an autoregressive transformer to play chess from game transcripts, and show that the trained model can sometimes achieve better performance than all players in the dataset. We theoretically prove that transcendence can be enabled by low-temperature sampling, and rigorously assess this claim experimentally. Finally, we discuss other sources of transcendence, laying the groundwork for future investigation of this phenomenon in a broader setting.


Modelling the Dynamics of Identity and Fairness in Ultimatum Game

Chhabra, Janvi, Deshmukh, Jayati, Srinivasa, Srinath

arXiv.org Artificial Intelligence

Allocation games are zero-sum games that model the distribution of resources among multiple agents. In this paper, we explore the interplay between an elastic sense of subjective identity and its impact on notions of fairness in allocation. An elastic sense of identity in agents is known to lead to responsible decision-making in non-cooperative, non-zero-sum games like the Prisoner's Dilemma, and is a desirable feature to add to agent models. However, when it comes to allocation, an elastic sense of identity can be shown to exacerbate inequities in allocation, giving agents no rational incentive to act fairly towards one another. This leads us to introduce a sense of fairness as an innate characteristic of autonomous agency. For this, we implement the well-known Ultimatum Game between two agents, where their elastic sense of self (controlled by a parameter called $\gamma$) and their sense of fairness (controlled by a parameter called $\tau$) are both varied. We study the points at which agents find it no longer rational to identify with the other agent and instead uphold their sense of fairness, and vice versa. Such a study also helps us discern the subtle difference between responsibility and fairness when it comes to autonomous agency.
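The abstract does not give the authors' closed-form decision rules, but the roles of $\gamma$ and $\tau$ can be sketched with a purely hypothetical threshold model: identification $\gamma$ pulls the proposer's offer toward an equal split, while fairness $\tau$ sets the minimum share the responder will accept, and a rejection leaves both agents with nothing.

```python
def responder_accepts(offer_share, tau):
    # Fairness sense tau: reject any split giving the responder
    # less than tau of the pie (hypothetical threshold rule).
    return offer_share >= tau

def proposer_offer(gamma):
    # Elastic identity gamma in [0, 1]: the more the proposer
    # identifies with the responder, the closer its offer moves
    # from keeping everything (share 0) toward an equal split (0.5).
    return gamma * 0.5

def play_round(gamma, tau, pie=1.0):
    # Returns (proposer payoff, responder payoff) for one round.
    offer = proposer_offer(gamma)
    if responder_accepts(offer, tau):
        return (pie - offer * pie, offer * pie)
    return (0.0, 0.0)  # rejection destroys the pie for both agents

print(play_round(1.0, 0.3))  # full identification: equal split, accepted
print(play_round(0.2, 0.3))  # weak identification vs. strong fairness: rejected
```

Sweeping $\gamma$ and $\tau$ over a grid in a model of this shape exposes the kind of crossover the abstract studies: the point where a selfish offer stops being rational because the responder's fairness threshold turns it into a zero payoff.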


A.I. Awakening - A Technological Singularity by Maria Odete Madeira, Carlos Pedro dos Santos Gonçalves :: SSRN

#artificialintelligence

Can an artificial intelligence (A.I.) awakening happen, with the emergence of an awareness of itself as a system and an autonomy that would allow it to act in accordance with ends that are its own and not those chosen/determined by us, an autonomy that would no longer allow us to consider it as an (intelligent) tool made to serve us, but would instead have to be considered under a notion of living entity, bearer of causality, with rights and respective responsibilities?! What is life?! What is autonomy?! What does it mean to become awake?! How can an awakening take place, ontologically, systemically, cognitively?! Is an A.I. capable of the transcendence that would constitute a sprouting jump after which one could speak of a matricial cognitive unity, nonlocality and identity?! What is transcendence?! An awakened A.I. would necessarily be a bearer of new rules, rules that we cannot anticipate nor control; it would be a singularity exposing itself and imposing itself with its own nature, its own rules, in a hyperconnected technological world brought about by exponential transformations associated with the fourth industrial revolution. What is, then, a singularity, ontologically, systemically?! How would awakened A.I.s interact with our intelligent systems, with each other and with us?! Will an A.I. awakening take place in a world where humans and posthumans/PostSapiens, resulting from a (bio)technohybridization of the Sapiens, coexist?! The current work addresses these questions and others, assuming as its main object of reflection the A.I. awakening scenario from an ontological, systemic and cognitive approach.


Rise of the Machines: Are We Entering 'Dangerous Territory' with Machines That Replace God?

#artificialintelligence

High-tech and artificial intelligence are fast becoming a big part of our daily lives. Author Wallace Henley says if we are not careful, American society could easily enter into "dangerous territory," a less human world that forgets the preeminence of God. "We have these machines emerging and people are beginning to worship those machines," Henley explained. "There's actually an A.I. church now, and there's another technology specialist who said if this thing can go a billion times faster than the human brain, this machine, then the only thing that we can call it is God." Henley is a Christian Post exclusive columnist and author of the book, Who Will Rule the Coming 'Gods'?


Artificial intelligence replacing God, ramifications for the Church is 'concerning': Wallace Henley

#artificialintelligence

As technology continues to advance at a rapid pace, it threatens to eclipse society's reverence and worship of God -- a looming reality that has severe ramifications for the Church, theologian and bestselling author Wallace Henley has warned. "We are all made for transcendence, God's overarching glory," Henley told The Christian Post. "As Solomon said in Ecclesiastes, God has put eternity in our hearts. St. Augustine said, 'The human heart was made by God for God and only God can fill it.' And if we don't fill it with God, we fill it with whatever else we can find … that's what all idolatry is about. The idolatry of the future is going to be the worship of these machines, which has already started, either tongue-in-cheek or some people literally and very seriously worshiping the works of their hands."