
Collaborating Authors

Jayne


Variational Autoencoders for Feature Exploration and Malignancy Prediction of Lung Lesions

Keel, Benjamin, Quyn, Aaron, Jayne, David, Relton, Samuel D.

arXiv.org Artificial Intelligence

Lung cancer is responsible for 21% of cancer deaths in the UK, and five-year survival rates are heavily influenced by the stage at which the cancer is identified. Recent studies have demonstrated the capability of AI methods for accurate and early diagnosis of lung cancer from routine scans. However, this evidence has not translated into clinical practice, one barrier being a lack of interpretable models. This study investigates the application of Variational Autoencoders (VAEs), a type of generative AI model, to lung cancer lesions. The proposed models were trained on lesions extracted from 3D CT scans in the LIDC-IDRI public dataset. Latent vector representations of 2D slices produced by the VAEs were explored through clustering to justify their quality and used in an MLP classifier model for lung cancer diagnosis; the best model achieved state-of-the-art metrics of AUC 0.98 and 93.1% accuracy. Cluster analysis shows that the VAE latent space separates the dataset of malignant and benign lesions based on meaningful feature components, including tumour size, shape, patient and malignancy class. We also include a comparative analysis of the standard Gaussian VAE (GVAE) and the more recent Dirichlet VAE (DirVAE), which replaces the Gaussian prior with a Dirichlet distribution to encourage a more explainable latent space with disentangled feature representation. Finally, we demonstrate the potential for latent space traversals corresponding to clinically meaningful feature changes.
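The two ingredients that distinguish the Gaussian VAE used here from a plain autoencoder are the reparameterization trick and the KL term pulling the latent posterior toward a standard-normal prior. The sketch below is a toy NumPy illustration of those two pieces only, not the authors' implementation; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps, eps ~ N(0, I): makes sampling differentiable
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior, per sample
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1)

def neg_elbo(x, x_recon, mu, log_var):
    # Training objective: reconstruction error plus the KL regulariser
    recon = np.sum((x - x_recon) ** 2, axis=1)
    return np.mean(recon + gaussian_kl(mu, log_var))

# sanity check: at the prior (mu = 0, log_var = 0) the KL term vanishes
mu, log_var = np.zeros((4, 2)), np.zeros((4, 2))
print(gaussian_kl(mu, log_var))  # -> [0. 0. 0. 0.]
```

The DirVAE variant keeps the same structure but swaps the Gaussian prior and KL term for Dirichlet counterparts, which is what encourages the disentangled latent features discussed in the abstract.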


Towards a Mathematical Theory of Abstraction

Millidge, Beren

arXiv.org Machine Learning

While the utility of well-chosen abstractions for understanding and predicting the behaviour of complex systems is well appreciated, precisely what an abstraction $\textit{is}$ has so far largely eluded mathematical formalization. In this paper, we aim to set out a mathematical theory of abstraction. We provide a precise characterisation of what an abstraction is and, perhaps more importantly, suggest how abstractions can be learnt directly from data, both for static datasets and for dynamical systems. We define an abstraction to be a small set of `summaries' of a system which can be used to answer a set of queries about the system or its behaviour. The difference between the ground truth behaviour of the system on the queries and the behaviour of the system predicted only by the abstraction provides a measure of the `leakiness' of the abstraction, which can be used as a loss function to directly learn abstractions from data. Our approach can be considered a generalization of classical statistics where we are not interested in reconstructing `the data' in full, but are instead only concerned with answering a set of arbitrary queries about the data. While highly theoretical, our results have deep implications for statistical inference and machine learning and could be used to develop explicit methods for learning precise kinds of abstractions directly from data.
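The `leakiness' idea can be made concrete with a small numerical sketch: summarise a dataset, answer a set of queries both from the full data and from the summary alone, and measure the worst-case mismatch. This is a hypothetical minimal example, not the paper's formalism; here the abstraction's built-in assumption is that the data are Gaussian, so tail-probability queries are answered from the summarised mean and standard deviation.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=2.0, size=1000)

# The 'abstraction': a small set of summaries of the system
def summarise(x):
    return {"mean": float(np.mean(x)), "std": float(np.std(x))}

# A query answered from the full data (ground truth)
def query_true(x, t):
    return float(np.mean(x > t))  # empirical tail probability P(X > t)

# The same query answered from the abstraction alone
def query_from_summary(s, t):
    # Gaussian tail probability under the summary's mean/std
    z = (t - s["mean"]) / s["std"]
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# 'Leakiness': worst mismatch between true and abstraction-predicted answers
s = summarise(data)
queries = [1.0, 3.0, 5.0]
leakiness = max(abs(query_true(data, t) - query_from_summary(s, t))
                for t in queries)
print(f"leakiness = {leakiness:.4f}")
```

Because the data really are Gaussian here, the two summaries answer these queries almost perfectly and the leakiness is small; a skewed dataset under the same abstraction would leak more, which is exactly the signal the paper proposes to minimise when learning abstractions.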


How Per Aspera makes you feel like an Artificial Intelligence

#artificialintelligence

Grappling with the ramifications of Artificial Intelligence is one of the first things science fiction ever did as a genre. Yet most sci-fi books, movies, and games explore those ideas from the perspective of a person, whether we're taking down SHODAN in System Shock or chatting with Cortana in Halo. That's something the developers of Per Aspera, Tlön Industries, wanted to change. From their offices in Buenos Aires, the team of about 12 people have spent the last few years trying to figure out what it would be like to inhabit the mind of a newly awakened, newborn Artificial Consciousness. The result is Per Aspera, a strategic city-builder that has players working to terraform Mars as the artificial consciousness AMI. You're a genderless superintelligence capable of incredible things, but you're also effectively a child with no conception of society or social interaction.


Hacking with God: a Common Programming Language of Robopsychology and Robophilosophy

Bátfai, Norbert

arXiv.org Artificial Intelligence

This note is a sketch of how the concepts of robopsychology and robophilosophy could be reinterpreted and repositioned in the spirit of the original vocation of psychology and philosophy. The notion of robopsychology as a fictional science and a fictional occupation was introduced by Asimov in the middle of the last century. Robophilosophy, on the other hand, is only a few years old. At this moment, however, neither of these emerging disciplines focuses on the fundamental, overall issues of the development of artificial general intelligence. Instead, they focus only on issues that, although extremely important, play a complementary role, such as moral or ethical ones, rather than on the big questions of life. We try to outline a conception in which robophilosophy and robopsychology can play a leading role in the progress of artificial intelligence similar to the one philosophy and psychology have played in the progress of human intelligence. To facilitate this, we outline the idea of a visual artificial language and an interactive theorem prover-based computer application called Prime Convo Assistant. The question to be decided in the future is whether we can develop such an application, and if so, whether we can build a computer game on it, or even an esport game. The question is of interest because such a game could transform human thinking on the widest possible social scale and establish a standard mathematical logic-based communication channel between human and machine intelligence.


AI Is Changing Our Brains – argodesign – Medium

#artificialintelligence

In 1976, philosopher Julian Jaynes advanced the provocative theory that our recent ancestors lacked self-awareness. Instead, they mistook their inner voices for outside sources: the voice of God, say, or the ghosts of their ancestors. Jaynes called his theory "bicameralism" (Westworld fans will recall an episode from the last season called "The Bicameral Mind") and, in his telling, it persisted in early humans until about 3,000 years ago. We are in a similar pre-conscious state now, but the voice we hear is not the other side of our brains. It's our digital self, a version of us that is quickly becoming inseparable from our physical self. I call this commingled digital and analog self our "Meta Me."


Consciousness Began When the Gods Stopped Speaking - Issue 54: The Unspoken

Nautilus

Julian Jaynes was living out of a couple of suitcases in a Princeton dorm in the early 1970s. He must have been an odd sight there among the undergraduates, some of whom knew him as a lecturer who taught psychology, holding forth in a deep baritone voice. He was in his early 50s, a fairly heavy drinker, untenured, and apparently uninterested in tenure. "I don't think the university was paying him on a regular basis," recalls Roy Baumeister, then a student at Princeton and today a professor of psychology at Florida State University. But among the youthful inhabitants of the dorm, Jaynes was working on his masterpiece, and had been for years. From the age of 6, Jaynes had been transfixed by the singularity of conscious experience. Gazing at a yellow forsythia flower, he'd wondered how he could be sure that others saw the same yellow as he did. As a young man, serving three years in a Pennsylvania prison for declining to support the war effort, he watched a worm in the grass of the prison yard one spring, wondering what separated the unthinking earth from the worm and the worm from himself. It was the kind of question that dogged him for the rest of his life, and the book he was working on would grip a generation beginning to ask themselves similar questions.


Pre-Conscious Humans May Have Been Like the Borg - Issue 47: Consciousness

Nautilus

Captain Picard asks, "How do we reason with them, let them know that we are not a threat?" The reply he receives: "... At least, I've never known anyone who did." With this brief, ominous exchange, the heroes of Star Trek: The Next Generation are introduced to one of their most formidable enemies: the Borg, a race of cyborgs whose minds are linked to a collective "hive mind" through sophisticated technology. The collective expands their civilization through a process of mental and physical "assimilation": they find new intelligent beings, like humans, implant them with Borg technology, and integrate them into the hive mind, erasing their previous identities. Individual Borg are not conscious in the way humans are, and they have no sense of individuality. The hive mind is a dictator, an unquestioned voice that commands each individual. The Borg nature is split in two: an executive called the collective and a follower called the drone. For the humans living in the Star Trek universe, the prospect of assimilation is terrifying. When asked why humans resist assimilation, Chief Engineer Geordi La Forge says, "For somebody like me, losing that sense of individuality is almost worse than dying." For many humans living in the real world, the fictional Borg are similarly unsettling.


Information entropy as an anthropomorphic concept

Rodis, Panteleimon

arXiv.org Artificial Intelligence

According to E.T. Jaynes and E.P. Wigner, entropy is an anthropomorphic concept in the sense that to a single physical system there correspond many thermodynamic systems: the physical system can be examined from many points of view, each time examining different variables and calculating entropy differently. In this paper we discuss how this concept may be applied to information entropy; that is, how Shannon's definition of entropy can fit within Jaynes's and Wigner's statement. This is achieved by generalizing Shannon's notion of information entropy, which is the main contribution of the paper. We then discuss how entropy under these considerations may be used for the comparison of password complexity and as a measure of diversity useful in the analysis of the behavior of genetic algorithms.
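The anthropomorphic point, that entropy depends on which variables the observer chooses to examine, can be shown with a small example: the same password yields different Shannon entropies under a character-level description and a character-class description. This is an illustrative sketch, not code from the paper.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    # H = -sum p_i * log2(p_i) over the empirical symbol distribution
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two 'thermodynamic systems' for one physical system (the password):
pw = "abab1111"
per_char = shannon_entropy(pw)                                    # character-level view
per_class = shannon_entropy(
    ["lower" if c.islower() else "digit" for c in pw])            # character-class view
print(per_char, per_class)  # -> 1.5 1.0
```

Neither value is "the" entropy of the password; each corresponds to a different choice of variables, which is exactly the sense in which Jaynes and Wigner call entropy anthropomorphic.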


`Plausibilities of plausibilities': an approach through circumstances

Mana, P. G. L. Porta, Månsson, A., Björk, G.

arXiv.org Artificial Intelligence

Probability-like parameters appearing in some statistical models, and their prior distributions, are reinterpreted through the notion of a `circumstance', a term which stands for any piece of knowledge that is useful in assigning a probability and that satisfies some additional logical properties. The idea, which can be traced to Laplace and Jaynes, is that the usual inferential reasonings about the probability-like parameters of a statistical model can be conceived as reasonings about equivalence classes of `circumstances', i.e. real or hypothetical pieces of knowledge, such as physical hypotheses, that are uniquely indexed by the probability distributions they lead to.


Bayesian classification

Cheeseman, P., Self, M., Kelly, J., Stutz, J.

Classics

This paper describes a Bayesian technique for unsupervised classification of data and its computer implementation, AutoClass. Given real valued or discrete data, AutoClass determines the most probable number of classes present in the data, the most probable descriptions of those classes, and each object's probability of membership in each class. The program performs as well as or better than other automatic classification systems when run on the same data and contains no ad hoc similarity measures or stopping criteria. AutoClass has been applied to several databases in which it has discovered classes representing previously unsuspected phenomena.
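The class-membership step at the heart of such a system follows directly from Bayes' rule: given class weights and per-class likelihoods, each object's posterior probability of membership in each class is the normalised product of prior and likelihood. The sketch below is a hypothetical one-dimensional Gaussian illustration of that step only, not AutoClass itself, which additionally searches over the number of classes and their descriptions.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    # Likelihood P(x | class) for a 1-D Gaussian class description
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def membership(x, weights, mus, sigmas):
    # Bayes' rule: P(class k | x) is proportional to P(class k) * P(x | class k)
    joint = np.array([w * gaussian_pdf(x, m, s)
                      for w, m, s in zip(weights, mus, sigmas)])
    return joint / joint.sum(axis=0)  # normalise so columns sum to 1

# Two equally weighted classes centred at -2 and +2
x = np.array([-2.0, 0.0, 2.0])
post = membership(x, weights=[0.5, 0.5], mus=[-2.0, 2.0], sigmas=[1.0, 1.0])
print(post.round(3))
```

An object at a class centre belongs to that class with near-certainty, while an object equidistant from both gets a 50/50 split; AutoClass iterates this soft assignment together with re-estimation of the class descriptions until the most probable classification is found.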