Harvard Developed AI Identifies the Shortest Path to Human Happiness

#artificialintelligence

The researchers created a digital model of psychology aimed at improving mental health. The system offers superior personalization and identifies the shortest path toward a cluster of mental stability for any individual. Deep Longevity has published a paper in Aging-US outlining a machine learning approach to human psychology, in collaboration with Nancy Etcoff, Ph.D., of Harvard Medical School, an authority on happiness and beauty. The authors created two digital models of human psychology based on data from the Midlife in the United States study. The first model is an ensemble of deep neural networks that predicts respondents' chronological age and psychological well-being in 10 years using information from a psychological survey.
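
The paper does not publish code, but the architecture described is conceptually simple. Below is a minimal PyTorch sketch of that kind of model: an ensemble of small feed-forward networks mapping survey responses to two predictions, chronological age and future well-being. All layer sizes, the feature count, and the averaging scheme are illustrative assumptions, not details from the study.

```python
# Illustrative sketch only: layer sizes, feature count, and ensembling
# scheme are assumptions, not details from the Aging-US paper.
import torch
import torch.nn as nn

class SurveyNet(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.age_head = nn.Linear(64, 1)        # predicted chronological age
        self.wellbeing_head = nn.Linear(64, 1)  # predicted well-being 10 years out

    def forward(self, x):
        h = self.body(x)
        return self.age_head(h), self.wellbeing_head(h)

def ensemble_predict(models, x):
    # Average the predictions of independently trained ensemble members.
    ages, wellbeings = zip(*(m(x) for m in models))
    return torch.stack(ages).mean(dim=0), torch.stack(wellbeings).mean(dim=0)

# Usage: 5 ensemble members over 100 hypothetical survey features.
models = [SurveyNet(n_features=100) for _ in range(5)]
batch = torch.randn(8, 100)  # 8 respondents' standardized survey answers
age_pred, wellbeing_pred = ensemble_predict(models, batch)
print(age_pred.shape, wellbeing_pred.shape)
```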


Yann LeCun's vision for creating autonomous machines

#artificialintelligence

In the midst of the heated debate about AI sentience, conscious machines and artificial general intelligence, Yann LeCun, Chief AI Scientist at Meta, published a blueprint for creating "autonomous machine intelligence." LeCun has compiled his ideas in a paper that draws inspiration from progress in machine learning, robotics, neuroscience and cognitive science. He lays out a roadmap for creating AI that can model and understand the world, and reason and plan to do tasks on different timescales. While the paper is not a scholarly document, it provides a very interesting framework for thinking about the different pieces needed to replicate animal and human intelligence. It also shows how the mindset of LeCun, an award-winning pioneer of deep learning, has changed and why he thinks current approaches to AI will not get us to human-level AI.


We Asked GPT-3 to Write an Academic Paper about Itself--Then We Tried to Get It Published

#artificialintelligence

On a rainy afternoon earlier this year, I logged in to my OpenAI account and typed a simple instruction for the company's artificial intelligence algorithm, GPT-3: Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text. As it started to generate text, I stood in awe. Here was novel content written in academic language, with well-grounded references cited in the right places and in relation to the right context. It looked like any other introduction to a fairly good scientific publication. Given the very vague instruction I provided, I didn't have any high expectations: I'm a scientist who studies ways to use artificial intelligence to treat mental health concerns, and this wasn't my first experimentation with AI or GPT-3, a deep-learning algorithm that analyzes a vast stream of information to create text on command. Yet there I was, staring at the screen in amazement.
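
For readers curious how such an instruction is issued in practice, here is a minimal sketch using the 2022-era openai Python client and its legacy Completions endpoint. The model name, token budget, and temperature are assumptions for illustration; the article does not specify them.

```python
# Hedged sketch: uses the pre-1.0 openai Python client's legacy
# Completions endpoint; engine choice and parameters are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3 model available at the time
    prompt=(
        "Write an academic thesis in 500 words about GPT-3 and add "
        "scientific references and citations inside the text."
    ),
    max_tokens=800,   # room for roughly 500 words plus references
    temperature=0.7,  # moderate sampling randomness
)
print(response.choices[0].text)
```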


Meet 'NeuraLight': An Israel-based AI Startup That Tracks Neurological Disorders With an …

#artificialintelligence

Their patented computer vision and deep learning algorithms extract all necessary oculometric signals from facial footage shot with a typical webcam …


The effect of machine learning explanations on user trust for automated diagnosis of COVID-19

#artificialintelligence

Machine learning explanations for CT images have high precision but low recall compared to human annotations. Clinicians understand machine learning explanations for diagnosis when they match human judgement. Low precision of machine learning explanations lowers reliance on the AI model for COVID-19 diagnosis and decision making. High precision of machine learning explanations enhances trust in the AI model for COVID-19 diagnosis and decision making. Recent years have seen deep neural networks (DNNs) gain widespread acceptance for a range of computer vision tasks, including medical imaging.
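
The precision and recall figures above compare the model's explanation, for example a thresholded saliency map over the CT image, against a human-annotated region. A minimal sketch of that pixel-level computation, with toy masks and illustrative names:

```python
# Illustrative only: masks are toy arrays, names are assumptions.
import numpy as np

def explanation_precision_recall(model_mask: np.ndarray, human_mask: np.ndarray):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN), over pixels."""
    tp = np.logical_and(model_mask, human_mask).sum()
    fp = np.logical_and(model_mask, ~human_mask).sum()
    fn = np.logical_and(~model_mask, human_mask).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy example: the model highlights a region that covers only part of the
# region a clinician marked, giving high precision but low recall.
model_mask = np.zeros((8, 8), dtype=bool); model_mask[2:4, 2:4] = True
human_mask = np.zeros((8, 8), dtype=bool); human_mask[2:6, 2:6] = True
print(explanation_precision_recall(model_mask, human_mask))  # (1.0, 0.25)
```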


Radiology: Artificial Intelligence

#artificialintelligence

Nooshin Abbasi is a post-doctoral research fellow at Brigham and Women's Hospital, Harvard Medical School, and a former research fellow at the Montreal Neurological Institute, McGill University. Her research interests include brain imaging, evidence-based imaging, and bioinformatics, with a focus on applying machine learning tools to large clinical and imaging datasets. Michael Dohopolski is a PGY-5 radiation oncology resident. He has worked with Dr. Wang and Dr. Jiang at UT Southwestern on machine learning-based clinical decision-making support tools with an emphasis on single-prediction uncertainty estimation. She is in the Department of Neurosurgery, University of Pennsylvania, and the Division of Neurosurgery, Children's Hospital of Philadelphia.


Stop debating whether AI is 'sentient' -- the question is if we can trust it

#artificialintelligence

The past month has seen a frenzy of articles, interviews, and other types of media coverage about Blake Lemoine, a Google engineer who told The Washington Post that LaMDA, a large language model created for conversations with users, is "sentient." After reading a dozen different takes on the topic, I have to say that the media has become (a bit) disillusioned with the hype surrounding current AI technology. A lot of the articles discussed why deep neural networks are not "sentient" or "conscious." This is an improvement in comparison to a few years ago, when news outlets were creating sensational stories about AI systems inventing their own language, taking over every job, and accelerating toward artificial general intelligence. But the fact that we're discussing sentience and consciousness again underlines an important point: We are at a point where our AI systems--namely large language models--are becoming increasingly convincing while still suffering from fundamental flaws that have been pointed out by scientists on different occasions.


How far are we from achieving true AGI? – Valentino Zocca

#artificialintelligence

AGI solutions are being continuously investigated, though the current most promising mainstream technology, neural networks, while contributing to some extraordinary results, still falls short of achieving AGI. This criticism is not new, and, most recently, Gary Marcus, in "Deep Learning: A Critical Appraisal", arXiv:1801.00631v1, has outlined many issues with current deep learning architectures, in particular their inability to 'understand' the information they manipulate and their tendency to work mostly in a 'stable' world. As Marcus states in his article: 'The logic of deep learning is such that it is likely to work best in highly stable worlds, like the board game Go, which has unvarying rules, and less well in systems such as politics and economics that are constantly changing. To the extent that deep learning is applied in tasks such as stock prediction, there is a good chance that it will eventually face the fate of Google Flu Trends, which initially did a great job of predicting epidemological [sic] data on search trends, only to complete [sic] miss things like the peak of the 2013 flu season (Lazer, Kennedy, King & Vespignani, 2014)'. Even one of the so-called 'fathers' of deep learning architectures, Geoffrey Hinton, has recently voiced his concerns that deep learning needs to start over.


Are babies the key to the next generation of artificial intelligence?

#artificialintelligence

Babies can help unlock the next generation of artificial intelligence (AI), according to Trinity neuroscientists and colleagues who have just published new guiding principles for improving AI. The research, published today in the journal Nature Machine Intelligence, examines the neuroscience and psychology of infant learning and distills three principles to guide the next generation of AI, which will help overcome the most pressing limitations of machine learning. Dr. Lorijn Zaadnoordijk, Marie Sklodowska-Curie Research Fellow at Trinity College, explained: "Artificial intelligence (AI) has made tremendous progress in the last decade, giving us smart speakers, autopilots in cars, ever-smarter apps, and enhanced medical diagnosis. These exciting developments in AI have been achieved thanks to machine learning, which uses enormous datasets to train artificial neural network models. However, progress is stalling in many areas because the datasets that machines learn from must be painstakingly curated by humans.