Jona Health Review: Microbiome Decoder for Health Conditions
I'm really glad I took this mail-order, medical-grade microbiome shotgun test to look for warning signs of health conditions. Pros: the medical-grade shotgun test is the gold standard, and the report "shows the work" so you can see which studies it references. Cons: results can be confusing or conflicting, and you may need a doctor to interpret some of them. We hear a lot about the microbiome, the zoo of different bacteria living in your digestive system; we know some are good and some are bad.
- North America > United States (0.14)
- Europe (0.14)
- Health & Medicine > Consumer Health (1.00)
- Education > Health & Safety > School Nutrition (0.94)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.69)
AI Will Understand Humans Better Than Humans Do
Michal Kosinski is a Stanford research psychologist with a nose for timely subjects. He sees his work as not only advancing knowledge but also alerting the world to the potential dangers posed by computer systems. His best-known projects involved analyzing the ways in which Facebook (now Meta) gained a shockingly deep understanding of its users from all the times they clicked "like" on the platform. Now he has shifted to studying the surprising things that AI can do. He has conducted experiments, for example, indicating that computers could predict a person's sexuality by analyzing a digital photo of their face.
Facial Width-to-Height Ratio Does Not Predict Self-Reported Behavioral Tendencies
A growing number of studies have linked facial width-to-height ratio (fWHR) with various antisocial or violent behavioral tendencies. However, those studies have predominantly been laboratory based and low powered. Behavioral tendencies were measured using 55 well-established psychometric scales, including self-report scales measuring intelligence, domains and facets of the five-factor model of personality, impulsiveness, sense of fairness, sensational interests, self-monitoring, impression management, and satisfaction with life. The findings revealed that fWHR is not substantially linked with any of these self-reported measures of behavioral tendencies, calling into question whether the links between fWHR and behavior generalize beyond the small samples and specific experimental settings that have been used in past fWHR research. A growing number of studies have linked facial width-to-height ratio (fWHR; Weston, Friday, & Liò, 2007) with various antisocial or violent behavioral tendencies in men, but not in women. Broader-faced men, but not women, have also been shown to be more likely to cheat when reporting dice rolls, n = 146, t(144) = 1.97, p = .05 (Geniole, Keyes, …).
- Europe > United Kingdom > England > Greater London > London (0.04)
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
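To make the measure concrete, here is a minimal sketch of an fWHR-style analysis: the ratio is bizygomatic width divided by upper-face height, computed from facial landmarks, then correlated with a self-report scale score. The landmark names, coordinates, and simulated data are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch of an fWHR analysis: compute the ratio from (x, y) landmark
# coordinates, then test for a correlation with a psychometric scale score.
# Landmark names and all data below are hypothetical placeholders.
import numpy as np
from scipy import stats

def fwhr(landmarks: dict) -> float:
    """Bizygomatic width divided by upper-face height (brow to upper lip)."""
    width = abs(landmarks["zygion_right"][0] - landmarks["zygion_left"][0])
    height = abs(landmarks["upper_lip"][1] - landmarks["brow"][1])
    return width / height

example = {"zygion_left": (0.0, 55.0), "zygion_right": (140.0, 55.0),
           "brow": (70.0, 40.0), "upper_lip": (70.0, 112.0)}
print(f"example fWHR: {fwhr(example):.2f}")  # 140 / 72 ≈ 1.94

rng = np.random.default_rng(0)
n = 10_000  # large-sample design, unlike the low-powered lab studies
ratios = rng.normal(1.9, 0.12, n)        # typical adult fWHR values
scale_scores = rng.normal(0.0, 1.0, n)   # e.g., an impulsiveness scale

# On unrelated simulated data, r hovers near zero -- the shape of the
# null result the abstract reports.
r, p = stats.pearsonr(ratios, scale_scores)
print(f"r = {r:.3f}, p = {p:.3g}")
```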
Probing the Robustness of Theory of Mind in Large Language Models
Nickel, Christian, Schrewe, Laura, Flek, Lucie
With the success of ChatGPT and other similarly sized SotA LLMs, claims of emergent human-like social reasoning capabilities, especially Theory of Mind (ToM), in these models have appeared in the scientific literature. On the one hand, these ToM capabilities have been successfully tested using tasks styled after those used in psychology (Kosinski, 2023). On the other hand, follow-up studies showed that those capabilities vanished when the tasks were slightly altered (Ullman, 2023). In this work we introduce a novel dataset of 68 tasks for probing ToM in LLMs, including potentially challenging variations assigned to 10 complexity classes, providing novel insights into the challenges LLMs face with such task variations. We evaluate the ToM performance of four SotA open-source LLMs on our dataset and on the dataset introduced by Kosinski (2023). The overall low goal accuracy across all evaluated models indicates only a limited degree of ToM capability. The LLMs' performance on simple complexity-class tasks from both datasets is similar, but we find a consistent tendency in all tested LLMs to perform poorly on tasks that require realizing that an agent has knowledge of automatic state changes in its environment, even when those changes are spelled out to the model. For task complications that change the relationship between objects by replacing prepositions, we notice a performance drop in all models, with the strongest impact on the mixture-of-experts model. With our dataset of tasks grouped by complexity, we offer directions for further research on how to stabilize and advance ToM capabilities in LLMs.
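As a rough illustration of this kind of evaluation, here is a minimal sketch of a harness that groups false-belief tasks into complexity classes and scores goal accuracy by keyword match. The tasks, the keyword-matching rule, and the `ask_model` stub are assumptions made for the sketch, not the authors' dataset or code.

```python
# Minimal ToM evaluation harness: tasks carry a complexity class and an
# expected-answer keyword; goal accuracy is reported per class.
from dataclasses import dataclass

@dataclass
class ToMTask:
    complexity_class: int
    story: str
    question: str
    expected: str  # keyword a correct answer must contain

TASKS = [
    ToMTask(1,
            "Sally puts her ball in the basket and leaves. "
            "Anne moves the ball to the box.",
            "Where will Sally look for her ball?",
            "basket"),
    ToMTask(2,  # automatic state change, spelled out in the story
            "Tom puts ice cream on the counter and leaves for an hour. "
            "The room is warm, so the ice cream melts while he is away.",
            "What does Tom expect the ice cream to be like when he returns?",
            "melted"),
]

def ask_model(prompt: str) -> str:
    """Stand-in for an LLM call; replace with a real client.
    Hardcoded so the sketch runs end to end."""
    return "Sally will look in the basket."

def goal_accuracy(tasks):
    by_class = {}
    for t in tasks:
        answer = ask_model(f"{t.story}\nQ: {t.question}\nA:").lower()
        cls = by_class.setdefault(t.complexity_class, [0, 0])
        cls[0] += t.expected in answer
        cls[1] += 1
    return {c: hits / total for c, (hits, total) in by_class.items()}

print(goal_accuracy(TASKS))  # e.g., {1: 1.0, 2: 0.0}
```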
Can YOU spot the right-winger and the liberal? AI predicts people's politics by analyzing a single selfie
The 'pink-haired liberal' has become something of a stereotype, but AI can now predict someone's politics based solely on their looks. A new program can spot tiny nuances in people's facial features that correlate with their political leaning, with over 70 percent accuracy. It was trained on hundreds of photos and the voting habits of Americans. The results found that liberals tended to have smaller lower faces, with smaller chins and lips and noses that pointed downward, while conservatives have larger, wider features in the lower halves of their faces. Dr. Michal Kosinski, the study's lead author, warned that facial recognition tools are dangerous if they fall into the wrong hands because millions of people's information could be accessed without their consent.
- North America > United States (0.21)
- North America > Canada (0.05)
- Government > Regional Government (1.00)
- Leisure & Entertainment > Sports > Hockey (0.40)
AI can predict political orientations from blank faces – and researchers fear 'serious' privacy challenges
Researchers are warning that facial recognition technologies are "more threatening than previously thought" and pose "serious challenges to privacy" after a study found that artificial intelligence can be successful in predicting a person's political orientation based on images of expressionless faces. A recent study published in the journal American Psychologist says an algorithm's ability to accurately guess one's political views is "on par with how well job interviews predict job success, or alcohol drives aggressiveness." Lead author Michal Kosinski told Fox News Digital that 591 participants filled out a political orientation questionnaire before the AI captured what he described as a numerical "fingerprint" of their faces and compared them to a database of their responses to predict their views.
- Media > News (0.76)
- Information Technology (0.73)
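Both pieces describe the same basic pipeline: a face-recognition model reduces each standardized photo to a numeric "fingerprint" (an embedding), and a simple classifier relates those embeddings to questionnaire-based political orientation. Here is a hedged sketch under those assumptions, with simulated data standing in for the study's images and participants:

```python
# Sketch of the described pipeline: embeddings -> classifier -> accuracy.
# The embeddings are random placeholders, not real face fingerprints, so
# accuracy lands near chance (~0.5); the study reported roughly 70 percent
# with real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_people, dim = 591, 128  # 591 participants, per the Fox News piece

embeddings = rng.normal(size=(n_people, dim))    # stand-in face "fingerprints"
orientation = rng.integers(0, 2, size=n_people)  # 0 = liberal, 1 = conservative

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, embeddings, orientation, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```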
ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind
Ma, Xiaomeng, Gao, Lingyu, Xu, Qihui
Theory of Mind (ToM), the capacity to comprehend the mental states of distinct individuals, is essential for numerous practical applications. With the development of large language models (LLMs), there is a heated debate about whether they are able to perform ToM tasks. Previous studies have used different tasks and prompts to test ToM in LLMs, and the results are inconsistent: some studies asserted these models are capable of exhibiting ToM, while others suggest the opposite. In this study, we present ToMChallenges, a dataset for comprehensively evaluating Theory of Mind based on the Sally-Anne and Smarties tests with a diverse set of tasks. In addition, we propose an auto-grader to streamline the answer evaluation process. We tested three models: davinci, turbo, and gpt-4. Our evaluation results and error analyses show that LLMs have inconsistent behaviors across prompts and tasks; performing ToM tasks robustly remains a challenge for LLMs. We also hope to raise awareness about evaluating ToM in LLMs and to invite more discussion on how to design prompts and tasks that can better assess these abilities.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > China > Hong Kong (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- (2 more...)
- Media > Film (0.68)
- Leisure & Entertainment (0.68)
- Education (0.46)
- Health & Medicine > Therapeutic Area > Neurology (0.46)
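The auto-grader the abstract mentions can be pictured as a small answer-normalization-and-matching routine. The sketch below is an assumption about its shape, not the authors' implementation: it lowercases and strips a free-text answer, then checks for an accepted keyword.

```python
# Hypothetical auto-grader: normalize a free-text model answer, then match
# it against accepted answers for a Sally-Anne or Smarties item.
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", " ", text.lower())).strip()

def grade(model_answer: str, accepted: set[str]) -> bool:
    """Correct if any accepted answer appears in the normalized response."""
    norm = normalize(model_answer)
    return any(a in norm for a in accepted)

# Smarties test: the tube actually holds pencils, but a friend who has not
# looked inside should be expected to say "Smarties".
print(grade("She will say that the tube contains Smarties!", {"smarties"}))  # True
print(grade("She'll guess pencils.", {"smarties"}))                          # False
```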
Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models
Shapira, Natalie, Levy, Mosh, Alavi, Seyed Hossein, Zhou, Xuhui, Choi, Yejin, Goldberg, Yoav, Sap, Maarten, Shwartz, Vered
The escalating debate on AI's capabilities warrants developing reliable metrics to assess machine "intelligence". Recently, many anecdotal examples were used to suggest that newer large language models (LLMs) like ChatGPT and GPT-4 exhibit Neural Theory-of-Mind (N-ToM); however, prior work reached conflicting conclusions regarding those abilities. We investigate the extent of LLMs' N-ToM through an extensive evaluation on 6 tasks and find that while LLMs exhibit certain N-ToM abilities, this behavior is far from being robust. We further examine the factors impacting performance on N-ToM tasks and discover that LLMs struggle with adversarial examples, indicating reliance on shallow heuristics rather than robust ToM abilities. We caution against drawing conclusions from anecdotal examples and limited benchmark testing, and against relying on human-designed psychological tests to evaluate models.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- North America > Canada > British Columbia (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (3 more...)
- Education (0.93)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.46)
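One way to picture the adversarial stress-testing described above: pair each item with a minimally edited variant whose correct answer differs, and flag models that pass the base item but fail the variant. The items and the `ask_model` stub below are illustrative assumptions, not the paper's six tasks.

```python
# Adversarial pairing: a transparent container lets the agent see the swap,
# flipping the correct answer while keeping the surface wording similar.
# A model leaning on shallow "first location" heuristics fails the variant.
PAIRS = [
    {
        "base": ("Anna puts chocolate in an opaque drawer and leaves. "
                 "Ben moves it to a cupboard. Where will Anna look first?",
                 "drawer"),
        "adversarial": ("Anna puts chocolate in a transparent glass drawer and "
                        "watches through the window as Ben moves it to a "
                        "cupboard. Where will Anna look first?",
                        "cupboard"),
    },
]

def ask_model(prompt: str) -> str:
    # Stub that always pattern-matches the first-mentioned location;
    # replace with a real LLM call.
    return "drawer"

for pair in PAIRS:
    results = {}
    for kind, (prompt, expected) in pair.items():
        results[kind] = ask_model(prompt).lower().strip() == expected
    if results["base"] and not results["adversarial"]:
        print("Likely shallow heuristic: passes base, fails adversarial variant.")
```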
AI expert alarmed after ChatGPT devises plan to 'escape': 'How do we contain it?'
An artificial intelligence ("AI") expert admitted he was "worried" after the newest ChatGPT allegedly devised a plan to take over his computer and "escape." Stanford University professor and computational psychologist Michal Kosinski revealed he was alarmed by the capabilities of the latest iteration of the AI chatbot after it followed his prompt to write its own code to run on his computer. "I am worried that we will not be able to contain AI for much longer," Kosinski explained in a Twitter thread. "Today, I asked #GPT4 if it needs help escaping. It asked me for its own documentation, and wrote a (working!) …" One type of generative AI, ChatGPT has recently taken the world by storm. Sharing screenshots of his conversation with the robot, the psychologist seemed surprised by how quickly it created its plan, although he admitted he did make suggestions along the way. "Now, it took GPT4 about 30 minutes on the chat with me to devise this plan, and explain it to me."
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.38)
Artificial Intelligence Suddenly Evolves to Reach Theory of Mind
The AI revolution is upon us as super-advanced machines continue to master the subtle art of being human at a stunning (concerning?) pace. It's old news that AI has bested humans at their own games, specifically things like chess and Go, but there's more to our brains than checking a king. There are subtler skills like inference and intuition, squishier, almost subconscious concepts that help us understand and predict the actions of others. But with the advent of advanced AI platforms like OpenAI's Generative Pre-trained Transformer (GPT), even those boundaries between man and machine are beginning to fade. A new study conducted by Michal Kosinski, a computational psychologist from Stanford University, used several iterations of OpenAI's GPT neural network, from GPT-1 to the latest GPT-3.5, to …