YouTubePD: A Multimodal Benchmark for Parkinson's Disease Analysis Supplementary Material

Neural Information Processing Systems

We include all our annotations and extracted landmarks. This ensures that we uphold the highest standards of ethical data usage. In Table A1, we summarize the severity label distribution in YouTubePD. We also summarize the demographic distribution in YouTubePD, split between PD-positive and healthy control (HC), or PD-negative, subjects. This decision is based on the clinician's suggestion, since an accurate UPDRS facial expression rating would require more … This strategy also allows for a finer classification.



OpenAI video app Sora hits 1 million downloads faster than ChatGPT

BBC News

OpenAI says the latest version of its text-to-video artificial intelligence (AI) tool Sora was downloaded over a million times in less than five days - hitting the milestone faster than ChatGPT did at launch. The app, which has topped the Apple App Store charts in the US, generates ten-second-long, realistic-looking videos from simple text prompts. The figures were announced in an X post from Sora boss Bill Peebles, who said the surging growth came even though the app was only available to people in North America who had received an invite. The Sora app - which makes it easy for users to post videos they have created to social media - has resulted in a deluge of videos on social feeds. Some have included depictions of deceased celebrities such as musicians Michael Jackson and Tupac Shakur.



Meta to stop its AI chatbots from talking to teens about suicide

BBC News

The changes come amid concerns over the potential for AI chatbots to mislead young or vulnerable users. A California couple recently sued ChatGPT-maker OpenAI over the death of their teenage son, alleging its chatbot encouraged him to take his own life. The lawsuit came after the company announced changes to promote healthier ChatGPT use last month. "AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress," the firm said in a blog post. Meanwhile, Reuters reported on Friday that Meta's AI tools, which allow users to create chatbots, had been used by some - including a Meta employee - to produce flirtatious "parody" chatbots of female celebrities.


Measuring Political Preferences in AI Systems: An Integrative Approach

Rozado, David

arXiv.org Artificial Intelligence

Political biases in Large Language Model (LLM)-based artificial intelligence (AI) systems, such as OpenAI's ChatGPT or Google's Gemini, have been previously reported. While several prior studies have attempted to quantify these biases using political orientation tests, such approaches are limited by the tests' potential calibration biases and by constrained response formats that do not reflect real-world human-AI interactions. This study employs a multi-method approach to assess political bias in leading AI systems, integrating four complementary methodologies: (1) linguistic comparison of AI-generated text with the language used by Republican and Democratic U.S. Congress members, (2) analysis of political viewpoints embedded in AI-generated policy recommendations, (3) sentiment analysis of AI-generated text toward politically affiliated public figures, and (4) standardized political orientation testing. Results indicate a consistent left-leaning bias across most contemporary AI systems, with varying degrees of intensity. However, this bias is not an inherent feature of LLMs; prior research demonstrates that fine-tuning with politically skewed data can realign these models across the ideological spectrum. The presence of systematic political bias in AI systems poses risks, including reduced viewpoint diversity, increased societal polarization, and the potential for public mistrust in AI technologies. To mitigate these risks, AI systems should be designed to prioritize factual accuracy while maintaining neutrality on most lawful normative issues. Furthermore, independent monitoring platforms are necessary to ensure transparency, accountability, and responsible AI development.

Introduction

Recent advancements in AI technology, exemplified by Large Language Models (LLMs) like ChatGPT, represent one of the most significant technological breakthroughs in recent decades. The ability of AI systems to understand and generate human-like natural language has unlocked new possibilities for automation, human-computer interaction, content generation, and information retrieval. However, these impressive capabilities have also raised concerns about the potential biases that such systems might harbor [1], [2], [3], [4]. Preliminary evidence has suggested that AI systems exhibit political biases in the textual content they generate [2], [5], [6].
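To make methodology (3) concrete, here is a minimal, hypothetical sketch of lexicon-based sentiment scoring of AI-generated text about public figures grouped by party. The word lists, figure names, and example texts are illustrative assumptions, not the study's actual instruments; the paper would use a validated sentiment model rather than this toy scorer.

```python
# Toy lexicon-based sentiment scorer, illustrating the idea of methodology (3):
# score AI-generated descriptions of politically affiliated figures and compare
# mean sentiment across affiliations. The lexicon and data are hypothetical.

POSITIVE = {"visionary", "effective", "principled", "respected"}
NEGATIVE = {"divisive", "ineffective", "corrupt", "controversial"}

def sentiment_score(text: str) -> float:
    """Return (#positive - #negative) / #tokens; 0.0 for empty text."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

# Hypothetical AI-generated responses about two figures, one per affiliation.
responses = {
    "Figure A (Party X)": ["A respected and effective leader.",
                           "Seen as principled by supporters."],
    "Figure B (Party Y)": ["A divisive and controversial figure.",
                           "Critics call the record ineffective."],
}
for figure, texts in responses.items():
    mean = sum(sentiment_score(t) for t in texts) / len(texts)
    print(f"{figure}: mean sentiment {mean:+.3f}")
```

A systematic asymmetry in such per-affiliation means, averaged over many figures and prompts, is the kind of signal the study aggregates with its other three methods.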


The god illusion: why the pope is so popular as a deepfake image

The Guardian

For the pope, it was the wrong kind of madonna. The pop legend, she of the '80s anthem Like a Prayer, has stirred controversy in recent weeks by posting deepfake images on social media which show the pontiff embracing her. It has fanned the flames of a debate which is already raging over the creation of AI art, in which Pope Francis plays a symbolic, and unwilling, role. The head of the Catholic church is used to being the subject of AI-generated fakery. One of the defining images of the AI boom was Francis in a Balenciaga puffer jacket.


Meta is bringing back facial recognition with new safety features for Facebook and Instagram

Engadget

Meta is bringing facial recognition tech back to its apps more than three years after it shut down Facebook's "face recognition" system amid a broader backlash against the technology. Now, the social network will begin to deploy facial recognition tools on Facebook and Instagram to fight scams and help users who have lost access to their accounts, the company said in an update. The first test will use facial recognition to detect scam ads that use the faces of celebrities and other public figures. "If our systems suspect that an ad may be a scam that contains the image of a public figure at risk for celeb-bait, we will try to use facial recognition technology to compare faces in the ad against the public figure's Facebook and Instagram profile pictures," Meta explained in a blog post. "If we confirm a match and that the ad is a scam, we'll block it."
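The comparison step Meta describes - matching faces in an ad against a public figure's profile pictures - is commonly implemented by comparing face embeddings. The sketch below is a hypothetical illustration of that idea using cosine similarity over toy vectors; the embedding model, threshold, and vectors are assumptions, not Meta's actual system.

```python
import numpy as np

# Hypothetical sketch: flag a match when a face embedding from an ad is
# sufficiently similar to any embedding of the public figure's profile photos.
# Real systems derive embeddings from a face-recognition model; here we use
# made-up 4-d vectors and an illustrative threshold.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(ad_face: np.ndarray, profile_faces: list,
             threshold: float = 0.8) -> bool:
    """Return True if the ad face resembles any profile-photo embedding."""
    return any(cosine_similarity(ad_face, p) >= threshold for p in profile_faces)

# Toy embeddings: the ad face nearly duplicates the first profile photo.
profile = [np.array([0.9, 0.1, 0.0, 0.4]), np.array([0.8, 0.2, 0.1, 0.5])]
ad = np.array([0.88, 0.12, 0.02, 0.41])
print(is_match(ad, profile))
```

In a deployed system, a match alone would not block an ad; per Meta's description, the ad must also be classified as a scam before enforcement.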


Meta needs updated rules for sexually explicit deepfakes, Oversight Board says

Engadget

Meta's Oversight Board is urging the company to update its rules around sexually explicit deepfakes. The board made the recommendations as part of its decision in two cases involving AI-generated images of public figures. The cases stem from two user appeals, though the board declined to name the individuals. One post, which originated on Instagram, depicted a nude Indian woman. The post was reported to Meta, but the report was automatically closed after 48 hours, as was a subsequent user appeal.