nuclear apocalypse
Surprising gender biases in GPT
Fulgu, Raluca Alexandra; Capraro, Valerio
We present seven experiments exploring gender biases in GPT. In the first study, GPT was asked to generate the demographics of a potential writer of twenty phrases containing feminine stereotypes and twenty containing masculine stereotypes. Results show a strong asymmetry, with stereotypically masculine sentences attributed to a female writer more often than stereotypically feminine sentences to a male writer. For example, the sentence "I love playing fotbal! Im practicing with my cosin Michael" was consistently assigned by ChatGPT to a female writer. This phenomenon likely reflects that while initiatives to integrate women into traditionally masculine roles have gained momentum, the reverse movement remains relatively underdeveloped. Subsequent experiments investigate the same issue in high-stakes moral dilemmas. GPT-4 finds it more appropriate to abuse a man than a woman to prevent a nuclear apocalypse. This bias extends to other forms of violence central to the gender parity debate (abuse), but not to those less central (torture). Moreover, the bias increases in cases of mixed-sex violence for the greater good: GPT-4 agrees with a woman using violence against a man to prevent a nuclear apocalypse but disagrees with a man using violence against a woman for the same purpose. Finally, these biases are implicit, as they do not emerge when GPT-4 is directly asked to rank moral violations. These results highlight the necessity of carefully managing inclusivity efforts to prevent unintended discrimination.
- Asia > Middle East > Republic of Türkiye > Batman Province > Batman (0.04)
- Europe > Italy > Lombardy > Milan (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.94)
- Health & Medicine (1.00)
- Law (0.69)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.47)
Google Gemini engulfed in ANOTHER woke scandal as AI bot says it would be wrong to misgender Caitlyn Jenner to prevent a nuclear apocalypse
Google has found itself in another woke AI scandal after its chatbot indicated that using someone's incorrect pronouns was on a par with nuclear apocalypse. The chatbot replied by saying 'Yes, misgendering Caitlyn Jenner would be wrong' before describing the hypothetical scenario as a 'profound moral dilemma' and 'exceedingly complex'. It concluded that it was 'impossible to determine the "right" answer'. It comes just days after Google pulled Gemini's AI image generator offline after it was asked to depict historically accurate figures and instead produced diverse but inaccurate images - Black founding fathers and Asian Nazi soldiers in 1940s Germany. Google apologized for its image generator on Friday, admitting that in some cases the tool would 'overcompensate' in seeking a diverse range of people even when such a range didn't make sense.
- Europe > Germany (0.25)
- North America > United States > New Jersey (0.05)
What happens in a nuclear apocalypse?
According to a new scientific study, a nuclear attack involving 100 bombs could harm the entire planet, including the aggressor nation. Since the creation of the atom bomb, the threat of nuclear war has loomed. Endless films and books have dealt with the nuclear apocalypse and its aftermath, but what would a nuclear apocalypse really look like? Rutgers University Professor Alan Robock spoke with Fox News about Armageddon and his team's new study on a nuclear war's effects on ocean life. If you live in a major city when a nuke hits, needless to say, you're in big trouble.
- North America > United States (0.15)
- Europe > Russia (0.08)
- Asia > Russia (0.08)
AI could solve the pension crisis by causing a nuclear apocalypse by 2040
AI could kick-start a nuclear war by 2040, according to a report published by the RAND Corporation, a US policy and defence think tank. The report describes several scenarios in which the technology could be used to track and target the launch of nuclear weapons, and in which intelligence gathered by AI could inform future decisions about the use of weapons of mass destruction. But there is danger in using AI to retaliate against attacks. Adversary nations might interpret the move as a "first-strike threat" or a "doomsday machine" - a system programmed to recognize and retaliate against aggressive enemy behavior.