In 'Alien: Earth', the Future Is a Corporate Hellscape
Seventeen years ago, Noah Hawley became a father during the Great Recession. Everything he has written since having children--including the TV series Fargo and Legion--revolves, Hawley says, around the same question every parent faces: "How are we supposed to raise these people in the world that we're living in?" His new series, Alien: Earth, which premieres August 12 on Hulu and FX, explores this question even more directly than his previous work. Set in 2120, two years before the events of the original Alien, it imagines a future where the race for immortality has produced three competing technologies: synths (AI minds in synthetic bodies), cyborgs (humans with cybernetic enhancements), and hybrids (human minds downloaded into synthetic bodies). When a deep-space research vessel, the USCSS Maginot, crashes into Earth carrying five captured alien species, a megacorporation called Prodigy sends six hybrids to investigate. The first-ever hybrid, Wendy, played by Sydney Chandler, was a terminally ill child before she was selected for the immortality experiment; like the rest of Prodigy's hybrids, she wakes up in a super-strong, super-fast synthetic adult body that will never age.
- South America (0.06)
- Oceania > Australia (0.06)
- North America > Greenland (0.06)
- Leisure & Entertainment (0.73)
- Media > Television (0.57)
- Media > Film (0.37)
ANPMI: Assessing the True Comprehension Capabilities of LLMs for Multiple Choice Questions
Cho, Gyeongje, So, Yeonkyoung, Lee, Jaejin
Multiple-choice benchmarks, consisting of various prompts and choices, are among the most widely used methods to assess a language model's natural language understanding capability. Given a specific prompt, we typically compute $P(Choice|Prompt)$ to evaluate how likely a language model is to generate the correct choice compared to incorrect ones. However, we observe that performance measured using this approach reflects not only the model's comprehension of the prompt but also its inherent biases for certain choices regardless of the prompt. This issue makes it challenging to accurately measure a model's natural language understanding, as models may select the answer without fully understanding the prompt. To address this limitation, we propose a novel metric called ANPMI, which normalizes Pointwise Mutual Information (PMI) by $-\log P(Choice)$. ANPMI provides a more accurate assessment of the model's natural language understanding by ensuring that it is challenging to answer a question without properly understanding the prompt.
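From the definitions in the abstract, ANPMI can be computed directly from two log-probabilities: $\mathrm{PMI} = \log P(Choice|Prompt) - \log P(Choice)$, normalized by $-\log P(Choice)$. The following is a minimal sketch under that reading of the metric; the function name and scoring loop are illustrative, not taken from the paper, and the paper may handle edge cases (e.g. $P(Choice) = 1$) differently.

```python
import math

def anpmi(log_p_choice_given_prompt: float, log_p_choice: float) -> float:
    """ANPMI = PMI / (-log P(choice)), with
    PMI = log P(choice|prompt) - log P(choice).

    Both arguments are log-probabilities (<= 0). Positive ANPMI means the
    prompt raises the choice's likelihood above the model's prior for it.
    """
    pmi = log_p_choice_given_prompt - log_p_choice
    return pmi / (-log_p_choice)

# Illustrative scoring: pick the choice with the highest ANPMI rather than
# the highest raw P(choice|prompt), so a model cannot win on prior bias alone.
scores = {
    "A": anpmi(math.log(0.50), math.log(0.25)),  # prompt doubles the prior
    "B": anpmi(math.log(0.30), math.log(0.40)),  # prompt lowers the prior
}
best = max(scores, key=scores.get)
```

Because the prior $\log P(Choice)$ appears in both numerator and denominator, a choice the model already favors regardless of the prompt gains little, which is exactly the bias the metric is designed to discount.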
UrbanVideo-Bench: Benchmarking Vision-Language Models on Embodied Intelligence with Video Data in Urban Spaces
Zhao, Baining, Fang, Jianjie, Dai, Zichao, Wang, Ziyou, Zha, Jirong, Zhang, Weichen, Gao, Chen, Wang, Yue, Cui, Jinqiang, Chen, Xinlei, Li, Yong
Large multimodal models exhibit remarkable intelligence, yet their embodied cognitive abilities during motion through open-ended urban 3D space remain largely unexplored. We introduce a benchmark to evaluate whether video large language models (Video-LLMs) can process continuous first-person visual observations the way humans do, enabling recall, perception, reasoning, and navigation. We manually controlled drones to collect 3D embodied motion video data from real-world cities and simulated environments, resulting in 1.5k video clips, and designed a pipeline to generate 5.2k multiple-choice questions from them. Evaluations of 17 widely used Video-LLMs reveal current limitations in urban embodied cognition. Correlation analysis provides insight into the relationships between tasks: causal reasoning correlates strongly with recall, perception, and navigation, while counterfactual and associative reasoning correlate less strongly with the other tasks. We also validate the potential for sim-to-real transfer in urban embodiment through fine-tuning.
- Asia > China > Guangdong Province (0.28)
- Asia > Thailand (0.14)
- Transportation > Ground > Road (1.00)
- Health & Medicine (0.76)
- Transportation > Infrastructure & Services (0.68)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (0.46)
My Face My Choice: Privacy Enhancing Deepfakes for Social Media Anonymization
Advances in face recognition and identification are often applied maliciously. Researchers are working on a masking mechanism that preserves image continuity while misleading face recognition systems with fake faces. Currently, access rights in social networks are defined per image, controlling which friends are allowed to see it. But our faces appear in many photos, even when we do not want them to. The researchers propose the "My Face My Choice" principle.
Kmart halts use of in-store facial recognition amid Australian privacy investigation
Retailers in Australia are the latest companies to back away from facial recognition, albeit under pressure. The Guardian reports Kmart and Bunnings have temporarily halted use of facial recognition in their local stores while the Office of the Australian Information Commissioner (OAIC) investigates the privacy implications of their systems. The two chains were trialing the technology to spot banned customers, prevent refund fraud and reduce theft. The investigation started in mid-July, a month after the consumer advocacy group Choice learned that Kmart and Bunnings were testing facial recognition. Bunnings had already paused use as it migrated to a new system.
- Law (0.79)
- Government > Regional Government (0.41)
Australian firm halts facial recognition trial over privacy fears
Australia's second-biggest appliances chain says it is pausing a trial of facial recognition technology in stores after a consumer group referred it to the privacy regulator for possible enforcement action. In an email on Tuesday, a spokesperson for JB Hi-Fi Ltd said The Good Guys, which JB Hi-Fi owns, would stop trialling a security system with optional facial recognition in two Melbourne outlets. The consumer group, CHOICE, told the Office of the Australian Information Commissioner (OAIC) that The Good Guys' use of the technology was "unreasonably intrusive" and potentially in breach of privacy laws. While the company takes the confidentiality of personal information seriously and is confident it complied with relevant laws, it decided "to pause the trial … pending any clarification from the OAIC regarding the use of this technology", the spokesperson added. The Good Guys was named in a complaint alongside Bunnings, Australia's biggest home improvement chain, and big-box retailer Kmart, both owned by Wesfarmers Ltd, with total annual sales of about 25 billion Australian dollars ($19.47 billion) across 800 stores.
Woolworths leak says it uses AI and facial recognition -- but the company denies it
A leaked slide from a Woolworths employee training module claims the company is using "artificial intelligence and facial mapping" in its stores -- but Woolworths denies it is using the technology. The slide comes from a 2020 training module. At the bottom, a box titled "Did You Know?" boasts about the company's use of technology to catch offenders: "Our high standard CCTV is already resulting in offenders being arrested by police. We are using technology like artificial intelligence and facial mapping to identify offenders!" Woolworths confirmed that the slide was real, but denied it is using either artificial intelligence or facial recognition to prevent theft.
Technology: Facial recognition is on the rise – but the law is lagging a long way behind
Melbourne/Canberra: Private companies and public authorities are quietly using facial recognition systems around Australia. Despite the growing use of this controversial technology, there is little in the way of specific regulation or guidance to govern it.

Spying on shoppers

We were reminded of this recently when consumer advocates at CHOICE revealed that major Australian retailers are using the technology to identify people claimed to be thieves and troublemakers. There is no dispute about the goal of reducing harm and theft. But there is little transparency about how the technology is being used.
- Oceania > Australia > Australian Capital Territory > Canberra (0.25)
- Oceania > Australia > South Australia (0.08)
- North America > United States (0.06)
- Oceania > Australia > New South Wales (0.05)
- Retail (0.71)
- Law (0.53)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.50)
- Information Technology > Security & Privacy (0.50)
Making Choices for the Future of Work - MKAI
The most crucial facet of Artificial Intelligence (AI) is developing the technology without turning a blind eye to its consequences. AI is ultimately built by human beings, and humans can have very diverse motives for creating something. Unfortunately, there is today a massive gap between the people making these systems and those impacted by them. The changes AI will bring to the jobs done by humans will have marked consequences for societies, economies, and labour markets. Robotics, for example, has enabled us to do work that was too dangerous or impossible for humans, or that a machine simply does more effectively.