Race on to establish globally recognised 'AI-free' logo
Organisations worldwide are racing to develop a universally recognised label for human-made products and services as part of the growing backlash against AI use. Declarations like 'Proudly Human', 'Human-made', 'No A.I.' and 'AI-free' are appearing across films, marketing, books and websites, in response to fears that jobs or entire professions are being swept away in a wave of AI-powered automation. BBC News has counted at least eight different initiatives trying to come up with a label that could achieve the kind of global recognition that the Fair Trade logo has for ethically made products. But with so many competing labels - as well as confusion over the definition of AI-free - experts say consumers are in danger of being left confused unless a single standard can be agreed on.
H-NeRF: Neural Radiance Fields for Rendering and Temporal Reconstruction of Humans in Motion
We present neural radiance fields for rendering and temporal (4D) reconstruction of humans in motion (H-NeRF), as captured by a sparse set of cameras or even from a monocular video. Our approach combines ideas from neural scene representation, novel-view synthesis, and implicit statistical geometric human representations, coupled using novel loss functions. Instead of learning a radiance field with a uniform occupancy prior, we constrain it by a structured implicit human body model, represented using signed distance functions. This allows us to robustly fuse information from sparse views and generalize well beyond the poses or views observed in training. Moreover, we apply geometric constraints to co-learn the structure of the observed subject -- including both body and clothing -- and to regularize the radiance field to geometrically plausible solutions. Extensive experiments on multiple datasets demonstrate the robustness and the accuracy of our approach, its generalization capabilities significantly outside a small training set of poses and views, and statistical extrapolation beyond the observed shape.
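The abstract describes constraining a radiance field with a signed-distance-function body model. As an illustrative sketch only (this is not H-NeRF's published formulation; the Laplace-CDF mapping and the `beta`/`alpha` parameters are assumptions, borrowed from common SDF-to-density schemes such as VolSDF), one way an implicit surface can bias where a volume renderer places density is:

```python
import math

def sdf_to_density(sdf, beta=0.01, alpha=100.0):
    """Map a signed distance to a volume-rendering density.

    Laplace-CDF mapping (assumed, in the style of VolSDF): density is
    high inside the surface (sdf < 0) and decays smoothly outside,
    so the radiance field is regularized toward the body geometry.
    """
    if sdf <= 0:
        return alpha * (1.0 - 0.5 * math.exp(sdf / beta))
    return alpha * 0.5 * math.exp(-sdf / beta)
```

With such a mapping, points far outside the body model contribute almost no density, which is one way to encode the "structured occupancy prior" the abstract contrasts with a uniform one.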
Accenture CEO Julie Sweet on Trust in AI, Building New Workbenches, and Why Humans Are Here to Stay
Javed is a senior editor at TIME, based in the London bureau. How do you see your clients adopting AI and grappling with the rapid changes it is bringing? CEOs have identified that AI is simple to try and hard to scale, and that's why they come to Accenture. And you can see that in the explosive growth of our advanced AI practice over the past couple of years.
To Err Like Human: Affective Bias-Inspired Measures for Visual Emotion Recognition Evaluation
Accuracy is a commonly adopted performance metric in various classification tasks, which measures the proportion of correctly classified samples among all samples. It assumes equal importance for all classes, and hence equal severity for all misclassifications. However, in the task of emotion classification, due to the psychological similarities between emotions, misclassifying a certain emotion into one class may be more severe than into another; e.g., misclassifying 'excitement' as 'anger' is apparently more severe than misclassifying it as 'awe'. Albeit highly meaningful for many applications, metrics capable of measuring these cases of misclassification in visual emotion recognition tasks have yet to be explored. In this paper, based on Mikel's emotion wheel from psychology, we propose a novel approach for evaluating performance in visual emotion recognition, which takes into account the distance on the emotion wheel between different emotions to mimic the psychological nuances of emotions. Experimental results in semi-supervised learning on emotion recognition and a user study have shown that our proposed metric is more effective than accuracy for assessing performance and conforms to the cognitive laws of human emotions.
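A distance-weighted metric of this kind can be sketched in a few lines. This is a minimal illustration, not the paper's actual metric: the ordering of the eight emotions below is an assumption chosen so that awe sits closer to excitement than anger does, and the linear credit function is likewise illustrative.

```python
# Assumed circular ordering of the eight emotions (illustrative only,
# not necessarily the exact arrangement of Mikel's wheel).
EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "sadness", "fear", "disgust", "anger"]

def wheel_distance(a, b):
    """Circular distance between two emotions on the wheel (0..4)."""
    i, j = EMOTIONS.index(a), EMOTIONS.index(b)
    d = abs(i - j)
    return min(d, len(EMOTIONS) - d)

def wheel_score(y_true, y_pred):
    """Accuracy-like score where credit decays linearly with wheel distance.

    An exact match scores 1.0; a diametrically opposed prediction scores 0.
    """
    max_d = len(EMOTIONS) // 2
    return sum(1.0 - wheel_distance(t, p) / max_d
               for t, p in zip(y_true, y_pred)) / len(y_true)
```

Under this ordering, predicting 'awe' for a ground-truth 'excitement' loses less credit than predicting 'anger', mirroring the abstract's example of unequal misclassification severity.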
Metric from Human: Zero-shot Monocular Metric Depth Estimation via Test-time Adaptation
Monocular depth estimation (MDE) is fundamental for deriving 3D scene structures from 2D images. While state-of-the-art monocular relative depth estimation (MRDE) excels in estimating relative depths for in-the-wild images, current monocular metric depth estimation (MMDE) approaches still face challenges in handling unseen scenes. Since MMDE can be viewed as the composition of MRDE and metric scale recovery, we attribute this difficulty to scene dependency, where MMDE models rely on scenes observed during supervised training for predicting scene scales during inference. To address this issue, we propose Metric from Human (MfH), which uses humans as landmarks for distilling scene-independent metric scale priors from generative painting models. Specifically, MfH generates humans on the input image with generative painting and estimates human dimensions with an off-the-shelf human mesh recovery (HMR) model.
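The core idea of recovering a metric scale from a human landmark can be illustrated with the pinhole camera model. This is a simplified sketch under stated assumptions, not the MfH pipeline: it assumes a fronto-parallel, fully visible person, a known focal length in pixels, and a fixed assumed human height, whereas the paper uses generative painting and an HMR model.

```python
def metric_scale(pixel_height, relative_depth, focal_px, assumed_height_m=1.7):
    """Solve for the unknown scale s linking relative to metric depth.

    Pinhole model: metric_height = pixel_height * metric_depth / focal_px,
    with metric_depth = s * relative_depth. Solving for s from an
    assumed human height (1.7 m here, an illustrative default) gives:
    """
    return assumed_height_m * focal_px / (pixel_height * relative_depth)

def to_metric(relative_depth_map, scale):
    """Convert a scale-ambiguous relative depth map to metric depth."""
    return [d * scale for d in relative_depth_map]
```

For example, a person 170 px tall at relative depth 10.0 under a 1000 px focal length yields a scale of 1.0, after which the whole relative depth map can be converted to metres.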
Human-AI Interaction Design Standards
The rapid development of artificial intelligence (AI) has significantly transformed human-computer interactions, making it essential to establish robust design standards to ensure effective, ethical, and human-centered AI (HCAI) solutions. Standards serve as the foundation for the adoption of new technologies, and human-AI interaction (HAII) standards are critical to supporting the industrialization of AI technology by following an HCAI approach. These design standards aim to provide clear principles, requirements, and guidelines for designing, developing, deploying, and using AI systems, enhancing the user experience and performance of AI systems. Despite their importance, the creation and adoption of HCAI-based interaction design standards face challenges, including the absence of universal frameworks, the inherent complexity of HAII, and the ethical dilemmas that arise in such systems. This chapter provides a comparative analysis of HAII versus traditional human-computer interaction (HCI) and outlines guiding principles for HCAI-based design. It explores international, regional, national, and industry standards related to HAII design from an HCAI perspective and reviews design guidelines released by leading companies such as Microsoft, Google, and Apple. Additionally, the chapter highlights tools available for implementing HAII standards and presents case studies of human-centered interaction design for AI systems in diverse fields, including healthcare, autonomous vehicles, and customer service. It further examines key challenges in developing HAII standards and suggests future directions for the field. Emphasizing the importance of ongoing collaboration between AI designers, developers, and experts in human factors and HCI, this chapter stresses the need to advance HCAI-based interaction design standards to ensure human-centered AI solutions across various domains.
Enhancing Human-Robot Collaboration through Existing Guidelines: A Case Study Approach
Matsubara, Yutaka, Morikawa, Akihisa, Mizuguchi, Daichi, Fujiwara, Kiyoshi
As AI systems become more prevalent, concerns about their development, operation, and societal impact intensify. Establishing ethical, social, and safety standards amidst evolving AI capabilities poses significant challenges. Global initiatives are underway to establish guidelines for AI system development and operation. With the increasing use of collaborative human-AI task execution, it's vital to continuously adapt AI systems to meet user and environmental needs. Failure to synchronize AI evolution with changes in users and the environment could result in ethical and safety issues. This paper evaluates the applicability of existing guidelines in human-robot collaborative systems, assesses their effectiveness, and discusses limitations. Through a case study, we examine whether our target system meets requirements outlined in existing guidelines and propose improvements to enhance human-robot interactions. Our contributions provide insights into interpreting and applying guidelines, offer concrete examples of system enhancement, and highlight their applicability and limitations. We believe these contributions will stimulate discussions and influence system assurance and certification in future AI-infused critical systems.
Technical Report: Competition Solution For BetterMixture
Zhao, Shuaijiang, Fang, Xiaoquan
In the era of flourishing large-scale models, the challenge of selecting and optimizing datasets from the vast and complex sea of data, to enhance the performance of large language models within the constraints of limited computational resources, has become paramount. This paper details our solution for the BetterMixture challenge, which focuses on the fine-tuning data mixing for large language models. Our approach, which secured third place, incorporates data deduplication, low-level and high-level quality filtering, and diversity selection. The foundation of our solution is Ke-Data-Juicer, an extension of Data-Juicer, demonstrating its robust capabilities in handling and optimizing data for large language models.
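The deduplication and quality-filtering stages the abstract mentions can be sketched minimally. This is an illustrative toy, not Ke-Data-Juicer's or Data-Juicer's actual operators: exact hash-based deduplication and a crude length threshold stand in for the low-level and high-level filters the solution actually used.

```python
import hashlib

def dedup(samples):
    """Drop exact duplicates by content hash, keeping first occurrences."""
    seen, out = set(), []
    for s in samples:
        h = hashlib.sha256(s.encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            out.append(s)
    return out

def quality_filter(samples, min_chars=20):
    """Toy low-level quality filter: discard very short samples.

    The 20-character threshold is an arbitrary illustrative choice.
    """
    return [s for s in samples if len(s) >= min_chars]
```

Real mixing pipelines layer fuzzy deduplication, model-based quality scoring, and diversity selection on top of such basic passes; the sketch only shows the shape of the first two stages.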
It's Impossible for Machines To Think Like Humans
There's a lot of hysteria around Generative AI (GAI) tools like ChatGPT, beyond the usual hype cycle that accompanies many new technologies. There was even the case last year of the now-former Google engineer who was convinced that an AI was, well, sentient. In human terms, this is absolutely impossible. That doesn't mean AI is terrible or that it can't do amazing things to help us. In fact, AI may be just the right technology humanity needs to survive our next phase of evolution. But there is no way, whatsoever, that AI can be in any way, shape or form, human.