Calling all fashion models … now AI is coming for you

The Guardian

The impact of AI has been felt across industries from Hollywood to publishing – and now it's come for modelling. H&M announced last week that it would create AI "twins" of 30 models, with the intention of using them in social media posts and marketing imagery if the model gives her permission. In a statement, Jörgen Andersson, the chief creative officer at H&M, described the idea as "something that will enhance our creative process and how we work with marketing but fundamentally not change our human-centric approach in any way". The retail giant has worked with successful models including Vilma Sjöberg and Mathilda Gvarliani, who model for Vogue and brands including Chanel. As part of the agreement, each model would be able to book her twin for shoots with other brands – meaning she could, in image form anyway, be in two places at the same time.


Optimizing Autonomous Driving for Safety: A Human-Centric Approach with LLM-Enhanced RLHF

Sun, Yuan, Pargoo, Navid Salami, Jin, Peter J., Ortiz, Jorge

arXiv.org Artificial Intelligence

Reinforcement Learning from Human Feedback (RLHF) is popular in large language models (LLMs), whereas traditional Reinforcement Learning (RL) often falls short. Current autonomous driving methods typically utilize either human feedback in machine learning, including RL, or LLMs. Most feedback guides the car agent's learning process (e.g., controlling the car). RLHF is usually applied in the fine-tuning step, requiring direct human "preferences," which are not commonly used in optimizing autonomous driving models. In this research, we innovatively combine RLHF and LLMs to enhance autonomous driving safety. Training a model with human guidance from scratch is inefficient. Our framework starts with a pre-trained autonomous car agent model and implements multiple human-controlled agents, such as cars and pedestrians, to simulate real-life road environments. The autonomous car model is not directly controlled by humans. We integrate both physical and physiological feedback to fine-tune the model, optimizing this process using LLMs. This multi-agent interactive environment ensures safe, realistic interactions before real-world application. Finally, we will validate our model using data gathered from real-life testbeds located in New Jersey and New York City.
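The abstract describes blending two feedback channels (physical and physiological) into the fine-tuning signal for a pre-trained driving agent. A minimal sketch of that idea, with illustrative names and weights that are assumptions and not from the authors' code, might look like this:

```python
# Hypothetical sketch of the feedback loop described in the abstract.
# Function names, action set, and weights are illustrative assumptions.

ACTIONS = ["brake", "coast", "accelerate"]

def combined_reward(physical: float, physiological: float,
                    w_phys: float = 0.7) -> float:
    """Blend the physical and physiological feedback channels into one
    scalar reward, as the abstract describes integrating both signals."""
    return w_phys * physical + (1.0 - w_phys) * physiological

def update_preferences(prefs: dict, action: str, reward: float,
                       lr: float = 0.1) -> dict:
    """One fine-tuning step: nudge the chosen action's preference toward
    the observed reward (a toy stand-in for an RLHF gradient step)."""
    new = dict(prefs)
    new[action] += lr * (reward - new[action])
    return new

# Start from a uniform "pre-trained" preference table.
prefs = {a: 0.0 for a in ACTIONS}

# Simulated feedback: braking near a pedestrian is physically safe (+1.0)
# but mildly stressful (-0.2); accelerating is unsafe on both channels.
prefs = update_preferences(prefs, "brake", combined_reward(1.0, -0.2))
prefs = update_preferences(prefs, "accelerate", combined_reward(-1.0, -1.0))
```

After these two updates the policy prefers braking over accelerating in this scenario; the paper's actual framework operates on a full autonomous-driving model inside a multi-agent simulator, not a tabular toy like this.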


A human-centric approach to adopting AI

MIT Technology Review

This episode is part of our "Building the future" podcast series. It's a multi-episode series focusing on how organizations, researchers, and innovators are meeting our evolving global challenges. We understand the importance of inclusive conversations and have chosen to highlight the work of women on the cutting edge of technological innovation and business excellence. Researchers are unlocking the value of AI through machine learning and robots developed to augment rather than replace human capabilities across manufacturing, health care, and space exploration. The robots of the past were kept in cages on factory floors and in labs, but this new era of AI-enabled robotics allows humans to work interdependently with robots to boost productivity, increase quality of work, and enable greater flexibility, says Julie Shah, professor in the department of aeronautics at MIT. Shah is also the co-lead of the Work of the Future Initiative at MIT. "Sometimes it can feel as though the emergence of these technologies is just going to sort of steamroll, and work and jobs are going to change in some predetermined way because the technology now exists," says Shah. "But we know from the research that the data doesn't bear that out actually."


Possible Failures of ChatGPT - EnterpriseTalk

#artificialintelligence

Without a human-centric approach, OpenAI's ChatGPT runs on the data available across various channels, and can deliver responses without meeting the context requirements. Sometimes it writes plausible-sounding content that cannot be trusted. The new kid on the block, AI-powered ChatGPT, offers numerous exceptional services and is claimed to be useful for coding, content writing, and more, minimizing human intervention. As this erudite machinery becomes a trending sensation, companies can also see AI biases, security risks, and less personalized CX. The uncapped accessibility and unrestricted usage of ChatGPT have increased cybersecurity risks that can hamper the whole organization. Through ChatGPT, cybercriminals can draft a fraudulent email, purporting to come from a reputed company or person, carrying unsecured links or attachments, requests for sensitive data, or instructions to transfer money into specific accounts.


Commission yearns for setting the global standard on artificial intelligence

#artificialintelligence

The European Commission believes that its proposed Artificial Intelligence Act should become the global standard if it is to be fully effective. The upcoming AI treaty being drafted by the Council of Europe might help the EU achieve just that. In April the European Commission launched its proposal for an Artificial Intelligence Act (AIA). Structured around a risk-based approach, the regulation introduces tighter obligations in proportion to the potential impact of AI applications. Commissioner Thierry Breton argued that "one should not underestimate the advantage of the EU being the first mover" and emphasised that the EU is the main "pacemaker" in regulating the use of AI on a global scale. In a similar vein, the Commission's director-general for communications networks, content and technology, Roberto Viola, said that "equilibrium is key to have a horizontal risk-based approach in which many voices are heard to avoid extremism and create rules that last."


EU challenges for an AI human-centric approach: lessons learnt from ECAI 2020

AIHub

During this period of progressive development and deployment of artificial intelligence, discussions around the ethical, legal, socio-economic and cultural implications of its use are increasing. What are the challenges and the strategy, and what are the values that Europe can bring to this domain? During the European Conference on AI (ECAI 2020), two special events in the format of panels discussed the challenges of AI made in the European Union, the shape of future research and industry, and the strategy to retain talent and compete with other world powers. This article collects some of the main messages from these two sessions, which included the participation of AI experts from leading European organisations and networks. Since the publication of European directives and guidance, such as the EC White Paper on AI and the Trustworthy AI Guidelines, Europe has been laying the foundation for its future vision of AI. The European strategy for AI builds on the well-known and accepted principles found in the Charter of Fundamental Rights of the European Union and the Universal Declaration of Human Rights to define a human-centric approach, whose primary purpose is to enhance human capabilities and societal well-being.


Why autonomous vehicle systems need human-centric approach

#artificialintelligence

Currently, the trending concept behind autonomous vehicles is removing the human and focusing on the machine. But I have a different view. After 12 years at NASA researching autonomous systems for Mars, and seven years at Nissan leading work on autonomous vehicles in Silicon Valley, I believe that an autonomous system without people as a central component will be pretty much useless. As the Hong Kong government targets 30 percent adoption of connected and autonomous vehicles (CAV) and begins testing autonomous technologies, it's crucial to take a human-centric perspective to reap the real rewards of this technology. Imagine you just bought your first autonomous vehicle.


Building trust in human-centric AI - FUTURIUM - European Commission

#artificialintelligence

The Ethics Guidelines for Trustworthy Artificial Intelligence (AI) is a document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This independent expert group was set up by the European Commission in June 2018, as part of the AI strategy announced earlier that year. The AI HLEG presented a first draft of the Guidelines in December 2018. Following further deliberations by the group in light of discussions on the European AI Alliance, a stakeholder consultation and meetings with representatives from Member States, the Guidelines were revised and published in April 2019. In parallel, the AI HLEG also prepared a revised document which elaborates on a definition of Artificial Intelligence used for the purpose of its deliverables.


Opinion Artificial Intelligence needs to become less and less artificial

#artificialintelligence

AI (Artificial Intelligence) is everywhere, and it's here to stay. Beyond consumer applications, companies across sectors are increasingly harnessing AI's power for productivity growth and innovation. Many believe that AI has the potential to become even more significant than the internet. The availability of enormous amounts of data, combined with a huge leap in computational power and major improvements in engineering skills, should help AI, backed by deep learning, make a profound impact across various facets of human life. Amid all the hype, genuine and inflated, around the world of AI, it is pertinent to ask an important question.