Ethics


Top 6 Machine Learning Trends In 2022

#artificialintelligence

If we look at the structure of technology, machine learning clearly falls under the umbrella of artificial intelligence. Machine learning generates algorithms that help machines better understand data and make data-driven decisions. Software testing, for instance, is a classic example of machine learning in use at many organizations, including giants such as Google, Apple and Facebook. Some analysts anticipate that machine learning will gain immense popularity by 2024, with the strongest push coming in 2022 and 2023. Why machine learning was developed is a fairly technical question, but the fundamental reason was to give developers and IT professionals a way to generate applications and solutions quickly.


DeepMind co-founder Mustafa Suleyman departs Google

#artificialintelligence

DeepMind co-founder Mustafa Suleyman has departed Google after an eight-year stint at the company. Suleyman co-founded AI giant DeepMind alongside Demis Hassabis and Shane Legg in 2010, before it was acquired by Google in 2014 for $500 million. DeepMind has become something of an AI darling and has repeatedly made headlines for creating neural networks that have beaten human capabilities in a range of games. DeepMind's AlphaGo even beat Go world champion Lee Sedol in a five-game match. Suleyman moved from DeepMind to Google in 2019 and was most recently the company's vice president of AI product management and policy.


AI bias harms over a third of businesses, 81% want more regulation

#artificialintelligence

AI bias is already harming businesses and there's significant appetite for more regulation to help counter the problem. The findings come from the State of AI Bias report by DataRobot in collaboration with the World Economic Forum and global academic leaders. The report involved responses from over 350 organisations across industries. "DataRobot's research shows what many in the artificial intelligence field have long-known to be true: the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long. The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics."


Sustainability starts in the design process, and AI can help

MIT Technology Review

Artificial intelligence helps build physical infrastructure like modular housing, skyscrapers, and factory floors. "…many problems that we wrestle with in all forms of engineering and design are very, very complex problems…those problems are beginning to reach the limits of human capacity," says Mike Haley, the vice president of research at Autodesk. But there is hope, Haley continues: "This is a place where AI and humans come together very nicely because AI can actually take certain very complex problems in the world and recast them." Where "AI and humans come together" is at the start of the process, with generative design, which incorporates AI into the design process to explore solutions and ideas that a human alone might not have considered. "You really want to be able to look at the entire lifecycle of producing something and ask yourself, 'How can I produce this by using the least amount of energy throughout?'" This kind of thinking reduces the planetary impact not just of construction but of any sort of product creation. The symbiotic human-computer relationship behind generative design is necessary to solve those "very complex problems," including sustainability. "We are not going to have a sustainable society until we learn to build products--from mobile phones to buildings to large pieces of infrastructure--that survive the long-term," Haley notes. The key, he says, is to start in the earliest stages of the design process. "Decisions that affect sustainability happen in the conceptual phase, when you're imagining what you're going to create." He continues, "If you can begin to put features into software, into decision-making systems, early on, they can guide designers toward more sustainable solutions by affecting them at this early stage."


Singapore: UNESCO members adopt a global agreement on the ethics of artificial intelligence

#artificialintelligence

On 25 November 2021, the United Nations (UN) announced that all 193 member states of the United Nations Educational, Scientific and Cultural Organization (UNESCO), including Singapore, adopted a first-of-its-kind global agreement on the ethics of artificial intelligence (AI). The agreement focuses on the broader ethical implications of AI systems in relation to education, science, culture, communication and information; and articulates common values and principles to assist in the creation of legal infrastructure for the healthy development of AI. The rise of AI is well documented. AI is present in everyday life, where UNESCO has recognized that AI supports the decision-making of governments and the private sector; and helps to combat global problems such as climate change and world hunger. As AI becomes increasingly used and relied upon, it is likely that further standards and regulations will emerge as governments and agencies begin to pay more attention to the development of AI.


Organizations Struggle with AI Bias

#artificialintelligence

As organizations roll out machine learning and AI models into production, they're increasingly cognizant of the presence of bias in their systems. Not only does this bias potentially lead to poorer decisions on the part of the AI systems, but it can put the organizations running them in legal jeopardy. However, getting on top of this problem is turning out to be tougher than expected for a lot of organizations. For example, in a report issued last year, Harvard University and Accenture demonstrated how algorithmic bias can creep into the hiring processes of human resources departments. In their 2021 joint report "Hidden Workers: Untapped Talent," the two organizations show how the combination of outdated job descriptions and automated hiring systems that lean heavily on algorithmic processes for posting ads for open jobs and evaluating resumes can keep otherwise qualified individuals from landing jobs.
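To illustrate the mechanism the report describes, here is a minimal, hypothetical sketch (the keyword list and resume text are invented for illustration, not taken from the report) of a naive keyword-based screener that rejects a qualified candidate whose resume doesn't echo the exact phrasing of an outdated job description:

    # Hypothetical sketch: a naive keyword-based resume screener.
    # A candidate passes only if every required keyword appears verbatim,
    # which is one way rigid automated filters exclude qualified applicants.
    REQUIRED_KEYWORDS = {"bachelor's degree", "5 years experience", "scrum master"}

    def passes_screen(resume_text: str) -> bool:
        text = resume_text.lower()
        return all(keyword in text for keyword in REQUIRED_KEYWORDS)

    # Equivalent experience described in different words never reaches a human.
    resume = "Six years leading agile delivery teams; BSc in computer science."
    print(passes_screen(resume))  # False -- rejected despite relevant experience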


Ethics in Ai -- Current issues, existing precautions, and probable solutions

#artificialintelligence

Introduction: Most Artificial Intelligence (AI) systems are developed as black boxes, especially Machine Learning and Deep Learning-based systems. Nowadays, these machine and deep learning-based systems make decisions that affect our daily lives, and they should be explainable to end-users rather than taken for granted. The implications of such systems for public use are rarely explored (i.e., use in agriculture, air combat, military training, education, finance, health care, human resources, customer service, autonomous vehicles, social media, and several other areas [1]-[9]). Beyond these, the future may also rely on AI-based systems that do our laundry, mow our lawns, and fight wars [9]. Thus, there is considerable room to improve the transparency of these systems, along with their fairness and accountability. Some works have already stated the need for guidelines and governance of AI-based systems, but more attention is required in each area of application.


Why Timnit Gebru Isn't Waiting for Big Tech to Fix AI's Problems

#artificialintelligence

Three hundred and sixty-four days after she lost her job as a co-lead of Google's ethical artificial intelligence (AI) team, Timnit Gebru is nestled into a couch at an Airbnb rental in Boston, about to embark on a new phase in her career. Google hired Gebru in 2018 to help ensure that its AI products did not perpetuate racism or other societal inequalities. In her role, Gebru hired prominent researchers of color, published several papers that highlighted biases and ethical risks, and spoke at conferences. She also began raising her voice internally about her experiences of racism and sexism at work. But it was one of her research papers that led to her departure. "I had so many issues at Google," Gebru tells TIME over a Zoom call.


DIGITAL EYE: HOW AI is quietly eating up the workforce with job automation

#artificialintelligence

Welcome to your new weekly briefing from The Digital Eye, compiled for busy professionals who have limited time but want to stay up to date with the latest digital news. Why ethics is 'the new frontier for technology': to build trust, it is important that those developing AI are aware of unintended consequences. Questions about AI ethics are uncomfortable, as they require reflection on our own ethics and on society generally. We hope you have found these articles informative; please share them with others who might also be interested.


Why giving AI 'human ethics' is probably a terrible idea

#artificialintelligence

If you want artificial intelligence to have human ethics, you have to teach it to evolve ethics like we do. At least, that's what a pair of researchers from the International Institute of Information Technology in Bangalore, India proposed in a pre-print paper published today. Titled "AI and the Sense of Self," the paper describes a methodology called "elastic identity" by which the researchers say AI might learn to gain a greater sense of agency while simultaneously understanding how to avoid "collateral damage." In short, the researchers are suggesting that we teach AI to be more ethically aligned with humans by allowing it to learn when it is appropriate to optimize for itself and when it is necessary to optimize for the good of a community. As the researchers put it: "While we may be far from a comprehensive computational model of self, in this work, we focus on a specific characteristic of our sense of self that may hold the key for the innate sense of responsibility and ethics in humans."
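To make the self-versus-community trade-off concrete, here is a minimal, hypothetical sketch of the idea (not the paper's actual formulation; the "elasticity" weight and the payoff numbers are assumptions made for illustration): an agent scores each action by blending its own payoff with the community's payoff, and the weight controls how far its "self" extends to include others.

    # Hypothetical sketch of an "elastic identity"-style trade-off.
    # elasticity = 0 means pure self-interest; elasticity = 1 means the agent
    # fully identifies with the community when scoring actions.
    def choose_action(actions, elasticity):
        def score(action):
            return (1 - elasticity) * action["self"] + elasticity * action["community"]
        return max(actions, key=score)

    actions = [
        {"name": "hoard resources", "self": 10, "community": -5},
        {"name": "share resources", "self": 4, "community": 8},
    ]
    print(choose_action(actions, elasticity=0.1)["name"])  # hoard resources
    print(choose_action(actions, elasticity=0.7)["name"])  # share resources

A mostly self-interested agent picks the first action; as the elasticity grows, the community's payoff dominates the score and the sharing action wins.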