Social & Ethical Issues


How will AI impact the future of businesses and society?

#artificialintelligence

At a time when India is trying to rekindle productivity and growth, AI promises to fill the gap. AI can boost profitability and transform businesses across sectors through systems that can learn, adapt and evolve with changing times. Such systems are increasingly important in a post-pandemic world, where scalable AI solutions may help organisations stay prepared even in unprecedented situations. As organisations work to re-architect themselves, changing their business models and technology architecture to survive the pandemic, it is time for them to invest in scalable AI solutions to achieve their goals faster. At the same time, technologists and businesses across the world have to advocate for the responsible use of AI.


Taking On Tech: Dr. Timnit Gebru Exposes The Underbelly Of Performative Diversity In The Tech Industry

#artificialintelligence

SAN FRANCISCO, CA - SEPTEMBER 07: Google AI Research Scientist Timnit Gebru speaks onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California. "Taking On Tech" is an informative series that explores artificial intelligence, data science, algorithms, and mass censorship. In this inaugural report, For(bes) The Culture kicks things off with Dr. Timnit Gebru, a former researcher and co-lead of Google's Ethical AI team. When Gebru was forced out of Google after refusing to retract a research paper that had already been cleared by Google's internal review process, a conversation about the tech industry's inherent diversity problem resurfaced. The paper raised concerns about algorithmic bias in machine learning and the latent perils that AI presents for marginalized communities. Around 1,500 Google employees signed a letter in protest, calling for accountability and answers over what they saw as an unethical firing.


The University Of Utah: UI2 And Tanner Humanities Center Team Up For Discussion Of Artificial Intelligence

#artificialintelligence

"Each of our lives is increasingly impacted by Artificial Intelligence--how we accomplish tasks, our habits, even the way we think about the world--and nearly every aspect of intellectual pursuit is changing through this technology," said Mike Kirby, UI2 director. "To be proactive, we need to ask not only, 'What do we do?' but'How do we do it?'" The symposium, which runs from 3:30 to 5 p.m. on Sept. 21 and 22, is offered to University of Utah community members and industry partners from across the academic spectrum as an opportunity to discuss the ethical, social and technical implications of artificial intelligence and its impact on society. The Zoom gathering will feature keynote speaker Moshe Vardi, who leads Rice University's Initiative on Technology, Culture and Society. Other discussions about the intersections of technology and society are planned throughout the fall.


NIST calls for help in developing framework for managing risks of AI

ZDNet

The National Institute of Standards and Technology (NIST) -- part of the US Department of Commerce -- is asking the public for input on an AI risk management framework, which the organization is in the process of developing as a way to "manage the risks posed by artificial intelligence." The Artificial Intelligence Risk Management Framework (AI RMF) will be a voluntary document that can be used by developers, evaluators and others as a way to "improve the trustworthiness of AI systems." NIST noted that the request for input comes after Congress and the White House asked the organization to create a framework for AI. Deputy Commerce Secretary Don Graves said in a statement that the document "could make a critical difference in whether or not new AI technologies are competitive in the marketplace." "Each day it becomes more apparent that artificial intelligence brings us a wide range of innovations and new capabilities that can advance our economy, security and quality of life. It is critical that we are mindful and equipped to manage the risks that AI technologies introduce along with their benefits," Graves said.


The Question Medical AI Can't Answer

#artificialintelligence

Artificial intelligence (AI) is at an inflection point in health care. A 50-year span of algorithm and software development has produced some powerful approaches to extracting patterns from big data. For example, deep-learning neural networks have been shown to be effective for image analysis, resulting in the first FDA-approved AI-aided diagnosis of an eye disease, diabetic retinopathy, using only photos of a patient's eye. However, the application of AI in the health care domain has also revealed many of its weaknesses, outlined in a recent guidance document from the World Health Organization (WHO). The document covers a lengthy list of topics, each as important as the next: responsible, accountable, inclusive, equitable, ethical, unbiased, responsive, sustainable, transparent, trustworthy and explainable AI.
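The article does not describe how such a system is built, but the general pattern it refers to, a deep convolutional network that maps a retinal photograph to a diagnostic score, can be sketched roughly as below. This is a minimal, hypothetical PyTorch illustration with an assumed binary "referable retinopathy" label; it is not the FDA-cleared model, and the class and layer choices are illustrative assumptions only.

```python
# Minimal sketch of a convolutional classifier for retinal photographs.
# Hypothetical illustration only -- not the FDA-cleared diabetic
# retinopathy system referenced in the article.
import torch
import torch.nn as nn

class RetinopathyClassifierSketch(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Stacked convolution blocks extract image features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # A linear head maps pooled features to class scores.
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

if __name__ == "__main__":
    model = RetinopathyClassifierSketch()
    batch = torch.randn(4, 3, 224, 224)    # four stand-in fundus photos
    probs = model(batch).softmax(dim=1)     # per-class probabilities
    print(probs.shape)                      # torch.Size([4, 2])
```

In practice such models are trained on large sets of labeled retinal images and validated clinically; the sketch only shows the shape of the computation, which is part of why the WHO's concerns about bias and transparency apply to the data and evaluation as much as to the architecture.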


Disclosure: What is the Future of Artificial Intelligence in India?

#artificialintelligence

The world knows that Artificial Intelligence is creating a massive shift in the technology field as well as in the lives of citizens across the world. The continuous emergence of AI models has accelerated productivity and enhanced customer engagement in a cut-throat competitive market. Multiple industries and companies, along with the government, have started adopting AI models to free employees from mundane tasks through automated services. India is one of the developing countries that has started allocating budgets for Artificial Intelligence projects. India is set to use ground-breaking technologies to improve productivity and earn revenue in the near future. Reputed tech companies, especially from Silicon Valley, have recognised the potential of India's digital footprint and have started opening branches in the country.


Will Members of the Military Ever Be Willing to Fight Alongside Autonomous Robots?

Slate

A writer and military historian responds to Justina Ireland's "Collateral Damage." The histories of the military and technology often go hand in hand. Soldiers and military thinkers throughout the past have continually come up with new ways to fill the people over there full of holes as a means to encourage them to stop trying to do the same to their opponents. After the introduction of a new weapon or the improvement of an existing one, strategists spend their time trying to come up with the best way to deploy their forces to take advantage of the tools and/or to blunt their effectiveness by devising countermeasures. The development of the Greek phalanx helped protect soldiers from cavalry, the deployment of English longbows helped stymie large formations of enemy soldiers, new construction methods changed the shape of fortifications, line infantry helped European formations take advantage of firearms, and anti-aircraft cannons helped protect against incoming enemy aircraft.


Winning your AI journey - tune out the noise and tune in to the music!

#artificialintelligence

It is an acknowledged fact that data and Artificial Intelligence (AI) are pivotal levers of digital transformation, which can boost the competitiveness of businesses. But in what is an emerging trend, data and AI now have a new stakeholder in the organisation - the CEO. The cloud makes it possible for enterprises to scale and embed a data-driven approach into every business process. As a result, AI can enable the kind of value generation that CEOs are interested in – not incremental but transformative value. Besides the proven business benefits of a data-driven approach, there is one more dimension that is crucial to the CEO agenda. Responsible business is taking centre stage in board rooms and the need to mitigate risks for clients, employees and society is a top priority.


Five early reflections on the EU's proposed legal framework for AI

#artificialintelligence

As the use of AI accelerates around the world, policymakers are asking questions about what frameworks should guide the design and use of AI, and how it can benefit society. The EU is the first institution to take a major step toward answering these questions, through a proposed legal framework for AI released on 21 April 2021. In doing so, the EU is seeking to establish a safe environment for AI innovation and to position itself as a leader in setting "the global gold standard" for regulating AI. Notably, the proposal regulates specific uses of AI rather than the underlying technology. This is a positive aspect of the proposal, as AI is a broad set of technologies, tools and applications. Shifting the focus away from AI technology, which can have significantly different impacts depending on the application for which it is used, helps to mitigate the risk of divergent requirements for AI products and services.


Twitter offers bug bounty to spot AI bias so it can fix its algorithms

#artificialintelligence

Twitter has a new way to rid itself of artificial intelligence bias: pay outsiders to help it find problems. On Friday, the short-message app maker detailed a new bounty competition that offers prizes of up to $3,500 for showing Twitter how its technology incorrectly handles photos. Earlier this year, Twitter confirmed a problem in its automatic photo cropping mechanism, concluding the software favored white people over Black people. The cropping mechanism, which Twitter calls its "saliency algorithm," is supposed to present the most important section of an image when you're scrolling through tweets. Twitter's approach to tackling algorithmic bias -- asking outside experts and observers to study its code and results -- innovates on bug bounties, which have historically been used for reporting security vulnerabilities.
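To make the idea of a saliency-driven crop concrete, here is a rough, hypothetical sketch: it scores every candidate window with a crude edge-energy proxy for saliency and keeps the highest-scoring one. This is not Twitter's actual saliency algorithm, which is a learned model; the function names, the grayscale input and the edge-energy heuristic are assumptions made purely for illustration.

```python
# Minimal sketch of saliency-driven cropping, the general idea behind the
# mechanism described above. NOT Twitter's actual "saliency algorithm";
# it uses a crude edge-energy proxy for saliency on a grayscale image.
import numpy as np

def edge_energy(gray: np.ndarray) -> np.ndarray:
    """Approximate saliency as local gradient magnitude."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def crop_most_salient(gray: np.ndarray, crop_h: int, crop_w: int) -> tuple:
    """Return (top, left) of the crop window with the highest total energy."""
    energy = edge_energy(gray)
    # Integral image lets us score every window in O(1) after O(HW) setup.
    integral = np.pad(energy.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    best, best_pos = -1.0, (0, 0)
    for top in range(gray.shape[0] - crop_h + 1):
        for left in range(gray.shape[1] - crop_w + 1):
            total = (integral[top + crop_h, left + crop_w]
                     - integral[top, left + crop_w]
                     - integral[top + crop_h, left]
                     + integral[top, left])
            if total > best:
                best, best_pos = total, (top, left)
    return best_pos

if __name__ == "__main__":
    img = np.random.rand(120, 200)          # stand-in grayscale image
    top, left = crop_most_salient(img, 80, 80)
    print("crop at", top, left)
```

Even in this toy form, the failure mode Twitter is paying bounties to surface is visible: whatever the model treats as "salient" decides who stays in the frame, so auditing that choice across demographic groups is the whole point of the competition.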