The False Philosophy Plaguing AI

#artificialintelligence

The field of Artificial Intelligence (AI) is no stranger to prophecy. At the Ford Distinguished Lectures in 1960, the economist Herbert Simon declared that within 20 years machines would be capable of performing any task achievable by humans. In 1961, Claude Shannon -- the founder of information theory -- predicted that science-fiction-style robots would emerge within 15 years. The mathematician I. J. Good conceived of a runaway "intelligence explosion," a process whereby smarter-than-human machines iteratively improve their own intelligence. Writing in 1965, Good predicted that the explosion would arrive before the end of the twentieth century.


Google Artificial Intelligence Team Draws From Critical Race Theory, Internal Document Shows

#artificialintelligence

Google's artificial intelligence (AI) work draws from Critical Race Theory, a philosophical framework that posits that nearly every interaction should be seen as a racial power struggle and seeks to "disrupt" American society, which it views as immutably racist, according to a company document obtained by The Daily Wire. A screenshot of an internal company page, obtained by The Daily Wire, says under the header "Ethical AI": We focus on AI at the intersection of Machine Learning and society, developing projects that inform the general public; bringing the complexities of individual identity into the development of human-centric AI; and creating ways to measure different kinds of biases and stereotypes. Out [sic] work includes lessons from gender studies, critical race theory, computational linguistics, computer vision, engineering education, and beyond!

Google's Ethical AI team appears intent on encoding far-left ideology into its algorithms even after previous leaders of the team plunged the section into chaos over their insistence on overlaying progressive politics onto mathematics. Until recently, the team was co-led by Timnit Gebru, who cofounded the "Black in AI" racial affinity group and in 2018 coauthored a paper finding that facial recognition technology was less accurate at recognizing women and minorities.


Address artificial intelligence threats, politicians told

#artificialintelligence

Governments' increasing use of artificial intelligence (AI) technology and people's inability to avoid official computer services present threats politicians must address with law, privacy watchdogs say. "Regulatory intervention is necessary," the B.C. and Yukon ombudsman and information and privacy commissioners said in a report released June 17. "The regulatory challenge is deciding how to adapt or modernize existing regulatory instruments to account for the new and emerging challenges brought on by government's use of AI. The increasing automation of government decision-making undermines the applicability or utility of existing regulations or common law rules that would otherwise apply to and sufficiently address those decisions." Just as commercial facial recognition systems have been shown to exhibit bias and infringe people's privacy rights, government use of AI can have serious, long-lasting impacts on people's lives and could create tension with the fairness and privacy obligations of democratic institutions, the report said.


6 Ways Artificial Intelligence Will Transform StartUps Business

#artificialintelligence

Over the past decade, the field of Artificial Intelligence has taken immense leaps forward. Today, those advances are helping organizations set themselves apart from the competition. OTT platforms like Netflix and Amazon wouldn't be the same without their AI-based recommendation engines. Retailers like Walmart and Tesco are digging into new AI opportunities for demand forecasting, supply chain management, digital store infrastructure, and predicting consumer buying patterns. Healthcare in the era of COVID is using AI to speed scientific research and vaccine development.


What to know about the EU's facial recognition regulation

#artificialintelligence

The European Commission's (EC) proposed Artificial Intelligence (AI) regulation – a much-awaited piece of legislation – is out. While the text must still go through consultations within the EU before its adoption, the proposal already gives a good sense of how the EU envisions the development of AI in the years to come: by following a risk-based approach to regulation. Other use cases, such as facial recognition technology (FRT) for authentication processes, are not on the list of high-risk applications and thus should require a lighter level of regulation. While technology providers have to maintain the highest level of performance and accuracy in their systems, this necessary step isn't the most critical one for preventing harm. The EC doesn't specify an accuracy threshold to meet, but rather requires a robust and documented risk-mitigation process designed to prevent harm.


The Future of AI in Law: Changing the Legal Landscape

#artificialintelligence

Artificial intelligence (AI) is one of the fastest-growing technological industries today, but what effects will it have on legal practice? In addition to the growing number of legal questions that arise as the explosive growth of AI creeps into our everyday lives, artificial intelligence is already enabling some software to carry out legal functions. Let's discuss the future of AI in law. Artificial intelligence, simply put, is teaching computers to "think" the way humans would, using given data and a desired output. Many different types of systems utilize AI, from advertising and marketing to shopping and scheduling.
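
To make that definition concrete, here is a minimal supervised-learning sketch in Python: the "given data" are example inputs, the "desired output" is a label for each, and the model learns the mapping between them. The toy numbers and the choice of scikit-learn are illustrative assumptions, not something taken from the article.

```python
# Minimal supervised-learning sketch (illustrative toy data):
# X holds the "given data", y holds the "desired output" for each example.
from sklearn.linear_model import LogisticRegression

X = [[0.2, 1.1], [0.9, 0.4], [0.1, 0.8], [1.0, 0.2]]  # input examples
y = [0, 1, 0, 1]                                       # desired outputs

model = LogisticRegression()
model.fit(X, y)                       # "teach" the computer the mapping
print(model.predict([[0.15, 0.9]]))   # apply the learned mapping to a new input
```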


Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade

#artificialintelligence

This is the 12th "Future of the Internet" canvassing Pew Research Center and Elon University's Imagining the Internet Center have conducted together to get expert views about important digital issues. In this case, the questions focused on the prospects for ethical artificial intelligence (AI) by the year 2030. This is a nonscientific canvassing based on a nonrandom sample; this broad array of opinions about where current trends may lead in the next decade represents only the points of view of the individuals who responded to the queries.

Pew Research and Elon's Imagining the Internet Center built a database of experts to canvass from a wide range of fields, choosing to invite people from several sectors, including professionals and policy people based in government bodies, nonprofits and foundations, technology businesses, think tanks and in networks of interested academics and technology innovators. The predictions reported here came in response to a set of questions in an online canvassing conducted between June 30 and July 27, 2020. In all, 602 technology innovators and developers, business and policy leaders, researchers and activists responded to at least one of the questions covered in this report. More on the methodology underlying this canvassing and the participants can be found in the final section.

Artificial intelligence systems "understand" and shape a lot of what happens in people's lives. AI applications "speak" to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk. They help people drive from point A to point B and update traffic information to shorten travel times. They are the operating system of driverless vehicles. They sift applications to make recommendations about job candidates. They determine the material that is offered up in people's newsfeeds and video choices. They recognize people's faces, translate languages and suggest how to complete people's sentences or search queries. They can "read" people's emotions. They beat them at sophisticated games.


Fairness and Ethics in Artificial Intelligence! - Analytics Vidhya

#artificialintelligence

Today, AI is being adopted in everyday life, and it is now more important than ever to ensure that decisions made using AI do not reflect discriminatory behavior toward any segment of the population. It is important to take fairness into consideration when consuming output from AI. A quote from The Guardian summarizes it well: "Although neural networks might be said to write their own programs, they do so towards goals set by humans, using data collected for human purposes. If the data is skewed, even by accident, the computers will amplify injustice." Discrimination toward a sub-population can arise unintentionally and unknowingly, so before any AI solution is deployed, a check for bias is imperative.
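
One concrete form such a pre-deployment check could take is a demographic-parity test: compare the rate of positive decisions the system produces across subgroups. The sketch below is a minimal illustration; the data, column names, and the 0.1 tolerance are invented assumptions, not a prescribed standard.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rates between groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decisions produced by an AI system under evaluation.
decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b"],
    "approved": [1,   0,   0,   1,   1,   1],
})

gap = demographic_parity_gap(decisions, "group", "approved")
if gap > 0.1:  # assumed tolerance; real thresholds are context-dependent
    print(f"Possible disparate impact: approval-rate gap = {gap:.2f}")
```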


Fixing Bias in AI Systems by Building Better AI Models

#artificialintelligence

AI models are only as good as the algorithms and data they are trained on. When an AI system fails, it is usually due to one of three factors: 1) the algorithm has been incorrectly trained, 2) there is bias in the system's training data, or 3) there is developer bias in the model-building process. The focus of this article is on bias in training data and the bias that is coded directly into AI systems by model developers. "I think today, the AI community at large has a self-selecting bias simply because the people who are building such systems are still largely white, young and male. I think there is a recognition that we need to get beyond it, but the reality is that we haven't necessarily done so yet."
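
A simple way to probe the second factor, bias in the training data, is to audit how each subgroup is represented before the model is ever trained. The sketch below is a minimal, assumed example (the group names and labels are invented): it reports each group's share of the dataset and its positive-label rate, since a group that is rare or label-skewed in the data is a bias risk in the resulting model.

```python
from collections import Counter

# Hypothetical (group, label) pairs standing in for a real training set.
samples = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_a", 0), ("group_a", 1), ("group_b", 0), ("group_b", 0),
]

counts = Counter(group for group, _ in samples)
for group, n in counts.items():
    pos_rate = sum(label for g, label in samples if g == group) / n
    share = n / len(samples)
    print(f"{group}: {share:.0%} of data, positive-label rate {pos_rate:.0%}")
```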


Training next generation of leaders in responsible use of artificial intelligence

#artificialintelligence

Artificial intelligence (AI) is transforming our world in powerful ways, from improving medical care and changing the retail landscape to enabling convenient features on our smartphones. But as AI increasingly underpins our daily lives, important questions about its application – and potential misuse – will continue to arise. A new cohort of students will soon be poised to tackle these crucial questions head on, thanks to a fellowship and award program being established at McGill University through a generous $2-million donation to the Faculty of Science from BMO Financial Group. The fellowship program, open to graduate students, and the award program, open to undergraduate students from across the University, aim to train the next generation of professionals in the important ethical considerations surrounding the use of AI. The program will equip not just computer scientists and software developers but also future industry leaders and policy makers with the necessary grounding and skills in the responsible and ethical use of AI, while attracting a diverse group of voices to the field.