

"This is not a data problem": Algorithms and Power in Public Higher Education in Canada

McConvey, Kelly, Guha, Shion

arXiv.org Artificial Intelligence

Algorithmic decision-making is increasingly being adopted across public higher education. The expansion of data-driven practices by post-secondary institutions has occurred in parallel with the adoption of New Public Management approaches by neoliberal administrations. In this study, we conduct a qualitative analysis of an in-depth ethnographic case study of data and algorithms in use at a public college in Ontario, Canada. We identify the data, algorithms, and outcomes in use at the college. We assess how the college's processes and relationships support those outcomes and the different stakeholders' perceptions of the college's data-driven systems. In addition, we find that the growing reliance on algorithmic decisions leads to increased student surveillance, exacerbation of existing inequities, and the automation of the faculty-student relationship. Finally, we identify a cycle of increased institutional power perpetuated by algorithmic decision-making, and driven by a push towards financial sustainability.


Is artificial intelligence a threat to humans?

#artificialintelligence

Mitigating bias: AI systems can perpetuate and amplify bias in their training data, which can lead to unfair or discriminatory outcomes. To mitigate bias, it's important to actively identify and address sources of bias in the data and algorithms used to train AI systems. Transparency: AI systems should be transparent in their decision-making processes, so that users can understand how they arrived at a particular decision or output. This can help users to identify and correct any errors or biases in the system. Accountability: AI systems should be designed and implemented in a way that makes it possible to hold individuals and organizations responsible for their actions.
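Bias mitigation of the kind described above usually starts with measurement. As a hedged illustration (the function, data, and group labels below are invented for the example, not taken from any particular system), one simple fairness check is the demographic parity difference: the gap in positive-prediction rates between two groups.

```python
# Hypothetical sketch: measuring one simple fairness metric,
# demographic parity difference, on toy model predictions.
# All names and data here are invented for illustration.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    low, high = sorted(rates.values())
    return high - low

# Toy data: 1 = favorable prediction; protected attribute has groups "A", "B"
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A gap near zero suggests the two groups receive favorable outcomes at similar rates; a large gap is one signal that the training data or model warrants the kind of active auditing the excerpt recommends.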


The Turing Deception

Noever, David, Ciolino, Matt

arXiv.org Artificial Intelligence

The outlier, however, for ChatGPT is Appendix F, based on the prompt to generate variants on poetry dedicated to Turing. In this instance, the generated content bypassed OpenAI's detector with high confidence as real (99.98%). In their original report [24], the authors found "detection rates of ~95% for detecting 1.5B GPT-2-generated text" and noted that "We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective." Like the evolution of ever larger language models (>100 billion parameters), refinements also have built-in heuristics or guardrails for model execution. The Instruct series of GPT-3 demonstrated the ability to answer questions directly without conversational meanderings. ChatGPT includes longer-term conversational memory, such that the API can track the dialog even across leaps of narration that single API calls could not span. One can test dialogs with impersonal pronouns like "it" carrying forward in the conversation with context from previous API calls in a single session, one easily grasped example of ChatGPT's API memory being both powerful and expensive to encode for more extended conversations. As Turing himself posed the human capacity to list memories [1]: "Actual human computers really remember what they have to do ... Constructing instruction tables is usually described as 'programming.'"


STOA meets its International Advisory Board to discuss the Artificial Intelligence Act

#artificialintelligence

Written by Philip Boucher and Carl Pierer. On 21 April 2021, the European Commission published the much-anticipated Artificial Intelligence Act (AIA), an ambitious cross-sectoral attempt to regulate artificial intelligence (AI) applications. Its aim is to ensure that all European citizens can trust AI by providing proportionate and flexible rules – harmonised across the single market – to address the specific risks posed by AI systems and set the highest standards worldwide. The proposal sets out a risk-based approach to regulating AI applications: those presenting an 'unacceptable risk' would be banned, those presenting a 'high risk' would be subjected to additional requirements before entering the market, and others, such as chatbots and 'deep fakes', would be subject to new transparency requirements. Applications presenting 'low or minimal risk' – the vast majority of AI applications – could enter the market without restrictions, although voluntary codes of conduct may be developed. Other proposed measures include a European AI Board to monitor implementation and regulatory sandboxes to facilitate innovation.


Council Post: The Reality Behind The AI Illusion

#artificialintelligence

Though artificial intelligence has evolved recently and appears to be a new phenomenon in modern society, it is much older than you would imagine. Being actively involved in the global AI community, I've noticed that many people still associate AI with sci-fi Hollywood movies displaying a distant future powered by intelligent robots and machines. However, this perception is waning as AI becomes more commonplace in our daily lives. The earliest instances of intelligent machines appear in ancient Greek mythology, with conceptions of mechanical robots made to help the Greek god Hephaestus. These were followed by a series of milestones in the history of AI, which started as a field of research in the late 1950s with the development of the first algorithms to solve complex mathematical problems.


Ethical AI – Decoded in 7 Principles

#artificialintelligence

Consider developing AI mindful of each stakeholder to benefit the environment and all present and future ecosystem members, human and non-human alike.


Artificial Intelligence and Decision-Making: Can We Trust AI Decisions Today? - Technoroll

#artificialintelligence

Recent technological advancements have paved the way for Artificial Intelligence (AI) to be integrated into our daily lives. Innovations in the field have greatly disrupted the business landscape, changed consumer behavior, and redefined customer service. According to a 2019 report by the U.S. think tank Center for Data Innovation, AI is applied in 32% of Chinese businesses. Meanwhile, in the EU and U.S., AI application is at 18% and 22%, respectively. As businesses become more competitive, there is a proportionate increase in the demand for AI to help simplify complex tasks.


How the U.S. patent office is keeping up with AI

#artificialintelligence

Technology keeps creating challenges for intellectual property law. The infamous case of the "monkey selfie" challenged the notion of not just who owns a piece of intellectual property, but what constitutes a "who" in the first place. Last decade's semi-sentient monkey is giving way to a new "who": artificial intelligence. The rapid rise of AI has forced the legal field to ask difficult questions about whether an AI can hold a patent at all, how existing IP and patent laws can address the unique challenges that AI presents, and what challenges remain. The answers to these questions are not trivial; stakeholders have poured billions upon billions of dollars into researching and developing AI technologies and AI-powered products and services across academia, government, and industry.


Home :: Books :: Machine Learning & Security: Protecting Systems with Data and Algorithms

#artificialintelligence

Can machine learning techniques solve our computer security problems and finally put an end to the cat-and-mouse game between attackers and defenders? Or is this hope merely hype? Now you can dive into the science and answer this question for yourself. With this practical guide, you'll explore ways to apply machine learning to security issues such as intrusion detection, malware classification, and network analysis. Machine learning and security specialists Clarence Chio and David Freeman provide a framework for discussing the marriage of these two fields, as well as a toolkit of machine-learning algorithms that you can apply to an array of security problems.


Amazon.com: Machine Learning and Security: Protecting Systems with Data and Algorithms (9781491979907): Clarence Chio, David Freeman: Books

#artificialintelligence

We wrote this book to provide a framework for discussing the inevitable marriage of two ubiquitous concepts: machine learning and security. While there is some literature on the intersection of these subjects (and multiple conference workshops: CCS's AISec, AAAI's AICS, and NIPS's Machine Deception), most of the existing work is academic or theoretical. In particular, we did not find a guide that provides concrete, worked examples with code that can educate security practitioners about data science and help machine learning practitioners think about modern security problems effectively. In examining a broad range of topics in the security space, we provide examples of how machine learning can be applied to augment or replace rule-based or heuristic solutions to problems like intrusion detection, malware classification, or network analysis. In addition to exploring the core machine learning algorithms and techniques, we focus on the challenges of building maintainable, reliable, and scalable data mining systems in the security space.
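The excerpt mentions machine learning augmenting or replacing rule-based solutions for problems like intrusion detection. As a hedged sketch of that idea (not an example from the book itself; the feature, data, and thresholds below are invented), a minimal statistical anomaly detector learns a baseline from benign traffic and flags large deviations, a stepping stone from fixed rules toward learned models.

```python
# Hypothetical sketch: a minimal statistical anomaly detector,
# illustrating how a learned baseline can replace a hand-written rule
# in intrusion detection. All data and thresholds are invented.
import statistics

def train(baseline):
    """Learn the mean and standard deviation of a feature from benign traffic."""
    return statistics.mean(baseline), statistics.stdev(baseline)

def is_anomalous(value, mean, stdev, z_threshold=3.0):
    """Flag values more than z_threshold standard deviations from the mean."""
    return abs(value - mean) > z_threshold * stdev

# Toy feature: bytes transferred per connection in benign traffic
benign = [500, 520, 480, 510, 495, 505, 515, 490]
mean, stdev = train(benign)

print(is_anomalous(512, mean, stdev))     # typical connection -> False
print(is_anomalous(50_000, mean, stdev))  # large transfer -> True
```

Unlike a hard-coded rule ("block transfers over N bytes"), the threshold here adapts to whatever baseline the detector is trained on, which is the basic appeal of data-driven security systems the authors describe; production systems would use richer features and models.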