Identifying Potential Inlets of Man in the Artificial Intelligence Development Process
Workman, Deja; Dancy, Christopher L.
In this paper we aim to identify how the typical artificial intelligence development process encourages or facilitates the creation of racialized technologies. We begin with Sylvia Wynter's definition of the biocentric genre of Man and its exclusion of Blackness from humanness. We then outline what we consider the typical steps for developing an AI-based technology, broken down into six stages: identifying a problem, development process and management tool selection, dataset development and data processing, model development, deployment and risk assessment, and integration and monitoring. The goal of this paper is to better understand how Wynter's biocentric Man is represented and reinforced both by the technologies produced in the AI lifecycle and by the lifecycle itself; we hope to identify ways in which the distinction of Blackness from the "ideal" human leads to perpetual punishment at the hands of these technologies. By deconstructing this development process, we can potentially identify ways in which humans in general have not been prioritized and how those effects disproportionately harm marginalized people. We hope to offer solutions that will encourage changes in the AI development cycle.
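For concreteness, the six-stage lifecycle the abstract enumerates can be sketched as a simple data structure. This is a minimal illustration only: the stage names are taken from the abstract, while the `audit_points` helper is a hypothetical hook reflecting the paper's argument that every stage is a potential "inlet" for racialized outcomes.

```python
# Six stages of a typical AI development lifecycle, as enumerated in
# the abstract above (names paraphrased from the paper).
AI_LIFECYCLE_STAGES = [
    "identifying a problem",
    "development process and management tool selection",
    "dataset development and data processing",
    "model development",
    "deployment and risk assessment",
    "integration and monitoring",
]

def audit_points(stages):
    """Return (index, stage) pairs marking where an audit could occur.

    Hypothetical helper: here every stage is flagged, mirroring the
    paper's claim that each step of the lifecycle can introduce bias.
    """
    return list(enumerate(stages, start=1))
```

A review process built on this sketch would walk `audit_points(AI_LIFECYCLE_STAGES)` and attach stage-specific questions (e.g., who defined the problem, whose data was collected) at each index.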
Tech Industry Stuck Over Patent Problems With AI Algorithms
Machine learning algorithms can, for example, spit out molecule combinations in the hunt for new drugs, map out schematics for novel chip designs, and even write code. She said Google had filed multiple patents describing a machine-learning technique used internally to automatically design and lay out components of the company's custom TPU AI accelerator chips currently used in its servers. Uncertainty about whether, or how best, to apply for patents protecting IP produced by algorithms can be a roadblock in developing new products, especially in the pharmaceutical and biotech industries. Companies that rely on AI software to create new drugs or antibodies, for example, often need to secure patents before they can kick-start clinical trials. "We at Google are definitely giving a lot of thought to the inventorship question overall… We're thinking through inventive contribution issues throughout the AI development process," Sheridan concluded.
AI Engineering: Inclusive or Exclusive?
Data teams, and other jobs dealing with data, are still pretty much in the Wild West: all of it is new territory yet to be explored. Certain best practices have emerged in recent times, but for the most part there is no single proven method to follow, and the fact that data professionals' job titles (and the roles they play) differ so widely is further evidence of this. One fork in the path for the future of data teams, data roles, and even the field of artificial intelligence (AI) in general is whether AI ought to be inclusive (various types of people with different roles working together toward a shared objective) or exclusive (siloed to particular, specialized teams in order to get the job done more precisely and effectively). Which direction AI veers could alter the core structure of companies and even individual career paths. So, what is the future -- inclusive or exclusive?
Biased Artificial Intelligence
I've also come to realise that this code could have unintended consequences for the employment prospects, loan approvals and health outcomes of an entire stratum of society. This realisation prompted me to delve deeper into the notion of bias in artificial intelligence and its unintended consequences in real-world scenarios. It is possible to build AI systems that are more robust against bias and discrimination. Furthermore, a partnership between humans and machines could actually improve the fairness of human decision making. My intention in this blog is to focus explicitly on the ways in which a biased system directly affects a minority group and the steps we can take to fix it.
Improving Verifiability in AI Development
We've contributed to a multi-stakeholder report by 58 co-authors at 30 organizations, including the Centre for the Future of Intelligence, Mila, the Schwartz Reisman Institute for Technology and Society, the Center for Advanced Study in the Behavioral Sciences, and the Center for Security and Emerging Technology. The report describes 10 mechanisms for improving the verifiability of claims made about AI systems. Developers can use these tools to provide evidence that AI systems are safe, secure, fair, or privacy-preserving, while users, policymakers, and civil society can use them to evaluate AI development processes. Although a growing number of organizations have articulated ethics principles to guide their AI development process, it can be difficult for those outside an organization to verify whether its AI systems reflect those principles in practice.