GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further: writing the first few words or sentence of the target output may be necessary.
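A minimal sketch of this "seed the target output" technique, in Python. The `complete` helper, the seed wording, and the model behind it are illustrative assumptions for exposition, not details from the article:

    # Sketch: constraining a completion model by writing the first words
    # of the desired output. `complete` is a stand-in for any
    # text-completion API client; it is not a real library call.

    def complete(prompt: str) -> str:
        # Replace this stub with a call to your completion API of choice.
        return "<model completion>"

    passage = "..."  # the text to be summarized

    # Loosely constrained prompt: the model may pivot into other modes
    # of completion (dialogue, commentary, continuing the passage).
    loose_prompt = f"My second grader asked me what this passage means:\n{passage}\n"

    # Tightly constrained prompt: imitate a correct output and seed its
    # first words, so the model continues the summary instead of drifting.
    tight_prompt = loose_prompt + 'I rephrased it for him, in plain language: "'

    summary = complete(tight_prompt)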


Abolish the #TechToPrisonPipeline

#artificialintelligence

The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.


Dr Enrico Bonadio

#artificialintelligence

Enrico Bonadio is Reader at The City Law School, where he teaches various modules on intellectual property (IP) law. He holds law degrees from the University of Florence (PhD) and the University of Pisa (LLB), and is Associate Editor and Intellectual Property Correspondent of the European Journal of Risk Regulation as well as a member of the Editorial Board of NUART Journal. His current research agenda focuses on copyright protection of unconventional forms of expression, including graffiti and street art. Enrico has recently co-edited the book "Non-Conventional Copyright - Do New and Non-Traditional Works Deserve Protection?" (together with Nicola Lucchi, Elgar 2018) and edited the "Cambridge Handbook of Copyright in Street Art and Graffiti" (Cambridge University Press, 2019). Enrico is also researching IP protection of AI and robotics: he is part of a consortium that has been awarded funding by the EU under Horizon 2020 to assess the area of interactive robots in society (the INBOTS project).


Robotics and Artificial Intelligence Conferences, Rome, Italy

#artificialintelligence

Dr. Truby is Director of the Centre for Law & Development at Qatar University College of Law, a legal research and policy centre focused on delivering solutions to the needs of Qatar's National Development Strategy. Its current research and roundtable agenda focuses on financial innovation for Qatar's economic diversification, including artificial intelligence, cybersecurity, digital currencies and blockchain technology. As a lawyer and academic established in law, policy and social sciences, he has secured major research grants from Qatar Foundation as well as other corporate and public sponsors, enabling him to research and publish in areas of interest including financial innovation and regulation, cybersecurity, AML/CFT, taxation and commercial law. He also studies policy tools to influence social behavior towards decarbonization and other sustainability objectives that mitigate climate change. Before joining QU College of Law in 2010, Dr. Truby taught graduate and undergraduate courses on the LLM and LLB programmes at Newcastle Law School (England).


Is Artificial Intelligence (AI) A Threat To Humans?

#artificialintelligence

Are artificial intelligence (AI) and superintelligent machines the best or worst thing that could ever happen to humankind? The question has been with us since the 1940s, when computer scientist Alan Turing began to believe that there would come a time when machines could have an unlimited impact on humanity through a process that mimicked evolution. When Oxford University Professor Nick Bostrom's New York Times best-seller, Superintelligence: Paths, Dangers, Strategies, was first published in 2014, it struck a nerve at the heart of this debate with its focus on all the things that could go wrong. However, in my recent conversation with Bostrom, he also acknowledged that there is an enormous upside to artificial intelligence technology.


AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing by Karen Yeung, Andrew Howes, Ganna Pogrebna :: SSRN

#artificialintelligence

In this paper, we (1) argue that the international human rights framework provides the most promising set of standards for ensuring that AI systems are ethical in their design, development and deployment, and (2) sketch the basic contours of a comprehensive governance framework, which we call 'human rights-centred design, deliberation and oversight', for ensuring that AI can be relied upon to operate in ways that will not violate human rights.


LAIN: Artificial Intelligence, Platforms & Workers 25/10

#artificialintelligence

This paper aims at filling some gaps in the mainstream debate on automation, the introduction of new technologies at the workplace and the future of work. This debate has concentrated, so far, on how many jobs will be lost as a consequence of technological innovation. This paper examines instead issues related to the quality of jobs in future labour markets. It addresses the detrimental effects on workers of awarding legal capacity and rights and obligations to robots. It examines the implications of practices such as People Analytics and the use of big data and artificial intelligence to manage the workforce. It stresses an oft-neglected feature of the contract of employment, namely the fact that it vests the employer with authority and managerial prerogatives over workers. It points out that a vital function of labour law is to limit this authority and these prerogatives in order to protect the human dignity of workers.


The 2018 Survey: AI and the Future of Humans

#artificialintelligence

"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030.


Contrastive Algorithmic Fairness: Part 1 (Theory)

arXiv.org Machine Learning

Was it fair that Harry was hired but not Barry? Was it fair that Pam was fired instead of Sam? How do we ensure fairness when an intelligent algorithm makes these decisions instead of a human? How do we ensure that decisions are based on merit and not on protected attributes such as race or sex? These are the questions that must be answered now that many real-life decisions can be made through machine learning. However, research in the fairness of algorithms has focused on the counterfactual questions "what if?" or "why?", whereas in real life most subjective questions of consequence are contrastive: "why this but not that?". We introduce concepts and mathematical tools using causal inference to address contrastive fairness in algorithmic decision-making, with illustrative thought examples.
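To make the contrastive question concrete, here is a toy sketch in Python. The linear scoring model, the bias term, and the hiring threshold are illustrative assumptions for exposition, not the paper's formal machinery: it probes "why was Harry hired but not Barry?" by swapping the pair's protected attributes while holding merit fixed, then checking whether the paired outcome survives.

    # Toy contrastive-fairness probe for a pair of candidates. The model
    # below (linear score plus a bias on the protected attribute) is an
    # illustrative assumption, not the paper's formal construction.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        merit: float    # legitimate feature, e.g. a test score in [0, 1]
        protected: int  # protected attribute, e.g. a 0/1 group label

    def decide(c: Candidate, bias: float) -> bool:
        # Hiring rule: hire if merit plus any (unfair) group bias clears 0.5.
        return c.merit + bias * c.protected >= 0.5

    def contrastively_fair(hired: Candidate, rejected: Candidate, bias: float) -> bool:
        # Contrastive check for "why X but not Y": swap the pair's protected
        # attributes, hold merit fixed, and see if the paired outcome persists.
        swapped_hired = Candidate(hired.merit, rejected.protected)
        swapped_rejected = Candidate(rejected.merit, hired.protected)
        before = (decide(hired, bias), decide(rejected, bias))
        after = (decide(swapped_hired, bias), decide(swapped_rejected, bias))
        return before == after

    harry = Candidate(merit=0.60, protected=0)
    barry = Candidate(merit=0.55, protected=1)
    # Prints False: the hire/reject outcome hinged on the protected attribute.
    print(contrastively_fair(harry, barry, bias=-0.2))

Note the contrast with a purely counterfactual check on one individual: the probe above asks about the pair jointly, which is what "why Harry but not Barry?" demands.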