The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release. At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.
Carnegie Mellon's School of Computer Science will offer an undergraduate degree in artificial intelligence starting in the upcoming fall semester. The Pittsburgh-based school is the first to offer such a program in the US. Many industry experts believe there aren't enough qualified candidates in the workforce to fill all the vacancies that technology companies have for people with AI-related skills. This program could contribute quality candidates, though perhaps more importantly the prestige of Carnegie Mellon may spur other institutions to offer similar programs.
"Humans are not built to spend more than two hours looking at a screen or scrolling through Excel sheets. Humans are best at being human. Artificial Intelligence will do the rest." So says Stolze, who runs a kind of employment company staffed by three humans overseeing 59 robots (actually computers running algorithms created at the University of Amsterdam to solve problems). Stolze was addressing reporters in StartUp Village at the Amsterdam Science Park on the sidelines of the first World Summit AI in Amsterdam, October 11-12.
"But until we have a hint of a beginning of a design, with some visible path towards autonomous AI systems with non-trivial intelligence, we are arguing about the sex of angels." This time it's the big one: will AI rise up and murder us all? While this isn't a new topic – humans have speculated about AI overlords for centuries – the timing and the people involved in this debate make it interesting. We're absolutely in the AI era now, and these dangers are no longer fictional. The architects of intelligence working on AI today could, potentially, be the ones who cause (or protect us from) an actual robot apocalypse.
Over 6,000 people are attending a conference on artificial intelligence (AI) that opened in Amsterdam this morning. World Summit AI brings together corporations, startups, investors, scientists, academics and NGOs, along with government bodies such as the UN, the EU and the World Economic Forum. Participants will learn about some of the latest innovations in AI – the creation of human-like technology – that will transform business, and about the ethical issues that come with them. The event coincides with Artificial Intelligence in Europe, a report by Microsoft revealing that over half of the companies surveyed expect AI to have an impact on "business areas that are entirely unknown today". Yet only 4% of companies actively use AI, suggesting that European businesses, at least, have an enormous mountain to climb.