If you are looking for an answer to the question What is Artificial Intelligence? and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Machine Learning, a recurrent and prominent topic in Science and Technology, will also radically change the way research is carried out in the Social Sciences and the Humanities in the near future. Close cooperation between SSH scholars and computer scientists could have a huge impact on both SSH and STEM (Science, Technology, Engineering and Mathematics) research topics. On the one hand, social scientists and humanities scholars may not be able to design and implement the machine learning algorithms they need for their research themselves. The role of a linguist, a historian or a social scientist should thus be to help computer scientists outperform current machine learning models by offering theoretical approaches that both could adapt together to improve their accuracy. This conference, organised by the Social Sciences and Humanities Working Group of the Coimbra Group, will explore the practical possibilities Machine Learning offers to selected research fields within SSH, particularly linguistics, literature, musicology, and sociology.
"There is mounting evidence that AI can exacerbate inequality, perpetuate discrimination, and inflict harm," write Mona Sloane, a research fellow at New York University's Institute for Public Knowledge, and Emanuel Moss, a doctoral candidate at the City University of New York. "To achieve socially just technology, we need to include the broadest possible notion of social science, one that includes disciplines that have developed methods for grappling with the vastness of the social world and that helps us understand how and why AI harms emerge as part of a large, complex, and emergent techno-social system." The authors outline ways in which social science approaches, and their many qualitative methods, can broadly enhance the value of AI while also avoiding documented pitfalls. Studies have shown that search engines may discriminate against women of color, while many analysts have raised questions about how self-driving cars will make socially acceptable decisions in crash situations (e.g., avoiding humans rather than fire hydrants). Sloane, also an adjunct faculty member at NYU's Tandon School of Engineering, and Moss acknowledge that AI engineers are currently seeking to instill "value-alignment" -- the idea that machines should act in accordance with human values -- in their creations, but add that "it is exceptionally difficult to define and encode something as fluid and contextual as 'human values' into a machine."
We know that the money launderers are winning. We know that law enforcement is losing. By now, you've probably heard the UN's statistics on this, that we catch less than 1% of these crimes, which add up to an estimated $1.6 trillion a year. At the heart of this problem is asymmetrical technology. The bad guys have sophisticated, networked, rapidly improving technologies, and the good guys are stuck in the past.
In 1969, artificial-intelligence pioneer and Nobel laureate Herbert Simon proposed a new science, one that approached the study of artificial objects just as one would study natural objects. "Natural science is knowledge about natural objects and phenomena," Simon wrote. "We ask whether there cannot also be 'artificial' science -- knowledge about artificial objects and phenomena." Now, 50 years later, a team of researchers from Harvard, MIT, Stanford, the University of California, San Diego, Google, Facebook, Microsoft, and other institutions is renewing that call. In a recent paper published in the journal Nature, the researchers proposed a new, interdisciplinary field -- machine behavior -- that would study artificial intelligence through the lens of biology, economics, psychology, and other behavioral and social sciences.
Thousands of academics are gathering in Vancouver for the annual Congress of the Humanities and Social Sciences from June 1-7. They will present papers on everything from child marriage in Canada to why dodgeball is problematic.

It's been the edict of parents, teachers and etiquette experts since time immemorial: Not every thought that pops into your head needs to come out of your mouth. Discretion helps hold our society together. We don't tell each other how we really feel.
Since February, five working groups have been generating ideas about the form and content of the new MIT Stephen A. Schwarzman College of Computing. That includes the Working Group on Social Implications and Responsibilities of Computing, co-chaired by Melissa Nobles, the Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences and a professor of political science, and Julie Shah, associate professor in the Department of Aeronautics and Astronautics at MIT and head of the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory. MIT News talked to Shah about the group's progress and goals to this point. Q: What are the main objectives of this working group? A: The goals of the working group are to think about how we can weave social and ethical considerations into the fabric of what the college is doing.
You might think that anthropology and AI is an odd pairing. But in a 2017 article in WIRED, journalist James Temperton wrote, "For DeepMind to realise its ambition of cracking general intelligence, it needs an interdisciplinary approach to AI". As DeepMind co-founder Mustafa Suleyman explained in the same article, "We need to have in-house the very best anthropologists, sociologists…specialists on bias and discrimination in machine learning systems, working with both our researchers and applied software development teams so that they can give them feedback and guidance and introduce them to new modes of critical thinking". So in the spirit of cross-disciplinary collaboration, around 300 technologists working in AI (machine learning, data science, robotics) and anthropologists and sociologists from around the world will come together at the Watershed on Bristol's historic harbourside to discuss human-centred AI. One of the keynotes, Dr Julien Cornebise, is from Element AI.
With Google's dominance in the online search engine market we entered the Age of Free. Indeed, services offered online are nowadays expected to be offered at no cost. Which, of course, does not mean that there is no cost to it, only that the consumer doesn't pay it. Early attempts financed the services with ads, but we soon saw a move toward making the consumer the product. Today, free and unfree services alike compete for "users" and then make money off the data they collect.
As we finalize this article November 11, 2018, and consider current and future directions for computing in Europe and across the globe, we remember the end of World War I exactly 100 years ago: the end to a war of atrocities at a scale previously unseen and the culmination of a series of events that European nations had allowed themselves to 'sleepwalk' into, with little thought for the consequences [10]. When this article appears in spring 2019, we will remember the first proposal for a new global information sharing system written by Tim Berners-Lee 30 years ago at CERN [4], the European organization for nuclear research. This proposal marked the beginning of the World Wide Web, which now pervades every facet of modern life for over four billion users. However, the Web, 30 years on, is not the land of free information and discussion, or an egalitarian space that supports the interests of all, as originally imagined [4]. Rather, egotisms, nationalisms, and fundamentalisms freewheel on a landscape that is increasingly dominated by powerful corporate actors, often silencing other voices, including democratically elected representatives. For seven decades Europe has been a political and social project, seeking to integrate what has been divisive historically and to make citizens more equal. While the proponents of the Web were driven by similar values, there is now increasing concern in Europe -- and beyond -- that the Web has become a vehicle of disintegration, polarization, and exploitation.
In 2011, a friend of mine in college asked me if I'd read The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom, by Jonathan Haidt. Haidt's aim was to probe and distill -- and "savor" -- the moral precepts of antiquity in the light of modern science. The 2006 book was an answer to an overabundance of too-little-appreciated advice. "We might have already encountered the Greatest Idea, the insight that would have transformed us had we savored it, taken it to heart, and worked it into our lives," Haidt wrote. My friend was happy to encounter it: Haidt helped him through a difficult breakup. I hadn't heard of the book, but I had heard of its author. A paper of Haidt's, "The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment," had been assigned in my moral psychology course, and I was in the middle of writing an essay that argued against its conclusion. Haidt wrote that reason, compared to emotion, typically matters little to what we believe is ...