As a combat veteran and, more recently, an industry technologist and university professor, I have observed with concern the increasing automation--and dehumanization--of warfare. Sarah Underwood's discussion of autonomous weapons in her news story "Potential and Peril" (June 2017) highlighted this trend and reminded me that the current effort to update the ACM Code of Ethics says nothing about the responsibilities of ACM members in defense industries building the software and hardware in weapons systems. Underwood said understanding the limitations, dangers, and potential of autonomous and other warfare technologies must be a priority for those designing such systems, in order to minimize the "collateral damage" of civilian casualties and destruction of property and infrastructure. Defense technologists must be aware of and follow appropriate ethical guidelines for creating and managing automated weapons systems of any kind.
This has changed in the last two decades thanks to progress in Satisfiability (SAT) solving, which adds brute reason to brute force, turning the combination into a powerful way to handle many problems easily and automatically. This marriage of enormous computational power with "magical brute force" can now solve very hard combinatorial problems, as well as prove the safety of systems such as railways. To solve the Boolean Pythagorean Triples Problem, it suffices to show the existence of a subset of the natural numbers such that any partition of that subset into two parts leaves one part containing a Pythagorean triple. This performance boost resulted in the SAT revolution:3 encode problems arising from many interesting applications as SAT formulas, solve these formulas, and decode the solutions to obtain answers for the original problems.
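The encode-solve-decode loop can be illustrated on a tiny instance of the Pythagorean triples question. The sketch below encodes "color 1..n with two colors so no Pythagorean triple is monochromatic" as constraints and finds a satisfying assignment; exhaustive search stands in for a real SAT solver, and the choice of n = 13 is purely illustrative:

```python
from itertools import product
from math import isqrt

def pythagorean_triples(n):
    """All (a, b, c) with a^2 + b^2 = c^2 and a < b < c <= n."""
    triples = []
    for a in range(1, n + 1):
        for b in range(a + 1, n + 1):
            c = isqrt(a * a + b * b)
            if c <= n and a * a + b * b == c * c:
                triples.append((a, b, c))
    return triples

def solve(n):
    """Search for a 2-coloring of 1..n with no monochromatic
    Pythagorean triple (brute force standing in for a SAT solver)."""
    triples = pythagorean_triples(n)
    for bits in product((False, True), repeat=n):
        color = {i + 1: bits[i] for i in range(n)}
        # Each triple must use both colors (set size 2, not 1).
        if all(len({color[a], color[b], color[c]}) == 2
               for a, b, c in triples):
            return color
    return None  # unsatisfiable: every 2-coloring has a mono triple

coloring = solve(13)
```

For small n such a coloring exists; the celebrated SAT result is that at n = 7825 the search space is exhausted and no valid coloring remains, which is where industrial-strength solvers replace this naive loop.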
I recently published and presented a paper at CHI 2017 (the annual ACM Conference on Human Factors in Computing Systems, https://chi2017.acm.org). The paper won an Honorable Mention award at the conference. Here is a summary of the project. There is now tremendous momentum behind initiatives to teach computer programming to a broad audience, yet many of these efforts (for example, Code.org) focus on young learners. In contrast, I wanted to study the other end of the age spectrum: how older adults aged 60 and over are now learning to code. This population is already significant, and it is growing quickly as we all (hopefully!) continue to live longer in the coming decades. The United Nations estimates that by 2030, 25% of North Americans and Europeans will be over 60 years old, and 16% of the worldwide population will be over 60. There has been extensive research on how older adults consume technology, and some studies of how they curate and produce digital content such as blogs and personal photo ...
Particular attention has been devoted to the purported connection between a "Universal Turing Machine" (UTM), as introduced in Turing's article of 1936,27 and the design and implementation in the mid-1940s of the first stored-program computers, with particular emphasis on the respective proposals of John von Neumann for the EDVAC30 and of Turing himself for the ACE.26 In some recent accounts, von Neumann's and Turing's proposals (and the machines built on them) are unambiguously described as direct implementations of a UTM, as defined in 1936.6 This and other similar testimonies have been cited repeatedly as solid historical evidence but are invariably vague and unsupported.a The same holds for the anecdotes about the purported early influence of Turing's paper on von Neumann; see, for example, Hodges.21 This article is intended as a further contribution to the ongoing historical debates about the actual role of Turing in the history of the modern electronic computer and, in particular, the putative connection between the UTM and the stored-program computer. Of particular interest is the actual, direct influence of Turing's paper on von Neumann at the time the latter wrote his famous "First Draft." Moreover, I claim the very idea of a modern computer, in the sense of either von Neumann's "First Draft" or Turing's "Proposed Electronic Calculator," was in 1936 not only beyond Turing's capabilities but also beyond his concerns.
Indeed, rather than simply being used to replace contact center workers, artificial intelligence (AI)-based technologies, including machine learning, natural language processing, and even sentiment analysis, are being strategically deployed to improve the overall customer experience by providing functionality that would be too time-consuming or expensive to deliver manually. The company uses AI "bots" to handle routine tasks: natural language processing interprets what customers are asking, the bot searches the business knowledge base for an answer, and it then turns this raw data into an intelligent, human-friendly response. Burgess highlights the power of machine learning and natural language processing to quickly process front-end requests, which often account for a significant share of call volume and labor costs. The sheer number of possible phrases, words, and interactions makes it challenging to automate the customer service experience, but with machine learning technology that can review thousands or millions of interactions, organizations can tailor responses based on what the system learns.
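As a rough illustration of that pipeline (interpret the question, search the knowledge base, respond or escalate), here is a deliberately simplified sketch; the intents, answers, and keyword matching are invented stand-ins for real NLP components:

```python
# Hypothetical knowledge base: intent keyword -> canned answer.
KNOWLEDGE_BASE = {
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "returns": "Items may be returned within 30 days with a receipt.",
}

def interpret(question):
    """Crude stand-in for NLP intent detection: keyword matching.
    A real system would use a trained language model instead."""
    for intent in KNOWLEDGE_BASE:
        if intent in question.lower():
            return intent
    return None

def answer(question):
    """Interpret the question, look up an answer, or hand off."""
    intent = interpret(question)
    if intent is None:
        return "Let me connect you with a human agent."
    return KNOWLEDGE_BASE[intent]
```

The escalation branch matters in practice: automating the routine "front-end" requests while routing unrecognized ones to people is exactly the division of labor described above.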
"Computer graphics are pictures and movies created using computers." Geometric modeling, a subfield of computer graphics, was motivated by industrial needs at the advent of computer-aided manufacturing, aiming at increased productivity via a completely digital workflow from design to production. The possibilities for digital shape design are almost unlimited and highly effective for the creation of "pictures and movies." Here, however, I am not convinced: purely geometry-driven shape modeling creates bottlenecks when moving toward engineering and fabrication.
The first is our computational approach to analyzing sociocultural identity phenomena in virtual identity systems; these techniques support engineers developing systems that avoid or combat negative phenomena (such as discrimination and prejudice). The current expression of the Avatar Dream in many contemporary societies includes using virtual identities to communicate, share data, and interact in computer-based (virtual) environments. We thus enrich Gee's model with an approach from cognitive science called "conceptual blending theory"9 in which blending is a proposed cognitive mechanism by which humans integrate concepts.d We then use Harrell's notion of a "blended identity"17 in which aspects of a player's physical identity (such as preferences, control, appearance, and understanding of social categories) are selectively projected9 with aspects of the virtual identity onto a blended identity, integrating and elaborating aspects of each (see Figure 1). We later give examples of our approaches for analyzing blended identities computationally to reveal how physical-world values can be both embedded in virtual identity systems and enacted by virtual identity users.
This is why most software developers use UML only when forced to.1 For example, the UML diagrams in Figures 1 and 2 portray the embedded software in a fax machine. While these diagrams are attractive, they do not even tell you which objects control which others. Which object is the topmost controller over this fax machine? Which object(s) control the Modem object?
Applying a current across the antiferromagnetic layer affects its spins, which in turn applies torque to the spins in the magnetic layer, switching the magnetization from up to down. Another approach to building analog circuits with synapses uses memristors, a recently discovered fourth fundamental circuit element, in which flowing current alters resistance, providing the device with memory. By contrast, the TrueNorth chip, IBM's digital implementation of neural computing, simulates 1 million spiking neurons and 256 million synapses and consumes just 70 mW, though Strukov argues that his flash chip is more efficient, with energy use and latency three orders of magnitude better than TrueNorth, when the IBM chip is configured to perform the same task. Going digital was important for TrueNorth, says Dharmendra Modha, chief scientist for IBM's Brain Inspired Research group, which developed the chip.
Propelled by massively parallel computer systems, huge datasets, and better algorithms, AI has brought a number of important applications, such as image- and speech-recognition and autonomous vehicle navigation, to near-human levels of performance. In reinforcement learning, systems are not trained in advance with huge amounts of labeled data--which is called "supervised learning"--but are simply rewarded when they get the right answer. According to Yann LeCun, director of AI research at Facebook, supervised learning is the dominant AI method today, while reinforcement learning occupies a niche mostly in games. Another promising new approach to unsupervised predictive learning lies in something called generative adversarial networks (GAN), in which two neural nets train themselves by competing in a zero-sum game to produce photorealistic images.
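The reward-driven loop of reinforcement learning can be sketched with a toy multi-armed bandit: the agent is never shown labeled answers, only a reward after each action, and it must learn which action pays off. The payout probabilities and parameters below are invented purely for illustration:

```python
import random

def epsilon_greedy_bandit(true_payouts, steps=5000, eps=0.1, seed=0):
    """Toy reinforcement learning loop. Unlike supervised learning,
    the agent receives no labeled examples; it only observes a reward
    after each pull and updates its value estimates accordingly."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_payouts)  # learned value per arm
    counts = [0] * len(true_payouts)
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(true_payouts))  # explore
        else:
            arm = estimates.index(max(estimates))   # exploit
        # Reward signal: 1 with the arm's (hidden) payout probability.
        reward = 1.0 if rng.random() < true_payouts[arm] else 0.0
        counts[arm] += 1
        # Incremental average: estimate drifts toward the true payout.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

After enough pulls the agent's highest estimate settles on the best arm, purely from reward feedback; this is the same "rewarded when they get the right answer" dynamic, in miniature, that scales up to game-playing systems.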