If you are looking for an answer to the question What is Artificial Intelligence? and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
At first glance, the creature known as Caenorhabditis elegans--commonly referred to as C. elegans, a type of roundworm--seems remarkably simple; it comprises only 959 cells, including 302 neurons. In contrast, the human body contains somewhere around 100 trillion cells and about 100 billion neurons in the brain. Yet decoding the genome for this worm and digitally reproducing it--something that could spur enormous advances in the understanding of life and how organisms work--is a challenge for the ages. "The project will take years to complete. It involves enormous time and resources," says Stephen Larson, project coordinator for the OpenWorm Foundation.
The proposed changes to the ACM Code of Ethics and Professional Conduct, as discussed by Don Gotterbarn et al. in "ACM Code of Ethics: A Guide for Positive Action"1 (Digital Edition, Jan. 2018), are generally misguided and should be rejected by the ACM membership. ACM is a computing society, not a society of activists for social justice, community organizers, lawyers, police officers, or MBAs. The proposed changes add nothing related specifically to computing and far too much related to these other fields, and also fail to address, in any significant new way, probably the greatest ethical hole in computing today--security and hacking. If the proposed revised Code is ever submitted to a vote by the membership, I will be voting against it and urge other members to do so as well.
A dynamic network is a network that changes with time. Nature, society, and the modern communications landscape abound with examples. Molecular interactions, chemical reactions, social relationships and interactions in human and animal populations, transportation networks, mobile wireless devices, and robot collectives form only a small subset of the systems whose dynamics can be naturally modeled and analyzed as some sort of dynamic network. Though many of these systems have always existed, only recently has the need been identified for a formal treatment that considers time an integral part of the network. Computer science is leading this major shift, driven mainly by the advent of low-cost wireless communication devices and the development of efficient wireless communication protocols.

The early years of computing could be characterized as the era of staticity and of the relatively predictable: centralized algorithms for (combinatorial optimization) problems on static instances, such as finding a minimum-cost traveling salesman tour in a complete weighted graph; computability questions in cellular automata; and protocols for distributed tasks in a static network. Even when changes were considered, as in fault-tolerant distributed computing, the dynamics were usually slow enough to be handled by conservative approaches, which are in principle too weak to be useful for highly dynamic systems. An exception is the area of online algorithms, where the input is not known in advance and is instead revealed to the algorithm during its course. Though the original motivation and context of online algorithms are not related to dynamic networks, the existing techniques and body of knowledge of the former may prove very useful in tackling the high unpredictability inherent in the latter.

In contrast, we are rapidly approaching, if not already there, the era of dynamicity and of the highly unpredictable. According to recent reports, the number of mobile-only Internet users has already exceeded the number of desktop-only Internet users, and more than 75% of all digital consumers now use both desktop and mobile platforms to access the Internet. The Internet of Things, which envisions a vast number of objects and devices equipped with a variety of sensors and connected to the Internet, and smart cities37 are becoming a reality (an indicative example is the U.K. government's recent £40M investment in these technologies).
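To make the role of time concrete, the following is a minimal Python sketch of a temporal graph, one common formalization of a dynamic network in which every edge is stamped with the time steps at which it is available. The class name, the edge representation, and the one-hop-per-step journey model are illustrative assumptions for this sketch, not definitions taken from any particular paper.

from collections import defaultdict

class TemporalGraph:
    """A dynamic network as a set of time-stamped edges:
    (u, v, t) means u and v can communicate at time step t."""

    def __init__(self):
        self.edges = defaultdict(set)  # time step t -> set of ordered pairs

    def add_edge(self, u, v, t):
        # Store both directions: the network is treated as undirected.
        self.edges[t].add((u, v))
        self.edges[t].add((v, u))

    def reachable(self, source, horizon):
        """Nodes reachable from `source` by a time-respecting journey,
        assuming information travels at most one hop per time step."""
        reached = {source}
        for t in range(horizon):
            reached |= {v for (u, v) in self.edges[t] if u in reached}
        return reached

For example, with edge (a, b) present at time 0 and edge (b, c) appearing only at time 2, reachable("a", horizon=3) returns {a, b, c}, while horizon=2 yields only {a, b}. Reversing the order of appearance makes c unreachable from a at any horizon, an asymmetry that has no counterpart in static graphs and is precisely what a formal treatment of time must capture.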
Organizations are adaptive systems that continually attempt to push the limits of their own effectiveness to approach perfection. This is true of the "mom and pop" store that is threatened by the growth of shopping malls. It is true of the gigantic corporation that is threatened by public regulation and private competition. It is particularly true of organizations that are confronted with complex tasks, the vagaries of uncertainty, and the high and visible costs of irreversible error. The cause of organizational ineffectiveness or, indeed, failure is often perceived to be human frailty (Perrow 1984).
The Sixth International Conference on Enterprise Information Systems (ICEIS) was held in Porto, Portugal; previous venues were in Spain, France, and the United Kingdom. Since its inception in 1999, ICEIS has grown steadily, and is now one of the largest international conferences in the area of information systems. In 2004, more than 600 papers were submitted to the conference and its ten satellite workshops. One of the interesting features of this conference is the high number of invited speakers. In 2004, eighteen keynote speakers were featured at ICEIS and its workshops.
The Find-the-Remote event was considered the most challenging of the events in the 1997 AAAI Mobile Robot Competition and Exhibition. It required a broad range of both hardware and software capabilities: it involved fetching a known set of objects from unknown, but constrained, locations in a known environment. In real life, such functions might be useful for in-home care of the elderly or the physically disabled. The event was extremely difficult because it forced teams to implement both manipulation (the grasping and moving of objects) and visual object recognition; furthermore, it explicitly required teams to implement these capabilities for a wide range of objects. It therefore eliminated a broad range of special-purpose sensing and manipulation strategies that would be specific to one or another class of objects. It also required that objects be lifted from a variety of surfaces (real furniture) at a variety of heights. In this article, I discuss the rules and rationale for the event as well as the results.
The Third Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET ICE) was held from 17-19 April 1994 in Morgantown, West Virginia, hosted by the Concurrent Engineering Research Center at West Virginia University. This report summarizes this year's workshop and outlines the philosophy behind this annual event. The WET ICE workshop, now in its third year, has become a fixture of the collaborative computing scene. A more specialized event than the Computer-Supported Cooperative Work gathering, which takes in everyone from anthropologists to futurists, this workshop focuses on hardware and software that enable agents of all kinds to interact in a variety of ways to accomplish some task--quickly, correctly, and easily.
MIT Artificial Intelligence Laboratory

The MIT AI Laboratory has a long tradition of research in most aspects of Artificial Intelligence. Currently, the major foci include computer vision, manipulation, learning, English-language understanding, VLSI design, expert engineering problem solving, commonsense reasoning, computer architecture, distributed problem solving, models of human memory, programmer apprentices, and human education.

Understanding Visual Images

Professor Berthold K. P. Horn and his students have studied intensively the image irradiance equation and its applications. The reflectance and albedo map representations have been introduced to make surface orientation, illumination geometry, and surface reflectivity explicit. Recent work has centered on modeling the effects of the atmosphere, which distort intensity values and make classification of terrain and related computations using the albedo map inaccurate.
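For reference, the central relation in this line of work is the image irradiance equation, which in standard shape-from-shading notation (the notation here is the conventional one, not quoted from the Laboratory's reports) reads

    E(x, y) = R(p, q),   where p = ∂z/∂x and q = ∂z/∂y.

Here E is the measured image brightness at point (x, y), z(x, y) is the height of the viewed surface, and the reflectance map R gives the expected brightness of a surface patch with gradient (p, q) under the assumed illumination geometry and surface reflectivity; spatially varying albedo enters as a multiplicative factor on R.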