Technology's tentacles have encroached upon every aspect of our lives. From the comfort of your home, you can tune in to live discussions and gain new understanding of the technologies reshaping our world view. NDTV Tech Conclave 2018 saw a congregation of leading minds in the technology, mobile, and digital industries. The conclave aimed to showcase and create opportunities by bringing together many of the top entrepreneurs, investors, enterprise leaders, academics, and policymakers from around the world. The moderator of this session outlined two diametrically opposed views of AI and threw the discussion open to the panelists.
A panel of industry experts gathered at RSA 2018 in San Francisco to explore the role that machine learning and artificial intelligence are playing in the current cyber landscape. After opening the discussion by asking each panelist to give their own definition of machine learning, Ira asked the speakers to identify the types of applications most appropriate for the use of machine learning and AI. Hillard: The areas where the technology is most mature are speech and image processing, along with fraud detection. "The technology should be an enabler to solving a problem, but sometimes it gets lost in what's being accomplished." Friedrichs: Most people have woken up to the fact that machine learning and AI are not the panacea that marketing tells us they are, but they can add to the feature set of a product.
The key to getting better at deep learning (or most fields in life) is practice. Each of these problems has its own unique nuances and approach. But where can you get this data? Many of the research papers you see these days use proprietary datasets that are usually not released to the general public. This becomes a problem if you want to learn and apply your newly acquired skills.
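The point above can be made concrete with a publicly available dataset. The following is a minimal sketch of the practice loop, using scikit-learn's bundled digits dataset and a small neural network; the specific dataset, model, and hyperparameters are illustrative assumptions, not taken from the article.

```python
# Practice on a freely available dataset rather than a proprietary one:
# scikit-learn ships the 8x8 handwritten-digits set, so no download is needed.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 1797 grayscale digit images, 64 features each
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A small multilayer perceptron; each problem's nuances will call for
# different architectures and training budgets.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

Swapping in a different public dataset (MNIST, CIFAR-10, and the like) follows the same pattern, which is what makes open data valuable for deliberate practice.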
Neural networks (NNs) and deep learning (DL) currently provide the best solutions to many problems in image recognition, speech recognition, natural language processing, control, and precision health. NNs and DL bring artificial intelligence (AI) much closer to human modes of thinking. However, many open problems remain in DL for NNs, e.g., convergence, learning efficiency, optimality, multi-dimensional learning, and online adaptation. Addressing these requires new algorithms and analysis methods. Practical applications both require and stimulate this development.
There's no question that artificial intelligence and machine learning technologies are enabling important discoveries in healthcare, but there can be a bit of a disconnect among the various stakeholders using them. A panel discussion at the upcoming CNS Summit in Boca Raton, Fla., presents a rare opportunity to bring the parties together and foster collaboration.
As evidenced by the articles in this special issue, transfer learning has come a long way in the past five or so years, partially because of DARPA's Transfer Learning program, which sponsored much of the work reported in this issue. There is a Transfer Learning Toolkit for Matlab available on the web. Transfer learning has developed techniques for classification, regression, and clustering (as summarized in Pan and Yang's 2009 survey) and for complex interactive tasks that are often best addressed by reinforcement learning techniques. However, there is a more practical and more feasible goal for transfer learning against which progress is being made. An engineering-oriented goal of artificial intelligence that could be enabled by transfer learning is the ability to construct a large number of diverse applications not from scratch, but by taking advantage of knowledge already acquired and formally represented for other purposes.
Its goal is to capture, in a general form, the internal structure of the objects, relations, strategies, and processes used to solve tasks drawn from a source domain, and exploit that knowledge to improve performance in a target domain. A Note from the AI Magazine Editor in Chief: Part Two of the Structured Knowledge Transfer special issue will be published in the summer 2011 issue (volume 32 number 2) of AI Magazine. Articles in this issue will include: "Knowledge Transfer between Automated Planners," by Susana Fernández, Ricardo Aler, and Daniel Borrajo; "Transfer Learning by Reusing Structured Knowledge," by Qiang Yang, Vincent W. Zheng, Bin Li, and Hankz Hankui Zhuo; "An Application of Transfer to American Football: From Observation of Raw Video to Control in a Simulated Environment," by David J. Stracuzzi, Alan Fern, Kamal Ali, Robin Hess, Jervis Pinto, Nan Li, Tolga Könik, and Dan Shapiro; and "Toward a Computational Model of Transfer," by Daniel Oblinger. While the field of psychology has studied transfer learning in people for many years, AI has only recently taken up the challenge. The topic received initial attention with work on inductive transfer in the 1990s, while the number of workshops and conferences has noticeably increased in the last five years. This special issue represents the state of the art in the subarea of transfer learning that focuses on the acquisition and reuse of structured knowledge.
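The idea of capturing structure from a source domain and exploiting it in a target domain can be sketched in a toy example. The following is a deliberately simplified illustration of inductive transfer, with all dataset splits and model choices being assumptions for demonstration: a low-dimensional representation is learned on one set of digit classes (the source) and reused to learn a different set of classes (the target) from only a few labels.

```python
# Toy inductive transfer: reuse structure learned on a source task (digits 0-4)
# to help a target task (digits 5-9) that has few labeled examples.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
src_mask = y < 5                     # source domain: digits 0-4
tgt_mask = ~src_mask                 # target domain: digits 5-9

# "Knowledge" extracted from the source domain: a shared low-dimensional
# structure of digit images, here captured by PCA as a stand-in for richer
# structured representations.
rep = PCA(n_components=16, random_state=0).fit(X[src_mask])

# Target task: pretend only the first 50 target examples are labeled.
X_t, y_t = X[tgt_mask], y[tgt_mask]
few = np.arange(50)
clf = LogisticRegression(max_iter=1000)
clf.fit(rep.transform(X_t[few]), y_t[few])

acc = clf.score(rep.transform(X_t[50:]), y_t[50:])
print(f"target accuracy with transferred representation: {acc:.2f}")
```

The same pattern, with far richer structured knowledge than a linear projection, underlies the classification, regression, and reinforcement-learning transfer techniques surveyed in this issue.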
We selected them for significance, novelty, and (in several cases) common task focus. Every year, AI Magazine devotes one fourth of its annual production to a special issue based on the Innovative Applications of Artificial Intelligence (IAAI) conference. Because IAAI is the premier venue for documenting the transition of AI technology into application, these special issues provide a snapshot of the state of the art in AI with the practical syllogism in mind; they present work that has value because it delivers value in use. As a result, it is good to read these articles from a practical perspective. Papers that document deployed systems clarify the motivating application constraints, the match (and mismatch) between problems and technology, the innovations required to surmount barriers to deployment, and the impact of technology on application through practical measures of cost and benefit.
The Workshop on Connectionist-Symbolic Integration: From Unified to Hybrid Approaches was held on 19 and 20 August 1995 in Montreal, Canada, in conjunction with the Fourteenth International Joint Conference on Artificial Intelligence. The focus of the workshop was on learning and architectures that feature hybrid representations and support hybrid learning. The general consensus was that hybrid connectionist-symbolic models constitute a promising avenue to the development of more robust, more powerful, and more versatile architectures for both cognitive modeling and intelligent systems. The workshop was cochaired by Frederic Alexandre and me. It featured 23 presentations, including 2 invited talks and 2 panel discussions.