Chess: Instructional Materials


Fifty Years of P vs. NP and the Possibility of the Impossible

Communications of the ACM

Lance Fortnow (lfortnow@iit.edu) is a professor and dean of the College of Computing at Illinois Institute of Technology, Chicago, IL, USA.


A brief history of AI: how to prevent another winter (a critical review)

arXiv.org Artificial Intelligence

The field of artificial intelligence (AI), regarded as one of the most enigmatic areas of science, has witnessed exponential growth in the past decade including a remarkably wide array of applications, having already impacted our everyday lives. Advances in computing power and the design of sophisticated AI algorithms have enabled computers to outperform humans in a variety of tasks, especially in the areas of computer vision and speech recognition. Yet, AI's path has never been smooth, having essentially fallen apart twice in its lifetime ('winters' of AI), both after periods of popular success ('summers' of AI). We provide a brief rundown of AI's evolution over the course of decades, highlighting its crucial moments and major turning points from inception to the present. In doing so, we attempt to learn, anticipate the future, and discuss what steps may be taken to prevent another 'winter'.


Learn Machine Learning: With 45 Hrs Hands-on (10 Live Projects)

#artificialintelligence

Transformational advancements in technology are making it possible for data scientists to develop machines that think for themselves. Based on complex algorithms that glean information from data, today's computers can use neural networks to mimic human brains and make informed decisions based on the most likely scenarios. The possibilities that machine learning can unlock are immense, and with data exploding across all fields, it appears that in the near future machine learning will be indispensable simply because there is nothing quite like it. With so many opportunities on the horizon, a career as a Machine Learning Engineer can be both satisfying and rewarding. A good workshop, such as the one offered by KnowledgeHut, can lead you on the right path towards becoming a machine learning expert.


3 Steps to Implement Artificial Intelligence.

#artificialintelligence

Artificial Intelligence (AI) could increase global GDP by 14 percent, or an astounding $15.7 trillion by 2030. This is due, in large part, to productivity gains from AI automation and workforce augmentation. AI will change the world, but it takes time to implement and train it. It's important for your business to understand how, when, and where to implement Artificial Intelligence, and it's often best to start small. The world at large is still learning how best AI can be used to benefit society.


Are Robots and Artificial Intelligence (AI) Threats to Human Employment?

#artificialintelligence

Artificial Intelligence (AI) is simply an attempt to create machines that mimic the human mind. Although AI has become the buzzword in tech circles in recent times, it's not a new thing. Remember when Deep Blue beat the world's best chess player in 1997? The main idea behind AI is to perform repetitive, monotonous and possibly dangerous tasks with machines. Much of humanity will then be free to focus on higher intellectual pursuits that promise a better life.


Artificial Intelligence in Education - Education Matters

#artificialintelligence

What is it, where is it now, where is it going? Artificial Intelligence holds significant promise to revolutionise our educational systems, but are our educational systems ready for a revolution? In this article, published in Ireland's Yearbook of Education 2017-2018, Brett Becker explores current advances of AI in education and discusses how AI is likely to affect our education systems in the years ahead. Very few subjects in science and technology today are causing as much excitement, and as much misconception, as Artificial Intelligence (AI). It seems that everyone from Obama to Putin and Bezos to Zuckerberg is commenting on both the possibilities and the problems that AI could bring to humanity.


Machine Learning can transform education

#artificialintelligence

Futurist Arthur C. Clarke wrote, "Any sufficiently advanced technology is indistinguishable from magic." The magic of software (giving data and rules to get answers) is often confused with the magic of machine learning (giving data and answers to get rules), but it is machine learning, not conventional software, that is transforming the world of computer chess. Until recently, computer chess programs codified the play of the best human players and inevitably pivoted around the strategy of "material", wherein the number and value of pieces mattered most. Reports suggest AlphaZero taught itself chess from scratch in just four hours by playing against itself, rejecting human rules developed over centuries. Because it started with only the basic rules, researchers suggest that its lack of knowledge of human chess history may have enabled AlphaZero to see the game in a fresh way.
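The rules-versus-learning distinction drawn above can be made concrete with a small, purely illustrative Python sketch; the piece values, toy training data and fitting routine below are hypothetical stand-ins, not anything from the article or from AlphaZero.

# Hypothetical sketch: "rules + data -> answers" versus "data + answers -> rules".
# The piece values and toy examples are illustrative only.

HAND_CODED_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(piece_diff):
    """Rule-based evaluation: apply fixed, human-authored piece values."""
    return sum(HAND_CODED_VALUES[p] * n for p, n in piece_diff.items())

def fit_piece_values(examples, lr=0.01, epochs=2000):
    """Learning: recover piece values from (material difference, outcome) pairs."""
    pieces = ["P", "N", "B", "R", "Q"]
    weights = {p: 0.0 for p in pieces}
    for _ in range(epochs):
        for diff, outcome in examples:
            error = outcome - sum(weights[p] * diff.get(p, 0) for p in pieces)
            for p in pieces:
                weights[p] += lr * error * diff.get(p, 0)
    return weights

# Toy data: material difference (White minus Black) and an outcome score in pawns.
examples = [
    ({"P": 1}, 1.0),
    ({"N": 1}, 3.1),
    ({"R": 1, "P": -1}, 4.2),
    ({"Q": 1}, 8.9),
]

print(material_score({"P": 2, "N": 1}))  # answer from hand-written rules: 5
print(fit_piece_values(examples))        # rules (piece values) inferred from data

The first call applies rules a human wrote down; the second recovers comparable values from outcomes alone, which is the shift the article attributes, at vastly larger scale, to systems such as AlphaZero.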



Notes on a New Philosophy of Empirical Science

arXiv.org Machine Learning

This book presents a methodology and philosophy of empirical science based on large scale lossless data compression. In this view a theory is scientific if it can be used to build a data compression program, and it is valuable if it can compress a standard benchmark database to a small size, taking into account the length of the compressor itself. This methodology therefore includes an Occam principle as well as a solution to the problem of demarcation. Because of the fundamental difficulty of lossless compression, this type of research must be empirical in nature: compression can only be achieved by discovering and characterizing empirical regularities in the data. Because of this, the philosophy provides a way to reformulate fields such as computer vision and computational linguistics as empirical sciences: the former by attempting to compress databases of natural images, the latter by attempting to compress large text databases. The book argues that the rigor and objectivity of the compression principle should set the stage for systematic progress in these fields. The argument is especially strong in the context of computer vision, which is plagued by chronic problems of evaluation. The book also considers the field of machine learning. Here the traditional approach requires that the models proposed to solve learning problems be extremely simple, in order to avoid overfitting. However, the world may contain intrinsically complex phenomena, which would require complex models to understand. The compression philosophy can justify complex models because of the large quantity of data being modeled (if the target database is 100 Gb, it is easy to justify a 10 Mb model). The complex models and abstractions learned on the basis of the raw data (images, language, etc) can then be reused to solve any specific learning problem, such as face recognition or machine translation.
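As a rough illustration of the scoring rule the abstract describes, the sketch below uses Python's standard-library compressors as stand-ins for "theories" and charges each one for its own length; the compressor sizes and the 100 Gb / 10 Mb arithmetic are assumptions made for illustration, not figures from the book.

import bz2
import zlib

def total_codelength(data, compress, compressor_size):
    """Score a 'theory': compressed output size plus the size of the compressor itself."""
    return len(compress(data)) + compressor_size

# Toy 'benchmark database': regular data whose structure a good theory should exploit.
database = b"the quick brown fox jumps over the lazy dog " * 5000

# Two stand-in theories; the compressor_size figures are assumed, not measured.
score_zlib = total_codelength(database, zlib.compress, compressor_size=80_000)
score_bz2 = total_codelength(database, bz2.compress, compressor_size=120_000)
print("zlib theory codelength:", score_zlib, "bytes")
print("bz2 theory codelength:", score_bz2, "bytes")

# The Occam trade-off cited in the abstract, in numbers: a 10 Mb model adds
# only about 0.01% overhead when the target database is 100 Gb.
model_size = 10 * 1024**2
target_size = 100 * 1024**3
print(f"model overhead: {model_size / target_size:.4%} of the database")

On this scoring, a more elaborate compressor is worth carrying only if the bytes it saves on the benchmark exceed its own length, which is the combined Occam and demarcation principle the abstract argues for.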