If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
But before delving into the 'behind-the-scenes' of the US banking industry's encounter with the ATM, let's turn back time for a moment -- to March 27th, 1998, and the New Tech 1998 conference in Denver, Colorado. There, Neil Postman, a prominent American cultural critic and professor at New York University, gave a keynote lecture. Professor Postman was a long-time scholar of how new technologies relate to human society, and his 1985 book 'Amusing Ourselves to Death', which rose to prominence, argues that television technology was destroying public discourse and turning everything into entertainment. It resonates with how we feel about the impact of today's media and how our lives, exposed to it, are deteriorating. Ever since that book, Professor Postman strongly criticized the tendency to respond to all social problems with technical solutions.
At the start of the first Terminator movie, Sarah Connor, unknowingly the future mother of Earth's resistance movement, is working as a waitress when Arnold Schwarzenegger's Cyberdyne Systems Model 101 Terminator is sent back through time to kill her. But what if, instead of attempting to murder her, Skynet's android assassin approached the owner of Big Jeff's family restaurant, where Sarah worked, and offered to do her shifts for lower wages, while working faster and making fewer mistakes? The newly jobless Sarah, unable to support herself, drops out of college and decides that maybe starting a family in this economic climate just isn't smart. This, in a somewhat cyberbolic nutshell, is the biggest immediate threat many fear when it comes to automation: not a robopocalypse brought on by superintelligence, but rather one that ushers in an age of technological unemployment. Some very smart people have been sounding the alarm for years. A 2013 study carried out by the Oxford Martin School suggested that some 47% of jobs in the U.S. could be automated within two decades -- a window of which only about 12 years now remain.
Several American banks have started using surveillance software and computer vision to watch people using their services. Computer vision is a branch of artificial intelligence that enables computers to interpret and understand visual information. A Reuters news agency investigation found that the software is used to learn about customers, watch employees and spot people sleeping near automated teller machines (ATMs). Banks like the City National Bank of Florida and JPMorgan Chase & Co have tested facial recognition and artificial intelligence (AI) technologies. The growth of AI tools within the banking industry could signal the spread of the technology into other industries.
The World Economic Forum says technologies like artificial intelligence (AI) will displace 75 million jobs by 2022 but will also create 133 million new roles. To prepare workers for these new jobs, organizations will have to provide significant resources for upskilling their workforces. And employees will need to take personal responsibility for their career development in a context of rapid technological change. How can HR professionals prepare employees and organizations for a present and future where AI is increasingly working with humans to drive business outcomes? "HR professionals need to begin by shifting their mindsets about AI," said Jeff Schwartz, a principal with Deloitte Consulting.
HSBC is replacing more manual processes, with artificial intelligence (AI) being used to automate when ATMs need to be refilled. The technology, developed by HSBC's operations and technology teams, has been trialled in Hong Kong, where the bank has 1,200 ATMs. The iCash AI technology has reduced ATM refills, which are done by third parties, by 15% – saving $1m. To calculate how much money is needed and where, iCash uses live ATM data and predictive machine learning algorithms that factor in seasonality, holidays, public events, location and recent withdrawal trends. The bank said it was a challenge to predict how much cash each ATM might need.
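The article names the inputs iCash draws on -- seasonality, holidays, location and recent withdrawal trends -- but not how they are combined. The sketch below is a hypothetical illustration of that kind of demand forecasting on synthetic data; the features, model choice and numbers are assumptions, not HSBC's actual system.

```python
# Hypothetical sketch of ATM cash-demand prediction (not HSBC's iCash code).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500

# Synthetic features mirroring the factors named in the article:
# day-of-week (seasonality), a holiday flag, and recent withdrawal volume.
day_of_week = rng.integers(0, 7, n)
is_holiday = rng.integers(0, 2, n)
recent_mean = rng.normal(10_000, 2_000, n)

# Synthetic target: daily cash demand rises on weekends and holidays.
demand = (recent_mean
          + 1_500 * (day_of_week >= 5)
          + 2_500 * is_holiday
          + rng.normal(0, 500, n))

X = np.column_stack([day_of_week, is_holiday, recent_mean])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, demand)

# Predict demand for a holiday Saturday with an average recent trend.
pred = model.predict([[5, 1, 10_000]])
print(round(pred[0]))
```

A refill schedule would then top up only the machines whose predicted demand exceeds their remaining balance, which is where the reduction in third-party refill trips would come from.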
This paper proposes to model chaos in the ATM cash withdrawal time series of a big Indian bank and forecast the withdrawals using deep learning methods. It also considers the importance of day-of-the-week and includes it as a dummy exogenous variable. We first modelled the chaos present in the withdrawal time series by reconstructing the state space of each series using the lag and embedding dimension found via the auto-correlation function and Cao's method. This process converts the univariate time series into a multivariate time series. The day-of-the-week is converted into seven features with the help of one-hot encoding, and these seven features are appended to the multivariate time series. For forecasting future cash withdrawals, we used the following algorithms: ARIMA, random forest (RF), support vector regressor (SVR), multi-layer perceptron (MLP), group method of data handling (GMDH), general regression neural network (GRNN), long short-term memory (LSTM) neural network and 1-dimensional convolutional neural network (1D CNN). We considered a daily cash withdrawals data set from an Indian commercial bank. After modelling chaos and adding exogenous features to the data set, we observed improvements in forecasting for all models. Even though the random forest yielded the best Symmetric Mean Absolute Percentage Error (SMAPE) value, the deep learning algorithms, namely LSTM and 1D CNN, showed performance statistically similar to RF based on a t-test.
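The preprocessing the abstract describes -- time-delay embedding of the univariate series, plus one-hot day-of-week dummies -- can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the paper's code; the lag and embedding dimension are fixed here by hand, whereas the paper selects them via the auto-correlation function and Cao's method.

```python
# Illustrative sketch: state-space reconstruction of a univariate series via
# time-delay embedding, augmented with one-hot day-of-week features.
import numpy as np

def delay_embed(series, lag, dim):
    """Turn a 1-D series into a dim-column matrix of lagged copies.
    Row t is [x[t], x[t+lag], ..., x[t+(dim-1)*lag]]."""
    n = len(series) - (dim - 1) * lag
    return np.column_stack([series[i * lag : i * lag + n] for i in range(dim)])

rng = np.random.default_rng(0)
withdrawals = rng.normal(100, 15, 60)          # 60 days of daily withdrawals

lag, dim = 1, 3                                 # in the paper: chosen via ACF / Cao's method
embedded = delay_embed(withdrawals, lag, dim)   # univariate -> multivariate

# One-hot encode day-of-week (7 dummy columns) and align with the embedding.
days = np.arange(len(withdrawals)) % 7
one_hot = np.eye(7)[days]
features = np.hstack([embedded, one_hot[: len(embedded)]])

print(embedded.shape, features.shape)   # (58, 3) (58, 10)
```

The resulting feature matrix is what the forecasting models (ARIMA aside, which works on the raw series) would be trained on, with the next day's withdrawal as the target.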
As technologies become increasingly capable of taking on a wide variety of repeatable tasks, many workers may find themselves increasingly nervous about their place in the workforce. Anxieties about technology in the workplace are nothing new -- in fact, they go back centuries. The good news is that the fear of humans being replaced en masse by machines has never been borne out by reality. Rather, history has repeatedly shown that as machines transform whole industries, they also create new opportunities for human workers. Indeed, the US is one of the most developed economies, and therefore one of the most automated, but it also currently has record-low unemployment.
Silicon Valley Bank, which has helped fund more than 30,000 startups, yesterday released a report on "The Future of Robotics: An Inside View on Innovation in Robotics." It described trends in production, business models, and the adoption of robotics reflecting the increasing maturity of Industry 4.0. The report also addressed concerns about automation displacing jobs and public-policy reactions. Overall, the free Silicon Valley Bank (SVB) report was cautiously optimistic about the prospects for industrial automation. It cited rising U.S. productivity, maturing technologies and suppliers supporting a variety of applications, and a steady climb for robotics deployments, particularly in Asia.
As the world grows increasingly connected, growing concern regarding the influence of artificial intelligence (AI) has been bubbling to the surface, affecting perceptions by industries big and small along with the general populace. Spurred on by sensationalized media predictions of AI taking over human decision-making and silver-screen tales of robot revolutions, there is a fear of allowing AI or its cousin, the Internet of Things (IoT), into our lives. Here is AI's man behind the curtain. One of the biggest sticking points is the popular – yet mistaken – notion that AI will cost people their jobs. In truth, the situation is just the opposite.
Modelling deontic notions through preferences has the advantage of linking deontic notions to the manifold research on preferences in multiple disciplines, such as philosophy, mathematics, economics and politics. In recent years, preferences have also been addressed within AI [15,8,18], and applications can be found in multi-agent systems and recommender systems. We shall model deontic notions through ceteris-paribus preferences, namely, conditional preferences for a state of affairs over another state of affairs, all the rest being equal. In particular, we shall focus on the ceteris-paribus preference for a proposition over its complement. The idea of ceteris-paribus preferences was originally introduced by the philosopher and logician Georg von Wright.
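One common way of making "preferring a proposition over its complement, all the rest being equal" precise is the following semantic clause; the notation here is illustrative and not necessarily the paper's own.

```latex
% A ceteris-paribus preference for $p$ over $\neg p$: any two worlds that
% agree on everything except $p$ are ranked by whether they satisfy $p$.
\[
  p \succ_{cp} \neg p
  \quad\text{iff}\quad
  \forall w, w' :\;
  \bigl( w \models p,\; w' \models \neg p,\; w \approx_{\overline{p}} w' \bigr)
  \;\Rightarrow\; w \succ w'
\]
% where $w \approx_{\overline{p}} w'$ means that $w$ and $w'$ agree on all
% atoms other than $p$ (``all the rest being equal'').
```

On this reading, the "all the rest being equal" clause is carried by the agreement relation: the preference only compares pairs of worlds that differ on the proposition in question and nothing else.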