"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030."
Computational intelligence in finance has been a popular research topic in both academia and the financial industry over the last few decades, and numerous published studies have produced a wide variety of models. Meanwhile, within the Machine Learning (ML) field, Deep Learning (DL) has recently attracted considerable attention, largely because it outperforms classical models. Many different DL implementations exist today, and broad interest in the technique continues to grow. Finance is one area where DL models have begun to gain traction; even so, the field remains wide open, and many research opportunities still exist. In this paper, we provide a state-of-the-art snapshot of the DL models developed for financial applications to date. We not only categorize the works according to their intended subfield in finance but also analyze them based on the DL models they employ. In addition, we identify possible future implementations and highlight a pathway for ongoing research within the field.
Put more simply, AI depends on good data. Even Google--which is famous for the pioneering work in AI that underpins its standard-setting search-based advertising business--makes no bones about the critical role of data in AI. Peter Norvig, Google's director of research, has said: "We don't have better algorithms, we just have more data." Companies increasingly realize that data is critical to their success--and they are paying striking sums to acquire it. Microsoft's US$26 billion purchase of the enterprise social network LinkedIn is a prime example. But other technology companies are also acquiring data-related assets, often looking beyond identity-linked information from social media sources to vast troves of anonymized consumer data. Think, for example, of Oracle pursuing an M&A-led strategy for its Oracle Data Cloud data aggregation service, or IBM buying, within the past two years, both The Weather Company and Truven Health Analytics. Early returns for companies making such investments are promising.
From virtual assistants to driverless cars, technology imitating human intelligence is on the rise. But at what ethical cost, and how do boards future-proof their organisations in the face of rapid change? Earlier this year, a Japanese insurance company made headlines for doing something that company executives and directors around the world have been anticipating - and fearing - for years. Fukoku Mutual Life Insurance made 34 of its staff redundant and replaced them with the artificial intelligence (AI) system IBM Watson. Japanese newspaper The Mainichi reported that the company will be using Watson to determine payout amounts and check customer cases against their insurance contracts. Evidently, the future of AI is already here, and technology is changing the world at a dramatic pace.