If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
What happens to Amazon's strategy as their data scientists, engineers, and machine learning experts work tirelessly to dial up the accuracy on the prediction machine? That said, one can imagine a scenario where Amazon adopts the new strategy even before the prediction accuracy is good enough to make it profitable, because the company anticipates that at some point it will be profitable. Today, in the case of AI, some companies are making early bets anticipating that the dial on the prediction machine will start turning faster once it gains momentum. In 2016, GM paid over $1B to acquire AI startup Cruise Automation; in 2017, Ford invested $1B in AI startup Argo AI and John Deere paid over $300M to acquire AI startup Blue River Technology. All three startups had generated negligible revenue relative to their purchase price.
Advancements in machine learning, predictive analytics, big data and artificial intelligence aren't limited to Fortune 500 companies. But there is one critical area of the financial industry that has yet to dive deeply into the opportunities of machine learning: Marketing. The homepage on a bank or credit union website is one of the most powerful messaging channels in the financial marketer's arsenal. The largest and most valuable piece of real estate on the website homepage is called the marquee banner -- the big marketing panel that dominates the screen when a user first loads the site.
Artificial intelligence can predict Supreme Court decisions better than some experts. The model looked at the features of each case for a given year and predicted decision outcomes, including whether the court reversed a lower court's decision and how each justice voted. "Every time we've kept score, it hasn't been a terribly pretty picture for humans," says the study's lead author, Daniel Katz, a law professor at Illinois Institute of Technology in Chicago.
The complexity, as well as the number of active servers to manage, has increased significantly, resulting in a much larger amount of collected data to sort through and track. Yet despite the increase in instrumentation capabilities, enterprises barely use these larger data sets to improve availability and performance, for example through root cause analysis and incident prediction. Machine learning studies how to design algorithms that learn by observing data, discover new insights in data, adapt and customize themselves automatically, and handle problems where it would be too complicated and costly to hand-code every possible circumstance (such as search engines and self-driving cars). Many organizations are finding that machine learning allows them to better analyze large amounts of data, gain valuable insights, reduce incident investigation time, determine which alerts are correlated and what causes event storms, and even prevent incidents from happening in the first place.
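As a minimal sketch of the alert-correlation idea mentioned above (the alert names, window size, and threshold here are all hypothetical, not drawn from any particular monitoring product), one could group alerts that repeatedly fire within the same short time window:

```python
from collections import Counter
from itertools import combinations

def correlated_alerts(events, window=60, min_count=2):
    """Find alert pairs that repeatedly fire within `window` seconds.

    events: list of (timestamp_seconds, alert_name) tuples.
    Returns pairs seen together in at least `min_count` bursts.
    """
    events = sorted(events)
    pair_counts = Counter()
    for i, (t, name) in enumerate(events):
        # Collect the distinct alerts that fire shortly after this one.
        burst = {name}
        for t2, name2 in events[i + 1:]:
            if t2 - t > window:
                break
            burst.add(name2)
        for pair in combinations(sorted(burst), 2):
            pair_counts[pair] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_count}

# Hypothetical alert stream: disk alerts tend to precede database alerts.
stream = [(0, "disk_full"), (10, "db_slow"), (300, "disk_full"),
          (305, "db_slow"), (900, "net_flap")]
print(correlated_alerts(stream))  # {('db_slow', 'disk_full'): 2}
```

Real incident-prediction systems use far richer models, but even this co-occurrence count surfaces the kind of alert relationships the passage describes.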
Last week, the WannaCry ransomware attack crippled the UK's National Health Service network -- one report suggested people with life-threatening injuries were told not to come to the hospital. In the future, security systems could use artificial intelligence to monitor user behavior, track activity, suggest when there may be a danger, and even mount an attack against the ransomware purveyors, effectively rendering the deadly malware client inoperable. Raja Mukerji, the cofounder and Chief Customer Officer at ExtraHop Networks, equates how an AI can block ransomware to how airport security stops people from carrying water bottles: a technique using AI in airport security would not block all water bottles, only the ones that look suspicious.
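To make the "monitor user behavior" idea concrete, here is a toy sketch of one behavioral signal a defender might watch: a sudden burst of file modifications, which mass encryption tends to produce. The telemetry, threshold, and function name are all hypothetical illustrations, not any vendor's actual detection logic:

```python
import statistics

def anomalous_minutes(file_mods_per_minute, threshold=2.5):
    """Flag minutes whose file-modification count spikes far above baseline.

    Returns the indices of minutes whose z-score exceeds `threshold`.
    """
    mean = statistics.mean(file_mods_per_minute)
    stdev = statistics.stdev(file_mods_per_minute)
    if stdev == 0:
        return []  # perfectly flat activity: nothing stands out
    return [i for i, n in enumerate(file_mods_per_minute)
            if (n - mean) / stdev > threshold]

# Hypothetical telemetry: a quiet workstation, then a burst of rewrites.
counts = [4, 6, 5, 3, 5, 4, 6, 5, 4, 500]
print(anomalous_minutes(counts))  # [9]
```

A production system would combine many such signals and learn the baseline per user, but the principle is the same: model normal activity, then flag large deviations rather than applying blanket rules.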
Thanks to great experimental work by several research groups studying the behavior of Stochastic Gradient Descent (SGD), we are collectively gaining a much clearer understanding of what happens in the neighborhood of training convergence. I first discussed one such paper several months ago in a blog post, "Rethinking Generalization in Deep Learning". Leslie Smith and Nicholay Topin recently submitted a paper to the ICLR 2017 workshop, "Exploring Loss Function Topology with Cyclical Learning Rates", where they discover some peculiar convergence behavior: as you monotonically increase and then decrease the learning rate, there is a transition near the convergence regime where a large enough learning rate perturbs the system right out of its basin into a region of much higher loss. There is, however, one pragmatic takeaway from this paper: "Averaging two models within a basin tends to give an error that is the average of the two models (or less). Averaging two models between basins tends to give an error that is higher than both models."
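The within-basin half of that takeaway is easy to illustrate on a toy convex loss, where it holds exactly by Jensen's inequality (a single quadratic has only one basin, so this is a sketch of the intuition, not a reproduction of the paper's deep-network experiments):

```python
def loss(w, data):
    """Mean squared error of a one-parameter linear model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Toy data generated by y = 2x, so the loss is a single convex basin at w = 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w1, w2 = 1.5, 2.6            # two "converged" models on either side of the minimum
w_avg = (w1 + w2) / 2

avg_of_losses = (loss(w1, data) + loss(w2, data)) / 2
loss_of_avg = loss(w_avg, data)

# Within one basin (a convex loss), the averaged model's error is at most
# the average of the two models' errors.
print(loss_of_avg <= avg_of_losses)  # True
```

Between two separate basins of a non-convex loss the same averaging lands on the "ridge" between them, which is why the paper reports higher error in that case.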
To borrow a cliché opening from the last high school commencement or Maid of Honor speech you heard, the dictionary defines artificial intelligence (AI) as 1: a branch of computer science dealing with the simulation of intelligent behavior in computers; and 2: the capability of a machine to imitate intelligent human behavior. But, do these definitions really explain the difference between an artificially intelligent system and one that's just programmed to be useful? What is "intelligent" behavior or, more specifically, "intelligent human behavior"? For many, the term "artificial intelligence" draws to mind humanoid robots like C-3PO from "Star Wars" or Dolores from "Westworld."
Then, as high-bandwidth networking, cloud computing, and high-powered graphics-enabled microprocessors emerged, researchers began building multilayered neural networks -- still extremely slow and limited in comparison with natural brains, but useful in practical ways. The type of machine learning called deep learning has become increasingly important. Tomorrow's AI aggregators will be able to detect "fake news" and route people to alternative perspectives. AI applications in daily use include smartphone digital assistants, email programs that sort entries by importance, voice recognition systems, image recognition apps such as Facebook Picture Search, smart speakers such as Amazon Echo and Google Home, and much of the emerging Industrial Internet.
Inside a simple computer simulation, a group of self-driving cars are performing a crazy-looking maneuver on a four-lane virtual highway. Half are trying to move from the right-hand lanes just as the other half try to merge from the left. It seems like just the sort of tricky thing that might flummox a robot vehicle, but they manage it with precision. I'm watching the driving simulation at the biggest artificial-intelligence conference of the year, held in Barcelona this past December. What's most amazing is that the software governing the cars' behavior wasn't programmed in the conventional sense at all.
You may recall that I was a judge for the 2016 Royal Society Insight Investment Popular Science Book Prize. I mention it because you may not know, and because it is the reason I've not shared reviews for any of the many books I've read this year: to avoid creating an impression of bias. Today, I continue my nearly-annual list of the best popular science books of the year (my first instalment is here). Today's list is biology, a broad topic that includes books about evolution, ecology, animal behavior and the natural history of animals. Those of you who are wondering where all the excellent botany/plant books are, stay tuned, because those are included in another forthcoming list!