This opinion piece is inspired by the old Danish proverb: "Making predictions is hard, especially about the future" (1). As every reader knows, the momentum of artificial intelligence (AI) and the eventual implementation of deep learning models seem assured. Some pundits have gone considerably further, however, and predicted a sweeping AI takeover of radiology. Although many radiologists support AI and believe it will enable greater efficiency, a recent study of medical students found very different reactions (2). While such doomsday predictions are understandably attention-grabbing, the scenarios they describe are highly unlikely, at least in the short term.
Yesterday, AIM published an article on how difficult it is for small labs and individual researchers to persevere in the high-compute, high-cost industry of deep learning. Today, US policymakers introduced a new bill intended to make deep learning affordable for all. The National AI Research Resource Task Force Act was introduced in the House by Representative Anna G. Eshoo (D-CA) and her colleagues. The bill was met with unanimous support from top universities and companies engaged in artificial intelligence (AI) research. Well-known supporters include Stanford University, Princeton University, UCLA, Carnegie Mellon University, Johns Hopkins University, OpenAI, Mozilla, Google, Amazon Web Services, Microsoft, IBM and NVIDIA, among others.
Whether or not your organisation suffers a cyber attack has long been considered a case of 'when, not if', and such attacks can have a huge impact. In 2018, 2.8 billion consumer data records were exposed in 342 breaches, ranging from credential stuffing to ransomware, at an estimated cost of more than $654bn. In 2019, the number of exposed records had risen to 4.1 billion. While the use of artificial intelligence (AI) and machine learning as a primary offensive tool in cyber attacks is not yet mainstream, their use and capabilities are growing more sophisticated. In time, cyber criminals will inevitably take advantage of AI, and such a move will increase threats to digital security and raise both the volume and sophistication of cyber attacks.
Uncovering evidence for historical theories and identifying patterns in past events has long been hindered by the labour-intensive process of inputting data from artefacts and handwritten records. The adoption of artificial intelligence and machine learning techniques is speeding up such research and drawing attention to overlooked information. But this approach, known as "digital humanities", is in a battle for funding against more future-focused applications of AI. "There is a lot of interest in digital humanities, but there is not a lot of money," says Ilan Shimshoni, professor of computer vision and machine learning at the University of Haifa in Israel, where he works on archaeological projects that include reassembling artefacts from photos of fragments. "If you want to do an analysis of Facebook you'll get much more money than if you want to look at ancient Greek artefacts." Archaeological puzzles may not seem as urgent as computer science projects in healthcare, finance and other industries, but applying algorithmic techniques to historical research can improve AI's capabilities, says Ayellet Tal, a researcher in archaeology and computer science at the Technion – Israel Institute of Technology.
The Dow and S&P 500 are currently on track to close out the second quarter positive for the year. The markets remained flat to slightly higher this morning, the last trading day of the quarter. Investors remain cautious amid mixed economic data and the looming threat of the coronavirus pandemic. All eyes will be on Federal Reserve Chairman Jerome Powell and Treasury Secretary Steven Mnuchin as they testify before the House Financial Services Committee. Amid this uncertainty, our deep learning algorithms have parsed the data and used Artificial Intelligence ("AI") to help you spot the Top Buys for today.
One of the many benefits of artificial intelligence (AI) is that it helps us view societal problems from a different perspective. While there's been much hubbub about how AI might be misused, we must not overlook the many ways it can be used for good. Our global issues are complex, and AI gives us a valuable tool to augment human efforts to solve vexing problems. Here are 10 of the best ways artificial intelligence is used for good. Artificial intelligence, powered by deep-learning algorithms, is already in use in healthcare.
What makes us humans so good at making sense of visual data? That's a question that has preoccupied artificial intelligence and computer vision scientists for decades. Efforts at reproducing the capabilities of human vision have so far yielded results that are commendable but still leave much to be desired. Our current artificial intelligence algorithms can detect objects in images with remarkable accuracy, but only after they've seen many examples (thousands, or perhaps millions) and only if the new images are not too different from what they've seen before. There is a range of efforts aimed at solving the shallowness and brittleness of deep learning, the main AI technique used in computer vision today. But sometimes, finding the right solution is predicated on asking the right questions and formulating the problem in the right way.
Deep learning (DL) models are known for capturing nonlinearities in data that traditional estimators such as logistic regression cannot. However, doubts remain about the increased use of computationally intensive DL for simple classification tasks. To find out whether DL really outperforms shallow models significantly, researchers from the University of Pennsylvania experimented with three ML pipelines, covering traditional methods, AutoML and DL, in a paper titled 'Is Deep Learning Necessary For Simple Classification Tasks.' The UPenn researchers noted that a support-vector machine (SVM) model might predict susceptibility to a certain complex genetic disease more accurately than a gradient boosting model trained on the same dataset. Moreover, choosing different hyperparameters within that SVM model can change its performance.
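The study's two points, that the choice of model family matters and that hyperparameters within one family matter, can be illustrated with a small hypothetical experiment. This sketch uses scikit-learn's synthetic `make_moons` data rather than the genetic datasets from the UPenn paper, and the specific `C` values are arbitrary choices for illustration:

```python
# Hypothetical illustration: on a small nonlinear dataset, a kernel SVM
# can outperform a linear model, and its hyperparameters change the result.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class data with a nonlinear decision boundary
X, y = make_moons(n_samples=1000, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A linear baseline, analogous to the "traditional estimator" in the article
logreg_acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

# The same SVM family with different regularization strengths C:
# varying a single hyperparameter shifts test accuracy noticeably.
svm_accs = {C: SVC(kernel="rbf", C=C).fit(X_tr, y_tr).score(X_te, y_te)
            for C in (0.01, 1.0, 100.0)}

print(f"logistic regression: {logreg_acc:.3f}")
for C, acc in svm_accs.items():
    print(f"RBF SVM (C={C}): {acc:.3f}")
```

On data like this the kernel SVM typically beats the linear model, while the weakly regularized and strongly regularized SVMs score differently from each other, which is exactly the kind of gap that makes head-to-head pipeline comparisons necessary before reaching for DL.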
Intel and the National Science Foundation (NSF), joint funders of the Machine Learning for Wireless Networking Systems (MLWiNS) program, today announced the recipients of awards for research projects into ultra-dense wireless systems that deliver the throughput, latency and reliability requirements of future applications, including distributed machine learning computations over wireless edge networks. Institutions: University of Illinois Urbana-Champaign and University of Washington. Project Leads: Pramod Viswanath (University of Illinois Urbana-Champaign) and Sewoong Oh (University of Washington). Project Description: This project will apply deep learning in the physical layer of communication systems, enabling researchers to: 1) study the operation of new neural-network-based, nonlinear channel codes through jointly trained encoders and decoders; 2) integrate information theory, which can reduce the number of parameters to be learned and improve the training efficiency of communication systems, to create nonlinear codes for feedback channels; and 3) design a family of nonlinear neural codes for interference networks.
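The first research thrust, a channel code whose encoder and decoder are trained jointly, can be sketched as a toy "autoencoder over a noisy channel". This is an illustrative simplification, not the project's actual method: the message count, noise level, correlation decoder, and the decoder-only gradient update below are all assumptions chosen to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, sigma = 16, 8, 0.3      # 16 messages (4 bits) -> 8 channel symbols; AWGN std 0.3

# Learnable codebook: row m is the codeword ("encoding") for message m.
C = rng.normal(size=(M, N))

def normalize(codebook):
    """Scale the codebook to unit average power per symbol (a power constraint)."""
    return codebook / np.sqrt((codebook ** 2).mean())

lr, batch, losses = 0.5, 64, []
for step in range(3000):
    Cn = normalize(C)
    labels = rng.integers(0, M, size=batch)
    x = Cn[labels]                               # encoder: look up codewords
    y = x + sigma * rng.normal(size=x.shape)     # AWGN channel
    logits = y @ Cn.T                            # decoder: correlate with codebook
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    losses.append(-np.log(p[np.arange(batch), labels]).mean())
    # Cross-entropy gradient w.r.t. the logits; for brevity we update the
    # codebook only through the decoder term, ignoring the encoder path
    # and the normalization (a crude but workable simplification).
    g = p.copy()
    g[np.arange(batch), labels] -= 1.0
    C -= lr * (g.T @ y) / batch

# Block-error rate of the learned code on fresh noise
Cn = normalize(C)
labels = rng.integers(0, M, size=5000)
y = Cn[labels] + sigma * rng.normal(size=(5000, N))
err = float(np.mean((y @ Cn.T).argmax(axis=1) != labels))
print(f"final loss {losses[-1]:.3f}, block-error rate {err:.3%}")
```

Training pushes the codewords apart (subject to the power constraint) so that noisy receptions stay distinguishable; the research described above replaces this lookup table and correlation decoder with jointly trained neural networks, and brings information-theoretic structure to bear on how those networks are parameterized.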