If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Today, there are a large number of online discussion fora where users express, discuss, and exchange their views and opinions on various topics. In such fora, it has often been observed that conversations can quickly derail and become inappropriate, with users hurling abuse or passing rude and discourteous comments on individuals or certain groups and communities. Similarly, some virtual agents or bots have also been found to respond to users with inappropriate messages. As a result, inappropriate messages and comments are turning into an online menace, slowly degrading the quality of user experiences. Hence, automatic detection and filtering of such inappropriate language has become an important problem for improving the quality of conversations with users as well as virtual agents.
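As a toy illustration of the filtering step, the sketch below flags messages against a hand-written blocklist; production systems use trained classifiers rather than word lists, and every word and function name here is a hypothetical placeholder:

```python
# Minimal sketch of inappropriate-message filtering via a blocklist.
# Real systems use trained classifiers; the words below are placeholders.
BLOCKLIST = {"idiot", "stupid", "trash"}

def is_inappropriate(message: str) -> bool:
    """Flag a message if any token (punctuation stripped) is blocklisted."""
    tokens = message.lower().split()
    return any(token.strip(".,!?") in BLOCKLIST for token in tokens)

def filter_conversation(messages):
    """Drop flagged messages, keeping the rest of the conversation intact."""
    return [m for m in messages if not is_inappropriate(m)]
```

A classifier-based detector would replace `is_inappropriate` while keeping the same filtering interface.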
We live in a world where everything is connected: smart fridges, salt dispensers, egg timers, and even hair brushes. That sounds like an... ummm, ok-ish idea. Well, that's what this show is. Deep Learning with Merrill Grambell is the first show completely hosted by artificial intelligence as it interacts with real-life comedians, musicians, technologists, and other interesting guests from all over the world. It's sort of as if Clippy from Microsoft Word and Bonzi Buddy got together and made a talk show.
Intratumor heterogeneity in lung cancer may influence outcomes. CT radiomics seeks to quantify tumor characteristics by extracting detailed imaging features. However, CT radiomic features vary according to the reconstruction kernel used for image generation. The aim of this study was to investigate the effect of different reconstruction kernels on radiomic features and to assess whether image conversion using a convolutional neural network (CNN) could improve the reproducibility of radiomic features between different kernels. In this retrospective analysis, patients underwent non–contrast material–enhanced and contrast material–enhanced axial chest CT with soft kernel (B30f) and sharp kernel (B50f) reconstruction using a single CT scanner from April to June 2017.
Plate from Muybridge's Animal Locomotion series published in 1887. Deep learning has become the dominant lens through which machines understand video. Yet video files consume huge amounts of storage space and are extremely computationally demanding to analyze using deep learning. Certain use cases can benefit from converting videos to sequences of still images for analysis, enabling full data parallelism and vast reductions in data storage and computation. Representing video as still imagery also presents unique opportunities for non-consumptive analysis similar to the use of n-grams for text.
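The data-parallelism point can be sketched as follows: once a video has been decoded into per-frame images (extraction itself, e.g. with ffmpeg or OpenCV, is assumed to have already happened), each frame can be analyzed independently. The per-frame analysis here is a hypothetical stand-in, and frames are represented as small grayscale arrays:

```python
# Sketch: analyzing independent still frames in parallel.
# Frames are stand-in grayscale images (lists of pixel rows); the
# per-frame "analysis" is a placeholder mean-intensity computation.
from concurrent.futures import ThreadPoolExecutor

def analyze_frame(frame):
    """Hypothetical per-frame analysis: mean pixel intensity."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def analyze_frames(frames, workers=4):
    """Map the analysis over frames in parallel; output order is preserved."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze_frame, frames))
```

Because no frame depends on any other, the same map scales out to a process pool or a cluster, which is the storage and compute win the passage describes.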
If you attended last year's RSA conference, you may have left with the idea that all you needed to build a complete cyber-security solution was a machine learning engine (or better yet, "advanced next-gen Artificial Intelligence"). Every cyber-security company uses machine learning (or AI) because it is a powerful technique for malware analysis. But it is by no means the only one. Applied naïvely, it may not even work effectively. Sometimes, a powerful scanning engine is all that is required (it's 'cheap'), or even just a great database of known malware hashes (it's fast).
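The hash-database approach mentioned above can be sketched in a few lines: hash a file's bytes and test membership in a known-bad set. It is fast and cheap but only catches exact byte-for-byte copies, which is why it sits alongside, rather than replaces, ML-based analysis. The sample entry below is a placeholder, not a real malware signature:

```python
# Sketch of a known-malware hash lookup. The "known bad" entry is a
# placeholder built from a hypothetical sample, not a real signature.
import hashlib

KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def is_known_malware(file_bytes: bytes) -> bool:
    """O(1) set lookup: does this file's SHA-256 match a known sample?"""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES
```

A single flipped byte produces a completely different hash, which is exactly the evasion that pushes vendors toward scanning engines and ML.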
This seems like an obvious one, but with so many potential areas for AI exploration, starting with the right projects (and stakeholders) is crucial for long-term success. First and foremost, the process of identifying and selecting use cases shouldn't be driven by technology alone. That is, you don't want to think about AI solely in terms of where you can apply natural language processing, for example, or how you can leverage a labeled data set. Instead, ask where you seek to increase productivity or derive new value. Going through the questioning exercise above with the various leaders who may own or touch AI, such as the chief information officer, chief digital officer, chief data scientist, and other specialists (see #3), will enable you to identify where to start.
Faced with digital disruption in their industry and a radical shift in consumer behaviour, enterprises are looking to boost operational efficiency and agility in solving business problems. For a business looking to be competitive or to disrupt the existing market space, Artificial Intelligence (AI) can be a solution. It can help make their machines intelligent, giving them a brain of sorts: the ability to make decisions and, at some tasks, outperform humans. Today, enterprises of all sizes are adopting AI to build smart applications and automate processes. A Gartner survey reveals that 37% of organizations have implemented Artificial Intelligence in some form.
As deep learning has become ubiquitous, evaluations of its accuracy typically compare its performance against an idealized baseline of flawless human results that bears no resemblance to the actual human workflow those algorithms are being designed to replace. For example, the accuracy of real-time algorithmic speech recognition is frequently compared against human captioning produced in offline, multi-coder, reconciled environments and subjected to multiple reviews to generate flawless content that looks absolutely nothing like actual real-time human transcription. If we really wish to understand the usability of AI today, we should be comparing it against the human workflows it is designed to replace, not an impossible vision of nonexistent human perfection. While the press is filled with the latest superhuman exploits of bleeding-edge research AI systems besting humans at yet another task, the reality of production AI systems is far more mundane. Most commercial applications of deep learning can achieve higher accuracy than their human counterparts at some tasks and worse performance at others.
When asked why he robbed banks, Willie Sutton famously replied, "Because that's where the money is". And so much of artificial intelligence evolved in the United States, because that's where the computers were. However, with Europe's strong educational institutions, the path to advanced AI technologies has been cleared by European computer scientists, neuroscientists, and engineers, many of whom were later poached by US universities and companies. From backpropagation to Google Translate, deep learning, and the development of more advanced GPUs permitting faster processing and rapid developments in AI over the past decade, some of the greatest contributions to AI have come from European minds. Modern AI can be traced back to the work of the English mathematician Alan Turing, who in early 1940 designed the bombe, an electromechanical precursor to the modern computer (itself based on previous work by Polish scientists) that broke the German military codes in World War II.
It seems like using these pre-trained models has become a new standard for industry best practices. After all, why wouldn't you take advantage of a model that's been trained on more data and compute than you could ever muster by yourself? Advances within the NLP space have also encouraged the use of pre-trained language models like GPT and GPT-2, AllenNLP's ELMo, Google's BERT, and Sebastian Ruder and Jeremy Howard's ULMFiT (for an excellent overview of these models, see this TOPBOTS post). One common technique for leveraging pretrained models is feature extraction, where you retrieve intermediate representations produced by the pretrained model and use those representations as inputs for a new model. These representations, typically taken from the final fully-connected layers, are generally assumed to capture information that is relevant for solving a new task.
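The feature-extraction idea can be shown with a deliberately tiny stand-in network: run a frozen "pretrained" model up to, but not including, its final layer, and treat those activations as inputs for a new model. The weights below are made up; in practice you would load a real pretrained model (e.g. BERT, or a torchvision network) and read activations from a chosen intermediate layer:

```python
# Toy sketch of feature extraction. The frozen "pretrained" weights are
# hypothetical; a real workflow would load an actual pretrained network.

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(w, v):
    """Multiply a weight matrix (list of rows) by a vector."""
    return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]

# Frozen weights: input dim 3 -> hidden dim 2 -> scalar output.
W_HIDDEN = [[1.0, 0.0, -1.0],
            [0.5, 0.5, 0.5]]
W_OUTPUT = [[1.0, -1.0]]  # final layer, deliberately unused below

def extract_features(x):
    """Run the frozen network up to (but not including) its final layer."""
    return relu(matvec(W_HIDDEN, x))

# The extracted features become inputs to a new, task-specific model
# (e.g. a logistic-regression head) trained on your own labels.
features = extract_features([2.0, 0.0, 1.0])
```

Only the new head is trained; the pretrained weights stay fixed, which is what makes feature extraction cheap compared with full fine-tuning.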