Stop debating whether AI is 'sentient' -- the question is if we can trust it

#artificialintelligence

The past month has seen a frenzy of articles, interviews, and other types of media coverage about Blake Lemoine, a Google engineer who told The Washington Post that LaMDA, a large language model created for conversations with users, is "sentient." After reading a dozen different takes on the topic, I have to say that the media has become (a bit) disillusioned with the hype surrounding current AI technology. A lot of the articles discussed why deep neural networks are not "sentient" or "conscious." This is an improvement over a few years ago, when news outlets were creating sensational stories about AI systems inventing their own language, taking over every job, and accelerating toward artificial general intelligence. But the fact that we're discussing sentience and consciousness again underscores an important point: our AI systems--namely large language models--are becoming increasingly convincing while still suffering from fundamental flaws that scientists have pointed out on many occasions.


Machine Learning: Natural Language Processing in Python (V2)

#artificialintelligence

Welcome to Machine Learning: Natural Language Processing in Python (Version 2). This is a massive 4-in-1 course covering: 1) vector models and text preprocessing methods, 2) probability models and Markov models, 3) machine learning methods, and 4) deep learning and neural network methods. In part 1, which covers vector models and text preprocessing methods, you will learn why vectors are so essential in data science and artificial intelligence. You will learn various techniques for converting text into vectors, such as the CountVectorizer and TF-IDF, and you'll learn the basics of neural embedding methods like word2vec and GloVe. You'll then apply what you've learned to tasks such as document retrieval and search engines, as sketched below. Along the way, you'll also learn important text preprocessing steps, such as tokenization, stemming, and lemmatization. You'll be introduced briefly to classic NLP tasks such as part-of-speech tagging.
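To make the vector-model ideas above concrete, here is a minimal sketch of count vectors, TF-IDF weighting, and a toy document-retrieval step using scikit-learn. The example documents and query are made-up placeholders, not course material.

```python
# Minimal sketch: TF-IDF vectors and toy document retrieval with scikit-learn.
# The documents and the query below are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Neural networks learn word embeddings such as word2vec and GloVe.",
    "TF-IDF down-weights words that appear in every document.",
    "Stemming and lemmatization normalize tokens before vectorization.",
]

# Raw term counts (what CountVectorizer produces).
counts = CountVectorizer().fit_transform(docs)
print("count matrix shape:", counts.shape)

# TF-IDF weighted vectors for a toy search engine.
tfidf = TfidfVectorizer()
doc_vectors = tfidf.fit_transform(docs)

query_vector = tfidf.transform(["how does tf-idf weight words"])
scores = cosine_similarity(query_vector, doc_vectors).ravel()
print("best match:", docs[scores.argmax()])
```

The same fit-then-transform pattern carries over to the course's later retrieval and classification exercises: fit the vectorizer on the corpus once, then transform queries or new documents into the same vector space.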


Text classification for online conversations with machine learning on AWS

#artificialintelligence

Online conversations are ubiquitous in modern life, spanning industries from video games to telecommunications. This has led to exponential growth in the amount of online conversation data, which has helped drive the development of state-of-the-art natural language processing (NLP) systems like chatbots and natural language generation (NLG) models. Over time, various NLP techniques for text analysis have also evolved. This creates a need for a fully managed service that can be integrated into applications using API calls, without the need for extensive machine learning (ML) expertise. AWS offers pre-trained AI services like Amazon Comprehend, which can effectively handle NLP use cases involving classification, text summarization, entity recognition, and more, to gather insights from text.
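As a rough illustration of the API-call integration described above, here is a minimal sketch that calls Amazon Comprehend's pre-trained entity-recognition and sentiment APIs through boto3. The region, example message, and configured credentials are assumptions for the sketch; classifying conversations into custom categories would additionally require training a Comprehend custom classifier and calling its endpoint, which is not shown here.

```python
# Minimal sketch: Amazon Comprehend via boto3 for entity recognition and
# sentiment analysis on a single chat message. Region and text are placeholders;
# valid AWS credentials are assumed to be configured in the environment.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

message = "The new patch broke voice chat on my console again."

# Pre-trained entity recognition.
entities = comprehend.detect_entities(Text=message, LanguageCode="en")
for entity in entities["Entities"]:
    print(entity["Type"], entity["Text"], round(entity["Score"], 3))

# Pre-trained sentiment analysis.
sentiment = comprehend.detect_sentiment(Text=message, LanguageCode="en")
print("sentiment:", sentiment["Sentiment"])
```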


Amazon digs into ambient and generalizable intelligence at re:MARS

#artificialintelligence

Many, if not most, AI experts maintain that artificial general intelligence (AGI) is still many decades away, if not longer. And the AGI debate has been heating up over the past couple of months. However, according to Amazon, the route to "generalizable intelligence" begins with ambient intelligence. And it says that future is unfurling now.


Three opportunities of Digital Transformation: AI, IoT and Blockchain

#artificialintelligence

Koomey's law: This law posits that the energy efficiency of computation doubles roughly every one-and-a-half years (see Figure 1–7). In other words, the energy necessary for the same amount of computation halves in that time span. To visualize the exponential impact this has, consider the fact that a fully charged MacBook Air, if it ran at the computational energy efficiency of 1992, would completely drain its battery in a mere 1.5 seconds. According to Koomey's law, the energy requirements for computation in embedded devices are shrinking to the point that harvesting the required energy from ambient sources like solar power and thermal energy should suffice to power the computation needed in many applications. Metcalfe's law: This law has nothing to do with chips and everything to do with connectivity. Formulated by Robert Metcalfe as he invented Ethernet, the law essentially states that the value of a network grows in proportion to the square of the number of its nodes (see Figure 1–8).
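The arithmetic behind both laws is simple enough to sketch. The snippet below assumes, for illustration only, a 1.57-year doubling period, a 20-year span, and a 7-hour modern battery life; these numbers are not taken from the text, but they reproduce the same order of magnitude as the MacBook Air example.

```python
# Minimal sketch of the two growth laws described above.
# The 20-year span, 1.57-year doubling period, and 7-hour battery life are
# illustrative assumptions, not figures quoted from the text.

def koomey_efficiency_gain(years: float, doubling_period: float = 1.57) -> float:
    """Factor by which the energy efficiency of computation improves over `years`."""
    return 2 ** (years / doubling_period)

def metcalfe_value(nodes: int) -> int:
    """Relative network value, proportional to the square of the node count."""
    return nodes * nodes

gain = koomey_efficiency_gain(years=20)  # e.g. 1992 -> 2012
battery_seconds_at_1992_efficiency = 7.0 * 3600 / gain
print(f"efficiency gain over 20 years: ~{gain:,.0f}x")
print(f"battery life at 1992 efficiency: ~{battery_seconds_at_1992_efficiency:.1f} s")

# Metcalfe: growing a network from 10 to 100 nodes multiplies its value 100x.
print("value ratio, 100 vs. 10 nodes:", metcalfe_value(100) / metcalfe_value(10))
```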


What is Artificial Intelligence? How does AI work, Types, Trends and Future of it?

#artificialintelligence

Let's take a detailed look. Artificial Narrow Intelligence (ANI) is the most common form of AI you'd find in the market now. These artificial intelligence systems are designed to solve one single problem and to execute a single task really well. By definition, they have narrow capabilities, like recommending a product to an e-commerce user or predicting the weather. This is the only kind of artificial intelligence that exists today. They can come close to human performance in very specific contexts, and even surpass it in many instances, but they excel only in very controlled environments with a limited set of parameters. Artificial General Intelligence (AGI) is still a theoretical concept. It's defined as AI with a human level of cognitive function across a wide variety of domains, such as language processing, image processing, computational functioning, reasoning, and so on.


AI: The emerging Artificial General Intelligence debate

#artificialintelligence

Since Google's artificial intelligence (AI) subsidiary DeepMind published a paper a few weeks ago describing a generalist agent they call Gato (which can perform various tasks using the same trained model) and claimed that artificial general intelligence (AGI) can be achieved just via sheer scaling, a heated debate has ensued within the AI community. While it may seem somewhat academic, the reality is that if AGI is just around the corner, our society--including our laws, regulations, and economic models--is not ready for it. Indeed, thanks to the same trained model, the generalist agent Gato is capable of playing Atari, captioning images, chatting, or stacking blocks with a real robot arm. It can also decide, based on its context, whether to output text, joint torques, button presses, or other tokens. As such, it does seem a much more versatile AI model than the popular GPT-3, DALL-E 2, PaLM, or Flamingo, which are becoming extremely good at narrow, specific tasks such as natural language writing, language understanding, or creating images from descriptions.


Language Models

Communications of the ACM

A transformer has strong language representation ability; a very large corpus contains rich language expressions (and such unlabeled data can be obtained easily); and training large-scale deep learning models has become more efficient. Therefore, pre-trained language models can effectively represent a language's lexical, syntactic, and semantic features. Pre-trained language models, such as BERT and GPTs (GPT-1, GPT-2, and GPT-3), have become the core technologies of current NLP. Pre-trained language model applications have brought great success to NLP. "Fine-tuned" BERT has outperformed humans in terms of accuracy in language-understanding tasks, such as reading comprehension [8, 17]. "Fine-tuned" GPT-3 has also reached an astonishing level of fluency in text-generation tasks [3].
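The fine-tuning step mentioned above can be sketched in a few lines. The article does not prescribe a toolkit, so the example below assumes the Hugging Face `transformers` library, an illustrative `bert-base-uncased` checkpoint, and a single made-up labeled sentence; real fine-tuning loops over a full labeled dataset.

```python
# Minimal sketch of fine-tuning a pre-trained BERT for classification,
# using the Hugging Face `transformers` library (an assumed tool choice).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load pre-trained BERT and attach a fresh 2-label classification head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# One illustrative labeled example (label 1 = "positive" here by convention).
inputs = tokenizer("The movie was surprisingly good.",
                   return_tensors="pt", truncation=True)
labels = torch.tensor([1])

# Forward pass returns the classification loss when labels are supplied;
# a single optimizer step updates the pre-trained weights plus the new head.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()
```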


How to get started with machine learning and AI

#artificialintelligence

Back in the 1950s, in the earliest days of what we now call artificial intelligence, there was a debate over what to name the field. Herbert Simon, co-developer of both the logic theory machine and the General Problem Solver, argued that the field should have the much more anodyne name of "complex information processing." This certainly doesn't inspire the awe that "artificial intelligence" does, nor does it convey the idea that machines can think like humans. However, "complex information processing" is a much better description of what artificial intelligence actually is: parsing complicated data sets and attempting to make inferences from the pile. Some modern examples of AI include speech recognition (in the form of virtual assistants like Siri or Alexa) and systems that determine what's in a photograph or recommend what to buy or watch next.


Natural Language Processing

#artificialintelligence

By the end of this Specialization, you will have designed NLP applications that perform question-answering and sentiment analysis, created tools to translate languages and summarize text, and even built a chatbot! Learners should have a working knowledge of machine learning, intermediate Python skills including experience with a deep learning framework (e.g., TensorFlow, Keras), and proficiency in calculus, linear algebra, and statistics. Please make sure that you've completed course 3 - Natural Language Processing with Sequence Models - before starting this course. This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization.