If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Veritone, Inc. (NASDAQ: VERI), a leading provider of artificial intelligence (AI) insights and cognitive solutions, today announced the general availability of its Veritone Developer application. The application empowers developers of cognitive engines, applications and application programming interfaces (APIs) to bring new AI ideas to life through simple integration with the Veritone aiWARE platform. Veritone Developer is a self-service development environment that lets developers create, submit and deploy public and private applications and cognitive engines directly into the aiWARE architecture. After a successful limited beta release to a select group of partners, Veritone Developer is now publicly available as a unique resource for machine learning experts, application development firms, and system integrators. Veritone Developer supports RESTful and GraphQL API integrations as well as engine development in major categories of cognition, including transcription, translation, face and object recognition, audio/video fingerprinting, optical character recognition (OCR), geolocation, transcoding, and logo recognition.
Amazon SageMaker is a fully managed service for developers and data scientists to quickly build, train, deploy, and manage their own machine learning models. AWS also introduced AWS DeepLens, a deep learning-enabled wireless video camera that can run real-time computer vision models to give developers hands-on experience with machine learning. In addition, AWS announced four new application services that allow developers to build applications that emulate human-like cognition: Amazon Transcribe for converting speech to text; Amazon Translate for translating text between languages; Amazon Comprehend for understanding natural language; and Amazon Rekognition Video, a new computer vision service for analyzing videos in batches and in real time. Today, implementing machine learning is complex, involves a great deal of trial and error, and requires specialized skills. Developers and data scientists must first visualize, transform, and pre-process data to get it into a format that an algorithm can use to train a model.
The Gluon interface currently works with Apache MXNet and will support Microsoft Cognitive Toolkit (CNTK) in an upcoming release. With the Gluon interface, developers can build machine learning models using a simple Python API and a range of pre-built, optimized neural network components. This makes it easier for developers of all skill levels to build neural networks using simple, concise code, without sacrificing performance. AWS and Microsoft published Gluon's reference specification so other deep learning engines can be integrated with the interface.
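The kind of model Gluon's pre-built components (such as its sequential container and dense layers) describe can be sketched in plain Python. Note this is a conceptual sketch only, not the Gluon API itself; all layer sizes, weights, and helper names below are hypothetical, chosen just to illustrate the idea of stacking simple layer components into a network.

```python
# Conceptual sketch of a sequential dense network, in the spirit of
# Gluon's stacked layer components. Plain Python, not the Gluon API;
# all names and numbers here are illustrative.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(inputs, weights, bias):
    # One fully connected layer: output_j = sum_i(inputs_i * W_j_i) + b_j
    return [sum(x * w for x, w in zip(inputs, row)) + b
            for row, b in zip(weights, bias)]

def sequential(x, layers):
    # Apply each (weights, bias, activation) layer in order.
    for weights, bias, activation in layers:
        x = activation(dense(x, weights, bias))
    return x

# Tiny hypothetical network: 2 inputs -> 2 hidden units (ReLU) -> 1 output.
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0], relu),
    ([[1.0, 1.0]], [0.1], lambda v: v),  # identity output layer
]
print(sequential([2.0, 3.0], layers))  # prints [2.6]
```

In Gluon itself, the same structure is expressed by adding pre-built, already-optimized layer objects to a container and letting the framework handle initialization and execution, which is what makes the code concise without sacrificing performance.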
A religion based around artificial intelligence is in the news again, this time helmed by Anthony Levandowski, a former member of Google's self-driving car team. His argument is that humans will eventually create AI that is more intelligent than we are, making it functionally god-like, so we might as well start planning for that eventuality. His thinking about the rise of superintelligent machines runs parallel to that of Elon Musk, who has been trumpeting the risks of artificial superintelligence on Twitter and in public appearances. But while talking about an AI god grabs headlines, we have more pressing problems to consider. The AI experts I get to speak with aren't concerned about an artificial superintelligence suddenly cropping up in the next few months and taking over the world.
Graphcore has today announced a $50 million Series C funding round led by Sequoia Capital as the machine intelligence company prepares to ship its first Intelligence Processing Unit (IPU) products to early access customers at the start of 2018. The Series C round enables Graphcore to significantly accelerate growth to meet the expected global demand for its machine intelligence processor. The funding will be dedicated to scaling up production, building a community of developers around the Poplar software platform, driving Graphcore's extended product roadmap, and investing in its Palo Alto-based US team to help support customers. Nigel Toon, CEO at Graphcore, said: "Efficient AI processing power is rapidly becoming the most sought-after resource in the technological world. We believe our IPU technology will become the worldwide standard for machine intelligence compute."
Microsoft announced today that its Visual Studio integrated development environment is getting a new set of tools aimed at easing the process of building AI systems. Visual Studio Tools for AI is a package designed to provide developers with built-in support for creating applications with a wide variety of machine learning frameworks, such as Caffe2, TensorFlow, CNTK, and MXNet. Once users have coded up models inside Visual Studio, the AI tools make it easier for them to send that code off to Microsoft's Azure cloud platform for training and deployment. Launching these tools brings a host of advanced capabilities to developers in a point-and-click format that would previously have required a command-line interface. It should make building AI systems more accessible for a class of developers who haven't been able to use Visual Studio's rich development environment to its full potential for that purpose.
Twitter launched a set of premium application programming interfaces that will give developers access to more data, such as more Tweets per request, as well as more complex queries. These premium APIs will serve as a bridge between Twitter's free APIs and its enterprise versions. In its most recent earnings report, Twitter noted that its data platform is among its fastest-growing businesses. Prior to the premium APIs, Twitter offered basic query functionality and access to basic data for free, and real-time and historical data for enterprises. The premium APIs are expected to bridge that gap and create an upgrade path from free to paid to enterprise, with better reliability.
P3 instances are the first to include NVIDIA Tesla V100 GPUs and are the most powerful GPU instances available in the cloud. P3 instances allow customers to build and deploy advanced applications with up to 14 times better performance than previous-generation Amazon EC2 GPU compute instances, and cut the training time of machine learning applications from days to hours. With up to eight NVIDIA Tesla V100 GPUs, P3 instances provide up to one petaflop of mixed-precision, 125 teraflops of single-precision, and 62 teraflops of double-precision floating point performance, as well as a 300 GB/s second-generation NVIDIA NVLink interconnect that enables high-speed, low-latency GPU-to-GPU communication. P3 instances also feature up to 64 vCPUs based on custom Intel Xeon E5 (Broadwell) processors, 488 GB of DRAM, and 25 Gbps of dedicated aggregate network bandwidth using the Elastic Network Adapter (ENA). "When we launched our P2 instances last year, we couldn't believe how quickly people adopted them," said Matt Garman, Vice President of Amazon EC2.
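The aggregate throughput figures quoted above can be sanity-checked against NVIDIA's published per-GPU peak rates for the Tesla V100; the sketch below simply multiplies those rates by the eight GPUs in the largest P3 configuration.

```python
# Back-of-the-envelope check of the quoted P3 aggregate throughput,
# using NVIDIA's published per-V100 peak rates (in teraflops).
MIXED_TFLOPS_PER_GPU = 125.0   # Tensor Core mixed precision
SINGLE_TFLOPS_PER_GPU = 15.7   # FP32
DOUBLE_TFLOPS_PER_GPU = 7.8    # FP64
GPUS = 8                       # largest P3 instance configuration

print(GPUS * MIXED_TFLOPS_PER_GPU)   # 1000.0 teraflops, i.e. one petaflop
print(GPUS * SINGLE_TFLOPS_PER_GPU)  # ~125 teraflops single precision
print(GPUS * DOUBLE_TFLOPS_PER_GPU)  # ~62 teraflops double precision
```

The products line up with the article's one petaflop mixed-precision, 125 teraflops single-precision, and 62 teraflops double-precision figures.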
Sensay, a Los Angeles-based tech company that specializes in AI, chatbots and conversation analysis, has launched a sale of its new Ethereum-based application token, SENSE. Selling the SENSE tokens will enable Sensay to advance product development and to speed the rate of innovation in bridging AI and human conversational data.
Facebook today launched Messenger Platform 2.1 with new features to give developers and brands more ways to reach potential customers, like built-in natural language processing, a payments SDK, and a global beta that makes it easier to switch between automated bots and the humans behind 70 million businesses on Facebook. In a separate post from Wit.ai today, the company announced it will discontinue its Bot Engine for NLP. Also with Messenger Platform 2.1, a new software development kit launches today to enable payments in Messenger webview. Bot discovery was the emphasis for Messenger Platform 2.0, with features like the Discover Tab to allow Messenger staff to pick featured bots; chat extensions to make Messenger bots available in group chats; and M Suggestions to suggest bots based on the words used in a Messenger conversation.