Project Brainwave - Microsoft Research
Project Brainwave is a deep learning platform for real-time AI inference in the cloud and on the edge. A soft Neural Processing Unit (NPU), based on a high-performance field-programmable gate array (FPGA), accelerates deep neural network (DNN) inferencing, with applications in computer vision and natural language processing. Project Brainwave is transforming computing by augmenting CPUs with an interconnected and configurable compute layer composed of programmable silicon. For example, this FPGA configuration achieved more than an order of magnitude improvement in latency and throughput on RNNs for Bing, with no batching. Because it delivers real-time AI and ultra-low latency without requiring batching, it reduces software overhead and complexity.
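The latency advantage of serving without batching can be made concrete with a small back-of-the-envelope sketch. Everything below is hypothetical (the arrival gap and per-request compute time are made-up numbers, not Brainwave measurements); it only illustrates why the first request in a large batch pays a queuing penalty before any computation starts.

```python
# Toy model: requests arrive at a fixed rate, and the accelerator needs a
# fixed amount of compute time per request regardless of batch size.
ARRIVAL_GAP_MS = 2.0   # hypothetical: one request every 2 ms
COMPUTE_MS = 1.0       # hypothetical: 1 ms of compute per request

def worst_case_latency_ms(batch_size: int) -> float:
    """Latency seen by the first request in a batch: it must wait for the
    rest of the batch to arrive before the hardware starts computing."""
    queue_wait = (batch_size - 1) * ARRIVAL_GAP_MS
    return queue_wait + COMPUTE_MS

print(worst_case_latency_ms(1))    # no batching: 1.0 ms
print(worst_case_latency_ms(32))   # batch of 32: 63.0 ms
```

With batch size 1, a request's latency is just its compute time; batching trades that latency away for throughput, which is the trade-off the passage says Brainwave avoids.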
A Microsoft custom data type for efficient inference - Microsoft Research
AI is taking on an increasingly important role in many Microsoft products, such as Bing and Office 365. In some cases, it's being used to power outward-facing features like semantic search in Microsoft Word or intelligent answers in Bing, and deep neural networks (DNNs) are one key to powering these features. One aspect of DNNs is inference: once these networks are trained, they use inference to make judgments about unknown information based on prior learning. In Bing, for example, DNN inference enables multiple search scenarios including feature extraction, captioning, question answering, and ranking, which are all important tasks for customers to get accurate, fast responses to their search queries. These scenarios in Bing have stringent latency requirements and need to happen at an extremely large scale.
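The train-then-infer split described above can be sketched in a few lines. This is a toy single-neuron example with made-up weights, not a Bing model: the point is only that after training, the network is a set of fixed parameters, and inference is a forward pass that scores a previously unseen input.

```python
import math

# Hypothetical parameters standing in for the output of training.
WEIGHTS = [0.8, -0.5, 0.3]
BIAS = 0.1

def infer(features):
    """Score an unseen input using the already-trained parameters."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid -> probability-like score

score = infer([1.0, 0.2, 0.5])   # judging input the model never saw in training
```

A real ranking or captioning model runs millions of such multiply-accumulate operations per query, which is why inference latency at Bing scale needs hardware acceleration.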
Project Catapult - Microsoft Research
Project Catapult is the code name for a Microsoft Research (MSR) enterprise-level initiative that is transforming cloud computing by augmenting CPUs with an interconnected and configurable compute layer composed of programmable silicon. We are living in an era where information grows exponentially and creates the need for massive computing power to process that information. At the same time, advances in silicon fabrication technology are approaching theoretical limits, and Moore's Law has run its course. Chip performance improvements no longer keep pace with the needs of cutting-edge, computationally expensive workloads like software-defined networking (SDN) and artificial intelligence (AI). To create a faster, more intelligent cloud that keeps up with growing appetites for computing power, datacenters need to add other processors distinctly suited for critical workloads.
This is Microsoft's AI pipeline, from research to reality
To seek the origins of Microsoft's interest in artificial intelligence, you need to go way back–well before Amazon, Facebook, and Google were in business, let alone titans of AI. Bill Gates founded Microsoft's research arm in 1991, and AI was an area of investigation from the start. Three years later, in a speech at the National Conference on Artificial Intelligence in Seattle, then-sales chief Steve Ballmer stressed Microsoft's belief in AI's potential and said he hoped that software would someday be smart enough to steer a vehicle. From the start, Microsoft Research (MSR for short) hired more than its fair share of computing's most visionary, accomplished scientists. For a long time, however, it had a reputation for struggling to turn their innovations into features and products that customers wanted. In the '90s, for instance, I recall being puzzled about why its ambitious work in areas such as speech recognition hadn't had a profound effect on Windows and Office. Five years into Satya Nadella's tenure as Microsoft CEO, that stigma is gone.
- Leisure & Entertainment > Games > Computer Games (1.00)
- Information Technology (1.00)
AI is the new normal: Recap of 2018
The year 2018 was a banner year for Azure AI as over a million Azure developers, customers, and partners engaged in the conversation on digital transformation. The next generation of AI capabilities is now infused across Microsoft products and services, including AI capabilities for Power BI. With so many exciting developments, why do these moments stand out? Read on as this blog explains why they matter. These services span pre-built AI capabilities such as Azure Cognitive Services and Cognitive Search, Conversational AI with Azure Bot Service, and custom AI development with Azure Machine Learning (AML).
- Information Technology > Software (0.35)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.30)
Microsoft Announces Brainwave for Real-time AI - Wimoxez
A deep learning acceleration platform, code-named Project Brainwave, has been introduced. Project Brainwave achieves a major leap forward in both performance and flexibility for cloud-based serving of deep learning models. The system was designed for real-time AI, meaning it processes requests as fast as it receives them. Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they are videos, search queries, sensor readings, or interactions with users. Project Brainwave incorporates a software stack designed to support the many popular deep learning frameworks.
Microsoft's AI Roadmap
Due to its nearly limitless potential, artificial intelligence is at the forefront of much of this research, and Microsoft has been making headlines with new technologies, major acquisitions, and innovative ideas. The tech giant has long been moving toward a cloud-based future, and investment in AI is helping solidify its path toward becoming the AI leader in a number of fields. Here are a few technologies Microsoft has invested in recently and the potential impact they'll have on the company's future and society as a whole. Traditional computer hardware can perform complex tasks quickly. However, most hardware is tuned primarily for general-purpose performance, and systems that demand effective real-time performance often rely on specialized hardware, as milliseconds saved can be critical in certain scenarios.
- Asia > China (0.05)
- North America > United States > California > Alameda County > Berkeley (0.05)
- Health & Medicine (1.00)
- Banking & Finance (1.00)
- Information Technology > Services (0.91)
AI news from Microsoft's Build developers conference
At Microsoft's Build developers conference in Seattle this week, the company is unveiling a series of new and updated tools that will help developers incorporate artificial intelligence into their processes and applications, regardless of their background and training in the fast-emerging field of AI. A suite of new and enhanced pre-trained models from Microsoft Cognitive Services, for example, allows developers to easily add AI across vision, speech, language, knowledge, and search to their applications. Many of these pre-trained models are now customizable to meet the specific needs of companies and their customers. Microsoft also is announcing a preview of Project Brainwave, a hardware architecture designed to accelerate real-time AI calculations. Project Brainwave is deployed on a type of chip from Intel called a field programmable gate array, or FPGA, and is integrated with Azure Machine Learning.
Tech giants chart different courses for artificial intelligence
Until now, most firms have been using the Graphics Processing Unit (GPU) architecture, originally developed for video games by firms such as Nvidia, to build out their Artificial Intelligence (AI) programmes. The GPU is much more capable of handling voluminous data than the humble Central Processing Unit (CPU) that is at the heart of most computers that you and I are familiar with. A couple of weeks ago, I wrote in this column about a new hardware chip design for AI, and referenced a start-up firm called AlphaICs, which counts the renowned Vinod Dham among its founders. AlphaICs is trying to redefine the type of chip used for AI applications by designing a chip among a new class of processors called Tensor Processing Units (TPUs) that allow for several more pieces of data to be simultaneously processed on their chips. Hungry AI programmes need to crunch through enormous data stores in order to be able to continuously "learn", and the hope is that this new class of TPU chips, which are themselves an extension of GPUs, will be sufficient to handle the vast amount of data flying in from various devices that connect to the Internet.
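The kind of parallelism these chips exploit can be sketched in a few lines. This is a toy pure-Python example with hypothetical sizes, not actual GPU or TPU code: the dominant AI workload is many independent multiply-accumulate operations, such as the rows of a matrix-vector product, and accelerators win by computing them simultaneously instead of one after another.

```python
def dot(row, vec):
    """One multiply-accumulate chain: the basic unit of DNN compute."""
    return sum(r * v for r, v in zip(row, vec))

def matvec(matrix, vec):
    # Each output element depends only on its own row, so every dot
    # product here is independent. A CPU runs them sequentially; a
    # GPU/TPU dispatches them across thousands of hardware lanes at once.
    return [dot(row, vec) for row in matrix]

matrix = [[1, 2], [3, 4], [5, 6]]
vec = [10, 1]
print(matvec(matrix, vec))   # [12, 34, 56]
```

The independence of those per-row computations is what lets "several more pieces of data" be processed simultaneously, as the column describes.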
AI Weekly: 5 takeaways from Microsoft Build and Google I/O
The AI Weekly usually dives deep into a single subject, but with Microsoft and Google hosting their annual developer conferences, this is no ordinary week. Each conference resulted in dozens of headlines (here's everything from Microsoft's Build and everything from Google's I/O), making it tough to interpret what really matters. Well, here's a handful of important developments to follow from both events at the heart of the AI world.
FPGA and TPU
Perhaps the most important advances in computing power for AI systems announced this week were the beta release of Microsoft's Project Brainwave, which uses field programmable gate array (FPGA) chips, and Google's plans to release a third generation of tensor processing unit (TPU) chips.
Two new AI services worth following
Also announced this week: the ML Kit SDK for fast application of AI for Android and iOS developers is now available, and edge deployment of Microsoft Cognitive Services is coming later this year.
- Information Technology (0.98)
- Leisure & Entertainment > Games > Chess (0.49)