Microsoft announced the general availability of Microsoft SharePoint Syntex as of Oct. 1, 2020. This is the first packaged product to come out of the code-name Project Cortex initiative first announced in November 2019. Project Cortex reflects Microsoft's ongoing investment in intelligent content services and graph APIs to proactively explore and categorize digital assets from Microsoft 365 and other connected sources. Teams need tools to help them collaborate and stay productive while working remotely. SharePoint Syntex will be available to M365 customers with E3 or E5 licenses for a small per-user uplift.
Microsoft has been working to deliver a knowledge-management service for several years. Last year, at its Ignite IT Pro conference, it officially announced plans for its latest iteration of such a service under the codename "Project Cortex." At this year's Ignite, Microsoft is announcing the revamp of Cortex, as well as its plans for the rollout of the first few Cortex components. Up until this week, it seemed as if Project Cortex was going to be a single, centralized service that could be accessed inside existing Microsoft applications like Outlook, SharePoint, and more. Microsoft officials had been calling Cortex the first new major Microsoft 365 service since Microsoft Teams was launched in 2017.
In his 1999 book Management Challenges for the 21st Century, Austrian-born American management consultant, professor, and author Peter Drucker wrote of the importance of "the coordination and exploitation of organizations' knowledge resources, in order to create benefit and competitive advantage." Today, businesses have embraced his point, demonstrating how maintaining and growing an organization's information to assist its employees and customers offers those benefits and advantages. As a practice, this collecting and sharing of information is referred to as knowledge management. Even prior to Drucker's observation, the Consortium for Service Innovation had already begun its work in 1992 on Knowledge-Centered Service (previously known as Knowledge-Centered Support), or KCS. KCS is a method that focuses on organizational knowledge as a key asset that can benefit, among other things, customer service delivery.
"Knowledge is power," goes the famous saying, and, if it's true, then certainly more knowledge means more power. This is the goal of knowledge management products-- to arm organizations with increased power in the form of actionable insights about customers, market requirements, and evolving trends. The COVID-19 pandemic has made it clear that knowledge management is essential. The health crisis has impacted the way organizations and individuals work, as well as how they support and communicate with customers and partners. By transforming data into information and then getting it into the hands of the right people at the right time, the resulting knowledge can be used for decisions that can make a significant impact.
"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean one hasn't constrained it enough by imitating a correct output, and one needs to go further: writing the first few words or the first sentence of the target output may be necessary.
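The technique described above can be sketched in plain Python. This is an illustrative prompt-construction helper only (the function name and wording are made up, and the actual model call is omitted): it frames the task and then writes the opening of the desired answer, so the completion is constrained to continue in that mode rather than pivot elsewhere.

```python
# Sketch: constrain a language model by beginning the target output
# inside the prompt itself. Any completion API would receive this
# string as its prompt; no model call is made here.

def build_summarization_prompt(passage: str) -> str:
    """Frame the task, then start the answer so the model stays in
    'explain simply' mode instead of drifting to another completion."""
    return (
        "My second grader asked me what this passage means:\n"
        f'"{passage}"\n'
        "I rephrased it for him, in plain language a second grader "
        "can understand:\n"
        '"'  # the first character of the target output, written for the model
    )

prompt = build_summarization_prompt(
    "Photosynthesis converts light energy into chemical energy."
)
print(prompt)
```

The trailing opening quote is the whole trick: the model's most likely continuation is the simplified rephrasing itself, closed by a matching quote.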
Xin Luna Dong, Xiang He, Andrey Kan, Xian Li, Yan Liang, Jun Ma, Yifan Ethan Xu, Chenwei Zhang, Tong Zhao, Gabriel Blanco Saldana, Saurabh Deshpande, Alexandre Michetti Manduca, Jay Ren, Surender Pal Singh, Fan Xiao, Haw-Shiuan Chang, Giannis Karamanolakis, Yuning Mao, Yaqing Wang, Christos Faloutsos, Andrew McCallum, Jiawei Han
Can one build a knowledge graph (KG) for all products in the world? Knowledge graphs have firmly established themselves as valuable sources of information for search and question answering, and it is natural to wonder if a KG can contain information about products offered at online retail sites. There have been several successful examples of generic KGs, but organizing information about products poses many additional challenges, including sparsity and noise of structured data for products, complexity of the domain with millions of product types and thousands of attributes, heterogeneity across a large number of categories, and a large and constantly growing number of products. We describe AutoKnow, our automatic (self-driving) system that addresses these challenges. The system includes a suite of novel techniques for taxonomy construction, product property identification, knowledge extraction, anomaly detection, and synonym discovery. AutoKnow is (a) automatic, requiring little human intervention, (b) multi-scalable, scalable in multiple dimensions (many domains, many products, and many attributes), and (c) integrative, exploiting rich customer behavior logs. AutoKnow has been operational in collecting product knowledge for over 11K product types.
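To make the abstract's components concrete, here is a toy sketch (not Amazon's implementation; the taxonomy, products, and vocabulary are all made up) of the kind of structure such a system populates: a product-type taxonomy plus typed attribute triples, with a trivial vocabulary-based anomaly check on extracted values.

```python
# Toy product knowledge graph: a type taxonomy plus (product,
# attribute, value) triples, with a simple anomaly check that flags
# extraction noise -- values outside a known attribute vocabulary.

# Hypothetical taxonomy: child type -> parent type
taxonomy = {"green tea": "tea", "tea": "beverage"}

# Triples extracted from product listings (P2's flavor is noise)
triples = [
    ("P1", "type", "green tea"),
    ("P1", "flavor", "jasmine"),
    ("P2", "type", "tea"),
    ("P2", "flavor", "purse"),
]

known_flavors = {"jasmine", "mint", "lemon"}

def ancestors(product_type):
    """Walk the taxonomy upward from a product type."""
    chain = []
    while product_type in taxonomy:
        product_type = taxonomy[product_type]
        chain.append(product_type)
    return chain

def flavor_anomalies(triples, vocab):
    """Flag flavor values that fall outside the known vocabulary."""
    return [(p, v) for p, a, v in triples
            if a == "flavor" and v not in vocab]

print(ancestors("green tea"))              # ['tea', 'beverage']
print(flavor_anomalies(triples, known_flavors))  # [('P2', 'purse')]
```

A real system replaces the hand-written vocabulary and taxonomy with models trained on catalog data and customer behavior logs, but the graph shape is the same.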
Evidence suggests that large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control. Recently, it has been shown that Transformer-based models succeed in consistent reasoning over explicit symbolic facts, under a "closed-world" assumption. However, in an open-domain setup, it is desirable to tap into the vast reservoir of implicit knowledge already encoded in the parameters of pre-trained LMs. In this work, we provide a first demonstration that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements. To do this, we describe a procedure for automatically generating datasets that teach a model new reasoning skills, and demonstrate that models learn to effectively perform inference which involves implicit taxonomic and world knowledge, chaining and counting. Finally, we show that "teaching" the models to reason generalizes beyond the training distribution: they successfully compose the usage of multiple reasoning skills in single examples. Our work paves a path towards open-domain systems that constantly improve by interacting with users who can instantly correct a model by adding simple natural language statements.
As the U.S. and countries around the world begin to ease, or at least think about easing, restrictions stemming from the COVID-19 pandemic, executives at leading software and services organizations are reflecting on the lasting impact we can expect. Greater use of cloud services, accelerated digital transformation, a need for the latest and greatest technology, and a generally stronger appreciation for knowledge management are among the key changes being seen. To help shed light on the lessons learned from the novel coronavirus and how it is impacting the way public and private organizations work internally and respond to customers, KMWorld asked KM leaders about the changes they expect in a post-pandemic world. Answers have been edited and condensed. Trustworthy and easy-to-find information is critical during uncertain times. It appears that we are starting to flatten the COVID-19 curve.
While the biomedical community has published several "open data" sources in the last decade, most researchers still endure severe logistical and technical challenges to discover, query, and integrate heterogeneous data and knowledge from multiple sources. To tackle these challenges, the community has experimented with Semantic Web and linked data technologies to create the Life Sciences Linked Open Data (LSLOD) cloud. In this paper, we extract schemas from more than 80 publicly available biomedical linked data graphs into an LSLOD schema graph and conduct an empirical meta-analysis to evaluate the extent of semantic heterogeneity across the LSLOD cloud. We observe that several LSLOD sources exist as stand-alone data sources that are not inter-linked with other sources, use unpublished schemas with minimal reuse or mappings, and have elements that are not useful for data integration from a biomedical perspective. We envision that the LSLOD schema graph and the findings from this research will aid researchers who wish to query and integrate data and knowledge from multiple biomedical sources simultaneously on the Web.
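The paper's meta-analysis over per-source schemas can be sketched in a few lines (the source names and classes below are invented for illustration): extract each source's schema elements, then flag sources that share no classes with any other source, i.e. the stand-alone, hard-to-integrate ones the abstract describes.

```python
# Illustrative sketch of an LSLOD-style linkage check: which sources'
# schemas share no classes with any other source, making them
# stand-alone from a data-integration perspective?

# Hypothetical per-source schema classes
schemas = {
    "SourceA": {"Gene", "Protein"},
    "SourceB": {"Protein", "Drug"},
    "SourceC": {"Trial"},  # reuses nothing: stand-alone
}

def stand_alone_sources(schemas):
    """Sources whose schema shares no class with any other source."""
    isolated = []
    for name, classes in schemas.items():
        others = set().union(
            *(c for n, c in schemas.items() if n != name))
        if not classes & others:
            isolated.append(name)
    return isolated

print(stand_alone_sources(schemas))  # ['SourceC']
```

The real study works over RDF schema graphs extracted from 80+ SPARQL endpoints rather than hand-written sets, but the shape of the analysis is the same.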
Managing rich content in a QnA Maker chatbot has always been a challenge, since users had to edit raw markdown. Now QnA Maker enables you to add and edit rich content right in the portal, so what you see in the edit experience is what you see in the bot response. We're also introducing new access roles (Editor and Reader) that can be assigned to a QnA Maker service to restrict allowed operations. The AI Show's favorite links: Don't miss new episodes, subscribe to the AI Show at https://aka.ms/aishowsubscribe