Top 10 Keyword Extraction API

#artificialintelligence

This article is brought to you by the Eden AI team. We allow you to test and use in production a large number of AI engines from different providers directly through our API and platform. If you are a solution provider and want to integrate Eden AI, contact us at contact@edenai.co. Intro: In this article, we are going to see how to easily integrate a keyword extraction engine into your project and how to choose and access the right engine according to your data. Definition: Keyword extraction (a…
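To make the integration idea concrete, here is a minimal sketch of calling a keyword-extraction engine over REST. The endpoint URL, provider name, and response shape below are assumptions for illustration, not Eden AI's documented contract; consult the provider's docs for the real interface.

```python
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical credential
# Hypothetical endpoint path; the real provider's URL will differ.
URL = "https://api.example-aggregator.com/v2/text/keyword_extraction"

def extract_keywords(text: str, provider: str = "some_provider") -> list[str]:
    """Send text to a keyword-extraction engine and return its keywords."""
    response = requests.post(
        URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"providers": provider, "text": text, "language": "en"},
        timeout=30,
    )
    response.raise_for_status()
    payload = response.json()
    # Assumed response shape: {provider: {"items": [{"keyword": ...}, ...]}}
    return [item["keyword"] for item in payload[provider]["items"]]

if __name__ == "__main__":
    print(extract_keywords("Keyword extraction pulls salient terms from text."))
```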


Phoebe: A Learning-based Checkpoint Optimizer

arXiv.org Artificial Intelligence

Easy-to-use programming interfaces paired with cloud-scale processing engines have enabled big data system users to author arbitrarily complex analytical jobs over massive volumes of data. However, as the complexity and scale of analytical jobs increase, they encounter a number of unforeseen problems: hotspots with large intermediate data on temporary storage, longer job recovery times after failures, and worse query optimizer estimates are among the issues we are facing at Microsoft. To address these issues, we propose Phoebe, an efficient learning-based checkpoint optimizer. Given a set of constraints and an objective function at compile time, Phoebe is able to determine how to decompose job plans and the optimal set of checkpoints whose outputs to preserve to durable global storage. Phoebe consists of three machine learning predictors and one optimization module. For each stage of a job, Phoebe makes accurate predictions of: (1) the execution time, (2) the output size, and (3) the start/end time, taking into account inter-stage dependencies. Using these predictions, we formulate checkpoint optimization as an integer programming problem and propose a scalable heuristic algorithm that meets the latency requirements of the production environment. We demonstrate the effectiveness of Phoebe on production workloads, and show that we can free the temporary storage on hotspots by more than 70% and restart failed jobs 68% faster on average, with minimal performance impact. Phoebe also shows that adding multiple sets of checkpoints is not cost-efficient, which dramatically reduces the complexity of the optimization.
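The abstract formulates checkpoint placement as an integer program solved with a scalable heuristic. The paper's actual predictors and solver are not reproduced here; the following is only a minimal greedy sketch under assumed inputs (per-stage recovery-time savings and checkpoint storage costs) to convey the flavor of the problem.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    recovery_savings: float  # estimated restart time saved if checkpointed (s)
    output_size: float       # predicted output size to persist (GB)

def choose_checkpoints(stages: list[Stage], storage_budget_gb: float) -> list[str]:
    """Greedy heuristic: checkpoint the stages with the best savings per GB
    until the durable-storage budget is exhausted. Phoebe's real optimizer is
    an integer program over dependency-aware predictions; this only sketches
    the shape of a scalable heuristic."""
    ranked = sorted(stages, key=lambda s: s.recovery_savings / s.output_size,
                    reverse=True)
    chosen, used = [], 0.0
    for stage in ranked:
        if used + stage.output_size <= storage_budget_gb:
            chosen.append(stage.name)
            used += stage.output_size
    return chosen

jobs = [Stage("extract", 120, 40), Stage("join", 300, 25), Stage("agg", 60, 5)]
print(choose_checkpoints(jobs, storage_budget_gb=30))  # ['join', 'agg']
```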


Towards Personalized and Human-in-the-Loop Document Summarization

arXiv.org Artificial Intelligence

The ubiquitous availability of computing devices and the widespread use of the internet continuously generate large amounts of data. As a result, the amount of available information on any given topic far exceeds humans' capacity to process it, causing what is known as information overload. To cope efficiently with large amounts of information and generate content of significant value to users, we need to identify, merge and summarise information. Data summaries can gather related information into a shorter format that enables answering complicated questions, gaining new insight and discovering conceptual boundaries. This thesis focuses on three main challenges in alleviating information overload using novel summarisation techniques. It further intends to facilitate the analysis of documents to support personalised information extraction. The thesis separates the research issues into four areas, covering (i) feature engineering in document summarisation, (ii) traditional static and inflexible summaries, (iii) traditional generic summarisation approaches, and (iv) the need for reference summaries. We propose novel approaches to tackle these challenges by: (i) enabling automatic intelligent feature engineering, (ii) enabling flexible and interactive summarisation, and (iii) utilising intelligent and personalised summarisation approaches. The experimental results demonstrate the efficiency of the proposed approaches compared with other state-of-the-art models. We further propose solutions to the information overload problem in different domains through summarisation, covering network traffic data, health data and business process data.
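The thesis does not publish its models here, so as a point of reference, below is a minimal frequency-based extractive baseline: the classic starting point that personalised and interactive summarisers improve upon. All names and scoring choices are illustrative assumptions.

```python
import re
from collections import Counter

def summarise(text: str, n_sentences: int = 2) -> str:
    """Score sentences by average word frequency and keep the top n, in
    original order. A simple extractive baseline, not the thesis's
    personalised, human-in-the-loop approach."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in top)

doc = ("Summarisation condenses documents. Summarisation helps readers. "
       "Unrelated trivia rarely helps.")
print(summarise(doc, n_sentences=1))
```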


Dremio launches data lake service running on AWS cloud

#artificialintelligence

All the sessions from Transform 2021 are available on-demand now. Dremio today launched a cloud service, dubbed Dremio Cloud, that creates a data lake based on an in-memory SQL engine which runs queries against data held in an object-based storage system. The goal is to make it easier for organizations to take advantage of a data lake without having to employ an internal IT team to manage it, said Tomer Shiran, chief product officer at Dremio. An organization can now start using Dremio Cloud in as little as five minutes, he said. Based on Dremio's existing SQL Lakehouse platform, the Dremio Cloud service runs on the Amazon Web Services (AWS) public cloud.
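For readers new to the "SQL engine over object storage" pattern the article describes, here is a generic sketch of the idea using DuckDB over a local Parquet file. This is not Dremio's API; it only illustrates querying file-resident data directly with SQL.

```python
import duckdb

# Write a tiny Parquet file, then query it in place: no data is loaded
# into a managed warehouse first, mirroring the data-lake pattern.
con = duckdb.connect()
con.execute("COPY (SELECT 1 AS id, 'ok' AS status) "
            "TO 'events.parquet' (FORMAT PARQUET)")
rows = con.execute(
    "SELECT status, COUNT(*) FROM 'events.parquet' GROUP BY status").fetchall()
print(rows)  # [('ok', 1)]
```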


Push-down query capabilities: Five questions to ask your cloud BI provider

ZDNet

Software-as-a-service (SaaS) offers many benefits, including but not limited to elasticity: the ability to shrink and grow storage and compute resources on demand. Clients of most leading enterprise business intelligence (BI) platforms enjoy this cloud elasticity benefit, but at a cost. Ultimately, elasticity requires both the application and data components (compute and storage) to be elastic, and cloud-native BI platforms therefore require that on-premises data be ingested into the cloud platform before it can be analyzed. But not all organizations are ready to let go of the data inside their firewalls, nor to commit to a single cloud provider -- most are opting for a hybrid on-premises and multicloud environment. Here's a look at how the cloud leaders stack up, the hybrid market, and the SaaS players that run your company, as well as their latest strategic moves.
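The "push-down" idea the headline refers to is simply shipping the query to the engine that holds the data, rather than hauling every row to the client. A minimal self-contained sketch using SQLite (illustrative only, not any vendor's BI API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("emea", 120.0), ("apac", 80.0), ("emea", 45.5)])

# Without push-down: fetch everything, then filter in the application.
all_rows = conn.execute("SELECT region, amount FROM sales").fetchall()
emea_total = sum(amount for region, amount in all_rows if region == "emea")

# With push-down: the predicate and aggregation run where the data lives,
# so only one small result crosses the wire.
(pushed_total,) = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE region = 'emea'").fetchone()

assert emea_total == pushed_total == 165.5
```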


MongoDB stitching the cloud and edge together

ZDNet

In reviewing this year's batch of announcements for MongoDB's online user conference, there's a lot that fills in the blanks left open last year, as reported by Stephanie Condon. But the sleeper is the unification of a platform that has expanded over the past few years with mobile and edge processing capabilities, not to mention a search engine, and the reality that Atlas, its cloud database-as-a-service (DBaaS) offering, now accounts for the majority of new installs. Last year, MongoDB announced previews of Atlas Data Lake, a service within MongoDB's cloud that lets you query data stored in Amazon S3 cloud storage; full-text search; plans to integrate the then recently acquired mobile Realm database platform with the Stitch serverless development environment; and autoscaling of the Atlas cloud service. This year, all of those previews are going GA. Rounding it out is the announcement of the next release of MongoDB, version 4.4, which includes some modest enhancements to querying and sharding. The cloud is clearly MongoDB's future.


A Survey on the Use of Preferences for Virtual Machine Placement in Cloud Data Centers

arXiv.org Artificial Intelligence

With the rapid development of virtualization techniques, cloud data centers allow for cost-effective, flexible, and customizable deployments of applications on virtualized infrastructure. Virtual machine (VM) placement aims to assign each virtual machine to a server in the cloud environment, and it is of paramount importance to the design of cloud data centers. Typically, VM placement involves complex relations and multiple design factors, as well as local policies that govern the assignment decisions. It also involves different constituents, including cloud administrators and customers, who might have disparate preferences when opting for a placement solution. Thus, it is often valuable to return not only an optimized solution to the VM placement problem but also a solution that reflects the given preferences of the constituents. In this paper, we provide a detailed review of the role of preferences in the recent literature on VM placement. We further discuss key challenges and identify possible research opportunities to better incorporate preferences within the context of VM placement.
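To make "preference-aware placement" concrete, here is a toy sketch: servers have CPU capacity, each constituent scores candidate servers, and a VM goes to the feasible server with the best weighted score. The capacities, scoring functions, and weights are all assumptions for illustration; the survey covers far richer preference models.

```python
servers = {"s1": {"cpu": 16}, "s2": {"cpu": 8}}
used = {name: 0 for name in servers}

def admin_pref(server: str) -> float:
    # e.g., administrators prefer consolidating onto already-busy servers
    return used[server]

def customer_pref(server: str) -> float:
    # e.g., customers prefer servers with the most headroom
    return servers[server]["cpu"] - used[server]

def place(vm_cpu: int, w_admin: float = 0.5, w_customer: float = 0.5) -> str:
    """Assign one VM to the feasible server maximizing the weighted score."""
    feasible = [s for s in servers if used[s] + vm_cpu <= servers[s]["cpu"]]
    if not feasible:
        raise RuntimeError("no server can host this VM")
    best = max(feasible, key=lambda s: w_admin * admin_pref(s)
                                       + w_customer * customer_pref(s))
    used[best] += vm_cpu
    return best

for cpu in (4, 4, 8):
    print(place(cpu))
```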


Kubernetes For AI Hyperparameter Search Experiments | NVIDIA Developer Blog

#artificialintelligence

The software industry has recently seen a huge shift in how software deployments are done, thanks to technologies such as containers and orchestrators. While container technologies have been around for some time, credit goes to Docker for making containers mainstream by greatly simplifying the process of creating, managing and deploying containerized applications. Teams of developers and data scientists are increasingly moving their training and inference workloads from a one-developer-one-workstation model to shared, centralized infrastructure to improve resource utilization and sharing. With container orchestration tools such as Kubernetes, Docker Swarm and Marathon, developers and data scientists get more control over how and when their apps run, and ops teams don't have to deal with deploying and managing workloads. NVIDIA actively contributes to making container technologies and orchestrators GPU-friendly, enabling the same deployment best practices that exist for traditional software development and deployment to be applied to AI software development.
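As a rough sketch of the search layer such a setup orchestrates, the snippet below enumerates hyperparameter trials that, on a cluster, would each run as one containerized job (for example, one Kubernetes Job per trial). The grid values and the train() stub are placeholders, not the blog post's actual experiment.

```python
import itertools
import random

def train(lr: float, batch_size: int) -> float:
    """Placeholder objective; a real trial would launch a containerized
    training run and report its validation metric back."""
    return random.random()  # pretend validation accuracy

grid = {"lr": [1e-3, 1e-2, 1e-1], "batch_size": [32, 64]}
trials = [dict(zip(grid, values))
          for values in itertools.product(*grid.values())]

# On a cluster, these trials would be dispatched in parallel; here we
# just evaluate them sequentially and keep the best configuration.
best = max(trials, key=lambda params: train(**params))
print("best hyperparameters:", best)
```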


Microsoft Teardown

#artificialintelligence

We dive into the strategies Microsoft is pursuing across cloud, enterprise IT, AI, gaming, and more to see how the company is positioning itself for the future. As the world's most valuable company, with a market cap currently hovering around $780B, Microsoft may be the next company to reach the $1T threshold. While it may not grab as many headlines as its buzzier tech giant counterparts, the company is quietly adapting across its core business areas, led by a future-focused Satya Nadella. Since assuming the CEO role in 2014, Nadella has deprioritized the Windows offering that initially helped Microsoft become a household name, refocusing the company's efforts on implementing AI across all its products and services. That's not the only change: in addition to an increased focus on AI, cloud and subscription services have become unifying themes across products. And to maintain its dominance in enterprise technology, Microsoft is expanding in new areas -- like gaming and personal computing -- that leverage the company's own cloud infrastructure.

Below, we outline Microsoft's key priorities, initiatives, investments, and acquisitions across its various business segments. The majority of Microsoft's revenue comes from its enterprise technologies, which fall under its Intelligent Cloud and Productivity & Business Processes segments. The Productivity & Business Processes segment includes software products like Office 365, Skype, LinkedIn, and Dynamics 365, Microsoft's ERP (enterprise resource planning) and CRM (customer relationship management) platform. The Intelligent Cloud segment includes the Azure cloud platform, the Visual Studio developer platform, and Windows Server, a version of Microsoft's proprietary operating system optimized for running in the cloud. Outside of enterprise technology, Microsoft generates revenue from products like Xbox and Microsoft Surface, among other areas; these products are bucketed into the company's More Personal Computing segment.

In addition to its in-house efforts, Microsoft has a number of initiatives that look to support promising young businesses. These include Microsoft's venture capital arm, M12; its accelerator, ScaleUp; and other initiatives like Microsoft for Startups.


Post-prognostics decision in Cyber-Physical Systems

arXiv.org Artificial Intelligence

Abstract-- Prognostics and Health Management (PHM) offers several benefits for predictive maintenance. It predicts the future behavior of a system as well as its Remaining Useful Life (RUL). The RUL is used to plan maintenance operations so as to avoid failures and downtime and to optimize the costs of maintenance and failure. However, as industry develops, assets are nowadays distributed, which is why PHM needs to be developed using new IT. In our work, we propose a PHM solution based on a cyber-physical system in which the physical side is connected to the PHM analysis processes, which are developed in the cloud so they can be shared and benefit from the cloud's characteristics. Keywords-- cyber-physical systems (CPS), Prognostics and Health Management (PHM), post-prognostics decision, cloud computing, Internet of Things.
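To give a flavor of what an RUL estimate looks like, here is a minimal sketch under an assumed toy model: fit a linear trend to a degrading health indicator and extrapolate to a failure threshold. Real PHM pipelines use far richer degradation models and uncertainty quantification; the data and threshold below are invented.

```python
import numpy as np

def estimate_rul(times: np.ndarray, health: np.ndarray,
                 failure_threshold: float) -> float:
    """Return the estimated time remaining until the fitted health trend
    crosses the failure threshold."""
    slope, intercept = np.polyfit(times, health, deg=1)
    if slope >= 0:
        return float("inf")  # no degradation trend detected
    t_failure = (failure_threshold - intercept) / slope
    return max(t_failure - times[-1], 0.0)

t = np.array([0.0, 10.0, 20.0, 30.0])      # hours in operation
h = np.array([1.00, 0.92, 0.85, 0.76])     # degrading health indicator
print(f"estimated RUL: {estimate_rul(t, h, failure_threshold=0.5):.1f} hours")
```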