How AI Protects PayPal's Payments and Performance – The Official NVIDIA Blog


With advances in machine learning and the deployment of neural networks, logistic regression-powered models are expanding their uses throughout PayPal. PayPal's deep learning system is able to filter out deceptive merchants and crack down on sales of illegal products. Kutsyy explained that the machines can identify "why transactions fail, monitoring businesses more efficiently," avoiding the need to buy more hardware for problem solving.

What is hardcore data science – in practice?


For personalized recommendations, for example, we have been working with learning-to-rank methods that learn individual rankings over item sets. Figure 1 shows a typical data science workflow: raw data is turned into features and fed into learning algorithms, resulting in a model that is applied to future data. This pipeline is iterated and improved many times, trying out different features, different forms of preprocessing, different learning methods, or maybe even going back to the source and adding more data sources. Probably the main difference between production systems and data science systems is that production systems are real-time systems that run continuously.
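One turn of the iterative workflow described above can be sketched as follows. This is a generic scikit-learn illustration with synthetic data, not the ranking system from the article:

```python
# One iteration of the Figure 1 loop: raw data -> features -> learning
# algorithm -> model applied to future data. Illustrative sketch only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_future, y_train, y_future = train_test_split(X, y, random_state=0)

# Pick one combination of preprocessing and learner, fit, evaluate.
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipeline.fit(X_train, y_train)

# "Future data": examples the model has never seen.
accuracy = pipeline.score(X_future, y_future)
print(f"held-out accuracy: {accuracy:.2f}")
# In practice this whole block is run many times: swap features,
# preprocessing, or the learning method, and measure again.
```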

Optimization tips and tricks on Azure SQL Server for Machine Learning Services


By using memory-optimized tables, resume features are stored in main memory and disk I/O can be significantly reduced. If the database engine detects more than 8 physical cores per NUMA node or socket, it automatically creates soft-NUMA nodes that ideally contain 8 cores each. We then created 4 SQL resource pools and 4 external resource pools [7], affinitizing each pair of pools to the same set of CPUs within a node. We can set up resource governance for R services on SQL Server [8] by routing the scoring batches into different workload groups (Figure …).
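The pool-pairing pattern described above looks roughly like the following T-SQL. The names (`pool0`, `ep0`, `grp0`) and the CPU ranges are hypothetical; this is a sketch of the Resource Governor pattern, not the authors' exact configuration:

```sql
-- Hypothetical sketch: pin one SQL pool and one external (R) pool
-- to the same CPUs, then route work through a shared workload group.
CREATE RESOURCE POOL pool0
    WITH (AFFINITY NUMANODE = (0));      -- SQL workload on node 0

CREATE EXTERNAL RESOURCE POOL ep0
    WITH (AFFINITY CPU = (0 TO 7));      -- R processes on the same CPUs

CREATE WORKLOAD GROUP grp0
    USING pool0, EXTERNAL ep0;           -- one partition of scoring batches

ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Repeating this for each NUMA node yields the four pool pairs mentioned in the text, keeping each scoring batch's SQL and R work on the same cores.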

Microsoft Infuses SQL Server With Artificial Intelligence


SQL Server 2017, which will run on both Windows and Linux, is inching closer to release with a set of artificial intelligence capabilities that will change the way enterprises derive value from their business data, according to Microsoft. The Redmond, Wash., software giant on April 19 released SQL Server 2017 Community Technology Preview (CTP) 2.0. Joseph Sirosh, corporate vice president of the Microsoft Data Group, described the "production-quality" database software as "the first RDBMS [relational database management system] with built-in AI." Download links and instructions on installing the preview on Linux are available in this TechNet post from the SQL Server team at Microsoft. It's no secret to anyone keeping tabs on Microsoft lately that the company is betting big on AI, progressively baking its machine learning and cognitive computing technologies into a wide array of its cloud services, business software offerings and consumer products. "In this preview release, we are introducing in-database support for a rich library of machine learning functions, and now for the first time Python support (in addition to R)," said Sirosh in the April 19 announcement.
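The in-database Python support Sirosh describes is exposed through the `sp_execute_external_script` stored procedure. A minimal sketch, assuming a hypothetical `dbo.Sales` table, might look like this:

```sql
-- Hedged sketch: run a Python script inside the database engine.
-- InputDataSet / OutputDataSet are the default data-frame names;
-- dbo.Sales is an assumed example table.
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
# Summarize the query result without moving data out of the server.
OutputDataSet = InputDataSet.describe().reset_index()
',
    @input_data_1 = N'SELECT amount FROM dbo.Sales';
```

The appeal is that the script runs next to the data, so large tables never leave the server for scoring or summarization.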

Where Does Automated Customer Benchmarking Make Sense?


A customer benchmarking engine is an emerging technology that uses an artificial intelligence approach to automate the reasoning underlying data-driven benchmarking. Its benefits have been discussed at length elsewhere. Briefly, it uncovers comparative insights about customers that empower customer-focused employees to be more proactive, or that are shown directly to those customers as a premium information service. The business benefits include churn reduction, market differentiation, extra revenue, and deeper customer relationships. But automated customer benchmarking doesn't always make sense.

The Right AI Approach for better asset performance – WithTheBest – Medium


Vlad Lata is Chief Technology Officer and co-founder of KONUX, an industrial IoT company specialised in creating customised sensor and AI-based analytics solutions for the industrial world. Founded in 2014 in Munich, KONUX combines Silicon Valley digital thinking with German engineering, and focuses on improving asset availability and network capacity and reducing maintenance costs for its clients, namely rail operators and industrial companies. Heading up product development at KONUX with a focus on AI and big data, Vlad Lata and his engineering department have made huge strides in predictive analytics, smart sensors, and offering a complete end-to-end solution for their clients. We're thrilled to hear more during Lata's talk at the AI With the Best online conference, April 29–30, and are pleased to have had the chance to interview the KONUX CTO ahead of the event. Q: KONUX is famed for its highly accurate smart motion sensors paired with AI data analytics – what sets you apart from other engineering firms?

Unlocking the True Value of Finance as a Business Partner – Share Talk


How can finance become a better business partner by utilizing emerging technologies? Here are 7 recommendations on how to unlock finance's potential. Over the last couple of years, companies have started to prepare for the 2020s and beyond, constantly responding to their rapidly changing environment. These changes are powered by emerging technologies, macroeconomic trends, consumer expectations and business models. Until recently, developments have been traditional and linear, following an incremental pace.

Google Reveals Technical Specs and Business Rationale for TPU Processor – PPP Focus


By way of example, the Google engineers said that if people used voice search for three minutes a day, running the associated speech recognition tasks without the TPU would have required the company to have twice as many datacenters. Based on the scant details Google provides about its data center operations – which include 15 major sites – the search-and-ad giant was looking at additional capital expenditures of perhaps $15bn, assuming that a large Google data center costs about $1bn. As it applied machine learning capabilities to more of its products and applications over the past several years, Google said it realized it needed to supercharge its hardware as well as its software. (It took years for Kubernetes and TensorFlow to become publicly available, both of which Google had used extensively in-house, albeit in somewhat different forms.) Thanks to the TPU's inherent efficiency, the chips can squeeze more operations per second out of the silicon, allowing more sophisticated and powerful machine learning models to deliver results more rapidly.
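The $15bn figure is back-of-the-envelope arithmetic over the numbers quoted above:

```python
# Check of the capex estimate: doubling capacity across ~15 major
# sites, at roughly $1bn per large Google datacenter (article's figures).
major_sites = 15
cost_per_datacenter_bn = 1.0

extra_capex_bn = major_sites * cost_per_datacenter_bn
print(f"~${extra_capex_bn:.0f}bn in additional capital expenditure")
```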

Revealed: Blueprints to Google's AI FPU aka the Tensor Processing Unit


Analysis: In 2013, Google realized that its growing dependence on machine learning would force it to double the number of data centers it operates to handle projected workloads. Based on the scant details Google provides about its data center operations – which include 15 major sites – the search-and-ad giant was looking at additional capital expenditures of perhaps $15bn, assuming that a large Google data center costs about $1bn. The internet king assembled a team to produce a custom chip capable of handling part of its neural network workflow known as inference, which is where the software makes predictions based on data developed through the time-consuming and computationally intensive training phase. The processor sits on the PCIe bus and accepts commands from the host CPU: it is akin to a yesteryear discrete FPU or math coprocessor, but obviously souped up to today's standards. The goal was to improve cost-performance over GPUs tenfold.
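One reason inference suits such a coprocessor is that, once training is done, the arithmetic can be run at low precision. The following NumPy sketch shows a generic 8-bit quantization scheme of the kind inference accelerators exploit; it is an illustration of the idea, not Google's actual design:

```python
# Illustrative sketch: float32 weights from the training phase are
# quantized to int8, and the matrix multiply (the accelerator's core
# operation) runs as integer multiply-accumulates.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)  # from training
inputs = rng.normal(size=(4,)).astype(np.float32)

# Symmetric linear quantization to int8 for both operands.
s_w = np.abs(weights).max() / 127.0
s_x = np.abs(inputs).max() / 127.0
w_q = np.round(weights / s_w).astype(np.int8)
x_q = np.round(inputs / s_x).astype(np.int8)

# Integer multiply-accumulate, then rescale back to floats.
acc = w_q.astype(np.int32) @ x_q.astype(np.int32)
approx = acc * (s_w * s_x)

exact = weights @ inputs
err = np.abs(approx - exact).max()
print("max abs error vs float32:", err)
```

The approximation error is small for well-trained networks, while 8-bit integer units are far cheaper in silicon and power than floating-point ones, which is where the cost-performance headroom comes from.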

First In-Depth Look at Google's TPU Architecture


Four years ago, Google started to see the real potential for deploying neural networks to support a large number of new services. During that time it was also clear that, given the existing hardware, if people did voice searches for three minutes per day or dictated to their phones for short periods, Google would have to double the number of datacenters just to run machine learning models. The need for a new architectural approach was clear, Google Distinguished Hardware Engineer Norman Jouppi tells The Next Platform, but it required some radical thinking. As it turns out, that's exactly what he is known for. One of the chief architects of the MIPS processor, Jouppi has pioneered new technologies in memory systems and is one of the most recognized names in microprocessor design.