Las Vegas's search for an adaptive security solution led it to deploy Darktrace AI across its enterprise, cloud and industrial networks.

Background

In recent years, Las Vegas has become a prototypical Smart City. As riders glide down the Strip aboard the first completely autonomous shuttle ever deployed on a public roadway, they are unlikely to notice much trash on the sidewalk – the city's surveillance cameras stream to an AI service that directs clean-up crews towards concentrations of litter. And when rush hour approaches, its passengers can rest assured that an array of connected sensors is helping officials anticipate gridlock at busy intersections. But while smart infrastructure enables Las Vegas to achieve new heights of efficiency, conventional security tools are largely ill-equipped to defend the hybrid cloud and industrial networks that power this infrastructure.
Specialized replicated compute accelerators (RCAs) are multiplied up by placing multiple copies per ASIC, multiple ASICs per server, multiple servers per rack, and multiple racks per datacenter. The server controller can be an FPGA, a microcontroller, or a Xeon processor. Power delivery and the cooling system are customized to the ASIC's needs. If required, DRAM is placed on the PCB as well. Each ASIC interconnects its RCAs using a customized on-chip network.
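The multiplication across this hierarchy can be illustrated with a back-of-the-envelope count. The figures below are purely hypothetical, chosen only to show how the per-level replication factors compound:

```python
# Hypothetical replication factors for the RCA hierarchy described above.
# None of these numbers come from the text; they are illustrative only.
rcas_per_asic = 8
asics_per_server = 4
servers_per_rack = 16
racks_per_datacenter = 100

# Total accelerators is simply the product of the per-level factors.
total_rcas = (rcas_per_asic * asics_per_server
              * servers_per_rack * racks_per_datacenter)
print(total_rcas)  # 51200
```

Even modest per-level factors compound quickly, which is why power delivery and cooling must be customized at the ASIC level rather than retrofitted at the rack level.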
There are plenty of tools and point solutions that address bits and pieces of the challenge of delivering artificial intelligence (AI) and Internet of things (IoT) applications. C3.ai's focus is on delivering an end-to-end platform for developing, deploying and running these applications in production at scale. Whether customers use every aspect of the C3.ai platform or not, big enterprise-scale companies seem to be attracted by that promise of quickly developing and running innovative, data-driven applications at scale. There was plenty of evidence of that fact at C3.ai's February 25-27 Transform conference in San Francisco, where customers including Bank of America, Shell, 3M and Engie detailed their deployments. C3.ai's cloud-first platform is comprehensive, addressing the needs of developers, data engineers and data scientists, and the operational teams challenged with bringing applications into production at scale.
BERLIN, Nov 21 (Reuters) - German data mining software firm Celonis said on Thursday that it had raised $290 million in a Series C funding round, putting a $2.5 billion valuation on the company that has been compared with enterprise application giant SAP. The funding round was led by Arena Holdings and investors included Ryan Smith, the founder of customer experience specialist Qualtrics, which was bought by SAP for $8 billion a year ago. Celonis, based in Munich and New York, runs a cloud-based service that uses artificial intelligence to mine data and optimise business processes, serving customers including Siemens, 3M, Airbus and Vodafone. "We are in a market that shows enormous momentum," co-CEO and co-founder Bastian Nominacher told Reuters, adding that Celonis would invest the funds raised in its global sales and customer service and in enhancing its cloud platform. The funding round brings total investments into Celonis to $370 million.
Ben Horowitz resoundingly falls in the category of "needing no introduction": a highly successful entrepreneur who navigated a perilous situation with his business (Loudcloud, which became Opsware) to a $1.65B acquisition by HP, he's also the founder of premier Silicon Valley venture capital firm Andreessen Horowitz (aka "a16z"), and the best-selling author of two books: "The Hard Thing About Hard Things" and the newly-released "What You Do Is Who You Are". It was a special treat to host Ben for a fireside chat at the most recent edition of Data Driven NYC – a great evening that included two other terrific speakers: Amr Awadallah, now VP of Developer Relations at Google Cloud, and previously co-founder and CTO at Cloudera (NYSE: CLDR), and Michael James, co-founder of AI chip company Cerebras. We spent a good hour with Ben and covered a bunch of topics, loosely organized in two parts: first AI and data, and then culture and his new book. Below are two videos covering each part, as well as a FULL TRANSCRIPT for anyone who prefers to read.
Alexis Perrier is a data science consultant with a background in signal processing and stochastic algorithms. A former Parisian, Alexis is now actively involved in the D.C. data science community as an instructor, blogger, and presenter. Alexis is also an avid jazz and classical music fan, a book lover, and the proud owner of a real chalk blackboard on which he regularly tries to share his fascination with mathematical equations with his three children. He holds a Master's in Mathematics from Université Pierre et Marie Curie Paris VI and a Ph.D. in Signal Processing from Telecom ParisTech, and currently resides in Washington D.C. Giuseppe Ciaburro holds a Ph.D. in environmental technical physics and two master's degrees. His research was focused on machine learning applications in the study of urban sound environments.
Cloud Infrastructure as a Service (IaaS) is vulnerable to malware due to its exposure to external adversaries, making it a lucrative attack vector for malicious actors. A datacenter infected with malware can cause data loss and/or major disruptions to service for its users. This paper analyzes and compares various Convolutional Neural Networks (CNNs) for online detection of malware in cloud IaaS. Detection is performed on behavioural data using process-level performance metrics, including CPU usage, memory usage, and disk usage. We use state-of-the-art DenseNets and ResNets to effectively detect malware in an online cloud system. The CNNs are designed to extract features from data gathered from live malware running in a real cloud environment. Experiments are performed on an OpenStack (a cloud IaaS platform) testbed designed to replicate a typical three-tier web architecture. A comparative analysis across different metrics is performed for the CNN models used in this research.
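As a rough illustration of the behavioural feature-extraction idea, here is a minimal numpy sketch under our own assumptions: a single hand-coded 1-D convolution over a window of CPU-usage samples, followed by a hypothetical threshold rule. The actual work uses deep DenseNet/ResNet models over multiple process-level metrics, not this toy filter:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation over a metric time series."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

# Toy window: 10 samples of CPU usage (%). The sudden sustained spike is the
# kind of behavioural anomaly convolutional features can surface.
cpu = np.array([5., 6., 5., 7., 6., 95., 97., 96., 98., 94.])

edge_kernel = np.array([-1., 0., 1.])   # responds to sharp level changes
features = conv1d(cpu, edge_kernel)

# Hypothetical decision rule standing in for the trained classifier head.
is_suspicious = bool(np.abs(features).max() > 50)
print(is_suspicious)  # True
```

A real CNN learns many such kernels jointly across CPU, memory, and disk channels instead of using one hand-picked edge filter, but the sliding-window dot product is the same underlying operation.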
With coyote attacks on humans in cities and suburbs making headlines – coyotes injured two people in Chicago earlier this month – officials could tap into a data repository to get a better handle on what's bringing the area's animals into such close proximity to humans. Called eMammal, the tool has been around for several years in one form or another and has helped researchers manage camera-trapping projects. It uses a data pipeline that takes images and metadata from the field through a cloud-based review process and into SIdora, a Smithsonian Institution data repository. To date, eMammal has data on more than 1 million detections of wildlife worldwide, including in cities. Smithsonian researchers collaborated with others at the North Carolina Museum of Natural Sciences, Conservation International and the Wildlife Conservation Society to develop an open standard for camera trap metadata -- the Camera Trap Metadata Standard -- as part of the eMammal project. Camera traps are ruggedized cameras that researchers place in forests, jungles, grasslands, cities and elsewhere to capture images of mammals.
Reinforcement Learning (RL) has demonstrated great potential for automatically solving decision-making problems in complex, uncertain environments. RL proposes a computational approach that allows learning through interaction with an environment of stochastic behavior, with agents taking actions to maximize cumulative short-term and long-term rewards. Some of the most impressive results have been achieved in game playing, where agents exhibited super-human performance in games like Go and StarCraft 2, which led to RL's adoption in many other domains, including Cloud Computing. In particular, workflow autoscaling exploits Cloud elasticity to optimize the execution of workflows according to a given optimization criterion. This is a decision-making problem in which it is necessary to establish when and how to scale computational resources up or down, and how to assign them to the upcoming processing workload. Such actions have to be taken considering some optimization criterion in the Cloud, a dynamic and uncertain environment. Motivated by this, many works apply RL to the autoscaling problem in the Cloud. In this work we exhaustively survey those proposals from major venues and uniformly compare them based on a set of proposed taxonomies. We also discuss open problems and provide a prospective view of future research in the area.
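The when-and-how-to-scale decision described above can be sketched as a toy tabular Q-learning loop. Everything below — the load model, the reward shape, the state encoding — is an illustrative assumption of ours; the surveyed works use a range of RL algorithms and far richer workload and cost models:

```python
import random

# Toy autoscaling MDP (all dynamics are assumptions, not from the survey):
# state = (discretized workload level, current VM count); actions scale the
# VM pool down, hold, or up; reward penalizes SLA violations (too few VMs)
# more heavily than the cost of idle, over-provisioned VMs.
random.seed(0)
LOADS = range(5)          # workload levels 0..4
ACTIONS = (-1, 0, 1)      # scale down / hold / scale up
MAX_VMS = 5

def reward(load, vms):
    if vms < load:                 # under-provisioned: SLA penalty
        return -10 * (load - vms)
    return -(vms - load)           # over-provisioned: idle-VM cost

Q = {}                             # tabular action-value function
def q(state, a):
    return Q.get((state, a), 0.0)

alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration
vms = 1
for step in range(5000):
    load = random.choice(list(LOADS))       # stochastic incoming workload
    state = (load, vms)
    if random.random() < eps:               # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: q(state, x))
    vms = min(MAX_VMS, max(0, vms + a))     # apply scaling action
    r = reward(load, vms)
    best_next = max(q((load, vms), x) for x in ACTIONS)
    # Standard Q-learning update.
    Q[(state, a)] = q(state, a) + alpha * (r + gamma * best_next - q(state, a))

# Inspect the learned greedy action for a heavily under-provisioned state.
greedy_at_4_1 = max(ACTIONS, key=lambda x: q((4, 1), x))
print(greedy_at_4_1)
```

After training, the agent should prefer scaling up when the workload far exceeds the current pool; real systems additionally face delayed VM boot times, partially observed load, and multi-objective criteria, which is exactly where the surveyed approaches diverge.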
The U.S. Postal Service plans to reduce wait times on about 80 million customer calls fielded annually through a partnership with Google Cloud announced Thursday. USPS awarded Carahsoft -- Google Cloud's authorized distributor for public sector clients -- a cloud contract with a $50 million ceiling covering customer experience and mail delivery solutions. Increased delivery competition from new carrier services like Amazon has the postal agency reinventing itself, and call center customer experience is "one of the bigger pain points," Mike Daniels, vice president of global public sector at Google Cloud, told FedScoop. "Their call wait times are very unacceptable. They're limited in what they can do with respect to staffing; you can't just throw more people at it," Daniels said.