Results


[slides] @Dyn's Cloud #APM and #NPM at @CloudExpo #AI #ML #Monitoring

#artificialintelligence

With major technology companies and startups seriously embracing cloud strategies, now is the perfect time to attend the 20th International @CloudExpo / @ThingsExpo, June 6-8, 2017, at the Javits Center in New York City, NY, and October 31 - November 2, 2017, at the Santa Clara Convention Center, CA. Join conference chair Roger Strukhoff (@IoT2040) for three days of intense Enterprise Cloud and 'Digital Transformation' discussion and focus, including Big Data's indispensable role in IoT, Smart Grids and the Industrial Internet of Things (IIoT), Wearables and Consumer IoT, as well as Digital Transformation in Vertical Markets. Attendees will also find fresh content in a new FinTech track, which brings machine learning, artificial intelligence, deep learning, and blockchain together in one place. The Call For Papers for speaking opportunities is now open.


Cloud 3.0: The Rise of Big Compute

#artificialintelligence

Furthermore, the existing category leaders, who drive billions of dollars of compute-heavy workload revenue in the legacy on-premise high performance computing (HPC) market, face the innovator's dilemma: they must reinvent their entire business to provide effective Big Compute solutions, creating a unique opportunity for the most innovative companies to become category leaders. Just as Big Data removed constraints on data and transformed major enterprise software categories, Big Compute eliminates constraints on compute hardware, letting computational workloads scale seamlessly on workload-optimized infrastructure configurations without sacrificing performance. A comprehensive Big Compute stack now enables frictionless scaling, application-centric compute hardware specialization, and performance-optimized workloads in a seamless way for both software developers and end users. Specifically, Big Compute transforms a broad set of full-stack software services on top of specialty hardware into a software-defined layer that puts programmatic high performance computing capabilities at your fingertips, or, more likely, runs them as back-end function evaluations inside software you touch every day.
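
To make "programmatic high performance computing at your fingertips" concrete, here is a minimal sketch of what such a software-defined layer might look like to a developer. The API_URL endpoint, job schema, and helper functions are entirely hypothetical, invented for illustration rather than taken from any real product.

```python
import time

import requests  # pip install requests

# Hypothetical Big Compute endpoint -- illustrative only, not a real API.
API_URL = "https://bigcompute.example.com/v1"


def submit_job(api_key: str, workload: dict) -> str:
    """Submit a compute-heavy workload and return its job id."""
    resp = requests.post(
        f"{API_URL}/jobs",
        headers={"Authorization": f"Bearer {api_key}"},
        json=workload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["job_id"]


def wait_for_result(api_key: str, job_id: str) -> dict:
    """Poll until the job finishes, then return its result payload."""
    while True:
        resp = requests.get(
            f"{API_URL}/jobs/{job_id}",
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=30,
        )
        resp.raise_for_status()
        body = resp.json()
        if body["status"] in ("finished", "failed"):
            return body
        time.sleep(10)


# The workload-optimized hardware configuration is requested declaratively,
# so scaling from 16 to 256 cores needs no application code changes.
job = {
    "solver": "cfd-simulation",
    "input_uri": "s3://bucket/mesh.tar.gz",
    "hardware": {"cores": 256, "memory_gb": 1024, "interconnect": "infiniband"},
}
# job_id = submit_job("YOUR_API_KEY", job)
# result = wait_for_result("YOUR_API_KEY", job_id)
```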


Hello 2017, and Recap of Top 10 Posts of 2016

#artificialintelligence

As we kick off what will surely be another very exciting year of progress in artificial intelligence, machine learning and data science, we start with a quick recap of our "Top 10" most popular posts (based on aggregate readership) from the year just concluded. We also show how Microsoft R Server can harness the deep learning capabilities of MXNet and Azure GPUs using simple R scripts. Few things in life can beat "free", and that was certainly true about our free eBook on creating intelligent apps using SQL Server and R. You can now embed intelligent analytics and data transformations right in your database, and make transactions intelligent in real time. We also announced that, on Windows, Microsoft R Server (MRS) would be included in SQL Server 2016.
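
As a rough sketch of what embedding analytics "right in your database" looks like, the snippet below calls SQL Server 2016's sp_execute_external_script stored procedure (the mechanism behind SQL Server R Services) from Python, so the R model runs inside the database engine and the data never leaves it. The connection string, table, and column names are placeholders.

```python
import pyodbc  # pip install pyodbc

# Placeholder connection details -- substitute your own server and database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=Sales;Trusted_Connection=yes;"
)

# sp_execute_external_script runs the embedded R snippet in-database;
# dbo.Transactions and its columns are illustrative placeholders.
sql = """
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'
        model <- lm(amount ~ units, data = InputDataSet);
        OutputDataSet <- data.frame(predicted = predict(model, InputDataSet));
    ',
    @input_data_1 = N'SELECT amount, units FROM dbo.Transactions;'
WITH RESULT SETS ((predicted FLOAT));
"""

for row in conn.execute(sql):
    print(row.predicted)
```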


Context Levels in Data Science Solutioning in the Real World

@machinelearnbot

The solution lifecycle has four stages. Solution development uses historical data and involves extensive experimentation, testing and validation. Solution deployment uses the solution to obtain insight and/or decision support. Solution assimilation embeds the solution in the workflow, enabling actions based on the insight and/or predictions it produces. Solution maintenance and update covers periodic checking and validation of the solution's performance, with updates to improve performance if required. An algorithm works with the available data footprint of the process of interest and discovers the relationships between the process characteristics and the outcomes. These relationships are, more often than not, complex patterns, and discovering them requires applying powerful learning algorithms to the historical data. The discovered patterns yield the required model parameters, and an analysis/model-application algorithm uses those parameters to create the model and apply it to new data in order to compute the output.
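
A minimal sketch of this lifecycle, using synthetic data and an assumed scikit-learn classifier (the article names no specific library):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the historical data footprint of a process.
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(1000, 5))                     # process characteristics
y_hist = (X_hist[:, 0] + X_hist[:, 1] > 0).astype(int)  # observed outcomes

# Solution development: experimentation, testing and validation on history.
X_train, X_test, y_train, y_test = train_test_split(
    X_hist, y_hist, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)  # learn the required model parameters
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Solution deployment: apply the learned model to new process data.
X_new = rng.normal(size=(10, 5))
decisions = model.predict(X_new)

# Solution maintenance and update: periodically re-validate on fresh
# labeled data and retrain if performance degrades.
```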


How telecom providers are embracing cognitive app development

#artificialintelligence

As an example, mobile network operators are increasing their investment in big data analytics and machine learning technologies as they transform into digital application developers and cognitive service providers. With a long history of handling huge datasets, and with the IT ecosystem now leading the way, mobile operators will devote more than $50 billion to big data analytics and machine learning technologies through 2021, according to the latest global market study by ABI Research. Machine learning can deliver benefits across telecom provider operations, with financially oriented applications, including fraud mitigation and revenue assurance, currently making the most compelling use cases. Predictive machine learning applications for network performance optimization and real-time management will introduce more automation and more efficient resource utilization.
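
As an illustration of the fraud-mitigation use case, the sketch below applies unsupervised anomaly detection to synthetic subscriber-usage data. The features, threshold, and choice of scikit-learn's IsolationForest are assumptions for demonstration, not details from the ABI Research study.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for per-subscriber usage features: call minutes,
# SMS count, data volume (GB), roaming charges -- real deployments would
# use call-detail records.
rng = np.random.default_rng(42)
normal = rng.normal(loc=[300, 80, 5.0, 2.0],
                    scale=[60, 20, 1.5, 1.0], size=(5000, 4))
fraud = rng.normal(loc=[2500, 5, 0.5, 90.0],
                   scale=[400, 3, 0.3, 20.0], size=(20, 4))
usage = np.vstack([normal, fraud])

# Unsupervised outlier detection flags accounts whose usage pattern
# deviates sharply from the bulk of subscribers -- candidates for review.
detector = IsolationForest(contamination=0.005, random_state=0).fit(usage)
flags = detector.predict(usage)  # -1 = anomalous, 1 = normal
suspicious = np.where(flags == -1)[0]
print(f"{len(suspicious)} accounts flagged for fraud review")
```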


Microsoft's AI and Speech Breakthroughs Eclipsed by New IBM Watson Platform -- Redmondmag.com

#artificialintelligence

The milestone was enabled by the new Microsoft Cognitive Toolkit, the software behind those speech recognition advances (as well as image recognition and search relevance). In addition to helping the researchers hit the 5.9 percent word error rate (WER), the new Microsoft Cognitive Toolkit 2.0 helped them enable what the company is calling "reinforcement learning." IBM, meanwhile, released the new Watson Data Platform (WDP), a cloud-based analytics development platform that allows programming teams, including data scientists and engineers, to build, iterate and deploy machine-learning applications. WDP runs on IBM's Bluemix cloud platform, integrates with Apache Spark, works with the IBM Watson Analytics service and will underpin the new IBM Data Science Experience (DSX), which is a "cloud-based, self-service social workspace that enables data scientists to consolidate their use of and collaborate across multiple open source tools such as Python, R and Spark," said IBM Big Data Evangelist James Kobielus in a blog post outlining last month's announcements at the company's World of Watson conference in Las Vegas.
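
For context, WER is the number of word substitutions, deletions, and insertions needed to turn the recognizer's transcript into the reference transcript, divided by the reference length. A minimal sketch of the computation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with word-level Levenshtein dynamic programming."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in six reference words -> WER of about 0.167.
print(word_error_rate("the cat sat on the mat", "the cat sat on the hat"))
```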


Hewlett Packard Enterprise Powers Machine Learning Apps, Revs Vertica Database

#artificialintelligence

Haven OnDemand runs on Microsoft Azure, but its REST-based APIs can be invoked from any services-enabled environment, including Amazon Web Services or hybrid and private clouds. The availability of machine learning services on the Amazon, Azure, Google and IBM clouds is clearly a threat to Haven OnDemand. On the in-database front, Vertica 8.0 gains R-based machine learning algorithms that let data scientists model against vast data sets using the power of Vertica's massively parallel processing, thus avoiding moving data to analytic servers or relying on sampling techniques. Vertica was already certified to run on Amazon Web Services, but the 8.0 release adds support for deployment on Microsoft Azure.
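
As a sketch of invoking those REST-based APIs from an arbitrary environment: the endpoint and parameter names below follow the general shape of Haven OnDemand's synchronous calls but should be treated as illustrative placeholders, so consult the actual documentation before relying on them.

```python
import requests  # pip install requests

# Illustrative placeholder endpoint modeled on Haven OnDemand's
# synchronous REST call pattern -- verify against the real documentation.
HOD_URL = "https://api.havenondemand.com/1/api/sync/analyzesentiment/v1"


def analyze_sentiment(api_key: str, text: str) -> dict:
    """Call the sentiment-analysis API from any services-enabled host --
    an AWS instance, a private cloud, or a laptop."""
    resp = requests.get(HOD_URL,
                        params={"apikey": api_key, "text": text},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()


result = analyze_sentiment("YOUR_API_KEY", "The new release is impressively fast.")
print(result)
```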


IBM's new servers to propel AI, Deep Learning & Advanced Analytics

#artificialintelligence

Featuring a new chip, the three Linux-based servers incorporate innovations from the OpenPOWER community and are part of the Power Systems LC lineup, which IBM claims delivers higher levels of performance and greater computing efficiency than x86-based servers. The servers are claimed to have been co-developed with global technology companies, and the new Power Systems are uniquely designed to propel artificial intelligence, deep learning, high performance data analytics and other compute-heavy workloads, helping businesses and cloud service providers save on data center costs. Big Blue states that the new IBM Power System S822LC for High Performance Computing server has been developed through open collaboration. "The open and collaborative model of the OpenPOWER Foundation has propelled system innovation forward in a major way with the launch of the IBM Power System S822LC for High Performance Computing," states Ian Buck, VP of Accelerated Computing at NVIDIA. "NVIDIA NVLink provides tight integration between the POWER CPU and NVIDIA Pascal GPUs and improved GPU-to-GPU link bandwidth to accelerate time to insight for many of today's most critical applications like advanced analytics, deep learning and AI."


IBM Linux Servers Designed to Accelerate Artificial Intelligence, Deep Learning and Advanced Analytics

#artificialintelligence

Collaboratively developed with some of the world's leading technology companies, the new Power Systems are uniquely designed to propel artificial intelligence, deep learning, high performance data analytics and other compute-heavy workloads, which can help businesses and cloud service providers save money on data center costs. "NVIDIA NVLink provides tight integration between the POWER CPU and NVIDIA Pascal GPUs and improved GPU-to-GPU link bandwidth to accelerate time to insight for many of today's most critical applications like advanced analytics, deep learning and AI," said NVIDIA's Ian Buck. Among those first in line to receive shipments are a large multinational retail corporation and the U.S. Department of Energy's Oak Ridge National Laboratory (ORNL) and Lawrence Livermore National Laboratory (LLNL). Fully compatible with Linux-based cloud environments, IBM's Power LC servers are optimized for data-rich applications and can deliver superior data center efficiency through lower costs and less server sprawl.