Reimagining business for the digital age is the number-one priority for many of today's top executives, and we offer practical advice and examples of how to do it right. IBM Chairman Ginni Rometty issued that clarion call to an arena packed with roughly 20,000 IBM customers during her Think 2018 keynote: incumbent companies can disrupt their own industries, and you don't have to be an Uber to pull it off. Compared with her keynote at the former World of Watson conference almost 18 months earlier, Rometty was far more specific this time, showing concrete examples of how legacy companies are adopting next-generation technologies that change their businesses.
Google is developing its own solution based on distributed-ledger technology, which can be used in the company's cloud services and will help it stand out against competitors. As early as 2016, Google launched a trial program for developers to use blockchain in its cloud. According to one of Bloomberg's sources, members of a special group under the head of the cloud services division, Diane Greene, have spent the last few months actively working on improved blockchain protocols. Officially, Google does not comment on the situation. The company has also been buying and investing in startups associated with digital ledgers, and many of these deals were not announced, the source added.
According to Gartner, artificial intelligence will be the most disruptive class of technology over the next 10 years, thanks to radical computational power, near-endless amounts of data, and unprecedented advances in deep learning. The rise of deep learning has been fueled by three recent trends: the explosion in the amount of training data; the use of accelerators such as graphics processing units (GPUs); and advances in training algorithms and neural network architectures. To realize the full potential of this rising trend, the technology must be easily accessible to the people it matters to most: data scientists and AI developers. Yet training deep neural networks remains highly complex and computationally intensive, requiring a highly tuned system with the right combination of software, drivers, compute, memory, network, and storage resources.
With more than 4 billion internet users worldwide today and 31 billion connected devices forecast by 2020, the future of the digital world lies in how people and "things" will interact with each other. The key will be the convergence and consolidation of internet of things platforms and devices that can seamlessly exchange data among people, networks, devices, and applications. Creating this world, where multiple service and technology layers work harmoniously to deliver ubiquitous, ultra-connected experiences, is a task that will take years to complete, and it requires a robust technology platform powered by artificial intelligence. Today, we are siloed in how we think about IoT.
We're taught from an early age to take notes. The first time I recall taking notes was when I entered junior high; in fact, I even recall being required to take a class on how to take them (Roman numerals, indentation, and so on). Note-taking is one of the few activities that has stood the test of time for me: I kept at it not only in junior high but throughout my education, and I continue to do it regularly today in my professional career. Notes remain so prevalent in all of our lives not because our teachers were so effective at influencing our young, impressionable selves, but because taking notes is an extremely useful practice.
Zero Touch & Carrier Automation Congress -- The 3GPP standards group is developing a machine learning function that could allow 5G operators to monitor the status of a network slice or the performance of a third-party application. The network data analytics function (NWDAF) forms part of the 3GPP's 5G standardization effort and could become a central point for analytics in the 5G core network, said Serge Manning, a senior technology strategist at Sprint Corp. (NYSE: S). Speaking here in Madrid, Manning said the NWDAF is still in the "early stages" of standardization but could become "an interesting place for innovation." The 3rd Generation Partnership Project (3GPP) froze the specifications for the 5G New Radio standard at the end of 2017 and is due to freeze another set of 5G specifications, covering some of the core network and non-radio features, in June this year as part of its "Release 15" update. Manning said Release 15 treats the network slice selection function (NSSF) and the policy control function (PCF) as potential "consumers" of the NWDAF.
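To make the consumer relationship concrete, here is a hedged sketch of how a consumer network function such as the PCF might subscribe to slice-load analytics from the NWDAF. The endpoint concept, event name, and JSON field names below are illustrative assumptions for a standard that was still in early stages at the time, not the exact 3GPP-specified API.

```python
import json

def build_slice_load_subscription(slice_id: str, notify_uri: str) -> dict:
    """Build a JSON body asking the NWDAF to report load-level
    information for one network slice. Field names are hypothetical."""
    return {
        "eventSubscriptions": [
            {
                "event": "LOAD_LEVEL_INFORMATION",  # assumed event name
                "sliceId": slice_id,                # assumed field name
            }
        ],
        # Callback where the NWDAF would POST analytics notifications
        "notificationUri": notify_uri,
    }

# A PCF-like consumer asking for load analytics on one slice
body = build_slice_load_subscription(
    slice_id="slice-001",
    notify_uri="http://pcf.example/notify",
)
print(json.dumps(body, indent=2))
```

In a deployed core, the consumer would POST this body to the NWDAF's subscription endpoint and receive load reports asynchronously at the notification URI.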
IBM today announced the launch of its new Deep Learning as a Service (DLaaS) program for AI developers. With DLaaS, users can train neural networks using popular frameworks such as TensorFlow, PyTorch, and Caffe without buying and maintaining costly hardware. The service lets data scientists train models with only the resources they need, paying only for GPU time. Each cloud processing unit is set up for ease of use and prepared for programming deep learning networks without requiring users to manage infrastructure. Users choose a supported deep learning framework, a neural network model, training data, and cost constraints, and the service takes care of the rest, providing an interactive, iterative training experience.
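The workflow described above — pick a framework, a model, training data, and a cost cap, then hand off to the service — can be sketched as a job specification. The class, field names, and validation rules below are hypothetical illustrations of this kind of train-as-a-service request, not IBM's actual API.

```python
from dataclasses import dataclass

# Frameworks the hypothetical service accepts (per the announcement)
SUPPORTED_FRAMEWORKS = {"tensorflow", "pytorch", "caffe"}

@dataclass
class TrainingJob:
    framework: str              # one of SUPPORTED_FRAMEWORKS
    model_script: str           # user-provided training code
    training_data_uri: str      # e.g. an object-storage location
    gpus: int = 1               # resources requested
    max_cost_usd: float = 10.0  # cost constraint on the run

    def validate(self) -> list:
        """Return a list of problems; an empty list means submittable."""
        problems = []
        if self.framework not in SUPPORTED_FRAMEWORKS:
            problems.append(f"unsupported framework: {self.framework}")
        if self.gpus < 1:
            problems.append("at least one GPU must be requested")
        if self.max_cost_usd <= 0:
            problems.append("cost cap must be positive")
        return problems

job = TrainingJob(framework="tensorflow",
                  model_script="train.py",
                  training_data_uri="s3://bucket/mnist",
                  gpus=2, max_cost_usd=25.0)
print(job.validate())  # → []
```

Once validated, such a spec would be submitted to the service, which provisions GPUs, runs the script against the data, and bills only for the time used.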
Here we begin our survey of Amazon AWS cloud analytics and big data tools. First we give an overview of some of what is available; in subsequent blog posts we will look at some of these tools in more detail and provide examples of how to use them. Amazon's pitch for these cloud services is that they take some of the complexity out of developing predictive ML models, classification models, and neural networks. That is true, but it can also be limiting.
Deep neural networks are often trained in the over-parametrized regime (i.e., with far more parameters than training examples), and understanding why the training converges to solutions that generalize remains an open problem. Several studies have highlighted the fact that the training procedure, i.e., mini-batch Stochastic Gradient Descent (SGD), leads to solutions that have specific properties in the loss landscape. However, even with plain Gradient Descent (GD) the solutions found in the over-parametrized regime are quite good, and this phenomenon is poorly understood. We propose an analysis of this behavior for feedforward networks with a ReLU activation function under the assumption of small initialization and learning rate, and uncover a quantization effect: the weight vectors tend to concentrate at a small number of directions determined by the input data. As a consequence, we show that for given input data there are only finitely many "simple" functions that can be obtained, independent of the network size. This puts these functions in analogy to linear interpolations (for given input data there are finitely many triangulations, each of which determines a function by linear interpolation). We ask whether this analogy extends to the generalization properties: while the usual distribution-independent generalization property does not hold, it could be that, e.g., for smooth functions with bounded second derivative an approximation property holds which could "explain" the generalization of networks (of unbounded size) to unseen inputs.
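The setting of the abstract — plain gradient descent on an over-parametrized one-hidden-layer ReLU network with small initialization and a small learning rate — can be reproduced numerically in a few lines. This is a minimal toy sketch: the target function y = |x|, the width, and all hyperparameters are illustrative choices, not the paper's experiments.

```python
import math
import random

random.seed(0)

xs = [i / 5.0 - 1.0 for i in range(11)]  # 11 training inputs in [-1, 1]
ys = [abs(x) for x in xs]                # toy target: y = |x|

H = 20                                   # hidden width >> number of examples
w = [random.gauss(0, 0.01) for _ in range(H)]  # small initialization
b = [random.gauss(0, 0.01) for _ in range(H)]
a = [random.gauss(0, 0.01) for _ in range(H)]

def forward(x):
    """f(x) = sum_j a_j * relu(w_j * x + b_j)"""
    return sum(a[j] * max(0.0, w[j] * x + b[j]) for j in range(H))

def mse():
    return sum((forward(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

lr = 0.05
loss_start = mse()
for _ in range(3000):                    # plain (full-batch) gradient descent
    gw, gb, ga = [0.0] * H, [0.0] * H, [0.0] * H
    for x, y in zip(xs, ys):
        err = 2.0 * (forward(x) - y) / len(xs)
        for j in range(H):
            pre = w[j] * x + b[j]
            if pre > 0.0:                # unit active: ReLU gradient is 1
                ga[j] += err * pre
                gw[j] += err * a[j] * x
                gb[j] += err * a[j]
    for j in range(H):
        w[j] -= lr * gw[j]
        b[j] -= lr * gb[j]
        a[j] -= lr * ga[j]
loss_end = mse()

# The quantization effect predicts that the normalized weight vectors
# (w_j, b_j) / ||(w_j, b_j)|| of the units that matter (|a_j| not tiny)
# concentrate around a few directions determined by the input data.
directions = sorted(set(
    (round(w[j] / math.hypot(w[j], b[j]), 1),
     round(b[j] / math.hypot(w[j], b[j]), 1))
    for j in range(H)
    if abs(a[j]) > 1e-3 and math.hypot(w[j], b[j]) > 1e-6
))
print(loss_start, loss_end, directions)
```

With 20 hidden units and 11 examples the network is over-parametrized, yet the rounded direction set printed at the end typically contains far fewer distinct entries than there are units, consistent with the concentration the abstract describes.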