If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Stephen Foskett is joined by Gina Rosenthal, an expert on enterprise IT infrastructure and operations. Gina has built her career in enterprise IT infrastructure and has worked with many of the largest vendors. In this episode, she considers how vendors approach artificial intelligence, what applications they are delivering, and what this means for the enterprise. The conversation turns to the ethics and risks of AI applications and how businesses should approach building AI models. As AI applications are deployed in the line of business, IT infrastructure organizations need to be prepared to handle the demands of these systems with next-generation cloud platforms. This episode features: Stephen Foskett, publisher of Gestalt IT and organizer of Tech Field Day (find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett), and Gina Rosenthal, founder of Digital Sunshine Solutions (find Gina on Twitter at @GMinks). Date: 09/22/2020. Tags: @SFoskett, @GMinks
Artificial intelligence is poised to change every industry and to create trillions of dollars in economic value over the coming decades. However, 80-90% of initial enterprise AI projects fail, often because the wrong projects were selected in the first place. But proper AI project selection isn't a single skill; it is an outgrowth of executive AI fluency, the general ability of leadership to understand what AI does, how it works, and what long-term vision it should be contributing to (read our full Emerj Plus guide on Executive AI Fluency). AI champions and C-level executives don't simply learn to select projects; they develop that broader fluency first.
For most professional software developers, using application lifecycle management (ALM) is a given. Data scientists, many of whom do not have a software development background, often have not used lifecycle management for their machine learning models. That's a problem that's much easier to fix now than it was a few years ago, thanks to the advent of "MLOps" environments and frameworks that support machine learning lifecycle management. The easy answer would be that machine learning lifecycle management is the same as ALM, but that would be wrong: the lifecycle of a machine learning model differs from the software development lifecycle (SDLC) in a number of ways.
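To make the contrast concrete, here is a minimal sketch of what tracking a model through lifecycle stages might look like. The stage names, class, and `promote` method are illustrative assumptions, not the API of any specific MLOps product; real platforms offer model registries with similar but product-specific concepts.

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle stages; real model registries use similar
# but product-specific stage names.
STAGES = ["experiment", "staging", "production", "archived"]

@dataclass
class ModelVersion:
    """A single trained model version with its evaluation metrics."""
    name: str
    version: int
    metrics: dict = field(default_factory=dict)
    stage: str = "experiment"

    def promote(self, target: str) -> None:
        # Unlike a software release, promotion hinges on measured model
        # quality (metrics on held-out data), not just passing tests.
        if target not in STAGES:
            raise ValueError(f"unknown stage: {target}")
        self.stage = target

model = ModelVersion("churn-classifier", version=3)
model.metrics["auc"] = 0.91          # recorded at evaluation time
model.promote("staging")
print(model.stage)                    # staging
```

The point of the sketch is the extra state a model carries (training metrics, versioned data lineage) that an ALM tool built for source code alone does not track.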
"The future comes too soon and in the wrong order." "It is our moral responsibility not to stop the future but to shape it." The pandemic is hurtling the world into a Technology 4.0-transformed "future of work" much earlier than anticipated in the ILO's Centenary Declaration of 2019. India's global significance in mastering the future of work is huge: it can draw on technology-adaptive, high-productivity human capital from the world's largest youth cohort of 820 million. Along with declining fertility rates and women's empowerment, this could yield a large demographic dividend of high growth rates for decades, despite short-term shocks.
The automotive ecosystem is an almost $2T marketplace that consists of a large number of integrated markets. Beyond the automotive OEMs, these include rental companies, auto financing, auto insurance, gas stations (energy), media (radio and billboard in particular), maintenance services, public sector infrastructure, and even emergency services. Autonomous capability has been touted as the disruptive change agent by media and investors alike. However, autonomy has proven to be very difficult at a technological level. As "Measurable Safety, The Missing Ingredient To Demonstrating ADAS Value" discusses, even ADAS, the simpler poor cousin of advanced mobility systems, is not ready for prime time.
"Everything should be made as simple as possible, but no simpler." Designed Intelligence is Fjord and Accenture's approach to unlocking the full potential of human collaboration with AI. In our previous articles we discussed how AI technologies can help augment strategic decision making and build better experiences. Empowerment is the third pillar of Designed Intelligence and focuses on how design can make intelligent systems more transparent, more adaptable, and ultimately more resilient. We live in a world of increasing complexity.
The rapid shift to remote work has enterprises scrambling to meet the operational needs of suddenly distributed teams, and there are open source options to get them there. The recent mad rush to scale to remote work may prove to be a key chapter in the evolution of DevOps and AIOps. This need for rapid, widescale change is creating a real conundrum concerning AIOps, DevOps, and ITSM, as organizations seek the best monitoring and incident response solution for their now-distributed enterprises. The key question both the DevOps and IT service management (ITSM) communities need to answer is how quickly they can pivot and adapt to increasing demands for operational intelligence. Artificial intelligence for IT operations (AIOps) brings together artificial intelligence (AI), analytics, and machine learning (ML) to automate the identification and remediation of IT operations issues.
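As a toy illustration of the kind of automated identification AIOps platforms perform, the sketch below flags outliers in a metric stream using a simple z-score test. The function name, the two-sigma threshold, and the sample latencies are all assumptions for the example; production AIOps tools use far more sophisticated statistical and ML-based baselining.

```python
import statistics

def detect_anomalies(samples, threshold=2.0):
    """Return indexes of points more than `threshold` standard
    deviations from the mean - a stand-in for the statistical
    baselining an AIOps platform performs on telemetry."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

# Hypothetical request latencies (ms) with one spike.
latency_ms = [20, 22, 21, 19, 23, 20, 180, 21]
print(detect_anomalies(latency_ms))  # [6] - the 180 ms spike
```

In a real pipeline, a flagged index like this would feed an incident-response workflow (alerting, correlation, or automated remediation) rather than a `print`.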
In our previous blog, we looked at how public clouds have set the pace and standards for satisfying the technology needs of data scientists, and how on-premises offerings have become increasingly attractive due to innovations such as Kubernetes and Kubeflow. Nevertheless, delivery of ML platforms on-premises is still not easy. The effort to replicate a public cloud ML experience requires enthusiasm and persistence in the face of potential frustration. To address this challenge, the Cisco community has developed an open source tool named MLAnywhere to assist IT teams in learning and mastering the new technology stacks that ML projects require. MLAnywhere provides an actual, usable outcome in the form of a deployed Kubeflow workflow (pipeline) with sample ML applications on top of Kubernetes via a clean and intuitive interface.
The growth of AI, deep learning, and data analytics has created many of the most challenging HPC workloads in recent years. The latest HPC report by Hyperion Research states that iterative simulation workloads, along with new workloads such as AI and other Big Data jobs, will drive the adoption of HPC storage. To keep up with the massive and growing amount of data being collected, users need to enhance computational performance at the same time; HPC therefore requires equally robust storage that can move data in and out fast enough to keep compute busy as we head into the Big Data era. Data-intensive HPC is driving new storage requirements and forcing change. Simulation not only requires large amounts of computation running on HPC infrastructure built from clusters of powerful servers linked by networking and memory, but also calls for self-service data stores.
If data starts at the Edge, why can't we do as much as possible right there from an AI point of view? The explosive growth in Edge devices and applications requires new thinking about where and how data is analyzed and insights are derived. New Edge computing options, coupled with more demanding speed-to-insight requirements in many use cases, are driving up the use of artificial intelligence (AI) and machine learning (ML) in Edge applications. Where AI and ML are applied (at the Edge or in a data center or cloud facility) is a complex matter. To get some insight into current strategies and best practices, we recently sat down with Said Tabet, Chief Architect, AI/ML & Edge, and Calvin Smith, CTO, Emerging Technology Solutions, both in the Office of the Global CTO at Dell Technologies. We discussed the growing need for AI and ML to make sense of the large amount of Edge data generated today, the compute requirements for AI/ML in Edge applications, and whether such computations should be done at the Edge or in a data center or cloud facility. RTInsights: What are today's emerging trends, and how do AI and ML fit into the Edge discussion?