

What's new in Microsoft Azure's NLP AI services

#artificialintelligence

If you want to begin using machine learning in your applications, Microsoft offers several ways to jumpstart development. One key technology, Microsoft's Azure Cognitive Services, offers a set of managed machine learning services with pretrained models and REST API endpoints. These models cover most common use cases, from working with text and language to recognizing speech and images. Machine learning is still evolving, with new models being released and new hardware to speed up inferencing, so Microsoft regularly updates its Cognitive Services. The latest major update, announced at Build 2022, features significant changes to its tools for working with text, bringing three different services under one umbrella.
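As a rough illustration of what consuming one of these REST endpoints looks like, here is a minimal sketch that calls the Language service's sentiment route with Python's requests library. The resource name, key, and API version are placeholder assumptions rather than details from the announcement, and the exact route may differ for a given deployment.

```python
import requests

# Placeholder resource name and key -- substitute your own Language resource values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"

def analyze_sentiment(texts):
    """Send a batch of documents to the sentiment endpoint (REST sketch)."""
    url = f"{ENDPOINT}/text/analytics/v3.1/sentiment"  # assumed API version
    headers = {
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    }
    body = {
        "documents": [
            {"id": str(i), "language": "en", "text": t} for i, t in enumerate(texts)
        ]
    }
    response = requests.post(url, headers=headers, json=body, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = analyze_sentiment(["The new Language service looks promising."])
    for doc in result.get("documents", []):
        print(doc["id"], doc["sentiment"])
```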


An Introduction to Amazon SageMaker

#artificialintelligence

Amazon SageMaker helps data scientists and developers prepare, build, train, and deploy high-quality machine learning models by bringing together a broad set of capabilities purpose-built for machine learning. SageMaker provides solutions for the most common use cases that can be deployed with just a few clicks, making it easier to get started. Amazon SageMaker is a fully managed machine learning service. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models, then deploy them directly into a production-ready hosted environment.
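As an illustration of that build-train-deploy flow, here is a minimal sketch using the SageMaker Python SDK with the built-in XGBoost container. The S3 training path, instance types, and hyperparameters are placeholder assumptions, and the snippet assumes it runs where an execution role and default bucket are available (for example, a SageMaker notebook).

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()
region = session.boto_region_name

# Built-in XGBoost container image for this region.
image_uri = sagemaker.image_uris.retrieve("xgboost", region, version="1.5-1")

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path=f"s3://{session.default_bucket()}/xgb-demo/output",
    hyperparameters={"objective": "binary:logistic", "num_round": 100},
)

# Train on CSV data already staged in S3 (placeholder URI).
estimator.fit(
    {"train": TrainingInput("s3://my-bucket/train.csv", content_type="text/csv")}
)

# Deploy the trained model to a real-time hosted endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```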


Bridging the knowledge gap on AI and machine-learning technologies – Physics World

#artificialintelligence

How much is too much? It's a question that cuts to the heart of a complex issue currently preoccupying senior medical physicists when it comes to the training and continuing professional development (CPD) of the radiotherapy physics workforce. What's exercising management and educators specifically is the extent to which the core expertise and domain knowledge of radiotherapy physicists should evolve to reflect – and, in so doing, best support – the relentless progress of artificial intelligence (AI) and machine-learning technologies within the radiation oncology workflow. In an effort to bring a degree of clarity and consensus to the collective conversation, the ESTRO 2022 Annual Congress in Copenhagen last month featured a dedicated workshop session entitled "Every radiotherapy physicist should know about AI/machine learning…but how much?" With several hundred delegates packed into Room D5 at the Bella Center, speakers were tasked by the session moderators with defending a range of "optimum scenarios" for aligning the know-how of medical physicists with emerging AI/machine-learning opportunities in the radiotherapy clinic.


Turning the promise of AI into a reality for everyone and every industry

#artificialintelligence

Artificial intelligence (AI) has come a long way in the past few years. But for AI to truly fulfill its promise, it needs to do one more thing. It needs to be easy to use. This is just as important as all the computational and technical components that make AI happen in the first place.


BigPanda Launches Unified Analytics to Improve Business KPIs for IT Ops Teams

#artificialintelligence

BigPanda, Inc., the leader in AIOps Event Correlation and Automation, launched Unified Analytics, a revamped feature that gives IT Ops teams new self-service analytics capabilities for creating highly interactive dashboards and reports from complex IT Ops alert data. BigPanda is the only AIOps platform that also delivers a complete library of ready-to-use operational and value dashboards, allowing users to rapidly track and measure IT operations KPIs, metrics and value use cases that show the impact of IT Ops improvements on the business. In fact, Gartner states in its 2022 Market Guide for AIOps, "One of the main barriers to implementing artificial intelligence for IT operations (AIOps) platforms is the difficulty measuring their value and a lack of understanding of benefits derived. This is hard to do for several reasons, including IT Ops data that's extremely siloed, gaps in visibility between teams, and difficulty in understanding which KPIs to measure and how they drive business impact." BigPanda's Unified Analytics addresses these challenges with new, out-of-the-box, persona-based dashboards that help IT organizations translate IT Ops metrics into business impact.


Could No-Code Enable Everything Ops?

#artificialintelligence

It feels like DevOps principles are permeating every discipline, creating new buzzwords by the minute. This "JargonOps" is clearly encouraged by marketing campaigns (and bloggers, wink, wink). Yet, the phrases do depict a real trend: all industries are getting an efficiency overhaul in the wake of increased automation. As I've covered before, low-code and no-code tools lower the barrier to entry to application development, enabling field experts to construct workflows as they see fit. For tech-savvy non-engineers, this could be a huge boon to transform copy-and-paste stopgaps into efficient workflow automations.


A brief history of no-code software -- and its future

#artificialintelligence

Traditional computer programming has a steep learning curve that requires learning a programming language, for example C/C++, Java or Python, just to build a simple application such as a calculator or Tic-tac-toe game. Programming also requires substantial debugging skills, which easily frustrates new learners. The study time, effort and experience needed often stop nonprogrammers from making software from scratch. No-code is a way to program websites, mobile apps and games without writing code, scripts or sets of commands. People readily learn from visual cues, which led to the development of "what you see is what you get" (WYSIWYG) document and multimedia editors as early as the 1970s.


Various steps Involved in Building Machine Learning Pipeline

#artificialintelligence

Oftentimes in machine learning, there is confusion about how to build scalable and robust models that can be deployed in real time. What mostly complicates this is a lack of knowledge about the overall machine learning workflow. Understanding the various steps in that workflow can be especially handy for data scientists and machine learning engineers, as it saves a considerable amount of time and effort in the long run. In this article, we will go over the steps usually involved in building a machine learning system. A good understanding of the principles behind the high-level design of an AI system helps one allocate time and resources to each part of the puzzle before arriving at a robust, high-performance model that is put into production.
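To make those steps concrete, here is a minimal sketch of a simple workflow with scikit-learn, chaining data splitting, preprocessing, model training, and evaluation into a single pipeline. The dataset, file name, and column names are illustrative assumptions, not details from the article.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical dataset and columns -- substitute your own data.
df = pd.read_csv("customers.csv")
numeric_cols = ["age", "income"]
categorical_cols = ["plan", "region"]
X = df[numeric_cols + categorical_cols]
y = df["churned"]

# 1. Split the data so evaluation reflects unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# 2. Preprocess: scale numeric features, one-hot encode categorical ones.
preprocess = ColumnTransformer(
    [
        ("num", StandardScaler(), numeric_cols),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
    ]
)

# 3. Chain preprocessing and the model so the same steps run at inference time.
pipeline = Pipeline(
    [("preprocess", preprocess), ("model", LogisticRegression(max_iter=1000))]
)

# 4. Train and evaluate.
pipeline.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, pipeline.predict(X_test)))
```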


GAN as a Face Renderer for 'Traditional' CGI

#artificialintelligence

Opinion: When Generative Adversarial Networks (GANs) first demonstrated their capability to reproduce stunningly realistic 3D faces, the advent triggered a gold rush for the unmined potential of GANs to create temporally consistent video featuring human faces. Somewhere in the GAN's latent space, it seemed that there must be hidden order and rationality – a schema of nascent semantic logic, buried in the latent codes, that would allow a GAN to generate consistent multiple views and multiple interpretations (such as expression changes) of the same face – and subsequently offer a temporally convincing deepfake video method that would blow autoencoders out of the water. High-resolution output would be trivial, compared to the slum-like low-res environments in which GPU constraints force DeepFaceLab and FaceSwap to operate, while the 'swap zone' of a face (in autoencoder workflows) would become the 'creation zone' of a GAN, informed by a handful of input images, or even just a single image. There would be no more mismatch between the 'swap' and 'host' faces, because the entirety of the image would be generated from scratch, including hair, jawlines, and the outermost extremities of the facial lineaments, which frequently prove a challenge for 'traditional' autoencoder deepfakes. As it transpired, it was not going to be nearly that easy.


ScImage and DiA Imaging Analysis Team Up to Infuse AI into Echocardiography Labs

#artificialintelligence

ScImage Inc., a leading provider of Enterprise Imaging solutions, and DiA Imaging Analysis, a leading global provider of AI-based cardiac ultrasound software, announced a commercial partnership to combine ScImage's unique Cloud architecture with DiA's AI-based automated cardiac ultrasound solution, LVivo Seamless. The collaboration leverages each company's strengths to give echocardiography (echo) labs greater access to the latest innovations in healthcare imaging technology. ScImage's intelligent Cloud computing infrastructure, together with DiA's AI-based algorithms, will now be available to more echocardiologists and other imaging specialists, enabling them to maximize workflow efficiency in the echo lab environment and improve patient care. "ScImage prides itself on delivering the most progressive, secure, True Cloud offering in healthcare today. By combining the compute power of PICOM365 with DiA's LVivo Seamless, clinicians will be able to enjoy the highest level of quantitative image analysis and longitudinal measurement accuracy," said Sai Raya, Ph.D., ScImage's Founder and CEO.