Collaborating Authors

Information Technology

The top 3 uses of machine learning


If you are more than 30 years old, you have witnessed Hollywood movie scripts being turned into reality by some of the best minds on the planet. Many technology concepts were once dismissed as hype and pure fiction, yet look around today and you will see those same ideas in everyday use. From video calls with anyone in any corner of the world to reaching Mars, humanity has achieved remarkable technological advances. And if you start reading technology blogs, you will realize there is more to come: what we have seen so far is only a glimpse of what will be achieved in the next few years.

Future of Design: Making AI work for you


A lot has happened in the world since I published my article titled "Artificial intelligence for when times are a-changin" in December 2019. There, I introduced machine learning (ML) as a subset of artificial intelligence (AI). By explaining in simple terms how a machine learning model works, I hoped to demystify this somewhat scary-at-first new technology. Like electricity, once considered a magic trick and now taken for granted, AI technologies are for all of us to use and benefit from – not just those working in software development. As much as I get excited about any piece of new tech I can get my hands on, a key indicator that a technology is successful is not that it excites early adopters, but that it becomes essential, helpful, and seamlessly integrated into our very human lives. Our industry's greatest challenges are well known to all of us.



A simplified version of the pix2pix generator inside a fragment shader for Unity. This implementation uses only 1/4 of the original pix2pix model to help with real-time performance in VR. If you wish to run the Python code or the C code, the requirements are listed in the post.
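To see why a quarter-width model helps real-time performance, consider how parameter counts scale. This is an illustrative sketch, not the author's shader code: pix2pix's U-Net encoder conventionally uses 4x4 stride-2 convolutions with the filter counts listed below, and shrinking every layer to a quarter of its width cuts the weight count roughly sixteen-fold (parameters scale with in-channels times out-channels).

```python
# Illustrative filter widths of the standard pix2pix U-Net encoder.
FULL = [64, 128, 256, 512, 512, 512, 512, 512]

def conv_params(in_ch, out_ch, k=4):
    """Parameters of one k x k convolution layer (weights + biases)."""
    return in_ch * out_ch * k * k + out_ch

def encoder_params(filters, in_ch=3):
    """Total parameters of a chain of convolutions with the given widths."""
    total = 0
    for out_ch in filters:
        total += conv_params(in_ch, out_ch)
        in_ch = out_ch
    return total

quarter = [f // 4 for f in FULL]
full_total = encoder_params(FULL)
quarter_total = encoder_params(quarter)
print(full_total, quarter_total, round(full_total / quarter_total, 1))
```

The ratio printed at the end illustrates the quadratic saving: because each layer's weights depend on the product of its input and output widths, quartering both shrinks the encoder by close to a factor of sixteen, which is what makes the shader version feasible at VR frame rates.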

Why and how to build autonomous systems - AI for Business


Automated control systems were one of the most disruptive applications of industrial technology in the 20th century. The ability to control workflows and processes based on specific inputs and outputs streamlined even the most complex manufacturing processes. These systems, however, need specific parameters and, in some cases, require extensive human oversight and planning to ensure optimal execution. Innovations in AI training methodologies are pushing past these limitations to produce the next wave of disruption to industrial technology: autonomous systems. Autonomous machines do more than address the limitations of automated systems, however.
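The distinction the blurb draws can be made concrete with a classical feedback controller. This is a minimal sketch with illustrative names, not anything from the article: a proportional controller drives a process toward a target, but the setpoint and gain are explicit, human-chosen parameters, which is exactly the kind of hand-tuning that autonomous, AI-trained systems aim to move beyond.

```python
def p_controller(setpoint, measurement, gain=0.5):
    """Classical P-controller: corrective action proportional to the error."""
    return gain * (setpoint - measurement)

# Drive a simple process variable toward a human-chosen setpoint of 100.
level = 0.0
for _ in range(20):
    level += p_controller(100.0, level)
print(level)
```

After twenty iterations the process variable has converged to the setpoint, but only because a human picked a sensible gain; an unstable gain would require the kind of oversight and planning the blurb describes.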

Splunk takes aim at multicloud, machine learning and observability - SiliconANGLE


Splunk's Data-to-Everything Platform is an all-encompassing suite of analytics tools that helps enterprises search, correlate, analyze, monitor and report on data in real time, available through its Splunk Cloud and Splunk Enterprise products. Today's slew of updates at the virtual event is all about expanding customers' multicloud capabilities, giving them new ways to set the right data strategy and improve access to the information their businesses generate, Splunk said. For example, the Splunk Data Stream Processor, an event streaming platform, is being updated with new capabilities that enable it to access, process and route real-time data from multiple cloud services, including Google LLC's Cloud Platform and Microsoft Corp.'s Azure Event Hub. In addition, event data now gets enriched with lookups and machine learning functionality that helps to minimize compute loads and provide more accuracy when searching through this data. Moreover, the Data-to-Everything Platform is getting a new Splunk Machine Learning Environment that will make it easier for companies to build and operationalize machine learning models by bringing data from multiple sources into a single platform.

Artificial Intelligence: Your Data Guardian - Data Guard 365


Businesses of every variety and every industry are facing a constant attack by black hats trying to storm their digital fences and jeopardize their network integrity. Rarely a day goes by that the headlines are not filled with at least one story of another global conglomerate suffering under the weight of an all-out attack on their data and networks. Suffice it to say, if the largest and most technologically advanced organizations in the world still fall victim to these threats, every business with any sort of digital footprint is susceptible. That is not to say that your business, no matter its size or industry, is a proverbial sitting duck to the clever and relentless black hats of the world. In fact, with some deliberate planning and a well-organized game plan, companies can efficiently and effectively defend themselves and their data from intrusion.

Deconstructing Maxine, Nvidia's AI-powered video-conferencing technology


This article is part of "Deconstructing artificial intelligence," a series of posts that explore the details of how AI applications work. One of the things that caught my eye at Nvidia's flagship event, the GPU Technology Conference (GTC), was Maxine, a platform that leverages artificial intelligence to improve the quality and experience of video-conferencing applications in real time. Maxine uses deep learning for resolution improvement, background noise reduction, video compression, face alignment, and real-time translation and transcription. In this post, which marks the first installment of our "Deconstructing artificial intelligence" series, we will take a look at how some of these features work and how they tie in with AI research done at Nvidia. We'll also explore the pending issues and the possible business model for Nvidia's AI-powered video-conferencing platform.

Council Post: Five Real Ways Artificial Intelligence Is Upleveling Customer Service


Remember Facebook's automated personal assistant, M, released in a bid to compete with Alexa and Siri? After a series of embarrassing mishaps caused by poorly trained algorithms, Facebook abruptly pulled the plug. It wasn't alone; chatbots are infamous for putting their metaphorical feet in their mouths. While these debacles are tough to watch, the underlying problem is not artificial intelligence (AI) itself: AI succeeds when underpinned by sound strategy and well-trained models.

How Technology can Benefit Agriculture and Farmers in India


The transformation of Indian agriculture began with the Green Revolution, which was followed by further landmark achievements: the Blue, White, Yellow, and Biotechnology revolutions. In India, agriculture is the core sector for food security, nutritional security, sustainable development, and poverty alleviation. Around 64% of the total labor force is engaged in horticulture or agribusiness-based occupations. Since independence, there has been noteworthy development in Indian agriculture, with grain production rising to 273.83 million tons this year. Even so, enormous challenges must be addressed to enhance agricultural growth in India.