D2iQ Streamlines Smart Cloud-Native Application Deployments with Kaptain AI/ML 2.0


D2iQ, the leading enterprise Kubernetes provider for smart cloud-native applications, announced version 2.0 of Kaptain AI/ML, the enterprise-ready distribution of open-source Kubeflow that enables organizations to develop, deploy, and run artificial intelligence (AI) and machine learning (ML) workloads in production environments. Powered by Kubeflow 1.5, the Kubernetes machine learning toolkit, Kaptain AI/ML now provides data science teams with features such as expanded control for mounting data volumes and increased visibility into idle notebooks, so they can spend more time developing and less time managing infrastructure. The enhanced user experience enables data scientists to more effectively manage the lifecycle of AI and ML models without the need for infrastructure knowledge and skill sets. By simplifying the deployment and full lifecycle management of AI and ML workloads at scale, Kaptain AI/ML 2.0 accelerates the impact of smart cloud-native applications. This enables organizations to drive better business results by more quickly delivering new smart products and services, becoming more agile when updating models, and driving smarter customer experiences.
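Kaptain's "expanded control for mounting data volumes" builds on the underlying Kubeflow Notebook resource, where data volumes are declared as PersistentVolumeClaims. As a minimal sketch (the names `my-notebook`, `team-ds`, and `datasets-pvc` are hypothetical placeholders, and this shows the generic Kubeflow Notebook shape rather than Kaptain's own tooling), a notebook manifest with a mounted data volume looks roughly like this:

```python
# Illustrative sketch of a Kubeflow Notebook manifest, expressed as a Python
# dict, that mounts a PersistentVolumeClaim as a data volume. All names are
# hypothetical placeholders.
notebook_manifest = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "Notebook",
    "metadata": {"name": "my-notebook", "namespace": "team-ds"},
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "notebook",
                    "image": "jupyter/scipy-notebook",
                    # Where the data volume appears inside the notebook pod.
                    "volumeMounts": [{
                        "name": "datasets",
                        "mountPath": "/home/jovyan/data",
                    }],
                }],
                # The claim backing that mount.
                "volumes": [{
                    "name": "datasets",
                    "persistentVolumeClaim": {"claimName": "datasets-pvc"},
                }],
            }
        }
    },
}

# Every volumeMount must reference a declared volume by name.
pod_spec = notebook_manifest["spec"]["template"]["spec"]
mount_names = {m["name"] for m in pod_spec["containers"][0]["volumeMounts"]}
volume_names = {v["name"] for v in pod_spec["volumes"]}
assert mount_names <= volume_names
```

The point of tooling like Kaptain is that data scientists get this mount without hand-writing the manifest.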

Google unveils the world's largest publicly available machine learning hub


Google I/O 2022, Google's largest developer conference, kicked off with a keynote speech from Alphabet CEO Sundar Pichai. The keynote featured major announcements, including the launch of the Pixel Watch, updates on PaLM and LaMDA, and advancements in AR and immersive technology. Let us look at the key highlights. "Recently we announced plans to invest USD 9.5 billion in data centers and offices across the US. One of our state-of-the-art data centers is in Mayes County, Oklahoma. I'm excited to announce that, there, we are launching the world's largest, publicly-available machine learning hub for our Google Cloud customers," Sundar Pichai said.

What to expect from Google I/O 2022


Google I/O 2022, the most awaited developers' conference of the year, is around the corner. With more than 200 speakers, the summit will cover a broad spectrum of topics and will feature a slew of announcements on the latest innovations in AI and ML. The I/O Adventure also makes a comeback this year: users can explore the platform to see product demos, chat with Googlers, earn Google Developer profile badges and virtual swag, engage with the developer community, create an avatar, and look for easter eggs. Seek out your next Adventure at Google I/O 2022! The conference is scheduled to start at 10:30 pm IST on May 11, 2022, and will kick off with Alphabet CEO Sundar Pichai's keynote speech.

How I became an ML hackathon Grandmaster


HSBC's Akash Gupta has won over 45 machine learning hackathons to date. The MachineHack Grandmaster has finished second three times in a row and is currently ranked sixth on the platform. "I've always been fascinated by numbers and patterns. I got very curious about algorithms – how they are made, how they work, and what we can do with them – after I took Andrew Ng's machine learning course," said Akash Gupta. The data scientist spoke about his MachineHack journey in an exclusive interview with Analytics India Magazine.

You are Not Using the Right AI/ML API: Here's Why


Eden AI simplifies the use and deployment of AI technologies by providing a single API connected to the best AI engines. Companies are increasingly using artificial intelligence services, especially to automate internal processes or improve their customers' experience. AI's rapid development is turning it into a commodity, with applications across many fields: health, human resources, tech, and more. The big players in the cloud market (Amazon Web Services, Microsoft Azure, and Google Cloud) offer solutions that provide access to these services, but smaller providers are already competing with them: Mindee, Dataleon, Deepgram, AssemblyAI, Rev.AI, Speechmatics, Lettria, etc.
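The appeal of a single API over many engines is the provider-abstraction pattern: one call signature, many interchangeable backends. The sketch below illustrates the idea only; the provider functions and names are hypothetical stand-ins, not Eden AI's actual API:

```python
# Hypothetical sketch of the provider-abstraction pattern behind a unified
# AI API: one entry point, backend selected by name. The "providers" here
# are toy stand-ins, not real sentiment engines.
def _provider_a_sentiment(text: str) -> float:
    # Toy heuristic standing in for a real sentiment model.
    return 1.0 if "good" in text.lower() else -1.0

def _provider_b_sentiment(text: str) -> float:
    return 0.5 if "great" in text.lower() else 0.0

PROVIDERS = {
    "provider_a": _provider_a_sentiment,
    "provider_b": _provider_b_sentiment,
}

def analyze_sentiment(text: str, provider: str = "provider_a") -> float:
    """Single call signature; the backend engine is chosen by name."""
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return PROVIDERS[provider](text)
```

Because callers depend only on `analyze_sentiment`, swapping engines (for price, accuracy, or language coverage) is a one-argument change rather than a rewrite.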

Google announces Cloud TPU virtual machines for AI workloads


Google Cloud has announced the general availability of TPU virtual machines (VMs) for artificial intelligence workloads. Google Cloud said embedding acceleration with Cloud TPU can help businesses lower the costs associated with ranking and recommendation use cases, which commonly rely on deep neural network-based algorithms that can be costly to run. "They tend to use large amounts of data and can be difficult and expensive to train and deploy with traditional ML infrastructure," Google Cloud said in a blog post.
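The reason ranking and recommendation workloads map well to TPUs is that their hot path is dense matrix arithmetic, such as scoring many item embeddings against a user embedding. A minimal JAX sketch (illustrative values; the same jit-compiled function runs unchanged on CPU, GPU, or a Cloud TPU VM, with only `jax.devices()` differing):

```python
# Minimal sketch: dot-product scoring of item embeddings against a user
# embedding, the kind of dense matmul that TPU matrix engines accelerate.
# Values are illustrative.
import jax
import jax.numpy as jnp

@jax.jit
def score_items(user_emb, item_embs):
    # One score per item: the dot product of each item row with the user.
    return item_embs @ user_emb

user = jnp.ones((4,))                      # a 4-d user embedding
items = jnp.arange(12.0).reshape(3, 4)     # three 4-d item embeddings
scores = score_items(user, items)          # shape (3,)
```

On a TPU VM the same code is placed on the TPU automatically; no device-specific branches are needed in the model code.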

Best Predictive Analytics Tools and Software 2022


Managing data has always been a challenge for businesses. With new sources and higher volumes of data coming in all the time, it's more important than ever to have the right tools in place. Predictive analytics tools and software are the best way to turn that data into forward-looking insight. Data scientists and business leaders must be able to organize and clean the data to get the process started. The next step is analyzing it and sharing the results with colleagues.
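The clean-then-model workflow these tools automate can be sketched in a few lines of dependency-free Python (toy data; real predictive analytics suites wrap these steps in far richer form):

```python
# Minimal sketch of the predictive analytics pipeline described above:
# clean the data (drop incomplete rows), then fit a simple predictive
# model (ordinary least squares for one feature). Data is illustrative.
def clean(rows):
    """Keep only rows where both the feature and the target are present."""
    return [(x, y) for x, y in rows if x is not None and y is not None]

def fit_line(points):
    """Least-squares slope a and intercept b for y ~ a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

raw = [(1, 2.1), (2, 3.9), (None, 5.0), (3, 6.0)]  # one row missing a value
a, b = fit_line(clean(raw))
predicted = a * 4 + b  # prediction for a new input
```

Commercial tools add the surrounding pieces the blurb mentions: data ingestion from many sources, visualization, and sharing results with colleagues.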

Testing Out HPC On Google's TPU Matrix Engines


In an ideal platform cloud, you would not know or care what the underlying hardware was and how it was composed to run your HPC – and now AI – applications. The underlying hardware in a cloud would have a mix of different kinds of compute and storage, an all-to-all network lashing it together, and whatever you needed could be composed on the fly. This is precisely the kind of compute cloud that Google wanted to build back in April 2008 with App Engine and, as it turns out, that very few organizations wanted to buy. Companies cared – and still do – about the underlying infrastructure, but at the same time, Google still believes in its heart of hearts in the platform cloud. And that is one reason why its Tensor Processing Unit, or TPU, compute engines are only available on the Google Cloud.

The Hyperscalers Point The Way To Integrated AI Stacks


Enterprises know they want to do machine learning, but they also know they can't afford to think too long or too hard about it. They need to act, and they have specific business problems that they want to solve. And they know instinctively and anecdotally from the experience of the hyperscalers and the HPC centers of the world that machine learning techniques can be utterly transformative in augmenting existing applications, replacing hand-coded applications, or creating whole new classes of applications that were not possible before. They also have to decide if they want to run their AI workloads on-premise or on any one of a number of clouds where a lot of the software for creating models and training them is available as a service. And let's acknowledge that a lot of those models were created by the public cloud giants for internal workloads long before they were peddled as a service.

Machine Learning on Google Cloud (Vertex AI & AI Platform)


Are you a data scientist or AI practitioner who wants to understand cloud platforms? Are you a data scientist or AI practitioner who has worked on Azure or AWS and is curious to know how ML activities can be done on GCP? If yes, this course is for you. This course will help you understand the concepts of the cloud. In the interest of a wider audience, this course is designed for both beginners and advanced AI practitioners.