PyTorch is an open source machine learning framework, primarily developed by Meta (previously Facebook). PyTorch is used extensively in research and, in recent years, has gained immense traction in industry due to its ease of use and deployment. Vertex AI, a fully managed end-to-end data science and machine learning platform on Google Cloud, has first-class support for PyTorch, making it optimized, compatibility-tested and ready to deploy. We started a new blog series, PyTorch on Google Cloud, to uncover, demonstrate and share how to build, train and deploy PyTorch models at scale on Cloud AI infrastructure using GPUs and TPUs on Vertex AI, and how to create reproducible machine learning pipelines on Google Cloud. This blog post serves as the home page for the series, with links to existing and upcoming posts for readers to refer to.
It's exciting to see the PyTorch community continue to grow and regularly release updated versions of PyTorch! Recent releases improve performance, ONNX export, TorchScript, the C++ frontend, the JIT and distributed training. Several new experimental features, such as quantization, have also been introduced. At the PyTorch Developer Conference earlier this fall, we presented how our open source contributions to PyTorch make it better for everyone in the community. We also talked about how Microsoft uses PyTorch to develop machine learning models for services like Bing.
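As a quick illustration of the experimental quantization support mentioned above, here is a minimal sketch of post-training dynamic quantization. The model architecture and layer sizes are arbitrary, chosen only for the example; the `quantize_dynamic` call is the standard PyTorch API for this technique:

```python
import torch
import torch.nn as nn

# A small example model; the architecture here is arbitrary.
model = nn.Sequential(
    nn.Linear(16, 8),
    nn.ReLU(),
    nn.Linear(8, 4),
)

# Post-training dynamic quantization: weights of the listed module
# types are converted to int8, and activations are quantized on the
# fly at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model is called exactly like the original one,
# and still produces floating-point outputs.
output = quantized_model(torch.randn(1, 16))
print(output.shape)
```

Dynamic quantization is the lowest-effort entry point: it needs no calibration data or retraining, which is why it is often tried first on models dominated by linear layers.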
"Abandon all hope ye who enter here" was the inscription Dante read when passing through the gates of hell. Apparently, it's also true of anyone but the big cloud providers when it comes to artificial intelligence, according to an analysis by Bain & Company. "The CSPs [cloud service providers] are best positioned because of the significant head start they have in using AI on a large scale," the report authors stated. Given that FirstMark investor Matt Turck recently called out how well startups have done in the shadows of the cloud giants, it's worth diving deeper into the strengths the clouds bring to AI. "CSPs' cloud and digital services have given them access to the enormous amounts of data required to effectively train AI models," the authors concluded. Such economies of scale have been an asset to the cloud providers for years.
This blog will look at an area of the business that might cause some people's eyes to glaze over, but my challenge is to take this potentially boring topic and flip it on its head. What am I talking about? Cost seems to drive most conversations around cloud adoption, even though we all tend to pretend it doesn't. Everyone knows that each cloud is different. But here's the news flash: the way cloud providers charge for their clouds is equally different, and it can be a dangerous conversation to enter if you aren't equipped with the knowledge you need to navigate it well.
For many surgeons, the possibility of going back into the operating room to review the actions they carried out on a patient could provide invaluable medical insights. Using a mix of Facebook's PyTorch framework and the machine-learning platform Allegro Trains, med-tech company theator now provides surgeons with a tool that lets them watch over and analyze in detail the operations they have performed, and access video footage of procedures carried out by colleagues around the world. Dubbed the "surgical intelligence platform", theator's tool uses computer vision technology to extract key information from videos taken during surgical operations. The data is annotated, compiled and organized so that doctors can review specific content simply by typing key words into the platform. Surgeons can use the tool to jump to a specific step, re-watch critical moments, or access analysis of the procedure, such as the time taken to perform a given action.