What's in the TensorFlow Federated (TFF) box?

#artificialintelligence

Krzysztof Ostrowski is a Research Scientist at Google, where he heads the TensorFlow Federated development team. This blog post is inspired by his talk at the OpenMined Privacy Conference. TensorFlow Federated (TFF) is a development framework for federated computations, which typically involve computations on data that is born decentralized and stays decentralized. TFF provides a common framework for federated computations in both research and production and is an open-source project within the TensorFlow ecosystem. The TFF library is designed to offer an easy path from research to production.
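The core pattern behind a federated computation, where clients compute updates locally and only aggregates leave the device, can be sketched in a few lines of plain Python. This is a toy illustration of federated averaging, not the TFF API; `client_update` and `server_aggregate` are hypothetical names.

```python
# Toy federated averaging: raw client data never leaves the "device";
# only model deltas are sent to the server and averaged.
def client_update(local_data, global_model):
    local_mean = sum(local_data) / len(local_data)
    return local_mean - global_model  # a model delta, not raw data

def server_aggregate(global_model, deltas):
    return global_model + sum(deltas) / len(deltas)

clients = [[1.0, 2.0, 3.0], [10.0, 10.0], [4.0, 6.0]]  # decentralized data
model = 0.0
for round_num in range(5):  # a few federated rounds
    deltas = [client_update(data, model) for data in clients]
    model = server_aggregate(model, deltas)

print(model)  # converges to the average of the client means
```

Note that the server only ever sees the averaged deltas, which is exactly the property that lets the data stay decentralized.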


Model Complexity & Overfitting in Machine Learning - Data Analytics

#artificialintelligence

In machine learning, model complexity and overfitting are closely related: overfitting is a problem that can occur when a model is too complex. An overly complex model can fit the noise in the data rather than the underlying pattern, and as a result it will perform poorly on new, unseen data. In this blog post, we will discuss what model complexity is and how you can avoid overfitting in your machine learning models by managing that complexity. As data scientists, it is of utmost importance to understand model complexity and how it affects overfitting.
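A minimal, dependency-free sketch of this trade-off: a model that memorizes every training point (maximum complexity) achieves zero training error but fails on unseen inputs, while a simple least-squares line generalizes. All data and models here are toy illustrations.

```python
import random

random.seed(0)

def target(x):
    return 2 * x + 1  # the true underlying pattern

# noisy training data and a held-out test set from the same process
train = [(x, target(x) + random.gauss(0, 0.5)) for x in range(10)]
test = [(x + 0.5, target(x + 0.5) + random.gauss(0, 0.5)) for x in range(10)]

# "complex" model: memorize every training point (fits the noise exactly)
memory = dict(train)
def memorizer(x):
    return memory.get(x, 0.0)  # knows nothing about unseen inputs

# "simple" model: ordinary least-squares line y = a*x + b
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n
def line(x):
    return a * x + b

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print("memorizer train MSE:", mse(memorizer, train))  # exactly 0.0
print("memorizer test  MSE:", mse(memorizer, test))   # large: no generalization
print("line      train MSE:", mse(line, train))
print("line      test  MSE:", mse(line, test))        # small: captures the pattern
```

The memorizer is the extreme case of model complexity: perfect on training data, useless on anything else.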


Last year, a meteorite was discovered at the remote Kybo Station on the Nullarbor Plain. It's only 70g – roughly the same as a large egg – and looks suspiciously like kangaroo faeces

#artificialintelligence

Last year, a meteorite was discovered at the remote Kybo Station on the Nullarbor Plain. It's only 70g – roughly the same as a large egg – and looks suspiciously like kangaroo faeces. Drone imagery of a 5 square kilometre 'fall zone' was used to find the small space rock in the vast WA desert. The footage was then examined for meteorites using artificial intelligence, and voila! This is the first time this strategy has worked anywhere on the planet.


OpenAI and the road to text-guided image generation: DALL·E, CLIP, GLIDE, DALL·E 2 (unCLIP)

#artificialintelligence

The first version of DALL·E was a GPT-3-style transformer decoder that autoregressively generated a 256×256 image based on textual input and an optional beginning of the image. If you want to understand how a GPT-like transformer works, here is a great visual explanation by Jay Alammar. The text is encoded as BPE tokens, and the image is represented as a grid of discrete tokens produced by a dVAE. Because of the dVAE, some details and high-frequency features are lost, so blurriness and smoothness are characteristic of DALL·E-generated images. The transformer is a large model with 12B parameters. It consists of 64 sparse transformer blocks with a complicated set of attention mechanisms inside: 1) classical text-to-text masked attention, 2) image-to-text attention, and 3) image-to-image sparse attention.
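To make the autoregressive setup concrete, here is a toy sketch of decoding a grid of image tokens conditioned on text tokens. `next_token` is a hypothetical stand-in for the transformer's prediction, and the vocabulary size and caption tokens are illustrative values, not DALL·E's real ones; the 32×32 token grid matches the dVAE grid described in the DALL·E paper.

```python
# Toy autoregressive decoding over one sequence of text + image tokens.
def next_token(seq):
    # hypothetical stand-in for argmax over a transformer's output logits
    return (sum(seq) * 31 + len(seq)) % 512

GRID = 32                  # image decoded as a 32x32 grid of dVAE tokens
text_tokens = [12, 7, 99]  # a BPE-encoded caption (toy values)

seq = list(text_tokens)
for _ in range(GRID * GRID):
    # each new image token conditions on the text and all prior image tokens
    seq.append(next_token(seq))

image_tokens = seq[len(text_tokens):]
print(len(image_tokens))  # 1024 tokens, i.e. one 32x32 grid
```

In the real model, the dVAE decoder then maps this token grid back to a 256×256 pixel image, which is where the characteristic smoothing comes from.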


How Machines Can Learn Using TensorFlow or PyTorch

#artificialintelligence

AI and machine learning are very hot topics these days, and many familiar applications could not exist without machine learning. But how can machines learn? I will show you how the magic works in this article, though I won't talk about neural networks! Instead, I will show you what lies at the deepest depths of machine learning. One of the best presentations about machine learning is Fei-Fei Li's TED talk.



TensorFlow DTensor: Unified API for Distributed Deep Network Training

#artificialintelligence

The recently released TensorFlow v2.9 introduces DTensor, a new API for model-, data-, and space-parallel (aka spatially tiled) deep network training. DTensor aims to decouple sharding directives from the model code by providing higher-level utilities that partition the model and batch parameters across devices. The work is part of a recent wave of efforts (e.g. GPipe, TF Mesh, GShard, DeepSpeed, FairScale, ColossalAI) to cut the development time needed to build large-scale training workloads. For large (language) models, test loss falls predictably, roughly as a power law, with the number of network parameters, the dataset size, and the compute budget.
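The idea of partitioning parameters across a device mesh can be illustrated with a framework-free sketch. This is not the DTensor API; in DTensor the placement below would be expressed declaratively as a Layout over a Mesh, while here `shard` is a hypothetical helper over plain Python lists, purely for illustration.

```python
# Conceptual sharding: split a weight matrix over a 1-D "mesh" of devices.
def shard(matrix, n_devices, axis=0):
    """Evenly partition a matrix (list of rows) across n_devices along axis."""
    if axis == 0:  # row split, e.g. sharding the batch dimension
        per = len(matrix) // n_devices
        return [matrix[i * per:(i + 1) * per] for i in range(n_devices)]
    per = len(matrix[0]) // n_devices  # column split, e.g. model parallelism
    return [[row[i * per:(i + 1) * per] for row in matrix]
            for i in range(n_devices)]

weights = [[r * 10 + c for c in range(4)] for r in range(8)]  # 8x4 parameters
row_shards = shard(weights, n_devices=4, axis=0)
print(len(row_shards), len(row_shards[0]))  # 4 shards of 2 rows each
```

The point DTensor makes is that this choice of split axis should live in a layout annotation, not be hand-woven through the model code as it is here.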


Is Grokking the Machine Learning Interview on Educative Worth it? Review

#artificialintelligence

Hello friends, we are here again today for another exciting topic to discuss. But today we are not going to discuss anything related to Java, any other language, or Spring Boot. Today we are going to discuss something immensely practical, something with the potential to land you a very high-paying data science job: a review of a course that focuses on Machine Learning! Machine Learning is very important when we are preparing for data science interviews, and this course couldn't have come at a better moment, with machine learning expected to be a $3.6 billion business by 2024.


Deep Learning for Human Action Recognition

#artificialintelligence

Human Action Recognition (HAR) refers to the automated identification of particular actions or gestures from a sequence of observations. Action recognition can be performed on images or videos (which are essentially sequences of images) and typically utilizes Deep Learning model architectures. HAR has a wide range of real-world applications, some of which I'll discuss in this article. Before Deep Learning revolutionized automatic feature extraction, handcrafted features were manually extracted for action classification using traditional Machine Learning techniques. Many action features have been proposed for RGB image data, including spatio-temporal volume-based features, spatio-temporal interest point features, and joint trajectory features.
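As a concrete (if toy) illustration of recognition over a sequence of observations, the sketch below pools per-frame feature vectors over time and assigns the nearest class centroid. The feature vectors and centroids are hypothetical placeholders for what a deep network would actually extract from video frames.

```python
# Toy sequence classifier: average per-frame features over time,
# then pick the nearest class centroid.
def temporal_pool(frames):
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

def classify(video, centroids):
    pooled = temporal_pool(video)
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(pooled, center))
    return min(centroids, key=lambda label: dist(centroids[label]))

centroids = {"wave": [1.0, 0.0], "jump": [0.0, 1.0]}  # one centroid per action
video = [[0.9, 0.1], [1.1, -0.1]]                     # two frames of 2-D features
print(classify(video, centroids))
```

Real HAR models replace both steps with learned components: a convolutional or transformer backbone for per-frame features, and learned temporal aggregation instead of a plain average.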


Predibase exits stealth with a platform for building AI models – TechCrunch

#artificialintelligence

Data science teams are stymied by disorganization at their companies, impacting efforts to deploy timely AI and analytics projects. In a recent survey of "data executives" at U.S.-based companies, 44% said that they haven't hired enough staff, are too siloed off to be effective, and haven't been given clear roles. Respondents said that they were most concerned about the impact of a revenue loss or hit to brand reputation stemming from failing AI systems and a trend toward splashy investments with short-term payoffs. These are ultimately organizational challenges. But Piero Molino, the co-founder of AI development platform Predibase, says that inadequate tooling often exacerbates them.