Improving TabTransformer Part 1: Linear Numerical Embeddings
In the previous post about TabTransformer, I described how the model works and how it can be applied to your data. This post builds on it, so if you haven't read it yet, I highly recommend starting there and returning here afterwards. TabTransformer was shown to outperform traditional multi-layer perceptrons (MLPs) and came close to the performance of Gradient Boosted Trees (GBTs) on some datasets. However, the architecture has one noticeable drawback: it doesn't take numerical features into account when constructing contextual embeddings. This post takes a deep dive into the paper by Gorishniy et al. (2021), which addresses this issue by introducing FT-Transformer (Feature Tokenizer Transformer).
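The core idea of the linear numerical embeddings named in the title is simple: each scalar numerical feature gets its own learnable weight and bias vector, and the feature value scales the weight vector. Here is a minimal NumPy sketch of that mapping; the array names, sizes, and random initialization are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 numerical features, 8-dimensional embeddings.
n_features, d_embed = 3, 8

# One learnable weight vector and bias vector per numerical feature
# (randomly initialized here for illustration).
W = rng.normal(size=(n_features, d_embed))
b = rng.normal(size=(n_features, d_embed))

def numerical_embeddings(x):
    """Map a batch of numerical features, shape (batch, n_features),
    to per-feature embeddings, shape (batch, n_features, d_embed):
    e[i, j] = x[i, j] * W[j] + b[j]."""
    return x[:, :, None] * W[None, :, :] + b[None, :, :]

x = rng.normal(size=(4, n_features))   # a batch of 4 rows
emb = numerical_embeddings(x)
print(emb.shape)  # (4, 3, 8)
```

Because every feature now produces a vector "token", numerical features can enter the transformer's attention layers alongside the categorical embeddings instead of being bolted on afterwards.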
Oct-22-2022, 12:21:23 GMT
- Technology: