A good deep learning model has a carefully crafted architecture. Building one requires enormous amounts of training data, capable hardware, skilled developers, and a great deal of time to train and hyperparameter-tune the model to satisfactory performance. Building and training a deep learning model from scratch for every task is therefore practically impossible. This is where the power of Transfer Learning comes in. Transfer Learning is the approach of reusing an already-trained model for a related task.
I hope you are now able to apply pre-trained models to your own problem statements. Make sure the pre-trained model you select was trained on a data set similar to the one you wish to use it on. Various architectures have been tried on different types of data sets, and I strongly encourage you to go through these architectures and apply them to your own problem statements. Please feel free to share your doubts and concerns in the comments section.
Here you can see all 16 layers of the VGG16 model, with a short description at the bottom. "Total params" is the total number of parameters the model has overall. "Trainable params" is the number of parameters that will be updated during training; since we have only loaded the architecture of VGG16 without training it further, these are all of its weights. Lastly, "Non-trainable params", as the name says, are the parameters that are frozen and not updated during training. Note that you don't see the last (classifier) layers here, because we set the include-top option to "false".
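As a minimal sketch of how such a summary is produced with Keras (assuming TensorFlow is installed; here `weights=None` is used so no download is needed, whereas in practice you would pass `weights="imagenet"`):

```python
from tensorflow.keras.applications import VGG16

# Load the VGG16 convolutional base without the top classifier
# (include_top=False removes the final fully connected layers).
# weights=None keeps the example offline; use weights="imagenet"
# in real transfer learning to get the pre-trained weights.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# Freeze every layer: their parameters move from "Trainable params"
# to "Non-trainable params" in the summary.
for layer in base.layers:
    layer.trainable = False

# Prints the layer list plus Total / Trainable / Non-trainable params.
base.summary()
```

After freezing, all of the convolutional base's parameters are reported as non-trainable, which is exactly the state you want before stacking your own classifier on top.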