fast.ai is a brilliant library and course by Jeremy Howard and co. They use PyTorch as a base and explain deep learning from the foundations up to a very decent level. In his course Jeremy Howard demonstrates a lot of interesting techniques that he finds in papers and that make NN training faster/better/cheaper.
Here I want to reproduce some of these techniques in order to understand what effect they actually bring.
Ideas for multistage NN training.
There is some research on continual learning without catastrophic forgetting. For example, ANML: Learning to Continually Learn (ECAI 2020) arxiv code video
The code for that paper is based on another one: OML (Online-aware Meta-Learning), NeurIPS 2019 code video
The OML paper in turn derives some code from MAML:
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks pdf official tf code, which also includes links to other implementations. A minimal sketch of the MAML training loop follows below.
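To make the idea concrete, here is a minimal first-order MAML (FOMAML) sketch in PyTorch, not the authors' code: the `tasks` iterator, step sizes, and single-step adaptation are all assumptions for illustration.

```python
# Minimal first-order MAML (FOMAML) sketch, not the paper's implementation.
# `tasks` is assumed to yield (support_x, support_y, query_x, query_y) tuples.
import copy

import torch
import torch.nn as nn

def fomaml_step(model, tasks, inner_lr=0.01, meta_lr=0.001, inner_steps=1):
    meta_opt = torch.optim.SGD(model.parameters(), lr=meta_lr)
    meta_opt.zero_grad()
    loss_fn = nn.CrossEntropyLoss()
    for support_x, support_y, query_x, query_y in tasks:
        # Clone the model so inner-loop adaptation does not touch meta-weights.
        fast = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            inner_opt.zero_grad()
            loss_fn(fast(support_x), support_y).backward()
            inner_opt.step()
        # Evaluate the adapted weights on the query set; in the first-order
        # variant the query gradient is applied directly to the meta-weights.
        loss_fn(fast(query_x), query_y).backward()
        for p, fp in zip(model.parameters(), fast.parameters()):
            p.grad = fp.grad if p.grad is None else p.grad + fp.grad
    meta_opt.step()
```

The full MAML would backpropagate through the inner-loop updates themselves; the first-order variant above drops those second-order terms, which is the cheap approximation also discussed in the paper.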
REST API example for TensorFlow. It works: demo
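For context, a minimal sketch of such an endpoint using Flask and a Keras model (not the linked demo; the model path and JSON format here are assumptions):

```python
# Minimal sketch: serving a TensorFlow/Keras model over REST with Flask.
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)
model = tf.keras.models.load_model("model.h5")  # hypothetical saved model

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"inputs": [[...], ...]} with one row per sample.
    inputs = np.array(request.get_json()["inputs"], dtype=np.float32)
    preds = model.predict(inputs)
    return jsonify({"predictions": preds.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```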
Pretrained models for TensorFlow
TF-Slim - a high-level API of TensorFlow for defining, training, and evaluating complex models. Doesn't work with Python 3 (see here)
VGG16 and VGG19 in TensorFlow. One more here. And one more.
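As a side note, pretrained VGG16 weights can also be pulled in one line through Keras applications; a sketch (separate from the linked implementations):

```python
# Loading pretrained VGG16 via Keras applications (a shortcut, not the linked code).
import numpy as np
import tensorflow as tf

model = tf.keras.applications.VGG16(weights="imagenet")  # downloads weights on first use
x = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0  # dummy image batch
x = tf.keras.applications.vgg16.preprocess_input(x)
probs = model.predict(x)
print(tf.keras.applications.vgg16.decode_predictions(probs, top=3))
```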
Deep learning for lazybones
Inception-like CNN model based on 1D convolutions: http://arxiv.org/pdf/1512.00567v3.pdf
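A minimal sketch of what an Inception-style block looks like with 1D convolutions in Keras; the filter counts, kernel sizes, and input shape are illustrative, not taken from the paper:

```python
# Inception-style block with 1D convolutions: parallel branches with different
# receptive fields, concatenated along the channel axis.
import tensorflow as tf
from tensorflow.keras import layers

def inception_1d_block(x, filters=32):
    b1 = layers.Conv1D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv1D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv1D(filters, 3, padding="same", activation="relu")(b3)
    b5 = layers.Conv1D(filters, 1, padding="same", activation="relu")(x)
    b5 = layers.Conv1D(filters, 5, padding="same", activation="relu")(b5)
    bp = layers.MaxPooling1D(3, strides=1, padding="same")(x)
    bp = layers.Conv1D(filters, 1, padding="same", activation="relu")(bp)
    return layers.Concatenate()([b1, b3, b5, bp])

inputs = tf.keras.Input(shape=(128, 8))  # e.g. 128 timesteps, 8 channels
outputs = inception_1d_block(inputs)
model = tf.keras.Model(inputs, outputs)
```

The 1x1 convolutions before the wider kernels keep the channel count (and thus the compute) down, which is the core trick of the Inception family.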
Chat (in Russian): http://closedcircles.com/?invite=99b1ac08509c560137b2e3c54d4398b0fa4c175e