Effect of techniques from Fast.ai

fast.ai is a brilliant library and course by Jeremy Howard and co. They use PyTorch as a base and explain deep learning from the foundations up to a very decent level. In his course Jeremy Howard demonstrates a lot of interesting techniques that he finds in papers and that make NN training faster/better/cheaper. Here I want to reproduce some of these techniques in order to understand what effect they bring....
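As an illustration of the kind of technique meant here, below is a minimal sketch of the one-cycle learning-rate policy popularized by fast.ai, written in plain PyTorch; this is an assumed example rather than something taken from the post, and the model, loader size and learning rates are placeholders.

    # Minimal sketch (assumed example): the one-cycle learning-rate policy in plain PyTorch.
    import torch
    from torch import nn

    model = nn.Linear(10, 2)                          # stand-in model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    steps_per_epoch, epochs = 100, 5                  # hypothetical loader size / schedule
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=0.1, steps_per_epoch=steps_per_epoch, epochs=epochs)

    for epoch in range(epochs):
        for step in range(steps_per_epoch):
            x = torch.randn(32, 10)                   # dummy batch
            loss = model(x).pow(2).mean()             # dummy loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.step()                          # LR follows the one-cycle shape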

November 15, 2020 · SergeM

Self-supervised depth and ego motion estimation

3D Packing for Self-Supervised Monocular Depth Estimation, by Vitor Guizilini et al. (pdf at arxiv), 2020. Learning: 1. Depth estimator $f_D : I \rightarrow D$; 2. Ego-motion estimator $f_x : (I_t, I_S) \rightarrow x_{t \rightarrow S}$. Depth estimator: they predict inverse depth and use a PackNet architecture. Inverse depth probably gives more stable results. Points far away from the camera have small inverse depth that with low precision....
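A minimal sketch of the inverse-depth idea (not PackNet itself, and the depth range is an assumed placeholder): a common way to map a network's sigmoid output to inverse depth and back to depth.

    # Minimal sketch (not PackNet): mapping a sigmoid network output to
    # inverse depth and depth; min_depth/max_depth are illustrative values.
    import torch

    def disp_to_depth(disp, min_depth=0.1, max_depth=100.0):
        """Convert a sigmoid output in [0, 1] to (inverse depth, depth)."""
        min_disp = 1.0 / max_depth      # far points -> small inverse depth
        max_disp = 1.0 / min_depth      # near points -> large inverse depth
        scaled_disp = min_disp + (max_disp - min_disp) * disp
        depth = 1.0 / scaled_disp
        return scaled_disp, depth

    sigmoid_output = torch.rand(1, 1, 192, 640)       # fake network output
    inv_depth, depth = disp_to_depth(sigmoid_output)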

August 23, 2020 · SergeM

Which pretrained backbone to choose

In 2020, which architecture should I use for my image classification/tracking/segmentation/… task? I was asked that in an interview and didn't have a prepared answer. I did some research and want to write down some thoughts. Most of the architectures build upon ideas from the ResNet paper Deep Residual Learning for Image Recognition, 2015. Here is some explanation of the ResNet family: An Overview of ResNet and its Variants by Vincent Fung, 2017....
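On the practical side, a minimal sketch (assuming torchvision and a plain classification task) of reusing a pretrained ResNet as a backbone; the number of classes and the freezing policy are placeholders, and the pretrained= argument matches 2020-era torchvision (newer versions use weights=).

    # Minimal sketch: pretrained ResNet backbone with a replaced classification head.
    import torch
    from torch import nn
    from torchvision import models

    backbone = models.resnet50(pretrained=True)        # 2020-era torchvision API
    num_classes = 10                                    # placeholder
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

    # Optionally freeze everything except the new head for a quick fine-tune.
    for name, param in backbone.named_parameters():
        if not name.startswith("fc."):
            param.requires_grad = False

    logits = backbone(torch.randn(2, 3, 224, 224))      # dummy forward pass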

July 1, 2020 · SergeM

Multistage NN training experiment

Ideas for multistage NN training. There is some research on continual learning without catastrophic forgetting. For example, ANML: Learning to Continually Learn (ECAI 2020): arxiv, code, video. The code for that paper is based on another one, OML (Online-aware Meta-learning), NeurIPS 2019: code, video. The OML paper derives some code from MAML: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks: pdf, official TF code, which also includes some links to other implementations....
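To make the MAML-style structure concrete, here is a minimal first-order sketch on a toy regression problem; this is an assumed illustration, not code from the ANML/OML/MAML repositories, and the task family, network and step sizes are placeholders.

    # Minimal first-order MAML-style sketch (assumed illustration): an inner loop
    # adapts a per-task copy of the model, the query-set gradients drive the
    # outer (meta) update.
    import copy
    import torch
    from torch import nn

    def sample_task():
        """Toy task family: y = a * sin(x + b) with random a, b per task."""
        a, b = torch.rand(1) * 4 + 1, torch.rand(1) * 3
        def sample_batch(n=10):
            x = torch.rand(n, 1) * 10 - 5
            return x, a * torch.sin(x + b)
        return sample_batch

    model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
    meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    inner_lr = 0.01

    for meta_step in range(100):
        meta_opt.zero_grad()
        for _ in range(4):                            # tasks per meta-batch
            batch = sample_task()
            fast = copy.deepcopy(model)               # task-specific copy
            fast_params = list(fast.parameters())
            x_s, y_s = batch()                        # support set
            loss = nn.functional.mse_loss(fast(x_s), y_s)
            grads = torch.autograd.grad(loss, fast_params)
            with torch.no_grad():                     # one inner SGD step
                for p, g in zip(fast_params, grads):
                    p -= inner_lr * g
            x_q, y_q = batch()                        # query set, same task
            query_loss = nn.functional.mse_loss(fast(x_q), y_q)
            query_loss.backward()                     # grads land on the fast copy
            with torch.no_grad():                     # first-order MAML: copy them back
                for p, fp in zip(model.parameters(), fast_params):
                    p.grad = fp.grad.clone() if p.grad is None else p.grad + fp.grad
        meta_opt.step()                               # meta update over accumulated grads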

January 1, 2020 · SergeM

Machine learning links

Super harsh guide to machine learning (reddit): First, read fucking Hastie, Tibshirani, and whoever. Chapters 1-4 and 7. If you don't understand it, keep reading it until you do. You can read the rest of the book if you want. You probably should, but I'll assume you know all of it. Take Andrew Ng's Coursera. Do all the exercises in Matlab and Python and R....

January 19, 2017 · SergeM

deep learning

REST API example for TensorFlow (it works): demo. Trained models for TensorFlow. TF-slim, a high-level API of TensorFlow for defining, training and evaluating complex models; doesn't work for Python 3 (see here). VGG16 and VGG19 in TensorFlow; one more here; and one more. Deep learning for lazybones. Inception-like CNN model based on 1D convolutions: http://arxiv.org/pdf/1512.00567v3.pdf. Chat (in Russian): http://closedcircles.com/?invite=99b1ac08509c560137b2e3c54d4398b0fa4c175e

June 3, 2016 · SergeM

Torch-Lightning library (draft)

How to visualize gradients with torch-lightning and tensorboard: in your model class define an optimizer_step.

    class Model(pl.LightningModule):
        # ...
        def optimizer_step(
            self,
            epoch: int,
            batch_idx: int,
            optimizer,
            optimizer_idx: int,
            second_order_closure=None,
        ) -> None:
            if self.trainer.use_tpu and XLA_AVAILABLE:
                xm.optimizer_step(optimizer)
            elif isinstance(optimizer, torch.optim.LBFGS):
                optimizer.step(second_order_closure)
            else:
                optimizer.step()

            #### Gradient reporting start ###
            if batch_idx % 500 == 0:
                for tag, param in self.model.named_parameters():
                    self.logger.experiment.add_histogram(
                        '{}_grad'.format(tag), param.grad.cpu().detach())
            #### Gradient reporting end ###

            # clear gradients
            optimizer....
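In more recent pytorch-lightning versions the same histograms can also be logged from the on_after_backward hook instead of overriding optimizer_step; a minimal sketch, assuming a TensorBoard logger (the 500-step interval just mirrors the snippet above).

    class Model(pl.LightningModule):
        # ...
        def on_after_backward(self):
            # Log gradient histograms every 500 optimization steps.
            if self.global_step % 500 == 0:
                for name, param in self.named_parameters():
                    if param.grad is not None:
                        self.logger.experiment.add_histogram(
                            '{}_grad'.format(name), param.grad.detach().cpu(), self.global_step)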

April 29, 2000 · SergeM