 BERT (1)

[Paper Review] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

 CNN (1)

[DL 101] Transfer Learning vs. Fine Tuning vs. Training from scratch

 ConvTransposed2D (1)

[DL 101] Upsampling

 Convolution (1)

[DL 101] Maxpooling? Convolutions?

 Data Augmentation (1)

Useful methods for CV competition

 FC layer (1)

[DL 101] Global Average Pooling

 GAP (1)

[DL 101] Global Average Pooling

 List-comprehension (1)

[DL 101] List comprehension

 Maxpooling (1)

[DL 101] Maxpooling? Convolutions?

 NLP (1)

[Paper Review] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

 SIFT (1)

Scale-Invariant Feature Transform (SIFT)

 Stochastic Weight Averaging (1)

Useful methods for CV competition

 Training from scratch (1)

[DL 101] Transfer Learning vs. Fine Tuning vs. Training from scratch

 Upsampling (1)

[DL 101] Upsampling

 adam (1)

[DL 101] RAdam - a novel variant of Adam

 adaptive-learning-rate (1)

[DL 101] RAdam - a novel variant of Adam

 attention (2)

Transformer and Self-Attention
Sequence to Sequence (seq2seq) and Attention

 autoencoder (2)

Word Embedding
[DL 101] Autoencoder Tutorial (Pytorch)

 computer vision (4)

[Paper Review] YOLO (You Only Look Once)
[Paper Review] Faster R-CNN
[Paper Review] Fast R-CNN
[Paper Review] R-CNN

 cutmix (1)

Useful methods for CV competition

 de novo (1)

Review: Deep Learning in Drug Discovery

 dimension reduction (1)

Word Embedding

 drug discovery (1)

Review: Deep Learning in Drug Discovery

 drug property (1)

Review: Deep Learning in Drug Discovery

 dti (1)

Review: Deep Learning in Drug Discovery

 early-stopping (1)

[DL 101] Early Stopping, Weight Decay

 fast rcnn (1)

[Paper Review] Fast R-CNN

 faster rcnn (1)

[Paper Review] Faster R-CNN

 fine-tuning (1)

[DL 101] Transfer Learning vs. Fine Tuning vs. Training from scratch

 gpt2 (1)

[Paper Review] Language Models are Unsupervised Multitask Learners (GPT-2)

 iherb (1)

Web-crawling using Python

 image retrieval (1)

Scale-Invariant Feature Transform (SIFT)

 image segmentation (1)

[Paper Review] U-Net: Convolutional Networks for Biomedical Image Segmentation

 image similarity (1)

Scale-Invariant Feature Transform (SIFT)

 kaggle (1)

Useful methods for CV competition

 language model (1)

[Paper Review] Deep contextualized word representations (ELMo)

 learning-rate-scheduler (1)

[DL 101] Learning Rate Scheduler

 lstm (2)

Long Short-Term Memory
Recurrent Neural Networks (RNN)

 object detection (1)

[Paper Review] YOLO (You Only Look Once)

 object-detection (1)

[DL 101] Object Recognition Terminology

 object-recognition (1)

[DL 101] Object Recognition Terminology

 opencv (1)

[DL 101] OpenCV python tutorial

 optimization (1)

[DL 101] RAdam - a novel variant of Adam

 paper review (7)

[Paper Review] On Calibration of Modern Neural Networks
[Paper Review] Sequence to Sequence Learning with Neural Networks
[Paper Review] Efficient Estimation of Word Representations in Vector Space
[Paper Review] U-Net: Convolutional Networks for Biomedical Image Segmentation
[Paper Review] Faster R-CNN
[Paper Review] Fast R-CNN
[Paper Review] R-CNN

 pre-trained model (1)

[Paper Review] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

 python (1)

[DL 101] List comprehension

 rcnn (1)

[Paper Review] R-CNN

 rectified-adam (1)

[DL 101] RAdam - a novel variant of Adam

 regularization (1)

[DL 101] Early Stopping, Weight Decay

 reliable ai (1)

[Paper Review] On Calibration of Modern Neural Networks

 rnn (2)

Long Short-Term Memory
Recurrent Neural Networks (RNN)

 segmentation (1)

[DL 101] Object Recognition Terminology

 selenium (1)

Web-crawling using Python

 seq2seq (2)

Sequence to Sequence (seq2seq) and Attention
[Paper Review] Sequence to Sequence Learning with Neural Networks

 sequential models (1)

Recurrent Neural Networks (RNN)

 supervised learning (1)

[Paper Review] U-Net: Convolutional Networks for Biomedical Image Segmentation

 test time augmentation (1)

Useful methods for CV competition

 transfer-learning (1)

[DL 101] Transfer Learning vs. Fine Tuning vs. Training from scratch

 transformer (1)

Transformer and Self-Attention

 unet (1)

[Paper Review] U-Net: Convolutional Networks for Biomedical Image Segmentation

 web crawling (1)

Web-crawling using Python

 weight-decay (1)

[DL 101] Early Stopping, Weight Decay

 word embedding (1)

Word Embedding

 word representation (1)

[Paper Review] Deep contextualized word representations (ELMo)

 word2vec (2)

Word Embedding
[Paper Review] Efficient Estimation of Word Representations in Vector Space

 yolo (1)

[Paper Review] YOLO (You Only Look Once)