PSPNet-ResNet50

Posted by SNC_official

Dataset : Training

  • Number of data : 40,000
  • Variable : x (In)
    • Type : Image
    • Shape : 1, 64, 64
  • Variable : y (Out)
    • Type : Image
    • Shape : 1, 64, 64

Examples of variables x and y in “Training”
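
For readers who script with Neural Network Libraries (nnabla) instead of the Console GUI, a minimal sketch of variables and a data iterator matching the shapes above might look like the following. The CSV file name and the assumption that both x and y are stored as image columns are illustrative, not taken from this page.

# Minimal sketch: nnabla variables matching the dataset description above
# (1 x 64 x 64 images for both x and y). File name and CSV layout are assumed.
import nnabla as nn
from nnabla.utils.data_iterator import data_iterator_csv_dataset

batch_size = 32  # same batch size as in the Training Procedure below

# Input image x and target image y, one sample each of shape (1, 64, 64).
x = nn.Variable((batch_size, 1, 64, 64))
y = nn.Variable((batch_size, 1, 64, 64))

# Assumed dataset CSV: a header such as "x:image,y:image", then one row per
# sample pointing to the image files ("training.csv" is a hypothetical name).
train_di = data_iterator_csv_dataset("training.csv", batch_size, shuffle=True)

x_batch, y_batch = train_di.next()  # numpy arrays of shape (32, 1, 64, 64)
x.d, y.d = x_batch, y_batch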

Dataset : Validation

  • Number of data : 1,000
  • Variable : x (In)
    • Type : Image
    • Shape : 1, 64, 64
  • Variable : y (Out)
    • Type : Image
    • Shape : 1, 64, 64

Examples of variables x and y in “Validation”

Network Architecture : Main

  • Output : 3,644,416
  • CostParameter : 49,128,000
  • CostAdd : 4,269,184
  • CostMultiply : 2,483,200
  • CostMultiplyAdd : 3,078,209,536
  • CostDivision : 8,192
  • CostExp : 8,192
  • CostIf : 2,327,552
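
These Cost rows are the Console's complexity statistics for one forward pass (parameter count, additions, multiplications, multiply-accumulates, and so on). As a rough illustration only, and assuming CostMultiplyAdd is accumulated per output element as in the usual convolution MAC count, a single convolution contributes output elements x input channels x kernel area:

# Illustrative arithmetic only: multiply-accumulate count of one convolution.
# Whether CostMultiplyAdd is computed exactly this way is an assumption.
def conv_macs(out_c, out_h, out_w, in_c, k):
    # each output element needs in_c * k * k multiply-accumulates
    return out_c * out_h * out_w * in_c * k * k

# e.g. a hypothetical 3x3 convolution, 64 -> 64 channels, on a 16 x 16 map
print(conv_macs(out_c=64, out_h=16, out_w=16, in_c=64, k=3))  # 9437184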

Network Architecture : Runtime

  • Output : 3,628,032
  • CostParameter : 46,767,424
  • CostAdd : 4,248,640
  • CostMultiply : 2,446,336
  • CostMultiplyAdd : 2,927,198,208
  • CostDivision : 4,096
  • CostExp : 4,096
  • CostIf : 2,311,168

Network Architecture : BaseSkip

  • Output : 425,984
  • CostParameter : 71,168
  • CostAdd : 163,840
  • CostMultiply : 98,304
  • CostMultiplyAdd : 17,825,792
  • CostDivision : 0
  • CostExp : 0
  • CostIf : 98,304

Network Architecture : BaseConvSkip

  • Output : 557,056
  • CostParameter : 137,984
  • CostAdd : 294,912
  • CostMultiply : 163,840
  • CostMultiplyAdd : 34,603,008
  • CostDivision : 0
  • CostExp : 0
  • CostIf : 98,304
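
BaseSkip and BaseConvSkip are not described further on this page; judging by their names and the PSPNet-ResNet50 title, they are presumably the two standard residual-block variants, one with an identity shortcut and one with a convolutional (projection) shortcut. The sketch below shows that assumed pattern in nnabla; the channel counts, kernel sizes and strides are illustrative and will not reproduce the cost figures above.

# Assumed residual-block pattern behind "BaseSkip" (identity shortcut) and
# "BaseConvSkip" (convolutional shortcut). All sizes are illustrative.
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF


def conv_bn_relu(h, maps, kernel, name, stride=(1, 1), relu=True, test=False):
    with nn.parameter_scope(name):
        h = PF.convolution(h, maps, kernel, pad=(kernel[0] // 2, kernel[1] // 2),
                           stride=stride, with_bias=False)
        h = PF.batch_normalization(h, batch_stat=not test)
    return F.relu(h) if relu else h


def base_skip(x, maps, test=False):
    # Identity shortcut: the input is added back unchanged.
    # Wrap each block call in its own nn.parameter_scope to keep parameters separate.
    h = conv_bn_relu(x, maps, (3, 3), "conv1", test=test)
    h = conv_bn_relu(h, maps, (3, 3), "conv2", relu=False, test=test)
    return F.relu(h + x)


def base_conv_skip(x, maps, stride=(2, 2), test=False):
    # Convolutional shortcut: a 1x1 convolution matches channels and resolution.
    h = conv_bn_relu(x, maps, (3, 3), "conv1", stride=stride, test=test)
    h = conv_bn_relu(h, maps, (3, 3), "conv2", relu=False, test=test)
    s = conv_bn_relu(x, maps, (1, 1), "shortcut", stride=stride,
                     relu=False, test=test)
    return F.relu(h + s)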

Network Architecture : Decoder

  • Output : 155,712
  • CostParameter : 1,182,209
  • CostAdd : 36,928
  • CostMultiply : 65,536
  • CostMultiplyAdd : 75,530,240
  • CostDivision : 4,096
  • CostExp : 4,096
  • CostIf : 32,768
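
The Decoder's CostExp and CostDivision values (4,096 each, one per pixel of a 64 x 64 map) suggest it ends in a Sigmoid producing the single-channel 1, 64, 64 output. PSPNet's characteristic component is a pyramid pooling module, so an assumed sketch of such a decoder follows; the bin sizes, channel counts and upsampling method are illustrative guesses, not taken from this page.

# Assumed PSPNet-style pyramid pooling decoder ending in a single-channel
# Sigmoid map. Bin sizes, channel counts and upsampling are illustrative.
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF


def pyramid_pooling(h, maps, bins=(1, 2, 4, 8), test=False):
    # Pool the feature map at several grid sizes, project with 1x1 convolutions,
    # upsample back to the input resolution, and concatenate everything.
    _, _, H, W = h.shape
    branches = [h]
    for i, b in enumerate(bins):
        with nn.parameter_scope("psp%d" % i):
            p = F.average_pooling(h, (H // b, W // b), stride=(H // b, W // b))
            p = PF.convolution(p, maps, (1, 1), with_bias=False)
            p = PF.batch_normalization(p, batch_stat=not test)
            p = F.relu(p)
            p = F.unpooling(p, (H // b, W // b))  # nearest-neighbour upsampling
        branches.append(p)
    return F.concatenate(*branches, axis=1)


def decoder(h, test=False):
    h = pyramid_pooling(h, maps=64, test=test)
    with nn.parameter_scope("dec_conv"):
        h = PF.convolution(h, 64, (3, 3), pad=(1, 1), with_bias=False)
        h = PF.batch_normalization(h, batch_stat=not test)
        h = F.relu(h)
    with nn.parameter_scope("dec_out"):
        h = PF.convolution(h, 1, (1, 1))  # single-channel logits
    return F.sigmoid(h)  # 1 x 64 x 64 probability map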

Training Procedure : Optimizer

Optimize network “Main” using the “Training” dataset with the settings below; a minimal training-loop sketch follows the list.

  • Batch size : 32
  • Solver : Momentum
    • Learning rate : 0.01
    • Momentum : 0.9
  • Weight decay : 0.0001
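
The sketch below applies these settings in nnabla. The loss function (binary cross-entropy between the predicted and target images) and the build_main_network() helper are assumptions for illustration; neither is specified on this page.

# Training-loop sketch using the settings listed above:
# batch size 32, Momentum solver (lr 0.01, momentum 0.9), weight decay 0.0001.
import nnabla as nn
import nnabla.functions as F
import nnabla.solvers as S

batch_size = 32
max_iter = 10000  # illustrative; the number of iterations is not stated here

x = nn.Variable((batch_size, 1, 64, 64))
y = nn.Variable((batch_size, 1, 64, 64))

pred = build_main_network(x)                    # hypothetical graph builder
loss = F.mean(F.binary_cross_entropy(pred, y))  # assumed loss function

solver = S.Momentum(lr=0.01, momentum=0.9)
solver.set_parameters(nn.get_parameters())

for it in range(max_iter):
    x.d, y.d = train_di.next()   # data iterator from the dataset sketch above
    loss.forward()
    solver.zero_grad()
    loss.backward()
    solver.weight_decay(0.0001)  # weight decay 0.0001, as listed above
    solver.update()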

Experimental Result : Learning Curve

References

  • Sony Corporation. Neural Network Console: Not just train and evaluate. You can design neural networks with fast and intuitive GUI. https://dl.sony.com/
  • Sony Corporation. Neural Network Libraries: An open source software to make research, development and implementation of neural network more efficient. https://nnabla.org/
  • Convolution – Chen et al. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. https://arxiv.org/abs/1606.00915; Yu and Koltun. Multi-Scale Context Aggregation by Dilated Convolutions. https://arxiv.org/abs/1511.07122
  • BatchNormalization – Ioffe and Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. https://arxiv.org/abs/1502.03167
  • ReLU – Nair and Hinton. Rectified Linear Units Improve Restricted Boltzmann Machines. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.165.6419&rep=rep1&type=pdf
  • Momentum – Qian. On the Momentum Term in Gradient Descent Learning Algorithms. http://www.columbia.edu/~nq6/publications/momentum.pdf