nnc-challenge_Dual-Input_CNN

Posted by wowry


A dual-input CNN model by wowry for the NNC-Challenge.

Network Architecture : Main

Type             Value
Output           609,699
CostParameter    43,088,320
CostAdd          10,909,442
CostMultiply     2,219,536
CostMultiplyAdd  6,052,275,200
CostDivision     4,364,417
CostExp          4,364,417
CostIf           2,131,968

Network Architecture : Conv2D

Type             Value
Output           132,496
CostParameter    26,592
CostAdd          125,440
CostMultiply     25,088
CostMultiplyAdd  20,697,600
CostDivision     50,176
CostExp          50,176
CostIf           25,088
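A side note on these counts: the identical CostExp and CostDivision values are what one would expect from the Swish activation cited in the References, since each Swish output needs one exponential and one division:

$$\mathrm{swish}(x) = x \cdot \sigma(x) = \frac{x}{1 + e^{-x}}$$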

Network Architecture : FC

Type             Value
Output           1,168
CostParameter    100,480
CostAdd          256
CostMultiply     128
CostMultiplyAdd  100,352
CostDivision     128
CostExp          128
CostIf           0
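To make the building blocks above concrete, here is a minimal Neural Network Libraries (nnabla) sketch of one plausible wiring for a dual-input CNN: each input passes through its own Conv2D branch, the features are concatenated, and an FC head produces the classification output. The layer sizes, the concatenation strategy, and the softmax output are illustrative assumptions, not values read from the project file.

```python
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

def swish(x):
    # Swish activation: x * sigmoid(x) (Ramachandran et al., see References).
    return x * F.sigmoid(x)

def conv_block(x, maps, scope):
    # Convolution -> BatchNormalization -> Swish: the components the
    # cost tables and References point to.
    with nn.parameter_scope(scope):
        h = PF.convolution(x, maps, (3, 3), pad=(1, 1))
        h = PF.batch_normalization(h, batch_stat=True)
    return swish(h)

def main(x1, x2, n_classes):
    # One Conv2D branch per input (illustrative sizes)...
    h1 = F.max_pooling(conv_block(x1, 16, "branch1"), (2, 2))
    h2 = F.max_pooling(conv_block(x2, 16, "branch2"), (2, 2))
    # ...then concatenate along the channel axis and classify with FC.
    h = F.concatenate(h1, h2, axis=1)
    h = swish(PF.affine(h, 128, name="fc1"))
    return F.softmax(PF.affine(h, n_classes, name="fc2"))
```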

Training Procedure : Optimizer

Optimize the network “Main” using the “Training” dataset, with the settings below (a solver-setup sketch follows the list).

  • Batch size : 64
  • Solver : Adam
    • Learning rate (Alpha) : 0.001
    • Beta1 : 0.9
    • Beta2 : 0.999
    • Epsilon : 1e-08
  • Weight decay is not applied.
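With Neural Network Libraries (the engine behind Neural Network Console), the same solver configuration can be written as below. The `train_step` helper and its `loss` argument are placeholders standing in for the Main network's training objective; the batch size of 64 is a property of the data iterator, not the solver.

```python
import nnabla as nn
import nnabla.solvers as S

# Adam with exactly the hyperparameters listed above.
solver = S.Adam(alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-08)
solver.set_parameters(nn.get_parameters())  # register the network weights
# No solver.weight_decay(...) call is made, since weight decay is not applied.

def train_step(loss, solver):
    # One minibatch update; `loss` is the scalar training objective.
    solver.zero_grad()
    loss.forward()
    loss.backward()
    solver.update()
```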

Experimental Result : Learning Curve

[Learning-curve plot from the original project page]

Experimental Result : Evaluation

Evaluate the network “MainRuntime” using the “Test” dataset.

  • The forward calculation is executed 10 times for each data sample.
  • The average of the 10 outputs is used as the final output (the loop is sketched below).
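A minimal nnabla sketch of that averaging loop, assuming `y` is the MainRuntime output variable and `x1`, `x2` are its two input variables (all three names are placeholders):

```python
import numpy as np

def averaged_output(y, x1, x2, d1, d2, n_runs=10):
    # Execute the forward pass n_runs times for one sample and
    # average the outputs, mirroring the evaluation procedure above.
    outs = []
    for _ in range(n_runs):
        x1.d, x2.d = d1, d2           # feed the two inputs
        y.forward(clear_buffer=True)  # one forward execution
        outs.append(y.d.copy())
    return np.mean(outs, axis=0)      # averaged final output
```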

Variable : y

  • Accuracy : 0.7090069284064665
  • Avg.Precision : 0.5032193732193733
  • Avg.Recall : 0.5075283144570286
  • Avg.F-Measures : 0.4809931506849315
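For reading these numbers: the “Avg.” metrics are per-class averages, as the names suggest. Each class's F-measure is the harmonic mean of that class's precision and recall, and the averaging over classes happens afterwards, which is consistent with Avg.F-Measures (≈0.481) being lower than the harmonic mean of Avg.Precision and Avg.Recall (≈0.505):

$$F_c = \frac{2\,P_c R_c}{P_c + R_c}, \qquad \text{Avg.F-Measures} = \frac{1}{C}\sum_{c=1}^{C} F_c$$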

References

  • Sony Corporation. Neural Network Console: not just train and evaluate, you can design neural networks with a fast and intuitive GUI. https://dl.sony.com/
  • Sony Corporation. Neural Network Libraries: open-source software to make research, development, and implementation of neural networks more efficient. https://nnabla.org/
  • Convolution – Chen et al., DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. https://arxiv.org/abs/1606.00915; Yu et al., Multi-Scale Context Aggregation by Dilated Convolutions. https://arxiv.org/abs/1511.07122
  • Swish – Ramachandran, Zoph, and Le, Searching for Activation Functions. https://arxiv.org/abs/1710.05941
  • BatchNormalization – Ioffe and Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. https://arxiv.org/abs/1502.03167
  • Adam – Kingma and Ba, Adam: A Method for Stochastic Optimization. https://arxiv.org/abs/1412.6980