Table 3 RMSE comparison of different architectures and training algorithms

From: Modeling of smartphones’ power using neural networks

| Training function | FFBP (n = 10) | FFBP (n = 20) | CFFBP (n = 10) | CFFBP (n = 20) | FFBPTD (n = 10) | FFBPTD (n = 20) |
|---|---|---|---|---|---|---|
| trainbfg | 0.209797 | 0.206758 | 0.216204 | 0.199763 | 0.632268 | 0.632285 |
| trainbr | 0.193328 | 0.175663 | 0.184413 | 0.174294 | 0.632267 | 0.632267 |
| traincgb | 0.218694 | 0.217632 | 0.21954 | 0.217823 | 0.632268 | 0.632271 |
| traincgf | 0.229647 | 0.226134 | 0.227723 | 0.213663 | 0.632267 | 0.632267 |
| traincgp | 0.225715 | 0.219247 | 0.227335 | 0.217196 | 0.632269 | 0.632268 |
| traingd | 0.353894 | 1.192112 | 0.336454 | 5.590423 | 0.632267 | 0.63227 |
| traingdm | 0.966383 | 0.845086 | 3.510545 | 5.550865 | 0.883342 | 0.654927 |
| traingda | 0.332997 | 0.39005 | 0.454985 | 0.482595 | 0.632267 | 0.632568 |
| traingdx | 0.286482 | 0.284275 | 0.306612 | 0.317421 | 0.632344 | 0.632833 |
| trainlm | 0.188031 | 0.176105 | 0.190761 | 0.180157 | 0.632267 | 0.632283 |
| trainoss | 0.233728 | 0.236423 | 0.223991 | 0.234473 | 0.632361 | 0.632269 |
| trainrp | 0.231186 | 0.239729 | 0.242783 | 0.247439 | 0.632274 | 0.806998 |
| trainscg | 0.22851 | 0.229321 | 0.232129 | 0.218688 | 0.632267 | 0.632268 |
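
For reference, the RMSE values reported above follow the standard definition, the square root of the mean squared difference between measured and predicted values. The sketch below is only an illustration of how such an error figure is computed; the variable names and sample numbers are hypothetical and are not data from the paper.

```python
import numpy as np

def rmse(measured, predicted):
    """Root-mean-square error between measured and predicted values."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((measured - predicted) ** 2)))

# Illustrative dummy traces only (e.g., measured vs. model-predicted power);
# not values taken from the study.
measured_power = np.array([1.20, 1.35, 1.10, 1.50, 1.42])
predicted_power = np.array([1.25, 1.30, 1.18, 1.44, 1.40])

print(f"RMSE = {rmse(measured_power, predicted_power):.6f}")
```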