Mixing floating- and fixed-point formats for neural network learning on neuroprocessors
Institution:1. Shenzhen Key Laboratory of Media Security, College of Information Engineering, Shenzhen University, Nanhai Ave 3688, Shenzhen, Guangdong 518060, PR China;2. National Laboratory for Scientific Computing (LNCC), Av. Getúlio Vargas 333, Quitandinha, Petrópolis, Rio de Janeiro 22230000, Brazil
Abstract: We examine the efficient implementation of back-propagation (BP) type algorithms on T0 [3], a vector processor with a fixed-point engine designed for neural network simulation. Using Matrix Back Propagation (MBP) [2] we achieve asymptotically optimal performance on T0 (about 0.8 GOPS) for both the forward and backward phases, which is not possible with the standard on-line BP algorithm. We use a mixture of fixed- and floating-point operations in order to guarantee both high efficiency and fast convergence. Although the most expensive computations are performed in fixed-point, we achieve a rate of convergence comparable to that of the floating-point version. The time taken for conversion between fixed- and floating-point formats is also shown to be reasonably low.
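The following is a minimal NumPy sketch of the mixed-format idea summarized in the abstract: the expensive matrix products of the forward and backward passes run on scaled integers (fixed-point), while the error computation and weight update stay in floating-point. The Q-style format with 12 fractional bits, the conversion helpers, and the toy single-layer network are assumptions made purely for illustration; this is not the authors' T0/MBP implementation.

```python
# Illustrative sketch only (assumed format and toy network, not the T0 code).
import numpy as np

FRAC_BITS = 12          # assumed number of fractional bits in the fixed-point format
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Convert a float array to 16-bit-range fixed-point (round and saturate)."""
    q = np.round(x * SCALE)
    return np.clip(q, -2**15, 2**15 - 1).astype(np.int32)

def to_float(q):
    """Convert fixed-point values back to floating-point."""
    return q.astype(np.float64) / SCALE

def fixed_matmul(a_fx, b_fx):
    """Integer matrix product; the accumulated result is rescaled to the input format."""
    acc = a_fx @ b_fx                       # integer accumulation
    return (acc >> FRAC_BITS).astype(np.int32)

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 32)) * 0.1     # one batch of inputs
W = rng.standard_normal((32, 16)) * 0.1     # one weight matrix (single linear layer for brevity)
T = rng.standard_normal((64, 16)) * 0.1     # targets

for step in range(100):
    # Expensive products run in fixed-point, mimicking the fixed-point engine.
    Y = to_float(fixed_matmul(to_fixed(X), to_fixed(W)))
    err = Y - T
    grad = to_float(fixed_matmul(to_fixed(X.T), to_fixed(err)))
    # The weight update stays in floating-point to preserve convergence.
    W -= 0.05 * grad

final = to_float(fixed_matmul(to_fixed(X), to_fixed(W)))
print("final loss:", float(np.mean((final - T) ** 2)))
```

In this sketch the conversions to_fixed/to_float stand in for the format conversions whose cost the abstract reports as reasonably low; only the two matrix products per step are carried out on integers.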
This article is indexed in ScienceDirect and other databases.