The rectifier, also known as the rectified linear unit (ReLU), is an activation function used in artificial neural networks. It passes positive inputs through unchanged and maps all other inputs to zero:

f(x) = max(0, x)

[Figure: a plot of the rectifier function]

where x is the input received by the artificial neuron. The rectifier was first introduced in 2000 with the aim of making artificial neural networks behave more like biological neural networks[1]. In the early 2010s, research showed that the rectifier improved the performance of artificial neural networks compared with the activation functions in common use at the time[2], and by 2017 it had become the most widely used activation function for deep neural networks[3][4][5].
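As a concrete illustration (not part of the original article), the sketch below shows the rectifier written as a small NumPy function and applied element-wise, first to a vector of pre-activation values and then as the activation of a toy dense layer; the name relu and all numeric values are illustrative assumptions.

```python
import numpy as np

def relu(x):
    # Rectifier / ReLU: f(x) = max(0, x), applied element-wise.
    # Positive inputs pass through unchanged; everything else becomes 0.
    return np.maximum(0.0, x)

# Pre-activation values of a few hypothetical neurons.
pre_activations = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(pre_activations))          # [0.  0.  0.  0.5 2. ]

# A toy dense layer using the rectifier as its activation function
# (weights, bias and inputs are arbitrary illustrative numbers).
W = np.array([[1.0, -1.0],
              [0.5,  2.0]])
b = np.array([0.1, -0.2])
inputs = np.array([0.3, -0.7])
print(relu(W @ inputs + b))           # [1.1 0. ]
```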

See also


References

  • Hahnloser, R.; Sarpeshkar, R.; Mahowald, M. A.; Douglas, R. J.; Seung, H. S. (2000). "Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit". Nature. 405 (6789): 947–951.
  1. Hahnloser, R.; Seung, H. S. (2001). "Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks". NIPS 2001.
  2. Glorot, Xavier; Bordes, Antoine; Bengio, Yoshua (2011). "Deep sparse rectifier neural networks" (PDF). AISTATS. "Rectifier and softplus activation functions. The second one is a smooth version of the first."
  3. LeCun, Yann; Bottou, Léon; Orr, Genevieve B.; Müller, Klaus-Robert (1998). "Efficient BackProp" (PDF). In G. Orr and K. Müller (eds.). Neural Networks: Tricks of the Trade. Springer.
  4. LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (2015). "Deep learning". Nature. 521 (7553): 436–444.
  5. Ramachandran, Prajit; Zoph, Barret; Le, Quoc V. (October 16, 2017). "Searching for Activation Functions".