Spark MLlib Deep Learning Neural Net (Deep Learning - Neural Network)

Spark MLlib Deep Learning Neural Net (Deep Learning - Neural Network) 1.1
/sunbow0

The Spark MLlib Deep Learning toolbox is an implementation, on Spark MLlib, of the algorithms from the existing UFLDL deep-learning tutorial. The planned table of contents for Spark MLlib Deep Learning:

Chapter 1  Neural Net (NN): 1. Source code; 2. Source-code walkthrough; 3. Example
Chapter 2  Deep Belief Nets (DBNs)
Chapter 3  Convolution Neural Network (CNN)
Chapter 4  Stacked Auto-Encoders (SAE)
Chapter 5  CAE

Chapter 1  Neural Net (Neural Network)

1  Source code

The GitHub address of the current Spark MLlib Deep Learning toolbox source is: /sunbow1/SparkMLlibDeepLearn

1.1  NeuralNet code

package NN

import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD
import org.apache.spark.Logging
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg._
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import breeze.linalg.{Matrix => BM, CSCMatrix => BSM, DenseMatrix => BDM,
  Vector => BV, DenseVector => BDV, SparseVector => BSV,
  axpy => brzAxpy, svd => brzSvd}
import breeze.numerics.{exp => Bexp, tanh => Btanh}

import scala.collection.mutable.ArrayBuffer
import java.util.Random
import scala.math._

/*
 * label: target matrix
 * nna:   output values of every network layer, a(0), a(1), a(2)
 * error: error matrix between the output layer and the target values
 */
case class NNLabel(label: BDM[Double], nna: ArrayBuffer[BDM[Double]], error: BDM[Double]) extends Serializable

/* Configuration parameters */
case class NNConfig(
    size: Array[Int],
    layer: Int,
    activation_function: String,
    learningRate: Double,
    momentum: Double,
    scaling_learningRate: Double,
    weightPenaltyL2: Double,
    nonSparsityPenalty: Double,
    sparsityTarget: Double,
    inputZeroMaskedFraction: Double,
    dropoutFraction: Double,
    testing: Double,
    output_function: String) extends Serializable

/*
 * NN (neural network)
 */
class NeuralNet(
    private var size: Array[Int],
    private var layer: Int,
    private var activation_function: String,
    private var learningRate: Double,
    private var momentum: Double,
    private var scaling_learningRate: Double,
    private var weightPenaltyL2: Double,
    private var nonSparsityPenalty: Double,
    private var sparsityTarget: Double,
    private var inputZeroMaskedFraction: Double,
    private var dropoutFraction: Double,
    private var testing: Double,
    private var output_function: String) extends Serializable with Logging {

  /*
   * Default parameter values:
   *   size = Array(5, 7, 1)           architecture; n = numel(nn.size)
   *   layer = 3
   *   activation_function = tanh_opt  hidden-layer activation: sigm (sigmoid) or tanh_opt (optimal tanh)
   *   learningRate = 2.0              learning rate; typically needs to be lower when using the sigm
   *                                   activation function and non-normalized inputs
   *   momentum = 0.5                  momentum
   *   scaling_learningRate = 1.0      scaling factor for the learning rate (applied each epoch)
   *   weightPenaltyL2 = 0.0           L2 regularization
   *   nonSparsityPenalty = 0.0        penalty on non-sparsity of hidden-unit activations
   *   sparsityTarget = 0.05           sparsity target
   *   inputZeroMaskedFraction = 0.0   input noise; used for denoising auto-encoders
   *   dropoutFraction = 0.0           dropout level: on each mini-batch, randomly drop x% of the
   *                                   hidden-layer units (/hinton/absps/dropout.pdf)
   *   testing = 0.0                   internal variable; nntest sets this to one
   *   output_function = sigm          output unit: sigm (= logistic), softmax or linear
   */
  def this() = this(NeuralNet.Architecture, 3, NeuralNet.Activation_Function,
    2.0, 0.5, 1.0, 0.0, 0.0, 0.05, 0.0, 0.0, 0.0, NeuralNet.Output)

  /* Set the neural-network architecture. Default: 10, 5, 1. */
  def setSize(size: Array[Int]): this.type = {
    this.size = size
    this
  }

  /* Set the number of network layers. Default: 3. */
  def setLayer(layer: Int): this.type = {
    this.layer = layer
    this
  }

  /* Set the hidden-layer activation function. Default: s

(The free preview of the listing ends here; the remainder of the source is on the final, non-previewed page.)
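The comment block above names the unit types this code supports: sigm (logistic), tanh_opt (the "optimal" tanh, conventionally 1.7159 * tanh(2x/3) in the DeepLearnToolbox lineage the toolbox follows), and sigm/softmax/linear outputs. A minimal pure-Python sketch of those activations (hypothetical helper names, illustrated in Python rather than the toolbox's Scala):

```python
import math

def sigm(x):
    """Logistic sigmoid: 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh_opt(x):
    """Scaled 'optimal' tanh: 1.7159 * tanh(2x/3)."""
    return 1.7159 * math.tanh(2.0 / 3.0 * x)

def softmax(xs):
    """Softmax over a list; subtract the max for numerical stability."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

print(sigm(0.0))                       # 0.5
print(tanh_opt(0.0))                   # 0.0
print(sum(softmax([1.0, 2.0, 3.0])))   # 1.0 (softmax outputs sum to one)
```

The max-subtraction in softmax does not change the result but prevents overflow for large inputs, which matters when the output layer is used for classification.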
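Per the parameter comments, scaling_learningRate rescales the learning rate after every epoch, while momentum and weightPenaltyL2 enter the weight update. A sketch under the usual classical-momentum-plus-L2 assumptions (hypothetical helpers, not the toolbox's actual training loop, which is in the non-previewed part of the listing):

```python
def lr_at_epoch(learning_rate, scaling_learning_rate, epoch):
    """Learning rate after `epoch` epochs, each multiplied by scaling_learningRate."""
    return learning_rate * scaling_learning_rate ** epoch

def momentum_step(w, grad, velocity, lr, momentum, weight_penalty_l2):
    """One classical-momentum update for a single scalar weight,
    with an L2 penalty term added to the gradient."""
    g = grad + weight_penalty_l2 * w          # L2 regularization gradient
    velocity = momentum * velocity - lr * g   # classical momentum
    return w + velocity, velocity

# With scaling_learningRate = 1.0 (the default) the rate never changes:
print(lr_at_epoch(2.0, 1.0, 10))   # 2.0
# With a factor below one it decays geometrically:
print(lr_at_epoch(2.0, 0.9, 1))    # 1.8
```

This is why the defaults pair a fairly large learningRate of 2.0 with tanh_opt: the comments note the rate "typically needs to be lower when using sigm activation and non-normalized inputs".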
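dropoutFraction zeroes a random fraction of hidden-unit outputs on each mini-batch, and inputZeroMaskedFraction applies the same kind of masking to the inputs for denoising auto-encoders. Both reduce to independent Bernoulli zero-masking, sketched here with a hypothetical Python helper:

```python
import random

def zero_mask(values, fraction, rng):
    """Zero each element independently with probability `fraction`.
    Models both dropoutFraction (hidden units) and
    inputZeroMaskedFraction (denoising-auto-encoder inputs)."""
    return [0.0 if rng.random() < fraction else v for v in values]

rng = random.Random(42)                     # seeded for reproducibility
hidden = [0.3, 0.8, 0.5, 0.9, 0.1]
dropped = zero_mask(hidden, 0.5, rng)
# Roughly half the activations are zeroed; the rest pass through unchanged.
print(dropped)
```

With fraction = 0.0 (the toolbox default for both parameters) the mask is a no-op, matching the comment that these features are off unless explicitly enabled.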

