Hyperparameter tuners, take a look! Two tricks to improve deep learning training efficiency
3.2.1 Parameter server

In parameter server mode (see the figure below), after each worker finishes training on a batch and the parameters are propagated back, every worker sends its parameters to the parameter server, which aggregates them and computes the average, then sends the result back to every worker, which can then start training on the next batch. (image from the web)

A parameter server setup can use one or several server nodes. Whether this data-parallel mode actually improves efficiency depends on the communication efficiency between the parameter server and the workers, that is, on the training time of the slowest worker plus the time the parameter server needs to receive the parameters, update them, and send them back. With a large number of workers, the parameter server can become a bottleneck. (image from the web)

3.2.2 Ring-reduce

Ring-reduce, proposed by Baidu, does away with the parameter server and uses a ring topology to update the parameters. Ring-reduce arranges all workers into a ring in which each worker is adjacent to exactly two others, and each worker exchanges parameters only with its neighbors. After several rounds of exchange, every worker holds the parameter information of all the other workers, which completes the update. (image from the web)

A few figures (omitted here) illustrate the intermediate steps. To speed things up, ring-reduce does not exchange all of the parameters at once; it first splits the parameters into chunks and keeps exchanging these chunks round by round.
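To make the chunked exchange concrete, here is a toy, single-process simulation of ring all-reduce in plain Python. It is an illustration of the idea only, not Horovod's or NCCL's actual implementation; the gradient vectors, the chunk indexing, and the helper function ring_allreduce are all made up for this sketch.

# Toy single-process simulation of ring all-reduce (illustration only, not Horovod/NCCL code).
# Each "worker" starts with its own gradient vector; afterwards every worker holds the sum.
import numpy as np

def ring_allreduce(grads):
    # grads: one 1-D array per worker, all of the same length (hypothetical inputs).
    n = len(grads)
    # Every worker splits its parameters into n chunks.
    chunks = [np.array_split(g, n) for g in grads]

    # Phase 1: scatter-reduce. In round s, worker i sends chunk (i - s) mod n to its
    # right neighbor, which adds it to its own copy. After n - 1 rounds, every chunk
    # is fully summed on exactly one worker.
    for s in range(n - 1):
        for i in range(n):
            dst = (i + 1) % n
            c = (i - s) % n
            chunks[dst][c] = chunks[dst][c] + chunks[i][c]

    # Phase 2: all-gather. The fully reduced chunks travel around the ring for another
    # n - 1 rounds, so that in the end every worker holds every summed chunk.
    for s in range(n - 1):
        for i in range(n):
            dst = (i + 1) % n
            c = (i + 1 - s) % n
            chunks[dst][c] = chunks[i][c].copy()

    return [np.concatenate(c) for c in chunks]

# 4 workers, each holding a different constant gradient vector of length 8.
grads = [np.full(8, i, dtype=np.float32) for i in range(4)]
print(ring_allreduce(grads)[0])  # every worker ends up with the element-wise sum 0+1+2+3 = 6

Each of the two phases (scatter-reduce and all-gather) takes n - 1 rounds, and in every round each worker sends only one chunk to one neighbor, which is what keeps the per-worker communication volume roughly independent of the number of workers.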
4. Implementation framework: Horovod

Horovod is another deep learning tool open-sourced by Uber. Its design draws on the strengths of Facebook's "Training ImageNet in 1 hour" paper and Baidu's ring-allreduce, and it helps users implement distributed training: https://github.com/horovod/horovod

Horovod uses NCCL in place of Baidu's ring-allreduce implementation. NCCL is NVIDIA's collective communication library and provides a highly optimized version of ring-allreduce. NCCL 2 allows ring-allreduce to run across multiple machines.

Converting single-machine training code into distributed code takes only a few steps.

Install Horovod. It is recommended to use the Docker image of Horovod, which saves the trouble of setting up the environment. Horovod depends on NCCL 2 and Open MPI.

$ mkdir horovod-docker-gpu
$ wget -O horovod-docker-gpu/Dockerfile https://raw.githubusercontent.com/horovod/horovod/master/Dockerfile.gpu
$ docker build -t horovod:latest horovod-docker-gpu

Set up passwordless SSH between the worker machines.

Modify the training code. Horovod supports different deep learning frameworks such as TensorFlow, Keras, PyTorch, and MXNet. Taking Keras as an example, there are six main modifications:

(1) Initialize Horovod: hvd.init()
(2) Assign the GPU compute resources, one GPU per process: config.gpu_options.visible_device_list = str(hvd.local_rank())
(3) Wrap the optimizer in the distributed optimizer so that parameter updates are done in a distributed way: opt = hvd.DistributedOptimizer(opt)
(4) Make the model initialization consistent across all workers: hvd.callbacks.BroadcastGlobalVariablesCallback(0)
(5) Save the model (checkpoints) on only one worker, rank 0
(6) Scale the learning rate (and, in this example, the number of epochs) by the number of workers: opt = keras.optimizers.Adadelta(1.0 * hvd.size())

from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import math
import tensorflow as tf
import horovod.keras as hvd

# Horovod: initialize Horovod.
hvd.init()

# Horovod: pin GPU to be used to process local rank (one GPU per process)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.visible_device_list = str(hvd.local_rank())
K.set_session(tf.Session(config=config))

batch_size = 128
num_classes = 10

# Horovod: adjust number of epochs based on number of GPUs.
epochs = int(math.ceil(12.0 / hvd.size()))

# Input image dimensions
img_rows, img_cols = 28, 28

# The data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# Convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

# Horovod: adjust learning rate based on number of GPUs.
opt = keras.optimizers.Adadelta(1.0 * hvd.size())

# Horovod: add Horovod Distributed Optimizer.
opt = hvd.DistributedOptimizer(opt)

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=opt,
              metrics=['accuracy'])

callbacks = [
    # Horovod: broadcast initial variable states from rank 0 to all other processes.
    # This is necessary to ensure consistent initialization of all workers when
    # training is started with random weights or restored from a checkpoint.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

# Horovod: save checkpoints only on worker 0 to prevent other workers from corrupting them.
if hvd.rank() == 0:
    callbacks.append(keras.callbacks.ModelCheckpoint('./checkpoint-{epoch}.h5'))

model.fit(x_train, y_train,
          batch_size=batch_size,
          callbacks=callbacks,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

Use horovodrun to launch the distributed training:

horovodrun -np 16 -H server1:4,server2:4,server3:4,server4:4 python train.py

5. Summary

This article shared two ways to speed up deep learning training: raising GPU utilization and distributed training with the Horovod framework.

Load and preprocess data on the CPU in parallel, so that the GPU no longer waits for the CPU (see the sketch after this list).
Use Horovod for data parallelism to shorten the iteration time when training on large amounts of data.
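As a minimal sketch of the first point, the tf.data pipeline below is an assumption added here for illustration (the article's Keras example feeds NumPy arrays directly): map(..., num_parallel_calls=...) runs the per-sample preprocessing on several CPU threads, and prefetch() prepares the next batches while the GPU is still busy with the current one.

# Sketch of a tf.data input pipeline that keeps the GPU fed (assumed, not from the original post).
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()

def preprocess(image, label):
    # Per-sample CPU work (just a cast and rescale here; decoding/augmentation in a real pipeline).
    image = tf.cast(image, tf.float32) / 255.0
    return image, label

dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
           .shuffle(10000)
           .map(preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE)  # parallel CPU preprocessing
           .batch(128)
           .prefetch(tf.data.experimental.AUTOTUNE))                           # overlap with GPU compute
# model.fit(dataset, ...) can now consume batches without the GPU waiting on the CPU.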