We won't apply any transformations to our validation dataset (apart from resizing, which is mandatory), since we will be using it to evaluate model performance at each epoch. For a detailed explanation of image augmentation in the context of transfer learning, feel free to check out the article I referenced above. Let's look at some sample results from a batch of image-augmentation transforms.
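The `train_datagen` used below is a Keras `ImageDataGenerator` configured earlier in the article. As a minimal, framework-free sketch of the kind of random transform such a pipeline applies, here is a horizontal flip over an image represented as a list of pixel rows (the helper name and tiny image are illustrative, not from the article):

```python
# A horizontal flip, one of the random transforms an augmentation
# pipeline like ImageDataGenerator can apply to each training image.
def horizontal_flip(image):
    """Flip a 2-D image (list of pixel rows) left-to-right."""
    return [row[::-1] for row in image]

# Tiny 2x3 "image" of pixel intensities.
img = [[1, 2, 3],
       [4, 5, 6]]
flipped = horizontal_flip(img)
print(flipped)  # [[3, 2, 1], [6, 5, 4]]
```

The label is unchanged by such transforms, which is what lets augmentation multiply the effective training data for free.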
img_id = 0
sample_generator = train_datagen.flow(train_data[img_id:img_id+1],
                                      train_labels[img_id:img_id+1],
                                      batch_size=1)
sample = [next(sample_generator) for i in range(0, 5)]
fig, ax = plt.subplots(1, 5, figsize=(16, 6))
print('Labels:', [item[1][0] for item in sample])
l = [ax[i].imshow(sample[i][0][0]) for i in range(0, 5)]

Sample augmented images
You can clearly see the slight variations of our images compared to the earlier output. We now build our deep learning model, making sure the last two blocks of the VGG-19 model are trainable.
vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
                                        input_shape=INPUT_SHAPE)
# Unfreeze only the last two convolution blocks for fine-tuning
vgg.trainable = True

set_trainable = False
for layer in vgg.layers:
    if layer.name in ['block5_conv1', 'block4_conv1']:
        set_trainable = True
    if set_trainable:
        layer.trainable = True
    else:
        layer.trainable = False

base_vgg = vgg
base_out = base_vgg.output
pool_out = tf.keras.layers.Flatten()(base_out)
hidden1 = tf.keras.layers.Dense(512, activation='relu')(pool_out)
drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)
out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)

model = tf.keras.Model(inputs=base_vgg.input, outputs=out)
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])

print("Total Layers:", len(model.layers))
print("Total trainable layers:",
      sum([1 for l in model.layers if l.trainable]))

# Output
Total Layers: 28
Total trainable layers: 16
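The freezing loop above flips `set_trainable` the first time it reaches `block4_conv1` (which comes before `block5_conv1` in VGG-19's forward order), so that layer and everything after it stay trainable. A framework-free sketch of that toggle, using an abbreviated, illustrative excerpt of VGG-19 layer names:

```python
# Layer names in forward order (abbreviated VGG-19 excerpt).
layer_names = ['block3_conv4', 'block3_pool',
               'block4_conv1', 'block4_conv2', 'block4_pool',
               'block5_conv1', 'block5_conv2', 'block5_pool']

# Same toggle pattern as the model code: once we hit one of the
# named layers, it and every later layer become trainable.
set_trainable = False
trainable = {}
for name in layer_names:
    if name in ['block5_conv1', 'block4_conv1']:
        set_trainable = True
    trainable[name] = set_trainable

print([n for n, t in trainable.items() if t])
# ['block4_conv1', 'block4_conv2', 'block4_pool',
#  'block5_conv1', 'block5_conv2', 'block5_pool']
```

Everything before `block4_conv1` keeps its ImageNet weights frozen, which is why only 16 of the 28 layers are reported as trainable.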
We lower the learning rate for our model because we don't want to make large weight updates to the pretrained layers while fine-tuning. The training process will be slightly different here since we are using data generators, so we will use the fit_generator(...) function.
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss',
                                                 factor=0.5, patience=2,
                                                 min_lr=0.000001)
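To build intuition for what `ReduceLROnPlateau(factor=0.5, patience=2, min_lr=1e-6)` does during training, here is a simplified, framework-free simulation of its core rule (halve the learning rate after the monitored loss fails to improve for `patience` consecutive epochs); the function and the sample loss curve are illustrative, not from the article:

```python
def simulate_reduce_lr(losses, lr=1e-5, factor=0.5, patience=2, min_lr=1e-6):
    """Simplified ReduceLROnPlateau: scale lr by `factor` after
    `patience` consecutive epochs with no improvement, never
    going below `min_lr`."""
    best = float('inf')
    wait = 0
    for loss in losses:
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
    return lr

# Loss improves twice, then plateaus: two reductions fire,
# so 1e-5 -> 5e-6 -> 2.5e-6.
print(simulate_reduce_lr([0.30, 0.20, 0.25, 0.24, 0.26, 0.23]))
```

The real Keras callback adds a few refinements (a `min_delta` threshold, a cooldown period), but the plateau-then-halve behaviour is the essence.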
callbacks = [reduce_lr, tensorboard_callback]
train_steps_per_epoch = train_generator.n // train_generator.batch_size
val_steps_per_epoch = val_generator.n // val_generator.batch_size

history = model.fit_generator(train_generator,
                              steps_per_epoch=train_steps_per_epoch,
                              epochs=EPOCHS,
                              validation_data=val_generator,
                              validation_steps=val_steps_per_epoch,
                              callbacks=callbacks,
                              verbose=1)
# Output
Epoch 1/25
271/271 [====] - 133s 489ms/step - loss: 0.2267 - accuracy: 0.9117 - val_loss: 0.1414 - val_accuracy: 0.9531
Epoch 2/25
271/271 [====] - 129s 475ms/step - loss: 0.1399 - accuracy: 0.9552 - val_loss: 0.1292 - val_accuracy: 0.9589
...
Epoch 24/25
271/271 [====] - 128s 473ms/step - loss: 0.0815 - accuracy: 0.9727 - val_loss: 0.1466 - val_accuracy: 0.9682
Epoch 25/25
271/271 [====] - 128s 473ms/step - loss: 0.0792 - accuracy: 0.9729 - val_loss: 0.1127 - val_accuracy: 0.9641
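Note from the log that the best validation accuracy (about 0.9682) occurs before the final epoch. The returned `history` object keeps per-epoch metrics in `history.history`, so the best epoch can be recovered after training; a small sketch with illustrative values echoing the abbreviated log above:

```python
# history.history maps metric names to per-epoch lists; these
# values are illustrative, taken from the abbreviated log.
history_dict = {
    'val_accuracy': [0.9531, 0.9589, 0.9682, 0.9641],
}

# Keras numbers epochs from 1 in the progress log.
best_idx = max(range(len(history_dict['val_accuracy'])),
               key=lambda i: history_dict['val_accuracy'][i])
print('Best epoch:', best_idx + 1,
      'val_accuracy:', history_dict['val_accuracy'][best_idx])
```

This is also why a `ModelCheckpoint` callback with `save_best_only=True` is a common addition, so the best-epoch weights are not lost to a weaker final epoch.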