# List the currently mounted dataset directory
!ls /home/kesci/input/cell8160/cell_images
Parasitized Uninfected
# Show per-cell execution time
%load_ext klab-autotime
This dataset contains cell images for malaria detection, split into two classes:
infected (Parasitized) cells and uninfected cells.
Goal 1: build a convolutional neural network (CNN) to identify which cells are infected and which are not.
Goal 2: visualize how the training-set and test-set loss change as training iterates.
Goal 3: visualize how the training-set and test-set accuracy change as training iterates.
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
import numpy as np
import matplotlib.pyplot as plt
import glob, os, random
Using TensorFlow backend.
Matplotlib is building the font cache using fc-list. This may take a moment.
time: 59.1 s
path = '../input/cell8160/cell_images'
time: 366 µs
os.path.join(path, '*/*.*')
'../input/cell8160/cell_images/*/*.*'
time: 3.33 ms
# Use the glob module to match image files in bulk; '*' matches anything
img_list = glob.glob(os.path.join(path, '*/*.*'))
print('>>> number of images:', len(img_list))
img_list[:5]
>>> number of images: 27560
['../input/cell8160/cell_images/Uninfected/C240ThinF_IMG_20151127_115223_cell_185.png',
'../input/cell8160/cell_images/Uninfected/C117P78ThinF_IMG_20150930_214511_cell_48.png',
'../input/cell8160/cell_images/Uninfected/C102P63ThinF_IMG_20150918_161826_cell_152.png',
'../input/cell8160/cell_images/Uninfected/C118P79ThinF_IMG_20151002_105018_cell_25.png',
'../input/cell8160/cell_images/Uninfected/C70P31_ThinF_IMG_20150819_141327_cell_14.png']
time: 81.7 ms
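glob also makes it easy to count files per class; a minimal sketch reusing the same path (note the wildcard above matched 27560 files while flow_from_directory later reports 27558 images, so a couple of matched files are likely not images):
# Count matched files per class directory
for cls in ['Parasitized', 'Uninfected']:
    n = len(glob.glob(os.path.join(path, cls, '*.*')))
    print(cls, n)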
# Load and plot the first few (uninfected) images
for i, img_path in enumerate(img_list[:6]):
    img_plot = load_img(img_path)  # load the image
    arr = img_to_array(img_plot)   # convert the image to an array
    print(arr.shape)               # image shape: (height, width, channels)
    plt.subplot(2, 3, i + 1)
    plt.imshow(img_plot)
(94, 103, 3)
(136, 127, 3)
(136, 133, 3)
(151, 157, 3)
(112, 100, 3)
(118, 133, 3)
time: 717 ms
The plots and shapes above show that the images do not share a common size,
so in practice we resize every image to a fixed size before classification.
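As a quick illustration of the resizing, load_img accepts a target_size argument that resizes on load; a minimal sketch reusing img_list from above:
# Load one image resized to 100 x 100
sample = load_img(img_list[0], target_size=(100, 100))
print(img_to_array(sample).shape)  # (100, 100, 3)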
# Fix the image width and height
img_width, img_height = 100, 100
# Training and validation image paths (folder paths are enough)
train_data_dir = '../input/cell8160/cell_images/'
validation_data_dir = '../input/cell8160/cell_images/'
# Training hyperparameters
nb_train_samples = 275
nb_validation_samples = 200
epochs = 100     # number of epochs
batch_size = 32  # observations per batch
# Input shape depends on the backend's channel ordering
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)
time: 919 µs
Image augmentation setup
Use the .flow_from_directory(directory) method of the ImageDataGenerator class.
Official docs: https://keras.io/zh/preprocessing/image/
train_datagen = ImageDataGenerator(rescale=1. / 255,    # rescaling factor
                                   shear_range=0.2,     # shear intensity (counter-clockwise shear angle in radians)
                                   zoom_range=0.2,      # random zoom range
                                   horizontal_flip=True # random horizontal flips
                                   )
train_generator = train_datagen.flow_from_directory(train_data_dir,  # training data folder
                                                    target_size=(img_width, img_height),  # resize all images
                                                    batch_size=batch_size,  # observations per batch
                                                    class_mode='categorical'  # one-hot labels (two classes here)
                                                    )
test_datagen = ImageDataGenerator(rescale=1. / 255,
                                  shear_range=0.2,       # shear intensity
                                  zoom_range=0.2,        # random zoom range
                                  horizontal_flip=True)  # random horizontal flips
validation_generator = test_datagen.flow_from_directory(validation_data_dir,  # validation data folder
                                                        target_size=(img_width, img_height),
                                                        batch_size=batch_size,
                                                        class_mode='categorical'  # one-hot labels
                                                        )
Found 27558 images belonging to 2 classes.
Found 27558 images belonging to 2 classes.
time: 1.74 s
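To sanity-check the generators, it helps to pull one batch and inspect its shape and labels; a minimal sketch using the generators defined above:
# Fetch one augmented batch and plot a few images
x_batch, y_batch = train_generator.__getitem__(0)
print(x_batch.shape, y_batch.shape)  # expected: (32, 100, 100, 3) (32, 2)
plt.figure(figsize=(8, 4))
for i in range(8):
    plt.subplot(2, 4, i + 1)
    plt.imshow(x_batch[i])
    plt.axis('off')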
model = Sequential()
# -----------------------------------------------------
# Input layer: the first layer
# Add the first convolution / max-pooling pair (required)
model.add(Conv2D(filters=32,              # 32 filters
                 kernel_size=(3, 3),      # 3 x 3 kernel
                 input_shape=input_shape, # input image dimensions
                 activation='relu'))      # ReLU activation
model.add(MaxPooling2D(pool_size=(2, 2))) # 2 x 2 pooling window
# ----------------------------------------------------
# Hidden layers: between the first and last layers
# Second convolution / max-pooling pair (optional)
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Third convolution / max-pooling pair (optional)
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# The convolutional feature maps are 2D, so flatten them to 1D before the dense layers
model.add(Flatten())                          # flatten layer (required)
model.add(Dense(units=64, activation='relu')) # fully connected layer with 64 units (required)
model.add(Dropout(0.5))                       # dropout layer to reduce overfitting
# ---------------------------------------------------
# Output layer: the last layer; the unit count sets the output dimension
model.add(Dense(units=2, activation='sigmoid')) # two output units with sigmoid activation
model.summary()
WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 98, 98, 32) 896
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 49, 49, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 47, 47, 32) 9248
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 23, 23, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 21, 21, 64) 18496
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 10, 10, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 6400) 0
_________________________________________________________________
dense_1 (Dense) (None, 64) 409664
_________________________________________________________________
dropout_1 (Dropout) (None, 64) 0
_________________________________________________________________
dense_2 (Dense) (None, 2) 130
=================================================================
Total params: 438,434
Trainable params: 438,434
Non-trainable params: 0
_________________________________________________________________
time: 1.58 s
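The parameter counts in the summary can be checked by hand: a Conv2D layer has (kernel_h * kernel_w * input_channels + 1) * filters parameters, and a Dense layer has (input_features + 1) * units:
# conv2d_1: 3x3 kernel over 3 input channels, 32 filters (plus one bias each)
print((3 * 3 * 3 + 1) * 32)   # 896
# conv2d_2: 3x3 kernel over 32 channels, 32 filters
print((3 * 3 * 32 + 1) * 32)  # 9248
# conv2d_3: 3x3 kernel over 32 channels, 64 filters
print((3 * 3 * 32 + 1) * 64)  # 18496
# dense_1: 6400 flattened features -> 64 units
print((6400 + 1) * 64)        # 409664
# dense_2: 64 -> 2
print((64 + 1) * 2)           # 130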
model.compile(loss='categorical_crossentropy',  # loss function
              optimizer='rmsprop',              # optimizer
              metrics=['accuracy'])             # evaluation metric
time: 27.7 ms
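The model above pairs two sigmoid output units with categorical_crossentropy. A common alternative for two classes, shown here only as a hypothetical sketch (it is not the model trained below, and it would require class_mode='binary' in the generators), is a single sigmoid unit with binary_crossentropy:
# Hypothetical alternative: single-unit binary head (illustration only)
alt_model = Sequential()
alt_model.add(Conv2D(32, (3, 3), input_shape=input_shape, activation='relu'))
alt_model.add(MaxPooling2D(pool_size=(2, 2)))
alt_model.add(Flatten())
alt_model.add(Dense(units=1, activation='sigmoid'))  # one output: P(class 1)
alt_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])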
Reference: https://www.cnblogs.com/tectal/p/9482255.html
When using fit_generator, steps_per_epoch must be set.
If there are N = 1000 training samples and steps_per_epoch = 10, each step effectively consumes a batch of 100 samples;
relying on legacy-style defaults can effectively shrink the batch to 1 sample per step,
which makes training extremely slow. Lesson: always set fit_generator's steps_per_epoch explicitly.
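With the values used below (nb_train_samples = 275, batch_size = 32), integer division gives the step counts that appear in the training log (8/8 per epoch):
print(nb_train_samples // batch_size)       # 8 training steps per epoch
print(nb_validation_samples // batch_size)  # 6 validation steps per epoch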
history = model.fit_generator(train_generator,
                              steps_per_epoch=nb_train_samples // batch_size,
                              epochs=epochs,
                              validation_data=validation_generator,
                              validation_steps=nb_validation_samples // batch_size)
WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/100
8/8 [==============================] - 14s 2s/step - loss: 0.7575 - acc: 0.4648 - val_loss: 0.6841 - val_acc: 0.5417
Epoch 2/100
8/8 [==============================] - 8s 988ms/step - loss: 0.6957 - acc: 0.5391 - val_loss: 0.6891 - val_acc: 0.5365
Epoch 3/100
8/8 [==============================] - 8s 984ms/step - loss: 0.6962 - acc: 0.5547 - val_loss: 0.6675 - val_acc: 0.6146
Epoch 4/100
8/8 [==============================] - 8s 966ms/step - loss: 0.6650 - acc: 0.6328 - val_loss: 0.7919 - val_acc: 0.5885
Epoch 5/100
8/8 [==============================] - 8s 997ms/step - loss: 0.6981 - acc: 0.6406 - val_loss: 0.7162 - val_acc: 0.5573
Epoch 6/100
8/8 [==============================] - 8s 977ms/step - loss: 0.6744 - acc: 0.6172 - val_loss: 0.7056 - val_acc: 0.4844
Epoch 7/100
8/8 [==============================] - 8s 987ms/step - loss: 0.6577 - acc: 0.6055 - val_loss: 0.6682 - val_acc: 0.5729
Epoch 8/100
8/8 [==============================] - 8s 976ms/step - loss: 0.6558 - acc: 0.6484 - val_loss: 0.8118 - val_acc: 0.5156
Epoch 9/100
8/8 [==============================] - 8s 974ms/step - loss: 0.6615 - acc: 0.6367 - val_loss: 0.7099 - val_acc: 0.4635
Epoch 10/100
8/8 [==============================] - 8s 985ms/step - loss: 0.6689 - acc: 0.6055 - val_loss: 0.6070 - val_acc: 0.6719
Epoch 11/100
8/8 [==============================] - 8s 975ms/step - loss: 0.6735 - acc: 0.6406 - val_loss: 0.6504 - val_acc: 0.7500
Epoch 12/100
8/8 [==============================] - 8s 979ms/step - loss: 0.6239 - acc: 0.6836 - val_loss: 0.5584 - val_acc: 0.7708
Epoch 13/100
8/8 [==============================] - 8s 983ms/step - loss: 0.6314 - acc: 0.7031 - val_loss: 0.6043 - val_acc: 0.6719
Epoch 14/100
8/8 [==============================] - 8s 988ms/step - loss: 0.5879 - acc: 0.7383 - val_loss: 0.5095 - val_acc: 0.7708
Epoch 15/100
8/8 [==============================] - 8s 974ms/step - loss: 0.5809 - acc: 0.7539 - val_loss: 0.5500 - val_acc: 0.8125
Epoch 16/100
8/8 [==============================] - 8s 977ms/step - loss: 0.5620 - acc: 0.7070 - val_loss: 0.5097 - val_acc: 0.8490
Epoch 17/100
8/8 [==============================] - 8s 975ms/step - loss: 0.5494 - acc: 0.7969 - val_loss: 0.5382 - val_acc: 0.7396
Epoch 18/100
8/8 [==============================] - 8s 988ms/step - loss: 0.5105 - acc: 0.7930 - val_loss: 0.4817 - val_acc: 0.8021
Epoch 19/100
8/8 [==============================] - 8s 972ms/step - loss: 0.4612 - acc: 0.8281 - val_loss: 0.4146 - val_acc: 0.8125
Epoch 20/100
8/8 [==============================] - 8s 1s/step - loss: 0.4717 - acc: 0.7969 - val_loss: 0.3589 - val_acc: 0.9323
Epoch 21/100
8/8 [==============================] - 8s 974ms/step - loss: 0.4134 - acc: 0.8594 - val_loss: 0.4424 - val_acc: 0.8490
Epoch 22/100
8/8 [==============================] - 8s 984ms/step - loss: 0.4366 - acc: 0.8359 - val_loss: 0.3841 - val_acc: 0.8594
Epoch 23/100
8/8 [==============================] - 8s 978ms/step - loss: 0.4430 - acc: 0.8516 - val_loss: 0.4065 - val_acc: 0.8594
Epoch 24/100
8/8 [==============================] - 8s 973ms/step - loss: 0.4497 - acc: 0.8203 - val_loss: 0.3353 - val_acc: 0.9115
Epoch 25/100
8/8 [==============================] - 8s 987ms/step - loss: 0.3752 - acc: 0.8750 - val_loss: 0.3439 - val_acc: 0.8594
Epoch 26/100
8/8 [==============================] - 8s 964ms/step - loss: 0.3528 - acc: 0.8711 - val_loss: 0.3075 - val_acc: 0.9115
Epoch 27/100
8/8 [==============================] - 8s 976ms/step - loss: 0.3816 - acc: 0.8555 - val_loss: 0.2760 - val_acc: 0.9479
Epoch 28/100
8/8 [==============================] - 8s 986ms/step - loss: 0.3275 - acc: 0.9141 - val_loss: 0.3398 - val_acc: 0.8490
Epoch 29/100
8/8 [==============================] - 8s 990ms/step - loss: 0.2963 - acc: 0.9062 - val_loss: 0.4068 - val_acc: 0.7865
Epoch 30/100
8/8 [==============================] - 8s 987ms/step - loss: 0.3615 - acc: 0.8750 - val_loss: 0.2573 - val_acc: 0.9167
Epoch 31/100
8/8 [==============================] - 8s 973ms/step - loss: 0.3728 - acc: 0.8828 - val_loss: 0.2152 - val_acc: 0.9271
Epoch 32/100
8/8 [==============================] - 8s 987ms/step - loss: 0.2060 - acc: 0.9336 - val_loss: 0.3428 - val_acc: 0.9062
Epoch 33/100
8/8 [==============================] - 8s 989ms/step - loss: 0.2784 - acc: 0.9141 - val_loss: 0.2724 - val_acc: 0.8802
Epoch 34/100
8/8 [==============================] - 8s 972ms/step - loss: 0.3941 - acc: 0.8984 - val_loss: 0.2394 - val_acc: 0.9271
Epoch 35/100
8/8 [==============================] - 8s 978ms/step - loss: 0.4437 - acc: 0.7422 - val_loss: 0.2341 - val_acc: 0.9219
Epoch 36/100
8/8 [==============================] - 8s 1s/step - loss: 0.2967 - acc: 0.9258 - val_loss: 0.2853 - val_acc: 0.8750
Epoch 37/100
8/8 [==============================] - 8s 986ms/step - loss: 0.2402 - acc: 0.9062 - val_loss: 0.3493 - val_acc: 0.8750
Epoch 38/100
8/8 [==============================] - 8s 987ms/step - loss: 0.2785 - acc: 0.9023 - val_loss: 0.2534 - val_acc: 0.9010
Epoch 39/100
8/8 [==============================] - 8s 986ms/step - loss: 0.3491 - acc: 0.8516 - val_loss: 0.2125 - val_acc: 0.9323
Epoch 40/100
8/8 [==============================] - 8s 987ms/step - loss: 0.2975 - acc: 0.9062 - val_loss: 0.1633 - val_acc: 0.9323
Epoch 41/100
8/8 [==============================] - 8s 976ms/step - loss: 0.2057 - acc: 0.9141 - val_loss: 0.2421 - val_acc: 0.9062
Epoch 42/100
8/8 [==============================] - 8s 986ms/step - loss: 0.2847 - acc: 0.9258 - val_loss: 0.3729 - val_acc: 0.8385
Epoch 43/100
8/8 [==============================] - 8s 975ms/step - loss: 0.2667 - acc: 0.8945 - val_loss: 0.2371 - val_acc: 0.9583
Epoch 44/100
8/8 [==============================] - 8s 977ms/step - loss: 0.2489 - acc: 0.9141 - val_loss: 0.2421 - val_acc: 0.9219
Epoch 45/100
8/8 [==============================] - 8s 976ms/step - loss: 0.2282 - acc: 0.9258 - val_loss: 0.2418 - val_acc: 0.8854
Epoch 46/100
8/8 [==============================] - 8s 974ms/step - loss: 0.2697 - acc: 0.8477 - val_loss: 0.1576 - val_acc: 0.9531
Epoch 47/100
8/8 [==============================] - 8s 975ms/step - loss: 0.2616 - acc: 0.9180 - val_loss: 0.2010 - val_acc: 0.9375
Epoch 48/100
8/8 [==============================] - 8s 972ms/step - loss: 0.3275 - acc: 0.8477 - val_loss: 0.2453 - val_acc: 0.8906
Epoch 49/100
8/8 [==============================] - 8s 964ms/step - loss: 0.2438 - acc: 0.9219 - val_loss: 0.2605 - val_acc: 0.9115
Epoch 50/100
8/8 [==============================] - 8s 976ms/step - loss: 0.1465 - acc: 0.9492 - val_loss: 0.2885 - val_acc: 0.9062
Epoch 51/100
8/8 [==============================] - 8s 986ms/step - loss: 0.2943 - acc: 0.8867 - val_loss: 0.2459 - val_acc: 0.9010
Epoch 52/100
8/8 [==============================] - 8s 973ms/step - loss: 0.2310 - acc: 0.9336 - val_loss: 0.2933 - val_acc: 0.9010
Epoch 53/100
8/8 [==============================] - 8s 967ms/step - loss: 0.2805 - acc: 0.8984 - val_loss: 0.1787 - val_acc: 0.9375
Epoch 54/100
8/8 [==============================] - 8s 974ms/step - loss: 0.2503 - acc: 0.8945 - val_loss: 0.2568 - val_acc: 0.9010
Epoch 55/100
8/8 [==============================] - 8s 987ms/step - loss: 0.2151 - acc: 0.9180 - val_loss: 0.2806 - val_acc: 0.9062
Epoch 56/100
8/8 [==============================] - 8s 976ms/step - loss: 0.3094 - acc: 0.9102 - val_loss: 0.2732 - val_acc: 0.8958
Epoch 57/100
8/8 [==============================] - 8s 973ms/step - loss: 0.3137 - acc: 0.8867 - val_loss: 0.2140 - val_acc: 0.9115
Epoch 58/100
8/8 [==============================] - 8s 986ms/step - loss: 0.2613 - acc: 0.8945 - val_loss: 0.3203 - val_acc: 0.9062
Epoch 59/100
8/8 [==============================] - 8s 977ms/step - loss: 0.2552 - acc: 0.8867 - val_loss: 0.1871 - val_acc: 0.9115
Epoch 60/100
8/8 [==============================] - 8s 973ms/step - loss: 0.3308 - acc: 0.9023 - val_loss: 0.2159 - val_acc: 0.9219
Epoch 61/100
8/8 [==============================] - 8s 978ms/step - loss: 0.1998 - acc: 0.9180 - val_loss: 0.2363 - val_acc: 0.8646
Epoch 62/100
8/8 [==============================] - 8s 986ms/step - loss: 0.2097 - acc: 0.9258 - val_loss: 0.3306 - val_acc: 0.8750
Epoch 63/100
8/8 [==============================] - 8s 976ms/step - loss: 0.3406 - acc: 0.9023 - val_loss: 0.2797 - val_acc: 0.8802
Epoch 64/100
8/8 [==============================] - 8s 973ms/step - loss: 0.2837 - acc: 0.9062 - val_loss: 0.2063 - val_acc: 0.9062
Epoch 65/100
8/8 [==============================] - 8s 987ms/step - loss: 0.2908 - acc: 0.9062 - val_loss: 0.1748 - val_acc: 0.9271
Epoch 66/100
8/8 [==============================] - 8s 974ms/step - loss: 0.2764 - acc: 0.9414 - val_loss: 0.1717 - val_acc: 0.9375
Epoch 67/100
8/8 [==============================] - 8s 979ms/step - loss: 0.2553 - acc: 0.9141 - val_loss: 0.2076 - val_acc: 0.9115
Epoch 68/100
8/8 [==============================] - 8s 973ms/step - loss: 0.2522 - acc: 0.9102 - val_loss: 0.1813 - val_acc: 0.9167
Epoch 69/100
8/8 [==============================] - 8s 974ms/step - loss: 0.2715 - acc: 0.9062 - val_loss: 0.2102 - val_acc: 0.9323
Epoch 70/100
8/8 [==============================] - 8s 966ms/step - loss: 0.2002 - acc: 0.9336 - val_loss: 0.2524 - val_acc: 0.9062
Epoch 71/100
8/8 [==============================] - 8s 974ms/step - loss: 0.2628 - acc: 0.9102 - val_loss: 0.1506 - val_acc: 0.9375
Epoch 72/100
8/8 [==============================] - 8s 975ms/step - loss: 0.3622 - acc: 0.8711 - val_loss: 0.2602 - val_acc: 0.8854
Epoch 73/100
8/8 [==============================] - 8s 975ms/step - loss: 0.2692 - acc: 0.8867 - val_loss: 0.1870 - val_acc: 0.9219
Epoch 74/100
8/8 [==============================] - 8s 997ms/step - loss: 0.2588 - acc: 0.9180 - val_loss: 0.1967 - val_acc: 0.9167
Epoch 75/100
8/8 [==============================] - 8s 978ms/step - loss: 0.4920 - acc: 0.6133 - val_loss: 0.2735 - val_acc: 0.8906
Epoch 76/100
8/8 [==============================] - 8s 984ms/step - loss: 0.2469 - acc: 0.9062 - val_loss: 0.2809 - val_acc: 0.8802
Epoch 77/100
8/8 [==============================] - 8s 966ms/step - loss: 0.2825 - acc: 0.9102 - val_loss: 0.2099 - val_acc: 0.9219
Epoch 78/100
8/8 [==============================] - 8s 988ms/step - loss: 0.2603 - acc: 0.9258 - val_loss: 0.1780 - val_acc: 0.9271
Epoch 79/100
8/8 [==============================] - 8s 974ms/step - loss: 0.2012 - acc: 0.9258 - val_loss: 0.1663 - val_acc: 0.9271
Epoch 80/100
8/8 [==============================] - 8s 997ms/step - loss: 0.3441 - acc: 0.9023 - val_loss: 0.2758 - val_acc: 0.8802
Epoch 81/100
8/8 [==============================] - 8s 978ms/step - loss: 0.2743 - acc: 0.9023 - val_loss: 0.1789 - val_acc: 0.9427
Epoch 82/100
8/8 [==============================] - 8s 972ms/step - loss: 0.2107 - acc: 0.9336 - val_loss: 0.2255 - val_acc: 0.9167
Epoch 83/100
8/8 [==============================] - 8s 966ms/step - loss: 0.1955 - acc: 0.9258 - val_loss: 0.2542 - val_acc: 0.9323
Epoch 84/100
8/8 [==============================] - 8s 973ms/step - loss: 0.2218 - acc: 0.9180 - val_loss: 0.1741 - val_acc: 0.9219
Epoch 85/100
8/8 [==============================] - 8s 973ms/step - loss: 0.1967 - acc: 0.9258 - val_loss: 0.1468 - val_acc: 0.9323
Epoch 86/100
8/8 [==============================] - 8s 965ms/step - loss: 0.2091 - acc: 0.9219 - val_loss: 0.1802 - val_acc: 0.9167
Epoch 87/100
8/8 [==============================] - 8s 972ms/step - loss: 0.2225 - acc: 0.9141 - val_loss: 0.0867 - val_acc: 0.9479
Epoch 88/100
8/8 [==============================] - 8s 977ms/step - loss: 0.2334 - acc: 0.8984 - val_loss: 0.2047 - val_acc: 0.9062
Epoch 89/100
8/8 [==============================] - 8s 998ms/step - loss: 0.2504 - acc: 0.8945 - val_loss: 0.2010 - val_acc: 0.9427
Epoch 90/100
8/8 [==============================] - 8s 977ms/step - loss: 0.2061 - acc: 0.8945 - val_loss: 0.1916 - val_acc: 0.9219
Epoch 91/100
8/8 [==============================] - 8s 952ms/step - loss: 0.2135 - acc: 0.9297 - val_loss: 0.2709 - val_acc: 0.9271
Epoch 92/100
8/8 [==============================] - 8s 984ms/step - loss: 0.1866 - acc: 0.9336 - val_loss: 0.1747 - val_acc: 0.9531
Epoch 93/100
8/8 [==============================] - 8s 987ms/step - loss: 0.3025 - acc: 0.9258 - val_loss: 0.2163 - val_acc: 0.9375
Epoch 94/100
8/8 [==============================] - 8s 976ms/step - loss: 0.2264 - acc: 0.9180 - val_loss: 0.2695 - val_acc: 0.9115
Epoch 95/100
8/8 [==============================] - 8s 976ms/step - loss: 0.2837 - acc: 0.9102 - val_loss: 0.2171 - val_acc: 0.9271
Epoch 96/100
8/8 [==============================] - 8s 976ms/step - loss: 0.2542 - acc: 0.8945 - val_loss: 0.1586 - val_acc: 0.9427
Epoch 97/100
8/8 [==============================] - 8s 963ms/step - loss: 0.1608 - acc: 0.9375 - val_loss: 0.3260 - val_acc: 0.8854
Epoch 98/100
8/8 [==============================] - 8s 986ms/step - loss: 0.2621 - acc: 0.8867 - val_loss: 0.2625 - val_acc: 0.9115
Epoch 99/100
8/8 [==============================] - 8s 977ms/step - loss: 0.1811 - acc: 0.9336 - val_loss: 0.1889 - val_acc: 0.9167
Epoch 100/100
8/8 [==============================] - 8s 966ms/step - loss: 0.2177 - acc: 0.9258 - val_loss: 0.1912 - val_acc: 0.9115
time: 13min 10s
import matplotlib.pyplot as plt
%matplotlib inline
training_loss = history.history['loss']
test_loss = history.history['val_loss']
# Epoch numbers for the x-axis
epoch_count = range(1, len(training_loss) + 1)
# Plot the loss history
plt.plot(epoch_count, training_loss, 'r--')
plt.plot(epoch_count, test_loss, 'b-')
plt.legend(['Training Loss', 'Test Loss'])
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
time: 229 ms
1. As training iterates, the loss decreases and then levels off.
2. The training and test losses stay close to each other, which suggests the model generalizes well.
3. The loss curves flatten out rather than oscillating wildly, which suggests the training converged nicely.
train_acc = history.history['acc']
test_acc = history.history['val_acc']
epoch_counts = range(1, len(train_acc) + 1)
plt.plot(epoch_counts, train_acc, 'r--', marker='^')
plt.plot(epoch_counts, test_acc, linestyle='-', marker='o', color='y')
plt.title('accuracy condition')
plt.legend(['train_acc', 'test_acc'])
plt.xlabel('epochs')
plt.ylabel('acc')
Text(0, 0.5, 'acc')
time: 202 ms
1. As training iterates, the accuracy rises and then levels off.
2. The training and test accuracies stay close to each other, which suggests the model generalizes well.
3. The accuracy curves flatten out rather than oscillating wildly, which suggests the training converged nicely.
Signature: model.predict_generator(generator, steps=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)
validation_generator.reset()
pred = model.predict_generator(generator=validation_generator,
                               steps=10,  # number of batches to draw from the generator
                               verbose=1)
print(pred.shape)
pred[:10]
10/10 [==============================] - 3s 334ms/step
(320, 2)
array([[9.9922025e-01, 5.2025318e-02],
[9.9920368e-01, 3.6925077e-05],
[4.8935115e-02, 6.6392523e-01],
[8.9464116e-01, 3.1011105e-03],
[1.0000000e+00, 9.2949641e-01],
[4.7528565e-02, 5.9321433e-01],
[2.8747976e-02, 6.0904616e-01],
[9.3477416e-01, 6.4416492e-01],
[5.7433277e-02, 6.6898936e-01],
[5.8204800e-02, 6.5910459e-01]], dtype=float32)
time: 3.35 s
# pred
time: 300 µs
labels = (train_generator.class_indices)
print(labels)
labels = dict((v,k) for k,v in labels.items())
print(labels)
test_x, test_y = validation_generator.__getitem__(1)
preds = model.predict(test_x)
plt.figure(figsize=(16, 16))
for i in range(16):
    plt.subplot(4, 4, i + 1)
    plt.title('pred:%s / truth:%s' % (labels[np.argmax(preds[i])], labels[np.argmax(test_y[i])]))
    plt.imshow(test_x[i])
{'Parasitized': 0, 'Uninfected': 1}
{0: 'Parasitized', 1: 'Uninfected'}
time: 2.31 s
Comparing the predicted labels with the ground truth above, the image classifier performs quite well.
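To go beyond eyeballing a single batch, the model can also be scored over several validation batches with evaluate_generator; a minimal sketch (the steps value here is an arbitrary choice):
# Aggregate loss and accuracy over 10 validation batches
val_loss, val_acc = model.evaluate_generator(validation_generator, steps=10)
print('val loss: %.4f, val acc: %.4f' % (val_loss, val_acc))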