Taining News http://www.tainingxinwen.cn 2020-05-24 08:27 Source: web

Getting Started with Baidu PaddlePaddle

Observing the Training Results

Most of the code after the data-processing section stays the same; the only change is that reading the data now calls the newly written load_data function. Because part of the data-format conversion is handled inside load_data, the code that feeds data into the model becomes more concise. Below, we retrain the neural network using our own data-loading function.

# Code after the data-processing section; data reading calls the load_data function
import paddle.fluid as fluid
from paddle.fluid.dygraph.nn import FC

# Define the network structure (same as in the previous section)
class MNIST(fluid.dygraph.Layer):
    def __init__(self, name_scope):
        super(MNIST, self).__init__(name_scope)
        name_scope = self.full_name()
        self.fc = FC(name_scope, size=1, act=None)

    def forward(self, inputs):
        outputs = self.fc(inputs)
        return outputs

# Configure and start the training process
with fluid.dygraph.guard():
    model = MNIST("mnist")
    model.train()
    # Call the data-loading function
    train_loader = load_data('train')
    optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.001)
    EPOCH_NUM = 10
    for epoch_id in range(EPOCH_NUM):
        for batch_id, data in enumerate(train_loader()):
            # Preparing the data is now more concise
            image_data, label_data = data
            image = fluid.dygraph.to_variable(image_data)
            label = fluid.dygraph.to_variable(label_data)

            # Forward pass
            predict = model(image)

            # Compute the loss; take the mean over the batch of samples
            loss = fluid.layers.square_error_cost(predict, label)
            avg_loss = fluid.layers.mean(loss)

            # Print the current loss every 100 batches
            if batch_id % 100 == 0:
                print("epoch: {}, batch: {}, loss is: {}".format(epoch_id, batch_id, avg_loss.numpy()))

            # Backward pass and parameter update
            avg_loss.backward()
            optimizer.minimize(avg_loss)
            model.clear_gradients()

    # Save the model parameters
    fluid.save_dygraph(model.state_dict(), 'mnist')
loading mnist dataset from ./work/mnist.json.gz ......
mnist dataset load done
Number of training samples: 50000
epoch: 0, batch: 0, loss is: [24.632648]
epoch: 0, batch: 100, loss is: [4.4261494]
epoch: 0, batch: 200, loss is: [5.5177183]
epoch: 0, batch: 300, loss is: [3.5427954]
epoch: 0, batch: 400, loss is: [2.7455132]
epoch: 1, batch: 0, loss is: [3.4030478]
epoch: 1, batch: 100, loss is: [3.3895369]
epoch: 1, batch: 200, loss is: [4.0297785]
epoch: 1, batch: 300, loss is: [3.658723]
epoch: 1, batch: 400, loss is: [3.7493572]
epoch: 2, batch: 0, loss is: [3.4815173]
epoch: 2, batch: 100, loss is: [3.566256]
epoch: 2, batch: 200, loss is: [4.150691]
epoch: 2, batch: 300, loss is: [3.3143735]
epoch: 2, batch: 400, loss is: [2.8981738]
epoch: 3, batch: 0, loss is: [2.9376304]
epoch: 3, batch: 100, loss is: [3.322153]
epoch: 3, batch: 200, loss is: [4.5626388]
epoch: 3, batch: 300, loss is: [3.1342642]
epoch: 3, batch: 400, loss is: [3.2983096]
epoch: 4, batch: 0, loss is: [4.223956]
epoch: 4, batch: 100, loss is: [2.982598]
epoch: 4, batch: 200, loss is: [2.719622]
epoch: 4, batch: 300, loss is: [3.712464]
epoch: 4, batch: 400, loss is: [4.1207376]
epoch: 5, batch: 0, loss is: [2.5053217]
epoch: 5, batch: 100, loss is: [2.8577585]
epoch: 5, batch: 200, loss is: [2.9564447]
epoch: 5, batch: 300, loss is: [3.4296014]
epoch: 5, batch: 400, loss is: [4.3093677]
epoch: 6, batch: 0, loss is: [4.5576763]
epoch: 6, batch: 100, loss is: [3.20943]
epoch: 6, batch: 200, loss is: [3.327529]
epoch: 6, batch: 300, loss is: [2.5192072]
epoch: 6, batch: 400, loss is: [3.4901175]
epoch: 7, batch: 0, loss is: [3.998215]
epoch: 7, batch: 100, loss is: [4.351076]
epoch: 7, batch: 200, loss is: [3.8231916]
epoch: 7, batch: 300, loss is: [2.151733]
epoch: 7, batch: 400, loss is: [2.995807]
epoch: 8, batch: 0, loss is: [3.6070685]
epoch: 8, batch: 100, loss is: [4.0988545]
epoch: 8, batch: 200, loss is: [3.0984952]
epoch: 8, batch: 300, loss is: [3.0793695]
epoch: 8, batch: 400, loss is: [2.7344913]
epoch: 9, batch: 0, loss is: [3.7788324]
epoch: 9, batch: 100, loss is: [3.706921]
epoch: 9, batch: 200, loss is: [2.7320113]
epoch: 9, batch: 300, loss is: [3.2809222]
epoch: 9, batch: 400, loss is: [3.8385432]
With batch size = 100 and 50000 samples in total, each epoch contains 500 batches (so the loss is printed at batches 0, 100, 200, 300, and 400); with EPOCH_NUM = 10, the outer loop runs 10 times (epochs 0 through 9).
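The batch and epoch counts above can be sanity-checked with a few lines of arithmetic (a minimal sketch; the numbers mirror the run above):

```python
# Sanity-check the batch/epoch arithmetic from the training log
num_samples = 50000
batch_size = 100
epoch_num = 10

batches_per_epoch = num_samples // batch_size             # 500 batches per epoch
printed_batches = list(range(0, batches_per_epoch, 100))  # loss is printed every 100 batches

print(batches_per_epoch)              # 500
print(printed_batches)                # [0, 100, 200, 300, 400]
print(epoch_num * batches_per_epoch)  # 5000 parameter updates in total
```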
Finally, the operations above are combined into the load_data function for convenient later use. The code below is the complete data-reading function; its mode parameter ('train', 'valid', or 'eval') selects whether the returned data is the training set, validation set, or test set.
# Expanded code for the data-processing section
import json
import gzip
import random
import numpy as np

# Define the dataset reader
def load_data(mode='train'):
    # Data file
    datafile = './work/mnist.json.gz'
    print('loading mnist dataset from {} ......'.format(datafile))
    data = json.load(gzip.open(datafile))
    # The loaded data is already split into training, validation, and test sets
    train_set, val_set, eval_set = data

    # Dataset parameters: image height IMG_ROWS and image width IMG_COLS
    IMG_ROWS = 28
    IMG_COLS = 28
    # Select the data for the requested mode
    if mode == 'train':
        imgs = train_set[0]
        labels = train_set[1]
    elif mode == 'valid':
        imgs = val_set[0]
        labels = val_set[1]
    elif mode == 'eval':
        imgs = eval_set[0]
        labels = eval_set[1]
    else:
        raise Exception("mode can only be one of ['train', 'valid', 'eval']")

    imgs_length = len(imgs)

    assert len(imgs) == len(labels), \
        "length of train_imgs({}) should be the same as train_labels({})".format(
            len(imgs), len(labels))

    index_list = list(range(imgs_length))

    # Batch size used when reading the data
    BATCHSIZE = 100

    # Define the data generator
    def data_generator():
        if mode == 'train':
            # In training mode, shuffle the training data
            random.shuffle(index_list)
        imgs_list = []
        labels_list = []

        for i in index_list:
            img = np.reshape(imgs[i], [1, IMG_ROWS, IMG_COLS]).astype('float32')
            label = np.reshape(labels[i], [1]).astype('float32')
            imgs_list.append(img)
            labels_list.append(label)
            if len(imgs_list) == BATCHSIZE:
                # Yield one batch of data
                yield np.array(imgs_list), np.array(labels_list)
                # Clear the batch buffers
                imgs_list = []
                labels_list = []

        # If fewer than BATCHSIZE samples remain, yield them together
        # as one mini-batch of size len(imgs_list)
        if len(imgs_list) > 0:
            yield np.array(imgs_list), np.array(labels_list)

    return data_generator
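The batching and remainder-handling logic inside data_generator can be exercised on its own. The sketch below mirrors that pattern with small synthetic arrays in place of the MNIST file (the sample count and batch size are scaled down for illustration):

```python
import numpy as np

def make_batches(imgs, labels, batch_size):
    # Same pattern as data_generator: accumulate samples, yield a
    # full batch, then yield any remainder as a smaller final batch
    imgs_list, labels_list = [], []
    for img, label in zip(imgs, labels):
        imgs_list.append(img)
        labels_list.append(label)
        if len(imgs_list) == batch_size:
            yield np.array(imgs_list), np.array(labels_list)
            imgs_list, labels_list = [], []
    if len(imgs_list) > 0:  # leftover samples form the last mini-batch
        yield np.array(imgs_list), np.array(labels_list)

# 25 synthetic 28x28 "images" with scalar labels, batch size 10
imgs = np.zeros((25, 1, 28, 28), dtype='float32')
labels = np.zeros((25, 1), dtype='float32')
batches = list(make_batches(imgs, labels, 10))
print([b[0].shape[0] for b in batches])  # [10, 10, 5]
```

The two full batches and the size-5 remainder show why the trailing `if len(imgs_list) > 0` check matters: without it, the last 5 samples of every epoch would be silently dropped.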
