Numpy random numbers are identical in every epoch when used in a PyTorch Dataset
While running experiments with PyTorch recently, I noticed that when numpy is used to generate random numbers inside a Dataset, every epoch yields the same sequence of random numbers. Take the following code as an example:
import numpy as np
import torch
from torch.utils.data import Dataset
from torch.utils.data import DataLoader

class RandomDataset(Dataset):
    def __init__(self):
        pass

    def __getitem__(self, ind):
        # each item is a random integer drawn from numpy's global RNG
        return np.random.randint(100)

    def __len__(self):
        return 10

ds = RandomDataset()
ds = DataLoader(ds, 10, shuffle=False, num_workers=1)

total_epoch = 5
for epoch in range(total_epoch):
    for batch in ds:
        print(batch)
The output is:
tensor([62, 66, 47, 55, 11, 90, 66, 64, 79, 62])
tensor([62, 66, 47, 55, 11, 90, 66, 64, 79, 62])
tensor([62, 66, 47, 55, 11, 90, 66, 64, 79, 62])
tensor([62, 66, 47, 55, 11, 90, 66, 64, 79, 62])
tensor([62, 66, 47, 55, 11, 90, 66, 64, 79, 62])
A Google search turned up an existing report of this problem on GitHub: https://github.com/pytorch/pytorch/issues/5059
In short, numpy's random number generation misbehaves under multiprocessing, and the DataLoader launches worker processes internally to load the data. Each worker starts with an identical copy of numpy's global RNG state (on Linux the workers are forked from the main process), and PyTorch reseeds only its own RNG in each worker, never numpy's. Since the workers are also re-created at the start of every epoch, every epoch replays exactly the same numpy sequence.
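A commonly suggested workaround in that issue is to reseed numpy in every worker through the DataLoader's worker_init_fn hook. Below is a minimal sketch of that idea; the helper name seed_numpy_worker is my own, not from the issue:

import numpy as np
import torch
from torch.utils.data import DataLoader

def seed_numpy_worker(worker_id):
    # Inside a worker, torch.initial_seed() already combines the per-epoch
    # base seed with the worker id, so deriving numpy's seed from it gives
    # each worker, in each epoch, a distinct numpy stream.
    np.random.seed(torch.initial_seed() % 2**32)

# hypothetical usage with the RandomDataset defined above:
# ds = DataLoader(RandomDataset(), 10, shuffle=False, num_workers=1,
#                 worker_init_fn=seed_numpy_worker)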
The fix I adopted instead sidesteps numpy entirely: generate the random numbers with PyTorch's own RNG, which the DataLoader does reseed correctly in every worker.
import numpy as np
import torch
from torch.utils.data import Dataset
from torch.utils.data import DataLoader

class RandomDataset(Dataset):
    def __init__(self):
        pass

    def __getitem__(self, ind):
        # return np.random.randint(100)
        # draw from torch's RNG instead, which is reseeded per worker and per epoch
        return torch.randint(0, 100, (1,)).item()

    def __len__(self):
        return 10

ds = RandomDataset()
ds = DataLoader(ds, 10, shuffle=False, num_workers=1)

total_epoch = 5
for epoch in range(total_epoch):
    for batch in ds:
        print(batch)
Now the output is correct, differing from epoch to epoch:
tensor([93, 87, 57, 2, 10, 1, 6, 8, 71, 8])
tensor([76, 57, 73, 97, 85, 20, 1, 11, 25, 27])
tensor([72, 69, 37, 26, 99, 71, 38, 79, 6, 96])
tensor([90, 6, 38, 27, 14, 36, 66, 54, 32, 39])
tensor([72, 18, 7, 36, 45, 8, 74, 33, 17, 15])
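Why does torch.randint behave correctly here? At the start of each epoch the DataLoader draws a fresh base seed from the main process's torch RNG and seeds every worker's torch RNG with base_seed + worker_id. A quick way to observe this (a probe I wrote for illustration, not part of the original fix) is to return torch.initial_seed() from __getitem__:

import torch
from torch.utils.data import Dataset, DataLoader

class SeedProbe(Dataset):
    def __getitem__(self, ind):
        # inside a worker, torch.initial_seed() returns base_seed + worker_id;
        # base_seed is redrawn for every epoch, so the value changes across epochs
        return torch.initial_seed()

    def __len__(self):
        return 2

loader = DataLoader(SeedProbe(), 2, num_workers=1)
for epoch in range(3):
    for batch in loader:
        print(batch)  # prints a different seed value each epoch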