A Tensor is itself a view of the data; combined with a Storage it forms something like an MVC pattern (the Storage holds the data, the Tensor presents it). This topic catalogues the functions related to Tensor data access.
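A minimal sketch of that relationship (the variable names are illustrative): two Tensors can present the same Storage in different shapes, so a write through one is visible through the other.
import torch
t = torch.arange(6.)   # one Storage holding six floats
v = t.view(2, 3)       # a second Tensor viewing the same Storage
v[0, 0] = 100.0        # write through the view...
print(t[0])            # ...and it shows through the original: tensor(100.)
print(t.storage().data_ptr() == v.storage().data_ptr())  # True: one Storage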
Data access
The item function
- Converts a scalar Tensor directly into a Python number.
import torch
# t = torch.Tensor([
# [1, 2, 3],
# [4, 5, 6],
# [7, 8, 9]
# ])
t = torch.Tensor([1])
print(t.item())
1.0
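item() only works when the Tensor holds exactly one element; on the commented-out 3x3 tensor above it raises a ValueError. A quick sketch of that failure mode:
import torch
t = torch.Tensor([[1, 2, 3]])
try:
    t.item()  # more than one element
except ValueError as e:
    print(e)  # "only one element tensors can be converted to Python scalars"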
The data attribute and the detach function
- data shares its Storage with the original Tensor, but the attributes change: requires_grad becomes False.
- The original data can be modified through data; such a modification is never detected at run time and may therefore produce silently wrong results.
- detach() returns a Tensor detached from the computation graph, with requires_grad=False.
- Modifying data through the tensor returned by detach() is checked at run time: autograd detects the change and raises an error before computing with it.
- Note:
- This modification check is really about graph tracking, for which Torch provides context managers; I personally like using a context manager to control how operations on a Tensor are tracked, as sketched below.
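As a sketch of that style (using torch.no_grad(), which suspends graph tracking inside the with block): in-place edits made there are explicit and scoped, instead of silently bypassing autograd via data.
import torch
t = torch.ones(3, 3, requires_grad=True)
with torch.no_grad():
    t[0, 0] = 88  # tracking is suspended, so this in-place edit is permitted
print(t.requires_grad, t[0, 0].item())  # True 88.0: the flag itself is untouched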
# Why the data attribute is unsafe
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
t.requires_grad = True
print(t.data)
print(t.data.requires_grad)
t.data[0, 0] = 88  # this kind of modification is never detected and may cause a fatal error at runtime
print(t)
tensor([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]])
False
tensor([[88., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]], requires_grad=True)
# Why detach() is safe (a modification is detected outright, i.e. simply not permitted, which keeps results safe)
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
t.requires_grad = True
print(t.detach())
print(t.detach().requires_grad)
t.detach()[0, 0] = 88  # such a modification is detected at runtime when backward runs (see below)
print(t)
t.detach().resize_(3,3)
tensor([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]])
False
tensor([[88., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]], requires_grad=True)
tensor([[88., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]])
# Why the data attribute is unsafe
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
t.requires_grad = True
y = t.sigmoid()
y_ = y.sum()
r = y_.backward(retain_graph=True)  # backward returns nothing, so r is None
print(t.grad)
# modify the data
y.data[0, 0] = 88  # no error is raised, but the gradient is already wrong (modifying mid-computation is not allowed)
y_.backward()  # backward returns nothing, so r is None
print(t.grad)
print(t.data[0, 0])
tensor([[1.9661e-01, 1.0499e-01, 4.5177e-02],
[1.7663e-02, 6.6480e-03, 2.4665e-03],
[9.1017e-04, 3.3522e-04, 1.2337e-04]])
tensor([[-7.6558e+03, 2.0999e-01, 9.0353e-02],
[ 3.5325e-02, 1.3296e-02, 4.9329e-03],
[ 1.8203e-03, 6.7045e-04, 2.4673e-04]])
tensor(1.)
# Why detach() is safe
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
t.requires_grad = True
y = t.sigmoid()
y_ = y.sum()
r = y_.backward(retain_graph=True)  # backward returns nothing, so r is None
print(t.grad)
# modify the data
y.detach()[0, 0] = 88  # the modification is detected: backward raises an error, keeping results safe
y_.backward()  # raises a RuntimeError because y was modified in place
print(t.grad)
print(t.data[0, 0])
tensor([[1.9661e-01, 1.0499e-01, 4.5177e-02],
[1.7663e-02, 6.6480e-03, 2.4665e-03],
[9.1017e-04, 3.3522e-04, 1.2337e-04]])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-41-f1b6b9abcc6a> in <module>()
     15 # modify the data
     16 y.detach()[0, 0] = 88  # the modification is detected: backward raises an error, keeping results safe
---> 17 y_.backward()  # raises a RuntimeError because y was modified in place
     18 print(t.grad)
     19 print(t.data[0, 0])
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
116 products. Defaults to ``False``.
117 """
--> 118 torch.autograd.backward(self, gradient, retain_graph, create_graph)
119
120 def register_hook(self, hook):
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
91 Variable._execution_engine.run_backward(
92 tensors, grad_tensors, retain_graph, create_graph,
---> 93 allow_unreachable=True) # allow_unreachable flag
94
95
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [3, 3]], which is output 0 of SigmoidBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
The numpy function
- Converts the Tensor into a NumPy ndarray.
- In most cases this conversion buys little, since Tensors already support the same kinds of operations.
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
print(t.numpy())
[[1. 2. 3.]
[4. 5. 6.]
[7. 8. 9.]]
The storage / storage_offset / storage_type functions
- Return the Tensor's underlying data storage object (and related metadata) directly.
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
print(t.storage())
print(t.storage_offset())
print(t[1,1].storage_offset())
print(t.storage_type())
1.0
2.0
3.0
4.0
5.0
6.0
7.0
8.0
9.0
[torch.FloatStorage of size 9]
0
4
<class 'torch.FloatStorage'>
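The 4 above follows from the strides: element [i, j] lives at storage offset i*stride(0) + j*stride(1), so [1, 1] sits at 1*3 + 1*1 = 4. A quick check of that arithmetic:
import torch
t = torch.Tensor([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
])
i, j = 1, 1
print(i * t.stride(0) + j * t.stride(1))  # 4
print(t[i, j].storage_offset())           # 4, the same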
The stride function
- Returns the Tensor's step sizes into its Storage along each dimension; dimensions are numbered 0, 1, 2, ...
- 0: rows
- 1: columns
- Notes:
- Negative numbers index dimensions from the end: -1 is the last dimension, -2 the second to last.
- Strides are measured in Storage elements, not bytes.
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
print(t.stride())  # strides for all dimensions
print(t.stride(1))  # stride of the second dimension
print(t.stride(0))
print(t.stride(-1))
print(t.stride(-2))
(3, 1)
1
3
1
3
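Strides are pure view metadata, so an operation like transpose only swaps them and never copies the Storage; a small sketch on the same 3x3 tensor:
import torch
t = torch.Tensor([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
])
print(t.t().stride())  # (1, 3): strides swapped relative to t's (3, 1)
print(t.t().storage().data_ptr() == t.storage().data_ptr())  # True: no copy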
The as_strided function
- Returns a new Tensor over the same Storage with the given size and strides.
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
s = t.as_strided((2, 2), (3, 1), storage_offset=2)  # the returned tensor shares the original Storage
print(s)
s[0,0]=88
print(t)
tensor([[3., 4.],
[6., 7.]])
tensor([[ 1., 2., 88.],
[ 4., 5., 6.],
[ 7., 8., 9.]])
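Because size, strides, and offset are all free parameters, as_strided can build views ordinary indexing cannot; for instance, a step of 4 through this 3x3 Storage walks the main diagonal (a sketch, again sharing the original Storage):
import torch
t = torch.Tensor([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
])
print(t.as_strided((3,), (4,)))  # tensor([1., 5., 9.]): every 4th storage element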
The view and view_as functions
- A special case of as_strided: no offset and no strides need to be given.
- The element count must be identical before and after the view (no data is copied, so the memory layout must be compatible).
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
s = t.view((1,9))
print(s)
tensor([[1., 2., 3., 4., 5., 6., 7., 8., 9.]])
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
v = torch.Tensor(1, 9)
s = t.view_as(v)
print(s)
tensor([[1., 2., 3., 4., 5., 6., 7., 8., 9.]])
The to and to_* functions
- The to family includes:
- to: converts dtype and device
- to_dense / to_sparse: convert between sparse and dense tensors (what changes is the layout)
- to_mkldnn: MKL-DNN layout, for CPU acceleration
- tolist: converts to nested Python lists
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
print(t.to(dtype=torch.int32, device=torch.device("cpu:0")))  # a device can also be given; the index depends on the device count, and with a single CPU only 0 is valid
print(t.to_sparse())  # call this on a dense tensor
# print(t.to_dense())  # call this on a sparse tensor
print(t.to_mkldnn())
print(t.tolist())
tensor([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]], dtype=torch.int32)
tensor(indices=tensor([[0, 0, 0, 1, 1, 1, 2, 2, 2],
[0, 1, 2, 0, 1, 2, 0, 1, 2]]),
values=tensor([1., 2., 3., 4., 5., 6., 7., 8., 9.]),
size=(3, 3), nnz=9, layout=torch.sparse_coo)
tensor([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]], layout=torch._mkldnn)
[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
The type and type_as functions
- type and type_as both perform type conversion.
- type_as converts to the type of the Tensor passed as the argument.
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
print(t.type(torch.float32))
t.type_as(t.type(torch.float32))
tensor([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]])
tensor([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]])
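Since the example above converts float32 to float32, the effect of type_as is easier to see with a different target dtype; a small sketch (the LongTensor is just an arbitrary type reference):
import torch
t = torch.Tensor([[1.5, 2.5]])
ref = torch.LongTensor(1)  # any int64 tensor serves as the reference
print(t.type_as(ref))      # tensor([[1, 2]]): values truncated to the reference's int64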
The type-conversion shorthand functions
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
print(t.int())
print(t.long())
print(t.byte())
print(t.char())
print(t.short())
print(t.half())
print(t.float())
print(t.double())
print(t.bool())
tensor([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]], dtype=torch.int32)
tensor([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
tensor([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]], dtype=torch.uint8)
tensor([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]], dtype=torch.int8)
tensor([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]], dtype=torch.int16)
tensor([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]], dtype=torch.float16)
tensor([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]])
tensor([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]], dtype=torch.float64)
tensor([[True, True, True],
[True, True, True],
[True, True, True]])
The reshape and resize functions
- This family includes:
- reshape: does not change the number of elements
- reshape_as: uses another Tensor as the template
- resize_: may change the number of elements
- resize_as_: uses another Tensor as the template
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
print(t.reshape(1, 9))
print(t.resize_(2, 2))  # elements are kept in storage order
tensor([[1., 2., 3., 4., 5., 6., 7., 8., 9.]])
tensor([[1., 2.],
[3., 4.]])
The data_ptr function
- Returns the address of the first element; rarely meaningful on its own, though the sketch below shows one use.
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
print(t.data_ptr())
140604559708032
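One practical use, as a sketch: comparing pointers tells views and copies apart, since a view keeps the same base address (plus an element offset) while a clone allocates fresh memory.
import torch
t = torch.Tensor([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
])
print(t.view(9).data_ptr() == t.data_ptr())  # True: a view, same memory
print(t[1:].data_ptr() - t.data_ptr())       # 12: three float32 elements in
print(t.clone().data_ptr() == t.data_ptr())  # False: a clone is a real copy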
The dense_dim and sparse_dim functions
- Return the number of dense and sparse dimensions.
- Both functions are for sparse tensors only.
import torch
t = torch.Tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
print(t.to_sparse().sparse_dim())
print(t.to_sparse().dense_dim())
2
0
The element_size function
- Returns the size of a single element, in bytes.
import torch
t1 = torch.LongTensor(2, 3)
t2 = torch.DoubleTensor(3, 2)
print(t1.element_size(), t2.element_size())
8 8
The get_device function
- Returns the tensor's device.
- For CPU tensors this arguably should raise an exception, but it returns -1 instead.
import torch
t1 = torch.LongTensor(2, 3)
print(t1.cpu().get_device())  # returns -1, though arguably it should raise an exception
print(t1.get_device())  # returns -1, though arguably it should raise an exception
-1
The nelement function
- Returns the total number of elements.
import torch
t1 = torch.LongTensor(2, 3)
print(t1.nelement())
6
The flatten function
- Like NumPy's flatten: turns the Tensor's layout from multi-dimensional into one-dimensional.
import torch
t1 = torch.LongTensor(2, 3)
print(t1.flatten())
tensor([0, 0, 0, 0, 0, 0])
The dequantize function
- Dequantizes a Tensor.
- Quantization is a discretization technique, used for data compression among other things.
import torch
t1 = torch.FloatTensor([0.0, 10.50, 11.50, 11.45])  # the values to be discretized
print(t1.is_quantized)
False
# linear quantization; used in quantized convolution (renamed torch.quantize_per_tensor in later PyTorch releases)
q_t1 = torch.quantize_linear(t1, 0.2, 0, torch.qint8)
print(q_t1.is_quantized)
print(q_t1.data)
print(q_t1.dequantize())
print(q_t1.dequantize().is_quantized)
True
tensor([ 0.0000, 10.4000, 11.6000, 11.4000], size=(4,), dtype=torch.qint8,
scale=0.2, zero_point=0)
tensor([ 0.0000, 10.4000, 11.6000, 11.4000])
False
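The numbers above follow from affine quantization: q = round(x / scale) + zero_point, dequantized back as (q - zero_point) * scale. With scale=0.2 and zero_point=0, 10.50 lands on 10.4 because round() resolves the tie 52.5 to the even integer 52. A plain-Python check of that arithmetic:
scale, zero_point = 0.2, 0
for x in [0.0, 10.50, 11.50, 11.45]:
    q = round(x / scale) + zero_point  # 0, 52, 58, 57 (ties round to even)
    print((q - zero_point) * scale)    # 0.0, 10.4, 11.6, 11.4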
The values function
- Returns the values of a sparse Tensor.
- The tensor must be a coalesced sparse tensor (these are the values after coalescing).
import torch
i = torch.tensor(
[[0, 1, 1],
[2, 0, 2]])
v = torch.tensor([3, 4, 5], dtype=torch.float32)
sp = torch.sparse_coo_tensor(i, v)
print(sp.coalesce().values())
tensor([3., 4., 5.])
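What "coalesced" means in practice: a COO tensor may list the same index more than once, and coalesce() merges such duplicates by summing their values. A sketch with a deliberate duplicate at [0, 2]:
import torch
i = torch.tensor(
    [[0, 0, 1],
     [2, 2, 0]])  # the index [0, 2] appears twice
v = torch.tensor([3, 4, 5], dtype=torch.float32)
sp = torch.sparse_coo_tensor(i, v)
print(sp.coalesce().values())  # tensor([7., 5.]): 3 + 4 merged into 7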
The indices function
- Only applies to sparse tensors with the torch.sparse_coo layout.
- Returns the indices of the values; see the values function.
import torch
i = torch.tensor(
[[0, 1, 1],
[2, 0, 2]])
v = torch.tensor([3, 4, 5], dtype=torch.float32)
sp = torch.sparse_coo_tensor(i, v)
print(sp.coalesce().indices())
tensor([[0, 1, 1],
[2, 0, 2]])
The squeeze/squeeze_ and unsqueeze/unsqueeze_ functions
- Reduce the number of dimensions.
- The premise: a size-1 dimension (e.g. a one-element vector) can be collapsed directly into a scalar, which is what lowers the dimensionality.
- The variants with a trailing underscore modify the Tensor in place and also return the result.
import torch
t1 = torch.Tensor([
[2],[3],[4]
])
print(t1.squeeze())
print(t1.squeeze().unsqueeze(0))  # specify which dimension to insert
print(t1.squeeze().unsqueeze(1))  # for this 1-D tensor a dimension >= 2 is not allowed
tensor([2., 3., 4.])
tensor([[2., 3., 4.]])
tensor([[2.],
[3.],
[4.]])
import torch
t1 = torch.Tensor([
[2],[3],[4]
])
print(t1.squeeze_())
print(t1)
tensor([2., 3., 4.])
tensor([2., 3., 4.])