[pytorch] Organizing functions


😎 Organizing PyTorch functions


https://pytorch.org/docs/stable/tensors.html?highlight=tensor#torch.Tensor
A summary of the methods and attributes available on the Tensor class.

torch.randn(*size)


✔ Returns a Tensor with the given (tuple-like) shape, filled with random numbers drawn from a standard normal distribution (mean 0, variance 1).
tmp = torch.randn([3, 4])
# or tmp = torch.randn(3, 4)

tmp
>> tensor([[-0.4221,  0.9803,  0.9709, -2.2323],
        [ 1.0770, -0.6775, -0.4542,  0.1675],
        [ 0.6363,  1.1525,  0.2529, -1.1079]])
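
torch.randn draws new values every call, so the numbers above will differ from run to run. A minimal sketch, assuming you want reproducible output: fix the RNG seed first (the seed value 42 is my own choice).

import torch

# Assumption: we just want the same random Tensor on every run.
torch.manual_seed(42)        # fix the global RNG state
a = torch.randn(3, 4)
torch.manual_seed(42)
b = torch.randn(3, 4)

torch.equal(a, b)
>> True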

torch.clamp(input, min=None, max=None, *, out=None)


✔ Clamps each element of input into the range [min, max]: values below min become min, values above max become max.
tmp = torch.randn([3, 4])

tmp
>> tensor([[ 0.6003,  0.8507, -1.8844, -3.1290],
        [-1.0395, -0.0954, -1.3113,  1.3417],
        [ 0.3511,  1.0014,  0.0832,  0.8779]])
        
torch.clamp(tmp, min=-0.3, max=0.5)
>> tensor([[ 0.5000,  0.5000, -0.3000, -0.3000],
        [-0.3000, -0.0954, -0.3000,  0.5000],
        [ 0.3511,  0.5000,  0.0832,  0.5000]])
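
min and max can also be given one at a time; the omitted side is left unclamped. A small sketch (the values below are my own illustration, not from the original note):

vals = torch.tensor([-1.5, 0.2, 3.0])

torch.clamp(vals, min=0.0)   # lower bound only (same effect as ReLU)
>> tensor([0.0000, 0.2000, 3.0000])

torch.clamp(vals, max=1.0)   # upper bound only
>> tensor([-1.5000,  0.2000,  1.0000])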

torch.linspace(start, end, steps)


$\left(\text{start},\ \text{start}+\dfrac{\text{end}-\text{start}}{\text{steps}-1},\ \ldots,\ \text{start}+(\text{steps}-2)\cdot\dfrac{\text{end}-\text{start}}{\text{steps}-1},\ \text{end}\right)$
✔ Returns a 1-D Tensor of length steps, with values evenly spaced from start to end.
torch.linspace(-1, 1, steps=4)
>> tensor([-1.0000, -0.3333,  0.3333,  1.0000])
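
As a quick check of the spacing formula above (my own example, assuming steps >= 2):

torch.linspace(0, 10, steps=5)   # spacing = (10 - 0) / (5 - 1) = 2.5
>> tensor([ 0.0000,  2.5000,  5.0000,  7.5000, 10.0000])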

torch.index_select(input, dim, index)


✔ Selects from the input Tensor, along dimension dim, only the entries at the positions listed in index (index is a 1-D integer Tensor such as a LongTensor).
x = torch.randn(3, 4)

x
>> tensor([[ 0.1392, -0.4403, -0.1479, -1.8080],
        [ 0.0270,  0.2358, -0.5572, -0.9726],
        [-0.0528,  0.0768, -1.2052, -1.3905]])
        
indices = torch.tensor([1, 2])

torch.index_select(x, 0, indices)
>> tensor([[ 0.0270,  0.2358, -0.5572, -0.9726],
        [-0.0528,  0.0768, -1.2052, -1.3905]])

indices = torch.tensor([2,])

torch.index_select(x, 0, indices)
>> tensor([[-0.0528,  0.0768, -1.2052, -1.3905]])

indices = torch.tensor([0, 1, 3])

torch.index_select(x, 1, indices)
>> tensor([[ 0.1392, -0.4403, -1.8080],
        [ 0.0270,  0.2358, -0.9726],
        [-0.0528,  0.0768, -1.3905]])
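
For reference, selecting along dim=1 is equivalent to advanced indexing on that axis; a small sketch of my own, reusing the x and indices from above:

picked = torch.index_select(x, 1, indices)

torch.equal(picked, x[:, indices])   # same columns, same order
>> True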

torch.numel(input)


✔ Can also be called as Tensor.numel().
✔ Returns the total number of elements in the Tensor.
tmp = torch.randn(3,4)

tmp.numel()
>> 12
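
numel is simply the product of all dimension sizes; a quick sketch confirming that (math.prod from the standard library is my own addition):

import math

tmp = torch.randn(3, 4)

tmp.numel() == math.prod(tmp.shape)   # 3 * 4 = 12
>> True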

torch.Tensor.sort(dim)


✔ Sorts the Tensor in ascending order along dimension dim.
✔ Returns two values: the sorted Tensor and the indices giving each sorted element's original position.
tmp = torch.randn(10)

print(tmp)
print(tmp.sort(0)[0])
print(tmp.sort(0)[1])

>> tensor([ 1.4690,  1.0993,  0.2307,  0.6284, -0.5556, -2.2008,  1.3283,  0.2447,
         0.3749, -1.6137])
>> tensor([-2.2008, -1.6137, -0.5556,  0.2307,  0.2447,  0.3749,  0.6284,  1.0993,
         1.3283,  1.4690])
>> tensor([5, 9, 4, 2, 7, 8, 3, 1, 6, 0])
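
sort also works row by row on 2-D Tensors and can sort in descending order; a small hand-made example of my own, not from the original note:

a = torch.tensor([[3., 1., 2.],
                  [0., 5., 4.]])
values, indices = a.sort(dim=1, descending=True)

print(values)
print(indices)

>> tensor([[3., 2., 1.],
        [5., 4., 0.]])
>> tensor([[0, 2, 1],
        [1, 2, 0]])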

torch.Tensor.new()


✔ Returns an empty Tensor with the same dtype and device as the original.
🤣 It seems to be used when creating a new Tensor, but why...?
tmp = torch.randn([3, 4])

tmp.new()
>> tensor([])

tmp.new().shape
>> torch.Size([0])
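
What .new() is usually after is a Tensor that keeps the source's dtype and device; new_zeros / new_ones do that explicitly with a given shape. A minimal sketch (my own example, assuming a float64 source Tensor):

tmp = torch.randn(3, 4, dtype=torch.float64)

z = tmp.new_zeros(2, 3)   # zeros, but with tmp's dtype and device

print(z)
print(z.dtype)

>> tensor([[0., 0., 0.],
        [0., 0., 0.]], dtype=torch.float64)
>> torch.float64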