## Differences between torch.Tensor and torch.tensor

```python
a = [[1, 2, 3], [4, 5, 6]]
b = torch.Tensor(a)
c = torch.tensor(a)
```

Output:

```
b = tensor([[1., 2., 3.],
            [4., 5., 6.]])
c = tensor([[1, 2, 3],
            [4, 5, 6]])
```

As shown above, torch.Tensor always converts the data to the default floating-point type (float32), while torch.tensor infers the dtype from the data.
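A quick way to confirm this is to inspect the `.dtype` attribute of each result:

```python
import torch

a = [[1, 2, 3], [4, 5, 6]]

# torch.Tensor is an alias for the default tensor type (FloatTensor),
# so it always produces float32, regardless of the input values.
b = torch.Tensor(a)

# torch.tensor infers the dtype from the data: integer inputs give int64.
c = torch.tensor(a)

print(b.dtype)  # torch.float32
print(c.dtype)  # torch.int64
```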

Another difference appears with complex tensors. We will use the Pauli matrices as an example:

$$\sigma_1 = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}, \sigma_2 = \begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix}, \sigma_3 = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}$$
```python
s2 = torch.Tensor([[0, 0-1j], [0+1j, 0]])
```

This raises the following error:

```
TypeError: can't convert complex to float
```
torch.tensor, however, handles complex input without error:

```python
s1 = torch.tensor([[0, 1], [1, 0]])
s2 = torch.tensor([[0, 0-1j], [0+1j, 0]])
s3 = torch.tensor([[1, 0], [0, -1]])
```

Output:

```
s2 = tensor([[0.+0.j, 0.-1.j],
             [0.+1.j, 0.+0.j]])
```
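As a quick sanity check (assuming PyTorch's default complex dtype, complex64), we can confirm the inferred dtype and verify the defining property that each Pauli matrix squares to the identity:

```python
import torch

s2 = torch.tensor([[0, 0-1j], [0+1j, 0]])
print(s2.dtype)  # torch.complex64

# Pauli matrices are involutory: sigma_i @ sigma_i = I
identity = torch.eye(2, dtype=torch.cfloat)
print(torch.allclose(s2 @ s2, identity))  # True
```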

PyTorch has a linear algebra package, torch.linalg, that is similar to NumPy's numpy.linalg. Here, we will use it to determine the eigenvalues and eigenvectors.

We can use `w, v = torch.linalg.eig(A)` to determine the eigenvalues `w` and eigenvectors `v` of a matrix `A`. Note that torch.linalg.eig requires a floating-point or complex input, so the integer tensor s1 must be cast first.

```python
w, v = torch.linalg.eig(s1.to(torch.cfloat))
```

The eigenvalues of s1 are +1 and -1:

```
w = tensor([ 1.+0.j, -1.+0.j])
```

and the columns of v are the corresponding eigenvectors:

```
v = tensor([[ 0.7071+0.j, -0.7071+0.j],
            [ 0.7071+0.j,  0.7071+0.j]])
```
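The same call works for all three Pauli matrices. A minimal sketch (casting to complex first, since torch.linalg.eig rejects integer tensors) confirming that each one has eigenvalues +1 and -1:

```python
import torch

paulis = [
    torch.tensor([[0, 1], [1, 0]]),
    torch.tensor([[0, 0-1j], [0+1j, 0]]),
    torch.tensor([[1, 0], [0, -1]]),
]

for s in paulis:
    # torch.linalg.eig requires a floating-point or complex input,
    # so cast each matrix to complex before the call.
    w, v = torch.linalg.eig(s.to(torch.cfloat))
    # every Pauli matrix has eigenvalues +1 and -1
    vals = torch.sort(w.real).values
    print(torch.allclose(vals, torch.tensor([-1.0, 1.0])))  # True
```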

July 3, 2021