PyTorch Tensor Operations¶
This section covers:
* Indexing and slicing
* Reshaping tensors (tensor views)
* Tensor arithmetic and basic operations
* Dot products
* Matrix multiplication
* Additional, more advanced operations
Perform standard imports¶
import torch
import numpy as np
Indexing and slicing¶
Extracting specific values from a tensor works the same way as with NumPy arrays.

Image source: http://www.scipy-lectures.org/_images/numpy_indexing.png
x = torch.arange(6).reshape(3,2)
print(x)
# Grab the right-hand column values
x[:,1]
# Grab the right-hand column as a (3,1) slice
x[:,1:]
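One detail worth knowing (a minimal sketch reusing x from above): basic slicing returns a view that shares memory with the original tensor, while indexing with a list of indices returns a copy.
# slices are views; advanced (list/tensor) indexing copies
col_view = x[:,1:]        # shares memory with x
col_copy = x[[0,1,2], 1]  # independent copy
x[0,1] = 99
print(col_view)           # reflects the change
print(col_copy)           # unchanged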
Reshaping tensors with .view()¶
The .view() method returns a tensor with the same data arranged in a new shape.
x = torch.arange(12)
print(x)
x.view(2,6)
x.view(6,2)
# x is unchanged
x
Views reflect the most current data¶
z = x.view(2,6)
x[0]=234
print(z)
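A related note: torch.reshape() behaves like .view() when the memory layout allows it, but it may return a copy for non-contiguous tensors, so .view() is the explicit way to guarantee shared data. A small sketch:
# reshape() may return a view or a copy; view() always shares data
y = torch.arange(12)
w = y.reshape(2,6)   # contiguous input, so this is a view here
y[0] = 999
print(w)             # reflects the change in this case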
Views can infer the correct size¶
By passing in -1, PyTorch will infer the correct value from the number of elements in the tensor.
# infer number of columns for given rows
x.view(2,-1)
# infer number of rows for given columns
x.view(-1,3)
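Only one dimension can be inferred per call; passing -1 by itself flattens the tensor:
# -1 may appear at most once
x.view(-1)        # flatten to 1D
# x.view(-1, -1)  # would raise a RuntimeError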
Adopt another tensor's shape with .view_as()¶
view_as(other) reshapes a tensor to the shape of another tensor; it only works when the two tensors have the same number of elements.
x.view_as(z)
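For example, any tensor with twelve elements can adopt z's (2,6) shape, while a mismatched element count raises an error (a small sketch):
y = torch.arange(12.)
print(y.view_as(z).shape)      # torch.Size([2, 6])
# torch.arange(10).view_as(z)  # would raise a RuntimeError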
Tensor Arithmetic¶
Tensors can be added in a few different ways depending on the desired result.
As a simple expression:
a = torch.tensor([1,2,3], dtype=torch.float)
b = torch.tensor([4,5,6], dtype=torch.float)
print(a + b)
As arguments passed into a torch operation:
print(torch.add(a, b))
With an output tensor passed in as an argument:
result = torch.empty(3)
torch.add(a, b, out=result) # equivalent to result=torch.add(a,b)
print(result)
Changing a tensor in-place with a trailing underscore (_):
a.add_(b) # equivalent to a=torch.add(a,b)
print(a)
In the above example, a.add_(b) changed a in place.
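The trailing underscore convention applies to the other arithmetic methods as well; for example, reusing the tensors above:
a.sub_(b)   # equivalent to a = a - b
a.mul_(b)   # equivalent to a = a * b
print(a)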
Basic Tensor Operations¶
| OPERATION | FUNCTION | DESCRIPTION |
|---|---|---|
| a + b | a.add(b) | element-wise addition |
| a - b | a.sub(b) | subtraction |
| a * b | a.mul(b) | multiplication |
| a / b | a.div(b) | division |
| a % b | a.fmod(b) | modulo (remainder after division) |
| $a^b$ | a.pow(b) | power |
| OPERATION | FUNCTION | DESCRIPTION |
|---|---|---|
| $\lvert a \rvert$ | torch.abs(a) | absolute value |
| 1/a | torch.reciprocal(a) | reciprocal |
| $\sqrt{a}$ | torch.sqrt(a) | square root |
| log(a) | torch.log(a) | natural log |
| $e^a$ | torch.exp(a) | exponential |
| 12.34 ==> 12. | torch.trunc(a) | truncated integer |
| 12.34 ==> 0.34 | torch.frac(a) | fractional component |
| OPERATION | FUNCTION | DESCRIPTION |
|---|---|---|
| sin(a) | torch.sin(a) | sine |
| cos(a) | torch.cos(a) | cosine |
| tan(a) | torch.tan(a) | tangent |
| arcsin(a) | torch.asin(a) | arc sine |
| arccos(a) | torch.acos(a) | arc cosine |
| arctan(a) | torch.atan(a) | arc tangent |
| sinh(a) | torch.sinh(a) | hyperbolic sine |
| cosh(a) | torch.cosh(a) | hyperbolic cosine |
| tanh(a) | torch.tanh(a) | hyperbolic tangent |
| OPERATION | FUNCTION | DESCRIPTION |
|---|---|---|
| $\sum a$ | torch.sum(a) | sum |
| $\bar a$ | torch.mean(a) | mean |
| $a_{max}$ | torch.max(a) | maximum |
| $a_{min}$ | torch.min(a) | minimum |
Note: torch.max(a,b) returns a tensor of size a containing the element-wise maximum between a and b.
Some operations behave differently for integer and float types. In older versions of PyTorch, torch.div(a,b) performed floor division (truncating the decimal) for integer types and classic division for floats; in current versions torch.div() always performs true division, and floor division is requested explicitly with torch.div(a, b, rounding_mode='floor').
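A quick check of the division behavior described above (this sketch assumes a recent PyTorch version with the rounding_mode argument):
i = torch.tensor([7, 8, 9])
f = torch.tensor([7., 8., 9.])
print(torch.div(i, 2))                         # true division -> floats
print(torch.div(i, 2, rounding_mode='floor'))  # floor division -> integers
print(torch.div(f, 2))                         # classic float division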
Use the space below to experiment with different operations¶
a = torch.tensor([1,2,3], dtype=torch.float)
b = torch.tensor([4,5,6], dtype=torch.float)
print(torch.add(a,b).sum())
Dot products¶
A dot product is the sum of the products of the corresponding entries of two 1D tensors. For two vectors, the dot product is given as:
$\begin{bmatrix} a & b & c \end{bmatrix} \;\cdot\; \begin{bmatrix} d & e & f \end{bmatrix} = ad + be + cf$
Writing one tensor as a column vector gives the same result; the dot product is the row vector multiplied by the column vector, as in matrix multiplication. For example:
$\begin{bmatrix} a & b & c \end{bmatrix} \;\cdot\; \begin{bmatrix} d \\ e \\ f \end{bmatrix} = ad + be + cf$
Dot products can be expressed as torch.dot(a,b), a.dot(b) or b.dot(a).
a = torch.tensor([1,2,3], dtype=torch.float)
b = torch.tensor([4,5,6], dtype=torch.float)
print(a.mul(b)) # for reference
print()
print(a.dot(b))
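As a sanity check, the dot product matches the sum of the element-wise products; note that torch.dot() only accepts 1D tensors:
print(a.mul(b).sum())        # same value as a.dot(b)
# torch.dot(a.view(3,1), b)  # would raise a RuntimeError (1D tensors expected)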
Matrix multiplication¶
2D matrix multiplication is possible when the number of columns in tensor A matches the number of rows in tensor B. In this case, the product of tensor A with size $(x,y)$ and tensor B with size $(y,z)$ results in a tensor of size $(x,z)$.

$\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \;\times\; \begin{bmatrix} m & n \\ p & q \\ r & s \end{bmatrix} = \begin{bmatrix} (am+bp+cr) & (an+bq+cs) \\ (dm+ep+fr) & (dn+eq+fs) \end{bmatrix}$
Matrix multiplication can be computed using torch.mm(a,b), a.mm(b) or a @ b.
a = torch.tensor([[0,2,4],[1,3,5]], dtype=torch.float)
b = torch.tensor([[6,7],[8,9],[10,11]], dtype=torch.float)
print('a: ',a.size())
print('b: ',b.size())
print('a x b: ',torch.mm(a,b).size())
print(torch.mm(a,b))
print(a.mm(b))
print(a @ b)
Matrix multiplication with broadcasting¶
Matrix multiplication that involves broadcasting can be computed using torch.matmul(a,b), a.matmul(b) or a @ b.
t1 = torch.randn(2, 3, 4)
t2 = torch.randn(4, 5)
t1
t2
print(torch.matmul(t1, t2).size())
However, the same operation raises a RuntimeError with torch.mm():
print(torch.mm(t1, t2).size())  # raises a RuntimeError (torch.mm expects 2D tensors)
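To see what torch.matmul() is doing here, a small sketch: the (4,5) matrix is broadcast across t1's batch dimension, equivalent to applying torch.mm() to each (3,4) slice:
out = torch.matmul(t1, t2)
manual = torch.stack([torch.mm(t1[i], t2) for i in range(t1.size(0))])
print(torch.allclose(out, manual))  # True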
Advanced operations¶
L2 or Euclidean Norm¶
See torch.norm() (in recent PyTorch versions, torch.linalg.norm() is the recommended replacement).
The Euclidean norm gives the vector norm of $x$, where $x=(x_1,x_2,\dots,x_n)$.
It is calculated as
$\left\|\boldsymbol{x}\right\|_2 := \sqrt{x_1^2+\cdots+x_n^2}$
When applied to a matrix, torch.norm() returns the Frobenius norm by default.
x = torch.tensor([2.,5.,8.,14.])
x.norm()
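A quick verification of the formula above:
print(torch.sqrt((x**2).sum()))  # matches x.norm()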
Number of elements¶
See torch.numel(), which returns the number of elements in a tensor.
x = torch.ones(3,7)
x.numel()
This can be useful in certain calculations like Mean Squared Error:
def mse(t1, t2):
    diff = t1 - t2
    return torch.sum(diff * diff) / diff.numel()
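A hypothetical usage of mse() (these values are made up for illustration):
preds  = torch.tensor([2.5, 0.0, 2.0, 8.0])
labels = torch.tensor([3.0, -0.5, 2.0, 7.0])
print(mse(preds, labels))   # tensor(0.3750)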