
Element-wise operations

An element-wise operation operates on corresponding elements between tensors.

Two tensors must have the same shape in order to perform element-wise operations on them.

Suppose we have the following two tensors (both of these are rank-2 tensors with a shape of 2 × 2):

t1 = torch.tensor([
    [1, 2],
    [3, 4]
], dtype=torch.float32)

t2 = torch.tensor([
    [9, 8],
    [7, 6]
], dtype=torch.float32)

The elements of the first axis are arrays and the elements of the second axis are numbers.

# Example of the first axis
> print(t1[0])
tensor([1., 2.])

# Example of the second axis
> print(t1[0][0])
tensor(1.)

Addition is an element-wise operation.

> t1 + t2
tensor([[10., 10.],
        [10., 10.]])

In fact, all of the arithmetic operations (add, subtract, multiply, and divide) are element-wise operations. There are two ways we can perform them; in the examples below, t is a tensor with the same values as t1, i.e. [[1., 2.], [3., 4.]].

1) Using these symbolic operations:

> t + 2
tensor([[3., 4.],
        [5., 6.]])

> t - 2
tensor([[-1., 0.],
        [1., 2.]])

> t * 2
tensor([[2., 4.],
        [6., 8.]])

> t / 2
tensor([[0.5000, 1.0000],
        [1.5000, 2.0000]])

2) Or equivalently, these built-in tensor methods:

> t.add(2)
tensor([[3., 4.],
        [5., 6.]])

> t.sub(2)
tensor([[-1., 0.],
        [1., 2.]])

> t.mul(2)
tensor([[2., 4.],
        [6., 8.]])

> t.div(2)
tensor([[0.5000, 1.0000],
        [1.5000, 2.0000]])

Broadcasting tensors

Broadcasting is the mechanism that allows us to add scalars (and other lower-rank tensors) to higher dimensional tensors.

We can see what the broadcasted scalar value looks like using NumPy's broadcast_to() function:

> np.broadcast_to(2, t.shape)
array([[2, 2],
       [2, 2]])

This means the scalar value is transformed into a rank-2 tensor just like t, and just like that, the shapes match and the element-wise rule of having the same shape is back in play.
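As a quick sanity check, here is a minimal sketch (not part of the original post) showing that adding the scalar directly gives the same result as adding the explicitly broadcast tensor:

import numpy as np
import torch

t = torch.tensor([
    [1, 2],
    [3, 4]
], dtype=torch.float32)

# Explicitly broadcast the scalar 2 to t's shape, then add it.
# (.copy() makes the read-only broadcast view writable for torch.from_numpy.)
broadcast = torch.from_numpy(np.broadcast_to(2, t.shape).copy()).float()

print(t + broadcast)  # tensor([[3., 4.], [5., 6.]])
print(t + 2)          # same result: PyTorch broadcasts the scalar internally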

Trickier example of broadcasting

t1 = torch.tensor([
    [1, 1],
    [1, 1]
], dtype=torch.float32)

t2 = torch.tensor([2, 4], dtype=torch.float32)

Even though these two tensors have different shapes, the element-wise operation is still possible, and broadcasting is what makes it work.

> np.broadcast_to(t2.numpy(), t1.shape)
array([[2., 4.],
       [2., 4.]], dtype=float32)

> t1 + t2
tensor([[3., 5.],
        [3., 5.]])

When do we actually use broadcasting? We often need it when preprocessing data, and especially during normalization routines.
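For example, here is a minimal sketch (with made-up data, not from the original post) of per-feature normalization, where a row of means and standard deviations is broadcast across every sample in a batch:

import torch

# A hypothetical batch of 4 samples with 2 features each
data = torch.tensor([
    [1., 10.],
    [2., 20.],
    [3., 30.],
    [4., 40.]
])

mean = data.mean(dim=0)   # shape (2,): per-feature mean
std = data.std(dim=0)     # shape (2,): per-feature standard deviation

# (4, 2) minus (2,) works because the (2,) tensors are broadcast across
# every row, exactly like t2 was broadcast across t1 above.
normalized = (data - mean) / std
print(normalized)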


Comparison operations

Comparison operations are also element-wise. For a given comparison operation between two tensors, a new tensor of the same shape is returned, with each element containing either a 0 or a 1.

> t = torch.tensor([
    [0, 5, 0],
    [6, 0, 7],
    [0, 8, 0]
], dtype=torch.float32)

Let's check out some of the comparison operations.

> t.eq(0)
tensor([[1, 0, 1],
        [0, 1, 0],
        [1, 0, 1]], dtype=torch.uint8)

> t.ge(0)
tensor([[1, 1, 1],
        [1, 1, 1],
        [1, 1, 1]], dtype=torch.uint8)

> t.gt(0)
tensor([[0, 1, 0],
        [1, 0, 1],
        [0, 1, 0]], dtype=torch.uint8)

> t.lt(0)
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]], dtype=torch.uint8)

> t.le(7)
tensor([[1, 1, 1],
        [1, 1, 1],
        [1, 0, 1]], dtype=torch.uint8)
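Because each comparison returns a tensor with the same shape as t, its result can feed into other element-wise or reduction operations. Here is a small sketch (not from the original post) that counts matching elements by summing the 0/1 entries:

import torch

t = torch.tensor([
    [0, 5, 0],
    [6, 0, 7],
    [0, 8, 0]
], dtype=torch.float32)

# Each comparison returns a tensor with the same shape as t,
# so summing its 0/1 entries counts how many elements match.
print(t.gt(0).sum())   # tensor(4): four elements are greater than zero
print(t.eq(0).sum())   # tensor(5): five elements equal zero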

Element-wise operations using functions

Here are some examples:

> t.abs()
tensor([[0., 5., 0.],
        [6., 0., 7.],
        [0., 8., 0.]])

> t.sqrt()
tensor([[0.0000, 2.2361, 0.0000],
        [2.4495, 0.0000, 2.6458],
        [0.0000, 2.8284, 0.0000]])

> t.neg()
tensor([[-0., -5., -0.],
        [-6., -0., -7.],
        [-0., -8., -0.]])

> t.neg().abs()
tensor([[0., 5., 0.],
        [6., 0., 7.],
        [0., 8., 0.]])

Reposted from: https://www.cnblogs.com/xxxxxxxxx/p/11066654.html
