Imagine you’re working with a multidimensional array, but instead of just two dimensions, you have three, four, or even more. Welcome to the world of tensors. A tensor is essentially a generalization of scalars, vectors, and matrices. Scalars are just single numbers (0D), vectors are one-dimensional arrays (1D), and matrices are two-dimensional arrays (2D). Once you start adding dimensions beyond this, you enter the realm of tensors.
Tensors are defined by their rank, which is the number of dimensions they have. For example, a rank-0 tensor is a single number, a rank-1 tensor is a list of numbers, and a rank-2 tensor is a table of numbers. Beyond that, the complexity increases, but you can think of tensors simply as a way to store data that makes mathematical operations straightforward, especially once you start diving into the realms of deep learning and artificial intelligence.
Here’s a quick example of creating a simple tensor using a popular library, TensorFlow:
import tensorflow as tf

# Creating a rank-0 tensor (scalar)
scalar = tf.constant(7)

# Creating a rank-1 tensor (vector)
vector = tf.constant([1, 2, 3])

# Creating a rank-2 tensor (matrix)
matrix = tf.constant([[1, 2], [3, 4]])

# Creating a rank-3 tensor
tensor3d = tf.constant([[[1], [2]], [[3], [4]]])
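If you want to verify the rank of each tensor, TensorFlow exposes it directly: tf.rank returns the rank as a tensor, and the shape attribute lists the size of each dimension. A quick check on the tensors above:

# Inspecting rank and shape of the tensors defined above
print(tf.rank(tensor3d).numpy())  # 3
print(tensor3d.shape)             # (2, 2, 1)
print(matrix.shape)               # (2, 2)
print(vector.shape)               # (3,)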
When you consider what tensors enable, it becomes clear how powerful they are. They allow you to perform operations on large amounts of data efficiently, which is why they are vital in machine learning frameworks. The beauty of tensors lies in their ability to handle both structured and unstructured data seamlessly.
One of the key features of tensors is that they can be manipulated in various ways. You can perform element-wise operations, reductions, or even more complex operations like reshaping and slicing. Here’s how you can do some basic tensor operations:
# Element-wise addition
result = tf.add(matrix, matrix)

# Reshaping a tensor: tensor3d has shape (2, 2, 1); flatten it to (2, 2)
reshaped_tensor = tf.reshape(tensor3d, [2, 2])

# Slicing a tensor: take the first row along the second dimension
sliced_tensor = tensor3d[:, 0, :]
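Printing the results makes the effect of each operation concrete (the values follow from the small matrix and tensor3d defined earlier):

print(result.numpy())           # [[2 4]
                                #  [6 8]]
print(reshaped_tensor.numpy())  # [[1 2]
                                #  [3 4]]
print(sliced_tensor.numpy())    # [[1]
                                #  [3]]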
With tensors, you start to see the power of high-performance computing. They allow for parallelization and optimized memory usage, which is essential for training deep learning models on large datasets. But this also leads to a question many programmers find themselves asking…
The one simple reason you’re not still using NumPy
Why aren’t we still using NumPy for all our numerical computing needs? The one simple reason boils down to efficiency and scalability. Sure, NumPy is great for handling arrays and performing operations on them, but when you start dealing with large datasets or need to perform operations on GPUs, its limitations become apparent.
NumPy operates on the CPU and is bound by the memory limits of your machine. As your data grows, you might find yourself running into performance bottlenecks. Tensors, on the other hand, are designed to work seamlessly with hardware accelerators like GPUs and TPUs, which allows for faster computations and the ability to handle more data at once.
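As a quick illustration, TensorFlow can report which accelerators it sees and let you pin work to a specific device. This sketch uses '/CPU:0' so it runs anywhere; swap in '/GPU:0' if your machine actually has a GPU:

import tensorflow as tf

# List the accelerators TensorFlow can see on this machine
print(tf.config.list_physical_devices('GPU'))

# Pin an operation to an explicit device
with tf.device('/CPU:0'):
    a = tf.random.uniform([1000, 1000])
    b = tf.random.uniform([1000, 1000])
    c = tf.matmul(a, b)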
Let’s illustrate this with a quick performance comparison. Here’s a simple example of a matrix multiplication using both NumPy and TensorFlow:
import numpy as np
import tensorflow as tf
import time

# Define two large matrices
matrix_a = np.random.rand(1000, 1000)
matrix_b = np.random.rand(1000, 1000)

# NumPy matrix multiplication
start_time_np = time.time()
result_np = np.dot(matrix_a, matrix_b)
end_time_np = time.time()
print("NumPy time:", end_time_np - start_time_np)

# TensorFlow matrix multiplication
tensor_a = tf.constant(matrix_a)
tensor_b = tf.constant(matrix_b)
start_time_tf = time.time()
result_tf = tf.matmul(tensor_a, tensor_b)
end_time_tf = time.time()
print("TensorFlow time:", end_time_tf - start_time_tf)
This code snippet shows how to perform matrix multiplication using both libraries. While NumPy performs well for smaller tasks, TensorFlow can leverage the power of GPUs to drastically reduce computation time for larger matrices.
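One caveat worth flagging: the first TensorFlow call pays one-time setup costs (library initialization and device transfer for the constants), so a fairer benchmark warms up first and synchronizes before stopping the clock. A rough sketch:

# Warm up so one-time setup costs don't count against TensorFlow
_ = tf.matmul(tensor_a, tensor_b)

start_time_tf = time.time()
result_tf = tf.matmul(tensor_a, tensor_b)
_ = result_tf.numpy()  # forces the computation to finish before we stop timing
end_time_tf = time.time()
print("TensorFlow time (warmed up):", end_time_tf - start_time_tf)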
Moreover, tensors support automatic differentiation, a feature that is essential for training machine learning models. It lets you compute gradients efficiently, something NumPy cannot do unless you implement the differentiation yourself.
Here’s how you can compute gradients in TensorFlow:
# Define a simple function
def f(x):
    return x ** 2

# Record operations on the tape so gradients can be computed
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = f(x)

# Get the gradient of y with respect to x
gradient = tape.gradient(y, x)
print("Gradient at x=3:", gradient.numpy())
In this example, we create a simple quadratic function and compute its gradient using TensorFlow’s automatic differentiation capabilities. This is what gives TensorFlow an edge over NumPy, especially in the context of deep learning applications where such computations are frequent.
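To see why this matters in practice, here is a minimal sketch (an illustration of the idea, not part of the benchmark above) that uses those gradients for plain gradient descent on the same function:

# Minimal gradient descent on f(x) = x**2, starting from x = 3.0
x = tf.Variable(3.0)
learning_rate = 0.1

for step in range(50):
    with tf.GradientTape() as tape:
        y = f(x)
    grad = tape.gradient(y, x)
    x.assign_sub(learning_rate * grad)  # x <- x - lr * grad

print("x after optimization:", x.numpy())  # approaches 0, the minimum of x**2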
So, while NumPy is an indispensable tool for many applications, when it comes to scalability and advanced features required for machine learning, tensors become the superior choice. The future of numerical computing is clearly leaning towards tensor-based frameworks, and for good reason. As you dive deeper into this realm…
Source: https://www.pythonfaq.net/how-to-work-with-tensors-using-torch-tensor-in-pytorch/