
Comparison between meshgrid and einmesh

In this example we are going to show some of the cases where I think einmesh makes code more readable, more maintainable, and simply shorter than the standard meshgrid function.

import numpy as np
import torch

# Import einmesh for comparisons
from einmesh import LinSpace
from einmesh.numpy import einmesh as einmesh_np
from einmesh.torch import einmesh as einmesh_torch

Direct comparisons: meshgrid vs einmesh

For each common use case, we'll show the traditional meshgrid approach followed immediately by the einmesh equivalent.

Example 1: Simple 2D grid (NumPy)

Traditional meshgrid approach:

The most common use of a meshgrid function is probably plotting a 2D surface. The standard way to do this is some variation of the following:

# Traditional NumPy meshgrid - 3 lines
x = np.linspace(0, 10, 11)
y = np.linspace(0, 5, 6)
X, Y = np.meshgrid(x, y, indexing="ij")

print("Traditional 2D grid:")
print(f"X shape: {X.shape}")
print(f"Y shape: {Y.shape}")

There are a couple of annoyances with this approach:

1. Usually there are two sets of variables with almost the same names, x, y and X, Y. This is to differentiate between the linspaces that meshgrid takes in and the grid arrays it actually outputs.
2. Indexing has to be specified, and it is not obvious from reading the code what the difference between the indexing methods is (see the short demonstration below).
3. The whole thing relies on positional arguments, which may be acceptable for two dimensions, but becomes cumbersome and error-prone for high-dimensional use.
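To make the second point concrete, here is a small aside showing how the two indexing modes silently change the output shapes:

# Aside: "ij" (matrix) indexing keeps the shape (len(x), len(y)),
# while "xy" (Cartesian) indexing swaps the first two axes.
x = np.linspace(0, 10, 11)
y = np.linspace(0, 5, 6)

X_ij, Y_ij = np.meshgrid(x, y, indexing="ij")
X_xy, Y_xy = np.meshgrid(x, y, indexing="xy")

print(f"ij indexing: X shape {X_ij.shape}")  # (11, 6)
print(f"xy indexing: X shape {X_xy.shape}")  # (6, 11)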

einmesh equivalent:

# einmesh - 1 line, same result
X_ein, Y_ein = einmesh_np("x y", x=LinSpace(0, 10, 11), y=LinSpace(0, 5, 6))

print("einmesh 2D grid:")
print(f"X shape: {X_ein.shape}")
print(f"Y shape: {Y_ein.shape}")
print("Results identical:", np.allclose(X, X_ein) and np.allclose(Y, Y_ein))

Example 2: 3D grid with stacking (NumPy)

Traditional meshgrid approach:

Another common pattern is needing to stack the output dimensions of a meshgrid. With 3 dimensions we start to see how easy it would be to make a mistake in the x, y, z positions: in the definitions, the arguments, the unpacking and the stacking. Furthermore, we still have the same duplicated variable names and explicit indexing as in the previous example.

# Traditional NumPy meshgrid - 5 lines
x = np.linspace(-2, 2, 5)
y = np.linspace(-1, 1, 3)
z = np.linspace(0, 3, 4)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
coords_traditional = np.stack([X, Y, Z], axis=-1)

print("Traditional 3D stacked grid:")
print(f"Coordinates shape: {coords_traditional.shape}")
print("5 lines of code")

The code above is good at describing what is happening, but not at describing what is going to come out of all these operations. Just like einops, einmesh focuses on what is coming in and out, not on what is happening under the hood.

Thus in einmesh we can stack simply with the star operator: it is clear where the dimensions are collected, and the keyword arguments make it easy to relate each dimension to its definition.

# einmesh - 1 line, automatic stacking with '*'
coords_ein = einmesh_np("x y z *", x=LinSpace(-2, 2, 5), y=LinSpace(-1, 1, 3), z=LinSpace(0, 3, 4))

print("einmesh 3D stacked grid:")
print(f"Coordinates shape: {coords_ein.shape}")
print("Results identical:", np.allclose(coords_traditional, coords_ein))
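As an aside, the readability argument gets stronger as the number of dimensions grows. The sketch below is a hypothetical 4D example (the axis names and sizes are made up, and it assumes multi-character axis names are accepted in the pattern, as in einops):

# Hypothetical 4D grid: every axis is named in the pattern, so there is
# no positional bookkeeping to get wrong when dimensions are added or reordered.
volume = einmesh_np(
    "time depth height width *",
    time=LinSpace(0, 1, 10),
    depth=LinSpace(0, 4, 16),
    height=LinSpace(-1, 1, 32),
    width=LinSpace(-1, 1, 32),
)
print(f"4D stacked grid shape: {volume.shape}")  # expected (10, 16, 32, 32, 4)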

Example 3: Batch processing with stacked coordinates (PyTorch)

Traditional meshgrid approach:

# Traditional PyTorch meshgrid - 8+ lines for multiple grids
batch_sizes = [32, 64, 128]
grids_traditional = []

for size in batch_sizes:
    x = torch.linspace(0, 1, size)
    y = torch.linspace(0, 1, size)
    X, Y = torch.meshgrid(x, y, indexing="ij")
    grid = torch.stack([X, Y], dim=-1)
    grids_traditional.append(grid)
    print(f"Traditional grid {size}x{size} shape: {grid.shape}")

einmesh equivalent:

# einmesh - 4 lines with automatic stacking via '*'
grids_ein = []
for size in [32, 64, 128]:
    grid = einmesh_torch("x y *", x=LinSpace(0, 1, size), y=LinSpace(0, 1, size))
    grids_ein.append(grid)
    print(f"einmesh grid {size}x{size} shape: {grid.shape}")

print("Results identical:", all(torch.allclose(trad, ein) for trad, ein in zip(grids_traditional, grids_ein)))

Your code becomes more backend agnostic

Just like einops, einmesh supports multiple matrix and tensor backends. This can feel like a small thing, but knowing that some complicated piece of code instantly works in NumPy or JAX if you want to try it gives a flexibility that makes experimenting a little easier.

from einmesh.jax import einmesh as einmesh_jax

# Example: Same einmesh pattern across different backends

# Define a common grid pattern
pattern = "x y *"
x_space = LinSpace(-1, 1, 50)
y_space = LinSpace(-2, 2, 30)

# NumPy backend
coords_numpy = einmesh_np(pattern, x=x_space, y=y_space)
print("NumPy backend:")
print(f"  Shape: {coords_numpy.shape}")
print(f"  Type: {type(coords_numpy)}")
print("  Backend: numpy")

# PyTorch backend
coords_torch = einmesh_torch(pattern, x=x_space, y=y_space)
print("\nPyTorch backend:")
print(f"  Shape: {coords_torch.shape}")
print(f"  Type: {type(coords_torch)}")
print("  Backend: torch")

coords_jax = einmesh_jax(pattern, x=x_space, y=y_space)
print("\nJAX backend:")
print(f"  Shape: {coords_jax.shape}")
print(f"  Type: {type(coords_jax)}")
print("  Backend: jax")

# Verify results are equivalent
print(
    f"\nAll backends produce identical results: {np.allclose(coords_numpy, np.array(coords_jax)) and np.allclose(coords_numpy, coords_torch.numpy())}"
)
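One way this pays off in practice is that the grid definition can be written once and reused; the small sketch below is hypothetical (the helper name make_grid is made up) and simply passes the backend-specific einmesh function in as an argument:

# Hypothetical helper: the grid definition is written once, and the backend
# is chosen by whichever einmesh function is handed in.
def make_grid(einmesh_fn):
    return einmesh_fn("x y *", x=LinSpace(-1, 1, 50), y=LinSpace(-2, 2, 30))

grid_np = make_grid(einmesh_np)        # numpy.ndarray
grid_torch = make_grid(einmesh_torch)  # torch.Tensor
grid_jax = make_grid(einmesh_jax)      # jax array

The pattern string and the LinSpace definitions never change; only the function you pass in does.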