
Image.fromarray Changes Size

I have data that I want to store in an image. I created an image with width 100 and height 28, and my matrix has the same shape. But when I use Image.fromarray(matrix), the resulting image comes out with its dimensions swapped.
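A minimal reproduction of what the question is presumably describing (the dtype and the zero-filled data are assumptions, since the original snippet was not included):

import numpy as np
from PIL import Image

matrix = np.zeros((100, 28), dtype=np.uint8)  # assumed 8-bit grayscale data
img = Image.fromarray(matrix)
print(img.size)  # (28, 100): width and height look swapped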

Solution 1:

PIL and NumPy use different indexing conventions: matrix[a, b] gives you the point at x position b and y position a, but img.getpixel((a, b)) gives you the point at x position a and y position b. As a result, when you convert between NumPy arrays and PIL images, the dimensions appear to swap. To fix this, you can take the transpose of the matrix (matrix.transpose()). Here's what's happening:

import numpy as np
from PIL import Image

img = Image.new('L', (100, 28))
img.putpixel((5, 3), 17)

matrix = np.array(img)

print(matrix[5, 3])  # prints 0
print(matrix[3, 5])  # prints 17

matrix = matrix.transpose()
print(matrix[5, 3])  # prints 17
print(matrix[3, 5])  # prints 0
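As a sanity check (a sketch building on the snippet above), converting straight back with Image.fromarray and no transpose recovers the original size, because the axis swap happens symmetrically in both directions:

# Round trip Image -> array -> Image preserves the size when you don't transpose
img2 = Image.fromarray(np.array(img))
print(img2.size)  # (100, 28), same as img.size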

Solution 2:

NumPy and PIL use different indexing conventions, so a (100, 28) NumPy array will be interpreted as an image with width 28 and height 100.

If you want a 28x100 image, then you should swap the dimensions for your image instantiation.

img = Image.new('L', (28, 100))
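A quick check of that mapping (a sketch; the array contents don't matter here):

import numpy as np
from PIL import Image

img = Image.new('L', (28, 100))  # width 28, height 100
print(np.array(img).shape)       # (100, 28): rows = height, columns = width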

If you want a 100x28 image, then you should transpose the numpy array.

tmp = Image.fromarray(matrix.transpose())
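To confirm the resulting size (assuming matrix is a uint8 array of shape (100, 28), as in the question):

print(tmp.size)  # (100, 28): width 100, height 28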

More generally, if you're working with RGB data, you can pass axes to transpose() to swap only the first two axes.

>>> arr = np.zeros((100, 28, 3))
>>> arr.shape
(100, 28, 3)
>>> arr.transpose(1, 0, 2).shape
(28, 100, 3)
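To actually build an image from such an array, note that Image.fromarray expects uint8 data for RGB mode; a minimal sketch (the zero-filled array is just a placeholder):

import numpy as np
from PIL import Image

rgb = np.zeros((100, 28, 3), dtype=np.uint8)  # uint8 is required for RGB mode
img = Image.fromarray(rgb.transpose(1, 0, 2))
print(img.size)  # (100, 28)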
