How to resize an image without significant quality loss?

merendo
Enthusiast
Posts: 449
Joined: Sat Apr 26, 2003 7:24 pm
Location: Germany
Contact:

How to resize an image without significant quality loss?

Post by merendo »

Greetings to all,

I need your help. So far, I've been using ResizeImage() to resize my images, but unfortunately that function comes with a huge loss of image quality. Apparently the resizing algorithm is very cheap :( - too cheap for my purpose.

Perhaps there is a user library which I may use for resizing. Or a way to do it with PB itself? I would appreciate any help.

Best regards and many thanks in advance

merendo
The truth is never confined to a single number - especially scientific truth!
Mowen
User
Posts: 48
Joined: Tue Oct 07, 2003 1:04 pm
Location: Belgium

Post by Mowen »

Huge loss of quality? Strange. I often use ResizeImage() and I have not really noticed that. Of course there is a small loss, but it is not huge. It could depend on different parameters (size of the original image, colour depth, etc.). Do you keep the correct width/height ratio when you resize? That helps a lot to keep good image quality after resizing.
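For what it's worth, the ratio-preserving target size is easy to compute. Here's a minimal sketch in Python (used just because it's compact; the function name and rounding choice are mine, not anything from PB):

```python
def thumbnail_size(width, height, max_w, max_h):
    """Largest (w, h) that fits inside (max_w, max_h) and keeps the aspect ratio."""
    # Scale factor that fits the image inside the box in both directions.
    scale = min(max_w / width, max_h / height)
    # Round to whole pixels; never return a zero dimension.
    return max(1, round(width * scale)), max(1, round(height * scale))
```

A 400x300 photo scaled into a 100x75 box comes out as exactly 100x75; a 320x200 photo comes out as 100x62, which preserves the 1.6 ratio as closely as whole pixels allow.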
PureBasic: one of the best programming tools ever! PB is light, easy, crossplatform, powerful, fast, extendable, enjoyable and... tasty ;-)
merendo
Enthusiast
Posts: 449
Joined: Sat Apr 26, 2003 7:24 pm
Location: Germany
Contact:

Post by merendo »

Yes, I do keep the width/height ratio. The quality loss is not huge, but it is noticeable.

The borders are not smooth and the images appear slightly distorted, especially when I resize them to thumbnail size (100 x 75 pixels).
The truth is never confined to a single number - especially scientific truth!
Blade
Enthusiast
Posts: 362
Joined: Wed Aug 06, 2003 2:49 pm
Location: Venice - Italy, Japan when possible.
Contact:

Post by Blade »

Reducing an image to thumbnail size will always give some quality loss.
Would you post an example (before/after)?
Making thumbnail-size images with PB shouldn't look much different from thumbnails made by other programs such as ACDSee or Photoshop.
(If you are using the latest beta, you are enabling smoothing, aren't you?)
griz
Enthusiast
Posts: 167
Joined: Sun Jun 29, 2003 7:32 pm
Location: Canada

Post by griz »

PureBasic 3.93 does include an optional mode parameter for smoothing, but if it is not specified it will smooth by default.
Blade wrote: Reducing an image to thumb size will always give a quality loss.
If you downsample a 320x200 image to 25%, the end result is an 80x50 image. This type of reduction wouldn't introduce quality loss because the pixels are evenly divisible at 25% (1/4). Does this make sense? Other reductions like 10%, 20% and 50% would work here too, creating images of 32x20, 64x40 and 160x100 respectively. It's when you get into odd sizes like 17%, which would create an image of 54.4 x 34. How do you represent the .4? This is where quality loss arrives. Otherwise, if you stick to evenly divisible units, you're just downsampling directly, which can look very good (although the resolution is now lacking - but that's a thumbnail, isn't it?). These kinds of reductions can often look best without smoothing.
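The even-division case griz describes is easy to demonstrate outside of PB. A rough sketch in Python (plain lists of pixel rows rather than PB images; the function name is mine) that simply takes every factor-th pixel:

```python
def downsample_exact(pixels, factor):
    """Downsample by an integer factor by keeping every `factor`-th pixel.

    `pixels` is a list of rows; each row is a list of pixel values.
    Only works when both dimensions divide evenly by `factor` -
    exactly the "evenly divisible" case, with no subpixels to invent.
    """
    h, w = len(pixels), len(pixels[0])
    if h % factor or w % factor:
        raise ValueError("dimensions must divide evenly by the factor")
    return [row[::factor] for row in pixels[::factor]]
```

A 4x4 image at factor 2 comes out 2x2, each output pixel copied straight from the source grid with no blending at all.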
einander
Enthusiast
Posts: 744
Joined: Thu Jun 26, 2003 2:09 am
Location: Spain (Galicia)

Post by einander »

Hi Merendo:

I've updated an old example with a trick to change image sizes for the PB 3.93 beta.
viewtopic.php?p=49201#49201
Blade
Enthusiast
Posts: 362
Joined: Wed Aug 06, 2003 2:49 pm
Location: Venice - Italy, Japan when possible.
Contact:

Post by Blade »

griz wrote: If you downsample a 320x200 image to 25%, the end result is an 80x50 image. This type of reduction wouldn't introduce quality loss because the pixels are evenly divisible at 25% (1/4). Does this make sense?
It depends on what you mean by quality loss.
Reducing an image is always a "lossy" procedure, because there is no way to restore it to the original size without losing quality.
griz wrote: It's when you get into odd sizes like 17%, which would create an image of 54.4 x 34. How do you represent the .4? This is where quality loss arrives.
It just depends on the algorithm used. It could be smarter than taking a block of pixels, summing them together and dividing the result by the number of pixels... (is that called the "mean"?) 8)
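The block-averaging Blade describes (the mean of each block of pixels) might look like this rough Python sketch (again plain lists of rows, my own function name, integer greyscale values assumed for brevity; real RGB would average each channel separately):

```python
def downsample_average(pixels, factor):
    """Shrink by an integer factor, averaging each factor x factor block.

    Averaging the block (the mean) blends detail into each output pixel,
    which looks smoother than grabbing a single pixel per block.
    """
    h, w = len(pixels), len(pixels[0])
    out = []
    for by in range(0, h - h % factor, factor):
        row = []
        for bx in range(0, w - w % factor, factor):
            block = [pixels[y][x]
                     for y in range(by, by + factor)
                     for x in range(bx, bx + factor)]
            row.append(sum(block) // len(block))  # integer mean of the block
        out.append(row)
    return out
```

Any leftover pixels that don't fill a whole block are simply dropped here; a production resizer would handle that edge instead.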
griz
Enthusiast
Posts: 167
Joined: Sun Jun 29, 2003 7:32 pm
Location: Canada

Post by griz »

Yes Blade, I agree... however, if you work with evenly divisible numbers (as I discussed above) there is less quality loss. That's my point. You can see this effect when you zoom in and out of a bitmap in Photoshop (notice the zoom percentage). As for a smarter algorithm: you don't need one if you're reducing a 320x200 image to 10%, i.e. 32x20. You're grabbing every tenth pixel and not working with subpixels at all = best quality.
ivory
User
Posts: 36
Joined: Fri Jun 25, 2004 2:30 am

Post by ivory »

griz wrote: Yes Blade, I agree... however, if you work with evenly divisible numbers (as I discussed above) there is less quality loss. That's my point. You can see this effect when you zoom in and out of a bitmap in Photoshop (notice the zoom percentage). As for a smarter algorithm: you don't need one if you're reducing a 320x200 image to 10%, i.e. 32x20. You're grabbing every tenth pixel and not working with subpixels at all = best quality.
Grabbing every tenth pixel (a 100:1 reduction) would indeed be a lossy reduction. A smart reduction would compute a two-dimensional median (you need to consider both directions).

A smarter solution might create an edge map, then reduce to the median color, giving extra weight to the edge map.

But a thumbnail image is very small, and no smart reduction will have much visible effect.

Personally, I have programs that reduce/enlarge images to 25% of my screen, and I have no complaints about the quality I am getting.
dell_jockey
Enthusiast
Posts: 767
Joined: Sat Jan 24, 2004 6:56 pm

Post by dell_jockey »

Hi group,

If you want high-quality image resizing, you need to pre-calculate a cubic or quintic spline through the individual R, G and B values along the X and Y axes. With spline coefficients determined this way for each row and column, you can calculate new R, G and B values for any intermediate X and Y position that is not on the original pixel grid. Up- and downsampling is easy and of very high quality, especially if you take the trouble of computing quintic splines.
This method also inherently allows you to change the aspect ratio of pictures while resizing them, while still keeping very high quality.
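As a rough illustration of this row-by-row interpolation idea (not dell_jockey's exact method, and in Python rather than PB), here is a Catmull-Rom cubic - one simple interpolating cubic that passes through the sample values. A real resizer would run this over every row and then every column of each colour channel:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom cubic at t in [0, 1] between p1 and p2.

    Catmull-Rom is an *interpolating* cubic: the curve passes through
    the sample values themselves, unlike a B-spline whose control
    points lie off the curve.
    """
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)

def resample_row(row, new_len):
    """Resample one channel of one pixel row to `new_len` samples (new_len >= 2)."""
    n = len(row)
    out = []
    for i in range(new_len):
        x = i * (n - 1) / (new_len - 1)  # position in the source row
        k = min(int(x), n - 2)           # left-hand sample of the segment
        t = x - k
        p0 = row[max(k - 1, 0)]          # clamp neighbours at the edges
        p1, p2 = row[k], row[k + 1]
        p3 = row[min(k + 2, n - 1)]
        out.append(catmull_rom(p0, p1, p2, p3, t))
    return out
```

Quintic splines, as on the EPFL page, offer more smoothness, but the principle is the same: fit a curve through the samples, then read it off at the new pixel positions.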

If you want to learn more about this, check out http://bigwww.epfl.ch/thevenaz/interpolation/
cheers,
dell_jockey
________
http://blog.forex-trading-ideas.com
Blade
Enthusiast
Posts: 362
Joined: Wed Aug 06, 2003 2:49 pm
Location: Venice - Italy, Japan when possible.
Contact:

Post by Blade »

I saw a tool that resizes images using B-splines. The results were amazing!
Thanks for the link :)
dell_jockey
Enthusiast
Posts: 767
Joined: Sat Jan 24, 2004 6:56 pm

Post by dell_jockey »

Blade,

be careful with B-splines/Bézier splines, because their control points do not lie on the curve you want to interpolate. You want interpolating splines that pass through the data points provided. Only that way do you get accurate interpolations of the three colour planes. Use cubic or quintic interpolating splines, as shown on the site mentioned above.
Cubic splines are better suited for images that contain hard contrast/colour edges; if an image has fairly even contrast and colour usage, use quintic splines instead. You'll be amazed at what's possible, even when enlarging pictures.
cheers,
dell_jockey
________
http://blog.forex-trading-ideas.com