Image Transcoding for Proxy Internet Wireless Access
M. El Shentenawy, A. Gaddah, Q. Guo,
T. Kunz and R. Hafez
Systems and Computer Engineering
Carleton University
http://kunz-pc.sce.carleton.ca/
tkunz@sce.carleton.ca

Overview
Motivation
Related Work
Experimental Setup
Image Classification and Transcoding
Classification: image type (easy), image content (difficult), image properties (maybe)
Transcoding: a lossy change of data representation; the goal is to reduce data volume while preserving “meaning” as much as possible
Conclusions and Future Work

Requirements for Mobile Wireless IP

Transcoding for IP Wireless Access
Application: WWW browsing
Problems:
varying device characteristics
varying user preferences
limited and dynamically changing bandwidth
Solution:
bulk of data traffic is images
catalog images on-line (?)
apply adaptive transcoding techniques
optimization problem: per object/image or per whole document

Research Goals and Related Work
Research Goals
identify promising image transcoding operations, based on image classification
predict achievable bandwidth reductions
good adaptive algorithms that meet user constraints and take device limitations and link characteristics into account
adaptive over what timescale, how fast, interactions with link layer
Related Work
image transcoding: GloMop, InfoPyramid, Mowser, Mowgli
all introduce notion of a proxy
range of transcoding operations limited/not dynamically selected
multiple versions provided by server:
WAP: use markup language and protocol stack specific to cellphones, completely new infrastructure
RFC 2295/2296: transparent content negotiation and variant selection, leaves it to server to provide range of formats suited for huge number of clients

Experimental Setup
Step 1: verified that GIF/JPEG images are the predominant consumers of WWW bandwidth (GIF: 50%, JPEG: 32%)
Step 2: collected a set of “typical” images using Gozilla, resulting in a collection of 1500 images
Step 3: ensured that the images are representative of image size distributions found on the WWW
led to an additional download of 2000 (smaller) images
Step 4: selected image processing software (ImageMagick)
many operations
available in source code
popular and well documented

Initial Results: the same image four times?

Initial Results: Effect of Imaging Software

Classification and Prediction
Resaved all images with ImageMagick to eliminate effect of imaging software
Classification:
applied a range of operations, derived compression ratios, and tried to correlate them with image properties (size, number of colors, …)
No success; the probability distributions of the various parameters gave little reason to believe a correlation could be found
Most effective operations:
GIF: resize, reduce number of colors, convert to JPEG, convert to grayscale
JPEG: reduce image quality, resize, convert to grayscale, despeckle
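The effect of these operations can be sketched as follows, using Pillow as a stand-in for ImageMagick (which the study actually used); the resize factor, color count, and JPEG quality below are arbitrary illustration values, not the study's settings:

```python
from io import BytesIO
from PIL import Image  # stand-in for ImageMagick, which the study used

def encoded_size(im, fmt, **save_args):
    """Encode an image in memory and return the byte count."""
    buf = BytesIO()
    im.save(buf, format=fmt, **save_args)
    return buf.getbuffer().nbytes

# Synthetic gradient image so the sketch is self-contained.
im = Image.new("RGB", (128, 128))
im.putdata([(x, y, (x + y) % 256) for y in range(128) for x in range(128)])

baseline = encoded_size(im, "GIF")

# GIF operations: resize and reduce the number of colors.
gif_small = encoded_size(im.resize((64, 64)).quantize(colors=16), "GIF")

# JPEG operations: reduce quality and convert to grayscale.
jpeg_small = encoded_size(im.convert("L"), "JPEG", quality=40)
```

An adaptive proxy would pick these parameters per image rather than hard-coding them.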

Classification and Prediction
More on GIF operations
No stable ranking of most effective operations
Order of operations matters to some extent
Not worthwhile to convert small GIFs to JPEG (less than 1 KByte)
Recommendation: convert to JPEG only when PPB (pixels per bit) is greater than 0.1
More on JPEG operations
Conversion to GIF not advantageous
Ranking stable across most images
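The GIF-to-JPEG decision rule above can be sketched in a few lines; the 1 KByte and 0.1 thresholds come from the slides, while the function name and signature are invented for illustration:

```python
def should_convert_gif_to_jpeg(width, height, gif_size_bytes):
    """Heuristic from the slides: skip small GIFs (< 1 KByte) and
    convert only when PPB (pixels per bit) exceeds 0.1."""
    if gif_size_bytes < 1024:  # small GIFs: conversion not worthwhile
        return False
    ppb = (width * height) / (gif_size_bytes * 8)
    return ppb > 0.1
```

For example, a 400x300 GIF of 50,000 bytes has PPB = 120,000 / 400,000 = 0.3 and would be converted; a 100x100 GIF of the same size would not.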

Conclusions and Future Research
Defined transcoding algorithms that take user preferences and device characteristics into account and iterate to achieve the desired bandwidth reduction
Incorporate algorithms into WWW proxy (Rabbit)
Set up experiments with different client devices (real or emulated): PCs, WinCE PDAs, Palms
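The iteration idea can be sketched as a greedy loop over ranked operations; the operation names and per-step reduction factors below are hypothetical placeholders, not measured values:

```python
def adapt_to_budget(size_bytes, budget_bytes, steps):
    """Apply transcoding steps, most effective first, until the
    estimated size fits the bandwidth budget (greedy iteration)."""
    applied = []
    for name, factor in steps:
        if size_bytes <= budget_bytes:
            break  # budget met; stop degrading the image
        size_bytes = int(size_bytes * factor)
        applied.append(name)
    return size_bytes, applied

# Hypothetical reduction factors for a JPEG, ranked most effective first.
steps = [("reduce quality", 0.5), ("resize 50%", 0.4), ("grayscale", 0.8)]
final, ops = adapt_to_budget(100_000, 25_000, steps)
```

Here a 100 KByte image fits a 25 KByte budget after two of the three steps, so the third (grayscale) is never applied.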
Future work:
Evaluate “quality” of transcoded WWW pages (otherwise trivial: drop images completely)
Proxy: cache management (support heterogeneous client population) and performance issues (transcoding is not cheap)
WWW document: how to “transcode” whole document, not just individual images