
Algorithm for Detecting Video and a New Design for Handling Video Applications in Thin-Client Systems at Low Bit Rates

Umendar Koosam and Devendra Jalihal
Thin-Client Lab, Department of Electrical Engineering
Indian Institute of Technology Madras, Chennai 600036, India
Email: [email protected], [email protected]

Abstract — Existing thin clients are well suited to office applications, but for multimedia and gaming applications their performance degrades drastically in terms of frame rate and bandwidth utilization. The encoding techniques available at the thick server do not consider the temporal redundancy between frame updates, which is a key feature of video applications. In this paper, we propose a new method for handling video applications using a video codec, and develop an algorithm for video window detection at the virtual frame buffer level. Using experimental results, we compare this method with other classic encoding schemes in terms of bandwidth utilization and frame rate.

I. INTRODUCTION

The encoding techniques in the present versions of the RFB (Remote Frame Buffer) protocol (also known as Virtual Network Computing) [1] used in thin-client systems are rectangle-based. They exploit spatial redundancy only, because they encode each frame independently; they do not exploit temporal redundancy (other than frame differencing). Color-rich areas are handled with an image compression technique such as JPEG in the best-performing Tight Encoding [2], but this is not an adequate solution for video applications: it cannot meet the bandwidth constraints of video over slow network connections. This motivates the study of alternative solutions and implementations for handling video at low bit rates.

A number of solutions have been proposed that essentially include some form of video compression. We discuss two basic approaches below.

A. Direct Streaming

Streaming the video file directly to the client avoids decoding of the video by the host CPU and re-encoding by the VNC server; at the client end, the VNC viewer must be capable of decoding and displaying the video. This method uses less bandwidth than the present VNC encoding, and the server CPU load is reduced because video decoding and VNC encoding are avoided. However, the video, flash and browser (for embedded video) applications have to be modified, and the client must have decoders for all the video file formats stored at the server.

B. Video Re-encoding

If the location of the video window within the desktop screen area is known to the VNC server, a video codec can be used instead of the present VNC encoding, and the client needs only the decoder corresponding to the codec used at the server. There are two methods of determining the video window location in the desktop screen area. Both require less bandwidth and more CPU resources at the server than the present VNC encoding, and both may degrade video quality because of the re-encoding.

Application-level detection: The video window location is obtained from the applications. This method requires modifications to the video and browser applications.

Frame-buffer-level detection: The presence of video is detected from the virtual frame buffer. In this method, the video and browser applications need not be modified.

These two methods work for all video file formats, and the client does not need decoders for all of them. Moreover, both methods can also be used for flash and gaming applications.

D. De Winter et al. [3] used the H.264 video codec to stream the graphical output of applications, after GPU (Graphics Processing Unit) processing, to a thin-client device. The complexity of the graphical commands, or the amount of motion in the desktop screen, is used to switch between a real-time video codec and VNC encoding. However, performing motion detection and encoding on the entire desktop screen area (even for small video sizes) is not an optimal solution.

Solutions that require modifications to the applications are not practical for proprietary video players such as RealPlayer, QuickTime Player and Windows Media Player. Modifications to open-source players such as xine, mplayer and flash are possible; however, modifying every new open-source player, and every new version of each, is impractical from a maintenance point of view. Considering these facts, we propose to use a variant of the H.263 video codec [4] for the video portion, because of its lower complexity, as shown in Figure 1.

Figure 1: Proposed method for handling the video portion in a thin-client system.

A frame buffer update is sent from the server to the client upon request. The applications write their data into the virtual frame buffer at the thick server, and the data is represented in terms of rectangles. The video portion is detected from the virtual frame buffer content and is encoded using a video codec; the remaining rectangles (other than the video portion) of the frame buffer are encoded by normal Tight Encoding.

The remainder of this paper is organized as follows. Section II describes the new video window detection algorithm. Section III presents the experimental results, followed by the conclusion in Section IV.

II. VIDEO WINDOW DETECTION ALGORITHM

The present frame update is the difference between the previous and present screen information and is represented in terms of rectangles <xi, yi, wi, hi>, where i = 1, 2, ..., N; xi, yi denote the position of the rectangle in the screen area, and wi, hi denote its width and height.

We define the video window as a rectangular portion that is very dense in color, is updated over successive frame updates with the same <x, y, w, h>, and whose successive images are correlated. Rectangles with w < Wmin and h < Hmin are filtered out as non-video rectangles; we set Wmin = 176 and Hmin = 144. The steps involved in video window detection, described in detail below, are as follows: first, the image is checked for picture (natural image) content; second, the temporal correlation between the picture images of the present and previous frame updates is measured; finally, motion due to scrolling operations is checked between the correlated images. Once video window detection has succeeded for a span of k frames, the video_flag (VF) is set and the detection algorithm is thereafter bypassed for that video window <x, y, w, h>. We assume that only one video portion is present in the screen area. The algorithm processes the frame buffer data in the RGB color domain without converting to gray scale, which saves a lot of computational power in real time. The algorithm is shown as a flow chart in Figure 2.

Figure 2: Video window detection algorithm.

A. Detection of picture/natural image

Lin T. et al. proposed a color-count-based classification of text blocks for compound image compression [5]. We propose a mean-RGB-based classification of picture and non-picture blocks. The steps are given below.

Step 1: The image is divided into non-overlapping blocks of size M x M. We set M = 12.

Step 2: Each 12x12 block is subdivided into nine 4x4 sub-blocks.

Step 3: The mean R, G, B values of each sub-block are computed as

Mr = Σ Σ R(i, j)
Mg = Σ Σ G(i, j)        (1)
Mb = Σ Σ B(i, j)

where the sums run over the pixels of the 4x4 sub-block and R, G and B are the color component matrices of the image (the normalization factor, common to all sub-blocks, is omitted).

Step 4: A 12x12 block is classified as non-picture if the absolute differences of Mr, Mg and Mb between any adjacent (horizontal or vertical) pair of 4x4 sub-blocks are all zero; otherwise it is a picture block.

Step 5: If the number of picture blocks is greater than a threshold Tpict, the image is declared a picture/natural image. We set Tpict = 80% of the total number of blocks.

Most of the documents handled in typical computer usage are text pages, web pages, picture images and video files. It is observed that picture/natural images do not have identical mean R, G and B values in two adjacent 4x4 sub-blocks. Experiments were conducted on images of each category using the above classification algorithm, and the results are shown in Table 1. In this step, text pages and web pages are separated from the group of color-rich picture images and video frames.

Table 1: Average percentage of picture and non-picture blocks in various images

Block type            Text pages    Web pages    Video images
Picture blocks            11            18            92
Non-picture blocks        89            82             8

B. Correlation measurement between two picture images

Step 1: The two images are divided into non-overlapping blocks of size N x N.

Step 2: The mean R, G, B values of each block are computed as in (1), with i, j = 1, ..., 16 for N = 16.

Step 3: Let R1i, G1i, B1i and R2i, G2i, B2i be the mean R, G, B values of the i-th blocks of the two images, respectively. Two corresponding N x N blocks are said to be correlated if the following conditions are satisfied:

|R1i - R2i| < Th
|G1i - G2i| < Th        (2)
|B1i - B2i| < Th

The threshold Th is set to 25 for N = 16. Figure 3 shows the probability-of-miss and probability-of-false-alarm curves for different thresholds.

Step 4: If the number of correlated blocks is greater than the number of uncorrelated blocks, the two images are declared temporally correlated.

In this step, series of picture images (e.g., JPEG, BMP) and slide shows of complex text documents, which are uncorrelated, are classified as non-video rectangles.
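As an illustration, the mean-RGB picture/non-picture block classification described above can be sketched as follows. This is our own minimal sketch, not the authors' implementation: the function names are ours, images are assumed to be NumPy arrays of shape (height, width, 3), and we interpret the non-picture rule as "some adjacent pair of 4x4 sub-blocks has identical mean R, G and B".

```python
import numpy as np

M = 12        # classification block size (Step 1)
SUB = 4       # sub-block size (Step 2): nine 4x4 sub-blocks per 12x12 block
T_PICT = 0.8  # fraction of picture blocks required to declare a picture image

def is_picture_block(block):
    """Classify one 12x12 RGB block as picture (True) or non-picture (False).

    Computes the mean R, G, B of each 4x4 sub-block (Step 3) and declares
    the block non-picture if any horizontally or vertically adjacent pair
    of sub-blocks has identical means in all three channels (Step 4).
    """
    n = M // SUB  # 3x3 grid of sub-blocks
    means = np.empty((n, n, 3))
    for r in range(n):
        for c in range(n):
            sub = block[r * SUB:(r + 1) * SUB, c * SUB:(c + 1) * SUB, :]
            means[r, c] = sub.reshape(-1, 3).mean(axis=0)
    for r in range(n):           # horizontal neighbours
        for c in range(n - 1):
            if np.all(means[r, c] == means[r, c + 1]):
                return False
    for r in range(n - 1):       # vertical neighbours
        for c in range(n):
            if np.all(means[r, c] == means[r + 1, c]):
                return False
    return True

def is_picture_image(img):
    """Step 5: picture/natural image if picture blocks exceed T_PICT.

    Partial blocks at the right/bottom edges are ignored in this sketch.
    """
    h, w = img.shape[:2]
    blocks = picture = 0
    for y in range(0, h - M + 1, M):
        for x in range(0, w - M + 1, M):
            blocks += 1
            picture += is_picture_block(img[y:y + M, x:x + M, :])
    return blocks > 0 and picture / blocks > T_PICT
```

A flat single-color region (typical of text-page backgrounds) yields identical sub-block means and is rejected, while a region whose pixel values vary everywhere is accepted, matching the observation in Table 1 that text and web pages contain mostly non-picture blocks.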
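The block-wise temporal correlation test of Eq. (2) can likewise be sketched. Again this is our own illustrative code, not the paper's implementation: the function names are ours, edge remainders smaller than N are dropped, and N = 16 and Th = 25 follow the values stated in the text.

```python
import numpy as np

N = 16   # correlation block size
TH = 25  # per-channel threshold on the mean difference, for N = 16

def block_means(img):
    """Mean R, G, B of each non-overlapping NxN block (Steps 1-2).

    Returns an array of shape (rows, cols, 3); edge remainders are dropped.
    """
    h, w = img.shape[:2]
    rows, cols = h // N, w // N
    img = img[:rows * N, :cols * N, :].astype(float)
    return img.reshape(rows, N, cols, N, 3).mean(axis=(1, 3))

def images_correlated(img1, img2):
    """Decide temporal correlation between two equal-size picture images.

    Two corresponding blocks are correlated if all three absolute mean
    differences are below TH (Eq. 2); the images are temporally correlated
    if correlated blocks outnumber uncorrelated ones.
    """
    m1, m2 = block_means(img1), block_means(img2)
    corr = np.all(np.abs(m1 - m2) < TH, axis=-1)
    return corr.sum() > corr.size - corr.sum()
```

Comparing block means rather than raw pixels makes the test cheap and tolerant of codec noise, while the majority vote over blocks keeps a small moving region from breaking the correlation decision.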