A Prototyping Virtual Socket System-On-Platform Architecture with a Novel ACQPPS Motion Estimator for H.264 Video Encoding Applications
© Y. Qiu and W. Badawy. 2009
Received: 25 February 2009
Accepted: 27 July 2009
Published: 22 September 2009
H.264 delivers streaming video in high quality for various applications. The coding tools involved in H.264, however, make its video codec implementation very complicated, raising the need for algorithm optimization and hardware acceleration. In this paper, a novel adaptive crossed quarter polar pattern search (ACQPPS) algorithm is proposed to realize an enhanced inter prediction for H.264. Moreover, an efficient prototyping system-on-platform architecture is also presented, which can be utilized for the realization of an H.264 baseline profile encoder with the support of an integrated ACQPPS motion estimator and related video IP accelerators. The implementation results show that the ACQPPS motion estimator can achieve estimated image quality comparable to that of the full search method, in terms of peak signal-to-noise ratio (PSNR), while keeping the complexity at an extremely low level. With the integrated IP accelerators and optimization techniques, the proposed system-on-platform architecture fully supports H.264 real-time encoding at low cost.
Digital video processing technology aims to improve the coding validity and efficiency of digital video images. It involves the video standards and their realizations. With the joint efforts of ITU-T VCEG and ISO/IEC MPEG, H.264/AVC (MPEG-4 Part 10) has been established as the most advanced standard so far, targeting very high data compression. H.264 is able to provide good video quality at bit rates substantially lower than what previous standards need [2–4]. It can be applied to a wide variety of applications with various bit rates and video streaming resolutions, and is intended to cover practically all aspects of audio and video coding within its framework [5–7].
H.264 includes many profiles, levels and feature definitions. There are seven sets of capabilities, referred to as profiles, targeting specific classes of applications: Baseline Profile (BP) for low-cost applications with limited computing resources, which is widely used in videoconferencing and mobile communications; Main Profile (MP) for broadcasting and storage applications; Extended Profile (XP) for streaming video with relatively high compression capability; High Profile (HiP) for high-definition television applications; High 10 Profile (Hi10P) going beyond present mainstream consumer product capabilities; High 4 : 4 : 2 Profile (Hi422P) targeting professional applications using interlaced video; High 4 : 4 : 4 Profile (Hi444P) supporting up to 12 bits per sample and efficient lossless region coding and an integer residual color transform for RGB video. The levels in H.264 are defined as Level 1 to 5, each of which is for specific bit, frame and macroblock (MB) rates to be realized in different profiles.
One of the primary issues with H.264 video applications lies in how to realize the profiles, levels, tools, and algorithms featured by the H.264/AVC standard. Thanks to the rapid development of FPGA techniques and embedded software system design and verification tools, designers can utilize a hardware-software (HW/SW) codesign environment, based on a reconfigurable and programmable FPGA infrastructure, as a dedicated solution for H.264 video applications [9, 10].
The motion estimation (ME) scheme has a vital impact on H.264 video streaming applications and is the main function by which a video encoder achieves image compression. The block-matching algorithm (BMA) is an important and widely used technique to estimate the motion of a regular block and generate the motion vector (MV), the critical information for temporal redundancy reduction in video encoding. Because of its simplicity and coding efficiency, BMA has been adopted as the standard motion estimation method in a variety of video standards, such as MPEG-1, MPEG-2, MPEG-4, H.261, H.263, and H.264. Fast and accurate block-based search techniques and hardware acceleration are highly demanded to reduce the coding delay while maintaining satisfactory estimated video image quality. A novel adaptive crossed quarter polar pattern search (ACQPPS) algorithm and its hardware architecture are proposed in this paper to provide an advanced motion estimation search method with high performance and low computational complexity.
Moreover, an integrated IP accelerated codesign system, constructed with an efficient hardware architecture, is also proposed. With the integration of H.264 IP accelerators into the system framework, a complete system-on-platform solution can be set up to realize the H.264 video encoding system. Through the codevelopment and coverification for system-on-platform, the architecture and IP cores developed by designers can be easily reused and therefore transplanted from one platform to another without significant modification. These factors make a system-on-platform solution outperform a pure software solution and more flexible than a fully dedicated hardware implementation for H.264 video codec realizations.
The rest of the paper is organized as follows: in Section 2, the H.264 baseline profile and its applications are briefly analyzed. In Section 3, the ACQPPS algorithm is proposed in detail, while Section 4 describes the hardware architecture for the proposed ACQPPS motion estimator. Furthermore, the hardware architecture and host interface features of the proposed system-on-platform solution are elaborated in Section 5, and the related techniques for system optimizations are illustrated in Section 6. The complete experimental results are generated and analyzed in Section 7. Section 8 concludes the paper.
2. H.264 Baseline Profile
2.1. General Overview
The profiles and levels specify conformance points, which are designed to facilitate interoperability between a variety of video applications of the H.264 standard that have similar functional requirements. A profile defines a set of coding tools or algorithms that can be utilized in generating a compliant bitstream, whereas a level places constraints on certain key parameters of the bitstream.
H.264 baseline profile was designed to minimize the computational complexity and provide high robustness and flexibility for utilization over a broad range of network environment and conditions. It is typically regarded as the simplest one in the standard, which includes all the H.264 tools with the exception of the following tools: B-slices, weighted prediction, field (interlaced) coding, picture/macroblock adaptive switching between the frame and field coding (MB-AFF), context adaptive binary arithmetic coding (CABAC), SP/SI slices and slice data partitioning. This profile normally targets the video applications with low computational complexity and low delay requirements.
For example, in the field of mobile communications, H.264 baseline profile will play an important role because the compression efficiency is doubled in comparison with the coding schemes currently specified by the H.263 Baseline, H.263+ and MPEG-4 Simple Profile.
2.2. Baseline Profile Bitstream
For mobile and videoconferencing applications, H.264 BP, MPEG-4 Visual Simple Profile (VSP), H.263 BP, and H.263 Conversational High Compression (CHC) are usually considered. Practically, H.264 outperforms all the other considered encoders for video streaming encoding. H.264 BP allows an average bit rate saving of about 40% compared to H.263 BP, 29% compared to MPEG-4 VSP, and 27% compared to H.263 CHC, respectively.
2.3. Hardware Codec Complexity
The implementation complexity of any video coding standard heavily depends on the characteristics of the platform, for example, FPGA, DSP, ASIC, SoC, on which it is mapped. The basic analysis with respect to the H.264 BP hardware codec implementation complexity can be found in [13, 14].
In general, the main bottleneck of H.264 video encoding is a combination of multiple reference frames and large search ranges.
Moreover, the H.264 video codec complexity ratio is on the order of 10 for basic configurations and can grow by up to two orders of magnitude for complex ones.
3. The Proposed ACQPPS Algorithm
3.1. Overview of the ME Methods
For motion estimation, the full search (FS) algorithm of BMA exhaustively checks all possible block positions within the search window to find the best matching block with the minimal matching error (MME). It can usually produce a globally optimal solution to the motion estimation, but demands a very high computational complexity.
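As a concrete reference point, a minimal sketch of full-search block matching with a SAD error measure is given below; the function names and frame layout (row-major with an explicit stride) are illustrative choices rather than code from the paper, and the caller is assumed to keep the search window inside the frame.

```c
#include <stdlib.h>

/* Sum of absolute differences (SAD) between a candidate block in the
 * current frame and the block at (cx+dx, cy+dy) in the reference
 * frame.  Frames are stored row-major with the given stride. */
int block_sad(const unsigned char *cur, const unsigned char *ref,
              int stride, int cx, int cy, int dx, int dy, int bsize)
{
    int sad = 0;
    for (int y = 0; y < bsize; y++)
        for (int x = 0; x < bsize; x++)
            sad += abs((int)cur[(cy + y) * stride + (cx + x)] -
                       (int)ref[(cy + dy + y) * stride + (cx + dx + x)]);
    return sad;
}

/* Exhaustive full search over a square window [-range, +range];
 * returns the offset with the minimum matching error (MME). */
void full_search(const unsigned char *cur, const unsigned char *ref,
                 int stride, int cx, int cy, int bsize, int range,
                 int *best_dx, int *best_dy)
{
    int best = -1;
    for (int dy = -range; dy <= range; dy++)
        for (int dx = -range; dx <= range; dx++) {
            int sad = block_sad(cur, ref, stride, cx, cy, dx, dy, bsize);
            if (best < 0 || sad < best) {
                best = sad;
                *best_dx = dx;
                *best_dy = dy;
            }
        }
}
```

The fast algorithms discussed next aim to approach the same minimum while checking far fewer of these (2·range + 1)² candidate positions.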
To reduce the required operations, many fast algorithms have been developed, including the 2D logarithmic search (LOGS), the three-step search (TSS), the new three-step search (NTSS), the novel four-step search (NFSS), the block-based gradient descent search (BBGDS), the diamond search (DS), the hexagonal search (HEX), the unrestricted center-biased diamond search (UCBDS), and so forth. The basic idea behind these multistep fast search algorithms is to check a few block points at the current step, and to restrict the search in the next step to the neighborhood of the point that minimizes the block distortion measure.
These algorithms, however, assume that the error surface of the minimum absolute difference increases monotonically as the search position moves away from the global minimum on the error surface. This assumption may be reasonable in a small region near the global minimum, but it is not absolutely true for real video signals. To avoid being trapped in an undesirable local minimum, some adaptive search algorithms have been devised, intending to achieve the global optimum or suboptimum with adaptive search patterns. One of those algorithms is the adaptive rood pattern search (ARPS).
Recently, several valuable algorithms have been developed to further improve the search performance, such as the Enhanced Predictive Zonal Search (EPZS) [25, 26] and the Unsymmetrical-Cross Multi-Hexagon-grid Search (UMHexagonS), which have even been adopted as standard motion estimation algorithms in the H.264 reference implementation. These schemes, however, are not especially suitable for hardware implementation, as their search principles are complicated. If a hardware architecture is required for the realization of an H.264 encoder, these algorithms are usually not regarded as an efficient solution.
To improve the search performance and reduce the computational complexity as well, an efficient and fast method, adaptive crossed quarter polar pattern search algorithm (ACQPPS), is therefore proposed in this paper.
3.2. Algorithm Design Considerations
It is known that a small search pattern with compactly spaced search points (SP) is more appropriate than a large search pattern with sparsely spaced search points for detecting small motions. On the contrary, a large search pattern has the advantage of quickly detecting large motions, avoiding the trap of a local minimum along the search path and the resulting unfavorable estimation, an issue the small search pattern encounters. It is therefore desirable to use different search patterns, that is, adaptive search patterns, in view of the variety of estimated motion behaviors.
Three main aspects are considered to improve or speed up the matching procedure for adaptive search methods: (1) the type of motion prediction; (2) the selection of the search pattern shape and direction; (3) the adaptive length of the search pattern. The first two aspects reduce the number of search points, while the last one gives a more accurate search result for large motions.
For the proposed ACQPPS algorithm under the H.264 encoding framework, a median type of predicted motion vector, that is, the median vector predictor (MVP), is produced for determining the initial search range. The shape and direction of the search pattern are adaptively selected, and the length (radius) of the search arm is adjusted to improve the search. Two main steps are involved in the motion search: (1) the initial search stage; (2) the refined search stage. In the initial search stage, some initial search points are selected to obtain an initial MME point. For the refined search, a unit-sized square pattern is applied iteratively to obtain the final best motion vector.
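The median vector predictor can be sketched as a component-wise median over the motion vectors of three neighboring blocks, in the style of the H.264 predictor; the struct and function names below are illustrative, not the paper's:

```c
/* Median vector predictor (MVP): component-wise median of the motion
 * vectors of the left (A), top (B), and top-right (C) neighbouring
 * blocks, used as the initial search centre. */
typedef struct { int x, y; } mv_t;

static int median3(int a, int b, int c)
{
    if (a > b) { int t = a; a = b; b = t; }  /* ensure a <= b  */
    if (b > c) { b = c; }                    /* b = min(b, c)  */
    return a > b ? a : b;                    /* max(a, b)      */
}

mv_t median_predictor(mv_t a, mv_t b, mv_t c)
{
    mv_t p;
    p.x = median3(a.x, b.x, c.x);
    p.y = median3(a.y, b.y, c.y);
    return p;
}
```

For example, neighbors (3, −1), (1, 2), and (2, 0) yield the predictor (2, 0).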
3.3. Shape of the Search Pattern
To determine the subsequent search step according to whether the current best matching point is positioned at the center of the search range, a new search pattern is devised to detect the potentially optimal search points in the initial search stage. The basic concept is to pick some initial points along a polar (circular) search pattern, where the center of the search circles is the current block position.
Under the assumption that the matching error surface increases or decreases monotonically, however, some redundant checking points may exist in the initial search stage; such points need not be examined under the assumption of a unimodal distortion surface. To reduce the number of initial checking points while keeping the probability of finding the optimal matching point as high as possible, a fractional, that is, quarter, polar search pattern is used accordingly.
Moreover, it is known that the accuracy of the motion predictor is very important to an adaptive pattern search. To improve the performance of the adaptive search, extra related motion predictors can be used in addition to the initial MVP. The extra motion predictors utilized by the ACQPPS algorithm only require an extension and a contraction of the initial MVP, which can be easily obtained. Therefore, at the crossing of the quarter circle and the motion predictors, the search method is equipped with adaptive crossed quarter polar patterns for efficient motion search.
3.4. Adaptive Directions of the Search Pattern
(1) If the predicted MV (motion predictor) = 0, set up an initial square search pattern with pattern size R = 1 around the search center, as shown in Figure 2(a).
(2) If the predicted MV falls onto a coordinate axis, that is, PredMVy = 0 or PredMVx = 0, the pattern direction is chosen to be E, N, W, or S, as shown in Figures 1(a), 1(c), 1(e), and 1(g). In this case, the point at the initial motion predictor overlaps an initial search point on the E, N, W, or S coordinate axis.
(3) If the predicted MV does not fall onto any coordinate axis but points close to one of the coordinate axes, the pattern direction is chosen to be E, N, W, or S, as shown in Figure 2(b).
(4) If the predicted MV does not fall onto any coordinate axis and points close to one of the diagonal directions, the pattern direction is chosen to be NE, NW, SW, or SE, as shown in Figure 2(c).
3.5. Size of the Search Pattern
The size (radius) of the search pattern is determined from the motion predictor by (1), where R is the radius of the quarter circle, and PredMVy and PredMVx are the vertical and horizontal components of the motion predictor, respectively.
3.6. Initial Search Points
After the direction and size of a search pattern are decided, some search points will be selected in the initial search stage. Each search point represents a block to be checked with intensity matching. The initial search points include (when MVP is not zero):
(1)the predicted motion vector point;
(2)the center point of search pattern, which represents the candidate block in the current frame;
(3)some points on the directional axis;
(4)the extension predicted motion vector point (the point with the prolonged length of the motion predictor), and the contraction predicted motion vector point (the point with the contracted length of the motion predictor).
Normally, if no overlapping exists, a total of seven search points will be selected in the initial search stage, in order to obtain the point with the MME, which serves as the basis for the refined search stage thereafter.
If a search point is on the axis of NW, NE, SW, or SE, the corresponding decomposed coordinates of that point will satisfy

SPx^2 + SPy^2 = R^2,

where SPy and SPx are the vertical and horizontal components of a search point on the axis of NW, NE, SW, or SE. Because |SPy| is equal to |SPx| in this case,

|SPy| = |SPx| = R / sqrt(2) ≈ 0.707 R.

For common radii, a look-up table defines the vertical and horizontal components of the initial search points on the NW/NE/SW/SE axes. For other radii, the values of SPy and SPx can be determined by the decomposition above, rounded to integer pixel positions.
Definition of the scaled factors for the initial search points related to the motion predictor: the scaled factor for extension (SFE) and the scaled factor for contraction (SFC).
Therefore, the initial search points related to the motion predictor can be identified as

EMVP = SFE × MVP, (5)
CMVP = SFC × MVP, (6)

where MVP is the point representing the median vector predictor, SFE and SFC are the scaled factors for the extension and contraction, respectively, and EMVP and CMVP are the initial search points with the prolonged and contracted lengths of the predicted motion vector, respectively. If the horizontal or vertical component of EMVP or CMVP is not an integer after scaling, the component value is truncated to an integer for video block processing.
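A minimal sketch of this scaling with integer truncation follows; the factor values 1.5 and 0.5 in the usage example are illustrative assumptions, since SFE and SFC are defined in the paper's own table:

```c
/* Scale the median vector predictor (MVP) to obtain an extended point
 * (EMVP = SFE * MVP) or a contracted point (CMVP = SFC * MVP), with
 * non-integer components truncated for video block processing. */
typedef struct { int x, y; } mvp_pt;

mvp_pt scale_predictor(mvp_pt mvp, double factor)
{
    mvp_pt p;
    p.x = (int)(mvp.x * factor);  /* C cast truncates toward zero */
    p.y = (int)(mvp.y * factor);
    return p;
}
```

For MVP = (5, −3), an assumed 1.5× extension truncates to (7, −4) and an assumed 0.5× contraction to (2, −1).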
3.7. Algorithm Procedure
(1) Get a predicted motion vector (MVP) for the candidate block in the current frame for the initial search stage.
(2) Find the adaptive direction of the search pattern by the rules in Section 3.4, determine the pattern size R with (1), and choose the initial SPs in the reference frame along the quarter circle and the predicted MV using the look-up table, (5), and (6).
(3) Check the initial search points with the block pixel intensity measurement, and take the MME point, that is, the point with the minimum SAD, as the search center for the next search stage.
(4) Refine the local search by applying a unit-sized square pattern to the MME point (search center) and checking its neighboring points with the block pixel intensity measurement. If, after this search, the MME point is still the search center, stop searching; the final motion vector for the candidate block corresponds to the best matching point identified in this step. Otherwise, set the new MME point as the search center and apply the square pattern search to it again, until the stop condition is satisfied.
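The refinement stage (step 4) can be sketched as the following loop; the callback signature and the toy cost surface are illustrative assumptions standing in for the real block SAD:

```c
/* Matching-error callback: returns the block distortion at candidate
 * motion vector (dx, dy).  A real encoder would supply a block SAD. */
typedef int (*sad_fn)(int dx, int dy, void *ctx);

/* Refined search stage: repeatedly evaluate the unit-square neighbours
 * of the current search centre and move to the best one, stopping when
 * the centre itself is the MME point. */
void square_refine(int *dx, int *dy, sad_fn sad, void *ctx)
{
    int best = sad(*dx, *dy, ctx);
    for (;;) {
        int bx = *dx, by = *dy;
        for (int oy = -1; oy <= 1; oy++)
            for (int ox = -1; ox <= 1; ox++) {
                int s = sad(*dx + ox, *dy + oy, ctx);
                if (s < best) { best = s; bx = *dx + ox; by = *dy + oy; }
            }
        if (bx == *dx && by == *dy)
            break;              /* centre has the minimum SAD: stop */
        *dx = bx;               /* move the centre and search again  */
        *dy = by;
    }
}

/* Toy convex error surface with its minimum at (3, -2), used only to
 * demonstrate convergence of the refinement loop. */
int toy_cost(int dx, int dy, void *ctx)
{
    (void)ctx;
    return (dx - 3) * (dx - 3) + (dy + 2) * (dy + 2);
}
```

Starting from (0, 0) on the toy surface, the loop walks to (3, −2) and stops there, since no unit-square neighbor improves on the center.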
3.8. Algorithm Complexity
As ACQPPS is a predictive and adaptive multistep algorithm for motion search, its computational complexity depends exclusively on the object motions contained in the video sequences and the scenarios for estimation processing. The main overhead of the ACQPPS algorithm lies in the block SAD computations. The remaining algorithm overhead, such as the selection of the adaptive search pattern direction and the determination of the search arm and initial search points, consists merely of a combination of if-condition judgments, and is thus negligible compared with the block SAD calculations.
If large, quick, and complex object motions are contained in the video sequences, the number of search points (NSP) will be correspondingly increased. On the contrary, if small, slow, and simple object motions appear in the sequences, the ACQPPS algorithm requires only a few processing steps to finish the motion search, that is, the number of search points is correspondingly reduced.
Unlike the ME algorithms with fixed search ranges, for example, the full search algorithm, it is impractical to identify precisely the number of computational steps for ACQPPS. On average, however, an approximation can be utilized to represent the computational complexity of the ACQPPS method. If a fixed block size is employed, the worst case of motion search for a video sequence is the 4 × 4 block size. In this case, the number of search points per block for ACQPPS motion estimation stays small in practical motion search results, so the algorithm complexity can be identified simply in terms of the image size and frame rate, with the 4 × 4 block size taken as the worst case for computation.
For a standard software implementation, each 4 × 4 block SAD calculation requires 16 subtractions and 15 additions, that is, 31 arithmetic operations. Accordingly, the complexity of ACQPPS is approximately 14 and 60 times lower than that required by the full search algorithm for the smaller and larger evaluated search ranges, respectively. In practice, the ACQPPS complexity is roughly at the same level as that of the simple DS algorithm.
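The operation count can be sketched as below; the average NSP of 10 search points per block and the CIF/30 fps configuration are assumptions for illustration only, since the true NSP is sequence dependent:

```c
/* Approximate ACQPPS arithmetic cost per second: number of 4x4 blocks
 * per frame x average search points per block (NSP) x 31 operations
 * per 4x4 SAD x frame rate.  NSP is sequence dependent; callers pass
 * an assumed average. */
long acqpps_ops_per_sec(int width, int height, int nsp, int fps)
{
    long blocks = (long)(width / 4) * (height / 4); /* worst case: 4x4 */
    return blocks * nsp * 31L * fps;
}
```

For CIF (352 × 288) at 30 fps with an assumed NSP of 10, this gives about 59 million operations per second, far below an exhaustive full search over the same frames.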
4. Hardware Architecture of ACQPPS Motion Estimator
ACQPPS is designed with low complexity, which makes it appropriate for implementation in a hardware architecture. The hardware architecture takes advantage of the pipelining and parallel operation of the adaptive search patterns, and utilizes a fully pipelined multilevel SAD calculator to improve the computational efficiency and, therefore, keep the required clock frequency reasonably low.
As mentioned above, the computation of the motion vector for the smallest block shape, that is, the 4 × 4 block, is the worst case for calculation. The worst case refers to the percentage usage of the memory bandwidth. It is necessary that the computational efficiency be as high as possible in the worst case. All of the other block shapes can be constructed from 4 × 4 blocks, so that computing the distortion as 4 × 4 partial solutions and adding the results can handle all of the other block shapes.
4.1. ACQPPS Hardware Architecture
4.2. Fully Pipelined SAD Calculator
For instance, the largest (16 × 16) block requires 4 stages of parallel data loadings from the register arrays to the SAD calculator to obtain the final block SAD result. In this case, the schedule of data loading will be {0, 1, 2, 3}, {4, 5, 6, 7}, {8, 9, 10, 11}, {12, 13, 14, 15}, where each entry indicates a parallel pixel data input with the current and reference block data.
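Functionally, the staged accumulation amounts to the following; treating the sixteen 4 × 4 partial SADs as already computed is a simplification of the pipelined hardware:

```c
/* Accumulate a 16x16 block SAD from its sixteen 4x4 partial SADs in
 * four stages, mirroring the loading schedule {0,1,2,3}, {4,5,6,7},
 * {8,9,10,11}, {12,13,14,15}: four partial results are consumed in
 * parallel per stage and summed into the running total. */
int sad16x16_from_partials(const int partial[16])
{
    int total = 0;
    for (int stage = 0; stage < 4; stage++) {
        int stage_sum = 0;
        for (int i = 0; i < 4; i++)      /* 4 parallel inputs per stage */
            stage_sum += partial[stage * 4 + i];
        total += stage_sum;              /* accumulate stage by stage   */
    }
    return total;
}
```

Intermediate block shapes (8 × 8, 8 × 16, and so on) fall out of the same scheme by summing the appropriate subsets of partial SADs.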
4.3. Optimized Memory Structure
When a square pattern is used to refine the MV search results, the mapping of the memory architecture is important to speed up the performance. In our design, the memory architecture is mapped onto a 2D register space for the refined stage. The maximum size of this space, at the stored pixel bit depth, allows the mapped register memory to accommodate the largest macroblock plus the edge redundancy required for the rotated data shift and storage operations.
A simple combination of parallel register shifts and related data fetches from SRAM can reduce the memory bandwidth and facilitate the refinement processing, as many of the pixel data for searching in this stage remain unchanged. For example, 87.89% and 93.75% of the pixel data stay unchanged when the (1, −1) and (1, 0) offset searches for the 16 × 16 block are executed, respectively.
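These reuse figures follow directly from the overlap of the shifted block windows; for an N × N block the shared region is (N − |dx|) × (N − |dy|) pixels, as the small helper below illustrates:

```c
/* Fraction of pixels shared between an N x N block at the current
 * search centre and the same block shifted by (dx, dy).  For a 16x16
 * block this reproduces the reuse figures quoted in the text. */
double overlap_ratio(int n, int dx, int dy)
{
    int ax = dx < 0 ? -dx : dx;   /* |dx| */
    int ay = dy < 0 ? -dy : dy;   /* |dy| */
    if (ax >= n || ay >= n)
        return 0.0;               /* no overlap at all */
    return (double)((n - ax) * (n - ay)) / (double)(n * n);
}
```

Here overlap_ratio(16, 1, 0) is 0.9375 and overlap_ratio(16, 1, -1) is 0.87890625, matching the 93.75% and 87.89% quoted above.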
4.4. SAD Comparator
The SAD comparator is utilized to compare the previously generated block SAD results in order to obtain the final estimated MV, which corresponds to the best MME point, that is, the point with the minimum SAD or lowest block pixel intensity difference. To select and compare the proper block SAD results as shown in Figure 6, the signals of the different block shapes and computing stages are employed to determine the appropriate mode of minimum SAD to be utilized.
For example, if the 16 × 16 block size is used for motion estimation, the block data will be loaded into the BPU for SAD calculations, and each block requires 4 computing stages to obtain the final block SAD result. In this case, the corresponding SAD result mode will be selected first. Meanwhile, the signal of the computing stages is also used to indicate the valid input to the SAD comparator for retrieving the proper SAD results from the BPU, and thus obtain the MME point with the minimum SAD for this block size.
The best MME point position obtained by SAD comparator is further employed to produce the best matched reference block data and residual data which are important to other video encoding functions, such as mathematical transforms and motion compensation, and so forth.
5. Virtual Socket System-on-Platform Architecture
The bitstream and hardware complexity analysis derived in Section 2 helps guide both the architecture design for the prototyping IP accelerated system and the optimized implementation of an H.264 BP encoding system based on that architecture.
5.1. The Proposed System-On-Platform Architecture
The variety of options, switches, and modes required in a video bitstream results in increasing interactions between different video tasks or function-specific IP blocks. Consequently, function-oriented and fully dedicated architectures become inefficient if high levels of flexibility are not provided in the individual IP modules. To keep the architecture efficient, the hardware blocks need optimization to deal with the increasing complexity of visual object processing. Besides, the hardware must remain flexible enough to manage and allocate the various resources, memories, and computational video IP accelerators for different encoding tasks. Since programmable solutions are preferable for video codec applications, with programmable and reconfigurable processing cores the heterogeneous functionality and algorithms can be executed on the same hardware platform and upgraded flexibly in software.
To accelerate the performance of the processing cores, parallelization is demanded. The parallelization can take place at different levels, such as task, data, and instruction. Furthermore, specific video processing algorithms performed by IP accelerators or processing cores can improve the execution efficiency significantly. The requirements of H.264 video applications are so demanding that multiple acceleration techniques may need to be combined to meet real-time conditions. Programmable, reconfigurable, heterogeneous processors are the preferable choice for an implementation of an H.264 BP video encoder. Architectures with support for concurrent execution and hardware video IP accelerators are well suited to achieving the real-time requirement imposed by the H.264 standard.
The processing cores are connected through the heterogeneous integrated on-platform memory spaces for the exchange of control information. The PCI/PCMCIA standard bus provides a data transfer solution for the host connected to the platform framework, and allows the host to reconfigure and control the platform in a flexible way. Desirable video IP accelerators are integrated in the system platform architecture to improve the encoding performance for H.264 BP video applications.
5.2. Virtual Socket Management
The concept of virtual socket is thus introduced to the proposed system-on-platform architecture. Virtual socket is a solution for the host-platform interface, which can map a virtual memory space from the host environment to the physical storage on the architecture. It is an efficient mechanism for the management of virtual memory interface and heterogeneous memory spaces on the system framework. It enables a truly integrated, platform independent environment for the hardware-software codevelopment.
Through the virtual socket interface, a set of virtual socket application programming interface (API) function calls can be employed so that generic hardware functional IP accelerators automatically map virtual memory addresses from the host system to the different memory spaces on the hardware platform. With this efficient virtual socket memory organization, the hardware abstraction layer provides the system architecture with simplified memory access, interrupt-based control, and shielded interactions between the platform framework and the host system. Through the integration of IP accelerators into the hardware architecture, the system performance is improved significantly.
The codesigned virtual socket host-platform interface management and system-on-platform hardware architecture provide a useful embedded system approach for the realization of an advanced and complicated H.264 video encoding system. Hence, the IP accelerators on FPGA, together with the extensible DSP and RISC, construct an efficient programmable embedded solution to perform dedicated, real-time video processing tasks. Moreover, given the various video configurations for H.264 encoding, the physically implemented virtual socket interface and APIs easily enable the encoder configurations, data manipulations, and communications between the host computer system and the hardware architecture, which in turn facilitates the system development for H.264 video encoders.
5.3. Integration of IP Accelerators
The IP accelerator illustrated here can be any H.264 compliant hardware block which is defined to handle a computationally intensive task for video applications without a specific design for interaction controls between the IP and the host. For encoding, the basic modules to be integrated include the Motion Estimator, Discrete Cosine Transform and Quantization (DCT/Q), Deblocking Filter, and Context Adaptive Variable Length Coding (CAVLC), while Inverse Discrete Cosine Transform and Inverse Quantization (IDCT/IQ) and Motion Compensation (MC) serve the decoding loop. An IP memory interface is provided by the architecture to achieve the integration. All IP modules are connected to the IP memory interface, which provides accelerators a straight way to exchange data between the host and the memory spaces. Interrupt signals can be generated by accelerators when demanded. Moreover, to control the concurrent performance of the accelerators, an IP bus arbitrator is designed and integrated in the IP memory interface, allowing the interface controller to allocate appropriate memory operation time for each IP module and avoid the memory access conflicts possibly caused by heterogeneous IP operations.
IP interface signals:
Clk, reset, start: platform signals for the IP.
Valid strobes for IP memory access.
Input and output memory data for the IP.
IP request for memory read.
Mem_HW Accel, offset, count: IP number, offset, and data count provided by the IP/host for a memory read.
Mem_HW Accel1, offset1, count1: IP number, offset, and data count provided by the IP/host for a memory write.
IP bus request for memory access.
IP bus release request.
IP bus request grant.
IP bus release grant.
IP interrupt signal.
5.4. Host Interface and API Function Calls
The host interface provides the architecture with the necessary data for video processing. It can also control the video accelerators to operate in sequential or parallel mode, in accordance with the H.264 video codec specifications. The hardware-software partitioning is simplified so that the host interface can focus on the data communication and flow control for video tasks, while the hardware accelerators deal with local memory accesses and video codec functions. The software abstraction layer therefore covers data exchange and video task flow control for the hardware.
A set of related virtual socket API functions is defined to implement the host interface features. The virtual socket APIs are software function calls coded in C/C++, which perform data transfers and signal interactions between the host and the hardware system-on-platform. The virtual socket API, as a software infrastructure, can be utilized by a variety of video applications to control the hardware features defined. With the virtual socket APIs, the manipulation of video data in local memories can be executed conveniently, so the efficiency of hardware and software interactions can be kept high.
6. System Optimizations
6.1. Memory Optimization
Due to the significant memory access requirements of video encoding tasks, a large number of clock cycles is consumed by the processing core while waiting for data fetches from local memory spaces. To reduce or avoid this memory access overhead, the storage of video frame data can be organized to utilize multiple independent memory spaces (SRAM and DRAM) and dual-port memory (BRAM), enabling parallel and pipelined memory access during video encoding. This optimization practically provides the system architecture with multiport memory storage that reduces the data access bandwidth for each individual memory space.
Furthermore, with dual-port data access, DMA can be scheduled to transfer large amounts of video frame data through the PCI bus and virtual socket interface in parallel with the encoding tasks, so that the processing core does not suffer memory and encoding latency. In this case, the data control flow of video encoding is managed so that the DMA transfers and IP accelerator operations proceed in fully parallel and pipelined stages.
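The buffer scheduling behind this overlap can be sketched as a ping-pong (double-buffer) scheme; the buffer size, the checksum "encoder", and the sequential modeling of the DMA are all simplifying assumptions used only to show the alternation:

```c
#include <string.h>

#define BUF_SIZE 64

static unsigned char bufs[2][BUF_SIZE];  /* ping-pong buffer pair */

/* Model of a DMA transfer of one data block into a buffer. */
static void dma_fill(unsigned char *dst, const unsigned char *src)
{
    memcpy(dst, src, BUF_SIZE);
}

/* Model of the encoder consuming a buffer (a checksum stands in for
 * the real encoding work). */
static unsigned encode(const unsigned char *buf)
{
    unsigned sum = 0;
    for (int i = 0; i < BUF_SIZE; i++)
        sum += buf[i];
    return sum;
}

/* Encode nblocks blocks from `stream`, alternating the two buffers:
 * block i is filled into bufs[i % 2] and then encoded, so that in
 * hardware the fill of block i+1 can overlap the encode of block i. */
unsigned encode_stream(const unsigned char *stream, int nblocks)
{
    unsigned total = 0;
    for (int i = 0; i < nblocks; i++) {
        unsigned char *cur = bufs[i % 2];  /* ping-pong selection */
        dma_fill(cur, stream + (long)i * BUF_SIZE);
        total += encode(cur);
    }
    return total;
}
```

In hardware, the dma_fill of block i + 1 would run concurrently with the encode of block i; the alternation of bufs[0] and bufs[1] is what makes that overlap safe.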
6.2. Architecture Optimization
As the main video encoding functions (such as ME, DCT/Q, IDCT/Q⁻¹, MC, deblocking filter, and CAVLC) can be accelerated by IP modules, the interconnection between these video processing accelerators has an important impact on overall system performance. To let the IP accelerators execute the main computational encoding routines in fully parallel and pipelined mode, the IP integration architecture has to be optimized. A number of caches are inserted between the video IP accelerators to facilitate concurrent encoding. These caches can be organized as parallel dual-port memory (BRAM) or pipelined memory (FIFO). The interconnection control of data streaming between IP modules is defined around these caches to eliminate extra overhead in the processing routines, so that the encoding functions can operate in fully parallel and pipelined stages.
6.3. Algorithm Optimization
The complexity of the encoding algorithms can be reduced while the IP accelerators are being designed. This optimization follows from choosing the most appropriate modes, options, and configurations for H.264 BP applications. The motion estimator is known to incur the largest share of the encoding computation. To reduce the complexity of motion estimation, a very efficient and fast ACQPPS algorithm and a corresponding hardware architecture have been realized, based on reducing spatio-temporal correlation redundancy. Other algorithm optimizations can also be applied. For example, a simple optimization targets the transform and quantization: since many blocks have minimal residual data after motion compensation, the transform and quantization of a motion-compensated block can be skipped if its SAD is lower than a prescribed threshold, which speeds up processing.
Applying the memory, algorithm, and architecture optimizations together addresses the major challenges in realizing a video encoding system. These optimization techniques reduce the encoding complexity and memory bandwidth, with a well-defined parallel and pipelined data streaming control flow, enabling the implementation of a simplified H.264 BP encoder.
6.4. An IP Accelerated Model for Video Encoding
This IP accelerated system model includes the memory, algorithm, and architecture optimization techniques to reduce or eliminate the overhead resulting from the heterogeneous video encoding tasks. The video encoding model provided in this architecture is compliant with the H.264 standard specifications.
7. Implementation Results
The proposed ACQPPS algorithm is integrated and verified under the H.264 JM Reference Software, while the hardware architectures, including the ACQPPS motion estimator and the system-on-platform framework, are synthesized with Synplify Pro 8.6.2 and implemented using Xilinx ISE 8.1i SP3, targeting the Virtex-4 XC4VSX35FF668-10 on the WildCard-4 platform.
The system hardware architecture can readily process QCIF/SIF/CIF video frames with the on-platform design resources. The Virtex-4 XC4VSX35 contains 3,456 Kb of BRAM, 192 XtremeDSP (DSP48) slices, and 15,360 logic slices, equivalent to almost 1 million logic gates. Moreover, the WildCard-4 integrates 8 MB of SRAM and 128 MB of DRAM. With these design resources and memory support, whole QCIF/SIF/CIF video frames can be stored directly in the on-platform memories for efficient hardware processing.
For example, if a CIF YUV (YCbCr) 4:2:0 video sequence is encoded with the optimized hardware architecture proposed in Figure 9, the total size of each current frame is 148.5 KB. Each current CIF frame can therefore be transferred from the host system and stored directly in BRAM for motion estimation and video encoding, whereas the generated reference frames are stored in SRAM or DRAM. The SRAM and DRAM can accommodate up to 55 and 882 CIF reference frames, respectively, which is more than enough for practical video encoding.
7.1. Performance of ACQPPS Algorithm
Video sequences for the experiment at real-time frame rate (sequence with bit rate in Kbps; number of frames).
The implementation results in Tables 6 and 7 show that the estimated image quality produced by ACQPPS, in terms of PSNR, is very close to that of FS, while the average number of search points is dramatically reduced. In most cases, the PSNR degradation of ACQPPS relative to FS is less than 0.06 dB, and in some cases the PSNR of ACQPPS is approximately equivalent or equal to that of FS. Compared with other fast search methods, that is, DS (small pattern), UCBDS, TSS, FSS, and HEX, ACQPPS consistently yields higher PSNR on the evaluated video sequences.
Moreover, the performance of ACQPPS is comparable to that of the sophisticated EPZS and UMHexagonS algorithms, as its average PSNR is close to the values achieved by both.
Video sequences for the experiment at low bit and frame rates (sequence with bit rate in Kbps; number of frames).
Average PSNR performance for the experiment at real-time frame rate.
Average number of search points per MB for the experiment at real-time frame rate.
Average PSNR performance for the experiment at low bit and frame rates.
Average number of search points per MB for the experiment at low bit and frame rates.
The experiments show that the PSNR difference between ACQPPS and FS remains small and within an acceptable range; in most cases there is less than 0.2 dB PSNR discrepancy between them. ACQPPS also still clearly outperforms DS, UCBDS, TSS, FSS, and HEX. Mobile scenarios usually involve quick and considerable motion displacements under low-frame-rate encoding; in such cases, ACQPPS is particularly strong compared with those fast algorithms, achieving gains of up to +2.42 dB PSNR on the tested sequences. Compared with EPZS and UMHexagonS, ACQPPS again yields a close average PSNR.
Normally, ACQPPS produces a favorable PSNR for sequences containing not only small object motions but also large amounts of motion. In particular, if a sequence includes large object motions or a considerable amount of motion, the advantage of ACQPPS is obvious, as it can adaptively choose different shapes and sizes for the search pattern, which suits efficient large-motion search.
This search advantage can be observed when ACQPPS is compared with DS. It is known that DS uses a simple diamond pattern for very low complexity motion search. For video sequences with slow, small motions, for example, Miss_Am (QCIF) and Mother_Daughter (CIF) at 30 fps, the PSNR performance of DS and ACQPPS is relatively close, indicating that DS performs well for simple motion search. When video images contain complicated and large motions, however, DS cannot yield good PSNR, as its motion search is easily trapped in undesirable local minima. For example, the PSNR differences between DS and ACQPPS are 0.34 dB and 0.44 dB when Foreman (CIF) is tested at 1 Mbps and 30 fps and at 150 Kbps and 10 fps, respectively. Overall, ACQPPS produces a higher average PSNR than DS both for real-time video encoding and in the low bit and frame rate environment.
The number of search points (NSP) for each method, which mainly represents the algorithm complexity, is also measured to compare the search efficiency of the different approaches. The NSP results show that the search efficiency of ACQPPS is higher than that of the other algorithms, as ACQPPS produces very good PSNR performance with a modest NSP. The NSP of ACQPPS is among the lowest of all the methods.
Compared with DS, ACQPPS has a similar NSP. The NSP of ACQPPS is usually slightly higher than that of DS, but the increase is limited and very reasonable, and in turn brings ACQPPS much better PSNR for the encoded video. Furthermore, for video sequences containing complex and quick object motions, for example, Foreman (CIF) and Stefan (CIF) at 30 fps, the NSP of ACQPPS can even be lower than that of DS, which confirms that ACQPPS has better search efficiency than DS thanks to its highly adaptive search patterns.
In general, the complexity of ACQPPS is very low while its search performance is high, which makes it especially suitable for hardware architecture implementation.
7.2. Design Resources for ACQPPS Motion Estimator
Table 10: performance comparison between the proposed ACQPPS and other motion estimation hardware architectures [33–36], covering the implementation platform (e.g., FPGA + DSP), supported block sizes, maximum fps for CIF, and minimum frequency (MHz) for CIF at 30 fps.
7.3. Throughput of ACQPPS Motion Estimator
Unlike FS, which has a fixed search range, the search points and search range of ACQPPS depend on the video sequence. The ACQPPS search points increase if a video sequence contains considerable or quick motion; conversely, they decrease if a sequence contains slow or little motion.
The ME scheme with a fixed block size can typically be applied to the throughput analysis. The worst case is then motion estimation with the most time-consuming fixed block size. On that basis, the overall throughput of the ACQPPS architecture can be reasonably generalized and evaluated.
In general, if the clock frequency is 50 MHz and the memory (SRAM, BRAM, and DRAM) is organized as DWORD (32-bit) accesses, the ACQPPS hardware architecture needs an average of approximately 12.39 milliseconds for motion estimation in this worst case. For a real hardware implementation, this typical worst-case throughput represents the overall motion search capability of the motion estimator architecture.
Therefore, the ACQPPS architecture can complete motion estimation for more than 4 CIF (352 × 288) video sequences, or one equivalent 4CIF (704 × 576) video sequence, at a 75 MHz clock frequency within each 33.33-millisecond time slot (30 fps), meeting the real-time encoding requirement at low design cost and low bit rate. The throughput of the ACQPPS architecture is compared with that of a variety of recently developed motion estimator hardware architectures in Table 10. The comparison shows that the proposed ACQPPS architecture achieves higher throughput than the other hardware architectures at a reduced operational clock frequency. It requires only a very low clock frequency, that is, 18.75 MHz, to generate motion estimation results for CIF video sequences at 30 fps.
7.4. Realization of System Architecture
Design resources for the system-on-platform architecture.
DMA performance for video sequence transfer: DMA write, read, and combined R/W times (ms) for QCIF 4:2:0 YCrCb and CIF 4:2:0 YCrCb frames.
Different DMA burst sizes result in different DMA data transfer rates. In our case, the maximum DMA burst size is defined to accommodate a whole CIF 4:2:0 video frame, that is, 38,016 DWORDs per DMA transfer buffer. The DMA transfer results verify that it takes an average of only approximately 2 milliseconds to transfer a whole CIF 4:2:0 video frame on the WildCard-4. This transfer performance is sufficient to support bitstream rates up to level 4 for the H.264 BP video encoding system.
7.5. Overall Encoding Performance
In view of the complexity analysis of the H.264 video tasks described in Section 2, the most time-consuming task is motion estimation; the other encoding tasks have much less overhead. The video tasks can therefore be scheduled to operate in parallel and pipelined stages, as shown in Figures 9 and 10 for the proposed architecture model. In this case, the overall encoding time for a video sequence is approximately the motion estimation time plus the time the remaining pipeline stages need to finish the final block.
The processing time of DCT/Q, IDCT/Q⁻¹, MC, deblocking filter, and CAVLC for a divided block depends directly on the architecture design of each module. On average, the overhead of these video tasks for encoding an individual block is much less than that of motion estimation. As a whole, the encoding time these tasks add for the final block can even be ignored when compared with the total processing time of the motion estimator over a whole video sequence. Therefore, to simplify the overall encoding performance analysis for the proposed architecture model, the total encoding overhead for a video sequence can be approximated by the motion estimation time alone.
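A compact way to write the relation described above (the symbols T_enc, T_ME, and t_last are chosen here for illustration, not taken from the paper's notation):

```latex
% Hedged sketch of the pipelined encoding-time relation.
T_{\mathrm{enc}} \;\approx\; T_{\mathrm{ME}}(\text{sequence}) \;+\; t_{\mathrm{last}},
\qquad
t_{\mathrm{last}} \ll T_{\mathrm{ME}}(\text{sequence})
\;\Rightarrow\;
T_{\mathrm{enc}} \;\approx\; T_{\mathrm{ME}}(\text{sequence}),
```

where t_last is the pipeline drain time of the non-ME tasks for the final block.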
This simplified analysis of system encoding performance is valid as long as the video tasks operate in concurrent and pipelined stages with the efficient optimization techniques. Accordingly, when the proposed ACQPPS motion estimator is integrated into the system architecture to perform the motion search, the overall encoding performance of the proposed architecture model can be generalized.
An overall performance comparison for H.264 BP video encoding systems [37–39], covering the design approach (codesign with extensible multiple processing cores), the motion estimation algorithm (full search in the reference designs), maximum fps for CIF, minimum frequency (MHz) for CIF at 30 fps, and the core and I/O voltage supplies.
An integrated, reconfigurable, hardware-software codesign IP accelerated system-on-platform architecture is proposed in this paper. An efficient virtual socket interface and optimization approaches for the hardware realization have been presented. The system architecture is flexible in its host interface control and extensible with multiple cores, forming a useful integrated and embedded system approach for dedicated functions.
An advanced application of the proposed architecture is to facilitate the development of an H.264 video encoding system. As motion estimation is the most complicated and important task in a video encoder, a novel block-based adaptive motion estimation search algorithm, ACQPPS, and its hardware architecture are developed to reduce the complexity to an extremely low level while keeping the encoding performance, in terms of PSNR and bit rate, as high as possible. Integrating video IP accelerators, especially the ACQPPS motion estimator, into the architecture framework improves the overall encoding performance. The proposed system architecture is mapped onto an integrated FPGA device, the WildCard-4, toward the implementation of a simplified H.264 BP video encoder.
In practice, the proposed system architecture can greatly facilitate and efficiently verify the realization of multistandard video codecs beyond H.264 applications. The advantages of the proposed architecture can be expected to become even more attractive for prototyping future video encoding systems, as new video standards continue to emerge, for example, the coming H.265 draft.
The authors would like to thank the support from Alberta Informatics Circle of Research Excellence (iCore), Xilinx Inc., Natural Science and Engineering Research Council of Canada (NSERC), Canada Foundation for Innovation (CFI), and the Department of Electrical and Computer Engineering at the University of Calgary.
- Tekalp M: Digital Video Processing, Signal Processing Series. Prentice Hall, Englewood Cliffs, NJ, USA; 1995.
- Information technology—generic coding of moving pictures and associated audio information: video. ISO/IEC 13818-2, September 1995.
- Video coding for low bit rate communication. ITU-T Recommendation H.263, March 1996.
- Coding of audio-visual objects—part 2: visual, amendment 1: visual extensions. ISO/IEC 14496-4/AMD 1, April 1999.
- Joint Video Team of ITU-T and ISO/IEC JTC 1: Draft ITU-T recommendation and final draft international standard of joint video specification (ITU-T Rec. H.264 ISO/IEC 14496-10 AVC). JVT-G050r1, May 2003; JVT-K050r1 (non-integrated form) and JVT-K051r1 (integrated form), March 2004; Fidelity Range Extensions JVT-L047 (non-integrated form) and JVT-L050 (integrated form), July 2004.
- Wiegand T, Sullivan GJ, Bjøntegaard G, Luthra A: Overview of the H.264/AVC video coding standard. IEEE Transactions on Circuits and Systems for Video Technology 2003, 13(7):560-576.
- Wenger S: H.264/AVC over IP. IEEE Transactions on Circuits and Systems for Video Technology 2003, 13(7):645-656. 10.1109/TCSVT.2003.814966
- Zeidman B: Designing with FPGAs and CPLDs. Publishers Group West, Berkeley, Calif, USA; 2002.
- Notebaert S, Cock JD: Hardware/Software Co-design of the H.264/AVC Standard. Ghent University; White Paper, 2004.
- Staehler W, Susin A: IP Core for an H.264 Decoder SoC. Universidade Federal do Rio Grande do Sul (UFRGS); White Paper, October 2008.
- Chandra R: IP-Reuse and Platform Base Designs. STMicroelectronics Inc.; White Paper, February 2002.
- Wiegand T, Schwarz H, Joch A, Kossentini F, Sullivan GJ: Rate-constrained coder control and comparison of video coding standards. IEEE Transactions on Circuits and Systems for Video Technology 2003, 13(7):688-703. 10.1109/TCSVT.2003.815168
- Ostermann J, Bormans J, List P, et al.: Video coding with H.264/AVC: tools, performance, and complexity. IEEE Circuits and Systems Magazine 2004, 4(1):7-28. 10.1109/MCAS.2004.1286980
- Horowitz M, Joch A, Kossentini F, Hallapuro A: H.264/AVC baseline profile decoder complexity analysis. IEEE Transactions on Circuits and Systems for Video Technology 2003, 13(7):704-716. 10.1109/TCSVT.2003.814967
- Saponara S, Blanch C, Denolf K, Bormans J: The JVT advanced video coding standard: complexity and performance analysis on a tool-by-tool basis. Proceedings of the Packet Video Workshop (PV '03), April 2003, Nantes, France.
- Jain JR, Jain AK: Displacement measurement and its application in interframe image coding. IEEE Transactions on Communications 1981, 29(12):1799-1808. 10.1109/TCOM.1981.1094950
- Koga T, Iinuma K, Hirano A, Iijima Y, Ishiguro T: Motion compensated interframe coding for video conferencing. Proceedings of the IEEE National Telecommunications Conference (NTC '81), November 1981, 4:1-9.
- Li R, Zeng B, Liou ML: A new three-step search algorithm for block motion estimation. IEEE Transactions on Circuits and Systems for Video Technology 1994, 4:438-442. 10.1109/76.313138
- Po L-M, Ma W-C: A novel four-step search algorithm for fast block motion estimation. IEEE Transactions on Circuits and Systems for Video Technology 1996, 6(3):313-317. 10.1109/76.499840
- Liu L-K, Feig E: A block-based gradient descent search algorithm for block motion estimation in video coding. IEEE Transactions on Circuits and Systems for Video Technology 1996, 6(4):419-421. 10.1109/76.510936
- Zhu S, Ma KK: A new diamond search algorithm for fast block-matching motion estimation. Proceedings of the International Conference on Information, Communications and Signal Processing (ICICS '97), September 1997, Singapore, 1:292-296.
- Zhu C, Lin X, Chau L-P: Hexagon-based search pattern for fast block motion estimation. IEEE Transactions on Circuits and Systems for Video Technology 2002, 12(5):349-355. 10.1109/TCSVT.2002.1003474
- Tham JY, Ranganath S, Ranganath M, Kassim AA: A novel unrestricted center-biased diamond search algorithm for block motion estimation. IEEE Transactions on Circuits and Systems for Video Technology 1998, 8(4):369-377. 10.1109/76.709403
- Nie Y, Ma K-K: Adaptive rood pattern search for fast block-matching motion estimation. IEEE Transactions on Image Processing 2002, 11(12):1442-1449. 10.1109/TIP.2002.806251
- Tourapis HC, Tourapis AM: Fast motion estimation within the H.264 codec. Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '03), July 2003, Baltimore, Md, USA, 3:517-520.
- Tourapis AM: Enhanced predictive zonal search for single and multiple frame motion estimation. Visual Communications and Image Processing, January 2002, Proceedings of SPIE 4671:1069-1079.
- Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG: Fast integer pel and fractional pel motion estimation for AVC. JVT-F016, December 2002.
- Sühring K: H.264 JM Reference Software v.15.0. September 2008, http://iphome.hhi.de/suehring/tml/download
- Annapolis Micro Systems: WildCard-4 Reference Manual. 12968-000 Revision 3.2, December 2005.
- Xilinx Inc.: Virtex-4 User Guide. UG070 (v2.3), August 2007.
- Xilinx Inc.: XtremeDSP for Virtex-4 FPGAs User Guide. UG073 (v2.1), December 2005.
- Tourapis AM, Au OC, Liou ML: Predictive motion vector field adaptive search technique (PMVFAST)—enhanced block based motion estimation. Proceedings of the IEEE Visual Communications and Image Processing (VCIP '01), January 2001, 883-892.
- Huang Y-W, Wang T-C, Hsieh B-Y, Chen L-G: Hardware architecture design for variable block size motion estimation in MPEG-4 AVC/JVT/ITU-T H.264. Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '03), May 2003, 2:796-798.
- Kim M, Hwang I, Chae S: A fast VLSI architecture for full-search variable block size motion estimation in MPEG-4 AVC/H.264. Proceedings of the IEEE Asia and South Pacific Design Automation Conference, January 2005, 1:631-634.
- Shen J-F, Wang T-C, Chen L-G: A novel low-power full-search block-matching motion-estimation design for H.263+. IEEE Transactions on Circuits and Systems for Video Technology 2001, 11(7):890-897. 10.1109/76.931116
- Yap SY, McCanny JV: A VLSI architecture for advanced video coding motion estimation. Proceedings of the IEEE International Conference on Application-Specific Systems, Architectures, and Processors (ASAP '03), June 2003, 1:293-301.
- Mochizuki S, Shibayama T, Hase M, et al.: A 64 mW high picture quality H.264/MPEG-4 video codec IP for HD mobile applications in 90 nm CMOS. IEEE Journal of Solid-State Circuits 2008, 43(11):2354-2362.
- Colenbrander RR, Damstra AS, Korevaar CW, Verhaar CA, Molderink A: Co-design and implementation of the H.264/AVC motion estimation algorithm using co-simulation. Proceedings of the 11th IEEE EUROMICRO Conference on Digital System Design Architectures, Methods and Tools (DSD '08), September 2008, 210-215.
- Li Z, Zeng X, Yin Z, Hu S, Wang L: The design and optimization of H.264 encoder based on the Nexperia platform. Proceedings of the 8th IEEE International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD '07), July 2007, 1:216-219.
- Winkler S, Dufaux F: Video quality evaluation for mobile applications. Visual Communications and Image Processing, July 2003, Lugano, Switzerland, Proceedings of SPIE 5150:593-603.
- Ries M, Nemethova O, Rupp M: Motion based reference-free quality estimation for H.264/AVC video streaming. Proceedings of the 2nd International Symposium on Wireless Pervasive Computing (ISWPC '07), February 2007, 355-359.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.