• Title/Summary/Keyword: frame splitting


Moving Image Compression with Splitting Sub-blocks for Frame Difference Based on 3D-DCT (3D-DCT 기반 프레임 차분의 부블록 분할 동영상 압축)

  • Choi, Jae-Yoon;Park, Dong-Chun;Kim, Tae-Hyo
    • Journal of the Institute of Electronics Engineers of Korea CI / v.37 no.1 / pp.55-63 / 2000
  • This paper investigates the sub-region compression effect of the three-dimensional DCT (3D-DCT) using the inter-frame difference components of images. The proposed algorithm obtains compression by dividing the information into subbands after the 3D-DCT, where the data take the form of a cubic block (8×8×8) built from eight difference-component frames per unit. In the frequency domain, the eight difference frames are transformed into eight DCT frames containing both the spatial and temporal frequency components of the inter-frame data, and each 8×8 frame component along the time axis is divided into 4×4 sub-blocks; because the image energy concentrates in the low-frequency corner region of the cubic block, compression data can be obtained effectively. In addition, using sub-block weights, the compression ratio is improved by adaptively treating the low-frequency sub-region. In simulation, we estimated the compression ratio and the reconstructed image quality (PSNR) for a simple image and for a complex image containing higher-frequency components. As a result, a high compression effect of 30.36 dB (average for the complex image) and 34.75 dB (average for the simple image) was obtained in the compression range of 0.04~0.05 bpp.
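
A minimal sketch of the general idea described above (not the authors' algorithm): eight inter-frame difference images are stacked, a 3D-DCT is taken over each 8×8×8 cube, and only a low-frequency corner sub-block is kept. The frame size, the `keep` parameter, and the corner (zonal) selection are illustrative assumptions.

```python
# Illustrative sketch: 3D-DCT of frame-difference cubes with a low-frequency corner kept.
import numpy as np
from scipy.fft import dctn

def compress_cubes(frames, keep=4):
    """frames: (9, H, W) grayscale frames; returns the kept low-frequency blocks."""
    diffs = np.diff(frames.astype(np.float32), axis=0)      # eight difference frames
    h, w = diffs.shape[1:]
    kept = []
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            cube = diffs[:, y:y + 8, x:x + 8]                # 8x8x8 cubic block
            coeffs = dctn(cube, norm='ortho')                # 3D-DCT (time + space)
            kept.append(coeffs[:keep, :keep, :keep].copy())  # low-frequency corner
    return kept

# Usage with random data standing in for real frames.
frames = np.random.randint(0, 256, size=(9, 64, 64))
blocks = compress_cubes(frames)
print(len(blocks), blocks[0].shape)   # 64 cubes, each reduced to (4, 4, 4)
```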


Seismic Performance Evaluation of Existing Low-rise RC Frames with Non-seismic Detail (비내진상세를 가지는 기존 저층 철근콘크리트 골조의 내진거동평가)

  • Kim, Kyung Min;Lee, Sang Ho;Oh, Sang Hoon
    • Journal of the Earthquake Engineering Society of Korea / v.17 no.3 / pp.97-105 / 2013
  • In this paper, a static experiment on two reinforced concrete (RC) frame sub-assemblages was conducted to evaluate the seismic behavior of existing RC frames that were not designed to support a seismic load. The specimens were one-span and actual-sized. One of them had two columns with the same stiffness, while the other had two columns with different stiffness values. Regarding the test results, many cracks occurred on the surfaces of the columns and beam-column joints of the two specimens, but cover concrete splitting was minimal until the end of the test. In the case of the specimen with the same stiffness for the two columns, flexural collapse of the left-side column occurred. However, in the case of the specimen with different stiffness values for the two columns, the beam-column joint finally collapsed, even though the shear strength of the joint was designed to be strong enough to support the lateral collapse load. Nonlinear static analysis of the two specimens was also conducted using a uniaxial spring model, and the analytical results successfully simulated the nonlinear behavior of the specimens in accordance with the test results.

Collision Reduction Using Modified Q-Algorithm with Moving Readers in LED-ID System

  • Huynh, Vu Van;Le, Nam-Tuan;Choi, Sun-Woong;Jang, Yeong-Min
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.5A / pp.358-366 / 2012
  • LED-ID (Light Emitting Diode - Identification) is one of the key technologies for simultaneous identification, data transmission, and illumination, and represents a new paradigm in identification technology. Many issues remain challenging for achieving high performance in LED-ID systems, and collision is one of them; in fact, it is the most significant issue in any identification system, and LED-ID systems also suffer from it. In our system, a collision occurs when two or more readers transmit data to a tag at the same time, or vice versa. Many anti-collision protocols exist to resolve this problem, such as Slotted ALOHA, Basic Frame Slotted ALOHA, Query Tree, Tree Splitting, and the Q-Algorithm. In this paper, we propose a modified Q-Algorithm to resolve collisions at the tag. The proposed protocol is based on the Q-Algorithm and uses information about the readers arriving at a tag from its neighbors, including the transmitting slot numbers of the readers and the number of readers that may arrive in the next slot. The proposed protocol reduces the number of collision slots and the time needed to successfully identify all readers. Simulation and theoretical results are presented in this paper.
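
For context, the sketch below illustrates the baseline (unmodified) Q-Algorithm frame-size adaptation that the paper builds on, written in the conventional orientation where a frame of 2^Q slots is adapted while contenders are identified; the paper applies the idea with readers contending at a tag and additionally uses neighbour information, which is not modelled here. The step size `c`, the initial `q_fp`, and the per-round update are assumptions.

```python
# Sketch of the baseline Q-Algorithm: the frame grows after collided slots
# and shrinks after empty slots; singleton slots are successful identifications.
import random

def q_round(num_contenders, q_fp, c=0.3):
    """Run one frame of 2**round(q_fp) slots; return (identified, updated q_fp)."""
    slots = [0] * (2 ** round(q_fp))
    for _ in range(num_contenders):
        slots[random.randrange(len(slots))] += 1   # each contender picks a slot
    identified = 0
    for count in slots:
        if count == 0:
            q_fp = max(0.0, q_fp - c)              # empty slot: shrink the frame
        elif count == 1:
            identified += 1                        # singleton slot: success
        else:
            q_fp = min(15.0, q_fp + c)             # collision: grow the frame
    return identified, q_fp

remaining, q_fp, rounds = 30, 4.0, 0
while remaining > 0:
    ok, q_fp = q_round(remaining, q_fp)
    remaining -= ok
    rounds += 1
print("identified all contenders in", rounds, "rounds")
```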

Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook;Kim, Tae-Hwan;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.47-60 / 2012
  • Video data is unstructured and has a complex structure. As the importance of efficient management and retrieval of video data increases, studies on video parsing based on the visual features contained in the video content have been conducted to reconstruct video data into a meaningful structure. Early studies on video parsing focused on splitting video data into shots, but detecting shot boundaries defined by physical boundaries does not consider the semantic association of the video data. Recently, studies that use clustering methods to organize semantically associated video shots into video scenes, defined by semantic boundaries, have been actively pursued. Previous studies on video scene detection try to detect scenes by applying clustering algorithms with a similarity measure between shots that depends mainly on color features. However, correctly identifying a video shot or scene and detecting gradual transitions such as dissolves, fades, and wipes is difficult, because the color features of video data are noisy and change abruptly when an unexpected object intervenes. In this paper, to solve these problems, we propose the Scene Detector using Color histogram, corner Edge and Object color histogram (SDCEO), which clusters similar shots organizing the same event based on visual features including the color histogram, the corner edge, and the object color histogram in order to detect video scenes. SDCEO is notable in that it uses the edge feature together with the color feature, and as a result it effectively detects gradual transitions as well as abrupt transitions. SDCEO consists of the Shot Bound Identifier and the Video Scene Detector. The Shot Bound Identifier comprises a Color Histogram Analysis step and a Corner Edge Analysis step. In the Color Histogram Analysis step, SDCEO uses the color histogram feature to organize shot boundaries. The color histogram, which records the percentage of each quantized color among all pixels in a frame, is chosen for its good performance, as also reported in other work on content-based image and video analysis. To organize shot boundaries, SDCEO joins associated sequential frames into shot boundaries by measuring the similarity of the color histograms between frames. In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature: it detects associated shot boundaries by comparing the corner edge feature between the last frame of the previous shot boundary and the first frame of the next shot boundary. In the Key-frame Extraction step, SDCEO compares each frame with all frames, measures the similarity using the histogram Euclidean distance, and then selects the frame most similar to all frames contained in the same shot boundary as the key-frame. The Video Scene Detector clusters associated shots organizing the same event by applying hierarchical agglomerative clustering based on the visual features, including the color histogram and the object color histogram. After detecting video scenes, SDCEO organizes the final video scenes by repeated clustering until the similarity distance between shot boundaries is less than the threshold h.
In this paper, we construct a prototype of SDCEO and carry out experiments with manually constructed baseline data; the experimental results, with a precision of 93.3% for shot boundary detection and 83.3% for video scene detection, are satisfactory.
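
The sketch below illustrates the kind of color-histogram comparison used in the Color Histogram Analysis step: quantized histograms of consecutive frames are compared with a Euclidean distance, and a new shot starts when the distance exceeds a threshold. It is a simplified stand-in, not SDCEO itself; the bin count and threshold are assumptions, and the corner-edge refinement and key-frame extraction are omitted.

```python
# Simplified color-histogram shot-boundary detection.
import numpy as np

def color_histogram(frame, bins=8):
    """Quantized RGB histogram, normalized to sum to 1. frame: (H, W, 3) uint8."""
    hist, _ = np.histogramdd(frame.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def detect_shot_boundaries(frames, threshold=0.3):
    boundaries = [0]
    prev = color_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = color_histogram(frames[i])
        if np.linalg.norm(cur - prev) > threshold:   # Euclidean histogram distance
            boundaries.append(i)                     # large change: new shot starts here
        prev = cur
    return boundaries

# Usage with random frames standing in for decoded video.
frames = [np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8) for _ in range(10)]
print(detect_shot_boundaries(frames))
```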

A Quadtree-based Disparity Estimation for 3D Intermediate View Synthesis (3차원 중간영상의 합성을 위한 쿼드트리기반 변이추정 방법)

  • 성준호;이성주;김성식;하태현;김재석
    • Journal of Broadcast Engineering / v.9 no.3 / pp.257-273 / 2004
  • In stereoscopic or multi-view three-dimensional display systems, the synthesis of intermediate sequences is needed to assure look-around capability and continuous motion parallax, thereby enhancing comfortable 3D perception. Quadtree-based disparity estimation is one of the most notable methods for synthesizing intermediate sequences because of the simplicity of its algorithm and hardware implementation. In this paper, we propose two ideas to reduce the annoying flicker at object boundaries in intermediate sequences synthesized by quadtree-based disparity estimation. First, a new split scheme provides more consistent quadtree splitting during disparity estimation. Second, adaptive temporal smoothing using the correlation between the present frame and the previous one relieves disparity estimation errors. The two proposed ideas are tested on several stereoscopic sequences, and the annoying flickering is remarkably reduced.
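
As a rough illustration of quadtree splitting in this setting (not the paper's new split scheme or its temporal smoothing), the sketch below recursively splits a block into four children whenever the disparity values inside it are not uniform enough; the variance test, thresholds, and minimum block size are assumptions.

```python
# Illustrative quadtree splitting driven by disparity uniformity.
import numpy as np

def split_quadtree(disparity, x, y, size, min_size=4, var_thresh=1.0):
    """Return (x, y, size) leaf blocks covering disparity[y:y+size, x:x+size]."""
    block = disparity[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_thresh:
        return [(x, y, size)]                    # uniform enough: keep as a leaf
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):                     # otherwise recurse into four children
            leaves += split_quadtree(disparity, x + dx, y + dy, half,
                                     min_size, var_thresh)
    return leaves

# Usage: a synthetic disparity map with one "object" at a different depth.
disparity = np.zeros((32, 32))
disparity[8:16, 8:24] = 5.0
print(len(split_quadtree(disparity, 0, 0, 32)), "leaf blocks")
```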

A Novel Transmit Diversity Technique for IS-2000 Systems (IS-2000 시스템을 위한 SS-OTD에 관한 연구)

  • Yoon, Hyun-Goo;Yook, Jong-Gwan;Park, Han-Kyu
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.1B / pp.56-65 / 2002
  • This paper proposes a novel transmit diversity technique, namely symbol-split orthogonal transmit diversity (SS-OTD). In this technique, full path diversity and temporal diversity are achieved by combining the orthogonal transmit diversity (OTD) technique with the symbol-splitting method proposed by Meyer. Its performance is simulated for fundamental channels associated with the forward link of the IS-2000 system and then compared with that of OTD and space-time spreading (STS). The proposed method offers a 0.5-7.7 dB performance improvement over OTD under various simulation environments, and its performance is similar to that of STS. Moreover, compared with STS, the peak-to-average power ratio (PAR) of the transmitted signals in SS-OTD is reduced by up to 1.35 dB, which decreases the complexity of base station RF devices such as power amplifiers. Thus, SS-OTD is comparable to STS in performance and superior to STS in the cost and efficiency of base station RF devices.
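
The toy sketch below only illustrates the general symbol-splitting idea, with each symbol divided into half-duration sub-symbols sent on alternating antennas so that every symbol sees both spatial paths; it is a heavily simplified assumption, not the paper's SS-OTD mapping, spreading, or IS-2000 channel structure.

```python
# Toy illustration of splitting each symbol across two transmit antennas.
import numpy as np

def symbol_split_two_antennas(symbols):
    """symbols: 1-D complex array; returns a (2, 2*N) array of per-antenna sub-symbols."""
    n = len(symbols)
    tx = np.zeros((2, 2 * n), dtype=complex)
    for i, s in enumerate(symbols):
        tx[0, 2 * i] = s          # first half-duration sub-symbol on antenna 0
        tx[1, 2 * i + 1] = s      # second half-duration sub-symbol on antenna 1
    return tx

symbols = np.array([1 + 1j, -1 + 1j, 1 - 1j])
print(symbol_split_two_antennas(symbols))
```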

Optimal Time Scheduling Algorithm for Decoupled RF Energy Harvesting Networks (비결합 무선 에너지 하비스팅 네트워크를 위한 최적 시간 스케줄링 알고리즘)

  • Jung, Jun Hee;Hwang, Yu Min;Kim, Jin Young
    • Journal of Satellite, Information and Communications / v.11 no.2 / pp.55-59 / 2016
  • Conventional RF energy harvesting systems harvest energy and decode information from the same source, such as a hybrid AP (H-AP). However, harvesting efficiency depends heavily on the distance between the users and the H-AP. Therefore, in this paper, we propose a transmission model for RF harvesting in which the information source and the power source are separated, called a decoupled RF energy harvesting network. The main purpose of this paper is to maximize energy efficiency under constraints on the transmit power of the H-AP and the power beacon (PB), the minimum quality of service, and the minimum harvested power of each user. To evaluate the proposed model's performance, we propose optimal time scheduling algorithms for energy efficiency (EE) maximization based on Lagrangian dual decomposition theory, which locally maximizes the EE by obtaining suboptimal values of three variables: the transmit power of the H-AP, the transmit power of the PB, and the frame splitting factor. Experimental results show that the proposed energy-efficient algorithms converge within a few iterations and greatly improve the EE compared to baseline schemes.
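
As a rough illustration of the optimization problem (not the paper's Lagrangian dual decomposition algorithm), the sketch below grid-searches the H-AP transmit power, the PB transmit power, and the frame splitting factor that maximize energy efficiency under simple rate and harvested-energy constraints. All channel gains, efficiencies, and constraint values are invented for illustration.

```python
# Toy grid search over (tau, p_ap, p_pb) maximizing EE = rate / consumed power.
import numpy as np

g_info, g_energy, noise, eta, p_circuit = 0.8, 0.5, 1e-3, 0.6, 0.1   # assumed constants
r_min, e_min = 1.0, 0.05          # assumed minimum rate (QoS) and minimum harvested energy

best = None
for tau in np.linspace(0.05, 0.95, 19):            # frame splitting factor (harvest fraction)
    for p_ap in np.linspace(0.1, 2.0, 20):         # H-AP transmit power
        for p_pb in np.linspace(0.1, 2.0, 20):     # power beacon transmit power
            harvested = eta * tau * p_pb * g_energy
            rate = (1 - tau) * np.log2(1 + p_ap * g_info / noise)
            if rate < r_min or harvested < e_min:  # constraint violated: skip
                continue
            ee = rate / (tau * p_pb + (1 - tau) * p_ap + p_circuit)
            if best is None or ee > best[0]:
                best = (ee, tau, p_ap, p_pb)

print("best EE %.3f at tau=%.2f, p_ap=%.2f, p_pb=%.2f" % best)
```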

The effects of limestone powder and fly ash as an addition on fresh, elastic, inelastic and strength properties of self-compacting concrete

  • Hilmioglu, Hayati;Sengul, Cengiz;Ozkul, M. Hulusi
    • Advances in concrete construction / v.14 no.2 / pp.93-102 / 2022
  • In this study, limestone powder (LS) and fly ash (FA) were used as powder materials in self-compacting concrete (SCC) in increasing quantities in addition to cement, so that the two powders commonly used in the production of SCC could be compared in the same study. Considering the reduction of the maximum aggregate size in SCC, 10 mm or 16 mm was selected as the coarse aggregate size. The properties of fresh concrete were determined by slump flow (including T500 time), V-funnel and J-ring experiments. The experimental results showed that as the amount of both LS and FA increased, the slump flow also increased. The increase in powder material had a negative effect on the V-funnel flow times, causing them to increase; however, the increase for the FA concretes was smaller than for the LS ones. The increase in powder content reduced the amount of blockage in the J-ring test for both aggregate sizes. For the hardened concrete properties, the compressive and splitting strengths as well as the modulus of elasticity were determined. Longitudinal and transverse deformations were measured by attaching a special frame to the cylindrical specimens, and the values of Poisson's ratio and the initiation and critical stresses were obtained. Despite having a similar W/C ratio, all SCCs exhibited higher compressive strength than NVC. Compressive strength increased with increasing powder content for both LS and FA; however, the increase for FA was higher than for LS due to the pozzolanic effect. SCC with a coarse aggregate size of 16 mm showed higher strength than that with 10 mm for both powders. Similarly, the modulus of elasticity increased with the amount of powder material. Inelastic properties, which are rarely reported in the literature for SCC, were determined by measuring the initial and critical stresses. Crack formation in SCC begins under lower stresses (corresponding to lower initial stresses) than in normal concretes, while the critical stresses indicate more brittle behavior by taking higher values.

Quantitative Damage Index of RC Columns with Non-seismic Details (비내진상세를 가지는 철근콘크리트 기둥의 정량적 손상도 평가 기준)

  • Kim, Kyung-Min;Oh, Sang-Hoon;Choi, Kwang-Yong;Lee, Jung-Han;Park, Byung-Cheol
    • Journal of the Korea institute for structural maintenance and inspection / v.17 no.6 / pp.11-20 / 2013
  • In this paper, quantitative damage indices for reinforced concrete (RC) columns with non-seismic details are presented. They are necessary for carrying out post-earthquake safety evaluations of RC buildings under five stories without seismic details. A static cyclic test of an RC frame sub-assemblage that was one-span and actual-sized was first conducted. The specimen collapsed by shear failure after flexural yielding of a column; many cracks occurred on the surfaces of the columns and beam-column joints, and cover concrete splitting occurred at the bottom of the columns. The damage levels of such columns with non-seismic details were classified into five levels based on the load-displacement relationship from the test result. The residual story drift ratios and crack widths were then adopted as the quantitative indices to evaluate the damage limit states, because those values are comparatively easy to measure right after an earthquake. The highest of the residual story drift ratios under a similar maximum story drift ratio determined the residual story drift ratio of each damage limit state. On the other hand, the lowest and average values of the residual shear and flexural crack widths, respectively, under a similar maximum story drift ratio determined the residual shear and flexural crack widths of each damage limit state. These values for each damage limit state turned out to be smaller than those given by international damage evaluation guidelines for seismically designed members under the same deformations.

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous City (U-City) is a smart or intelligent city that satisfies human beings' desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE or IoT), and it includes a large number of video cameras that are networked together. The networked video cameras support many U-City services as one of the main input sources together with sensors, and they constantly generate a huge amount of video information, real big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. It is also often required that the accumulated video data be analyzed to detect an event or find a figure among them, which demands a lot of computational power and usually takes a long time. Research efforts to reduce the processing time of big video data can currently be found, and cloud computing can be a good solution to this matter. There are many cloud computing methodologies that can be used to address it; MapReduce is an interesting and attractive one that has many advantages and is gaining popularity in many areas. Video cameras evolve day by day, so their resolution improves sharply, which leads to exponential growth of the data produced by the networked video cameras; we are coping with real big data when we have to deal with video image data produced by high-quality cameras. Video surveillance systems were not very useful until cloud computing appeared, but they are now widely deployed in U-Cities thanks to such methodologies. Video data are unstructured, so it is not easy to find good research results on analyzing them with MapReduce. This paper presents an analysis system for video surveillance, which is a cloud-computing-based video data management system. It is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the "streaming IN" component. The "video monitor" for the video images consists of the "video translator" and the "protocol manager". The "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives the video data from the networked video cameras and delivers it to the "storage client"; it also manages the network bottleneck to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component and stores it in the storage; it also helps other components access the storage. The "video monitor" component transfers the video data by smooth streaming and manages the protocol. The "video translator" sub-component enables users to manage the resolution, the codec, and the frame rate of the video image. The "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud computing storage; Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We suggest our own methodology for analyzing the video images using MapReduce in this paper: the workflow of the video analysis is presented and explained in detail. The performance evaluation was carried out by experiment, and we found that our proposed system worked well.
The performance evaluation results are presented in this paper with analysis. With our cluster system, we used compressed 1920×1080 (FHD) resolution video data, the H.264 codec, and HDFS as the video storage. We measured the processing time according to the number of frames per mapper. Tracing the optimal splitting size of the input data and the processing time according to the number of nodes, we found that the system performance scaled linearly.
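
As an illustration of how a per-frame MapReduce job over video data could be organized (our assumption, not the paper's actual workflow or code), the sketch below is a Hadoop-Streaming-style mapper and reducer: the mapper reads "camera_id<TAB>frame_ref" records, emits a count for frames where a placeholder detector fires, and the reducer sums the counts per camera.

```python
# Hadoop-Streaming-style sketch of a per-frame analysis job (illustrative only).
import sys

def analyze_frame(frame_ref):
    # Placeholder detector; a real job would decode the H.264 frame and run analysis.
    return frame_ref.endswith("_event.jpg")

def run_mapper():
    for line in sys.stdin:                                  # one record per frame
        camera_id, frame_ref = line.rstrip("\n").split("\t", 1)
        if analyze_frame(frame_ref):
            print(f"{camera_id}\t1")                        # emit key/value per detection

def run_reducer():
    counts = {}
    for line in sys.stdin:                                  # mapper output, grouped by key
        camera_id, value = line.rstrip("\n").split("\t")
        counts[camera_id] = counts.get(camera_id, 0) + int(value)
    for camera_id, total in sorted(counts.items()):
        print(f"{camera_id}\t{total}")

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "map":
        run_mapper()
    else:
        run_reducer()
```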