• Title/Abstract/Keyword: Two-dimensional order

Analysis of the Geological Structure of the Hwasan Caldera Using Potential Data (포텐셜 자료해석을 통한 화산칼데라 구조 해석)

  • Park, Gye-Soon;Yoo, Hee-Young;Yang, Jun-Mo;Lee, Heui-Soon;Kwon, Byung-Doo;Eom, Joo-Young;Kim, Dong-O;Park, Chan-Hong
    • Journal of the Korean earth science society / v.29 no.1 / pp.1-12 / 2008
  • A geophysical mapping was performed for the Hwasan caldera, located in the Euisung Sub-basin in the southeastern part of the Korean Peninsula. To overcome the limitations of previous studies, remote sensing techniques were used and dense potential data were obtained and analyzed. First, we analyzed geological lineaments for the target area using a geological map, digital elevation model (DEM) data, and satellite imagery. The results were largely consistent with previous studies and showed that the N-S and NW-SE directions are the most dominant in the target area. Second, based on the lineament analysis, highly dense gravity data were acquired in the Euisung Sub-basin and an integrated interpretation incorporating airborne magnetic data was made to investigate the regional structure of the target area. Power spectrum analysis of the acquired potential data revealed that the subsurface of the Euisung Sub-basin has two density discontinuities, at about 1 km and 3-5 km depth. The 1 km discontinuity is interpreted as the depth of pyroclastic sedimentary rocks or igneous rocks intruded at the ring vent of the Hwasan caldera, while the 3-5 km discontinuity seems to be associated with the depth of the basin basement. In addition, a three-dimensional gravity inversion for the whole Euisung Sub-basin was carried out, and the inversion results indicated the following: 1) the Cretaceous Palgongsan granite and Bulguksa intrusive rocks, located in the southeastern and northeastern parts of the Euisung Sub-basin, show two major low-density anomalies; 2) the pyroclastic rocks around the Hwasan caldera also have lower density than those of neighboring regions and extend to 1.5 km depth. However, the poor vertical resolution of potential-field surveys makes it difficult to accurately delineate the detailed structure of a caldera, which generally develops vertically. To overcome this limitation, an integrated analysis was carried out using magnetotelluric data from the corresponding area together with the potential data, and a more reasonable geologic structure was obtained.
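
To make the depth reading concrete: in Spector-Grant style spectral analysis, the log of the radially averaged power spectrum decays linearly with wavenumber, and the slope of each linear segment gives the mean depth of an ensemble of sources. A minimal sketch follows (our illustration, not the paper's code); the grid spacing and the wavenumber band chosen for each fit are assumptions.

    import numpy as np

    def radial_power_spectrum(grid, dx_km):
        """Radially averaged log-power vs. wavenumber (cycles/km) for a square grid."""
        n = grid.shape[0]
        spec = np.fft.fftshift(np.fft.fft2(grid - grid.mean()))
        power = np.abs(spec) ** 2
        freqs = np.fft.fftshift(np.fft.fftfreq(n, d=dx_km))
        kx, ky = np.meshgrid(freqs, freqs)
        kr = np.hypot(kx, ky).ravel()
        edges = np.linspace(0.0, kr.max(), n // 2)
        bin_id = np.digitize(kr, edges)
        logp = np.array([np.log(power.ravel()[bin_id == i].mean())
                         if np.any(bin_id == i) else np.nan
                         for i in range(1, len(edges))])
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, logp

    def depth_from_spectrum(k, logp, kmin, kmax):
        """Fit ln P = a + b*k over [kmin, kmax]; mean source depth is about -b/(4*pi) km."""
        mask = (k >= kmin) & (k <= kmax) & ~np.isnan(logp)
        slope, _ = np.polyfit(k[mask], logp[mask], 1)
        return -slope / (4.0 * np.pi)

Fitting the low-wavenumber segment would recover the deeper interface (here the 3-5 km basement) and a higher-wavenumber segment the shallower ~1 km discontinuity.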

Tissue Engineered Cartilage Formation on Various PLGA Scaffolds (PLGA 종류와 담체의 형성 방법에 따른 인간의 조직공학적 연골형성)

  • 김유미;임종옥;정호윤;박태인;백운이
    • Journal of Biomedical Engineering Research / v.23 no.2 / pp.147-153 / 2002
  • The purpose of this study was to evaluate the effect of different types of poly(lactic-co-glycolic acid) (PLGA) scaffolds on the formation of human auricular and septal cartilage. All of the scaffolds were formed in a tubular shape for potential application as an artificial trachea or esophagus, with either 110,000 g/mol PLGA, 220,000 g/mol PLGA, or a combination of both. To maintain the tubular shape in vivo, two methods were used. One was inserting a polyethylene tube at the center of scaffolds made of 110,000 g/mol PLGA. The other involved combining the two different molecular weight PLGAs: the inner surface of a tubular scaffold made with 110,000 g/mol PLGA was coated with 220,000 g/mol PLGA to give more mechanical rigidity. Elastic cartilage was taken from the ear of a patient aged under 20 years, and hyaline cartilage was taken from the nasal septum. The chondrocytes were then isolated. After the second passage, the chondrocytes were seeded on the PLGA scaffolds, followed by in vitro culture for one week. The cell-PLGA scaffold complexes were implanted subcutaneously on the backs of nude mice for 8 weeks. The tissue-engineered cartilages were separated from the nude mice and examined histologically after staining with Hematoxylin-Eosin. The morphology of the scaffolds was examined by scanning electron microscopy. The pores were well formed and uniformly distributed in the various PLGA scaffolds. After 8 weeks in vivo, cartilage was well formed with 110,000 g/mol PLGA, but the lumen had collapsed. In contrast, a minimal amount of neocartilage was formed with 220,000 g/mol PLGA, while the architecture of the scaffold and the lumen were well preserved. Elastic cartilage formed more neocartilage than hyaline cartilage. Hyaline and elastic neocartilage were well formed on 110,000 g/mol PLGA with the polyethylene tube, exhibiting mature chondrocytes and preservation of the tubular shape. It was found that 110,000 g/mol PLGA was more appropriate for cartilage formation, but the higher molecular weight polymer was necessary to maintain the three-dimensional shape of the scaffold.

A Pilot Study for the Remote Monitoring of IMRT Using a Head and Neck Phantom (원격 품질 보증 시스템을 사용한 세기변조 방사선치료의 예비 모니터링 결과)

  • Han, Young-Yih;Shin, Eun-Hyuk;Lim, Chun-Il;Kang, Se-Kwon;Park, Sung-Ho;Lah, Jeong-Eun;Suh, Tae-Suk;Yoon, Myong-Geun;Lee, Se-Byeong;Ju, Sang-Gyu;Ahn, Yong-Chan
    • Radiation Oncology Journal / v.25 no.4 / pp.249-260 / 2007
  • Purpose: In order to enhance the quality of IMRT as employed in Korea, we developed a remote monitoring system. The feasibility of the system was evaluated in a pilot study. Materials and Methods: The remote monitoring system consisted of a head and neck phantom and a user manual. The phantom contains a target and three OARs (organs at risk) that can be detected on CT images. TLD capsules were inserted at the centers of the target and the OARs. Two film slits for Gafchromic EBT film were located on the axial and sagittal planes. The user manual contained an IMRT planning guide and instructions for the IMRT planning and delivery process. After the manual and phantom were sent to four institutions, IMRT was planned and delivered. Predicted doses were compared with measured doses. The dose distribution along the two straight lines that intersect at the center of the axial film was measured and compared with the profiles predicted by the plan. Results: The measurements at the target agreed with the predicted dose within a 3% deviation. Doses at the OARs representing the thyroid glands showed larger deviations (minimum 3.3%, maximum 19.8%). The deviation at the OAR representing the spinal cord was 0.7~1.4%. The percentage of dose points showing more than a 5% deviation along the lines was 7~27% and 7~14% for the horizontal and vertical lines, respectively. Conclusion: Remote monitoring of IMRT using the developed system was feasible. With remote monitoring, the deviation at the target is expected to be small, while the deviation at the OARs can be very large. Therefore, a method that can investigate the cause of a large deviation needs to be developed. In addition, a more clinically relevant measure for the two-dimensional dose comparison, together with pass/fail criteria, needs to be further developed.
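
The pass/fail arithmetic in the Results is simple enough to state as code. The sketch below (our illustration with hypothetical dose arrays, not the study's QA software) computes the signed percent deviation used for the TLD points and the fraction of profile points exceeding the 5% criterion.

    import numpy as np

    def percent_deviation(measured, predicted):
        """Signed percent deviation of measured dose from the planned dose."""
        measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
        return 100.0 * (measured - predicted) / predicted

    def fail_fraction(measured, predicted, tol_pct=5.0):
        """Fraction of profile points whose |deviation| exceeds tol_pct percent."""
        dev = np.abs(percent_deviation(measured, predicted))
        return float(np.mean(dev > tol_pct))

    # Target TLD check (3% tolerance), with hypothetical inputs:
    # ok = abs(percent_deviation(tld_dose, planned_dose)) <= 3.0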

Transport Paths of Surface Sediment on the Tidal Flat of Garolim Bay, West Coast of Korea (황해 가로림만 조간대 표층퇴적물의 이동경로)

  • Shin, Dong-Hyeok;Yi, Hi-Il;Han, Sang-Joon;Oh, Jae-Kyung;Kwon, Su-Jae
    • The Sea: JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.3 no.2 / pp.59-70 / 1998
  • A two-dimensional trend-vector model of sediment transport was tested for the first time on the tidal flat of Garolim Bay, on the mid-western coast of the Korean Peninsula. Three major parameters of the surface sediment, i.e., mean grain size, sorting, and skewness, were used to define the best-fitting transport trend vector on the sand ridge and muddy sand flat. These trend vectors were compared with the real transport directions determined from morphology, field observations, and bedforms. Fifteen possible cases of trend vectors were calculated from the total sediments. To isolate the role of coarse sediments, trend vectors from sediments coarser than 4.5 φ (sand size) were calculated separately from those of the total sediments. Compared with the real directions, the best-fitting transport-vector model is "case M" for the coarse sediments, which combines the trend vectors of two cases: (1) finer, better sorted, and more negatively skewed, and (2) coarser, better sorted, and more positively skewed. This indicates that sand-size grains are moved by simpler hydrodynamic processes than the total sediments. Transported sediment grains are better sorted than the source sediment grains, indicating that consistent hydrodynamic energy can make sediment grains better sorted regardless of the complicated mechanisms of sediment transport. Consequently, both the transport-vector model and the real transport directions show that the sediment sources are located outside the bay (the offshore Yellow Sea) and at the baymouth. These source sediments are transported through the East Main Tidal Channel adjacent to the baymouth. Some are transported from the subtidal zone to the upper tidal flat, while others are transported farther south, reaching the south tidal channel in the study area. Coarse sediment grains on the sand ridge also originate from the baymouth and are transported through the subtidal zone to the south tidal channel. These coarse sediments move to the northeast but cannot pass the small north tidal channel. It is interpreted that a large amount of the coarse sediment is returned to the outside of the bay (Yellow Sea) through the baymouth during the ebb tide. The distribution of muddy sand in the northeastern part of the study area may result from the mixing of two sediment transport mechanisms, i.e., suspension and bedload processes. The landward movement of the sand ridge and the formation of the north tidal channel result either from the supply of coarse sediments originating from the baymouth and outside of the bay (subaqueous sand ridges including Jang-An-Tae) or from the recent relative sea-level rise.
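
For readers unfamiliar with trend-vector analysis: in the Gao-Collins style approach this abstract builds on, a unit vector is drawn from a site to each neighboring site whose grain-size statistics change in a characteristic way, and the vectors are summed per site. A minimal sketch of the "case M" combination described above follows; the coordinate arrays and the neighbor radius are our assumptions.

    import numpy as np

    def case_m_trend_vectors(xy, mean_phi, sorting, skew, max_dist):
        """Sum unit vectors A->B for neighbors B that are (1) finer, better
        sorted and more negatively skewed, or (2) coarser, better sorted and
        more positively skewed than A (larger phi = finer grain)."""
        n = len(xy)
        vectors = np.zeros((n, 2))
        for a in range(n):
            for b in range(n):
                d = np.hypot(*(xy[b] - xy[a]))
                if a == b or d > max_dist:
                    continue
                finer = mean_phi[b] > mean_phi[a]
                better_sorted = sorting[b] < sorting[a]   # smaller value = better sorted
                more_negative_skew = skew[b] < skew[a]
                case1 = finer and better_sorted and more_negative_skew
                case2 = (not finer) and better_sorted and (not more_negative_skew)
                if case1 or case2:
                    vectors[a] += (xy[b] - xy[a]) / d     # unit vector toward B
        return vectors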

Crystal Structure of Dehydrated Partially Cobalt(II)-Exchanged Zeolite X, $Co_{41}Na_{10}-X$ (부분적으로 $Co^{2+}$ 이온으로 치환된 제올라이트 X, $Co_{41}Na_{10}-X$를 탈수한 결정구조)

  • Jang, Se-Bok;Jeong, Mi-Suk;Han, Young-Wook;Kim, Yang
    • Korean Journal of Crystallography / v.6 no.2 / pp.125-133 / 1995
  • The crystal structure of dehydrated, partially Co(II)-exchanged zeolite X, of unit-cell stoichiometry Co41Na10-X (Co41Na10Si100Al92O384), has been determined from three-dimensional X-ray diffraction data gathered by counter methods. The structure was solved and refined in the cubic space group Fd3 (a = 24.544(1) Å at 21(1)℃). The crystal was prepared by ion exchange in a flowing stream using a solution 0.025 M each in Co(NO3)2 and Co(O2CCH3)2, then dehydrated at 380℃ and 2×10⁻⁶ Torr for two days. The structure was refined to the final error indices R1 = 0.059 and R2 = 0.046 with 211 reflections for which I > 3σ(I). The Co2+ and Na+ ions are located at four different crystallographic sites. Co2+ ions occupy two sites with high occupancies: sixteen Co2+ ions lie at the centers of the double six-rings (site I; Co-O = 2.21(1) Å, O-Co-O = 90.0(4)°), and twenty-five Co2+ ions lie at site II in the supercage, recessed 0.09 Å into the supercage from their three-oxygen plane (Co-O = 2.05(1) Å, O-Co-O = 119.8(7)°). Na+ ions are located at two different sites: seven Na+ ions at site II in the supercage (Na-O = 2.29(1) Å, O-Na-O = 102(1)°), recessed 1.02 Å into the supercage from the three-oxygen plane, and three Na+ ions statistically distributed over site III, a 48-fold equipoint in the supercages on twofold axes (Na-O = 2.59(10) Å, O-Na-O = 69.0(3)°). It appears that Co2+ ions prefer sites I and II, in that order, and that Na+ ions occupy the remaining sites, II and III.
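
For reference, the error indices quoted above (R1 = 0.059, R2 = 0.046) are conventional crystallographic residuals comparing observed and calculated structure factors. A sketch of their usual definitions follows (our reconstruction with hypothetical Fo, Fc, and weight arrays; the paper's exact weighting scheme is not stated in the abstract).

    import numpy as np

    def r1(fo, fc):
        """Unweighted residual: R1 = sum(|Fo - Fc|) / sum(Fo)."""
        fo, fc = np.abs(fo), np.abs(fc)
        return np.sum(np.abs(fo - fc)) / np.sum(fo)

    def r2(fo, fc, w):
        """Weighted residual: R2 = sqrt(sum(w*(Fo - Fc)^2) / sum(w*Fo^2))."""
        fo, fc = np.abs(fo), np.abs(fc)
        return np.sqrt(np.sum(w * (fo - fc) ** 2) / np.sum(w * fo ** 2))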

Evaluation of the Positional Uncertainty of a Liver Tumor using 4-Dimensional Computed Tomography and Gated Orthogonal Kilovolt Setup Images (사차원전산화단층촬영과 호흡연동 직각 Kilovolt 준비 영상을 이용한 간 종양의 움직임 분석)

  • Ju, Sang-Gyu;Hong, Chae-Seon;Park, Hee-Chul;Ahn, Jong-Ho;Shin, Eun-Hyuk;Shin, Jung-Suk;Kim, Jin-Sung;Han, Young-Yih;Lim, Do-Hoon;Choi, Doo-Ho
    • Radiation Oncology Journal / v.28 no.3 / pp.155-165 / 2010
  • Purpose: To evaluate the positional uncertainty of internal organs during radiation therapy for liver cancer, we measured inter- and intra-fractional variation of the tumor position and the tidal amplitude using 4-dimensional computed tomography (4DCT) images and gated orthogonal kilovolt (kV) setup images taken at every treatment with the on-board imaging (OBI) and real-time position management (RPM) systems. Materials and Methods: Twenty consecutive patients who underwent 3-dimensional (3D) conformal radiation therapy for liver cancer participated in this study. All patients received a 4DCT simulation with an RT16 scanner and an RPM system. Lipiodol, which had accumulated near the target volume after transarterial chemoembolization, or the diaphragm was chosen as a surrogate for evaluating the positional difference of internal organs. Two reference orthogonal (anterior and lateral) digitally reconstructed radiograph (DRR) images were generated using the CT image sets at 0% and 50% of the respiratory phase. The maximum tidal amplitude of the surrogate was measured from the 3D conformal treatment plan. After setting the patient up with laser markings on the skin, orthogonal gated setup images at 50% of the respiratory phase were acquired at each treatment session with OBI and registered on the reference DRR images by setting each beam center. Online inter-fractional variation was determined with the surrogate. After adjusting for the patient setup error, orthogonal setup images at 0% and 50% of the respiratory phase were obtained and the tidal amplitude of the surrogate was measured. The measured tidal amplitude was compared with the 4DCT data. To evaluate intra-fractional variation, an orthogonal gated setup image at 50% of the respiratory phase was acquired immediately after treatment and compared with the same image taken just before treatment. In addition, a statistical analysis for quantitative evaluation was performed. Results: Medians of inter-fractional variation for the twenty patients were 0.00 cm (range, -0.50 to 0.90 cm), 0.00 cm (range, -2.40 to 1.60 cm), and 0.00 cm (range, -1.10 to 0.50 cm) in the X (transaxial), Y (superior-inferior), and Z (anterior-posterior) directions, respectively. Significant inter-fractional variations over 0.5 cm were observed in four patients. In addition, the median tidal amplitude differences between the 4DCT and the gated orthogonal setup images were -0.05 cm (range, -0.83 to 0.60 cm), -0.15 cm (range, -2.58 to 1.18 cm), and -0.02 cm (range, -1.37 to 0.59 cm) in the X, Y, and Z directions, respectively. Large differences of over 1 cm were detected in 3 patients in the Y direction, while differences of more than 0.5 but less than 1 cm were observed in 5 patients in the Y and Z directions. Median intra-fractional variation was 0.00 cm (range, -0.30 to 0.40 cm), -0.03 cm (range, -1.14 to 0.50 cm), and 0.05 cm (range, -0.30 to 0.50 cm) in the X, Y, and Z directions, respectively. Significant intra-fractional variation of over 1 cm was observed in 2 patients in the Y direction. Conclusion: Gated setup images provided clear image quality for detecting organ motion without motion artifacts. Significant intra- and inter-fractional variation and tidal amplitude differences between 4DCT and gated setup images were detected in some patients during the radiation treatment period, and should therefore be considered when setting the target margin. Monitoring of positional uncertainty and an adaptive feedback system can enhance the accuracy of treatment.
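
The per-axis medians and ranges reported above are straightforward to reproduce; a small sketch (hypothetical shift data, not the study's records) is:

    import numpy as np

    def summarize_shifts(shifts_cm):
        """shifts_cm: (n_sessions, 3) array of X/Y/Z shifts in cm."""
        s = np.asarray(shifts_cm, float)
        axes = ("X (transaxial)", "Y (superior-inferior)", "Z (anterior-posterior)")
        for i, name in enumerate(axes):
            med, lo, hi = np.median(s[:, i]), s[:, i].min(), s[:, i].max()
            print(f"{name}: median {med:+.2f} cm (range {lo:+.2f} to {hi:+.2f} cm)")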

Evaluation of Varietal Difference and Environmental Variation for Some Characters Related to Source and Sink in the Rice Plants (벼의 Source 및 Sink형질의 품종간차이와 환경변이의 평가)

  • Choi, Hae-Chun;Kwon, Yong-Woong
    • KOREAN JOURNAL OF CROP SCIENCE / v.30 no.4 / pp.460-470 / 1985
  • Experiments were carried out to evaluate the standard specific gravity for determining potential kernel size and to determine an effective sampling method by analyzing intra- and inter-plant variation in some source and sink characters, using eleven semi-dwarf indica and three japonica cultivars, including four semi-dwarf indica near-isogenic lines. Additional experiments were conducted to assess yearly variation and variety × year interaction effects for ten characters related to source and sink, and to characterize varietal differences in pre- and post-heading self-competition, employing three parental varieties and their F5 progenies in 1982 and 1983. It is desirable to determine the potential kernel size as the average kernel weight of rice grains with specific gravity above 1.15. There were significant differences in leaf area per tiller, spikelets per panicle, and sink capacity per panicle among vigorous, intermediate, and inferior tillers classified by differentiation order and vigorousness. Although no significant differences in grain-fill ratio, ratio of perfectly ripened grain, potential kernel size, or sink/source ratio were found between vigorous and intermediate tillers, both differed considerably from the inferior tillers. The coefficients of variation within each tiller group for the source and sink characters increased in the order vigorous tillers < intermediate tillers < inferior tillers, and the average heritability of all characters, evaluated as the ratio of varietal variance to total variance, increased in the order inferior tillers < intermediate tillers < superior tillers. Therefore, it is desirable to sample the vigorous tillers to represent the varietal differences in these traits. The 1982-1983 year variation of the three parental cultivars was significant for all traits except leaf area/tiller, panicles/hill, leaf area index, and rough rice yield. The characters showing a highly significant variety × year interaction variance were growth duration from transplanting to heading, leaf area/tiller, sink/source ratio, sink capacity/panicle, and grain yield. The generalized yearly response of the three parental varieties (Suweon 264, Raegyeong, IR1317-70-1) and their F5 progenies on the 1st and 2nd principal components, extracted from the ten source and sink characters, generally exhibited a reduction in both source and sink. However, there were diverse variety × year interactions, with some progenies showing a reaction similar to their parents and others showing intermediate or recombinational yearly responses, with little or considerable yearly movement on the plane of the two upper principal components between 1982 and 1983. Sink characters showing a highly significant border effect were grain-fill ratio, spikelets per panicle, and sink capacity per panicle; the latter two also showed a significant variety × border effect interaction. Self-competition, characterized by the relative weakness of the inside plants' sink characters compared with the border plants', was more severe during the reproductive stage before heading than during the maturing stage. Although a larger sink capacity per panicle generally indicated more severe self-competition, some lines (like Suweon 264) showed severe self-competition with small sink capacity, while a few others showed mild self-competition despite a large sink capacity per panicle.
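
The heritability referred to above, whose symbols were dropped in this extraction, is in its usual broad-sense form the ratio of varietal (genotypic) variance to total variance. A plausible reconstruction, assuming the standard definition, is $h^2 = \sigma_v^2 / (\sigma_v^2 + \sigma_e^2)$, where $\sigma_v^2$ is the varietal variance, $\sigma_e^2$ the error variance, and their sum the total variance.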

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a decisive victory against Lee Sedol. Many people thought that machines could not beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core artificial intelligence technique used in the AlphaGo algorithm, drew great interest. Deep learning is already being applied to many problems. It shows especially good performance in image recognition, and in high-dimensional data domains such as voice, image, and natural language, where it was difficult to obtain good performance using existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in this paper are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, the traditional artificial neural network model. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate the models, showing how well they classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but since business data fields are usually independent, the distance between fields does not matter. In this experiment, we set the filter size of the CNN to the number of fields so that the whole characteristics of the data are learned at once, and added a hidden layer to make decisions based on the additional features. For the model with two LSTM layers, the input direction of the second layer is reversed with respect to the first layer in order to reduce the influence of each field's position. For the dropout technique, we set neurons to drop with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using the dropout technique, and the next best was the MLP model with two hidden layers using the dropout technique. From the experiments we obtained several findings. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because CNNs performed well in binary classification problems, to which they have rarely been applied, as well as in the fields where their effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because the training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
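
A minimal sketch of the best-performing configuration described above (a CNN whose kernel spans all input fields, one extra hidden layer, and dropout with p = 0.5) is given below in Keras. The field count, layer widths, and training settings are illustrative assumptions, not the paper's exact setup; X_train, y_train, X_test, and y_test are assumed to be prepared elsewhere.

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers
    from sklearn.metrics import f1_score

    n_fields = 16                                  # hypothetical number of input fields

    model = keras.Sequential([
        layers.Input(shape=(n_fields, 1)),         # each record treated as a 1-D signal
        layers.Conv1D(32, kernel_size=n_fields,    # one filter spans all fields at once
                      activation="relu"),
        layers.Flatten(),
        layers.Dense(32, activation="relu"),       # extra hidden decision layer
        layers.Dropout(0.5),                       # neurons dropped with probability 0.5
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # model.fit(X_train[..., None], y_train, epochs=20, batch_size=64)
    # pred = (model.predict(X_test[..., None]) > 0.5).astype(int).ravel()
    # print("F1:", f1_score(y_test, pred))         # F1 on the class of interest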

Hierarchical Overlapping Clustering to Detect Complex Concepts (중복을 허용한 계층적 클러스터링에 의한 복합 개념 탐지 방법)

  • Hong, Su-Jeong;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.111-125 / 2011
  • Clustering is the process of grouping similar or related documents into a cluster and assigning a meaningful concept to it. Clustering thereby facilitates fast and accurate search for relevant documents by narrowing the search to the collection of documents belonging to related clusters. Effective clustering requires techniques for identifying similar documents and grouping them into a cluster, and for discovering the concept most relevant to the cluster. One problem that often appears in this context is the detection of a complex concept that overlaps several simple concepts at the same hierarchical level. Previous clustering methods were unable to identify and represent a complex concept belonging to several different clusters at the same level of the concept hierarchy, and could not validate the semantic hierarchical relationships between a complex concept and each of the simple concepts. To solve these problems, this paper proposes a new clustering method that identifies and represents complex concepts efficiently. We developed the Hierarchical Overlapping Clustering (HOC) algorithm, which modifies the traditional agglomerative hierarchical clustering algorithm to allow overlapping clusters at the same level of the concept hierarchy. The HOC algorithm represents the clustering result not as a tree but as a lattice in order to detect complex concepts. We developed a system that employs the HOC algorithm for complex concept detection. The system operates in three phases: 1) preprocessing of documents, 2) clustering using the HOC algorithm, and 3) validation of the semantic hierarchical relationships among the concepts in the lattice obtained from clustering. The preprocessing phase represents each document as an x-y coordinate in a 2-dimensional space based on the weights of the terms appearing in it. First, stopword removal and stemming are applied to extract index terms. Each index term is then assigned a TF-IDF weight, and the x-y coordinate of each document is determined by combining the TF-IDF values of its terms. The clustering phase uses the HOC algorithm, in which the similarity between documents is calculated by the Euclidean distance. Initially, a cluster is generated for each document by grouping the documents closest to it. Then the distance between any two clusters is measured, and the closest clusters are merged into a new cluster. This process is repeated until the root cluster is generated. In the validation phase, feature selection is applied to validate the appropriateness of the cluster concepts built by the HOC algorithm, checking that they have meaningful hierarchical relationships. Feature selection extracts key features from a document by identifying and weighting its important and representative terms. To select key features correctly, a method is needed to determine how much each term contributes to the class of the document. Among several such methods, this paper adopted the $\chi^2$ statistic, which measures the degree of dependency of a term t on a class c and represents the relationship between t and c as a numerical value. To demonstrate the effectiveness of the HOC algorithm, a series of performance evaluations was carried out using the well-known Reuters-21578 news collection. The results showed that the HOC algorithm contributes greatly to detecting and producing complex concepts by generating the concept hierarchy as a lattice structure.
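
The $\chi^2$ term-class dependency used in the validation phase has a standard closed form over the 2x2 contingency counts of a term t and a class c. A minimal sketch follows; the variable names are ours, not the paper's.

    def chi_square(A, B, C, D):
        """A: docs in c containing t, B: docs outside c containing t,
        C: docs in c without t,    D: docs outside c without t."""
        N = A + B + C + D
        denom = (A + C) * (B + D) * (A + B) * (C + D)
        return 0.0 if denom == 0 else N * (A * D - C * B) ** 2 / denom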

Dual Codec Based Joint Bit Rate Control Scheme for Terrestrial Stereoscopic 3DTV Broadcast (지상파 스테레오스코픽 3DTV 방송을 위한 이종 부호화기 기반 합동 비트율 제어 연구)

  • Chang, Yong-Jun;Kim, Mun-Churl
    • Journal of Broadcast Engineering / v.16 no.2 / pp.216-225 / 2011
  • Following the proliferation of three-dimensional video content and displays, many terrestrial broadcasters have been preparing for stereoscopic 3DTV service. In terrestrial stereoscopic broadcasting, it is difficult to code and transmit two video sequences at a quality as high as 2DTV within the limited bandwidth defined by existing digital TV standards such as ATSC. Therefore, a terrestrial 3DTV system with a heterogeneous video codec pair, in which the left and right images are coded with MPEG-2 and H.264/AVC respectively, is considered in order to achieve both a high-quality broadcasting service and compatibility with existing 2DTV viewers. Without significant changes to current terrestrial broadcasting systems, we propose a joint rate control scheme for stereoscopic 3DTV service based on this heterogeneous dual-codec system. The proposed joint rate control scheme applies to the MPEG-2 encoder the quadratic rate-quantization model adopted in H.264/AVC. The controller is then designed so that the sum of the left and right bitstreams meets the bandwidth requirement of the broadcasting standards while the sum of the image distortions is minimized by adjusting the quantization parameters obtained from the proposed optimization scheme. In addition, the optimization includes a constraint that keeps the quality difference between the left and right images around a desired level in order to mitigate negative effects on the human visual system. Experimental results demonstrate that the proposed bit rate control scheme outperforms independent rate control, where each video coding standard uses its own bit rate control algorithm, with a 2.02% increase in PSNR, a 77.6% decrease in the average absolute quality difference, and a 74.38% reduction in the variance of the quality difference.
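
For concreteness, the quadratic rate-quantization model mentioned above has the form R(Q) = c1*MAD/Q + c2*MAD/Q^2, so the quantization step meeting a target bit budget is a root of a quadratic in 1/Q. A minimal sketch follows (our illustration; the coefficients and MAD value are assumptions, and the paper's joint left/right optimization is not reproduced here).

    import math

    def q_from_target_bits(target_bits, mad, c1, c2):
        """Solve c2*MAD/Q^2 + c1*MAD/Q = target_bits for the quantization step Q."""
        # Quadratic in x = 1/Q: (c2*MAD)*x^2 + (c1*MAD)*x - target_bits = 0
        a, b, c = c2 * mad, c1 * mad, -float(target_bits)
        x = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)   # positive root
        return 1.0 / x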