• Title/Summary/Keyword: 블록효과 (block effect)

Search results: 923

Development of a PC Concrete Rainwater Storage Facility for Disaster Prevention and Water Resource Reuse (방재 및 수자원 재활용을 위한 PC콘크리트 빗물저류조의 개발)

  • Chang, Young-Cheol;Cho, Cheong-Hwi;Kim, Ok-Soo;Oh, Se-Eun;Lee, Jun-Gu
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2005.05b
    • /
    • pp.879-883
    • /
    • 2005
  • In Korea, urbanization of river basins has increased impervious surfaces, so that sudden rainwater runoff causes floods with heavy loss of life and property; water resource management from a disaster-prevention standpoint is therefore urgent. In addition, first-flush stormwater and combined sewer overflows cause many water-quality problems in rivers, lakes, and wetlands. To address these problems, an underground precast concrete (PC) rainwater storage facility allows the space above it to be used in various ways, such as parks, playgrounds, and parking lots, while providing disaster prevention and flood control. Because the PC concrete storage tank is assembled on site from PC concrete blocks rather than cast in place, the construction period is greatly shortened and the working and surrounding environments are improved. The tank also acts as a rainwater storage and infiltration facility for conserving and recharging groundwater, securing emergency water and detaining rainwater during summer floods to prevent disasters, so it serves as a multi-purpose facility. The buried storage tank was designed with reference to existing culvert design standards, incorporates Japanese seismic design criteria, and achieves excellent strength through high-strength concrete. Construction is simple and markedly shortens the construction period. Installing the rainwater storage tank has the following characteristics: 1. Designed as an underground storage facility, it enables effective land use. 2. The short construction period makes it economical. 3. It is a stable structure. 4. It can be built to fit the shape of the site. 5. It can serve detention basins from small to large scale. 6. It can perform both disaster prevention and rainwater utilization roles. 7. It is well suited to urban areas where impervious surfaces are increasing.

  • PDF

Automatic Text Categorization Using Passage-based Weight Function and Passage Type (문단 단위 가중치 함수와 문단 타입을 이용한 문서 범주화)

  • Joo, Won-Kyun;Kim, Jin-Suk;Choi, Ki-Seok
    • The KIPS Transactions:PartB
    • /
    • v.12B no.6 s.102
    • /
    • pp.703-714
    • /
    • 2005
  • Research in text categorization has been confined to whole-document-level classification, probably due to the lack of full-text test collections. However, the full-length documents available today in large quantities pose renewed interest in text classification. A document is usually written in an organized structure to present its main topic(s). This structure can be expressed as a sequence of sub-topic text blocks, or passages. To reflect the sub-topic structure of a document, we propose a new passage-level, or passage-based, text categorization model, which segments a test document into several passages, assigns categories to each passage, and merges the passage categories into document categories. Compared with traditional document-level categorization, two additional steps, passage splitting and category merging, are required in this model. Using four subsets of the Reuters text categorization test collection and a full-text test collection whose documents vary from tens to hundreds of kilobytes, we evaluated the proposed model, in particular the effectiveness of various passage types and the importance of passage location in category merging. Our results show that simple windows are best for all test collections tested in these experiments. We also found that passages contribute to the main topic(s) to different degrees depending on their location in the test document.
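The split/classify/merge pipeline described in the abstract above can be sketched roughly as follows. The window size, the toy keyword classifier, and the position-decayed merge are illustrative assumptions, not the authors' actual passage types or weight function.

```python
def split_into_windows(words, size=50):
    """Segment a token list into fixed-size, non-overlapping windows (passages)."""
    return [words[i:i + size] for i in range(0, len(words), size)]

def classify_passage(passage, keyword_sets):
    """Toy passage-level classifier: score each category by keyword hits."""
    return {cat: sum(w in kws for w in passage) for cat, kws in keyword_sets.items()}

def merge_passage_scores(passage_scores, decay=0.9):
    """Merge passage-level scores into document-level scores, weighting
    earlier passages slightly higher (a location-dependent merge)."""
    doc = {}
    for i, scores in enumerate(passage_scores):
        w = decay ** i
        for cat, s in scores.items():
            doc[cat] = doc.get(cat, 0.0) + w * s
    return doc

words = ("storm water storage tank flood " * 20 + "sewer overflow quality " * 10).split()
kw = {"hydrology": {"storm", "flood", "water"}, "sanitation": {"sewer", "overflow"}}
passages = split_into_windows(words, size=40)
scores = merge_passage_scores([classify_passage(p, kw) for p in passages])
best = max(scores, key=scores.get)
```

A real system would replace the keyword scorer with a trained classifier; the structure (segment, classify each passage, merge) stays the same.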

An Evaluation of a Dasymetric Surface Model for Spatial Disaggregation of Zonal Population data (구역단위 인구자료의 공간적 세분화를 위한 밀도 구분적 표면모델에 대한 평가)

  • Jun, Byong-Woon
    • Journal of the Korean association of regional geographers
    • /
    • v.12 no.5
    • /
    • pp.614-630
    • /
    • 2006
  • Improved estimates of populations at risk for quick and effective response to natural and man-made disasters require spatial disaggregation of zonal population data because of the spatial mismatch in areal units between census zones and impact zones. This paper implements a dasymetric surface model to disaggregate the population of a census block group into populations associated with each constituent pixel, and evaluates the performance of the surface-based spatial disaggregation model visually and statistically. The model employed geographic information systems (GIS) so that dasymetric interpolation could be guided by satellite-derived land use and land cover data as additional information about the geographic distribution of population. In the disaggregation, percent-cover-based empirical sampling and areal weighting techniques were used to objectively determine dasymetric weights for each grid cell. A dasymetric population surface for the Atlanta metropolitan area was generated by the surface-based spatial disaggregation model. The accuracy of the dasymetric population surface was tested against census counts using the root mean square error (RMSE) and an adjusted RMSE. The errors associated with each census tract and block group were also visualized with percent-error maps. Results indicate that the dasymetric population surface provides high-precision population estimates as well as the detailed spatial distribution of population within census block groups. The results also show that the population surface tends to overestimate or underestimate population both in rural and forested areas and in the urban core.

  • PDF
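The areal-weighting step in the dasymetric model above can be sketched as follows. The land-cover classes, the per-class weights, and the six-pixel block group are illustrative assumptions, not the paper's empirically sampled values.

```python
def dasymetric_disaggregate(block_population, pixel_classes, class_weights):
    """Distribute a zonal population over pixels using land-cover-based
    dasymetric weights (areal weighting). Per-class weights are assumptions."""
    raw = [class_weights.get(c, 0.0) for c in pixel_classes]
    total = sum(raw)
    if total == 0:
        return [0.0] * len(pixel_classes)
    return [block_population * w / total for w in raw]

def rmse(estimates, observed):
    """Root mean square error between estimated and observed counts."""
    n = len(observed)
    return (sum((e - o) ** 2 for e, o in zip(estimates, observed)) / n) ** 0.5

# hypothetical block group: 1000 people over 6 land-cover-classified pixels
classes = ["urban", "urban", "forest", "water", "urban", "forest"]
weights = {"urban": 1.0, "forest": 0.1, "water": 0.0}
pop = dasymetric_disaggregate(1000, classes, weights)
```

The key property is pycnophylactic (mass-preserving) behavior: the per-pixel estimates sum back to the zonal total, while water pixels receive no population.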

The QoS Filtering and Scalable Transmission Scheme of MPEG Data to Adapt Network Bandwidth Variation (통신망 대역폭 변화에 적응하는 MPEG 데이터의 QoS 필터링 기법과 스케일러블 전송 기법)

  • 유우종;김두현;유관종
    • Journal of Korea Multimedia Society
    • /
    • v.3 no.5
    • /
    • pp.479-494
    • /
    • 2000
  • Although the proliferation of real-time multimedia services over the Internet might indicate its success in dealing with heterogeneous environments, it is also obvious that the Internet now has to cope with a flood of multimedia data, as video and audio streams consume most network communication channels. Therefore, to utilize network resources efficiently and appropriately, a new scalable transmission technique must be developed and deployed that takes into account each network environment and each client's computing power. Such a technique can also eliminate the storage waste and data transmission overhead incurred when the same video stream is duplicated at several QoS levels. The purpose of this paper is to develop a technology that can adjust the amount of data transmitted for an MPEG video stream according to the given communication bandwidth, and a technique that can reflect dynamic bandwidth changes while a video stream is playing. For this purpose, we introduce a scalable media decomposer working on the server side and a scalable media composer working on the client side, and then propose a scalable transmission method with a media sender and a media receiver that account for dynamic QoS. The methods proposed here facilitate effective use of network resources and provide real-time MPEG video services suited to each client's computing environment.

  • PDF
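One common way to realize the QoS filtering idea above is to drop frames by priority (I-frames over P-frames over B-frames) until a GOP fits the available bandwidth. The sketch below, with made-up frame types and sizes, shows that shape; it is an assumption about the general approach, not the paper's exact filtering scheme.

```python
def filter_gop(frames, budget_bytes):
    """Keep frames of a GOP in priority order (I > P > B) until the
    bandwidth budget is exhausted, then restore display order."""
    priority = {"I": 0, "P": 1, "B": 2}
    keep, used = set(), 0
    for idx, (ftype, size) in sorted(enumerate(frames),
                                     key=lambda t: (priority[t[1][0]], t[0])):
        if used + size <= budget_bytes:
            keep.add(idx)
            used += size
    return [frames[i] for i in range(len(frames)) if i in keep]

# hypothetical GOP: frame type and encoded size in bytes
gop = [("I", 30000), ("B", 8000), ("B", 8000), ("P", 15000),
       ("B", 8000), ("B", 8000), ("P", 15000)]
sent = filter_gop(gop, budget_bytes=62000)
```

Under this budget all B-frames are shed first, which mirrors why reference frames must survive: dropping an I- or P-frame would break every frame predicted from it.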

Study on the Robustness of Face Authentication Methods under Illumination Changes (얼굴인증 방법들의 조명변화에 대한 견인성 비교 연구)

  • Ko Dae-Young;Kim Jin-Young;Na Seung-You
    • The KIPS Transactions:PartB
    • /
    • v.12B no.1 s.97
    • /
    • pp.9-16
    • /
    • 2005
  • This paper focuses on face authentication systems and the robustness of face authentication methods under illumination changes. Four different face authentication methods are tried. These methods are as follows: PCA (Principal Component Analysis), GMM (Gaussian Mixture Models), 1D HMM (1-Dimensional Hidden Markov Models), and Pseudo 2D HMM (Pseudo 2-Dimensional Hidden Markov Models). Experimental results involving an artificial illumination change to face images are compared with each other. Face feature vector extraction based on the 2D DCT (2-Dimensional Discrete Cosine Transform) is used. Experiments to evaluate the four face authentication methods are carried out on the ORL (Olivetti Research Laboratory) face database. The results show that the EER (Equal Error Rate) performance degrades in all cases as ${\delta}$ varies. With no illumination change, the EER is $2.54{\%}$ for Pseudo 2D HMM, $3.18{\%}$ for 1D HMM, $11.7{\%}$ for PCA, and $13.38{\%}$ for GMM. The 1D HMM performs better than PCA when there is no illumination change, but worse than PCA under large illumination changes (${\delta}{\geq}40$). The Pseudo 2D HMM shows the best EER performance regardless of illumination changes.
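The EER metric used throughout the abstract above is the operating point where false accept rate and false reject rate coincide. A minimal threshold-sweep computation, with made-up genuine/impostor similarity scores, looks like this:

```python
def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over all observed similarity scores and
    return the (near-)equal error rate: the point where the false accept
    rate (FAR) on impostor scores and the false reject rate (FRR) on
    genuine scores are closest."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if best is None or abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2

# hypothetical match scores from an authentication system
genuine = [0.9, 0.8, 0.75, 0.6, 0.95]   # same-person comparisons
impostor = [0.4, 0.5, 0.55, 0.65, 0.3]  # different-person comparisons
eer = equal_error_rate(genuine, impostor)
```

A lower EER means better separation of genuine and impostor score distributions, which is why the paper reports it per method and per illumination level.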

Gate-Level Conversion Methods between Boolean and Arithmetic Masks (불 마스크와 산술 마스크에 대한 게이트 레벨 변환기법)

  • Baek, Yoo-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.11
    • /
    • pp.8-15
    • /
    • 2009
  • Side-channel attacks, including the differential power analysis (DPA) attack, are often more powerful than classical cryptanalysis and must be seriously considered by implementers of cryptographic algorithms. Various countermeasures have been proposed against such attacks. In this paper, we deal with the masking method, which is known to be a very effective countermeasure against the DPA attack, and propose new gate-level conversion methods between Boolean and arithmetic masks. The new methods require only 6n-5 XOR and 2n-2 AND gates, with a gate delay of 3n-2, for converting n-bit masks. The basic idea of the proposed methods is that the carry and sum bits in the ripple adder are manipulated so that the adversary cannot detect the relation between these bits and the original raw data. Since the proposed methods use only bitwise operations, they are especially useful for DPA-secure hardware implementations of cryptographic algorithms that use both Boolean and arithmetic operations. For example, we applied them to a secure hardware implementation of the block encryption algorithm SEED and present detailed implementation results.
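For reference, the two masking representations the abstract converts between are x_bool = x XOR r and x = (x_arith + r) mod 2^n. The sketch below only checks the functional relationship by unmasking and remasking directly; it is deliberately NOT DPA-secure (the raw x appears as an intermediate), whereas the paper's gate-level construction avoids exactly that.

```python
MASK_BITS = 8
MOD = 1 << MASK_BITS

def boolean_to_arithmetic(x_bool, r):
    """Given Boolean masking x_bool = x XOR r, return x_arith such that
    x = (x_arith + r) mod 2^n. Direct unmask/remask: correctness check
    only, not a side-channel-resistant conversion."""
    x = x_bool ^ r            # unmask (leaks x; the paper's point is to avoid this)
    return (x - r) % MOD      # remask arithmetically

def arithmetic_to_boolean(x_arith, r):
    """Inverse direction: recover x = (x_arith + r) mod 2^n, then XOR-mask it."""
    x = (x_arith + r) % MOD
    return x ^ r

x, r = 0xB7, 0x3C
xb = x ^ r                      # Boolean-masked value
xa = boolean_to_arithmetic(xb, r)
```

Any secure conversion circuit must produce the same xa/xb values as this reference while never exposing x; that invariant is what a testbench for the paper's 6n-5 XOR / 2n-2 AND circuit would assert.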

3-D Gravity Terrain Inversion for High Resolution Gravity Survey (고정밀 중력 탐사를 위한 3차원 중력 지형 역산 기법)

  • Park, Gye-Soon;Lee, Heui-Soon;Kwon, Byung-Doo
    • Journal of the Korean earth science society
    • /
    • v.26 no.7
    • /
    • pp.691-697
    • /
    • 2005
  • Recently, the development of accurate gravimeters and GPS has made it possible to obtain high-resolution gravity data. Although gravity data interpretation, such as modeling and inversion, has improved significantly, gravity data processing itself has improved very little. Conventional gravity data processing removes gravity effects due to the mass and height difference between the base level and the measurement level, but it yields a biased density model when some or all of the anomalous bodies lie above the base level. We constructed a multiquadric surface of the survey area from topography using DEM (Digital Elevation Map) data, and from this surface built rectangular blocks that reflect the real topography of the survey area. We were thus able to carry out 3-D inversions that incorporate topographic information. We named this technique 3-D Gravity Terrain Inversion (3DGTI). A model test showed that the inversion model from 3DGTI gives better results than conventional methods. Furthermore, because the 3DGTI model retains the topography, it yields a more realistic geologic model. The method was also applied to field data from the Masan-Changwon area, where granitic intrusion is an important geologic characteristic; it delineated geological boundaries more clearly than conventional methods. We therefore conclude that, for areas with diverse rocks and rugged terrain, this new method will produce better models than conventional ones.
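The multiquadric surface underlying the terrain blocks above is Hardy's scattered-data interpolant z(x,y) = Σ_j c_j √(d_j² + h²). A minimal fit-and-evaluate sketch follows; the shape parameter h, the four sample points, and the dense Gaussian-elimination solve are illustrative assumptions, not the authors' implementation.

```python
import math

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def multiquadric_fit(points, values, h=1.0):
    """Fit z(x,y) = sum_j c_j * sqrt(dist((x,y),p_j)^2 + h^2) through samples."""
    A = [[math.sqrt((xi - xj) ** 2 + (yi - yj) ** 2 + h * h)
          for (xj, yj) in points] for (xi, yi) in points]
    return solve(A, list(values))

def multiquadric_eval(points, coeffs, x, y, h=1.0):
    """Evaluate the fitted multiquadric surface at (x, y)."""
    return sum(c * math.sqrt((x - xj) ** 2 + (y - yj) ** 2 + h * h)
               for c, (xj, yj) in zip(coeffs, points))

# hypothetical DEM samples: (x, y) positions and elevations
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
elev = [10.0, 12.0, 11.0, 15.0]
c = multiquadric_fit(pts, elev)
```

Because the kernel matrix is solved exactly, the surface passes through every sample, which is what lets the inversion blocks honor the measured topography.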

EFFECT OF A FLUORIDE VARNISH ON THE ENAMEL DEMINERALIZATION (불소바니쉬가 법랑질 탈회에 미치는 영향)

  • Yoon, Myung-Ok;Lee, Nan-Young;Lee, Sang-Ho
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.35 no.3
    • /
    • pp.446-455
    • /
    • 2008
  • The aim of this study was to evaluate the effect of fluoride varnish application on enamel decalcification. Eighty bovine enamel blocks were divided randomly into 4 groups. Group I was the control group. Group II was treated with APF gel and washed after 4 minutes. Groups III and IV were treated with Fluor $Protector^{(R)}$ and $CavityShield^{TM}$, respectively, and washed after 1 minute. Decalcified lesions were created by placing all specimens in an artificial acidic solution (pH 4.0). The optical density of the lesions was then measured by visible-light fluorescence, and the lesion depths were measured. The results were as follows: 1. At 48 hours, the optical density of group II was higher than that of group I but lower than those of groups III and IV (p<0.05), with no difference between groups III and IV (p>0.05). 2. The optical density of group IV was highest at 72 hours (p<0.05). 3. Mean lesion depths were $205.36{\pm}42.85{\mu}m$ and $210.81{\pm}44.60{\mu}m$ in groups I and II, with no significant difference between the two groups (p>0.05). 4. Mean lesion depths were $80.03{\pm}21.66{\mu}m$ and $77.46{\pm}27.72{\mu}m$ in groups III and IV, with no significant difference between the two groups (p>0.05). Fluoride varnish treatment resulted in a significant reduction in lesion depth compared with APF gel, and Fluor $Protector^{(R)}$ and $CavityShield^{TM}$ provided a similar effect.

  • PDF

A VLSI Design of High Performance H.264 CAVLC Decoder Using Pipeline Stage Optimization (파이프라인 최적화를 통한 고성능 H.264 CAVLC 복호기의 VLSI 설계)

  • Lee, Byung-Yup;Ryoo, Kwang-Ki
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.12
    • /
    • pp.50-57
    • /
    • 2009
  • This paper proposes a VLSI architecture for a CAVLC hardware decoder, a tool that eliminates statistical redundancy in H.264/AVC video compression. Previous CAVLC hardware decoders used four stages to decode five code symbols, and their decoding performance suffered from unnecessary idle cycles between state transitions; the computation of the valid bit length likewise included an unnecessary idle cycle. This paper proposes a hardware architecture that eliminates these idle cycles efficiently. Two methods are applied. One eliminates unnecessary buffering of decoded codes, yielding an efficient pipeline architecture. The other is a shifter control that simplifies the operations and control involved in calculating the valid bit length. Experimental results show that the proposed architecture needs only 89 cycles on average to decode one macroblock, improving performance by about 29% over previous designs. Synthesis results show that the design achieves a maximum operating frequency of 140MHz with a hardware cost of about 11.5K gates in a 0.18um CMOS process. Compared with previous designs, it can achieve low-power operation because it combines high throughput with a low gate count.
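The "valid bit length" bookkeeping central to the abstract above is the amount a bitstream shifter must advance after each variable-length codeword is decoded. The sketch below models that with a generic prefix-code table; the code table is made up for illustration and is not one of the H.264 CAVLC tables.

```python
def decode_vlc(bits, table):
    """Decode a bitstring against a prefix-code table, advancing the read
    position by each decoded codeword's length, the way a barrel shifter
    consumes valid bits in a hardware CAVLC front end."""
    out, pos = [], 0
    while pos < len(bits):
        for codeword, symbol in table.items():
            if bits.startswith(codeword, pos):
                out.append(symbol)
                pos += len(codeword)   # shift by the valid (consumed) bit length
                break
        else:
            raise ValueError("no codeword matches at bit %d" % pos)
    return out

table = {"1": 0, "01": 1, "001": 2, "0001": 3}   # hypothetical prefix code
symbols = decode_vlc("1010010001", table)
```

In hardware, computing that shift amount in the same cycle as the symbol decode, rather than a cycle later, is precisely the kind of idle-cycle elimination the paper targets.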

Ciphering Scheme and Hardware Implementation for MPEG-based Image/Video Security (DCT-기반 영상/비디오 보안을 위한 암호화 기법 및 하드웨어 구현)

  • Park Sung-Ho;Choi Hyun-Jun;Seo Young-Ho;Kim Dong-Wook
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.2 s.302
    • /
    • pp.27-36
    • /
    • 2005
  • This paper proposes an effective encryption method for DCT-based image/video contents and achieves high-speed operation by implementing it as optimized hardware. Considering the computational cost of image/video compression, reconstruction, and encryption, a partial encryption was performed in which only the important information (the DC and DPCM coefficients) was selected as the data to be encrypted. As a result, the encryption cost decreased compared with encrypting the entire image. Any one of multi-mode AES, DES, or SEED can be used as the encryption algorithm. The proposed encryption method was implemented in software and tested with TM-5 on about 1,000 test images; the results verified that the original image cannot be recovered from the encrypted one, while the decrease in compression ratio was only $1.6\%$. The hardware encryption system, implemented in Verilog-HDL, was synthesized to a gate-level circuit with the Synopsys design compiler using the Hynix $0.25{\mu}m$ CMOS Phantom-cell library. Timing simulation with Verilog-XL from Cadence showed stable operation at frequencies above 100MHz. Accordingly, the proposed encryption method and the implemented hardware are expected to serve as an effective solution for end-to-end security, which is considered one of the important open problems.
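The partial-encryption idea above, encrypting only the DC coefficients and leaving the 63 AC coefficients of each DCT block untouched, can be sketched as follows. The SHA-256 counter-mode keystream stands in for the paper's multi-mode AES/DES/SEED core, and the block data are made up.

```python
import hashlib

def keystream(key, n):
    """Simple SHA-256-in-counter-mode keystream; a stand-in (assumption)
    for the multi-mode AES/DES/SEED cipher core used in the paper."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_dc_coeffs(blocks, key):
    """Partial encryption: XOR only the DC coefficient (index 0) of each
    64-coefficient DCT block, leaving the AC coefficients untouched."""
    ks = keystream(key, len(blocks))
    return [[blk[0] ^ (ks[i] & 0xFF)] + blk[1:] for i, blk in enumerate(blocks)]

# two hypothetical 8x8 DCT blocks, flattened: DC first, then 63 AC coefficients
blocks = [[128] + [0] * 63, [90] + [0] * 63]
enc = encrypt_dc_coeffs(blocks, b"secret")
dec = encrypt_dc_coeffs(enc, b"secret")   # XOR keystream is its own inverse
```

Touching only one coefficient per block is why the compression-ratio penalty stays small while the image remains unviewable, since the DC term carries each block's average intensity.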