• Title/Summary/Keyword: 스케일 모델 (scale model)

A Study on Estimation of Quantile using Regional Scaling Model and Frequency Analysis (빈도해석과 지역 스케일 모델을 이용한 확률강우량 추정에 대한 연구)

  • Jung, Younghun;Kim, Sunghun;Kim, Hanbeen;Heo, Jun-Haeng
    • Proceedings of the Korea Water Resources Association Conference / 2016.05a / pp.301-301 / 2016
  • In Korea, design hydrologic quantities for hydraulic structures are estimated through frequency analysis. In practice, at-site frequency analysis is generally performed, but the rainfall records used are in most cases shorter than the design return period. Regional frequency analysis is a technique that improves the accuracy and reliability of rainfall quantiles by overcoming this limitation of short record lengths. The scaling model is a mathematical model that, using rainfall data observed for each duration, expresses rainfall as a function of duration for a given return period, from which rainfall intensity-duration-frequency (IDF) curves can be derived. If the rainfall data observed at the stations in the study area are daily values, the reference duration is 24 hours, and the rainfall quantile for an arbitrary duration can be estimated from the quantile at the reference duration using the scaling model. Accordingly, for regions with short records or for ungauged regions, this study estimates rainfall quantiles using regional frequency analysis together with a regional scaling model and compares the results with at-site frequency analysis. Rainfall stations in the Han River basin were used, and the study area was divided into hydrologically homogeneous regions using the k-means clustering method. For each delineated region, at-site and regional frequency analyses were performed and their accuracy was assessed by comparing the relative root mean square error (RRMSE); the regional scaling model was then applied to the more accurate frequency analysis to estimate rainfall quantiles for arbitrary durations at ungauged sites.
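
The scaling model referred to above is commonly written as a power law in duration. The sketch below is a minimal illustration of that relation, assuming simple scaling from a 24-hour reference duration; the exponent and the example numbers are hypothetical, not values from the study.

```python
# Minimal sketch of a simple scaling model for rainfall quantiles (illustrative only).
# Assumption: the quantile X_T(d) for duration d follows simple scaling,
#   X_T(d) = X_T(d_ref) * (d / d_ref) ** eta,
# where d_ref is the reference duration (24 h for daily data) and eta is a
# regional scaling exponent estimated from the observed durations.

def scaled_quantile(q_ref: float, d: float, d_ref: float = 24.0, eta: float = 0.5) -> float:
    """Estimate the rainfall quantile for duration d (hours) from the quantile
    q_ref at the reference duration d_ref using the simple scaling relation."""
    return q_ref * (d / d_ref) ** eta

# Hypothetical example: a 24-hour quantile of 300 mm scaled down to a 1-hour duration.
x_1h = scaled_quantile(q_ref=300.0, d=1.0)
```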

Enhancement of MSFC-Based Multi-Scale Features Compression Network with Bottom-UP MSFF in VCM (VCM 의 바텀-업 MSFF 를 이용한 MSFC 기반 멀티-스케일 특징 압축 네트워크 개선)

  • Dong-Ha Kim;Gyu-Woong Han;Jun-Seok Cha;Jae-Gon Kim
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.11a / pp.116-118 / 2022
  • MPEG-VCM (Video Coding for Machines) is being standardized in two tracks: Track 1, which compresses features extracted from the input image/video, and Track 2, which compresses the input image/video directly. This paper presents an improvement to an MSFC-based compression model that efficiently compresses the multi-scale features extracted from the FPN (Feature Pyramid Network) of Detectron2, which is used as the vision task network in Track 1. Whereas the existing compression model reduces the resolution and compresses a single-scale feature map, the proposed model constructs the single-scale feature map with a bottom-up MSFF that merges the low-resolution feature maps into the high-resolution feature map in a bottom-up structure. The proposed method shows a BD-rate improvement of 1% to 2.7% in BPP-mAP performance over the existing model, and a BD-rate gain of up to -85.94% compared with the VCM image anchor.
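
As a rough illustration of what fusing multi-scale FPN features into a single-scale map involves, the sketch below resizes each pyramid level to a common resolution and merges them with convolutions. It is a generic sketch under assumed channel sizes, not the authors' MSFF architecture.

```python
# Generic multi-scale feature fusion sketch (not the paper's exact MSFF design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleScaleFusion(nn.Module):
    def __init__(self, channels: int = 256, num_levels: int = 4):
        super().__init__()
        # One 1x1 conv per pyramid level, plus a 3x3 conv to smooth the merged map.
        self.reduce = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=1) for _ in range(num_levels)]
        )
        self.merge = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: FPN levels [p2, p3, p4, p5]; p2 has the highest resolution.
        target_size = feats[0].shape[-2:]
        fused = torch.zeros_like(feats[0])
        for conv, f in zip(self.reduce, feats):
            f = conv(f)
            if f.shape[-2:] != target_size:
                f = F.interpolate(f, size=target_size, mode="nearest")
            fused = fused + f
        return self.merge(fused)  # single-scale feature map to be compressed
```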

Bayesian Texture Segmentation Using Multi-layer Perceptron and Markov Random Field Model (다층 퍼셉트론과 마코프 랜덤 필드 모델을 이용한 베이지안 결 분할)

  • Kim, Tae-Hyung;Eom, Il-Kyu;Kim, Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.1 / pp.40-48 / 2007
  • This paper presents a novel texture segmentation method using multilayer perceptron (MLP) networks and Markov random fields in a multiscale Bayesian framework. Multiscale wavelet coefficients are used as input to the neural networks, and the network output is modeled as a posterior probability. Texture classification at each scale is performed by maximum a posteriori (MAP) classification on the posterior probabilities from the MLP networks. To obtain an improved segmentation result at the finest scale, the proposed method then fuses the multiscale MAP classifications sequentially from coarse to fine scales. This is done by computing the MAP classification at one scale given a priori contextual information extracted from the classification at the adjacent coarser scale. In this fusion process, an MRF (Markov random field) prior distribution and a Gibbs sampler are used, where the MRF model serves as the smoothness constraint and the Gibbs sampler acts as the MAP classifier. The proposed segmentation method shows better performance than texture segmentation using the hidden Markov tree (HMT) model and HMTseg.
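
A compact way to see the coarse-to-fine fusion step is given below: the MLP posterior at the finer scale is reweighted by a contextual prior taken from the parent labels at the coarser scale and then maximized. This is only a simplified sketch; the paper's Gibbs-sampler MAP estimation over a full MRF energy is replaced here by a single Potts-style reweighting.

```python
# Simplified coarse-to-fine MAP fusion sketch (not the authors' exact formulation).
import numpy as np

def fuse_scale(posterior_fine, labels_coarse, beta=1.0):
    """posterior_fine: (H, W, K) class posteriors from the MLP at the fine scale.
    labels_coarse:  (H//2, W//2) MAP labels from the adjacent coarser scale.
    Returns fine-scale MAP labels using the coarser labels as a contextual prior."""
    H, W, K = posterior_fine.shape
    # Each fine-scale pixel inherits the label of its parent at the coarser scale.
    parent = np.kron(labels_coarse, np.ones((2, 2), dtype=int))[:H, :W]
    # Potts-style contextual prior: favor the parent's class with weight exp(beta).
    prior = np.ones((H, W, K))
    prior[np.arange(H)[:, None], np.arange(W)[None, :], parent] = np.exp(beta)
    prior /= prior.sum(axis=-1, keepdims=True)
    return np.argmax(posterior_fine * prior, axis=-1)
```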

Eye Localization based on Multi-Scale Gabor Feature Vector Model (다중 스케일 가버 특징 벡터 모델 기반 눈좌표 검출)

  • Kim, Sang-Hoon;Jung, Sou-Hwan;Oh, Du-Sik;Kim, Jae-Min;Cho, Seong-Won;Chung, Sun-Tae
    • The Journal of the Korea Contents Association / v.7 no.1 / pp.48-57 / 2007
  • Eye localization is necessary for face recognition and related application areas. Most eye localization algorithms reported so far still need improvement in precision and computational time for successful application. In this paper, we propose an improved eye localization method based on multi-scale Gabor feature vector models. The proposed method first locates the eyes in a downscaled face image by using the Gabor jet similarity between the Gabor feature vector at initial eye coordinates and the eye model bunch of the corresponding scale. It then processes each finer-scale face image in the same way, using the eye coordinates localized at the coarser scale as the initial coordinates, until the eyes are finally located in the original input face image. Experiments verify that the proposed method improves the precision rate without much additional computational overhead compared with other eye localization methods reported in previous research.
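
The recursive coarse-to-fine search described above can be outlined as follows. The helper `find_eye_at_scale`, which would perform the Gabor-jet similarity search against the eye model bunch, is assumed here and not implemented; the scale factors are illustrative.

```python
# Coarse-to-fine eye localization loop (illustrative outline, not the authors' code).
import cv2

def localize_eyes(face_img, init_xy, find_eye_at_scale, num_scales=3):
    scales = [0.5 ** s for s in range(num_scales - 1, -1, -1)]  # coarse -> fine
    x, y = init_xy[0] * scales[0], init_xy[1] * scales[0]
    for i, s in enumerate(scales):
        scaled = cv2.resize(face_img, None, fx=s, fy=s)
        x, y = find_eye_at_scale(scaled, (x, y))         # refine at this scale
        if i + 1 < len(scales):                          # map to the next finer scale
            factor = scales[i + 1] / s
            x, y = x * factor, y * factor
    return x, y  # eye coordinates in the original input image
```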

Fusion of Multi-Scale Features towards Improving Accuracy of Long-Term Time Series Forecasting (다중 스케일 특징 융합을 통한 트랜스포머 기반 장기 시계열 예측 정확도 향상 기법)

  • Min, Heesu;Chae, Dong-Kyu
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.539-540 / 2022
  • This paper proposes a Transformer model that learns representations of time series data at multiple scales (temporal resolutions) for accurate long-term time series forecasting. The proposed model extracts multi-scale features from the time series and feeds them into the Transformer to generate the forecast. Through a scale normalization step, global and local temporal information of the series is fused efficiently so that dependencies can be learned. Experiments on three multivariate time series datasets demonstrate the superiority of the proposed method.
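
A minimal way to realize the multi-scale feature extraction described above is sketched below: the series is pooled at several temporal scales, the scales are aligned and projected, and a standard Transformer encoder learns dependencies over the fused representation. The pooling scales and dimensions are assumptions, and the paper's scale normalization step is not reproduced.

```python
# Multi-scale feature extraction + Transformer encoder (illustrative sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEncoder(nn.Module):
    def __init__(self, d_model=64, scales=(1, 4, 16)):
        super().__init__()
        self.scales = scales
        self.proj = nn.Linear(len(scales), d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                 # x: (batch, length) univariate series
        feats = []
        for s in self.scales:
            pooled = F.avg_pool1d(x.unsqueeze(1), kernel_size=s, stride=s)
            # Upsample back so every scale is aligned to the original length.
            feats.append(F.interpolate(pooled, size=x.shape[-1], mode="linear"))
        feats = torch.cat(feats, dim=1).transpose(1, 2)   # (batch, length, num_scales)
        return self.encoder(self.proj(feats))             # (batch, length, d_model)
```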

Speech detection from broadcast contents using multi-scale time-dilated convolutional neural networks (다중 스케일 시간 확장 합성곱 신경망을 이용한 방송 콘텐츠에서의 음성 검출)

  • Jang, Byeong-Yong;Kwon, Oh-Wook
    • Phonetics and Speech Sciences / v.11 no.4 / pp.89-96 / 2019
  • In this paper, we propose a deep learning architecture that can effectively detect speech segments in broadcast content. We also propose a multi-scale time-dilated layer for learning the temporal changes of feature vectors. We implement several comparison models to verify the performance of the proposed model and calculate the frame-by-frame F-score, precision, and recall. The proposed model and the comparison models are trained on the same training data: 32 hours of Korean broadcast data composed of various genres (drama, news, documentary, and so on). The proposed model shows the best performance on the Korean broadcast data with an F-score of 91.7%, and also achieves the highest performance on British and Spanish broadcast data with F-scores of 87.9% and 92.6%, respectively. These results show that the proposed model can improve speech detection performance by learning the temporal changes of the feature vectors.
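
The multi-scale time-dilated layer can be pictured as several temporal convolutions with different dilation rates applied in parallel, so that each branch covers a different temporal span. The sketch below illustrates this idea; channel counts and dilation rates are assumptions, not the paper's configuration.

```python
# Multi-scale time-dilated convolution block (illustrative sketch).
import torch
import torch.nn as nn

class MultiScaleTimeDilatedBlock(nn.Module):
    def __init__(self, in_ch=40, out_ch=32, dilations=(1, 2, 4, 8)):
        super().__init__()
        # One temporal convolution per dilation rate; padding keeps the frame count.
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_ch, out_ch, kernel_size=3, dilation=d, padding=d)
             for d in dilations]
        )

    def forward(self, x):                 # x: (batch, features, frames)
        # Each branch sees a different temporal context; concatenation fuses the scales.
        return torch.cat([branch(x) for branch in self.branches], dim=1)
```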

A Study on Scale-Up Success Factors for ICT Startups: A Case Analysis Using ERIS Model (ICT 스타트업 스케일업 성공요인 연구: ERIS 모델 적용 사례연구)

  • Hwang, Jeong-Seop;Sim, Da-Hyun;Lee, Jungwoo
    • Journal of Digital Convergence / v.19 no.4 / pp.89-101 / 2021
  • Scaling up an ICT startup is not easy because of limited capabilities, a lack of resources, and immature business networks. This research therefore selected a representative startup that succeeded in scaling up and applied the ERIS model to analyze its scale-up process in the initial stages. Analysis of the qualitative data collected revealed that the entrepreneurs' experience, convergence of knowledge across diverse industries, participation in public-sector-led R&D, management of communication channels between customers and the business, and utilization of project-oriented campaigns are critical success factors in scaling up ICT startups. Academically, this study validates the utility of the ERIS model in analyzing the scale-up process. For practitioners, it can serve as a reference for developing strategies to secure competitiveness in the initial market and to scale up ICT startups.

Comparative Study on Illumination Compensation Performance of Retinex model and Illumination-Reflectance model (레티넥스 모델과 조명-반사율 모델의 조명 보상 성능 비교 연구)

  • Chung, Jin-Yun;Yang, Hyun-Seung
    • Journal of KIISE:Software and Applications / v.33 no.11 / pp.936-941 / 2006
  • To apply object recognition techniques in real environments, an effective illumination compensation method is needed. We focus on the Retinex model and the illumination-reflectance model as illumination compensation models, implement both, and compare their performance. The Retinex model is implemented as Single Scale Retinex, Multi-Scale Retinex, and their neural-network counterparts, the Retinex Neural Network and the Multi-Scale Retinex Neural Network. The illumination-reflectance model is implemented by estimating an illumination image through low-frequency filtering in the Discrete Cosine Transform and Wavelet Transform domains, and through Gaussian blurring, and then computing the reflectance image. We compare the illumination compensation performance of the models on facial images under nine illumination directions, and also after post-processing with Principal Component Analysis (PCA). As a result, the illumination-reflectance model showed better performance, and the overall performance of both models improved when the illumination-compensated images were post-processed with PCA.
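
For reference, Single Scale Retinex, the simplest of the Retinex variants compared above, subtracts a Gaussian-blurred (low-frequency) estimate of the illumination from the image in the log domain. A minimal sketch is given below; the sigma value is an illustrative choice.

```python
# Single Scale Retinex: R(x, y) = log I(x, y) - log[G_sigma * I](x, y).
import cv2
import numpy as np

def single_scale_retinex(img_gray: np.ndarray, sigma: float = 30.0) -> np.ndarray:
    img = img_gray.astype(np.float64) + 1.0               # avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)   # smooth illumination estimate
    return np.log(img) - np.log(illumination)             # reflectance-like output
```

Multi-Scale Retinex is then a weighted sum of such outputs over several sigma values.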

Scalable HBT Modeling using Direct Extraction Method of Model Parameters (파라메터 직접 추출법을 이용한 스케일 가능한 HBT의 모델링)

  • Suh Youngsuk
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.2 / pp.316-321 / 2005
  • A new HBT current source model and the corresponding direct parameter extraction methods are presented. Exact analytical expressions for the current source model parameters are derived. The method is applied to scalable modeling of HBTs, and techniques to reduce redundancy among the parameters are introduced. The resulting model accurately predicts the measured data over changes in ambient temperature, device size, and bias.
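
Scalable modeling here means that the extracted parameters can be mapped across device geometries rather than re-extracted per device. As a generic illustration of one common convention (not taken from the paper), the current-source saturation current is often scaled with emitter area relative to a reference device:

```python
# Generic area-scaling convention for a bipolar current-source parameter
# (illustrative only; not the extraction method or model of the paper).
def scale_saturation_current(i_s_ref: float, area: float, area_ref: float) -> float:
    """Scale the reference device's saturation current to another emitter area."""
    return i_s_ref * (area / area_ref)
```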

Simulations of Self-Assembled Structures in Macromolecular Systems: from Atomistic Model to Mesoscopic Model (고분자 자기조립 구조의 전산 모사: 원자 모델로부터 메조 스케일 모델까지)

  • Huh, June;Jo, Won-Ho
    • Polymer(Korea) / v.30 no.6 / pp.453-463 / 2006
  • Molecular simulation is an exceptionally useful method for predicting self-assembled structures in various macromolecular systems, elucidating the origins of many interesting molecular events such as protein folding, polymer micellization, and the ordering of molten block copolymers. The length scales of these events range widely from the sub-nanometer scale to the micron scale or even larger, which is the main obstacle to simulating all of them from first principles. To circumvent this obstacle, a molecular model can be rebuilt by sacrificing some unimportant molecular details, based on two different perspectives on model resolution. These two perspectives are generally referred to as 'atomistic' and 'mesoscopic'. This paper reviews various simulation methods for macromolecular self-assembly from both the atomistic and mesoscopic perspectives.