• Title/Summary/Keyword: Combination Weighting Method

Image Retrieval Using a Composite of MPEG-7 Visual Descriptors (MPEG-7 디스크립터들의 조합을 이용한 영상 검색)

  • 강희범;원치선
    • Journal of Broadcast Engineering
    • /
    • v.8 no.1
    • /
    • pp.91-100
    • /
    • 2003
  • In this paper, to improve the retrieval performance, an efficient combination of the MPEG-7 visual descriptors, such as the edge histogram descriptor (EHD), the color layout descriptor (CLD), and the homogeneous texture descriptor (HTD), is proposed in the framework of the relevance feedback approach. The EHD represents the spatial distribution of edges in local image regions and is considered an important feature for representing the content of an image. The CLD specifies the spatial distribution of colors and is widely used in image retrieval due to its simplicity and fast operation speed. The HTD describes the precise statistical distribution of the image texture. Both the feature vector for the query image and the weighting factors among the combined descriptors are adaptively determined during the relevance feedback. Experimental results show that the proposed method improves the retrieval performance significantly for natural images.
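As a rough illustration of the descriptor-combination idea, the sketch below combines per-descriptor distances with adaptive weights. The Euclidean distance and the variance-based weight update are illustrative assumptions, not the paper's exact relevance-feedback formulation.

```python
import numpy as np

def combined_distance(query_feats, db_feats, weights):
    """Weighted sum of per-descriptor distances (e.g., EHD, CLD, HTD).

    query_feats, db_feats: dicts mapping descriptor name -> feature vector.
    weights: dict mapping descriptor name -> non-negative weight (sums to 1).
    """
    return sum(
        weights[name] * np.linalg.norm(query_feats[name] - db_feats[name])
        for name in weights
    )

def update_weights(weights, relevant_feats, query_feats):
    """Illustrative relevance-feedback update: descriptors whose distances
    vary little over the user-marked relevant images receive larger weights."""
    scores = {}
    for name in weights:
        dists = [np.linalg.norm(query_feats[name] - f[name]) for f in relevant_feats]
        scores[name] = 1.0 / (np.std(dists) + 1e-6)  # low variance -> high weight
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}
```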

A Study on Random Selection of Pooling Operations for Regularization and Reduction of Cross Validation (정규화 및 교차검증 횟수 감소를 위한 무작위 풀링 연산 선택에 관한 연구)

  • Ryu, Seo-Hyeon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.4
    • /
    • pp.161-166
    • /
    • 2018
  • In this paper, we propose a method for the random selection of pooling operations for regularization and the reduction of cross validation in convolutional neural networks. The pooling operation in convolutional neural networks is used to reduce the size of the feature map and to provide shift invariance. In the existing pooling method, one pooling operation is applied in each pooling layer. Because this method fixes the convolutional network, the network suffers from overfitting, which means that it excessively fits the model to the training samples. In addition, to find the combination of pooling operations that maximizes performance, cross validation must be performed. To solve these problems, we introduce the concept of probability into the pooling layers. The proposed method does not select a single pooling operation for each pooling layer. Instead, we randomly select one pooling operation among multiple pooling operations in each pooling region during training, and at test time we use probabilistic weighting to produce the expected output. The proposed method can be seen as a technique in which many networks, each using a different pooling operation in each pooling region, are approximately averaged. Therefore, this method avoids the overfitting problem and reduces the amount of cross validation. The experimental results show that the proposed method achieves better generalization performance and reduces the need for cross validation.
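The following sketch illustrates the core idea: randomly pick a pooling operation per region during training and use a probability-weighted expectation at test time. The choice of max/average pooling with equal probabilities is an assumption for illustration, not necessarily the configuration used in the paper.

```python
import numpy as np

def pool_region(region, op):
    """Apply a single pooling operation to one pooling region."""
    return region.max() if op == "max" else region.mean()

def stochastic_pool_layer(fmap, size=2, ops=("max", "avg"),
                          probs=(0.5, 0.5), training=True):
    """For each pooling region, pick one pooling operation at random during
    training; at test time, output the probability-weighted expectation."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            region = fmap[i*size:(i+1)*size, j*size:(j+1)*size]
            if training:
                op = np.random.choice(ops, p=probs)
                out[i, j] = pool_region(region, op)
            else:
                out[i, j] = sum(p * pool_region(region, op)
                                for op, p in zip(ops, probs))
    return out
```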

Effect of Areal Mean Rainfall Estimation Technique and Rainfall-Runoff Models on Flood Simulation in Samcheok Osipcheon(Riv.) Basin (면적 강우량 산정 기법과 강우-유출 모형이 삼척오십천 유역의 홍수 모의에 미치는 영향)

  • Lee, Hyeonji;Shin, Youngsub;Kang, Dongho;Kim, Byungsik
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.6
    • /
    • pp.775-784
    • /
    • 2023
  • In terms of flood management, it is necessary to analyze quantitative rainfall and runoff from a spatial and temporal perspective and to analyze runoff for heavy rainfall events that are concentrated within a short period of time. The simulation results of rainfall-runoff models vary depending on the model type and input data. In particular, rainfall data is an important factor, so calculating the areal mean rainfall is very important. In this study, the areal mean rainfall of the Samcheok Osipcheon(Riv.) watershed, located in mountainous terrain, was calculated using the Arithmetic Mean Method, Thiessen's Weighting Method, and the Isohyetal Method, and the rainfall-runoff results were compared by applying the distributed model S-RAT and the lumped model HEC-HMS. The temporal transferability results showed that the combination of the distributed model and the Isohyetal Method had the best statistical performance, with an MAE of 64.62 m³/s, an RMSE of 82.47 m³/s, and R² and NSE of 0.9383 and 0.8547, respectively. The analysis is considered adequate because the difference between the observed and simulated peak-flow occurrence times is within 1 hour. Therefore, the results of this study can be used for future frequency analysis, which can improve the accuracy of simulating the peak flood discharge and its occurrence time in steep mountainous watersheds.
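A minimal sketch of areal mean rainfall as a weighted average of gauge observations is shown below. The gauge identifiers, rainfall depths, and area fractions are hypothetical; only the source of the weights (equal weights, Thiessen polygon areas, or areas between isohyets) changes between the three methods compared in the paper.

```python
def areal_mean_rainfall(station_rain, station_weights):
    """Areal mean rainfall as a weighted average of gauge observations.

    station_rain:    dict gauge_id -> rainfall depth (mm)
    station_weights: dict gauge_id -> fraction of basin area assigned to that
                     gauge (weights should sum to 1). Equal weights give the
                     Arithmetic Mean Method; polygon-area fractions give
                     Thiessen's Weighting Method; isohyet-band fractions give
                     the Isohyetal Method.
    """
    return sum(station_rain[g] * station_weights[g] for g in station_weights)

# Hypothetical example: three gauges covering 20%, 50%, and 30% of the basin.
print(areal_mean_rainfall({"A": 42.0, "B": 55.0, "C": 61.0},
                          {"A": 0.2, "B": 0.5, "C": 0.3}))
```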

Site Characterization using Shear-Wave Velocities Inverted from Rayleigh-Wave Dispersion in Wonju, Korea (레일리파 분산을 역산하여 구한 횡파속도를 이용한 원주시의 부지특성)

  • Kim, Chungho;Ali, Abid;Kim, Ki Young
    • Geophysics and Geophysical Exploration
    • /
    • v.17 no.1
    • /
    • pp.11-20
    • /
    • 2014
  • To determine shear-wave velocities (Vs) and characterize sites in Wonju, Korea, Rayleigh waves were recorded at 78 low-altitude sites using 12 to 24 4.5-Hz vertical geophones over 20 days during the period from February to September 2013. Dispersion curves of the Rayleigh waves obtained by the extended spatial autocorrelation method were inverted using the damped least-squares method to derive Vs models. From these 1-D models, the average Vs to a depth of 30 m (Vs30), the Vs of weathered rocks, the depth to these basement rocks, and the average Vs of the overburden layer were estimated to be 418 ± 13 m/s, 576 ± 8 m/s, 16.3 ± 0.7 m, and 290 ± 7 m/s, respectively, at the 95% confidence level. To determine adequate proxies for Vs30, we computed correlation coefficients of Vs30 with topographic slope (r = 0.46) and elevation (r = 0.43). An empirical linear relationship is presented as a combination of the individually estimated Vs30 values with weighting factors of 0.45, 0.45, and 0.1 for topographic slope, elevation, and mapped lithology, respectively. Owing to the weak correlation between the Vs30 obtained from inversion of the dispersion curves and the proxy-based estimate (r = 0.50), however, a relatively large error range should be considered when applying this relationship.
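The proxy-based combination described above can be written as a simple weighted sum. The sketch below uses the stated weights of 0.45, 0.45, and 0.1; the individual Vs30 estimates in the usage example are hypothetical.

```python
def vs30_proxy_estimate(vs30_slope, vs30_elev, vs30_lith,
                        w_slope=0.45, w_elev=0.45, w_lith=0.10):
    """Proxy-based Vs30 (m/s) as a weighted combination of three individual
    estimates, using the weighting factors reported in the abstract."""
    return w_slope * vs30_slope + w_elev * vs30_elev + w_lith * vs30_lith

# Hypothetical individual estimates (m/s) from slope, elevation, and lithology:
print(vs30_proxy_estimate(430.0, 400.0, 450.0))  # -> 418.5
```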

Implementation of Turbo Decoder Based on Two-step SOVA with a Scaling Factor (비례축소인자를 가진 2단 SOVA를 이용한 터보 복호기의 설계)

  • Kim, Dae-Won;Choi, Jun-Rim
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.39 no.11
    • /
    • pp.14-23
    • /
    • 2002
  • Two implementation methods for the SOVA (Soft Output Viterbi Algorithm) of a Turbo decoder are applied and verified. The first method combines a trace-back (TB) logic for the survivor state with a double trace-back logic for the weight value in two-step SOVA. This architecture of the two-step SOVA decoder allows significant savings in area and high-speed processing compared with one-step SOVA decoding using the register exchange (RE) or trace-back (TB) method. The second method adjusts the reliability value with a scaling factor between 0.25 and 0.33 in order to compensate for its distortion, for a rate-1/3, 8-state SOVA decoder with a 256-bit frame size. The proposed schemes contributed an SNR performance gain of 2 dB at a BER of 10^-4 over a SOVA decoder without a scaling factor. To verify the suggested schemes, the SOVA decoder was tested on a Xilinx XCV1000E FPGA, where it runs at a maximum speed of 33.6 MHz with a latency of 845 cycles and occupies 175K gates for the 256-bit frame size.
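A minimal sketch of the scaling-factor idea is given below: the extrinsic information exchanged between the component decoders is scaled down before the next iteration to compensate for the over-optimistic reliability values SOVA produces. The extrinsic-information decomposition is the standard turbo-decoding relation, not necessarily the paper's exact hardware datapath, and the factor 0.3 is an arbitrary pick within the reported 0.25 to 0.33 range.

```python
import numpy as np

def scaled_extrinsic(llr_out, llr_apriori, systematic_llr, scaling_factor=0.3):
    """Extrinsic information passed between the two SOVA decoders of a turbo
    decoder, scaled by a constant factor.

    llr_out:        soft output (LLR) of the current SOVA stage
    llr_apriori:    a-priori LLR received from the other component decoder
    systematic_llr: channel LLR of the systematic bits
    """
    extrinsic = (np.asarray(llr_out)
                 - np.asarray(llr_apriori)
                 - np.asarray(systematic_llr))
    return scaling_factor * extrinsic
```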

Speaker-Adaptive Speech Synthesis based on Fuzzy Vector Quantizer Mapping and Neural Networks (퍼지 벡터 양자화기 사상화와 신경망에 의한 화자적응 음성합성)

  • Lee, Jin-Yi;Lee, Gwang-Hyeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.1
    • /
    • pp.149-160
    • /
    • 1997
  • This paper is concerned with a speaker-adaptive speech synthesis method using a mapped codebook designed by fuzzy mapping on FLVQ (Fuzzy Learning Vector Quantization). The FLVQ is used to design both the input and reference speakers' codebooks. This algorithm incorporates a fuzzy membership function into the LVQ (learning vector quantization) network. Unlike the LVQ algorithm, it minimizes the network output errors, which are the differences between the target and actual class membership values, and thereby minimizes the distances between training patterns and competing neurons. Speaker adaptation in speech synthesis is performed as follows: the input speaker's codebook is mapped to the reference speaker's codebook in fuzzy terms. The fuzzy VQ mapping replaces a codevector while preserving its fuzzy membership function. The codevector correspondence histogram is obtained by accumulating the vector correspondences along the DTW optimal path. We use the fuzzy VQ mapping to design a mapped codebook, defined as a linear combination of the reference speaker's vectors using each fuzzy histogram as a weighting function with membership values. In the adaptive speech synthesis stage, the input speech is fuzzy vector-quantized by the mapped codebook, and FCM arithmetic is then used to synthesize speech adapted to the input speaker. The speaker adaptation experiments are carried out using the speech of males in their thirties as the input speaker's speech and a female in her twenties as the reference speaker's speech. The sentences used in the experiments are /anyoung hasim nika/ and /good morning/. As a result of the experiments, we obtained synthesized speech adapted to the input speaker.
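The mapped-codebook construction described above amounts to a weighted linear combination of the reference speaker's codevectors. The sketch below assumes the fuzzy correspondence histograms have already been accumulated along the DTW optimal path; the normalization step is an illustrative assumption.

```python
import numpy as np

def mapped_codevector(correspondence_hist, reference_codebook):
    """One mapped codevector as a linear combination of the reference
    speaker's codevectors, weighted by the (normalized) fuzzy correspondence
    histogram accumulated along the DTW optimal path.

    correspondence_hist: shape (M,) accumulated membership mass per reference codevector.
    reference_codebook:  shape (M, D) reference speaker's codevectors.
    """
    weights = correspondence_hist / (correspondence_hist.sum() + 1e-12)
    return weights @ reference_codebook  # shape (D,)

def mapped_codebook(correspondence_hists, reference_codebook):
    """Build the full mapped codebook: one correspondence histogram per
    input-speaker codevector."""
    return np.stack([mapped_codevector(h, reference_codebook)
                     for h in correspondence_hists])
```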

A Study on Relationship between Physical Elements and Tennis/Golf Elbow

  • Choi, Jungmin;Park, Jungwoo;Kim, Hyunseung
    • Journal of the Ergonomics Society of Korea
    • /
    • v.36 no.3
    • /
    • pp.183-196
    • /
    • 2017
  • Objective: The purpose of this research was to assess the agreement between job physical risk factor analysis by ergonomists using ergonomic methods and physical examinations made by occupational physicians on the presence of musculoskeletal disorders of the upper extremities. Background: Ergonomics is the systematic application of principles concerned with the design of devices and working conditions for enhancing human capabilities and optimizing working and living conditions. Proper ergonomic design is necessary to prevent injuries and physical and emotional stress. The major types of ergonomic injuries and incidents are cumulative trauma disorders (CTDs), acute strains, sprains, and system failures. Minimizing the use of excessive force and awkward postures can help to prevent such injuries. Method: Initial data were collected as part of a larger study by the University of Utah Ergonomics and Safety program field data collection teams and medical data collection teams from the Rocky Mountain Center for Occupational and Environmental Health (RMCOEH). Subjects included 173 male and female workers: 83 at Beehive Clothing (a clothing plant), 74 at Autoliv (a plant making air bags for vehicles), and 16 at Deseret Meat (a meat-processing plant). Posture and effort levels were analyzed using a software program developed at the University of Utah (Utah Ergonomic Analysis Tool). The Ergonomic Epicondylitis Model (EEM) was developed to assess the risk of epicondylitis from observable job physical factors. The model considers five job risk factors: (1) intensity of exertion, (2) forearm rotation, (3) wrist posture, (4) elbow compression, and (5) speed of work. Qualitative ratings of these physical factors were determined during video analysis. Personal variables were also investigated to study their relationship with epicondylitis. Logistic regression models were used to determine the association between risk factors and symptoms of epicondyle pain. Results: The results of this study indicate that gender, smoking status, and BMI do have an effect on the risk of epicondylitis, but there is not a statistically significant relationship between the EEM and epicondylitis. Conclusion: This research studied the relationship between the Ergonomic Epicondylitis Model (EEM) and the occurrence of epicondylitis. The model was not predictive of epicondylitis. However, it is clear that epicondylitis was associated with some individual risk factors such as smoking status, gender, and BMI. Based on these results, future research may identify additional factors that increase the risk of epicondylitis. Application: Although this research used a combination of questionnaires, ergonomic job analysis, and medical job analysis to verify risk factors related to epicondylitis, there are limitations. The sample size was not very large because only 173 subjects were available, and the study was conducted in only three facilities in Utah: a plant making air bags for vehicles, a meat-processing plant, and a clothing plant. If working conditions in other kinds of facilities are considered, the results may improve; therefore, future research should include additional subjects in different kinds of facilities. Repetition and duration of a task were not considered as risk factors in this research. These two factors could be associated with epicondylitis, so it could be important to include them in future work. Psychosocial data and workplace conditions (e.g., low temperature) were also noted during data collection and could be used to further study the prevalence of epicondylitis. Univariate analysis could be applied to each variable of the EEM. This research was performed using multivariate analysis, so it was difficult to recognize the distinct effect of each variable. Basically, the difference between univariate and multivariate analysis is that univariate analysis deals with one predictor variable at a time, whereas multivariate analysis deals with multiple predictor variables combined in a predetermined manner. Univariate analysis could show how each variable is associated with epicondyle pain, which may allow more appropriate weighting factors to be determined and thereby improve the performance of the EEM.
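For illustration only, the sketch below fits a logistic regression of epicondyle-pain status on the five EEM job factors and the personal variables mentioned above. The feature names and the randomly generated data are placeholders, not the study's dataset, and the model form is an assumption based on the abstract's description.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical design matrix: the five EEM job factors plus three personal
# variables (gender, smoking status, BMI); rows are workers.
feature_names = ["intensity", "forearm_rotation", "wrist_posture",
                 "elbow_compression", "work_speed", "gender", "smoker", "bmi"]
rng = np.random.default_rng(0)
X = rng.normal(size=(173, len(feature_names)))   # placeholder predictors
y = rng.integers(0, 2, size=173)                 # 1 = epicondyle pain (placeholder)

model = LogisticRegression(max_iter=1000).fit(X, y)
# Exponentiated coefficients give odds ratios per unit change in each
# (standardized) factor, indicating direction and strength of association.
odds_ratios = dict(zip(feature_names, np.exp(model.coef_[0])))
print(odds_ratios)
```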