• Title/Summary/Keyword: 층별 가중치 (story-wise / layer-wise weight)

Search Results: 9

A Study on Weight for Capability Evaluation in the Safety Inspection for Vertical Extension Remodeling of the Apartment Housing (증축형 리모델링 안전진단 내하력 평가의 가중치에 대한 연구)

  • Lim, Chi-Sung;Karl, Kyoung-Wan;Oh, Dae-Jin;Lee, Seok-Ho
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.25 no.1 / pp.51-58 / 2021
  • When the vertical extension remodeling policy was introduced in 2014, a Safety Inspection Manual was established to ensure structural safety during vertical extension remodeling. In that manual, the story weight used in capability evaluation was taken from the Safety Inspection Manual for Reconstruction; yet although capability evaluation matters more for vertical extension remodeling than for reconstruction, the engineering basis for the story weight is insufficient, and the method of calculating it needs improvement. In this study, story importance and story weight were defined through case analyses of capability evaluation so as to provide an engineering basis for the story weight, and a new story-weight equation was presented that accounts for the load-bearing ratio of structural members.
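
The paper's weight equation is not reproduced in the abstract; the sketch below, with made-up loads, only illustrates the underlying idea of weighting each story by the share of load its structural members carry.

```python
# Hypothetical sketch: per-story weights from member load-bearing shares.
# Not the paper's equation -- an illustration of the general idea only.

def story_weights(member_loads_by_story):
    """member_loads_by_story: list of lists; each inner list holds the
    load carried by each structural member on that story."""
    story_totals = [sum(loads) for loads in member_loads_by_story]
    total = sum(story_totals)
    # Normalize so the story weights sum to 1.
    return [t / total for t in story_totals]

# Example: a 3-story building whose lower stories carry more load.
print(story_weights([[300, 250, 280], [200, 180, 190], [100, 90, 95]]))
```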

A New Hidden Error Function for Layer-By-Layer Training of Multilayer Perceptrons (다층 퍼셉트론의 층별 학습을 위한 중간층 오차 함수)

  • Oh Sang-Hoon
    • Proceedings of the Korea Contents Association Conference / 2005.11a / pp.364-370 / 2005
  • LBL (Layer-By-Layer) algorithms have been proposed to accelerate the training of MLPs (Multilayer Perceptrons). In LBL algorithms each layer needs its own error function for optimization, and the hidden-layer error function in particular has a great effect on performance. Accordingly, this paper proposes a new hidden-layer error function, derived from the mean squared error of the output layer, to improve the performance of LBL training of MLPs. The effectiveness of the proposed error function was demonstrated on a handwritten-digit recognition task and an isolated-word recognition task, where very fast learning convergence was obtained.
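
As a rough illustration of layer-by-layer training with a hidden-layer error derived from the output-layer MSE (a generic derivation; the paper's exact function may differ), consider this NumPy sketch on toy data.

```python
import numpy as np

# Minimal LBL-style sketch for a one-hidden-layer MLP. The hidden-layer
# error is the output-layer MSE gradient propagated back through W2 --
# one simple way to derive it "from the mean squared error of the output
# layer"; the paper's actual function may differ.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                 # toy inputs
T = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy targets

W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    H = sigmoid(X @ W1)        # hidden activations
    Y = sigmoid(H @ W2)        # outputs
    e_out = Y - T              # output error (from the MSE)
    # Step 1: update the output layer on its own (MSE) error.
    W2 -= 0.1 * H.T @ (e_out * Y * (1 - Y)) / len(X)
    # Step 2: hidden-layer error function, derived from the output MSE.
    e_hid = (e_out * Y * (1 - Y)) @ W2.T * H * (1 - H)
    W1 -= 0.1 * X.T @ e_hid / len(X)

print("final MSE:", float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - T) ** 2)))
```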


A New Hidden Error Function for Training of Multilayer Perceptrons (다층 퍼셉트론의 층별 학습 가속을 위한 중간층 오차 함수)

  • Oh Sang-Hoon
    • The Journal of the Korea Contents Association / v.5 no.6 / pp.57-64 / 2005
  • (Journal version of the preceding conference paper; the abstract is identical to the one above.)


A sample design for life and attitude survey of Gyeongbuk people (경북인의 생활과 의식조사 표본설계)

  • Kim, Dal-Ho;Cho, Kil-Ho;Hwang, Jin-Seub;Jung, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / v.20 no.6 / pp.1155-1167 / 2009
  • We developed a new sample design for the 2007 survey on the life and consciousness of Gyeongbuk people, using the 10% sample data of the 2005 Population and Housing Census as the survey population. After stratification, samples were allocated proportionally within strata, based on an examination of characteristics from the previous survey, including economic activity status, annual income level, and housing ownership. We then calculated the weights for the new sample design and derived estimators and a standard-error formula using those weights.
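
A minimal sketch of the general scheme described here, proportional allocation within strata followed by design weights N_h/n_h, using made-up stratum sizes:

```python
# Proportional allocation and design weights in a stratified design.
# Stratum sizes and the total sample size below are illustrative only.

stratum_sizes = {"urban": 120_000, "suburban": 60_000, "rural": 20_000}
n_total = 1_000

N = sum(stratum_sizes.values())
allocation = {h: round(n_total * N_h / N) for h, N_h in stratum_sizes.items()}
# Design weight in stratum h: N_h / n_h (each respondent represents
# that many people in the population).
weights = {h: stratum_sizes[h] / allocation[h] for h in stratum_sizes}

print(allocation)  # {'urban': 600, 'suburban': 300, 'rural': 100}
print(weights)     # all equal N/n_total = 200 under exact proportionality
```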


Attention Patterns and Semantics of Korean Language Models (한국어 언어모델 주의집중 패턴과 의미적 대표성)

  • Yang, Kisu;Jang, Yoonna;Lim, Jungwoo;Park, Chanjun;Jang, Hwanseok;Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2021.10a / pp.605-608 / 2021
  • KoBERT enjoys a high standing in Korean natural language processing owing to its strong performance and extensibility. However, it is used while much about its internal computations and patterns remains unexplained. In this study, we classify the patterns of self-attention, the core component of KoBERT, into four types and shed light on the phenomenon in which attention weight concentrates on special tokens. We extract the attention scores of the special tokens layer by layer to show how they change, and interpret the role of those tokens in connection with the attention mechanism. To support this, we conduct experiments on Korean classification tasks and, together with a quantitative analysis, evaluate the semantic value of the special tokens.
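
A sketch of the kind of layer-wise attention extraction described here, using Hugging Face transformers; the checkpoint id is an assumption, so substitute the KoBERT checkpoint you actually use.

```python
# Extract per-layer attention paid to a special token ([CLS]).
# Requires `transformers` and `torch`; the model id is an assumption.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "monologg/kobert"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

inputs = tokenizer("한국어 문장 분류 예시입니다.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

cls_index = 0  # [CLS] is the first token in BERT-style inputs
for layer, attn in enumerate(outputs.attentions):
    # attn: (batch, heads, query_pos, key_pos); average heads and queries
    # to get the mean attention each layer pays *to* [CLS].
    score = attn[0].mean(dim=0)[:, cls_index].mean().item()
    print(f"layer {layer:2d}: mean attention to [CLS] = {score:.4f}")
```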


A Study on the Efficiency of the BLS Nonresponse Adjustment According to the Correlation and Sample Size (상관관계와 표본 크기에 따른 BLS 무응답 보정의 효율성 비교)

  • Kim, Seok;Shin, Key-Il
    • The Korean Journal of Applied Statistics / v.22 no.6 / pp.1301-1313 / 2009
  • The efficiency and sensitivity of the BLS adjustment method have been studied, and the method is known to yield a more accurate estimate of a total by appropriately adjusting the sample weights. However, the efficiency of the BLS method varies with the magnitude of the correlation coefficients and the sample sizes within strata. In this paper, we study the efficiency of the BLS adjustment according to the within-stratum sample sizes and correlations, using the 2007 Monthly Labor Survey data.
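
A generic sketch of a BLS-style cell adjustment, in which the weight of non-respondents is redistributed to respondents within the same adjustment cell (made-up data; the paper's exact variant may differ):

```python
# BLS-style unit non-response weight adjustment within cells.

def bls_adjust(units):
    """units: list of dicts with 'cell', 'weight', 'responded' keys.
    Returns adjusted weights (0 for non-respondents)."""
    cells = {}
    for u in units:
        c = cells.setdefault(u["cell"], {"all": 0.0, "resp": 0.0})
        c["all"] += u["weight"]
        if u["responded"]:
            c["resp"] += u["weight"]
    adjusted = []
    for u in units:
        factor = cells[u["cell"]]["all"] / cells[u["cell"]]["resp"]
        adjusted.append(u["weight"] * factor if u["responded"] else 0.0)
    return adjusted

units = [
    {"cell": "A", "weight": 10, "responded": True},
    {"cell": "A", "weight": 10, "responded": False},
    {"cell": "B", "weight": 20, "responded": True},
]
print(bls_adjust(units))  # [20.0, 0.0, 20.0] -- cell weight totals preserved
```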

A Study on the Weight Adjustment Method for Household Panel Survey (가구 패널조사에서의 가중치 조정에 관한 연구)

  • NamKung, Pyong;Byun, Jong-Seok;Lim, Chan-Soo
    • The Korean Journal of Applied Statistics / v.22 no.6 / pp.1315-1329 / 2009
  • A panel survey demands particular attention to response because of sample attrition and non-response, and because the population is generally not fixed but changes continuously, a rotation sample design can be used in place of a pure panel design. This paper compares the equal weight method, the Duncan weight, the design weight method, and the weight share method under a rotation sample design. Specifically, we compare the variance estimators of each existing method, and, after obtaining design weights from actual data, compare their precision through the relative efficiency gain measured by the coefficient of variation (CV).
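
As a rough illustration of the precision criterion used here, the following compares two toy weighting schemes by the coefficient of variation of the weighted total (a bootstrap over made-up data, not the paper's estimators):

```python
# Compare weighting schemes by the CV of the weighted total.
import numpy as np

rng = np.random.default_rng(1)
y = rng.lognormal(mean=3.0, sigma=0.5, size=500)   # toy survey variable
w_equal = np.full_like(y, 100.0)                    # equal weights
w_design = rng.uniform(50, 150, size=y.size)        # toy design weights

def cv_of_total(y, w, reps=1000):
    """Bootstrap CV of the weighted total under a given weighting scheme."""
    totals = []
    for _ in range(reps):
        idx = rng.integers(0, y.size, y.size)
        totals.append(np.sum(w[idx] * y[idx]))
    totals = np.asarray(totals)
    return totals.std() / totals.mean()

for name, w in [("equal", w_equal), ("design", w_design)]:
    print(f"{name:6s} weights: CV = {cv_of_total(y, w):.4f}")
```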

Bias adjusted estimation in a sample survey with linear response rate (응답률이 선형인 표본조사에서 편향 보정 추정)

  • Chung, Hee Young;Shin, Key-Il
    • The Korean Journal of Applied Statistics / v.32 no.4 / pp.631-642 / 2019
  • Many methods have been developed to address the inaccurate estimation caused by large numbers of item non-responses in sample surveys. However, non-response adjustment methods that assume random non-response produce a bias when the response rate is affected by the variable of interest. Chung and Shin (2017) and Min and Shin (2018) proposed methods that improve the accuracy of estimation by appropriately adjusting the bias generated when the response rate is a function of the variable of interest. In this study, we consider the case where the response-rate function is linear and the error of the superpopulation model follows a normal distribution, and we examine the effect of the stratum population size on the bias adjustment. The performance of the proposed estimator is examined through simulation studies and confirmed through real-data analysis.
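
A generic illustration of the setting, not the authors' estimator: when the response rate is linear in the study variable, the naive respondent mean is biased, and an inverse-probability-weighted mean removes the bias.

```python
# Bias from a response rate linear in y, corrected by IPW (Hajek) weighting.
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(loc=50, scale=10, size=100_000)   # toy population values
p = np.clip(0.1 + 0.006 * y, 0.05, 0.95)         # response rate linear in y
responded = rng.random(y.size) < p

naive = y[responded].mean()   # biased: high-y units respond more often
ipw = np.sum(y[responded] / p[responded]) / np.sum(1.0 / p[responded])

print(f"true mean  {y.mean():.2f}")
print(f"naive mean {naive:.2f}  (biased upward)")
print(f"IPW mean   {ipw:.2f}  (bias adjusted)")
```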

Initialization by using truncated distributions in artificial neural network (절단된 분포를 이용한 인공신경망에서의 초기값 설정방법)

  • Kim, MinJong;Cho, Sungchul;Jeong, Hyerin;Lee, YungSeop;Lim, Changwon
    • The Korean Journal of Applied Statistics / v.32 no.5 / pp.693-702 / 2019
  • Deep learning has gained popularity for classification and prediction tasks, and neural networks grow deeper as more data becomes available. Saturation, the phenomenon in which the gradient of an activation function approaches 0, can occur when weight values become too large, and it has drawn increasing attention because it limits the weights' ability to learn. To address it, Glorot and Bengio (Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249-256, 2010) argued that neural networks train efficiently when data flows well between layers, and proposed an initialization method that makes the variance of each layer's output equal to the variance of its input. In this paper, we propose a new initialization method based on truncated normal and truncated Cauchy distributions: while following the initialization of Glorot and Bengio (2010), we decide where to truncate the distribution and equate the input and output variances by setting them to the variance of the truncated distribution. This keeps the initial weights from growing too large while also keeping them away from zero. To compare the proposed method with existing methods, we conducted experiments on the MNIST and CIFAR-10 data using a DNN and a CNN, and the proposed method outperformed the existing methods in accuracy.
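
A minimal sketch of the idea, assuming symmetric truncation at ±2 standard deviations (the paper studies where to truncate): draw weights from a truncated normal scaled so its variance equals the Glorot target 2/(fan_in + fan_out).

```python
# Truncated-normal initializer with Glorot/Xavier variance matching.
# The cut point is an assumption for illustration. Requires scipy.
import numpy as np
from scipy.stats import truncnorm

def truncated_glorot(fan_in, fan_out, cut=2.0, rng=None):
    rng = rng or np.random.default_rng()
    target_var = 2.0 / (fan_in + fan_out)   # Glorot & Bengio (2010) target
    # Variance of a standard normal truncated to [-cut, cut]:
    base_var = truncnorm.var(-cut, cut)
    # Scale so the truncated draw has exactly the target variance.
    scale = np.sqrt(target_var / base_var)
    return truncnorm.rvs(-cut, cut, scale=scale, size=(fan_in, fan_out),
                         random_state=rng)

W = truncated_glorot(256, 128)
print(W.var(), 2.0 / (256 + 128))  # empirical variance is close to target
```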