• Title/Summary/Keyword: Feature Variable Extraction (특징 변수 추출)


Improve the Performance of People Detection using Fisher Linear Discriminant Analysis in Surveillance (서베일런스에서 피셔의 선형 판별 분석을 이용한 사람 검출의 성능 향상)

  • Kang, Sung-Kwan;Lee, Jung-Hyun
    • Journal of Digital Convergence / v.11 no.12 / pp.295-302 / 2013
  • Many reported methods assume that the people in an image or an image sequence have already been identified and localized. People detection is one of the most important factors affecting a system's performance, as it is a basis technology for detecting other objects, for human-computer interaction, and for motion recognition. In this paper, we present an efficient linear discriminant for multi-view people detection. Our approach is based on Fisher's linear discriminant, with which we define the training data for an efficient learning method. People detection is considerably difficult because it is influenced by people's poses and by changes in illumination. The proposed idea solves the multi-view, multi-scale people detection problem quickly and efficiently, making it well suited to detecting people automatically. We extract people using Fisher's linear discriminant with hierarchical models that are invariant to pose and background, and we estimate the pose of each detected person. The purpose of this paper is to classify people and non-people using Fisher's linear discriminant.
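
Below is a minimal sketch, not the paper's implementation, of the final classification step: Fisher's linear discriminant separating "people" from "non-people" feature vectors. The patch size, the synthetic training data, and the use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical training data: rows are flattened grayscale patches,
# labels are 1 for "people" and 0 for "non-people".
rng = np.random.default_rng(0)
X_people = rng.normal(0.6, 0.1, size=(200, 24 * 48))
X_background = rng.normal(0.4, 0.1, size=(200, 24 * 48))
X = np.vstack([X_people, X_background])
y = np.array([1] * 200 + [0] * 200)

# Fisher's LDA finds the projection that maximizes between-class scatter
# relative to within-class scatter, then thresholds the projected value.
fld = LinearDiscriminantAnalysis()
fld.fit(X, y)

patch = rng.normal(0.6, 0.1, size=(1, 24 * 48))  # a candidate window
print("people" if fld.predict(patch)[0] == 1 else "non-people")
```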

Topographic Analysis Using Wavelet-Based Digital Filters in the KR5 area, NE Equatorial Pacific (웨이브렛 디지털 필터를 이용한 북동태평양 KR5 지역의 지형 분석방법)

  • Jung, Mee-Sook;Lee, Tae-Gook;Kim, Hyun-Sub;Ko, Young-Tak;Park, Cheong-Kee;Kim, Ki-Hyune
    • Journal of the Korean Geophysical Society / v.9 no.4 / pp.311-320 / 2006
  • Digital filters designed using wavelet theory are applied to bathymetry data acquired from the KR5 area of the Korea Deep-sea Mining Area. The filters used in this study are a linear B-spline wavelet filter and the derivative of a cubic B-spline filter. With proper tuning of the digital filters, we can identify the location and orientation of abyssal hills and abyssal troughs in the bathymetry. The features obtained from the digital filters correlate well with the bathymetric image. This quantitative information, which can be used to understand the underlying geophysical processes, can be further processed to obtain the spacing, orientation, and distribution of the abyssal hills. This wavelet analysis of bathymetry provides good data for selecting the mining site.

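Below is a minimal sketch of the idea, assuming a gridded bathymetry array and using PyWavelets' biorthogonal (B-spline family) wavelets as a stand-in for the paper's linear and cubic B-spline filter design; the data here are synthetic.

```python
import numpy as np
import pywt

# Synthetic stand-in for a gridded bathymetry surface: ridge-and-trough
# relief on a ~5000 m deep seafloor plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 50, 256)
depth = -5000 + 100 * np.sin(0.8 * x)[None, :] + rng.normal(0, 5, (256, 256))

# One-level 2-D decomposition with a biorthogonal (B-spline family)
# wavelet; the detail bands emphasize short-wavelength relief.
approx, (horiz, vert, diag) = pywt.dwt2(depth, "bior2.2")

# Ridge and trough flanks appear as large-magnitude detail coefficients;
# a simple threshold marks candidate abyssal-hill fabric.
detail = np.hypot(horiz, vert)
mask = detail > 3 * detail.std()
print(f"{mask.mean():.1%} of cells flagged as hill/trough fabric")
```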

A Development of Automatic Lineament Extraction Algorithm from Landsat TM images for Geological Applications (지질학적 활용을 위한 Landsat TM 자료의 자동화된 선구조 추출 알고리즘의 개발)

  • 원중선;김상완;민경덕;이영훈
    • Korean Journal of Remote Sensing / v.14 no.2 / pp.175-195 / 1998
  • Automatic lineament extraction algorithms have been developed by various researchers for geological purposes using remotely sensed data. However, most of them are designed for a particular topographic model, for instance a rugged mountainous region or a flat basin. The most common topography in Korea is mountainous terrain adjoining alluvial plains, so it is difficult to apply previous algorithms to this area directly. In this study, a new algorithm for automatic lineament extraction from remotely sensed images is developed specifically for geological applications. An algorithm named DSTA (Dynamic Segment Tracing Algorithm) is developed to produce a binary image composed of linear and non-linear components. By utilizing a dynamic sub-window, the proposed algorithm effectively reduces both the look-direction bias associated with the sun's azimuth angle and the noise in low-contrast regions. It can successfully accommodate lineaments in alluvial plains as well as in mountainous regions. Two additional algorithms for estimating individual lineament vectors, named ALEHHT (Automatic Lineament Extraction by Hierarchical Hough Transform) and ALEGHT (Automatic Lineament Extraction by Generalized Hough Transform), which merge segments through the hierarchical Hough transform and the generalized Hough transform respectively, are also developed to generate geological lineaments. The merging operation proposed in this study uses three parameters: the angle between two lines ($\delta\beta$), the perpendicular distance ($d_{ij}$), and the distance between the midpoints of the lines ($d_n$). Test results of the developed algorithm using a Landsat TM image demonstrate that lineaments in alluvial plains as well as in rugged mountains are extracted extremely well; even lineaments parallel to the sun's azimuth angle are detected by this approach. Further study is, however, required to accommodate the effect of the quantization interval parameter ($d_\rho$) in ALEGHT for optimization.
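
Below is a minimal sketch of the Hough-transform vectorization stage, assuming OpenCV and a binary linear-component image of the kind DSTA would produce (the DSTA step itself is not reproduced). The merge rule keeps only the angle criterion, and all parameter values are assumptions.

```python
import cv2
import numpy as np

# Synthetic binary image with two line segments standing in for the
# DSTA linear-component output.
img = np.zeros((256, 256), dtype=np.uint8)
cv2.line(img, (20, 30), (200, 120), 255, 1)
cv2.line(img, (40, 200), (220, 60), 255, 1)

# Probabilistic Hough transform extracts line segments as vectors.
segments = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180, threshold=30,
                           minLineLength=40, maxLineGap=5)

def angle(seg):
    """Strike of a segment in degrees, folded into [0, 180)."""
    x1, y1, x2, y2 = seg
    return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180

# Merge step (simplified, no wrap-around handling): segments whose strike
# differs by less than delta_beta are grouped as one lineament; the paper
# additionally checks the perpendicular distance d_ij and the midpoint
# distance d_n.
delta_beta = 5.0
groups = []
for seg in segments[:, 0]:
    for g in groups:
        if abs(angle(seg) - angle(g[0])) < delta_beta:
            g.append(seg)
            break
    else:
        groups.append([seg])
print(f"{len(segments)} segments merged into {len(groups)} lineaments")
```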

A Passport Recognition and Face Verification Using Enhanced Fuzzy ART Based RBF Network and PCA Algorithm (개선된 퍼지 ART 기반 RBF 네트워크와 PCA 알고리즘을 이용한 여권 인식 및 얼굴 인증)

  • Kim Kwang-Baek
    • Journal of Intelligence and Information Systems / v.12 no.1 / pp.17-31 / 2006
  • In this paper, passport recognition and face verification methods that can automatically recognize passport codes and detect forged passports are proposed to improve the efficiency and systematic control of immigration management. Adjusting the slant is very important for character recognition and face verification, since slanted passport images adversely affect the recognition of individual codes and faces. Therefore, after smearing the passport image, the longest extracted string of characters is selected, and the angle is adjusted using the slant of the horizontal straight line that connects the thickness centers of the left and right parts of the string. Passport codes are extracted using the Sobel operator, horizontal smearing, and an 8-neighborhood contour tracking algorithm. The code strings are binarized by applying an iterative binarization method to the area of the extracted code strings and restored by applying a CDM mask to the binary string area, and the individual codes are then extracted by the 8-neighborhood contour tracking algorithm. The proposed method enhances the fuzzy ART algorithm so that it dynamically controls the vigilance parameter and applies it, with a fuzzy logic connection operator, to the middle layer of the RBF network. The face is authenticated by measuring the similarity between the feature vector of the facial image from the passport and the feature vectors of facial images from the database, both constructed with the PCA algorithm. In several tests using forged passports and passports with slanted images, the proposed method proved effective in recognizing passport codes and verifying facial images.

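Below is a minimal sketch of the horizontal-smearing idea used to locate the code strings, applied to a synthetic binary image; the gap length is an assumption, and the later CDM-mask restoration and fuzzy ART RBF stages are not reproduced.

```python
import numpy as np

def horizontal_smear(binary, gap=8):
    """Fill white gaps shorter than `gap` between black pixels per row."""
    out = binary.copy()
    for row in out:               # each row is a view, edited in place
        black = np.flatnonzero(row)
        for a, b in zip(black[:-1], black[1:]):
            if 0 < b - a <= gap:
                row[a:b] = 1
    return out

# Synthetic "passport" image: sparse noise plus a dense character band
# near the bottom, standing in for the MRZ code strings.
rng = np.random.default_rng(2)
img = (rng.random((120, 400)) < 0.02).astype(np.uint8)
img[100:110, 20:380] = (rng.random((10, 360)) < 0.5).astype(np.uint8)

# After smearing, the code band becomes the longest filled run per row,
# so the row with the largest fill locates the string.
smeared = horizontal_smear(img)
band = int(np.argmax(smeared.sum(axis=1)))
print(f"longest smeared string found around row {band}")
```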

Classification and discrimination of Excel radial charts using statistical shape analysis (통계적 형상분석을 이용한 엑셀 방사형 차트의 분류와 판별)

  • Seungeon Lee;Jun Hong Kim;Yeonseok Choi;Yong-Seok Choi
    • The Korean Journal of Applied Statistics / v.37 no.1 / pp.73-86 / 2024
  • An Excel radial chart is a very useful graphical method for delivering information about numerical data. However, it is not easy to discriminate or classify many individuals with it. In this case, after representing each individual of a radial chart as a shape, we need to apply shape analysis. For a radial chart, as many landmarks are formed as there are variables representing the characteristics of the object, so we consider the shape that connects them with lines. If the shape becomes complicated because of a large number of variables, it is difficult to grasp even when visualized as a radial chart, so principal component analysis (PCA) is performed on the variables to create a visually effective shape. The classification tables and classification rates are checked by applying traditional discriminant analysis, support vector machines (SVM), and artificial neural networks (ANN) before and after principal component analysis. In addition, the difference in discrimination between generalized Procrustes analysis (GPA) coordinates and Bookstein coordinates is compared. Bookstein coordinates, which are obtained by normalizing the position, rotation, and scale of the shape with respect to the base landmarks, show a higher classification rate than GPA coordinates.
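
Below is a minimal sketch of the shaping-and-classification pipeline, assuming each row of numeric data becomes a polygon of landmarks on equally spaced spokes; PCA plus a linear SVM stand in for the full comparison, the GPA/Bookstein alignments themselves are not reproduced, and all data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n, p = 60, 8                      # 60 individuals, 8 variables/spokes
X = np.vstack([rng.normal(1.0, 0.2, (30, p)),
               rng.normal(1.5, 0.2, (30, p))])
y = np.array([0] * 30 + [1] * 30)

# Place each variable value on its spoke: landmark k sits at angle
# 2*pi*k/p, at a radius equal to the variable value.
theta = 2 * np.pi * np.arange(p) / p
landmarks = np.stack([X * np.cos(theta), X * np.sin(theta)], axis=-1)
flat = landmarks.reshape(n, -1)   # (x, y) pairs flattened per shape

# Reduce the landmark coordinates, then classify the shapes.
coords = PCA(n_components=4).fit_transform(flat)
clf = SVC(kernel="linear").fit(coords, y)
print(f"training classification rate: {clf.score(coords, y):.2f}")
```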

Clustering Performance Analysis of Autoencoder with Skip Connection (스킵연결이 적용된 오토인코더 모델의 클러스터링 성능 분석)

  • Jo, In-su;Kang, Yunhee;Choi, Dong-bin;Park, Young B.
    • KIPS Transactions on Software and Data Engineering / v.9 no.12 / pp.403-410 / 2020
  • In addition to research on noise removal and super-resolution using the data restoration (output) function of autoencoders, research on improving clustering performance using the dimension-reduction function of autoencoders is being actively conducted. The clustering function and the data restoration function of an autoencoder have in common that both improve through the same training. Based on this characteristic, this study conducted an experiment to see whether an autoencoder model designed for excellent data restoration performance is also superior in clustering performance. A skip-connection technique was used to design an autoencoder with excellent data restoration performance. The restoration performance and clustering performance of the autoencoder models with and without the skip connection were presented as graphs and visualizations. The restoration performance increased, but the clustering performance decreased. This result indicates that, for neural network models such as autoencoders, a good output does not guarantee that each layer has learned the characteristics of the data well. Lastly, the degradation in clustering performance was compensated for by using both the latent code and the skip connection. This study is a preliminary study toward solving the Hanja Unicode problem by clustering.
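
Below is a minimal PyTorch sketch of the architecture being compared: an autoencoder whose decoder reuses an encoder feature through a skip connection, with the latent code exposed for clustering. The layer sizes and the use of PyTorch are assumptions.

```python
import torch
import torch.nn as nn

class SkipAutoencoder(nn.Module):
    def __init__(self, d_in=784, d_hid=128, d_lat=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Linear(d_hid, d_lat), nn.ReLU())
        self.dec1 = nn.Sequential(nn.Linear(d_lat, d_hid), nn.ReLU())
        self.dec2 = nn.Linear(d_hid, d_in)

    def forward(self, x):
        h = self.enc1(x)
        z = self.enc2(h)           # latent code used for clustering
        # Skip connection: the decoder reuses the encoder feature h,
        # which helps reconstruction but can let the latent code carry
        # less information, matching the trade-off described above.
        out = self.dec2(self.dec1(z) + h)
        return out, z

model = SkipAutoencoder()
x = torch.rand(32, 784)            # a synthetic mini-batch
recon, latent = model(x)
loss = nn.functional.mse_loss(recon, x)
loss.backward()
print(recon.shape, latent.shape)
```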

The Effects of Transformational Leadership on Employees' Job Satisfaction & Organizational Identification in the Korean Hotel Industry (호텔기업 종사원의 변혁적 리더십이 직무만족과 조직 동일시에 미치는 영향에 관한 연구)

  • Lee, Jun-Hyuk;Kim, Dong-Ki;Park, Ki-Ho
    • Journal of Global Scholars of Marketing Science / v.15 no.2 / pp.27-48 / 2005
  • This study analyzed the structural elements of transformational leadership with respect to hotel employees' job satisfaction and organizational identification, and inquired into how moderating variables such as demographic characteristics and hotel features affect transformational leadership. The ultimate purpose of this study was to provide managerial implications to hotel business operators and hotel employees. The main results are as follows. First, in the factor analysis of transformational leadership and hotel employees' job satisfaction, 18 variables were reduced to two factors, 'obliging leadership' and 'vision leadership', in the area of transformational leadership, and 31 variables were reduced to four factors: 'welfare and work environment', 'ability display and job stability', 'colleague relationship and job performance', and 'company policy'. Second, in the stepwise regression analysis of whether the type of transformational leadership has a significant effect on employees' job satisfaction and organizational identification, 'vision leadership' appeared to have a significant effect on job satisfaction, and both 'vision leadership' and 'obliging leadership' appeared to have a significant effect on hotel employees' organizational identification. Third, in the one-way ANOVA and t-tests examining differences in the type of transformational leadership according to demographic and general characteristics, statistically significant differences were found according to income level, current position, work experience, type of hotel operation, and experience of job change.

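Below is a minimal sketch of the third analysis step (a one-way ANOVA of leadership scores across demographic groups), with synthetic Likert-scale scores standing in for the survey data; the group sizes and means are assumptions.

```python
import numpy as np
from scipy import stats

# Synthetic 5-point Likert leadership scores for three income groups.
rng = np.random.default_rng(4)
low = rng.normal(3.2, 0.5, 40)
mid = rng.normal(3.5, 0.5, 40)
high = rng.normal(3.8, 0.5, 40)

# One-way ANOVA: does mean perceived leadership differ by income level?
f_stat, p_val = stats.f_oneway(low, mid, high)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
```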

Drought frequency analysis for multi-purpose dam inflow using bivariate Copula model (이변량 Copula 모형을 활용한 다목적댐 유입량 가뭄빈도해석)

  • Sung, Jiyoung;Kim, Eunji;Kang, Boosik
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.340-340 / 2021
  • Because of the nature of drought, it is difficult to define its onset and end clearly, so it is common to set a reference hydrological quantity and define the deficit and duration against it. Either rainfall or runoff can be used as the target quantity, but because of the lag and attenuation effects between the two components, the frequency-analysis results inevitably differ, so the quantity must be chosen according to the purpose of use. Earlier drought frequency analyses defined duration and severity from rainfall, but the intermittent occurrence of rainfall and its limited perceptibility have been pointed out as fundamental problems. In this study, using run characteristics of the dam-inflow time series, various flow-regime levels were used as threshold discharges to extract drought events with their onsets and ends, and the duration and cumulative deficit were computed and set as the variables for drought frequency analysis. To interpret the complex interrelation between the two variables, a bivariate drought frequency analysis using a copula function was performed. First, the inflows of Soyanggang Dam ('74-'19) and Chungju Dam ('86-'19) were chosen as the study subjects, and trend analysis of the inflows of the two basins was used to identify time dependence. Among the quantiles used in flow-regime analysis, the normal flow was used as the threshold value, and the maximum duration and cumulative deficit of each year were extracted. Before the copula-based analysis, a GEV distribution was applied to duration and a log-normal distribution to cumulative deficit, and the univariate cumulative probability distributions were computed to derive return periods. The Clayton copula function was then applied to the bivariate frequency analysis, and copula-based bivariate return periods and SDF curves were derived. As a result, the most severe drought at Soyanggang Dam occurred in 1996, with univariate return periods of 9.11 years by duration and 17.26 years by cumulative deficit and a copula return period of 141.19 years; the most severe drought at Chungju Dam occurred in 2014, with univariate return periods of 17.76 years by duration and 18.72 years by cumulative deficit and a copula return period of 184.19 years. In both cases the copula return period was higher than the return periods from the univariate analyses. Event extraction based on a threshold applied to the run time series of a reference discharge, combined with copula-based frequency analysis, is effective in reproducing the correlation and distributional characteristics of the data used in drought analysis. This suggests that copula-based drought frequency analysis can yield more realistic return periods. If the number of events available for the frequency analysis is increased by adjusting the threshold, more accurate return periods could be derived to define hydrological drought.

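Below is a minimal sketch of the procedure, assuming synthetic annual drought events: a GEV marginal for duration, a log-normal marginal for cumulative deficit, and a Clayton copula with a fixed (not fitted) parameter for the joint "AND" return period of an annual series.

```python
import numpy as np
from scipy import stats

# Synthetic annual drought events (one per year, as in the abstract).
rng = np.random.default_rng(5)
duration = stats.genextreme.rvs(-0.1, loc=30, scale=10, size=40,
                                random_state=rng)   # days
deficit = stats.lognorm.rvs(0.6, scale=50, size=40,
                            random_state=rng)       # volume units

# Fit the marginals and take non-exceedance probabilities of the
# largest observed event in each variable.
gev = stats.genextreme(*stats.genextreme.fit(duration))
logn = stats.lognorm(*stats.lognorm.fit(deficit, floc=0))
u, v = gev.cdf(duration.max()), logn.cdf(deficit.max())

def clayton(u, v, theta=2.0):
    """Clayton copula C(u, v); theta would be fitted in practice."""
    return (u ** -theta + v ** -theta - 1) ** (-1 / theta)

# "AND" joint return period: both duration and deficit are exceeded.
p_and = 1 - u - v + clayton(u, v)
print(f"joint return period ~ {1 / p_and:.1f} years")
```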

Comparison of Prediction Accuracy Between Classification and Convolution Algorithm in Fault Diagnosis of Rotatory Machines at Varying Speed (회전수가 변하는 기기의 고장진단에 있어서 특성 기반 분류와 합성곱 기반 알고리즘의 예측 정확도 비교)

  • Moon, Ki-Yeong;Kim, Hyung-Jin;Hwang, Se-Yun;Lee, Jang Hyun
    • Journal of Navigation and Port Research / v.46 no.3 / pp.280-288 / 2022
  • This study examined the diagnosis of abnormalities and faults in equipment whose rotational speed changes even during regular operation. The purpose was to suggest a procedure that can properly apply machine learning to time series data that are non-stationary because of the changing rotational speed. Anomaly and fault diagnosis was performed using machine learning: k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), and Random Forest. To compare the diagnostic accuracy, an autoencoder was used for anomaly detection and a convolution-based Conv1D network was additionally used for fault diagnosis. Feature vectors comprising statistical and frequency attributes were extracted, and normalization and dimensionality reduction were applied to them. Changes in the diagnostic accuracy of machine learning according to feature selection, normalization, and dimensionality reduction are explained, and the hyperparameter optimization process and layer structure are described for each algorithm. Finally, the results show that machine learning can accurately diagnose the failure of a variable-speed rotating machine with appropriate feature treatment, although convolution algorithms have been widely applied to this problem.
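
Below is a minimal sketch of the feature-based branch: statistical features extracted from vibration windows, standardization, PCA, and a k-NN / SVM / random-forest comparison. The signals are synthetic and the Conv1D branch is not reproduced.

```python
import numpy as np
from scipy import stats
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)

def make_window(fault):
    """Synthetic vibration window at a randomly varying speed."""
    t = np.linspace(0, 1, 1024)
    f = rng.uniform(20, 40)              # varying rotational speed
    sig = np.sin(2 * np.pi * f * t) + rng.normal(0, 0.3, t.size)
    if fault:                            # a fault adds impulsive content
        sig += (rng.random(t.size) < 0.01) * rng.normal(0, 3, t.size)
    return sig

def features(sig):
    """Statistical feature vector: std, skew, kurtosis, peak, RMS."""
    return [sig.std(), stats.skew(sig), stats.kurtosis(sig),
            np.abs(sig).max(), np.sqrt(np.mean(sig ** 2))]

X = np.array([features(make_window(i % 2)) for i in range(200)])
y = np.arange(200) % 2                   # 0 = normal, 1 = fault

# Normalization and dimensionality reduction, then the comparison.
X = StandardScaler().fit_transform(X)
X = PCA(n_components=3).fit_transform(X)
for clf in (KNeighborsClassifier(), SVC(), RandomForestClassifier()):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, f"{score:.2f}")
```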

Development of the KOSPI (Korea Composite Stock Price Index) forecast model using neural network and statistical methods (신경 회로망과 통계적 기법을 이용한 종합주가지수 예측 모형의 개발)

  • Lee, Eun-Jin;Min, Chul-Hong;Kim, Tae-Seon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.5 / pp.95-101 / 2008
  • Modeling stock price forecasts has been considered one of the most difficult problems, since stock prices are highly correlated with various environmental conditions, including the economic and political situation. In this paper, we propose an agent-system approach to predicting the Korea Composite Stock Price Index (KOSPI) using neural networks and statistical methods. To minimize the mean and the variation of the prediction error, the agent system includes sub-agent modules for feature extraction, variable selection, forecast-engine selection, and analysis of the forecasting results. As a first step in developing the agent system for KOSPI forecasting, twelve economic indices are selected from twenty-two basic standard economic indices using principal component analysis. From the selected twelve indices, the prediction-model input variables are chosen again using the best-subsets regression method. Two different types of data are tested for KOSPI forecasting, and the prediction results showed a root-mean-squared error of 11.92 points over thirty consecutive days of prediction. It is also shown that the proposed agent-system approach to KOSPI forecasting is effective, since the required types and numbers of prediction variables are time-varying, so adaptive selection of the modeling inputs and prediction engine is essential for a reliable and accurate forecast model.
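
Below is a minimal sketch of the variable-selection pipeline with synthetic series: PCA screening of a panel of indices followed by a regression forecast engine. Greedy forward selection stands in for best-subsets regression, which is an approximation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Synthetic panel: 22 random-walk economic indices and a target index.
rng = np.random.default_rng(7)
n_days, n_idx = 300, 22
indices = rng.normal(size=(n_days, n_idx)).cumsum(axis=0)
kospi = indices[:, :3].sum(axis=1) + rng.normal(0, 1, n_days)

# Step 1: PCA screening, keeping components covering ~90% of variance.
Z = PCA(n_components=0.9).fit_transform(indices)
print("components kept:", Z.shape[1])

# Step 2: greedy forward selection over the retained components
# (a cheap stand-in for best-subsets regression).
chosen, best = [], -np.inf
for _ in range(min(4, Z.shape[1])):
    scores = [(LinearRegression().fit(Z[:, chosen + [j]], kospi)
               .score(Z[:, chosen + [j]], kospi), j)
              for j in range(Z.shape[1]) if j not in chosen]
    score, j = max(scores)
    if score <= best:
        break
    best, chosen = score, chosen + [j]

model = LinearRegression().fit(Z[:, chosen], kospi)
rmse = np.sqrt(np.mean((model.predict(Z[:, chosen]) - kospi) ** 2))
print(f"in-sample RMSE: {rmse:.2f} points")
```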