• Title/Summary/Keyword: Multilayer Perceptron Neural Network (다층퍼셉트론 신경망)


Gaze Detection by Computing Facial Rotation and Translation (얼굴의 회전 및 이동 분석에 의한 응시 위치 파악)

  • Lee, Jeong-Jun;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.5
    • /
    • pp.535-543
    • /
    • 2002
  • In this paper, we propose a new gaze detection method using 2-D facial images captured by a camera on top of the monitor. We consider only facial rotation and translation, not eye movements. The proposed method computes the gaze point caused by the facial rotation and the amount of the facial translation separately, and by combining the two, the final gaze point on the monitor screen is obtained. The gaze point caused by facial rotation is detected with a neural network (a multi-layer perceptron) whose inputs are the 2-D geometric changes of the facial feature points, and the amount of facial translation is estimated with image processing algorithms in real time. Experimental results show that the gaze detection accuracy between the computed positions and the real ones is about 2.11 inches RMS error when the distance between the user and a 19-inch monitor is about 50~70 cm. The processing time is about 0.7 second on a Pentium PC (233 MHz) with 320×240 pixel images.
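
As a rough illustration of the neural-network step described in the abstract above, the sketch below trains a small multi-layer perceptron (scikit-learn's MLPRegressor) to map 2-D facial-feature displacements to a gaze point on the screen. The feature layout, network size, and data are assumptions for illustration, not the authors' configuration.

```python
# Hypothetical sketch: an MLP that maps 2-D geometric changes of facial
# feature points to a gaze point (x, y) on the monitor; all sizes and
# data below are placeholders, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Assume each sample holds displacements of 6 facial feature points
# (eye corners, nostrils, lip corners) relative to a frontal reference:
# 6 points x (dx, dy) = 12 inputs.
X_train = rng.normal(size=(500, 12))          # placeholder training features
y_train = rng.uniform(0, 1, size=(500, 2))    # placeholder gaze targets (normalized screen coords)

mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

# Predict the gaze point for a new frame's feature displacements.
x_new = rng.normal(size=(1, 12))
gaze_xy = mlp.predict(x_new)[0]
print("predicted gaze (normalized):", gaze_xy)
```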

Dynamic Gesture Recognition for the Remote Camera Robot Control (원격 카메라 로봇 제어를 위한 동적 제스처 인식)

  • Lee Ju-Won;Lee Byung-Ro
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.7
    • /
    • pp.1480-1487
    • /
    • 2004
  • This study proposes a novel gesture recognition method for remote camera robot control. To recognize dynamic gestures, image segmentation is performed as a preprocessing step. Conventional methods require a large amount of color information about the object (hand) image for effective segmentation, and in the recognition step they require many features for each object. To improve on these problems, this study proposes a method for recognizing dynamic hand gestures that consists of the MMS (Max-Min Search) method to segment the object image, the MSM (Mean Space Mapping) and COG (Center of Gravity) methods to extract image features, and an MLPNN (Multi-Layer Perceptron Neural Network) to recognize the dynamic gestures. In the experimental results, the recognition rate of the proposed method was above 90%, which shows that it can serve as an HCI (Human Computer Interface) device for remote robot control.
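
A minimal sketch of the recognition stage only, assuming a feature vector such as the MMS/MSM/COG pipeline might produce: an MLP classifier maps the feature vector to a robot-control command. Feature dimension, gesture classes, and network size are invented for illustration.

```python
# Sketch of the MLPNN recognition stage over extracted gesture features.
# The feature extraction (MMS/MSM/COG) is not reproduced here.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

n_features = 20                                      # assumed length of a gesture feature vector
gestures = ["up", "down", "left", "right", "zoom"]   # hypothetical command set

X_train = rng.normal(size=(300, n_features))         # placeholder feature vectors
y_train = rng.integers(0, len(gestures), size=300)   # placeholder gesture labels

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=1)
clf.fit(X_train, y_train)

# Classify a new gesture feature vector into a camera-robot command.
cmd = gestures[clf.predict(rng.normal(size=(1, n_features)))[0]]
print("recognized command:", cmd)
```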

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • Park, Kang-Ryoung (박강령)
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.4C
    • /
    • pp.494-504
    • /
    • 2004
  • Research on gaze detection has advanced considerably, with many applications. Most previous studies rely only on image processing algorithms, so they take much processing time and have many constraints. In our work, we implement gaze detection with a computer vision system using a single IR-LED based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. Finally, the gaze position caused by facial movement is computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position caused by eye movement. Experimental results show that the facial and eye gaze position on the monitor can be obtained with an accuracy of about 4.2 cm RMS error between the computed and real positions.
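
A geometric sketch of the facial-gaze step described in the abstract above: take the 3D positions of three facial features, form the plane they define, and intersect the plane's normal ray with the monitor plane. The monitor is assumed here to lie at z = 0 and the coordinates are illustrative, not from the paper.

```python
# Compute a face-plane normal from three 3D feature points and intersect
# the normal ray with an assumed monitor plane at z = 0 (camera frame, cm).
import numpy as np

p1 = np.array([0.0, 5.0, 60.0])   # e.g. left eye corner
p2 = np.array([6.0, 5.0, 61.0])   # e.g. right eye corner
p3 = np.array([3.0, 0.0, 63.0])   # e.g. nose tip

normal = np.cross(p2 - p1, p3 - p1)
normal /= np.linalg.norm(normal)
if normal[2] > 0:                  # orient the normal toward the monitor (negative z)
    normal = -normal

origin = (p1 + p2 + p3) / 3.0      # ray origin at the facial-plane centroid
t = -origin[2] / normal[2]         # parameter where the ray meets z = 0
gaze_point = origin + t * normal   # (x, y, 0): gaze position on the monitor plane
print("gaze point on monitor plane:", gaze_point[:2])
```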

Prediction of Air Temperature and Relative Humidity in Greenhouse via a Multilayer Perceptron Using Environmental Factors (환경요인을 이용한 다층 퍼셉트론 기반 온실 내 기온 및 상대습도 예측)

  • Choi, Hayoung;Moon, Taewon;Jung, Dae Ho;Son, Jung Eek
    • Journal of Bio-Environment Control
    • /
    • v.28 no.2
    • /
    • pp.95-103
    • /
    • 2019
  • Temperature and relative humidity are important factors in crop cultivation and should be properly controlled to improve crop yield and quality. In order to control the environment accurately, we need to predict how the environment will change in the future. The objective of this study was to predict air temperature and relative humidity at a future time by using a multilayer perceptron (MLP). The data required to train the MLP were collected every 10 min from Oct. 1, 2016 to Feb. 28, 2018 in an eight-span greenhouse (1,032 m²) cultivating mango (Mangifera indica cv. Irwin). The inputs for the MLP were greenhouse inside and outside environment data and the set-up and operating values of environment control devices. Using these data, the MLP was trained to predict the air temperature and relative humidity 10 to 120 min into the future. Considering the four typical seasons in Korea, three days of data from each season were used as test data. The MLP was optimized with four hidden layers and 128 nodes for air temperature (R² = 0.988) and with four hidden layers and 64 nodes for relative humidity (R² = 0.990). Due to the characteristics of the MLP, the accuracy decreased as the prediction horizon became longer. However, air temperature and relative humidity were properly predicted regardless of the environmental changes that varied from season to season. For specific events such as spray irrigation, however, the number of training samples was too small, resulting in poor predictive accuracy. In this study, air temperature and relative humidity were appropriately predicted through optimization of the MLP, but the results were limited to the experimental greenhouse. Therefore, it is necessary to collect more data from greenhouses in various places and to modify the structure of the neural network for generalization.
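
The abstract reports the optimized network shape (four hidden layers, 128 nodes for air temperature, 64 for relative humidity). The sketch below reproduces only that shape with scikit-learn; the feature list and data are placeholders, not the study's greenhouse dataset.

```python
# Sketch of an MLP with the reported shape for the air-temperature model:
# four hidden layers of 128 nodes, trained on 10-min environment samples.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Assumed inputs: inside/outside temperature and humidity, radiation,
# and control-device set points / operating values (10 features here).
X = rng.normal(size=(2000, 10))
y_temp = rng.normal(loc=22.0, scale=3.0, size=2000)   # placeholder air temperature 10 min ahead

temp_model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(128, 128, 128, 128), max_iter=500, random_state=2),
)
temp_model.fit(X, y_temp)
print("predicted T(+10 min):", temp_model.predict(X[:1])[0])
```

A relative-humidity model would follow the same pattern with hidden_layer_sizes=(64, 64, 64, 64).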

Development of prediction model identifying high-risk older persons in need of long-term care (장기요양 필요 발생의 고위험 대상자 발굴을 위한 예측모형 개발)

  • Song, Mi Kyung;Park, Yeongwoo;Han, Eun-Jeong
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.4
    • /
    • pp.457-468
    • /
    • 2022
  • In an aged society, it is important to prevent older people from developing disabilities that require long-term care. The purpose of this study is to develop a prediction model to identify high-risk groups who are likely to become beneficiaries of Long-Term Care Insurance. This is a retrospective study using the National Health Insurance Service (NHIS) database collected on the study subjects in the past. The study subjects are 7,724,101 people over 65 years of age registered for medical insurance. To develop the prediction model, we used logistic regression, decision tree, random forest, and a multi-layer perceptron neural network. Finally, random forest was selected as the prediction model based on the performance of the models obtained through internal and external validation. Random forest could predict about 90% of the older people in need of long-term care using the database, without any information from the assessment of eligibility for long-term care. The findings might be useful in evidence-based health management for prevention services and can contribute to preemptively identifying older people who need preventive services.
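
A compact sketch of the model comparison the abstract describes, on synthetic stand-in data: logistic regression, decision tree, random forest, and an MLP are trained and scored, after which the best-validated model (random forest in the paper) would be selected. Features, labels, and the AUC metric are illustrative assumptions.

```python
# Compare the four model families named in the abstract on placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 15))                                          # placeholder NHIS-style features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 1.5).astype(int)  # placeholder need-of-care label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=3),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=3),
    "mlp": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=3),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```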

Ensemble Design of Machine Learning Techniques: Experimental Verification by Prediction of Drifter Trajectory (앙상블을 이용한 기계학습 기법의 설계: 뜰개 이동경로 예측을 통한 실험적 검증)

  • Lee, Chan-Jae;Kim, Yong-Hyuk
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.8 no.3
    • /
    • pp.57-67
    • /
    • 2018
  • An ensemble is a unified approach for achieving better performance by combining multiple algorithms in machine learning. In this paper, we introduce boosting and bagging, which have been widely used as ensemble techniques, and design a method using support vector regression, a radial basis function network, a Gaussian process, and a multilayer perceptron. In addition, our experiment was extended by adding a recurrent neural network and the MOHID numerical model. The drifter data used for experimental verification consist of 683 observations in seven regions. The performance of our ensemble technique is verified by comparison with each of the four algorithms. Mean absolute error was adopted as the verification metric. The presented methods are ensemble models based on bagging, boosting, and machine learning. The error rate was calculated by assigning equal and different weight values to each unit model in the ensemble. The ensemble model using machine learning showed a 61.7% improvement compared to the average of the four machine learning techniques.
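
The sketch below illustrates only the weighted-average ensemble idea mentioned in the abstract, with weights taken as the inverse of each unit model's validation MAE. The paper's RBF network and MOHID model are not reproduced; the unit models, data, and weighting rule are assumptions for illustration.

```python
# Weighted-average ensemble over several regressors, weighted by inverse
# validation MAE (lower error -> larger weight).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(4)
X = rng.normal(size=(683, 6))                                  # placeholder drifter features
y = X[:, 0] * 0.8 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=683)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=4)

models = [SVR(), GaussianProcessRegressor(random_state=4),
          MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=4)]
for m in models:
    m.fit(X_tr, y_tr)

maes = np.array([mean_absolute_error(y_va, m.predict(X_va)) for m in models])
weights = (1.0 / maes) / (1.0 / maes).sum()

ensemble_pred = sum(w * m.predict(X_va) for w, m in zip(weights, models))
print("ensemble MAE:", mean_absolute_error(y_va, ensemble_pred))
```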

Demand Forecast For Empty Containers Using MLP (MLP를 이용한 공컨테이너 수요예측)

  • DongYun Kim;SunHo Bang;Jiyoung Jang;KwangSup Shin
    • The Journal of Bigdata
    • /
    • v.6 no.2
    • /
    • pp.85-98
    • /
    • 2021
  • The COVID-19 pandemic further aggravated the imbalance in container-based import and export volumes among countries, which worsened the shortage of empty containers. Since securing as many empty containers as demand requires is important for stable and efficient port operation, measures to predict demand for empty containers using various techniques have been studied. However, previous work was based on long-term forecasts on a monthly or annual basis rather than demand forecasts that can be used directly by ports and shipping companies. In this study, a daily and weekly prediction method using an artificial neural network is presented. In detail, the demand forecasting model was developed using a multi-layer perceptron and a multiple linear regression model. In order to overcome the limitation of scarce data, the data were constructed considering the business process between loaded and empty containers, in which a fully loaded container is converted into an empty container. The results of the numerical experiments show that a practically applicable forecasting model was developed, even though it does not achieve perfect accuracy.
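
A rough sketch comparing a multi-layer perceptron with a multiple linear regression model for short-horizon demand forecasting, in the spirit of the abstract. The lag structure, error metric, and data are invented for illustration and do not reflect the study's dataset.

```python
# Compare MLP and multiple linear regression on a placeholder daily series
# where empty-container demand follows lagged loaded-container volumes.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(5)

loaded = rng.poisson(lam=100, size=400).astype(float)             # placeholder loaded-container volume
demand = 0.7 * np.roll(loaded, 7) + rng.normal(scale=5, size=400) # placeholder empty-container demand
X = np.column_stack([np.roll(loaded, k) for k in (1, 7, 14)])[20:]  # lagged features
y = demand[20:]

X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

lin = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=5).fit(X_tr, y_tr)

print("MLR MAPE:", mean_absolute_percentage_error(y_te, lin.predict(X_te)))
print("MLP MAPE:", mean_absolute_percentage_error(y_te, mlp.predict(X_te)))
```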

Steganalysis Using Histogram Characteristic and Statistical Moments of Wavelet Subbands (웨이블릿 부대역의 히스토그램 특성과 통계적 모멘트를 이용한 스테그분석)

  • Hyun, Seung-Hwa;Park, Tae-Hee;Kim, Young-In;Kim, Yoo-Shin;Eom, Il-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.6
    • /
    • pp.57-65
    • /
    • 2010
  • In this paper, we present a universal steganalysis scheme. The proposed method extracts two types of features. The first feature set is extracted from the histogram characteristics of the wavelet subbands; the second feature set is determined by the statistical moments of wavelet characteristic functions. A 3-level wavelet decomposition is performed for stego and cover images using the Haar wavelet basis. We extract one feature from each of the 9 high-frequency subbands among the 12 subbands, and the number of second-type features is 39, for a total of 48 features used for steganalysis. A multi-layer perceptron (MLP) is applied as the classifier to distinguish cover images from stego images. To evaluate the proposed steganalysis method, we use the CorelDraw image database and test its performance against the LSB, spread spectrum data hiding, blind spread spectrum data hiding, and F5 data hiding methods. The proposed method outperforms previous methods in sensitivity, specificity, error rate, and area under the ROC curve.
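
The sketch below shows the general pipeline shape, assuming PyWavelets for the 3-level Haar decomposition: simple per-subband statistics from the 9 detail subbands stand in for the paper's histogram and characteristic-function moments, and an MLP classifies cover vs. stego. Feature definitions, image data, and the "stego" perturbation are all assumptions.

```python
# Haar wavelet decomposition -> per-subband statistics -> MLP classifier.
# The exact 48-feature set of the paper is not reproduced here.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(image):
    """Return simple moments for the 9 detail subbands of a 3-level Haar decomposition."""
    coeffs = pywt.wavedec2(image, "haar", level=3)
    feats = []
    for level in coeffs[1:]:                 # 3 levels x (cH, cV, cD) = 9 subbands
        for band in level:
            b = band.ravel()
            feats += [b.mean(), b.std(), np.mean(np.abs(b) ** 3)]
    return np.array(feats)

rng = np.random.default_rng(6)
cover = rng.integers(0, 256, size=(40, 128, 128)).astype(float)   # placeholder "cover" images
stego = cover + rng.choice([-1.0, 0.0, 1.0], size=cover.shape)    # crude LSB-like perturbation

X = np.array([wavelet_features(img) for img in np.concatenate([cover, stego])])
y = np.array([0] * len(cover) + [1] * len(stego))

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=6)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```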

Face Detection Using A Selectively Attentional Hough Transform and Neural Network (선택적 주의집중 Hough 변환과 신경망을 이용한 얼굴 검출)

  • Choi, Il;Seo, Jung-Ik;Chien, Sung-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.4
    • /
    • pp.93-101
    • /
    • 2004
  • A face boundary can be approximated by an ellipse with five parameters. This property allows an ellipse detection algorithm to be adapted to detecting faces. However, constructing a huge five-dimensional parameter space for a Hough transform is quite impractical. Accordingly, we propose a selectively attentional Hough transform method for detecting faces from a symmetric contour in an image. The idea is based on the use of a constant aspect ratio for a face, gradient information, and scan-line-based orientation decomposition, thereby allowing the five-dimensional problem to be decomposed into a two-dimensional one that computes a center with a specific orientation and a one-dimensional one that estimates the short axis. In addition, a two-point selection constraint using geometric and gradient information is employed to increase speed and cope with cluttered backgrounds. After detecting candidate face regions using the proposed Hough transform, a multi-layer perceptron verifier is adopted to reject false positives. The proposed method was found to be relatively fast and promising.
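
A minimal sketch of the verification stage only: candidate regions proposed by the Hough step are resized to a fixed patch and passed to an MLP that accepts or rejects them as faces. The Hough transform itself is not reproduced; the patch size and training data are assumptions for illustration.

```python
# MLP verifier that rejects false-positive face candidates.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
PATCH = 20                                         # assumed verifier input: 20x20 grayscale pixels

X_train = rng.uniform(size=(400, PATCH * PATCH))   # placeholder face / non-face patches
y_train = rng.integers(0, 2, size=400)             # 1 = face, 0 = false positive

verifier = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=7)
verifier.fit(X_train, y_train)

def verify(candidates):
    """Keep only the Hough candidates the MLP labels as faces."""
    patches = np.array([c.ravel() for c in candidates])
    keep = verifier.predict(patches) == 1
    return [c for c, k in zip(candidates, keep) if k]

candidates = [rng.uniform(size=(PATCH, PATCH)) for _ in range(5)]
print("faces kept:", len(verify(candidates)))
```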

Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • Park, Kang-Ryoung (박강령)
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.2
    • /
    • pp.79-88
    • /
    • 2004
  • Gaze detection is the task of locating, by computer vision, the position on a monitor screen where a user is looking. Gaze detection systems have numerous fields of application: they are applicable to man-machine interfaces that help the handicapped use computers and to view control in three-dimensional simulation programs. In our work, we implement gaze detection with a computer vision system using a single IR-LED based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. Finally, the gaze position caused by facial movement is computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position caused by eye movement. Experimental results show that the facial and eye gaze position on the monitor can be obtained with an accuracy of about 4.8 cm RMS error between the computed and real positions.
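
A sketch of the eye-movement stage mentioned above, assuming a small network that maps eye-region features (e.g. pupil-center offsets within each eye region) to a correction of the face-based gaze point. The feature choice, network size, and data are assumptions, not the paper's configuration.

```python
# Small MLP that refines the face-based gaze point with an eye-movement offset.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)

# Assumed inputs: pupil offsets (dx, dy) for both eyes = 4 features;
# targets: gaze correction (dx, dy) in cm on the monitor plane.
X_train = rng.normal(size=(400, 4))
y_train = rng.normal(scale=2.0, size=(400, 2))

eye_net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=8)
eye_net.fit(X_train, y_train)

face_gaze = np.array([12.0, 8.0])                  # gaze point from facial movement (cm)
eye_offset = eye_net.predict(rng.normal(size=(1, 4)))[0]
final_gaze = face_gaze + eye_offset                # combined facial + eye gaze estimate
print("final gaze point (cm):", final_gaze)
```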