• Title/Summary/Keyword: Expression Feature

Search results: 529 items; processing time: 0.034 seconds

A Node2Vec-Based Gene Expression Image Representation Method for Effectively Predicting Cancer Prognosis (암 예후를 효과적으로 예측하기 위한 Node2Vec 기반의 유전자 발현량 이미지 표현기법)

  • Choi, Jonghwan;Park, Sanghyun
    • KIPS Transactions on Software and Data Engineering / Vol. 8, No. 10 / pp.397-402 / 2019
  • Accurately predicting cancer prognosis to provide appropriate treatment strategies for patients is one of the critical challenges in bioinformatics. Many studies have proposed machine learning models that predict patient outcomes from gene expression data. Gene expression data is high-dimensional numerical data covering about 17,000 genes, so previous studies used feature selection or dimensionality reduction to improve the performance of prognostic prediction models. These approaches, however, make it difficult for the predictive models to capture biological interactions between the selected genes, because the feature selection and model training stages are performed independently. In this paper, we propose a novel two-dimensional image formatting approach for gene expression data that achieves feature selection and prognostic prediction effectively. Node2Vec is exploited to integrate a biological interaction network with gene expression data, and a convolutional neural network learns the resulting two-dimensional gene expression images to predict cancer prognosis. We evaluated the proposed model through double cross-validation and confirmed superior prognostic prediction accuracy compared to traditional machine learning models trained on raw gene expression data. Since the proposed approach improves prediction models without the information loss caused by feature selection steps, we expect it to contribute to the development of personalized medicine.
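The core formatting idea in the abstract — placing each gene into a 2-D image according to its position in an interaction-network embedding — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gene names, 2-D coordinates (stand-ins for Node2Vec output), and the 4×4 grid size are all assumed for the example.

```python
# Hedged sketch: build a gene "expression image" from precomputed 2-D
# node embeddings (here hard-coded stand-ins for Node2Vec output).

def embedding_to_image(coords, expression, grid=4):
    """Map each gene's 2-D embedding to a grid cell and write its
    expression value there (on collisions, the last writer wins)."""
    xs = [c[0] for c in coords.values()]
    ys = [c[1] for c in coords.values()]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    image = [[0.0] * grid for _ in range(grid)]
    for gene, (x, y) in coords.items():
        col = min(int((x - x_min) / (x_max - x_min + 1e-9) * grid), grid - 1)
        row = min(int((y - y_min) / (y_max - y_min + 1e-9) * grid), grid - 1)
        image[row][col] = expression[gene]
    return image

# Toy data: the embedding places interacting genes near each other,
# so their expression values land in neighbouring cells of the image.
coords = {"TP53": (0.1, 0.2), "MDM2": (0.15, 0.25), "EGFR": (0.9, 0.8)}
expr = {"TP53": 2.5, "MDM2": 1.1, "EGFR": 7.3}
img = embedding_to_image(coords, expr)
```

The resulting 2-D array can then be fed to a CNN, which can exploit the spatial locality that the embedding imposes.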

Facial Point Classifier using Convolution Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / Vol. 22, No. 3 / pp.241-246 / 2016
  • Nowadays, many people are interested in facial expressions and human behavior, and human-robot interaction (HRI) researchers apply digital image processing, pattern recognition, and machine learning to study them. Facial feature point detection algorithms are very important for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting the feature points from some images, because images differ in conditions such as size, color, and brightness. Therefore, we propose an algorithm that augments the cascade facial feature point detector with a convolutional neural network. The structure of the convolutional neural network is based on Yann LeCun's LeNet-5. As input data for the network, we used outputs from the cascade facial feature point detector in both color and gray form. The images were resized to 32×32, and the gray images were converted to the YUV format. The gray and color images form the input to the convolutional neural network. We then classified about 1,200 test images of subjects. This research found that the proposed method is more accurate than the cascade facial feature point detector alone, because the algorithm refines the detector's results.
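The preprocessing the abstract describes — resizing detector crops to 32×32 and converting to YUV — can be sketched in a few lines. This is a generic illustration under assumptions: nearest-neighbour resizing and the BT.601 RGB→YUV conversion are used here, while the paper's exact resize filter and YUV variant are not specified in the abstract.

```python
def resize_nearest(img, size=32):
    """Nearest-neighbour resize of a 2-D list of pixels to size x size."""
    h, w = len(img), len(img[0])
    return [[img[r * h // size][c * w // size] for c in range(size)]
            for r in range(size)]

def rgb_to_yuv(r, g, b):
    """BT.601 RGB -> YUV conversion (assumed variant for this sketch)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

# Toy 2x2 "crop" blown up to the 32x32 network input size.
crop = [[1, 2], [3, 4]]
net_input = resize_nearest(crop, 32)

# White maps to maximum luma and near-zero chroma, as expected.
y_w, u_w, v_w = rgb_to_yuv(255, 255, 255)
```

In the actual system, each such 32×32 patch would be fed to the LeNet-5-style classifier to accept or reject the cascade detector's candidate points.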

Improving the Processing Speed and Robustness of Face Detection for a Psychological Robot Application (심리로봇적용을 위한 얼굴 영역 처리 속도 향상 및 강인한 얼굴 검출 방법)

  • Ryu, Jeong Tak;Yang, Jeen Mo;Choi, Young Sook;Park, Se Hyun
    • Journal of Korea Society of Industrial Information Systems / Vol. 20, No. 2 / pp.57-63 / 2015
  • Compared to other emotion recognition technologies, facial expression recognition has the merits of being non-contact, non-coercive, and convenient. To be applied in a psychological robot, the vision system must quickly and accurately extract the face region as a step preceding facial expression recognition. In this paper, we remove the background from an input image using YCbCr skin color detection and use Haar-like features for robust face detection. Removing the background from the input image yielded improved processing speed and robust face detection.

Facial Expression Recognition using ICA-Factorial Representation Method (ICA-factorial 표현법을 이용한 얼굴감정인식)

  • Han, Su-Jeong;Kwak, Keun-Chang;Go, Hyoun-Joo;Kim, Sung-Suk;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / Vol. 13, No. 3 / pp.371-376 / 2003
  • In this paper, we propose a method for recognizing facial expressions using the ICA (Independent Component Analysis)-factorial representation method. Facial expression recognition consists of two stages. First, a feature extraction stage transforms the high-dimensional face space into a low-dimensional feature space using PCA (Principal Component Analysis), and then the feature vectors are extracted using the ICA-factorial representation method. The second, recognition stage uses the K-Nearest Neighbor (KNN) algorithm with the Euclidean distance measure. We constructed a facial expression database for the six basic expressions (happiness, sadness, anger, surprise, fear, disgust) and obtained better performance than previous works.
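The PCA-then-nearest-neighbor pipeline can be sketched as below. This is a simplified stand-in: the intermediate ICA-factorial step is omitted, PCA is computed via SVD, and the toy data and labels are invented for the example.

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)           # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T              # coordinates in PC space

def knn_predict(train_feats, train_labels, query, k=1):
    """Euclidean-distance k-NN with majority vote."""
    d = np.linalg.norm(np.asarray(train_feats) - np.asarray(query), axis=1)
    idx = np.argsort(d)[:k]
    votes = [train_labels[i] for i in idx]
    return max(set(votes), key=votes.count)

# Toy "face vectors": two tight clusters standing in for two expressions.
X = np.array([[0., 0., 0.], [0., 0., 1.], [10., 10., 10.], [10., 10., 11.]])
labels = ["sad", "sad", "happy", "happy"]
feats = pca_project(X, 2)
pred = knn_predict(feats[1:], labels[1:], feats[0])  # classify the first sample
```

In the paper's full pipeline, the PCA coordinates would additionally be passed through the ICA-factorial transform before the KNN step.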

Reverting Gene Expression Pattern of Cancer into Normal-Like Using Cycle-Consistent Adversarial Network

  • Lee, Chan-hee;Ahn, TaeJin
    • International Journal of Advanced Culture Technology / Vol. 6, No. 4 / pp.275-283 / 2018
  • Cancer shows a distinct pattern of gene expression compared to normal tissue, and this difference gives rise to the malignant characteristics of cancer. Many cancer drugs target this difference so that they can selectively kill cancer cells. One recent demand in personalized cancer treatment is to retrieve normal tissue from a patient so that the gene expression difference between cancer and normal tissue can be assessed. In most clinical situations, however, it is hard to retrieve normal tissue from a patient, because a biopsy of normal tissue may damage organ function or expose the patient to a risk of infection or other side effects. Thus, there is a challenge to estimate the gene expression of the normal cells a cancer originated from without taking an additional biopsy. In this paper, we propose in-silico prediction of normal cell gene expression from the gene expression data of a tumor sample; we call this challenge reverting the cancer into normal. We divided the challenge into two parts. The first part is building a generator that can fool a pretrained discriminator. The discriminator was pretrained on public data (9,601 cancers, 7,240 normals) and achieves an accuracy of 0.997 in discriminating whether a given gene expression pattern is cancer or normal; deceiving it means our method is capable of generating very normal-like gene expression data. The second part is to assess whether the generated normal is similar to the true reverse form of the input cancer data. We approached both parts with cycle-consistent adversarial networks, since this type of network can translate one domain into the other while maintaining the original domain's features and adding the new domain's features. We showed that, when cancer data is put into a cycle-consistent adversarial network, the network retains most of the information from the input (cancer) while changing the data into normal. We also evaluated whether this generated normal-tissue gene expression is the biological reverse form of the cancer gene expression used as input.
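The cycle-consistency constraint at the heart of this approach can be illustrated with deliberately toy "generators". Here G (cancer→normal) and F (normal→cancer) are trivial shifts invented for the example; in the paper both are neural networks trained adversarially, and the loss below is only the cycle term of the full objective.

```python
# Hedged sketch of cycle consistency with toy linear "generators".
def G(x):
    """Toy cancer -> normal generator: shift expression down."""
    return [v - 2.0 for v in x]

def F(x):
    """Toy normal -> cancer generator: the inverse shift."""
    return [v + 2.0 for v in x]

def cycle_loss(x):
    """L1 cycle-consistency loss ||F(G(x)) - x||_1: translating to the
    other domain and back should reconstruct the original sample."""
    rec = F(G(x))
    return sum(abs(a - b) for a, b in zip(rec, x))

cancer = [5.0, 7.5, 3.2]   # toy per-gene expression values
loss = cycle_loss(cancer)  # ~0 because F exactly inverts G here
```

When F does not invert G, the loss is positive, which is what pushes the trained generators to preserve sample-specific information while changing only the cancer/normal signature.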

The facial expression generation of vector graphic character using the simplified principle component vector (간소화된 주성분 벡터를 이용한 벡터 그래픽 캐릭터의 얼굴표정 생성)

  • Park, Tae-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 12, No. 9 / pp.1547-1553 / 2008
  • This paper presents a method that generates various facial expressions of a vector graphic character by using simplified principal component vectors. First, we apply principal component analysis to nine facial expressions (astonished, delighted, etc.) redefined based on Russell's model of internal emotional states. From this, we find the principal component vectors having the biggest effect on the character's facial features and expressions and use them to generate facial expressions. We also create natural intermediate characters and expressions by interpolating the weighting values applied to the character's features and expressions. The method saves considerable memory space and creates intermediate expressions with little computation, so the performance of character generation systems can be considerably improved in web, mobile, and game services that require real-time control.
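The interpolation of weighting values described above reduces to a linear blend of principal-component weight vectors, sketched below. The weight values are invented for illustration; in the paper each weight scales one principal component vector of the character's shape.

```python
def interpolate_expression(w_neutral, w_target, t):
    """Linearly blend two principal-component weight vectors:
    t = 0 gives the neutral expression, t = 1 the target expression,
    and intermediate t values give intermediate expressions."""
    return [(1 - t) * a + t * b for a, b in zip(w_neutral, w_target)]

# Halfway between neutral and a target expression in PC-weight space.
mid = interpolate_expression([0.0, 0.0], [2.0, 4.0], 0.5)
```

Because only a handful of weights are stored per expression rather than full vertex data, intermediate frames are cheap to compute, which is the source of the memory and speed savings the abstract claims.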

A Study on Facial Expression Recognition using Boosted Local Binary Pattern (Boosted 국부 이진 패턴을 적용한 얼굴 표정 인식에 관한 연구)

  • Won, Chulho
    • Journal of Korea Multimedia Society / Vol. 16, No. 12 / pp.1357-1367 / 2013
  • Recently, as one of the image-based methods for facial expression recognition, research using ULBP block histogram features and an SVM classifier has been performed. Owing to the properties of the LBP introduced by Ojala, such as high discriminative capability, robustness to illumination changes, and simple computation, LBP is widely used in the field of image recognition. In this paper, we combined LBP(8,2) and LBP(8,1) to describe micro features in addition to shift and size changes when calculating the ULBP block histogram. From 660 sub-windows of LBP(8,1) and 550 sub-windows of LBP(8,2), 1,210 ULBP histogram features were extracted, and 50 weak classifiers were generated using AdaBoost. Various experiments confirmed that combining the LBP(8,1) and LBP(8,2) hybrid ULBP histogram features with an SVM classifier improves the facial expression recognition rate. The recognition rate of 96.3% achieved by the hybrid boosted ULBP block histogram shows the superiority of the proposed method.
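The basic LBP operator underlying these features can be sketched in a few lines. This computes the plain LBP(8,1) code for one pixel (8 neighbours at radius 1) on a 2-D list; the paper's ULBP variant additionally collapses non-uniform patterns into one bin, and LBP(8,2) samples at radius 2 with interpolation, both omitted here.

```python
def lbp_8_1(img, r, c):
    """Basic LBP(8,1) code at pixel (r, c): each neighbour >= the
    center contributes one bit, giving an 8-bit texture code."""
    center = img[r][c]
    # 8 immediate neighbours, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

# Flat region: every neighbour equals the center, so all bits are set.
flat_code = lbp_8_1([[5] * 3 for _ in range(3)], 1, 1)
# Bright spot: the center exceeds every neighbour, so no bits are set.
peak_code = lbp_8_1([[1, 1, 1], [1, 9, 1], [1, 1, 1]], 1, 1)
```

Histograms of such codes over sub-windows form the block features that AdaBoost then selects among.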

Facial Expression Recognition with Instance-based Learning Based on Regional-Variation Characteristics Using Models-based Feature Extraction (모델기반 특징추출을 이용한 지역변화 특성에 따른 개체기반 표정인식)

  • Park, Mi-Ae;Ko, Jae-Pil
    • Journal of Korea Multimedia Society / Vol. 9, No. 11 / pp.1465-1473 / 2006
  • In this paper, we present an approach for facial expression recognition in image sequences using Active Shape Models (ASM) and a state-based model. Given an image frame, we use ASM to obtain the shape parameter vector of the model while locating the facial feature points, which yields a shape parameter vector set over all frames of an image sequence. The state-based model converts this vector set into a state vector taking one of three states. In the classification step, we use k-NN with a proposed similarity measure motivated by the observation that the variation regions of one expression sequence differ from those of other expression sequences. In experiments on the public database KCFD, the proposed measure slightly outperforms the binary measure: with k = 1, the k-NN recognition rates with the proposed measure and the existing binary measure are 89.1% and 86.2%, respectively.
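The conversion from a per-frame parameter sequence to a small discrete state vector can be sketched as simple quantisation. The three-state labelling (neutral/transition/apex) and the thresholds below are assumptions for illustration; the abstract does not specify how the paper's state-based model defines its three states.

```python
def to_state_sequence(magnitudes, low=0.3, high=0.7):
    """Quantise per-frame shape-change magnitudes into three states:
    0 = near-neutral, 1 = transition, 2 = apex (illustrative thresholds)."""
    states = []
    for m in magnitudes:
        if m < low:
            states.append(0)
        elif m < high:
            states.append(1)
        else:
            states.append(2)
    return states

# A sequence ramping from neutral toward an expression apex.
states = to_state_sequence([0.1, 0.5, 0.9])
```

Comparing which regions of two sequences change state differently is the intuition behind the proposed k-NN similarity measure.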


Study of Facial Expression Recognition using Variable-sized Block (가변 크기 블록(Variable-sized Block)을 이용한 얼굴 표정 인식에 관한 연구)

  • Cho, Youngtak;Ryu, Byungyong;Chae, Oksam
    • Convergence Security Journal / Vol. 19, No. 1 / pp.67-78 / 2019
  • Most existing facial expression recognition methods use a uniform grid that divides the entire facial image into uniform blocks when describing facial features. Blocks produced this way may include non-face background, which interferes with discriminating facial expressions, and the facial region covered by each block varies with the position, size, and orientation of the face in the input image. In this paper, we propose a variable-sized block method that determines the size and position of the blocks that best represent meaningful facial expression changes. As part of this, we propose a way to determine the optimal number, position, and size of each block based on the facial feature points. To evaluate the proposed method, we generate facial feature vectors using LDTP and construct a facial expression recognition system based on SVM. Experimental results show that the proposed method is superior to the conventional uniform grid based method. In particular, it adapts more effectively to changes in the input environment, showing relatively better performance than existing methods on images with large shape and orientation changes.
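The contrast with the uniform grid can be illustrated by anchoring one block at each facial feature point instead of tiling the whole image. This is a simplified stand-in: the fixed block size, the clamping rule, and the image dimensions are assumptions, whereas the paper also optimises the number and size of the blocks.

```python
def blocks_around_points(points, block=16, img_w=128, img_h=128):
    """One block per facial feature point, centered on the point and
    clamped so it stays inside the image bounds.
    Returns (x0, y0, width, height) boxes."""
    half = block // 2
    boxes = []
    for (x, y) in points:
        x0 = min(max(x - half, 0), img_w - block)
        y0 = min(max(y - half, 0), img_h - block)
        boxes.append((x0, y0, block, block))
    return boxes

# Feature points at an image edge and corner: blocks are shifted inward.
boxes = blocks_around_points([(0, 64), (127, 127)])
```

Because the blocks follow the detected feature points, they track the face under translation and scale changes where a fixed uniform grid would sample background or the wrong facial region.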

Enhanced Independent Component Analysis of Temporal Human Expressions Using Hidden Markov model

  • Lee, J.J.;Uddin, Zia;Kim, T.S.
    • Proceedings of HCI Korea / HCI Korea 2008 Conference, Part 1 / pp.487-492 / 2008
  • Facial expression recognition is an intensive research area for designing human-computer interfaces. In this work, we present a new facial expression recognition system utilizing Enhanced Independent Component Analysis (EICA) for feature extraction and a discrete Hidden Markov Model (HMM) for recognition. Our proposed approach is the first to analyze sequential images of emotion-specific facial data with EICA and recognize them with an HMM. The performance of the proposed system has been compared to conventional approaches in which Principal and Independent Component Analysis are utilized for feature extraction. Our preliminary results show that the proposed algorithm produces improved recognition rates in comparison to previous works.
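The discrete-HMM recognition step typically scores an observation sequence against one trained HMM per expression and picks the highest-likelihood model. The forward algorithm that computes that likelihood can be sketched as follows; the toy one-state model parameters are invented for the example, and real systems would use log-space arithmetic to avoid underflow on long sequences.

```python
def forward_probability(obs, pi, A, B):
    """Forward algorithm for a discrete HMM: returns P(obs | model).
    obs: list of symbol indices; pi[i]: initial state probabilities;
    A[i][j]: transition probabilities; B[i][k]: emission probabilities."""
    n = len(pi)
    # Initialise with the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # Recurse over the remaining observations.
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Sanity check: a single-state HMM emitting two symbols uniformly
# assigns probability 0.5 * 0.5 to any length-2 sequence.
p = forward_probability([0, 1], [1.0], [[1.0]], [[0.5, 0.5]])
```

In the recognition system, the observation symbols would come from vector-quantised EICA feature vectors, with one HMM trained per facial expression.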
