• Title/Summary/Keyword: Feature normalization

Removal of the Ambiguity of Images by Normalization and Entropy Minimization and Edge Detection by Understanding of Image Structures (정규화와 엔트로피의 최소화에 의한 영상 경계의 애매성 제거 및 영상 구조 파악에 의한 경계선 추출)

  • Jo, Dong-Uk; Baek, Seung-Jae
    • The Transactions of the Korea Information Processing Society, v.6 no.9, pp.2558-2562, 1999
  • This paper proposes methods for noise removal and edge extraction that eliminate the ambiguities of an image through normalization and entropy minimization. Pre-existing methods have their own peculiarities and limitations, for example when gray-level distributions change very slowly or when two regions with similar gray-level distributions touch. This affects post-processing steps such as feature extraction and, as a result, leads to false recognition or non-recognition. This paper therefore proposes methods that overcome these problems. Finally, the effectiveness of the approach is demonstrated by several experiments.
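
A loose illustration of the two operations named in the abstract, normalization and entropy measurement, is sketched below (this is not the paper's algorithm; the min-max scaling and histogram-entropy formulation are assumptions):

```python
import numpy as np

def normalize_and_entropy(gray: np.ndarray):
    """Min-max normalize an 8-bit image and report the Shannon entropy
    of its gray-level histogram (the kind of quantity being minimized)."""
    norm = (gray - gray.min()) / max(gray.max() - gray.min(), 1) * 255
    hist, _ = np.histogram(norm, bins=256, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return norm.astype(np.uint8), float(-(p * np.log2(p)).sum())
```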

Normalization of Face Images Subject to Directional Illumination using Linear Model (선형모델을 이용한 방향성 조명하의 얼굴영상 정규화)

  • 고재필; 김은주; 변혜란
    • Journal of KIISE: Software and Applications, v.31 no.1, pp.54-60, 2004
  • Face recognition is one of the problems to be solved by appearance-based matching techniques. However, the appearance of a face image is very sensitive to variation in illumination. One of the easiest ways to obtain better performance is to collect more training samples acquired under variable lighting, but this is not practical in the real world. In object recognition, it is desirable to focus on feature extraction or normalization techniques rather than on the classifier. This paper presents a simple approach to the normalization of faces subject to directional illumination, one of the significant sources of error in the face recognition process. The proposed method, ICR (Illumination Compensation based on Multiple Linear Regression), finds the plane that best fits the intensity distribution of the face image using multiple linear regression and then uses this plane to normalize the face image. The advantages of our method are that it is simple and practical: the planar approximation of a face image is mathematically defined by a simple linear model. We provide experimental results on public face databases and our own database; they show a significant improvement in recognition accuracy.
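
The plane-fitting idea behind ICR can be sketched as follows (a minimal illustration, not the authors' code; the least-squares formulation and the mean-restoring compensation step are assumptions):

```python
import numpy as np

def icr_normalize(face: np.ndarray) -> np.ndarray:
    """Fit a plane I(x, y) ~ a*x + b*y + c to the image intensities by
    least squares and subtract it, restoring the mean brightness."""
    h, w = face.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Design matrix with columns [x, y, 1] for multiple linear regression.
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, face.ravel().astype(float), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    # Remove the directional illumination component, keep the mean level.
    return face - plane + plane.mean()
```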

A Corpus-based Study of Translation Universals in English Translations of Korean Newspaper Texts (한국 신문의 영어 번역에 나타난 번역 보편소의 코퍼스 기반 분석)

  • Goh, Gwang-Yoon; Lee, Younghee (Cheri)
    • Cross-Cultural Studies, v.45, pp.109-143, 2016
  • This article examines distinctive linguistic shifts in translational English in an effort to verify the validity of the translation universals hypotheses, including simplification, explicitation, normalization, and leveling-out, which have been the most heavily explored to date. A large-scale study involving comparable corpora of translated and non-translated English newspaper texts was carried out to typify particular linguistic attributes inherent in translated texts. The main findings are as follows. First, using the parameters of STTR, top-to-bottom frequency words, and mean sentence lengths, translational instances of simplification were detected across the translated English newspaper corpora. In contrast, the proportion of function words produced contrary results, which in turn suggests that this feature might not constitute an effective test of the hypothesis. Second, the use of connectives was found to be more salient in original English newspaper texts than in translated ones, which is incompatible with the explicitation hypothesis. Third, as an indicator of translational normalization, lexical bundles were found to be more pervasive in translated texts than in non-translated texts, which is expected under, and therefore supports, the normalization hypothesis. Finally, the standard deviations of both STTR and mean sentence length turned out to be higher in translated texts, indicating that the translated English newspaper texts were less leveled out within the same corpus group, contrary to what the leveling-out hypothesis postulates. Overall, the results suggest that not all four hypotheses may qualify for the label "translation universals", or at least that some translational predictors are not feasible enough to evaluate the effectiveness of the translation universals hypotheses.
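
For reference, the STTR parameter used above can be sketched as follows (the 1,000-word chunk size is an assumption; the study's exact settings may differ):

```python
def sttr(tokens: list[str], chunk: int = 1000) -> float:
    """Standardized type-token ratio: the mean TTR over consecutive
    fixed-size chunks, which removes the length bias of the raw TTR."""
    chunks = [tokens[i:i + chunk]
              for i in range(0, len(tokens) - chunk + 1, chunk)]
    if not chunks:  # text shorter than one chunk: fall back to raw TTR
        return len(set(tokens)) / len(tokens)
    return sum(len(set(c)) / chunk for c in chunks) / len(chunks)
```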

Feature-Oriented Requirements Change Management with Value Analysis (가치분석을 통한 휘처 기반의 요구사항 변경 관리)

  • Ahn, Sang-Im; Chong, Ki-Won
    • The Journal of Society for e-Business Studies, v.12 no.3, pp.33-47, 2007
  • Requirements change while development progresses, since it is impossible to define all software requirements up front. These changes lead to mistakes because developers cannot completely understand the software's structure and behavior, or cannot discover all the parts affected by a change. Requirement changes therefore have to be managed and assessed to ensure that they are feasible, make economic sense, and contribute to the business needs of the customer organization. We propose a feature-oriented requirements change management method that manages requirements change with value analysis and feature-oriented traceability links, including intermediate catalysis using features. Our approach offers two contributions to the study of requirements change: (1) we define a requirements change tree that generalizes user change requests at the feature level, and (2) we provide an overall process covering change request normalization, change impact analysis, solution selection, change request implementation, and change request evaluation. In addition, we present in detail the results of a case study carried out on an asset management portal system.
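
Purely as an illustration of what a feature-level requirements change tree might look like (a hypothetical structure, not the paper's definition):

```python
from dataclasses import dataclass, field

@dataclass
class FeatureNode:
    """One node of a (hypothetical) requirements change tree: a feature,
    its traceability links to artifacts, and the change requests on it."""
    name: str
    artifacts: list[str] = field(default_factory=list)       # linked designs/code
    change_requests: list[str] = field(default_factory=list)
    children: list["FeatureNode"] = field(default_factory=list)

    def impacted_artifacts(self) -> list[str]:
        """Collect artifacts reachable from this feature (naive impact set)."""
        found = list(self.artifacts)
        for child in self.children:
            found.extend(child.impacted_artifacts())
        return found
```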

A Study on Illumination Normalization Method based on Bilateral Filter for Illumination Invariant Face Recognition (조명 환경에 강인한 얼굴인식 성능향상을 위한 Bilateral 필터 기반 조명 정규화 방법에 관한 연구)

  • Lee, Sang-Seop; Lee, Su-Young; Kim, Joong-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP, v.47 no.4, pp.49-55, 2010
  • Cast shadows caused by illumination conditions can produce troublesome effects in face recognition systems that use reflectance images. Consequently, we need to separate cast shadow areas from feature areas to improve recognition accuracy. A bilateral filter smooths an image while preserving edges by means of a nonlinear combination of nearby pixel values. Possessing these characteristics, the filter is well suited to the illumination estimation step of Retinex-based processing. Therefore, in this paper, we propose a new illumination normalization method for face images based on the bilateral filter. The proposed method produces a reflectance image in which the cast shadow area is preserved relatively exactly, because the filter coefficients are designed as the product of the proximity and discontinuity of pixels in the input image. The performance of our method is measured by the recognition accuracy of principal component analysis (PCA) and compared with other conventional illumination normalization methods.
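
A minimal sketch of the bilateral-filter-based Retinex normalization described above (the filter parameters and the log-domain subtraction are assumptions, not the paper's exact design):

```python
import cv2
import numpy as np

def bilateral_retinex(gray: np.ndarray) -> np.ndarray:
    """Estimate illumination with an edge-preserving bilateral filter and
    take the log-domain difference as the reflectance image."""
    img = gray.astype(np.float32) + 1.0  # avoid log(0)
    illumination = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    reflectance = np.log(img) - np.log(illumination + 1.0)
    # Rescale to the 8-bit range for a downstream PCA recognizer.
    return cv2.normalize(reflectance, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
```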

The Algorithm Design and Implementation of Microarray Data Classification using the Bayesian Method (베이지안 기법을 적용한 마이크로어레이 데이터 분류 알고리즘 설계와 구현)

  • Park, Su-Young; Jung, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering, v.10 no.12, pp.2283-2288, 2006
  • Recent developments in bioinformatics technology have made micro-level experiments possible, so the expression pattern of an entire genome can be observed on a chip and the interactions of thousands of genes can be analyzed at the same time. DNA microarray technology thus opens new directions for understanding complex organisms, and effective ways of analyzing the enormous amount of gene information obtained through this technology are required. In this thesis, we used sample data from the bioinformatics core group at Harvard University. We designed and implemented a system that first applies a normalization process to reduce or remove the noise introduced by various factors in the microarray experiment, then divides the samples into two classes using a Bayesian algorithm (ASA) with feature extraction, and finally evaluates classification accuracy. The system achieved an accuracy of 98.23% after Lowess normalization.
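
The Lowess normalization step can be sketched on two-channel intensities as follows (the MA-plot formulation and the frac value are assumptions, not the thesis code); the normalized log-ratios could then be fed to a naive Bayes classifier such as scikit-learn's GaussianNB:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def lowess_normalize(red: np.ndarray, green: np.ndarray) -> np.ndarray:
    """MA-plot Lowess normalization: fit a smooth trend of M (log-ratio)
    against A (mean log-intensity) and subtract it from M."""
    m = np.log2(red) - np.log2(green)
    a = 0.5 * (np.log2(red) + np.log2(green))
    # return_sorted=False gives fitted values aligned with the input spots.
    fit = lowess(m, a, frac=0.3, return_sorted=False)
    return m - fit
```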

Comparison of Prediction Accuracy Between Classification and Convolution Algorithm in Fault Diagnosis of Rotatory Machines at Varying Speed (회전수가 변하는 기기의 고장진단에 있어서 특성 기반 분류와 합성곱 기반 알고리즘의 예측 정확도 비교)

  • Moon, Ki-Yeong; Kim, Hyung-Jin; Hwang, Se-Yun; Lee, Jang Hyun
    • Journal of Navigation and Port Research, v.46 no.3, pp.280-288, 2022
  • This study examined the diagnosis of abnormalities and faults in equipment whose rotational speed changes even during regular operation. The purpose was to suggest a procedure for properly applying machine learning to time series data whose characteristics are non-stationary as the rotational speed changes. Anomaly and fault diagnosis was performed using machine learning methods: k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), and Random Forest. To compare diagnostic accuracy, an autoencoder was used for anomaly detection and a convolution-based Conv1D network was additionally used for fault diagnosis. Feature vectors comprising statistical and frequency attributes were extracted, and normalization and dimensionality reduction were applied to them. Changes in the diagnostic accuracy of machine learning according to feature selection, normalization, and dimensionality reduction are explained, and the hyperparameter optimization process and layer structure are described for each algorithm. Finally, the results show that machine learning can accurately diagnose the failure of a variable-rotation machine given appropriate feature treatment, although convolution algorithms have been widely applied to the considered problem.
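
A minimal sketch of the feature-based branch, from statistical attributes through normalization and dimensionality reduction to k-NN, might look like this (the specific features and model settings are illustrative assumptions):

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def feature_vector(signal: np.ndarray) -> np.ndarray:
    """Statistical time-domain attributes of one vibration segment."""
    return np.array([
        signal.mean(), signal.std(), np.sqrt(np.mean(signal ** 2)),  # RMS
        skew(signal), kurtosis(signal), signal.max() - signal.min(),
    ])

# Normalize features, reduce dimensionality, then classify with k-NN.
model = make_pipeline(StandardScaler(), PCA(n_components=3),
                      KNeighborsClassifier(n_neighbors=5))
# model.fit(np.vstack([feature_vector(s) for s in segments]), labels)
```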

Facial Shape Recognition Using Self Organized Feature Map (SOFM)

  • Kim, Seung-Jae; Lee, Jung-Jae
    • International Journal of Advanced Smart Convergence, v.8 no.4, pp.104-112, 2019
  • This study proposes a robust detection algorithm that detects a face more stably under changes in lighting and rotation for the identification of a face shape. The proposed algorithm uses the face shape as input in a single-camera environment and isolates the face area through a preprocessing step. However, it is not easy to accurately recognize the face area, which is sensitive to lighting changes and has a large degree of freedom, so the error range is large. In this paper, we separate the background and the face area using the brightness difference between two images, one taken under bright light and one taken under dark light, to increase the recognition rate. After isolating the face region, the face shape is recognized using the self-organizing feature map (SOFM) algorithm. SOFM first selects an initial winning neuron through the learning process; second, the winning neuron is updated by competition between it and its neighboring neurons; third, the final winning neuron is selected by repeating the learning and competition processes. In addition, the competition goes through a three-step learning process to ensure that the winning neurons are updated well among the neurons. Using this SOFM neural network algorithm, we aim to implement a stable and robust real-time face shape recognition system.
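
The SOFM competition-and-update loop described above might be sketched as follows (grid size, learning rate, and neighborhood schedule are assumptions):

```python
import numpy as np

def train_sofm(data, grid=(8, 8), epochs=20, lr=0.5, radius=2.0):
    """Train a tiny self-organizing feature map: find the best-matching
    unit for each sample and pull it and its grid neighbors toward it."""
    rng = np.random.default_rng(0)
    weights = rng.random((grid[0], grid[1], data.shape[1]))
    rows, cols = np.mgrid[0:grid[0], 0:grid[1]]
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs
        for x in data:
            # Competition: the unit whose weights are closest to x wins.
            dists = np.linalg.norm(weights - x, axis=2)
            wr, wc = np.unravel_index(dists.argmin(), dists.shape)
            # Cooperation: Gaussian neighborhood around the winner.
            grid_d2 = (rows - wr) ** 2 + (cols - wc) ** 2
            h = np.exp(-grid_d2 / (2 * (radius * decay + 1e-9) ** 2))
            # Adaptation: move weights toward the sample.
            weights += (lr * decay) * h[..., None] * (x - weights)
    return weights
```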

Three-Dimensional Shape Recognition and Classification Using Local Features of Model Views and Sparse Representation of Shape Descriptors

  • Kanaan, Hussein; Behrad, Alireza
    • Journal of Information Processing Systems, v.16 no.2, pp.343-359, 2020
  • In this paper, a new algorithm is proposed for three-dimensional (3D) shape recognition using local features of model views and their sparse representation. The algorithm starts with the normalization of 3D models and the extraction of 2D views from uniformly distributed viewpoints. The 2D views are then stacked over each other to form view cubes. The algorithm employs the descriptors of 3D local features in the view cubes, obtained after applying Gabor filters in various directions, as the initial features for 3D shape recognition. In the training stage, we store some 3D local features to build a prototype dictionary of local features. To extract an intermediate feature vector, we measure the similarity between the local descriptors of a shape model and the local features of the prototype dictionary. We represent the intermediate feature vectors of 3D models in the sparse domain to obtain the final descriptors of the models. Finally, support vector machine classifiers are used to recognize the 3D models. Experimental results on the Princeton Shape Benchmark database showed an average recognition rate of 89.7% using 20 views. We compared the proposed approach with state-of-the-art approaches, and the results showed the effectiveness of the proposed algorithm.
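
The Gabor filtering step applied to the extracted 2D views could look roughly like this (kernel parameters and the number of orientations are assumptions):

```python
import cv2
import numpy as np

def gabor_responses(view: np.ndarray, n_orientations: int = 8) -> np.ndarray:
    """Filter one rendered 2D view with a bank of Gabor kernels at
    uniformly spaced orientations and return the response stack."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0.0)
        responses.append(cv2.filter2D(view.astype(np.float32), -1, kernel))
    return np.stack(responses)
```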

Feature selection and similarity comparison system for identification of unknown paintings (미확인 작품 식별을 위한 Feature 선정 및 유사도 비교 시스템 구축)

  • Park, Kyung-Yeob; Kim, Joo-Sung; Kim, Hyun-Soo; Shin, Dong-Myung
    • Journal of Software Assessment and Valuation, v.17 no.1, pp.17-24, 2021
  • Unknown paintings pose a problem in that forgeries are so sophisticated that even experts find it difficult to determine whether a work is genuine or counterfeit. As a result, even a genuine work may be suspected of forgery when submitted, which can lower the value of both the work and the artist. To address these issues, in this paper we propose a system that classifies the chromaticity data extracted through objective analysis into quadrants, extracts comparison sets and intersections, and estimates the author of an unknown painting using XRF and hyperspectral spectrum data from the corresponding points.
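
The quadrant classification of chromaticity data might be sketched as follows (treating CIELAB a*/b* as the chromaticity axes is an assumption):

```python
import numpy as np

def quadrant_histogram(chroma: np.ndarray) -> np.ndarray:
    """Classify chromaticity points (e.g. CIELAB a*, b* pairs) into the
    four quadrants and return the share of points falling in each."""
    a, b = chroma[:, 0], chroma[:, 1]
    return np.array([
        np.mean((a >= 0) & (b >= 0)),  # Q1
        np.mean((a < 0) & (b >= 0)),   # Q2
        np.mean((a < 0) & (b < 0)),    # Q3
        np.mean((a >= 0) & (b < 0)),   # Q4
    ])

# Two works can then be compared by the overlap (intersection) of their
# quadrant histograms, e.g. np.minimum(q_known, q_unknown).sum()
```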