• Title/Summary/Keyword: Robust Feature

Localizing Head and Shoulder Line Using Statistical Learning (통계학적 학습을 이용한 머리와 어깨선의 위치 찾기)

  • Kwon, Mu-Sik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.2C
    • /
    • pp.141-149
    • /
    • 2007
  • Associating the shoulder line with the head location of the human body is useful for verifying, localizing, and tracking persons in an image. Since the head line and the shoulder line, what we call the ${\Omega}$-shape, move together in a consistent way within a limited range of deformation, we can build a statistical shape model using the Active Shape Model (ASM). However, when the conventional ASM is applied to ${\Omega}$-shape fitting, it is very sensitive to background edges and clutter because it relies only on the local edge or gradient. Even though appearance is a good alternative feature for matching the target object to the image, it is difficult to learn the appearance of the ${\Omega}$-shape because of the significant differences between people's skin, hair, and clothes, and because appearance does not remain the same throughout an entire video. Therefore, instead of learning appearance or updating it as it changes, we model the discriminative appearance, in which each pixel is classified into head, torso, and background classes, and update the classifier to obtain the appropriate discriminative appearance in the current frame. Accordingly, we make use of two features in fitting the ${\Omega}$-shape: the edge gradient, which is used for localization, and the discriminative appearance, which contributes to the stability of the tracker. The simulation results show that the proposed method is very robust to pose change, occlusion, and illumination change in tracking the head and shoulder line of people. Another advantage is that the proposed method operates in real time.
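
The abstract combines two cues in the ASM profile search: an edge-gradient term for localization and a discriminative-appearance term for stability. A minimal sketch of how such a combined score might look, assuming a per-pixel head/torso/background classifier and an illustrative trade-off weight `lam` (both assumptions, not the authors' implementation):

```python
import numpy as np

def score_candidates(grad_mag, class_prob, candidates, inside_label, lam=0.5):
    """Score candidate landmark positions along an ASM search normal.

    grad_mag    : 2D array of edge-gradient magnitudes (localization cue)
    class_prob  : (H, W, 3) per-pixel head/torso/background probabilities
                  from the per-frame updated discriminative classifier
    candidates  : list of (row, col, inside_row, inside_col), where the
                  'inside' pixel lies just inside the Omega-shape contour
    inside_label: 0=head or 1=torso, the class expected inside the contour
    lam         : illustrative trade-off weight (assumption)
    """
    scores = []
    for r, c, ir, ic in candidates:
        edge_term = grad_mag[r, c]                      # strong edge preferred
        appear_term = class_prob[ir, ic, inside_label]  # inside pixel should match
        scores.append(edge_term + lam * appear_term)
    return int(np.argmax(scores))                       # best candidate index
```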

Multimodal Brain Image Registration based on Surface Distance and Surface Curvature Optimization (표면거리 및 표면곡률 최적화 기반 다중모달리티 뇌영상 정합)

  • Park Ji-Young;Choi Yoo-Joo;Kim Min-Jeong;Tae Woo-Suk;Hong Seung-Bong;Kim Myoung-Hee
    • The KIPS Transactions:PartA
    • /
    • v.11A no.5
    • /
    • pp.391-400
    • /
    • 2004
  • Among multimodal medical image registration techniques, which correlate different images and provide integrated information, surface registration methods generally minimize only the surface distance between two modalities. However, the surface features of two modalities acquired from one subject are similar, so matching the two images based on the optimization of both surface distance and shape features can improve the accuracy of the registration result. This research proposes a registration method that optimizes both the surface distance and the surface curvature of two brain modalities. The registration process has two steps. First, surface information is extracted from the reference images and the test images. Next, the optimization process is performed. In the former step, the surface boundaries of the regions of interest are extracted from the two modalities, and a distance map and a curvature map are generated for the boundary of the reference volume image. In the optimization step, a transformation minimizing both the surface distance and the surface curvature difference is determined by a cost function referring to the distance map and the curvature map. Applying the resulting transformation registers the test volume to the reference volume. The suggested cost function yields a more robust and accurate registration result than a cost function using the surface distance only. This research also provides an efficient means for image analysis through volume visualization of the registration result.
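
The cost function described above, referring to precomputed distance and curvature maps of the reference surface, might be sketched as follows; the weights `alpha` and `beta` and the nearest-voxel lookup are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def registration_cost(points, curvatures, transform, dist_map, curv_map,
                      alpha=1.0, beta=0.5):
    """Evaluate one candidate transform for surface registration.

    points     : (N, 3) surface points of the test volume
    curvatures : (N,) curvature values at those points
    transform  : callable mapping (N, 3) points into reference-image space
    dist_map   : 3D distance map of the reference surface boundary
    curv_map   : 3D curvature map of the reference surface boundary
    alpha/beta : illustrative weights for the two terms (assumptions)
    """
    p = np.rint(transform(points)).astype(int)    # nearest reference voxels
    # assumes transformed points stay inside the reference volume
    d = dist_map[p[:, 0], p[:, 1], p[:, 2]]       # surface-distance term
    dc = np.abs(curv_map[p[:, 0], p[:, 1], p[:, 2]] - curvatures)  # curvature term
    return alpha * d.mean() + beta * dc.mean()    # lower is a better fit
```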

Hand Motion Recognition Algorithm Using Skin Color and Center of Gravity Profile (피부색과 무게중심 프로필을 이용한 손동작 인식 알고리즘)

  • Park, Youngmin
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.2
    • /
    • pp.411-417
    • /
    • 2021
  • The field that studies human-computer interaction is called HCI (human-computer interaction); it is an academic field that studies how humans and computers communicate with each other and recognize information. This study addresses hand gesture recognition for human interaction. It examines the problems of existing recognition methods and proposes an algorithm to improve the recognition rate. The hand region is extracted based on skin color information from an image containing the shape of a human hand, and the center of gravity profile is calculated using principal component analysis. We propose a method to increase the recognition rate of hand gestures by comparing the obtained information with predefined shapes. The existing center of gravity profile produced incorrect hand gesture recognition when the hand was deformed by rotation; in this study, the center of gravity profile is used with the contour point farthest from the center of gravity taken as the starting point. Thus, a robust algorithm is proposed by improving the center of gravity profile. No gloves or special markers attached to a sensor are used for hand gesture recognition, and no separate blue screen is installed. To resolve misrecognition, the feature vector at the nearest distance is found, and an appropriate threshold is obtained to distinguish between success and failure.
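
A minimal sketch of the improved center-of-gravity profile described above, assuming a binary hand mask from the skin-color step and the OpenCV 4.x contour API; the resampling length and normalization are illustrative choices:

```python
import cv2
import numpy as np

def cog_profile(hand_mask, n_samples=64):
    """Center-of-gravity profile of a binary hand mask. The profile is
    rolled so it starts at the contour point farthest from the center of
    gravity, which keeps it aligned when the hand rotates."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)   # OpenCV 4.x API
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    cog = contour.mean(axis=0)                    # center of gravity of contour
    dists = np.linalg.norm(contour - cog, axis=1)
    start = int(np.argmax(dists))                 # farthest point = start point
    profile = np.roll(dists, -start)              # profile now begins there
    idx = np.linspace(0, len(profile) - 1, n_samples).astype(int)
    return profile[idx] / profile.max()           # resample, scale-normalize
```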

Denoising Self-Attention Network for Mixed-type Data Imputation (혼합형 데이터 보간을 위한 디노이징 셀프 어텐션 네트워크)

  • Lee, Do-Hoon;Kim, Han-Joon;Chun, Joonghoon
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.11
    • /
    • pp.135-144
    • /
    • 2021
  • Recently, data-driven decision-making technology has become a key technology leading the data industry, and the machine learning technology behind it requires high-quality training datasets. However, real-world data contain missing values for various reasons, which degrades the performance of prediction models learned from such poor training data. Therefore, in order to build a high-performance model from real-world datasets, many studies on automatically imputing missing values in initial training data have been actively conducted. Many conventional machine learning-based imputation techniques for handling missing data involve very time-consuming and cumbersome work because they are applied only to numeric columns or create an individual predictive model for each column. Therefore, this paper proposes a new data imputation technique called the 'Denoising Self-Attention Network (DSAN)', which can be applied to mixed-type datasets containing both numerical and categorical columns. DSAN can learn robust feature expression vectors by combining self-attention and denoising techniques, and can automatically impute multiple missing variables in parallel through multi-task learning. To verify the validity of the proposed technique, data imputation experiments were performed after arbitrarily generating missing values for several mixed-type training datasets. We then show the validity of the proposed technique by comparing the performance of binary classification models trained on the imputed data, together with the errors between the original and imputed values.
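
A hedged PyTorch sketch of the DSAN idea as the abstract describes it: per-column tokens, a self-attention encoder, and multi-task heads that impute all columns in parallel. Layer sizes, the 'missing' token convention, and the training-time denoising scheme are assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class DSANSketch(nn.Module):
    """Minimal sketch: per-column tokens -> self-attention -> multi-task
    heads. At training time one would randomly mask ('noise') inputs and
    train the heads to reconstruct the original values (denoising)."""

    def __init__(self, num_numeric, cat_cardinalities, d_model=32):
        super().__init__()
        # one projection per numeric column, one embedding per categorical
        self.num_proj = nn.ModuleList(nn.Linear(1, d_model)
                                      for _ in range(num_numeric))
        self.cat_emb = nn.ModuleList(nn.Embedding(c + 1, d_model)  # +1 = 'missing'
                                     for c in cat_cardinalities)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # multi-task heads: a regressor per numeric, a classifier per categorical
        self.num_heads = nn.ModuleList(nn.Linear(d_model, 1)
                                       for _ in range(num_numeric))
        self.cat_heads = nn.ModuleList(nn.Linear(d_model, c)
                                       for c in cat_cardinalities)

    def forward(self, x_num, x_cat):
        # x_num: (B, Nn) floats with 0.0 at missing slots
        # x_cat: (B, Nc) ints, index c reserved for 'missing'
        tokens = [proj(x_num[:, i:i + 1]) for i, proj in enumerate(self.num_proj)]
        tokens += [emb(x_cat[:, j]) for j, emb in enumerate(self.cat_emb)]
        h = self.encoder(torch.stack(tokens, dim=1))   # (B, Nn+Nc, d_model)
        num_out = [head(h[:, i]) for i, head in enumerate(self.num_heads)]
        off = len(self.num_heads)
        cat_out = [head(h[:, off + j]) for j, head in enumerate(self.cat_heads)]
        return num_out, cat_out                        # imputations, all columns
```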

A Thoracic Spine Segmentation Technique for Automatic Extraction of VHS and Cobb Angle from X-ray Images (X-ray 영상에서 VHS와 콥 각도 자동 추출을 위한 흉추 분할 기법)

  • Lee, Ye-Eun;Han, Seung-Hwa;Lee, Dong-Gyu;Kim, Ho-Joon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.1
    • /
    • pp.51-58
    • /
    • 2023
  • In this paper, we propose an organ segmentation technique for the automatic extraction of medical diagnostic indicators from X-ray images. In order to calculate diagnostic indicators of heart disease and spinal disease, such as VHS (vertebral heart scale) and the Cobb angle, it is necessary to accurately segment the thoracic spine, carina, and heart in a chest X-ray image. We adopt a deep neural network model in which a high-resolution representation of the image at each layer and structures converted into low-resolution feature maps are connected in parallel. This structure enables relative position information in the image to be effectively reflected in the segmentation process. We show that learning performance can be improved by combining an OCR module, in which pixel information and object information interact in a multi-step process, with a channel attention module, which allows each channel of the network to be weighted differently. In addition, a method of augmenting the training data is presented in order to provide robust performance against changes in the position, shape, and size of the subject in the X-ray image. The effectiveness of the proposed method was evaluated through experiments using 145 human chest X-ray images and 118 animal X-ray images.
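
The abstract does not give the indicator formulas themselves, so the following is only one common way such diagnostic indicators are derived once the segmentation is available: the Cobb angle as the spread between the most-tilted vertebrae, and VHS as the heart axes expressed in vertebral-length units (both formulations are assumptions, not the authors' stated method):

```python
import numpy as np

def cobb_angle(tilts_deg):
    """Approximate Cobb angle as the spread between the most-tilted
    vertebrae, given one tilt angle (degrees) per segmented vertebra,
    e.g. from ellipses fitted to each vertebra mask."""
    t = np.asarray(tilts_deg, dtype=float)
    return float(t.max() - t.min())   # largest difference in endplate tilt

def vhs(heart_long_axis, heart_short_axis, vertebra_lengths):
    """Vertebral heart scale: heart long + short axes expressed in units
    of mean thoracic vertebral length."""
    v = float(np.mean(vertebra_lengths))
    return (heart_long_axis + heart_short_axis) / v
```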

Characteristics of Signal-to-Noise Paradox and Limits of Potential Predictive Skill in the KMA's Climate Prediction System (GloSea) through Ensemble Expansion (기상청 기후예측시스템(GloSea)의 앙상블 확대를 통해 살펴본 신호대잡음의 역설적 특징(Signal-to-Noise Paradox)과 예측 스킬의 한계)

  • Yu-Kyung Hyun;Yeon-Hee Park;Johan Lee;Hee-Sook Ji;Kyung-On Boo
    • Atmosphere
    • /
    • v.34 no.1
    • /
    • pp.55-67
    • /
    • 2024
  • This paper provides a detailed introduction to the concept of the Ratio of Predictable Components (RPC) and the signal-to-noise paradox. We then derive insights from them by exploring the paradoxical features through a seasonal and regional analysis based on ensemble expansion in the KMA's climate prediction system (GloSea). We also explain the ensemble generation method, with a specific focus on stochastic physics. Through this study, we can establish the predictability limits of our forecasting system and find ways to enhance it. On a global scale, RPC reaches a value of 1 when the ensemble is expanded to a maximum of 56 members, underlining the significance of ensemble expansion in the climate prediction system. The feature of RPC paradoxically exceeding 1 becomes particularly evident in the winter North Atlantic and the summer North Pacific. Over the Siberian continent, predictability is notably low and remains so even as the ensemble size increases. This region, characterized by a low RPC, is considered challenging for reliable prediction, highlighting the need for further improvement in the model and in the initialization of land processes. In contrast, the tropical oceans demonstrate robust predictability while maintaining an RPC of 1. Through this study, we draw attention to the limits of potential predictability within the climate prediction system, emphasizing the necessity of leveraging predictable signals with high RPC values, and we underscore the importance of continuous efforts to improve models and initializations to overcome these limitations.
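
For reference, a commonly used estimate of RPC from a hindcast ensemble can be written as a short function; the variance estimators below are simplifying assumptions, not GloSea's diagnostic code:

```python
import numpy as np

def rpc(ens, obs):
    """Ratio of Predictable Components from hindcast anomalies.

    ens : (n_members, n_years) ensemble hindcast anomalies
    obs : (n_years,) observed anomalies

    RPC = r(obs, ensemble mean) / sqrt(var(ensemble mean) / var(members)).
    RPC > 1 is the signal-to-noise paradox: the model predicts the real
    world better than it predicts its own ensemble members.
    """
    ens_mean = ens.mean(axis=0)
    r = np.corrcoef(obs, ens_mean)[0, 1]            # observed predictable component
    pc_model = np.sqrt(ens_mean.var() / ens.var())  # model predictable component
    return r / pc_model
```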

A digital Audio Watermarking Algorithm using 2D Barcode (2차원 바코드를 이용한 오디오 워터마킹 알고리즘)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.97-107
    • /
    • 2011
  • Nowadays there are many copyright infringement issues on the Internet because digital content on the network can be copied and delivered easily, and the copied version has the same quality as the original. So copyright owners and content providers want a powerful solution to protect their content. A popular solution was DRM (digital rights management), which is based on encryption technology and rights control. However, DRM-free services were launched after Steve Jobs, the CEO of Apple, proposed a new music service paradigm without DRM, and DRM has disappeared from the online music market. Even though online music services decided not to adopt DRM, copyright owners and content providers are still searching for a solution to protect their content. A technology that can replace DRM is digital audio watermarking, which can embed copyright information into the music. In this paper, the author proposes a new audio watermarking algorithm with two approaches. First, the watermark information is generated from a two-dimensional barcode that has an error correction code, so the information can recover itself if the errors fall within the range of the error tolerance. The second is to use chip sequences as in CDMA (code division multiple access). These make the algorithm robust to several malicious attacks. There are many 2D barcodes. In particular, QR code, one of the matrix barcodes, can express information more freely than the other matrix barcodes. A QR code has nested square patterns at three of its corners, and these indicate the boundary of the symbol. This feature of the QR code is well suited to expressing the watermark information. That is, because the QR code is a 2D, nonlinear, matrix code, it can be modulated into a spread spectrum and used in the watermarking algorithm. The proposed algorithm assigns a different spread-spectrum sequence to each user. When the assigned code sequences are orthogonal, we can identify the watermark information of an individual user from an audio content. The algorithm uses the Walsh code as the orthogonal code. The watermark information is rearranged from the 2D barcode into a 1D sequence and modulated by the Walsh code. The modulated watermark information is embedded into the DCT (discrete cosine transform) domain of the original audio content. For the performance evaluation, three audio samples were used: "Amazing Grace", "Oh! Carol", and "Take Me Home, Country Roads". The attacks for the robustness test were MP3 compression, an echo attack, and a subwoofer boost. The MP3 compression was performed with Cool Edit Pro 2.0, using CBR (constant bit rate) 128 kbps, 44,100 Hz, stereo. The echo attack applied an echo with initial volume 70%, decay 75%, and delay 100 msec. The subwoofer boost attack modified the low-frequency part of the Fourier coefficients. The test results show that the proposed algorithm is robust to these attacks. Under the MP3 attack, the strength of the watermark information is not affected, and the watermark can be detected in all of the sample audios. Under the subwoofer boost attack, the watermark was detected when the embedding strength was 0.3, and under the echo attack the watermark can be identified if the strength is greater than or equal to 0.5.
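
A compact sketch of the embedding and detection steps described above: the QR code flattened to bits, spread by a user's Walsh (Hadamard) code, and added to DCT coefficients. The band offset, code length, and default strength are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.linalg import hadamard

def embed_watermark(audio, bits, user_idx, strength=0.3, code_len=64):
    """Spread each watermark bit (QR code flattened to a 1D bit sequence)
    with the user's Walsh code and add it to a DCT band of the audio."""
    walsh = hadamard(code_len)[user_idx]               # +/-1 Walsh code row
    chips = np.concatenate([(2 * b - 1) * walsh for b in bits])  # BPSK spread
    coeffs = dct(audio, norm="ortho")
    start = len(coeffs) // 8                           # skip the lowest band
    coeffs[start:start + len(chips)] += strength * chips
    return idct(coeffs, norm="ortho")

def detect_bit(audio, bit_idx, user_idx, code_len=64):
    """Recover one bit by correlating the DCT band with the user's code;
    orthogonal Walsh codes suppress the other users' watermarks."""
    walsh = hadamard(code_len)[user_idx]
    coeffs = dct(audio, norm="ortho")
    start = len(coeffs) // 8 + bit_idx * code_len
    return int(coeffs[start:start + code_len] @ walsh > 0)
```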

Enhancement of Inter-Image Statistical Correlation for Accurate Multi-Sensor Image Registration (정밀한 다중센서 영상정합을 위한 통계적 상관성의 증대기법)

  • Kim, Kyoung-Soo;Lee, Jin-Hak;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.4 s.304
    • /
    • pp.1-12
    • /
    • 2005
  • Image registration is a process to establish the spatial correspondence between images of the same scene acquired at different viewpoints, at different times, or by different sensors. This paper presents a new algorithm for robust registration of images acquired by multiple sensors having different modalities: EO (electro-optic) and IR (infrared) in this paper. Two approaches are usually possible for image registration: feature-based and intensity-based. In the former, the selection of accurate common features is crucial for high performance, but features in the EO image are often not the same as those in the IR image; hence, this approach is inadequate for registering EO/IR images. In the latter, normalized mutual information (NMI) has been widely used as a similarity measure due to its high accuracy and robustness, and NMI-based image registration methods assume that the statistical correlation between the two images is global. Unfortunately, since EO and IR images often do not satisfy this assumption, the registration accuracy is not high enough for some applications. In this paper, we propose a two-stage NMI-based registration method based on the analysis of the statistical correlation between EO/IR images. In the first stage, for robust registration, we propose two preprocessing schemes: extraction of statistically correlated regions (ESCR) and enhancement of statistical correlation by filtering (ESCF). For each image, ESCR automatically extracts the regions that are highly correlated with the corresponding regions in the other image, and ESCF adaptively filters each image to enhance the statistical correlation between them. In the second stage, the two output images are registered using an NMI-based algorithm. The proposed method provides promising results for various EO/IR sensor image pairs in terms of accuracy, robustness, and speed.
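
For reference, the NMI similarity measure the second stage relies on, computed from the joint histogram of the two images (the bin count is an illustrative choice):

```python
import numpy as np

def nmi(img1, img2, bins=64):
    """Normalized mutual information NMI = (H(A) + H(B)) / H(A, B),
    estimated from the joint histogram of the overlapping region."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals

    def entropy(p):
        p = p[p > 0]                           # ignore empty bins
        return -np.sum(p * np.log2(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```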

Response Modeling for the Marketing Promotion with Weighted Case Based Reasoning Under Imbalanced Data Distribution (불균형 데이터 환경에서 변수가중치를 적용한 사례기반추론 기반의 고객반응 예측)

  • Kim, Eunmi;Hong, Taeho
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.29-45
    • /
    • 2015
  • Response modeling is a well-known research issue for those who have tried to achieve superior performance in predicting customers' responses to marketing promotions. A response model can reduce marketing costs by identifying prospective customers from a very large customer database and predicting the purchasing intention of the selected customers, whereas a promotion derived from an undifferentiated marketing strategy results in unnecessary cost. In addition, the big data environment has accelerated the development of response models with data mining techniques such as CBR, neural networks, and support vector machines. CBR is one of the major tools in business because it is known to be simple and robust to apply to response modeling, and it remains an attractive data mining technique for business applications even though it has not shown high performance compared to other machine learning techniques. Thus many studies have tried to improve CBR and utilize it in business data mining with enhanced algorithms or with the support of other techniques such as genetic algorithms, decision trees, and AHP (Analytic Hierarchy Process). Ahn and Kim (2008) utilized logit, neural networks, and CBR to predict which customers would purchase the items promoted by a marketing department, and tried to optimize the number k for the k-nearest neighbor method with a genetic algorithm for the purpose of improving the performance of the integrated model. Hong and Park (2009) noted that an integrated approach combining CBR with logit, neural networks, and support vector machines (SVM) predicted customers' responses to marketing promotion better than the individual data mining models. This paper presents an approach to predicting customers' responses to marketing promotion with case-based reasoning. The proposed model was developed by applying different weights to each feature. We built a logit model from a database containing the promotion and purchasing data of bath soap, and the resulting coefficients were used as the feature weights of CBR. We empirically compared the performance of the proposed weighted CBR-based model against neural networks and a pure CBR-based model, and found that the proposed weighted CBR-based model showed superior performance to the pure CBR model. Imbalanced data is a common problem in building classification models from real data, as in bankruptcy prediction, intrusion detection, fraud detection, churn management, and response modeling. Imbalanced data means that the number of instances in one class is remarkably small or large compared to the number of instances in the other classes. A classification model such as a response model has difficulty learning patterns from such data because it tends to ignore the minority class while classifying the majority class correctly. To resolve the problem caused by an imbalanced data distribution, sampling is one of the most representative approaches, and it can be categorized into undersampling and oversampling. However, CBR is not sensitive to the data distribution because, unlike machine learning algorithms, it does not learn from the data.
In this study, we investigated the robustness of the proposed model while changing the ratio of response customers to nonresponse customers, because the customers who respond to a promotion are always a small fraction of those who do not in the real world. We simulated the proposed model 100 times to validate its robustness with different ratios of response to nonresponse customers under an imbalanced data distribution. We found that the proposed CBR-based model showed superior performance to the compared models on the imbalanced data sets. Our study is expected to improve the performance of response models for promotion programs with CBR under the imbalanced data distributions of the real world.
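
The core of the proposed model, as the abstract describes it, is k-nearest-neighbor retrieval under a feature-weighted distance whose weights come from logit coefficients. A minimal sketch, with k and the weight normalization as illustrative choices:

```python
import numpy as np

def weighted_cbr_predict(train_X, train_y, query, logit_coefs, k=5):
    """Weighted CBR: retrieve the k nearest past cases under a weighted
    Euclidean distance, with each feature weighted by the magnitude of
    its logit coefficient, and average their responses."""
    w = np.abs(logit_coefs) / np.abs(logit_coefs).sum()    # feature weights
    d = np.sqrt(((train_X - query) ** 2 * w).sum(axis=1))  # weighted distance
    nearest = np.argsort(d)[:k]                            # k most similar cases
    return train_y[nearest].mean()                         # response probability
```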

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology to extract answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology is divided into the following steps. 1) Collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news sources for "subject-predicate" separated queries and classify the proper documents. 2) Determine whether each sentence is suitable for extracting information and derive a confidence score. 3) Based on the predicate feature, extract the information from the suitable sentences and derive the overall confidence of the information extraction result. In order to evaluate the performance of the information extraction system, we selected 400 queries from the artificial intelligence speaker of SK Telecom, and confirmed that the system shows a higher performance index than the baseline model. The contribution of this study is that we develop a sequence tagging model based on a bi-directional LSTM-CRF using the predicate feature of the query; with this, we built a robust model that maintains high recall performance even on the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types, and previous research has the limitation that performance is poor when extracting information from document types different from the training data; the proposed methodology proved to extract information effectively from various types of unstructured documents compared to the baseline model. In addition, this study can prevent unnecessary extraction attempts on documents that do not include the answer information, through the process of predicting the suitability of documents and sentences for information extraction before the extraction step. It is meaningful that we provide a method by which precision can be maintained even in a real web environment. The information extraction problem for knowledge base expansion cannot guarantee that a document includes the correct answer, because it targets unstructured documents on the real web. When question answering is performed on the real web, previous machine reading comprehension studies have the limitation that they show low precision because they frequently attempt to extract an answer even from a document that contains no correct answer. The policy of predicting the suitability of documents and sentences for extraction is meaningful in that it contributes to maintaining extraction performance in a real web environment. The limitations of this study and future research directions are as follows. First, there is a problem related to data preprocessing. In this study, the unit of knowledge extraction is determined through morphological analysis based on the open-source Konlpy Python package, and the information extraction result can suffer when the morphological analysis is not performed properly. To enhance the performance of the information extraction results, it is necessary to develop a more advanced morphological analyzer. Second, there is the problem of entity ambiguity. The information extraction system of this study cannot distinguish entities that share the same name but have different referents.
If several people with the same name appear in the news, the system may not extract information about the intended query; future research needs to take measures to disambiguate people with the same name. Third, there is the problem of the evaluation query data. In this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system, and we developed an evaluation data set of 2,800 documents (400 queries × 7 articles per query: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging whether each includes a correct answer. To ensure the external validity of the study, it is desirable to use more queries to determine the performance of the system; this is a costly activity that must be done manually, and future research needs to evaluate the system on more queries. It is also necessary to develop a Korean benchmark data set for information extraction from multi-source web documents, to build an environment in which results can be evaluated more objectively.
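
A structural sketch of the three-step pipeline described in the abstract, with the document classifier, sentence-suitability scorer, and BiLSTM-CRF tagger left as placeholder callables (all names and the threshold below are hypothetical, not the authors' code):

```python
def extract_answers(query_subject, query_predicate, documents,
                    doc_classifier, sent_scorer, tagger, threshold=0.5):
    """Pipeline sketch: (1) classify proper documents, (2) score sentence
    suitability and skip unsuitable sentences, (3) run sequence tagging
    on the rest and keep tagged spans with their confidences."""
    answers = []
    for doc in documents:
        if not doc_classifier(doc, query_subject, query_predicate):
            continue                                  # step 1: filter documents
        for sent in doc.sentences:                    # placeholder attribute
            conf = sent_scorer(sent, query_predicate)
            if conf < threshold:
                continue                              # step 2: skip unsuitable
            spans = tagger(sent, query_predicate)     # step 3: BiLSTM-CRF tagging
            answers.extend((span, conf) for span in spans)
    return answers
```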