• Title/Summary/Keyword: weighting algorithm (가중치 알고리즘)

Search Results: 1,233 (processing time: 0.028 seconds)

Color Component Analysis For Image Retrieval (이미지 검색을 위한 색상 성분 분석)

  • Choi, Young-Kwan;Choi, Chul;Park, Jang-Chun
    • The KIPS Transactions:PartB / v.11B no.4 / pp.403-410 / 2004
  • Image analysis, as a preprocessing stage for medical image analysis and image retrieval, has recently been studied actively. This paper proposes a way of using color components for image retrieval. Retrieval is based on color components, which are analyzed with the proposed CLCM (Color Level Co-occurrence Matrix) and statistical techniques. CLCM projects color components onto a 3D space through a geometric rotation transform and then interprets the distribution formed by their spatial relationships; it is a 2D histogram created in the color model produced by that rotation transform, and statistical techniques are used to analyze it. Like CLCM, GLCM (Gray Level Co-occurrence Matrix) [1] and Invariant Moments [2,3] use 2D distribution charts interpreted with basic statistical techniques. Although GLCM and Invariant Moments are optimized in their own domains, they cannot fully interpret irregular data on spatial coordinates: because they rely only on basic statistics, the reliability of the extracted features is low. To interpret the spatial relationships and weights of the data, this study uses Principal Component Analysis [4,5] from multivariate statistics, and to increase accuracy it proposes projecting the color components onto a 3D space, rotating them, and extracting features of the data from all angles.
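The PCA step the abstract describes, weighting 3D color data by its principal directions, can be sketched as follows (a minimal illustration on synthetic data, not the authors' CLCM pipeline):

```python
import numpy as np

def principal_axes(colors):
    """Eigen-decompose the covariance of 3-D color points.

    colors: (N, 3) array of color components projected into 3-D space.
    The eigenvalues weight each principal direction by the variance
    it explains; the eigenvectors give the directions themselves.
    """
    centered = colors - colors.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # largest variance first
    return eigvals[order], eigvecs[:, order]

# Example: an elongated color cloud; the first component carries most weight
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([5.0, 1.0, 0.2])
vals, vecs = principal_axes(cloud)
weights = vals / vals.sum()
```

Rotating the cloud and re-running the decomposition corresponds to the "features from all angles" idea in the abstract.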

A Hybrid Knowledge Representation Method for Pedagogical Content Knowledge (교수내용지식을 위한 하이브리드 지식 표현 기법)

  • Kim, Yong-Beom;Oh, Pill-Wo;Kim, Yung-Sik
    • Korean Journal of Cognitive Science / v.16 no.4 / pp.369-386 / 2005
  • Although an Intelligent Tutoring System (ITS) offers an individualized learning environment that overcomes the limited functions of existing CAI and considers many learner variables, it is rarely deployed in schools because of the inefficiency of the investment and the absence of techniques for representing pedagogical content knowledge. To solve these problems, we need methods for representing knowledge for an ITS and for reusing the knowledge base. Pedagogical content knowledge differs from knowledge in the general sense. This paper primarily addresses a multi-complex knowledge structure and the explanation of learning context using it: the multi-complex structure is organized into nodes and clusters, used as a knowledge base, and grown into an adaptive knowledge base through self-learning. We therefore propose the Extended Neural Logic Network (X-Neuronet), which is based on the Neural Logic Network, with its logical inference but topological inflexibility in cognitive structure, incorporates pedagogical content knowledge and object-oriented concepts, and is verified for validity. X-Neuronet defines knowledge as a directed combination with inertia and weights, and provides basic concepts for representation, logical operators for operation and processing, node values and connection weights, a propagation rule, and a learning algorithm.
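The abstract's notion of a node value formed as a combination of connection weights and inertia might be sketched, very loosely, as a propagation rule of the following shape (the function name, blending form, and parameters are assumptions for illustration, not X-Neuronet's actual definition):

```python
def propagate(inputs, weights, inertia, prev_value):
    """Hypothetical propagation rule: blend the weighted input sum with
    the node's previous value using an inertia coefficient, so the node
    resists abrupt change as the knowledge base adapts."""
    weighted = sum(w * x for w, x in zip(weights, inputs))
    return inertia * prev_value + (1.0 - inertia) * weighted

# Low inertia: the node mostly follows its current weighted inputs
v = propagate([1.0, 0.0, 1.0], [0.5, 0.2, 0.3], inertia=0.1, prev_value=0.0)
```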


English Phoneme Recognition using Segmental-Feature HMM (분절 특징 HMM을 이용한 영어 음소 인식)

  • Yun, Young-Sun
    • Journal of KIISE:Software and Applications / v.29 no.3 / pp.167-179 / 2002
  • In this paper, we propose a new acoustic model for characterizing segmental features, and an algorithm based on the general framework of hidden Markov models (HMMs), to compensate for the weaknesses of the HMM assumptions. Because a single frame feature cannot effectively represent the temporal dynamics of speech signals, the segmental features are represented as a trajectory of observed vector sequences fitted by a polynomial regression function. To apply segmental features to pattern classification, we adopt the segmental HMM (SHMM), known as an effective method for representing the trend of speech signals. The SHMM separates the observation probability of a given state into extra- and intra-segmental variations, which capture long-term and short-term variability, respectively. To incorporate segmental characteristics into the acoustic model, we present the segmental-feature HMM (SFHMM) by modifying the SHMM: it represents the external and internal variation as, respectively, the observation probability of the trajectory in a given state and the trajectory estimation error for the given segment. We conducted several experiments on the TIMIT database to establish the effectiveness of the proposed method and the characteristics of the segmental features. The results show that, although its number of parameters is greater than that of the conventional HMM, the proposed method is valuable for its flexible and informative feature representation and its performance improvement.
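The polynomial-regression trajectory idea, fitting a smooth curve to a segment of frame features and measuring the estimation error, can be sketched for a single feature dimension (a generic illustration, not the paper's full SFHMM):

```python
import numpy as np

def trajectory_fit(frames, degree=2):
    """Fit a polynomial trajectory to one feature dimension of a segment
    and return the fitted values plus the mean-squared residual, which
    plays the role of the intra-segment estimation error.

    frames: (T,) sequence of a single feature over the segment's frames.
    """
    t = np.linspace(0.0, 1.0, num=len(frames))
    coefs = np.polyfit(t, frames, deg=degree)
    fitted = np.polyval(coefs, t)
    error = float(np.mean((frames - fitted) ** 2))
    return fitted, error

# A roughly parabolic segment is captured well by a degree-2 trajectory
seg = np.array([0.0, 0.3, 0.5, 0.3, 0.0])
fitted, err = trajectory_fit(seg, degree=2)
```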

A Study on the Criteria for Collision Avoidance of Naval Ships for Obstacles in Constant Bearing, Decreasing Range (CBDR) (방위끌림이 없는 장애물에 대한 함정의 충돌회피 기준에 관한 연구)

  • Ha, Jeong-soo;Jeong, Yeon-hwan
    • Journal of Navigation and Port Research / v.43 no.6 / pp.377-383 / 2019
  • Naval ships underway always face the possibility of collision, but there is no clear maneuvering procedure for collision avoidance, and avoidance tends to depend entirely on the intuitive judgment of the Officer of the Watch (OOW). In this study, we surveyed OOWs on when and how they avoid collision in a Constant Bearing, Decreasing Range (CBDR) situation, in which a naval ship encounters an obstacle. Using the survey results, we analyzed the CBDR situations in which obstacles are encountered and how collisions are avoided by day and by night. The areas most difficult to maneuver in were Pyeongtaek and Mokpo, mainly in narrow channels. Such situations occurred on average about once every four hours, and encounters with multiple ships were more common than 1:1 situations. For confirming a collision course, visual checks were trusted more, and priority was given to the distance at the closest point of approach (DCPA) and the time at the closest point of approach (TCPA). DCPA did not differ between the give-way ship and the stand-on ship, but it did differ between day and night. Most navigators preferred combined course and speed changes when avoiding collision, with course alterations of 10-15°, speed changes of ±5 knots, and an avoidance course directed toward the stern of the obstacle. These results can provide officers with standards for collision avoidance and can also be applied to the development of AI- and big-data-based collision avoidance algorithms for unmanned ships.
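The DCPA and TCPA quantities the officers prioritize follow from standard relative-motion kinematics, which can be computed directly (units here are illustrative: nautical miles and knots, so TCPA comes out in hours):

```python
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Distance and time at the closest point of approach.
    A TCPA <= 0 means the closest point is already past."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                        # no relative motion: range is constant
        return math.hypot(rx, ry), 0.0
    tcpa = -(rx * vx + ry * vy) / v2     # time of minimum range
    dcpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return dcpa, tcpa

# Head-on CBDR example: target 6 nm ahead, 20 kn closing speed -> DCPA 0
dcpa, tcpa = cpa((0, 0), (0, 10), (0, 6), (0, -10))
```

A DCPA of zero with a positive TCPA is exactly the constant-bearing, decreasing-range geometry the paper studies.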

A New Face Tracking and Recognition Method Adapted to the Environment (환경에 적응적인 얼굴 추적 및 인식 방법)

  • Ju, Myung-Ho;Kang, Hang-Bong
    • The KIPS Transactions:PartB / v.16B no.5 / pp.385-394 / 2009
  • Face tracking and recognition are difficult problems because the face is a non-rigid object. The main causes of tracking and recognition failure are changes in face pose and environmental illumination. To address them, we propose a nonlinear manifold framework for normalizing face pose and illumination. Specifically, to track and recognize a face in video with varied poses, we approximate each face pose density by a single Gaussian density via PCA (Principal Component Analysis) on images sampled from training video sequences, and then construct a GMM (Gaussian Mixture Model) for each person. To handle the illumination problem, we decompose each face image into reflectance and illuminance using the SSR (Single Scale Retinex) model. The reflectance is rescaled by histogram equalization over a defined range to obtain a normalized reflectance, and the illuminance, which carries most of the illumination variation, is approximated by the trained manifold. Combining these two features in our manifold framework yields efficient face tracking and recognition on indoor and outdoor video. To improve the video-based tracking results, we update the weight of each face pose density at each frame from the previous frame's tracking result using the EM algorithm. Our experimental results show that our method is more efficient than other methods.
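The SSR decomposition step, splitting an image into log-reflectance and a smoothed illuminance estimate, can be sketched for a grayscale image (a minimal version using separable Gaussian convolution; boundary handling and the subsequent histogram equalization are omitted):

```python
import numpy as np

def single_scale_retinex(image, sigma=2.0):
    """SSR decomposition: reflectance = log(I) - log(I * G_sigma).
    The Gaussian-smoothed image serves as the illuminance estimate."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()
    # Separable 2-D Gaussian blur: convolve rows, then columns
    blur = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, image)
    illum = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blur)
    eps = 1e-6                     # avoid log(0)
    reflect = np.log(image + eps) - np.log(illum + eps)
    return reflect, illum

# On a uniformly lit patch, the interior reflectance is ~0 (pure illuminance)
img = np.full((32, 32), 100.0)
reflect, illum = single_scale_retinex(img)
```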

A Link Travel Time Estimation Algorithm Based on Point and Interval Detection Data over the National Highway Section (일반국도의 지점 및 구간검지기 자료의 융합을 통한 통행시간 추정 알고리즘 개발)

  • Kim, Sung-Hyun;Lim, Kang-Won;Lee, Young-Ihn
    • Journal of Korean Society of Transportation / v.23 no.5 s.83 / pp.135-146 / 2005
  • Until now, studies on fusing travel times from multiple detectors have been based on the variance ratio of intermittent data, collected mainly by GPS or probe vehicles. A fusion model based on the variance ratio of intermittent data is not suitable for license plate recognition AVIs, which handle vast amounts of data. This study develops a fusion model based on travel times acquired from license plate recognition AVIs and point detectors. To fuse the two sources, an optimized fusion model and a proportional fusion model were developed. Verification showed that the optimized fusion model has the superior estimation performance: it is a dynamic, real-time fusion-ratio estimation model that calculates fusion weights from recent historical data and applies them to the current time period. The results of this study are expected to be used effectively by the National Highway Traffic Management System to provide traffic information in the future. Further study is needed, however, on the proper spacing of AVIs and on the license plate matching rate by lane for the AVIs to be installed.
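The dynamic-weight fusion idea, weighting each detector type by how well it has performed recently, can be sketched with error-inverse weights (an illustrative scheme, not the paper's exact optimized model):

```python
def fuse_travel_times(t_point, t_interval, err_point, err_interval):
    """Fuse a point-detector travel time with an AVI interval travel time.
    Weights are inversely proportional to each source's recent squared
    error, so the historically more accurate source dominates."""
    inv_p = 1.0 / max(err_point, 1e-9)
    inv_i = 1.0 / max(err_interval, 1e-9)
    w_point = inv_p / (inv_p + inv_i)
    return w_point * t_point + (1.0 - w_point) * t_interval

# AVI has been 3x more accurate recently, so it gets 3x the weight
fused = fuse_travel_times(t_point=310.0, t_interval=290.0,
                          err_point=9.0, err_interval=3.0)
```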

Moving Image Compression with Splitting Sub-blocks for Frame Difference Based on 3D-DCT (3D-DCT 기반 프레임 차분의 부블록 분할 동영상 압축)

  • Choi, Jae-Yoon;Park, Dong-Chun;Kim, Tae-Hyo
    • Journal of the Institute of Electronics Engineers of Korea CI / v.37 no.1 / pp.55-63 / 2000
  • This paper investigates the sub-region compression effect of the three-dimensional DCT (3D-DCT) applied to the inter-frame difference components of an image sequence. The proposed algorithm obtains its compression effect by dividing the information into subbands after the 3D-DCT; the data take the form of cubic blocks (8×8×8), eight difference components per unit. In the frequency domain, where the eight difference frames are transformed into DCT frames containing both the spatial and temporal frequency components of the inter-frame signal, each 8×8 frame component along the time axis is divided into 4×4 sub-blocks, because the image energy concentrates in the low-frequency corner region of the cubic block; this allows the compressed data to be obtained effectively. Using sub-block weights, the compression ratio is further improved by adapting to the low-frequency sub-regions. In simulation, we evaluated the compression ratio and reconstructed image quality (PSNR) on a simple image and on a complex image containing higher-frequency components. As a result, we obtained a high compression effect of 30.36 dB (average for the complex image) and 34.75 dB (average for the simple image) in the compression range of 0.04-0.05 bpp.
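The separable 3D-DCT at the core of the method can be sketched with an orthonormal DCT-II matrix applied along each axis of an 8×8×8 block (a generic transform illustration, not the paper's full subband-splitting codec):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2.0)
    return m

def dct3(cube):
    """Apply the DCT separably along all three axes of a cubic block."""
    d = dct_matrix(cube.shape[0])
    out = np.einsum('ai,ijk->ajk', d, cube)   # transform axis 0
    out = np.einsum('bj,ajk->abk', d, out)    # transform axis 1
    out = np.einsum('ck,abk->abc', d, out)    # transform axis 2
    return out

# A constant difference block puts all its energy in the DC coefficient,
# the low-frequency corner the sub-block splitting exploits
cube = np.full((8, 8, 8), 2.0)
coeffs = dct3(cube)
```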


A Study on SNS Reviews Analysis based on Deep Learning for User Tendency (개인 성향 추출을 위한 딥러닝 기반 SNS 리뷰 분석 방법에 관한 연구)

  • Park, Woo-Jin;Lee, Ju-Oh;Lee, Hyung-Geol;Kim, Ah-Yeon;Heo, Seung-Yeon;Ahn, Yong-Hak
    • Journal of the Korea Convergence Society / v.11 no.11 / pp.9-17 / 2020
  • In this paper, we propose a deep learning-based SNS review analysis method for extracting user tendencies. Existing SNS review analysis methods are mostly processed on the basis of the highest weight alone and therefore fail to reflect the variety of opinions across diverse interests. To solve this problem, the proposed method extracts a user's personal tendency from SNS reviews about food. It performs classification with a YOLOv3 model and sentiment analysis with a BiLSTM model, then extracts various personal tendencies through a set algorithm. Experiments showed Top-1 accuracy of 88.61% and Top-5 accuracy of 90.13% for the YOLOv3 model, and 90.99% accuracy for the BiLSTM model. A heat map of the SNS review classification also showed the diversity of individual tendencies. In the future, we expect personal tendencies to be extracted from various fields and used for customized services or marketing.
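The final aggregation stage, turning per-review (category, sentiment) outputs of the detection and sentiment models into a tendency profile, might look like the following sketch (the function name and averaging scheme are assumptions; the YOLOv3 and BiLSTM models themselves are external):

```python
from collections import defaultdict

def tendency_profile(reviews):
    """Aggregate (food_category, sentiment_score) pairs from classified
    reviews into a per-category average preference score."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for category, score in reviews:
        sums[category] += score
        counts[category] += 1
    return {c: sums[c] / counts[c] for c in sums}

profile = tendency_profile([("pizza", 0.9), ("pizza", 0.7),
                            ("salad", 0.2), ("noodles", 0.8)])
```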

Research on text mining based malware analysis technology using string information (문자열 정보를 활용한 텍스트 마이닝 기반 악성코드 분석 기술 연구)

  • Ha, Ji-hee;Lee, Tae-jin
    • Journal of Internet Computing and Services / v.21 no.1 / pp.45-55 / 2020
  • Owing to the development of information and communication technology, the number of new and variant malicious codes increases rapidly every year, and various types of malware are spreading with the growth of Internet of Things and cloud computing. In this paper, we propose a malware analysis method based on string information that can be used regardless of the operating system environment and that represents library call information related to malicious behavior. Attackers can easily create malware from existing code or with automated authoring tools, and the generated malware operates similarly to existing malware. Since most strings extractable from malicious code are closely related to malicious behavior, we weight the data features with a text mining-based method to extract them as effective features for malware analysis. Based on the processed data, models are built with various machine learning algorithms to detect maliciousness and to classify malware into groups, and were compared and verified against files used on both Windows and Linux operating systems. The accuracy of malware detection is about 93.5%, and the accuracy of group classification is about 90%. The proposed technique is widely applicable because it is relatively simple, fast, and operating system independent: it uses a single model, so no per-group model is needed for group classification. In addition, since string information is extracted through static analysis, it can be processed faster than analysis methods that execute the code directly.
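The text-mining weighting step, scoring each extracted string by how distinctive it is across samples, is typically done with TF-IDF, sketched here over toy string sets (an illustration of the weighting idea, not the paper's exact pipeline):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights for string tokens extracted from binaries.
    docs: list of token lists, one per sample. A string appearing in
    every sample gets weight 0; rare, behavior-specific strings score high."""
    n = len(docs)
    df = Counter()                       # document frequency per token
    for doc in docs:
        df.update(set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        total = sum(tf.values())
        out.append({t: (c / total) * math.log(n / df[t]) for t, c in tf.items()})
    return out

# Hypothetical extracted strings: API names distinctive of injection score high
docs = [["CreateRemoteThread", "WriteProcessMemory", "kernel32.dll"],
        ["printf", "kernel32.dll"],
        ["printf", "main"]]
weights = tfidf(docs)
```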

An integrated framework of security tool selection using fuzzy regression and physical programming (퍼지회귀분석과 physical programming을 활용한 정보보호 도구 선정 통합 프레임워크)

  • Nguyen, Hoai-Vu;Kongsuwan, Pauline;Shin, Sang-Mun;Choi, Yong-Sun;Kim, Sang-Kyun
    • Journal of the Korea Society of Computer and Information / v.15 no.11 / pp.143-156 / 2010
  • Faced with increasing malicious threats from the Internet as well as from local area networks, many companies are considering deploying a security system. To help a decision maker select a suitable security tool, this paper proposes a three-step integrated framework using linear fuzzy regression (LFR) and physical programming (PP). First, based on experts' estimates of the security criteria, the analytic hierarchy process (AHP) and quality function deployment (QFD) are employed to assign an intermediate score to each criterion and to capture the relationships among the criteria. Next, the evaluation value of each criterion is computed using LFR. Finally, a goal programming (GP) method is customized to obtain the most appropriate security tool for an organization, considering the tradeoff among the multiple objectives of quality, credibility, and cost, using the relative weights calculated by the physical programming weights (PPW) algorithm. A numerical example illustrates the advantages and contributions of the approach, which is anticipated to help decision makers select a suitable security tool by exploiting experts' experience, with noise eliminated, together with the accuracy of mathematical optimization methods.
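The AHP step in the first stage derives criterion weights from a pairwise-comparison matrix, conventionally via its principal eigenvector (a standard AHP computation; the three criteria and the judgment values below are illustrative, not taken from the paper):

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from an AHP pairwise-comparison matrix:
    the principal eigenvector, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = vecs[:, np.argmax(vals.real)].real
    return principal / principal.sum()   # sign cancels in normalization

# Hypothetical judgments: quality vs. credibility vs. cost
m = [[1.0,   3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
w = ahp_weights(m)
```

The resulting weights would then feed the QFD scoring and, eventually, the PPW-weighted goal program.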