• Title/Summary/Keyword: adaptive model


Rotation and Scale Invariant Face Detection Using Log-polar Mapping and Face Features (Log-polar변환과 얼굴특징추출을 이용한 크기 및 회전불변 얼굴인식)

  • Go Gi-Young; Kim Doo-Young
    • Journal of the Institute of Convergence Signal Processing / v.6 no.1 / pp.15-22 / 2005
  • In this paper, we propose a face recognition system that uses CCD color images. We first obtain a face candidate image using the YCbCr color model and adaptive skin color information, and use it as the initial curve of an active contour model to extract the face region. Eye and mouth maps based on color information are used to extract facial features from the face image, and the extracted features determine the center point of the log-polar image. Feature vectors are obtained from coefficients extracted by DCT and wavelet transforms. To show the validity of the proposed method, we performed face recognition using a neural network with the BP learning algorithm. Experimental results show that the proposed method is more robust, with a higher recognition rate, than the conventional method under rotation and scale variation.
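
The abstract gives only the outline of the pipeline, but its core trick, remapping the face about a feature-derived center so that rotation and scale become simple shifts, can be sketched with OpenCV. The image, center point, and DCT block size below are stand-ins, not the paper's values:

```python
# Minimal sketch: log-polar remapping plus DCT features, assuming OpenCV.
import cv2
import numpy as np

# synthetic stand-in for a detected face image
img = np.zeros((200, 200, 3), np.uint8)
cv2.circle(img, (100, 100), 60, (255, 255, 255), -1)

center = (100.0, 100.0)   # would come from the extracted eye/mouth features
max_radius = 100.0

# WARP_POLAR_LOG: rotation about the center becomes a vertical shift and
# scaling becomes a horizontal shift in the remapped image
log_polar = cv2.warpPolar(img, (256, 256), center, max_radius,
                          cv2.INTER_LINEAR | cv2.WARP_POLAR_LOG)

# low-frequency DCT block of the remapped image as a crude feature vector
gray = cv2.cvtColor(log_polar, cv2.COLOR_BGR2GRAY).astype(np.float32)
features = cv2.dct(gray)[:8, :8].flatten()
```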


Components Design for Guided Weapon System according to Resolution based on Base System Model (기본체계모델 기반 해상도 별 유도 무기체계 컴포넌트 설계)

  • Moon, Kyujin; An, Yu-Young; Jeong, Ui-Taek; Ryoo, Chang-Kyung
    • Journal of the Korea Society for Simulation / v.28 no.3 / pp.11-23 / 2019
  • AddSIM (Adaptive distributed and parallel Simulation environment for Interoperable and reusable Models) was developed to provide a composite environment that can be used at every stage from military demand analysis to test and evaluation. In addition, a base system model (BSM), a component model of the weapon system with standardized hierarchies, has been developed. This paper describes the critical design of a BSM for a guided weapon system that can be operated in AddSIM. The guided weapon system BSM is designed for reusability and interoperability, and exposes the same interface for assembly even if its subcomponents have different resolutions. Each subcomponent is defined and implemented according to the component resolution classification scheme. Finally, combinations of subcomponents were used to construct guided weapon systems of various resolutions, and their performance was compared and analyzed through simulation.
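
As a rough illustration of the "same interface for assembly" idea (this is not the actual AddSIM/BSM API, which the abstract does not specify; all names are hypothetical), a minimal Python sketch:

```python
# Hypothetical sketch: subcomponents of different resolutions share one
# interface, so an assembled model can swap them without changes.
from abc import ABC, abstractmethod
import random

class Seeker(ABC):
    """Common interface shared by every resolution level of the subcomponent."""
    @abstractmethod
    def step(self, target_state: dict, dt: float) -> dict: ...

class LowResSeeker(Seeker):
    def step(self, target_state, dt):
        return dict(target_state)  # low fidelity: ideal measurement passthrough

class HighResSeeker(Seeker):
    def step(self, target_state, dt):
        # higher fidelity: placeholder Gaussian sensor-noise model
        return {k: v + random.gauss(0.0, 0.01) for k, v in target_state.items()}

def simulate(seeker: Seeker) -> dict:
    # assembly code depends only on the shared interface, so components of
    # different resolutions are interchangeable
    return seeker.step({"x": 1.0, "y": 2.0}, dt=0.01)

print(simulate(LowResSeeker()), simulate(HighResSeeker()))
```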

Intrusion Detection System Modeling Based on Learning from Network Traffic Data

  • Midzic, Admir; Avdagic, Zikrija; Omanovic, Samir
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5568-5587 / 2018
  • This research uses artificial intelligence methods to model a computer network intrusion detection system. Primary classification is done using self-organizing maps (SOM) at two levels, while secondary classification of ambiguous data is done using a Sugeno-type fuzzy inference system (FIS) created with the Adaptive Neuro-Fuzzy Inference System (ANFIS). The main challenge for this system was to successfully detect attacks that are either unknown or represented by a very small percentage of samples in the training dataset. An improved algorithm for the second-layer SOMs and for the FIS creation was developed for this purpose. The number of clusters in the second SOM layer is optimized by this improved algorithm to minimize the amount of ambiguous data forwarded to the FIS. The FIS is created using ANFIS built on the ambiguous training data clustered by another SOM (whose size is determined dynamically). The proposed hybrid model was created and tested using the NSL-KDD dataset, which is especially interesting for this research in terms of its overlapping class distribution. The objectives were to detect intrusions represented by a small percentage of the total traffic during early detection stages, to deal with overlapping data by separating out ambiguous samples, to maximize the detection rate (DR), and to minimize the false alarm rate (FAR). On test data the proposed hybrid model achieved an acceptable DR of 0.8883 and a FAR of 0.2415, so the objectives were achieved (as compared with similar research on the NSL-KDD dataset). The proposed model can be used not only in further research in this domain but also in other research areas.
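
A minimal sketch of the first-level SOM stage, assuming the open-source MiniSom library; the data are synthetic stand-ins for NSL-KDD, and a fixed purity threshold stands in for the paper's improved cluster-optimization algorithm:

```python
# First-level SOM: confident nodes classify directly; impure ("ambiguous")
# nodes would forward their samples to the ANFIS stage.
import numpy as np
from collections import Counter, defaultdict
from minisom import MiniSom  # assumed third-party library

rng = np.random.default_rng(0)
X = rng.random((500, 20))                             # stand-in features
y = rng.choice(["normal", "dos", "probe"], size=500)  # stand-in labels

som = MiniSom(10, 10, 20, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)

# class histogram per winning node
node_labels = defaultdict(Counter)
for xi, yi in zip(X, y):
    node_labels[som.winner(xi)][yi] += 1

def classify(x, purity_threshold=0.9):
    counts = node_labels[som.winner(x)]
    if not counts:
        return "AMBIGUOUS"
    label, n = counts.most_common(1)[0]
    if n / sum(counts.values()) >= purity_threshold:
        return label       # confident SOM decision
    return "AMBIGUOUS"     # forward to the fuzzy inference stage

print(classify(X[0]))
```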

Lightweight Deep Learning Model for Real-Time 3D Object Detection in Point Clouds (실시간 3차원 객체 검출을 위한 포인트 클라우드 기반 딥러닝 모델 경량화)

  • Kim, Gyu-Min; Baek, Joong-Hwan; Kim, Hee Yeong
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.9 / pp.1330-1339 / 2022
  • 3D object detection generally aims to detect relatively large objects such as automobiles, buses, persons, and furniture, so it is weak at detecting small objects. In addition, in environments with limited resources such as embedded devices, the model is difficult to apply because of its huge amount of computation. In this paper, the accuracy of small object detection was improved by focusing on local features using only one layer, and the inference speed was improved through the proposed knowledge distillation from a large pre-trained network to a small network and an adaptive quantization method based on parameter size. The proposed model was evaluated on the SUN RGB-D validation set and a self-made apple tree dataset. It achieved 62.04% at mAP@0.25 and 47.1% at mAP@0.5, with an inference speed of 120.5 scenes per second, a fast real-time processing speed.
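
The abstract does not give the distillation details, but a standard logit-distillation loss of the kind it names can be sketched in PyTorch; the temperature and weighting below are illustrative, not the paper's settings:

```python
# Minimal sketch of logit distillation from a large teacher to a small student.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      T=4.0, alpha=0.5):
    """Blend soft teacher targets with the hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean") * (T * T)   # T^2 rescales the soft gradient
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

# usage with dummy tensors in place of real detector heads
s = torch.randn(8, 10, requires_grad=True)   # student logits
t = torch.randn(8, 10)                       # teacher logits
y = torch.randint(0, 10, (8,))               # ground-truth classes
loss = distillation_loss(s, t.detach(), y)
loss.backward()
```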

Novel Algorithms for Early Cancer Diagnosis Using Transfer Learning with MobileNetV2 in Thermal Images

  • Swapna Davies; Jaison Jacob
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.3 / pp.570-590 / 2024
  • Breast cancer ranks among the most prevalent forms of malignancy and is a foremost cause of death by cancer worldwide. It is not preventable; early and precise detection is the only remedy for lowering mortality and improving the probability of survival. In contrast to present procedures, thermography aids in the early diagnosis of cancer and thereby saves lives, but its accuracy is adversely affected by low sensitivity for small and deep tumours and by physicians' subjectivity in interpreting the images. Employing deep learning approaches for cancer detection can enhance its efficacy. This study explored the use of thermography for early identification of breast cancer with the publicly released DMR-IR dataset. For this purpose, we employed a novel approach that takes a pre-trained MobileNetV2 model and fine-tunes it through transfer learning. We created three models using MobileNetV2: a baseline transfer-learning model with weights pre-trained on the ImageNet dataset, a fine-tuned model with an adaptive learning rate, and a third that uses early stopping with callbacks during fine-tuning. The proposed methods achieved average accuracy rates of 85.15%, 95.19%, and 98.69%, respectively, and other performance indicators such as precision, sensitivity, and specificity were also investigated.
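
A minimal Keras sketch in the spirit of the third variant described above (MobileNetV2 with ImageNet weights, an adaptive learning rate, and early stopping via callbacks); the input size, classifier head, and patience values are assumptions, not the paper's configuration:

```python
# Transfer learning with MobileNetV2: frozen base, small head, callbacks.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # baseline transfer-learning stage; unfreeze to fine-tune

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(patience=3),  # adaptive learning rate
    tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True),
]
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=callbacks)
```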

The Flood Water Stage Prediction based on Neural Networks Method in Stream Gauge Station (하천수위표지점에서 신경망기법을 이용한 홍수위의 예측)

  • Kim, Seong-Won; Salas, Jose-D.
    • Journal of Korea Water Resources Association / v.33 no.2 / pp.247-262 / 2000
  • In this paper, the WSANN (Water Stage Analysis with Neural Network) model is presented to predict flood water stage at Jindong, the major stream gauging station in the Nakdong river basin. The WSANN model used an improved backpropagation training algorithm complemented by the momentum method, improved initial conditions, and an adaptive learning rate, and the data used for this study were classified into training and testing data sets. An empirical equation was derived to determine the optimal number of hidden-layer nodes from the relationship between the hidden-layer node count and the threshold iteration number. The WSANN model was calibrated with the four training data sets; as a result, the WSANN22 and WSANN32 models were selected as the optimal models for verification. Model verification was carried out to evaluate model fitness with the two untrained testing data sets, and flood water stages were reasonably predicted according to the statistical analysis. Building on these results, further research is needed on real-time warning of impending floods and on controlling flood water stage with neural network methods in river basins.
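
The WSANN network itself is not specified in the abstract, but the two training refinements it names, momentum and an adaptive learning rate, can be illustrated on a toy gradient-descent problem (linear here for brevity; the update rule is what matters):

```python
# Gradient descent with momentum and a crude adaptive learning rate.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))                  # stand-in hydrologic inputs
t = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])  # synthetic target "water stage"
W = rng.normal(size=5)
v = np.zeros_like(W)                          # momentum accumulator
lr, mu, prev_loss = 0.01, 0.9, np.inf

for epoch in range(200):
    err = X @ W - t
    loss = (err ** 2).mean()
    grad = 2 * X.T @ err / len(t)
    v = mu * v - lr * grad    # momentum: keep a decaying sum of past steps
    W += v
    # adaptive learning rate: grow on improvement, shrink otherwise
    lr = lr * 1.05 if loss < prev_loss else lr * 0.5
    prev_loss = loss

print(f"final loss: {prev_loss:.6f}")
```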


Geologic Map Data Model (지질도 데이터 모델)

  • Yeon, Young-Kwang; Han, Jong-Gyu; Lee, Hong-Jin; Chi, Kwang-Hoon; Ryu, Kun-Ho
    • Economic and Environmental Geology / v.42 no.3 / pp.273-282 / 2009
  • To render more valuable information, spatial databases are being constructed from digitized maps in the geographic field. Transferring file-based maps into a spatial database facilitates the integration of larger databases and information retrieval using database functions. A geological map is a surveyor's graphical interpretation of geological phenomena, which differs from other thematic maps produced quantitatively. This makes it difficult to construct geologic databases, which require geologic interpretation of various meanings. For those reasons, several organizations in the USA and Australia have suggested data models for database construction, but those models are hard to adapt to the domestic (Korean) environment because geological phenomena are represented differently. This paper suggests a data model suited to the domestic environment, based on an analysis of 1:50,000-scale geologic maps and more detailed mine geologic maps. The suggested model is a logical data model for the ArcGIS GeoDatabase and can be applied efficiently to 1:50,000-scale geological maps. It is expected that the geologic data model suggested in this paper can be used for the integrated use and efficient management of geologic maps.
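
As a purely hypothetical illustration of what a logical geologic-map data model can look like (the paper's actual GeoDatabase schema is not given in the abstract; every class and field name below is illustrative), a few Python dataclasses:

```python
# Hypothetical logical schema for geologic-map entities.
from dataclasses import dataclass, field

@dataclass
class GeologicUnit:
    unit_id: str
    name: str                # e.g., a formation name
    lithology: str
    age: str                 # stratigraphic age, free text here
    description: str = ""    # surveyor's interpretation, the hard-to-quantify part

@dataclass
class Fault:
    fault_id: str
    fault_type: str          # normal, reverse, strike-slip, ...
    cuts_units: list = field(default_factory=list)  # GeologicUnit ids

@dataclass
class GeologicMap:
    scale: int               # e.g., 50000 for a 1:50,000 sheet
    units: list = field(default_factory=list)
    faults: list = field(default_factory=list)

sheet = GeologicMap(scale=50000,
                    units=[GeologicUnit("U1", "ExampleFm", "granite", "Jurassic")])
```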

Stereo Image-based 3D Modelling Algorithm through Efficient Extraction of Depth Feature (효율적인 깊이 특징 추출을 이용한 스테레오 영상 기반의 3차원 모델링 기법)

  • Ha, Young-Su; Lee, Heng-Suk; Han, Kyu-Phil
    • Journal of KIISE: Computer Systems and Theory / v.32 no.10 / pp.520-529 / 2005
  • A feature-based 3D modeling algorithm is presented in this paper. Since conventional methods use depth-based techniques, they need much time for image matching to extract depth information. Although feature-based methods have a lower computational load than depth-based ones, they still must calculate the modeling error over all pixels within each triangle, which also increases computation time. Therefore, the proposed algorithm consists of three phases, initial 3D model generation, model evaluation, and model refinement, in order to acquire an efficient 3D model. Intensity gradients and incremental Delaunay triangulation are used in the initial model generation. In this phase, a morphological edge operator is adopted for fast edge filtering, and the incremental Delaunay triangulation is modified to decrease the computation time by avoiding error calculations over all pixels and by selecting a vertex near the centroid of the previous triangle. After model generation, sparse vertices are matched, and in the evaluation stage each face is evaluated by its size, approximation error, and disparity fluctuation. Faces with a large error are then selectively refined into smaller faces. Experimental results showed that the proposed algorithm acquires an adaptive model with fewer modeling errors for both smooth and abrupt areas and remarkably reduces the model acquisition time.
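
A small sketch of the evaluate-and-refine loop using SciPy's incremental Delaunay triangulation; the per-face error measure is a random placeholder for the paper's size/approximation-error/disparity criteria, and the points are synthetic:

```python
# High-error triangles get a new vertex near their centroid, then the
# triangulation is updated incrementally rather than rebuilt.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
pts = rng.random((30, 2))                 # stand-in for extracted feature points
tri = Delaunay(pts, incremental=True)

def face_error(simplex):
    # placeholder: the paper scores faces by size, approximation error,
    # and disparity fluctuation
    return rng.random()

new_pts = []
for simplex in tri.simplices:
    if face_error(simplex) > 0.8:                         # refine only bad faces
        new_pts.append(tri.points[simplex].mean(axis=0))  # vertex near centroid

if new_pts:
    tri.add_points(np.asarray(new_pts))   # incremental re-triangulation

print(len(tri.simplices), "faces after refinement")
```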

Additive hazards models for interval-censored semi-competing risks data with missing intermediate events (결측되었거나 구간중도절단된 중간사건을 가진 준경쟁적위험 자료에 대한 가산위험모형)

  • Kim, Jayoun; Kim, Jinheum
    • The Korean Journal of Applied Statistics / v.30 no.4 / pp.539-553 / 2017
  • We propose a multi-state model to analyze semi-competing risks data with interval-censored or missing intermediate events. The model extends the three-state illness-death model (healthy, diseased, and dead), where the 'diseased' state is the intermediate event. Two more states are added to incorporate events missed because of loss to follow-up before the end of a study: a lost-to-follow-up (LTF) state, and an unobservable state representing an intermediate event experienced after LTF. Given covariates, we employ the Lin and Ying additive hazards model with log-normal frailty and construct a conditional likelihood to estimate transition intensities between states in the multi-state model. Marginalization of the full likelihood is performed using adaptive importance sampling, and the optimal solution of the regression parameters is obtained through an iterative quasi-Newton algorithm. Simulation studies investigate the finite-sample performance of the proposed estimation method in terms of the empirical coverage probability of the true regression parameters. The proposed method is also illustrated with a dataset adapted from Helmer et al. (2001).
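
A toy sketch of the marginalization step: integrating a conditional likelihood over a normal frailty on the log scale (i.e., a log-normal frailty) by importance sampling, with the proposal crudely adapted to the integrand's mode. The conditional likelihood here is a placeholder, not the paper's multi-state likelihood:

```python
# Importance-sampling estimate of the marginal likelihood
# integral of L(theta | b) * phi(b) db over the frailty b.
import numpy as np
from scipy import stats

def cond_lik(b):
    # placeholder conditional likelihood L(theta | b)
    return np.exp(-0.5 * (b - 1.0) ** 2)

prior = stats.norm(0.0, 1.0)   # frailty on the log scale

# crude adaptation: center the proposal on the integrand's mode
grid = np.linspace(-5, 5, 1001)
mode = grid[np.argmax(cond_lik(grid) * prior.pdf(grid))]
proposal = stats.norm(mode, 1.0)

b = proposal.rvs(size=20000, random_state=0)
weights = prior.pdf(b) / proposal.pdf(b)
marginal = np.mean(cond_lik(b) * weights)  # Monte Carlo estimate
print(f"marginal likelihood estimate: {marginal:.4f}")
```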

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun; Hong, Min-Sung; Lee, Won-Jin; Lee, Jae-Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.73-92 / 2014
  • An Adaptive Clustering-based Collaborative Filtering Technique is proposed to solve the fundamental problems of collaborative filtering: cold start, scalability, and data sparsity. Previous collaborative filtering techniques recommend items based on a user's predicted preference for a particular item, using similar-item and similar-user subsets composed from users' preferences for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly and creating the similar-item and similar-user subsets becomes harder. In addition, as the scale of the service grows, the time needed to create those subsets grows geometrically and the response time of the recommendation system increases. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts its conditions to the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, the items and the users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is estimated. This reduces the run time for creating a similar-item or similar-user subset, makes the recommendation system more reliable than one that uses only user preference information, and partially solves the cold-start problem. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preferences. In this phase, a list of items is made for each user by examining item clusters in decreasing order of the inter-cluster preference of the user's cluster, then selecting and ranking items according to predicted or recorded user preference information. With this method, the model-creation phase bears the highest load of the recommendation system, minimizing the load at run time; thus highly reliable collaborative filtering can be performed even in large-scale recommendation systems. Third, missing user preference information is predicted using the item and user clusters, which mitigates the problem caused by the low density of the user preference matrix. Existing studies used either item-based or user-based prediction; this paper improves on Hao Ji's idea, which uses both, by combining the predictive values of the two techniques according to the condition of the recommendation model. Predicting user preferences from the item or user clusters reduces prediction time and allows missing preferences to be predicted at run time. Fourth, the item and user feature vectors are made to learn from subsequent user feedback, applying normalized feedback to the vectors.
This step mitigates the problems caused by adopting concepts from context-based filtering, namely that item and user feature vectors based on user profiles and item properties are limited in how well they can quantify the qualitative features of items and users. Therefore, the elements of the user and item feature vectors are matched one to one, and when user feedback on a particular item is obtained, it is applied to the opposite feature vector. The method was verified by comparing its performance with existing hybrid filtering techniques using two metrics: MAE (mean absolute error) and response time. By MAE, the technique was confirmed to improve the reliability of the recommendation system; by response time, it was found suitable for large-scale recommendation systems. This paper proposed an Adaptive Clustering-based Collaborative Filtering Technique with high reliability and low time complexity, but it has some limitations: because the technique focused on reducing time complexity, an improvement in reliability was not expected. Future work will improve the technique with rule-based filtering.
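
A compact sketch of the cluster-level preference idea from the first three steps: users and items are clustered on their feature vectors (k-means here is a stand-in for the paper's clustering), an inter-cluster preference matrix is built from observed ratings, and missing ratings fall back to it. All data and sizes below are synthetic assumptions:

```python
# Cluster-level collaborative filtering: predict a missing rating from the
# mean observed rating of the (user cluster, item cluster) pair.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(100, 40)).astype(float)   # ratings; 0 = missing
user_feats = rng.random((100, 8))                      # stand-in feature vectors
item_feats = rng.random((40, 8))

u_lab = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(user_feats)
i_lab = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(item_feats)

# inter-cluster preference: mean observed rating per (user cluster, item cluster)
P = np.zeros((5, 4))
for uc in range(5):
    for ic in range(4):
        block = R[np.ix_(u_lab == uc, i_lab == ic)]
        P[uc, ic] = block[block > 0].mean() if (block > 0).any() else 0.0

def predict(u, i):
    # use the recorded rating when present, otherwise the cluster-level one
    return R[u, i] if R[u, i] > 0 else P[u_lab[u], i_lab[i]]

print(predict(0, 0), predict(1, 2))
```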