• Title/Summary/Keyword: Number Matching

Search Results: 803

Ecological Division of Habitats by Analysis of Vegetation Structure and Soil Environment -A Case Study on the Vegetation in the Kimpo Landfills and Its Periphery Region- (식생구조와 토양환경 분석을 통한 서식처의 생태학적 구분 -김포매립지와 그 근린 지역의 식생을 사례로 -)

  • Kim, Jong-Won;Yong-Kyoo Jong
    • The Korean Journal of Ecology
    • /
    • v.18 no.3
    • /
    • pp.307-321
    • /
    • 1995
  • Division of ecoregions having respective functions was attempted through quantitative and qualitative analysis of vegetation diversity and heterogeneity and of the soil environment of the study sites. Field research was carried out in a square of 81 ㎢ around Andongpo (126°38'E, 37°30'N), Kimpo-gun, Kyonggi province. The conventional methods applied were as follows: classical syntaxonomy of the Zurich-Montpellier School, an interpolation method to determine the degree of diversity, heterogeneity, and distribution pattern of vegetation, and correlation analysis between soil properties and plant communities. 41 plant communities were identified, composed of 6 forest, 4 mantle, and 31 herb communities, including 6 saltmarsh plant communities. Within a mesh, the number of plant communities was highly correlated with the number of species. The highest numbers were 25 communities·km⁻²·mesh⁻¹ and 381 species·km⁻²·mesh⁻¹, and the highest value of vegetation heterogeneity was 28.1 species·community⁻¹·mesh⁻¹; the lowest values were 4 communities·km⁻²·mesh⁻¹, 28 species·km⁻²·mesh⁻¹, and 7 species·community⁻¹·mesh⁻¹, respectively. Contour maps of vegetation diversity and heterogeneity enabled us to delineate two regions: coastal and inland vegetation. Isolines of [150], [10], and [15] for species diversity, community diversity, and vegetation heterogeneity, respectively, were regarded as ecolines in the study area. Cl⁻ content was identified as the most important factor in the correlation analysis of soil properties. Ordination of the sites indicated that the study area could be divided into two edaphic types: inland and coastal habitats. It was considered that the extent of desalinization of the soil played a major role in determining species composition in the reclaimed area. By matching the edaphic division of habitats with the division of vegetation structures, the designation of ecoregions was supported. The approach of the current study was suggested as an effective tool for assessing vegetation dynamics driven by disparities in the natural environment and anthropogenic interference.
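
The mesh-based measures above reduce to simple counts per 1 km² grid cell (communities per mesh, species per mesh, and their ratio as heterogeneity) plus a correlation against soil properties such as Cl⁻ content. The Python sketch below shows one way such per-mesh metrics and correlations could be computed; the column names, sample relevés, and data layout are assumptions for illustration, not the authors' dataset.

```python
# Hypothetical sketch of the mesh-based vegetation metrics described above.
# The data layout and column names are assumptions, not the authors' actual dataset.
import pandas as pd

# One row per (mesh, community) pair: which plant community occurs in which 1 km^2 mesh,
# how many species it contains there, and the mesh's mean soil Cl- content.
releves = pd.DataFrame({
    "mesh":      ["A1", "A1", "A1", "B2", "B2", "C3"],
    "community": ["Phragmites", "Suaeda", "Zoysia", "Quercus", "Zoysia", "Suaeda"],
    "n_species": [12, 7, 9, 35, 11, 6],
    "soil_cl":   [310, 310, 310, 40, 40, 520],   # mg/kg, one value per mesh
})

per_mesh = releves.groupby("mesh").agg(
    community_diversity=("community", "nunique"),   # communities · km^-2 · mesh^-1
    species_diversity=("n_species", "sum"),         # species · km^-2 · mesh^-1 (counted per community, for simplicity)
    soil_cl=("soil_cl", "first"),
)
# Vegetation heterogeneity: species per community per mesh
per_mesh["heterogeneity"] = per_mesh["species_diversity"] / per_mesh["community_diversity"]

# Correlations among community diversity, species diversity, heterogeneity, and soil Cl-
print(per_mesh)
print(per_mesh[["community_diversity", "species_diversity", "heterogeneity", "soil_cl"]].corr())
```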


Parallel Processing of the Fuzzy Fingerprint Vault based on Geometric Hashing

  • Chae, Seung-Hoon;Lim, Sung-Jin;Bae, Sang-Hyun;Chung, Yong-Wha;Pan, Sung-Bum
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.6
    • /
    • pp.1294-1310
    • /
    • 2010
  • User authentication using fingerprint information provides convenience as well as strong security. However, serious problems may occur if fingerprint information stored for user authentication is used illegally by a different person, since, unlike a password, it cannot be changed freely because the number of fingers is limited. Recently, research on fuzzy fingerprint vault systems has been carried out actively to safely protect fingerprint information in fingerprint authentication systems. In addition, research on solving the fingerprint alignment problem by applying a geometric hashing technique has also been carried out. In this paper, we propose a hardware architecture for a geometric hashing-based fuzzy fingerprint vault system that consists of a software module and a hardware module. The hardware module performs the matching of the transformed minutiae in the enrollment hash table and the verification hash table, while the software module is responsible for feature extraction. We also propose a hardware architecture to which a parallel processing technique is applied for high-speed processing. Based on the experimental results, we confirmed that the execution time of the proposed hardware architecture was 0.24 seconds when the number of real minutiae was 36 and the number of chaff minutiae was 200, whereas that of the software solution was 1.13 seconds. Under the same conditions, the execution time of the hardware architecture with the parallel processing technique applied was 0.01 seconds. Note that the proposed hardware architecture can achieve a speed-up of close to 100 times compared to a software-based solution.
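
Geometric hashing sidesteps fingerprint pre-alignment by expressing every minutia in the coordinate frame of each basis minutia, storing the quantized results in an enrollment hash table, and counting hash collisions at verification time. The Python sketch below illustrates that matching step in software only (the paper realizes it in hardware); the minutia format, quantization steps, and sample points are assumptions for illustration.

```python
# Minimal software sketch of geometric-hashing minutiae matching. Assumed data format:
# each minutia is (x, y, angle_degrees); quantization bin sizes are illustrative.
import math
from collections import defaultdict

QUANT_XY, QUANT_ANG = 8, 15  # quantization bins (assumption)

def transform(minutiae, basis):
    """Express all minutiae in the coordinate frame of one basis minutia."""
    bx, by, bang = basis
    c, s = math.cos(math.radians(-bang)), math.sin(math.radians(-bang))
    out = []
    for x, y, ang in minutiae:
        dx, dy = x - bx, y - by
        out.append((dx * c - dy * s, dx * s + dy * c, (ang - bang) % 360))
    return out

def quantize(p):
    x, y, a = p
    return (round(x / QUANT_XY), round(y / QUANT_XY), round(a / QUANT_ANG))

def build_table(enrolled):
    """Enrollment: hash every minutia under every basis (a real vault would also enroll chaff points)."""
    table = defaultdict(set)
    for basis in enrolled:
        for p in transform(enrolled, basis):
            table[quantize(p)].add(basis)
    return table

def match_score(table, query):
    """Verification: count hash collisions and keep the best-voted basis pair."""
    votes = defaultdict(int)
    for q_basis in query:
        for p in transform(query, q_basis):
            for e_basis in table.get(quantize(p), ()):
                votes[(q_basis, e_basis)] += 1
    return max(votes.values(), default=0)

enrolled = [(10, 20, 30), (40, 25, 90), (70, 80, 180), (15, 60, 45)]
query = [(12, 21, 32), (42, 26, 88), (71, 78, 182)]          # slightly perturbed subset
print(match_score(build_table(enrolled), query))              # high score -> likely match
```

Because every basis choice is enrolled, a query can be matched without explicit pre-alignment; the hardware and parallel-processing versions described in the abstract accelerate this kind of table-lookup and voting step.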

The Convergence of Accuracy Ratio in Finite Element Method (유한요소법의 정도수렴)

  • Cho, Soon-Bo
    • Journal of Korean Association for Spatial Structures
    • /
    • v.3 no.2 s.8
    • /
    • pp.85-90
    • /
    • 2003
  • If we use a third-order approximation for the displacement function of a beam element in the finite element method, the finite element solutions of beams yield nodal displacement values that match beam theory results regardless of the number of beam elements. It is assumed that, since the member displacement values at the beam nodes are correct, the calculation procedure for the beam element stiffness matrix contains no numerical errors. As the member forces are calculated by the equations $-\frac{M}{EI}=\frac{d^2\omega}{dx^2}$ and $\frac{dM}{dx}=V$, the member forces at the nodes of beams have errors in moment and shear magnitudes when the number of elements is small. The nodal displacement values of a plate subject to lateral load converge to the exact values as the number of elements increases, so it is assumed that the procedure for calculating the plate element stiffness matrix has an error in its fundamental assumptions. The beam method for high-accuracy solutions is also applied to the plate analysis, and the method of reducing the error ratio of the member forces and the element stiffness matrix in the finite element method is studied. The results of the study were as follows. 1. The matrices EI[B] and [K] in the beam equations M(x)=EI[B]{q} and M(x)=[K]{q}+{Q} are the same. 2. The equations $-\frac{M}{EI}=\frac{d^2\omega}{dx^2}$ and $\frac{dM}{dx}=V$ for the member forces carry an error ratio in the finite element method for uniformly loaded structures, so the equilibrium nodal loads {Q} must be substituted into the equation for the member forces, as the numerical examples of this paper revealed.
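
A minimal worked example of result 2 may help: for a one-element cantilever under a uniform load, the nodal displacements are exact, but end forces computed from the cubic displacement field alone are in error unless the equilibrium nodal loads {Q} are added back. The Python sketch below illustrates this with assumed values of E, I, L, and load intensity, and a sign convention chosen for the example; it is not the paper's numerical example.

```python
# One-element cantilever under a uniform load q0: nodal displacements are exact, but the end
# forces from [K]{q} alone are wrong until the fixed-end ("equilibrium") loads {Q} are added.
# E, I, L, q0 and the sign convention for {Q} are illustrative assumptions.
import numpy as np

E, I, L, q0 = 200e9, 8.0e-6, 4.0, 5.0e3     # Pa, m^4, m, N/m (illustrative values)
EI = E * I

# Euler-Bernoulli beam element stiffness matrix [K], DOFs = [v1, th1, v2, th2]
K = (EI / L**3) * np.array([
    [ 12.0,   6*L,  -12.0,   6*L],
    [  6*L, 4*L**2,  -6*L, 2*L**2],
    [-12.0,  -6*L,   12.0,  -6*L],
    [  6*L, 2*L**2,  -6*L, 4*L**2],
])

Q_cons = q0 * L / 2 * np.array([1.0, L/6, 1.0, -L/6])   # consistent nodal loads for the uniform load
Q_fix = -Q_cons                                         # fixed-end ("equilibrium") nodal loads {Q}

# Cantilever fixed at node 1: solve the reduced system for [v2, th2]
q = np.zeros(4)
q[2:] = np.linalg.solve(K[2:, 2:], Q_cons[2:])
print("tip deflection:", q[2], " exact:", q0 * L**4 / (8 * EI))     # identical

f_shape_only = K @ q             # end forces from the cubic shape alone -> in error
f_with_Q = K @ q + Q_fix         # end forces with {Q} substituted -> exact
print("fixed-end shear  (shape only / with Q / exact):", f_shape_only[0], f_with_Q[0], -q0 * L)
print("fixed-end moment (shape only / with Q / exact):", f_shape_only[1], f_with_Q[1], -q0 * L**2 / 2)
```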


Estimation of Runoff Curve Number for Chungju Dam Watershed Using SWAT (SWAT을 이용한 충주댐 유역의 유출곡선지수 산정 방안)

  • Kim, Nam-Won;Lee, Jin-Won;Lee, Jeong-Woo;Lee, Jeong-Eun
    • Journal of Korea Water Resources Association
    • /
    • v.41 no.12
    • /
    • pp.1231-1244
    • /
    • 2008
  • The objective of this study is to present a methodology for estimating the runoff curve number (CN) using the SWAT model, which is capable of reflecting watershed heterogeneity such as climate conditions, land use, and soil type. The proposed CN estimation method is based on the asymptotic CN method and, in particular, uses surface flow data simulated by SWAT. This method has the advantages of estimating spatial CN values according to subbasin division and of reflecting watershed characteristics, because calibration is performed by matching the measured and simulated streamflows. Furthermore, the method is not sensitive to the rainfall-runoff data since CN estimation is performed on a daily basis. The SWAT-based CN estimation method is applied to the Chungju dam watershed, and a regression equation for the estimated CN, which decays exponentially with increasing rainfall, is presented.
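
The asymptotic CN method referenced above back-calculates an event CN from each rainfall-runoff pair through the SCS-CN relations (S = 25400/CN − 254 and Q = (P − 0.2S)²/(P + 0.8S)) and then fits an exponential decay of CN against rainfall. The Python sketch below illustrates that calculation with fabricated daily rainfall and runoff values standing in for the SWAT-simulated surface flow; the sample data and the initial parameter guesses are assumptions.

```python
# Hedged sketch of the asymptotic CN approach: back-calculate an event CN from each daily
# rainfall/surface-runoff pair, then fit CN(P) = CN_inf + (100 - CN_inf) * exp(-k * P).
# The (P, Q) values below are made up for illustration; in the paper Q comes from SWAT.
import numpy as np
from scipy.optimize import curve_fit

def event_cn(P, Q):
    """Back-calculate CN (SI units, mm) from a rainfall-runoff pair via the SCS relation."""
    S = 5.0 * (P + 2.0 * Q - np.sqrt(4.0 * Q**2 + 5.0 * P * Q))   # potential retention S
    return 25400.0 / (S + 254.0)

def asymptotic_cn(P, cn_inf, k):
    """Asymptotic (exponential-decay) CN-rainfall relation."""
    return cn_inf + (100.0 - cn_inf) * np.exp(-k * P)

# Hypothetical daily rainfall P (mm) and simulated surface runoff Q (mm)
P = np.array([10, 20, 30, 50, 70, 100, 130, 160], dtype=float)
Q = np.array([0.4, 2.0, 4.5, 12.0, 22.0, 40.0, 58.0, 78.0])

cn_obs = event_cn(P, Q)
(cn_inf, k), _ = curve_fit(asymptotic_cn, P, cn_obs, p0=(70.0, 0.05))
print(f"fitted CN_inf = {cn_inf:.1f}, k = {k:.3f}")
```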

High-Speed Search for Pirated Content and Research on Heavy Uploader Profiling Analysis Technology (불법복제물 고속검색 및 Heavy Uploader 프로파일링 분석기술 연구)

  • Hwang, Chan-Woong;Kim, Jin-Gang;Lee, Yong-Soo;Kim, Hyeong-Rae;Lee, Tae-Jin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.6
    • /
    • pp.1067-1078
    • /
    • 2020
  • With the development of internet technology, a great deal of content is being produced and the demand for it is increasing. Accordingly, the amount of content in circulation is increasing, and the distribution of illegal copies that infringe copyright is increasing as well. The Korea Copyright Protection Agency operates an illegal-content blocking program based on substring matching, but accurate searching is difficult because large amounts of noise are inserted to bypass it. Recently, research has been conducted on natural language processing and AI deep learning technologies for removing noise, and on various blockchain technologies for copyright protection, but these approaches have limitations. In this paper, noise is removed from data collected online and illegal copies are searched on a keyword basis. In addition, identical heavy uploaders are identified through profiling analysis of heavy uploaders. In the future, copyright damage is expected to be minimized if this illegal-copy search technology is combined with blocking and response technology based on the results of heavy uploader profiling.
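
The core of the search step above is normalizing away the noise characters inserted to defeat substring matching, then matching keywords against the cleaned titles and grouping hits by uploader. The Python sketch below illustrates the idea; the noise pattern, field names, and sample posts are assumptions rather than the system's actual rules.

```python
# Illustrative sketch: strip typical noise characters inserted to evade substring matching,
# then do keyword-based search and group matched posts by uploader ID.
import re
from collections import Counter

NOISE = re.compile(r"[\s\.\-_~!@#\*\+\|/\\]+")   # characters assumed to be inserted as noise

def normalize(title: str) -> str:
    """Remove noise characters and lowercase so keyword matching works again."""
    return NOISE.sub("", title).lower()

posts = [
    {"uploader": "user_A", "title": "Av-en_ge.rs End~game 1080p"},
    {"uploader": "user_B", "title": "AVENGERS   endgame BluRay"},
    {"uploader": "user_A", "title": "av.engers.endgame.2019"},
    {"uploader": "user_C", "title": "cooking recipes collection"},
]

keyword = normalize("Avengers Endgame")
hits = [p for p in posts if keyword in normalize(p["title"])]

# Simple heavy-uploader profile: who posts the matched titles most often
heavy = Counter(p["uploader"] for p in hits)
print([p["title"] for p in hits])
print(heavy.most_common())
```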

Recognition of Resident Registration Card using ART2-based RBF Network and face Verification (ART2 기반 RBF 네트워크와 얼굴 인증을 이용한 주민등록증 인식)

  • Kim Kwang-Baek;Kim Young-Ju
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.1
    • /
    • pp.1-15
    • /
    • 2006
  • In Korea, a resident registration card contains various personal information such as the holder's present address, resident registration number, face picture, and fingerprint. The plastic resident card currently in use is easy to forge or alter, and forgery techniques become more sophisticated over time, so whether a resident card is forged is difficult to judge by examination with the naked eye alone. This paper proposes an automatic recognition method for resident cards that recognizes the resident registration number using a newly proposed refined ART2-based RBF network and authenticates the face picture by a template image matching method. The proposed method first extracts the areas containing the resident registration number and the date of issue from a resident card image by applying Sobel masking, median filtering, and horizontal smearing operations to the image in turn. To improve the extraction of individual codes from the extracted areas, the original image is binarized using a high-pass filter, and CDM masking is applied to the binarized image to enhance the image information of the individual codes. Lastly, the individual codes, which are the targets of recognition, are extracted by applying a 4-directional contour tracking algorithm to the extracted areas in the binarized image. This paper also proposes a refined ART2-based RBF network to recognize the individual codes; it applies ART2 as the learning structure of the middle layer and dynamically adjusts the learning rate in the training of the middle and output layers using a fuzzy control method to improve learning performance. Also, for precise judgment of forgery of a resident card, the proposed method supports face authentication using a face template database and a template image matching method. For performance evaluation, this paper applied modifications to original resident card images, such as forgery of the face picture, addition of noise, variations of contrast, variations of intensity, and image blurring, and used these images together with the original images in the experiments. The experimental results showed that the proposed method performs well in recognizing individual codes and in face authentication for the automatic recognition of resident cards.
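
The face-authentication step above compares the face picture extracted from the card with a stored face template. One common way to realize template image matching is normalized cross-correlation with an acceptance threshold, as in the Python sketch below; the threshold, image sizes, and synthetic images are assumptions, not the paper's parameters.

```python
# Sketch of the face-authentication step: compare an extracted face region with a stored
# face template using normalized cross-correlation (NCC). The threshold and the synthetic
# images are assumptions for illustration.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized grayscale images."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def authenticate(face: np.ndarray, template: np.ndarray, threshold: float = 0.8) -> bool:
    """Accept the card's face picture if it correlates strongly enough with the template."""
    return ncc(face, template) >= threshold

rng = np.random.default_rng(0)
template = rng.integers(0, 256, (64, 64))          # stored face template (synthetic)
genuine = template + rng.normal(0, 10, (64, 64))   # same face with mild noise
forged = rng.integers(0, 256, (64, 64))            # unrelated face

print(authenticate(genuine, template))   # True  -> accepted
print(authenticate(forged, template))    # False -> flagged as a possible forgery
```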


A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually; however, with manual categorization, not only is the accuracy of the categorization not guaranteed, but the categorization also requires a large amount of time and cost. Many studies have been conducted toward the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics, because they assume that one document can be categorized into one category only. To overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process requires training on a multi-categorized document set; these methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To remove this requirement of traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by using the results of topic analysis on single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate matching scores for each document against multiple categories; a document is classified into a certain category if and only if its matching score is higher than a predefined threshold. For example, a certain document can be classified into the three categories whose matching scores exceed the predefined threshold. The main contribution of our study is that our methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and the use of vulgar language and slang is lower than in other typical text documents. We collected news articles published from July 2012 to June 2013. The number of articles varies considerably across categories, because readers have different levels of interest in each category and because events occur with different frequencies in each category. To minimize distortion of the results caused by the different numbers of articles in each category, we extracted 3,000 articles equally from each of the eight categories, so the total number of articles used in our experiments was 24,000. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics."
By using the collected news articles, we calculated document/category correspondence scores by utilizing the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree of correspondence of each document to a certain category. As a result, we could present two additional categories for each of the 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top 1 predicted category was evaluated, whereas they were 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. Interestingly, there was a large variation in precision, recall, and F-score across the eight categories.
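
The scoring step described above amounts to combining document/topic weights with a topic/category correspondence table and thresholding the resulting document/category matching scores. The Python sketch below illustrates this with small made-up matrices; the topic names, weights, and threshold are assumptions for illustration only.

```python
# Sketch of the scoring step: combine document-topic weights (from topic analysis) with a
# topic-category correspondence table to get document-category matching scores, then keep
# every category above a threshold. All numbers here are illustrative.
import numpy as np

categories = ["Economy", "IT Science", "Sports", "Politics"]

# Document-topic weights for 2 documents (rows) over 3 topics (columns)
doc_topic = np.array([
    [0.7, 0.3, 0.0],
    [0.1, 0.1, 0.8],
])

# Topic-category correspondence table (rows: topics, columns: categories)
topic_cat = np.array([
    [0.9, 0.2, 0.0, 0.3],
    [0.3, 0.9, 0.0, 0.1],
    [0.0, 0.1, 0.9, 0.0],
])

threshold = 0.3
doc_cat = doc_topic @ topic_cat          # document-category matching scores

for i, scores in enumerate(doc_cat):
    kept = [c for c, s in zip(categories, scores) if s >= threshold]
    print(f"doc {i}: scores={np.round(scores, 2)}, categories={kept}")
```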

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • At one time, the anomaly detection field was dominated by methods that determine whether an abnormality exists based on statistics derived from the data. This was possible because data dimensions were simple in the past, so classical statistical methods could work effectively. However, as the characteristics of data have become more complex in the era of big data, it has become more difficult to accurately analyze and predict the data generated throughout industry with conventional methods. Therefore, supervised learning algorithms based on SVMs and decision trees came into use. However, a supervised learning-based model can accurately predict the test data only when the class distribution is balanced, and most data generated in industry has imbalanced classes, so the predicted results are not always valid when a supervised learning model is applied. To overcome these drawbacks, many studies now use unsupervised learning-based models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model based on convolutional neural networks that performs anomaly detection on medical images. In contrast, anomaly detection for sequence data using generative adversarial networks has received far less research attention than that for image data. Li et al. (2018) proposed a model using LSTM, a type of recurrent neural network, to classify abnormalities in numerical sequence data, but it has not been applied to categorical sequence data, nor has the feature matching method of Salimans et al. (2016). This suggests that there is ample room for further study on anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator is a 2-layer stacked LSTM with 32-dimensional and 64-dimensional hidden unit layers, and the discriminator is an LSTM with a 64-dimensional hidden unit layer. In existing work on anomaly detection for sequence data, entropy values of the probabilities assigned to the actual data are used to derive anomaly scores; in this paper, as mentioned earlier, anomaly scores are instead derived using the feature matching technique. In addition, the process of optimizing the latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder, because it can learn the data distribution from real categorical sequence data and is not affected by a single normal data point, whereas the autoencoder is. The robustness test showed that the accuracy of the autoencoder was 92% and that of the generative adversarial network was 96%; in terms of sensitivity, the autoencoder reached 40% and the generative adversarial network 51%.
Experiments were also conducted to show how much performance changes with differences in the optimization structure of the latent variables; as a result, sensitivity improved by about 1%. These results suggest a new perspective on optimizing latent variables, which had previously received relatively little attention.
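
The feature-matching anomaly score described above can be sketched as follows: optimize a latent code z so that the generator output G(z) resembles the input sequence, then score the input by the distance between the discriminator's intermediate features of x and of G(z). The PyTorch sketch below loosely follows the stated 32/64-dimensional LSTM layers, but the latent dimension, optimizer settings, and the untrained networks are assumptions; it illustrates the technique, not the authors' implementation.

```python
# Hedged PyTorch sketch of a feature-matching anomaly score for sequence data: an LSTM
# generator/discriminator pair, a latent code z optimized to reconstruct the input, and an
# anomaly score based on the distance between discriminator features of x and G(z).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=16, out_dim=8):
        super().__init__()
        self.lstm1 = nn.LSTM(latent_dim, 32, batch_first=True)   # 32-dim hidden layer
        self.lstm2 = nn.LSTM(32, 64, batch_first=True)            # 64-dim hidden layer
        self.out = nn.Linear(64, out_dim)
    def forward(self, z):
        h, _ = self.lstm1(z)
        h, _ = self.lstm2(h)
        return self.out(h)

class Discriminator(nn.Module):
    def __init__(self, in_dim=8):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, 64, batch_first=True)         # 64-dim hidden layer
        self.head = nn.Linear(64, 1)
    def features(self, x):
        h, _ = self.lstm(x)          # intermediate features used for feature matching
        return h
    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)[:, -1]))

def anomaly_score(x, G, D, steps=50, lr=0.05):
    """Optimize z so G(z) resembles x, then score x by the feature-matching distance."""
    z = torch.randn(x.size(0), x.size(1), 16, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((D.features(G(z)) - D.features(x)) ** 2)
        loss.backward()
        opt.step()
    return loss.item()

G, D = Generator(), Discriminator()          # assume these were trained adversarially beforehand
x = torch.randn(1, 20, 8)                    # one (batch, time, feature) test sequence
print("feature-matching anomaly score:", anomaly_score(x, G, D))
```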

The Effects of Singing Program Combined with Physical Exercise of Physiologic Changes, Perception Function and Degree of Depression in the Elderly Women (운동과 음악을 이용한 노래부르기가 노인의 생리적 변화, 인지기능 및 우울에 미치는 효과)

  • Jung, Young-Ju;Min, Soon
    • Journal of Korean Biological Nursing Science
    • /
    • v.3 no.2
    • /
    • pp.35-50
    • /
    • 2001
  • This study was conducted to evaluate the effects of a singing program combined with physical exercise on physiologic changes, perception function, and degree of depression. The subjects were members of an elderly women's glee club at the D care center for the elderly who had been singing for more than 6 months; 30 members were allocated to the study group and 30 to the control group. The singing program, designed as both physical therapy and music therapy, consisted of initial physical exercise, singing art songs and classical songs, and finishing physical exercise. The program was performed twice a week, and each session lasted about forty minutes. We measured heart rate, peripheral arterial oxygen saturation, perception function, and degree of depression before and after the program, using a pulse oximeter for heart rate and oxygen saturation and a questionnaire for the evaluation of perception function and degree of depression. The SPSS program was used for data analysis: general characteristics of the subjects were analyzed by frequency, differences between the two groups by t-test, and data before and after the program by paired t-test. The results were as follows. 1) Heart rate after the program was significantly lower than before the program in the test group (p<0.05). 2) Peripheral oxygen saturation after the program was significantly higher than before the program (p<0.05). 3) The ability to match the correct sign with a predetermined number improved after the program: the frequency of incorrect sign-number matching decreased from 30 before the program to 8 afterwards. 4) Calculation ability improved after the program: the frequency of incorrect calculations decreased from $1.10{\pm}1.94$ before the program to $0.97{\pm}1.84$ afterwards. 5) The degree of depression after the program ($2.07{\pm}0.49$) was significantly lower than before the program (p<0.001). These results show that a singing program combined with physical exercise improves oxygen delivery to the peripheral circulation, the stability of heart function, and perception function (calculating and matching ability), and decreases the degree of depression. In conclusion, a singing program combined with physical exercise can be used as an effective measure to improve the health of the elderly and to help prevent dementia.
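
The before/after comparisons above rest on paired t-tests of the pre- and post-program measurements. The Python sketch below shows such a paired t-test on fabricated heart-rate values; the data are invented for illustration and are not the study's measurements.

```python
# Sketch of the pre/post comparison using a paired t-test (scipy.stats.ttest_rel).
# The heart-rate values here are fabricated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hr_before = rng.normal(78, 6, size=30)              # heart rate before the program (bpm)
hr_after = hr_before - rng.normal(3, 2, size=30)    # slightly lower after the program

t_stat, p_value = stats.ttest_rel(hr_before, hr_after)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("heart rate after the program differs significantly (p < 0.05)")
```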


A Study on Algorithm and Operation Technique for Dynamic Hard Shoulder Running System on Freeway (고속도로 동적 갓길차로제 알고리즘과 운영기법 연구)

  • Nam Sik Moon;Eon kyo Shin;Ju hyun Kim
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.4
    • /
    • pp.16-36
    • /
    • 2024
  • This study developed a dynamic hard shoulder running (HSR) algorithm that includes an ending speed and a minimum operation time in addition to the starting speed for HSR, and presented an operation plan. The first stage of the algorithm is red, in which vehicles are prohibited from using the hard shoulder. The second stage is red/amber, in which drivers are notified of HSR and operators are given time to check whether there is any obstacle to HSR. Stage 3 is green, in which vehicles are permitted to use the hard shoulder. Stage 4 is amber, in which drivers are signaled that the end of HSR is imminent. In addition, a minimum time is applied to the green and red stages, but if congestion is severe, red is terminated early to prevent the congestion from worsening. Upstream and downstream traffic flow is managed stably through mainline ramp metering and lane number matching. The operating standard speed reflects the characteristics of vehicles and drivers, and based on simulation results, 7090 was selected as the optimal operating standard speed considering traffic flow and safety. Therefore, it is desirable to apply the travel time of the HSR link, obtained by dividing its length by the minimum speed, as the minimum operating time in order to ensure continuity of traffic flow.
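
The four-stage control logic described above can be summarized as a small state machine with speed thresholds, minimum operation times, and an early exit from red under severe congestion. The Python sketch below is one possible reading of that logic; all threshold values, timings, and the step interval are placeholders, not the paper's calibrated parameters.

```python
# Minimal sketch of the four-stage HSR control logic (red -> red/amber -> green -> amber -> red)
# with starting/ending speed thresholds, minimum operation times, and early termination of red
# under severe congestion. All values are placeholders, not the paper's calibrated parameters.
from dataclasses import dataclass

@dataclass
class HSRController:
    start_speed: float = 70.0     # open the shoulder when mainline speed drops below this (km/h)
    end_speed: float = 90.0       # begin closing when speed recovers above this (km/h)
    min_green: int = 3            # minimum operation times, in control intervals (placeholders)
    min_red: int = 3
    severe_speed: float = 40.0    # "severe congestion" threshold for ending red early
    state: str = "red"
    timer: int = 0

    def step(self, speed: float, shoulder_clear: bool = True) -> str:
        """Advance one control interval given the current mainline speed."""
        self.timer += 1
        if self.state == "red":
            # Open early if congestion is severe; otherwise respect the minimum red time.
            if speed < self.start_speed and (self.timer >= self.min_red or speed < self.severe_speed):
                self.state, self.timer = "red/amber", 0
        elif self.state == "red/amber":
            # Operators confirm the shoulder is free of obstacles before opening it.
            if shoulder_clear:
                self.state, self.timer = "green", 0
        elif self.state == "green":
            if speed > self.end_speed and self.timer >= self.min_green:
                self.state, self.timer = "amber", 0
        elif self.state == "amber":
            # Warn drivers that HSR is ending, then close the shoulder.
            self.state, self.timer = "red", 0
        return self.state

ctrl = HSRController()
for spd in [95, 65, 60, 55, 92, 93, 95, 96]:
    print(spd, "->", ctrl.step(spd))
```

In a real deployment the thresholds and minimum times would come from the calibrated simulation results, and the operator confirmation in the red/amber stage would be an external input rather than a boolean flag.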