• Title/Summary/Keyword: information processing


Counterfeit Money Detection Algorithm using Non-Local Mean Value and Support Vector Machine Classifier (비지역적 특징값과 서포트 벡터 머신 분류기를 이용한 위변조 지폐 판별 알고리즘)

  • Ji, Sang-Keun; Lee, Hae-Yeoun
    • KIPS Transactions on Software and Data Engineering / v.2 no.1 / pp.55-64 / 2013
  • Due to the popularization of high-performance digital capturing equipment and the emergence of powerful image-editing software, it is easy for anyone to produce high-quality counterfeit banknotes. However, the probability that a member of the general public will detect a counterfeit is extremely low. In this paper, we propose a counterfeit banknote detection algorithm that uses a general-purpose scanner. The algorithm discriminates counterfeits based on features that differ with the printing process. After the non-local mean value is used to analyze the noise of each banknote, statistical features are extracted from this noise by computing a gray-level co-occurrence matrix. These features are then used to train and test a support vector machine classifier that identifies a banknote as genuine or counterfeit. In the experiment, a total of 324 images of genuine and counterfeit banknotes were used, and the proposed noise features were compared with noise features from previous studies based on the Wiener filter and the discrete wavelet transform. The accuracy of the algorithm in identifying counterfeit banknotes was over 94%, and the accuracy in identifying the printing source was over 93%. The presented algorithm performs better than previous approaches.
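
The pipeline described in this abstract maps naturally onto off-the-shelf components. Below is a minimal sketch of a comparable pipeline, assuming recent scikit-image and scikit-learn APIs (denoise_nl_means, graycomatrix, SVC); the parameter values and the exact feature set are illustrative, not the authors' published configuration.

```python
import numpy as np
from skimage.util import img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def noise_glcm_features(gray_img):
    """GLCM statistics of the noise residual of a grayscale banknote scan."""
    img = img_as_float(gray_img)
    sigma = float(np.mean(estimate_sigma(img)))
    denoised = denoise_nl_means(img, h=1.15 * sigma, fast_mode=True)
    residual = img - denoised                          # printing-process noise pattern
    span = np.ptp(residual) + 1e-9
    quantized = np.uint8(np.clip((residual - residual.min()) / span * 255, 0, 255))
    glcm = graycomatrix(quantized, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Features of genuine (label 0) and counterfeit (label 1) scans would then be used to
# train and evaluate the classifier, e.g.:
# clf = SVC(kernel="rbf").fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```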

Design and Implementation of a Protocol for Interworking Open Web Application Store (개방형 웹 애플리케이션 스토어 연동을 위한 프로토콜의 설계 및 구현)

  • Baek, Jihun; Kim, Jihun; Nam, Yongwoo; Lee, HyungUk; Park, Sangwon; Jeon, Jonghong; Lee, Seungyoon
    • KIPS Transactions on Software and Data Engineering / v.2 no.10 / pp.669-678 / 2013
  • Recently, as portable devices have become popular, it is common for a person to carry more than one, and smartphone use continues to grow. With the rapid spread of smartphones, the use of smartphone applications has also increased. However, each application store still requires a different platform for developing and distributing applications. The application market is dominated by two large stores, Android and Apple, so developers have to build their applications on these two different platforms, which nearly doubles development cost. For the remaining platforms with a small market share, such as Bada, the problem is not only cost but also attracting enough developers to build applications for that platform. Web applications are emerging as a solution to these problems, reducing the cost and time of developing applications for every platform: because a web application is not tied to any particular store's platform, it can run on every portable device. Consequently, all application markets could be united into one large market through a protocol that interconnects web application stores. However, there is still no standard for web application stores, and no current web application store can interwork with others. In this paper, we propose such a protocol, demonstrate it with a prototype, and show that it can address these weaknesses.

Understanding of Structural Changes of Keyword Networks in the Computer Engineering Field (컴퓨터공학 분야 키워드네트워크의 구조적 변화 이해)

  • Kwon, Yung-Keun
    • KIPS Transactions on Software and Data Engineering / v.2 no.3 / pp.187-194 / 2013
  • Recently, there have been many attempts to analyze research trends through structural analysis of keyword networks in various fields. However, most previous studies have focused on the structural analysis of static networks, and there is little research on how the structure of such networks changes over time. In this paper, we constructed annual keyword networks from a database of papers published in international computer engineering journals from 2002 through 2011 and examined how they change. The results show that most keywords in a network are preserved in the network of the following year, and that their degree of connectivity is higher, and the average weight of their connections smaller, than those of keywords that are not preserved. In addition, when a keyword network shifts to that of the next year, connections between keywords are more likely to be removed than preserved, and the average weight of the removed connections is higher than that of the preserved ones. These results imply that the keywords themselves change little over time while their connections change considerably, and that there are clear differences between the preserved and removed groups of keywords and connections with respect to degree and connection weight. All of these results are observed consistently over the ten-year dataset and can serve as important principles for understanding the structural changes of keyword networks.
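
The year-over-year comparison described above can be expressed compactly over weighted graphs. Below is a minimal sketch using networkx, assuming each annual keyword network is an undirected graph whose edges carry a "weight" attribute (co-occurrence count); the statistics mirror the quantities discussed in the abstract, not the paper's actual code.

```python
import numpy as np
import networkx as nx  # the yearly networks G_t, G_next are assumed to be nx.Graph objects

def yearly_change_stats(G_t, G_next):
    """Compare a keyword network with the following year's network."""
    preserved = set(G_t) & set(G_next)
    removed = set(G_t) - set(G_next)

    def node_stats(nodes):
        # mean degree and mean connection weight of the given keywords in year t
        if not nodes:
            return 0.0, 0.0
        degrees = [G_t.degree(n) for n in nodes]
        mean_w = [np.mean([d["weight"] for _, _, d in G_t.edges(n, data=True)] or [0])
                  for n in nodes]
        return float(np.mean(degrees)), float(np.mean(mean_w))

    def mean_edge_weight(edges):
        return float(np.mean([G_t.edges[e]["weight"] for e in edges] or [0]))

    kept_edges = [e for e in G_t.edges if G_next.has_edge(*e)]
    lost_edges = [e for e in G_t.edges if not G_next.has_edge(*e)]

    return {
        "keyword_preservation_ratio": len(preserved) / max(G_t.number_of_nodes(), 1),
        "preserved_kw_degree_weight": node_stats(preserved),
        "removed_kw_degree_weight": node_stats(removed),
        "edge_preservation_ratio": len(kept_edges) / max(G_t.number_of_edges(), 1),
        "kept_edge_mean_weight": mean_edge_weight(kept_edges),
        "lost_edge_mean_weight": mean_edge_weight(lost_edges),
    }
```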

A Study of Standard eBook Contents Conversion (전자책 표준간의 컨텐츠 변환에 관한 연구)

  • Ko, Seung-Kyu; Sohn, Won-Sung; Lim, Soon-Bum; Choy, Yoon-Chul
    • The KIPS Transactions: Part D / v.10D no.2 / pp.267-276 / 2003
  • Many countries have established eBook standards suited to their environments. In the USA, OEB PS was announced for the distribution and display of eBooks; in Japan, JepaX was announced for storage and exchange; and in Korea, EBKS was created for the unambiguous exchange of eBook contents. These different objectives lead to different content structures, and this variety of content structures causes problems when exchanging content. To exchange eBook contents correctly, the content structure must be taken into account. In this paper, we therefore study methods for converting standard eBook contents, centered on the Korean eBook standard, while respecting content structure. To convert contents properly, the mapping relations must be clearly defined. To this end, we examine each standard's structure and extension mechanisms, and use path notations and namespaces for precise description. By analyzing the mapping relationships, we classify conversion cases into automatic, semi-automatic, and manual conversions. Finally, we write conversion scripts and experiment with them.
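
The automatic part of such a mapping-driven conversion can be sketched as a path-to-element table applied with namespace-aware XPath. The namespace URIs and element names below are hypothetical placeholders, not the actual EBKS, OEB PS, or JepaX vocabularies; semi-automatic and manual cases would need additional, human-supplied rules.

```python
from lxml import etree

# Hypothetical namespace URIs standing in for the real EBKS / OEB PS declarations.
NSMAP = {"ebks": "http://example.org/ebks", "oeb": "http://example.org/oeb"}

# "Automatic" conversion cases: source XPath -> target element name (Clark notation).
AUTO_MAP = {
    "//ebks:title": "{http://example.org/oeb}title",
    "//ebks:para": "{http://example.org/oeb}p",
}

def convert(src_xml: bytes) -> bytes:
    src = etree.fromstring(src_xml)
    dst = etree.Element("{http://example.org/oeb}package", nsmap={"oeb": NSMAP["oeb"]})
    for path, target_tag in AUTO_MAP.items():
        for node in src.xpath(path, namespaces=NSMAP):
            etree.SubElement(dst, target_tag).text = node.text
    # Elements with no one-to-one counterpart would fall into the semi-automatic or
    # manual categories and require extra mapping rules or human intervention.
    return etree.tostring(dst, pretty_print=True)
```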

A Distributed Method for Constructing a P2P Overlay Multicast Network using Computational Intelligence (지능적 계산법을 이용한 분산적 P2P 오버레이 멀티케스트 네트워크 구성 기법)

  • Park, Jaesung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.11 no.6 / pp.95-102 / 2012
  • In this paper, we propose a method that efficiently constructs a P2P overlay multicast network composed of many peers that are heterogeneous in communication bandwidth, processing power, and storage size, by selecting peers in a distributed fashion using ant colony optimization, one of the computational intelligence methods. When selecting the parent of a newly joining node, the proposed method considers not only the capacity of a candidate peer but also the number of children it already supports and its hop distance from the multicast source. The resulting P2P overlay multicast network is efficient in that the distances between the multicast source and the peers are kept small. In addition, the proposed method works in a distributed fashion in that peers use only local information to find a parent node; compared with a centralized method in which a central server maintains and controls the overlay construction, it therefore scales well. Through simulations, we show that, by letting a few high-capacity peers support many low-capacity peers, the proposed method can keep the overlay network small even when there are a few thousand peers in the network.
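
The parent-selection step can be illustrated in the ant-colony style the abstract refers to: a joining peer chooses among candidate parents with probability proportional to pheromone^alpha * heuristic^beta, where the heuristic combines residual capacity, current child count, and hop distance from the source. The weighting, parameters, and data layout below are illustrative assumptions, not the paper's exact formulation.

```python
import random

ALPHA, BETA = 1.0, 2.0  # illustrative pheromone / heuristic exponents

def heuristic(peer):
    # Favour peers with spare capacity and few children that sit close to the source.
    spare = max(peer["capacity"] - peer["children"], 0)
    return spare / (1.0 + peer["hops_from_source"])

def select_parent(candidates, pheromone):
    """Pick a parent for a joining peer; pheromone is a dict keyed by peer id."""
    weights = [(pheromone[p["id"]] ** ALPHA) * (heuristic(p) ** BETA) for p in candidates]
    total = sum(weights)
    if total == 0:                      # no attractive candidate: fall back to a uniform choice
        return random.choice(candidates)
    r, acc = random.uniform(0, total), 0.0
    for peer, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return peer
    return candidates[-1]

# After a successful join, the pheromone of the chosen parent would be reinforced and all
# pheromone values evaporated, e.g. tau = (1 - rho) * tau (+ delta for the chosen peer).
```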

Efficient Methodology in Markov Random Field Modeling : Multiresolution Structure and Bayesian Approach in Parameter Estimation (피라미드 구조와 베이지안 접근법을 이용한 Markov Random Field의 효율적 모델링)

  • 정명희; 홍의석
    • Korean Journal of Remote Sensing / v.15 no.2 / pp.147-158 / 1999
  • Remote sensing techniques have provided a better understanding of our environment for decades by supplying useful information on land cover. In many applications of remotely sensed data, digital image processing methods are employed to characterize features in the data and to develop models. Random field models, especially Markov Random Field (MRF) models that exploit spatial relationships, have been successfully applied to many problems such as texture modeling and region labeling. Remotely sensed images are typically very large, and the data volume grows greatly in problems requiring temporal data over a time period, while the processing time does not scale linearly with image size. In this study, we investigate a methodology to reduce the computational cost of using Markov Random Fields. To this end, a multiresolution framework is explored, which provides a convenient and efficient structure for moving between local and global features. The computational requirements of parameter estimation for the MRF model also become excessive as image size increases, so a Bayesian approach is investigated as an alternative estimation method to reduce the computational burden of estimating the parameters of large images.
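
The multiresolution idea can be made concrete with an image pyramid: solve the MRF at the coarsest level and propagate the labels downward as initialization, so that the expensive optimization runs mostly on small images. This is a minimal sketch assuming scikit-image; solve_mrf stands for a user-supplied MRF optimization step (e.g. ICM with Bayesian-estimated parameters), which the abstract does not specify in detail.

```python
import numpy as np
from skimage.transform import pyramid_gaussian, resize

def coarse_to_fine_labels(image, solve_mrf, max_layer=3):
    """Run an MRF labelling coarse-to-fine over a Gaussian pyramid of the image."""
    pyramid = list(pyramid_gaussian(image, max_layer=max_layer))   # fine -> coarse
    labels = None
    for level in reversed(pyramid):                                # start at the coarsest level
        init = None if labels is None else resize(
            labels, level.shape, order=0, preserve_range=True)     # upsample previous labels
        labels = solve_mrf(level, init)                            # user-supplied MRF step
    return labels
```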

A Study on the Enhancement of DEM Resolution by Radar Interferometry (레이더 간섭기법을 이용한 수치고도모델 해상도 향상에 관한 연구)

  • Kim Chang-Oh; Kim Sang-Wan; Lee Dong-Cheon; Lee Yong-Wook; Kim Jeong Woo
    • Korean Journal of Remote Sensing / v.21 no.4 / pp.287-302 / 2005
  • Digital Elevation Models (DEMs) were generated by ERS-1/2 and JERS-1 SAR interferometry over the Daejon area, Korea. The quality of the DEMs was evaluated using Ground Control Points (GCPs) determined by GPS surveys in the urban area, while in the mountainous area, where no GCPs were available, a 1:25,000 digital map was used. To minimize errors due to inaccurate satellite orbit information and the phase unwrapping procedure, Differential InSAR (DInSAR) was applied in addition to the traditional InSAR analysis for DEM generation. DEMs from GTOPO30, SRTM-3, and the 1:25,000 digital map were also used to assess the resolution of the DEM generated by DInSAR. From the InSAR analysis of one ERS tandem pair and six JERS-1 interferometric pairs, elevation errors of 5-6 meters were found in the flat area regardless of which reference DEM was used or its resolution. In the mountainous area, however, DInSAR with the DEMs from SRTM-3 and the digital map proved very effective in reducing errors caused by the phase unwrapping procedure. Errors due to the low signal-to-noise ratio of the radar images and atmospheric effects were also attenuated in the DEMs generated by stacking the six JERS-1 pairs. SAR interferometry with multiple interferometric pairs and a low-resolution reference DEM can thus be used effectively to enhance DEM resolution in terms of data processing time and cost.
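
The stacking step mentioned above amounts to a per-pixel weighted average of several elevation estimates, which attenuates phase-unwrapping, SNR, and atmospheric errors. A minimal sketch, assuming co-registered DEM and coherence arrays of equal size; the coherence weighting is an illustrative choice, not necessarily the authors' exact processing chain.

```python
import numpy as np

def stack_dems(dems, coherences):
    """dems, coherences: lists of equally sized 2-D arrays (metres, 0..1 coherence)."""
    dems = np.stack(dems)                      # shape (N, rows, cols)
    w = np.stack(coherences)
    w = w / (w.sum(axis=0, keepdims=True) + 1e-9)   # normalize weights per pixel
    return (w * dems).sum(axis=0)              # coherence-weighted mean elevation per pixel
```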

A Study on the Concentration of Research Investment in National R&D Projects Using the Theil Index (타일(Theil) 지수를 이용한 국가연구개발사업의 연구비 집중도 분석)

  • Yang, Hyeonchae; Sung, Kyungmo; Kim, Yeonglin
    • KIPS Transactions on Software and Data Engineering / v.8 no.9 / pp.355-362 / 2019
  • In the past, when research and development (R&D) resources were absolutely scarce, the so-called 'choice and concentration' strategy for national R&D projects was persuasive. Under the current situation, where various actors such as government-funded research institutes (GRIs) and universities, supported by more abundant R&D resources, conduct national R&D projects, this strategy cannot be applied indiscriminately. To see how the strategy has worked, this paper analyzes the concentration of the research funds allocated to the actors performing national R&D projects. Concentration is measured from the research funds provided by the government from 2002 to 2016, using the Theil index to decompose the concentration of individual actors within the overall national R&D program. The results from the Theil index were compared with concentrations computed with the Gini coefficient, a widely known indicator. The Theil index proved suitable for analyzing both the overall concentration and the contributions of sub-components, such as universities and GRIs, that make up the national R&D system. The results also showed that GRIs had the highest concentration, followed by universities, although their concentration has decreased somewhat compared with ten years ago. Small companies, on the other hand, have maintained a certain level of concentration, though it is not high. In other words, universities and GRIs tend to reduce the gap in the allocation of research funds among institutions, while funds for small companies tend to be distributed evenly.
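
The Theil T index used above is T = (1/n) Σ (x_i/μ) ln(x_i/μ), and it decomposes exactly into within-group and between-group terms, which is what makes the contributions of sub-components visible. A minimal sketch, assuming strictly positive institution-level funding amounts grouped by performing sector; the group labels are illustrative, not the paper's dataset.

```python
import numpy as np

def theil(x):
    """Theil T index of a vector of (strictly positive) funding amounts."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))

def theil_decomposition(funds_by_group):
    """funds_by_group: dict mapping sector name -> array of institution-level funds."""
    all_x = np.concatenate([np.asarray(v, dtype=float) for v in funds_by_group.values()])
    mu, total = all_x.mean(), all_x.sum()
    within = between = 0.0
    for _, x in funds_by_group.items():
        x = np.asarray(x, dtype=float)
        share = x.sum() / total                 # group's share of total research funds
        within += share * theil(x)              # inequality inside the group
        between += share * np.log(x.mean() / mu)  # inequality between group means
    return {"total": within + between, "within": within, "between": between}

# e.g. theil_decomposition({"universities": [...], "GRIs": [...], "small_companies": [...]})
```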

An Investigation on the Future Recognition of Career Counselors and their Future Competency and Future Adaptability change by using the Future Workshop (미래워크숍을 활용한 진로직업상담가의 미래인식과 미래역량 및 미래적응력 변화 탐색)

  • Yeom, In-Sook; Lim, Geum-Hui
    • Journal of Digital Convergence / v.17 no.11 / pp.557-567 / 2019
  • This study was conducted to derive career counselors' perceptions of the future and their future competencies using a future workshop, and to verify its effectiveness in improving their future adaptability. For this purpose, a future workshop was conducted with 25 career counselors, and the materials they produced and the discussions held during the workshop were analyzed. Word frequency analysis and a paired-samples t-test were conducted, and the key terms were derived through consensus. First, the keywords of future perception with the highest frequencies were robot, artificial intelligence, leisure, education, convenience, and the disabled. Second, the participants projected that future workplaces would change most because of advanced technology. Third, at career counseling sites, specialized career counselors and robot counselors related to the Fourth Industrial Revolution are expected to appear. Fourth, the future competencies of career counselors were identified as information processing ability, professional counseling ability, communication ability, and ethical consciousness. Finally, it was confirmed that the future adaptability of career counselors increased after participating in the future workshop, and the future competencies derived in this study are expected to be used for the job training of career counselors.
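
The pre/post comparison reported above is a paired-samples t-test on the same 25 participants and can be reproduced with scipy; the score arrays themselves are not given in the abstract, so the function below is only a sketch of the test, not the study's data.

```python
from scipy import stats

def adaptability_change(pre_scores, post_scores):
    """Paired-samples t-test on future-adaptability scores measured before and after
    the workshop for the same participants (here, 25 career counselors)."""
    t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
    return t_stat, p_value

# A significant positive t statistic would indicate improved future adaptability.
```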

Purchase Transaction Similarity Measure Considering Product Taxonomy (상품 분류 체계를 고려한 구매이력 유사도 측정 기법)

  • Yang, Yu-Jeong; Lee, Ki Yong
    • KIPS Transactions on Software and Data Engineering / v.8 no.9 / pp.363-372 / 2019
  • A sequence is data in which an order exists between items; purchase transaction data, which lists the products purchased by a customer, is a representative type of sequence data. In general, every product belongs to a product taxonomy such as category/sub-category/sub-sub-category, and similar products are classified into the same category according to their characteristics. In this paper, therefore, we compare two purchase transaction sequences by considering not only the order in which products were purchased but also the taxonomy: a higher score is given when two different products fall into the same category. In particular, to choose the similarity measure that best suits purchase transaction sequences, we compared three representative measures, the Levenshtein distance, the dynamic time warping (DTW) distance, and the Needleman-Wunsch similarity, and extended each of them to take the product taxonomy into account. In conventional measures, the comparison of the products in two sequences simply assigns a value of 0 or 1 depending on whether they match. The proposed method instead assigns a value between 0 and 1, using the product taxonomy tree, to reflect the degree of relevance between two products even when they are different. Experiments confirmed that the proposed method measures similarity more accurately than the previous methods. Furthermore, the DTW distance turned out to be the most suitable measure, because it considers the degree of association of the products in the sequences and performs well for sequences of different lengths.
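
A minimal sketch of the taxonomy-aware idea, applied here to the DTW variant that the experiments favored: products are represented as category paths, and the per-pair cost is graded by the depth of the shared taxonomy prefix instead of a binary 0/1 match. The path encoding and cost scaling are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def taxonomy_cost(a, b):
    """0 for identical products; smaller cost the deeper their common category prefix."""
    depth = max(len(a), len(b))
    shared = 0
    for x, y in zip(a, b):
        if x != y:
            break
        shared += 1
    return 1.0 - shared / depth

def dtw_distance(seq1, seq2):
    """DTW over two purchase sequences of products given as category-path tuples."""
    n, m = len(seq1), len(seq2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = taxonomy_cost(seq1[i - 1], seq2[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# e.g. dtw_distance([("food", "dairy", "milk"), ("food", "bakery", "bread")],
#                   [("food", "dairy", "yogurt")])
```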