• Title/Summary/Keyword: Information Merging


Multi-stage Image Restoration for High Resolution Panchromatic Imagery (고해상도 범색 영상을 위한 다중 단계 영상 복원)

  • Lee, Sanghoon
    • Korean Journal of Remote Sensing / v.32 no.6 / pp.551-566 / 2016
  • In satellite remote sensing, the operational environment of the satellite sensor causes image degradation during image acquisition. This degradation results in noise and blurring, which hinder the identification and extraction of useful information from the image data, and its effect is especially severe when analyzing images collected over scenes with complicated surface structure, such as urban areas. This study proposes a multi-stage image restoration method to improve the accuracy of detailed analysis for images collected over such complicated scenes. The proposed method assumes Gaussian additive noise, a Markov random field of spatial continuity, and blurring proportional to the distance between pixels. Point-Jacobian Iteration Maximum A Posteriori (PJI-MAP) estimation is employed to restore a degraded image. The multi-stage process includes image segmentation that performs region merging after pixel linking, and a dissimilarity coefficient combining homogeneity and contrast is proposed for the segmentation step. The proposed method was quantitatively evaluated using simulation data and was also applied to two super-high-resolution panchromatic images: Dubaisat-2 data of 1 m resolution from LA, USA and KOMPSAT3 data of 0.7 m resolution from Daejeon in the Korean peninsula. The experimental results imply that the method can improve analytical accuracy in the application of high-resolution panchromatic remote sensing imagery.
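The exact PJI-MAP update is derived in the paper itself; the following is only a minimal gradient-based sketch of MAP restoration under the stated assumptions (Gaussian additive noise, a quadratic MRF smoothness prior, a symmetric Gaussian blur). The blur width, prior weight, step size, and iteration count are illustrative placeholders, not values from the paper.

```python
# Minimal, illustrative MAP restoration loop under a Gaussian-noise /
# quadratic-MRF assumption. NOT the paper's PJI-MAP formulation; the blur
# model and all numeric parameters are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def map_restore(y, blur_sigma=1.5, lam=0.2, step=0.5, n_iter=50):
    """Iteratively restore a blurred, noisy image y (2-D float array)."""
    x = y.copy()
    for _ in range(n_iter):
        # Data term: gradient of ||H x - y||^2 with a symmetric Gaussian blur H
        residual = gaussian_filter(x, blur_sigma) - y
        grad_data = gaussian_filter(residual, blur_sigma)   # H^T (H x - y)
        # MRF prior term: pull each pixel toward its local neighborhood mean
        grad_prior = x - uniform_filter(x, size=3)
        x -= step * (grad_data + lam * grad_prior)
    return x
```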

Test Time Reduction of BIST by Primary Input Grouping Method (입력신호 그룹화 방법에 의한 BIST의 테스트 시간 감소)

  • Chang, Yoon-Seok;Kim, Dong-Wook
    • Journal of the Institute of Electronics Engineers of Korea SD / v.37 no.8 / pp.86-96 / 2000
  • Testing is one of the areas whose cost grows most as the integration ratio of circuits increases. As the relative cost of hardware decreases, the BIST method has come to be regarded as a future-oriented test method, but its biggest drawback is the test time required to reach acceptable fault coverage. This paper proposes a BIST implementation method to reduce that test time. The method uses input grouping together with test point insertion, in which the definition of a test point differs from previous work: test points are defined on internal nodes that serve as the reference points of the input grouping and as merging points of the grouped signals. The main algorithms of the proposed method were implemented in C, and various circuits were used in the experiments. The results showed that the test time could be reduced to at most $1/2^{40}$ of the pseudo-random pattern case, and the fault coverage was also increased compared with the conventional BIST method. The hardware overhead of the proposed method relative to the circuit under test decreases as the size of the circuit increases, and the delay overhead introduced by the BIST circuitry is negligible compared to that of the original circuit. This means the proposed method can be applied efficiently to large VLSI circuits.
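As a rough illustration of why input grouping shrinks the pattern count, the sketch below compares exhaustive test lengths for ungrouped versus grouped inputs. The 60-input circuit and the four 15-input groups are hypothetical; the paper derives its groups from internal merging nodes and also inserts test points, which is not modeled here.

```python
# Back-of-the-envelope test-length comparison for input grouping.
# The groups below are hypothetical example values, not from the paper.
def exhaustive_length(n_inputs):
    return 2 ** n_inputs

def grouped_length(group_sizes):
    # Each group is exercised exhaustively in turn, so lengths add
    # instead of multiplying across all inputs.
    return sum(2 ** g for g in group_sizes)

if __name__ == "__main__":
    n = 60
    groups = [15, 15, 15, 15]           # hypothetical partition of 60 inputs
    print(exhaustive_length(n))          # 2^60 patterns
    print(grouped_length(groups))        # 4 * 2^15 patterns
    print(exhaustive_length(n) // grouped_length(groups))  # reduction factor
```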


Segmentation Method of Overlapped nuclei in FISH Image (FISH 세포영상에서의 군집세포 분할 기법)

  • Jeong, Mi-Ra;Ko, Byoung-Chul;Nam, Jae-Yeal
    • The KIPS Transactions:PartB / v.16B no.2 / pp.131-140 / 2009
  • This paper presents a new algorithm for the segmentation of FISH images. First, to separate cell nuclei from the background, a threshold is estimated using a Gaussian mixture model that maximizes the likelihood function of the gray values of the cell images. After nuclei segmentation, overlapped and isolated nuclei need to be distinguished for exact nuclei analysis. For this classification, morphological features of the nuclei such as compactness, smoothness, and moments are extracted from training data; three probability density functions are generated from these features and applied as evidence to the proposed Bayesian network. After classification, overlapped nuclei must be split into isolated nuclei. The paper first performs an intensity gradient transform and the watershed algorithm to segment overlapped nuclei, and then a proposed stepwise merging strategy merges the resulting fragments into the major nucleus. Experimental results on FISH images show that the system improves segmentation performance compared to previous research, since nuclei classification is performed before separating overlapped nuclei.
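A condensed sketch of the two segmentation stages described above, using off-the-shelf routines (scikit-learn's Gaussian mixture, scikit-image's watershed) rather than the authors' implementation. The two-component mixture, the midpoint threshold, and the distance-transform markers are simplifying assumptions, and the Bayesian-network classification and stepwise merging steps are omitted.

```python
# Rough two-stage nuclei segmentation: GMM threshold, then gradient watershed.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_nuclei(gray):
    """gray: 2-D float array of a FISH image."""
    # 1) Foreground/background threshold from a 2-component Gaussian mixture
    gmm = GaussianMixture(n_components=2).fit(gray.reshape(-1, 1))
    threshold = gmm.means_.mean()            # midpoint between the two means
    mask = gray > threshold
    # 2) Split touching nuclei with an intensity-gradient watershed
    distance = ndi.distance_transform_edt(mask)
    markers, _ = ndi.label(distance > 0.5 * distance.max())
    labels = watershed(sobel(gray), markers, mask=mask)
    return labels
```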

Design and Analysis of a Digit-Serial $AB^{2}$ Systolic Arrays in $GF(2^{m})$ ($GF(2^{m})$ 상에서 새로운 디지트 시리얼 $AB^{2}$ 시스톨릭 어레이 설계 및 분석)

  • Kim Nam-Yeun;Yoo Kee-Young
    • Journal of KIISE:Computer Systems and Theory / v.32 no.4 / pp.160-167 / 2005
  • Among finite field arithmetic operations, division/inversion is known as a basic operation for public-key cryptosystems over $GF(2^{m})$, and it is computed by performing repetitive $AB^{2}$ multiplications. This paper presents a digit-serial-in-serial-out systolic architecture for performing the $AB^2$ operation in $GF(2^{m})$. To obtain an L×L digit-serial-in-serial-out architecture, a new $AB^{2}$ algorithm is proposed, together with partitioning, index transformation, and cell merging of the architecture derived from the algorithm. Based on the area-time product, when the digit size L of the digit-serial architecture is selected to be less than about m, the proposed digit-serial architecture is more efficient than the bit-parallel architecture, and when L is less than about $(1/5)log_{2}(m+1)$, it is more efficient than the bit-serial one. In addition, the area-time product of the pipelined digit-serial $AB^{2}$ systolic architecture is approximately $10.9\%$ lower than that of the non-pipelined one, assuming m=160 and L=8. The proposed architecture can be utilized as a basic building block of a crypto-processor and is well suited to VLSI implementation because of its simplicity, regularity, and pipelinability.
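To make the underlying arithmetic concrete, here is a plain software sketch of the $AB^{2}$ operation over $GF(2^{m})$; the paper's contribution is the digit-serial systolic hardware, which this bit-level routine does not model. The field polynomial used in the example (x^8 + x^4 + x^3 + x + 1) is an arbitrary choice.

```python
# Bit-level A*B^2 in GF(2^m), elements represented as Python integers.
def gf2_mul(a, b, poly, m):
    """Multiply two GF(2^m) elements modulo the field polynomial poly."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a >> m:                 # reduce when the degree reaches m
            a ^= poly
    return result

def ab_squared(a, b, poly, m):
    """Compute A * B^2 in GF(2^m)."""
    return gf2_mul(a, gf2_mul(b, b, poly, m), poly, m)

if __name__ == "__main__":
    POLY, M = 0b1_0001_1011, 8     # x^8 + x^4 + x^3 + x + 1 (example field)
    print(hex(ab_squared(0x57, 0x83, POLY, M)))
```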

Automatic Extraction of Roof Components from LiDAR Data Based on Octree Segmentation (LiDAR 데이터를 이용한 옥트리 분할 기반의 지붕요소 자동추출)

  • Song, Nak-Hyeon;Cho, Hong-Beom;Cho, Woo-Sug;Shin, Sung-Woong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.4 / pp.327-336 / 2007
  • 3D building modeling is one of the crucial components in building 3D geospatial information. Existing methods for 3D building modeling depend mainly on manual photogrammetric processes by stereoplotter operators, which take a great amount of time and effort. In addition, automatic methods proposed in research papers and experimental trials are limited in describing the details of buildings and lack geometric accuracy. For an automatic approach, it is essential that the boundary and shape of buildings be extracted reliably by a sophisticated algorithm. In recent years, airborne LiDAR data representing the earth's surface in 3D has been utilized in many different fields, but clean and correct boundary extraction without human intervention remains technically difficult. Airborne LiDAR data becomes much more feasible for reconstructing building rooftops when the building boundary lines can be taken from existing digital maps. This paper proposes a method to reconstruct building rooftops using airborne LiDAR data together with building boundary lines from a digital map. The primary process performs octree-based segmentation on the airborne LiDAR data recursively in 3D space until there are no more LiDAR points to be segmented. Once the octree-based segmentation is complete, the segmented patches are merged based on their geometric spatial characteristics. The experimental results showed that the proposed method was capable of extracting various building roof components such as planar, gable, polyhedric, and curved surfaces.
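A condensed sketch of the recursive octree subdivision step described above. The planarity test (smallest PCA eigenvalue of a leaf's points) and the stopping thresholds are assumptions for illustration; the subsequent patch merging by geometric similarity is not shown.

```python
# Recursive octree subdivision of a point cloud into roughly planar leaves.
import numpy as np

def is_planar(points, tol=0.05):
    if len(points) < 10:
        return True
    centered = points - points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))
    return eigvals[0] < tol          # smallest variance direction ~ flat patch

def octree_segment(points, bounds, min_size=1.0, leaves=None):
    """points: (N, 3) array; bounds: (min_xyz, max_xyz) arrays of the current cube."""
    if leaves is None:
        leaves = []
    lo, hi = bounds
    if len(points) == 0:
        return leaves
    if is_planar(points) or np.max(hi - lo) < min_size:
        leaves.append(points)
        return leaves
    mid = (lo + hi) / 2.0
    for octant in range(8):                       # split into 8 children
        sel = np.ones(len(points), dtype=bool)
        for axis in range(3):
            if (octant >> axis) & 1:
                sel &= points[:, axis] >= mid[axis]
            else:
                sel &= points[:, axis] < mid[axis]
        upper = [(octant >> a) & 1 for a in range(3)]
        child_lo = np.where(upper, mid, lo)
        child_hi = np.where(upper, hi, mid)
        octree_segment(points[sel], (child_lo, child_hi), min_size, leaves)
    return leaves

# Usage: octree_segment(pts, (pts.min(axis=0), pts.max(axis=0)))
```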

A Measure of Landscape Planning and Design Application through 3D Scan Analysis (3D 스캔 분석을 통한 전통조경 계획 및 설계 활용방안)

  • Shin, Hyun-Sil
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.36 no.4 / pp.105-112 / 2018
  • This study aims to apply 3D scanning technology to the field of landscape planning and design. To this end, 3D scans were conducted on Soswaewon Garden and Seongnakwon Garden to find directions for traditional landscape planning and design. The results are as follows. First, the survey of the traditional gardens through 3D scanning confirmed that a precise three-dimensional model with an error of ${\pm}3-5mm$ was constructed by merging the coordinate values of the point data acquired at each observation station and post-processing them. Second, as a result of the 3D survey, Soswaewon Garden yielded survey data on Jewoldang House, Gwangpunggak Pavilion, the surrounding wall, the stone axis, and the Aeyangdan wall, while Seongnakwon Garden yielded survey data on the topography, rocks, and waterways around the Yeongbyeokji pond area. These data have the advantage of enabling monitoring of the changing appearance of the gardens. Third, the spatial information obtained through 3D scanning could be developed into a three-dimensional drawing preparation and inspection tool containing precise real-world data, and this process ensured the economic feasibility, in time and manpower, of the actual survey and investigation of landscaping space. Modelling at a three-dimensional 1:1 scale is also expected to be highly efficient, in that reliable spatial data can be maintained and reprocessed to a specific size depending on the scale of the design. From a long-term perspective, accumulating 3D scan data also makes it easy to predict and simulate changes in traditional landscaping space over time.
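A minimal sketch of the coordinate-merging step mentioned in the first result: scans captured at different stations are brought into one coordinate frame by rigid transforms and stacked. The transforms here are placeholders; in practice they come from target- or feature-based registration during post-processing.

```python
# Merge per-station scans into one coordinate frame with rigid transforms.
import numpy as np

def to_common_frame(points, rotation, translation):
    """points: (N, 3); rotation: (3, 3); translation: (3,)."""
    return points @ rotation.T + translation

def merge_scans(scans):
    """scans: list of (points, rotation, translation) tuples, one per station."""
    return np.vstack([to_common_frame(p, R, t) for p, R, t in scans])
```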

Various Quality Fingerprint Classification Using the Optimal Stochastic Models (최적화된 확률 모델을 이용한 다양한 품질의 지문분류)

  • Jung, Hye-Wuk;Lee, Jee-Hyong
    • Journal of the Korea Society for Simulation / v.19 no.1 / pp.143-151 / 2010
  • Fingerprint classification is a step to increase the efficiency of a 1:N fingerprint recognition system; it reduces fingerprint matching time and increases recognition accuracy. Classifying fingerprints is difficult because the ridge pattern of each fingerprint class overlaps with more than one class, fingerprint images may contain a lot of noise, and input conditions can be exceptional. In this paper, we propose a novel approach that designs a stochastic model and performs fingerprint classification using the directional characteristics of fingerprints, for effective classification of images of various qualities. We compute a directional value by scanning the fingerprint ridges pixel by pixel and extract a directional characteristic by merging the computed directional values over fixed pixel units. A modified Markov model of each fingerprint class is generated from the extracted directional characteristics, using the Markov model as a stochastic information-extraction and recognition method. The weight list of the classification model for each class is decided by analyzing the state transition matrices of the generated Markov models, and an optimized value that improves classification performance is estimated using a GA (Genetic Algorithm). Experiments applying a fingerprint database of various qualities show that the classification model optimized by the GA is superior to the model before optimization. The proposed method also classifies fingerprints effectively under exceptional input conditions, because the approach is independent of the existence or nonexistence of singular points, as confirmed by analysis of the fingerprint database used in the experiments.
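A minimal sketch of the directional Markov-model idea: ridge directions are quantized into a small number of states and a state-transition matrix is estimated from their sequence along a scan path. The eight-state quantization and the scanning order are assumptions, and the GA-based weighting of the class models is omitted.

```python
# Estimate a direction-state transition matrix from a ridge-direction sequence.
import numpy as np

N_STATES = 8   # quantized ridge directions (assumed)

def transition_matrix(direction_sequence):
    """direction_sequence: 1-D array of angles in radians along a scan path."""
    states = (np.mod(direction_sequence, np.pi) / np.pi * N_STATES).astype(int)
    states = np.clip(states, 0, N_STATES - 1)
    counts = np.zeros((N_STATES, N_STATES))
    for s, t in zip(states[:-1], states[1:]):
        counts[s, t] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize rows to probabilities, leaving unseen states as zero rows
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)
```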

Survey of coastal topography using images from a single UAV (단일 UAV를 이용한 해안 지형 측량)

  • Noh, Hyoseob;Kim, Byunguk;Lee, Minjae;Park, Yong Sung;Bang, Ki Young;Yoo, Hojun
    • Journal of Korea Water Resources Association / v.56 no.spc1 / pp.1027-1036 / 2023
  • Coastal topographic information is crucial in coastal management, but point-measurement-based approaches, which are labor intensive, are generally applied to land and underwater areas separately. This study introduces an efficient method enabling land and underwater surveys using a single unmanned aerial vehicle (UAV). The method applies two different algorithms to measure the land topography and the water depth, respectively, from UAV imagery, and merges them to reconstruct the whole coastal digital elevation model. The landside terrain is acquired using the Structure-from-Motion Multi-View Stereo technique with spatial scan imagery. Independently, the underwater bathymetry is retrieved by employing a depth-inversion technique on a drone-acquired wave field video. After merging the two digital elevation models into a local coordinate system, interpolation is performed for areas where terrain measurement is not feasible, ultimately yielding a continuous nearshore terrain. We applied the proposed survey technique to Jangsa Beach, South Korea, and verified that detailed terrain characteristics, such as the berm, can be measured. The proposed UAV-based survey method offers significant efficiency in terms of time, cost, and safety compared to existing methods.
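A rough sketch of the merging and interpolation step: the SfM land surface and the depth-inversion bathymetry are combined on one local grid, and gaps between them are filled by interpolation. The grid spacing and the linear griddata interpolation are illustrative assumptions, not the authors' exact procedure.

```python
# Merge land and bathymetry elevation points onto one grid, filling gaps.
import numpy as np
from scipy.interpolate import griddata

def merge_coastal_dem(land_xyz, bathy_xyz, grid_res=1.0):
    """land_xyz, bathy_xyz: (N, 3) arrays of x, y, elevation in one local CRS."""
    points = np.vstack([land_xyz, bathy_xyz])
    x_min, y_min = points[:, :2].min(axis=0)
    x_max, y_max = points[:, :2].max(axis=0)
    gx, gy = np.meshgrid(np.arange(x_min, x_max, grid_res),
                         np.arange(y_min, y_max, grid_res))
    # Linear interpolation fills gaps inside the data's convex hull;
    # cells outside it remain NaN.
    dem = griddata(points[:, :2], points[:, 2], (gx, gy), method="linear")
    return gx, gy, dem
```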

User-Perspective Issue Clustering Using Multi-Layered Two-Mode Network Analysis (다계층 이원 네트워크를 활용한 사용자 관점의 이슈 클러스터링)

  • Kim, Jieun;Kim, Namgyu;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.93-107 / 2014
  • In this paper, we report what we have observed with regard to user-perspective issue clustering based on multi-layered two-mode network analysis. This work is significant in the context of data collection by companies about customer needs. Most companies have failed to uncover such needs for products or services properly when relying on demographic data such as age, income levels, and purchase history. Because of excessive reliance on limited internal data, most recommendation systems do not provide decision makers with appropriate business information for current business circumstances. Part of the problem is the increasing regulation of personal data gathering and privacy, which makes demographic or transaction data collection more difficult and is a significant hurdle for traditional recommendation approaches, because these systems demand a great deal of personal data or transaction logs. Our motivation for presenting this paper is our strong belief, and evidence, that most customers' requirements for products can be effectively and efficiently analyzed from unstructured textual data such as Internet news text. In order to derive users' requirements from textual data obtained online, the proposed approach constructs double two-mode networks, a user-news network and a news-issue network, and integrates these into one quasi-network as the input for issue clustering. One of the contributions of this research is the development of a methodology that utilizes enormous amounts of unstructured textual data for user-oriented issue clustering by leveraging existing text mining and social network analysis. In order to build multi-layered two-mode networks from news logs, we need tools such as text mining and topic analysis. We used not only SAS Enterprise Miner 12.1, which provides a text miner module and a cluster module for textual data analysis, but also NetMiner 4 for network visualization and analysis. Our approach for user-perspective issue clustering is composed of six main phases: crawling, topic analysis, access pattern analysis, network merging, network conversion, and clustering. In the first phase, we collect visit logs for news sites with a crawler. After gathering unstructured news article data, the topic analysis phase extracts issues from each news article in order to build an article-issue network; for simplicity, 100 topics are extracted from 13,652 articles. In the third phase, a user-article network is constructed from access patterns derived from web transaction logs. The double two-mode networks are then merged into a user-issue quasi-network. Finally, in the user-oriented issue-clustering phase, we classify issues through structural equivalence and compare the results with the clustering results from statistical tools and network analysis. An experiment with a large dataset was performed to build the multi-layered two-mode network, after which we compared the issue-clustering results from SAS with those of the network analysis. The experimental dataset came from a web site ranking service and the biggest portal site in Korea. The sample dataset contains 150 million transaction logs and 13,652 news articles from 5,000 panel members over one year. User-article and article-issue networks are constructed and merged into a user-issue quasi-network using NetMiner. Our issue clustering applied the Partitioning Around Medoids (PAM) algorithm and Multidimensional Scaling (MDS), and the results are consistent with those from SAS clustering.
In spite of extensive efforts to provide user information with recommendation systems, most projects are successful only when companies have sufficient data about users and transactions. Our proposed methodology, user-perspective issue clustering, can provide practical support to decision-making in companies because it enhances user-related data from unstructured textual data. To overcome the problem of insufficient data from traditional approaches, our methodology infers customers' real interests by utilizing web transaction logs. In addition, we suggest topic analysis and issue clustering as a practical means of issue identification.
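A compact sketch of the network-merging step: representing the user-article and article-issue two-mode networks as incidence matrices, their product gives the user-issue quasi-network, after which issues can be grouped by the similarity of their user profiles. The toy dimensions, random data, and the hierarchical clustering used here (in place of PAM in NetMiner/SAS) are assumptions.

```python
# Merge double two-mode networks into a user-issue quasi-network and
# cluster issues by structural similarity.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

n_users, n_articles, n_issues = 100, 500, 20
rng = np.random.default_rng(0)
user_article = rng.integers(0, 2, (n_users, n_articles))    # access patterns
article_issue = rng.integers(0, 2, (n_articles, n_issues))  # topic assignments

# Incidence-matrix product = user-issue quasi-network
user_issue = user_article @ article_issue                    # (100, 20)

# Group issues by the similarity of their user columns
distances = pdist(user_issue.T, metric="correlation")
issue_clusters = fcluster(linkage(distances, method="average"),
                          t=4, criterion="maxclust")
print(issue_clusters)
```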

Improving Performance of Recommendation Systems Using Topic Modeling (사용자 관심 이슈 분석을 통한 추천시스템 성능 향상 방안)

  • Choi, Seongi;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.101-116 / 2015
  • Recently, due to the development of smart devices and social media, vast amounts of information in various forms have accumulated. In particular, considerable research effort is being directed toward analyzing unstructured big data to resolve various social problems, and the focus of data-driven decision-making is accordingly moving from structured to unstructured data analysis. In the field of recommendation systems, a typical area of data-driven decision-making, the need to use unstructured data to improve system performance has also steadily increased. Approaches to improving the performance of recommendation systems fall into two aspects: improving algorithms and acquiring useful, high-quality data. Traditionally, most efforts to improve performance took the former approach, while the latter has attracted relatively little attention. In this sense, efforts to utilize unstructured data from various sources are timely and necessary. In particular, as the interests of users are directly connected with their needs, identifying user interests through unstructured big data analysis can be a key to improving the performance of recommendation systems. Accordingly, this study proposes a methodology for improving recommendation systems by measuring user interests. Specifically, it proposes a method to quantify user interests by analyzing internet usage patterns and to predict repurchase based upon the discovered preferences. There are two important modules in this study. The first module predicts the repurchase probability of each category by analyzing users' purchase history; it is included in our research scope so that the accuracy of the traditional purchase-based prediction model can be compared with our new model presented in the second module. This procedure extracts the purchase history of users. The core of our methodology is the second module, which extracts users' interests by analyzing the news articles the users have read. The second module constructs a correspondence matrix between topics and news articles by performing topic modeling on real-world news articles. The module then analyzes users' news access patterns and constructs a correspondence matrix between articles and users. By merging the results of these processes, we obtain a correspondence matrix between users and topics, which describes users' interests in a structured manner. Finally, using this matrix, the second module builds a model for predicting the repurchase probability of each category. In this paper, we also provide experimental results of our performance evaluation. The outline of the data used in our experiments is as follows. We acquired web transaction data of 5,000 panel members from a company that specializes in analyzing internet site rankings. We first extracted 15,000 URLs of news articles published from July 2012 to June 2013 from the original data and crawled the main contents of the news articles. We then selected 2,615 users who had read at least one of the extracted news articles; among them, 359 target users purchased at least one item from our target shopping mall 'G'. In the experiments, we analyzed the purchase history and news access records of these 359 internet users.
From the performance evaluation, we found that our prediction model using both users' interests and purchase history outperforms the prediction model using only purchase history in terms of misclassification ratio. In detail, our model outperformed the traditional one in the appliance, beauty, computer, culture, digital, fashion, and sports categories when artificial-neural-network-based models were used. Similarly, our model outperformed the traditional one in the beauty, computer, digital, fashion, food, and furniture categories when decision-tree-based models were used, although the improvement was very small.
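A simplified sketch of the second module's pipeline: topic modeling (here LDA from scikit-learn, standing in for the tooling used in the paper) yields an article-topic matrix, the access log yields a user-article matrix, and their product is the user-topic interest matrix that can feed a repurchase model. The vocabulary size, topic count, and per-user normalization are illustrative choices, not the authors' settings.

```python
# Build a user-topic interest matrix by merging access patterns with
# topic assignments from LDA.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def user_topic_interests(article_texts, user_article, n_topics=20):
    """article_texts: list of article strings; user_article: (U, A) access matrix."""
    counts = CountVectorizer(max_features=5000).fit_transform(article_texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    article_topic = lda.fit_transform(counts)        # (A, n_topics)
    # Merge access patterns with topic assignments, then normalize per user
    user_topic = user_article @ article_topic        # (U, n_topics)
    row_sums = user_topic.sum(axis=1, keepdims=True)
    return np.divide(user_topic, row_sums,
                     out=np.zeros_like(user_topic), where=row_sums > 0)
```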