• Title/Summary/Keyword: mapping method


An Application of PCSI Antecedent Model to Development of Library Organizational Performance Evaluation Method (PCSI 선행요인 모형에 기반한 도서관 조직성과 평가 방법론 개발에 관한 연구)

  • Kwon, Nahyun;Lee, Jungyeoun;Pyo, Soon Hee
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.29 no.1 / pp.369-391 / 2018
  • The purpose of this study was to develop a measurement scheme that assesses organizational performance at the library department level by applying the PCSI model, a Public-service Customer Satisfaction Index. Specifically, the study adopted the Antecedent Model, one component of the PCSI's three-part model, which consists of a total of 12 service quality indices. A large-scale library was selected as a case for analyzing and designing the method. We analyzed the organizational missions and goals set by each of the library's six departments and mapped them to the 12 service quality indices. This mapping was further developed into a work analysis scheme for the library and into a measurement tool. A total of 341 library users participated in a survey designed to assess the 12 service quality indices. Service quality was measured for each index, which in turn yielded the library's service performance index both for the library as a whole and for its individual units. The results showed the measurement tool to be useful in assessing service performance at both levels.
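The department-to-index mapping and score roll-up described in this abstract can be sketched as follows. All department names, index names, and scores here are hypothetical illustrations, not the paper's actual data.

```python
# A minimal sketch of mapping departments to service quality indices and
# rolling survey scores up to department and library level. Names and
# values are hypothetical.
from statistics import mean

# Hypothetical mapping of library departments to PCSI service quality indices.
dept_to_indices = {
    "Reference": ["staff_expertise", "responsiveness"],
    "Circulation": ["ease_of_use", "timeliness"],
}

# Hypothetical per-index averages from a user survey (e.g. a 1-7 scale).
index_scores = {
    "staff_expertise": 5.8,
    "responsiveness": 5.2,
    "ease_of_use": 6.1,
    "timeliness": 5.5,
}

def dept_score(dept):
    """Average the survey scores of the indices mapped to a department."""
    return mean(index_scores[i] for i in dept_to_indices[dept])

def library_score():
    """Library-level performance: average over all department scores."""
    return mean(dept_score(d) for d in dept_to_indices)
```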

A Program Transformational Approach for Rule-Based Hangul Automatic Programming (규칙기반 한글 자동 프로그램을 위한 프로그램 변형기법)

  • Hong, Seong-Su;Lee, Sang-Rak;Sim, Jae-Hong
    • The Transactions of the Korea Information Processing Society / v.1 no.1 / pp.114-128 / 1994
  • It is very difficult for a nonprofessional programmer in Korea to write a program in a very high-level language such as V, REFINE, GIST, or SETL, because the semantic primitives of these languages are based on predicate calculus, sets, mappings, or restricted natural language, and it takes time to become familiar with them. In this paper, we suggest a method to reduce these difficulties by programming with declarative, procedural, and aggregate constructs, and we design and implement an experimental knowledge-based automatic programming system called HAPS (Hangul Automatic Program System). The input of HAPS is a specification such as a Hangul abstract algorithm and data type or Hangul procedural constructs, and its output is a C program. Its method of operation is based on rule-based program transformation, and the problem area is general. The control structure of HAPS accepts the program specification, transforms it according to the appropriate rule in the rule base, and stores the transformed specification in the global database, repeating these steps until the target C program is fully constructed.
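The transform-until-done loop described above can be sketched as a tiny rule-based rewriter: a specification is repeatedly rewritten by pattern rules until no rule applies. The rules below are hypothetical toy rules, not HAPS's actual rule base.

```python
# A minimal sketch of rule-based program transformation: rewrite a
# specification by pattern rules until a fixpoint is reached.
import re

# Hypothetical rewrite rules: (pattern, replacement), applied left to right.
RULES = [
    (r"set (\w+) to (\w+)", r"\1 = \2;"),
    (r"print (\w+)", r'printf("%d\\n", \1);'),
]

def transform(spec: str) -> str:
    """Apply rules repeatedly until the specification stops changing."""
    while True:
        new = spec
        for pattern, repl in RULES:
            new = re.sub(pattern, repl, new)
        if new == spec:          # fixpoint reached: nothing left to rewrite
            return new
        spec = new
```

The fixpoint loop mirrors the control structure sketched in the abstract: each pass consults the rule base and stores the transformed specification, stopping when the target program is fully constructed.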


Change Reconciliation on XML Repetitive Data (XML 반복부 데이터의 변경 협상 방법)

  • Lee, Eunjung
    • The KIPS Transactions:PartA / v.11A no.6 / pp.459-468 / 2004
  • Sharing XML trees on mobile devices has become more and more popular. Optimistic replication of XML trees for mobile devices raises the need for reconciliation of concurrently modified data. Especially for reconciling modified tree structures, trees must be compared by node mapping, which takes O($n^2$) time; semantics-based conflict resolution policies are also often discussed in the literature. In this research, we focus on an efficient reconciliation method for mobile environments, using edit scripts of the XML data sent from each device. To obtain a simple model for mobile devices, we use an XML list data sharing model, which allows inserting/deleting subtrees only in the repetitive parts of the tree, based on the document type. We also use keys for repetitive-part subtrees; keys are unique among nodes with the same parent. This model not only guarantees that every edit action results in a valid tree, but also admits a linear-time reconciliation algorithm thanks to key-based list reconciliation. The algorithm proposed in this paper takes time linear in the length of the edit scripts, assuming there is no insertion key conflict. Since previous methods take time linear in the size of the tree, the proposed method is expected to provide a more efficient reconciliation model in the mobile environment.
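The key-based reconciliation idea can be sketched as a simple merge of two edit scripts over a keyed child list, under the paper's assumption of no insertion-key conflict. The edit record format here is a hypothetical simplification.

```python
# A minimal sketch of key-based list reconciliation for a repetitive part.
# Edits are hypothetical ("ins", key) / ("del", key) records; keys are
# unique among siblings, so sets suffice.

def reconcile(base, edits_a, edits_b):
    """Merge two edit scripts against a keyed list, in time linear in the
    list and edit-script lengths."""
    deleted = {k for op, k in edits_a + edits_b if op == "del"}
    inserted = [k for op, k in edits_a + edits_b if op == "ins"]
    # A duplicate insertion key would signal a conflict; the paper's
    # linear-time bound assumes this does not happen.
    assert len(inserted) == len(set(inserted)), "insertion key conflict"
    return [k for k in base if k not in deleted] + inserted
```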

Object Modeling for Mapping from XML Document and Query to UML Class Diagram based on XML-GDM (XML-GDM을 기반으로 한 UML 클래스 다이어그램으로 사상을 위한 XML문서와 질의의 객체 모델링)

  • Park, Dae-Hyun;Kim, Yong-Sung
    • The KIPS Transactions:PartD / v.17D no.2 / pp.129-146 / 2010
  • Nowadays, XML is favored by many companies, internally and externally, as a means of sharing and distributing data. There has been much research on, and there are many systems for, modeling and storing XML documents by object-oriented methods, as a way of saving and managing web-based multimedia documents more easily. The representative tool for object-oriented modeling of XML documents is UML (Unified Modeling Language). UML was initially used as an integrated methodology for software development, but it is now used more broadly as a modeling language for various kinds of objects. UML supports various diagrams for object-oriented analysis and design, such as the class diagram, and is widely used to create database schemas and object-oriented code from them. This paper proposes efficient query modeling of XML-GL using the UML class diagram and OCL for searching XML documents, whose application scope keeps widening with the increased use of the WWW and XML's flexible and open nature. To accomplish this, we propose modeling rules and an algorithm that map XML-GL, which provides modeling for XML documents and DTDs and a graphical query facility over them. To describe the constraints of model components precisely, they are defined in OCL (Object Constraint Language). The proposed technique creates queries over XML documents that carry the properties of an object-oriented model, by modeling XML-GL queries from XML documents, XML DTDs, and XML queries with the UML class diagram. By converting, saving, and managing XML documents visually as an object-oriented graphic data model, the user gains a basis for expressing searches and queries on XML documents intuitively and visually. Compared to existing XML-based query languages, the approach has various object-oriented characteristics and uses the UML notation that is widely adopted for object modeling.
Hence, users can construct graphical and intuitive queries on XML-based web documents without learning a new query language. Using the same modeling tool, the UML class diagram, for XML document content and for query syntax and semantics allows all processes, such as searching and saving XML documents from/to an object-oriented database, to be performed consistently.
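The core mapping idea, XML structure onto a class model, can be sketched simply: element names become classes, attributes become fields, and child elements become associations. This is an illustrative simplification, not the paper's actual XML-GDM mapping rules.

```python
# A minimal sketch of mapping an XML document onto a class-diagram-like
# model: tag -> class, attribute -> field, child element -> association.
import xml.etree.ElementTree as ET

def to_class_model(xml_text):
    model = {}                       # class name -> {"fields", "assocs"}
    def visit(elem):
        cls = model.setdefault(elem.tag, {"fields": set(), "assocs": set()})
        cls["fields"].update(elem.attrib)     # attributes become fields
        for child in elem:
            cls["assocs"].add(child.tag)      # children become associations
            visit(child)
    visit(ET.fromstring(xml_text))
    return model
```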

Multiple Camera Based Imaging System with Wide-view and High Resolution and Real-time Image Registration Algorithm (다중 카메라 기반 대영역 고해상도 영상획득 시스템과 실시간 영상 정합 알고리즘)

  • Lee, Seung-Hyun;Kim, Min-Young
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.4 / pp.10-16 / 2012
  • For high-speed visual inspection in the semiconductor industry, it is essential to acquire two-dimensional images of regions of interest with a large field of view (FOV) and high resolution simultaneously. In this paper, an imaging system is proposed to achieve high image quality in terms of both precision and FOV; it is composed of a single lens, a beam splitter, two camera sensors, and a stereo image-grabbing board. For object images acquired simultaneously from the two camera sensors, Zhang's camera calibration method is first applied to calibrate each camera. Second, to find a mathematical mapping function between the two images acquired from the different views, the matching matrix from multi-view camera geometry is calculated based on their image homography. Through this homography, the two images are finally registered to secure a large inspection FOV. An inspection system using multiple images from multiple cameras needs a very fast processing unit for real-time image matching; for this purpose, parallel processing hardware and software such as the Compute Unified Device Architecture (CUDA) are utilized. As a result, a matched image can be obtained from the two separate images in real time. Finally, the acquired homography is evaluated for accuracy through a series of experiments, and the results show the effectiveness of the proposed system and method.
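The homography step can be sketched with the standard Direct Linear Transform (DLT): estimate the 3x3 matrix H that maps points in one view to the other from point correspondences. This is a generic sketch of the technique; the paper's full pipeline (Zhang calibration, CUDA-based warping) is omitted.

```python
# A minimal DLT sketch: estimate the 3x3 homography H from point
# correspondences by solving A h = 0 via SVD.
import numpy as np

def estimate_homography(src, dst):
    """Estimate H mapping each src point to the corresponding dst point."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)         # null-space vector = flattened H
    return H / H[2, 2]               # normalize so H[2,2] == 1

def apply_homography(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

With H in hand, one image can be warped onto the other's coordinate frame to form the registered, wide-FOV image.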

MAD-Based Relative Radiometric Normalization of Multi-temporal KOMPSAT-2 Multispectral Imagery for Change Detection (변화지역 탐지를 위한 시계열 KOMPSAT-2 다중분광 영상의 MAD 기반 상대복사 보정에 관한 연구)

  • Yeon, Jong-Min;Kim, Hyun-Ok;Yoon, Bo-Yeol
    • Journal of the Korean Association of Geographic Information Studies / v.15 no.3 / pp.66-80 / 2012
  • It is necessary to normalize spectral values derived from multi-temporal satellite data to a common scale in order to apply remote sensing methods for change detection, disaster mapping, crop monitoring, and so on. There are two main approaches: absolute radiometric normalization and relative radiometric normalization. This study focuses on multi-temporal satellite image processing using relative radiometric normalization. Three scenes of KOMPSAT-2 imagery were processed using the Multivariate Alteration Detection (MAD) method, which has the particular advantage of selecting PIFs (Pseudo-Invariant Features) automatically by canonical correlation analysis. The scenes were then used to detect disaster areas over Sendai, Japan, which was hit by a tsunami on 11 March 2011. The case study showed that automatic extraction of the areas changed by the tsunami from satellite data relatively normalized via the MAD method achieved a high level of accuracy. In addition, relative normalization of multi-temporal satellite imagery produced better results for rapidly mapping disaster-affected areas with an increased level of confidence.
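Once PIFs are selected, relative radiometric normalization reduces to fitting a gain and offset per band over the PIF pixels and applying them to the whole scene. The sketch below assumes the PIF mask is already given; the paper obtains it automatically via the MAD transformation, which is not reproduced here.

```python
# A minimal sketch of relative radiometric normalization over given PIFs:
# fit reference ~ gain * subject + offset on PIF pixels, apply everywhere.
import numpy as np

def relative_normalize(subject, reference, pif_mask):
    """Normalize a subject band to a reference band using PIF pixels."""
    s = subject[pif_mask].astype(float)
    r = reference[pif_mask].astype(float)
    gain, offset = np.polyfit(s, r, 1)   # least-squares line through PIFs
    return gain * subject + offset       # bring the whole band to scale
```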

A Table Parametric Method for Automatic Generation of Parametric CAD Models in a Mold Base e-Catalog System (몰드베이스 전자 카탈로그 시스템의 파라메트릭 CAD 모델 자동 생성을 위한 테이블 파라메트릭 방법)

  • Mun, Du-Hwan;Kim, Heung-Ki;Jang, Kwang-Sub;Cho, Jun-Myun;Kim, Jun-Hwan;Han, Soon-Hung
    • The Journal of Society for e-Business Studies / v.9 no.4 / pp.117-136 / 2004
  • As time-to-market becomes more important for the competitiveness of a manufacturing enterprise, it becomes important to shorten the development cycle of a product. Reuse of existing design models and e-Catalogs for components is required for faster product development. To achieve this goal, an electronic catalog must provide parametric CAD models, since parametric information is indispensable for configuration design. It is difficult to build a parametric library of all the necessary combinations using a CAD system, because a product has too many combinations of components; for example, there are at least 80 million combinations of components on one page of a paper catalog of a mold base. To solve this problem, we propose the table parametric method for the automatic generation of parametric CAD models. Any combination of mold base components can be generated by mapping between the classification system of an electronic catalog and the design parameter sets of the table parametric method. We also propose how to select parametric models and how to construct the design parameter sets.
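The core idea, one parametric template per component family plus a table of parameter sets instead of one stored model per combination, can be sketched as a lookup. The family names, size codes, and dimensions below are hypothetical.

```python
# A minimal sketch of the table parametric idea: a parameter table keyed by
# catalog classification, instantiated on demand. All values hypothetical.

# Hypothetical parameter table keyed by (family, size code) from the catalog.
PARAM_TABLE = {
    ("mold_base", "2025"): {"width": 200, "length": 250, "plate_count": 4},
    ("mold_base", "2530"): {"width": 250, "length": 300, "plate_count": 4},
}

def instantiate(family, size, **overrides):
    """Look up the base parameter set and apply customer overrides,
    yielding the parameters that drive one parametric CAD template."""
    params = dict(PARAM_TABLE[(family, size)])
    params.update(overrides)          # e.g. a non-standard plate count
    return params
```

Because only the table rows vary, millions of catalog combinations can be served from a handful of templates, which is what makes the e-Catalog approach tractable.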


Application and perspectives of proteomics in crop science fields (작물학 분야 프로테오믹스의 응용과 전망)

  • Woo Sun-Hee
    • Proceedings of the Korean Society of Crop Science Conference / 2004.04a / pp.12-27 / 2004
  • Thanks to spectacular advances in techniques for identifying proteins separated by two-dimensional electrophoresis and in methods for large-scale analysis of proteome variation, proteomics is becoming an essential methodology in various fields of plant science. Plant proteomics is most useful when combined with other functional genomics tools and approaches: a combination of microarray and proteomics analysis will indicate whether gene regulation is controlled at the level of transcription or of translation and protein accumulation. In this review, we describe the catalogues of the rice proteome constructed in our program, and the functional characterization of some of these proteins is discussed. Mass spectrometry is the most prevalent technique for rapidly identifying a large number of proteins in proteome analysis; however, the conventional Western blotting/sequencing technique is still used in many laboratories. As a first step toward efficiently constructing protein data files for proteome analysis of the major cereals, we analyzed the N-terminal sequences of 100 rice embryo proteins and 70 wheat spike proteins separated by two-dimensional electrophoresis. Edman degradation revealed the N-terminal peptide sequences of only 31 rice proteins and 47 wheat proteins, suggesting that the remaining protein spots are N-terminally blocked. To efficiently determine the internal sequences of blocked proteins, we developed a modified Cleveland peptide mapping method, with which the internal sequences of all 69 blocked rice proteins were determined. Among these 100 rice proteins, thirty had homologous sequences identifiable in the rice genome database, while the rest lacked homologous proteins; this appears consistent with the fact that about 30% of total rice cDNA has been deposited in the database.
Major proteins involved in the growth and development of rice can also be identified using the proteome approach. Some of these proteins have functions in signal transduction pathways, including a calcium-binding protein that turned out to be calreticulin, a gibberellin-binding protein that is ribulose-1,5-bisphosphate carboxylase/oxygenase activase in rice, and a leginsulin-binding protein in soybean. Proteomics is well suited not only to determining interactions between pairs of proteins but also to identifying multisubunit complexes. A protein-protein interaction database for plant proteins (http://genome.c.kanazawa-u.ac.jp/Y2H) could be a very useful tool for the plant research community. Recently, we separated proteins from grain filling and seed maturation in rice for ESI-Q-TOF/MS and MALDI-TOF/MS analysis; this experiment shows that a number of 2-DE-separated rice proteins can be identified easily and rapidly by ESI-Q-TOF/MS and MALDI-TOF/MS. The information thus obtained from the plant proteome would be helpful in predicting the functions of unknown proteins, would be useful in plant molecular breeding, and could help plant breeders and molecular biologists design their research strategies precisely.


Geospatial Assessment of Frost and Freeze Risk in 'Changhowon Hwangdo' Peach (Prunus persica) Trees as Affected by the Projected Winter Warming in South Korea: III. Identifying Freeze Risk Zones in the Future Using High-Definition Climate Scenarios (겨울기온 상승에 따른 복숭아 나무 '장호원황도' 품종의 결과지에 대한 동상해위험 공간분석: III. 고해상도 기후시나리오에 근거한 동해위험의 미래분포)

  • Chung, U-Ran;Kim, Jin-Hee;Kim, Soo-Ock;Seo, Hee-Cheol;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology / v.11 no.4 / pp.221-232 / 2009
  • The geographical distribution of freeze risk determines the latitudinal and altitudinal limits and the maximum acreage suitable for fruit production, and any change in its pattern can affect policy for climate change adaptation in the fruit industry. High-definition digital maps for such applications are not yet available due to uncertainty in the combined responses of temperature and dormancy depth under future climate scenarios. We applied an empirical freeze risk index, derived from the combination of dormancy depth and the threshold temperature inducing freeze damage to dormant buds of 'Changhowon Hwangdo' peach trees, to high-definition digital climate maps prepared for the current (1971-2000), near-future (2011-2040), and far-future (2071-2100) climate scenarios. According to the geospatial analysis at a landscape scale, both the safe and the risky areas will expand in the future, and some of the major peach cultivation areas may have difficulty overwintering safely due to cold tolerance weakened by insufficient chilling. Our test of this method in two counties representing the major peach cultivation areas in South Korea demonstrated that the migration of risky areas can be detected at a sub-grid scale. The method presented in this study can contribute significantly to climate change adaptation planning in agriculture as a decision aid.

Estimation of Aboveground Forest Biomass Carbon Stock by Satellite Remote Sensing - A Comparison between k-Nearest Neighbor and Regression Tree Analysis - (위성영상을 활용한 지상부 산림바이오매스 탄소량 추정 - k-Nearest Neighbor 및 Regression Tree Analysis 방법의 비교 분석 -)

  • Jung, Jaehoon;Nguyen, Hieu Cong;Heo, Joon;Kim, Kyoungmin;Im, Jungho
    • Korean Journal of Remote Sensing / v.30 no.5 / pp.651-664 / 2014
  • Recently, demand for accurate forest carbon stock estimation and mapping has been increasing in Korea. This study investigates the feasibility of two methods, k-Nearest Neighbor (kNN) and Regression Tree Analysis (RTA), for carbon stock estimation in two pilot areas, the cities of Gongju and Sejong. The 3rd and 5th-6th NFI data were collected together with Landsat TM imagery acquired in 1992 and 2010 and Aster imagery from 2009. Additionally, various vegetation indices and the tasseled cap transformation were computed for better estimation. The two methods were compared by evaluating carbon statistics and visualizing carbon distributions on the map. The comparison indicated clear strengths and weaknesses of each: the kNN method produced more consistent estimates regardless of the type of satellite image, but its carbon maps were somewhat too smooth to represent dense-carbon areas, particularly in the Aster 2009 case; meanwhile, the RTA method performed better on mean bias and in representing dense-carbon areas, but it was more sensitive to the type of satellite image, showing higher variability in the spatial patterns of its carbon maps. Finally, to identify increases in carbon stock in the study area, we created difference maps by subtracting the 1992 carbon map from the 2009 and 2010 carbon maps. The total carbon stock in Gongju and Sejong was found to have increased drastically during that period.
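The kNN estimation step can be sketched directly: a pixel's (or plot's) carbon stock is predicted as the average of the k nearest field plots in spectral feature space. The data below are hypothetical; the paper compares this against regression tree analysis, which is not reproduced here.

```python
# A minimal sketch of kNN regression for carbon stock estimation: average
# the target values of the k spectrally nearest training plots.
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Predict a value for `query` from the k nearest rows of train_X."""
    dists = np.linalg.norm(train_X - query, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                  # indices of k nearest
    return train_y[nearest].mean()                   # average their targets
```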