• Title/Summary/Keyword: information mapping

Search Results: 3,148

The extension of the IDEA Methodology for a multilevel secure schema design (다단계 보안 스키마 설계를 위한 IDEA 방법론의 확장)

  • Kim, Jung-Jong;Park, Woon-Jae;Sim, Gab-Sig
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.3
    • /
    • pp.879-890
    • /
    • 2000
  • Designing a multilevel secure database application is a complex process: the entities and their associated security levels must be represented unambiguously with an appropriate model, and it is also important to capture the semantics of the application as accurately and completely as possible. Because the IDEA Methodology for designing non-secure database applications focuses on data-intensive systems, its Object Model describes the static structure of the objects in an application and their relationships; that is, the Object Model in the IDEA Methodology is an extended Entity-Relationship model giving a static description of objects. The IDEA Methodology was not developed for multilevel secure database applications, but building on an existing methodology lets us take advantage of the various techniques already developed for it; designing a multilevel secure schema this way is easier than developing a new model from scratch. This paper adds security features to the Object Model of the IDEA Methodology and presents the transformation from this model to a multilevel secure object-oriented schema. This schema is preliminary work toward a general scheme for automatic mapping to commercial multilevel secure database management systems such as Informix-Online/Secure, Trusted ORACLE, and Sybase Secure SQL Server.


Application and perspectives of proteomics in crop science fields (작물학 분야 프로테오믹스의 응용과 전망)

  • Woo Sun-Hee
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2004.04a
    • /
    • pp.12-27
    • /
    • 2004
  • Thanks to spectacular advances in techniques for identifying proteins separated by two-dimensional electrophoresis and in methods for large-scale analysis of proteome variations, proteomics is becoming an essential methodology in various fields of plant science. Plant proteomics is most useful when combined with other functional genomics tools and approaches: a combination of microarray and proteomics analysis will indicate whether gene regulation is controlled at the level of transcription or at the level of translation and protein accumulation. In this review, we describe the catalogues of the rice proteome constructed in our program and discuss the functional characterization of some of these proteins. Mass spectrometry is the most prevalent technique for rapidly identifying a large number of proteins in proteome analysis; however, the conventional Western blotting/sequencing technique is still used in many laboratories. As a first step toward efficiently constructing protein data files for proteome analysis of major cereals, we analyzed the N-terminal sequences of 100 rice embryo proteins and 70 wheat spike proteins separated by two-dimensional electrophoresis. Edman degradation revealed the N-terminal peptide sequences of only 31 rice proteins and 47 wheat proteins, suggesting that the remaining separated protein spots are N-terminally blocked. To efficiently determine the internal sequences of blocked proteins, we developed a modified Cleveland peptide mapping method, with which the internal sequences of all 69 blocked rice proteins were determined. Among these 100 rice proteins, thirty were proteins for which a homologous sequence in the rice genome database could be identified; the rest lacked homologous proteins. This appears consistent with the fact that about 30% of total rice cDNA has been deposited in the database.
Also, the major proteins involved in the growth and development of rice can be identified using the proteome approach. Some of these proteins, including a calcium-binding protein that turned out to be calreticulin, a gibberellin-binding protein, which is ribulose-1,5-bisphosphate carboxylase/oxygenase activase in rice, and a leginsulin-binding protein in soybean, have functions in the signal transduction pathway. Proteomics is well suited not only to determining interactions between pairs of proteins but also to identifying multisubunit complexes. Currently, a protein-protein interaction database for plant proteins (http://genome.c.kanazawa-u.ac.jp/Y2H) could be a very useful tool for the plant research community. Recently, we separated proteins from grain filling and seed maturation in rice and performed ESI-Q-TOF/MS and MALDI-TOF/MS. This experiment shows that a number of 2-DE-separated rice proteins can be identified easily and rapidly by ESI-Q-TOF/MS and MALDI-TOF/MS. The information thus obtained from the plant proteome will be helpful in predicting the functions of unknown proteins and useful in plant molecular breeding. Information from our study could also provide an avenue for plant breeders and molecular biologists to design their research strategies precisely.


Novel Radix-2⁶ SDF FFT Processor with Low Computational Complexity (연산복잡도가 적은 radix-2⁶ FFT 프로세서)

  • Cho, Kyung-Ju
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.13 no.1
    • /
    • pp.35-41
    • /
    • 2020
  • Fast Fourier transform (FFT) processors are widely used in various applications such as communications, image processing, and biomedical signal processing. In particular, high-performance, low-power FFT processing is indispensable in OFDM-based communication systems. This paper presents a novel radix-2⁶ FFT algorithm with low computational complexity and high hardware efficiency. By applying a 7-dimensional index mapping, the twiddle factor is decomposed and the radix-2⁶ FFT algorithm is derived. The proposed algorithm has a simple twiddle factor sequence and a small number of complex multiplications, which reduces the memory required to store the twiddle factors. When the twiddle factor coefficients are small, complex constant multipliers can be used efficiently instead of general complex multipliers, and complex constant multipliers can be designed more efficiently using the canonic signed digit (CSD) representation and common subexpression elimination (CSE). An efficient complex constant multiplier design method applying CSD and CSE is proposed for the twiddle factor multiplications used in the proposed radix-2⁶ algorithm. To evaluate the previous and proposed methods, a 256-point single-path delay feedback (SDF) FFT was designed and synthesized for an FPGA. The proposed algorithm uses about 10% less hardware than the previous algorithm.
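The CSD recoding this abstract relies on can be sketched generically. The following is an illustrative Python implementation of canonic signed digit conversion, not the paper's hardware design: CSD represents a constant with digits in {-1, 0, 1} and no two adjacent non-zero digits, which minimizes the adders/subtractors needed for a constant multiplication.

```python
def to_csd(n):
    """Recode a positive integer into canonic signed digit (CSD) form.

    Returns digits in {-1, 0, 1}, least significant first, with no two
    adjacent non-zero digits (the non-adjacent form).
    """
    digits = []
    while n != 0:
        if n % 2 == 0:
            digits.append(0)
            n //= 2
        else:
            d = 2 - (n % 4)   # n % 4 == 1 -> +1,  n % 4 == 3 -> -1
            digits.append(d)
            n = (n - d) // 2  # remainder is even, so no adjacent non-zeros
    return digits

def from_csd(digits):
    """Reassemble the integer value encoded by a CSD digit list."""
    return sum(d * (1 << i) for i, d in enumerate(digits))

# 7 = 0b111 has three non-zero binary digits, but only two CSD terms: 8 - 1
print(to_csd(7))  # [-1, 0, 0, 1]
```

Fewer non-zero digits mean fewer partial products, which is why CSD, combined with CSE across shared subexpressions, shrinks constant multipliers for fixed twiddle factors.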

Study on Context-Aware SOA based on Open Service Gateway initiative platform (OSGi플렛폼 기반의 상황인식 서비스지향아키텍쳐에 관한 연구)

  • Choi, Sung-Wook;Oh, Am-Suk;Kwon, Oh-Hyun;Kang, Si-Hyeob;Hong, Soon-Goo;Choi, Hyun-Rim
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.11
    • /
    • pp.2083-2090
    • /
    • 2006
  • In the proposed context-aware SOA (Service Oriented Architecture) based on the OSGi (Open Service Gateway initiative) platform, the service provider manages related kinds of services from various sensors on an integrated basis, wraps each service in a SOAP (Simple Object Access Protocol) message, and registers them with the UDDI (Universal Description, Discovery and Integration) server of the service registry; the service requester retrieves the specified kinds of services and calls them through the service provider. Recently, most context-aware technologies for ubiquitous home networks have mainly emphasized RFID/USN and location-based technology, so service-oriented architecture has not been researched enough. Under an OSGi service platform, various context-aware services are dynamically mapped from various sensors, new services are offered at users' request, and existing services change. Accordingly, data sharing between the provided services, management of the service life cycle, and facilitation of service distribution are needed. Taking all these factors into consideration, this study suggests a context-aware SOA on the Eclipse SOA Tools Platform over the OSGi platform, which achieved a transaction throughput of more than 546 TPS, derived via Little's Law in an ATAM (Architecture Tradeoff Analysis Method) evaluation, while other conditions remained stable.

Design and Implementation of an Adaptive User Interface for Home Network Environments (홈 네트워크 환경을 위한 적응형 사용자 인터페이스 설계 및 구현)

  • Jung, Jae-Hwan;Jang, Hyun-Su;Ko, Kwang-Sun;Kim, Gu-Su;Eom, Young-Ik
    • The KIPS Transactions:PartB
    • /
    • v.15B no.1
    • /
    • pp.37-44
    • /
    • 2008
  • With the growing proliferation of mobile computing devices, several industrial and academic research groups have been vigorously studying the remote control of various appliances with mobile devices such as notebooks, PDAs, and smartphones in home network environments. By exploiting the strengths of mobile devices, such as portability and usability, we can remotely control and manage devices connected to home networks anytime, anywhere. However, mobile devices use different languages, and their interfaces and methods of operation differ from each other, which causes problems. To solve these problems, there are two considerations. First, the difficulty of developing and maintaining user interfaces for controlling various heterogeneous devices must be resolved. Second, a user interface that dynamically adapts to the user's preferences and device characteristics should be provided. To satisfy these considerations, we describe the design of an adaptive user interface for home network environments using UIML (User Interface Markup Language), which is based on XML (eXtensible Markup Language), together with profile information about the user and device. We also present several implementation examples that show how the framework can form the basis of prototypical applications.

Development and Evaluation of Korean Diagnosis Related Groups: Medical service utilization of inpatients (한국형 진단명기준환자군의 개발과 평가: 입원환자의 의료서비스 이용을 중심으로)

  • Shin, Young-Soo;Lee, Young-Seong;Park, Ha-Young;Yeom, Yong-Kwon
    • Journal of Preventive Medicine and Public Health
    • /
    • v.26 no.2 s.42
    • /
    • pp.293-309
    • /
    • 1993
  • With the expanded and extended coverage of the national medical insurance and fast-growing health care expenditures, the appropriateness of health service utilization and the quality of care are concerns of health care providers and insurers as well as patients. An accurate patient classification system is a basic tool for effective health care policies and efficient health services management. A classification system applicable to Korean medical information, Korean Diagnosis Related Groups (K-DRGs), was developed based on the U.S. Refined DRGs, and the performance of the developed system was assessed in this study. In the development process, the Korean coding systems for diagnoses and procedures were first converted to the systems used in the definition of the U.S. Refined DRGs using mapping tables formulated by physician panels. The physician panels then reviewed the group definitions and identified medical practice patterns that differ between the two countries, and the definitions were modified accordingly for K-DRGs. The process resulted in 1,199 groups in the system. Several groups in the Refined DRGs could not be differentiated in K-DRGs due to insufficient medical information, and several groups could not be defined due to procedures not practiced in Korea; however, the classification structure of the Refined DRGs was retained in K-DRGs. The developed system was evaluated for its performance in explaining variations in resource use, measured by charges and length of stay (LOS), for both all discharges and non-extreme discharges. The database used in this evaluation included 373,322 discharges, a random sample of discharges reviewed and paid by the medical insurance during the five-month period from September 1990. The proportion of variance in resource use reduced by classifying patients into K-DRGs (r-square) was comparable to the performance of the U.S. Refined DRGs: .39 for charges and .25 for LOS for all discharges, and .53 for charges and .31 for LOS for non-extreme discharges. Another measure analyzed to assess performance was the coefficient of variation of charges within individual K-DRGs. A total of 966 K-DRGs (87.7%) showed a coefficient below 100%, and the highest coefficient among K-DRGs with more than 30 discharges was 159%.
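The r-square figure reported in this abstract is the proportion of variance in charges or LOS removed by grouping discharges into K-DRGs. As an illustration only (not the study's code), it can be computed as one minus the ratio of the within-group sum of squares to the total sum of squares:

```python
def variance_reduction(values, groups):
    """R^2 of a grouping: 1 - (within-group SS) / (total SS)."""
    mean = sum(values) / len(values)
    sst = sum((v - mean) ** 2 for v in values)       # total sum of squares
    by_group = {}
    for v, g in zip(values, groups):
        by_group.setdefault(g, []).append(v)
    ssw = 0.0                                        # within-group sum of squares
    for vs in by_group.values():
        m = sum(vs) / len(vs)
        ssw += sum((v - m) ** 2 for v in vs)
    return 1.0 - ssw / sst

# Two tight groups far apart: the grouping explains almost all the variance.
charges = [1, 1, 2, 2, 10, 10, 11, 11]
drg = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(round(variance_reduction(charges, drg), 3))  # 0.988
```

A value of .39 for charges, as in the study, means classifying patients into K-DRGs removes 39% of the total variance in charges.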


Extraction of Water Depth in Coastal Area Using EO-1 Hyperion Imagery (EO-1 Hyperion 영상을 이용한 연안해역의 수심 추출)

  • Seo, Dong-Ju;Kim, Jin-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.4
    • /
    • pp.716-723
    • /
    • 2008
  • With the rapid development of science and technology and the recent widening of mankind's range of activities, the development of coastal waters and their environment has emerged as a global issue. In relation to this, the use of satellite images for more extensive analyses has been increasing. This study aims at utilizing hyperspectral satellite images to determine the depth of coastal waters more efficiently. For this purpose, a partial image of the study area was first extracted from an EO-1 Hyperion satellite image, and atmospheric and geometric corrections were made. Minimum noise fraction (MNF) transformation was then performed to compress the bands, and the band most suitable for analyzing the characteristics of the water body was selected. Within the chosen band, the diffuse attenuation coefficient Kd was determined. By deciding the end-members from pixels with pure spectral properties and conducting mapping based on the linear spectral unmixing method, the depth of water at the coastal area in question was ultimately determined. The findings showed that the calculated water depth differed by an average of 1.2 m from that given on the digital sea map, with larger errors in deeper water. If the accuracy of atmospheric correction, end-member determination, and Kd calculation is enhanced in the future, it should be possible to determine water depths more economically and efficiently.
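The linear spectral unmixing step can be illustrated in the simplest two-endmember case, where a pixel spectrum is modeled as a convex mixture of two pure spectra. This is a generic least-squares sketch under that assumption, with made-up endmember values, not the study's processing chain:

```python
def unmix_two(pixel, deep, shallow):
    """Least-squares abundance f of `deep` in pixel = f*deep + (1-f)*shallow.

    With the sum-to-one constraint folded in, this reduces to projecting
    (pixel - shallow) onto the direction (deep - shallow).
    """
    d = [a - b for a, b in zip(deep, shallow)]
    r = [p - b for p, b in zip(pixel, shallow)]
    return sum(x * y for x, y in zip(r, d)) / sum(x * x for x in d)

# A synthetic pixel that is 30% "deep water" and 70% "shallow" endmember.
deep = [0.02, 0.01, 0.005]
shallow = [0.10, 0.08, 0.06]
pixel = [0.3 * a + 0.7 * b for a, b in zip(deep, shallow)]
print(round(unmix_two(pixel, deep, shallow), 3))  # 0.3
```

With more than two endmembers the same idea becomes a constrained least-squares problem per pixel; the recovered abundances are then mapped to depth.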

Image-based Water Level Measurement Method Adapting to Ruler's Surface Condition (목자판 표면 상태에 적응적인 영상 기반 수위 계측 기법)

  • Kim, Jae-Do;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.9
    • /
    • pp.67-76
    • /
    • 2010
  • This paper proposes an image-based water level measurement method that adapts to the ruler's surface condition. When the surface of the ruler is deteriorated by mud, drift, or strong light reflection, the proposed method judges whether the ruler is polluted by comparing the distance between two levels: the first is the end position of the horizontal-edge region that keeps the pattern of the ruler's markings, and the second is the position of the sharpest drop in a histogram constructed from image density along the image-height axis. If the ruler is polluted, the water level is the position of the local valley in the section with the maximum difference between the local peak and valley around the second level. If the ruler is not polluted, the water level is detected as the position having horizontal edges of more than 30% of the histogram's maximum value around the first level. The detected water level is converted to the actual water level using a mapping table constructed from the ruler's markings in the image. The proposed method is compared with an ultrasonic-based method to evaluate its accuracy and efficiency in real situations.
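The "sharpest drop" criterion on the row-density histogram can be sketched as follows. This is a simplified illustration of the idea only; the paper's actual histogram construction, pollution test, and thresholds are more involved:

```python
def sharpest_drop(hist):
    """Row index just below the largest decrease between adjacent bins.

    `hist` holds per-row density (e.g. the count of ruler-marking pixels),
    ordered from the top of the image downward. Rows showing the marking
    pattern give high bins and submerged rows give low bins, so the
    sharpest drop marks a candidate water line.
    """
    drops = [hist[i] - hist[i + 1] for i in range(len(hist) - 1)]
    return drops.index(max(drops)) + 1

# Marking pattern visible in rows 0-2, abrupt density loss below row 2.
print(sharpest_drop([9, 9, 8, 2, 2, 1]))  # 3
```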

Establishment of Database and Distribution Maps for Biomass Resources (바이오매스 자원 DB 구축과 분포도 작성)

  • Kim, Yi-Hyun;Nam, Jae-Jak;Hong, S. Young;Choe, Eun-Young;Hong, Seung-Gil;So, Kyu-Ho
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.42 no.5
    • /
    • pp.379-384
    • /
    • 2009
  • This study was carried out to understand the national and regional distribution of the biomass resources produced annually in Korea by establishing a database (DB) and distribution maps of biomass resource data, including livestock manures, food wastes, and agricultural by-products. Information on the annual production of each biomass resource was obtained from the Ministry for Food, Agriculture, Forestry and Fisheries (MIFAFF), the Ministry of Environment (MOE), and the National Statistical Office (NSO). Based on these data, we established a database architecture table for livestock manures and food wastes. Distribution maps of the total amount of manure produced by each livestock animal were built at both national and regional scales and used to analyze the spatial and temporal distribution of manure resources. Distribution maps for food wastes and agricultural by-products were also produced. The analysis through resource mapping can be used to identify sources of collectable biomass and to determine regions suitable for establishing biomass-energy production. The biomass distribution maps graphically provide information on biomass resources to policy-makers, farmers, and general users, and they are expected to be utilized in policy-making for environmentally friendly agriculture and bio-energy.

Classification of Natural and Artificial Forests from KOMPSAT-3/3A/5 Images Using Deep Neural Network (심층신경망을 이용한 KOMPSAT-3/3A/5 영상으로부터 자연림과 인공림의 분류)

  • Baek, Won-Kyung;Lee, Yong-Suk;Park, Sung-Hwan;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_3
    • /
    • pp.1965-1974
    • /
    • 2021
  • Satellite remote sensing can be actively used for forest monitoring. In particular, it is meaningful to utilize the Korea Multi-Purpose Satellites, independently operated by Korea, for forest monitoring of Korea. Recently, several studies have exploited meaningful information from remotely sensed satellite data via machine learning approaches. The forest information produced through machine learning can support the efficiency of traditional forest monitoring methods, such as in-situ surveys or qualitative analysis of aerial images. The performance of machine learning approaches depends greatly on the characteristics of the study area and data, so it is very important to identify the best model among the various machine learning models. In this study, the performance of a deep neural network in classifying artificial and natural forests was analyzed in Samcheok, Korea. As a result, the pixel accuracy was about 0.857, and the F1 scores for natural and artificial forests were about 0.917 and 0.433, respectively. Although the F1 score for artificial forest was low, the classification performance improved by about 0.06 and 0.10 in F1 score for artificial and natural forests, respectively, compared with a single-layered sigmoid artificial neural network. Based on these results, it is necessary to find a more appropriate model for forest type classification by applying additional models based on convolutional neural networks.
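The per-class F1 scores reported in this abstract combine precision and recall for each forest type. A minimal per-class F1 computation, a generic sketch with hypothetical labels rather than the study's evaluation code, looks like this:

```python
def f1_score(y_true, y_pred, positive):
    """Harmonic mean of precision and recall for one class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical pixel labels: "nat" = natural forest, "art" = artificial forest.
truth = ["nat", "nat", "nat", "art", "art"]
pred  = ["nat", "nat", "art", "art", "art"]
print(round(f1_score(truth, pred, "nat"), 3))  # 0.8
```

A low F1 for one class, as with the artificial forest here, typically reflects class imbalance: when a class has few pixels, a handful of misclassifications sharply degrades its precision or recall even while overall pixel accuracy stays high.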