• Title/Summary/Keyword: 정보처리지식 (information processing knowledge)

Search Results: 1,694 (Processing Time: 0.03 seconds)

Database Security System supporting Access Control for Various Sizes of Data Groups (다양한 크기의 데이터 그룹에 대한 접근 제어를 지원하는 데이터베이스 보안 시스템)

  • Jeong, Min-A;Kim, Jung-Ja;Won, Yong-Gwan;Bae, Suk-Chan
    • The KIPS Transactions: Part D / v.10D no.7 / pp.1149-1154 / 2003
  • Due to various requirements for user access control to large databases in hospitals and banks, database security has been emphasized. There are many security models for database systems using a wide variety of policy-based access control methods. However, they are not functionally sufficient to meet the requirements for complicated and varied types of access control. In this paper, we propose a database security system that can individually control user access to data groups of various sizes and that is suitable for situations where a user's access privilege to arbitrary data changes frequently. A data group d of arbitrary size is defined by table name(s), attribute(s), and/or record key(s), and access privileges are defined by security levels, roles, and policies. The proposed system operates in two phases. The first phase is composed of a modified MAC (Mandatory Access Control) model and an RBAC (Role-Based Access Control) model. A user can access any data that has a lower or equal security level and that is accessible by the roles to which the user is assigned. All access modes are controlled in this phase. In the second phase, a modified DAC (Discretionary Access Control) model is applied to re-control the 'read' mode by filtering out the non-accessible data from the result obtained in the first phase. For this purpose, we also define user groups s that can be characterized by security levels, roles, or any partition of users. Policies represented in the form Block(s, d, r) are defined and used to control access to any data or data group(s) that is not permitted in 'read' mode. With the proposed security system, more complicated 'read' access to data of various sizes can be flexibly controlled for individual users, while other access modes are controlled as usual. An implementation example for a database system that manages specimen and clinical information is presented.
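
The two-phase control flow described in the abstract lends itself to a compact sketch. The following is a minimal illustration under the abstract's definitions, not the authors' implementation; all identifiers (DataGroup, phase1_allows, phase2_filter) and the row representation are invented for exposition:

```python
# Minimal sketch of the two-phase access control (illustrative names only).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DataGroup:
    table: str
    attribute: str | None = None   # None = whole table
    record_key: str | None = None  # None = all records

@dataclass
class User:
    name: str
    level: int                     # MAC security level (higher = more privileged)
    roles: set = field(default_factory=set)

# Block(s, d, r): user group s may NOT access data group d in mode r.
blocks: set[tuple[str, DataGroup, str]] = set()

def phase1_allows(user: User, data_level: int, data_roles: set) -> bool:
    """Phase 1 (modified MAC + RBAC), applied to every access mode:
    the user's level must dominate the data's level, and the user must
    hold at least one role that may access the data."""
    return user.level >= data_level and bool(user.roles & data_roles)

def phase2_filter(user_groups: set, rows: list) -> list:
    """Phase 2 (modified DAC), applied to 'read' results only:
    drop rows covered by any Block(s, d, 'read') policy."""
    def blocked(row: dict) -> bool:
        return any(s in user_groups
                   and d.table == row["table"]
                   and (d.record_key is None or d.record_key == row["key"])
                   for (s, d, r) in blocks if r == "read")
    return [row for row in rows if not blocked(row)]
```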

The Genealogical Study on SWIFTNet Trade Service Utility and Bank Payment Obligation (SWIFTNet TSU BPO의 계보학적 연구)

  • Lee, Bong-Soo
    • International Commerce and Information Review / v.18 no.3 / pp.3-21 / 2016
  • This thesis is a genealogical study of the various problems that arise in executing SWIFTNet TSU BPO, with practical implications for innovating the electronic trade infrastructure as follows. First, under SWIFTNet TSU BPO the shipping documents are sent directly from the exporter to the importer once the baseline is confirmed, so the bank cannot secure the account receivable through the documents themselves. When initiating a SWIFTNet TSU BPO deal, provisions securing the bank's account receivable therefore need to be set in the contract. Second, SWIFTNet TSU BPO should also have an institutionally unified sharing platform offering security, stability, and convenience. In other words, services that meet the e-payment paradigm and international environments need to be developed through continued analysis of market changes and flows. Third, SWIFTNet TSU is useful in terms of promptness, reduced risk in foreign exchange payment, and cost reduction. The SWIFT network should therefore be fully integrated and linked among banks, importers, and exporters to make SWIFTNet TSU more convenient in countries around the world. Fourth, SWIFT should be approached from the perspective of network expansion and the creation of new business models, analyzing these problems from a worldwide viewpoint. At the same time, it is necessary to build a cooperative system to share information and promote comprehensive management for efficient operation.

Understanding Innovation Using the Concept of Consumption Efficiency (소비효율성 개념을 이용한 혁신의 이해)

  • 박찬수;이정동;오동현
    • Proceedings of the Technology Innovation Conference / 2003.06a / pp.41-56 / 2003
  • In a market with many different products, innovative, competitive products whose prices are low relative to quality coexist with products that are not. Owing to limited information, bounded rationality, and other causes, however, innovative products are not the only ones consumers select and consume. To explain this phenomenon, this study introduces the concept of consumption efficiency. If consumption efficiency is extremely low, then even an innovative product has a low probability of being chosen by consumers and generating profit, so the producer's innovation incentive is inevitably weakened. The problem of consumption efficiency thus offers an important clue for understanding both the incentives for and the outcomes of innovation. Existing analytical frameworks for innovation, by contrast, are grounded in production economics, and efficiency has been treated within the categories of production efficiency or technical efficiency. The concept of consumption efficiency proposed here differs from prior work in that it is grounded in utility theory. Starting from utility-maximization theory, this study presents the theoretical derivation of a frontier hedonic function and applies the methodology of Stochastic Frontier Analysis (SFA) for the empirical work. The framework is applied to data from the Korean PC industry. Under a few assumptions, the analysis indicates that the industry carries roughly 13% inefficiency and that early adopters willingly tolerate a certain degree of inefficiency.
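
The frontier hedonic setup can be stated compactly. The following is a standard stochastic-frontier formulation consistent with the abstract, not necessarily the paper's exact specification:

```latex
% Stochastic-frontier hedonic model (illustrative): the log price of
% product i equals the frontier hedonic value of its quality attributes
% x_i plus symmetric noise v_i and a one-sided overpayment term u_i.
\[
  \ln p_i \;=\; h(x_i;\beta) \;+\; v_i \;+\; u_i,
  \qquad v_i \sim N(0,\sigma_v^2), \quad u_i \ge 0,
\]
\[
  \text{consumption efficiency of purchase } i \;=\; e^{-u_i} \in (0,1].
\]
```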

Usage of Imported Foods and the Ability to Distinguish Them among Housewives in the Cheongju Area (청주지역 주부들의 수입식품 이용실태 및 분별능력)

  • 김기남;박은진;손은미
    • Proceedings of the KSCN Conference / 2003.11a / pp.1074-1075 / 2003
  • Since the launch of the WTO, the share of imported foods in the Korean diet has grown rapidly, and consumers' lack of knowledge and information about imported foods and imported agricultural products, together with concerns about their safety, has made informed purchase and consumption difficult, sometimes resulting in harm. This study surveyed housewives in the Cheongju area on their use and perception of imported foods and their ability to distinguish imported from domestic foods, in order to examine the need for consumer education. The subjects were 183 housewives residing in Cheongju, surveyed from March 1 to March 15, 2003. The questionnaire covered general characteristics, attitudes when purchasing food, experience purchasing imported foods, perception of imported foods, and a test distinguishing imported from domestic foods; perception was measured on a 5-point scale. Statistical analysis was performed with the SAS program, computing frequencies, percentages, means, and standard deviations, with chi-square tests or ANOVA for significance. The results were as follows. Of the respondents, 92.3% said they check labels for domestic versus imported origin, and 99.5% preferred domestic products. The most common reason for this preference was that 'imported products contain many harmful substances' (46.3%). Of the respondents, 92.3% had purchased imported foods; the most common purchase motive was that 'imported products are widely available and easy to buy' (61.6%). In addition, 76% of respondents had mistakenly bought an imported food believing it to be domestic; the reasons given were 'the distinction between imported and domestic foods is not clear' (49.6%) and 'the labeling of imported versus domestic origin is not clear' (41%). As for trust in origin labeling, 89.7% generally did not trust it; the most common reason was that they 'frequently encounter news reports of false labeling' (56.1%). Perception of imported foods was examined in seven respects: quality, price, packaging, taste, cooking convenience, safety, and effects on health. Respondents perceived the quality and taste of imported foods as similar to or worse than domestic foods, and perceived imported foods as cheaper, with a significant association with age (p<0.01). Packaging and cooking convenience were perceived as similar to or better than domestic foods, with cooking convenience significantly associated with education level (p<0.01). Safety and effects on health were perceived as bad. As for distinguishing imported from domestic foods, 66.1% said they had the ability to do so. The most common source of such information was 'TV and radio programs' (61.7%). Regarding willingness to receive education on imported foods, 61.5% answered 'yes', and the most requested content was 'how to distinguish domestic from imported foods' (75.4%). On the discrimination test, the mean score was low at 13.6±7.4 out of 40; the item distinguished best was bracken (53.6%) and the worst was red pepper powder (40.4%). Respondents who said they 'distinguish very well' scored higher on the test, indicating that the housewives assessed their own ability accurately (p<0.001). Taken together, most respondents held negative attitudes toward imported foods yet had purchased them, and discrimination scores were low. To reduce harm to domestic consumers, publicity through mass media and expanded educational opportunities on the characteristics of imported and domestic foods and how to distinguish them are therefore needed.
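
The significance testing mentioned in the abstract (chi-square on cross-tabulations, ANOVA across groups) was done in SAS; the same tests are easy to reproduce. A minimal sketch with made-up counts, purely to show the test calls:

```python
# Illustrative re-creation of the abstract's tests; the data here are
# invented, not the survey's actual responses (those were analyzed in SAS).
import numpy as np
from scipy import stats

# Hypothetical cross-tabulation: age group (rows) x perceived price of
# imported foods (cheap / similar / expensive).
table = np.array([[30, 12, 8],
                  [25, 20, 15],
                  [10, 28, 35]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square={chi2:.2f}, dof={dof}, p={p:.4f}")

# Hypothetical one-way ANOVA: discrimination-test scores across three
# education levels.
rng = np.random.default_rng(1)
scores_by_group = [rng.normal(13.6, 7.4, 60) for _ in range(3)]
f, p = stats.f_oneway(*scores_by_group)
print(f"F={f:.2f}, p={p:.4f}")
```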

Scalable RDFS Reasoning Using the Graph Structure of In-Memory based Parallel Computing (인메모리 기반 병렬 컴퓨팅 그래프 구조를 이용한 대용량 RDFS 추론)

  • Jeon, MyungJoong;So, ChiSeoung;Jagvaral, Batselem;Kim, KangPil;Kim, Jin;Hong, JinYoung;Park, YoungTack
    • Journal of KIISE / v.42 no.8 / pp.998-1009 / 2015
  • In recent years, there has been growing interest in RDFS inference for building rich knowledge bases. However, it is difficult to improve inference performance over large data using a single machine, so researchers have been developing RDFS inference engines for distributed computing environments. The existing inference engines, however, cannot process data in real time, are difficult to implement, and are inefficient at repetitive (iterative) tasks. To overcome these problems, we propose a method for constructing an in-memory distributed inference engine that uses a parallel graph structure. In general, an ontology based on a triple structure possesses a graph structure, so it is intuitive to design a graph-structure-based inference engine. Moreover, the RDFS inference rules can be implemented with the operators of the graph structure, which lets us design the inference engine according to the graph structure rather than the structure of a data table. In this study, we evaluate the proposed inference engine on the LUBM1000 and LUBM3000 data sets to test inference speed. The results of our experiment indicate that the proposed in-memory distributed inference engine performed about 10 times faster than an in-storage inference engine.
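
The claim that RDFS rules map onto graph operators is easy to see on a single machine; the paper's contribution is running this at scale on an in-memory distributed graph framework. A minimal non-distributed sketch of two representative rules, with invented helper names:

```python
# Fixpoint application of two RDFS rules over a set of (s, p, o) triples:
# rdfs11 (subClassOf transitivity) and rdfs9 (type propagation).
RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def rdfs_closure(triples: set) -> set:
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        sub = {(s, o) for (s, p, o) in inferred if p == SUBCLASS}
        typ = {(s, o) for (s, p, o) in inferred if p == RDF_TYPE}
        new = set()
        # rdfs11: (a subClassOf b), (b subClassOf c) => (a subClassOf c)
        new |= {(a, SUBCLASS, c) for (a, b) in sub for (b2, c) in sub if b == b2}
        # rdfs9: (x type a), (a subClassOf b) => (x type b)
        new |= {(x, RDF_TYPE, b) for (x, a) in typ for (a2, b) in sub if a == a2}
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

# Example: Student subClassOf Person, alice a Student => alice a Person.
kb = {("Student", SUBCLASS, "Person"), ("alice", RDF_TYPE, "Student")}
assert ("alice", RDF_TYPE, "Person") in rdfs_closure(kb)
```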

ChIP-seq Library Preparation and NGS Data Analysis Using the Galaxy Platform (ChIP-seq 라이브러리 제작 및 Galaxy 플랫폼을 이용한 NGS 데이터 분석)

  • Kang, Yujin;Kang, Jin;Kim, Yea Woon;Kim, AeRi
    • Journal of Life Science / v.31 no.4 / pp.410-417 / 2021
  • Next-generation sequencing (NGS) is a high-throughput technique for sequencing large numbers of DNA fragments prepared from a genome. This sequencing technique has been used to elucidate the whole genome sequences of living organisms and to analyze complementary DNA (cDNA) or chromatin immunoprecipitated DNA (ChIPed DNA) at the genome level. After NGS, the use of proper tools is important for processing and analyzing data with reasonable parameters. However, handling large-scale sequencing data and programming for data analysis can be difficult. The Galaxy platform, a public web service system, provides many different tools for NGS data analysis, and it allows researchers to analyze their data in a web browser with no deep knowledge of bioinformatics and/or programming. In this study, we explain the procedure for preparing chromatin immunoprecipitation-sequencing (ChIP-seq) libraries and the steps for analyzing ChIP-seq data using the Galaxy platform. The data analysis steps include uploading the NGS data to Galaxy, quality checking of the NGS data, pre-mapping processes, read mapping, the post-mapping process, peak calling, and visualization by window view, heatmaps, average profiles, and correlation analysis. Analysis of our histone H3K4me1 ChIP-seq data in K562 cells shows that it correlates with public data. Thus, NGS data analysis using the Galaxy platform can provide an easy approach to bioinformatics.
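
Galaxy wraps command-line tools behind its web interface; the steps listed above correspond roughly to the following pipeline. A sketch only: the tools (FastQC, Bowtie2, samtools, MACS2) are common choices for these steps, and the file names and genome index path are hypothetical:

```python
# Rough command-line equivalent of the Galaxy steps described above.
# File names and the genome index are placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["fastqc", "chip_reads.fastq.gz"])                 # quality check
run(["bowtie2", "-x", "hg38_index",                    # read mapping
     "-U", "chip_reads.fastq.gz", "-S", "chip.sam"])
run(["samtools", "sort", "-o", "chip.sorted.bam", "chip.sam"])
run(["samtools", "index", "chip.sorted.bam"])          # post-mapping process
run(["macs2", "callpeak",                              # peak calling
     "-t", "chip.sorted.bam", "-c", "input.sorted.bam",
     "-f", "BAM", "-g", "hs", "-n", "H3K4me1_K562"])
```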

A Study on Building Libraries and Librarian Archive (도서관 사서 아카이브 구축 방안 연구)

  • Kang, Min-ji;Rieh, Hae-young
    • The Korean Journal of Archival Studies / no.80 / pp.89-128 / 2024
  • This study starts from the premise that records can help strengthen identity, and that a job archive, by preserving the history and culture of an occupation, can foster pride and identity in it. It investigated whether building a librarian archive could help preserve records related to librarians, share experience and knowledge among librarians, and confirm the history and identity of both individuals and the group. In-depth interviews were conducted with librarians to examine what records should be included and what records could help strengthen librarians' identity. As a result of the interviews, various records related to librarians' work were identified, and records related to the history of librarianship were also confirmed. Historical events related to librarians and the library services provided by previous generations of librarians were found to help strengthen librarians' identity. A librarian archive can thus help librarians reflect on themselves as librarians and decide on their own direction. This study presents a plan to build the librarian archive as a participatory archive through cooperation between institutions and communities. Building a librarian archive is expected to strengthen librarians' professional identity, share the history of librarianship, create a collective-intelligence platform, support more efficient work, and improve the social status of libraries and librarians.

Direct Reconstruction of Displaced Subdivision Mesh from Unorganized 3D Points (연결정보가 없는 3차원 점으로부터 차이분할메쉬 직접 복원)

  • Jung, Won-Ki;Kim, Chang-Heon
    • Journal of KIISE: Computer Systems and Theory / v.29 no.6 / pp.307-317 / 2002
  • In this paper we propose a new mesh reconstruction scheme that produces a displaced subdivision surface directly from unorganized points. The displaced subdivision surface is a mesh representation that defines a detailed mesh with a displacement map over a smooth domain surface, but the original displaced subdivision surface algorithm needs an explicit polygonal mesh as input, since it is a mesh conversion (remeshing) algorithm rather than a mesh reconstruction algorithm. The main idea of our approach is to sample surface detail from unorganized points without any topological information. For this, we predict a virtual triangular face from the unorganized points for each sampling ray cast from a parametric domain surface. Direct reconstruction of a displaced subdivision surface from unorganized points is important because the output has several useful properties: it is a compact mesh representation, since most vertices can be represented by a single scalar value; its underlying structure is piecewise regular, so it can easily be transformed into a multiresolution mesh; and smoothness after mesh deformation is automatically preserved. We avoid time-consuming global energy optimization by employing input-data-dependent mesh smoothing, so we obtain a good-quality displaced subdivision surface quickly.
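
The core sampling step, predicting a "virtual face" for each ray, can be sketched briefly. The following illustrates the idea under stated assumptions (a fixed-radius neighborhood and a least-squares plane standing in for the virtual triangular face); it is not the paper's exact procedure:

```python
# For each sampling ray (origin on the smooth domain surface, direction =
# surface normal), fit a local plane to nearby unorganized points and take
# the ray-plane intersection distance as the scalar displacement.
import numpy as np

def sample_displacement(origin, normal, points, radius=0.1):
    """Signed displacement along `normal`, or None if too few points."""
    near = points[np.linalg.norm(points - origin, axis=1) < radius]
    if len(near) < 3:
        return None                       # cannot predict a virtual face
    centroid = near.mean(axis=0)
    # Plane normal = eigenvector of the smallest covariance eigenvalue.
    cov = np.cov((near - centroid).T)
    plane_n = np.linalg.eigh(cov)[1][:, 0]
    denom = plane_n @ normal
    if abs(denom) < 1e-8:
        return None                       # ray parallel to fitted plane
    # Solve plane_n . (origin + t*normal - centroid) = 0 for t.
    return plane_n @ (centroid - origin) / denom
```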

A Study on Automatic Classification Model of Documents Based on Korean Standard Industrial Classification (한국표준산업분류를 기준으로 한 문서의 자동 분류 모델에 관한 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.221-241 / 2018
  • As we enter the knowledge society, the importance of information as a new form of capital is being emphasized, and the importance of information classification is also increasing for efficient management of digital information produced at an exponential rate. In this study, we attempt to automatically classify information and provide tailored information that can help companies decide on technology commercialization. We therefore propose a method to classify information according to the Korea Standard Industrial Classification (KSIC), which indicates the business characteristics of enterprises. Classification of information or documents has largely relied on machine learning, but there is not enough training data categorized by KSIC, so this study applies a method based on calculating similarity between documents. Specifically, we propose a method and a model for presenting the most appropriate KSIC code by collecting the explanatory text of each KSIC code and computing its similarity with the document to be classified using the vector space model. IPC data were collected and classified by KSIC, and the methodology was then verified by comparison with the KSIC-IPC concordance table provided by the Korean Intellectual Property Office. The verification showed the highest agreement when the LT method, a variant of the TF-IDF weighting formula, was applied: the first-ranked KSIC code matched in 53% of cases, and the cumulative match within the top five ranks was 76%. This confirms that the technology, industry, and market information SMEs need can be classified by KSIC more quantitatively and objectively. In addition, the methods and results provided in this study can serve as basic data to support experts' qualitative judgment in creating concordance tables between heterogeneous classification systems.
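
The vector-space matching itself is straightforward. A minimal sketch: rank KSIC codes by cosine similarity between a target document and each code's explanatory text. Note that sklearn's default TF-IDF weighting is not the paper's LT variant, and the code texts below are abbreviated placeholders:

```python
# Rank candidate KSIC codes by cosine similarity in a TF-IDF vector space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ksic_texts = {  # placeholder excerpts of KSIC explanatory text
    "C26": "manufacture of electronic components computers and communication equipment",
    "J62": "computer programming systems integration and management consultancy",
    "M70": "research and development in natural sciences and engineering",
}
target = "a patent describing semiconductor memory device fabrication"

codes = list(ksic_texts)
vec = TfidfVectorizer()
m = vec.fit_transform(list(ksic_texts.values()) + [target])
sims = cosine_similarity(m[-1], m[:-1]).ravel()
for code, score in sorted(zip(codes, sims), key=lambda x: -x[1]):
    print(code, round(score, 3))
```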

KoFlux's Progress: Background, Status and Direction (KoFlux 역정: 배경, 현황 및 향방)

  • Kwon, Hyo-Jung;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology / v.12 no.4 / pp.241-263 / 2010
  • KoFlux is a Korean network of micrometeorological tower sites that use eddy covariance methods to monitor the cycles of energy, water, and carbon dioxide between the atmosphere and key terrestrial ecosystems in Korea. KoFlux embraces the mission of AsiaFlux, i.e., to bring Asia's key ecosystems under observation to ensure the quality and sustainability of life on earth. The main purposes of KoFlux are to provide (1) an infrastructure to monitor, compile, archive, and distribute data for the science community and (2) a forum and short courses for the application and distribution of knowledge and data among scientists, including practitioners. The KoFlux community pursues the vision of AsiaFlux, "thinking community, learning frontiers," by creating information and knowledge of ecosystem science on carbon, water, and energy exchanges in key terrestrial ecosystems in Asia, by promoting multidisciplinary cooperation and the integration of scientific research and practice, and by providing local communities with sustainable ecosystem services. Currently, KoFlux has seven sites in key terrestrial ecosystems: five in Korea and two in the Arctic and Antarctic. KoFlux has systematized standardized data processing based on scrutiny of the data observed at these sites, and has synthesized the processed data into an openly accessible database for further use. Through regular publications, workshops, and training courses, KoFlux has provided an agora for building networks, exchanging information among flux measurement and modelling experts, and educating scientists in flux measurement and data analysis. Despite such persistent initiatives, collaborative networking is still limited within the KoFlux community. In order to break down the walls between disciplines and boost partnership in and ownership of the network, KoFlux will be housed in the National Center for Agro-Meteorology (NCAM) at Seoul National University in 2011 and will provide several of NCAM's core services. Such concerted efforts will facilitate the augmentation of the current monitoring network, the education of next-generation scientists, and the provision of sustainable ecosystem services to our society.
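
At its core, the eddy covariance method named above computes a flux as the covariance of vertical wind speed and a scalar over an averaging window. A minimal sketch with synthetic numbers; real KoFlux processing adds coordinate rotation, density (WPL) corrections, quality control, and gap filling:

```python
# Toy eddy covariance flux: F = mean(w' * c') over a 30-min block.
# Data are synthetic; a negative flux means uptake by the ecosystem.
import numpy as np

rng = np.random.default_rng(0)
n = 18000                                     # 30 min at 10 Hz
w = rng.normal(0.0, 0.3, n)                   # vertical wind speed (m s-1)
c = 16.0 - 0.5 * w + rng.normal(0.0, 0.2, n)  # CO2 density (mmol m-3)

w_prime = w - w.mean()                        # fluctuations about block mean
c_prime = c - c.mean()
flux = (w_prime * c_prime).mean()             # mmol m-2 s-1
print(f"F_CO2 = {flux:.3f} mmol m-2 s-1")
```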