• Title/Summary/Keyword: Information system adoption methods (정보시스템 도입방법)


Probability-based Pre-fetching Method for Multi-level Abstracted Data in Web GIS (웹 지리정보시스템에서 다단계 추상화 데이터의 확률기반 프리페칭 기법)

  • 황병연;박연원;김유성
    • Spatial Information Research / v.11 no.3 / pp.261-274 / 2003
  • In Web GISs (Geographical Information Systems), an effective probability-based tile pre-fetching algorithm and a collaborative cache replacement algorithm can reduce the response time for user requests by transferring, in advance, tiles that are likely to be used, and by choosing which tiles to evict from a client's limited cache space based on future access probabilities. Web GISs keep multi-level abstracted data to answer zoom-in and zoom-out queries quickly. However, the previous pre-fetching algorithm operates only on a two-dimensional pre-fetching space and does not consider the expanded pre-fetching space that multi-level abstracted data requires. This thesis proposes a probability-based pre-fetching algorithm for multi-level abstracted data in Web GISs, which expands the previous two-dimensional pre-fetching space into a three-dimensional one so that tiles at upper or lower levels can also be pre-fetched. The effect of the proposed algorithm was evaluated by simulation: the response time for user requests improved by 1.8% to 21.6% on average. Consequently, in Web GISs with multi-level abstracted data, the proposed pre-fetching algorithm combined with the collaborative cache replacement algorithm can substantially reduce the response time for user requests.

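
The general idea behind the abstract above can be sketched as follows. This is an illustrative toy, not the paper's implementation: tiles are identified by hypothetical (x, y, zoom-level) triples, the transition probabilities are assumed to come from an access log, and the 0.2 threshold is a made-up parameter.

```python
# Probability-based pre-fetching over a 3-D tile space (x, y, zoom level):
# neighboring tiles, including those one zoom level up or down, are ranked
# by an assumed access probability and the likeliest ones are pre-fetched.

def neighbors_3d(tile):
    """All tiles adjacent to `tile` in x, y, or zoom level."""
    x, y, level = tile
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dl in (-1, 0, 1):
                if (dx, dy, dl) != (0, 0, 0):
                    result.append((x + dx, y + dy, level + dl))
    return result

def prefetch_candidates(current_tile, transition_prob, threshold=0.2):
    """Return neighbor tiles whose estimated access probability exceeds
    `threshold`, ordered most-probable first."""
    scored = [(transition_prob.get(t, 0.0), t) for t in neighbors_3d(current_tile)]
    return [t for p, t in sorted(scored, reverse=True) if p > threshold]

# Toy transition probabilities (hypothetical, as if learned from a log).
probs = {(5, 4, 2): 0.6, (4, 4, 1): 0.3, (6, 5, 2): 0.1}
print(prefetch_candidates((5, 5, 2), probs))  # [(5, 4, 2), (4, 4, 1)]
```

The zoom-level dimension is what distinguishes the three-dimensional pre-fetching space from the earlier two-dimensional one: (4, 4, 1) is a candidate even though it lies on a coarser abstraction level.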

A Study on the Economic Analysis Method of Energy Storage System (에너지 저장 시스템(ESS)의 경제성 분석 기법에 관한 연구)

  • Yoon, Young-Sang;Choi, Jae-Hyun;Choi, Yong-Lak;Shin, Yongtae;Kim, Jong-Bae
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.3 / pp.596-606 / 2015
  • Recently, the government has been promoting policies to spread and expand new renewable energy. To this end, investment and research are ongoing on the ESS (Energy Storage System), a core component of the Smart Grid being deployed across industrialized countries. The US and European countries have also carried out a variety of improvements to ESS-related systems in order to stimulate the ESS industry. Korea, on the other hand, has no legal and institutional foundation for promoting ESS adoption, and there is no objective basis for assessing the economic impact of introducing an ESS; as a result, the spread of ESSs has not proceeded properly. This paper proposes a technique for analyzing the economics of an ESS under the Korean electricity pricing system, to support the effective spread of ESSs. To do this, we define ESS operating models and derive the best economic analysis method by comparing the economics of each operating model.
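
A minimal sketch of the kind of comparison such an economic analysis involves: the net present value (NPV) of an ESS operating model that charges at off-peak rates and discharges at peak rates. All prices, efficiencies, lifetimes, and the discount rate below are hypothetical placeholders, not the paper's figures or the actual Korean tariff.

```python
# NPV of a simple price-arbitrage ESS operating model: charge cheap,
# discharge expensive, discount the yearly benefit over the lifetime.

def annual_arbitrage_benefit(capacity_kwh, peak_price, offpeak_price,
                             efficiency=0.9, cycles_per_year=300):
    """Yearly benefit of one charge/discharge cycle per operating day."""
    per_cycle = capacity_kwh * (peak_price * efficiency - offpeak_price)
    return per_cycle * cycles_per_year

def npv(investment, annual_benefit, years=10, discount_rate=0.05):
    """Net present value of the ESS over its assumed lifetime."""
    pv = sum(annual_benefit / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - investment

# Hypothetical 1 MWh system with made-up KRW/kWh prices.
benefit = annual_arbitrage_benefit(1000, peak_price=150, offpeak_price=60)
print(round(npv(investment=200_000_000, annual_benefit=benefit)))
```

Comparing operating models then amounts to evaluating `npv` for each model's benefit stream and picking the largest.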

OpenGL ES 1.1 Implementation Using OpenGL (OpenGL을 이용한 OpenGL ES 1.1 구현)

  • Lee, Hwan-Yong;Baek, Nak-Hoon
    • The KIPS Transactions:PartA / v.16A no.3 / pp.159-168 / 2009
  • In this paper, we present an efficient way of implementing the OpenGL ES 1.1 standard for environments with a hardware-supported OpenGL API, such as desktop PCs. Although OpenGL ES started from existing OpenGL features, it has become a new three-dimensional graphics library customized for embedded systems by introducing fixed-point arithmetic operations, buffer management with fixed-point data type support, completely new texture mapping functionality, and more. It is currently the official three-dimensional graphics library for Google Android, Apple iPhone, PlayStation 3, etc. In this paper, we improve the arithmetic operations for the fixed-point number representation, the most characteristic data type of OpenGL ES. For converting fixed-point data types to the floating-point representation of the underlying OpenGL, we show an efficient conversion process that still satisfies the OpenGL ES standard requirements. We also introduce a simple memory management scheme to manage the converted data for buffers containing fixed-point numbers. For texture processing, the requirements of the two standards are quite different, so we used a completely new software implementation. Our final OpenGL ES library provides all of the more than 200 functions in the OpenGL ES 1.1 standard and completely passed its conformance test, showing its compliance with the standard. From an efficiency viewpoint, we measured execution times for several OpenGL ES-specific application programs and achieved speed-ups of up to 33.147 times, making ours the fastest among OpenGL ES implementations in the same category.
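
For readers unfamiliar with the fixed-point type the abstract refers to: OpenGL ES defines GLfixed as a signed 16.16 fixed-point number. The basic conversions are sketched below; the paper's caching and bulk-conversion optimizations are not reproduced here.

```python
# Signed 16.16 fixed-point (GLfixed) basics: 16 integer bits, 16
# fractional bits, so 1.0 is represented as 1 << 16 = 65536.

ONE = 1 << 16  # GLfixed representation of 1.0

def fixed_to_float(x):
    """Interpret a 16.16 fixed-point value as a float."""
    return x / ONE

def float_to_fixed(f):
    """Convert a float to 16.16 fixed-point (truncating toward zero)."""
    return int(f * ONE)

def fixed_mul(a, b):
    """Multiply two 16.16 values: the double-width product carries
    32 fractional bits, so shift back down by 16."""
    return (a * b) >> 16

print(fixed_to_float(float_to_fixed(1.5)))  # 1.5
print(fixed_to_float(fixed_mul(float_to_fixed(2.0), float_to_fixed(0.25))))  # 0.5
```

A value like 1.5 is exactly representable (98304 = 1.5 x 65536), which is why the round trip above is lossless; floats with more than 16 fractional bits of precision are truncated.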

Development of Web Service for Liver Cirrhosis Diagnosis Based on Machine Learning (머신러닝기반 간 경화증 진단을 위한 웹 서비스 개발)

  • Noh, Si-Hyeong;Kim, Ji-Eon;Lee, Chungsub;Kim, Tae-Hoon;Kim, KyungWon;Yoon, Kwon-Ha;Jeong, Chang-Won
    • KIPS Transactions on Computer and Communication Systems / v.10 no.10 / pp.285-290 / 2021
  • In the medical field, research on disease diagnosis and prediction using artificial intelligence is being actively conducted, and a variety of diagnosis and prediction products have been released, most of them applying artificial intelligence to medical images. Artificial intelligence is applied to diagnose diseases, to classify them as benign or malignant, and to segment disease regions for identification or reading according to the risk of the disease. Recently, in connection with cloud technology, the utility of such systems as service products has been increasing. Among diseases, liver disease carries very high risk because its lack of pain makes early diagnosis difficult. Artificial intelligence based on medical images has been introduced as a non-invasive method for diagnosing such diseases. We describe the development of a web service that supports the most meaningful clinical reading for liver cirrhosis patients, present the web service process, and show the operation screen of each step together with the final result screen. The proposed service is expected to enable early diagnosis of liver cirrhosis and to help patients recover through rapid treatment.

Static Identification of Firmware Linux Kernel Version by using Symbol Table (심볼 테이블을 이용한 펌웨어 리눅스 커널 버전 정적 식별 기법)

  • Kim, Kwang-jun;Cho, Yeo-jeong;Kim, Yun-jeong;Lee, Man-hee
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.1 / pp.67-75 / 2022
  • When acquiring a product that has an OS, it is very important to identify the exact kernel version of the OS, because the product's administrator needs to keep checking whether new vulnerabilities have been found in that kernel version. Also, if an acquisition requirement excludes or mandates a specific kernel version, kernel identification becomes critical to the acquisition decision. For the Linux kernel used in various equipment, it is sometimes difficult to pinpoint a device's exact version, because manufacturers often modify the kernel to produce firmware optimized for their devices. Furthermore, once a kernel patch is applied to such a modified kernel, it differs greatly from its base kernel, so simple methods such as testing for the existence of a specific file cannot identify the Linux kernel accurately. In this paper, we propose a static method to classify a specific kernel version by analyzing the function names stored in the symbol table. In an experiment with 100 Linux devices, we identified the Linux kernel version with 99% accuracy.
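
A simplified sketch of the idea: compare the set of function names extracted from a firmware's symbol table against reference symbol sets built from known kernel versions, and pick the closest match. The reference sets below are tiny hypothetical examples, not real kernel data, and the paper's actual classification method may differ.

```python
# Match an extracted symbol set against per-version reference sets using
# Jaccard similarity; vendor-added symbols only dilute the score slightly.

def jaccard(a, b):
    """Set similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def identify_kernel_version(firmware_symbols, reference_sets):
    """Return the kernel version whose symbol set is most similar."""
    return max(reference_sets,
               key=lambda v: jaccard(firmware_symbols, reference_sets[v]))

# Toy reference data (illustrative symbol names only).
references = {
    "4.4":  {"do_fork", "sys_open", "vfs_read"},
    "5.10": {"kernel_clone", "do_sys_open", "vfs_read"},
}
extracted = {"kernel_clone", "vfs_read", "do_sys_open", "vendor_hook"}
print(identify_kernel_version(extracted, references))  # 5.10
```

Because the comparison is over whole symbol sets rather than a single file's presence, vendor modifications and patches shift the similarity score gradually instead of breaking the identification outright.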

What factors drive AI project success? (무엇이 AI 프로젝트를 성공적으로 이끄는가?)

  • KyeSook Kim;Hyunchul Ahn
    • Journal of Intelligence and Information Systems / v.29 no.1 / pp.327-351 / 2023
  • This paper aims to derive the factors that lead an artificial intelligence (AI) project to success and to prioritize their importance. To this end, we first reviewed related prior studies to select candidate success factors and derived 17 factors through expert interviews. We then developed a hierarchical model based on the TOE framework. Using this model, a survey was conducted on experts from companies using AI and from supplier companies that provide AI advice, technologies, platforms, and applications, and the responses were analyzed with the AHP method. The analysis shows that organizational and technical factors are more important than environmental factors, with organizational factors slightly more critical. Among the organizational factors, strategic/clear business needs, AI implementation/utilization capabilities, and collaboration/communication between departments were the most important. Among the technical factors, a sufficient amount of high-quality data for AI learning was the most important, followed by IT infrastructure/compatibility. Among the environmental factors, customer readiness and support for the direct use of AI were essential. Looking at the importance of the 17 individual factors, data availability and quality (0.2245) ranked highest, followed by strategy/clear business needs (0.1076) and customer readiness/support (0.0763). These results can guide companies considering or implementing AI adoption, service providers supporting AI adoption, and government policymakers seeking to foster the AI industry, and are also expected to contribute to researchers studying AI success models.
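
The AHP step the abstract mentions can be sketched as follows: priorities are derived from a pairwise comparison matrix, here with the geometric mean approximation. The 3x3 matrix below (organizational vs. technical vs. environmental) is a made-up illustration, not the paper's survey data.

```python
# Approximate AHP priorities: geometric mean of each matrix row,
# normalized so that the weights sum to 1.

import math

def ahp_weights(matrix):
    """Derive priority weights from a pairwise comparison matrix."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical judgments: organizational > technical > environmental.
comparisons = [
    [1.0, 2.0, 3.0],   # organizational vs. each
    [1/2, 1.0, 2.0],   # technical vs. each
    [1/3, 1/2, 1.0],   # environmental vs. each
]
weights = ahp_weights(comparisons)
print([round(w, 3) for w in weights])
```

In a full AHP study the same computation is repeated for the sub-factor matrices, and global importances (like the 0.2245 for data availability/quality) are products of a factor's local weight and its parent category's weight.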

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have made their internally developed AI technology public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers use and reproduce various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We analyzed three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework.
By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, research developers' use of deep learning frameworks should be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise should be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three steps in the usage stage, companies increase the number of deep learning research developers, their ability to use the framework, and the available GPU resources. In the proliferation stage, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of developers, for example by optimizing the hardware (GPU) environment automatically. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source community with the in-house community and by organizing developer retraining and seminars.
To implement the five success factors, a step-by-step enterprise procedure for adopting a deep learning framework is proposed: defining the project problem, confirming that the deep learning methodology is the right method, confirming that the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework; once they are clear, the remaining two steps can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five factors must be realized for a successful adoption. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Design and Implementation of a Protection and Distribution System for Digital Broadcasting Contents (디지털 방송 콘텐츠 보호 유통 시스템 설계 및 구현)

  • Lee Hyejoo;Choi BumSeok;Hong Jinwoo;Seo Jongwon
    • The KIPS Transactions:PartC / v.11C no.6 s.95 / pp.731-738 / 2004
  • With the increase in digital content usage, the protection of digital content and intellectual property becomes more important. DRM (digital rights management) technologies can protect not only any kind of digital content but also intellectual property. Such techniques are also required for recorded digital broadcasting content, owing to the introduction of digital broadcasting and of storage devices such as the personal video recorder. The conventional protection scheme for broadcasting content is the CAS (conditional access system), which controls a viewer's access to specific channels or programs. Because the CAS prohibits the viewer from delivering digital broadcasting content to another person, it prevents superdistribution of that content. In this paper, for broadcasts targeting an unspecified audience, we design a service model for the protection and distribution of digital broadcasting content that uses encryption and licenses, borrowing the concept of DRM. Implementation results are also shown to verify the functions of each component. The implemented system has the advantages that broadcast content may be recorded on a set-top box and that superdistribution by consumers is possible. It thus provides content providers and consumers with a trustworthy environment for content protection and distribution.

An Efficient Snapshot Technique for Shared Storage Systems supporting Large Capacity (대용량 공유 스토리지 시스템을 위한 효율적인 스냅샷 기법)

  • 김영호;강동재;박유현;김창수;김명준
    • Journal of KIISE:Databases / v.31 no.2 / pp.108-121 / 2004
  • In this paper, we propose an enhanced snapshot technique that avoids the performance degradation that occurs when a snapshot is initiated in a storage cluster system. The traditional snapshot technique has limits when applied to large storage shared by multiple hosts: as the volume size grows, (1) it severely degrades write performance because an additional disk access is needed to verify whether copy-on-write (COW) has already been performed; (2) it excessively increases the blocking time of write operations performed during snapshot creation; and (3) it degrades write performance through the additional disk I/O for the mapping block caused by the COW verification. We therefore propose an efficient snapshot technique for large storage shared by multiple hosts in SAN environments. We eliminate the write-blocking time caused by freezing the volume while a snapshot is being created. To improve write performance after a snapshot is taken, we introduce the First Allocation Bit (FAB) and the Snapshot Status Bit (SSB), which reduce the additional disk accesses to the volume needed to fetch the snapshot mapping block. At snapshot deletion time, the technique also improves performance by deallocating COW data blocks using the SSB of the original mapping entry, without reading the snapshot mapping block from the shared disk.
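
An illustrative sketch (not the paper's code) of how a per-block allocation bit can remove the extra COW check on the write path: once the bit records that a block has already been copied (or was first allocated) after the snapshot, later writes skip both the copy and any mapping-block lookup. The class names and in-memory structures are hypothetical simplifications of the on-disk FAB/SSB bitmaps.

```python
# Copy-on-write write path guarded by a per-block bit: the first write
# after a snapshot preserves the old data, every later write goes direct.

class SnapshotVolume:
    def __init__(self, nblocks):
        self.data = [b""] * nblocks
        self.snapshot = {}            # block -> preserved old contents
        self.fab = [False] * nblocks  # True once COW for this block is done

    def take_snapshot(self):
        """Clear all bits; no data is copied until a block is written."""
        self.fab = [False] * len(self.data)
        self.snapshot = {}

    def write(self, block, payload):
        if not self.fab[block]:
            # First write since the snapshot: preserve old data (COW),
            # then set the bit so later writes skip this check entirely.
            self.snapshot[block] = self.data[block]
            self.fab[block] = True
        self.data[block] = payload

vol = SnapshotVolume(4)
vol.data[0] = b"old"
vol.take_snapshot()
vol.write(0, b"new")    # triggers COW
vol.write(0, b"newer")  # bit set: no COW, no extra lookup
print(vol.data[0], vol.snapshot[0])
```

The point of the bit is that the hot path (second and later writes) touches only an in-memory bitmap, never the snapshot mapping block on disk.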

A Study on the Identification and Classification of Relation Between Biotechnology Terms Using Semantic Parse Tree Kernel (시맨틱 구문 트리 커널을 이용한 생명공학 분야 전문용어간 관계 식별 및 분류 연구)

  • Choi, Sung-Pil;Jeong, Chang-Hoo;Chun, Hong-Woo;Cho, Hyun-Yang
    • Journal of the Korean Society for Library and Information Science / v.45 no.2 / pp.251-275 / 2011
  • In this paper, we propose a novel kernel, called a semantic parse tree kernel, that extends the parse tree kernel previously studied for extracting protein-protein interactions (PPIs), where it has shown prominent results. A drawback of the existing parse tree kernel is that it can degrade overall PPI-extraction performance: its kernel function may assign two sentences a lower kernel value than their actual similarity warrants, because its simple comparison mechanism handles only the superficial aspects of the constituent words. The new kernel computes the lexical semantic similarity as well as the syntactic similarity between the two parse trees of the target sentences. To calculate lexical semantic similarity, it incorporates context-based word sense disambiguation that outputs WordNet synsets, which can in turn be generalized into broader concepts. In the experiments, we introduced two new parameters, the tree kernel decay factor and the degree of abstraction of lexical concepts, which can accelerate the optimization of PPI-extraction performance in addition to the conventional SVM regularization factor. Through these multi-strategy experiments, we confirmed the pivotal role of the newly applied parameters. The experimental results also showed that the semantic parse tree kernel is superior to the conventional kernels, especially in PPI classification tasks.
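
For context, here is a compact sketch of the non-semantic parse tree kernel that the paper extends: the Collins-Duffy subset-tree kernel with a decay factor lambda. The semantic extension (WordNet-based similarity of leaf words) is omitted, and the tiny example trees are made up for illustration.

```python
# Subset-tree kernel: trees are nested tuples (label, child, ...), leaves
# are strings. C(n1, n2) counts common subset-trees rooted at n1 and n2,
# down-weighted by the decay factor lam.

def production(n):
    """Node label plus the ordered labels of its children."""
    return (n[0],) + tuple(c if isinstance(c, str) else c[0] for c in n[1:])

def c_value(n1, n2, lam):
    if production(n1) != production(n2):
        return 0.0
    result = lam
    for c1, c2 in zip(n1[1:], n2[1:]):
        if not isinstance(c1, str):          # string children already matched
            result *= 1.0 + c_value(c1, c2, lam)
    return result

def tree_kernel(t1, t2, lam=0.5):
    """Sum C over all pairs of internal nodes of the two trees."""
    def nodes(t):
        if isinstance(t, str):
            return []
        return [t] + [n for c in t[1:] for n in nodes(c)]
    return sum(c_value(a, b, lam) for a in nodes(t1) for b in nodes(t2))

t1 = ("S", ("NP", "protein"), ("VP", ("V", "binds"), ("NP", "receptor")))
t2 = ("S", ("NP", "kinase"), ("VP", ("V", "binds"), ("NP", "receptor")))
print(tree_kernel(t1, t2))  # 3.1875
```

The abstract's criticism is visible here: because `production` compares leaf words literally, "protein" and "kinase" contribute nothing even though they are semantically close; the proposed semantic kernel replaces that exact match with a WordNet-synset similarity.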