• Title/Summary/Keyword: Information System Adoption Methods (정보시스템 도입방법)


OpenGL ES 1.1 Implementation Using OpenGL (OpenGL을 이용한 OpenGL ES 1.1 구현)

  • Lee, Hwan-Yong; Baek, Nak-Hoon
    • The KIPS Transactions: Part A / v.16A no.3 / pp.159-168 / 2009
  • In this paper, we present an efficient way of implementing the OpenGL ES 1.1 standard for environments with hardware-supported OpenGL APIs, such as desktop PCs. Although OpenGL ES started from existing OpenGL features, it has become a new three-dimensional graphics library customized for embedded systems by introducing fixed-point arithmetic operations, buffer management with fixed-point data type support, completely new texture mapping functionality, and more. It is currently the official three-dimensional graphics library for Google Android, Apple iPhone, PlayStation 3, and other platforms. In this paper, we improve the arithmetic operations for the fixed-point number representation, the most characteristic data type of OpenGL ES. For converting fixed-point data types to the floating-point representation used by the underlying OpenGL, we show an efficient conversion process that still satisfies the OpenGL ES standard requirements. We also introduce a simple memory management scheme to manage the converted data for buffers containing fixed-point numbers. For texture processing, the requirements of the two standards differ substantially, so we used a completely new software implementation. Our final OpenGL ES library implementation provides all of the more than 200 functions in the OpenGL ES 1.1 standard and fully passed its conformance test, demonstrating compliance with the standard. From the efficiency viewpoint, we measured execution times for several OpenGL ES-specific application programs and achieved speed-ups of up to 33.147 times, making ours the fastest among OpenGL ES implementations in the same category.
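
    The following is a minimal sketch, not the paper's optimized implementation, of the signed 16.16 fixed-point type (GLfixed) that OpenGL ES 1.1 uses and the kind of fixed-point/floating-point conversion the abstract describes; the function names are illustrative.

    ```python
    # Signed 16.16 fixed point (GLfixed) and conversion to/from float.
    FX_ONE = 1 << 16                     # 1.0 in 16.16 fixed point (65536)

    def float_to_fixed(f: float) -> int:
        """Convert a float to a 16.16 fixed-point integer."""
        return int(round(f * FX_ONE))

    def fixed_to_float(x: int) -> float:
        """Convert 16.16 fixed point to float; multiplying by a precomputed
        reciprocal is the kind of per-element cost the paper optimizes when
        converting whole buffers of fixed-point vertex data."""
        return x * (1.0 / FX_ONE)

    def fixed_mul(a: int, b: int) -> int:
        """Multiply two 16.16 values: the 32.32 product is shifted back to 16.16."""
        return (a * b) >> 16

    half, two = float_to_fixed(0.5), float_to_fixed(2.0)   # 32768, 131072
    assert fixed_to_float(fixed_mul(half, two)) == 1.0
    ```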

Development of Web Service for Liver Cirrhosis Diagnosis Based on Machine Learning (머신러닝기반 간 경화증 진단을 위한 웹 서비스 개발)

  • Noh, Si-Hyeong; Kim, Ji-Eon; Lee, Chungsub; Kim, Tae-Hoon; Kim, KyungWon; Yoon, Kwon-Ha; Jeong, Chang-Won
    • KIPS Transactions on Computer and Communication Systems / v.10 no.10 / pp.285-290 / 2021
  • In the medical field, research on disease diagnosis and prediction using artificial intelligence is being actively conducted, and a variety of products have been released, most of them applying artificial intelligence to medical images. Artificial intelligence is applied to diagnose diseases, classify them as benign or malignant, and segment disease regions for identification or reading according to disease risk. Recently, in connection with cloud technology, its utility as a service product has been increasing. Among the diseases dealt with in this paper, liver disease carries very high risk because the absence of pain makes early diagnosis difficult. Artificial intelligence based on medical images was introduced as a non-invasive method for diagnosing the disease. We describe the development of a web service that supports the most clinically meaningful reading for liver cirrhosis patients, present the web service process, and show the operation screen of each step and the final result screen. The proposed service is expected to enable early-stage diagnosis of liver cirrhosis and to help patients recover through rapid treatment.
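
    As a rough illustration of the kind of inference web service the abstract describes, the sketch below exposes a single prediction endpoint over HTTP using only the standard library; the model is a stub and every name here is hypothetical, not the authors' actual service.

    ```python
    # Minimal HTTP inference endpoint: POST raw image bytes to /predict.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def classify_liver_image(image_bytes: bytes) -> dict:
        """Stub for the trained model; a real service would run a CNN here."""
        score = (len(image_bytes) % 100) / 100.0   # placeholder "prediction"
        return {"cirrhosis_probability": score,
                "label": "cirrhosis" if score > 0.5 else "normal"}

    class PredictHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/predict":
                self.send_error(404)
                return
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)          # raw medical image bytes
            result = json.dumps(classify_liver_image(body)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(result)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), PredictHandler).serve_forever()
    ```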

Static Identification of Firmware Linux Kernel Version by using Symbol Table (심볼 테이블을 이용한 펌웨어 리눅스 커널 버전 정적 식별 기법)

  • Kim, Kwang-jun; Cho, Yeo-jeong; Kim, Yun-jeong; Lee, Man-hee
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.1 / pp.67-75 / 2022
  • When acquiring a product with an OS, it is very important to identify the exact kernel version of that OS, because the product's administrator must keep checking whether new vulnerabilities have been found in that version. Also, if an acquisition requirement excludes or mandates a specific kernel version, kernel identification becomes critical to the acquisition decision. For the Linux kernel used in various equipment, it can be difficult to pinpoint a device's exact version, because manufacturers often modify the kernel to produce their own firmware optimized for the device. Furthermore, if a kernel patch is applied to the modified kernel, it will differ greatly from its base kernel. Therefore, it is hard to identify the Linux kernel accurately with simple methods such as checking for the existence of a specific file. In this paper, we propose a static method that classifies a specific kernel version by analyzing the function names stored in the symbol table. In an experiment with 100 Linux devices, we identified the Linux kernel version with 99% accuracy.
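
    A hedged sketch of the general approach (the paper's exact classifier is not specified here): match the set of function names from a firmware's symbol table against reference symbol sets from known kernel versions. The input format, one `address type name` line per symbol as produced by `nm` or found in /proc/kallsyms, is an assumption for illustration.

    ```python
    # Identify the most likely kernel version by symbol-set similarity.
    def load_function_symbols(path: str) -> set[str]:
        """Keep only text (function) symbols: type 't' or 'T'."""
        symbols = set()
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 3 and parts[1] in ("t", "T"):
                    symbols.add(parts[2])
        return symbols

    def jaccard(a: set[str], b: set[str]) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    def identify_kernel(firmware_syms: set[str],
                        references: dict[str, set[str]]) -> str:
        """Return the reference version whose symbol set is most similar."""
        return max(references, key=lambda v: jaccard(firmware_syms, references[v]))

    # Usage (hypothetical files):
    # refs = {v: load_function_symbols(f"refs/{v}.syms")
    #         for v in ("4.14.180", "4.19.90", "5.4.60")}
    # print(identify_kernel(load_function_symbols("firmware.syms"), refs))
    ```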

What factors drive AI project success? (무엇이 AI 프로젝트를 성공적으로 이끄는가?)

  • KyeSook Kim; Hyunchul Ahn
    • Journal of Intelligence and Information Systems / v.29 no.1 / pp.327-351 / 2023
  • This paper aims to derive the success factors that lead an artificial intelligence (AI) project to success and to prioritize them by importance. To this end, we first reviewed related prior studies to select candidate success factors and derived 17 final factors through expert interviews. We then developed a hierarchical model based on the TOE framework. Using this model, we surveyed experts from companies that use AI and from supplier companies that provide AI consulting, technologies, platforms, and applications, and analyzed the responses with the analytic hierarchy process (AHP). The analysis shows that organizational and technical factors are more important than environmental factors, with organizational factors slightly more critical. Among the organizational factors, strategic/clear business needs, AI implementation/utilization capabilities, and collaboration/communication between departments were the most important. Among the technical factors, a sufficient amount of high-quality data for AI learning was the most important, followed by IT infrastructure/compatibility. Among the environmental factors, customer readiness and support for the direct use of AI were essential. Across the 17 individual factors, data availability and quality (0.2245) were the most important, followed by strategic/clear business needs (0.1076) and customer readiness/support (0.0763). These results can guide companies considering or implementing AI adoption, service providers supporting AI adoption, and government policymakers seeking to foster the AI industry, and are expected to contribute to researchers studying AI success models.
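
    For readers unfamiliar with AHP, the sketch below shows how priority weights like the paper's 0.2245 are derived from an expert's pairwise comparison matrix, using the common geometric-mean approximation and a consistency check; the 3x3 matrix values are hypothetical, not the paper's survey data.

    ```python
    import numpy as np

    # Saaty's random consistency indices for matrices of size n
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

    def ahp_weights(A: np.ndarray):
        """Priority weights (geometric-mean method) and consistency ratio of A."""
        n = A.shape[0]
        w = np.prod(A, axis=1) ** (1.0 / n)     # row geometric means
        w /= w.sum()                            # normalized priorities
        lam_max = float(np.mean((A @ w) / w))   # principal-eigenvalue estimate
        cr = ((lam_max - n) / (n - 1)) / RI[n]  # consistency ratio (want < 0.1)
        return w, cr

    # Hypothetical top-level comparison: organizational vs. technical vs. environmental
    A = np.array([[1.0,  2.0, 4.0],
                  [0.5,  1.0, 3.0],
                  [0.25, 1/3, 1.0]])
    w, cr = ahp_weights(A)
    print(w, cr)   # weights sum to 1; organizational comes out largest here
    ```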

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo; Lee, Junyeong; Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have opened their internally developed AI technology to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed in various ways, few studies help industry develop or use deep learning open source software. This study therefore attempts to derive a strategy for adopting such a framework through case studies. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework that includes perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability as technological factors; management support and knowledge & expertise as organizational factors; and availability of technology skills and services and platform long-term viability as environmental factors. We analyzed three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors concerning the company, team, and resources, are significant for the adoption of a deep learning open source framework. From the case study results, we derived five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, the use of deep learning frameworks by research developers should be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures at the usage stage, companies will increase the number of deep learning research developers, their ability to use the framework, and the available GPU resources. In the proliferation stage, fourth, the company builds a deep learning framework platform that improves the developers' research efficiency and effectiveness, for example by optimizing the hardware (GPU) environment automatically. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external open source community with the in-house community and by activating developer retraining and seminars.
    To implement the five identified success factors, a step-by-step enterprise procedure for adopting a deep learning framework is proposed: defining the project problem, confirming that deep learning is the right methodology, confirming that a deep learning framework is the right tool, using the framework in the enterprise, and spreading the framework across the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework; once they are clear, the next two steps can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five success factors are realized for a successful adoption. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Design and Implementation of a Protection and Distribution System for Digital Broadcasting Contents (디지털 방송 콘텐츠 보호 유통 시스템 설계 및 구현)

  • Lee Hyejoo; Choi BumSeok; Hong Jinwoo; Seo Jongwon
    • The KIPS Transactions: Part C / v.11C no.6 s.95 / pp.731-738 / 2004
  • As the use of digital content increases, protecting digital content and intellectual property becomes more important. Digital rights management (DRM) technologies can protect any kind of digital content as well as intellectual property. Such techniques are also required for recorded digital broadcasting content, given the introduction of digital broadcasting and storage devices such as the personal video recorder. The conventional protection scheme for broadcast content is the conditional access system (CAS), which controls viewer access to specific channels or programs. Because CAS prohibits viewers from passing digital broadcast content to other people, it rules out superdistribution of that content. In this paper, for broadcasts targeting an unspecified audience, we design a service model for the protection and distribution of digital broadcasting content using encryption and licenses, employing the concepts of DRM. We also present implementation results that verify the functions of each component. The implemented system allows broadcast content to be recorded on a set-top box and superdistributed by consumers, providing content providers and consumers with a trustworthy environment for content protection and distribution.
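
    A much-simplified sketch of the idea, not the authors' protocol: the content is encrypted once under a content key, so the encrypted package itself can be superdistributed freely, while the license, which wraps the content key for a particular consumer's device key, is issued separately. The `cryptography` package's Fernet is used here for both layers purely for illustration.

    ```python
    from cryptography.fernet import Fernet

    def package_content(content: bytes) -> tuple[bytes, bytes]:
        """Encrypt content with a fresh content key; return (package, content_key)."""
        content_key = Fernet.generate_key()
        return Fernet(content_key).encrypt(content), content_key

    def issue_license(content_key: bytes, device_key: bytes, rules: bytes) -> bytes:
        """Wrap the content key (plus usage rules) under the consumer's device key."""
        return Fernet(device_key).encrypt(rules + b"|" + content_key)

    def play(package: bytes, license_blob: bytes, device_key: bytes) -> bytes:
        rules, _, content_key = Fernet(device_key).decrypt(license_blob).partition(b"|")
        # a real client would enforce `rules` (e.g., expiry, play count) here
        return Fernet(content_key).decrypt(package)

    device_key = Fernet.generate_key()             # provisioned per set-top box
    pkg, ck = package_content(b"broadcast program")
    lic = issue_license(ck, device_key, b"plays=unlimited")
    assert play(pkg, lic, device_key) == b"broadcast program"
    ```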

An Efficient Snapshot Technique for Shared Storage Systems supporting Large Capacity (대용량 공유 스토리지 시스템을 위한 효율적인 스냅샷 기법)

  • 김영호; 강동재; 박유현; 김창수; 김명준
    • Journal of KIISE: Databases / v.31 no.2 / pp.108-121 / 2004
  • In this paper, we propose an enhanced snapshot technique that solves the performance degradation that occurs when a snapshot is initiated in a storage cluster system. Traditional snapshot techniques have limits when applied to large-capacity storage shared by multiple hosts. As the volume size grows, (1) write performance deteriorates sharply because of the additional disk accesses needed to verify whether copy-on-write (COW) has been performed; (2) the blocking time of write operations performed during snapshot creation grows excessively; and (3) write performance deteriorates further because of the additional disk I/O on mapping blocks caused by COW verification. We propose an efficient snapshot technique for large storage shared by multiple hosts in SAN environments. We eliminate the blocking of write operations caused by freezing the volume while a snapshot is being created. To improve write performance after a snapshot is taken, we introduce a First Allocation Bit (FAB) and a Snapshot Status Bit (SSB), which reduce the additional disk accesses to the volume disk needed to fetch the snapshot mapping block. At snapshot deletion time, performance is further improved by deallocating COW data blocks using the SSB of the original mapping entry, without reading the snapshot mapping block from the shared disk.
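
    A toy in-memory sketch of the FAB/SSB idea as the abstract describes it (the paper's on-disk layout is not reproduced): both bits let the write path decide in memory whether a COW copy is needed, instead of reading mapping blocks from the shared disk on every write.

    ```python
    class SnapshotVolume:
        """Toy volume: create_snapshot() starts COW tracking for one snapshot."""
        def __init__(self, nblocks: int):
            self.blocks = [b""] * nblocks
            self.snap_active = False
            self.snapshot = {}                 # block index -> preserved old data
            self.fab = [False] * nblocks       # First Allocation Bit
            self.ssb = [False] * nblocks       # Snapshot Status Bit

        def create_snapshot(self):
            # No volume freeze: writers are never blocked, they just start COWing.
            self.snap_active = True
            self.snapshot = {}
            self.fab = [False] * len(self.blocks)
            self.ssb = [False] * len(self.blocks)

        def allocate(self, i: int, data: bytes):
            self.fab[i] = True                 # born after snapshot: never COWed
            self.blocks[i] = data

        def write(self, i: int, data: bytes):
            # In-memory bit tests replace the extra on-disk mapping-block read
            # that a naive implementation performs on every write.
            if self.snap_active and not self.fab[i] and not self.ssb[i]:
                self.snapshot[i] = self.blocks[i]   # one-time COW copy
                self.ssb[i] = True
            self.blocks[i] = data

        def read_snapshot(self, i: int) -> bytes:
            if self.fab[i]:
                return b""                     # block did not exist at snapshot time
            return self.snapshot.get(i, self.blocks[i])
    ```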

A Study on the Identification and Classification of Relation Between Biotechnology Terms Using Semantic Parse Tree Kernel (시맨틱 구문 트리 커널을 이용한 생명공학 분야 전문용어간 관계 식별 및 분류 연구)

  • Choi, Sung-Pil; Jeong, Chang-Hoo; Chun, Hong-Woo; Cho, Hyun-Yang
    • Journal of the Korean Society for Library and Information Science / v.45 no.2 / pp.251-275 / 2011
  • In this paper, we propose a novel kernel, a semantic parse tree kernel, that extends the parse tree kernel previously studied for extracting protein-protein interactions (PPIs), where it has shown prominent results. A drawback of the existing parse tree kernel is that it can degrade overall PPI extraction performance: because its simple comparison mechanism handles only the surface forms of the constituent words, the kernel function may produce lower kernel values for two sentences than their actual similarity warrants. The new kernel computes the lexical semantic similarity as well as the syntactic analogy between the parse trees of two target sentences. To calculate lexical semantic similarity, it incorporates context-based word sense disambiguation, which outputs WordNet synsets that can in turn be transformed into more general concepts. In our experiments, we introduced two new parameters in addition to the conventional SVM regularization factor: the tree kernel decay factor and the degree of abstraction of lexical concepts, both of which can accelerate the optimization of PPI extraction performance. Through these multi-strategic experiments, we confirmed the pivotal role of the newly applied parameters. The experimental results also showed that the semantic parse tree kernel is superior to conventional kernels, especially in PPI classification tasks.
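
    For context, the sketch below implements the classic (non-semantic) parse tree kernel of Collins and Duffy with the decay factor the paper tunes; the semantic variant would replace the exact production match below with a WordNet-synset similarity. The `(label, children...)` tuple tree format is an assumption for illustration.

    ```python
    def production(t):
        """A node's CFG production: its label plus its children's labels."""
        return (t[0],) + tuple(c[0] if isinstance(c, tuple) else c for c in t[1:])

    def delta(a, b, lam):
        """Weighted count of common subtree fragments rooted at nodes a and b."""
        if production(a) != production(b):
            return 0.0
        if all(isinstance(c, str) for c in a[1:]):     # preterminal (tag -> word)
            return lam
        out = lam
        for ca, cb in zip(a[1:], b[1:]):
            out *= 1.0 + delta(ca, cb, lam)
        return out

    def tree_kernel(t1, t2, lam=0.5):
        """K(T1, T2) = sum of delta over all node pairs (Collins & Duffy)."""
        def nodes(t):
            yield t
            for c in t[1:]:
                if isinstance(c, tuple):
                    yield from nodes(c)
        return sum(delta(a, b, lam) for a in nodes(t1) for b in nodes(t2))

    t = ("S", ("NP", ("N", "protein")),
              ("VP", ("V", "binds"), ("NP", ("N", "receptor"))))
    print(tree_kernel(t, t))   # self-similarity grows with the decay factor lam
    ```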

Drone-based smart quarantine performance research (드론 기반 스마트 방재 방안 연구)

  • Yoo, Soonduck
    • The Journal of the Convergence on Culture Technology / v.6 no.2 / pp.437-447 / 2020
  • The purpose of this study is to investigate countermeasures and expected effects of using drones in the field of disaster prevention, as a drone-based smart quarantine method. The environmental, market, and technological perspectives on the current quarantine task and its countermeasures are as follows. First, in environmental terms, drone-based control broadens the reach of quarantine work across forests, livestock, and facility areas and against avian influenza (AI), mosquito larvae, pests, and cholera, while simplifying the prevention system and making it more effective. Second, in market terms, introducing new technology into quarantine work brings standardization of the domestic drones used for disinfection and quarantine missions and of the related livestock quarantine laws and regulations, shared growth of related industries, discovery of new markets, and annual budget savings in animal disease prevention. Third, the technical aspects are: (1) on-site application of disinfection and prevention using multiple drones, a new form of animal disease prevention; (2) innovation in the drone industry's software field; (3) diversification of the industry through an integrated drone command/control system applicable to various markets; (4) high flight safety ensured by precise drone traffic information from big-data analysis of drone flight paths and 3D spatial information; (5) low-cost, high-efficiency system deployment enabled by multiple drones flying autonomously at the same time; and (6) high-precision flight technology, needed as drone users increase in each sector. This study was prepared based on literature surveys and expert opinions; future research needs to prove its effectiveness with empirical data on drone-based services. The expected effect of this study is to contribute to the active use of drones in disaster prevention work and to the establishment of related policies.

Compromised feature normalization method for deep neural network based speech recognition (심층신경망 기반의 음성인식을 위한 절충된 특징 정규화 방식)

  • Kim, Min Sik; Kim, Hyung Soon
    • Phonetics and Speech Sciences / v.12 no.3 / pp.65-71 / 2020
  • Feature normalization is a method to reduce the effect of environmental mismatch between the training and test conditions through the normalization of statistical characteristics of acoustic feature parameters. It demonstrates excellent performance improvement in the traditional Gaussian mixture model-hidden Markov model (GMM-HMM)-based speech recognition system. However, in a deep neural network (DNN)-based speech recognition system, minimizing the effects of environmental mismatch does not necessarily lead to the best performance improvement. In this paper, we attribute the cause of this phenomenon to information loss due to excessive feature normalization. We investigate whether there is a feature normalization method that maximizes the speech recognition performance by properly reducing the impact of environmental mismatch, while preserving useful information for training acoustic models. To this end, we introduce the mean and exponentiated variance normalization (MEVN), which is a compromise between the mean normalization (MN) and the mean and variance normalization (MVN), and compare the performance of DNN-based speech recognition system in noisy and reverberant environments according to the degree of variance normalization. Experimental results reveal that a slight performance improvement is obtained with the MEVN over the MN and the MVN, depending on the degree of variance normalization.
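
    A short sketch of the compromise the abstract describes: the mean is always removed, but the standard deviation is raised to a power gamma so that gamma = 0 reduces to MN and gamma = 1 to full MVN. This particular parameterization of the "degree of variance normalization" is an assumption for illustration, not necessarily the paper's exact formula.

    ```python
    import numpy as np

    def mevn(features: np.ndarray, gamma: float = 0.5) -> np.ndarray:
        """Mean and exponentiated variance normalization over time (axis 0).

        `features` is a (frames, dims) matrix of acoustic features, e.g. MFCCs.
        gamma = 0 gives mean normalization (MN); gamma = 1 gives MVN.
        """
        mu = features.mean(axis=0)
        sigma = features.std(axis=0) + 1e-10     # avoid division by zero
        return (features - mu) / (sigma ** gamma)

    # Sanity check: gamma = 0 removes only the mean.
    x = np.random.randn(300, 13) * 5.0 + 2.0
    assert np.allclose(mevn(x, 0.0), x - x.mean(axis=0))
    ```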