• Title/Summary/Keyword: File Storage

Search Results: 453

Developing Standard Transmission System for Radiology Reporting Including Key Images (Key Image를 포함한 방사선과 판독결과지 표준전송시스템 개발)

  • Kim, Seon-Chil
    • Journal of radiological science and technology / v.30 no.1 / pp.47-51 / 2007
  • The development of hospital information systems and Picture Archiving and Communication Systems (PACS) is not new in the medical field, and the spread of the internet and information technology is likewise universal. In the course of such development, however, it is hard to share medical information without a refined standard format. Especially in the department of radiology, PACS has become very important for interchanging information with other, disparate hospital information systems, so a system is needed that archives radiological reports into a database efficiently and supports the sharing of medical images. This study suggests a model in which an internal system is developed where radiologists store the necessary images, transmit them in the international standard clinical format, Clinical Document Architecture (CDA), and share the information with other hospitals. A CDA document generator was built to produce the new file format and to separate the existing storage system from the new one, ensuring access to the required data in the XML documents. The model presented in this study adds a process by which the images that are crucial for reading are inserted through the CDA radiological report generator. The study therefore suggests a storage and transmission model for CDA documents that differs from the existing DICOM SR. Radiological reports could be shared even more effectively once the image-insertion function and the analysis of standard clinical terms are completed.

  • PDF
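
The paper's generator itself is not published in this listing; the sketch below only illustrates the general idea of embedding a radiologist's key image as base64 content inside a CDA-like XML report. The element names are simplified assumptions, not the full HL7 CDA R2 schema.

```python
# Minimal sketch (not the authors' implementation): embed a key image into a
# CDA-like XML report as base64 text. Element names are simplified assumptions.
import base64
import xml.etree.ElementTree as ET

def build_report_xml(report_text: str, key_image_path: str) -> bytes:
    doc = ET.Element("ClinicalDocument")
    section = ET.SubElement(doc, "section")
    ET.SubElement(section, "title").text = "Radiology Report"
    ET.SubElement(section, "text").text = report_text

    # Insert the key image selected during reading as base64-encoded content.
    with open(key_image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    media = ET.SubElement(section, "observationMedia")
    value = ET.SubElement(media, "value", mediaType="image/jpeg", representation="B64")
    value.text = encoded

    # Returns the serialized XML document, ready to be stored or transmitted.
    return ET.tostring(doc, encoding="utf-8")
```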

Still Image Identifier based over Low-frequency Area (저역주파수 영역 기반 정지영상 식별자)

  • Park, Je-Ho
    • Journal of Digital Contents Society / v.11 no.3 / pp.393-398 / 2010
  • Compact composite devices with digital still image acquisition capability, such as cellular phones and MP3 players, are widely available to ordinary users, and digital still images are also becoming common in security and digital recording devices. The number of still images maintained or shared in personal storage, or in the massive storage provided by various web services, is increasing rapidly. These still images are bound to file names or identifiers that users assign arbitrarily or that are generated by device-specific naming methods. Such identifiers, however, are vulnerable to unexpected change or deletion, which becomes a problem for still image search and management. In this paper, we propose a method for generating a still image identifier from the image's internal information.
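
The title indicates that the identifier is derived from the image's low-frequency area. The exact features used in the paper are not reproduced here; the following is a generic sketch, under that assumption, of hashing the low-frequency DCT coefficients of a downscaled grayscale image into a compact identifier. It assumes Pillow, NumPy, and SciPy are available.

```python
# Generic sketch of a content-derived identifier from low-frequency image
# information; not the paper's exact feature set.
import numpy as np
from PIL import Image
from scipy.fft import dctn

def low_freq_identifier(path: str, size: int = 32, keep: int = 8) -> int:
    # Grayscale and downscale so the DCT concentrates coarse structure.
    img = Image.open(path).convert("L").resize((size, size))
    coeffs = dctn(np.asarray(img, dtype=np.float64), norm="ortho")
    low = coeffs[:keep, :keep].flatten()[1:]      # low-frequency block, DC dropped
    bits = low > np.median(low)                   # threshold against the median
    return int("".join("1" if b else "0" for b in bits), 2)
```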

Research on Minimizing Access to RDF Triple Store for Efficiency in Constructing Massive Bibliographic Linked Data (극대용량 서지 링크드 데이터 구축의 효율성을 위한 RDF 트리플 저장소 접근 최소화에 관한 연구)

  • Lee, Moon-Ho;Choi, Sung-Pil
    • Journal of Korean Library and Information Science Society / v.48 no.3 / pp.233-257 / 2017
  • In this paper, we propose an effective method for converting MEDLINE, the world's largest biomedical bibliographic database, into linked data. To do this, we first derive an appropriate RDF schema by analyzing the MEDLINE record structure in detail, and convert each record into a valid RDF file under the derived schema. We apply a dual batch registration method to streamline the subject-URI duplication check that arises when the record-level RDF files are merged and stored in a single RDF triple store. With this method, the number of triple-store accesses for subject-URI duplication checking drops from 26,597,850 to 2,400, compared with building the linked data sequentially one RDF file at a time. We therefore expect this result to provide an important opportunity to eliminate the inefficiency of converting very large bibliographic record sets into linked data and to secure promptness and timeliness.
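
The dual batch registration method itself is not detailed in the abstract; the sketch below only shows the underlying batching idea, namely collecting subject URIs from many converted RDF files and checking them against the triple store once per batch instead of once per file. `ask_store_for_existing` is a hypothetical stand-in for whatever bulk lookup the target triple store offers, and rdflib is assumed for parsing.

```python
# Sketch of batched subject-URI duplicate checking; the paper's actual
# dual batch registration is more involved than this.
from rdflib import Graph

def collect_new_subjects(rdf_files, ask_store_for_existing, batch_size=10000):
    new_subjects, pending = set(), set()
    for path in rdf_files:
        g = Graph()
        g.parse(path)                                  # one record-level RDF file
        pending |= {str(s) for s in g.subjects()}
        if len(pending) >= batch_size:
            # One store access per batch instead of one per file.
            new_subjects |= pending - ask_store_for_existing(pending)
            pending.clear()
    if pending:                                        # final partial batch
        new_subjects |= pending - ask_store_for_existing(pending)
    return new_subjects
```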

Real-time Image Scanning System for Detecting Tunnel Cracks Using Linescan Cameras

  • Jeong, Dong-Hyun;Kim, Young-Rin;Cho, I-Sac;Kim, Eun-Ju;Lee, Kang-Moon;Jin, Kwang-Won;Song, Chang-Geun
    • Journal of Korea Multimedia Society / v.10 no.6 / pp.726-736 / 2007
  • In this paper, a real-time image scanning system using linescan cameras is designed. The system is specially designed to diagnose and analyse the condition of tunnels, such as crack widths, through the captured images. The system consists of two major parts, the image acquisition system and the image merging system. To save the scanned image data to storage media in real time, the image acquisition system is built from two kinds of modules: control modules, which are in charge of the hardware devices, and management modules, which handle system resources so that the scanned images are safely saved to the magnetic storage devices. The system can be mounted on various kinds of vehicles. After the images are taken, the image merging system generates extended images by combining the saved images. Several tests were conducted in the laboratory as well as in the field. In the laboratory simulation, both systems were tested several times and upgraded. In the field test, the image acquisition system was mounted on a specially designed vehicle and images of the interior surface of a tunnel were captured. The system was successfully tested in a real tunnel with the vehicle travelling at 20 km/h. The captured images of the tunnel condition, including cracks, are vivid enough for an expert to diagnose the state of the tunnel from the images instead of inspecting it with his or her own eyes.

  • PDF
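
The merging step combines consecutive linescan strips into one extended surface image. The sketch below shows only that basic stacking step, under the assumption that the strips are already aligned; the real system also has to handle vehicle speed, overlap, and lighting, which is omitted here.

```python
# Simplified sketch of the merging step: consecutive linescan strips are
# stacked into one extended image of the tunnel surface.
import numpy as np
from PIL import Image

def merge_strips(strip_paths):
    strips = [np.asarray(Image.open(p).convert("L")) for p in strip_paths]
    width = min(s.shape[1] for s in strips)             # crop to a common width
    return np.vstack([s[:, :width] for s in strips])    # extended surface image
```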

A Bitmap Index for Chunk-Based MOLAP Cubes (청크 기반 MOLAP 큐브를 위한 비트맵 인덱스)

  • Lim, Yoon-Sun;Kim, Myung
    • Journal of KIISE: Databases / v.30 no.3 / pp.225-236 / 2003
  • MOLAP systems store data in a multidimensional array called a 'cube' and access it using array indexes. When a cube is placed on disk, it can be partitioned into a set of chunks with the same side length. Such a cube storage scheme is called the chunk-based MOLAP cube storage scheme. It gives a data clustering effect, so all the dimensions are guaranteed a fair chance in terms of query processing speed. In order to achieve high space utilization, sparse chunks are further compressed. Because of this compression, the relative position of a chunk cannot be obtained in constant time without an index. In this paper, we propose a bitmap index for chunk-based MOLAP cubes. The index can be constructed along with the generation of the corresponding cube. The relative position of each chunk is retained in the index so that chunk retrieval can be done in constant time. We place as many chunks as possible in an index block so that the number of index searches is minimized for OLAP operations such as range queries. We show that the proposed index is efficient by comparing it with multidimensional indexes such as the UB-tree and the grid file in terms of time and space.
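
The paper's index groups chunks into index blocks, which is not reproduced here; the core idea of recovering a compressed chunk's position from a bitmap can be sketched as follows. One bit per logical chunk marks whether the chunk is actually stored, and the position of a stored chunk among all stored chunks is the number of set bits that precede it.

```python
# Illustrative sketch, not the paper's exact layout: prefix counts over the
# bitmap make the logical-to-physical chunk lookup constant-time.
class ChunkBitmapIndex:
    def __init__(self, stored_flags):
        self.bits = list(stored_flags)            # one bool per logical chunk number
        self.prefix = [0]
        for b in self.bits:
            self.prefix.append(self.prefix[-1] + int(b))

    def physical_position(self, chunk_no):
        """Return the position among stored chunks, or None if compressed away."""
        if not self.bits[chunk_no]:
            return None
        return self.prefix[chunk_no]              # number of stored chunks before it
```

With such a structure, a range query can map logical chunk numbers to physical offsets without scanning the compressed cube.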

An Efficient Log Buffer Management Through Join between Log Blocks (로그 블록 간 병합을 이용한 효율적인 로그 버퍼 관리)

  • Kim, Hak-Cheol;Park, Youg-Hun;Yun, Jong-Hyeon;Seo, Dong-Min;Song, Seok-Il;Yoo, Jae-Soo
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.51-56 / 2009
  • Flash memory has rapidly been deployed as data storage. However, it has the major disadvantage that recorded data cannot be overwritten in place. To work around this "erase-before-write" problem, flash memory file systems use a log block buffer scheme. Among the existing log buffer management schemes, however, BAST suffers from frequent merge operations under random write patterns, while FAST does not take frequently updated data into account when merging; previous methods consider neither the cost of a merge operation nor how often the data are updated. In this paper, we propose a new log buffer management scheme, called JBB. The proposed method evaluates the worth of merging each log block: data that are updated infrequently are merged with their data blocks, while the merge of frequently updated data is postponed. In this way, unnecessary merge operations are avoided, the number of erase operations is reduced, and the utilization of the flash storage is improved. We show the superiority of the proposed method through a performance evaluation against BAST and FAST.

  • PDF
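
JBB's actual cost model is not given in the abstract; the sketch below only illustrates the victim-selection idea under the stated assumption that an update count is tracked per log block. Cold log blocks are merged with their data blocks now, while hot blocks are postponed so that their pages can be invalidated again before an expensive merge and erase.

```python
# Rough sketch of merge-victim selection by update frequency; not JBB's
# actual evaluation of merge worth.
from dataclasses import dataclass

@dataclass
class LogBlock:
    block_no: int
    update_count: int      # how often pages in this log block have been rewritten

def pick_merge_victim(log_blocks, hot_threshold=4):
    # Prefer cold blocks: merging them now is unlikely to be wasted work,
    # while hot blocks are postponed until they cool down.
    cold = [b for b in log_blocks if b.update_count < hot_threshold]
    return min(cold or log_blocks, key=lambda b: b.update_count)
```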

Web Proxy Cache Replacement Algorithms using Object Type Partition (개체 타입별 분할공간을 이용한 웹 프락시 캐시의 대체 알고리즘)

  • Lee, Soo-haeng;Choi, Sang-bang
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.5C / pp.399-410 / 2002
  • A web cache, which is in effect another name for a proxy server, is located between client and server. A web cache has a limited storage area, although it has broad bandwidth to the clients, which are usually connected through a LAN. Because of the limited storage capacity, existing objects in the web cache may be deleted to make room for new objects according to rules called a replacement algorithm. Hit rate and byte-hit rate are the usual metrics for evaluating replacement algorithms. Most replacement algorithms satisfy only one of these metrics, and sometimes neither. In this paper, we propose two replacement algorithms that achieve both a high hit rate and a high byte-hit rate. In the first algorithm, as a basic model, the cache is partitioned by object type. In the second algorithm, the cache is composed of two levels: the upper level is managed by the basic algorithm, while the lower level is used collectively by all types of files as a shared area. To show the performance of the proposed algorithms, we evaluate their hit rate and byte-hit rate using trace-driven simulation.
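
A minimal sketch of the basic (first) model follows, under the assumption that each object type gets a fixed share of the cache budget and each partition runs plain LRU eviction; the two-level variant with the shared lower area is not shown, and the byte budgets are purely illustrative.

```python
# Sketch of a type-partitioned web cache with per-partition LRU eviction.
from collections import OrderedDict

class TypePartitionedCache:
    def __init__(self, budgets):                 # e.g. {"image": 50_000_000, "html": 10_000_000}
        self.budgets = budgets
        self.parts = {t: OrderedDict() for t in budgets}
        self.used = {t: 0 for t in budgets}

    def put(self, obj_type, key, size):
        part = self.parts[obj_type]
        if key in part:
            part.move_to_end(key)                # refresh recency on a hit
            return
        while self.used[obj_type] + size > self.budgets[obj_type] and part:
            _, old_size = part.popitem(last=False)   # evict least recently used
            self.used[obj_type] -= old_size
        part[key] = size
        self.used[obj_type] += size
```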

Data De-duplication and Recycling Technique in SSD-based Storage System for Increasing De-duplication Rate and I/O Performance (SSD 기반 스토리지 시스템에서 중복률과 입출력 성능 향상을 위한 데이터 중복제거 및 재활용 기법)

  • Kim, Ju-Kyeong;Lee, Seung-Kyu;Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.12 / pp.149-155 / 2012
  • An SSD is a storage device with a high-performance controller and a cache buffer, composed of many NAND flash memories. Because NAND flash memory does not support in-place update, valid pages are invalidated when update and erase operations are issued by the file system, and the invalid pages are later removed entirely by garbage collection. Garbage collection, however, performs many long-latency erase operations, which reduces I/O performance and increases wear in the SSD. In this paper, we propose a new method that de-duplicates valid data and then recycles invalid data, improving the de-duplication ratio. By reducing the number of writes and the amount of garbage collection, the method can increase I/O performance and decrease wear in the SSD. Experimental results show that it reduces the number of garbage collections by up to 20% and I/O latency by 9% compared with the ordinary case.
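
The paper's contribution includes recycling invalid pages, which is not shown here; the sketch below only illustrates the standard page-level de-duplication step it builds on, under the assumption that a fingerprint table maps page content to an already-written physical page so that a duplicate write can be replaced by a reference.

```python
# Simplified sketch of page-level de-duplication on write; duplicate pages
# reuse an existing physical page number instead of triggering a flash write.
import hashlib

class DedupMapper:
    def __init__(self):
        self.fingerprint_to_ppn = {}
        self.next_ppn = 0

    def write_page(self, data: bytes) -> int:
        fp = hashlib.sha1(data).digest()
        if fp in self.fingerprint_to_ppn:
            return self.fingerprint_to_ppn[fp]   # duplicate: reuse the stored page
        ppn = self.next_ppn                      # stand-in for a real flash write
        self.next_ppn += 1
        self.fingerprint_to_ppn[fp] = ppn
        return ppn
```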

Development of a SDTS Data Conversion System for GOTHIC (GOTHIC을 위한 SDTS 데이타 변환 시스템의 개발)

  • Zhang, Yan-Sheng;Kim, Jun-Jong;Han, Ki-Joon;Yun, Jae-Kwan
    • Journal of Korea Spatial Information System Society / v.2 no.2 s.4 / pp.99-115 / 2000
  • A geographic information system (GIS) generally holds a great deal of geographic data, and each system has its own storage structure. It is very hard to exchange geographic data between geographic information systems that store their data in incompatible formats; moreover, geographic data require a large amount of storage space and are expensive to enter. In this paper, we designed and implemented an SDTS (Spatial Data Transfer Standard) data conversion system for Gothic, an existing geographic information system. In order to convert geographic data without loss of information, we first carefully define a mapping between SDTS data and Gothic data. Since SDTS data are in the ISO 8211 format, the FIPS123 library is used to access them, and because the internal data format of Gothic is not open to the public, the Gothic library is used to access Gothic data. The SDTS data conversion system developed in this paper uses an intermediate file to convert geographic data efficiently. In addition, we use UIL (User Interface Language) to implement the graphical user interface (GUI) of the system.

  • PDF
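
The pipeline described above reads SDTS data with the FIPS123 library, writes an intermediate file, and loads that file into Gothic with the Gothic library. Neither of those libraries is used in the sketch below; both ends are replaced by hypothetical callables, and JSON Lines stands in as the intermediate format purely to illustrate the two-stage design.

```python
# Generic sketch of a two-stage conversion via an intermediate file; the
# reader and writer callables are hypothetical stand-ins, not real APIs.
import json

def export_to_intermediate(read_sdts_records, intermediate_path):
    with open(intermediate_path, "w", encoding="utf-8") as out:
        for record in read_sdts_records():             # hypothetical SDTS reader
            out.write(json.dumps(record) + "\n")

def load_from_intermediate(intermediate_path, store_in_gothic):
    with open(intermediate_path, encoding="utf-8") as src:
        for line in src:
            store_in_gothic(json.loads(line))          # hypothetical Gothic writer
```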

Development of Flash Memory Management Algorithm (플래쉬 메모리 관리 알고리즘 개발)

  • Park, In-Gyu
    • Journal of the Institute of Electronics Engineers of Korea CI / v.38 no.1 / pp.26-45 / 2001
  • The Flash memory market is an exciting one that has grown quickly over the last 10 years. Flash memory now provides high-density, truly non-volatile, high-performance read/write memory solutions and is also characterized by low power consumption, extreme ruggedness, and high reliability. Flash memory is an optimal solution for large non-volatile storage applications such as solid-state file storage, digital video recorders, digital still cameras, MP3 players, and other portable multimedia and communication applications requiring non-volatility. Regardless of the type of Flash memory, Flash media management software is always required to manage the large Flash memory block partitions, because Flash memory cannot be erased at the byte level, as ordinary memory can, but must be erased at block granularity. Designing a Flash memory manager requires a keen understanding of Flash technology and data management methods. Although Flash memory's write performance is relatively slow, the suggested algorithm offers a higher maximum write performance. The algorithms developed so far are not suitable for applications requiring fast and frequent accesses, whereas the proposed algorithm is focused on sound operation even under fast and frequent accesses.

  • PDF
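
The constraint the abstract centers on, namely that data cannot be overwritten in place because erasure only happens at block granularity, is commonly handled by out-of-place writes with a logical-to-physical mapping. The sketch below illustrates that generic mechanism only; it is not the paper's specific management algorithm.

```python
# Generic sketch of out-of-place writes over block-erasable flash: an update
# goes to a fresh page and the old copy is marked stale for later reclamation.
PAGES_PER_BLOCK = 64

class SimpleFTL:
    def __init__(self, num_blocks):
        self.mapping = {}                      # logical page -> (block, page)
        self.free = [(b, p) for b in range(num_blocks) for p in range(PAGES_PER_BLOCK)]
        self.stale = set()                     # physical pages awaiting block erase

    def write(self, logical_page, data):
        if logical_page in self.mapping:
            self.stale.add(self.mapping[logical_page])   # old copy becomes invalid
        self.mapping[logical_page] = self.free.pop(0)    # out-of-place write
        # `data` would be programmed to the newly mapped physical page here.
```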