• Title/Summary/Keyword: file access


Implementation of Web Based Teleradiology Internet PACS (웹 기반 원격 방사선 인터넷 PACS 구현)

  • 지연상;이성주
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.4 no.5
    • /
    • pp.1105-1110
    • /
    • 2000
  • In the past, high cost and complex system configuration often discouraged hospitals from building a teleradiology system or PACS (Picture Archiving and Communication System). New standard platforms, however, make it possible to construct the same system at very low cost with a simple configuration. The Internet as a communication channel overcomes regional limits and communication costs, and WWW technologies simplify software development, configuration, and installation, so anyone with a Web browser and Internet access can review medical images anywhere. We also adopted DICOM, the standard for medical imaging, which resolves the interface problems among medical imaging systems such as modalities and archives. The implementation comprises three parts: a DICOM/WWW interface subsystem, an image-format conversion subsystem, and viewing applets displayed in the user's WWW browser. In addition, the teleradiology Internet PACS includes a DICOM converter that converts non-DICOM file formats into the standard file format.


Secure Authentication Protocol in Hadoop Distributed File System based on Hash Chain (해쉬 체인 기반의 안전한 하둡 분산 파일 시스템 인증 프로토콜)

  • Jeong, So Won;Kim, Kee Sung;Jeong, Ik Rae
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.23 no.5
    • /
    • pp.831-847
    • /
    • 2013
  • Various types of data are being created in large quantities as a result of the spread of social media and the popularization of mobile devices. Many companies want to obtain valuable business information by analyzing these large data sets, so integrating big data technologies into company workflows has become a trend. Hadoop in particular is regarded as the most representative big data technology because of its terabytes of storage capacity, inexpensive construction cost, and fast data-processing speed. However, the authentication token system of the Hadoop Distributed File System (HDFS) is currently vulnerable to replay attacks and datanode hacking attacks, which can expose company secrets or customers' personal information stored on HDFS. In this paper, we analyze the security threats to HDFS when tokens or datanodes are exposed to attackers, and we propose a secure authentication protocol for HDFS based on a hash chain.
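The abstract names the hash-chain primitive but not the protocol details. As a minimal sketch of that primitive only, with invented names (`make_chain`, `Verifier`) and SHA-256 assumed as the hash, the point is that the server stores only the chain anchor and each authentication reveals the preimage of the last accepted value, so a replayed token no longer verifies:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, n: int) -> list:
    """Build a hash chain: chain[0] = seed, chain[i] = H(chain[i-1])."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

class Verifier:
    """Server-side state: only the current chain head is stored. A valid
    proof is the preimage of the head; accepting it moves the head down
    the chain, so the same token cannot be replayed."""
    def __init__(self, anchor: bytes):
        self.current = anchor

    def verify(self, token: bytes) -> bool:
        if h(token) == self.current:
            self.current = token  # advance down the chain
            return True
        return False

# client holds the chain, server holds only chain[-1]
chain = make_chain(b"secret-seed", 3)
v = Verifier(chain[-1])
assert v.verify(chain[2])       # first authentication succeeds
assert not v.verify(chain[2])   # replaying the same token fails
assert v.verify(chain[1])       # the next preimage succeeds
```

This is only the replay-resistance idea; the paper's actual protocol for HDFS tokens and datanodes is not reproduced here.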

I/O Translation Layer Technology for High-performance and Compatibility Using New Memory (뉴메모리를 이용한 고성능 및 호환성을 위한 I/O 변환 계층 기술)

  • Song, Hyunsub;Moon, Young Je;Noh, Sam H.
    • Journal of KIISE
    • /
    • v.42 no.4
    • /
    • pp.427-433
    • /
    • 2015
  • The rapid advancement of computing technology has triggered the need for fast data I/O processing and high-performance storage. Next-generation memory technology, which we refer to as new memory, is anticipated to be used for high-performance storage because it has excellent characteristics as a storage device: non-volatility and latency close to that of DRAM. This research proposes NTL (New memory Translation Layer) as a technology for using new memory as storage. With NTL, conventional I/O is served by existing, mature disk-based file systems, providing compatibility, while new memory I/O is serviced through the NTL to take advantage of new memory's byte-addressability. In this paper, we describe the design of NTL and provide measurement results showing that our design brings performance benefits.
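The abstract describes a dual-path design: a block interface for compatibility with disk-based file systems, plus a byte-granularity path that exploits byte-addressability. A toy sketch of that split, with new memory modeled as a `bytearray` and all names (`NTL`, `read_block`, `write_bytes`) invented here rather than taken from the paper:

```python
BLOCK_SIZE = 4096

class NTL:
    """Toy translation layer over byte-addressable 'new memory'.
    The block path serves a conventional file system unchanged; the
    byte path updates arbitrary ranges without block read-modify-write."""
    def __init__(self, size: int):
        self.mem = bytearray(size)

    # compatibility path: what a disk-based file system expects
    def read_block(self, lba: int) -> bytes:
        off = lba * BLOCK_SIZE
        return bytes(self.mem[off:off + BLOCK_SIZE])

    def write_block(self, lba: int, data: bytes) -> None:
        assert len(data) == BLOCK_SIZE
        off = lba * BLOCK_SIZE
        self.mem[off:off + BLOCK_SIZE] = data

    # byte-addressable path: no 4 KiB round trip for a small update
    def write_bytes(self, addr: int, data: bytes) -> None:
        self.mem[addr:addr + len(data)] = data

    def read_bytes(self, addr: int, n: int) -> bytes:
        return bytes(self.mem[addr:addr + n])

ntl = NTL(4 * BLOCK_SIZE)
ntl.write_bytes(10, b"hi")  # 2-byte update through the byte path
assert ntl.read_block(0)[10:12] == b"hi"  # visible through the block path
```

The performance argument in the paper rests on the byte path avoiding block-granularity read-modify-write; the sketch only shows the interface split, not the measured gains.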

A Case Study of Mainframe Load Reduction Using The Client and Server Model (클라이언트/서버 모델에 의한 메인프레임 부하 분산 사례연구)

  • 고광병;공승욱;권기목;강창언
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.8
    • /
    • pp.1628-1639
    • /
    • 1994
  • To increase the utilization of computing resources, universities connect a variety of resources such as mainframes, workstations, and personal computers via LAN. However, for management and security reasons, most administrative applications are concentrated on mainframes, which causes a large overload for applications such as an on-line course registration system, where the entire student body must access the system within a short period of time. In this study, using a university system as the model and choosing the on-line course registration system as the target for distributed computing, APPC over an IBM SNA LU 6.2 link is proposed as the most appropriate means of distributed computing for the model university's environment. In addition, the on-line course registration system is redesigned on the client-server model, where the mainframe serves as the file server responsible for file input and output and workstations become the clients. Actual implementation and experiments show that the proposed distributed computing system yields a significant reduction in processing time.


A Study on Data Security of Web Local Storage (웹 로컬스토리지 데이터 보안을 위한 연구)

  • Kim, Ji-soo;Moon, Jong-sub
    • Journal of Internet Computing and Services
    • /
    • v.17 no.3
    • /
    • pp.55-66
    • /
    • 2016
  • HTML5 local storage is a form of Web Storage that is stored permanently on the local computer as files. The contents of the storage can be easily accessed and modified because they are stored as plaintext. Moreover, because the Internet browser distinguishes the local storage of each domain by file name, a malicious attacker can abuse a victim's local storage files by changing file names. In this paper, we propose a scheme to maintain the integrity and confidentiality of the local storage's source domain and source device. The key idea is that the client encrypts the data stored in local storage with a cipher key managed by the web server. When the cipher key is requested, the web server authenticates whether the client is the legitimate source of the local storage. Finally, our experiments show that the proposed method can detect abnormal access to local storage.
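The paper's scheme encrypts local-storage values under a server-managed key, so a value copied to another device or domain fails to decrypt or verify. A standard-library-only sketch of that store/verify step (the keystream construction and the names `seal`/`open_` are illustrative; a real deployment would use an authenticated cipher such as AES-GCM via WebCrypto, and the key-issuing authentication step is not shown):

```python
import hashlib, hmac, os

def keystream(key: bytes, nonce: bytes, n: bytes) -> bytes:
    """Counter-mode keystream from SHA-256 (demo construction only)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt-then-MAC; the returned blob is what goes in localStorage."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_(key: bytes, blob: bytes) -> bytes:
    """Verify the MAC, then decrypt; tampering raises ValueError."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("local storage value was tampered with")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)  # cipher key issued by the web server after authentication
blob = seal(key, b'{"cart":"3 items"}')
assert open_(key, blob) == b'{"cart":"3 items"}'
```

Because the key never lives in the browser's storage files, copying or renaming the files on disk yields only ciphertext, and any modification is caught by the MAC check.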

COMPARISON OF THE SEALING ABILITY OF VARIOUS RETROGRADE FILLING MATERIALS (수종의 역충전 재료의 치근단 밀폐력 비교)

  • 황윤찬;강인철;황인남;오원만
    • Restorative Dentistry and Endodontics
    • /
    • v.26 no.5
    • /
    • pp.379-386
    • /
    • 2001
  • This study was performed to evaluate the sealing ability of various retrograde filling materials using bacterial penetration and dye penetration tests. One hundred and forty extracted human teeth with single, straight canals and mature apices were collected for this study. All canals were instrumented using an engine-driven Ni-Ti file (ProFile). After removing 3 mm from the apex of each tooth, a standardized 3 mm root-end cavity was prepared using an ultrasonic instrument. Seventy teeth were randomly divided into 7 groups: 6 groups for retrograde filling with Super-EBA, ZOE, Chelon-Silver, IRM, ZPC, and amalgam, and a 7th group used as a negative control. Nail varnish was applied to all external root surfaces up to the level of the resected root ends to prevent lateral microleakage. The specimens were then sterilized in an ethylene oxide sterilizer for 24 hours. 2 mm of each resected root was immersed in a culture chamber containing Trypticase Soy Broth with a phenol red indicator. The coronal access of each specimen was inoculated every 72 hours with a suspension of Proteus vulgaris. The culture media were observed every 24 hours for a color change indicating bacterial contamination. The specimens were observed for 4 weeks. The remaining 70 teeth were submitted to a dye penetration test. The canals of these teeth were first sealed with AH26 and obturated using an Obtura II system. Root resection, root-end preparation, and retrograde filling were performed as above. All specimens were suspended in 2% methylene blue dye for 72 hours before being longitudinally split. The degree of dye penetration was then measured and evaluated using a stereomicroscope at ×10 magnification. The results were as follows: 1. In the bacterial penetration test, the degree of leakage was lowest with Super-EBA, followed in ascending order by ZOE, Chelon-Silver, IRM, and ZPC; amalgam showed the highest bacterial leakage of all (p<0.01). 2. In the dye penetration test, the degree of microleakage was lowest with Chelon-Silver and Super-EBA, followed in ascending order by IRM and ZPC; ZOE and amalgam showed the highest microleakage of all (p<0.05). These results suggest that the eugenol-based cement Super-EBA has excellent sealing ability as a retrograde filling material.


Analysis of the ROMizer of simpleRTJ Embedded Java Virtual Machine (simpleRTJ 임베디드 자바가상기계의 ROMizer 분석 연구)

  • Yang, Hee-jae
    • The KIPS Transactions:PartA
    • /
    • v.10A no.4
    • /
    • pp.397-404
    • /
    • 2003
  • A dedicated-purpose embedded Java system usually takes the model in which all class files are converted into a single ROM image by the ROMizer on the host computer, and the Java virtual machine in the embedded system then executes the image. Defining the ROM image format is a very important issue for embedded systems with limited memory and low-performance processors, since the format directly influences memory usage and the efficiency of accessing entries in classes. In this paper we analyze the ROMizer, and especially the ROM image format, implemented in the simpleRTJ embedded Java virtual machine. The analysis shows that with the ROMizer, memory space can be reduced by up to 50% compared to the original class files and access speed is up to six times faster. The results of this study will be applied to developing a more efficient ROMizer for ROM-based embedded Java systems.
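The simpleRTJ image format itself is not given in the abstract, but the general ROMizing idea it analyzes can be sketched: pack many class files into one image with an index so the VM can locate a class without a file system. All layout choices below (big-endian fields, name/offset/length index entries, the `romize`/`lookup` names) are this sketch's assumptions, not simpleRTJ's actual format:

```python
import struct

def romize(class_files: dict) -> bytes:
    """Pack class files into one ROM image: a 4-byte entry count, then an
    index of (name_len, name, offset, length) records, then the bodies."""
    names = sorted(class_files)
    # header size: count + per entry (2-byte name_len + name + 4+4 offset/len)
    base = 4 + sum(2 + len(n.encode()) + 8 for n in names)
    index, body, off = b"", b"", base
    for n in names:
        data = class_files[n]
        enc = n.encode()
        index += struct.pack(">H", len(enc)) + enc + struct.pack(">II", off, len(data))
        body += data
        off += len(data)
    return struct.pack(">I", len(names)) + index + body

def lookup(image: bytes, name: str) -> bytes:
    """Linear scan of the index; a real ROMizer would sort or hash names."""
    (count,) = struct.unpack_from(">I", image, 0)
    pos = 4
    for _ in range(count):
        (nlen,) = struct.unpack_from(">H", image, pos)
        entry = image[pos + 2:pos + 2 + nlen].decode()
        off, length = struct.unpack_from(">II", image, pos + 2 + nlen)
        if entry == name:
            return image[off:off + length]
        pos += 2 + nlen + 8
    raise KeyError(name)

img = romize({"Main.class": b"\xca\xfe\xba\xbe...", "Util.class": b"\xca\xfe"})
assert lookup(img, "Util.class") == b"\xca\xfe"
```

The space savings the paper reports would come from deduplicating constant pools and pre-resolving references across classes, which this sketch does not attempt.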

Development of Monte Carlo Simulation Code for the Dose Calculation of the Stereotactic Radiosurgery (뇌 정위 방사선수술의 선량 계산을 위한 몬테카를로 시뮬레이션 코드 개발)

  • Kang, Jeongku;Lee, Dong Joon
    • Progress in Medical Physics
    • /
    • v.23 no.4
    • /
    • pp.303-308
    • /
    • 2012
  • A Geant4-based Monte Carlo code for application to stereotactic radiosurgery was developed. The probability density function and cumulative distribution function used to determine the incident photon energy were calculated from a pre-calculated energy spectrum of the linac by multiplying the weighting factors corresponding to the energy bins. A messenger class transfers the various MLC fields generated by the planning system. Rotation matrices rotateX and rotateY simulate gantry and table rotation, respectively. We construct an accelerator world and a phantom world within the main world coordinate system so that the accelerator and phantom can be rotated independently. We used the dicomHandler class to convert DICOM binary files into text files containing the matrix size, pixel size, pixel HU values, bit size, padding value, and bit order, and we reworked this class to function correctly. We also reworked the PrimaryGeneratorAction class to reduce the calculation time: because the search process of the THitsMap was prohibitively slow, we discarded it and accessed the elements directly from first to last when producing the result files.
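The energy-sampling step the abstract describes is standard inverse-CDF sampling: weight each spectrum bin, normalize to a CDF, then draw a bin and an energy within it. A sketch of just that step, with the bin edges, fluence values, and weights below invented for illustration (the paper's Geant4 classes are not reproduced):

```python
import bisect, random

def build_cdf(fluence, weights):
    """Weighted spectrum -> normalized per-bin CDF.
    pdf[i] is proportional to fluence[i] * weights[i]."""
    pdf = [f * w for f, w in zip(fluence, weights)]
    total = sum(pdf)
    cdf, acc = [], 0.0
    for p in pdf:
        acc += p / total
        cdf.append(acc)
    return cdf

def sample_energy(bin_edges, cdf, rng=random.random):
    """Inverse-CDF sampling: pick a bin from the CDF, then a uniform
    energy within that bin's edges."""
    i = bisect.bisect_left(cdf, rng())
    lo, hi = bin_edges[i], bin_edges[i + 1]
    return lo + (hi - lo) * rng()

edges = [0.0, 2.0, 4.0, 6.0]                      # MeV bin edges (illustrative)
cdf = build_cdf([5.0, 3.0, 1.0], [1.0, 1.0, 2.0])  # fluence x weighting factors
e = sample_energy(edges, cdf)
assert 0.0 <= e <= 6.0
```

In the actual code this sampling would live inside `GeneratePrimaries` of the PrimaryGeneratorAction, drawing one energy per primary photon.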

Lambda Architecture Used Apache Kudu and Impala (Apache Kudu와 Impala를 활용한 Lambda Architecture 설계)

  • Hwang, Yun-Young;Lee, Pil-Won;Shin, Yong-Tae
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.9
    • /
    • pp.207-212
    • /
    • 2020
  • The amount of data has increased significantly due to advances in technology, and various big data processing platforms have emerged to handle it. Among them, the most widely used platform is Hadoop, developed by the Apache Software Foundation, and Hadoop is also used in the IoT field. However, the existing Hadoop-based environment for collecting and analyzing IoT sensor data has problems: the small-file problem of HDFS, Hadoop's core storage project, overloads the NameNode, and imported data cannot be updated or deleted. This paper designs a Lambda Architecture using Apache Kudu and Impala. The proposed architecture classifies IoT sensor data into Cold-Data and Hot-Data, stores each in storage suited to its characteristics, and uses the Batch-View created through batch processing and the Real-time View generated through Apache Kudu and Impala to solve the problems of the existing Hadoop-based environment and shorten the time it takes users to access the analyzed data.
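The hot/cold split at the heart of the design can be sketched as a simple routing rule on record age: recent rows go to the mutable real-time store (Kudu, backing the Real-time View), older rows to the batch store (backing the Batch-View). The one-hour threshold and the `route` function are this sketch's assumptions, not values from the paper:

```python
import time

HOT_WINDOW_SEC = 3600  # illustrative: records newer than an hour are Hot-Data

def route(record, now=None):
    """Classify an IoT sensor record by age. 'kudu' rows stay updatable
    and serve the Real-time View; 'hdfs' rows are appended to the batch
    store and surface through the Batch-View."""
    now = time.time() if now is None else now
    return "kudu" if now - record["ts"] <= HOT_WINDOW_SEC else "hdfs"

now = 1_700_000_000.0
assert route({"ts": now - 60, "temp": 21.5}, now) == "kudu"     # hot
assert route({"ts": now - 86400, "temp": 20.1}, now) == "hdfs"  # cold
```

Routing hot rows into Kudu is what restores update/delete semantics and avoids flooding HDFS with small files, since the batch store only ever receives large, compacted cold partitions.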

The Development of a Computer-Assisted HACCP Program for the Microbiological Quality Assurance in Hospital Foodservice Operations (병원급식의 미생물적 품질보증을 위한 HACCP 전산프로그램의 개발)

  • Kwak, Tong-Kyung;Ryu, Kyung;Choi, Seong-Kyung
    • Journal of the Korean Society of Food Culture
    • /
    • v.11 no.1
    • /
    • pp.107-121
    • /
    • 1996
  • This study was carried out to develop a computer-assisted Hazard Analysis and Critical Control Point (HACCP) program that gives foodservice managers a systematic approach to the identification, assessment, and control of hazards, so as to assure the microbiological quality of food in hospital foodservice operations. Sanitation practices were surveyed and analyzed in the dietetic departments of 4 hospitals. Among them, one 762-bed general hospital was selected as the standard model for developing the computer-assisted HACCP program. All database files and processing programs were created using the FoxPro package for easy application of the HACCP concept. The HACCP program was developed based on the methods suggested by NACMCF, IAMFES, and Bryan. The program consists of two parts: the pre-stage of the HACCP study and the implementation stage of the HACCP system. 1. The pre-stage of the HACCP study includes selecting a menu item, developing the HACCP recipe, constructing a product flow diagram, and printing the HACCP recipe and the product flow diagram. A menu item for HACCP study can be selected from menu item lists classified by cooking method. The HACCP recipe includes the ingredients, their amounts, and the cooking procedure. A flow diagram is constructed based on the HACCP recipe, and both the recipe and the flow diagram are printed out. 2. The implementation stage of the HACCP study includes identifying microbiological hazards, determining critical control points, establishing control methods for each hazard, and updating the database files. Potentially hazardous ingredients are determined and microbiological hazards are identified in each phase of the product flow. Critical control points (CCPs) are identified by applying CCP decision trees to the ingredients and to each process stage. After hazards and CCPs are identified, criteria, a monitoring system, a corrective action plan, a record-keeping system, and verification methods are established. When the HACCP study is completed, HACCP study result forms are printed out, and records in the HACCP database files can be added, corrected, or deleted.
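The CCP decision tree applied to each process stage can be sketched as a short sequence of yes/no questions, in the spirit of the Codex-style tree; the exact questions and the field names in this `is_ccp` function are illustrative, not the program's actual logic:

```python
def is_ccp(step):
    """Simplified CCP decision tree for one process step:
    Q1: is a hazard present, and is a control measure in place?
    Q2: does this step eliminate or reduce the hazard to an acceptable level?
    Q3: could contamination occur or increase beyond acceptable levels?
    Q4: will a later step eliminate or reduce the hazard?"""
    if not step["hazard_present"]:
        return False
    if not step["control_measure"]:
        return False  # in practice: modify the step, process, or product
    if step["eliminates_hazard"]:
        return True   # Q2 yes -> this step is a CCP
    if not step["contamination_possible"]:
        return False
    return not step["later_step_controls"]  # no later control -> CCP

cooking = dict(hazard_present=True, control_measure=True,
               eliminates_hazard=True, contamination_possible=False,
               later_step_controls=False)
assert is_ccp(cooking)  # e.g. a cook step that kills pathogens is a CCP
```

Encoding the tree this way is what lets a database-driven program walk every stage of the product flow and flag CCPs consistently across menu items.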
