• Title/Summary/Keyword: Compressed file


Development of an EEG Software for Two-Channel Cerebral Function Monitoring System (2채널 뇌기능 감시 시스템을 위한 뇌파 소프트웨어의 개발)

  • Kim, Dong-Jun;Yu, Seon-Guk;Kim, Seon-Ho
    • Journal of Biomedical Engineering Research
    • /
    • v.20 no.1
    • /
    • pp.81-90
    • /
    • 1999
  • This paper describes EEG (electroencephalogram) software for a two-channel cerebral function monitoring system that detects cerebral ischemia. In the software, two-channel bipolar analog EEG signals are digitized, and various EEG parameters are extracted from them and displayed on a monitor in real time. The digitized EEG signal is transformed by the FFT (fast Fourier transform) and represented as a CSA (compressed spectral array) and a DSA (density spectral array). Five additional parameters, namely the alpha ratio, percent delta, spectral edge frequency, total power, and difference in total power, are estimated from the FFT spectra. All of these are effectively merged on a single monitor and displayed in real time. Through animal experiments and clinical trials on humans, the software was refined and enhanced. Since the software displays the raw EEG, CSA, and DSA simultaneously with the five additional parameters on one monitor, patients can be observed from multiple perspectives. For easy comparison of the patient's status, reference patterns of the CSA and DSA can be captured and displayed at the top of the screen. The user can mark surgical events and patient conditions in the software, which allows jumping directly to those points when reviewing the recorded EEG file afterwards. Other functions, such as forward/backward jump, gain control, and file management, are provided and operated with simple mouse clicks. Clinical tests in a university hospital show that the software responds accurately to the patients' conditions and that medical doctors can use it easily.

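As a rough illustration of the spectral parameters named in the abstract above (alpha ratio, percent delta, spectral edge frequency, total power), the following Python sketch computes them from one FFT of a digitized EEG epoch. The band limits, window, sampling rate, and the 95% edge criterion are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def spectral_parameters(epoch, fs=128.0):
    """Illustrative EEG spectral parameters from one epoch (assumed band limits)."""
    # Power spectrum of the Hann-windowed epoch
    windowed = epoch * np.hanning(len(epoch))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)

    def band_power(lo, hi):
        return spectrum[(freqs >= lo) & (freqs < hi)].sum()

    total_power = band_power(0.5, 30.0)                       # assumed 0.5-30 Hz band
    percent_delta = band_power(0.5, 4.0) / total_power * 100.0
    alpha_ratio = band_power(8.0, 13.0) / total_power          # one possible definition

    # Spectral edge frequency: frequency below which 95% of the band power lies
    band = (freqs >= 0.5) & (freqs < 30.0)
    cumulative = np.cumsum(spectrum[band])
    sef95 = freqs[band][np.searchsorted(cumulative, 0.95 * cumulative[-1])]

    return total_power, percent_delta, alpha_ratio, sef95
```

The remaining parameter mentioned in the abstract, the difference in total power, would simply compare the total power of the two channels epoch by epoch.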

A Bitmap Index for Chunk-Based MOLAP Cubes (청크 기반 MOLAP 큐브를 위한 비트맵 인덱스)

  • Lim, Yoon-Sun;Kim, Myung
    • Journal of KIISE:Databases
    • /
    • v.30 no.3
    • /
    • pp.225-236
    • /
    • 2003
  • MOLAP systems store data in a multidimensional array called a 'cube' and access them using array indexes. When a cube is stored on disk, it can be partitioned into a set of chunks of the same side length. Such a cube storage scheme is called the chunk-based MOLAP cube storage scheme. It gives a data clustering effect so that all dimensions are guaranteed a fair chance in terms of query processing speed. In order to achieve high space utilization, sparse chunks are further compressed. Due to this compression, the relative position of chunks cannot be obtained in constant time without using indexes. In this paper, we propose a bitmap index for chunk-based MOLAP cubes. The index can be constructed along with the generation of the corresponding cube. The relative position of chunks is retained in the index so that chunk retrieval can be done in constant time. We place as many chunks as possible in an index block so that the number of index searches is minimized for OLAP operations such as range queries. We show that the proposed index is efficient by comparing it with multidimensional indexes such as the UB-tree and the grid file in terms of time and space.
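The abstract above does not give the exact index layout, so the following Python sketch only illustrates the general idea of a bitmap index over a chunked, compressed cube: one bit per logical chunk records whether the chunk is physically stored, and a prefix count over the bitmap maps a logical chunk number to its physical slot in constant time per lookup.

```python
class ChunkBitmapIndex:
    """Toy bitmap index: maps logical chunk numbers of a compressed cube
    to physical slots, skipping empty (unstored) chunks.  Illustrative only,
    not the paper's exact index structure."""

    def __init__(self, stored_flags):
        # stored_flags[i] is True if logical chunk i is physically stored
        self.bits = list(stored_flags)
        # prefix[i] = number of stored chunks among logical chunks 0..i-1
        self.prefix = [0]
        for flag in self.bits:
            self.prefix.append(self.prefix[-1] + (1 if flag else 0))

    def physical_slot(self, logical_chunk):
        """Return the physical slot of a stored chunk, or None if it is empty."""
        if not self.bits[logical_chunk]:
            return None            # sparse chunk was compressed away
        return self.prefix[logical_chunk]

# Example: a 2x2x2 cube (8 logical chunks) where only 3 chunks hold data
index = ChunkBitmapIndex([True, False, False, True, False, True, False, False])
assert index.physical_slot(3) == 1   # second stored chunk in the file
assert index.physical_slot(1) is None
```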

Design and Implementation of Map Databases for Telematics and Car Navigation Systems using an Embedded DBMS

  • Joo, Yong-Jin;Kim, Jung-Yeop;Lee, Yong-Ik;Moon, Kyung-Ky;Park, Soo-Hong
    • Spatial Information Research
    • /
    • v.14 no.4 s.39
    • /
    • pp.379-389
    • /
    • 2006
  • Map databases for CNS (Car Navigation Systems) can be accessed quickly and compressed efficiently because they are usually recorded in a PSF (Physical Storage Format). However, it is difficult to create and manage such file-system-based data storage. To solve these problems, a DBMS needs to be combined with spatial data management. Therefore, we developed an embedded DBMS with which to store data and conduct quick searches in CNS. Spatial data could be easily managed and accessed using the compression method, Multi-Link, a spatial index, and spatial division. As a result, the proposed embedded DBMS searched quickly and supported data management stably. If synchronization is applied to the DBMS, the advantages of an embedded DBMS are expected to be fully utilized.

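The paper's Multi-Link compression and spatial index are not detailed in the abstract, so the sketch below shows only a generic grid-division spatial index of the kind the abstract mentions; the cell size and feature representation are illustrative assumptions, not the authors' design.

```python
from collections import defaultdict

class GridSpatialIndex:
    """Illustrative grid-division spatial index for point features (e.g. map nodes).
    Not the paper's scheme; the cell size is an arbitrary assumption."""

    def __init__(self, cell_size=0.01):        # cell size in map units (assumed)
        self.cell_size = cell_size
        self.cells = defaultdict(list)          # (cx, cy) -> list of (id, x, y)

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, feature_id, x, y):
        self.cells[self._cell(x, y)].append((feature_id, x, y))

    def range_query(self, xmin, ymin, xmax, ymax):
        """Return ids of features inside the query rectangle."""
        cx0, cy0 = self._cell(xmin, ymin)
        cx1, cy1 = self._cell(xmax, ymax)
        hits = []
        for cx in range(cx0, cx1 + 1):
            for cy in range(cy0, cy1 + 1):
                for fid, x, y in self.cells.get((cx, cy), []):
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append(fid)
        return hits
```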

Construction of PANM Database (Protostome DB) for rapid annotation of NGS data in Mollusks

  • Kang, Se Won;Park, So Young;Patnaik, Bharat Bhusan;Hwang, Hee Ju;Kim, Changmu;Kim, Soonok;Lee, Jun Sang;Han, Yeon Soo;Lee, Yong Seok
    • The Korean Journal of Malacology
    • /
    • v.31 no.3
    • /
    • pp.243-247
    • /
    • 2015
  • A stand-alone BLAST server is available that provides a convenient and amenable platform for the analysis of molluscan sequence information, especially EST sequences generated by traditional sequencing methods. However, the server has limitations in the annotation of molluscan sequences generated on next-generation sequencing (NGS) platforms due to inconsistencies in the molluscan sequences available at NCBI. We constructed a web-based interface for a new stand-alone BLAST database, called PANM-DB (Protostome DB), for the analysis of molluscan NGS data. PANM-DB includes the amino acid sequences of the protostome groups Arthropoda, Nematoda, and Mollusca, downloaded from GenBank using the NCBI Taxonomy Browser. The sequences were converted to multi-FASTA format and built into the database using NCBI's formatdb program. PANM-DB contains 6% of the NCBInr database sequences (as of 24-06-2015), and for an input of 10,000 RNA-seq sequences the processing was 15 times faster with PANM-DB than with the NCBInr DB. PANM-DB also yields twice as many significant hits, with more diverse annotation profiles, compared with the Mollusks DB. Hence, the construction of PANM-DB is a significant step in the annotation of molluscan sequence information obtained from NGS platforms. PANM-DB is freely downloadable from the web-based interface (Malacological Society of Korea, http://malacol.or/kr/blast) as a compressed file and can run on any compatible operating system.
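As a hedged sketch of the database-construction step described above, the Python snippet below formats a multi-FASTA protein file into a local BLAST database by invoking the legacy NCBI formatdb tool named in the abstract; the flag set and file names shown are assumptions about a typical invocation, not the authors' recorded commands.

```python
import subprocess
from pathlib import Path

def build_protein_blast_db(fasta_path, db_name):
    """Format a multi-FASTA protein file into a local BLAST database.
    Uses the legacy NCBI formatdb tool; the flags (-i input, -p T for
    protein, -n database name) follow the commonly documented usage and
    are an assumption about the exact invocation used for PANM-DB."""
    fasta = Path(fasta_path)
    if not fasta.exists():
        raise FileNotFoundError(fasta)
    subprocess.run(
        ["formatdb", "-i", str(fasta), "-p", "T", "-n", db_name],
        check=True,
    )

# Hypothetical usage: build a protostome protein database for local BLAST searches
# build_protein_blast_db("protostome_aa.fasta", "PANM_DB")
```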

A Study on the Implementation of the Web-Camera System for Realtime Monitoring (실시간 영상 감시를 위한 웹 카메라 시스템의 구현에 관한 연구)

  • Ahn, Young-Min;Jin, Hyun-Joon;Park, Nho-Kyung
    • Journal of IKEEE
    • /
    • v.5 no.2 s.9
    • /
    • pp.174-181
    • /
    • 2001
  • In this study, the architecture of a Web Camera System for real-time monitoring over the Internet is proposed and implemented in two different structures. In one architecture, the Web server and the Camera server run on the same system, which transfers motion pictures compressed into JPEG files to users on the WWW (World Wide Web). In the other architecture, the Web server and the Camera server run on different systems, and the motion pictures are transferred from the Camera server to the Web server and finally to users. For JPEG image transfer in the Web Camera System, a Java applet and JavaScript are used to maximize the system's independence from the operating system and web browser. To compare the performance of the two architectures, data traffic is measured and simulated in bytes per second.

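The abstract describes a Camera server delivering JPEG frames over HTTP; as a minimal, assumption-laden sketch of that idea (not the authors' Java applet implementation), the Python snippet below serves the most recent JPEG frame written by a separate capture process.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

# Assumed location where a separate capture process keeps writing the latest frame
FRAME_PATH = Path("latest_frame.jpg")

class SnapshotHandler(BaseHTTPRequestHandler):
    """Serves the most recent JPEG frame on each request (polling-style monitoring)."""

    def do_GET(self):
        if self.path != "/snapshot.jpg" or not FRAME_PATH.exists():
            self.send_error(404)
            return
        frame = FRAME_PATH.read_bytes()
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.send_header("Content-Length", str(len(frame)))
        self.end_headers()
        self.wfile.write(frame)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SnapshotHandler).serve_forever()
```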

A study on the state of infection control in dental clinic (치과진료실에서의 감염관리 실태 조사)

  • Kim, Kyung-Mi;Jung, Jae-Yeon;Hwang, Yoon-Sook
    • Journal of Korean society of Dental Hygiene
    • /
    • v.7 no.3
    • /
    • pp.213-230
    • /
    • 2007
  • The purpose of this study was to examine the state of infection control among members of the Korean Dental Hygienists Association. The subjects were dental hygienists who attended a symposium on July 1, 2006. After a survey was conducted, the answer sheets from 489 participants were analyzed, and the findings were as follows: 1. A disinfection room was present in 72.7% of workplaces, 52.9% had no designated person in charge of infection control, and 62.2% had one sterilizer. 2. With multiple responses allowed, the types of sterilizer were autoclave (97.9%), UV sterilizer (67.4%), and EO gas sterilizer (21.4%); the most common disinfectant was ethanol (84.1%). 3. Sterilized water was used in the unit-chair water lines by 42.9%, and 69.0% did not sterilize the compressed air. 4. Disposable items were not reused by 62.5%; among reused disposables, the most common was the suction tip (28.2%, multiple responses). 5. For instrument sterilization, handpieces were sterilized after every use in 28.4% of cases, and reamer files, burs, mirrors, pincettes, explorers, hand scalers, and ultrasonic scalers were also frequently sterilized after every use. 6. The use of personal protection was high, and 87.0% cleaned their hands before every treatment. The soap type was liquid soap in dental clinics (48.2%), disinfectant soap in dental hospitals (41.2%), and solid soap in public health centers (50.6%). Respondents also answered that regular oral health education is needed: 82.9% answered that oral health technicians are needed in schools, and 87.8% needed individual oral health education for the benefit of better oral health.


Improved Original Entry Point Detection Method Based on PinDemonium (PinDemonium 기반 Original Entry Point 탐지 방법 개선)

  • Kim, Gyeong Min;Park, Yong Su
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.6
    • /
    • pp.155-164
    • /
    • 2018
  • Many malicious programs have been compressed or encrypted using various commercial packers to prevent reverse engineering, so malicious-code analysts must decompress or decrypt them first. The OEP (Original Entry Point) is the address of the first instruction executed after the encrypted or compressed executable has been restored to its original binary state. Several unpackers, including PinDemonium, execute the packed file, keep track of the executed addresses until the OEP appears, and then search for the OEP among those addresses. However, instead of finding exactly one OEP, unpackers provide a relatively large set of OEP candidates, and sometimes the OEP is missing from the candidates. In other words, existing unpackers have difficulty finding the correct OEP. We have developed a new tool that provides smaller OEP candidate sets by adding two methods based on properties of the OEP. In this paper, we propose two methods that reduce the OEP candidate set by using the property that the function call sequence and parameters are the same in the packed program and the original program. The first method is based on function calls. Programs written in C/C++ are compiled into binary code, and compiler-specific system functions are added to the compiled program. After examining these functions, we added a method to PinDemonium that detects the completion of unpacking by matching the patterns of system functions called in packed and unpacked programs. The second method is based on parameters. The parameters include not only user-entered inputs but also system inputs. We added a method to PinDemonium that finds the OEP using the system parameters of a particular function in stack memory. OEP detection experiments were performed on sample programs packed with 16 commercial packers. We reduce the OEP candidates by more than 40% on average compared to PinDemonium, except for 2 commercial packers that could not be executed due to anti-debugging techniques.
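As a simplified illustration of the first proposed method (matching system-function call patterns between the packed and original runs), the Python sketch below filters OEP candidates by whether the calls recorded after each candidate match a reference call sequence. The data structures and example call names are assumptions for illustration, not PinDemonium's actual Pin-based instrumentation.

```python
def filter_oep_candidates(candidates, call_trace, reference_sequence):
    """Keep only OEP candidates whose following system-function call sequence
    matches the call pattern observed in the original (unpacked) program.
    Inputs are simplified stand-ins for the traces a Pin-based tool records.

    candidates         -- list of (address, trace_index) pairs for candidate OEPs
    call_trace         -- system-function names from the packed run, in order
    reference_sequence -- leading system-function calls of the original program
    """
    matched = []
    for address, trace_index in candidates:
        following = call_trace[trace_index:trace_index + len(reference_sequence)]
        if following == reference_sequence:
            matched.append(address)
    return matched

# Hypothetical example: CRT startup calls typically seen right after the real OEP
reference = ["GetSystemTimeAsFileTime", "GetCurrentProcessId", "GetCurrentThreadId"]
candidates = [(0x401000, 5), (0x40AB10, 42)]
trace = ["VirtualAlloc"] * 5 + reference + ["..."] * 40
print(filter_oep_candidates(candidates, trace, reference))   # -> [0x401000]
```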

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.45-52
    • /
    • 2014
  • The Ubiquitous-City (U-City) is a smart or intelligent city designed to satisfy human beings' desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE or IoT) and includes a large number of networked video cameras. These networked cameras support many U-City services, serving as one of the main input sources together with sensors, and they constantly generate a huge amount of video information, truly big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. In addition, the accumulated video data often need to be analyzed to detect an event or find a figure among them, which requires a lot of computational power and usually takes a long time. Current research tries to reduce the processing time of such big video data, and cloud computing can be a good solution to this problem. Among the many cloud computing methodologies that can be applied, MapReduce is an interesting and attractive one: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, leading to exponential growth of the data produced by the networked cameras; we are coping with truly big data when we deal with video images produced by high-quality cameras. Video surveillance systems were of limited use before cloud computing, but they are now spreading widely in U-Cities as useful methodologies have been found. Video data are unstructured, so it is not easy to find good research results on analyzing them with MapReduce. This paper presents an analyzing system for video surveillance, which is a cloud-computing-based video data management system. It is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming IN component. The "video monitor" for the video images consists of the "video translator" and the "protocol manager". The "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives the video data from the networked video cameras and delivers them to the "storage client"; it also manages the network bottleneck to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component and stores them in the storage; it also helps other components access the storage. The "video monitor" component transfers the video data by smooth streaming and manages the protocol. The "video translator" sub-component enables users to manage the resolution, the codec, and the frame rate of the video image. The "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the storage for cloud computing: Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We suggest our own methodology for analyzing the video images using MapReduce; that is, the workflow of the video analysis is presented and explained in detail in this paper. The performance evaluation was conducted experimentally, and we found that our proposed system worked well. The performance evaluation results are presented with analysis. With our cluster system, we used compressed 1920×1080 (FHD) resolution video data, the H.264 codec, and HDFS as video storage. We measured the processing time according to the number of frames per mapper. By tracing the optimal split size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
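The paper presents its own MapReduce workflow for video analysis on Hadoop; as a rough, generic sketch of the idea (not the authors' implementation), a Hadoop Streaming mapper/reducer pair in Python could look as follows. The tab-separated input format and the detect_event() placeholder are assumptions standing in for the real frame-analysis step.

```python
#!/usr/bin/env python3
"""Rough Hadoop Streaming sketch of per-frame analysis (not the paper's system).

Mapper input lines are assumed to be "camera_id<TAB>frame_path"; detect_event()
is a placeholder for any real analysis (e.g. motion or object detection)."""
import sys

def detect_event(frame_path):
    # Placeholder: a real implementation would decode the frame (H.264/JPEG)
    # stored in HDFS and run an image-analysis routine here.
    return frame_path.endswith("_alarm.jpg")

def mapper():
    # Emit "camera_id<TAB>1" for every frame in which an event is detected
    for line in sys.stdin:
        camera_id, frame_path = line.rstrip("\n").split("\t", 1)
        if detect_event(frame_path):
            print(f"{camera_id}\t1")

def reducer():
    # Sum event counts per camera (input is sorted by key, as Hadoop guarantees)
    current_camera, count = None, 0
    for line in sys.stdin:
        camera_id, value = line.rstrip("\n").split("\t", 1)
        if camera_id != current_camera:
            if current_camera is not None:
                print(f"{current_camera}\t{count}")
            current_camera, count = camera_id, 0
        count += int(value)
    if current_camera is not None:
        print(f"{current_camera}\t{count}")

if __name__ == "__main__":
    # Run as the mapper when invoked with the argument "map", else as the reducer
    mapper() if sys.argv[1:] == ["map"] else reducer()
```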