• Title/Summary/Keyword: Interface Evaluation (인터페이스 평가)


Development of a Window Program for Searching CpG Island (CpG Island 검색용 윈도우 프로그램 개발)

  • Kim, Ki-Bong
    • Journal of Life Science
    • /
    • v.18 no.8
    • /
    • pp.1132-1139
    • /
    • 2008
  • A CpG island is a short stretch of DNA in which the frequency of the CG dinucleotide is higher than in other regions. CpG islands are present in the promoters and exonic regions of approximately 30~60% of mammalian genes, so they are useful markers for genes in organisms whose genomes contain 5-methylcytosine. Recent evidence supports the notion that hypermethylation of CpG islands, by silencing tumor suppressor genes, plays a major causal role in cancer and has been described in almost every tumor type. In this respect, CpG island search by computational methods is very helpful for cancer research and for computational promoter and gene prediction. I therefore developed a window program (called CpGi) on the basis of the CpG island criteria defined by D. Takai and P. A. Jones. The program 'CpGi' was implemented in Visual C++ 6.0 and can determine the locations of CpG islands using diverse parameters (%GC, Obs(CpG)/Exp(CpG), window size, step size, gap value, number of CpGs, length) specified by the user. The analysis result of CpGi provides a graphical map of CpG islands and a G+C% plot, and more detailed information on each CpG island can be obtained through a pop-up window. Two human contigs, AP00524 (from chromosome 22) and NT_029490.3 (from chromosome 21), were used to compare the accuracy of CpGi against two other public programs, Emboss-CpGPlot and CpG Island Searcher, both web-based public CpG island search programs. The comparison showed that CpGi is on a level with or outperforms Emboss-CpGPlot and CpG Island Searcher. Having a simple and easy-to-use user interface, CpGi should be a very useful tool for genome analysis and CpG island research. To obtain a copy of CpGi for academic use only, contact the corresponding author.
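
The search described above is essentially a sliding-window scan that flags windows whose G+C content and observed/expected CpG ratio exceed thresholds. Below is a minimal Python sketch of that idea, assuming the commonly cited Takai-Jones defaults (GC ≥ 55%, Obs/Exp ≥ 0.65, length ≥ 500 bp); the actual CpGi program also exposes step size, gap value, and CpG-count parameters that this sketch does not model.

```python
def gc_content(seq):
    """Fraction of G and C bases in the window."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def obs_exp_cpg(seq):
    """Observed CpG count divided by the expected count (C*G/length)."""
    c, g = seq.count("C"), seq.count("G")
    expected = c * g / len(seq)
    return seq.count("CG") / expected if expected else 0.0

def find_cpg_islands(seq, window=200, step=1, min_gc=0.55, min_oe=0.65, min_len=500):
    """Slide a window along seq, keep windows passing both thresholds,
    merge overlapping hits, and report merged regions of at least min_len bp."""
    seq = seq.upper()
    hits = []
    for start in range(0, len(seq) - window + 1, step):
        win = seq[start:start + window]
        if gc_content(win) >= min_gc and obs_exp_cpg(win) >= min_oe:
            hits.append((start, start + window))
    islands = []
    for s, e in hits:  # hits are already sorted by start position
        if islands and s <= islands[-1][1]:
            islands[-1] = (islands[-1][0], max(islands[-1][1], e))
        else:
            islands.append((s, e))
    return [(s, e) for s, e in islands if e - s >= min_len]
```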

Local Shape Analysis of the Hippocampus using Hierarchical Level-of-Detail Representations (계층적 Level-of-Detail 표현을 이용한 해마의 국부적인 형상 분석)

  • Kim Jeong-Sik;Choi Soo-Mi;Choi Yoo-Ju;Kim Myoung-Hee
    • The KIPS Transactions:PartA
    • /
    • v.11A no.7 s.91
    • /
    • pp.555-562
    • /
    • 2004
  • Both global volume reduction and local shape changes of the hippocampus within the brain indicate abnormal neurological states. Hippocampal shape analysis consists of two main steps: first, construct a hippocampal shape representation model; second, compute a shape similarity from this representation. This paper proposes a novel method for analyzing hippocampal shape using an integrated octree-based representation containing meshes, voxels, and skeletons. First, we create multi-level meshes by applying the Marching Cubes algorithm to the hippocampal region segmented from MR images. This model is converted to an intermediate binary voxel representation, and we extract the 3D skeleton from these voxels using a slice-based skeletonization method. Then, to acquire a multiresolution shape representation, we hierarchically store the meshes, voxels, and skeletons in the nodes of the octree, and we extract sample meshes using a ray-tracing-based mesh sampling technique. Finally, as a similarity measure between shapes, we compute the L2 norm and the Hausdorff distance for each sampled mesh pair along rays fired from the extracted skeleton. Because we use a mouse-picking interface for analyzing a local shape interactively, we provide interaction- and multiresolution-based analysis of local shape changes. Our experiments show that the approach is robust to rotation and scale, is especially effective at discriminating changes between local shapes of the hippocampus, and moreover increases the speed of analysis without degrading accuracy by using a hierarchical level-of-detail approach.
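
The two similarity measures named above (L2 norm and Hausdorff distance between corresponding sample points) can be illustrated with a short sketch. The point arrays below stand in for the ray-sampled mesh points and are assumptions for illustration; the sketch does not reproduce the paper's octree or skeleton machinery.

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets given as (N, 3) arrays."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distance matrix
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def mean_l2_distance(a, b):
    """Mean L2 distance between corresponding sample points (same ordering assumed)."""
    return float(np.sqrt(((a - b) ** 2).sum(axis=1)).mean())

# e.g. comparing two sampled hippocampal surfaces of 500 points each:
# h = hausdorff_distance(samples_a, samples_b); l2 = mean_l2_distance(samples_a, samples_b)
```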

A Comprehensive Computer Program for Monitor Unit Calculation and Beam Data Management: Independent Verification of Radiation Treatment Planning Systems (방사선치료계획시스템의 독립적 검증을 위한 선량 계산 및 빔데이터 관리 프로그램)

  • Kim, Hee-Jung;Park, Yang-Kyun;Park, Jong-Min;Choi, Chang-Heon;Kim, Jung-In;Lee, Sang-Won;Oh, Heon-Jin;Lim, Chun-Il;Kim, Il-Han;Ye, Sung-Joon
    • Progress in Medical Physics
    • /
    • v.19 no.4
    • /
    • pp.231-240
    • /
    • 2008
  • We developed a user-friendly program to independently verify monitor units (MUs) calculated by radiation treatment planning systems (RTPS), as well as to manage the beam database in the clinic. The off-axis factor, beam hardening effect, inhomogeneity correction, and depth correction were incorporated into the program algorithm to improve the accuracy of the calculated MUs. The beam database in the program was designed to use measured data from routine quality assurance (QA) processes so that it can be updated in a timely manner. To enhance the user's convenience, a graphical user interface (GUI) was developed using Visual Basic for Applications. In order to evaluate the accuracy of the program for various treatment conditions, MU comparisons were made for 213 phantom cases and for 108 cases from 17 patients treated by 3D conformal radiation therapy. The MUs calculated by the program and by the RTPS agreed within ±3% for the phantom cases and ±5% for the patient cases, except for cases of extreme inhomogeneity. By using Visual Basic for Applications and a Microsoft Excel worksheet interface, the program can automatically generate a beam data book for clinical reference and a comparison template for beam data management. The program developed in this study can be used to verify the accuracy of the RTPS for various treatment conditions and thus can serve as a tool for routine RTPS QA as well as for independent MU checks. In addition, its beam database management interface can update beam data periodically and thus can be used to monitor multiple beam databases efficiently.
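
As an illustration of what an independent MU check computes, here is a minimal point-dose sketch in Python. The factor names and default values are illustrative assumptions; the paper's actual algorithm additionally applies beam-hardening and depth corrections drawn from its measured beam database.

```python
def monitor_units(prescribed_dose_cGy, dose_per_mu_ref_cGy=1.0,
                  output_factor=1.0, pdd_percent=100.0,
                  off_axis_factor=1.0, inhomogeneity_correction=1.0):
    """Simplified independent MU check: prescribed dose divided by the product
    of dosimetric factors relating reference output to dose at the point of interest.

    All names and defaults here are illustrative; a clinical implementation
    would look these factors up from a measured beam database.
    """
    dose_per_mu = (dose_per_mu_ref_cGy * output_factor * (pdd_percent / 100.0)
                   * off_axis_factor * inhomogeneity_correction)
    return prescribed_dose_cGy / dose_per_mu

# e.g. 200 cGy prescription with PDD 67.3% and output factor 0.985 (hypothetical values):
# mu = monitor_units(200, pdd_percent=67.3, output_factor=0.985)
```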


Techniques for Acquisition of Moving Object Location in LBS (위치기반 서비스(LBS)를 위한 이동체 위치획득 기법)

  • Min, Gyeong-Uk;Jo, Dae-Su
    • The KIPS Transactions:PartD
    • /
    • v.10D no.6
    • /
    • pp.885-896
    • /
    • 2003
  • The types of services using location information are becoming more varied and are extending their domain as wireless internet technology develops and its application area spreads, so LBS (Location-Based Services) are expected to be the killer application of wireless internet services. Location information is basic, high value-added information, and services based on it make the existing GIS (Geographic Information System) useful to everybody. The acquisition of location information from moving objects is a very important part of LBS, and the interface for acquiring moving-object locations between the MODB and the telecommunication network is likewise a very important function. When LBS become familiar to everybody, we can expect the LBS system load to become heavy because locations must be acquired for so many subscribers and vehicles; that is, LBS platform performance degrades as the overhead of acquiring moving-object locations between the MODB and the wireless telecommunication network increases. To keep the LBS platform stable, the MODB system should therefore improve the location acquisition part by reducing the number of unnecessary moving-object location acquisitions. We study the problems in acquiring the locations of a huge number of moving objects and design several acquisition models that use the past movement pattern of each object to reduce the telecommunication overhead. After implementing these models, we evaluate the performance of each model.
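
One common way to cut unnecessary location acquisitions, in the spirit of the pattern-based acquisition models described above, is to report a new position only when the object drifts from the position predicted by its recent movement. The sketch below is a hedged illustration of that policy and does not reproduce the paper's specific models; the threshold and coordinate units are assumptions.

```python
import math

def should_report(last_reported_xy, velocity_xy, elapsed_s, current_xy, threshold_m=100.0):
    """Report a new location only when the object has drifted more than
    threshold_m metres from the position predicted by its past movement."""
    predicted = (last_reported_xy[0] + velocity_xy[0] * elapsed_s,
                 last_reported_xy[1] + velocity_xy[1] * elapsed_s)
    drift = math.dist(predicted, current_xy)
    return drift > threshold_m

# e.g. an object last reported at (0, 0) moving at 10 m/s east, checked 30 s later:
# should_report((0, 0), (10, 0), 30, (310, 5))  -> True only if it left the predicted corridor
```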

Study on the channel of bipolar plate for PEM fuel cell (고분자 전해질 연료전지용 바이폴라 플레이트의 유로 연구)

  • Ahn Bum Jong;Ko Jae-Churl;Jo Young-Do
    • Journal of the Korean Institute of Gas
    • /
    • v.8 no.2 s.23
    • /
    • pp.15-27
    • /
    • 2004
  • The purpose of this paper is to improve the performance of a polymer electrolyte fuel cell (PEMFC) by studying the channel dimensions of bipolar plates using the commercial CFD program 'Fluent'. Simulations were performed for sizes ranging from 0.5 to 3.0 mm in order to find the channel size that shows the highest hydrogen consumption. The results showed that the smaller the channel width, land width, and channel depth, the higher the hydrogen consumption in the anode. When the channel width is increased, the pressure drop in the channel decreases because the total channel length decreases, and when the land width is increased, the net hydrogen consumption decreases because hydrogen must diffuse under the land. It was also found that hydrogen consumption is influenced more by changes in channel width than by changes in land width. The change in hydrogen consumption with channel depth is not as large as that with channel width, but the channel depth should be kept as small as possible because it affects the volume of the bipolar plate. Among the channel sizes of 1.0 mm or more that can be machined in practice, hydrogen utilization is highest at a channel width of 1.0 mm, land width of 1.0 mm, and channel depth of 0.5 mm, which is considered the optimum channel size. A fuel cell combining a 2 cm × 2 cm diagonal or serpentine flow field with an MEA (Membrane Electrode Assembly) was tested on a 100 W PEMFC test station to confirm the channel size studied in the simulation. The results showed that the diagonal and serpentine flow fields have similarly high OCVs; the current density of the diagonal flow field is higher (2~40 mA/m²) than that of the serpentine flow field at 0.6 V, but the serpentine type shows higher performance (5~10 mA/m²) than the diagonal flow field at 0.7~0.8 V.


Study(V) on Development of Charts and Equations Predicting Allowable Compressive Bearing Capacity for Prebored PHC Piles Socketed into Weathered Rock through Sandy Soil Layers - Analysis of Results and Data by Parametric Numerical Analysis - (사질토를 지나 풍화암에 소켓된 매입 PHC말뚝에서 지반의 허용압축지지력 산정도표 및 산정공식 개발에 관한 연속 연구(V) - 매개변수 수치해석 자료 분석 -)

  • Park, Mincheol;Kwon, Oh-Kyun;Kim, Chae Min;Yun, Do Kyun;Choi, Yongkyu
    • Journal of the Korean Geotechnical Society
    • /
    • v.35 no.10
    • /
    • pp.47-66
    • /
    • 2019
  • A parametric numerical analysis according to the diameter, length, and soil N values was conducted for PHC piles socketed into weathered rock through sandy soil layers. In the numerical analysis, the Mohr-Coulomb model was applied to the PHC pile and soils, and the contact surfaces among the pile, soil, and cement paste were modeled as interfaces with a virtual thickness. Parametric numerical analyses for 10 pile diameters were executed to obtain the load-settlement relationship and the axial load distribution according to N values. Load-settlement curves were obtained for each load component: total load, total skin friction, skin friction of the sandy soil layer, skin friction of the weathered rock layer, and end bearing resistance of the weathered rock. From analysis of various load levels on the load-settlement curves, the settlements corresponding to the inflection point of each curve were about 5~7% of each pile diameter and were conservatively estimated as 5% of each pile diameter. The load at the inflection point was defined as the mobilized bearing capacity (Q_m) and was used in the analyses of pile bearing capacity. The SRF averaged above 70%, irrespective of the pile diameter, the embedment length, and the N value of the sandy soil layer. Also, the skin frictional resistance of the sandy soil layers accounted for more than 80% of the total skin frictional resistance on average. These results can be used in calculating the bearing capacity of prebored PHC piles, and can also be utilized in developing the bearing capacity prediction method and chart for prebored PHC piles socketed into weathered rock through sandy soil layers.
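
The criterion used above (the mobilized bearing capacity taken as the load at a settlement of 5% of the pile diameter) can be read off a load-settlement curve by simple interpolation, as in the sketch below. The array names and units are assumptions for illustration, not the paper's analysis procedure.

```python
import numpy as np

def mobilized_capacity(load_kN, settlement_mm, diameter_mm):
    """Interpolate the load at a settlement equal to 5% of the pile diameter,
    the conservative settlement criterion described in the abstract.
    settlement_mm must be monotonically increasing for np.interp."""
    target_settlement = 0.05 * diameter_mm
    return float(np.interp(target_settlement, settlement_mm, load_kN))

# e.g. for a hypothetical 500 mm pile, Q_m is the interpolated load at 25 mm settlement:
# q_m = mobilized_capacity(loads, settlements, 500)
```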

Development of A Material Flow Model for Predicting Nano-TiO2 Particles Removal Efficiency in a WWTP (하수처리장 내 나노 TiO2 입자 제거효율 예측을 위한 물질흐름모델 개발)

  • Ban, Min Jeong;Lee, Dong Hoon;Shin, Sangwook;Lee, Byung-Tae;Hwang, Yu Sik;Kim, Keugtae;Kang, Joo-Hyon
    • Journal of Wetlands Research
    • /
    • v.24 no.4
    • /
    • pp.345-353
    • /
    • 2022
  • A wastewater treatment plant (WWTP) is a major gateway for engineered nano-particles (ENPs) entering water bodies. However, existing studies have reported that many WWTPs exceed the No Observed Effect Concentration (NOEC) for ENPs in the effluent, and thus they need to be designed or operated to control ENPs more effectively. Understanding and predicting ENP behavior in the unit processes and in the whole process of a WWTP is the key first step in developing strategies for controlling ENPs in a WWTP. This study aims to provide a modeling tool for predicting the behavior and removal efficiency of ENPs in a WWTP in relation to process characteristics and major operating conditions. In the developed model, four unit processes for water treatment (primary clarifier, bioreactor, secondary clarifier, and tertiary treatment unit) were considered. Additionally, the model simulates the sludge treatment system as a single process that integrates multiple unit processes, including thickeners, digesters, and dewatering units. The simulated ENP was nano-sized TiO2 (nano-TiO2), assuming that its behavior in a WWTP is dominated by attachment to suspended solids (SS), while dissolution and transformation are insignificant. The attachment of nano-TiO2 to SS was incorporated into the model equations using the apparent solid-liquid partition coefficient (Kd) under the assumption of equilibrium between the solid and liquid phases, and a steady-state condition of nano-TiO2 was assumed. Furthermore, an MS Excel-based user interface was developed to provide a user-friendly environment for the nano-TiO2 removal efficiency calculations. Using the developed model, a preliminary simulation was conducted to examine how the solid retention time (SRT), a major operating variable, affects the removal efficiency of nano-TiO2 particles in a WWTP.
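
A minimal sketch of the equilibrium partitioning idea behind the model follows: the fraction of nano-TiO2 attached to suspended solids is computed from Kd, and attached particles are assumed to leave with the settled solids. The function names, units, and the coupling to a solids removal efficiency are illustrative assumptions, not the paper's full mass-balance equations.

```python
def particulate_fraction(kd_L_per_kg, ss_mg_per_L):
    """Equilibrium fraction of nano-TiO2 attached to suspended solids,
    f_p = Kd*SS / (1 + Kd*SS), with SS converted from mg/L to kg/L."""
    ss_kg_per_L = ss_mg_per_L * 1e-6
    k = kd_L_per_kg * ss_kg_per_L
    return k / (1.0 + k)

def unit_removal(kd_L_per_kg, ss_mg_per_L, solids_removal_eff):
    """Assume nano-TiO2 attached to solids is removed together with the settled solids."""
    return particulate_fraction(kd_L_per_kg, ss_mg_per_L) * solids_removal_eff

# e.g. hypothetical values: Kd = 1e5 L/kg, SS = 200 mg/L, 60% solids removal in a clarifier
# removal = unit_removal(1e5, 200, 0.60)
```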

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that the system can continue to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes in the case wherein the stored data are distributed to various nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. The data models of NoSQL databases are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance is carried out against a log data processing system that uses only MySQL; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
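
To make the routing role of the log collector module concrete, here is a hedged Python sketch using pymongo. The connection string, database name, entry fields, and the MySQL helper are assumptions for illustration; the real system additionally feeds the Hadoop-based analysis module and the log graph generator module.

```python
from pymongo import MongoClient

# Illustrative endpoint; the paper does not specify deployment details.
mongo = MongoClient("mongodb://localhost:27017")
log_db = mongo["bank_logs"]

def collect(entry: dict) -> None:
    """Route one log entry the way the log collector module is described:
    entries needing real-time analysis go to the relational store,
    everything else is stored as a free-schema document in MongoDB."""
    if entry.get("realtime"):
        store_in_mysql(entry)                       # hypothetical helper for the MySQL module
    else:
        log_db[entry["type"]].insert_one(entry)     # one collection per log type

def store_in_mysql(entry: dict) -> None:
    # Placeholder for an INSERT issued through a MySQL connector.
    pass

# e.g. collect({"type": "transaction", "realtime": False, "msg": "transfer completed"})
```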