• Title/Summary/Keyword: 코드 최적화 (code optimization)


A Study on the Development of Intravenous Injection Management Application for EMR System Interworking (EMR 시스템 연동 정맥주사 관리 애플리케이션 개발에 대한 연구)

  • Jin-Hyoung, Jeong;Jae-Hyun, Jo;Seung-Hun, Kim;Won-yeop, Park;Sang-Sik, Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.6 / pp.506-514 / 2022
  • This paper describes the development of an intravenous injection management system that provides nurses with intravenous-injection information in real time, to compensate for possible instability factors during intravenous injection. The system consists of app-based user S/W and web-based administrator S/W. The user S/W lets users identify patients who need intravenous injection through smartphones, tablet PCs, and nursing PDAs, recognize the information codes given to patients, and enter and share the treatment contents and treatment items after intravenous injection. From the treatment results uploaded through the user app, the administrator S/W can review the records of intravenous injection treatment items and also provides user management, emergency notification registration and management, and data upload functions. The implemented system has not yet been tested against the EMR system used in an actual hospital; through further research, the S/W will be optimized and tested in a real environment in cooperation with hospitals.
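
The two-part architecture described above (app-based user S/W feeding a web-based administrator S/W) can be pictured with a minimal data-flow sketch; every name, field, and storage choice below is hypothetical, since the paper does not publish its interfaces.

```python
# Hypothetical sketch of the user-app -> administrator S/W data flow;
# all names, fields, and the in-memory store are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TreatmentRecord:
    patient_code: str           # information code recognized for the patient
    treatment_items: list[str]  # items entered after intravenous injection
    notes: str = ""
    timestamp: datetime = field(default_factory=datetime.now)

class AdminStore:
    """Stands in for the web-based administrator S/W record store."""
    def __init__(self) -> None:
        self.records: list[TreatmentRecord] = []

    def upload(self, record: TreatmentRecord) -> None:
        self.records.append(record)

    def records_for(self, patient_code: str) -> list[TreatmentRecord]:
        return [r for r in self.records if r.patient_code == patient_code]

# A user app scans a patient's code and uploads the treatment result.
store = AdminStore()
store.upload(TreatmentRecord("PT-0001", ["IV catheter 22G", "saline 500 mL"]))
print(store.records_for("PT-0001"))
```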

Path Algorithm for Maximum Tax-Relief in Maximum Profit Tax Problem of Multinational Corporation (다국적기업 최대이익 세금트리 문제의 최대 세금경감 경로 알고리즘)

  • Sang-Un Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.157-164 / 2023
  • This paper proposes an O(n²) polynomial-time heuristic algorithm for the corporate tax structure optimization problem, which has been classified as NP-complete. The proposed algorithm constructs a tax tree in which the target holding company is located at the root node on level 1, and the tax code categories (Te) 1, 4, 3, 2 are located on levels 2, 3, 4, 5, respectively. To find the maximum tax-relief path from a source (S) to the target (T), we first connect, from the point of view of each node u, the arc with the minimum withholding tax rate min rw(u, v) for transferring profit from node u to node v. This yields a spanning tree from all source nodes to the target node and gives an initial feasible solution. Next, we find an alternate path with the minimum foreign tax rate min rfi(u, v) from the point of view of v. Finally, we choose the path with the greater tax relief of the two. The proposed heuristic algorithm obtains better results than linear programming and the Tabu search metaheuristic.
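
The two-pass construction described above can be sketched roughly as follows, assuming a level-structured graph with complete arcs between adjacent levels and taking the combined withholding-plus-foreign rate as the quantity to minimize; none of this is the paper's exact formulation.

```python
# Rough sketch of the two-path heuristic summarized in the abstract.
# Graph layout, rate tables, and the relief measure are assumptions.

def path_cost(path, rate):
    """Total tax rate paid along a path, given a per-arc rate table."""
    return sum(rate[(u, v)] for u, v in zip(path, path[1:]))

def best_path(levels, rw, rfi):
    """levels: node lists from the source level down to the target level.
    rw / rfi: withholding / foreign tax rate per arc (u, v)."""
    # Pass 1: from each node u, follow the arc with minimum withholding rate.
    path_w = [levels[0][0]]
    for nxt in levels[1:]:
        u = path_w[-1]
        path_w.append(min(nxt, key=lambda v: rw[(u, v)]))
    # Pass 2: alternate path choosing the minimum foreign-tax-rate arcs.
    path_f = [levels[0][0]]
    for nxt in levels[1:]:
        u = path_f[-1]
        path_f.append(min(nxt, key=lambda v: rfi[(u, v)]))
    # Keep whichever path pays less total tax, i.e. gives more relief.
    cost = lambda p: path_cost(p, rw) + path_cost(p, rfi)
    return min((path_w, path_f), key=cost)

levels = [["S"], ["a", "b"], ["T"]]
rw = {("S", "a"): .05, ("S", "b"): .10, ("a", "T"): .08, ("b", "T"): .02}
rfi = {("S", "a"): .12, ("S", "b"): .03, ("a", "T"): .06, ("b", "T"): .07}
print(best_path(levels, rw, rfi))  # -> ['S', 'b', 'T']
```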

Development of an Algorithm for Automatic Quantity Take-off of Slab Rebar (슬래브 철근 물량 산출 자동화 알고리즘 개발)

  • Kim, Suhwan;Kim, Sunkuk;Suh, Sangwook;Kim, Sangchul
    • Korean Journal of Construction Engineering and Management / v.24 no.5 / pp.52-62 / 2023
  • The objective of this study is to propose an automated algorithm that computes precise cutting lengths of slab rebar in compliance with regulations on anchorage length, standard hooks, and lapping length. The algorithm aims to improve the traditional manual quantity take-off process, typically outsourced to external contractors. By providing accurate rebar quantity data at the BBS (Bar Bending Schedule) level from the bidding phase, uncertainty in quantity take-off can be eliminated and reliance on outsourcing reduced. In addition, the algorithm allows precise quantities to be determined early, enabling construction firms to prepare competitive, optimized bids and to increase profit margins during contract negotiations. The proposed algorithm not only streamlines redundant tasks across various processes, including estimating, budgeting, and BBS generation, but also offers flexibility in handling post-contract changes to structural drawings. In particular, when combined with BIM, the algorithm can solve the technical problems of using BIM in the early phases of construction, and its formulas and shape codes, built as Revit-based family files, can help save time and manpower.
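
As a rough illustration of the cutting-length logic the algorithm automates, the sketch below adds anchorage, hook, and lap allowances to the clear span; the multipliers are generic placeholders, not the regulation values the paper implements.

```python
# Illustrative cutting-length calculation for one slab bar; the factors
# are placeholders, not the code-specific values the paper applies.
def cutting_length(clear_span_mm: float, bar_dia_mm: float,
                   n_hooks: int = 2, n_laps: int = 0,
                   anchorage_factor: float = 40.0,  # anchorage = factor * d
                   hook_factor: float = 12.0,       # hook allowance per hook
                   lap_factor: float = 50.0) -> float:  # lap length = factor * d
    anchorage = 2 * anchorage_factor * bar_dia_mm   # both ends anchored
    hooks = n_hooks * hook_factor * bar_dia_mm
    laps = n_laps * lap_factor * bar_dia_mm
    return clear_span_mm + anchorage + hooks + laps

# Example: 6 m clear span, D13 bar, two hooks, one lap splice.
print(cutting_length(6000, 13, n_hooks=2, n_laps=1))  # -> 8002.0 mm
```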

BIM Mesh Optimization Algorithm Using K-Nearest Neighbors for Augmented Reality Visualization (증강현실 시각화를 위해 K-최근접 이웃을 사용한 BIM 메쉬 경량화 알고리즘)

  • Pa, Pa Win Aung;Lee, Donghwan;Park, Jooyoung;Cho, Mingeon;Park, Seunghee
    • KSCE Journal of Civil and Environmental Engineering Research / v.42 no.2 / pp.249-256 / 2022
  • Various studies are actively showing that real-time visualization technology combining BIM (Building Information Modeling) and AR (Augmented Reality) helps increase the efficiency of construction-management decision-making and processing. However, when large BIM data sets are projected into AR, there are various limitations, such as data transmission and connection problems and image cut-off issues. To improve visualization efficiency, a mesh optimization algorithm based on the k-nearest neighbors (KNN) classification framework is proposed to reconstruct BIM data, in place of existing mesh optimization methods, which are complicated and cannot adequately handle meshes with the numerous boundaries found in 3D models. In the proposed algorithm, the target BIM model is optimized with Unity C# code based on triangle-centroid concepts and classified using KNN. The algorithm can report the number of mesh vertices and triangles before and after optimization for the entire model and for each structure, and it reduces the mesh vertices of the original model by approximately 56 % and the triangles by about 42 %. Moreover, the optimized model shows no visual differences from the original in its model elements and information, meaning that high-performance visualization can be expected when using AR devices.
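
The paper's pipeline runs as Unity C# code; the sketch below is only a NumPy rendering of the centroid-plus-KNN idea (triangle centroids, then each triangle's k nearest centroid neighbors), with the grouping rule assumed for illustration.

```python
import numpy as np

# Rough sketch of centroid-based KNN triangle grouping; the grouping and
# any vertex-collapse rule are illustrative, not the paper's exact method.
def centroids(vertices: np.ndarray, triangles: np.ndarray) -> np.ndarray:
    """Centroid of each triangle: mean of its three vertices."""
    return vertices[triangles].mean(axis=1)

def knn_groups(cent: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k nearest triangle centroids (brute force).
    Each row lists the triangle itself first, then its neighbors."""
    d = np.linalg.norm(cent[:, None, :] - cent[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, :k]

# Toy mesh: 4 vertices, 2 triangles sharing an edge.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
T = np.array([[0, 1, 2], [1, 3, 2]])
print(knn_groups(centroids(V, T), k=2))
```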

A Study on Load-carrying Capacity Design Criteria of Jack-up Rigs under Environmental Loading Conditions (환경하중을 고려한 Jack-up rig의 내하력 설계 기준에 대한 연구)

  • Park, Joo Shin;Ha, Yeon Chul;Seo, Jung Kwan
    • Journal of the Korean Society of Marine Environment & Safety / v.26 no.1 / pp.103-113 / 2020
  • Jack-up drilling rigs are widely used in the offshore oil and gas exploration industry. Although originally designed for use in shallow waters, trends in the energy industry have led to growing demand for their use in deep seas and harsh environmental conditions. To extend the operating range of jack-up units, their design must be based on reliable analysis while eliminating excessive conservatism. In current industrial practice, jack-up drilling rigs are designed using the working (or allowable) stress design (WSD) method. Recently, classification societies have developed specific regulations based on the load and resistance factor design (LRFD) method, which emphasises reliability. This statistical method utilises the concept of limit-state design and uses factored loads and resistance factors to account for uncertainty in the loads and the computed strength of the leg components of a jack-up drilling rig. The key differences between the LRFD and WSD methods must be identified to enable appropriate use of the LRFD method for designing jack-up rigs. Therefore, the aim of this study is to compare and quantitatively investigate the differences between actual jack-up lattice leg structures designed by the WSD and LRFD methods and subjected to different environmental-load-to-dead-load ratios, thereby delineating the load-to-capacity ratios of rigs designed using these methods under these different environmental conditions. The comparative results are significantly advantageous for the leg design of jack-up rigs, and show that jack-up rigs designed using the WSD and LRFD methods differ in UC (unity check) values by approximately 31 % with respect to the API-RP code basis. The LRFD design method is thus more advantageous for structural optimization than the WSD method.
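
To make the WSD/LRFD contrast concrete, the sketch below compares the two unity-check forms; the safety, load, and resistance factors are generic illustrative values, not the API-RP factors used in the study.

```python
# Illustrative unity checks; the factor values are generic, not API-RP's.
def uc_wsd(dead, env, resistance, safety_factor=1.67):
    """WSD: total working load against allowable stress (resistance / SF)."""
    return (dead + env) / (resistance / safety_factor)

def uc_lrfd(dead, env, resistance, g_dead=1.1, g_env=1.35, phi=0.9):
    """LRFD: factored loads against factored resistance."""
    return (g_dead * dead + g_env * env) / (phi * resistance)

# The gap between the two UCs grows with the environmental-to-dead load ratio.
for ratio in (0.5, 1.0, 2.0):
    d, e, r = 100.0, 100.0 * ratio, 400.0
    print(ratio, round(uc_wsd(d, e, r), 3), round(uc_lrfd(d, e, r), 3))
```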

A 0.31pJ/conv-step 13b 100MS/s 0.13um CMOS ADC for 3G Communication Systems (3G 통신 시스템 응용을 위한 0.31pJ/conv-step의 13비트 100MS/s 0.13um CMOS A/D 변환기)

  • Lee, Dong-Suk;Lee, Myung-Hwan;Kwon, Yi-Gi;Lee, Seung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD / v.46 no.3 / pp.75-85 / 2009
  • This work proposes a 13b 100 MS/s 0.13 um CMOS ADC for 3G communication systems, such as two-carrier W-CDMA applications, that simultaneously require high resolution, low power, and small size at high speed. The proposed ADC employs a four-step pipeline architecture to optimize power consumption and chip area at the target resolution and sampling rate. Area-efficient high-speed, high-resolution gate-bootstrapping circuits are implemented at the sampling switches of the input SHA to maintain signal linearity over the Nyquist rate even at 1.0 V supply operation. The cascode compensation technique on a low-impedance path, implemented in the two-stage amplifiers of the SHA and MDAC, simultaneously achieves the required operation speed and phase margin with lower power consumption than the Miller compensation technique. Low-glitch dynamic latches in the sub-ranging flash ADCs reduce the kickback noise referred to the differential input stage of the comparator by isolating the input stage from the output nodes, improving system accuracy. The proposed low-noise current and voltage references, based on triple negative-T.C. circuits, are employed on chip, with optional off-chip reference voltages. The prototype ADC in a 0.13 um 1P8M CMOS technology demonstrates measured DNL and INL within 0.70 LSB and 1.79 LSB, respectively. The ADC shows a maximum SNDR of 64.5 dB and a maximum SFDR of 78.0 dB at 100 MS/s. With an active die area of 1.22 mm², the ADC consumes 42.0 mW at 100 MS/s and a 1.2 V supply, corresponding to a FOM of 0.31 pJ/conv-step.
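
The reported figure of merit is consistent with the standard Walden formula, FOM = P / (2^ENOB · f_s), with ENOB taken from the peak SNDR; a quick check against the abstract's numbers:

```python
# Check the reported 0.31 pJ/conv-step FOM from the abstract's figures.
sndr_db = 64.5      # peak SNDR, dB
power_w = 42.0e-3   # 42.0 mW at a 1.2 V supply
fs = 100e6          # 100 MS/s

enob = (sndr_db - 1.76) / 6.02      # effective number of bits
fom = power_w / (2 ** enob * fs)    # Walden FOM, J per conversion-step
print(f"ENOB = {enob:.2f} b, FOM = {fom * 1e12:.2f} pJ/conv-step")
# -> ENOB = 10.42 b, FOM = 0.31 pJ/conv-step
```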

Optimized Implementation of Block Cipher PIPO in Parallel-Way on 64-bit ARM Processors (64-bit ARM 프로세서 상에서의 블록암호 PIPO 병렬 최적 구현)

  • Eum, Si Woo;Kwon, Hyeok Dong;Kim, Hyun Jun;Jang, Kyoung Bae;Kim, Hyun Ji;Park, Jae Hoon;Song, Gyeung Ju;Sim, Min Joo;Seo, Hwa Jeong
    • KIPS Transactions on Computer and Communication Systems / v.10 no.8 / pp.223-230 / 2021
  • The lightweight block cipher PIPO, announced at ICISC'20, has been implemented effectively by applying the bit-slice technique. In this paper, we propose a parallel optimized implementation of PIPO for ARM processors that enables parallel encryption of 8 plaintexts and 16 plaintexts. The implementation targets the A10X Fusion processor. On the target processor, the existing reference PIPO code performs at 34.6 cpb and 44.7 cpb for the 64/128 and 64/256 parameter sets, respectively. Among the proposed methods, the general implementation achieves 12.0 cpb and 15.6 cpb for the 8-plaintext 64/128 and 64/256 sets, and 6.3 cpb and 8.1 cpb for the 16-plaintext sets. Compared with the reference code, the 8-plaintext parallel implementation performs about 65.3 % and 66.4 % better for the respective parameter sets, and the 16-plaintext parallel implementation about 81.8 % and 82.1 % better. The register-minimum-alignment implementation shows 8.2 cpb and 10.2 cpb for the 8-plaintext 64/128 and 64/256 sets, and 3.9 cpb and 4.8 cpb for the 16-plaintext sets; compared with the reference code, these are improvements of about 76.3 % and 77.2 % for the 8-plaintext case and about 88.7 % and 89.3 % for the 16-plaintext case.
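
The quoted percentage gains are relative cycle-per-byte reductions against the reference code; the arithmetic can be reproduced directly (small deviations from the quoted figures reflect rounding of the cpb values):

```python
# Reproduce the quoted improvements: (reference - optimized) / reference.
ref = {"64/128": 34.6, "64/256": 44.7}   # reference cpb from the abstract
opt = {
    ("general, 8-pt", "64/128"): 12.0, ("general, 8-pt", "64/256"): 15.6,
    ("general, 16-pt", "64/128"): 6.3,  ("general, 16-pt", "64/256"): 8.1,
    ("min-reg, 8-pt", "64/128"): 8.2,   ("min-reg, 8-pt", "64/256"): 10.2,
    ("min-reg, 16-pt", "64/128"): 3.9,  ("min-reg, 16-pt", "64/256"): 4.8,
}
for (impl, params), cpb in opt.items():
    gain = (ref[params] - cpb) / ref[params] * 100
    print(f"{impl:15s} {params}: {gain:.1f}% faster")
```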

A Study on Characteristics of Lincomycin Degradation by Optimized TiO2/HAP/Ge Composite using Mixture Analysis (혼합물분석을 통해 최적화된 TiO2/HAP/Ge 촉매를 이용한 Lincomycin 제거특성 연구)

  • Kim, Dongwoo;Chang, Soonwoong
    • Journal of the Korean GEO-environmental Society / v.15 no.1 / pp.63-68 / 2014
  • In this study, the photocatalytic degradation of the antibiotic lincomycin (LM) was determined with various catalyst composites of titanium dioxide (TiO₂), hydroxyapatite (HAP), and germanium (Ge) under UV-A irradiation. First, various types of composite catalysts were investigated to compare their photocatalytic potential; the removal efficiencies were ordered TiO₂/HAP/Ge > TiO₂/Ge > TiO₂/HAP. The composition of TiO₂/HAP/Ge was then investigated using a statistical approach based on mixture analysis design, a type of response surface method. The independent variables TiO₂ (X₁), HAP (X₂), and Ge (X₃), with six conditions for each variable, were set up to determine their effects on LM (Y₁) and TOC (Y₂) degradation. Regression analysis with analysis of variance (ANOVA) showed significant p-values (p < 0.05) and high coefficients of determination (R² of Y₁ = 99.28 % and R² of Y₂ = 98.91 %). Contour plots and response curves showed the effects of the TiO₂/HAP/Ge composition on LM degradation under UV-A irradiation, and the estimated optimal composition for TOC removal (Y₂) was X₁ = 0.6913, X₂ = 0.2313, and X₃ = 0.0756 in coded values. In actual application tests, the experimental results were in good agreement with the model's predictions, with mean LM and TOC removal of 99.2 % and 49.3 %, respectively.
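
Mixture analysis of this kind typically fits a Scheffé-type polynomial over component fractions that sum to one; a minimal NumPy sketch of such a fit is shown below, with fabricated design points and responses standing in for the paper's data.

```python
import numpy as np

# Minimal Scheffé quadratic mixture-model fit; the design points and
# responses are fabricated placeholders, not the paper's measurements.
X = np.array([  # component fractions (TiO2, HAP, Ge); each row sums to 1
    [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5],
])
y = np.array([80.0, 40.0, 30.0, 75.0, 70.0, 45.0])  # e.g., removal %

x1, x2, x3 = X.T
# Scheffé quadratic: b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3
A = np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 2))
```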

Parallel Processing of Satellite Images using CUDA Library: Focused on NDVI Calculation (CUDA 라이브러리를 이용한 위성영상 병렬처리 : NDVI 연산을 중심으로)

  • LEE, Kang-Hun;JO, Myung-Hee;LEE, Won-Hee
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.3 / pp.29-42 / 2016
  • Remote sensing allows acquisition of information across a large area without contacting objects and has thus been rapidly developed and applied in many fields. With this development, satellite image resolution has advanced rapidly as well, and remote sensing satellites are now used for research in many areas of the world. However, while remote sensing research spans many fields, research on data processing remains insufficient; as satellite resources develop further, data processing continues to lag behind. Accordingly, this paper discusses how to maximize the performance of satellite image processing using NVIDIA's CUDA (Compute Unified Device Architecture) library, a parallel-processing technology. The discussion proceeds as follows. First, standard KOMPSAT (Korea Multi-Purpose Satellite) images of various sizes are subdivided into five types, and the NDVI (Normalized Difference Vegetation Index) is computed on the subdivided images. Next, NDVI is implemented with ArcMap and with two further techniques, one CPU-based and one GPU-based, and the histograms of the resulting images are compared to verify correctness and to analyze the processing speeds of the CPU and GPU versions. The results indicate that both the CPU-version and GPU-version images match the ArcMap images, and the histogram comparison confirms that the NDVI code was implemented correctly. In terms of processing speed, the GPU was about five times faster than the CPU. This research therefore shows that a parallel-processing technique using the CUDA library can increase the data-processing speed for satellite images, and the benefit should be even greater for more advanced remote sensing techniques than for a simple per-pixel computation like NDVI.
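
NDVI itself is a simple per-pixel ratio, NDVI = (NIR - Red) / (NIR + Red), which is why it parallelizes so readily; a NumPy reference version is sketched below (the band arrays are hypothetical), and a CUDA kernel evaluates the same expression per pixel.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), computed element-wise."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    denom = nir + red
    # Guard against division by zero where both bands are 0.
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Hypothetical 4x4 band values standing in for KOMPSAT imagery.
nir = np.random.randint(0, 255, (4, 4))
red = np.random.randint(0, 255, (4, 4))
print(ndvi(nir, red))
```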

Evaluation of Spatial Dose Rate in Working Environment during Non-Destructive Testing using Radioactive Isotopes (방사성동위원소를 이용한 비파괴 검사 시 작업환경 내 공간선량률 평가)

  • Cho, Yong-In;Kim, Jung-Hoon;Bae, Sang-Il
    • Journal of the Korean Society of Radiology / v.16 no.4 / pp.373-379 / 2022
  • The radiation sources used for non-destructive testing are penetrating and produce scattered radiation through collisions with surrounding materials, which changes the surrounding spatial dose. Therefore, this study evaluated and analyzed the distribution of spatial dose by source in the working environment during non-destructive testing using Monte Carlo simulation. The simulation code FLUKA was used to model the 60Co, 192Ir, and 75Se sources used in non-destructive testing, and the reliability of the source term was secured by comparing the calculated dose rates with data from the Health and Physics Association. A non-destructive test in a radiation safety facility (RT room) was then designed to evaluate the spatial dose as a function of distance from the source. In the spatial dose evaluation, the 75Se source showed the lowest dose distribution at the frontal position, and the 60Co source showed a dose rate about 15 times higher than that of 75Se and about 2 times higher than that of 192Ir. In addition, the spatial dose tends to decrease with the inverse square of the distance as the distance from the source increases; exceptionally, all three sources showed a slight increase within 2 m of the source. The results of this study are expected to serve as supplementary data for the safety management of workers in radiation safety facilities during non-destructive testing using radioactive isotopes.
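
The distance trend reported here is the inverse square law, D(r₂) = D(r₁) · (r₁ / r₂)²; a short check with a hypothetical near-field dose rate:

```python
# Inverse square law: dose rate falls with the square of the distance
# from a point source; the 100 uSv/h starting value is hypothetical.
def dose_at(d1_uSv_h: float, r1_m: float, r2_m: float) -> float:
    """Scale a dose rate measured at r1 to distance r2 (point source)."""
    return d1_uSv_h * (r1_m / r2_m) ** 2

for r in (1, 2, 4, 8):
    print(f"{r} m: {dose_at(100.0, 1.0, r):.1f} uSv/h")
```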