• Title/Summary/Keyword: Control Software


Evaluation of Vertical Vibration Performance of Tridimensional Hybrid Isolation System for Traffic Loads (교통하중에 대한 3차원 하이브리드 면진시스템의 수직 진동성능 평가)

  • Yonghun Lee;Sang-Hyun Lee;Moo-Won Hur
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.28 no.1
    • /
    • pp.70-81
    • /
    • 2024
  • In this study, the Tridimensional Hybrid Isolation System (THIS), combining vertical and horizontal isolation systems, was proposed as a vibration isolator for traffic loads, and its efficacy in improving serviceability for vertical vibration was analytically evaluated. Firstly, the major vibration modes of an existing apartment were identified through eigenvalue analysis of the system and pulse response analysis of the bedroom slab using commercial structural analysis software. Subsequently, a 16-story model with horizontal, vertical, and rotational degrees of freedom for each slab was numerically organized to represent the identified modes. Dynamic analysis for acceleration measured on ground adjacent to a high-speed railway was performed with state-space equations, taking the stiffness and damping ratio of THIS as variables. The results indicated that the slab responses started to be suppressed once the vertical period ratio exceeded a threshold value. Specifically, when the period ratio was greater than or equal to 5, the acceleration levels of all slabs decreased to approximately 70% or less of the non-isolated condition. On the other hand, the influence of the damping ratio on the response control of THIS was found to be inconsequential in the analysis. Finally, the improvement in vertical vibration performance with THIS was evaluated according to the design guidelines for floor vibration of AIJ, SCI, and AISC. It was confirmed that, after the application of THIS, the residential performance criteria were met, whereas the non-isolated structure failed to satisfy them.
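
As a rough illustration of the state-space analysis described in the abstract (not the authors' 16-story model), the sketch below simulates a single vertically isolated slab under a stand-in ground acceleration; the mass, periods, and damping value are illustrative assumptions.

```python
# Minimal sketch (not the authors' model): a single vertically isolated slab
# excited by ground acceleration, integrated with a state-space model
# x' = Ax + Bu.  Mass, periods, and damping ratio are placeholders.
import numpy as np
from scipy.signal import lsim, StateSpace

m = 2.0e5            # slab mass [kg] (assumed)
T_ratio = 5.0        # vertical period ratio examined in the paper
T_slab = 0.05        # bare-slab vertical period [s] (assumed)
k = m * (2 * np.pi / (T_ratio * T_slab)) ** 2   # isolator stiffness
zeta = 0.02          # isolator damping ratio (assumed)
c = 2 * zeta * np.sqrt(k * m)

# Relative-coordinate equation of motion: m*z'' + c*z' + k*z = -m*ag(t)
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [-1.0]])
C = np.array([[-k / m, -c / m]])   # absolute slab acceleration = z'' + ag
D = np.array([[0.0]])
sys = StateSpace(A, B, C, D)

t = np.linspace(0, 10, 5001)
ag = 0.05 * np.sin(2 * np.pi * 8 * t)   # stand-in for the measured record
_, acc_abs, _ = lsim(sys, U=ag, T=t)
print("peak slab acceleration:", np.abs(acc_abs).max())
```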

Deep Learning-Based Computed Tomography Image Standardization to Improve Generalizability of Deep Learning-Based Hepatic Segmentation

  • Seul Bi Lee;Youngtaek Hong;Yeon Jin Cho;Dawun Jeong;Jina Lee;Soon Ho Yoon;Seunghyun Lee;Young Hun Choi;Jung-Eun Cheon
    • Korean Journal of Radiology
    • /
    • v.24 no.4
    • /
    • pp.294-304
    • /
    • 2023
  • Objective: We aimed to investigate whether image standardization using deep learning-based computed tomography (CT) image conversion would improve the performance of deep learning-based automated hepatic segmentation across various reconstruction methods. Materials and Methods: We collected contrast-enhanced dual-energy CT of the abdomen that was obtained using various reconstruction methods, including filtered back projection, iterative reconstruction, optimum contrast, and monoenergetic images with 40, 60, and 80 keV. A deep learning-based image conversion algorithm was developed to standardize the CT images using 142 CT examinations (128 for training and 14 for tuning). A separate set of 43 CT examinations from 42 patients (mean age, 10.1 years) was used as the test data. A commercial software program (MEDIP PRO v2.0.0.0, MEDICALIP Co. Ltd.) based on 2D U-NET was used to create liver segmentation masks with liver volume. The original 80 keV images were used as the ground truth. We used the paired t-test to compare the segmentation performance in the Dice similarity coefficient (DSC) and the difference ratio of the liver volume relative to the ground-truth volume before and after image standardization. The concordance correlation coefficient (CCC) was used to assess the agreement between the segmented liver volume and the ground-truth volume. Results: The original CT images showed variable and poor segmentation performance. The standardized images achieved significantly higher DSCs for liver segmentation than the original images (DSC [original, 5.40%-91.27%] vs. [standardized, 93.16%-96.74%], all P < 0.001). The difference ratio of liver volume also decreased significantly after image conversion (original, 9.84%-91.37% vs. standardized, 1.99%-4.41%). In all protocols, CCCs improved after image conversion (original, -0.006-0.964 vs. standardized, 0.990-0.998). Conclusion: Deep learning-based CT image standardization can improve the performance of automated hepatic segmentation using CT images reconstructed by various methods. Deep learning-based CT image conversion may have the potential to improve the generalizability of the segmentation network.
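
The evaluation metrics named in the abstract are standard; the sketch below shows one way to compute the Dice similarity coefficient and the liver-volume difference ratio from binary masks. The array names and voxel volume are illustrative, not taken from the paper's software.

```python
# Minimal sketch of the evaluation metrics mentioned above (Dice similarity
# coefficient and liver-volume difference ratio); values are illustrative.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def volume_diff_ratio(pred: np.ndarray, gt: np.ndarray, voxel_ml: float) -> float:
    """|V_pred - V_gt| / V_gt expressed as a percentage."""
    v_pred, v_gt = pred.sum() * voxel_ml, gt.sum() * voxel_ml
    return abs(v_pred - v_gt) / v_gt * 100.0

# toy example with synthetic masks
gt = np.zeros((10, 10, 10), dtype=bool);  gt[2:8, 2:8, 2:8] = True
pred = np.zeros_like(gt);                 pred[3:8, 2:8, 2:8] = True
print(dice(pred, gt), volume_diff_ratio(pred, gt, voxel_ml=0.001))
```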

The Workflow for Computational Analysis of Single-cell RNA-sequencing Data (단일 세포 RNA 시퀀싱 데이터에 대한 컴퓨터 분석의 작업과정)

  • Sung-Hun WOO;Byung Chul JUNG
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.56 no.1
    • /
    • pp.10-20
    • /
    • 2024
  • RNA sequencing (RNA-seq) is a technique for profiling global transcriptome patterns in samples. However, it can only provide the average gene expression across cells and does not address the heterogeneity within the samples. Advances in single-cell RNA sequencing (scRNA-seq) technology have revolutionized our understanding of heterogeneity and the dynamics of gene expression at the single-cell level. For example, scRNA-seq allows us to identify the cell types in complex tissues, which can provide information about changes in the cell population caused by perturbations such as genetic modification. Since its initial introduction, scRNA-seq has rapidly become popular, leading to the development of a large number of bioinformatic tools. However, analyzing the large datasets generated by scRNA-seq requires a general understanding of dataset preprocessing and a variety of analytical techniques. Here, we present an overview of the workflow involved in analyzing an scRNA-seq dataset. First, we describe the preprocessing of the dataset, including quality control, normalization, and dimensionality reduction. Then, we introduce the downstream analyses provided by the most commonly used computational packages. This review aims to provide a workflow guideline for new researchers interested in this field.
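
As a compact illustration of the workflow outlined above, the following sketch runs the usual quality control, normalization, dimensionality reduction, and downstream clustering steps using Scanpy as one example package; the review surveys several tools, and the thresholds and input path here are illustrative.

```python
# Sketch of a typical scRNA-seq workflow (QC -> normalization ->
# dimensionality reduction -> clustering), here with Scanpy as one example
# package; thresholds are illustrative defaults, not prescriptions.
import scanpy as sc

adata = sc.read_10x_mtx("filtered_feature_bc_matrix/")  # path is illustrative

# Quality control: drop low-quality cells and rarely detected genes
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)

# Normalization and log transform
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Feature selection, scaling, and dimensionality reduction
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, n_comps=50)

# Downstream analysis: neighborhood graph, clustering, embedding
sc.pp.neighbors(adata, n_neighbors=15, n_pcs=50)
sc.tl.leiden(adata)
sc.tl.umap(adata)
```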

Uncertainty Calculation Algorithm for the Estimation of the Radiochronometry of Nuclear Material (핵물질 연대측정을 위한 불확도 추정 알고리즘 연구)

  • JaeChan Park;TaeHoon Jeon;JungHo Song;MinSu Ju;JinYoung Chung;KiNam Kwon;WooChul Choi;JaeHak Cheong
    • Journal of Radiation Industry
    • /
    • v.17 no.4
    • /
    • pp.345-357
    • /
    • 2023
  • Nuclear forensics is recognized internationally as a mandatory component of nuclear material control and non-proliferation verification. Radiochronometry for nuclear forensics uses the decay-series characteristics of nuclear materials and the Bateman equation to estimate when the materials were purified and produced. Radiochronometry values carry measurement uncertainty arising from the uncertainty factors in the estimation process, and these uncertainties should be calculated using appropriate evaluation methods that represent the accuracy and reliability of the result. The IAEA, the US, and the EU have researched radiochronometry and its measurement uncertainty; however, uncertainty calculation using the Bateman equation alone is limited by underestimation of the decay constants and the impossibility of estimating ages over more than one generation, which highlights the need for uncertainty calculation research using computational simulation such as the Monte Carlo method. In this study, we analyzed mathematical models and the Latin hypercube sampling (LHS) method to enhance the reliability of radiochronometry and to develop an uncertainty algorithm for nuclear material radiochronometry based on the Bateman equation. We analyzed the LHS method, which can obtain effective statistical results with a small number of samples, and applied it to a Monte Carlo algorithm for uncertainty calculation by computer simulation, implemented in the MATLAB computational software. The uncertainty calculation model based on mathematical models showed characteristics governed by the relationship between sensitivity coefficients and radioactive equilibrium, while the computational random-sampling approach showed characteristics dependent on the random sampling method, the number of sampling iterations, and the probability distributions of the uncertainty factors. For validation, we compared models from various international organizations, mathematical models, and the Monte Carlo method; the developed algorithm was found to perform calculations at a level of accuracy equivalent to that of overseas institutions and mathematical model-based methods. To enhance usability, future research, comparisons, and validations need to incorporate more complex decay chains and non-homogeneous conditions. The results of this study can serve as foundational technology in the nuclear forensics field, providing tools for the identification of signature nuclides and aiding in the research, development, comparison, and validation of related technologies.
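
For a concrete, hedged example of the approach described above, the sketch below propagates a measured daughter/parent ratio through the closed-form Bateman solution for a single parent-daughter pair and estimates the age uncertainty with Latin hypercube sampling; the nuclide pair, ratio, and uncertainty values are illustrative, and the paper's MATLAB algorithm handles more general cases.

```python
# Minimal sketch of Monte Carlo age-uncertainty propagation with Latin
# hypercube sampling, for a single parent-daughter pair solved from the
# Bateman equation.  Values are illustrative, not from the paper.
import numpy as np
from scipy.stats import qmc, norm

LN2 = np.log(2.0)
lam_p = LN2 / 2.455e5    # 234U decay constant [1/y] (approx. half-life)
lam_d = LN2 / 7.538e4    # 230Th decay constant [1/y] (approx. half-life)

# Measured daughter/parent atom ratio with a relative standard uncertainty
R_mean, R_rel_u = 0.010, 0.02          # illustrative measurement

def model_age(R):
    # Bateman solution for an initially pure parent:
    # N_d/N_p = lam_p/(lam_d - lam_p) * (1 - exp(-(lam_d - lam_p)*t))
    return -np.log(1.0 - R * (lam_d - lam_p) / lam_p) / (lam_d - lam_p)

# Latin hypercube sampling of the measured ratio
sampler = qmc.LatinHypercube(d=1, seed=1)
u = sampler.random(n=10_000).ravel()
R_samples = norm.ppf(u, loc=R_mean, scale=R_mean * R_rel_u)

ages = model_age(R_samples)
print(f"age = {ages.mean():.0f} y, u = {ages.std(ddof=1):.0f} y")
```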

Development of Conformal Radiotherapy with Respiratory Gate Device (호흡주기에 따른 방사선입체조형치료법의 개발)

  • Chu Sung Sil;Cho Kwang Hwan;Lee Chang Geol;Suh Chang Ok
    • Radiation Oncology Journal
    • /
    • v.20 no.1
    • /
    • pp.41-52
    • /
    • 2002
  • Purpose: In 3D conformal radiotherapy, delivering the optimum dose to the tumor while limiting the risk to normal tissue without a marginal miss is restricted by organ motion. For tumors in the thorax and abdomen, the planning target volume (PTV) is defined to include a margin for movement of the tumor volume during treatment due to patient breathing. We designed a respiratory gating radiotherapy device (RGRD) for use during CT simulation, dose planning, and beam delivery under identical breathing-period conditions. Using the RGRD, the treatment margin for breathing-induced organ (thorax or abdomen) motion can be reduced and the dose distribution for 3D conformal radiotherapy improved. Materials and Methods: Internal organ motion data for lung cancer patients were obtained by examining the diaphragm in the supine position to find the position dependency. We built an RGRD composed of a strip band, drug sensor, micro switch, and a connected on-off switch in the LINAC control box. During the same breathing period gated by the RGRD, spiral CT scanning, virtual simulation, and 3D dose planning for lung cancer patients were performed without an extended PTV margin for free breathing, and the dose was then delivered at the same positions. We calculated effective volumes and normal tissue complication probabilities (NTCP) using dose-volume histograms for normal lung, and analyzed changes in doses associated with selected NTCP levels and tumor control probabilities (TCP) at these new dose levels. The effects of 3D conformal radiotherapy with the RGRD were evaluated with DVH (dose-volume histogram) analysis, TCP, NTCP, and dose statistics. Results: The average movement of the diaphragm was 1.5 cm in the supine position when patients breathed freely. Depending on the location of the tumor, the PTV margin needs to be extended by 1 cm to 3 cm, which can greatly increase normal tissue irradiation and hence increases the normal tissue complication probability. The simple and precise RGRD is very easy to set up on patients, is sensitive to length variations (+2 mm), and delivers on-off information to the patient and the LINAC machine. We evaluated the treatment plans of patients who had received conformal partial-organ lung irradiation for the treatment of thoracic malignancies. Using the RGRD, the free-breathing PTV margin for organs moving with breathing can be reduced by about 2 cm. TCP values remained almost the same (4~5% increase) for lung cancer regardless of increasing the PTV margin to 2.0 cm, but NTCP values increased rapidly (by 50~70%) upon extending the PTV margin by 2.0 cm. Conclusion: Internal organ motion due to breathing can be reduced effectively using our simple RGRD. This method can be used in clinical treatments to reduce the organ-motion-induced margin, thereby reducing normal tissue irradiation. Using treatment planning software, the dose to normal tissues was analyzed by comparing dose statistics with and without the RGRD. The evaluation of lung cancer patients treated with 3D conformal radiotherapy demonstrates the potential benefits of reducing or eliminating the PTV margins associated with patient breathing.
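
The DVH-based quantities mentioned in the abstract (effective volume and NTCP) are commonly computed with the Kutcher-Burman DVH reduction and the Lyman model; the sketch below shows that generic textbook formulation rather than the authors' planning software, with illustrative lung parameters.

```python
# Minimal sketch of DVH-based effective volume (Kutcher-Burman reduction)
# and NTCP from the Lyman model.  This is a generic formulation, not the
# authors' software; the lung parameter values are illustrative.
import numpy as np
from scipy.stats import norm

def effective_volume(dose_bins, vol_fracs, n):
    """Reduce a differential DVH to an effective uniform volume at D_max."""
    d_max = dose_bins.max()
    return np.sum(vol_fracs * (dose_bins / d_max) ** (1.0 / n))

def lyman_ntcp(d_max, v_eff, TD50=24.5, m=0.18, n=0.87):
    """Lyman NTCP for partial-volume irradiation (illustrative lung values)."""
    td50_v = TD50 * v_eff ** (-n)
    t = (d_max - td50_v) / (m * td50_v)
    return norm.cdf(t)

# toy differential DVH: dose bin centers [Gy] and fractional volumes
dose = np.array([5.0, 15.0, 25.0, 35.0, 45.0])
vol = np.array([0.40, 0.25, 0.15, 0.12, 0.08])
v_eff = effective_volume(dose, vol, n=0.87)
print("effective volume:", round(v_eff, 3),
      "NTCP:", round(lyman_ntcp(dose.max(), v_eff), 3))
```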

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Up to this day, mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with various services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To provide those services, on top of high-speed data, reduced latency and high reliability are critical for real-time services. Thus, 5G has paved the way for service delivery through a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/km². In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, reduced delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, while their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting their use indoors. Therefore, it is difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability for offering delay-sensitive services because communication with many nodes creates processing overload. SDN, an architecture that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable of delay. Since a generally centralized SDN structure has difficulty meeting the desired delay level, the optimal size of an SDN for information processing should be studied. Thus, SDNs need to be partitioned at a certain scale to construct a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, RTD is not a significant factor because it is fast enough, with less than 1 ms of delay, but the information change cycle and the data processing time of the SDN greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System) that requires low latency and high reliability, information should be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze the correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we can assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells of 50~250 m in cell radius, and the maximum vehicle speed was taken as 30~200 km/h in order to examine the network architecture that minimizes the delay.
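
As a small, hedged illustration of the delay considerations discussed above, the sketch below computes the worst-case cell dwell time over the stated simulation ranges (50~250 m radius, 30~200 km/h) and sums a simple delay budget from the RTD, information change cycle, and SDN processing time; the component values are assumptions, not the paper's results.

```python
# Minimal sketch of the delay-budget arithmetic implied above: cell dwell
# time for a vehicle crossing a 5G small cell, plus a simple end-to-end
# delay as the sum of round-trip delay, information change cycle, and SDN
# processing time.  Component values are illustrative assumptions.
import numpy as np

radius_m = np.array([50, 100, 150, 200, 250])     # cell radius [m]
speed_kmh = np.array([30, 60, 100, 150, 200])     # vehicle speed [km/h]

# Worst-case dwell time: crossing a cell along its diameter
for r in radius_m:
    dwell_s = 2 * r / (speed_kmh / 3.6)
    print(f"r={r:3d} m  dwell [s]:", np.round(dwell_s, 2))

rtd_ms = 1.0              # round-trip delay (5G target, <= 1 ms)
info_cycle_ms = 10.0      # information change cycle (assumed)
sdn_proc_ms = 5.0         # SDN data processing time (assumed)
print("total delay budget [ms]:", rtd_ms + info_cycle_ms + sdn_proc_ms)
```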

Investigation of the Signal Characteristics of a Small Gamma Camera System Using NaI(Tl)-Position Sensitive Photomultiplier Tube (NaI(Tl) 섬광결정과 위치민감형 광전자증배관을 이용한 소형 감마카메라의 신호 특성 고찰)

  • Choi, Yong;Kim, Jong-Ho;Kim, Joon-Young;Im, Ki-Chun;Kim, Sang-Eun;Choe, Yearn-Seong;Lee, Kyung-Han;Joo, Koan-Sik;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.34 no.1
    • /
    • pp.82-93
    • /
    • 2000
  • Purpose: We characterized the signals obtained from the components of a small gamma camera using a NaI(Tl)-position sensitive photomultiplier tube (PSPMT) and optimized the parameters employed in the modules of the system. Materials and Methods: The small gamma camera system consists of a NaI(Tl) crystal (60 × 60 × 6 mm³) coupled with a Hamamatsu R3941 PSPMT, a resistor chain circuit, preamplifiers, nuclear instrument modules (NIMs), an analog-to-digital converter, and a personal computer for control and display. The PSPMT was read out using a resistive charge division circuit which multiplexes the 34 crossed-wire anode channels into 4 signals (X+, X-, Y+, Y-). Those signals were individually amplified by four preamplifiers and then shaped and amplified by amplifiers. The signals were discriminated and digitized via a triggering signal and used to localize the position of an event by applying the Anger logic. The gamma camera control and image display were performed by a program implemented using graphics software. Results: The characteristics of the signals and the parameters employed in each module of the system are presented. The intrinsic sensitivity of the system was approximately 8 × 10³ counts/sec/μCi. The intrinsic energy resolution of the system was 18% FWHM at 140 keV. The spatial resolutions obtained using a line-slit mask and a 99mTc point source were 2.2 and 2.3 mm FWHM in the X and Y directions, respectively. A breast phantom containing 2~7 mm diameter spheres was successfully imaged with a parallel-hole collimator. The image displayed accurate size and activity distribution over the imaging field of view. Conclusion: We proposed a simple method for developing a small gamma camera and presented the characteristics of the signals from the system and the optimized parameters used in its modules.
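
Position estimation by Anger logic from the four multiplexed signals (X+, X-, Y+, Y-) mentioned above can be sketched as follows; the normalization is the standard charge-division form and the event values are illustrative, not measurements from this system.

```python
# Minimal sketch of the Anger-logic position estimate from the four
# multiplexed PSPMT signals (X+, X-, Y+, Y-); the event values below are
# illustrative ADC readings, not data from the described camera.
import numpy as np

def anger_position(xp, xm, yp, ym):
    """Return normalized (x, y) coordinates and the energy-proportional sum."""
    xp, xm, yp, ym = map(float, (xp, xm, yp, ym))
    x = (xp - xm) / (xp + xm)
    y = (yp - ym) / (yp + ym)
    energy = xp + xm + yp + ym          # proportional to deposited energy
    return x, y, energy

# one digitized event (arbitrary ADC units)
x, y, e = anger_position(520.0, 480.0, 610.0, 390.0)
print(f"x={x:+.3f}, y={y:+.3f}, energy_sum={e:.0f}")
```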


A STUDY ON THE DISTRIBUTION OF CALCITONIN GENE-RELATED PEPTIDE CONTAINING NERVE FIBERS IN RAT PULP FOLLOWING DENTINAL INJURY (상아질 손상 후 흰쥐 대구치 치수의 calcitonin gene-related peptide(CGRP) 함유 신경섬유 분포에 관한 연구)

  • Moon, Joo-Hoon;Park, Sang-Jin;Min, Byung-Soon;Choi, Ho-Young;Cho, Gi-Woon
    • Restorative Dentistry and Endodontics
    • /
    • v.24 no.1
    • /
    • pp.100-115
    • /
    • 1999
  • The purpose of this study was to investigate the distribution of calcitonin gene-related peptide (CGRP)-containing nerve fibers in rat pulp after dentinal injury by means of immunohistochemistry and confocal laser scanning microscopy. Sprague-Dawley rats weighing about 250-300 g were used. The animals were divided into normal control and experimental groups. Experimental animals were sacrificed 1, 2, 4, 7, 10, and 21 days after dentinal injury (dentin cutting followed by acid etching with 35% phosphoric acid) on the maxillary molar teeth. The maxillary teeth and alveolar bone were removed, immersed in 4% paraformaldehyde in 0.1 M phosphate buffer (pH 7.4), and then decalcified with 15% formic acid for 10 days. Serial frozen 50 μm thick sections were cut on a cryostat. Rabbit CGRP antibody was used as the primary antibody at a dilution of 1:2000 in 0.01 M PB. The sections were incubated for 48 hours at 4°C, placed into biotinylated anti-rabbit IgG as the secondary antibody at a dilution of 1:200 in 0.01 M PB, and incubated in ABC (avidin-biotin complex). The peroxidase reaction was visualized by incubating the sections in 0.05% 3,3'-diaminobenzidine tetrahydrochloride containing 0.02% H₂O₂. For the confocal laser scanning microscopic examination, the primary antibody reaction was the same as for the immunoperoxidase staining, but fluorescein isothiocyanate (FITC)-conjugated anti-rabbit IgG was used as the secondary antibody. A series of optical-section images was collected with a 20x objective at 3 μm intervals throughout the depth of the specimen. FITC fluorescence was registered through 488 nm and 568 nm excitation filters, and images were saved on optical disk. The stereoscopic and three-dimensional images were reconstructed by computer software and then analyzed. The results were as follows: 1. In the normal control group, CGRP-containing nerve fibers coursed through the root with very little branching and then formed a dense network of terminals in the coronal pulp. 2. A slight increase in CGRP-containing nerve fibers at 1 and 2 days post-injury was noted subjacent to the injury site. In the 4-day group, there was an extensive increase in the number of reactive fibers, followed by a partial return toward normal levels at 7~10 days post-injury and a return by 21 days. 3. The sprouting of CGRP-containing nerve fibers was evident within 2 days after dentinal injury, was maximal by 4 days, decreased at 7 days, and returned to normal at 10~21 days post-injury. 4. In the confocal laser scanning microscopic examination, the distinct distribution pattern and sprouting reaction of CGRP-containing nerve fibers were observed in the stereoscopic and three-dimensional images. These results suggest that CGRP-containing nerve fibers may play an important role in the response to dentinal injury and in pain regulation.


Development of an Aerodynamic Simulation for Studying Microclimate of Plant Canopy in Greenhouse - (2) Development of CFD Model to Study the Effect of Tomato Plants on Internal Climate of Greenhouse - (공기유동해석을 통한 온실내 식물군 미기상 분석기술 개발 - (2)온실내 대기환경에 미치는 작물의 영향 분석을 위한 CFD 모델개발 -)

  • Lee In-Bok;Yun Nam-Kyu;Boulard Thierry;Roy Jean Claude;Lee Sung-Hyoun;Kim Gyoeng-Won;Hong Se-Woon;Sung Si-Heung
    • Journal of Bio-Environment Control
    • /
    • v.15 no.4
    • /
    • pp.296-305
    • /
    • 2006
  • The heterogeneity of crop transpiration is important for clearly understanding microclimate mechanisms and for efficiently managing the water resource in greenhouses. A computational fluid dynamics program (Fluent CFD version 6.2) was used to develop a model of the internal climate and crop transpiration distributions of greenhouses. Additionally, a global solar radiation model and a crop heat exchange model were programmed together. Those models, written in C++, were connected to the CFD main module using the user-defined function (UDF) technology. To validate the developed CFD model, a field experiment was conducted at a 17 × 6 m² plastic-covered, mechanically ventilated single-span greenhouse located at Pusan in Korea. The CFD-computed internal distributions of air temperature, relative humidity, and air velocity at 1 m height were validated against the experimental results. The CFD-computed results were in close agreement with the measured distributions of air temperature, relative humidity, and air velocity along the greenhouse; the averaged errors of the CFD-computed results were 2.2%, 2.1%, and 7.7%, respectively.
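
As a minimal illustration of the validation step reported above, the sketch below averages the relative error between measured and CFD-computed values along the greenhouse; the sample numbers are placeholders, not the paper's measurements.

```python
# Minimal sketch of the validation metric described above: the averaged
# relative error between measured and CFD-computed values at sampled points.
# The point values below are illustrative placeholders.
import numpy as np

def mean_relative_error(measured, computed):
    measured = np.asarray(measured, dtype=float)
    computed = np.asarray(computed, dtype=float)
    return np.mean(np.abs(computed - measured) / np.abs(measured)) * 100.0

# illustrative point values at 1 m height along the greenhouse
temp_meas, temp_cfd = [24.1, 25.0, 25.8, 26.3], [24.6, 25.4, 26.2, 27.0]
print("air temperature error [%]:",
      round(mean_relative_error(temp_meas, temp_cfd), 1))
```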

Implementation of a Self Controlled Mobile Robot with Intelligence to Recognize Obstacles (장애물 인식 지능을 갖춘 자율 이동로봇의 구현)

  • 류한성;최중경
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.312-321
    • /
    • 2003
  • In this paper, we implement a robot with the ability to recognize obstacles and move automatically to a destination. We present two results: the hardware implementation of an image processing board and the software implementation of a visual feedback algorithm for a self-controlled robot. In the first part, the mobile robot depends on commands from a control board that performs the image processing. We have long studied a self-controlled mobile robot system equipped with a CCD camera. This robot system consists of an image processing board implemented with DSPs, a stepping motor, and a CCD camera. We propose an algorithm in which commands are delivered for the robot to move along the planned path. The distance that the robot is supposed to move is calculated on the basis of the absolute coordinate and the coordinate of the target spot. The image signal acquired by the CCD camera mounted on the robot is captured at every sampling time so that the robot can automatically avoid obstacles and finally reach the destination. The image processing board consists of a DSP (TMS320VC33), ADV611, SAA7111, ADV7176A, CPLD (EPM7256ATC144), and SRAM memories. In the second part, the visual feedback control uses two types of vision algorithms: obstacle avoidance and path planning. The first algorithm works on cells, parts of the image divided by blob analysis. Image preprocessing is performed to improve the input image and consists of filtering, edge detection, NOR conversion, and thresholding. The main image processing includes labeling, segmentation, and pixel density calculation. In the second algorithm, after an image frame goes through preprocessing (edge detection, conversion, thresholding), the histogram is measured vertically (in the y-axis direction). The binary histogram of the image then shows waveforms with only black and white variations. Here we use the fact that, since obstacles appear as wall-like sectional shapes, there is no variation in the histogram over them. The intensities of the line histogram are measured vertically at intervals of 20 pixels, so we can find uniform and non-uniform regions of the waveforms and define a run of uniform waveforms as an obstacle region. The algorithm proves very useful for the robot to move while avoiding obstacles.
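
The obstacle-detection idea described above (a column-wise histogram of a preprocessed binary image, sampled at 20-pixel intervals, with uniform stretches treated as obstacles) can be sketched as follows; OpenCV is used here only as a convenient stand-in for the original DSP pipeline, and all thresholds are illustrative.

```python
# Minimal sketch of the column-histogram obstacle detection described above:
# edge detection and thresholding, then column-wise counts at 20-pixel
# intervals; stretches with little variation are flagged as obstacle-like.
import cv2
import numpy as np

def obstacle_columns(gray: np.ndarray, step: int = 20, tol: float = 5.0):
    edges = cv2.Canny(gray, 50, 150)                  # edge detection
    _, binary = cv2.threshold(edges, 0, 1, cv2.THRESH_BINARY)
    cols = np.arange(0, binary.shape[1], step)
    hist = binary[:, cols].sum(axis=0).astype(float)  # column-wise counts
    # uniform (low-variation) stretches of the histogram -> obstacle region
    uniform = np.abs(np.diff(hist)) < tol
    return cols, hist, uniform

if __name__ == "__main__":
    img = np.zeros((240, 320), dtype=np.uint8)
    cv2.rectangle(img, (100, 80), (220, 200), 255, -1)  # synthetic obstacle
    cols, hist, uniform = obstacle_columns(img)
    print("sampled columns flagged as obstacle-like:", cols[1:][uniform])
```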