• Title/Summary/Keyword: 자동정합 (automatic registration)

Search Results: 266 (processing time: 0.022 seconds)

Development of Graph-based Deep Learning Methods for Enhancing the Semantic Integrity of Spaces in BIM Models (BIM 모델 내 공간의 시멘틱 무결성 검증을 위한 그래프 기반 딥러닝 모델 구축에 관한 연구)

  • Lee, Wonbok;Kim, Sihyun;Yu, Youngsu;Koo, Bonsang
    • Korean Journal of Construction Engineering and Management / v.23 no.3 / pp.45-55 / 2022
  • BIM models allow building spaces to be instantiated and recognized as unique objects, independently of model elements. These instantiated spaces provide the semantics required for building code checking, energy analysis, and evacuation route analysis. However, these spaces or rooms must be designated manually, which in practice leads to errors and omissions. Thus, most BIM models today do not guarantee the semantic integrity of space designations, limiting their potential applicability. Recent studies have explored ways to automate space allocation in BIM models using artificial intelligence algorithms, but they are limited in scope and achieve relatively low classification accuracy. This study explored the use of Graph Convolutional Networks (GCN), an algorithm tailored specifically for graph data structures. The goal was to utilize not only geometry information but also the semantic relational data between spaces and elements in the BIM model. The results confirmed that accuracy improved by about 8% compared to algorithms that use only the geometric features of individual spaces.
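The graph convolution the abstract relies on can be sketched as below. This is the standard Kipf–Welling propagation rule, $H' = \mathrm{ReLU}(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2} H W)$; the toy BIM graph, feature dimensions, and weights are illustrative assumptions, not data from the paper:

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One Graph Convolutional Network layer (Kipf & Welling):
    H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)                       # node degrees
    d_inv_sqrt = np.diag(deg ** -0.5)             # D^-1/2
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    return np.maximum(0.0, a_norm @ features @ weights)  # ReLU

# Toy BIM graph: 4 nodes (2 spaces, 2 elements); edges model
# adjacency/containment relations between spaces and elements.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 0],
                [0, 1, 0, 0]], dtype=float)
feats = np.random.default_rng(0).normal(size=(4, 3))  # per-node geometry features
w = np.random.default_rng(1).normal(size=(3, 2))      # weights (random stand-in)
out = gcn_layer(adj, feats, w)
print(out.shape)  # (4, 2): one embedding row per node
```

Stacking two or three such layers and ending with a softmax over room classes is the usual classification setup the study's accuracy comparison implies.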

Development of Video Image-Guided Setup (VIGS) System for Tomotherapy: Preliminary Study (단층치료용 비디오 영상기반 셋업 장치의 개발: 예비연구)

  • Kim, Jin Sung;Ju, Sang Gyu;Hong, Chae Seon;Jeong, Jaewon;Son, Kihong;Shin, Jung Suk;Shin, Eunheak;Ahn, Sung Hwan;Han, Youngyih;Choi, Doo Ho
    • Progress in Medical Physics / v.24 no.2 / pp.85-91 / 2013
  • At present, megavoltage computed tomography (MVCT) is the only method used to correct the position of tomotherapy patients. MVCT delivers extra radiation in addition to the treatment dose, and repositioning takes up much of the total treatment time. To address these issues, we propose a video image-guided setup (VIGS) system for correcting the position of tomotherapy patients. We developed an in-house program that corrects the patient position using two orthogonal images obtained from two video cameras installed at $90^{\circ}$ and fastened inside the tomotherapy gantry. The system performs automatic registration using edge detection within a user-defined region of interest (ROI). A head-and-neck patient was simulated using a humanoid phantom. After acquiring the computed tomography (CT) image, tomotherapy planning was performed. To mimic a clinical treatment course, we positioned the phantom on the tomotherapy couch with an immobilization device and, using MVCT, corrected its position to match the planned position. Video images of the corrected position served as reference images for the VIGS system. The position was first corrected 10 times using MVCT; then, based on the saved reference video images, it was corrected 10 times using the VIGS method, and the results of the two methods were compared. Patient positioning with the video-imaging method ($41.7{\pm}11.2$ seconds) was significantly faster than with the MVCT method ($420{\pm}6$ seconds) (p<0.05), while there was no meaningful difference in accuracy between the two methods (x=0.11 mm, y=0.27 mm, z=0.58 mm, p>0.05). Because VIGS achieves comparable accuracy in far less time than the MVCT method, it is expected to make the overall tomotherapy treatment process more efficient.
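The automatic registration step described above — edge detection inside a user-defined ROI, then matching against the saved reference image — can be sketched as follows. The gradient-magnitude edge operator and the exhaustive integer-shift search are illustrative assumptions; the abstract does not specify the detector or optimizer used:

```python
import numpy as np

def edge_map(img):
    """Simple gradient-magnitude edge map (finite differences)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def register_shift(reference, current, roi, search=5):
    """Find the integer (dy, dx) shift that best aligns the edges of
    `current` with `reference` inside roi = (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    ref_e = edge_map(reference)[y0:y1, x0:x1]
    cur_e = edge_map(current)
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            patch = cur_e[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            score = float((ref_e * patch).sum())   # correlation score
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Synthetic check: a bright square shifted by (2, -1) should be recovered.
ref = np.zeros((40, 40)); ref[15:25, 15:25] = 1.0
cur = np.zeros((40, 40)); cur[17:27, 14:24] = 1.0
print(register_shift(ref, cur, roi=(10, 30, 10, 30)))  # (2, -1)
```

In the actual system the two orthogonal cameras would each yield such a 2D shift, from which the 3D couch correction is composed.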

Compact Orthomode Transducer for Field Experiments of Radar Backscatter at L-band (L-밴드 대역 레이더 후방 산란 측정용 소형 직교 모드 변환기)

  • Hwang, Ji-Hwan;Kwon, Soon-Gu;Joo, Jeong-Myeong;Oh, Yi-Sok
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.22 no.7 / pp.711-719 / 2011
  • This paper presents a study on the miniaturization of an L-band orthomode transducer (OMT) for field experiments of radar backscatter. Thanks to a newly designed junction structure based on a waveguide taper, the proposed OMT does not require additional waveguide taper structures to connect with a standard adaptor. The total length of the L-band OMT is about 1.2 ${\lambda}_o$ (310 mm), roughly 60% of the size of existing OMTs. To improve the matching and isolation performance of each polarization, two conducting posts are inserted. A bandwidth of 420 MHz and an isolation level of about 40 dB were measured in the operating frequency band. An L-band scatterometer consisting of the manufactured OMT, a horn antenna, and a network analyzer (Agilent 8753E) was calibrated using STCT and 2DTST to analyze the measurement accuracy of radar backscatter. The full-polarimetric RCSs of the test target, a 55 cm trihedral corner reflector, measured by the calibrated scatterometer have errors of -0.2 dB and 0.25 dB for vv- and hh-polarization, respectively. The effective isolation level is about 35.8 dB in the operating frequency band. The horn antenna used for the measurements has a length of 300 mm, an aperture size of $450{\times}450\;mm^2$, and HPBWs of $29.5^{\circ}$ and $36.5^{\circ}$ in the principal E- and H-planes.
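Calibration against a trihedral corner reflector, as above, compares the measured RCS with the reflector's theoretical peak RCS, $\sigma = 4\pi a^4 / (3\lambda^2)$ for a triangular trihedral with edge length $a$. A minimal sketch, assuming an L-band frequency of 1.27 GHz (the abstract does not state the exact operating frequency):

```python
import math

def trihedral_rcs_dbsm(edge_m, freq_hz):
    """Peak RCS of a triangular trihedral corner reflector:
    sigma = 4*pi*a^4 / (3*lambda^2), returned in dBsm."""
    lam = 3e8 / freq_hz                       # wavelength in metres
    sigma = 4 * math.pi * edge_m**4 / (3 * lam**2)
    return 10 * math.log10(sigma)

# 55 cm trihedral at an assumed L-band frequency of 1.27 GHz.
print(round(trihedral_rcs_dbsm(0.55, 1.27e9), 1))  # 8.4 dBsm
```

The measured vv-/hh-errors of -0.2 dB and 0.25 dB are deviations from this kind of theoretical reference value.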

Rotation Errors of Breast Cancer on 3D-CRT in TomoDirect (토모다이렉트 3D-CRT을 이용한 유방암 환자의 회전 오차)

  • Jung, Jae Hong;Cho, Kwang Hwan;Moon, Seong Kwon;Bae, Sun Hyun;Min, Chul Kee;Kim, Eun Seog;Yeo, Seung-Gu;Choi, Jin Ho;Jung, Joo-Yong;Choe, Bo Young;Suh, Tae Suk
    • Progress in Medical Physics / v.26 no.1 / pp.6-11 / 2015
  • The purpose of this study was to analyze the rotational errors of roll, pitch, and yaw in whole-breast cancer treated with three-dimensional conformal radiation therapy (3D-CRT) using TomoDirect (TD). Twenty patients previously treated with TD 3D-CRT were selected. We performed a retrospective clinical analysis based on 80 megavoltage computed tomography (MVCT) images, including the systematic and random components of patient setup error and the treatment setup margin (mm). In addition, the rotational error (degree) of each patient was analyzed using automatic image registration. The treatment margins in the X, Y, and Z directions were 4.2 mm, 6.2 mm, and 6.4 mm, respectively. The mean rotational errors for roll, pitch, and yaw were $0.3^{\circ}$, $0.5^{\circ}$, and $0.1^{\circ}$, respectively, and all systematic and random errors were within $1.0^{\circ}$. Setup errors in the Y and Z directions were generally larger than in the X direction. The percentages of treatment fractions with errors below $2^{\circ}$ for roll, pitch, and yaw were 95.1%, 98.8%, and 97.5%, respectively. However, because rotations act about the center of the treatment region, the superior and inferior edges of the region may be displaced increasingly as the length of the treatment region grows. Patient-specific characteristics should therefore be considered to ensure treatment accuracy and reproducibility, and the rotational errors should be checked periodically, including patient repositioning and repeated MVCT scans.
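Setup margins of the kind quoted above are commonly derived from the systematic (Σ) and random (σ) error components with the van Herk recipe, margin = 2.5Σ + 0.7σ, and the concern about the field edges follows from simple geometry: a rotation θ about the field center displaces a point at distance L/2 by about (L/2)·sin θ. A sketch with illustrative numbers (not the paper's data):

```python
import math

def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
    """CTV-to-PTV setup margin (van Herk): 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

def edge_displacement_mm(field_length_mm, rotation_deg):
    """Displacement at the field edge from a rotation about its center."""
    return (field_length_mm / 2) * math.sin(math.radians(rotation_deg))

print(van_herk_margin(2.0, 2.0))                 # 6.4 mm (illustrative Sigma, sigma)
print(round(edge_displacement_mm(200, 2.0), 1))  # 3.5 mm for 2 deg over a 20 cm field
```

The second number shows why a rotation well under the $2^{\circ}$ threshold can still matter for long treatment regions.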

Study of the UAV for Application Plans and Landscape Analysis (UAV를 이용한 경관분석 및 활용방안에 관한 기초연구)

  • Kim, Seung-Min
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.32 no.3 / pp.213-220 / 2014
  • This study conducted topographical analysis using orthophotographic data from UAV waypoint flights and constructed the system required for automatic waypoint flight using a multicopter. The results of the waypoint photographing are as follows. First, photogrammetry of the 9.3 ha study area took 40 minutes of waypoint flight in total. The multicopter maintained a constant flight altitude and speed, so accurate photographs were taken at the waypoints determined by the ground station, confirming the effectiveness of the photogrammetry. Second, a digital camera was attached to the multicopter, which is lightweight and inexpensive compared with general photogrammetric unmanned airplanes, to verify its mobility and economy. In addition, matching of the photo data and production of DEM and DXF files made topographical analysis possible. Third, a high-resolution (2 cm) orthophoto of the river interior was produced, showing that changes in vegetation and topography around the river can be analyzed. Fourth, the method can support more in-depth landscape research such as terrain and visibility analysis. It may be widely used to analyze various terrains in cities and rivers, and can also support landscape control of cultural remains and tourist sites, such as visibility analysis based on a constructed DSM, as well as management of cultural and historical resources.
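The 2 cm orthophoto resolution above ties flight altitude, sensor pixel pitch, and focal length together through the ground sampling distance, GSD = altitude × pixel pitch / focal length. A minimal sketch with hypothetical compact-camera parameters (the paper does not list the camera specifications):

```python
def ground_sampling_distance_cm(altitude_m, pixel_pitch_um, focal_length_mm):
    """GSD = flight altitude * sensor pixel pitch / focal length, in cm/pixel."""
    gsd_m = altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)
    return gsd_m * 100

# Hypothetical setup: 80 m altitude, 4 um pixel pitch, 16 mm lens.
print(ground_sampling_distance_cm(80, 4.0, 16))  # 2.0 cm per pixel
```

Planning the waypoint grid then reduces to choosing an altitude that yields the target GSD while keeping enough image overlap for matching.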

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved for realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model and template matching allow the facial region to be detected efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked using the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points encode both head motion and facial expression information, the animation parameters describing the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using Radial Basis Functions (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
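The RBF step above — propagating the displacements of a few control points to the surrounding non-feature vertices — can be sketched as follows. The Gaussian kernel and its width are assumptions; the paper does not state which radial basis it uses:

```python
import numpy as np

def rbf_deform(control_pts, control_disp, query_pts, width=1.0):
    """Interpolate control-point displacements to arbitrary vertices with
    Gaussian RBFs: solve Phi @ w = d at the controls, evaluate elsewhere."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise sq. dist.
        return np.exp(-d2 / (2 * width ** 2))
    phi = kernel(control_pts, control_pts)
    w = np.linalg.solve(phi, control_disp)        # one weight row per control
    return kernel(query_pts, control_pts) @ w     # displacements at the queries

# Three control points on a 2D face patch; one is pushed upward by 0.2.
ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
disp = np.array([[0.0, 0.2], [0.0, 0.0], [0.0, 0.0]])
query = np.array([[0.1, 0.1], [0.9, 0.1]])
print(rbf_deform(ctrl, disp, query))  # nearby vertex moves more than the far one
```

Because the Gaussian kernel matrix is positive definite for distinct points, the solve is well posed and the interpolant reproduces the control displacements exactly at the control points.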