• Title/Summary/Keyword: Filtering


Development of an Automatic Seed Marker Registration Algorithm Using CT and kV X-ray Images (CT 영상 및 kV X선 영상을 이용한 자동 표지 맞춤 알고리듬 개발)

  • Cheong, Kwang-Ho;Cho, Byung-Chul;Kang, Sei-Kwon;Kim, Kyoung-Joo;Bae, Hoon-Sik;Suh, Tae-Suk
    • Radiation Oncology Journal / v.25 no.1 / pp.54-61 / 2007
  • Purpose: The purpose of this study is to develop a practical method for determining accurate marker positions for prostate cancer radiotherapy using CT images and kV x-ray images obtained from the on-board imager (OBI). Materials and Methods: Three gold seed markers were implanted into the reference position inside the prostate gland by a urologist. Multiple digital image processing techniques were used to determine each seed marker position, and the center-of-mass (COM) technique was employed to determine a representative reference seed marker position. A setup discrepancy can be estimated by comparing a computed COM_OBI with the reference COM_CT. The proposed algorithm was applied to a seed phantom and to four prostate cancer patients with seed implants treated in our clinic. Results: In the phantom study, the calculated COM_CT and COM_OBI agreed with COM_actual to within a millimeter. The algorithm could also localize each seed marker correctly and calculated COM_CT and COM_OBI for all CT and kV x-ray image sets, respectively. Discrepancies in setup errors between the 2D-2D matching results using the OBI application and the results using the proposed algorithm were less than one millimeter on each axis. The setup error was in the range of 0.1±2.7 to 1.8±6.6 mm in the AP direction, 0.8±1.6 to 2.0±2.7 mm in the SI direction, and -0.9±1.5 to 2.8±3.0 mm in the lateral direction, although the setup error was quite patient dependent. Conclusion: As it took less than 10 seconds to evaluate a setup discrepancy, the method can help reduce the setup correction time while minimizing subjective, user-dependent factors. However, the on-line correction process should be integrated into the treatment machine control system for a more reliable procedure.
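The COM comparison in the abstract above reduces to averaging the three seed positions in each image set and differencing the two averages. A minimal sketch with hypothetical marker coordinates (the paper's seed-localization image processing is not reproduced):

```python
# Sketch of the center-of-mass (COM) setup-discrepancy check described above.
# Seed coordinates below are invented for illustration.

def center_of_mass(points):
    """Average 3-D position of the seed markers (equal weights assumed)."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def setup_discrepancy(com_ct, com_obi):
    """Per-axis shift between the reference COM (CT) and the daily COM (OBI)."""
    return tuple(o - c for c, o in zip(com_ct, com_obi))

# Three gold seed positions (mm) in the CT frame and in the kV/OBI frame:
seeds_ct  = [(10.0, 20.0, 30.0), (12.0, 22.0, 31.0), (11.0, 18.0, 29.0)]
seeds_obi = [(11.5, 20.5, 29.0), (13.5, 22.5, 30.0), (12.5, 18.5, 28.0)]

com_ct  = center_of_mass(seeds_ct)    # reference COM_CT
com_obi = center_of_mass(seeds_obi)   # daily COM_OBI
shift   = setup_discrepancy(com_ct, com_obi)   # couch correction candidate
```

Using one representative point rather than matching seeds individually is what makes the comparison robust to a single mislocalized marker.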

Analysis of Interactions in Multiple Genes using IFSA(Independent Feature Subspace Analysis) (IFSA 알고리즘을 이용한 유전자 상호 관계 분석)

  • Kim, Hye-Jin;Choi, Seung-Jin;Bang, Sung-Yang
    • Journal of KIISE:Computer Systems and Theory / v.33 no.3 / pp.157-165 / 2006
  • The change of external/internal factors of the cell requires specific biological functions to maintain life. Such functions encourage particular genes to interact with and regulate each other in multiple ways. Accordingly, we applied a linear decomposition model, IFSA, which derives hidden variables, called 'expression modes', that correspond to these functions. To interpret gene interaction/regulation, we used a cross-correlation method given an expression mode. Linear decomposition models such as principal component analysis (PCA) and independent component analysis (ICA) have been shown to be useful in analyzing high-dimensional DNA microarray data, compared to clustering methods. These methods assume that gene expression is controlled by a linear combination of uncorrelated/independent latent variables. However, these methods have some difficulty in grouping similar patterns that are slightly time-delayed or asymmetric, since only exactly matched patterns are considered. In order to overcome this, we employ the IFSA method of [1] to locate phase- and shift-invariant features. Membership scoring functions play an important role in classifying genes, since linear decomposition models basically aim at data reduction, not at grouping data. We propose a new scoring function essential to the IFSA method. In this paper we stress that IFSA is useful in grouping functionally related genes in the presence of time shifts and expression phase variance. Ultimately, we propose a new approach to investigate the multiple interaction information of genes.
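The point about time-delayed patterns can be illustrated with a lag-aware normalized cross-correlation: two profiles that match only after a shift score poorly at lag 0 but perfectly at the right lag. A small sketch with invented expression profiles (not the IFSA model itself):

```python
# Grouping time-shifted expression patterns by the peak of a normalized
# cross-correlation over lags -- the idea the abstract contrasts with
# exact-match similarity. Profiles are invented for illustration.
import math

def xcorr_at_lag(x, y, lag):
    """Pearson correlation of x against y shifted by `lag` time points."""
    pairs = [(x[t], y[t + lag]) for t in range(len(x)) if 0 <= t + lag < len(y)]
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in pairs)
    den = math.sqrt(sum((a - mx) ** 2 for a in xs) *
                    sum((b - my) ** 2 for b in ys))
    return num / den if den else 0.0

def best_lag(x, y, max_lag=3):
    """Lag maximizing correlation: nonzero for time-delayed co-regulation."""
    return max(range(-max_lag, max_lag + 1), key=lambda l: xcorr_at_lag(x, y, l))

gene_a = [0, 1, 4, 9, 4, 1, 0, 0]
gene_b = [0, 0, 0, 1, 4, 9, 4, 1]   # same pulse, delayed by 2 time points
```

Here `best_lag(gene_a, gene_b)` recovers the 2-step delay that a lag-0 similarity measure would miss.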

Finding Influential Users in the SNS Using Interaction Concept : Focusing on the Blogosphere with Continuous Referencing Relationships (상호작용성에 의한 SNS 영향유저 선정에 관한 연구 : 연속적인 참조관계가 있는 블로고스피어를 중심으로)

  • Park, Hyunjung;Rho, Sangkyu
    • The Journal of Society for e-Business Studies / v.17 no.4 / pp.69-93 / 2012
  • Various influence-related relationships in Social Network Services (SNS) among users, posts, and user-and-post pairs can be expressed using links. The current research evaluates the influence of specific users or posts by analyzing the link structure of the relevant social network graphs in order to identify influential users. We applied the concept of mutual interactions proposed for ranking semantic web resources, rather than the voting notion of PageRank or HITS, to the blogosphere, one of the early SNS. Through many experiments with network models, where the performance and validity of each alternative approach can be analyzed, we showed the applicability and strengths of our approach. The weight tuning processes for the links of these network models enabled us to control the experimental errors from the link weight differences and to compare the implementation ease of the alternatives. An additional example of how to enter the content scores of commercial or spam posts into the graph-based method is also given on a small network model. This research, as a starting point of the study on identifying influential users in SNS, is distinctive from previous research in the following points. First, various influence-related properties that are deemed important but are usually disregarded, such as scraping, commenting, subscribing to RSS feeds, and trusting friends, can be considered simultaneously. Second, the framework reflects the general phenomenon whereby objects interacting with more influential objects increase their own influence. Third, regarding the extent to which a blogger causes other bloggers to act after him or her as the most important factor of influence, we treated sequential referencing relationships from a viewpoint different from that of PageRank or HITS (Hypertext Induced Topic Selection).
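The general mechanism the abstract describes, scores propagating over weighted interaction links until they stabilize, can be sketched as a damped power iteration. The graph, link weights, and damping factor below are hypothetical, not the paper's tuned model:

```python
# Iterative influence scoring over weighted interaction links: a node's score
# is repeatedly recomputed from the scores of nodes linking to it, so
# interacting with influential nodes raises one's own influence. Graph and
# parameters are invented for illustration.

def influence_scores(links, nodes, damping=0.85, iters=50):
    """links: (source, target, weight) triples, e.g. comments/scraps/trusts."""
    score = {n: 1.0 / len(nodes) for n in nodes}
    out_weight = {n: sum(w for s, _, w in links if s == n) or 1.0 for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for s, t, w in links:
            # each source spreads its score over its outgoing link weights
            new[t] += damping * score[s] * w / out_weight[s]
        score = new
    return score

blog_links = [("A", "B", 1.0), ("B", "C", 2.0), ("A", "C", 1.0), ("C", "A", 1.0)]
scores = influence_scores(blog_links, ["A", "B", "C"])
```

Weighting links differently per interaction type (scrap, comment, RSS subscription, trust) is what lets several influence-related properties enter one computation.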

Quantitative Rainfall Estimation for S-band Dual Polarization Radar using Distributed Specific Differential Phase (분포형 비차등위상차를 이용한 S-밴드 이중편파레이더의 정량적 강우 추정)

  • Lee, Keon-Haeng;Lim, Sanghun;Jang, Bong-Joo;Lee, Dong-Ryul
    • Journal of Korea Water Resources Association / v.48 no.1 / pp.57-67 / 2015
  • One of the main benefits of a dual polarization radar is improved quantitative rainfall estimation. In this paper, the performance of two representative rainfall estimation methods for a dual polarization radar, the JPOLE and CSU algorithms, is compared using data from an MOLIT S-band dual polarization radar. In addition, this paper presents an evaluation of the specific differential phase (K_dp) retrieval algorithm proposed by Lim et al. (2013). Current K_dp retrieval methods are based on range filtering techniques or regression analysis. However, these methods can underestimate peak K_dp or produce negative values in convective regions, and fluctuating K_dp in low rain rate regions. To resolve these problems, this study applied the K_dp distribution method suggested by Lim et al. (2013) and evaluated it by adopting the new K_dp in the JPOLE and CSU algorithms. Data were obtained from the Mt. Biseul radar of MOLIT for two rainfall events in 2012. The evaluation showed improvement of the peak K_dp and no fluctuation or negative K_dp values. Also, in heavy rain (daily rainfall > 80 mm), accumulated daily rainfall using the new K_dp was closer to the AWS observation data than that using the legacy K_dp; in light rain (daily rainfall < 80 mm), however, the improvement was insignificant, because K_dp is used mostly for heavy rain rates in quantitative rainfall estimation algorithms.
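For context, the conventional regression-style retrieval the abstract contrasts with estimates K_dp as half the range derivative of the differential phase Φ_dp via a least-squares slope over a sliding window. A sketch under assumed gate spacing and window size (the Lim et al. (2013) distribution method itself is not reproduced):

```python
# Conventional windowed-regression K_dp retrieval: K_dp (deg/km) is half the
# least-squares slope of the differential phase PHI_dp (deg) along range (km).
# Gate spacing, window size, and the PHI_dp profile are hypothetical.

def kdp_from_phidp(phidp, gate_km, window=5):
    """Sliding least-squares slope of PHI_dp over range, halved."""
    half = window // 2
    kdp = []
    for i in range(len(phidp)):
        lo, hi = max(0, i - half), min(len(phidp), i + half + 1)
        xs = [gate_km * j for j in range(lo, hi)]
        ys = phidp[lo:hi]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
                 sum((x - mx) ** 2 for x in xs))
        kdp.append(0.5 * slope)
    return kdp

# PHI_dp rising 4 deg per 0.25 km gate -> slope 16 deg/km -> K_dp = 8 deg/km
phidp_profile = [4.0 * i for i in range(20)]
kdp_profile = kdp_from_phidp(phidp_profile, gate_km=0.25)
```

The window averaging is exactly what smears sharp convective peaks, the behavior the distribution-based method is said to improve.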

Database Security System supporting Access Control for Various Sizes of Data Groups (다양한 크기의 데이터 그룹에 대한 접근 제어를 지원하는 데이터베이스 보안 시스템)

  • Jeong, Min-A;Kim, Jung-Ja;Won, Yong-Gwan;Bae, Suk-Chan
    • The KIPS Transactions:PartD / v.10D no.7 / pp.1149-1154 / 2003
  • Due to various requirements for user access control to large databases in hospitals and banks, database security has been emphasized. There are many security models for database systems using a wide variety of policy-based access control methods. However, they are not functional enough to meet the requirements for complicated and various types of access control. In this paper, we propose a database security system that can individually control user access to data groups of various sizes and is suitable for situations where a user's access privilege to arbitrary data changes frequently. Data groups of different sizes are defined by table name(s), attribute(s) and/or record key(s), and the access privilege is defined by security levels, roles and policies. The proposed system operates in two phases. The first phase combines a modified MAC (Mandatory Access Control) model and an RBAC (Role-Based Access Control) model. A user can access any data that has a lower or equal security level and that is accessible by the roles to which the user is assigned. All types of access mode are controlled in this phase. In the second phase, a modified DAC (Discretionary Access Control) model is applied to re-control the 'read' mode by filtering out the non-accessible data from the result obtained in the first phase. For this purpose, we also defined user groups that can be characterized by security levels, roles or any partition of users. Policies represented in the form Block(s, d, r) were also defined and used to control access to any data or data group(s) that is not permitted in 'read' mode. With the proposed security system, more complicated 'read' access to data of various sizes can be flexibly controlled for individual users, while other access modes are controlled as usual. An implementation example for a database system that manages specimen and clinical information is presented.
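The two-phase flow described above can be sketched as a level-and-role gate followed by a row filter driven by Block(s, d, r) policies. All names, levels, and records below are invented toy data, not the paper's schema:

```python
# Toy sketch of the two-phase control: phase one combines a MAC level check
# with RBAC role membership; phase two filters 'read' results through
# DAC-style Block(subject, data_group, mode) policies. Data are invented.

def phase_one_allows(user, data):
    """MAC: user level must dominate data level; RBAC: a shared role needed."""
    return user["level"] >= data["level"] and bool(user["roles"] & data["roles"])

def phase_two_filter(user, rows, blocks):
    """DAC: drop rows that a Block(s, d, 'read') policy forbids for this user."""
    denied = {d for s, d, r in blocks if s == user["name"] and r == "read"}
    return [row for row in rows if row["group"] not in denied]

nurse = {"name": "kim", "level": 2, "roles": {"clinical"}}
table = {"level": 1, "roles": {"clinical", "lab"}}
rows = [{"group": "specimen", "id": 1}, {"group": "billing", "id": 2}]
blocks = [("kim", "billing", "read")]   # Block(s, d, r): kim may not read billing

readable = (phase_two_filter(nurse, rows, blocks)
            if phase_one_allows(nurse, table) else [])
```

Keeping the DAC pass as a post-filter is what lets 'read' be re-controlled at finer granularity without touching the coarser phase-one rules.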

A study on optical coherence tomography system using optical fiber (광섬유를 이용한 광영상 단층촬영기에 관한연구)

  • 양승국;박양하;장원석;오상기;김현덕;김기문
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2004.04a / pp.5-9 / 2004
  • In this paper, we studied an OCT (Optical Coherence Tomography) system, which has been extensively studied because of advantages such as high-resolution cross-sectional images, low cost, and small size. The basic principle of an OCT system is the Michelson interferometer. The characteristics of the light source determine the resolution and the transmission depth. As the light source, we used a commercial SLD with a central wavelength of 1,285 nm and an FWHM (Full Width at Half Maximum) of 35.3 nm. The optical delay line is necessary to equalize the optical path length with the light scattered or reflected from the sample. In order to equalize the optical path length, the stage to which the reference mirror is attached is moved linearly by a step motor. The interferometer is configured as a Michelson interferometer using single-mode fiber, and the scanner can focus on the sample by using the reference arm. Also, 2-dimensional cross-sectional images were measured by scanning the transverse direction of the sample with a step motor. After detecting the internal signal in the lateral direction at a point of the sample, the scanner is moved by the step motor to obtain the 2-dimensional cross-sectional image. A photodiode was used which has high detection sensitivity, an excellent noise characteristic, and a dynamic range from 800 nm to 1,700 nm. The detected signal is a small interference signal mixed with high-frequency noise. After filtering and amplifying this signal, only the envelope of the interference signal is detected. Then, a cross-sectional image is produced by converting this signal into a digital signal using an A/D converter. The resolution of the OCT system is about 30 μm, which corresponds to the theoretical resolution. Also, a cross-sectional image of a ping-pong ball was measured.
The OCT system is configured with a Michelson interferometer, which has low contrast because the power of the fed-back interference light is reduced. Such a problem can be overcome by using an improved interferometer. Also, in order to obtain the cross-sectional image within a short time, it is necessary to reduce the measurement time by improving the optical delay line.
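The detection chain described above, filter, amplify, then keep only the envelope of the interference fringe, can be sketched in software as rectification followed by a moving-average low-pass filter. Signal parameters below are invented for illustration, not the system's actual values:

```python
# Rough sketch of envelope detection: an interference fringe (carrier) under a
# coherence envelope is full-wave rectified and low-pass filtered to recover
# the envelope, as in the analog chain before the A/D converter above.
import math

def make_fringe(n, center, width, cycles):
    """Gaussian coherence envelope times a cosine fringe carrier."""
    env = [math.exp(-((i - center) / width) ** 2) for i in range(n)]
    sig = [e * math.cos(2 * math.pi * cycles * i / n) for i, e in enumerate(env)]
    return sig, env

def envelope(signal, window=15):
    """Full-wave rectify, then moving-average low-pass (peak-normalized)."""
    rect = [abs(s) for s in signal]
    half = window // 2
    out = []
    for i in range(len(rect)):
        chunk = rect[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    peak = max(out)
    return [o / peak for o in out]

fringe, true_env = make_fringe(n=400, center=200, width=40, cycles=60)
detected = envelope(fringe)
peak_index = max(range(len(detected)), key=lambda i: detected[i])
```

The position of the envelope peak along the delay scan is what encodes the depth of the reflecting layer in the sample.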


The Flow-rate Measurements in a Multi-phase Flow Pipeline by Using a Clamp-on Sealed Radioisotope Cross Correlation Flowmeter (투과 감마선 계측신호의 Cross correlation 기법 적용에 의한 다중상 유체의 유량측정)

  • Kim, Jin-Seop;Kim, Jong-Bum;Kim, Jae-Ho;Lee, Na-Young;Jung, Sung-Hee
    • Journal of Radiation Protection and Research / v.33 no.1 / pp.13-20 / 2008
  • The flow rates in a multi-phase flow pipeline were evaluated quantitatively by means of clamp-on sealed radioisotopes and a cross-correlation signal processing technique. The flow rates were calculated by determining the transit time between two sealed gamma sources using a cross-correlation function following FFT filtering, then corrected with the vapor fraction in the pipeline, which was measured by the γ-ray attenuation method. The pipeline model was manufactured from acrylic resin (ID 8 cm, L = 3.5 m, t = 10 mm), and the multi-phase flow patterns were realized by an injection of compressed N2 gas. Two sealed gamma sources of Cs-137 (E = 0.662 MeV, Γ factor = 0.326 R·h⁻¹·m²·Ci⁻¹) of 20 mCi and 17 mCi, and radiation detectors with 2"×2" NaI(Tl) scintillation counters (Eberline, SP-3), were used for this study. Under the given conditions (distance between the two sources: 4D (D: inner diameter), N/S ratio: 0.12~0.15, sampling time Δt: 4 msec), the measured flow rates showed a maximum relative error of 1.7% when compared to the real ones after the vapor content corrections (6.1%~9.2%). From a subsequent experiment, it was proven that the closer the distance between the two sealed sources is, the more precise the measured flow rates are. Provided additional studies related to the selection of radioisotopes and their activity, and an optimization of the experimental geometry, are carried out, it is anticipated that this radioisotope application for flow rate measurements can be used as an important tool for monitoring multi-phase facilities in the petrochemical and refinery industries and contribute economically to their maintenance and control.
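The transit-time idea above can be sketched directly: a density fluctuation seen by the upstream detector reappears at the downstream one after a delay, the cross-correlation peak gives that delay, and velocity = source spacing / delay. The detector signals below are synthetic, and the FFT filtering and vapor-fraction correction steps are omitted:

```python
# Transit-time estimation by cross-correlation: find the lag at which the
# downstream detector signal best matches a delayed copy of the upstream one.
# Signals are synthetic; spacing uses the abstract's 4D with ID 8 cm.

def cross_correlation_lag(up, down, max_lag):
    """Lag (in samples) maximizing the raw cross-correlation of up and down."""
    def score(lag):
        return sum(up[t] * down[t + lag] for t in range(len(up) - lag))
    return max(range(max_lag + 1), key=score)

# Upstream signal: a bubble-like pulse; downstream: same pulse 25 samples later.
up = [0.0] * 200
for i in range(60, 80):
    up[i] = 1.0
down = [0.0] * 200
for i in range(85, 105):
    down[i] = 1.0

dt = 0.004                 # 4 ms sampling interval, as in the abstract
spacing_m = 0.32           # 4D source spacing for D = 8 cm inner diameter
lag = cross_correlation_lag(up, down, max_lag=60)
velocity = spacing_m / (lag * dt)   # mean flow velocity in m/s
```

In the real measurement this velocity is then corrected with the γ-ray-attenuation vapor fraction to obtain the liquid flow rate.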

The Method for Real-time Complex Event Detection of Unstructured Big data (비정형 빅데이터의 실시간 복합 이벤트 탐지를 위한 기법)

  • Lee, Jun Heui;Baek, Sung Ha;Lee, Soon Jo;Bae, Hae Young
    • Spatial Information Research / v.20 no.5 / pp.99-109 / 2012
  • Recently, due to the growth of social media and the spread of smart phones, the amount of data has increased considerably through the heavy use of SNS (Social Network Services). Accordingly, the Big Data concept has emerged and many researchers are seeking solutions to make the best use of big data. To maximize the creative value of the big data held by many companies, it is necessary to combine it with existing data. The physical and logical storage structures of the data sources are so different that a system which can integrate and manage them is needed. In order to process big data, MapReduce was developed as a system that processes data fast through distributed processing. However, it is difficult to construct and store an index for all keywords, and because of the store-then-search process it is to some extent difficult to do real-time processing. It also incurs extra expense to process complex events without a structure for handling heterogeneous data. In order to solve these problems, an existing Complex Event Processing system can be used. A complex event processing system takes data from different sources and combines them with each other, making complex event processing possible; this is useful for real-time processing, especially on stream data. Nevertheless, unstructured data based on the text of SNS posts and internet articles is managed as text, and strings have to be compared every time query processing is done, which results in poor performance. Therefore, we make it possible to manage unstructured data and process queries quickly in a complex event processing system. We extend the data complex function to give a logical schema to strings: string keywords are changed into integer values by filtering with a keyword set. In addition, by using the complex event processing system and processing stream data in memory in real time, we reduce the query processing time compared to reading the data back after it is stored on disk.
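The string-to-integer idea above can be sketched as a one-time keyword dictionary: incoming unstructured text is filtered against a registered keyword set and mapped to integer codes, so the continuous queries compare integers rather than strings. The event format and keyword set below are invented:

```python
# Sketch of keyword-set filtering with integer encoding for fast matching in a
# stream/CEP setting. Keywords and the sample event are invented.

class KeywordCodec:
    """Maps registered keywords to integer codes; unregistered words are dropped."""
    def __init__(self, keywords):
        self.code = {w: i for i, w in enumerate(sorted(keywords))}

    def encode(self, text):
        """Filter the text against the keyword set and return integer codes."""
        return [self.code[w] for w in text.lower().split() if w in self.code]

codec = KeywordCodec({"flood", "earthquake", "fire"})
event_codes = codec.encode("Breaking: fire and flood reported downtown")

# A 'complex event' query over the stream is then an integer-set test:
query = {codec.code["fire"], codec.code["flood"]}
matched = query.issubset(set(event_codes))
```

Integer comparisons are constant-time and cache-friendly, which is the performance gain over repeated string comparison that the abstract targets.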

2-D/3-D Seismic Data Acquisition and Quality Control for Gas Hydrate Exploration in the Ulleung Basin (울릉분지 가스하이드레이트 2/3차원 탄성파 탐사자료 취득 및 품질관리)

  • Koo, Nam-Hyung;Kim, Won-Sik;Kim, Byoung-Yeop;Cheong, Snons;Kim, Young-Jun;Yoo, Dong-Geun;Lee, Ho-Young;Park, Keun-Pil
    • Geophysics and Geophysical Exploration / v.11 no.2 / pp.127-136 / 2008
  • To identify potential gas hydrate areas in the Ulleung Basin, 2-D and 3-D seismic surveys using R/V Tamhae II were conducted in 2005 and 2006. The seismic survey equipment consisted of a navigation system, recording system, streamer cable and air-gun source. For reliable velocity analysis in a deep-sea area where water depths are mostly greater than 1,000 m and the target depth is up to about 500 msec below the seafloor, a 3-km-long streamer and a 1,035 in³ tuned air-gun array were used. During the survey, a suite of quality control operations including source signature analysis, 2-D brute stack, RMS noise analysis and FK analysis was performed. The source signature was calculated to verify its conformity to the quality specification, and a gun dropout test was carried out to examine signature changes due to a single air gun's failure. From the online quality analysis, we could conclude that the overall data quality was very good even though some seismic data were affected by swell noise, parity error, spike noise and current rip noise. In particular, by checking the result of data quality enhancement using FK filtering and a missing-trace restoration technique for the 3-D seismic data inevitably contaminated with current rip noise, the acquired data were accepted and the field survey could be continued. Even in survey areas where the acquired data would otherwise be unsuitable for the quality specification, the marine seismic survey efficiency could be improved by showing the possibility of noise suppression through onboard data processing.
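One of the listed QC checks, RMS noise analysis, amounts to computing the RMS amplitude of each trace and flagging traces above a threshold (e.g. swell or current rip noise). A toy sketch with invented traces and an invented threshold:

```python
# Per-trace RMS noise QC: flag traces whose RMS amplitude exceeds a threshold.
# Traces and the threshold are invented for illustration.
import math

def trace_rms(trace):
    """Root-mean-square amplitude of one seismic trace."""
    return math.sqrt(sum(s * s for s in trace) / len(trace))

def flag_noisy_traces(traces, threshold):
    """Indices of traces whose RMS exceeds the QC threshold."""
    return [i for i, tr in enumerate(traces) if trace_rms(tr) > threshold]

traces = [
    [0.1, -0.2, 0.1, 0.0],     # quiet trace
    [2.0, -2.5, 3.0, -2.0],    # swell-noise contaminated trace
    [0.2, 0.1, -0.1, -0.2],    # quiet trace
]
noisy = flag_noisy_traces(traces, threshold=1.0)
```

Flagged traces are candidates for FK filtering or missing-trace restoration rather than outright rejection, as the abstract notes.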

Enhancement of Inter-Image Statistical Correlation for Accurate Multi-Sensor Image Registration (정밀한 다중센서 영상정합을 위한 통계적 상관성의 증대기법)

  • Kim, Kyoung-Soo;Lee, Jin-Hak;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.4 s.304 / pp.1-12 / 2005
  • Image registration is a process to establish the spatial correspondence between images of the same scene that are acquired at different viewpoints, at different times, or by different sensors. This paper presents a new algorithm for robust registration of images acquired by multiple sensors having different modalities: EO (electro-optic) and IR (infrared) sensors in this paper. Feature-based and intensity-based approaches are the two usual possibilities for image registration. In the former, selection of accurate common features is crucial for high performance, but features in the EO image are often not the same as those in the IR image. Hence, this approach is inadequate for registering EO/IR images. In the latter, normalized mutual information (NMI) has been widely used as a similarity measure due to its high accuracy and robustness, and NMI-based image registration methods assume that the statistical correlation between the two images is global. Unfortunately, since EO and IR images often do not satisfy this assumption, the registration accuracy is not high enough for some applications. In this paper, we propose a two-stage NMI-based registration method based on the analysis of the statistical correlation between EO/IR images. In the first stage, for robust registration, we propose two preprocessing schemes: extraction of statistically correlated regions (ESCR) and enhancement of statistical correlation by filtering (ESCF). For each image, ESCR automatically extracts the regions that are highly correlated to the corresponding regions in the other image. And ESCF adaptively filters each image to enhance the statistical correlation between them. In the second stage, the two output images are registered by using an NMI-based algorithm. The proposed method provides promising results for various EO/IR sensor image pairs in terms of accuracy, robustness, and speed.
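The second-stage similarity measure can be sketched from its definition: NMI = (H(A) + H(B)) / H(A, B), computed from a joint intensity histogram, so that a consistent intensity mapping between modalities scores high even when the intensities themselves differ. The tiny "images" below are invented, and the ESCR/ESCF preprocessing is not reproduced:

```python
# Normalized mutual information from a joint intensity histogram:
# NMI = (H(A) + H(B)) / H(A, B). Inputs are equal-length lists of (binned)
# intensities; the toy data are invented for illustration.
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def nmi(img_a, img_b):
    n = len(img_a)
    joint = {}
    for a, b in zip(img_a, img_b):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    pj = [c / n for c in joint.values()]
    pa, pb = {}, {}                       # marginal distributions
    for (a, b), c in joint.items():
        pa[a] = pa.get(a, 0) + c / n
        pb[b] = pb.get(b, 0) + c / n
    return (entropy(pa.values()) + entropy(pb.values())) / entropy(pj)

eo = [0, 0, 1, 1, 2, 2, 3, 3]
ir_aligned  = [5, 5, 6, 6, 7, 7, 8, 8]   # consistent EO->IR intensity mapping
ir_shuffled = [5, 6, 7, 8, 5, 6, 7, 8]   # correspondence destroyed
```

Registration then searches the transform (shift, rotation) that maximizes this score between the preprocessed EO and IR images.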