• Title/Summary/Keyword: linear convolution


A Study on the Development of an Algorithm for the Representative Unit Hydrograph of a Watershed as a Closed Linear System (폐선형계로 본 유역대표 단위유량도의 유도를 위한 알고리즘의 개발에 관한 연구)

  • 김재한;이원환
    • Water for future
    • /
    • v.13 no.2
    • /
    • pp.35-47
    • /
    • 1980
  • An algorithm is developed to derive a representative 1 hr-unit hydrograph through an analysis of the rainfall-runoff relations of a watershed treated as a closed system. For the base flow separation of a flood hydrograph, the multi-deflection method is proposed herein, which gave better results than the existing empirical methods. A modified $\Phi$-index method is also proposed in this study to determine the time distribution of rainfall excess of a rainstorm, which is essentially a modification of the commonly used $\Phi$-index method of rainfall separation. With the so-obtained rainfall excess hyetograph and the direct runoff hydrograph, a trial-and-error computation of the ordinates of the 1 hr-unit hydrograph was executed in such a manner that the synthesized flood hydrograph closely approximates the observed one, resulting in a unit hydrograph of a piecewise exponential function type. To verify the validity of this study, the 1 hr-unit hydrographs for the Imha and Dongchon in the Nagdong River basin, and Yongdam in the Geum River basin, were derived by this algorithm, and the results were compared with those of the conventional synthetic unit hydrograph method and the Nakayasu method. The validity of this study was also tested by comparing the observed hydrograph with the one computed by applying the unit hydrograph to a specific rainfall event. To generalize the result of this study, a computer program consisting of a main program and three subprograms (for rainfall excess estimation, convolution summation, and sorting) is developed as a package, which is believed to be applicable to other watersheds for purposes similar to those in this study (a minimal sketch of the convolution summation step appears after this entry).

  • PDF
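A minimal sketch of the convolution summation step described above: the direct runoff hydrograph is synthesized by discretely convolving the rainfall excess hyetograph with the 1 hr-unit hydrograph ordinates. The function name and the numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def synthesize_direct_runoff(rainfall_excess, unit_hydrograph):
    """Discrete convolution summation: Q[n] = sum_m P[m] * U[n - m].

    rainfall_excess  : hourly rainfall excess depths (e.g. cm per hour)
    unit_hydrograph  : ordinates of the 1 hr-unit hydrograph (e.g. m^3/s per cm)
    Returns the synthesized direct runoff hydrograph ordinates.
    """
    return np.convolve(rainfall_excess, unit_hydrograph)

# Illustrative numbers only (not from the paper):
excess = np.array([0.5, 1.2, 0.3])            # cm of rainfall excess per hour
uh = np.array([10.0, 35.0, 20.0, 8.0, 2.0])   # m^3/s per cm of excess
direct_runoff = synthesize_direct_runoff(excess, uh)
print(direct_runoff)
```

In a trial-and-error derivation of the kind the abstract describes, the unit hydrograph ordinates would be adjusted until this synthesized hydrograph closely matches the observed direct runoff.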

Correction for SPECT image distortion by non-circular detection orbits (비원형 궤도에서의 검출에 의한 SPECT 영상 왜곡 보정)

  • Lee, Nam-Yong
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.8 no.3
    • /
    • pp.156-162
    • /
    • 2007
  • The parallel beam SPECT system acquires projection data by using collimators in conjunction with photon detectors. The projection data of the parallel beam SPECT system are, however, blurred by the point response function of the collimator, which is used to define the range of directions from which photons can be detected. By increasing the number of parallel holes per unit area in the collimator, one can reduce this blurring effect. This approach, however, still suffers from blurring when the distance between the object and the collimator becomes large. In this paper we consider correction methods for artifacts caused by the non-circular orbit of a parallel beam SPECT system with many parallel holes per detector cell. To do so, we model the relationship between the object and its projection data as a linear system, and propose an iterative reconstruction method that includes artifact correction. We compute the projector and the backprojector, which are required in the iterative method, as a sum of convolutions with distance-dependent point response functions instead of in matrix form, where those functions are analytically computed from a single function (a minimal sketch of such a projector appears after this entry). By doing so, we dramatically reduce the computation time and memory required for the generation of the projector and the backprojector. We conducted several simulation studies to compare the performance of the proposed method with that of the conventional Fourier method. The results show that the proposed method outperforms the Fourier method both objectively and subjectively.

  • PDF
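A minimal sketch of a forward projector built as a sum of convolutions with distance-dependent point response functions, in the spirit of the approach described above. The Gaussian kernel shape, the linear width model, and all names are illustrative assumptions, not the paper's actual response functions; the backprojector would be the corresponding adjoint.

```python
import numpy as np
from scipy.ndimage import convolve1d

def project_with_depth_dependent_psf(image, psf_width_at):
    """Forward-project a 2-D activity map along axis 0, blurring each row
    with a point response function whose width depends on that row's
    distance from the detector (distance-dependent response).

    image        : 2-D array, rows ordered by distance to the detector
    psf_width_at : callable giving the Gaussian PSF sigma (in pixels) at a row index
    Returns a 1-D projection (one value per detector bin).
    """
    projection = np.zeros(image.shape[1])
    for d, row in enumerate(image):
        sigma = max(float(psf_width_at(d)), 0.5)
        # Build a small Gaussian kernel for this distance
        radius = max(1, int(3 * sigma))
        x = np.arange(-radius, radius + 1)
        kernel = np.exp(-0.5 * (x / sigma) ** 2)
        kernel /= kernel.sum()
        # Accumulate the blurred row into the projection (sum of convolutions)
        projection += convolve1d(row, kernel, mode="constant")
    return projection

# Illustrative use: a point source and an assumed linear PSF-width model
image = np.zeros((64, 64))
image[40, 32] = 1.0
proj = project_with_depth_dependent_psf(image, lambda d: 0.5 + 0.05 * d)
```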

An Object Detection and Tracking System using Fuzzy C-means and CONDENSATION (Fuzzy C-means와 CONDENSATION을 이용한 객체 검출 및 추적 시스템)

  • Kim, Jong-Ho;Kim, Sang-Kyoon;Hang, Goo-Seun;Ahn, Sang-Ho;Kang, Byoung-Doo
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.16 no.4
    • /
    • pp.87-98
    • /
    • 2011
  • Detecting a moving object in video and tracking it are basic and necessary preprocessing steps in many video systems, such as object recognition, context awareness, and intelligent visual surveillance. In this paper, we propose a method that can detect a moving object quickly and accurately under real-time changes of background and lighting. Furthermore, our system robustly detects an object even when the target object is occluded by other objects. For effective detection, an eigen-space and FCM are combined, and a CONDENSATION algorithm is used to track the detected object robustly. First, training data collected from a background image are linearly transformed using Principal Component Analysis (PCA). Second, an eigen-background is organized from selected principal components that discriminate well between an object and the background. Next, an object is detected with FCM applied to the convolution of the eigen-vectors from the previous steps with the input image (a minimal sketch of the eigen-background stage appears after this entry). Finally, the object is tracked by using the coordinates of the detected object as the input of the CONDENSATION algorithm. Images containing various moving objects at the same time are collected and used as training data, so that the system adapts to changes of lighting and background with a fixed camera. Test results show that the proposed method robustly detects an object under changes of lighting and background and under partial movement of the object.
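A minimal sketch of the eigen-background stage under stated assumptions: background frames are reduced with PCA, and a new frame's reconstruction error is used as a foreground map. The FCM clustering and CONDENSATION tracking steps are not shown, and all names, shapes, and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_eigen_background(background_frames, n_components=10):
    """Learn an eigen-background from flattened background frames.

    background_frames : array of shape (n_frames, height * width)
    Returns a fitted PCA model whose principal components span the background.
    """
    pca = PCA(n_components=n_components)
    pca.fit(background_frames)
    return pca

def foreground_map(pca, frame, shape):
    """Project a frame onto the eigen-background and reconstruct it; large
    reconstruction error marks pixels the background model cannot explain,
    i.e. candidate foreground (moving object) pixels."""
    coeffs = pca.transform(frame.reshape(1, -1))
    reconstruction = pca.inverse_transform(coeffs)
    error = np.abs(frame - reconstruction.ravel())
    return error.reshape(shape)

# Illustrative use with random stand-in frames (not real video data)
h, w = 24, 32
frames = np.random.rand(50, h * w)
model = fit_eigen_background(frames, n_components=10)
fg = foreground_map(model, np.random.rand(h * w), (h, w))
```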

Study on the Small Fields Dosimetry for High Energy Photon-based Radiation Therapy (고에너지 광자선을 이용한 방사선 치료 시 소조사면에서의 흡수선량평가에 관한 연구)

  • Jeong, Hae-Sun;Han, Young-Yih;Kum, O-Yeon;Kim, Chan-Hyeong
    • Progress in Medical Physics
    • /
    • v.20 no.4
    • /
    • pp.290-297
    • /
    • 2009
  • In the case of radiation treatment using small-field high-energy photon beams, accurate dosimetry is a challenging task because of dosimetrically unfavorable phenomena such as dramatic changes of the dose at the field boundaries, electron dis-equilibrium, and non-uniformity between the detector and the phantom materials. In this study, the absorbed dose in the phantom was measured by using an ion chamber and a diode detector widely used in clinics. $GAFCHROMIC^{(R)}$ EBT films, composed of water-equivalent materials, were also evaluated as a small field detector and compared with the ion chamber and diode detectors. The output factors at 10 cm depth of a solid phantom located 100 cm from the 6 MV linear accelerator (Varian, 6 EX) source were measured for 6 field sizes ($5{\times}5\;cm^2$, $2{\times}2\;cm^2$, $1.5{\times}1.5\;cm^2$, $1{\times}1\;cm^2$, $0.7{\times}0.7\;cm^2$ and $0.5{\times}0.5\;cm^2$). As a result, from $5{\times}5\;cm^2$ to $1.5{\times}1.5\;cm^2$ field sizes, the absorbed doses from the three detectors agreed within 1%. However, the ion chamber underestimated the dose compared to the other detectors in the $1{\times}1\;cm^2$ and smaller field sizes. In order to correct the observed underestimation, a convolution method was employed to eliminate the volume averaging effect of the ion chamber (a minimal illustration of this volume averaging effect appears after this entry). Finally, in the $1{\times}1\;cm^2$ field the absorbed dose with the diode detector was about 3% higher than that with the EBT film, while the dose with the ion chamber after volume correction was 1% lower. For the $0.5{\times}0.5\;cm^2$ field, the dose with the diode detector was 1% larger than that with the EBT film, while the dose with the volume-corrected ionization chamber was 7% lower. In conclusion, the possibility of $GAFCHROMIC^{(R)}$ EBT film as a small field dosimeter was tested, and further investigation will proceed using Monte Carlo simulation.

  • PDF
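A minimal illustration of the volume averaging effect that the convolution correction addresses: the chamber reading is modeled as the dose profile convolved with a top-hat kernel the length of the chamber's sensitive volume, and a simple central-axis correction factor is derived. The kernel shape, chamber length, and beam profile are assumptions for illustration, not the paper's convolution method or measured data.

```python
import numpy as np

def volume_averaged_reading(dose_profile, chamber_length_mm, step_mm):
    """Model the ion-chamber reading as the dose profile convolved with a
    uniform (top-hat) kernel spanning the chamber's sensitive length."""
    n = max(1, int(round(chamber_length_mm / step_mm)))
    kernel = np.ones(n) / n
    return np.convolve(dose_profile, kernel, mode="same")

# Illustrative example: a narrow Gaussian beam profile (not measured data)
step = 0.1                               # mm per sample
x = np.arange(-20, 20, step)             # off-axis distance in mm
profile = np.exp(-0.5 * (x / 3.0) ** 2)  # relative dose, ~7 mm FWHM

reading = volume_averaged_reading(profile, chamber_length_mm=6.0, step_mm=step)
center = len(x) // 2
correction_factor = profile[center] / reading[center]   # > 1: chamber under-reads
print(f"central-axis correction factor: {correction_factor:.3f}")
```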

Target-Aspect-Sentiment Joint Detection with CNN Auxiliary Loss for Aspect-Based Sentiment Analysis (CNN 보조 손실을 이용한 차원 기반 감성 분석)

  • Jeon, Min Jin;Hwang, Ji Won;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.1-22
    • /
    • 2021
  • Aspect-Based Sentiment Analysis (ABSA), which analyzes sentiment based on the aspects that appear in a text, is drawing attention because it can be used in various business industries. ABSA analyzes sentiment, by aspect, for the multiple aspects a text may contain. It is being studied in various forms depending on the purpose, such as analyzing all targets or only aspects and sentiments. Here, the aspect refers to a property of a target, and the target refers to the text that causes the sentiment. For example, for restaurant reviews, the aspects could be food taste, food price, quality of service, mood of the restaurant, etc. Also, if there is a review that says, "The pasta was delicious, but the salad was not," the words "pasta" and "salad," which are directly mentioned in the sentence, become the targets. So far, most ABSA studies have analyzed sentiment based only on aspects or targets. However, even with the same aspects or targets, sentiment analysis may be inaccurate, for instance when aspects or sentiments are divided or when sentiment exists without a target. Consider a sentence like "Pizza and the salad were good, but the steak was disappointing": although the aspect of this sentence is limited to "food," conflicting sentiments coexist. In addition, in a sentence such as "Shrimp was delicious, but the price was extravagant," although the target is "shrimp," opposite sentiments coexist depending on the aspect. Finally, in a sentence like "The food arrived too late and is cold now," there is no target (NULL), but it conveys a negative sentiment toward the aspect "service." Failing to consider both aspects and targets in such cases - when sentiment or aspect is divided or when sentiment exists without a target - creates a dual dependency problem. To address this problem, this research analyzes sentiment by considering both aspects and targets (Target-Aspect-Sentiment Detection, hereafter TASD). This study identified limitations of existing research in the field of TASD: local contexts are not fully captured, and the F1-score drops sharply depending on the number of epochs and the batch size. The existing model excels at capturing overall context and the relations between words, but it struggles with phrases in the local context and is relatively slow to train. Therefore, this study tries to improve the model's performance. To achieve this objective, we additionally used an auxiliary loss for aspect-sentiment classification by constructing CNN (Convolutional Neural Network) layers parallel to the existing model. Where the existing model analyzes aspect-sentiment through BERT encoding, a pooler, and linear layers, this research adds a CNN layer with adaptive average pooling, and training proceeds by adding an additional aspect-sentiment loss to the existing loss (a minimal sketch of this auxiliary head appears at the end of this entry). In other words, during training, the auxiliary loss computed through the CNN layers allows the local context to be captured more precisely. After training, the model performs aspect-sentiment analysis through the existing method. To evaluate the performance of this model, two datasets, SemEval-2015 Task 12 and SemEval-2016 Task 5, were used, and the F1-score increased compared to the existing models. When the batch size was 8 and the number of epochs was 5, the difference was largest: the F1-scores of the existing model and this study were 29 and 45, respectively.
Even when the batch size and number of epochs were adjusted, the F1-scores remained higher than those of the existing models. In other words, even with small batch and epoch numbers, the model can be trained effectively compared to the existing models, so it can be useful in situations where resources are limited. Through this study, aspect-based sentiments can be analyzed more accurately. Through various uses in business, such as development or establishing marketing strategies, both consumers and sellers will be able to make efficient decisions. In addition, it is believed that the model can be fully trained and utilized by small businesses that do not have much data, given that it uses a pre-trained model and recorded a relatively high F1-score even with limited resources.
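A minimal sketch of a CNN auxiliary head of the kind described above, assuming a PyTorch implementation: token representations from the encoder are convolved, pooled with adaptive average pooling, and classified, and the resulting auxiliary loss is added to the existing loss. Random tensors stand in for BERT outputs and labels; layer sizes and names are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CNNAuxiliaryHead(nn.Module):
    """Auxiliary aspect-sentiment classifier run in parallel with the main head.

    It convolves the encoder's token representations to capture local context,
    pools them with adaptive average pooling, and classifies aspect-sentiment.
    """
    def __init__(self, hidden_size=768, n_filters=128, kernel_size=3, n_classes=3):
        super().__init__()
        self.conv = nn.Conv1d(hidden_size, n_filters, kernel_size, padding=kernel_size // 2)
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.classifier = nn.Linear(n_filters, n_classes)

    def forward(self, token_states):
        # token_states: (batch, seq_len, hidden_size) from the BERT encoder
        x = token_states.transpose(1, 2)       # (batch, hidden, seq_len) for Conv1d
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)           # (batch, n_filters)
        return self.classifier(x)

# Illustrative training step (random tensors stand in for BERT outputs and labels)
batch, seq_len, hidden = 8, 64, 768
token_states = torch.randn(batch, seq_len, hidden)
labels = torch.randint(0, 3, (batch,))

aux_head = CNNAuxiliaryHead(hidden_size=hidden)
criterion = nn.CrossEntropyLoss()

main_loss = torch.tensor(1.0)                  # placeholder for the existing model's loss
aux_loss = criterion(aux_head(token_states), labels)
total_loss = main_loss + aux_loss              # auxiliary loss added to the existing loss
total_loss.backward()
```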