• Title/Summary/Keyword: 연산회로 (arithmetic circuit)

Search Result 1,642, Processing Time 0.038 seconds

Connectivity Assessment Based on Circuit Theory for Suggestion of Ecological Corridor (생태축 제안을 위한 회로 이론 기초 연결성 평가)

  • Yoon, Eun-Joo;Kim, Eun-Young;Kim, Ji-Yeon;Lee, Dong Kun
    • Journal of Environmental Impact Assessment / v.28 no.3 / pp.275-286 / 2019
  • In order to prevent local extinction of organisms and to preserve biodiversity, it is important to ensure connectivity between habitats. Even if a habitat is exposed to various disturbance factors, it is possible to avoid or respond to disturbances if it is linked to other habitats. Habitat connectivity can be assessed from a variety of perspectives, but the importance of functional connectivity based on species movement has been emphasized in recent years, owing to the development of computational capabilities and related software. Among the available tools, the connectivity evaluation tool Circuitscape has the advantage that it can provide detailed reference data for city planning, because it maps ecological flows onto individual grid cells based on circuit theory. In this study, the functional connectivity of Suwon was therefore evaluated by applying Circuitscape, and ecological corridors to be conserved and supplemented were suggested based on the result. The results of this study are expected to effectively complement the methodology related to ecological corridors/axes, which was previously provided only in the form of a diagram, and to be useful in the management of development projects and urban planning.
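The circuit-theoretic idea behind Circuitscape, treating the landscape as a resistor network in which a lower effective resistance between two habitat nodes means higher connectivity, can be sketched as follows. This is a minimal illustration, not Circuitscape's actual API: the graph, conductances, and function name are assumptions for the example.

```python
import numpy as np

def effective_resistance(adj, src, dst):
    """Effective resistance between two nodes of a resistor network.

    adj[i][j] is the conductance (1/resistance) of link i-j; in the
    ecological reading, lower effective resistance between two habitat
    nodes means higher functional connectivity.
    """
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj          # graph Laplacian
    # Ground dst (delete its row/column), inject 1 A of current at src.
    keep = [i for i in range(len(adj)) if i != dst]
    sub = lap[np.ix_(keep, keep)]
    current = np.zeros(len(keep))
    current[keep.index(src)] = 1.0                # unit current source
    volt = np.linalg.solve(sub, current)          # node voltages
    return volt[keep.index(src)]                  # V = I*R with I = 1

# Two parallel unit-resistance paths between nodes 0 and 2
# (0-1-2 and 0-3-2): 2 ohm in series on each path, in parallel -> 1 ohm.
adj = np.zeros((4, 4))
for a, b in [(0, 1), (1, 2), (0, 3), (3, 2)]:
    adj[a, b] = adj[b, a] = 1.0
print(effective_resistance(adj, 0, 2))  # ≈ 1.0
```

Circuitscape performs this computation on every grid cell of a raster resistance map; the parallel-path effect shown here is exactly why circuit theory rewards multiple redundant corridors, unlike shortest-path measures.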

Analysis of Feature Map Compression Efficiency and Machine Task Performance According to Feature Frame Configuration Method (피처 프레임 구성 방안에 따른 피처 맵 압축 효율 및 머신 태스크 성능 분석)

  • Rhee, Seongbae;Lee, Minseok;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.27 no.3 / pp.318-331 / 2022
  • With the recent development of hardware computing devices and software-based frameworks, machine tasks using deep learning networks are expected to be utilized in various industrial fields and personal IoT devices. However, running a deep learning network locally requires costly hardware, while transmitting only the machine task results from a server means the user may not receive the results actually requested; to overcome these limitations, Collaborative Intelligence (CI) proposed the transmission of intermediate feature maps as a solution. In this paper, an efficient compression method for feature maps, whose data sizes are vast, is analyzed and presented through experiments to support the CI paradigm. The method applies feature map reordering to increase redundancy and thereby improve compression efficiency in traditional video codecs, and proposes a feature frame configuration that improves compression efficiency while maintaining machine task performance by utilizing image and video compression formats simultaneously. The experiments show that the proposed method achieves a 14.29% BD-rate gain in BPP and mAP compared to the feature compression anchor of MPEG-VCM.
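The basic step of packing feature-map channels into a codec-friendly 2-D frame can be sketched as below. The similarity criterion used for reordering here (sorting channels by mean activation so that similar-looking tiles sit next to each other) is a simple stand-in assumption, not the paper's actual reordering method.

```python
import numpy as np

def tile_feature_map(fmap, cols):
    """Tile a (C, H, W) feature map into one 2-D frame for a video codec.

    Channels are first reordered by mean activation (an illustrative
    stand-in for similarity-based reordering) so that neighbouring
    tiles look alike and intra/inter prediction finds more redundancy.
    """
    c, h, w = fmap.shape
    order = np.argsort(fmap.mean(axis=(1, 2)))   # similar channels adjacent
    rows = -(-c // cols)                         # ceiling division
    frame = np.zeros((rows * h, cols * w), dtype=fmap.dtype)
    for slot, ch in enumerate(order):
        r, q = divmod(slot, cols)
        frame[r * h:(r + 1) * h, q * w:(q + 1) * w] = fmap[ch]
    return frame, order

fmap = np.random.rand(8, 4, 4).astype(np.float32)
frame, order = tile_feature_map(fmap, cols=4)
print(frame.shape)  # (8, 16): 2 rows x 4 columns of 4x4 tiles
```

The resulting frame (or a sequence of such frames) can then be fed to a conventional image or video encoder; the decoder reverses the tiling and the recorded channel order before running the machine task.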

Index-based Searching on Timestamped Event Sequences (타임스탬프를 갖는 이벤트 시퀀스의 인덱스 기반 검색)

  • 박상현;원정임;윤지희;김상욱
    • Journal of KIISE:Databases / v.31 no.5 / pp.468-478 / 2004
  • It is essential in various application areas of data mining and bioinformatics to efficiently retrieve occurrences of interesting patterns from sequence databases. For example, consider a network event management system that records the types and timestamp values of events occurring in a specific network component (e.g., a router). A typical query to find temporal causal relationships among network events is as follows: 'Find all occurrences of CiscoDCDLinkUp that are followed by MLMStatusUP and subsequently by TCPConnectionClose, under the constraint that the interval between the first two events is not larger than 20 seconds and the interval between the first and third events is not larger than 40 seconds.' This paper proposes an indexing method that efficiently answers such queries. Unlike previous methods, which rely on inefficient sequential scans or on data structures not easily supported by DBMSs, the proposed method uses a multi-dimensional spatial index, proven to be efficient in both storage and search, to find the answers quickly without false dismissals. Given a sliding window W, the input to the multi-dimensional spatial index is an n-dimensional vector whose i-th element is the interval between the first event of W and the first occurrence of event type Ei in W. Here, n is the number of event types that can occur in the system of interest. The 'dimensionality curse' problem may arise when n is large; therefore, we use dimension selection or event type grouping to avoid it. The experimental results reveal that the proposed technique can be a few orders of magnitude faster than the sequential scan and ISO-Depth index methods.
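The window-to-vector mapping described above can be sketched as follows; the function name and the use of infinity as a placeholder for event types absent from the window are illustrative assumptions.

```python
def window_vector(window, event_types):
    """Map a sliding window of (event_type, timestamp) pairs to the
    n-dimensional point stored in the spatial index: the i-th element
    is the interval between the first event of the window and the
    first occurrence of event type E_i (inf if the type is absent)."""
    base = window[0][1]
    first = {}
    for etype, ts in window:
        first.setdefault(etype, ts - base)   # keep only the first occurrence
    return [first.get(e, float("inf")) for e in event_types]

# The query window from the example above (timestamps in seconds):
win = [("CiscoDCDLinkUp", 100), ("MLMStatusUP", 115),
       ("TCPConnectionClose", 135)]
types = ["CiscoDCDLinkUp", "MLMStatusUP", "TCPConnectionClose"]
vec = window_vector(win, types)
print(vec)  # [0, 15, 35] -> satisfies the <=20 s and <=40 s constraints
```

The example query then becomes a range predicate over these vectors (second coordinate ≤ 20, third ≤ 40), which is exactly the kind of region query a multi-dimensional spatial index answers efficiently.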

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.96-110 / 2017
  • Most vehicle detection studies using conventional or wide-angle lenses suffer from blind spots in rear detection, and the image is vulnerable to noise and various external environments. In this paper, we propose a detection method for harsh external environments with noise, blind spots, etc. First, a fish-eye lens is used to minimize blind spots compared to a wide-angle lens. Because nonlinear radial distortion grows as the lens angle increases, calibration was performed after initializing and optimizing the distortion constant in order to ensure accuracy. In addition, the original image was analyzed along with calibration to remove fog and correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Fog removal generally takes a considerable amount of time to compute; to reduce this time, fog was removed using the well-known Dark Channel Prior algorithm. Gamma correction was used to correct brightness, and a brightness and contrast evaluation was conducted on the image to determine the gamma value needed for correction. The evaluation used only a part of the image instead of its entirety in order to reduce computation time. Once the brightness and contrast values were calculated, they were used to decide the gamma value and correct the entire image. Brightness correction and fog removal were processed in parallel, and the results were merged into a single image to minimize the total computation time. The HOG feature extraction method was then used to detect the vehicle in the corrected image. As a result, vehicle detection with the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate compared to the existing vehicle detection method.
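The idea of estimating a gamma value from a small patch (to keep per-frame cost low) and then applying it to the whole frame can be sketched as below. The target mid-brightness of 0.5 and the patch location are illustrative assumptions, not the paper's measured settings.

```python
import numpy as np

def auto_gamma(image, patch=None):
    """Choose a gamma from the mean brightness of a small patch and
    apply it to the whole image.  `image` is float in [0, 1]; mapping
    the patch mean to 0.5 is an illustrative choice of target."""
    region = image if patch is None else image[patch]
    mean = float(np.clip(region.mean(), 1e-3, 1 - 1e-3))
    gamma = np.log(0.5) / np.log(mean)       # maps the patch mean to 0.5
    return np.power(image, gamma), gamma

dark = np.full((120, 160), 0.25)             # uniformly under-exposed frame
corrected, g = auto_gamma(dark, patch=(slice(0, 30), slice(0, 40)))
print(round(corrected.mean(), 3))  # 0.5
```

Evaluating only the patch makes the gamma estimate O(patch size) per frame, while the correction itself is a single element-wise power over the full image, which parallelizes well with the fog-removal branch.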

Relationship Analysis between Lineaments and Epicenters using Hotspot Analysis: The Case of Geochang Region, South Korea (핫스팟 분석을 통한 거창지역의 선구조선과 진앙의 상관관계 분석)

  • Jo, Hyun-Woo;Chi, Kwang-Hoon;Cha, Sungeun;Kim, Eunji;Lee, Woo-Kyun
    • Korean Journal of Remote Sensing / v.33 no.5_1 / pp.469-480 / 2017
  • This study aims to understand the relationship between lineaments and epicenters in the Geochang region, Gyungsangnam-do, South Korea. Instrumental observation of earthquakes has been conducted by the Korea Meteorological Administration (KMA) since 1978, and there were 6 earthquakes with magnitudes ranging from 2 to 2.5 in the Geochang region from 1978 to 2016. Lineaments were extracted from a LANDSAT 8 satellite image and a shaded relief map displayed in 3 dimensions using a Digital Elevation Model (DEM). Lineament density was then statistically examined by hotspot analysis. Hexagonal grids were generated to perform the analysis, because a hexagonal pattern expresses lineaments with less discontinuity than square grids, and the grid size was selected to minimize the variance of lineament density. Since hotspot analysis measures the extent of clustering with a Z score, Z scores computed from lineament frequency ($L_f$), length ($L_d$), and intersection ($L_t$) were used to find lineament clusters in the density map. Furthermore, the Z scores were extracted at the epicenters and examined to see the relevance of each density element to the epicenters. As a result, 15 of the 18 densities, recorded as 3 elements at 6 epicenters, were higher than 1.65, the 95th percentile of the standard normal distribution. This indicates that the epicenters coincide with high-density areas. In particular, $L_f$ and $L_t$ had a significant relationship with the epicenters, being located in the upper 95% of the standard normal distribution, except for one epicenter in $L_t$. This study can be used to identify potential seismic zones by improving the accuracy of expressing the spatial distribution of lineaments and by analyzing the relationship between lineament density and epicenters. However, additional studies over a wider study area with more epicenters are recommended to strengthen the results.
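The Z-score test against the 1.65 threshold can be sketched as follows. A full Getis-Ord Gi* hotspot statistic would also pool each cell with its hexagonal neighbours; this sketch uses the plain global z-score of per-cell densities, and the density values are made up for illustration.

```python
import numpy as np

def density_z_scores(density):
    """Standardise per-cell lineament densities; cells with z > 1.65
    (the 95th percentile of the standard normal) are flagged as
    hotspots.  This is the global z-score, a simplification of the
    neighbour-pooled Getis-Ord Gi* used in full hotspot analysis."""
    d = np.asarray(density, dtype=float)
    z = (d - d.mean()) / d.std()
    return z, z > 1.65

dens = np.array([1, 2, 1, 2, 1, 2, 9])   # one strongly clustered cell
z, hot = density_z_scores(dens)
print(hot.tolist())  # only the last (clustered) cell exceeds 1.65
```

In the study, the same comparison is made three times per epicenter (for the $L_f$, $L_d$, and $L_t$ density maps), giving the 18 density values of which 15 cleared the threshold.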

Evaluation of Radioactivity Concentration According to Radioactivity Uptake on Image Acquisition of PET/CT 2D and 3D (PET/CT 2D와 3D 영상 획득에서 방사능 집적에 따른 방사능 농도의 평가)

  • Park, Sun-Myung;Hong, Gun-Chul;Lee, Hyuk;Kim, Ki;Choi, Choon-Ki;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.111-114 / 2010
  • Purpose: There has been recent interest in radioactivity uptake and the acquisition of radioactivity concentration images. The degree of uptake is strongly affected by many factors, including the injected $^{18}F$-FDG volume, tumor size, and blood glucose level. Therefore, we investigated how radioactivity uptake in the target influences 2D and 3D image analysis and examined the radioactivity concentrations that mediate this effect. This study shows the relationship between radioactivity uptake and 2D/3D image acquisition in terms of radioactivity concentration. Materials and Methods: Images were acquired in 2D and 3D modes using the 1994 NEMA PET phantom on a GE Discovery STe 16 PET/CT (GE, U.S.A.), with the ratio of background to hot-sphere radioactivity concentration set to 1:2, 1:4, 1:8, 1:10, 1:20, and 1:30, respectively. CT attenuation correction was applied and the acquisition time was set to 10 minutes. For reconstruction, an iterative method with 2 iterations and 20 subsets was applied to both 2D and 3D. For image analysis, the same ROI was set at the center of the hot sphere and in the background, and the radioactivity counts of each were measured and comparatively analyzed. Results: The measured ratios of hot-sphere to background radioactivity concentration in the ROIs were 1:1.93, 1:3.86, 1:7.79, 1:8.04, 1:18.72, and 1:26.90 in 2D, and 1:1.95, 1:3.71, 1:7.10, 1:7.49, 1:15.10, and 1:23.24 in 3D. The percentage differences were 3.50%, 3.47%, 8.12%, 8.02%, 10.58%, and 11.06% in 2D, with a minimum of 3.47% and a maximum of 11.06%. In 3D, the percentage differences were 3.66%, 4.80%, 8.38%, 23.92%, 23.86%, and 22.69%. Conclusion: The difference in accumulated concentration increases significantly as the radioactivity concentration increases, and 2D images are less affected by the change in radioactivity concentration than 3D images. Therefore, when a patient is examined in a follow-up scan with a different acquisition mode, the scan should be conducted with these effects on the quantitative analysis in mind, and the differences should be taken into account at reading.
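The percentage differences reported above follow from comparing the prepared and measured concentration ratios; a minimal worked example, reproducing the first 2D value from the text, is:

```python
def ratio_error(expected, measured):
    """Percent deviation of the measured hot-sphere/background activity
    ratio from the prepared ratio, as used in the comparison above."""
    return (expected - measured) / expected * 100.0

# Prepared 1:2 phantom; the 2D acquisition measured 1:1.93 (from the text)
print(round(ratio_error(2.0, 1.93), 2))  # 3.5
```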


The Evaluation of Reconstructed Images in 3D OSEM According to Iteration and Subset Number (3D OSEM 재구성 법에서 반복연산(Iteration) 횟수와 부분집합(Subset) 개수 변경에 따른 영상의 질 평가)

  • Kim, Dong-Seok;Kim, Seong-Hwan;Shim, Dong-Oh;Yoo, Hee-Jae
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.1 / pp.17-24 / 2011
  • Purpose: In the nuclear medicine field, high-speed image reconstruction algorithms such as OSEM are now widely used as an alternative to filtered back projection, owing to the rapid development and application of digital computers. However, there is no clearly established relationship between the reconstruction parameters and image quality, and the optimal parameters have not been clearly determined. In this study, the change in image quality of Jaszczak phantom data and brain SPECT patient data according to the number of iterations and subsets was analyzed for a 3D OSEM reconstruction method with 3D beam modeling. Materials and Methods: Data from 5 patients who underwent brain SPECT between August and September 2010 in the nuclear medicine department of ASAN Medical Center were studied and analyzed. The phantom images were obtained from a Jaszczak phantom filled with water mixed with $^{99m}Tc$ (500 MBq) on a Siemens Symbia T2 dual-head gamma camera. When reconstructing each image, for both patient and phantom data, the number of iterations was varied as 1, 4, 8, 12, 24, and 30 and the number of subsets as 2, 4, 8, 16, and 32. For each reconstructed image, the coefficient of variation (to estimate image noise), image contrast, and FWHM were computed and compared. Results: In both the patient and phantom data, image contrast and spatial resolution tended to increase linearly with the number of iterations and subsets, but the coefficient of variation did not improve with the increase of the two parameters. In the comparison by scan time, image contrast and FWHM likewise improved linearly with the number of iterations and subsets for 10-, 20-, and 30-second-per-projection images, but the coefficient of variation again showed no improvement. Conclusion: This experiment confirmed that in 3D OSEM reconstruction with 3D beam modeling, image contrast improves linearly with the number of iterations and subsets, as in the existing 1D and 2D OSEM reconstruction methods. However, this is a simple phantom experiment combined with results from a limited number of patients, and various other variables may exist. Generalizing from these results alone would therefore be excessive, and the 3D OSEM reconstruction method should be evaluated further through additional experiments.
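The two image-quality measures compared in the study can be computed as sketched below; the definitions used here (Michelson-style contrast and background coefficient of variation) are the common ones, and the paper may use slight variants.

```python
import numpy as np

def image_metrics(hot_roi, background_roi):
    """Contrast between a hot ROI and the background, plus the
    coefficient of variation of the background as a noise estimate.
    Definitions are the conventional ones, assumed for illustration."""
    hot = np.mean(hot_roi)
    bkg = np.mean(background_roi)
    contrast = (hot - bkg) / (hot + bkg)
    cv = np.std(background_roi) / np.mean(background_roi)
    return contrast, cv

hot = np.array([30.0, 32.0, 31.0])     # counts in the hot ROI
bkg = np.array([10.0, 12.0, 8.0])      # counts in the background ROI
c, cv = image_metrics(hot, bkg)
print(round(c, 3), round(cv, 3))  # 0.512 0.163
```

The study's finding is that increasing iterations and subsets raises `contrast` roughly linearly while `cv` (noise) does not improve, which is why the optimal parameter choice is a trade-off rather than "more is better".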


A Performance Comparison of the Mobile Agent Model with the Client-Server Model under Security Conditions (보안 서비스를 고려한 이동 에이전트 모델과 클라이언트-서버 모델의 성능 비교)

  • Han, Seung-Wan;Jeong, Ki-Moon;Park, Seung-Bae;Lim, Hyeong-Seok
    • Journal of KIISE:Information Networking / v.29 no.3 / pp.286-298 / 2002
  • The Remote Procedure Call (RPC) has traditionally been used for Inter-Process Communication (IPC) among processes in distributed computing environments. As distributed applications have grown more complex, the Mobile Agent paradigm for IPC has emerged. Because there are several paradigms for IPC, research evaluating and comparing the performance of each paradigm has recently been conducted. However, the performance models used in previous research did not reflect real distributed computing environments correctly, because they did not consider the elements required for providing security services. Since real distributed environments are open, they are very vulnerable to a variety of attacks. In order to execute applications securely in a distributed computing environment, security services that protect applications and information against these attacks must be considered. In this paper, we evaluate and compare the performance of the Remote Procedure Call and Mobile Agent IPC paradigms. We examine the security services needed to execute applications securely and propose new performance models that take those services into account. We design performance models describing an information retrieval system over N database services using Petri nets, and compare the performance of the two paradigms by assigning numerical values to the parameters and measuring the execution time of each. The comparison of the two performance models with security services for secure communication shows that the execution time of the RPC model increases sharply because of the many communications between hosts using heavyweight cryptographic mechanisms, while the execution time of the Mobile Agent model increases only gradually because the Mobile Agent paradigm reduces the amount of communication between hosts.
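A toy cost model consistent with the conclusion above — RPC pays the cryptographic overhead on every round trip, while the agent pays it once per migration and then queries locally — can be written down as follows. The functions and parameter values are illustrative assumptions, not the paper's Petri-net model or its measured numbers.

```python
def rpc_time(n, t_req, t_crypto):
    """n encrypted request/response round trips between client and servers."""
    return n * 2 * (t_req + t_crypto)

def agent_time(n, t_migrate, t_crypto, t_local):
    """One protected migration per server plus cheap local queries."""
    return n * (t_migrate + t_crypto) + n * t_local

# Illustrative parameters (units arbitrary): 10 database services,
# expensive cryptography relative to plain message transfer.
n, t_req, t_crypto, t_migrate, t_local = 10, 5.0, 20.0, 8.0, 1.0
print(rpc_time(n, t_req, t_crypto))                    # 500.0
print(agent_time(n, t_migrate, t_crypto, t_local))     # 290.0
```

The crossover depends on `t_crypto`: as the cryptographic cost per message grows, the RPC total grows twice per service while the agent total grows only once, which mirrors the sharp-vs-gradual growth reported in the abstract.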

List-event Data Resampling for Quantitative Improvement of PET Image (PET 영상의 정량적 개선을 위한 리스트-이벤트 데이터 재추출)

  • Woo, Sang-Keun;Ju, Jung Woo;Kim, Ji Min;Kang, Joo Hyun;Lim, Sang Moo;Kim, Kyeong Min
    • Progress in Medical Physics / v.23 no.4 / pp.309-316 / 2012
  • Multimodal imaging techniques have been rapidly developed to improve diagnosis and the evaluation of therapeutic effects. Despite integrated hardware, registration accuracy is decreased by discrepancies between multimodal images and by insufficient counts resulting from the different acquisition methods of each modality. The purpose of this study was to improve PET images by event data resampling, based on an analysis of the data format, noise, and statistical properties of small-animal PET list data. Inveon PET listmode data were acquired as a 10-minute static scan, 60 minutes after injection of 37 MBq/0.1 ml $^{18}F$-FDG via the tail vein. The listmode data format consists of 48-bit packets, each divided into an 8-bit header and a 40-bit payload. Realigned sinograms were generated from event data resampled from the original listmode data by adjusting LOR locations, by simple event magnification, and by nonparametric bootstrap. The sinograms were reconstructed into images using the 2D OSEM algorithm with 16 subsets and 4 iterations. The prompt coincidence count was 13,940,707 as measured from the PET data header and 13,936,687 as measured from the list-event data analysis. With simple event magnification of the PET data, the maximum improved from 1.336 to 1.743, but noise also increased. The resampling efficiency of the PET data was assessed from images de-noised and improved by shift operations on the payload values of sequential packets. The bootstrap resampling technique provides PET images with improved noise and statistical properties. The list-event data resampling method should aid in improving registration accuracy and early-diagnosis efficiency.
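The packet layout described above (48-bit packets split into an 8-bit header and a 40-bit payload) and the nonparametric bootstrap step can be sketched as follows. The byte order of the payload and the field meanings are assumptions for illustration; the real Inveon format defines its own header codes and payload semantics.

```python
import random

def parse_packets(raw):
    """Split raw listmode bytes into (header, payload) pairs.
    Each packet is 6 bytes = 48 bits: 1 header byte + 5 payload bytes.
    Big-endian payload order is an assumption here."""
    out = []
    for i in range(0, len(raw), 6):
        pkt = raw[i:i + 6]
        header = pkt[0]
        payload = int.from_bytes(pkt[1:6], "big")
        out.append((header, payload))
    return out

def bootstrap(events, rng=random.Random(0)):
    """Nonparametric bootstrap: resample the event list with
    replacement back to its original size."""
    return [rng.choice(events) for _ in events]

# Two synthetic 6-byte packets: headers 0x01 and 0x02, payloads 7 and 9.
raw = bytes([0x01, 0, 0, 0, 0, 7, 0x02, 0, 0, 0, 0, 9])
events = parse_packets(raw)
print(events)                                  # [(1, 7), (2, 9)]
print(len(bootstrap(events)) == len(events))   # True
```

A bootstrap-resampled event list is then binned into a sinogram and reconstructed exactly like the original, which is how the study obtains images with improved noise and statistical properties from the same acquisition.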

The Performance Bottleneck of Subsequence Matching in Time-Series Databases: Observation, Solution, and Performance Evaluation (시계열 데이타베이스에서 서브시퀀스 매칭의 성능 병목 : 관찰, 해결 방안, 성능 평가)

  • 김상욱
    • Journal of KIISE:Databases / v.30 no.4 / pp.381-396 / 2003
  • Subsequence matching is an operation that finds, from time-series databases, subsequences whose changing patterns are similar to a given query sequence. This paper points out the performance bottleneck in subsequence matching and then proposes an effective method that significantly improves the performance of entire subsequence matching by resolving that bottleneck. First, we analyze the disk access and CPU processing times required during the index searching and post-processing steps through preliminary experiments. Based on the results, we show that the post-processing step is the main performance bottleneck in subsequence matching and then claim that its optimization is a crucial issue overlooked in previous approaches. To resolve the bottleneck, we propose a simple but quite effective method that performs the post-processing step in an optimal way. By rearranging the order in which candidate subsequences are compared with the query sequence, our method completely eliminates the redundant disk accesses and CPU processing incurred in the post-processing step. We formally prove that our method is optimal and does not incur any false dismissals. We demonstrate its effectiveness through extensive experiments. The results show that our method speeds up the post-processing step by 3.91 to 9.42 times on a data set of real-world stock sequences and by 4.97 to 5.61 times on data sets of large volumes of synthetic sequences. The results also show that our method reduces the weight of the post-processing step in entire subsequence matching from about 90% to less than 70%, which implies that it successfully resolves the performance bottleneck. As a result, our method provides excellent performance in entire subsequence matching: it is 3.05 to 5.60 times faster on the real-world stock data set and 3.68 to 4.21 times faster on the large synthetic data sets compared with the previous approach.
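The candidate-reordering idea — sort candidates by their physical location so that each disk page is read only once, however many candidates it holds — can be sketched as follows. The `(page, offset)` addressing and function names are illustrative assumptions.

```python
def fetch_candidates(candidates, read_page):
    """Post-processing with candidates rearranged by (page, offset):
    each disk page is read once even when it holds several candidate
    subsequences, which removes the redundant disk accesses."""
    results, cached_page, cached = [], None, None
    for page, offset in sorted(candidates):
        if page != cached_page:                 # physical read only on page change
            cached, cached_page = read_page(page), page
        results.append(cached[offset])
    return results

reads = []                                      # log of physical page reads
disk = {0: ["a", "b"], 1: ["c", "d"]}
def read_page(p):
    reads.append(p)
    return disk[p]

# In index order, these candidates would touch page 0 twice and page 1
# twice; after sorting, each page is fetched exactly once.
cands = [(0, 1), (1, 0), (0, 0), (1, 1)]
got = fetch_candidates(cands, read_page)
print(got)    # ['a', 'b', 'c', 'd']
print(reads)  # [0, 1]: two physical reads for four candidates
```

Because sorting only changes the order of comparisons, not which candidates are compared, the rearrangement cannot introduce false dismissals — matching the optimality claim in the abstract.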