• Title/Summary/Keyword: Edge detection

A Simulation-Based Investigation of an Advanced Traveler Information System with V2V in Urban Network (시뮬레이션기법을 통한 차량 간 통신을 이용한 첨단교통정보시스템의 효과 분석 (도시 도로망을 중심으로))

  • Kim, Hoe-Kyoung
    • Journal of Korean Society of Transportation / v.29 no.5 / pp.121-138 / 2011
  • Increasingly affordable and widely available technologies such as wireless vehicle communication are regarded as a possible alternative to fixed infrastructure-based traffic information systems, which require expensive infrastructure investment and are mostly deployed on uninterrupted freeway networks with limited room for spatial expansion. This paper develops an advanced decentralized traveler information system (ATIS) using vehicle-to-vehicle (V2V) communication, whose performance (drivers' travel time savings) is enhanced by three complementary functions: an autonomous automatic incident detection algorithm, a reliable sample size function, and a driver behavior model. The system is evaluated in a typical $6{\times}6$ urban grid network under a non-recurrent traffic state (a traffic incident) while varying three key parameters (traffic flow, communication radio range, and penetration ratio), using the off-the-shelf microscopic simulation model VISSIM under an ideal vehicle communication environment. Simulation outputs indicate that as the three key parameters increase, more participating vehicles contribute to traffic data propagation within fewer communication groups, and data are disseminated faster. Participating vehicles also saved travel time by dynamically updating their traffic state information and searching for new routes. Focusing on the travel time difference of (instantly) re-routing vehicles, lower traffic flow cases saved more time than higher ones: a relatively small number of vehicles in the 300 vph case re-route during the most system-efficient period (the early stage of the traffic incident), whereas more vehicles in the 514 vph case re-route during a less system-efficient period, even after the incident has been resolved. In general, re-routing on network-entering links saved more travel time than re-routing anywhere else in the network, except where the direct effect of the traffic incident triggers re-routing during the effective incident period; in that case the location and direction of the incident link determine the spatial distribution of re-routing vehicles.
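
As a rough illustration of the decentralized information flow this abstract describes (vehicles exchanging link travel-time reports over V2V and re-routing on the updated estimates), the following Python sketch uses a simple "latest report wins" merge rule and a Dijkstra re-route. The function names, merge rule, and toy network are assumptions for illustration only and do not reproduce the paper's VISSIM-based algorithms.

```python
# Minimal sketch: merge V2V travel-time reports, then re-route on the updated estimates.
import heapq

def merge_reports(local, received):
    """Keep the most recent travel-time report per link (latest timestamp wins)."""
    for link, (tt, stamp) in received.items():
        if link not in local or stamp > local[link][1]:
            local[link] = (tt, stamp)
    return local

def shortest_path(graph, times, origin, dest):
    """Dijkstra over the current link travel-time estimates."""
    dist, prev = {origin: 0.0}, {}
    pq = [(0.0, origin)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dest:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt in graph.get(node, []):
            nd = d + times.get((node, nxt), (1.0, 0))[0]
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(pq, (nd, nxt))
    if dest not in prev and dest != origin:
        return None
    path, node = [dest], dest
    while node != origin:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy example: an incident report received over V2V slows link (A, B), so the
# vehicle re-routes via C. Network, travel times, and timestamps are made up.
graph = {"A": ["B", "C"], "C": ["B"], "B": []}
times = {("A", "B"): (60.0, 0), ("A", "C"): (40.0, 0), ("C", "B"): (30.0, 0)}
times = merge_reports(times, {("A", "B"): (200.0, 5)})
print(shortest_path(graph, times, "A", "B"))   # ['A', 'C', 'B']
```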

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services / v.16 no.2 / pp.49-55 / 2015
  • According to traffic accident statistics for the last five years, more traffic accidents occurred at night than during the day. Among the various causes of these accidents, one major factor is inappropriate or missing street lighting, which confuses drivers' vision. In this paper, we design and implement a smartphone-based lane luminance measurement application that stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street light facilities and areas without street lights. The application is implemented in native C/C++ using the Android NDK, which improves its execution speed over code written in Java or other languages. To measure road luminance, the input image is converted from the RGB color space to the YCbCr color space, and the Y channel gives the luminance of the road. The application detects the road lanes, computes the lane luminance, and uploads it to the database server. It captures road video with the smartphone camera and reduces computational cost by restricting processing to a region of interest (ROI) in each input image. The ROI is converted to a grayscale image, and the Canny edge detector is applied to extract lane outlines. The Hough line transform is then applied to obtain a set of candidate lanes, and a lane detection algorithm that uses the gradients of the candidate lanes selects the left and right lane boundaries. Once both lanes are detected, a triangular area is set up extending 20 pixels down from the intersection of the lanes, and the road luminance is estimated from this triangle. The Y value is computed from the R, G, and B values of each pixel in the triangle, and the average Y value is scaled to a range of 0 to 100 to express the road luminance, with each pixel value rendered in a color between black and green. The vehicle's location, obtained from the smartphone's GPS sensor, is stored in the database server together with the analyzed luminance of the road about 60 meters ahead, transmitted by wireless communication every 10 minutes. We expect the collected road luminance information to warn drivers for safe driving and to help improve road lighting renovation plans.
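
As a rough illustration of the processing pipeline this abstract describes (ROI, grayscale, Canny edge detection, Hough line transform, slope-based lane selection, and a triangular patch 20 pixels below the lane intersection averaged in the Y channel), here is a minimal OpenCV/Python sketch. The thresholds, ROI bounds, and candidate selection rule are assumptions, and the paper's Android NDK (C/C++) implementation is not reproduced.

```python
import cv2
import numpy as np

def road_luminance(frame_bgr):
    """Estimate road luminance (0..100) from one dashcam frame; None if lanes are not found."""
    h = frame_bgr.shape[0]
    roi = frame_bgr[h // 2:, :]                          # assume lanes appear in the lower half
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                     # outline of lane markings
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return None
    left, right = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        if x2 == x1:
            continue
        slope = (y2 - y1) / (x2 - x1)                    # split candidates by slope sign
        if slope < -0.3:
            left.append((x1, y1, x2, y2))
        elif slope > 0.3:
            right.append((x1, y1, x2, y2))
    if not left or not right:
        return None
    def line_params(x1, y1, x2, y2):                     # y = m*x + b
        m = (y2 - y1) / (x2 - x1)
        return m, y1 - m * x1
    ml, bl = line_params(*left[0])
    mr, br = line_params(*right[0])
    if ml == mr:
        return None
    xi = (br - bl) / (ml - mr)                           # lane intersection point
    yi = ml * xi + bl
    # Triangle: apex at the intersection, base 20 pixels further down along both lanes.
    yb = yi + 20
    tri = np.array([[int(xi), int(yi)],
                    [int((yb - bl) / ml), int(yb)],
                    [int((yb - br) / mr), int(yb)]], np.int32)
    mask = np.zeros(roi.shape[:2], np.uint8)
    cv2.fillPoly(mask, [tri], 255)
    ycrcb = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)       # Y channel carries the luminance
    y_mean = cv2.mean(ycrcb[:, :, 0], mask=mask)[0]
    return y_mean / 255.0 * 100.0                        # scale to the paper's 0..100 range
```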

Development of Position Encoding Circuit for a Multi-Anode Position Sensitive Photomultiplier Tube (다중양극 위치민감형 광전자증배관을 위한 위치검출회로 개발)

  • Kwon, Sun-Il;Hong, Seong-Jong;Ito, Mikiko;Yoon, Hyun-Suk;Lee, Geon-Song;Sim, Kwang-Souk;Rhee, June-Tak;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging / v.42 no.6 / pp.469-477 / 2008
  • Purpose: The goal of this paper is to present the design and performance of a position encoding circuit for the $16{\times}16$ anode array of a position-sensitive multi-anode photomultiplier tube for small animal PET scanners. The circuit, which reduces the number of readout channels from 256 to 4, is based on a charge division method using a resistor array. Materials and Methods: The position encoding circuit was simulated with PSpice before fabrication. It reads out the signals from H9500 flat-panel PMTs (Hamamatsu Photonics K.K., Japan) on which $1.5{\times}1.5{\times}7.0\;mm^3$ $L_{0.9}GSO$ ($Lu_{1.8}Gd_{0.2}SiO_{5}:Ce$) crystals were mounted. For coincidence detection, two different PET modules were used: one consisted of a single $29{\times}29$ $L_{0.9}GSO$ crystal layer, and the other of two layers ($28{\times}28$ and $29{\times}29$ $L_{0.9}GSO$) offset from each other by half a crystal pitch in the x- and y-directions. A crystal mapping algorithm was also developed to identify individual crystals. Results: Each crystal was clearly visible in the flood images. The crystal identification capability was further enhanced by changing the values of the resistors near the edge of the resistor array. Energy resolutions of individual crystals were about 11.6% (SD 1.6). The flood images were segmented well by the proposed crystal mapping algorithm. Conclusion: The position encoding circuit yielded clear crystal separation and sufficient energy resolution with the H9500 flat-panel PMT and $L_{0.9}GSO$ crystals, and is suitable for use in small animal PET scanners.
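
Four-channel charge-division readouts of this kind are commonly decoded with Anger-type logic and histogrammed into a flood image for crystal identification. The Python sketch below shows that generic decoding; the corner labels, weighting, and binning are illustrative assumptions, not the paper's specific resistor-network design.

```python
import numpy as np

def decode_event(a, b, c, d):
    """a..d: integrated charges collected at the four corners of the resistor array."""
    total = a + b + c + d                  # proportional to the deposited energy
    x = ((b + d) - (a + c)) / total        # normalized horizontal position, -1..1
    y = ((a + b) - (c + d)) / total        # normalized vertical position, -1..1
    return x, y, total

def flood_image(events, bins=256):
    """Histogram decoded positions into a flood image used for crystal identification."""
    positions = [decode_event(*e)[:2] for e in events]
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    hist, _, _ = np.histogram2d(xs, ys, bins=bins, range=[[-1, 1], [-1, 1]])
    return hist
```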

Cloud-cell Tracking Analysis using Satellite Image of Extreme Heavy Snowfall in the Yeongdong Region (영동지역의 극한 대설에 대한 위성관측으로부터 구름 추적)

  • Cho, Young-Jun;Kwon, Tae-Yong
    • Korean Journal of Remote Sensing / v.30 no.1 / pp.83-107 / 2014
  • This study presents the spatial characteristics of clouds, derived from satellite imagery, for extreme heavy snowfall events in the Yeongdong region. Three extreme heavy snowfall events in the Yeongdong region during the last 12 years (2001 ~ 2012), in which fresh snow cover exceeded 50 cm/day, were selected. The spatial characteristics of the clouds (minimum brightness temperature Tmin, cloud size, and center of the cloud cell) were analyzed by tracking the main cloud cell associated with each event, and were compared with radar precipitation over the Yeongdong region to investigate the relationship between cloud and precipitation. The results are summarized as follows. The selected extreme heavy snowfall events were associated with isolated, well-developed, small-scale convective clouds that either developed over the Yeongdong region or moved from East Korea Bay toward the Yeongdong region. During the main precipitation period, the cloud-cell Tmin was low ($-40{\sim}-50^{\circ}C$) and the cloud area was small (17,000 ~ 40,000 $km^2$). The precipitation area from radar (${\geq}$ 0.5 mm/hr) was also small and isolated in shape (4,000 ~ 8,000 $km^2$). The locations of the cloud and precipitation were similar, but their centers were located close to the coast of the Yeongdong region. In all events, the extreme heavy snowfall occurred while a developed cloud cell was moving into the coastal waters of the Yeongdong region; however, in one of the three events the developing stages of the cloud and the precipitation did not match each other well. Water vapor imagery shows that the cloud cell developed on the northern edge of the dry (dark) region. Therefore, from the combined analysis of cloud and precipitation, the selected extreme heavy snowfall events are associated with a small-scale secondary cyclone or vortex rather than an explosive polar low. Detecting and tracking such small-scale cloud cells is therefore important for real-time forecasting of extreme heavy snowfall in the Yeongdong region.
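
A minimal sketch of the kind of cloud-cell detection and tracking described here follows: threshold an IR brightness-temperature field, label connected cold regions, record each cell's Tmin, area, and centroid, and link cells between successive images by nearest centroid. The threshold, pixel area, and linking rule are assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

def detect_cells(tb, threshold_c=-40.0, pixel_area_km2=16.0):
    """tb: 2-D brightness-temperature field in deg C. Returns one dict per cloud cell."""
    labels, n = ndimage.label(tb <= threshold_c)       # connected cold regions
    cells = []
    for i in range(1, n + 1):
        mask = labels == i
        ys, xs = np.nonzero(mask)
        cells.append({
            "tmin": float(tb[mask].min()),             # coldest cloud-top pixel
            "area_km2": float(mask.sum() * pixel_area_km2),
            "center": (float(ys.mean()), float(xs.mean())),
        })
    return cells

def track(prev_cells, curr_cells):
    """Match each current cell to the nearest previous centroid (simple linking)."""
    pairs = []
    for c in curr_cells:
        if not prev_cells:
            pairs.append((None, c))
            continue
        nearest = min(prev_cells, key=lambda q: (q["center"][0] - c["center"][0]) ** 2
                                                + (q["center"][1] - c["center"][1]) ** 2)
        pairs.append((nearest, c))
    return pairs
```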

Computer Assisted EPID Analysis of Breast Intrafractional and Interfractional Positioning Error (유방암 방사선치료에 있어 치료도중 및 분할치료 간 위치오차에 대한 전자포탈영상의 컴퓨터를 이용한 자동 분석)

  • Sohn Jason W.;Mansur David B.;Monroe James I.;Drzymala Robert E.;Jin Ho-Sang;Suh Tae-Suk;Dempsey James F.;Klein Eric E.
    • Progress in Medical Physics / v.17 no.1 / pp.24-31 / 2006
  • Automated analysis software was developed to measure the magnitude of intrafractional and interfractional errors during breast radiation treatments. Error analysis results are important for determining suitable planning target volumes (PTV) prior to implementing breast-conserving 3-D conformal radiation treatment (CRT). The electronic portal imaging device (EPID) used for this study was a Portal Vision LC250 liquid-filled ionization detector (fast frame-averaging mode, 1.4 frames per second, $256{\times}256$ pixels). Twelve patients were imaged for a minimum of 7 treatment days. On each treatment day, an average of 8 to 9 images per field were acquired (dose rate of 400 MU/minute). We developed automated image analysis software to quantitatively analyze 2,931 images (encompassing 720 measurements). Standard deviations ($\sigma$) of the intrafractional (breathing motion) and interfractional (setup uncertainty) errors were calculated. The PTV margin needed to include the clinical target volume (CTV) with a 95% confidence level was calculated as $2\;(1.96\;\sigma)$. To compensate for intrafractional error (mainly due to breathing motion), the required PTV margin ranged from 2 mm to 4 mm; however, PTV margins compensating for interfractional error ranged from 7 mm to 31 mm. The total average error observed for the 12 patients was 17 mm. The interfractional setup error was 2 to 15 times larger than the intrafractional error associated with breathing motion. Prior to 3-D conformal or IMRT breast treatment, the magnitude of setup errors must be measured and properly incorporated into the PTV. To reduce the large PTVs required for breast IMRT or 3-D CRT, an image-guided system would be extremely valuable, if not required. EPID systems should incorporate automated analysis software, as described in this report, to process and take advantage of the large number of EPID images available for error analysis; this will help individual clinics arrive at an appropriate PTV for their practice. Such systems can also provide valuable patient monitoring information with minimal effort.
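
The margin rule quoted above, PTV margin $= 2\;(1.96\;\sigma)$, can be illustrated with a short sketch that computes the margin from a set of measured displacements; the sample displacement values below are hypothetical.

```python
import statistics

def ptv_margin_mm(displacements_mm):
    """Return the 95%-confidence PTV margin (mm): 2 * (1.96 * sigma) of the displacements."""
    sigma = statistics.stdev(displacements_mm)   # sample standard deviation of the errors
    return 2 * 1.96 * sigma

# Hypothetical intrafractional (breathing) displacements for one field, in mm:
print(round(ptv_margin_mm([0.5, -0.8, 1.1, -0.3, 0.9, -1.0, 0.4]), 1))
```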
