• Title/Summary/Keyword: Top-View Image


Real-Time Rendering of a Displacement Map using an Image Pyramid (이미지 피라미드를 이용한 변위 맵의 실시간 렌더링)

  • Oh, Kyoung-Su;Ki, Hyun-Woo
    • Journal of KIISE: Computer Systems and Theory / v.34 no.5_6 / pp.228-237 / 2007
  • Displacement mapping adds realistic detail to polygonal meshes without changing their geometry. We present a real-time, artifact-free inverse displacement mapping method. For each pixel, we construct a ray and trace it through the displacement map to find an intersection. To skip empty regions safely, we traverse the image pyramid of the displacement map in top-down order. Furthermore, when the displacement map is magnified, the intersection with the bilinearly interpolated displacement map can be found. When the displacement map is viewed at a distance, our method supports mipmap-like prefiltering to improve image quality and speed. Experimental results show that our method produces correct images even at grazing view angles. The rendering speed of a test scene exceeds several hundred frames per second, and the resolution of the displacement map has little influence on rendering speed. Our method is simple enough to be added easily to existing virtual reality systems. (A minimal traversal sketch follows below.)
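
The top-down, empty-space-skipping traversal can be illustrated with a small sketch. The Python code below is an illustration written for this listing, not the authors' implementation: it builds a max pyramid over a 1-D heightfield and marches a ray through it, skipping any texel span whose pyramid maximum lies below the ray and descending to a finer level only when a hit is possible.

```python
import numpy as np

def build_max_pyramid(h):
    """Coarser levels store the maximum of their two child texels."""
    levels = [np.asarray(h, dtype=float)]
    while levels[-1].size > 1:
        cur = levels[-1]
        if cur.size % 2:                              # pad odd-sized levels
            cur = np.append(cur, cur[-1])
        levels.append(np.maximum(cur[0::2], cur[1::2]))
    return levels

def trace_heightfield(h, x0, y0, slope):
    """First x where the ray y(x) = y0 + slope*(x - x0), marching in +x with
    slope < 0, hits the column heightfield h; None if it leaves the map."""
    assert slope < 0, "sketch assumes a downward ray marching in +x"
    pyramid = build_max_pyramid(h)
    top = len(pyramid) - 1
    level, x, n = top, float(x0), len(h)
    while x < n:
        texel = int(x) >> level
        x_edge = float((texel + 1) << level)          # right edge of this texel span
        if y0 + slope * (x_edge - x0) >= pyramid[level][texel]:
            x = x_edge                                # ray stays above the span: skip it
            level = min(level + 1, top)               # try to coarsen again
        elif level > 0:
            level -= 1                                # possible hit: refine
        else:
            x_hit = x0 + (h[texel] - y0) / slope      # crossing with the column top
            return max(x, x_hit)
    return None

h = np.array([0.1, 0.2, 0.15, 0.6, 0.3, 0.2, 0.8, 0.4])
print(trace_heightfield(h, x0=0.0, y0=1.0, slope=-0.09))   # 6.0: enters the 0.8-high column
```

The same skip/refine logic carries over to 2-D displacement maps, where each coarser texel stores the maximum displacement of a 2×2 block of children.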

Classification of Trucks using Convolutional Neural Network (합성곱 신경망을 사용한 화물차의 차종분류)

  • Lee, Dong-Gyu
    • Journal of Convergence for Information Technology / v.8 no.6 / pp.375-380 / 2018
  • This paper proposes a classification method using a convolutional neural network (CNN), which can determine the truck type from an input image without a separate feature-extraction step. To classify vehicle images automatically according to the type of cargo box, top-view images of the vehicles are used as input, and a CNN structure suited to these images is designed. Training images and their correct labels are prepared, and the network weights are obtained through the learning process. An actual image is then fed to the CNN and its output is computed. Classification performance is evaluated by comparing the CNN output with the actual vehicle types. Experimental results show that the vehicle images could be classified with more than 90 percent accuracy according to cargo-box type, and that the method can be used for pre-classification when inspecting loading defects. (A minimal classifier sketch follows below.)
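
As a rough illustration of this kind of model, the following PyTorch sketch defines a small CNN that maps single-channel top-view crops to cargo-box classes and runs one training step on random stand-in data. The 128×128 input size, the channel counts, and the three-class output are assumptions made for the example, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TruckCNN(nn.Module):
    """Small CNN classifier for (assumed) 1x128x128 top-view crops."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TruckCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# one toy training step on random tensors standing in for labelled top-view images
images = torch.randn(8, 1, 128, 128)
labels = torch.randint(0, 3, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```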

Display System on a Tabletop for Two Viewers (2방향 관찰면 테이블형 디스플레이 시스템)

  • Yoon, Ki-Hyuk;Kim, Sung-Kyu
    • Korean Journal of Optics and Photonics / v.23 no.6 / pp.255-263 / 2012
  • In this paper, we designed a tabletop display that allows two viewers to see different images simultaneously, each within his or her own viewing zone. To construct the designed viewing zones, we established the basic design conditions for a parallax barrier combined with a commercial LCD panel. When the viewing zones for the two viewers are formed with only a two-view-point design, the interval between the centers of the viewing zones and the width of each zone turn out smaller than the designed values. We analyzed these primary cases, introduced two modified design methods that enlarge the interval and the width of the viewing zones, and simulated their characteristics. With a design using six unit view-points, in which each viewer's zone is formed from merged view-points, an adequate interval and width of the viewing zones can be obtained, and we verified this by comparison with a tabletop display module that we fabricated. (The textbook two-view barrier relations are summarized below.)
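
For orientation only, the textbook two-view parallax-barrier geometry (not the paper's modified six-view-point design) relates the per-view pixel pitch p, the view-point separation e, the design viewing distance D, the barrier-to-pixel gap g, and the barrier pitch b through similar triangles:

```latex
% standard two-view parallax-barrier relations (assumed textbook geometry,
% not taken from the paper)
\[
  \frac{p}{g} = \frac{e}{D}
  \;\Longrightarrow\;
  g = \frac{pD}{e},
  \qquad
  b = \frac{2pD}{D + g}.
\]
```

The paper's modified designs can be read as adjustments of this basic geometry so that the merged view-point zones reach the intended interval and width.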

Estimation of Rice Heading Date of Paddy Rice from Slanted and Top-view Images Using Deep Learning Classification Model (딥 러닝 분류 모델을 이용한 직하방과 경사각 영상 기반의 벼 출수기 판별)

  • Hyeok-jin Bak;Wan-Gyu Sang;Sungyul Chang;Dongwon Kwon;Woo-jin Im;Ji-hyeon Lee;Nam-jin Chung;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.4 / pp.337-345 / 2023
  • Estimating the rice heading date is one of the most important agricultural tasks related to productivity. However, abnormal climates around the world are making it increasingly difficult to estimate, so a more objective classification method is needed than the existing ones. In this study, we aimed to classify the rice heading stage from various images using a CNN classification model. We collected top-view images taken from a drone and a phenotyping tower, as well as slanted-view images captured with an RGB camera. The collected images were preprocessed into input data for the CNN model. The architectures employed were ResNet50, InceptionV3, and VGG19, which are commonly used for image classification. All models reached an accuracy of 0.98 or higher regardless of architecture and image type. We also used Grad-CAM to check visually which image features the models relied on for classification, and then verified that our model accurately determines the heading date in paddy fields: the estimated heading dates differed by approximately one day on average in the four paddy fields. These results suggest that the heading date can be estimated automatically and quantitatively from various paddy-field monitoring images. (A minimal Grad-CAM sketch follows below.)
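
The Grad-CAM step can be sketched with standard PyTorch hooks. The code below is an illustration, not the study's code: the ResNet50 is untrained, and the two-class head, 224×224 input, and random tensor are placeholders standing in for a fine-tuned model and a preprocessed canopy image.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # heading vs. not-yet-heading
model.eval()

activations, gradients = {}, {}

def save_activation(_, __, output):
    activations["value"] = output.detach()
    # also capture the gradient flowing back through this activation
    output.register_hook(lambda grad: gradients.update(value=grad.detach()))

model.layer4.register_forward_hook(save_activation)

image = torch.randn(1, 3, 224, 224)                   # stand-in for a preprocessed image
logits = model(image)
logits[0, logits.argmax()].backward()                 # gradient of the predicted class score

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # channel importance
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalise to a [0, 1] heat map
print(cam.shape)                                              # torch.Size([1, 1, 224, 224])
```

The resulting heat map shows which regions of the canopy drive the heading/non-heading decision, which is the kind of visual check the abstract describes.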

Analysis of Behavioral Characteristics of Broilers by Feeding, Drinking, and Resting Spaces according to Stocking Density using Image Analysis Technique (영상분석기법을 활용한 사육밀도에 따른 급이·급수 및 휴식공간별 육계의 행동특성 분석)

  • Kim, Hyunsoo;Kang, HwanKu;Kang, Boseok;Kim, ChanHo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.12 / pp.558-569 / 2020
  • This study examined how often broilers stay in each functional area under different stocking densities, using an ICT-based image analysis technique from the perspective of precision livestock farming (PLF), in order to understand the normal age-dependent behavior patterns of broilers as domestic broiler farms grow. Broilers were kept in an experimental pen (3.3 × 2.7 m) in a poultry house in Gyeonggi Province. The stocking densities were 9.5 birds/m² (n=85) and 19 birds/m² (n=170), and the frequency of stay in the feeding, drinking, and rest areas was monitored with a top-view camera. Image data of three color-marked broilers per stocking density were acquired for six hours at each age (12, 16, 22, 27, and 29 days). An object-tracking technique was applied to the collected data, connecting approximately 640,000 frames at 30 fps into cumulative movement paths so that the frequency of stay in each area could be quantified. At both stocking densities the frequencies were significant in the order rest, feeding, and drinking area (p<0.001): 57.9, 24.2, and 17.9% at 9.5 birds/m², and 73.2, 16.8, and 10.0% at 19 birds/m². The frequency of stay in each area could thus be evaluated by stocking density with an ICT-based image analysis technique that minimizes stress on the birds. This method is expected to provide basic data for developing an ICT-based management system with real-time monitoring. (A minimal zone-dwell sketch follows below.)
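
Once each bird's centroid has been tracked frame by frame, the area-wise frequency of stay reduces to counting which zone the centroid falls into. A minimal sketch, with made-up zone rectangles rather than the paper's actual pen layout:

```python
# Hypothetical zone layout for a 640x480 top-view frame (not the paper's pen).
ZONES = {
    "feeding": (0, 0, 200, 480),        # x_min, y_min, x_max, y_max in pixels
    "drinking": (200, 0, 400, 480),
    "rest": (400, 0, 640, 480),
}

def zone_of(x, y):
    for name, (x0, y0, x1, y1) in ZONES.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def dwell_ratios(track):
    """track: iterable of per-frame (x, y) centroids of one tracked bird."""
    counts = dict.fromkeys(ZONES, 0)
    total = 0
    for x, y in track:
        zone = zone_of(x, y)
        if zone is not None:
            counts[zone] += 1
            total += 1
    return {name: counts[name] / total for name in counts} if total else counts

# toy track of four frames
print(dwell_ratios([(50, 100), (250, 120), (500, 300), (520, 310)]))
```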

Coherent X-ray Diffraction Imaging with Single-pulse Table-top Soft X-ray Laser

  • Kang, Hyon-Chol;Kim, H.T.;Lee, S.K.;Kim, C.M.;Choi, I.W.;Yu, T.J.;Sung, J.H.;Hafz, N.;Jeong, T.M.;Kang, S.W.;Jin, Y.Y.;Noh, Y.C.;Ko, D.K.;Kim, S.S.;Marathe, S.;Kim, S.N.;Kim, C.;Noh, D.Y.;Lee, J.
    • Proceedings of the Optical Society of Korea Conference / 2008.02a / pp.429-430 / 2008
  • We demonstrate coherent x-ray diffraction imaging using a table-top x-ray laser at a wavelength of 13.9 nm, driven by a 10-Hz Ti:sapphire laser system at the Advanced Photonics Research Institute in Korea. Since the x-ray flux reaches as high as 10⁹ photons/pulse in a 20 × 20 μm² field of view, we measured a single-pulse diffraction pattern of a micrometer-scale object with a high dynamic range of diffraction intensities and successfully reconstructed the image using a phase-retrieval algorithm with an oversampling ratio of 1:6. The imaging resolution is ~150 nm and improves considerably when many diffraction patterns are stacked. This demonstration can be extended to biological samples at the diffraction-limited resolution. (A minimal phase-retrieval sketch follows below.)
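
Phase retrieval from an oversampled diffraction magnitude can be illustrated with a toy error-reduction loop in NumPy. This is a generic sketch of the idea, not the reconstruction code used in the experiment; the object, its known support, and the iteration count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

obj = np.zeros((64, 64))
obj[24:40, 24:40] = rng.random((16, 16))     # object padded with zeros = oversampling
support = obj > 0

measured = np.abs(np.fft.fft2(obj))          # "recorded" diffraction magnitudes

estimate = rng.random(obj.shape) * support   # random start inside the support
for _ in range(500):
    F = np.fft.fft2(estimate)
    F = measured * np.exp(1j * np.angle(F))  # keep phases, impose measured magnitudes
    estimate = np.real(np.fft.ifft2(F))
    estimate[~support] = 0                   # impose the support constraint
    estimate = np.clip(estimate, 0, None)    # and positivity

print(np.linalg.norm(estimate - obj) / np.linalg.norm(obj))   # relative error
```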


STUDY OF CORRELATION BETWEEN WETTED FUEL FOOTPRINTS ON COMBUSTION CHAMBER WALLS AND UBHC IN ENGINE START PROCESSES

  • KIM H.;YOON S.;LAI M.-C.
    • International Journal of Automotive Technology / v.6 no.5 / pp.437-444 / 2005
  • Unburned hydrocarbon (UBHC) emissions from gasoline engines remain a primary engineering research and development concern due to stricter emission regulations. Gasoline engines produce more UBHC emissions during cold start and warm-up than during any other stage of operation, because of insufficient fuel-air mixing, particularly in view of the additional fuel enrichment used for early starting. Impingement of fuel droplets on the cylinder wall is a major source of UBHC and a concern for oil dilution. This paper describes an experimental study carried out to investigate the distribution and 'footprint' of fuel droplets impinging on the cylinder wall during the intake stroke under engine starting conditions. Injectors with different targeting and atomization characteristics were used in a four-valve engine with optical access to the intake port and combustion chamber. The spray and targeting performance were characterized using high-speed visualization and Phase Doppler Interferometry. The fuel droplets impinging on the port, cylinder wall, and piston top were characterized with a color imaging technique during simulated engine start-up from room temperature. Highly absorbent filter paper was placed around the circumference of the cylinder liner and on the piston top to collect fuel droplets during the intake strokes, and a small amount of colored dye, which dissolves completely in gasoline, was used as the tracer. Color density on the paper, which correlates with the amount of fuel deposited and its distribution on the cylinder wall, was measured using image analysis. The results show that by comparing the locations of the wetted footprints and their color intensities, the influence of fuel injection and engine conditions can be examined qualitatively and quantitatively. Fast FID measurements of UBHC were also performed on the engine for correlation with the mixture-formation results. (A minimal color-density sketch follows below.)
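
Quantifying the wetted footprints from a scanned strip of filter paper amounts to thresholding the dye-darkened regions and integrating their intensity. The sketch below is a rough illustration with a synthetic "scan" and an arbitrary threshold, not the paper's calibrated procedure.

```python
import numpy as np

def footprint_metrics(scan_rgb, threshold=0.15):
    """scan_rgb: float array (H, W, 3) in [0, 1]; returns the wetted-area
    fraction and the mean dye density inside the footprint."""
    density = 1.0 - scan_rgb.mean(axis=2)     # darker paper -> more dye deposited
    mask = density > threshold                # wetted footprint
    area_fraction = mask.mean()
    mean_density = density[mask].mean() if mask.any() else 0.0
    return area_fraction, mean_density

# toy 100x100 "scan": white paper with a darker wetted patch
scan = np.ones((100, 100, 3))
scan[40:60, 30:70] -= 0.5
print(footprint_metrics(scan))                # (0.08, 0.5)
```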

Application of Ultrasound Tomography for Non-Destructive Testing of Concrete Structure (초음파 tomography를 응용한 콘크리트 구조물의 비파괴 시험에 관한 연구)

  • Kim, Young-Ki;Yoon, Young-Deuk;Yoon, Chong-Yul;Kim, Jung-Soo;Kim, Woon-Kyung;Song, Moon-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.37 no.1 / pp.27-36 / 2000
  • As a potential approach to non-destructive testing of concrete structures, we evaluate the time-of-flight (TOF) ultrasound tomography technique. In conventional X-ray tomography, the reconstructed image corresponds to the internal attenuation coefficient, whereas in TOF ultrasound tomography the reconstructed image is proportional to the refractive index of the medium. Because refractive effects are minimal for X-rays, conventional reconstruction techniques can be applied directly in X-ray tomography. Ultrasound, however, travels along curved paths owing to spatial variations in the refractive index of the medium, and these paths must be known to reconstruct the image correctly. An algorithm for determining the ultrasound path is developed from a geometrical-optics point of view, and because the paths are curved, the image reconstruction requires an algebraic approach, namely ART or SIRT. The difference between the computed and the measured TOF data serves as the basis for the iteration: an initial image is first reconstructed assuming straight paths, the paths are then updated from the most recently reconstructed image, and this alternation of reconstruction and path determination repeats until convergence. The proposed algorithm is evaluated by computer simulations and is also applied to a real concrete structure. (A minimal ART sketch follows below.)
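
One ingredient of this scheme, the algebraic reconstruction step, can be sketched as a straight-ray ART (Kaczmarz) pass that updates a slowness image from time-of-flight residuals. In the paper this alternates with a geometrical-optics update of the curved ray paths, which the sketch omits; the path-length matrix here is random stand-in data.

```python
import numpy as np

def art_pass(A, t, x, relax=0.5):
    """A: (n_rays, n_pixels) path-length matrix, t: measured TOFs,
    x: current slowness image (flattened); one Kaczmarz sweep."""
    for a_i, t_i in zip(A, t):
        norm = a_i @ a_i
        if norm > 0:
            x += relax * (t_i - a_i @ x) / norm * a_i
    return x

rng = np.random.default_rng(1)
n_pixels, n_rays = 16, 40
A = rng.random((n_rays, n_pixels))           # stand-in path lengths through a 4x4 grid
x_true = 1.0 + 0.2 * rng.random(n_pixels)    # "true" slowness map
t = A @ x_true                               # simulated TOF measurements

x = np.ones(n_pixels)
for _ in range(50):
    x = art_pass(A, t, x)
print(np.abs(x - x_true).max())              # residual reconstruction error
```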


Design and Simulation of Depth-Encoding PET Detector using Wavelength-Shifting (WLS) Fiber Readout

  • An, Su Jung;Kim, Hyun-il;Lee, Chae Young;Song, Han Kyeol;Park, Chan Woo;Chung, Young Hyun
    • IEIE Transactions on Smart Processing and Computing / v.4 no.5 / pp.305-310 / 2015
  • We propose a new concept for a depth of interaction (DOI) positron emission tomography (PET) detector based on dual-ended-scintillator (DES) readout for small animal imaging. The detector consists of lutetium yttrium orthosilicate (LYSO) arrays coupled with orthogonal wavelength-shifting (WLS) fibre placed on the top and bottom of the arrays. On every other line, crystals that are 2 mm shorter are arranged to create grooves. WLS fibre is inserted into these grooves. This paper describes the design and performance evaluation of this PET detector using Monte Carlo simulations. To investigate sensitivity by crystal size, five types of PET detectors were simulated. Because the proposed detector is composed of crystals with three different lengths, degradation in sensitivity across the field of view was also explored by simulation. In addition, the effect of DOI resolution on image quality was demonstrated. The simulation results proved that the devised PET detector with excellent DOI resolution is helpful for reducing the channels of sensors/electronics and minimizing gamma ray attenuation and scattering while maintaining good detector performance.

Image Processing for Pig's Head Removal (돼지 머리 제거를 위한 영상 처리)

  • Ahn, Han-Se;Choi, Won-Seok;Lee, Han-Hae-Sol;Chung, Yong-Wha;Park, Dai-Hee
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.621-624 / 2019
  • In a pig house, a pig's weight is one of the main factors in decisions about its health and growth status, shipment timing, the rearing environment, and feed rationing. Measuring pig weight in the pig house is therefore an important problem. To measure weight from images acquired by a top-view camera, the number of pixels belonging to the pig must be counted accurately, which requires removing the pig's head region. In this paper, we propose a method that effectively detects and removes the pig's head by using a convex hull to find concave points on the pig's outline, together with their distance from the pig's center. First, after running the convex-hull algorithm on the binarized pig image, we take, among the concave points whose curvature exceeds a threshold, the one closest to the pig's center. A second point is then obtained at a fixed length and angle relative to this concave point and the center, and the pig's body and head are separated using these two points. Experimental results confirm that the pig's head can be detected and removed with high accuracy and low execution time. (A minimal convex-hull sketch follows below.)
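
The convex-hull and concave-point step can be sketched with OpenCV's convexity defects. The toy silhouette, defect-depth threshold, and straight cutting rule below are illustrative stand-ins rather than the paper's exact procedure (which also uses a second point at a fixed length and angle).

```python
import cv2
import numpy as np

def remove_head(binary, min_defect_depth=10):
    """On a binarised top-view silhouette, pick a sufficiently deep convexity
    defect close to the body centre and drop every pixel lying beyond it."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)

    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]         # body centre

    hull = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull)             # concave points
    if defects is None:
        return binary

    # among defects deeper than the threshold, take the one closest to the centre
    candidates = []
    for start, end, far, depth in defects[:, 0]:
        if depth / 256.0 >= min_defect_depth:                 # depth is fixed-point
            px, py = contour[far][0]
            candidates.append(((px - cx) ** 2 + (py - cy) ** 2, (int(px), int(py))))
    if not candidates:
        return binary
    _, (px, py) = min(candidates)

    # remove foreground pixels beyond the concave point along the centre-to-neck direction
    direction = np.array([px - cx, py - cy], dtype=float)
    direction /= np.linalg.norm(direction) + 1e-8
    body = binary.copy()
    ys, xs = np.nonzero(body)
    beyond = (xs - px) * direction[0] + (ys - py) * direction[1] > 0
    body[ys[beyond], xs[beyond]] = 0
    return body

# toy silhouette: an elliptical "body" with a circular "head" attached
canvas = np.zeros((200, 300), np.uint8)
cv2.ellipse(canvas, (120, 100), (80, 50), 0, 0, 360, 255, -1)
cv2.circle(canvas, (220, 100), 30, 255, -1)
print(canvas.sum() // 255, remove_head(canvas).sum() // 255)  # pixel count before/after
```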