• Title/Summary/Keyword: Shot Detection


Semantic Event Detection and Summary for TV Golf Program Using MPEG-7 Descriptors (MPEG-7 기술자를 이용한 TV 골프 프로그램의 이벤트검출 및 요약)

  • 김천석;이희경;남제호;강경옥;노용만
    • Journal of Broadcast Engineering
    • /
    • v.7 no.2
    • /
    • pp.96-106
    • /
    • 2002
  • We introduce a novel scheme to characterize and index events in TV golf programs using MPEG-7 descriptors. Our goal is to identify and localize the golf events of interest to facilitate highlight-based video indexing and summarization. In particular, we analyze multiple (low-level) visual features using a domain-specific model to create a perceptual relation for semantically meaningful (high-level) event identification. Furthermore, we summarize a TV golf program with TV-Anytime segmentation metadata, a standard form of XML-based metadata description, in which the golf events are represented by temporally localized segments and segment groups of highlights. Experimental results show that our proposed technique provides reasonable performance for identifying a variety of golf events.
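The TV-Anytime segmentation metadata mentioned above is an XML description in which each highlight is a temporally localized segment. As a rough illustration only (the element names below are simplified placeholders, not the normative TV-Anytime schema), such a segment list can be generated like this:

```python
import xml.etree.ElementTree as ET

def segment_xml(events):
    """Build a minimal TV-Anytime-style segment list.

    events: iterable of (segment_id, start, duration) tuples, with times
    given as ISO-8601-style duration strings. Element names here are
    simplified placeholders, not the full TV-Anytime schema.
    """
    root = ET.Element("SegmentInformationTable")
    for seg_id, start, duration in events:
        seg = ET.SubElement(root, "SegmentInformation", id=seg_id)
        locator = ET.SubElement(seg, "SegmentLocator")
        ET.SubElement(locator, "MediaRelTimePoint").text = start
        ET.SubElement(locator, "MediaDuration").text = duration
    return ET.tostring(root, encoding="unicode")
```

Grouping segments into highlight groups would add one more wrapper element per group around the same structure.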

A Study on an Image Stabilization for Car Vision System (차량용 비전 시스템을 위한 영상 안정화에 관한 연구)

  • Lew, Sheen;Lee, Wan-Joo;Kang, Hyun-Chul
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.4
    • /
    • pp.957-964
    • /
    • 2011
  • Image stabilization is the procedure of correcting a blurred or shaking image sequence by image-processing methods. Because it detects global motion easily, digital image stabilization based on the projection algorithm (PA) has been studied by many researchers. PA has the advantages of easy implementation and low complexity, but in the case of serious rotational motion the accuracy of the algorithm degrades because of its fixed exploring range; on the other hand, if the exploring range is extended, the block used for detecting motion becomes small, and correct global motion cannot be detected. In this paper, to overcome this drawback of the conventional PA, an Iterative Projection Algorithm (IPA) is proposed, which improves the correctness of the global motion estimate by detecting motion with a block size appropriate to the extent of motion. When processing 1000 consecutive frames shot from a moving automobile, IPA improved PSNR by at least 6.8% and at most 28.9% compared with the conventional algorithm over various detection ranges.
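The projection algorithm that the paper builds on reduces 2-D motion search to 1-D: each frame is collapsed into row and column intensity projections, and the global shift is the offset that best aligns consecutive projection curves. A minimal sketch of that core step (the iterative block-size refinement that distinguishes IPA is not shown, and all names are illustrative):

```python
import numpy as np

def projections(frame):
    """Collapse a grayscale frame into row and column intensity sums."""
    return frame.sum(axis=1), frame.sum(axis=0)

def estimate_shift(p_ref, p_cur, search_range=8):
    """Return the integer shift s that best aligns p_cur with p_ref,
    i.e. p_cur[i] ~= p_ref[i - s], by exhaustive 1-D search."""
    n = len(p_ref)
    best_shift, best_err = 0, float("inf")
    for s in range(-search_range, search_range + 1):
        lo, hi = max(0, s), min(n, n + s)          # overlapping index range
        err = np.mean((p_ref[lo - s:hi - s] - p_cur[lo:hi]) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

Running the row and column estimates separately yields the vertical and horizontal components of the global motion; the fixed `search_range` here is exactly the limitation the IPA addresses.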

Quantitative Measurement of Soot concentration by Two-Wavelength Correction of Laser-Induced Incandescence Signals (2파장 보정 Laser-Induced Incandescence 법을 이용한 매연 농도 측정)

  • 정종수
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.5 no.3
    • /
    • pp.54-65
    • /
    • 1997
  • To quantify the LII signals from soot particles of flames in a diesel engine cylinder, a new method has been proposed for correcting the LII signal attenuated by soot particles between the measuring point and the detector. It has been verified by an experiment on a laminar jet ethylene-air diffusion flame. Being proportional to the attenuation, the ratio of LII signals at two different detection wavelengths can be used to correct the measured LII signal and obtain the unattenuated LII signal, from which the soot volume fraction in the flame can be estimated. Both the 1064-nm and frequency-doubled 532-nm beams from the Nd:YAG laser are used. Single-shot, one-dimensional (1-D) line images are recorded on the intensified CCD camera, with the rectangular-profile laser beam formed using a 1-mm-diameter pinhole. Two broadband optical interference filters, with center wavelengths of 647 nm and 400 nm respectively and a bandwidth of 10 nm, are used. This two-wavelength correction has been applied to the ethylene-air coannular laminar diffusion flame on which soot formation was previously studied by the laser extinction method in this laboratory. The results of the LII measurement technique and the conventional laser extinction method at a height of 40 mm above the jet exit agreed well with each other, except around the outside of the peaks of soot concentration, where the soot concentration was relatively high and the resulting attenuation of the LII signal was large. The radial profile shape of the soot concentration did not change much, but the absolute value of the soot volume fraction around the outside edge changed from 4 ppm to 6.5 ppm at r = 2.8 mm after correction. This means that the attenuation of the LII signal was approximately 40% at this point, which is higher than the average attenuation rate of this flame, 10~15%.
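The correction rests on the wavelength dependence of soot extinction: each detection wavelength is attenuated by a different optical depth, so the measured two-wavelength ratio reveals the attenuation itself. A hedged numerical sketch, assuming Rayleigh-regime extinction (optical depth proportional to 1/λ) and a known unattenuated ratio `r0`; the paper's actual calibration is not reproduced here:

```python
import math

def corrected_signal(s1, s2, lam1, lam2, r0):
    """Recover the unattenuated LII signal at wavelength lam1.

    s1, s2 : measured signals at detection wavelengths lam1, lam2 (metres)
    r0     : unattenuated ratio s1/s2 expected from theory/calibration
    Model  : optical depth tau(lam) = c / lam, so the measured ratio
             equals r0 * exp(-(tau1 - tau2)).
    """
    tau_diff = math.log(r0 * s2 / s1)            # tau1 - tau2
    c = tau_diff / (1.0 / lam1 - 1.0 / lam2)     # solve for the constant c
    tau1 = c / lam1                              # attenuation at lam1
    return s1 * math.exp(tau1)
```

With the 647-nm and 400-nm detection filters of the abstract, the shorter wavelength is attenuated more, which is what makes the ratio sensitive to the intervening soot.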


Optimum Quality Control of Seismic Data of Kunsan Basin in Offshore Korea (국내대륙붕 군산분지에 대한 탄성파 전산처리의 최적 매개 변수 결정)

  • Kim, Kun-Deuk
    • Geophysics and Geophysical Exploration
    • /
    • v.1 no.3
    • /
    • pp.161-169
    • /
    • 1998
  • The Kunsan basin is a pull-apart basin which was formed during the Tertiary. The pre-Tertiary section consists of various rock types, such as meta-sediments, igneous rocks, carbonates, clastics, and volcanics. The Tertiary sections are the main targets for petroleum exploration. In order to determine the optimum processing parameters for the basin, about 12 kinds of test processing were performed. The first main step in quality control is to identify noisy or bad traces by examining the near-trace section and shot gathers. True amplitude recovery was applied to account for the amplitude losses due to spherical divergence and inelastic attenuation. Source designature and predictive deconvolution tests were conducted to determine the optimum wavelet parameters and to remove multiples. Velocity analysis was performed at 1-km intervals. The optimum mute function was picked by locating the range of offsets which gives the best stacking response for any particular reflection. Post-stack deconvolution was tested to see whether the quality of the stacked data improved. The stacked data were migrated using a finite-difference algorithm. The migration velocity was obtained from the stacking velocities using time-varying percentages. The AGC sections were provided for structural interpretation. The RAP sections were used for DHI analysis and for the detection of volcanics.
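Of the steps above, true amplitude recovery is the most mechanical: later samples travelled a longer path, so each sample is boosted by a gain that grows with travel time. A minimal sketch of a crude t-power spherical-divergence gain (the exponent and sample interval below are illustrative defaults, not parameters from the paper):

```python
import numpy as np

def spherical_divergence_gain(trace, dt, power=2.0):
    """Compensate spherical-divergence amplitude loss by scaling each
    sample with t**power (t**2 is a common first-pass choice).

    trace : 1-D array of seismic amplitudes
    dt    : sample interval in seconds
    """
    t = np.arange(1, len(trace) + 1) * dt   # travel time of each sample
    return trace * t ** power
```

Production processing would typically also fold in a velocity-dependent term and an exponential gain for inelastic attenuation, chosen during the parameter tests the abstract describes.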


Reliability Improvement of Offshore Structural Steel F690 Using Surface Crack Nondamaging Technology

  • Lee, Weon-Gu;Gu, Kyoung-Hee;Kim, Cheol-Su;Nam, Ki-Woo
    • Journal of Ocean Engineering and Technology
    • /
    • v.35 no.5
    • /
    • pp.327-335
    • /
    • 2021
  • Microcracks can rapidly grow and develop in the high-strength steels used in offshore structures. It is important to render these microcracks harmless to ensure the safety and reliability of offshore structures. Here, the dependence of the maximum depth of harmless crack (ahlm) on the aspect ratio (As) was evaluated under three different conditions considering the threshold stress intensity factor (ΔKth) and the residual stress of offshore structural steel F690. The threshold stress intensity factor and the fatigue limit of fatigue crack propagation, which depend on the crack dimensions, were evaluated using Ando's equation, which considers the plastic behavior of fatigue and the stress ratio. The ahlm obtained by peening was analyzed using the relationship between ΔKth obtained by Ando's equation and ΔKth obtained from the sum of applied stress and residual stress. The plate specimen had a width 2W = 12 mm and thickness t = 20 mm, and four values of As were considered: 1.0, 0.6, 0.3, and 0.1. The ahlm was larger as the compressive residual stress distribution increased. Additionally, an increase in the values of As and ΔKth(l) led to a larger ahlm. With a safety factor (N) of 2.0, the long-term safety and reliability of structures constructed using F690 can be secured with needle peening. A more sensitive non-destructive inspection technique is necessary, since the non-destructive inspection method for crack detection could not observe fatigue cracks that reduced the fatigue limit of smooth specimens by 50% under the three types of residual stress considered. The usefulness of non-destructive inspection and non-damaging techniques was reviewed based on the relationship between ahlm, aNDI (the minimum crack depth detectable by non-destructive inspection), acrN (the crack depth that reduces the fatigue limit to 1/N), and As.

A Fast Background Subtraction Method Robust to High Traffic and Rapid Illumination Changes (많은 통행량과 조명 변화에 강인한 빠른 배경 모델링 방법)

  • Lee, Gwang-Gook;Kim, Jae-Jun;Kim, Whoi-Yul
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.3
    • /
    • pp.417-429
    • /
    • 2010
  • Though background subtraction has been widely studied for the last few decades, it remains a poorly solved problem, especially in real environments. In this paper, we first address some common problems for background subtraction that occur in real environments, and then resolve those problems by improving an existing GMM-based background modeling method. First, to reduce computation, fixed-point operations are used. Because the background model usually does not require high precision of variables, we can reduce the computation time while maintaining accuracy by adopting fixed-point rather than floating-point operations. Secondly, to avoid erroneous backgrounds induced by high pedestrian traffic, the static level of each pixel is examined using short-time statistics of the pixel history. By using a lower learning rate for non-static pixels, we can preserve valid backgrounds even in busy scenes where foregrounds dominate. Finally, to adapt to rapid illumination changes, we estimate the intensity change between two consecutive frames as a linear transform and compensate the learned background models according to the estimated transform. By applying the fixed-point operation to the existing GMM-based method, we were able to reduce the computation time to about 30% of the original processing time. Also, experiments on a real video with high pedestrian traffic showed that our proposed method improves on previous background modeling methods by 20% in detection rate and 5~10% in false alarm rate.
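The fixed-point idea in the first contribution can be shown in isolation: background statistics are stored as integers scaled by 2^k, so the running update needs only integer multiplies and shifts. A toy single-mean sketch (the paper's full model keeps several weighted Gaussians per pixel; the names and the Q8 format are illustrative):

```python
SHIFT = 8                                   # Q8: 8 fractional bits
ALPHA_FP = int(0.05 * (1 << SHIFT))         # learning rate 0.05 in Q8

def update_background(bg_fp, pixel):
    """One running-mean update, bg += alpha * (pixel - bg), done entirely
    in integer arithmetic on a Q8 fixed-point background value."""
    pixel_fp = pixel << SHIFT               # promote 8-bit pixel to Q8
    return bg_fp + ((ALPHA_FP * (pixel_fp - bg_fp)) >> SHIFT)
```

The quantization of the learning rate (12/256 here instead of exactly 0.05) is the precision the paper argues a background model can afford to lose; the same shift-based update applies to each Gaussian's mean and variance.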

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the beginning of the 21st century, various high-quality services have emerged with the growth of the Internet and information and communication technologies. In particular, the E-commerce industry, in which Amazon and eBay stand out, is growing explosively. As E-commerce grows, customers can easily find and compare what they want to buy, because more products are registered at online shopping malls. However, this growth has also created a problem: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search for desired products with a generalized keyword, too many products come up as a result. On the contrary, few products are found if customers type in product details, because concrete product attributes are rarely registered as searchable text. In this situation, recognizing the text in images automatically with a machine can be a solution. Because the bulk of product details are written in catalogs in image format, most product information cannot be found by text input in current text-based search systems. If the information in these images can be converted to text, customers can search for products by product details, which makes shopping more convenient. Various existing OCR (Optical Character Recognition) programs can recognize text in images, but they are hard to apply to catalogs because they fail in certain circumstances, for example when the text is too small or the fonts are inconsistent. Therefore, this research suggests a way to recognize keywords in catalogs with deep learning algorithms, the state of the art in image recognition since the 2010s.
The Single Shot Multibox Detector (SSD), a well-regarded model for object detection, can be used with its structure redesigned to take into account the differences between text and objects. However, because deep learning algorithms are trained by supervised learning, the SSD model needs a large amount of labeled training data. To collect such data, one could manually label the location and class of the text in catalogs, but manual collection raises many problems. Some keywords would be missed because humans make mistakes while labeling, and collection becomes too time-consuming given the scale of data needed, or too costly if many workers are hired to shorten the time. Furthermore, if specific keywords need to be trained, finding images that contain those words is also difficult. To solve this data issue, this research developed a program that creates training data automatically. The program generates catalog-like images containing various keywords and pictures, and saves the location information of the keywords at the same time. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves: the SSD model recorded an 81.99% recognition rate with 20,000 examples created by the program. Moreover, this research tested the SSD model on different kinds of data to analyze which features of the data influence text recognition performance. The results show that the number of labeled keywords, the addition of overlapping keyword labels, the existence of unlabeled keywords, the spacing between keywords, and differences in background images are all related to the performance of the SSD model. This test can guide performance improvement of the SSD model, or of other deep-learning-based text recognizers, through higher-quality data.
The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improving search systems in E-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the product details written in the catalog.
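The generator described above has to do two things at once: compose a catalog-like image and emit the keyword bounding boxes as labels. The labelling half can be sketched with the standard library alone (the canvas size, glyph metrics, and retry strategy below are illustrative assumptions; actual text rendering, e.g. with Pillow, is omitted):

```python
import random

def disjoint(a, b):
    """True if axis-aligned boxes (x1, y1, x2, y2) do not overlap."""
    return a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1]

def make_annotations(keywords, img_w=600, img_h=800,
                     char_w=12, char_h=20, seed=0):
    """Place each keyword at a random non-overlapping position on a blank
    canvas and return SSD-style labels (keyword, x1, y1, x2, y2)."""
    rng = random.Random(seed)
    annots = []
    for word in keywords:
        w, h = char_w * len(word), char_h        # crude glyph metrics
        for _ in range(100):                     # retry on overlap
            x, y = rng.randrange(img_w - w), rng.randrange(img_h - h)
            box = (x, y, x + w, y + h)
            if all(disjoint(box, b[1:]) for b in annots):
                annots.append((word, *box))
                break
    return annots
```

Deliberately leaving some placed keywords out of the label list, or adding overlapping labels, would reproduce the data variations whose effect on SSD performance the paper measures.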

Detection of Hepatic Lesion: Comparison of Free-Breathing and Respiratory-Triggered Diffusion-Weighted MR imaging on 1.5-T MR system (국소 간 병변의 발견: 1.5-T 자기공명영상에서의 자유호흡과 호흡유발 확산강조 영상의 비교)

  • Park, Hye-Young;Cho, Hyeon-Je;Kim, Eun-Mi;Hur, Gham;Kim, Yong-Hoon;Lee, Byung-Hoon
    • Investigative Magnetic Resonance Imaging
    • /
    • v.15 no.1
    • /
    • pp.22-31
    • /
    • 2011
  • Purpose: To compare free-breathing and respiratory-triggered diffusion-weighted imaging on a 1.5-T MR system in the detection of hepatic lesions. Materials and Methods: This single-institution study was approved by our institutional review board. Forty-seven patients (mean age 57.9 years; M:F = 25:22) underwent hepatic MR imaging on a 1.5-T MR system using both free-breathing and respiratory-triggered diffusion-weighted imaging (DWI) in a single examination. Two radiologists retrospectively reviewed the respiratory-triggered and free-breathing sets (B50, B400, and B800 diffusion-weighted images and ADC maps) in random order with a time interval of 2 weeks. Liver SNR and lesion-to-liver CNR of DWI were calculated from ROI measurements. Results: A total of 62 lesions (53 benign, 9 malignant), including 32 cysts, 13 hemangiomas, 7 hepatocellular carcinomas (HCCs), 5 eosinophilic infiltrations, 2 metastases, 1 eosinophilic abscess, 1 focal nodular hyperplasia, and 1 pseudolipoma of Glisson's capsule, were reviewed by the two reviewers. Though not reaching statistical significance, overall lesion sensitivity was higher with respiratory-triggered DWI [reviewer 1 : reviewer 2, 47/62 (75.81%) : 45/62 (72.58%)] than with free-breathing DWI [44/62 (70.97%) : 41/62 (66.13%)]. Especially for hepatic lesions smaller than 1 cm, the sensitivity of respiratory-triggered DWI [24/30 (80%) : 21/30 (70%)] was superior to that of free-breathing DWI [17/30 (56.7%) : 15/30 (50%)]. The diagnostic accuracy measured by the area under the ROC curve (Az value) was not statistically different between free-breathing and respiratory-triggered DWI. Liver SNR and lesion-to-liver CNR of respiratory-triggered DWI (87.6 ± 41.4, 41.2 ± 62.5) were higher than those of free-breathing DWI (38.8 ± 13.6, 24.8 ± 36.8) (p < 0.001, respectively).
Conclusion: Respiratory-triggered diffusion-weighted MR imaging appears better than free-breathing diffusion-weighted MR imaging on a 1.5-T MR system for the detection of lesions smaller than 1 cm, by providing higher SNR and CNR.
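The SNR and CNR figures above come from ROI measurements. One common definition (mean signal over the standard deviation of a noise region; the paper's exact ROI protocol is not specified here) can be computed as:

```python
import numpy as np

def snr_cnr(liver_roi, lesion_roi, noise_roi):
    """Liver SNR and lesion-to-liver CNR from ROI pixel samples, using the
    common mean-signal / noise-standard-deviation definition."""
    noise_sd = np.std(noise_roi)
    snr = np.mean(liver_roi) / noise_sd
    cnr = (np.mean(lesion_roi) - np.mean(liver_roi)) / noise_sd
    return snr, cnr
```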

Maximising the lateral resolution of near-surface seismic refraction methods (천부 탄성파 굴절법 자료의 수평 분해능 최대화 연구)

  • Palmer, Derecke
    • Geophysics and Geophysical Exploration
    • /
    • v.12 no.1
    • /
    • pp.85-98
    • /
    • 2009
  • The tau-p inversion algorithm is widely employed to generate starting models in most computer programs that implement refraction tomography. This algorithm emphasises the vertical resolution of many layers, and as a result it frequently fails to detect even large lateral variations in seismic velocities, such as the decreases that are indicative of shear zones. This study demonstrates the failure of the tau-p inversion algorithm to detect or define a major shear zone which is 50 m, or 10 stations, wide. Furthermore, the majority of refraction tomography programs parameterise the seismic velocities within each layer with vertical velocity gradients. By contrast, the Generalized Reciprocal Method (GRM) inversion algorithms emphasise the lateral resolution of individual layers. This study demonstrates the successful detection and definition of the 50-m-wide shear zone with the GRM inversion algorithms. The existence of the shear zone is confirmed by a 2D analysis of the head-wave amplitudes and by numerous closely spaced orthogonal seismic profiles carried out as part of a later 3D refraction investigation. Furthermore, an analysis of the shot-record amplitudes indicates that a reversal in the seismic velocities, rather than vertical velocity gradients, occurs in the weathered layers. The major conclusion of this study is that while all seismic refraction operations should aim to provide depth estimates as accurate as is practical, those which emphasise the lateral resolution of individual layers generate more useful results for geotechnical and environmental applications. The advantages of the improved lateral resolution are obtained with 2D traverses, in which structural features can be recognised from the magnitudes of the variations in the seismic velocities.
Furthermore, the spatial patterns obtained with 3D investigations facilitate the recognition of structural features such as faults which do not display any intrinsic variation or 'signature' in seismic velocities.

Effect of Hypersonic Missiles on Maritime Strategy: Focus on Securing and Exploiting Sea Control (극초음속 미사일이 해양전략에 미치는 영향: 해양통제의 확보와 행사를 중심으로)

  • Cho, Seongjin
    • Maritime Security
    • /
    • v.1 no.1
    • /
    • pp.241-271
    • /
    • 2020
  • The military technology currently receiving the most attention is the hypersonic missile. Hypersonic means faster than five times the speed of sound, or Mach 5+. Ballistic missiles have long achieved hypersonic speeds as they fall from the sky; rather than speed, today's renewed attention to hypersonic weapons owes to developments that enable controlled flight. These new systems have two sub-varieties: hypersonic glide vehicles and hypersonic cruise missiles. Hypersonic weapons can challenge detection and defense due to their speed, maneuverability, and low altitude of flight. The fundamental question of this study is: 'What effect will the hypersonic missile have on maritime strategy?' It is prudent to analyze and predict in advance the impact on strategy of a technology still in development, because strategy affects future force construction. Hypersonic missiles act as a limiting factor in securing sea control. The high speed and powerful destructive force of the hypersonic missile not only make it difficult to intercept, but also allow it to cause massive ship damage with a single shot. As a result, this study finds that securing sea control will become more difficult as sea-denial capacity improves geographically and qualitatively. In addition, the concept of the Fortress Fleet, criticized in the past as a passive strategy, could be reborn in the modern era. Exploiting sea control encompasses maritime power projection and defence as well as SLOC attack and defence; for exploiting sea control, hypersonic missiles can be seen as both a limiting factor and an opportunity factor.
