• Title/Summary/Keyword: Camera-based Recognition (카메라 기반 인식)


Efficiency Evaluation of Contour Generation from Airborne LiDAR Data (LiDAR 데이터를 이용한 등고선 제작의 효율성 평가)

  • Wie, Gwang-Jae;Lee, Im-Pyeong;Kang, In-Gu;Cho, Jae-Myoung
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.15 no.2 s.40
    • /
    • pp.59-66
    • /
    • 2007
  • The digital working environment and its related technologies have been expanding rapidly. In the surveying field, optical film cameras and plotters have given way to digital cameras and multi-sensor systems such as GPS/INS, and the old analog workflow has been replaced by a digital one. Accurate land data is used in many fields, including efficient utilization and management of land, urban planning, and disaster and environment management, and is therefore an essential infrastructure. For this study, LiDAR surveying was used to obtain point clouds in the study area; LiDAR offers high vegetation penetration, and a fully digital process was used from planning to the final products. Contour lines were generated from the LiDAR data and compared with national digital base maps (scales 1/1,000 and 1/5,000), and the accuracy and economic efficiency were evaluated. The accuracy of the LiDAR contour data averaged 0.089 m ± 0.062 m and showed high ground detail in complex areas. Compared with 1/1,000-scale contour line production, surveying an area over 100 km² reduced costs by approximately 48%. We therefore propose LiDAR surveying as an alternative for modifying and updating national base maps.

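The accuracy statistic quoted in the abstract (a mean elevation error with a ± spread) can be reproduced with a few lines of Python. This is a generic sketch of the comparison against surveyed check points; all sample elevations below are illustrative, not data from the study.

```python
# Hypothetical sketch: contour accuracy as mean +/- stdev of absolute
# elevation differences between LiDAR-derived contours and check points.
import statistics

def elevation_accuracy(lidar_z, reference_z):
    """Return (mean, stdev) of absolute elevation differences in metres."""
    diffs = [abs(l - r) for l, r in zip(lidar_z, reference_z)]
    return statistics.mean(diffs), statistics.stdev(diffs)

lidar_z     = [101.32, 98.75, 105.10, 99.40]   # contour elevations (m), made up
reference_z = [101.25, 98.80, 105.00, 99.52]   # surveyed check points (m), made up
mean_err, sd = elevation_accuracy(lidar_z, reference_z)
print(f"accuracy: {mean_err:.3f} m +/- {sd:.3f} m")
```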

3D Product digital camera Model on the Web and study about developing 3D shopping mall (Web 상에서 3차원 디지털카메라제품모델과 3차원 쇼핑몰 개발에 관한 연구)

  • 조진희;이규옥
    • Archives of design research
    • /
    • v.14 no.1
    • /
    • pp.63-70
    • /
    • 2001
  • Thanks to the interconnection of information servers throughout the world based on internet technology, a new sphere in which actual transactions can be made, as in a visible market, has emerged as the virtual space, and efforts to realize new business through cyberspace have been actively ongoing. In the domestic market, many corporations recognizing the need for internet shopping malls have entered e-business, but they have not achieved great success relative to the internet's potential. This can be attributed to the simple, flat layouts and the limited information provided by cyber malls, which makes the need for better information transfer apparent. Accordingly, this thesis researches 3-D based products and shopping malls through the complementary composition of 2-D and 3-D shopping malls. The research consists of three parts. First, through a survey of references and existing data, it analyzes consumers' characteristics and the sales limitations of internet shopping mall products. Second, it summarizes the background of the 3-D shopping mall's advent and virtual reality technology. Finally, it presents how the development of 3-D product modeling and shopping malls can increase consumers' purchasing power, and the directions shopping malls should take.


The Development of Robot and Augmented Reality Based Contents and Instructional Model Supporting Children's Dramatic Play (로봇과 증강현실 기반의 유아 극놀이 콘텐츠 및 교수.학습 모형 개발)

  • Jo, Miheon;Han, Jeonghye;Hyun, Eunja
    • Journal of The Korean Association of Information Education
    • /
    • v.17 no.4
    • /
    • pp.421-432
    • /
    • 2013
  • The purpose of this study is to develop contents and an instructional model that support children's dramatic play by integrating robot and augmented reality technology. To support the dramatic play, the robot shows various facial expressions and actions; serves as a narrator and a sound manager; supports simultaneous interaction by using its camera to recognize markers and children's motions; and records children's activities as photos and videos that can be used in further activities. The robot also uses a projector to allow children to interact directly with the video object. Augmented reality, in turn, offers a variety of character changes and props, and allows various background and foreground effects. It also enables natural interaction between the contents and children through a real-type interface, and provides opportunities for interaction between actors and audiences. Along with these, augmented reality provides an experience-based learning environment that induces sensory immersion by allowing children to manipulate or choose the learning situation and experience the results. The instructional model supporting dramatic play consists of four stages (teachers' preparation; introducing and understanding a story; action planning and play; evaluation and wrap-up), with detailed activities to decide or proceed suggested at each stage.

Development of the Algorithm for Traffic Accident Auto-Detection in Signalized Intersection (신호교차로 내 실시간 교통사고 자동검지 알고리즘 개발)

  • O, Ju-Taek;Im, Jae-Geuk;Hwang, Bo-Hui
    • Journal of Korean Society of Transportation
    • /
    • v.27 no.5
    • /
    • pp.97-111
    • /
    • 2009
  • Image-based traffic information collection systems have entered widespread adoption in many countries, since they not only can replace existing loop-based detectors, which have limitations in management and administration, but can also provide and manage a wide variety of traffic-related information; their purpose and scope of use are expanding rapidly. Currently, the use of image processing technology in traffic accident management is limited to installing surveillance cameras at locations where accidents are expected and digitizing the recorded data. Accurately recording the sequence of events around a traffic accident at a signalized intersection, and then objectively and clearly analyzing how the accident occurred, is the most urgent and important task in resolving it. Past research has pointed out that advanced existing techniques, such as object separation and tracking of vehicles, have difficulty running in real time due to large data volumes, and are hampered by the environmental diversity and changes at a signalized intersection with complex traffic situations. In this research, we present a technology that overcomes these problems, and we propose and implement an active, environmentally adaptive methodology that effectively reduces the false detections which occur frequently even with the Gaussian mixture model, considered the best among well-known methods for reducing environmental interference. To show that the developed technology outperforms existing automatic traffic accident recording systems, a test was performed on image data streamed online in real time from an intersection in actual operation, and the results were compared with the performance of existing technologies.
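As a minimal sketch of the Gaussian background-modeling idea the abstract discusses, the following keeps a single running Gaussian per pixel and flags pixels that deviate by more than k standard deviations as foreground. A real system would keep a mixture of Gaussians per pixel; the learning rate, threshold, and toy frames here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

class RunningGaussianBG:
    """Single-Gaussian-per-pixel background model (simplified stand-in
    for a Gaussian mixture background subtractor)."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full_like(self.mean, 16.0)  # assumed initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        d = frame.astype(np.float64) - self.mean
        # Mahalanobis-style test: deviation beyond k sigma => foreground
        foreground = d * d > (self.k ** 2) * self.var
        bg = ~foreground
        # update the model only where the pixel matched the background
        self.mean[bg] += self.alpha * d[bg]
        self.var[bg] += self.alpha * (d[bg] ** 2 - self.var[bg])
        return foreground

frames = [np.zeros((4, 4)), np.zeros((4, 4))]
frames[1][1, 1] = 100.0            # a "vehicle" appears at one pixel
bg_model = RunningGaussianBG(frames[0])
mask = bg_model.apply(frames[1])
print(int(mask.sum()))             # 1 foreground pixel detected
```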

Mixed Mobile Education System using SIFT Algorithm (SIFT 알고리즘을 이용한 혼합형 모바일 교육 시스템)

  • Hong, Kwang-Jin;Jung, Kee-Chul;Han, Eun-Jung;Yang, Jong-Yeol
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.13 no.2
    • /
    • pp.69-79
    • /
    • 2008
  • The popularization of the wireless internet and mobile devices has created the infrastructure of a ubiquitous environment in which users can get whatever information they want, anytime and anywhere. Accordingly, a variety of fields, including education, are studying methods to improve the efficiency of information transfer using on-line and off-line contents. In this paper, we propose the Mixed Mobile Education system (MME), which improves educational efficiency using on-line and off-line contents on mobile devices. Because systems that rely on additional tags make it hard to input new data and cannot reuse similar off-line contents, the proposed system uses no additional tags; instead, it recognizes off-line contents by extracting feature points from the image captured by the mobile camera. We use the Scale Invariant Feature Transform (SIFT) algorithm to extract feature points that are robust to noise, color distortion, size, and rotation in images captured by a low-resolution camera. We also use a client-server architecture to work around the limited storage of mobile devices and to allow easy registration and modification of data. Experimental results show the advantages and disadvantages of the proposed system compared with previous work, and demonstrate its efficiency in various environments.

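A core step of a SIFT-based recognizer like the one described is matching 128-D descriptors from the camera image against a server-side database with Lowe's nearest/second-nearest ratio test. The sketch below shows only that matching stage, with random descriptors standing in for real SIFT output; the 0.8 ratio is the customary assumption, not a value from the paper.

```python
import numpy as np

def ratio_test_matches(query, database, ratio=0.8):
    """Return (query_idx, db_idx) pairs passing Lowe's ratio test:
    nearest distance must be clearly smaller than the second nearest."""
    matches = []
    for i, q in enumerate(query):
        dists = np.linalg.norm(database - q, axis=1)
        j, j2 = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[j2]:
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(0)
db = rng.normal(size=(50, 128))                # 50 stored 128-D descriptors
query = db[[3, 17]] + rng.normal(scale=0.01, size=(2, 128))  # noisy copies
print(ratio_test_matches(query, db))           # [(0, 3), (1, 17)]
```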

Active Water-Level and Distance Measurement Algorithm using Light Beam Pattern (광패턴을 이용한 능동형 수위 및 거리 측정 기법)

  • Kim, Nac-Woo;Son, Seung-Chul;Lee, Mun-Seob;Min, Gi-Hyeon;Lee, Byung-Tak
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.4
    • /
    • pp.156-163
    • /
    • 2015
  • In this paper, we propose an active water-level and distance measurement algorithm using a light beam pattern. In place of conventional water-level gauges of the pressure, float-well, ultrasonic, or radar types, research on video-analysis-based water-level measurement has been increasing as the importance of accurate measurement, convenient monitoring, and more is emphasized. By actively projecting a reference light beam pattern onto a bridge or embankment, we suggest a new approach that analyzes and processes the projected pattern image obtained from a camera and automatically measures the water level and the distance between the camera and the bridge or levee. In contrast to conventional methods, which passively analyze captured video to recognize a watermark or specific marker attached to a bridge, we actively use a reference light beam pattern suited to the installed bridge environment, so our method offers robust water-level measurement. The reasons are as follows: first, the algorithm is effective against poor visibility and against pollution or damage of the watermark; second, it enables real-time, portable monitoring of the local situation by day and night; furthermore, it needs no additional floodlight. Tests were performed under indoor conditions over distances of 0.4-1.4 m and heights of 13.5-32.5 cm.
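Although the paper's exact geometry is not given here, an active light-pattern rangefinder of this kind typically reduces to structured-light triangulation: with a projector-camera baseline B and a focal length f in pixels, range follows Z = f·B/disparity, and the water level is the sensor's known mounting height minus that range. All parameter values below are illustrative assumptions, not the paper's setup.

```python
# Minimal structured-light triangulation sketch (assumed geometry).
def distance_from_disparity(f_px, baseline_m, disparity_px):
    """Range to the projected light spot: Z = f * B / disparity."""
    return f_px * baseline_m / disparity_px

def water_level(sensor_height_m, distance_m):
    """Water level = known sensor mounting height minus measured range."""
    return sensor_height_m - distance_m

z = distance_from_disparity(f_px=800.0, baseline_m=0.10, disparity_px=80.0)
print(z)                      # 1.0 m to the water surface
print(water_level(1.325, z))  # about 0.325 m above the datum
```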

Individual Ortho-rectification of Coast Guard Aerial Images for Oil Spill Monitoring (유출유 모니터링을 위한 해경 항공 영상의 개별정사보정)

  • Oh, Youngon;Bui, An Ngoc;Choi, Kyoungah;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1479-1488
    • /
    • 2022
  • Oil spills occur intermittently in the ocean due to ship collisions and sinkings. To prepare prompt countermeasures when such an accident occurs, the current status of the spilled oil must be identified accurately. To this end, the Coast Guard patrols the target area with a fixed-wing airplane or helicopter and checks it with the naked eye or on video, but determining the area contaminated by the spilled oil and its exact location on a map has been difficult. Accordingly, this study develops a technique for direct ortho-rectification that automatically georeferences aerial images collected by the Coast Guard, without individual ground control points, to identify the current status of spilled oil. First, the meta-information required for georeferencing is extracted by optical character recognition (OCR) from the on-screen visualization of sensor information in the video. From the extracted information, the exterior orientation parameters of each image are determined, and the images are individually orthorectified using those parameters. The accuracy of the individual orthoimages generated by this method was evaluated at about tens of meters, up to 100 m. This level is reasonably acceptable, considering the inherent errors of the position and attitude sensors and the inaccuracies in the interior orientation parameters such as camera focal length, given that no ground control points are used, and it is judged appropriate for identifying spilled-oil contamination in the sea. In the future, if real-time transmission of images captured during flight becomes possible, individual orthoimages can be generated in real time with the proposed technique, which can then be used to quickly assess spilled-oil contamination and establish countermeasures.
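Direct georeferencing without ground control, as described, amounts to intersecting each image ray with the sea surface using the GPS/INS-derived exterior orientation. This sketch assumes a nadir-looking camera and a flat Z = 0 surface; the rotation, focal length, and flying height are illustrative values, not the paper's.

```python
import numpy as np

def image_to_ground(xy_img, f, cam_pos, R, ground_z=0.0):
    """Project image coords (x, y), in the photo frame with focal length f,
    onto the plane Z = ground_z. R rotates camera-frame vectors into the
    mapping frame; cam_pos is the perspective center from GPS/INS."""
    ray = R @ np.array([xy_img[0], xy_img[1], -f])  # ray direction in world
    s = (ground_z - cam_pos[2]) / ray[2]            # scale to hit the plane
    return cam_pos + s * ray

R = np.eye(3)                        # assumed nadir view, axes aligned
cam = np.array([0.0, 0.0, 1000.0])   # assumed flying height: 1000 m
gp = image_to_ground((0.01, 0.02), f=0.10, cam_pos=cam, R=R)
print(gp)                            # ground point near (100, 200, 0) m
```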

A study on Broad Quantification Calibration to various isotopes for Quantitative Analysis and its SUVs assessment in SPECT/CT (SPECT/CT 장비에서 정량분석을 위한 핵종 별 Broad Quantification Calibration 시행 및 SUV 평가를 위한 팬텀 실험에 관한 연구)

  • Hyun Soo, Ko;Jae Min, Choi;Soon Ki, Park
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.26 no.2
    • /
    • pp.20-31
    • /
    • 2022
  • Purpose: Broad Quantification Calibration (B.Q.C) is the procedure that enables quantitative analysis, i.e., measurement of the Standardized Uptake Value (SUV), on a SPECT/CT scanner. B.Q.C was performed with Tc-99m, I-123, I-131, and Lu-177 respectively, and phantom images were then acquired to check whether the SUVs were measured accurately. Because there is no standard for SUV testing in SPECT, the ACR Esser PET phantom was used as an alternative. The purpose of this study was to lay the groundwork for quantitative analysis with various isotopes on SPECT/CT scanners. Materials and Methods: A Siemens Symbia Intevo 16 and an Intevo Bold SPECT/CT were used for this study. B.Q.C has two steps: first, a point-source sensitivity calibration, and second, a volume sensitivity calibration that computes the Volume Sensitivity Factor (VSF) using a cylinder phantom. To verify the SUV, images of the ACR Esser PET phantom were acquired, and SUVmean was measured on the background and SUVmax on the hot vials (25, 16, 12, 8 mm). SPSS was used to analyze the difference in SUV between the Intevo 16 and Intevo Bold by the Mann-Whitney test. Results: The sensitivities (CPS/MBq) of Detectors 1 and 2 and the VSF (Intevo 16 D1 sensitivity/D2 sensitivity/VSF, then Intevo Bold) were 87.7/88.6/1.08 and 91.9/91.2/1.07 for Tc-99m; 79.9/81.9/0.98 and 89.4/89.4/0.98 for I-123; 124.8/128.9/0.69 and 130.9/126.8/0.71 for I-131; and 8.7/8.9/1.02 and 9.1/8.9/1.00 for Lu-177. The SUV results with the ACR Esser PET phantom (Intevo 16 background SUVmean/25 mm SUVmax/16 mm/12 mm/8 mm, then Intevo Bold) were 1.03/2.95/2.41/1.96/1.84 and 1.03/2.91/2.38/1.87/1.82 for Tc-99m; 0.97/2.91/2.33/1.68/1.45 and 1.00/2.80/2.23/1.57/1.32 for I-123; 0.96/1.61/1.13/1.02/0.69 and 0.94/1.54/1.08/0.98/0.66 for I-131; and 1.00/6.34/4.67/2.96/2.28 and 1.01/6.21/4.49/2.86/2.21 for Lu-177. There was no statistically significant difference in SUV between the Intevo 16 and Intevo Bold (p>0.05). Conclusion: In the past, only qualitative analysis was possible with a gamma camera. With a SPECT/CT scanner, it is now possible to obtain not only anatomical localization and 3D tomography but also quantitative analysis with SUV measurements. We laid the groundwork for quantitative analysis with various isotopes (Tc-99m, I-123, I-131, Lu-177) by carrying out B.Q.C, and verified the SUV measurements with the ACR phantom. Periodic calibration is needed to maintain the precision of the quantitative evaluation. As a result, quantitative analysis can be provided on follow-up SPECT/CT exams and the therapeutic response evaluated in theranostics.
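The quantities in this workflow follow standard definitions: sensitivity is the measured count rate per unit activity (CPS/MBq), and SUV is tissue concentration normalized by injected dose per body weight. The sketch below shows those formulas; all input numbers are illustrative, not the paper's calibration data.

```python
# Standard-definition sketch; sample values are illustrative only.
def sensitivity_cps_per_mbq(count_rate_cps, activity_mbq):
    """Detector sensitivity = measured count rate / known source activity."""
    return count_rate_cps / activity_mbq

def suv(voxel_conc_bq_per_ml, injected_dose_bq, body_weight_g):
    """SUV = tissue concentration / (injected dose / body weight)."""
    return voxel_conc_bq_per_ml / (injected_dose_bq / body_weight_g)

print(sensitivity_cps_per_mbq(8770.0, 100.0))  # 87.7 CPS/MBq
print(suv(5000.0, 370_000_000, 70_000))        # ~0.95 for this example
```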

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.1
    • /
    • pp.96-110
    • /
    • 2017
  • Most vehicle detection studies using a general or wide-angle lens have a blind spot in the rear detection situation, and the image is vulnerable to noise and a variety of external environments. In this paper, we propose a detection method for harsh external environments with noise, blind spots, etc. First, a fish-eye lens is used to minimize blind spots compared with a wide-angle lens. As the lens angle grows, nonlinear radial distortion also increases, so calibration was applied after initializing and optimizing the distortion constants to ensure accuracy. In addition, together with calibration, the original image was processed to remove fog and to correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy situations or by sudden changes in illumination. Fog removal generally takes considerable computation time, so to reduce it we used Dark Channel Prior, the principal fog removal algorithm. Gamma correction was used to adjust brightness, and a brightness and contrast evaluation was conducted on the image to determine the gamma value needed for correction; to save computation time, only a part of the image, rather than the whole, was evaluated. Once the brightness and contrast values were calculated, they were used to decide the gamma value and correct the entire image. Brightness correction and fog removal were processed in parallel, and the resulting images were merged into a single image to minimize the total computation time. The HOG feature extraction method was then used to detect the vehicle in the corrected image. As a result, detecting the vehicle with the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate over the existing vehicle detection method.
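The two image corrections named in the abstract can be sketched compactly: the dark channel (a per-pixel channel minimum followed by a local minimum filter, the statistic Dark Channel Prior builds on) and a gamma curve for brightness. The patch size, gamma value, and toy image are assumptions for illustration, not the authors' parameters.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over color channels, then a min filter over a
    patch window; haze-free regions tend to have low dark-channel values."""
    mins = img.min(axis=2)
    h, w = mins.shape
    out = np.empty_like(mins)
    r = patch // 2
    for y in range(h):
        for x in range(w):
            out[y, x] = mins[max(0, y - r):y + r + 1,
                             max(0, x - r):x + r + 1].min()
    return out

def gamma_correct(img, gamma):
    """Apply a gamma curve to an 8-bit-range image (gamma < 1 brightens)."""
    return (img / 255.0) ** gamma * 255.0

img = np.full((4, 4, 3), 200.0)
img[2, 2] = (40.0, 60.0, 80.0)        # one dark, colour-shifted pixel
print(dark_channel(img).min())        # 40.0
print(gamma_correct(img, 0.5)[0, 0])  # brightened pixel values
```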

Wavelet Transform-based Face Detection for Real-time Applications (실시간 응용을 위한 웨이블릿 변환 기반의 얼굴 검출)

  • 송해진;고병철;변혜란
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.9
    • /
    • pp.829-842
    • /
    • 2003
  • In this paper, we propose a new face detection and tracking method based on template matching for real-time applications such as teleconferencing, telecommunication, the front stage of a face-recognition surveillance system, and video-phone applications. Since the main purpose of the paper is to track a face regardless of the environment, we use a template-based face tracking method. To generate robust face templates, we apply a wavelet transform to the average face image and extract three types of wavelet template from the transformed low-resolution average face. Because template matching is generally sensitive to changes in illumination, we apply min-max normalization with histogram equalization according to the variation in intensity. Tracking is also applied to reduce computation time and to predict a precise face candidate region. Finally, facial components are detected, and from the relative distance between the two eyes we estimate the size of the facial ellipse.
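The low-resolution wavelet template and the min-max normalization mentioned in the abstract can be illustrated with a one-level 2-D Haar approximation band (each 2x2 block averaged) followed by scaling into [0, 1]. The 4x4 "face" below is an illustrative stand-in for the average face image, not data from the paper.

```python
import numpy as np

def haar_approx(img):
    """Average each 2x2 block: the LL (approximation) band of one
    2-D Haar wavelet decomposition step, i.e., a half-resolution image."""
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def min_max_normalize(img):
    """Rescale intensities into [0, 1] to reduce illumination sensitivity."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

face = np.arange(16, dtype=float).reshape(4, 4)  # toy "average face"
ll = haar_approx(face)                           # 2x2 low-resolution template
print(ll)                                        # [[2.5 4.5] [10.5 12.5]]
print(min_max_normalize(ll))                     # scaled into [0, 1]
```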