• Title/Summary/Keyword: camera video model (카메라 영상 모형)

Search Results: 80

Statistical Modeling Methods for Analyzing Human Gait Structure (휴먼 보행 동작 구조 분석을 위한 통계적 모델링 방법)

  • Sin, Bong Kee
    • Smart Media Journal
    • /
    • v.1 no.2
    • /
    • pp.12-22
    • /
    • 2012
  • Today we are witnessing increasingly widespread use of cameras in our lives, for video surveillance, robot vision, and mobile phones. This has led to a renewed interest in computer vision in general and an ongoing boom in human activity recognition in particular. Although not particularly fancy per se, human gait is inarguably the most common and frequent human action. Early in the decade there was a passing interest in human gait recognition, but it declined before a systematic analysis and understanding of walking motion was reached. This paper presents a set of DBN-based models for the analysis of human gait, in a sequence of increasing complexity and modeling power. The discussion centers on HMM-based statistical methods capable of modeling the variability and incompleteness of input video signals. Finally, a novel idea of extending the discrete-state Markov chain with a continuous density function is proposed in order to better characterize the gait direction. Through a sequence of experiments, the proposed modeling framework recognized pedestrians with an accuracy of up to 91.67% and cleanly decoded the two independent gait components of direction and posture.

  • PDF
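The HMM-based decoding the abstract alludes to can be illustrated with a minimal Viterbi sketch; the gait phases, observations, and probabilities below are invented toy values, not the paper's model:

```python
# Minimal Viterbi decoder for a discrete HMM, illustrating how a
# hidden gait-state sequence could be recovered from observations.
# All states and probabilities here are illustrative placeholders.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for an observation sequence."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    last = max(V[-1], key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return path[::-1]

# Toy gait phases and silhouette-width observations.
states = ["stance", "swing"]
start_p = {"stance": 0.6, "swing": 0.4}
trans_p = {"stance": {"stance": 0.7, "swing": 0.3},
           "swing": {"stance": 0.4, "swing": 0.6}}
emit_p = {"stance": {"narrow": 0.8, "wide": 0.2},
          "swing": {"narrow": 0.3, "wide": 0.7}}

print(viterbi(["narrow", "wide", "wide"], states, start_p, trans_p, emit_p))
```

The same dynamic-programming skeleton extends to richer DBN variants by enlarging the state space; the abstract's continuous-density extension for direction would replace the discrete emission table.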

A Method of Extracting Features of Sensor-only Facilities for Autonomous Cooperative Driving

  • Hyung Lee;Chulwoo Park;Handong Lee;Sanyeon Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.12
    • /
    • pp.191-199
    • /
    • 2023
  • In this paper, we propose a method to extract the features of five sensor-only facilities, built as infrastructure for autonomous cooperative driving, from point cloud data acquired by LiDAR. For the image acquisition sensors installed in autonomous vehicles, the acquired data are inconsistent due to climatic conditions and camera characteristics, so a LiDAR sensor was applied to replace them. In addition, high-intensity reflectors were designed and attached to each facility to make it easier to distinguish from other existing facilities with LiDAR. From the five sensor-only facilities developed and the point cloud data acquired by the data acquisition system, feature points were extracted based on the average reflection intensity of the high-intensity reflective sheeting attached to each facility, clustered by the DBSCAN method, and converted to two-dimensional coordinates by a projection method. The features of a facility at each distance consist of three-dimensional point coordinates, two-dimensional projected coordinates, and reflection intensity, and will be used as training data for a facility-recognition model to be developed in the future.
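The clustering step described above can be sketched as follows: filter projected LiDAR returns by reflection intensity, then group the survivors with a minimal DBSCAN. The point coordinates, intensity threshold, and eps/min_pts values are illustrative assumptions, not the paper's parameters:

```python
# Intensity filter + minimal DBSCAN over 2-D projected LiDAR returns.
import math

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id; -1 marks noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1              # noise (may later become a border point)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster     # noise absorbed as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:  # core point: expand the cluster
                seeds.extend(j_nbrs)
    return labels

# Hypothetical (x, y, intensity) returns after projection to 2D.
returns = [(0.0, 0.0, 200), (0.1, 0.0, 210), (0.0, 0.1, 190),
           (5.0, 5.0, 220), (5.1, 5.0, 205), (5.0, 5.1, 215),
           (2.5, 2.5, 30)]                 # low-intensity clutter
bright = [(x, y) for x, y, inten in returns if inten >= 150]
print(dbscan(bright, eps=0.5, min_pts=2))  # two clusters expected
```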

A Study on forest fires Prediction and Detection Algorithm using Intelligent Context-awareness sensor (상황인지 센서를 활용한 지능형 산불 이동 예측 및 탐지 알고리즘에 관한 연구)

  • Kim, Hyeng-jun;Shin, Gyu-young;Woo, Byeong-hun;Koo, Nam-kyoung;Jang, Kyung-sik;Lee, Kang-whan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.6
    • /
    • pp.1506-1514
    • /
    • 2015
  • In this paper, we propose a forest fire prediction and detection system that provides fire prediction and detection using context-awareness sensors. Because a fire can break out over a wide area, it is difficult to detect its occurrence with a single camera sensor. We propose an algorithm that acquires temperature, humidity, CO2, and flame-presence information in real time, compares the data against multiple conditions, and analyzes and weights them to judge fire status in complex situations. In addition, by dividing the fire zone according to its state, differentiated management of intensive fire detection and prediction is possible where required. Therefore, we propose an algorithm that determines prediction and detection from the fire parameters of temperature, humidity, CO2, and flame in real time using context-awareness sensors, and we also suggest an algorithm that provides the path of fire diffusion and predicts secure safety zones.
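The weighted multi-condition decision the abstract outlines might look like the following sketch; the weights, thresholds, and risk levels are placeholders, not the authors' calibrated values:

```python
# Combine temperature, humidity, CO2, and flame readings into a single
# fire-risk score. All weights and cutoffs below are illustrative only.

def fire_risk(temp_c, humidity_pct, co2_ppm, flame_detected):
    """Return a 0..1 risk score from weighted sensor conditions."""
    score = 0.0
    if temp_c > 45:            # abnormal heat
        score += 0.3
    if humidity_pct < 25:      # dry air favors ignition
        score += 0.2
    if co2_ppm > 800:          # combustion by-product
        score += 0.2
    if flame_detected:         # direct flame-sensor hit
        score += 0.3
    return score

def classify(score):
    """Map a risk score to a coarse alert level."""
    if score >= 0.7:
        return "fire detected"
    if score >= 0.4:
        return "fire likely"
    return "normal"

print(classify(fire_risk(60, 18, 950, True)))   # every condition met
print(classify(fire_risk(22, 55, 400, False)))  # quiet forest
```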

Analysis of Respiratory Motion Artifacts in PET Imaging Using Respiratory Gated PET Combined with 4D-CT (4D-CT와 결합한 호흡게이트 PET을 이용한 PET영상의 호흡 인공산물 분석)

  • Cho, Byung-Chul;Park, Sung-Ho;Park, Hee-Chul;Bae, Hoon-Sik;Hwang, Hee-Sung;Shin, Hee-Soon
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.3
    • /
    • pp.174-181
    • /
    • 2005
  • Purpose: Reduction of respiratory motion artifacts in PET images was studied using respiratory-gated PET (RGPET) with a moving phantom. In particular, a method of generating simulated helical CT images from 4D-CT datasets was developed and applied to respiration-phase-specific RGPET images for more accurate attenuation correction. Materials and Methods: Using a motion phantom with a period of 6 seconds and a linear motion amplitude of 26 mm, PET/CT (Discovery ST; GEMS) scans with and without respiratory gating were obtained for one syringe and two vials with volumes of 3, 10, and 30 ml, respectively. RPM (Real-Time Position Management, Varian) was used to track motion during PET/CT scanning. Ten datasets of RGPET and 4D-CT corresponding to 10% phase intervals were acquired. From the positions, sizes, and uptake values of each subject in the resulting phase-specific PET and CT datasets, the correlations between motion artifacts in PET and CT images and the size of motion relative to the size of the subject were analyzed. Results: The center positions of the three vials in RGPET and 4D-CT agreed well with the actual positions within the estimated error. However, the volumes of subjects in non-gated PET images increased in proportion to the relative motion size and were overestimated by as much as 250% when the motion amplitude was twice the size of the subject. Conversely, the corresponding maximal uptake value was reduced to about 50%. Conclusion: RGPET is demonstrated to remove respiratory motion artifacts in PET imaging; moreover, more precise image fusion and more accurate attenuation correction are possible by combining it with 4D-CT.

A Study of the Reactive Movement Synchronization for Analysis of Group Flow (그룹 몰입도 판단을 위한 움직임 동기화 연구)

  • Ryu, Joon Mo;Park, Seung-Bo;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.79-94
    • /
    • 2013
  • Recently, high-value-added business has been growing steadily in the culture and arts area. To generate high value from a performance, audience satisfaction is necessary. Flow is a critical factor in satisfaction, and it should be induced in the audience and measured. To evaluate the audience's interest in and emotional response to content, producers and investors need an index for measuring flow. However, it is neither easy to define flow quantitatively nor to collect an audience's reactions immediately. Previous studies evaluated group flow as the sum of the average values of each person's reactions. The flow, or "good feeling", of each audience member was extracted from his or her face, especially changes of expression, and from body movement. But it was not easy to handle the large amount of real-time data from the individual sensor signals, and the experimental devices were difficult to set up for economic and environmental reasons: every participant needed a personal sensor to check physical signals, and a camera had to be located in front of each person's head to capture their looks. Therefore, a simpler system for analyzing group flow is needed. This study provides a method for measuring audience flow through group synchronization at the same time and place. To measure synchronization, we built a real-time processing system using differential images and a Group Emotion Analysis (GEA) system. A differential image is obtained from the camera by subtracting the previous frame from the present frame, which yields the movement variation of the audience's reaction. We then developed a program, GEA (Group Emotion Analysis), as a flow-judgment model. After measuring the audience's reactions, synchronization is divided into dynamic-state synchronization and static-state synchronization.
Dynamic-state synchronization accompanies the audience's active reactions, while static-state synchronization corresponds to near-stillness of the audience. Dynamic-state synchronization can be caused by the audience's surprised reactions to scary, creepy, or plot-reversal scenes, whereas static-state synchronization is triggered by impressive or sad scenes. We therefore showed the audience several short movies containing the various kinds of scenes mentioned above, which made them sad, made them clap, gave them the creeps, and so on. To check the movement of the audience, we defined the critical points α and β: dynamic-state synchronization is meaningful when the movement value is over the critical point β, while static-state synchronization is effective under the critical point α. β was derived from the clapping movements of 10 teams instead of using the average amount of movement. After checking the reactive movement of the audience, the percentage ratio was calculated by dividing the number of people reacting by the total number of people. In total, 37 teams were formed at the 2012 Seoul DMC Culture Open and took part in the experiments. First, the audience was induced to clap by the staff; second, a basic scene was shown to neutralize the audience's emotion; third, a flow scene was displayed; and fourth, a reversal scene was introduced. Then 24 of the teams were shown amusing and creepy scenes, and the other 10 teams were shown a sad scene. On the amusing scene the audience clapped and laughed, on the creepy scene they shook their heads or hid by closing their eyes, and the sad or touching scene made them silent. If the result was over about 80%, the group could be judged as synchronized and the flow as achieved. As a result, the audience showed similar reactions to similar stimuli at the same time and place.
With additional normalization and experiments, the flow factor could be obtained through synchronization in much bigger groups, which should be useful for planning contents.
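The differential-image measurement and the 80% synchronization rule described above can be sketched as follows; the toy frames and the α/β thresholds are illustrative stand-ins for the paper's calibrated values:

```python
# Frame differencing plus the dynamic/static synchronization judgment.
# Frames are toy grayscale grids; alpha and beta are placeholder values.

def movement(prev_frame, cur_frame):
    """Sum of absolute pixel differences between consecutive frames."""
    return sum(abs(a - b)
               for row_p, row_c in zip(prev_frame, cur_frame)
               for a, b in zip(row_p, row_c))

def sync_state(value, alpha=5, beta=50):
    """Classify one viewer's movement value against the critical points."""
    if value >= beta:
        return "dynamic"    # large motion: clapping, startled reaction
    if value <= alpha:
        return "static"     # stillness: impressed or sad scene
    return "none"

def group_sync_ratio(states, target):
    """Fraction of audience members in the target synchronization state."""
    return sum(s == target for s in states) / len(states)

prev = [[10, 10], [10, 10]]
loud = [[40, 40], [40, 40]]     # every pixel jumped by 30 -> total 120
still = [[10, 11], [10, 10]]    # barely any change -> total 1

states = [sync_state(movement(prev, loud)) for _ in range(8)] + \
         [sync_state(movement(prev, still)) for _ in range(2)]
ratio = group_sync_ratio(states, "dynamic")
print(ratio, ratio >= 0.8)      # the ~80% rule from the abstract
```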

Multi-point Dynamic Displacement Measurements of Structures Using Digital Image Correlation Technique (Digital Image Correlation기법을 이용한 구조물의 다중 동적변위응답 측정)

  • Kim, Sung-Wan;Kim, Nam-Sik
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.13 no.3
    • /
    • pp.11-19
    • /
    • 2009
  • Recently, concerns about the maintenance of large structures have increased. In addition, the number of large structures that must be evaluated for structural safety due to natural disasters and deterioration has been rapidly increasing. It is common for the structural characteristics of an older large structure to differ from those of the initial design stage, and changes in dynamic characteristics may result from a reduction in stiffness due to cracks in the materials. Monitoring the deterioration of such structures enables damaged locations to be detected and evaluated quantitatively. One of the typical measuring instruments used for monitoring bridges and buildings is the dynamic measurement system. Conventional dynamic measurement systems require considerable cabling to connect the sensors directly to a DAQ logger. For this reason, a method of measuring structural responses from a remote distance without mounted sensors is needed. Among non-contact methods applicable to dynamic response measurement, those using the Doppler effect of a laser or a GPS are commonly used. However, such methods cannot generally be applied to bridge structures because of their cost and inaccuracy. Alternatively, a method using visual images can be economical as well as feasible for measuring the vibration signals of inaccessible bridge structures and extracting their dynamic characteristics. Many studies have used camera visual signals instead of conventional mounted sensors. However, these studies have focused on measuring the displacement response with an image processing technique after recording the position of a target mounted on the structure, in which case the number of measurement targets may be limited.
Therefore, in this study, a model experiment was carried out to verify an algorithm for measuring multi-point displacement responses using the DIC (Digital Image Correlation) technique.
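The subset matching at the heart of DIC can be illustrated in one dimension: find the displacement that maximizes normalized cross-correlation between a reference pattern and the deformed signal. Real DIC operates on 2-D image subsets; this is a simplified sketch with made-up intensity values:

```python
# 1-D normalized cross-correlation matching, the core idea behind DIC.
import math

def ncc(a, b):
    """Zero-mean normalized cross-correlation of equal-length signals."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def displacement(template, signal):
    """Offset at which the template best matches the search signal."""
    return max(range(len(signal) - len(template) + 1),
               key=lambda o: ncc(template, signal[o:o + len(template)]))

reference = [0, 1, 5, 9, 5, 1, 0]          # intensity pattern at time t
deformed  = [0, 0, 0, 1, 5, 9, 5, 1, 0]    # same pattern shifted by 2
print(displacement(reference, deformed))
```

Tracking many such subsets in each video frame is what makes multi-point displacement measurement possible without mounted sensors.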

Realtime Video Visualization based on 3D GIS (3차원 GIS 기반 실시간 비디오 시각화 기술)

  • Yoon, Chang-Rak;Kim, Hak-Cheol;Kim, Kyung-Ok;Hwang, Chi-Jung
    • Journal of Korea Spatial Information System Society
    • /
    • v.11 no.1
    • /
    • pp.63-70
    • /
    • 2009
  • 3D GIS (Geographic Information System) processes, analyzes, and presents various real-world 3D phenomena by building 3D spatial information on real-world terrain, facilities, etc., combined with visualization techniques such as VR (Virtual Reality). It can be applied to areas such as urban management, traffic information, environment management, disaster management, and ocean management systems. In this paper, we propose a video visualization technology based on 3D geographic information to effectively provide real-time information in a 3D geographic information system, and we also present methods for building 3D building information data. The proposed video visualization system can provide real-time video information based on 3D geographic information by projecting real-time video streams from network video cameras onto 3D geographic objects and texture-mapping the video frames onto terrain, facilities, etc. For 3D projective texture mapping, we developed a semi-automatic DBM (Digital Building Model) construction technique using both aerial images and LiDAR data. Current 3D geographic information systems provide static visualization information, and the proposed method can replace this static information with real video information. The proposed method can be used in location-based decision-making systems by providing real-time visualization information; moreover, it can be used to provide intelligent context-aware services based on geographic information.

  • PDF
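The core of projective texture mapping, deciding which video pixel lands on a 3-D point, can be sketched with a simple pinhole projection; the camera pose and intrinsics below are invented for illustration:

```python
# Pinhole projection of a 3-D world point to video-pixel coordinates,
# the lookup used when draping a video frame over 3-D geometry.

def project(point, cam_pos, focal, cx, cy):
    """Project a 3-D point (camera looking down +z) to pixel coords."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    if z <= 0:
        return None               # behind the camera: no texture sample
    u = cx + focal * x / z        # perspective divide by depth
    v = cy + focal * y / z
    return (u, v)

# A building corner 10 m in front of the camera, 2 m right, 1 m up.
pixel = project((2.0, 1.0, 10.0), (0.0, 0.0, 0.0),
                focal=500.0, cx=320.0, cy=240.0)
print(pixel)
```

In a full system this projection would also include camera rotation and lens distortion, and the sampled pixel color would be blended onto the terrain or building texture.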

Measurement of the Flow Field in a River (LSPIV에 의한 하천 표면유속장의 관측)

  • Kim, Young-Sung;Yang, Jae-Rheen
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2009.05a
    • /
    • pp.1812-1816
    • /
    • 2009
  • Velocity-field measurement by image analysis is a velocity-measurement technique that has been widely used in fluid mechanics for the past 30 or so years, and attempts are now being made in hydraulic engineering to apply it to the analysis of hydraulic phenomena such as discharge measurement. In this study, we apply image-based velocity-field measurement to the Yongdam Dam experimental watershed to examine its applicability to natural rivers. Velocity-field measurement by image analysis is commonly called PIV (Particle Image Velocimetry), and PIV consists of four elements: seeding, illumination, recording, and image processing. For seeding, small particles that can follow the flow are added to the fluid. Illumination is needed to obtain clear images of the particles moving with the fluid; for PIV flow analysis, a double-pulsed laser is generally used. Once seeding and illumination are prepared for the fluid whose velocity field is to be analyzed, the flow is recorded by the single-exposure multi-frame method or the multi-exposure single-frame method. Image processing consists of pre-processing (downloading the images, digitizing, and image enhancement) and post-processing (determining velocity vectors by computing correlation coefficients, removing erroneous vectors, and plotting the velocity field). LSPIV (Large Scale PIV) is an extension of the basic principles of PIV by Fujita et al. (1994) and Aya et al. (1995) that enables flow analysis at scales much larger than conventional PIV (4 m² to 45,000 m²), such as laboratory hydraulic-model experiments or velocity measurement in natural rivers. LSPIV differs from PIV in that, to cover a wide flow surface, the angle between the camera's optical axis and the flow is oblique rather than perpendicular as in PIV; the resulting image distortion is removed by applying an image-transformation technique that converts the images into distortion-free orthographic images. From that point on, PIV image processing is applied to estimate the surface velocity, although the velocity-field results differ depending on whether the image transformation is performed before or after PIV image processing. Including the four PIV steps, LSPIV is divided into six steps: seeding, illumination, recording, image transformation, image processing, and post-processing (Li, 2002). For tracing surface particles when applying LSPIV, wood mulch, an environmentally friendly seeding material suitable for use in natural rivers, was used to measure the velocity. The application site is the Donghyang water-level gauging station upstream of Yongdam Dam, where a Korea Water Resources Corporation experimental watershed is located. Images were recorded with a consumer video camcorder (Sony DCR-PC 350) for about five minutes on each of two flow branches; about one minute of footage with well-distributed seeding was then extracted and used for the PIV analysis. On the whole, the velocity field was computed without difficulty, but in sections where the water was relatively clear and shallow and the bed material was gravel, making it hard to trace seeding material of a similar color, the velocity could not be computed accurately.

  • PDF
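The image-transformation step that distinguishes LSPIV from plain PIV can be sketched as applying a homography that maps oblique image coordinates to a distortion-free orthographic view; the matrix H below is made up for illustration, whereas in practice it would be solved from surveyed ground control points:

```python
# Apply a 3x3 homography to rectify an obliquely viewed water surface.

def apply_homography(H, u, v):
    """Map pixel (u, v) through homography H to rectified coordinates."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)          # perspective divide

# Identity plus a perspective term: points lower in the frame (larger v)
# are scaled down, mimicking an oblique camera angle being undone.
H = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.001, 1.0]]

print(apply_homography(H, 100.0, 200.0))
```

After every frame is rectified this way, the standard PIV correlation analysis runs on the orthographic images to produce surface velocities in real-world units.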

A Study for Characterization on Shallow Behavior of Soil Slope by Flume Experiments (토조실험 장치를 이용한 토사비탈면 표층거동 특성 연구)

  • Suk, Jae-wook;Park, Sung-Yong;Na, Geon-ha;Kang, Hyo-Sub
    • The Journal of Engineering Geology
    • /
    • v.28 no.3
    • /
    • pp.489-499
    • /
    • 2018
  • Flume experiments were used to study the characteristics of surface displacements and volumetric water content (VWC) during torrential rain. The surface displacement and VWC of weathered granite soil were measured for two rainfall intensities (100 and 200 mm/hr) and three initial ground conditions (VWC 7, 14, and 26%). The tests were also recorded by video cameras. According to the results, shallow failure is classified into three types: retrogressive failure, progressive failure, and defined failure. In the case of retrogressive and progressive failure, relatively large damage can occur because soil is deposited at the bottom of the slope. Shallow failure occurred when the VWC reached a certain value regardless of the initial soil condition. It was found that shallow failure can be predicted from the increase pattern of the VWC under the dry ground condition (VWC 7%) and the natural condition (VWC 14%). At high rainfall intensity, progressive failure predominated, and rainfall intensity above a certain level did not affect the wetting-front transition.
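The finding that shallow failure occurs once the VWC reaches a certain value, regardless of initial condition, suggests a simple early-warning check on a VWC time series; the critical value used here is a placeholder, not the paper's measured threshold:

```python
# Detect when a volumetric-water-content series first reaches a
# critical value. The 32% threshold and readings are hypothetical.

def first_exceedance(vwc_series, critical_vwc):
    """Index of the first reading at or above the critical VWC, or None."""
    for i, vwc in enumerate(vwc_series):
        if vwc >= critical_vwc:
            return i
    return None

# Hypothetical VWC readings (%) during a rainfall test.
readings = [14, 16, 19, 24, 29, 33, 36]
print(first_exceedance(readings, critical_vwc=32))
```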

Development of Greenhouse Environment Monitoring & Control System Based on Web and Smart Phone (웹과 스마트폰 기반의 온실 환경 제어 시스템 개발)

  • Kim, D.E.;Lee, W.Y.;Kang, D.H.;Kang, I.C.;Hong, S.J.;Woo, Y.H.
    • Journal of Practical Agriculture & Fisheries Research
    • /
    • v.18 no.1
    • /
    • pp.101-112
    • /
    • 2016
  • Monitoring and control of the greenhouse environment play a decisive role in greenhouse crop production. The network system for greenhouse control was developed using recent networking and wireless communication technologies. In this paper, a remote monitoring and control system for greenhouses using a smartphone and an internet-connected computer has been developed. The system provides a real-time integrated greenhouse management service that collects greenhouse environment information and controls greenhouse facilities through a network of sensors and equipment. The graphical user interface for the integrated management system was designed on an HMI basis, and the experimental results showed that sensor data and device status were collected by the integrated management system in real time. Because the sensor data and device status can be displayed on a web page, they are transmitted by the server program to remote computers and mobile smartphones simultaneously. The monitored data can be downloaded, analyzed, and saved from the server program in real time via mobile phone or the internet at a remote location. Performance tests of the greenhouse control system confirmed that everything works successfully under the operating conditions, and data collection and display, event actions, and crop and equipment monitoring showed reliable results.