• Title/Summary/Keyword: Video Projection

Multiple Pedestrians Detection using Motion Information and Support Vector Machine from a Moving Camera Image (이동 카메라 영상에서 움직임 정보와 Support Vector Machine을 이용한 다수 보행자 검출)

  • Lim, Jong-Seok;Park, Hyo-Jin;Kim, Wook-Hyun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.4
    • /
    • pp.250-257
    • /
    • 2011
  • In this paper, we propose a method for detecting multiple pedestrians from a moving camera image using motion information and an SVM (Support Vector Machine). First, moving pedestrians are detected from the difference image and a projection histogram that is compensated for camera ego-motion using corresponding feature sets. The difference image is simple to compute but cannot detect motionless pedestrians. To address this problem, we detect motionless pedestrians using an SVM, which performs well in binary classification problems such as pedestrian detection. However, the SVM alone fails when pedestrians are adjacent to one another or move their arms and legs excessively. Therefore, this paper proposes a method that combines motion information and the SVM to detect motionless and adjacent pedestrians as well as pedestrians making excessive movements. Experimental results on various test video sequences demonstrate the efficiency of the approach, with an average detection rate of 94% and a false positive rate of 2.8%.
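
The two-stage idea in this abstract (motion cues for moving pedestrians, an SVM classifier for motionless ones) can be sketched with standard tools. The snippet below is a minimal illustration using OpenCV's stock HOG+SVM people detector and plain frame differencing; it is not the authors' implementation, the file name is hypothetical, and the ego-motion compensation step described in the paper is omitted.

```python
import cv2

# Stock HOG descriptor with OpenCV's pre-trained pedestrian SVM
# (a stand-in for the paper's purpose-trained SVM).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("walk.mp4")   # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1) Motion cue: simple difference image (no ego-motion compensation here).
    diff = cv2.absdiff(gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # 2) Appearance cue: HOG+SVM detector also finds motionless pedestrians.
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))

    for (x, y, w, h) in boxes:
        moving = motion_mask[y:y + h, x:x + w].mean() > 10  # crude fusion of the two cues
        color = (0, 255, 0) if moving else (0, 0, 255)
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)

    cv2.imshow("pedestrians", frame)
    prev_gray = gray
    if cv2.waitKey(1) == 27:
        break
```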

A Study on the Method of Creating Realistic Content in Audience-participating Performances using Artificial Intelligence Sentiment Analysis Technology (인공지능 감정분석 기술을 이용한 관객 참여형 공연에서의 실감형 콘텐츠 생성 방식에 관한 연구)

  • Kim, Jihee;Oh, Jinhee;Kim, Myeungjin;Lim, Yangkyu
    • Journal of Broadcast Engineering
    • /
    • v.26 no.5
    • /
    • pp.533-542
    • /
    • 2021
  • In this study, we propose a process for re-creating Jindo Buk Chum, one of Korea's traditional arts, as digital art using several artificial intelligence technologies. The audience's emotional data, quantified through AI language analysis, intervenes in the projection-mapping performance as various object forms and influences the overall narrative without changing it. Whereas most interactive art expresses communication between the performer and the video, this performance becomes a new type of responsive performance in which the audience communicates directly with the work, centered on AI sentiment analysis technology. The starting point is 'Chuimsae', a convention unique to Korean traditional performing arts in which the audience directly or indirectly intervenes in and influences the performance. Emotional information contained in the performer's prologue is combined with the audience's emotional information and converted into the images and particles used in the performance, so that the audience indirectly participates in and changes the performance.
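
As a rough illustration of how quantified audience sentiment could drive projected visuals, the sketch below maps a sentiment score in [-1, 1] to hypothetical particle parameters (color, speed, spawn rate). The score source, mapping, and parameter names are assumptions for illustration only, not the production pipeline used in the performance.

```python
def sentiment_to_particles(score: float) -> dict:
    """Map a sentiment score in [-1, 1] to illustrative particle parameters.

    The mapping (cool/slow for negative, warm/fast for positive) is an
    assumption for demonstration, not the performance's actual design.
    """
    score = max(-1.0, min(1.0, score))
    warmth = (score + 1.0) / 2.0                    # 0 = negative, 1 = positive
    return {
        "color_rgb": (int(255 * warmth), 64, int(255 * (1.0 - warmth))),
        "speed": 0.5 + 1.5 * warmth,                # particles move faster when positive
        "spawn_rate": int(50 + 150 * abs(score)),   # stronger emotion spawns more particles
    }

print(sentiment_to_particles(0.7))   # e.g. an audience reaction judged positive
```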

Optical System Design of Compact Head-Up Display(HUD) using Micro Display (마이크로 디스플레이를 이용한 소형 헤드업 디스플레이 광학계 설계)

  • Han, Dong-Jin;Kim, Hyun-Hee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.9
    • /
    • pp.6227-6235
    • /
    • 2015
  • As a see-through information display device, the HUD has recently been downsized thanks to advances in micro-display and LED technology, and its application areas are gradually expanding. In this paper, we design a compact head-up display (HUD) optical system for biocular observation of a 5-inch image display area using a DLP micro-display device. Each design element of the optical system was analyzed in order to design a compact HUD, and the design approach and characteristics of the DLP, the projection optical system, and the concave image combiner were discussed. Through an analysis of how the optical subsystems are connected, detailed design specifications were established and the optical system was designed in detail. A folded configuration with a white diffuse reflector was placed between the projection lens and the concave image combiner so that each could be designed independently. The distance of the projected image is adjustable from approximately 2 m to infinity, and the observation distance is 1 m. The resolution allows 1~2 pixels to be recognized at HD (1,280 × 720 pixels) class, so various characters and symbols can be read. In addition, a color navigation map, daytime video camera, and thermal imaging camera can be displayed.
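
The 2 m-to-infinity virtual image range follows from basic first-order optics: with a concave combiner of focal length f, placing the intermediate image (on the diffuser) at distance d_o < f produces a virtual image at |d_i| = f * d_o / (f - d_o), which diverges as d_o approaches f. The snippet below is a minimal check of that relation with an assumed focal length of 200 mm, not the paper's actual design values.

```python
def virtual_image_distance(f_mm: float, d_obj_mm: float) -> float:
    """Virtual image distance |d_i| for a concave combiner (thin-mirror model).

    f_mm: combiner focal length; d_obj_mm: intermediate-image distance (< f).
    The numbers used here are illustrative, not the paper's design data.
    """
    if d_obj_mm >= f_mm:
        return float("inf")
    return f_mm * d_obj_mm / (f_mm - d_obj_mm)

f = 200.0  # assumed focal length [mm]
for d_obj in (180.0, 190.0, 198.0, 200.0):
    d_img = virtual_image_distance(f, d_obj)
    print(f"d_o = {d_obj:6.1f} mm  ->  virtual image at {d_img:8.1f} mm")
# Moving the diffuser toward the focal point pushes the virtual image
# from roughly 2 m out to infinity.
```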

2D Interpolation of 3D Points using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트의 2차원 보간 방안)

  • Hwang, Yonghae;Kim, Junsik;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.692-703
    • /
    • 2021
  • Recently, with the development of computer graphics technology, research on expressing real objects as more realistic virtual graphics has been actively conducted. A point cloud represents a 3D object with numerous points, each carrying 3D spatial coordinates and color information, and providing services with point clouds requires huge data storage and high-performance computing devices. Video-based Point Cloud Compression (V-PCC), currently being standardized by the international standards organization MPEG, is a projection-based method that projects a point cloud onto 2D planes and then compresses the result using 2D video codecs. V-PCC compresses a point cloud object using 2D images such as the occupancy map, geometry image, and attribute image, together with auxiliary information describing the relationship between the 2D plane and 3D space. When increasing the density of a point cloud or enlarging an object, 3D computation is generally used, but it is complicated, time-consuming, and it is difficult to determine the correct location of a new point. This paper proposes a method, within the V-PCC framework, that generates additional points at more accurate locations with less computation by applying 2D interpolation to the image onto which the point cloud is projected.
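
The core idea, generating extra points by interpolating the projected 2D geometry image rather than computing in 3D, can be sketched with plain image upsampling gated by the occupancy map. The snippet below is a simplified NumPy/OpenCV illustration (patch boundaries, multiple layers, and attribute images are ignored); it is not the V-PCC reference software.

```python
import numpy as np
import cv2

def upsample_projected_points(geometry: np.ndarray,
                              occupancy: np.ndarray,
                              scale: int = 2):
    """Densify a point cloud by interpolating its 2D projection.

    geometry:  HxW depth image from the projection (e.g. one V-PCC layer).
    occupancy: HxW binary map marking pixels that carry a projected point.
    Returns (u, v, depth) samples for the occupied up-sampled pixels.
    Simplified sketch only, not the V-PCC reference behaviour.
    """
    h, w = geometry.shape
    # Bilinear interpolation of depth values on the 2D projection plane.
    geo_up = cv2.resize(geometry.astype(np.float32), (w * scale, h * scale),
                        interpolation=cv2.INTER_LINEAR)
    # Occupancy stays binary so no points appear outside projected patches.
    occ_up = cv2.resize(occupancy.astype(np.uint8), (w * scale, h * scale),
                        interpolation=cv2.INTER_NEAREST)
    v, u = np.nonzero(occ_up)
    return u / scale, v / scale, geo_up[v, u]

# Toy example: a 4x4 patch densified by a factor of 2.
geo = np.arange(16, dtype=np.float32).reshape(4, 4)
occ = np.ones((4, 4), dtype=np.uint8)
u, v, d = upsample_projected_points(geo, occ, scale=2)
print(len(d), "interpolated samples")   # 64 samples from the original 16 points
```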

3DTIP: 3D Stereoscopic Tour-Into-Picture of Korean Traditional Paintings (3DTIP: 한국 고전화의 3차원 입체 Tour-Into-Picture)

  • Jo, Cheol-Yong;Kim, Man-Bae
    • Journal of Broadcast Engineering
    • /
    • v.14 no.5
    • /
    • pp.616-624
    • /
    • 2009
  • This paper presents a 3D stereoscopic TIP (Tour Into Picture) for Korean classical paintings composed of persons, boats, and landscape. Unlike conventional TIP methods that produce 2D images or video, the proposed TIP provides users with 3D stereoscopic content: navigating a picture with stereoscopic viewing delivers a more realistic and immersive perception. The method first prepares input data consisting of a foreground mask, a background image, and a depth map. The second step navigates the picture and obtains rendered images by orthographic or perspective projection. Two depth-enhancement schemes, a depth template and Laws depth, are then used to reduce the cardboard effect and thus enhance the perceived 3D depth of the foreground objects. In experiments, the proposed method was tested on 'Danopungjun' and 'Muyigido', famous paintings of the Chosun Dynasty. The stereoscopic animation was shown to deliver a new 3D perception compared with 2D video.
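
Producing the stereoscopic pair from a rendered view and its depth map can be illustrated with a simple depth-image-based rendering (DIBR) shift: pixels nearer to the camera receive a larger horizontal disparity. The sketch below is a bare-bones forward-warping example with synthetic data (no hole filling, no depth templates); it is not the authors' renderer.

```python
import numpy as np

def render_stereo_pair(image: np.ndarray, depth: np.ndarray, max_disp: int = 16):
    """Forward-warp an image into left/right views using a depth map.

    image: HxWx3 uint8 rendered view.  depth: HxW in [0, 1], 1 = nearest.
    Minimal sketch: nearer pixels get larger disparity; holes stay black.
    """
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    disp = (depth * max_disp / 2).astype(np.int32)
    xs = np.arange(w)
    for y in range(h):
        xl = np.clip(xs + disp[y], 0, w - 1)   # shift right for the left eye
        xr = np.clip(xs - disp[y], 0, w - 1)   # shift left for the right eye
        left[y, xl] = image[y, xs]
        right[y, xr] = image[y, xs]
    return left, right

# Toy usage with a flat gray frame and a linear depth ramp.
img = np.full((120, 160, 3), 200, dtype=np.uint8)
dep = np.tile(np.linspace(0, 1, 160, dtype=np.float32), (120, 1))
left_view, right_view = render_stereo_pair(img, dep)
```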

A Method of Pedestrian Flow Speed Estimation Adaptive to Viewpoint Changes (시점변화에 적응적인 보행자 유동 속도 측정)

  • Lee, Gwang-Gook;Yoon, Ja-Young;Kim, Jae-Jun;Kim, Whoi-Yul
    • Journal of Broadcast Engineering
    • /
    • v.14 no.4
    • /
    • pp.409-418
    • /
    • 2009
  • This paper proposes a method to estimate the flow speed of pedestrians in surveillance videos. In the proposed method, the average moving speed of pedestrians is measured by estimating the size of the real-world motion from the observed motion vectors. For this purpose, a pixel-to-meter conversion factor calculated from the camera parameters is introduced. The height information, which is lost through camera projection, is predicted statistically from simulation experiments. Compared with previous work on flow speed estimation, the method can be applied to various camera views because it separates the scene parameters explicitly. Experiments were performed on both simulated image sequences and real video. On the simulated videos, the proposed method estimated the flow speed with an average error of about 0.08 m/s, and it also showed promising results on the real video.
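
The pixel-to-meter conversion mentioned in the abstract can be approximated, for a downward-tilted camera observing points near the ground plane, from the camera height, tilt, and focal length. The sketch below back-projects a pixel onto the ground plane and converts an observed motion-vector magnitude into a walking speed; the camera parameters and motion vector are assumptions for illustration, not the paper's calibration or its statistical height correction.

```python
import numpy as np

def pixel_to_ground(u, v, f, cx, cy, cam_height, tilt_rad):
    """Project an image pixel onto the ground plane (Z = 0).

    Assumes a pinhole camera at height cam_height, pitched down by tilt_rad,
    with focal length f in pixels and principal point (cx, cy).
    All values used below are illustrative, not the paper's calibration.
    """
    du, dv = u - cx, v - cy
    # Ray direction in world coordinates (X right, Y forward, Z up).
    dx = du
    dy = -dv * np.sin(tilt_rad) + f * np.cos(tilt_rad)
    dz = -dv * np.cos(tilt_rad) - f * np.sin(tilt_rad)
    t = cam_height / -dz          # intersect the ray with the ground plane
    return np.array([t * dx, t * dy])

# Assumed setup: camera 6 m high, tilted 30 degrees down, f = 800 px.
f, cx, cy = 800.0, 320.0, 240.0
H, tilt = 6.0, np.radians(30)

# A motion vector of 2 px/frame observed near (320, 300) at 25 fps.
p0 = pixel_to_ground(320, 300, f, cx, cy, H, tilt)
p1 = pixel_to_ground(320, 302, f, cx, cy, H, tilt)
speed = np.linalg.norm(p1 - p0) * 25.0
print(f"estimated pedestrian speed: {speed:.2f} m/s")
```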

Behavioural Characteristics of Walleye Pollack Theragra chalcogramma by Acoustic Sound Conditioning (음향 순치에 의한 명태의 행동 특성)

  • 박용석
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.32 no.4
    • /
    • pp.331-339
    • /
    • 1996
  • Understanding fish behaviour is essential both for improving current fishing gear and methods and for developing acoustic conditioning in marine ranching. This investigation was conducted to provide a basis for predicting the response of fish to acoustic sound. The experimental fish were conditioned with sound and bait. As the acoustic stimulus, a pure sine tone at a frequency of 200 Hz was used; this tone was chosen based on a previous investigation of the hearing ability of walleye pollock Theragra chalcogramma. The fork length of the walleye pollock used in this experiment was 385~450 mm. The conditioning procedure was recorded on a video tape recorder, and the frequency of appearance in the feeding area was analyzed with a computer and the recordings. The position of each fish was tracked using the mouse cursor on video mixed through a superimpose board. The conditioned fish first responded to the sound stimulus on the 8th day and remembered the stimulus sound for 4 days. The average frequency of appearance in the feeding area during the 30-second sound projection, or during the 1 minute after the sound stimulus, was 51%, higher than before the stimulus.

Reliable Camera Pose Estimation from a Single Frame with Applications for Virtual Object Insertion (가상 객체 합성을 위한 단일 프레임에서의 안정된 카메라 자세 추정)

  • Park, Jong-Seung;Lee, Bum-Jong
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.499-506
    • /
    • 2006
  • This paper describes a fast and stable camera pose estimation method for real-time augmented reality systems. From the feature tracking results of a marker on a single frame, we estimate the camera rotation matrix and translation vector. For the camera pose estimation, we use a shape factorization method based on the scaled orthographic projection model. In scaled orthographic factorization, all feature points of an object are assumed to lie at roughly the same distance from the camera, which means that the selected reference point and the object shape affect the accuracy of the estimation. This paper proposes a flexible and stable method for selecting the reference point. Based on the proposed method, we implemented a video augmentation system that inserts virtual 3D objects into the input video frames. Experimental results show that the proposed camera pose estimation method is fast and robust compared with previous methods and is applicable to various augmented reality applications.
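
Pose recovery under the scaled orthographic (weak-perspective) model can be sketched as a small least-squares problem: subtract a reference point from the marker's model and image coordinates, solve for two scaled rotation rows, and orthonormalize. The snippet below is a minimal single-iteration version of that step with a hypothetical square marker; it is not the authors' factorization code and omits their reference-point selection strategy.

```python
import numpy as np

def pose_scaled_orthographic(model_pts, image_pts, focal):
    """Single-iteration pose under a scaled orthographic projection model.

    model_pts: Nx3 marker points in object coordinates (reference point first).
    image_pts: Nx2 tracked pixel coordinates (principal point already subtracted).
    Returns rotation matrix R and translation t.  Minimal sketch only.
    """
    A = model_pts[1:] - model_pts[0]                 # offsets from the reference point
    x = image_pts[1:, 0] - image_pts[0, 0]
    y = image_pts[1:, 1] - image_pts[0, 1]
    I, *_ = np.linalg.lstsq(A, x, rcond=None)        # scaled first rotation row
    J, *_ = np.linalg.lstsq(A, y, rcond=None)        # scaled second rotation row
    s = 0.5 * (np.linalg.norm(I) + np.linalg.norm(J))
    r1, r2 = I / np.linalg.norm(I), J / np.linalg.norm(J)
    r3 = np.cross(r1, r2)
    r2 = np.cross(r3, r1)                            # re-orthogonalize
    R = np.vstack([r1, r2, r3])
    t = np.array([image_pts[0, 0] / s, image_pts[0, 1] / s, focal / s])
    return R, t

# Hypothetical 80 mm square marker and (already centered) tracked corners.
marker = np.array([[0, 0, 0], [80, 0, 0], [80, 80, 0], [0, 80, 0]], float)
corners = np.array([[-40, -42], [38, -40], [41, 37], [-39, 40]], float)
R, t = pose_scaled_orthographic(marker, corners, focal=700.0)
print("R =\n", R, "\nt =", t)
```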

Estimation of Human Height and Position using a Single Camera (단일 카메라를 이용한 보행자의 높이 및 위치 추정 기법)

  • Lee, Seok-Han;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.45 no.3
    • /
    • pp.20-31
    • /
    • 2008
  • In this paper, we propose a single-view technique for estimating human height and position. Conventional techniques for estimating 3D geometric information rely on geometric cues such as the vanishing point and vanishing line. The proposed technique, in contrast, back-projects the image of the moving object directly and estimates the object's position and height in a 3D space whose coordinate system is designated by a marker. Geometric errors are then corrected using the geometric constraints provided by the marker. Unlike most conventional techniques, the proposed method offers a framework for simultaneously acquiring the height and position of each person in the image. The accuracy and robustness of the technique are verified on several real video sequences captured in outdoor environments.
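
The position part of such an estimation can be illustrated with a planar homography: four known marker corners on the ground define a mapping from image pixels to world ground coordinates, and the foot point of a detected person is back-projected through it. The sketch below uses OpenCV's homography routines with hypothetical marker coordinates; the height estimation and the marker-based error correction described in the paper are not reproduced here.

```python
import numpy as np
import cv2

# Hypothetical ground marker: a 1 m x 1 m square and its tracked image corners.
world_corners = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32)
image_corners = np.array([[210, 380], [420, 370], [460, 520], [180, 535]],
                         dtype=np.float32)

# Homography mapping image pixels on the ground plane to world metres.
H, _ = cv2.findHomography(image_corners, world_corners)

def image_to_ground(pt_px):
    """Back-project an image point assumed to lie on the ground plane."""
    p = np.array([[pt_px]], dtype=np.float32)      # shape (1, 1, 2) as OpenCV expects
    return cv2.perspectiveTransform(p, H)[0, 0]

foot_px = (335.0, 600.0)                 # detected foot point of a pedestrian
print("ground position [m]:", image_to_ground(foot_px))
```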

Comparison of Dynamic Knee Valgus During Single-leg Step Down Between People With and Without Pronated Foot Using Two-dimensional Video Analysis

  • Kim, Hyun-sook;Yoo, Hwa-ik;Hwang, Ui-jae;Kwon, Oh-yun
    • Physical Therapy Korea
    • /
    • v.28 no.4
    • /
    • pp.266-272
    • /
    • 2021
  • Background: Considering the kinetic chain of the lower extremity, a pronated foot position (PFP) can contribute to malalignment of the lower extremity, such as dynamic knee valgus (DKV). Although DKV during several single-leg movement tests has been investigated, no studies have compared differences in DKV during a single-leg step down (SLSD) between subjects with and without PFP. Objects: The purpose of this study was to compare DKV during SLSD between subjects with and without PFP. Methods: Twelve subjects with PFP (9 men, 3 women) and 15 subjects without PFP (12 men, 3 women) participated in this study. To quantify DKV, the frontal plane projection angle (FPPA), knee-in distance (KID), and hip-out distance (HOD) during SLSD were analyzed with two-dimensional video analysis software (Kinovea). Results: The FPPA was significantly lower in the PFP group than in the control group (166.4° ± 7.5° vs 174.5° ± 5.5°, p < 0.05), and the KID was significantly greater in the PFP group (12.7 ± 3.9 cm vs 7.3 ± 2.4 cm, p < 0.05). However, the HOD did not differ significantly between the two groups (12.7 ± 1.7 cm vs 11.4 ± 2.5 cm, p > 0.05). Conclusion: PFP is associated with a lower FPPA and a greater KID. When assessing DKV during SLSD, PFP should be considered a crucial factor in the occurrence of DKV.
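
The frontal plane projection angle can be computed from three digitized 2D landmarks (hip, knee, ankle) as the angle at the knee between the thigh and shank vectors, which is the quantity video tools such as Kinovea report. The sketch below reproduces that geometry with made-up pixel coordinates; it is not Kinovea's code and the landmark values are not study data.

```python
import numpy as np

def fppa_deg(hip, knee, ankle):
    """Frontal plane projection angle in degrees: the angle at the knee
    between the knee->hip and knee->ankle vectors from a frontal 2D frame."""
    hip, knee, ankle = (np.asarray(p, dtype=float) for p in (hip, knee, ankle))
    v_thigh = hip - knee
    v_shank = ankle - knee
    cos_a = np.dot(v_thigh, v_shank) / (np.linalg.norm(v_thigh) * np.linalg.norm(v_shank))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Illustrative pixel landmarks (not study data): a valgus knee drifts medially,
# pulling the angle below the ~180 degrees of a perfectly straight limb.
print(round(fppa_deg(hip=(300, 100), knee=(315, 250), ankle=(305, 400)), 1))
```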