Title/Summary/Keyword: 2D Video


QoS of MPEG-2 Video under Cell Loss Condition (셀손실에 따른 MPEG-2 비디오의 서비스 품질)

  • Han, Jong-Seok;Choi, Jae-Hyoung;Kim, Jee-Joong
    • Journal of the Korean Institute of Telematics and Electronics S, v.36S no.8, pp.30-38, 1999
  • When providing MPEG-2 video service over an ATM network, the network provider must set ATM QoS objectives that guarantee the quality of service required by end users at the application layer. In this paper, the QoS degradation caused by cell losses is assessed quantitatively at the application layer with a GIQ model that takes the AAL layer into account, and is evaluated qualitatively by MOS from the end user's viewpoint, in order to analyze the relation between the cell loss ratio (CLR) and the QoS of MPEG-2 video. From the simulation and empirical results, the conditions for guaranteeing QoS of MOS grade 5 (Excellent) are CLR ≤ 4×10⁻⁷ and GIQ_mean ≥ 99.4%, and those for guaranteeing QoS of MOS grade 4 (Good) are CLR ≤ 2×10⁻⁶ and GIQ_mean ≥ 99.705%.
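
A minimal sketch of how the reported thresholds could be applied, assuming a simple grade-lookup function (the function name and interface are illustrative, not from the paper):

```python
# Map a measured cell loss ratio (CLR) and mean GIQ to the MOS grades
# whose thresholds are reported in the abstract above. Illustrative only.

def mos_grade(clr: float, giq_mean_pct: float) -> str:
    """Return the MOS grade guaranteed by the reported thresholds."""
    if clr <= 4e-7 and giq_mean_pct >= 99.4:
        return "5 (Excellent)"
    if clr <= 2e-6 and giq_mean_pct >= 99.705:
        return "4 (Good)"
    return "below grade 4"

print(mos_grade(3e-7, 99.5))   # -> 5 (Excellent)
print(mos_grade(1e-6, 99.8))   # -> 4 (Good)
```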


Smart Camera Technology to Support High Speed Video Processing in Vehicular Network (차량 네트워크에서 고속 영상처리 기반 스마트 카메라 기술)

  • Son, Sanghyun;Kim, Taewook;Jeon, Yongsu;Baek, Yunju
    • The Journal of Korean Institute of Communications and Information Sciences, v.40 no.1, pp.152-164, 2015
  • The rapid development of semiconductor, sensor, and mobile network technologies has made it possible for embedded devices in the vehicular environment to carry high-sensitivity sensors, wireless communication modules, and a video processing module, and many researchers are actively studying smart car technology built on such high-performance embedded devices. As society develops, the number of vehicles increases and the risk of accidents grows accordingly. Advanced driver assistance systems, which use various sensor data to inform the driver of the vehicle's status and surroundings, are therefore actively studied. In this paper, we design and implement a smart vehicular camera device that provides V2X communication and gathers environment information, and we study a method to create metadata from the received video and sensor data using a video analysis algorithm. In addition, we introduce the S-ROI and D-ROI methods, which set a region of interest in a video frame to improve computational performance. A performance evaluation of the two ROI methods confirmed video processing speeds 3.0 times (S-ROI) and 4.8 times (D-ROI) faster than full-frame analysis.
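
One plausible reading of the S-ROI/D-ROI idea, sketched below under the assumption that S-ROI analyzes a fixed sub-region and D-ROI a region around the previous detection (all names and the analysis stand-in are illustrative, not the authors' implementation):

```python
import numpy as np

def analyze(patch: np.ndarray):
    """Stand-in for the paper's video analysis step."""
    h, w = patch.shape[:2]
    return (0, 0, w, h)  # dummy bounding box

def s_roi(frame: np.ndarray, roi):
    # Static ROI: always analyze the same fixed sub-region of the frame.
    x, y, w, h = roi
    return analyze(frame[y:y + h, x:x + w])

def d_roi(frame: np.ndarray, last_bbox, margin=32):
    # Dynamic ROI: analyze only a margin around the last detection,
    # shrinking the processed area further than a static ROI would.
    x, y, w, h = last_bbox
    y0, y1 = max(0, y - margin), min(frame.shape[0], y + h + margin)
    x0, x1 = max(0, x - margin), min(frame.shape[1], x + w + margin)
    return analyze(frame[y0:y1, x0:x1])
```

Processing fewer pixels per frame is what would account for speedups like the reported 3.0x and 4.8x.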

MPEG-DASH based 3D Point Cloud Content Configuration Method (MPEG-DASH 기반 3차원 포인트 클라우드 콘텐츠 구성 방안)

  • Kim, Doohwan;Im, Jiheon;Kim, Kyuheon
    • Journal of Broadcast Engineering, v.24 no.4, pp.660-669, 2019
  • Recently, with the development of three-dimensional scanning devices and multi-dimensional camera arrays, techniques for handling three-dimensional data are continuously being researched in application fields such as AR (Augmented Reality) / VR (Virtual Reality) and autonomous driving. In particular, content that expresses 3D video as point data has appeared in the AR/VR field, but it requires a much larger amount of data than conventional 2D video. Serving 3D point cloud content to users therefore requires technologies such as highly efficient encoding/decoding, storage, and transfer. In this paper, the V-PCC bitstream created with the V-PCC encoder proposed by the MPEG-I (MPEG-Immersive) V-PCC (Video-based Point Cloud Compression) group is organized into segments as defined by the MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard. In addition, a depth information parameter is defined in the signaling message to provide the user with 3D coordinate system information. We then design a verification platform and use it to confirm the algorithm of the proposed technology.
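
A rough sketch of the segment-composition step, assuming a fixed segment size and a simple manifest dictionary standing in for the MPEG-DASH MPD (the segment naming, manifest layout, and depth fields are assumptions):

```python
SEGMENT_BYTES = 1_000_000  # illustrative segment size

def segment_vpcc(bitstream: bytes, depth_near: float, depth_far: float):
    """Split an encoded V-PCC bitstream into DASH-style segments and
    record a depth parameter alongside them, in the spirit of the
    signaling message described above."""
    segments = [bitstream[i:i + SEGMENT_BYTES]
                for i in range(0, len(bitstream), SEGMENT_BYTES)]
    manifest = {
        "codec": "vpcc",
        "depth_info": {"near": depth_near, "far": depth_far},
        "segments": [f"seg_{n}.bin" for n in range(len(segments))],
    }
    return manifest, segments
```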

Vanishing point-based 3D object detection method for improving traffic object recognition accuracy

  • Park, Jeong-In
    • Journal of the Korea Society of Computer and Information, v.28 no.1, pp.93-101, 2023
  • In this paper, we propose a method of creating a 3D bounding box for an object using vanishing points, to increase the accuracy of object recognition when recognizing traffic objects in video camera images. The 3D bounding box generation algorithm is applied when vehicles captured by a traffic video camera are to be detected using artificial intelligence. The vertical vanishing point (VP1) and horizontal vanishing point (VP2) are derived by analyzing the camera installation angle and the direction of the captured image, and the moving objects in the video under analysis are specified based on them. With this algorithm, object information such as the location, type, and size of a detected object is easy to obtain, and a moving object such as a car can be tracked to determine its location, coordinates, movement speed, and direction. When applied to actual roads, tracking improved by 10%; in particular, the recognition rate and tracking of shaded regions (vehicle parts almost entirely hidden by large cars) improved by 100%, and the accuracy of traffic data analysis improved.
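
A minimal sketch of one vanishing-point construction, assuming pixel-space vanishing points and using homogeneous line intersections to locate a hidden box corner (the coordinates and steps are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def line(p, q):
    """Homogeneous line through two 2D points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines as a 2D point."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

# 2D detection box corners and two assumed vanishing points (pixels)
tl, tr = (100, 100), (200, 100)
vp1 = (150.0, -5000.0)  # vertical vanishing point (VP1)
vp2 = (4000.0, 120.0)   # horizontal vanishing point (VP2)

# Candidate 3D-box corner: the vertical edge through tr meets the
# horizontal edge through tl.
corner = intersect(line(tr, vp1), line(tl, vp2))
print(corner)
```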

8-bit 10-MHz A/D Converter for Video Signal Processing (영상 신호 처리용 8-bit 10-MHz A/D 변환기)

  • Park Chang-Sun;Son Ju-Ho;Lee Jun-Ho;Kim Chong-Min;Kim Dong-Yong
    • Proceedings of the Acoustical Society of Korea Conference, autumn, pp.173-176, 1999
  • In this work, an A/D converter achieving 8-bit resolution at a conversion rate of 10 Msample/s is implemented for video applications. The proposed architecture is a low-power pipelined A/D converter built from flash sub-converters. It consists of two identical stages, each comprising a sample/hold circuit, a low-power comparator, a voltage reference circuit, and an MDAC with a binary-weighted capacitor array. The converter is designed in 0.25 μm CMOS technology. The SNR is 76.3 dB at a sampling rate of 10 MHz with a 3.9 MHz sine input signal. In simulation of the 8-bit 10 Msample/s A/D converter, the Differential Nonlinearity / Integral Nonlinearity (DNL/INL) errors are ±0.5 / ±2 LSB, respectively, and the power consumption is 13 mW at 10 Msample/s.
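
A behavioral sketch of the two-stage pipeline described above (flash sub-ADC plus MDAC residue amplification per stage); the stage resolution, full-scale range, and absence of circuit non-idealities are assumptions:

```python
def flash(v: float, bits: int, vref: float = 1.0) -> int:
    """Ideal flash sub-ADC: quantize v in [0, vref) to a bits-bit code."""
    code = int(v / vref * (1 << bits))
    return max(0, min((1 << bits) - 1, code))

def pipeline_adc(v: float, stage_bits: int = 4, stages: int = 2,
                 vref: float = 1.0) -> int:
    """Concatenate per-stage codes; the MDAC scales the residue by 2**bits."""
    code = 0
    for _ in range(stages):
        d = flash(v, stage_bits, vref)
        code = (code << stage_bits) | d
        v = (v - d * vref / (1 << stage_bits)) * (1 << stage_bits)  # residue
    return code

print(pipeline_adc(0.3))  # 0.3 V on a 1 V scale -> code 76 of 255
```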


A Study on Fingerprinting Robustness Indicators for Immersive 360-degree Video (실감형 360도 영상 특징점 기술 강인성 지표에 관한 연구)

  • Kim, Youngmo;Park, Byeongchan;Jang, Seyoung;Yoo, Injae;Lee, Jaechung;Kim, Seok-Yoon
    • Journal of IKEEE, v.24 no.3, pp.743-753, 2020
  • In this paper, we propose a set of robustness indicators for immersive 360-degree video. With the full-fledged service of mobile carriers' 5G networks, large-capacity immersive 360-degree videos can be used at high speed anytime, anywhere. However, since such content can be illegally distributed on web-hard services and torrents after DRM dismantling and various video modifications, evaluation indicators are required that can objectively evaluate filtering performance for copyright protection. The proposed indicators extend the existing 2D video robustness indicators and take into account the projection and playback methods characteristic of immersive 360-degree video. A performance evaluation carried out on a sample filtering system verified an excellent recognition rate of 95% or more at an execution time of about 3 seconds.
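
The general shape of such a robustness test, sketched with a generic average-hash fingerprint as a stand-in for the paper's feature extractor (the attack, threshold, and hash are all placeholder assumptions):

```python
import numpy as np

def fingerprint(frame: np.ndarray) -> np.ndarray:
    """Toy average-hash: downsample to 8x8, threshold at the mean."""
    small = frame[::frame.shape[0] // 8, ::frame.shape[1] // 8][:8, :8]
    return (small > small.mean()).flatten()

def match(fp1: np.ndarray, fp2: np.ndarray, threshold: float = 0.9) -> bool:
    return np.mean(fp1 == fp2) >= threshold

rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(100)]
modified = [np.clip(f * 1.2, 0, 1) for f in frames]  # brightness change
rate = np.mean([match(fingerprint(a), fingerprint(b))
                for a, b in zip(frames, modified)])
print(f"recognition rate under modification: {rate:.0%}")
```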

H.264 Encoding Technique of Multi-view Video expressed by Layered Depth Image (계층적 깊이 영상으로 표현된 다시점 비디오에 대한 H.264 부호화 기술)

  • Shin, Jong-Hong;Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.14 no.2, pp.43-51, 2014
  • Because multi-view video including depth images involves a huge amount of data, a new compression encoding technique is necessary for storage and transmission. The layered depth image is an efficient representation of multi-view video data: it builds a single data structure by synthesizing the multi-view color and depth images. This paper proposes an enhanced compression method that uses the layered depth image representation, constructed through 3D warping, together with H.264/AVC video coding. Experimental results confirm high compression performance and good quality of the reconstructed images.
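
A minimal sketch of the layered depth image structure itself, assuming per-pixel lists of (color, depth) layers (field and class names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class LDIPixel:
    # Each pixel holds several (color, depth) layers gathered from
    # different viewpoints along the same ray.
    layers: list = field(default_factory=list)

    def insert(self, color, depth: float):
        # Keep layers sorted front-to-back so the nearest surface is first.
        self.layers.append((color, depth))
        self.layers.sort(key=lambda layer: layer[1])

@dataclass
class LayeredDepthImage:
    width: int
    height: int
    pixels: list = field(init=False)

    def __post_init__(self):
        self.pixels = [[LDIPixel() for _ in range(self.width)]
                       for _ in range(self.height)]

ldi = LayeredDepthImage(4, 4)
ldi.pixels[0][0].insert((255, 0, 0), depth=2.0)  # surface seen from view A
ldi.pixels[0][0].insert((250, 5, 0), depth=1.5)  # nearer surface, same ray
```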

Layered Depth Image Representation and H.264 Encoding of Multi-view Video for Free-viewpoint TV (자유시점 TV를 위한 다시점 비디오의 계층적 깊이 영상 표현과 H.264 부호화)

  • Shin, Jong Hong
    • Journal of Korea Society of Digital Industry and Information Management, v.7 no.2, pp.91-100, 2011
  • Free-viewpoint TV provides images from the viewing angles that viewers want. In the real world, however, not every viewing angle can be captured by a camera; only a limited number of view points are captured, and the group of captured images is called a multi-view image. Free-viewpoint TV therefore needs to produce virtual intermediate view-point images from the captured ones, and interpolation methods are the general solution to this problem. Producing an interpolated image at the correct angle requires the depth images of the multi-view image. Unfortunately, multi-view video including depth images involves a huge amount of data, so a new compression encoding technique is necessary for storage and transmission. The layered depth image is an efficient representation of multi-view video data that synthesizes the multi-view color and depth images into a single data structure. This paper proposes an enhanced compression method using the layered depth image representation and H.264/AVC video coding. Experimental results confirm high compression performance and good quality of the reconstructed images.
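
A small sketch of the 3D warping step that merges views, assuming made-up intrinsics K and pose (R, t): back-project a pixel through the source camera using its depth, then re-project into the target camera.

```python
import numpy as np

K = np.array([[500.0, 0, 320],
              [0, 500.0, 240],
              [0, 0, 1]])         # assumed camera intrinsics
R = np.eye(3)                     # assumed rotation between views
t = np.array([0.1, 0.0, 0.0])     # assumed 10 cm horizontal baseline

def warp_pixel(u: float, v: float, depth: float):
    """Map a source pixel (u, v) with depth to target image coordinates."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project to a ray
    X = ray * depth                                 # 3D point, source frame
    x = K @ (R @ X + t)                             # project into target view
    return x[:2] / x[2]

print(warp_pixel(320, 240, depth=2.0))  # parallax shift = f*t/Z = 25 px
```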

Face Tracking and Recognition in Video with PCA-based Pose Classification and (2D)²PCA Recognition Algorithm (비디오속의 얼굴추적 및 PCA기반 얼굴포즈분류와 (2D)2PCA를 이용한 얼굴인식)

  • Kim, Jin-Yul;Kim, Yong-Seok
    • Journal of the Korean Institute of Intelligent Systems, v.23 no.5, pp.423-430, 2013
  • In typical face recognition systems, the frontal view of the face is preferred because it reduces the complexity of recognition. Individuals may thus be required to stare into the camera, or the camera must be located so that frontal images are easily acquired. These constraints severely restrict the adoption of face recognition in wider applications. To alleviate this problem, we address the tracking and recognition of faces in video captured with no environmental control. The face tracker extracts a sequence of angle/size-normalized face images using the IVT (Incremental Visual Tracking) algorithm, which is known to be robust to appearance changes. Since no constraint is imposed between the face direction and the video camera, the face images contain various poses. The pose is therefore identified using a PCA (Principal Component Analysis)-based pose classifier, and only pose-matched face images are used to identify a person against a pre-built face DB with five poses. For face recognition, the PCA, (2D)PCA, and (2D)²PCA algorithms are tested to compare recognition rate and execution time.
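
A compact sketch of (2D)²PCA on raw image matrices, assuming numpy and random placeholder images (the pose-specific databases and matching rule from the paper are omitted):

```python
import numpy as np

def twod2pca(images, kr=5, kc=5):
    """Learn row/column projections Z, V; features are Z.T @ A @ V."""
    A = np.stack(images).astype(float)        # (n, h, w)
    D = A - A.mean(axis=0)
    G_col = sum(d.T @ d for d in D)           # (w, w) column scatter
    G_row = sum(d @ d.T for d in D)           # (h, h) row scatter
    V = np.linalg.eigh(G_col)[1][:, -kc:]     # top-kc column eigenvectors
    Z = np.linalg.eigh(G_row)[1][:, -kr:]     # top-kr row eigenvectors
    feats = [Z.T @ d @ V for d in D]          # small (kr, kc) feature matrix
    return Z, V, feats

rng = np.random.default_rng(0)
imgs = [rng.random((32, 32)) for _ in range(20)]
Z, V, feats = twod2pca(imgs)
print(feats[0].shape)  # (5, 5): far smaller than the 32x32 input
```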

Case Study on the Physical Characteristics of Precipitation using 2D-Video Distrometer (2D-Video Distrometer를 이용한 강수의 물리적 특성에 관한 사례연구)

  • Park, Jong-Kil;Cheon, Eun-Ji;Jung, Woo-Sik
    • Journal of Environmental Science International, v.25 no.3, pp.345-359, 2016
  • This study analyzes the synoptic meteorological causes of rainfall, together with the rainfall intensity, drop size distribution (DSD), fall velocity, and oblateness measured by a 2D-Video Distrometer (2DVD), by comparing two cases in July 2014 at the Gimhae region: a heavy rainfall event, and a case not classified as heavy rainfall but reaching a rain rate above 30 mm h⁻¹. The results are as follows. Over the edge of a high-pressure area, where strong upward motion exists, a convective rain type occurred, and near the Changma front a combined convective and frontal rain type occurred, so the rain rate varies with the synoptic meteorological conditions. Most raindrops had diameters between 0.4 mm and 0.6 mm, and large particles appeared in the convective rain type, since strong upward motion provides favorable conditions for drops to grow by collision and coalescence; the drop size distribution therefore varies with location and rainfall type. The precipitation phase was mainly rain, and as raindrop diameter increases, the fall velocity increases and the oblateness decreases. The previously proposed 2DVD-based equations tend to underestimate both fall velocity and oblateness compared with the observations. Since these relations vary with the rainfall characteristics of the observation site, standard fall velocity and oblateness equations suited to the Gimhae area can be developed through continued observation and data collection.
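
For context, a short sketch comparing measured fall velocities with the widely used Atlas et al. (1973) relation v(D) = 9.65 - 10.3·exp(-0.6·D) (D in mm, v in m/s); the measured values below are placeholders, not data from this study:

```python
import numpy as np

def atlas_velocity(d_mm):
    """Atlas et al. (1973) raindrop fall velocity (m/s), D in mm."""
    return 9.65 - 10.3 * np.exp(-0.6 * np.asarray(d_mm))

d = np.array([0.5, 1.0, 2.0, 4.0])            # drop diameters (mm)
v_measured = np.array([2.2, 4.1, 6.8, 8.9])   # placeholder 2DVD observations
bias = v_measured - atlas_velocity(d)
print(np.round(bias, 2))  # positive bias => the reference equation underestimates
```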