• Title/Summary/Keyword: 3D Human Pose

74 search results

A Study on the Estimation of Multi-Object Social Distancing Using Stereo Vision and AlphaPose (Stereo Vision과 AlphaPose를 이용한 다중 객체 거리 추정 방법에 관한 연구)

  • Lee, Ju-Min;Bae, Hyeon-Jae;Jang, Gyu-Jin;Kim, Jin-Pyeong
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.279-286 / 2021
  • Recently, we have been carrying out a policy of physical distancing of at least 1 m in public places to prevent the spread of COVID-19. In this paper, we propose a method for measuring distances between people in real time, together with an automation system that recognizes objects within 1 m of each other in stereo images acquired by drones or CCTVs. A problem with existing methods for estimating distances between multiple objects is that they cannot obtain three-dimensional information about the objects from a single CCTV. This is because three-dimensional information is necessary to measure distances between people who stand right next to each other or overlap in a two-dimensional image. Furthermore, existing methods use only bounding-box information to locate a person. Therefore, to obtain the exact two-dimensional coordinates at which a person exists, we extract the person's keypoints to detect the location, convert them to three-dimensional coordinates using stereo vision and camera calibration, and estimate the Euclidean distance between people. In an experiment evaluating the accuracy of the 3D coordinates and the distances between objects (persons), the estimated distances between multiple people within 1 m showed an average error below 0.098 m.
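The pipeline described above — keypoint plus stereo disparity to 3D coordinates, then pairwise Euclidean distances — can be sketched as follows. This is a minimal illustration assuming a rectified stereo pair with known intrinsics; the function names, parameter names, and the 1 m threshold default are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def keypoint_to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    """Triangulate a 2D keypoint (u, v) with stereo disparity into
    3D camera coordinates (metres), assuming a rectified stereo pair."""
    z = fx * baseline / disparity      # depth from disparity
    x = (u - cx) * z / fx              # back-project pixel to camera X
    y = (v - cy) * z / fy              # back-project pixel to camera Y
    return np.array([x, y, z])

def pairwise_violations(points_3d, threshold=1.0):
    """Return (i, j, distance) for every pair of people whose 3D
    keypoints are closer than the threshold (metres)."""
    flagged = []
    for i in range(len(points_3d)):
        for j in range(i + 1, len(points_3d)):
            d = float(np.linalg.norm(points_3d[i] - points_3d[j]))
            if d < threshold:
                flagged.append((i, j, d))
    return flagged
```

In practice the keypoint (u, v) would come from a pose estimator such as AlphaPose, and fx, fy, cx, cy, and the baseline from camera calibration.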

Survey and Phylogenetic Analysis of Rodents and Important Rodent-Borne Zoonotic Pathogens in Gedu, Bhutan

  • Phuentshok, Yoenten;Dorji, Kezang;Zangpo, Tandin;Davidson, Silas A.;Takhampunya, Ratree;Tenzinla, Tenzinla;Dorjee, Chencho;Morris, Roger S.;Jolly, Peter D.;Dorjee, Sithar;McKenzie, Joanna S.
    • Parasites, Hosts and Diseases / v.56 no.5 / pp.521-525 / 2018
  • Rodents are well-known reservoirs and vectors of many emerging and re-emerging infectious diseases, but little is known about their role in zoonotic disease transmission in Bhutan. In this study, a cross-sectional investigation of zoonotic disease pathogens in rodents was performed in Chukha district, Bhutan, where a high incidence of scrub typhus and cases of acute undifferentiated febrile illness had been reported in people during the preceding 4-6 months. Twelve rodents were trapped alive using wire-mesh traps. Following euthanasia, liver and kidney tissues were removed and tested using PCR for Orientia tsutsugamushi and other bacterial and rickettsial pathogens causing bartonellosis, borreliosis, human monocytic ehrlichiosis, human granulocytic anaplasmosis, leptospirosis, and rickettsiosis. A phylogenetic analysis was performed on all rodent species captured and pathogens detected. Four out of the 12 rodents (33.3%) tested positive by PCR for zoonotic pathogens. Anaplasma phagocytophilum, Bartonella grahamii, and B. queenslandensis were identified for the first time in Bhutan. Leptospira interrogans was also detected for the first time from rodents in Bhutan. The findings demonstrate the presence of these zoonotic pathogens in rodents in Bhutan, which may pose a risk of disease transmission to humans.

Character Motion Control by Using Limited Sensors and Animation Data (제한된 모션 센서와 애니메이션 데이터를 이용한 캐릭터 동작 제어)

  • Bae, Tae Sung;Lee, Eun Ji;Kim, Ha Eun;Park, Minji;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.85-92 / 2019
  • A 3D virtual character playing a role in digital storytelling has a unique style in its appearance and motion. Because this style reflects the character's personality, it is very important to preserve it and keep it consistent. However, when the character's motion is directly controlled by the motion of a user wearing motion sensors, that unique style can be lost. We present a novel character motion control method that preserves the character's motion style by using only a small amount of animation data created specifically for that character. Instead of machine learning approaches, which require a large amount of training data, we suggest a search-based method that directly retrieves from the animation data the character pose most similar to the user's current pose. To show the usability of our method, we conducted experiments with a character model and animation data created by an expert designer for a virtual reality game. To demonstrate that our method preserves the character's original motion style, we compared our results with results obtained using general human motion capture data. In addition, to show the scalability of our method, we present experimental results with different numbers of motion sensors.
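The core of such a search-based approach is a nearest-neighbour lookup over the character's animation poses. The sketch below uses plain Euclidean distance over flattened pose vectors as the similarity measure; the actual pose representation and metric in the paper may differ, and the function name is hypothetical.

```python
import numpy as np

def nearest_pose(user_pose, animation_poses):
    """Return the index of the animation pose most similar to the
    user's current pose.

    user_pose:        flat array of joint features, shape (d,)
    animation_poses:  database of character poses, shape (n, d)
    """
    # Euclidean distance from the user's pose to every database pose
    dists = np.linalg.norm(animation_poses - user_pose, axis=1)
    return int(np.argmin(dists))
```

At runtime the retrieved pose (or a blend of the top few matches) would drive the character, so every displayed pose comes from the designer-authored data and keeps its style.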

Real-Time Joint Animation Production and Expression System using Deep Learning Model and Kinect Camera (딥러닝 모델과 Kinect 카메라를 이용한 실시간 관절 애니메이션 제작 및 표출 시스템 구축에 관한 연구)

  • Kim, Sang-Joon;Lee, Yu-Jin;Park, Goo-man
    • Journal of Broadcast Engineering / v.26 no.3 / pp.269-282 / 2021
  • As the distribution of 3D content such as augmented reality and virtual reality increases, real-time computer animation technology is becoming more important. However, the computer animation process still consists mostly of manual work or marker-based motion capture, which requires experienced professionals and a very long time to obtain realistic results. To address these problems, animation production systems and algorithms based on deep learning models and sensors have recently emerged. In this paper, we study four methods of capturing natural human movement in a deep learning model and Kinect camera-based animation production system, each chosen for its environmental characteristics and accuracy. The first method uses a Kinect camera alone; the second uses a Kinect camera with a calibration algorithm; the third uses a deep learning model alone; and the fourth uses a deep learning model together with a Kinect camera. Our experiments showed that the fourth method, using the deep learning model and the Kinect simultaneously, produced the best results.
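The abstract does not say how the fourth method combines the two sources, so the sketch below is only one plausible scheme: a per-joint confidence-weighted blend of the Kinect skeleton and the deep-learning pose estimate. All names and the blending rule are assumptions for illustration, not the paper's method.

```python
import numpy as np

def fuse_joints(kinect_joints, model_joints, kinect_conf, model_conf):
    """Blend per-joint 3D positions from a Kinect skeleton and a
    deep-learning pose estimate, weighting each joint by its
    source confidence.

    kinect_joints, model_joints: arrays of shape (n_joints, 3)
    kinect_conf, model_conf:     per-joint confidences, shape (n_joints,)
    """
    # Normalised weight for the Kinect estimate at each joint
    w_k = kinect_conf / (kinect_conf + model_conf)
    return w_k[:, None] * kinect_joints + (1.0 - w_k)[:, None] * model_joints
```

When one source is occluded or unreliable for a joint, its confidence drops and the fused result leans on the other source.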