• Title/Summary/Keyword: Scene movement

Search Result 114, Processing Time 0.03 seconds

Parallel Dense Merging Network with Dilated Convolutions for Semantic Segmentation of Sports Movement Scene

  • Huang, Dongya;Zhang, Li
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.11
    • /
    • pp.3493-3506
    • /
    • 2022
  • In the field of scene segmentation, precisely segmenting object boundaries in sports movement scene images is a major challenge. The geometric and spatial information of an image is very important, but many models tend to lose it, which significantly degrades performance. To alleviate this problem, a Parallel Dense Dilated Convolution Merging Network (PDDCM-Net) is proposed. The proposed PDDCM-Net consists of a feature extractor, parallel dilated convolutions, and dense dilated convolutions merged with different dilation rates. We utilize different combinations of dilated convolutions that expand the model's receptive field with fewer parameters than other advanced methods. Importantly, PDDCM-Net fuses both low-level and high-level information, which helps it segment object edges and localize objects accurately. Experimental results validate that the proposed PDDCM-Net achieves a substantial improvement over several representative models on the COCO-Stuff data set.
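The receptive-field economy of dilated convolutions mentioned in this abstract follows from the standard effective-kernel formula; a minimal sketch, assuming a 3×3 kernel and illustrative dilation rates 1, 2, 4 (not values taken from the paper):

```python
def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation rate d:
    the k taps are spread d pixels apart, giving k + (k - 1) * (d - 1)."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    """Receptive field of a stride-1 stack of (kernel, dilation) layers;
    each layer widens the field by its effective kernel size minus one."""
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf

# Three stacked 3x3 layers with dilation rates 1, 2, 4 cover a 15-pixel
# receptive field while keeping the parameter count of plain 3x3 kernels.
rf = receptive_field([(3, 1), (3, 2), (3, 4)])
```

This is why combining several dilation rates, in parallel or densely, enlarges the context seen by the network without adding parameters.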

A Scene-Specific Object Detection System Utilizing the Advantages of Fixed-Location Cameras

  • Jin Ho Lee;In Su Kim;Hector Acosta;Hyeong Bok Kim;Seung Won Lee;Soon Ki Jung
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.4
    • /
    • pp.329-336
    • /
    • 2023
  • This paper introduces an edge AI-based scene-specific object detection system for long-term traffic management, focusing on analyzing congestion and movement via cameras. It aims to balance fast processing and accuracy in traffic flow data analysis using edge computing. We adapt the YOLOv5 model, with four heads, to a scene-specific model that utilizes the fixed camera's scene-specific properties. This model selectively detects objects based on scale by blocking nodes, ensuring only objects of certain sizes are identified. A decision module then selects the most suitable object detector for each scene, enhancing inference speed without significant accuracy loss, as demonstrated in our experiments.
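The scale-based selection idea described above can be sketched as follows. This is a hypothetical illustration: the paper only states that nodes are blocked so that objects of certain sizes are skipped; the four head names, the pixel bands, and the decision function are all illustrative assumptions.

```python
# Hypothetical sketch: each detection head covers an object-size band, and
# a decision step keeps only the heads a given fixed-camera scene needs,
# so the remaining heads can be blocked to speed up inference.
HEADS = {
    "tiny":   (0, 16),        # object-size band in pixels (illustrative)
    "small":  (16, 48),
    "medium": (48, 128),
    "large":  (128, 10_000),
}

def select_heads(observed_sizes):
    """Return the set of heads needed for the object sizes this scene shows."""
    needed = set()
    for size in observed_sizes:
        for name, (lo, hi) in HEADS.items():
            if lo <= size < hi:
                needed.add(name)
    return needed

keep = select_heads([12, 30, 35])   # a scene with only distant, small objects
```

For a camera that only ever sees distant objects, only the fine-scale heads survive, which is the source of the inference-speed gain.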

MPEG Video Segmentation Using Frame Feature Comparison (프레임 특징 비교를 이용한 압축비디오 분할)

  • 김영호;강대성
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.2
    • /
    • pp.25-30
    • /
    • 2003
  • Recently, digital technology has come to handle a large share of multimedia information such as text, voice, image, and video. Research on video indexing and retrieval has progressed especially among video-related work. In this paper, we propose a new frame-feature-comparison algorithm for MPEG video segmentation. Shot and scene-change detection are basic and important steps in segmenting an MPEG video sequence. Conventional segmentation algorithms, which compare previous frames with the present frame, tend to produce false detections under camera flashes, camera movement, and fast object movement. Therefore, we verify each scene-change point detected by the conventional algorithm once more by comparing its mean value with those of adjacent frames. As a result, we could detect scene changes more accurately than the conventional algorithm.
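The two-stage verification idea can be sketched on frame mean values; a minimal illustration, assuming frames are flat lists of pixel intensities and illustrative thresholds (the paper's actual features and thresholds are not given in the abstract):

```python
def frame_means(frames):
    """Mean intensity of each frame; a frame is a flat list of pixel values."""
    return [sum(f) / len(f) for f in frames]

def scene_changes(frames, t1=30.0, t2=15.0):
    """Two-stage sketch: flag a candidate cut when consecutive frame means
    differ by more than t1, then keep it only when the means on either side
    of the cut are stable (within t2).  A one-frame camera flash or fast
    motion disturbs a neighbouring comparison as well, so it is rejected.
    Thresholds t1 and t2 are illustrative, not taken from the paper."""
    means = frame_means(frames)
    cuts = []
    for i in range(2, len(means) - 1):
        jump = abs(means[i] - means[i - 1]) > t1
        stable_before = abs(means[i - 1] - means[i - 2]) <= t2
        stable_after = abs(means[i] - means[i + 1]) <= t2
        if jump and stable_before and stable_after:
            cuts.append(i)
    return cuts

# A clean cut at frame 3 is kept; a one-frame flash is rejected.
cut = scene_changes([[10] * 4] * 3 + [[50] * 4] * 3)
flash = scene_changes([[10] * 4, [10] * 4, [60] * 4, [10] * 4, [10] * 4])
```

The second comparison against the adjacent frames is what suppresses the flash and fast-movement false positives that the plain previous-vs-present comparison produces.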

Study on Simulator Sickness Measure on Scene Movement Based Ship Handling Simulator Using SSQ and COP (시각적 동요 기반 선박운항 시뮬레이터에서 SSQ와 COP를 이용한 시뮬레이터 멀미 계측에 관한 연구)

  • Fang, Tae-Hyun;Jang, Jun-Hyuk;Oh, Seung-Bin;Kim, Hong-Tae
    • Journal of Navigation and Port Research
    • /
    • v.38 no.5
    • /
    • pp.485-491
    • /
    • 2014
  • In this paper, we propose that the effects of simulator sickness due to scene movement in a ship handling simulator can be measured using the center of pressure (COP) and a simulator sickness questionnaire (SSQ). In the experiments, twelve participants were exposed to scene movement from a ship handling simulator at three levels of sea state. During the experiments, each subject's COP was measured with a force plate. After exposure to the scene movement, subjects described their sickness symptoms by answering the SSQ. By analysing the scene movement, SSQ, and COP results, the relation between simulator sickness and COP is investigated. Formulations for the SSQ score and COP with respect to sea state are obtained by curve fitting, and the longitudinal COP can be used to measure simulator sickness.
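The curve-fitting step can be illustrated with an ordinary least-squares fit; a minimal sketch in pure Python, assuming a linear form and made-up data, since the abstract gives neither the functional form nor the measured values:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a * x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Illustrative only: sea states 1..3 against made-up SSQ scores.
a, b = linear_fit([1, 2, 3], [10.0, 18.0, 26.0])
```

A fit like this, of SSQ score (or longitudinal COP) against sea state, is the kind of formulation the paper reports obtaining.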

Scene Recognition based Autonomous Robot Navigation robust to Dynamic Environments (동적 환경에 강인한 장면 인식 기반의 로봇 자율 주행)

  • Kim, Jung-Ho;Kweon, In-So
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.3
    • /
    • pp.245-254
    • /
    • 2008
  • Recently, many vision-based navigation methods have been introduced as intelligent robot applications. However, many of these methods mainly focus on finding the database image that corresponds to a query image. Thus, if the environment changes, for example when objects move, a robot is unlikely to find consistent corresponding points with any of the database images. To solve these problems, we propose a novel navigation strategy that uses fast motion estimation and a practical scene recognition scheme prepared for the kidnapping problem, which is defined as re-localizing a mobile robot after it has undergone an unknown motion or visual occlusion. The algorithm is based on camera motion estimation for planning the robot's next movement and an efficient outlier-rejection algorithm for scene recognition. Experimental results demonstrate the capability of the vision-based autonomous navigation in dynamic environments.

Implementation of Altitude Information for Flight Simulator in OpenSceneGraph (항공 시뮬레이터를 위한 OpenSceneGraph기반의 고도 정보 구현 방안)

  • Lee, ChungJae;Kim, JongBum;Kim, Ki-Il
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.9 no.1
    • /
    • pp.11-16
    • /
    • 2014
  • To develop a flight simulator, HAT (Height Above Terrain) is required to provide altitude information to a pilot learning to control an airplane in landing and takeoff situations. However, inconsistencies can arise between the real terrain and the simulated information, since current implementations of HAT simply depend on the airplane's center of gravity. To overcome this problem, in this paper we propose how to obtain more accurate altitude information than the existing scheme by using the HAT and HOT (Height Of Terrain) information of the landing gear as the airplane moves. Moreover, we demonstrate the accuracy of the proposed scheme in a new flight simulator developed with OSG (OpenSceneGraph), using terrain information for a domestic airport as an example.
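The relationship assumed here can be sketched in a few lines: HAT at a ground point is the aircraft's altitude above sea level minus the terrain elevation (HOT) at that point, and sampling it under each landing-gear contact point rather than only under the center of gravity follows the paper's idea. The gear names and numbers below are illustrative.

```python
def hat(msl_altitude, terrain_elevation):
    """Height above terrain: MSL altitude minus terrain elevation (HOT)."""
    return msl_altitude - terrain_elevation

def gear_hats(msl_altitude, terrain_under_gear):
    """HAT per landing-gear contact point; the minimum governs touchdown."""
    return {name: hat(msl_altitude, elev)
            for name, elev in terrain_under_gear.items()}

# Illustrative values: terrain elevation sampled under each wheel.
clearances = gear_hats(120.0, {"nose": 100.0,
                               "left_main": 101.5,
                               "right_main": 101.0})
lowest = min(clearances.values())   # the wheel that meets the ground first
```

On uneven terrain the per-gear minimum differs from the centre-of-gravity HAT, which is the inconsistency the paper sets out to remove.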

A Study of the Reactive Movement Synchronization for Analysis of Group Flow (그룹 몰입도 판단을 위한 움직임 동기화 연구)

  • Ryu, Joon Mo;Park, Seung-Bo;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.79-94
    • /
    • 2013
  • Recently, high-value-added business has been growing steadily in the culture and art area. To generate high value from a performance, audience satisfaction is necessary. Flow is a critical factor in satisfaction, and it should be induced in the audience and measured. To evaluate the audience's interest in and emotional response to contents, producers or investors need an index for measuring flow. But it is neither easy to define flow quantitatively nor to collect the audience's reactions immediately. Previous studies evaluated group flow as the sum of the average value of each person's reaction. The flow, or "good feeling", of each audience member was extracted from the face, especially changes of expression, and from body movement. But it was not easy to handle the large amount of real-time data from the sensor signals, and it was difficult to set up the experimental devices, in economic and environmental terms, because every participant needed a personal sensor to record physical signals and a camera placed in front of the head to capture looks. Therefore, a simpler system is needed to analyze group flow. This study provides a method for measuring audience flow through group synchronization at the same time and place. To measure synchronization, we built a real-time processing system using differential images and a Group Emotion Analysis (GEA) system. A differential image is obtained from the camera by subtracting the previous frame from the present frame, which yields the movement variation of the audience's reaction. We then developed the GEA program as a flow-judgment model. After measuring the audience's reaction, synchronization is divided into Dynamic State Synchronization and Static State Synchronization.
Dynamic State Synchronization accompanies the audience's active reactions, while Static State Synchronization refers to the audience remaining still. Dynamic State Synchronization can be caused by surprised reactions to scary, creepy, or reversal scenes, while Static State Synchronization is triggered by impressive or sad scenes. We therefore showed the participants several short movies containing such scenes, which made them sad, clap, feel creepy, and so on. To check the audience's movement, we defined two critical points, α and β: Dynamic State Synchronization is meaningful when the movement value is over the critical point β, while Static State Synchronization is effective under the critical point α. β was derived from the clapping movement of 10 teams instead of using the average amount of movement. After checking the audience's reactive movement, the percentage ratio was calculated by dividing the number of people reacting by the total number of people. In total, 37 teams were formed at the "2012 Seoul DMC Culture Open" and took part in the experiments. First, the staff induced them to clap; second, a basic scene was shown to neutralize the audience's emotion; third, a flow scene was displayed; fourth, a reversal scene was introduced. Then 24 of the teams were shown amusing and creepy scenes, and the other 10 teams were shown a sad scene. The audience clapped and laughed at the amusing scene, shook their heads or hid by closing their eyes at the creepy scene, and fell silent at the sad or touching scene. If the result was over about 80%, the group could be judged as synchronized and flow as achieved. As a result, the audience showed similar reactions to similar stimulation at the same time and place.
With additional normalization and experiments, the flow factor could be found through synchronization in much bigger groups, which should be useful for planning contents.
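The differential-image and synchronization-ratio steps above can be sketched as follows; a minimal illustration treating frames as flat lists of pixel values, with the critical-point values chosen for the example rather than taken from the paper:

```python
def differential_image(prev, curr):
    """Per-pixel absolute difference of two consecutive frames (flat lists)."""
    return [abs(c - p) for p, c in zip(prev, curr)]

def movement(prev, curr):
    """Total movement value for one audience region between two frames."""
    return sum(differential_image(prev, curr))

def synchronization_ratio(movements, alpha, beta):
    """Fraction of audience regions in a synchronized state: dynamic when
    the movement value exceeds beta, static when it stays below alpha.
    The alpha/beta values used here are illustrative."""
    reacting = sum(1 for m in movements if m > beta or m < alpha)
    return reacting / len(movements)

m = movement([0, 0, 0], [5, 0, 5])                      # one region's movement
ratio = synchronization_ratio([1, 50, 120, 110], alpha=5, beta=100)
```

A ratio over roughly 0.8 is the threshold at which the study judges the group to be synchronized and flow to be achieved.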

Analysis on the Movement Found in the Animation Monsters University - Focusing on Laban's Effort - (애니메이션 <몬스터 대학교>의 움직임 분석 -라반의 에포트를 중심으로)

  • Sung, Rea
    • Cartoon and Animation Studies
    • /
    • s.40
    • /
    • pp.33-53
    • /
    • 2015
  • The movement of characters is one of the crucial elements for conveying the emotion flowing inside them. The same movement may appear or be expressed differently according to a character's personality, emotion, or the particular situation. The purpose of this study is to analyze not only the movement found superficially in an animation but also the characters' internal emotion and attitude with Laban's movement analysis system, particularly Effort, one of its analysis categories, and to examine how effectively Laban's movement analysis, often employed in dance circles, can analyze movement in an animation. Monsters University is about a monster that constantly strives to realize its dream of becoming a scarer. Functional movement forms the largest part, but expressive movement showing how a character thinks or feels also appears harmoniously. Characters' externally shown movement can sometimes express their internal emotion directly, but they also often express their feelings in moderation. Therefore, this study analyzes the characters' movement in four scenes of Monsters University with LMA's Effort. According to the findings, in the scene where Michael enters the door leading to the human world following the scarer, Michael's envy of the scarer is expressed with the Vision Drive, giving a strong feeling of dreaming. The scene of the second game to choose the best scare team shows the Spell Drive, with careful and light movement and a clear intention to survive the game. The scene of the party held for the surviving teams shows the Passion Drive, eagerly expressing happy and delightful feelings without regard for the surroundings. In the scene where Michael and Sullivan are pursued by people, the Action Drive is used to express movement that is heavy, strong, and gradually getting faster, concentrating the hurried characters' feelings in one place.

Relationship between Scene Movements and Cybersickness (화면 움직임과 Cybersickness의 관계에 관한 연구)

  • Park, Kyung-Soo;Choi, Jeong-A;Kim, Kyoung-Taek;Kim, Sang-Soo
    • Journal of the Ergonomics Society of Korea
    • /
    • v.24 no.1
    • /
    • pp.1-7
    • /
    • 2005
  • This paper investigates the effects of scene movements on cybersickness in order to develop guidelines for scene movements in virtual environments. The scene movements comprised scene navigations (along the X: lateral, Y: fore-and-aft, and Z: vertical axes) and scene rotations (pitch, roll, and yaw), each at three speed levels: 2.7, 4.5, and 6.3 /s for navigation, and 10, 20, and 30 /s for rotation. Twelve participants were exposed to each scene for 15 minutes, and three tests were performed to measure the degree of sickness. Before and after exposure to the virtual environments, subjects described their sickness symptoms by answering the Simulator Sickness Questionnaire (SSQ), and postural stability tests were conducted in which the subjects' Center of Pressure (COP) was traced and recorded by a force platform. During exposure, the subjects rated their degree of nausea. For both navigation and rotation, the effects of speed and axis were significant in the SSQ scores and the nausea ratings, but not in the COP, and no correlation between the SSQ scores and the COP data was found; it was therefore inappropriate to use COP as a measure of cybersickness. The degree of sickness increased with speed, except in the case of yaw. Sickness was most severe for navigation along the X axis and for rotation by yaw.

Effects of Object-Background Contextual Consistency on the Allocation of Attention and Memory of the Object (물체-배경 맥락 부합성이 물체에 대한 주의 할당과 기억에 미치는 영향)

  • Lee, YoonKyoung;Kim, Bia
    • Korean Journal of Cognitive Science
    • /
    • v.24 no.2
    • /
    • pp.133-171
    • /
    • 2013
  • The gist of a scene can be identified in less than 100 ms, and violations of the gist can influence how attention is allocated to the parts of a scene. In other words, people tend to allocate more attention to objects inconsistent with the gist of a scene and to remember them better. To investigate the effects of contextual consistency on attention allocation and object memory, two experiments were conducted. In both, a 3×2 factorial design was used, with scene presentation time (2 s, 5 s, and 10 s) as a between-subject factor and object-background contextual consistency (consistent, inconsistent) as a within-subject factor. In Experiment 1, eye movements were recorded while participants viewed line-drawing scenes. The eye-movement patterns differed according to whether the scenes were consistent: context-inconsistent objects showed faster initial fixation indices, longer fixation times, and more frequent returns than context-consistent ones. These results are entirely consistent with previous studies: if an object is identified as inconsistent with the gist of a scene, it attracts attention. Furthermore, the inconsistent objects and their locations in the scenes were recalled better than the consistent ones and their locations. Experiment 2 was the same as Experiment 1 except that a dual-task paradigm was used to reduce the amount of attention available for the objects: participants had to detect the position of a probe appearing every second while viewing the scenes. Nonetheless, the result patterns were the same as in Experiment 1; even with reduced attention to the scene contents, the same contextual-inconsistency effects were observed. These results indicate that object-background contextual consistency strongly influences both the allocation of attention and the memory of objects in a scene.
