• Title/Summary/Keyword: Scene Flow


Effects of treadmill training with real optic flow scene on balance and balance self-efficacy in individuals following stroke: a pilot randomized controlled trial

  • Kang, Hyungkyu;Chung, Yijung
    • Physical Therapy Rehabilitation Science
    • /
    • v.1 no.1
    • /
    • pp.33-39
    • /
    • 2012
  • Objective: The objective of this study was to investigate the effect of treadmill training with a real optic flow scene on functional recovery of balance and balance self-efficacy in stroke patients. Design: Single-blind, randomized controlled trial. Methods: Nine patients following stroke were randomly divided into a treadmill with optic flow group (n=3), a treadmill with virtual reality group (n=3), and a control group (n=3). Subjects in the treadmill with optic flow group wore a head-mounted display to receive a speed-modulated real optic flow scene during 30 minutes of treadmill training, while those in the treadmill with virtual reality group and the control group received treadmill training with virtual reality and regular therapy, respectively, for the same amount of time, five times per week for a period of three weeks. The timed up and go test (TUG) and the activities-specific balance confidence scale (ABC scale) were evaluated before and after the intervention. Results: TUG in the treadmill training with optic flow group showed significantly greater improvement compared with the treadmill training with virtual reality group and the control group (p<0.05). Significantly greater improvement in the ABC scale was observed in the treadmill training with optic flow group and the treadmill training with virtual reality group compared with the control group (p<0.05). Conclusions: The findings of this study demonstrate that treadmill training with a real optic flow scene can help improve balance and balance self-efficacy in patients with chronic stroke and may be used as a practical adjunct to routine rehabilitation therapy.


The Implementing a Color, Edge, Optical Flow based on Mixed Algorithm for Shot Boundary Improvement (샷 경계검출 개선을 위한 칼라, 엣지, 옵티컬플로우 기반의 혼합형 알고리즘 구현)

  • Park, Seo Rin;Lim, Yang Mi
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.8
    • /
    • pp.829-836
    • /
    • 2018
  • This study attempts to detect shot boundaries in films (or dramas) based on the length of a sequence. Because films and dramas use scene change effects heavily, the issues these effects raise are more diverse than those in surveillance camera, sports video, medical, and security footage. Visual techniques used in films are designed around human aesthetics; therefore, it is difficult to solve the errors in shot boundary detection with the methods employed for surveillance cameras. To define the errors arising from scene change effects between images and resolve those issues, a mixed algorithm based on color histograms, edge histograms, and optical flow was implemented. The shot boundary data from this study will be used for analyzing the configuration of meaningful shots in sequences in future work.
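The color-histogram component of such a mixed detector can be sketched as follows. This is a minimal illustration, not the paper's implementation; the bin count and cut threshold are arbitrary placeholders:

```python
import numpy as np

def color_hist(frame, bins=16):
    """Per-channel color histogram of a frame, normalized to sum to 1."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(frame.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def shot_boundaries(frames, threshold=0.5):
    """Flag frame indices where the histogram distance to the previous
    frame exceeds the threshold (a candidate cut)."""
    hists = [color_hist(f) for f in frames]
    cuts = []
    for i in range(1, len(hists)):
        d = 0.5 * np.abs(hists[i] - hists[i - 1]).sum()  # half L1 distance, in [0, 1]
        if d > threshold:
            cuts.append(i)
    return cuts
```

A full detector in the paper's spirit would combine this score with edge-histogram and optical-flow cues before deciding on a boundary.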

A Scene-Specific Object Detection System Utilizing the Advantages of Fixed-Location Cameras

  • Jin Ho Lee;In Su Kim;Hector Acosta;Hyeong Bok Kim;Seung Won Lee;Soon Ki Jung
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.4
    • /
    • pp.329-336
    • /
    • 2023
  • This paper introduces an edge AI-based scene-specific object detection system for long-term traffic management, focusing on analyzing congestion and movement via cameras. It aims to balance fast processing and accuracy in traffic flow data analysis using edge computing. We adapt the YOLOv5 model, with four heads, to a scene-specific model that utilizes the fixed camera's scene-specific properties. This model selectively detects objects based on scale by blocking nodes, ensuring only objects of certain sizes are identified. A decision module then selects the most suitable object detector for each scene, enhancing inference speed without significant accuracy loss, as demonstrated in our experiments.

Traffic Flow Sensing Using Wireless Signals

  • Duan, Xuting;Jiang, Hang;Tian, Daxin;Zhou, Jianshan;Zhou, Gang;E, Wenjuan;Sun, Yafu;Xia, Shudong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.10
    • /
    • pp.3858-3874
    • /
    • 2021
  • As an essential part of the urban transportation system, precise perception of traffic flow parameters at signalized intersections ensures traffic safety and fully exploits the intersection's capacity. Traditional methods for detecting road traffic flow parameters can be divided into microscopic and macroscopic approaches. Microscopic methods include geomagnetic induction coil technology, aerial detection based on unmanned aerial vehicles (UAVs), and camera video detection based on a fixed scene. Macroscopic methods include floating car data analysis. All of these methods have their advantages and disadvantages. Recently, indoor location methods based on wireless signals have attracted wide attention due to their applicability and low cost. This paper extends the wireless-signal indoor location method to the outdoor intersection scene for traffic flow parameter estimation. The detection scene is constructed at the intersection based on received signal strength indication (RSSI) ranging extracted from the wireless signal. We extracted RSSI data from the wireless signals sent to the road side unit (RSU) by the vehicle nodes, calibrated the RSSI ranging model, and finally obtained the traffic flow parameters of the intersection entrance road. Through multiple simulation experiments, we measured the average speed and trajectory of traffic flow and the spatiotemporal map at a single intersection inlet, and finally obtained the queue length of the inlet lane. The simulation results show that the RSSI-based ranging and positioning method can accurately estimate the traffic flow parameters at the intersection, which also provides a foundation for accurately estimating the traffic flow state in the coming era of the Internet of Vehicles.
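RSSI ranging of this kind typically rests on a log-distance path-loss model; a minimal sketch follows. The reference RSSI and path-loss exponent below are illustrative defaults, not the paper's calibrated values:

```python
def rssi_to_distance(rssi_dbm, rssi_ref_dbm=-40.0, n=2.7):
    """Invert the log-distance path-loss model
        RSSI(d) = rssi_ref - 10 * n * log10(d / 1 m)
    to estimate distance in meters from a measured RSSI.
    rssi_ref_dbm: RSSI at 1 m; n: path-loss exponent (both illustrative)."""
    return 10.0 ** ((rssi_ref_dbm - rssi_dbm) / (10.0 * n))
```

In the paper's setup, both parameters would be fitted during the RSSI ranging-model calibration step before distances are turned into vehicle trajectories.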

A Study of the Reactive Movement Synchronization for Analysis of Group Flow (그룹 몰입도 판단을 위한 움직임 동기화 연구)

  • Ryu, Joon Mo;Park, Seung-Bo;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.79-94
    • /
    • 2013
  • Recently, high-value-added business has been growing steadily in the culture and art area. To generate high value from a performance, audience satisfaction is necessary. Flow is a critical factor in satisfaction, and it should be induced in the audience and measured. To evaluate the audience's interest and emotion toward the content, producers and investors need an index for measuring flow. But it is neither easy to define flow quantitatively nor to collect the audience's reactions immediately. Previous studies evaluated group flow as the sum of the average value of each person's reaction: flow, or a "good feeling," was extracted from each audience member's face, especially changes in facial expression and body movement. But it was not easy to handle the large amount of real-time data from each sensor signal, and it was also difficult to set up the experimental devices, in terms of both cost and environment, because every participant needed a personal sensor to check their physiological signals and a camera located in front of them to capture their looks. Therefore, a simpler system for analyzing group flow is needed. This study provides a method for measuring the audience's flow through group synchronization at the same time and place. To measure synchronization, we built a real-time processing system using differential images and a Group Emotion Analysis (GEA) system. A differential image is obtained from the camera by subtracting the previous frame from the present frame, which yields the movement variation of the audience's reaction. We then developed a program, GEA, for the flow judgment model. After measuring the audience's reaction, the synchronization is divided into Dynamic State Synchronization and Static State Synchronization. Dynamic State Synchronization accompanies the audience's active reactions, while Static State Synchronization corresponds to near-stillness of the audience. Dynamic State Synchronization can be caused by the audience's surprised actions in scary, creepy, or reversal scenes, while Static State Synchronization is triggered by impressive or sad scenes. Therefore, we showed the audience several short movies containing such scenes, which made them sad, clap, feel creeped out, etc. To check the movement of the audience, we defined two critical points, α and β: Dynamic State Synchronization is meaningful when the movement value is over the critical point β, while Static State Synchronization is effective under the critical point α. β was derived from the clapping movement of 10 teams instead of the average amount of movement. After checking the audience's reactive movement, the percentage ratio was calculated by dividing the number of people reacting by the total number of people. In total, 37 teams participated in the experiments at the "2012 Seoul DMC Culture Open". First, they were induced to clap by the staff; second, a basic scene was shown to neutralize the audience's emotion; third, a flow scene was displayed; and fourth, a reversal scene was introduced. Then 24 of the teams were shown amusing and creepy scenes, and the other 10 teams were shown the sad scene. The audience clapped and laughed at the amusing scene, shook their heads or hid by closing their eyes at the creepy scene, and fell silent at the sad or touching scene. If the result was over about 80%, the group could be judged to have achieved synchronization and flow. As a result, the audience showed similar reactions to similar stimulation at the same time and place. With additional normalization and experiments, we can find the flow factor through synchronization in much bigger groups, which should be useful for planning contents.
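The differential-image measurement and the α/β classification described above can be sketched as follows. The pixel threshold and the two critical points are placeholders, not the paper's calibrated values:

```python
import numpy as np

def movement_ratio(prev_frame, curr_frame, pixel_thresh=25):
    """Differential image: fraction of pixels whose absolute grayscale
    change between consecutive frames exceeds pixel_thresh."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > pixel_thresh).mean())

def classify_synchronization(ratio, alpha=0.02, beta=0.30):
    """Label the group's state against critical points alpha and beta
    (placeholder thresholds, for illustration only)."""
    if ratio > beta:
        return "dynamic"   # large collective movement, e.g., clapping
    if ratio < alpha:
        return "static"    # near-stillness, e.g., during a touching scene
    return "unsynchronized"
```

Counting how many audience members fall into the same state at the same moment then gives the percentage ratio used for the 80% synchronization judgment.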

The Mobile Cartoons Authoring Method Using Scene Flow Mode (Scene flow 방식을 이용한 모바일 만화 저작 기법)

  • Cho, Eun-Ae;Koh, Hee-Chang;Mo, Hae-Gyu
    • Cartoon and Animation Studies
    • /
    • s.19
    • /
    • pp.113-126
    • /
    • 2010
  • The digital cartoon market is looking for new growth momentum as demand for mobile contents increases rapidly with the popularization of portable devices. The conventional digital cartoon markets, based on webtoons, page-viewer cartoons, and e-paper cartoons, have been studied in various respects to overcome some of the limitations of traditional cartoons. Mobile cartoons, which continue to evolve, face canvas limitations due to the mobile screen size. These limitations lead to communication problems between cartoonists and subscribers and become obstacles to the activation of mobile cartoons. In this paper, we developed an authoring tool that applies the Scene Flow method to overcome the inefficiency of conventional authoring methods. The proposed method can reflect the cartoonist's intention during the process of authoring mobile cartoons; we then studied this authoring method for mobile cartoons and its effects, as well as ways for users to conveniently create and distribute content.


Scene Change Detection and Key Frame Selection Using Fast Feature Extraction in the MPEG-Compressed Domain (MPEG 압축 영상에서의 고속 특징 요소 추출을 이용한 장면 전환 검출과 키 프레임 선택)

  • 송병철;김명준;나종범
    • Journal of Broadcast Engineering
    • /
    • v.4 no.2
    • /
    • pp.155-163
    • /
    • 1999
  • In this paper, we propose novel scene change detection and key frame selection techniques that use two feature images, i.e., DC and edge images, extracted directly from MPEG compressed video. For fast edge image extraction, we suggest utilizing the 5 lower AC coefficients of each DCT block. Based on this scheme, we present another edge image extraction technique using AC prediction. Although the former is superior to the latter in terms of visual quality, both methods can extract the important edge features well. Simulation results indicate that scene changes such as cuts, fades, and dissolves can be correctly detected by using the edge energy diagram obtained from the edge images and the histograms of the DC images. In addition, we find that our edge images are comparable to those obtained in the spatial domain while incurring a much lower computational cost. Based on the human visual system (HVS), a key frame of each scene can also be selected. In comparison with an existing method using optical flow, our scheme can select semantic key frames because we only use the above edge and DC images.
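The DC-image idea — one value per 8×8 block, recoverable from the DCT DC coefficient without a full inverse transform — can be sketched in the pixel domain as follows. This is a simplified stand-in, not the paper's compressed-domain extraction:

```python
import numpy as np

def dc_image(gray):
    """One value per 8x8 block: the block mean, which equals the block's
    DCT DC coefficient up to a constant scale, so in the compressed domain
    it can be read off without an inverse DCT."""
    h8, w8 = gray.shape[0] // 8, gray.shape[1] // 8
    blocks = gray[:h8 * 8, :w8 * 8].astype(float).reshape(h8, 8, w8, 8)
    return blocks.mean(axis=(1, 3))

def dc_histogram_distance(dc_a, dc_b, bins=32):
    """L1 distance between normalized DC-image histograms,
    a cheap indicator for cut detection."""
    ha = np.histogram(dc_a, bins=bins, range=(0, 256))[0].astype(float)
    hb = np.histogram(dc_b, bins=bins, range=(0, 256))[0].astype(float)
    return float(np.abs(ha / ha.sum() - hb / hb.sum()).sum())
```

The paper pairs such DC histograms with an edge energy diagram built from the low-order AC coefficients to separate cuts from fades and dissolves.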


Spatial Multilevel Optical Flow Architecture for Motion Estimation of Stationary Objects with Moving Camera (공간 다중레벨 Optical Flow 구조를 사용한 이동 카메라에 인식된 고정물체의 움직임 추정)

  • Fuentes, Alvaro;Park, Jongbin;Yoon, Sook;Park, Dong Sun
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2018.05a
    • /
    • pp.53-54
    • /
    • 2018
  • This paper introduces an approach to detect motion areas of stationary objects when the camera moves slightly in the scene by computing optical flow. The flow field is computed with two pyramidal architectures of 5 levels each, built by down-sampling the images by half at each level; optical flow is then computed at each level. A warping process combines the information and generates a final flow field after applying edge smoothness and outlier reduction steps. Moreover, we convert the flow vectors' magnitude and angle to a color map using a pseudo-color palette. Experimental results on the Middlebury optical flow dataset demonstrate the effectiveness of our method compared to other approaches.
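The two building blocks named above — a 5-level half-resolution pyramid and the magnitude/angle conversion that feeds the pseudo-color map — can be sketched as follows (a minimal illustration, not the paper's implementation; 2×2 mean pooling stands in for whatever down-sampling filter the authors used):

```python
import numpy as np

def build_pyramid(img, levels=5):
    """Spatial pyramid: halve each dimension per level via 2x2 mean pooling."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyramid[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        pyramid.append(a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyramid

def flow_to_polar(u, v):
    """Magnitude and angle of flow vectors, the inputs to a pseudo-color map
    (angle -> hue, magnitude -> saturation/value in typical visualizations)."""
    return np.hypot(u, v), np.arctan2(v, u)
```

Coarse-to-fine estimation would then compute flow at the smallest level, up-sample and warp, and refine at each finer level.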


A Study on the Friction Loss Reduction in Fire Hoses Used at a Fire Scene (화재현장에서 사용하는 소방호스의 마찰손실 감소 방안에 관한 연구)

  • Min, Se-Hong;Kwon, Yong-Joon
    • Fire Science and Engineering
    • /
    • v.27 no.3
    • /
    • pp.52-59
    • /
    • 2013
  • This study describes the friction loss measured in fire hoses used at fire scenes as a function of operating pressure and changes in water flow rate. Actual measurements, based on an analysis of how fire hoses are used at fire scenes (kind, quantity, operating pressure, etc.), showed friction loss of up to 56.8% in a fire hose under the conditions in which fire officers use it. This differs greatly from the equivalent length of fire hose used to calculate the pump head in indoor and outdoor fire-fighting facilities. There is no regulation restricting friction loss, hose makers publish no measured friction loss data, and deploying a fire hose without considering friction loss at a fire scene can result in an increased length of hose and high-pressure water discharge from the fire engine. This study therefore aims to establish a standard equivalent length for friction loss in a fire hose and to propose a deployment method that accounts for friction loss at the fire scene.
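For context, fireground pump calculations commonly estimate hose friction loss with the empirical formula FL = C · (Q/100)² · (L/100). A sketch follows; the coefficient C is hose-diameter-specific, and the values used here are textbook examples in US units, not this paper's measurements:

```python
def friction_loss_psi(c, flow_gpm, length_ft):
    """Empirical fireground formula FL = C * (Q/100)^2 * (L/100):
    C is a hose-diameter coefficient, Q the flow in gal/min,
    L the hose length in feet; result in psi."""
    return c * (flow_gpm / 100.0) ** 2 * (length_ft / 100.0)
```

The paper's point is precisely that such nominal coefficients can diverge sharply from losses measured under real fireground conditions.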

SLAM Method by Disparity Change and Partial Segmentation of Scene Structure (시차변화(Disparity Change)와 장면의 부분 분할을 이용한 SLAM 방법)

  • Choi, Jaewoo;Lee, Chulhee;Eem, Changkyoung;Hong, Hyunki
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.8
    • /
    • pp.132-139
    • /
    • 2015
  • Visual SLAM (Simultaneous Localization And Mapping) has been widely used to estimate a mobile robot's location. Visual SLAM estimates relative motion from static visual features over an image sequence. Because visual SLAM methods generally assume static features in the environment, they cannot obtain precise results in dynamic situations with many moving objects such as cars and human beings. This paper presents a stereo-vision-based SLAM method for dynamic environments. First, we extract a disparity map with stereo vision and compute optical flow. We then compute the disparity change, the estimated flow field between stereo views. After examining the disparity change values, we detect ROIs (Regions Of Interest) in disparity space to determine dynamic scene objects. In indoor environments, many structural planes such as walls may be falsely determined to be dynamic elements. To solve this problem, we segment the scene into planar structures: disparity values from the stereo vision are projected onto the X-Z plane, and the Hough transform is employed to determine planes. In the final step, we remove ROIs near the walls and discriminate the static scene elements of the indoor environment. The experimental results show that the proposed method achieves stable performance in dynamic environments.
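The stereo-geometry relations underlying the method — depth from disparity, and disparity change as a dynamic-object cue — can be sketched as follows (a simplified stand-in for the paper's ROI step; the change threshold is an arbitrary placeholder):

```python
import numpy as np

def depth_from_disparity(disp, focal_px, baseline_m):
    """Stereo geometry Z = f * B / d; non-positive disparities map to inf."""
    z = np.full(disp.shape, np.inf)
    valid = disp > 0
    z[valid] = focal_px * baseline_m / disp[valid]
    return z

def dynamic_candidate_mask(disp_prev, disp_curr, thresh=2.0):
    """Pixels whose disparity changed by more than thresh between frames,
    taken as candidate dynamic-object regions (before the plane-based
    wall filtering described in the abstract)."""
    return np.abs(disp_curr.astype(float) - disp_prev.astype(float)) > thresh
```

In the full pipeline, disparities would first be warped by the optical flow before differencing, and candidates lying on Hough-detected X-Z planes (walls) would be discarded.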