• Title/Summary/Keyword: position tracking


Effect of Sensory Integration Therapy Combined with Eye Tracker on Sensory Processing and Visual Perception of Children with Developmental Disabilities (아이트래커를 병행한 감각통합치료가 발달장애아동의 감각처리 및 시지각에 미치는 영향)

  • Kwon, So-Hyun;Ahn, Si-Nae
    • The Journal of Korean Academy of Sensory Integration
    • /
    • v.21 no.3
    • /
    • pp.39-53
    • /
    • 2023
• Objective : The purpose of this study was to examine the effect of sensory integration therapy combined with an eye tracker on the sensory processing and visual perception of children with developmental disabilities. Methods : This was a single-subject study with a multiple-baseline design across subjects; the intervention applied sensory integration therapy combined with an eye tracker. Visual-motor speed and saccadic eye movements were assessed at each session of the baseline and intervention periods. As pre- and post-evaluation, the Sensory Profile, the Korean Developmental Test of Visual Perception, and the Trail Making Test were administered. The session-by-session and pre-/post-evaluation results were examined for intervention effectiveness through visual analysis and trend-line analysis. Results : In the session-by-session evaluation, the slope of the trend line for visual-motor speed and saccadic eye movement increased sharply for all children during the intervention compared with the baseline. In the pre- and post-evaluation, sensory processing of movement, body position, and vision changed from "more than peers" to a level similar to that of peers. In visual perception, all children's visual closure ability increased. In the Trail Making Test, conducted to confirm improvement in visual tracking and visual-motor abilities, all children showed shorter performance times after the intervention than before. Conclusion : Sensory integration therapy combined with an eye tracker was confirmed to affect the sensory processing and visual perception of children with developmental disabilities. It is expected to play an important clinical role, as it can stimulate children's interest and motivation in line with recent technological improvements and the spread of smart devices.

Effectiveness of the Respiratory Gating System for Stereotactic Radiosurgery of Lung Cancer (폐암 환자의 정위적 방사선 수술 시 Respiratory Gating System의 유용성에 대한 연구)

  • Song Heung-Kwon;Kwon Kyung-Tae;Park Cheol-Su;Yang Oh-Nam;Kim Min-Su;Kim Jeong-Man
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.17 no.2
    • /
    • pp.125-131
    • /
    • 2005
• Purpose : For stereotactic radiosurgery (SRS) of a tumor in a region with significant respiratory motion, such as the lower lobe of the lung, gated therapy, which delivers the radiation dose only during selected respiratory phases in which tumor motion is small, was performed using the Respiratory Gating System, and its clinical effectiveness was evaluated. Materials and Methods : For two SRS patients with a tumor in the lower lobe of the lung, a marker block (infrared reflector) was attached to the abdomen. While the patient's respiratory cycle was monitored with the Real-time Position Management system (RPM, Varian, USA), 4D CT was performed (10 phases per cycle). Phases in which tumor motion did not change rapidly were selected as treatment phases. The treatment volume was contoured on the CT images for the selected treatment phases using the maximum intensity projection (MIP) method. In order to verify setup reproducibility and positional variation, the 4D CT was repeated. Results : The gross tumor volume (GTV) showed maximum movement in the superior-inferior direction. For patient #1, GTV motion was reduced to 2.6 mm in the treatment phases ($30{\sim}60%$) from 9.4 mm over the full phases ($0{\sim}90%$); for patient #2, it was reduced to 2.3 mm in the treatment phases ($30{\sim}70%$) from 11.7 mm over the full phases ($0{\sim}90%$). When the two sets of CT images were compared, setup errors in all directions were within 3 mm. Conclusion : Since tumor motion was reduced to less than 5 mm, the Respiratory Gating System is useful for SRS of the lower lobe of the lung.
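The gating idea in the abstract above reduces to a simple calculation: peak-to-peak tumor excursion over all ten 4D-CT phases versus over the selected gating window. A minimal sketch, using hypothetical superior-inferior positions (not the study's patient data):

```python
# Residual tumor motion inside a gating window vs. over the full
# respiratory cycle. The phase-by-phase positions below are invented
# for illustration; only the 10-phase sampling scheme is from the study.

def motion_amplitude(positions):
    """Peak-to-peak excursion (mm) over a set of phase samples."""
    return max(positions) - min(positions)

# Hypothetical SI tumor position (mm) at 4D-CT phases 0%..90%.
si_position = {0: 0.0, 10: 2.1, 20: 5.0, 30: 8.0, 40: 9.2,
               50: 9.4, 60: 8.1, 70: 5.3, 80: 2.4, 90: 0.5}

full = motion_amplitude(list(si_position.values()))          # all phases
gated = motion_amplitude([si_position[p] for p in (30, 40, 50, 60)])

print(full, gated)  # the 30-60% window sees far less residual motion
```

Restricting beam-on to the near-exhale plateau is what shrinks the effective motion from ~1 cm to a few millimetres in the reported patients.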


Development of an Offline Based Internal Organ Motion Verification System during Treatment Using Sequential Cine EPID Images (연속촬영 전자조사 문 영상을 이용한 오프라인 기반 치료 중 내부 장기 움직임 확인 시스템의 개발)

  • Ju, Sang-Gyu;Hong, Chae-Seon;Huh, Woong;Kim, Min-Kyu;Han, Young-Yih;Shin, Eun-Hyuk;Shin, Jung-Suk;Kim, Jing-Sung;Park, Hee-Chul;Ahn, Sung-Hwan;Lim, Do-Hoon;Choi, Doo-Ho
    • Progress in Medical Physics
    • /
    • v.23 no.2
    • /
    • pp.91-98
    • /
    • 2012
• Verification of internal organ motion during treatment, and feedback based on it, is essential for accurate dose delivery to a moving target. We developed an offline internal organ motion verification system (IMVS) using cine EPID images and evaluated its accuracy and usability in a phantom study. For verification of organ motion using live cine EPID images, a pattern-matching algorithm employing an internal surrogate that is clearly distinguishable and represents organ motion in the treatment field, such as the diaphragm, was implemented in the self-developed analysis software. For the system performance test, we built a linear motion phantom consisting of a human-body-shaped phantom with a fake tumor in the lung, a linear motion cart, and control software. The phantom was operated with a motion of 2 cm at 4 sec per cycle, and cine EPID images were acquired at rates of 3.3 and 6.6 frames per sec (2 MU/frame) with $1,024{\times}768$ pixel counts on a linear accelerator (10 MVX). The target's motion was tracked using the self-developed analysis software. To evaluate accuracy, the results were compared with the planned motion of the phantom and with data from the video-image-based tracking system (RPM, Varian, USA), which uses an external surrogate. For quantitative analysis, we analyzed the correlation between the two data sets in terms of average cycle (peak to peak), amplitude, and pattern (RMS, root mean square) of motion. The average motion cycles from the IMVS and RPM system were $3.98{\pm}0.11$ sec (IMVS 3.3 fps), $4.005{\pm}0.001$ sec (IMVS 6.6 fps), and $3.95{\pm}0.02$ sec (RPM), respectively, in good agreement with the true value (4 sec/cycle). The average motion amplitude tracked by our system was $1.85{\pm}0.02$ cm (3.3 fps) and $1.94{\pm}0.02$ cm (6.6 fps), differing slightly from the actual value (2 cm) by 0.15 cm (7.5% error) and 0.06 cm (3% error), respectively, owing to the time resolution of image acquisition.
In the analysis of motion pattern, the RMS value from the cine EPID images at 3.3 fps (0.1044) was slightly larger than that at 6.6 fps (0.0480). The organ motion verification system using sequential cine EPID images with an internal surrogate represented the motion well, within 3% error, in this preliminary phantom study. The system can be implemented for clinical purposes, including verification of organ motion during treatment, comparison with 4D treatment planning data, and feedback for accurate dose delivery to the moving target.
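The pattern metric used above is the root-mean-square difference between two sampled motion traces. A minimal sketch with made-up sample values (not the study's data):

```python
# RMS difference between two equal-length motion traces, e.g. an
# EPID-tracked trace versus the planned phantom motion. Values are
# illustrative only.
import math

def rms_difference(a, b):
    """Root-mean-square difference between two equal-length traces."""
    if len(a) != len(b):
        raise ValueError("traces must have the same number of samples")
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

reference = [0.0, 0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5]   # cm
tracked   = [0.0, 0.45, 1.05, 1.4, 1.95, 1.55, 1.0, 0.5]
print(round(rms_difference(reference, tracked), 4))
```

A lower RMS means the tracked trace reproduces the reference pattern more faithfully, which is why the 6.6 fps acquisition (RMS 0.0480) outperforms 3.3 fps (RMS 0.1044): more frames per cycle leave smaller gaps between samples.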

A preliminary study for development of an automatic incident detection system on CCTV in tunnels based on a machine learning algorithm (기계학습(machine learning) 기반 터널 영상유고 자동 감지 시스템 개발을 위한 사전검토 연구)

  • Shin, Hyu-Soung;Kim, Dong-Gyou;Yim, Min-Jin;Lee, Kyu-Beom;Oh, Young-Sup
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.19 no.1
    • /
    • pp.95-107
    • /
    • 2017
• In this study, a preliminary investigation was undertaken toward the development of a machine-learning-based automatic incident detection system for tunnels, intended to detect incidents occurring in a tunnel in real time and also to identify the type of incident. Two road sites with operating CCTVs were selected, and part of the CCTV footage was processed to produce training data sets. The data sets consist of position and time information of moving objects on the CCTV screen, extracted by first detecting and then tracking objects entering the screen using a conventional image processing technique available in this study. The data sets were then matched with six categories of events, such as lane change and stopping, which are also included in the training data. The training data were learned by a resilient-propagation neural network with two hidden layers; nine architectural models were set up for parametric studies, from which the model with 300 units in the first hidden layer and 150 in the second was found to be optimal, achieving the highest accuracy on both the training data and test data not used for training. This study showed that highly variable and complex traffic and incident features can be identified well, without any hand-crafted feature rules, using a machine learning approach. In addition, the detection capability and accuracy of the machine-learning-based system will improve automatically as the volume of tunnel CCTV image data grows.
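The 300-150 architecture the study selected can be sketched as a plain feed-forward pass from an object's position/time feature vector to scores over the six event categories. The feature size, weights, and activations below are assumptions for illustration; the paper's resilient-propagation (Rprop) training is not shown.

```python
# Forward pass of a 300-150 two-hidden-layer network mapping tracked-object
# features to six event-class probabilities. Weights are random here;
# only the layer sizes (300, 150) and class count (6) come from the study.
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random weights and zero biases for one fully connected layer."""
    return rng.standard_normal((n_in, n_out)) * 0.01, np.zeros(n_out)

n_features, n_classes = 40, 6     # feature-vector size is an assumption
W1, b1 = layer(n_features, 300)   # first hidden layer: 300 units
W2, b2 = layer(300, 150)          # second hidden layer: 150 units
W3, b3 = layer(150, n_classes)    # output: 6 event categories

def forward(x):
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    z = h2 @ W3 + b3
    e = np.exp(z - z.max())
    return e / e.sum()            # softmax over the 6 event types

probs = forward(rng.standard_normal(n_features))
print(probs.shape)
```

The parametric study in the paper amounts to sweeping the two hidden-layer widths and comparing held-out accuracy across the nine resulting models.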

A Study on the Development of a Home Mess-Cleanup Robot Using an RFID Tag-Floor (RFID 환경을 이용한 홈 메스클린업 로봇 개발에 관한 연구)

  • Kim, Seung-Woo;Kim, Sang-Dae;Kim, Byung-Ho;Kim, Hong-Rae
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.2
    • /
    • pp.508-516
    • /
    • 2010
• An autonomous and automatic home mess-cleanup robot is newly developed in this paper. Thus far, vacuum cleaners have lightened the burden of household chores, but the operational labor they entail has remained severe. Recently, a cleaning robot was commercialized to address this, but it too was unsuccessful because it could not handle mess-cleanup, i.e., the clean-up of large trash and the arrangement of newspapers, clothes, etc. Hence, we develop a new home mess-cleanup robot (McBot) to overcome this problem completely. The robot needs the capability for agile navigation and a novel manipulation system for mess-cleanup. The autonomous navigation system has to be controlled for full scanning of the living room and for precise tracking of the desired path. It must also be able to recognize its own absolute position and orientation and to distinguish the messed objects to be cleaned up from obstacles that should merely be avoided. The manipulator, which is not needed in a vacuum-cleaning robot, must distinguish the large trash to be discarded from the messed objects to be arranged. It needs to use its discretion with regard to the form of the messed objects and to carry them properly to their destinations. In particular, in this paper, we describe our approach for achieving accurate localization using RFID for home mess-cleanup robots. Finally, the effectiveness of the developed McBot is confirmed through live tests of the mess-cleanup task.

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho;Oh, Kyeong-Jin;Sean, Vi-Sal;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.203-219
    • /
    • 2011
• Previously, common users just wanted to watch video content without any specific requirements or purposes. Today, however, while watching a video, users attempt to learn and discover more about the things that appear in it. Accordingly, the demand for finding multimedia or browsing information about objects of interest is spreading with the increasing use of multimedia such as video, available not only on internet-capable devices such as computers but also on smart TVs and smartphones. To meet users' requirements, labor-intensive annotation of objects in video content is inevitable. For this reason, many researchers have actively studied methods of annotating the objects that appear in video. In keyword-based annotation, related information about an object appearing in the video content is added directly, and the annotation data, including all related information about the object, must be managed individually; users have to input all of that information themselves. Consequently, when a user browses for information related to the object, only the limited resources that exist in the annotated data can be found. Moreover, annotating objects demands a huge workload from the user. To reduce this workload and minimize the effort involved in annotation, existing object-based annotation approaches attempt automatic annotation using computer vision techniques such as object detection, recognition, and tracking. With such techniques, however, the wide variety of objects appearing in video content must all be detected and recognized, and fully automated annotation still faces considerable difficulties. To overcome these difficulties, we propose a system that consists of two modules.
The first module is the annotation module, which enables many annotators to collaboratively annotate the objects in the video content so that the semantic data can be accessed using Linked Data. Annotation data managed by the annotation server are represented using an ontology so that the information can easily be shared and extended. Since the annotation data do not include all the relevant information about an object, existing objects in Linked Data and objects that appear in the video content are simply connected to each other to obtain all the related information. In other words, annotation data containing only a URI and metadata such as position, time, and size are stored on the annotation server; when a user needs other related information about the object, it is retrieved from Linked Data through the relevant URI. The second module enables viewers to browse interesting information about an object, using the annotation data collaboratively generated by many users, while watching the video. With this system, a query is automatically generated through simple user interaction, all the related information is retrieved from Linked Data, and the additional information about the object is presented to the user. In the future Semantic Web environment, our proposed system is expected to establish a better video content service environment by offering users relevant information about the objects that appear on the screen of any internet-capable device such as a PC, smart TV, or smartphone.
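The server-side record described above (a URI plus spatio-temporal metadata, with everything else fetched from Linked Data) can be sketched as follows. The field names and the DBpedia URI are illustrative assumptions, not the paper's actual schema:

```python
# A minimal annotation record: only a Linked Data URI and the object's
# position/time/size are stored; richer facts are retrieved on demand
# via the URI. Field names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Annotation:
    uri: str      # Linked Data resource identifying the object
    time: float   # seconds into the video
    x: int        # bounding-box position on screen
    y: int
    w: int        # bounding-box size
    h: int

ann = Annotation("http://dbpedia.org/resource/Eiffel_Tower",
                 time=12.5, x=320, y=90, w=80, h=120)

def sparql_describe(annotation):
    """Build the query a browsing client might send for the annotated object."""
    return f"DESCRIBE <{annotation.uri}>"

print(sparql_describe(ann))
```

Keeping the record this small is the design point: the annotation server never duplicates Linked Data content, so annotations stay valid as the linked resources are extended.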

Prediction of Target Motion Using Neural Network for 4-dimensional Radiation Therapy (신경회로망을 이용한 4차원 방사선치료에서의 조사 표적 움직임 예측)

  • Lee, Sang-Kyung;Kim, Yong-Nam;Park, Kyung-Ran;Jeong, Kyeong-Keun;Lee, Chang-Geol;Lee, Ik-Jae;Seong, Jin-Sil;Choi, Won-Hoon;Chung, Yoon-Sun;Park, Sung-Ho
    • Progress in Medical Physics
    • /
    • v.20 no.3
    • /
    • pp.132-138
    • /
    • 2009
• Studies on target motion in 4-dimensional radiotherapy are being conducted worldwide to enhance treatment outcomes and the protection of normal organs. Prediction of tumor motion can be very useful, or even essential, for free-breathing delivery systems such as respiratory gating systems and tumor tracking systems. A neural network is powerful for expressing a nonlinear time series, because its prediction is not governed by a statistical formula but learns a rule from the data. This study assessed the applicability of a neural network method to predicting tumor motion in 4-dimensional radiotherapy. The Scaled Conjugate Gradient algorithm was employed as the learning algorithm. Using respiration data from 10 patients, predictions by the neural network algorithm were compared with measurements by the real-time position management (RPM) system. The results showed that the neural network algorithm has excellent accuracy, with a maximum absolute error smaller than 3 mm, except in cases where the maximum amplitude of respiration exceeded the range of respiration used in the network's learning process. This indicates insufficient learning of the neural network for extrapolation; the problem could be solved by acquiring the full range of respiration before the learning procedure. Further work is planned to verify the feasibility of practical application to 4-dimensional treatment systems, including prediction performance under various system latencies and irregular respiration patterns.
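Time-series prediction of this kind rests on a sliding-window setup: a block of past samples forms the network input, and the sample one system-latency ahead is the target. A minimal sketch of that windowing on a synthetic breathing trace (the study's actual network and Scaled Conjugate Gradient training are not reproduced here):

```python
# Build (input window, future target) pairs for latency-ahead prediction
# of a respiratory trace. The trace is synthetic; window width and
# horizon are illustrative choices.
import math

def make_windows(signal, width, horizon):
    """Pair each window of `width` past samples with the value `horizon` steps ahead."""
    return [(signal[i:i + width], signal[i + width + horizon - 1])
            for i in range(len(signal) - width - horizon + 1)]

# Synthetic 0.25 Hz breathing trace sampled at 10 Hz.
trace = [math.sin(2 * math.pi * 0.25 * t / 10) for t in range(100)]
windows = make_windows(trace, width=5, horizon=3)

x, target = windows[0]   # 5 past samples -> value 3 steps (0.3 s) ahead
print(len(windows), len(x))
```

The extrapolation failure the abstract reports corresponds to targets falling outside the amplitude range seen in any training window, which is why capturing the full respiration range before training matters.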


Design of a Crowd-Sourced Fingerprint Mapping and Localization System (군중-제공 신호지도 작성 및 위치 추적 시스템의 설계)

  • Choi, Eun-Mi;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.9
    • /
    • pp.595-602
    • /
    • 2013
• WiFi fingerprinting is well known as an effective localization technique for indoor environments. However, this technique requires a large amount of pre-built fingerprint maps covering the entire space. Moreover, due to environmental changes, these maps have to be rebuilt or updated periodically by experts. As a way to avoid this problem, crowd-sourced fingerprint mapping has attracted much interest from researchers. This approach lets many volunteer users share the WiFi fingerprints they collect in a given environment, so the fingerprint maps can be kept up to date automatically. In most previous systems, however, individual users were asked to enter their positions manually to build their local fingerprint maps, and the systems had no principled mechanism for keeping the fingerprint maps clean by detecting and filtering out erroneous fingerprints collected from multiple users. In this paper, we present the design of a crowd-sourced fingerprint mapping and localization (CMAL) system. The proposed system can not only automatically build and/or update WiFi fingerprint maps from fingerprint collections provided by multiple smartphone users, but also simultaneously track their positions using the up-to-date maps. The CMAL system consists of multiple clients running on individual smartphones to collect fingerprints, and a central server that maintains a database of fingerprint maps. Each client contains a particle-filter-based WiFi SLAM engine that tracks the smartphone user's position and builds a local fingerprint map. The server adopts a Gaussian-interpolation-based error filtering algorithm to maintain the integrity of the fingerprint maps. Through various experiments, we show the high performance of our system.
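The core fingerprinting step can be illustrated with the simplest variant: match a live RSSI scan against a map by nearest neighbour in signal space. The CMAL system described above uses a particle filter and Gaussian interpolation instead; the AP names, positions, and RSSI values below are made up.

```python
# Nearest-neighbour WiFi fingerprint localization: pick the map position
# whose stored RSSI vector is closest to the live scan. A deliberately
# minimal stand-in for the particle-filter engine in the paper.
import math

# Hypothetical fingerprint map: (x, y) in metres -> RSSI per access point (dBm).
fingerprint_map = {
    (0.0, 0.0): {"ap1": -40, "ap2": -70, "ap3": -80},
    (5.0, 0.0): {"ap1": -60, "ap2": -50, "ap3": -75},
    (5.0, 5.0): {"ap1": -75, "ap2": -55, "ap3": -45},
}

def signal_distance(scan, fingerprint):
    """Euclidean distance in RSSI space over the APs both observations share."""
    shared = scan.keys() & fingerprint.keys()
    return math.sqrt(sum((scan[ap] - fingerprint[ap]) ** 2 for ap in shared))

def localize(scan):
    """Return the map position with the most similar fingerprint."""
    return min(fingerprint_map, key=lambda pos: signal_distance(scan, fingerprint_map[pos]))

print(localize({"ap1": -58, "ap2": -52, "ap3": -77}))
```

Crowd-sourcing attacks the weakness visible even in this sketch: the map's density and freshness determine accuracy, and erroneous contributed fingerprints (the reason for the server's Gaussian-interpolation filter) would silently corrupt the nearest-neighbour match.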

Behavior of amber fish, Seriola aureovittata released in the setnet (정치망내에 방류한 부시리, Seriola aureovittata 의 행동)

  • 신현옥;이주희
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.35 no.2
    • /
    • pp.161-169
    • /
    • 1999
• This paper describes the swimming and escaping behavior of amber fish, Seriola aureovittata, released into the first bag net of a setnet and observed with telemetry techniques. The setnet used in the experiment is composed of a leader, a fish court with a flying net, and two bag nets with ramp nets. The behavior of the fish, fitted with a 50 kHz ultrasonic depth pinger, was observed using a prototype LBL fish tracking system. The 3-D underwater position of the fish was calculated by the hyperbolic method from three receiver channels and the depth reported by the pinger. The results obtained are as follows: 1. The fish released at the sea surface dove to 15 m depth and rose back near the surface during the 5 minutes after release. Its average swimming speed during this time was 0.87 m/sec. 2. The swimming speed of the fish decreased slowly as time elapsed, and the fish showed escaping behavior toward the fish court while staying in the 1 to 7 m depth layer near the ramp net. The average speed during this time was 0.52 m/sec. 3. During the 25 minutes after hauling of the net began, the fish swam faster than before hauling and repeatedly showed horizontal escaping behavior from the first ramp net to the second. Vertically, the fish moved up and down between the sea surface and 20 m depth. After this, the fish returned to the first ramp net and again attempted to escape toward the fish court even though hauling continued; it was found that the fish escaped from the first ramp net to the fish court while hauling was carried out. The average speed after hauling began was 0.72 m/sec, a 38.5% increase over the speed immediately before hauling, and ranged from 0.44 to 0.82 m/sec until the fish escaped the first bag net. The average swimming speed during the observation was 0.67 m/sec (2.2 body lengths per second).


Study of Respiration Simulating Phantom using Thermocouple-based Respiration Monitoring Mask (열전쌍마스크를 이용한 호흡모사팬톰 연구)

  • Lim, Sang-Wook;Park, Sung-Ho;Yi, Byong-Yong;Lee, Sang-Hoon;Cho, Sam-Ju;Huh, Hyun-Do;Shin, Seong-Soo;Kim, Jong-Hoon;Lee, Sang-Wook;Kwon, Soo-Il;Choi, Eun-Kyung;Ahn, Seung-Do
    • Radiation Oncology Journal
    • /
    • v.23 no.4
    • /
    • pp.217-222
    • /
    • 2005
• Purpose: To develop a respiration-simulating phantom driven by a thermocouple-based mask for evaluating 4D radiotherapy techniques such as gated radiotherapy, breathing-control radiotherapy, and dynamic tumor tracking radiotherapy. Materials and Methods: A respiration monitoring mask (ReMM) with a thermocouple was developed to monitor patients' irregular respiration. The signal from the ReMM drives the simulating phantom in real time to reproduce the patient's organ motion. The organ motion and the phantom motion were compared with their respiratory curves to evaluate the simulating phantom. The ReMM was used to measure the patients' respiration, the movement of the simulating phantom was measured using $RPM^{(R)}$, and a fluoroscope was used to monitor the patients' diaphragm motion. Results: The standard deviation of the discrepancy between the respiratory curve and the organ motion was 8.52% of the motion range. Conclusion: Patients felt comfortable with the ReMM, and the signal from the ReMM correlated strongly with the organ motion. The phantom simulates the organ motion in real time according to the respiratory signal from the ReMM. The simulating phantom with the ReMM is expected to be useful for verifying 4D radiotherapy.