• Title/Summary/Keyword: Image-to-Image Translation

Shoulder Arthrokinematics of Collegiate Ice Hockey Athletes Based on the 3D-2D Model Registration Technique

  • Jeong, Hee Seong;Song, Junbom;Lee, Inje;Kim, Doosup;Lee, Sae Yong
    • Korean Journal of Applied Biomechanics
    • /
    • v.31 no.3
    • /
    • pp.155-161
    • /
    • 2021
  • Objective: There is a lack of studies using 3D-2D image registration techniques to examine the mechanism of shoulder injury in ice hockey players. This study aimed to analyze in vivo 3D glenohumeral joint arthrokinematics in collegiate ice hockey athletes and to compare shoulder scaption with and without a hockey stick using the 3D-2D image registration technique. Method: We recruited 12 male elite ice hockey players (age, 19.88 ± 0.65 years). For arthrokinematic analysis of the shoulder abduction movements commonly involved in the injury pathogenesis of ice hockey players, participants abducted their dominant arm along the scapular plane and then repeated the same motion while gripping a stick, under C-arm fluoroscopy at 16 frames per second. Computed tomography (CT) scans of the shoulder complex were obtained with a 0.6-mm slice pitch. Humeral translation distances, scapular upward rotation, anterior-posterior tilt, internal-external rotation angles, and the scapulohumeral rhythm (SHR) ratio of glenohumeral (GH) joint kinematics were computed using customized MATLAB code. Results: Compared with the bare hand, the humerus in the stick condition translated more anteriorly and more superiorly until the abduction angle reached 40°. When the GH joint in the stick condition was at maximal scapular abduction, the scapula was externally rotated 2~5° relative to 0°. The SHR ratio at 40° of abduction along the scapular plane showed a statistically significant difference between the two conditions (p < 0.05). Conclusion: With the arm loaded by the stick, the humeral and scapular kinematics showed a significant correlation in the initial section of the SHR. Although these correlations may be difficult to assess in clinical settings, stick loading can produce differences in scapulohumeral joint movement in ice hockey athletes with inherent instability.
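The SHR ratio reported above is conventionally the ratio of glenohumeral elevation to scapular upward rotation over an abduction increment. Below is a minimal Python sketch of that conventional computation, assuming per-increment humerothoracic and scapular upward-rotation angles are already available from the registration step; the paper's own MATLAB implementation and exact SHR definition may differ.

```python
import numpy as np

def scapulohumeral_rhythm(humerothoracic_deg, scapular_upward_rotation_deg):
    """Incremental SHR: glenohumeral contribution divided by scapular upward rotation.

    The glenohumeral contribution is approximated as the change in the
    humerothoracic angle minus the change in scapular upward rotation.
    """
    ht = np.asarray(humerothoracic_deg, dtype=float)
    st = np.asarray(scapular_upward_rotation_deg, dtype=float)
    d_ht, d_st = np.diff(ht), np.diff(st)
    gh = d_ht - d_st                                   # glenohumeral contribution per increment
    return np.where(np.abs(d_st) > 1e-6, gh / d_st, np.nan)

# Toy angles sampled every 10 degrees of scapular-plane abduction (illustrative values only).
print(scapulohumeral_rhythm([0, 10, 20, 30, 40, 50], [0, 2, 5, 9, 14, 20]))
```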

Rotation-Scale-Translation-Intensity Invariant Algorithm for Fingerprint Identification (RSTI 불변 지문인식 알고리즘)

  • Kim, Hyun;Kim, Hak-Il
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.6
    • /
    • pp.88-100
    • /
    • 1998
  • In this paper, an algorithm for a real-time automatic fingerprint identification system is proposed. The fingerprint features are extracted by considering distinct local characteristics (such as intensity and image-quality differences) of fingerprint images, which makes the algorithm properly adaptive to various image acquisition methods. The matching technique is designed to be invariant to rotation, scaling, and translation (RST) changes while remaining capable of real-time processing. Classification of fingerprints is performed based on the ridge flow and the relations among singular points such as cores and deltas. The developed fingerprint identification algorithm has been applied to various sets of fingerprint images: a set from NIST (National Institute of Standards and Technology, USA), a pressed-fingerprint database constructed according to the Korean population distribution in sex, age, and occupation, and a set of rolled-then-scanned fingerprint images. The overall performance of the algorithm has been analyzed and evaluated to a false rejection rate of 0.07% while holding the false acceptance rate at 0%.
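The abstract does not spell out the matching algorithm itself; as a generic illustration of RST-invariant matching (not the paper's method), the sketch below fits a least-squares similarity transform (scale, rotation, translation) to corresponding minutiae pairs and counts how many minutiae align within a tolerance. Function names and the tolerance are assumptions for illustration only.

```python
import numpy as np

def estimate_similarity_transform(src, dst):
    """Least-squares similarity transform mapping src -> dst (Umeyama-style fit).

    src, dst: (N, 2) arrays of corresponding minutiae coordinates.
    Returns (scale s, rotation R, translation t) so that dst ≈ s * src @ R.T + t.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(xd.T @ xs)
    D = np.eye(2)
    if np.linalg.det(U @ Vt) < 0:          # guard against reflections
        D[1, 1] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (xs ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def match_count(src, dst, s, R, t, tol=6.0):
    """Count minutiae whose aligned position falls within `tol` pixels of some dst minutia."""
    aligned = s * src @ R.T + t
    d = np.linalg.norm(aligned[:, None, :] - dst[None, :, :], axis=2)
    return int((d.min(axis=1) < tol).sum())
```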


A Unicode based Deep Handwritten Character Recognition model for Telugu to English Language Translation

  • BV Subba Rao;J. Nageswara Rao;Bandi Vamsi;Venkata Nagaraju Thatha;Katta Subba Rao
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.101-112
    • /
    • 2024
  • Telugu is considered the fourth most used language in India, especially in the regions of Andhra Pradesh, Telangana, and Karnataka, and its number of speakers is also growing internationally. The language comprises different dependent and independent vowels, consonants, and digits. Despite this, Telugu Handwritten Character Recognition (HCR) has not advanced accordingly. HCR is a technique for converting a document image into editable text, which can be used in many other applications; it reduces time and effort by avoiding the need to retype material from scratch. In this work, a Unicode-based Handwritten Character Recognition (U-HCR) model is developed for translating handwritten Telugu characters into the English language. Using the Centre of Gravity (CG), the model divides a compound character into individual characters with the help of Unicode values. For training this model, we used both online and offline Telugu character datasets. To extract features from the scanned image, we used a convolutional neural network along with machine learning classifiers such as Random Forest and Support Vector Machine. Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMS-P), and Adaptive Moment Estimation (ADAM) optimizers are used in this work to enhance the performance of U-HCR and to reduce the loss function value during CNN training. On both online and offline datasets, the proposed model showed promising results, with accuracies of 90.28% for SGD, 96.97% for RMS-P, and 93.57% for ADAM.
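As a rough illustration of the pipeline described above (a CNN as feature extractor, with Random Forest and SVM classifiers on top), here is a minimal Python sketch; the network architecture, image size, and random stand-in data are assumptions, not the authors' configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

class CharFeatureCNN(nn.Module):
    """Tiny CNN used only as a feature extractor for 32x32 grayscale glyph images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
    def forward(self, x):
        return self.features(x).flatten(1)   # (N, 32*8*8) feature vectors

# Toy stand-in data: 200 random 32x32 "character" images across 10 classes.
images = torch.rand(200, 1, 32, 32)
labels = np.random.randint(0, 10, size=200)

with torch.no_grad():
    feats = CharFeatureCNN()(images).numpy()

svm = SVC(kernel="rbf").fit(feats[:150], labels[:150])
rf = RandomForestClassifier(n_estimators=100).fit(feats[:150], labels[:150])
print("SVM accuracy:", svm.score(feats[150:], labels[150:]))
print("RF  accuracy:", rf.score(feats[150:], labels[150:]))
```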

Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.5
    • /
    • pp.35-44
    • /
    • 2007
  • Since an omnidirectional camera system with a very large field of view can capture much information about the environment from only a few images, various studies on calibration and 3D reconstruction using omnidirectional images have been actively presented. Most line segments of man-made objects are projected to contours under the omnidirectional camera model. Therefore, the corresponding contours among image sequences are useful for computing the camera transformations, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from the corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between epipolar planes and back-projected vectors from each corresponding point. The final parameters are then computed by minimizing the distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrated that our algorithm can achieve precise contour matching and camera motion estimation.
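A minimal sketch of the first (coarse) step under the usual calibrated-ray formulation: with the essential matrix E = [t]x R, each residual is the angle between a back-projected ray in the second view and the epipolar plane defined by its correspondence in the first. The parameterization, solver, and random toy rays below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def skew(t):
    """3x3 cross-product matrix [t]x."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def angular_residuals(params, r1, r2):
    """Angle between each ray r2 and the epipolar plane with normal E @ r1, where E = [t]x R."""
    rotvec, t = params[:3], params[3:]
    R = Rotation.from_rotvec(rotvec).as_matrix()
    t = t / (np.linalg.norm(t) + 1e-12)            # translation is recoverable only up to scale
    E = skew(t) @ R
    n = (E @ r1.T).T                               # epipolar-plane normals, one per correspondence
    sin_dev = np.sum(n * r2, axis=1) / (np.linalg.norm(n, axis=1) + 1e-12)
    return np.arcsin(np.clip(sin_dev, -1.0, 1.0))

# Toy unit rays standing in for back-projected contour correspondences.
rng = np.random.default_rng(0)
r1 = rng.normal(size=(50, 3)); r1 /= np.linalg.norm(r1, axis=1, keepdims=True)
r2 = rng.normal(size=(50, 3)); r2 /= np.linalg.norm(r2, axis=1, keepdims=True)

x0 = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])      # identity rotation, translation along x
coarse = least_squares(angular_residuals, x0, args=(r1, r2))
print("coarse rotvec:", coarse.x[:3],
      "translation direction:", coarse.x[3:] / np.linalg.norm(coarse.x[3:]))
```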

CycleGAN-based Object Detection under Night Environments (CycleGAN을 이용한 야간 상황 물체 검출 알고리즘)

  • Cho, Sangheum;Lee, Ryong;Na, Jaemin;Kim, Youngbin;Park, Minwoo;Lee, Sanghwan;Hwang, Wonjun
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.1
    • /
    • pp.44-54
    • /
    • 2019
  • Recently, image-based object detection has made great progress with the introduction of the Convolutional Neural Network (CNN). Many approaches, such as Region-based CNN, Fast R-CNN, and Faster R-CNN, have been proposed to achieve better detection performance, and YOLO has shown the best performance when both accuracy and computational complexity are considered. However, these data-driven detection methods, including YOLO, share the fundamental problem that they cannot guarantee good performance without a large training database. In this paper, we propose a data sampling method using CycleGAN to solve this problem, which can convert styles while retaining the characteristics of a given input image. We generate the missing data samples needed to train a more robust object detector without the effort of collecting an additional database. We report extensive experiments on daytime and nighttime road images and validate that the proposed method can improve nighttime object detection accuracy without training on nighttime object databases: the daytime training images are converted into synthesized nighttime images, and the detection model is trained with the real daytime images together with the synthesized nighttime images.
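As a sketch of how such augmentation could be wired up (not the authors' code), the snippet below runs every daytime training image through a day-to-night generator and saves the synthetic night images alongside the originals; a placeholder module stands in for a trained CycleGAN generator, and the directory names are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import transforms
from PIL import Image
from pathlib import Path

class PlaceholderGenerator(nn.Module):
    """Stand-in for a trained CycleGAN day->night generator loaded from a checkpoint."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
to_image = transforms.ToPILImage()

def synthesize_night_images(day_dir, out_dir, generator):
    """Translate every daytime image in day_dir and save the synthetic night version."""
    out = Path(out_dir); out.mkdir(parents=True, exist_ok=True)
    generator.eval()
    for path in sorted(Path(day_dir).glob("*.jpg")):
        x = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            y = generator(x).squeeze(0).clamp(0, 1)
        to_image(y).save(out / f"night_{path.name}")
        # Bounding-box labels can be reused unchanged, since the translation preserves geometry.

# Hypothetical directories; the detector is then trained on both real day and synthetic night sets.
# synthesize_night_images("data/day", "data/synthetic_night", PlaceholderGenerator())
```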

Study of Contents Localization Case on the Game 'Paper, Please': Based on the Korean and North Korean Translations (게임 'Paper, Please'의 번역을 통한 콘텐츠 현지화 사례 연구: 한국어와 문화어 번역의 차이를 중심으로)

  • Won, Ho-Hyeuk;Gu, Bon-Hyeok;Kim, Hyoung-Youb
    • Journal of Korea Game Society
    • /
    • v.19 no.2
    • /
    • pp.145-160
    • /
    • 2019
  • In this research, we examine the differences between the Korean and the North Korean translations of the game 'Paper, Please', and through this we consider the effect of language and image on localization. The North Korean language and cultural content in 'Paper, Please' are evaluated favorably by many players as depicting the real life of North Korea, even though there are some errors, such as loanword translations and the use of the anachronistic symbol 'Kaksital' as a secret organization. Through this research, we found that, in the absence of critical errors, players can concentrate on the cultural content conveyed by the images and motifs and enjoy the game.

3D Depth Information Extraction Algorithm Based on Motion Estimation in Monocular Video Sequence (단안 영상 시퀸스에서 움직임 추정 기반의 3차원 깊이 정보 추출 알고리즘)

  • Park, Jun-Ho;Jeon, Dae-Seong;Yun, Yeong-U
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.549-556
    • /
    • 2001
  • The general problem of recovering 3D from 2D imagery requires depth information for each picture element, and the manual creation of such 3D models is time-consuming and expensive. The goal of this paper is a simplified depth estimation algorithm that extracts the depth information of every region from a monocular image sequence with camera translation, so that 3D video can be implemented in real time. The paper is based on the property that, under camera translation, the motion of every point within the image depends on its depth. Full-search motion estimation based on a block matching algorithm is used in the first step, and the motion vectors are then compensated for the effects of camera rotation and zooming. We introduce an algorithm that estimates object motion by analyzing the monocular motion picture and also calculates the average depth of the frame and the relative depth of each region with respect to that average. Simulation results show that the estimated depth of regions belonging to near or distant objects is in accord with the relative depth that the human visual system recognizes.
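A minimal sketch of the two ingredients named above: full-search block matching between consecutive frames, and a relative depth map obtained by assuming that, under pure camera translation, apparent motion is inversely proportional to depth. Block size, search range, and the toy frames are illustrative assumptions, and rotation/zoom compensation is omitted.

```python
import numpy as np

def block_matching(prev, curr, block=16, search=8):
    """Full-search block matching: per-block motion vectors (dy, dx) from prev to curr."""
    h, w = prev.shape
    vectors = []
    for y in range(0, h - block + 1, block):
        row = []
        for x in range(0, w - block + 1, block):
            ref = curr[y:y+block, x:x+block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        sad = np.abs(prev[yy:yy+block, xx:xx+block] - ref).sum()
                        if sad < best:
                            best, best_v = sad, (dy, dx)
            row.append(best_v)
        vectors.append(row)
    return np.array(vectors, dtype=float)

def relative_depth(vectors, eps=1e-3):
    """Under pure camera translation, apparent motion is inversely proportional to depth."""
    mag = np.linalg.norm(vectors, axis=2)
    depth = 1.0 / (mag + eps)
    return depth / depth.mean()            # depth of each block relative to the frame average

prev = np.random.rand(64, 64); curr = np.roll(prev, shift=2, axis=1)  # toy translated frames
print(relative_depth(block_matching(prev, curr)))
```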


Evaluation of Knee Joint after Double-Bundle ACL Reconstruction with Three-Dimensional Isotropic MRI

  • Jung, Min ju;Jeong, Yu Mi;Lee, Beom Goo;Sim, Jae Ang;Choi, Hye-Young;Kim, Jeong Ho;Lee, Sheen-Woo
    • Investigative Magnetic Resonance Imaging
    • /
    • v.20 no.2
    • /
    • pp.95-104
    • /
    • 2016
  • Purpose: To evaluate the knee joint after double-bundle anterior cruciate ligament (ACL) reconstruction with three-dimensional (3D) isotropic magnetic resonance (MR) imaging, and to directly compare the ACL graft findings on 3D MR with the clinical results. Materials and Methods: From January 2009 to December 2014, we retrospectively reviewed the MRIs of 39 patients who had undergone ACL reconstruction with the double-bundle technique. The subjects were examined using a 3D isotropic proton-density sequence and routine two-dimensional (2D) sequences on a 3.0T scanner. The MR images were qualitatively evaluated for intraarticular curvature, graft tear, bony impingement, intraosseous tunnel cyst, and synovitis of the anteromedial and posterolateral bundles (AMB, PLB). In addition, anterior tibial translation, PCL angle, and PCL ratio were quantitatively measured. KT arthrometric values for anterior tibial translation were reviewed as positive or negative. The second-look arthroscopy results, including tear and laxity, were reviewed. Results: Significant correlations were found between an AMB tear on 3D isotropic proton-density MR images and arthroscopically proven AMB tear or laxity (P < 0.05). A significant correlation was also observed between an increased PCL ratio on 3D isotropic MRI and arthroscopic findings such as tear and laxity of the grafts (P < 0.05). KT arthrometric results were significantly correlated with AMB tears (P < 0.05) and tibial tunnel cysts (P < 0.05). Conclusion: An AMB tear on 3D isotropic MRI was correlated with arthroscopic results qualitatively and quantitatively. 3D isotropic MRI findings can aid the evaluation of ACL grafts after double-bundle reconstruction.

Very short-term rainfall prediction based on radar image learning using deep neural network (심층신경망을 이용한 레이더 영상 학습 기반 초단시간 강우예측)

  • Yoon, Seongsim;Park, Heeseong;Shin, Hongjoon
    • Journal of Korea Water Resources Association
    • /
    • v.53 no.12
    • /
    • pp.1159-1172
    • /
    • 2020
  • This study applied deep convolutional neural networks based on U-Net and SegNet, trained on a long period of weather radar data, to very short-term rainfall prediction, and the results were compared with and evaluated against a translation (extrapolation) model. For training and validation of the deep neural networks, Mt. Gwanak and Mt. Gwangdeoksan radar data were collected from 2010 to 2016 and converted to gray-scale image files in HDF5 format with a 1 km spatial resolution. The deep neural network model was trained to predict the precipitation field 10 minutes ahead from four consecutive radar images, and a recursive method of repeated forecasts was applied to reach a lead time of 60 minutes with the pretrained model. To evaluate the performance of the deep neural network prediction model, 24 rain cases in 2017 were forecast up to 60 minutes in advance. Evaluating the predictions with the mean absolute error (MAE) and the critical success index (CSI) at thresholds of 0.1, 1, and 5 mm/hr, the deep neural network model performed better at the 0.1 and 1 mm/hr thresholds in terms of MAE, and better than the translation model up to a lead time of 50 minutes in terms of CSI. In particular, although the deep neural network model generally outperformed the translation model for weak rainfall of 5 mm/hr or less, it showed limitations in predicting distinct high-intensity precipitation at the 5 mm/hr threshold. The longer the lead time, the greater the spatial smoothing, which reduces the accuracy of the rainfall prediction. The translation model turned out to be superior in predicting the exceedance of higher intensity thresholds (> 5 mm/hr) because it preserves distinct precipitation characteristics, but its rainfall position tends to shift incorrectly. This study is expected to be helpful for improving radar rainfall prediction models using deep neural networks in the future. In addition, the massive weather radar dataset established in this study will be provided through open repositories for use in subsequent studies.
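A minimal sketch of the recursive forecasting scheme described above (not the authors' network): a model mapping four consecutive radar frames to the frame 10 minutes ahead is applied repeatedly, feeding each prediction back into the input window to reach a 60-minute lead time, and CSI is computed at a chosen threshold. The placeholder network, grid size, and random data are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

class TinyUNetLike(nn.Module):
    """Placeholder for a U-Net/SegNet-style network: 4 input frames -> 1 output frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def recursive_forecast(model, frames, steps=6):
    """frames: (4, H, W) latest radar images; returns `steps` future frames, 10 minutes apart."""
    window = torch.as_tensor(frames, dtype=torch.float32).unsqueeze(0)   # (1, 4, H, W)
    outputs = []
    model.eval()
    with torch.no_grad():
        for _ in range(steps):
            nxt = model(window)                                          # (1, 1, H, W)
            outputs.append(nxt.squeeze().numpy())
            window = torch.cat([window[:, 1:], nxt], dim=1)              # slide the 4-frame window
    return np.stack(outputs)

def csi(pred, obs, threshold):
    """Critical success index = hits / (hits + misses + false alarms)."""
    p, o = pred >= threshold, obs >= threshold
    hits = np.logical_and(p, o).sum()
    misses = np.logical_and(~p, o).sum()
    false_alarms = np.logical_and(p, ~o).sum()
    return hits / max(hits + misses + false_alarms, 1)

# Toy usage with random data standing in for 1-km gridded rain-rate fields.
forecasts = recursive_forecast(TinyUNetLike(), np.random.rand(4, 64, 64))
print(forecasts.shape, csi(forecasts[0], np.random.rand(64, 64), threshold=0.1))
```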

Design and Implementation for Presentation Animation Contents Based on the Mobile (모바일 기반의 표현 애니메이션 컨텐츠의 설계 및 구현)

  • Hong, Sung-Soo;Kim, Woo-Sung
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.7
    • /
    • pp.956-966
    • /
    • 2004
  • Korean animation has enjoyed the brisk formation and establishment of world-class infrastructure for the last several years, without unified titles or concepts, under the name of a national strategic project in the age of the digital image. Digital animation has also come to be newly appreciated as one of the greatest money-making businesses, beyond a non-educational and frivolous culture, and as a medium closely tied to the present time. A great portion of popular image media has been occupied by animation for the last 30 years. In this paper, we propose a motion algorithm using animation technology, developed for educational purposes and accessible through the Internet. For instance, in the Cyber Clam Museum with 1,000 gesture contents, the visual processes were used to design a screen with a realistic image and to create an animation that can be shown at 360°, and transformations such as translation, rotation, and scaling can be applied to the image interactively for convenient and effective viewing.
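As a generic illustration (not the paper's mobile implementation) of the interactive translation, rotation, and scaling mentioned at the end of the abstract, the sketch below combines the three operations into a single 2x3 affine matrix with OpenCV; an interactive viewer would simply update the parameters from user input and redraw.

```python
import cv2
import numpy as np

def transform_view(image, angle_deg=0.0, scale=1.0, tx=0.0, ty=0.0):
    """Rotate about the image centre by angle_deg, scale, then translate by (tx, ty) pixels."""
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, scale)  # rotation + scaling
    M[0, 2] += tx                                                  # add translation
    M[1, 2] += ty
    return cv2.warpAffine(image, M, (w, h))

# Toy usage on a synthetic image.
img = np.zeros((200, 200, 3), dtype=np.uint8)
cv2.rectangle(img, (60, 60), (140, 140), (0, 255, 0), -1)
view = transform_view(img, angle_deg=30, scale=1.2, tx=15, ty=-10)
```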
