• Title/Summary/Keyword: VisualTran

Search results: 13

Investigating Arithmetic Mean, Harmonic Mean, and Average Speed through Dynamic Visual Representations

  • Vui, Tran
    • Research in Mathematical Education / v.18 no.1 / pp.31-40 / 2014
  • Working with dynamic visual representations can help students working with computers discover new mathematical ideas. Students translate among multiple representations as a strategy for investigating non-routine problems and exploring possible solutions in mathematics classrooms. In this paper, we use area models as new representations with which our secondary students investigate three problems related to the average speed of a particle. Students express their ideas while investigating the arithmetic mean, harmonic mean, and average speed through the dynamic figures they create. These figures make full use of dynamic geometry software.
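
A minimal numerical sketch (not from the paper) of the identity the students investigate: over two legs of equal distance, the average speed equals the harmonic mean of the two speeds, while the arithmetic mean overestimates it. The speeds and distance below are illustrative values.

```python
# Average speed over two equal-distance legs vs. arithmetic/harmonic mean.
def arithmetic_mean(a: float, b: float) -> float:
    return (a + b) / 2

def harmonic_mean(a: float, b: float) -> float:
    return 2 * a * b / (a + b)

v1, v2 = 40.0, 60.0              # speeds (km/h) on two legs of equal distance
d = 120.0                        # length of each leg (km); cancels out below
avg_speed = 2 * d / (d / v1 + d / v2)

print(avg_speed)                 # 48.0
print(harmonic_mean(v1, v2))     # 48.0 -> average speed IS the harmonic mean
print(arithmetic_mean(v1, v2))   # 50.0 -> arithmetic mean overestimates it
```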

Method for Local Contrast Control in DCT Domain (DCT영역에서의 국부 Contrast 조절 기법)

  • Tran, Nhat Huy;Minh, Trung Bui;Kim, Won-Ha;Kim, Seon-Guk
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2013.11a / pp.8-11 / 2013
  • We implement the foveation and frequency-sensitivity features of the human visual system in the discrete cosine transform (DCT) domain. The resolution of human visual perception decays with distance from the point of eye focus, which is known as the foveation property, and the middle-frequency components give more pleasant image quality to humans than the low- and high-frequency components do, which is the frequency-sensitivity property of the human visual system. To satisfy the foveation property, we enhance local contrast in the focused regions and smooth local contrast in the non-focused regions in the DCT domain, without introducing blocking or ringing artifacts. Moreover, the energy of each DCT frequency component is modified to a varying degree to fulfill the frequency-sensitivity property. Subjective and objective evaluations verify that the proposed method improves human perceptual visual quality.
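
As a rough illustration of contrast control in the DCT domain (a sketch of the general idea only, not the authors' method): scaling a block's AC coefficients while preserving its DC term changes local contrast without shifting the block mean, which is one way to limit blocking artifacts. The 8x8 block size and gain values are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def scale_block_contrast(block: np.ndarray, gain: float) -> np.ndarray:
    """Scale the AC coefficients of one block to raise or lower local contrast.

    gain > 1 boosts contrast (e.g., for eye-focused regions); gain < 1 smooths.
    The DC coefficient (block mean) is left untouched to preserve mean luminance.
    """
    coef = dctn(block, norm="ortho")
    dc = coef[0, 0]
    coef *= gain        # scale all coefficients...
    coef[0, 0] = dc     # ...then restore the DC term
    return idctn(coef, norm="ortho")

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))            # stand-in 8x8 luminance block
focused = scale_block_contrast(block, 1.3)     # enhance a focused block
peripheral = scale_block_contrast(block, 0.7)  # smooth a non-focused block
```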

A Quality Comparison of English Translations of Korean Literature between Human Translation and Post-Editing

  • LEE, IL-JAE
    • International Journal of Advanced Culture Technology / v.6 no.4 / pp.165-171 / 2018
  • As artificial intelligence (AI) plays a crucial role in machine translation (MT), which has loomed large as a new translation paradigm, concerns have also arisen over whether MT can produce a product of the same quality as human translation (HT). In fact, several experimental MT studies report cases in which the post-edited MT product (post-editing, PE) was rated equal to HT or often superior ([1],[2],[6]). Motivated by those studies of translation quality between HT and PE, this study set up an experimental situation in which Korean literature was translated into English, comparatively, by 3 translators and 3 post-editors. Afterwards, a group of 3 other Koreans checked the accuracy of the HT and PE output, and a group of 3 native English speakers scored its fluency. The findings are: (1) HT took at least twice as long as PE. (2) Both HT and PE produced similar error types; mistranslation and omission were the major errors for accuracy, and grammar for fluency. (3) HT turned out to be inferior to PE in both accuracy and fluency.

Design of Foveated Frequency Sensitivity (Foveated Frequency Sensitivity의 구현)

  • Tran, Nhat Huy;Bui, Minh Trung;Kim, Wonha
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.11a / pp.248-251 / 2014
  • We develop a signal processing method that implements the variation of human perception across frequency and space. Human visual sensitivity varies with frequency components, and perceivable resolution diminishes with distance from the point of eye focus. To realize the frequency sensitivity, we developed a signal-direction-adaptive multiband energy scaling method that weights the frequency components. Low-pass filtering is then designed on top of this energy scaling method to diminish perceivable resolution with distance from the point of eye focus. The developed method not only enhances the frequency components of image signals in the eye-focused region but also smooths non-perceivable image detail in non-focused regions. Subjective and objective evaluations verify that the proposed method improves human perceptual visual quality.
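
One plausible reading of the foveation component, sketched below under stated assumptions: blur grows with eccentricity from the fixation point, approximated here by blending a sharp and a Gaussian-blurred copy of a grayscale image. The falloff constants are invented for illustration; the paper's direction-adaptive multiband energy scaling is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image: np.ndarray, fx: int, fy: int, sigma: float = 4.0) -> np.ndarray:
    """Reduce perceivable resolution with distance from the fixation (fx, fy).

    Cheap approximation for a 2D grayscale image: blend the sharp image with a
    blurred copy using a weight that grows with normalized eccentricity.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xs - fx, ys - fy) / np.hypot(w, h)  # 0 at fovea, ~1 far away
    weight = np.clip(ecc / 0.5, 0.0, 1.0)
    blurred = gaussian_filter(image.astype(float), sigma=sigma)
    return (1.0 - weight) * image + weight * blurred

img = np.zeros((64, 64)); img[::8] = 255.0  # stand-in test pattern
out = foveate(img, fx=32, fy=32)            # sharp center, smoothed periphery
```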

Case Analysis of the performance contents using virtual reality technology (가상현실 기술을 활용한 공연콘텐츠의 사례분석연구)

  • YOO, YOUNGJAE
    • Journal of the Korea Convergence Society / v.8 no.5 / pp.145-153 / 2017
  • As success stories of using virtual reality (VR) have become more prevalent, interest in performance-related technology has rapidly increased around the world. Performances such as Cirque du Soleil have had much success using video technology, and such VR applications have enabled experiences built on digital image technology. However, critics have claimed that the completeness and diversity of visual content are reduced due to insufficient storylines, weak spatial composition, and use of only part of the visual field. Therefore, the design of a performance using digital image technology should consider how the characteristics of the production stage differ from those of real-world performances. In this study, the visual space of the stage, the methods of creating space on the stage, and the movement of the performer and its use in the performance were analyzed. Through this, the study showed how the limitations of the traditional stage can be carried over into the space and time of the image, opening up the possibilities of trans-media.

A Framework for Computer Vision-aided Construction Safety Monitoring Using Collaborative 4D BIM

  • Tran, Si Van-Tien;Bao, Quy Lan;Nguyen, Truong Linh;Park, Chansik
    • International conference on construction engineering and project management / 2022.06a / pp.1202-1208 / 2022
  • Techniques based on computer vision are becoming increasingly important in construction safety monitoring. AI algorithms can automatically identify conceivable hazards and give feedback to stakeholders. However, a construction site retains various potential hazard situations throughout a project. Because of site complexity, many visual devices participate in the monitoring process simultaneously, which makes developing and operating the corresponding AI detection algorithms challenging. Safety information resulting from computer vision also needs to be organized before it is delivered to safety managers. To address these issues, this study proposes a framework for computer vision-aided construction safety monitoring using collaborative 4D BIM information, called CSM4D. The suggested framework consists of two modules: (1) the collaborative BIM information extraction module (CBIE), which extracts the spatial-temporal information and potential hazard scenarios of a specific activity, and (2) the computer vision-aided safety monitoring module (CVSM), which uses that information to apply the appropriate algorithms at the right workplace during the project. The proposed framework is expected to aid safety monitoring using computer vision and 4D BIM.
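
A toy sketch of how the two modules could hand information to each other; every name, field, and mapping below is hypothetical, since the abstract does not specify data structures.

```python
from dataclasses import dataclass, field

@dataclass
class HazardContext:
    """Spatial-temporal output of CBIE (all field names are hypothetical)."""
    activity: str                 # e.g., "steel erection"
    zone: str                     # workplace location from the 4D BIM model
    start_day: int
    end_day: int
    scenarios: list[str] = field(default_factory=list)

def cbie_extract(day: int, schedule: list[HazardContext]) -> list[HazardContext]:
    """CBIE: extract the activities (and their hazard scenarios) active today."""
    return [c for c in schedule if c.start_day <= day <= c.end_day]

def cvsm_select(contexts: list[HazardContext]) -> dict[str, list[str]]:
    """CVSM: map each zone to the detectors its hazard scenarios call for."""
    detector_for = {
        "fall from height": "guardrail-and-harness detector",
        "struck-by": "worker-equipment proximity detector",
    }
    return {c.zone: [detector_for[s] for s in c.scenarios if s in detector_for]
            for c in contexts}

schedule = [HazardContext("steel erection", "Zone A", 10, 20, ["fall from height"])]
print(cvsm_select(cbie_extract(day=12, schedule=schedule)))
```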

Multimodal Image Fusion with Human Pose for Illumination-Robust Detection of Human Abnormal Behaviors (조명을 위한 인간 자세와 다중 모드 이미지 융합 - 인간의 이상 행동에 대한 강력한 탐지)

  • Cuong H. Tran;Seong G. Kong
    • Annual Conference of KIPS / 2023.11a / pp.637-640 / 2023
  • This paper presents multimodal image fusion with human pose for detecting abnormal human behaviors under low illumination, which is challenging because of the limited visibility of the objects of interest in the scene. Multimodal image fusion combines visual information in the visible spectrum with thermal radiation information in the long-wave infrared spectrum. We propose an abnormal-event detection scheme based on the fused multimodal image and human poses, using keypoints to characterize the action of the human body. Our method assumes that human behaviors are well correlated with body keypoints such as the shoulders, elbows, wrists, and hips. In detail, we extract the keypoint coordinates of human targets in the fused multimodal videos and use the coordinate values as inputs to train a multilayer perceptron network that classifies human behaviors as normal or abnormal. Our experiment demonstrates a significant result on a multimodal imaging dataset: the proposed model can capture the complex distribution patterns of both normal and abnormal behaviors.
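
A minimal sketch of the classification stage as described (keypoint coordinates in, normal/abnormal out), using scikit-learn's MLPClassifier on random stand-in data; the 17-keypoint layout, layer sizes, and data are assumptions, and real inputs would come from pose estimation on the fused visible/LWIR frames.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Each sample: (x, y) coordinates of 17 COCO-style body keypoints, flattened.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 17 * 2))        # stand-in keypoint coordinates
y = rng.integers(0, 2, size=500)          # 0 = normal, 1 = abnormal (stand-in)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")  # ~0.5 on random labels
```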

Machine Learning-Based Prediction of COVID-19 Severity and Progression to Critical Illness Using CT Imaging and Clinical Data

  • Subhanik Purkayastha;Yanhe Xiao;Zhicheng Jiao;Rujapa Thepumnoeysuk;Kasey Halsey;Jing Wu;Thi My Linh Tran;Ben Hsieh;Ji Whae Choi;Dongcui Wang;Martin Vallieres;Robin Wang;Scott Collins;Xue Feng;Michael Feldman;Paul J. Zhang;Michael Atalay;Ronnie Sebro;Li Yang;Yong Fan;Wei-hua Liao;Harrison X. Bai
    • Korean Journal of Radiology / v.22 no.7 / pp.1213-1224 / 2021
  • Objective: To develop a machine learning (ML) pipeline based on radiomics to predict Coronavirus Disease 2019 (COVID-19) severity and future deterioration to critical illness using CT and clinical variables. Materials and Methods: Clinical data were collected from 981 patients from a multi-institutional international cohort with real-time polymerase chain reaction-confirmed COVID-19. Radiomics features were extracted from the patients' chest CT scans. The data of the cohort were randomly divided into training, validation, and test sets in a 7:1:2 ratio. An ML pipeline consisting of a model to predict severity and a time-to-event model to predict progression to critical illness was trained on the radiomics features and clinical variables. The receiver operating characteristic area under the curve (ROC-AUC), concordance index (C-index), and time-dependent ROC-AUC were calculated to determine model performance, which was compared with consensus CT severity scores obtained by radiologists' visual interpretation. Results: Among the 981 patients with confirmed COVID-19, 274 developed critical illness. Radiomics features combined with clinical variables gave the best performance for predicting disease severity, with a highest test ROC-AUC of 0.76, compared with 0.70 for the visual CT severity score with clinical variables (0.76 vs. 0.70, p = 0.023). The progression prediction model achieved a test C-index of 0.868 when based on the combination of CT radiomics and clinical variables, compared with 0.767 when based on CT radiomics features alone (p < 0.001), 0.847 when based on clinical variables alone (p = 0.110), and 0.860 when based on the combination of visual CT severity scores and clinical variables (p = 0.549). Furthermore, the model based on the combination of CT radiomics and clinical variables achieved time-dependent ROC-AUCs of 0.897, 0.933, and 0.927 for predicting progression risk at 3, 5, and 7 days, respectively. Conclusion: CT radiomics features combined with clinical variables were predictive of COVID-19 severity and progression to critical illness with fairly high accuracy.
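
A minimal sketch of the severity arm of such a pipeline on random stand-in data: combine radiomics and clinical features, split 7:1:2 as in the paper, fit a classifier, and report test ROC-AUC. The logistic model and feature counts are assumptions; the progression arm would add a time-to-event model (e.g., Cox proportional hazards) evaluated with the C-index.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 981
radiomics = rng.normal(size=(n, 20))   # stand-in chest-CT radiomics features
clinical = rng.normal(size=(n, 5))     # stand-in clinical variables
X = np.hstack([radiomics, clinical])   # combined feature set
y = rng.integers(0, 2, size=n)         # stand-in severity label

# 7:1:2 train/validation/test split, as in the paper
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, train_size=0.7, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=2 / 3,
                                            random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test ROC-AUC: {auc:.2f}")      # ~0.5 on random labels
```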