• Title/Summary/Keyword: Camera Capture

Search Results: 326 (Processing Time: 0.022 seconds)

Relationship between Hallux Valgus Severity and 3D Ground Reaction Force in Individuals with Hallux Valgus Deformity during Gait

  • Kim, Yong-Wook
    • Journal of the Korean Society of Physical Medicine / v.16 no.3 / pp.21-27 / 2021
  • PURPOSE: This study examined the relationship between the severity of a hallux valgus (HV) deformity and the kinetic three-dimensional ground reaction force (GRF), measured with a motion analysis system and force platforms, in individuals with an HV deformity during normal-speed walking. METHODS: The participants were 36 adults with an HV deformity. They were asked to walk on a 6 m walkway with 40 infrared reflective markers attached to the pelvis and lower extremities. A camera capture system and two force platforms were used to collect kinetic data during gait, and Vicon Nexus and Visual3D motion analysis software were used to calculate the kinetic GRF data. RESULTS: The anterior maximal force occurring in the terminal stance phase of gait had a negative correlation with the HV angle (r = -.762, p < .01). In addition, the HV angle showed a low negative correlation with the second vertical maximal force (r = .346, p < .05) and a moderate positive correlation with the late medial maximal force (r = .641, p < .01). CONCLUSION: A more severe HV deformity results in greater abnormal translation of the plantar pressure and a significantly reduced pressure force under the first metatarsophalangeal joint.
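
The correlations reported above (e.g., r = -.762) are Pearson coefficients. A minimal sketch of how such a coefficient is computed, using hypothetical HV-angle and anterior-GRF values rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values: HV angle (degrees) vs. anterior maximal GRF.
hv_angle = [15, 18, 22, 25, 30, 35, 40]
anterior_force = [22.1, 21.5, 20.8, 19.9, 18.5, 17.2, 16.0]
r = pearson_r(hv_angle, anterior_force)  # strongly negative, as in the study
```

A value near -1, as here, means the anterior force falls almost linearly as the deformity angle grows.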

A Study on Visualization of Fine Dust Captured by FOG Droplet (미세액적에 의한 미세먼지 포집 가시화 연구)

  • Oh, Jinho;Kim, Hyun Dong;Lee, Jung-Eon;Yang, Jun Hwan;Kim, Kyung Chun
    • Journal of the Korean Society of Visualization / v.19 no.3 / pp.39-45 / 2021
  • An experiment was conducted to visualize fine dust captured by FOG droplets. Coal dust with an MMD (mass median diameter) of 23.56 and water droplets with an MMD of 17.02 served as the fine dust and the FOG droplets, respectively. A long-distance microscope and a high-speed camera were used to capture images of the micro-scale particles sprinkled through an acrylic duct. After measuring the sizes of the coal dust and FOG droplets and comparing them to the MMD, the process by which FOG droplets capture coal dust was recorded under two conditions: coal dust fixed in, and coal dust floating in, the FOG droplet flow. In both conditions, a coal dust particle collided with and was captured by a FOG droplet. A FOG droplet attached to the surface of a coal dust particle did not break and retained its spherical shape due to surface tension, and the combined particle rotated under its momentum and fell.

Development of the Real-time Concentration Measurement Method for Evaporating Binary Mixture Droplet using Surface Plasmon Resonance Imaging (표면플라즈몬공명 가시화 장치를 이용한 증발하는 이종혼합물 액적의 실시간 농도 가시화 기법 개발)

  • Jeong, Chan Ho;Lee, Hyung Ju;Choi, Chang Kyoung;Lee, Hyoungsoon;Lee, Seong Hyuk
    • Journal of ILASS-Korea / v.26 no.4 / pp.212-218 / 2021
  • The present study aims to develop a Surface Plasmon Resonance (SPR) imaging system facilitating real-time measurement of the concentration of an evaporating binary mixture droplet (BMD). We introduce the theoretical background of the SPR imaging technique and its methodology for concentration measurement. The SPR imaging system established in the present study consists of an LED light source, a polarizer, a lens, and a band-pass filter for collimated light at a wavelength of 589 nm, together with a CCD camera. Based on the Fresnel multiple-layer reflection theory, SPR imaging can capture the change in refractive index of an evaporating BMD. As an example, the present study demonstrates the visualization of an ethylene glycol (EG)-water (W) BMD and measures the real-time concentration change. Since the water component is more volatile than the ethylene glycol component, the refractive index of the EG-W BMD varies with its mixture composition during evaporation. We successfully measured the ethylene glycol concentration within the evaporating BMD by using SPR imaging.
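
As a rough sketch of the Fresnel multiple-layer reflection theory behind SPR imaging, the p-polarized reflectance of a prism / metal film / sample stack (Kretschmann configuration) can be computed as below. The metal permittivity, film thickness, and prism index are illustrative assumptions, not values from the paper:

```python
import cmath, math

def spr_reflectance(n_sample, angle_deg, wavelength=589e-9,
                    n_prism=1.515, eps_metal=-12.0 + 1.2j, d_metal=50e-9):
    """p-polarized reflectance of a prism / metal film / sample stack
    (Kretschmann configuration, Fresnel three-layer model)."""
    k0 = 2 * math.pi / wavelength
    eps = (n_prism ** 2, eps_metal, n_sample ** 2)
    kx = k0 * n_prism * math.sin(math.radians(angle_deg))
    kz = [cmath.sqrt(e * k0 ** 2 - kx ** 2) for e in eps]

    def r_p(i, j):
        # p-polarized Fresnel coefficient at the interface between layers i and j
        return (eps[j] * kz[i] - eps[i] * kz[j]) / (eps[j] * kz[i] + eps[i] * kz[j])

    phase = cmath.exp(2j * kz[1] * d_metal)  # round-trip phase in the metal film
    r = (r_p(0, 1) + r_p(1, 2) * phase) / (1 + r_p(0, 1) * r_p(1, 2) * phase)
    return abs(r) ** 2

# The resonance dip in this curve shifts in angle as the sample's refractive
# index (i.e., the EG concentration of the droplet) changes.
curve = [spr_reflectance(1.333, a) for a in range(40, 86)]
```

In SPR imaging the camera effectively samples this reflectance pixel by pixel, so the recorded intensity maps back to the local refractive index and hence the local concentration.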

Focus-adjustment Method for a High-magnification Zoom-lens System (고배율 줌 광학계의 상면 오차 보정 방법)

  • Jae Myung Ryu
    • Korean Journal of Optics and Photonics / v.34 no.2 / pp.66-71 / 2023
  • Zoom lenses are now starting to be applied to mobile-phone cameras as well. A zoom lens in a mobile-phone camera is mainly used to capture images in the telephoto range. Such an optical system has a long focal length, similar to that of a high-magnification zoom optical system, so the position of the imaging plane also shifts significantly due to manufacturing errors of the lenses and mechanical parts. In the past, this positional shift was corrected by moving the first lens group and the total optical system, but this paper confirms that the position of the imaging plane can be corrected by selecting any two moving lens groups. However, it is found that more clearance must be secured in front of and behind a moving lens group for this purpose.
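
The two-group correction idea can be illustrated with a first-order sensitivity model: if each group's movement shifts the image plane and changes the focal length approximately linearly, two chosen groups give a 2×2 linear system that can cancel a given error. The sensitivities and error values below are made-up illustrative numbers, not the paper's lens data:

```python
import numpy as np

# First-order sensitivity model (illustrative numbers only):
# row 0: how each group's movement shifts the image-plane position (mm/mm)
# row 1: how each group's movement changes the focal length (mm/mm)
S = np.array([[0.8, -0.5],
              [0.3,  0.9]])

# Manufacturing error to correct: image plane shifted by +0.12 mm,
# focal length off by -0.04 mm.
error = np.array([0.12, -0.04])

# Movements of the two chosen groups that cancel the error while
# holding the focal length at its nominal value.
moves = np.linalg.solve(S, -error)
residual = S @ moves + error  # ~zero: both conditions are met
```

The required `moves` are exactly the extra travel the paper says must be secured in front of and behind the moving groups.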

Comparison of estimating vegetation index for outdoor free-range pig production using convolutional neural networks

  • Sang-Hyon OH;Hee-Mun Park;Jin-Hyun Park
    • Journal of Animal Science and Technology / v.65 no.6 / pp.1254-1269 / 2023
  • This study aims to predict the change in corn share as 20 gestating sows graze a mature corn field, using images taken with a camera-equipped unmanned aerial vehicle (UAV). Deep learning based on convolutional neural networks (CNNs) has demonstrated strong performance in many areas, including agricultural applications such as pest and disease diagnosis and prediction, where it achieves high recognition accuracy and fast detection times. Training CNNs effectively requires a large amount of data, but UAVs capture only a limited number of images, so we propose a data augmentation method that can effectively increase the data. Most occupancy-prediction approaches design a CNN-based object detector and estimate occupancy by counting the recognized objects or by calculating the number of pixels an object occupies. These methods require complex occupancy-rate calculations, and their accuracy depends on whether the object features of interest are visible in the image. In this study, by contrast, the CNN is treated not as a corn detection and classification problem but as a function-approximation and regression problem, so that the occupancy rate of corn in an image is produced directly as the CNN output. The proposed method effectively estimates occupancy from a limited number of cornfield photos, shows excellent prediction accuracy, and confirms the potential and scalability of deep learning.
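
The augmentation idea, stretching a small set of UAV photos into a larger training set, can be sketched with simple geometric transforms. This is a generic sketch, not the paper's specific augmentation method:

```python
import numpy as np

def augment(image, rng):
    """Generate simple geometric variants of one image: rotations, mirrors,
    and random crops (an illustrative augmentation sketch)."""
    out = []
    for k in range(4):                      # 0/90/180/270 degree rotations
        rot = np.rot90(image, k)
        out.append(rot)
        out.append(np.fliplr(rot))          # plus a horizontal mirror of each
    h, w = image.shape[:2]
    for _ in range(4):                      # four random 3/4-size crops
        y = rng.integers(0, h // 4)
        x = rng.integers(0, w // 4)
        out.append(image[y:y + 3 * h // 4, x:x + 3 * w // 4])
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))               # stand-in for one UAV photo
variants = augment(img, rng)                # 12 training samples from 1 photo
```

Because the CNN output here is a scalar occupancy rate rather than object boxes, geometric variants like these need no re-labeling beyond reusing the photo's occupancy value.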

A Study on the Best Applications of Infra-Red (IR) Sensors Mounted on Unmanned Aerial Vehicles (UAV) in Agricultural Crop Fields (무인기 탑재 열화상(IR) 센서의 농작물 대상 최적 활용 방안 연구)

  • Ho-Woong Shon;Tae-Hoon Kim;Hee-Woo Lee
    • Journal of the Korean Society of Industry Convergence / v.26 no.6_2 / pp.1073-1082 / 2023
  • Thermal sensors, also called thermal infrared sensors, measure temperature based on the intensity of the infrared signal that reaches the sensor. The signals recognized by the sensor include the infrared wavelengths (0.7~3.0 ㎛) and the radiant infrared wavelengths (3.0~100 ㎛). Infrared (IR) wavelengths are divided into five bands: near infrared (NIR), shortwave infrared (SWIR), midwave infrared (MWIR), longwave infrared (LWIR), and far infrared (FIR). Most thermal sensors use the LWIR band to capture images. Thermal sensors measure the temperature of the target without contact, and the data can be affected by the viewing angle between the target and the sensor, the amount of atmospheric water vapor (humidity), air temperature, and ground conditions. In this study, the characteristics of three thermal imaging sensor models widely used on unmanned aerial vehicles were evaluated, and the optimal application fields were determined.

An Explainable Deep Learning-Based Classification Method for Facial Image Quality Assessment

  • Kuldeep Gurjar;Surjeet Kumar;Arnav Bhavsar;Kotiba Hamad;Yang-Sae Moon;Dae Ho Yoon
    • Journal of Information Processing Systems / v.20 no.4 / pp.558-573 / 2024
  • Considering factors such as illumination, camera quality variations, and background-specific variations, identifying a face using a smartphone-based facial image capture application is challenging. Face image quality assessment refers to the process of taking a face image as input and producing some form of "quality" estimate as an output. Typically, quality assessment techniques use deep learning methods to categorize images, but the resulting models operate as black boxes, which raises the question of their trustworthiness. Several explainability techniques have gained importance in building this trust: they provide visual evidence of the active regions within an image that drive the deep learning model's prediction. Here, we developed a technique for reliable prediction of facial images before medical analysis and security operations. A combination of gradient-weighted class activation mapping and local interpretable model-agnostic explanations was used to explain the model. This approach has been implemented in the preselection of facial images for skin feature extraction, which is important in critical medical science applications. We demonstrate that the combined explanations provide better visual explanations for the model, where both the saliency-map and perturbation-based explainability techniques verify predictions.
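
The gradient-weighted class activation mapping (Grad-CAM) half of the explanation can be sketched as follows: channel weights are the global-average-pooled gradients, and the map is the ReLU of the weighted sum of activation channels. Synthetic activations and gradients stand in for a real CNN's here:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM saliency map from a conv layer's activations (C, H, W) and
    the gradients of the class score with respect to them."""
    weights = gradients.mean(axis=(1, 2))                  # one weight per channel
    cam = np.tensordot(weights, activations, axes=(0, 0))  # weighted channel sum
    cam = np.maximum(cam, 0)                               # ReLU: keep positive evidence
    return cam / cam.max() if cam.max() > 0 else cam       # normalize to [0, 1]

rng = np.random.default_rng(1)
acts = rng.random((8, 7, 7))            # 8 feature channels of a 7x7 conv layer
grads = rng.standard_normal((8, 7, 7))  # synthetic class-score gradients
heatmap = grad_cam(acts, grads)         # 7x7 saliency map over the input image
```

Upsampled to the input resolution, such a map is the "visual evidence of the active regions" the abstract describes; LIME then cross-checks it by perturbing image regions.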

Gaze Tracking System Using Feature Points of Pupil and Glints Center (동공과 글린트의 특징점 관계를 이용한 시선 추적 시스템)

  • Park Jin-Woo;Kwon Yong-Moo;Sohn Kwang-Hoon
    • Journal of Broadcast Engineering / v.11 no.1 s.30 / pp.80-90 / 2006
  • A simple 2D gaze tracking method using a single camera and the Purkinje image is proposed. The method uses a single camera with an infrared filter to capture one eye, and two infrared light sources that create reflection points (glints) for estimating the corresponding gaze point on the screen. The camera, the infrared light sources, and the user's head may all move slightly, so the system is simple and flexible, requiring no inconvenient fixed equipment and no assumption of a fixed head. The system also includes a simple and accurate personal calibration procedure: before using the system, each user stares at two target points for a few seconds so that the system can initialize the individual factors of the estimation algorithm. The proposed system runs in real time at over 10 frames per second at XGA (1024×768) resolution. Test results over nine trials with three subjects show an average estimation error of less than 1 degree.
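
The two-point personal calibration can be sketched as a per-axis linear map from the pupil-glint vector to screen coordinates. This is a simplification of the paper's estimation algorithm, and the vectors and targets below are hypothetical:

```python
def calibrate(vectors, targets):
    """Fit a per-axis linear map screen = gain * vector + offset from the
    two calibration fixations (two unknowns per axis, so two points suffice)."""
    (v1, v2), (t1, t2) = vectors, targets
    params = []
    for axis in range(2):
        gain = (t2[axis] - t1[axis]) / (v2[axis] - v1[axis])
        offset = t1[axis] - gain * v1[axis]
        params.append((gain, offset))
    return params

def gaze_point(params, v):
    """Map a measured pupil-glint vector to a screen coordinate."""
    return tuple(gain * v[i] + offset for i, (gain, offset) in enumerate(params))

# Hypothetical calibration: pupil-glint vectors measured while the user
# stared at the top-left and bottom-right screen targets.
params = calibrate(vectors=[(10.0, 5.0), (40.0, 25.0)],
                   targets=[(0.0, 0.0), (1024.0, 768.0)])
```

Because the glints move with the light sources' corneal reflections, the pupil-glint vector is fairly insensitive to small head movements, which is what lets this simple map work without a head fixture.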

Depth Upsampling Method Using Total Generalized Variation (일반적 총변이를 이용한 깊이맵 업샘플링 방법)

  • Hong, Su-Min;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.21 no.6 / pp.957-964 / 2016
  • Acquiring reliable depth maps is a critical requirement in many applications such as 3D video and free-viewpoint TV. Depth information can be obtained directly from the scene using physical sensors such as infrared (IR) sensors, and Time-of-Flight (ToF) range cameras, including the KINECT depth camera, have become popular for dense depth sensing. Although ToF cameras can capture depth information in real time, their output is noisy and of low resolution. Filter-based depth upsampling algorithms such as joint bilateral upsampling (JBU) and the noise-aware filter for depth upsampling (NAFDU) have been proposed to obtain high-quality depth information, but these methods often cause texture copying in the upsampled depth map. To overcome this limitation, we formulate a convex optimization problem using higher-order regularization for depth map upsampling, and we reduce texture copying by using an edge-weighting term chosen from the edge information. Experimental results show that our scheme produces more reliable depth maps than previous methods.
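
The JBU baseline mentioned above can be sketched directly: each upsampled depth value is a weighted average of low-resolution depth samples, with weights combining spatial distance and color similarity in the high-resolution guide image. Parameter values here are illustrative:

```python
import numpy as np

def jbu(depth_low, guide, factor, sigma_s=1.0, sigma_r=0.1, radius=2):
    """Joint bilateral upsampling: upsample depth_low to the resolution of
    the color guide image, averaging nearby low-res depth samples weighted
    by spatial distance and guide-color similarity."""
    H, W = guide.shape[:2]
    h, w = depth_low.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y / factor, x / factor          # position on the low-res grid
            num = den = 0.0
            for j in range(int(cy) - radius, int(cy) + radius + 1):
                for i in range(int(cx) - radius, int(cx) + radius + 1):
                    if 0 <= j < h and 0 <= i < w:
                        ds = ((cy - j) ** 2 + (cx - i) ** 2) / (2 * sigma_s ** 2)
                        gy = min(int(j * factor), H - 1)   # guide pixel for sample
                        gx = min(int(i * factor), W - 1)
                        dr = ((guide[y, x] - guide[gy, gx]) ** 2).sum() / (2 * sigma_r ** 2)
                        wgt = np.exp(-ds - dr)
                        num += wgt * depth_low[j, i]
                        den += wgt
            out[y, x] = num / den
    return out

rng = np.random.default_rng(2)
depth_low = rng.random((8, 8))       # noisy low-res ToF depth (stand-in)
guide = rng.random((16, 16, 3))      # high-res color image (stand-in)
depth_high = jbu(depth_low, guide, factor=2)
```

The texture-copying artifact the paper targets comes from the color-similarity term `dr`: guide texture with no depth counterpart still modulates the weights, which is exactly what the higher-order (TGV) regularization is designed to suppress.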

Development of Image Quality Enhancement of a Digital Camera with the Application of Exposure To The Right Exposure Method (ETTR 노출 방법을 활용한 디지털 카메라의 화질 향상)

  • Park, Hyung-Ju;Har, Dong-Hwan
    • The Journal of the Korea Contents Association / v.10 no.8 / pp.95-103 / 2010
  • Raw files record luminance values corresponding to each pixel of a digital camera sensor. Because raw data is distributed linearly, controlling exposure so that the first highlight stop is captured is important in digital imaging. This study verified the efficiency of the ETTR (Exposure To The Right) method and found the optimum amount of over-exposure that lets the first highlight stop hold the largest number of levels. This was achieved by over-exposing a scene recorded as a raw file and then pulling the exposure back down in raw-conversion software. We verified the efficiency of ETTR by controlling the exposure range and the ISO settings. The results show that as exposure is increased gradually over 6 steps, the dynamic range also increases, and that the optimum exposure at high ISOs is about +1 2/3 stop over the normal exposure. We compared the visual noise at +1 2/3 stop to that of the normal exposure and confirmed that the reduction in visual noise grows as ISO increases. From these results, we conclude that over-exposure of about +1 2/3 stop is the optimum value for the widest dynamic range and the lowest visual noise at high ISOs. These results provide effective ETTR information to consumers and manufacturers, and the method contributes to optimum image performance by maximizing dynamic range and minimizing noise in digital imaging.
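
The stop arithmetic behind ETTR is simple to sketch: each stop doubles the captured light, and in a linear raw file each stop down from clipping holds half as many code values, so the top stop is by far the richest:

```python
def stops_to_factor(stops):
    """An exposure change of n stops multiplies the captured luminance by 2**n."""
    return 2.0 ** stops

# ETTR as described: overexpose by +1 2/3 stop at capture,
# then pull the raw file back by the same amount in conversion.
capture_gain = stops_to_factor(5 / 3)   # ~3.17x more light at capture
pullback = stops_to_factor(-5 / 3)      # applied in the raw converter
net = capture_gain * pullback           # tones land where originally intended

# Why it helps: in a linear 12-bit raw file, the top stop alone holds
# half of all 4096 code values.
levels_in_top_stop = 2 ** 12 // 2
```

Capturing the scene higher up the linear scale and converting back down therefore keeps the same tonal placement while encoding the shadows with more levels and less visible noise.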