• Title/Abstract/Keywords: Vision loss

185 search results

Spatial and temporal trends in food security during the COVID-19 pandemic in Asia Pacific countries: India, Indonesia, Myanmar, and Vietnam

  • Yunhee Kang;Indira Prihartono;Sanghyo Kim;Subin Kim;Soomin Lee;Randall Spadoni;John McCormack;Erica Wetzler
    • Nutrition Research and Practice
    • /
    • Vol. 18, No. 1
    • /
    • pp.149-164
    • /
    • 2024
  • BACKGROUND/OBJECTIVES: The economic recession caused by the coronavirus disease 2019 pandemic disproportionately affected poor and vulnerable populations globally. A better understanding of vulnerability to shocks in food supply and demand in the Asia Pacific region is needed. SUBJECTS/METHODS: Using secondary data from rapid assessment surveys during the pandemic response (n = 10,420 in mid-2020; n = 6,004 in mid-2021) in India, Indonesia, Myanmar, and Vietnam, this study examined the risk factors for reported income reduction or job loss in mid-2021 and the temporal trend in food security status (household food availability, and market availability and affordability of essential items) from mid-2020 to mid-2021. RESULTS: The proportion of job loss/reduced household income was highest in India (60.4%) and lowest in Indonesia (39.0%). Urban residence (odds ratio [OR] range, 2.20-4.11; countries with significant results only), female respondents (OR range, 1.40-1.69), engagement in daily waged labor (OR range, 1.54-1.68), and running a small trade/business (OR range, 1.66-2.71) were significantly associated with income reduction or job loss in three out of four countries (all P < 0.05). Food stock availability increased significantly in 2021 compared to 2020 in all four countries (OR range, 1.91-4.45) (all P < 0.05). Availability of all essential items at markets increased in India (OR range, 1.45-3.99) but decreased for basic foods, hygiene items, and medicine in Vietnam (OR range, 0.81-0.86) in 2021 compared to 2020 (all P < 0.05). In 2021, the affordability of all essential items significantly improved in India (OR range, 1.18-3.49), while the affordability of rent, health care, and loans deteriorated in Indonesia (OR range, 0.23-0.71) when compared to 2020 (all P < 0.05).
CONCLUSIONS: Long-term social protection programs need to be carefully designed and implemented to address food insecurity among vulnerable groups, considering each country's market conditions, consumer food purchasing behaviors, and financial support capacity.
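
The ORs reported above come from multivariable models. As a minimal, hypothetical illustration of what an odds ratio expresses (the counts below are invented, not the study's data), an unadjusted OR can be computed directly from a 2×2 table:

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 exposure/outcome table:
    a = exposed with outcome,    b = exposed without outcome,
    c = unexposed with outcome,  d = unexposed without outcome.
    OR = (a/b) / (c/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)

# Hypothetical counts: 120 of 200 urban respondents reported income
# loss vs. 80 of 200 rural respondents.
print(odds_ratio(120, 80, 80, 120))  # 2.25
```

An OR above 1 means the first group has higher odds of the outcome; the study's adjusted ORs additionally control for the other covariates in the model.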

영상변형:얼굴 스케치와 사진간의 증명가능한 영상변형 네트워크 (Image Translation: Verifiable Image Transformation Networks for Face Sketch-Photo and Photo-Sketch)

  • 숭타이리엥;이효종
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 (Korea Information Processing Society) 2019 Spring Conference
    • /
    • pp.451-454
    • /
    • 2019
  • In this paper, we propose verifiable image transformation networks that transform face sketches to photos and vice versa. Face sketch-photo synthesis is very popular in computer vision applications and has been used in specific official domains such as law enforcement and digital entertainment. Several existing face sketch-photo synthesizing methods use feed-forward convolutional neural networks; however, it is hard to assure that their results are well mapped by relying on loss values or accuracy results alone. In our approach, we use two ResNet encoder-decoder networks as image transformation networks, one for sketch-photo and the other for photo-sketch. They depend on each other to verify their output results during training: for example, the photo produced by the sketch-photo network is verified by feeding it to the photo-sketch network and computing the loss between the reverse-transformed result and the ground-truth sketch. Likewise, the sketch result can be verified in the reverse direction. Our networks use two loss functions, sketch-photo loss and photo-sketch loss, for the basic transformation stages, and two further loss functions, sketch-photo verification loss and photo-sketch verification loss, for the verification stages. Our experimental results on the CUFS dataset are reasonable compared with state-of-the-art approaches.
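
The mutual verification described above works like a cycle-consistency check: each network's output is passed back through the opposite network and compared with the original input. A minimal numeric sketch, with the two ResNet encoder-decoders abstracted as plain callables (`g_sp`, `g_ps`, and `l1_loss` are illustrative names, not the authors' code):

```python
import numpy as np

def l1_loss(a, b):
    """Mean absolute error between two image arrays."""
    return float(np.mean(np.abs(a - b)))

def verification_losses(sketch, photo, g_sp, g_ps):
    """The four losses named in the abstract.

    g_sp: sketch -> photo transformation network (any callable here)
    g_ps: photo -> sketch transformation network
    """
    fake_photo = g_sp(sketch)
    fake_sketch = g_ps(photo)
    # basic transformation losses against the ground truth
    sketch_photo_loss = l1_loss(fake_photo, photo)
    photo_sketch_loss = l1_loss(fake_sketch, sketch)
    # verification losses: pass each result through the opposite
    # network and compare with the original input
    sketch_photo_verify = l1_loss(g_ps(fake_photo), sketch)
    photo_sketch_verify = l1_loss(g_sp(fake_sketch), photo)
    return (sketch_photo_loss, photo_sketch_loss,
            sketch_photo_verify, photo_sketch_verify)
```

With perfectly inverse transformations all four losses vanish; during training, a weighted sum of them would be minimized.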

The Influence of Pituitary Adenoma Size on Vision and Visual Outcomes after Trans-Sphenoidal Adenectomy : A Report of 78 Cases

  • Ho, Ren-Wen;Huang, Hsiu-Mei;Ho, Jih-Tsun
    • Journal of Korean Neurosurgical Society
    • /
    • Vol. 57, No. 1
    • /
    • pp.23-31
    • /
    • 2015
  • Objective : The aims of this study were to investigate the quantitative relationship between pituitary macroadenoma size and degree of visual impairment, and assess visual improvement after surgical resection of the tumor. Methods : The medical records of patients with pituitary adenoma, who had undergone trans-sphenoidal adenectomy between January 2009 and January 2011, were reviewed. Patients underwent an ocular examination and brain MRI before and after surgery. The visual impairment score (VIS) was derived by combining the scores of best-corrected visual acuity and visual field. The relationship between VIS and tumor size/tumor type/position of the optic chiasm was assessed. Results : Seventy-eight patients were included (41 male, 37 female). Thirty-two (41%) patients experienced blurred vision or visual field defect as an initial symptom. Receiver operating characteristic curve analysis showed that tumors <2.2 cm tended to cause minimal or no visual impairment. Statistical analysis showed that 1) poor preoperative vision is related to tumor size, displacement of the optic chiasm in the sagittal view on MRI and optic atrophy, and 2) poorer visual prognosis is associated with greater preoperative VIS. In multivariate analysis the only factor significantly related to VIS improvement was increasing pituitary adenoma size, which predicted decreased improvement. Conclusion : Results from this study show that pituitary adenomas larger than 2 cm cause defects in vision while adenomas 2 cm or smaller do not cause significant visual impairment. Patients with a large macroadenoma or giant adenoma should undergo surgical resection as soon as possible to prevent permanent visual loss.

개의 모양체 종양 치료 3예 (Treatment of Ciliary Body Tumors in Three Dogs)

  • 이충호;김진현;김대용;윤정희;우흥명;권오경
    • 한국임상수의학회지
    • /
    • Vol. 19, No. 3
    • /
    • pp.387-390
    • /
    • 2002
  • Ciliary body neoplasms are uncommon and have been described infrequently in the dog. We report the successful treatment of three cases of canine ciliary body tumors that were diagnosed histologically as adenoma, adenocarcinoma, and malignant melanoma, respectively. The dogs presented with typical clinical signs, including glaucoma, anterior segment inflammation, and vision loss. On orbital ultrasound, highly echodense masses involving the ciliary body were revealed. Iridocyclectomy and enucleation were performed in lieu of attempts at orbital biopsy.

Retrobulbar Hematoma in Blow-Out Fracture after Open Reduction

  • Cheon, Ji Seon;Seo, Bin Na;Yang, Jeong Yeol;Son, Kyung Min
    • Archives of Plastic Surgery
    • /
    • Vol. 40, No. 4
    • /
    • pp.445-449
    • /
    • 2013
  • Retrobulbar hemorrhage, especially when associated with visual loss, is a rare but significant complication after facial bone reconstruction. In this article, two cases of retrobulbar hematoma after surgical repair of blow-out fractures are reported. One patient suffered permanent loss of vision; in the other patient, we were able to prevent this by performing immediate decompression after a definite diagnosis. We present our clinical experience regarding the treatment process and a method for preventing retrobulbar hematoma using a scalp vein set tube and a negative-pressure drainage system.

조종석 각도변화가 양성 가속도에 미치는 영향에 관한 연구 (The effects of high sustained +Gz under different seat back angles)

  • 이창민;박세권
    • 대한인간공학회지
    • /
    • Vol. 15, No. 1
    • /
    • pp.69-78
    • /
    • 1996
  • Current fighter pilots, flying new-generation aircraft with high performance, are under severe stress during aerial combat maneuvering when they are exposed to high sustained +Gz (head-to-foot) acceleration stress. Two major factors limiting performance during high sustained +Gz acceleration stress are loss of vision (greyout or blackout) and loss of consciousness (LOC). These symptoms are believed to occur as a result of insufficient blood flow to the retina and the brain. This study was conducted to evaluate the effects of high sustained +Gz stress under different seat back angles. The results, obtained by biodynamic computer simulations using the ATB (articulated total body) model, are represented with respect to three variables: the HIC (head injury criterion) value, average G, and maximum G. The results demonstrate that the seat back angle (over $30^{\circ}$) had a significant effect in decreasing +Gz stress on the head segment and no significant effect on HIC.


사람과 자동차 재인식이 가능한 다중 손실함수 기반 심층 신경망 학습 (Deep Neural Networks Learning based on Multiple Loss Functions for Both Person and Vehicles Re-Identification)

  • 김경태;최재영
    • 한국멀티미디어학회논문지
    • /
    • Vol. 23, No. 8
    • /
    • pp.891-902
    • /
    • 2020
  • Re-identification (Re-ID) is one of the most popular research topics in the field of computer vision due to its variety of applications. To achieve high re-identification performance, recent methods have developed deep-learning-based networks specialized for persons or vehicles only. However, most current methods are difficult to use in real-world applications that require re-identification of both persons and vehicles at the same time. To overcome this limitation, this paper proposes a deep neural network learning method that combines triplet and softmax losses to improve performance and re-identify persons and vehicles simultaneously. Combining the softmax loss with the triplet loss makes it possible to learn the detailed differences between identities (IDs). In addition, weights are devised to avoid bias toward one loss when combining them. We used the Market-1501 and DukeMTMC-reID datasets, which are frequently used to evaluate person re-identification experiments. The vehicle re-identification experiments were evaluated using the VeRi-776 and VehicleID datasets. Since the proposed method is not designed as a neural network specialized for a specific object, it can re-identify both persons and vehicles simultaneously. To demonstrate this, an experiment was performed using a combined person and vehicle re-identification dataset.
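
The combination of a softmax (ID classification) loss with a triplet (metric learning) loss can be sketched as a weighted sum. This is a generic illustration under stated assumptions, not the paper's exact weighting scheme:

```python
import numpy as np

def softmax_ce(logits, label):
    """Softmax cross-entropy (ID classification) loss, one sample."""
    z = logits - logits.max()                # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[label])

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge triplet loss on Euclidean embedding distances: pull the
    positive (same ID) closer than the negative (different ID)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return float(max(d_pos - d_neg + margin, 0.0))

def combined_loss(logits, label, anchor, positive, negative,
                  w_softmax=1.0, w_triplet=1.0):
    """Weighted sum of the two losses; the weights counteract
    bias toward either objective."""
    return (w_softmax * softmax_ce(logits, label)
            + w_triplet * triplet_loss(anchor, positive, negative))
```

Because the same loss form applies regardless of whether the embedding comes from a person image or a vehicle image, nothing in this objective is object-specific.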

안면골 골절로 인한 시신경 손상 (OPTIC NERVE INJURY DUE TO FACIAL FRACTURES)

  • 양영철;류수장;김종배
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • Vol. 16, No. 3
    • /
    • pp.428-437
    • /
    • 1994
  • Optic nerve injury serious enough to result in blindness has been reported to occur in 3% of facial fractures. When blindness is immediate and complete, the prognosis for even partial recovery is poor. Progressive or incomplete visual loss may be ameliorated either by large doses of steroids or by emergency optic nerve decompression, depending on the mechanism of injury, the degree of trauma to the optic canal, and the period of time that elapses between injury and medical intervention. We often miss the initial assessment of visual function in the management of facial fracture patients due to loss of consciousness, periorbital swelling, and emergency situations. Delayed treatment of an injured optic nerve causes permanent blindness due to irreversible changes in the nerve, but by treating posttraumatic optic nerve injuries aggressively, usable vision can be preserved in a number of patients. The following report concerns three patients who suffered visual loss due to optic nerve injury, with no improvement after steroid therapy and/or optic nerve decompression surgery.


GAN-based shadow removal using context information

  • Yoon, Hee-jin;Kim, Kang-jik;Chun, Jun-chul
    • 인터넷정보학회논문지
    • /
    • Vol. 20, No. 6
    • /
    • pp.29-36
    • /
    • 2019
  • When dealing with outdoor images in a variety of computer vision applications, the presence of shadow degrades performance. To recover the information occluded by shadow, it is essential to remove it. To solve this problem, many studies use a two-step process of shadow detection and removal. However, while CNN-based shadow detection has greatly improved, shadow removal has remained difficult because the occluded region must be restored after the shadow is removed. In this paper, shadow detection is assumed to be given, and a shadow-free image is generated using the original image and a shadow mask. In previous CGAN-based methods, the image created by the generator was judged during adversarial learning only at the level of image patches by the discriminator. In contrast, we propose a novel method using a discriminator that judges both the whole image and local patches at the same time. We not only use a residual generator to produce high-quality images, but also use a joint loss, which combines reconstruction loss and GAN loss, for training stability. To evaluate our approach, we used the ISTD dataset, which consists of single images. The images generated by our approach show sharp and detailed restored information compared to previous methods.
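
A joint loss of this kind typically sums adversarial terms from each discriminator head with a weighted reconstruction term. The sketch below assumes a non-saturating GAN loss and an L1 reconstruction term; the λ weight and the exact loss forms are illustrative, not taken from the paper:

```python
import numpy as np

def generator_joint_loss(fake, target, d_whole_score, d_patch_score,
                         lam_rec=100.0):
    """Joint generator objective: adversarial terms from the two
    discriminator heads (whole image and local patch) plus a
    weighted L1 reconstruction term.

    d_whole_score / d_patch_score: discriminator probabilities in
    (0, 1] that the generated image (or patch) is real.
    """
    gan_loss = -np.log(d_whole_score) - np.log(d_patch_score)  # -log D(G(x))
    rec_loss = np.mean(np.abs(fake - target))                  # L1 reconstruction
    return float(gan_loss + lam_rec * rec_loss)
```

The reconstruction term anchors the output to the ground-truth shadow-free image and stabilizes training, while the two adversarial terms push for realism at both global and local scales.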

균형 잡힌 데이터 증강 기반 영상 감정 분류에 관한 연구 (A Study on Visual Emotion Classification using Balanced Data Augmentation)

  • 정치윤;김무섭
    • 한국멀티미디어학회논문지
    • /
    • Vol. 24, No. 7
    • /
    • pp.880-889
    • /
    • 2021
  • Recognizing people's emotions from images is essential in everyday life and is a popular research domain in the area of computer vision. Visual emotion data have a severe class imbalance, in which most of the data are distributed among a few specific categories. Existing methods do not consider class imbalance and use accuracy as the performance metric, which is not suitable for evaluating performance on an imbalanced dataset. Therefore, we propose a method for recognizing visual emotion that uses balanced data augmentation to address the class imbalance. The proposed method generates a balanced dataset by adopting random over-sampling and image transformation methods. The proposed method also uses the focal loss as its loss function, which can mitigate the class imbalance by down-weighting well-classified samples. EfficientNet, a state-of-the-art method for image classification, is used to recognize visual emotion. We compare the performance of the proposed method with that of conventional methods on a public dataset. The experimental results show that the proposed method increases the F1 score by 40% compared with the method without data augmentation, mitigating class imbalance without loss of classification accuracy.
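
The two imbalance countermeasures above, random over-sampling and the focal loss, can be sketched independently of any particular network. This is a generic illustration, not the paper's implementation (the focal loss is shown in its binary form; the multi-class case applies the same down-weighting per class):

```python
import numpy as np

def random_oversample(images, labels, rng=None):
    """Duplicate minority-class samples at random until every
    class matches the majority-class count."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    idx = []
    for c in classes:
        members = np.flatnonzero(labels == c)
        idx.extend(members)  # keep all original samples
        # draw extra duplicates to reach the majority count
        idx.extend(rng.choice(members, target - len(members), replace=True))
    idx = np.asarray(idx)
    return [images[i] for i in idx], labels[idx]

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one predicted probability p of class 1;
    the (1 - pt)**gamma factor down-weights well-classified samples."""
    pt = p if y == 1 else 1.0 - p
    a = alpha if y == 1 else 1.0 - alpha
    return float(-a * (1.0 - pt) ** gamma * np.log(pt))
```

With gamma = 0 and alpha = 1 the focal loss reduces to ordinary cross-entropy; increasing gamma shrinks the loss contribution of confident, correct predictions so that training focuses on the hard minority-class examples.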