• Title/Summary/Keyword: 육안식별 (naked-eye identification)

Search Results: 107

Nondestructive Deterioration Diagnosis and Environmental Investigation of the Stupa of the Buddhist Monk Soyo in Baegyangsa Temple, Jangseong (장성 백양사 소요대사탑의 비파괴 훼손도 진단과 입지환경 검토)

  • Kim, Yuri; Lee, Myeong Seong; Chun, Yu Gun; Lee, Mi Hye; Jwa, Yong-Joo
    • Korean Journal of Heritage: History & Science, v.49 no.4, pp.52-63, 2016
  • The Stupa of the Buddhist Monk Soyo in Baegyangsa temple, Jangseong, was erected to honor the achievements of the monk Soyo, who served Baegyangsa temple as chief monk. It is a bell-shaped stupa carved with the detailed patterns of a traditional Korean Buddhist bell. The stupa is composed of pinkish-grey sandstone; its body is damaged by longitudinal cracks on the front and back, and exfoliation has destroyed most of the sculptural detail on the left and right sides. Ultrasonic testing and infrared thermography for physical deterioration diagnosis showed that most weathering appears on the body of the stupa, and exfoliated areas invisible to the naked eye were detected over 6.1% and 5.9% of the left and right sides, respectively. Hyperspectral imaging analysis was also carried out to assess biological deterioration: 71.8~79.9% of the stupa surface was covered with vegetation such as algae, lichens, and mosses, and the Normalized Difference Vegetation Index (NDVI) was relatively higher near the ground and on the right and back sides. Conservation treatment of the exfoliated parts and the biological colonization is therefore necessary, and the site environment should be improved to prevent further damage to the stupa.
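
The vegetation mapping above rests on NDVI, which contrasts near-infrared and red reflectance per pixel. Below is a minimal sketch of the computation, assuming a reflectance cube shaped (rows, cols, bands); the band indices and the 0.3 threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ndvi(cube: np.ndarray, red_band: int, nir_band: int) -> np.ndarray:
    """Compute NDVI per pixel from a hyperspectral cube (rows, cols, bands)."""
    red = cube[..., red_band].astype(np.float64)
    nir = cube[..., nir_band].astype(np.float64)
    denom = nir + red
    # Avoid division by zero on dark pixels.
    return np.where(denom > 0, (nir - red) / denom, 0.0)

# Pixels above a threshold are flagged as biologically colonized
# (band indices and threshold are assumptions for illustration):
# cover = ndvi(cube, red_band=60, nir_band=90) > 0.3
# coverage_pct = 100 * cover.mean()
```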

Hyperspectral Image Analysis Technology Based on Machine Learning for Marine Object Detection (해상 객체 탐지를 위한 머신러닝 기반의 초분광 영상 분석 기술)

  • Sangwoo Oh; Dongmin Seo
    • Journal of the Korean Society of Marine Environment & Safety, v.28 no.7, pp.1120-1128, 2022
  • In the event of a marine accident, the longer a person is exposed to the sea, the lower the chance of survival. Because the search area at sea is extremely wide compared to land, object detection based on sensors mounted on satellites or aircraft, rather than on ships, must be applied for an efficient search. The purpose of this study was to rapidly detect objects in the ocean using a hyperspectral image sensor mounted on an aircraft. An image captured by this sensor measures 8,241 × 1,024 pixels and is large-volume data comprising 127 spectral bands at a resolution of 0.7 m per pixel. In this study, a marine object detection model was developed that combines a seawater identification algorithm using DBSCAN with a density-based land removal algorithm to analyze the large data rapidly. When the developed detection model was applied to the hyperspectral image, it analyzed a sea area of about 5 km² within 100 s. In addition, to evaluate the detection accuracy of the developed model, hyperspectral images of the Mokpo, Gunsan, and Yeosu regions were taken from an aircraft; ships in the experimental images were detected with an accuracy of 90%. The technology developed in this study is expected to provide important information supporting the search and rescue of small ships and human life.
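
A minimal sketch of the seawater-identification idea, assuming scikit-learn's DBSCAN on per-pixel spectra: the largest cluster is taken to be open seawater, and spectrally distant pixels become object candidates. The subsampling, eps value, and distance test are illustrative assumptions; the paper's density-based land removal step is omitted.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_objects(cube: np.ndarray, eps: float = 0.5, min_samples: int = 50):
    """Flag pixels whose spectra fall outside the dominant seawater cluster.

    cube: (rows, cols, bands) hyperspectral image, reflectance-scaled.
    """
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    # Subsample for speed: clustering every pixel of a large scene is costly.
    idx = np.random.choice(len(X), size=min(len(X), 50_000), replace=False)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X[idx])
    # Assumption: the largest cluster is open seawater.
    sea_label = np.bincount(labels[labels >= 0]).argmax()
    sea_mean = X[idx][labels == sea_label].mean(axis=0)
    # Pixels spectrally far from the seawater mean are object candidates.
    dist = np.linalg.norm(X - sea_mean, axis=1)
    return (dist > eps).reshape(h, w)
```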

Textile material classification in clothing images using deep learning (딥러닝을 이용한 의류 이미지의 텍스타일 소재 분류)

  • So Young Lee; Hye Seon Jeong; Yoon Sung Choi; Choong Kwon Lee
    • Smart Media Journal, v.12 no.7, pp.43-51, 2023
  • As online transactions increase, clothing images strongly influence consumer purchasing decisions. The importance of image information about clothing materials has grown, and it matters to the fashion industry to analyze clothing images and identify the materials used. Textile materials are difficult to identify with the naked eye, and sorting them consumes much time and cost. This study classifies textile materials from clothing images using deep learning algorithms. Classifying materials can help reduce clothing production costs, increase the efficiency of the manufacturing process, and support services that recommend products of specific materials to consumers. We used the machine-vision deep learning algorithms ResNet and Vision Transformer to classify clothing images. A total of 760,949 images were collected and preprocessed to detect abnormal images; finally, 167,299 clothing images with 19 textile labels and 20 fabric labels were used. We classified clothing materials with ResNet and Vision Transformer and compared the algorithms using the Top-k accuracy metric. In this comparison, the Vision Transformer outperformed ResNet.
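
The comparison metric is Top-k accuracy, which counts a prediction as correct if the true label appears among the k highest-scoring classes. A self-contained sketch of the metric; the array shapes are the only assumption here.

```python
import numpy as np

def top_k_accuracy(logits: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    """Fraction of samples whose true label is among the k highest-scoring classes.

    logits: (n_samples, n_classes) model scores; labels: (n_samples,) class ids.
    """
    topk = np.argsort(logits, axis=1)[:, -k:]       # indices of the k best classes
    hits = (topk == labels[:, None]).any(axis=1)    # is the true label among them?
    return float(hits.mean())

# top_k_accuracy(scores, y_true, k=1) is ordinary accuracy; k=5 also credits
# predictions where the right textile is merely in the top five guesses.
```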

Deep Learning Based Digital Staining Method in Fourier Ptychographic Microscopy Image (Fourier Ptychographic Microscopy 영상에서의 딥러닝 기반 디지털 염색 방법 연구)

  • Seok-Min Hwang; Dong-Bum Kim; Yu-Jeong Kim; Yeo-Rin Kim; Jong-Ha Lee
    • Journal of the Institute of Convergence Signal Processing, v.23 no.2, pp.97-106, 2022
  • H&E staining is needed to distinguish cells, but staining directly costs considerable money and time. The purpose of this study is to convert phase images of unstained cells into amplitude images of stained cells. Image data acquired with FPM were reconstructed into phase and amplitude images using MATLAB, and normalization yielded visually distinguishable images. Using a GAN, a fake amplitude image similar to the real amplitude image was generated from the phase image, and cells were then segmented from the fake amplitude image using Mask R-CNN. In the experiments, the discriminator loss ranged from 6.8e-2 to 3.3e-1, the generator loss from 2.9e-2 to 6.9e-2, the A loss from 1.2e-1 to 5.8e-1, and the Mask R-CNN loss from 3.2e-1 to 1.9e0.
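
The abstract describes a GAN that maps unstained phase images to stained-looking amplitude images. Below is a pix2pix-style training-step sketch in PyTorch; the tiny networks, the L1 weight of 100, and the conditional discriminator are illustrative assumptions, since the abstract does not specify the architecture.

```python
import torch
import torch.nn as nn

# Hypothetical setup: G maps a 1-channel phase image to a 1-channel "stained"
# amplitude image; D judges (phase, amplitude) pairs. Real architectures would
# be far deeper; these stubs only make the training step runnable.
G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(phase, real_amp):
    # Discriminator: real pairs toward 1, fake pairs toward 0.
    fake_amp = G(phase)
    d_real = D(torch.cat([phase, real_amp], dim=1))
    d_fake = D(torch.cat([phase, fake_amp.detach()], dim=1))
    d_loss = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool D while staying close to the real amplitude image.
    d_fake = D(torch.cat([phase, fake_amp], dim=1))
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + 100 * l1(fake_amp, real_amp)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```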

A Study on Class Sample Extraction Technique Using Histogram Back-Projection for Object-Based Image Classification (객체 기반 영상 분류를 위한 히스토그램 역투영을 이용한 클래스 샘플 추출 기법에 관한 연구)

  • Chul-Soo Ye
    • Korean Journal of Remote Sensing, v.39 no.2, pp.157-168, 2023
  • Image segmentation and supervised classification are widely used to monitor the ground surface with high-resolution remote sensing images. To classify various objects, a class must be defined for each object and samples belonging to each class must be selected. Existing methods for extracting class samples require selecting a sufficient number of samples with similar intensity characteristics for each class; this process depends on the user's visual identification and takes a lot of time. The extracted representative samples are likely to vary from user to user, so classification performance is strongly affected by the sample extraction result. In this study, we propose an image classification technique that minimizes user intervention in class sample extraction by applying histogram back-projection, yielding samples with consistent intensity characteristics within each class. The proposed technique showed improved classification accuracy, compared with the technique without histogram back-projection, both in the experiment using hue subchannels of the hue-saturation-value transform of Compact Advanced Satellite 500-1 imagery and in the experiment using the original image.
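
Histogram back-projection scores every pixel by how strongly its value occurs in a reference histogram, so a small seed region can propagate to all similar pixels. A minimal OpenCV sketch over the hue channel; the file name, seed coordinates, and threshold are illustrative assumptions.

```python
import cv2

# Load an image and convert to HSV (hypothetical input path).
img = cv2.imread("scene.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
seed = hsv[100:120, 200:220]  # hypothetical user-selected seed patch

# Histogram of the seed region over the hue channel.
hist = cv2.calcHist([seed], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Back-project: each pixel receives the histogram value of its hue,
# i.e. how well it matches the seed's color distribution.
prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], scale=1)
_, samples = cv2.threshold(prob, 50, 255, cv2.THRESH_BINARY)
# 'samples' marks pixels with seed-like intensity characteristics,
# usable as consistent training samples for the class.
```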

A standardized procedure on building spectral library for hazardous chemicals mixed in river flow using hyperspectral image (초분광 영상을 활용한 하천수 혼합 유해화학물질 표준 분광라이브러리 구축 방안)

  • Gwon, Yeonghwa; Kim, Dongsu; You, Hojun
    • Journal of Korea Water Resources Association, v.53 no.10, pp.845-859, 2020
  • Climate change and recent heat waves have drawn public attention to other environmental issues, such as water pollution in the form of algal blooms, chemical leaks, and oil spills. Water pollution from leaked chemicals can severely affect human health, contaminate air, water, and soil, and cause discoloration or death of crops that contact these chemicals. Chemicals that spill into streams are often colorless and water-soluble, which makes it difficult to determine with the naked eye whether the water is polluted. A chemical spill is usually detected by simple contact-detection devices, with sensors installed where leakage is likely to occur. The drawbacks of contact sensors are that they rely heavily on the skill of field workers and are installed at a limited number of locations, so spills cannot be detected in areas without sensors. Recently, hyperspectral images have been used to identify land cover and vegetation and to determine water quality by analyzing the inherent spectral characteristics of materials. While hyperspectral sensors can potentially detect chemical substances, research on detecting chemicals in streams with hyperspectral sensors is still lacking. This study therefore applied remote sensing techniques and the latest sensor technology to overcome the limitations of contact detection in detecting leaks of hazardous chemicals into aquatic systems. We examined whether 18 types of hazardous chemicals could be individually classified from hyperspectral imagery, obtaining hyperspectral images of each chemical to establish a spectral library. We expect future studies to expand the spectral library database for hazardous chemicals and to verify its application in streams, enabling real-time monitoring for rapid detection and response when a chemical spill occurs.
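
A spectral library pairs each substance with a reference spectrum, and an unknown pixel is then matched against the library, commonly with the Spectral Angle Mapper (SAM). The sketch below shows that pattern; SAM matching is a standard technique assumed here for illustration, not a method stated in the abstract.

```python
import numpy as np

def mean_spectrum(cube: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average spectrum over masked pixels of a (rows, cols, bands) cube."""
    return cube[mask].mean(axis=0)

def spectral_angle(a: np.ndarray, b: np.ndarray) -> float:
    """Spectral Angle Mapper distance (radians) between two spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical library with one reference spectrum per chemical:
# library = {"chemical_A": spec_a, "chemical_B": spec_b, ...}
# best_match = min(library, key=lambda k: spectral_angle(pixel, library[k]))
```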

Identification of New, Old and Mixed Brown Rice using Freshness and an Electronic Eye (신선도와 전자눈을 이용한 현미 신곡, 구곡 및 혼합곡의 판별)

  • Hong, Jee-Hwa; Park, Young-Jun; Kim, Hyun-Tae; Oh, Sang Kyun
    • Korean Journal of Crop Science, v.63 no.2, pp.98-105, 2018
  • The sale of brown rice batches mixing rice produced in different years is prohibited in Korea, so methods for identifying the year of production are critical for maintaining the distribution of high-quality brown rice. Here we describe the exploitation of an enzyme that discriminates freshly harvested from one-year-old brown rice. The degree of enzyme activity was visualized through a freshness test with Guaiacol, Oxydol, and p-phenylenediamine reagents. With electronic eye equipment, we selected 29 color codes for distinguishing new from old brown rice. The discrimination power of the selected color codes ranged from 0.263 to 0.922, with an average of 0.62. The accuracy of identifying new and old brown rice was 100% in both principal component analysis (PCA) and discriminant function analysis (DFA), with DFA showing greater discriminatory power. A verification test using new, old, and mixed brown rice was then performed to validate the method: new and old brown rice were identified with 100% accuracy, while mixed samples were correctly classified at a rate of 96.9%. Additionally, to test whether a discriminant constructed in winter can be applied to samples collected in summer, new and old brown rice stored for 8 months were collected and tested. Both were classified as old brown rice, giving 50% identification accuracy. We attribute this to changes in enzyme content over time and conclude that discriminants specific to distinct storage periods will need to be developed.
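
A minimal sketch of the classification pipeline, assuming scikit-learn, with LinearDiscriminantAnalysis standing in for DFA; the 29-feature matrix here is synthetic toy data, not the paper's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: (n_samples, 29) electronic-eye color-code features (synthetic here);
# y: 0 = new brown rice, 1 = old brown rice.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 29))
y = np.repeat([0, 1], 30)
X[y == 1] += 0.8  # toy class separation for illustration

scores = PCA(n_components=2).fit_transform(X)  # unsupervised view of the classes
lda = LinearDiscriminantAnalysis().fit(X, y)   # supervised discriminant function
print("LDA training accuracy:", lda.score(X, y))
```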

Studies on Fungal Contamination and Mycotoxins of Rice Straw Round Bale Silage (사료용 볏짚 곤포사일리지의 곰팡이 및 Mycotoxin 오염 연구)

  • Sung, Ha-Guyn; Lee, Joung-Kyong; Seo, Sung
    • Journal of The Korean Society of Grassland and Forage Science, v.31 no.4, pp.451-462, 2011
  • The purpose of this study was to investigate fungal and mycotoxin contamination of rice straw round bale silage in Korea. Thirty-three samples of rice straw round bale silage in various conditions, fed to cattle on farms, were tested. The average level of fungal contamination was 2.1 × 10⁶ cfu g⁻¹, with a maximum of 9.2 × 10⁸ cfu g⁻¹; fungal contamination was detected even in samples of good condition. When the fungi were isolated and identified, 28 species were found, of which 8 were mycotoxin-producing: Aspergillus flavus, Aspergillus fumigatus, Fusarium culmorum, Fusarium verticillioides, Penicillium carneum, Penicillium paneum, Penicillium roqueforti, and Penicillium viridicatum. In particular, Penicillium paneum was found in 42% of samples and Aspergillus species (A. flavus, A. fumigatus) in 21%. Regarding mycotoxin contamination, 42% of samples contained at least one mycotoxin, and some contained three. Aflatoxins (B1, B2, G1, G2) and fumonisins (B1, B2) were not found, but ochratoxin A (1.0~5.8 ug/kg), deoxynivalenol (DON, 156.0~776.7 ug/kg), and zearalenone (ZON, 38.0~750.0 ug/kg) were detected. These results show that rice straw round bale silage is exposed to hazards from mycotoxigenic fungi and mycotoxin contamination, and further research on mycotoxins in animal feed is needed to protect animal and human health.

A preliminary study and its application for the development of the quantitative evaluation method of developed fingerprints on porous surfaces using densitometric image analysis (다공성 표면에서 현출된 지문의 정량적인 평가방법 개발을 위한 농도계 이미지 분석을 이용한 선행연구 및 응용)

  • Cho, Jae-Hyun; Kim, Hyo-Won; Kim, Min-Sun; Choi, Sung-Woon
    • Analytical Science and Technology, v.29 no.3, pp.142-153, 2016
  • In crime scene investigation, fingerprint identification is regarded as one of the most important techniques for personal identification, yet objective, unbiased methods for comparing fingerprints developed by the diverse available and emerging techniques are lacking. To work toward an objective and quantitative method of fingerprint evaluation, a preliminary study was performed using densitometric image analysis (CP Atlas 2.0) and the Automated Fingerprint Identification System (AFIS) on fingerprints developed on porous surfaces. First, inked fingerprints obtained while varying pressure (kgf) and pressing time (sec) were analyzed to find optimal conditions for fingerprint samples, since these provide fingerprints of relatively uniform quality. The number of minutiae extracted by AFIS was compared with the areas of friction-ridge peaks calculated by the image analysis. Inked fingerprints pressed at 1.0 kgf for 5 seconds gave the visually clearest fingerprints, the highest number of minutiae points, and the largest average peak area. Images of latent fingerprints developed on thermal paper with the iodine fuming method were also analyzed; 1.0 kgf/5 sec was again optimal, yielding the highest minutiae count and the largest average peak area. Additionally, when two concentrations of ninhydrin solution (0.5% vs. 5%) were compared for latent fingerprints developed on printer paper, the best condition was 2.0 kgf/5 sec with 5% ninhydrin. It was confirmed that the larger the average peak area from the image analysis, the higher the number of minutiae points. With further tests, this densitometric image analysis could prove to be a new quantitative and objective assessment method for fingerprint development.
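
The quantitative measure here is the area under friction-ridge peaks in a densitometric intensity profile. A sketch of that computation with SciPy, assuming a 1-D profile in which ridges appear as maxima; the prominence value is an illustrative assumption.

```python
import numpy as np
from scipy.signal import find_peaks

def ridge_peak_areas(profile: np.ndarray, prominence: float = 5.0):
    """Areas under friction-ridge peaks of a 1-D densitometric profile.

    profile: grayscale intensity sampled along a line crossing the ridges,
    assumed inverted so that darker ridges appear as maxima.
    """
    peaks, props = find_peaks(profile, prominence=prominence)
    areas = []
    for p, lo, hi in zip(peaks, props["left_bases"], props["right_bases"]):
        baseline = min(profile[lo], profile[hi])
        areas.append(np.trapz(profile[lo:hi + 1] - baseline))
    return peaks, np.array(areas)

# The study found that a larger average peak area went with more minutiae.
```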

A Study on the Use of Active Protocol Using the Change of Pitch and Rotation Time in PET/CT (PET/CT에서 Pitch와 Rotation Time의 변화를 이용한 능동적인 프로토콜 사용에 대한 연구)

  • Jang, Eui Sun; Kwak, In Suk; Park, Sun Myung; Choi, Choon Ki; Lee, Hyuk; Kim, Soo Young; Choi, Sung Wook
    • The Korean Journal of Nuclear Medicine Technology, v.17 no.2, pp.67-71, 2013
  • Purpose: Changes in CT exposure conditions affect image quality and patient dose. In this study, we evaluated the effects on CT image quality and SUV when the CT parameters pitch and rotation time were changed. Materials and Methods: A Discovery STE (GE, USA) PET/CT scanner was used, with the GE QA phantom and the AAPM CT performance phantom for evaluating CT image noise. Images were acquired with 24 combinations of four pitch values (0.562, 0.938, 1.375, and 1.75:1) and six X-ray tube rotation times (0.5-1.0 s). PET images were acquired using the 1994 NEMA PET phantom (¹⁸F-FDG 5.3 kBq/mL, 2.5 min/frame). For the noise test, noise was evaluated as the standard deviation of each image's CT numbers, and the ratio of the noise expected from the change in dose-length product (DLP) to the experimental noise was used as an index of efficiency. For the spatial resolution test, we checked whether the 1.0 mm holes of the AAPM CT performance phantom could be identified. Finally, we evaluated the SUV of all 24 images. Results: As pitch changed, noise efficiency was 1.00, 1.03, 1.01, and 0.96 for the QA phantom and 1.00, 1.04, 1.02, and 0.97 for the AAPM phantom; as rotation time changed, it was 0.99, 1.02, 1.00, 1.00, 0.99, 0.99 and 1.01, 1.01, 0.99, 1.01, 1.01, 1.01, respectively. The 1.0 mm holes could be identified in all 24 images, and there was no significant change in SUV: the average SUV of every image was 1.1. Conclusion: A pitch of 1.75:1 was the most efficient value in the CT image evaluation and did not affect spatial resolution or SUV, while changing the rotation time had no meaningful effect. We therefore recommend using the efficient 1.75:1 pitch with an X-ray tube rotation time appropriate to patient size.
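
The efficiency index compares measured noise with the noise expected from a dose change, under the standard assumption that CT quantum noise scales as 1/√dose (with dose tracked by DLP). A sketch of that ratio; the function name, the exact convention, and the example values are illustrative assumptions rather than the paper's definitions.

```python
import numpy as np

def noise_efficiency(sd_ref: float, dlp_ref: float, sd: float, dlp: float) -> float:
    """Ratio of the noise expected from a DLP change to the measured noise.

    CT quantum noise is assumed to scale as 1/sqrt(dose), so halving DLP
    should raise the standard deviation of CT numbers by sqrt(2). Values
    near 1.0 mean the protocol trades dose for noise exactly as expected.
    """
    expected = sd_ref * np.sqrt(dlp_ref / dlp)
    return expected / sd

# e.g. noise_efficiency(sd_ref=10.0, dlp_ref=400.0, sd=13.5, dlp=200.0)
```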
