• Title/Summary/Keyword: various processing methods

Search Results: 1,791

Estimation of Rice Heading Date of Paddy Rice from Slanted and Top-view Images Using Deep Learning Classification Model (딥 러닝 분류 모델을 이용한 직하방과 경사각 영상 기반의 벼 출수기 판별)

  • Hyeok-jin Bak;Wan-Gyu Sang;Sungyul Chang;Dongwon Kwon;Woo-jin Im;Ji-hyeon Lee;Nam-jin Chung;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.4 / pp.337-345 / 2023
  • Estimating the rice heading date is one of the most crucial agricultural tasks related to productivity. However, due to abnormal climates around the world, it is becoming increasingly challenging to estimate the rice heading date, so a more objective classification method is needed than the existing ones. In this study, we aimed to classify the rice heading stage from various images using a CNN classification model. We collected top-view images taken from a drone and a phenotyping tower, as well as slanted-view images captured with an RGB camera. The collected images underwent preprocessing to prepare them as input data for the CNN model. The architectures employed were ResNet50, InceptionV3, and VGG19, all commonly used in image classification. Every model achieved an accuracy of 0.98 or higher regardless of architecture and image type. We also used Grad-CAM to visually confirm which image features the model attended to when classifying, and then verified that the model accurately estimates the rice heading date in paddy fields. Across the four paddy fields, the estimated heading date deviated from the observed date by approximately one day on average. These results suggest that the heading date can be estimated automatically and quantitatively from various paddy field monitoring images.
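
As a rough illustration of the pipeline this abstract describes, here is a minimal transfer-learning sketch, assuming TensorFlow/Keras with a hypothetical `paddy_images/` directory containing one subfolder per heading stage. It is not the authors' code; ResNet50 stands in for any of the three architectures mentioned.

```python
# Minimal sketch: fine-tune a pretrained ResNet50 to classify paddy images
# as pre-heading vs. heading. Directory layout and epoch count are hypothetical.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Hypothetical layout: paddy_images/{pre_heading,heading}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "paddy_images", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # train only the classification head first

model = tf.keras.Sequential([
    # ResNet50 expects its own input normalization
    tf.keras.layers.Lambda(tf.keras.applications.resnet50.preprocess_input),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # heading vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

Grad-CAM, as used in the paper, would then be applied to the last convolutional layer of the trained backbone to visualize which regions drive each prediction.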

Comparison on the Extract Content by Different Processing Method in Peony (Paeonia lactiflora Pall.) Root (작약 품종의 가공방법에 따른 엑스 함량 비교)

  • Choung, Myoung-Gun;An, Young-Nam;Kang, Kwang-Hee;Cho, Young-Son;Kim, Jae-Hyun
    • Korean Journal of Medicinal Crop Science / v.11 no.3 / pp.201-206 / 2003
  • This experiment was conducted to establish a standard for quality evaluation of peony root (Paeonia lactiflora Pall.) cultivated in Korea. The extract content and extract pH of peony root were investigated for different root ages, cultivars, and drying methods. Extract content and extract pH, for both cork-removed and cork-unremoved roots, showed no difference among root ages. On the other hand, the extract content of two- to four-year-old roots with the cork layer unremoved was 3.7 to 9.2% higher than that of roots with the cork layer removed, suggesting that the cork layer itself may be a good source of extract. The extract content of Youngchonjakyak roots with the cork layer removed and unremoved was 36% and 30%, respectively, higher than that of Euisungjakyak and Jomjakyak, but extract pH did not differ significantly among the three four-year-old cultivars. Extract content and extract pH of four-year-old Euisungjakyak roots, both cork-removed and cork-unremoved, differed clearly among the drying methods. The extract content of cork-unremoved roots dried at room temperature was 32.8%, and that of cork-removed roots dried after the 80°C hot water treatment was 28.1%, the highest values in each case. The extract pH was highest with freeze drying (about 5.1) and lowest with the 80°C hot water treatment drying (about 3.7).

Quantitative Indices of Small Heart According to Reconstruction Method of Myocardial Perfusion SPECT Using the 201Tl (201Tl을 이용한 심근관류 SPECT에서 재구성 방법에 따른 작은 용적 심장의 정량 지표 변화)

  • Kim, Sung Hwan;Ryu, Jae Kwang;Yoon, Soon Sang;Kim, Eun Hye
    • The Korean Journal of Nuclear Medicine Technology / v.17 no.1 / pp.18-24 / 2013
  • Purpose: Myocardial perfusion SPECT using 201Tl is an important method for assessing left ventricular viability and quantitatively evaluating cardiac function, and various reconstruction methods are now used to improve image quality. In small hearts, however, the partial volume effect (PVE) may introduce errors into the quantitative indices at the reconstruction step. In this study, we compared the quantitative indices of the left ventricle obtained with different reconstruction methods of myocardial perfusion SPECT against echocardiography and quantified the differences between them. Materials and Methods: Using an echocardiographic ESV of 30 mL as the criterion, we divided 278 patients (98 male, 188 female; mean age 65.5 ± 11.1) who visited Asan Medical Center from February to September 2012 into two groups: small hearts below the criterion, and normal or large hearts otherwise. For each case we reconstructed the images with both FBP and OSEM, calculated EDV, ESV, and LVEF, and performed repeated measures ANOVA together with the indices measured by echocardiography. Results: For both men and women, EDV did not differ significantly between FBP and OSEM (p=0.053, p=0.098), but both differed significantly from echocardiography (p<0.001). For ESV, significant differences occurred among FBP, OSEM, and echocardiography, especially in women with small hearts. For LVEF, there was no difference among FBP, OSEM, and echocardiography in men and women with normal-sized hearts (p=0.375, p=0.969), but women with small hearts showed significant differences (p<0.001). Conclusion: The quantitative indices of the left ventricle did not differ between reconstruction methods in patients with normal-sized hearts, but in small hearts (ESV under 30 mL), especially in women, there were significant differences among FBP, OSEM, and echocardiography. We found that the overestimation of LVEF caused by the PVE can, on average, be reduced by applying OSEM on the gamma cameras used in the analysis.
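
The statistical comparison described here (repeated measures ANOVA over FBP, OSEM, and echocardiography on the same patients) can be sketched as follows. This is a minimal illustration with toy values, assuming Python with pandas and statsmodels; it is not the authors' analysis code, and the column names are hypothetical.

```python
# Minimal sketch: repeated-measures ANOVA comparing LVEF across three
# measurement methods applied to the same patients (toy data, 3 subjects).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one row per (patient, method) pair.
df = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "method":  ["FBP", "OSEM", "Echo"] * 3,
    "lvef":    [72.0, 65.0, 60.0, 80.0, 71.0, 63.0, 75.0, 68.0, 61.0],
})

result = AnovaRM(data=df, depvar="lvef", subject="patient",
                 within=["method"]).fit()
print(result)  # a small p-value suggests the methods disagree on LVEF
```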


True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.4 / pp.363-373 / 2020
  • Over the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of aerial images, precise 3D object modeling data, and a DTM (Digital Terrain Model) to detect and recover occlusion areas; moreover, automating this complicated process is a challenging task. In this paper, we propose a new concept for true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted in a wide range of fields. In particular, the GAN (Generative Adversarial Network) is a DL model used for various tasks in image processing and computer vision: the generator tries to produce results similar to the real images, while the discriminator judges whether images are real or fake, until the results are satisfactory. This mutually adversarial mechanism improves the quality of the results. Experiments were performed with the GAN-based Pix2Pix model using IR (Infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods yielded similar quality in terms of FID (Fréchet Inception Distance). However, when the quality of the input data is close to the target image, better results can be obtained by increasing the number of epochs. This paper is an early experimental study on the feasibility of DL-based true orthoimage generation, and further improvement will be necessary.
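
As a hedged sketch of the adversarial training this abstract relies on, the following shows one Pix2Pix-style update step, assuming PyTorch. `G` and `D` are placeholders for a U-Net generator and PatchGAN discriminator, and the L1 weight of 100 follows the original Pix2Pix paper, not necessarily this study.

```python
# Minimal sketch of one conditional-GAN (Pix2Pix) training step: the generator
# maps a LiDAR intensity image to an orthoimage; the discriminator judges
# (input, output) pairs. G, D, and both optimizers are supplied by the caller.
import torch
import torch.nn.functional as F

def train_step(G, D, opt_g, opt_d, intensity, orthoimage, l1_weight=100.0):
    # --- discriminator: real pairs -> 1, fake pairs -> 0 ---
    fake = G(intensity)
    d_real = D(torch.cat([intensity, orthoimage], dim=1))
    d_fake = D(torch.cat([intensity, fake.detach()], dim=1))
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator: fool D while staying close to the target (L1 term) ---
    d_fake = D(torch.cat([intensity, fake], dim=1))
    loss_g = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + l1_weight * F.l1_loss(fake, orthoimage))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```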

Measurement of Two-Dimensional Velocity Distribution of Spatio-Temporal Image Velocimeter using Cross-Correlation Analysis (상호상관법을 이용한 시공간 영상유속계의 2차원 유속분포 측정)

  • Yu, Kwonkyu;Kim, Seojun;Kim, Dongsu
    • Journal of Korea Water Resources Association / v.47 no.6 / pp.537-546 / 2014
  • Surface image velocimetry was introduced as an efficient and safe alternative to conventional river flow measurement methods during floods. Conventional surface image velocimetry uses a pair of images to estimate velocity fields through cross-correlation analysis, which is appropriate for analyzing images taken at short time intervals. It has some drawbacks, however: computing the average velocity over long time intervals takes a while, and it is prone to errors and uncertainties arising from flow characteristics and/or image-taking conditions. Methods using spatio-temporal images, called STIV, were developed to overcome these drawbacks. The grayscale-gradient tensor method, one of the various STIV schemes, has been shown to effectively reduce analysis time and to be fairly insensitive to measurement noise. Unfortunately, it can only be applied along the main flow direction, which means it cannot measure a genuinely two-dimensional flow field, e.g., flow in the vicinity of river structures or around river bends. The present study aimed to develop a new method for analyzing spatio-temporal images in two dimensions using cross-correlation analysis. Unlike conventional STIV, the developed method can measure truly two-dimensional flow; it also offers very high spatial resolution and reduced analysis time. A verification test using artificial images of a lid-driven cavity flow showed that the maximum error of the method is less than 10% and the average error is less than 5%, indicating that the developed scheme is fairly accurate even for two-dimensional flow.
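
The core cross-correlation step can be illustrated with a short sketch, assuming NumPy/SciPy; the window size and toy shift are arbitrary. It recovers the pixel displacement of an interrogation window between two frames, from which velocity follows as displacement × pixel size / frame interval. This is a generic illustration of the technique, not the authors' implementation.

```python
# Minimal sketch: estimate the (dy, dx) pixel displacement between two
# interrogation windows via the peak of their 2D cross-correlation.
import numpy as np
from scipy.signal import correlate2d

def displacement(window_a, window_b):
    """Return (dy, dx) by which the pattern in window_a moved in window_b."""
    a = window_a - window_a.mean()   # remove mean so the peak reflects pattern,
    b = window_b - window_b.mean()   # not overall brightness
    corr = correlate2d(b, a, mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Zero displacement corresponds to index (rows-1, cols-1) of the full map.
    dy = peak[0] - (a.shape[0] - 1)
    dx = peak[1] - (a.shape[1] - 1)
    return dy, dx

# Toy check: shift a random pattern by (3, -2) pixels and recover it.
rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, shift=(3, -2), axis=(0, 1))
print(displacement(a, b))  # -> (3, -2)
```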

Improved Original Entry Point Detection Method Based on PinDemonium (PinDemonium 기반 Original Entry Point 탐지 방법 개선)

  • Kim, Gyeong Min;Park, Yong Su
    • KIPS Transactions on Computer and Communication Systems / v.7 no.6 / pp.155-164 / 2018
  • Many malicious programs are compressed or encrypted with various commercial packers to hinder reverse engineering, so malware analysts must first decompress or decrypt them. The OEP (Original Entry Point) is the address of the first instruction executed after the encrypted or compressed executable has been restored to its original binary state. Several unpackers, including PinDemonium, execute the packed file, keep track of the addresses until the OEP appears, and search for the OEP among those addresses. However, instead of pinpointing a single exact OEP, unpackers produce a relatively large set of OEP candidates, and sometimes the OEP is missing from the candidates altogether; in other words, existing unpackers have difficulty finding the correct OEP. We have developed a new tool that produces smaller OEP candidate sets by adding two methods based on properties of the OEP, namely that the function call sequence and parameters are the same in the packed and original programs. The first method is based on function calls. Programs written in C/C++ are compiled into binary code, and compiler-specific system functions are added to the compiled program. After examining these functions, we extended PinDemonium with a method that detects the completion of unpacking by matching the patterns of system functions called in packed and unpacked programs. The second method is based on parameters, which include not only user-entered inputs but also system inputs. We extended PinDemonium with a method that finds the OEP using the system parameters of a particular function in stack memory. OEP detection experiments were performed on sample programs packed by 16 commercial packers. Compared to PinDemonium, our tool reduces the OEP candidate set by more than 40% on average, excluding two commercial packers that could not be executed due to anti-debugging techniques.
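
A toy illustration of the first idea, matching a compiler-inserted startup-call pattern against a runtime trace, might look like the following. This is plain Python outside the Pin instrumentation, not the paper's extension itself; the call names are typical MSVC CRT startup APIs used here as examples.

```python
# Minimal sketch: scan a recorded (address, api_name) trace for the call
# pattern the original (unpacked) program emits at startup; the address where
# the pattern begins is a likely OEP reached after the unpacking stub finishes.
ORIGINAL_STARTUP_PATTERN = ["GetSystemTimeAsFileTime", "GetCurrentThreadId",
                            "GetCurrentProcessId", "QueryPerformanceCounter"]

def find_oep_candidates(trace):
    """trace: list of (address, api_name) pairs recorded while the packed
    binary runs. Returns addresses where the startup pattern begins."""
    names = [api for _, api in trace]
    n = len(ORIGINAL_STARTUP_PATTERN)
    hits = []
    for i in range(len(names) - n + 1):
        if names[i:i + n] == ORIGINAL_STARTUP_PATTERN:
            hits.append(trace[i][0])  # address of the first matching call
    return hits
```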

The Quality Characteristics of Deodeok-Doenjang Pre-treated by Various Sugaring Methods during Storage (전처리 당절임 방법 차이에 따른 더덕된장의 저장 중 품질특성)

  • Choi, Duck-Joo;Lee, Yun-Jung;Kim, Youn-Kyeong;Kim, Mun-Ho;Choi, So-Rye;Cha, Hwan-Soo;Youn, Aye-Ree
    • Korean journal of food and cookery science / v.30 no.6 / pp.663-669 / 2014
  • We pre-treated Deodeok and pickled it with Doenjang to improve its preservability and enable wide distribution, then stored it for 3 weeks at 7°C and measured its quality. The sample pre-treated with 20% dextrin retained its initial texture better than samples pre-treated by other methods after 3 weeks of storage (p<0.05). The control samples showed propagation of microorganisms, whereas the Doenjang pre-treated with 20% dextrin or sugar showed a smaller increase in water content. The microbial count in the control samples was 4.0 log CFU/g after 3 weeks of storage, but in the sample pre-treated with 20% dextrin it was 2.2 log CFU/g; in other words, microbial propagation was minimized by the 20% dextrin pre-treatment (p<0.05). In the sensory evaluation, this D-20 sample scored highest for color, smell, taste, and overall preference. Thus, the best processing method for maintaining the quality of Deodeok is to sugar it with 20% dextrin before pickling with Doenjang. The product prepared by this process can be preserved for 3 weeks at 37°C; that is, it is expected to have a refrigerated shelf life of 3 months.

Anti-listeria Activity of Lactococcus lactis Strains Isolated from Kimchi and Characteristics of Partially Purified Bacteriocins (김치에서 분리한 Lactococcus lactis 균주의 항리스테리아 활성 및 부분 정제된 박테리오신의 특성)

  • Son, Na-Yeon;Kim, Tae-Woon;Yuk, Hyun-Gyun
    • Journal of Food Hygiene and Safety / v.37 no.2 / pp.97-106 / 2022
  • Listeria monocytogenes (L. monocytogenes) is a gram-positive foodborne pathogen with a very high fatality rate. Unlike most foodborne pathogens, L. monocytogenes is capable of growing at low temperatures, such as in refrigerated foods, so various physical and chemical prevention methods are used in the manufacturing, processing, and distribution of food. However, these methods have limitations, such as possible changes to food quality and consumer concerns about synthetic preservatives. The aim of this study was therefore to evaluate the anti-listeria activity of lactic acid bacteria (LAB) isolated from kimchi and to characterize the bacteriocins produced by Lactococcus lactis, one of the strains isolated from kimchi. Analysis of the anti-listeria activity of a total of 36 isolates (Lactobacillus, Weissella, and Lactococcus) from kimchi by the agar overlay method revealed that L. lactis NJ 1-10 and NJ 1-16 had the highest anti-listeria activity. For quantitative analysis of the anti-listeria activity, NJ 1-10 and NJ 1-16 were each co-cultured with L. monocytogenes in Brain Heart Infusion (BHI) broth. As a result, L. monocytogenes was reduced by 3.0 log CFU/mL in 20 h, lowering the bacterial count below the detection limit. Both LAB strains showed anti-listeria activity against 24 serotypes of L. monocytogenes, although the sizes of the clear zones differed slightly. No clear zone was observed when the supernatants of both LAB cultures were treated with proteinase K, indicating that their anti-listeria activities are likely due to the production of bacteriocins. The partially purified bacteriocins of NJ 1-10 and NJ 1-16 were relatively heat-stable at 60°C and 80°C, yet their anti-listeria activities were completely lost after 60 min of treatment at 100°C and 15 min at 121°C. The pH stability analysis showed that their anti-listeria activities were most stable at pH 4.01 and decreased with increasing pH, though they were not completely lost. The partially purified bacteriocins remained relatively stable in acetone, ethanol, and methanol, while their activities were reduced, but not completely lost, after chloroform treatment. In conclusion, this study showed that the bacteriocins produced by NJ 1-10 and NJ 1-16 effectively reduced L. monocytogenes and were relatively stable against heat, pH, and organic solvents, implying their potential as natural antibacterial substances for controlling L. monocytogenes in food.

Real data-based active sonar signal synthesis method (실데이터 기반 능동 소나 신호 합성 방법론)

  • Yunsu Kim;Juho Kim;Jongwon Seok;Jungpyo Hong
    • The Journal of the Acoustical Society of Korea / v.43 no.1 / pp.9-18 / 2024
  • The importance of active sonar systems is growing due to the increasing quietness of underwater targets and the rise in ambient noise caused by growing maritime traffic. However, the low signal-to-noise ratio of the echo signal, due to multipath propagation, various clutter, ambient noise, and reverberation, makes it difficult to identify underwater targets with active sonar. Attempts have been made to apply data-driven methods such as machine learning and deep learning to improve the performance of underwater target recognition systems, but the nature of sonar datasets makes it difficult to collect enough data for training. Methods based on mathematical modeling have mainly been used to compensate for the shortage of active sonar data, but such methods are limited in their ability to accurately simulate complex underwater phenomena. In this paper, we therefore propose a sonar signal synthesis method based on a deep neural network. To apply a neural network to sonar signal synthesis, the proposed method adapts the attention-based encoder and decoder, the main modules of the Tacotron model widely used in speech synthesis, to sonar signals. By training the proposed model on a dataset collected by placing a simulated target in an actual marine environment, it is possible to synthesize signals more similar to real ones. To verify the performance of the proposed method, a Perceptual Evaluation of Audio Quality (PEAQ) test was conducted, and the score difference from the actual signal was within -2.3 across a total of four different environments. These results demonstrate that the active sonar signals generated by the proposed method closely approximate real signals.
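
The attention-based encoder-decoder idea borrowed from Tacotron can be sketched roughly as below, assuming PyTorch. All dimensions and module choices here are placeholders; the actual model in the paper is considerably more elaborate.

```python
# Minimal sketch: an encoder consumes a conditioning sequence and a decoder
# emits spectrogram frames one step at a time, attending over encoder outputs.
import torch
import torch.nn as nn

class SonarSynth(nn.Module):
    def __init__(self, in_dim=80, hid=256, out_dim=80):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hid, batch_first=True)
        self.attn_q = nn.Linear(hid, hid)              # query projection
        self.decoder_cell = nn.GRUCell(out_dim + hid, hid)
        self.proj = nn.Linear(hid, out_dim)            # frame projection

    def forward(self, cond, n_frames):
        enc, _ = self.encoder(cond)                    # (B, T_in, hid)
        B = cond.size(0)
        h = enc.new_zeros(B, enc.size(2))              # decoder state
        frame = enc.new_zeros(B, self.proj.out_features)
        outputs = []
        for _ in range(n_frames):
            # dot-product attention over encoder states
            scores = torch.bmm(enc, self.attn_q(h).unsqueeze(2))   # (B, T_in, 1)
            context = (torch.softmax(scores, dim=1) * enc).sum(1)  # (B, hid)
            h = self.decoder_cell(torch.cat([frame, context], dim=1), h)
            frame = self.proj(h)
            outputs.append(frame)
        return torch.stack(outputs, dim=1)             # (B, n_frames, out_dim)

model = SonarSynth()
spec = model(torch.randn(2, 50, 80), n_frames=100)     # toy conditioning input
```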

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.183-203 / 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly, so automatically summarizing key events from massive amounts of news data will help users survey many events at a glance. In addition, building and providing an event network based on the relevance of events can greatly help readers understand current events. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017 and, in preprocessing with NPMI and Word2Vec, kept only meaningful words and merged synonyms. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date, find peaks in each distribution, and detect events. A total of 32 topics were extracted, and the time of occurrence of each event was deduced from the point at which the corresponding topic distribution surged. As a result, a total of 85 events were detected, of which the final 16 events were retained and presented after filtering with Gaussian smoothing. We then calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we computed the relevance between events and connected them accordingly. Finally, we set up the event network with each event as a vertex and the relevance scores as the edges connecting the vertices. The event network constructed in this way allowed us to sort the major events in the Korean political and social fields over the past year in chronological order and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it possible to easily analyze large amounts of data and to identify relations between events that were difficult to detect with existing methods. In the text preprocessing, we applied various text mining techniques together with Word2Vec to improve the extraction accuracy of proper nouns and compound nouns, which have been difficult to handle in Korean text analysis. The event detection and network construction techniques in this study have the following advantages in practical application. First, LDA topic modeling, an unsupervised learning method, can easily extract topics, topic words, and their distributions from huge amounts of data, and by using the date information of the collected news articles, the distribution of each topic can be expressed as a time series. Second, by calculating relevance scores and constructing an event network from the co-occurrence of topics, which is difficult to grasp with existing event detection, we can present the connections between events in a summarized form; indeed, the inter-event relevance-based event network proposed in this study was actually constructed in order of occurrence time, and the starting point of a series of events can be identified through the network.
A limitation of this study is that LDA topic modeling produces different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned through the subjective judgment of the researcher. Also, since each topic is assumed to be exclusive and independent, relevance between topics is not taken into account. Subsequent studies should calculate the relevance between events not covered in this study or between events belonging to the same topic.
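
The two core steps, LDA topic modeling and cosine-similarity linking, can be sketched as follows, assuming Python with gensim and networkx. The toy documents and the 0.5 threshold are placeholders, not values from the study.

```python
# Minimal sketch: fit LDA to tokenized news documents, then link detected
# events by the cosine similarity of their topic vectors.
import numpy as np
import networkx as nx
from gensim import corpora, models

docs = [["election", "party", "vote"], ["protest", "square", "police"],
        ["election", "result", "party"]]          # toy tokenized articles
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

# Topic distribution per document (a stand-in for the per-date distributions
# whose peaks mark events in the paper).
theta = np.array([[p for _, p in lda.get_document_topics(bow, minimum_probability=0)]
                  for bow in corpus])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Connect events (here: documents) whose topic vectors are sufficiently similar.
G = nx.Graph()
for i in range(len(theta)):
    for j in range(i + 1, len(theta)):
        s = cosine(theta[i], theta[j])
        if s > 0.5:
            G.add_edge(i, j, weight=s)
print(G.edges(data=True))
```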