• Title/Summary/Keyword: Input Features (입력 특징)

Search Results: 2,145

Estimation of Rice Heading Date of Paddy Rice from Slanted and Top-view Images Using Deep Learning Classification Model (딥 러닝 분류 모델을 이용한 직하방과 경사각 영상 기반의 벼 출수기 판별)

  • Hyeok-jin Bak;Wan-Gyu Sang;Sungyul Chang;Dongwon Kwon;Woo-jin Im;Ji-hyeon Lee;Nam-jin Chung;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.4 / pp.337-345 / 2023
  • Estimating the rice heading date is one of the most crucial agricultural tasks related to productivity. However, due to abnormal climates around the world, it is becoming increasingly challenging to estimate the rice heading date. Therefore, a more objective classification method for estimating the rice heading date is needed than the existing methods. In this study, we aimed to classify the rice heading stage from various images using a CNN classification model. We collected top-view images taken from a drone and a phenotyping tower, as well as slanted-view images captured with an RGB camera. The collected images underwent preprocessing to prepare them as input data for the CNN model. The CNN architectures employed were ResNet50, InceptionV3, and VGG19, which are commonly used in image classification. All models achieved an accuracy of 0.98 or higher, regardless of architecture and image type. We also used Grad-CAM to visually check which image features the model attended to when classifying. We then verified that our model accurately estimates the rice heading date in paddy fields. Across the four paddy fields, the estimated heading date differed by approximately one day on average. This suggests that the heading date can be estimated automatically and quantitatively from various paddy field monitoring images.
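To make the classification pipeline concrete, the following is a minimal transfer-learning sketch in Python/Keras, assuming a ResNet50 backbone (one of the three architectures the paper compares), images sorted into heading/not-heading class folders, and illustrative hyperparameters; it is not the authors' code, and the Grad-CAM visualization step is omitted.

```python
# Hedged sketch: a ResNet50-based binary classifier for "heading" vs. "pre-heading"
# paddy images. Preprocessing, hyperparameters, and data layout are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)   # assumed input size
BATCH = 32

# Assumed directory layout: data/train/<class_name>/*.jpg and data/val/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=BATCH)

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # frozen ImageNet backbone; fine-tuning could follow

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # heading vs. not heading
model = models.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```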

Factors influencing the axes of anterior teeth during SWA on masse sliding retraction with orthodontic mini-implant anchorage: a finite element study (교정용 미니 임플랜트 고정원과 SWA on masse sliding retraction 시 전치부 치축 조절 요인에 관한 유한요소해석)

  • Jeong, Hye-Sim;Moon, Yoon-Shik;Cho, Young-Soo;Lim, Seung-Min;Sung, Sang-Jin
    • The Korean Journal of Orthodontics / v.36 no.5 / pp.339-348 / 2006
  • Objective: With the development of skeletal anchorage systems, orthodontic mini-implant (OMI) assisted on masse sliding retraction has become part of general orthodontic treatment. But compared to the emphasis on successful anchorage preparation, control of the anterior teeth axis has not been emphasized enough. Methods: A 3-D finite element Base model of the maxillary dental arch and a Lingual tipping model with lingually inclined anterior teeth were constructed. To evaluate factors influencing the axis of the anterior teeth when an OMI was used as anchorage, the models were simulated with 2 mm or 5 mm retraction hooks and/or the addition of a 4 mm compensating curve (CC) on the main archwire. The stress distribution on the roots and an axis graph enlarged 25,000 times were evaluated. Results: The intrusive component of the retraction force, directed postero-superiorly from the 2 mm height hook, did not reduce the lingual tipping of the anterior teeth. When the hook height was increased to 5 mm, the lateral incisor showed crown-labial and root-lingual torque, and uncontrolled tipping of the canine increased. A 4 mm CC added to the main archwire also induced crown-labial and root-lingual torque of the lateral incisor, but uncontrolled tipping of the canine decreased. The Lingual tipping model showed very similar results to the Base model. Conclusion: The results of this study showed that the height of the hook and the compensating curve on the main archwire can influence the axis of the anterior teeth. These data can be used as guidelines for clinical application.
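For intuition about why hook height influences the anterior axis, here is a small, hedged numerical illustration (not taken from the paper's finite element model): the tipping moment about the anterior segment's center of resistance depends on the lever arm between the force line at the hook and the center of resistance, so raising the hook shortens that arm. All numbers below are assumed for illustration only.

```python
# Hedged illustration: lever-arm effect of hook height on the tipping moment
# about an assumed center of resistance (CR). Values are illustrative.
import math

def tipping_moment(force_n, hook_height_mm, cr_height_mm, force_angle_deg=0.0):
    """Approximate moment (N*mm) of the horizontal retraction-force component about the CR."""
    # Horizontal force component acts with lever arm = (cr_height - hook_height).
    horizontal = force_n * math.cos(math.radians(force_angle_deg))
    lever_arm = cr_height_mm - hook_height_mm
    return horizontal * lever_arm  # larger value -> stronger crown-lingual tipping tendency

for hook in (2.0, 5.0):  # the two hook heights simulated in the study
    m = tipping_moment(force_n=1.5, hook_height_mm=hook, cr_height_mm=8.0)
    print(f"hook {hook} mm -> tipping moment about CR = {m:.1f} N*mm")
```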

A Study on Developing Customized Bolus using 3D Printers (3D 프린터를 이용한 Customized Bolus 제작에 관한 연구)

  • Jung, Sang Min;Yang, Jin Ho;Lee, Seung Hyun;Kim, Jin Uk;Yeom, Du Seok
    • The Journal of Korean Society for Radiation Therapy / v.27 no.1 / pp.61-71 / 2015
  • Purpose : 3D printers create three-dimensional objects from digital models. Based on this capability, it is feasible to fabricate a bolus that minimizes the air gap between skin and bolus in radiotherapy. This study compares and analyzes the air gap and target dose of a branded 1 cm bolus and a customized bolus fabricated with a 3D printer. Materials and Methods : A RANDO phantom with a protruding tumor was scanned with a CT simulator. The CT DICOM files were converted into an STL file compatible with 3D printers. Using this, a customized bolus molding box (maintaining a 1 cm thickness) was printed, and paraffin was melted into it to form the customized bolus. The air gaps of the customized bolus and the branded 1 cm bolus were checked, and the difference in air gap was used to compare $D_{max}$, $D_{min}$, $D_{mean}$, $D_{95%}$ and $V_{95%}$ in the treatment plans through Eclipse. Results : Producing the customized bolus took about 3 days. The total air-gap volume averaged $3.9cm^3$ for the customized bolus and $29.6cm^3$ for the branded 1 cm bolus; the customized bolus produced with the 3D printer was therefore more useful in minimizing the air gap. In the 6 MV photon plan, $D_{max}$, $D_{min}$, $D_{mean}$, $D_{95%}$, $V_{95%}$ of the GTV were 102.8%, 88.1%, 99.1%, 95.0%, 94.4% with the customized bolus and 101.4%, 92.0%, 98.2%, 95.2%, 95.7% with the branded 1 cm bolus, respectively. In the proton plan, $D_{max}$, $D_{min}$, $D_{mean}$, $D_{95%}$, $V_{95%}$ of the GTV were 104.1%, 84.0%, 101.2%, 95.1%, 99.8% with the customized bolus and 104.8%, 87.9%, 101.5%, 94.9%, 99.9% with the branded 1 cm bolus, respectively. Thus, in the treatment plans there was no significant difference between the customized bolus and the 1 cm bolus, although normal tissue near the GTV received a relatively lower dose. Conclusion : The customized bolus produced with a 3D printer was effective in minimizing the air gap, especially for treatment areas with irregular surfaces. However, the air gap between the branded bolus and the skin was not large enough to cause a change in target dose, whereas a dose decrease in the chest wall was confirmed even with a small air gap. Production of the customized bolus took about 3 days and the cost was quite high; therefore, commercialization of customized boluses made with 3D printers requires low-cost printing materials adequate for use as a bolus.
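As a rough illustration of the DICOM-to-STL step described above, the sketch below converts a binary mask derived from a CT volume into an STL mesh using scikit-image's marching cubes and numpy-stl. The authors presumably used dedicated treatment-planning or CAD software; the array shapes, voxel spacing, and file names here are assumptions.

```python
# Hedged sketch: turn a binary mask derived from CT slices into an STL file
# that a 3D printer or slicer can consume. Requires scikit-image and numpy-stl.
import numpy as np
from skimage import measure
from stl import mesh

# Assumed: a 3-D binary mask (z, y, x) of the region to print, e.g. the bolus
# volume derived from the body contour. A toy sphere is used here so the
# script runs standalone.
z, y, x = np.mgrid[:64, :64, :64]
mask = ((z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2) < 20 ** 2

# Extract a surface mesh; `spacing` would come from the DICOM voxel size (mm).
verts, faces, normals, values = measure.marching_cubes(
    mask.astype(np.float32), level=0.5, spacing=(2.5, 1.0, 1.0))

# Pack vertices and faces into an STL mesh and save it.
stl_mesh = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    for j in range(3):
        stl_mesh.vectors[i][j] = verts[face[j], :]
stl_mesh.save("customized_bolus.stl")
print(f"Wrote {faces.shape[0]} triangles to customized_bolus.stl")
```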


Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.77-97 / 2019
  • The large amount of data that emerges from the hyper-connected environment of the Fourth Industrial Revolution is a major factor distinguishing it from existing production environments. This environment has the two-sided characteristic of producing data while consuming it, and the data produced in this way creates additional value. Because of this massive scale, future information systems must process more data than existing systems in terms of quantity and, in terms of quality, must be able to extract only the information a user actually needs from that volume. In a small-scale information system a person can understand the system accurately and obtain the necessary information, but in complex systems that are difficult to understand accurately, acquiring the desired information becomes increasingly difficult. In other words, more accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient information-system performance can be addressed by building a semantic web, which enables various kinds of information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. As in most other organizations, IT has been introduced in the military, and most work is now done through information systems. As these systems come to contain ever larger amounts of data, effort is needed to make them easier to use by better exploiting that data. An ontology-based system forms a large semantic data network through connections with other systems, can draw on a wide range of databases, and has the advantage of searching more precisely and quickly through relationships between predefined concepts. In this paper, we propose a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, because the complicated logistics information system with its large amount of data had become difficult to use. However, it extracts pre-specified information from the logistics information system and displays it as web pages, so only a few pre-defined items can be checked, extending it with additional functions is time-consuming, and it is organized by category without a search function. It therefore has the disadvantage that it can be used effectively only by someone already familiar with the system. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. To construct it, useful functions such as performance-based logistics (PBL) support contract management and a component dictionary were additionally identified and included in the ontology.
To confirm whether the constructed ontology can be used for decision support, meaningful analysis functions such as calculation of aircraft utilization rates and inquiries about performance-based logistics contracts need to be implemented. In particular, in contrast to past ontology studies that built static ontology databases, this study models time-series data whose values change over time, such as the daily state of each aircraft, in the ontology, and confirms that the utilization rate can be calculated from the constructed ontology under various criteria. In addition, data related to performance-based logistics contracts, introduced as a new maintenance approach for aircraft and other munitions, can be queried in various ways, and the performance indexes used in such contracts are easy to calculate through reasoning and built-in functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, further confirming the usability of the constructed ontology. Finally, the failure rate or reliability of each component, including the MTBF of selected items, can be calculated from actual part consumption records, and mission and system reliability are computed from these values. To confirm the usability of the constructed ontology-based logistics situation management system, we apply the Technology Acceptance Model (TAM), a representative model for measuring technology acceptance, and find that the proposed system is more useful and convenient than the existing system.
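The following is a minimal, hedged sketch of the time-series ontology idea described above, using Python's rdflib: daily aircraft status records are stored as triples and an aircraft utilization rate is computed with a SPARQL aggregate query. The namespace, class, and property names (DailyStatus, aircraft, status) are illustrative assumptions, not the authors' schema.

```python
# Hedged sketch (not the authors' implementation): daily aircraft status as RDF
# triples, with a utilization rate computed by a SPARQL query.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/defense#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Assumed time-series facts: (aircraft, date, status).
records = [
    ("AC001", "2019-01-01", "available"),
    ("AC001", "2019-01-02", "maintenance"),
    ("AC002", "2019-01-01", "available"),
    ("AC002", "2019-01-02", "available"),
]
for i, (aircraft, date, status) in enumerate(records):
    obs = EX[f"obs{i}"]
    g.add((obs, RDF.type, EX.DailyStatus))
    g.add((obs, EX.aircraft, EX[aircraft]))
    g.add((obs, EX.date, Literal(date, datatype=XSD.date)))
    g.add((obs, EX.status, Literal(status)))

# Utilization rate = available days / total observed days, per aircraft.
query = """
SELECT ?aircraft
       (SUM(IF(?status = "available", 1, 0)) AS ?availableDays)
       (COUNT(?obs) AS ?totalDays)
WHERE {
    ?obs a ex:DailyStatus ;
         ex:aircraft ?aircraft ;
         ex:status ?status .
}
GROUP BY ?aircraft
"""
for row in g.query(query, initNs={"ex": EX}):
    rate = int(row.availableDays) / int(row.totalDays)
    print(f"{row.aircraft} utilization rate: {rate:.0%}")
```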

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.219-240 / 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Recently, sentiment analysis methods have been widely used in many fields. As good examples, data-driven surveys are based on analyzing the subjectivity of text data posted by users, and market research is conducted by analyzing users' review posts to quantify a target product's reputation. The basic method of sentiment analysis is to use a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of many sentiment words differs across domains. For example, the sentiment word 'sad' indicates a negative meaning in most domains, but not necessarily in movie reviews. In order to perform accurate sentiment analysis, we need to build a sentiment dictionary for the given domain. However, building a sentiment lexicon this way is time-consuming, and without a general-purpose sentiment lexicon as seed data many sentiment vocabularies are missed. To address this problem, several studies have constructed sentiment lexicons suitable for specific domains based on 'OPEN HANGUL' and 'SentiWordNet', which are general-purpose sentiment lexicons. However, OPEN HANGUL is no longer in service, and SentiWordNet does not work well because of the language gap introduced when converting Korean words into English. There are therefore restrictions on using such general-purpose sentiment lexicons as seed data for building a domain-specific sentiment lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons. The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built to quickly construct the sentiment dictionary for a target domain. Specifically, it constructs sentiment vocabularies by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having either a positive or a negative meaning. Third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from the glosses classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model reaches up to 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603. Furthermore, we add sentiment information about frequently used coined words and emoticons that appear mainly on the Web. The KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the importance of developing sentiment dictionaries has gradually declined.
However, a recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, resulting in sentiment analysis with higher accuracy (Teng, Z., 2016). This result indicates that a sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features that improve the accuracy of deep learning models. The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as features for deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
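As an illustration of the gloss classification step (the first two procedures above), the following is a minimal Bi-LSTM binary classifier in Python/Keras; the vocabulary size, sequence length, and layer sizes are assumptions, and the Korean tokenization and integer encoding of the SKLD glosses are left out.

```python
# Hedged sketch (not the authors' code): a Bi-LSTM model that labels
# dictionary glosses as positive (1) or negative (0).
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size after tokenizing glosses
MAX_LEN = 50         # assumed maximum gloss length in tokens
EMBED_DIM = 128

model = models.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 = positive gloss, 0 = negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would use integer-encoded gloss sequences, for example:
# model.fit(x_train, y_train, validation_split=0.1, epochs=5, batch_size=64)
```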