• Title/Summary/Keyword: Automatic Modeling

Operational Characteristics of a Domestic Commercial Semi-automatic Vegetable Transplanter (상용 국산 반자동 채소 정식기의 작동 특성 분석)

  • Park, Jeong-Hyeon;Hwang, Seok-Joon;Nam, Ju-Seok
    • Journal of Agriculture & Life Science
    • /
    • v.52 no.6
    • /
    • pp.127-138
    • /
    • 2018
  • In this study, the operational characteristics of a domestic semi-automatic vegetable transplanter were investigated. The main functional components and power path of the transplanter were analyzed. The link structure of the transplanting device was kinematically analyzed, and 3D modeling and dynamic simulation were performed. Based on this analysis, the trajectory of the bottom end of the transplanting hopper was derived. The plant spacing according to the engine speed and the shifting stage of the transplanting transmission was also analyzed and verified by field test. The main results are as follows: the transplanting device is a one-degree-of-freedom (DOF) four-bar-link-type mechanism comprising 10 links and 13 rotating joints. The transplanting hopper plants seedlings in a vertical direction while maintaining a constant posture by means of the links of the transplanting device. Power is transmitted from the engine to both the driving part and the transplanting part, and the maximum and minimum plant spacings of the transplanting device were 428.97 mm and 261.20 mm, respectively.
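
As a rough illustration of how plant spacing follows from the engine speed and the transmission ratios described above, the sketch below computes spacing as forward travel per hopper planting cycle. All ratios and the wheel radius are hypothetical placeholders, not the transplanter's actual specifications.

```python
import math

def plant_spacing_mm(engine_rpm, wheel_ratio, planting_ratio, wheel_radius_mm):
    """Spacing = ground distance traveled during one planting cycle."""
    wheel_rpm = engine_rpm / wheel_ratio            # driving-wheel speed
    cycles_per_min = engine_rpm / planting_ratio    # hopper planting frequency
    ground_mm_per_min = 2 * math.pi * wheel_radius_mm * wheel_rpm
    return ground_mm_per_min / cycles_per_min

# Spacing changes with the shifting stage of the transplanting transmission.
# Note that in this idealized model the engine speed cancels out; in practice
# wheel slip makes measured spacing deviate, which is why a field test is needed.
for stage_ratio in (40.0, 50.0, 65.0):              # hypothetical stage ratios
    s = plant_spacing_mm(1800, wheel_ratio=30.0,
                         planting_ratio=stage_ratio, wheel_radius_mm=250.0)
    print(f"stage ratio {stage_ratio}: spacing {s:.1f} mm")
```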

Development and Wind Speed Evaluation of Ultra High Resolution KMAPP Using Urban Building Information Data (도시건물정보를 반영한 초고해상도 규모상세화 수치자료 산출체계(KMAPP) 구축 및 풍속 평가)

  • Kim, Do-Hyoung;Lee, Seung-Wook;Jeong, Hyeong-Se;Park, Sung-Hwa;Kim, Yeon-Hee
    • Atmosphere
    • /
    • v.32 no.3
    • /
    • pp.179-189
    • /
    • 2022
  • The purpose of this study is to build and evaluate a high-resolution (50 m) KMAPP (Korea Meteorological Administration Post Processing) system that reflects building data. KMAPP uses LDAPS (Local Data Assimilation and Prediction System) data to downscale ground wind speed through surface-roughness and elevation corrections. In the downscaling process, we improved the vegetation roughness data to reflect the impact of city buildings. AWS (Automatic Weather Station) data from 48 locations in the metropolitan area, including Seoul, in 2019 were used as observation data for verification. A sensitivity analysis was conducted by dividing the experiments according to the method of improving the vegetation roughness length. KMAPP was shown to mitigate the tendency of LDAPS to overestimate surface wind speeds: compared to LDAPS, the Root Mean Square Error (RMSE) improved by approximately 23% and the Mean Bias Error (MBE) by about 47%. However, errors in the roughness length remained around the Han River and the coastline, so the surface roughness length was improved in KMAPP and the building information was reflected. In the sensitivity experiment with the improved KMAPP, the RMSE improved by a further 6% and the MBE by a further 3%. This study shows that a high-resolution KMAPP reflecting building information can improve wind-speed accuracy in urban areas.
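
The roughness-length correction at the heart of this downscaling can be pictured with the neutral logarithmic wind profile. The sketch below is a generic two-step exposure correction, not KMAPP's actual scheme (which also applies elevation corrections); the heights and roughness lengths are illustrative.

```python
import numpy as np

def downscale_wind(u_ref, z_ref=10.0, z_blend=60.0,
                   z0_model=0.1, z0_local=1.0, z_target=10.0):
    """Lift the model wind to a blending height using the model roughness
    length, then bring it back down using the local (building-aware)
    roughness length, assuming the neutral log profile u(z) ~ ln(z/z0)."""
    u_blend = u_ref * np.log(z_blend / z0_model) / np.log(z_ref / z0_model)
    return u_blend * np.log(z_target / z0_local) / np.log(z_blend / z0_local)

# A rougher urban surface (larger z0_local) lowers the diagnosed 10 m wind:
print(downscale_wind(5.0, z0_local=0.1))  # unchanged, same roughness
print(downscale_wind(5.0, z0_local=1.0))  # reduced by the rougher surface
```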

A Method for Information Management of Defects in Bridge Superstructure Using BIM-COBie (BIM-COBie를 활용한 교량 상부구조의 손상정보 관리 방법)

  • Lee, Sangho;Lee, Jung-Bin;Tak, Ho-Kyun;Lee, Sang-Ho
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.2
    • /
    • pp.165-173
    • /
    • 2023
  • The management and evaluation of defect data in bridges are generally based on inspection and diagnosis data, including the exterior damage map and defect quantity table prepared during periodic inspections. Since most of these data are recorded in 2D-based documents that are difficult to digitize in a standardized manner, it is challenging to utilize them beyond their originally defined purpose. This study proposes methods to efficiently build a BIM (Building Information Modeling)-based bridge damage model from the raw data of inspection reports, and to manage and utilize the damage information linked to the bridge model through spreadsheet data generated by COBie (Construction Operations Building Information Exchange). In addition, a method to conduct condition assessment of bridge defects was proposed, based on an automatic evaluation process using the digitized bridge member and damage information. The proposed methods were tested on the superstructure of a PSC-I girder concrete bridge, and their efficiency and effectiveness were verified.
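
To make the COBie linkage concrete, here is a minimal sketch of joining spreadsheet-style defect records to bridge members and applying an automatic condition grade. The sheet layout, column names, and grading thresholds are assumptions for illustration, not the paper's actual schema.

```python
import pandas as pd

# Hypothetical COBie-style sheets: one for components (bridge members),
# one for defect records linked to them by component name.
components = pd.DataFrame({
    "Name": ["Girder-01", "Girder-02", "Deck-01"],
    "Area_m2": [120.0, 120.0, 300.0],
})
defects = pd.DataFrame({
    "Component": ["Girder-01", "Girder-01", "Deck-01"],
    "Type": ["crack", "spalling", "crack"],
    "Quantity_m2": [1.8, 0.6, 12.0],
})

# Join defects to members and compute a damage ratio per member
merged = defects.merge(components, left_on="Component", right_on="Name")
merged["ratio"] = merged["Quantity_m2"] / merged["Area_m2"]
damage_ratio = merged.groupby("Component")["ratio"].sum()

# Toy condition grading by damage ratio (thresholds are illustrative)
grade = damage_ratio.map(lambda r: "a" if r < 0.02 else "b" if r < 0.10 else "c")
print(pd.concat([damage_ratio, grade], axis=1, keys=["ratio", "grade"]))
```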

Passage Planning in Coastal Waters for Maritime Autonomous Surface Ships using the D* Algorithm

  • Hyeong-Tak Lee;Hey-Min Choi
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.3
    • /
    • pp.281-287
    • /
    • 2023
  • Establishing a passage plan is an essential step before a ship sets sail, and research on the automatic generation of ship passage plans is attracting attention with the development of maritime autonomous surface ships. In coastal navigation, land, islands, and navigation rules all need to be considered. From the path-planning perspective, a ship's passage planning is a global path-planning problem. Because conventional global path-planning methods such as Dijkstra and A* are time-consuming owing to steps such as environmental modeling, it is difficult to modify a ship's passage plan during a voyage. Therefore, the D* algorithm was used to address these problems. The starting point was near Busan New Port and the destination was Ulsan Port. The navigable area was designated by combining ship trajectory data with a grid over the target area. The initial path generated by the D* algorithm contained 33 waypoints over a total distance of 113.946 km. The final path was then simplified using the Douglas-Peucker algorithm to 10 waypoints over a total distance of 110.156 km, approximately 3.05% shorter than the initial passage plan. This study demonstrates the feasibility of automatically generating coastal passage plans for maritime autonomous surface ships using the D* algorithm. With a shortest-distance-based path-planning algorithm, the ship's fuel consumption and sailing time can be minimized.
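
For reference, the Douglas-Peucker step that reduced the route from 33 to 10 waypoints works as sketched below: interior waypoints closer than a tolerance to the chord between the segment endpoints are dropped recursively. The coordinates and tolerance are toy values, not the Busan-Ulsan track.

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, eps):
    """Keep endpoints; recurse on the farthest point if it exceeds eps."""
    if len(points) < 3:
        return points
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > eps:
        left = douglas_peucker(points[:i + 1], eps)
        return left[:-1] + douglas_peucker(points[i:], eps)
    return [points[0], points[-1]]

# Toy planar track; eps would be tuned to the chart scale in practice
track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
print(douglas_peucker(track, eps=0.5))  # -> [(0, 0), (2, -0.1), (3, 5), (5, 7)]
```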

Automatic Estimation of Tillers and Leaf Numbers in Rice Using Deep Learning for Object Detection

  • Hyeokjin Bak;Ho-young Ban;Sungryul Chang;Dongwon Kwon;Jae-Kyeong Baek;Jung-Il Cho;Wan-Gyu Sang
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2022.10a
    • /
    • pp.81-81
    • /
    • 2022
  • Recently, many studies on big-data-based smart farming have been conducted, including research to quantify morphological characteristics of various crops from image data. Rice is one of the most important food crops in the world, and much research has been done to predict and model rice yield. The number of productive tillers per plant is one of the important agronomic traits associated with the grain yield of rice. However, modeling the basic growth characteristics of rice requires accurate data measurements, and the existing method of measurement by humans is not only labor-intensive but also prone to human error. Therefore, conversion to digital data is necessary to obtain accurate phenotyping data quickly. In this study, we present an image-based method to predict the leaf number and estimate the tiller number of individual rice plants using the YOLOv5 deep-learning object-detection network. We trained several YOLOv5 network variants and compared them to determine which gives higher prediction accuracy. We also performed data augmentation, a method used to complement small datasets. Based on the numbers of leaves and tillers actually measured on rice plants, the leaf number predicted by the model from the image data and an existing regression equation were used to estimate the tiller number from the image data.
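
As a sketch of the detection step, a YOLOv5 model can be loaded through torch.hub and its per-class detections counted as below. The weight file and class names are hypothetical stand-ins for a custom-trained model, not the authors' released artifacts.

```python
import torch

# 'rice_leaf_tiller.pt' is a hypothetical custom-trained weight file
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="rice_leaf_tiller.pt")

results = model("rice_plant.jpg")       # single-image inference
det = results.pandas().xyxy[0]          # detections as a DataFrame

# Count detections per class, e.g. how many leaves were found in the image
counts = det["name"].value_counts()
print(counts.get("leaf", 0), "leaves detected")
```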

Dynamic characteristics analysis of CBGSCC bridge with large parameter samples

  • Zhongying He;Yifan Song;Genhui Wang;Penghui Sun
    • Steel and Composite Structures
    • /
    • v.52 no.2
    • /
    • pp.237-248
    • /
    • 2024
  • In order to make the dynamic analysis and design of improved composite beam with corrugated steel web (CBGSCC) bridges more efficient and economical, a parametric self-cyclic analysis model (SCAM) was written in Python on the Anaconda platform. The SCAM calls the ABAQUS finite element software to perform automatic modeling and dynamic analysis. Parameters were set according to the general value ranges of CBGSCC bridge parameters in actual engineering, the SCAM was used to calculate the large set of sample models generated by parameter coupling, the optimal value range of each parameter was determined, and the sensitivity of the parameters was analyzed. The number of diaphragms has only a weak effect on the dynamic characteristics. The deck thickness has the greatest influence on frequency, which decreases as the deck thickness increases; the deck thickness should be 20-25 cm. The vibration frequency increases with the bottom plate thickness, the web thickness, and the web height; the bottom plate thickness should be 17-23 mm, the web thickness 13-17 mm, and the web height 1.65-1.75 m. The web inclination and skew angle should not exceed 30°, and the number of diaphragms should be 3-5. This method can serve as a new approach for structural dynamic analysis, and the importance ranking and optimal value range of each CBGSCC bridge parameter can be used as a reference in the design process.
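
A self-cyclic driver of this kind can be pictured as a loop over coupled parameter combinations that hands each case to ABAQUS in batch mode. The sketch below uses the documented `abaqus cae noGUI=` invocation, but the script name, parameter file, and grids are illustrative assumptions, not the paper's SCAM.

```python
import itertools
import json
import subprocess

# Illustrative parameter grids (values loosely echo the ranges in the abstract)
grids = {
    "deck_thickness_cm": [20.0, 22.5, 25.0],
    "web_thickness_mm": [13.0, 15.0, 17.0],
    "web_height_m": [1.65, 1.70, 1.75],
}

for combo in itertools.product(*grids.values()):
    params = dict(zip(grids.keys(), combo))
    with open("params.json", "w") as f:
        json.dump(params, f)  # handed to the modeling script for this case
    # 'build_and_solve.py' is a hypothetical ABAQUS/Python script that reads
    # params.json, builds the CBGSCC model, and runs a frequency extraction.
    subprocess.run(["abaqus", "cae", "noGUI=build_and_solve.py"], check=True)
```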

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels such as social media and SNS generate enormous amounts of data, and the portion represented as unstructured text has grown geometrically. Because it is impractical to read all of this text, it is important to access it rapidly and grasp its key points, and many text summarization studies have been proposed for handling such large volumes of text. In particular, many recent methods use machine learning and artificial intelligence algorithms to generate summaries objectively and effectively, so-called "automatic summarization". However, almost all text summarization methods proposed to date build the summary around the most frequent content in the original documents. Such summaries tend to omit low-weight subjects that are mentioned less often, introducing bias and losing information, so it becomes difficult to ascertain every subject a document covers. This bias can be avoided by summarizing with balance among the topics in a document, but an unbalanced distribution among subjects still remains. To retain subject balance in the summary, it is necessary to consider the proportion of each subject in the original documents and to allocate space among subjects equally, so that even sentences of minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that preserves balance among all subjects and minimizes the omission of low-frequency subjects. The method is guided by two summary-evaluation criteria: "completeness", meaning that the summary should fully cover the contents of the original documents, and "succinctness", meaning that the summary should contain minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights indicating how strongly each term relates to each topic; from these weights, the terms most related to each topic can be identified, and the subjects of the documents emerge from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; we call these "seed terms". Because the seed terms alone are too few to describe each subject adequately, they are expanded with similar terms using Word2Vec: after Word2Vec modeling, word vectors are obtained and the cosine similarity between any two terms can be derived, with higher similarity indicating a stronger relationship. Terms with high similarity to each subject's seed terms are selected, and after filtering these expanded terms, the subject dictionary is constructed. The second phase allocates a subject to every sentence in the original documents. To grasp the content of each sentence, a frequency analysis is conducted over the terms in the subject dictionaries, and the TF-IDF weight of each subject is calculated, indicating how much each sentence discusses each subject. Because TF-IDF weights can grow without bound, the weights for each sentence are normalized to values between 0 and 1. Each sentence is then assigned to the subject with the maximum TF-IDF weight, producing a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to measure the similarity between subject sentences, forming a similarity matrix, and by repeatedly selecting sentences it is possible to generate a summary that fully covers the contents of the original documents while minimizing internal duplication. For evaluation, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between the proposed method's summaries and frequency-based summaries verified that the proposed method better retains the balance of all subjects originally present in the documents.
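
The seed-term expansion step can be sketched with gensim's Word2Vec: train on tokenized sentences, then pull the nearest neighbors of each seed term by cosine similarity. The toy corpus, seed term, and similarity threshold below are illustrative only.

```python
from gensim.models import Word2Vec

# Toy tokenized corpus; the real input would be review sentences
sentences = [
    ["room", "bed", "clean", "quiet"],
    ["breakfast", "buffet", "coffee", "fresh"],
    ["staff", "friendly", "helpful", "checkin"],
] * 200  # repeated so the toy model has co-occurrences to learn from

model = Word2Vec(sentences, vector_size=50, window=3,
                 min_count=1, epochs=20, seed=1)

# Expand a seed term into a subject dictionary entry by cosine similarity
seed = "breakfast"
expanded = [w for w, sim in model.wv.most_similar(seed, topn=5) if sim > 0.3]
print(seed, "->", expanded)
```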

The Evaluation of the Usability of Varian Standard Couch Modeling Using a Treatment Planning System (치료계획 시스템을 이용한 Varian Standard Couch 모델링의 유용성 평가)

  • Yang, Yong Mo;Song, Yong Min;Kim, Jin Man;Choi, Ji Min;Choi, Byeung Gi
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.28 no.1
    • /
    • pp.77-86
    • /
    • 2016
  • Purpose: In radiation treatment, the beam is attenuated by the carbon-fiber couch. In this study, we evaluated the usability of the Varian Standard Couch (VSC) by modeling it in a treatment planning system (TPS). Materials and Methods: The VSC was scanned by the CBCT (Cone Beam Computed Tomography) of the linac (Clinac iX, VARIAN, USA) in three configurations: Side Rail Out, Grid (SROG); Side Rail In, Grid (SRIG); and Side Rail In/Out, Spine Down Bar (SRIOS). After scanning, the data were transferred to the TPS, and the Side Rail, Side Bar Upper, Side Bar Lower, and Spine Down Bar were modeled by automatic contouring. We scanned a Cheese Phantom (Middelton, USA) with computed tomography (Light Speed RT 16, GE, USA), transferred the data to the TPS, and applied the previously modeled VSC to it. Dose was measured at the isocenter with an ion chamber (A1SL, Standard Imaging, USA) in the Cheese Phantom using 4 and 10 MV beams at every 5° of gantry angle for two field sizes (3×3 cm², 10×10 cm²) with fixed MU (=100), and the calculated and measured doses were compared. We also included the dose at 127° in SRIG to assess the attenuation by the Side Bar Upper. Results: The density of the VSC determined from CBCT in the TPS was 0.9 g/cm³, and that of the Spine Down Bar was 0.7 g/cm³. The radiation was attenuated by 17.49%, 16.49%, 8.54%, and 7.59% at the Side Rail, Side Bar Upper, Side Bar Lower, and Spine Down Bar, respectively. To check the accuracy of the modeling, calculated and measured doses were compared: the average error was 1.13%, and the maximum error was 1.98% at the 170° beam crossing the Spine Down Bar. Conclusion: In evaluating the usability of the VSC modeled in the TPS, the maximum error between calculated and measured dose was 1.98%. VSC modeling helps predict the dose, so we expect it to be helpful for more accurate treatment.
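
The two quantities the study compares reduce to simple ratios, sketched below with made-up dose values (the paper's own results are the percentages quoted above).

```python
def attenuation_pct(dose_open, dose_through_couch):
    """Percent dose reduction caused by couch structures in the beam path."""
    return 100.0 * (dose_open - dose_through_couch) / dose_open

def error_pct(calculated, measured):
    """Relative difference between TPS-calculated and chamber-measured dose."""
    return 100.0 * abs(calculated - measured) / measured

# Illustrative numbers only, not the paper's measurements
print(f"{attenuation_pct(100.0, 82.5):.2f}% attenuation")  # 17.50%
print(f"{error_pct(98.2, 99.1):.2f}% error")               # 0.91%
```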

Topic Modeling Insomnia Social Media Corpus using BERTopic and Building Automatic Deep Learning Classification Model (BERTopic을 활용한 불면증 소셜 데이터 토픽 모델링 및 불면증 경향 문헌 딥러닝 자동분류 모델 구축)

  • Ko, Young Soo;Lee, Soobin;Cha, Minjung;Kim, Seongdeok;Lee, Juhee;Han, Ji Yeong;Song, Min
    • Journal of the Korean Society for Information Management
    • /
    • v.39 no.2
    • /
    • pp.111-129
    • /
    • 2022
  • Insomnia is a chronic disease in modern society, with the number of new patients increasing by more than 20% in the last 5 years. It is a serious condition that requires diagnosis and treatment, because the individual and social problems that arise from lack of sleep are serious and the triggers of insomnia are complex. This study collected 5,699 posts from 'insomnia', a community on the social media platform Reddit where users freely express their opinions. Based on the International Classification of Sleep Disorders (ICSD-3) standard and guidelines developed with the help of experts, an insomnia corpus was constructed by tagging the posts as insomnia-tendency or non-insomnia-tendency documents. Five deep-learning language models (BERT, RoBERTa, ALBERT, ELECTRA, XLNet) were trained on the constructed corpus. In the performance evaluation, RoBERTa showed the highest performance, with an accuracy of 81.33%. For an in-depth analysis of the insomnia social data, topic modeling was performed using the recently introduced BERTopic method, which supplements the weaknesses of the widely used LDA. The analysis identified 8 subject groups ('Negative emotions', 'Advice, help, and gratitude', 'Insomnia-related diseases', 'Sleeping pills', 'Exercise and eating habits', 'Physical characteristics', 'Activity characteristics', 'Environmental characteristics'). Users expressed negative emotions and sought help and advice from the Reddit insomnia community. They also mentioned diseases related to insomnia, shared discourse on the use of sleeping pills, and expressed interest in exercise and eating habits. Among insomnia-related characteristics, we found physical characteristics such as breathing, pregnancy, and the heart; activity characteristics such as zombies, hypnic jerk, and grogginess; and environmental characteristics such as sunlight, blankets, temperature, and naps.
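
For orientation, the BERTopic workflow boils down to a few library calls. The sketch below follows BERTopic's public API on a toy stand-in corpus; the settings are illustrative, not the study's configuration.

```python
from bertopic import BERTopic

# Toy stand-in for the Reddit posts; the real corpus had 5,699 documents
docs = [
    "cannot sleep again, anxiety is through the roof",
    "melatonin stopped working for me after two weeks",
    "blackout curtains and a cooler room helped a lot",
] * 50

topic_model = BERTopic(min_topic_size=10)    # settings are illustrative
topics, probs = topic_model.fit_transform(docs)

# Inspect the discovered topic groups and their top terms
print(topic_model.get_topic_info().head())
```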

RPC Correction of KOMPSAT-3A Satellite Image through Automatic Matching Point Extraction Using Unmanned Aerial Vehicle Imagery (무인항공기 영상 활용 자동 정합점 추출을 통한 KOMPSAT-3A 위성영상의 RPC 보정)

  • Park, Jueon;Kim, Taeheon;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1135-1147
    • /
    • 2021
  • In order to geometrically correct high-resolution satellite imagery, a sensor modeling process that restores the geometric relationship between the satellite sensor and the ground surface at the image acquisition time is required. High-resolution satellites generally provide RPC (Rational Polynomial Coefficient) information, but the vendor-provided RPC includes geometric distortion caused by the position and orientation of the satellite sensor. GCPs (Ground Control Points) are generally used to correct RPC errors, and the representative way of acquiring GCPs is a field survey to obtain accurate ground coordinates. However, it is often difficult to locate GCPs in the satellite image because of image quality, land-cover change, relief displacement, etc. By using image maps acquired from various sensors as reference data, GCP collection can be automated through image-matching algorithms. In this study, the RPC of a KOMPSAT-3A satellite image was corrected using matching points extracted from UAV (Unmanned Aerial Vehicle) imagery. We propose a pre-processing method for extracting matching points between the UAV imagery and the KOMPSAT-3A satellite image. To this end, we compared the characteristics of matching points extracted by independently applying SURF (Speeded-Up Robust Features) and phase correlation, which are representative feature-based and area-based matching methods, respectively. The RPC adjustment parameters were calculated using the matching points extracted by each algorithm. To verify the performance and usability of the proposed method, it was compared with GCP-based RPC correction. The GCP-based method improved the correction accuracy by 2.14 pixels in the sample direction and 5.43 pixels in the line direction compared to the vendor-provided RPC. The proposed method using SURF and phase correlation improved the sample accuracy by 0.83 and 1.49 pixels, and the line accuracy by 4.81 and 5.19 pixels, respectively, compared to the vendor-provided RPC. These results show that the proposed method using UAV imagery is a possible alternative to the GCP-based method for RPC correction.
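
Of the two matching approaches compared, phase correlation is the simpler to sketch: OpenCV's `cv2.phaseCorrelate` estimates the sub-pixel translation between two equal-size, single-channel float patches (SURF, by contrast, requires the opencv-contrib build). The file names below are placeholders for co-registered UAV and KOMPSAT-3A image chips.

```python
import cv2
import numpy as np

# Placeholder file names for co-registered UAV and satellite image chips
ref = cv2.imread("uav_chip.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
tgt = cv2.imread("k3a_chip.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# A Hanning window suppresses edge effects before the FFT-based correlation
win = cv2.createHanningWindow(ref.shape[::-1], cv2.CV_32F)
(shift_x, shift_y), response = cv2.phaseCorrelate(ref, tgt, win)

print(f"estimated shift: ({shift_x:.2f}, {shift_y:.2f}) px, "
      f"peak response {response:.3f}")
```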