• Title/Summary/Keyword: Generative Models (생성형 모델)


Developing a Module to Store 3DF-GML Instance Documents in a Database (3DF-GML 인스턴스 문서의 데이터베이스 저장을 위한 모듈 개발)

  • Lee, Kang-Jae; Jang, Gun-Up; Lee, Ji-Yeong
    • Spatial Information Research / v.19 no.6 / pp.87-100 / 2011
  • Recently, a variety of GML application schemas have been designed in many fields. GML application schemas are specific to the application domain of interest and specify object types using the primitive object types defined in the GML standard. GML instance documents are created based on such application schemas, and they generally grow very large because they must represent huge numbers of geographic objects. It is therefore essential to store GML instance documents in a relational database for efficient management and use. A relational database is relatively convenient to use, is widely applied in various fields, and is fundamentally more efficient than a file structure for handling large datasets. Many studies on storing GML documents have been carried out so far, but few have addressed the storage of GML instance documents. Therefore, in this study, we developed a storage module that stores GML instance documents in a relational database.
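As a rough illustration of the shredding step such a module performs, the sketch below parses a tiny, hypothetical GML-like fragment with Python's standard library and stores one row per geographic object in a relational table. The schema, element names, and fragment are invented for illustration and are not taken from the paper:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A tiny, hypothetical GML instance fragment (namespaces abbreviated for brevity).
GML = """<FeatureCollection xmlns:gml="http://www.opengis.net/gml">
  <Building gml:id="B1"><gml:pos>127.0 37.5 12.0</gml:pos></Building>
  <Building gml:id="B2"><gml:pos>127.1 37.6 9.0</gml:pos></Building>
</FeatureCollection>"""

NS = {"gml": "http://www.opengis.net/gml"}

def store_instances(xml_text, conn):
    """Shred GML features into a relational table (one row per object)."""
    conn.execute("CREATE TABLE IF NOT EXISTS feature "
                 "(gml_id TEXT PRIMARY KEY, type TEXT, pos TEXT)")
    root = ET.fromstring(xml_text)
    for elem in root:
        gml_id = elem.get("{http://www.opengis.net/gml}id")
        pos = elem.find("gml:pos", NS)
        conn.execute("INSERT INTO feature VALUES (?, ?, ?)",
                     (gml_id, elem.tag, pos.text if pos is not None else None))
    conn.commit()

conn = sqlite3.connect(":memory:")
store_instances(GML, conn)
print(conn.execute("SELECT gml_id, pos FROM feature").fetchall())
```

A production module would of course map the full application schema (geometry types, attributes, relations) rather than a single flat table.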

Isobaric Vapor-Liquid Equilibrium of 1-propanol and Benzene System at Subatmospheric Pressures (일정압력하에서 1-propanol/benzene 계의 기-액 상평형)

  • Rho, Seon-Gyun; Kang, Choon-Hyoung
    • Korean Chemical Engineering Research / v.56 no.2 / pp.222-228 / 2018
  • Benzene is one of the most widely used basic materials in the petrochemical industry. Generally, benzene exists as a mixture with alcohols rather than as a pure substance, and such alcohol-containing mixtures usually exhibit an azeotropic composition. In this context, knowledge of the phase equilibrium behavior of the mixture is essential for its separation and purification. In this study, vapor-liquid equilibrium data were measured for the 1-propanol/benzene system using a recirculating VLE apparatus under constant pressure. The measured vapor-liquid equilibrium data were correlated using the UNIQUAC and WILSON models, and a thermodynamic consistency test based on the Gibbs-Duhem equation followed. The phase equilibrium results showed RMSEs (root mean square errors) and AADs (average absolute deviations) of less than 0.05 for both models, indicating good agreement between the experimental and calculated values. The thermodynamic consistency test was also confirmed through the residual term lying within ±0.2.
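The two error measures quoted here are simple to state. A minimal worked example (the mole-fraction values below are invented, not the paper's data):

```python
import math

def rmse(exp, calc):
    """Root mean square error between experimental and correlated values."""
    return math.sqrt(sum((e - c) ** 2 for e, c in zip(exp, calc)) / len(exp))

def aad(exp, calc):
    """Average absolute deviation."""
    return sum(abs(e - c) for e, c in zip(exp, calc)) / len(exp)

# Hypothetical vapor-phase mole fractions: experimental vs. model-correlated.
y_exp  = [0.12, 0.35, 0.58, 0.81]
y_calc = [0.13, 0.33, 0.60, 0.80]
print(rmse(y_exp, y_calc), aad(y_exp, y_calc))
```

Both values falling below 0.05, as in the paper, would indicate that the activity-coefficient model reproduces the measured equilibrium closely.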

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee; Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers use voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first is misrecognition, where the agent fails to recognize the user's speech entirely. The second is misinterpretation, where the user's speech is recognized and a service is provided, but the interpretation differs from the user's intention. Misinterpretation errors require separate error detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each text separation method, the similarity of consecutive speech pairs was calculated using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new method for detecting misinterpretation errors. Real user utterance records were used to train and develop a detection model based on patterns of misinterpretation error causes. The results revealed that initial consonant extraction performed best for detecting misinterpretation errors caused by the use of unregistered neologisms, and comparison with the other separation methods exposed different error types. This study has two main implications. First, for misinterpretation errors that are difficult to detect because they are not recognized as errors, the study proposed diverse text separation methods and found a novel one that improved performance remarkably. Second, when applied to conversational agents or voice recognition services requiring neologism detection, patterns of errors arising from the voice recognition stage can be specified. The study also proposed and verified that services can be provided according to user-desired results even for interactions not categorized as errors.
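The initial-consonant (choseong) extraction highlighted here exploits the fact that precomposed Hangul syllables are arranged algorithmically in Unicode, so decomposition needs no dictionary. A minimal sketch; the Jaccard measure at the end is one simple stand-in for the embedding-based similarity the paper actually uses:

```python
# Unicode precomposed Hangul syllables (U+AC00..U+D7A3) encode their
# (initial, medial, final) jamo algorithmically, so extracting the initial
# consonant needs only the fixed table of 19 choseong.
CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")

def initial_consonants(word):
    """Replace each Hangul syllable with its initial consonant (choseong)."""
    out = []
    for ch in word:
        idx = ord(ch) - 0xAC00
        out.append(CHOSEONG[idx // 588] if 0 <= idx <= 11171 else ch)
    return "".join(out)

def jaccard(a, b):
    """Character-set similarity between two consonant strings."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

print(initial_consonants("신조어"))  # → ㅅㅈㅇ
```

Each block of 588 code points (21 medials × 28 finals) shares one initial consonant, which is why integer division by 588 recovers it.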

A Topographical Classifier Development Support System Cooperating with Data Mining Tool WEKA from Airborne LiDAR Data (항공 라이다 데이터로부터 데이터마이닝 도구 WEKA를 이용한 지형 분류기 제작 지원 시스템)

  • Lee, Sung-Gyu; Lee, Ho-Jun; Sung, Chul-Woong; Park, Chang-Hoo; Cho, Woo-Sug; Kim, Yoo-Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.1 / pp.133-142 / 2010
  • To monitor the composition and change of the national land, an intelligent topographical classifier that enables accurate classification of land-cover types from airborne LiDAR data is highly required. We developed a topographical classifier development support system cooperating with the data mining tool WEKA to help users construct accurate topographical classification systems. The support system provides the following functions: superposing LiDAR data on corresponding aerial images, dividing LiDAR data into tiles for efficient processing, 3D visualization of partial LiDAR data, feature extraction from tiles, automatic generation of WEKA input, and automatic generation of a C++ program from the classification rule set. In addition, with WEKA, users can choose highly distinguishable features through its attribute selection function and choose the best classification model as the resulting topographical classifier. Therefore, users can easily develop an intelligent topographical classifier well fitted to their development objectives by using the proposed support system.
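WEKA consumes training data in its ARFF text format, so the "automatic WEKA input generation" step amounts to serializing per-tile feature vectors. A minimal sketch, with hypothetical feature names and class labels (the paper's actual feature set is not specified here):

```python
def to_arff(relation, attributes, rows):
    """Serialize feature vectors (e.g. from LiDAR tiles) into WEKA's ARFF format."""
    lines = [f"@RELATION {relation}", ""]
    for name, typ in attributes:
        lines.append(f"@ATTRIBUTE {name} {typ}")
    lines += ["", "@DATA"]
    lines += [",".join(str(v) for v in row) for row in rows]
    return "\n".join(lines)

# Hypothetical per-tile features: mean height, height variance, intensity, class.
attrs = [("mean_h", "NUMERIC"), ("var_h", "NUMERIC"),
         ("intensity", "NUMERIC"), ("class", "{building,ground,vegetation}")]
rows = [(12.4, 3.1, 140, "building"), (0.2, 0.01, 90, "ground")]
print(to_arff("lidar_tiles", attrs, rows))
```

WEKA's Explorer or command-line learners can then run attribute selection and model comparison directly on the generated file.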

A Robust Object Detection and Tracking Method using RGB-D Model (RGB-D 모델을 이용한 강건한 객체 탐지 및 추적 방법)

  • Park, Seohee; Chun, Junchul
    • Journal of Internet Computing and Services / v.18 no.4 / pp.61-67 / 2017
  • Recently, CCTV has been combined with areas such as big data, artificial intelligence, and image analysis to detect various abnormal behaviors and to detect and analyze the overall situation of objects such as people, and image analysis research for this intelligent video surveillance function is progressing actively. However, CCTV images based on 2D information generally suffer from limitations such as object misrecognition due to the lack of depth information. This problem can be solved by adding to the image the depth information of the object, obtained using two cameras. In this paper, we perform background modeling using the Mixture of Gaussians technique and detect moving objects by segmenting the foreground from the modeled background. To perform depth-based segmentation on top of the RGB-based segmentation results, stereo depth maps are generated using the two cameras. The RGB-segmented region is then set as the domain for extracting depth information, and depth-based segmentation is performed within that domain. To detect the center point of a robustly segmented object and track its direction, the object's movement is tracked by applying the CAMShift technique, one of the most basic object tracking methods. Experiments demonstrate the efficiency of the proposed object detection and tracking method using the RGB-D model.
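The background-modeling idea can be sketched in miniature. The toy below keeps one Gaussian per pixel (a simplification of the full Mixture-of-Gaussians model the paper uses) on a 1-D "frame"; the learning rate and threshold are illustrative choices:

```python
# Per-pixel running Gaussian background model: a single-mode simplification of
# the Mixture-of-Gaussians approach, shown on a toy 1-D grayscale "frame".
ALPHA, K = 0.05, 2.5   # learning rate, foreground threshold (in std. devs)

def update(model, frame):
    """Update per-pixel (mean, variance); return the foreground mask."""
    mask = []
    for i, x in enumerate(frame):
        mu, var = model[i]
        fg = (x - mu) ** 2 > (K ** 2) * var   # far from background → foreground
        mask.append(fg)
        if not fg:                             # adapt background pixels only
            mu = (1 - ALPHA) * mu + ALPHA * x
            var = (1 - ALPHA) * var + ALPHA * (x - mu) ** 2
            model[i] = (mu, var)
    return mask

model = [(100.0, 20.0)] * 4                    # assumed initial background stats
print(update(model, [101, 99, 180, 100]))      # → [False, False, True, False]
```

A full MoG tracker would keep several weighted Gaussians per pixel and re-rank them each frame; the foreground mask would then feed the depth-based segmentation and CAMShift stages described in the abstract.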

Development of a Measurement Data Algorithm of Deep Space Network for Korea Pathfinder Lunar Orbiter mission (달 탐사 시험용 궤도선을 위한 심우주 추적망의 관측값 구현 알고리즘 개발)

  • Kim, Hyun-Jeong; Park, Sang-Young; Kim, Min-Sik; Kim, Youngkwang; Lee, Eunji
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.45 no.9 / pp.746-756 / 2017
  • An algorithm is developed to generate deep space network measurement data for the Korea Pathfinder Lunar Orbiter (KPLO) mission. The algorithm provides corrected measurement data for the Orbit Determination (OD) module in deep space. This study describes how to generate the computed data such as range, Doppler, azimuth angle, and elevation angle. The geometric data were obtained by simulation with the General Mission Analysis Tool (GMAT), and the corrected data were calculated with measurement models. The resulting total delay includes the effects of tropospheric delay, ionospheric delay, charged particle delay, antenna offset delay, and tropospheric refraction delay. The computed measurement data were validated by comparison with results from the Orbit Determination ToolBoX (ODTBX).
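Before any media corrections, the geometric core of such computed observables is the station-to-spacecraft range and its rate along the line of sight. A minimal sketch with invented state vectors; real processing would add the light-time and delay corrections listed above:

```python
import math

def range_and_range_rate(r_sc, v_sc, r_stn, v_stn):
    """Geometric range and range-rate (the Doppler observable's basis) between
    a ground station and a spacecraft, given state vectors in a common frame."""
    dr = [a - b for a, b in zip(r_sc, r_stn)]
    dv = [a - b for a, b in zip(v_sc, v_stn)]
    rho = math.sqrt(sum(c * c for c in dr))
    rho_dot = sum(a * b for a, b in zip(dr, dv)) / rho  # dv projected on the LOS
    return rho, rho_dot

# Hypothetical states (km, km/s), roughly lunar-distance geometry.
rho, rho_dot = range_and_range_rate(
    [384400.0, 0.0, 0.0], [0.0, 1.0, 0.0],
    [6378.0, 0.0, 0.0], [0.0, 0.465, 0.0])
print(rho, rho_dot)
```

Range-rate is the relative velocity projected onto the line of sight, which is why a purely transverse relative motion (as in this contrived geometry) yields zero Doppler.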

A Study on the Intelligent Document Processing Platform for Document Data Informatization (문서 데이터 정보화를 위한 지능형 문서처리 플랫폼에 관한 연구)

  • Hee-Do Heo; Dong-Koo Kang; Young-Soo Kim; Sam-Hyun Chun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.89-95 / 2024
  • Nowadays, the competitiveness of a company depends on the ability of all organizational members to share and utilize the knowledge the organization has accumulated. As if to prove this, the world is now focusing on the ChatGPT service, which uses generative AI technology based on LLMs (Large Language Models). However, it is still difficult to apply the ChatGPT service to work because of its many hallucination problems. To solve this problem, sLLM (lightweight large language model) technology is being proposed as an alternative. Corporate data is essential for constructing an sLLM; it consists of the organization's ERP data and the office document knowledge the organization has preserved. ERP data can be used by connecting it directly to an sLLM, but office documents are stored in file format and must be converted to data format before they can be connected. Moreover, office documents stored as files face too many technical limitations to be utilized as organizational knowledge information. This study proposes storing office documents in database form rather than file form, allowing companies to utilize their already accumulated office documents as an organizational knowledge system and to provide them in data form to the company's sLLM. In this way, we aim to contribute to improving corporate competitiveness by combining document knowledge with AI technology.
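The document-to-database idea can be sketched very simply: extract the text, split it into retrievable rows, and store those rows instead of an opaque file. The table layout and fixed-size chunking below are illustrative assumptions, not the platform's actual design:

```python
import sqlite3

def chunk(text, size=200):
    """Split document text into fixed-size chunks for later retrieval."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def ingest(conn, doc_id, text):
    """Store an office document's text as queryable rows instead of an opaque file."""
    conn.execute("CREATE TABLE IF NOT EXISTS doc_chunk "
                 "(doc_id TEXT, seq INTEGER, body TEXT)")
    for i, c in enumerate(chunk(text)):
        conn.execute("INSERT INTO doc_chunk VALUES (?, ?, ?)", (doc_id, i, c))
    conn.commit()

conn = sqlite3.connect(":memory:")
ingest(conn, "report-001", "Quarterly sales summary... " * 20)
print(conn.execute("SELECT COUNT(*) FROM doc_chunk").fetchone()[0])
```

Once in row form, the chunks can be searched, filtered, or embedded and fed to an sLLM as retrieval context, which is the connection the abstract describes.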

Fatigue Analysis based on Kriging for Flaperon Joint of Tilt Rotor Type Aircraft (틸트 로터형 항공기의 플랩퍼론 연결부에 대한 크리깅 기반 피로해석)

  • Park, Young-Chul; Jang, Byoung-Uk; Im, Jong-Bin; Lee, Jung-Jin; Lee, Soo-Yong; Park, Jung-Sun
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.36 no.6 / pp.541-549 / 2008
  • Fatigue analysis is performed to avoid structural failure in aerospace structures under repeated loads. In this paper, the fatigue life is estimated for the design of a tilt rotor UAV. First of all, the fatigue load spectrum for the tilt rotor UAV is generated. Fatigue analysis is done for the flaperon joint, which may contain an FCL (fracture critical location). The tilt rotor UAV operates in two modes: helicopter mode, for taking off and landing, and fixed wing mode, for cruising. To build the overall fatigue load spectrum, FELIX is used for helicopter mode and TWIST for fixed wing mode. On the other hand, a Kriging meta model is used to obtain an S-N regression curve over the whole range of material life when the S-N test data are analyzed. A second-order S-N curve is then fitted by the least-squares method, and the coefficient of determination is used to assess its accuracy. Finally, the fatigue life of the flaperon joint is compared with that obtained by MSC Fatigue.
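The least-squares step described here, fitting a second-order curve to S-N data and checking it with the coefficient of determination, can be sketched as follows (the stress/life values are invented for illustration; the paper's Kriging meta model is a separate, more flexible interpolator):

```python
def polyfit2(x, y):
    """Second-order least-squares fit y ≈ c0 + c1*x + c2*x^2 via normal equations."""
    s = [sum(xi ** k for xi in x) for k in range(5)]        # Σx^0 .. Σx^4
    A = [[s[j + k] for k in range(3)] for j in range(3)]
    b = [sum(yi * xi ** j for xi, yi in zip(x, y)) for j in range(3)]
    for i in range(3):                                      # Gaussian elimination
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        b[i] /= p
        for r in range(3):
            if r != i:
                f = A[r][i]
                A[r] = [v - f * w for v, w in zip(A[r], A[i])]
                b[r] -= f * b[i]
    return b                                                # [c0, c1, c2]

def r_squared(x, y, c):
    """Coefficient of determination, used to check how well the curve fits."""
    pred = [c[0] + c[1] * xi + c[2] * xi * xi for xi in x]
    y_bar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical S-N data: stress amplitude (MPa) vs. log10(cycles to failure)
S = [450, 400, 350, 300, 250]
logN = [4.1, 4.6, 5.3, 6.1, 7.0]
c = polyfit2(S, logN)
print(c, r_squared(S, logN, c))
```

An R² close to 1 indicates the quadratic captures the trend of the test data; the fitted curve can then supply life estimates at stress levels between the test points.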

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun; You, Eun-Soon
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.119-142 / 2015
  • With the explosive growth in the volume of information, Internet users are experiencing considerable difficulties in obtaining necessary information online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from the domain constraint. The CF technique is broadly classified into memory-based CF, model-based CF and hybrid CF. Model-based CF addresses the drawbacks of CF by considering a Bayesian model, clustering model or dependency network model. This filtering technique not only mitigates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. Such tradeoff is attributed to reduced coverage, which is a type of sparsity issue. In addition, expensive model-building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the high costs involved; cumulative changes in the domain environment that fail to be reflected eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering with clustering-based CF (CBCF) to propose predictive clustering-based CF (PCCF), which solves the issues of reduced coverage and unstable performance. The method improves performance instability by tracking changes in user preferences and bridging the gap between the static model and dynamic users. Furthermore, the issue of reduced coverage is improved by expanding the coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of changes (propensities) in user preferences in propensity clustering. Lastly, the preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method has been validated by testing its robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems enabled by IBCF, CBCF, ICFEC and PCCF under an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC and PCCF for a comparative analysis of subsequent changes in system performance. The test results revealed that the suggested method produced only insignificant improvement in prediction performance over the existing techniques, and it failed to achieve significant improvement in the standard deviation that indicates the degree of data fluctuation. Notwithstanding, it resulted in marked improvement over the existing techniques in terms of range, which indicates the level of performance fluctuation: the level of performance fluctuation before and after model generation improved by 51.31% in the initial test, and in the following test there was a 36.05% improvement in the level of performance fluctuation driven by changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability than the existing techniques. Further research will be directed toward enhancing the recommendation performance, which failed to demonstrate significant improvement over the existing techniques, and will consider the introduction of a high-dimensional parameter-free clustering algorithm or a deep learning-based model.
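The Markov ingredient of the method can be sketched in isolation: estimate a transition-probability matrix over preference clusters from users' cluster-membership histories. The sequences below are invented, and the paper's fuzzy-clustering and prediction steps are omitted:

```python
from collections import defaultdict

def transition_matrix(sequences):
    """Estimate Markov transition probabilities between preference clusters
    from users' cluster-membership sequences over time."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):     # consecutive cluster pairs
            counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

# Hypothetical cluster sequences for three users (clusters labeled 0-2).
seqs = [[0, 0, 1], [0, 1, 1], [2, 0, 1]]
P = transition_matrix(seqs)
print(P[0])   # → {0: 0.25, 1: 0.75}
```

Such probabilities let a static clustering model anticipate where a user's preferences are drifting, which is how the abstract's "bridging the gap between the static model and dynamic users" can be read.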

Conformational Analyses for Hydrated Oligopeptides by Quantum Chemical Calculation (양자화학적 계산에 의한 올리고펩티드 수화물의 구조분석)

  • Sim, Jae-Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.7 / pp.95-104 / 2018
  • The structures and energies of the anhydrate and hydrate (hydration ratio h = 1) states of L-alanine (LA) and glycine (G) were calculated by quantum chemical calculations (QCCs) using B3LYP/6-31G(d,p) for four types of conformers (β-extended: Φ/Ψ = t-/t+, PP_II: g-/t+, PP_II-like: g-/g+, and α-helix: g-/g-). In LA and G, which have an imino proton (NH), three conformation types were obtained: β-extended, PP_II-like, and α-helix. Water molecules were inserted mainly between the intra-molecular hydrogen bond CO⋯HN in the PP_II-like and α-helix conformers, and attached to the CO group in the β-extended conformer. In both LA and G, the PP_II-like conformers were the most stable in the anhydrate and hydrate states; the result for LA differs from some experimental and theoretical results of other studies reporting that the main stable conformation of alanine oligopeptides is PP_II. The formation pattern and stability of the oligopeptide conformation were strongly dominated by the presence or absence of the intra-molecular hydrogen bond CO⋯HN, or of an NH2 group in the starting amino acid.
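The Φ/Ψ labels above map each backbone dihedral to a gauche/trans class. A toy classifier under assumed ±120° cutoffs (the cutoffs are an illustrative convention, not taken from the paper):

```python
# Classify a backbone conformation from its (Φ, Ψ) dihedral angles using the
# four gauche/trans classes named in the abstract. The ±120° boundaries between
# gauche and trans regions are an assumption for illustration.
def label(angle):
    """Map a dihedral angle (degrees) to a gauche/trans class."""
    if angle > 120:
        return "t+"
    if angle < -120:
        return "t-"
    return "g+" if angle >= 0 else "g-"

CONFORMERS = {("t-", "t+"): "beta-extended",
              ("g-", "t+"): "PP_II",
              ("g-", "g+"): "PP_II-like",
              ("g-", "g-"): "alpha-helix"}

def classify(phi, psi):
    """Name the conformer for a (Φ, Ψ) pair, or 'other' if unlisted."""
    return CONFORMERS.get((label(phi), label(psi)), "other")

print(classify(-60.0, -45.0))   # → alpha-helix
```

With such a mapping, the conformer types reported for each optimized geometry follow directly from its computed dihedral angles.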