• Title/Summary/Keyword: sLLM

Search Results: 24

A Study on the Intelligent Document Processing Platform for Document Data Informatization (문서 데이터 정보화를 위한 지능형 문서처리 플랫폼에 관한 연구)

  • Hee-Do Heo;Dong-Koo Kang;Young-Soo Kim;Sam-Hyun Chun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.89-95
    • /
    • 2024
  • Nowadays, a company's competitiveness depends on the ability of all organizational members to share and utilize the knowledge the organization has accumulated. As if to prove this, the world is now focusing on the ChatGPT service built on generative AI technology based on LLMs (Large Language Models). However, it is still difficult to apply the ChatGPT service to actual work because of its many hallucination problems. To solve this problem, sLLM (lightweight Large Language Model) technology is being proposed as an alternative. Constructing an sLLM requires corporate data, which consists of the organization's ERP data and the office-document knowledge the organization has preserved. ERP data can be used by connecting it directly to the sLLM, but office documents are stored in file format and must be converted into data format before they can be connected to the sLLM. In addition, there are many technical limitations to utilizing office documents stored in file format as organizational knowledge. This study proposes a method of storing office documents in database form rather than as files, allowing companies to utilize their already accumulated office documents as an organizational knowledge system and to provide those documents in data form to the company's sLLM. We aim to contribute to improving corporate competitiveness by combining this approach with AI technology.
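
A minimal sketch of the file-to-database idea described in the abstract above, assuming documents have already been extracted to plain text; the SQLite store and the `query_sllm` call are illustrative placeholders, not the platform the paper actually builds:

```python
# Sketch: store office-document text as database rows instead of files, then
# hand matching rows to an sLLM as context. SQLite and query_sllm() are
# hypothetical stand-ins; the paper does not specify a concrete stack.
import sqlite3

def build_document_db(db_path: str, documents: dict[str, str]) -> None:
    """Convert office documents (already extracted to plain text) into DB rows."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS docs (title TEXT, body TEXT)")
    con.executemany("INSERT INTO docs VALUES (?, ?)", documents.items())
    con.commit()
    con.close()

def fetch_context(db_path: str, keyword: str, limit: int = 3) -> list[str]:
    """Look up document bodies mentioning a keyword, to feed the sLLM as context."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT body FROM docs WHERE body LIKE ? LIMIT ?", (f"%{keyword}%", limit)
    ).fetchall()
    con.close()
    return [r[0] for r in rows]

# Usage: context = fetch_context("knowledge.db", "annual leave")
# answer = query_sllm(question, context)   # query_sllm(): hypothetical sLLM call
```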

A Study for Myungrihak's Samhab Modeling Using LLM (LLM을 적용한 명리학의 삼합모델링에 관한 연구)

  • Lee, Ock-Hwa;Cho, Sung-je
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.5 no.2
    • /
    • pp.89-95
    • /
    • 2012
  • This paper aims at producing a data model of the science of divination by studying a new method with a mathematical function for "Samhap". For that goal, we must study a new linking method, which we call "Lee's Linking Method". Therefore, when drawing up Samhap, we provide convenience in the field: it is not drawn by hand but produced by LLM.

Effects of Devarda's Alloy Addition on Determination of Total Nitrogen and Inorganic Nitrogen in Liquid Livestock Manure (Devarda's alloy 첨가가 축산분뇨 액비의 총 질소 및 무기태 질소 정량에 미치는 영향)

  • Lim, Tae-Jun;Kim, Ki-In;Park, Jin-Myeon;Lee, Seong-Eun;Noh, Jae-Seung;Hong, Soon-Dal
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.2
    • /
    • pp.223-226
    • /
    • 2012
  • Liquid livestock manure (LLM) has been used as a nitrogen fertilizer source for horticultural plants. LLM contains organic nitrogen (N), ammonium, nitrate, and nitrite. The amounts of nitrate and nitrite in LLM are usually small compared to the amount of ammonium, so they can be neglected if the total nitrogen (N) concentration in LLM is higher than $1,000\;mg\;L^{-1}$. However, if the total N concentration in LLM is less than $1,000\;mg\;L^{-1}$, the amounts of nitrate and nitrite may affect the total N concentration. Currently, the Kjeldahl digestion method is mainly used for ammonium-N in LLM and is therefore ineffective for analyzing nitrate-N and nitrite-N. The objective of this study was to evaluate whether total N concentrations in diverse LLMs are affected by the amounts of nitrate-N and nitrite-N when determined by the Kjeldahl method with and without Devarda's alloy after concentrated sulfuric acid digestion. Five liquid livestock manure samples were collected at swine farms in Ansung and Icheon. All LLM samples were stored at $25^{\circ}C$, subsampled every $15^{th}$ day for 90 days, and analyzed for total N, ammonium-N, and nitrate-N. On the $90^{th}$ day, LLM samples were analyzed with and without Devarda's alloy after concentrated sulfuric acid digestion. Potassium nitrate, ammonium nitrate, and ammonium chloride were used to determine the N recovery percentages. Total N concentration ranged from 560 to $4,230\;mg\;L^{-1}$. Nitrate-N was found in all LLM samples, ranging from 21 to $164\;mg\;L^{-1}$. The N recovery percentage with potassium nitrate was 0% without Devarda's alloy and 100% with Devarda's alloy, because adding Devarda's alloy facilitated the conversion of nitrate-N into ammonium-N. Total N differed significantly between the two methods, with and without Devarda's alloy: total N concentrations were $210\;mg\;L^{-1}$ at LLM 4 and $370\;mg\;L^{-1}$ at LLM 5 without Devarda's alloy, and $290\;mg\;L^{-1}$ at LLM 4 and $490\;mg\;L^{-1}$ at LLM 5 with Devarda's alloy. These results suggest that if the total N of LLM is less than $1,000\;mg\;L^{-1}$, an additional procedure such as adding Devarda's alloy can be used to better estimate the total N and inorganic N.
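
For context, the nitrogen recovery percentage mentioned above is the measured N divided by the theoretical N contributed by the spiked standard. A small illustration, assuming potassium nitrate as the standard; the spike mass and measured value are invented, only the molar-mass arithmetic is factual:

```python
# Illustrative N-recovery calculation for a potassium nitrate standard.
# Molar masses are real; the spike mass and measured value below are made up.
M_KNO3, M_N = 101.10, 14.007   # g/mol

def recovery_percent(kno3_mg: float, measured_n_mg: float) -> float:
    theoretical_n_mg = kno3_mg * M_N / M_KNO3   # N contained in the KNO3 spike
    return 100.0 * measured_n_mg / theoretical_n_mg

# Without Devarda's alloy, nitrate-N is not converted to ammonium-N, so Kjeldahl
# digestion recovers ~0% of it; with the alloy, recovery approaches 100%.
print(round(recovery_percent(100.0, 13.9), 1))   # ~100.3 when nearly all N is recovered
```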

A Study on the Evaluation of LLM's Gameplay Capabilities in Interactive Text-Based Games (대화형 텍스트 기반 게임에서 LLM의 게임플레이 기능 평가에 관한 연구)

  • Dongcheul Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.3
    • /
    • pp.87-94
    • /
    • 2024
  • We investigated the feasibility of using Large Language Models (LLMs) to play text-based games without training on game data in advance. We adopted ChatGPT-3.5 and its state-of-the-art successor, ChatGPT-4, as the systems implementing the LLM. In addition, we added the persistent memory feature proposed in this paper to ChatGPT-4, creating three game-player agents in total. We used Zork, one of the most famous text-based games, to see whether the agents could navigate through complex locations, gather information, and solve puzzles. The results showed that the agent with persistent memory had the widest range of exploration and the best score among the three agents. However, all three agents were limited in solving puzzles, indicating that LLMs are vulnerable to problems that require multi-level reasoning. Nevertheless, the proposed agent was still able to visit 37.3% of the total locations and collect all the items in the locations it visited, demonstrating the potential of LLMs.
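
A minimal sketch of the persistent-memory agent idea described above, assuming the OpenAI chat API; the memory format, the prompt, and the `send_command` Zork binding are hypothetical, not the paper's actual implementation:

```python
# Sketch of an LLM game-playing agent with simple persistent memory. The OpenAI
# chat call is a real API; the Zork interface and memory scheme are stand-ins.
from openai import OpenAI

client = OpenAI()
memory: list[str] = []   # persists across turns: rooms seen, items taken, clues

def next_command(observation: str) -> str:
    prompt = (
        "You are playing the text adventure Zork. Reply with exactly one game command.\n"
        "Memory of past exploration:\n" + "\n".join(memory[-50:]) + "\n"
        "Current game output:\n" + observation
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    command = reply.choices[0].message.content.strip()
    memory.append(f"saw: {observation[:80]} -> did: {command}")   # persist the turn
    return command

# Game loop (send_command is a placeholder for a real Zork interpreter binding):
# obs = "West of House. You are standing in an open field..."
# while playing:
#     obs = send_command(next_command(obs))
```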

LiLa$_{1-x}$Nd$_x$(MoO$_4$)$_2$ Single Crystal Growth by the Czochralski Method (쵸크랄스키법에 의한 LiLa1-xNdx(MoO4)2 단결정 육성 연구)

  • Bae In-Kook;Chae Soo-Chun;Jang Young-Nam;Kim Sang-Bae
    • Journal of the Korean Ceramic Society
    • /
    • v.41 no.9
    • /
    • pp.677-683
    • /
    • 2004
  • Nd:LLM (Nd:LiLa(MoO$_4$)$_2$) single crystals for use as a laser host material were grown by the Czochralski method. The grown Nd:LLM single crystals cracked easily; the causes of such cracks are generally related to phase transition, incongruent melting, chemical heterogeneity of composition, an imbalanced geometric thermal structure, and the growth direction. TG-DTA thermal analysis confirmed that no phase transition is observed, and XRD analysis revealed congruent melting in our products. It was confirmed that volatilization of the Li$_2$O component is the main cause of chemical heterogeneity. The geometric thermal profile of the resistance furnace of our own design was controlled by the crucible height. The Nd:LLM crystal quality was also affected by the growth direction, with the best quality obtained for the (101) growth direction. The distribution and effective distribution coefficient of the Nd$^{3+}$ ion were determined by PIXE analysis.

Estimation of Ruminal Degradation and Intestinal Digestion of Tropical Protein Resources Using the Nylon Bag Technique and the Three-step In vitro Procedure in Dairy Cattle on Rice Straw Diets

  • Promkot, C.;Wanapat, Metha;Rowlinson, P.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.20 no.12
    • /
    • pp.1849-1857
    • /
    • 2007
  • The experiment was carried out using fistulated multiparous Holstein Friesian crossbred (75% Holstein Friesian and 25% Red Sindhi) dairy cows in their dry period, fed on untreated rice straw, to evaluate the nutritive value of local protein feed resources using the in sacco method and in vitro pepsin-pancreatin digestion. The experimental feeds were cottonseed meal (CSM), soybean meal (SBM), dried brewery's grains (DBG), palm kernel meal (PSM), cassava hay (CH), and leucaena leaf meal (LLM). Each feedstuff was weighed into duplicate nylon bags and incubated in each of the two rumen-fistulated cows for 0, 2, 4, 8, 16, 24, and 48 h. Rumen feed residues from bags incubated for 16 h were used to estimate lower-gut digestibility by the in vitro pepsin-pancreatin digestion technique. Ruminal ammonia-nitrogen ($NH_3-N$) concentrations did not differ between treatments or over time, with a mean of 5.5 mg%. Effective degradabilities of the DM of CSM, SBM, DBG, PSM, CH, and LLM were 41.9, 56.1, 30.8, 47.0, 41.1, and 47.5%, respectively. Effective degradabilities of the CP were 49.6, 59.2, 40.9, 33.5, 47.3, and 65.0% for the respective feedstuffs. The in vitro pepsin-pancreatin digestibility of CP, ranked from highest to lowest, was SBM, CSM, LLM, CH, DBG, and PSM. The intestinal and total-tract digestion of feedstuffs in the current study was relatively lower than values reported in previous literature. The results of this study indicate that SBM and LLM were highly degradable in the rumen, while CH, CSM, and DBG were less degradable and hence supplied more rumen-undegradable protein. Soybean meal and LLM could be used to improve rumen ecology, whilst CH, CSM, and DBG could be used as rumen by-pass protein for ruminant feeding in the tropics.
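
For reference, effective degradability in nylon-bag (in sacco) studies is conventionally computed from the exponential disappearance model of Ørskov and McDonald; the expressions below are that standard model, given only as a reminder of what the percentages above represent (the paper's exact fitting details are not reproduced here):

$$p(t) = a + b\left(1 - e^{-ct}\right), \qquad \mathrm{ED} = a + \frac{bc}{c + k}$$

where $a$ is the rapidly soluble fraction, $b$ the slowly degradable fraction, $c$ the fractional degradation rate, and $k$ the ruminal outflow rate.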

Application of ChatGPT text extraction model in analyzing rhetorical principles of COVID-19 pandemic information on a question-and-answer community

  • Hyunwoo Moon;Beom Jun Bae;Sangwon Bae
    • International journal of advanced smart convergence
    • /
    • v.13 no.2
    • /
    • pp.205-213
    • /
    • 2024
  • This study uses a large language model (LLM) to identify Aristotle's rhetorical principles (ethos, pathos, and logos) in COVID-19 information on Naver Knowledge-iN, South Korea's leading question-and-answer community. The research analyzed the differences in these rhetorical elements between the most upvoted answers and randomly selected answers. A total of 193 answer pairs were randomly selected, with 135 pairs for training and 58 for testing. These answers were then coded according to the rhetorical principles to refine GPT-3.5-based models. The models achieved F1 scores of .88 (ethos), .81 (pathos), and .69 (logos). Subsequent analysis of 128 new answer pairs revealed that logos, particularly factual information and logical reasoning, was used more frequently in the most upvoted answers than in the random answers, whereas there were no differences in ethos and pathos between the answer groups. The results suggest that health information consumers value information containing logos, while ethos and pathos were not associated with consumers' preference for health information. By utilizing an LLM for the analysis of persuasive content, which has typically been conducted manually with much labor and time, this study not only demonstrates the feasibility of using an LLM for latent content but also contributes to expanding the horizon of the field of AI text extraction.
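
A rough sketch of how such a refined GPT-3.5 classifier might be queried for one rhetorical principle, assuming the OpenAI chat API; the fine-tuned model name and prompt wording are placeholders, not the authors' actual setup:

```python
# Sketch: ask a GPT-3.5-based model whether an answer text exhibits logos.
# Only the chat-completion call is a real API; the model id and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def has_logos(answer_text: str) -> bool:
    reply = client.chat.completions.create(
        model="ft:gpt-3.5-turbo:example::rhetoric",   # placeholder fine-tune id
        messages=[
            {"role": "system",
             "content": "Label the answer 1 if it uses logos (facts, logical reasoning), else 0."},
            {"role": "user", "content": answer_text},
        ],
    )
    return reply.choices[0].message.content.strip() == "1"

# Comparing upvoted vs. random answers then reduces to counting has_logos() hits
# in each group, as the study does for ethos, pathos, and logos.
```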

Development of a Regulatory Q&A System for KAERI Utilizing Document Search Algorithms and Large Language Model (거대언어모델과 문서검색 알고리즘을 활용한 한국원자력연구원 규정 질의응답 시스템 개발)

  • Hongbi Kim;Yonggyun Yu
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.28 no.5
    • /
    • pp.31-39
    • /
    • 2023
  • The evolution of Natural Language Processing (NLP) and the rise of large language models (LLMs) like ChatGPT have paved the way for specialized question-answering (QA) systems tailored to specific domains. This study outlines a system harnessing the power of an LLM in conjunction with document search algorithms to interpret and address user inquiries using documents from the Korea Atomic Energy Research Institute (KAERI). Initially, the system refines multiple documents for optimized search and analysis, breaking the content into manageable paragraphs suitable for the language model's processing. Each paragraph's content is converted into a vector via an embedding model and archived in a database. Upon receiving a user query, the system matches the vector extracted from the question against the stored vectors, pinpointing the most pertinent content. The chosen paragraphs, combined with the user's query, are then processed by the language generation model to formulate a response. Tests encompassing a spectrum of questions verified the system's proficiency in discerning question intent, understanding diverse documents, and delivering rapid and precise answers.
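
A condensed sketch of the retrieval flow described above (chunk, embed, store, match, generate), assuming OpenAI embedding and chat models and an in-memory store; the authors' actual database and models are not specified here:

```python
# Condensed sketch of the described pipeline: embed paragraphs, rank them by
# cosine similarity to the query, and pass the best matches to a generation model.
# Model names and the in-memory "database" are stand-ins for the paper's setup.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    out = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(out.data[0].embedding)

def answer(question: str, paragraphs: list[str], top_k: int = 3) -> str:
    vectors = np.stack([embed(p) for p in paragraphs])   # done offline in practice
    q = embed(question)
    scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(paragraphs[i] for i in np.argsort(scores)[::-1][:top_k])
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content":
                   f"Answer using only these KAERI regulation excerpts:\n{context}\n\nQuestion: {question}"}],
    )
    return reply.choices[0].message.content
```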

Automation of M.E.P Design Using Large Language Models (대형 언어 모델을 활용한 설비설계의 자동화)

  • Park, Kyung Kyu;Lee, Seung-Been;Seo, Min Jo;Kim, Si Uk;Choi, Won Jun;Kim, Chee Kyung
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2023.11a
    • /
    • pp.237-238
    • /
    • 2023
  • Urbanization and the increase in building scale have amplified the complexity of M.E.P design. Traditional design methods face limitations when considering intricate pathways and variables, leading to an emergent need for research on automated design. Initial algorithmic approaches encountered challenges in addressing complex architectural structures and the diversity of M.E.P types. However, with the launch of OpenAI's ChatGPT-3.5 beta version in 2022, new opportunities in the automated design sector were unlocked. ChatGPT, based on a Large Language Model (LLM), has the capability to deeply comprehend the logical structures and meanings within its training data. This study analyzed the potential applications and latent value of LLMs in M.E.P design. Ultimately, the implementation of LLMs in M.E.P design will make genuine automated design feasible, which is anticipated to drive advances in design across the construction sector.

A Proposal of Evaluation of Large Language Models Built Based on Research Data (연구데이터 관점에서 본 거대언어모델 품질 평가 기준 제언)

  • Na-eun Han;Sujeong Seo;Jung-ho Um
    • Journal of the Korean Society for information Management
    • /
    • v.40 no.3
    • /
    • pp.77-98
    • /
    • 2023
  • Large Language Models (LLMs) are becoming the major trend in the natural language processing field. These models were built on research data, but information such as the types, limitations, and risks of using that research data is unknown. This research presents how to analyze and evaluate, from the perspective of research data, LLMs that were built with research data: LLaMA and LLaMA-based models such as Stanford's Alpaca and the Large Model Systems Organization's Vicuna, as well as ChatGPT from OpenAI. This quality evaluation focuses on the validity, functionality, and reliability criteria of Data Quality Management (DQM). Furthermore, we adopted the Holistic Evaluation of Language Models (HELM) to understand its evaluation criteria and then discussed its limitations. This study presents quality evaluation criteria for LLMs built using research data, along with future development directions.