• Title/Summary/Keyword: automatic optimization (자동 최적화)

Search Result 651, Processing Time 0.032 seconds

A Tracer Study on Mankyeong River Using Effluents from a Sewage Treatment Plant (하수처리장 방류수를 이용한 추적자 시험: 만경강 유역에 대한 사례 연구)

  • Kim Jin-Sam;Kim Kang-Joo;Hahn Chan;Hwang Gab-Soo;Park Sung-Min;Lee Sang-Ho;Oh Chang-Whan;Park Eun-Gyu
    • The Sea: JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.11 no.2
    • /
    • pp.82-91
    • /
    • 2006
  • We investigated the possibility of using effluents from a municipal sewage treatment plant (STP) as a tracer for hydrologic studies of rivers. The possibility was checked in a 12-km long reach downstream of the Jeonju Municipal Sewage Treatment Plant (JSTP). Time-series monitoring of the water chemistry revealed that the chemical composition of the effluent from the JSTP fluctuated within a relatively wide range during the sampling period. In addition, the signals from the plant were observed at the downstream stations consecutively with increasing time lags, especially in the conservative chemical parameters (concentrations of chloride and sulfate, total concentration of major cations, and electrical conductivity). Based on this observation, we could estimate the stream flow (Q), velocity (v), and dispersion coefficient (D). A 1-D nonreactive solute-transport model with automated optimization schemes was used for this study. The values of Q, v, and D estimated in this study varied from 6.4 to $9.0m^3/sec$ (at the downstream end of the reach), from 0.06 to 0.10 m/sec, and from 0.7 to $6.4m^2/sec$, respectively. The results show that the effluent from a large-scale municipal STP frequently provides good, multiple natural tracers for hydrologic studies.
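
The velocity estimate behind such a tracer test can be illustrated with a minimal sketch of the 1-D advection-dispersion solution (illustrative only: the two-station layout, the 6-km spacing, and the mid-range v and D values are assumptions drawn from the reported ranges, not the authors' code). The concentration peak of a conservative signal travels downstream at velocity v, so v can be recovered from the peak arrival lag between stations.

```python
import math

def adv_disp_conc(x, t, v, D, m=1.0):
    """Concentration from an instantaneous unit release at x=0, t=0
    (1-D nonreactive advection-dispersion solution)."""
    if t <= 0:
        return 0.0
    return m / math.sqrt(4 * math.pi * D * t) * math.exp(-(x - v * t) ** 2 / (4 * D * t))

def peak_arrival(x, v, D, dt=60.0, t_max=10 * 24 * 3600):
    """Time at which the concentration peak passes station x (grid search)."""
    best_t, best_c = None, -1.0
    t = dt
    while t <= t_max:
        c = adv_disp_conc(x, t, v, D)
        if c > best_c:
            best_c, best_t = c, t
        t += dt
    return best_t

# Assumed mid-range values from the reported results: v = 0.08 m/s, D = 3 m^2/s
v, D = 0.08, 3.0
t1 = peak_arrival(6000.0, v, D)   # hypothetical station 6 km downstream
t2 = peak_arrival(12000.0, v, D)  # hypothetical station 12 km downstream
v_est = (12000.0 - 6000.0) / (t2 - t1)
print(round(v_est, 3))  # ≈ 0.08 m/s, inside the reported 0.06-0.10 m/s range
```

In the actual study the inverse step also fits Q and D; this sketch only shows why consecutive time lags at downstream stations pin down v.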

Automation of Online to Offline Stores: Extremely Small Depth-Yolov8 and Feature-Based Product Recognition (Online to Offline 상점의 자동화 : 초소형 깊이의 Yolov8과 특징점 기반의 상품 인식)

  • Jongwook Si;Daemin Kim;Sungyoung Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.3
    • /
    • pp.121-129
    • /
    • 2024
  • The rapid advancement of digital technology and the COVID-19 pandemic have significantly accelerated the growth of online commerce, highlighting the need for support mechanisms that enable small business owners to respond effectively to these market changes. In response, this paper presents a foundational technology leveraging the Online to Offline (O2O) strategy to automatically capture products displayed on retail shelves and utilize these images to create virtual stores. The essence of this research lies in precisely identifying and recognizing the locations and names of displayed products, for which a single-class-targeted, lightweight model based on YOLOv8, named ESD-YOLOv8, is proposed. The detected products are identified by name through feature-point-based technology, which allows the system to be updated swiftly by simply adding photos of new products. In experiments, product name recognition demonstrated an accuracy of 74.0%, and position detection achieved an F2-score of 92.8% using only 0.3M parameters. These results confirm that the proposed method combines high performance with optimized efficiency.
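
The feature-based recognition step can be sketched abstractly (a toy sketch, not the paper's ESD-YOLOv8 pipeline: the `ProductMatcher` class, the descriptor vectors, and the 0.8 similarity threshold are all assumptions). Each product is registered once by a descriptor, and a new product is added by simply registering another descriptor, mirroring the "just add a photo" update path described above.

```python
import math

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

class ProductMatcher:
    """Illustrative feature-based recognizer: each product name maps to a
    descriptor (in the paper these come from image feature points);
    recognition returns the best-matching registered product."""
    def __init__(self):
        self.db = {}

    def register(self, name, descriptor):
        # Adding a new product only requires adding its descriptor.
        self.db[name] = descriptor

    def recognize(self, descriptor, threshold=0.8):
        best = max(self.db.items(), key=lambda kv: cosine(kv[1], descriptor),
                   default=(None, None))
        if best[0] is not None and cosine(best[1], descriptor) >= threshold:
            return best[0]
        return None  # no registered product is similar enough

m = ProductMatcher()
m.register("cola", [0.9, 0.1, 0.3])
m.register("chips", [0.1, 0.8, 0.5])
print(m.recognize([0.85, 0.15, 0.3]))  # closest registered product: cola
```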

Knowledge graph-based knowledge map for efficient expression and inference of associated knowledge (연관지식의 효율적인 표현 및 추론이 가능한 지식그래프 기반 지식지도)

  • Yoo, Keedong
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.49-71
    • /
    • 2021
  • Users who intend to utilize knowledge to actively solve given problems proceed by cross-wise and sequential exploration of knowledge associated in terms of certain criteria, such as content relevance. A knowledge map is a diagram or taxonomy giving an overview of the knowledge currently managed in a knowledge base, and it supports users' knowledge exploration based on certain relationships between knowledge items. A knowledge map, therefore, must be expressed in a networked form by linking related knowledge based on certain types of relationships, and should be implemented with technologies or tools specialized in defining and inferring such relationships. To this end, this study suggests a methodology for developing a knowledge graph-based knowledge map using a graph DB, which is known to be well suited to expressing and inferring entities and the relationships between them stored in a knowledge base. The procedures of the proposed methodology are modeling graph data; creating nodes, properties, and relationships; and composing knowledge networks by combining the identified links between knowledge. Among the various graph DBs, Neo4j is used in this study for its high credibility and applicability, demonstrated through wide and various application cases. To examine the validity of the proposed methodology, a knowledge graph-based knowledge map is implemented using the graph DB, and a performance comparison test is performed by applying a previous study's data, to check whether this study's knowledge map can yield the same level of performance as the previous one did. The previous study built a process-based knowledge map using ontology technology, which identifies links between related knowledge based on the sequences of tasks producing, or being activated by, knowledge.
In other words, since a task is not only activated by knowledge as an input but also produces knowledge as an output, input and output knowledge are linked as a flow by the task. Also, since a business process is composed of affiliated tasks that fulfill the purpose of the process, the knowledge networks within a business process can be derived from the sequences of the tasks composing the process. Therefore, using Neo4j, processes, tasks, and knowledge, as well as the relationships among them, are defined as nodes and relationships so that knowledge links can be identified based on the sequences of tasks. The resulting knowledge network, aggregating the identified knowledge links, is a knowledge map with the functionality of a knowledge graph, and its performance therefore needs to be tested against the level of the previous study's validation results. The performance test examines two aspects, the correctness of knowledge links and the possibility of inferring new types of knowledge: the former is examined using 7 questions, and the latter is checked by extracting two new types of knowledge. As a result, the knowledge map constructed through the proposed methodology showed the same level of performance as the previous one, and handled knowledge definition as well as knowledge relationship inference in a more efficient manner. Furthermore, compared to the previous study's ontology-based approach, this study's graph DB-based approach showed more beneficial functionality in intensively managing only the knowledge of interest, dynamically defining knowledge and relationships to reflect various meanings from situations to purposes, agilely inferring knowledge and relationships through Cypher-based queries, and easily creating new relationships by aggregating existing ones.
This study's artifacts can be applied to implement user-friendly knowledge exploration that reflects users' cognitive processes toward associated knowledge, and can further underpin the development of an intelligent knowledge base that expands autonomously through the inference-driven discovery of new knowledge and relationships. Beyond this, the study has an immediate effect on implementing the networked knowledge map essential to contemporary users eager to find the right knowledge to use.
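
The task-mediated inference described above can be sketched in plain Python (an illustrative sketch under assumed data, not the paper's Neo4j/Cypher implementation: the `K1`..`K4` and `T1`/`T2` identifiers are invented). Knowledge that activates a task is linked to the knowledge that task produces, the same pattern a Cypher query such as `(k1)-[:ACTIVATES]->(t)-[:PRODUCES]->(k2)` would match.

```python
# (knowledge_in, task): knowledge that activates a task as its input
activates = [("K1", "T1"), ("K2", "T1"), ("K3", "T2")]
# (task, knowledge_out): knowledge a task produces as its output
produces = [("T1", "K3"), ("T2", "K4")]

def infer_knowledge_links(activates, produces):
    """Link input knowledge to output knowledge through the shared task,
    yielding the knowledge-to-knowledge edges of the knowledge map."""
    links = []
    for k_in, task in activates:
        for task2, k_out in produces:
            if task == task2:
                links.append((k_in, k_out))
    return links

# K1 and K2 both feed task T1, which produces K3; K3 feeds T2, producing K4,
# so the knowledge flow K1/K2 -> K3 -> K4 emerges from task sequences alone.
print(infer_knowledge_links(activates, produces))
```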

Development of Radiosynthetic Methods of 18F-THK5351 for tau PET Imaging (타우 PET영상을 위한 18F-THK5351의 표지방법 개발)

  • Park, Jun-Young;Son, Jeong-Min;Chun, Joong-Hyun
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.1
    • /
    • pp.51-54
    • /
    • 2018
  • Purpose: $^{18}F-THK5351$ is a newly developed PET probe for tau imaging in Alzheimer's disease. The purpose of this study was to establish the automated production of $^{18}F-THK5351$ on a commercial module. Materials and Methods: Two different approaches were evaluated for the synthesis of $^{18}F-THK5351$. The first approach (method I) included the nucleophilic $^{18}F$-fluorination of the tosylate precursor, followed by pre-HPLC purification of the crude reaction mixture with an SPE cartridge. In the second approach (method II), the crude reaction mixture was introduced directly to a semi-preparative HPLC without SPE purification. The radiosynthesis of $^{18}F-THK5351$ was performed on a commercial GE $TRACERlab^{TM}$ $FX-_{FN}$ module. Quality control of $^{18}F-THK5351$ was carried out to meet the criteria outlined in the USP for PET radiopharmaceuticals. Results: The overall radiochemical yield of method I was $23.8{\pm}1.9%$ (n=4) as the decay-corrected yield (end of synthesis, EOS), and the total synthesis time was $75{\pm}3min$. The radiochemical yield of method II was $31.9{\pm}6.7%$ (decay-corrected, n=10), and the total preparation time was $70{\pm}2min$. The radiochemical purity was >98%. Conclusion: This study shows that method II provides a higher radiochemical yield and a shorter production time than the pre-HPLC SPE purification described in method I. The $^{18}F-THK5351$ synthesis by method II will be ideal for routine clinical application, considering the short physical half-life of fluorine-18 ($t_{1/2}=110min$).
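
The decay-corrected yield quoted above can be reproduced with a short worked calculation (the activity figures of 37,000 MBq of starting [18F]fluoride and 5,500 MBq of product are hypothetical; only the half-life and the 70-min synthesis time come from the text).

```python
import math

T_HALF_F18 = 110.0  # minutes; physical half-life of fluorine-18

def decay_corrected_yield(a_start_mbq, a_eos_mbq, synthesis_min):
    """Decay-corrected radiochemical yield (%): the end-of-synthesis activity
    is corrected back to the start of synthesis before dividing by the
    starting activity, removing the effect of physical decay."""
    lam = math.log(2) / T_HALF_F18
    a_eos_corrected = a_eos_mbq * math.exp(lam * synthesis_min)
    return 100.0 * a_eos_corrected / a_start_mbq

# Hypothetical activities, with the 70 min synthesis time reported for method II
print(round(decay_corrected_yield(37000, 5500, 70), 1))
```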

Definition of Tumor Volume Based on 18F-Fludeoxyglucose Positron Emission Tomography in Radiation Therapy for Liver Metastases: A Relational Analysis Study between Image Parameters and Image Segmentation Methods (간 전이 암 환자의 18F-FDG PET 기반 종양 영역 정의: 영상 인자와 자동 영상 분할 기법 간의 관계분석)

  • Kim, Heejin;Park, Seungwoo;Jung, Haijo;Kim, Mi-Sook;Yoo, Hyung Jun;Ji, Young Hoon;Yi, Chul-Young;Kim, Kum Bae
    • Progress in Medical Physics
    • /
    • v.24 no.2
    • /
    • pp.99-107
    • /
    • 2013
  • Before the development of modern radiation therapy techniques, surgical resection was the main treatment for liver metastases. Recently, the use of radiation therapy has gradually increased owing to advances in radiation dose delivery techniques. 18F-FDG PET imaging shows better sensitivity and specificity in detecting liver metastases, so this modality is important, alongside the planning CT, for tumor delineation in radiation treatment. In this study, we applied automatic image segmentation methods to PET images of liver metastases and examined the impact of image factors on these methods. We selected patients who received radiation therapy and 18F-FDG PET/CT at the Korea Cancer Center Hospital from 2009 to 2012. Three kinds of image segmentation method were then applied: the relative threshold method, the gradient method, and the region growing method. Based on these results, we performed statistical analysis in two directions: 1. comparison of the GTV with the image segmentation results; 2. regression analysis of the relation between image factors and the image segmentation techniques. The mean volume of the GTV was $60.9{\pm}65.9$ cc, while the $GTV_{40%}$ was $22.43{\pm}35.27$ cc, the $GTV_{50%}$ was $10.11{\pm}17.92$ cc, the $GTV_{RG}$ was $32.89{\pm}36.84$ cc, and the $GTV_{GD}$ was $30.34{\pm}35.77$ cc, respectively. The segmentation method most similar to the GTV result was the region growing method. For the quantitative analysis of the image factors that influenced the region growing method, we used the standardized coefficient ${\beta}$; the factors affecting the region growing method were, in order, GTV, $TumorSUV_{MAX/MIN}$, $SUV_{max}$, and TBR. The region growing (automatic segmentation) method showed the result most similar to the CT-based GTV and was affected by image factors.
If we define the tumor volume by an automatic image segmentation method that reflects the PET image parameters, more accurate and consistent tumor contouring can be achieved, and ultimately the optimized radiation dose can be delivered to the cancer.
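
The region growing idea can be shown on a toy 2-D array (an illustrative sketch, not the clinical tool: the 4x4 "SUV map", the seed, and the acceptance threshold are invented). The region grows from a seed voxel, accepting neighbours whose intensity stays close to the seed value.

```python
def region_grow(image, seed, threshold):
    """Toy region-growing segmentation: grow from the seed pixel, accepting
    4-neighbours whose intensity is within `threshold` of the seed value."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(image[r][c] - seed_val) <= threshold:
            region.add((r, c))
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

# Toy "SUV map": a bright 2x2 lesion on a dim background
suv = [
    [1.0, 1.2, 1.1, 1.0],
    [1.1, 8.0, 7.5, 1.2],
    [1.0, 7.8, 8.2, 1.1],
    [1.2, 1.0, 1.1, 1.0],
]
lesion = region_grow(suv, (1, 1), threshold=2.0)
print(len(lesion))  # 4 pixels: the bright 2x2 block
```

As the abstract notes, the result depends on image factors such as the tumor-to-background ratio; here that dependence appears directly as the choice of `threshold`.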

Development of remote control automatic fire extinguishing system for fire suppression in double-deck tunnel (복층터널 화재대응을 위한 원격 자동소화 시스템 개발 연구)

  • Park, Jinouk;Yoo, Yongho;Kim, Yangkyun;Park, Byoungjik;Kim, Whiseong;Park, Sangheon
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.21 no.1
    • /
    • pp.167-175
    • /
    • 2019
  • To deal effectively with tunnel fires, which mostly involve vehicles, early suppression is critical. In urban tunnels, however, firefighters' access to the fire scene is severely limited by traffic congestion, which hinders timely firefighting activity, and this problem is further worsened in underground roads (double-deck tunnels), which are increasingly being extended and deepened. In preparation for such disasters in Korea, the range of life safety facilities to be installed is defined based on categories of extension and fire protection, referring to a risk hazard index determined by tunnel length and conditions; in particular, to deal directly with tunnel fires, fire extinguishers, indoor hydrants, and sprinklers are designated as mandatory facilities depending on the category. However, such fire extinguishing installations have been found functionally and technically inadequate, so measures to improve the system need to be taken. Particularly in a double-deck tunnel, which accommodates traffic in both directions within a single tunnel whose section is divided by an intermediate slab, a facility or system that functions more rapidly and effectively is essential. This study is therefore intended to remedy the problems of the existing tunnel life safety (fire extinguishing) systems and to develop a remote-controlled automatic fire extinguishing system optimized for a double-deck tunnel. A system considering the low floor height and extended length, together with an indoor hydrant for a wide range of uses, has been developed; its performance has been verified, and the commercialization process ahead of tunnel deployment is now underway.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that let the system continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases make it difficult to expand nodes when rapidly increasing data must be distributed across many nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance is carried out against a log data processing system that uses only MySQL; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
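
The collector's routing decision can be sketched in plain Python (an illustrative sketch with no real MongoDB or MySQL drivers: the `route_log` function, the log types, and the choice of which types count as real-time are all assumptions). Records needing real-time analysis go to the MySQL path; the rest go to the MongoDB path for batch and Hadoop-based analysis.

```python
def route_log(record, realtime_types=("transaction", "error")):
    """Return which store should receive this record: real-time log types
    go to the MySQL path, everything else to the MongoDB path."""
    return "mysql" if record.get("type") in realtime_types else "mongodb"

# Assumed sample records standing in for a bank's client-business log stream
logs = [
    {"type": "transaction", "msg": "transfer ok"},
    {"type": "access", "msg": "login page"},
    {"type": "error", "msg": "timeout"},
]
routes = [route_log(r) for r in logs]
print(routes)  # ['mysql', 'mongodb', 'mysql']
```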

Shape Scheme and Size Discrete Optimum Design of Plane Steel Trusses Using Improved Genetic Algorithm (개선된 유전자 알고리즘을 이용한 평면 철골트러스의 형상계획 및 단면 이산화 최적설계)

  • Kim, Soo-Won;Yuh, Baeg-Youh;Park, Choon-Wok;Kang, Moon-Myung
    • Journal of Korean Association for Spatial Structures
    • /
    • v.4 no.2 s.12
    • /
    • pp.89-97
    • /
    • 2004
  • The objective of this study is the development of a scheme and discrete optimum design algorithm based on the genetic algorithm. The algorithm can perform both scheme and size optimum designs of plane trusses. The developed scheme genetic algorithm was implemented in a computer program. For the optimum design, the objective function is the weight of the structure, and the constraints are limits on loads and serviceability. The basic search method for the optimum design is the genetic algorithm, which is known to be very efficient for discrete optimization. However, its application to complicated structures has been limited because of the extreme time needed for the large number of structural analyses: the genetic process itself takes virtually no time, but the evolutionary process requires a tremendous amount of structural analysis time, making application of the genetic algorithm to complicated structures extremely difficult, if not impossible. This study solves the problem by introducing size and scheme genetic algorithm operators into the genetic algorithm. These operators were introduced to overcome the problem and to complement the evolutionary process; they are very efficient in the approximate analysis and the scheme and size optimization of plane truss structures, and considerably reduce structural analysis time. Combining scheme and size discrete optimization in the genetic algorithm is what makes the practical discrete optimum design of plane truss structures possible. The efficiency and validity of the developed discrete optimum design algorithm were verified by applying it to various optimum design examples: plane Pratt, Howe, and Warren trusses.
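
The discrete size-optimization loop can be sketched with a minimal genetic algorithm (illustrative only, not the authors' program: the section catalogue, member lengths, and the stand-in feasibility check replacing a real structural analysis are all assumptions). Each gene is an index into a discrete section list, and the fitness is the truss weight plus a penalty for infeasible members.

```python
import random

random.seed(0)

SECTIONS = [5.0, 8.0, 12.0, 18.0, 25.0]  # assumed candidate areas (cm^2)
N_MEMBERS = 6
LENGTHS = [3.0, 3.0, 4.2, 4.2, 3.0, 3.0]  # assumed member lengths (m)
MIN_AREA_OK = 12.0  # stand-in feasibility rule instead of a real FE analysis

def weight(individual):
    return sum(SECTIONS[g] * L for g, L in zip(individual, LENGTHS))

def fitness(individual):
    # Penalize understrength members; a real implementation would run the
    # structural analysis here, the expensive step the paper works around.
    penalty = sum(100.0 for g in individual if SECTIONS[g] < MIN_AREA_OK)
    return weight(individual) + penalty

def evolve(pop_size=30, generations=60):
    pop = [[random.randrange(len(SECTIONS)) for _ in range(N_MEMBERS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_MEMBERS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # mutation
                child[random.randrange(N_MEMBERS)] = random.randrange(len(SECTIONS))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print([SECTIONS[g] for g in best])
```

The operators shown (selection, one-point crossover, mutation) are the generic GA core; the paper's contribution is adding scheme operators and approximate analyses so that far fewer full structural analyses are needed.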

Study of Quality Control of Traditional Wine Using IT Sensing Technology (IT 센싱 기술을 이용한 전통주 발효의 품질관리 연구)

  • Song, Hyeji;Choi, Jihee;Park, Chan-Won;Shin, Dong-Beom;Kang, Sung-Soo;Oh, Sung Hoon;Hwang, Kwontack
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.44 no.6
    • /
    • pp.904-911
    • /
    • 2015
  • The objective of this study was to investigate the quality characteristics of traditional wine using a radio-frequency identification (RFID) system annexed to a fermenter. We propose an RFID-based data transmission scheme for monitoring the fermentation of traditional alcoholic beverages. The pH, total acidity, total sugar, soluble sugar, free sugar, alcohol content, and organic acids were investigated over three rounds of fermentation of traditional alcoholic beverages. The initial pH values were 7.98, 7.95, and 7.68 at day 0; they decreased drastically to 3.31~2.96 at day 2 and then slowly increased toward the end point, finally reaching 3.34 at day 20. Acidity tended to increase quickly with time, especially for all samples after day 2; the fermentation environment induced a sudden increase in acidity in the reactants, reflected in the low pH. Total sugars during fermentation quickly decreased to 20.3, 22.43, and 19.2% at day 2, and the rate of reduction steadily slowed, reaching 5.1, 6.1, and 4.8% at day 10. The alcohol content, on the other hand, showed the reverse trend to total sugars and the same pattern as total acids, reaching its highest value of 17.3% (v/v) on day 20. In this study of traditional wine fermentation using an RFID system, we showed that pH, soluble sugar, and alcohol content can be adopted as key indicators for quality control and standardization of traditional wine manufacturing.
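
A monitoring system built on such indicators could flag the point where fermentation levels off. The sketch below is illustrative only (the `fermentation_stable` rule, the tolerance, and the intermediate alcohol readings are assumptions; only the 0% start and 17.3% day-20 endpoint come from the text): it reports the first reading whose change from the previous one falls within a tolerance.

```python
def fermentation_stable(series, tol):
    """Index of the first reading whose change from the previous reading is
    within `tol`, i.e. the indicator has leveled off; None if it never does."""
    for i in range(1, len(series)):
        if abs(series[i] - series[i - 1]) <= tol:
            return i
    return None

# Assumed alcohol readings (% v/v) at days 0, 2, 5, 10, 15, 20
alcohol = [0.0, 6.5, 12.0, 15.5, 17.0, 17.3]
print(fermentation_stable(alcohol, tol=0.5))  # stabilizes at the last reading
```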

Improvement of Radiosynthesis Yield of [11C]acetate ([11C]아세트산의 방사화학적 수율 증가를 위한 연구)

  • Park, Jun Young;Son, Jeongmin
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.2
    • /
    • pp.74-78
    • /
    • 2018
  • Purpose: $[^{11}C]$acetate has proved useful in assessing myocardial oxygen metabolism and detecting various malignancies, including prostate cancer, hepatocellular carcinoma, renal cell carcinoma, and brain tumors. The purpose of this study was to improve the radiosynthesis yield of $[^{11}C]$acetate on an automated radiosynthesis module. Materials and Methods: $[^{11}C]$acetate was prepared by carboxylation of a Grignard reagent, methylmagnesium chloride, with $[^{11}C]CO_2$ gas, followed by hydrolysis with 1 mM acetic acid and purification using solid-phase extraction cartridges. The effects of the reaction temperature ($0^{\circ}C$, $-10^{\circ}C$, $-55^{\circ}C$) and cyclotron beam time (10 min, 15 min, 20 min, 25 min) on the radiosynthesis yield were investigated in the $[^{11}C]$acetate labeling reaction. Results: The maximum radiosynthesis yield was obtained at a reaction temperature of $-10^{\circ}C$. The radioactivity of $[^{11}C]$acetate acquired at the $-10^{\circ}C$ reaction temperature was 2.4 times higher than that acquired at $-55^{\circ}C$. The radiosynthesis yield of $[^{11}C]$acetate increased with increasing cyclotron beam time. Conclusion: This study shows that the radiosynthesis yield of $[^{11}C]$acetate is highly dependent on reaction temperature. The best yield was obtained by reacting the Grignard reagent with $[^{11}C]CO_2$ at $-10^{\circ}C$. These radiolabeling conditions will be ideal for routine clinical application.
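
The beam-time trend has a standard physical explanation that can be worked through numerically (a sketch of the textbook saturation relation, not a model of this cyclotron's actual output): produced activity approaches saturation as $A/A_{sat} = 1 - e^{-\lambda t}$, so longer beams give more $[^{11}C]CO_2$ but with diminishing returns, since carbon-11 decays during the irradiation itself.

```python
import math

T_HALF_C11 = 20.4  # minutes; physical half-life of carbon-11

def relative_activity(beam_min):
    """Fraction of the saturation activity reached after `beam_min` minutes
    of irradiation: A/A_sat = 1 - exp(-lambda * t)."""
    lam = math.log(2) / T_HALF_C11
    return 1.0 - math.exp(-lam * beam_min)

# The four beam times compared in the study
for t in (10, 15, 20, 25):
    print(t, round(relative_activity(t), 2))
```

The fractions rise with beam time but the increments shrink, consistent with the reported increase of yield with longer irradiation.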