• Title/Summary/Keyword: Development System Design


A Study on Transfer Process Model for long-term preservation of Electronic Records (전자기록의 장기보존을 위한 이관절차모형에 관한 연구)

  • Cheon, Kwon-ju
    • The Korean Journal of Archival Studies / no.16 / pp.39-96 / 2007
  • Traditionally, the concept of transfer was that physical records such as paper documents, videos and photos were delivered to archives or records centers on the basis of transfer guidelines. With the automation of the records management environment and the spread of new records creation and management applications, however, we can now create and manage records in cyberspace. For these reasons, the existing transfer system, in which filed records are moved to archives or records centers in paper boxes, needs to be changed. Given the need for a new transfer paradigm, the revision of the Records Act to include provisions on electronic records management and transfer is desirable and proper. Nevertheless, the electronic transfer provisions are too conceptual to apply to records management practice, so detailed methods and processes have to be developed. In this context, this paper suggests an electronic records transfer process model based on international standards and foreign countries' cases. Transferring records is one of the records management activities that makes valuable records usable in the future, so both the producer and the archive have to transfer the records themselves, together with their context information, to a long-term preservation repository according to the transfer guidelines. In the long run, transfer means that records are moved to the archive through a formal transfer process with proper record protection steps. To accomplish these purposes, I analyzed the 'OAIS Reference Model' and the 'Producer-Archive Interface Methodology Abstract Standard' (CCSDS Blue Book) published by the CCSDS (Consultative Committee for Space Data Systems). As the words 'Reference Model' and 'Abstract Standard' suggest, however, these standards are not suitable for direct application to business practice. To solve this problem, I also analyzed foreign countries' transfer cases.
Through the analysis of theory and cases, I suggest an Electronic Records Transfer Process Model consisting of five sub-processes, 'Ingest prepare → Ingest → Validation → Preservation → Archival storage', where each sub-process also has its own transfer elements. In particular, to confirm the new process model's feasibility, I classified Korean transfers into two types (one from a public records center to a public archive, the other from a civil records center to a public or civil archive) and applied the new transfer model to both types of transfer cases.
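The five-stage pipeline above can be sketched in code. The stage names come from the proposed model; the handler mechanics and the shape of the "package" are purely illustrative assumptions, not the paper's design.

```python
from enum import Enum

class TransferStage(Enum):
    """The five sub-processes of the proposed transfer model, in order."""
    INGEST_PREPARE = 1
    INGEST = 2
    VALIDATION = 3
    PRESERVATION = 4
    ARCHIVAL_STORAGE = 5

def run_transfer(package, handlers):
    """Pass a submission package through every stage in definition order.
    Any stage may raise to abort the transfer (e.g. a failed validation)."""
    for stage in TransferStage:
        package = handlers[stage](package)
    return package

# Illustrative handlers that just record which stage touched the package.
handlers = {s: (lambda s: lambda pkg: pkg + [s.name])(s) for s in TransferStage}
print(run_transfer([], handlers))
# → ['INGEST_PREPARE', 'INGEST', 'VALIDATION', 'PRESERVATION', 'ARCHIVAL_STORAGE']
```

Because `Enum` iterates in definition order, the ordering constraint of the model is enforced by the type itself rather than by each caller.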

User Centered Interface Design of Web-based Attention Testing Tools: Inhibition of Return(IOR) and Graphic UI (웹 기반 주의력 검사의 사용자 인터페이스 설계: 회귀억제 과제와 그래픽 UI를 중심으로)

  • Kwahk, Ji-Eun; Kwak, Ho-Wan
    • Korean Journal of Cognitive Science / v.19 no.4 / pp.331-367 / 2008
  • This study aims to validate the web-based neuropsychological testing tool developed by Kwak (2007) and to suggest solutions to potential problems that can deteriorate its validity. When it targets a wider range of subjects, a web-based neuropsychological testing tool is challenged by high drop-out rates, lack of motivation, lack of interactivity with the experimenter, fear of computers, etc. As a possible solution to these threats, this study redesigns the user interface of a web-based attention testing tool through three phases of study. In Study 1, an extensive analysis of Kwak's (2007) attention testing tool was conducted to identify potential usability problems. The Heuristic Walkthrough (HW) method was used by three usability experts to review various design features, and many problems were found throughout the tool. The findings concluded that the design of the instructions, user information survey forms, task screens, results screens, etc. did not conform to the needs of users and their tasks. In Study 2, 11 guidelines for the design of web-based attention testing tools were established based on the findings from Study 1. The guidelines were used to optimize the design and organization of the tool so that it fits the user and task needs. The resulting new design alternative was then implemented as a working prototype in the Java programming language. In Study 3, a comparative study was conducted to demonstrate the superiority of the new design of the attention testing tool (the graphic style tool) over the existing design (the text style tool). A total of 60 subjects participated in user testing sessions in which their error frequency, error patterns, and subjective satisfaction were measured through performance observation and questionnaires. Through the task performance measurement, a number of user errors of various types were observed in the existing text style tool.
The questionnaire results were also in support of the new graphic style tool: users rated it higher than the existing text style tool in terms of overall satisfaction, screen design, terms and system information, ease of learning, and system performance.


Treatment Efficiencies and Decomposition Velocities of Pollutants in Constructed Wetlands for Treating Hydroponic Wastewater (인공습지시스템을 이용한 폐양액처리장에서 오염물질의 정화효율 및 오염물질 분해속도)

  • Park, Jong-Hwan; Seo, Dong-Cheol; Kim, Ah-Reum; Kim, Sung-Hun; Lee, Choong-Heon; Lee, Seong-Tea; Jeong, Tae-Uk; Lee, Sang-Won; Ha, Yeong-Rae; Cho, Ju-Sik; Heo, Jong-Soo
    • Korean Journal of Soil Science and Fertilizer / v.44 no.5 / pp.937-943 / 2011
  • In order to develop constructed wetlands for treating hydroponic wastewater in greenhouses, removal efficiencies and decomposition velocities of pollutants in a constructed wetland were investigated. Removal rates of BOD, COD, SS, T-N and T-P in the effluent of the constructed wetlands were 88%, 79%, 92%, 64% and 92%, respectively. The decomposition velocities (K; day⁻¹) of pollutants in the 1st HF bed of the constructed wetlands decreased in the order SS (0.54 day⁻¹) > BOD (0.39 day⁻¹) > COD (0.27 day⁻¹) > T-P (0.26 day⁻¹) > T-N (0.06 day⁻¹); that is, in the 1st HF bed, SS decomposed more rapidly than BOD, COD, T-N and T-P. The decomposition velocities (K; day⁻¹) of pollutants in the 2nd HF bed decreased in the order T-P (0.52 day⁻¹) > BOD (0.28 day⁻¹) > COD (0.15 day⁻¹) > SS (0.10 day⁻¹) > T-N (0.06 day⁻¹); that is, in the 2nd HF bed, T-P decomposed more rapidly than BOD, COD, SS and T-N.
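The K values above are first-order rate constants, so the fraction of a pollutant remaining after t days is exp(-Kt). A minimal sketch, using the 1st HF bed constants from the abstract (the 3-day horizon is an arbitrary example, not a figure from the study):

```python
import math

def remaining_fraction(k_per_day, t_days):
    """First-order decay: fraction of the initial load left after t days."""
    return math.exp(-k_per_day * t_days)

def half_life_days(k_per_day):
    """Time for the load to halve under first-order kinetics."""
    return math.log(2) / k_per_day

# Rate constants reported for the 1st HF bed (day^-1)
k_1st = {"SS": 0.54, "BOD": 0.39, "COD": 0.27, "T-P": 0.26, "T-N": 0.06}
for name, k in sorted(k_1st.items(), key=lambda kv: -kv[1]):
    print(f"{name}: half-life {half_life_days(k):.1f} d, "
          f"{remaining_fraction(k, 3):.0%} remaining after 3 d")
```

The large spread between K for SS (0.54) and T-N (0.06) translates into half-lives differing by roughly a factor of nine, which is why T-N removal lags in both beds.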

A Study on Survey of Improvement of Non Face to Face Education focused on Professor of Disaster Management Field in COVID-19 (코로나19 상황에서 재난분야 교수자를 대상으로 한 비대면 교육의 개선에 관한 조사연구)

  • Park, Jin Chan; Beck, Min Ho
    • Journal of the Society of Disaster Information / v.17 no.3 / pp.640-654 / 2021
  • Purpose: Normal education operation was difficult in the national disaster situation caused by COVID-19. Non-face-to-face education can be an alternative to face-to-face education, but it is not easy to provide the same level of education. In this study, professors in the disaster management field identify problems that can occur in the overall operation and progress of non-face-to-face education, and ways to improve it are sought. Method: Non-face-to-face real-time education was categorized into pre-class, in-class, post-class, and evaluation, and case studies were conducted through the professors' cases. Result & Conclusion: The results of the survey are as follows. First, for pre-class, it is worth considering providing a non-face-to-face educational place for professors, and prior education on non-face-to-face educational equipment and systems is required. In addition, it seems necessary to ensure that education operates smoothly by giving sufficient notice of classes, and to make efforts to develop non-face-to-face education programs for practical classes. Second, communication between professor and learners, and among learners, can be an important factor during non-face-to-face classes. To this end, it is necessary to actively utilize debate-type classes to lead learners to participate in education and to enhance the educational effect through constant interaction. Third, for non-face-to-face post-classes, policies on the protection of privacy with respect to video recordings should be prepared in advance to protect the privacy of professors, and copyright infringement on educational materials should also be considered. In addition, it is necessary to devise various methods for fair and objective evaluation.
According to the results of the interviews, regarding content, which is a component of non-face-to-face education, detailed plans on the number of students, contents, and curriculum suitable for non-face-to-face education are required from the design stage. Regarding the system, it is necessary to give professors enough time to fully learn and familiarize themselves with the functions of the program through pre-education before they give non-face-to-face classes, and to operate a helpdesk that can thoroughly carry out pre-checks before non-face-to-face education and quickly resolve problems when they occur.

Deep Learning OCR based document processing platform and its application in financial domain (금융 특화 딥러닝 광학문자인식 기반 문서 처리 플랫폼 구축 및 금융권 내 활용)

  • Dongyoung Kim; Doohyung Kim; Myungsung Kwak; Hyunsoo Son; Dongwon Sohn; Mingi Lim; Yeji Shin; Hyeonjung Lee; Chandong Park; Mihyang Kim; Dongwon Choi
    • Journal of Intelligence and Information Systems / v.29 no.1 / pp.143-174 / 2023
  • With the development of deep learning technologies, Artificial Intelligence powered Optical Character Recognition (AI-OCR) has evolved to read multiple languages accurately from various forms of images. For the financial industry, where a large number of diverse documents are processed manually, the potential for using AI-OCR is great. In this study, we present the configuration and design of an AI-OCR modality for use in the financial industry and discuss the platform construction with application cases. Since the use of financial domain data is prohibited under the Personal Information Protection Act, we developed a deep learning-based data generation approach and used it to train the AI-OCR models. The AI-OCR models are trained for image preprocessing, text recognition, and language processing, and are configured as a microservice-architected platform to process a broad variety of documents. We have demonstrated the AI-OCR platform by applying it to the financial domain tasks of document sorting, document verification, and typing assistance. The demonstrations confirm increased work efficiency and convenience.
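The three model roles named above (image preprocessing, text recognition, language processing) suggest a staged pipeline. The sketch below is a hypothetical mock of that decomposition, not the platform's actual API: each function is a trivial stand-in for what would be a deployed microservice.

```python
def preprocess(image):
    # stand-in for deskew/binarize/crop; here we just wrap the payload
    return {"image": image, "preprocessed": True}

def recognize(payload):
    # stand-in for the deep-learning text recognizer
    payload["raw_text"] = "ACC0UNT N0: 123-456"  # raw OCR often confuses O and 0
    return payload

def postprocess(payload):
    # language processing: domain-aware correction of common confusions
    payload["text"] = payload["raw_text"].replace("0UNT", "OUNT").replace("N0:", "NO:")
    return payload

def ocr_pipeline(image):
    # the three microservice roles chained in order
    payload = preprocess(image)
    payload = recognize(payload)
    payload = postprocess(payload)
    return payload["text"]

print(ocr_pipeline("scan.png"))  # → ACCOUNT NO: 123-456
```

Splitting the roles this way is what lets a microservice platform retrain or scale one stage (say, the recognizer) without touching the others.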

A Study of Guidelines for Genetic Counseling in Preimplantation Genetic Diagnosis (PGD) (착상전 유전진단을 위한 유전상담 현황과 지침개발을 위한 기초 연구)

  • Kim, Min-Jee; Lee, Hyoung-Song; Kang, Inn-Soo; Jeong, Seon-Yong; Kim, Hyon-J.
    • Journal of Genetic Medicine / v.7 no.2 / pp.125-132 / 2010
  • Purpose: Preimplantation genetic diagnosis (PGD), also known as embryo screening, is a pre-pregnancy technique used to identify genetic defects in embryos created through in vitro fertilization. PGD is considered a means of prenatal diagnosis of genetic abnormalities. It is used when one or both genetic parents have a known genetic abnormality; testing is performed on an embryo to determine whether it also carries the abnormality. The main advantage of PGD is the avoidance of selective pregnancy termination, as it imparts a high likelihood that the baby will be free of the disease under consideration. The application of PGD to genetic practice, reproductive medicine, and genetic counseling is becoming a key component of fertility practice because of the need to develop a custom PGD design for each couple. Materials and Methods: In this study, a survey on the contents of genetic counseling in PGD was carried out via direct contact or e-mail with patients and specialists who had experienced PGD during the three months from February to April 2010. Results: A total of 91 persons responded to the survey: 60 patients (49 with a chromosomal disorder and 11 with a single gene disorder) and 31 PGD specialists. Analysis of the survey results revealed that all respondents were well aware of the importance of genetic counseling in all steps of PGD, including planning, operation, and follow-up. The patient group responded that the possibility of unexpected results (51.7%), genetic risk assessment and recurrence risk (46.7%), reproduction options (46.7%), the procedure and limitations of PGD (43.3%), and information on PGD technology (35.0%) should be included in genetic counseling. Approximately 96.7% of specialists replied that a non-M.D. genetic counselor is necessary for effective and systematic genetic counseling in PGD, because it is difficult for physicians to offer satisfactory information to patients due to lack of counseling time and of specific knowledge of the disorders. Conclusions: The survey provides important insight into the present overall situation of genetic counseling for PGD in Korea. The results demonstrate a general awareness that genetic counseling is essential for PGD, suggesting that appropriate genetic counseling may play an important role in the success of PGD. The establishment of genetic counseling guidelines for PGD may contribute to better planning and management strategies for PGD.

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae; Oh, Wonseok; Lim, Geunwon; Cha, Eunwoo; Shin, Minyoung; Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.1-23 / 2018
  • Since the start of the 21st century, various high-quality services have emerged with the growth of the internet and information and communication technologies. In particular, the E-commerce industry, in which Amazon and eBay stand out, is growing explosively. As E-commerce grows, customers can easily find what they want to buy while comparing various products, because more products are registered at online shopping malls. However, a problem has arisen with this growth: as too many products are registered, it has become difficult for customers to find what they really need in the flood of products. When customers search for desired products with a generalized keyword, too many products come up as results. Conversely, few products are found if customers type in product details, because concrete product attributes are rarely registered. In this situation, automatically recognizing texts in images with a machine can be a solution. Because the bulk of product details are written in catalogs in image format, most product information cannot be found with text inputs in the current text-based searching system. If the information in images can be converted to text format, customers can search for products by product details, which makes shopping more convenient. There are various existing OCR (Optical Character Recognition) programs which can recognize texts in images, but they are hard to apply to catalogs because they have problems recognizing texts in certain circumstances, for example when texts are not big enough or fonts are not consistent. Therefore, this research suggests a way to recognize keywords in catalogs with deep learning algorithms, which have been the state of the art in image recognition since the 2010s.
Single Shot Multibox Detector (SSD), a well-credited model for object-detection performance, can be used with its structure redesigned to account for the differences between text and objects. But the SSD model needs a lot of labeled training data, because deep learning algorithms of this kind must be trained by supervised learning. To collect data, we could manually label the location and classification information of texts in catalogs, but manual collection raises many problems. Some keywords would be missed because humans make mistakes while labelling training data, and collection becomes too time-consuming considering the scale of data needed, or too costly if many workers are hired to shorten the time. Furthermore, if specific keywords need to be trained, finding images that contain those words is also difficult. To solve the data issue, this research developed a program which creates training data automatically. The program makes images containing various keywords and pictures, like a catalog, and saves the location information of the keywords at the same time. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves: the SSD model recorded an 81.99% recognition rate with 20,000 data samples created by the program. Moreover, this research tested the efficiency of the SSD model under different data conditions to analyze which features of the data influence the performance of recognizing texts in images. As a result, the number of labeled keywords, the addition of overlapping keyword labels, the existence of unlabeled keywords, the spaces among keywords, and the differences among background images are all related to the performance of the SSD model. This test can lead to performance improvements of the SSD model, or of other deep-learning-based text-recognizing machines, through high-quality data.
The SSD model redesigned to recognize texts in images and the program developed for creating training data are expected to contribute to the improvement of searching systems in E-commerce. Suppliers can spend less time registering keywords for products, and customers can search for products by the product details written in the catalog.
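The core of such a generator is placing keywords at random non-overlapping positions and emitting SSD-style box labels. The sketch below is a guess at those mechanics; the paper's actual program, fonts, background handling, and the keyword list are not specified in the abstract, so the character-cell sizes and words here are invented.

```python
import random

def make_labels(keywords, img_w=300, img_h=300, char_w=8, char_h=16, seed=None):
    """Place each keyword at a random non-overlapping position and return
    SSD-style labels: (keyword, x_min, y_min, x_max, y_max)."""
    rng = random.Random(seed)
    labels = []

    def overlaps(a, b):
        # two boxes overlap unless one is entirely left of / above the other
        return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

    for word in keywords:
        w = char_w * len(word)          # crude fixed-width text extent
        for _ in range(100):            # retry until a free spot is found
            x = rng.randint(0, img_w - w)
            y = rng.randint(0, img_h - char_h)
            box = (x, y, x + w, y + char_h)
            if not any(overlaps(box, lab[1:]) for lab in labels):
                labels.append((word, *box))
                break
    return labels

labels = make_labels(["cotton", "polyester", "machine-wash"], seed=7)
print(labels)
```

A real generator would additionally render the text onto varied background images; the point here is that the ground-truth boxes are known exactly at creation time, which is what removes the manual-labelling bottleneck.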

Geochemical Equilibria and Kinetics of the Formation of Brown-Colored Suspended/Precipitated Matter in Groundwater: Suggestion to Proper Pumping and Turbidity Treatment Methods (지하수내 갈색 부유/침전 물질의 생성 반응에 관한 평형 및 반응속도론적 연구: 적정 양수 기법 및 탁도 제거 방안에 대한 제안)

  • 채기탁; 윤성택; 염승준; 김남진; 민중혁
    • Journal of the Korean Society of Groundwater Environment / v.7 no.3 / pp.103-115 / 2000
  • The formation of brown-colored precipitates is one of the serious problems frequently encountered in the development and supply of groundwater in Korea, because the water then exceeds the drinking water standard in terms of color, taste, turbidity and dissolved iron concentration, and it often results in scaling problems within the water supply system. In groundwaters from the Pajoo area, brown precipitates typically form within a few hours after pumping-out. In this paper we examine the process of the brown precipitates' formation using equilibrium thermodynamic and kinetic approaches, in order to understand the origin and geochemical pathway of the generation of turbidity in groundwater. The results of this study are used to suggest not only a proper pumping technique to minimize the formation of precipitates but also an optimal design of water treatment methods to improve the water quality. The bed-rock groundwater in the Pajoo area belongs to the Ca-HCO₃ type, evolved through water/rock (gneiss) interaction. Based on SEM-EDS and XRD analyses, the precipitates are identified as amorphous, Fe-bearing oxides or hydroxides. Multi-step filtration with pore sizes of 6, 4, 1, 0.45 and 0.2 µm shows that the precipitates mostly fall in the colloidal size range (1 to 0.45 µm) but are concentrated (about 81%) in the range of 1 to 6 µm in terms of mass (weight) distribution. Large amounts of dissolved iron possibly originated from the dissolution of clinochlore in cataclasite, which contains high amounts of Fe (up to 3 wt.%). The calculation of saturation indices (using the computer code PHREEQC), as well as the examination of pH-Eh stability relations, also indicates that the final precipitates are Fe-oxy-hydroxides formed by the change of water chemistry (mainly oxidation) due to exposure to oxygen during the pumping-out of Fe(II)-bearing, reduced groundwater.
After pumping-out, the groundwater shows progressive decreases in pH, DO and alkalinity with elapsed time, whereas turbidity increases and then decreases with time. The decrease of dissolved Fe concentration as a function of elapsed time after pumping-out is expressed by the regression equation Fe(II) = 10.1 exp(-0.0009t). The oxidation reaction due to the influx of free oxygen during the pumping and storage of groundwater results in the formation of brown precipitates, which depends on time, PO₂ and pH. In order to obtain drinkable water quality, therefore, the precipitates should be removed by filtering after stepwise storage and aeration in tanks of sufficient volume for sufficient time. The particle size distribution data also suggest that stepwise filtration would be cost-effective. To minimize scaling within wells, continuous (if possible) pumping within the optimum pumping rate is recommended, because this technique is most effective for minimizing mixing between deep Fe(II)-rich water and shallow O₂-rich water. The simultaneous pumping of shallow O₂-rich water in different wells is also recommended.
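The reported regression is a first-order decay, so the storage/aeration time needed to reach a target Fe(II) level follows directly by inverting it. A small sketch: the constants 10.1 and 0.0009 are from the study, but the 0.3 mg/L target and the time unit are my assumptions for illustration (the abstract does not restate the unit of t).

```python
import math

A, K = 10.1, 0.0009          # regression constants: Fe(II) = A * exp(-K * t)

def fe2(t):
    """Dissolved Fe(II) concentration at elapsed time t after pumping-out."""
    return A * math.exp(-K * t)

def time_to_reach(c_target):
    """Elapsed time for Fe(II) to decay to c_target (same units as A)."""
    return math.log(A / c_target) / K

# e.g. time for Fe(II) to halve, and to reach a hypothetical 0.3 mg/L limit
print(round(time_to_reach(A / 2)))   # half-life in the regression's time unit
print(round(time_to_reach(0.3)))     # time to the assumed target level
```

Note how slow the decay is relative to its half-life (ln 2 / 0.0009): reaching a low target takes several half-lives, which is consistent with the paper's recommendation of stepwise storage and aeration in tanks of sufficient volume for sufficient time.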


Economics and Ground Cover Growth Characteristics of a New Method of Shallow Soil Artificial Foundation Planting (저토심 인공지반 녹화공법의 경제성 및 도입 가능한 지피식물의 생육특성)

  • Choi, Jin-Woo; Kim, Hag-Kee; Lee, Kyong-Jae; Kang, Hyun-Kyung
    • Journal of the Korean Institute of Landscape Architecture / v.37 no.5 / pp.98-108 / 2009
  • The purpose of this study is to analyze the economics of a shallow-soil rooftop garden method and the growth appropriateness of native and imported ground cover plants under it. The new shallow-soil rooftop gardening method uses a total of 13cm of soil thickness, comprising 4.5cm of top soil on a 7.5cm rock-wool mat stacked onto a 1cm roll-type draining plate. Within the design price standard, the total construction cost is 89,433 won/m² for the SEDUM BLOCK method and 92,550 won/m² for the DAKU method, whereas the construction cost of the shallow-soil artificial foundation method is 45,000 won/m²; the new method is thus about 50% less expensive than the existing shallow-soil rooftop gardening methods. An experiment was executed on the rooftop of the Korea National Housing Corporation to verify the validity of the shallow-soil artificial foundation planting, with sample plants from both imported and native ground covers. The growing plants were surveyed for cover rate in each plant class, live weight and dry weight. The native ground cover plants Sedum kamtschaticum, Sedum middendorffianum, Allium senescens, Sedum sarmentosum, Aquilegia buergeriana, and Caryopteris incana increased their cover rate, live weight and dry weight under the shallow-soil artificial foundation method. Among the imported cover plants, Sedum spurium and Sedum reflexum increased in cover rate and improved in growth condition, although some species needed weed maintenance. The examination of the less expensive shallow-soil artificial foundation method and the growth analysis found that such rooftop gardens are a low-cost option with good plant growth. This result shows the new method can contribute to the proliferation of rooftop gardens in urban settings.
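The cost comparison above is simple arithmetic; a sketch verifying the roughly 50% saving (the per-m² figures are from the study, the method labels are mine):

```python
# Construction costs per m^2 in KRW, as reported
costs = {
    "SEDUM BLOCK": 89_433,
    "DAKU": 92_550,
    "shallow-soil artificial foundation": 45_000,
}

def saving_pct(new, old):
    """Percentage saving of method `new` relative to method `old`."""
    return (1 - costs[new] / costs[old]) * 100

for old in ("SEDUM BLOCK", "DAKU"):
    pct = saving_pct("shallow-soil artificial foundation", old)
    print(f"vs {old}: {pct:.1f}% cheaper")
```

Against either baseline the saving rounds to the "about 50%" claimed in the abstract.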

Design and Implementation of Game Server using the Efficient Load Balancing Technology based on CPU Utilization (게임서버의 CPU 사용율 기반 효율적인 부하균등화 기술의 설계 및 구현)

  • Myung, Won-Shig; Han, Jun-Tak
    • Journal of Korea Game Society / v.4 no.4 / pp.11-18 / 2004
  • On-line games in the past were played by only two persons exchanging data over one-to-one connections, whereas recent ones (e.g. MMORPGs: Massively Multi-player Online Role-Playing Games) enable tens of thousands of people to be connected simultaneously. Korea in particular has established an excellent network infrastructure: almost every household has high-speed Internet access, made possible in part by a high population density that accelerated the build-out. However, the rapid increase in the use of on-line games can lead to surging traffic exceeding the limited Internet communication capacity, so that connections to the games become unstable or servers fail. Expanding the servers could solve this problem, but that measure is very costly. To deal with this problem, the present study proposes a load distribution technology that connects, in the form of local clustering, the game servers divided by the contents used in each on-line game, reduces the loads of specific servers using a load balancer, and enhances server performance for efficient operation. In this paper, a cluster system is proposed in which each game server serves different contents and loads are distributed efficiently using game server resource information such as CPU utilization. Game servers having different contents are mutually connected and managed with a network file system to maintain the information consistency required to support resource information updates, deletions, and additions. Simulation studies show that our method performs better than other traditional methods: in terms of response time, our method shows shorter latency than RR (Round Robin) and LC (Least Connection) by about 12% and 10%, respectively.
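The selection rule at the heart of the proposal (dispatch to the server with the lowest CPU utilization, rather than cycling or counting connections) can be sketched as follows. The server names, utilization figures, and connection counts are invented for illustration; the paper's actual balancer and NFS-backed state sharing are not reproduced here.

```python
import itertools

# Hypothetical cluster state: content-specific game servers and their loads
cpu_util = {"quest-srv": 0.72, "battle-srv": 0.31, "trade-srv": 0.55}
conn_count = {"quest-srv": 120, "battle-srv": 180, "trade-srv": 95}

def pick_by_cpu(util):
    """Proposed policy: route the new player to the least-CPU-loaded server."""
    return min(util, key=util.get)

def pick_by_connections(conns):
    """LC (Least Connection) baseline: fewest active connections wins."""
    return min(conns, key=conns.get)

round_robin = itertools.cycle(cpu_util)   # RR baseline: ignores load entirely

print(pick_by_cpu(cpu_util))              # → battle-srv
print(pick_by_connections(conn_count))    # → trade-srv
```

Note how the baselines can disagree with the CPU-based choice: LC favors trade-srv even though battle-srv is the least loaded by CPU, which is the kind of mismatch the reported 10-12% latency gap reflects.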
