• Title/Summary/Keyword: handling information


Status of the Constitutional Court Records Management and Improvement (헌법재판소 기록관리현황과 개선방안)

  • Lee, Cheol-Hwan; Lee, Young-Hak
    • The Korean Journal of Archival Studies / no.38 / pp.75-124 / 2013
  • This study examines the special value of the records of the Constitutional Court, characterizes them, surveys their present state, and suggests measures for improving their management. First, I define the concept and scope of Constitutional Court records, identify their types and distinctive features, and on that basis characterize the records. Put simply, Constitutional Court records are essential records, indispensable to the Court's documentation strategy, and they are particularly valuable for rooting democracy and guaranteeing human rights in a country. Because they deal with nationally important events, their context also reaches the records of other constitutional institutions, administrative agencies, and so on. Second, I analyze the present state of records management: at the processing-division stage, the creation, registration, and classification of records; at the archives-repository stage, their preservation and use. On this basis, I identify problems in how the records are managed and suggest corresponding improvements across four areas: infrastructure, process, public access, and use. For infrastructure, I point out problems in systems, facilities, and staffing, and present ways to improve them. For process, I focus on classification and appraisal: in classification, I suggest restructuring the classification of trial records; in appraisal, I argue for reconsidering how retention periods are assigned to administrative records, since a single case file can currently carry several different retention periods, which threatens to fragment the context of the case. For public access, I point out problems in information disclosure and propose a comprehensive information-disclosure law applicable to all kinds of records. For use, I argue for expanding the usability and scope of the records through cooperation with related institutions.

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, it is being considered as a way to solve problems in various fields. Deep learning is known to perform especially well on unstructured data such as text, sound, and images, and many studies have demonstrated its effectiveness. Owing to the remarkable progress of text and image deep learning, interest in image captioning and its applications is growing rapidly. Image captioning automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. Despite its high entry barrier, since analysts must process both image and text data, image captioning has established itself as a key field of AI research thanks to its wide applicability, and many studies have sought to improve its performance in various respects. Recent work attempts to generate advanced captions that not only describe an image accurately but also convey the information it contains more sophisticatedly. Despite these efforts, however, it is hard to find research that interprets images from the perspective of domain experts rather than that of the general public. Even for the same image, the parts of interest differ with the professional field of the viewer, and the way the image is interpreted and expressed differs with the level of expertise. The public tends to perceive an image holistically, identifying its constituent objects and their relationships, whereas domain experts focus on the specific elements needed to interpret the image in light of their expertise. The meaningful parts of an image thus differ with the viewer's perspective even for the same image, and image captioning should reflect this. Therefore, in this study we propose a method for generating domain-specialized captions by exploiting the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the domain expertise is transplanted through transfer learning on a small amount of expertise data. Naively applying transfer learning to expertise data, however, raises another problem: learning simultaneously from captions of various characteristics can cause so-called "inter-observation interference," which hinders pure learning of each characteristic's point of view. With vast training data, most of this interference is self-purified and barely affects the result; with fine-tuning on a small dataset, its impact can be relatively large. To solve this problem, we propose a novel "character-independent transfer learning" that performs transfer learning independently for each characteristic.
To confirm the feasibility of the proposed methodology, we ran experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. In addition, with the advice of an art therapist, about 300 image/expertise-caption pairs were created and used for the expertise-transplantation experiments. The experiments confirmed that captions generated by the proposed methodology reflect the implanted expertise, whereas captions generated from general data alone contain much content irrelevant to expert interpretation. In this paper we propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized in a specific domain. By applying the proposed methodology to expertise transplantation in various fields, we expect future research to address the scarcity of expertise data and to further improve image-captioning performance.
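
As a rough illustration of the character-independent transfer learning described above, the following Python sketch clones one pre-trained captioning model and fine-tunes each clone on a single caption characteristic, so small-data fine-tuning on one style cannot interfere with another. Everything here (ToyCaptioner, the expertise batches, the hyperparameters) is a hypothetical stand-in, not the authors' code:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCaptioner(nn.Module):
    """Stand-in for a captioning model pre-trained on general captions."""
    def __init__(self, feat_dim=64, vocab_size=100):
        super().__init__()
        self.head = nn.Linear(feat_dim, vocab_size)

    def forward(self, feats, tokens):
        # Returns a language-modeling-style loss for (feature, token) pairs.
        return F.cross_entropy(self.head(feats), tokens)

def fine_tune(model, batches, epochs=3, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for feats, tokens in batches:
            loss = model(feats, tokens)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

# Pre-training on general data (MSCOCO in the paper) is stood in for here
# by a freshly built toy model; the point is what happens afterwards.
pretrained = ToyCaptioner()

# Tiny fake "expertise caption" sets, one per caption characteristic.
expert_sets = {
    name: [(torch.randn(8, 64), torch.randint(0, 100, (8,)))]
    for name in ("emotion", "composition")
}

# Character-independent transfer learning: each characteristic gets its
# own copy of the pre-trained weights and is fine-tuned in isolation.
experts = {name: fine_tune(copy.deepcopy(pretrained), batches)
           for name, batches in expert_sets.items()}
```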

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil; Ko, Eunjung; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.141-166 / 2019
  • Recently, channels such as social media and SNS create enormous amounts of data, and the share of unstructured text data has grown geometrically. Since it is infeasible to read all of it, rapid access to such data and grasping the key points of a text have become important, and many text-summarization studies have been proposed for handling and using huge volumes of text. In particular, many recent methods use machine learning and artificial intelligence algorithms to generate summaries objectively and effectively, so-called "automatic summarization." However, most summarization methods proposed to date construct the summary around the most frequent contents of the original documents, so minor subjects mentioned less often tend to be omitted. Summaries containing only the major subject are biased and lose information, making it hard to ascertain every subject the documents contain. To avoid this bias, one can summarize with a view to balance among the topics so that all subjects are covered, but an unbalanced distribution across subjects still remains. To keep subjects balanced in the summary, it is necessary to consider the proportion of each subject in the original documents and to allocate the summary's portions across subjects evenly, so that sentences on minor subjects are also sufficiently included. In this study, we propose a "subject-balanced" text-summarization method that balances all subjects and minimizes the omission of low-frequency subjects. The method rests on two summary-evaluation criteria: "completeness," meaning the summary should cover the contents of the original documents fully, and "succinctness," meaning the summary should contain minimal internal duplication. The proposed method has three phases. The first phase constructs subject-term dictionaries. Topic modeling is used to calculate topic-term weights indicating how strongly each term relates to each topic; from these weights, highly related terms per topic can be identified, and the subjects of the documents are found from topics composed of terms with similar meanings. A few terms that represent each subject well, called "seed terms," are then selected. Because seed terms alone are too few to describe a subject adequately, the dictionary is expanded with terms similar to the seeds: Word2Vec is used for word expansion, and cosine similarity between the resulting word vectors measures the relationship between terms, with higher similarity indicating a stronger relationship. Terms with high similarity to each subject's seed terms are selected and filtered, completing the subject dictionary. The second phase allocates a subject to every sentence of the original documents. To grasp the contents of the sentences, frequency analysis is first conducted on the dictionary terms, and TF-IDF weights per subject are computed to estimate how much each sentence says about each subject. Because TF-IDF weights can grow without bound, the weights of every subject are normalized to values between 0 and 1, and each sentence is assigned to the subject with its maximum TF-IDF weight, yielding sentence groups per subject. The last phase generates the summary. Sen2Vec is used to measure similarity between the subject sentences, forming a similarity matrix, and sentences are selected iteratively to produce a summary that covers the contents of the original documents fully while minimizing internal duplication. For evaluation, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews to generate summaries. Comparison between the proposed method's summary and a frequency-based summary verified that the proposed method better preserves the balance of subjects that the documents originally have.
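
The subject-allocation phase described above can be illustrated with a small Python sketch. The sentences, the subject dictionaries, and the min-max normalization are illustrative assumptions; the paper's full pipeline (topic modeling, seed terms, Word2Vec expansion) is stood in for here by a hand-made dictionary:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

sentences = [
    "the room was clean and the bed comfortable",
    "breakfast buffet had a great variety of food",
    "staff were friendly and check in was fast",
]
subject_dict = {                     # hypothetical seed + expanded terms
    "room":    {"room", "bed", "clean"},
    "food":    {"breakfast", "buffet", "food"},
    "service": {"staff", "friendly", "check"},
}

vec = TfidfVectorizer()
tfidf = vec.fit_transform(sentences).toarray()
vocab = vec.vocabulary_

# Sentence-by-subject score: summed TF-IDF weight of dictionary terms.
scores = np.zeros((len(sentences), len(subject_dict)))
for j, terms in enumerate(subject_dict.values()):
    cols = [vocab[t] for t in terms if t in vocab]
    scores[:, j] = tfidf[:, cols].sum(axis=1)

# Normalize each subject's weights to [0, 1], then assign by argmax.
rng = scores.max(axis=0) - scores.min(axis=0)
norm = (scores - scores.min(axis=0)) / np.where(rng == 0, 1, rng)
groups = {s: [] for s in subject_dict}
for i, j in enumerate(norm.argmax(axis=1)):
    groups[list(subject_dict)[j]].append(sentences[i])
print(groups)
```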

GWB: An integrated software system for Managing and Analyzing Genomic Sequences (GWB: 유전자 서열 데이터의 관리와 분석을 위한 통합 소프트웨어 시스템)

  • Kim In-Cheol; Jin Hoon
    • Journal of Internet Computing and Services / v.5 no.5 / pp.1-15 / 2004
  • In this paper, we describe the design and implementation of GWB (Gene WorkBench), a web-based, integrated system for efficiently managing and analyzing genomic sequences. Most existing systems that handle genomic sequences rarely provide both management and analysis facilities. The analysis programs also tend to be standalone units that cover only one or a few of the required functions; moreover, they are scattered across the Internet and require different execution environments, so using them together demands much manual and format-conversion work, causing great inconvenience for life-science researchers. To overcome these problems and support genomic research more effectively, GWB integrates management and analysis facilities into a single system. The most important design issues are how to integrate many different analysis programs into a single software system and how to supply the data and databases of the different formats these programs require. To address these issues, GWB integrates the analysis programs through common input/output interfaces called wrappers, proposes a common format for genomic-sequence data, organizes local databases as a relational database plus an indexed sequential file, and provides facilities for converting data among several well-known formats and exporting local databases into XML files.
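
The wrapper idea described above might look roughly like the following Python sketch. AnalysisWrapper, GCContentWrapper, and the common FASTA-in/dict-out contract are hypothetical names chosen for illustration, not the actual GWB interfaces:

```python
from abc import ABC, abstractmethod

class AnalysisWrapper(ABC):
    """Common input/output interface around one external analysis program."""

    @abstractmethod
    def to_native_input(self, fasta: str) -> str:
        """Convert the workbench's common format to the tool's own input."""

    @abstractmethod
    def parse_output(self, raw: str) -> dict:
        """Convert the tool's raw output back to a common result format."""

    def run(self, fasta: str) -> dict:
        # A real wrapper would invoke the external program here, e.g. via
        # subprocess.run([...], input=..., capture_output=True, text=True).
        return self.parse_output(self.to_native_input(fasta))

class GCContentWrapper(AnalysisWrapper):
    """Toy 'analysis program': GC content of a FASTA record."""

    def to_native_input(self, fasta: str) -> str:
        return "".join(line for line in fasta.splitlines()
                       if not line.startswith(">"))

    def parse_output(self, seq: str) -> dict:
        gc = sum(seq.upper().count(base) for base in "GC")
        return {"gc_percent": 100.0 * gc / max(len(seq), 1)}

print(GCContentWrapper().run(">seq1\nATGCGC"))  # {'gc_percent': 66.66...}
```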


Study on the screening method for determination of heavy metals in cellular phone for the restrictions on the use of certain hazardous substances (RoHS) (유해물질 규제법(RoHS)에 따른 휴대폰 내의 중금속 함유량 측정을 위한 스크리닝법 연구)

  • Kim, Y.H.; Lee, J.S.; Lim, H.B.
    • Analytical Science and Technology / v.23 no.1 / pp.1-14 / 2010
  • Countries worldwide, including the EU and China, have adopted the Restriction of Hazardous Substances (RoHS) rules for all electronics. The IEC 62321 document published by the International Electrotechnical Commission (IEC) can conflict with existing standards in the market, whereas the Publicly Accessible Specification (PAS) for sampling published by IEC TC111 can be adopted as a complement. In this work, we sought a route to disassemble and disjoint a cellular-phone sample based on the PAS and compared the screening methods available in the market. A cellular phone produced in 2001, before the regulation took effect, was chosen for better detection. Although X-ray fluorescence (XRF) screening is fast and easy to operate, it probes the surface rather than the bulk and has limitations due to significant matrix interference and the lack of a variety of quantification standards, so XRF screening sometimes requires a supplementary tool. Several techniques available on the market of analytical instruments were demonstrated for screening a cellular phone: laser-ablation ICP-MS (LA-ICP-MS), energy-dispersive XRF (ED-XRF), and scanning electron microscopy with energy-dispersive X-ray analysis (SEM-EDX). For quantitative determination, graphite-furnace atomic absorption spectrometry (GF-AAS) was employed. For Pb in a battery, XRF and GF-AAS gave very different results, 0.92% and 5.67%, respectively. In addition, the standard deviation of XRF was extremely large, in the range of 23-168%, compared with 1.9-92.3% for LA-ICP-MS. In conclusion, GF-AAS was required for quantitative analysis even when EDX was used for screening, and this work showed that LA-ICP-MS can serve as a fast screening method for hazardous elements in electrical products.

Design and Implementation of the SSL Component based on CBD (CBD에 기반한 SSL 컴포넌트의 설계 및 구현)

  • Cho Eun-Ae; Moon Chang-Joo; Baik Doo-Kwon
    • Journal of KIISE: Computing Practices and Letters / v.12 no.3 / pp.192-207 / 2006
  • Today, the SSL protocol is a core part of various computing environments and security systems, but its rigidity in operation causes several problems. First, because SSL encrypts all data transferred between a server and a client, it places a considerable burden on the CPU, lowering the performance of security services in encrypted transactions. Second, SSL can be vulnerable to cryptanalysis because it uses a key with a fixed algorithm. Third, adding and using new cryptographic algorithms is difficult. Finally, the cryptography APIs (Application Program Interfaces) for SSL are hard for developers to learn. We therefore need a secure and convenient way to operate the SSL protocol and handle data efficiently. In this paper, we propose an SSL component designed and implemented with the CBD (Component-Based Development) concept to satisfy these requirements. The SSL component provides data-encryption services like the SSL protocol, as well as convenient APIs for developers unfamiliar with security. Because the component can be reused, it improves productivity and reduces development cost, and when new algorithms are added or existing ones are changed, it remains compatible and easy to integrate. The SSL component performs the SSL protocol service at the application layer. We first elicit the requirements and then design and implement the SSL component together with the confidentiality and integrity components that support it. All of these components are implemented as EJBs, which enables efficient data handling by encrypting/decrypting only selected data, and improves usability by letting the user choose the data and the mechanism. Our tests and evaluation show that the SSL component is more usable and efficient than the existing SSL protocol, because its processing-time growth rate is lower than the SSL protocol's.
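
The selective-encryption and pluggable-algorithm ideas might be sketched in Python as follows. The paper's implementation is EJB-based; ConfidentialityComponent and protect are hypothetical stand-ins, and Fernet is chosen here because it bundles integrity checking with confidentiality, loosely echoing the paper's paired confidentiality and integrity components:

```python
from cryptography.fernet import Fernet

class ConfidentialityComponent:
    """Pluggable cipher: swapping algorithms only touches this class."""
    def __init__(self, key: bytes | None = None):
        self._cipher = Fernet(key or Fernet.generate_key())

    def encrypt(self, data: bytes) -> bytes:
        return self._cipher.encrypt(data)

    def decrypt(self, token: bytes) -> bytes:
        return self._cipher.decrypt(token)

def protect(record: dict, fields: set, conf: ConfidentialityComponent) -> dict:
    """Encrypt only the selected fields instead of the whole payload,
    avoiding the encrypt-everything overhead criticized above."""
    return {k: conf.encrypt(v.encode()) if k in fields else v
            for k, v in record.items()}

conf = ConfidentialityComponent()
msg = protect({"user": "alice", "card": "1234-5678-9012"}, {"card"}, conf)
print(msg["user"], conf.decrypt(msg["card"]).decode())
```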

Biological Monitoring of Paint Handling Workers exposed to PAHs using Urinary 1-Hydroxypyrene (다핵방향족탄화수소류에 노출된 페인트 취급 근로자에서 요 중 1- Hydroxypyrene을 이용한 생물학적 모니터링)

  • Lee, Jong-Seong; Kim, Eun-A; Lee, Yong-Hag; Moon, Deog-Hwan; Kim, Kwang-Jong
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.15 no.2 / pp.124-134 / 2005
  • To investigate the effect of exposure to polynuclear aromatic hydrocarbons (PAHs), we measured airborne total PAHs as an external dose and urinary 1-hydroxypyrene (1-OHP) as an internal dose of PAHs exposure, and analyzed the relationship between urinary 1-OHP concentration and PAHs exposure. The study population comprised 44 workers in steel-pipe-coating and paint-manufacturing plants. Airborne PAHs were sampled during the survey day, and urine was sampled at the end of the shift. Personal information on age, body weight, height, employment duration, smoking habit, and alcohol consumption was obtained by a structured questionnaire. Airborne PAHs were analyzed by gas chromatography with a mass-selective detector, and urinary 1-OHP by high-performance liquid chromatography with an ultraviolet detector. For statistical estimation, the t-test, χ²-test, analysis of variance, correlation analysis, and regression analysis were performed with SPSS/PC (Windows version 10). The mean airborne total-PAHs concentration was 87.8±7.81 μg/m³. The mean for workers in steel-pipe coating using coal-tar enamel (526.5±2.85 μg/m³) was higher than that for workers in paint manufacturing using coal-tar paint (17.5±3.36 μg/m³). Likewise, the mean urinary 1-OHP concentration of steel-pipe-coating workers (51.63±3.144 μmol/mol creatinine) was higher than that of paint-manufacturing workers (2.33±4.709 μmol/mol creatinine), and smokers showed higher 1-OHP than non-smokers. Urinary 1-OHP correlated significantly with airborne PAHs (r=0.848, p<0.001), pyrene (r=0.859, p<0.001), and urinary cotinine (r=0.324, p<0.05). The regression equations between urinary 1-OHP in μg/g creatinine (C_1-OHP) and airborne PAHs (or pyrene) in μg/m³ (C_PAHs or C_pyrene) are: Log(C_1-OHP) = -0.650 + 0.889 × Log(C_PAHs), with R²=0.694 and n=38, p<0.001; and Log(C_1-OHP) = 1.087 + 0.707 × Log(C_pyrene), with R²=0.713 and n=38, p<0.001. In stepwise multiple regression on 1-OHP, the significant independent variables were total PAHs and urinary cotinine (adjusted R²=0.743, p<0.001). In this study, urinary 1-OHP correlated significantly with airborne PAHs and was an effective biomarker of airborne PAHs in the workplace, although it was influenced by non-occupational PAHs sources such as smoking.
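
As a worked example, the first reported regression can be evaluated directly in Python; the coefficients are taken from the abstract, and the prediction is illustrative only:

```python
import math

def predict_1ohp(c_pahs_ug_m3: float) -> float:
    """Urinary 1-OHP (ug/g creatinine) predicted from airborne PAHs (ug/m3)
    via Log(C_1-OHP) = -0.650 + 0.889 * Log(C_PAHs)."""
    return 10 ** (-0.650 + 0.889 * math.log10(c_pahs_ug_m3))

# At the reported mean airborne PAHs level of 87.8 ug/m3:
print(f"{predict_1ohp(87.8):.1f} ug/g creatinine")
```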

Nursing Professor's inspection and Status of Patient's Records and Informed Consent for Clinical Practice of Nursing Student in Korea and Japan (한·일 간호대학생의 임상실습 시 환자의 설명동의 및 기록관리와 지도실태)

  • Cho, Yooh-Yang; Kim, In-Hong; Yamamoto, Fujie; Yamasaki, Fujiko
    • Journal of agricultural medicine and community health / v.31 no.1 / pp.35-46 / 2006
  • Objectives: Recently, the management and protection of individual information in patients' medical and nursing records have become very important and call for a guideline. The purpose of this study was to investigate how nursing students use patients' nursing records during clinical practice, to examine patients' informed consent, and to describe the state of education and management concerning patients' nursing records. Methods: A mail survey was used, with data collected from September 24 to October 31, 2002. The subjects were 333 professors of adult nursing, pediatric nursing, and psychiatric nursing at 111 university nursing departments and nursing colleges; 103 professors responded, a response rate of 30.9%. Results: Of the respondents, 49.0% were at universities and 51.0% at nursing colleges. In clinical practice, 50.0% of the respondents obtained the patient's oral consent before assigning the patient to a student, and when the patient's own decision was difficult, 21.6% obtained informed consent from the patient's family. During clinical practice, 49.0% explained the practice and the nursing students' role to the patient, but only 7.8% explained the nursing records to the patient. 52.0% took records out of the hospital, and only 17.6% had standards for the patient's informed consent and for handling practice records. Depending on the item, 17.6% to 92.2% of the respondents provided education and management concerning patients' nursing records.


Proteomic Assessment of the Relevant Factors Affecting Pork Meat Quality Associated with Longissimus dorsi Muscles in Duroc Pigs

  • Cho, Jin Hyoung; Lee, Ra Ham; Jeon, Young-Joo; Park, Seon-Min; Shin, Jae-Cheon; Kim, Seok-Ho; Jeong, Jin Young; Kang, Hyun-sung; Choi, Nag-Jin; Seo, Kang Seok; Cho, Young Sik; Kim, MinSeok S.; Ko, Sungho; Seo, Jae-Min; Lee, Seung-Youp; Shim, Jung-Hyun; Chae, Jung-Il
    • Asian-Australasian Journal of Animal Sciences / v.29 no.11 / pp.1653-1663 / 2016
  • Meat quality is a complex trait influenced by many factors, including genetics, nutrition, feeding environment, animal handling, and their interactions. To elucidate the relevant factors affecting pork quality associated with oxidative stress and muscle development, we analyzed protein expression in high-quality longissimus dorsi muscles (HQLD) and low-quality longissimus dorsi muscles (LQLD) from Duroc pigs by liquid chromatography-tandem mass spectrometry (LC-MS/MS)-based proteomic analysis. Between HQLD (n = 20) and LQLD (n = 20) Duroc pigs, 24 differentially expressed proteins were identified by LC-MS/MS; 10 were highly expressed in HQLD and 14 in LQLD. The 24 proteins have putative functions in seven categories: catalytic activity (31%), ATPase activity (19%), oxidoreductase activity (13%), cytoskeletal protein binding (13%), actin binding (12%), calcium ion binding (6%), and structural constituent of muscle (6%). Silver-stained image analysis revealed significant differential expression of lactate dehydrogenase A (LDHA) between HQLD and LQLD Duroc pigs. LDHA was subjected to an in vitro myogenesis study under oxidative-stress conditions and to an LDH activity assay to verify its role in oxidative stress. No significant difference in LDHA mRNA expression was found between normal and oxidative-stress conditions; however, in the in vitro myogenesis model, LDH activity was significantly higher under oxidative stress than under normal conditions. Highly expressed LDHA was positively correlated with LQLD, and the increase in LDHA activity under oxidative stress was reduced by the antioxidant resveratrol. This paper emphasizes the importance of differential protein-expression patterns and their interactions for the development of meat-quality traits. Our proteome data provide valuable information on factors that might aid in regulating muscle development and improving meat quality in the longissimus dorsi muscles of Duroc pigs under oxidative-stress conditions.

A Study on the Effect of On-Dock System in Container Terminals - Focusing on GwangYang Port - (컨테이너터미널에서 On-Dock 시스템 효과분석에 관한 연구 - 광양항을 중심으로 -)

  • Cha, Sang-Hyun; Noh, Chang-Kyun
    • Journal of Navigation and Port Research / v.39 no.1 / pp.45-53 / 2015
  • These days, container terminals focus on increasing container throughput, and shipping lines choose a terminal by the key elements that determine how fast the overall operation can be performed, such as the terminal's location, discharging capability, and storage environment, along with other general shipping considerations. Whether a terminal can offer On-Dock service has therefore become an important factor in a shipping line's choice. In this paper, we propose algorithms for On-Dock system operations, covering empty-container exports and full containers, with the aim of reducing containers' gate-out time and finding effective terminal operation under a general On-Dock system. The algorithms apply several rules, such as container batch priority, gate-in/gate-out job priority, and empty-container yard-equipment allocation, based on both automatic and manual container-allocation schemes. Using this information, the system assigns the priority and yard location of gate-out containers. That is, by selecting an optimal allocation for each container, the terminal reduces the empty-container take-out time, minimizes unnecessary re-handling in the yard, and improves equipment efficiency. Operating results for the non-On-Dock and On-Dock systems were derived from take-out operation scenarios of the kind actually run at the Gwangyang container terminal: with the On-Dock system applied at Gwangyang, containers could be taken out about 5 minutes faster than without it. Furthermore, when export orders are managed for berths where On-Dock service is needed, full containers are allocated; for import cargoes, the D/O is managed, and after carry-out, return management, container damage, cleaning, repair, and control services are supported, so berth service can be strengthened and the container-terminal business can grow.
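
As a loose illustration of the gate-out-priority idea (an assumption-level sketch, not the paper's algorithm), the following Python snippet stacks containers so that earlier-leaving boxes end up on top, which avoids re-handling at carry-out:

```python
from dataclasses import dataclass

@dataclass
class Container:
    cid: str
    gate_out_hour: int  # smaller value = scheduled to leave the yard sooner

def allocate(containers, n_stacks: int, height: int):
    """Assign yard stacks so earlier-leaving containers sit on top."""
    stacks = [[] for _ in range(n_stacks)]
    # Place late-leaving containers first (bottom), early-leaving last
    # (top), keeping stack heights balanced and capped at `height`.
    for c in sorted(containers, key=lambda c: c.gate_out_hour, reverse=True):
        target = min((s for s in stacks if len(s) < height), key=len)
        target.append(c)
    return stacks

yard = allocate([Container("A", 3), Container("B", 1), Container("C", 2)],
                n_stacks=2, height=4)
for i, stack in enumerate(yard):
    print(f"stack {i} (bottom -> top):", [c.cid for c in stack])
```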