• Title/Abstract/Keyword: Document Image


Reliability Verification of Evidence Analysis Tools for Digital Forensics (디지털 포렌식을 위한 증거 분석 도구의 신뢰성 검증)

  • Lee, Tae-Rim;Shin, Sang-Uk
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.21 no.3
    • /
    • pp.165-176
    • /
    • 2011
  • In this paper, we examine the reliability verification procedure for evidence analysis tools used in computer forensics and test well-known tools against their functional requirements using the verification items proposed by the standard document TIAK.KO-12.0112. We also carry out a performance evaluation based on the test results and suggest ways to improve the performance of evidence analysis tools. To achieve this, we first investigate the functions that the test subjects can perform, then set up a specific test plan and create evidence image files that contain the contents of the verification items, and finally verify and analyze the test results. In this process, we uncover weaknesses shared by most of the analysis tools, such as the restoration of deleted and fragmented files, the identification of file formats widely used in Korea, and the processing of strings composed of the Korean alphabet.
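A minimal sketch of how one such functional check could be scripted: comparing a tool's recovered files against a known manifest of files planted in the evidence image. All names are hypothetical; the actual TIAK.KO-12.0112 verification items are not reproduced here.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def recovery_rate(manifest: dict[str, str], recovered_dir: Path) -> float:
    """Fraction of planted files (name -> expected hash) that the tool under
    test recovered with byte-identical content. Illustrative check only."""
    recovered_hashes = {sha256(p) for p in recovered_dir.rglob("*") if p.is_file()}
    hits = sum(1 for expected in manifest.values() if expected in recovered_hashes)
    return hits / len(manifest) if manifest else 0.0
```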

Considerations for Applying Korean Natural Language Processing Technology in Records Management (기록관리 분야에서 한국어 자연어 처리 기술을 적용하기 위한 고려사항)

  • Kim, Haklae
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.22 no.4
    • /
    • pp.129-149
    • /
    • 2022
  • Records have temporal characteristics, including the past and present; linguistic characteristics not limited to a specific language; and various types categorized in complex ways. Processing records such as text, video, and audio across the life cycle of creation, preservation, and utilization entails exhaustive effort and cost. Primary natural language processing (NLP) technologies, such as machine translation, document summarization, named-entity recognition, and image recognition, can be widely applied to electronic records and to the digitization of analog records. In particular, Korean deep learning-based NLP technologies effectively recognize various record types and generate records management metadata. This paper provides an overview of Korean NLP technologies and discusses considerations for applying them in records management. The use of NLP technologies such as machine translation and optical character recognition for the digital conversion of records is introduced through an example implemented in a Python environment. In addition, improvements to environmental factors and record digitization guidelines are proposed so that NLP technology can be utilized in the records management field.
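As a rough illustration of the kind of Python pipeline the paper describes, the sketch below chains optical character recognition with machine translation. It assumes the pytesseract wrapper with the Korean language pack and the publicly available Helsinki-NLP/opus-mt-ko-en translation model; the paper itself does not prescribe these specific tools.

```python
# Hedged sketch: OCR a scanned record, then machine-translate the text.
from PIL import Image
import pytesseract
from transformers import pipeline

def digitize_record(image_path: str) -> dict:
    # Step 1: optical character recognition on the scanned page (Korean)
    korean_text = pytesseract.image_to_string(Image.open(image_path), lang="kor")

    # Step 2: machine translation of the recognized text (truncated for brevity)
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ko-en")
    english_text = translator(korean_text[:1000])[0]["translation_text"]

    # Step 3: return both versions as candidate records-management metadata
    return {"original_text": korean_text, "translated_text": english_text}
```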

Variance Recovery in Text Detection using Color Variance Feature (색 분산 특징을 이용한 텍스트 추출에서의 손실된 분산 복원)

  • Choi, Yeong-Woo;Cho, Eun-Sook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.10
    • /
    • pp.73-82
    • /
    • 2009
  • This paper proposes a variance recovery method for character strokes that can be missed when applying the previously proposed color variance approach to text detection in natural scene images. The previous method can miss the color variance when character strokes are thick or long, because of the fixed length of its horizontal and vertical variance detection windows. This paper therefore proposes a variance recovery method that uses geometric information from the bounding boxes of connected components together with heuristic knowledge. We have tested the proposed method on various kinds of document-style and natural scene images, such as billboards and signboards, captured by digital cameras and mobile-phone cameras, and we show improved text detection accuracy even in images containing large characters.
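The abstract does not give the exact windowing or recovery heuristics; the sketch below only illustrates the general ingredients such a step builds on, namely a local variance map computed with fixed-size windows and bounding boxes of connected components obtained with OpenCV.

```python
import cv2
import numpy as np

def variance_map(gray: np.ndarray, win: int = 15) -> np.ndarray:
    """Local variance over a fixed win x win window: E[x^2] - E[x]^2."""
    f = gray.astype(np.float32)
    mean = cv2.blur(f, (win, win))
    mean_sq = cv2.blur(f * f, (win, win))
    return mean_sq - mean * mean

def candidate_boxes(var: np.ndarray, thresh: float) -> list:
    """Bounding boxes of connected components in the thresholded variance map."""
    mask = (var > thresh).astype(np.uint8)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # stats columns: x, y, width, height, area; label 0 is the background
    return [tuple(stats[i, :4]) for i in range(1, n)]
```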

Deep Learning Research Trends Analysis with Ego Centered Topic Citation Analysis (자아 중심 주제 인용분석을 활용한 딥러닝 연구동향 분석)

  • Lee, Jae Yun
    • Journal of the Korean Society for Information Management
    • /
    • v.34 no.4
    • /
    • pp.7-32
    • /
    • 2017
  • Recently, deep learning has been spreading rapidly as an innovative machine learning technique across various domains. This study explored the research trends of deep learning via a modified ego-centered topic citation analysis. A few seed documents were selected from among the documents retrieved from Web of Science with the keyword 'deep learning', and related documents were obtained through citation relations. Papers citing the seed documents were set as ego documents, reflecting current research in the field of deep learning. Earlier studies cited frequently in the ego documents were set as citation identity documents, which represent specific themes in the field of deep learning. For the ego documents, which are the product of current research activities, quantitative analyses including co-authorship network analysis were performed to identify major countries and research institutes. For the citation identity documents, co-citation analysis was conducted, and key literature and key research themes were identified by investigating the citation image keywords, that is, the major keywords of the documents citing each citation identity document cluster. Finally, we proposed and measured a citation growth index that reflects the growth trend of citation influence on a specific topic, and showed the changes in the leading research themes in the field of deep learning.
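A minimal sketch of the co-citation counting step that underlies this kind of analysis, assuming each ego document is represented simply as a list of cited reference identifiers; the paper's own citation growth index formula is not reproduced here.

```python
from collections import Counter
from itertools import combinations

import networkx as nx

def co_citation_graph(ego_reference_lists: list, min_count: int = 2) -> nx.Graph:
    """Build a co-citation network: two references are linked when they are
    cited together by the same ego document at least min_count times."""
    pair_counts = Counter()
    for refs in ego_reference_lists:
        for a, b in combinations(sorted(set(refs)), 2):
            pair_counts[(a, b)] += 1
    graph = nx.Graph()
    for (a, b), weight in pair_counts.items():
        if weight >= min_count:
            graph.add_edge(a, b, weight=weight)
    return graph
```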

Hardware Design for JBIG2 Encoder on Embedded System (임베디드용 JBIG2 부호화기의 하드웨어 설계)

  • Seo, Seok-Yong;Ko, Hyung-Hwa
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.2C
    • /
    • pp.182-192
    • /
    • 2010
  • This paper proposes a hardware IP design of a JBIG2 encoder. To facilitate the next-generation FAX following the standardization of JBIG2, the major modules of the JBIG2 encoder were designed and implemented: the symbol extraction module, Huffman coder, MMR coder, and MQ coder. ImpulseC CoDeveloper and the Xilinx ISE/EDK tools were used to synthesize the VHDL code. To minimize memory usage, the input image is processed successively in 128-line strips rather than as a whole. The synthesized IPs were downloaded to a Virtex-4 FX60 FPGA on an ML410 development board, and the four IPs together utilize 36.7% of the FPGA's slices. Using the Active-HDL tool, the generated IPs were verified and shown to operate correctly. Compared with a software implementation running on the MicroBlaze CPU of the ML410 board, the synthesized IPs have shorter operation times: the symbol extraction IP is 17 times faster, the Huffman coder IP 10 times faster, the MMR coder IP 6 times faster, and the MQ coder IP 2.2 times faster than software-only operation. The synthesized hardware IPs and software modules cooperated to successfully compress the CCITT standard document.
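The memory-saving choice described in the abstract, encoding the page in 128-line strips rather than holding the whole image, can be illustrated with the following sketch; zlib stands in for the actual JBIG2 symbol/MMR/MQ coders, which are not reproduced here.

```python
import zlib
import numpy as np

STRIP_HEIGHT = 128  # lines processed per pass, as in the paper's design

def encode_in_strips(bilevel_image: np.ndarray) -> list:
    """Compress a bi-level page strip by strip so that only 128 lines need
    to be resident at a time. zlib is a placeholder for the real coders."""
    height = bilevel_image.shape[0]
    strips = []
    for top in range(0, height, STRIP_HEIGHT):
        strip = bilevel_image[top:top + STRIP_HEIGHT]
        strips.append(zlib.compress(np.packbits(strip).tobytes()))
    return strips
```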

Research on the touch points of city brand users based on M-ICT (M-ICT시대의 도시 브랜드 사용자의 터치포인트에 관한 연구)

  • Yao, Xiao-Dong;Pan, Young-Hwan
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.11
    • /
    • pp.289-296
    • /
    • 2019
  • In the era of M-ICT (mobile information and communication technology), it is becoming increasingly important to establish a city brand image that is user-centered and creates a new experience of interaction between people, between people and things, and between people and space. City users' experience of the city's brand image is a key factor in determining the competitiveness of the city, and research on the touch points of the city brand is particularly important. The purpose of this study is to enhance users' brand experience and strengthen the city's brand competitiveness. The research methods of this paper are literature review and survey. First, the background and purpose of the study are explained; then the characteristics and channels of user experience touch points are defined through the literature review, and a multi-dimensional composition model of city brand touch points is proposed. Through a user survey, the structural characteristics of high-frequency touch points among digital and physical touch points are obtained. Based on the problems found in the survey, an optimal touch point design strategy is put forward in combination with a case study. The contribution of this study is to examine the relationship between the city brand and user experience touch points from the perspective of user experience, and to propose a multi-dimensional model of city brand touch points.

Development of Standard Process for Private Information Protection of Medical Imaging Issuance (개인정보 보호를 위한 의료영상 발급 표준 업무절차 개발연구)

  • Park, Bum-Jin;Yoo, Beong-Gyu;Lee, Jong-Seok;Jeong, Jae-Ho;Son, Gi-Gyeong;Kang, Hee-Doo
    • Journal of radiological science and technology
    • /
    • v.32 no.3
    • /
    • pp.335-341
    • /
    • 2009
  • Purpose : Medical image issuance has shifted from the conventional film method to digital CD/DVD media as IT technology has developed. However, while other medical records departments perform thorough identity verification, medical imaging departments often cannot afford to do so. We therefore examined applicants' awareness of personal information protection and the current state of CD/DVD-based medical image issuance at various medical facilities, performed a comparative analysis against domestic and foreign laws and recommendations, and propose a standard procedure for medical image issuance suited to the internal environment. Materials and methods : First, we surveyed the issuance process and required documents for medical image issuance at metropolitan medical facilities by telephone between June 1 and July 1, 2008. In accordance with Article 21, Clause 2 of the Medical Service Act, we defined the required documents for each type of applicant: (1) the patient in person: verification of identification; (2) a family member: verification of the applicant's identification and a document proving the family relationship (health insurance card, certified copy of the family register, etc.); (3) a third party or representative: verification of the applicant's identification, a letter of attorney, and a certificate of seal impression. Second, we checked the documents actually submitted by applicants against this standard during medical image issuance at Kyung Hee University Medical Center over three months, from May 1 to July 31, 2008. Third, we developed a work process for the issuance procedure covering verification of required documents and the handling of incomplete documentation. Result : Among the surveyed hospitals, 4 (12%) satisfied all conditions, 4 (12%) issued images to anyone who requested them, and 9 (27%) handled requests in clinical departments without a dedicated medical image issuance office, so their document requirements could not be determined. In the three-month survey of applicants' documents, 629 cases (49%) satisfied all conditions, 416 cases (33%) submitted only part of the required documents, and 226 cases (18%) submitted none of them. Based on these results, we established a service model for objective reception of image export requests through a standardized issuance procedure, reducing friction with patients and improving patient convenience. Conclusion : PACS is classified as a medical device, which indicates the high importance of the medical information it holds. Therefore, only medical information administrators who have received professional education should perform the issuance process, and identity verification must be carried out thoroughly.
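The proposed standard can be summarized as a mapping from applicant type to required documents; a minimal sketch follows, with labels that are illustrative only.

```python
# Illustrative only: the three applicant cases and the documents the
# proposed standard requires for each.
REQUIRED_DOCUMENTS = {
    "self": {"ID card"},
    "family": {"applicant ID card", "proof of family relationship"},
    "representative": {"applicant ID card", "letter of attorney",
                       "certificate of seal impression"},
}

def missing_documents(applicant_type: str, submitted: set) -> set:
    """Documents still required before a medical image may be issued."""
    return REQUIRED_DOCUMENTS[applicant_type] - submitted
```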


Development of Extreme Event Analysis Tool Base on Spatial Information Using Climate Change Scenarios (기후변화 시나리오를 활용한 공간정보 기반 극단적 기후사상 분석 도구(EEAT) 개발)

  • Han, Kuk-Jin;Lee, Moung-Jin
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.3
    • /
    • pp.475-486
    • /
    • 2020
  • Climate change scenarios are the basis of research on coping with climate change and consist of large-scale spatio-temporal data. From a data point of view, a single scenario amounts to about 83 gigabytes or more, and the data format is semi-structured, making it difficult to utilize through search, extraction, archiving, and analysis. In this study, a tool for analyzing extreme climate events based on spatial information was developed to improve the usability of large-scale, multi-period climate change scenarios. In addition, a pilot analysis was conducted, by applying the developed tool to the RCP8.5 climate change scenario, of the times and places at which heavy-rain thresholds observed in the past could occur in the future. As a result, days with a cumulative rainfall of more than 587.6 mm over three days would account for about 76 days in the 2080s, and localized heavy rains would occur. The developed analysis tool was designed so that the entire process, from initial settings to the derivation of analysis results, can be handled on a single platform, and the analysis results can be exported in various formats without specific commercial software: web document (HTML), image (PNG), climate change scenario (ESR), and statistics (XLS). The tool is therefore considered useful for assessing future climate change prospects or vulnerability, and it is expected to inform the development of analysis tools for the climate change scenarios based on forthcoming climate change reports.
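A hedged sketch of the kind of threshold query the pilot analysis describes, counting days whose three-day cumulative rainfall exceeds 587.6 mm, assuming the scenario is available as a NetCDF file with a daily precipitation variable (here called "pr"); the tool's actual variable names and file layout are not specified in the abstract.

```python
import xarray as xr

THRESHOLD_MM = 587.6  # three-day cumulative rainfall threshold from the pilot analysis

def count_extreme_days(scenario_path: str, var: str = "pr") -> xr.DataArray:
    """Per-grid-cell count of days whose trailing 3-day rainfall sum exceeds
    the threshold. Variable name and file layout are assumptions."""
    ds = xr.open_dataset(scenario_path)
    rain3 = ds[var].rolling(time=3).sum()
    return (rain3 > THRESHOLD_MM).sum(dim="time")
```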

A Study on the Necessity and Applicability of Interactive Electronic Technical Manual(IETM) for Construction Projects (건설분야 전자매뉴얼의 필요성 및 특성분석을 통한 실무적용성 연구)

  • Kang, Leen-Seok;Jung, Won-Myung;Kwak, Joong-Min
    • Korean Journal of Construction Engineering and Management
    • /
    • v.6 no.1 s.23
    • /
    • pp.99-108
    • /
    • 2005
  • An interactive electronic technical manual (IETM) for construction projects is an electronic tool in which the regulations and specifications related to construction methods or maintenance processes are described in an electronic book format. It is also an integrated information system that includes virtual reality (VR), 3D animation, and image content for representing real construction information so that users can easily understand the construction situation and the maintenance process. In the Korean construction industry, the basic information and technical manuals of construction facilities are still written as paper documents. As a result, information management in the maintenance phase of construction projects is inefficient, and maintenance costs are increasing. This study attempts to address the lack of understanding of construction IETMs by analyzing the necessity and distinctive functions of a construction IETM in comparison with IETMs in other industries. Finally, this study presents a scenario of a construction IETM for mitigating natural disasters affecting construction facilities to verify the applicability of the IETM.

Eligibility Analysis of Land on a Reforestation CDM Project in Goseong District, South Korea (청정개발체제하 재 조림 사업의 토지적격성에 대한 사례 분석 -고성군 재조림 사업을 중심으로-)

  • Cui, Guishan;Kwon, Tae-Hyub;Lee, Woo-Kyun;Kwak, Hanbin;Nam, Kijun;Song, Yongho;Yu, Hangnan
    • Journal of Korean Society of Forest Science
    • /
    • v.102 no.2
    • /
    • pp.216-222
    • /
    • 2013
  • To reduce greenhouse gases, many countries have carried out a range of activities both at home and abroad. In particular, after the release of the Kyoto Protocol, participation by nations and companies intensified because of the responsibility imposed by emission limits. This study focused on a reforestation CDM project in Goseong-gun under the Clean Development Mechanism. Obstacles to land eligibility can be distinguished across three periods: before December 31, 1989, the present, and the future. The obstacle for the period before December 31, 1989, was that the land cover of the study area could hardly be identified from Landsat images because of their low resolution; it was instead confirmed using a Grassland Composition Permission document. The problem with current land eligibility is that it is difficult to determine whether areas with trees present qualify as forest; the boundary of the forest strata was identified using a 3-dimensional cartography machine and aerial photographs. For the future, land eligibility would still face the obstacle of whether the study area with trees present has the potential to become forest in the absence of the reforestation project. This was resolved by predicting tree growth using stem analysis carried out in the study area during the execution of the project.