• Title/Summary/Keyword: 사전스캔

Search results: 33

Automatic Word-Spacing of Syllable Bi-gram Information for Korean OCR Postprocessing (음절 Bi-gram정보를 이용한 한국어 OCR 후처리용 자동 띄어쓰기)

  • Jeon, Nam-Youl;Park, Hyuk-Ro
    • Proceedings of the Korean Society for Cognitive Science Conference / 2000.06a / pp.95-100 / 2000
  • Recognizing scanned document images with a character recognizer, followed by morphological and eojeol (word-phrase) analysis, makes it possible to build large volumes of document information into a database and support full-text retrieval. However, data containing character misrecognitions or word-spacing errors cannot be used directly for morphological or eojeol analysis. In Korean character recognition, the character-level recognition rate is about 90.5%, but the eojeol-level recognition rate, which accounts for recognition errors and word-spacing errors, is markedly lower. To address this, we performed automatic word spacing without a dictionary, exploiting the syllable characteristics of Korean and using syllable-level bi-gram information from a well-trained corpus. Experiments showed a word-spacing accuracy of about 86.2%, varying with the size of the training corpus and the positional information of spacing errors. Feeding this result into an OCR postprocessing stage that uses morphological analysis and language evaluation should significantly improve the recognition rate of character recognition systems.

  • PDF
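The bigram approach described in the abstract can be illustrated as follows: estimate P(space | left syllable, right syllable) from a correctly spaced corpus, then re-insert a space between adjacent syllables wherever that probability crosses a threshold. A minimal sketch with a toy corpus (assumed data for illustration, not the authors' implementation):

```python
from collections import defaultdict

# Toy "training corpus" with correct spacing (assumed example sentences).
corpus = ["나는 학교에 간다", "나는 집에 간다", "학교에 가서 공부한다"]

def boundaries(sent):
    """Yield ((left, right), had_space) for each adjacent syllable pair."""
    syls, space_after = [], []
    for ch in sent:
        if ch == " ":
            space_after[-1] = True
        else:
            syls.append(ch)
            space_after.append(False)
    for i in range(len(syls) - 1):
        yield (syls[i], syls[i + 1]), space_after[i]

# Estimate P(space | left, right) by counting syllable bigrams.
total, with_space = defaultdict(int), defaultdict(int)
for sent in corpus:
    for pair, had in boundaries(sent):
        total[pair] += 1
        if had:
            with_space[pair] += 1

def respace(text, threshold=0.5):
    """Re-insert spaces into unspaced text using bigram statistics."""
    syls = text.replace(" ", "")
    out = [syls[0]]
    for l, r in zip(syls, syls[1:]):
        if total[(l, r)] and with_space[(l, r)] / total[(l, r)] >= threshold:
            out.append(" ")
        out.append(r)
    return "".join(out)

print(respace("나는학교에간다"))  # → 나는 학교에 간다
```

A real system would smooth unseen bigrams and weight decisions by corpus frequency; the threshold here is an arbitrary assumption.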

Automatic Word-Spacing of Syllable Bi-gram Information for Korean OCR Postprocessing (음절 Bi-gram정보를 이용한 한국어 OCR 후처리용 자동 띄어쓰기)

  • Jeon, Nam-Youl;Park, Hyuk-Ro
    • Annual Conference on Human and Language Technology / 2000.10d / pp.95-100 / 2000
  • Recognizing scanned document images with a character recognizer, followed by morphological and eojeol (word-phrase) analysis, makes it possible to build large volumes of document information into a database and support full-text retrieval. However, data containing character misrecognitions or word-spacing errors cannot be used directly for morphological or eojeol analysis. In Korean character recognition, the character-level recognition rate is about 90.5%, but the eojeol-level recognition rate, which accounts for recognition errors and word-spacing errors, is markedly lower. To address this, we performed automatic word spacing without a dictionary, exploiting the syllable characteristics of Korean and using syllable-level bigram information from a well-trained corpus. Experiments showed a word-spacing accuracy of about 86.2%, varying with the size of the training corpus and the positional information of spacing errors. Feeding this result into an OCR postprocessing stage that uses morphological analysis and language evaluation should significantly improve the recognition rate of character recognition systems.

  • PDF

Building the Process for Reducing Whole Body Bone Scan Errors and its Effect (전신 뼈 스캔의 오류 감소를 위한 프로세스 구축과 적용 효과)

  • Kim, Dong Seok;Park, Jang Won;Choi, Jae Min;Shim, Dong Oh;Kim, Ho Seong;Lee, Yeong Hee
    • The Korean Journal of Nuclear Medicine Technology / v.21 no.1 / pp.76-82 / 2017
  • Purpose: Whole-body bone scan is one of the most frequently performed examinations in nuclear medicine. Both anterior and posterior views are acquired simultaneously. Occasionally, it is difficult to distinguish a lesion from the anterior and posterior views alone; in such cases, accurate localization of the lesion through SPECT/CT or additional static scan images is important. Various improvement activities have therefore been carried out to enhance the work capacity of technologists. In this study, we investigate the effect of technologist training and standardized work processes on bone scan error reduction. Materials and Methods: Several systems were introduced in sequence to apply the new process: first, education and testing together with physicians; second, a pre-filtration system that classifies patients expected to require further scanning so that technologists can check them in advance; and finally, a communication system called NMQA. From January 2014 to December 2016, we examined whole-body bone scan patients who visited the Department of Nuclear Medicine, Asan Medical Center, Seoul, Korea. Results: We investigated errors based on the Bone Scan NMQA reports sent from January 2014 to December 2016. The number of examinations for which an NMQA was transmitted was calculated as a percentage of all bone scans during the survey period. The annual counts were 141 cases in 2014, 88 in 2015, and 86 in 2016; the NMQA rate decreased from 0.88% in 2014 to 0.53% in 2015 and 0.45% in 2016. Conclusion: The incidence of NMQA has decreased since the new process was applied in 2014. However, data will need to be accumulated continuously until its usefulness is confirmed statistically. This study confirmed the necessity of standardized work and education to improve the quality of bone scan images, and continued research and updates will be needed in the future.

  • PDF
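The NMQA rate in the abstract is simply the yearly NMQA count divided by the total number of bone scans that year. The yearly counts below are from the abstract; the totals are back-calculated from the reported rates and are therefore approximations, not figures from the paper:

```python
# Yearly NMQA counts are from the abstract; total scan counts are assumed
# values back-calculated from the reported rates (count / rate).
nmqa_counts = {2014: 141, 2015: 88, 2016: 86}
total_scans = {2014: 16023, 2015: 16604, 2016: 19111}  # assumed totals

def nmqa_rate(year):
    """NMQA transmissions as a percentage of all bone scans that year."""
    return 100.0 * nmqa_counts[year] / total_scans[year]

for year in sorted(nmqa_counts):
    print(f"{year}: {nmqa_rate(year):.2f}%")
```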

An Attack Graph Model for Dynamic Network Environment (동적 네트워크 환경에 적용 가능한 Attack Graph 모델 연구)

  • Moon, Joo Yeon;Kim, Taekyu;Kim, Insung;Kim, Huy Kang
    • Journal of the Korea Institute of Information Security & Cryptology / v.28 no.2 / pp.485-500 / 2018
  • As the size of the system and network environment grows and the network structure and system configuration change frequently, network administrators have difficulty managing the status manually and identifying changes in real time. In this paper, we suggest a system that scans dynamic network information in real time, scores the vulnerability of network devices, generates all potential attack paths, and visualizes them as an attack graph. We implemented an attack graph based on the proposed algorithm, and demonstrated that it is applicable to an MTD-concept-based defense system by simulating it in a dynamic virtual network environment with SDN.
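The core of attack-graph generation is enumerating exploit paths through the network; a minimal sketch of that step (a generic illustration with a hypothetical topology and assumed CVSS-like scores, not the paper's algorithm):

```python
# Nodes are hosts; a directed edge A -> B means "an attacker on A can
# exploit B", weighted by an assumed CVSS-like vulnerability score.
edges = {  # hypothetical network
    "internet": {"web": 7.5},
    "web": {"app": 6.1, "db": 4.3},
    "app": {"db": 9.8},
}

def attack_paths(src, dst, path=None, score=0.0):
    """Enumerate all simple attack paths src -> dst with cumulative scores."""
    path = (path or []) + [src]
    if src == dst:
        yield path, score
        return
    for nxt, s in edges.get(src, {}).items():
        if nxt not in path:  # keep paths simple (no host revisited)
            yield from attack_paths(nxt, dst, path, score + s)

# Rank candidate paths by total vulnerability score, worst first.
ranked = sorted(attack_paths("internet", "db"), key=lambda p: -p[1])
for p, s in ranked:
    print(" -> ".join(p), f"(score {s:.1f})")
```

In a dynamic environment the `edges` map would be rebuilt from each real-time scan before re-running the enumeration.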

Comparative Analysis of IaC Vulnerability Scanning Efficiency with AWS CloudFormation for DevSecOps (DevSecOps를 위한 AWS CloudFormation 기반 코드형 인프라 취약성 스캐닝 효율성 분석)

  • Siyun Chae;Jiwon Hong;Junga Kim;Seunghyun Park;Seongmin Kim
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.216-217 / 2024
  • The growing complexity of cloud computing infrastructure and software has increased the demand for rapid scalability and flexibility. Infrastructure as Code (IaC), which defines infrastructure in code to build automated environments, has drawn attention for improving development and operations efficiency as well as compatibility with cloud-native environments, and AWS CloudFormation is one representative solution. However, if a template deployed as IaC contains vulnerabilities, they are difficult to discover before an instance is launched, which can cause security issues in the DevOps cycle. This paper therefore presents a case study evaluating the efficiency of open-source tools known to scan CloudFormation templates for security vulnerabilities. Based on the analysis results, we discuss the necessity of proactive vulnerability detection and a fine-grained approach in IaC-based environments for achieving DevSecOps.
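The kind of static check such scanners perform can be sketched in a few lines: walk a CloudFormation-style template and flag rules that violate a policy before anything is deployed. A hedged sketch with a hypothetical template fragment (one common rule, world-open ingress; not any specific tool's rule set):

```python
# Hypothetical CloudFormation-style template fragment (as a parsed dict).
template = {
    "Resources": {
        "WebSG": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "SecurityGroupIngress": [
                    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                     "CidrIp": "0.0.0.0/0"},      # open to the internet
                    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                     "CidrIp": "10.0.0.0/16"},    # internal only
                ],
            },
        },
    },
}

def open_ingress(tmpl):
    """Flag security-group ingress rules open to 0.0.0.0/0."""
    findings = []
    for name, res in tmpl.get("Resources", {}).items():
        if res.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in res.get("Properties", {}).get("SecurityGroupIngress", []):
            if rule.get("CidrIp") == "0.0.0.0/0":
                findings.append((name, rule.get("FromPort")))
    return findings

print(open_ingress(template))  # flags the world-open SSH rule
```

Running such checks in CI before `cloudformation deploy` is the "pre-detection" the paper argues for.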

A Study on the 3D Measurement Data Application: The Detailed Restoration Modeling of Mireuksajiseoktap (미륵사지석탑 정밀복원모형 제작을 중심으로 한 3차원 실측데이터의 활용 연구)

  • Moon, Seang Hyen
    • Korean Journal of Heritage: History & Science / v.44 no.2 / pp.76-95 / 2011
  • After dismantlement, Mireuksajiseoktap (the stone pagoda of the Mireuksa temple site) is in the restoration design stage. Various means, such as producing a restoration model and 3D simulation, have been requested to make the restoration design more detailed and precise before the design is confirmed and the actual restoration is carried out. This study proposes a way to build a detailed model for better restoration planning using widely used reverse engineering and rapid prototyping techniques, and introduces each stage of building such a model: 3D measurement, database construction, 3D simulation, and so on. It shows, first, that the interior and exterior of the dismantled pagoda can be grasped virtually and clearly as a whole rather than in pieces; second, that a 3D examination of the 2D design becomes possible by acquiring basic 3D design materials; third, that individual features of each member, such as changes in member location, can be understood through comparative analysis and the joint condition of the members; and lastly, from a structural perspective, that the results can serve as reference material for structural reinforcement design by identifying the destroyed aspects and weak points of the pagoda. In the dismantlement, repair, and restoration of cultural properties, which require delicate attention and exactness, reinforcement and restoration designs based only on 2D plans may contain unavoidable errors in time and space; the more complicated and larger the subject, the more difficult the analysis of its current state and its detailed design become. A series of pre-reviews based on 3D measurement data can be an effective way to minimize such errors by enabling a more delicate plan and resolving difficulties in advance.

Separation of Chromophoric Substance from Amur Cork Tree Using GC-MS (GC-MS를 이용한 황벽의 색소 성분 분리 거동)

  • Ahn, Cheun-Soon
    • Journal of the Korean Society of Clothing and Textiles / v.33 no.6 / pp.980-989 / 2009
  • Amur cork tree was extracted in methanol to investigate the most effective extraction procedure for detecting the chromophore by GC-MS analysis. Different waterbath and hotplate extraction procedures were carried out, and five GC-MS instrument parameter sets, including the GC capillary column operating temperatures and the MSD scan range, were tested for efficiency. Berberine was identified by the detection of dihydroberberine at a retention time of 15.0 min. A hotplate was a better device for extracting amur cork tree than a waterbath shaker, with or without presoaking at room temperature. Water was not an adequate extraction medium for berberine detection. The most effective GC-MS parameter set was Method 4: an initial temperature of 50°C, ramped at 23°C/min to 210°C, then at 30°C/min to 305°C, and held for 14 minutes to give a total run time of 24.12 minutes. The MSD scan range for Method 4 was 35–400 m/z.
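The 24.12-minute total run time follows directly from the Method 4 temperature program; a quick check of the arithmetic:

```python
# Verify the Method 4 total run time from the temperature program:
# ramp 50→210°C at 23°C/min, then 210→305°C at 30°C/min, then hold 14 min.
ramp1 = (210 - 50) / 23   # ≈ 6.96 min
ramp2 = (305 - 210) / 30  # ≈ 3.17 min
hold = 14.0
total = ramp1 + ramp2 + hold
print(round(total, 2))  # → 24.12
```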

An Efficient Test Data Compression/Decompression for Low Power Testing (저전력 테스트를 고려한 효율적인 테스트 데이터 압축 방법)

  • Chun Sunghoon;Im Jung-Bin;Kim Gun-Bae;An Jin-Ho;Kang Sungho
    • Journal of the Institute of Electronics Engineers of Korea SD / v.42 no.2 s.332 / pp.73-82 / 2005
  • Test data volume and power consumption for scan vectors are two major problems in system-on-a-chip testing. This paper therefore proposes a new test data compression/decompression method for low-power testing. The method is based on analyzing the factors that influence the test parameters: compression ratio, power reduction, and hardware overhead. To improve the compression ratio and the power reduction ratio, the proposed method builds on Modified Statistical Coding (MSC), an Input Reduction (IR) scheme, and preprocessing algorithms that reorder the scan flip-flops and the test pattern sequence. Unlike previous approaches using the CSR architecture, the proposed method compresses the original test data, not T_diff, and decompresses the compressed test data without the CSR architecture. It therefore achieves a better compression ratio with lower hardware overhead and lower power consumption than previous works. An experimental comparison on ISCAS '89 benchmark circuits validates the proposed method.
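Scan-in power is commonly estimated with a weighted transition count (WTC): a bit flip that enters the scan chain early is shifted through more flip-flops and so costs more. A minimal sketch of that metric under one common weighting convention (a generic illustration of why reordering reduces power, not the paper's MSC/IR method):

```python
def wtc(bits):
    """Weighted transition count of a scan vector.

    A transition between bit i-1 and bit i (bit 0 scanned in first) is
    weighted by L - 1 - i: the earlier the flip, the more flip-flops it
    toggles while the vector shifts through the chain.
    """
    L = len(bits)
    return sum(L - 1 - i for i in range(1, L) if bits[i] != bits[i - 1])

# Same bits, different ordering: grouping equal values cuts scan power.
print(wtc("0101"), wtc("0011"))  # the run-ordered vector costs far less
```

Reordering scan flip-flops or test patterns to turn high-WTC vectors into low-WTC ones is exactly the kind of preprocessing the abstract describes.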

A Study on the Use of Contrast Agent and the Improvement of Body Part Classification Performance through Deep Learning-Based CT Scan Reconstruction (딥러닝 기반 CT 스캔 재구성을 통한 조영제 사용 및 신체 부위 분류 성능 향상 연구)

  • Seongwon Na;Yousun Ko;Kyung Won Kim
    • Journal of Broadcast Engineering / v.28 no.3 / pp.293-301 / 2023
  • Unstandardized medical data collection and management are still performed manually, and studies have applied deep learning to classify CT data to solve this problem. However, most studies develop models based only on the axial plane, the basic CT slice. Because CT images depict only human anatomy, unlike general images, simply reconstructing the CT scans can provide richer physical features. This study explores ways to achieve higher performance through various methods of converting CT scans to 2D beyond the axial plane. Training used 1,042 CT scans from five body parts, and 179 test sets plus 448 external datasets were collected for model evaluation. For the deep learning model, we used InceptionResNetV2 pre-trained on ImageNet as a backbone and re-trained all layers of the model. In the experiments, the reconstruction-data model achieved 99.33% in body part classification, 1.12% higher than the axial model; the axial model was higher only for brain and neck in contrast classification. In conclusion, more accurate performance could be achieved when learning from data that shows better anatomical features than when training with axial slices alone.
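Reconstructing other planes from an axial CT stack is just re-indexing the voxel grid; a pure-Python sketch on a hypothetical toy volume (illustrating the re-slicing idea, not the paper's pipeline):

```python
# A CT volume as a list of axial slices: volume[z][y][x].
# Toy 3x3x3 volume whose voxel value encodes its own (z, y, x) index.
volume = [[[z * 100 + y * 10 + x for x in range(3)]
           for y in range(3)]
          for z in range(3)]

def coronal(vol, y):
    """Fix y: the coronal plane is indexed by [z][x]."""
    return [[vol[z][y][x] for x in range(len(vol[0][0]))]
            for z in range(len(vol))]

def sagittal(vol, x):
    """Fix x: the sagittal plane is indexed by [z][y]."""
    return [[vol[z][y][x] for y in range(len(vol[0]))]
            for z in range(len(vol))]

print(coronal(volume, 0)[1])  # → [100, 101, 102]
```

The same voxels yield three different 2D views, which is why reconstructed planes can expose anatomical features the axial slices alone do not.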

Anomaly Detection Mechanism based on the Session Patterns and Fuzzy Cognitive Maps (퍼지인식도와 세션패턴 기반의 비정상 탐지 메커니즘)

  • Ryu Dae-Hee;Lee Se-Yul;Kim Hyeock-Jin;Song Young-Deog
    • Journal of the Korea Society of Computer and Information / v.10 no.6 s.38 / pp.9-16 / 2005
  • Recently, as the number of internet users increases rapidly and general users can easily intrude into computer systems using public hacking tools, the hacking problem is becoming more serious. To prevent intrusion, it is necessary to detect its signs in advance, as positive prevention, by detecting the various forms of intrusion attempts with which hackers probe systems for vulnerabilities. Existing network-based anomaly detection algorithms that cope with port scanning and network vulnerability scans have a weakness: they cannot detect slow scans and coordinated scans. A new kind of algorithm is therefore needed to detect these various scans effectively. In this paper, we propose a detection algorithm based on session patterns and fuzzy cognitive maps (FCM).

  • PDF
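Slow scans defeat simple per-second thresholds because the probes are spread over hours; the usual countermeasure is to aggregate distinct destination ports per source over a long sliding window. A minimal sketch of that idea (illustrative window and threshold values, not the paper's FCM/session-pattern model):

```python
from collections import defaultdict

WINDOW = 3600.0   # seconds; assumed long window so slow probes accumulate
THRESHOLD = 10    # distinct ports before a source is flagged (assumed)

events = defaultdict(list)  # src -> [(timestamp, dst_port), ...]

def record(src, ts, port):
    """Record a connection attempt; return True if src looks like a scanner."""
    evs = events[src]
    evs.append((ts, port))
    # keep only events inside the sliding window
    events[src] = evs = [(t, p) for t, p in evs if ts - t <= WINDOW]
    return len({p for _, p in evs}) >= THRESHOLD

# One probe every 5 minutes still trips the detector within the hour window.
alerts = [record("10.0.0.9", i * 300.0, 1000 + i) for i in range(12)]
print(alerts.index(True))  # → 9
```

A per-second rate detector would never fire on this trace; only the long-window aggregate exposes the scan.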