• Title/Summary/Keyword: High Accuracy


Improved Method of License Plate Detection and Recognition using Synthetic Number Plate (인조 번호판을 이용한 자동차 번호인식 성능 향상 기법)

  • Chang, Il-Sik;Park, Gooman
    • Journal of Broadcast Engineering / v.26 no.4 / pp.453-462 / 2021
  • A large amount of license plate data is required for car number recognition, and the data needs to be balanced from past license plates to the latest ones. However, it is difficult to obtain real data spanning that whole range. To solve this problem, license plate recognition studies based on deep learning are being conducted using synthetic license plates. Since synthetic data differ from real data, various data augmentation techniques are used to close the gap. Existing data augmentation simply used methods such as brightness, rotation, affine transformation, blur, and noise. In this paper, we combine these data augmentation methods with a style transformation method that transforms synthetic data into the style of real-world data. In addition, real license plate images are noisy when captured from a distance or in dark environments, and recognizing characters directly from such input leads to a high chance of misrecognition. To improve character recognition, we applied DeblurGANv2 as an image quality enhancement step, increasing the accuracy of license plate recognition. YOLO-V5 was used as the deep learning model for both license plate detection and license plate number recognition. To determine the performance of the synthetic license plate data, we constructed a test set from license plates we collected ourselves. License plate detection without style conversion recorded 0.614 mAP; after applying the style transformation, detection performance improved to 0.679 mAP. In addition, the successful detection rate was 0.872 without image enhancement and 0.915 with image enhancement, confirming the performance improvement.
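The mAP and detection-rate figures above rest on matching predicted plate boxes to ground-truth boxes by intersection-over-union. As a minimal sketch of that matching step (the box format, the 0.5 threshold, and the function names are our assumptions, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detection_rate(predictions, ground_truths, threshold=0.5):
    """Fraction of ground-truth plates matched by some prediction with IoU >= threshold."""
    matched = sum(
        1 for gt in ground_truths
        if any(iou(p, gt) >= threshold for p in predictions)
    )
    return matched / len(ground_truths)
```

A full mAP computation would additionally rank predictions by confidence and average precision over recall levels; the sketch shows only the geometric matching that underlies it.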

The Optimization and Verification of an Analytical Method for Sodium Iron Chlorophyllin in Foods Using HPLC and LC/MS (식품 중 철클로로필린나트륨의 HPLC 및 LC/MS 최적 분석법과 타당성 검증)

  • Chong, Hee Sun;Park, Yeong Ju;Kim, Eun Gyeom;Park, Yea Lim;Kim, Jin Mi;Yamaguchi, Tokutaro;Lee, Chan;Suh, Hee-Jae
    • Journal of Food Hygiene and Safety / v.34 no.2 / pp.148-157 / 2019
  • An optimized analytical method for sodium iron chlorophyllin in foods was established and verified using high performance liquid chromatography with diode array detection. An Inertsil ODS-2 column and methanol-water (80:20, containing 1% acetate) as the mobile phase were employed. The limits of detection and quantitation of sodium iron chlorophyllin were 0.1 and 0.3 mg/kg, respectively, and the linearity of the calibration curve was excellent ($R^2=0.9999$). The accuracy and precision were 93.9~104.95% and 2.0~7.7% in both inter-day and intra-day tests. Recoveries ranged between 93 and 104% for candy (relative standard deviation (RSD) 0.3~4.3%) and between 83 and 115% for salad dressing (RSD 1.2~2.0%). Liquid chromatography-mass spectrometry was used to verify that the main components of sodium iron chlorophyllin were Fe-isochlorin e4 and Fe-chlorin e4.
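Linearity, LOD, and LOQ figures of this kind are typically derived from a least-squares calibration curve. A sketch under the common ICH convention (LOD = 3.3σ/S, LOQ = 10σ/S, with σ the residual standard deviation and S the slope) follows; the exact convention the authors used is not stated in the abstract, so this is illustrative only:

```python
import numpy as np

def calibration_stats(conc, resp):
    """Fit a linear calibration curve and report R^2, LOD, and LOQ."""
    conc = np.asarray(conc, dtype=float)
    resp = np.asarray(resp, dtype=float)
    slope, intercept = np.polyfit(conc, resp, 1)
    resid = resp - (slope * conc + intercept)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((resp - resp.mean()) ** 2)
    sigma = resid.std(ddof=2)              # residual standard deviation
    return {
        "slope": slope,
        "r2": r2,
        "lod": 3.3 * sigma / slope,        # ICH detection limit
        "loq": 10.0 * sigma / slope,       # ICH quantitation limit
    }
```

Note that with this convention LOQ/LOD is fixed at 10/3.3 ≈ 3, consistent with the reported 0.3 vs. 0.1 mg/kg.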

A Study on the Fabrication and Comparison of the Phantom for CT Dose Measurements Using 3D Printer (3D프린터를 이용한 CT 선량측정 팬텀 제작 및 비교에 관한 연구)

  • Yoon, Myeong-Seong;Kang, Seong-Hyeon;Hong, Soon-Min;Lee, Youngjin;Han, Dong-Koon
    • Journal of the Korean Society of Radiology / v.12 no.6 / pp.737-743 / 2018
  • The patient exposure dose test, one of the accuracy control items for Computed Tomography, is conducted every year under Article 38 of the Medical Law governing the installation and operation of special medical equipment, and the records are kept. The CT-Dose phantom used for dosimetry can measure doses accurately, but has the disadvantage of a high price. In this research, a replica of the existing CT-Dose phantom was manufactured with a 3D printer and compared with the existing phantom to examine its usefulness. To produce a phantom equivalent to the conventional CT-Dose phantom, an FFF-type 3D printer with PLA filament was used. To calculate the CTDIw value, ion chambers were inserted into the peripheral and central parts, and measurements were made ten times each. The CT-Dose phantom measured $30.44{\pm}0.31mGy$ in the periphery and $29.55{\pm}0.34mGy$ in the center, giving a CTDIw value of $30.14{\pm}0.30mGy$; the phantom fabricated with the 3D printer measured $30.59{\pm}0.18mGy$ in the periphery and $29.01{\pm}0.04mGy$ in the center, giving a CTDIw value of $30.06{\pm}0.13mGy$. Analysis with the Mann-Whitney U-test in the SPSS statistical program showed a statistically significant difference in the central-part results, but no statistically significant difference in the peripheral-part or CTDIw results. In conclusion, the CT-Dose phantom made with a 3D printer showed dose measurement performance equivalent to the existing CT-Dose phantom, confirming the possibility of low-cost phantom production using a 3D printer.
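The weighted CT dose index combines the two chamber positions as CTDIw = (1/3)·CTDI_center + (2/3)·CTDI_periphery, which reproduces the reported values; a minimal check (function name is ours):

```python
def ctdi_w(center_mgy, periphery_mgy):
    """Weighted CT dose index: 1/3 central + 2/3 peripheral dose (mGy)."""
    return center_mgy / 3.0 + 2.0 * periphery_mgy / 3.0

# Reported mean doses for the conventional CT-Dose phantom
conventional = ctdi_w(center_mgy=29.55, periphery_mgy=30.44)  # ~30.14 mGy
# Reported mean doses for the 3D-printed phantom
printed = ctdi_w(center_mgy=29.01, periphery_mgy=30.59)       # ~30.06 mGy
```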

Exploring the Factors Influencing on the Accuracy of Self-Reported Responses in Affective Assessment of Science (과학과 자기보고식 정의적 영역 평가의 정확성에 영향을 주는 요소 탐색)

  • Chung, Sue-Im;Shin, Donghee
    • Journal of The Korean Association For Science Education / v.39 no.3 / pp.363-377 / 2019
  • This study reveals science-specific aspects of subjectivity in test results when science-related affective characteristics are assessed through self-report items. A science-specific response was defined as a response that appears due to a student's recognition of the nature or characteristics of science when his or her concepts or perceptions about science are being measured. We searched for cases in which science-specific responses interfere with the measurement objective or with accurate self-reporting. Errors due to science-specific factors were derived from quantitative data from 649 students in the 1st and 2nd grades of high school and qualitative data from 44 interviewed students. The perspective on science and the characteristics of science that students internalize from everyday life and science learning experiences interact with the items that form the test tool. As a result, obstacles to accurate self-reporting were found in three aspects: the characteristics of science, personal science experience, and science in the tool. In terms of the characteristics of science, students respond to items regardless of the constructs being measured, because of their subjectively formed views and perceived characteristics of science. The personal science experience factor, representing the learner's side, consists of the student's science motivation, interaction with science experiences, and perception of science and life. Finally, from the instrumental point of view, science in the tool leads to terminological confusion due to the uncertainty of science concepts, ultimately distancing responses from accurate self-reports. Implications from the results of the study are as follows: review of the inclusion of science-specific factors, precautions to clarify the measured construct, checks of science-specificity factors at the development stage, and efforts to bridge the boundary between everyday science and school science.

Performance Evaluation of Reconstruction Algorithms for DMIDR (DMIDR 장치의 재구성 알고리즘 별 성능 평가)

  • Kwak, In-Suk;Lee, Hyuk;Moon, Seung-Cheol
    • The Korean Journal of Nuclear Medicine Technology / v.23 no.2 / pp.29-37 / 2019
  • Purpose: DMIDR (Discovery Molecular Imaging Digital Ready, General Electric Healthcare, USA) is a PET/CT scanner designed to allow application of PSF (Point Spread Function), TOF (Time of Flight), and the Q.Clear algorithm. In particular, Q.Clear is a reconstruction algorithm that can overcome the limitations of OSEM (Ordered Subset Expectation Maximization) and reduce image noise at the voxel level. The aim of this paper is to evaluate the performance of the reconstruction algorithms and to optimize the algorithm combination to improve accurate SUV (Standardized Uptake Value) measurement and lesion detectability. Materials and Methods: A PET phantom was filled with $^{18}F-FDG$ at hot-to-background radioactivity concentration ratios of 2:1, 4:1, and 8:1. Scans were performed using the NEMA protocols. Scan data were reconstructed using combinations of (1) VPFX (VUE Point FX (TOF)), (2) VPHD-S (VUE Point HD+PSF), (3) VPFX-S (TOF+PSF), (4) QCHD-S-400 (VUE Point HD+Q.Clear(${\beta}-strength$ 400)+PSF), (5) QCFX-S-400 (TOF+Q.Clear(${\beta}-strength$ 400)+PSF), (6) QCHD-S-50 (VUE Point HD+Q.Clear(${\beta}-strength$ 50)+PSF), and (7) QCFX-S-50 (TOF+Q.Clear(${\beta}-strength$ 50)+PSF). CR (Contrast Recovery) and BV (Background Variability) were compared, and SNR (Signal to Noise Ratio) and RC (Recovery Coefficient) of counts and SUV were compared respectively. Results: VPFX-S showed the highest CR value for the 10 and 13 mm spheres, and QCFX-S-50 showed the highest value for spheres of 17 mm and larger. In the comparison of BV and SNR, QCFX-S-400 and QCHD-S-400 showed good results. The measured SUV was proportional to the H/B ratio. RC for SUV was inversely proportional to the H/B ratio, and QCFX-S-50 showed the highest value; the Q.Clear reconstructions using a ${\beta}-strength$ of 400 showed lower values. Conclusion: When a higher ${\beta}-strength$ was applied, Q.Clear showed better image quality by reducing noise. Conversely, when a lower ${\beta}-strength$ was applied, Q.Clear showed increased sharpness and decreased PVE (Partial Volume Effect), making it possible to measure SUV with a higher RC than under conventional reconstruction conditions. An appropriate choice among these reconstruction algorithms can improve accuracy and lesion detectability, so it is necessary to optimize the algorithm parameters according to the purpose.
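The CR and BV metrics compared above follow the NEMA NU 2 image-quality definitions: percent contrast recovery relates the measured sphere-to-background count ratio to the true activity ratio, and percent background variability is the background ROI standard deviation over its mean. A sketch of the per-sphere computation (variable and function names are ours):

```python
def contrast_recovery(hot_mean, bkg_mean, true_ratio):
    """NEMA-style percent contrast recovery for a hot sphere.

    hot_mean / bkg_mean: mean counts in the sphere and background ROIs.
    true_ratio: actual hot-to-background activity ratio (2, 4, or 8 here).
    """
    return 100.0 * (hot_mean / bkg_mean - 1.0) / (true_ratio - 1.0)

def background_variability(bkg_sd, bkg_mean):
    """NEMA-style percent background variability: ROI SD over ROI mean."""
    return 100.0 * bkg_sd / bkg_mean
```

A perfectly recovered sphere (measured ratio equal to the true ratio) gives CR = 100%; partial volume effects pull CR below 100% for small spheres, which is why the smallest spheres discriminate between the algorithms.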

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has been rapidly accelerating with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, due to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence, and it aims to enable artificial intelligence agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. Recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task and still requires a lot of effort from experts. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of an article's unifying aspects. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, since the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of appropriate sentences for triple extraction, and value selection and transformation into RDF triple structure. The structures of Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments between CRF and Bi-LSTM-CRF for the knowledge extraction process. Through this proposed process, it is possible to utilize structured knowledge extracted from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required from experts to construct instances according to the ontology schema.
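The training data described above tags each sentence token with BIO labels marking the value span to extract, which is later assembled into a triple. A toy illustration of the tagging scheme (the tag names, helper functions, and example sentence are ours, not the paper's data):

```python
def bio_tag(tokens, value_span, label="VAL"):
    """Assign B-/I-/O tags to tokens; value_span is a (start, end) token index pair."""
    start, end = value_span
    tags = []
    for i in range(len(tokens)):
        if i == start:
            tags.append(f"B-{label}")      # beginning of the value span
        elif start < i < end:
            tags.append(f"I-{label}")      # inside the value span
        else:
            tags.append("O")               # outside any span
    return tags

def extract_value(tokens, tags):
    """Recover the tagged span, e.g. to build an (entity, attribute, value) triple."""
    return [tok for tok, tag in zip(tokens, tags) if tag != "O"]

tokens = ["Seoul", "has", "a", "population", "of", "9.7", "million"]
tags = bio_tag(tokens, (5, 7))
value = extract_value(tokens, tags)        # ["9.7", "million"]
```

A sequence labeler such as CRF or Bi-LSTM-CRF is then trained to predict these tags for unseen sentences, replacing the hand-specified span.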

A Study on the Reproducibility of 3D Shape Model of Garden Cultural Heritage using Photogrammetry with SNS Photographs - Focused on Soswaewon Garden, Damyang(Scenic Site No.40) - (SNS 사진과 사진측량을 이용한 정원유산의 3차원 형상 재현 가능성 연구 - 명승 제40호 담양 소쇄원(潭陽 瀟灑園)을 대상으로 -)

  • Kim, Choong-Sik;Lee, Sang-Ha
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.36 no.4 / pp.94-104 / 2018
  • This study examined photogrammetric reconstruction techniques that can measure the original form of a cultural property using photographs taken in the past. During the research process, photographs taken in the past as well as photographs from the internet of Soswaewon Garden in Damyang (scenic site 40) were collected and used. The landscaping structures Maedae, Aeyangdan, the Ogokmun Wall, and Yakjak, and the natural scenery Gwangseok, all of which can be photographed from any direction over 360 degrees at close or far distances without obstructions, were selected and tested for the possibility of reproducing three-dimensional shapes. The photography methods of 151 landscape photographs (58.6%) of these five subjects collected from internet portal sites, which contained information on the shooting date, focal length, and exposure, were analyzed. The analysis revealed that the majority of the photographs tend to focus on the important parts of each subject, and that internet users preferred two or three photography methods for each landscape subject. For the experiment, photographs in which a single scene consistently appeared for each subject and which matched a highly preferred photography method were analyzed, and three-dimensional mesh shape models were produced with the PhotoScan program to assess the reproducibility of three-dimensional shapes. Based on the results, it was relatively feasible to reproduce three-dimensional shapes for artifacts such as the Ogokmun Wall, Maedae, and Aeyangdan, but impossible for natural scenery or objects with uniform texture such as Yakjak and Gwangseok. When reconstruction was attempted with photographs taken on site using a photography method similar to that of the previously selected photographs, the three-dimensional shapes of Yakjak and Gwangseok, which could not be reproduced from the collected photographs, were successfully reproduced. In addition, through comparison of past and present images, it was possible to measure exact sizes and to detect changes that have taken place. If past photographs of cultural properties taken by tourists or landscape architects can be obtained, the three-dimensional shapes from a particular period can be reproduced. If this technology becomes widespread, it will increase the accuracy and reliability of measuring the past shapes of cultural landscape properties and examining changes to them.

Analysis of the Operation Status and Function based on the Overseas Accident Investigation Agency (국외 재난원인조사기구의 운영 현황 및 기능분석)

  • Lee, Kyung-Su;Yang, Seung-Ho;Kim, Yeon-Ju;Park, Jihye;Kim, Tai-Hoon;Kim, Hyunju
    • Journal of the Society of Disaster Information / v.17 no.3 / pp.442-453 / 2021
  • Purpose: The objective of this study is to suggest a desirable direction for a Korean accident investigation organization by analyzing the operation status and practices of investigation agencies in developed countries. Method: To accomplish this objective, we examined four main characteristics of the accident investigation agencies of the U.S., Japan, and Sweden, focusing on (1) the background of their establishment, (2) organizational structure, (3) major tasks and functions, and (4) accident investigation procedures. Result: First, the purpose of their establishment and tasks is to prevent the recurrence of disasters and accidents while contributing to governmental safety through administrative and research duties such as improving legal systems and policies, issuing recommendations, and conducting scientific analysis of disaster causes. Second, each agency is operated as an independent organization under the president, not belonging to a ministry, to enable fair investigation from an impartial position. Third, each agency's investigation results carry authority because their expertise is recognized; in other words, it is operated as a permanent organization with professional personnel and secures authority through in-depth accident investigations and high-quality recommendations. Conclusion: The overseas investigation agencies rapidly manage and coordinate their operational practices in order to resolve national requirements and social conflicts with fairness, accuracy, and expertise in accident investigations. To prevent the recurrence of similar events, Korea needs to efficiently restructure the investigative functions currently distributed across government departments. In addition, institutional improvements are needed to enable general coordination at the national level and to organize and operate a control tower for when accidents happen.

Development of Digital Transceiver Unit for 5G Optical Repeater (5G 광중계기 구동을 위한 디지털 송수신 유닛 설계)

  • Min, Kyoung-Ok;Lee, Seung-Ho
    • Journal of IKEEE / v.25 no.1 / pp.156-167 / 2021
  • In this paper, we propose a digital transceiver unit design for in-building 5G optical repeaters that extends the coverage of 5G mobile communication network services and provides a stable wireless connection inside a building. The proposed digital transceiver unit consists of four blocks: a signal processing unit, an RF transceiver unit, an optical input/output unit, and a clock generation unit. The signal processing unit plays an important role, handling the basic operation of the CPRI interface, the combination of the 4-channel antenna signals, and responses to external control commands; it also transmits and receives high-quality IQ data through the JESD204B interface, and its CFR and DPD blocks operate to protect the power amplifier. The RF transceiver AD-converts the RF signal received from the antenna and transmits it to the signal processing unit through the JESD204B interface, and DA-converts the digital signal received from the signal processing unit through the JESD204B interface and transmits the resulting RF signal to the antenna. The optical input/output unit converts electrical signals into optical signals for transmission and optical signals into electrical signals for reception. The clock generator suppresses jitter of the synchronous clock supplied from the CPRI interface of the optical input/output unit and supplies a stable synchronous clock to the signal processing unit and the RF transceiver; before the CPRI connection is established, a local clock is supplied so that the unit operates in a CPRI connection-ready state. To evaluate the accuracy of the proposed digital transceiver unit, the XCZU9CG-2FFVC900I from Xilinx's MPSoC series was used, with Vivado 2018.3 as the design tool. The proposed unit converts the 5G RF signal input at the ADC into digital data and transmits it to the JIG through CPRI, and outputs the downlink data signal received from the JIG through CPRI to the DAC; performance was evaluated on this path. The experimental results showed that flatness, Return Loss, Channel Power, ACLR, EVM, Frequency Error, and other metrics met the target values.
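EVM, one of the figures of merit listed above, is the RMS magnitude of the error between measured and reference IQ constellation points relative to the reference power. A minimal numpy sketch of the computation (signal names and the QPSK example are illustrative, not taken from the paper's measurement setup):

```python
import numpy as np

def evm_percent(measured_iq, reference_iq):
    """RMS error vector magnitude of measured vs. reference IQ samples, in percent."""
    measured = np.asarray(measured_iq, dtype=complex)
    reference = np.asarray(reference_iq, dtype=complex)
    error_power = np.mean(np.abs(measured - reference) ** 2)
    ref_power = np.mean(np.abs(reference) ** 2)
    return 100.0 * np.sqrt(error_power / ref_power)

# Ideal QPSK constellation points as the reference signal
ref = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
```

A distortion-free receive chain reproduces the reference exactly and yields 0% EVM; a uniform 10% amplitude error yields 10% EVM.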

Very short-term rainfall prediction based on radar image learning using deep neural network (심층신경망을 이용한 레이더 영상 학습 기반 초단시간 강우예측)

  • Yoon, Seongsim;Park, Heeseong;Shin, Hongjoon
    • Journal of Korea Water Resources Association / v.53 no.12 / pp.1159-1172 / 2020
  • This study applied deep convolutional neural networks based on U-Net and SegNet, trained on a long period of weather radar data, to very short-term rainfall prediction, and the results were compared and evaluated against a translation model. For training and validation of the deep neural networks, Mt. Gwanak and Mt. Gwangdeoksan radar data were collected from 2010 to 2016 and converted to gray-scale image files in HDF5 format with 1 km spatial resolution. The deep neural network model was trained to predict precipitation 10 minutes ahead from four consecutive radar images, and a recursive method of repeated forecasts was applied with the pretrained model to extend the lead time to 60 minutes. To evaluate the prediction model, 24 rain cases in 2017 were forecast up to 60 minutes in advance. Evaluating the predictions by mean absolute error (MAE) and critical success index (CSI) at thresholds of 0.1, 1, and 5 mm/hr, the deep neural network model performed better in terms of MAE at the 0.1 and 1 mm/hr thresholds, and better than the translation model in terms of CSI up to a lead time of 50 minutes. In particular, although the deep neural network model generally outperformed the translation model for weak rainfall of 5 mm/hr or less, the evaluation at the 5 mm/hr threshold showed that it had limitations in predicting distinct high-intensity precipitation: the longer the lead time, the greater the spatial smoothing, which reduces the accuracy of rainfall prediction. The translation model turned out to be superior in predicting exceedance of higher intensity thresholds (> 5 mm/hr) because it preserves distinct precipitation characteristics, but its rainfall position tends to shift incorrectly. This study is expected to be helpful for the future improvement of radar rainfall prediction models using deep neural networks. In addition, the massive weather radar data set established in this study will be provided through open repositories for use in subsequent studies.
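The MAE and CSI scores used above compare forecast and observed rain fields cell by cell; CSI counts hits, misses, and false alarms against a rain-rate threshold. A minimal numpy sketch (function names are ours):

```python
import numpy as np

def mae(forecast, observed):
    """Mean absolute error between forecast and observed rain fields (mm/hr)."""
    return float(np.mean(np.abs(np.asarray(forecast) - np.asarray(observed))))

def csi(forecast, observed, threshold):
    """Critical Success Index: hits / (hits + misses + false alarms)
    for rain at or above `threshold` mm/hr."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)            # rain forecast and observed
    misses = np.sum(~f & o)         # rain observed but not forecast
    false_alarms = np.sum(f & ~o)   # rain forecast but not observed
    denom = hits + misses + false_alarms
    return float(hits / denom) if denom else float("nan")
```

CSI is 1.0 for a perfect forecast and falls toward 0 as misses and false alarms accumulate, which is why the spatially smoothed neural network forecasts lose CSI at the high 5 mm/hr threshold.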