• Title/Summary/Keyword: Converting machine

Implementation of the high speed signal processing hardware system for Color Line Scan Camera (Color Line Scan Camera를 위한 고속 신호처리 하드웨어 시스템 구현)

  • Park, Se-hyun;Geum, Young-wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.9 / pp.1681-1688 / 2017
  • In this paper, we implemented a high-speed signal processing hardware system for a color line scan camera using an FPGA and NOR flash. Existing systems rely mainly on software-based processing with a high-speed DSP and detect defects through separate per-channel RGB logic; instead, we propose defect detection hardware built from an RGB-to-HSL hardware converter, a FIFO, an HSL full-color defect decoder, and an image frame buffer. The defect detection hardware consists of a hardware look-up table that converts RGB to HSL and a high-resolution 4K HSL full-color defect decoder. In addition, the image frame buffer enables comprehensive processing of two-dimensional images accumulated from line data, instead of local processing of individual lines. The implemented system can be applied effectively to a grain sorting machine for sorting peanuts.
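
For reference, below is a minimal software sketch of the RGB-to-HSL mapping that such a look-up table would tabulate; the 8-bit channel width and the NOR-flash storage note are assumptions for illustration, not the authors' exact design.

```python
# Sketch of the RGB -> HSL conversion a hardware LUT could precompute.
# 8-bit input channels are assumed; hue is returned in degrees.

def rgb_to_hsl(r8: int, g8: int, b8: int) -> tuple[float, float, float]:
    r, g, b = r8 / 255.0, g8 / 255.0, b8 / 255.0
    cmax, cmin = max(r, g, b), min(r, g, b)
    delta = cmax - cmin
    lightness = (cmax + cmin) / 2.0
    if delta == 0.0:
        return 0.0, 0.0, lightness                  # achromatic pixel
    saturation = delta / (1.0 - abs(2.0 * lightness - 1.0))
    if cmax == r:
        hue = 60.0 * (((g - b) / delta) % 6.0)
    elif cmax == g:
        hue = 60.0 * (((b - r) / delta) + 2.0)
    else:
        hue = 60.0 * (((r - g) / delta) + 4.0)
    return hue, saturation, lightness

# A full LUT would evaluate this offline for all 2^24 RGB codes and store
# the quantized HSL triplets in NOR flash for constant-time lookup.
print(rgb_to_hsl(200, 120, 60))
```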

A Converting Method from Topic Maps to RDFs without Structural Warp and Semantic Loss (NOWL: 구조 왜곡과 의미 손실 없이 토픽 맵을 RDF로 변환하는 방법)

  • Shin Shinae;Jeong Dongwon;Baik Doo-Kwon
    • Journal of KIISE:Databases / v.32 no.6 / pp.593-602 / 2005
  • The need for a machine-understandable Web (the Semantic Web) is increasing so that users can understand Web information resources precisely, and there are currently two main approaches to the problem. One is the Topic Map, developed by ISO/IEC JTC 1, and the other is RDF (Resource Description Framework), a W3C standard. Because the Semantic Web must support all metadata of Web information resources, interoperability between Topic Maps and RDF is required. To address this issue, several conversion methods have been proposed; however, they suffer from problems such as semantic loss, complicated structures, and unnecessary nodes. In this paper, a new method called NOWL (NO structural Warp and semantics Loss) is proposed to resolve parts of these problems. Compared with previous research, NOWL preserves the structure of the original Topic Map instance and eliminates unnecessary nodes.
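
As a concrete illustration of the Topic Map-to-RDF direction, the sketch below maps a single topic with a name and an occurrence into RDF triples using rdflib; the predicate vocabulary is hypothetical and is not the NOWL mapping itself.

```python
# Minimal illustration of converting one topic (with a name and an
# occurrence) into RDF triples. The EX predicates are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/topicmap/")

g = Graph()
topic = URIRef(EX["puccini"])                       # a topic becomes an RDF resource
g.add((topic, RDF.type, EX.Topic))
g.add((topic, RDFS.label, Literal("Giacomo Puccini")))            # topic name
g.add((topic, EX.occurrence, URIRef("http://example.org/operas/tosca")))

print(g.serialize(format="turtle"))
```

A conversion method's quality then hinges on doing this for every Topic Map construct without introducing intermediate blank nodes (the "unnecessary nodes" criticized above) or flattening the original structure.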

Modern Paper Quality Control

  • Komppa, Olavi
    • Journal of Korea Technical Association of The Pulp and Paper Industry / v.32 no.5 / pp.72-79 / 2000
  • On the other hand, the fiber orientation at the surface and in the middle layer of the sheet controls the bending stiffness of paperboard. Therefore, a reliable measurement of paper surface fiber orientation gives us a powerful tool to investigate and predict paper curling and cockling tendency, and provides the information necessary to fine-tune the manufacturing process for optimum quality. Many papers, especially heavily calendered and coated grades, strongly resist liquid and gas penetration, falling beyond the measurement range of traditional instruments or resulting in inconveniently long measuring times per sample. The increased surface hardness and the use of filler minerals and mechanical pulp make a reliable, non-leaking contact between the sample and the measurement head a challenge of its own. Paper surface coating creates, as expected, a layer whose permeability characteristics differ completely from those of the other layers of the sheet. The latest developments in sensor technology have made it possible to measure gas flow reliably under well-controlled conditions, allowing us to investigate the gas penetration of open structures such as cigarette paper, tissue, or sack paper, and, in the low-permeability range, to analyze even fully greaseproof papers, silicone papers, heavily coated papers and boards, or even to detect defects in barrier coatings. Even nitrogen or helium may be used as the gas, giving us completely new possibilities to rank products or to find correlations with critical process or converting parameters. All modern paper machines include many on-line measuring instruments that supply the information needed by automatic process control systems. Hence, the reliability of the information obtained from these sensors is vital for good optimization and process stability. If any of these on-line sensors does not operate as planned (even a small measurement error or malfunction), the process control will set the machine to operate away from the optimum, resulting in loss of profit or eventual problems in quality or runnability. To assure optimum operation of the paper machines, a novel quality assurance policy for the on-line measurements has been developed, including control procedures that use traceable, accredited standards for the best reliability and performance.

Suggestions for the Development of RegTech Based Ontology and Deep Learning Technology to Interpret Capital Market Regulations (레그테크 기반의 자본시장 규제 해석 온톨로지 및 딥러닝 기술 개발을 위한 제언)

  • Choi, Seung Uk;Kwon, Oh Byung
    • The Journal of Information Systems / v.30 no.1 / pp.65-84 / 2021
  • Purpose: Building on developments in artificial intelligence and big data technologies, RegTech has emerged to reduce regulatory costs and to enable efficient supervision by regulatory bodies. The word RegTech is a combination of regulation and technology; it means using technological methods to facilitate the implementation of regulations and to make their surveillance and supervision efficient. The purpose of this study is to describe the recent adoption of RegTech and to provide basic examples of applying RegTech to capital market regulations. Design/methodology/approach: English-based ontology and deep learning technologies are well developed in practice, and extending them to European or Latin American languages that are grammatically similar to English will not be difficult. However, they are not easy to apply to most Asian languages, such as Korean, which have different grammatical rules. In addition, in the early stages of adoption, companies, financial institutions, and regulators will not be familiar with such machine-based reporting systems. There is a need to establish an ecosystem that facilitates the adoption of RegTech by consulting and supporting the stakeholders. In this paper, we provide a simple example showing a procedure for applying RegTech to recognize and interpret Korean-language capital market regulations. Specifically, we present the process of converting sentences in regulations into a meta-language through morpheme analysis, and we then conduct deep learning analyses to determine whether a regulatory sentence exists in each regulatory paragraph. Findings: This study illustrates the applicability of RegTech-based ontology and deep learning technologies to Korean-language capital market regulations.
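
As a rough illustration of the morpheme-analysis step, the sketch below tags a Korean regulatory-style sentence with KoNLPy's Okt analyzer; the sample sentence, the choice of analyzer, and the idea of treating (morpheme, tag) pairs as meta-language tokens are all assumptions, not the authors' pipeline.

```python
# Morpheme analysis of a Korean regulatory-style sentence with KoNLPy.
# The sentence and the meta-language token scheme are illustrative only.
from konlpy.tag import Okt

okt = Okt()
sentence = "금융투자업자는 투자자에게 손실을 보전하여 줄 것을 약속하여서는 아니 된다."

# Each (morpheme, part-of-speech) pair can serve as one meta-language token
# in a stream fed to a downstream sentence classifier.
for morpheme, pos in okt.pos(sentence):
    print(f"{morpheme}\t{pos}")
```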

Comparison of image quality according to activation function during Super Resolution using ESPCN (ESPCN을 이용한 초해상화 시 활성화 함수에 따른 이미지 품질의 비교)

  • Song, Moon-Hyuk;Song, Ju-Myung;Hong, Yeon-Jo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.129-132 / 2022
  • Super-resolution is the process of converting a low-resolution image into a high-resolution image. This study was conducted using ESPCN. In a super-resolution deep neural network, the same input can yield output images of different quality depending on the activation function applied at each node. Therefore, the purpose of this study is to find the most suitable activation function for super-resolution by applying ReLU, ELU, and Swish and comparing the quality of the output images for the same inputs. The CelebA dataset was used. During pre-processing, images were cropped to a square and then degraded in quality; the degraded images were used as input and the original images were used for evaluation. As a result, ELU and Swish took longer to train than ReLU, the activation most commonly used in machine learning, but showed better performance.
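
For context, below is a minimal ESPCN-style model sketch in Keras with the activation function exposed as a parameter; the layer sizes follow the original ESPCN paper, while the optimizer and loss here are assumptions, not necessarily the study's configuration.

```python
# ESPCN-style sub-pixel super-resolution model with a swappable activation.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_espcn(scale: int = 3, channels: int = 1,
                activation: str = "relu") -> tf.keras.Model:
    inputs = layers.Input(shape=(None, None, channels))
    x = layers.Conv2D(64, 5, padding="same", activation=activation)(inputs)
    x = layers.Conv2D(32, 3, padding="same", activation=activation)(x)
    # Produce scale^2 * channels feature maps, then rearrange them into pixels.
    x = layers.Conv2D(channels * scale ** 2, 3, padding="same")(x)
    outputs = layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(x)
    return models.Model(inputs, outputs)

# "relu", "elu", and "swish" are all valid Keras activation identifiers,
# so the three variants compared in the study differ by one argument.
model = build_espcn(activation="swish")
model.compile(optimizer="adam", loss="mse")
```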

A Hybrid Semantic-Geometric Approach for Clutter-Resistant Floorplan Generation from Building Point Clouds

  • Kim, Seongyong;Yajima, Yosuke;Park, Jisoo;Chen, Jingdao;Cho, Yong K.
    • International conference on construction engineering and project management / 2022.06a / pp.792-799 / 2022
  • Building Information Modeling (BIM) technology is a key component of modern construction engineering and project management workflows. As-is BIM models that represent the spatial reality of a project site offer crucial information to stakeholders for construction progress monitoring, error checking, and building maintenance. Geometric methods for automatically converting raw scan data into BIM models (Scan-to-BIM) often fail to use the higher-level semantic information in the data, whereas semantic segmentation methods only output labels at the point level without creating the object-level models necessary for BIM. To address these issues, this research proposes a hybrid semantic-geometric approach for clutter-resistant floorplan generation from laser-scanned building point clouds. The input point clouds are first pre-processed by normalizing the coordinate system and removing outliers. Then, a semantic segmentation network based on PointNet++ labels each point as ceiling, floor, wall, door, stair, or clutter. The clutter points are removed, while the wall, door, and stair points are used for 2D floorplan generation. A region-growing segmentation algorithm paired with geometric reasoning rules groups the points into individual building elements. Finally, a 2-fold Random Sample Consensus (RANSAC) algorithm parameterizes the building elements into 2D lines, which are used to create the output floorplan. The proposed method is evaluated using the metrics of precision, recall, Intersection-over-Union (IoU), Betti error, and warping error.
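
The final step above parameterizes wall points into 2D lines with RANSAC; the sketch below shows a generic RANSAC line fit in NumPy under assumed thresholds, not the paper's exact 2-fold variant.

```python
# RANSAC line fitting on 2D wall points (pure NumPy; the distance
# threshold and iteration count are illustrative, not the paper's values).
import numpy as np

def ransac_line(points: np.ndarray, n_iters: int = 200, tol: float = 0.05):
    """Fit a line to (N, 2) points; return (direction, anchor point, inlier mask)."""
    rng = np.random.default_rng(0)
    best_dir, best_p = None, None
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        direction = p2 - p1
        norm = np.linalg.norm(direction)
        if norm < 1e-9:
            continue                                 # degenerate sample
        direction /= norm
        normal = np.array([-direction[1], direction[0]])
        dist = np.abs((points - p1) @ normal)        # point-to-line distance
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_dir, best_p, best_inliers = direction, p1, inliers
    return best_dir, best_p, best_inliers

# A "2-fold" scheme in this spirit could refit on the inliers for precision,
# then repeat the search on the remaining points for the next wall segment.
```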

Enhancing Acute Kidney Injury Prediction through Integration of Drug Features in Intensive Care Units

  • Gabriel D. M. Manalu;Mulomba Mukendi Christian;Songhee You;Hyebong Choi
    • International journal of advanced smart convergence / v.12 no.4 / pp.434-442 / 2023
  • The relationship between acute kidney injury (AKI) prediction and nephrotoxic drugs, i.e., drugs that adversely affect kidney function, has yet to be explored in the critical care setting. One contributing factor to this gap is the limited investigation of drug modalities in the intensive care unit (ICU) context, owing to the challenges of processing prescription data into corresponding drug representations and a lack of comprehensive understanding of those representations. This study addresses the gap by proposing a novel approach that leverages patient prescription data as a modality to improve existing models for AKI prediction. We base our research on Electronic Health Record (EHR) data, extracting the relevant prescription information and converting it into our selected drug representation, the extended-connectivity fingerprint (ECFP). Furthermore, we adopt a multimodal approach, developing machine learning models and 1D convolutional neural networks (CNNs) applied to clinical drug representations, a procedure that has not been used in any previous study predicting AKI. The findings show a notable improvement in AKI prediction from integrating drug embeddings with other patient cohort features. Using drug features represented as ECFP molecular fingerprints along with common cohort features such as demographics and lab test values, we achieved a considerable improvement in model performance for the AKI prediction task over a baseline model without the drug representations, indicating that our approach enhances existing baseline techniques and highlighting the relevance of drug data for predicting AKI in the ICU setting.
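
For illustration, the sketch below converts one example drug into an ECFP bit vector with RDKit; the radius-2, 2048-bit settings (i.e., ECFP4) are a common convention assumed here, not necessarily the study's exact parameters.

```python
# Converting a prescription drug (as a SMILES string) into an
# extended-connectivity fingerprint (ECFP) feature vector with RDKit.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

smiles = "CC(=O)Nc1ccc(O)cc1"                 # acetaminophen, as an example drug
mol = Chem.MolFromSmiles(smiles)
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)   # ECFP4

features = np.zeros((2048,), dtype=np.int8)   # 2048-dim binary feature vector
DataStructs.ConvertToNumpyArray(fp, features)
print(features.shape, int(features.sum()))
```

Vectors like this one can then be concatenated with demographic and lab-test features, or stacked per patient as input to a 1D CNN.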

Predicting Changes in Restaurant Business District by Administrative Districts in Seoul using Deep Learning (딥러닝 기반 서울시 행정동별 외식업종 상권 변화 예측)

  • Jiyeon Kim;Sumin Oh;Minseo Park
    • The Journal of the Convergence on Culture Technology / v.10 no.2 / pp.459-463 / 2024
  • Frequent closures among self-employed individuals lead to national economic losses. Given the high closure rate in the restaurant industry, predicting changes in this sector is crucial for business survival. While research on the factors affecting restaurant survival is active, studies that predict changes in commercial districts are lacking. This study therefore focuses on forecasting such changes, designing a deep learning model for commercial district changes in each of Seoul's administrative districts. It collects second-quarter variables from 2022 and 2023 related to these changes and converts the year-over-year fluctuations into percentages for data augmentation. The proposed deep learning model predicts changes in restaurant commercial districts; policies informed by this study could support restaurant industry growth and economic development.
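
A minimal sketch of the percentage-conversion step described above is given below; the column names and figures are hypothetical, not the study's actual variables.

```python
# Converting year-over-year changes into percentages with pandas.
# District names, columns, and values are placeholders.
import pandas as pd

df = pd.DataFrame({
    "district": ["Yeoksam-dong", "Mangwon-dong"],
    "sales_2022q2": [120_000, 80_000],
    "sales_2023q2": [138_000, 72_000],
})

# Year-over-year fluctuation expressed as a percentage of the base year.
df["sales_pct_change"] = (
    (df["sales_2023q2"] - df["sales_2022q2"]) / df["sales_2022q2"] * 100
)
print(df[["district", "sales_pct_change"]])   # +15.0% and -10.0%
```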

Metabolic Diseases Classification Models according to Food Consumption using Machine Learning (머신러닝을 활용한 식품소비에 따른 대사성 질환 분류 모델)

  • Hong, Jun Ho;Lee, Kyung Hee;Lee, Hye Rim;Cheong, Hwan Suk;Cho, Wan-Sup
    • The Journal of the Korea Contents Association / v.22 no.3 / pp.354-360 / 2022
  • Metabolic syndrome has a prevalence of 26% among Koreans and is defined as exhibiting at least three of the following five conditions at the same time: abdominal obesity, hypertension, impaired fasting glucose, high triglycerides, and low HDL cholesterol. This paper links the consumer panel data of the Rural Development Administration (RDA) with the medical care data of the National Health Insurance Service (NHIS) to build classification models that separate a metabolic disease group from a control group based on food consumption characteristics, and compares the differences. Most existing domestic and foreign studies relating metabolic diseases to food consumption examine disease correlations for specific food groups and specific ingredients, whereas this paper considers all food groups included in the general diet. We created classification models using logistic regression, a decision tree, and XGBoost. Of the three, the XGBoost model achieved the highest precision, but its accuracy was still below 0.7. As future work, it is necessary to extend the observation period of food consumption in the patient group to more than five years and to study metabolic disease classification after converting the foods consumed into nutritional characteristics.
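
The sketch below wires up the three classifiers named above on synthetic data; the feature matrix, labels, and hyperparameters are placeholders, not the study's linked RDA-NHIS data.

```python
# Comparing logistic regression, a decision tree, and XGBoost on a
# synthetic stand-in for per-food-group consumption features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 20))            # placeholder food-consumption features
y = rng.integers(0, 2, 1000)          # 1 = metabolic disease group, 0 = control

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(max_depth=5),
              XGBClassifier(n_estimators=200, eval_metric="logloss")):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, round(model.score(X_te, y_te), 3))
```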

Development of a polystyrene phantom for quality assurance of a Gamma Knife®

  • Yona Choi;Kook Jin Chun;Jungbae Bahng;Sang Hyoun Choi;Gyu Seok Cho;Tae Hoon Kim;Hye Jeong Yang;Yeong Chan Seo;Hyun-Tai Chung
    • Nuclear Engineering and Technology / v.55 no.8 / pp.2935-2940 / 2023
  • A polystyrene phantom was developed following the guidance of the International Atomic Energy Agency (IAEA) for gamma knife (GK) quality assurance. Its performance was assessed by measuring the absorbed dose rate to water and dose distributions. The phantom was made of polystyrene, which has a relative electron density (1.0156) similar to that of water. It comprises one outer phantom and four inner phantoms: two inner phantoms hold PTW T31010 and Exradin A16 ion chambers, one holds a film in the XY plane of the Leksell coordinate system, and another holds a film in the YZ or ZX plane. The absorbed dose rate to water and the beam profiles of the machine-specific reference (msr) field, namely the 16 mm collimator field of a GK Perfexion™ or Icon™, were measured at seven GK sites, and the results were compared with those of an IAEA-recommended solid water (SW) phantom. The radius of the polystyrene phantom was determined to be 7.88 cm by converting with the electron density of the plastic, considering a water depth of 8 g/cm². The absorbed dose rates to water measured in both phantoms differed from the treatment planning program by less than 1.1%. Before msr correction, the PTW T31010 dose rates (PTW Freiburg GmbH, New York, NY, USA) in the polystyrene phantom were 0.70 (0.29)% higher on average than those in the SW phantom, and the Exradin A16 (Standard Imaging, Middleton, WI, USA) dose rates were 0.76 (0.32)% higher. After msr correction factors were applied, there were no statistically significant differences in the A16 dose rates measured in the two phantoms; however, the T31010 dose rates remained 0.72 (0.29)% higher in the polystyrene phantom. When the full widths at half maximum and the penumbras of the msr field were compared, no significant differences between the two phantoms were observed except for the penumbra along the Y axis, and that difference was smaller than the variation among sites. The polystyrene phantom thus showed dosimetric performance comparable to that of a commercial SW phantom. In addition to its cost effectiveness, the polystyrene phantom removes the air space around the detector. Additional simulations of the msr correction factors for the polystyrene phantom should be performed.
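
As a worked check of the radius quoted above, scaling the 8 g/cm² water-equivalent depth by the relative electron density of polystyrene reproduces the 7.88 cm figure; this is a sketch of that one conversion, not the full phantom design calculation.

```python
# Water-equivalent depth of 8 g/cm^2 corresponds to 8 cm in water
# (density 1 g/cm^3); dividing by the relative electron density of
# polystyrene gives the physical radius of the phantom.
water_equivalent_depth = 8.0        # g/cm^2
relative_electron_density = 1.0156  # polystyrene relative to water

radius_cm = water_equivalent_depth / relative_electron_density
print(f"{radius_cm:.2f} cm")        # -> 7.88 cm, matching the abstract
```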