• Title/Summary/Keyword: value engineering

Estimation of the Input Wave Height of the Wave Generator for Regular Waves by Using Artificial Neural Networks and Gaussian Process Regression (인공신경망과 가우시안 과정 회귀에 의한 규칙파의 조파기 입력파고 추정)

  • Jung-Eun, Oh;Sang-Ho, Oh
    • Journal of Korean Society of Coastal and Ocean Engineers / v.34 no.6 / pp.315-324 / 2022
  • Experimental data obtained in a wave flume were analyzed with machine learning techniques to establish a model that predicts the input wave height of the wavemaker from waves that have undergone shoaling, and to verify the performance of the established model. For this purpose, an artificial neural network (NN), the most representative machine learning technique, and Gaussian process regression (GPR), a non-parametric regression method, were each applied, and the predictive performance of the two models was compared. The analysis was performed independently for the case where all the data were used at once and for the case where the data were classified by a criterion related to the occurrence of wave breaking. When the data were not classified, the error between the input wave height at the wavemaker and the measured value was relatively large for both the NN and GPR models. When the data were divided into non-breaking and breaking conditions, however, the accuracy of predicting the input wave height improved greatly. Of the two models, the GPR model showed better overall performance than the NN model.
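
A minimal sketch of the comparison described above, assuming scikit-learn and synthetic flume data (the feature set, the toy breaking criterion, and all numbers are illustrative, not the authors'):

```python
# Compare an MLP and Gaussian process regression for predicting wavemaker
# input height, with the data split by a breaking/non-breaking flag.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical features: measured wave height, period, water depth.
X = rng.uniform([0.05, 1.0, 0.3], [0.4, 3.0, 0.8], size=(500, 3))
breaking = X[:, 0] / X[:, 2] > 0.5            # toy breaking criterion (H/h)
y = X[:, 0] * (1.2 + 0.3 * breaking) + rng.normal(0, 0.005, 500)

for label, mask in [("non-breaking", ~breaking), ("breaking", breaking)]:
    Xtr, Xte, ytr, yte = train_test_split(X[mask], y[mask], random_state=0)
    nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                      random_state=0).fit(Xtr, ytr)
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                   normalize_y=True).fit(Xtr, ytr)
    print(label,
          "NN RMSE:", mean_squared_error(yte, nn.predict(Xte)) ** 0.5,
          "GPR RMSE:", mean_squared_error(yte, gpr.predict(Xte)) ** 0.5)
```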

Estimation of Illuminant Chromaticity by Equivalent Distance Reference Illumination Map and Color Correlation (균등거리 기준 조명 맵과 색 상관성을 이용한 조명 색도 추정)

  • Kim Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.12 no.6 / pp.267-274 / 2023
  • In this paper, a method for estimating the illuminant chromaticity of a scene from an input image is proposed. The illuminant chromaticity is estimated using illuminant reference regions. The conventional method uses a fixed number of reference illuminants: by comparing the chromaticity distribution of pixels from the input image with chromaticity sets prepared in advance for the reference illuminants, the reference illuminant with the largest overlapping area is taken as the scene illuminant for the input image. In calculating the overlapping area, the weight for each reference illuminant is applied in the form of a Gaussian distribution, but no clear standard for the variance value has been presented. The proposed method extracts an independent reference chromaticity region for each reference illuminant, computes characteristic values in the r-g chromaticity plane of the RGB color coordinate system for all pixels of the input image, and then evaluates the similarity between the independent chromaticity regions and the features of the input image; the illuminant with the highest similarity is taken as the illuminant chromaticity of the image. Evaluated on database images, the proposed method showed an improvement of about 60% on average over the conventional basic method and of about 53% over the conventional method with a Gaussian weight of 0.1.
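
The r-g chromaticity computation and overlap scoring can be illustrated with a short sketch; the histogram binning, the intersection measure, and the reference set below are assumptions, not the paper's implementation:

```python
# Build an r-g chromaticity histogram for an image and score it against
# reference illuminant chromaticity regions by histogram overlap.
import numpy as np

def rg_histogram(rgb, bins=32):
    """2-D histogram of r-g chromaticities, r = R/(R+G+B), g = G/(R+G+B)."""
    rgb = rgb.reshape(-1, 3).astype(float)
    s = rgb.sum(axis=1)
    valid = s > 0
    r, g = rgb[valid, 0] / s[valid], rgb[valid, 1] / s[valid]
    h, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return h / h.sum()

def estimate_illuminant(image, reference_hists):
    """Pick the reference illuminant whose chromaticity region overlaps most."""
    h = rg_histogram(image)
    overlaps = {name: np.minimum(h, ref).sum()      # histogram intersection
                for name, ref in reference_hists.items()}
    return max(overlaps, key=overlaps.get)

# Toy usage with random data standing in for real images and references.
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
refs = {"D65": rg_histogram(np.random.randint(0, 256, (64, 64, 3))),
        "A": rg_histogram(np.random.randint(0, 256, (64, 64, 3)))}
print(estimate_illuminant(img, refs))
```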

Real Estate Asset NFT Tokenization and FT Asset Portfolio Management (부동산 유동화 NFT와 FT 분할 거래 시스템 설계 및 구현)

  • Young-Gun Kim;Seong-Whan Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.9 / pp.419-430 / 2023
  • Currently, NFTs have no dominant application beyond proving ownership of digital content, and they also suffer from low liquidity, which makes their prices difficult to predict. Real estate usually has very high barriers to investment because of its high price. Real estate can be converted into NFTs and then divided into small-value fungible tokens (FTs), which can enlarge the investor community through greater liquidity and better accessibility. In this paper, we design and implement a system that allows ordinary users to invest in high-priced real estate through a Black-Litterman (BL) model-based portfolio investment interface. To this end, we target a set of real estate properties pegged as collateral and issue NFTs for the collateral using a blockchain. We use an oracle to obtain current real estate information and to monitor varying real estate prices. After tokenizing real estate into NFTs, we divide the NFTs into FTs at easily accessible prices, thereby lowering the entry price and providing large liquidity while limiting price volatility. In addition, we implemented a BL-based asset portfolio interface for effective portfolio composition, so that investors with small funds can invest in multiple properties; using the BL model, investors can fix their asset portfolios. We implemented the whole system with Solidity smart contracts on a Flask web framework, using public data portals as oracle interfaces.
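
The Black-Litterman step the system relies on can be sketched independently of the blockchain components; the covariance matrix, the single view, and the risk-aversion figure below are toy values, not the paper's data:

```python
# Black-Litterman posterior returns for a two-asset portfolio (toy numbers).
import numpy as np

tau = 0.05
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])   # asset return covariance
w_mkt = np.array([0.6, 0.4])                     # market-cap weights
delta = 2.5                                      # risk-aversion coefficient
pi = delta * Sigma @ w_mkt                       # implied equilibrium returns

# One investor view: asset 0 will outperform asset 1 by 2%.
P = np.array([[1.0, -1.0]])
q = np.array([0.02])
Omega = tau * (P @ Sigma @ P.T)                  # view uncertainty (1x1)

ts_inv = np.linalg.inv(tau * Sigma)
middle = np.linalg.inv(ts_inv + P.T @ np.linalg.inv(Omega) @ P)
mu_bl = middle @ (ts_inv @ pi + P.T @ np.linalg.inv(Omega) @ q)

w_bl = np.linalg.inv(delta * Sigma) @ mu_bl      # unconstrained optimal weights
print("posterior returns:", mu_bl, "weights:", w_bl)
```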

P-Impedance Inversion in the Shallow Sediment of the Korea Strait by Integrating Core Laboratory Data and the Seismic Section (심부 시추코어 실험실 분석자료와 탄성파 탐사자료 통합 분석을 통한 대한해협 천부 퇴적층 임피던스 도출)

  • Snons Cheong;Gwang Soo Lee;Woohyun Son;Gil Young Kim;Dong Geun Yoo;Yunseok Choi
    • Geophysics and Geophysical Exploration / v.26 no.3 / pp.138-149 / 2023
  • In geoscience and engineering, characterizing sediment strata is crucial, and it becomes possible when reliable borehole logging and seismic data are available. To investigate the characteristics of the shallow strata in the Korea Strait, sonic logs measured in the laboratory on deep borehole cores were combined with a seismic section. In this study, we integrated and analyzed the sonic log data obtained from the drilling core (down to a depth of 200 m below the seabed) and a multichannel seismic section. Through time-depth conversion, the correlation between the two datasets increased from 15% to 45%. An initial P-wave impedance model was set up, and the results of model-based, band-limited, and sparse-spike inversions were compared. The derived P-impedance distributions showed differences between sediment-dominant and unconsolidated layers. The P-impedance inversion process can serve as a framework for the integrated analysis of additional core logs and seismic data in the future. Furthermore, the derived P-impedance can be used to detect shallow gas-saturated regions or faults in the shallow sediment. As domestic deep drilling continues for characterizing candidate carbon dioxide storage sites and for resource evaluation, the applicability of this integrated inversion will grow.
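
The forward model underlying model-based impedance inversion can be sketched as follows; the layer impedances and wavelet parameters are illustrative assumptions, not the survey's values:

```python
# Forward model behind model-based impedance inversion:
# impedance -> reflectivity -> synthetic trace via a Ricker wavelet.
import numpy as np

def ricker(f=30.0, dt=0.002, length=0.128):
    """Ricker wavelet of peak frequency f (Hz) sampled at dt (s)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

# Hypothetical P-impedance log (density * velocity) for a layered model.
z = np.concatenate([np.full(100, 2.0e6), np.full(100, 2.6e6),
                    np.full(100, 2.3e6)])           # kg/(m^2*s)
refl = (z[1:] - z[:-1]) / (z[1:] + z[:-1])          # reflection coefficients
trace = np.convolve(refl, ricker(), mode="same")    # synthetic seismic trace

# Model-based inversion iteratively perturbs z so that `trace` matches the
# observed section; band-limited and sparse-spike variants instead constrain
# the reflectivity spectrum or its sparsity.
print(trace[:5])
```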

Physicochemical properties of a calcium aluminate cement containing nanoparticles of zinc oxide

  • Amanda Freitas da Rosa;Thuany Schmitz Amaral;Maria Eduarda Paz Dotto;Taynara Santos Goulart;Hebert Luis Rossetto;Eduardo Antunes Bortoluzzi;Cleonice da Silveira Teixeira;Lucas da Fonseca Roberti Garcia
    • Restorative Dentistry and Endodontics / v.48 no.1 / pp.3.1-3.14 / 2023
  • Objectives: This study evaluated the effect of different nanoparticulated zinc oxide (nano-ZnO) and conventional-ZnO ratios on the physicochemical properties of calcium aluminate cement (CAC). Materials and Methods: Conventional-ZnO and nano-ZnO were added to the cement powder in the following proportions: G1 (20% conventional-ZnO), G2 (15% conventional-ZnO + 5% nano-ZnO), G3 (12% conventional-ZnO + 3% nano-ZnO), and G4 (10% conventional-ZnO + 5% nano-ZnO). Radiopacity (Rad), setting time (Set), dimensional change (Dc), solubility (Sol), compressive strength (Cst), and pH were evaluated. The nano-ZnO and the CAC containing conventional-ZnO were also assessed using scanning electron microscopy, transmission electron microscopy, and energy-dispersive X-ray spectroscopy. Radiopacity data were analyzed by 1-way analysis of variance (ANOVA) and Bonferroni tests (p < 0.05). The data for the other properties were analyzed by ANOVA, Tukey, and Fisher tests (p < 0.05). Results: The nano-ZnO and the CAC containing conventional-ZnO powders presented particles with few impurities and of nanometric and micrometric sizes, respectively. G1 had the highest mean Rad value (p < 0.05). Compared to G1, the groups containing nano-ZnO had a significantly shorter Set (p < 0.05) and lower Dc values at 24 hours (p < 0.05). Cst was highest for G4, with a significant difference from the other groups (p < 0.05). Sol did not differ significantly among groups (p > 0.05). Conclusions: The addition of nano-ZnO to CAC improved its dimensional change, setting time, and compressive strength, which may be promising for the clinical performance of this cement.
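
The statistical workflow (1-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can be reproduced in outline; the group means and sample sizes below are synthetic stand-ins for the study's measurements:

```python
# One-way ANOVA across four groups plus Bonferroni-corrected pairwise t-tests.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = {f"G{i + 1}": rng.normal(mu, 0.3, size=10)
          for i, mu in enumerate([5.0, 4.2, 4.4, 4.1])}   # toy group means

f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f:.2f}, p={p:.4f}")

pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)                # Bonferroni correction
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(a, "vs", b, f"p={p:.4f}",
          "significant" if p < alpha_corrected else "n.s.")
```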

Method of Biological Information Analysis Based-on Object Contextual (대상객체 맥락 기반 생체정보 분석방법)

  • Kim, Kyung-jun;Kim, Ju-yeon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.41-43 / 2022
  • To prevent and block infectious diseases such as the recent COVID-19 pandemic, non-contact technologies for acquiring and analyzing biometric information are attracting attention. Invasive and attached acquisition methods have the advantage of measuring biometric information accurately, but they carry a risk of spreading contagious diseases through close contact. To solve these problems, non-contact methods of extracting biometric information such as fingerprints, faces, irises, veins, voices, and signatures with automated devices are spreading across industries as data processing speeds and recognition accuracy increase. However, even though the accuracy of non-contact acquisition has improved, non-contact measurement is strongly influenced by the surrounding environment of the measured subject, resulting in distorted measurements and poor accuracy. In this paper, we propose a context-based bio-signal modeling technique for interpreting personalized information (images, signals, etc.) in biometric information analysis. The technique presents a model that considers contextual and user information during biometric measurement in order to improve performance. The proposed model analyzes signal information based on feature probability distributions, through context-based signal analysis that maximizes the probability of the predicted value.
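
Since the abstract gives no implementation details, the following is only a loose sketch of the idea: fit a feature probability distribution per measurement context and interpret a new measurement under the most likely context. All contexts and numbers are hypothetical:

```python
# Context-conditioned feature distributions for interpreting a biosignal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical training data: a heart-rate-like feature per context.
contexts = {"resting": rng.normal(65, 5, 200),
            "walking": rng.normal(95, 8, 200),
            "outdoor-noise": rng.normal(80, 15, 200)}

# Fit a Gaussian per context (mean, std of the observed feature).
models = {c: (x.mean(), x.std()) for c, x in contexts.items()}

def interpret(feature_value):
    """Return the context whose distribution best explains the measurement."""
    ll = {c: stats.norm(mu, sd).logpdf(feature_value)
          for c, (mu, sd) in models.items()}
    return max(ll, key=ll.get)

print(interpret(72.0))   # -> most likely measurement context
```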

A Study on Machine Learning-Based Real-Time Automated Measurement Data Analysis Techniques (머신러닝 기반의 실시간 자동화계측 데이터 분석 기법 연구)

  • Jung-Youl Choi;Jae-Min Han;Dae-Hui Ahn;Jee-Seung Chung;Jung-Ho Kim;Sung-Jin Lee
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.685-690 / 2023
  • The volume of deep excavation work adjacent to existing underground structures is increasing with urban population growth and density. Currently, many underground structures and tracks are damaged by external factors; causes are analyzed from in-tunnel measurement results, and measurements are made for post-hoc analysis rather than prevention. The purpose of this study is to analyze the effect of excavation work adjacent to an in-service urban railway track on the deformation of the structure. In addition, the safety of the structures is evaluated with machine learning techniques applied to displacement measurements, before damage and destruction of the underground structures and tracks by external factors occur. The analysis showed that, for the dataset examined, the model best suited to predicting the time at which the structure management standard value is reached was a polynomial regression model. Since this finding may be limited to the data used in this study, future research is needed to increase the diversity of structural conditions and the amount of data.
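
A minimal sketch of the reported approach, assuming scikit-learn and a synthetic displacement series; the polynomial degree, threshold, and units are illustrative, not the study's values:

```python
# Fit a polynomial regression to measured displacement over time and
# estimate when it will reach a management standard value.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
t = np.arange(0, 100).reshape(-1, 1)                        # epochs (days)
disp = 0.002 * t.ravel() ** 1.5 + rng.normal(0, 0.05, 100)  # toy displacement (mm)

model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(t, disp)

threshold = 3.0                                   # hypothetical limit (mm)
future = np.arange(100, 400).reshape(-1, 1)
pred = model.predict(future)
crossing = future[pred >= threshold]
print("threshold reached at day:",
      crossing[0, 0] if crossing.size else "not within horizon")
```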

Building Sentence Meaning Identification Dataset Based on Social Problem-Solving R&D Reports (사회문제 해결 연구보고서 기반 문장 의미 식별 데이터셋 구축)

  • Hyeonho Shin;Seonki Jeong;Hong-Woo Chun;Lee-Nam Kwon;Jae-Min Lee;Kanghee Park;Sung-Pil Choi
    • KIPS Transactions on Software and Data Engineering / v.12 no.4 / pp.159-172 / 2023
  • In general, social problem-solving research aims to create important social value by offering meaningful answers to pressing social issues using scientific technologies. However, although numerous and extensive research attempts have been made nationwide to alleviate social problems, many important social challenges remain. To facilitate the entire process of social problem-solving research and maximize its efficacy, it is vital to clearly identify and grasp the important and pressing problems to focus on. The problem-discovery step could be drastically improved if current social issues could be identified automatically from existing R&D resources such as technical reports and articles. This paper introduces a comprehensive dataset for building machine learning models that automatically detect social problems and solutions in national research reports. We first collected a total of 700 research reports on social problems and issues. Through an intensive annotation process, we built 24,022 sentences, each labeled with a category closely related to social problem-solving, such as problem, purpose, solution, or effect. Furthermore, we implemented four sentence classification models based on different neural language models and conducted a series of performance experiments using our dataset. In the experiments, the model fine-tuned from the KLUE-BERT pre-trained language model showed the best performance, with an accuracy of 75.853% and an F1 score of 63.503%.
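
A fine-tuning sketch using the KLUE-BERT checkpoint named in the abstract; the label set, training arguments, and placeholder data are assumptions for illustration:

```python
# Fine-tune klue/bert-base for sentence classification with Hugging Face.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
import torch

labels = ["problem", "purpose", "solution", "effect"]   # assumed categories
tok = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base", num_labels=len(labels))

class SentenceDataset(torch.utils.data.Dataset):
    def __init__(self, sentences, label_ids):
        self.enc = tok(sentences, truncation=True, padding=True)
        self.labels = label_ids
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

train = SentenceDataset(["예시 문장입니다."] * 8, [0, 1, 2, 3] * 2)  # placeholder
trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="out", num_train_epochs=1,
                                         per_device_train_batch_size=4),
                  train_dataset=train)
trainer.train()
```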

Ensemble Learning-Based Prediction of Good Sellers in Overseas Sales of Domestic Books and Keyword Analysis of Reviews of the Good Sellers (앙상블 학습 기반 국내 도서의 해외 판매 굿셀러 예측 및 굿셀러 리뷰 키워드 분석)

  • Do Young Kim;Na Yeon Kim;Hyon Hee Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.4 / pp.173-178 / 2023
  • As Korean literature spreads around the world, its position in the overseas publishing market has become important. As demand in the overseas publishing market continues to grow, it is essential to predict future book sales and to analyze the characteristics of books that overseas readers have favored in the past. In this study, we proposed an ensemble learning-based prediction model and analyzed the characteristics of good sellers, defined as books published overseas in the past 5 years with cumulative sales of more than 5,000 copies. We applied five ensemble learning models, i.e., XGBoost, Gradient Boosting, AdaBoost, LightGBM, and Random Forest, and compared them with other machine learning algorithms, i.e., Support Vector Machine, Logistic Regression, and deep learning. Our experimental results showed that the ensemble algorithms outperform the other approaches in handling imbalanced data. In particular, the LightGBM model obtained an AUC of 99.86%, the best prediction performance. Among the features used for prediction, the most important is the author's number of overseas publications, followed by publication in the countries with the largest publishing markets; the number of evaluation participants is also an important feature. In addition, text mining was performed on the reviews of the four best-selling good sellers. Many reviews focused on stories, characters, and writers, and since the keyword "translation" appears frequently in low-rated reviews, support for translation appears to be needed.
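
The best-performing setup can be outlined with LightGBM and an AUC evaluation on an imbalanced label; the features below are synthetic stand-ins, whereas the paper's real features include the author's overseas publication count and the market size of the publishing country:

```python
# LightGBM classifier evaluated by AUC on an imbalanced good-seller label.
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 2.0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
clf = lgb.LGBMClassifier(n_estimators=300, is_unbalance=True, random_state=0)
clf.fit(Xtr, ytr)
print("AUC:", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
print("feature importances:", clf.feature_importances_)
```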

Fake News Detection Using CNN-based Sentiment Change Patterns (CNN 기반 감성 변화 패턴을 이용한 가짜뉴스 탐지)

  • Tae Won Lee;Ji Su Park;Jin Gon Shon
    • KIPS Transactions on Software and Data Engineering / v.12 no.4 / pp.179-188 / 2023
  • Recently, fake news that mimics the form of news content appears whenever important events occur, causing social confusion. Accordingly, artificial intelligence techniques are being studied for detecting fake news. Deep learning enables approaches such as automatically recognizing and blocking fake news through natural language processing, or detecting social media influencer accounts that spread false information in combination with network causal inference. However, fake news detection is considered one of the harder problems in natural language processing: because fake news varies widely in form and expression, feature extraction is difficult, and a given feature may carry different meanings depending on the category of the news. In this paper, sentiment change patterns are presented as an additional criterion for detecting fake news. We propose a model with improved performance that applies a convolutional neural network to a fake news dataset to analyze content characteristics and additionally analyzes sentiment change patterns. Sentiment polarity is calculated for each sentence of a news item, and a value dependent on sentence order is obtained by applying long short-term memory (LSTM). This is defined as the sentiment change pattern and is combined with the content characteristics of the news as an independent variable in the proposed fake news detection model. We train the proposed model and a comparison model with deep learning, and experiments on a fake news dataset confirm that sentiment change patterns can improve fake news detection performance.
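
An architectural sketch of the described two-branch model, assuming Keras; the input sizes, layer widths, and the use of global max pooling are illustrative choices, not the paper's exact configuration:

```python
# CNN branch over the news text combined with an LSTM branch over the
# per-sentence sentiment polarity sequence (the "sentiment change pattern").
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_TOKENS, MAX_SENTS, VOCAB = 500, 30, 20000

# Branch 1: CNN over the token sequence (content features).
tokens = layers.Input(shape=(MAX_TOKENS,), name="tokens")
x = layers.Embedding(VOCAB, 128)(tokens)
x = layers.Conv1D(128, 5, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)

# Branch 2: LSTM over per-sentence sentiment polarity scores, yielding an
# order-dependent representation of how sentiment changes through the article.
polarity = layers.Input(shape=(MAX_SENTS, 1), name="sentence_polarity")
p = layers.LSTM(32)(polarity)

merged = layers.concatenate([x, p])
out = layers.Dense(1, activation="sigmoid")(merged)   # fake vs. real
model = Model([tokens, polarity], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```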