• Title/Summary/Keyword: Preprocessing System


Experimental Implementation of a Cableless Seismic Data Acquisition Module Using Arduino (아두이노를 활용한 무선 탄성파 자료취득 모듈 구현 실험)

  • Chanil Kim;Sangin Cho;Sukjoon Pyun
    • Geophysics and Geophysical Exploration / v.26 no.3 / pp.103-113 / 2023
  • In the oil and gas exploration market, various cableless seismic systems have been developed as alternatives that improve data acquisition efficiency. However, developing such equipment at a small scale for academic research has not been feasible because commercial products are highly priced. Fortunately, building and experimenting with open-source hardware enables the academic use of cableless seismic equipment at relatively low cost. This study aims to develop a cableless seismic acquisition module using Arduino. A cableless seismic system requires that signal sensing, simple preprocessing, and data storage be combined in a single device. A conventional geophone is used as the sensor that detects the seismic wave signal, and it is connected to an Arduino circuit that implements the processing and storage module for the detected signals. Three main functions are implemented in the Arduino module: preprocessing, A/D conversion, and data storage. The developed single-channel module can acquire a common receiver gather from multiple source experiments.
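
A conceptual sketch of the acquisition chain the abstract names (sensing, simple preprocessing, A/D conversion, data storage). The actual module runs Arduino firmware written in C/C++; the Python below only illustrates the flow, and the sampling rate, DC offset, and binary record format are assumptions rather than the paper's specification.

```python
# Conceptual illustration only: sample -> simple preprocessing -> storage.
import struct
import time

SAMPLE_RATE_HZ = 500          # assumed sampling rate
DC_OFFSET = 512               # mid-scale of a 10-bit ADC

def read_adc() -> int:
    """Stand-in for an analogRead() call; returns a 10-bit sample (0-1023)."""
    return 512

def acquire(n_samples: int, path: str = "record.bin") -> None:
    with open(path, "wb") as f:
        for _ in range(n_samples):
            raw = read_adc()                      # A/D conversion
            centered = raw - DC_OFFSET            # simple preprocessing (DC removal)
            f.write(struct.pack("<h", centered))  # data storage
            time.sleep(1.0 / SAMPLE_RATE_HZ)

# acquire(5 * SAMPLE_RATE_HZ)   # e.g., a 5-second record
```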

Development of Online Fashion Thesaurus and Taxonomy for Text Mining (텍스트마이닝을 위한 패션 속성 분류체계 및 말뭉치 웹사전 구축)

  • Seyoon Jang;Ha Youn Kim;Songmee Kim;Woojin Choi;Jin Jeong;Yuri Lee
    • Journal of the Korean Society of Clothing and Textiles / v.46 no.6 / pp.1142-1160 / 2022
  • Text data plays a significant role in understanding and analyzing trends in the consumer, business, and social sectors. Text analysis requires a corpus that reflects specific domain knowledge; in the field of fashion, however, such a professional corpus is insufficient. This study aims to develop a taxonomy and thesaurus that reflect the specialty of fashion products. To this end, about 100,000 fashion vocabulary terms were collected by crawling text data from WSGN, Pantone, and online platforms, and the text was subsequently extracted through preprocessing with Python. The taxonomy comprises seven clothing attributes: items, silhouettes, details, styles, colors, textiles, and patterns/prints. The corpus was completed by processing synonyms of terms drawn from fashion reference books such as dictionaries. Finally, 10,294 vocabulary words, including 1,956 standard Korean words, were classified in the taxonomy, and all data were built into a web dictionary system. Quantitative and qualitative performance tests were conducted through expert reviews, and the performance of the thesaurus was also verified by comparing text mining results obtained with the developed corpus. This study contributes to establishing a text data standard and enables meaningful text mining analysis in the fashion field.
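
A minimal sketch of the kind of Python preprocessing the abstract mentions: crawled fashion text is cleaned and mapped onto canonical thesaurus terms via a synonym table. The terms, attributes, and mapping shown are illustrative placeholders, not entries from the paper's actual taxonomy.

```python
import re

# Hypothetical synonym table: surface form -> (canonical term, attribute)
THESAURUS = {
    "tee": ("t-shirt", "item"),
    "t shirt": ("t-shirt", "item"),
    "a-line": ("a-line", "silhouette"),
    "floral print": ("floral", "pattern/print"),
}

def normalize(text: str) -> list[tuple[str, str]]:
    """Lowercase, strip punctuation, and map matched phrases onto thesaurus entries."""
    cleaned = re.sub(r"[^\w\s-]", " ", text.lower())
    hits = []
    for surface, entry in THESAURUS.items():
        if surface in cleaned:
            hits.append(entry)
    return hits

print(normalize("Floral print A-line dress with a basic tee"))
# [('t-shirt', 'item'), ('a-line', 'silhouette'), ('floral', 'pattern/print')]
```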

Robust Scheme of Segmenting Characters of License Plate on Irregular Illumination Condition (불규칙 조명 환경에 강인한 번호판 문자 분리 기법)

  • Kim, Byoung-Hyun;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information / v.14 no.11 / pp.61-71 / 2009
  • A vehicle license plate is the only means of checking the registered information of a vehicle. Many works have been devoted to vision systems for recognizing license plates, which are widely used to control illegal parking. However, it is difficult to correctly segment the characters on a license plate because the illumination is affected by weather changes and neighboring obstacles. This paper proposes a robust method of segmenting the characters of a license plate under irregular illumination conditions. The proposed method enhances the contrast of license plate images using the Chi-square probability density function. To segment the characters on the license plate, high-quality binary images are obtained by applying an adaptive threshold. Preprocessing and a labeling algorithm are used to eliminate noise arising during the segmentation process. Finally, a profiling method is applied to segment the characters of the license plate from the binary images.
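
A rough OpenCV sketch of the pipeline described above: contrast enhancement, adaptive thresholding, connected-component noise removal, and projection-profile character splitting. Histogram equalization stands in for the paper's Chi-square-based enhancement, and all parameter values are assumptions.

```python
import cv2
import numpy as np

def segment_characters(plate_gray: np.ndarray) -> list[np.ndarray]:
    # Contrast enhancement (equalization stands in for the Chi-square mapping).
    enhanced = cv2.equalizeHist(plate_gray)

    # Adaptive threshold copes with uneven illumination across the plate.
    binary = cv2.adaptiveThreshold(enhanced, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)

    # Remove small blobs (noise) via connected-component labeling.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < 30:
            binary[labels == i] = 0

    # Vertical projection profile: empty columns mark gaps between characters.
    profile = binary.sum(axis=0)
    in_char, start, chars = False, 0, []
    for x, v in enumerate(profile):
        if v > 0 and not in_char:
            in_char, start = True, x
        elif v == 0 and in_char:
            in_char = False
            chars.append(binary[:, start:x])
    if in_char:
        chars.append(binary[:, start:])
    return chars
```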

IoT botnet attack detection using deep autoencoder and artificial neural networks

  • Deris Stiawan;Susanto;Abdi Bimantara;Mohd Yazid Idris;Rahmat Budiarto
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.5 / pp.1310-1338 / 2023
  • As Internet of Things (IoT) applications and devices grow rapidly, cyber-attacks on IoT networks/systems also show an increasing trend, raising the threat to security and privacy. Botnets are among the dominant threats because they can easily compromise devices attached to IoT networks/systems. Compromised devices behave like normal ones, which makes them difficult to recognize. Several intelligent approaches, including deep learning and machine learning techniques, have been introduced to improve the detection accuracy against this type of cyber-attack, and dimensionality reduction methods are applied during the preprocessing stage. This work proposes a deep Autoencoder dimensionality reduction method combined with an Artificial Neural Network (ANN) classifier as a botnet detection system for IoT networks/systems. Experiments were carried out using 3-layer, 4-layer, and 5-layer Autoencoder preprocessing on data from the MedBIoT dataset. The results show that the 5-layer Autoencoder performs best, with an accuracy of 99.72%, precision of 99.82%, sensitivity of 99.82%, specificity of 99.31%, and F1-score of 99.82%. The 5-layer Autoencoder also reduced the dataset size from 152 MB to 12.6 MB (a reduction of 91.2%). In addition, experiments on the N_BaIoT dataset achieved very high accuracy, up to 99.99%.
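
A hedged Keras sketch of the pipeline the abstract describes: an Autoencoder used for dimensionality reduction, followed by an ANN classifier on the encoded features. The feature count, layer sizes, and training settings are placeholders rather than the paper's exact configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_features = 115          # assumed feature count
latent_dim = 16           # assumed bottleneck size

# Autoencoder used purely for dimensionality reduction.
inputs = keras.Input(shape=(n_features,))
encoded = layers.Dense(64, activation="relu")(inputs)
encoded = layers.Dense(latent_dim, activation="relu")(encoded)
decoded = layers.Dense(64, activation="relu")(encoded)
decoded = layers.Dense(n_features, activation="sigmoid")(decoded)

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")

# ANN classifier that consumes the reduced representation.
classifier = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # benign vs. botnet
])
classifier.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])

# X_train, y_train would come from the (scaled) MedBIoT features.
# autoencoder.fit(X_train, X_train, epochs=20, batch_size=256)
# classifier.fit(encoder.predict(X_train), y_train, epochs=20)
```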

Image Restoration Filter using Combined Weight in Mixed Noise Environment (복합잡음 환경에서 결합가중치를 이용한 영상복원 필터)

  • Cheon, Bong-Won;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.210-212 / 2021
  • In modern society, various digital devices are spreading under the influence of the Fourth Industrial Revolution and are used in a wide range of fields such as automated processes, intelligent CCTV, the medical industry, robots, and drones. Accordingly, the importance of the preprocessing stage in image-based systems is increasing, and algorithms for effectively restoring images are drawing attention. In this paper, we propose a filter algorithm based on a combined weight to restore images in a mixed noise environment. For the pixels inside the filtering mask, the proposed algorithm computes a weight according to spatial distance and a weight according to the difference from the center pixel value. The final output is obtained by applying to the mask the combined weight calculated from these two weights. To verify the performance of the proposed algorithm, we ran simulations comparing it with existing filter algorithms.
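
An illustrative Python sketch of a combined-weight filter of the kind described: a weight derived from spatial distance within the mask is combined with a weight derived from the pixel-value difference to the mask center. The kernel size and the two sigma parameters are assumptions, not the paper's values.

```python
import numpy as np

def combined_weight_filter(img: np.ndarray, k: int = 5,
                           sigma_s: float = 2.0, sigma_r: float = 25.0) -> np.ndarray:
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)

    # Spatial weight depends only on distance from the mask center.
    yy, xx = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    w_spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))

    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + k, x:x + k]
            # Range weight depends on the difference to the center pixel.
            w_range = np.exp(-((window - padded[y + pad, x + pad])**2)
                             / (2 * sigma_r**2))
            w = w_spatial * w_range               # combined weight
            out[y, x] = (w * window).sum() / w.sum()
    return out
```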


Prediction Model of Real Estate Transaction Price with the LSTM Model based on AI and Bigdata

  • Lee, Jeong-hyun;Kim, Hoo-bin;Shim, Gyo-eon
    • International Journal of Advanced Culture Technology / v.10 no.1 / pp.274-283 / 2022
  • Korea is facing a number of difficulties arising from rising housing prices. Because 'housing' takes the lion's share of personal assets, fluctuating housing prices are expected to cause many problems. The purpose of this study is to create a housing price prediction model to prevent such risks and to induce reasonable real estate purchases. This study made several attempts to understand real estate instability and to create an appropriate housing price prediction model. Housing prices were predicted and validated using the LSTM technique, a type of deep learning. LSTM is a network in which the cell state and hidden state are computed recursively; it adds a cell state, which plays the role of a conveyor belt, to the hidden state of a conventional RNN. The real sale prices of apartments in autonomous districts from January 2006 to December 2019 were collected through the real sale price open system of the Ministry of Land, Infrastructure, and Transport, and basic apartment and commercial district information was collected through the Public Data Portal and Seoul Metropolitan City data. The collected sale price data were scaled to monthly average sale prices, and a total of 168 data points were organized by preprocessing the respective data by address. For prediction, the LSTM was implemented with a training period of 29 months (April 2015 to August 2017), a validation period of 13 months (September 2017 to September 2018), and a test period of 13 months (December 2018 to December 2019) according to the time series data set. The study obtained a prediction similarity of 76 percent: the final model, built from the collected time series data with the LSTM model based on AI and big data, showed that a model with 76 percent similarity can be produced. This validates that predicting the rate of return with the LSTM method can be reliable.
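
A minimal Keras sketch of an LSTM setup along the lines described: monthly average prices are windowed into fixed-length sequences and a single LSTM layer predicts the following month. The window length, layer sizes, and scaling are assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_windows(series: np.ndarray, window: int = 12):
    """Slice a 1-D price series into (window, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., None], np.array(y)

# prices: 168 monthly average sale prices, scaled to [0, 1] beforehand.
# X, y = make_windows(prices)

model = keras.Sequential([
    keras.Input(shape=(12, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=100)
```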

Livestock Telemedicine System Prediction Model for Human Healthy Life (인간의 건강한 삶을 위한 가축원격 진료 예측 모델)

  • Kang, Yun-Jeong;Lee, Kwang-Jae;Choi, Dong-Oun
    • Journal of Korea Entertainment Industry Association / v.13 no.8 / pp.335-343 / 2019
  • Healthy living is an essential element of human happiness. Quality food provides the basis for life, and the health of livestock, which provide meat and dairy products, has a direct impact on human health. In the case of calves, diarrhea is the cause of all diseases. In this paper, we use sensors to measure a calf's biometric data in order to diagnose calf diarrhea. The collected biometric data undergo preprocessing so that they can be used as meaningful information. We record calf birth history and measure calf biometrics. An ontology is constructed from housing environment information together with biochemical, immunity, and body measurement information for disease management. A knowledge base for predicting calf diarrhea is built by inferring the condition through logical reasoning over the names, causes, timing, and symptoms of livestock diseases. These knowledge bases can be expressed as domain ontologies under a parent ontology for prediction, and as a result, treatment and prevention methods can be suggested.
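
A toy Python sketch of the kind of rule-based knowledge base the abstract points to, with disease entries keyed by symptoms and a simple match score used for prediction. Disease names, symptoms, and the scoring rule are purely illustrative and do not come from the paper's ontology.

```python
# Hypothetical knowledge base: disease -> symptoms, cause, timing.
KNOWLEDGE_BASE = {
    "calf diarrhea (nutritional)": {
        "symptoms": {"loose stool", "reduced appetite"},
        "cause": "overfeeding or poor-quality milk replacer",
        "timing": "first weeks of life",
    },
    "calf diarrhea (infectious)": {
        "symptoms": {"loose stool", "fever", "dehydration"},
        "cause": "rotavirus, E. coli, etc.",
        "timing": "first weeks of life",
    },
}

def predict(observed: set[str]) -> list[tuple[str, float]]:
    """Rank diseases by the fraction of their listed symptoms that were observed."""
    ranked = []
    for name, entry in KNOWLEDGE_BASE.items():
        score = len(observed & entry["symptoms"]) / len(entry["symptoms"])
        ranked.append((name, score))
    return sorted(ranked, key=lambda t: t[1], reverse=True)

print(predict({"loose stool", "fever"}))
```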

Efficient Semi-automatic Annotation System based on Deep Learning

  • Hyunseok Lee;Hwa Hui Shin;Soohoon Maeng;Dae Gwan Kim;Hyojeong Moon
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.6 / pp.267-275 / 2023
  • This paper presents the development of specialized software for annotating volumes of interest on 18F-FDG PET/CT images, with the goal of facilitating the study and diagnosis of head and neck cancer (HNC). To achieve an efficient annotation process, we employed an SE-Norm-Residual Layer-based U-Net model, which exhibited outstanding proficiency in segmenting cancerous regions within 18F-FDG PET/CT scans of HNC cases. A manual annotation function was also integrated, allowing researchers and clinicians to validate and refine annotations based on dataset characteristics. The workspace displays a fusion of the PET and CT images, enhancing user convenience through simultaneous visualization. The performance of the deep learning model was validated using the Hecktor 2021 dataset, and the semi-automatic annotation functionalities were subsequently developed. We began with image preprocessing, including resampling, normalization, and co-registration, followed by an evaluation of the deep learning model's performance. The model was integrated into the software as an initial automatic segmentation step; users can manually refine the pre-segmented regions to correct false positives and false negatives. Annotation images are then saved along with their corresponding 18F-FDG PET/CT fusion images, enabling their application across various domains. In this study, we developed semi-automatic annotation software designed to efficiently generate annotated lesion images, with applications in HNC research and diagnosis. The findings indicate that this software surpasses conventional tools, particularly for HNC-specific annotation of 18F-FDG PET/CT data. Consequently, the developed software offers a robust solution for producing annotated datasets, driving advances in the study and diagnosis of HNC.
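
A hedged SimpleITK sketch of two of the preprocessing steps named above, resampling to a common voxel spacing and intensity normalization; the spacing, interpolation choice, normalization scheme, and file name are assumptions, and co-registration is omitted.

```python
import SimpleITK as sitk
import numpy as np

def resample_to_spacing(img: sitk.Image, spacing=(1.0, 1.0, 1.0)) -> sitk.Image:
    """Resample an image onto an isotropic grid with linear interpolation."""
    orig_spacing = img.GetSpacing()
    orig_size = img.GetSize()
    new_size = [int(round(osz * osp / nsp))
                for osz, osp, nsp in zip(orig_size, orig_spacing, spacing)]
    return sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkLinear,
                         img.GetOrigin(), spacing, img.GetDirection(), 0.0,
                         img.GetPixelID())

def zscore_normalize(img: sitk.Image) -> np.ndarray:
    """Convert to a numpy volume and apply z-score intensity normalization."""
    arr = sitk.GetArrayFromImage(img).astype(np.float32)
    return (arr - arr.mean()) / (arr.std() + 1e-8)

# pet = sitk.ReadImage("patient001_pet.nii.gz")   # hypothetical file name
# pet_arr = zscore_normalize(resample_to_spacing(pet))
```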

Convolutional neural network of age-related trends digital radiographs of medial clavicle in a Thai population: a preliminary study

  • Phisamon Kengkard;Jirachaya Choovuthayakorn;Chollada Mahakkanukrauh;Nadee Chitapanarux;Pittayarat Intasuwan;Yanumart Malatong;Apichat Sinthubua;Patison Palee;Sakarat Na Lampang;Pasuk Mahakkanukrauh
    • Anatomy and Cell Biology / v.56 no.1 / pp.86-93 / 2023
  • Age-at-death estimation has always been a crucial yet challenging part of the identification process in the forensic field. The use of human skeletons has long been explored using the principle that macro- and micro-architecture change in correlation with increasing age. The clavicle is recommended as the best candidate for accurate age estimation because of its accessibility, time to maturation, and minimal effect from weight. Our study applies a pre-trained convolutional neural network to obtain an accurate and cost-effective age estimation model based on the clavicle. A total of 988 clavicles from a Thai population with known age and sex were radiographed using the Kodak 9000 Extra-oral Imaging System. The radiographs then went through a preprocessing protocol that included region-of-interest selection and quality assessment, and additional samples were generated using a generative adversarial network. In total, 3,999 clavicular images were used in this study; they were separated into training and test sets, and the test set was subsequently categorized into 7 age groups. GoogLeNet was modified at two layers and its parameters were fine-tuned. The highest validation accuracy was 89.02%, but the test set achieved only 30% accuracy. Our results show that medial clavicular radiographs have potential in the field of age-at-death estimation; thus, further study is recommended.
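
A PyTorch/torchvision sketch of the transfer-learning pattern the abstract describes: a pre-trained GoogLeNet with its classification head replaced for seven age groups. Which two layers the authors modified and their fine-tuning schedule are not reproduced here; the freezing strategy and learning rate are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

num_age_groups = 7

# Load ImageNet-pretrained GoogLeNet and swap the classifier head.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_age_groups)

# Example strategy: freeze early blocks, fine-tune the last inception block and head.
for name, param in model.named_parameters():
    if not name.startswith(("inception5", "fc")):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```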

Deep Learning Research on Vessel Trajectory Prediction Based on AIS Data with Interpolation Techniques

  • Won-Hee Lee;Seung-Won Yoon;Da-Hyun Jang;Kyu-Chul Lee
    • Journal of the Korea Society of Computer and Information / v.29 no.3 / pp.1-10 / 2024
  • Research on predicting the routes of ships, which carry the majority of maritime transportation, can detect potential hazards at sea in advance and prevent accidents. Unlike roads, the sea has no distinct signal system and traffic management is challenging, making ship route prediction essential for maritime safety. However, the time intervals of ship route datasets are irregular due to communication disruptions. This study presents a method to regularize the time intervals of the data using an appropriate interpolation technique for ship route prediction, together with a deep learning model for predicting ship routes. The model is an LSTM that predicts the future GPS coordinates of ships by learning their movement patterns from the real-time route information contained in AIS data. This paper presents a data preprocessing method using linear interpolation and a suitable deep learning model for ship route prediction. The experimental results demonstrate the effectiveness of the proposed method, with an MSE of 0.0131 and an accuracy of 0.9467.
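
A minimal pandas sketch of the interpolation step described above: AIS position reports with irregular timestamps are resampled onto a fixed interval and linearly interpolated before being fed to the LSTM. The 1-minute interval, column names, and file name are assumptions.

```python
import pandas as pd

def resample_track(df: pd.DataFrame, interval: str = "1min") -> pd.DataFrame:
    """df has a 'timestamp' column plus 'lat' and 'lon' GPS coordinates."""
    df = df.set_index(pd.to_datetime(df["timestamp"])).sort_index()
    return (df[["lat", "lon"]]
            .resample(interval)
            .mean()                          # collapse duplicate reports in a bin
            .interpolate(method="linear"))   # fill gaps left by missing reports

# track = pd.read_csv("ais_single_vessel.csv")   # hypothetical file name
# regular = resample_track(track)
```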