• Title/Summary/Keyword: computational algorithm

Search Results: 4,384

An Analysis of Improvement and Compilation Issues of Mathematics Textbooks for Elementary Schools: Focusing on the 2015 Revised Elementary School Mathematics Textbook Government Published (초등학교 수학 교과서 개선과 편찬 상의 이슈 분석: 2015 개정 초등학교 수학 국정 교과용 도서를 중심으로)

  • Lee, Hwa Young
    • Education of Primary School Mathematics
    • /
    • v.25 no.4
    • /
    • pp.411-431
    • /
    • 2022
  • In this paper, implications for future curriculum compilation were sought by analyzing the process and results of compiling the government-published elementary school mathematics textbooks according to the 2015 revised curriculum. The 2015 revised government-published elementary mathematics textbooks were developed under a systematic compilation system so that academic and school-field experts from across the country could contribute their expertise. As improvements in content, the units and instructional time for strengthening basic computational skills were increased, the methods for introducing mathematical concepts and principles and for presenting algorithms were improved, and the internal connections between contents were strengthened. The learning period was adjusted, for example by moving content that is difficult for students to understand to a later semester or grade. For first and second graders, the amount of reading was drastically reduced to suit students' level of Korean, sentences and vocabulary were improved, and instructions were shortened. As for editing and design improvements, the illustrations introducing each unit and the contextual pictures were presented in detail, and the characters in the textbooks were presented consistently across all grades, giving the child characters a role of actively participating in the learning presented in the textbooks. During compilation, the media, the National Assembly, and civic groups raised concerns that the sentences and vocabulary in first-grade textbooks were more difficult than students' level of Hangeul education and that condensing the textbooks made them harder for students to understand; the efforts made to improve the textbooks in response, and their results, were examined. Based on this overall analysis, plans for improving textbook compilation for students and teachers and for operating the compilation process were proposed for the future development of both government-published and authorized textbooks.

Scheduling of Parallel Offset Printing Process for Packaging Printing (패키징 인쇄를 위한 병렬 오프셋 인쇄 공정의 스케줄링)

  • Moon, Jaekyeong;Tae, Hyunchul
    • KOREAN JOURNAL OF PACKAGING SCIENCE & TECHNOLOGY
    • /
    • v.28 no.3
    • /
    • pp.183-192
    • /
    • 2022
  • With the growth of the packaging industry, demand for packaging printing comes in various forms: customers' orders are diversifying, and quality standards are rising. Offset printing is mainly used in packaging printing because it is well suited to large print runs, but its productivity decreases when printing diverse orders, since changing the colors on each printing unit takes time. Therefore, scheduling that minimizes the color replacement time and shortens the overall makespan is required. With the existing manual method, based on workers' experience or intuition, scheduling results may vary from worker to worker, and this uncertainty increases production cost. In this study, we propose an automated scheduling method for the parallel offset printing process in packaging printing. We decompose the original problem into two parts: assigning and sequencing orders, and arranging inks for printing. The vehicle routing problem and the assignment problem are applied to these parts, respectively. Mixed integer programming is used to model the problem mathematically, but solving it requires substantial computational time as the problem grows, so a guided local search algorithm is used to solve the problem. Through experiments on actual data, we examined the method's applicability and its role in the field.
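
The sequencing piece of this decomposition can be illustrated compactly. Below is a minimal sketch, not the authors' implementation: hypothetical orders are sequenced on a single press, a plain 2-opt local search stands in for the guided local search used in the paper, and the changeover cost is taken as the number of ink units that must be swapped between consecutive jobs.

```python
from itertools import combinations

jobs = {  # hypothetical orders: job id -> set of ink colors it requires
    "A": {"cyan", "magenta", "black"},
    "B": {"cyan", "black"},
    "C": {"magenta", "yellow"},
    "D": {"yellow", "black"},
}

def changeover(a, b):
    """Ink units that must be washed and re-inked between consecutive jobs."""
    return len(jobs[a] ^ jobs[b])  # symmetric difference = colors swapped

def total_changeover(seq):
    return sum(changeover(x, y) for x, y in zip(seq, seq[1:]))

def two_opt(seq):
    """Plain 2-opt local search; the paper's guided local search additionally
    penalizes recurring bad features to escape local optima."""
    best, improved = list(seq), True
    while improved:
        improved = False
        for i, j in combinations(range(len(best)), 2):
            cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
            if total_changeover(cand) < total_changeover(best):
                best, improved = cand, True
    return best

order = two_opt(list(jobs))
print(order, total_changeover(order))
```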

A Study on the Optimization of Main Dimensions of a Ship by Design Search Techniques based on the AI (AI 기반 설계 탐색 기법을 통한 선박의 주요 치수 최적화)

  • Dong-Woo Park;Inseob Kim
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.28 no.7
    • /
    • pp.1231-1237
    • /
    • 2022
  • In the present study, the optimization of the main particulars of a ship using AI-based design search techniques was investigated. The SHERPA algorithm in HEEDS was applied as the design search technique, and CFD analysis using STAR-CCM+ was applied to calculate resistance performance. The main particulars were varied automatically at the preprocessing stage using JAVA script and Python. A small catamaran was chosen for the study, and the length, breadth, draft of the demi-hull, and distance between demi-hulls were taken as design variables. Total resistance was the objective function, and the range of displaced volume required for arranging the outfitting system was the constraint. As a result, the changes in the individual design variables were within ±5%, and the total resistance of the optimized hull form was 11% lower than that of the existing hull form. The study shows that the resistance performance of a small catamaran can be improved by optimizing the main dimensions without directly modifying the hull shape, and the application of design search techniques is expected to improve the resistance performance of ships more generally.
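
A minimal sketch of the design-search loop described above, under loud assumptions: the resistance and displaced-volume functions below are dummy placeholders for the STAR-CCM+ CFD evaluation, and a plain random search within ±5% bounds stands in for the proprietary SHERPA algorithm.

```python
import random

base = {"L": 12.0, "B": 4.2, "T": 0.8, "S": 3.0}  # hypothetical dimensions (m)

def resistance(d):
    """Dummy stand-in for the STAR-CCM+ total-resistance evaluation."""
    return 0.9 * d["L"] + 2.1 * d["B"] + 5.0 * d["T"] - 0.4 * d["S"]

def displaced_volume(d):
    """Dummy stand-in for the hull-form volume computation."""
    return 0.55 * d["L"] * d["B"] * d["T"]

def feasible(d, v0):
    # constraint: keep displaced volume near the parent hull's so the
    # outfitting system can still be arranged
    return 0.95 * v0 <= displaced_volume(d) <= 1.05 * v0

v0, best = displaced_volume(base), dict(base)
for _ in range(5000):  # random search within +/-5% of each design variable
    cand = {k: v * random.uniform(0.95, 1.05) for k, v in base.items()}
    if feasible(cand, v0) and resistance(cand) < resistance(best):
        best = cand
print(best, resistance(best))
```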

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1243-1244
    • /
    • 2022
  • In recent years, the growing interest in off-site construction has led to factories scaling up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated component production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges remain in deploying this technology for large-scale field applications. One issue is collecting and transmitting vast amounts of video data: continuous site monitoring systems are based on real-time video collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate information about objects of different sizes and scales into a single scene. Objects of various sizes and types (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, existing object detection algorithms struggle to detect objects with significant differences in size simultaneously, because doing so requires collecting and training on massive amounts of object image data at various scales. This study therefore developed a large-scale site monitoring system using edge computing and a small-object detection system to solve these problems. Edge computing is a distributed information technology architecture wherein image or video data is processed near its originating source rather than on a centralized server or cloud. By running inference on AI computing modules attached to the CCTVs and communicating only the processed information to the server, excessive network traffic can be reduced. Small-object detection is a method for detecting objects of different sizes by cropping the raw image, with the number of rows and columns for image splitting set according to the target object size; this enables the detection of small objects from cropped and magnified images, and the detected objects can then be mapped back to the original image. For inference, this study used the YOLO-v5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method effectively detected large and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a large-scale view of a construction site, which existing algorithms detected inaccurately. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept, optimization of safety routes by accumulating workers' paths, and inference of risky areas from workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management system more effectively.
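
The row/column splitting idea lends itself to a short sketch. The following is a minimal illustration, not the authors' code: `detect` is a hypothetical stand-in for a YOLO-v5 inference call, and the helper simply crops the frame into a grid, detects per tile, and shifts the boxes back into full-frame coordinates.

```python
import numpy as np

def detect(tile: np.ndarray):
    """Hypothetical detector returning (x1, y1, x2, y2, label) in tile coords.
    In practice this would wrap a YOLO-v5 model; cropping means each object
    occupies more of the detector's input, which is what rescues small ones."""
    return []  # replace with real inference

def detect_tiled(frame: np.ndarray, rows: int, cols: int):
    """Split the frame into rows x cols tiles (chosen from the target object
    size, as in the paper), detect per tile, and map boxes back."""
    h, w = frame.shape[:2]
    th, tw = h // rows, w // cols
    boxes = []
    for r in range(rows):
        for c in range(cols):
            tile = frame[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for x1, y1, x2, y2, label in detect(tile):
                boxes.append((x1 + c * tw, y1 + r * th,   # shift tile-local
                              x2 + c * tw, y2 + r * th, label))  # to full frame
    return boxes
```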


Towards high-accuracy data modelling, uncertainty quantification and correlation analysis for SHM measurements during typhoon events using an improved most likely heteroscedastic Gaussian process

  • Qi-Ang Wang;Hao-Bo Wang;Zhan-Guo Ma;Yi-Qing Ni;Zhi-Jun Liu;Jian Jiang;Rui Sun;Hao-Wei Zhu
    • Smart Structures and Systems
    • /
    • v.32 no.4
    • /
    • pp.267-279
    • /
    • 2023
  • Data modelling and interpretation of structural health monitoring (SHM) field data are critical for evaluating structural performance and quantifying the vulnerability of infrastructure systems. To improve data modelling accuracy and extend the application range from regression analysis to out-of-sample forecasting, an improved most likely heteroscedastic Gaussian process (iMLHGP) methodology is proposed in this study by incorporating an out-of-sample forecasting algorithm. The proposed iMLHGP method overcomes the constant-variance limitation of the standard Gaussian process (GP) and can be used to estimate non-stationary, highly volatile typhoon-induced response statistics. A first attempt at performing regression and forecasting analysis on structural responses with the proposed iMLHGP method is presented by applying it to real-world field SHM data from an instrumented cable-stayed bridge during typhoon events. Uncertainty quantification and correlation analysis were also carried out to investigate the influence of typhoons on bridge strain data. Results show that the iMLHGP method achieves high accuracy in both regression and out-of-sample forecasting. The iMLHGP framework accounts for data heteroscedasticity while replacing the exact analytical treatment of the noise variance with a point estimate of its most likely value, avoiding intensive computational effort. According to the uncertainty quantification and correlation analysis, the uncertainties of the strain measurements are affected by both traffic and wind speed; the overall change in bridge strain is governed by temperature, while the local fluctuation is strongly affected by wind speed under typhoon conditions.
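
For orientation, here is a minimal sketch of the most-likely-heteroscedastic-GP loop that the paper builds on (the authors' iMLHGP adds an out-of-sample forecasting algorithm on top of this, which is not reproduced here); the toy data and kernel choices are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 200)[:, None]
# toy response whose noise level grows with X (heteroscedastic)
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.05 + 0.2 * X[:, 0] / 10, 200)

mean_gp = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(X, y)
for _ in range(3):  # "most likely" iterations
    resid2 = (y - mean_gp.predict(X)) ** 2
    # point estimate of the most likely noise variance at each input,
    # smoothed by a second GP fitted to the log squared residuals
    noise_gp = GaussianProcessRegressor(RBF()).fit(X, np.log(resid2 + 1e-8))
    alpha = np.exp(noise_gp.predict(X))  # input-dependent noise variances
    mean_gp = GaussianProcessRegressor(RBF(), alpha=alpha).fit(X, y)
```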

Comparative analysis on Darcy-Forchheimer flow of 3-D MHD hybrid nanofluid (MoS2-Fe3O4/H2O) incorporating melting heat and mass transfer over a rotating disk with Dufour and Soret effects

  • A.M. Abd-Alla;Esraa N. Thabet;S.M.M.El-Kabeir;H. A. Hosham;Shimaa E. Waheed
    • Advances in nano research
    • /
    • v.16 no.4
    • /
    • pp.325-340
    • /
    • 2024
  • There are several novel uses for dispersing nanoparticles into a conventional fluid, including dynamic sealing, damping, heat dissipation, microfluidics, and more. Therefore, the melting heat and mass transfer characteristics of a 3-D MHD hybrid nanofluid flow over a rotating disc, including Dufour and Soret effects, are assessed numerically in this study. We consider magnetite (Fe3O4) and molybdenum disulfide (MoS2) nanoparticles suspended in water as the base fluid. The governing partial differential equations are transformed into coupled higher-order non-linear ordinary differential equations by a local similarity transformation. The resulting equations are then solved using a Chebyshev spectral collocation-based algorithm built into the Mathematica software. Graphical and numerical results are given to demonstrate how different hybrid/nanofluid cases are affected by changes in temperature, velocity, and the distribution of nanoparticle concentration, and the computational findings are shown for many values of the material parameters. Simulations conducted for different physical parameters show that adding hybrid nanoparticles to the fluid mixture increases heat transfer compared with simple nanofluids; hybrid nanoparticles, as opposed to single-type nanoparticles, should therefore be considered when designing an effective thermal system. Furthermore, porosity lowers the velocities of both simple and hybrid nanofluids. The results also show that the drag force from skin friction causes the single-nanoparticle fluid to travel more slowly than the hybrid-nanoparticle fluid, and that suction factors such as the magnetic and porosity parameters, as well as the nanoparticles, raise the skin friction coefficient. The outcomes from different flow scenarios correlate with and strongly agree with findings from the published literature. Bar-chart depictions are altered by changes in flow rates. Moreover, the results support clinical views on prescribing hybrid-nanoparticle and single-nanoparticle contents for achalasia patients and for those who suffer from esophageal stricture and tumors. The results of this study can also be applied to the energy generated by a melting disc surface, which has a variety of industrial uses, including, but not limited to, the preparation of semiconductor materials, the solidification of magma, the melting of permafrost, and the refreezing of frozen ground.
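
Chebyshev spectral collocation is easiest to see on a toy problem. The sketch below is an assumption-laden stand-in for the Mathematica routine the authors use: it builds the standard Chebyshev differentiation matrix (Trefethen's construction) and solves a simple linear boundary value problem rather than the coupled nanofluid system.

```python
import numpy as np

def cheb(n):
    """Chebyshev-Gauss-Lobatto points and differentiation matrix (Trefethen)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    return D - np.diag(D.sum(axis=1)), x

D, x = cheb(32)
A = D @ D                                   # second-derivative operator
b = -np.pi ** 2 * np.sin(np.pi * x)         # solve u'' = -pi^2 sin(pi x)
A[0, :], A[-1, :], b[0], b[-1] = 0.0, 0.0, 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0                   # boundary conditions u(+-1) = 0
u = np.linalg.solve(A, b)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # error near machine precision
```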

Research on the Development of Distance Metrics for the Clustering of Vessel Trajectories in Korean Coastal Waters (국내 연안 해역 선박 항적 군집화를 위한 항적 간 거리 척도 개발 연구)

  • Seungju Lee;Wonhee Lee;Ji Hong Min;Deuk Jae Cho;Hyunwoo Park
    • Journal of Navigation and Port Research
    • /
    • v.47 no.6
    • /
    • pp.367-375
    • /
    • 2023
  • This study developed a new distance metric for vessel trajectories, applicable to marine traffic control services in Korean coastal waters. The proposed metric is a weighted sum of the traditional Hausdorff distance, which measures the similarity between spatiotemporal data, and the differences in average Speed Over Ground (SOG) and in the variance of Course Over Ground (COG) between two trajectories. To validate the effectiveness of the new metric, a comparative analysis was conducted using actual Automatic Identification System (AIS) trajectory data in conjunction with an agglomerative clustering algorithm. Data visualizations confirmed that trajectory clustering with the new metric reflects geographical distances and the distribution of vessel behavioral characteristics more accurately than conventional metrics such as the Hausdorff distance and the Dynamic Time Warping distance. Quantitatively, based on the Davies-Bouldin index, the clustering results were superior or comparable, and the metric was exceptionally efficient to compute.
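
The proposed metric composes three interpretable terms, so a short sketch suffices. The weights below are hypothetical (the paper tunes the weighting for Korean coastal AIS data), and the column layout of the trajectory arrays is an assumption.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def trajectory_distance(t1, t2, w1=1.0, w2=1.0, w3=1.0):
    """t1, t2: arrays of shape (n, 4), columns = (lat, lon, SOG, COG)."""
    p1, p2 = t1[:, :2], t2[:, :2]
    d_pos = max(directed_hausdorff(p1, p2)[0],
                directed_hausdorff(p2, p1)[0])      # symmetric Hausdorff
    d_sog = abs(t1[:, 2].mean() - t2[:, 2].mean())  # mean-speed difference
    d_cog = abs(t1[:, 3].var() - t2[:, 3].var())    # course-variance difference
    return w1 * d_pos + w2 * d_sog + w3 * d_cog
```

A matrix of these pairwise distances can then be passed to any agglomerative clustering implementation that accepts precomputed distances, mirroring the comparative analysis in the paper.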

Improvements for Atmospheric Motion Vectors Algorithm Using First Guess by Optical Flow Method (옵티컬 플로우 방법으로 계산된 초기 바람 추정치에 따른 대기운동벡터 알고리즘 개선 연구)

  • Oh, Yurim;Park, Hyungmin;Kim, Jae Hwan;Kim, Somyoung
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.763-774
    • /
    • 2020
  • Wind forecasts from a numerical weather prediction (NWP) model are generally used as the first guess in the target tracking process for deriving atmospheric motion vectors (AMVs), because they increase tracking accuracy and reduce computational time. However, this creates a contradiction: the NWP model used as the first guess is used again as the reference in the AMV verification process. Overcoming this problem requires a model-independent first guess. In this study, we propose deriving AMVs with a first guess computed by the Lucas-Kanade optical flow method. To retrieve AMVs, Himawari-8/AHI geostationary satellite level-1B data were used at 00, 06, 12, and 18 UTC from August 19 to September 5, 2015. To evaluate the impact of the optical flow method on AMV derivation, cross-validation was conducted in three ways: (1) without a first guess, (2) with NWP (KMA/UM) forecast wind as the first guess, and (3) with optical-flow-based wind as the first guess. In verification against ECMWF ERA-Interim reanalysis data, the highest precision (RMSVD: 5.296-5.804 m/s) was obtained using the optical-flow-based wind as the first guess. In addition, AMV derivation was slowest without a first guess, while the other two configurations performed similarly. Thus, this study showed that the optical flow method is very effective as a first guess for model-independent AMV derivation in the target tracking process of the AMV algorithm.
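
A minimal sketch of the first-guess step with OpenCV's pyramidal Lucas-Kanade tracker follows; the file names, image interval, and pixel-to-metre scale are hypothetical, and the operational AMV algorithm tracks brightness-temperature targets rather than generic corner features.

```python
import cv2

img0 = cv2.imread("ahi_t0.png", cv2.IMREAD_GRAYSCALE)  # brightness at t
img1 = cv2.imread("ahi_t1.png", cv2.IMREAD_GRAYSCALE)  # brightness at t + dt

pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=500,
                               qualityLevel=0.01, minDistance=10)
pts1, status, _err = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)

dt = 600.0         # image interval in seconds (10-minute full-disk scan)
m_per_px = 2000.0  # hypothetical ground resolution (m per pixel)
ok = status.ravel() == 1
flow = (pts1[ok] - pts0[ok]).reshape(-1, 2)  # pixel displacements
uv = flow * m_per_px / dt                    # first-guess wind components (m/s)
print(uv.mean(axis=0))
```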

Three-Dimensional High-Frequency Electromagnetic Modeling Using Vector Finite Elements (벡터 유한 요소를 이용한 고주파 3차원 전자탐사 모델링)

  • Son Jeong-Sul;Song Yoonho;Chung Seung-Hwan;Suh Jung Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.5 no.4
    • /
    • pp.280-290
    • /
    • 2002
  • A three-dimensional (3-D) electromagnetic (EM) modeling algorithm has been developed using the finite element method (FEM) to provide more efficient interpretation techniques for EM data. When FEM based on nodal elements is applied to EM problems, spurious solutions, so-called 'vector parasites', occur due to the discontinuity of normal electric fields and may lead to completely erroneous results. Among the methods for curing this spurious problem, this study adopts vector elements, whose basis functions have both amplitude and direction. To reduce computational cost and required core memory, the complex bi-conjugate gradient (CBCG) method is applied to solve the complex symmetric FEM matrix, and the point Jacobi method is used to accelerate the convergence rate. To verify the developed 3-D EM modeling algorithm, its electric and magnetic fields for a layered-earth model are compared with the layered-earth solution. As expected, the vector-based FEM developed in this study does not suffer from the vector parasite problem, while the conventional nodal-based FEM produces large errors due to the discontinuity of the field variables. To test applicability at high frequencies, 100 MHz was used as the operating frequency for the layered structure. The fields calculated by the developed code also match the layered-earth solutions well for models with a dielectric anomaly as well as a conductive anomaly. For a vertical electric dipole source, the discontinuity of the field variables causes the conventional nodal-based FEM to include large vector parasite errors; even in that case, the vector-based FEM gives almost the same results as the layered-earth solution. The magnetic fields induced by a dielectric anomaly at high frequencies show unique behaviors, different from those induced by a conductive anomaly. Since our 3-D EM modeling code can reflect the effects of dielectric as well as conductive anomalies, it may serve as groundwork both for applying the high-frequency EM method to field surveys and for analyzing the field data obtained by it.
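
The solver strategy, a conjugate-gradient-type iteration for a complex symmetric (not Hermitian) matrix with point Jacobi preconditioning, can be sketched briefly. The code below is the textbook unconjugated-inner-product variant applied to a dense toy system, not the authors' sparse edge-element solver.

```python
import numpy as np

def cocg_jacobi(A, b, tol=1e-10, maxit=1000):
    """Solve Ax = b for complex symmetric A (A == A.T, not Hermitian) with a
    point Jacobi preconditioner and the unconjugated inner product."""
    m_inv = 1.0 / np.diag(A)      # point Jacobi (diagonal) preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = m_inv * r
    p = z.copy()
    rho = r @ z                   # note: no complex conjugation
    for _ in range(maxit):
        q = A @ p
        alpha = rho / (p @ q)
        x += alpha * p
        r -= alpha * q
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = m_inv * r
        rho, rho_old = r @ z, rho
        p = z + (rho / rho_old) * p
    return x

rng = np.random.default_rng(1)
n = 50
B = rng.random((n, n)) + 1j * rng.random((n, n))
A = B + B.T + n * np.eye(n)       # complex symmetric, diagonally dominant
b = rng.random(n) + 1j * rng.random(n)
print(np.linalg.norm(A @ cocg_jacobi(A, b) - b))
```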

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification: data of higher dimensions require many computations, which can lead to high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve model performance. Diverse methods have been proposed, from merely lessening noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. Moreover, the representation and selection of text features affect the performance of classifiers for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies words that are not important, we assume that words similar to those words also have little impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally remove words that are similar to the words with low information gain values and build word embeddings. Finally, the filtered text and word embeddings are fed to deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes, with a helpful-vote ratio over 70%, were classified as helpful reviews; Yelp only shows the number of helpful votes, so we randomly sampled 100,000 reviews with more than five helpful votes from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that used all the words, and showed that one of the proposed methods outperforms the embeddings built on all words. By removing unimportant words, we can obtain better performance; however, removing too many words lowered performance.
For future research, diverse preprocessing approaches and in-depth analysis of word co-occurrence should be considered when measuring similarity values among words. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, making it possible to explore combinations of word embedding and elimination methods.
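
A minimal sketch of the elimination pipeline described above, with a toy corpus and hypothetical thresholds: information gain is measured with scikit-learn's mutual information estimator, and words cosine-similar to the eliminated ones are dropped as well before the embedding is rebuilt.

```python
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

docs = ["great battery life", "screen cracked fast", "great value", "broke fast"]
labels = [1, 0, 1, 0]  # 1 = helpful, 0 = not helpful (toy labels)

vec = CountVectorizer()
X = vec.fit_transform(docs)
ig = mutual_info_classif(X, labels, discrete_features=True)
vocab = vec.get_feature_names_out()
low_ig = {w for w, g in zip(vocab, ig) if g < 0.1}  # hypothetical threshold

# second step: also drop words whose embeddings are close to the dropped ones
w2v = Word2Vec([d.split() for d in docs], vector_size=50, min_count=1, seed=0)
similar = {s for w in low_ig if w in w2v.wv
           for s, sim in w2v.wv.most_similar(w, topn=3) if sim > 0.9}

drop = low_ig | similar
filtered = [" ".join(t for t in d.split() if t not in drop) for d in docs]
print(filtered)  # filtered text, ready for re-embedding and classification
```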