• Title/Summary/Keyword: Computational Science application


Generalized Sigmoidal Basis Function for Improving the Learning Performance of Multilayer Perceptrons (다층 퍼셉트론의 학습 성능 개선을 위한 일반화된 시그모이드 베이시스 함수)

  • Park, Hye-Yeong;Lee, Gwan-Yong;Lee, Il-Byeong;Byeon, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.26 no.11
    • /
    • pp.1261-1269
    • /
    • 1999
  • A multilayer perceptron is the best-known neural network model and has been successfully applied in many fields. However, the slow learning caused by plateaus and the local minima of gradient-descent learning have been pointed out as the biggest problems in its practical use. Although a number of learning algorithms have been developed to address these problems, no fully satisfactory solution has been presented so far, largely because of the computational inefficiency of the proposed algorithms. In this paper, we propose a new learning approach that alleviates plateaus and reduces the chance of getting trapped in local minima by generalizing the sigmoidal function used as the basis function of a multilayer perceptron. Because this approach differs from conventional methods based on revised weight-update equations, it can be combined with the existing methods to further improve learning performance. To verify its performance, we tested the proposed method on simple pattern recognition problems and, in combination with an existing speed-up method, on a time series prediction problem; the results confirm the efficiency of the proposed method.
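The abstract does not give the exact form of the generalized sigmoid; a common generalization adds a slope parameter and an amplitude parameter to the standard logistic function. A minimal sketch under that assumption (the parameter names `a` and `c` are illustrative, not the paper's notation):

```python
import math

def generalized_sigmoid(x, a=1.0, c=1.0):
    """Generalized sigmoidal basis function: c / (1 + exp(-a * x)).

    a controls the slope (steepness) and c the output amplitude;
    with a = c = 1 this reduces to the standard logistic sigmoid.
    A steeper slope changes where the function saturates, which is
    the kind of degree of freedom used to mitigate learning plateaus.
    """
    return c / (1.0 + math.exp(-a * x))

print(generalized_sigmoid(0.0))          # standard sigmoid at 0 is 0.5
print(generalized_sigmoid(1.0, a=4.0))   # steeper variant
```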

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.49-55
    • /
    • 2015
  • According to traffic accident statistics for the last five years, more accidents occurred at night than during the day. Among the various causes of traffic accidents, one major cause is inappropriate or missing street lights, which confuse the driver's sight. In this paper, we designed and implemented a smartphone application that measures lane luminance and stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street light facilities and areas without street lights. The application is implemented in a native C/C++ environment using the Android NDK, which improves its running speed over code written in Java or other languages. To measure road luminance, the input image in the RGB color space is converted to the YCbCr color space, and the Y value gives the luminance of the road. The application detects the road lane, computes the lane luminance, and uploads it to the database server. It receives the road video from the smartphone camera and reduces the computational cost by restricting processing to an ROI (region of interest) of each input image. The ROI is converted to a grayscale image, and the Canny edge detector is applied to extract the outlines of the lanes. A Hough line transform is then applied to obtain a group of candidate lanes, and a lane detection algorithm that uses the gradients of the candidate lanes selects both sides of the lane. When both lanes are detected, a triangular area is set up with its apex 20 pixels below the intersection of the lanes, and the road luminance is estimated from this triangle: a Y value is calculated from the R, G, and B values of each pixel in the triangle. The average Y value of the pixels is mapped onto a range from 0 to 100 to report the road luminance, and each pixel value is represented by a color between black and green. After analyzing the road lane video and the luminance of the road about 60 meters ahead, the application stores the car's location, obtained from the smartphone's GPS sensor, in the database server by wireless communication every 10 minutes. We expect that the collected road luminance information can warn drivers for safe driving and effectively improve road luminance renovation plans.
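The Y-from-RGB step above can be sketched with the standard BT.601 luma weights (assuming, as is common, that these are the coefficients behind the RGB-to-YCbCr conversion; the 0-100 rescaling is an assumption about the paper's mapping):

```python
def rgb_to_luma(r, g, b):
    """BT.601 luma (the Y of YCbCr) from 8-bit RGB components."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_to_road_scale(y):
    """Map an 8-bit luma value (0-255) onto a 0-100 road-luminance scale."""
    return y / 255.0 * 100.0

# Pure white maps to the top of the scale.
y = rgb_to_luma(255, 255, 255)
print(luma_to_road_scale(y))   # 100.0
```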

Development of Network Based MT Data Processing System (네트워크에 기반한 MT자료의 처리기술 개발 연구)

  • Lee Heuisoon;Kwon Byung-Doo;Chung Hojoon;Oh Seokhoon
    • Geophysics and Geophysical Exploration
    • /
    • v.3 no.2
    • /
    • pp.53-60
    • /
    • 2000
  • A server/client system using the web protocol and a network-based distributed computing environment was applied to MT data processing using Java technology. With this network-based system, users can obtain consistent and stable results, because the system uses standard analysis methods and has been tested by many users over the internet. Users can check the MT data processing at any time and obtain results during exploration, which reduces exploration time and cost. Pure and enterprise Java technology provides the facilities needed to develop the network-based MT data processing system. Web-based socket communication and RMI technology were each tested in order to produce an effective and practical client application. Intrinsically, the interpretation of MT data, which involves inversion and data processing, requires heavy computation. We therefore adopted the MPI parallel processing technique to meet the needs of in-situ users and expect it to be effective for the control and upgrade of the program code.


The Adaptive Personalization Method According to Users Purchasing Index : Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.95-108
    • /
    • 2011
  • This is a study of a personalization method that intelligently adapts the level of clustering to a customer's purchasing index. In the e-business era, many companies gather customers' demographic and transactional information, such as age, gender, purchasing date, and product category, and use it to predict customer preferences or purchasing patterns so that they can provide more customized services. The conventional Customer-Segmentation method provides customized services for each customer group: it clusters the whole customer set into groups based on similarity and builds a predictive model for each resulting group. It thus keeps the number of predictive models manageable and supplies more data for customers who do not have enough data of their own, by using the data of other similar customers. However, this method often fails to provide highly personalized services to each customer, which is especially important for VIP customers. Furthermore, it clusters the customers who already have a considerable amount of data together with those who have only a little, which increases computational cost unnecessarily without a significant performance improvement. The other conventional method, the 1-to-1 method, provides more customized services than the Customer-Segmentation method because each predictive model is built using only the data of the individual customer. It not only provides highly personalized services but also builds a relatively simple and less costly model for each customer. However, the 1-to-1 method does not produce a good predictive model when a customer has only a few records; if a customer has an insufficient amount of transactional data, its performance deteriorates.
To overcome the limitations of these two conventional methods, we suggest a new method, called the Intelligent Customer Segmentation method, that provides adaptively personalized services according to the customer's purchasing index. The suggested method clusters customers according to their purchasing index, so that predictions for customers with few purchases are based on the data of more intensively clustered groups, while VIP customers, who already have a considerable amount of data, are clustered to a much lesser extent or not clustered at all. The main idea is to apply the clustering technique only when the number of transactional records for the target customer is smaller than a predefined criterion data size. To find this criterion, we suggest an algorithm called sliding window correlation analysis, which finds the transactional data size at which the performance of the 1-to-1 method drops sharply due to data sparsity. After finding this criterion data size, we apply the conventional 1-to-1 method to customers who have more data than the criterion, and apply the clustering technique to those who have less, so that at least the criterion amount of data is available for model building. We applied the two conventional methods and the newly suggested method to Nielsen's beverage purchasing data to predict the customers' purchasing amounts and purchasing categories, using two data mining techniques (Support Vector Machine and linear regression) and two performance measures (MAE and RMSE). The results show that the suggested Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and achieves the same level of performance as the Customer-Segmentation method at a much lower computational cost.
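The routing idea above — model each customer alone when they have enough data, and borrow cluster-level data otherwise — can be sketched as follows (the criterion value, function name, and fill-up strategy are illustrative, not the paper's exact algorithm):

```python
def choose_training_data(customer_data, cluster_data, criterion=30):
    """Route between 1-to-1 and segment-style training sets.

    If the target customer has at least `criterion` transactions, train
    on that customer's data alone (1-to-1). Otherwise borrow records
    from the customer's cluster so the model sees at least `criterion`
    records, mimicking the adaptive-clustering fallback.
    """
    if len(customer_data) >= criterion:
        return customer_data                      # 1-to-1: personal model
    needed = criterion - len(customer_data)
    return customer_data + cluster_data[:needed]  # pad with cluster data

vip = list(range(50))           # plenty of transactions: stays 1-to-1
sparse = [1, 2, 3]              # too few: padded from the cluster pool
pool = list(range(100, 160))
print(len(choose_training_data(vip, pool)))     # 50
print(len(choose_training_data(sparse, pool)))  # 30
```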

Computational estimation of the earthquake response for fibre reinforced concrete rectangular columns

  • Liu, Chanjuan;Wu, Xinling;Wakil, Karzan;Jermsittiparsert, Kittisak;Ho, Lanh Si;Alabduljabbar, Hisham;Alaskar, Abdulaziz;Alrshoudi, Fahed;Alyousef, Rayed;Mohamed, Abdeliazim Mustafa
    • Steel and Composite Structures
    • /
    • v.34 no.5
    • /
    • pp.743-767
    • /
    • 2020
  • Owing to their impressive flexural performance, enhanced compressive strength, and more constrained crack propagation, fibre-reinforced concrete (FRC) has been widely employed in construction. The majority of experimental studies have focused on the seismic behavior of FRC columns. Based on valid experimental data obtained from previous studies, the current study evaluates the seismic response and compressive strength of FRC rectangular columns using hybrid metaheuristic techniques. Because of the non-linearity of seismic data, an adaptive neuro-fuzzy inference system (ANFIS) has been combined with metaheuristic algorithms. 317 datasets from FRC column tests were compiled into one database in order to determine the most influential factors on the ultimate strengths of FRC rectangular columns subjected to simulated seismic loading. ANFIS was used with Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA), and an extreme learning machine (ELM), an established prediction method, was used concurrently for comparison. A variable selection procedure was applied to choose the most dominant parameters affecting the ultimate strengths. The results show that ANFIS-PSO successfully predicted the seismic lateral load, with R² = 0.857 and 0.902 for the test and training phases, respectively, and was therefore nominated as the lateral load estimator. For compressive strength prediction, ELM achieved R² = 0.657 and 0.862 for the test and training phases, respectively. The seismic lateral force thus proved more predictable than the compressive strength of FRC rectangular columns, with the best results belonging to the lateral force prediction. Compressive strength prediction showed significant deviation above 40 MPa, which could be related to considerable non-linearity and possible empirical shortcomings. Finally, the ANFIS-GA and ANFIS-PSO techniques are a promising and reliable approach for evaluating the seismic response of FRC, as a replacement for costly and time-consuming experimental tests.
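The R² figures quoted above are instances of the standard coefficient of determination; a minimal sketch of that metric (the toy data below are illustrative, not the paper's):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)          # total variance
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # residuals
    return 1.0 - ss_res / ss_tot

# A perfect prediction scores 1.0; predicting the mean scores 0.0.
print(r_squared([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]))  # 1.0
print(r_squared([1.0, 2.0, 3.0, 4.0], [2.5, 2.5, 2.5, 2.5]))  # 0.0
```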

A Study on the Application of a Drone-Based 3D Model for Wind Environment Prediction

  • Jang, Yeong Jae;Jo, Hyeon Jeong;Oh, Jae Hong;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.2
    • /
    • pp.93-101
    • /
    • 2021
  • Recently, with urban redevelopment and the spread of planned cities, interest in the wind environment has been increasing; it is related not only to the design of buildings and landscaping but also to pedestrian comfort. Numerical analysis for wind environment prediction is underway in many settings, such as dense areas of high-rise buildings and apartment complexes, and a precise 3D building model is essential in this process. Many studies on wind environment analysis have typically created a 3D model from the building layer included in GIS (Geographic Information System) data. Such data make it easy and fast to observe the flow of the atmosphere across a wide urban environment, but they are not suitable for observing precise flows; in particular, the effect of the complicated structure of a single building on the airflow cannot be computed. Recently, drone photogrammetry has shown the advantage of being able to perform building modeling automatically from a large number of images. In this study, we applied drone photogrammetry to evaluate the airflow around two buildings located close to each other. Two 3D models were built, one with an automatic modeling technique and one with a manual technique: auto-modeling generates a point cloud automatically through photogrammetry and produces a model through interpolation, while manual modeling constructs 3D models individually by hand based on the point cloud. The airflow around the two models was then compared and analyzed. As a result, the wind environments of the two models showed a clear difference, and the auto-modeled building showed faster airflow than the manually modeled one. In addition, the 3D mesh generated by auto-modeling could not support an accurate analysis because the precise 3D shape was not reproduced in enclosed areas such as the porch of a building or a bridge between buildings.

Utilization of $CO_2$ Influenced by Windbreak in an Elevated Production System for Strawberry (딸기 고설재배시설에서의 이산화탄소 농도 유지를 위한 방풍막 설치 효과)

  • Kim, Y.-H.;Lee, I.-B.;Chun, Chang-Hoo;Hwang, H.-S.;Hong, S.-W.;Seo, I.-H.;Yoo, J.-I.;Bitog, Jessie P.;Kwon, K.-S.
    • Journal of Bio-Environment Control
    • /
    • v.18 no.1
    • /
    • pp.29-39
    • /
    • 2009
  • The influence of a windbreak in minimizing the ventilation velocity near the plant canopy of greenhouse strawberries was thoroughly investigated using computational fluid dynamics (CFD). Windbreaks were constructed around the plant canopy to control ventilation and maintain the concentration of $CO_2$ supplied from the soil surface close to the strawberry plants. Cases with no windbreak and with 0.15 m and 0.30 m high windbreaks, at air velocities of 0.5, 1.0, and 1.5 m/s, were simulated, and the concentrations of supplied $CO_2$ within the plant canopy were measured. To simplify the model, plants were not included in the final model. At a wind velocity of 1.0 m/s, which is the normal wind velocity in greenhouses, the concentrations of $CO_2$ were approximately 420, 580, and 653 ppm ($1{\times}10^{-9}kg/m^3$) for no windbreak and for the 0.15 m and 0.30 m windbreak heights, respectively. Considering that the maximum concentration of $CO_2$ for strawberry plants is around 600-800 ppm, the 0.30 m windbreak height is highly recommended. This study revealed that the windbreak was very effective in preserving $CO_2$ within the plant canopy. Moreover, the study showed that the CFD technique can be used to determine the concentration of $CO_2$ available for plant consumption within the plant canopy under any designed condition. For a more in-depth application of this study, the plants themselves, as well as different conditions for $CO_2$ utilization, should be considered.

The Numerical Study on the Ventilation of Non-isothermal Concentrated Fume (수치해석적 방법을 이용한 비등온 고농도 연무의 배기량 산정에 관한 연구)

  • Lim, Seok-Chai;Chang, Hyuk-Sang;Ha, Ji-Soo
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.30 no.5
    • /
    • pp.534-543
    • /
    • 2008
  • An experimental study with a prototype provides more reliable data than other approaches, but many constraints limit experimental studies with prototypes. Therefore, theoretical similitude with a scaled model and numerical study with the CFD method have been chosen as alternatives for analyzing fume movement. In this study, the ventilation rate was estimated from the results of a numerical study that used the experimental results as boundary conditions. Grids A and B had the same size and shape as the models used in the experimental study and consisted of 163,839 and 122,965 cells, respectively. The height of the fume layer was estimated from the mole fractions of the fume components, and the ventilation rate was determined from the velocity and temperature of the fume. The results show that the fume movements estimated from the numerical study can be applied to the prototype if proper heat loss correction factors are available. A numerical study makes it easier to change the study conditions and faster to obtain results than an experimental study, so once proper heat loss correction factors are found, various advanced studies can be carried out numerically.

Internal Defect Evaluation of Spot Weld Parts and Carbon Composites using the Non-contact Air-coupled Ultrasonic Transducer Method (비접촉 초음파 탐상기법을 이용한 스폿용접부 및 탄소복합체의 내부 결함평가)

  • Kwak, Nam-Su;Lee, Seung-Chul
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.11
    • /
    • pp.6432-6439
    • /
    • 2014
  • The NAUT (Non-contact Air-coupled Ultrasonic Testing) technique is an ultrasonic testing method that enables non-contact inspection by compensating for the energy loss caused by the acoustic impedance mismatch of air, using an ultrasonic pulser-receiver, a pre-amplifier, and a high-sensitivity transducer. Because NAUT is performed with steady ultrasonic transmission and reception, it can test materials at high or low temperatures and specimens with rough surfaces or narrow parts, which could not be tested with conventional contact-type techniques. In this study, the internal defects of spot welds, which are widely used in auto parts, and of CFRP parts were inspected to determine whether the NAUT technique is practical enough to commercialize. A soundly spot-welded region has high ultrasonic transmissivity and is shown in red, whereas a region with an internal defect contains an air layer, has low transmissivity, and is shown in blue. In addition, the color sharpness varies with the PRF (pulse repetition frequency), an important factor that determines the measurement speed. From the images of the CFRP specimens obtained with an imaging device, it was possible to identify the shape, size, and position of internal defects within a short time. The experiments described above confirmed that both internal defect detection and imaging of defects are possible with the NAUT technique, and that NAUT can be applied to the detection of internal defects in spot-welded parts and CFRP parts and commercialized in various fields.

Prediction of Genomic Relationship Matrices using Single Nucleotide Polymorphisms in Hanwoo (한우의 유전체 표지인자 활용 개체 혈연관계 추정)

  • Lee, Deuk-Hwan;Cho, Chung-Il;Kim, Nae-Soo
    • Journal of Animal Science and Technology
    • /
    • v.52 no.5
    • /
    • pp.357-366
    • /
    • 2010
  • The emergence of next-generation sequencing technologies has led to the application of new computational and statistical methodologies that incorporate genetic information from the entire genomes of many individuals in a population. For example, using single-nucleotide polymorphisms (SNPs) obtained from whole-genome genotyping platforms such as the Illumina BovineSNP50 chip, many researchers are actively engaged in the genetic evaluation of cattle using whole-genome relationship analyses. In this study, we estimated the genomic relationship matrix (GRM) for a Hanwoo population and compared it with the pedigree relationship matrix (PRM). This project is a preliminary study for future work on genomic selection and prediction. The data comprised 187 blood samples from the progeny of 20 young bulls, collected after parentage testing from the Hanwoo improvement center of the National Agricultural Cooperative Federation, and 103 blood samples from the progeny of 12 proven bulls, collected from farms in the Kyong-buk area of South Korea. The data set was analyzed in two cases, one including and one excluding missing genotypes, and the effect of missing genotypes on the accuracy of genomic relationship estimation was investigated. Relationships were also estimated chromosome by chromosome for the whole-genome SNP markers, based on the regression method using allele frequencies across loci. For the data that passed the parentage test and excluded missing genotypes, the average correlation coefficient (with standard deviation) between pedigree-based relationships and chromosome-level genomic relationships was $0.81{\pm}0.04$, and the correlation coefficient using whole-genome information was higher, at 0.98. The variation in the relationships between non-inbred half sibs was $0.22{\pm}0.17$ for chromosomal and $0.22{\pm}0.04$ for whole-genome SNP markers. The variation was larger, and unusual values were observed, when data without parentage testing were included. Thus, a relationship matrix built from genomic information can be useful for the genetic evaluation of animal breeding.
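The regression method using allele frequencies across loci corresponds to the widely used VanRaden-style genomic relationship matrix; a minimal NumPy sketch under that assumption (genotypes coded 0/1/2, no missing values — the toy genotype matrix below is illustrative):

```python
import numpy as np

def genomic_relationship_matrix(M):
    """Genomic relationship matrix from a genotype matrix.

    M: (n_individuals, n_markers) array of allele counts coded 0/1/2.
    Returns G = Z Z' / (2 * sum_j p_j (1 - p_j)), where p_j is the
    allele frequency at marker j and Z is the genotype matrix centred
    by twice the allele frequencies (VanRaden's first method).
    """
    M = np.asarray(M, dtype=float)
    p = M.mean(axis=0) / 2.0            # allele frequency per marker
    Z = M - 2.0 * p                     # centre each marker column
    denom = 2.0 * np.sum(p * (1.0 - p)) # scales G toward a relationship scale
    return Z @ Z.T / denom

# Three individuals, three markers.
G = genomic_relationship_matrix([[0, 1, 2], [2, 1, 0], [1, 1, 1]])
print(G.shape)   # (3, 3), symmetric
```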