• Title/Summary/Keyword: optimal algorithm


An Efficient Public Bicycle Reallocation using the Real-Time Bicycle on-Demand HDPRA Scheme (효율적인 공공 자전거 재배치를 위한 실시간 자전거 수요량 기반의 HDPRA 기법 제안)

  • Eun-Ok Yun;Kang-Min Kim;Hye-Sung Park;Sung-Wook Chung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.2 / pp.83-92 / 2024
  • Currently, various countries are enhancing accessibility by providing bicycle rental services for convenient everyday use. This paper examines the Nubija public bicycle service in Changwon, South Korea, aiming to address the imbalance between demand and supply of Nubija bicycles. We propose a Highest Priority Reallocation Scheme to prevent this disparity. Comparing this scheme with one that visits terminals randomly for redistribution and one that prioritizes the terminals closest to the current location, we demonstrate its superior efficiency. Our proposed Highest Priority Reallocation Scheme prioritizes nearby terminals with the highest demand and shortest distances. In experiments, the proposed scheme achieves the best performance, with the lowest average travel distance of 817.44 km and an average of 6,437.45 successful rentals, i.e., an 88.14% rental success rate, highlighting its superiority over the other two algorithms.
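The prioritization rule described in the abstract, visiting the terminal with the highest demand and shortest distance first, can be sketched as a greedy selection. The demand-per-distance score below is an illustrative assumption, not the paper's exact formula:

```python
# Hypothetical sketch of one highest-priority reallocation step:
# among candidate terminals, visit the one with the highest demand
# and the shortest distance from the truck's current position.
# The combined score (demand / distance) is an illustrative choice.

def next_terminal(current, terminals):
    """Pick the terminal maximizing demand per unit distance."""
    def score(t):
        dx = t["x"] - current["x"]
        dy = t["y"] - current["y"]
        dist = (dx * dx + dy * dy) ** 0.5
        return t["demand"] / max(dist, 1e-9)  # guard against zero distance
    return max(terminals, key=score)

truck = {"x": 0.0, "y": 0.0}
terminals = [
    {"name": "A", "x": 1.0, "y": 0.0, "demand": 5},
    {"name": "B", "x": 4.0, "y": 3.0, "demand": 30},
    {"name": "C", "x": 0.5, "y": 0.5, "demand": 2},
]
print(next_terminal(truck, terminals)["name"])
```

Terminal B wins here despite being farther away, because its much higher demand dominates the score; a real reallocation scheme would rerun this selection after each visit as demands change.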

Design of lift-down kitchen cabinet for elderly and disabled (고령자 및 장애인을 위한 승강형 주방 상부장 설계)

  • Kibum Shim;Hoon Shim;Geon-Hyeok Lim;Jiwon Jang;Sang-Hyun Kim
    • The Journal of the Convergence on Culture Technology / v.10 no.1 / pp.465-470 / 2024
  • Kitchen cabinets are widely used for their spacious storage and efficient use of space, but their high installation location makes them difficult for the elderly and disabled to access. In this paper, we therefore propose a new height-adjustable kitchen cabinet that can be used more easily and safely. The lift-down range of the cabinet was set considering both the installation location required for efficient use of kitchen space and the maximum height accessible to the elderly and disabled, and the link geometry and driving method of the complex link mechanism were determined through a mechanism design procedure to ensure that the cabinet descends safely along the optimal path. In addition, an appropriate motor and control algorithm were added so that the user can lower the cabinet to the desired height with a simple button operation. It was confirmed through actual fabrication that the proposed linkage mechanism performs the desired lift-down motion.

Analysis of Machine Learning Research Patterns from a Quality Management Perspective (품질경영 관점에서 머신러닝 연구 패턴 분석)

  • Ye-eun Kim;Ho Jun Song;Wan Seon Shin
    • Journal of Korean Society for Quality Management / v.52 no.1 / pp.77-93 / 2024
  • Purpose: The purpose of this study is to examine machine learning use cases in manufacturing companies from a digital quality management (DQM) perspective and to analyze and present machine learning research patterns from a quality management perspective. Methods: This study was conducted based on systematic literature review methodology. A comprehensive and systematic review was conducted on manufacturing papers covering the overall quality management process from 2015 to 2022. Three research questions were established according to the goal of the study, and five literature selection criteria were set, on the basis of which approximately 110 research papers were selected. Based on the selected papers, machine learning research patterns according to quality management were analyzed. Results: The results of this study are as follows. Among quality management activities, research on the use of machine learning technology is being conducted most actively in relation to quality defect analysis. The results also suggest that NN-based algorithms are the most actively researched machine learning methods across quality management activities. Lastly, this study suggests that the unique characteristics of each machine learning algorithm should be considered for efficient and effective quality management in the manufacturing industry. Conclusion: This study is significant in that it presents machine learning research trends from an industrial, digital quality management perspective and lays the foundation for identifying optimal machine learning algorithms in future quality management activities.

A Proposed Algorithm and Sampling Conditions for Nonlinear Analysis of EEG (뇌파의 비선형 분석을 위한 신호추출조건 및 계산 알고리즘)

  • Shin, Chul-Jin;Lee, Kwang-Ho;Choi, Sung-Ku;Yoon, In-Young
    • Sleep Medicine and Psychophysiology / v.6 no.1 / pp.52-60 / 1999
  • Objectives: To find appropriate conditions and algorithms for dimensional analysis of human EEG, we calculated correlation dimensions under various sampling rates and data acquisition times and improved the computation algorithm by using bit operations instead of log operations. Methods: EEG signals from 13 scalp leads of one subject were digitized with an A-D converter at 12-bit resolution and a 1,000 Hz sampling rate for 32 seconds. From the original data, we constructed 15 time series with sampling rates of 62.5, 125, 250, 500, and 1,000 Hz and data acquisition times of 10, 20, and 30 seconds, respectively. A new algorithm that shortens calculation time using bit operations, together with the Least Trimmed Squares (LTS) estimator for obtaining the optimal slope, was applied to these data. Results: The correlation dimension increased as the data acquisition time became longer. Data sampled at 62.5 Hz showed the highest correlation dimension regardless of acquisition time, while the other sampling rates yielded similar values. Computing with bit operations instead of log operations shortened calculation time to a statistically significant degree, and the LTS method estimated the slope of the correlation dimension more stably than the Least Squares estimator. Conclusion: The bit-operation and LTS methods enabled efficient, time-saving calculation of the correlation dimension. In addition, a 20-second time series sampled at 125 Hz was adequate for estimating the dimensional complexity of human EEG.
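The correlation-dimension computation the abstract refers to follows the Grassberger-Procaccia approach: count pairs of delay-embedded points closer than a radius r and estimate the slope of log C(r) versus log r. A minimal NumPy sketch (without the authors' bit-operation speed-up), demonstrated on a synthetic sine signal rather than EEG:

```python
import numpy as np

def correlation_sum(signal, dim, delay, r):
    """Grassberger-Procaccia correlation sum: the fraction of
    delay-embedded point pairs lying closer together than r."""
    n = len(signal) - (dim - 1) * delay
    # Delay embedding: each row is one reconstructed state vector.
    emb = np.column_stack([signal[i * delay : i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    close = np.sum(dists[np.triu_indices(n, k=1)] < r)
    return 2.0 * close / (n * (n - 1))

# The slope of log C(r) vs log r over a scaling region estimates the
# correlation dimension; shown here on a plain sine wave, whose
# attractor (a closed curve) has dimension close to 1.
t = np.linspace(0, 8 * np.pi, 500)
x = np.sin(t)
radii = np.array([0.2, 0.4, 0.8])
c = np.array([correlation_sum(x, dim=3, delay=5, r=r) for r in radii])
slope = np.polyfit(np.log(radii), np.log(c), 1)[0]
print(round(slope, 2))
```

For real EEG the slope would be fitted over a carefully chosen scaling region; the LTS estimator mentioned in the abstract makes that fit robust against outlying radii.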


Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility / v.13 no.1 / pp.47-60 / 2010
  • Most classification research has used kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), which are learning-based models, or the Bayesian classifier and NNA (Neural Network Algorithm), which are statistics-based methods. However, these face space and time limitations when classifying the vast number of web pages on today's internet. Moreover, most classification studies use uni-gram feature representation, which poorly captures the real meaning of words. Korean web page classification faces additional problems because many Korean words carry multiple meanings (polysemy). For these reasons, LSA (Latent Semantic Analysis) has been proposed for classification in this environment (large data sets and word polysemy). LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimension. SVD makes it possible to create a new low-dimensional semantic space for representing vectors, which enables efficient classification and analysis of the latent meaning of words or documents (such as web pages). Although LSA performs well, it has drawbacks for classification: as SVD reduces the matrix dimensions and creates the new semantic space, it considers which dimensions represent vectors well, not which dimensions discriminate between them. This is why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects the optimal dimensions to both discriminate and represent vectors well, minimizing these drawbacks and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we obtain further improvement in classification by creating and selecting features through stopword removal and statistical weighting of specific values.
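The plain (unsupervised) LSA step the paper builds on can be illustrated with a truncated SVD of a toy term-document matrix; the supervised, discriminative dimension selection the paper actually proposes is not reproduced here:

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents).
# Values are raw counts; a real system would use tf-idf weighting.
A = np.array([
    [2, 0, 1, 0],
    [1, 0, 2, 0],
    [0, 3, 0, 1],
    [0, 1, 0, 2],
], dtype=float)

# SVD decomposes A into U * diag(s) * Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the k largest singular values: the low-rank
# "latent semantic" space used by plain LSA.
k = 2
docs_k = (np.diag(s[:k]) @ Vt[:k]).T  # each row: a document in k-dim space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents 0 and 2 share vocabulary, so they land close together
# in the reduced space, while documents 0 and 1 do not.
print(round(cosine(docs_k[0], docs_k[2]), 2))
```

The paper's criticism applies exactly here: the top-k singular directions are chosen for reconstruction quality, not for class separation, which is what motivates selecting dimensions by discriminative power instead.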


Treatment of Contaminated Sediment for Water Quality Improvement of Small-scale Reservoir (소하천형 호수의 수질개선을 위한 퇴적저니 처리방안 연구)

  • 배우근;이창수;정진욱;최동호
    • Journal of Soil and Groundwater Environment / v.7 no.4 / pp.31-39 / 2002
  • Pollutants from industry, mining, agriculture, and other sources have contaminated sediments in many surface water bodies. Sediment contamination poses a severe threat to human health and the environment because many toxic contaminants that are barely detectable in the water column can accumulate in sediments at much higher levels. The purpose of this study was to develop an optimal treatment and disposal plan for sediment to improve water quality in a small-scale reservoir, based on an evaluation of the degree of contamination. The degree of contamination was investigated in 23 samples from 9 sites at different sediment depths in the small-scale J river. Analysis of the contaminated sediments showed that the copper concentration of 4 samples exceeded the hazardous waste regulation (3 mg/L) and that all samples exceeded the soil pollution warning level for agricultural areas. Lead and mercury concentrations of all samples were below both regulatory limits. The need for sediment dredging was evaluated for organic matter and nutrients against the standard levels of Paldang Lake and the lower Han River in Korea and of Tokyo Bay and Yokohama Bay in Japan; the degree of contamination by organic matter and nutrients was not serious. Compared with the heavy-metal standards of Japan, America, and Canada, the contaminated sediment was judged to be at the lowest-effect level or within the limit of tolerance, since the American and Canadian standards were established based on the worst effects on benthic organisms. The optimal treatment method for sediment containing heavy metals was cement-based solidification/stabilization to prevent heavy-metal leaching.

Optimal Designs of Urban Watershed Boundary and Sewer Networks to Reduce Peak Outflows (첨두유출량 저감을 위한 도시유역 경계 및 우수관망 최적 설계)

  • Lee, Jung-Ho;Jun, Hwan-Don;Kim, Joong-Hoon
    • Journal of the Korean Society of Hazard Mitigation / v.11 no.2 / pp.157-161 / 2011
  • Although much research has been carried out on watershed division in natural areas, urban watershed division has not been studied. If the boundary between two urban areas is indistinct because there is no natural or administrative division between them, the boundary between urban areas with different outlets (a multi-outlet urban watershed) is determined solely by the designer of the sewer system. The suggested urban watershed division model (UWDM) determines the watershed boundary so as to simultaneously reduce the peak outflows at the outlets of each watershed. The UWDM then determines the sewer network that reduces the peak outflow at each outlet by choosing the pipe connecting directions among manholes with multiple possible connections. Because modifying the sewer network changes how the runoff hydrographs superimpose in the sewer pipes, an optimal sewer layout can reduce the peak outflow at an outlet by as much as the superposition of the hydrographs is reduced. The UWDM therefore optimizes the watershed division in a multi-outlet urban watershed by determining the connecting directions of the boundary manholes using a genetic algorithm. The suggested model was applied to a 50.3 ha multi-outlet urban watershed in Seoul, Korea; with the watershed division produced by the model, the peak outflows at the two outlets decreased by approximately 15% under the design rainfall.
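The genetic-algorithm search described above, encoding one connecting direction per boundary manhole as a binary gene, can be sketched as follows. The fitness function here is a hypothetical placeholder for the simulated peak outflow, not the paper's hydraulic model:

```python
import random

random.seed(42)

N = 12  # number of boundary manholes; each gene picks one of two
        # possible pipe connecting directions (0 or 1)

def peak_outflow(chrom):
    """Hypothetical placeholder for the simulated peak outflow.
    A real application would run a runoff/routing model here."""
    return sum(2.0 if g else 1.0 for g in chrom) + 3.0 * (chrom[0] == chrom[-1])

def evolve(pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=peak_outflow)        # lower outflow = fitter
        survivors = pop[: pop_size // 2]  # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N)       # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=peak_outflow)

best = evolve()
print(peak_outflow(best))
```

The chromosome layout mirrors the paper's decision variables (one direction choice per boundary manhole); only the objective would change when plugging in a real rainfall-runoff simulation.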

OD matrix estimation using link use proportion sample data as additional information (표본링크이용비를 추가정보로 이용한 OD 행렬 추정)

  • 백승걸;김현명;신동호
    • Journal of Korean Society of Transportation / v.20 no.4 / pp.83-93 / 2002
  • To improve estimation performance, research has examined using additional information, beyond traffic counts and a target OD, at additional survey cost. The purpose of this paper is to improve OD estimation performance by reducing the set of feasible solutions using cost-efficient additional information beyond traffic counts and the target OD. For this purpose, we propose an OD estimation method that uses sample link use proportions as additional information. That is, we obtain the relationship between OD trips and link flows from sample link use proportions, which are highly reliable information obtained from roadside surveys, rather than from a traffic assignment of the target OD. Accordingly, this paper proposes an OD estimation algorithm in which link-flow conservation holds under a path-based non-equilibrium traffic assignment concept. Numerical results on a test network show that OD estimation performance can be improved even where the precision of the additional data is low, since the sample link use proportions directly represent the relationship between OD trips and link flows. The method also shows robust estimation performance when traffic counts or OD trips change, since it is not strongly affected by errors in the target OD or the traffic counts. In addition, we note that the level of data precision must be set in consideration of the precision of the other information, because a "precision problem between information" arises when additional information such as sample link use proportions is used, and that methods using traffic counts as basic information must obtain link flows at a certain level of precision to increase the applicability of the additional information. Finally, additional link information entails an optimal counting location problem; in terms of information precision, the optimal survey location problem for sample link use proportions may affect OD estimation performance far more than the optimal counting location problem for link flows.

Development of Neural Network Based Cycle Length Design Model Minimizing Delay for Traffic Responsive Control (실시간 신호제어를 위한 신경망 적용 지체최소화 주기길이 설계모형 개발)

  • Lee, Jung-Youn;Kim, Jin-Tae;Chang, Myung-Soon
    • Journal of Korean Society of Transportation / v.22 no.3 s.74 / pp.145-157 / 2004
  • The cycle length design model of the Korean traffic-responsive signal control systems is devised to vary the cycle length in response to changes in traffic demand in real time, using parameters specified by a system operator and field information such as the degrees of saturation of through phases. Since no explicit guideline is provided to the system operator, the system tends to be ambiguous in terms of system optimization. In addition, the cycle lengths produced by the existing model have not yet been verified as comparable to those minimizing delay. This paper presents studies conducted (1) to find shortcomings in the existing model by comparing the cycle lengths it produces against those minimizing delay and (2) to propose a new direction for designing a cycle length that minimizes delay and excludes such operator-oriented parameters. The study found that the cycle lengths from the existing model fail to minimize delay and lead to unsatisfactory intersection operating conditions when traffic volume is low, due to the changed target volume-to-capacity ratio embedded in the model. Sixty-four different neural-network-based cycle length design models were developed from simulation data serving as a surrogate for field data. The CORSIM optimal cycle lengths minimizing delay were found through the COST software developed for this study; COST searches for the CORSIM optimal cycle length with a heuristic search method, a hybrid genetic algorithm. Among the 64 models, the one producing cycle lengths closest to the optimal was selected through statistical tests. A verification test showed that the best model designs cycle lengths in a pattern similar to those minimizing delay, and the cycle lengths from the proposed model are comparable to those from TRANSYT-7F.

An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk;Kim, Ji-Hun;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.125-141 / 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. Consequently, it becomes more important to handle these attacks appropriately, and there is substantial interest in and demand for effective network security systems such as intrusion detection systems. Intrusion detection systems are network security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed using experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. However, although they perform very well under normal conditions, they cannot handle new or unknown patterns of network attacks. As a result, recent studies on intrusion detection systems use artificial intelligence techniques, which can respond proactively to unknown threats. Researchers have long adopted and tested various artificial intelligence techniques, such as artificial neural networks, decision trees, and support vector machines, to detect network intrusions. However, most have applied these techniques singly, even though combining them may lead to better detection. For this reason, we propose a new integrated model for intrusion detection. Our model combines the prediction results of four different binary classification models, which may be complementary to each other: logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM). Genetic algorithms (GA) are used as the tool for finding the optimal combining weights. The proposed model is built in two steps. In the first step, the integration model whose prediction error (i.e., erroneous classification rate) is lowest is generated. In the second step, the model explores the optimal classification threshold for determining intrusions, which minimizes the total misclassification cost. Calculating the total misclassification cost of an intrusion detection system requires understanding its asymmetric error-cost scheme. There are two common forms of error in intrusion detection. The first is the False-Positive Error (FPE), in which a wrong judgment may result in unnecessary remediation. The second is the False-Negative Error (FNE), which mainly misjudges malicious activity as normal. Compared to FPE, FNE is more fatal, so the total misclassification cost is affected more by FNE than by FPE. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection, collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 10,000 samples from them by random sampling. We also compared the results from our model with those from single techniques to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, and ANN using Neuroshell R4.0; for SVM, LIBSVM v2.90, free software for training SVM classifiers, was used. Empirical results showed that our GA-based model outperformed all the comparative models in detecting network intrusions from the accuracy perspective, and also outperformed them from the total-misclassification-cost perspective. Consequently, this study may contribute to building cost-effective intelligent intrusion detection systems.
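The two-step design above, first combining classifier outputs with weights and then choosing a cost-minimizing threshold, can be illustrated without the GA machinery. The probabilities, weights, and cost values below are made-up stand-ins for the four trained classifiers and the real asymmetric cost scheme:

```python
import numpy as np

# Made-up predicted intrusion probabilities from four classifiers
# (stand-ins for LOGIT, DT, ANN, SVM) on six examples.
probs = np.array([
    [0.90, 0.80, 0.70, 0.85],   # intrusions
    [0.60, 0.70, 0.40, 0.65],
    [0.20, 0.40, 0.30, 0.15],   # normal traffic
    [0.10, 0.20, 0.10, 0.05],
    [0.55, 0.30, 0.50, 0.45],   # intrusion
    [0.30, 0.10, 0.20, 0.25],   # normal
])
labels = np.array([1, 1, 0, 0, 1, 0])

# Step 1 (fixed here; the paper searches these weights with a GA):
# combining weights for the four models, normalized to sum to 1.
w = np.array([0.3, 0.2, 0.3, 0.2])
combined = probs @ w

# Step 2: choose the classification threshold minimizing total
# misclassification cost; a missed intrusion (FNE) costs far more
# than a false alarm (FPE), reflecting the asymmetric cost scheme.
COST_FPE, COST_FNE = 1.0, 10.0

def total_cost(threshold):
    pred = (combined >= threshold).astype(int)
    fpe = np.sum((pred == 1) & (labels == 0))  # false alarms
    fne = np.sum((pred == 0) & (labels == 1))  # missed intrusions
    return COST_FPE * fpe + COST_FNE * fne

thresholds = np.arange(0.05, 1.0, 0.05)
best_t = min(thresholds, key=total_cost)
print(round(float(best_t), 2), float(total_cost(best_t)))
```

Because FNE is weighted ten times heavier than FPE, the cost-minimizing threshold sits low enough to catch every intrusion even at the price of extra false alarms, which is the behavior the asymmetric cost scheme is designed to produce.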