• Title/Summary/Keyword: support optimization


Public Transportation Mobile Application for Individuals with Mobility Challenge (교통약자를 위한 대중교통 모바일 애플리케이션)

  • Min An;Cheol-Soo Kang
    • Journal of Advanced Technology Convergence / v.3 no.1 / pp.13-20 / 2024
  • This paper discusses a study on a mobile application aimed at making public transportation more convenient for people with mobility challenges on both Android and iOS platforms. The research analyzes the limitations and weaknesses of existing mobile applications for public transportation from the perspective of individuals with mobility challenges. The goal is to overcome these limitations and provide an optimized user experience. The motivation behind this research stems from the recognition that people with mobility challenges face difficulties in their daily commute, and current public transportation applications do not adequately cater to their needs. Consequently, the study aims to develop a specialized mobile application for individuals with mobility challenges to support them in achieving greater independence in their daily travels.

Performance Evaluation of Machine Learning Model for Seismic Response Prediction of Nuclear Power Plant Structures considering Aging deterioration (원전 구조물의 경년열화를 고려한 지진응답예측 기계학습 모델의 성능평가)

  • Kim, Hyun-Su;Kim, Yukyung;Lee, So Yeon;Jang, Jun Su
    • Journal of Korean Association for Spatial Structures / v.24 no.3 / pp.43-51 / 2024
  • Dynamic responses of nuclear power plant structures subjected to earthquake loads should be carefully investigated for safety. Because nuclear power plant structures are usually constructed of reinforced concrete (R.C.), the aging deterioration of R.C. has a considerable effect on the structural behavior of such structures. Therefore, aging deterioration should be considered for accurate prediction of the seismic responses of R.C. nuclear power plant structures. In this study, a machine learning model for seismic response prediction of nuclear power plant structures was developed considering aging deterioration. The OPR-1000 was selected as an example structure for numerical simulation. The OPR-1000 was originally designated as the Korean Standard Nuclear Power Plant (KSNP), and was re-designated as the OPR-1000 in 2005 for foreign sales. A total of 500 artificial ground motions were generated based on the site characteristics of Korea. Elastic modulus, damping ratio, Poisson's ratio, and density were selected to represent material property variation due to aging deterioration. Six machine learning algorithms, namely Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Artificial Neural Networks (ANN), and eXtreme Gradient Boosting (XGBoost), were used to construct the seismic response prediction model. Thirteen intensity measures and four material properties were used as input parameters of the training database. Performance evaluation was conducted using metrics such as root mean square error, mean square error, mean absolute error, and coefficient of determination. Hyperparameters were optimized through k-fold cross-validation and grid search techniques. The analysis results show that artificial neural networks present good prediction performance when aging deterioration is considered.
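
The abstract above names the full evaluation pipeline: a tabular training set of 13 intensity measures plus 4 material properties, several regressors, grid-searched hyperparameters with k-fold cross-validation, and RMSE/MSE/MAE/R² metrics. The following is a minimal sketch of that kind of setup using scikit-learn with one of the listed algorithms (Random Forest); the synthetic data and parameter grid are illustrative assumptions, not the authors' dataset or configuration.

```python
# Hedged sketch: grid search + k-fold cross-validation for a seismic-response
# regressor, in the spirit of the pipeline described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
# 13 intensity measures + 4 material properties -> 17 input features;
# the 500 rows stand in for the 500 artificial ground motions (placeholder values).
X = rng.normal(size=(500, 17))
y = X @ rng.normal(size=17) + rng.normal(scale=0.1, size=500)  # synthetic response target

param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 20]}  # assumed grid
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid,
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)

pred = search.best_estimator_.predict(X)
print("best params:", search.best_params_)
print("RMSE:", np.sqrt(mean_squared_error(y, pred)))
print("MAE :", mean_absolute_error(y, pred))
print("R^2 :", r2_score(y, pred))
```

The same scaffold would apply to the other listed algorithms (SVM, KNN, XGBoost, and so on) by swapping the estimator and its parameter grid.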

Case Study on Activating Local Youth Entrepreneurship Project (로컬 청년창업 프로젝트 활성화 사례연구)

  • Jiyoung Hong;Geonuk Nam;Yeryung Moon;Gaeun Son;Hanjin Lee
    • The Journal of the Convergence on Culture Technology / v.10 no.5 / pp.143-151 / 2024
  • This study proposes a guide to activate regional resource-based start-up projects by analyzing program planning, consultation, promotion, recruitment, operation, support, and performance measurement. It provides insights into effective methods for engaging prospective entrepreneurs interested in local brands. The study categorized 100 local start-up communities on platforms like YouTube and Instagram, identified suitable message characteristics for each channel, and measured conversion rates after distributing seven types of messages. Over four weeks, the messages received over 57,000 views, achieved a 13% conversion rate, and attracted about 100 applicants. This evaluation identified the most effective message types, offering key policy implications and insights into early-stage entrepreneurs' awareness and the entrepreneurial ecosystem. The researchers also found that cooperation with local communities and regional innovation centers is an important success factor for local startups.

Study on Optimization of Liquid Fermentation Medium and Antitumor Activity of the Mycelium of Phylloporia lonicerae

  • Min Liu;Lu Liu;Guoli Zhang;Guangyuan Wang;Ranran Hou;Yinghao Zhang;Xuemei Tian
    • Journal of Microbiology and Biotechnology / v.34 no.9 / pp.1898-1911 / 2024
  • Phylloporia lonicerae is an annual fungus that specifically parasitizes living Lonicera plants, offering significant potential for developing new resource foods and medicines. However, the wild resources and mycelium production of this fungus are limited, and its anti-tumor active ingredients and mechanisms remain unclear, hampering the development of this fungus. Thus, we optimized the fermentation medium of P. lonicerae and studied the anti-tumor activity of its mycelium. The results indicated that the optimum fermentation medium consisted of 2% sucrose, 0.2% peptone, 0.1% KH2PO4, 0.05% MgSO4·7H2O, 0.16% Lonicera japonica petals, 0.18% P fungal elicitor, and 0.21% L. japonica stem. The biomass reached 7.82 ± 0.41 g/l after 15 days of cultivation in the optimized medium, a 142% increase compared with the potato dextrose broth medium, with a 64% reduction in cultivation time. The intracellular alcohol extract had a higher inhibitory effect on A549 and Eca-109 cells than the intracellular water extract, with half-maximal inhibitory concentration (IC50) values of 2.42 and 2.92 mg/ml, respectively. Graded extraction of the alcohol extract yielded petroleum ether, chloroform, ethyl acetate, and n-butanol phases. Among them, the petroleum ether phase exhibited a better effect than the positive control, with an IC50 of 113.3 μg/ml. Flow cytometry analysis indicated that petroleum ether components could induce apoptosis of Eca-109 cells, suggesting that this extracted component can be utilized as an anticancer agent in functional foods. This study offers valuable technical support and a theoretical foundation for promoting the comprehensive development and efficient utilization of P. lonicerae.
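
The IC50 values quoted above are point summaries of dose-inhibition data. As a hedged illustration of how such a value is typically extracted, the sketch below fits a four-parameter logistic curve to a dose-response series; the concentrations and inhibition percentages are invented placeholders, not the authors' measurements, and this is not a reconstruction of their assay analysis.

```python
# Hedged sketch: estimating a half-maximal inhibitory concentration (IC50)
# by fitting a four-parameter logistic (4PL) dose-response curve.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """% inhibition rising from `bottom` to `top` with concentration."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])             # mg/ml (placeholder doses)
inhibition = np.array([5.0, 12.0, 28.0, 47.0, 70.0, 88.0])    # % inhibition (placeholder)

params, _ = curve_fit(four_pl, conc, inhibition, p0=[0.0, 100.0, 2.0, 1.0])
print(f"estimated IC50 ≈ {params[2]:.2f} mg/ml")
```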

Analysis of the Effect of the Etching Process and Ion Injection Process in the Unit Process for the Development of High Voltage Power Semiconductor Devices (고전압 전력반도체 소자 개발을 위한 단위공정에서 식각공정과 이온주입공정의 영향 분석)

  • Gyu Cheol Choi;KyungBeom Kim;Bonghwan Kim;Jong Min Kim;SangMok Chang
    • Clean Technology / v.29 no.4 / pp.255-261 / 2023
  • Power semiconductors are semiconductors used for power conversion, transformation, distribution, and control. Recently, the global demand for high-voltage power semiconductors has been increasing across various industrial fields, and optimization research on high-voltage IGBT components is urgently needed in these industries. For high-voltage IGBT development, setting the resistance value of the wafer and optimizing the key unit processes are major factors determining the electrical characteristics of the finished chip. Furthermore, securing and optimizing the process technology that supports a high breakdown voltage is also important. Etching is a process of transferring the mask circuit pattern from the photolithography process to the wafer and removing unnecessary parts at the bottom of the photoresist film. Ion implantation is a process of injecting impurities into the wafer substrate, together with thermal diffusion, during semiconductor manufacturing; it helps achieve a desired conductivity. In this study, dry etching and wet etching were controlled during field ring etching, an important process for forming the ring structure that supports the 3.3 kV breakdown voltage of the IGBT, in order to analyze four conditions and form a stable body junction depth to secure the breakdown voltage. The field ring ion implantation process was optimized based on the TEG design by dividing it into four conditions. The wet etching 1-step method was advantageous in terms of process and work efficiency, and the ring pattern ion implantation conditions showed a doping concentration of 9.0E13 and an energy of 120 keV. The p-ion implantation conditions were optimized at a doping concentration of 6.5E13 and an energy of 80 keV, and the p+ ion implantation conditions were optimized at a doping concentration of 3.0E15 and an energy of 160 keV.

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they are based on strict assumptions. Such strict assumptions include linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions of traditional statistics have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression analysis method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically, and leads to high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, and thus overfitting is unlikely to occur with SVM. Furthermore, SVM does not require many data samples for training, since it builds prediction models using only some representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class classification problems as much as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used for effective multi-class computation to reduce computation time, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance problem, which can occur when the number of instances in one class greatly outnumbers that in another class. Such data sets often cause a default classifier to be built due to the skewed boundary, and thus reduce the classification accuracy of the classifier. SVM ensemble learning is one of the machine learning methods for coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weights of misclassified observations over iterations. The observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted.
Thus, Boosting attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes a multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process considering the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets. That is, the cross-validated folds are tested independently for each algorithm. Through these steps, results are obtained for each classifier over the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
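
The abstract reports results under two averaging schemes: the usual arithmetic (overall) accuracy and a geometric mean-based accuracy that penalizes poor recall on minority rating classes. Below is a minimal, hedged sketch of those two evaluation views computed from a confusion matrix; the toy labels are placeholders, and the MGM-Boost algorithm itself is not reproduced here.

```python
# Hedged sketch: arithmetic accuracy vs. geometric mean of per-class recalls.
import numpy as np
from sklearn.metrics import confusion_matrix

def arithmetic_and_geometric_accuracy(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    per_class_recall = np.diag(cm) / cm.sum(axis=1)       # recall of each rating class
    arithmetic = np.trace(cm) / cm.sum()                   # plain overall accuracy
    geometric = per_class_recall.prod() ** (1.0 / len(per_class_recall))
    return arithmetic, geometric

# Toy, imbalanced 3-class example (illustrative labels only, not the bond-rating data).
y_true = np.array([0] * 50 + [1] * 30 + [2] * 5)
y_pred = np.array([0] * 48 + [1] * 2 + [1] * 28 + [0] * 2 + [2] * 2 + [1] * 3)
acc, gmean = arithmetic_and_geometric_accuracy(y_true, y_pred)
print(f"arithmetic accuracy: {acc:.3f}, geometric-mean accuracy: {gmean:.3f}")
```

Because the minority class (label 2) is poorly predicted, the geometric-mean accuracy drops far below the arithmetic accuracy, which is exactly the gap the geometric-mean criterion in MGM-Boost is meant to expose.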

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have made their in-house AI technology public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, the relationship with the developer community and the artificial intelligence (AI) ecosystem can be strengthened, and users can experiment with, implement, and improve it. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been made, there is a lack of studies that help industry develop or use deep learning open source software. This study thus attempts to derive a strategy for adopting the framework through case studies of a deep learning open source framework. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and revealed that seven of the eight TOE factors, as well as several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. In order for an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the stage of using the deep learning framework, companies can increase the number of deep learning research developers, the ability to use the framework, and the support of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by organizing developer retraining and seminars.
To implement the identified five success factors, a step-by-step enterprise procedure for adoption of the deep learning framework was proposed: defining the project problem, confirming whether deep learning is the right methodology, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether deep learning is the right methodology, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. After these three pre-consideration steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading it across the enterprise) can proceed. In the fourth step, the knowledge and expertise of developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.47-67 / 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience. Specifically, the inspector checks for steel plate faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low, causing judgment errors above 30%. Therefore, an accurate steel plate faults diagnosis system has been continuously required in the industry. To meet this need, this study proposed a new steel plate faults diagnosis system using the Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multiclass classification due to its low accuracy, because only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: S-MTS establishes an individual Mahalanobis space for each class, and 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed steel plate faults diagnosis system was developed in four main stages. In the first stage, after various reference groups and related variables are defined, data on steel plate faults are collected and used to establish an individual Mahalanobis space for each reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups. Then, the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type Signal-to-Noise (SN) ratio are applied for variable optimization, and the overall SN ratio gain is derived from the SN ratio and SN ratio gain. If the derived overall SN ratio gain is negative, the variable should be removed; a variable with a positive gain may be considered worth keeping. Finally, in the fourth stage, the measurement scale composed of the selected useful variables is reconstructed. Next, an experimental test is performed to verify the multi-class classification ability, from which the classification accuracy is obtained. If the accuracy is acceptable, the diagnosis system can be used for future applications. This study also compared the accuracy of the proposed steel plate faults diagnosis system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. As a result, the proposed steel plate faults diagnosis system based on S-MTS shows a classification accuracy of 90.79%. The accuracy of the proposed diagnosis system is 6-27% higher than that of MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance to be applied in the industry.
In addition, the proposed system can reduce the number of measurement sensors installed in the field thanks to the variable optimization process. These results show that the proposed system not only performs well on steel plate faults diagnosis but also reduces operation and maintenance costs. In future work, the system will be applied in the field to validate its actual effectiveness, and the accuracy will be improved based on the results.
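
The core mechanism described above, one Mahalanobis space per defect class with classification by the smallest distance, can be sketched in a few lines. The following is a hedged illustration of that idea only; the Taguchi side of S-MTS (orthogonal arrays, SN-ratio-based variable selection) is omitted, and the synthetic data stand in for the UCI steel plate faults set.

```python
# Hedged sketch: per-class Mahalanobis spaces and minimum-distance classification.
import numpy as np

def fit_mahalanobis_spaces(X, y):
    """For each class, store the mean vector and (pseudo-)inverse covariance."""
    spaces = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mean = Xc.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(Xc, rowvar=False))  # pinv for numerical stability
        spaces[c] = (mean, cov_inv)
    return spaces

def classify(x, spaces):
    """Assign x to the class whose squared Mahalanobis distance is smallest."""
    def md2(mean, cov_inv):
        d = x - mean
        return float(d @ cov_inv @ d)
    return min(spaces, key=lambda c: md2(*spaces[c]))

# Toy usage with two synthetic fault classes (placeholder data, not the UCI set).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(3.0, 1.0, (100, 5))])
y = np.array([0] * 100 + [1] * 100)
spaces = fit_mahalanobis_spaces(X, y)
print(classify(rng.normal(3.0, 1.0, 5), spaces))  # expected to print 1
```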

Efficient Utilization of Private Resources for the National Defense - Focused on maintenance, supply, transportation, training & education - (국방분야 민간자원의 효율적 활용방안 - 정비, 보급, 수송, 교육훈련분야를 중심으로 -)

  • Park, Kyun-Yong
    • Journal of National Security and Military Science / s.9 / pp.313-340 / 2011
  • The National Defense Reformation bill "National Defense Reformation 2020", which had been constantly disputed and revised by the government, went through various levels of complementary measures after North Korea's sinking of the Republic of Korea (ROK) naval vessel "Cheonan". The final outcome of this reform, also known as the 307 Plan, was announced on 8 March. The reformed plan reduces the number of units and military personnel under the military structure reformation. However, for the National Defense Reformation to succeed, the use of privatized civilian resources is essential. Accordingly, the ROK Ministry of National Defense (MND) has selected the usage of privatized resources as one of the core agendas for the National Defense Reformation management procedures, and under this agenda the MND plans to further expand the usage of private resources. In particular, the MND plans to minimize the personnel assigned to non-combat areas and in turn make optimal use of the personnel thus freed up. To do this, the MND has initiated the necessary analysis across the whole national defense sector by reviewing the various projects and acquisition requests of each military branch and of civilian research institutions. However, for efficient management of privatized civilian resources, first, the private resources that can achieve optimization need to be identified, and second, the legislation governing private resource usage needs continuous systematic reinforcement. Furthermore, the possibility of labor disputes arising from privatization expansion must be considered; therefore, full legal and systematic complementary measures are required in all areas where issues could affect the combat readiness posture. Another problem is the large increase in operational expenses, as the reduction of standby forces only reduces the number of soldiers and fills these positions with more expensive commissioned officers. To overcome this problem, the number of positions available for active officers would need to be reduced and filled with military reserve personnel who previously had working experience in the related positions (thereby guaranteeing active officers re-employment after completing active service). This would in turn maintain the standards of the combat readiness posture and reduce the additional financial burden that may newly arise. The areas of maintenance, supply, transportation, and training & education, which are highly efficient when using privatized resources, will need to be transformed from a military-managed to a civilian-managed system. For maintenance, this can be achieved by integrating the National Maintenance Support System; to undertake this procedure, maintenance units that can be privatized would need to be identified, which will in turn reduce the military personnel executing these duties, improve service quality, and prevent duplicate investments. For supply, an Integrated Military Logistics Center will need to be established in connection with national and civilian logistics systems, which will in turn reduce the logistics time frame as well as the required personnel and equipment. In terms of transportation, the renting and leasing system will need to be further expanded.
This will need to be executed by integrating the National Defense Transportation Information System, which will in turn reduce the required personnel and financial budgets. Finally, for training and education, retired military personnel can be employed as training instructors, and at the military academy the number of civilian professors can be further expanded in connection with the National Defense Reformation. In other words, privatized civilian resources will need to be more actively managed and used for the National Defense Reformation.


Study on Effective 5G Network Deployment Method for 5G Mobile Communication Services (5G 이동통신 서비스를 위한 효율적인 5G 망구축 방안에 관한 연구)

  • CHUNG, Woo-Ghee
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.29 no.5 / pp.353-358 / 2018
  • We herein analyze the service traffic characteristics and spectrum of 5G mobile communication and suggest an effective 5G network deployment method for 5G mobile communication services. The data rates of 5G mobile communication range from several kbps (voice and IoT) up to 1 Gbps (hologram, among others). 5G mobile communication services show diverse cell coverage environments owing to the use of diverse service data rates and multiple spectrum bands. To effectively support 5G mobile communication services, the network deployment requires optimization of the service coverage for new service environments and multiple spectrum bands. Considering the 5G spectrum bandwidth currently under discussion, if 100 Mbps 5G services can be supported at a 200 m cell edge using the 3.5 GHz spectrum bands, then the 1 Gbps hologram and 500 Mbps 4K UHD services can be supported at cell edges of 50 m and 100 m, respectively, using the 28 GHz spectrum bands. Therefore, 5G services can be supported effectively by a 5G network deployment that uses spectrum portfolio configurations matched to the diverse 5G services and multiple bands.
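
The cell-edge radii quoted above (200 m for 100 Mbps at 3.5 GHz, 100 m for 500 Mbps and 50 m for 1 Gbps at 28 GHz) translate directly into very different site densities. The sketch below is a hedged back-of-the-envelope calculation of site counts for an example 10 km² area, assuming idealized hexagonal cells; the area and the hexagonal-cell simplification are assumptions for illustration, not the paper's planning model.

```python
# Hedged sketch: rough site-count comparison from the cell-edge radii in the abstract.
import math

# (service scenario, cell-edge radius in metres) taken from the abstract.
scenarios = [
    ("100 Mbps @ 3.5 GHz", 200.0),
    ("500 Mbps 4K UHD @ 28 GHz", 100.0),
    ("1 Gbps hologram @ 28 GHz", 50.0),
]

area_km2 = 10.0  # example deployment area (assumption)
for name, radius_m in scenarios:
    # Area of a hexagonal cell with circumradius r: (3 * sqrt(3) / 2) * r^2.
    cell_area_km2 = (3 * math.sqrt(3) / 2) * (radius_m / 1000.0) ** 2
    sites = math.ceil(area_km2 / cell_area_km2)
    print(f"{name:26s} radius {radius_m:5.0f} m -> ~{sites} sites per {area_km2:.0f} km^2")
```

The roughly 16-fold jump in site count between the 200 m and 50 m cases illustrates why the abstract argues for matching spectrum bands to services (a spectrum portfolio) rather than serving everything from the highest band.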