• Title/Summary/Keyword: Model-based Optimization


Road Extraction from Images Using Semantic Segmentation Algorithm (영상 기반 Semantic Segmentation 알고리즘을 이용한 도로 추출)

  • Oh, Haeng Yeol;Jeon, Seung Bae;Kim, Geon;Jeong, Myeong-Hun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.3 / pp.239-247 / 2022
  • Cities are becoming more complex due to rapid industrialization and population growth. Urban areas in particular are changing rapidly through housing site development, reconstruction, and demolition, so accurate road information is necessary for purposes such as High Definition Maps for autonomous driving. In the Republic of Korea, accurate spatial information can be generated through the existing map production process, but covering a large area is limited by time and cost. Roads, one of the basic map elements, are a hub and an essential means of transportation that supports human civilization, so road information must be updated accurately and quickly. This study uses Semantic Segmentation algorithms such as LinkNet, D-LinkNet, and NL-LinkNet to extract roads from drone images and then applies hyperparameter optimization to the best-performing model. As a result, the LinkNet model using a pre-trained ResNet-34 as the encoder achieved 85.125 mIoU. Subsequent studies should compare these results with those obtained using state-of-the-art object detection algorithms or semi-supervised Semantic Segmentation techniques. The results of this study can be applied to speed up the existing map update process.
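
The abstract above names LinkNet with a pre-trained ResNet-34 encoder and reports mIoU, but gives no implementation details. The sketch below shows one plausible way to set this up, assuming the segmentation_models_pytorch package (not named in the paper); the loss, learning rate, and threshold are illustrative assumptions.

```python
# Minimal sketch of a LinkNet road-segmentation setup with a pre-trained
# ResNet-34 encoder, as in the abstract. The segmentation_models_pytorch
# package and all hyperparameter values below are illustrative assumptions,
# not the authors' actual configuration.
import torch
import segmentation_models_pytorch as smp

model = smp.Linknet(
    encoder_name="resnet34",      # pre-trained encoder named in the abstract
    encoder_weights="imagenet",   # assumed pre-training source
    in_channels=3,                # RGB drone imagery
    classes=1,                    # binary road / non-road mask
)

loss_fn = smp.losses.DiceLoss(mode="binary")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed learning rate

def mean_iou(pred_logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> float:
    """Binary mIoU averaged over the two classes (road, background)."""
    pred = (torch.sigmoid(pred_logits) > 0.5).float()
    ious = []
    for cls in (1.0, 0.0):                      # road, then background
        p, t = (pred == cls), (target == cls)
        inter = (p & t).sum().float()
        union = (p | t).sum().float()
        ious.append(((inter + eps) / (union + eps)).item())
    return sum(ious) / len(ious)
```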

Personalized Speech Classification Scheme for Smart Speaker Accessibility Improvement of Speech-Impaired People (언어장애인의 스마트스피커 접근성 향상을 위한 개인화된 음성 분류 기법)

  • SeungKwon Lee;U-Jin Choe;Gwangil Jeon
    • Smart Media Journal / v.11 no.11 / pp.17-24 / 2022
  • With the spread of smart speakers based on voice recognition and deep learning technology, not only non-disabled people but also blind or physically handicapped people can easily control home appliances such as lights and TVs by voice through linked home network services, which has greatly improved their quality of life. However, speech-impaired people cannot use these useful smart speaker services because articulation or speech disorders make their pronunciation inaccurate. In this paper, we propose a personalized voice classification technique that allows speech-impaired people to use some of the functions provided by a smart speaker. The goal is to increase the recognition rate and accuracy for sentences spoken by speech-impaired people even with a small amount of data and a short training time, so that the services provided by the smart speaker can actually be used. To this end, data augmentation and the one-cycle learning-rate policy were applied while fine-tuning a ResNet18 model. In an experiment in which each of 30 smart speaker commands was recorded 10 times and the model was trained within 3 minutes, the speech classification recognition rate was about 95.2%.
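
As a rough illustration of the recipe the abstract describes (fine-tuning ResNet18 with data augmentation and one-cycle learning-rate scheduling), the sketch below uses torchvision and PyTorch's OneCycleLR. The input representation, classifier head, and every hyperparameter except the 30-command count are assumptions, not the paper's settings.

```python
# Illustrative sketch of fine-tuning ResNet18 with one-cycle learning-rate
# scheduling. Input features (e.g., spectrogram images), epochs, and learning
# rates are assumed; the paper only reports training within 3 minutes.
import torch
import torch.nn as nn
from torchvision import models

NUM_COMMANDS = 30                   # 30 smart-speaker commands (from the abstract)
EPOCHS, STEPS_PER_EPOCH = 10, 20    # assumed; in practice steps_per_epoch = len(dataloader)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_COMMANDS)   # replace the classifier head

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.1, epochs=EPOCHS, steps_per_epoch=STEPS_PER_EPOCH
)
criterion = nn.CrossEntropyLoss()

def train_step(batch_x: torch.Tensor, batch_y: torch.Tensor) -> float:
    """One optimization step; batch_x is an augmented input batch of shape (N, 3, H, W)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(batch_x), batch_y)
    loss.backward()
    optimizer.step()
    scheduler.step()                # OneCycleLR is stepped once per batch
    return loss.item()
```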

A Tracer Study on Mankyeong River Using Effluents from a Sewage Treatment Plant (하수처리장 방류수를 이용한 추적자 시험: 만경강 유역에 대한 사례 연구)

  • Kim Jin-Sam;Kim Kang-Joo;Hahn Chan;Hwang Gab-Soo;Park Sung-Min;Lee Sang-Ho;Oh Chang-Whan;Park Eun-Gyu
    • The Sea: Journal of the Korean Society of Oceanography / v.11 no.2 / pp.82-91 / 2006
  • We investigated the possibility of using effluent from a municipal sewage treatment plant (STP) as a tracer for hydrologic studies of rivers. The possibility was checked in a 12-km long reach downstream of the Jeonju Municipal Sewage Treatment Plant (JSTP). Time-series monitoring of the water chemistry reveals that the chemical composition of the effluent from the JSTP fluctuated over a relatively wide range during the sampling period. In addition, the signals from the plant were observed at the downstream stations consecutively with increasing time lags, especially in the conservative chemical parameters (concentrations of chloride and sulfate, total concentration of major cations, and electrical conductivity). Based on these observations, we could estimate the stream flow (Q), velocity (v), and dispersion coefficient (D). A 1-D nonreactive solute-transport model with automated optimization schemes was used for this study. The estimated values of Q, v, and D varied from 6.4 to 9.0 m³/sec (at the downstream end of the reach), from 0.06 to 0.10 m/sec, and from 0.7 to 6.4 m²/sec, respectively. The results show that the effluent from a large-scale municipal STP frequently provides good, multiple natural tracers for hydrologic studies.
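
The abstract describes fitting a 1-D nonreactive solute-transport model with an automated optimizer to estimate v and D. Below is a hedged sketch of that idea, assuming a simple analytical advection-dispersion solution for an instantaneous pulse and synthetic data in place of the field measurements; the injection geometry and numbers are not the study's.

```python
# Estimate velocity v and dispersion coefficient D by fitting an analytical
# 1-D advection-dispersion solution to a downstream breakthrough curve.
# The pulse-injection assumption, mass term, and data are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

X_DOWNSTREAM = 12_000.0   # observation distance [m] (reach length in the abstract)
M_OVER_A = 500.0          # injected mass per unit cross-sectional area [g/m^2], assumed

def breakthrough(t, v, D):
    """Concentration at X_DOWNSTREAM for an instantaneous pulse at x=0, t=0."""
    t = np.asarray(t, dtype=float)
    return (M_OVER_A / np.sqrt(4.0 * np.pi * D * t)
            * np.exp(-(X_DOWNSTREAM - v * t) ** 2 / (4.0 * D * t)))

# Synthetic "observed" breakthrough curve standing in for field measurements.
t_obs = np.linspace(1.0, 4.0e5, 200)                 # time [s]
c_obs = breakthrough(t_obs, v=0.08, D=3.0)           # true v = 0.08 m/s, D = 3.0 m^2/s
c_obs += np.random.default_rng(0).normal(0, 1e-4, c_obs.size)

(v_hat, D_hat), _ = curve_fit(breakthrough, t_obs, c_obs, p0=(0.05, 1.0),
                              bounds=([1e-3, 1e-2], [1.0, 50.0]))
print(f"estimated v = {v_hat:.3f} m/s, D = {D_hat:.2f} m^2/s")
```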

Optimization of the Mixture Ratio of Petasites japonicus, Luffa cylindrica and Houttuynia cordata to Develop a Functional Drink by Mixture Design (혼합물 실험계획법에 의한 머위 및 부원료의 혼합비율 최적화)

  • Jeong, Hae-Jin;Lee, Kyoung-Pil;Chung, Hun-Sik;Kim, Dong-Seop;Kim, Han-Soo;Choi, Young-Whan;Im, Dong-Soon;Seong, Jong-Hwan;Lee, Young-Guen
    • Journal of Life Science / v.25 no.3 / pp.329-335 / 2015
  • This study was performed to determine the optimal ratio of Petasites japonicus, Luffa cylindrica, and Houttuynia cordata, all of which are thought to have effects against respiratory diseases such as rhinitis. The experiment used a mixture design and included 12 experimental points with center replicates for three independent variables (Petasites japonicus 30~70%, Luffa cylindrica 10~30%, and Houttuynia cordata 10~30%). Based on this design, each mixture was extracted in hot water at 121℃ for 45 min, and anti-allergy and anti-microbial activities were measured. The response surface and trace plot for the anti-allergy activity showed that Petasites japonicus was a relatively important factor. The regression equation for the inhibition of degranulation had a coefficient of determination (R²) of 82.10%, and the analysis of variance showed the model fit was statistically significant (p<0.05). The optimal mixture ratio was Petasites japonicus 0.75, Luffa cylindrica 0.11, and Houttuynia cordata 0.14 (mixture proportions). The anti-microbial activity of each extract of the mixture was effective against gram-positive bacteria such as Staphylococcus aureus (KCCM 40881) and Staphylococcus epidermidis (KCCM 35494), while it was less effective against gram-negative bacteria such as Escherichia coli (KCCM 11234) and Pseudomonas aeruginosa (KCCM 11328).
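
For readers unfamiliar with mixture designs, the sketch below fits a Scheffé quadratic mixture model of the kind typically used with such designs and grid-searches the constrained simplex for the best-predicted blend. The design points, responses, and the grid search are placeholders, not the paper's data or software.

```python
# Fit a Scheffe quadratic mixture model
#   Y = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3,  x1+x2+x3 = 1,
# then search the constrained region for the proportion with the highest
# predicted response. All numbers below are illustrative placeholders.
import numpy as np

# Mixture proportions (Petasites, Luffa, Houttuynia); each row sums to 1.
X = np.array([
    [0.70, 0.10, 0.20], [0.70, 0.20, 0.10], [0.50, 0.30, 0.20],
    [0.50, 0.20, 0.30], [0.60, 0.20, 0.20], [0.60, 0.20, 0.20],  # centre-type replicates
    [0.40, 0.30, 0.30],
])
y = np.array([41.0, 44.0, 52.0, 50.0, 55.0, 54.0, 47.0])  # e.g. % inhibition of degranulation

def scheffe_quadratic_terms(x):
    x1, x2, x3 = x.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

coef, *_ = np.linalg.lstsq(scheffe_quadratic_terms(X), y, rcond=None)

# Grid-search the constrained simplex (P 0.30-0.70, L 0.10-0.30, H 0.10-0.30)
# for the blend with the highest predicted inhibition.
best = max(
    ((p, l, 1 - p - l) for p in np.arange(0.30, 0.71, 0.01)
                       for l in np.arange(0.10, 0.31, 0.01)
                       if 0.10 <= 1 - p - l <= 0.30),
    key=lambda m: float(scheffe_quadratic_terms(np.array([m])) @ coef),
)
print("fitted coefficients:", np.round(coef, 2))
print("predicted optimum mixture (P, L, H):", tuple(round(v, 2) for v in best))
```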

Optimization for Extraction of β-Carotene from Carrot by Supercritical Carbon Dioxide (초임계 유체에 의한 당근의 β-Carotene 추출의 최적화)

  • Kim, Young-Hoh;Chang, Kyu-Seob;Park, Young-Deuk
    • Korean Journal of Food Science and Technology / v.28 no.3 / pp.411-416 / 1996
  • Supercritical fluid extraction of β-carotene from carrot was optimized to maximize the β-carotene extraction yield (Y). A central composite design involving extraction pressure ($X_1$, 200-400 bar), temperature ($X_2$, 35-51°C), and time ($X_3$, 60-200 min) was used. The three independent factors ($X_1, X_2, X_3$) were chosen to determine their effects on the responses, and the response function was expressed as the quadratic polynomial $Y={\beta}_0+{\beta}_1X_1+{\beta}_2X_2+{\beta}_3X_3+{\beta}_{11}X_1^2+{\beta}_{22}X_2^2+{\beta}_{33}X_3^2+{\beta}_{12}X_1X_2+{\beta}_{13}X_1X_3+{\beta}_{23}X_2X_3$, which captures the linear, quadratic, and interaction effects. Extraction yields of β-carotene were affected by pressure, time, and temperature, in decreasing order, and the quadratic effect of pressure (${\beta}_{11}$) and its linear effect (${\beta}_1$) were significant at the 0.001 level. Based on the analysis of variance, the fitted model for β-carotene (Y) was significant at the 5% level, and the coefficient of determination was 0.938. According to canonical analysis of the response surface, the stationary point for the dependent variable (Y) was a maximum of the extraction yield. The response for β-carotene (Y) within the region of interest was estimated at over 10,611 μg per 100 g of raw carrot.
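
The quadratic-polynomial fit and canonical analysis mentioned in the abstract can be illustrated with a short least-squares sketch: fit the second-order model on coded variables and solve for the stationary point. The design points and yields below are simulated stand-ins, not the 1996 study's measurements.

```python
# Fit Y = b0 + sum(bi*Xi) + sum(bii*Xi^2) + sum(bij*Xi*Xj) on a central composite
# design and locate the stationary point (canonical analysis). All data are
# generated from an assumed concave surface, not the study's measurements.
import numpy as np
from itertools import combinations

def quad_design_matrix(X):
    """Columns: 1, X1..X3, X1^2..X3^2, X1X2, X1X3, X2X3 (coded variables)."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i] ** 2 for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
axial = 1.68
X = np.vstack([
    np.array(list(np.ndindex(2, 2, 2))) * 2 - 1,   # 2^3 factorial points
    axial * np.vstack([np.eye(3), -np.eye(3)]),    # 6 axial points
    np.zeros((3, 3)),                              # centre replicates
]).astype(float)
true_b = np.array([90, 8, 2, 4, -6, -3, -4, 1, 2, 0.5])       # assumed coefficients
y = quad_design_matrix(X) @ true_b + rng.normal(0, 1.5, len(X))  # placeholder yields

b, *_ = np.linalg.lstsq(quad_design_matrix(X), y, rcond=None)
lin, quad, cross = b[1:4], b[4:7], b[7:10]

# Stationary point x_s solves B @ x_s = -lin/2, where B has bii on the diagonal
# and bij/2 off-diagonal; negative eigenvalues of B mean x_s is a maximum.
B = np.diag(quad)
for (i, j), bij in zip(combinations(range(3), 2), cross):
    B[i, j] = B[j, i] = bij / 2.0
x_s = np.linalg.solve(B, -lin / 2.0)
print("stationary point (coded):", np.round(x_s, 2),
      "eigenvalues:", np.round(np.linalg.eigvalsh(B), 3))
```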


Teachers' Recognition on the Optimization of the Educational Contents of Clothing and Textiles in Practical Arts or Technology·Home Economics (실과 및 기술·가정 교과에서 의생활 교육내용의 적정성에 대한 교사의 인식)

  • Baek Seung-Hee;Han Young-Sook;Lee Hye-Ja
    • Journal of Korean Home Economics Education Association / v.18 no.3 s.41 / pp.97-117 / 2006
  • The purpose of this study was to investigate teachers' recognition of the optimization of the educational contents of Clothing & Textiles in the Practical Arts or Technology & Home Economics subjects in elementary, middle, and high schools. The data were collected from 203 questionnaires completed by elementary, middle, and high school teachers. Means, standard deviations, and percentages were calculated using the SPSS/WIN 12.0 program, and the data were verified by t-test, one-way ANOVA, and Duncan's post-hoc test. The results were as follows. First, the equipment ratio of practice laboratories was about 24% and very poor in elementary schools, whereas it was 97% in middle schools and 78% in high schools. Second, more than 50% of teachers judged the amount of learning to be 'proper'; however, elementary school teachers found the amount of learning in 'operating sewing machines' too heavy, as did middle school teachers for 'making shorts' and high school teachers for 'making tablecloths and curtains' and 'making pillow covers or bags'. Third, elementary, middle, and high school teachers all rated the overall difficulty of the clothing and textiles contents as 'moderate'. About 80% of elementary school teachers found 'operating sewing machines' and 'making cushions' difficult; middle school teachers found 'hand knitting a handbag with a crochet hook', 'the various kinds of cloth', and 'making short pants' difficult; and high school teachers found 'making tablecloths or curtains' difficult. Fourth, regarding the importance of the educational contents, elementary school teachers regarded 'practicing basic hand needlework' and 'making a pouch using hand needlework' as important, middle school teachers regarded 'making short pants' as unimportant, and high school teachers considered practice-oriented contents such as 'making tablecloths and curtains' and 'making pillow covers or bags' unimportant. The suggestions were as follows: laboratories and facilities for practice should be established to make clothing and textiles lessons effective in Practical Arts in elementary schools; 'operating sewing machines', which was considered difficult, should be moved to an upper grade, made easier, or omitted; the practical contents should become student-activity-oriented and be recomposed so as to be familiar to students' daily lives; and various and sufficient supports are needed to increase teachers' practical abilities.


A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have recently been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching nearly global optima. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation is time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into a fitness value. To remedy this drawback, Artificial Neural Networks (ANNs) have been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt to combine the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is an iterative process including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA; however, this evaluation step wastes much time Fourier transforming the encoded hologram parameters into the value to be solved. Depending on the speed of the computer, this process can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. By doing so, the initial population contains fewer random trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that utilizes a trained neural network to initiate the GA's procedure is proposed; the initial population then contains fewer random holograms and is complemented by approximately desired holograms. Figure 1 is a flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain the approximately desired holograms, which are in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initialization step. Hence, the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size.
A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still allowing holograms of high diffraction efficiency and uniformity to be achieved. This work was supported by grant No. mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
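
A compact, hedged sketch of the GA core this summary describes: binary (0/π) phase holograms scored by Fourier-transforming them against a target pattern, with tournament selection, crossover, and bit-flip mutation. The hybrid variant differs only in how the initial population is seeded (ANN-generated holograms instead of purely random ones); the hologram size, target pattern, and generation count below are assumptions, not the authors' settings.

```python
# Toy GA for binary phase holograms; fitness = fraction of reconstructed
# intensity (|FFT|^2 of the 0/pi phase field) falling inside the target pattern.
import numpy as np

rng = np.random.default_rng(0)
N = 32                                                   # hologram size (assumed)
target = np.zeros((N, N)); target[12:20, 12:20] = 1.0    # stand-in target pattern

def fitness(hologram_bits):
    """Fraction of reconstructed intensity inside the target region."""
    field = np.exp(1j * np.pi * hologram_bits)
    recon = np.abs(np.fft.fft2(field)) ** 2
    return float((recon * target).sum() / recon.sum())

def evolve(pop, generations=200, p_cross=0.75, p_mut=0.001):
    for _ in range(generations):
        scores = np.array([fitness(h) for h in pop])
        # Binary tournament selection.
        a, b = rng.integers(0, len(pop), (2, len(pop)))
        parents = pop[np.where(scores[a] > scores[b], a, b)]
        # Uniform crossover between consecutive parent pairs.
        children = parents.copy()
        for i in range(0, len(pop) - 1, 2):
            if rng.random() < p_cross:
                swap = rng.random((N, N)) < 0.5
                children[i][swap], children[i + 1][swap] = parents[i + 1][swap], parents[i][swap]
        # Bit-flip mutation.
        flips = rng.random(children.shape) < p_mut
        pop = np.where(flips, 1 - children, children)
    return pop[np.argmax([fitness(h) for h in pop])]

# Classical GA start: random holograms. Hybrid start would replace part of this
# population with approximately-desired holograms produced by a trained ANN.
best = evolve(rng.integers(0, 2, (30, N, N)))
print("best fitness:", round(fitness(best), 4))
```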


Optimization of Medium Components using Response Surface Methodology for Cost-effective Mannitol Production by Leuconostoc mesenteroides SRCM201425 (반응표면분석법을 이용한 Leuconostoc mesenteroides SRCM201425의 만니톨 생산배지 최적화)

  • Ha, Gwangsu;Shin, Su-Jin;Jeong, Seong-Yeop;Yang, HoYeon;Im, Sua;Heo, JuHee;Yang, Hee-Jong;Jeong, Do-Youn
    • Journal of Life Science / v.29 no.8 / pp.861-870 / 2019
  • This study was undertaken to establish optimum medium compositions for cost-effective mannitol production by Leuconostoc mesenteroides SRCM201425 isolated from kimchi. L. mesenteroides SRCM201425 was selected for efficient mannitol production based on fructose analysis and was identified by its 16S rRNA gene sequence as well as by carbohydrate fermentation pattern analysis. To enhance mannitol production, the effects of carbon, nitrogen, and mineral sources on mannitol production were first screened using a Plackett-Burman design (PBD). The effects of 11 variables were investigated, and three variables (fructose, sucrose, and peptone) were selected. In the second step, the concentrations of fructose, sucrose, and peptone were optimized using a central composite design (CCD) and response surface analysis. The predicted optimal concentrations of fructose, sucrose, and peptone were 38.68 g/l, 30 g/l, and 39.67 g/l, respectively. The mathematical response model was reliable, with a coefficient of determination of $R^2=0.9185$. Mannitol production increased 20-fold compared with MRS medium, corresponding to a mannitol yield of 97.46% relative to MRS supplemented with 100 g/l of fructose in a flask system. Furthermore, production in the optimized medium was cost-effective. The findings of this study are expected to be useful for biological mannitol production as an alternative to catalytic hydrogenation, which generates byproducts and additional production costs.
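
The screening step the abstract mentions (a Plackett-Burman design over 11 medium components) can be sketched as follows: build the standard 12-run PB design and rank factors by their main effects, after which the top factors would go into the CCD/response-surface step. Factor names beyond fructose, sucrose, and peptone, and all responses below, are assumptions rather than the study's data.

```python
# Sketch of Plackett-Burman screening: 12-run two-level design for up to 11
# factors, with main effects estimated as mean(high) - mean(low).
import numpy as np

# First row of the standard cyclic 12-run Plackett-Burman design (+1/-1 coding).
first_row = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
design = np.array([np.roll(first_row, i) for i in range(11)])   # 11 runs by cyclic shifts
design = np.vstack([design, -np.ones(11, dtype=int)])           # 12th run: all factors low

factors = ["fructose", "sucrose", "maltose", "glucose", "peptone", "yeast_extract",
           "beef_extract", "tryptone", "K2HPO4", "MgSO4", "MnSO4"]   # assumed factor list

mannitol = np.random.default_rng(3).normal(20, 5, 12)   # placeholder responses (g/l)

# Main effect of each factor = mean(response at +1) - mean(response at -1).
effects = {f: mannitol[design[:, j] == 1].mean() - mannitol[design[:, j] == -1].mean()
           for j, f in enumerate(factors)}
for f, e in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{f:>13s}: effect = {e:+.2f}")
# The highest-ranked factors would then be optimized with a CCD and a quadratic
# response-surface model, as in the second step described in the abstract.
```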

Contrast Media in Abdominal Computed Tomography: Optimization of Delivery Methods

  • Joon Koo Han;Byung Ihn Choi;Ah Young Kim;Soo Jung Kim
    • Korean Journal of Radiology / v.2 no.1 / pp.28-36 / 2001
  • Objective: To provide a systematic overview of the effects of various parameters on contrast enhancement within the same population, an animal experiment as well as a computer-aided simulation study was performed. Materials and Methods: In the animal experiment, single-level dynamic CT through the liver was performed at 5-second intervals for 3 minutes, starting just after the injection of contrast medium. Combinations of three different amounts (1, 2, 3 mL/kg), concentrations (150, 200, 300 mgI/mL), and injection rates (0.5, 1, 2 mL/sec) were used. The CT number of the aorta (A), portal vein (P), and liver (L) was measured in each image, and time-attenuation curves for A, P, and L were obtained. The degree of maximum enhancement (Imax) and the time to peak enhancement (Tmax) of A, P, and L were determined, and the times to equilibrium (Teq) were analyzed. In the computer-aided simulation model, a program based on the amount, flow, and diffusion coefficient of body fluid in various compartments of the human body was designed. The input variables were the concentration, volume, and injection rate of the contrast medium. The program generated the time-attenuation curves of A, P, and L, as well as liver-to-hepatocellular carcinoma (HCC) contrast curves. On each curve, we calculated and plotted the optimal temporal window (the time period above the lower threshold, which in this experiment was 10 Hounsfield units), the total area under the curve above the lower threshold, and the area within the optimal range. Results: A. Animal experiment: At a given concentration and injection rate, an increased volume of contrast medium led to increases in Imax of A, P, and L. In addition, Tmax of A, P, and L and Teq were prolonged in parallel with the increase in injection time, and the time-attenuation curve shifted upward and to the right. For a given volume and injection rate, an increased concentration of contrast medium increased the degree of aortic, portal, and hepatic enhancement, though Tmax of A, P, and L remained the same; the time-attenuation curve shifted upward. For a given volume and concentration of contrast medium, changes in the injection rate had a prominent effect on aortic enhancement, while portal and hepatic enhancement also increased, though less prominently. An increase in the rate of contrast injection shifted the time-attenuation curve to the left and upward. B. Computer simulation: At a faster injection rate there was minimal change in the degree of hepatic attenuation, though the duration of the optimal temporal window decreased. The area between 10 and 30 HU was greatest when contrast medium was delivered at a rate of 2-3 mL/sec. Although the total area under the curve increased in proportion to the injection rate, most of this increase was above the upper threshold, so the temporal window narrowed and the optimal area decreased. Conclusion: Increases in volume, concentration, and injection rate all resulted in improved arterial enhancement. If cost is disregarded, increasing the injection volume is the most reliable way of obtaining good-quality enhancement. The optimal way of delivering a given amount of contrast medium can be calculated using a computer-based mathematical model.
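
To make the simulation idea concrete, here is a deliberately crude compartmental sketch: a constant-rate contrast injection drives a two-compartment model, concentration is converted to Hounsfield units, and the "optimal temporal window" (time above the 10 HU lower threshold) is measured. All volumes, rate constants, and the HU conversion factor are invented placeholders, not the authors' model parameters.

```python
# Two-compartment contrast-enhancement toy model; only the 10 HU lower
# threshold comes from the abstract, everything else is assumed.
import numpy as np
from scipy.integrate import solve_ivp

DOSE_ML, CONC_MGI_ML, RATE_ML_S = 120.0, 300.0, 2.0   # injection protocol (assumed)
V_CENTRAL, V_PERIPH = 5.0, 10.0                        # compartment volumes [l], assumed
K12, K21, K_ELIM = 0.08, 0.05, 0.02                    # exchange/elimination rates [1/s], assumed
HU_PER_MGI_PER_ML = 25.0                               # enhancement conversion, assumed
INJ_DURATION = DOSE_ML / RATE_ML_S                     # injection time [s]

def rhs(t, y):
    c1, c2 = y                                         # central / peripheral conc. [mgI/ml]
    infusion = (RATE_ML_S * CONC_MGI_ML / (V_CENTRAL * 1000.0)) if t < INJ_DURATION else 0.0
    dc1 = infusion - (K12 + K_ELIM) * c1 + K21 * c2 * (V_PERIPH / V_CENTRAL)
    dc2 = K12 * c1 * (V_CENTRAL / V_PERIPH) - K21 * c2
    return [dc1, dc2]

t_eval = np.linspace(0, 600, 601)                      # 10-minute simulation window
sol = solve_ivp(rhs, (0, 600), [0.0, 0.0], t_eval=t_eval, max_step=1.0)
enhancement_hu = HU_PER_MGI_PER_ML * sol.y[0]

above = enhancement_hu >= 10.0                         # lower threshold from the abstract
window_s = above.sum() * (t_eval[1] - t_eval[0])
print(f"peak enhancement: {enhancement_hu.max():.1f} HU, "
      f"time above 10 HU: {window_s:.0f} s")
```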


Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, existing computing environments make it difficult to realize the flexible storage expansion needed for a massive amount of unstructured log data and to run the many functions required to categorize and analyze it. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas make it hard to expand nodes by distributing stored data across them when the amount of data increases rapidly. NoSQL does not provide the complex computations of relational databases, but it can easily expand through node dispersion when the amount of data grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module for each analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
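
A minimal sketch of the collector-side routing the paper describes, assuming pymongo and an invented classification flag: real-time records are handed to the MySQL path, while bulk unstructured records are inserted schema-free into MongoDB. Connection strings, database and collection names, and the classification rule are assumptions, not the paper's implementation.

```python
# Route incoming bank log records: real-time records to MySQL (not shown),
# bulk unstructured records into a free-schema MongoDB collection.
from datetime import datetime, timezone
from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")        # assumed connection string
log_store = mongo["bank_logs"]["client_business"]       # assumed database/collection names

def route_log(record: dict) -> str:
    """Send real-time-analysis records to the MySQL path, everything else to MongoDB."""
    record.setdefault("collected_at", datetime.now(timezone.utc))
    if record.get("realtime"):                          # assumed flag set by the collector
        return "mysql"                                  # handled by the MySQL module (not shown)
    log_store.insert_one(record)                        # schema-free insert; the collection
    return "mongodb"                                    # can auto-shard as it grows

# Example: an unstructured transaction log line wrapped as a document.
route_log({"branch": "0001", "event": "transfer", "raw": "...", "realtime": False})
```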