• Title/Summary/Keyword: Information Systems (IS) Performance (정보시스템(IS) 성과)


Effectiveness Enhancement Measures for Local Government Environmental Impact Assessment (EIA) by Improving Small-scale EIA Institution (소규모 환경영향평가 제도개선을 통한 지자체 환경영향평가 효과성 증진방안)

  • Jongook Lee;Kyeong Doo Cho
    • Journal of Environmental Impact Assessment / v.32 no.1 / pp.15-28 / 2023
  • In the Republic of Korea, small-scale EIA applies to projects with a planned area above roughly 5,000~60,000 m2, depending on the project type and land-use classification. The lower limit for the corresponding local government EIA, however, generally lies above the small-scale EIA thresholds, so overlapping ranges exist. This overlap has grown since road construction and district unit planning were added as small-scale EIA target projects in the partial revision of the "Enforcement Decree of the Environmental Impact Assessment Act" in November 2016, and the current consultation system deserves scrutiny because a small-scale EIA may be completed without gathering review opinions at the local level. Projects that fall under local government EIA but are consulted as small-scale EIAs may appear insignificant given their small total number; however, it is worth noting that a local government may refrain from adding a target project because the small-scale EIA already covers it. This study suggests three policy measures for improving the small-scale EIA so as to enhance the effectiveness of local government EIA: supplementing the institutional arrangements so that review opinions from the local region are incorporated into small-scale EIA; giving priority to local EIA for conducting projects in the overlapping ranges, through partial amendments to the EIA law regarding exceptions to local government EIA; and adding small target projects (below the small-scale EIA thresholds) to the ordinance where local government EIA is deemed necessary. Although a positive function of small-scale EIA has been confirmed, efforts should be made to improve the situation in which many projects within local governments are consulted without review from the region.

Introduction of region-based site functions into the traditional market environmental support funding policy development (재래시장 환경개선 지원정책 개발에서의 지역 장소적 기능 도입)

  • Jeong, Dae-Yong;Lee, Se-Ho
    • Proceedings of the Korean Distribution Association Conference / 2005.05a / pp.383-405 / 2005
  • The traditional market is foremost a regionally positioned place: it directly represents regional and cultural traits and plays an important role in distribution through reciprocal, informational, and cultural exchanges, serving to form local communities. The traditional market in Korea is a representative retail business, typically run as a family business of fewer than five members with premodern techniques in product management, purchasing, and marketing. Since the 1990s, the appearance of new distribution businesses and large discount stores, rising customer living standards, changing purchasing patterns, and the spread of the Internet have eroded the traditional market's competitiveness; all of these changes in the external distribution environment have led traditional markets to lose their place in the economy. The traditional market should revive on a regional site basis through the formation of a community of regional neighbors and through knowledge-sharing that leads to the creation of wealth. For that purpose, the following components are necessary: 1) facilities suitable for the spatial character of the present; 2) trust built through exchanges within the changing market environment that simultaneously satisfy customers' desires; 3) international benchmarking of regionally centered cases such as TCM (England), BID (USA), and TMO (Japan), so that store-placement policy shifts from a spot policy to a line policy; 4) a conversion of communicative conception through a surface-policy approach centered on a macro-regional perspective. The budget of the traditional market funding policy operated between 2001 and 2004, as a countermeasure intended to solve the problems of old traditional markets through government intervention in regional economies and thereby promote national economic strength.
This national treasury funding project centered on environmental improvement, research, and business modernization, with expenditure of 3,853 hundred million won (Korean currency). However, the effectiveness of this project has yet to be proven through investigation. Furthermore, in promoting this funding project, a lack of professionalism among market merchants constantly limited comprehensive strategies, reduced the capacity for middle- and long-term planning, and weakened voluntary merchant agreement. The traditional market should go beyond a mere physical place and ordinary products; creative site strategies employing a communicative approach must accompany these efforts to make the market a new regional and spatial living place. Thus, in light of recent paradigm changes and the introduction of region-based site functions into the traditional market, the purpose of this research is to reinvestigate the traditional market in its cultural and economic meanings and to redirect policy toward newly developed projects. Identifying social policy demands through comparative analysis of domestic and international cases, and developing innovative and expert management leadership for NPO or NGO civil entrepreneurs through advanced case research on current promotion methods, are extremely important. Especially important are discovering the seeds of a cultural contents industry centered on regional resources, commercializing regionally renowned products, and constructing complex cultural living places for regional networks. To accelerate these solutions, comprehensive and systematized research operated within a mentor academy system is required, as such research will reveal the distinctive traits of the traditional market in an aging society.


A Study on Evaluating the Possibility of Monitoring Ships of CAS500-1 Images Based on YOLO Algorithm: A Case Study of a Busan New Port and an Oakland Port in California (YOLO 알고리즘 기반 국토위성영상의 선박 모니터링 가능성 평가 연구: 부산 신항과 캘리포니아 오클랜드항을 대상으로)

  • Park, Sangchul;Park, Yeongbin;Jang, Soyeong;Kim, Tae-Ho
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1463-1478 / 2022
  • Maritime transport accounts for 99.7% of the exports and imports of the Republic of Korea; therefore, developing a vessel monitoring system for efficient operation is of significant interest. Several studies have focused on tracking and monitoring vessel movements based on automatic identification system (AIS) data; however, ships without AIS are difficult to monitor and track. High-resolution optical satellite images can provide the missing layer of information in AIS-based monitoring systems because they can identify non-AIS vessels and small ships over a wide area. It is therefore necessary to investigate vessel monitoring and small-vessel classification using high-resolution optical satellite images. This study examined the feasibility of a ship monitoring system based on Compact Advanced Satellite 500-1 (CAS500-1) imagery by first training a deep learning model on satellite image data and then performing detection on other images. The training data were acquired from ships in the Yellow Sea and its major ports, and the detection model was built using the You Only Look Once (YOLO) algorithm. Ship detection performance was evaluated for one domestic and one international port. Detections of ships in the anchorage and berth areas were compared with ship classification information obtained from AIS, yielding accuracies of 85.5% and 70% for the domestic and international cases, respectively. The results indicate that high-resolution satellite images can be used to monitor moored ships. The developed approach could be applied to vessel tracking and monitoring systems at major ports around the world if the accuracy of the detection model is improved through continuous construction of training data.
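The evaluation step described above, matching detected ships against AIS-derived references, can be sketched in code. This is a minimal illustration, not the paper's exact protocol: the box format (x1, y1, x2, y2), the IoU threshold, and the accuracy definition are assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detection_accuracy(detections, ais_boxes, threshold=0.5):
    """Fraction of AIS-reported ships matched by at least one detection."""
    matched = sum(1 for gt in ais_boxes
                  if any(iou(det, gt) >= threshold for det in detections))
    return matched / len(ais_boxes) if ais_boxes else 0.0
```

Scoring detections this way rewards localization as well as mere presence, which matters when moored ships sit close together in berth areas.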

Development Plan of Guard Service According to the LBS Introduction (경호경비 발전전략에 따른 위치기반서비스(LBS) 도입)

  • Kim, Chang-Ho;Chang, Ye-Chin
    • Korean Security Journal / no.13 / pp.145-168 / 2007
  • As society becomes information-oriented, the guard (security) service needs to change as well. Communication and hardware technologies are developing rapidly, and as the Internet environment shifts from wired to wireless, people can access all kinds of information services through mobile wireless devices such as laptops, PDAs, and mobile phones. The location-based service (LBS) field, which delivers the needed information and services anytime and anywhere on any kind of device, is expanding its territory all the more with the appearance of the ubiquitous-computing concept. LBS uses a chip in the mobile phone to confirm a subscriber's position at any time, with an accuracy ranging from tens of centimeters to hundreds of meters. LBS can be divided by service method into systems that use mobile communication base stations and systems that use satellites. Each service type can be further divided into location-tracking services, public safety services, location-based information services, and so on, and these are the areas to be combined with guard service development. The market is projected to reach 8,460 hundred million won in 2005 and 16,561 hundred million won in 2007. Given this situation, it can be expected that the guard service will have to change rapidly as LBS is applied. The study method is basically a documentary review: it relies mainly on secondary sources such as academic journals and books published at home and abroad, Internet searches, other research reports, statute books, theses published by the public order research institutes of the Regional Police Headquarters, police operational data, statute-related materials, and documents and statistics from private guard companies.
The purpose of the study is therefore to explore the application of LBS and to present problems and improvements by indirectly analyzing the manager side, which operates LBS-adapted guard services; the government side, which must activate LBS; and the institutional, operational, personnel-management, and education and training issues on the training side, which must study and teach the new guard service, with the ultimate aim of delivering high-quality guard services.


Implantable Flexible Sensor for Telemetrical Real-Time Blood Pressure Monitoring using Polymer/Metal Multilayer Processing Technique (폴리머/ 금속 다층 공정 기술을 이용한 실시간 혈압 모니터링을 위한 유연한 생체 삽입형 센서)

  • Lim Chang-Hyun;Kim Yong-Jun;Yoon Young-Ro;Yoon Hyoung-Ro;Shin Tae-Min
    • Journal of Biomedical Engineering Research / v.25 no.6 / pp.599-604 / 2004
  • An implantable flexible sensor for telemetric real-time blood pressure monitoring, fabricated with a polymer/metal multilayer processing technique, is presented. The realized sensor is mechanically flexible, so it can be implanted less invasively and attached to the outside of a blood vessel to monitor variations in blood pressure. Unlike conventional methods that install the sensor inside the vessel, the suggested method can monitor relative blood pressure without injuring the vessel. A major cause of sudden death in adults is arterial disease such as angina pectoris and myocardial infarction; circulatory diseases resulting from vessel occlusion by plaque can be prevented and treated early through continuous blood pressure monitoring. The suggested monitoring procedure is as follows. First, the integrated sensor is attached to the outer wall of a blood vessel. Second, it detects the mechanical contraction and expansion of the vessel. A reader antenna then telemetrically registers this as the relative variation in blood pressure. There are no active devices in the sensor system; the transmission of energy and signal therefore relies on mutual inductance between the internal antenna of the LC resonator and the external reader antenna. To confirm the feasibility of the sensing mechanism, an in vitro experiment was performed using silicone rubber tubing filled with blood: pressure was applied to the tubing, and the shift of the resonant frequency with the applied pressure was measured. The resonant frequency shifted by 2.4 MHz as the applied pressure changed from 0 to 213.3 kPa, giving a sensitivity of 11.25 kHz/kPa for the implantable blood pressure sensor.
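The reported sensitivity follows directly from the measured frequency shift and pressure range; a short check of that arithmetic, together with the standard LC resonance formula on which the sensing principle relies (the component values in the function are illustrative, not from the paper):

```python
import math

# Sensitivity check: 2.4 MHz total shift over a 0-213.3 kPa pressure range.
delta_f_khz = 2.4e6 / 1e3          # frequency shift expressed in kHz
delta_p_kpa = 213.3                # applied pressure range in kPa
sensitivity = delta_f_khz / delta_p_kpa   # ~11.25 kHz/kPa, as reported

# Resonant frequency of the passive LC tank read out by the external antenna;
# pressure-induced geometry changes shift the capacitance, and hence f.
def resonant_frequency(inductance_h, capacitance_f):
    """f = 1 / (2*pi*sqrt(L*C)), with L in henries and C in farads."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))
```

The telemetric readout thus reduces pressure sensing to tracking the resonance of a purely passive circuit, which is what allows the implant to work without a battery.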

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought machines could not beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what was predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains; deep learning in particular drew attention as the core technique in the AlphaGo algorithm. Deep learning is already being applied to many problems. It shows especially good performance in image recognition, and also in high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. By contrast, it is hard to find deep learning research on traditional business data and structured data analysis. In this study, we investigate whether deep learning techniques can be used not only for recognizing high-dimensional data but also for the binary classification problems typical of business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account.
To evaluate deep learning algorithms on this binary classification problem, we compared models built with the CNN and LSTM algorithms and the dropout technique, all widely used in deep learning, against MLP models, a traditional artificial neural network. Since not every network design alternative can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used instead of overall accuracy to show how well the models classify the class of interest. The methods for applying each deep learning technique were as follows. The CNN algorithm reads adjacent values around a specific value and recognizes local features, but in business data the distance between fields usually does not matter because each field is independent; we therefore set the CNN filter size to the number of fields, so the whole record is learned at once, and added a hidden layer for decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For dropout, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout.
Several findings emerged. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because the CNN performed well on a binary classification problem to which it has rarely been applied, as well as in the fields where its effectiveness is proven. Third, the LSTM algorithm seems unsuitable for binary classification because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
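The F1 score used for evaluation above balances precision and recall on the class of interest; a minimal implementation for binary labels:

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Under F1, a model that never predicts the positive class scores 0 even when overall accuracy is high, which is why it suits imbalanced response data like telemarketing outcomes better than plain accuracy.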

An Exploratory study on the demand for training programs to improve Real Estate Agents job performance -Focused on Cheonan, Chungnam- (부동산중개인의 직무능력 향상을 위한 교육프로그램 욕구에 관한 탐색적 연구 -충청남도 천안지역을 중심으로-)

  • Lee, Jae-Beom
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.9 / pp.3856-3868 / 2011
  • Until recently, research in real estate has focused on the real estate market and market analysis, and studies on developing training programs that improve real estate agents' job performance are relatively few. This study therefore empirically analyzes the needs of real estate agents in Cheonan for training programs to improve their job performance. The results are as follows. First, asked what educational content they need to improve their job performance, most respondents named analysis of housing value, legal knowledge, real estate management, accounting, real estate marketing, and understanding of real estate policy, being well aware that training programs are the best way to respond to changing client needs. Second, asked about real estate marketing strategies, most respondents showed awareness of new strategies for meeting client needs, since new forms of marketing, including Internet advertising, are needed in the field as the paradigm, including information technology, changes. Third, asked about the need for real estate training programs, 92% of respondents answered that they need programs run by university continuing-education centers, and they also expressed a need for retraining programs that utilize the resources of local universities. They further called for an effective and efficient training system that draws on university human resources, under a department of 'Real Estate Contract', to support agents' job performance. Fourth, real estate management (44.2%) and real estate marketing (42.3%) were the contents most often chosen for a regular course on improving agents' job performance.
This reflects their will to understand client needs through the lens of real estate management and marketing; respondents also preferred irregular courses to regular ones. Despite these results, this study covered only Cheonan, so more diverse areas need to be researched. The demand for programs to improve agents' job performance should be analyzed empirically not just in Cheonan but also in cities where the real estate business is booming, such as Pyeongchon, Ilsan, and Bundang, and among undergraduate and graduate students majoring in real estate studies. Such studies can provide information for developing customized training programs by evaluating the elements agents need to satisfy clients and improve their performance, and the program-development variables they reveal can be incorporated into real estate curricula and used practically for the development of real estate studies in this fast-changing era.

Implementation Strategy for the Elderly Care Solution Based on Usage Log Analysis: Focusing on the Case of Hyodol Product (사용자 로그 분석에 기반한 노인 돌봄 솔루션 구축 전략: 효돌 제품의 사례를 중심으로)

  • Lee, Junsik;Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.117-140 / 2019
  • As the population ages and various social problems concerning vulnerable elderly people are raised, the need for effective elderly care solutions that protect the health and safety of the elderly generation is growing. Recently, more and more people are using smart toys equipped with ICT for elderly care. In particular, log data collected through smart toys are highly valuable as quantitative and objective indicators for policy-making and service planning. However, smart toy research has been limited to areas such as smart toy development and validation of smart toy effectiveness; there is a dearth of research that derives insights from smart toy log data and uses them for decision-making. This study analyzes log data collected from a smart toy and derives insights for improving elderly users' quality of life. Specifically, we performed a user-profiling analysis and elicited the mechanism of quality-of-life change based on usage behavior. First, in the user-profiling analysis, two important dimensions for classifying elderly groups were derived from five factors of the elderly user's living management: 'Routine Activities' and 'Work-out Activities'. Based on these dimensions, hierarchical cluster analysis and K-means clustering were performed to classify the elderly users into three groups, and the demographic characteristics and smart toy usage behavior of each group were identified. Second, stepwise regression was performed to elicit the mechanism of quality-of-life change; the effects of interaction, content usage, and indoor activity on improvement in depression and lifestyle were identified.
In addition, user evaluation of smart toy performance and satisfaction with the smart toy were identified as parameters mediating the relationship between usage behavior and quality-of-life change. The specific mechanisms are as follows. First, interaction between the smart toy and the elderly user improved depression, mediated by attitudes toward the smart toy: 'satisfaction toward the smart toy', a variable that improves depression, changes with how users evaluate smart toy performance, and it is interaction with the smart toy that positively affects that performance evaluation. These results can be interpreted as elderly people who desire emotional stability interacting actively with the smart toy and assessing it positively, greatly appreciating its effectiveness. Second, content usage was confirmed to directly improve lifestyle without passing through other variables: elderly people who used more of the content provided by the smart toy improved their lifestyle, regardless of their attitude toward the toy. Third, the log data show that a high degree of indoor activity improves both lifestyle and depression. The more indoor activity, the better the elderly person's lifestyle, regardless of attitude toward the smart toy; in addition, elderly people with high indoor activity are satisfied with the smart toy, which in turn improves depression. Conversely, elderly people who prefer outdoor activities, or who are less active because of health problems, are unlikely to be satisfied with the smart toy and do not obtain the depression-improving effect. In summary, three groups of elderly users were identified based on their activities, and the important characteristics of each type were described.
This study thus sought to identify the mechanism by which elderly users' behavior with the smart toy affects their actual lives, and to derive user needs and insights.
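The clustering step described above can be illustrated with a plain K-means implementation. The 2-D toy points and the deterministic initialization below are illustrative stand-ins for the paper's two activity dimensions ('Routine Activities' and 'Work-out Activities'), not its actual data:

```python
def kmeans(points, k, iters=50):
    """Plain K-means on 2-D points; returns (centroids, labels)."""
    centroids = [points[i] for i in range(k)]   # naive deterministic init: first k points
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        # by squared Euclidean distance.
        labels = [min(range(k),
                      key=lambda j: (p[0] - centroids[j][0]) ** 2
                                  + (p[1] - centroids[j][1]) ** 2)
                  for p in points]
        # Update step: move each centroid to the mean of its assigned points.
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, labels

# Two well-separated activity profiles should fall into two clusters.
toy = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, groups = kmeans(toy, k=2)
```

In practice the paper pairs this with hierarchical clustering to choose the number of groups; production code would also use a smarter initialization such as k-means++.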

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • The Convolutional Neural Network (ConvNet) is a powerful class of deep neural network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s, but at that time neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, reviving interest in neural networks. The success of the ConvNet rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains it is difficult and laborious to gather a large-scale dataset for training a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. Both obstacles can be addressed by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, one trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor.
However, directly using the high-dimensional features extracted from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding the optimal combination of multiple layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features are extracted from the three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation that carries more information about each image; concatenating the three fully connected layer features yields an image representation with 9,192 (4,096+4,096+1,000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397), comparing multiple ConvNet layer representations against single-layer representations, with PCA used for feature selection and dimension reduction. The experiments demonstrate the importance of feature selection for multiple ConvNet layer representations.
Moreover, the proposed approach achieved 75.6% accuracy versus 73.9% for the FC7 layer on Caltech-256, 73.1% versus 69.2% for the FC8 layer on VOC07, and 52.2% versus 48.7% for the FC7 layer on SUN397. We also showed that the proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
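The feature-concatenation and PCA step at the core of the pipeline can be sketched with NumPy. The dimensions here are toy stand-ins (40/40/10 rather than 4,096/4,096/1,000), the random arrays stand in for real layer activations, and the PCA is done via SVD of the centered feature matrix:

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project row-vector features onto their top principal components."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal directions,
    # ordered by decreasing singular value (explained variance).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Toy stand-ins for fc6/fc7/fc8 activations of a batch of 8 images.
rng = np.random.default_rng(0)
fc6 = rng.normal(size=(8, 40))
fc7 = rng.normal(size=(8, 40))
fc8 = rng.normal(size=(8, 10))

combined = np.concatenate([fc6, fc7, fc8], axis=1)  # 8 x 90, cf. 4096+4096+1000
reduced = pca_reduce(combined, n_components=5)       # 8 x 5 salient features
```

The classifier is then trained on `reduced` instead of the raw concatenation, which removes the cross-layer redundancy the paper identifies.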

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016); these models can reflect dependencies between objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). This paper therefore proposes a phoneme-level language model for Korean based on LSTM models; a phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We constructed language models with three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with the Theano backend.
After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We constructed input vectors of 20 consecutive characters with the following 21st character as output; in total, 1,023,411 input-output pairs were obtained and divided into training, validation, and test sets in a 70:15:15 ratio. All simulations ran on a system with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved and under some conditions became worse. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model. Although the completeness of the generated sentences differed slightly between models, sentence generation was quite satisfactory under all simulation conditions: only legitimate Korean letters were generated, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which underlie artificial intelligence systems.
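The windowing scheme and the perplexity metric described above are easy to make concrete. A small sketch: the window width of 20 matches the paper, while the sample text and the uniform probabilities fed to the perplexity function are purely illustrative.

```python
import math

def make_windows(text, width=20):
    """Slide over the text: 'width' input characters, the next character as target."""
    return [(text[i:i + width], text[i + width]) for i in range(len(text) - width)]

def perplexity(target_probs):
    """exp of the mean negative log-probability the model assigned to each target."""
    return math.exp(-sum(math.log(p) for p in target_probs) / len(target_probs))

pairs = make_windows("abcdefghijklmnopqrstuvwxyz")
# A model that always spreads its mass equally over 4 characters
# has perplexity exactly 4.
ppl = perplexity([0.25] * len(pairs))
```

Lower perplexity means the model is, on average, less "surprised" by each target character, which is why it serves as the test-set metric alongside validation loss.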