• Title/Summary/Keyword: Time Complexity


Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model the correlations between input units efficiently since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). In order to train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that comprises Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras based on Theano.
After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed each input vector from 20 consecutive characters and the output as the following 21st character. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, which were clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm also took the longest to train for both the 3- and 4-LSTM-layer models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model; however, its validation loss and perplexity were not significantly improved, and even worsened under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing in the fields of natural language processing and speech recognition, which are the basis of artificial intelligence systems.
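The sliding-window dataset construction this abstract describes (20-character inputs, the 21st character as the target) can be sketched as follows. This is a minimal illustration of the windowing step only, not the paper's actual preprocessing code; the function name and sample text are invented for the example.

```python
def make_windows(text, window=20):
    """Return (input, target) pairs: each input is `window` consecutive
    characters and the target is the character that follows it."""
    pairs = []
    for i in range(len(text) - window):
        pairs.append((text[i:i + window], text[i + window]))
    return pairs

# Toy corpus standing in for the phoneme-decomposed Old Testament text.
sample = "in the beginning god created the heaven and the earth"
pairs = make_windows(sample)
# Each pair maps a 20-character context to the next character,
# exactly one pair per valid window position in the text.
```

Applied to the paper's full corpus, this scheme yields the 1,023,411 input-output pairs mentioned above.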

A Case Study of the Performance and Success Factors of ISMP(Information Systems Master Plan) (정보시스템 마스터플랜(ISMP) 수행 성과와 성공요인에 관한 사례연구)

  • Park, So-Hyun;Lee, Kuk-Hie;Gu, Bon-Jae;Kim, Min-Seog
    • Information Systems Review / v.14 no.1 / pp.85-103 / 2012
  • ISMP is a method of clearly specifying the user requirements in the RFP (Request for Proposal) of IS development projects. Unlike conventional methods of RFP preparation, which describe the user requirements of target systems in a rather superficial manner, ISMP systematically identifies the business needs and the status of information technology, analyzes the user requirements in detail, and defines the specific functions of the target systems in detail. By increasing the clarity of the RFP, the scale and complexity of the related work can be calculated accurately, responding companies can prepare proposals clearly, and the fairness of evaluating the many proposals can be improved as well. Above all, the problems that pose chronic challenges in this field, i.e., misunderstanding and conflicts between users and developers, excessive burden on developers, etc., can be resolved. This study is a case study that analyzes the execution process, accomplishments, problems, and success factors of two pilot projects that introduced ISMP for the first time. The ISMP execution procedures at actual sites were verified, and how the user requirements are described in the RFP was examined. The satisfaction levels with the ISMP-based RFP were found to be high compared to the conventional RFP. Although some problems occurred, such as difficulties in preparing the RFP and an increased workload due to the lack of understanding of and experience with ISMP, overall there were positive effects, such as a clearer scope for the target systems, improved information sharing and cooperation between users and developers, seamless communication between issuing customer corporations and IT service companies, and a reduction in changes to user requirements.
As a result of conducting action-research-style in-depth interviews with the persons in charge of the actual work, three ISMP success factors were derived: prior consensus on the need for ISMP, the acquisition of execution resources resulting from the support of the CEO and CIO, and the selection of the specification level of the user requirements. The results of this study will provide useful field information to corporations that are considering adopting ISMP and to IT service firms, and present meaningful suggestions on future research directions to researchers in the field of IT service competitive advantage.


A Control Method for designing Object Interactions in 3D Game (3차원 게임에서 객체들의 상호 작용을 디자인하기 위한 제어 기법)

  • 김기현;김상욱
    • Journal of KIISE: Computing Practices and Letters / v.9 no.3 / pp.322-331 / 2003
  • As the complexity of a 3D game increases due to various factors of the game scenario, controlling the interrelations of the game objects becomes a problem. Therefore, a game system needs to coordinate the responses of the game objects. It is also necessary to control the animation behaviors of the game objects in terms of the game scenario. To produce realistic game simulations, a system has to include a structure for designing the interactions among the game objects. This paper presents a method for designing a dynamic control mechanism for the interaction of game objects in the game scenario. For this method, we suggest a game agent system as a framework based on intelligent agents that can make decisions using specific rules. The game agent system is used to manage environment data, to simulate the game objects, to control interactions among game objects, and to support a visual authoring interface that can define various interrelations of the game objects. These techniques can handle the autonomy level of the game objects, the associated collision avoidance method, etc. It is also possible to support coherent decision-making by the game objects in response to a change of scene. In this paper, rule-based behavior control was designed to guide the simulation of the game objects. The rules are pre-defined by the user through the visual interface for designing their interactions. The Agent State Decision Network, which is composed of visual elements, passes information and infers the current state of the game objects. All of these methods can monitor and check variations in the motion states between game objects in real time. Finally, we present a validation of the control method together with a simple case-study example.

In this paper, we design and implement supervised classification systems for high-resolution satellite images. The systems support various interfaces and statistical data on training samples so that the most effective training data can be selected. In addition, new classification algorithms and satellite image formats can be added easily through the modularized systems. The classifiers consider the characteristics of the spectral bands of the selected training data. They provide various supervised classification algorithms, including parallelepiped, minimum distance, Mahalanobis distance, maximum likelihood, and fuzzy theory. We used IKONOS images as input and verified the systems for the classification of high-resolution satellite images.
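The rule-based behavior control described in the 3D game abstract above, where pre-defined rules infer an agent's current state, can be sketched roughly as follows. The rule format, state names, and context keys here are illustrative assumptions, not the paper's actual Agent State Decision Network design.

```python
class GameAgent:
    """A minimal rule-driven agent: rules are (condition, next_state)
    pairs checked in priority order on every update."""

    def __init__(self, state="idle"):
        self.state = state
        self.rules = []

    def add_rule(self, condition, next_state):
        self.rules.append((condition, next_state))

    def update(self, context):
        """Infer the next state from the first rule whose condition
        matches the current context; keep the old state otherwise."""
        for condition, next_state in self.rules:
            if condition(context):
                self.state = next_state
                break
        return self.state

agent = GameAgent()
agent.add_rule(lambda ctx: ctx["distance_to_player"] < 2.0, "attack")
agent.add_rule(lambda ctx: ctx["distance_to_player"] < 10.0, "chase")
state = agent.update({"distance_to_player": 5.0})  # matches the chase rule
```

Ordering the rules from most to least specific gives the deterministic, user-authorable interaction control the abstract describes.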

Improving Bidirectional LSTM-CRF model Of Sequence Tagging by using Ontology knowledge based feature (온톨로지 지식 기반 특성치를 활용한 Bidirectional LSTM-CRF 모델의 시퀀스 태깅 성능 향상에 관한 연구)

  • Jin, Seunghee;Jang, Heewon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.253-266 / 2018
  • This paper proposes a methodology that applies sequence tagging to improve the performance of NER (Named Entity Recognition) used in QA systems. In order to retrieve the correct answers stored in a database, it is necessary to translate the user's query into a database language such as SQL (Structured Query Language) so that the computer can interpret it. This is the process of identifying the classes or data names contained in the database. The existing method of looking up the words contained in the query in the database does not identify homophones or multi-word phrases because it does not consider the context of the user's query. If there are multiple search results, all of them are returned, so there can be many interpretations of the query, and the time complexity of the calculation becomes large. To overcome these problems, this study aims to reflect the contextual meaning of the query using a Bidirectional LSTM-CRF. We also tried to address the disadvantage of neural network models, which cannot identify untrained words, by using an ontology-knowledge-based feature. Experiments were conducted on an ontology knowledge base of the music domain, and the performance was evaluated. In order to accurately evaluate the performance of the Bidirectional LSTM-CRF proposed in this study, we converted words included in the training queries into untrained words, to test whether words that were contained in the database but unseen during training could still be correctly identified. As a result, it was possible to recognize objects in a context-aware manner and to recognize untrained words without re-training the Bidirectional LSTM-CRF model, and it was confirmed that object recognition performance improved overall.
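The ontology-knowledge-based feature idea above, where a token is flagged if it appears in the knowledge base so that an unseen word can still be recognized as an entity candidate, can be sketched as a simple gazetteer lookup. The function name, gazetteer entries, and feature encoding are invented for illustration; the paper's actual feature design may differ.

```python
def ontology_features(tokens, gazetteer):
    """Attach a binary in-ontology flag to each token; in a tagger this
    flag would be concatenated to the word embedding as an extra feature."""
    return [(tok, 1 if tok.lower() in gazetteer else 0) for tok in tokens]

# Hypothetical music-domain ontology entries (song titles).
music_gazetteer = {"imagine", "yesterday"}

feats = ontology_features(["play", "Yesterday", "please"], music_gazetteer)
# "Yesterday" is flagged even if it never appeared in the training queries.
```

Because the flag comes from the ontology rather than from training data, it survives the swap of trained words for untrained ones described in the evaluation.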

A Study on the cost allocation method of the operating room in the hospital (수술실의 원가배부기준 설정연구)

  • Kim, Hwi-Jung;Jung, Key-Sun;Choi, Sung-Woo
    • Korea Journal of Hospital Management / v.8 no.1 / pp.135-164 / 2003
  • The operating room is the facility that requires the highest investment per unit area in a hospital. It requires a commitment of hospital resources such as manpower, equipment, and materials, and the quantity of these resources actually committed differs from one type of operation to another. Because of this, it is not an easy task to allocate the operating cost to the individual clinical departments that share the operating room. A practical way to do so may be to collect and add up the operating costs incurred by each clinical department and charge the net cost to the account of the corresponding department. It has been customary to allocate the cost of the operating room to each individual department on the basis of the department's share of the number of operations or of the total revenue of the operating room. In an attempt to set up a more rational cost allocation method than the customary ones, this study proposes a new method that itemizes the operation cost into its constituent expenses in detail and adds them up to obtain the operating cost incurred by each individual department. To compare the new method with the conventional methods, the operating room in the main building of hospital A near Seoul was chosen as the study object. It was selected because it is the biggest operating room in hospital A and most operations in this hospital are conducted there. For this study, the one-month operation records from January 2001 in this operating room were analyzed to allocate the monthly operating cost to the six clinical departments that used it: general surgery (GS), orthopedic surgery (OS), neurosurgery (NS), dental surgery (DS), urology (URO), and obstetrics & gynecology (OB/GY).
In the new method (method 1), each operation cost is categorized into three major expenses, personnel expense, material expense, and overhead expense, and is allocated to the account of the clinical department that used the operating room. Method 1 shows that, of the total one-month operating cost of 814,054 thousand won in this hospital, 163,714 thousand won is allocated to GS, 335,084 thousand won to OS, 202,772 thousand won to NS, 42,265 thousand won to URO, 33,423 thousand won to OB/GY, and 36,796 thousand won to DS. The allocation of the operating cost to the six departments by the new method is quite different from that by the conventional methods. According to the conventional allocation method based on the ratio of a department's number of operations to the total number of operations in the operating room (method 2 hereafter), 329,692 thousand won is allocated to GS, 262,125 thousand won to OS, 87,104 thousand won to NS, 59,426 thousand won to URO, 51,285 thousand won to OB/GY, and 24,422 thousand won to DS. According to the other conventional allocation method, based on the ratio of a department's revenue (method 3 hereafter), 148,158 thousand won is allocated to GS, 272,708 thousand won to OS, 268,638 thousand won to NS, 45,587 thousand won to URO, 51,285 thousand won to OB/GY, and 27,678 thousand won to DS. As these results show, the cost allocation to the six departments by method 1 is strikingly different from those by methods 2 and 3. The operating cost allocated to GS by method 2 is about twice that by method 1. Method 3 allocates the operating cost to individual departments very similarly to method 1; however, there are still discrepancies between the two methods. In particular, the cost allocations to OB/GY by the two methods differ by roughly 53.4%.
The conventional methods 2 and 3 fail to properly take into account the facts that the average time spent per operation differs by clinical department, that whether expensive clinical materials are used dictates the operating cost, and that there is a difference between the official operating cost and the actual operating cost. This is why the conventional methods turn out to be inappropriate as operating cost allocation methods. In conclusion, the new method may be laborious and complicate bookkeeping, because it requires detailed bookkeeping of the operation cost by its constituent expenses and by individual clinical department, treating each department as an independent accounting unit. But the method is worth adopting because it allows the hospital to estimate the operating cost as accurately as practicable. The cost data used in this study, such as personnel expense, material cost, and overhead cost, may not be exact, so the operating cost estimated in the main text may differ from the actual cost. Also, the study is focused on the case of hospital A alone, which can hardly be claimed to represent hospitals across the nation. In spite of these limitations, this study is noteworthy in that it proposes a practical method of allocating the operating cost to each individual clinical department.
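The ratio-based allocation the abstract calls method 2 is simple proportional arithmetic: each department receives the total cost scaled by its share of the operation count. The sketch below illustrates only that mechanism; the figures are invented, not the hospital's data.

```python
def allocate_by_ratio(total_cost, counts):
    """Split total_cost among departments in proportion to counts
    (method 2 style: by number of operations; method 3 would pass
    revenues instead of operation counts)."""
    total = sum(counts.values())
    return {dept: total_cost * n / total for dept, n in counts.items()}

# Illustrative monthly figures (thousand won and operation counts).
alloc = allocate_by_ratio(800_000, {"GS": 40, "OS": 30, "NS": 10, "URO": 20})
# GS receives 40/100 of the total; the shares always sum to the total cost.
```

The itemized method 1, by contrast, sums each department's actual personnel, material, and overhead expenses rather than scaling a single total, which is why its results diverge from the proportional split.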


Intelligent Optimal Route Planning Based on Context Awareness (상황인식 기반 지능형 최적 경로계획)

  • Lee, Hyun-Jung;Chang, Yong-Sik
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.117-137 / 2009
  • Recently, intelligent traffic information systems have enabled people to forecast traffic conditions before hitting the road. These convenient systems operate on the basis of data reflecting current road and traffic conditions as well as distance-based data between locations. Thanks to the rapid development of ubiquitous computing, tremendous amounts of context data have become readily available, making vehicle route planning easier than ever. Previous research on optimizing vehicle route planning merely focused on finding the optimal distance between locations. Contexts reflecting road and traffic conditions were not seriously treated as a way to solve optimal routing problems beyond distance-based route planning, because this kind of information does not have a significant impact on routing until a complex traffic situation arises. Further, it was not easy to take the traffic contexts fully into account, because predicting dynamic traffic situations was regarded as a daunting task. However, with the rapid increase in traffic complexity, the importance of contexts reflecting data related to moving costs has emerged. Hence, this research proposes a framework designed to solve an optimal route planning problem by taking full account of additional moving costs such as road traffic cost and weather cost, among others; recent technological developments, particularly in the ubiquitous computing environment, have facilitated the collection of such data. The framework is based on the contexts of time, traffic, and environment, and addresses the following issues. First, we clarify and classify the diverse contexts that affect a vehicle's velocity and estimate the optimal moving cost based on dynamic programming, accounting for the context cost according to the variance of contexts.
Second, the velocity reduction rate is applied to find the optimal route (shortest path) using context data on the current traffic conditions. The velocity reduction rate refers to the degree to which a vehicle's possible velocity is reduced under the relevant road and traffic contexts, based on statistical or experimental data. Knowledge generated in this paper can be referenced by organizations that deal with road and traffic data. Third, in experimentation, we evaluate the effectiveness of the proposed context-based optimal route (shortest path) between locations by comparing it to the previously used distance-based shortest path. A vehicle's optimal route may change due to varying velocity caused by unexpected but potential dynamic situations depending on the road conditions. This study includes such context variables as 'road congestion', 'work', 'accident', and 'weather', which can alter the traffic conditions and affect a moving vehicle's velocity on the road. Since these context variables, except for 'weather', are related to road conditions, the relevant data were provided by the Korea Expressway Corporation; the 'weather'-related data were obtained from the Korea Meteorological Administration. The aware contexts are classified as contexts causing a reduction in vehicles' velocity, which determines the velocity reduction rate. To find the optimal route (shortest path), we introduced the velocity reduction rate into the calculation of a vehicle's velocity, reflecting composite contexts when one event synchronizes with another. We then proposed a context-based optimal route (shortest path) algorithm based on dynamic programming. The algorithm is composed of three steps. In the first, initialization step, the departure and destination locations are given, and the path step is initialized to 0.
In the second step, the moving costs between locations on the path, taking composite contexts into account, are estimated using the velocity reduction rate by context as the path steps increase. In the third step, the optimal route (shortest path) is retrieved through back-tracking. In the research model, we designed a framework to account for context awareness, moving cost estimation (taking both composite and single contexts into account), and an optimal route (shortest path) algorithm based on dynamic programming. Through illustrative experimentation using the Wilcoxon signed rank test, we showed that context-based route planning is much more effective than distance-based route planning. In addition, we found that the optimal solutions (shortest paths) from distance-based route planning might not be optimal in real situations, because road conditions are very dynamic and unpredictable and affect most vehicles' moving costs. While more information is needed for a more accurate estimation of moving vehicles' costs, this study remains viable for applications that reduce moving costs through effective route planning. For instance, it could be applied to deliverers' decision making, to enhance their satisfaction when they encounter unpredictable dynamic situations on the road. Overall, we conclude that taking the contexts into account as part of the costs is a meaningful and sensible approach to solving the optimal route problem.
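The three-step scheme above, estimating context-adjusted moving costs and then back-tracking the shortest path, can be sketched with an edge cost of distance divided by a context-reduced velocity, searched with Dijkstra's algorithm as a standard stand-in for the paper's dynamic-programming formulation. The graph, speeds, and reduction rates below are assumed values for illustration only.

```python
import heapq

def travel_time(distance, base_speed, reduction_rate):
    """Edge cost = distance / effective speed, where the context
    (congestion, weather, ...) scales the base speed down."""
    return distance / (base_speed * (1.0 - reduction_rate))

def shortest_path(graph, start, goal):
    """Dijkstra over context-adjusted edge costs, with back-tracking
    of the optimal route as in the third step above."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(queue, (nd, nxt))
    # back-track from the destination to recover the route
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# A->B is shorter but congested (50% velocity reduction);
# the detour A->C->B is context-free and ends up cheaper.
graph = {
    "A": [("B", travel_time(10, 100, 0.5)), ("C", travel_time(6, 100, 0.0))],
    "C": [("B", travel_time(6, 100, 0.0))],
}
path, cost = shortest_path(graph, "A", "B")
```

A purely distance-based planner would pick A->B here, which is exactly the kind of result the experiments above show to be sub-optimal once contexts are priced in.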

A Study on the Abstract Types of the Contemporary Landscape Design (현대조경디자인의 추상유형에 관한 연구)

  • Kim, Jun-Yon;Lee, Haeung-Yul;Bang, Kwang-Ja
    • Journal of the Korean Institute of Landscape Architecture / v.36 no.6 / pp.1-11 / 2009
  • This study focuses on abstract types in contemporary landscape design. The formation and artistry of contemporary landscape design reveal many areas that previously could not be expressed in scenic landscape, thanks to the deviation of genre in contemporary landscape and the hybridization that has occurred among the architecture, landscape, and art genres. The focus of this study is basic research concerning "the abstract", which is used as a creative artistic theory in a variety of art fields such as landscape, architecture, and painting. Through a theoretical establishment of "the abstract", its process of change, and the discovery of its contemporary principles, the relationships between each art field and the formation of the abstract, abstract language, and abstract properties in landscape have been studied. The use of the abstract in contemporary landscape design can be classified in three ways. First, the inductive abstract represents conceptual, transcendental symbols not logically but through intuition and transcendental cognition, to display the inner expressions, ideas, and minds of the artists. Second, the deductive abstract represents an expansive, logical model of the simplification, distortion, and exaggeration of objects, based on knowledge and logical reasoning about objective fact grounded in traditional realism. Third, the complex abstract is a concept bound to both the deductive and the inductive abstract. As a major trend, the concept of "the abstract" in contemporary landscape has been putting down ever-deeper roots. New trends such as abstract works and landscape architecture reflecting the artist's inner expression, in particular, will provide fertile soil for landscape in the future. Further research on the concept of "the abstract" will also be necessary in the time to come.

Noise-robust electrocardiogram R-peak detection with adaptive filter and variable threshold (적응형 필터와 가변 임계값을 적용하여 잡음에 강인한 심전도 R-피크 검출)

  • Rahman, MD Saifur;Choi, Chul-Hyung;Kim, Si-Kyung;Park, In-Deok;Kim, Young-Pil
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.12 / pp.126-134 / 2017
  • There have been numerous studies on extracting the R-peak from electrocardiogram (ECG) signals. However, most of the detection methods are complicated to implement in a real-time portable electrocardiograph device and have the disadvantage of requiring a large amount of computation. R-peak detection requires pre-processing and post-processing related to baseline drift and the removal of commercial power-line noise from the ECG data. Adaptive filter techniques are widely used for R-peak detection, but the R-peak cannot be detected when the input is lower than a threshold value. Moreover, P-peaks and T-peaks may be detected erroneously when noise corrupts the derived threshold value. We propose a robust R-peak detection algorithm with low complexity and simple computation to solve these problems. The proposed scheme removes the baseline drift in the ECG signal using an adaptive filter, which resolves the problems involved in threshold extraction. We also propose a technique to extract an appropriate threshold value automatically using the minimum and maximum values of the filtered ECG signal, and a threshold neighborhood search technique to detect the R-peak in the ECG signal. Through experiments, we confirmed the improved R-peak detection accuracy of the proposed method and achieved a detection speed suitable for a mobile system by reducing the amount of computation. The experimental results show that the heart rate detection accuracy and sensitivity were very high (about 100%).
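The variable-threshold idea above, deriving the threshold from the minimum and maximum of the already-filtered signal and keeping only local maxima above it, can be sketched as follows. The 0.6 scaling factor and the toy signal are assumptions for illustration, not the paper's exact rule or data.

```python
def variable_threshold(signal, factor=0.6):
    """Place the threshold between the signal's min and max,
    at a fraction `factor` of the full range."""
    lo, hi = min(signal), max(signal)
    return lo + factor * (hi - lo)

def detect_r_peaks(signal, factor=0.6):
    """Return indices of local maxima exceeding the adaptive threshold;
    assumes baseline drift has already been removed by filtering."""
    thr = variable_threshold(signal, factor)
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > thr and signal[i - 1] < signal[i] >= signal[i + 1]:
            peaks.append(i)
    return peaks

# Toy beat train: large R waves at indices 3 and 7, small P/T-like
# bumps elsewhere that fall below the adaptive threshold.
ecg = [0, 1, 0, 9, 0, 1, 0, 10, 0, 1, 0]
peaks = detect_r_peaks(ecg)
```

Because the threshold tracks the filtered signal's own range, the small P/T-like bumps in the toy trace are rejected without any hand-tuned absolute cutoff.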

An Empirical Study on the Importance of Psychological Contract Commitment in Information Systems Outsourcing (정보시스템 아웃소싱에서 심리적 계약 커미트먼트의 중요성에 대한 연구)

  • Kim, Hyung-Jin;Lee, Sang-Hoon;Lee, Ho-Geun
    • Asia Pacific Journal of Information Systems / v.17 no.2 / pp.49-81 / 2007
  • Research on IS (Information Systems) outsourcing has focused on the importance of legal contracts and partnerships between vendors and clients. Without detailed legal contracts, there is no guarantee that an outsourcing vendor will not indulge in self-serving behavior. In addition, partnerships can supplement legal contracts in managing the relationship between clients and vendors, since legal contracts by themselves cannot deal with all the complexity and ambiguity involved in IS outsourcing relationships. In this paper, we introduce the psychological contract (between client and vendor) as an important variable for IS outsourcing success. A psychological contract refers to an individual's mental beliefs about his or her mutual obligations in a contractual relationship (Rousseau, 1995). A psychological contract emerges when one party believes that a promise of future returns has been made, a contribution has been given, and thus an obligation has been created to provide future benefits (Rousseau, 1989). An employment psychological contract, a widespread concept in psychology, refers to employer and employee expectations of the employment relationship, i.e., the mutual obligations, values, expectations, and aspirations that operate over and above the formal contract of employment (Smithson and Lewis, 2003). Similar to the psychological contract between an employer and employee, IS outsourcing involves a contract and a set of mutual obligations between client and vendor (Ho et al., 2003). Given the lack of prior research on psychological contracts in the IS outsourcing context, we extend such studies and offer insights by investigating the role of psychological contracts between client and vendor.
Psychological contract theory offers a highly relevant and sound theoretical lens for studying IS outsourcing management because of its six distinctive principles: (1) it focuses on mutual (rather than one-sided) obligations between contractual parties, (2) it is more comprehensive than the concept of a legal contract, (3) it is an individual-level construct, (4) it changes over time, (5) it affects organizational behaviors, and (6) it is susceptible to organizational factors (Koh et al., 2004; Rousseau, 1996; Coyle-Shapiro, 2000). The aim of this paper is to put the concept of psychological contract commitment (PCC) under the spotlight by identifying its mediating effects between legal contracts/partnerships and IS outsourcing success. Our interest is in psychological contract commitment (PCC), or commitment to psychological contracts, which is the extent to which a partner consistently and deeply attends to what the counter-party believes to be its obligations during the IS project. The basic premise for the hypothesized relationship between PCC and success is that, for outsourcing success, client and vendor should continually commit to the mutual obligations in which both parties believe, rather than only to explicit obligations. Psychological contract commitment plays a pivotal role in evaluating a counter-party because it reflects what one party really expects from the other. If one party consistently shows high commitment to psychological contracts, the other party will evaluate it positively. This increases the positive reciprocation efforts of the other party, thus leading to successful outsourcing outcomes (McNeeley and Meglino, 1994). We used matched sample data for this research.
We collected three responses from each pair of client and vendor firms: a project manager of the client firm, a project member from the vendor firm with whom the project manager cooperated, and an end-user of the client company who actually used the outsourced information systems. Special caution was taken during the data collection process to avoid any bias in the responses. We first sent three types of questionnaires (A, B, and C) to each project manager of the client firm, asking him/her to answer the first type of questionnaire (A).

The Method of Multi-screen Service using Scene Composition Technology based on HTML5 (HTML5 기반 장면구성 기술을 통한 멀티스크린 서비스 제공 방법)

  • Jo, Minwoo;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.18 no.6 / pp.895-910 / 2013
  • A multi-screen service is a service that consumes one or more media items on a number of terminals, simultaneously or selectively. Multi-screen services have become practical owing to the spread of smart TVs and smart terminals. Also, in a hybrid broadcasting environment, which is the convergence of broadcasting and communication environments, it is possible to provide various user experiences through content consumed on multiple screens. In a hybrid broadcasting environment, scene composition technology can be used as an element technology for multi-screen services. Using scene composition technology, multiple media can be consumed in combination at specified presentation times and spatial positions. Thus, a multi-screen service based on scene composition technology can provide spatial and temporal control and consumption of multiple media through linkage between the terminals. However, existing scene composition technologies cannot easily be used in hybrid broadcasting because of environmental constraints, the difficulty of applying them to various terminals, and their complexity. To address these problems, HTML5 can be considered. HTML5 is expected to be commonly applicable across various smart terminals and provides consumption of diverse media. So, in this paper, we propose scene composition and multi-screen service technology based on HTML5 that is expected to be usable on the various smart terminals providing the hybrid broadcasting environment. This includes an introduction to HTML5 and multi-screen services, a method of providing information related to scene composition and multi-screen services through the extension of HTML5 elements and attributes, media signaling between terminals, and a method of synchronization. In addition, the proposed HTML5-based scene composition and multi-screen service technology was verified through implementation and experiments.