• Title/Summary/Keyword: Korean Paper


A Fast 4X4 Intra Prediction Method using Motion Vector Information and Statistical Mode Correlation between 16X16 and 4X4 Intra Prediction in H.264|MPEG-4 AVC (H.264|MPEG-4 AVC 비디오 부호화에서 움직임 벡터 정보와 16X16 및 4X4 화면 내 예측 최종 모드간 통계적 연관성을 이용한 화면 간 프레임에서의 4X4 화면 내 예측 고속화 방법)

  • Na, Tae-Young;Jung, Yun-Sik;Kim, Mun-Churl;Hahm, Sang-Jin;Park, Chang-Seob;Park, Keun-Soo
    • Journal of Broadcast Engineering / v.13 no.2 / pp.200-213 / 2008
  • H.264|MPEG-4 AVC is a new video coding standard defined by the JVT (Joint Video Team), which consists of ITU-T and ISO/IEC. Many techniques are adopted for compression efficiency; in particular, intra prediction in inter frames is one of them, but it leads to an excessive amount of encoding time due to the candidate mode decision and the RD-cost calculation. For this reason, fast determination of the best intra prediction mode is the main issue for saving encoding time. In this paper, the number of candidate modes for 4×4 intra prediction is reduced by using the statistical relation between the 16×16 and 4×4 intra prediction modes. First, the block mode of each macroblock is predicted using the motion vector obtained from inter prediction. If intra prediction is needed, a correlation table between the 16×16 and 4×4 intra prediction modes is built from the probabilities collected during the coding of each I frame. Second, using this table, only the candidate modes for 4×4 intra prediction whose cumulative probability reaches a predefined value are considered within the same GOP. For the experiments, JM11.0, the reference software of H.264|MPEG-4 AVC, is used, and the experimental results show that the encoding time is reduced by up to 51.24% with negligible PSNR drop and bitrate increase.
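
The candidate-mode reduction described in this abstract can be sketched as follows. This is a minimal Python illustration; the mode-correlation counters, the 0.9 probability threshold, and the helper names are assumptions made here, not the paper's actual implementation.

    from collections import defaultdict

    # Hypothetical statistics: counts[m16][m4] accumulated while coding I frames,
    # where m16 is the best 16x16 intra mode (0..3) and m4 the best 4x4 mode (0..8).
    counts = defaultdict(lambda: defaultdict(int))

    def update_correlation(best_mode_16x16, best_mode_4x4):
        """Accumulate the joint mode statistics during I-frame coding."""
        counts[best_mode_16x16][best_mode_4x4] += 1

    def candidate_4x4_modes(best_mode_16x16, threshold=0.9):
        """Return the smallest set of 4x4 modes whose cumulative conditional
        probability (given the 16x16 mode) reaches the threshold; the remaining
        modes are skipped in the RD-cost search for the rest of the GOP."""
        row = counts[best_mode_16x16]
        total = sum(row.values())
        if total == 0:                      # no statistics yet: test all 9 modes
            return list(range(9))
        ranked = sorted(row.items(), key=lambda kv: kv[1], reverse=True)
        chosen, cum = [], 0.0
        for mode, c in ranked:
            chosen.append(mode)
            cum += c / total
            if cum >= threshold:
                break
        return chosen

The encoder would then compute the RD cost only for the returned modes instead of all nine 4×4 prediction modes.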

The Evolution of Cyber Singer Viewed from the Coevolution of Man and Machine (인간과 기계의 공진화적 관점에서 바라본 사이버가수의 진화과정)

  • Kim, Dae-Woo
    • Cartoon and Animation Studies / s.39 / pp.261-295 / 2015
  • The cyber singers that appeared in the late 1990s disappeared shortly after their debut, and although a few attempts followed in the 2000s, none achieved significant success. The cyber singer was born from the technical development of the IT industry and the emergence of the idol training system in the music industry, evolving from 'Adam' to the Vocaloid 'Seeyou'. Unlike typical digital characters in cartoons or games, cyber singers can be idolized through the medium of music, and they are also characterized by forming multiple fandoms. Such repeated attempts and failures could be dismissed as a passing fashion, but the continuing creation of content and the ongoing attempts to take advantage of new media such as Vocaloid show that expectations for a true cyber-born singer remain. Early cyber singers were made only to resemble humans in appearance, but 'Sciart' and 'Seeyou' have been evolving to become more human-like in their capabilities as well. This paper argues that the cyber singers that disappeared in the past were not simply failures: in the course of technological development they have been evolving into artificial life forms of their own, and they were attempts that gradually changed the public's perception of the machine-like image. The direction of this evolution is toward machines that acquire human functions, exchange fun and feelings with humans, and become artificial life forms that resemble humans not only in appearance but also in function. To support this argument, I refer to Bruce Mazlish's study of the coevolution of man and machine, and from this perspective I analyze the development of the eight cyber singers that have appeared since the late 1990s in terms of their planning, the design of their cyber characters, and the voices (vocals) by which they are evaluated as singers. Machines have been coevolving with humans. Cyber singers are perceived ambivalently, yet the human desire to create new artificial creatures, together with the fear they evoke, continues. Therefore, the next cyber organisms are likely to take the same style as 'Seeyou': a cartoon-like form and a synthesized voice may not be the signifier of a real human, but they can be created at the point where the contemporary public's desire and technological development intersect.

Bankruptcy Type Prediction Using A Hybrid Artificial Neural Networks Model (하이브리드 인공신경망 모형을 이용한 부도 유형 예측)

  • Jo, Nam-ok;Kim, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.79-99 / 2015
  • The prediction of bankruptcy has been extensively studied in the accounting and finance field. It can have an important impact on lending decisions and the profitability of financial institutions in terms of risk management. Many researchers have focused on constructing a more robust bankruptcy prediction model. Early studies primarily used statistical techniques such as multiple discriminant analysis (MDA) and logit analysis for bankruptcy prediction. However, many studies have demonstrated that artificial intelligence (AI) approaches, such as artificial neural networks (ANN), decision trees, case-based reasoning (CBR), and support vector machines (SVM), have been outperforming statistical techniques since the 1990s for business classification problems because statistical methods have some rigid assumptions in their application. In previous studies on corporate bankruptcy, many researchers have focused on developing a bankruptcy prediction model using financial ratios. However, there are few studies that suggest the specific types of bankruptcy. Previous bankruptcy prediction models have generally been interested in predicting whether or not firms will become bankrupt. Most of the studies on bankruptcy types have focused on reviewing the previous literature or performing a case study. Thus, this study develops a model using data mining techniques for predicting the specific types of bankruptcy as well as the occurrence of bankruptcy in Korean small- and medium-sized construction firms in terms of profitability, stability, and activity indices. This will help firms take preventive action before bankruptcy occurs. We propose a hybrid approach using two artificial neural networks (ANNs) for the prediction of bankruptcy types. The first is a back-propagation neural network (BPN) model using supervised learning for bankruptcy prediction and the second is a self-organizing map (SOM) model using unsupervised learning to classify bankruptcy data into several types. Based on the constructed model, we predict the bankruptcy of companies by applying the BPN model to a validation set that was not utilized in the development of the model. This allows for identifying the specific types of bankruptcy by using the bankruptcy data predicted by the BPN model. We calculated the average of selected input variables through statistical tests for each cluster to interpret the characteristics of the derived clusters in the SOM model. Each cluster represents a bankruptcy type identified from the data of bankrupt firms, and the input variables (financial ratios) are used to interpret the meaning of each cluster. The experimental results show that each of the five bankruptcy types has different characteristics according to financial ratios. Type 1 (severe bankruptcy) has inferior financial statements except for EBITDA (earnings before interest, taxes, depreciation, and amortization) to sales based on the clustering results. Type 2 (lack of stability) has a low quick ratio, low stockholder's equity to total assets, and high total borrowings to total assets. Type 3 (lack of activity) has a slightly low total asset turnover and fixed asset turnover. Type 4 (lack of profitability) has low retained earnings to total assets and EBITDA to sales, which represent indices of profitability. Type 5 (recoverable bankruptcy) includes firms that have a relatively good financial condition as compared to other bankruptcy types even though they are bankrupt.
Based on the findings, researchers and practitioners engaged in the credit evaluation field can obtain more useful information about the types of corporate bankruptcy. In this paper, we utilized the financial ratios of firms to classify bankruptcy types. It is important to select the input variables that correctly predict bankruptcy and meaningfully classify the type of bankruptcy. In a further study, we will include non-financial factors such as size, industry, and age of the firms. Thus, we can obtain realistic clustering results for bankruptcy types by combining qualitative factors and reflecting the domain knowledge of experts.
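
A schematic sketch of the two-stage hybrid described above is given below, assuming scikit-learn for the back-propagation stage and a deliberately simplified SOM update (no neighbourhood function, so it behaves like online k-means). The layer size, the five-unit map, and all variable names are illustrative assumptions, not the paper's configuration.

    import numpy as np
    from sklearn.neural_network import MLPClassifier   # back-propagation network (BPN)

    def train_bpn(X_train, y_train):
        """Stage 1: supervised bankruptcy prediction (label 1 = bankrupt, 0 = non-bankrupt)."""
        bpn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        bpn.fit(X_train, y_train)
        return bpn

    def train_som(X, n_units=5, epochs=200, lr=0.5, seed=0):
        """Stage 2: a simplified self-organizing map with five units, mirroring the
        five bankruptcy types reported in the paper (assumed setup)."""
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(n_units, X.shape[1]))      # one weight vector per unit
        for t in range(epochs):
            alpha = lr * (1 - t / epochs)               # decaying learning rate
            for x in rng.permutation(X):
                bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
                W[bmu] += alpha * (x - W[bmu])
        return W

    def bankruptcy_types(bpn, W, X_valid):
        """Firms predicted bankrupt by the BPN are assigned to a SOM unit (type)."""
        predicted_bankrupt = X_valid[bpn.predict(X_valid) == 1]
        return [int(np.argmin(((W - x) ** 2).sum(axis=1))) for x in predicted_bankrupt]

In this sketch the financial-ratio matrix X would hold the profitability, stability, and activity indices mentioned in the abstract; interpreting each SOM unit then amounts to averaging those ratios over the firms assigned to it.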

Study of Web Services Interoperability for Multiple Applications (다중 Application을 위한 Web Services 상호 운용성에 관한 연구)

  • 유윤식;송종철;최일선;임산송;정회경
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2004.05b / pp.217-220 / 2004
  • As the use of the web increases rapidly, there is demand for a model that systematically supports interaction between web-based applications and for solutions that can effectively integrate new distributed platforms with existing environments; Web Services appeared as a solution to this demand. These days, many software and hardware companies are adopting Web Services in their products and attempting to build applications by combining components from various Web Services providers. However, for Web Services to work completely, they must be interoperable, which requires standardization work so that services are not tied to the platform, application, service, or programming language of any one vendor. WS-I (Web Services Interoperability Organization) has established Basic Profile 1.0, based on XML, UDDI, WSDL, and SOAP, for Web Services interoperability, and has developed usage scenario profiles for applying Web Services in practice. In this paper, to verify the interoperability of Web Services between two heterogeneous applications, we design and implement a Book Information Web Service in which a Web Services client on the J2SE platform calls a Web Services server on the .NET platform, and we analyze and verify the service against the WS-I Basic Profile.
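
For illustration only, the sketch below shows how a client on one platform could call such a WS-I Basic Profile service through its WSDL. It uses Python with the zeep SOAP library rather than the J2SE client actually built in the paper, and the WSDL URL, operation name, and ISBN are hypothetical placeholders.

    from zeep import Client

    # Hypothetical WSDL location of a Book Information service exposed by a .NET server.
    # A WS-I Basic Profile 1.0 compliant service describes itself with WSDL and speaks
    # SOAP over HTTP, so any conforming client stack can call it.
    WSDL_URL = "http://example.com/BookInfoService?wsdl"   # placeholder, not from the paper

    def lookup_book(isbn):
        client = Client(WSDL_URL)
        # 'GetBookInfo' is an assumed operation name used only for illustration.
        return client.service.GetBookInfo(isbn)

    if __name__ == "__main__":
        print(lookup_book("0-00-000000-0"))                # placeholder ISBN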

The Performance Bottleneck of Subsequence Matching in Time-Series Databases: Observation, Solution, and Performance Evaluation (시계열 데이타베이스에서 서브시퀀스 매칭의 성능 병목 : 관찰, 해결 방안, 성능 평가)

  • 김상욱
    • Journal of KIISE:Databases / v.30 no.4 / pp.381-396 / 2003
  • Subsequence matching is an operation that finds, from time-series databases, subsequences whose changing patterns are similar to a given query sequence. This paper points out the performance bottleneck in subsequence matching and then proposes an effective method that significantly improves the performance of entire subsequence matching by resolving that bottleneck. First, we analyze the disk access and CPU processing times required during the index searching and post-processing steps through preliminary experiments. Based on these results, we show that the post-processing step is the main performance bottleneck in subsequence matching, and we claim that its optimization is a crucial issue overlooked in previous approaches. In order to resolve the performance bottleneck, we propose a simple but quite effective method that processes the post-processing step in the optimal way. By rearranging the order of the candidate subsequences to be compared with the query sequence, our method completely eliminates the redundant disk accesses and CPU processing that occur in the post-processing step. We formally prove that our method is optimal and does not incur any false dismissal. We show the effectiveness of our method by extensive experiments. The results show that our method achieves a significant speed-up of 3.91 to 9.42 times in the post-processing step when using a data set of real-world stock sequences and of 4.97 to 5.61 times when using data sets of a large volume of synthetic sequences. Also, the results show that our method reduces the weight of the post-processing step in entire subsequence matching from about 90% to less than 70%. This implies that our method successfully resolves the performance bottleneck in subsequence matching. As a result, our method provides excellent performance in entire subsequence matching: the experimental results reveal that it is 3.05 to 5.60 times faster when using a data set of real-world stock sequences and 3.68 to 4.21 times faster when using data sets of a large volume of synthetic sequences, compared with the previous approach.
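
A minimal Python sketch of the post-processing idea described above: the candidate subsequences returned by the index search are reordered so that each data page is fetched from disk only once and each distinct candidate is compared with the query only once. The (page_id, offset) layout, the Euclidean distance, and the function names are assumptions for illustration; the paper's exact rearrangement criterion may differ.

    from math import dist                      # Euclidean distance (Python 3.8+)

    def post_process(candidates, read_page, query, epsilon, win):
        """candidates: (page_id, offset) pairs reported by the index search.
        read_page(page_id) -> list of sequence values stored on that page.
        Sorting the candidates by page id means every page is read from disk
        once and every distinct candidate is compared with the query once."""
        results = []
        current_id, page = None, None
        for page_id, offset in sorted(set(candidates)):    # dedupe + group by page
            if page_id != current_id:
                page = read_page(page_id)                  # one disk access per page
                current_id = page_id
            sub = page[offset:offset + win]                # candidate subsequence
            if len(sub) == win and dist(sub, query) <= epsilon:
                results.append((page_id, offset))
        return results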

A Fast Algorithm for Computing Multiplicative Inverses in GF(2^m) using Factorization Formula and Normal Basis (인수분해 공식과 정규기저를 이용한 GF(2^m) 상의 고속 곱셈 역원 연산 알고리즘)

  • 장용희;권용진
    • Journal of KIISE:Computer Systems and Theory / v.30 no.5_6 / pp.324-329 / 2003
  • Public-key cryptosystems such as Diffie-Hellman key distribution and elliptic curve cryptosystems are built on the operations defined over GF(2^m): addition, subtraction, multiplication, and multiplicative inversion. It is important that these operations be computed at high speed in order to implement these cryptosystems efficiently. Among them, multiplicative inversion is the most time-consuming and has therefore been the subject of much investigation. Fermat's theorem says that β^(-1) = β^(2^m - 2), where β^(-1) is the multiplicative inverse of β ∈ GF(2^m). Therefore, to compute the multiplicative inverse of an arbitrary element of GF(2^m), it is most important to reduce the number of multiplications by decomposing 2^m - 2 efficiently. Among the many algorithms relevant to this subject, the algorithm proposed by Itoh and Tsujii [2] reduced the required number of multiplications to O(log m) by using a normal basis. Furthermore, a few papers have presented algorithms improving on Itoh and Tsujii's, but they have some drawbacks, such as complicated decomposition processes [3,5]. In this paper, for the case of 2^m - 2, which is mainly used in practical applications, an efficient algorithm is proposed for computing the multiplicative inverse at high speed by using both the factorization formula x^3 - y^3 = (x - y)(x^2 + xy + y^2) and a normal basis. The number of multiplications of the proposed algorithm is smaller than that of the algorithm proposed by Itoh and Tsujii, and the algorithm also decomposes 2^m - 2 more simply than other proposed algorithms.
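
The exponent decomposition behind this abstract can be written out explicitly. The LaTeX below is an illustration: the first line is Fermat's theorem for GF(2^m), and the second applies the factorization x^3 - y^3 = (x - y)(x^2 + xy + y^2) with x = 2^k and y = 1 for the case where 3 divides m - 1; this concrete split is one possible instance, not necessarily the exact decomposition chain used in the paper.

    \beta^{-1} = \beta^{2^{m}-2} = \left(\beta^{2^{m-1}-1}\right)^{2},
        \qquad 0 \neq \beta \in \mathrm{GF}(2^{m})

    2^{m-1}-1 = \left(2^{k}\right)^{3} - 1^{3}
              = \left(2^{k}-1\right)\left(2^{2k}+2^{k}+1\right),
        \qquad k = \frac{m-1}{3}

Hence β^(2^(m-1)-1) = (β^(2^k-1))^(2^(2k)) · (β^(2^k-1))^(2^k) · β^(2^k-1). In a normal basis, raising to a power of 2 is a free cyclic shift, so this stage costs only two multiplications beyond computing β^(2^k-1), and the final squaring is free as well.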

A Construction of TMO Object Group Model for Distributed Real-Time Services (분산 실시간 서비스를 위한 TMO 객체그룹 모델의 구축)

  • 신창선;김명희;주수종
    • Journal of KIISE:Computer Systems and Theory / v.30 no.5_6 / pp.307-318 / 2003
  • In this paper, we design and construct a TMO object group that provides guaranteed real-time services in distributed object computing environments, and we verify that the model executes correctly for distributed real-time services. The TMO object group we suggest is based on TINA's object group concept. This model consists of TMO objects having real-time properties and of components that support the object management service and the real-time scheduling service within the TMO object group. TMO objects can be duplicated or non-duplicated across distributed systems. Our model can execute guaranteed distributed real-time services on COTS middleware without requiring a special ORB or operating system. To achieve the goals of our model, we defined the concept of the TMO object and the structure of the TMO object group, and we designed and implemented the functions and interactions of the components in the object group. The TMO object group includes the Dynamic Binder object and the Scheduler object, which support the object management service and the real-time scheduling service, respectively. The Dynamic Binder object supports the dynamic binding service that selects the appropriate one out of the duplicated TMO objects for a client's request. The Scheduler object supports the real-time scheduling service that determines the priority of tasks executed by an arbitrary TMO object for clients' service requests. In order to verify the execution of our model, we implemented the Dynamic Binder object and the Scheduler object by adopting the binding priority algorithm for the dynamic binding service and the EDF algorithm for the real-time scheduling service, extending existing well-known algorithms. Finally, from the numerical analysis results, we verified that our TMO object group model can support the dynamic binding service for duplicated or non-duplicated TMO objects, as well as the real-time scheduling service for an arbitrary TMO object requested by clients.
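
A minimal Python sketch of the two supporting services named above, under assumptions made here for illustration: the Dynamic Binder picks the replica with the lowest current load as its "binding priority", and the Scheduler orders requests by earliest deadline first. The paper's actual binding-priority algorithm and EDF variant are not reproduced.

    import heapq
    import itertools

    class DynamicBinder:
        """Selects one replica out of the duplicated TMO objects for a client request.
        Here the binding priority is assumed to be the current load (lower is better)."""
        def __init__(self, replicas):
            self.replicas = replicas                # {replica_id: current_load}

        def bind(self):
            return min(self.replicas, key=self.replicas.get)

    class EDFScheduler:
        """Orders service requests for a TMO object by earliest deadline first."""
        def __init__(self):
            self._queue = []
            self._counter = itertools.count()       # tie-breaker for equal deadlines

        def submit(self, request, deadline):
            heapq.heappush(self._queue, (deadline, next(self._counter), request))

        def next_request(self):
            return heapq.heappop(self._queue)[2] if self._queue else None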

A Scalable Multipoint-to-Multipoint Routing Protocol in Ad-Hoc Networks (애드-혹 네트워크에서의 확장성 있는 다중점 대 다중점 라우팅 프로토콜)

  • 강현정;이미정
    • Journal of KIISE:Information Networking / v.30 no.3 / pp.329-342 / 2003
  • Most of the existing multicast routing protocols for ad-hoc networks do not take into account the efficiency of the protocol when there are a large number of sources in the multicast group, resulting in either large overhead or poor data delivery ratio when the number of sources is large. In this paper, we propose a multicast routing protocol for ad-hoc networks that particularly considers the scalability of the protocol in terms of the number of sources in the multicast group. The proposed protocol designates a set of sources as the core sources. Each core source is the root of a tree that reaches all the destinations of the multicast group. The union of these trees constitutes the data delivery mesh, and each of the non-core sources finds the nearest core source in order to delegate its data delivery. For the efficient operation of the proposed protocol, it is important to have an appropriate number of core sources: having too many core sources incurs excessive control and data packet overhead, whereas having too few results in a vulnerable and overloaded data delivery mesh. The data delivery mesh is optimally reconfigured through periodic control message flooding from the core sources, whereas the connectivity of the mesh is maintained by a persistent local mesh recovery mechanism. The simulation results show that the proposed protocol achieves efficient multicast communication with a high data delivery ratio and low communication overhead compared with the other existing multicast routing protocols when there are multiple sources in the multicast group.
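
A minimal Python sketch of one step described above, namely how a non-core source could locate its nearest core source to delegate its data delivery. The hop-count metric and the breadth-first search used here are assumptions for illustration, not necessarily the discovery mechanism of the proposed protocol.

    from collections import deque

    def nearest_core(adj, source, core_sources):
        """adj: {node: iterable of neighbour nodes} describing the ad-hoc topology.
        Returns the core source reachable from 'source' in the fewest hops."""
        cores = set(core_sources)
        if source in cores:
            return source
        visited, queue = {source}, deque([source])
        while queue:
            node = queue.popleft()
            if node in cores:
                return node
            for nb in adj.get(node, ()):
                if nb not in visited:
                    visited.add(nb)
                    queue.append(nb)
        return None                             # no core source reachable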

Structural Behavior of Buried Flexible Conduits in Coastal Roads under Live Load (활하중이 작용하는 해안도로 하부 연성지중구조물의 거동 분석)

  • Cho, Sung-Min;Chang, Yong-Chai
    • Journal of Navigation and Port Research / v.26 no.3 / pp.323-328 / 2002
  • Soil-steel structures have been used for underpasses and drainage systems in road embankments. This type of structure sustains external loads through the interaction between the steel wall and the engineered backfill materials. Buried flexible conduits made of corrugated steel plates for a coastal road were tested under vehicle loading to investigate the effects of live load. The test conduit was a circular structure with a diameter of 6.25 m. Live-load tests were conducted on two sections, in one of which an attempt was made to reinforce the soil cover with two layers of geogrid. Hoop fiber strains of the corrugated plate, normal earth pressures exerted outside the structure, and deformations of the structure were measured during the tests. This paper describes the measured static and dynamic load responses of the structure. Wall thrust due to vehicle loads increased mainly at the crown and shoulder of the conduit, whereas the additional bending moment due to vehicle loads was negligible. The effectiveness of the geogrid-reinforced soil cover in reducing hoop thrust is also discussed based on the measurements in the two sections of the structure. The maximum thrusts at the section with geogrid-reinforced soil cover were 85-92% of those with unreinforced soil cover in the static load tests of the circular structure, which confirms the beneficial effect of soil cover reinforcement on reducing the hoop thrust. However, it was revealed that the two layers of geogrid had no effect on reducing the overburden pressure at the crown level of the structure. The obtained values of DLA decreased approximately in proportion to the increase in soil cover from 0.9 m to 1.5 m. These values are about 1.2-1.4 times higher than those specified in the CHBDC.

Development of a 14"×8.5" Active Matrix Flat-Panel Digital X-ray Detector System and Its Imaging Performance (평판 디지털 X-ray 검출기의 개발과 성능 평가에 관한 연구)

  • Park, Ji-Koon;Choi, Jang-Yong;Kang, Sang-Sik;Lee, Dong-Gil;Seok, Dae-Woo;Nam, Sang Hee
    • Journal of radiological science and technology / v.26 no.4 / pp.39-46 / 2003
  • Digital radiographic systems based on solid-state detectors, commonly referred to as flat-panel detectors, are gaining popularity in clinical practice, and large-area flat-panel solid-state detectors are being investigated for digital radiography. The purpose of this work was to evaluate an active matrix flat-panel digital x-ray detector in terms of its modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). In this paper, the development and evaluation of a selenium-based flat-panel digital x-ray detector are described. The prototype detector has a pixel pitch of 139 μm and a total active imaging area of 14 × 8.5 inch², giving a total of 3.9 million pixels. The detector includes an x-ray imaging layer of amorphous selenium as a photoconductor, evaporated under vacuum onto a TFT flat panel, which generates signals in proportion to the incident x-rays. The film thickness was about 500 μm. To evaluate the imaging performance of the digital radiography (DR) system developed by our group, the sensitivity, linearity, MTF, NPS, and DQE of the detector were measured. The measured sensitivity was 4.16 × 10^6 ehp/pixel·mR at a bias field of 10 V/μm with a beam condition of 41.9 keV. The measured MTF at 2.5 lp/mm was 52%, and the DQE at 1.5 lp/mm was 75%. Excellent linearity was also observed, with a coefficient of determination (r²) of 0.9693.
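
For reference, the conventional frequency-dependent relation used to compute DQE from the measured quantities is shown below in LaTeX; this is the standard formulation, not a formula quoted from this paper.

    \mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
                   \;=\; \frac{\bar{S}^{\,2}\,\mathrm{MTF}^{2}(f)}{\bar{q}\;\mathrm{NPS}(f)}

Here S̄ is the mean large-area signal, q̄ the incident x-ray photon fluence (photons per unit area), MTF(f) the presampled modulation transfer function, and NPS(f) the noise power spectrum of the output image.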
