• Title/Summary/Keyword: output-only information

Search Result 739

Enhanced Spatial Covariance Matrix Estimation for Asynchronous Inter-Cell Interference Mitigation in MIMO-OFDMA System (3GPP LTE MIMO-OFDMA 시스템의 인접 셀 간섭 완화를 위한 개선된 Spatial Covariance Matrix 추정 기법)

  • Moon, Jong-Gun;Jang, Jun-Hee;Han, Jung-Su;Kim, Sung-Soo;Kim, Yong-Serk;Choi, Hyung-Jin
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.5C / pp.527-539 / 2009
  • In this paper, we propose an asynchronous ICI (Inter-Cell Interference) mitigation technique for the 3GPP LTE MIMO-OFDMA downlink receiver. Symbol timing misalignment may increase, relative to a synchronous network, as a result of BS (Base Station) timing differences. Symbol synchronization errors that exceed the guard interval or cyclic prefix duration can cause MAI (Multiple Access Interference) on other carriers. In particular, at the cell boundary this MAI becomes a critical factor, leading to degraded channel throughput and severe asynchronous ICI. Hence, many researchers have investigated interference mitigation methods in the presence of asynchronous ICI, and knowledge of the SCM (Spatial Covariance Matrix) of the asynchronous ICI plus background noise is an important issue. Generally, it is assumed that the SCM is estimated using training symbols. However, it is difficult to measure the interference statistics over a long period, and training symbols are not appropriate for a MIMO-OFDMA system such as LTE. Therefore, a noise reduction method is required to improve the estimation accuracy. Although the conventional time-domain low-pass type weighting method can be effective for noise reduction, it causes significant estimation error due to spectral leakage in a practical OFDM system. Therefore, we propose a time-domain sinc-type weighting method which not only reduces the noise effectively while minimizing the estimation error caused by spectral leakage, but can also be implemented easily as a frequency-domain moving average filter. Computer simulation shows that the proposed method can provide up to 3 dB of SIR gain compared with the conventional method.
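
The abstract does not give the estimator in closed form; the sketch below, under the assumption that per-subcarrier residual vectors (received signal minus the desired-signal contribution) are available, shows how an interference-plus-noise SCM could be formed and then smoothed with a frequency-domain moving average, which corresponds to a sinc-type weighting in the time domain. Function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def estimate_scm(residuals, window=9):
    """Per-subcarrier interference-plus-noise spatial covariance matrices.

    residuals : complex array of shape (K, Nr) -- residual vector on each of
                K subcarriers at Nr receive antennas (assumed available).
    window    : odd length of the frequency-domain moving-average filter;
                averaging neighbouring subcarriers is the frequency-domain
                dual of a time-domain sinc-type weighting.
    Returns an array of shape (K, Nr, Nr), one smoothed SCM per subcarrier.
    """
    K, _ = residuals.shape
    # single-snapshot (noisy) SCM on every subcarrier: r_k r_k^H
    inst = np.einsum('ki,kj->kij', residuals, residuals.conj())
    half = window // 2
    smoothed = np.empty_like(inst)
    for k in range(K):
        lo, hi = max(0, k - half), min(K, k + half + 1)
        smoothed[k] = inst[lo:hi].mean(axis=0)   # moving average over frequency
    return smoothed
```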

High Resolution Gyeonggi-do Agrometeorology Information Analysis System based on the Observational Data using Local Analysis and Prediction System (LAPS) (LAPS와 관측자료를 이용한 고해상도 경기도 농업기상정보 분석시스템)

  • Chun, Ji-Min;Kim, Kyu-Rang;Lee, Seon-Yong;Kang, Wee-Soo;Park, Jong-Sun;Yi, Chae-Yon;Choi, Young-Jean;Park, Eun-Woo;Hong, Sun-Sung
    • Korean Journal of Agricultural and Forest Meteorology / v.14 no.2 / pp.53-62 / 2012
  • Demand for high-resolution weather data is growing in the agriculture and forestry fields. The Local Analysis and Prediction System (LAPS) can be used to analyze local weather at high spatial and temporal resolution, utilizing data from various sources including numerical weather prediction models, wind or temperature profilers, Automated Weather Station (AWS) networks, radars, and satellites. LAPS was set up to analyze weather elements such as air temperature, relative humidity, wind speed, and wind direction every hour at a spatial resolution of 100 m × 100 m for Gyeonggi-do on a near real-time basis. The AWS input was expanded by adding agricultural field AWS data (33 stations) to the KMA data. The analysis periods were 1 to 31 August 2009 and 15 to 21 February 2010. The comparison of the LAPS output showed smaller errors when the agricultural AWS observations were used together with the KMA data as input than when only the agricultural or only the KMA AWS data were used. The accuracy of the current system needs improvement through further optimization of the system's analysis options. However, the system is highly applicable to various fields in agriculture and forestry because it can provide site-specific data at reasonable time intervals.
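
As a rough illustration of the kind of verification described above (the numbers below are made up, not the study's data), bias and RMSE of the analyzed values at validation stations can be compared across input configurations as follows:

```python
import numpy as np

def verification_scores(analysis, observation):
    """Bias and RMSE of analysed values sampled at validation stations."""
    diff = np.asarray(analysis, dtype=float) - np.asarray(observation, dtype=float)
    return diff.mean(), np.sqrt((diff ** 2).mean())

# Placeholder observations and analyses at a few validation stations (degC);
# in the study these would be hourly LAPS output and withheld AWS data.
observed = np.array([27.1, 26.4, 28.0, 25.7])
runs = {
    "KMA AWS only":           np.array([27.9, 25.6, 28.9, 26.6]),
    "agricultural AWS only":  np.array([26.3, 27.2, 27.1, 24.9]),
    "KMA + agricultural AWS": np.array([27.3, 26.2, 28.2, 25.9]),
}
for name, analysed in runs.items():
    bias, rmse = verification_scores(analysed, observed)
    print(f"{name:>24s}: bias={bias:+.2f} degC, RMSE={rmse:.2f} degC")
```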

Service Curve Allocation Schemes for High Network Utilization with a Constant Deadline Computation Cost (상수의 데드라인 계산 비용으로 높은 네트웍 유용도를 얻는 서비스 곡선 할당 방식)

  • 편기현;송준화;이흥규
    • Journal of KIISE: Information Networking / v.30 no.4 / pp.535-544 / 2003
  • Integrated services networks should guarantee end-to-end delay bounds for real-time applications in order to provide high-quality services. A real-time scheduler is installed on every output port to provide such guaranteed service. However, scheduling algorithms studied so far have problems with either network utilization or scalability. Here, network utilization indicates how many real-time sessions can be admitted. In this paper, we propose service curve allocation schemes that achieve both high network utilization and scalability in a service curve algorithm. In a service curve algorithm, the adopted service curve allocation scheme determines both network utilization and scalability. Contrary to the common belief, we prove that only a part of a service curve is used to compute deadlines, not the entire curve. From this fact, we propose service curve allocation schemes that allow deadlines to be computed in constant time. We show through a simulation study that our proposed schemes achieve better network utilization than Generalized Processor Sharing (GPS) algorithms, including the multirate algorithm. To our knowledge, the service curve algorithm adopting our schemes achieves the highest network utilization among existing scheduling algorithms with the same scalability.
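
The abstract does not spell out the paper's specific constant-time scheme; as a generic illustration of service-curve based deadline computation, the sketch below computes the deadline of a head-of-line packet from a two-segment service curve in O(1), using only the segment on which the target service amount falls. All names and the curve shape are assumptions.

```python
def deadline(two_piece_curve, start_time, served_bytes, packet_len):
    """Constant-time deadline for the head-of-line packet of a session.

    two_piece_curve : (m1, x_break, m2) -- a two-segment service curve with
        slope m1 (bytes/s) up to x_break seconds after the curve origin,
        and slope m2 afterwards.
    start_time   : time (s) at which the current curve origin was anchored.
    served_bytes : service the session has already received in this period.
    packet_len   : length of the head-of-line packet (bytes).

    The deadline is the time at which the service curve, anchored at
    start_time, reaches served_bytes + packet_len; only the segment on which
    the target falls is needed, so the computation is O(1).
    """
    m1, x_break, m2 = two_piece_curve
    target = served_bytes + packet_len
    y_break = m1 * x_break              # bytes promised by the end of segment 1
    if target <= y_break:               # target lies on the first segment
        return start_time + target / m1
    # target lies on the second segment
    return start_time + x_break + (target - y_break) / m2
```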

MILP-Aided Division Property and Integral Attack on Lightweight Block Cipher PIPO (경량 블록 암호 PIPO의 MILP-Aided 디비전 프로퍼티 분석 및 인테그랄 공격)

  • Kim, Jeseong;Kim, Seonggyeom;Kim, Sunyeop;Hong, Deukjo;Sung, Jaechul;Hong, Seokhie
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.5 / pp.875-888 / 2021
  • In this paper, we search for integral distinguishers of the lightweight block cipher PIPO and propose a key recovery attack on 8-round PIPO-64/128 using the obtained 6-round distinguishers. The lightweight block cipher PIPO, proposed at ICISC 2020, is designed to allow efficient higher-order masking implementations for side-channel attack resistance. In the proposal, various attacks such as differential and linear cryptanalysis were applied to show sufficient security strength. However, the designers left the integral attack as future work and only argued that PIPO is unlikely to have integral distinguishers longer than 5 rounds, without further analysis based on Division Property. In this paper, we search for integral distinguishers of PIPO using a MILP-aided Division Property search method. Our search shows that 6-round integral distinguishers exist, contrary to the designers' claim. We also consider linear operations on the input and output of the distinguisher, respectively, and obtain a total of 136 6-round integral distinguishers. Finally, we present an 8-round PIPO-64/128 key recovery attack with time complexity 2^124.5849 and memory complexity 2^93, using four of the obtained 6-round integral distinguishers.
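
The MILP model for the Division Property search cannot be reconstructed from the abstract alone; as a hedged sketch of the underlying notion, a candidate integral distinguisher can be checked empirically by encrypting a structure of chosen plaintexts in which some bit positions take all values, XOR-summing the ciphertexts, and reading off the zero-sum (balanced) output bits. The `encrypt` callable below is a placeholder, not round-reduced PIPO.

```python
import os

def integral_check(encrypt, key, active_bits, n_bits=64, trials=1):
    """Empirically test a candidate integral distinguisher.

    encrypt(block, key) -> int : round-reduced cipher under test (a placeholder
        here; the study targets 6-round PIPO-64/128, not implemented below).
    active_bits : bit positions that take all values within a structure; the
        remaining bits are held constant per structure.
    Returns a mask of output bit positions whose XOR-sum is zero in every
    trial -- these are the 'balanced' bits of the distinguisher.
    """
    balanced_mask = (1 << n_bits) - 1
    for _ in range(trials):
        const = int.from_bytes(os.urandom(n_bits // 8), "big")
        xor_sum = 0
        for v in range(1 << len(active_bits)):
            block = const
            for i, pos in enumerate(active_bits):
                bit = (v >> i) & 1
                block = (block & ~(1 << pos)) | (bit << pos)
            xor_sum ^= encrypt(block, key)
        balanced_mask &= ~xor_sum & ((1 << n_bits) - 1)
    return balanced_mask

# Toy usage with a stand-in permutation (NOT PIPO), purely to show the interface:
toy = lambda block, key: ((block * 0x9E3779B97F4A7C15) ^ key) & ((1 << 64) - 1)
print(hex(integral_check(toy, key=0x0123456789ABCDEF, active_bits=range(8))))
```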

A Study of Improvements in the Standards of Cost Estimate for the New Excellent Technology in Construction (건설 신기술의 원가산정기준 개선방안에 대한 연구)

  • Lee, Ju-hyun;Tae, Yong-Ho;Baek, Seung-Ho;Kim, Kyoungmin
    • Korean Journal of Construction Engineering and Management / v.23 no.5 / pp.65-76 / 2022
  • The New Excellent Technology (NET) designation system, introduced in 1989 to promote the development of domestic construction technology and enhance national competitiveness, reviews the statement of construction cost of new technologies. Cost reduction effects in design, construction, and maintenance, together with the reduction of construction duration, are evaluated as criteria of economic feasibility. However, in this evaluation process, differences of opinion about unique technologies frequently arise between the institution managing construction cost estimating standards and the new technology developer. In addition, it is difficult to objectively compare the construction duration with that of existing similar technologies, because the current cost estimating standards for new technologies only present the required amount per unit quantity and provide no information on productivity. In this study, the current review procedure for cost estimating criteria, the evaluation criteria, and the method of establishing cost estimating standards used in screening new construction technologies for designation were analyzed and compared with overseas cost estimating standards, and measures to improve the current cost estimating standards for new construction technologies were suggested. With the improved cost estimating standards of this study, it is expected that cost information on new technologies will be provided to clients in more detail than at present, and that the availability and applicability of new construction technologies will be improved by simplifying the construction cost calculation process.

Development of a Prediction Model for Fall Patients in the Main Diagnostic S Code Using Artificial Intelligence (인공지능을 이용한 주진단 S코드의 낙상환자 예측모델 개발)

  • Ye-Ji Park;Eun-Mee Choi;So-Hyeon Bang;Jin-Hyoung Jeong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.6 / pp.526-532 / 2023
  • Falls are fatal accidents that occur more than 420,000 times a year worldwide. Therefore, to study fall patients, we examined the association between extrinsic injury codes and the principal diagnosis S-codes of fall patients, and developed a model that predicts the extrinsic injury code from the principal diagnosis S-code data of fall patients. In this study, we received two years of data (2020 and 2021) from Institution A, located in Gangneung City, Gangwon Special Self-Governing Province, extracted only the records with fall-related extrinsic injury codes W00 to W19, and built the prediction model using the fall codes W01, W10, W13, and W18, which had enough principal diagnosis S-codes to develop a model. 80% of the data were used for training and 20% for testing. The model was developed as an MLP (Multi-Layer Perceptron) with 6 variables (gender, age, principal diagnosis S-code, surgery, hospitalization, and alcohol consumption) in the input layer, 2 hidden layers with 64 nodes each, and an output layer with 4 nodes for the W01, W10, W13, and W18 extrinsic injury codes using the softmax activation function. The accuracy was 31.2% after the first training iteration but reached 87.5% by the 30th, confirming the association between the fall extrinsic injury code and the principal diagnosis S-code of fall patients.
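
A minimal Keras sketch of the described network (6 input features, two hidden layers of 64 nodes, a 4-class softmax output); the hidden-layer activations, optimizer, and data preprocessing are assumptions not stated in the abstract.

```python
import tensorflow as tf

# Sketch of the abstract's MLP: 6 inputs, 2x64 hidden nodes, 4-class softmax
# (classes W01, W10, W13, W18). ReLU and Adam are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6,)),          # gender, age, S-code, surgery,
                                                # hospitalization, alcohol use
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=30)  # 80/20 train-test split as in the study
```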

Analyzing the Impact of Multivariate Inputs on Deep Learning-Based Reservoir Level Prediction and Approaches for Mid to Long-Term Forecasting (다변량 입력이 딥러닝 기반 저수율 예측에 미치는 영향 분석과 중장기 예측 방안)

  • Hyeseung Park;Jongwook Yoon;Hojun Lee;Hyunho Yang
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.199-207 / 2024
  • Local reservoirs are crucial sources of agricultural water supply, necessitating stable water level management to prepare for extreme climate conditions such as droughts. Water level prediction is significantly influenced by local climate characteristics, such as localized rainfall, as well as seasonal factors including cropping times, so understanding the correlation between input and output data is as essential as selecting an appropriate prediction model. In this study, extensive multivariate data from over 400 reservoirs in Jeollabuk-do from 1991 to 2022 were utilized to train and validate a water level prediction model that comprehensively reflects the complex hydrological and climatological environmental factors of each reservoir, and to analyze the impact of each input feature on prediction performance. Instead of pursuing performance gains through novel neural network structures, the study adopts a basic feedforward neural network composed of fully connected layers, batch normalization, dropout, and activation functions, focusing on the correlation between the multivariate input data and prediction performance. Additionally, most existing studies only present short-term, day-ahead prediction performance, which is not suitable for practical environments that require medium to long-term predictions such as 10 days or a month. Therefore, this study measured the water level prediction performance up to one month ahead through a recursive method that uses daily prediction values as the next input. The experiment identified performance changes according to the prediction period and analyzed the impact of each input feature on the overall performance through an ablation study.
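
The recursive multi-step scheme described in the last sentences can be sketched as follows; the model interface, lag window, and availability of future exogenous inputs are assumptions, not details from the paper.

```python
import numpy as np

def recursive_forecast(model, lag_window, exogenous, horizon=30):
    """Roll a one-day-ahead model forward to a multi-day horizon.

    model      : fitted regressor with .predict(X) returning the next-day level
    lag_window : most recent observed levels used as lag features (list-like)
    exogenous  : array of shape (horizon, n_exog) of future covariates
                 (e.g., rainfall); assumed available for the whole horizon
    Each prediction is appended to the lag window and fed back as input for
    the next step, as in the recursive scheme described in the abstract.
    """
    window = list(lag_window)
    predictions = []
    for t in range(horizon):
        features = np.concatenate([window, exogenous[t]])[None, :]
        next_level = float(model.predict(features)[0])
        predictions.append(next_level)
        window.append(next_level)   # feed the prediction back in
        window.pop(0)               # keep the lag window a fixed length
    return np.array(predictions)
```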

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.139-155 / 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Such venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology. For this reason, many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source regarding such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses by presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating a high level of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas to classify which companies are more efficient venture companies: i) making a DEA-based multi-class rating for sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies. It has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory. Thus far, the method has shown good performance, especially in generalization capacity on classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum margin hyperplane, which gives the maximum separation between classes. According to this method, support vectors are the points closest to the maximum margin hyperplane. If the data cannot be separated linearly, a kernel function can be used.
In the case of nonlinear class boundaries, the inputs can be transformed into a high-dimensional feature space: the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the problem of estimating credit ratings. In this study we employed SVM to develop a data mining-based efficiency prediction model. We used the Gaussian radial basis function as the kernel function of the SVM. For multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information of 154 companies listed on the KOSDAQ market in the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the hit ratio of the Weston and Watkins method is the best on the test data set. In multi-class classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error margin when it is difficult to determine the exact class in the actual market. So we also present accuracy results within one-class errors, and the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, whatever the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class classification.
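
A minimal scikit-learn sketch of one of the three compared approaches, the one-against-one multi-class SVM with a Gaussian RBF kernel; the feature matrix and class labels below are placeholders, and the Weston-Watkins and Crammer-Singer all-together formulations used in the paper are not covered by this library call.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder financial ratios and DEA-derived efficiency grades for 154 firms;
# the real study uses 2005 KIS data for KOSDAQ-listed venture companies.
rng = np.random.default_rng(0)
X = rng.normal(size=(154, 8))
y = rng.integers(0, 4, size=154)        # e.g., four DEA efficiency grades

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

# One-against-one multi-class SVM with a Gaussian RBF kernel: SVC trains one
# binary classifier per pair of classes and votes among them.
clf = SVC(kernel="rbf", C=1.0, gamma="scale", decision_function_shape="ovo")
clf.fit(scaler.transform(X_tr), y_tr)
print("hit ratio:", clf.score(scaler.transform(X_te), y_te))
```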

Dynamic Limit and Predatory Pricing Under Uncertainty (불확실성하(不確實性下)의 동태적(動態的) 진입제한(進入制限) 및 약탈가격(掠奪價格) 책정(策定))

  • Yoo, Yoon-ha
    • KDI Journal of Economic Policy / v.13 no.1 / pp.151-166 / 1991
  • In this paper, a simple game-theoretic entry deterrence model is developed that integrates both limit pricing and predatory pricing. While there have been extensive studies which have dealt with predation and limit pricing separately, no study so far has analyzed these closely related practices in a unified framework. Treating each practice as if it were an independent phenomenon is, of course, an analytical necessity to abstract from complex realities. However, welfare analysis based on such a model may give misleading policy implications. By analyzing limit and predatory pricing within a single framework, this paper attempts to shed some light on the effects of interactions between these two frequently cited tactics of entry deterrence. Another distinctive feature of the paper is that limit and predatory pricing emerge, in equilibrium, as rational, profit-maximizing strategies in the model. Until recently, the only conclusion from formal analyses of predatory pricing was that predation is unlikely to take place if every economic agent is assumed to be rational. This conclusion rests upon the argument that predation is costly; that is, it inflicts more losses upon the predator than upon the rival producer, and, therefore, is unlikely to succeed in driving out the rival, who understands that the price cutting, if it ever takes place, must be temporary. Recently several attempts have been made to overcome this modelling difficulty by Kreps and Wilson, Milgrom and Roberts, Benoit, Fudenberg and Tirole, and Roberts. With the exception of Roberts, however, these studies, though successful in preserving the rationality of players, still share one serious weakness in that they resort to ad hoc, external constraints in order to generate profit-maximizing predation. The present paper uses a highly stylized model of Cournot duopoly and derives the equilibrium predatory strategy without invoking external constraints except the assumption of asymmetrically distributed information. The underlying intuition behind the model can be summarized as follows. Imagine a firm that is considering entry into a monopolist's market but is uncertain about the incumbent firm's cost structure. If the monopolist has low costs, the rival would rather not enter because it would be difficult to compete with an efficient, low-cost firm. If the monopolist has high costs, however, the rival will definitely enter the market because it can make positive profits. In this situation, if the incumbent firm unwittingly produces its monopoly output, the entrant can infer the nature of the monopolist's cost by observing the monopolist's price. Knowing this, the high-cost monopolist increases its output level up to what would have been produced by a low-cost firm in an effort to conceal its cost condition. This constitutes limit pricing. The same logic applies when there is a rival competitor in the market. Producing a high-cost duopoly output is self-revealing and thus to be avoided. Therefore, the firm chooses to produce the low-cost duopoly output, consequently inflicting losses on the entrant or rival producer, thus acting in a predatory manner. The policy implications of the analysis are rather mixed. Contrary to the widely accepted hypothesis that predation is, at best, a negative-sum game, and thus a strategy that is unlikely to be played from the outset, this paper concludes that predation can be a real occurrence by showing that it can arise as an effective profit-maximizing strategy.
This conclusion alone may imply that the government can play a role in increasing consumer welfare, say, by banning predation or limit pricing. However, the problem is that it is rather difficult to ascribe any welfare losses to these kinds of entry-deterring practices. This difficulty arises from the fact that if the same practices had been adopted by a low-cost firm, they could not be called entry-deterring. Moreover, the high-cost incumbent in the model is doing exactly what the low-cost firm would have done to keep the market to itself. All in all, this paper suggests that a government injunction against limit and predatory pricing should be applied with great care, evaluating each case on its own merits. Hasty generalization may work to the detriment, rather than the enhancement, of consumer welfare.
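
The abstract states the intuition only verbally; as an illustrative sketch (the notation is mine, not the paper's), the high-cost incumbent's mimicking decision can be written as a simple payoff comparison. Let $\pi_H(q)$ denote the high-cost incumbent's current-period profit at output $q$, let $q_H^{*}$ and $q_L^{*}$ be the outputs a high-cost and a low-cost firm would choose if costs were common knowledge, let $\delta$ be a discount factor, and let $V^{\mathrm{det}}$ and $V^{\mathrm{acc}}$ be the continuation values when entry (or aggressive rivalry) is deterred versus accommodated. Mimicking the low-cost output, as limit pricing against a potential entrant or as predatory output expansion against an existing rival, is rational whenever

$$\pi_H(q_L^{*}) + \delta V^{\mathrm{det}} \;\ge\; \pi_H(q_H^{*}) + \delta V^{\mathrm{acc}},$$

that is, when the discounted gain from concealing the cost type outweighs the current-period loss $\pi_H(q_H^{*}) - \pi_H(q_L^{*})$ from over-producing.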


Ontology-Based Process-Oriented Knowledge Map Enabling Referential Navigation between Knowledge (지식 간 상호참조적 네비게이션이 가능한 온톨로지 기반 프로세스 중심 지식지도)

  • Yoo, Kee-Dong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.61-83 / 2012
  • A knowledge map describes the network of related knowledge in the form of a diagram, and therefore underpins the structure of knowledge categorization and archiving by defining the relationships for referential navigation between knowledge items. Referential navigation between knowledge means the cross-referencing relationship exhibited when a piece of knowledge is utilized by a user. To understand the contents of a piece of knowledge, a user usually requires additional information or knowledge related to it by cause and effect. This relation can be expanded as the effective connections between knowledge increase, and finally forms a network of knowledge. A network display of knowledge that uses nodes and links to arrange and represent the relationships between concepts can provide a richer knowledge structure than a hierarchical display. Moreover, it can help a user infer through the links shown on the network. For this reason, building a knowledge map based on ontology technology has been emphasized as a way to describe knowledge and its relationships formally as well as objectively. As the necessity of building a knowledge map on the structure of an ontology has been emphasized, quite a few studies have been proposed to fulfill this need. However, most of those studies applying the ontology to build knowledge maps focused only on formally expressing knowledge and its relationships with other knowledge to promote the possibility of knowledge reuse. Although many types of knowledge maps based on the structure of the ontology were proposed, no research has tried to design and implement a referential navigation-enabled knowledge map. This paper addresses a methodology to build an ontology-based knowledge map enabling referential navigation between knowledge. The ontology-based knowledge map resulting from the proposed methodology can not only express the referential navigation between knowledge but also infer additional relationships among knowledge based on the referential relationships. The most highlighted benefits delivered by applying ontology technology to the knowledge map include: formal expression of knowledge and its relationships with others, automatic identification of the knowledge network based on self-inference over the referential relationships, and automatic expansion of the knowledge base designed to categorize and store knowledge according to the network between knowledge. To enable referential navigation between the knowledge included in the knowledge map, and therefore to form the knowledge map in the format of a network, the ontology must describe knowledge in relation to processes and tasks. A process is composed of component tasks, while a task is activated once the required knowledge is input. Since the cause-and-effect relation between knowledge is inherently determined by the sequence of tasks, the referential relationship between knowledge can be implemented indirectly if each piece of knowledge is modeled as an input or output of a task. To describe the knowledge with respect to related processes and tasks, Protege-OWL, an editor that enables users to build ontologies for the Semantic Web, is used. An OWL ontology-based knowledge map includes descriptions of classes (process, task, and knowledge), properties (relationships between process and task, and between task and knowledge), and their instances.
Given such an ontology, the OWL formal semantics specifies how to derive its logical consequences, i.e. facts not literally present in the ontology but entailed by the semantics. Therefore a knowledge network can be automatically formulated based on the defined relationships, and referential navigation between knowledge is enabled. To verify the validity of the proposed concepts, two real business process-oriented knowledge maps are exemplified: the knowledge maps of the 'Business Trip Application' and 'Purchase Management' processes. By applying the 'DL-Query' plug-in module provided with Protege-OWL, the performance of the implemented ontology-based knowledge map was examined. Two kinds of queries were tested: one to check whether the knowledge is networked with respect to the referential relations, and one to check whether the ontology-based knowledge network can infer further facts that are not literally described. The test results show that not only was the referential navigation between knowledge correctly realized, but the additional inference was also accurately performed.
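
The paper builds its ontology in Protege-OWL; as a minimal sketch of the same process-task-knowledge modeling idea in Python with rdflib (not the environment used in the paper), the snippet below encodes a toy process, links knowledge as task inputs and outputs, and runs a query that realizes referential navigation (knowledge produced by one task and consumed by another). All names and the namespace are illustrative.

```python
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/kmap#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Classes: Process, Task, Knowledge; properties: hasTask, hasInput, hasOutput
for cls in (EX.Process, EX.Task, EX.Knowledge):
    g.add((cls, RDF.type, RDFS.Class))
for prop in (EX.hasTask, EX.hasInput, EX.hasOutput):
    g.add((prop, RDF.type, RDF.Property))

# A toy 'Business Trip Application' process with two tasks chained by knowledge
g.add((EX.BusinessTripApplication, RDF.type, EX.Process))
for task in (EX.FillRequestForm, EX.ApproveRequest):
    g.add((task, RDF.type, EX.Task))
    g.add((EX.BusinessTripApplication, EX.hasTask, task))
g.add((EX.TripRequestDoc, RDF.type, EX.Knowledge))
g.add((EX.FillRequestForm, EX.hasOutput, EX.TripRequestDoc))
g.add((EX.ApproveRequest, EX.hasInput, EX.TripRequestDoc))

# Referential navigation: knowledge output by one task and input to another
q = """
SELECT ?producer ?knowledge ?consumer WHERE {
    ?producer ex:hasOutput ?knowledge .
    ?consumer ex:hasInput  ?knowledge .
}"""
for row in g.query(q, initNs={"ex": EX}):
    print(*row)
```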