
The Concentration of Economic Power in Korea (경제력집중(經濟力集中) : 기본시각(基本視角)과 정책방향(政策方向))

  • Lee, Kyu-uck
    • KDI Journal of Economic Policy
    • /
    • v.12 no.1
    • /
    • pp.31-68
    • /
    • 1990
  • The concentration of economic power takes the form of one or a few firms controlling a substantial portion of the economic resources and means in a certain economic area. At the same time, to the extent that these firms are owned by a few individuals, resource allocation can be manipulated by them rather than by the impersonal market mechanism. This will impair allocative efficiency, run counter to a decentralized market system and hamper the equitable distribution of wealth. Viewed from the historical evolution of Western capitalism in general, the concentration of economic power is a paradox in that it is a product of the free market system itself. The economic principle of natural discrimination works so that a few big firms preempt scarce resources and market opportunities. Prominent historical examples include trusts in America, Konzern in Germany and Zaibatsu in Japan in the early twentieth century. In other words, the concentration of economic power is the outcome as well as the antithesis of free competition. As long as judgment of the economic system at large depends upon the value systems of individuals, therefore, the issue of how to evaluate the concentration of economic power will inevitably be tinged with ideology. We have witnessed several different approaches to this problem such as communism, fascism and revised capitalism, and the last one seems to be the only surviving alternative. The concentration of economic power in Korea can be summarily represented by the "jaebol," namely, the conglomerate business group, the majority of whose member firms are monopolistic or oligopolistic in their respective markets and are owned by particular individuals. The jaebol has many dimensions in its size, but to sketch its magnitude, the share of the jaebol in the manufacturing sector reached 37.3% in shipment and 17.6% in employment as of 1989. The concentration of economic power can be ascribed to a number of causes. 
In the early stages of economic development, when the market system is immature, entrepreneurship must fill the gap inherent in the market in addition to performing its customary managerial function. Entrepreneurship of this sort is a scarce resource and becomes even more valuable as the target rate of economic growth gets higher. Entrepreneurship can neither be readily obtained in the market nor exhausted despite repeated use. Because of these peculiarities, economic power is bound to be concentrated in the hands of a few entrepreneurs and their business groups. It goes without saying, however, that the issue of whether the full exercise of money-making entrepreneurship is compatible with social mores is a different matter entirely. The rapidity of the concentration of economic power can also be traced to the diversification of business groups. The transplantation of advanced technology oriented toward mass production tends to saturate the small domestic market quite early and allows a firm to expand into new markets by making use of excess capacity and of monopoly profits. One of the reasons why the jaebol issue has become so acute in Korea lies in the nature of the government-business relationship. The Korean government has set economic development as its foremost national goal and, since then, has intervened profoundly in the private sector. Since most strategic industries promoted by the government required a huge capacity in technology, capital and manpower, big firms were favored over smaller firms, and the benefits of industrial policy naturally accrued to large business groups. The concentration of economic power which occurred along the way was, therefore, not necessarily a product of the market system. At the same time, the concentration of ownership in business groups has been left largely intact as they have customarily met capital requirements by means of debt. 
The real advantage enjoyed by large business groups lies in synergy due to multiplant and multiproduct production. Even these effects, however, cannot always be considered socially optimal, as they disadvantage other independent firms, for example by foreclosing their markets. Moreover, their fictitious or artificial advantages only aggravate the popular perception that most business groups have accumulated their wealth at the expense of the general public and at the behest of the government. Since Korea stands now at the threshold of establishing a full-fledged market economy along with political democracy, the phenomenon called the concentration of economic power must be correctly understood and the roles of business groups must be accordingly redefined. In doing so, we would do better to take a closer look at Japan, which has experienced the demise of the family-controlled Zaibatsu and the success of business groups (Kigyoshudan) whose ownership is dispersed among many firms and ultimately among the general public. The Japanese case cannot be an ideal model, but at least it gives us a good point of departure in that the issue of ownership is at the heart of the matter. In setting the basic direction of public policy aimed at controlling the concentration of economic power, one must harmonize efficiency and equity. Firm size in itself is not a problem, if it is dictated by efficiency considerations and if the firm behaves competitively in the market. As long as entrepreneurship is required for continuous economic growth and there is a discrepancy in entrepreneurial capacity among individuals, a concentration of economic power is bound to take place to some degree. Hence, the most effective way of reducing the inefficiency of business groups may be to impose competitive pressure on their activities. Concurrently, unless the concentration of ownership in business groups is scaled down, the seed of social discontent will still remain. 
Nevertheless, the dispersion of ownership requires a number of preconditions and, consequently, we must make consistent, long-term efforts on many fronts. We can suggest a long list of policy measures specifically designed to control the concentration of economic power. Whatever the policy may be, however, its intended effects will not be fully realized unless business groups abide by the moral code expected of socially responsible entrepreneurs. This is especially true, since the root of the problem of the excessive concentration of economic power lies outside the issue of efficiency, in problems concerning distribution, equity, and social justice.


Converting Ieodo Ocean Research Station Wind Speed Observations to Reference Height Data for Real-Time Operational Use (이어도 해양과학기지 풍속 자료의 실시간 운용을 위한 기준 고도 변환 과정)

  • BYUN, DO-SEONG;KIM, HYOWON;LEE, JOOYOUNG;LEE, EUNIL;PARK, KYUNG-AE;WOO, HYE-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.23 no.4
    • /
    • pp.153-178
    • /
    • 2018
  • Most operational uses of wind speed data require measurements at, or estimates generated for, the reference height of 10 m above mean sea level (AMSL). On the Ieodo Ocean Research Station (IORS), wind speed is measured by instruments installed on the lighthouse tower of the roof deck at 42.3 m AMSL. This preliminary study indicates how these data can best be converted into synthetic 10 m wind speed data for operational uses via the Korea Hydrographic and Oceanographic Agency (KHOA) website. We tested three well-known conventional empirical neutral wind profile formulas (a power law (PL); a drag coefficient based logarithmic law (DCLL); and a roughness height based logarithmic law (RHLL)), and compared their results to those generated using a well-known, highly tested and validated logarithmic model (LMS) with a stability function (ψ_v), to assess the potential use of each method for accurately synthesizing reference level wind speeds. From these experiments, we conclude that the reliable LMS technique and the RHLL technique are both useful for generating reference wind speed data from IORS observations, since these methods produced very similar results: comparisons between the RHLL and the LMS results showed relatively small bias values (-0.001 m/s) and Root Mean Square Deviations (RMSD, 0.122 m/s). We also compared the synthetic wind speed data generated using each of the four neutral wind profile formulas under examination with Advanced SCATterometer (ASCAT) data. Comparisons revealed that the 'LMS without ψ_v' produced the best results, with a bias of only 0.191 m/s and an RMSD of 1.111 m/s. As well as comparing these four different approaches, we also explored potential refinements that could be applied within or through each approach. 
Firstly, we tested the effect of tidal variations in sea level height on wind speed calculations, through comparison of results generated with and without the adjustment of sea level heights for tidal effects. Tidal adjustment of the sea levels used in reference wind speed calculations resulted in remarkably small bias (< 0.0001 m/s) and RMSD (< 0.012 m/s) values when compared to calculations performed without adjustment, indicating that this tidal effect can be ignored for the purposes of IORS reference wind speed estimates. We also estimated surface roughness heights (z_0) based on RHLL and LMS calculations in order to explore the best parameterization of this factor, with results leading to our recommendation of a new z_0 parameterization derived from observed wind speed data. Lastly, we suggest the necessity of including a suitable, experimentally derived surface drag coefficient and z_0 formulas within conventional wind profile formulas for situations characterized by strong wind (≥ 33 m/s) conditions, since without this inclusion the wind adjustment approaches used in this study are only optimal for wind speeds ≤ 25 m/s.
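The height conversion idea behind two of the formulas above can be sketched as follows. This is a minimal illustration, not the study's implementation: the power-law exponent alpha = 0.11 and the open-sea roughness height z0 = 2.0e-4 m are assumed, commonly quoted neutral-stability values, and only the 42.3 m station height is taken from the abstract.

```python
import math

def power_law(u_z, z, z_ref=10.0, alpha=0.11):
    """Power-law (PL) profile: scale wind speed u_z measured at height z (m)
    down to the reference height z_ref. alpha = 0.11 is an assumed
    open-sea exponent, not the paper's fitted value."""
    return u_z * (z_ref / z) ** alpha

def log_law(u_z, z, z_ref=10.0, z0=2.0e-4):
    """Roughness-height-based logarithmic law (RHLL) under neutral
    stability, with an assumed open-ocean roughness height z0 (m)."""
    return u_z * math.log(z_ref / z0) / math.log(z / z0)

# A 15 m/s observation at the IORS lighthouse tower (42.3 m AMSL)
u_obs = 15.0
print(round(power_law(u_obs, 42.3), 2))  # synthetic 10 m wind speed (PL)
print(round(log_law(u_obs, 42.3), 2))    # synthetic 10 m wind speed (RHLL)
```

Both formulas reduce the observed speed toward the surface, and with these illustrative parameters they agree to within about half a metre per second, consistent with the abstract's finding that the neutral profiles track each other closely at moderate wind speeds.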

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Up to this day, mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To deliver these services, reduced latency and high reliability, on top of high data rates, are critical for real-time services. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/㎢. In particular, in intelligent traffic control systems and services using various vehicle-based Vehicle-to-X (V2X) communications, such as traffic control, low delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. Basically, an SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable in delay. 
Since conventional centralized SDN structures have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. Thus, SDNs need to be partitioned at a certain scale into a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor, because the link speed is sufficient and it contributes less than 1 ms of delay, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information should be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, as the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells of 50-250 m in radius, and vehicle speeds of 30-200 km/h, in order to examine the network architecture that minimizes the delay.
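To see why the cell-crossing time matters for the information change cycle, a back-of-the-envelope dwell-time calculation over the simulation ranges above can be sketched as follows. This is an illustrative approximation, not the paper's simulation model: it simply treats the worst case as the vehicle crossing the full cell diameter at constant speed.

```python
def dwell_time_s(cell_radius_m, speed_kmh):
    """Approximate worst-case time (s) a vehicle spends inside a 5G
    small cell: cell diameter divided by vehicle speed."""
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return 2 * cell_radius_m / speed_ms

# Extremes of the ranges given in the abstract:
fastest = dwell_time_s(50, 200)   # smallest cell, fastest vehicle
slowest = dwell_time_s(250, 30)   # largest cell, slowest vehicle
print(round(fastest, 2), "s")     # 1.8 s
print(round(slowest, 2), "s")     # 60.0 s
```

At 200 km/h in a 50 m cell, the vehicle is inside a given cell for under two seconds, so the SDN's information change cycle and processing time must fit comfortably within that window; this is the regime in which the split SDN architecture is argued to matter.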

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow the system to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases cannot easily expand nodes when the stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. The data models of NoSQL are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. 
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
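The log collector module's routing decision described above can be sketched as follows. This is a hypothetical, simplified sketch: the type names and the JSON log format are invented for illustration, and the actual database writes are replaced by returned labels so the routing rule itself is visible.

```python
import json

# Hypothetical set of log types that require real-time analysis and
# therefore go to the MySQL module; everything else goes to MongoDB
# for batch (Hadoop-based) analysis.
REALTIME_TYPES = {"login_failure", "transaction_error"}

def route_log(raw_line):
    """Classify one raw JSON log line and return (target_store, record).

    target_store is 'mysql' for real-time logs and 'mongodb' for
    aggregated unstructured logs, mirroring the collector's two paths."""
    record = json.loads(raw_line)
    if record.get("type") in REALTIME_TYPES:
        return ("mysql", record)      # real-time path for the graph generator
    return ("mongodb", record)        # bulk path for Hadoop-based analysis

target, rec = route_log('{"type": "transaction_error", "msg": "timeout"}')
print(target)  # mysql
```

In the actual system the two branches would insert into MongoDB (with its flexible schema and auto-sharding) and MySQL respectively; the sketch only captures the classification step that precedes those inserts.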

An Empirical Study on the Influencing Factors for Big Data Intended Adoption: Focusing on the Strategic Value Recognition and TOE Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang;Kim, Jin-soo
    • Asia pacific journal of information systems
    • /
    • v.24 no.4
    • /
    • pp.443-472
    • /
    • 2014
  • To survive in the globally competitive environment, an enterprise should be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems effectively and improving competitiveness through its varied problem-solving and advanced predictive capabilities. Owing to this remarkable potential, implementations of big data systems have increased across many enterprises around the world. Big data is currently called the 'crude oil' of the 21st century and is expected to provide competitive superiority. Big data is in the limelight because, while conventional IT technology has reached the limits of what it makes possible, big data goes beyond those technological limits and can be utilized to create new value, such as business optimization and new business creation, through analysis. However, since big data has often been introduced hastily, without considering the strategic value to be derived and achieved through it, firms face difficulties in deducing strategic value and utilizing the data. According to a survey of 1,800 IT professionals from 18 countries worldwide, only 28% of corporations utilized big data well, and many respondents said they had difficulties in deducing strategic value and operating through big data. The strategic value should be identified, and environmental factors such as internal and external regulations and systems should be considered, before introducing big data, but these factors were not well reflected. The cause of failure turned out to be that big data was introduced by following the IT trend and the surrounding environment, hastily and in situations where the preconditions for introduction were not in place. 
The strategic value obtainable through big data should be clearly understood, and a systematic environmental analysis of applicability is very important for a successful introduction; however, because corporations consider only partial achievements and technological aspects, successful introductions are not being made. Previous studies show that most big data research focuses on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, identifying influencing factors for successful big data systems implementation, and analyzing empirical models. To do this, the factors that can affect the intention to adopt big data were derived by reviewing information systems success factors, strategic value perception factors, environmental considerations for information systems introduction, and the big data literature, and a structured questionnaire was developed. The questionnaire was then administered to the people in charge of big data inside corporations, and statistical analysis was performed. The analysis showed that strategic value perception factors and intra-industry environmental factors positively affected the intention to adopt big data. The theoretical, practical, and policy implications derived from the results are as follows. 
The first theoretical implication is that this study has proposed the factors affecting the intention to adopt big data by reviewing strategic value perception, environmental factors, and prior big data studies, and has proposed variables and measurement items that were empirically analyzed and verified. The study is meaningful in that it measured the influence of each variable on adoption intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, this study defined the independent variables (strategic value perception, environment), the dependent variable (adoption intention), and the moderating variables (type of business and corporate size) for big data adoption intention, and laid a theoretical basis for subsequent empirical studies in big data-related fields by developing measurement items with established reliability and validity. Third, by verifying the significance of the strategic value perception factors and environmental factors proposed in prior studies, this study can aid subsequent empirical research on the factors affecting big data adoption. The operational implications are as follows. First, this study laid an empirical basis for the big data field by investigating the cause-and-effect relationship between strategic value perception and environmental factors and adoption intention, and by proposing measurement items with established justification, reliability, and validity. Second, this study showed that strategic value perception positively affects big data adoption intention, which is meaningful in that it highlights the importance of strategic value perception. 
Third, the study proposes that a corporation introducing big data should do so only after a precise analysis of its industry's internal environment. Fourth, this study proposes that the size and type of business of the corporation should be considered when introducing big data, by presenting how the influencing factors differ depending on corporate size and type of business. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be approached in various ways in products, services, productivity, and decision making, and can be utilized in all business fields on that basis, but the areas that major domestic corporations are considering are limited to parts of the product and service fields. Accordingly, when introducing big data, it will be necessary to review utilization in detail and design the big data system in a form that maximizes the utilization rate. Second, the study identifies the burden of system introduction costs, difficulty of use, and lack of trust in supplier corporations as obstacles in the big data introduction phase. Since global IT corporations dominate the big data market, domestic corporations' big data introductions cannot but depend on foreign corporations. Considering that our country, though a world IT power, lacks global IT corporations, big data can be seen as a chance to foster world-class corporations. Accordingly, the government will need to foster leading corporations through active policy support. Third, corporations lack the internal and external professional manpower for big data introduction and operation. 
In big data, deriving valuable insights from the data matters more than building the system itself. This requires talent equipped with academic knowledge and experience in various fields such as IT, statistics, strategy, and management, and such talent should be trained through systematic education. This study has laid a theoretical basis for empirical studies of big data-related fields by identifying and verifying the main variables that affect big data adoption intention, and it is expected to offer useful guidelines for corporations and policymakers considering big data implementation by empirically analyzing that theoretical basis.