• Title/Summary/Keyword: environment units


Success Factor in the K-Pop Music Industry: focusing on the mediated effect of Internet Memes (대중음악 흥행 요인에 대한 연구: 인터넷 밈(Internet Meme)의 매개효과를 중심으로)

  • YuJeong Sim;Minsoo Shin
    • Journal of Service Research and Studies
    • /
    • v.13 no.1
    • /
    • pp.48-62
    • /
    • 2023
  • As seen in the recent K-pop craze, the size and influence of the Korean music industry are growing ever larger. At least 6,000 songs are released per year in the Korean music market, but few can be called successful. Many studies and attempts have been made to identify the factors that make music a hit. Commercial factors such as media exposure and promotion, as well as the quality of the music itself, play an important role in commercial success. Recently, there have been many marketing campaigns using Internet memes in the pop music industry. Internet memes are cultural units that spread among people in various forms, such as images and videos. In the Internet environment, and owing to the characteristics of digital communication, content is expanded and reproduced in the form of various memes, eliciting a greater response from consumers. Previously, Internet memes arose naturally, but artists aware of their marketing effects have recently used them deliberately as a marketing element. In this paper, the mediation effect of Internet memes on the success factors of popular music was analyzed, and a prediction model reflecting it was proposed. As a result of the analysis, the factors showing a mediation effect were the same for the 'cover effect' and the 'challenge effect'. Among the internal success factors, mediation effects were found for 'Singer Recognition' and the genres 'POP, Dance, Ballad, Trot, and Electronica'; among the external success factors, for 'Planning Company Capacity', 'The Number of Music Broadcasting Programs', and 'The Number of News Articles'. Predictive models reflecting the cover effect and the challenge effect showed F1-scores of 0.6889 and 0.7692, respectively. This study is meaningful in that it collected and analyzed actual chart data, presented commercial directions that can be used in practice, and identified both the success factors of popular music and the mediating effects of Internet memes.
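The mediation analysis this abstract describes can be sketched with the classic two-regression (indirect-effect) approach. Everything below is a hypothetical synthetic stand-in, not the paper's data: the variables, coefficients, and sample are invented purely to illustrate how a mediation effect is estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical synthetic data: success factor X, meme activity M (mediator),
# chart outcome Y. True paths: a = 0.6 (X -> M), b = 0.4 (M -> Y).
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(scale=0.5, size=n)
y = 0.4 * m + 0.3 * x + rng.normal(scale=0.5, size=n)

def ols_coefs(cols, target):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(target))] + list(cols))
    return np.linalg.lstsq(X, target, rcond=None)[0]

a = ols_coefs([x], m)[1]          # path X -> M
coefs = ols_coefs([x, m], y)      # Y ~ X + M
c_prime, b = coefs[1], coefs[2]   # direct effect, path M -> Y
indirect = a * b                  # the mediated (indirect) effect
```

The mediated effect is the product of the X→M and M→Y paths; with the synthetic coefficients above it should recover roughly 0.6 × 0.4 = 0.24.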

Seasonal Morphodynamic Changes of Multiple Sand Bars in Sinduri Macrotidal Beach, Taean, Chungnam (충남 태안군 신두리 대조차 해빈에 나타나는 다중사주의 계절별 지형변화 특성)

  • Tae Soo Chang;Young Yun Lee;Hyun Ho Yoon;Kideok Do
    • Journal of the Korean earth science society
    • /
    • v.45 no.3
    • /
    • pp.203-213
    • /
    • 2024
  • This study aimed to investigate the seasonal pattern of multiple bar formation in summer and flattening in winter on the macrotidal Sinduri beach in Taean, and to understand the processes of their formation and subsequent flattening. Beach profiling has been conducted regularly over the last four years using a VRS-GPS system. Surface sediment samples were collected seasonally along the transect line, and grain-size analyses were performed. Tidal current data were acquired using a TIDOS current observation system during both winter and summer. The Sinduri macrotidal beach consists of two geomorphic units: an upper high-gradient beach face and a lower, more gently sloped intertidal zone. High berms and beach cusps did not develop on the beach face. The approximately 400-m-wide intertidal zone comprises two to five distinct lines of multiple bars. Mean grain sizes of the sand bars range from 2.0 to 2.75 phi, corresponding to fine sand, and show a shoreward-coarsening trend. The regular beach-profiling surveys revealed that the summer profile has a multi-barred morphology with a maximum of five bar lines, whereas the winter profile has a non-barred, flat morphology. The non-barred winter profiles likely result from flattening by scour-and-fill processes during winter. The growth of multiple bars in summer is interpreted to result from a break-point mechanism associated with moderate waves and the translation of tide levels, rather than from the standing-wave hypothesis, which is stationary at high tide. The break-point hypothesis for the multiple bars is supported by the presence of the largest bar at mean sea level, shorter bar spacing toward the shore, irregular bar spacing, the strong asymmetry of the bars, and the 10-30 m shoreward migration of the multiple bars.
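The reported mean grain sizes (2.0-2.75 phi) can be checked against the fine-sand class using the standard Krumbein phi scale, where phi = -log2(diameter in mm):

```python
# Krumbein phi scale: phi = -log2(d_mm), so d_mm = 2 ** (-phi).
def phi_to_mm(phi):
    return 2.0 ** (-phi)

# The abstract's reported range of mean grain sizes for the sand bars.
for phi in (2.0, 2.75):
    d = phi_to_mm(phi)
    # Fine sand spans 2-3 phi, i.e. 0.125-0.25 mm, on the Wentworth scale.
    assert 0.125 <= d <= 0.25
```

2.0 phi corresponds to 0.25 mm and 2.75 phi to about 0.149 mm, both inside the Wentworth fine-sand class, consistent with the abstract.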

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment and can flexibly expand computing resources such as storage space and memory under conditions such as extended storage needs or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data.
Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have fixed schemas that are inappropriate for processing unstructured log data, and such strict schemas make it hard to distribute stored data across additional nodes when the amount of data increases rapidly. NoSQL databases do not provide the complex computations that relational databases do, but they can easily expand through node dispersion when data volume grows rapidly; they are non-relational databases with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when data volume is rapidly increasing, and it provides an auto-sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log insert and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through MongoDB insert performance evaluations for various chunk sizes.
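The schema-free document model and the per-unit-time aggregation the abstract describes can be illustrated with a minimal, self-contained sketch. The field names, log types, and one-minute bucket size below are assumptions for illustration, not the paper's actual schema; a document store like MongoDB accepts such heterogeneous records directly, whereas a fixed relational schema would not.

```python
from collections import defaultdict

# Unstructured log records: note the heterogeneous fields per record,
# which a document-oriented store accepts without a predefined schema.
logs = [
    {"ts": "2013-06-01T09:00:12", "type": "login", "user": "a01"},
    {"ts": "2013-06-01T09:00:40", "type": "transfer", "amount": 5000},
    {"ts": "2013-06-01T09:01:05", "type": "login", "user": "b77"},
]

def per_minute_counts(records):
    """Aggregate records per unit time (here, one-minute buckets)."""
    buckets = defaultdict(int)
    for r in records:
        minute = r["ts"][:16]        # truncate ISO timestamp to YYYY-MM-DDTHH:MM
        buckets[minute] += 1
    return dict(buckets)

counts = per_minute_counts(logs)
# {'2013-06-01T09:00': 2, '2013-06-01T09:01': 1}
```

In the proposed system this kind of per-unit-time aggregate is what the MongoDB module stores and the log graph generator module plots.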

A Study on measurement of scattery ray of Computed Tomography (전산화 단층촬영실의 산란선 측정에 대한 연구)

  • Cho, Pyong-Kon;Lee, Joon-Hyup;Kim, Yoon-Sik;Lee, Chang-Yeop
    • Journal of radiological science and technology
    • /
    • v.26 no.2
    • /
    • pp.37-42
    • /
    • 2003
  • Purpose : Computed tomography (CT) equipment is essential for radiologic diagnosis. With the passage of time and the development of science, CT equipment has been improved repeatedly, and examinations using it are expected to increase. These authors therefore measured the rate of scatter ray generation in front of the lead glass for patients within the control room of the CT room and outside the entrance door used by patients, and attempted to find a method for minimizing exposure to scatter rays. Materials and Methods : From November 2001, twenty-five CT units already installed and operating at 13 general hospitals and university hospitals in Seoul were included in this study. The scanning conditions recommended by each manufacturer for measuring scatter exposure were used. A DALI CT radiation dose test phantom for the head (⌀16 cm Plexiglas) and a phantom for the stomach (⌀32 cm Plexiglas) served as the scanned objects. Scatter rays were measured with a Radcal Corporation survey meter (model 20×5-1800 electrometer/ion chamber, S/N 21740, with Radiation Monitor Controller model 2026) and a G-M survey meter. Measurement spots included the front of the lead glass for patients within the control room, where most of the radiographers' work is carried out; the outside of the entrance door used by patients and their guardians; and a spot 100 cm from the isocenter at the time of scanning. Results : The work environment within the CT rooms installed and operated by each hospital showed considerable differences depending on circumstances, and the status of scatter rays was as follows.
1) The average distance from the isocenter of the CT equipment to the lead glass for patients within the control room was 377 cm. Scatter rays there showed a diverse distribution, from spots where none was detected to spots where about 100 mR/week was detected, but they met the weekly tolerance of 2.58×10⁻⁵ C/kg (100 mR/week). 2) The average distance from the isocenter to the outside of the entrance door used by patients and their guardians was 439 cm. Scatter rays there likewise varied from spots where almost none was detected to spots with differing levels, but in most cases they satisfied the weekly tolerance of 2.58×10⁻⁵ C/kg (100 mR/week). 3) At the time of scanning, the amount of scatter rays at the spot 100 cm from the isocenter showed considerable differences depending on the equipment. Conclusion : The use of CT equipment as a source of radiation for diagnosis is increasing daily. Its diagnostic value is very high compared to general X-ray examinations, but there is also a high possibility of exposure to radiation and scatter rays. To reduce scatter ray exposure in the CT room even slightly, it is essential to secure sufficient space, and more effort should be devoted to developing a variety of techniques that obtain the best possible image at minimum cost.
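The weekly tolerance quoted above follows from the definition of the roentgen (1 R = 2.58×10⁻⁴ C/kg), which a one-line conversion confirms:

```python
# Exposure unit conversion: 1 roentgen (R) is defined as 2.58e-4 C/kg.
C_PER_KG_PER_R = 2.58e-4

def mR_to_C_per_kg(mR):
    """Convert an exposure in milliroentgen to coulombs per kilogram."""
    return (mR / 1000.0) * C_PER_KG_PER_R

limit = mR_to_C_per_kg(100)  # the 100 mR/week tolerance from the abstract
```

100 mR/week therefore corresponds to 2.58×10⁻⁵ C/kg per week.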


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. Failures in IT facilities in particular are irregular because of interdependence, and their causes are difficult to determine. Previous studies predicting failures in data centers treated each server as a single, independent state, without considering that the devices are interconnected. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the causes of failures occurring within servers are difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: one failure may cause failures in other servers, or be triggered by failures on other servers. In other words, while existing studies analyzed failures on the assumption that a single server does not affect other servers, this study assumes that failures propagate between servers.
In order to define the complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures for each device were sorted in chronological order, and when a failure in one piece of equipment was followed by a failure in another within 5 minutes, the failures were defined as simultaneous. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed together within the sequences were selected, and their simultaneous failures were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used to account for the fact that each server contributes a different level of impact to multiple failures. This algorithm increases prediction accuracy by assigning a larger weight to a server as its impact on the failure increases. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data were analyzed both under a single-server assumption and under a multiple-server assumption, and the results were compared. The second experiment improved the prediction accuracy for complex failures by optimizing the threshold of each server.
In the first experiment, which assumed a single server and multiple servers in turn, the single-server model predicted no failure for three of the five servers even though failures actually occurred, whereas the multiple-server model correctly predicted failures for all five servers. This result supports the hypothesis that servers affect one another. The study thus confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's effect differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using the results of this study.
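The attention-weighting idea behind the Hierarchical Attention Network step can be sketched numerically. The dimensions and the random context vector below are illustrative assumptions; in the actual model both the per-server encodings and the context vector are learned, and the weights express how much each server contributes to the failure prediction.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
H = rng.normal(size=(5, 8))   # hypothetical encoded states of 5 servers (8-dim each)
u = rng.normal(size=8)        # context vector (learned in the real model)

scores = H @ u                # relevance score of each server
alpha = softmax(scores)       # attention weights over servers, summing to 1
pooled = alpha @ H            # weighted summary vector fed to the classifier
```

Servers whose states align most with the context vector receive the largest weights, which is how the model lets high-impact servers dominate the pooled representation.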

A Study on the Forest Yield Regulation by Systems Analysis (시스템분석(分析)에 의(依)한 삼림수확조절(森林收穫調節)에 관(關)한 연구(硏究))

  • Cho, Eung-hyouk
    • Korean Journal of Agricultural Science
    • /
    • v.4 no.2
    • /
    • pp.344-390
    • /
    • 1977
  • The purpose of this paper was to schedule an optimum cutting strategy that maximizes total yield under certain restrictions on periodic timber removals and harvest areas in an industrial forest, based on a linear programming technique. The sensitivity of the regulation model to variations in the restrictions has also been analyzed to obtain information on changes in total yield over the planning period. The regulation procedure was applied to the experimental forest of the College of Agriculture, Seoul National University. The forest is composed of 219 cutting units and is characterized by younger age groups, which is very common in Korea. The planning period is divided into 10 cutting periods of five years each, and cutting is permissible only in stands of age groups 5-9. It is also assumed in the study that subsequent forests are established immediately after cutting the existing forests, that non-stocked forest lands are planted in the first cutting period, and that established forests remain fully stocked until the next harvest. All feasible cutting regimes have been defined for each unit depending on its age group. The total yield (Vi,k) of each regime expected in the planning period has been projected using stand yield tables and forest inventory data, and the regime giving the highest Vi,k has been selected as the optimum cutting regime. After calculating the periodic yields, cutting areas, and total yield from the optimum regimes selected without any restrictions, the upper and lower limits of periodic yields (Vj-max, Vj-min) and of periodic cutting areas (Aj-max, Aj-min) were decided. The optimum regimes under these restrictions were then selected by linear programming. The results of the study may be summarized as follows: 1. The fluctuations of periodic harvest yields and areas under the cutting regimes selected without restrictions were very great, because of the irregular composition of age classes and the growing stock of the existing stands.
About 68.8 percent of the total yield is expected in period 10, while no yield is expected in periods 6 and 7. 2. After inspection of the above solution, restricted optimum cutting regimes were obtained under the restrictions Amin = 150 ha, Amax = 400 ha, Vmin = 5,000 m³, and Vmax = 50,000 m³, using the LP regulation model. As a result, about 50,000 m³ of stable harvest yield per period and a relatively balanced age-group distribution are expected from period 5 onward. In this case, the loss in total yield was about 29 percent relative to the unrestricted regimes. 3. A thinning schedule could easily be treated by the model presented in the study, and the thinnings made it possible to select optimum regimes effective for smoothing the wood flows, not to mention increasing the total yield in the planning period. 4. The stronger the restrictions in the optimum solution, the earlier the period in which balanced harvest yields and a balanced age-group distribution can be achieved. In this particular case the periodic yields were strongly affected by the constraints, and the fluctuations of harvest areas depended on the amount of the periodic yields. 5. Because the total yield decreased at an increasing rate as stronger restrictions were imposed, the loss would be very great where a strict sustained yield and a normal age-group distribution are required in the earlier periods. 6. Total yield under the same restrictions in a period was increased by lowering the felling age and extending the range of cutting age groups. It therefore seems advantageous for maximum timber yield to adopt a wider range of cutting age groups, with the lower limit set at the age at which the smallest utilizable timber size can be produced. 7.
The LP regulation model presented in this study seems useful in the Korean situation for the following reasons: (1) the model can provide forest managers with a solution of where, when, and how much to cut in order to best fulfill the owner's objective; (2) planning is visualized as a continuous process in which new strategies are automatically evolved as changes in the forest environment are recognized; (3) the cost (measured as the decrease in total yield) of imposing restrictions can be easily evaluated; (4) a thinning schedule can be treated without difficulty; (5) the model can be applied to irregular forests; and (6) traditional regulation methods can be reinforced by the model.
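A toy version of the regime-selection problem can illustrate the structure of the model: each cutting unit chooses one regime (a vector of per-period yields), and total yield is maximized subject to per-period harvest limits. The units, yields, and cap below are invented, and brute-force enumeration stands in for the linear programming solver used in the paper, which is only feasible because the toy instance is tiny.

```python
from itertools import product

# Hypothetical per-period yields for two cutting units, two regimes each.
regimes = {
    "unit1": [(30, 0), (10, 15)],
    "unit2": [(25, 5), (0, 28)],
}
V_MAX = 40  # assumed upper limit on each period's harvest (the paper's Vj-max)

best, best_total = None, -1
for choice in product(*regimes.values()):
    # Sum each period's yield across the chosen regimes.
    per_period = [sum(p) for p in zip(*choice)]
    if all(v <= V_MAX for v in per_period):       # periodic-yield restriction
        total = sum(per_period)
        if total > best_total:
            best, best_total = choice, total
```

The cap forces the solution away from regime combinations that dump all yield into one period, mirroring how the paper's Vj-max/Vj-min constraints smooth the wood flow at some cost in total yield.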
