• Title/Summary/Keyword: Flow (흐름)


Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and also from the classification of the SQF (Sectoral Qualifications Framework) proposed for the SW field. A new job classification system that SW companies, SW job seekers, and job sites can all understand is therefore needed. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF against the job posting information of major job sites and the NCS (National Competency Standards). To this end, we conduct association analysis between the occupations listed on major job sites and derive association rules between the SQF and those occupations. Using these rules, we propose an intelligent job classification system based on data that maps the job classification systems of the major job sites to the SQF. First, major job sites are selected to obtain information on the job classification systems used in the SW market. We then identify how job information can be collected from each site and gather the data through open APIs. Focusing on the relationships between the data, we retain only the job postings listed on multiple job sites at the same time and discard the rest. Next, we map the job classification systems across job sites using the rules derived from the association analysis, complete the mapping between these market classifications, discuss the results with experts, additionally map the SQF, and finally propose a new job classification system. As a result, more than 30,000 job listings were collected in XML format through the open APIs of WORKNET, JOBKOREA, and saramin, the main job sites in Korea. After filtering, about 900 job postings simultaneously listed on multiple sites remained, and 800 association rules were derived by applying the Apriori algorithm, a frequent-pattern mining method.
Based on the 800 association rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth levels. In the new job taxonomy, the first primary class, the job system related to IT consulting, computer systems, networks, and security, consisted of three secondary, five tertiary, and five quaternary classifications. The second primary class, the job system related to databases and system operation, consisted of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, was composed of four secondary, nine tertiary, and two quaternary classifications. The last primary class, the job system related to ICT management and computer and communication engineering technology, consisted of three secondary and six tertiary classifications. In particular, the new job classification system has a relatively flexible number of classification levels, unlike other existing systems. WORKNET divides jobs into three levels; JOBKOREA divides jobs into two levels and subdivides them by keywords; saramin likewise divides jobs into two levels and subdivides them in keyword form. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs. In the classification system, some jobs stop at the second level, while others are subdivided down to the fourth level. This reflects the idea that not all jobs can be broken down into the same number of steps. We also combined the rules derived from the association analysis of the collected market data with experts' opinions.
Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing job classification systems. This study is meaningful in that it suggests a new job classification system reflecting market demand by mapping between occupations based on data through association analysis, rather than on the intuition of a few experts. However, this study has a limitation in that it cannot fully reflect market demand as it changes over time, because the data were collected at a single point in time. As market demands change over time, including seasonal factors and the timing of major corporate recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to transfer to other industries building on its success in the SW industry.
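The Apriori-style step described above (finding co-posted job categories and deriving rules that pass support and confidence thresholds) can be sketched as follows. The category labels, thresholds, and toy transactions are illustrative assumptions, not the paper's actual data or code.

```python
from itertools import combinations

# Hypothetical transactions: each is the set of category labels under which
# one job posting appears across different job sites (labels are made up).
postings = [
    {"WORKNET:WebDev", "JOBKOREA:WebProgramming", "saramin:Web"},
    {"WORKNET:WebDev", "JOBKOREA:WebProgramming"},
    {"WORKNET:Security", "JOBKOREA:NetworkSecurity"},
    {"WORKNET:WebDev", "saramin:Web"},
]

def association_rules(transactions, min_support=0.5, min_confidence=0.8):
    """Derive pairwise rules A -> B with their support and confidence."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    rules = []
    for a, b in combinations(items, 2):
        pair_sup = support({a, b})
        if pair_sup < min_support:      # prune infrequent pairs (Apriori idea)
            continue
        for ante, cons in ((a, b), (b, a)):
            conf = pair_sup / support({ante})
            if conf >= min_confidence:
                rules.append((ante, cons, pair_sup, conf))
    return rules

rules = association_rules(postings, min_support=0.5, min_confidence=0.9)
```

A rule such as `JOBKOREA:WebProgramming -> WORKNET:WebDev` with high confidence is what would justify mapping the two categories onto the same node of the unified taxonomy.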

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology, utilized in a variety of fields, is emerging as a key element in the creation of new business models and the provision of user-friendly services through its combination with big data. The data accumulated from Internet-of-Things (IoT) devices is being used in many ways to build convenience-based smart systems, as it can support customized intelligent systems through analysis of user environments and patterns. Recently, it has been applied to innovation in the public domain, including smart cities and smart transportation, for example in solving traffic and crime problems with CCTV. In particular, when planning underground services or establishing a passenger-flow control information system to enhance the convenience of citizens and commuters in congested public transportation such as subways and urban railways, it is necessary to comprehensively consider both the ease of securing real-time service data and the stability of security. However, previous studies that utilize image data face limitations: object detection performance degrades under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study is free from privacy issues because it does not identify individuals, and it can be effectively utilized to build intelligent public services for many, unspecified people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily, and the temperature data measured by the sensors is transmitted in real time.
The experimental environment for collecting data detected in real time from the sensors was established at equally spaced midpoints of a 4×4 grid on the ceiling of subway entrances where passenger traffic is high, and the temperature change was measured for objects entering and leaving the detection spots. The measured data went through preprocessing in which reference values for the 16 distinct areas were set and the differences between the temperatures in those areas and their reference values per unit of time were calculated. This corresponds to a methodology that maximizes the visibility of movement within the detection area. In addition, the data values were scaled up by a factor of 10 in order to reflect the temperature differences by area more sensitively. For example, if the temperature collected from a sensor at a given time was 28.5℃, the analysis was conducted with the value changed to 285. As above, the data collected from the sensors have the characteristics of both time series data and image data with 4×4 resolution. Reflecting the characteristics of the measured, preprocessed data, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), that combines a CNN, which excels at image classification, with an LSTM, which is especially suitable for analyzing time series data. In this study, the CNN-LSTM algorithm is used to predict the number of passing persons in one of the 4×4 detection areas. We verified the validity of the proposed model through performance comparisons with other artificial intelligence algorithms: Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). As a result of the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance compared to MLP, LSTM, and RNN-LSTM.
By utilizing the proposed devices and models, it is expected that various metro services free of legal issues concerning personal information will be provided, such as real-time monitoring of public transport facilities and congestion-based emergency response services. However, the data were collected from only one side of the entrances, and data collected over a short period were applied to the prediction, so verification of the model's applicability in other environments remains a limitation. In the future, the proposed model is expected to gain reliability if experimental data are sufficiently collected in various environments or if the training data are augmented with measurements from other sensors.
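The preprocessing described above (per-cell reference subtraction and the ×10 scaling, e.g. 28.5℃ → 285) can be sketched as below. The grid values and reference temperatures are made-up illustrations, assuming the sensor reports one 4×4 temperature frame per time step.

```python
# A minimal sketch of the preprocessing step, assuming a 4x4 temperature grid
# per time step; reference values and readings are illustrative, not the
# paper's data.

def preprocess(frame, reference, scale=10):
    """Scale readings by 10 and take the difference from per-cell references."""
    return [
        [round(t * scale - r * scale) for t, r in zip(row, ref_row)]
        for row, ref_row in zip(frame, reference)
    ]

reference = [[28.0] * 4 for _ in range(4)]       # baseline per detection area
frame = [[28.0, 28.0, 28.5, 28.0],               # a person warms two cells
         [28.0, 29.1, 28.0, 28.0],
         [28.0, 28.0, 28.0, 28.0],
         [28.0, 28.0, 28.0, 28.0]]

diff = preprocess(frame, reference)              # e.g. 28.5C -> 285 - 280 = 5
```

A sequence of such 4×4 difference frames is exactly the "image-plus-time-series" shape that motivates feeding a CNN per frame and an LSTM across frames.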

Incidence of Hypertension in a Cohort of an Adult Population (성인코호트에서 고혈압 발생률)

  • Kam, Sin;Oh, Hee-Sook;Lee, Sang-Won;Woo, Kook-Hyeun;Ahn, Moon-Young;Chun, Byung-Yeol
    • Journal of Preventive Medicine and Public Health
    • /
    • v.35 no.2
    • /
    • pp.141-146
    • /
    • 2002
  • Objectives : This study was performed in order to assess the incidence of hypertension based on a two-year follow-up of a rural hypertension-free cohort in Korea. Methods : The study cohort comprised 2,580 subjects aged above 20 (1,107 men and 1,473 women) of Chung-Song County in Kyungpook Province judged to be hypertension-free at the baseline examination in 1996. For each of the two examinations in the two-year follow-up, subjects free of hypertension were followed for the development of hypertension to the next examination one year (1997) and two years later (1998). The drop-out rate was 24.7% in men and 19.6% in women. Hypertension was defined as follows: 1) above mild hypertension as a SBP above 140 mmHg or a DBP above 90 mmHg; 2) above moderate hypertension as a SBP above 160 mmHg or a DBP above 100 mmHg, or when the participant reported having used antihypertensive medication after the beginning of this survey. Results : The age-standardized incidence of above mild hypertension in men was 6 per 100 person-years (PYS) and that of above moderate hypertension was 1.2. In women, the age-standardized rates were 5.7 and 1.5 per 100 PYS for above mild and above moderate hypertension, respectively. However, the incidence rates calculated by the risk method were 4.8% and 1.0% in men and 4.6% and 1.2% in women, respectively. In both genders, incidence was significantly associated with advancing age (p<0.01). In men, the incidences of above moderate hypertension by age group were 0.5 per 100 PYS for ages 20-39, 0.7 for 40-49, 1.7 for 50-59, 3.6 for 60-69, and 5.8 for above 70 (p<0.01). In women, the corresponding incidences were 0.6 per 100 PYS for ages 20-39, 1.8 for 40-49, 1.3 for 50-59, 3.3 for 60-69, and 5.6 for above 70 (p<0.01). After age 60, the incidence of hypertension increased rapidly.
Conclusions : The hypertension incidence data reported in this study may serve as reference data for evaluating the impact of future public efforts in the primary prevention of hypertension in Korea.
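The two incidence measures contrasted above (the person-years density method and the risk, i.e. cumulative incidence, method) differ only in their denominators. A minimal sketch, with illustrative counts rather than the cohort's actual figures:

```python
# Sketch of the two incidence measures; case counts and denominators are
# illustrative, not this study's data.

def incidence_per_100_pys(cases, person_years):
    """Incidence density: new cases per 100 person-years of follow-up."""
    return 100 * cases / person_years

def cumulative_risk(cases, persons_at_risk):
    """Risk method: proportion of initially disease-free persons who develop it."""
    return cases / persons_at_risk

# e.g. 60 new hypertension cases over 1,000 person-years of observation,
# and 48 cases among 1,000 persons followed for the full period
rate = incidence_per_100_pys(60, 1000)   # 6.0 per 100 PYS
risk = cumulative_risk(48, 1000)         # 0.048, i.e. 4.8%
```

The rate uses actual observed time (so drop-outs contribute partial follow-up), while the risk divides by persons, which is why the two figures in the abstract need not coincide.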

Purification Characteristics and Hydraulic Conditions in an Artificial Wetland System (인공습지시스템에서 수리학적 조건과 수질정화특성)

  • Park, Byeng-Hyen;Kim, Jae-Ok;Lee, Kwng-Sik;Joo, Gea-Jae;Lee, Sang-Joon;Nam, Gui-Sook
    • Korean Journal of Ecology and Environment
    • /
    • v.35 no.4 s.100
    • /
    • pp.285-294
    • /
    • 2002
  • The purpose of this study was to evaluate the relationships between purification characteristics and hydraulic conditions, and to clarify the basic and essential factors to be considered in the construction and management of an artificial wetland system for the improvement of reservoir water quality. The artificial wetland system was composed of a pumping station and six sequential plant beds with five species of macrophytes: Oenanthe javanica, Acorus calamus, Zizania latifolia, Typha angustifolia, and Phragmites australis. The system was operated as a free surface-flow system, and the operating conditions were $3,444-4,156\; m^3/d$ of inflow rate, 0.5-2.0 hr of HRT, 0.1-0.2 m of water depth, 6.0-9.4 m/d of hydraulic loading, and relatively low nutrient concentrations (0.224-2.462 mgN/L, 0.145-0.164 mgP/L) in the inflow water. The mean purification efficiencies of TN ranged from 12.1% to 14.3%, with the highest efficiency at the Phragmites australis bed, and those of TP were 6.3-9.5%, with similar efficiencies among all species. The mean purification efficiencies of SS and Chl-a ranged from 17.4% to 38.5% and from 12.0% to 20.2%, respectively, and the Oenanthe javanica bed, which received a higher influent concentration than the others, showed the highest efficiency. The mean purification amounts per day of each pollutant were $9.8-4.1\;g{\cdot}m^{-2}{\cdot}d^{-1}$ in BOD, $1.299-2.343\;g{\cdot}m^{-2}{\cdot}d^{-1}$ in TN, $0.085-1.821\;g{\cdot}m^{-2}{\cdot}d^{-1}$ in TP, $17.9-111.6\;g{\cdot}m^{-2}{\cdot}d^{-1}$ in SS and $0.011-0.094\;g{\cdot}m^{-2}{\cdot}d^{-1}$ in Chl-a. The purification amount per day of TN was highest at the Zizania latifolia bed, and that of TP at the Acorus calamus bed. SS and Chl-a, as particulate materials, showed the highest purification amounts per day at the Oenanthe javanica bed, which was high across all parameters.
It was estimated that the purification amount per day increased with higher influent concentrations and higher shoot density of macrophytes, as was also shown in the purification efficiency. Correlation coefficients ($R^2$) between purification efficiencies and hydraulic conditions (HRT and inflow rate) were 0.016-0.731 in terms of HRT and 0.015-0.868 in terms of daily inflow rate. Correlation coefficients ($R^2$) of purification amounts per day with hydraulic conditions were 0.173-0.763 in terms of HRT and 0.209-0.770 in terms of daily inflow rate. Among the correlation coefficients between purification efficiency and hydraulic conditions, the percentage of $R^2$ values over 0.5 was 20% for both HRT and daily inflow rate. However, the percentages of $R^2$ values over 0.5 between purification amount per day and hydraulic conditions were 53% for HRT and 73% for daily inflow rate. The relationships between purification amount per day and the hydraulic conditions were thus more significant than those of purification efficiency. In this study, high hydraulic conditions (HRT and inflow rate) did not appear to significantly affect the purification efficiency of nutrients. Therefore, for the improvement of a eutrophic reservoir with relatively low nutrient concentrations and a large quantity of water to be treated, the emphasis should be on the purification amounts per day under high hydraulic loadings (HRT and inflow rate).
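The two performance measures compared above can be sketched as simple mass-balance formulas. The flow, concentrations, and bed area below are illustrative assumptions, not the system's measured values.

```python
# Sketch of the two wetland performance measures; all numbers are illustrative.

def purification_efficiency(c_in, c_out):
    """Removal efficiency as a percentage of the influent concentration."""
    return 100 * (c_in - c_out) / c_in

def purification_amount_per_day(c_in, c_out, inflow_m3_per_day, bed_area_m2):
    """Mass removed per square meter of bed per day (g·m^-2·d^-1).

    Concentrations in mg/L are numerically equal to g/m^3, so
    (g/m^3) * (m^3/d) / m^2 gives g·m^-2·d^-1 directly.
    """
    removed_g_per_day = (c_in - c_out) * inflow_m3_per_day
    return removed_g_per_day / bed_area_m2

# e.g. TN at 2.0 mg/L in, 1.75 mg/L out, 4,000 m^3/d through an 800 m^2 bed
eff = purification_efficiency(2.0, 1.75)                  # 12.5 %
load = purification_amount_per_day(2.0, 1.75, 4000, 800)  # 1.25 g/m^2/d
```

This makes the paper's point concrete: at a fixed removal efficiency, the daily purification amount scales directly with the inflow rate, which is why the amount correlates with hydraulic loading more strongly than the efficiency does.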

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults have a ripple effect on the local and national economy, beyond stakeholders such as managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, analyses of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, where everything collapses in a single moment. The key variables used in corporate default prediction vary over time. Deakin's (1972) study, revisiting the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed. In Grice's (2001) study, the changing importance of predictive variables was also found through Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider the changes that occur over the course of time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively.
In order to construct a bankruptcy model that is consistent over time, we first train the deep learning time series models on the data before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithms is conducted on validation data that includes the financial crisis period (2007-2008). As a result, we construct models that show patterns similar to the training results and excellent prediction power. After that, each bankruptcy prediction model is retrained by integrating the training and validation data (2000-2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model is evaluated and compared on test data (2009) using the models trained over the nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable selection methods (multivariate discriminant analysis and the logit model), we show that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. Corporate data suffer from the limitations of nonlinear variables, multicollinearity among variables, and lack of data.
The logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology, a leading technology for the future, is moving from simple human analysis toward automated AI analysis and, ultimately, intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and is more effective in prediction power. Through the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative analysis material for non-specialists who begin studies combining financial data with deep learning time series algorithms.
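The chronological split described above (train on 2000-2006, validate on the crisis years 2007-2008, test on 2009) can be sketched as a simple year-based partition. The record structure below is a hypothetical stand-in for the paper's firm-year data.

```python
# Sketch of the paper's chronological split; `records` is a hypothetical list
# of (year, features, default_label) tuples, not the actual dataset.

def split_by_year(records, train_end=2006, valid_end=2008):
    """Split firm-year records chronologically to avoid look-ahead bias."""
    train = [r for r in records if r[0] <= train_end]
    valid = [r for r in records if train_end < r[0] <= valid_end]
    test = [r for r in records if r[0] > valid_end]
    return train, valid, test

records = [(y, {"debt_ratio": 0.4 + 0.01 * i}, 0)
           for i, y in enumerate(range(2000, 2010))]  # one record per year
train, valid, test = split_by_year(records)           # 7 / 2 / 1 years
```

Splitting by year rather than at random is what lets the validation set deliberately contain the crisis-period regime shift that the study wants the model to survive.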

A Study of Dohang-Ri wooden coffin and Anya-Kuk (도항리목관묘(道項里木棺墓) 안사국(安邪國))

  • Lee, Ju-Huen
    • Korean Journal of Heritage: History & Science
    • /
    • v.37
    • /
    • pp.5-37
    • /
    • 2004
  • Wooden coffin tombs have been given academic attention in Kaya(伽倻) studies because they help place the Samhan stage appropriately within the development of ancient Korean history. Special attention must be paid to the Dohang-Ri(道項里) wooden coffin tombs, since they are expected to explain the origin of Arakaya(阿羅伽倻) in the southern Korean peninsula. Two types of Dohang-Ri wooden coffin are generally known, and they developed distinctive features within Chin-Han(辰韓) and Byun-Han(弁韓). The fact that coffins of this type have otherwise been discovered only in the elite tombs of Changwon Daho-Ri(昌原茶戶里) is notable, suggesting a political connection between Kuya-Kuk(狗邪國) and Anya-Kuk(安邪國). The various ironware unearthed at Dohang-Ri is similar to that from Daho-Ri, but Dohang-Ri lacks the Chinese-made bronze mirrors that symbolized the dignity of the ruling class. It appears that the political unit at Daho-Ri was a more advanced society and a stronger central force within Byun-Han than that at Dohang-Ri. In the latter half of the second century AD, I consider that the wooden coffins at Dohang-Ri and Daho-Ri changed into wooden chamber tombs, and the coffin type disappears from the record. Because no conclusive archaeological sites have been confirmed, a cautious interpretation is required. I infer that this change is connected with the wars among the eight countries of the southern regions of the Korean peninsula in the first half of the third century AD. In short, the political units of Dohang-Ri and Daho-Ri concentrated trade along the Nakdong River(洛東江) and Nam River(南江) water systems and formed a system of mutual economic and political coexistence.

Documentation of Intangible Cultural Heritage Using Motion Capture Technology Focusing on the documentation of Seungmu, Salpuri and Taepyeongmu (부록 3. 모션캡쳐를 이용한 무형문화재의 기록작성 - 국가지정 중요무형문화재 승무·살풀이·태평무를 중심으로 -)

  • Park, Weonmo;Go, Jungil;Kim, Yongsuk
    • Korean Journal of Heritage: History & Science
    • /
    • v.39
    • /
    • pp.351-378
    • /
    • 2006
  • With the development of media, the methods for documenting intangible cultural heritage have also developed and diversified. In addition to the previous analogue methods of documentation, new multimedia technologies centered on digital photographs, sound sources, and video have recently been applied. Among the new technologies, documentation of intangible cultural heritage using 'motion capture' has proved especially prominent in fields that require three-dimensional documentation, such as dances and performances. Motion capture refers to a documentation technology that records the signals of time-varying positions derived from sensors attached to the surface of an object. It converts the signals from the sensors into digital data that can be plotted as points on the virtual coordinates of a computer and records the movement of those points over a period of time as the object moves. It produces scientific data for the preservation of intangible cultural heritage by displaying digital data representing the virtual motion of a holder of an intangible cultural heritage. The National Research Institute of Cultural Properties (NRICP) has been working on the development of a new documentation method for the Important Intangible Cultural Heritage designated by the Korean government, using the motion capture equipment that is also widely used for computer graphics in the movie and game industries. The project, supported by lottery funds, is designed to apply motion capture technology over three years, from 2005 to 2007, to 11 performances from 7 traditional dances whose body gestures have considerable value among the Important Intangible Cultural Heritage performances.
In 2005, the first year of the project, data were accumulated for solo dances that are relatively easy in terms of performing skills, such as Seungmu (monk's dance), Salpuri (a solo dance for spiritual cleansing), and Taepyeongmu (dance of peace). In 2006, group dances such as Jinju Geommu (Jinju sword dance), Seungjeonmu (dance for victory), and Cheoyongmu (dance of Lord Cheoyong) will be documented. In the last year of the project, 2007, an education programme for comparative studies, analysis, and transmission of intangible cultural heritage, as well as three-dimensional contents for public service, will be devised based on the accumulated data, along with the documentation of Hakyeonhwadae Habseolmu (crane dance combined with the lotus blossom dance). By describing the processes and results of the motion capture documentation of Salpuri (Lee Mae-bang), Taepyeongmu (Kang Seon-young), and Seungmu (Lee Mae-bang, Lee Ae-ju, and Jung Jae-man) conducted in 2005, this report introduces a new approach to the documentation of intangible cultural heritage. During the first year of the project, two questions were raised. First, how can we capture the motions of a holder (dancer) without cutoffs during a long performance? After many tests, the motion capture system proved stable, producing continuous results. Second, how can we reproduce accurate motion without a re-targeting process? For the first time in Korea, the project recreated the dancers' gestures most accurately by scanning the shape of each dancer's body into digital data before the motion capture process. The accurate three-dimensional body models of the four holders obtained by body scanning enhanced the accuracy of the motion capture of the dances.
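The data model behind the technology described above is simple: a capture session is a time series of frames, each mapping marker (sensor) names to 3D coordinates. A toy sketch, with illustrative marker names and coordinates rather than any real capture data:

```python
# Toy sketch: motion capture as frames of marker -> (x, y, z) positions.
# Marker names and coordinates are purely illustrative.

def marker_displacement(frames, marker):
    """Total distance a named marker travels across consecutive frames."""
    total = 0.0
    for prev, cur in zip(frames, frames[1:]):
        (x0, y0, z0), (x1, y1, z1) = prev[marker], cur[marker]
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return total

frames = [  # one dict per captured frame
    {"right_wrist": (0.0, 1.0, 0.0)},
    {"right_wrist": (0.0, 1.0, 0.3)},
    {"right_wrist": (0.4, 1.0, 0.3)},
]
dist = marker_displacement(frames, "right_wrist")  # ≈ 0.3 + 0.4
```

Quantities like this, computed per joint over a whole performance, are what make captured dances comparable and analyzable in a way video alone is not.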

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups to meet growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To provide those services, reduced latency and high reliability, on top of high data speed, are critical for real-time services. Thus, 5G has paved the way for service delivery through a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of $10^6$ devices/㎢. In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, reduced delay and reliability for real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves can carry data at high speed thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability in offering delay-sensitive services, because communication with many nodes creates overload in its processing. Basically, SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable of delay.
Since SDNs with a typical centralized structure have difficulty meeting the desired delay level, studies on the optimal size of an SDN for information processing are needed. SDNs therefore need to be partitioned at a certain scale to construct a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In such SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, the RTD is not a significant factor, as the link is fast enough to keep it below 1 ms, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells of 50-250 m in cell radius, and vehicle speeds of 30-200 km/h were considered in order to examine the network architecture that minimizes the delay.
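As a rough illustration of why the cell layer matters for delay, the time a vehicle spends inside one small cell can be estimated from the cell radii and speeds assumed above. This is a back-of-the-envelope sketch, not part of the paper's simulation.

```python
# Back-of-the-envelope cell dwell time for the assumed parameter ranges
# (50-250 m radius, 30-200 km/h); the diameter crossing is the longest chord.

def cell_dwell_time_s(cell_radius_m, speed_kmh):
    """Time to cross a circular cell along its diameter, in seconds."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return (2 * cell_radius_m) / speed_ms

fast = cell_dwell_time_s(50, 200)   # smallest cell, fastest car: ~1.8 s
slow = cell_dwell_time_s(250, 30)   # largest cell, slowest car: ~60 s
```

At the fast end, a vehicle changes cells roughly every two seconds, so the information change cycle and the SDN's per-handover processing time dominate the achievable delay, exactly the variables the paper singles out.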

Structural features and Diffusion Patterns of Gartner Hype Cycle for Artificial Intelligence using Social Network analysis (인공지능 기술에 관한 가트너 하이프사이클의 네트워크 집단구조 특성 및 확산패턴에 관한 연구)

  • Shin, Sunah;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.107-129
    • /
    • 2022
  • Preempting new technology is important because technology competition is getting much tougher, and stakeholders continuously conduct exploration activities to occupy new technologies at the right time. Gartner's Hype Cycle has significant implications for such stakeholders. The Hype Cycle is an expectation graph for new technologies that combines the technology life cycle (S-curve) with the hype level. Stakeholders such as R&D investors, CTOs (Chief Technology Officers), and technical personnel are very interested in Gartner's Hype Cycle for new technologies, because high expectations for a new technology can create opportunities to maintain investment by securing the legitimacy of R&D spending. However, contrary to the high interest of industry, preceding research has faced limitations in its empirical methods and source data (news, academic papers, search traffic, patents, etc.). This study focuses on two research questions. The first is: 'Is there a difference in the characteristics of the network structure at each stage of the hype cycle?' To answer it, the structural characteristics of each stage were examined through the component cohesion size. The second is: 'Is there a pattern of diffusion at each stage of the hype cycle?' This question was addressed through the centralization index and network density. The centralization index is a variance-like concept: a higher centralization index means that a small number of nodes are central in the network. Concentration on a small number of nodes implies a star network structure. Among network structures, the star network is a centralized structure and shows better diffusion performance than a decentralized network (circle structure), because the nodes at the center of information transfer can judge useful information and deliver it to other nodes fastest.
So we examined the out-degree and in-degree centralization indices for each stage. For this purpose, we analyzed the structural features of the communities and the expectation diffusion patterns using Social Network Service (SNS) data for the 'Gartner Hype Cycle for Artificial Intelligence, 2021'. Twitter data for 30 technologies (excluding four technologies) listed in the 'Gartner Hype Cycle for Artificial Intelligence, 2021' were analyzed, using the R program (ver. 4.1.1) and Cyram Netminer. From October 31, 2021 to November 9, 2021, 6,766 tweets were collected through the Twitter API and converted into relationships between a user's tweet (Source) and other users' retweets (Target). As a result, 4,124 edge lists were analyzed. The study confirmed the structural features and diffusion patterns by analyzing component cohesion size, degree centralization, and density. We found that the groups at each stage increased in number of components as time passed, while density decreased. Also, the 'Innovation Trigger' group, which is interested in new technologies like the early adopters of innovation diffusion theory, had a high out-degree centralization index, while the other groups had higher in-degree than out-degree centralization indices. It can be inferred that the 'Innovation Trigger' group has the biggest influence, and that diffusion gradually slows down in the subsequent groups. Unlike the methods of preceding studies, this study conducted network analysis using social network service data. This is significant in that it provides an idea for expanding the methods of analysis when analyzing Gartner's Hype Cycle in the future. In addition, applying innovation diffusion theory to the stages of Gartner's Hype Cycle in artificial intelligence can be evaluated positively, because the Hype Cycle has repeatedly been criticized for its theoretical weakness.
It is also expected that this study will provide stakeholders with a new perspective on technology investment decision-making.
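The density and centralization measures used in the study can be illustrated in a minimal Python sketch (the study itself used R and Cyram Netminer, so the code below is only an assumed reimplementation). It computes network density and Freeman-style degree centralization on toy retweet networks, contrasting the star structure (one influential hub, as in the 'Innovation Trigger' group) with a decentralized ring:

```python
# Illustrative sketch, not the paper's actual code: Freeman degree
# centralization and density for directed retweet-style networks,
# using the networkx library.
import networkx as nx

def degree_centralization(G, mode="out"):
    """Freeman centralization for in- or out-degree of a directed graph.

    Returns 1.0 for a perfect star (one hub connected to all others)
    and values near 0 for evenly distributed (ring-like) networks.
    """
    n = G.number_of_nodes()
    if n < 3:
        return 0.0
    degrees = [d for _, d in (G.out_degree() if mode == "out" else G.in_degree())]
    d_max = max(degrees)
    # The maximum possible sum of (d_max - d_i) is (n-1)^2, attained by a star.
    return sum(d_max - d for d in degrees) / ((n - 1) ** 2)

# Star-shaped toy network: one hub tweets, five other users retweet it
# (edges point from the tweet's source to the retweeting targets).
star = nx.DiGraph([(0, i) for i in range(1, 6)])
# Ring-shaped toy network: information passes evenly around six users.
ring = nx.DiGraph([(i, (i + 1) % 6) for i in range(6)])

print(degree_centralization(star, "out"))  # 1.0 -> fully centralized
print(degree_centralization(ring, "out"))  # 0.0 -> fully decentralized
print(nx.density(star), nx.density(ring))  # directed density m / (n*(n-1))
```

A high out-degree centralization, as observed for the 'Innovation Trigger' group, corresponds to the star-like case where one account's tweets are retweeted by many others.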

The cinematic interpretation of pansori and its transformation process (판소리의 영화적 해석과 변모의 과정)

  • Song, So-ra
    • (The) Research of the performance art and culture
    • /
    • no.43
    • /
    • pp.47-78
    • /
    • 2021
  • This study examines the acceptance of pansori in films based on it, and explores changes in modern society's perception of and expectations for pansori. Pansori was loved by both the upper and lower classes in the late Joseon period, but lost that status during Japanese colonial rule and the Korean War. In response, the country designated pansori as an Important Intangible Cultural Property in 1964 to protect it from disappearing. Until the 1980s, however, pansori did not gain popularity on its own. After the 2000s, pansori tried to connect with the contemporary public amid the socio-cultural demand to globalize Korean culture, and it is now one of the most popular cultural forms in the world, as the pop band Feel the Rhythm of KOREA shows. The changing public perception of pansori and its status in modern society can also be seen in the mass medium of film. This study explored the process of this change through six films based on pansori, from "Seopyeonje," directed by Lim Kwon-taek in 1993, to "The Singer" in 2020. First are the films "Seopyeonje" and "Hwimori," produced in the 1990s. Both show the reality of pansori, which had fallen out of public interest due to the crisis of transmission in the early and mid-20th century, and in the midst of that they capture singers struggling fiercely for the artistic completion of pansori itself. Next are "Lineage of the Voice" (2008) and "DURESORI: The Voice of East" (2012). These two films depict the growth of children who perform art, featuring contemporary children who play pansori and Korean traditional music. Pansori in these films is no longer an old form of music, nor a sublime art completed through harsh training; it is treated naturally as one of the contemporary arts. Finally, there are "The Sound of a Flower" (2015) and "The Singer" (2020).
These two films set their stories, drawn from pansori's history, in the late Joseon Dynasty, when pansori was most loved by the people. This reflects the atmosphere of an era in which traditions are used as the subject of cultural content, and shows the changed public perception of pansori and its status.