• Title/Summary/Keyword: 채널 판별 (channel discrimination)

72 search results

Sports Media Value in New Media Platform Era: The Role of Media Engagement and Empathy (뉴미디어 플랫폼 시대의 스포츠미디어 가치: 미디어 인게이지먼트와 공감의 역할)

  • Choi, Eui-Yul;Jeon, Yong-Bae;Kim, Hyun-Duck
    • Journal of the Korean Applied Science and Technology / v.39 no.3 / pp.433-441 / 2022
  • The purpose of this study is to investigate the relationships among media engagement, media empathy, and media value in MCN sports broadcasting. To this end, a survey was conducted with 324 MCN sports broadcast viewers. Exploratory factor analysis was performed to confirm validity, and Cronbach's α was computed to assess reliability. In addition, correlation analysis was performed to verify discriminant validity, and linear regression analysis was used to test the research hypotheses, leading to the following conclusions: media engagement had a positive effect on media value, media engagement had a positive effect on media empathy, and media empathy had a positive effect on media value.
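As a rough illustration of the analysis pipeline described in this abstract (Cronbach's α for reliability, then simple linear regressions among the constructs), a minimal sketch is shown below. The survey DataFrame, column names, and item counts are hypothetical, not the study's actual data or instrument.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert-type items (one column per item)."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point survey responses for 324 viewers: 3 items per construct.
survey = pd.DataFrame(np.random.randint(1, 8, size=(324, 9)),
                      columns=[f"eng{i}" for i in range(3)] +
                              [f"emp{i}" for i in range(3)] +
                              [f"val{i}" for i in range(3)])

engagement = survey[[c for c in survey if c.startswith("eng")]]
empathy    = survey[[c for c in survey if c.startswith("emp")]]
value      = survey[[c for c in survey if c.startswith("val")]]

print("alpha(engagement) =", round(cronbach_alpha(engagement), 3))

# Simple linear regressions mirroring the three hypotheses.
hypotheses = {"engagement -> value":   (engagement, value),
              "engagement -> empathy": (engagement, empathy),
              "empathy -> value":      (empathy, value)}
for name, (x, y) in hypotheses.items():
    model = sm.OLS(y.mean(axis=1), sm.add_constant(x.mean(axis=1))).fit()
    print(name, "beta =", round(model.params.iloc[1], 3),
          "p =", round(model.pvalues.iloc[1], 4))
```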

LG Household and Healthcare's Cosmetic Brand, OHUI CRM Strategy Case (LG생활건강 백화점 화장품 브랜드 오휘의 CRM전략)

  • Lee, Wansoo;Hur, Wonmoo
    • Asia Marketing Journal / v.7 no.1 / pp.91-112 / 2005
  • LG Household & Healthcare has achieved its goals of revenue growth and customer loyalty increase by applying new CRM strategy of OHUI brand, which is ranked in the middle of department store cosmetic channel. OHUI has set up detailed CRM strategy in order to solve AS-Is issues found by systematical review. First OHUI has gained critical mass and increased customer loyalty by developing customized loyalty program. OHUI also simplified customer types in order for employees to identify the customer type and apply the incentive program. As a result, the company has maximized the power of execution of its new strategy. Finally, OHUI has stabilized CRM by sharing best practice and implementing KPI. Throughout a series of CRM initiatives, OHUI has marked outstanding revenue growth and market share comparing to its competitors.


Detection of Underwater Transient Signals Using Noise Suppression Module of EVRC Speech Codec (EVRC 음성부호화기의 잡음억제단을 이용한 수중 천이신호 검출)

  • Kim, Tae-Hwan;Bae, Keun-Sung
    • The Journal of the Acoustical Society of Korea / v.26 no.6 / pp.301-305 / 2007
  • In this paper, we propose a simple algorithm for detecting underwater transient signals, based on the fact that their frequency range is similar to the audio frequency range. For this purpose, we use the preprocessing module of the EVRC speech codec, the standard speech codec for mobile communications. When a signal is fed into the EVRC noise suppression module, we obtain parameters such as the update flag, the energy of each channel, the noise-suppressed signal, the energy of the input signal, the energy of the background noise, and the energy of the enhanced signal. The energy of the enhanced signal, normalized by the energy of the background noise, is then compared with a predefined detection threshold to detect the transient signal, and the detection threshold is updated from its previous value during noise-only periods. Experimental results show that the proposed algorithm has a $0{\sim}4\%$ error rate in AWGN and colored noise environments.
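A minimal sketch of the detection rule described above follows: the per-frame energy of a noise-suppressed signal is normalized by the background-noise energy and compared with a threshold that is updated during noise-only frames. The EVRC noise-suppression module itself is not reimplemented here; the frame length, smoothing factor, and margin are illustrative assumptions, and `enhanced` / `noise_energy` stand in for the codec's outputs.

```python
import numpy as np

def detect_transients(enhanced: np.ndarray, noise_energy: float,
                      frame_len: int = 256, margin: float = 3.0,
                      alpha: float = 0.95) -> list:
    """Flag frames whose noise-normalized energy exceeds an adaptive threshold.

    `enhanced` stands in for the noise-suppressed output of the EVRC
    noise-suppression stage; `noise_energy` for its background-noise estimate.
    The threshold update rule below is a simple smoothing assumption.
    """
    detections, threshold = [], margin
    n_frames = len(enhanced) // frame_len
    for i in range(n_frames):
        frame = enhanced[i * frame_len:(i + 1) * frame_len]
        ratio = np.sum(frame ** 2) / max(noise_energy, 1e-12)  # normalized frame energy
        if ratio > threshold:
            detections.append(i)  # transient detected in this frame
        else:
            # noise-only frame: update the threshold from its previous value
            threshold = alpha * threshold + (1 - alpha) * margin * ratio
    return detections
```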

Cloud Detection Using HIMAWARI-8/AHI Based Reflectance Spectral Library Over Ocean (Himawari-8/AHI 기반 반사도 분광 라이브러리를 이용한 해양 구름 탐지)

  • Kwon, Chaeyoung;Seo, Minji;Han, Kyung-Soo
    • Korean Journal of Remote Sensing / v.33 no.5_1 / pp.599-605 / 2017
  • Accurate cloud discrimination in satellite images strongly affects the accuracy of the remotely sensed parameters derived from them. In particular, cloud-contaminated pixels over the ocean are a major error source in retrievals such as Sea Surface Temperature (SST), ocean color, and chlorophyll-a, so accurate cloud detection is an essential step and supports understanding of ocean circulation. However, the static threshold methods used in real-time algorithms such as those of the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Himawari Imager (AHI) cannot fully explain the variability of ocean reflectance as a function of the relative positions of the sun, sea surface, and satellite. In this paper, we assembled a reflectance spectral library as a function of Solar Zenith Angle (SZA) and Viewing Zenith Angle (VZA) from clear-sky ocean surface reflectance of the AHI, identified using NOAA's cloud products, and applied Dynamic Time Warping (DTW) against this library to detect cloud pixels. A qualitative comparison between the AHI cloud property product and our results showed that the AHI product tends to overestimate cloud and wrongly labels clear pixels as unknown at high SZA. Validation by visual inspection of coincident imagery indicates that our results are generally appropriate.
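The DTW matching step could look roughly like the sketch below: an observed multi-band reflectance vector for a pixel is aligned to the clear-sky library entry for the nearest (SZA, VZA) bin, and the pixel is flagged as cloudy when the DTW distance exceeds a threshold. The library structure, 5-degree angle bins, and threshold are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

def is_cloudy(pixel_reflectance: np.ndarray, library: dict,
              sza: float, vza: float, threshold: float = 0.15) -> bool:
    """Compare a pixel's band reflectances with the clear-sky library entry
    for the nearest (SZA, VZA) bin; a large DTW distance implies cloud."""
    key = (int(round(sza / 5) * 5), int(round(vza / 5) * 5))  # assumed 5-degree bins
    clear_spectrum = library[key]
    return dtw_distance(pixel_reflectance, clear_spectrum) > threshold

# Example with a hypothetical one-entry library (per-band reflectances).
library = {(30, 40): np.array([0.04, 0.05, 0.06, 0.05])}
print(is_cloudy(np.array([0.32, 0.35, 0.33, 0.30]), library, sza=31.2, vza=39.0))
```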

Development of a Spectrum Analysis Software for Multipurpose Gamma-ray Detectors (감마선 검출기를 위한 스펙트럼 분석 소프트웨어 개발)

  • Lee, Jong-Myung;Kim, Young-Kwon;Park, Kil-Soon;Kim, Jung-Min;Lee, Ki-Sung;Joung, Jin-Hun
    • Journal of radiological science and technology / v.33 no.1 / pp.51-59 / 2010
  • We developed analysis software that automatically identifies incoming isotopes for multipurpose gamma-ray detectors. The software is divided into three major parts: the Network Interface Module (NIM), the Spectrum Analysis Module (SAM), and the Graphic User Interface Module (GUIM). The core part is the SAM, which extracts peak information from the energy spectrum of data collected over the network and identifies isotopes by comparing the peaks with pre-calibrated libraries. The proposed peak detection algorithm was used to construct libraries of standard isotopes with two peaks and to identify unknown isotopes against the constructed libraries. We tested the software with the GammaPro1410 detector developed by NuCare Medical Systems. The results showed that the NIM handled 200K counts per second and that most of the tested isotopes were correctly recognized within a 1% error range when a single unknown isotope was used in the detection test. The software is expected to be used for radiation monitoring in various settings such as hospitals, power plants, and research facilities.
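The SAM's peak-matching idea can be sketched as follows: detect peaks in a measured spectrum and score each library isotope by how many of its characteristic peak energies fall within a tolerance of a detected peak. The library contents, prominence setting, energy calibration, and tolerance below are illustrative assumptions, not the software's actual parameters.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical library: isotope -> characteristic gamma-ray energies (keV).
LIBRARY = {"Co-60": [1173.2, 1332.5], "Na-22": [511.0, 1274.5], "Cs-137": [661.7]}

def identify_isotope(counts: np.ndarray, energy_axis: np.ndarray,
                     tolerance_kev: float = 10.0) -> str:
    """Return the library isotope whose peaks best match the measured spectrum."""
    peak_idx, _ = find_peaks(counts, prominence=counts.max() * 0.05)
    peak_energies = energy_axis[peak_idx]

    def score(ref_peaks):
        if len(peak_energies) == 0:
            return 0
        # count reference peaks that lie within tolerance of some detected peak
        return sum(np.min(np.abs(peak_energies - e)) < tolerance_kev for e in ref_peaks)

    best = max(LIBRARY, key=lambda iso: score(LIBRARY[iso]))
    return best if score(LIBRARY[best]) > 0 else "unknown"
```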

Monitoring-based Coordination of Network-adaptive FEC for Wireless Multi-hop Video Streaming (무선 멀티 홉 비디오 스트리밍을 위한 모니터링 기반의 네트워크 적응적 FEC 코디네이션)

  • Choi, Koh;Yoo, Jae-Yong;Kim, Jong-Won
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.2A / pp.114-126 / 2011
  • Video streaming over wireless multi-hop networks (WMNs) faces challenges from channel fading and the variable bandwidth of the wireless channel, which degrade streaming performance. To overcome these challenges, WMNs can use a Forward Error Correction (FEC) mechanism. The traditional FEC schemes applied to video streaming in WMNs, end-to-end (E2E) FEC and hop-by-hop (HbH) FEC, suffer from long transmission delay, high computational complexity, and inefficient use of resources, and they are limited in their ability to distinguish the network status along the streaming path. In this paper, we propose monitoring-based coordination of a network-adaptive hop-to-end (H2E) FEC scheme. To enable the proposed scheme, we employ a centralized coordinator that observes the overall monitoring information and coordinates the H2E-FEC mechanism. The main points of H2E-FEC are determining the operation range and selecting the FEC starting node and redundancy from the monitored results. To verify the proposed scheme, we performed extensive experiments over an OMF (Orbit Measurement Framework) and IEEE 802.11a-based multi-hop WMN testbed and obtained a 17% performance improvement over existing FEC schemes.
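A rough sketch of what a coordinator's decision step might look like is given below: from monitored per-hop loss rates it picks the first lossy hop as the FEC starting node and sizes the redundancy of an (n, k) block code so the expected residual loss on the remaining path meets a target. The independent-loss model, loss threshold, block size, and target are illustrative assumptions, not the paper's actual coordination logic.

```python
from math import comb

def residual_loss(p: float, n: int, k: int) -> float:
    """Probability that more than n-k of n packets are lost (block unrecoverable),
    assuming independent packet losses with rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n - k + 1, n + 1))

def coordinate_fec(hop_loss_rates: list, k: int = 16,
                   target_residual: float = 1e-3, max_parity: int = 16):
    """Pick the first hop whose loss exceeds 1% as the FEC starting node and the
    smallest redundancy meeting the target residual loss on the remaining path."""
    start = next((i for i, p in enumerate(hop_loss_rates) if p > 0.01), 0)
    survive = 1.0
    for p in hop_loss_rates[start:]:
        survive *= (1 - p)
    p_path = 1 - survive  # effective loss rate from the starting node to the receiver
    for parity in range(max_parity + 1):
        if residual_loss(p_path, k + parity, k) <= target_residual:
            return start, parity
    return start, max_parity

# Example: monitored loss rates of a 4-hop path.
print(coordinate_fec([0.001, 0.02, 0.05, 0.01]))
```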

Sensitivity Analysis of IR Aerosol Detection Algorithm (적외선 채널을 이용한 에어로솔 탐지의 경계값 및 민감도 분석)

  • Ha, Jong-Sung;Lee, Hyun-Jin;Kim, Jae-Hwan
    • Korean Journal of Remote Sensing / v.22 no.6 / pp.507-518 / 2006
  • Radiation at $11{\mu}m$ is absorbed more than at $12{\mu}m$ when aerosols are loaded in the atmosphere, whereas the opposite holds when cloud is present. The difference between the two channels therefore provides an opportunity to detect aerosols such as Yellow Sand even in the presence of clouds and at night. Problems with this approach arise, however, because the difference can be affected by various atmospheric and surface conditions. In this paper, we analyze in detail how the threshold and sensitivity of the brightness temperature difference between the two channels (BTD) vary with these conditions. The key finding is that the BTD threshold distinguishing aerosols from cloud is $0.8^{\circ}K$ for the US standard atmosphere, which is greater than the typical value of $0^{\circ}K$. The threshold and sensitivity studies show that solar zenith angle, aerosol altitude, surface reflectivity, and the atmospheric temperature profile affect the BTD only marginally. However, satellite zenith angle, surface temperature together with emissivity, and the vertical profile of water vapor influence the BTD strongly, by as much as about 50%. These results strongly suggest that aerosol retrieval with the BTD method must be applied cautiously and that the outcomes must be carefully calibrated with respect to these error sources.
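A minimal sketch of the split-window test described above: compute the brightness temperature difference between the 11 μm and 12 μm channels and compare it with a threshold. The 0.8 K value follows the abstract's finding for the US standard atmosphere; the sign convention and comparison direction used here are assumptions for illustration, since the abstract does not fix them.

```python
import numpy as np

def classify_btd(bt11: np.ndarray, bt12: np.ndarray,
                 threshold_k: float = 0.8) -> np.ndarray:
    """Label each pixel 'aerosol' or 'cloud' from the 11/12 um split-window test.

    BTD is taken here as BT(11 um) - BT(12 um): dust aerosols depress the 11 um
    brightness temperature relative to 12 um, while clouds do the opposite.
    The 0.8 K threshold is the abstract's US-standard-atmosphere value; the
    sign convention is an illustrative assumption.
    """
    btd = bt11 - bt12
    return np.where(btd < threshold_k, "aerosol", "cloud")

# Example with two pixels (brightness temperatures in K).
print(classify_btd(np.array([285.0, 291.2]), np.array([286.5, 290.1])))
```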

Feasibility Study on FSIM Index to Evaluate SAR Image Co-registration Accuracy (SAR 영상 정합 정확도 평가를 위한 FSIM 인자 활용 가능성)

  • Kim, Sang-Wan;Lee, Dongjun
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.847-859 / 2021
  • Recently, as the number of high-resolution satellite SAR images increases, the demand for precise matching of SAR images in change detection and image fusion is consistently increasing. RMSE (Root Mean Square Error) values computed from GCPs (Ground Control Points) selected by analysts have been widely used for quantitative evaluation of image registration results, while it is difficult to find an approach that measures registration accuracy automatically. In this study, a feasibility analysis was conducted on using the FSIM (Feature Similarity) index as a measure of registration accuracy. TerraSAR-X (TSX) staring spotlight data collected at various incidence angles and orbit directions were used for the analysis. FSIM was almost independent of the spatial resolution of the SAR image. Using a single SAR image, the FSIM with respect to registration error was analyzed and then compared with values estimated from TSX data with different imaging geometries. The FSIM index decreased slightly with differences in imaging geometry such as different look angles and orbit tracks. Analysis of the FSIM value by land cover type showed that the change in the FSIM index with co-registration error was most evident in urban areas; therefore, the FSIM index calculated over urban areas is the most suitable for determining image registration accuracy. The FSIM index thus appears to have sufficient potential to be used as an index of the co-registration accuracy of SAR images.
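As a rough illustration of how such a feasibility test can be set up, the sketch below shifts a single image by known sub-pixel offsets and tracks a gradient-magnitude similarity between the original and the shifted copy. The similarity used here is a simplified gradient-based stand-in, not the full FSIM (which also involves phase congruency); the random test image, offsets, and constant are assumptions.

```python
import numpy as np
from scipy.ndimage import shift, sobel

def gradient_similarity(a: np.ndarray, b: np.ndarray, c: float = 1e-3) -> float:
    """Mean gradient-magnitude similarity, a simplified FSIM-like score in [0, 1]."""
    ga = np.hypot(sobel(a, axis=0), sobel(a, axis=1))
    gb = np.hypot(sobel(b, axis=0), sobel(b, axis=1))
    return float(np.mean((2 * ga * gb + c) / (ga**2 + gb**2 + c)))

rng = np.random.default_rng(0)
image = rng.random((256, 256))          # stand-in for a SAR intensity image

for dx in [0.0, 0.25, 0.5, 1.0, 2.0]:   # simulated registration errors (pixels)
    shifted = shift(image, (0.0, dx), order=1, mode="nearest")
    print(f"offset {dx:4.2f} px -> similarity {gradient_similarity(image, shifted):.4f}")
```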

Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.33-49 / 2018
  • The success of a TV drama has a strong impact on channel ratings and channel promotion, and its cultural and business impact has also been demonstrated through the Korean Wave. Therefore, early prediction of a blockbuster TV drama is very important from the strategic perspective of the media industry. Previous studies have tried to predict audience ratings and drama success with various methods, but most have made simple predictions using intuitive factors such as the lead actor or time slot, which limits their predictive power. In this study, we propose a model that predicts the popularity of a drama by analyzing viewers' viewing patterns, grounded in existing theory. This is not only a theoretical contribution but also a practical one that can be used by actual broadcasting companies. We collected data on 280 TV mini-series dramas broadcast on terrestrial channels over the 10 years from 2003 to 2012. From these data, we selected the 45 most highly ranked and the 45 least highly ranked dramas and analyzed their viewing patterns over 11 steps. The assumptions and conditions for modeling are based on existing studies, the opinions of actual broadcasters, and data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) with Euclidean and correlation methods, which we term similarity (the sum of distances). Using this similarity measure, we predicted drama success from the viewers' initial viewing-time pattern distribution over episodes 1~5. To confirm that the model is not overly sensitive to the measurement method, various distance measures were applied and the robustness of the model was checked, and once the model was established, we improved its predictive power using a grid search. Furthermore, when a new drama is broadcast, we classified viewers who watched more than 70% of the total airtime as "passionate viewers" and compared the percentage of passionate viewers between the most highly ranked and the least highly ranked dramas, so that the potential for a blockbuster mini-series can be assessed. We find that the initial viewing-time pattern is the key factor in predicting blockbuster dramas: with our model, blockbuster dramas were correctly classified with 75.47% accuracy from the initial viewing-time pattern analysis. This paper thus shows a high prediction rate while suggesting an audience measurement approach different from existing ones. Currently, broadcasters rely heavily on a few famous actors, the so-called star system, and face more severe competition than ever due to rising production costs, a long-term recession, and aggressive investment by comprehensive programming channels and large corporations, so all of them are in a financially difficult situation. The basic revenue model of these broadcasters is advertising, and the execution of advertising is based on audience ratings as the basic index. The drama market is hard to forecast because of the nature of the product, yet dramas contribute heavily to the financial success of a broadcaster's content, so the risk of failure must be minimized. Thus, analyzing the distribution of initial viewing time can provide practical help in establishing a response strategy (programming, marketing, story changes, etc.) for the company concerned. We also found that audience behavior is crucial to the success of a program: we define passionate viewing as a measure of how enthusiastically a drama is watched, and we can predict a program's success by calculating the loyalty of these passionate viewers. This way of calculating loyalty can also be applied to other platforms, and it can be used for marketing programs such as highlights, script previews, making-of videos, characters, games, and other marketing projects.
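A minimal sketch of the similarity idea described above: a new drama's early viewing-time distribution (over the first episodes, split into 11 steps as in the study) is compared with the average patterns of past hit and non-hit dramas using Euclidean and correlation distances, and the closer class is predicted. The example patterns and vectors are hypothetical, not the study's data.

```python
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def correlation_distance(a, b):
    return 1.0 - float(np.corrcoef(a, b)[0, 1])

def predict_hit(new_pattern, hit_mean, miss_mean, distance=euclidean) -> bool:
    """Predict 'hit' if the new drama's early viewing-time pattern is closer
    to the average pattern of past hits than to that of past non-hits."""
    return distance(new_pattern, hit_mean) < distance(new_pattern, miss_mean)

# Hypothetical 11-step viewing-time distributions (normalized shares).
hit_mean  = np.array([.02, .03, .05, .07, .09, .10, .11, .12, .13, .14, .14])
miss_mean = np.array([.14, .13, .12, .11, .10, .09, .08, .07, .06, .05, .05])
new_drama = np.array([.03, .04, .05, .08, .09, .10, .11, .11, .12, .13, .14])

print(predict_hit(new_drama, hit_mean, miss_mean))                        # Euclidean
print(predict_hit(new_drama, hit_mean, miss_mean, correlation_distance))  # correlation
```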

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.119-138 / 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities; as a result, personal information is leaked and monetary damage occurs more frequently. In this study, we propose a method to determine which sentences and documents posted to SNS are related to financial fraud. First, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management, and we suggested an emergency management process consisting of pre-cybercriminality (e.g. risk identification) and post-cybercriminality steps; among these, we focus on risk identification in this paper. The main process consists of data collection, preprocessing, and analysis. First, we selected two seed words, 'daechul (loan)' and 'sachae (private loan)', and collected data containing these words from SNS such as Twitter. The collected data were given to two researchers to decide whether or not they are related to cybercriminality, particularly financial fraud. We then selected as keywords the vocabulary items related to nominals and symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news sites, and blogs, gathering more than 820,000 articles. The collected articles were refined through preprocessing and turned into learning data. The preprocessing consists of a morphological analysis step, a stop-word removal step, and a valid part-of-speech selection step. In the morphological analysis step, a complex sentence is decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only nouns and symbols are kept: since nouns refer to things, they express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. Each selected item was labeled 'legal' or 'illegal', since building learning data requires classifying whether each item is legitimate or not. The processed data were then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning set and a test set; we set the learning data at 70% and the test data at 30%. SVM was used as the discrimination algorithm. Since SVM takes gamma and cost as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in typical cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), Term Frequency, and a Collective Intelligence method, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, which is clearly superior to Term Frequency, MLE, and the others. Hence, the results suggest that the proposed method is valid and practically usable. In this paper, we propose a framework for crisis management triggered by anomalies in unstructured data sources such as SNS. We hope this study contributes to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and that it also benefits practitioners in the fields of brand management and opinion mining.
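A minimal sketch of the final classification stage is shown below: preprocessed documents are turned into a document-term matrix, split 70/30, and classified with an RBF SVM using the gamma and cost values reported in the abstract (gamma = 0.5, cost = 10). The tiny example corpus and the use of scikit-learn are illustrative assumptions; the study's Korean morphological analysis and keyword filtering are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical pre-tokenized documents (nouns/symbols kept) with labels.
docs = ["daechul fast approval no credit check $$$",
        "city hall announces loan support program for small business",
        "sachae same day cash contact now !!!",
        "bank releases quarterly report on household loans"] * 25
labels = ["illegal", "legal", "illegal", "legal"] * 25

X = CountVectorizer().fit_transform(docs)          # document-term matrix
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", gamma=0.5, C=10).fit(X_tr, y_tr)  # gamma/cost from the abstract
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```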