• Title/Summary/Keyword: User Characteristics (사용자특성)

Search Results: 4,436

Key Methodologies to Effective Site-specific Assessment in Contaminated Soils: A Review (오염토양의 효과적 현장조사에 대한 주요 방법론의 검토)

  • Chung, Doug-Young
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.32 no.4
    • /
    • pp.383-397
    • /
    • 1999
  • For sites to be investigated, the results of such an investigation can be used to determine goals for cleanup, quantify risks, distinguish acceptable from unacceptable risk, and develop cleanup plans that do not cause unnecessary delays in the redevelopment and reuse of the property. To do this, it is essential that an appropriately detailed study of the site be performed to identify the cause, nature, and extent of contamination and the possible threats to the environment or to any people living or working nearby, through the analysis of samples of soil, soil gas, groundwater, surface water, and sediment. The migration pathways of contaminants are also examined during this phase. The key aspect of cost-effective site assessment, which helps standardize and accelerate the evaluation of contaminated soils, is a simple step-by-step methodology by which environmental science and engineering professionals can calculate risk-based, site-specific soil levels for contaminants. Its use may significantly reduce the time needed to complete soil investigations and cleanup actions at some sites, and improve the consistency of these actions across the nation. Effective site assessment requires criteria for choosing the type of standard and for setting its magnitude; these criteria come from different sources, depending on many factors including the nature of the contamination. A general scheme for site-specific assessment consists of sequential Phases I, II, and III, defined by a workplan and soil screening levels. Phase I is conducted to identify and confirm a site's recognized environmental conditions resulting from past actions. If Phase I identifies potential hazardous substances, Phase II is usually conducted to confirm the absence, or the presence and extent, of contamination; Phase II involves the collection and analysis of samples. Phase III remediates the contaminated soils delineated in Phases I and II. Important factors in determining whether an assessment standard is site-specific and suitable are (1) the spatial extent of the sampling and the size of the sample area; (2) the number of samples taken; (3) the strategy for taking samples; and (4) the way the data are analyzed. Although selected methods are recommended, the application of quantitative methods should be directed by users with prior training or experience in the dynamic site investigation process. (A sketch of a risk-based screening-level calculation follows below.)

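The review's step-by-step methodology centers on computing risk-based screening levels against which sample results are compared. Below is a minimal sketch of such a calculation, assuming the common EPA-style ingestion-only formula with illustrative default exposure parameters; the formula structure is standard, but none of the names or values come from the paper itself.

```python
# Minimal sketch: risk-based soil screening level (SSL) for non-carcinogenic
# ingestion exposure. Default parameter values are illustrative EPA-style
# assumptions, NOT taken from the reviewed paper.

def soil_screening_level(rfd_mg_kg_day, thq=1.0, bw_kg=70.0, at_days=365 * 30,
                         ef_days_yr=350, ed_years=30, ir_soil_mg_day=100.0):
    """Return a screening level in mg contaminant per kg soil."""
    # kg of soil ingested over the exposure duration
    intake_kg = ef_days_yr * ed_years * ir_soil_mg_day * 1e-6
    return thq * rfd_mg_kg_day * bw_kg * at_days / intake_kg

# Example: an oral reference dose of 0.003 mg/kg-day (illustrative value)
print(f"SSL = {soil_screening_level(0.003):.0f} mg/kg")
```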

Development Strategy for New Climate Change Scenarios based on RCP (온실가스 시나리오 RCP에 대한 새로운 기후변화 시나리오 개발 전략)

  • Baek, Hee-Jeong;Cho, ChunHo;Kwon, Won-Tae;Kim, Seong-Kyoun;Cho, Joo-Young;Kim, Yeongsin
    • Journal of Climate Change Research
    • /
    • v.2 no.1
    • /
    • pp.55-68
    • /
    • 2011
  • The Intergovernmental Panel on Climate Change (IPCC) has identified the causes of climate change and come up with measures to address it at the global level. A key component of this work involves developing and assessing future climate change scenarios. The IPCC Expert Meeting in September 2007 identified a new greenhouse gas concentration scenario, the "Representative Concentration Pathway (RCP)", and established the framework and development schedules of the Climate Modeling (CM), Integrated Assessment Modeling (IAM), and Impact, Adaptation and Vulnerability (IAV) communities for the Fifth IPCC Assessment Report (AR5); about 130 researchers and users took part. At the IPCC Expert Meeting in September 2008, the CM community agreed on a new set of coordinated climate model experiments, phase five of the Coupled Model Intercomparison Project (CMIP5), which consists of more than 30 standardized experiment protocols on short-term and long-term time scales, in order to enhance understanding of climate change for the IPCC AR5, to develop climate change scenarios, and to address major issues raised in the IPCC AR4. Since early 2009, fourteen countries, including Korea, have been carrying out CMIP5-related projects. With increasing interest in climate change, the COordinated Regional Downscaling EXperiment (CORDEX) was launched in 2009 to generate regional- and local-level information on climate change. The National Institute of Meteorological Research (NIMR) under the Korea Meteorological Administration (KMA) contributed to the IPCC AR4 by developing climate change scenarios based on the IPCC SRES using ECHO-G, and has embarked on crafting national climate change scenarios as well as RCP-based global ones by engaging in international projects such as CMIP5 and CORDEX. NIMR/KMA will contribute to the IPCC AR5 and will develop national climate change scenarios reflecting geographical factors, local climate characteristics, and user needs, and provide them to national IAV and IAM communities to assess future regional climate impacts and take action.

Design and Implementation of Transmission Scheduler for Terrestrial UHD Contents (지상파 UHD 콘텐츠 전송 스케줄러 설계 및 구현)

  • Paik, Jong-Ho;Seo, Minjae;Yu, Kyung-A
    • Journal of Broadcast Engineering
    • /
    • v.24 no.1
    • /
    • pp.118-131
    • /
    • 2019
  • To provide large-capacity 8K UHD content, the terrestrial broadcasting system faces various problems, such as limited bandwidth. To solve these problems, UHD content transmission technology has been actively studied, and an 8K UHD broadcasting system using both the terrestrial broadcasting network and a communication network has been proposed. The proposed technique solves the limited-bandwidth problem of the terrestrial broadcasting network by segmenting 8K UHD content into layers and transmitting them over heterogeneous networks. Through the terrestrial broadcasting network, the base layer corresponding to FHD and the additional enhancement-layer data for 4K UHD are transmitted, while the further enhancement-layer data corresponding to 8K UHD are transmitted through the communication network. When 8K UHD content is provided this way, users can receive up to 4K UHD broadcasting over terrestrial channels, and up to 8K UHD with the additional communication network. However, to transmit 4K UHD content within the allocated bit rate of domestic terrestrial UHD broadcasting, the compression rate is increased, so a certain level of image degradation inevitably occurs. Due to the nature of UHD content, video quality should be the top priority over other factors, so quality must be guaranteed even within a limited bit rate. This requires packet scheduling at the content generator in the broadcasting system. Since the multiplexer sends out the packets received from the content generator in order, it is very important to keep the transmission time and transmission rate from the content generator to the multiplexer constant and accurate. Therefore, in this paper we propose a variable transmission scheduler between the content generator and the multiplexer to guarantee a certain level of image quality for UHD content. (A minimal pacing sketch follows below.)
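The scheduler's core requirement, keeping delivery time and rate from content generator to multiplexer constant, can be illustrated with a fixed-interval pacing loop. This is a minimal sketch under assumed parameters (the standard MPEG-2 TS packet size and an illustrative bitrate); it is not the authors' implementation.

```python
# Minimal sketch of constant-rate packet pacing between a content generator
# and a multiplexer. Packet size, bitrate, and the queue-based design are
# illustrative assumptions.
import time
from collections import deque

TS_PACKET_BYTES = 188            # MPEG-2 transport stream packet size
TARGET_BITRATE_BPS = 20_000_000  # illustrative channel allocation

def pace(packets: deque, send):
    """Emit packets at a fixed interval derived from the target bitrate."""
    interval = TS_PACKET_BYTES * 8 / TARGET_BITRATE_BPS  # seconds per packet
    next_due = time.monotonic()
    while packets:
        now = time.monotonic()
        if now < next_due:
            time.sleep(next_due - now)   # hold until the packet's deadline
        send(packets.popleft())
        next_due += interval             # advance deadline drift-free

# Usage: pace(deque([b"\x47" + bytes(187)] * 10), send=lambda p: None)
```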

Automatic Speech Style Recognition Through Sentence Sequencing for Speaker Recognition in Bilateral Dialogue Situations (양자 간 대화 상황에서의 화자인식을 위한 문장 시퀀싱 방법을 통한 자동 말투 인식)

  • Kang, Garam;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.17-32
    • /
    • 2021
  • Speaker recognition is generally divided into speaker identification and speaker verification. Speaker recognition plays an important role in automatic voice systems, and its importance is becoming more prominent as portable devices, voice technology, and the audio content field continue to expand. Previous speaker recognition studies have aimed at automatically determining who the speaker is based on voice files and at improving accuracy. Speech style is an important sociolinguistic subject: it contains very useful information that reveals the speaker's attitude, conversational intention, and personality, and this can be an important clue for speaker recognition. The sentence-final ending used in a speaker's utterance determines the type of sentence and carries information such as the speaker's intention, psychological attitude, or relationship to the listener. The use of sentence-final endings varies with the characteristics of the speaker, so the type and distribution of the final endings of a specific unidentified speaker can help in recognizing that speaker. However, few existing text-based speaker recognition studies have considered speech style, and if speech style information is added to speech-signal-based speaker recognition techniques, the accuracy of speaker recognition can be further improved. Hence, the purpose of this paper is to propose a novel method that uses speech style, expressed as sentence-final endings, to improve the accuracy of Korean speaker recognition. To this end, a method called sentence sequencing is proposed, which generates vector values from the type and frequency of the sentence-final endings appearing in the utterances of a specific person. To evaluate the performance of the proposed method, training and performance evaluation were conducted with an actual drama script. The method proposed in this study can be used as a means to improve the performance of Korean speech recognition services. (A minimal sketch of the ending-frequency vector follows below.)
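The "sentence sequencing" idea can be illustrated as a fixed-length vector of relative frequencies of sentence-final endings per speaker. The ending inventory and the naive suffix match below are illustrative assumptions; the paper works on Korean morphology, which a real system would analyze with a proper morpheme analyzer.

```python
# Minimal sketch: represent a speaker by the relative frequencies of
# sentence-final endings in their utterances. The ending list and the
# rule-based suffix match are illustrative stand-ins.
from collections import Counter

ENDINGS = ["습니다", "어요", "아요", "지요", "네요", "거든", "잖아", "다", "야", "니"]

def ending_vector(sentences):
    counts = Counter()
    for s in sentences:
        s = s.strip().rstrip(".?!")
        for e in ENDINGS:              # longer endings listed first on purpose
            if s.endswith(e):
                counts[e] += 1
                break
    total = sum(counts.values()) or 1
    return [counts[e] / total for e in ENDINGS]  # fixed-length style vector

print(ending_vector(["밥 먹었습니다.", "지금 가고 있어요.", "좋네요!"]))
```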

A study on the derivation and evaluation of flow duration curves (FDC) using deep learning with long short-term memory (LSTM) networks and the soil and water assessment tool (SWAT) (LSTM Networks 딥러닝 기법과 SWAT을 이용한 유량지속곡선 도출 및 평가)

  • Choi, Jung-Ryel;An, Sung-Wook;Choi, Jin-Young;Kim, Byung-Sik
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.spc1
    • /
    • pp.1107-1118
    • /
    • 2021
  • Climate change brought on by global warming has increased the frequency of floods and droughts on the Korean Peninsula, along with the casualties and physical damage resulting therefrom. Preparation for and response to these water disasters requires national-level planning for water resource management, and watershed-level management of water resources requires flow duration curves (FDC) derived from continuous data based on long-term observations. Traditionally, physical rainfall-runoff models are widely used in water resource studies to generate duration curves, but a number of recent studies have explored data-based deep learning techniques for runoff prediction. Physical models produce hydraulically and hydrologically reliable results, but they require a high level of understanding and may take longer to operate. Data-based deep learning techniques, on the other hand, offer the benefit of lower input data requirements and shorter operation times; however, the relationship between input and output data is processed in a black box, making it impossible to consider hydraulic and hydrological characteristics. This study chose one model from each category. For the physical model, long-term data without gaps were calculated through parameter calibration of the Soil and Water Assessment Tool (SWAT), a physical model whose applicability has been tested in Korea and other countries. These data were used as training data for the Long Short-Term Memory (LSTM) data-based deep learning technique. An analysis of the time-series data found that, during the calibration period (2017-18), the Nash-Sutcliffe Efficiency (NSE) and the coefficient of determination were higher for SWAT by 0.04 and 0.03, respectively, indicating that the SWAT results are superior to the LSTM results. In addition, the annual time-series data from the models were sorted in descending order, and the resulting flow duration curves were compared with the duration curves based on the observed flow: the NSE values for the SWAT and LSTM models were 0.95 and 0.91, respectively, and the coefficients of determination were 0.96 and 0.92, respectively. These findings indicate that both models yield good performance. Although the LSTM requires improved simulation accuracy in the low-flow sections, it appears to be widely applicable to calculating flow duration curves for large basins, where model development and operation take longer due to the vast data input, and for ungauged basins with insufficient input data. (A minimal FDC/NSE sketch follows below.)
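The two operations the study applies to the simulated discharge series, deriving an FDC by descending sort and scoring fit with NSE, are compact enough to sketch directly. The Weibull plotting position is an assumption; the paper does not state which position formula it uses.

```python
# Minimal sketch: flow duration curve (FDC) derivation and Nash-Sutcliffe
# Efficiency (NSE) scoring for a discharge series.
import numpy as np

def flow_duration_curve(q):
    """Sort flows in descending order; assign an exceedance probability per rank."""
    q_sorted = np.sort(np.asarray(q, dtype=float))[::-1]
    ranks = np.arange(1, len(q_sorted) + 1)
    exceedance = ranks / (len(q_sorted) + 1)   # Weibull plotting position (assumed)
    return exceedance, q_sorted

def nse(obs, sim):
    """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2); 1.0 is a perfect fit."""
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```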

An Implementation of OTB Extension to Produce TOA and TOC Reflectance of LANDSAT-8 OLI Images and Its Product Verification Using RadCalNet RVUS Data (Landsat-8 OLI 영상정보의 대기 및 지표반사도 산출을 위한 OTB Extension 구현과 RadCalNet RVUS 자료를 이용한 성과검증)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.449-461
    • /
    • 2021
  • Analysis Ready Data (ARD) for optical satellite images represent pre-processed products generated by applying the spectral characteristics and viewing parameters of each sensor. Atmospheric correction is one of the fundamental and complicated topics; it helps produce Top-of-Atmosphere (TOA) and Top-of-Canopy (TOC) reflectance from multi-spectral image sets. Most remote sensing software provides algorithms or processing schemes dedicated to these corrections for the Landsat-8 OLI sensor. Furthermore, Google Earth Engine (GEE) provides direct access to Landsat reflectance products, USGS-based ARD (USGS-ARD), in the cloud environment. We implemented an atmospheric correction extension for the Orfeo ToolBox (OTB), an open-source remote sensing software package for manipulating and analyzing high-resolution satellite images. This is the first such tool, as OTB has not provided calibration modules for any Landsat sensor. Using this extension, we conducted absolute atmospheric correction on Landsat-8 OLI images of Railroad Valley, United States (RVUS) and validated the reflectance products against the RVUS reflectance data sets in the RadCalNet portal. The results showed that the reflectance products from the OTB extension for Landsat differed by less than 5% from the RadCalNet RVUS data. In addition, we performed a comparative analysis with reflectance products obtained from other open-source tools, namely the QGIS semi-automatic classification plugin and SAGA, as well as the USGS-ARD products. Compared with the other two open-source tools, the reflectance products from the OTB extension showed high consistency with those of USGS-ARD, within an acceptable level in the measurement data range of the RadCalNet RVUS. In this study, the atmospheric calibration processor in the OTB extension was verified, demonstrating its potential applicability to other satellite sensors on the Compact Advanced Satellite (CAS)-500 or new optical satellites. (A minimal TOA-reflectance sketch follows below.)
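The first step of the correction chain, TOA reflectance, follows the documented USGS rescaling for Landsat-8: apply the per-band gain and offset from the scene's MTL metadata, then correct for sun elevation. The sketch below uses the handbook's standard default coefficients and an illustrative sun elevation; it is not the OTB extension's code.

```python
# Minimal sketch: Landsat-8 OLI Top-of-Atmosphere (TOA) reflectance.
# mult/add correspond to REFLECTANCE_MULT_BAND_x / REFLECTANCE_ADD_BAND_x and
# sun_elev_deg to SUN_ELEVATION in the MTL metadata; the defaults here are the
# documented USGS values plus an illustrative sun elevation.
import numpy as np

def toa_reflectance(dn, mult=2.0e-5, add=-0.1, sun_elev_deg=45.0):
    """rho = (M_rho * Qcal + A_rho) / sin(theta_SE)."""
    rho_prime = mult * np.asarray(dn, dtype=float) + add
    return rho_prime / np.sin(np.radians(sun_elev_deg))

print(toa_reflectance([8000, 12000, 20000]))  # digital numbers -> reflectance
```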

A case study of blockchain-based public performance video platform establishment: Focusing on Gyeonggi Art On, a new media art broadcasting station in Gyeonggi-do (블록체인 기반 공연영상 공공 플랫폼 구축 사례 연구: 경기도 뉴미디어 예술방송국 경기아트온을 중심으로)

  • Lee, Seung Hyun
    • Journal of Service Research and Studies
    • /
    • v.13 no.1
    • /
    • pp.108-126
    • /
    • 2023
  • This study explored the sustainability of a blockchain-based platform for cultural and artistic performance videos through the construction of Gyeonggi Art On, a new media art broadcasting station in Gyeonggi-do. In addition, the technical limitations of video content transactions using blockchain, legal and institutional issues, and the protection of personal information and intellectual property rights were reviewed. As the research method, participatory observation was used, including in-depth interviews with developers and operators and participation in meetings. The researcher participated in and observed the entire development process, including designing and developing the blockchain nodes, smart contracts, APIs, and UI/UX, and testing the interworking between the blockchain and the content distribution service. Research Question 1: 'Which technology model is suitable for a blockchain-based public platform for distributing performance video content?' The results are as follows. 1) The suitable blockchain type is a private blockchain, which participants can join only when directly invited by the blockchain manager. 2) For public platforms such as Gyeonggi Art On, between an NFT-issuance-based copyright management model and a blockchain-token, cloud-based content distribution model, the suitable model is one that provides content to external client organizations through an API and uses a K-token for fee settlement. 3) For initial public platform services such as Gyeonggi Art On, a closed blockchain that serves only users who have been granted the right to use content is suitable. Research Question 2: 'What legal and institutional problems should be reviewed when operating a blockchain-based public platform for performance video distribution?' The results are as follows. 1) Blockchain-based smart contracts have a party-eligibility problem, because the nature of blockchain technology means the identities of the transacting parties may not be revealed. 2) When a security incident occurs on the blockchain, it is difficult to recover losses because it is unclear how a user's loss should be compensated or remedied. 3) The concept of default cannot be applied to smart contracts, and even when the obligations under a smart contract have been fulfilled, the possibility of incomplete performance must be reviewed. (A purely illustrative access-model sketch follows below.)
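The access model the case study settles on, an invite-only (private, closed) chain with token-based fee settlement, can be illustrated schematically. Everything below is hypothetical: the class, method names, and fee logic are stand-ins, not Gyeonggi Art On's implementation or its smart-contract code.

```python
# Purely illustrative sketch of a private, invite-only content ledger with
# fees settled in a platform token ("K-token" in the paper). Hypothetical
# names and logic; not the platform's actual implementation.

class PrivateContentLedger:
    def __init__(self, manager):
        self.manager = manager
        self.members = {manager}
        self.balances = {}          # member -> K-token balance

    def invite(self, inviter, member, initial_tokens=0):
        # Private chain: only the blockchain manager can admit participants.
        assert inviter == self.manager, "only the manager can invite"
        self.members.add(member)
        self.balances[member] = initial_tokens

    def fetch_content(self, member, content_id, fee=1):
        """Serve content only to invited members; settle the fee in K-tokens."""
        if member not in self.members:
            raise PermissionError("closed blockchain: user not granted access")
        if self.balances.get(member, 0) < fee:
            raise ValueError("insufficient K-token balance")
        self.balances[member] -= fee
        return f"content:{content_id}"   # stand-in for cloud-distributed video
```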

A Study on the Influence of Digital Experience Factors on Purchase Intention and Loyalty in Online Shopping Mall - Focusing on the Mediating Effect of Flow - (온라인 쇼핑몰에서 디지털 경험요인이 구매의도에 미치는 영향에 관한 연구 : 플로우의 매개효과를 중심으로)

  • Jung, Sang-hee
    • Journal of Venture Innovation
    • /
    • v.3 no.2
    • /
    • pp.147-175
    • /
    • 2020
  • This study analyzed how digital experience factors influence purchase intention and purchase. The study targeted online shopping malls, an industry with strong digital experience value. The research model was derived by adding independent and mediating variables suited to the industry context of online shopping, based on the theoretical background and previous studies. Product variety, price efficiency, convenience, and conversation were used as independent variables, in terms of the digital marketing mix. Personalization is a very important factor in online shopping malls and was therefore also added as an independent variable. Flow was added as a mediating variable, and purchase and purchase intention were used as dependent variables. For empirical testing of the research model and generalization of the results, a survey was conducted among existing users of online shopping malls. The hypotheses were that product variety, price efficiency, convenience, conversation, and personalization influence online shopping purchase intention. Product variety and conversation were found to be the most influential factors on purchase intention, while price efficiency and personalization showed no statistically significant effects. Flow was shown to be a partial mediator between product variety and purchase intention in online shopping. In the case of personalization in particular, a significant influence on purchase intention was found only in the presence of the flow experience of pleasure and immersion; here the flow experience played a full mediating role and significantly affected purchase intention, because, given the nature of online shopping, consumers must carry out the overall purchase journey themselves, without human help. In the digital experience economy, since consumers are mostly digital consumers for whom communication and sharing are basic behaviors, they naturally engage in digital word-of-mouth communication and sharing before purchasing. Based on these results, theoretical and practical implications are suggested. (A minimal mediation-test sketch follows below.)
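The partial/full mediation claims rest on the standard mediation-analysis pattern: estimate path a (factor to flow), path b (flow to purchase intention controlling for the factor), and test the indirect effect a*b. Below is a minimal sketch with synthetic data and a bootstrap confidence interval; the variable names and data are illustrative, not the study's survey data or its exact statistical procedure.

```python
# Minimal sketch of a bootstrap mediation test (e.g., flow mediating
# product variety -> purchase intention). Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)                       # e.g., perceived product variety
m = 0.5 * x + rng.normal(size=n)             # flow (mediator)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # purchase intention

def slopes(X, y):
    """OLS slopes (intercept dropped) via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)                          # resample with replacement
    a = slopes(x[i], m[i])[0]                          # path a: X -> M
    b = slopes(np.column_stack([m[i], x[i]]), y[i])[0] # path b: M -> Y given X
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 => mediation
```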

Betweenness Centrality-based Evacuation Vulnerability Analysis for Subway Stations: Case Study on Gwanggyo Central Station (매개 중심성 기반 지하철 역사 재난 대피 취약성 분석: 광교중앙역 사례연구)

  • Jeong, Ji Won;Ahn, Seungjun;Yoo, Min-Taek
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.44 no.3
    • /
    • pp.407-416
    • /
    • 2024
  • Over the past 20 years, there has been a rapid increase in the number and size of subway stations and underground structures worldwide, and the importance of safety for subway users has grown continuously. Due to their structural characteristics, subway stations have limited visibility and escape routes in disaster situations, posing a high risk of human casualties and economic losses. Therefore, an analysis of disaster vulnerabilities is essential not only for existing subway systems but also for deep underground facilities such as GTX. This paper presents a case study applying a betweenness-centrality-based disaster vulnerability analysis framework to Gwanggyo Central Station. The analysis of the station's base model and various disaster scenarios revealed that the betweenness centrality distribution is symmetrical, following the symmetrical spatial structure of the station, with high centrality concentrated in the central areas of basement levels one and two. These areas exhibited values more than 220% above the average, indicating a high likelihood of bottlenecks during evacuation in disaster situations. To mitigate this vulnerability, scenarios were proposed that distribute the evacuation flows concentrated in the central areas, enhancing the usability of peripheral areas as evacuation routes by connecting staircases continuously. When this modification was considered, centrality concentration decreased, confirming that the proposed additional evacuation paths could effectively disperse evacuation flow in Gwanggyo Central Station. This case study demonstrates the effectiveness of the proposed framework for assessing evacuation vulnerability and enhancing subway station user safety, and the framework can be applied in disaster response and management plans for major underground facilities. (A minimal centrality-screening sketch follows below.)
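The screening step of such a framework amounts to computing betweenness centrality on a graph of station spaces and flagging nodes far above the mean. Below is a minimal sketch on a toy topology; the node names and the 2.2x-mean threshold are illustrative, not the Gwanggyo Central Station model (the paper reports hotspots more than 220% above the station average).

```python
# Minimal sketch: betweenness-centrality screening of a station graph.
# Nodes are spaces (platforms, concourses, stairs, exits); edges are
# pedestrian connections. Toy topology, illustrative threshold.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("platform_B2", "stair_C"), ("stair_C", "concourse_B1"),
    ("platform_B2", "stair_W"), ("stair_W", "concourse_B1"),
    ("concourse_B1", "exit_1"), ("concourse_B1", "exit_2"),
])

bc = nx.betweenness_centrality(G)
mean_bc = sum(bc.values()) / len(bc)
for node, v in sorted(bc.items(), key=lambda kv: -kv[1]):
    flag = "  <- potential bottleneck" if v > 2.2 * mean_bc else ""
    print(f"{node:14s} {v:.3f}{flag}")
```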

Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon;Jung, Sang Hyung;Kim, Jun Ho;Min, Eun Joo;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.97-117
    • /
    • 2020
  • Product evaluation criteria are indicators describing the attributes or values of products, which enable users or manufacturers to measure and understand the products. When companies analyze their products or compare them with competitors', appropriate criteria must be selected for objective evaluation. The criteria should reflect the product features consumers considered when they purchased, used, and evaluated the products. However, current evaluation criteria do not reflect the differing opinions of consumers from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics and use them as evaluation criteria; however, such approaches still produce criteria irrelevant to the products, because the extracted or improper words are not refined. To overcome this limitation, this research suggests an LDA-k-NN model that extracts candidate criteria words from online reviews using LDA and refines them with a k-nearest neighbor classifier. The proposed approach starts with a preparation phase consisting of six steps. First, review data are collected from e-commerce websites. Most e-commerce websites classify their items into high-, middle-, and low-level categories; review data for the preparation phase are gathered from each middle-level category and later collapsed to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews using part-of-speech information from a morpheme analysis module. After preprocessing, topic words are derived from the reviews with LDA, and only the nouns among the topic words are chosen as potential criteria words. These words are then tagged according to their suitability as criteria for each middle-level category. Next, every tagged word is vectorized by a pre-trained word embedding model. Finally, a k-nearest neighbor, case-based approach is used to classify each word by its tag. After the preparation phase, the criteria extraction phase is conducted on low-level categories. This phase starts by crawling reviews in the corresponding low-level category. The same preprocessing as in the preparation phase is conducted using the morpheme analysis module and LDA. Candidate criteria words are extracted as nouns from the data and vectorized by the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the candidate words using the k-nearest neighbor approach and the reference proportion of each word in the word set. To evaluate the performance of the proposed model, an experiment was conducted with reviews from '11st', one of the biggest e-commerce companies in Korea. The review data came from the 'Electronics/Digital' section, one of the high-level categories on 11st. For the performance evaluation, three other models were compared with the suggested model: the actual criteria used by 11st, a model that extracts nouns with the morpheme analysis module and refines them by word frequency, and a model that extracts nouns from LDA topics and refines them by word frequency. The evaluation was set up to predict the evaluation criteria of 10 low-level categories with the suggested model and the 3 models above. The criteria words extracted from each model were combined into a single word set, which was used for survey questionnaires; respondents chose every item they considered an appropriate criterion for each category, and each model received a score when its extracted words were chosen. The suggested model had higher scores than the other models in 8 out of 10 low-level categories, and paired t-tests on the model scores confirmed that the suggested model performs better in 26 of 30 tests. In addition, the suggested model was the best model in terms of accuracy. This research proposes an evaluation criteria extraction method that combines topic extraction using LDA with refinement by a k-nearest neighbor approach, overcoming the limits of previous dictionary-based and frequency-based refinement models. This study can contribute to improving review analysis for deriving business insights in the e-commerce market. (A minimal LDA-k-NN sketch follows below.)
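The pipeline's two core moves, extracting candidate nouns from LDA topics and keeping only those whose nearest labeled neighbors are known criteria words, can be sketched end to end. The tiny English corpus, random embeddings, and tag set below are illustrative stand-ins for the paper's Korean morpheme analysis and pre-trained word vectors.

```python
# Minimal sketch of the LDA-k-NN pipeline: LDA topic words -> candidate
# criteria -> k-NN refinement against words tagged in the preparation phase.
# Corpus, embeddings, and labels are illustrative stand-ins.
from gensim import corpora
from gensim.models import LdaModel
from sklearn.neighbors import KNeighborsClassifier
import numpy as np

reviews = [["battery", "life", "screen"], ["battery", "charger", "price"],
           ["screen", "color", "brightness"], ["price", "delivery", "box"]]
dictionary = corpora.Dictionary(reviews)
bow = [dictionary.doc2bow(doc) for doc in reviews]
lda = LdaModel(bow, num_topics=2, id2word=dictionary, random_state=0)

# Candidate criteria words = top words per LDA topic
candidates = {dictionary[int(wid)] for t in range(2)
              for wid, _ in lda.get_topic_terms(t, topn=3)}

# Hypothetical pre-trained embeddings and criteria tags (1 = criterion)
embed = {w: np.random.default_rng(hash(w) % 2**32).normal(size=8)
         for w in dictionary.token2id}
labeled = {"battery": 1, "screen": 1, "price": 1, "box": 0, "delivery": 0}

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit([embed[w] for w in labeled], list(labeled.values()))
criteria = [w for w in candidates if knn.predict([embed[w]])[0] == 1]
print(criteria)   # refined evaluation criteria for the category
```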