• Title/Summary/Keyword: Time Series Analysis


Particulate Matter and CO2 Improvement Effects by Vegetation-based Bio-filters and the Indoor Comfort Index Analysis (식생기반 바이오필터의 미세먼지, 이산화탄소 개선효과와 실내쾌적지수 분석)

  • Kim, Tae-Han;Choi, Boo-Hun;Choi, Na-Hyun;Jang, Eun-Suk
    • Korean Journal of Environmental Agriculture
    • /
    • Vol. 37, No. 4
    • /
    • pp.268-276
    • /
    • 2018
  • BACKGROUND: In January 2018, fine dust alerts and warnings were issued 36 times for $PM_{10}$ and 81 times for $PM_{2.5}$. Air quality is becoming a serious issue nationwide. Although interest in air-purifying plants is growing due to the controversy over the risks of the chemical substances in conventional air-purifying solutions, industrial adoption of the plants has been limited by their low efficiency from an air-conditioning perspective. METHODS AND RESULTS: This study proposes a vegetation-based bio-filter system that can handle the total indoor air volume, enabling the efficient application of air-purifying plants. To evaluate the quantitative performance of the system, time-series analyses were conducted on air-conditioning performance, indoor air quality, and comfort index improvement in a lecture room-style laboratory with 16 persons present. The system provided a ventilation rate of 4.24 ACH and reduced indoor temperature by $1.6^{\circ}C$ and black bulb temperature by $1.0^{\circ}C$. Relative humidity increased by 24.4%, which worsened the comfort index; however, this appeared to be offset by the turbulent flow created by the air blowers. While $PM_{10}$ was reduced by 39.5% to $22.11{\mu}g/m^3$, $CO_2$ increased up to 1,329 ppm; this is interpreted to mean that the released $CO_2$ could not be processed because the light compensation point was not reached. As for the indoor comfort indices, PMV was reduced by 83.6% and PPD by 47.0% on average, indicating that an indoor space within a comfort range can be created by operating vegetation-based bio-filters. CONCLUSION: The study confirmed that the vegetation-based bio-filter system is effective in lowering indoor temperature and $PM_{10}$ and has positive effects on creating a comfortable indoor space in terms of PMV and PPD.

On Securing Continuity of Long-Term Observational Eddy Flux Data: Field Intercomparison between Open- and Enclosed-Path Gas Analyzers (장기 관측 에디 플럭스 자료의 연속성 확보에 대하여: 개회로 및 봉폐회로 기체분석기의 야외 상호 비교)

  • Kang, Minseok;Kim, Joon;Yang, Hyunyoung;Lim, Jong-Hwan;Chun, Jung-Hwa;Moon, Minkyu
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • Vol. 21, No. 3
    • /
    • pp.135-145
    • /
    • 2019
  • Analyzing long cycles or trends in time series data from a long-term observation requires comparability between data observed in the past and the present. In this study, we propose an approach to ensure compatibility among the instruments used for long-term observation, which allows the continuity of the data to be secured. An open-path gas analyzer (Model LI-7500, LI-COR, Inc., USA) had been used for eddy covariance flux measurement in the Gwangneung deciduous forest for more than 10 years. The open-path gas analyzer was replaced by an enclosed-path gas analyzer (Model EC155, Campbell Scientific, Inc., USA) in July 2015. Before completely replacing the gas analyzer, carbon dioxide ($CO_2$) and latent heat fluxes were collected using both gas analyzers simultaneously during a five-month period from August to December 2015. The $CO_2$ fluxes were not significantly different between the gas analyzers when the daily mean temperature was higher than $0^{\circ}C$. However, when the daily mean temperature was lower than $0^{\circ}C$, the $CO_2$ flux measured by the open-path gas analyzer was negatively biased (from a positive sign, i.e., carbon source, to zero or a negative sign, i.e., carbon neutral or sink) due to heating of the instrument surface. Despite applying the frequency response correction associated with tube attenuation of water vapor, the latent heat flux measured by the enclosed-path gas analyzer was on average 9% smaller than that measured by the open-path gas analyzer, which resulted in a >20% difference in the sums over the study period. These results indicate that an additional air density correction is needed to account for instrument heating, and that analysis of long-term observational flux data is facilitated by understanding the tendency of enclosed-path gas analyzers to underestimate latent heat flux.
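The instrument-continuity step described above — quantifying the systematic offset between the two analyzers during the overlap period — can be sketched as follows. The flux values below are synthetic illustrations (not the Gwangneung measurements), and the simple mean-ratio correction is an assumption for demonstration, not the paper's full correction procedure.

```python
# Sketch: estimate the relative bias between paired flux series from two gas
# analyzers over an overlap period, then rescale one record for continuity.
# All numbers are toy values, not the study's data.
from statistics import mean

open_path = [120.0, 95.0, 150.0, 80.0, 110.0]    # latent heat flux, W m-2
enclosed  = [109.2, 86.45, 136.5, 72.8, 100.1]   # same half-hours, W m-2

# Mean ratio of enclosed- to open-path fluxes over the overlap period
# (here ~0.91, i.e. the enclosed-path record runs ~9% low).
ratio = mean(e / o for e, o in zip(enclosed, open_path))

# A simple multiplicative continuity correction for the enclosed-path record.
corrected = [e / ratio for e in enclosed]
```

In practice a half-hourly, stability-dependent correction would be preferable to a single scalar, but the scalar makes the continuity idea concrete.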

Comparison on Patterns of Conflicts in the South China Sea and the East China Sea through Analysis on Mechanism of Chinese Gray Zone Strategy (중국의 회색지대전략 메커니즘 분석을 통한 남중국해 및 동중국해 분쟁 양상 비교: 시계열 데이터에 근거한 경험적 연구를 중심으로)

  • Cho, Yongsu
    • Maritime Security
    • /
    • Vol. 1, No. 1
    • /
    • pp.273-310
    • /
    • 2020
  • This study empirically analyzes the overall mechanism of the "Gray Zone Strategy", which China has employed as one of its major maritime security strategies in the conflicts surrounding the South China Sea and the East China Sea since the early 2010s, and compares the resulting conflict patterns in those regions. To this end, I advanced two hypotheses about the Chinese gray zone strategy: first, "The maritime gray zone strategy used by China shows different structures of implementation in the South China Sea and the East China Sea, which are the major conflict areas"; second, "Therefore, the patterns of disputes in the South China Sea and the East China Sea also differ." To examine these, I classify the mechanisms of the Chinese gray zone strategy along four dimensions: 1) conflict trends and frequency of strategy execution, 2) types and strengths of the strategy, 3) actors executing the strategy, and 4) response methods of the counterparts. Data related to these dimensions were collected for quantitative modeling; about 10 years of data pertaining to the topic were processed, and a research model was designed with a new categorization and operational definition of gray zone strategies. On this basis, all the hypotheses were tested by comparing the comprehensive mechanisms of the gray zone strategy used by China with the conflict patterns in the South China Sea and the East China Sea. The conclusion restates the verified results, emphasizing the need to overcome the security vulnerabilities in East Asia that could be caused by China's maritime gray zone strategy.
This study, the first of its kind, is significant in that it clarifies, through empirical case studies, the intrinsic structure through which China's gray zone strategy is implemented and investigates the correlation between that structure and maritime conflict patterns.

Comparison of Models for Stock Price Prediction Based on Keyword Search Volume According to the Social Acceptance of Artificial Intelligence (인공지능의 사회적 수용도에 따른 키워드 검색량 기반 주가예측모형 비교연구)

  • Cho, Yujung;Sohn, Kwonsang;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 27, No. 1
    • /
    • pp.103-128
    • /
    • 2021
  • Recently, investors' interest and the influence of the dissemination of stock-related information have been considered significant factors in explaining stock returns and volume. In addition, for companies that develop, distribute, or utilize innovative new technologies such as artificial intelligence, it is difficult to accurately predict future stock returns and volatility due to macro-environment and market uncertainty. Market uncertainty is recognized as an obstacle to the activation and spread of artificial intelligence technology, so research is needed to mitigate it. Hence, the purpose of this study is to propose a machine learning model that predicts the volatility of a company's stock price by using the internet search volume of artificial intelligence-related technology keywords as a measure of investor interest. To this end, we use VAR (Vector Auto-Regression) and the LSTM (Long Short-Term Memory) deep neural network to predict the stock market, and compare stock price prediction performance based on keyword search volume according to the technology's social acceptance stage. In addition, we analyze sub-technologies of artificial intelligence to examine how the search volume of detailed technology keywords changes with the technology acceptance stage, and how interest in a specific technology affects the stock market forecast. For this purpose, the terms "artificial intelligence", "deep learning", and "machine learning" were selected as keywords, and we investigated how often each keyword appeared in online documents each week for five years, from January 1, 2015 to December 31, 2019. The stock price and transaction volume data of KOSDAQ-listed companies were also collected and used for the analysis. As a result, we found that the keyword search volume for artificial intelligence technology increased as the social acceptance of the technology increased.
In particular, starting from the AlphaGo shock, the search volume for the keyword artificial intelligence itself and for detailed technologies such as machine learning and deep learning increased. The prediction models showed high accuracy, and the acceptance stage yielding the best prediction performance differed for each keyword. When stock prices were predicted from keyword search volume for each social acceptance stage of the artificial intelligence technologies classified in this study, prediction accuracy was highest at the awareness stage. The prediction accuracy also differed according to the keywords used in the stock price prediction model at each social acceptance stage. Therefore, when constructing a stock price prediction model using technology keywords, the social acceptance of the technology and the sub-technology classification must be considered. The results of this study provide the following implications. First, to predict the return on investment in companies based on innovative technology, it is most important to capture the recognition stage, in which public interest rapidly increases, in the social acceptance of the technology. Second, the fact that keyword search volume and prediction model accuracy vary with the social acceptance of a technology should be considered when developing decision support systems for investment, such as the big data-based robo-advisors recently introduced by the financial sector.
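The input side of a VAR- or LSTM-style model of this kind is a set of lagged keyword-volume and return features. A minimal sketch of that feature construction is below; the keyword, the lag length, and all numbers are illustrative assumptions, not the study's data.

```python
# Sketch: turn weekly keyword search volume and weekly returns into lagged
# supervised examples of the kind a VAR or LSTM model would consume.
def make_lagged(series_a, series_b, lags=2):
    """Pair [a_{t-lags..t-1}, b_{t-lags..t-1}] with the target a_t."""
    X, y = [], []
    for t in range(lags, len(series_a)):
        X.append(series_a[t - lags:t] + series_b[t - lags:t])
        y.append(series_a[t])
    return X, y

returns = [0.01, -0.02, 0.03, 0.00, 0.02]  # weekly stock returns (toy)
volume  = [110, 95, 130, 120, 140]         # "deep learning" search volume (toy)

# Each row: 2 lagged returns + 2 lagged volumes -> next week's return.
X, y = make_lagged(returns, volume, lags=2)
```

A VAR would fit linear coefficients on such rows; an LSTM would instead consume the same lags as a sequence per example.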

A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 20, No. 3
    • /
    • pp.93-108
    • /
    • 2014
  • To support business decision making, interest and efforts in analyzing and using transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions are evolving into various patterns by taking advantage of information technology. To keep pace with this evolution, many efforts are being made on fraud detection methods and advanced application systems in order to improve the accuracy and ease of fraud detection. As a case of fraud detection, this study aims to provide effective fraud detection methods for auction exception agricultural products in the largest Korean agricultural wholesale market. The auction exception products policy exists to complement auction-based trades in the agricultural wholesale market. That is, most trades in agricultural products are performed by auction; however, specific products are assigned as auction exception products when the total volume of a product is relatively small, the number of wholesalers is small, or it is difficult for wholesalers to purchase the product. However, the policy creates several problems concerning the fairness and transparency of transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real trade transaction data from the market from 2008 to 2010 are analyzed, comprising more than 1 million transactions and over 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics, such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first trial to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so the fraud detection rules are generated using an outlier detection approach.
We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items; the quarterly average unit prices of product items for specific wholesalers are also used. The reliability of the generated fraud detection rules is confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and the normalized Z-value concept are applied. That is, the unit price of a transaction is transformed to a Z-value to calculate its occurrence probability, approximating the distribution of unit prices as a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value. The reason is that, in the case of auction exception agricultural products, Z-values are influenced by the outlier fraud transactions themselves because the number of wholesalers is small. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction that is being checked for fraud. To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed using Delphi. The system consists of five main menus and related submenus. The first functionality of the system is to import transaction databases. The next important functions set up the fraud detection parameters; by changing them, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found under the given parameters, and the potential fraud transactions can be viewed on screen or exported as files. The study is an initial trial to identify fraudulent transactions in auction exception agricultural products.
Many research topics on this issue remain. First, the scope of the analyzed data was limited by data availability; it is necessary to include more data on transactions, wholesalers, and producers to detect fraudulent transactions more accurately. Next, the scope of fraud transaction detection needs to be extended to fishery products. There are also many possibilities for applying other data mining techniques to fraud detection; for example, a time series approach is a potential technique to apply to the problem. Finally, although outlier transactions are detected here based on unit prices, it is also possible to derive fraud detection rules based on transaction volumes.
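The Self-Eliminated Z-score described above can be sketched in a few lines: each unit price is standardized against the mean and standard deviation of all *other* transactions, so a single extreme fraud cannot inflate the statistics that would otherwise mask it. The prices and the |z| > 3 threshold below are illustrative assumptions, not the study's parameters.

```python
# Sketch of the Self-Eliminated Z-score: the z-value of each unit price is
# computed against statistics that EXCLUDE the transaction being checked.
from statistics import mean, stdev

def self_eliminated_z(prices, i):
    """Z-score of prices[i] against the mean/stdev of all other prices."""
    others = prices[:i] + prices[i + 1:]
    m, s = mean(others), stdev(others)
    return (prices[i] - m) / s if s > 0 else 0.0

unit_prices = [100, 102, 98, 101, 250]   # toy unit prices; 250 is suspicious
scores = [self_eliminated_z(unit_prices, i) for i in range(len(unit_prices))]

# With a conventional z-score (including itself), the 250 transaction would
# inflate the stdev enough to stay under the threshold; excluded, it stands out.
flagged = [p for p, z in zip(unit_prices, scores) if abs(z) > 3]
```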

A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 23, No. 4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to appear in daily life. Artificial intelligence is applied to consumer electronics and communications products such as AI refrigerators and speakers. In the financial sector, Kensho's artificial intelligence technology improved the stock trading process at Goldman Sachs: for example, two stock traders could handle the work of 600, and analytical work that took 15 people 4 weeks could be processed in 5 minutes. In particular, big data analysis through machine learning is being actively applied throughout the financial industry. Stock market analysis and investment modeling through machine learning are also actively studied. The linearity limitations of existing financial time series studies are overcome by machine learning approaches such as artificial intelligence prediction models. Studies of quantitative financial data based on past stock market-related numerical data widely use artificial intelligence to forecast future movements of stock prices or indices. Various other studies have predicted the future direction of the market or of a company's stock price by learning from large amounts of text data, such as news and comments related to the stock market. Investing in commodity assets, a class of alternative assets, is usually used to enhance the stability and safety of a traditional stock and bond portfolio. There is relatively little research on investment models for commodity assets compared with mainstream assets like equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, yielding better trading models and changing the whole financial area. In this study, we build an investment model using the Support Vector Machine (SVM), one of the machine learning models.
Some research on commodity assets focuses on price prediction for specific commodities, but research on machine learning-based investment models for commodity asset allocation is scarce. We propose a method of forecasting four major commodity indices, a portfolio of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors (energy, agriculture, and metals) that are actively traded on the CME market and have sufficient liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We constructed an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. Because commodity assets are closely related to macroeconomic activity, we set 19 macroeconomic indicators as model inputs, including stock market indices, export and import trade data, labor market data, and composite leading indicators: 14 US indicators, two Chinese indicators, and two Korean indicators. The data period is from January 1990 to May 2017; the first 195 monthly observations were used as training data and the remaining 125 as test data. In this study, we verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the commodity indices. The prediction accuracy of the model for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metals sectors.
The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should be similar despite variations in the data period; we therefore also used odd-numbered years as training data and even-numbered years as test data, and confirmed that the results are similar. In conclusion, when allocating commodity assets to a traditional portfolio composed of stocks, bonds, and cash, more effective investment performance is obtained not by investing in commodity indices but by investing in commodity futures, and especially by using the rebalanced commodity futures portfolio designed by the SVM model.
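The evaluation described above — train on an initial window of months, then score directional accuracy on held-out months — can be sketched as below. To keep the sketch dependency-free, a naive last-month-momentum rule stands in for the SVM classifier, and the price series is a toy example, not the study's data.

```python
# Sketch: walk-forward directional-accuracy evaluation, with a trivial
# momentum rule standing in for the SVM classifier used in the study.
def directional_accuracy(prices, predict):
    """Fraction of months where predict(history) matches the next move's sign."""
    hits = total = 0
    for t in range(2, len(prices)):
        pred = predict(prices[:t])                    # sign forecast from history
        actual = 1 if prices[t] >= prices[t - 1] else -1
        hits += (pred == actual)
        total += 1
    return hits / total

def momentum(history):
    """Naive stand-in classifier: repeat the sign of the last move."""
    return 1 if history[-1] >= history[-2] else -1

prices = [100, 101, 103, 102, 104, 105, 104, 106]     # toy monthly index levels
acc = directional_accuracy(prices, momentum)
```

Swapping `momentum` for a fitted classifier (e.g. an SVM over the macroeconomic features the study uses) leaves the evaluation loop unchanged.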

Predicting the Direction of the Stock Index by Using a Domain-Specific Sentiment Dictionary (주가지수 방향성 예측을 위한 주제지향 감성사전 구축 방안)

  • Yu, Eunji;Kim, Yoosin;Kim, Namgyu;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 19, No. 1
    • /
    • pp.95-110
    • /
    • 2013
  • Recently, the amount of unstructured data being generated through a variety of social media has been increasing rapidly, resulting in a growing need to collect, store, search, analyze, and visualize such data. This kind of data cannot be handled appropriately by the traditional methodologies usually used for structured data because of its vast volume and unstructured nature. Accordingly, many attempts are being made to analyze unstructured data, such as text files and log files, through various commercial or noncommercial analytical tools. Among the contemporary issues dealt with in the literature on unstructured text data analysis, the concepts and techniques of opinion mining have been attracting much attention from pioneering researchers and business practitioners. Opinion mining, or sentiment analysis, refers to a series of processes that analyze participants' opinions, sentiments, evaluations, attitudes, and emotions about selected products, services, organizations, social issues, and so on. In other words, many attempts based on various opinion mining techniques are being made to resolve complicated issues that could not otherwise be solved by existing traditional approaches. One of the most representative attempts using opinion mining may be the recent research that proposed an intelligent model for predicting the direction of the stock index. This model works mainly on the basis of opinions extracted from an overwhelming number of economic news reports. News content published in various media is a classic example of unstructured text data: every day, a large volume of new content is created, digitalized, and distributed to us via online or offline channels. Many studies have revealed that we make better decisions on political, economic, and social issues by analyzing news and other related information.
In this sense, we expect to predict the fluctuation of stock markets partly by analyzing the relationship between economic news reports and the pattern of stock prices. So far, in the opinion mining literature, most studies, including ours, have utilized a sentiment dictionary to elicit sentiment polarity or sentiment values from a large number of documents. A sentiment dictionary consists of pairs of selected words and their sentiment values; sentiment classifiers refer to the dictionary to determine the sentiment polarity of words, of sentences in a document, and of the whole document. However, most traditional approaches share a common limitation: they do not consider the flexibility of sentiment polarity. That is, in a traditional sentiment dictionary the sentiment polarity or sentiment value of a word is fixed and cannot be changed. In the real world, however, the sentiment polarity of a word can vary depending on the time, situation, and purpose of the analysis, and can even be contradictory in nature. This flexibility of sentiment polarity motivated our study. In this paper, we argue that sentiment polarity should be assigned not merely on the basis of the inherent meaning of a word but on the basis of its ad hoc meaning within a particular context. To implement this idea, we present an intelligent investment decision-support model based on opinion mining that scrapes and parses massive volumes of economic news on the web, tags sentiment words, classifies the sentiment polarity of the news, and finally predicts the direction of the next day's stock index. In addition, we apply a domain-specific sentiment dictionary instead of a general-purpose one to classify each piece of news as either positive or negative. For performance evaluation, we conducted intensive experiments and investigated the prediction accuracy of our model.
For the experiments to predict the direction of the stock index, we gathered and analyzed 1,072 articles about stock markets published by "M" and "E" media between July 2011 and September 2011.
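The core classification step — scoring a news item against a domain-specific sentiment dictionary — can be sketched as follows. The dictionary entries and headlines are invented for illustration; the study's actual dictionary is built from stock-market news rather than hand-written.

```python
# Sketch: classify a headline with a (toy) domain-specific sentiment lexicon.
# In a domain-specific dictionary, word polarities reflect stock-market usage,
# not general language use.
DOMAIN_DICT = {"surge": +1, "rally": +1, "plunge": -1, "default": -1}

def classify(text, lexicon):
    """Sum word polarities over the text and map the score to a label."""
    score = sum(lexicon.get(w, 0) for w in text.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

label = classify("Stocks rally as exports surge", DOMAIN_DICT)
```

Document-level polarity is then aggregated across sentences, and the daily balance of positive versus negative news feeds the next-day index direction forecast.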

A Statistical model to Predict soil Temperature by Combining the Yearly Oscillation Fourier Expansion and Meteorological Factors (연주기(年週期) Fourier 함수(函數)와 기상요소(氣象要素)에 의(依)한 지온예측(地溫豫測) 통계(統計) 모형(模型))

  • Jung, Yeong-Sang;Lee, Byun-Woo;Kim, Byung-Chang;Lee, Yang-Soo;Um, Ki-Tae
    • Korean Journal of Soil Science and Fertilizer
    • /
    • Vol. 23, No. 2
    • /
    • pp.87-93
    • /
    • 1990
  • A statistical model to predict soil temperature from ambient meteorological factors, including mean, maximum, and minimum air temperatures, precipitation, wind speed, and snow depth, combined with a Fourier time series expansion, was developed from data measured at the Suwon Meteorological Service from 1979 to 1988. The stepwise elimination technique was used for the statistical analysis. For the yearly oscillation model of soil temperature with 8 terms of the Fourier expansion, the mean square error decreased with soil depth, from 2.30 for the surface temperature to 1.34-0.42 for the 5- to 500-cm soil temperatures; $r^2$ ranged from 0.913 to 0.988. The number of lag days of air temperature obtained by remainder analysis was 0 days for the soil surface temperature, -1 day for the 5- to 30-cm soil temperatures, and -2 days for the 50-cm soil temperature. The number of lag days for precipitation, snow depth, and wind speed was -1 day for the 0- to 10-cm soil temperatures and -2 to -3 days for the 30- to 50-cm soil temperatures. For the statistical soil temperature prediction model combining the yearly oscillation terms with meteorological factors as remainder terms, using the lag days obtained above, the mean square error was 1.64 for the soil surface temperature and ranged from 1.34 to 0.42 for the 5- to 500-cm soil temperatures. A model test with 1978 data, independent of model development, showed good agreement, with $r^2$ ranging from 0.976 to 0.996. The magnitudes of the coefficients implied that the soil depth to which daily meteorological variables might affect soil temperature was 30 to 50 cm.
In the models, solar radiation was not included as an independent variable; however, a separate analysis of the relationship between the difference (${\Delta}Tmxs$) between the maximum soil temperature and the maximum air temperature and solar radiation (Rs; $J\;m^{-2}$) under a corn canopy showed the linear relationships ${\Delta}Tmxs=0.902+1.924{\times}10^{-3}\,Rs$ for a leaf area index lower than 2 and ${\Delta}Tmxs=0.274+8.881{\times}10^{-4}\,Rs$ for a leaf area index higher than 2.
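The yearly-oscillation part of such a model is a truncated Fourier series in day-of-year. A minimal evaluator is sketched below; the coefficients are illustrative toy values, not the paper's fitted ones, and in the paper the harmonic terms are combined with the lagged meteorological remainder terms.

```python
# Sketch: evaluate a truncated yearly Fourier series for soil temperature.
import math

def yearly_fourier(doy, a0, coeffs, period=365.0):
    """a0 plus the sum of a_k*cos(k*t) + b_k*sin(k*t) over the harmonics,
    where t = 2*pi*doy/period and coeffs = [(a_1, b_1), (a_2, b_2), ...]."""
    t = 2 * math.pi * doy / period
    return a0 + sum(a * math.cos(k * t) + b * math.sin(k * t)
                    for k, (a, b) in enumerate(coeffs, start=1))

# One harmonic with toy coefficients: annual mean 12 C, cosine amplitude -10 C,
# so mid-year (day 182.5) sits at the annual maximum of 22 C.
temp_midyear = yearly_fourier(182.5, 12.0, [(-10.0, 0.0)])
```

The paper's model uses 8 Fourier terms; fitting the `(a_k, b_k)` pairs is an ordinary least-squares problem on the cosine/sine regressors.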

Study on Influencing Factors of Traffic Accidents in Urban Tunnel Using Quantification Theory (In Busan Metropolitan City) (수량화 이론을 이용한 도시부 터널 내 교통사고 영향요인에 관한 연구 - 부산광역시를 중심으로 -)

  • Lim, Chang Sik;Choi, Yang Won
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • Vol. 35, No. 1
    • /
    • pp.173-185
    • /
    • 2015
  • This study investigates the characteristics and types of car accidents and establishes a prediction model by statistically analyzing 456 car accidents that occurred in 11 tunnels in Busan. The results can be summarized as follows. Analysis of accident characteristics showed that 64.9% of the accidents in the tunnels took place between 08:00 and 18:00, higher than the 45.8-46.1% observed on ordinary roads. Analysis of accident types showed that car-to-car accidents were the majority, and single-car accidents were relatively frequent in the tunnels compared with ordinary roads. People aged between 21 and 40 were most often involved in accidents; among the vehicle types of the first party to an accident, trucks accounted for a high proportion; and with respect to weather, rainy or cloudy days accounted for a higher proportion than clear days. Principal component analysis of the accident influence factors showed that the first principal component comprised road, tunnel structure, and traffic flow-related factors; the second, lighting facility and road structure-related factors; the third, stand-by and lighting facility-related factors; the fourth, human and time series-related factors; the fifth, human-related factors; the sixth, vehicle and traffic flow-related factors; and the seventh, meteorological factors.
Classifying the car accident spots yielded five optimized groups. Analyzing each group with Quantification Theory Type I showed that the first group had low explanatory power for the prediction model, the fourth group middle explanatory power, and the second, third, and fifth groups high explanatory power. Among all items (principal components) whose partial correlation coefficients in the prediction model had absolute values over 0.2 (a weak correlation), this study analyzed variables including road environment variables. As a result, the main examination items were summarized as proper traffic flow processing, cross-section composition (road width), tunnel structure (tunnel length), road alignment, ventilation facilities, and lighting facilities.
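Quantification Theory Type I is, in effect, multiple regression on dummy-coded qualitative items. The dummy-coding step it relies on can be sketched as follows; the item name and categories are illustrative assumptions, not the study's variables.

```python
# Sketch: dummy-code a qualitative item for Quantification Theory Type I.
# Each category becomes a 0/1 column; the regression that follows estimates
# a "category score" per column (dropping one as the baseline in practice).
def dummy_code(records, item, categories):
    """One 0/1 indicator column per category, one row per record."""
    return [[1 if r[item] == c else 0 for c in categories] for r in records]

accidents = [{"weather": "clear"}, {"weather": "rain"}, {"weather": "cloudy"}]
X = dummy_code(accidents, "weather", ["clear", "rain", "cloudy"])
```

Stacking the coded columns of all items and regressing accident counts (or severity) on them yields the category scores and the partial correlation coefficients reported above.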

Study on the Chemical Management - 2. Comparison of Classification and Health Index of Chemicals Regulated by the Ministry of Environment and the Ministry of Employment and Labor (화학물질 관리 연구-2. 환경부와 고용노동부의 관리 화학물질의 구분, 노출기준 및 독성 지표 등의 특성 비교)

  • Kim, Sunju;Yoon, Chungsik;Ham, Seunghon;Park, Jihoon;Kim, Songha;Kim, Yuna;Lee, Jieun;Lee, Sangah;Park, Donguk;Lee, Kwonseob;Ha, Kwonchul
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • Vol. 25, No. 1
    • /
    • pp.58-71
    • /
    • 2015
  • Objectives: The aims of this study were to investigate the classification systems for chemical substances in the Occupational Safety and Health Act (OSHA) and the Chemical Substances Control Act (CSCA) and to compare several health indices (i.e., Time Weighted Average (TWA), Lethal Dose ($LD_{50}$), and Lethal Concentration ($LC_{50}$)) of chemical substances by category in each law. Methods: The chemicals regulated by each law were classified by the specific categories provided in the respective law: seven categories for the OSHA (chemicals with OELs, chemicals prohibited from manufacturing, etc., chemicals requiring approval, chemicals kept below permissible limits, chemicals requiring workplace monitoring, chemicals requiring special management, and chemicals requiring special health diagnosis) and five categories for the CSCA (poisonous substances, permitted substances, restricted substances, prohibited substances, and substances requiring preparation for accidents). Information on physicochemical properties and health indices, including CMR characteristics, $LD_{50}$, and $LC_{50}$, was obtained from the homepages of the Korea Occupational Safety and Health Agency and the National Institute of Environmental Research, among others. Statistical analysis was conducted to compare TWA with the health indices for each category. Results: The number of chemicals based on CAS numbers differed from the number of serial entries listed in each law because of repeat listings under different names (e.g., glycol monoethyl ether vs. 2-ethoxyethanol) and the grouping of different chemicals under the same serial number (e.g., five different benzidine-related chemicals were categorized under one serial number (06-4-13) as prohibited substances under the CSCA). A total of 722 chemicals and 995 chemicals were listed under the OSHA and its sub-regulations and under the CSCA and its sub-regulations, respectively.
Among these, 36.8% of the OSHA chemicals and 26.7% of the CSCA chemicals were regulated simultaneously under both laws. The correlation coefficients between TWA and $LC_{50}$ and between TWA and $LD_{50}$ were 0.641 and 0.506, respectively. The geometric mean (GM) values of TWA calculated for each category in both laws showed no tendency by category. The patterns of the cumulative graphs for TWA, $LD_{50}$, and $LC_{50}$ were similar for the chemicals regulated by the OSHA and the CSCA, but the median values were lower for the CSCA-regulated chemicals than for the OSHA-regulated chemicals. The GM of TWA for carcinogenic chemicals under the OSHA was significantly lower than for non-CMR chemicals ($2.21mg/m^3$ vs. $5.69mg/m^3$, p=0.006), while there was no significant difference for the CSCA chemicals ($0.85mg/m^3$ vs. $1.04mg/m^3$, p=0.448). $LC_{50}$ showed no significant difference between carcinogens, mutagens, reproductive toxic chemicals, and non-CMR chemicals under either law, while there was a difference in $LD_{50}$ between carcinogens and non-CMR chemicals under the CSCA. Conclusions: This study found no specific tendency or significant difference in health indices such as TWA, $LD_{50}$, and $LC_{50}$ across the subcategories of chemicals classified by the Ministry of Employment and Labor and the Ministry of Environment. Considering the background and purpose of each law, collaboration toward harmonized chemical categorization and regulation is necessary.
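The TWA-versus-$LD_{50}$ comparison above is a correlation between two toxicity indices; since both span several orders of magnitude, one common choice is to correlate log-transformed values. A minimal sketch with invented toy values (not the study's data, which reports r = 0.506-0.641) is below.

```python
# Sketch: Pearson correlation of two log-transformed toxicity indices.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

twa  = [0.1, 1.0, 5.0, 50.0]   # exposure limits, mg/m3 (toy)
ld50 = [20, 150, 900, 4000]    # acute oral toxicity, mg/kg (toy)

# Correlate on a log10 scale because both indices span orders of magnitude.
r = pearson([math.log10(v) for v in twa], [math.log10(v) for v in ld50])
```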