• Title/Summary/Keyword: 적용가능성 (Applicability)


Prognostic Value of the Expression of p53 and bcl-2 in Non-Small Cell Lung Cancer (비소세포폐암에서 p53과 bcl-2의 발현이 예후에 미치는 영향)

  • Yang, Seok-Chul;Yoon, Ho-Joo;Shin, Dong-Ho;Park, Sung-Soo;Lee, Jung-Hee;Keum, Joo-Seob;Kong, Gu;Lee, Jung-Dal
    • Tuberculosis and Respiratory Diseases / v.45 no.5 / pp.962-974 / 1998
  • Background: Alterations of the p53 tumor suppressor gene are the most frequently identified genetic changes in human neoplasms, including lung carcinoma. It is well known that the bcl-2 oncoprotein protects cells from apoptosis. Recent studies have demonstrated that bcl-2 expression is associated with a favorable prognosis for patients with non-small cell lung carcinoma. However, the precise biologic role of bcl-2 in the development of these tumors is still obscure. p53 and bcl-2 exert important regulatory influences on the apoptotic pathway, and their relationship is therefore of interest in tumorigenesis, especially in lung cancer. Purpose: The authors investigated the prognostic significance of the expression of p53 and bcl-2 in radically resected non-small cell lung cancer. Method: Eighty-four formalin-fixed, paraffin-embedded blocks from primary non-small cell lung cancers resected at Hanyang University Hospital between 1980 and 1994 were available for both clinical follow-up and immunohistochemical staining using monoclonal antibodies against p53 and bcl-2. Results: The histologic classification of the tumors was based on WHO criteria, and the specimens included 45 squamous cell carcinomas (53.6%), 28 adenocarcinomas (33.3%) and 11 large cell carcinomas (13.1%). p53 immunoreactivity was noted in 47 of 84 cases (56.0%) and bcl-2 immunoreactivity in 15 of 84 cases (17.9%). The mean survival duration was $64.23{\pm}10.73$ months in the bcl-2 positive group and $35.28{\pm}4.39$ months in the bcl-2 negative group. bcl-2 expression was significantly correlated with survival in radically resected non-small cell lung cancer patients (p=0.03). The mean survival duration was $34.71{\pm}6.12$ months in the p53 positive group and $45.35{\pm}6.30$ months in the p53 negative group (p=0.21); p53 expression was not predictive of survival. No significant association was found for the combined p53 and bcl-2 expression status in our study. Conclusions: The interaction and regulation of new biologic markers, such as those involved in the apoptotic pathway, are complex. bcl-2 overexpression is a good prognostic factor in non-small cell lung cancer, whereas p53 expression is not significantly associated with prognosis in non-small cell lung cancer.
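
The survival comparison reported here (bcl-2 positive vs. negative group, p = 0.03) can be illustrated with a minimal Python sketch using the lifelines library. The data below are randomly generated placeholders, not the study's patient records, and the log-rank test is an assumption, since the abstract does not name the exact statistic used.

```python
# Minimal sketch: Kaplan-Meier estimate and log-rank comparison of two groups.
# All durations and event indicators are hypothetical placeholders.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# survival time in months and event indicator (1 = death observed, 0 = censored)
t_pos = rng.exponential(64, size=15)     # stand-in for the bcl-2 positive group
t_neg = rng.exponential(35, size=69)     # stand-in for the bcl-2 negative group
e_pos = rng.integers(0, 2, size=15)
e_neg = rng.integers(0, 2, size=69)

kmf = KaplanMeierFitter()
kmf.fit(t_pos, event_observed=e_pos, label="bcl-2 positive")
print("median survival (months):", kmf.median_survival_time_)

result = logrank_test(t_pos, t_neg, event_observed_A=e_pos, event_observed_B=e_neg)
print(f"log-rank p-value: {result.p_value:.3f}")
```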

DC Resistivity method to image the underground structure beneath river or lake bottom (하저 지반특성 규명을 위한 전기비저항 탐사)

  • Kim Jung-Ho;Yi Myeong-Jong;Song Yoonho;Cho Seong-Jun;Lee Seong-Kon;Son Jeongsul
    • Korean Society of Exploration Geophysicists, Conference Proceedings (한국지구물리탐사학회 학술대회논문집) / 2002.09a / pp.139-162 / 2002
  • Since weak zones or geological lineaments are likely to be eroded, weak zones may develop beneath rivers, and a careful evaluation of ground conditions is important when constructing structures that cross a river. DC resistivity surveys, however, have seldom been applied to the investigation of water-covered areas, possibly because of difficulties in data acquisition and interpretation. Acquiring high-quality data may be the most important factor, and it is more difficult than in a land survey because of the water layer overlying the underground structure to be imaged. Through numerical modeling and the analysis of case histories, we studied resistivity surveying in water-covered areas, from the characteristics of the measured data, through data acquisition methods, to interpretation methods. We organize the discussion according to the electrode location, i.e., floating the electrodes on the water surface versus installing them on the water bottom, since the methods of data acquisition and interpretation vary depending on the electrode location. Through this study, we could confirm that the DC resistivity method can provide fairly reasonable subsurface images. It was also shown that installing electrodes on the water bottom can give subsurface images with much higher resolution than floating them on the water surface. Since data acquired in water-covered areas have much lower sensitivity to the underground structure than those acquired on land, and can be contaminated by higher noise, such as streaming potential, it is very important to select an acquisition method and electrode array that provide data with a high signal-to-noise ratio as well as high resolving power. Installing electrodes on the water bottom is suitable for detailed surveys because of its much higher resolving power, whereas floating them, especially the streamer DC resistivity survey, is suited to reconnaissance surveys owing to the very high speed of fieldwork.
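
For readers unfamiliar with what a DC resistivity measurement actually records, the sketch below computes the standard textbook apparent resistivity for a four-electrode layout on a half-space surface. It is not the authors' acquisition or inversion code; bottom-installed electrodes inside a water layer would require a modified geometric factor, and the electrode positions and readings here are hypothetical.

```python
# Textbook half-space apparent-resistivity calculation (surface electrodes only).
import math

def geometric_factor(a, b, m, n):
    """Half-space geometric factor K for current electrodes A, B and
    potential electrodes M, N on a straight line (positions in metres)."""
    am, bm = abs(m - a), abs(m - b)
    an, bn = abs(n - a), abs(n - b)
    return 2.0 * math.pi / (1.0 / am - 1.0 / bm - 1.0 / an + 1.0 / bn)

def apparent_resistivity(dv, i, a, b, m, n):
    """rho_a = K * (delta V / I), in ohm-metres."""
    return geometric_factor(a, b, m, n) * dv / i

# Example: Wenner array with 10 m spacing (A-M-N-B), 50 mV measured at 200 mA.
print(apparent_resistivity(dv=0.05, i=0.2, a=0.0, b=30.0, m=10.0, n=20.0))
```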

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, growing interest in big data analysis has spurred computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading. As a result, big data analysis is expected to be performed by the demanders of the analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing; in particular, much attention is focused on using text data. The emergence of new platforms and techniques using the web brings about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques utilized for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection. Thus, it is essential to analyze the entire collection at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: the processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling, that is, dividing a large number of documents into sub-units and deriving topics by repeatedly applying topic modeling to each unit. This method can be used for topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost, because documents can be analyzed in each location without combining all the documents to be analyzed. However, despite these advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified for each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology needs to be established; that is, assuming that the global topics are the ideal answer, the difference between local topics and global topics needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other studies dealing with topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by detecting whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling on the entire collection, and we also proposed a reasonable method for comparing the results of the two approaches.
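
The divide-and-conquer idea described in this abstract can be illustrated roughly as follows, using scikit-learn's LDA as a stand-in topic model: fit topic models on each local set and on a reduced global set, then map every local topic to its most similar global topic by cosine similarity of the topic-word distributions. The corpus, topic counts, delegate-document selection, and mapping rule below are placeholder assumptions, not the authors' implementation.

```python
# Sketch: local LDA models mapped onto a "global" LDA model by topic-word similarity.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = ["stock market rises", "new phone released", "election results announced",
        "market crash fears", "smartphone camera review", "voters head to polls"]
local_sets = [docs[:3], docs[3:]]                      # split the corpus into local sets

vec = CountVectorizer().fit(docs)                      # shared vocabulary
global_lda = LatentDirichletAllocation(n_components=3, random_state=0)
global_lda.fit(vec.transform(docs))                    # stand-in for the RGS model

global_topics = global_lda.components_ / global_lda.components_.sum(axis=1, keepdims=True)

for local_docs in local_sets:
    local_lda = LatentDirichletAllocation(n_components=3, random_state=0)
    local_lda.fit(vec.transform(local_docs))
    # rows = topics, columns = vocabulary terms; normalise to distributions
    local_topics = local_lda.components_ / local_lda.components_.sum(axis=1, keepdims=True)
    mapping = cosine_similarity(local_topics, global_topics).argmax(axis=1)
    print("local topic -> most similar global topic:", mapping)
```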

Risk Factor Analysis for Operative Death and Brain Injury after Surgery of Stanford Type A Aortic Dissection (스탠포드 A형 대동맥 박리증 수술 후 수술 사망과 뇌손상의 위험인자 분석)

  • Kim Jae-Hyun;Oh Sam-Sae;Lee Chang-Ha;Baek Man-Jong;Hwang Seong-Wook;Lee Cheul;Lim Hong-Gook;Na Chan-Young
    • Journal of Chest Surgery / v.39 no.4 s.261 / pp.289-297 / 2006
  • Background: Surgery for Stanford type A aortic dissection shows a high operative mortality rate and frequent postoperative brain injury. This study was designed to identify the risk factors leading to operative mortality and brain injury after surgical repair in patients with type A aortic dissection. Material and Method: One hundred and eleven patients with type A aortic dissection who underwent surgical repair between February 1995 and January 2005 were reviewed retrospectively. There were 99 acute dissections and 12 chronic dissections. Univariate and multivariate analyses were performed to identify risk factors for operative mortality and brain injury. Result: Hospital mortality occurred in 6 patients (5.4%). Permanent neurologic deficit occurred in 8 patients (7.2%) and transient neurologic deficit in 4 (3.6%). Overall 1-, 5-, and 7-year survival rates were 94.4%, 86.3%, and 81.5%, respectively. Univariate analysis revealed 4 statistically significant predictors of mortality: previous chronic type III dissection, emergency operation, intimal tear in the aortic arch, and deep hypothermic circulatory arrest (DHCA) for more than 45 minutes. Multivariate analysis revealed previous chronic type III aortic dissection (odds ratio (OR) 52.2) and DHCA for more than 45 minutes (OR 12.0) as risk factors for operative mortality. Pathological obesity (OR 12.9) and total arch replacement (OR 8.5) were statistically significant risk factors for brain injury in the multivariate analysis. Conclusion: The results of surgical repair for Stanford type A aortic dissection were good when we took into account the mortality rate, the incidence of neurologic injury, and the long-term survival rate. Surgery for type A aortic dissection in patients with a history of chronic type III dissection may carry an increased risk of operative mortality. Special care should be taken, and efforts to reduce the hypothermic circulatory arrest time should always be kept in mind. Surgeons planning to operate on patients with pathological obesity, or to perform total arch replacement, should seriously consider the higher risk of brain injury.
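
Odds ratios such as those reported above (e.g., OR 52.2 for previous chronic type III dissection) are obtained by exponentiating multivariable logistic regression coefficients. The sketch below shows this with statsmodels on randomly generated stand-in data; it is not the study's patient data or model specification.

```python
# Sketch: multivariable logistic regression and odds ratios with confidence intervals.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 111  # same sample size as the study, but the values are random placeholders
df = pd.DataFrame({
    "prior_type3_dissection": rng.integers(0, 2, n),
    "dhca_over_45min": rng.integers(0, 2, n),
    "mortality": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["prior_type3_dissection", "dhca_over_45min"]])
fit = sm.Logit(df["mortality"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)          # exponentiated coefficients = odds ratios
conf_int = np.exp(fit.conf_int())         # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```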

Development of a Traffic Accident Prediction Model and Determination of the Risk Level at Signalized Intersection (신호교차로에서의 사고예측모형개발 및 위험수준결정 연구)

  • 홍정열;도철웅
    • Journal of Korean Society of Transportation / v.20 no.7 / pp.155-166 / 2002
  • Since the 1990s, there has been an increasing number of traffic accidents at intersections, which calls for more urgent measures to ensure intersection safety. This study set out to analyze the road conditions, traffic conditions, and traffic operation conditions at signalized intersections, to identify the elements that impose obstructions to safety, and to develop a traffic accident prediction model to evaluate the safety of an intersection using the correlation between these elements and accidents. In addition, in developing a traffic accident prediction model for a signalized intersection, the focus was on suggesting appropriate traffic safety policies by dealing with the dangerous elements in advance and on enhancing safety at the intersection. The data for the study were collected at intersections located in Wonju City from January to December 2001 and consisted of the number of accidents, the road conditions, the traffic conditions, and the traffic operation conditions at each intersection. The collected data were first statistically analyzed, and the results identified the elements that had close correlations with accidents: the area pattern, the use of land, bus stopping activities, parking and stopping activities on the road, the total volume, the turning volume, the number of lanes, the width of the road, the intersection area, the cycle, the sight distance, and the turning radius. These elements were used in a second correlation analysis; the significance level was 95% or higher for all of them, and there were few correlations between the independent variables. The variables that affected the accident rate were the number of lanes, the turning radius, the sight distance, and the cycle, which were used to develop a traffic accident prediction model formula considering their distribution. The model formula was compared with a general linear regression model in terms of accuracy. In addition, domestic accident statistics were investigated to analyze the distribution of accidents and to classify intersections according to risk level. Finally, the Spearman rank correlation coefficient was applied to the results to see if the model was appropriate. As a result, the coefficient of determination was highly significant with a value of 0.985, and the ranking of the intersections according to risk level was appropriate as well. The actual number of accidents and the predicted number were compared in terms of risk level, and they were about the same for 80% of the intersections.
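
The final validation step uses a Spearman rank correlation between predicted and observed accident rankings. A minimal SciPy sketch, on hypothetical numbers rather than the paper's data, looks like this:

```python
# Sketch: do the model and the observations rank intersections by risk in the same order?
from scipy.stats import spearmanr

observed_accidents  = [12, 7, 3, 15, 9, 4, 11, 6]   # per intersection, hypothetical
predicted_accidents = [11, 8, 2, 14, 10, 5, 12, 5]  # model output, hypothetical

rho, p_value = spearmanr(observed_accidents, predicted_accidents)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
```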

A Study of Guidelines for Genetic Counseling in Preimplantation Genetic Diagnosis (PGD) (착상전 유전진단을 위한 유전상담 현황과 지침개발을 위한 기초 연구)

  • Kim, Min-Jee;Lee, Hyoung-Song;Kang, Inn-Soo;Jeong, Seon-Yong;Kim, Hyon-J.
    • Journal of Genetic Medicine / v.7 no.2 / pp.125-132 / 2010
  • Purpose: Preimplantation genetic diagnosis (PGD), also known as embryo screening, is a pre-pregnancy technique used to identify genetic defects in embryos created through in vitro fertilization. PGD is considered a means of prenatal diagnosis of genetic abnormalities. PGD is used when one or both genetic parents have a known genetic abnormality; testing is performed on an embryo to determine whether it also carries the genetic abnormality. The main advantage of PGD is the avoidance of selective pregnancy termination, as it imparts a high likelihood that the baby will be free of the disease under consideration. The application of PGD to genetic practices, reproductive medicine, and genetic counseling is becoming a key component of fertility practice because of the need to develop a custom PGD design for each couple. Materials and Methods: In this study, a survey on the contents of genetic counseling in PGD was carried out via direct contact or e-mail with patients and specialists who had experienced PGD during the three months from February to April 2010. Results: A total of 91 persons responded to the survey: 60 patients, 49 of whom had a chromosomal disorder and 11 of whom had a single gene disorder, and 31 PGD specialists. Analysis of the survey results revealed that all respondents were well aware of the importance of genetic counseling in all steps of PGD, including planning, operation, and follow-up. The patient group responded that the possibility of unexpected results (51.7%), genetic risk assessment and recurrence risk (46.7%), reproduction options (46.7%), the procedure and limitations of PGD (43.3%), and information about PGD technology (35.0%) should be included in genetic counseling. In detail, 51.7% of patients wanted to be counseled about the possibility of unexpected results and the recurrence risk, while 46.7% wanted to know their reproduction options. Approximately 96.7% of specialists replied that a non-M.D. genetic counselor is necessary for effective and systematic genetic counseling in PGD, because it is difficult for physicians to offer satisfactory information to patients due to a lack of counseling time and specific knowledge of the disorders. Conclusions: The information from the survey provides important insight into the present overall situation of genetic counseling for PGD in Korea. The survey results demonstrated a general awareness that genetic counseling is essential for PGD, suggesting that appropriate genetic counseling may play an important role in the success of PGD. The establishment of genetic counseling guidelines for PGD may contribute to better planning and management strategies for PGD.

The Study on Conservation and Management of Natural Habitat of Spleenworts on Samdo Island (Asplenium antiquum Makino), Jeju (Natural Monument No. 18) (천연기념물 제주 삼도 파초일엽 자생지 생육 및 관리 현황 연구)

  • Shin, Jin-Ho;Kim, Han;Lee, Na-Ra;Son, Ji-Won
    • Korean Journal of Environment and Ecology / v.33 no.3 / pp.280-291 / 2019
  • A. antiquum, first observed on Jeju Samdo Island in 1949, was designated Natural Monument No. 18 in December 1962 in recognition of its academic value. In Korea, it grows naturally only on Samdo Island in Jeju. Although its natural habitat was greatly damaged and almost destroyed by firewood gathering, theft of plants, and other causes after the liberation, it has been maintained through transplantation and restoration. The site examined in this study has been managed as a restricted area since 2011. Since about 20 years have passed since the restoration of the native site in the 2000s, it is necessary to check the official management history records, such as the origin of the transplantation and restoration, to monitor changes in growth status, and to manage the habitat. As a result of this study, we secured records of the cultural property management history, such as the identification of native species and the transplantation and restoration records, and we examined the change in the growth and development of A. antiquum 20 years after the restoration. There are no official records of the individuals transplanted to the restored natural habitat of A. antiquum in the 1970s and 1980s, and there was controversy about the nativeness of the individuals restored and transplanted in 1974, since they were Japanese individuals. There are two sites in the natural habitat on Samdo Island. A total of 65 individuals grow in three layers on three stone walls at one site, while 29 individuals grow in two columns at the other site. A. antiquum grows in an evergreen broad-leaved forest dominated by Neolitsea sericea, and we did not find any other naturally growing individuals of A. antiquum outside the investigated sites. This study also checked the distribution of A. antiquum seedlings first observed after the restoration. There were more than 300 seedling individuals, and we selected three densely populated sites for monitoring. There were 23 A. antiquum seedlings with 4-17 leaves per individual and leaf lengths of 0.5-20 cm in monitoring site 1. There were 88 individuals with 5-6 leaves per individual and leaf lengths of 1.3-10.4 cm in monitoring site 2, while there were 22 individuals with 5-9 leaves per individual and leaf lengths of 4.5-12.1 cm in monitoring site 3. Although the natural habitat of A. antiquum was designated a restricted public area in 2011, there is a high possibility that the habitat could be damaged because some activities, such as fishing and scuba diving, are allowed. Therefore, it is necessary to enforce the law strictly, to provide sufficient education on the preservation of natural monuments, and to present accurate information about the cultural asset.

Different Look, Different Feel: Social Robot Design Evaluation Model Based on ABOT Attributes and Consumer Emotions (각인각색, 각봇각색: ABOT 속성과 소비자 감성 기반 소셜로봇 디자인평가 모형 개발)

  • Ha, Sangjip;Lee, Junsik;Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.27 no.2 / pp.55-78 / 2021
  • To solve complex and diverse social problems and ensure the quality of life of individuals, social robots that can interact with humans are attracting attention. In the past, robots were recognized as beings that provide labor, put into industrial sites on behalf of humans. However, the concept of today's robot has been extended to social robots that coexist with humans and enable social interaction, with the advent of smart technology, which is considered an important driver in most industries. Specifically, there are service robots that respond to customers, robots intended for edutainment, and emotional robots that can interact with humans intimately. However, the popularization of robots is not yet felt despite the modern ICT service environment and the 4th industrial revolution. Considering social interaction with users, which is an important function of social robots, not only the technology of the robots but also other factors should be considered. The design elements of a robot are more important than other factors in leading consumers to actually purchase a social robot. In fact, existing studies on social robots remain at the level of proposing a "robot development methodology" or testing, piece by piece, the effects that social robots provide to users. On the other hand, the consumer emotions evoked by a robot's appearance have an important influence on the formation of the user's perception, reasoning, evaluation, and expectations; furthermore, they can affect attitudes toward robots, favorable feelings, inferences about performance, and so on. Therefore, this study aims to verify the effect of the appearance of a social robot and consumer emotions on consumers' attitudes toward the social robot. To this end, a social robot design evaluation model is constructed by combining heterogeneous data from different sources. Specifically, three quantitative indicators of social robot appearance from the ABOT Database are included in the model. The consumer emotions about social robot design were collected through (1) the existing design evaluation literature, (2) online buzz such as product reviews and blogs, and (3) qualitative interviews on social robot design. We then collected scores of consumer emotions and attitudes toward various social robots through a large-scale consumer survey. First, we derived six major dimensions of consumer emotions from 23 detailed emotions through a dimension reduction methodology. Then, statistical analysis was performed to verify the effect of the derived consumer emotions on attitudes toward social robots. Finally, moderated regression analysis was performed to verify the effect of the quantitatively collected indicators of social robot appearance on the relationship between consumer emotions and attitudes toward social robots. Interestingly, several significant moderation effects were identified; these effects are visualized as two-way interaction plots to interpret them from multidisciplinary perspectives. This study makes a theoretical contribution by empirically verifying all stages from technical properties to consumers' emotions and attitudes toward social robots by linking data from heterogeneous sources. It also has practical significance in that the results help to develop design guidelines based on consumer emotions at the design stage of social robot development.
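
The two analytical steps described above, dimension reduction of detailed emotions and moderated regression with an appearance indicator, can be sketched as follows. PCA is used here only as a stand-in reduction method (the abstract does not name the exact technique), and all column names and data are hypothetical rather than the paper's ABOT indicators or survey items.

```python
# Sketch: (1) reduce 23 emotion ratings to a few dimensions,
#         (2) test an appearance indicator as a moderator via an interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n = 200
emotions = pd.DataFrame(rng.normal(size=(n, 23)),
                        columns=[f"emotion_{i}" for i in range(23)])

# Step 1: 23 detailed emotions -> 6 emotion dimensions (PCA as a stand-in)
dims = PCA(n_components=6, random_state=7).fit_transform(emotions)

df = pd.DataFrame({
    "warmth_dim": dims[:, 0],                  # one derived emotion dimension
    "human_likeness": rng.uniform(0, 1, n),    # stand-in appearance indicator
    "attitude": rng.normal(size=n),            # stand-in attitude score
})

# Step 2: moderated regression with a two-way interaction
model = smf.ols("attitude ~ warmth_dim * human_likeness", data=df).fit()
print(model.summary())
```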

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big data and artificial intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big data analyses. In this situation, this paper aims to develop a new business model that can be applied to the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models. We then compare the performance of predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, many researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), which was based on multiple discriminant analysis and is still widely used in both research and practice; the author suggested a score model that utilizes five key financial ratios to predict the probability of bankruptcy in the next two years. Ohlson (1980) introduced the logit model to complement some limitations of previous models. Furthermore, Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model, and Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and the logit model, and Kim and Kim (2001) utilized artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have been trying to predict financial distress or bankruptcy more precisely based on diverse models such as Random Forest or SVM. One major distinction of our research from previous research is that we focus on examining the predicted probability of default for each sample case, not only on investigating the classification accuracy of each model for the entire sample. Most predictive models in this paper show a classification accuracy of about 70% on the entire sample; to be specific, the LightGBM model shows the highest accuracy of 71.1% and the Logit model the lowest accuracy of 69%. However, we find that these results are open to multiple interpretations. In the business context, we have to put more emphasis on efforts to minimize Type II error, which causes more harmful operating losses for the guarantee company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals. When we examine the classification accuracy for each interval, the Logit model has the highest accuracy of 100% for the 0~10% interval of the predicted probability of default; however, it has a relatively lower accuracy of 61.5% for the 90~100% interval. On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results, since they achieve a higher level of accuracy for both the 0~10% and 90~100% intervals but a lower level of accuracy around the 50% range of the predicted probability of default. As for the distribution of samples across the predicted probability of default, both the LightGBM and XGBoost models place a relatively large number of samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy with a small number of cases, LightGBM or XGBoost could be more desirable models, since they classify a large number of cases into the two extreme intervals of the predicted probability of default, even allowing for their relatively low classification accuracy. Considering the importance of Type II error and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM with good results, while logistic regression shows the worst performance. However, each predictive model has a comparative advantage in terms of various evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples that are expected to have a high probability of default. Collectively, we can construct more comprehensive ensemble models that contain multiple classification machine learning models and conduct majority voting to maximize overall performance.
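
The interval-wise evaluation described above can be reproduced in outline with scikit-learn: train classifiers, bin the predicted default probabilities into deciles, and compute accuracy per bin. Synthetic data stands in for the non-public KSURE records, and only two of the five models are shown; XGBoost, LightGBM, and a DNN would plug in the same way.

```python
# Sketch: overall accuracy vs. accuracy per decile of predicted default probability.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("Logit", LogisticRegression(max_iter=1000)),
                    ("RandomForest", RandomForestClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]          # predicted probability of "default"
    pred = (proba >= 0.5).astype(int)
    bins = pd.cut(proba, bins=np.linspace(0, 1, 11), include_lowest=True)
    acc_by_decile = pd.Series(pred == y_te).groupby(bins).mean()
    print(name, "overall accuracy:", round((pred == y_te).mean(), 3))
    print(acc_by_decile)
```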

Arsenic Removal Mechanism of the Residual Slag Generated after the Mineral Carbonation Process in Aqueous System (광물탄산화 공정 이후 발생하는 잔사슬래그의 수계 내 비소 제거 기작)

  • Kim, Kyeongtae;Latief, Ilham Abdul;Kim, Danu;Kim, Seonhee;Lee, Minhee
    • Economic and Environmental Geology / v.55 no.4 / pp.377-388 / 2022
  • Laboratory-scale experiments were performed to identify the As removal mechanism of the residual slag generated after the mineral carbonation process. The residual slags were manufactured from steelmaking slag (basic oxygen furnace slag: BOF) through direct and indirect carbonation processes. RDBOF (residual BOF after direct carbonation) and RIBOF (residual BOF after indirect carbonation) showed physicochemical and structural characteristics different from the raw BOF, such as chemical and mineralogical properties, the pH of the leachate, and the formation of micropores on the slag surface. In the batch experiments, 0.1 g of residual slag was added to 10 mL of As solution (initial concentration: 203.6 mg/L) adjusted to various pH levels. The RDBOF showed 99.3% As removal efficiency at an initial pH of 1, while the efficiency sharply decreased with increasing initial pH. As the initial pH of the solution decreased, the dissolution of the carbonate minerals covering the surface was accelerated, increasing the exposed area of Fe-oxide and promoting the adsorption of As-oxyanions on the RDBOF surface. In contrast, the As removal efficiency of RIBOF increased with the initial pH, reaching up to 70% at an initial pH of 10. Considering the PZC (point of zero charge) of the RIBOF (pH 4.5), electrostatic adsorption of As-oxyanions on the RIBOF surface was hardly expected at initial pH 4-10. Nevertheless, it was observed that As-oxyanions were linked to the Fe-oxide on the RIBOF surface by the cation bridge effect of divalent cations such as Ca2+, Mn2+, and Fe2+. As the RIBOF surface became more strongly negatively charged, the cation bridge effect became stronger and more As could be fixed on the RIBOF surface. However, Ca-bearing products start to precipitate on the surface at pH 10-11 or higher, and they even prevent the surface adsorption of As-oxyanions by Fe-oxide. The TCLP test was performed to evaluate the stability of the As fixed on the surface of the residual slag after the batch experiments; the results supported that RDBOF and RIBOF firmly fixed As over a wide pH range, considering their As desorption rates of less than 2%. From the results of this study, it was shown that both residual slags can be used as eco-friendly, low-cost As removers with high removal efficiency and high stability, while also overcoming the increase in solution pH that is a disadvantage of conventional steelmaking slag as an As remover.
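
The removal-efficiency figures quoted above follow from simple mass-balance arithmetic on the batch-test concentrations. A minimal sketch, with a hypothetical final concentration (only the 203.6 mg/L initial value and the 0.1 g / 10 mL batch conditions come from the abstract):

```python
# Sketch: removal efficiency and As fixed per gram of slag from a batch test.
initial_mg_per_L = 203.6      # initial As concentration (from the abstract)
final_mg_per_L = 1.4          # hypothetical concentration after the batch test
volume_L = 0.010              # 10 mL of solution
slag_mass_g = 0.1             # residual slag added

removal_efficiency = (initial_mg_per_L - final_mg_per_L) / initial_mg_per_L * 100
as_fixed_mg_per_g = (initial_mg_per_L - final_mg_per_L) * volume_L / slag_mass_g

print(f"removal efficiency: {removal_efficiency:.1f} %")   # ~99.3 % in this example
print(f"As fixed on slag:   {as_fixed_mg_per_g:.2f} mg/g")
```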