• Title/Summary/Keyword: Learning Processes

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo; Park, Byeonghwa
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.1-13 / 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been conducted. Social media services on the Internet generate unstructured or semi-structured data every second, and these data are written in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from such datasets. Traditional web search engines are usually based on keyword search, which often produces results far from users' intentions. Even though much progress has been made over the years to provide users with appropriate results, there is still considerable room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based approaches. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of examples in existing dictionaries, thereby avoiding expensive sense-tagging processes. The effectiveness of the method is evaluated with the Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary contains approximately 57,000 sentences, and the Sejong Corpus contains about 790,000 sentences tagged with both part-of-speech and senses. For the experiment, the Korean standard unabridged dictionary and the Sejong Corpus were evaluated both combined and separately, using cross validation. Only nouns, the targets of word sense disambiguation, were selected: 93,522 word senses among 265,655 nouns, together with 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it is tagged with the sense indices defined by the dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named entity dictionary of a Korean morphological analyzer. Using the extended named entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense vector model built during the pre-processing stage, sense-tagged terms were determined by vector space model based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from examples in the Korean standard unabridged dictionary and the Sejong Corpus: the experiment shows that better precision and recall are obtained with the merged corpus. The study suggests that the method can practically enhance the performance of Internet search engines and help capture the meaning of a sentence more accurately in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, and it assumes that all senses are independent. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of senses in a sentence. Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
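
The abstract above describes a vector space model step: sense vectors are built from tagged example sentences, and an input sentence's term vector is matched to the closest sense vector. The following minimal Python sketch illustrates that step only; the function names and the toy Korean sense data are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of vector-space word sense disambiguation: sense vectors are
# built from sense-tagged example sentences, and an input sentence's term
# vector is compared against each sense vector by cosine similarity.
import math
from collections import Counter

def build_sense_vectors(tagged_examples):
    """tagged_examples: iterable of (sense_id, [terms]) from a merged corpus."""
    vectors = {}
    for sense_id, terms in tagged_examples:
        vectors.setdefault(sense_id, Counter()).update(terms)
    return vectors

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def disambiguate(sentence_terms, sense_vectors):
    """Return the sense whose vector is most similar to the sentence's term vector."""
    query = Counter(sentence_terms)
    return max(sense_vectors, key=lambda s: cosine(query, sense_vectors[s]))

# Toy usage with two hypothetical senses of a noun:
senses = build_sense_vectors([
    ("bank_01", ["돈", "예금", "계좌"]),   # financial-institution sense
    ("bank_02", ["강", "물", "모래"]),     # riverside sense
])
print(disambiguate(["강", "모래", "산책"], senses))  # -> "bank_02"
```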

Classification of Urban Green Space Using Airborne LiDAR and RGB Ortho Imagery Based on Deep Learning (항공 LiDAR 및 RGB 정사 영상을 이용한 딥러닝 기반의 도시녹지 분류)

  • SON, Bokyung; LEE, Yeonsu; IM, Jungho
    • Journal of the Korean Association of Geographic Information Studies / v.24 no.3 / pp.83-98 / 2021
  • Urban green space is an important component for enhancing urban ecosystem health. Thus, identifying the spatial structure of urban green space is required to manage a healthy urban ecosystem. The Ministry of Environment has provided the level 3 land cover map (the map with the highest spatial resolution, 1 m), with a total of 41 classes, since 2010. However, specific urban green information such as street trees is identified merely as grassland, or not classified as a vegetated area at all, in the map. Therefore, this study classified detailed urban green information (i.e., tree, shrub, and grass), not included in the existing level 3 land cover map, using two types of high-resolution (<1 m) remote sensing data (i.e., airborne LiDAR and RGB ortho imagery) in Suwon, South Korea. U-Net, an image segmentation deep learning approach, was adopted to classify detailed urban green space. A total of three classification models (i.e., LRGB10, LRGB5, and RGB5) were proposed, depending on the target number of classes and the types of input data. The average overall accuracies for the test sites were 83.40% (LRGB10), 89.44% (LRGB5), and 74.76% (RGB5). Among the three models, LRGB5, which uses both airborne LiDAR and RGB ortho imagery with 5 target classes (i.e., tree, shrub, grass, building, and others), resulted in the best performance. The area ratio of total urban green space (based on tree, shrub, and grass information) for the entire Suwon was 45.61% (LRGB10), 43.47% (LRGB5), and 44.22% (RGB5). All models were able to provide an additional 13.40% of urban tree information on average when compared to the existing level 3 land cover map. Moreover, these urban green classification results are expected to be utilized in various urban green studies or decision-making processes, as they provide detailed information on urban green space.
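
The model described above fuses airborne LiDAR and RGB ortho imagery in a U-Net for 5-class segmentation. The PyTorch sketch below illustrates that kind of channel-level fusion with a deliberately tiny U-Net-style network; the channel counts, depth, and class labels are illustrative assumptions, not the architecture reported in the paper.

```python
# A minimal sketch of LiDAR + RGB fusion for semantic segmentation: LiDAR-derived
# rasters (e.g., height and intensity) are stacked with RGB ortho imagery and fed
# to a small U-Net-style encoder-decoder predicting 5 classes.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=5, n_classes=5):     # 3 RGB channels + 2 LiDAR channels (assumed)
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)             # 32 (skip connection) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                       # per-pixel class logits

rgb = torch.rand(1, 3, 256, 256)                   # ortho imagery tile
lidar = torch.rand(1, 2, 256, 256)                 # e.g., normalized height + intensity rasters
logits = TinyUNet()(torch.cat([rgb, lidar], dim=1))
pred = logits.argmax(dim=1)                        # (1, 256, 256) map of tree/shrub/grass/building/others
```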

Development of System for Real-Time Object Recognition and Matching using Deep Learning at Simulated Lunar Surface Environment (딥러닝 기반 달 표면 모사 환경 실시간 객체 인식 및 매칭 시스템 개발)

  • Jong-Ho Na; Jun-Ho Gong; Su-Deuk Lee; Hyu-Soung Shin
    • Tunnel and Underground Space / v.33 no.4 / pp.281-298 / 2023
  • Continuous research efforts are being devoted to unmanned mobile platforms for lunar exploration. There is an ongoing demand for real-time information processing to accurately determine the positioning and mapping of areas of interest on the lunar surface. To apply deep learning processing and analysis techniques to practical rovers, research on software integration and optimization is imperative. In this study, a foundational investigation was conducted on the real-time analysis of images of a virtual lunar base construction site, aimed at automatically quantifying spatial information of key objects. The study involved transitioning from an existing region-based object recognition algorithm to a bounding-box-based algorithm, thus enhancing object recognition accuracy and inference speed. To facilitate extensive data-based object matching training, the Batch Hard Triplet Mining technique was introduced, and research was conducted to optimize both training and inference processes. Furthermore, an improved software system for object recognition and identical object matching was integrated, accompanied by the development of visualization software for the automatic matching of identical objects within input images. Using video data captured in a simulated satellite setting for training objects and video data of moving objects for inference, training and inference for identical object matching were successfully executed. The outcomes of this research suggest the feasibility of constructing 3D spatial information from continuously captured video data of mobile platforms and utilizing it for positioning objects within regions of interest. These findings are expected to contribute to the integration of an automated on-site system for video-based construction monitoring and control of significant target objects within future lunar base construction sites.
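
One of the techniques named above, Batch Hard Triplet Mining, selects the hardest positive and hardest negative for each anchor within a batch before computing a margin-based triplet loss. The sketch below shows the standard batch-hard formulation in PyTorch; the embedding size, margin, and toy labels are illustrative, and this is not the project's actual training code.

```python
# A minimal sketch of Batch Hard Triplet Mining: for each anchor embedding in a
# batch, take its farthest same-identity sample (hardest positive) and its
# closest different-identity sample (hardest negative), then apply a margin loss.
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """embeddings: (N, D) feature vectors; labels: (N,) object identities."""
    dist = torch.cdist(embeddings, embeddings)            # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # (N, N) same-identity mask
    eye = torch.eye(len(labels), dtype=torch.bool)

    pos_mask = same & ~eye
    neg_mask = ~same
    # Hardest positive: farthest sample with the same identity.
    hardest_pos = (dist * pos_mask).max(dim=1).values
    # Hardest negative: closest sample with a different identity.
    inf = torch.full_like(dist, float("inf"))
    hardest_neg = torch.where(neg_mask, dist, inf).min(dim=1).values

    return torch.clamp(hardest_pos - hardest_neg + margin, min=0).mean()

# Toy usage: 6 embeddings belonging to 3 object identities.
emb = torch.randn(6, 128)
ids = torch.tensor([0, 0, 1, 1, 2, 2])
print(batch_hard_triplet_loss(emb, ids))
```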

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok; Lee, Hyun Jun; Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing content is becoming increasingly important as the volume of generated information continues to grow. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft are also focusing on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields expected to benefit from text data analysis because it constantly generates new information, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the flow of information is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm, and it is hard to extract high-quality triples. Second, it becomes more difficult for people to produce labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information searching, this study attempts to extract knowledge entities using a neural tensor network and to evaluate their performance. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to address the problems of previous research and to enhance the effectiveness of the model. The study has the following three contributions. First, it presents a practical and simple automatic knowledge extraction method. Second, it demonstrates the possibility of performance evaluation through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, analysts' reports on 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities based on appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, the same number of score functions as stocks are trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its prediction power and determine whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show much lower performance than average. This result may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find key entities, or combinations of them, that are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limitations remain; most notably, the especially poor performance of the model for a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
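
The scoring scheme described above trains one score function per stock and assigns a new entity to the stock whose function scores it highest. The sketch below uses a standard neural tensor network style bilinear layer for each per-stock score function; the dimensions, initialization, and the use of a single entity vector per score are illustrative assumptions rather than the paper's exact formulation.

```python
# A minimal sketch of per-stock NTN-style scoring: each stock has its own score
# function; a one-hot entity vector is scored by every function and the
# highest-scoring stock is predicted as the related item.
import torch
import torch.nn as nn

class StockScorer(nn.Module):
    def __init__(self, dim, k=4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # bilinear tensor slices
        self.V = nn.Linear(dim, k)                               # linear term
        self.u = nn.Linear(k, 1, bias=False)                     # combines the k slices

    def forward(self, e):                                        # e: (dim,) entity vector
        bilinear = torch.stack([e @ self.W[i] @ e for i in range(self.W.shape[0])])
        return self.u(torch.tanh(bilinear + self.V(e))).squeeze()

dim, stocks = 100, ["LG_ELECTRONICS", "KiaMtr", "Mando"]         # top-100 one-hot entity space
scorers = {s: StockScorer(dim) for s in stocks}                  # one score function per stock

entity = torch.zeros(dim); entity[17] = 1.0                      # one-hot vector of a new entity
predicted = max(stocks, key=lambda s: scorers[s](entity).item()) # stock with the highest score
print(predicted)
```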

Retail Product Development and Brand Management Collaboration between Industry and University Student Teams (산업여대학학생단대지간적령수산품개발화품패관리협작(产业与大学学生团队之间的零售产品开发和品牌管理协作))

  • Carroll, Katherine Emma
    • Journal of Global Scholars of Marketing Science / v.20 no.3 / pp.239-248 / 2010
  • This paper describes a collaborative project between academia and industry which focused on improving the marketing and product development strategies for two private label apparel brands of a large regional department store chain in the southeastern United States. The goal of the project was to revitalize product lines of the two brands by incorporating student ideas for new solutions, thereby giving the students practical experience with a real-life industry situation. There were a number of key players involved in the project. A privately-owned department store chain based in the southeastern United States, which was seeking an academic partner, had recognized a need to update two existing private label brands. They targeted middle-aged consumers looking for casual, moderately priced merchandise. The company was seeking to change direction with both packaging and presentation, and possibly product design. The branding and product development divisions of the company contacted professors in an academic department of a large southeastern state university. Two of the professors agreed that the task would be a good fit for their classes - one was a junior-level Intermediate Brand Management class; the other was a senior-level Fashion Product Development class. The professors felt that by working collaboratively on the project, students would be exposed to a real-world scenario within the security of an academic learning environment. Collaboration within an interdisciplinary team has the advantage of providing experiences and resources beyond the capabilities of a single student and adds "brainpower" to problem-solving processes (Lowman 2000). This goal of improving the capabilities of students directed the instructors in each class to form interdisciplinary teams between the Branding and Product Development classes. In addition, many universities are employing industry partnerships in research and teaching, where collaboration within temporal (semester) and physical (classroom/lab) constraints helps to increase students' knowledge and experience of a real-world situation. At the University of Tennessee, the Center of Industrial Services and UT-Knoxville's College of Engineering worked with a company to develop design improvements in its U.S. operations. In this study, because the work was with a private label retail brand, Wickett, Gaskill and Damhorst's (1999) revised Retail Apparel Product Development Model was used by the product development and brand management teams. This framework was chosen because it addresses apparel product development from the concept to the retail stage. Two classes were involved in this project: a junior-level Brand Management class and a senior-level Fashion Product Development class. Seven teams were formed, each including four students from Brand Management and two students from Product Development. The classes were taught the same semester, but not at the same time. At the beginning of the semester, each class was introduced to the industry partner and given the problem. Half the teams were assigned to the men's brand and half to the women's brand. The teams were responsible for devising approaches to the problem, formulating a timeline for their work, staying in touch with industry representatives, and making sure that each member of the team contributed in a positive way. The objective for the teams was to plan, develop, and present a product line using merchandising processes (following the Wickett, Gaskill and Damhorst model) and to develop new branding strategies for the proposed lines. The teams performed trend, color, fabrication, and target market research; developed sketches for a line; edited the sketches and presented their line plans; wrote specifications; fitted prototypes on fit models; and developed final production samples for presentation to industry. The branding students developed a SWOT analysis, a brand measurement report, a mind-map for the brands, and a fully integrated marketing report which was presented alongside the ideas for the new lines. In the future, if the opportunity arises to work in this collaborative way with an existing company that wishes to look at both branding and product development strategies, classes will be scheduled at the same time so that students have more time to meet and discuss timelines and assigned tasks. As it was, student groups had to meet outside of each class time, and this proved to be a challenging though not uncommon part of teamwork (Pfaff and Huddleston, 2003). Although the logistics of this exercise were time-consuming to set up and administer, professors felt that the benefits to students were multiple. The most important benefit, according to student feedback from both classes, was the opportunity to work with industry professionals, follow their process, and see the results of their work evaluated by the people who made the decisions at the company level. Faculty members were grateful to have a "real-world" case to work with in the classroom to provide focus. Creative ideas and strategies were traded as plans were made, extending and strengthening the departmental links between the branding and product development areas. By working not only with students coming from a different knowledge base, but also having to keep in contact with the industry partner and follow the framework and timeline of industry practice, student teams were challenged to produce excellent and innovative work under new circumstances. Working on the product development and branding for "real-life" brands that are struggling gave students an opportunity to see how closely their coursework ties in with the real world and how creativity, collaboration, and flexibility are necessary components of both the design and business aspects of company operations. Industry personnel were impressed by (a) the level and depth of knowledge and execution in the student projects, and (b) the creativity of new ideas for the brands.

Structural Adjustment of Domestic Firms in the Era of Market Liberalization (시장개방(市場開放)과 국내기업(國內企業)의 구조조정(構造調整))

  • Seong, So-mi
    • KDI Journal of Economic Policy / v.13 no.4 / pp.91-116 / 1991
  • Market liberalization progressing simultaneously with high and rapidly rising domestic wages has created an adverse business environment for domestic firms. Korean firms are losing their international competitiveness in comparison to firms from LDCs (less developed countries) in low-tech industries. In high-tech industries, domestic firms without government protection (which is impossible due to the liberalization policy and the current international status of the Korean economy) are in a disadvantaged position relative to firms from advanced countries. This paper examines the division of roles between the private sector and the government in order to achieve a successful structural adjustment, which has become the impending industrial policy issue caused by high domestic wages, on the one hand, and the opening of domestic markets, on the other. The micro foundation of the economy-wide structural adjustment is actually the restructuring of business portfolios at the firm level. Firm-level business restructuring means that firms in low-value-added businesses or with declining market niches establish new major businesses in higher-value-added segments or growing market niches. The adjustment of the business structure at the firm level can only be accomplished by accumulating the firm-specific managerial assets necessary to establish a new business structure. This can be done through learning-by-doing in the whole system of management, including research and development, manufacturing, and marketing. Therefore, voluntary cooperation among the people in the company is essential for making the cost of the learning process lower than that at competing companies. Hence, firms that attempt to restructure their major businesses need to induce corporate-wide participation through innovations in organization and management, encourage an innovative corporate culture, and maintain cooperative labor unions. Policy discussions on structural adjustment usually regard firms as a black box behind a few macro variables. But in reality, firm activities are not flows of materials but relationships among human resources. The growth potential of companies is embodied in the human resources of the firm; the balance of interests among stockholders, managers, and workers of the company brings about the accumulation of the company's core competencies. Therefore, policymakers and economists should change their old concept of the firm as a technological black box which produces marketable commodities. Firms should be regarded as coalitions of interest groups such as stockholders, managers, and workers. Consequently, the discussion of structural adjustment both at the macroeconomic level and at the firm level should be based on this new paradigm of understanding firms. The government's role in reducing the cost of structural adjustment and supporting the creation of new industries should emphasize the following: First, the government must promote competition in domestic markets by revising laws related to antitrust policy, bankruptcy, and the promotion of small and medium-sized companies. A general consensus on the limitations of government intervention and the merit of deregulation should be sought among policymakers and people in the business world. In the age of internationalization, nation-specific competitive advantages cannot be exclusively in favor of domestic firms. The international competitiveness of a domestic firm derives from the firm-specific core competencies which can be accumulated through internal investment and the organization of the firm. Second, the government must build up a solid infrastructure of production factors, including capital, technology, manpower, and information. Structural adjustment often entails bankruptcies and a partial waste of resources. However, it is desirable for the government not to try to sustain marginal businesses, but to support the diversification or restructuring of businesses by assisting in factor creation. Institutional support for venture businesses needs to be improved, especially in the financing system, since many investment projects in venture businesses are highly risky even though they are very promising. The proportion of low-value-added production processes and declining industries should be reduced by promoting foreign direct investment and factory automation. Moreover, one cannot over-emphasize the importance of future-oriented labor policies based on the new paradigm of understanding firm activities. The old laws and institutions related to labor unions need to be reformed. Third, the government must improve the regimes related to money, banking, and the tax system in order to change business practices that are dependent on government protection or undesirable in view of the evolution of the Korean economy as a whole. To prevent rational business decisions from contradicting the interest of the economy as a whole, the government should influence the business environment, not business itself.

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su; Hong, Seung Woo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.111-126 / 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to a competitive environment. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers. This is because retaining existing customers is far more economical than acquiring new ones; in fact, the acquisition cost of a new customer is known to be five to six times higher than the cost of retaining an existing customer. Also, companies that effectively prevent customer churn and improve customer retention rates are known to benefit not only from increased profitability but also from an improved brand image through higher customer satisfaction. Predicting customer churn, which had been conducted as a sub-research area of CRM, has recently become more important as a big-data-based performance marketing theme due to the development of business machine learning technology. Until now, research on customer churn prediction has been carried out actively in sectors such as the mobile telecommunication, financial, distribution, and game industries, which are highly competitive and for which churn management is urgent. In addition, these churn prediction studies focused on improving the performance of the churn prediction model itself, such as simply comparing the performance of various models, exploring features that are effective in forecasting churn, or developing new ensemble techniques, and they were limited in terms of practical utilization because most studies treated the entire customer base as a single group when developing a predictive model. As such, the main purpose of the existing related research was to improve the performance of the predictive model itself, and there was a relative lack of research on improving the overall customer churn prediction process. In fact, customers in a business have different behavioral characteristics due to heterogeneous transaction patterns, and the resulting churn rates differ, so it is unreasonable to treat the entire customer base as a single customer group. Therefore, it is desirable to segment customers according to customer classification criteria, such as loyalty, and to operate an appropriate churn prediction model for each segment, in order to carry out effective customer churn prediction in heterogeneous industries. Of course, there are some studies in which customers are subdivided using clustering techniques and a churn prediction model is applied to each individual customer group. Although this process can produce better predictions than a single prediction model for the entire customer population, there is still room for improvement in that clustering is a mechanical, exploratory grouping technique that calculates distances based on inputs and does not reflect the strategic intent of an entity, such as loyalty. This study proposes a segment-based customer churn prediction process (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation) based on two-dimensional customer loyalty, assuming that successful customer churn management can be better achieved through improvements in the overall process than through the performance of the model itself. CCP/2DL is a series of churn prediction processes that segment customers based on two dimensions of loyalty, quantitative and qualitative, conduct secondary grouping of the customer segments according to churn patterns, and then independently apply heterogeneous churn prediction models to each churn pattern group. Performance comparisons were conducted against the most commonly applied general churn prediction process and a clustering-based churn prediction process to assess the relative merit of the proposed process. The general churn prediction process used in this study refers to applying a machine learning model to all customers as a single group, using the most commonly used churn prediction approach. The clustering-based churn prediction process first uses clustering techniques to segment customers and then implements a churn prediction model for each individual group. In an empirical study conducted in cooperation with a global NGO, the proposed CCP/2DL showed better performance than the other methodologies for predicting churn. This churn prediction process is not only effective in predicting churn, but can also serve as a strategic basis for obtaining a variety of customer observations and carrying out other related performance marketing activities.
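
The CCP/2DL idea above comes down to segmenting customers on two loyalty dimensions and fitting a separate churn model per segment instead of one model for everyone. The sketch below illustrates that segment-then-predict pattern with scikit-learn; the loyalty proxies, the 2x2 thresholding, the synthetic data, and the choice of classifier are all illustrative assumptions, not the study's actual pipeline.

```python
# A minimal sketch of segment-based churn prediction: split customers on two
# loyalty dimensions (quantitative and qualitative), then train one churn
# model per segment and route new customers to their segment's model.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "purchase_amount": rng.gamma(2.0, 50, 1000),   # quantitative loyalty proxy
    "engagement_score": rng.uniform(0, 1, 1000),   # qualitative loyalty proxy
    "recency_days": rng.integers(1, 365, 1000),
    "churned": rng.integers(0, 2, 1000),           # synthetic labels for illustration
})

# Two-dimensional loyalty segmentation (a simple 2x2 median split here).
df["segment"] = (
    (df["purchase_amount"] > df["purchase_amount"].median()).astype(int) * 2
    + (df["engagement_score"] > df["engagement_score"].median()).astype(int)
)

features = ["purchase_amount", "engagement_score", "recency_days"]
models = {}
for seg, part in df.groupby("segment"):            # one churn model per segment
    models[seg] = GradientBoostingClassifier().fit(part[features], part["churned"])

# Scoring a customer routes it to its own segment's model.
customer = df.iloc[[0]]
prob = models[customer["segment"].iat[0]].predict_proba(customer[features])[:, 1]
print(prob)
```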

Development and Application of Earth Science Module Based on Earth System (지구계 주제 중심의 지구과학 모듈 개발 및 적용)

  • Lee, Hyo-Nyong; Kwon, Young-Ryun
    • Journal of the Korean earth science society / v.29 no.2 / pp.175-188 / 2008
  • The purposes of this study were to develop an Earth systems-based earth science module and to investigate the effects of its field application. The module was applied to two classrooms with a total of 76 second-year high school students in order to investigate its effectiveness. Data were collected from observations in earth science classrooms, interviews, and questionnaires. The findings were as follows. First, the Earth systems-based earth science module was designed to be associated with the aims of the national Earth Science Curriculum and to improve students' earth science literacy. The module was composed of two sections totaling seven instructional hours. The former section covered the understanding of the Earth system through its individual components, their characteristics, properties, and structure. The latter section, consisting of 4 instructional hours, dealt with earth environmental problems, the understanding of subsystems changing through natural processes and cycles, and human interactions and their effects upon Earth systems. Second, the module was helpful in learning the importance of understanding the interactions between water, rock, air, and life when it comes to understanding the Earth system, its components, characteristics, and properties. The Earth systems-based earth science module is a valuable and helpful instructional material which can enhance students' understanding of Earth systems and their earth science literacy.

A Coupled-ART Neural Network Capable of Modularized Categorization of Patterns (복합 특징의 분리 처리를 위한 모듈화된 Coupled-ART 신경회로망)

  • 우용태; 이남일; 안광선
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.10 / pp.2028-2042 / 1994
  • Properly defining signal and noise in a self-organizing system like the ART (Adaptive Resonance Theory) neural network model raises a number of subtle issues. Pattern context must enter the definition so that input features, treated as irrelevant noise when they are embedded in a given input pattern, may be treated as informative signals when they are embedded in a different input pattern. ART automatically self-scales its computational units to embody context- and learning-dependent definitions of signal and noise, and there is no problem in categorizing input patterns that have features similar in nature. However, when input patterns have features that differ in size and nature, the use of only one vigilance parameter is not enough to differentiate a signal from noise for good categorization. For example, if the value of the vigilance parameter is large, then noise may be processed as an informative signal and unnecessary categories are generated; and if the value of the vigilance parameter is small, an informative signal may be ignored and treated as noise. Hence it is not easy to achieve good pattern categorization. To overcome such problems, a Coupled-ART neural network capable of modularized categorization of patterns is proposed. The Coupled-ART has two layers of tightly coupled modules, the upper and the lower. The lower layer processes the global features and the structural features of a pattern separately, in parallel. The upper layer combines the categorized outputs from the lower layer and categorizes the combined output. Hence, due to the modularized categorization of patterns, the Coupled-ART classifies patterns more efficiently than the ART1 model.
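
To make the role of the vigilance parameter and the two-layer coupling concrete, the sketch below implements a bare-bones ART1-style categorizer and couples two lower-layer modules (global and structural features) under an upper-layer module, as described above. The choice function, fast-learning update, module split, and the way lower-layer categories are re-encoded for the upper layer are all simplified, illustrative assumptions rather than the proposed Coupled-ART model.

```python
# A minimal sketch of ART1-style categorization with a vigilance test, plus a
# simplified coupling of two lower-layer modules under one upper-layer module.
import numpy as np

class ART1:
    def __init__(self, vigilance=0.7, alpha=0.001):
        self.rho, self.alpha, self.weights = vigilance, alpha, []

    def categorize(self, pattern):
        pattern = np.asarray(pattern, dtype=float)
        # Try existing categories in descending order of the choice function.
        order = sorted(range(len(self.weights)),
                       key=lambda j: -np.minimum(pattern, self.weights[j]).sum()
                       / (self.alpha + self.weights[j].sum()))
        for j in order:
            match = np.minimum(pattern, self.weights[j]).sum() / pattern.sum()
            if match >= self.rho:                      # vigilance test passed: resonance
                self.weights[j] = np.minimum(pattern, self.weights[j])
                return j
        self.weights.append(pattern.copy())            # no resonance: create a new category
        return len(self.weights) - 1

# Lower layer: separate modules for two feature groups; upper layer combines them.
global_module, struct_module, upper = ART1(0.6), ART1(0.8), ART1(0.7)
pattern = np.array([1, 0, 1, 1,   0, 1, 1, 0])        # [global features | structural features]
g = global_module.categorize(pattern[:4])
s = struct_module.categorize(pattern[4:])
combined = np.zeros(8); combined[g] = 1; combined[4 + s] = 1
print(upper.categorize(combined))                     # upper-layer category of the combined output
```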

Manganese and Iron Interaction: a Mechanism of Manganese-Induced Parkinsonism

  • Zheng, Wei
    • Proceedings of the Korea Environmental Mutagen Society Conference / 2003.10a / pp.34-63 / 2003
  • Occupational and environmental exposure to manganese continues to represent a realistic public health problem in both developed and developing countries. The increased use of MMT as a replacement for lead in gasoline creates a new source of environmental exposure to manganese. It is, therefore, imperative that further attention be directed at the molecular neurotoxicology of manganese. A need for a more complete understanding of manganese functions both in health and disease, and for a better-defined role of manganese in iron metabolism, is well substantiated. In-depth studies in this area should provide novel information on the potential public health risk associated with manganese exposure. They will also explore novel mechanism(s) of manganese-induced neurotoxicity from the angle of Mn-Fe interaction at both the systemic and cellular levels. More importantly, the results of these studies will offer clues to the etiology of IPD and its associated abnormal iron and energy metabolism. To achieve these goals, however, a number of outstanding questions remain to be resolved. First, one must understand which species of manganese in biological matrices plays the critical role in the induction of neurotoxicity, Mn(II) or Mn(III). In our own studies with aconitase, Cpx-I, and Cpx-II, manganese was added to the buffers as the divalent salt, i.e., $MnCl_2$. While it is quite reasonable to suggest that the effect on aconitase and/or Cpx-I activities was associated with the divalent species of manganese, the experimental design does not preclude the possibility that a manganese species of higher oxidation state, such as Mn(III), is required for the induction of these effects. The ionic radius of Mn(III) is 65 pm, which is similar to the ionic radius of Fe(III) (65 pm in the high-spin state) in aconitase (Nieboer and Fletcher, 1996; Sneed et al., 1953). Thus it is plausible that the higher oxidation state of manganese fits optimally into the geometric space of aconitase, serving as the active species in this enzymatic reaction. In the current literature, most of the studies on manganese toxicity have used Mn(II) as $MnCl_2$ rather than Mn(III). The obvious advantage of Mn(II) is its good water solubility, which allows effortless preparation for either in vivo or in vitro investigation, whereas the poor solubility of almost all Mn(III) salt products renders the comparison between the two valent manganese species nearly infeasible. Thus a more intimate collaboration with physiochemists to develop a better way to study Mn(III) species in biological matrices is pressingly needed. Second, in spite of the special affinity of manganese for mitochondria and its chemical properties similar to those of iron, there is a sound reason to postulate that manganese may act as an iron surrogate in certain iron-requiring enzymes. It is, therefore, imperative to design physiochemical studies to determine whether manganese can indeed exchange with iron in proteins, and to understand how manganese interacts with the tertiary structure of proteins. Studies on the binding properties (such as affinity constant, dissociation parameter, etc.) of manganese and iron to key enzymes associated with iron and energy regulation would add to our knowledge of Mn-Fe neurotoxicity. Third, manganese exposure, either in vivo or in vitro, promotes cellular overload of iron. It is still unclear, however, how exactly manganese interacts with cellular iron regulatory processes and what mechanism underlies this cellular iron overload. As discussed above, the binding of IRP-I to TfR mRNA leads to the expression of TfR, thereby increasing cellular iron uptake. The sequence encoding TfR mRNA, in particular the IRE fragments, has been well documented in the literature. It is therefore possible to use molecular techniques to elaborate whether manganese cytotoxicity influences the mRNA expression of iron regulatory proteins and how manganese exposure alters the binding activity of IRPs to TfR mRNA. Finally, current manganese investigations have largely focused on issues ranging from disposition/toxicity studies to the characterization of clinical symptoms. Much less has been done regarding the risk assessment of environmental/occupational exposure. One of the unsolved, pressing puzzles is the lack of reliable biomarker(s) for manganese-induced neurologic lesions in long-term, low-level exposure situations. The lack of such a diagnostic means renders it impossible to assess the human health risk and long-term social impact associated with potentially elevated manganese in the environment. The biochemical interaction between manganese and iron, particularly the ensuing subtle changes in certain relevant proteins, provides the opportunity to identify and develop such a specific biomarker for manganese-induced neuronal damage. By learning the molecular mechanism of cytotoxicity, one will be able to find a better way to predict and treat manganese-initiated neurodegenerative diseases.
