• Title/Summary/Keyword: Matching Information (매칭정보)

Search Results: 1,686

The Importance of Qualitative Approach to Managing the Regulatory Lag of Convergence New Products: Focusing on the Certification of Compliance of New Products of Industrial Convergence (융합 신제품 규제 시차 관리를 위한 정성적 접근의 중요성: '산업융합 신제품의 적합성 인증제도'를 중심으로)

  • Kim, Hyung-Jin
    • Informatization Policy
    • /
    • v.29 no.3
    • /
    • pp.26-47
    • /
    • 2022
  • "The certification of compliance of new products of industrial convergence" (hereinafter referred to as "certification of compliance") is a legal certification system in accordance with the Industrial Convergence Promotion Act through which a convergence new product can be officially certified without legislation when the certification standards applicable to the product are not yet provided. Unlike other certification systems, the certification of compliance is characterized by the role of resolving the certification difficulties driven by the regulatory lag of convergence new products. Nevertheless, studies that analyzed the certification of compliance in detail from the viewpoint of regulatory improvement were surprisingly rare. Through the sequential matching of the steps of certification of compliance with the process from the occurrence of a regulatory problem to resolution, our study provided clear understanding as to how the regulatory lag could be reduced by the procedure for certification of compliance. Furthermore, we divided the perspective on regulatory lag management into quantitative and qualitative, and the structures and practices of certification of compliance were then analyzed from the two perspectives. By doing this, the present study emphasized that the fundamental reason the certification of compliance could effectively solve the regulatory lag problem of convergence new products was not only the quantitative elements such as legal deadlines for each step but also several qualitative approaches to securing the quality of every stage.

Various Quality Fingerprint Classification Using the Optimal Stochastic Models (최적화된 확률 모델을 이용한 다양한 품질의 지문분류)

  • Jung, Hye-Wuk;Lee, Jee-Hyong
    • Journal of the Korea Society for Simulation
    • /
    • v.19 no.1
    • /
    • pp.143-151
    • /
    • 2010
  • Fingerprint classification is a step that increases the efficiency of a 1:N fingerprint recognition system; it reduces fingerprint matching time and increases recognition accuracy. Classifying fingerprints is difficult because the ridge pattern of each fingerprint class overlaps with more than one class, fingerprint images may contain a great deal of noise, and input conditions can be exceptional. In this paper, we propose a novel approach that designs a stochastic model and performs fingerprint classification using the directional characteristics of fingerprints, for effective classification across varying image qualities. We compute directional values by scanning the fingerprint ridges pixel by pixel and extract a directional characteristic by merging the computed values over fixed blocks of pixels. A modified Markov model of each fingerprint class is generated from the extracted directional characteristic, using the Markov model as a stochastic method for information extraction and recognition. The weights of each class's classification model are determined by analyzing the state-transition matrices of the generated Markov models, and the optimal values that improve classification performance are estimated with a genetic algorithm (GA). Experiments applying a fingerprint database of various qualities show that the classification model optimized by the GA outperforms the model before optimization. The proposed method also classifies fingerprints effectively under exceptional input conditions because, as the analysis of the experimental database shows, the approach does not depend on the presence or absence of singular points.
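
As a rough illustration of the modeling step described above, the following Python sketch builds a Markov state-transition matrix from a sequence of quantized ridge directions and scores a fingerprint against per-class models with weights that a genetic algorithm could tune. The function names, the number of direction states, and the data layout are assumptions for illustration; the paper's actual feature extraction and GA setup are not reproduced here.

```python
import numpy as np

def direction_transition_matrix(directions, n_states=8):
    """Estimate a Markov transition matrix from a sequence of quantized
    ridge directions (integers in 0..n_states-1); Laplace-smoothed."""
    counts = np.ones((n_states, n_states))
    for a, b in zip(directions[:-1], directions[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def class_log_likelihood(directions, transition_matrix):
    """Log-likelihood of an observed direction sequence under one class model."""
    return sum(np.log(transition_matrix[a, b])
               for a, b in zip(directions[:-1], directions[1:]))

def classify(directions, class_models, weights):
    """Pick the class with the highest weighted log-likelihood.
    `weights` stands in for the per-class values a GA would optimize."""
    scores = {name: weights[name] * class_log_likelihood(directions, m)
              for name, m in class_models.items()}
    return max(scores, key=scores.get)
```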

A Study of Pre-Service Secondary Science Teacher's Conceptual Understanding on Carbon Neutral: Focused on Eye Tracking System (탄소중립에 관한 중등 과학 예비교사들의 개념 이해 연구 : 시선추적시스템을 중심으로)

  • Younjeong Heo;Shin Han;Hyoungbum Kim
    • Journal of the Korean Society of Earth Science Education
    • /
    • v.16 no.2
    • /
    • pp.261-275
    • /
    • 2023
  • The purpose of this study was to analyze the conceptual understanding of carbon neutrality among pre-service secondary school science teachers and to identify their gaze patterns on visual materials. Gaze-tracking data of 20 pre-service secondary school science teachers were analyzed. Through this, the participants' levels of conceptual understanding of carbon neutrality were categorized, and differences in gaze patterns were analyzed according to the degree of conceptual understanding. The research findings are as follows. First, in a modeling activity predicting carbon emissions and removals until 2100 using the concept of '2050 carbon neutrality,' 50% of the participants held the conception that carbon emissions would continue to increase. Additionally, 25% of the participants did not properly understand the causal relationship between net carbon dioxide emissions and cumulative concentrations. Second, the participants' gaze movements over visual materials related to carbon neutrality were significantly influenced by the information presented in the text area, and in the case of graphs, the focus was mainly on the data area. Moreover, when visual materials with the same function and category were arranged together, participants showed the most interest in materials explaining concepts or in materials placed on the left side, implying a preference for particular positions or ordering. Participants with lower levels of conceptual understanding and an inadequate grasp of causal relationships among elements exhibited notably reduced concentration and overall gaze flow. These findings suggest that conceptual understanding of carbon neutrality, including climate change and natural disasters, significantly influences interest in and engagement with visual materials.

Adaptive-learning Code Allocation Technique for Improving Dimming Level and Reducing Flicker in Visible Light Communication (가시광통신에서 Dimming Level 향상 및 Flicker 감소를 위한 적응-학습 코드할당 기법)

  • Lee, Kyu-Jin;Han, Doo-Hee
    • Journal of Convergence for Information Technology
    • /
    • v.12 no.2
    • /
    • pp.30-36
    • /
    • 2022
  • In this paper, we propose a technique that improves the dimming level of the lighting and reduces flicker when the lighting and communication functions of a visible light communication system are used at the same time. Visible light communication must satisfy both communication and lighting requirements. However, the existing data coding method reduces the overall brightness of the lighting, which degrades lighting performance and causes flicker. To solve this problem, we propose an adaptive-learning code allocation technique that assigns binary codes to transmitted characters and then optimizes and re-matches the assigned codes according to the frequency of occurrence of characters in the string. In this way, we studied a technique that can fulfill the role of lighting as well as the communication function by allocating codes so that consecutive 'OFF' patterns do not occur while keeping the dimming level of each character string as high as possible. The performance evaluation showed that the frequency of '1' increased significantly without a significant effect on overall communication performance while, conversely, the frequency of consecutive '0's decreased, indicating that the lighting performance of the system was greatly improved.
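
The core idea of allocating codes by character frequency can be sketched as follows: more frequent characters receive on-off-keying codewords with more '1' (light-on) bits, and codewords containing long runs of '0' are skipped. This is only an illustrative sketch under assumed parameters (8-bit codewords, a limit of three consecutive zeros inside a codeword); it does not reproduce the paper's adaptive-learning scheme, and zero runs across codeword boundaries are ignored for brevity.

```python
from collections import Counter

def allocate_codes(text, code_len=8):
    """Map frequent characters to codewords with more '1' bits (higher dimming),
    avoiding codewords with three or more consecutive '0' bits."""
    candidates = sorted(
        (format(i, f"0{code_len}b") for i in range(2 ** code_len)),
        key=lambda w: -w.count("1"),
    )
    candidates = [w for w in candidates if "000" not in w]
    freq_order = [ch for ch, _ in Counter(text).most_common()]
    return {ch: candidates[i] for i, ch in enumerate(freq_order)}

message = "HELLO VLC WORLD"
table = allocate_codes(message)
stream = "".join(table[ch] for ch in message)
print(f"dimming level: {stream.count('1') / len(stream):.2f}")
```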

A Pre-Selection of Candidate Units Using Accentual Characteristic In a Unit Selection Based Japanese TTS System (일본어 악센트 특징을 이용한 합성단위 선택 기반 일본어 TTS의 후보 합성단위의 사전선택 방법)

  • Na, Deok-Su;Min, So-Yeon;Lee, Kwang-Hyoung;Lee, Jong-Seok;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.4
    • /
    • pp.159-165
    • /
    • 2007
  • In this paper, we propose a new pre-selection method for candidate units that is suitable for a unit-selection-based Japanese TTS system. A general pre-selection method is performed by calculating a context-dependent cost within an intonation phrase (IP). Unlike other languages, however, Japanese has an accent represented as the relative height of pitch, and several words form a single accentual phrase; the prosody in Japanese also changes in accentual-phrase units. By reflecting such prosodic change in pre-selection, the quality of synthesized speech can be improved. Furthermore, calculating a context-dependent cost within the accentual phrase rather than the intonation phrase also improves synthesis speed. The proposed method defines the accentual phrase (AP), analyzes APs in context, and performs pre-selection using accentual-phrase matching, which calculates the CCL (connected context length) of the candidates of each phoneme to be synthesized in each accentual phrase. The baseline system used with the proposed method is VoiceText, a synthesizer from Voiceware. Evaluations were made on perceptual errors (intonation errors and concatenation mismatch errors) and on synthesis time. Experimental results showed that the proposed method improved the quality of synthesized speech and shortened the synthesis time.
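
A minimal sketch of the pre-selection idea is given below, assuming candidate units and the target accentual phrase are both represented as plain phoneme lists: for each candidate, a connected context length (CCL) is computed as the number of consecutive matching phonemes around the target position, and only the best-scoring candidates are kept. The data representation and this particular CCL definition are assumptions; the paper's cost calculation is richer than this.

```python
def connected_context_length(target, candidate, pos):
    """Number of consecutive phonemes around position `pos` that agree between
    the target accentual phrase and the context stored with a candidate unit."""
    right = 0
    while (pos + right < len(target) and pos + right < len(candidate)
           and target[pos + right] == candidate[pos + right]):
        right += 1
    left = 0
    while (0 <= pos - 1 - left < len(candidate)
           and target[pos - 1 - left] == candidate[pos - 1 - left]):
        left += 1
    return left + right

def preselect(candidates, target, pos, keep=10):
    """Keep the candidates whose connected context within the accentual phrase is longest."""
    ranked = sorted(candidates,
                    key=lambda c: connected_context_length(target, c, pos),
                    reverse=True)
    return ranked[:keep]
```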

Methodology for Issue-related R&D Keywords Packaging Using Text Mining (텍스트 마이닝 기반의 이슈 관련 R&D 키워드 패키징 방법론)

  • Hyun, Yoonjin;Shun, William Wong Xiu;Kim, Namgyu
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.57-66
    • /
    • 2015
  • Considerable research efforts are being directed towards analyzing unstructured data such as text files and log files using commercial and noncommercial analytical tools. In particular, researchers are trying to extract meaningful knowledge through text mining not only in business but also in many other areas such as politics, economics, and cultural studies. For instance, several studies have examined national pending issues by analyzing large volumes of text on various social issues. However, it is difficult to provide successful information services that can identify R&D documents on specific national pending issues. While users may specify certain keywords relating to national pending issues, they usually fail to retrieve appropriate R&D information, primarily because of discrepancies between these terms and the corresponding terms actually used in the R&D documents. Thus, we need an intermediate logic to overcome these discrepancies and to identify and package appropriate R&D information on specific national pending issues. To address this requirement, three methodologies are proposed in this study: a hybrid methodology for extracting and integrating keywords pertaining to national pending issues, a methodology for packaging R&D information that corresponds to national pending issues, and a methodology for constructing an associative issue network based on relevant R&D information. Data analysis techniques such as text mining, social network analysis, and association rules mining are utilized to establish these methodologies. In the experiments, the keyword enhancement rate achieved by the proposed integration methodology was about 42.8%. For the second objective, three key analyses were conducted and a number of association rules between national pending issue keywords and R&D keywords were derived. The experiment for the third objective, issue clustering based on R&D keywords, is still in progress and is expected to yield tangible results in the future.
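
To make the intermediate logic concrete, the sketch below derives simple "issue keyword → R&D keyword" association rules from keyword co-occurrence in a document collection, using support and confidence thresholds. The document representation, keyword sets, and thresholds are illustrative assumptions; the study additionally uses text mining and social network analysis, which are not shown here.

```python
from collections import defaultdict
from itertools import product

def issue_to_rnd_rules(documents, issue_keywords, rnd_keywords,
                       min_support=0.05, min_conf=0.3):
    """Derive 'issue keyword -> R&D keyword' rules from co-occurrence.
    `documents` is a list of sets of terms; keyword arguments are sets."""
    n = len(documents)
    count = defaultdict(int)   # keyword -> number of documents containing it
    pair = defaultdict(int)    # (issue, rnd) -> co-occurrence count
    for doc in documents:
        for k in issue_keywords | rnd_keywords:
            if k in doc:
                count[k] += 1
        for i, r in product(issue_keywords, rnd_keywords):
            if i in doc and r in doc:
                pair[(i, r)] += 1
    rules = []
    for (i, r), c in pair.items():
        support, confidence = c / n, c / count[i]
        if support >= min_support and confidence >= min_conf:
            rules.append((i, r, round(support, 3), round(confidence, 3)))
    return rules
```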

Recognition of Resident Registration Card using ART2-based RBF Network and face Verification (ART2 기반 RBF 네트워크와 얼굴 인증을 이용한 주민등록증 인식)

  • Kim Kwang-Baek;Kim Young-Ju
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.1
    • /
    • pp.1-15
    • /
    • 2006
  • In Korea, a resident registration card carries various personal information such as the holder's present address, resident registration number, face photograph, and fingerprint. The plastic resident card currently in use is easy to forge or alter, and forgery techniques grow more sophisticated over time, so whether a card is forged is difficult to judge by visual inspection alone. This paper proposes an automatic recognition method for the resident card that recognizes the resident registration number using a newly proposed, refined ART2-based RBF network and authenticates the face photograph with a template image matching method. The proposed method first extracts the areas containing the resident registration number and the date of issue from the card image by applying Sobel masking, median filtering, and horizontal smearing operations in turn. To improve the extraction of individual codes from these areas, the original image is binarized using a high-frequency passing filter, and CDM masking is applied to the binarized image to enhance the image information of the individual codes. Finally, the individual codes to be recognized are extracted by applying a 4-directional contour tracking algorithm to the extracted areas of the binarized image. To recognize the individual codes, this paper proposes a refined ART2-based RBF network that applies ART2 as the learning structure of the middle layer and dynamically adjusts the learning rate during the training of the middle and output layers using a fuzzy control method, thereby improving learning performance. In addition, for precise judgment of forgery of a resident card, the proposed method supports face authentication using a face template database and a template image matching method. For performance evaluation, this paper created altered versions of original resident card images, such as forged face photographs, added noise, variations of contrast and intensity, and image blurring, and applied these images along with the originals in experiments. The results showed that the proposed method performs well in recognizing the individual codes and in face authentication for the automatic recognition of a resident card.
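
For the face-authentication step, template image matching can be illustrated with normalized cross-correlation between an extracted face region and stored template images, as in the sketch below. The similarity measure, the grayscale-array representation, and the acceptance threshold are assumptions; the abstract does not specify these details.

```python
import numpy as np

def normalized_cross_correlation(face, template):
    """Similarity between an extracted face region and a stored template,
    both given as equal-sized grayscale arrays; returns a value in [-1, 1]."""
    f = face - face.mean()
    t = template - template.mean()
    denom = np.sqrt((f ** 2).sum() * (t ** 2).sum())
    return float((f * t).sum() / denom) if denom else 0.0

def verify(face, templates, threshold=0.6):
    """Accept the face if its best template score exceeds an illustrative threshold."""
    return max(normalized_cross_correlation(face, t) for t in templates) >= threshold
```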

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.149-161
    • /
    • 2014
  • The export of domestic public services to overseas markets contains many potential obstacles, stemming from different export procedures, the target services, and socio-economic environments. In order to alleviate these problems, the business incubation platform as an open business ecosystem can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and testing of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology are composed of categories including objective, requirements, activity, and service. The objective category, which has sub-attributes including the operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation. Sub-attributes of requirements are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase. The activity category also has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. The key attributes of the requirements ontology are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business law, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; the activity attribute represents business processes in detail. 
The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country and/or the priority list of target countries for a certain public service is generated by a matching algorithm. These lists are used as input seeds to simulate consortium partners and the government's policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered an alternative choice, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. It could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
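
The sketch below shows, in simplified form, how attribute sets drawn from the service and country ontologies could drive a matching algorithm that produces the priority lists mentioned above. The class definitions, attribute names, and overlap-based score are illustrative assumptions, not the platform's actual ontology schema or matching logic.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    name: str
    requirements: set = field(default_factory=set)   # business/technology/constraint tags
    activities: set = field(default_factory=set)

@dataclass
class CountryProfile:
    name: str
    environment: set = field(default_factory=set)    # infrastructure, law, regulation tags
    capabilities: set = field(default_factory=set)

def match_score(service, country):
    """Toy score: fraction of service requirements covered by the target country's
    environment and capabilities; stands in for the unspecified matching algorithm."""
    if not service.requirements:
        return 0.0
    covered = service.requirements & (country.environment | country.capabilities)
    return len(covered) / len(service.requirements)

def priority_list(service, countries):
    """Rank candidate target countries for one public service."""
    return sorted(countries, key=lambda c: match_score(service, c), reverse=True)
```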

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and from the job classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. Therefore, a new job classification system that SW companies, SW job seekers, and job sites can all understand is needed. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF based on the job posting information of major job sites and the NCS (National Competency Standards). For this purpose, an association analysis between the occupations of major job sites is conducted, and association rules between the SQF and these occupations are derived. Using these association rules, we propose an intelligent job classification system based on data that maps the job classification systems of the major job sites to the SQF. First, major job sites are selected to obtain information on the job classification system of the SW market. Then we identify ways to collect job information from each site and collect the data through open APIs. Focusing on the relationships in the data, only job postings published on multiple job sites at the same time are retained; other postings are discarded. Next, we map the job classification systems between job sites using the association rules derived from the association analysis, complete the mapping between these market segments, discuss it with experts, further map it to the SQF, and finally propose a new job classification system. As a result, more than 30,000 job listings were collected in XML format through the open APIs of 'WORKNET,' 'JOBKOREA,' and 'saramin,' the main job sites in Korea. After filtering the data down to about 900 job postings simultaneously posted on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent pattern mining method. Based on these 800 rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first- through fourth-level classifications. In the new job taxonomy, the first primary class (IT consulting, computer system, network, and security related jobs) consisted of three secondary, five tertiary, and five fourth-level classifications. The second primary class (database and system operation related jobs) consisted of three secondary, three tertiary, and four fourth-level classifications. The third primary class (web planning, web programming, web design, and game) was composed of four secondary, nine tertiary, and two fourth-level classifications. The last primary class (ICT management and computer and communication engineering technology related jobs) consisted of three secondary and six tertiary classifications. In particular, the new job classification system has a relatively flexible depth of classification, unlike existing systems: WORKNET divides jobs into three levels; JOBKOREA divides jobs into two levels and then subdivides them into keywords; saramin likewise divides jobs into two levels with keyword-form subdivisions. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs. 
In this classification system, some jobs stop at the second level, while others are subdivided down to the fourth level. This reflects the idea that not all jobs can be broken down into the same number of levels. We also combined rules derived from the collected market data through association analysis with experts' opinions. Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing job classification systems. This study is meaningful in that it suggests a new job classification system that reflects market demand by attempting data-based mapping between occupations through association analysis rather than relying on the intuition of a few experts. However, this study has a limitation in that it cannot fully reflect market demand that changes over time, because the data were collected at a single point in time. As market demands change over time, including seasonal factors and major corporate public recruitment timings, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to be transferable to other industries, building on its success in the SW field.
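
The association-analysis step can be sketched as follows: each posting that appears on several job sites is represented as a set of site-qualified categories, and pairwise rules with support and confidence are mined across sites. This is a simplified stand-in for the Apriori run described above; the category labels, thresholds, and restriction to pairwise rules are assumptions for illustration.

```python
from collections import Counter
from itertools import combinations

def job_category_rules(postings, min_support=0.01, min_conf=0.5):
    """Each posting is a set of site-qualified categories it was filed under,
    e.g. {'WORKNET:web_dev', 'JOBKOREA:frontend'}. Derive pairwise rules
    'A -> B' with their support and confidence."""
    n = len(postings)
    item_count = Counter()
    pair_count = Counter()
    for p in postings:
        item_count.update(p)
        pair_count.update(combinations(sorted(p), 2))
    rules = []
    for (a, b), c in pair_count.items():
        if c / n < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            conf = c / item_count[lhs]
            if conf >= min_conf:
                rules.append((lhs, rhs, round(c / n, 3), round(conf, 3)))
    return rules
```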

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from the market capitalization and stock price volatility of each company based on the Merton model. This resolved the problem of data imbalance caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted only on corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The model can therefore provide stable default risk assessment services to unlisted companies for which proper default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although there have recently been active studies predicting corporate default risk with machine learning, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely used in the market and the sensitivity to differences in default risk is high; strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of the adequacy of evaluation methods, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine-learning-based default risk prediction models, which take less time to calculate. To obtain the sub-model forecasts used as input data for the Stacking Ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power with the Stacking Ensemble model, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the Stacking Ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the Stacking Ensemble model and each individual model, pairs of forecasts from the Stacking Ensemble model and each individual model were constructed. 
Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, we used the nonparametric Wilcoxon rank-sum test to check whether the two forecasts that make up each pair showed statistically significant differences. The analysis showed that the forecasts of the Stacking Ensemble model differed statistically significantly from those of the MLP model and the CNN model. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine-learning-based bankruptcy risk prediction methodologies, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help design models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine-learning-based models.
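
A compact sketch of the stacking setup described above is given below, using scikit-learn's StackingRegressor with out-of-fold predictions from seven cross-validation splits feeding a linear meta-learner. The synthetic data, the choice of Random Forest and MLP as sub-models (the paper also uses a CNN), and all hyperparameters are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: X mimics financial-statement and ratio features,
# y mimics a Merton-style continuous default risk in (0, 1).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = 1 / (1 + np.exp(-(X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=1000))))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Out-of-fold stacking: sub-model predictions from the cv splits become the
# meta-learner's inputs, echoing the seven-way split described in the abstract.
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
    ],
    final_estimator=LinearRegression(),
    cv=7,
)
stack.fit(X_tr, y_tr)
print("test R^2:", round(stack.score(X_te, y_te), 3))
```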