• Title/Summary/Keyword: Limit

Search Results: 15,819

Analysis of HBeAg and HBV DNA Detection in Hepatitis B Patients Treated with Antiviral Therapy (항 바이러스 치료중인 B형 간염환자에서 HBeAg 및 HBV DNA 검출에 관한 분석)

  • Cheon, Jun Hong;Chae, Hong Ju;Park, Mi Sun;Lim, Soo Yeon;Yoo, Seon Hee;Lee, Sun Ho
    • The Korean Journal of Nuclear Medicine Technology / v.23 no.1 / pp.35-39 / 2019
  • Purpose: Hepatitis B virus (HBV) infection is a major worldwide public health problem and is known as a major cause of chronic hepatitis, liver cirrhosis, and liver cancer. Serologic testing for HBV is therefore essential for diagnosing and treating these diseases. In addition, with the development of molecular diagnostics, the detection of HBV DNA in serum is used to diagnose HBV infection and is recognized as an important indicator for assessing the response to antiviral treatment. We performed HBeAg assays using an immunoradiometric assay (IRMA) and a chemiluminescent microparticle immunoassay (CMIA) in hepatitis B patients treated with antiviral agents, and measured and compared the detection rate of HBV DNA in serum by real-time polymerase chain reaction (RT-PCR). Materials and Methods: HBeAg serologic tests and HBV DNA quantification were conducted on 270 hepatitis B patients undergoing antiviral treatment after a diagnosis of HBV infection. Two serologic tests (IRMA, CMIA) with different detection principles were applied for the HBeAg test. Serum HBV DNA was quantitatively measured by real-time PCR using the Abbott m2000 System. Results: The detection rate of HBeAg was 24.1% (65/270) for IRMA and 82.2% (222/270) for CMIA. The detection rate of serum HBV DNA by real-time PCR was 29.3% (79/270). The measured serum HBV DNA concentration was 4.8×10⁷ ± 1.9×10⁸ IU/mL (mean ± SD), with a minimum of 16 IU/mL and a maximum of 1.0×10⁹ IU/mL; the reference value for the quantitative detection limit was 15 IU/mL. The detection rates and concentrations of HBV DNA by group, according to the HBeAg serologic (IRMA, CMIA) results, were as follows: 1) Group I (IRMA negative, CMIA positive, N = 169), HBV DNA detection rate 17.7% (30/169), 6.8×10⁵ ± 1.9×10⁶ IU/mL; 2) Group II (IRMA positive, CMIA positive, N = 53), HBV DNA detection rate 62.3% (33/53), 1.1×10⁸ ± 2.8×10⁸ IU/mL; 3) Group III (IRMA negative, CMIA negative, N = 36), HBV DNA detection rate 36.1% (13/36), 3.0×10⁵ ± 1.1×10⁶ IU/mL; 4) Group IV (IRMA positive, CMIA negative, N = 12), HBV DNA detection rate 25% (3/12), 1.3×10³ ± 1.1×10³ IU/mL. Conclusion: The HBeAg detection rate differed greatly between the two serologic tests. This difference is thought to arise from a number of factors, such as the characteristics and epitopes of the antibodies used in each assay kit and the HBV genotype. The group-wise HBV DNA detection rates and concentrations, classified by the serologic results, confirmed the highest detection rate and concentration in Group II (IRMA positive, CMIA positive, N = 53).
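The four groups above are simply the cross-classification of the two HBeAg assay results, with the per-group HBV DNA detection rate computed as the share of quantifiable samples. A minimal sketch of that tabulation, using hypothetical per-patient records rather than the study's actual data, might look like this:

```python
# Illustrative only: cross-classify HBeAg results from two assays (IRMA, CMIA)
# and compute per-group HBV DNA detection rates. The records below are made up;
# the study's real data are not reproduced here.
from collections import defaultdict

# each record: (irma_positive, cmia_positive, hbv_dna_iu_per_ml or None if not detected)
patients = [
    (False, True, None), (False, True, 3.2e5), (True, True, 1.5e8),
    (False, False, None), (True, False, 1.1e3), (True, True, None),
]

LOQ = 15  # quantitative detection limit of the RT-PCR assay, IU/mL

groups = defaultdict(list)
for irma, cmia, dna in patients:
    groups[(irma, cmia)].append(dna)

for (irma, cmia), values in groups.items():
    detected = [v for v in values if v is not None and v >= LOQ]
    rate = len(detected) / len(values)
    label = f"IRMA {'pos' if irma else 'neg'}, CMIA {'pos' if cmia else 'neg'}"
    print(f"{label}: N={len(values)}, HBV DNA detection rate={rate:.1%}")
```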

Analysis of Metadata Standards of Record Management for Metadata Interoperability From the viewpoint of the Task model and 5W1H (메타데이터 상호운용성을 위한 기록관리 메타데이터 표준 분석 5W1H와 태스크 모델의 관점에서)

  • Baek, Jae-Eun;Sugimoto, Shigeo
    • The Korean Journal of Archival Studies / no.32 / pp.127-176 / 2012
  • Metadata is well recognized as one of the foundational factors in archiving and long-term preservation of digital resources. There are several metadata standards for records management, archives, and preservation, e.g. ISAD(G), EAD, AGRkMS, PREMIS, and OAIS. Careful consideration is important in selecting appropriate metadata standards in order to design a metadata schema that meets the requirements of a particular archival system, and interoperability of metadata with other systems should be considered in schema design. In our previous research, we presented a feature analysis of metadata standards by identifying the primary resource lifecycle stages where each standard is applied, and clarified that no single metadata standard can cover the whole records lifecycle for archiving and preservation. Through this feature analysis, we analyzed the features of metadata across the whole records lifecycle and clarified the relationships between the metadata standards and the stages of the lifecycle; more detailed analysis was left for future study. This paper proposes to analyze metadata schemas from the viewpoint of the tasks performed in the lifecycle. Metadata schemas are primarily defined to describe properties of a resource in accordance with the purposes of description, e.g. finding aids, records management, preservation, and so forth. In other words, the metadata standards are resource- and purpose-centric, and the resource lifecycle is not explicitly reflected in the standards; there are no systematic methods for mapping between different metadata standards in accordance with the lifecycle. This paper proposes a method for mapping between metadata standards based on the tasks contained in the resource lifecycle. We first propose a Task Model to clarify the tasks applied to resources at each stage of the lifecycle. This model is created as a task-centric model to identify features of metadata standards and to create mappings among elements of those standards. It is important to categorize the elements in order to limit the semantic scope of mapping among elements and to decrease the number of combinations of elements for mapping. This paper proposes to use the 5W1H (Who, What, Why, When, Where, How) model to categorize the elements. 5W1H categories are generally used for describing events, e.g. in news articles. As performing a task on a resource causes an event, and metadata elements are used in that event, we consider the 5W1H categories adequate for categorizing the elements. Using these categories, we determine the features of every element of the metadata standards AGLS, AGRkMS, PREMIS, EAD, and OAIS, and of an attribute set extracted from the DPC decision flow. Then we perform element mapping between the standards and find the relationships between them. In this study, we defined a set of terms for each of the 5W1H categories, which typically appear in the definition of an element, and used those terms to categorize the elements. For example, if the definition of an element includes terms such as 'person' and 'organization', which denote an agent that contributes to creating or modifying a resource, the element is categorized into the Who category. A single element can be categorized into one or more 5W1H categories. Thus, we categorized every element of the metadata standards using the 5W1H model, and then carried out mapping among the elements in each category.
We conclude that the Task Model provides a new viewpoint on metadata schemas and is useful in helping us understand the features of metadata standards for records management and archives. The 5W1H model, which is defined based on the Task Model, provides us with a core set of categories to semantically classify metadata elements from the viewpoint of an event caused by a task.
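The categorization step described above is essentially keyword matching between element definitions and term sets defined per 5W1H category. A minimal sketch of that idea, with made-up term sets and element definitions (not the paper's actual term lists), might look like this:

```python
# Illustrative sketch of 5W1H categorization by term matching.
# The term sets and element definitions below are hypothetical examples,
# not the ones defined in the paper.
CATEGORY_TERMS = {
    "Who":   {"person", "organization", "agent", "creator"},
    "What":  {"content", "format", "identifier", "title"},
    "Why":   {"purpose", "function", "mandate"},
    "When":  {"date", "time"},
    "Where": {"location", "place", "repository"},
    "How":   {"process", "method", "software", "procedure"},
}

elements = {
    "PREMIS:agentName": "A text string that identifies the person or organization acting as an agent.",
    "EAD:unitdate":     "The date of creation of the described materials.",
}

def categorize(definition):
    """Return every 5W1H category whose terms appear in the element definition."""
    text = definition.lower()
    return {cat for cat, terms in CATEGORY_TERMS.items()
            if any(term in text for term in terms)}

for name, definition in elements.items():
    print(name, "->", sorted(categorize(name + " " + definition)))
```

Mapping would then be restricted to element pairs that share at least one category, which is what limits the semantic scope and the number of element combinations to compare.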

Color-related Query Processing for Intelligent E-Commerce Search (지능형 검색엔진을 위한 색상 질의 처리 방안)

  • Hong, Jung A;Koo, Kyo Jung;Cha, Ji Won;Seo, Ah Jeong;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.109-125 / 2019
  • As interest in intelligent search engines increases, various studies have been conducted to extract and utilize product-related features intelligently. In particular, when users search for goods in e-commerce search engines, the 'color' of a product is an important feature that describes the product. Therefore, it is necessary to deal with synonyms of color terms in order to produce accurate results for users' color-related queries. Previous studies have suggested a dictionary-based approach to processing synonyms for color features. However, the dictionary-based approach has the limitation that it cannot handle unregistered color-related terms in user queries. In order to overcome this limitation of the conventional methods, this research proposes a model that extracts RGB values from an internet search engine in real time and outputs similar color names based on the designated color information. First, a color term dictionary was constructed that includes color names and the R, G, and B values of each color, taken from the Korean color standard digital palette program and the Wikipedia color list, for basic color search. The dictionary was made more robust by adding 138 color names transliterated into Korean from English color names, together with their corresponding RGB values, so the final color dictionary includes a total of 671 color names and corresponding RGB values. The method proposed in this research starts from the specific color a user searched for. Then, the presence of the searched color in the built-in color dictionary is checked. If the color exists in the dictionary, its RGB values in the dictionary are used as the reference values of the retrieved color. If the searched color does not exist in the dictionary, the top-5 Google image search results for the searched color are crawled and average RGB values are extracted from a certain middle area of each image. To extract the RGB values from images, a variety of different approaches were attempted, since simply taking the average of the RGB values of the center area of the images has its limits. As a result, clustering the RGB values in a certain area of the image and taking the average value of the cluster with the highest density as the reference values showed the best performance. Based on the reference RGB values of the searched color, the RGB values of all the colors in the previously constructed color dictionary are compared, and a color list is created with colors within a range of ±50 for each of the R, G, and B values. Finally, using the Euclidean distance between the above results and the reference RGB values of the searched color, the color with the highest similarity among up to five colors becomes the final outcome. In order to evaluate the usefulness of the proposed method, we performed an experiment in which 300 color names and corresponding RGB values were obtained through questionnaires. They were used to compare the RGB values obtained from four different methods, including the proposed method. The average Euclidean distance in CIE-Lab using our method was about 13.85, a relatively low distance compared to 3088 for the case using the synonym dictionary only and 30.38 for the case using the dictionary together with the Korean synonym website WordNet. The case that did not use the clustering step of the proposed method showed an average Euclidean distance of 13.88, which implies that the DBSCAN clustering of the proposed method can reduce the Euclidean distance.
This research suggests a new color synonym processing method based on RGB values that combines the dictionary method with real-time synonym processing for new color names. This method overcomes the limitation of the conventional dictionary-based synonym processing approach. This research can contribute to improving the intelligence of e-commerce search systems, especially for the color search feature.
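Putting the pipeline above together in code form helps make the flow concrete: dictionary lookup, reference RGB extraction via density-based clustering, a ±50 candidate filter, and Euclidean ranking. The sketch below is an assumption-laden illustration, not the authors' implementation: the dictionary contents are placeholders, and the image-crawling step is replaced by a pre-collected array of pixel RGB values.

```python
# Minimal sketch of the color-query flow described above (illustrative only).
# COLOR_DICT entries and pixel data are placeholders, not the paper's dictionary.
import numpy as np
from sklearn.cluster import DBSCAN

COLOR_DICT = {          # color name -> (R, G, B); the real dictionary has 671 entries
    "crimson": (220, 20, 60),
    "scarlet": (255, 36, 0),
    "navy": (0, 0, 128),
    "salmon": (250, 128, 114),
}

def reference_rgb(query, crawled_pixels=None):
    """Return reference RGB for the query: dictionary hit, or the mean of the
    densest DBSCAN cluster of pixels sampled from crawled images."""
    if query in COLOR_DICT:
        return np.array(COLOR_DICT[query], dtype=float)
    labels = DBSCAN(eps=10, min_samples=5).fit_predict(crawled_pixels)
    valid = labels[labels >= 0]
    densest = np.bincount(valid).argmax()          # cluster with the most pixels
    return crawled_pixels[labels == densest].mean(axis=0)

def similar_colors(ref, top_k=5):
    """Colors within +/-50 per channel, ranked by Euclidean distance to the reference."""
    candidates = []
    for name, rgb in COLOR_DICT.items():
        rgb = np.array(rgb, dtype=float)
        if np.all(np.abs(rgb - ref) <= 50):
            candidates.append((name, float(np.linalg.norm(rgb - ref))))
    return sorted(candidates, key=lambda x: x[1])[:top_k]

# Example: an unregistered color term, with random pixels standing in for crawled images.
pixels = np.random.default_rng(0).normal(loc=(230, 60, 70), scale=5, size=(200, 3))
ref = reference_rgb("coral red", crawled_pixels=pixels)
print(similar_colors(ref))
```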

A Study on Estimating Shear Strength of Continuum Rock Slope (연속체 암반비탈면의 강도정수 산정 연구)

  • Kim, Hyung-Min;Lee, Su-gon;Lee, Byok-Kyu;Woo, Jae-Gyung;Hur, Ik;Lee, Jun-Ki
    • Journal of the Korean Geotechnical Society / v.35 no.5 / pp.5-19 / 2019
  • Considering the natural phenomenon in which steep slopes (65°~85°) consisting of rock mass remain stable for decades, slopes steeper than 1:0.5 (the standard slope angle for blast rock) may be applied, at the design and initial construction stages, under geotechnical conditions similar to the above. In the process of analysing the stability of a good to fair continuum rock slope that can be designed as a steep slope, a general method of estimating rock mass strength properties from a design practice perspective was required; practical and generalized engineering methods of determining the properties of a rock mass are important for such slopes. The Generalized Hoek-Brown (H-B) failure criterion and the GSI (Geological Strength Index), revised and supplemented by Hoek et al. (2002), are established rock mass characterization systems that fully take into account the effects of discontinuities, and are widely utilized as a method for calculating equivalent Mohr-Coulomb (M-C) shear strength (by balancing the areas) according to stress changes. The concept of calculating equivalent M-C shear strength according to the change of the confining stress range was thus proposed; however, on a slope the equivalent shear strength changes sensitively with changes in the maximum confining stress (σ'3max, or normal stress), making it difficult to use in practical design. In this study, a method of estimating the strength properties (an iso-angle division method) that can be applied universally within the maximum confining stress range for a good to fair continuum rock mass slope is proposed by applying the H-B failure criterion. In order to assess the validity and applicability of the proposed method of estimating the shear strength (A), rock slopes near existing working design sites were selected as study objects by rock type (igneous, metamorphic, sedimentary) on steep slopes, and the results are compared and analyzed with the equivalent M-C shear strength (balancing the areas) proposed by Hoek. The equivalent M-C shear strengths of the balancing-the-areas method and the iso-angle division method were estimated using the RocLab program (geotechnical properties calculation software based on the H-B failure criterion (2002)), using the basic data from laboratory rock triaxial compression tests at the existing working design sites and the face mapping of discontinuities on the rock slopes of the study area. The equivalent M-C shear strength calculated by the balancing-the-areas method showed coupled combinations of very large or very small cohesion and internal friction angles (generally greater than 45°). The equivalent M-C shear strength of the iso-angle division method lies in between the equivalent M-C shear properties of the balancing-the-areas method, and the internal friction angles fall in a range of 30° to 42°. We compared and analyzed the shear strength (A) of the iso-angle division method for the study area with the shear strength (B) of the existing working design sites with the same or similar RMR grades. The applicability of the proposed iso-angle division method was indirectly evaluated through the results of stability analyses (limit equilibrium analysis and finite element analysis) performed with these strength properties. The difference between the shear strengths A and B is about 10%.
LEM results (in wet condition) showed Fs(A) = 14.08~58.22 (average 32.9) and Fs(B) = 18.39~60.04 (average 32.2), which were similar for the same rock types. The FEM results showed displacement (A) = 0.13~0.65 mm (average 0.27 mm) and displacement (B) = 0.14~1.07 mm (average 0.37 mm). Using the GSI and the Hoek-Brown failure criterion, meaningful results could be identified in the applicability evaluation. Therefore, the strength properties of the rock mass estimated by the iso-angle division method can be applied as practical shear strength values.
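For readers unfamiliar with the Generalized Hoek-Brown criterion that both strength estimation methods start from, the sketch below computes the 2002 H-B parameters from GSI and then fits an equivalent Mohr-Coulomb line over a chosen confining-stress range. This is a simplified least-squares fit for illustration only; it is not the paper's iso-angle division method, nor RocLab's area-balancing procedure, and the input values are assumptions.

```python
# Generalized Hoek-Brown (Hoek et al., 2002) parameters and a simple
# equivalent Mohr-Coulomb fit over a confining-stress range (illustrative only).
import numpy as np

def hoek_brown_params(gsi, mi, D=0.0):
    """mb, s, a from GSI, intact rock constant mi, and disturbance factor D."""
    mb = mi * np.exp((gsi - 100.0) / (28.0 - 14.0 * D))
    s = np.exp((gsi - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (np.exp(-gsi / 15.0) - np.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

def equivalent_mc(sigma_ci, gsi, mi, D=0.0, sigma3_max=1.0, n=100):
    """Fit a straight M-C line (cohesion c, friction angle phi) to the H-B envelope
    sampled over 0 <= sigma3 <= sigma3_max. A plain least-squares fit is used here,
    which only approximates area-balancing approaches such as RocLab's."""
    mb, s, a = hoek_brown_params(gsi, mi, D)
    sigma3 = np.linspace(0.0, sigma3_max, n)
    sigma1 = sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a
    # Fit sigma1 = k * sigma3 + b, then convert the line to c and phi.
    k, b = np.polyfit(sigma3, sigma1, 1)
    phi = np.degrees(np.arcsin((k - 1.0) / (k + 1.0)))
    c = b * (1.0 - np.sin(np.radians(phi))) / (2.0 * np.cos(np.radians(phi)))
    return c, phi

# Assumed example values: intact strength 50 MPa, GSI 55, mi 20, sigma3_max 1 MPa.
c, phi = equivalent_mc(sigma_ci=50.0, gsi=55, mi=20, D=0.0, sigma3_max=1.0)
print(f"equivalent cohesion c = {c:.2f} MPa, friction angle phi = {phi:.1f} deg")
```

Because the fitted friction angle and cohesion depend strongly on the chosen sigma3_max, the sketch also illustrates why the abstract notes that the equivalent shear strength is sensitive to the maximum confining stress.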

Development of a Simultaneous Analytical Method for Determination of Insecticide Broflanilide and Its Metabolite Residues in Agricultural Products Using LC-MS/MS (LC-MS/MS를 이용한 농산물 중 살충제 Broflanilide 및 대사물질 동시시험법 개발)

  • Park, Ji-Su;Do, Jung-Ah;Lee, Han Sol;Park, Shin-min;Cho, Sung Min;Kim, Ji-Young;Shin, Hye-Sun;Jang, Dong Eun;Jung, Yong-hyun;Lee, Kangbong
    • Journal of Food Hygiene and Safety / v.34 no.2 / pp.124-134 / 2019
  • An analytical method was developed for the determination of broflanilide and its metabolites in agricultural products. Sample preparation was conducted using the QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method, with analysis by LC-MS/MS (liquid chromatography-tandem mass spectrometry). The analytes were extracted with acetonitrile and cleaned up using d-SPE (dispersive solid phase extraction) sorbents such as anhydrous magnesium sulfate, primary secondary amine (PSA), and octadecyl (C18). The limit of detection (LOD) and limit of quantification (LOQ) were 0.004 and 0.01 mg/kg, respectively. The recoveries for broflanilide, DM-8007, and S(PFP-OH)-8007 ranged from 90.7 to 113.7%, 88.2 to 109.7%, and 79.8 to 97.8% at different concentration levels (LOQ, 10×LOQ, 50×LOQ), with relative standard deviations (RSD) of less than 8.8%. The inter-laboratory recoveries for broflanilide, DM-8007, and S(PFP-OH)-8007 ranged from 86.3 to 109.1%, 87.8 to 109.7%, and 78.8 to 102.1%, and the RSD values were also below 21%. All values were consistent with the criteria ranges required in the Codex guidelines (CAC/GL 40-1993, 2003) and the Food and Drug Safety Evaluation guidelines (2016). Therefore, the proposed analytical method is accurate, effective, and sensitive for broflanilide determination in agricultural commodities.
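Recovery and RSD, the two figures of merit quoted above, are simple statistics over replicate spiked samples. A small sketch of how they are typically computed is shown below; the replicate measurements and spiking level are invented for illustration and are not data from the study.

```python
# Recovery (%) and relative standard deviation (RSD, %) for a spiked sample set.
# The numbers below are invented; they are not the study's measurements.
import statistics

spiked_level = 0.01            # mg/kg, e.g. a fortification at the LOQ
measured = [0.0095, 0.0102, 0.0098, 0.0101, 0.0097]   # replicate results, mg/kg

recoveries = [100.0 * m / spiked_level for m in measured]
mean_recovery = statistics.mean(recoveries)
rsd = 100.0 * statistics.stdev(recoveries) / mean_recovery

print(f"mean recovery = {mean_recovery:.1f}%")   # compared against the guideline range
print(f"RSD = {rsd:.1f}%")                       # compared against the guideline RSD limit
```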

The Conceptual Intersection between the Old and the New and the Transformation of the Traditional Knowledge System (신구(新舊) 관념의 교차와 전통 지식 체계의 변용)

  • Lee, Haenghoon
    • The Journal of Korean Philosophical History / no.32 / pp.215-249 / 2011
  • This essay reflects on the modernity of Korea by examining the transformation of the traditional knowledge system from a historico-semantic perspective, focusing on the opposition and collision between the old and the new conceptions that occurred in the early period (1890~1910) of the acceptance of Western modern civilization. With its scientific success, trick of reason, Christianity, and evolutionary view of history, Western modernity regarded itself as the peak of civilization and forced non-Western societies into a world system in which they came to be considered 'barbarism (野蠻)' or 'half-enlightened (半開).' The East Asian civilization, which had its own history of several centuries, became degraded as a kind of delusion and a set of old-fashioned customs from which it ought to free itself. Western civilization presented itself as the exemplary future which East Asian people should achieve, while East Asian past traditions came to be conceived as merely unnecessary vestiges that were better wiped out. It can be said that East Asian modernization was established through the propagation and acceptance of the modern products of Western civilization rather than through the preservation of its past experience together with the pursuit of the new. Accordingly, it is difficult to apply Koselleck's hypothesis directly to East Asian societies; while mapping out his Basic Concepts in History, he assumed that, in the so-called 'saddle age,' semantic struggle over concepts becomes active between past experience and the horizon of expectation for the future, and that concepts undergo 'temporalization,' 'democratization,' 'ideologization,' and 'politicization.' The struggle over the old and new conceptions in Korea was most noticeable in the opposition between the Neo-Confucian scholars of Hwangseongsinmun and the theorists of civilization of Doknipsinmun. The opposition and struggle demanded a change of understanding in every field, but there was a difference of opinion over the conception of the past traditional knowledge system. For the theorists of civilization, 'the old (舊)' was not just 'past' and 'old-fashioned' things, but rather an obstacle to the building of a new civilization. For the Neo-Confucian scholars, on the other hand, it contained the possibility of regeneration (新); that is, they suggested finding a guide into tomorrow by taking lessons from the past. The traditional knowledge system lost its holy status of learning (聖學) in the process of its change into a 'new learning (新學),' and religion and religious tradition also weakened. The traditional knowledge system could change itself into modern learning by accepting a scientific methodology that pursues objectivity and rationality. This transformation of the traditional knowledge system and 'the formation of the new learning from the old learning' was accompanied by the intersection between the old and new conceptions. It is necessary to pay attention to the role played by the concept of Sil(hak) (實學), or Practical Learning, in this intersection of the old and new conceptions. Various modern media published around the turn of the 20th century clearly show the multi-layered development of the old and new conceptions, and it is noticeable that 'Sil(hak)' as a conceptual frame of reference contributed to the transformation of the traditional knowledge system into the new learning.
Although Silhak often designated, or was even considered equivalent to, Western learning, the Neo-Confucian scholars reinterpreted the concept of 'Silhak', which the theorists of civilization had monopolized until then, and opened a way to change the traditional knowledge system into the new learning. They re-appropriated the concept of Silhak and enabled it to be invested with values that were losing their status due to overwhelming scientific technology. With the Japanese occupation of Korea by force, the attempt to transform the traditional knowledge system independently was obliged to reach its limit, but its theory of 'making new learning from the old' can be considered to overcome both the contradiction of Dongdoseogi (東道西器: the principle of preserving Eastern philosophy while accepting Western technology) and the de-subjectivity of the theory of civilization. While developing its own logic, the theory of Dongdoseogi was compelled to bring in the contradiction of considering the indivisible (道 and 器) as divisible, even though it tried to cope with a reality in which the principle of morality and that of competition were opposed to each other and the ideologies of 'evolution' and 'progress' prevailed. The theory of civilization, on the other hand, was not free from the criticism that it brought about a crack in subjectivity through its internalization of the West, cutting itself off from the traditional knowledge system.

Comparison of Leaf Color and Storability of Mixed Baby Leaf Vegetables according to the Mixing Ratios of Red Romaine lettuces (Lactuca sativa), Peucedanum japonicum, and Ligularia stenocephala during MA Storage (MA저장중 혼합비율에 따른 적로메인, 갯기름나물, 그리고 곤달비 혼합 어린잎채소의 엽색과 저장성 비교)

  • Choi, In-Lee;Lee, Joo Hwan;Wang, Li-Xia;Park, Wan Geun;Kang, Ho-Min
    • Journal of Bio-Environment Control / v.30 no.1 / pp.77-84 / 2021
  • This study attempted to find a way to maintain the quality of mixed baby leaf vegetables in which baby wild leaf vegetables were combined with an existing baby leaf vegetable in various ratios. The crops used for the mixed baby leaf vegetables were Peucedanum japonicum Thunberg and Ligularia stenocephala as wild vegetables, and red romaine, which is widely used as a baby leaf vegetable. The mixing ratios of red romaine and the wild vegetables were red romaine 0 : P. japonicum 5 : L. stenocephala 5 (R0:P5:L5), red romaine 3.3 : P. japonicum 3.3 : L. stenocephala 3.3 (R3.3:P3.3:L3.3), red romaine 5 : P. japonicum 2.5 : L. stenocephala 2.5 (R5:P2.5:L2.5), red romaine 8 : P. japonicum 1 : L. stenocephala 1 (R8:P1:L1), and red romaine 10 : P. japonicum 0 : L. stenocephala 0 (R10:P0:L0). All treatments were packaged in OTR (oxygen transmission rate) 10,000 cc·m⁻²·day⁻¹·atm⁻¹ film and stored for 27 days at 2℃ and 85% RH. Fresh weight and the carbon dioxide, oxygen, and ethylene concentrations of the baby leaf packages were examined approximately every 3 days, and visual quality, chlorophyll content, and leaf color were examined on the 27th day of storage. The oxygen and carbon dioxide concentrations in the packages were affected by the respiration rate of the crops: as the mixing ratio of lettuce, which had a low respiration rate, increased, the oxygen concentration in the packages was higher and the carbon dioxide concentration was lower. Oxygen concentration decreased significantly after day 15 but remained above 16%; in contrast, carbon dioxide concentration was kept at 1-4% until day 15 and then gradually increased to 2-5% by day 27. The ethylene concentration was maintained at 3-6 µL·L⁻¹ until the end of storage (day 27). The visual quality score measured at the end of storage was slightly below 3.0, the limit of marketability, for all treatments. Although there was no significant difference, the chlorophyll content (SPAD) of red romaine and P. japonicum was closest to the initial value in the R8:P1:L1 treatment, and that of L. stenocephala was higher in the R8:P1:L1 and R5:P2.5:L2.5 treatments at the end of storage. The leaf color (L*, a*, b*, chroma) of the three crops at the end of storage, compared using a heat map, showed the least change in the R5:P2.5:L2.5 and R8:P1:L1 treatments. Among them, the R8:P1:L1 treatment maintained the highest chlorophyll content, the second-lowest ethylene concentration, and an adequate carbon dioxide concentration of 2-3%. Therefore, it is judged that the mixing ratio of red romaine 8 : P. japonicum 1 : L. stenocephala 1 (R8:P1:L1) is the most suitable for a mixed package of baby leaf vegetables of these three crops.

A Study on the Characteristics of Enterprise R&D Capabilities Using Data Mining (데이터마이닝을 활용한 기업 R&D역량 특성에 관한 탐색 연구)

  • Kim, Sang-Gook;Lim, Jung-Sun;Park, Wan
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.1-21 / 2021
  • As the global business environment changes, uncertainties in technology development and market needs increase, and competition among companies intensifies, interest in and demand for the R&D activities of individual companies are increasing. In order to cope with these environmental changes, R&D companies are strengthening R&D investment as one of the means to enhance the qualitative competitiveness of their R&D while also paying more attention to facility investment. As a result, facility and R&D investment inevitably become a burden that R&D companies must bear against future uncertainties, and it is true that a management strategy of increasing investment in R&D as a means of enhancing R&D capability is highly uncertain in terms of corporate performance. In this study, the structural factors that influence the R&D capabilities of companies are explored in terms of technology management capability, R&D capability, and corporate classification attributes by utilizing data mining techniques, and the characteristics these individual factors present according to the level of R&D capability are analyzed. This study also presents cluster analysis and experimental results based on evidence data for domestic R&D companies, and is expected to provide important implications for corporate management strategies to enhance the R&D capabilities of individual companies. For the three viewpoints, 7, 2, and 4 detailed evaluation indexes, respectively, were composed to quantitatively measure individual levels in the corresponding areas. In the case of technology management capability and R&D capability, the sub-item evaluation indexes currently used by domestic technology evaluation agencies were referenced, and the final detailed evaluation indexes were newly constructed in consideration of whether data could be obtained quantitatively. In the case of corporate classification attributes, the most basic corporate classification profile information was considered. In particular, in order to grasp the homogeneity of the R&D competency level, a comprehensive score for each company was computed using the detailed evaluation indicators of technology management capability and R&D capability, and the competency level was classified into five grades and compared with the cluster analysis results. To interpret the comparison between the derived clusters and the competency-level grades, clusters with high and low tendencies in R&D competency level were identified, and the characteristics of each cluster were then analyzed according to the detailed evaluation indicators. Through this approach, two clusters with high R&D competency and one cluster with a low level of R&D competency were identified, while the remaining two clusters were similar to each other with mostly high incidence. As a result, in this study, individual characteristics according to the detailed evaluation indexes were analyzed for the two clusters with a high competency level and the one cluster with a low competency level. One implication of the results is that the faster the replacement cycle of professional managers who can effectively respond to changes in technology and market demand, the more likely they are to contribute to enhancing R&D capabilities.
In the case of a private (non-incorporated) company, it is necessary to increase the intensity of R&D input by enhancing R&D personnel's sense of belonging to the company through conversion into a corporation, and to clarify responsibility and authority through team-level organization. Since the number of technology commercialization achievements and technology certifications appeared both in cases that contributed to capability improvement and in cases that did not, it was confirmed that there is a limit to treating them as important factors for enhancing R&D capability from a management perspective. Lastly, experience with utility model filings was identified as a factor that has an important influence on R&D capability, and the need to provide motivation to encourage utility model filings in order to enhance R&D capability was confirmed. As such, the results of this study are expected to provide important implications for corporate management strategies to enhance individual companies' R&D capabilities.
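The abstract does not name the specific clustering algorithm or index weights, so the sketch below should be read only as a generic illustration of the described workflow: compute a composite score per company from detailed evaluation indexes, cluster companies on those indexes, and cross-tabulate the clusters against five competency grades. All column names, weights, and data are hypothetical.

```python
# Generic illustration of the workflow described above; not the paper's actual
# indexes, weights, data, or clustering settings.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 200
# Hypothetical detailed evaluation indexes: 7 technology-management + 2 R&D capability.
cols = [f"tm{i}" for i in range(1, 8)] + ["rd1", "rd2"]
df = pd.DataFrame(rng.normal(size=(n, len(cols))), columns=cols)

# Composite score (equal weights here, purely for illustration), then 5 grades.
df["score"] = df[cols].mean(axis=1)
df["grade"] = pd.qcut(df["score"], q=5, labels=["E", "D", "C", "B", "A"])

# Cluster companies on the standardized indexes (5 clusters, matching the 5 grades).
X = StandardScaler().fit_transform(df[cols])
df["cluster"] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Cross-tabulate clusters against competency grades to spot high/low-competency clusters.
print(pd.crosstab(df["cluster"], df["grade"]))
```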

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on speed increases to meet growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with various services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To provide those services, reduced latency and high reliability, on top of high data rates, are critical for real-time services. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/km². In particular, in intelligent traffic control systems and services using various vehicle-based V2X (Vehicle-to-Everything) communications, such as traffic control, reduction of delay and reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability in offering delay-sensitive services, because communication with many nodes creates processing overload. Basically, SDN, a structure that separates control-plane signaling from data-plane packets, requires control of a delay-aware tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable affecting delay. Since generally centralized SDN structures have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. Thus, SDNs need to be partitioned at a certain scale to construct a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even in the worst-case condition. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor because the link is fast enough and contributes less than 1 ms of delay, but the information change cycle and the data processing time of the SDN greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System) that requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request the relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we assume that the information supporting neighboring vehicles is delivered to the car without errors. Furthermore, we assumed 5G small cells with cell radii of 50~250 m, and vehicle speeds of 30~200 km/h were considered in order to examine the network architecture that minimizes the delay.
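Under the stated assumptions (small-cell radii of 50-250 m, vehicle speeds of 30-200 km/h), the key question is whether the information change cycle plus the SDN processing time and RTD fits within the time a vehicle spends in one cell. The sketch below is a back-of-the-envelope illustration of that dwell-time budget, not the paper's simulation model; the processing-time and update-cycle values are assumptions.

```python
# Back-of-the-envelope cell dwell time vs. end-to-end update budget (illustrative).
# The SDN processing time, information change cycle, and RTD below are assumptions,
# not the values used in the paper's simulation.

def dwell_time_s(cell_radius_m, speed_kmh):
    """Approximate time spent crossing a small cell (straight path through the diameter)."""
    speed_ms = speed_kmh / 3.6
    return 2.0 * cell_radius_m / speed_ms

RTD_S = 0.001               # round-trip delay, assumed ~1 ms as in the abstract
SDN_PROC_S = 0.005          # assumed SDN data processing time per update
INFO_CYCLE_S = 0.050        # assumed information change (update) cycle

for radius in (50, 150, 250):
    for speed in (30, 100, 200):
        dwell = dwell_time_s(radius, speed)
        budget = INFO_CYCLE_S + SDN_PROC_S + RTD_S
        updates_per_cell = dwell / budget
        print(f"radius={radius:3d} m, speed={speed:3d} km/h: "
              f"dwell={dwell:6.2f} s, ~{updates_per_cell:5.1f} update cycles per cell")
```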

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.73-95 / 2021
  • This study uses the Node2vec graph embedding method and Light GBM link prediction to explore undeveloped export candidate countries for Korea's food and beverage industry. Node2vec is a method that improves on the representation of structural equivalence in a network, which is known to be relatively weak in existing link prediction methods based on the number of common neighbors, and it is therefore known to show excellent performance in capturing both community structure and structural equivalence. The vectors obtained by embedding the network in this way are generated from walks of constant length starting from arbitrarily designated nodes, so they have the advantage of being easy to feed as input to models for downstream tasks such as Logistic Regression, Support Vector Machine, and Random Forest. Based on these features of the Node2vec graph embedding method, this study applied it to international trade information for the Korean food and beverage industry. Through this, we intend to contribute to creating an extensive-margin diversification effect for Korea in the industry's global value chain relationships. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, showing excellent performance. This performance was superior to that of the Logistic Regression binary classifier set as the baseline model, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. In addition, the Light GBM-based optimal prediction model derived in this study showed better performance than the link prediction model of a previous study, which was set as the benchmark model; the previous study's model recorded a recall of only 0.75, while the proposed model achieved a recall of 0.79. The difference in prediction performance between the benchmark model and this study's model is due to the model training strategy. In this study, trades were grouped by trade value scale, and prediction models were trained differently for these groups. The specific methods were (1) randomly masking some trades and training a model on all trades without setting any condition on trade value, (2) randomly masking some of the trades with an average trade value or higher and training the model, and (3) randomly masking some of the trades in the top 25% of trade value and training the model. As a result of the experiment, the model trained by randomly masking some of the trades with above-average trade value showed the best and most stable performance. Additional investigation found that most of the potential export candidates for Korea derived through this model appeared appropriate. Taken together, this study demonstrates the practical utility of a link prediction method applying Node2vec and Light GBM, and useful implications could be derived for weight-update strategies that enable better link prediction while training the model.
On the other hand, this study also has policy utility because it is applied to trade transactions, which have rarely been addressed in research on graph-embedding-based link prediction. The results of this study support a rapid response to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and we believe the approach is sufficiently useful as a tool for policy decision-making.
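The overall pipeline described in this abstract (embed the trade network with Node2vec, mask a subset of edges, and train a gradient-boosting classifier to predict whether an edge exists) can be sketched as below. This is a generic, simplified illustration rather than the authors' code: the graph is a random toy network, the edge features are Hadamard products of node embeddings, and the masking strategy conditioned on trade value is omitted.

```python
# Simplified Node2vec + LightGBM link prediction sketch (toy graph, illustrative only).
# pip install networkx node2vec lightgbm scikit-learn
import random
import networkx as nx
import numpy as np
import lightgbm as lgb
from node2vec import Node2Vec
from sklearn.metrics import precision_score, recall_score, f1_score

random.seed(0)
G = nx.gnm_random_graph(100, 600, seed=0)          # stand-in for the trade network

# Hold out (mask) 20% of the edges as positive test examples.
edges = list(G.edges())
random.shuffle(edges)
test_pos = edges[: len(edges) // 5]
G_train = G.copy()
G_train.remove_edges_from(test_pos)
train_pos = list(G_train.edges())

# Sample non-edges once and split them into disjoint test/train negative sets.
all_non_edges = random.sample(list(nx.non_edges(G)), len(edges))
test_neg = all_non_edges[: len(test_pos)]
train_neg = all_non_edges[len(test_pos): len(test_pos) + len(train_pos)]

# Embed the training graph with Node2vec (fixed-length walks from every node).
n2v = Node2Vec(G_train, dimensions=32, walk_length=20, num_walks=50, workers=1)
emb = n2v.fit(window=5, min_count=1).wv

def edge_feature(u, v):
    """Hadamard product of the two node embeddings."""
    return emb[str(u)] * emb[str(v)]

X_train = np.array([edge_feature(u, v) for u, v in train_pos + train_neg])
y_train = np.array([1] * len(train_pos) + [0] * len(train_neg))

clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_train, y_train)

X_test = np.array([edge_feature(u, v) for u, v in test_pos + test_neg])
y_test = np.array([1] * len(test_pos) + [0] * len(test_neg))
pred = clf.predict(X_test)
print("precision", precision_score(y_test, pred),
      "recall", recall_score(y_test, pred),
      "f1", f1_score(y_test, pred))
```

In the study itself, the positive class corresponds to actual trade relationships and the masking is applied selectively by trade-value group, which is the training-strategy difference the abstract credits for the recall improvement over the benchmark.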