• Title/Summary/Keyword: search technique

Search Results: 1,415 (processing time: 0.04 seconds)

Design of an Intellectual Smart Mirror Application helping Face Makeup (얼굴 메이크업을 도와주는 지능형 스마트 거울 앱의 설계)

  • Oh, Sun Jin;Lee, Yoon Suk
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.5
    • /
    • pp.497-502
    • /
    • 2022
  • Among the younger generation, there is a clear preference for visual over textual media as a means of distributing and sharing information, and it has become natural to distribute information through YouTube or personal Internet broadcasting; this is how most young people now obtain their information. Young people are also bolder and more aggressive in styling themselves distinctively, freely expressing personal identity through face makeup, hair styling, and fashion coordination regardless of sex. Face makeup in particular has become a major concern not only for women but also for men, and it serves as a primary means of expressing personality. To meet these demands, we design and implement an intelligent smart mirror application that efficiently retrieves and recommends relevant videos produced by famous professional makeup artists on YouTube or personal broadcasts, so that users can create makeup suited to their face shape, hair color and style, skin tone, and fashion color and style, and thereby express their individual characteristics. We also introduce AI techniques that learn the user's search patterns and facial features to provide optimal recommendations, and finally provide detailed makeup face images so that users can acquire makeup skills stage by stage.

Domestic Clinical Research Trends of Pharmacopuncture Treatment for Nerve Entrapment Syndrome: A Scoping Review (포착신경병증의 약침치료에 대한 국내 임상 연구 동향: 주제범위 문헌고찰)

  • Woenhyung Lee;Hyeonjun Woo;Yunhee Han;Seungkwan Choi;Jungho Jo;Byeonghyeon Jeon;Wonbae Ha;Junghan Lee
    • Journal of Korean Medicine Rehabilitation
    • /
    • v.33 no.4
    • /
    • pp.31-44
    • /
    • 2023
  • Objectives The purpose of this study is to examine research trends in pharmacopuncture treatment for nerve entrapment syndrome, identify the specific techniques and types of pharmacopuncture used, and suggest directions for future research. Methods This study was conducted based on the five steps suggested by Arksey and O'Malley. We searched five domestic databases (Research Information Sharing Service, Oriental Medicine Advanced Searching Integrated System, Korean studies Information Service System, Science ON, and KMBASE) for studies published up to June 23, 2023, using key search terms such as "nerve entrapment" and "pharmacopuncture". Results Twenty-nine studies were finally selected. Among them, 25 (86.2%) were non-comparative studies. The most common disease was carpal tunnel syndrome (n=10). All the investigated studies administered pharmacopuncture by injection along the pathway of the entrapped nerve. The depth of injection was mentioned in only 13 studies. As for the pharmacopuncture types used, sweet bee venom appeared in 8 studies and bee venom in 6, so about half of the preparations had bee venom as their main component. Conclusions This study is the first scoping review of pharmacopuncture treatment for nerve entrapment conducted in Korea. Treatment is performed mainly along the pathway of the entrapped nerve. Future research should address the standardization of specific pharmacopuncture techniques and the uniformity of evaluation criteria.

A Technique for Selecting Quadrature Points for Dimension Reduction Method to Improve Efficiency in Reliability-based Design Optimization (신뢰성 기반 최적설계의 효율성 향상을 위한 차원감소법의 적분직교점 선정 기법)

  • Ha-Yeong Kim;Hyunkyoo Cho
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.3
    • /
    • pp.217-224
    • /
    • 2024
  • This paper proposes an efficient dimension reduction method (DRM) that considers the nonlinearity of the performance functions in reliability-based design optimization (RBDO). The DRM evaluates reliability more accurately than the first-order reliability method (FORM) by using integration quadrature points and weights. However, its efficiency degrades as the number of quadrature points increases, owing to the additional evaluations of the performance function required. In this study, we assessed the nonlinearity of the performance function in RBDO and proposed criteria for determining the number of quadrature points based on the degree of nonlinearity. The approach adjusts the number of quadrature points at each iteration of the RBDO process, improving computational efficiency while maintaining the accuracy of the DRM. The nonlinearity of the performance function is evaluated using the angle between the vectors used in the most probable target point (MPTP) search. Numerical tests were conducted to determine the appropriate number of quadrature points for each degree of nonlinearity. A 2D numerical example confirms that the proposed method improves efficiency while maintaining the accuracy of the DRM or Monte Carlo simulation (MCS).
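The angle-based nonlinearity measure described in this abstract can be sketched as follows. The angle computation follows directly from the dot product; the threshold values and quadrature-point counts below are illustrative placeholders, not the paper's calibrated criteria.

```python
import numpy as np

def nonlinearity_angle(v_prev, v_curr):
    """Angle (degrees) between two consecutive MPTP search vectors."""
    cos = np.dot(v_prev, v_curr) / (np.linalg.norm(v_prev) * np.linalg.norm(v_curr))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def select_quadrature_points(angle_deg, thresholds=(5.0, 20.0)):
    """Map the measured nonlinearity to a number of quadrature points.
    Thresholds and counts are illustrative, not the paper's values."""
    if angle_deg < thresholds[0]:
        return 1   # nearly linear: FORM-like accuracy suffices
    elif angle_deg < thresholds[1]:
        return 3
    return 5       # strongly nonlinear: more points needed
```

A larger angle between successive search vectors indicates stronger curvature of the performance function, so more quadrature points are allocated in that iteration.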

Analyzing the Issue Life Cycle by Mapping Inter-Period Issues (기간별 이슈 매핑을 통한 이슈 생명주기 분석 방법론)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.25-41
    • /
    • 2014
  • Recently, the number of social media users has increased rapidly because of the prevalence of smart devices. As a result, the amount of real-time data has been growing exponentially, which in turn is generating more interest in using such data to create added value. For instance, several attempts are being made to identify social issues by analyzing the search keywords frequently used on news portal sites and the words regularly mentioned on various social media. The technique of "topic analysis" is employed to identify topics and themes from a large collection of text documents. As one of the most prevalent applications of topic analysis, issue tracking investigates changes in the social issues identified through topic analysis. Currently, traditional issue tracking identifies the main topics of the documents covering the entire period at once and then analyzes the occurrence of each topic by period. This traditional approach has two limitations. First, whenever a new period is added, topic analysis must be repeated over the documents of the entire period, rather than only over the new documents of the added period, imposing significant time and cost burdens; the traditional approach is therefore difficult to apply in most settings that require analysis over additional periods. Second, issues are not only created and terminated constantly; one issue can also split into several issues, or multiple issues can merge into one. In other words, each issue has a life cycle consisting of creation, transition (merging and segmentation), and termination. Existing issue tracking methods do not address these connections and relationships between issues.
The purpose of this study is to overcome these two limitations of existing issue tracking: the cost of the analysis method and the lack of consideration for the changeability of issues. Suppose we perform a separate topic analysis for each period. Then, to trace issue trends, it is essential to map the issues of different periods onto one another; however, discovering connections between issues of different periods is not easy, because the issues derived for each period are mutually heterogeneous. In this study, to overcome these limitations without analyzing the documents of the entire period simultaneously, the analysis is performed independently for each period, and issue mapping is then performed to link the identified issues across periods. We present an integrated view over the individual periods and depict the issue flow of the entire integrated period. In this way, the entire issue life cycle, including creation, transition (merging and segmentation), and extinction, is identified and examined systematically, and the changeability of issues is analyzed. The proposed methodology is highly efficient in terms of time and cost while sufficiently accounting for the changeability of issues, and the results of this study can be adapted to practical situations. By applying the proposed methodology to actual Internet news, we analyze its potential practical applications. The proposed methodology was able to extend the period of analysis and to follow the progress of each issue's life cycle; it can thus facilitate a clearer understanding of complex social phenomena through topic analysis.
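The inter-period issue mapping idea can be sketched generically. Linking topics by the cosine similarity of their word distributions is one natural instantiation; the similarity measure and threshold below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two topic-word distributions."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def map_issues(prev_topics, curr_topics, threshold=0.3):
    """Link each current-period topic to every sufficiently similar
    previous-period topic. One-to-many links suggest a split,
    many-to-one links a merge, and unlinked topics creation/extinction."""
    links = []
    for j, curr in enumerate(curr_topics):
        for i, prev in enumerate(prev_topics):
            if cosine(prev, curr) >= threshold:
                links.append((i, j))
    return links
```

With such links in hand, the life-cycle stages fall out of the link pattern: a previous-period topic with no link has terminated, a current-period topic with no link is newly created, and multiple links on one side mark a merge or split.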

A Study on Shape Optimization of Plane Truss Structures (평면(平面) 트러스 구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(硏究))

  • Lee, Gyu won;Byun, Keun Joo;Hwang, Hak Joo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.5 no.3
    • /
    • pp.49-59
    • /
    • 1985
  • Formulation of the geometric optimization of truss structures based on elasticity theory turns out to be a nonlinear programming problem that must deal with the cross-sectional areas of the members and the coordinates of the nodes simultaneously. A few techniques have been proposed and adopted for this nonlinear programming problem to date. These techniques, however, impose limitations on truss shapes, loading conditions, and design criteria that restrict their practical application to real structures. In this study, a generalized algorithm for the geometric optimization of truss structures is developed that eliminates the above limitations. The algorithm uses a two-phase technique. In the first phase, the cross-sectional areas of the truss members are optimized by transforming the nonlinear problem via SUMT (Sequential Unconstrained Minimization Technique) and solving it with a modified Newton-Raphson method. In the second phase, the geometric shape is optimized using the unidirectional search technique of the Rosenbrock method, which requires evaluating only the objective function. The algorithm is numerically tested on several truss structures with various shapes, loading conditions, and design criteria, and compared with the results of other algorithms to examine its applicability and stability. The numerical comparisons show that the two-phase algorithm developed in this study is safely applicable to any design criterion, and that its convergence is fast and stable compared with other iterative methods for the geometric optimization of truss structures.
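The first-phase idea, transforming the constrained problem into a sequence of unconstrained ones (SUMT) and solving each with Newton-Raphson, can be sketched for a single design variable. The interior-barrier form, shrink factor, and the toy objective in the usage below are illustrative assumptions, not the paper's truss formulation.

```python
def sumt_1d(f, g, x0, r0=1.0, shrink=0.1, stages=5, h=1e-6):
    """Interior-penalty SUMT sketch for one design variable:
    minimize f(x) + r/(-g(x)) subject to g(x) < 0, for decreasing r,
    with a guarded Newton-Raphson inner loop using finite-difference
    derivatives. Illustrative only; the paper treats full truss designs."""
    x, r = float(x0), r0
    for _ in range(stages):
        phi = lambda t, r=r: f(t) + r / -g(t)
        for _ in range(100):
            d1 = (phi(x + h) - phi(x - h)) / (2 * h)
            d2 = (phi(x + h) - 2 * phi(x) + phi(x - h)) / h ** 2
            if d2 <= 0:
                break
            step = d1 / d2
            while g(x - step) >= 0:   # keep the iterate strictly feasible
                step *= 0.5
            x -= step
            if abs(step) < 1e-10:
                break
        r *= shrink
    return x
```

As the barrier weight r shrinks, the unconstrained minimizer approaches the constrained optimum from inside the feasible region, which is why each stage warm-starts from the previous solution.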


Method Development for the Profiling Analysis of Endogenous Metabolites by Accurate-Mass Quadrupole Time-of-Flight (Q-TOF) LC/MS (LC/TOFMS를 이용한 생체시료의 내인성 대사체 분석법 개발)

  • Lee, In-Sun;Kim, Jin-Ho;Cho, Soo-Yeul;Shim, Sun-Bo;Park, Hye-Jin;Lee, Jin-Hee;Lee, Ji-Hyun;Hwang, In-Sun;Kim, Sung-Il;Lee, Jung-Hee;Cho, Su-Yeon;Choi, Don-Woong;Cho, Yang-Ha
    • Journal of Food Hygiene and Safety
    • /
    • v.25 no.4
    • /
    • pp.388-394
    • /
    • 2010
  • Metabolomics aims at the comprehensive, qualitative, and quantitative analysis of wide arrays of endogenous metabolites in biological samples. It has shown particular promise in toxicology, drug development, functional genomics, systems biology, and clinical diagnosis. In this study, a mass spectrometry technique with high-resolution mass measurement, time-of-flight (TOF), was validated for the investigation of amino acids, sugars, and fatty acids. Rat urine and serum samples were extracted with each of several selected solvents (50% acetonitrile, 100% acetonitrile, acetone, methanol, water, ether). We established an optimized liquid chromatography/time-of-flight mass spectrometry (LC/TOFMS) system, selecting appropriate columns, mobile phases, fragment energies, and collision energies capable of detecting 17 metabolites. The spectral data collected from LC/TOFMS were tested by ANOVA. The results indicated that (1) MS and MS/MS parameters were optimized and the most abundant product ion of each metabolite was selected for monitoring; and (2) by design-of-experiment analysis, methanol yielded the optimal extraction efficiency. The results of this study are therefore expected to be useful in endogenous metabolite research, in accordance with the validated SOP for endogenous amino acids, sugars, and fatty acids.

Evaluation for applicability of river depth measurement method depending on vegetation effect using drone-based spatial-temporal hyperspectral image (드론기반 시공간 초분광영상을 활용한 식생유무에 따른 하천 수심산정 기법 적용성 검토)

  • Gwon, Yeonghwa;Kim, Dongsu;You, Hojun
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.4
    • /
    • pp.235-243
    • /
    • 2023
  • Due to the revision of the River Act and the enactment of the Act on the Investigation, Planning, and Management of Water Resources, regular riverbed change surveys have become mandatory, and a system is being prepared so that local governments can manage water resources in a planned manner. Since the topography of a riverbed cannot be measured directly, it is measured indirectly via contact-type depth measurements such as level surveying or echo sounding, which have low spatial resolution and do not allow continuous surveying owing to constraints in data acquisition. Therefore, depth measurement methods using remote sensing, such as LiDAR or hyperspectral imaging, have recently been developed. These allow wider-area surveys than contact-type methods by acquiring hyperspectral images from a lightweight hyperspectral sensor mounted on a routinely operated drone and applying an optimal band-ratio search algorithm to estimate depth. In the existing hyperspectral remote sensing technique, specific physical quantities are analyzed after matching the hyperspectral images acquired along the drone's path onto a surface-unit image. Previous studies focused primarily on applying this technology to measure the bathymetry of sandy rivers, whereas other bed materials have rarely been evaluated. In this study, the existing hyperspectral image-based water depth estimation technique is applied to a river with vegetation; spatio-temporal hyperspectral imaging and cross-sectional hyperspectral imaging are performed over the same area in two cases, before and after the vegetation is removed. The results show that water depth estimation is more accurate in the absence of vegetation; in the presence of vegetation, the depth is estimated with the top of the vegetation recognized as the bottom. In addition, highly accurate water depth estimation is achieved not only with conventional cross-sectional hyperspectral imaging but also with spatio-temporal hyperspectral imaging, confirming the possibility of monitoring bed fluctuations (water depth changes) using spatio-temporal hyperspectral imaging.
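An optimal band-ratio search of the kind named above can be sketched generically: regress a log band-ratio against known depths for every band pair and keep the pair with the best fit (a Stumpf-style ratio model). The exhaustive pair search and linear fit here are assumptions standing in for the authors' algorithm.

```python
import numpy as np

def best_band_ratio(refl, depths):
    """Search all band pairs for the log-ratio most linearly related to
    known depths; return the best pair, its R^2, and the fitted (slope,
    intercept). refl: (n_samples, n_bands) reflectances in (0, 1)."""
    n_bands = refl.shape[1]
    best = (None, -np.inf, None)
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = np.log(refl[:, i]) / np.log(refl[:, j])
            m, b = np.polyfit(x, depths, 1)
            pred = m * x + b
            ss_res = np.sum((depths - pred) ** 2)
            ss_tot = np.sum((depths - depths.mean()) ** 2)
            r2 = 1.0 - ss_res / ss_tot
            if r2 > best[1]:
                best = ((i, j), r2, (m, b))
    return best
```

Once the best pair is calibrated on points with known depth (e.g., from an echo sounder), the fitted line maps the log-ratio of every image pixel to an estimated depth.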

Development of Cloud Detection Method Considering Radiometric Characteristics of Satellite Imagery (위성영상의 방사적 특성을 고려한 구름 탐지 방법 개발)

  • Won-Woo Seo;Hongki Kang;Wansang Yoon;Pyung-Chae Lim;Sooahm Rhee;Taejung Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1211-1224
    • /
    • 2023
  • Clouds cause many difficulties in observing land surface phenomena with optical satellites, in tasks such as national land observation, disaster response, and change detection. The presence of clouds affects not only the image processing stage but also the quality of the final data, so clouds must be identified and removed. In this study, we therefore developed a new cloud detection technique that automatically performs a series of steps: searching for and extracting the pixels closest to the spectral pattern of clouds in a satellite image, selecting the optimal threshold, and producing a cloud mask based on that threshold. The technique consists largely of three steps. In the first step, the Digital Number (DN) image is converted to top-of-atmosphere (TOA) reflectance. In the second step, preprocessing such as Hue-Saturation-Value (HSV) transformation, triangle thresholding, and maximum likelihood classification is applied to the TOA reflectance image, and the threshold for generating the initial cloud mask is determined for each image. In the third, post-processing step, noise in the initial cloud mask is removed and the cloud boundaries and interiors are refined. As experimental data for cloud detection, CAS500-1 L2G images acquired over the Korean Peninsula from April to November, showing diverse spatial and seasonal cloud distributions, were used. To verify the performance of the proposed method, its results were compared with those of a simple thresholding method. The experiments showed that, compared with the existing method, the proposed method detected clouds more accurately by accounting for the radiometric characteristics of each image through the preprocessing steps, and that the influence of bright objects other than clouds (panel roofs, concrete roads, sand, etc.) was minimized. The proposed method improved the F1-score by more than 30% over the existing method, but showed limitations on certain images containing snow.
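The triangle-thresholding step named in this abstract operates on the image histogram alone and can be sketched generically. This is the standard triangle method, not CAS500-1-specific code; the histogram in the usage below is synthetic.

```python
import numpy as np

def triangle_threshold(hist):
    """Triangle method: draw a line from the histogram peak to the far
    tail and pick the bin that maximizes the perpendicular distance to
    that line. Works well for strongly skewed, single-peak histograms."""
    peak = int(np.argmax(hist))
    nonzero = np.nonzero(hist)[0]
    tail = nonzero[-1] if nonzero[-1] != peak else nonzero[0]
    lo, hi = (peak, tail) if peak < tail else (tail, peak)
    p1 = np.array([peak, hist[peak]], float)
    p2 = np.array([tail, hist[tail]], float)
    d = p2 - p1
    d /= np.linalg.norm(d)
    best_bin, best_dist = peak, -1.0
    for b in range(lo, hi + 1):
        v = np.array([b, hist[b]], float) - p1
        dist = abs(v[0] * d[1] - v[1] * d[0])  # perpendicular distance to the chord
        if dist > best_dist:
            best_bin, best_dist = b, dist
    return best_bin
```

The returned bin would serve as the per-image threshold separating the bright cloud tail from the darker surface background.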

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. To solve this problem of information overload, ranking algorithms have been applied in various domains. As more information becomes available, effectively and efficiently ranking search results will become ever more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance increases further if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. The link-structure-based ranking method has thus played an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. On the other hand, since the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph; as a result, the link-structure-based ranking method appears highly applicable to ranking Semantic Web resources.
However, the information space of the Semantic Web is more complex than that of WWW. For instance, WWW can be considered as one huge class, i.e., a collection of Web pages, which has only a recursive property, i.e., a 'refers to' property corresponding to the hyperlinks. However, the Semantic Web encompasses various kinds of classes and properties, and consequently, ranking methods used in WWW should be modified to reflect the complexity of the information space in the Semantic Web. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to apply their algorithm to rank the Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score becomes the object of many RDF triples, and a node with a high subjectivity score becomes the subject of many RDF triples. They developed several kinds of Semantic Web systems in order to validate their technique and showed some experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, there remained some limitations which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes should be high, or overall resources should be described in detail, to a certain degree for their algorithm to properly work. 
Second, a Tightly-Knit Community (TKC) effect, the phenomenon in which pages that are less important but densely connected score higher than pages that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that can solve the problems identified in the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are meant to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world, and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm resolves the TKC effect and can further shed light on other limitations posed by the previous research. In addition, we propose two ways to incorporate data-type properties, which had not previously been employed even when they bear on resource importance. We designed an experiment to show the effectiveness of the proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, overlooked in previous research, which enabled us to simplify the calculation procedure.
Finally, we summarize our experimental results and discuss further research issues.
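A property-weighted, PageRank-style iteration over RDF triples illustrates the kind of link-based ranking discussed in this abstract; the class-oriented part of the proposal enters through user-supplied property weights. The weight values, damping factor, and structure below are illustrative, not the authors' exact algorithm.

```python
def rank_resources(triples, prop_weight, damping=0.85, iters=50):
    """Rank RDF resources by propagating importance along triples,
    scaling each subject-to-object link by a user-assigned property
    weight. A PageRank-style sketch with uniform dangling-mass handling."""
    nodes = {n for s, p, o in triples for n in (s, o)}
    score = {n: 1.0 / len(nodes) for n in nodes}
    out = {}  # weighted outgoing links per subject
    for s, p, o in triples:
        out.setdefault(s, []).append((o, prop_weight.get(p, 1.0)))
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for s, links in out.items():
            total_w = sum(w for _, w in links)
            for o, w in links:
                new[o] += damping * score[s] * w / total_w
        dangling = sum(score[n] for n in nodes if n not in out)
        for n in nodes:
            new[n] += damping * dangling / len(nodes)
        score = new
    return score
```

Raising the weight of a property makes resources reached through that property accumulate more importance, which is how per-class weighting lets a user tailor the ranking to the class being queried.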

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many studies in academia have addressed predicting the success of campaigns aimed at customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways with the rapid growth of online media, companies carry out campaigns of many types on a scale incomparable to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From the corporate standpoint, the effectiveness of campaigns is also decreasing: investment costs rise while actual success rates remain low. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system ultimately aims to increase the success rate of campaigns by collecting and analyzing various customer-related data and using them in campaigns. In particular, recent attempts have been made to predict campaign responses using machine learning. Selecting appropriate features is very important because campaign data have many features. If all input data are used when classifying a large amount of data, learning time grows as the classification task expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may degrade due to overfitting or correlations between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing high-dimensional data sets.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques. However, when there are many features, they suffer from poor classification performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method by using statistical characteristics of the data processed in the campaign system during the search for the feature subsets that underpin machine learning model performance. Features with strong influence on performance are derived first and features with negative effects are removed; the sequential method is then applied to improve search efficiency and yield generalized predictions. The proposed model showed better search and prediction performance than the traditional greedy algorithm. Campaign success prediction was more accurate than with the original data set, the greedy algorithm, a genetic algorithm (GA), or recursive feature elimination (RFE). In addition, the improved feature selection algorithm helped analyze and interpret the prediction results by providing the importance of the derived features, which included features already known statistically to be important, such as age, customer rating, and sales.
Unlike the attributes campaign planners had previously relied on to select campaign targets, features such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage were unexpectedly selected as important features for campaign response. It was also confirmed that basic attributes can be very important features depending on the campaign type, making it possible to analyze and understand the important characteristics of each campaign type.
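The sequential search at the core of SFS/SFFS can be sketched abstractly. The scoring function in the usage below is a toy stand-in for a model-performance metric, and the simple early-stopping rule is a simplification of the floating variants described in this abstract.

```python
def sequential_forward_selection(features, score_fn, k):
    """Greedy SFS: repeatedly add the feature whose inclusion most
    improves score_fn(subset), stopping at k features or when no
    candidate improves the current score."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        best_f, best_s = None, -float("inf")
        for f in remaining:
            s = score_fn(selected + [f])
            if s > best_s:
                best_f, best_s = f, s
        if selected and score_fn(selected) >= best_s:
            break  # no candidate improves the score: stop early
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```

The paper's improvement amounts to shaping the candidate pool before this loop runs: statistically influential features are promoted and harmful ones removed, so the sequential search explores a smaller, better-conditioned space.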