• Title/Summary/Keyword: Generating function


Method of Deriving Activity Relationship and Location Information from BIM Model for Construction Schedule Management (공정관리 활용을 위한 BIM모델의 공정별 수순 및 위치정보 추출방안)

  • Yoon, Hyeongseok;Lee, Jaehee;Hwang, Jaeyeong;Kang, Hyojeong;Park, Sangmi;Kang, Leenseok
    • Korean Journal of Construction Engineering and Management / v.23 no.2 / pp.33-44 / 2022
  • The simulation function of the 4D system is a representative BIM function in the construction stage. For 4D simulation, schedule information for each activity must be created and then linked with the 3D model. Since the 3D model created in the design stage does not consider schedule information, there are practical difficulties in creating schedule information for the construction stage and linking it to the 3D model. In this study, after extracting construction-stage schedule information from the design-stage 3D model using the HDBSCAN algorithm, the authors propose a methodology for automatically generating schedule information by identifying precedence and sequence relationships with a topological sorting algorithm. Since the generated schedule information is created based on the 3D model, it can be linked automatically through common parameters between the schedule and the 3D model in the 4D system, increasing the practical utility of the 4D system. The proposed methodology was applied to four bridge projects to confirm the schedule information generation, and applied to the 4D system to confirm the simplification of the link process between the schedule and the 3D model.
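Once precedence relations between activities have been identified, ordering them into a schedule is a standard topological sort. A minimal sketch using Python's standard library, with a hypothetical hand-written precedence map standing in for relations the paper derives from HDBSCAN clusters of the 3D model:

```python
from graphlib import TopologicalSorter

# Hypothetical precedence map: activity -> set of predecessor activities.
# (Illustrative only; the paper extracts these relations from the BIM model.)
precedences = {
    "abutment": {"foundation"},
    "pier": {"foundation"},
    "girder": {"pier", "abutment"},
    "deck": {"girder"},
}

# static_order() yields every activity after all of its predecessors.
schedule_order = list(TopologicalSorter(precedences).static_order())
```

Activities with no mutual constraint (here `pier` and `abutment`) may appear in either order, which is exactly the freedom a scheduler can exploit.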

Material Image Classification using Normal Map Generation (Normal map 생성을 이용한 물질 이미지 분류)

  • Nam, Hyeongil;Kim, Tae Hyun;Park, Jong-Il
    • Journal of Broadcast Engineering / v.27 no.1 / pp.69-79 / 2022
  • In this study, a method of generating and utilizing a normal map image, which represents the surface characteristics of an image material, is proposed to improve the classification accuracy of the original material image. First, (1) to generate a normal map that reflects the surface properties of a material in an image, a U-Net with an attention-R2 gate is used as the generator, together with a Pix2Pix-based method that uses the similarity between the generated normal map and the original normal map as a reconstruction loss. Next, (2) a network is proposed that improves the classification accuracy of the original material image by applying the generated normal map image to the attention gate of the classification network. For normal maps generated using the Pixar Dataset, the similarity to the ground-truth normal maps is evaluated, and results with reconstruction losses based on different similarity metrics are compared. In addition, for the evaluation of material image classification, comparative experiments on the MINC-2500 and FMD datasets confirmed that the proposed method distinguishes materials more accurately than previous studies. The method proposed in this paper is expected to serve as a basis for various image processing tasks and network designs that can identify materials within an image.
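A normal map simply encodes per-pixel surface orientation as an RGB image. As a non-learned point of reference for what the GAN generator above produces, a normal map can be derived from a height map by finite-difference gradients; the function name and the ramp surface below are illustrative, not from the paper:

```python
import numpy as np

def height_to_normal_map(height):
    """Derive a normal map from a height map via finite-difference
    gradients (a classical stand-in for the paper's learned generator)."""
    dy, dx = np.gradient(height.astype(float))
    # Per-pixel surface normal = normalize([-dh/dx, -dh/dy, 1]).
    n = np.dstack([-dx, -dy, np.ones_like(height, dtype=float)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return (n + 1.0) / 2.0  # map components from [-1, 1] to [0, 1] RGB

height = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))  # ramp sloping along x
normal_map = height_to_normal_map(height)
```

On a gentle ramp the blue (z) channel stays near 1 and the green (y) channel near 0.5, since the surface only tilts along x.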

Automatic Validation of the Geometric Quality of Crowdsourcing Drone Imagery (크라우드소싱 드론 영상의 기하학적 품질 자동 검증)

  • Dongho Lee;Kyoungah Choi
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.577-587 / 2023
  • The utilization of crowdsourced spatial data has been actively researched; however, issues stemming from the uncertainty of data quality have been raised. In particular, when low-quality data is mixed into drone imagery datasets, it can degrade the quality of spatial information output. In order to address these problems, the study presents a methodology for automatically validating the geometric quality of crowdsourced imagery. Key quality factors such as spatial resolution, resolution variation, matching point reprojection error, and bundle adjustment results are utilized. To classify imagery suitable for spatial information generation, training and validation datasets are constructed, and machine learning is conducted using a radial basis function (RBF)-based support vector machine (SVM) model. The trained SVM model achieved a classification accuracy of 99.1%. To evaluate the effectiveness of the quality validation model, imagery sets before and after applying the model to drone imagery not used in training and validation are compared by generating orthoimages. The results confirm that the application of the quality validation model reduces various distortions that can be included in orthoimages and enhances object identifiability. The proposed quality validation methodology is expected to increase the utility of crowdsourced data in spatial information generation by automatically selecting high-quality data from the multitude of crowdsourced data with varying qualities.
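The quality-classification step above can be sketched with an RBF-kernel SVM on per-image quality features. The feature values and class separation below are synthetic assumptions; the paper computes these factors from bundle adjustment of real crowdsourced drone imagery:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical quality features per image:
# [spatial resolution (cm/px), resolution variation, reprojection error (px)].
good = rng.normal([3.0, 0.2, 0.5], 0.3, size=(100, 3))
bad = rng.normal([8.0, 1.5, 2.0], 0.5, size=(100, 3))
X = np.vstack([good, bad])
y = np.array([1] * 100 + [0] * 100)  # 1 = suitable for mapping, 0 = reject

# RBF-kernel SVM, as in the paper's quality validation model.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
acc = clf.score(X, y)
```

With well-separated synthetic classes the model fits nearly perfectly; the paper's 99.1% accuracy is on its own labeled validation data, not on data like this.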

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.139-155 / 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Such venture companies have tended to give high returns for investors, generally by making the best use of information technology, and for this reason many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not venture companies. Therefore, this study proposes a method for evaluating venture businesses by presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas to classify which companies are the more efficient venture companies: i) making a DEA-based multi-class rating for sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory and has thus far shown good generalization performance in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum margin hyperplane, the hyperplane with the maximum separation between classes; the support vectors are the data points closest to this hyperplane. If the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are mapped from the original input space into a high-dimensional dot-product feature space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the problem of estimating credit ratings. In this study we employed SVM for developing a data mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel function of the SVM. For the multi-class SVM, we compared the one-against-one binary classification approach with two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information of 154 companies listed on the KOSDAQ market in the Korea Exchange, with financial information for 2005 obtained from KIS (Korea Information Service, Inc.). Using this data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the predicted class to within a one-class error when it is difficult to determine the exact class in the actual market. We therefore also present accuracy within one-class errors, where the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, whatever the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance the variable selection process, the parameter selection of the kernel function, the generalization, and the sample size for the multi-class setting.
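The setup can be sketched with a one-against-one RBF-kernel SVM over ordered rating classes, including the "within one class" hit ratio the paper reports. The three-class synthetic features below are assumptions standing in for the DEA ratings of the 154 KOSDAQ firms:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic financial-ratio features for three efficiency classes
# (illustrative only; the paper derives classes from DEA scores).
X = np.vstack([rng.normal(m, 0.6, size=(60, 4)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 60)  # ordered efficiency rating classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# decision_function_shape="ovo" exposes the one-against-one votes.
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X_tr, y_tr)
pred = clf.predict(X_te)

exact_hit = np.mean(pred == y_te)               # exact-class hit ratio
within_one = np.mean(np.abs(pred - y_te) <= 1)  # "one-class error" hit ratio
```

Because the classes are ordered, the within-one-class ratio is always at least the exact hit ratio, which is why the paper reports both.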

Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm (유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습)

  • Kim, Sang Hun;Chung, Byung Hee;Lee, Gun Ho
    • KIPS Transactions on Software and Data Engineering / v.7 no.9 / pp.351-360 / 2018
  • The LWR (Locally Weighted Regression) model is traditionally a lazy learning model: for each query point it fits a regression equation over a short interval, giving higher weights to samples closer to the query point, and uses that local fit to produce the prediction. We study an incremental ensemble learning approach for LWR, a form of lazy, memory-based learning. The proposed incremental ensemble learning method sequentially generates LWR models over time and integrates them using a genetic algorithm to obtain a solution at a specific query point. A weakness of existing LWR models is that multiple LWR models can be generated depending on the indicator function and the data sample selection, and the quality of the predictions can vary accordingly; however, no research has been conducted on the selection or combination of multiple LWR models. In this study, after generating the initial LWR model according to the indicator function and the sample data set, we iterate an evolutionary learning process to obtain a proper indicator function, and we assess the LWR models on other sample data sets to overcome data set bias. We adopt an eager learning approach to gradually generate and store LWR models as data are generated for all sections. To obtain a prediction at a specific point in time, an LWR model is generated from newly generated data within a predetermined interval and then combined, using a genetic algorithm, with the existing LWR models of that section. The proposed method shows better results than selecting among multiple LWR models with a simple averaging method. The results of this study are also compared with predictions from multiple regression analysis on real data, such as hourly traffic volume in a specific area and hourly sales at a highway rest area.
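The basic LWR building block the ensemble combines can be sketched in a few lines: a weighted least-squares line fitted around the query point, with Gaussian kernel weights. The function, bandwidth, and sine-curve data below are illustrative, not from the paper:

```python
import numpy as np

def lwr_predict(x_query, X, y, tau=0.2):
    """Locally weighted regression at one query point: fit a line by
    weighted least squares, weighting samples by a Gaussian kernel
    of bandwidth tau centred on the query point."""
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))
    A = np.column_stack([np.ones_like(X), X])  # design matrix [1, x]
    AtW = A.T * w                              # apply weights to each sample
    theta = np.linalg.solve(AtW @ A, AtW @ y)  # normal equations
    return theta[0] + theta[1] * x_query

X = np.linspace(0.0, 2 * np.pi, 200)
y = np.sin(X)
pred = lwr_predict(np.pi / 2, X, y)  # local linear fit near the peak
```

This is the "lazy" formulation: nothing is trained up front, and each query solves its own small weighted regression, which is exactly the cost the paper's incremental ensemble tries to amortize.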

Characteristics of the Electro-Optical Camera(EOC) (다목적실용위성탑재 전자광학카메라(EOC)의 성능 특성)

  • Seunghoon Lee;Hyung-Sik Shim;Hong-Yul Paik
    • Korean Journal of Remote Sensing / v.14 no.3 / pp.213-222 / 1998
  • The Electro-Optical Camera (EOC) is the main payload of the KOrea Multi-Purpose SATellite (KOMPSAT), with the mission of cartography to build up a digital map of Korean territory, including a Digital Terrain Elevation Map (DTEM). This instrument, which comprises the EOC Sensor Assembly and the EOC Electronics Assembly, produces panchromatic images of 6.6 m GSD with a swath wider than 17 km by push-broom scanning and spacecraft body pointing, in a visible wavelength range of 510~730 nm. The high resolution panchromatic image is to be collected for 2 minutes during the 98-minute orbit cycle, covering about 800 km along the ground track, over the mission lifetime of 3 years, with the functions of programmable gain/offset and on-board image data storage. The 8-bit digitized image, which is collected by a fully reflective F8.3 triplet without obscuration, is to be transmitted to the Ground Station at a rate below 25 Mbps. The EOC was elaborated to have performance that meets or surpasses its design-phase requirements. The spectral response, the modulation transfer function, and the uniformity of all 2592 pixels of the CCD of the EOC are illustrated as measured, for the convenience of end-users. The spectral response was measured for each gain setup of the EOC, which is expected to give users of EOC data the capability of generating more accurate panchromatic images. The modulation transfer function of the EOC was measured as greater than 16% at the Nyquist frequency over the entire field of view, which exceeds the requirement of larger than 10%. The uniformity, which shows the relative response of each pixel of the CCD, was measured at every pixel of the Focal Plane Array of the EOC and is illustrated for data processing.

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing contents is becoming more important as information generation continues. In the flood of information, efforts are being made to better reflect the intention of the user in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, which provide users with satisfaction and convenience. In particular, finance is one of the fields expected to benefit from text data analysis, because it constantly generates new information, and the fresher the information is, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm, and it is difficult to extract good quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and to improve the semantic performance of stock-related information searching, this study attempts to extract knowledge entities using a neural tensor network and to evaluate their performance. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has the following three significances. First, it presents a practical and simple automatic knowledge extraction method that can be readily applied. Second, it presents the possibility of performance evaluation through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the other 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, the same number of score functions as stocks are trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock of the function with the highest score is predicted as the item related to the entity. To evaluate the presented models, we confirm the prediction power, and determine whether the score functions are well constructed, by calculating the hit ratio over all reports of the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on conducting the research. Looking at the prediction performance of the model for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show far lower performance than the average; this result may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, that are necessary to search related information in accordance with the user's investment intention. Graph data is generated by using only the named entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. From the empirical test, we confirm the effectiveness of the presented model as described above. However, there are also limits and things to complement; most notably, the especially poor performance on only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used to semantically match new text information with the related stocks.
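The scoring step described above — one score function per stock, with the highest-scoring function giving the predicted stock — can be sketched with the standard Neural Tensor Network form (Socher et al., 2013). Everything below is an untrained, randomly initialised toy: the dimensions, random vectors, and parameter shapes are assumptions, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(7)
d, k, n_stocks = 8, 4, 3  # embedding dim, tensor slices, stocks (hypothetical)

def ntn_score(e1, e2, W, V, b, u):
    """NTN score: per-slice bilinear terms e1^T W_i e2, plus a linear
    term over the concatenated embeddings, squashed by tanh."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(len(W))])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

# One (untrained) score function per stock: parameters (W, V, b, u).
params = [
    (rng.normal(size=(k, d, d)), rng.normal(size=(k, 2 * d)),
     rng.normal(size=k), rng.normal(size=k))
    for _ in range(n_stocks)
]

stock_vec = rng.normal(size=d)   # stand-in for a one-hot entity encoding
entity_vec = rng.normal(size=d)  # new entity extracted from a report

scores = [ntn_score(stock_vec, entity_vec, W, V, b, u) for W, V, b, u in params]
predicted_stock = int(np.argmax(scores))  # stock with the highest score
```

In the paper this argmax over trained score functions is what yields the 69.3% hit ratio; here it merely shows the shape of the computation.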

Distribution of vibration signals according to operating conditions of wind turbine (풍력발전기 운전환경에 따른 진동신호 분포)

  • Shin, Sung-Hwan;Kim, SangRyul;Seo, Yun-Ho
    • The Journal of the Acoustical Society of Korea / v.35 no.3 / pp.192-201 / 2016
  • A Condition Monitoring System (CMS) has been used to detect unexpected faults of wind turbines caused by abrupt changes of circumstances or the aging of mechanical parts. In fact, regular inspection for maintenance is very hard work because wind turbines are located on mountaintops or at sea. The purpose of this study is to find distribution patterns of vibration signals measured from the main mechanical parts of a wind turbine according to its operating condition. To this end, acceleration signals of the main bearing, gearbox, and generator, along with wind speed, rotational speed, etc., were measured over a long period of more than 2 years, and trend analyses on each signal were conducted as a function of the rotational speed. In addition, correlation analysis among the signals was done to grasp the relations between mechanical parts. As a result, the vibrations were dependent on the rotational speed of the main shaft and on whether power was generated or not, and their distributions at a specific rotational speed could be approximated by a Weibull distribution. It was also found that vibration at the main bearing and vibration at the gearbox were correlated with each other, whereas vibration at the generator should be dealt with individually because of its generating mechanism. These results can be used for improving the performance of a CMS that provides early detection of mechanical abnormalities in wind turbines.
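Approximating a vibration-level distribution at a fixed rotational speed by a Weibull distribution amounts to fitting the two Weibull parameters to the measured amplitudes. A minimal sketch with SciPy, using synthetic amplitudes in place of the paper's measured acceleration levels:

```python
from scipy import stats

# Synthetic vibration amplitudes at one rotational speed (illustrative;
# the paper fits measured bearing/gearbox acceleration levels instead).
samples = stats.weibull_min.rvs(2.0, loc=0, scale=1.5, size=2000,
                                random_state=42)

# Fit a two-parameter Weibull: location fixed at 0, estimate shape & scale.
shape, loc, scale = stats.weibull_min.fit(samples, floc=0)
```

A CMS can then flag operation points whose fitted shape or scale drifts away from the baseline established during healthy operation.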

Antioxidant and Anti-Inflammatory Effects of Fermented Citrus unshiu Peel Extract Using Schizophyllum commune (치마버섯을 이용한 진피 발효 배양물의 항산화 및 항염 효과)

  • Song, Min-Hyeon;Bae, Jun-Tae;Ko, Hyun-Ju;Jang, Yong-Man;Lee, Jong-Dae;Lee, Geun-Soo;Pyo, Hyeong-Bae
    • Journal of the Society of Cosmetic Scientists of Korea / v.37 no.4 / pp.351-356 / 2011
  • Citrus unshiu (C. unshiu) Markovich peel is the dried peel of the mandarin orange, whose fresh fruit is one of the famous foods in Korea and East Asia. In oriental medicine, C. unshiu peel is known to have a diuretic effect and to strengthen spleen function. Recently, the natural flavonoids of C. unshiu peel have been investigated. In this study, C. unshiu peel extract containing flavonoid glycosides was cultured with Schizophyllum commune (S. commune) mycelia producing β-glucosidase, and its biological activities were investigated. The β-glucosidase of S. commune mycelia converted the flavonoid glycosides (rutin and hesperidin) into aglycones (naringenin and hesperetin). The fermented C. unshiu peel extract compounds were analyzed by an HPLC system. The photoprotective potential of the fermented C. unshiu peel extract was tested in human dermal fibroblasts (HDFs) exposed to UVA. The fermented C. unshiu peel extract also showed a notable in vitro anti-inflammatory effect on cellular systems generating cyclooxygenase-2 (COX-2) and 5-lipoxygenase (5-LOX) metabolites. Also, UVB-induced production of interleukin-1α in human HaCaT cells was reduced in a dose-dependent manner by treatment with the fermented C. unshiu peel extract. These results suggest that fermented C. unshiu peel extract may mitigate the effects of photoaging in skin by reducing UV-induced adverse skin reactions.

The Effect of Brand Storytelling in Brand Reputation (브랜드명성수준에 따른 브랜드 스토리텔링의 효과)

  • Choi, Soow-A;Jung, Hyo-Sun;Hwang, Yoon-Yong
    • Journal of Distribution Science / v.12 no.4 / pp.55-63 / 2014
  • Purpose - Brands and products often play key roles in enabling consumers to form a good attitude, mentally enacting a specific prototype and reliving the experience by retelling a specific story. Brand storytelling can function as an important tool for managing the brand. To successfully apply a firm's brand storytelling, it is important to prove the effectiveness of storytelling. Therefore, drawing on the research of Escalas (1998) and Fog et al. (2005), a list of measurements for storytelling component quality (SCQ) was applied. In addition, customer attitudes toward brand storytelling were tested. In particular, if customers encounter a dynamic and interesting story, they can commune with the brand and establish an emotional connection even when the brand is not widely known (Hill, 2003). Thus, brand reputation was divided into two levels (high vs. low), and the difference in effectiveness between storytelling component quality and consumers' advertisement attitude, brand attitude, and purchasing intention was examined. Research design, data, and methodology - Using the measurement list of Choi, Na, and Hwang (2013), 12 categories at the level of message quality, conflict quality, character quality, and plot quality were measured, along with brand reputation, advertisement attitude, brand attitude, and purchasing intention. The study was based on 181 final survey samples targeting undergraduate and graduate students in Gwangju Metropolitan City. Results - Consumer responses toward storytelling were researched in the context of brand characteristics or product attributes, such as brand reputation, differentiating this work from extant studies of the simple effects of storytelling.
Some brands with high reputation enjoy a halo effect due to prior learning, while other brands with comparatively low reputation have trouble generating positive responses despite attempts to enhance their reputation or induce favorable attitudes. Although this is not due solely to the component quality of storytelling, brands with low reputation exerted a more positive impact on consumer attitudes than did brands with high reputation. As mentioned earlier, consumer evaluation of the component quality of storytelling was categorized into advertising attitudes, brand attitudes, and purchase intention for this study; this provides managerial implications in other ways. The results imply that an effective application of storytelling could be an important emotional tool for the development both of brands with low brand awareness and of well-known brands. Finally, this study serves to increase consumers' understanding of, and ability to interpret, the brand stories that marketers tell about themselves, as well as to highlight differential experiences with products by level of brand hierarchy. Conclusions - This research aimed to provide an objective guideline for storytelling component quality while considering brand awareness. Thus, brand reputation was considered in proving the baseline effectiveness of storytelling, and this study provided directions for the strategic establishment of storytelling. Based on this, we conclude that further studies will need to systematically manage the brand story by considering other situational variables and various story patterns, and by studying their differences.