• Title/Summary/Keyword: the use of technology


Influence analysis of Internet buzz on corporate performance: Individual stock price prediction using sentiment analysis of online news (온라인 언급이 기업 성과에 미치는 영향 분석 : 뉴스 감성분석을 통한 기업별 주가 예측)

  • Jeong, Ji Seon;Kim, Dong Sung;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.37-51 / 2015
  • Due to the development of internet technology and the rapid growth of internet data, studies are actively conducted on how to use and analyze such data for various purposes. In particular, a number of recent studies have applied text mining techniques to overcome the limitations of analyses that rely only on structured data. Among them are various studies on sentiment analysis, which scores opinions based on the polarity distribution (positive or negative) of the words or sentences in documents. As part of this line of research, this study tries to predict the ups and downs of companies' stock prices by performing sentiment analysis on online news about those companies. A variety of news on companies is produced online by different economic agents, and it diffuses quickly and is easily accessed on the Internet. Based on the inefficient market hypothesis, we can therefore expect that news about an individual company can be used to predict fluctuations in that company's stock price if proper data analysis techniques are applied. However, because companies operate in different business areas, text analysis based on machine learning must account for the characteristics of each company. In addition, since news containing positive or negative information on a certain company can affect other companies or industry sectors in various ways, a separate analysis is needed to predict each company's stock price. This study therefore attempted to predict changes in the stock prices of individual companies by applying sentiment analysis to online news data. Accordingly, the study chose top companies in the KOSPI 200 as subjects and collected and analyzed two years of online news on each company from Naver, a representative domestic search portal. In addition, considering that vocabularies carry different meanings for different economic subjects, the study aims to improve performance by building a lexicon for each individual company and applying it in the analysis. The results show that prediction accuracy differs by company, with an average accuracy of 56%. Comparing prediction accuracy across industry sectors, 'energy/chemical', 'consumer goods for living', and 'consumer discretionary' showed relatively high accuracy, while sectors such as 'information technology' and 'shipbuilding/transportation' showed lower accuracy. Since only five representative companies were collected for each industry, it is somewhat difficult to generalize, but a difference in prediction accuracy across industry sectors could be confirmed. At the individual company level, companies such as 'Kangwon Land', 'KT&G', and 'SK Innovation' showed relatively high prediction accuracy, while companies such as 'Young Poong', 'LG', 'Samsung Life Insurance', and 'Doosan' showed low prediction accuracy of less than 50%.
In this paper, we analyzed stock price prediction performance for individual companies using pre-built company-specific lexicons applied to online news information, aiming to improve prediction performance at the individual-company level. Building on this, future work can further increase prediction accuracy by addressing the problem of unnecessary words being added to the sentiment dictionary.
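To make the lexicon-scoring step concrete, here is a minimal Python sketch of per-company sentiment scoring; the lexicon entries, tokenized articles, and decision threshold are hypothetical placeholders, not the paper's actual per-company dictionaries.

```python
# A minimal sketch of the per-company lexicon approach described above.
# The lexicon, news tokens, and threshold are hypothetical placeholders.
from collections import Counter

# Hypothetical company-specific sentiment lexicon: term -> polarity weight
lexicon_sk_innovation = {"증설": 1.0, "흑자": 1.0, "호조": 0.5, "적자": -1.0, "파업": -0.8}

def sentiment_score(tokens, lexicon):
    """Sum polarity weights of lexicon terms appearing in a tokenized article."""
    counts = Counter(tokens)
    return sum(weight * counts[term] for term, weight in lexicon.items())

def predict_direction(articles, lexicon, threshold=0.0):
    """Predict 'up' if the aggregate daily sentiment exceeds the threshold."""
    total = sum(sentiment_score(a, lexicon) for a in articles)
    return "up" if total > threshold else "down"

# Usage with toy tokenized articles for one trading day
day_news = [["증설", "발표", "호조"], ["적자", "우려"]]
print(predict_direction(day_news, lexicon_sk_innovation))  # -> "up"
```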

Dismantling and Restoration of the Celadon Stool Treasure with an Openwork Ring Design (보물 청자 투각고리문 의자의 해체 및 복원)

  • KWON, Ohyoung;LEE, Sunmyung;LEE, Jangjon;PARK, Younghwan
    • Korean Journal of Heritage: History & Science / v.55 no.2 / pp.200-211 / 2022
  • The celadon stools with an openwork ring design, a collection of four items, were excavated from Gaeseong, Gyeonggi-do Province. They were designated and managed as treasures due to their high art-historical value, demonstrating the excellence of celadon manufacturing techniques and the fanciful lifestyles of the Goryeo Dynasty. However, one of the items, which appeared to have been repaired and restored in the past, had suffered a decline in aesthetic value due to the aging of the treatment materials and the limited skill of the previous conservator, and its structural instability raised the need for re-treatment. An examination prior to conservation treatment found structural vulnerabilities: physical damage had been inflicted throughout an area that was already defective at the time of manufacture. The bonded surfaces of the cracked areas and detached fragments did not fit, and these areas had deteriorated because adhesive had trickled down onto the celadon surface or secondary contaminants, such as dust, had settled on the adhesive surface. To investigate the earlier repair and restoration, the study identified the position, scope, and condition of the bonded areas at the cracks using UV light and microscopy. Fourier-transform infrared spectroscopy (FT-IR) and portable X-ray fluorescence spectroscopy of the materials used in the former treatment confirmed the use of cellulose resins and epoxy resins as adhesives. The analysis further revealed that gypsum (CaSO4·2H2O) and bone meal (Ca10(PO4)6(OH)2) had been added to the adhesive to increase the bonding strength in some load-bearing bonded areas. Based on these results, the conservation treatment focused on completely dismantling the existing bonded areas and then consolidating vulnerable areas through bonding and restoration. After the prior adhesive was removed and the piece dismantled, the celadon stool separated into six large fragments, including the top and bottom, the curved legs, and parts of the ring design. The remaining adhesive and contaminants were then removed chemically and physically, and a steam cleaner was used to clean the fracture surfaces to improve the efficacy of re-bonding. Adhesives were applied differently depending on the bonding area and size. The cyanoacrylate resin Loctite 401 was used on bonds that fixed the positions of fragments, while 20% Paraloid B-72 (in xylene) was applied to cross sections for reversibility in structurally stabilizing areas before the fragments were bonded with the epoxy resin Epo-tek 301-2. For load-bearing areas such as the top and bottom, kaolin was added to Epo-tek 301-2 to reinforce the bonding strength. For the missing parts of the ring design, where a continuous pattern could be assumed, a frame was made using SN-sheets, and the ring design was modeled and restored by connecting the damaged cross section with Wood Epos. Other gaps that arose during bonding were filled with Wood Epos for aesthetic and structural stabilization. Restored and filled areas were color-matched to avoid any sense of disharmony from differences in texture during future exhibitions.
The investigation and treatment process, which involved a variety of scientific techniques, was systematically documented so that it can serve as baseline data for future conservation and maintenance.

Analysis of shopping website visit types and shopping pattern (쇼핑 웹사이트 탐색 유형과 방문 패턴 분석)

  • Choi, Kyungbin;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.85-107 / 2019
  • Online consumers browse products belonging to a particular product line or brand with purchase in mind, or simply navigate widely without making a purchase. Research on the behavior and purchases of online consumers has progressed steadily, and services and applications based on consumer behavior data have been developed in practice. In recent years, customization strategies and recommendation systems have been deployed thanks to the development of big data technology, in attempts to optimize users' shopping experience. Even so, only a small fraction of website visits actually proceed to the purchase stage, because online consumers do not visit a website only to purchase products; they use and browse websites differently according to their shopping motives and purposes. Analyzing the various types of visits, not just purchase visits, is therefore important for understanding the behavior of online consumers. In this study, we performed a session-level clustering analysis based on clickstream data from an e-commerce company in order to capture the diversity and complexity of online consumers' search behavior and to typify it. For the analysis, we converted more than 8 million page-level data points into visit-level sessions, yielding a total of over 500,000 website visit sessions. For each session, 12 characteristics such as page views, duration, search diversity, and page-type concentration were extracted for clustering. Considering the size of the data set, we used the Mini-Batch K-means algorithm, which has advantages in learning speed and efficiency while maintaining clustering performance similar to K-means. The optimal number of clusters was found to be four, and differences in session characteristics and purchase rates were identified for each cluster. Online consumers typically visit a website several times, learn about the product, and then decide to purchase. To analyze this multi-visit purchasing process, we constructed consumer visit-sequence data based on the navigation patterns derived from the clustering analysis. Each visit sequence covers the series of visits leading up to one purchase, and the items constituting a sequence are the cluster labels derived above. We separately built sequence data for consumers who made purchases and for consumers who only explored products without purchasing during the same period, and then applied sequential pattern mining to extract frequent patterns from each data set. The minimum support was set to 10%, and each frequent pattern consists of a sequence of cluster labels. While some patterns were common to both data sets, other frequent patterns appeared in only one of them. Comparative analysis of the extracted frequent patterns showed that consumers who made purchases repeatedly searched for a specific product before deciding to buy it.
The implication of this study is that we typified the search behavior of online consumers using large-scale clickstream data and analyzed their visit patterns to explain the purchasing process from a data-driven point of view. Most studies that classify online consumers into types have focused on the characteristics of each type and the key factors that distinguish them. In this study, we not only typified the behavior of online consumers but also analyzed the order in which the types are organized into series of search patterns. In addition, online retailers can try to improve purchase conversion through marketing strategies and recommendations tailored to the various visit types, and can evaluate the effect of such strategies through changes in consumers' visit patterns.
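As a concrete illustration of the clustering step, the following is a minimal sketch using scikit-learn's MiniBatchKMeans on a synthetic stand-in for the session feature matrix; the feature values and batch size are assumptions, not the paper's actual data or settings.

```python
# A minimal sketch of session clustering, assuming a matrix in which each row
# is one visit session described by 12 features (page views, duration, search
# diversity, page-type concentration, etc.). The data here are synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
X = rng.random((500_000, 12))          # stand-in for ~500K sessions x 12 features

X_scaled = StandardScaler().fit_transform(X)

# Mini-Batch K-means trades a little clustering quality for large gains in
# speed and memory, which matters at this data size.
km = MiniBatchKMeans(n_clusters=4, batch_size=10_000, random_state=0)
labels = km.fit_predict(X_scaled)      # one cluster label per session

# These labels become the items of each consumer's visit sequence, to which
# sequential pattern mining with a 10% minimum support is then applied.
```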

The Effect of PET/CT Images on SUV with the Correction of CT Images Using Contrast Media (PET/CT 영상에서 조영제를 이용한 CT 영상의 보정(Correction)에 따른 표준화섭취계수(SUV)의 영향)

  • Ahn, Sha-Ron;Park, Hoon-Hee;Park, Min-Soo;Lee, Seung-Jae;Oh, Shin-Hyun;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.77-81 / 2009
  • Purpose: The PET component of PET/CT (Positron Emission Tomography/Computed Tomography) quantitatively shows biological and chemical information about the body, but is limited in presenting clear anatomic structure. Combining PET with CT not only offers higher resolution but also effectively shortens scanning time and reduces noise by using the CT data for attenuation correction. Because contrast media make it easier to determine the exact extent of a lesion and to distinguish normal organs at CT scanning, their use is increasing. When contrast media are used, however, they affect the semi-quantitative measures of the PET/CT images. In this study, we therefore sought to establish the reliability of the SUV (Standardized Uptake Value) with CT data correction so as to support more accurate diagnosis. Materials and Methods: A total of 30 subjects (age range 27 to 72, mean age 49.6) were studied on a DSTe scanner (General Electric Healthcare, Milwaukee, MI, USA). $^{18}F$-FDG (370~555 MBq, scaled to body weight) was injected, and after about 60 minutes of rest a whole-body scan was acquired. The CT scan was set to 140 kV and 210 mA, and the contrast medium was administered at 2 cc per kg of body weight. From the raw scan data, we reconstructed images with attenuation correction based on both the corrected and the uncorrected CT data, showing the effect of the contrast medium. ROIs (Regions of Interest) were then drawn in each area to measure the SUV and analyze the differences. Results: According to the analysis, correction for the contrast medium decreased the SUV in the liver and heart, which have greater blood flow than other organs, whereas there was no difference in the lungs. Conclusions: CT images acquired with contrast media in PET/CT increase the contrast of the targeted region and thus improve diagnostic efficiency, but they also increase the SUV, a semi-quantitative measure. In this study, we measured the variation of the SUV after correcting for the influence of the contrast medium and compared the differences. By revising the SUV that is inflated when attenuation correction uses contrast-enhanced CT, high-resolution anatomical images can be obtained without distorting quantification. Moreover, this more reliable semi-quantitative method is expected to enhance diagnostic value.
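For reference, the body-weight-normalized SUV that the ROI measurements produce follows the standard definition sketched below; the numbers in the example are illustrative, not values from this study.

```python
# A minimal sketch of the standard body-weight-normalized SUV computation
# underlying the ROI measurements above; the example values are illustrative.

def suv(roi_activity_kbq_per_ml: float, injected_dose_mbq: float, weight_kg: float) -> float:
    """SUV = tissue activity / (injected dose / body weight).

    Units: kBq/mL tissue activity, MBq injected dose, kg body weight.
    Assuming ~1 g of tissue occupies ~1 mL, the result is dimensionless.
    """
    dose_per_gram = injected_dose_mbq * 1000.0 / (weight_kg * 1000.0)  # kBq/g
    return roi_activity_kbq_per_ml / dose_per_gram

# Example: liver ROI of 5.2 kBq/mL, 450 MBq injected, 70 kg patient
print(round(suv(5.2, 450.0, 70.0), 2))  # ~0.81
```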


Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the learning target, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solved the data-imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, and made it possible to reflect the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. The model can thus provide stable default risk assessment for unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although there have been many recent studies predicting corporate default risk with machine learning, most make predictions with a single model, so model bias remains an issue. A stable and reliable valuation methodology is required for calculating default risk, given that default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are likewise required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced individual models' bias by using stacking ensemble techniques that combine various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and diverse corporate information while preserving the advantage of machine-learning-based default risk prediction models, namely short computation time. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven folds, and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts from the stacking ensemble model and each individual model were constructed.
Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, we used the nonparametric Wilcoxon rank-sum test to check whether the two models' forecasts in each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP and CNN models. In addition, this study offers a methodology by which existing credit rating agencies can adopt machine-learning-based default risk prediction, given that traditional credit rating models can also be incorporated as sub-models when calculating the final default probability. The stacking ensemble technique proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used as a resource for increasing practical adoption by overcoming and improving the limitations of existing machine-learning-based models.
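As an illustration of the stacking idea, here is a minimal scikit-learn sketch in which sub-models' out-of-fold forecasts feed a meta-learner; the sub-models, synthetic data, and meta-learner are stand-ins (no CNN), not the paper's exact configuration.

```python
# A minimal sketch of stacking under stated assumptions: sub-models are trained
# on folds of the training data and their out-of-fold forecasts feed a
# meta-learner. Synthetic data; not the paper's K-IFRS dataset or sub-models.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2_000, 160))               # 160 financial-statement features
y = 1 / (1 + np.exp(-X[:, :5].sum(axis=1)))     # synthetic Merton-style default risk

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=100, random_state=42)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300, random_state=42)),
    ],
    final_estimator=Ridge(),
    cv=7,   # 7-fold out-of-fold forecasts, echoing the paper's 7-way split
)
stack.fit(X_tr, y_tr)
print("R^2 on held-out data:", stack.score(X_te, y_te))
```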

Image Watermarking for Copyright Protection of Images on Shopping Mall (쇼핑몰 이미지 저작권보호를 위한 영상 워터마킹)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.147-157 / 2013
  • With the advent of a digital environment that can be accessed anytime and anywhere over high-speed networks, the free distribution and use of digital content became possible. Ironically, this environment has also given rise to a variety of copyright infringements, and product images used in online shopping malls are pirated frequently. Whether shopping mall images qualify as creative works is controversial. According to a Supreme Court decision in 2001, advertising photographs of ham products merely reproduced the appearance of the objects to convey product information and were therefore not creative expression; however, the photographer's losses were recognized, and damages were estimated from the typical cost of an advertising photo shoot. According to a Seoul District Court precedent in 2003, if the photographer's personality and creativity are present in the selection of the subject, the composition of the set, the direction and amount of light, the camera angle, the shutter speed and timing, and other methods of capturing, developing, and printing, the work should be protected by copyright law. For shopping mall images to receive copyright protection under the law, they must do more than simply convey the state of the product; effort is required so that the photographer's personality and creativity can be recognized. Accordingly, the cost of producing mall images increases, and the need for copyright protection grows. Product images in online shopping malls have a very particular configuration, unlike general pictures such as portraits and landscape photographs, so general image watermarking techniques cannot satisfy their requirements. Because the background of product images commonly used in shopping malls is white, black, or a grayscale gradient, there is little room to embed a watermark, and these areas are very sensitive to even slight changes. In this paper, the characteristics of shopping mall images are analyzed and a watermarking technique suitable for them is proposed. The proposed technique divides a product image into small blocks, transforms the corresponding blocks by DCT (Discrete Cosine Transform), and then inserts the watermark information by quantizing the DCT coefficients. Because uniform quantization of the DCT coefficients causes visible blocking artifacts, the proposed algorithm uses a weighted mask that quantizes the coefficients near block boundaries finely and those in the center of the block coarsely. This mask improves the subjective visual quality as well as the objective quality of the images. In addition, to improve the safety of the algorithm, the blocks in which the watermark is embedded are randomly selected, and a turbo code is used to reduce the BER when extracting the watermark. The PSNR (Peak Signal to Noise Ratio) of a shopping mall image watermarked by the proposed algorithm is 40.7~48.5 dB, and the BER (Bit Error Rate) after JPEG compression with QF = 70 is 0. This means the watermarked image is of high quality and the algorithm is robust to the JPEG compression generally used at online shopping malls, which in practice use compressed images with a QF higher than 90. For a 40% change in size and 40 degrees of rotation, the BER is also 0.
Because pirated images are replicated from the original image, the proposed algorithm can identify copyright infringement in most cases. As the experimental results show, the proposed algorithm is suitable for shopping mall images with simple backgrounds. However, future work should enhance the robustness of the proposed algorithm, because some robustness is lost after the mask process.
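To illustrate the core embedding mechanism, here is a minimal sketch of quantization-based watermarking of one mid-frequency DCT coefficient per 8x8 block; the step size and coefficient position are illustrative assumptions, and the paper's weighted mask, random block selection, and turbo coding are omitted.

```python
# A minimal sketch of embedding one watermark bit per 8x8 block via
# quantization of a mid-frequency DCT coefficient (quantization index
# modulation). Step size and coefficient position are illustrative.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):  return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(block): return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed_bit(block, bit, step=12.0, pos=(3, 2)):
    """Quantize one DCT coefficient to an even (bit 0) or odd (bit 1) multiple of step."""
    c = dct2(block)
    q = np.round(c[pos] / step)
    if int(q) % 2 != bit:
        q += 1.0
    c[pos] = q * step
    return idct2(c)

def extract_bit(block, step=12.0, pos=(3, 2)):
    """Recover the bit from the parity of the quantized coefficient."""
    return int(np.round(dct2(block)[pos] / step)) % 2

# Round-trip on a random 8x8 image block
rng = np.random.default_rng(1)
blk = rng.integers(0, 256, (8, 8)).astype(float)
marked = embed_bit(blk, 1)
print(extract_bit(marked))  # -> 1
```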

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults affect not only the stakeholders of bankrupt companies, including managers, employees, creditors, and investors, but also have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even afterwards, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables involved in corporate defaults change over time. This is confirmed by comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study, which shows that the major factors affecting corporate failure have changed; Grice (2001) likewise found shifts in the importance of predictive variables using Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. To construct consistent prediction models, it is therefore necessary to compensate for time-dependent bias with a time series analysis algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data, from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent through time, we first train a deep learning time series model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007~2008). The resulting model shows patterns similar to the training results and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008), applying the optimal parameters from the validation stage. Finally, the corporate default prediction models trained over those nine years are evaluated and compared on the test data (2009), demonstrating the usefulness of the deep learning time series approach. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three bundles of variables. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms is compared. Corporate data suffer from nonlinear variables, multi-collinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and, ultimately, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, deep learning is much faster than regression analysis at corporate default prediction modeling and is more effective in predictive power. Through the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet research on deep learning time series methods for the financial industry is still insufficient. This is an initial study of deep learning time series analysis of corporate defaults, and we hope it will serve as comparative reference material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
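As a concrete illustration of the kind of model being compared, here is a minimal LSTM sketch over yearly financial ratios; the architecture, synthetic data, and hyperparameters are assumptions, not the paper's tuned model.

```python
# A minimal sketch of an LSTM default-prediction model over yearly financial
# ratios, assuming inputs shaped (companies, years, features). Synthetic data.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 7, 20)).astype("float32")    # 7 years x 20 financial ratios
y = rng.integers(0, 2, size=(1000,)).astype("float32")  # 1 = default, 0 = non-default

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(7, 20)),
    tf.keras.layers.LSTM(32),                 # summarizes the firm's time series
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
```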

The hydrodynamic characteristics of the canvas kite - 2. The characteristics of the triangular canvas kite - (캔버스 카이트의 유체역학적 특성에 관한 연구 - 2. 삼각형 캔버스 카이트의 특성 -)

  • Bae, Bong-Seong;Bae, Jae-Hyun;An, Heui-Chun;Lee, Ju-Hee;Shin, Jung-Wook
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.40 no.3 / pp.206-213 / 2004
  • As far as opening devices for fishing gear are concerned, applications of kites are under development around the world. Typical examples are the opening device of the stow net on anchor and the buoyancy material of the trawl. While the stow net on anchor has proved its capability over the past 20 years, the kite-rigged trawl has not been widely used, since it was first introduced for commercial use without sufficient study and thus revealed many drawbacks. The fundamental hydrodynamics of the kite itself therefore need to be studied further. Models of plate and canvas kites were deployed in a circulating water tank for mechanical testing. Lift and drag tests were performed considering changes in the shape of the models, which resulted in different aspect ratios of rectangular and trapezoidal forms. The results are summarized as follows, where aspect ratio, attack angle, lift coefficient, and maximum lift coefficient are denoted by A, B, $C_L$ and $C_{Lmax}$, respectively: 1. For the triangular plate, $C_{Lmax}$ was 1.26~1.32 with A${\leq}$1 and $38^{\circ}{\leq}B{\leq}42^{\circ}$; when A${\geq}$1.5 and $20^{\circ}{\leq}B{\leq}50^{\circ}$, $C_L$ was around 0.85. For the inverted triangular plate, $C_{Lmax}$ was 1.46~1.56 with A${\leq}$1 and $36^{\circ}{\leq}B{\leq}38^{\circ}$; when A${\geq}$1.5 and $22^{\circ}{\leq}B{\leq}26^{\circ}$, $C_{Lmax}$ was 1.05~1.21. For the triangular kite, $C_{Lmax}$ was 1.67~1.77 with A${\leq}$1 and $46^{\circ}{\leq}B{\leq}48^{\circ}$; when A${\geq}$1.5 and $20^{\circ}{\leq}B{\leq}50^{\circ}$, $C_L$ was around 1.10. For the inverted triangular kite, $C_{Lmax}$ was 1.44~1.68 with A${\leq}$1 and $28^{\circ}{\leq}B{\leq}32^{\circ}$; when A${\geq}$1.5 and $18^{\circ}{\leq}B{\leq}24^{\circ}$, $C_{Lmax}$ was 1.03~1.18. 2. For the model with A=1/2, an increase in B increased $C_L$ until it reached its maximum, after which $C_L$ decreased very gradually or remained unchanged. The model with A=2/3 behaved similarly. For the model with A=1, an increase in B increased $C_L$ until the maximum, and $C_L$ did not change dramatically thereafter. For the model with A=1.5, $C_L$ varied only slightly with B, within 0.75~1.22 for $20^{\circ}{\leq}B{\leq}50^{\circ}$. For the model with A=2, the behavior of $C_L$ as a function of B was almost the same as in the triangular model, with no considerable change for $20^{\circ}{\leq}B{\leq}50^{\circ}$. 3. Compared with the non-inverted models, the inverted models' $C_L$ reached its maximum rapidly as B increased and then decreased gradually; the others decreased sharply. 4. The action point of the dynamic pressure was close to the rear of the model at small attack angles and close to the front at large attack angles. 5. A camber vertex formed where the fluid pressure was generated; the triangular canvas had a larger camber-vertex value at high aspect ratio, while the inverted triangular canvas showed the reverse. 6. All canvas kites had a larger camber ratio at high aspect ratio, and the triangular canvas had a larger camber ratio at high attack angle, while the inverted triangular canvas showed the reverse.
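For readers unfamiliar with the notation, a lift coefficient such as $C_L$ above is computed from tank measurements by the standard definition $C_L = L / (0.5\,\rho V^2 S)$; the sketch below uses illustrative numbers, not the paper's measured values.

```python
# A minimal sketch of the standard lift-coefficient computation behind the
# C_L values above; the example numbers are illustrative only.

def lift_coefficient(lift_n: float, rho: float, speed: float, area: float) -> float:
    """C_L = L / (0.5 * rho * V^2 * S), in SI units (N, kg/m^3, m/s, m^2)."""
    return lift_n / (0.5 * rho * speed**2 * area)

# Example: 30 N of lift on a 0.09 m^2 canvas model at 0.6 m/s in fresh water
print(round(lift_coefficient(30.0, 1000.0, 0.6, 0.09), 2))  # ~1.85
```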

A Study on the Fabrication of the Laminated Wood Composed of Poplar and Larch (포푸라와 일본잎갈나무의 집성재 제조에 관한 연구)

  • Jo, Jae-Myeong;Kang, Sun-Goo;Kim, Ki-Hyeon;Chung, Byeong-Jae
    • Journal of the Korean Wood Science and Technology / v.2 no.2 / pp.25-31 / 1974
  • 1. Various gluing qualities obtained with Resorcinol Plyophen #6000 were studied, focusing on the strength relationships of laminated woods made from a single species [poplar (Populus deltoides) or larch (Larix leptolepis)], from mixed species (poplar and larch), from preservative-treated poplar, from scarf joints of mixed species, and from scarf joints treated with preservatives. 1.1 In the block shear and DVL tension tests, the mean wood failure ratio showed an excellent value, above 65%; the tangential strength of larch was higher than the radial strength, while the reverse held for poplar, as shown in Tables 1 and 2. 1.2 Laminas treated with Na-PCP showed slightly reduced strength, but the strength limit allowed for manufacturing laminated wood was not affected by the Na-PCP treatment, as shown in Tables 3 and 4. 1.3 The safe scarf ratio in the plane scarf joint was above 1/12 for larch and 1/6 for poplar, regardless of chemical treatment, as shown in Tables 5, 6, 7 and 8. 2. In the normal and boiled states, the gluing quality of laminated wood composed of a single species [poplar (Populus deltoides), larch (Larix leptolepis)] or of both species (poplar and larch), glued with Resorcinol Plyophen #6000, was measured as follows, together with the delamination of the same laminated wood. 2.1 The normal block shear strength of the straight and curved laminated wood (in life size) was more than three times the standard adhesion strength. The strength of the boiled stock decreased to one half of that value, but was still more than twice the standard shear adhesion strength for boiled stock. The water resistance of Resorcinol Plyophen #6000 was thus recognized to be very high, as shown in Tables 9 and 10. 2.2 The delamination ratios of the straight and curved laminated woods decreased in the following order of composition: larch, mixed stock (larch+poplar), and poplar. The maximum value, for larch, was 3.5%, which was below the allowable limit, as shown in Table 11. 3. The compressive, bending, and adhesion strengths of straight laminated wood constructed from five plies of single- and double-species laminas, i.e., larch (Larix leptolepis) and poplar (Populus euramericana), glued with urea resin, were as follows: 3.1 If higher strength is desired in architectural laminated wood composed of poplar (P) and larch (L), the laminas should be arranged as L+P+L+P+L, as shown in Table 12. 3.2 The strength of laminated wood composed of laminas that included pith and knots was considerably lower than that of clear laminas, as shown in Table 13. 3.3 The shear strength of the FPL block of straight laminated wood constructed from the same species and glued with urea adhesives was more than twice the adhesion strength limit, making it possible to use this stock for interior construction.


Development of a split beam transducer for measuring fish size distribution (어체 크기의 자동 식별을 위한 split beam 음향 변환기의 개발)

  • 이대재;신형일
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.37 no.3 / pp.196-213 / 2001
  • A split beam ultrasonic transducer operating at 70 kHz for use in a fish-sizing echo sounder was developed, and its acoustic radiation characteristics were analyzed experimentally. Amplitude shading based on the properties of Chebyshev polynomials was used to obtain side lobe levels below -20 dB and to optimize the trade-off between the main beam width and the side lobe level; the amplitude shading coefficient of each element was realized by changing the elements' amplitude contributions with 4 weighting transformers embodied in the planar array transducer assembly. The planar array split beam transducer assembly was composed of 36 rod-type piezoelectric ceramic elements (NEPEC N-21, Tokin), each 10 mm in diameter and 18.7 mm in length, resonant at 70 kHz and arranged in a rectangular configuration, and 4 electrical inputs were supplied to the beamformer. A series of impedance measurements was conducted to check the uniformity of the individual quadrants, and the resonant frequency and the transmitting and receiving characteristics were measured and analyzed in a water tank in both the transmission and reception configurations. The results are summarized as follows: 1. The average resonant and antiresonant frequencies of the electrical impedance for the four quadrants of the split beam transducer in water were 69.8 kHz and 83.0 kHz, respectively. The average electrical impedance of each quadrant was 49.2$\Omega$ at the resonant frequency and 704.7$\Omega$ at the antiresonant frequency. 2. The resonance peak in the transmitting voltage response (TVR) for all four quadrants was observed at 70.0 kHz, where the TVR was about 165.5 dB re 1 $\mu$Pa/V at 1 m, with a bandwidth of 10.0 kHz between the -3 dB points. The resonance peak in the receiving sensitivity (SRT) for the four combined quadrants (quad LU+LL, quad RU+RL, quad LU+RU, quad LL+RL) was observed at 75.0 kHz, where the SRT was about -177.7 dB re 1 V/$\mu$Pa, with a bandwidth of 10.0 kHz between the -3 dB points. The sum beam transmitting voltage response and receiving sensitivity were 175.0 dB re 1 $\mu$Pa/V at 1 m at 75.0 kHz, with a bandwidth of 10.0 kHz. 3. The sum beam of the split beam transducer was approximately circular, with a half beam angle of $9.0^\circ$ at the -3 dB points in both the horizontal and vertical planes. The first measured side lobe levels for the sum beam were -19.7 dB at $22^\circ$ and -19.4 dB at $-26^\circ$ in the horizontal plane, and -20.1 dB at $22^\circ$ and -22.0 dB at $-26^\circ$ in the vertical plane. 4. The developed split beam transducer was tested for estimating the angular position of a target in the beam through split beam phase measurements, and the beam pattern loss for target strength corrections was measured and analyzed.
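To make the Chebyshev shading concrete, here is a minimal sketch using scipy's Dolph-Chebyshev window to generate element weights for a chosen side lobe level; the 6-element line array and half-wavelength spacing are illustrative assumptions, not the 36-element planar layout of the actual transducer.

```python
# A minimal sketch of Dolph-Chebyshev amplitude shading for side lobe control,
# as used above to push side lobes below -20 dB. Illustrative 6-element line
# array at half-wavelength spacing, not the actual 36-element planar array.
import numpy as np
from scipy.signal import find_peaks
from scipy.signal.windows import chebwin

weights = chebwin(6, at=20)    # 6 elements, side lobes 20 dB below the main lobe
print(np.round(weights, 3))    # tapered amplitude shading coefficients

# Array-factor check: sample the beam pattern and confirm the side lobe level
d_over_lambda = 0.5                                   # half-wavelength element spacing
theta = np.linspace(-np.pi / 2, np.pi / 2, 4001)      # angle from broadside
psi = 2 * np.pi * d_over_lambda * np.sin(theta)
n = np.arange(len(weights))
af = np.abs(weights @ np.exp(1j * np.outer(n, psi)))
af_db = 20 * np.log10(af / af.max())

peaks, _ = find_peaks(af_db)
side_lobes = [af_db[p] for p in peaks if af_db[p] < -3]   # exclude the main lobe
print("highest side lobe:", round(max(side_lobes), 1), "dB")  # ~ -20.0
```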
