• Title/Summary/Keyword: Advanced Calculation Methods


A Study on the Selecting Method of Books for the Medical Library in Korea; Citation Counting and Analysis of the Medical Literature (한국의학도서관(韓國醫學圖書館)에 있어서의 도서선택방법(圖書選擇方法)에 관한 연구(硏究) -인용문헌(引用文獻)의 계수(計數)와 분석(分析)을 중심(中心)으로-)

  • Shin, Jung-Won
    • Journal of the Korean BIBLIA Society for Library and Information Science
    • /
    • v.2 no.1
    • /
    • pp.266-295
    • /
    • 1974
  • The purpose of this study is to contribute to the effective management of medical literature in Korean libraries and, specifically, to help select medical books and periodicals by determining the value and life of medical literature through citation counting and analysis. This report presents methods of calculation and data collection for measuring the importance and half-life of medical literature and the authority of authors for Korean medical libraries. The writer conducted comparative studies based on data covering the two-year period 1970-1971, using about 16,899 citations in 1,032 articles of the above journals. The references and citations were counted and analyzed by number of authors, periodicals, books, and publication dates. From the ratios calculated through citation counting and analysis, we can determine the proportions of medical periodicals to books, of foreign to domestic literature, and of the literature of the most frequently cited authors, for the book-selection method of Korean medical libraries. (1) It was found that 61 main authors were cited 9 times. Most of them are Western authors; they were cited 14,374 times in total, which represents 88.6% of all citations. (2) The cited medical literature breaks down as follows: cited medical periodicals account for 82.0% and cited medical books for 18.0%. The writer therefore concentrated on the analysis of periodicals. (3) Classification of the periodicals by country indicates that about 11.2% of the total citations are from Korean medical literature; medical activity in Korea depends on advanced foreign countries at a ratio of 88.8%. Of the foreign medical periodicals cited, Japanese literature represents only 4.5%, while the literature of European countries and America constitutes 84.3%.
(4) If medical journals are arranged in decreasing order of their productivity of articles on a given subject, it emerges that 98 key journal titles are needed to cover 60% of the information in the field of medical science, and 60 titles to cover an average of 50%. (5) To measure the life of medical literature in Korea, the writer calculated the following half-lives:

  • Periodicals: 7.75 years
  • Books: 4.11 years
  • Whole literature: 6.37 years

(6) The half-lives of domestic and Japanese literature in medical science are comparatively short.
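The half-lives in item (5) are, in effect, the median citation age: the span of years within which half of all citations to a body of literature fall. A minimal sketch of that computation, using invented citation ages rather than the study's data:

```python
def literature_half_life(citation_ages):
    """Median citation age: the number of years containing half of all citations."""
    ages = sorted(citation_ages)
    n = len(ages)
    mid = n // 2
    if n % 2:  # odd count: the middle value
        return float(ages[mid])
    return (ages[mid - 1] + ages[mid]) / 2.0  # even count: mean of the two middle values

# Toy sample: ages (in years) of citations to a set of periodicals
ages = [1, 2, 2, 3, 4, 5, 7, 8, 10, 15]
print(literature_half_life(ages))  # → 4.5
```

With real data, the ages would be publication year of the citing article minus publication year of each cited item, tallied across the whole citation corpus.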


A Study on the Construction Cost Index for Calculating Conceptual Estimation : 1970-1999 (개략공사비 산출을 위한 공사비 지수 연구 : 1970-1999)

  • Nam, Song Hyun;Park, Hyung Keun
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.5
    • /
    • pp.527-534
    • /
    • 2020
  • A significant factor in construction work is cost. At both early- and advanced-stage design, costs should be calculated to derive realistic estimates based on unit-price calculation. Based on these estimates, the economic feasibility of the construction work is assessed and a decision on whether to proceed is made. The Korea Institute of Civil Engineering and Building Technology has calculated the construction cost index by indirect methods, reprocessing the producer price index and construction market labor data so that price changes in Korean construction costs can be adjusted easily, and has published the index since 2004. However, the index is only available from January 2000 onward, which constrains the correction of past construction cost data to present-day values. In this study, variables for computing a rough construction cost from past costs were derived through surveys of the producer price index and the construction market labor force that constitute the construction cost index. After significant independent variables were selected through correlation analysis, the construction cost index from 1970 to 1999 was calculated and presented through multiple regression analysis. This study is therefore significant in that it proposes a method of calculating rough construction costs using cost data that pre-date the 2000s.
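The back-casting step described above can be illustrated with ordinary least squares: fit the known (post-2000-style) index against the explanatory series, then apply the fitted coefficients to years where only those series are known. The numbers below are invented for illustration, not the study's survey data:

```python
import numpy as np

# Hypothetical yearly series: producer price index and construction labor index
ppi   = np.array([10.0, 12.5, 15.0, 20.0, 26.0, 33.0])
labor = np.array([ 8.0, 10.0, 13.0, 18.0, 25.0, 34.0])
cost_index = np.array([ 9.0, 11.5, 14.2, 19.3, 25.7, 33.6])  # known index values

# Fit cost_index ≈ b0 + b1*ppi + b2*labor by multiple linear regression
X = np.column_stack([np.ones_like(ppi), ppi, labor])
coef, *_ = np.linalg.lstsq(X, cost_index, rcond=None)

def estimate_index(p, l, c=coef):
    """Back-cast the construction cost index for a year where only
    the producer price index and labor index are known."""
    return c[0] + c[1] * p + c[2] * l

print(estimate_index(11.0, 9.0))
```

In the paper itself, candidate regressors are first screened by correlation analysis; here all regressors are simply kept for brevity.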

Design of e-Learning System for Spectral Analysis of High-Order Pulse (고차원펄스 스펙트럼 분석을 위한 이러닝 시스템의 설계)

  • Oh, Yong-Sun
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.8
    • /
    • pp.475-487
    • /
    • 2011
  • In this paper, we present a systematic method for deriving the spectrum of a high-order pulse and a novel design of an e-Learning system that teaches this derivation using a concept-based branching method. The spectrum of a high-order pulse can be derived using conventional methods such as consecutive differentiation or convolution; however, their computational complexity becomes too high as the order of the pulse increases. We develop a recursive algorithm over the order of the pulse and then derive the spectrum formula connected to the order with a newly designed look-up table. Next, we design e-Learning content for studying the derivation procedure described above. In this authoring, we use the concept-based object branching method, including conventional page- or title-type branching in sequential play. We design four content pages, divided into 'Modeling', 'Impulse Response and Transfer Function', 'Parameters', and 'Look-up Table', according to these conceptual objects, and modules and sub-modules are constructed hierarchically as conceptual elements from the content pages. Students can more easily grasp the core concepts of the analysis because of this new teaching method. Our system offers the e-Learning content step by step through a unit-based branching scheme for difficult modules and sub-modules, and supports repetitive learning for any block of the given learning objects that needs it. Moreover, this method of constructing content can itself be regarded as improving the effectiveness of the content.
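The recursive idea can be illustrated for the common case where an order-n pulse is the n-fold self-convolution of a rectangular pulse (our assumption for illustration; the paper's look-up-table formulation is not reproduced here). Convolution in time is multiplication in frequency, so each recursion step multiplies in one more sinc factor instead of recomputing a convolution:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

def high_order_pulse_spectrum(f, order, T=1.0):
    """Spectrum of an order-n pulse formed by n-fold self-convolution of a
    unit rectangular pulse of width T: S_n(f) = S_{n-1}(f) * T*sinc(f*T)."""
    if order == 1:
        return T * sinc(f * T)
    return high_order_pulse_spectrum(f, order - 1, T) * T * sinc(f * T)

print(high_order_pulse_spectrum(0.0, 3))  # DC value: T**3 = 1.0
```

The closed form S_n(f) = (T*sinc(fT))**n is what a look-up table over the order n would store, avoiding both repeated differentiation and numerical convolution.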

Planning of Optimal Work Path for Minimizing Exposure Dose During Radiation Work in Radwaste Storage (방사성 폐기물 저장시설에서의 방사선 작업 중 피폭선량 최소화를 위한 최적 작업경로 계획)

  • Park, Won-Man;Kim, Kyung-Soo;Whang, Joo-Ho
    • Journal of Radiation Protection and Research
    • /
    • v.30 no.1
    • /
    • pp.17-25
    • /
    • 2005
  • Since the safety of nuclear power plants has become a major social issue, the radiation exposure dose of workers has been one of the important safety factors. The existing dose-calculation methods used in radiation work planning assume that the dose rate does not depend on the location within a work space; thus the variation of exposure dose along different work paths is not considered. In this study, a modified numerical method was presented to estimate the exposure dose during radiation work in a radwaste storage facility, considering the effect of the distance between a worker and the sources. A new numerical algorithm was then suggested to search for the optimal work path that minimizes the exposure dose in a pre-defined work space with given radiation sources. Finally, a virtual work simulation program was developed to visualize the radiation exposure dose during work in radwaste storage and to provide simulation capability for work planning. As a numerical example, a test radiation work was simulated in a given space with two radiation sources, and the suggested optimal work path was compared with three predefined work paths. The optimal work path obtained in this study reduced the exposure dose for the given test work. Based on the results, the developed numerical method and simulation program could be useful tools in the planning of radiation work.
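One way to realize such a path search (a sketch of the general idea, not the paper's algorithm) is to discretize the work space into a grid, assign each cell a dose rate from the point sources, and run Dijkstra's algorithm with accumulated dose as the path cost. Inverse-square falloff is assumed here as a simplification of real shielding physics:

```python
import heapq

def dose_rate(cell, sources):
    """Total dose rate at a grid cell from point sources with inverse-square
    falloff (illustrative; real facilities involve shielding and scatter)."""
    x, y = cell
    rate = 0.0
    for sx, sy, strength in sources:
        r2 = (x - sx) ** 2 + (y - sy) ** 2
        rate += strength / max(r2, 0.25)  # clamp to avoid the singularity at a source
    return rate

def min_dose_path(start, goal, size, sources):
    """Dijkstra over a size x size grid; edge cost = dose rate at the entered
    cell, proportional to exposure accrued while traversing it."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size:
                nd = d + dose_rate(nxt, sources)
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    prev[nxt] = cell
                    heapq.heappush(pq, (nd, nxt))
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    path.append(start)
    return path[::-1], dist[goal]

# Two sources in a 10x10 room; the returned path bends away from them
path, total_dose = min_dose_path((0, 0), (9, 9), 10, [(5, 2, 10.0), (2, 6, 10.0)])
```

Comparing `total_dose` for this path against fixed candidate paths mirrors the paper's comparison of the optimal path with three predefined work paths.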

Modified Empirical Formula of Dynamic Amplification Factor for Wind Turbine Installation Vessel (해상풍력발전기 설치선박의 수정 동적증폭계수 추정식)

  • Ma, Kuk-Yeol;Park, Joo-Shin;Lee, Dong-Hun;Seo, Jung-Kwan
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.27 no.6
    • /
    • pp.846-855
    • /
    • 2021
  • Eco-friendly and renewable energy sources are being actively researched, and offshore wind power generation requires advanced design technologies as wind turbine capacities increase and wind turbine installation vessels (WTIVs) grow larger. The WTIV keeps the hull at a height that is not affected by waves. Its most important part is the leg structure, which must respond dynamically to wave, current, and wind loads. In particular, the wave load consists of irregular waves, so the exact dynamic response must be known. Dynamic response analysis using a single-degree-of-freedom (SDOF) model is a simplified approach, but it is limited in its treatment of random waves. Therefore, industrial practice relies on time-domain analysis of random waves based on a multi-degree-of-freedom (MDOF) model. Although the MDOF method provides high-precision results, its convergence is sensitive and it is difficult to apply owing to design complexity. A dynamic amplification factor (DAF) estimation formula is therefore developed in this study to express the dynamic response characteristics under random waves, based on time-domain analyses over different variables. It is confirmed that the calculation time can be shortened and accuracy enhanced compared with existing MDOF methods. The developed formula can be used in the initial design of WTIVs and similar structures.
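For reference, the classical SDOF dynamic amplification factor under harmonic excitation, the simplified baseline the abstract mentions (not the paper's modified empirical formula), is DAF = 1/sqrt((1-r^2)^2 + (2*zeta*r)^2):

```python
import math

def daf_sdof(freq_ratio, damping_ratio):
    """Classical SDOF dynamic amplification factor for harmonic loading.
    freq_ratio r = excitation frequency / natural frequency,
    damping_ratio zeta = fraction of critical damping."""
    r, z = freq_ratio, damping_ratio
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + (2.0 * z * r) ** 2)

print(daf_sdof(0.0, 0.05))  # → 1.0 (static case)
print(daf_sdof(1.0, 0.05))  # resonance: DAF ≈ 1/(2*zeta) = 10
```

The paper's contribution is an empirical correction of this kind of single-mode estimate so that it tracks MDOF time-domain results under random (irregular) wave loading.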

Optimized Hardware Design using Sobel and Median Filters for Lane Detection

  • Lee, Chang-Yong;Kim, Young-Hyung;Lee, Yong-Hwan
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.9 no.1
    • /
    • pp.115-125
    • /
    • 2019
  • In this paper, an image is received from a camera and the lane is detected. There are various ways to detect lanes; edge-based methods commonly use Sobel or Canny edge detection. Multiplication and division are minimized in the hardware design. The images are tested using black-box footage recorded in a vehicle. Because the top of the black-box image is mostly background, it is excluded from the calculation. Also, for speed, YCbCr is computed from the image and only the data for the desired lane colors, white and yellow, is kept for lane detection. A median filter is used to remove noise from the images. Median filters excel at noise rejection, but comparing all values in the window is generally slow; in this paper, the median filter result is obtained using additions, which shortens the processing time. Sobel edge detection is faster but more noise-sensitive than Canny edge detection, so the two are combined as complementary algorithms. The data is also organized and processed in parallel pipelines. To reduce memory size, the system does not store all data at each step but uses four line buffers: three line buffers perform the mask operations while one line buffer stores new data concurrently. With this scheme, processing is about six times faster and memory use about 33% lower than the other methods presented in this paper. The target operating frequency is 50 MHz, at which the design processes 640x360 images at 2157 fps, HD images at 540 fps, and Full HD images at 240 fps, which is sufficient for most 30 fps as well as 60 fps video. At the maximum operating frequency, even larger frame-processing loads can be handled.
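The three-line-buffer windowing can be sketched in software as a 3x3 median filter that reads exactly three rows at a time, mirroring the hardware scheme (a behavioral sketch, not RTL, and using a plain sort rather than the paper's addition-based median):

```python
def median3x3(image):
    """3x3 median filter over a 2D list of pixel values.
    The three rows referenced per output row play the role of the
    hardware line buffers; borders are passed through unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        rows = (image[y - 1], image[y], image[y + 1])  # the three "line buffers"
        for x in range(1, w - 1):
            window = [rows[j][x + i] for j in range(3) for i in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]  # 5th of 9 sorted values is the median
    return out

img = [[0, 0, 0, 0],
       [0, 255, 9, 0],   # single-pixel impulse noise at (1, 1)
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
print(median3x3(img)[1][1])  # → 0 (impulse removed)
```

In hardware, the fourth line buffer fills with the incoming row while these three feed the mask operation, which is what lets the pipeline run without stalling.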

Prediction accuracy of incisal points in determining occlusal plane of digital complete dentures

  • Kenta Kashiwazaki;Yuriko Komagamine;Sahaprom Namano;Ji-Man Park;Maiko Iwaki;Shunsuke Minakuchi;Manabu Kanazawa
    • The Journal of Advanced Prosthodontics
    • /
    • v.15 no.6
    • /
    • pp.281-289
    • /
    • 2023
  • PURPOSE. This study aimed to predict the positional coordinates of incisor points from the scan data of conventional complete dentures and verify their accuracy. MATERIALS AND METHODS. The standard triangulated language (STL) data of 100 scanned pairs of complete upper and lower dentures were imported into computer-aided design software, from which the position coordinates of the points corresponding to each landmark of the jaw were obtained. The x, y, and z coordinates of the incisor point (XP, YP, and ZP) were obtained from the maxillary and mandibular landmark coordinates using regression or calculation formulas, and accuracy was verified by determining the deviation between the measured and predicted coordinate values. YP was obtained in two ways, using the hamular-incisive-papilla (HIP) plane and facial measurements. Multiple regression analysis was used to predict ZP. Root mean squared error (RMSE) values were used to verify the accuracy of XP and YP. The RMSE value for ZP was obtained after cross-validation using the remaining 30 cases of denture STL data. RESULTS. The RMSE was 2.22 for predicting XP. When predicting YP, the RMSE of the method using the HIP plane was 3.18 and that using facial measurements was 0.73. Cross-validation revealed an RMSE of 1.53. CONCLUSION. YP and ZP could be predicted from anatomical landmarks of the maxillary and mandibular edentulous jaw, and YP could be predicted with better accuracy with the addition of the position of the lower border of the upper lip.
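The accuracy metric used throughout, root mean squared error between measured and predicted coordinate values, is straightforward to compute (a generic sketch with invented numbers, not the study's data):

```python
import math

def rmse(measured, predicted):
    """Root mean squared error between paired measured and predicted values."""
    assert len(measured) == len(predicted), "series must be the same length"
    squared_error = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    return math.sqrt(squared_error / len(measured))

# Toy example: one coordinate measured vs. predicted across three cases
print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```

In the study this is evaluated per coordinate axis (XP, YP, ZP), with ZP additionally cross-validated on held-out denture scans.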

A Combat Effectiveness Evaluation Algorithm Considering Technical and Human Factors in C4I System (NCW 환경에서 C4I 체계 전투력 상승효과 평가 알고리즘 : 기술 및 인적 요소 고려)

  • Jung, Whan-Sik;Park, Gun-Woo;Lee, Jae-Yeong;Lee, Sang-Hoon
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.55-72
    • /
    • 2010
  • Recently, the battlefield environment has changed from platform-centric warfare (PCW), which focuses on maneuvering forces, to network-centric warfare (NCW), which is based on the connectivity of assets through the warfare information system, as information technology advances. In particular, the C4I (Command, Control, Communication, Computer and Intelligence) system can be an important factor in achieving NCW. It is generally used to provide direction across distributed forces and status feedback from those forces. It can deliver important information more quickly, and in the correct format, to friendly units, and it can achieve information superiority through situational awareness (SA). Most advanced countries have developed and already applied such systems in military operations. ROK forces have therefore also been developing C4I systems such as KJCCS (Korea Joint Command Control System) and are increasing budgets for the establishment of warfare information systems. However, it is difficult to evaluate C4I effectiveness properly owing to a lack of suitable methods, and a new combat effectiveness evaluation method suitable for NCW is needed. Existing evaluation methods place disproportionate emphasis on technical factors and leave much to be desired regarding human factors, so both technical and human factors must be considered when evaluating combat effectiveness. In this study, we propose a new combat effectiveness evaluation algorithm called E-TechMan (a combat effectiveness evaluation algorithm considering technical and human factors in C4I systems). The algorithm draws on Newton's second law, $F=\frac{m{\Delta}{\upsilon}}{{\Delta}t}{\Rightarrow}\frac{M{\upsilon}I}{T}{\times}C$. Five factors are considered in the combat effectiveness evaluation: network power (M), movement velocity (v), information accuracy (I), command and control time (T), and collaboration level (C). Previous research did not consider the value of nodes and arcs in evaluating network power after the C4I system has been established; in addition, collaboration level, which can be a major factor in combat effectiveness, was not considered. The E-TechMan algorithm is applied to the JFOS-K (Joint Fire Operating System-Korea) system, which connects the KJCCS of the Korean armed forces with the JADOCS (Joint Automated Deep Operations Coordination System) of the U.S. armed forces and achieves a real-time sensor-to-shooter capability at the JCS (Joint Chiefs of Staff) level. We compared the combat effectiveness evaluated by E-TechMan with the results of other algorithms (e.g., C2 theory, Newton's second law); combat effectiveness can be evaluated more effectively and substantially by E-TechMan. This study is meaningful because it improves the realism of combat effectiveness calculation in C4I systems. Part 2 describes the changes in the war paradigm and previous combat effectiveness evaluation methods such as C2 theory, Part 3 explains the E-TechMan algorithm in detail, Part 4 presents the application to JFOS-K and analyzes the results against the other algorithms, and Part 5 presents the conclusions.
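Reading the Newton analogy literally, the score is a force-like ratio scaled by the collaboration level. A toy evaluation might look like this (illustrative inputs and normalization assumed by us, not the paper's calibrated values):

```python
def combat_effectiveness(network_power, velocity, info_accuracy,
                         c2_time, collaboration):
    """E-TechMan-style score: (M * v * I / T) * C, by analogy with
    F = m*dv/dt. All inputs are assumed pre-normalized to comparable scales."""
    if c2_time <= 0:
        raise ValueError("command-and-control time must be positive")
    return (network_power * velocity * info_accuracy / c2_time) * collaboration

# Halving command-and-control time doubles the score, other factors held fixed
base = combat_effectiveness(10.0, 1.2, 0.9, 2.0, 0.8)
fast = combat_effectiveness(10.0, 1.2, 0.9, 1.0, 0.8)
```

The structure makes the human factor explicit: with collaboration C = 0 the score collapses to zero regardless of the technical factors, which is the gap the authors identify in purely technical evaluation methods.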

A Pilot Study for the Feasibility of F-18 FLT-PET in Locally Advanced Breast Cancer: Comparison with F-18 FDG-PET (국소진행성 유방암에서 F-18 FLT-PET 적용 가능성에 대한 예비 연구: F-18 FDG-PET와 비교)

  • Hyuen, Lee-Jai;Kim, Euy-Nyong;Hong, Il-Ki;Ahn, Jin-Hee;Kim, Sung-Bae;Ahn, Sei-Hyun;Gong, Gyung-Yup;Kim, Jae-Seung;Oh, Seung-Jun;Moon, Dae-Hyuk;Ryu, Jin-Sook
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.42 no.1
    • /
    • pp.29-38
    • /
    • 2008
  • Purpose: The aim of this study was to investigate the feasibility of 3'-[F-18]fluoro-3'-deoxythymidine positron emission tomography (FLT-PET) for the detection of locally advanced breast cancer and to compare the degree of FLT and 2'-deoxy-2'-[F-18]fluoro-D-glucose (FDG) uptake in the primary tumor, lymph nodes, and other normal organs. Materials & Methods: The study subjects were 22 female patients (mean age $42{\pm}6$ years) with biopsy-confirmed infiltrating ductal carcinoma between Aug 2005 and Nov 2006. We performed a conventional imaging workup, FDG-PET, and FLT PET/CT. The average tumor size measured by MRI was $7.2{\pm}3.4$ cm. With visual analysis, tumor and lymph node uptakes of FLT and FDG were determined by calculating the standardized uptake value (SUV) and tumor-to-background (TB) ratio. We compared FLT tumor uptake with FDG tumor uptake, investigated the correlation between them, and assessed the concordance of FLT and FDG lymph node uptakes. FLT and FDG uptakes of bone marrow and liver were measured to compare their biodistributions. Results: All tumor lesions were visually detected in both FLT-PET and FDG-PET. There was no significant correlation between maximal tumor size by MRI and the SUVmax of FLT-PET or FDG-PET (p>0.05). The SUVmax and $SUV_{75}$ (average SUV within the volume of interest using a 75% isocontour) of FLT-PET were significantly lower than those of FDG-PET in the primary tumor (SUVmax: $6.3{\pm}5.2$ vs $8.3{\pm}4.9$, p=0.02; $SUV_{75}$: $5.3{\pm}4.3$ vs $6.9{\pm}4.2$, p=0.02). There was a significant moderate correlation between FLT and FDG uptake in the primary tumor (SUVmax: rho=0.450, p=0.04; $SUV_{75}$: rho=0.472, p=0.03). However, the TB ratio of FLT-PET was higher than that of FDG-PET ($11.7{\pm}7.7$ vs $6.3{\pm}3.8$, p=0.001). The concordance between FLT and FDG uptake of lymph nodes was reasonably good (33/34). The FLT SUVs of liver and bone marrow were $4.2{\pm}1.2$ and $8.3{\pm}4.9$; the FDG SUVs of liver and bone marrow were $1.8{\pm}0.4$ and $1.6{\pm}0.4$. Conclusion: The uptakes of FLT were lower than those of FDG, but all patients in this study showed good FLT uptake in tumor and lymph nodes. Because FLT-PET showed a high TB ratio and good concordance with the lymph node uptakes of FDG-PET, it could be a useful diagnostic tool in locally advanced breast cancer. However, the physiological uptake and individual variation of FLT in bone marrow and liver will limit the diagnosis of bone and liver metastases.
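The standardized uptake value underlying these comparisons is conventionally defined as tissue activity concentration normalized by injected dose per body weight (a generic textbook definition, not this paper's scanner-specific pipeline; the numbers below are invented):

```python
def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """SUV = tissue activity concentration / (injected dose / body weight).
    Units: kBq/mL tissue activity, MBq injected dose, kg body weight; with
    the usual 1 g/mL tissue-density assumption the ratio is dimensionless."""
    dose_per_gram = injected_dose_mbq * 1000.0 / (body_weight_kg * 1000.0)  # kBq/g
    return activity_kbq_per_ml / dose_per_gram

# Illustrative: 12 kBq/mL lesion activity, 370 MBq injected, 60 kg patient
print(suv(12.0, 370.0, 60.0))
```

SUVmax takes the hottest voxel in the lesion, while $SUV_{75}$ as defined in the abstract averages the SUV over the voxels inside a 75% isocontour of that maximum.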

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from the unstructured text data that constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making specific and proper information difficult to obtain. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, the data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine-similarity calculation. Finally, the sales data on the extracted products is summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training.
We performed parameter optimization for training and applied a vector dimension of 300 and a window size of 15 as the optimized parameters for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product-name dataset to cluster product groups more efficiently. Product names similar to the KSIC indexes were extracted based on cosine similarity, and the market size of the extracted products as one product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market size of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional sampling-based methods or methods requiring multiple assumptions. In addition, the level of the market category can be adjusted easily and efficiently according to the purpose of the information by changing the cosine-similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module can be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. Also, the product-group clustering step can be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed in this study.
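The grouping-and-summing step can be sketched without a trained model: given word vectors (here tiny hand-made stand-ins for Word2Vec embeddings, with invented product names and sales figures), products above a cosine-similarity threshold to a seed name form one group, and their sales are summed as that group's market size:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy stand-ins for trained Word2Vec embeddings and per-product sales
embeddings = {
    "bearing":        [0.90, 0.10, 0.00],
    "ball bearing":   [0.85, 0.15, 0.05],
    "roller bearing": [0.80, 0.20, 0.10],
    "led lamp":       [0.00, 0.10, 0.95],
}
sales = {"bearing": 120, "ball bearing": 45, "roller bearing": 30, "led lamp": 500}

def market_size(seed, threshold=0.9):
    """Sum sales over all products whose embedding is cosine-similar to the
    seed name; lowering the threshold widens the product group."""
    seed_vec = embeddings[seed]
    group = [name for name, vec in embeddings.items()
             if cosine(seed_vec, vec) >= threshold]
    return group, sum(sales[name] for name in group)

group, total = market_size("bearing")
print(group, total)
```

Raising or lowering `threshold` is exactly the lever the abstract describes for adjusting the granularity of the market category.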