• Title/Summary/Keyword: Automatic Test System


A portable electronic nose (E-Nose) system using PDA device (개인 휴대 단말기 (PDA)를 기반으로 한 휴대용 E-Nose의 개발)

  • Yang, Yoon-Seok;Kim, Yong-Shin;Ha, Seung-Chul;Kim, Yong-Jun;Cho, Seong-Mok;Pyo, Hyeon-Bong;Choi, Chang-Auck
    • Journal of Sensor Science and Technology, v.14 no.2, pp.69-77, 2005
  • The electronic nose (e-nose) has been used in the food industry and in quality control of plastic packaging. Recently, it has found applications in medical diagnosis, specifically in detecting diabetes, pulmonary or gastrointestinal problems, or infections by examining odors in the breath or tissues. Moreover, a portable e-nose enables on-site measurement and analysis of vapors without extra gas-sampling units, which is expected to widen the application of the e-nose to fields such as point-of-care testing and e-health. In this study, a PDA-based portable e-nose was developed using a micro-machined gas sensor array and miniaturized electronic interfaces. The PDA's ample computing power and variety of interfaces are expected to allow rapid, application-specific development of diagnostic devices and easy connection to other facilities through the information technology (IT) infrastructure. To verify the performance of the developed portable e-nose system, six different vapors were measured with a sensor array of seven different carbon-black polymer composites. The results showed reproducibility of the measured data and distinguishable patterns between the vapor species. In addition, applying two typical pattern recognition algorithms verified the possibility of automatic vapor recognition from the portable measurements. These results validate the PDA-based portable e-nose developed in this study.
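
As an illustration of the pattern recognition step described above, the sketch below (not the authors' code) classifies synthetic 7-sensor array responses into vapor classes with a simple nearest-centroid rule; the paper does not name its two algorithms, so this stands in for them under stated assumptions.

```python
# A minimal sketch of vapor classification from a carbon-black polymer
# composite sensor array, as in the abstract: 7 sensors, 6 vapor classes.
# All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_vapors, n_reps = 7, 6, 10

# Synthetic training data: each vapor has a characteristic response pattern
# (fractional resistance change per sensor) plus measurement noise.
patterns = rng.uniform(0.1, 1.0, size=(n_vapors, n_sensors))
train = patterns[:, None, :] + rng.normal(0, 0.03, (n_vapors, n_reps, n_sensors))

# Normalize each measurement so classification depends on the response
# *pattern* across sensors, not on vapor concentration.
def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

centroids = normalize(train).mean(axis=1)          # one centroid per vapor

def classify(measurement):
    """Assign a new sensor-array reading to the nearest vapor centroid."""
    d = np.linalg.norm(centroids - normalize(measurement), axis=1)
    return int(np.argmin(d))

test = patterns[3] + rng.normal(0, 0.03, n_sensors)
print("predicted vapor class:", classify(test))    # expect 3
```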

Accuracy Analysis of GNSS-based Public Surveying and Proposal for Work Processes (GNSS관측 공공측량 정확도 분석 및 업무프로세스 제안)

  • Bae, Tae-Suk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.36 no.6, pp.457-467, 2018
  • Currently, the regulations and rules for public surveying and the UCPs (Unified Control Points) are adapted from those of triangulated traverse surveying. In addition, such regulations do not take account of the unique characteristics of GNSS (Global Navigation Satellite System) surveying, so there are difficulties in field work and in the data processing afterwards. A detailed procedure for GNSS processing has not yet been described either, and the verification of accuracy does not follow generic standards. In order to propose an appropriate procedure for field surveys, we processed short sessions (30 minutes) based on scenarios similar to actual situations. The reference network in Seoul was used to process the same data span for 3 days, and the temporal variation during the day was evaluated as well. We analyzed the accuracy of the estimated coordinates depending on the parameterization of the tropospheric delay, comparing against the 24-hour static processing results. Estimating the tropospheric delay is advantageous for the accuracy and stability of the coordinates, resulting in an RMSE (Root Mean Squared Error) of about 5 mm and 10 mm for the horizontal and vertical components, respectively. Based on the test results, we propose a procedure that estimates daily solutions and then combines them into a final solution by applying minimum constraints (a no-net-translation condition). It is necessary to develop a web-based processing system using high-end software, and the IDs of the public control points and the UCPs should be standardized for automatic GNSS processing.
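
The horizontal/vertical RMSE figures quoted above follow from a straightforward comparison of session solutions against a reference solution. The sketch below is a minimal illustration with made-up coordinates in a local east/north/up frame, not the paper's processing software.

```python
# Comparing repeated 30-minute session coordinates against a 24-hour
# static reference solution, reporting horizontal and vertical RMSE
# separately. The offsets below are invented for illustration.
import numpy as np

ref_enu = np.array([0.0, 0.0, 0.0])          # 24-hr solution (local ENU, m)

# Hypothetical 30-minute session solutions over 3 days (ENU offsets, m)
sessions = np.array([
    [ 0.004, -0.003,  0.011],
    [-0.005,  0.002, -0.009],
    [ 0.003,  0.004,  0.012],
    [-0.002, -0.005, -0.008],
])

diff = sessions - ref_enu
rmse_h = np.sqrt(np.mean(diff[:, 0]**2 + diff[:, 1]**2))   # horizontal
rmse_v = np.sqrt(np.mean(diff[:, 2]**2))                   # vertical

print(f"horizontal RMSE: {rmse_h*1000:.1f} mm")   # paper reports ~5 mm
print(f"vertical RMSE:   {rmse_v*1000:.1f} mm")   # paper reports ~10 mm
```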

A Feasibility Study on the Development of Multifunctional Radar Software using a Model-Based Development Platform (모델기반 통합 개발 플랫폼을 이용한 다기능 레이다 소프트웨어 개발의 타당성 연구)

  • Seung Ryeon Kim;Duk Geun Yoon;Sun Jin Oh;Eui Hyuk Lee;Sa Won Min;Hyun Su Oh;Eun Hee Kim
    • Journal of the Korea Society for Simulation, v.32 no.3, pp.23-31, 2023
  • Software development involves a series of stages, including requirements analysis, design, implementation, unit testing, and integration testing, similar to those used in the systems engineering process. This study used MathWorks' model-based design platform to develop multi-function radar software and evaluated its feasibility and efficiency. Because conventional radar software is developed as separate unit algorithms rather than in an integrated form, additional effort is required to manage the integrated software, such as requirements analysis and integration testing. The model-based platform applied in this paper provides an integrated development environment for requirements analysis and allocation, algorithm development through simulation, automatic code generation for deployment, integrated requirements testing, and results management. With the platform, we developed multi-level models of the multi-function radar software, verified them using test harnesses, managed requirements, and transformed the models into hardware-deployable code using the automatic code generation tool. We expect this model-based integrated development to reduce errors from miscommunication and other human factors and to save development schedule and cost.
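
As a language-neutral illustration of the requirements-linked testing such a platform provides, the sketch below tags unit tests with hypothetical requirement IDs and reports pass/fail per requirement; it is not MathWorks' API, and the toy detector is invented for the example.

```python
# Sketch of requirements-linked testing: each unit test declares which
# requirement IDs it verifies, and a tiny runner reports status per
# requirement. Requirement IDs and the detector are hypothetical.
def threshold_detect(power, threshold):
    """Toy stand-in for a radar unit algorithm under test."""
    return [p > threshold for p in power]

TESTS = []

def verifies(*req_ids):
    """Decorator tagging a test with the requirements it covers."""
    def wrap(fn):
        TESTS.append((fn, req_ids))
        return fn
    return wrap

@verifies("REQ-DET-001")
def test_detects_strong_target():
    assert threshold_detect([0.1, 9.0, 0.2], threshold=1.0)[1] is True

@verifies("REQ-DET-002")
def test_ignores_noise():
    assert not any(threshold_detect([0.1, 0.2, 0.3], threshold=1.0))

if __name__ == "__main__":
    results = {}
    for fn, reqs in TESTS:
        try:
            fn()
            ok = True
        except AssertionError:
            ok = False
        for r in reqs:
            results.setdefault(r, []).append((fn.__name__, ok))
    for req, runs in results.items():
        status = "PASS" if all(ok for _, ok in runs) else "FAIL"
        print(req, status, runs)
```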

Development of the Automatic Fishing System for the Anchovy Scoop nets (I) - The hydraulic winder device for the boom control - (멸치초망 어업의 조업자동화 시스템 개발 (I) -챗대 조작용 유압 권양기 개발-)

  • 박성욱;배봉성;서두옥
    • Journal of the Korean Society of Fisheries and Ocean Technology, v.36 no.3, pp.166-174, 2000
  • Anchovy (Engraulis japonica) scoop nets are used in the coastal waters of southern Korea and Cheju. In Cheju in particular, the scoop-net gear consists of an upper boom, a lower boom, a pressing stick, and a bag net, operated from fishing boats of the 6 to 10 ton class with 8 persons on board. The booms are controlled by a side drum, and the net and pressing stick are hauled by human power alone during operation; the fishery therefore demands heavy labor and carries considerable risk. To reduce this labor, three kinds of hydraulic winding devices that control the two booms were designed and manufactured, and sea trials were carried out on commercial 6-ton-class fishing boats to test their performance. The proper capacities of the hydraulic pump and motor were determined by a 1/5-scale model test of the boom. The results obtained are as follows. 1. The tension on the boom being hauled was greatest, 187.5 kgf, when the boom's end was at a depth of 4 m under the water. 2. The hydraulic motor of the best-suited winder showed the least leakage per unit time of the tested kinds. 3. For the best of the winder devices, with the pressure difference fixed at 130 kgf/cm² for safe operation, the winding velocity of the boom line was 2 m/s, 0.48 m/s faster than the traditional fishing method, and the winder could handle an anchovy catch of 1.6 tons. 4. As a result, the crew was reduced from 8 to 6, and the problems of heavy labor and risk during fishing operations were resolved by using this winder.


Experimental Study on Steel Beam with Embossment Web (엠보싱 웨브를 가지는 보 부재의 실험적 연구)

  • Park, Han-Min;Lee, Hee-Du;Shin, Kyung-Jae;Lee, Swoo-Heon;Chae, Il Soo
    • Journal of Korean Society of Steel Construction, v.29 no.6, pp.479-486, 2017
  • Steel beams with corrugated webs have been widely used in steel structures. However, welding the joint between the corrugated web and the straight flange is challenging, which increases production cost. To address this issue, a steel beam with an intaglio and embossed web (called an IEB) was devised. The embossed web is produced by cold pressing and welded to the flanges by an automatic welding machine. Loading tests were conducted to investigate the load-carrying capacity of the IEB, and the results were compared with those of an H-shaped beam having the same flange and web dimensions. The IEB series showed about 40% higher load capacity than the H-shaped series. When the IEB specimens were compared against the Eurocode provisions for steel beams with corrugated webs, none of the specimens tested in this study met the design values. Therefore, the existing formulas are difficult to apply to the IEB, and a new design formula should be developed for field application.

An Estimation of Probable Precipitation and an Analysis of Its Return Period and Distributions in Busan (부산지역 확률강수량 결정에 따른 재현기간 및 분포도 분석)

  • Lim, Yun-Kyu;Moon, Yun-Seob;Kim, Jin-Seog;Song, Sang-Keun;Hwang, Yong-Sik
    • Journal of the Korean Earth Science Society, v.33 no.1, pp.39-48, 2012
  • In this study, a statistical estimation of probable precipitation and an analysis of its return period in Busan were performed using long-term precipitation data (1973-2007) collected from the Busan Regional Meteorological Administration. The analyses were based on the method of probability-weighted moments for parameter estimation, the chi-square ($\chi^2$) goodness-of-fit test and the probability plot correlation coefficient (PPCC), and the generalized logistic (GLO) distribution as the optimal probability distribution. Moreover, the spatial distribution of the determined probable precipitation was investigated using precipitation data observed at 15 Automatic Weather Stations (AWS) in the target area. The return periods for probable precipitation of 245.2 and 280.6 mm/6 hr under the GLO distribution in Busan were estimated to be about 100 and 200 years, respectively. In addition, the highest probable precipitation for the 1-, 3-, 6-, and 12-hour durations was distributed mostly around the Dongrae-gu site, all coastal sites in Busan, the Busanjin and Yangsan sites, and the southeastern coastal and Ungsang sites, respectively.
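
The link between a fitted GLO distribution and a return period can be made concrete with Hosking's generalized logistic quantile function; a return period T corresponds to non-exceedance probability F = 1 - 1/T. The parameter values in the sketch below are placeholders, not the paper's fitted values.

```python
# A minimal sketch of how a return level follows from a fitted GLO
# distribution, using Hosking's generalized logistic quantile function
# x(F) = xi + (alpha/kappa) * (1 - ((1-F)/F)**kappa).
def glo_quantile(F, xi, alpha, kappa):
    """Quantile (precipitation depth) at non-exceedance probability F."""
    return xi + (alpha / kappa) * (1.0 - ((1.0 - F) / F) ** kappa)

def return_level(T, xi, alpha, kappa):
    """Depth with return period T years: non-exceedance F = 1 - 1/T."""
    return glo_quantile(1.0 - 1.0 / T, xi, alpha, kappa)

# Hypothetical 6-hour GLO parameters (location, scale, shape):
xi, alpha, kappa = 150.0, 25.0, -0.05

for T in (100, 200):
    print(f"T = {T:>3} yr: {return_level(T, xi, alpha, kappa):6.1f} mm/6 hr")
```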

Experience of Reticulocytes Measurement at 720 nm Using Spectrophotometer (분광광도계를 이용한 720 nm에서 망상적혈구 측정 경험)

  • Sung, Hyun-Ho;Seok, Dong-In;Jung, You-Hyun;Kim, Dae-Jung;Lee, Seok-Jae
    • Korean Journal of Clinical Laboratory Science, v.49 no.4, pp.382-389, 2017
  • Reticulocyte counting methods currently used in clinical laboratories fall into two types: manual and automated. Manual reticulocyte counting under a microscope lacks accuracy, particularly because of its low reproducibility. Automated counting by flow cytometry on hematology analyzers, meanwhile, is difficult to deploy in developing countries and small-scale laboratories because of its relatively high cost. This study therefore sought a new method to complement these drawbacks. The aim was to count stained reticulocytes with a spectrophotometer and to compare the results statistically with those from a flow cytometer. The same 8 EDTA samples were measured 36 times each to assess agreement between the spectrophotometer and the flow cytometer. Specimens diluted 600-fold were measured from 700 to 780 nm in 10 nm steps. Absorbance at wavelengths between 710 and 730 nm showed a positive correlation between the standard and test data (r=0.967, p<0.01). From regression analysis of the test and standard data, the optimal dilution factor was 600-fold. These results suggest economical technical applications, such as a reticulocyte absorbance assay on an automated spectrophotometer and a monitoring system for reticulocyte-related anemia. More extensive studies, including application to automated chemical analyzers, will be needed.
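
A minimal sketch of the statistics involved, with synthetic numbers rather than the study's data: correlating absorbance near 720 nm with flow cytometry reticulocyte percentages and fitting a regression line so that a new absorbance reading can be converted to an estimated count.

```python
# Hypothetical paired measurements: flow cytometry reticulocyte % (x)
# and absorbance at 720 nm of 600-fold diluted samples (y).
import numpy as np

retic_pct  = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
absorbance = np.array([0.11, 0.16, 0.22, 0.26, 0.33, 0.37, 0.44, 0.48])

r = np.corrcoef(retic_pct, absorbance)[0, 1]
slope, intercept = np.polyfit(retic_pct, absorbance, 1)

print(f"Pearson r = {r:.3f}")                 # the paper reports r = 0.967
print(f"absorbance ~ {slope:.3f} * retic% + {intercept:.3f}")

# Inverting the fit estimates reticulocyte % from a new absorbance reading:
new_abs = 0.30
print(f"estimated retic%: {(new_abs - intercept) / slope:.2f}")
```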

Salient Region Detection Algorithm for Music Video Browsing (뮤직비디오 브라우징을 위한 중요 구간 검출 알고리즘)

  • Kim, Hyoung-Gook;Shin, Dong
    • The Journal of the Acoustical Society of Korea, v.28 no.2, pp.112-118, 2009
  • This paper proposes a rapid salient-region detection algorithm for a music video browsing system, applicable to mobile devices and digital video recorders (DVRs). The input music video is decomposed into its music and video tracks. For the music track, the music highlight, including the chorus, is detected by structure analysis using energy-based peak position detection. Using emotional models generated by an SVM-AdaBoost learning algorithm, the music signal is automatically classified into one of the predefined emotional classes. For the video track, face scenes including the singer or actor/actress are detected using a boosted cascade of simple features. Finally, the salient region is generated by aligning the boundaries of the music highlight and the visual face scenes. Users first select their favorite music videos on the mobile device or DVR with the help of each video's emotion information, and can then quickly browse the 30-second salient region detected by the proposed algorithm. A mean opinion score (MOS) test on a database of 200 music videos was conducted to compare the detected salient region with a predefined manual part. The MOS results show that the salient region detected by the proposed method performed much better than the predefined manual part chosen without audiovisual processing.
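
The energy-based highlight detection can be illustrated compactly: compute frame energies over the audio, then pick the 30-second window with the highest total energy. The sketch below is a simplified stand-in for the authors' implementation, demonstrated on a synthetic signal.

```python
# Minimal energy-based highlight locator: frame energies are computed,
# then the 30-second window with the highest summed energy is returned.
import numpy as np

def highlight_window(signal, sr, frame_len=1024, hop=512, win_sec=30):
    """Return (start_sec, end_sec) of the highest-energy 30 s window."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    energy = np.array([
        np.sum(signal[i * hop : i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])
    frames_per_win = int(win_sec * sr / hop)
    # Sliding-window sum of frame energies via cumulative sums.
    csum = np.concatenate([[0.0], np.cumsum(energy)])
    win_energy = csum[frames_per_win:] - csum[:-frames_per_win]
    start_frame = int(np.argmax(win_energy))
    start_sec = start_frame * hop / sr
    return start_sec, start_sec + win_sec

# Synthetic demo: quiet signal with a louder "chorus" from 60-100 s.
sr = 8000
t = np.arange(180 * sr) / sr
x = 0.1 * np.sin(2 * np.pi * 220 * t)
x[60 * sr : 100 * sr] *= 5.0
print(highlight_window(x, sr))   # expect a window inside ~60-100 s
```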

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming more important as information generation continues to grow. Amid this flood of information, efforts are being made to better reflect the user's intention in search results rather than treating the information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the information flow is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and extracting good-quality triples is difficult. Second, producing human-labeled text data becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. The study thus has three significances. First, it offers a practical and simple automatic knowledge extraction method that can be applied directly. Second, it presents the possibility of performance evaluation through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, the same number of score functions as stocks are trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. In the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports; this hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance per stock, only 3 stocks, LG ELECTRONICS, KiaMtr, and Mando, show far lower performance than average, possibly due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to identify the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or field-specific word vectors. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain: notably, the especially poor performance on a few stocks indicates the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
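
A minimal sketch of the scoring idea, under simplifying assumptions (random, untrained weights and illustrative dimensions, not the paper's): one NTN-style score function per stock takes a one-hot entity vector, and the stock whose function scores highest is predicted.

```python
# Simplified neural-tensor-network-style scorers, one per stock:
# s(e) = u^T tanh(sum_i e^T W_i e + V e + b). Weights are random here.
import numpy as np

rng = np.random.default_rng(0)
d, k, n_stocks = 100, 4, 30          # entity dim, tensor slices, stocks

class StockScorer:
    """One score function per stock, applied to an entity vector e."""
    def __init__(self):
        self.W = rng.normal(0, 0.1, (k, d, d))   # bilinear tensor slices
        self.V = rng.normal(0, 0.1, (k, d))      # linear term
        self.b = rng.normal(0, 0.1, k)
        self.u = rng.normal(0, 0.1, k)

    def score(self, e):
        bilinear = np.einsum('i,kij,j->k', e, self.W, e)
        return float(self.u @ np.tanh(bilinear + self.V @ e + self.b))

scorers = [StockScorer() for _ in range(n_stocks)]

entity = np.zeros(d)                 # one-hot vector for a new entity
entity[17] = 1.0
scores = [s.score(entity) for s in scorers]
print("predicted stock index:", int(np.argmax(scores)))
```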

Homonym Disambiguation based on Mutual Information and Sense-Tagged Compound Noun Dictionary (상호정보량과 복합명사 의미사전에 기반한 동음이의어 중의성 해소)

  • Heo, Jeong;Seo, Hee-Cheol;Jang, Myung-Gil
    • Journal of KIISE: Software and Applications, v.33 no.12, pp.1073-1089, 2006
  • The goal of Natural Language Processing (NLP) is to make a computer understand natural language and to deliver the meanings of natural language to humans. Word Sense Disambiguation (WSD) is a very important technology for achieving this goal. In this paper, we describe a technology for automatic homonym disambiguation using both Mutual Information (MI) and a Sense-Tagged Compound Noun Dictionary. Previous work that used word definitions from a dictionary suffered from data sparseness because it relied on exact word matching. Our work overcomes this problem by using MI, an association measure between words. To reflect language features, the rate of word pairs with MI values, the sense frequency, and the size of the word definitions are used as weights in our system. We also constructed a Sense-Tagged Compound Noun Dictionary for high-frequency compound nouns and used it to resolve homonym sense ambiguity. Experimental data for testing and evaluating our system were constructed from QA (Question Answering) test data consisting of about 200 query sentences and answer paragraphs, and we performed 4 types of experiments. Using only MI, the experiment showed a precision of 65.06%; with the weighted values, the precision rose to 85.35%; and with the Sense-Tagged Compound Noun Dictionary, it reached 88.82%.
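
The MI measure underlying this approach is pointwise mutual information computed from corpus counts; the toy sketch below scores each sense of a homonym by its association with a context word instead of exact word matching. All counts and sense words are invented for illustration.

```python
# Pointwise mutual information from toy corpus counts, used to pick the
# sense of a homonym whose definition word associates most strongly with
# an observed context word.
import math

N = 1_000_000                     # corpus size (token pairs)
count = {"river": 1500, "loan": 1200, "water": 2500, "money": 2200}
cooc = {("river", "water"): 400, ("river", "money"): 5,
        ("loan", "water"): 3, ("loan", "money"): 500}

def pmi(w1, w2):
    """PMI(w1, w2) = log2( P(w1, w2) / (P(w1) * P(w2)) )."""
    p_joint = cooc.get((w1, w2), 0) / N
    if p_joint == 0:
        return float("-inf")       # unseen pair: no association evidence
    return math.log2(p_joint / ((count[w1] / N) * (count[w2] / N)))

# Each sense of "bank" is represented by a definition word; the sense
# with the highest MI to the context word wins, avoiding the data
# sparseness of exact word matching.
senses = {"bank/riverside": "river", "bank/financial": "loan"}
context = "money"
best = max(senses, key=lambda s: pmi(senses[s], context))
print(best)                        # expect bank/financial
```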