• Title/Summary/Keyword: Algorithm Improvement (알고리즘 향상)

Search Results: 6,850

An Implementation of Dynamic Gesture Recognizer Based on WPS and Data Glove (WPS와 장갑 장치 기반의 동적 제스처 인식기의 구현)

  • Kim, Jung-Hyun;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.561-568
    • /
    • 2006
  • WPS (Wearable Personal Station) for the next-generation PC can be defined as a core terminal of ubiquitous computing that includes information processing and network functions and overcomes spatial limitations in the acquisition of new information. As a way to acquire significant dynamic gesture data of a user from haptic devices, a traditional desktop-PC-based gesture recognizer using a wired communication module has several restrictions, such as spatial constraints, complexity of the transmission media (cable elements), limitation of motion, and inconvenience of use. Accordingly, in order to overcome these problems, this paper implements a hand gesture recognition system using a fuzzy algorithm and a neural network for the Post PC (an embedded ubiquitous environment using a Bluetooth module and the WPS). We also propose the most efficient and reasonable hand gesture recognition interface for the Post PC through evaluation and analysis of the performance of each gesture recognition system. The proposed gesture recognition system consists of three modules: 1) a gesture input module that processes the motion of the dynamic hand into input data, 2) a Relational Database Management System (hereafter, RDBMS) module that segments significant gestures from the input data, and 3) two different recognition modules, a fuzzy max-min module and a neural network module, that recognize significant gestures within continuous, dynamic gestures. Experimental results show an average recognition rate of 98.8% for the fuzzy max-min module and 96.7% for the neural network recognition module on significant dynamic gestures.
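A minimal sketch of fuzzy max-min inference for glove-based gesture classification, as one way the fuzzy recognition module could be realized: a rule's firing strength is the minimum of its antecedent memberships, and a gesture's score is the maximum over its rules. The membership functions, sensor values, and rules below are illustrative assumptions, not the authors' actual parameters.

```python
# Illustrative fuzzy max-min gesture scoring (hypothetical parameters).
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def bent(x):      # membership of "finger is bent"
    return triangular(x, 0.3, 1.0, 1.7)

def straight(x):  # membership of "finger is straight"
    return triangular(x, -0.7, 0.0, 0.7)

# Hypothetical normalized flex-sensor readings: thumb, index, middle, ring, pinky.
flex = [0.9, 0.85, 0.8, 0.9, 0.95]

# One rule per gesture: a list of (sensor index, linguistic term) antecedents.
rules = {
    "fist":  [[(i, bent) for i in range(5)]],
    "point": [[(0, bent), (1, straight), (2, bent), (3, bent), (4, bent)]],
}

# Max over rules of (min over antecedents) -> fuzzy max-min score per gesture.
scores = {
    g: max(min(term(flex[i]) for i, term in rule) for rule in gesture_rules)
    for g, gesture_rules in rules.items()
}
print(max(scores, key=scores.get), scores)
```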

A Study on Evaluating the Possibility of Monitoring Ships of CAS500-1 Images Based on YOLO Algorithm: A Case Study of a Busan New Port and an Oakland Port in California (YOLO 알고리즘 기반 국토위성영상의 선박 모니터링 가능성 평가 연구: 부산 신항과 캘리포니아 오클랜드항을 대상으로)

  • Park, Sangchul;Park, Yeongbin;Jang, Soyeong;Kim, Tae-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1463-1478
    • /
    • 2022
  • Maritime transport accounts for 99.7% of the exports and imports of the Republic of Korea; therefore, developing a vessel monitoring system for efficient operation is of significant interest. Several studies have focused on tracking and monitoring vessel movements based on automatic identification system (AIS) data; however, ships without AIS can be monitored and tracked only to a limited extent. High-resolution optical satellite images can provide the missing layer of information in AIS-based monitoring systems because they can identify non-AIS vessels and small ships over a wide area. Therefore, it is necessary to investigate vessel monitoring and small-vessel classification systems using high-resolution optical satellite images. This study examined the possibility of developing ship monitoring systems using Compact Advanced Satellite 500-1 (CAS500-1) satellite images by first training a deep learning model on satellite image data and then performing detection on other images. To determine the effectiveness of the proposed method, the training data were acquired from ships in the Yellow Sea and its major ports, and the detection model was built using the You Only Look Once (YOLO) algorithm. The ship detection performance was evaluated for one domestic and one international port. The detection results for ships in the anchorage and berth areas were compared with the ship classification information obtained from AIS, and accuracies of 85.5% and 70% were achieved with the domestic and international classification models, respectively. The results indicate that high-resolution satellite images can be used to monitor moored ships. The developed approach could be used in vessel tracking and monitoring systems at major ports around the world if the accuracy of the detection model is improved through continuous construction of training data.
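A minimal sketch of the train-then-detect workflow described above, using the ultralytics YOLO package as a stand-in; the paper does not state which YOLO implementation or version was used, and the dataset file and scene names below are hypothetical.

```python
# Illustrative YOLO training and detection for ship monitoring (hypothetical paths).
from ultralytics import YOLO

# Train a detector on ship image chips cut from CAS500-1 scenes.
# "ships_cas500.yaml" (hypothetical) lists the train/val folders and the "ship" class.
model = YOLO("yolov8n.pt")                       # pretrained weights as a starting point
model.train(data="ships_cas500.yaml", epochs=100, imgsz=640)

# Run detection on an unseen port scene and keep confident boxes only.
results = model.predict("busan_new_port.tif", conf=0.25)
for r in results:
    for box in r.boxes:
        print(box.xyxy.tolist(), float(box.conf))  # bounding box and confidence
```

The detected boxes could then be matched against AIS-reported positions in the anchorage and berth areas to compute the accuracy figures discussed above.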

Evaluating Global Container Ports' Performance Considering the Port Calls' Attractiveness (기항 매력도를 고려한 세계 컨테이너 항만의 성과 평가)

  • Park, Byungin
    • Journal of Korea Port Economic Association
    • /
    • v.38 no.3
    • /
    • pp.105-131
    • /
    • 2022
  • Even after its improvement in 2019, UNCTAD's Liner Shipping Connectivity Index (LSCI), which evaluates the performance of the global container port market, has limited use. In particular, since the LSCI evaluates performance based only on the distance of the relationships, a performance index that also incorporates the attractiveness of a port for calls would be more informative. This study used a modified Huff model, the hub-authority algorithm and eigenvector centrality from social network analysis, and correlation analysis on the 2007, 2017, and 2019 data of Ocean-Commerce, Japan. The findings are as follows. First, the calling attractiveness of a port and its overall performance did not always match; according to the attractiveness analysis, Busan remained within the top 10, while the attractiveness of the other Korean ports improved only slowly from a low level during the study period. Second, global container ports generally specialize over the long term as inbound or outbound ports on particular routes and grow while maintaining that specialization throughout the entire period, whereas the Korean ports kept changing roles from one analysis period to the next. Lastly, the volume of cargo by period and the extended port connectivity index (EPCI) presented in this study showed correlations from 0.77 to 0.85; even though the Atlantic data were excluded from the analysis and the ships' operable capacity was used instead of port throughput volume, the correlation remains high. The study results should help evaluate and analyze global ports. According to the study, Korean ports need a long-term strategy to improve performance while maintaining their specialization. In order to maintain and develop a port's desirable role, it is necessary to build cooperation and partnerships with complementary ports and to attract shipping companies' services calling at those ports. Although this study carried out a complex analysis using extensive data and methodologies over a long period, future work should cover ports around the world, conduct a long-term panel analysis, and estimate the parameters of the attractiveness model more rigorously.
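A minimal sketch of the hub-authority and eigenvector-centrality computations mentioned above, using networkx; the ports and edge weights are illustrative stand-ins for the network built from Ocean-Commerce liner-service data.

```python
# Illustrative port-call network centrality (hypothetical ports and weights).
import networkx as nx

# Directed graph: an edge u -> v weighted by deployed capacity of services
# calling at u and then at v.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("Busan", "Shanghai", 120), ("Shanghai", "Busan", 110),
    ("Busan", "Oakland", 80),   ("Oakland", "Busan", 70),
    ("Shanghai", "Singapore", 150), ("Singapore", "Shanghai", 140),
    ("Singapore", "Rotterdam", 90), ("Rotterdam", "Singapore", 85),
])

hubs, authorities = nx.hits(G, max_iter=1000)                  # hub-authority algorithm
eig = nx.eigenvector_centrality_numpy(G, weight="weight")      # eigenvector centrality

for port in G.nodes:
    print(f"{port:10s} hub={hubs[port]:.3f} auth={authorities[port]:.3f} eig={eig[port]:.3f}")
```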

A Checklist to Improve the Fairness in AI Financial Service: Focused on the AI-based Credit Scoring Service (인공지능 기반 금융서비스의 공정성 확보를 위한 체크리스트 제안: 인공지능 기반 개인신용평가를 중심으로)

  • Kim, HaYeong;Heo, JeongYun;Kwon, Hochang
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.3
    • /
    • pp.259-278
    • /
    • 2022
  • With the spread of Artificial Intelligence (AI), various AI-based services such as service recommendation, automated customer response, fraud detection systems (FDS), and credit scoring services are expanding in the financial sector. At the same time, problems related to reliability and unexpected social controversy are also occurring due to the nature of data-based machine learning. Against this background, this study aims to contribute to improving trust in AI-based financial services by proposing a checklist for securing fairness in AI-based credit scoring services, which directly affect consumers' financial lives. Among the key elements of trustworthy AI, such as transparency, safety, accountability, and fairness, fairness was selected as the subject of the study so that everyone can enjoy the benefits of automated algorithms from the perspective of inclusive finance, without social discrimination. Through a literature review, we divided the entire fairness-related operation process into three areas: data, algorithm, and user. For each area we constructed four detailed considerations for evaluation, resulting in a 12-item checklist. The relative importance and priority of the categories were evaluated through the analytic hierarchy process (AHP) with three groups, financial-field workers, artificial-intelligence-field workers, and general users, who together represent the main financial stakeholders. The three groups were classified and analyzed according to the importance each assigned, and from a practical perspective specific checks were identified, such as feasibility verification for the use of learning data and non-financial information and the monitoring of newly inflowing data. Moreover, general financial consumers were found to place high importance on the accuracy of result analysis and on bias checks. We expect this result to contribute to the design and operation of fair AI-based financial services.
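A minimal sketch of the AHP weighting step used to prioritize the checklist areas; the 3x3 pairwise comparison matrix below (data vs. algorithm vs. user) is hypothetical, not the judgments elicited from the study's stakeholder groups.

```python
# Illustrative AHP priority weights and consistency check (hypothetical judgments).
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],   # data area judged 3x as important as algorithm, 5x as user
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights = normalized principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (Saaty's random index is 0.58 for n = 3).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print("weights:", w.round(3), "CR:", round(cr, 3))   # CR < 0.1 indicates acceptable consistency
```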

Interaction Between TCP and MAC-layer to Improve TCP Flow Performance over WLANs (유무선랜 환경에서 TCP Flow의 성능향상을 위한 MAC 계층과 TCP 계층의 연동기법)

  • Kim, Jae-Hoon;Chung, Kwang-Sue
    • Journal of KIISE:Information Networking
    • /
    • v.35 no.2
    • /
    • pp.99-111
    • /
    • 2008
  • In recent years, the need for WLAN (Wireless Local Area Network) technology, which provides Internet access anywhere, has increased dramatically, particularly in SOHO (Small Office Home Office) and hot-spot environments. However, unlike wired networks, wireless networks have some unique characteristics, including burst packet losses caused by the unreliable wireless channel. Burst packet losses, which occur when the distance between the wireless station and the AP (Access Point) increases or when obstacles temporarily move between the station and the AP, are very frequent in 802.11 networks. Consequently, due to burst packet losses, the performance of 802.11 networks is not always sufficient for current applications, particularly when TCP is used at the transport layer. The high packet loss rate over wireless links can trigger unnecessary execution of the TCP congestion control algorithm, resulting in performance degradation. In order to overcome the limitations of the WLAN environment, a MAC-layer LDA (Loss Differentiation Algorithm) has been proposed. The MAC-layer LDA prevents TCP timeouts by increasing the CRD (Consecutive Retry Duration) beyond the burst packet loss duration. However, in wireless channels with a high packet loss rate, the MAC-layer LDA does not work well for two reasons: (a) if the CRD remains shorter than the burst packet loss duration because the retry limit can only be increased to a limited extent, end-to-end performance is degraded; and (b) the mobile device's energy and the bandwidth of the wireless link are wasted unnecessarily because the increased CRD reduces the drain rate of the network buffer. In this paper, we propose a new retransmission module based on a cross-layer approach, called the BLD (Burst Loss Detection) module, to overcome the limitations of previous link-layer retransmission schemes. The BLD module is a retransmission mechanism for IEEE 802.11 networks that performs retransmission based on the interaction between the retransmission mechanisms of the MAC layer and TCP. Simulations using ns-2 (Network Simulator) show that the proposed scheme achieves higher TCP throughput and energy efficiency than previous mechanisms.
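A minimal sketch of the kind of MAC/TCP interaction such a cross-layer module relies on, under the assumption that the MAC layer signals a suspected burst loss so the TCP sender can hold its retransmission timer instead of invoking congestion control. The exact rules of the proposed BLD module are defined in the paper's ns-2 implementation and are not reproduced here; the thresholds below are hypothetical.

```python
# Illustrative cross-layer burst-loss hinting between MAC and TCP (hypothetical rules).

class MacLayer:
    def __init__(self, retry_limit=7, burst_threshold=4):
        self.retry_limit = retry_limit
        self.burst_threshold = burst_threshold
        self.consecutive_failures = 0

    def on_tx_result(self, acked: bool) -> str:
        """Return a hint for the transport layer after each (re)transmission."""
        if acked:
            self.consecutive_failures = 0
            return "ok"
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.burst_threshold:
            return "burst_loss"            # likely channel fading, not congestion
        return "retrying"

class TcpSender:
    def __init__(self):
        self.rto_frozen = False

    def on_mac_hint(self, hint: str) -> None:
        if hint == "burst_loss":
            self.rto_frozen = True          # hold the RTO; do not shrink cwnd
        elif hint == "ok":
            self.rto_frozen = False         # resume normal timer handling

mac, tcp = MacLayer(), TcpSender()
for acked in [False, False, False, False, True]:
    tcp.on_mac_hint(mac.on_tx_result(acked))
    print("RTO frozen:", tcp.rto_frozen)
```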

Development of Measuring Technique for Milk Composition by Using Visible-Near Infrared Spectroscopy (가시광선-근적외선 분광법을 이용한 유성분 측정 기술 개발)

  • Choi, Chang-Hyun;Yun, Hyun-Woong;Kim, Yong-Joo
    • Food Science and Preservation
    • /
    • v.19 no.1
    • /
    • pp.95-103
    • /
    • 2012
  • The objective of this study was to develop models for the prediction of the milk properties (fat, protein, SNF, lactose, MUN) of unhomogenized milk using visible and near-infrared (NIR) spectroscopy. A total of 180 milk samples were collected from dairy farms. To determine the optimal measurement temperature, the milk samples were kept at three temperature levels (5 °C, 20 °C, and 40 °C). A spectrophotometer was used to measure the reflectance spectra of the milk samples. Multilinear regression (MLR) models with a stepwise method were developed to select the optimal wavelengths. Preprocessing methods were used to minimize spectroscopic noise, and partial least squares (PLS) models were developed to predict the milk properties of the unhomogenized milk. The PLS results showed a good correlation between the predicted and measured milk properties of the samples at 40 °C over 400~2,500 nm. The optimal wavelength range for fat and protein was 1,600~1,800 nm, and normalization improved the prediction performance. SNF and lactose were optimized at 1,600~1,900 nm, and MUN at 600~800 nm. The best preprocessing methods for SNF, lactose, and MUN turned out to be smoothing, MSC, and the second derivative, respectively. The correlation coefficients between the predicted and measured fat, protein, SNF, lactose, and MUN were 0.98, 0.90, 0.82, 0.75, and 0.61, respectively. The results indicate that the models can be used to assess milk quality.
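A minimal sketch of the PLS calibration step described above, using scikit-learn; the spectra and fat reference values are randomly generated stand-ins for the measured reflectance spectra and laboratory analyses, and the normalization shown is only one of the preprocessing options compared in the study.

```python
# Illustrative PLS calibration for one milk property (synthetic stand-in data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
X = rng.normal(size=(180, 500))          # 180 samples x 500 wavelengths (stand-in spectra)
y = X[:, 100:110].mean(axis=1) + rng.normal(scale=0.1, size=180)  # "fat" reference values

# Simple normalization preprocessing: scale each spectrum to unit total absolute intensity.
Xn = X / np.abs(X).sum(axis=1, keepdims=True)

X_tr, X_te, y_tr, y_te = train_test_split(Xn, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10)
pls.fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()
print("correlation coefficient r =", round(pearsonr(y_te, y_hat)[0], 3))
```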

The Understanding and Application of Noise Reduction Software in Static Images (정적 영상에서 Noise Reduction Software의 이해와 적용)

  • Lee, Hyung-Jin;Song, Ho-Jun;Seung, Jong-Min;Choi, Jin-Wook;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.54-60
    • /
    • 2010
  • Purpose: Nuclear medicine manufacturers provide various software packages that shorten imaging time using their own image processing techniques, such as UltraSPECT, ASTONISH, Flash3D, Evolution, and nSPEED. Seoul National University Hospital has introduced packages from Siemens and Philips, but it remained difficult to understand the algorithmic differences between the two. The purpose of this study was therefore to characterize the difference between the two packages in planar images and to explore whether they can be applied to images produced with high-energy isotopes. Materials and Methods: First, a phantom study was performed to understand the difference between the packages in static studies. Images with various count levels were acquired and analyzed quantitatively after applying PIXON (Siemens) and ASTONISH (Philips), respectively. We then applied the packages to applicable static studies to identify their merits and demerits, and also applied them to images produced with high-energy isotopes. Finally, a blind test, excluding the phantom images, was conducted by nuclear medicine physicians. Results: In the FWHM test using a capillary source, there was almost no difference between the pre- and post-processing images with PIXON, whereas ASTONISH showed an improvement. However, both the standard deviation (SD) and the variance decreased with PIXON, while they increased markedly with ASTONISH. In the background variability comparison using the IEC phantom, PIXON decreased the variability overall, while ASTONISH increased it somewhat. The contrast ratio of each sphere increased for both methods. Regarding image scale, the window width increased four to five times after processing with PIXON, whereas ASTONISH showed almost no difference. From the phantom test analysis, ASTONISH appeared applicable to studies that need quantitative analysis or high contrast, and PIXON to studies with insufficient counts or long acquisition times. Conclusion: The quantitative values used in routine analysis generally improved after applying the two packages; however, it appears difficult to maintain consistency across all nuclear medicine studies, because the resulting images differ owing to the characteristics of the algorithms rather than to differences between gamma cameras. It is also difficult to expect high image quality from time-shortening methods such as the whole-body scan. Nevertheless, the packages can be applied to static studies in consideration of the algorithm characteristics, and a change in image quality can be expected when they are applied to high-energy isotope images.
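A minimal sketch of the FWHM measurement used in the capillary-source test: take a one-dimensional count profile across the source and measure its width at half of the peak value. The Gaussian profile and pixel size below are synthetic stand-ins for a profile drawn from the planar image.

```python
# Illustrative FWHM computation from a 1-D count profile (synthetic data).
import numpy as np

def fwhm(profile, pixel_size_mm=1.0):
    """Full width at half maximum of a 1-D count profile, in millimetres."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    def crossing(i_lo, i_hi):
        # Linear interpolation of the half-maximum crossing between two samples.
        y0, y1 = profile[i_lo], profile[i_hi]
        return i_lo + (half - y0) / (y1 - y0)

    x_left = crossing(left - 1, left)
    x_right = crossing(right, right + 1)
    return (x_right - x_left) * pixel_size_mm

x = np.arange(64)
profile = 1000 * np.exp(-0.5 * ((x - 32) / 3.0) ** 2)    # Gaussian with sigma = 3 px
print(round(fwhm(profile, pixel_size_mm=2.0), 1), "mm")   # ~ 2.355 * 3 px * 2 mm ≈ 14.1 mm
```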


OD matrix estimation using link use proportion sample data as additional information (표본링크이용비를 추가정보로 이용한 OD 행렬 추정)

  • 백승걸;김현명;신동호
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.4
    • /
    • pp.83-93
    • /
    • 2002
  • To improve estimation performance, previous research has used additional information, obtained at extra survey cost, beyond traffic counts and a target OD matrix. The purpose of this paper is to improve OD estimation performance by reducing the set of feasible solutions with cost-efficient additional information used alongside traffic counts and the target OD. For this purpose, we propose an OD estimation method that uses sample link use proportions as the additional information. That is, we obtain the relationship between OD trips and link flows from sample link use proportions, which are highly reliable information obtained through roadside surveys, rather than from a traffic assignment of the target OD. Accordingly, this paper proposes an OD estimation algorithm in which the link flow conservation rule holds under a path-based non-equilibrium traffic assignment concept. Numerical results on a test network show that OD estimation performance can be improved even when the precision of the additional data is low, because the sample link use proportions directly represent the relationship between OD trips and link flows. The method also shows robust estimation performance when traffic counts or OD trips change, since it is not strongly affected by errors in the target OD or in the traffic counts. In addition, we argue that the required level of data precision must be set in consideration of the precision of the other information, because a "precision mismatch between information sources" arises when additional information such as sample link use proportions is used, and that a method using traffic counts as its basic information must collect link flows at a sufficient level of coverage in order to increase the applicability of the additional information. Finally, we note that link-based additional information entails an optimal survey location problem; in particular, from the viewpoint of information precision, the optimal survey location problem for sample link use proportions may affect OD estimation performance more than the optimal counting location problem for link flows.
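A minimal numerical sketch of the relationship the method exploits: observed link flows are approximately the OD trips mapped through the link use proportions, so sampled proportions let the OD vector be estimated directly from counts. The tiny 3-link, 2-OD-pair example below is illustrative and is not the paper's test network; the target OD is shown only as the prior it could be compared against.

```python
# Illustrative OD estimation from link counts and sampled link use proportions.
import numpy as np
from scipy.optimize import lsq_linear

# P[a, k] = proportion of OD pair k's trips that use link a (from a roadside survey sample).
P = np.array([
    [0.7, 0.1],
    [0.3, 0.6],
    [0.0, 0.3],
])
v = np.array([550.0, 570.0, 180.0])      # observed link traffic counts
x_target = np.array([600.0, 650.0])      # target (prior) OD trips

# Estimate non-negative OD trips that reproduce the counts; the target OD could
# additionally enter as a soft prior term, omitted here for brevity.
res = lsq_linear(P, v, bounds=(0.0, np.inf))
print("estimated OD trips:", res.x.round(1), " target OD:", x_target)
```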

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the use of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, categorization was performed manually; however, manual categorization not only cannot guarantee accuracy but also requires a large amount of time and cost. Many studies have been conducted on the automatic assignment of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics because they assume that one document can be assigned to only one category. To overcome this limitation, some studies have attempted to assign each document to multiple categories; however, they are also limited in that their learning process requires training on a multi-categorized document set, so they cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To remove the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by using the results of topic analysis on single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate matching scores from each document to multiple categories; a document is classified into a category if and only if its matching score is higher than a predefined threshold. For example, a document can be classified into the three categories whose matching scores exceed the threshold. The main contribution of our study is that the methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and the use of vulgar language and slang is lower than in other typical text documents. We collected news articles from July 2012 to June 2013. The articles exhibit large variations in the number of articles per category, because readers have different levels of interest in each category and because events occur with different frequencies in each category. To minimize distortion caused by the differing number of articles per category, we extracted 3,000 articles from each of the eight categories, so the total number of articles used in our experiments was 24,000. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics."
Using the collected news articles, we calculated document/category correspondence scores from topic/category and document/topic correspondence scores; the document/category correspondence score indicates the degree to which each document corresponds to a certain category. As a result, we could present two additional categories for each of 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top 1 predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. There was a notably large variation across the eight categories in precision, recall, and F-score.
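A minimal sketch of the scoring chain described above: document/topic scores from topic analysis, a topic/category correspondence table built from single-categorized documents, and document/category matching scores obtained by combining the two and thresholding. The tiny corpus, category labels, and threshold are illustrative stand-ins for the 24,000 news articles and the paper's tuned parameters.

```python
# Illustrative single-to-multi categorization via topic/category correspondence.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "stock market interest rate bank",        # single-categorized as Economy
    "game console graphics chip release",     # single-categorized as IT Science
    "bank launches mobile game payment app",  # single-categorized as Economy
]
labels = ["Economy", "IT Science", "Economy"]
categories = sorted(set(labels))

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)                      # document/topic correspondence scores

# Topic/category correspondence: average topic weight of documents originally
# assigned to each category.
topic_cat = np.vstack([
    doc_topic[[i for i, lab in enumerate(labels) if lab == c]].mean(axis=0)
    for c in categories
]).T                                                  # shape: topics x categories

doc_cat = doc_topic @ topic_cat                       # document/category matching scores
threshold = 0.4                                       # illustrative cutoff
for d, scores in zip(docs, doc_cat):
    assigned = [c for c, s in zip(categories, scores) if s >= threshold]
    print(d[:35], "->", assigned)
```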

A Dynamic Prefetch Filtering Schemes to Enhance Usefulness Of Cache Memory (캐시 메모리의 유용성을 높이는 동적 선인출 필터링 기법)

  • Chon Young-Suk;Lee Byung-Kwon;Lee Chun-Hee;Kim Suk-Il;Jeon Joong-Nam
    • The KIPS Transactions:PartA
    • /
    • v.13A no.2 s.99
    • /
    • pp.123-136
    • /
    • 2006
  • Prefetching is an effective way to reduce the latency caused by memory accesses. However, overly aggressive prefetching not only leads to cache pollution, which cancels out the benefits of prefetching, but also increases bus traffic, degrading overall performance. In this thesis, a prefetch filtering scheme is proposed that dynamically decides whether to initiate a prefetch by consulting a filtering table, in order to reduce the cache pollution caused by unnecessary prefetches. First, a prefetch hashing table 1-bit SC filtering scheme (PHT1bSC) is analyzed to expose the problems of the conventional scheme: like the conventional scheme it uses N:1 mapping, but it keeps a two-state, 1-bit value in each entry. A complete block address table filtering scheme (CBAT) is introduced as a reference for the comparative study. A prefetch block address lookup table scheme (PBALT) is then proposed as the main idea of this paper, and it exhibits the most accurate filtering performance. This scheme has a table of the same length as the PHT1bSC scheme, each entry has the same fields as in the CBAT scheme, and a recently prefetched but never referenced data block address is mapped 1:1 to an entry of the filter table. Simulations were performed on commonly used prefetch schemes with general benchmarks and multimedia programs while varying the cache parameters. Compared with no filtering, the PBALT scheme improved performance by up to 22%, and owing to its higher filtering accuracy the cache miss ratio decreased by 7.9% compared with the conventional PHT2bSC. The MADT of the proposed PBALT scheme decreased by 6.1% compared with conventional schemes, reducing the total execution time.
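A minimal sketch of the general idea behind an address-based prefetch filter table: remember block addresses that were prefetched but never referenced and suppress a new prefetch to such an address. The table organization, eviction policy, and update rules of the actual PBALT scheme differ in detail and are given in the paper; everything below is an illustrative assumption.

```python
# Illustrative address-based prefetch filtering (hypothetical table policy).

class PrefetchFilterTable:
    def __init__(self, size=256):
        self.size = size
        self.unused_prefetches = set()     # block addresses prefetched but not yet referenced

    def should_prefetch(self, block_addr: int) -> bool:
        """Filter out a prefetch whose address was previously prefetched in vain."""
        return block_addr not in self.unused_prefetches

    def on_prefetch_issued(self, block_addr: int) -> None:
        if len(self.unused_prefetches) >= self.size:
            self.unused_prefetches.pop()   # crude eviction, for the sketch only
        self.unused_prefetches.add(block_addr)

    def on_demand_reference(self, block_addr: int) -> None:
        self.unused_prefetches.discard(block_addr)   # the prefetch proved useful

filt = PrefetchFilterTable()
for addr in [0x40, 0x80, 0x40]:
    if filt.should_prefetch(addr):
        filt.on_prefetch_issued(addr)
        print(f"prefetch 0x{addr:x}")
    else:
        print(f"filtered 0x{addr:x}")      # second request to 0x40 is suppressed
```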