• Title/Summary/Keyword: Algorithm-based (알고리즘 기반)

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong; Kim, Hyun-Tae; Jang, Young-Min; Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.96-110 / 2017
  • Most existing vehicle detection studies use a general or wide-angle lens, leaving blind spots in rear-detection situations, and the resulting images are vulnerable to noise and a variety of external environments. In this paper, we propose a method for detection in harsh external environments with noise, blind spots, etc. First, a fish-eye lens is used to minimize blind spots compared with a wide-angle lens. Because nonlinear radial distortion grows as the lens angle increases, calibration was applied after initializing and optimizing the distortion constant in order to ensure accuracy. In addition, the original image was analyzed along with calibration to remove fog and correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Fog removal generally takes a considerable amount of time to compute; to reduce the computation time, fog was removed with the widely used Dark Channel Prior algorithm. Gamma correction was used to correct brightness, and a brightness and contrast evaluation was conducted on the image to determine the gamma value needed for correction. The evaluation used only a part of the image instead of the entire image in order to reduce computation time. Once the brightness and contrast values were calculated, they were used to decide the gamma value and correct the entire image. Brightness correction and fog removal were processed in parallel, and the results were registered into a single image to minimize the total computation time. The HOG feature extraction method was then used to detect the vehicle in the corrected image. As a result, vehicle detection using the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate compared with the existing vehicle detection method.
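
As a rough illustration of the brightness-correction step, the sketch below derives a gamma value from the mean brightness of a sampled sub-region and applies it to the whole frame via a lookup table. It is a minimal sketch: the region, the linear brightness-to-gamma rule, and the clipping bounds are illustrative assumptions, not the authors' calibrated procedure.

```python
import cv2
import numpy as np

def estimate_gamma(image, roi):
    """Estimate a gamma value from the mean brightness of a sampled region.

    The linear brightness-to-gamma mapping is an assumed rule; the paper
    evaluates brightness and contrast on part of the image to pick the value.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    x, y, w, h = roi
    mean_brightness = gray[y:y + h, x:x + w].mean() / 255.0
    # Dark scenes get gamma < 1 (brighten); bright scenes get gamma > 1 (darken).
    return float(np.clip(2.0 * mean_brightness + 0.25, 0.3, 2.5))

def apply_gamma(image, gamma):
    """Apply gamma correction to the whole frame via a 256-entry lookup table."""
    table = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    return cv2.LUT(image, table)

# Dark synthetic frame standing in for a rear-view camera image.
frame = np.full((480, 640, 3), 40, np.uint8)
corrected = apply_gamma(frame, estimate_gamma(frame, roi=(100, 100, 200, 200)))
```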

Visualization and Localization of Fusion Image Using VRML for Three-dimensional Modeling of Epileptic Seizure Focus (VRML을 이용한 융합 영상에서 간질환자 발작 진원지의 3차원적 가시화와 위치 측정 구현)

  • 이상호; 김동현; 유선국; 정해조; 윤미진; 손혜경; 강원석; 이종두; 김희중
    • Progress in Medical Physics / v.14 no.1 / pp.34-42 / 2003
  • In medical imaging, three-dimensional (3D) display using the Virtual Reality Modeling Language (VRML) as a portable file format can deliver intuitive information more efficiently on the World Wide Web (WWW). Web-based 3D visualization of functional images combined with anatomical images has not been studied much in systematic ways. The goal of this study was to achieve simultaneous observation of 3D anatomic and functional models together with planar images on the WWW, providing their locational information in 3D space with a measuring implement built in VRML. MRI and ictal-interictal SPECT images were obtained from one epileptic patient. Subtraction ictal SPECT co-registered to MRI (SISCOM) was performed to improve identification of the seizure focus. SISCOM image volumes were thresholded at one standard deviation (1-SD) and two standard deviations (2-SD). SISCOM foci and the boundaries of gray matter, white matter, and cerebrospinal fluid (CSF) in the MRI volume were segmented and rendered to VRML polygonal surfaces by the marching cubes algorithm. Line profiles of the x- and y-axes representing real lengths on an image were acquired, and their maximum lengths were both 211.67 mm. The ratio of the real size to the rendered VRML surface size was approximately 1 to 605.9. A VRML measuring tool was built and merged with the previous VRML surfaces. User interface tools were embedded with JavaScript routines to display MRI planar images as cross sections of the 3D surface models and to set the transparencies of the 3D surface models. When the transparencies were properly controlled, a fused display of the brain geometry with the 3D distributions of focal activated regions provided intuitive spatial correlations among the three 3D surface models. The epileptic seizure focus was in the right temporal lobe of the brain. The real position of the seizure focus could be verified with the VRML measuring tool, and the anatomy corresponding to the focus could be confirmed by MRI planar images crossing the 3D surface models. The VRML application developed in this study has several advantages. First, a 3D fused display and control of anatomic and functional images were achieved on the WWW. Second, vector analysis of a 3D surface model was enabled by the VRML measuring tool based on real size. Finally, the anatomy corresponding to the seizure focus was intuitively identified through correlation with the MRI images. Our web-based visualization of 3D fusion images and their localization should aid online research and education in diagnostic radiology, therapeutic radiology, and surgery applications.
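
To make the surface-extraction step concrete, here is a minimal sketch, assuming a NumPy array stands in for the co-registered SISCOM volume, of thresholding at n standard deviations and extracting an isosurface with the marching cubes algorithm via scikit-image; the VRML export of the resulting mesh is omitted.

```python
import numpy as np
from skimage import measure

def siscom_foci_surface(siscom_volume, n_sd=1.0):
    """Extract an isosurface around SISCOM foci thresholded at n standard
    deviations above the mean. The vertices and faces returned here could
    then be written out as VRML IndexedFaceSet nodes (export step omitted)."""
    level = siscom_volume.mean() + n_sd * siscom_volume.std()
    verts, faces, normals, values = measure.marching_cubes(siscom_volume, level=level)
    return verts, faces

# Hypothetical 3D volume standing in for the co-registered SISCOM data.
volume = np.random.rand(64, 64, 64).astype(np.float32)
verts, faces = siscom_foci_surface(volume, n_sd=1.0)
print(verts.shape, faces.shape)
```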

Development and Evaluation of Traffic Conflict Criteria at an intersection (교차로 교통상충기준 개발 및 평가에 관한 연구)

  • 하태준; 박형규; 박제진; 박찬모
    • Journal of Korean Society of Transportation / v.20 no.2 / pp.105-115 / 2002
  • For many years, traffic accident statistics have been the most direct measure of safety for a signalized intersection. However, it takes two to three years to collect accident data of adequate sample size, and the accident data themselves are unreliable because of differences between the accidents recorded and the accidents that actually occurred. It is therefore difficult to evaluate the safety of an intersection using accident data alone. For these reasons, the traffic conflict technique (TCT) was developed as a quick and accurate surrogate measure of intersection safety. However, collected conflict data are not always reliable because clear criteria for what constitutes a conflict are absent. This study developed objective and accurate conflict criteria, described below, based on traffic engineering theory. First, a rear-end conflict is recorded when, according to car-following theory, the following vehicle takes an evasive maneuver against the lead vehicle within a certain distance. Second, a lane-change conflict is recorded when the following vehicle takes an evasive maneuver against a lead vehicle that changes lanes within the following vehicle's minimum stopping distance. Third, crossing and opposing-left-turn conflicts are recorded when a vehicle that has received the green signal takes an evasive maneuver against a vehicle that has lost its right-of-way while crossing the intersection. A correlation analysis between conflicts and accidents verified that the conflict criteria suggested in this study are applicable, and regression analyses between accidents and conflicts, and between EPDO accidents and conflicts, showed that evaluating intersection safety with conflict data is possible. Adopting the conflict criteria suggested in this study would provide a quick and accurate method for diagnosing safety and operational deficiencies and for evaluating improvements at intersections. Further research is required to refine the suggested conflict criteria and extend their application, and other types of conflict criteria not included in this study should be developed in later work.
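
A hedged sketch of the first criterion: under car-following theory, a rear-end conflict can be flagged when the follower takes evasive action (brakes) while its gap to the lead vehicle is inside its minimum stopping distance. The reaction time and deceleration rate below are illustrative textbook values, not the thresholds calibrated in the paper.

```python
def stopping_distance(speed_mps, reaction_time_s=1.0, decel_mps2=4.5):
    """Minimum stopping distance: reaction distance plus braking distance.

    Reaction time and deceleration rate are illustrative assumptions.
    """
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2.0 * decel_mps2)

def is_rear_end_conflict(gap_m, follower_speed_mps, follower_braked):
    """Flag a rear-end conflict: the follower brakes while the gap to the
    lead vehicle is inside its minimum stopping distance."""
    return follower_braked and gap_m < stopping_distance(follower_speed_mps)

# A follower at 50 km/h (13.9 m/s) braking 15 m behind the lead vehicle.
print(is_rear_end_conflict(gap_m=15.0, follower_speed_mps=13.9, follower_braked=True))
```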

Development of Market Growth Pattern Map Based on Growth Model and Self-organizing Map Algorithm: Focusing on ICT products (자기조직화 지도를 활용한 성장모형 기반의 시장 성장패턴 지도 구축: ICT제품을 중심으로)

  • Park, Do-Hyung; Chung, Jaekwon; Chung, Yeo Jin; Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.1-23 / 2014
  • Market forecasting aims to estimate the sales volume of a product or service sold to consumers over a specific selling period. From the perspective of the enterprise, accurate market forecasting assists in determining the timing of new product introduction and product design, and in establishing production plans and marketing strategies, enabling a more efficient decision-making process. Accurate market forecasting also enables governments to organize national budgets efficiently. This study aims to generate market growth curves for ICT (information and communication technology) goods using past time series data; categorize products showing similar growth patterns; understand the markets in the industry; and forecast the future outlook of such products. The study suggests a useful and meaningful process (or methodology) for identifying market growth patterns with a quantitative growth model and a data mining algorithm, as follows. At the first stage, past time series data are collected for the target products or services of the categorized industry. The data, such as sales volume and domestic consumption for a specific product or service, are collected from the relevant government ministry, the National Statistical Office, and other relevant government organizations. Data that cannot be analyzed directly, because of a lack of past data or changes in code names, are handled in a pre-processing step. At the second stage, an optimal model for market forecasting is selected; the choice may vary with the characteristics of each categorized industry. As this study focuses on the ICT industry, where new technologies appear frequently and change the market structure, the Logistic model, the Gompertz model, and the Bass model are selected. A hybrid model that combines different models can also be considered: the hybrid model considered in this study estimates the size of the market potential through the Logistic and Gompertz models, and those figures are then used in the Bass model. The third stage is to evaluate which model explains the data most accurately. To do this, the parameters are estimated from the collected past time series data to generate each model's predictive values and calculate the root-mean-square error (RMSE); the model with the lowest average RMSE across all product types is selected as the best model. At the fourth stage, based on the parameter values estimated by the best model, a market growth pattern map is constructed with the self-organizing map algorithm. A self-organizing map is trained with the market pattern parameters of all products or services as input data, and the products or services are organized onto an N×N map. The number of clusters increases from 2 to M, depending on the characteristics of the nodes on the map. The clusters are divided into zones, and the clusters that provide the most meaningful explanation are selected. Based on the final selection of clusters, the boundaries between the nodes are drawn and the market growth pattern map is completed. The last step is to determine the final characteristics of the clusters and the market growth curves. The average of the market growth pattern parameters in each cluster is taken as a representative figure; using this figure, a growth curve is drawn for each cluster and its characteristics are analyzed. Taking into consideration the product types in each cluster, their characteristics can also be described qualitatively. We expect that the process and system suggested in this paper can be used as a tool for forecasting demand in the ICT and other industries.
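
The model-selection stage can be sketched as follows: fit the standard cumulative Logistic, Gompertz, and Bass forms with SciPy and keep the model with the lowest RMSE. The initial parameter guesses and the synthetic sales series are assumptions for illustration, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, m, a, b):
    """Cumulative logistic growth: market potential m, shape a, rate b."""
    return m / (1.0 + a * np.exp(-b * t))

def gompertz(t, m, a, b):
    """Cumulative Gompertz growth curve."""
    return m * np.exp(-a * np.exp(-b * t))

def bass(t, m, p, q):
    """Cumulative Bass diffusion: potential m, innovation p, imitation q."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def best_growth_model(t, sales):
    """Fit each candidate model and pick the one with the lowest RMSE."""
    candidates = {"logistic": (logistic, [sales.max() * 1.2, 10.0, 0.5]),
                  "gompertz": (gompertz, [sales.max() * 1.2, 5.0, 0.5]),
                  "bass": (bass, [sales.max() * 1.2, 0.03, 0.4])}
    scores = {}
    for name, (f, p0) in candidates.items():
        try:
            params, _ = curve_fit(f, t, sales, p0=p0, maxfev=20000)
        except RuntimeError:  # a model may fail to converge on some series
            continue
        rmse = np.sqrt(np.mean((f(t, *params) - sales) ** 2))
        scores[name] = (rmse, params)
    return min(scores.items(), key=lambda kv: kv[1][0])

t = np.arange(1, 11, dtype=float)          # hypothetical yearly time axis
sales = 100 / (1 + 20 * np.exp(-0.8 * t))  # synthetic logistic-like series
print(best_growth_model(t, sales))
```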

Analysis of Interactions in Multiple Genes using IFSA(Independent Feature Subspace Analysis) (IFSA 알고리즘을 이용한 유전자 상호 관계 분석)

  • Kim, Hye-Jin; Choi, Seung-Jin; Bang, Sung-Yang
    • Journal of KIISE: Computer Systems and Theory / v.33 no.3 / pp.157-165 / 2006
  • Changes in the external and internal factors of the cell require specific biological functions to maintain life, and such functions encourage particular genes to interact with and regulate each other in multiple ways. Accordingly, we applied IFSA, a linear decomposition model that derives hidden variables, called 'expression modes', corresponding to these functions. To interpret gene interaction and regulation, we used a cross-correlation method given an expression mode. Linear decomposition models such as principal component analysis (PCA) and independent component analysis (ICA) have been shown to be useful for analyzing high-dimensional DNA microarray data compared with clustering methods. These methods assume that gene expression is controlled by a linear combination of uncorrelated or independent latent variables. However, they have difficulty grouping similar patterns that are slightly time-delayed or asymmetric, since only exactly matched patterns are considered. To overcome this, we employ the IFSA method of [1] to locate phase- and shift-invariant features. Membership scoring functions play an important role in classifying genes, since linear decomposition models basically aim at data reduction, not at grouping data; we address a new scoring function essential to the IFSA method. In this paper we stress that IFSA is useful for grouping functionally related genes in the presence of time shifts and expression phase variance. Ultimately, we propose a new approach to investigating the interaction information of multiple genes.
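
The cross-correlation step can be illustrated with a small sketch: score two expression profiles by their best normalized correlation over a window of time shifts, so slightly delayed but related genes still score highly. The lag window and the synthetic profiles are assumptions for illustration.

```python
import numpy as np

def max_cross_correlation(profile_a, profile_b, max_lag=3):
    """Best normalized correlation between two expression profiles over small
    time shifts, so time-delayed but related genes still group together."""
    a = (profile_a - profile_a.mean()) / profile_a.std()
    b = (profile_b - profile_b.mean()) / profile_b.std()
    best = -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        best = max(best, float(np.mean(x * y)))
    return best

gene1 = np.sin(np.linspace(0, 3 * np.pi, 20))        # synthetic profile
gene2 = np.sin(np.linspace(0, 3 * np.pi, 20) - 0.5)  # same shape, delayed
print(max_cross_correlation(gene1, gene2))
```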

Automatic Interpretation of Epileptogenic Zones in F-18-FDG Brain PET using Artificial Neural Network (인공신경회로망을 이용한 F-18-FDG 뇌 PET의 간질원인병소 자동해석)

  • 이재성; 김석기; 이명철; 박광석; 이동수
    • Journal of Biomedical Engineering Research / v.19 no.5 / pp.455-468 / 1998
  • For objective interpretation of the cerebral metabolic patterns of epilepsy patients, we developed a computer-aided classifier using an artificial neural network. We studied the interictal brain FDG PET scans of 257 epilepsy patients who were diagnosed by visual interpretation as normal (n=64), left temporal lobe epilepsy (L TLE, n=112), or right temporal lobe epilepsy (R TLE, n=81). Automatically segmented volumes of interest (VOIs) were used to reliably extract features representing the patterns of cerebral metabolism. All images were spatially normalized to the MNI standard PET template and smoothed with a 16 mm FWHM Gaussian kernel using SPM96, and the mean count in the cerebral region was normalized. VOIs for 34 cerebral regions were defined in advance on the standard template, and 17 counts from regions mirrored across the hemispheric midline were extracted from the spatially normalized images. A three-layer feed-forward error back-propagation neural network classifier with 7 input nodes and 3 output nodes was used. The network was trained to interpret metabolic patterns and produce diagnoses identical to those of expert viewers. The performance of the neural network was optimized by testing 5 to 40 nodes in the hidden layer. Forty randomly selected images from each group were used to train the network, and the remainder were used to test the learned network. The optimized neural network gave a maximum agreement rate of 80.3% with the expert viewers; it used 20 hidden nodes and was trained for 1,508 epochs. Networks with 10 or 30 hidden nodes also gave agreement rates of 75-80%. We conclude that the artificial neural network performed as well as human experts and could be potentially useful as a clinical decision support tool for the localization of epileptogenic zones.
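
A minimal stand-in for the classifier, using scikit-learn's MLPClassifier rather than the original implementation: a feed-forward network with 7 inputs, 20 hidden nodes, and 3 classes, trained by back-propagation. The random feature matrices merely mark the expected shapes; real inputs would be the regional VOI counts.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical feature matrix: one row per scan, 7 regional count features;
# labels 0 = normal, 1 = left TLE, 2 = right TLE (the paper's 3 classes).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(120, 7)), rng.integers(0, 3, size=120)
X_test, y_test = rng.normal(size=(40, 7)), rng.integers(0, 3, size=40)

# A 7-input, 20-hidden-node, 3-output feed-forward network trained with
# back-propagation; scikit-learn stands in for the original implementation.
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1500, random_state=0)
clf.fit(X_train, y_train)
print("agreement rate:", clf.score(X_test, y_test))
```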

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun; Hyun, Yoonjin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers and that requires professionals capable of classifying relevant information; hence, text classification was introduced. Text classification is a challenging task in modern data analysis in which a text document must be assigned to one or more predefined categories or classes. Different kinds of techniques are available in the field, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge: the performance of a text classification model can vary with the type of words used in the corpus and the type of features created for classification. Most previous attempts have been based on proposing a new algorithm or modifying an existing one, a line of research that can be said to have reached its limits for further improvement. In this study, instead of proposing a new algorithm or modifying one, we focus on finding a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built, and real-world datasets usually contain noise that can affect the decisions made by the classifiers built from them. In this study, we consider that data from different domains, that is, heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. Machine learning algorithms build classifiers under the assumption that the characteristics of the training data and the target data are the same or very similar. In the case of unstructured data such as text, however, the features are determined by the vocabulary of the documents, and if the viewpoints of the training data and target data differ, the features of the two may also appear different. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data coming from various kinds of sources are likely formatted differently, traditional machine learning algorithms, which are not designed to recognize different types of data representation at one time and put them together in the same generalization, have difficulty with such data. Therefore, to utilize heterogeneous data in the learning process of a document classifier, we apply semi-supervised learning. Unlabeled data, however, can degrade the performance of the classifier. We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to improving the accuracy of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision making. In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
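
RSESLA itself builds multiple views and selects confident rules; as a much-reduced sketch of the underlying semi-supervised idea, the loop below self-trains a Naïve Bayes text classifier by repeatedly absorbing confidently labeled documents from an unlabeled (possibly heterogeneous) pool. The confidence threshold, round count, and toy documents are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

def self_train(labeled_docs, labels, unlabeled_docs, confidence=0.9, rounds=3):
    """Simple self-training loop: iteratively add confidently classified
    unlabeled documents (e.g. from a different source domain) to the
    training set. A much-reduced stand-in for the paper's RSESLA ensemble."""
    docs, y, pool = list(labeled_docs), list(labels), list(unlabeled_docs)
    vec = TfidfVectorizer()
    for _ in range(rounds):
        if not pool:
            break
        clf = MultinomialNB().fit(vec.fit_transform(docs), y)
        proba = clf.predict_proba(vec.transform(pool))
        keep = proba.max(axis=1) >= confidence
        if not keep.any():
            break
        for i in np.where(keep)[0]:
            docs.append(pool[i])
            y.append(clf.classes_[proba[i].argmax()])
        pool = [d for i, d in enumerate(pool) if not keep[i]]
    return vec, MultinomialNB().fit(vec.fit_transform(docs), y)

labeled = ["stocks rallied today", "the team won the match"]
labels = ["finance", "sports"]
unlabeled = ["shares fell sharply", "the striker scored twice"]  # e.g. from blogs
vec, clf = self_train(labeled, labels, unlabeled)
```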

The Estimation Model of an Origin-Destination Matrix from Traffic Counts Using a Conjugate Gradient Method (Conjugate Gradient 기법을 이용한 관측교통량 기반 기종점 OD행렬 추정 모형 개발)

  • Lee, Heon-Ju; Lee, Seung-Jae
    • Journal of Korean Society of Transportation / v.22 no.1 s.72 / pp.43-62 / 2004
  • Conventionally, origin-destination (O-D) matrices have been estimated by expanding sampled data obtained from roadside interviews and household travel surveys. In the survey process, the larger the sample size, the greater the cost and time required, including time for error testing, which limits this approach. Estimating the O-D matrix from observed traffic count data has been applied as a way of overcoming this limitation, and the gradient model is one of the most popular techniques. In the case of the gradient model, however, although the error between observed and estimated traffic volumes may be minimized, the structure of the prior O-D matrix cannot be maintained exactly; that is, unwanted changes may occur. For this reason, this study adopts a conjugate gradient algorithm that accounts for two factors: estimating the O-D matrix with the conjugate gradient algorithm while maintaining the structure of the prior O-D matrix. The O-D matrix estimation model is developed to minimize the error between observed and estimated traffic volumes. This study validates the model on a simple network and then applies it to a large-scale network. The tests yield several findings. First, regarding consistency, it is apparent that the upper level of this model plays a key role through its internal relationship with the lower level. Second, regarding estimation precision, the estimation error lies within the tolerance interval. Furthermore, the structure of the estimated O-D matrix does not change much and conserves the attributes of the prior matrix.
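
A single-level simplification of the estimation idea (the paper's model is bi-level): minimize the squared error between observed and assigned link volumes plus a penalty that keeps the estimate close to the prior O-D structure, using SciPy's conjugate gradient method. The toy assignment matrix, counts, prior, and penalty weight are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy network: 4 O-D pairs, 3 counted links. A[i, j] is the proportion of
# O-D pair j's trips using link i (assumed fixed assignment fractions).
A = np.array([[1.0, 0.5, 0.0, 0.2],
              [0.0, 0.5, 1.0, 0.3],
              [0.0, 0.0, 0.0, 0.5]])
observed = np.array([420.0, 610.0, 150.0])      # observed link counts
prior = np.array([300.0, 250.0, 400.0, 200.0])  # prior O-D matrix (flattened)

def objective(x, weight=0.1):
    """Squared error on link counts plus a penalty that preserves the prior
    O-D structure; the trade-off weight is an assumed value."""
    link_error = A @ x - observed
    return link_error @ link_error + weight * ((x - prior) @ (x - prior))

result = minimize(objective, x0=prior, method="CG")
print(result.x.round(1))
```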

Analysis of Genetics Problem-Solving Processes of High School Students with Different Learning Approaches (학습접근방식에 따른 고등학생들의 유전 문제 해결 과정 분석)

  • Lee, Shinyoung; Byun, Taejin
    • Journal of The Korean Association For Science Education / v.40 no.4 / pp.385-398 / 2020
  • This study aims to examine the genetics problem-solving processes of high school students with different learning approaches. Two second-year high school students participated in a task that required solving a complicated pedigree problem. The participants had similar academic achievement in life science, but one had a deep learning approach while the other had a surface learning approach. To analyze the students' problem-solving processes in depth, each student's problem solving was video-recorded, and each student gave a think-aloud interview after solving the problem. Although the students made similar errors on their first trial, they showed different problem-solving processes on their last trial. Student A, who had a deep learning approach, voluntarily solved the problem three times and, in the last trial, demonstrated correct conceptual framing of the three constraints using rule-based reasoning. Student A monitored the consistency between the data and her own pedigree and reflected on the problem-solving process in the check phase of the last trial; her problem-solving process in the third trial resembled a successful problem-solving algorithm. However, Student B, who had a surface learning approach, involuntarily repeated the problem twice and, because of her answer-seeking, goal-oriented attitude, focused on and used only part of the data. Student B showed incorrect conceptual framing based on memory-bank or arbitrary reasoning and maintained this incorrect framing of the constraints across both problem-solving processes. These findings can help in understanding the problem-solving processes of students with different learning approaches, allowing teachers to better support students who have difficulties in approaching genetics problems.

Evaluation of the Accuracy for Respiratory-gated RapidArc (RapidArc를 이용한 호흡연동 회전세기조절방사선치료 할 때 전달선량의 정확성 평가)

  • Sung, Jiwon; Yoon, Myonggeun; Chung, Weon Kuu; Bae, Sun Hyun; Shin, Dong Oh; Kim, Dong Wook
    • Progress in Medical Physics / v.24 no.2 / pp.127-132 / 2013
  • The positions of the internal organs change continually and periodically inside the body due to respiration. To reduce the respiration-induced uncertainty of dose localization, one can use respiratory-gated radiotherapy, in which the radiation beam is turned on only during a specific phase of the respiratory period. The main disadvantages of this method are that it usually requires a long treatment time and considerable effort during treatment, and that it limits patient selection. In this sense, the combination of the real-time position management (RPM) system and volumetric intensity-modulated arc therapy (RapidArc) is promising, since it provides a short treatment time compared with conventional respiratory-gated treatments. In this study, we evaluated the accuracy of respiratory-gated RapidArc treatment. A total of six patient cases were used for this study, and each case was planned with the RapidArc technique using the Varian ECLIPSE v8.6 planning system. For quality assurance (QA), a MatriXX detector and the I'mRT software were used. The results show that more than 97% of the area gives a gamma value of less than one under the 3% dose difference and 3 mm distance-to-agreement criterion, which indicates that the measured dose matches the planned dose distribution well for the gated RapidArc treatment cases.
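
For illustration, here is a brute-force global gamma analysis over a 2D dose plane, reporting the fraction of points with gamma at most 1 under a 3%/3 mm criterion. This is a minimal sketch: the actual QA used the I'mRT software, and the synthetic dose planes and grid spacing below are assumptions.

```python
import numpy as np

def gamma_pass_rate(measured, planned, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """Global 2D gamma analysis (brute force): for every measured point,
    search all planned points for the minimum combined dose-difference /
    distance-to-agreement metric, then report the fraction with gamma <= 1."""
    ys, xs = np.indices(measured.shape)
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1) * spacing_mm
    planned_flat = planned.ravel()
    max_dose = planned.max()  # global normalization for the dose criterion
    gammas = np.empty(coords.shape[0])
    for k, (point, dose) in enumerate(zip(coords, measured.ravel())):
        dist2 = ((coords - point) ** 2).sum(axis=1) / dta_mm ** 2
        dose2 = ((planned_flat - dose) / (dd * max_dose)) ** 2
        gammas[k] = np.sqrt((dist2 + dose2).min())
    return (gammas <= 1.0).mean()

planned = np.random.rand(40, 40)  # synthetic planned dose plane
measured = planned + np.random.normal(0, 0.005, planned.shape)
print(f"pass rate: {gamma_pass_rate(measured, planned):.1%}")
```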