• Title/Summary/Keyword: data paper (데이터 논문)


The Research to Correct Overestimation in TOF-MRA for Severity of Cerebrovascular Stenosis (3D-SPACE T2 기법에 의한 TOF-MRA검사 시 발생하는 혈관 내 협착 정도의 측정 오류 개선에 관한 연구)

  • Han, Yong Su;Kim, Ho Chul;Lee, Dong Young;Lee, Su Cheol;Ha, Seung Han;Kim, Min Gi
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.12
    • /
    • pp.180-188
    • /
    • 2014
  • Accurate diagnosis and prompt treatment are critical in cerebrovascular disease, i.e., stenosis or occlusion caused by risk factors such as poor dietary habits, insufficient exercise, and obesity. Time-of-flight magnetic resonance angiography (TOF-MRA), well known as a diagnostic method for cerebrovascular disease that requires no contrast agent, is the most representative and reliable technique. Nevertheless, it still produces measurement errors (overestimation) for the length of stenosis and the area of occlusion in cerebral infarction, which develops through the accumulation and rupture of plaques generated by hemodynamic turbulence. The purpose of this study is to show the clinical feasibility of 3D-SPACE T2, which exploits the signal attenuation effect of fluid velocity, in the diagnosis of cerebrovascular disease. To model angiostenosis, strictures of different proportions (40%, 50%, 60%, and 70%) and a virtual blood stream (normal saline) at different velocities (0.19, 1.5, 2.1, and 2.6 ml/sec) were constructed using dialysis. Cross-examinations were performed with 3D-SPACE T2 and TOF-MRA (16 times each), and the accuracy of the measured stenosis length was compared under all experimental conditions. 3D-SPACE T2 was superior to TOF-MRA in the accuracy of stenosis-length measurement and was more robust to fast flow and severe stenosis than TOF-MRA. 3D-SPACE T2 is therefore a promising technique for increasing diagnostic accuracy in narrow, complex lesions, such as two small cerebral vessels with stenosis created by hemodynamic turbulence.
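
The abstract quantifies stenosis as a proportion (40-70%) of the phantom lumen. The paper does not state its measurement formula; a common diameter-based convention (NASCET-style) is sketched below purely for illustration, with the 2.4 mm / 4.0 mm example being an assumed pair of values.

```python
def degree_of_stenosis(d_stenotic_mm: float, d_reference_mm: float) -> float:
    """Percent diameter stenosis: (1 - d_stenotic / d_reference) * 100."""
    if d_reference_mm <= 0:
        raise ValueError("reference diameter must be positive")
    return (1.0 - d_stenotic_mm / d_reference_mm) * 100.0

# Example: a 2.4 mm residual lumen inside a 4.0 mm reference vessel
# corresponds to the smallest phantom proportion used in the study.
print(round(degree_of_stenosis(2.4, 4.0)))  # 40
```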

S-MADP : Service based Development Process for Mobile Applications of Medium-Large Scale Project (S-MADP : 중대형 프로젝트의 모바일 애플리케이션을 위한 서비스 기반 개발 프로세스)

  • Kang, Tae Deok;Kim, Kyung Baek;Cheng, Ki Ju
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.8
    • /
    • pp.555-564
    • /
    • 2013
  • The rapid evolution of mobile devices, together with the recent spread of tablet PCs and smartphones, is changing not only individual life but also enterprise applications. In particular, for the medium-to-large mobile applications of large enterprises, which generally take more than three months to develop, both importance and complexity increase significantly. An Agile methodology is commonly used as the development process for such applications, but issues arise such as a high dependency on skilled developers and a lack of detailed development directives. In this paper, S-MADP (Smart Mobile Application Development Process) is proposed to mitigate these issues. S-MADP is a service-oriented development process, extending an object-oriented development process, for medium-to-large mobile applications. It provides detailed development directives for each activity across the entire process, defining services as server-based or client-based and specifying how services are reused. To support various user interfaces, S-MADP also provides detailed UI development directives. To evaluate S-MADP, three mobile application development projects were conducted and analyzed: 'TBS (TB Mobile Service) 3.0' at TB company, a mobile app store at TS company, and mobile groupware at TG group. Compared with the existing Agile methodology, S-MADP produced more detailed design information on 'minimizing the use of resources', 'service-based design', and 'user interfaces optimized for mobile devices', all of which must be considered heavily in mobile application development, thereby improving the usability, maintainability, and efficiency of the developed applications. Field tests showed that S-MADP outperforms the Agile methodology by about 25% in terms of the man-months required to develop a medium-to-large mobile application.

Effect of Cognitive Affordance of Interactive Media Art Content on the Interaction and Interest of Audience (인터랙티브 미디어아트 콘텐츠의 인지적 어포던스가 관람자의 인터랙션과 흥미에 미치는 영향)

  • Lee, Gangso;Choi, Yoo-Joo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.9
    • /
    • pp.441-450
    • /
    • 2016
  • In this study, we investigate how the level of cognitive affordance that explicitly conveys the interaction method affects viewer interest. The viewer's recognition of the interaction method is related to cognitive affordance in terms of the visual-perceptual exposure of the input device and the viewer's awareness of it. The ultimate goal of affordance research is to enhance audience participation rather than merely smooth interaction. Many interactive media artworks have been designed to hide explicit explanations of how to interact, out of concern that such explanations may hinder the impressions that lead the viewer to an aesthetic experience and sustain interest. In this context, we set up two hypotheses on cognitive affordance: first, the more explicit the explanation of the interaction method, the better the viewer understands it; second, the more explicit the explanation, the lower the viewer's interest. An interactive media artwork was produced in three versions differing in the degree of visual-perceptual information presented, and we analyzed audience participation and interest for each version. The experiments showed that the version with highly explicit interaction cues yielded longer viewing times and higher participation and interest, whereas the version with an unexplicit interaction method yielded low interest and satisfaction. Therefore, the hypothesis that a more explicit explanation of interaction would lower the viewer's curiosity and exploratory interest was rejected. Improving cognitive affordance was confirmed to increase both interaction with the artwork and viewer interest in the proposed interactive content. The findings imply that interactive media artworks should be designed with the awareness that audience interaction and interest can decline when cognitive affordance is low.

A Real-Time Head Tracking Algorithm Using Mean-Shift Color Convergence and Shape Based Refinement (Mean-Shift의 색 수렴성과 모양 기반의 재조정을 이용한 실시간 머리 추적 알고리즘)

  • Jeong Dong-Gil;Kang Dong-Goo;Yang Yu Kyung;Ra Jong Beom
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.6
    • /
    • pp.1-8
    • /
    • 2005
  • In this paper, we propose a two-stage head tracking algorithm suitable for a real-time active camera system with pan-tilt-zoom functions. In the color convergence stage, we first assume that the shape of a head is an ellipse and that its model color histogram has been acquired in advance. The mean-shift method is then applied to roughly estimate the target position by examining the histogram similarity between the model and a candidate ellipse. To reflect temporal changes in object color and to enhance the reliability of mean-shift-based tracking, the target histogram obtained in the previous frame is used to update the model histogram. In this updating process, to alleviate error accumulation caused by outliers in the previous frame's target ellipse, the previous target histogram is computed within an ellipse adaptively shrunken on the basis of the model histogram. In addition, to further enhance tracking reliability, we set the initial position closer to the true position by compensating for global motion, which is rapidly estimated from two 1-D projection datasets. In the subsequent stage, we refine the position and size of the ellipse obtained in the first stage by using shape information, defining a robust shape-similarity function based on gradient direction. Extensive experimental results show that the proposed algorithm tracks a head well even when the person moves fast, the head size changes drastically, or the background contains heavy clutter and distracting colors. The proposed algorithm also runs at a processing speed of about 30 fps on a standard PC.
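
As a rough illustration of the color-convergence stage only (not the authors' implementation), the sketch below runs histogram back-projection followed by mean-shift iterations with NumPy. The paper uses an elliptical window, per-frame histogram updating, and global-motion compensation; here a rectangular window and a fixed hue histogram are simplifying assumptions to keep the sketch short.

```python
import numpy as np

def back_projection(hue_image, model_hist, bins=16):
    """Weight each pixel by the model-histogram value of its hue bin
    (hue assumed to lie in the OpenCV-style range 0..179)."""
    idx = np.clip((hue_image.astype(np.int32) * bins) // 180, 0, bins - 1)
    return model_hist[idx]

def mean_shift(weights, window, n_iter=20, eps=1.0):
    """Shift a (cx, cy, w, h) search window to the weighted centroid of the
    back-projection until the shift falls below eps pixels."""
    cx, cy, w, h = map(float, window)
    H, W = weights.shape
    for _ in range(n_iter):
        x0, x1 = max(int(cx - w / 2), 0), min(int(cx + w / 2), W)
        y0, y1 = max(int(cy - h / 2), 0), min(int(cy + h / 2), H)
        roi = weights[y0:y1, x0:x1]
        if roi.size == 0 or roi.sum() <= 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        new_cx = float((xs * roi).sum() / roi.sum())
        new_cy = float((ys * roi).sum() / roi.sum())
        shift = np.hypot(new_cx - cx, new_cy - cy)
        cx, cy = new_cx, new_cy
        if shift < eps:                     # converged
            break
    return cx, cy, w, h
```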

Research about feature selection that use heuristic function (휴리스틱 함수를 이용한 feature selection에 관한 연구)

  • Hong, Seok-Mi;Jung, Kyung-Sook;Chung, Tae-Choong
    • The KIPS Transactions:PartB
    • /
    • v.10B no.3
    • /
    • pp.281-286
    • /
    • 2003
  • A large number of features are collected for real-world problem solving, but using all of them is difficult, and it is not easy to gather correct data for every feature. Learning from all collected data produces a complicated model and does not yield good performance. Moreover, interrelationships or hierarchical relations exist among the features. The number of features can be reduced by analyzing these relations with heuristic knowledge or statistical methods. A heuristic technique refers to learning through repeated trial and error and through experience; experts can approach the relevant problem domain by collecting opinions based on experience. These properties can be used to reduce the number of features used in learning: experts generate new, highly abstract features from raw data. This paper describes a machine learning model that reduces the number of features with a heuristic function and uses the abstracted features as input values to a neural network. We applied this model to win/loss prediction for professional baseball games. The results show that the model combining the two techniques not only reduces the complexity of the neural network model but also yields significantly better classification accuracy than using the neural network or the heuristic model alone.
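
The abstract does not list the actual heuristic functions or baseball features, so the sketch below is purely hypothetical: an assumed "recent form gap" heuristic collapses raw statistics into one abstract feature, and the reduced feature set is fed into a small neural network (scikit-learn's MLPClassifier) on synthetic toy data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def heuristic_form_gap(home_recent_wins, away_recent_wins, n_games=10):
    """Hypothetical expert heuristic: collapse many raw box-score columns into
    one abstract 'recent form gap' feature in [-1, 1]."""
    return (home_recent_wins - away_recent_wins) / n_games

# Synthetic toy data for illustration only (not the paper's data set).
rng = np.random.default_rng(0)
home_wins = rng.integers(0, 11, 200)
away_wins = rng.integers(0, 11, 200)
form_gap = heuristic_form_gap(home_wins, away_wins)
h2h_rate = rng.random(200)                     # hypothetical head-to-head win rate
X = np.column_stack([form_gap, h2h_rate])      # two abstracted features instead of raw stats
y = (0.6 * form_gap + 0.4 * (h2h_rate - 0.5)
     + rng.normal(0, 0.2, 200) > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```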

The Performance Bottleneck of Subsequence Matching in Time-Series Databases: Observation, Solution, and Performance Evaluation (시계열 데이타베이스에서 서브시퀀스 매칭의 성능 병목 : 관찰, 해결 방안, 성능 평가)

  • 김상욱
    • Journal of KIISE:Databases
    • /
    • v.30 no.4
    • /
    • pp.381-396
    • /
    • 2003
  • Subsequence matching is an operation that finds, from a time-series database, subsequences whose changing patterns are similar to a given query sequence. This paper identifies the performance bottleneck in subsequence matching and proposes an effective method that significantly improves overall performance by resolving it. First, through preliminary experiments, we analyze the disk access and CPU processing times required by the index searching and post-processing steps. Based on the results, we show that the post-processing step is the main performance bottleneck and claim that its optimization is a crucial issue overlooked in previous approaches. To resolve the bottleneck, we propose a simple but quite effective method that performs the post-processing step in an optimal way. By rearranging the order in which candidate subsequences are compared with the query sequence, our method completely eliminates the redundant disk accesses and CPU processing incurred in the post-processing step. We formally prove that the method is optimal and incurs no false dismissals, and we demonstrate its effectiveness through extensive experiments. The results show speed-ups in the post-processing step of 3.91 to 9.42 times on a data set of real-world stock sequences and 4.97 to 5.61 times on data sets of a large volume of synthetic sequences. They also show that our method reduces the share of the post-processing step in the entire subsequence matching from about 90% to less than 70%, implying that the performance bottleneck has been successfully resolved. As a result, our method provides excellent overall performance: compared with the previous approach, the entire subsequence matching is 3.05 to 5.60 times faster on the real-world stock data set and 3.68 to 4.21 times faster on the large synthetic data sets.
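
A minimal sketch of the key idea, rearranging candidate subsequences so that each data page is fetched only once, is given below. The page layout, the (page_id, offset) candidate encoding, and the distance/epsilon interface are assumptions for illustration rather than the paper's exact design.

```python
from collections import defaultdict

def post_process(candidates, read_page, distance, query, epsilon):
    """Evaluate candidate subsequences page by page so that every data page
    is read from disk at most once.

    candidates: iterable of (page_id, offset) pairs returned by index search.
    read_page:  function page_id -> list of subsequences stored on that page.
    """
    by_page = defaultdict(list)
    for page_id, offset in candidates:
        by_page[page_id].append(offset)          # group candidates per page

    answers = []
    for page_id, offsets in sorted(by_page.items()):
        page = read_page(page_id)                # single disk access per page
        for off in offsets:
            if distance(page[off], query) <= epsilon:
                answers.append((page_id, off))   # true match, no false dismissal
    return answers
```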

A Scalable Multipoint-to-Multipoint Routing Protocol in Ad-Hoc Networks (애드-혹 네트워크에서의 확장성 있는 다중점 대 다중점 라우팅 프로토콜)

  • 강현정;이미정
    • Journal of KIISE:Information Networking
    • /
    • v.30 no.3
    • /
    • pp.329-342
    • /
    • 2003
  • Most existing multicast routing protocols for ad-hoc networks do not consider protocol efficiency when a multicast group contains a large number of sources, resulting in either large overhead or a poor data delivery ratio as the number of sources grows. In this paper, we propose a multicast routing protocol for ad-hoc networks that specifically addresses scalability in the number of sources per multicast group. The proposed protocol designates a set of sources as core sources. Each core source is the root of a tree that reaches all destinations of the multicast group; the union of these trees constitutes the data delivery mesh, and each non-core source finds the nearest core source and delegates its data delivery to it. For efficient operation, an appropriate number of core sources is important: too many core sources incur excessive control and data packet overhead, whereas too few result in a vulnerable and overloaded data delivery mesh. The data delivery mesh is optimally reconfigured through periodic control message flooding from the core sources, while its connectivity is maintained by a persistent local mesh recovery mechanism. Simulation results show that, when a multicast group has multiple sources, the proposed protocol achieves efficient multicast communication with a high data delivery ratio and low communication overhead compared with existing multicast routing protocols.
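
The delegation step ("each non-core source finds the nearest core source") can be pictured with the toy sketch below; the hop-count table and node names are hypothetical, and a real implementation would obtain distances from the routing layer.

```python
def choose_core(source, core_sources, hop_count):
    """Pick the core source with the smallest hop count from a non-core source.
    hop_count(src, core) is assumed to be supplied by the routing layer."""
    return min(core_sources, key=lambda core: hop_count(source, core))

# Toy example with a hard-coded hop-count table (purely illustrative).
hops = {("S3", "C1"): 4, ("S3", "C2"): 2}
print(choose_core("S3", ["C1", "C2"], lambda s, c: hops[(s, c)]))  # -> C2
```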

The Construction of QoS Integration Platform for Real-time Negotiation and Adaptation Stream Service in Distributed Object Computing Environments (분산 객체 컴퓨팅 환경에서 실시간 협약 및 적응 스트림 서비스를 위한 QoS 통합 플랫폼의 구축)

  • Jun, Byung-Taek;Kim, Myung-Hee;Joo, Su-Chong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.11S
    • /
    • pp.3651-3667
    • /
    • 2000
  • Recently, in internet-based distributed multimedia environments, most researchers have focused on two rapidly growing technologies: streaming and distributed objects. In particular, studies attempting to integrate streaming services on top of distributed object technology have been progressing, and these technologies are applied to various stream service managements and protocols. However, the stream service management models proposed by existing research are insufficient for supporting the QoS of stream services. In addition, the existing models cannot support extensibility and reusability when QoS-related functions are developed as sub-modules tailored to specific-purpose application services. To solve these problems, this paper proposes a QoS integration platform that can be extended and reused using distributed object technologies and that guarantees the QoS of stream services. The suggested platform consists of three components: the User Control Module (UCM), the QoS Management Module (QoSM), and Stream Objects. A Stream Object has send/receive operations for transmitting RTP packets over TCP/IP. The UCM controls Stream Objects via CORBA service objects, and the QoSM maintains the QoS of the stream service between the UCMs on the client and the server. As QoS control methodologies, procedures for resource monitoring, negotiation, and resource adaptation are executed through interactions among these components. To construct the QoS integration platform, we first implemented each module independently and then defined the interfaces among the modules in IDL so that the platform supports platform independence, interoperability, and portability based on CORBA. The platform was built with OrbixWeb 3.1c, following the CORBA specification, on Solaris 2.5/2.7, using the Java language, Java Media Framework API 2.0, Mini-SQL 1.0.16, and multimedia equipment. To verify the platform functionally, we present the execution results of each module and numerical data obtained from the QoS control procedures on the client and server GUIs while a stream service runs on the platform.
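
The monitoring/negotiation/adaptation cycle described above can be summarized by the very simplified loop below; the function names, the frame-rate metric, and the fixed monitoring period are all assumptions, not the platform's actual CORBA interfaces.

```python
import time

def qos_control_loop(monitor, negotiate, adapt, target_fps=30, rounds=5):
    """Simplified monitoring -> negotiation -> adaptation cycle (illustrative only).

    monitor()            -> currently achievable frame rate
    negotiate(achieved)  -> frame rate the peer UCM agrees to
    adapt(agreed)        -> reconfigure the stream, e.g. drop frames
    """
    requested = target_fps
    for _ in range(rounds):
        achieved = monitor()
        if achieved < requested:               # resources have degraded
            requested = negotiate(achieved)    # renegotiate QoS with the peer
            adapt(requested)                   # adapt the stream to the agreed level
        time.sleep(1)                          # monitoring period
```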


Development of Deep Learning Structure to Improve Quality of Polygonal Containers (다각형 용기의 품질 향상을 위한 딥러닝 구조 개발)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.3
    • /
    • pp.493-500
    • /
    • 2021
  • In this paper, we propose the development of a deep learning structure to improve the quality of polygonal containers. The structure consists of convolution layers, a bottleneck layer, fully connected layers, and a softmax layer. A convolution layer obtains a feature image by applying several 3x3 convolution filters to the input image or to the feature image of the previous layer. The bottleneck layer selects only the optimal features from the feature image extracted by the convolution layers, reducing the channels with a 1x1 convolution followed by ReLU and then performing a 3x3 convolution with ReLU. A global average pooling operation performed after the bottleneck layer further reduces the size of the feature image while keeping only the optimal features. The output is then produced through six fully connected layers. The softmax layer multiplies the values of the input nodes by the weights to each target node, sums them, and converts the result into a value between 0 and 1 through an activation function. After training is completed, the recognition process classifies non-circular glass bottles by acquiring images with a camera, detecting their positions, and classifying them with the trained deep learning model, as in the training process. In a performance evaluation at an authorized testing institute, the structure achieved 99% good/defective discrimination accuracy, a level comparable to the world's best, and the average inspection time of 1.7 seconds was within the operating time standards of production processes using non-circular machine vision systems. Therefore, the effectiveness of the proposed deep learning structure for improving the quality of polygonal containers was demonstrated.
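
One possible reading of the described structure is sketched below in PyTorch. The 3x3 convolutions, the 1x1-then-3x3 bottleneck, global average pooling, six fully connected layers, and the softmax output follow the abstract; the channel widths, input size, and two-class output are placeholder assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class ContainerNet(nn.Module):
    """Sketch of the described structure; widths and class count are assumed."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            # bottleneck: 1x1 channel reduction + 3x3 convolution, both with ReLU
            nn.Conv2d(64, 16, kernel_size=1), nn.ReLU(),
            nn.Conv2d(16, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global average pooling
            nn.Flatten(),
        )
        fc, width = [], 64
        for _ in range(5):                      # five hidden FC layers ...
            fc += [nn.Linear(width, 64), nn.ReLU()]
            width = 64
        fc += [nn.Linear(width, num_classes)]   # ... plus the output layer = six
        self.classifier = nn.Sequential(*fc)

    def forward(self, x):
        logits = self.classifier(self.features(x))
        return torch.softmax(logits, dim=1)     # values between 0 and 1

# Quick shape check on a dummy 128x128 RGB image.
print(ContainerNet()(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 2])
```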

Could a Product with Diverged Reviews Ratings Be Better?: The Change of Consumer Attitude Depending on the Converged vs. Diverged Review Ratings and Consumer's Regulatory Focus (평점이 수렴되지 않는 리뷰의 제품들이 더 좋을 수도 있을까?: 제품 리뷰평점의 분산과 소비자의 조절초점 성향에 따른 소비자 태도 변화)

  • Yi, Eunju;Park, Do-Hyung
    • Knowledge Management Research
    • /
    • v.22 no.3
    • /
    • pp.273-293
    • /
    • 2021
  • Due to the COVID-19 pandemic, e-commerce has grown rapidly. The contact-less communication culture of everyday life opened the e-commerce market even to consumers who would otherwise hesitate to purchase and pay through an electronic device without any personal contact or without seeing and touching the real product, and consumers who have experienced the easy access and convenience of online purchasing are likely to keep taking advantage of it even after the pandemic. During this transformation, however, the information available to consumers has shrunk to a flat screen and become limited to the visual channel. To provide differentiated and competitive product information, companies are adopting AR/VR and streaming technologies, but reviews from honest users remain important: they are regarded as being as persuasive as the well-refined product information produced by the company's marketing professionals, and they give companies useful insight for product development, marketing, and sales strategies. From the consumer's point of view, then, if review ratings are widely diverged, how do consumers process the review information before purchase? Are non-converged ratings always unreliable and worthless? In this study, we analyze how the consumer's regulatory focus moderates the processing of diverged information. The experiment was designed as a 2x2 factorial study of how the variance of product review ratings (high vs. low) for cosmetics affects product attitude depending on the consumer's regulatory focus (prevention focus vs. promotion focus). The results show that prevention-focused consumers reported a more favorable product attitude when the review variance was low, whereas promotion-focused consumers reported a more favorable attitude when the variance was high. This thesis thus explains that, even for products with exactly the same average rating, converged and diverged reviews can be interpreted differently depending on the customer's regulatory focus. Theoretically, the paper elucidates the mechanism by which consumers process non-converged information. In practice, as reviews and sales records accumulate, companies can apply this knowledge, as one form of big-data-based knowledge management, to reinforce the customer experience by providing personalized and optimized product and review information.
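
The 2x2 factorial design with a moderation effect is typically tested with a two-way ANOVA; a minimal sketch of that analysis layout is shown below. The data frame contains synthetic numbers for illustration only (they merely mimic the reported crossover pattern and are not the study's data), and the variable names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic responses: prevention-focused attitudes peak at low variance,
# promotion-focused attitudes peak at high variance (illustration only).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "variance": ["low", "low", "high", "high"] * 10,
    "focus":    ["prevention", "promotion", "prevention", "promotion"] * 10,
    "attitude": [5.2, 3.9, 3.8, 5.4] * 10,
})
df["attitude"] += rng.normal(0, 0.3, len(df))

model = smf.ols("attitude ~ C(variance) * C(focus)", data=df).fit()
print(anova_lm(model, typ=2))   # the interaction term tests the moderation effect
```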