• Title/Summary/Keyword: image analysis system


A Study on Damage factor Analysis of Slope Anchor based on 3D Numerical Model Combining UAS Image and Terrestrial LiDAR (UAS 영상 및 지상 LiDAR 조합한 3D 수치모형 기반 비탈면 앵커의 손상인자 분석에 관한 연구)

  • Lee, Chul-Hee;Lee, Jong-Hyun;Kim, Dal-Joo;Kang, Joon-Oh;Kwon, Young-Hun
    • Journal of the Korean Geotechnical Society / v.38 no.7 / pp.5-24 / 2022
  • Current performance evaluation of slope anchors qualitatively assesses the physical bonding between the anchor head and the ground, as well as cracks or breakage of the anchor head. Because such evaluation does not measure these primary factors quantitatively, time-dependent management of the anchors is almost impossible. This study evaluates an SfM-based 3D numerical model that combines UAS images with terrestrial LiDAR to collect numerical data on the damage factors, and uses the data for quantitative maintenance of anchor systems installed on slopes. The UAS 3D model, which often shows relatively low precision in the z-coordinate for vertical objects such as slopes, is combined with terrestrial LiDAR scan data to improve the accuracy of the z-coordinate measurement. After validating the system, a field test was conducted with ten anchors installed on a slope, their heads deliberately damaged. The damage (cracks, breakage, and rotational displacement) was detected and numerically evaluated through the orthogonal projection of the measurement system. The results show that at 8K resolution the system can detect cracks with apertures smaller than 0.3 mm within an error range of 0.05 mm. The system also successfully measured the volume of the damaged parts, showing that the maximum damaged area of an anchor head was within 3% of the original design guideline. The ground adhesion at the anchor head, for which the z-coordinate is critical, was almost impossible to measure with the UAS 3D model alone because of its blind spots; with the combined system, elevation differences between the anchor bottom and the irregular ground surface were identified, and the average over 20 locations was taken as the ground adhesion (see the sketch below). Rotation angles and displacements of the anchor head smaller than 1" were also detected. These observations show that the 3D numerical model can yield quantitative data on anchor damage, and such data collection could build a database serving as a fundamental resource for quantitative anchor damage evaluation in the future.
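Below is a minimal sketch, with assumed helper names and synthetic points rather than the study's fused model, of how the ground-adhesion measurement described above can be computed: sample the fused UAS + LiDAR cloud around the anchor, take the local ground height at each of the 20 locations, and average the elevation gaps.

```python
# Illustrative only: names and data are assumptions, not the paper's code.
import numpy as np

def ground_adhesion_gap(ground_pts, anchor_bottom_z, sample_xy, radius=0.05):
    """Average elevation difference (m) between the anchor bottom and the
    irregular ground surface at the given XY sampling locations."""
    gaps = []
    for x, y in sample_xy:
        # Points of the fused cloud within `radius` of the sampling location.
        d = np.hypot(ground_pts[:, 0] - x, ground_pts[:, 1] - y)
        local = ground_pts[d < radius]
        if local.size == 0:
            continue  # no ground return here; skip this location
        local_ground_z = np.median(local[:, 2])  # robust local ground height
        gaps.append(anchor_bottom_z - local_ground_z)
    return float(np.mean(gaps))

# Example: 20 sampling locations around an anchor head, as in the study.
rng = np.random.default_rng(0)
cloud = rng.uniform([0, 0, 10.0], [1, 1, 10.05], size=(5000, 3))
xy = rng.uniform(0.1, 0.9, size=(20, 2))
print(f"mean gap: {ground_adhesion_gap(cloud, 10.08, xy):.3f} m")
```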

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming increasingly important. System monitoring data is multidimensional time series data, and handling it requires considering both its multidimensional character and its time series character. For multidimensional data, correlation between variables must be considered; existing probability-based, linear, and distance-based methods degrade under the curse of dimensionality. Time series data, in turn, is typically preprocessed with sliding windows and time series decomposition for autocorrelation analysis, techniques which themselves increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used early on, and machine learning and artificial neural network techniques are now actively studied. Statistically based methods are difficult to apply to non-homogeneous data and do not detect local outliers well. Regression-based methods learn a regression model under parametric-statistical assumptions and flag anomalies by comparing predicted and actual values; their performance drops when the model is weak or the data contains noise and outliers, and they require training data free of noise and outliers. An autoencoder, an artificial neural network trained to reproduce its input at its output, has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that satisfies neither probability-distribution nor linearity assumptions, and it can be trained without labels. However, it is limited in identifying local outliers in multidimensional data, and the dimensionality of the input grows sharply with time series preprocessing. This study proposes CMAE (Conditional Multimodal Autoencoder), which improves anomaly detection performance by considering local outliers and time series characteristics. First, a Multimodal Autoencoder (MAE) is applied to mitigate the local-outlier limitation on multidimensional data. Multimodal architectures are commonly used to learn different types of input, such as voice and image; the modals share the autoencoder's bottleneck and thereby learn their correlations. Second, a Conditional Autoencoder (CAE) is used to learn the characteristics of time series data without increasing the data's dimensionality. Conditional inputs are usually categorical variables, but here time is used as the condition so that periodicity can be learned (see the sketch below). The proposed CMAE was verified against a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). Reconstruction performance over 41 variables was measured for the proposed and comparison models. Reconstruction quality differed by variable: the loss was small for the Memory, Disk, and Network modals in all three autoencoders, the Process modal showed no significant difference across the three models, and the CPU modal performed best under CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators the ranking was CMAE, MAE, then UAE. In particular, recall reached 0.9828 for CMAE, showing that it detects almost all anomalies; accuracy improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. Practically, the proposed model has advantages beyond raw performance: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the dimensionality they add slows inference. The proposed model avoids both, making it easy to apply in practice with respect to inference speed and model management.
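The following is a minimal PyTorch sketch of the CMAE idea as summarized above, not the authors' implementation: modality-specific encoders and decoders share one bottleneck, and a cyclic time encoding is concatenated as the condition. The layer sizes, the five-modal split of the 41 variables, and the sine/cosine time encoding are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims, cond_dim=2, hidden=32, bottleneck=16):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d + cond_dim, hidden), nn.ReLU())
             for d in modal_dims])
        # Shared bottleneck forces the modals to learn a joint representation.
        self.bottleneck = nn.Linear(hidden * len(modal_dims), bottleneck)
        self.expand = nn.Linear(bottleneck + cond_dim, hidden * len(modal_dims))
        self.decoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, d))
             for d in modal_dims])

    def forward(self, modals, cond):
        # modals: list of (batch, d_i) tensors; cond: (batch, cond_dim) time encoding
        hs = [enc(torch.cat([m, cond], dim=1)) for enc, m in zip(self.encoders, modals)]
        z = self.bottleneck(torch.cat(hs, dim=1))
        h = self.expand(torch.cat([z, cond], dim=1)).chunk(len(self.decoders), dim=1)
        return [dec(hi) for dec, hi in zip(self.decoders, h)]

# Time-of-day as a cyclic condition, so midnight and 23:59 stay close.
def time_condition(hour_frac):  # hour_frac in [0, 1)
    t = 2 * torch.pi * hour_frac
    return torch.stack([torch.sin(t), torch.cos(t)], dim=1)

# Anomaly score = total reconstruction error across modals.
model = CMAE([8, 8, 8, 8, 9])  # e.g. 41 variables split over five modals
x = [torch.randn(4, d) for d in [8, 8, 8, 8, 9]]
c = time_condition(torch.rand(4))
recon = model(x, c)
score = sum(((xi - ri) ** 2).mean(dim=1) for xi, ri in zip(x, recon))
print(score)  # higher score = more anomalous sample
```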

Reproducibility Evaluation of Deep inspiration breath-hold(DIBH) technique by respiration data and heart position analysis during radiation therapy for Left Breast cancer patients (좌측 유방암 환자의 방사선치료 중 환자의 호흡과 심장 위치 분석을 통한 Deep inspiration breath-hold(DIBH) 기법의 재현성 평가)

  • Jo, Jae Young;Bae, Sun Myung;Yoon, In Ha;Lee, Ho Yeon;Kang, Tae Young;Baek, Geum Mun;Bae, Jae Beom
    • The Journal of Korean Society for Radiation Therapy / v.26 no.2 / pp.297-303 / 2014
  • Purpose: The purpose of this study is to evaluate the reproducibility of the deep inspiration breath-hold (DIBH) technique, using respiration data and heart position analysis, in radiation therapy for left breast cancer patients. Materials and Methods: Free breathing (FB) and DIBH computed tomography (CT) images of three left breast cancer patients were used to evaluate heart volume and dose in the treatment planning system (Eclipse version 10.0, Varian, USA). The signal of the RPM (Real-time Position Management) Respiratory Gating System (version 1.7.5, Varian, USA) was used to evaluate the stability of DIBH respiration during breast radiation therapy. Images for measuring heart position were acquired with the electronic portal imaging device (EPID) in cine acquisition mode, and the heart distance at three measuring points (A, B, C) on each image was measured with Offline Review (ARIA 10, Varian, USA). Results: Significant differences were found between the FB and DIBH plans for mean heart dose (6.82 vs. 1.91 Gy), heart V30 (68.57 vs. 8.26 cm³), and V20 (76.43 vs. 11.34 cm³). The standard deviation of the DIBH signal was ±0.07 cm, ±0.04 cm, and ±0.13 cm for the three patients, respectively. The maximum and minimum heart distances on the EPID images were 0.32 cm and 0.00 cm. Conclusion: The DIBH technique is very useful in radiation therapy for left breast cancer patients, both for establishing the treatment plan and for reducing heart dose. In addition, the cine acquisition mode of the EPID is beneficial for evaluating the reproducibility of DIBH (a small sketch of the two checks follows).
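A small illustrative sketch of the two reproducibility checks described above, with made-up numbers in place of the study's measurements: the standard deviation of the RPM breath-hold signal, and the minimum/maximum heart distance measured at points A, B, C on EPID cine images.

```python
import numpy as np

# Breath-hold amplitude samples (cm) recorded by the gating system (illustrative).
dibh_signal = np.array([1.52, 1.48, 1.55, 1.50, 1.47, 1.53])
print(f"DIBH signal SD: ±{dibh_signal.std(ddof=1):.2f} cm")

# Heart distance (cm) into the field at points A, B, C per cine image (illustrative).
heart_dist = np.array([[0.10, 0.00, 0.22],
                       [0.05, 0.00, 0.32]])
print(f"max {heart_dist.max():.2f} cm, min {heart_dist.min():.2f} cm")
```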

Airborne Hyperspectral Imagery availability to estimate inland water quality parameter (수질 매개변수 추정에 있어서 항공 초분광영상의 가용성 고찰)

  • Kim, Tae-Woo;Shin, Han-Sup;Suh, Yong-Cheol
    • Korean Journal of Remote Sensing / v.30 no.1 / pp.61-73 / 2014
  • This study reviewed the use of airborne hyperspectral imagery (A-HSI) for water quality estimation and tested estimation of part of the Han River's water quality (especially suspended solids) against available in-situ data. Water quality was estimated with two methods: one uses observation data, namely downwelling radiance to the water surface and scattering and reflectance within the water body; the other is linear regression between in-situ water quality measurements and upwelling data, i.e. at-sensor radiance (or reflectance). Both methods yield meaningful remote sensing estimates, but results depend heavily on the auxiliary datasets of in-situ water quality measurements and water body scattering measurements. The test covered a part of the Han River downstream of Paldang Dam, applying linear regression between AISA Eagle hyperspectral sensor data and in-situ water quality measurements. For the most meaningful band, the regression was SS = -24.847 + 0.013 L560, with L560 the radiance at 560 nm, and R-squared 0.985. For comparison with multispectral imagery (MSI), simulated Landsat TM was produced by spectral resampling; its regression was SS = -55.932 + 33.881 (TM1/TM3) in radiance, with R-squared 0.968 (both models are restated in the sketch below). At the same location, the in-situ suspended solids (SS) concentration was about 3.75 mg/l, the A-HSI estimate about 3.65 mg/l, and the MSI estimate about 5.85 mg/l, showing a tendency to overestimate when using MSI. To improve practical value and precision, the sun glint effect across the whole image must be minimized, flight plans must account for solar altitude angle, and good pre-processing and calibration systems are needed. Through the literature review and a test adopting general methods, we found limitations and restrictions such as precise atmospheric correction, the number of water quality measurement samples, the selection of spectral bands from the A-HSI, the choice of an adequate linear regression model, and quantitative calibration/validation methods.
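The sketch below restates the two regression models reported above and shows how such a model is fitted from in-situ samples; the radiance and SS arrays are placeholders, not the study's data.

```python
import numpy as np

def ss_from_hsi(L560):
    """Suspended solids (mg/l) from 560 nm at-sensor radiance (A-HSI model)."""
    return -24.847 + 0.013 * L560

def ss_from_msi(tm1, tm3):
    """Suspended solids (mg/l) from a simulated Landsat TM band ratio."""
    return -55.932 + 33.881 * (tm1 / tm3)

# Fitting such a model from in-situ samples is one call with numpy.
L560 = np.array([2150.0, 2234.0, 2190.0])   # placeholder radiances
ss_insitu = np.array([3.1, 4.2, 3.6])       # placeholder in-situ SS (mg/l)
slope, intercept = np.polyfit(L560, ss_insitu, 1)
print(f"SS = {intercept:.3f} + {slope:.4f} * L560")
```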

A Study on Accuracy and Usefulness of In-vivo Dosimetry in Proton Therapy (양성자 치료에서 생체 내 선량측정 검출기(In-vivo dosimety)의 정확성과 유용성에 관한 연구)

  • Kim, Sunyoung;Choi, Jaehyock;Won, Huisu;Hong, Joowan;Cho, Jaehwan;Lee, Sunyeob;Park, Cheolsoo
    • Journal of the Korean Society of Radiology / v.8 no.4 / pp.171-180 / 2014
  • In this study, the authors measured skin dose by delivering the actual dose to TLDs (thermoluminescence dosimeters) and EBT3 film used as in-vivo dosimeters, after preparing a phantom treatment plan identical to an actual patient's, because erythema or dermatitis frequently occurs on the skin of medulloblastoma patients receiving proton therapy. The aim was to determine, by comparing the measured dose values with the planned skin dose, whether these detectors are useful for skin dosimetry. A CT scan from the brain to the pelvis was performed with a phantom placed in the CSI (craniospinal irradiation) setup position for medulloblastoma, and after proton treatment planning the treatment isocenter was aligned using the DIPS (Digital Image Positioning System) in the treatment room. Pre-calibrated TLDs and EBT3 film were attached alternately at seven points in total: the five treatment isocenter points where the proton beam entered, and two marker points visible on the phantom during CT scanning. The planned proton beam was then delivered ten times. Comparing the averages of the ten repeated measurements with the skin dose computed in the treatment planning system, six of the seven points (one point could not be measured accurately because of its difficult measurement position) showed absolute dose values within ±2% for both TLD and EBT3 film; a sketch of this acceptance check follows. In conclusion, this study confirmed the clinical usefulness of TLD and EBT3 film for entrance skin dose measurement in the first proton therapy performed in Korea.
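A short sketch of the ±2% acceptance check described above, comparing each detector's mean of ten irradiations with the planned skin dose; the readings are illustrative values, not the study's measurements.

```python
import numpy as np

def within_tolerance(measured, planned, tol=0.02):
    """True if the mean measured dose is within ±tol of the planned dose."""
    diff = (np.mean(measured) - planned) / planned
    return abs(diff) <= tol, diff

tld_readings = np.array([101.2, 99.8, 100.5, 100.9, 99.5,
                         100.1, 100.7, 99.9, 100.3, 100.6])  # cGy, 10 repeats
ok, diff = within_tolerance(tld_readings, planned=100.0)
print(f"deviation {diff:+.2%} -> {'pass' if ok else 'fail'}")
```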

Correlation analysis of radiation therapy position and dose factors for left breast cancer (좌측 유방암의 방사선치료 자세와 선량인자의 상관관계 분석)

  • Jeon, Jaewan;Park, Cheolwoo;Hong, Jongsu;Jin, Seongjin;Kang, Junghun
    • The Journal of Korean Society for Radiation Therapy / v.29 no.1 / pp.37-48 / 2017
  • Purpose: The most basic requirement of radiation therapy is to prevent unnecessary exposure of normal tissue, and in breast cancer treatment it is important to evaluate the dose delivered to the lung and heart. This study therefore compares the dose factors of normal tissue according to the radiation treatment position and, through correlation analysis, seeks an effective radiation treatment position for breast cancer. Materials and Methods: Computed tomography was performed on 30 patients with left breast cancer in the supine and prone positions, and treatment plans were created in the Eclipse Treatment Planning System (ver. 11). Using DVHs, the dose delivered to normal tissue was compared by position. The dose factors of each normal tissue were analyzed with SPSS (ver. 18); correlations between variables were examined, and associations were tested with independent-samples t-tests (a scipy sketch of both analyses follows the abstract). Finally, HI and CI values for the supine and prone positions were compared using MIRADA RTx (ver. ad 1.6). Results: For the lung, the supine plans gave V20 of 16.5 ± 2.6%, V30 of 13.8 ± 2.2%, and a mean dose of 779.1 ± 135.9 cGy (absolute value); the prone plans gave 3.1 ± 2.2%, 1.8 ± 1.7%, and 241.4 ± 138.3 cGy, respectively. The prone position showed lower doses overall, with an average of 537.7 cGy less delivered. For the heart, V30 was 8.1 ± 2.6% vs. 5.1 ± 2.5% and mean dose 594.9 ± 225.3 vs. 408 ± 183.6 cGy for supine and prone, respectively. In the statistical analysis, Cronbach's alpha was 0.563. Correlation analysis between position and the lung dose factors gave coefficients of about 0.89 or more, indicating high correlation; for the heart, the correlations were weaker, at 0.488 for V30 and 0.418 for mean dose. Independent-samples t-tests showed that the dose factors of the lung and heart differed significantly by position at the 99% confidence level. Conclusion: State-of-the-art linear accelerators and a variety of treatment planning technologies are being developed, and their basic premise is the protection of normal tissue around the PTV. Treating a breast cancer patient in the prone position takes more time and raises setup reproducibility problems; nevertheless, as the results show, the prone position can reduce the dose delivered to the lungs and the heart. In conclusion, given sufficient treatment time and accurate position verification, radiation treatment in the prone position will be more effective for the patient.
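A minimal sketch of the reported statistics using scipy in place of SPSS: Pearson correlation between treatment position and a dose factor, and an independent-samples t-test between the supine and prone groups. The arrays are placeholders, not the 30-patient data.

```python
import numpy as np
from scipy import stats

supine_v20 = np.array([16.1, 18.0, 15.2, 17.3])   # lung V20 (%), supine
prone_v20 = np.array([3.0, 4.8, 1.9, 2.8])        # lung V20 (%), prone

# Code position as 0 = supine, 1 = prone and correlate with the dose factor.
position = np.r_[np.zeros_like(supine_v20), np.ones_like(prone_v20)]
v20 = np.r_[supine_v20, prone_v20]
r, p = stats.pearsonr(position, v20)
t, p_t = stats.ttest_ind(supine_v20, prone_v20)
print(f"r = {r:.2f} (p = {p:.3f}), t = {t:.2f} (p = {p_t:.3f})")
```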


A Study on the UIC(University & Industry Collaboration) Model for Global New Business (글로벌 사업 진출을 위한 산학협력 협업촉진모델: 경남 G대학 GTEP 사업 실험사례연구)

  • Baek, Jong-ok;Park, Sang-hyeok;Seol, Byung-moon
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.10 no.6 / pp.69-80 / 2015
  • A well-equipped environment and system for promoting collaboration are very important for competitiveness. What factors, then, lead members of an organization to collaborate? We define collaboration as many people working together, sharing information and processes to pursue common goals and improve productivity. The factors that promote collaboration are a shared vision, organizational principles and rules that reflect that vision, online system development, and communication methods. First, the more concretely the vision is shared and the more members sympathize with it, the more active and voluntary their participation in the organization's activities becomes. Second, when all members accept the organization's rules and principles as a united whole, good performance follows; sharing business activities also leads to self-development and, made regular, helps create a team environment and atmosphere in which collaboration can thrive. Third, systematic construction of an online collaboration system makes work efficient and rapid. From the student team and corporation A, we learned that cloud services and social media can deliver low-cost, high-efficiency services; as the latest information technology changes, continuing education must be provided so that members can actively use the organization's systems. Fourth, actively communicating the company's activities to people both inside and outside the organization is very important for changing the company's image and creating corporate performance, and efforts to communicate through social media, reflecting the latest trends, are needed. For the collaboration-promotion model to develop systematically, each organizational role has its steps. First, the chief executive officer must form a firm and clear vision and propagate it so that members believe in it, sympathize with it, and feel a sense of belonging. Second, middle managers must systematically propagate the CEO's vision and establish a system of organizational rules and principles. Third, general staff must internalize the company's vision and adhere to their roles inside and outside the company. The purpose of this study was to understand the factors that promote collaboration in successful organizations through a strategic alignment model based on the golden circle, and to derive success factors through case analysis of student teams using smart-work tools and business know-how that reflect the latest trends in information technology. This is expected to provide a foundation for future empirical studies.


A Study on a Quantified Structure Simulation Technique for Product Design Based on Augmented Reality (제품 디자인을 위한 증강현실 기반 정량구조 시뮬레이션 기법에 대한 연구)

  • Lee, Woo-Hun
    • Archives of design research / v.18 no.3 s.61 / pp.85-94 / 2005
  • Most product designers now use 3D CAD systems as an indispensable design tool, and many new products are developed through a concurrent engineering process. However, it is very difficult for novice designers to get a sense of reality from modeling objects shown on a computer screen. This intangibility problem comes from the lack of haptic interaction and of contextual information about real space, because designers do 3D modeling work only in the virtual space of the 3D CAD system. To address this problem, this research investigates the possibility of interactive quantified structure simulation for product design using AR (augmented reality), which can register a 3D CAD model in real space. We built a quantified structure simulation system based on AR and conducted a series of experiments to measure how accurately people perceive and adjust the size of virtual objects under varied conditions in the AR environment. Participants adjusted a virtual cube to a reference real cube within 1.3% relative error (5.3% relative standard deviation), strong evidence that participants can perceive the size of a virtual object very accurately (the computation is sketched below). We also found that it is easier to perceive the size of a virtual object when plenty of real reference objects are present than when there are few, and when using an LCD panel rather than an HMD. As a case study exploring potential applications, we applied the simulation system to identify preference characteristics for the appearance design of a home-service robot. Participants' preferred robot appearances varied significantly, presumably because robot images lack typicality, and several characteristic groups were segmented by cluster analysis. Interestingly, participants showed significantly different preferences between robots with arms and armless robots, and there was a very strong correlation between robot height and arm length, as in the human body.
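A small sketch of the size-accuracy measure used above: relative error between each participant's adjusted virtual cube and the reference real cube, aggregated over participants. The numbers are illustrative, not the experiment's data.

```python
import numpy as np

reference_mm = 100.0
adjusted_mm = np.array([100.9, 99.1, 101.5, 98.8, 100.4])  # per participant

rel_err = (adjusted_mm - reference_mm) / reference_mm
print(f"mean relative error {rel_err.mean():+.1%}, SD {rel_err.std(ddof=1):.1%}")
```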


A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. AI technologies have already shown abilities equal or superior to people's in many fields, including image and speech recognition. Because AI can be utilized across medical, financial, manufacturing, service, and education fields, many efforts are being made to identify current technology trends and analyze its development directions. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and technologies and services that use them have increased rapidly; this is considered one of the main reasons for AI's fast development. The spread of the technology also owes much to open source software developed by major global companies supporting natural language recognition, speech recognition, and image recognition. This study therefore aimed to identify the practical trend of AI technology development by analyzing AI-related OSS projects developed through the online collaboration of many parties. We searched and collected a list of major AI-related projects generated from 2000 to July 2018 on Github, and examined the development trends of major technologies in detail by applying text mining to topic information, which characterizes the collected projects and their technical fields. The number of software development projects per year was under 100 until 2013, then rose to 229 projects in 2014 and 597 in 2015. The number of AI-related open source projects increased especially rapidly in 2016 (2,559 projects); 14,213 projects were initiated in 2017, almost four times the total generated from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, using the appearance frequency of topics to indicate the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying continuous OSS development. Until 2015 the programming languages Python, C++, and Java were among the ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten, replaced by platforms supporting AI algorithm development such as TensorFlow and Keras. Reinforcement learning algorithms and convolutional neural networks, used across various fields, also appeared frequently. Topic network analysis showed that the topics with the highest degree centrality were similar to those with the highest appearance frequency; the main difference was that visualization and medical imaging topics moved to the top of the list despite not being there from 2009 to 2012, indicating that OSS was being developed to apply AI in the medical field. Conversely, although computer vision was in the appearance-frequency top ten from 2013 to 2015, it was not in the degree-centrality top ten; otherwise the two lists were similar, with slight rank changes for convolutional neural networks and reinforcement learning. Examining both measures (the sketch below shows how they are computed), machine learning had the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic had low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both it and machine learning ranked high on both measures. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease, with relatively low appearance frequency and degree centrality compared with the topics above. These results make it possible to identify the fields in which AI technologies are being actively developed, and they can serve as a baseline dataset for more empirical analysis of future technology trends.
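A minimal sketch, on a toy topic graph rather than the collected Github data, of the two trend measures used above: topic appearance frequency and degree centrality of the topic co-occurrence network (networkx assumed).

```python
from collections import Counter
from itertools import combinations
import networkx as nx

# Toy data: each project is tagged with a list of topics.
projects = [
    ["machine-learning", "deep-learning", "tensorflow"],
    ["machine-learning", "python", "computer-vision"],
    ["deep-learning", "tensorflow", "keras"],
]

# Appearance frequency: how often each topic tags a project.
freq = Counter(t for topics in projects for t in topics)

# Co-occurrence network: topics are nodes, shared projects create edges.
g = nx.Graph()
for topics in projects:
    g.add_edges_from(combinations(topics, 2))
centrality = nx.degree_centrality(g)

for topic, count in freq.most_common(3):
    print(topic, count, round(centrality[topic], 2))
```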

Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook;Kim, Tae-Hwan;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.47-60 / 2012
  • Video data is unstructured and has a complex structure. As efficient management and retrieval of video data become more important, studies on video parsing based on the visual features of video content aim to reconstruct video data into a meaningful structure. Early studies focused on splitting video into shots, but detecting shot boundaries defined by physical boundaries does not consider the semantic association of the video. Recently, studies have actively used clustering methods to group semantically associated video shots into video scenes defined by semantic boundaries. Previous scene detection work clusters video shots using similarity measures that depend mainly on color features. However, correctly identifying a shot or scene and detecting gradual transitions such as dissolves, fades, and wipes is difficult, because the color features of video data are noisy and change abruptly when an unexpected object intervenes. To solve these problems, this paper proposes the Scene Detector using Color histogram, corner Edge, and Object color histogram (SDCEO), which clusters similar shots belonging to the same event based on visual features including the color histogram, the corner edge, and the object color histogram. SDCEO is notable for combining the edge feature with the color feature, and as a result it effectively detects gradual as well as abrupt transitions. SDCEO consists of the Shot Bound Identifier and the Video Scene Detector. The Shot Bound Identifier comprises a Color Histogram Analysis step and a Corner Edge Analysis step. In the Color Histogram Analysis step, SDCEO organizes shot boundaries using the color histogram feature. The color histogram, which records the percentage of each quantized color among all pixels in a frame, is chosen for its good performance, as reported in other work on content-based image and video analysis. SDCEO joins associated sequential frames into shot boundaries by measuring the similarity of the color histograms between frames (a minimal sketch of this step follows the abstract). In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature, comparing the corner edges between the last frame of the previous shot boundary and the first frame of the next. In the Key-frame Extraction step, SDCEO measures the similarity of each frame to all frames in the same shot boundary using histogram Euclidean distance and selects the most similar frame as the key-frame. The Video Scene Detector clusters associated shots belonging to the same event with hierarchical agglomerative clustering based on visual features including the color histogram and the object color histogram, repeating the clustering until the similarity distance between shot boundaries is less than a threshold h. We built a prototype of SDCEO and carried out experiments on manually constructed baseline data; the results, a precision of 93.3% for shot boundary detection and 83.3% for video scene detection, are satisfactory.
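A minimal OpenCV sketch of the Color Histogram Analysis step as described above: compare quantized HSV histograms of consecutive frames and mark a shot boundary when the similarity drops. The 0.7 threshold and the histogram settings are illustrative assumptions, not the paper's configuration.

```python
import cv2

def shot_boundaries(video_path, threshold=0.7, bins=16):
    cap = cv2.VideoCapture(video_path)
    prev_hist, boundaries, idx = None, [0], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Quantized color histogram over hue and saturation.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if sim < threshold:  # abrupt histogram change -> shot boundary
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries

print(shot_boundaries("video.mp4"))  # placeholder path
```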