• Title/Summary/Keyword: View Object

Search Result 931

Comparative Study on Accuracy and Usefulness of Calibration Using CT T.O.D (단층촬영영상을 이용한 T.O.D Calibration의 정확성과 유용성에 관한 비교연구)

  • Seo, Jeong-Beom;Kim, Dong-Hyeon;Lee, Jeong-Beom
    • Korean Journal of Digital Imaging in Medicine
    • /
    • v.13 no.1
    • /
    • pp.39-48
    • /
    • 2011
  • Using table-object distance (TOD) values measured from CT tomographic images, this study compares the accuracy and usefulness of TOD Calibration against Catheter Calibration for measuring vessel diameter. In patients who underwent abdominal CT examination, the diameter of the superior mesenteric artery and the height from the table to the vessel were measured on the abdominal tomographic images using a PACS viewer. During angiography performed with a 5 Fr angio-catheter, the table-to-vessel height measured in the PACS viewer was entered as the TOD Calibration value and the size of the superior mesenteric artery was measured. For Catheter Calibration, the known size of the angio-catheter used in angiography was entered as the calibration value and the artery was measured again. Accuracy was computed from the measured data and the two methods were compared; statistical analysis was performed with SPSS. TOD Calibration accuracy was 96.53% (standard deviation 0.03829) and Catheter Calibration accuracy was 92.91% (standard deviation 0.05085), a statistically significant difference (p = 0). Differences according to age and gender were not statistically significant (p > 0.05). The R-squared value was 88.8% for TOD Calibration and 75.5% for Catheter Calibration. Both methods showed high accuracy, but TOD Calibration, which uses the table-object distance measured from CT images, was more accurate than Catheter Calibration, and the difference between the two was statistically significant (p = 0).

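The calibration idea in the abstract above rests on correcting geometric magnification with a known source-object distance. A minimal sketch of that correction, assuming a simple point-source projection geometry; all numeric values (source-image distance, table-to-detector gap) are hypothetical examples, not from the paper:

```python
# Illustrative sketch (not the paper's exact formulation): correcting the
# geometric magnification of a projected vessel using the table-object
# distance (TOD) measured on CT.

def magnification(sid_mm, table_to_detector_mm, tod_mm):
    """Geometric magnification M = SID / SOD, where the source-object
    distance SOD is the source-image distance minus the object's height
    above the detector plane (table-to-detector gap + TOD)."""
    oid = table_to_detector_mm + tod_mm   # object-image distance
    sod = sid_mm - oid                    # source-object distance
    return sid_mm / sod

def true_diameter(measured_mm, m):
    """Divide the projected size by the magnification factor."""
    return measured_mm / m

m = magnification(sid_mm=1100.0, table_to_detector_mm=50.0, tod_mm=120.0)
d = true_diameter(measured_mm=8.5, m=m)   # corrected vessel diameter
```

The projected diameter is always larger than the true one (M > 1), which is why an uncalibrated measurement overestimates vessel size.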

3D virtual shopping mall implementation based on the rich media technology (리치미디어 기술 기반의 3D 가상쇼핑몰 구현)

  • Lee, Jun;Kang, Eung-Kwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.2
    • /
    • pp.229-238
    • /
    • 2007
  • A cyber shopping mall is a new form of business in cyberspace based on Internet technology, and its economic activity is expected to grow well beyond that of the past physical-object economy. We therefore propose an interactive Internet shopping mall system. The proposed system is a 3D interactive cyber shopping mall based on rich media technology, which distinguishes it from existing shopping malls. When a customer clicks an object they want to buy while touring the cyber space, the system presents it in a hyper view, where the object can be moved, rotated, zoomed in and out, and played. This increases the customer's interest and sense of immersion, and the realistic 3D objects, departing from the conventional 2D image form, contribute greatly to customer engagement.

Augmented Reality Framework to Visualize Information about Construction Resources Based on Object Detection (웨어러블 AR 기기를 이용한 객체인식 기반의 건설 현장 정보 시각화 구현)

  • Pham, Hung;Nguyen, Linh;Lee, Yong-Ju;Park, Man-Woo;Song, Eun-Seok
    • Journal of KIBIM
    • /
    • v.11 no.3
    • /
    • pp.45-54
    • /
    • 2021
  • Augmented reality (AR) has recently become an attractive technology in the construction industry, where it can play a critical role in realizing smart construction concepts. AR has great potential to help construction workers access digitalized design and construction information more flexibly and efficiently. Though several AR applications have been introduced to enhance on-site and off-site tasks, few are utilized in actual construction fields. This paper proposes a new AR framework that provides on-site managers with an easy way to access information about construction resources such as workers and equipment. The framework records video with the camera installed on a wearable AR device and streams it to a server equipped with high-performance processors, which runs an object detection algorithm on the streamed video in real time. The detection results are sent back to the AR device so that menu buttons are visualized on the detected objects in the user's view. A user can access the information about a worker or piece of equipment appearing in the view by touching the menu button visualized on that resource. This paper details the implementation of the parts of the framework that require data transmission between the AR device and the server, and discusses the accompanying issues and the feasibility of the proposed framework.
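The round trip the abstract describes (frame in, detections out, buttons overlaid) can be sketched schematically. This is a toy stand-in, not the paper's implementation: the detector is a placeholder stub, and no real network transport or AR API is used:

```python
# Highly simplified sketch of the client-server round trip described above.
# The AR client sends a frame; the server detects construction resources
# and returns labelled boxes to render as touchable menu buttons.

def detect_resources(frame):
    """Placeholder for the server-side object detector: returns labelled
    bounding boxes (x, y, w, h) for resources found in the frame."""
    return [{"label": "worker", "box": (120, 80, 40, 90)},
            {"label": "excavator", "box": (300, 60, 180, 120)}]

def handle_frame(frame):
    """Server loop body: run detection, then ship back one menu button
    per detected object, anchored at the box's top-left corner."""
    detections = detect_resources(frame)
    return {"buttons": [{"label": d["label"], "anchor": d["box"][:2]}
                        for d in detections]}

response = handle_frame(frame=None)   # frame payload omitted in this sketch
```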

Vehicle Detection Method Based on Object-Based Point Cloud Analysis Using Vertical Elevation Data (OBPCA 기반의 수직단면 이용 차량 추출 기법)

  • Jeon, Junbeom;Lee, Heezin;Oh, Sangyoon;Lee, Minsu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.8
    • /
    • pp.369-376
    • /
    • 2016
  • Among various vehicle extraction techniques, OBPCA (Object-Based Point Cloud Analysis) calculates features quickly from coarse-grained rectangles around the top view of vehicle candidates. However, because it uses only a top-view rectangle to detect a vehicle, it struggles to distinguish vehicles from other rectangular objects of similar size. For this reason, an accuracy issue has been raised with the OBPCA method, which affects DEM generation and traffic monitoring tasks. In this paper, we propose a novel method that uses the most distinguishing vertical elevations to calculate additional features. The proposed method uses the same features as the top-view approach, determines new thresholds, and decides whether a candidate is a vehicle. We compared the accuracy and execution time of the original OBPCA and the proposed method. The experimental results show that our method yields a 6.61% increase in precision and a 13.96% decrease in false positive rate with only a marginal increase in execution time, indicating that the proposed method reduces misclassification.
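The key point of the abstract above is that a vertical (elevation) feature lets the classifier reject objects whose top-view footprint resembles a vehicle. A minimal sketch of such threshold-based filtering; the feature names and threshold values are hypothetical, since the paper derives its own thresholds from data:

```python
# Rough sketch of threshold-based candidate filtering in the spirit of the
# OBPCA extension described above: top-view rectangle dimensions plus a
# height feature taken from the vertical elevation profile.

def is_vehicle(candidate,
               length_range=(3.0, 6.0),    # metres, top-view rectangle
               width_range=(1.5, 2.2),
               height_range=(1.2, 2.2)):   # from the vertical elevation data
    l_ok = length_range[0] <= candidate["length"] <= length_range[1]
    w_ok = width_range[0] <= candidate["width"] <= width_range[1]
    h_ok = height_range[0] <= candidate["height"] <= height_range[1]
    return l_ok and w_ok and h_ok

car = {"length": 4.5, "width": 1.8, "height": 1.5}
container = {"length": 4.4, "width": 1.9, "height": 2.6}  # similar footprint, too tall
```

A top-view-only test would accept both objects; the height threshold is what rejects the container.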

Disparity Estimation for Intermediate View Reconstruction of Multi-view Video (다시점 동영상의 중간시점영상 생성을 위한 변이 예측 기법)

  • Choi, Mi-Nam;Yun, Jung-Hwan;Yoo, Ji-Sang
    • Journal of Broadcast Engineering
    • /
    • v.13 no.6
    • /
    • pp.915-929
    • /
    • 2008
  • In this paper, we propose a reliable pixel-based disparity estimation algorithm for multi-view images. The proposed method estimates an initial disparity map using the edge information of an image, and the initial disparity map is used to reduce the search range so that disparity can be estimated efficiently. Furthermore, disparity mismatches on object boundaries and in textureless regions are reduced by using an adaptive block size. We generated intermediate-view images to evaluate the estimated disparity. Test results show that the proposed algorithm achieves a 0.1~1.2 dB PSNR (peak signal-to-noise ratio) improvement over conventional block-based and pixel-based disparity estimation methods.
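The abstract reports its gains in PSNR. As a reference point, a minimal sketch of the standard PSNR definition used for such comparisons on 8-bit images:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences:
    10 * log10(peak^2 / MSE). Identical inputs give infinite PSNR."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)
```

A uniform error of 10 grey levels, for example, gives an MSE of 100 and therefore a PSNR of about 28.1 dB, so a 0.1~1.2 dB gain corresponds to a modest but measurable reduction in reconstruction error.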

Hole Filling Algorithm for a Virtual-viewpoint Image by Using a Modified Exemplar Based In-painting

  • Ko, Min Soo;Yoo, Jisang
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.4
    • /
    • pp.1003-1011
    • /
    • 2016
  • In this paper, a new algorithm that uses a 3D warping technique to effectively fill the holes produced when creating a virtual-viewpoint image is proposed. A hole is defined as a region that cannot be seen in the reference view when a virtual view is created. The proposed algorithm uses an exemplar-based in-painting algorithm to reduce the blurring that conventional algorithms introduce in filled hole regions and to enhance the texture quality of the generated virtual view. The boundary noise that occurs in the initial virtual view obtained by 3D warping is also removed. After 3D warping, we estimate the location of the background relative to the holes and fill pixels adjacent to the background first, which gives better results than relying only on the adjacent object's information. Temporal inconsistency between frames is also reduced by expanding the search region into the previous frame when searching for the most similar patch. Experimental results show the superiority of the proposed algorithm over existing algorithms.
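Exemplar-based in-painting of the kind the abstract builds on fills holes in an order governed by a priority of the form P(p) = C(p) * D(p) (confidence times data term). A toy 1-D sketch of the confidence term alone, under that standard formulation; the paper's specific modification (favouring background-adjacent pixels) is not reproduced here:

```python
# Confidence term C(p) from classic exemplar-based in-painting: the average
# confidence of already-known pixels in the patch centred at p. Known
# source pixels start at confidence 1.0; hole pixels at 0.0.

def patch_confidence(conf, mask, p, half=1):
    """conf: per-pixel confidence values; mask[i] is True where pixel i is
    known. Returns the mean confidence over the patch centred at p."""
    lo, hi = max(0, p - half), min(len(conf), p + half + 1)
    return sum(conf[i] for i in range(lo, hi) if mask[i]) / (hi - lo)

conf = [1.0, 1.0, 0.0, 0.0, 1.0]        # 1.0 = known source pixels
mask = [True, True, False, False, True]
c = patch_confidence(conf, mask, p=2)   # patch covers pixels 1..3
```

Patches whose neighbourhood is mostly known get higher priority, so the fill front advances from well-supported regions into the hole.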

A Best View Selection Method in Videos of Interested Player Captured by Multiple Cameras (다중 카메라로 관심선수를 촬영한 동영상에서 베스트 뷰 추출방법)

  • Hong, Hotak;Um, Gimun;Nang, Jongho
    • Journal of KIISE
    • /
    • v.44 no.12
    • /
    • pp.1319-1332
    • /
    • 2017
  • In recent years, the number of video cameras used to record and broadcast live sporting events has increased, and selecting the shots with the best view from multiple cameras has become an actively researched topic. Existing approaches have assumed that the background in the video is fixed. This paper proposes a best view selection method for cases in which the background is not fixed. In our study, an athlete of interest was recorded in motion with multiple cameras, and each frame from every camera was analyzed to establish rules for selecting the best view. The frames selected by our system were then compared with those human viewers indicated as most desirable. For the evaluation, we asked each of 20 non-specialists to pick the best and worst views. The set of views most often selected as best coincided with 54.5% of the frames chosen by our proposed method, while the set most often selected as worst coincided with only 9% of our method's best-view shots, demonstrating the efficacy of the proposed method.

Spatial View Materialization Technique by using R-Tree Reconstruction (R-tree 재구성 방법을 이용한 공간 뷰 실체화 기법)

  • Jeong, Bo-Heung;Bae, Hae-Yeong
    • The KIPS Transactions:PartD
    • /
    • v.8D no.4
    • /
    • pp.377-386
    • /
    • 2001
  • In spatial database systems, spatial views are supported as an efficient access method to the spatial database and are managed by either materialization or non-materialization techniques. With non-materialization, repeated execution of the same query causes problems such as a bottleneck at the server and network overload. With materialization, view maintenance is difficult and its cost is high whenever the base table changes. In this paper, the SVMT (Spatial View Materialization Technique) is proposed using R-tree re-construction. The SVMT constructs a spatial index according to the distribution ratio of objects in the spatial view, computed using the SVHR (Spatial View Height in R-tree) and the SVOC (Spatial View Object Count). If the ratio is higher than the average, the spatial view is materialized and the existing R-tree index is reused; in this case, the root node of the index is replaced with a node whose MBR (Minimum Bounding Rectangle) contains the whole region of the spatial view at minimum size. Otherwise, the spatial view is materialized and the R-tree is re-constructed. The information about a spatial view is managed in an SVIT (Spatial View Information Table), stored as a record of that table. The proposed technique speeds up response time through fast query processing on the materialized view and eliminates the additional costs arising from repeated modification of the same query. With these advantages, it can greatly reduce network overload and the server-side bottleneck.

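The core decision in the abstract above, choosing between reusing and rebuilding the R-tree based on the view's object ratio, can be sketched as follows. The names follow the abstract's abbreviations, but the ratio formula here is a simplified stand-in for the paper's exact computation from SVHR and SVOC:

```python
# Sketch of the SVMT materialization decision: materialize the spatial view
# and reuse the existing R-tree when the view's share of objects exceeds
# the average ratio; otherwise rebuild a fresh index for the view.

def choose_strategy(svoc, total_objects, avg_ratio):
    """svoc: Spatial View Object Count for the view; total_objects: object
    count in the base table; avg_ratio: the average ratio threshold."""
    ratio = svoc / total_objects
    if ratio > avg_ratio:
        # Reuse: swap the root for a node whose MBR minimally covers the view.
        return "materialize-reuse-rtree"
    # Rebuild: a small view would leave most of the old index as dead weight.
    return "materialize-rebuild-rtree"

dense_view = choose_strategy(svoc=80, total_objects=100, avg_ratio=0.5)
sparse_view = choose_strategy(svoc=10, total_objects=100, avg_ratio=0.5)
```

The trade-off is index quality versus construction cost: reusing the tree is cheap but only pays off when the view covers most of the indexed objects.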

Bag of Visual Words Method based on PLSA and Chi-Square Model for Object Category

  • Zhao, Yongwei;Peng, Tianqiang;Li, Bicheng;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.7
    • /
    • pp.2633-2648
    • /
    • 2015
  • The problems of visual-word synonymy and ambiguity are inherent in conventional bag of visual words (BoVW) based object categorization methods. In addition, noisy visual words, so-called "visual stop-words", degrade the semantic resolution of the visual dictionary. In view of this, a novel bag of visual words method based on PLSA and a chi-square model is proposed for object categorization. First, Probabilistic Latent Semantic Analysis (PLSA) is used to analyze the semantic co-occurrence probability of visual words, infer the latent semantic topics in images, and obtain the latent topic distributions induced by the words. Second, KL divergence is adopted to measure the semantic distance between visual words, yielding semantically related homoionyms. An adaptive soft-assignment strategy is then combined with this to realize a soft mapping between SIFT features and the homoionyms. Finally, the chi-square model is introduced to eliminate the visual stop-words and reconstruct the visual vocabulary histograms, and an SVM (Support Vector Machine) is applied for object classification. Experimental results indicate that the synonymy and ambiguity problems of visual words are overcome effectively, and that both the discriminative power of the visual dictionary and the object classification performance are substantially improved compared with traditional methods.
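The semantic-distance step above compares two visual words through their PLSA topic distributions. A minimal sketch using a symmetrized KL divergence; the topic distributions below are made-up examples, and the paper may use a different symmetrization:

```python
import math

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q); eps guards against log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def symmetric_kl(p, q):
    """Symmetrized KL, usable as a semantic distance between two visual
    words' topic distributions: words with similar topic profiles (likely
    homoionyms) get a small distance."""
    return 0.5 * (kl(p, q) + kl(q, p))

w1 = [0.7, 0.2, 0.1]
w2 = [0.6, 0.3, 0.1]   # close topic profile  -> small distance to w1
w3 = [0.1, 0.1, 0.8]   # different profile    -> large distance to w1
```

Words whose distance falls below a threshold can then share soft assignments of a SIFT feature, which is how the synonymy problem is mitigated.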

Appropriateness Assessment of Illuminance-Based Evaluation Method in Automotive Headlight Visibility Performance (조도 기반 자동차 전조등 시인 성능 평가 방법의 적정성 평가)

  • Cho, Wonbum
    • International Journal of Highway Engineering
    • /
    • v.19 no.6
    • /
    • pp.165-173
    • /
    • 2017
  • PURPOSES: The current practice in car headlight visibility performance evaluation is based on the luminous intensity and illuminance of the headlight. Such practice can be inappropriate from a visibility point of view, where visibility means the ability to perceive an object ahead on the road. This study evaluates the appropriateness of the current headlight evaluation method. METHODS: This study measured the luminance of an object and of the road surface on unlit roadways, by vehicle type and by headlight lamp type. Based on the measurements, the distance at which drivers can perceive an object ahead was calculated and compared against the distance obtained by the conventional visibility performance evaluation. RESULTS: The evaluation method based on headlight illuminance is not appropriate when viewed from a visibility concept based on object-perceivable distance. Further, the results indicated a shorter object-perceiving distance even when road surface luminance was higher, suggesting that headlight illuminance and road surface luminance are not representative indices of nighttime visibility. CONCLUSIONS: Given that this study used a limited set of vehicle types and that road surface (background) luminance varies with the characteristics of the road surface, it would go too far to claim that these visibility performance evaluation results generalize to other conditions. Nevertheless, there is little doubt that the current illuminance-based performance evaluation criterion is unreasonable. Future work should explore broader study conditions, conduct more experiments, and develop effective methodologies for evaluating automotive headlight visibility performance. In particular, an evaluation methodology is needed that accounts for road surface (background) luminance and luminance contrast from various perspectives, since the former indicates the driver's perception of the road alignment ahead and the latter is indicative of object perception performance.
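The illuminance-based practice the abstract critiques rests on the standard point-source relation E = I cos(θ) / d², which is why illuminance falls off with the square of distance regardless of what the road surface reflects back to the driver. A minimal sketch of that relation; the luminous-intensity value is a hypothetical example, not a figure from the paper:

```python
import math

def illuminance(intensity_cd, distance_m, incidence_deg=0.0):
    """Illuminance (lux) on a surface from a point source of the given
    luminous intensity (candela), by the inverse-square cosine law."""
    return intensity_cd * math.cos(math.radians(incidence_deg)) / distance_m ** 2

e_near = illuminance(intensity_cd=20000.0, distance_m=25.0)   # 32.0 lux
e_far = illuminance(intensity_cd=20000.0, distance_m=50.0)    # 8.0 lux
```

Note that this quantity says nothing about the luminance contrast between an object and its background, which is the study's point: two headlights with identical illuminance profiles can yield different object-perceiving distances.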