• Title/Summary/Keyword: Convex Sets

TWO STEP ALGORITHM FOR SOLVING REGULARIZED GENERALIZED MIXED VARIATIONAL INEQUALITY PROBLEM

  • Kazmi, Kaleem Raza;Khan, Faizan Ahmad;Shahzad, Mohammad
    • Bulletin of the Korean Mathematical Society, v.47 no.4, pp.675-685, 2010
  • In this paper, we consider a new class of regularized (nonconvex) generalized mixed variational inequality problems in real Hilbert space. We give the concepts of partially relaxed strongly mixed monotone and partially relaxed strongly $\theta$-pseudomonotone mappings, which are extensions of the concepts given by Xia and Ding [19], Noor [13] and Kazmi et al. [9]. Further, we use the auxiliary principle technique to suggest a two-step iterative algorithm for solving the regularized (nonconvex) generalized mixed variational inequality problem. We prove that convergence of the iterative algorithm requires only continuity, partially relaxed strongly mixed monotonicity and partially relaxed strongly $\theta$-pseudomonotonicity. The theorems presented in this paper represent an improvement and generalization of previously known results for solving equilibrium problems and variational inequality problems involving nonconvex (convex) sets; see, for example, Noor [13], Pang et al. [14], and Xia and Ding [19].
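
For orientation, the classical mixed variational inequality that this regularized (nonconvex) generalized problem extends can be written as follows; the notation is generic background, not the paper's exact formulation:

$$\text{find } u \in K \ \text{ such that } \ \langle T(u),\, v - u \rangle + \varphi(v) - \varphi(u) \ge 0 \quad \forall\, v \in K,$$

where $K$ is a closed convex subset of a real Hilbert space $H$, $T : K \rightarrow H$ is a nonlinear mapping, and $\varphi$ is a proper, convex, lower semicontinuous function. In this literature the regularized (nonconvex) version typically replaces the convex set $K$ with a uniformly prox-regular set.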

Post-processing Technique Based on POCS Using Wavelet Transform (웨이브릿 변환을 이용한 POCS 기반의 후처리 기법)

  • Kwon Goo-Rak;Kim Hyo-Kak;Kim Yoon;Ko Sung-Jea
    • Journal of the Institute of Electronics Engineers of Korea SP, v.43 no.3 s.309, pp.1-8, 2006
  • In this paper, we propose a new post-processing method, based on the theory of projections onto convex sets (POCS), to reduce the blocking artifacts in decoded images. We propose a new smoothness constraint set (SCS) and its projection operator in the wavelet transform (WT) domain to remove unnecessary high-frequency components caused by blocking artifacts. We also propose a new method to find and preserve the original high-frequency components of image edges. Experimental results show that the proposed method not only achieves significantly enhanced subjective quality but also yields a PSNR improvement in the output image.
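
As a rough illustration of the POCS alternation described above, the sketch below alternates a simplified smoothness projection (soft thresholding of wavelet detail coefficients via PyWavelets) with a pixel-range projection. The actual SCS, projection operators, and edge-preservation step in the paper are more elaborate; the wavelet, decomposition level, and threshold here are assumed values.

```python
import numpy as np
import pywt  # PyWavelets, used here as a stand-in wavelet transform


def project_smoothness(img, wavelet="haar", level=1, thresh=5.0):
    """Crude smoothness projection: soft-threshold the detail coefficients.
    Assumes even image dimensions so the reconstruction matches the input shape."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    cA, details = coeffs[0], coeffs[1:]
    details = [tuple(pywt.threshold(d, thresh, mode="soft") for d in lvl) for lvl in details]
    return pywt.waverec2([cA] + details, wavelet)


def project_range(img, lo=0.0, hi=255.0):
    """Projection onto the (convex) set of images with a valid pixel range,
    used here as a simple stand-in for the paper's data-consistency constraint."""
    return np.clip(img, lo, hi)


def pocs_postprocess(decoded, n_iter=5):
    x = decoded.astype(float)
    for _ in range(n_iter):
        x = project_smoothness(x)  # remove spurious high-frequency energy
        x = project_range(x)       # stay consistent with the decoded data range
    return x
```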

Theoretical analysis of the projection of filtered data onto the quantization constraint set (양자화 제약 집합에 여과된 데이터를 투영하는 기법의 이론적 고찰)

  • 김동식;박섭형
    • The Journal of Korean Institute of Communications and Information Sciences, v.21 no.7, pp.1685-1695, 1996
  • The post-processing of compressed images based on projections onto convex sets and constrained minimization imposes several constraints on the processed data. The quantization constraint has been commonly used in various algorithms. Quantization is a many-to-one mapping by which all the data in a quantization region are mapped to the corresponding representative level. The basic idea behind the projection onto the QCS (quantization constraint set) is to prevent the processed data from diverging from the original quantization region, in order to reduce the artifacts caused by filtering in post-processing. However, there have been few efforts to analyze the POQCS (projection onto the QCS). This paper mathematically analyzes the POQCS of filtered data from the viewpoint of minimizing the mean square error. Our analysis shows that a proper filtering technique followed by the POQCS can reduce the quantization distortion. In the conventional POQCS, data outside each quantization region are mapped onto the corresponding boundary. Our analysis also shows that mapping the outside data onto the boundary of a subregion of the quantization region yields lower distortion than mapping onto the boundary of the original region. In addition, several examples and discussions on the theory are presented.
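
To make the operation under analysis concrete, here is a small sketch of projecting filtered data back onto the quantization constraint set of a uniform scalar quantizer. The `shrink` parameter is an illustrative device for projecting onto a centered subregion of each cell, the case the paper shows can lower distortion; the step size and toy data are assumptions.

```python
import numpy as np


def project_onto_qcs(filtered, quantized, step, shrink=1.0):
    """Map each filtered value back into (a subregion of) its quantization cell.

    quantized : representative levels of a uniform quantizer with the given step
    shrink    : 1.0 gives the conventional POQCS; values < 1.0 project onto a
                smaller, centered subregion of each cell.
    """
    half = 0.5 * step * shrink
    return np.clip(filtered, quantized - half, quantized + half)


# toy usage
step = 4.0
original = np.array([1.2, 5.7, -3.1, 9.9])
quantized = np.round(original / step) * step              # uniform quantizer
filtered = quantized + np.array([2.5, -2.3, 0.4, 3.2])    # pretend post-filtering moved the data
print(project_onto_qcs(filtered, quantized, step))        # conventional projection
print(project_onto_qcs(filtered, quantized, step, 0.5))   # projection onto a subregion
```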

Post-processing of vector quantized images using the projection onto quantization constraint set (양자화 제약 집합에 투영을 이용한 벡터 양자화된 영상의 후처리)

  • 김동식;박섭형;이종석
    • The Journal of Korean Institute of Communications and Information Sciences, v.22 no.4, pp.662-674, 1997
  • In order to post-process vector-quantized images employing the theory of projections onto convex sets or the constrained minimization technique, the projector onto the QCS (quantization constraint set), as well as the filter that smoothes the block boundaries, should be investigated theoretically. The basic idea behind the projection onto the QCS is to prevent the processed data from diverging from the original quantization region, in order to reduce the blurring artifacts caused by a filtering operation. However, since the Voronoi regions in vector quantization are arbitrarily shaped unless the vector quantizer has a structured codebook, the implementation of the projection onto the QCS is very complicated. This paper mathematically analyzes the projection onto the QCS from the viewpoint of minimizing the mean square error. Through the analysis, it is revealed that the projection onto a subset of the QCS yields lower distortion than the projection onto the QCS does. Searching for an optimal constraint set is not easy and the operation of the projector is complicated, since the shape of the optimal constraint set depends on the statistical characteristics between the filtered and original images. Therefore, we propose a hyper-cube as a constraint set that enables a simple projection. It will also be shown, both theoretically and experimentally, that a proper filtering technique followed by the projection onto the hyper-cube can reduce the quantization distortion.
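
A minimal sketch of the hyper-cube constraint idea: rather than projecting onto the arbitrarily shaped Voronoi cell of each codevector, each component of the filtered vector is simply clipped to a box centered on the reconstructed codevector. The box half-width below is an assumed parameter, not a value from the paper.

```python
import numpy as np


def project_onto_hypercube(filtered_vecs, codevectors, half_width):
    """Component-wise clipping to a hyper-cube around each codevector.

    filtered_vecs, codevectors : arrays of shape (n_blocks, dim)
    half_width                 : scalar or (dim,) box half-width (illustrative)
    """
    return np.clip(filtered_vecs, codevectors - half_width, codevectors + half_width)


# toy usage with 4-dimensional blocks
codevectors = np.array([[10., 10., 12., 12.], [40., 42., 40., 38.]])
filtered = codevectors + np.array([[3., -6., 1., 0.], [7., 0., -1., -9.]])
print(project_onto_hypercube(filtered, codevectors, half_width=4.0))
```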

Analysis of the Spatial Structure of Kazuyo Sejima & Ryue Nishizawa's House Designs (세지마 카즈요 및 니시자와 류에 주택의 공간구조분석 연구)

  • Lee, Ki-Seok
    • Journal of the Korea Academia-Industrial cooperation Society, v.15 no.5, pp.3220-3230, 2014
  • This paper analyzes the house designs of Kazuyo Sejima and Ryue Nishizawa from the 1990s to the early 2000s. By analyzing the degree of space integration of each house using the Convex Map of Space Syntax Theory, this study arrived at the following conclusions regarding private space and public space. First, from period 1 (the 1990s) to period 2 (the first half of the 2000s), the differences between the average integration values of private space and public space in their works decreased. Over time, average integration values of private space generally increased while those of public space decreased, leading to smaller differences between the two sets of values; as the integration degrees of private and public spaces become similar, the boundary dividing the spaces becomes blurry. Second, in terms of private space, the average integration values of private space in S-3 (House in a Plum Grove) and S-4 (House in China), works of period 2, are the highest among their ten works, indicating that the degree of closure of private space in their works has fallen over time. Third, in terms of public space, the average integration values of I-2 (Villa in the Forest), I-5 (S-House), and I-6 (Weekend House), works of period 1, are the highest among their ten works; public space became more central and open from period 1 to period 2.
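
For readers unfamiliar with the Space Syntax measure used here, the sketch below computes a rough integration value for each convex space from its adjacency graph via mean depth and relative asymmetry. The diamond-value normalization of the full method is omitted, the floor-plan graph is a made-up example, and the networkx package is assumed.

```python
import networkx as nx


def integration_values(convex_map_edges):
    """Rough Space Syntax-style integration from a convex map adjacency graph.

    convex_map_edges: list of (space_a, space_b) adjacency pairs.
    Returns a dict of per-space values; higher means more integrated.
    """
    G = nx.Graph(convex_map_edges)
    k = G.number_of_nodes()
    result = {}
    for node in G:
        depths = nx.single_source_shortest_path_length(G, node)
        mean_depth = sum(depths.values()) / (k - 1)
        ra = 2.0 * (mean_depth - 1.0) / (k - 2)   # relative asymmetry
        result[node] = 1.0 / ra if ra > 0 else float("inf")
    return result


# toy example: a hall connecting a living room, kitchen, and two bedrooms
print(integration_values([("hall", "living"), ("hall", "kitchen"),
                          ("hall", "bed1"), ("hall", "bed2")]))
```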

DCT Domain Post-Processing Based on POCS (DCT 영역에서의 POCS에 근거한 후처리)

  • Yim Chang hoon
    • The Journal of Korean Institute of Communications and Information Sciences, v.30 no.3C, pp.158-166, 2005
  • Even though post-processing methods based on projections onto convex sets (POCS) have shown good performance for blocking-artifact reduction, POCS is infeasible to implement for real-time practical applications. This paper proposes a DCT-domain post-processing method based on POCS. The proposed method shows performance very similar to the conventional POCS method while tremendously reducing the computational complexity. DCT-domain POCS performs the lowpass filtering in the DCT domain and removes the inverse DCT and forward DCT modules. Through an investigation of lowpass filtering in the iterative POCS method, we define kth-order lowpass filtering, which is equivalent to the lowpass filtering in the kth iteration, and the corresponding kth-order DCT-domain POCS. Simulation results show that the kth-order DCT-domain POCS without iteration gives performance very similar to the conventional POCS with k iterations while requiring much less computation. Hence the proposed DCT-domain POCS method can be used efficiently in practical post-processing applications with real-time constraints.
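
The core idea, that spatial-domain lowpass filtering can be folded into a fixed DCT-domain operator so the inverse/forward DCT modules disappear and k iterations collapse into one precomputed matrix, can be checked with a small sketch. The 3-tap filter and block size below are illustrative assumptions, not the paper's filter.

```python
import numpy as np
from scipy.fft import dct

N = 8
C = dct(np.eye(N), axis=0, norm="ortho")            # orthonormal DCT-II matrix: X = C @ x

# a simple 3-tap lowpass filter as an N x N matrix (circular boundary, illustrative only)
F = sum(w * np.roll(np.eye(N), s, axis=1) for s, w in [(-1, 0.25), (0, 0.5), (1, 0.25)])

F_dct = C @ F @ C.T                                  # the same filter expressed in the DCT domain

x = np.random.rand(N)
X = dct(x, norm="ortho")

spatial_then_dct = dct(F @ x, norm="ortho")          # filter in the pixel domain, then transform
dct_domain = F_dct @ X                               # filter directly on the DCT coefficients
print(np.allclose(spatial_then_dct, dct_domain))     # True: no IDCT/DCT round trip is needed

# k filtering iterations collapse into a single precomputed operator:
F_dct_3 = np.linalg.matrix_power(F_dct, 3)           # "3rd-order" DCT-domain lowpass filtering
```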

A Mesh Segmentation Reflecting Global and Local Geometric Characteristics (전역 및 국부 기하 특성을 반영한 메쉬 분할)

  • Im, Jeong-Hun;Park, Young-Jin;Seong, Dong-Ook;Ha, Jong-Sung;Yoo, Kwan-Hee
    • The KIPS Transactions: Part A, v.14A no.7, pp.435-442, 2007
  • This paper is concerned with the mesh segmentation problem, which arises in diverse applications such as texture mapping, simplification, morphing, compression, and shape matching for 3D mesh models. Mesh segmentation is the process of dividing a given mesh into a disjoint set of sub-meshes. We propose a method for segmenting meshes that simultaneously reflects the global and local geometric characteristics of the mesh. First, we extract sharp vertices from the mesh vertices by interpreting the curvatures and convexity of a given mesh, which respectively capture the local and global geometric characteristics of the mesh. Next, we partition the sharp vertices into $\kappa$ clusters by adopting the $\kappa$-means clustering method [29], based on the Euclidean distances between all pairs of sharp vertices. The remaining vertices are then merged into the nearest clusters by Euclidean distance. We also implement the proposed method and visualize its experimental results on several 3D mesh models.
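
A rough sketch of the clustering stage only: k-means over the positions of already-extracted sharp vertices, with the remaining vertices assigned to the nearest cluster. The curvature/convexity-based extraction of sharp vertices, which is the paper's contribution, is not reproduced, and assigning remaining vertices to cluster centers rather than to the nearest sharp vertex is a simplifying assumption.

```python
import numpy as np
from sklearn.cluster import KMeans


def segment_vertices(sharp_xyz, other_xyz, k):
    """Cluster sharp vertices into k groups, then merge the remaining vertices
    into the nearest cluster by Euclidean distance to its center.

    sharp_xyz, other_xyz : arrays of shape (n_sharp, 3) and (n_other, 3)
    Returns (labels for sharp vertices, labels for the remaining vertices).
    """
    km = KMeans(n_clusters=k, n_init=10).fit(sharp_xyz)
    dists = np.linalg.norm(other_xyz[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    return km.labels_, dists.argmin(axis=1)
```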

Optimization of Economic Load Dispatch Problem for Quadratic Fuel Cost Function with Prohibited Operating Zones (운전금지영역을 가진 이차 발전비용함수의 경제급전문제 최적화)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.15 no.5, pp.155-162, 2015
  • This paper proposes a deterministic optimization algorithm to solve the economic load dispatch problem with quadratic convex fuel cost functions. The proposed algorithm first partitions each generator with prohibited zones into multiple generators so as to place them outside the prohibited zones. It then sets the initial values to $P_i{\leftarrow}P_i^{max}$ and reduces the power generation of the units incurring the maximum unit power cost. It finally employs a swap optimization process of $P_i{\leftarrow}P_i-{\beta}$, $P_j{\leftarrow}P_j+{\beta}$ where $\max_i\{F(P_i)-F(P_i-{\beta})\} > \min_j\{F(P_j+{\beta})-F(P_j)\}$, $i{\neq}j$, ${\beta}=1.0, 0.1, 0.01, 0.001$. When applied to three different 15-generator cases, the proposed algorithm consistently yielded better-optimized results than heuristic algorithms.
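
An illustrative sketch of the swap step for quadratic costs $F_i(P) = a_i + b_iP + c_iP^2$ is given below. The paper's initialization at $P_i^{max}$ and its splitting of generators with prohibited zones are omitted, and the feasible starting point is an assumption of this sketch, not the paper's procedure.

```python
import numpy as np


def swap_dispatch(a, b, c, p_min, p_max, demand, betas=(1.0, 0.1, 0.01, 0.001)):
    """Swap-style refinement for quadratic costs F_i(P) = a_i + b_i*P + c_i*P^2.

    Repeatedly shifts beta MW from the unit with the largest cost saving to the
    unit with the smallest cost increase while that trade lowers total cost.
    """
    a, b, c = map(np.asarray, (a, b, c))
    p_min, p_max = np.asarray(p_min, float), np.asarray(p_max, float)
    cost = lambda i, p: a[i] + b[i] * p + c[i] * p * p
    # simple feasible start (assumes sum(p_min) <= demand <= sum(p_max))
    t = (demand - p_min.sum()) / (p_max.sum() - p_min.sum())
    P = p_min + t * (p_max - p_min)
    for beta in betas:
        while True:
            save = [cost(i, P[i]) - cost(i, P[i] - beta) if P[i] - beta >= p_min[i] else -np.inf
                    for i in range(len(P))]
            rise = [cost(j, P[j] + beta) - cost(j, P[j]) if P[j] + beta <= p_max[j] else np.inf
                    for j in range(len(P))]
            i = int(np.argmax(save))
            rise[i] = np.inf                      # a unit cannot trade with itself
            j = int(np.argmin(rise))
            if not (save[i] > rise[j]):
                break                             # no cost-reducing swap left at this beta
            P[i] -= beta
            P[j] += beta
    return P
```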

Handwritten Numeral Recognition using Composite Features and SVM classifier (복합특징과 SVM 분류기를 이용한 필기체 숫자인식)

  • Park, Joong-Jo;Kim, Tae-Woong;Kim, Kyoung-Min
    • Journal of the Korea Institute of Information and Communication Engineering, v.14 no.12, pp.2761-2768, 2010
  • In this paper, we study the use of foreground and background features with an SVM classifier to improve the accuracy of offline handwritten numeral recognition. The foreground features are two directional features: a directional gradient feature obtained by Kirsch operators and a directional stroke feature obtained by projection run-lengths; the background feature is a concavity feature extracted from the convex hull of the numeral, which functions as a complement to the directional features. During classification of a numeral, these three features are combined to obtain good discrimination power. The efficiency of our feature sets was tested by recognition experiments on the CENPARMI handwritten numeral database, using an SVM with an RBF kernel as the classifier. The experimental results showed that each combination of two or three features gave better performance than any single feature, which means that each feature has a different discriminating power and cooperates with the others to enhance recognition accuracy. Using the composite of the three features, we achieved a recognition rate of 98.90%.
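
A minimal sketch of the classification stage: the three feature groups are concatenated and fed to an RBF-kernel SVM via scikit-learn. The feature extraction itself (Kirsch directional gradients, projection run-length strokes, convex-hull concavity) is assumed to be done elsewhere, and the C/gamma values are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def train_composite_svm(grad_feat, stroke_feat, concavity_feat, labels):
    """Concatenate the three feature groups and train an RBF-kernel SVM.

    grad_feat, stroke_feat, concavity_feat : arrays of shape (n_samples, d_k)
    labels                                 : digit labels, shape (n_samples,)
    """
    X = np.hstack([grad_feat, stroke_feat, concavity_feat])  # composite feature vector
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    return clf.fit(X, labels)
```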

A Study of Esthetic Facial Profile Preference In Korean (한국인의 연조직측모 선호경향에 대한 연구)

  • Choi, Jun-Gyu;Lee, Ki-Soo
    • The Korean Journal of Orthodontics, v.32 no.5 s.94, pp.327-342, 2002
  • The soft tissue profile is a critical area of interest in orthodontic treatment and diagnosis. The purpose of this study was to determine the facial profile preferences of a diversified group and to investigate the relationship between the most preferred facial profile and existing soft tissue reference lines. A survey instrument of constructed facial silhouettes was evaluated by 894 lay persons. The silhouettes varied in nose, lips, chin, and soft tissue subnasale point. Seven sets of facial types were computer-generated by an orthodontist to represent distinct facial types. The varied facial profiles were graded from most preferred to least preferred, and every facial profile was measured against soft tissue reference lines (Ricketts E-line, Burstone B-line) to characterize the most preferred facial profile. The results are as follows: 1. In the reliability test, the childhood group showed a lower value than the other groups, which means that this group has little concern for facial profile preference. 2. Sex and age made no significant difference in profile selection. 3. Agreement on the least preferred facial profile was higher than agreement on the most preferred facial profile. 4. The coefficient of concordance (Kendall W) was higher in the twenties age group, meaning that the profile preference of this group is distinct. 5. The lip protrusion (relative to the Ricketts E-line and Burstone B-line) of the most preferred facial profile was similar to the measurements of a previous study investigating the skeletal and soft tissue characteristics of esthetic facial profiles of young Koreans, so these reference lines can be used valuably in clinics. 6. Profiles with excessive lip protrusion or retrusion relative to the E-line and B-line were least preferred. 7. The most preferred profile across all respondent groups was the straight profile; the convex profile was not preferred, and the least preferred was the concave profile.
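
For reference, the coefficient of concordance cited in result 4 can be computed as below (no tie correction; the toy rankings are made up for illustration).

```python
import numpy as np


def kendalls_w(ranks):
    """Kendall's coefficient of concordance W (no tie correction).

    ranks: array of shape (m_raters, n_items), each row a ranking 1..n.
    W = 1 means perfect agreement among raters, W = 0 means none.
    """
    ranks = np.asarray(ranks, float)
    m, n = ranks.shape
    R = ranks.sum(axis=0)                      # rank sums per item
    S = ((R - R.mean()) ** 2).sum()            # spread of the rank sums
    return 12.0 * S / (m ** 2 * (n ** 3 - n))


# toy usage: three respondents ranking five profiles from most to least preferred
print(kendalls_w([[1, 2, 3, 4, 5],
                  [1, 3, 2, 4, 5],
                  [2, 1, 3, 4, 5]]))
```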