
3. Convex function


Convex function

$f:\R^n \to \R$ is convex if $\text{dom }f$ is a convex set and

$$f(\theta x + (1 - \theta)y) \le \theta f(x) + (1-\theta)f(y)$$

for all $x, y\in \text{dom }f$ and $0 \le\theta \le 1$.

$f$ is strictly convex if $\text{dom }f$ is convex and

$$f(\theta x + (1 - \theta)y) < \theta f(x) + (1-\theta)f(y)$$

for all $x, y\in \text{dom }f$ with $x \ne y$ and $0 < \theta < 1$.

Examples

affine functions are convex and concave; all norms are convex

Examples on $\R$

  1. affine: $ax + b$ on $\R$, for any $a, b\in \R$
  1. exponential: $e^{ax}$, for any $a\in \R$
  1. powers: $x^\alpha$ on $\R_{++}$, for $\alpha \ge 1$ or $\alpha \le 0$
  1. powers of absolute value: $|x|^p$ on $\R$, for $p\ge 1$
  1. negative entropy: $x\log x$ on $\R_{++}$

Examples on $\R^n$

  • affine function: $f(x) = a^Tx + b$
  • norms: $\|x\|_p = (\sum_{i = 1}^n|x_i|^p)^{1/p}$ for $p\ge 1$; $\|x\|_\infty = \max_k |x_k|$
    • Proof
      $$\begin{aligned}\|\theta x + (1-\theta)y\| &\le \|\theta x \| + \|(1-\theta) y\| \\ &= \theta\|x\| + (1-\theta)\|y\|\end{aligned}$$
      (triangle inequality, then absolute homogeneity with $\theta, 1-\theta \ge 0$)

Examples on $\R^{m\times n}$ ($m\times n$ matrices)

  • affine function: $f(X) = \text{tr}(A^TX) + b = \sum_{i=1}^m\sum_{j=1}^n A_{ij}X_{ij} + b$
  • spectral (maximum singular value) norm: $f(X) = \|X\|_2 = \sigma_{\max}(X)$

Restriction of a convex function to a line

$f : \R^n \to \R$ is convex if and only if the function $g :\R \to \R$,

$$g(t) = f(x + tv), \quad \text{dom }g = \{t \mid x + tv \in \text{dom }f\}$$

is convex in $t$ for any $x\in \text{dom }f$, $v\in \R^n$.

So we can check convexity of $f$ by checking convexity of functions of one variable.

💡
We can interpret this as fixing a starting point and a direction, and then checking the convexity of the resulting one-variable function.
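This suggests a simple randomized sanity check (a sketch in Python/numpy, not from the original notes; `convex_along_lines` is a hypothetical helper). Sampling random lines can only refute convexity, never prove it:

```python
import numpy as np

def convex_along_lines(f, dim, trials=1000, seed=0):
    """Randomized refutation test: check that g(t) = f(x + t v) satisfies
    the chord inequality at random points, directions, and weights.
    Returns False if a violation is found; True means no counterexample."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.normal(size=dim)
        v = rng.normal(size=dim)
        t1, t2 = rng.normal(size=2)
        theta = rng.uniform()
        g = lambda t: f(x + t * v)
        lhs = g(theta * t1 + (1 - theta) * t2)
        rhs = theta * g(t1) + (1 - theta) * g(t2)
        if lhs > rhs + 1e-9:
            return False
    return True

# log-sum-exp should pass; sum of sines should fail
print(convex_along_lines(lambda x: np.log(np.sum(np.exp(x))), dim=3))  # True
print(convex_along_lines(lambda x: np.sin(x).sum(), dim=3))            # False
```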

Example

$f:S^n \to \R$ with $f(X) = \log \det X$, $\text{dom }f = S_{++}^n$

$$\begin{aligned}g(t) = \log \det(X + tV) &= \log \det X + \log\det (I + tX^{-1/2}VX^{-1/2}) \\ &= \log\det X + \sum_{i = 1}^n \log(1 + t\lambda_i)\end{aligned}$$

where $\lambda_i$ are the eigenvalues of $X^{-1/2}VX^{-1/2}$.

$g$ is concave in $t$ (for any choice of $X\succ 0$ and any symmetric $V$); hence $f$ is concave.

Note

$$\begin{aligned}\log \det(X + tV) &= \log \det (X^{1/2}X^{1/2} + tX^{1/2}X^{-1/2}VX^{-1/2}X^{1/2})\\ &= \log\det(X^{1/2}(I + tX^{-1/2}VX^{-1/2})X^{1/2}) \\ &= \log\det(X(I + tX^{-1/2}VX^{-1/2})) \\ &= \log \det X + \log\det(I + tX^{-1/2}VX^{-1/2})\end{aligned}$$
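As a sanity check on the eigenvalue identity above, we can compare both sides numerically (a sketch; assumes numpy, a random $X \succ 0$, a random symmetric $V$, and a $t$ small enough that $X + tV \succ 0$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n, n))
X = M @ M.T + n * np.eye(n)                       # random positive definite X
V = rng.normal(size=(n, n)); V = (V + V.T) / 2    # random symmetric V
t = 0.3                                           # small enough that X + tV > 0

# eigenvalues of X^{-1/2} V X^{-1/2}
w, U = np.linalg.eigh(X)
X_inv_half = U @ np.diag(w ** -0.5) @ U.T
lam = np.linalg.eigvalsh(X_inv_half @ V @ X_inv_half)

lhs = np.linalg.slogdet(X + t * V)[1]             # log det(X + tV)
rhs = np.linalg.slogdet(X)[1] + np.sum(np.log(1 + t * lam))
print(np.isclose(lhs, rhs))                       # True
```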

Extended-value extension

It is often convenient to extend a convex function to all of $\R^n$ by defining its value to be $\infty$ outside its domain. If $f$ is convex, we define its extended-value extension $\tilde f$ by

$$\tilde f(x) = \begin{cases}f(x) & x\in \text{dom }f \\ \infty & x\notin \text{dom }f\end{cases}$$

The extension can simplify notation, since we don’t need to explicitly describe the domain.

First-order condition

$f$ is differentiable if $\text{dom }f$ is open and the gradient

$$\nabla f(x) = \bigg(\frac{\partial f(x)}{\partial x_1}, \frac{\partial f(x)}{\partial x_2}, \cdots, \frac{\partial f(x)}{\partial x_n}\bigg)^T$$

exists at each $x\in \text{dom }f$.

1st-order condition: a differentiable $f$ with convex domain is convex if and only if

$$f(y) \ge f(x) + \nabla f(x)^T(y- x), \quad \forall x, y\in \text{dom }f$$
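A minimal numeric illustration of the tangent plane being a global underestimator (a sketch using log-sum-exp, whose gradient is the softmax; both appear again in the examples below):

```python
import numpy as np

def f(x):                      # log-sum-exp, a convex function
    return np.log(np.sum(np.exp(x)))

def grad_f(x):                 # its gradient is the softmax
    z = np.exp(x)
    return z / z.sum()

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.normal(size=(2, 5))
    # first-order condition: f(y) >= f(x) + grad f(x)' (y - x)
    assert f(y) >= f(x) + grad_f(x) @ (y - x) - 1e-12
print("tangent planes underestimate f everywhere (on the samples)")
```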

Second-order conditions

$f$ is twice differentiable if $\text{dom }f$ is open and the Hessian $\nabla^2f(x) \in S^n$,

$$\nabla^2f(x)_{ij} = \frac{\partial^2f(x)}{\partial x_i\partial x_j}, \quad i, j = 1, \dots, n,$$

exists at each $x\in \text{dom }f$; that is, $\nabla^2 f(x)$ is the Hessian matrix of $f$ at $x$.

2nd-order condition: for twice differentiable $f$ with convex domain,

  • $f$ is convex if and only if
    $$\nabla^2 f(x) \succeq 0, \quad \forall x\in \text{dom }f$$
  • if $\nabla^2 f(x) \succ 0$ for all $x\in \text{dom }f$, then $f$ is strictly convex (the converse is false: $x^4$ is strictly convex, yet its second derivative vanishes at $0$).

Examples

  1. Quadratic function: $f(x) = \frac{1}{2}x^TPx + q^Tx + r$ (with $P\in S^n$)
    $$\nabla f(x) = Px + q, \quad \nabla^2f(x) = P$$

    convex if and only if $P\succeq 0$

  1. Least-squares objective: $f(x) = \|Ax -b \|_2^2$
    $$\nabla f(x) = 2A^T(Ax - b), \quad \nabla^2 f(x) = 2A^TA$$

    convex for any $A$, since $A^TA$ is always positive semi-definite

  1. quadratic-over-linear: $f(x,y) = x^2/y$
    $$\nabla^2f(x, y) = \frac{2}{y^3}\begin{bmatrix}y \\ -x\end{bmatrix}\begin{bmatrix}y \\ -x\end{bmatrix}^T \succeq 0$$

    convex for $y>0$

  1. log-sum-exp: $f(x) = \log\sum_{k = 1}^n \exp x_k$
    $$\nabla f(x) = \bigg(\frac{\exp x_1}{\sum_{k = 1}^n\exp x_k}, \frac{\exp x_2}{\sum_{k = 1}^n\exp x_k}, \cdots, \frac{\exp x_n}{\sum_{k = 1}^n\exp x_k}\bigg)^T$$

    Then,

    $$\nabla_{ii}^2 f(x) = \frac{\exp x_i}{\sum_{k = 1}^n\exp x_k} - \frac{(\exp x_i)^2}{(\sum_{k = 1}^n\exp x_k)^2}, \qquad \nabla_{ij}^2 f(x) = -\frac{\exp x_i\exp x_j}{(\sum_{k = 1}^n\exp x_k)^2} \quad (i \ne j)$$

    Therefore,

    $$\nabla^2 f(x) = \frac{1}{1^Tz}\text{diag}(z) - \frac{1}{(1^Tz)^2}zz^T \quad (z_k = \exp x_k)$$

    To show $\nabla^2 f(x)$ is positive semi-definite, we must verify that $v^T\nabla^2f(x) v\ge 0$ for all $v$:

    $$v^T\nabla^2f(x)v = \frac{(\sum_{k}z_kv_k^2)(\sum_{k}z_k) - (\sum_k v_kz_k)^2}{(\sum_k z_k)^2}$$

    By the Cauchy-Schwarz inequality (applied to the vectors with components $v_k\sqrt{z_k}$ and $\sqrt{z_k}$),

    $$\Big(\sum_k v_kz_k\Big)^2 \le \Big(\sum_k z_kv_k^2\Big)\Big(\sum_k z_k\Big)$$

    So $v^T\nabla^2f(x)v\ge 0$ for all $v$.

    Therefore, $\nabla^2f(x)$ is a positive semi-definite matrix. (A numerical check of this Hessian formula appears after this list.)

    💡
    This function is a smooth approximation of the maximum, since the largest component dominates the sum. In machine learning, its gradient $\nabla f$ is the familiar softmax function.
  1. geometric mean: $f(x) = (\prod_{k = 1}^n x_k)^{1/n}$ on $\R_{++}^n$ is concave
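Here is the numerical check of the log-sum-exp Hessian formula promised above (a sketch: compare the closed form against a central finite-difference Hessian, then verify positive semi-definiteness):

```python
import numpy as np

def lse_hessian(x):
    """Closed form: diag(z)/(1'z) - z z' / (1'z)^2, with z_k = exp(x_k)."""
    z = np.exp(x)
    s = z.sum()
    return np.diag(z) / s - np.outer(z, z) / s ** 2

def num_hessian(f, x, h=1e-5):
    """Central finite-difference Hessian, for verification only."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i], np.eye(n)[j]
            H[i, j] = (f(x + h*e_i + h*e_j) - f(x + h*e_i - h*e_j)
                       - f(x - h*e_i + h*e_j) + f(x - h*e_i - h*e_j)) / (4*h*h)
    return H

f = lambda x: np.log(np.sum(np.exp(x)))
x = np.random.default_rng(2).normal(size=4)
H = lse_hessian(x)
print(np.allclose(H, num_hessian(f, x), atol=1e-5))   # closed form matches
print(np.all(np.linalg.eigvalsh(H) >= -1e-12))        # H is PSD
```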

Epigraph and sublevel set

$\alpha$-sublevel set of $f:\R^n\to \R$:

$$C_\alpha = \{x\in \text{dom }f \mid f(x)\le \alpha\}$$

sublevel sets of convex functions are convex (the converse is false; see quasi-convex functions below)

epigraph of $f :\R^n\to \R$:

$$\text{epi }f=\{(x, t)\in \R^{n + 1}\mid x\in \text{dom }f, f(x)\le t\}$$
💡
$f$ is convex if and only if $\text{epi }f$ is a convex set.

Jensen’s inequality

basic inequality: if $f$ is convex, then for $0\le\theta \le 1$,

$$f(\theta x + (1 - \theta)y)\le \theta f(x) + (1-\theta)f(y)$$
💡
This extends from two points to arbitrary finite convex combinations, and from finite sums to integrals. That is why the following inequality holds.

extension: if $f$ is convex, then

$$f(E z) \le E f(z)$$

for any random variable $z$

💡
Intuitively, an expectation is a limit of convex combinations: abstract integration is essentially a summation of infinitesimally small weighted pieces, so the finite inequality carries over.
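A Monte Carlo illustration of the extension (a sketch, assuming $z \sim N(0, I_3)$ and $f = \|\cdot\|_2$, which is convex):

```python
import numpy as np

rng = np.random.default_rng(3)
z = rng.normal(size=(100_000, 3))         # samples of a random vector z
f = lambda v: np.linalg.norm(v, axis=-1)  # a convex function (a norm)

lhs = np.linalg.norm(z.mean(axis=0))      # f(E z): norm of the sample mean
rhs = f(z).mean()                         # E f(z): mean of the norms
print(lhs <= rhs)                         # True: f(E z) <= E f(z)
```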

Operations that preserve convexity

practical methods for establishing convexity of a function

  1. by using the definition
  1. for twice differentiable functions, show $\nabla^2f(x)\succeq 0$ for all $x$
  1. show that $f$ is obtained from simple convex functions by operations that preserve convexity:
    1. nonnegative weighted sum
    1. composition with affine function
    1. point-wise maximum and supremum
    1. minimization
    1. composition with scalar function
    1. vector composition
    1. perspective

Composition with affine function

$$f(Ax + b)$$

is convex if $f$ is convex

  • Proof

    Given $x, y$ in the domain of $x \mapsto f(Ax + b)$ and $0\le\theta\le 1$,

    $$\begin{aligned} f(\theta (Ax+b) + (1-\theta)(Ay + b)) &= f(A(\theta x + (1-\theta)y) + b) \\&\le \theta f(Ax + b) + (1-\theta)f(Ay + b)\end{aligned}$$

    Therefore $f(Ax + b)$ is a convex function.

Examples

  1. log barrier for linear inequalities
    $$f(x) = -\sum_{i = 1}^m \log (b_i - a_i^Tx)$$

    where $\text{dom }f = \{x \mid a_i^Tx<b_i, i = 1, \dots, m\}$

    💡
    Since $-\log$ is convex and here it is composed with an affine function, each term (and hence the sum) is convex.
  1. any norm of an affine function
    $$f(x) = \|Ax + b\|$$
    💡
    We already know that every norm is a convex function.

Pointwise maximum

if $f_1, \dots, f_m$ are convex, then $f(x) = \max\{f_1(x), \dots, f_m(x)\}$ is convex

  • Proof

    Let $x, y\in \cap_{i = 1}^m \text{dom }f_i$ and $0\le\theta\le 1$.

    Since each $f_i$ is a convex function,

    $$f_i(\theta x+(1-\theta)y)\le\theta f_i(x) + (1 - \theta)f_i(y)$$

    for all $i = 1,\dots, m$.

    Without loss of generality, let $f(\theta x + (1 - \theta)y) = f_k(\theta x + (1-\theta)y)$ for some index $k$.

    Then,

    $$\begin{aligned}f(\theta x + (1 - \theta)y) &= f_k(\theta x + (1-\theta)y) \\ &\le \theta f_k(x) + (1-\theta)f_k(y) \\ &\le \theta f(x) + (1-\theta)f(y)\end{aligned}$$

    Therefore, $f$ is a convex function.

Examples

  1. piecewise-linear function
    $$f(x) = \max_{i = 1, \dots, m} (a_i^Tx + b_i)$$

    is a convex function

  1. sum of $r$ largest components of $x\in \R^n$
    $$f(x) = x_{[1]} + \cdots + x_{[r]}$$

    where $x_{[i]}$ is the $i$-th largest component of $x$,

    is a convex function.

    Indeed,

    $$f(x) = \max\{x_{i_1} + x_{i_2} + \cdots + x_{i_r} \mid 1\le i_1 < i_2 < \cdots < i_r\le n\}$$
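The identity above is easy to check numerically (a sketch comparing the sort-based evaluation against the maximum over all $r$-element subsets):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
x = rng.normal(size=6)
r = 3

via_sort = np.sort(x)[::-1][:r].sum()          # sum of the r largest entries
via_max = max(sum(x[list(idx)])                # max over all r-subsets
              for idx in combinations(range(len(x)), r))
print(np.isclose(via_sort, via_max))           # True
```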

Pointwise supremum

if $f(x, y)$ is convex in $x$ for each $y\in \mathcal A$, then

$$g(x) = \sup_{y\in \mathcal A}f(x, y)$$

is convex

💡
$y$ ranges over an arbitrary index set $\mathcal A$; it can even be an abstract object, e.g. a string.
  • Proof

    Let $x_1, x_2\in \text{dom }g$ and $0\le\theta\le 1$.

    $$g(\theta x_1+ (1 - \theta)x_2) = \sup_{y\in \mathcal A}f(\theta x_1 + (1 - \theta)x_2, y)$$

    Since $f$ is convex in $x$ for each $y\in \mathcal A$,

    $$\begin{aligned}\sup_{y\in \mathcal A}f(\theta x_1 + (1 - \theta) x_2, y) &\le \sup_{y\in \mathcal A}\bigg[\theta f(x_1, y) + (1-\theta)f(x_2, y)\bigg] \\ & \le \sup_{y\in \mathcal A}\theta f(x_1, y) + \sup_{y\in \mathcal A}(1-\theta)f(x_2, y) \\ &= \theta g(x_1) + (1-\theta)g(x_2)\end{aligned}$$

    Therefore, $g$ is a convex function.

    Here we used the following property of the supremum:

    $$\sup_x \big(f(x)+g(x)\big) \le \sup_x f(x) + \sup_x g(x)$$

    Let $\sup f(x) = \alpha$, $\sup g(x)= \beta$.

    Since $\alpha\ge f(x)$ and $\beta \ge g(x)$ for all $x$, we have $\alpha + \beta \ge f(x) + g(x)$ for all $x$.

    So $\alpha + \beta$ is an upper bound of $f + g$, and therefore $\sup_x \big(f(x)+g(x)\big) \le \sup_x f(x) + \sup_x g(x)$.

Epigraph interpretation

In terms of epigraphs, the point-wise supremum of functions corresponds to the intersection of epigraphs: we have

$$\text{epi }g = \bigcap_{y\in \mathcal A}\text{epi }f(\cdot, y)$$

Thus, the result follows from the fact that the intersection of a family of convex sets is convex.

(The original post includes a figure here: for each fixed $y$, the epigraph of $f(\cdot, y)$ is a convex region, and intersecting these regions over $y\in \mathcal A$ yields $\text{epi }g$; evaluating at a given $x$ then picks out the supremum value.)

Examples

  1. support function of a set $C$
    $$S_C(x) = \sup_{y\in C}y^Tx$$
    (In the original figure, $x$ is the normal vector of the supporting hyperplane.)
    💡
    Note that $C$ does not have to be a convex set.
  1. distance to the farthest point in a set $C$
    $$f(x) = \sup_{y\in C}\|x - y\|$$
  1. maximum eigenvalue of a symmetric matrix
    $$\lambda_{\max}(X) = \sup_{\|y\|_2 = 1}y^TXy$$

    where $X\in S^n$

    💡
    $y^TXy$ is a linear function of $X$ for each fixed $y$.
    • Proof

      Since $X \in S^n$, there exists an orthonormal eigenbasis $\{e_1, \dots, e_n\}$ by the spectral theorem. Writing $y = \sum_{i=1}^n y_ie_i$,

      $$Xy = X\sum_{i = 1}^n y_ie_i = \sum_{i = 1}^ny_i\lambda_ie_i \quad\Rightarrow\quad y^TXy = \sum_{i = 1}^n\lambda_i(y_i)^2$$

      First we want to show that $\max\{\lambda_i\mid i = 1, \dots, n\}$ is an upper bound of $y^TXy$.

      Without loss of generality, let $\max\{\lambda_i\mid i = 1, \dots, n\} = \lambda_k$. Since $\|y\|_2 = 1$,

      $$\sum_{i = 1}^n \lambda_i(y_i)^2 \le \lambda_k\bigg(\sum_{i = 1}^n(y_i)^2\bigg) = \lambda_k$$

      The bound is attained at $y = e_k$, so $\lambda_k$ is in fact the supremum. Alternatively, given $\epsilon > 0$, we can exhibit $y_*$ such that

      $$\lambda_k -\epsilon < y_*^TXy_* < \lambda_k$$

      where $\|y_*\|_2 = 1$.

      Without loss of generality, let $\lambda_{l}$ be the second largest eigenvalue ($k \ne l$), assuming $\lambda_l < \lambda_k$ and $\epsilon < 2(\lambda_k - \lambda_l)$.

      Take $y_l = \sqrt{\frac{\epsilon}{2(\lambda_k - \lambda_l)}}$, $y_k = \sqrt{\frac{2(\lambda_k - \lambda_l) - \epsilon}{2(\lambda_k - \lambda_l)}}$, and $y_j = 0$ for $j \ne k, l$.

      Then,

      $$\sum_{i = 1}^n y_i^2 = 1$$

      Moreover,

      $$\begin{aligned}\sum_{i = 1}^n \lambda_i(y_i)^2 &= \lambda_l \frac{\epsilon}{2(\lambda_k- \lambda_l)} + \lambda_k\frac{2(\lambda_k - \lambda_l) -\epsilon}{2(\lambda_k - \lambda_l)} \\ &= \frac{\lambda_l \epsilon + 2\lambda_k(\lambda_k - \lambda_l) -\lambda_k\epsilon}{2(\lambda_k - \lambda_l)} \\ &= \lambda_k - \frac{\epsilon}{2}\end{aligned}$$

      Therefore,

      $$\lambda_{\max}(X) = \sup_{\|y\|_2 = 1}y^TXy$$
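Numerically, random unit vectors approach this supremum from below, and the top eigenvector attains it (a sketch using numpy):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(5, 5))
X = (A + A.T) / 2                        # random symmetric matrix

lam_max = np.linalg.eigvalsh(X)[-1]      # largest eigenvalue

# random unit vectors never exceed lam_max ...
y = rng.normal(size=(10_000, 5))
y /= np.linalg.norm(y, axis=1, keepdims=True)
print(np.max(np.einsum('ij,jk,ik->i', y, X, y)) <= lam_max + 1e-12)  # True

# ... and the top eigenvector attains it
v = np.linalg.eigh(X)[1][:, -1]
print(np.isclose(v @ X @ v, lam_max))    # True
```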

Minimization

We have seen that the maximum and supremum of an arbitrary family of convex functions is convex. It turns out that some special forms of minimization also yield convex functions.

if $f(x, y)$ is convex in $(x, y)$ and $C$ is a convex set, then

$$g(x) = \inf_{y\in C}f(x, y)$$

is convex, provided $g(x) > -\infty$ for all $x$.

Compare the conditions with the point-wise supremum case: here $f(x, y)$ must be jointly convex in $(x, y)$, whereas for the point-wise supremum, convexity in $x$ alone was enough.

  • Proof

    Let $x_1, x_2\in \text{dom }g$ and $0\le\theta\le 1$.

    For a given $\epsilon > 0$, there exist $y_1, y_2\in C$ such that

    $$f(x_1, y_1) < g(x_1) + \epsilon /2, \qquad f(x_2, y_2) < g(x_2) + \epsilon /2$$

    Therefore,

    $$\begin{aligned}g(\theta x_1 + (1- \theta)x_2) &= \inf_{y\in C } f(\theta x_1 + (1- \theta)x_2, y) \\ &\le f(\theta x_1 + (1-\theta)x_2, \theta y_1 + (1-\theta)y_2) \\ &= f(\theta(x_1, y_1) + (1-\theta)(x_2, y_2)) \\ &\le \theta f(x_1, y_1) + (1-\theta) f(x_2, y_2) \\ &\le \theta g(x_1) + (1-\theta)g(x_2) + \epsilon\end{aligned}$$

    Since $\epsilon$ is arbitrary,

    $$g(\theta x_1 + (1-\theta)x_2) \le \theta g(x_1) + (1-\theta)g(x_2)$$

    Therefore, $g$ is a convex function.

    💡
    The fundamental idea: joint convexity of $f$ gives an upper bound that is only slightly bigger than what we want, and the definition of the infimum lets us control how much bigger.

Epigraph interpretation

Assume the infimum over $y\in C$ is attained for each $x$ (e.g. $\{f(x, y) \mid y\in C\}$ is a closed set for each $x$). We have

$$\text{epi }g = \{(x, t) \mid (x,y,t)\in \text{epi }f\text{ for some }y\in C\}$$

Since the projection of a convex set onto some of its components is convex, $\text{epi }g$ is a convex set.

💡
Intuitively, we can interpret this as the projection of $\text{epi }f$ onto the $xt$-plane, which is a convex set.
💡
In general, the union of convex sets is not a convex set, so the same approach does not work for the supremum or maximum case; conversely, this projection argument is why the condition for the minimization case is stronger (i.e. convexity with respect to $(x, y)$).

Example

  1. $f(x, y) = x^TAx + 2x^TBy + y^TCy$ with
    $$\begin{bmatrix}A & B \\ B^T &C\end{bmatrix} \succeq 0, \quad C\succ 0$$

    We can express $g(x) = \inf_y f(x, y)$ as

    $$g(x) = x^T(A-BC^{\dagger}B^T)x$$

    where $C^\dagger$ is the pseudo-inverse of $C$ (here $C \succ 0$, so $C^\dagger = C^{-1}$). Since $g$ is convex, $A-BC^{\dagger}B^T\succeq 0$.

    When $C$ is invertible, $A - BC^{-1}B^T$ is called the Schur complement of $C$ in the matrix

    $$\begin{bmatrix}A & B \\ B^T &C\end{bmatrix}$$

    (A numerical check appears after this list.)
    💡
    If we minimize a convex quadratic function over some of its variables, the result is again a convex quadratic form.
    Schur complement
    For the derivation, see pages 1-2 of the material linked below: writing the block matrix as a system of linear equations and solving it makes the form arise naturally.
    http://ranking.uos.ac.kr/blog_page/post_src/post20231031.html
  1. distance to a set: $\text{dist}(x, S) =\inf_{y\in S}\|x-y\|$ is convex if $S$ is convex
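Here is the numerical check of example 1 promised above (a sketch: build a random positive definite block matrix, minimize over $y$ in closed form at $y^* = -C^{-1}B^Tx$, and compare with the Schur complement quadratic form):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 3, 2
M = rng.normal(size=(n + m, n + m))
K = M @ M.T                                  # random PSD block matrix [[A, B], [B', C]]
A, B, C = K[:n, :n], K[:n, n:], K[n:, n:]    # C > 0 almost surely

x = rng.normal(size=n)
y_star = -np.linalg.solve(C, B.T @ x)        # minimizer of f(x, .) over y

f = lambda x, y: x @ A @ x + 2 * x @ B @ y + y @ C @ y
schur = A - B @ np.linalg.solve(C, B.T)      # A - B C^{-1} B'
print(np.isclose(f(x, y_star), x @ schur @ x))      # g(x) = x' (Schur) x
print(np.all(np.linalg.eigvalsh(schur) >= -1e-9))   # Schur complement is PSD
```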

Composition with scalar functions

composition of $g:\R^n\to \R$ and $h : \R \to \R$:

$$f(x) = h(g(x))$$

$f$ is convex if

  1. $g$ convex, $h$ convex, $\tilde h$ non-decreasing
  1. $g$ concave, $h$ convex, $\tilde h$ non-increasing

where $\tilde h$ is the extended-value extension of $h$

💡
There is no assumption of twice differentiability of $h$ and $g$.

If we assume that $n = 1$ and that $g, h$ are twice differentiable, we can get some intuition:

$$f''(x) = h''(g(x))g'(x)^2 + h'(g(x))g''(x)$$

Consider this example:

$$h(x) = x^2, \quad x\in \R_+$$

This function is trivially increasing on its domain. However,

$$\tilde h(x) = \begin{cases}x^2 & x\in \R_+ \\ \infty & \text{otherwise}\end{cases}$$

is not a non-decreasing function (it jumps down from $\infty$ to $0$ at $x = 0$).

Interpretation of $\tilde h$

To say that $\tilde h$ is non-decreasing means that for any $x, y\in \R$ with $x<y$, we have $\tilde h(x) \le \tilde h(y)$.

In particular, $y\in \text{dom }h \Rightarrow x\in \text{dom }h$ whenever $x < y$.

Therefore, the domain of $h$ extends infinitely in the negative direction; it is either $\R$ or an interval of the form $(-\infty, a)$ or $(-\infty, a]$.

Similarly, $\tilde h$ non-increasing means that the domain of $h$ extends infinitely in the positive direction; it is either $\R$ or an interval of the form $(a, \infty)$ or $[a, \infty)$.

  • Proof

    Let $g$ be convex, $h$ convex, $\tilde h$ non-decreasing, and $x, y\in \text{dom }f$, $0\le\theta\le 1$.

    Since $x, y\in \text{dom }f$, we have $x, y\in \text{dom }g$.

    Moreover, since $g$ is convex,

    $$g(\theta x + (1-\theta)y)\le \theta g(x) + (1-\theta)g(y)$$

    Similarly, since $x, y\in \text{dom }f$,

    $$g(x), g(y)\in \text{dom }h \\ \Rightarrow \theta g(x) + (1-\theta)g(y)\in \text{dom }h \text{ (since dom }h\text{ is a convex set)} \\ \Rightarrow g(\theta x + (1-\theta)y)\in \text{dom }h \text{ (since }\tilde h \text{ is non-decreasing)}$$

    Since $\tilde h$ is non-decreasing,

    $$h(g(\theta x + (1-\theta)y)) \le h(\theta g(x) + (1-\theta)g(y))$$

    In addition, since $\theta g(x) + (1-\theta) g(y)\in \text{dom }h$ and $h$ is a convex function,

    $$h(\theta g(x) + (1-\theta)g(y)) \le \theta h(g(x))+ (1-\theta)h(g(y))$$

    Therefore,

    $$h(g(\theta x + (1-\theta)y)) \le \theta h(g(x)) + (1-\theta)h(g(y))$$

    Therefore $h\circ g$ is a convex function.

Examples

  1. $\exp g(x)$ is convex if $g$ is convex
    💡
    We already know that the exponential function is convex and increasing on $\R$.
  1. $1/g(x)$ is convex if $g$ is concave and positive
    💡
    We already know that $1/x$ is convex and non-increasing on $\R_{++}$.

Vector composition

composition of $g:\R^n\to \R^k$ and $h: \R^k \to \R$:

$$f(x) = h(g(x)) = h(g_1(x), g_2(x), \dots, g_k(x))$$

$f$ is convex if

  1. $g_i$ affine, $h$ convex
  1. $g_i$ convex, $h$ convex, $\tilde h$ non-decreasing in each argument
  1. $g_i$ concave, $h$ convex, $\tilde h$ non-increasing in each argument

where $\tilde h$ is the extended-value extension of $h$

  • Proof

    Without loss of generality, assume that each $g_i$ is convex, $h$ is convex, and $\tilde h$ is non-decreasing in each argument.

    Let $x, y\in \text{dom }f$ and $0\le\theta\le 1$ (i.e. $x, y\in \text{dom }g_i$ for each $i$ and $g(x), g(y)\in \text{dom }h$).

    Since each $g_i$ is a convex function and $x, y\in \text{dom }g_i$,

    $$g_i(\theta x + (1- \theta)y) \le \theta g_i(x) + (1-\theta) g_i(y), \quad \forall i$$

    Since $h$ is convex, $\text{dom }h$ is a convex set, and $g(x), g(y)\in \text{dom }h$, so

    $$\theta g(x) + (1-\theta)g(y)\in \text{dom }h$$

    Moreover, since $\tilde h$ is non-decreasing in each argument and $g(\theta x + (1-\theta)y) \le \theta g(x) + (1-\theta)g(y)$ componentwise,

    $$\tilde h(g(\theta x + (1-\theta)y)) \le \tilde h(\theta g(x) + (1-\theta)g(y)) < \infty$$

    and therefore

    $$g(\theta x + (1-\theta)y) \in \text{dom }h$$

    In addition, replacing one argument at a time and using monotonicity in each argument,

    $$\begin{aligned}h(g(\theta x + (1-\theta)y)) &= h(g_1(\theta x + (1-\theta)y), \cdots , g_k(\theta x + (1-\theta)y)) \\ &\le h(\theta g_1(x) + (1-\theta)g_1(y), \cdots, g_k(\theta x + (1-\theta)y)) \\ &\le h(\theta g_1(x) + (1-\theta)g_1(y), \cdots, \theta g_k(x) + (1- \theta)g_k(y)) \\ &= h(\theta(g_1(x), \cdots, g_k(x)) + (1-\theta)(g_1(y), \cdots, g_k(y))) \\ &= h(\theta g(x) + (1-\theta)g(y)) \\ & \le \theta h(g(x)) + (1-\theta)h(g(y)) \quad \text{(since }h \text{ is convex)}\end{aligned}$$

    Therefore $h\circ g$ is a convex function.

    If each $g_i$ is an affine function,

    $$g_i(\theta x + (1-\theta)y) = \theta g_i(x) + (1-\theta) g_i(y)$$

    so $g(\theta x+ (1-\theta)y) = \theta g(x) + (1-\theta)g(y) \in \text{dom }h$ holds with equality, and no monotonicity condition on $\tilde h$ is required in this case.

We can actually interpret some convexity-preserving operations as vector compositions.

  1. sum of convex functions

    We can interpret $h$ as the summation of its arguments. That function is convex (indeed, it is affine), and $\tilde h$ is non-decreasing in each argument.

    Therefore, the composition is a convex function.

  1. maximum of convex functions

    We can interpret $h$ as the maximum of its arguments. The max function is convex, and $\tilde h$ is non-decreasing in each argument.

    Therefore, the composition is a convex function.

Examples

  1. $\sum_{i = 1}^m \log g_i(x)$ is concave if $g_i$ are concave and positive
    💡
    Since the log function is concave and non-decreasing.
  1. $\log\sum_{i = 1}^m \exp g_i(x)$ is convex if $g_i$ are convex
    💡
    Since $\log\sum_{i = 1}^m \exp z_i$ is a convex function and non-decreasing in each argument.

Perspective

the perspective of a function $f:\R^n\to \R$ is the function $g:\R^n\times \R \to \R$,

$$g(x, t) = tf(x/t), \quad \text{dom } g = \{(x, t)\mid x/t\in \text{dom }f, t> 0\}$$

if $f$ is convex, then $g$ is convex

  • Proof
    For $t > 0$,
    $$\begin{aligned}(x, t, s)\in \text{epi }g &\Leftrightarrow tf(x/t)\le s \\ &\Leftrightarrow f(x/t) \le s/t \\ &\Leftrightarrow (x/t, s/t)\in \text{epi }f\end{aligned}$$

    So $\text{epi }g$ is the inverse image of $\text{epi }f$ under the perspective mapping $(x, t, s) \mapsto (x/t, s/t)$. Since $f$ is a convex function, $\text{epi }f$ is a convex set, and the inverse image of a convex set under the perspective mapping is convex. So $\text{epi }g$ is a convex set.

    Therefore, $g$ is a convex function.

Example

  1. $f(x) = x^Tx$ is convex, so $g(x, t) = x^Tx/t$ is convex for $t>0$
  1. negative logarithm: $f(x) = -\log x$ is convex; hence the relative entropy (a.k.a. KL divergence)
    $$\begin{aligned}g(x, t) &= t\log t - t\log x \\ &= -t\log\frac{x}{t}\end{aligned}$$

    is convex on $\R_{++}^2$

  1. if $f$ is convex, then
    $$g(x) = (c^Tx+d)f((Ax+b)/(c^Tx + d))$$

    is convex on $\{x\mid c^Tx+d > 0, (Ax+b)/(c^Tx+d)\in \text{dom }f\}$

    💡
    We already know that $f(Ax + b)$ is convex; the perspective adds the $(c^Tx+d)$ scaling.
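A randomized check that the relative entropy $g(x, t) = t\log(t/x)$ is jointly convex on $\R_{++}^2$ (a refutation-style sketch, like the line-restriction test earlier):

```python
import numpy as np

g = lambda x, t: t * np.log(t / x)   # relative entropy, perspective of -log

rng = np.random.default_rng(8)
ok = True
for _ in range(10_000):
    (x1, t1), (x2, t2) = rng.uniform(0.1, 5.0, size=(2, 2))
    th = rng.uniform()
    lhs = g(th * x1 + (1 - th) * x2, th * t1 + (1 - th) * t2)
    rhs = th * g(x1, t1) + (1 - th) * g(x2, t2)
    ok &= lhs <= rhs + 1e-9
print(ok)  # True: no convexity violation found
```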

Conjugate function

the conjugate of a function $f$ is

$$f^*(y) = \sup_{x\in \text{dom }f}(y^Tx - f(x))$$

An economic interpretation: let $f(x)$ be a production cost, $x$ the quantity of product, and $y$ the unit price. Then $y^Tx - f(x)$ is the profit, and $f^*(y)$ is the maximum profit attainable at price $y$.

Note that $f^*$ is convex (even if $f$ is not).

  • Proof

    For each fixed $x$, $y^Tx - f(x)$ is affine with respect to $y$.

    So $f^*$ is the point-wise supremum of a family of affine (hence convex) functions of $y$; its convexity is a corollary of the fact that the supremum of convex functions is convex.

💡
The conjugate function, also known as the Legendre-Fenchel transform, is a powerful tool in convex analysis that allows us to recover information about a given function. It provides a way to characterize the dual properties of a function, revealing its convexity, smoothness, and other important properties.

Examples

  1. negative logarithm : f(x)=logxf(x) = -\log x
    f(x)=supx>0(xy+logx)={1log(y)y<0otherwise\begin{aligned}f^*(x) &= \sup_{x>0}(xy + \log x) \\ &= \begin{cases}-1-\log(-y) & y< 0 \\ \infty &\text{otherwise}\end{cases}\end{aligned}
  1. strictly convex quadratic : f(x)=(1/2)xTQxf(x) = (1/2)x^TQx with QS++nQ\in S_{++^n}
    f(y)=supx(yTx(1/2)xTQx)=12yTQ1y\begin{aligned}f^*(y) &= \sup_x (y^Tx - (1/2)x^TQx) \\ &= \frac{1}{2}y^TQ^{-1}y\end{aligned}
    💡
    Conjugate of quadratic form is the inverse matrix
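A numerical check of example 2 (a sketch: the supremum is attained at $x^* = Q^{-1}y$, whose value we compare against the closed form $\frac{1}{2}y^TQ^{-1}y$):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
M = rng.normal(size=(n, n))
Q = M @ M.T + np.eye(n)          # random Q in S++

y = rng.normal(size=n)
x_star = np.linalg.solve(Q, y)   # maximizer of y'x - (1/2) x'Qx
value = y @ x_star - 0.5 * x_star @ Q @ x_star
print(np.isclose(value, 0.5 * y @ np.linalg.solve(Q, y)))  # True
```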

Quasi-convex functions

$f:\R^n\to \R$ is quasi-convex if $\text{dom }f$ is convex and the sublevel sets

$$S_\alpha = \{x\in \text{dom }f\mid f(x) \le \alpha\}$$

are convex for all $\alpha$

A function is quasi-concave if $-f$ is quasi-convex. A function that is both quasi-convex and quasi-concave is called quasi-linear.

Convex functions have convex sublevel sets, and so are quasi-convex.

  • Proof

    We already know that the epigraph of a convex function is a convex set. Moreover, $\{(x, y) \mid y \le \alpha\}$ is a convex set (a halfspace), and the intersection of convex sets is a convex set.

    Therefore,

    $$\text{epi }f \cap \{(x, y)\mid y\le \alpha\}$$

    is a convex set.

    Moreover, we already know that the projection of a convex set onto some of its coordinates is also a convex set. Projecting onto the $x$-coordinates gives that

    $$\{x \in \text{dom }f\mid f(x) \le \alpha\}$$

    is a convex set.

    Therefore, every convex function is a quasi-convex function.

Examples

  1. $\sqrt {|x|}$ is quasi-convex on $\R$
  1. $\text{ceil}(x) = \inf \{z\in \Z\mid z\ge x\}$ is quasi-linear
  1. $\log x$ is quasi-linear on $\R_{++}$
  1. $f(x_1, x_2) = x_1x_2$ is quasi-concave on $\R_{++}^2$
  1. linear-fractional function
    $$f(x) = \frac{a^Tx + b}{c^Tx + d}, \quad \text{dom }f = \{x \mid c^Tx + d > 0\}$$

    is quasi-linear

    💡
    Note that each sublevel set is the intersection of the domain with a halfspace.
  1. distance ratio
    $$f(x) = \frac{\|x-a\|_2}{\|x-b\|_2}, \quad \text{dom }f = \{x \mid \|x-a\|_2\le \|x-b\|_2\}$$

    is quasi-convex

    • Proof

      We want to show that $\{x\in \text{dom }f \mid f(x) \le t\}$ is a convex set for each $t$. (For $t < 0$ the sublevel set is empty, hence convex, so assume $t \ge 0$.)

      On $\text{dom }f$,

      $$f(x) \le t \Leftrightarrow \|x-a\|_2 \le t\|x-b\|_2$$

      Squaring both sides (both are non-negative) and rearranging,

      $$(1-t^2)x^Tx - 2(a - t^2b)^Tx + a^Ta - t^2b^Tb \le 0$$

      For $0\le t < 1$ this describes a Euclidean ball, and for $t = 1$ a halfspace; both are convex sets. For $t > 1$, note that $f(x)\le 1$ on all of $\text{dom }f$, so the sublevel set is $\text{dom }f$ itself, which is a halfspace.

      In every case, the sublevel set is the intersection of $\text{dom }f$ with a convex set, hence convex.

      Therefore, $f$ is a quasi-convex function.

Properties

  1. modified Jensen's inequality: for quasi-convex $f$,
    $$0\le \theta \le 1 \Rightarrow f(\theta x + (1-\theta)y) \le \max \{f(x), f(y)\}$$
    • Proof

      Let $\alpha = \max\{f(x), f(y)\}$ and

      $$A =\{t\in \text{dom }f \mid f(t) \le \alpha\}$$

      Since $f$ is a quasi-convex function, the sublevel set $A$ is a convex set.

      Trivially, $x, y\in A$. Therefore, for any $0\le\theta\le 1$,

      $$\theta x + (1-\theta)y \in A$$

      So,

      $$f(\theta x + ( 1 - \theta)y) \le \max \{f(x), f(y)\}$$
  1. first-order condition: suppose $f :\R^n \to \R$ is differentiable and $\text{dom }f$ is convex. Then $f$ is quasi-convex if and only if
    $$\forall x, y\in \text{dom }f: \quad f(y)\le f(x) \Rightarrow \nabla f(x)^T (y- x)\le 0$$
    • Proof

      Suppose $f$ is quasi-convex. Then

      $$A = \{y\in \text{dom }f\mid f(y) \le f(x)\}$$

      is a convex set.

      Suppose, for contradiction, that $\exists y\in A$ such that

      $$\nabla f(x)^T(y - x) > 0$$

      Then the directional derivative of $f$ at $x$ along $y - x$ is positive, so $\exists\, 0< \epsilon < 1$ such that

      $$f(x+ (y - x)\epsilon) > f(x)$$

      Since $x, y\in A$ and $A$ is a convex set,

      $$x + (y - x) \epsilon \in A$$

      This contradicts the definition of $A$.

      Therefore,

      $$\forall x, y\in \text{dom }f: \quad f(y)\le f(x) \Rightarrow \nabla f(x)^T (y- x)\le 0$$

      Conversely, we want to show that for a given $a$,

      $$\{x \in \text{dom }f \mid f(x) \le a\}$$

      is a convex set.

      1. case 1: $\exists x\in \text{dom }f$ such that $f(x) = a$

        By assumption, for each such $x$ on the level surface,

        $$y \in \{x' \in \text{dom }f \mid f(x') \le a\} \Rightarrow \nabla f(x)^T (y - x)\le 0 \Rightarrow y\in \text{dom }f\cap \{y \mid \nabla f(x)^T(y- x)\le 0\}$$

        Since $\nabla f(x)^T(y - x)$ is affine with respect to $y$, $\{y \mid \nabla f(x)^T(y- x)\le 0\}$ is a halfspace, hence a convex set. So the sublevel set admits a supporting halfspace at every point of the level surface $\{x\mid f(x) = a\}$; combined with the continuity of $f$, this forces $\{x\in \text{dom }f \mid f(x)\le a\}$ to be convex.

      1. case 2: $\nexists x\in \text{dom }f$ such that $f(x) = a$

        Since $f$ is differentiable, it is continuous, and $\text{dom }f$ is convex, hence connected. Therefore, by the intermediate value theorem, there are only two cases.

        1. $f(x) > a, \forall x$

          In this case, the sublevel set is empty, so it is vacuously convex.

        1. $f(x) < a, \forall x$

          In this case, $\{x\in \text{dom }f\mid f(x) \le a\} = \text{dom }f$, which we already know is a convex set.

        Therefore $\{x\in \text{dom }f \mid f(x)\le a\}$ is a convex set.

    💡
    Taking into account that the sublevel set of a quasi-convex function is a convex set, one can read the condition as saying that at each point $x$, the whole sublevel set $\{y \mid f(y) \le f(x)\}$ lies on one side of the hyperplane $\{y \mid \nabla f(x)^T(y - x) = 0\}$.
💡
Note that sums of quasi-convex functions are not necessarily quasi-convex.

Log-concave and log-convex functions

a positive function $f$ is log-concave if $\log f$ is concave:

$$f(\theta x + (1-\theta)y)\ge f(x)^\theta f(y)^{1-\theta}$$

for $0\le \theta \le 1$

$f$ is log-convex if $\log f$ is convex.

Examples

  1. powers: $x^a$ on $\R_{++}$ is log-convex for $a \le 0$, log-concave for $a \ge 0$
    • proof
      $\log x^a = a\log x$, and $\log x$ is concave. If $a \ge 0$,

      $$a\log(\theta x + (1-\theta)y) \ge a\theta\log x + a(1-\theta)\log y$$

      so $x^a$ is log-concave.

      If $a \le 0$, the inequality flips:

      $$a\log(\theta x + (1-\theta)y) \le a\theta\log x + a(1-\theta)\log y$$

      so $x^a$ is log-convex.

  1. many common probability densities are log-concave (e.g. the normal distribution)
    $$f(x) = \frac{1}{\sqrt{(2\pi)^n\det\Sigma}}e^{-\frac{1}{2}(x-\bar x)^T\Sigma^{-1}(x-\bar x)}$$
    💡
    If we take the logarithm of the density above, it is just a concave quadratic function plus a constant.
  1. the cumulative Gaussian distribution function $\Phi$ is log-concave
    $$\Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{-u^2/2}du$$
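A numerical check that $\log \Phi$ is concave (a sketch; uses scipy's `log_ndtr`, which computes $\log \Phi$ stably):

```python
import numpy as np
from scipy.special import log_ndtr   # log of the standard normal CDF

rng = np.random.default_rng(9)
ok = True
for _ in range(10_000):
    x, y = rng.normal(scale=3.0, size=2)
    th = rng.uniform()
    # concavity of log(Phi): value at the average >= average of values
    ok &= (log_ndtr(th * x + (1 - th) * y)
           >= th * log_ndtr(x) + (1 - th) * log_ndtr(y) - 1e-9)
print(ok)  # True: no concavity violation found
```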

Properties

  1. twice differentiable $f$ with convex domain is log-concave if and only if

    $$f(x)\nabla^2f(x) \preceq \nabla f(x) \nabla f(x)^T$$

    for all $x\in \text{dom }f$, or equivalently

    $$\nabla^2 f(x) \preceq \frac{\nabla f(x) \nabla f(x)^T}{f(x)}$$

    for all $x\in \text{dom }f$.

    For a concave function, the right side of this inequality would be $0$. In the log-concave case, the Hessian is instead bounded above by a rank-one matrix, so it can have at most one positive eigenvalue.

    💡
    For a rank-one matrix $A = uv^T$, there is exactly one possibly non-zero eigenvalue, and it is equal to the dot product $u^Tv$.
  1. product of log-concave functions is log-concave
    💡
    basically because the sum of concave functions is concave
  1. sum of log-concave functions is not always log-concave
    💡
    e.g. a mixture of log-concave distributions is not always log-concave
  1. integration: if $f:\R^n\times \R^m \to \R$ is log-concave, then

    $$g(x) = \int f(x,y)dy$$

    is log-concave (this follows from the Prékopa–Leindler inequality).

    Prékopa–Leindler inequality
    In mathematics, the Prékopa–Leindler inequality is an integral inequality closely related to the reverse Young's inequality, the Brunn–Minkowski inequality and a number of other important and classical inequalities in analysis. The result is named after the Hungarian mathematicians András Prékopa and László Leindler.
    https://en.wikipedia.org/wiki/Prékopa–Leindler_inequality

Consequences of integration property

  1. the convolution $f*g$ of log-concave functions $f, g$ is log-concave
    $$(f*g)(x) = \int f(x-y)g(y)dy$$
  1. if $C\subseteq \R^n$ is convex and $y$ is a random variable with a log-concave pdf, then
    $$f(x) = \text{Prob}(x + y\in C)$$

    is log-concave

    • Proof

      Write $f(x)$ as an integral of a product of log-concave functions:

      $$f(x) =\int g(x+ y)p(y)dy$$

      where

      $$g(u) = \begin{cases}1 & u\in C \\ 0 & u\notin C\end{cases}$$

      and $p$ is the pdf of $y$. (The indicator of a convex set is log-concave, and log-concavity is preserved under products and integration.)

Convexity with respect to generalized inequalities

Suppose $K\subseteq \R^m$ is a proper cone with associated generalized inequality $\preceq_K$. We say $f:\R^n\to \R^m$ is $K$-convex if for all $x, y$, and $0\le \theta \le 1$,

$$f(\theta x + (1-\theta)y)\preceq_K\theta f(x) + (1-\theta) f(y)$$

The function is strictly $K$-convex if

$$f(\theta x + (1-\theta)y)\prec_K\theta f(x) + (1-\theta) f(y)$$

for all $x\ne y$ and $0 < \theta <1$

💡
These definitions reduce to ordinary convexity and strict convexity when $m = 1$ and $K = \R_{+}$.
이 글이 도움이 되었다면 공감 부탁드립니다.