metadata | question | solution
---|---|---|
2008-q3 | Let X_0 = 1 and for n \geq 1, let X_n = \max(X_{n-1} - Y_n, 0) + 1, where Y_1, Y_2, \ldots are i.i.d. non-negative integer valued random variables with a finite expectation.
(a) Solve this recursion to obtain an expression for X_n in terms of the random walk process S_k = \sum_{i=1}^k (1-Y_i), k=0,1,2,\ldots.
(b) Give necessary and sufficient conditions for X_n to have a (proper, i.e. non-degenerate) limiting distribution as n \to \infty.
(c) Find this limiting distribution. | $X_0=1$ and $Y_1, Y_2, \ldots$ are \textit{i.i.d.} non-negative integer valued random variables with finite expectation. $X_n = \max(X_{n-1}-Y_n,0) +1$, for all $n \geq 1$.\begin{enumerate}[label=(\alph*)] \item Let $Z_i=1-Y_i$ and $S_k=\sum_{i=1}^k Z_i$ with $S_0:=0$. \begin{align*} X_n = \max(X_{n-1}-Y_n,0) +1 = \max(X_{n-1}+Z_n,1) &= \max(\max(X_{n-2}+Z_{n-1},1)+Z_n,1) \\ &= \max(X_{n-2}+Z_{n-1}+Z_n,1+Z_n,1) \end{align*} Continuing to open the recursion in this way we can see that for $n \geq 1$, $$ X_n = \max \left( X_0 + \sum_{k=1}^n Z_k, 1+\sum_{k=2}^nZ_k, 1+\sum_{k=3}^n Z_k, \ldots, 1+Z_n,1\right) = \max \left( 1+S_n, 1+(S_n-S_1), \ldots, 1+(S_n-S_{n-1}), 1\right).$$ Hence, $$ X_n = 1+ \max_{k=0}^n (S_n-S_k), \; \forall \; n \geq 1.$$ \item Since $(Z_1, \ldots, Z_n) \stackrel{d}{=} (Z_n, \ldots, Z_1)$, we have $$ (S_n-S_0, S_n-S_1, \ldots, S_n-S_n) = \left(\sum_{i=1}^n Z_i, \sum_{i=2}^n Z_i, \ldots, Z_n, 0\right) \stackrel{d}{=} \left(\sum_{i=1}^n Z_i, \sum_{i=1}^{n-1} Z_i, \ldots, Z_1, 0\right) = (S_n,S_{n-1}, \ldots, S_1,0).$$ Therefore, $$ X_n = 1+ \max_{k=0}^n (S_n-S_k) \stackrel{d}{=} 1+ \max_{k=0}^n S_k.$$ By RW theory we have $\limsup_{n \to \infty} S_n = \infty $ almost surely if $\mathbb{E}(Z_1) > 0$. If $\mathbb{E}(Z_1)=0$ and $\mathbb{P}(Z_1=0)<1$, the same conclusion holds true. If $Z_1 \equiv 0$ with probability $1$, we have $X_n=1$ almost surely for all $n$. Thus in all these cases either $X_n$ converges to $\infty$ in probability (so no proper limiting distribution exists) or $X_n=1$ almost surely for all $n$ (a degenerate limit). The only remaining case is $\mathbb{E}(Z_1) <0$, i.e. $\mathbb{E}(Y_1)>1$. In that case $$ X_n \stackrel{d}{\longrightarrow} 1+ S_{max},$$ where $S_{max} = \sup_{n \geq 0} S_n < \infty$, almost surely. We shall show in the next part that this is a non-degenerate random variable provided $\mathbb{P}(Z_1=1)=\mathbb{P}(Y_1=0) >0$. If $\mathbb{P}(Z_1 =1)=0$, then $Z_1 \leq 0$ almost surely and hence $S_{max}=S_0=0$ almost surely. \item Clearly $\sup_{n \geq 0} S_n$ is a non-negative integer valued random variable. Fix $k \geq 1$ and define $\tau_k := \inf \left\{ n \geq 0 \mid S_n \geq k \right\}$. Since the random variables $Z_i$ take values in the set $\left\{1,0,-1,\ldots\right\}$ and on the event $(\tau_k < \infty)$ we have $k \leq S_{\tau_k} = S_{\tau_k-1} + Z_{\tau_k} \leq S_{\tau_k-1} + 1 \leq k-1+1=k$, we conclude that $S_{\tau_k}=k$ on the event $(\tau_k<\infty)$. Now suppose there exists $\theta_0 >0$ such that $\mathbb{E}(e^{\theta_0Z_1})=1$. To prove existence of such $\theta_0$ consider $\psi(\theta) := \mathbb{E}(e^{\theta Z_1})$ for $\theta \geq 0$. This is finitely defined since $e^{\theta Z_1} \leq e^{\theta}$ almost surely. Then $\psi(0)=1$, $\psi^{\prime}(0)=\mathbb{E}(Z_1)<0$, where the derivative is the right-hand one. On the other hand, $\psi(\theta) \geq \exp(\theta)\mathbb{P}(Z_1=1) \to \infty$ as $\theta \to \infty$. Together they imply the existence of such $\theta_0$ after observing that $\psi$ is continuous, which can be shown via the DCT. Now clearly $\left\{e^{\theta_0 S_n}, n \geq 0\right\}$ is a MG with respect to its canonical filtration. Also $0 \leq \exp(\theta_0S_{\tau_k \wedge n}) \leq e^{\theta_0 k}$ and hence is uniformly integrable. Finally from RW theory we know that $S_n$ converges to $-\infty$ almost surely since $\mathbb{E}(Z_1)<0$ and consequently $\exp(\theta_0 S_{\tau_k \wedge n}) $ converges almost surely to $\exp(\theta_0 S_{\tau_k})\mathbbm{1}(\tau_k < \infty)$. 
Using OST, we conclude $$ 1= \mathbb{E}\exp(\theta_0 S_0) = \mathbb{E}\left[\exp(\theta_0 S_{\tau_k})\mathbbm{1}(\tau_k < \infty) \right] = e^{\theta_0 k}\mathbb{P}(\tau_k < \infty) = e^{\theta_0 k}\mathbb{P} \left(S_{max} \geq k \right).$$ Since, this is true for all $k \geq 1$, we have $S_{max} \sim \text{Geo}(e^{-\theta_0}).$ In other words, $$ X_n \stackrel{d}{\longrightarrow} 1+ \text{Geo}(e^{-\theta_0}),$$ where $\theta_0 >0$ such that $\mathbb{E}\exp(-\theta_0 Y_1) = \exp(-\theta_0)$. \end{enumerate} |
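A quick Monte Carlo sanity check of this limit (an illustrative sketch, not part of the original solution; the choice $Y_i \sim \text{Poisson}(2)$, which has $\mathbb{E}(Y_1)=2>1$ and $\mathbb{P}(Y_1=0)>0$, and all numerical settings below are assumptions made only for the illustration):

```python
# Sketch: run X_n = max(X_{n-1} - Y_n, 0) + 1 with Y_i ~ Poisson(2) and compare
# P(X_n >= 1 + k) with exp(-theta0 * k), where theta0 > 0 solves
# E[exp(-theta0 * Y_1)] = exp(-theta0)  (as in the solution above).
import numpy as np
from scipy.optimize import brentq

lam, n_steps, n_paths = 2.0, 200, 100_000
rng = np.random.default_rng(0)

# For Poisson(lam): E[exp(-t*Y)] = exp(lam*(exp(-t)-1)), so solve lam*(e^{-t}-1) + t = 0.
theta0 = brentq(lambda t: lam * (np.exp(-t) - 1.0) + t, 1e-8, 10.0)

X = np.ones(n_paths)
for _ in range(n_steps):
    X = np.maximum(X - rng.poisson(lam, size=n_paths), 0.0) + 1.0

for k in range(5):
    print(k, (X >= 1 + k).mean(), np.exp(-theta0 * k))  # empirical vs limiting tail
```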
2008-q4 | Suppose the random variable V of zero mean is such that \mathbb{E}[V^2 \exp(uV)] \leq \kappa \mathbb{E}[\exp(uV)] < \infty, for some finite \kappa and all u \in [0,1].
(a) Show that then \mathbb{E}[\exp(V)] \leq \exp(\kappa/2).
(b) Suppose the discrete-time martingale Z_n = \sum_{k=1}^n U_k for the filtration \mathcal{F}_n is such that
\mathbb{E}[U_k^2 e^{\lambda U_k}|\mathcal{F}_{k-1}] \leq \Gamma \mathbb{E}[e^{\lambda U_k}|\mathcal{F}_{k-1}] < \infty ,
for any |\lambda| \leq 1 and positive integer k. Show that then M_n = \exp(\lambda Z_n - \Gamma \lambda^2 n/2) is a super-martingale for \mathcal{F}_n.
(c) Deduce that for any such (Z_n, \mathcal{F}_n) and non-negative a \leq \Gamma n,
\mathbb{P}(|Z_n| \geq a) \leq 2 \exp\left(-\frac{a^2}{2\Gamma n}\right). | \begin{enumerate}[label=(\alph*)] \item Let $g(u) := \log \mathbb{E}\exp(uV)$ for all $u \in [0,1]$. We are given that $g(u)$ is finite for all $u \in [0,1]$. We are also given that $\mathbb{E}(V^2\exp(uV))< \infty$ for $u \in [0,1]$. This also implies that $$ \mathbb{E}|V\exp(uV)| \leq \sqrt{\mathbb{E}(V^2\exp(uV)) \mathbb{E}(\exp(uV))} < \infty,$$ where we have used the \textit{Cauchy-Schwarz Inequality}. Now for all $u \in [0,1]$, $$ \dfrac{\partial }{\partial u} g(u) = \dfrac{\dfrac{\partial }{\partial u} \mathbb{E}\exp(uV)}{\mathbb{E}\exp(uV)} = \dfrac{\mathbb{E}(V\exp(uV))}{\mathbb{E}\exp(uV)}, $$ and $$ \dfrac{\partial^2 }{\partial u^2} g(u) = \dfrac{\partial }{\partial u} \dfrac{\mathbb{E}(V\exp(uV))}{\mathbb{E}\exp(uV)} = \dfrac{\mathbb{E}\exp(uV)\mathbb{E}(V^2\exp(uV)) - (\mathbb{E}(V\exp(uV)))^2 }{(\mathbb{E}\exp(uV))^2} \leq \dfrac{\mathbb{E}(V^2\exp(uV))}{\mathbb{E}\exp(uV)}. $$ The interchange of differentiation and expectation in the above derivations can be justified by \cite[Proposition 2.5.5]{cha}. The derivative at $0$ is the right-hand derivative whereas the derivative at $1$ is the left-hand one. By assumption, $g^{\prime \prime}(u) \leq \kappa$ for all $u \in [0,1]$. Hence, $g(u) \leq g(0) + g^{\prime}(0)u + \kappa u^2/2$, for all $u \in [0,1]$. Note that $g(0) = \log 1 =0$ and $g^{\prime}(0) = \mathbb{E}(V) =0$. Hence, $g(u) \leq \kappa u^2/2$ for all $u \in [0,1]$. This shows that $\mathbb{E}\exp(uV) \leq \exp(\kappa u^2/2)$, for all $u \in [0,1]$. Putting $u=1$, we get $\mathbb{E}\exp(V) \leq \exp(\kappa/2)$. \item Take $\lambda \in [0,1]$. Then using part (a) for the random variable $U_n$ (probability distribution being R.C.P.D. of $U_n$ conditional on $\mathcal{F}_{n-1}$), we get $$ \mathbb{E} \left[\exp(\lambda U_n) \mid \mathcal{F}_{n-1}\right]\leq \exp(\Gamma \lambda^2/2), \text{ almost surely}.$$ Note that we can do this since $\mathbb{E}(U_n \mid \mathcal{F}_{n-1})=0$ almost surely by the Martingale assumption. Take $\lambda \in [-1,0]$. We can apply part (a) for $-\lambda$ and $-U_n$ (taking again the R.C.P.D. of $-U_n$ conditional on $\mathcal{F}_{n-1}$ ) and conclude $ \mathbb{E} \left[\exp(\lambda U_n) \mid \mathcal{F}_{n-1}\right] \leq \exp(\Gamma \lambda^2/2), \text{ almost surely}.$ Consequently, we have $$ \mathbb{E} \left[\exp(\lambda U_n) \mid \mathcal{F}_{n-1}\right]\leq \exp(\Gamma \lambda^2/2), \text{ almost surely}, \; \forall \;n \geq 0, |\lambda | \leq 1. $$ This shows that $\exp(\lambda U_n)$ is integrable and hence so is $\exp(\lambda Z_n)$. Moreover, $Z_n \in m\mathcal{F}_n$ (by the MG assumption) and hence so is $M_{n, \lambda}:= \exp(\lambda Z_n - \Gamma \lambda^2n/2)$. Finally, \begin{align*} \mathbb{E} \left[ M_{n,\lambda} \mid \mathcal{F}_{n-1}\right] = \mathbb{E} \left[ \exp(\lambda Z_n - \Gamma \lambda^2n/2) \mid \mathcal{F}_{n-1}\right] &= \exp(\lambda Z_{n-1} - \Gamma \lambda^2n/2) \mathbb{E} \left[\exp(\lambda U_n) \mid \mathcal{F}_{n-1}\right] \\ &\leq \exp(\lambda Z_{n-1} - \Gamma \lambda^2n/2)\exp(\Gamma \lambda^2/2) = M_{{n-1}, \lambda}, \, \text{a.s.} \end{align*} This shows that $\left\{M_{n, \lambda}, \mathcal{F}_n, n \geq 0\right\}$ is a super-MG. \item Take $0 \leq a \leq \Gamma n$. 
Then for any $\lambda \in [0,1]$, \begin{align*} \mathbb{P}(Z_n \geq a) = \mathbb{P}\big(\exp(\lambda Z_n) \geq e^{a\lambda}\big) = \mathbb{P}(M_{n,\lambda} \geq \exp(a\lambda-\Gamma \lambda^2 n/2)) &\leq \exp(-a\lambda+\Gamma \lambda^2 n/2)\mathbb{E}(M_{n,\lambda}) \\ &\leq \exp(-a\lambda+\Gamma \lambda^2 n/2)\mathbb{E}(M_{0,\lambda}) \\ & = \exp(-a\lambda+\Gamma \lambda^2 n/2). \end{align*} The right hand side is minimized at $\lambda = a/(\Gamma n) \in [0,1]$ and we get $$ \mathbb{P}(Z_n \geq a) \leq \exp \left( -\dfrac{a^2}{2\Gamma n}\right). $$ Similarly for any $\lambda \in [-1,0]$, \begin{align*} \mathbb{P}(Z_n \leq -a) = \mathbb{P}\big(\exp(\lambda Z_n) \geq e^{-a\lambda}\big) = \mathbb{P}(M_{n,\lambda} \geq \exp(-a\lambda-\Gamma \lambda^2 n/2)) &\leq \exp(a\lambda+\Gamma \lambda^2 n/2)\mathbb{E}(M_{n,\lambda}) \\ &\leq \exp(a\lambda+\Gamma \lambda^2 n/2)\mathbb{E}(M_{0,\lambda}) \\ & = \exp(a\lambda+\Gamma \lambda^2 n/2). \end{align*} The right hand side is minimized at $\lambda = -a/(\Gamma n) \in [-1,0]$ and we get $$ \mathbb{P}(Z_n \leq -a) \leq \exp \left( -\dfrac{a^2}{2\Gamma n}\right). $$ Combining them via a union bound we get $$ \mathbb{P} \left( |Z_n| \geq a \right) \leq 2\exp \left( -\dfrac{a^2}{2\Gamma n}\right). $$ \end{enumerate} |
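A small numerical illustration of part (c) in the simplest possible setting (a sketch under assumptions made only for the illustration, not the general proof): for $U_k$ i.i.d. Rademacher, $U_k^2 \equiv 1$, so the hypothesis of part (b) holds with $\Gamma=1$.

```python
# Compare the empirical tail of |Z_n| with the bound 2*exp(-a^2 / (2*Gamma*n))
# for U_k = +/-1 (Rademacher), where the hypothesis holds with Gamma = 1.
import numpy as np

rng = np.random.default_rng(1)
n, n_paths, Gamma = 400, 50_000, 1.0
Z_n = rng.choice([-1.0, 1.0], size=(n_paths, n)).sum(axis=1)

for a in [20.0, 40.0, 60.0]:          # all satisfy 0 <= a <= Gamma * n
    emp = (np.abs(Z_n) >= a).mean()
    bound = 2.0 * np.exp(-a**2 / (2.0 * Gamma * n))
    print(a, emp, bound)              # empirical frequency should sit below the bound
```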
2008-q5 | Let X_1, X_2, \ldots be i.i.d. random variables with \mathbb{E}X_1 = 0 and \mathbb{E}X_1^2 = 1. Let S_n = X_1 + \cdots + X_n. Show that the following random variables have limiting distributions as n \to \infty and describe the limiting distribution, as explicitly as possible, in each case:
(a) \max_{\sqrt{n}\leq k \leq n}(S_k - k S_n/n)/\sqrt{n},
(b) (\sum_{i=1}^n X_i S_{i-1})/(\sum_{i=1}^n S_{i-1}^2)^{1/2},
(c) T_n/n^2, where T_n = \inf\{k \geq 1 : k g(S_k/k) \geq n\} and g : \mathbb{R} \to \mathbb{R} is a continuously differentiable function such that g(0) = 0 and g'(0) \neq 0. | We have $X_1, X_2, \ldots$ are \textit{i.i.d.} with mean $0$ and variance $1$. Define, $S_n = \sum_{i=1}^n X_i$, with $S_0 :=0$. Set $\left\{\mathcal{F}_n \right\}_{n \geq 0}$ to be the canonical filtration and then $\left\{S_n, \mathcal{F}_n, n \geq 0\right\}$ is a $L^2$-MG with predictable compensator $$ n^{-1}\langle S \rangle _n = \dfrac{1}{n}\sum_{k=1}^n \mathbb{E} \left( X_k^2 \mid \mathcal{F}_{k-1}\right) = \dfrac{1}{n} \sum_{k=1}^n \mathbb{E}(X_k^2) = 1.$$ Finally for all $\varepsilon >0$, \begin{align*} n^{-1} \sum_{k=1}^n \mathbb{E} \left[(S_k-S_{k-1})^2 \mathbbm{1}\left(|S_k-S_{k-1}| \geq \varepsilon \sqrt{n} \right) \mid \mathcal{F}_{n-1} \right] = n^{-1} \sum_{k=1}^n \mathbb{E} \left[X_k^2; |X_k| \geq \varepsilon \sqrt{n} \right] &= \mathbb{E} \left[X_1^2; |X_1| \geq \varepsilon \sqrt{n} \right] \to 0, \end{align*} as $n \to \infty$ since $\mathbb{E}(X_1^2)=1$. Define, $$ \widehat{S}_n(t) := \dfrac{1}{\sqrt{n}} \left[S_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(S_{\lfloor nt \rfloor +1}-S_{\lfloor nt \rfloor}) \right]=\dfrac{1}{\sqrt{n}} \left[S_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)X_{\lfloor nt \rfloor +1} \right], $$ for all $0 \leq t \leq 1$. By Martingale CLT, we then have $\widehat{S}_n \stackrel{d}{\longrightarrow} W$ as $n \to \infty$ on the space $C([0,1])$, where $W$ is the standard Brownian Motion on $[0,1]$.\begin{enumerate}[label=(\alph*)] \item First note that \begin{align*} \Bigg \rvert \max_{1 \leq k \leq n} \dfrac{1}{\sqrt{n}}\left(S_k - \dfrac{k}{n}S_n \right) - \max_{\sqrt{n} \leq k \leq n} \dfrac{1}{\sqrt{n}}\left(S_k - \dfrac{k}{n}S_n \right)\Bigg \rvert & \leq \Bigg \rvert \max_{1 \leq k \leq \sqrt{n}} \dfrac{1}{\sqrt{n}}\left(S_k - \dfrac{k}{n}S_n \right) \Bigg \rvert \ & \leq n^{-1/2} \max_{1 \leq k \leq \sqrt{n}} |S_k| + n^{-1}|S_n|. \end{align*} By SLLN, we know that $n^{-1}|S_n|$ converges almost surely to $0$. Also since the process $S$ is a $\mathcal{F}$-MG, we apply \textit{Doob's $L^2$ maximal inequality} to get $$ \mathbb{E} \left[\max_{1 \leq k \leq \sqrt{n}} |S_k| \right]^2 \leq 4 \mathbb{E}S_{\lfloor \sqrt{n} \rfloor}^2 \leq 4\sqrt{n},$$ and hence $n^{-1/2} \max_{1 \leq k \leq \sqrt{n}} |S_k|$ converges to $0$ in $L^2$ and so in probability. Together these two facts give us $$ \Bigg \rvert \max_{1 \leq k \leq n} \dfrac{1}{\sqrt{n}}\left(S_k - \dfrac{k}{n}S_n \right) - \max_{\sqrt{n} \leq k \leq n} \dfrac{1}{\sqrt{n}}\left(S_k - \dfrac{k}{n}S_n \right)\Bigg \rvert = o_p(1).$$ Observe that the function $t \mapsto \widehat{S}_n(t)-t\widehat{S}_n(1)$ on $[0,1]$ is the piecewise linear interpolation using the points $$ \left\{ \left( \dfrac{k}{n}, n^{-1/2}\left(S_k - \dfrac{k}{n}S_n\right)\right) \Bigg \rvert k=0, \ldots, n\right\}.$$ Recall the fact that for a piecewise linear function on a compact set, we can always find a maximizing point among the collection of the interpolating points. 
Hence, $$ \max_{0 \leq k \leq n} \dfrac{1}{\sqrt{n}}\left(S_k - \dfrac{k}{n}S_n \right) = \max_{1 \leq k \leq n} \dfrac{1}{\sqrt{n}}\left(S_k - \dfrac{k}{n}S_n \right) = \sup_{0 \leq t \leq 1} (\widehat{S}_n(t)-t\widehat{S}_n(1)).$$ Since, $\mathbf{x} \mapsto \sup_{0 \leq t \leq 1} (x(t)-tx(1))$ is a continuous function on $C([0,1])$, we have $$ \sup_{0 \leq t \leq 1} (\widehat{S}_n(t)-t\widehat{S}_n(1)) \stackrel{d}{\longrightarrow} \sup_{0 \leq t \leq 1} (W(t)-tW(1)) = \sup_{0 \leq t \leq 1} B(t),$$ where $\left\{B(t) : 0 \leq t \leq 1\right\}$ is the standard Brownian Bridge. Applying \textit{Slutsky's Theorem} we conclude that $$ \max_{\sqrt{n} \leq k \leq n} \dfrac{1}{\sqrt{n}}\left(S_k - \dfrac{k}{n}S_n \right) \stackrel{d}{\longrightarrow} \sup_{0 \leq t \leq 1} B(t).$$ We know that $$ \mathbb{P}(\sup_{0 \leq t \leq 1} B(t) \geq b) = \exp(-2b^2),$$ for all $b \geq 0$ (see \cite[Exercise 10.2.14]{dembo}). \end{enumerate} |
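An illustrative simulation of part (a) (a sketch only; standard normal increments and the sample sizes below are arbitrary choices, and any mean-zero, unit-variance law works):

```python
# Sketch: the statistic max_k (S_k - k*S_n/n)/sqrt(n) should have limiting tail
# P(sup_{0<=t<=1} B(t) >= b) = exp(-2*b^2), the Brownian-bridge supremum law.
import numpy as np

rng = np.random.default_rng(2)
n, n_paths = 1000, 20_000
S = np.cumsum(rng.standard_normal((n_paths, n)), axis=1)
k = np.arange(1, n + 1)
stat = np.max(S - np.outer(S[:, -1], k / n), axis=1) / np.sqrt(n)

for b in [0.5, 1.0, 1.5]:
    print(b, (stat >= b).mean(), np.exp(-2.0 * b**2))
```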
2008-q6 | The Ornstein–Uhlenbeck process X_t, t \geq 0, is a stationary Gaussian process with \mathbb{E}X_t = \mu and \rho(t) := \text{Cov}(X_0, X_t) = e^{-\alpha t}/(2\alpha) for all t \geq 0, where \alpha > 0 and \mu \in \mathbb{R}, and such that t \mapsto X_t(\omega) are continuous functions.
(a) Evaluate \mathbb{E}[\exp(-\int_0^T X_t \text{d}t)].
(b) Show that X_t has the representation X_t = \mu + \int_{-\infty}^t e^{-\alpha(t-s)} \text{d}B_s, where \{B_t, -\infty < t < \infty\} is a two-sided Brownian motion.
(c) Let Y_t = \int_0^t e^{-\alpha(t-s)} \text{d}B_s, t \geq 0. Show that the Gaussian process Y_t is a time homogeneous Markov process and write down its transition probability density p_t(y|x).
(d) Show that X_t has the representation X_t = \mu + Y_t + e^{-\alpha t} Z, for t \geq 0, where Z \sim \mathcal{N}(0, 1/(2\alpha)) is independent of \{Y_t, t \geq 0\}.
(e) Let M_t = e^{\alpha t} Y_t and let \mathcal{F}_t be the \sigma-field generated by \{B_s, s \leq t\}. Show that \{M_t, \mathcal{F}_t, t \geq 0\} is a continuous square integrable martingale and evaluate \langle M \rangle_t. | \begin{enumerate}[label=(\alph*)] \item Since $t \mapsto X_t(\omega)$ is continuous function, we have $$ \int_{0}^T X_t \, dt = \lim_{n \to \infty} \dfrac{T}{n} \sum_{k=1}^n X_{kT/n}, \; \text{almost surely}.$$ Since, the process is a Gaussian process, we have $\dfrac{T}{n} \sum_{k=1}^n X_{kT/n}$ to be also a Gaussian variable with $$ \mathbb{E} \left[ \dfrac{T}{n} \sum_{k=1}^n X_{kT/n}\right] = \dfrac{T}{n} \sum_{k=1}^n \mathbb{E} X_{kT/n} = \mu T, $$ and \begin{align*} \operatorname{Var} \left(\dfrac{T}{n} \sum_{k=1}^n X_{kT/n} \right) = \dfrac{T^2}{n^2} \sum_{j,k=1}^{n} \operatorname{Cov}(X_{jT/n}, X_{kT/n})& = \dfrac{T^2}{n^2} \sum_{j,k=1}^{n} \dfrac{1}{2\alpha}\exp \left( -\alpha T \dfrac{|j-k|}{n}\right)\ & \stackrel{n \to \infty}{\longrightarrow} \int_{[0,T]^2} \dfrac{1}{2\alpha} \exp(-\alpha |t-s|) \, ds \, dt \ \ &= \int_{0}^T \dfrac{1}{2\alpha} \left[ e^{-\alpha t}\int_{0}^t e^{\alpha s} \, ds + e^{\alpha t}\int_{t}^T e^{-\alpha s} \, ds \right] \, dt \ \ &= \int_{0}^T \dfrac{1}{2\alpha^2} \left( 2-e^{-\alpha t} - e^{-\alpha(T-t)}\right) \, dt = \dfrac{\alpha T - 1 + e^{-\alpha T}}{\alpha^3} =: V_T. \end{align*} Hence, $$ \int_{0}^T X_t \, dt \sim N(\mu T, V_T) \Rightarrow \mathbb{E} \left[ \exp \left(-\int_{0}^T X_t \, dt \right)\right] = \exp \left( -\mu T + V_T/2\right) = \exp \left( -\mu T +\dfrac{\alpha T - 1 + e^{-\alpha T}}{2\alpha^3}\right). $$ \item Before answering this part we need to clearly state some definitions. A two sided BM $\left\{B_t : -\infty< t < \infty\right\}$ is defined in this way. Take two independent standard BM starting from $0$, $\left\{W_t^{(i)} : t \geq 0\right\}$ for $i=1,2$. Define, $$ B_t = W_t^{(1)}\mathbbm{1}(t \geq 0) + W_{-t}^{(2)}\mathbbm{1}(t \leq 0), \; \; \forall \; t\in \mathbb{R}.$$ It is easy to see that the process $B$ also have independent and stationary increments with $B_0=0$, $B_t \sim N(0,|t|)$. Take any $f \in L^2(\mathbb{R})$. For all $t \geq 0$, we can define $\int_{0}^t f(s)\; dB_s$ as $\int_{0}^t f(s)\, dW_s^{(1)}$, which in turn is well-defined via Wiener integral definition. 
Recall that $\left\{\int_{0}^t f(s)\, dW_s^{(1)}\right\}_{t \geq 0}$ is an $L^2$-bounded continuous MG since $$ \sup_{t \geq 0} \mathbb{E} \left[\int_{0}^t f(s)\, dW_s^{(1)} \right]^2 = \sup_{t \geq 0} \int_{0}^t f(s)^2\, ds = \int_{0}^{\infty} f(s)^2\, ds < \infty.$$ We define the almost sure (which is also $L^2$ limit) limit of $\int_{0}^t f(s)\, dW_s^{(1)}$ as $\int_{0}^{\infty} f(s)\, dW_s^{(1)}$ and $\int_{0}^{\infty} f(s)\, dB_s.$ Similarly, for any $t \in [-\infty,0]$, $$ \int_{t}^0 f(s)\, dB_s := -\int_{0}^{-t} f(-s) \, dW_s^{(2)}.$$ Finally $$ \int_{u}^t f(s) \, dB_s := \int_{u}^0 f(s)\; dB_s + \int_{0}^t f(s)\, dB_s, \; \forall \; -\infty \leq u \leq t \leq \infty.$$ Using the $L^2$ convergences, it is easy to see that $$ \int_{u}^t f(s) \, dB_s = \int_{-\infty}^{\infty} f(s)\mathbbm{1}(u \leq s \leq t) \, dB_s, \text{ a.s.}, \;$$ and $\left\{\int_{-\infty}^{\infty} f(s) \, dB_s : t \in \mathbb{R}\right\}$ is Gaussian process with $$\int_{-\infty}^{\infty} f(s) \, dB_s \sim N(0, ||f||_2^2) ,$$ and for any two $f,g \in L^2(\mathbb{R})$, $$ \operatorname{Cov} \left[ \int_{-\infty}^{\infty} f(s) \, dB_s, \int_{-\infty}^{\infty} g(s) \, dB_s\right] = \int_{-\infty}^{\infty} f(s)g(s)\, ds.$$ With all these definitions in mind, let $$ Z_t := \mu + \int_{-\infty}^{t} e^{-\alpha(t-s)}\, dB_s = \mu + e^{-\alpha t}\int_{-\infty}^{t} e^{\alpha s}\, dB_s\; \forall \; t \geq 0.$$ Since $\int_{-\infty}^t e^{2\alpha s}\, ds < \infty$, the process is well defined. It is clearly a continuous path Gaussian process with mean $\mu$ and covariance function $$ \operatorname{Cov}(Z_t,Z_u) = e^{-\alpha t -\alpha u} \int_{-\infty}^{\infty} e^{2\alpha s} \mathbbm{1}(s \leq t, u) \, ds = e^{-\alpha t -\alpha u} \dfrac{\exp(2\alpha(t \wedge u))}{2\alpha} = \dfrac{\exp(-\alpha|t-u|)}{2\alpha}.$$ This shows that $Z$ is a $L^2$-stationary Gaussian Process and hence stationary. Thus the process $Z$ has exactly the same joint distribution as $X$ and we can consider $Z$ as a representation of $X$. \item Consider $Y_t = e^{-\alpha t}\int_{0}^t e^{\alpha s}\, dB_s$ for all $t \geq 0$. By independent increment property of the BM, we can conclude that the process $\left\{e^{\alpha t}Y_t : t \geq 0 \right\}$ also has independent increments, hence a Markov process (see Proposition \cite[Proposition 9.3.5]{dembo}). Therefore, the process $Y$ is also Markov (see \cite[Exercise 9.3.4]{dembo}). The process $Y$ is clearly a mean zero Gaussian process (by our discussion in part (b)) and hence to find the transition densities we need to find the conditional mean and covariance. For $t,u \geq 0$, we use the independent increment property of the process $\left\{e^{\alpha t}Y_t : t \geq 0 \right\}$ and obtain the following. $$ \mathbb{E}(Y_{t+u}\mid Y_t) = e^{-\alpha(t+u)}\mathbb{E}(e^{\alpha(t+u)}Y_{t+u}\mid e^{\alpha t}Y_t) = e^{-\alpha(t+u)}e^{\alpha t}Y_t = e^{-\alpha u}Y_t,$$ and \begin{align*} \operatorname{Var}(Y_{t+u}\mid Y_t) = e^{-2\alpha(t+u)} \operatorname{Var}(e^{\alpha(t+u)}Y_{t+u}\mid e^{\alpha t}Y_t) = e^{-2\alpha(t+u)} \operatorname{Var}\left[ \int_{t}^{t+u} e^{\alpha s} \ dB_s\right] &= e^{-2\alpha(t+u)} \dfrac{e^{2\alpha(t+u)}-e^{2\alpha t}}{2\alpha} \ &= \dfrac{1-e^{-2\alpha u}}{2\alpha}. 
\end{align*} Since the conditional mean and variance depend only on $u$ and not on $t$, the Markov process is time-homogeneous with transition densities $$ p_t(y \mid x) = \text{Density of } N \left(e^{-\alpha t}x, \dfrac{1-e^{-2\alpha t}}{2\alpha}\right) \text{ at } y = \sqrt{\dfrac{\alpha}{\pi(1-e^{-2\alpha t})}}\exp \left(- \dfrac{\alpha (y-e^{-\alpha t}x)^2}{1-e^{-2\alpha t}}\right).$$ \item $$ X_t = \mu + \int_{-\infty}^{t}e^{-\alpha(t-s)} \, dB_s = \mu + Y_t + \int_{-\infty}^{0}e^{-\alpha(t-s)} \, dB_s = \mu + Y_t + e^{-\alpha t} Z,$$ where $Z = \int_{-\infty}^0 e^{\alpha s}\, dB_s$. By the independent increment property of BM, we have $Z \perp\!\!\perp \left\{B_s : s \geq 0\right\}$ and hence $Z$ is independent of $\left\{Y_s : s \geq 0\right\}$. From the discussion in part (b), we have $$ Z \sim N \left(0, \int_{-\infty}^0 e^{2\alpha s}\, ds\right) = N(0, 1/(2\alpha)).$$ \item $M_t = e^{\alpha t}Y_t = \int_{0}^t e^{\alpha s}\, dB_s$. Since it is a Wiener integral process of a square integrable (on any compact interval of $[0,\infty)$) function, we have that this process $M$ is a continuous $L^2$ MG with $$ \langle M \rangle_t = \int_{0}^t e^{2\alpha s}\, ds = \dfrac{e^{2\alpha t}-1}{2\alpha}.$$ \end{enumerate} |
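A numerical check of the closed form in part (a) (an illustrative sketch; the parameter values and discretisation below are assumptions, not from the problem), simulating $X$ on a grid with the exact Gaussian transition of part (c):

```python
# Monte Carlo sketch: E[exp(-int_0^T X_t dt)] versus exp(-mu*T + V_T/2),
# where V_T = (alpha*T - 1 + exp(-alpha*T)) / alpha^3 as derived above.
import numpy as np

rng = np.random.default_rng(3)
alpha, mu, T = 1.5, 0.3, 2.0                 # illustrative parameter values
n_steps, n_paths = 2000, 20_000
dt = T / n_steps

X = mu + rng.standard_normal(n_paths) * np.sqrt(1.0 / (2.0 * alpha))   # stationary start
a = np.exp(-alpha * dt)
s = np.sqrt((1.0 - np.exp(-2.0 * alpha * dt)) / (2.0 * alpha))
integral = np.zeros(n_paths)
for _ in range(n_steps):
    integral += X * dt                       # left-endpoint Riemann sum of the path
    X = mu + a * (X - mu) + s * rng.standard_normal(n_paths)   # exact OU transition

V_T = (alpha * T - 1.0 + np.exp(-alpha * T)) / alpha**3
print(np.exp(-integral).mean(), np.exp(-mu * T + V_T / 2.0))
```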
2009-q1 | Among all probability distributions on $[0, 1]$, which has biggest variance? Prove your answer. | Consider a probability distribution $P$ on $[0,1]$ and a random variable $X \sim P$. Then $$ \operatorname{Var}_P(X) = \mathbb{E} \left[ X-\mathbb{E}(X)\right]^2 = \mathbb{E}\left(X-\dfrac{1}{2}\right)^2 - \left(\mathbb{E}(X) - \dfrac{1}{2} \right)^2 \leq \mathbb{E}\left(X-\dfrac{1}{2}\right)^2 \leq \dfrac{1}{4},$$ since $|X-\frac{1}{2}| \leq \frac{1}{2}$ as $X \in [0,1]$. Equality holds if and only if $\mathbb{E}(X)=\frac{1}{2}$ and $|X-\frac{1}{2}| = \frac{1}{2}$ almost surely. This is equivalent to $X\sim \text{Ber}(\frac{1}{2})$. Therefore, the maximum possible variance is $\frac{1}{4}$, attained only by $\text{Ber}(\frac{1}{2})$ distribution. |
2009-q2 | A random number generator is a function $f : [N] \to [N]$ where $[N] = \{1, 2, \ldots, N\}$. It generates a random sequence $U_0, U_1, U_2, \ldots$ in $[N]$ by choosing $U_0$ uniformly and letting $U_i = f(U_{i-1}), \ 1 \leq i < \infty$. Suppose that $f$ itself is chosen uniformly at random among all functions from $[N]$ to $[N]$. Find the limiting distribution of the first time that the generator "cycles" (smallest $T$ so $U_i = U_T$ for some $i, \ 0 \leq i \leq T$). That is, find $a_N, b_N$ so that, as $N \to \infty$, $P \left\{ \frac{T-a_N}{b_N} \le x \right\} \to F(x)$ for a non-trivial $F(x)$. | $f$ is uniformly randomly chosen from the set $\left\{g \mid g:[N] \to [N]\right\}$. Therefore, it is evident that $$ f(1), \ldots, f(N) \stackrel{iid}{\sim} \text{Uniform} \left([N] \right).$$ We have $U_0 \sim \text{Uniform} \left([N] \right)$ and $U_j=f(U_{j-1})$ for all $j \geq 1$. The random variable of interest is $$T_N := \min \left\{ j \geq 1 \mid U_j = U_i, \; \text{ for some } 0 \leq i \leq j-1\right\}.$$ Evidently, $T_N \leq N$ almost surely as all $U_i$'s take values in $[N]$. For any $2 \leq k \leq N$, we have the following. \begin{align*} \mathbb{P}(T_N \geq k) &= \mathbb{P}\left( U_0, \ldots, U_{k-1} \text{ are all distinct}\right)\\ &= \sum_{i_0, \ldots, i_{k-1}\in [N] : i_j\text{'s are all distinct}} \mathbb{P} \left( U_j=i_j, \;\forall \; 0 \leq j \leq k-1\right) \\ &= \sum_{i_0, \ldots, i_{k-1}\in [N] : i_j\text{'s are all distinct}} \mathbb{P} \left( U_0=i_0; f(i_{j-1})=i_j, \;\forall \; 1 \leq j \leq k-1\right) \\ &= \sum_{i_0, \ldots, i_{k-1}\in [N] : i_j\text{'s are all distinct}} \mathbb{P}(U_0=i_0) \prod_{j=1}^{k-1} \mathbb{P}(f(i_{j-1})=i_j) \\ &= \sum_{i_0, \ldots, i_{k-1}\in [N] : i_j\text{'s are all distinct}} \dfrac{1}{N^k} = \dfrac{k!{N \choose k}}{N^k}. \end{align*} Therefore, $$ \log \mathbb{P}(T_N \geq k) = \log \dfrac{N(N-1)\cdots(N-k+1)}{N^k} = \sum_{j=0}^{k-1} \log \left(1-\dfrac{j}{N} \right).$$ Take a sequence $\left\{k_N\right\}_{N \geq 1} \subseteq \mathbb{N}$ such that $k_N \uparrow \infty$, $1 \leq k_N \leq N$ but $k_N =o(N)$. Recall that $|\log(1+x)-x| \leq Cx^2$, for all $|x| \leq 1/2$, for some $C \in (0, \infty)$. Then for all large enough $N$ such that $k_N \leq N/2$, we have the following. \begin{align*} \Bigg \rvert \sum_{j=0}^{k_N-1} \log \left(1-\dfrac{j}{N} \right) + \sum_{j=0}^{k_N-1} \dfrac{j}{N} \Bigg \rvert &\leq \sum_{j=0}^{k_N-1} \Bigg \rvert \log \left(1-\dfrac{j}{N} \right) + \dfrac{j}{N} \Bigg \rvert \leq C\sum_{j=0}^{k_N-1} \dfrac{j^2}{N^2} \leq \dfrac{C k_N^3}{N^2}. \end{align*} and hence, $$ \log \mathbb{P}(T_N \geq k_N) = \sum_{j=0}^{k_N-1} \log \left(1-\dfrac{j}{N} \right) = -\sum_{j=0}^{k_N-1} \dfrac{j}{N} + O \left( \dfrac{k_N^3}{N^2}\right)= -\dfrac{k_N(k_N-1)}{2N} + O \left( \dfrac{k_N^3}{N^2}\right).$$ If we take $k_N = \lfloor x\sqrt{N} \rfloor $ for some $x >0$, we have $\log \mathbb{P}(T_N \geq k_N) \longrightarrow -x^2/2$. Hence, $T_N/\lfloor \sqrt{N} \rfloor \stackrel{d}{\longrightarrow} G$ where $G(x) = (1-\exp(-x^2/2))\mathbbm{1}(x>0).$ Using \textit{Slutsky's Theorem} and identifying $G$ as the distribution function of the \textit{Rayleigh distribution} with scale parameter $1$, we conclude the following. $$ \dfrac{T_N}{\sqrt{N}} \stackrel{d}{\longrightarrow} \text{Rayleigh}(1), \; \text{as } N \to \infty.$$ |
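A direct simulation of the random mapping (an illustrative sketch only; the values of $N$, the number of runs, and the random seed are arbitrary choices):

```python
# Sketch: draw a uniformly random f:[N]->[N] and a uniform seed, iterate until a
# previously seen value repeats; compare the law of T/sqrt(N) with the Rayleigh(1)
# cdf 1 - exp(-x^2/2) derived above.
import numpy as np

rng = np.random.default_rng(4)
N, n_runs = 10_000, 5_000
T = np.empty(n_runs)

for r in range(n_runs):
    f = rng.integers(0, N, size=N)   # f(i) := f[i], i.i.d. uniform over [N]
    u = int(rng.integers(0, N))      # U_0
    seen = {u}
    t = 0
    while True:
        t += 1
        u = int(f[u])
        if u in seen:
            break
        seen.add(u)
    T[r] = t

for x in [0.5, 1.0, 2.0]:
    print(x, (T / np.sqrt(N) <= x).mean(), 1.0 - np.exp(-x**2 / 2.0))
```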
2009-q3 | Let $(\Omega, \mathcal{F}, P)$ be a probability space. Let $X_t(\omega), \ 0 \le t \le 1$, be an uncountable collection of independent and identically distributed normal $(0, 1)$ random variables. Is $t \mapsto X_t(\omega)$ a Borel-measurable function on $[0, 1]$ with probability 1? | Suppose the stated assertion is true for the probability space $(\Omega, \mathcal{F}, \mathbb{P})$. We assume that a collection of \textit{i.i.d.} standard Gaussian variables $\left\{X_t : 0 \leq t \leq 1\right\}$ can be constructed on this space. Otherwise the problem for this space becomes vacuous. There is indeed such a probability space on which such a collection can be constructed, see \cite[Proposition 8.1.8]{dembo} for an argument guaranteeing this existence. Thus the problem is not entirely trivial. So start with a collection of \textit{i.i.d.} standard Gaussian variables $\left\{X_t : 0 \leq t \leq 1\right\}$ on the space $(\Omega, \mathcal{F}, \mathbb{P})$. By our assumption, there exists $A \in \mathcal{F}$ such that $\mathbb{P}(A)=0$ and $$ \left\{ \omega \in \Omega : t \mapsto X_t(\omega) \text{ is not Borel-measurable on } [0,1]\right\} \subseteq A.$$ Now take $E \subseteq [0,1]$ such that $E$ is not Borel-measurable. Define, $Y_t$ as follows. $$ Y_t := \begin{cases} -X_t, & \text{if } t \in E \\ X_t \mathbbm{1}(X_t \neq 0) + \mathbbm{1}(X_t = 0), & \text{if } t \notin E. \end{cases}$$ Clearly, $Y_t$'s are independent random variables. For $t \in E$, $Y_t = -X_t \sim N(0,1)$. For $t \notin E$, $Y_t \stackrel{a.s.}{=}X_t \sim N(0,1).$ Thus $\left\{Y_t : 0 \leq t \leq 1\right\}$ is a collection of \textit{i.i.d.} standard Gaussian variables on the space $(\Omega, \mathcal{F}, \mathbb{P})$. Consequently, by our assumption, there exists $B \in \mathcal{F}$ such that $\mathbb{P}(B)=0$ and $$ \left\{ \omega \in \Omega : t \mapsto Y_t(\omega) \text{ is not Borel-measurable on } [0,1]\right\} \subseteq B.$$ Let $Z_t=X_t+Y_t$. Then for $\omega \in A^c \cap B^c$, $t \mapsto Z_t(\omega)$ is Borel-measurable on $[0,1]$. Note that $$ Z_t = X_t+Y_t =\begin{cases} 0, & \text{if } t \in E \\ X_t \mathbbm{1}(X_t \neq 0) + \mathbbm{1}(X_t = 0)+X_t = X_t 2\mathbbm{1}(X_t \neq 0) + \mathbbm{1}(X_t = 0) \neq 0 & \text{if } t \notin E. \end{cases}$$ Hence, for any $\omega \in \Omega$, $$ \left\{ t \in [0,1] : Z_t(\omega)=0\right\} = E,$$ which is not Borel-measurable. This implies for any $\omega \in \Omega$, $t \mapsto Z_t(\omega)$ is not Borel-measurable on $[0,1]$. Consequently, $A^c \cap B^c = \emptyset$, which contradicts to $\mathbb{P}(A)=\mathbb{P}(B)=1$. Hence the assertion is not true for any such probability space. We can actually prove a stronger statement which says that there is no such probability space $(\Omega, \mathcal{F}, \mathbb{P})$ on which there exists $ \left\{X_t : 0 \leq t \leq 1\right\}$, a collection of i.i.d. standard Gaussian variables, such that $t \mapsto X_t(\omega)$ is Borel-measurable on $[0,1]$ with probability $1$. Let $\mu$ denotes the law $N(0,1)^{\otimes [0,1]}$ on $(\mathbb{R}^{[0,1]}, \mathcal{B}^{[0,1]})$; this is the law of the random element $(X_t : 0 \leq t \leq 1)$. If $A$ denotes the set of all Borel-measurable functions on $[0,1]$, then the hypothesis says that $A^c$ is a null set with respect to the measure $\mu$. In particular, $A^c$, hence $A$, belongs to the completion of the $\sigma$-algebra $\mathcal{B}^{[0,1]}$ with respect to the measure $\mu$. This gives a contradiction, see \cite[Exercise 8.1.11.(d)]{dembo}. |
2009-q4 | Consider an $n \times n$ grid, with edges identified to make a discrete torus. Points are chosen uniformly at random, and each chosen point with its four nearest neighbors are colored red. Let $T$ be the first time that all points are colored red. Assuming $n$ is large: (a) Find a limiting approximation for the distribution of $T$. That is, find $a_n, b_n$ so that, as $n \to \infty$, $P \left\{ \frac{T-a_n}{b_n} \le x \right\} \to F(x)$ for a non-trivial $F(x)$ (which you should identify). (b) Prove your answer, using ‘standard tools’ from your probability course. | Let $\mathbb{T}_n$ denotes the discrete torus obtained by identifying the edges of $n \times n $ grid. $V_n$ will denote the vertex set of $\mathbb{T}_n$. By $n \times n$ grid, I assume that it refers to the square lattice with $(n+1)$ vertices and $n$ edge segments on each side. It is easy to see that $|V_n|=n^2$. Let $X^{(n)}_{m,i}$ denotes the indicator variable for the event that the point $i$ has not been coloured red at or before time $m$. The superscript $(n)$ denotes the fact that we are on $\mathbb{T}_n$. Then $W^{(n)}_m=\sum_{i \in V_n} X^{(n)}_{m,i}$ is the number of points in $V_n$ which has not been coloured red at or before time $m$. Then $$ T_n = \inf \left\{m \geq 1 \mid W_m^{(n)}=0\right\} \Rightarrow \mathbb{P}(T_n > m) = \mathbb{P}(W_m^{(n)} >0),$$ where the implication follows from the observation that $m \mapsto W_m^{(n)}$ is non-increasing. We should therefore focus on finding the asymptotics of $W_m^{(n)}$. Let $\mathcal{N}_{n,i}$ denotes the set of neighbours of the point $i$ in $V_n$; $|\mathcal{N}_{n,i}|=4$. Notice that for all $i \in V_n$, $$ \mathbb{E}X^{(n)}_{m,i} = \mathbb{P} \left(\text{No point from } \mathcal{N}_{n,i} \cup \left\{i\right\} \text{was chosen in first $m$ draws} \right) = \left[ \dfrac{n^2-5}{n^2}\right]^m,$$ and hence $$ \mathbb{E}W_m^{(n)} = n^2 \left[ \dfrac{n^2-5}{n^2}\right]^m.$$ To get a non-degenerate limit, we should vary $m=m_n$ with $n$ in such a way that $\mathbb{E}W_{m_n}^{(n)}$ is convergent. Since, $n^2(1-5n^{-2})^{m_n} \approx \exp(2\log n -5m_n/n^2)$, an intuitive choice seems to be $m_n = \lfloor (2n^2\log n +cn^2)/5 \rfloor$ for some $c \in \mathbb{R}$. We now focus to prove the convergence with this particular choice. The situation clearly demands for a Poisson approximation technique. Since, $X_{m_n,i}^{(n)}$'s are not independent for different $i$'s, we go for \textit{Chen-Stein Bound}. We shall use the following variant of \textit{Chen-Stein Bound}, see \cite[Theorem 4.31]{B05} for reference. \begin{theorem} Suppose we have a collection of \textit{Bernoulli} random variables $\left\{X_i : i \in I\right\}$ such that for every $i \in I$, we can construct random variables $\left\{Y_j^{(i)} : j \in I, j \neq i\right\}$ satisfying the following properties. \begin{enumerate} \item $\left( Y_j^{(i)} : j \neq i \right) \stackrel{d}{=} \left( X_j : j \neq i \right) \mid X_i=1.$ \item $Y_j^{(i)} \leq X_j$, for all $j \neq i$. \end{enumerate} Let $W=\sum_{i \in I} X_i$ and $\lambda = \mathbb{E}(W)$. Then $$ \Big \rvert \Big \rvert \mathcal{L}(W) - \text{Poi}(\lambda) \Big \rvert \Big \rvert_{TV} \leq \lambda - \operatorname{Var}(W).$$ \end{theorem} To apply the above theorem, first we need to establish that the collection $\left\{X_{m_n,i}^{(n)} : i =1, \ldots, n\right\}$ satisfies the condition. For this part we are dropping the index $n$ (denoting torus dimension) from the notations for the sake of notational simplicity. 
The construction of the coupled random variables is quite easy. Suppose that $X_{m,i}=1$, i.e. all the first $m$ draws have avoided the set $\mathcal{N}_{i} \cup \left\{i\right\}$. Thus the conditional joint distribution of $(X_{m,j} : j \neq i)$ given $X_{m,i}=1$ is the same as the joint distribution under the draw mechanism which draws uniformly at random from $( \mathcal{N}_{i} \cup \left\{i\right\})^c$, each draw being independent of each other. The way to couple this with our original scheme is simple. You first perform the original scheme, where you take points uniformly at random from the entire torus. If $X_{m,i}=1$, do nothing. Otherwise, consider all the draws that have drawn points from $\mathcal{N}_{i} \cup \left\{i\right\}$. Delete each of them (and their resulting colouring impacts) and replace them with an equal number of \textit{i.i.d.} draws from $(\mathcal{N}_{i} \cup \left\{i\right\})^c$. Define $Y_{m,j}^{(i)}$ to be $1$ if the vertex $j$ is not red after this alteration, and $0$ otherwise. Clearly this collection satisfies the properties mentioned in the above theorem. Now define $$ p_{n,i} := \mathbb{E}X^{(n)}_{m_n,i} = (1-5n^{-2})^{m_n}, \; \lambda_n := \sum_{i \in V_n} p_{n,i} = n^2(1-5n^{-2})^{m_n}.$$ By the \textit{Chen-Stein Bound}, we have $$ \Big \rvert \Big \rvert \mathcal{L}(W_{m_n}^{(n)}) - \text{Poi}(\lambda_n) \Big \rvert \Big \rvert_{TV} \leq \lambda_n - \operatorname{Var}(W^{(n)}_{m_n}).$$ Note that \begin{align*} \log \lambda_n = 2\log n +m_n \log (1-5n^{-2}) = 2 \log n +m_n (-5n^{-2}+O(n^{-4})) &= 2\log n -5 m_n/n^2 + O(n^{-2}) + O(n^{-2}\log n) \end{align*} and hence, with $m_n = \lfloor (2n^2\log n + cn^2)/5 \rfloor$, we get $\log \lambda_n = -c + o(1)$, i.e. $\lambda_n \to e^{-c}$. It remains to check that $\lambda_n - \operatorname{Var}(W^{(n)}_{m_n}) \to 0$. Writing $\lambda_n - \operatorname{Var}(W^{(n)}_{m_n}) = \sum_{i \in V_n} p_{n,i}^2 - \sum_{i \neq j} \operatorname{Cov}(X^{(n)}_{m_n,i}, X^{(n)}_{m_n,j})$, the first sum is $n^2p_{n,1}^2 = O(n^{-2})$; pairs $i \neq j$ whose neighbourhoods are disjoint have negative covariance of magnitude $O(n^{-6}\log n)$, while the $O(n^2)$ pairs with overlapping neighbourhoods each have covariance at most $(1-8n^{-2})^{m_n} = O(n^{-16/5})$, so both covariance sums vanish as well. Therefore the Chen-Stein bound tends to $0$, $W^{(n)}_{m_n} \stackrel{d}{\longrightarrow} \text{Poi}(e^{-c})$, and we obtain $$\mathbb{P}(T_n > m_n) \longrightarrow \mathbb{P}(\text{Poi}(e^{-c}) >0) = 1-\exp(-e^{-c}).$$ The above holds for any $c$. Fix $c_1>c$. Then for large enough $n$, $\lfloor (2n^2\log n + c_1 n^2)/5 \rfloor > (2n^2\log n + c n^2)/5$ and hence \begin{align*} 1-\exp(-e^{-c}) = \lim_{n \to \infty} \mathbb{P}\left[T_n >\lfloor (2n^2\log n + cn^2)/5 \rfloor \right] &\geq \limsup_{n \to \infty} \mathbb{P}\left[T_n >(2n^2\log n + cn^2)/5\right] \end{align*} |
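A simulation sketch of the coloring process (illustrative only; the limit above says $\mathbb{P}\big(T \leq (2n^2\log n + cn^2)/5\big) \to \exp(-e^{-c})$, and at the moderate $n$ used below the Poisson approximation is only rough; all numerical settings are assumptions for the illustration):

```python
# Sketch: repeatedly pick a uniform site of the n x n torus and colour it and its
# four neighbours red; T = first draw at which every site is red.
import numpy as np

rng = np.random.default_rng(5)
n, n_runs = 30, 1_000

idx = np.arange(n * n)
x, y = idx // n, idx % n
# For every site, the 5 sites it colours (itself + 4 torus neighbours).
nbrs = np.stack([idx,
                 ((x + 1) % n) * n + y, ((x - 1) % n) * n + y,
                 x * n + (y + 1) % n,   x * n + (y - 1) % n], axis=1)

T = np.empty(n_runs)
for r in range(n_runs):
    red = np.zeros(n * n, dtype=bool)
    remaining, t = n * n, 0
    while remaining > 0:
        t += 1
        for s in nbrs[rng.integers(0, n * n)]:
            if not red[s]:
                red[s] = True
                remaining -= 1
    T[r] = t

for c in [-1.0, 0.0, 1.0]:
    threshold = (2 * n * n * np.log(n) + c * n * n) / 5.0
    print(c, (T <= threshold).mean(), np.exp(-np.exp(-c)))
```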
2009-q5 | Let $X_t$ denote a standard Brownian motion for $0 \le t < \infty$. For $\varepsilon \ge 0$ and $c > 0$, let $T = \inf \{ t : t \ge \varepsilon, |X_t| \ge ct^{1/2} \}$. Find $E(T)$ as a function of $\varepsilon$ and $c$. | We have $\left\{X_t, \mathcal{F}_t, t \geq 0\right\}$ to be the standard BM on $[0,\infty)$ with canonical filtration. For $\varepsilon \geq 0$ and $c >0$, we define $T:= \inf \left\{ t \mid t \geq \varepsilon, |X_t| \geq ct^{1/2}\right\}$. If $\varepsilon=0$, we have $T=0$ obviously. Hence, we shall only concentrate on the case $\varepsilon >0$. Let $B_t := X_{t + \varepsilon} -X_{\varepsilon}$, for all $t \geq 0$. By independent and stationary increment property of the BM, we know that $\left\{B_t : t \geq 0\right\}$ is also a standard BM and independent of $\mathcal{F}_{\varepsilon}$, in particular $X_{\varepsilon}$. Therefore, \begin{align*} T = \inf \left\{ t \mid t \geq \varepsilon, |X_t| \geq ct^{1/2}\right\} = \inf \left\{ t \mid t \geq \varepsilon, X_t^2 \geq c^2t\right\} &= \inf \left\{ s \mid s \geq 0, (B_s +X_\varepsilon)^2 \geq c^2(s+\varepsilon)\right\} + \varepsilon. \end{align*} The above expression tells us that, $$ T \mid( X_{\varepsilon} = x) \stackrel{d}{=} \tau_{x,c^2,c^2\varepsilon} + \varepsilon,$$ where $\tau_{y,r,b} = \inf \left\{s \geq 0 \mid W_s^2 \geq rs + b\right\}$, with $W$ being standard BM on $[0,\infty)$ starting from $W_0=y$ and $r>0, b > 0$. Hence we should focus on analysing $\tau_{y,r,b}$. The first step would be to show that $\tau_{y,r,b}< \infty$ almost surely. To establish that fact, recall \textit{Law of Iterated Logarithm} which implies $$ \limsup_{s \to \infty} \dfrac{W_s^2}{2s \log \log s} =1, \; \text{a.s.} \Rightarrow \limsup_{s \to \infty} \dfrac{W_s^2}{s} =\infty, \; \text{a.s.} $$ This yields that $\tau_{y,r,b}< \infty$ almost surely. If $W_0^2 =y^2\geq b$, then $\tau_{y,r,b}=0$, almost surely and hence $\mathbb{E}\tau_{y,r,b}=0$. So let us focus on the case $b>y^2$. By sample-path-right-continuity of BM , we have $W_{\tau_{y,r,b}}^2 \geq r\tau_{y,r,b}+b$. In that case $\tau_{y,r,b} >0$ almost surely and hence using BM sample-path-left-continuity, we conclude that $W_{\tau_{y,r,b}}^2 = r\tau_{y,r,b}+b$. Recall that $\left\{W_s^2-s; s \geq 0\right\}$ is a continuous MG with respect to the canonical filtration. Using OST, we get \begin{equation}{\label{OST}} \mathbb{E} \left[ W_{\tau_{y,r,b} \wedge s}^2 -(\tau_{y,r,b} \wedge s) \right] = \mathbb{E}(W_0^2)=y^2, \; \forall \; s \geq 0. \end{equation} Since by definition, $W_{\tau_{y,r,b} \wedge s}^2 \leq r(\tau_{y,r,b} \wedge s)+b$ for all $s \geq 0$, we conclude $$ y^2 + \mathbb{E} (\tau_{y,r,b} \wedge s) \leq r\mathbb{E} (\tau_{y,r,b} \wedge s) +b, \; \forall \; s \geq 0.$$ First consider the case $0 <r <1$. Then $$ (1-r)\mathbb{E} (\tau_{y,r,b} \wedge s) \leq b-y^2, \; \forall \; s \geq 0 \stackrel{(MCT)}{\Rightarrow} \mathbb{E}\tau_{y,r,b} \leq \dfrac{b-y^2}{1-r} < \infty.$$ Notice that $\left\{W_{\tau_{y,r,b} \wedge s}, s \geq 0\right\}$ is a continuous MG with $\lim_{s \to \infty} W_{\tau_{y,r,b} \wedge s} = W_{\tau_{y,r,b}}$ almost surely. 
Moreover, $$ \sup_{s \geq 0} \mathbb{E} W_{\tau_{y,r,b} \wedge s}^2 \leq \sup_{s \geq 0} r\mathbb{E} (\tau_{y,r,b} \wedge s) +b \leq \dfrac{r(b-y^2)}{1-r} + b < \infty.$$ Employ \textit{Doob's $L^p$-MG convergence Theorem} to conclude that $$ W_{\tau_{y,r,b} \wedge s} \stackrel{L^2}{\longrightarrow} W_{\tau_{y,r,b}} \Rightarrow \mathbb{E}W_{\tau_{y,r,b} \wedge s}^2 \longrightarrow \mathbb{E}W_{\tau_{y,r,b}}^2 = r \mathbb{E}\tau_{y,r,b} +b. $$ Combining this with (\ref{OST}), we get $$ y^2 + \mathbb{E}\tau_{y,r,b} = r \mathbb{E}\tau_{y,r,b} +b \Rightarrow \mathbb{E}\tau_{y,r,b} = \dfrac{b-y^2}{1-r} .$$ Now consider the case $r \geq 1$. For any $q \in (0,1)$, it is obvious from the definition that $\tau_{y,r,b} \geq \tau_{y,q,b}$, almost surely. Hence, $$ \mathbb{E}\tau_{y,r,b} \geq \sup_{q \in (0,1)} \mathbb{E}\tau_{y,q,b} = \sup_{q \in (0,1)} \dfrac{b-y^2}{1-q} = \infty.$$ Combining all of these cases, we can write $$ \mathbb{E} \tau_{y,r,b} = \dfrac{b-y^2}{(1-r)_{+}} \mathbbm{1}(b >y^2).$$ Coming back to our original problem, we observe that $$ \mathbb{E}(T) = \mathbb{E} \left[ \mathbb{E}(T \mid X_{\varepsilon})\right] = \varepsilon + \mathbb{E} \left[ \dfrac{c^2\varepsilon-X_{\varepsilon}^2}{(1-c^2)_+} \mathbbm{1}\left(X_{\varepsilon}^2 < c^2\varepsilon \right)\right] = \varepsilon + \mathbb{E} \left[ \dfrac{\varepsilon(c^2-Z^2)}{(1-c^2)_+} \mathbbm{1}\left(Z^2 < c^2 \right)\right],$$ where $Z \sim N(0,1)$. The last equality is true since $X_{\varepsilon} \sim N(0,\varepsilon)$. The case of $c \geq 1$ is clearly trivial. So let us focus on $c \in (0,1)$. Then \begin{align*} \mathbb{E}(T) = \varepsilon + \dfrac{\varepsilon}{1-c^2} \mathbb{E} \left[ (c^2-Z^2)\mathbbm{1}(Z^2<c^2)\right] &= \varepsilon + \dfrac{\varepsilon}{1-c^2} \left[ c^2 \mathbb{P}(|Z| <c) - \mathbb{E}\left(Z^2; |Z|<c \right)\right] \end{align*} thus yielding the final answer. |
2009-q6 | Let $\{X_n, \mathcal{F}_n, n \ge 1\}$ be a martingale difference sequence such that, for all $n$, $E(X_n^2|\mathcal{F}_{n-1}) = 1$ and $\sup_{n \ge 1} E(|X_n|^r | \mathcal{F}_{n-1}) < \infty$ a.s. for some $r > 2$. Suppose that $u_n$ is $\mathcal{F}_{n-1}$-measurable and that $n^{-1} \sum_{i=1}^n u_i^2 \overset{P}{\to} c$ for some constant $c > 0$. Write an explicit formula for $$ \lim_{n \to \infty} P \left\{ \max_{1\le k \le n} \left( \sum_{i=1}^k u_i X_i \right) / \sqrt{n} \ge c \right\}. $$ | Since we are not given any integrability condition on $u$, we need to perform some truncation to apply MG results. Let $v_{n,k}=u_k \mathbbm{1}(|u_k| \leq n).$ Note that \begin{align*} \mathbb{P} \left( \max_{1 \leq k \leq n} \sum_{i=1}^k u_iX_i \neq \max_{1 \leq k \leq n} \sum_{i=1}^k v_{n,i}X_i \right) &\leq \mathbb{P}(\exists \; 1 \leq k \leq n, \; u_k \neq v_{n,k}) = \mathbb{P}(\exists \; 1 \leq k \leq n, \; |u_k| > n) \leq \mathbb{P}(\max_{1 \leq k \leq n} |u_k| >n), \end{align*} where the last term goes to $0$ by Lemma~\ref{lemma} and the fact that $n^{-1}\sum_{k=1}^n u_k^2 \stackrel{p}{\longrightarrow} c$, since this implies $n^{-1}\max_{1 \leq k \leq n} u_k^2 \stackrel{p}{\longrightarrow} 0$. Thus $$ \dfrac{1}{\sqrt{n}} \max_{1 \leq k \leq n} \sum_{i=1}^k u_iX_i =\dfrac{1}{\sqrt{n}} \max_{1 \leq k \leq n} \sum_{i=1}^k v_{n,i}X_i + o_p(1).$$ We have $v_{n,i}X_i$ to be square-integrable for all $n,i \geq 1$. Clearly, $v_{n,i}X_i \in m\mathcal{F}_i$ and $\mathbb{E}(v_{n,i}X_i \mid \mathcal{F}_{i-1}) = v_{n,i} \mathbb{E}(X_i \mid \mathcal{F}_{i-1})=0$. Hence $\left\{S_{n,k}=\sum_{i=1}^k v_{n,i}X_i, \mathcal{F}_k, k \geq 0 \right\}$ is a $L^2$-Martingale with $S_{n,0} :=0$. Its predictable compensator process is as follows. $$ \langle S_n \rangle_l = \sum_{k=1}^l \mathbb{E}(v_{n,k}^2X_k^2 \mid \mathcal{F}_{k-1}) = \sum_{k=1}^l v_{n,k}^2 \mathbb{E}(X_k^2 \mid \mathcal{F}_{k-1}) = \sum_{k=1}^l v_{n,l}^2, \; \forall \; n,l \geq 1.$$ For any $q>0$ and any sequence of natural numbers $\left\{l_n \right\}$ such that $l_n =O(n)$, we have $$ \mathbb{P} \left(\sum_{k=1}^{l_n} |v_{n,k}|^q \neq \sum_{k=1}^{l_n} |u_k|^q \right) \leq \mathbb{P}\left( \max_{k=1}^{l_n} |u_k| >n \right) \longrightarrow 0, $$ which follows from Lemma~\ref{lemma}. Thus for any sequence of positive real numbers $\left\{c_n \right\}$ we have \begin{equation}{\label{error}} \dfrac{1}{c_n} \sum_{k=1}^{l_n} |v_{n,k}|^q = \dfrac{1}{c_n} \sum_{k=1}^{l_n} |u_k|^q + o_p(1). \end{equation} In particular, for any $0 \leq t \leq 1$, $$ n^{-1}\langle S_n \rangle_{\lfloor nt \rfloor} = \dfrac{1}{n}\sum_{k=1}^{\lfloor nt \rfloor} v_{n,k}^2 = \dfrac{1}{n}\sum_{k=1}^{\lfloor nt \rfloor} u_k^2 + o_p(1) = \dfrac{\lfloor nt \rfloor}{n}\dfrac{1}{\lfloor nt \rfloor}\sum_{k=1}^{\lfloor nt \rfloor} u_k^2 + o_p(1) \stackrel{p}{\longrightarrow} ct.$$ Also for any $\varepsilon >0$, \begin{align*} \dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[(S_{n,k}-S_{n,k-1})^2; |S_{n,k}-S_{n,k-1}| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] &= \dfrac{1}{n} \sum_{k=1}^n v_{n,k}^2\mathbb{E} \left[X_k^2; |v_{n,k}X_k| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] \end{align*} which establishes the desired conditions needed to apply \textit{Martingale CLT}. 
Define, $$ \widehat{S}_n(t) := \dfrac{1}{\sqrt{cn}} \left[S_{n,\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(S_{n,\lfloor nt \rfloor +1}-S_{n,\lfloor nt \rfloor}) \right]=\dfrac{1}{\sqrt{cn}} \left[S_{n,\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)v_{n,\lfloor nt \rfloor +1}X_{\lfloor nt \rfloor +1} \right], $$ for all $0 \leq t \leq 1$. We then have $\widehat{S}_n \stackrel{d}{\longrightarrow} W$ as $n \to \infty$ on the space $C([0,1])$, where $W$ is the standard Brownian Motion on $[0,1]$. Observe that the function $t \mapsto \widehat{S}_n(t)$ on $[0,1]$ is the piecewise linear interpolation using the points $$ \left\{ \left( \dfrac{k}{n}, \dfrac{1}{\sqrt{cn}} S_{n,k} \right) \Bigg \rvert k=0, \ldots, n\right\}.$$ Recall the fact that for a piecewise linear function on a compact set, we can always find a maximizing point among the collection of the interpolating points. Hence, $$ \dfrac{1}{\sqrt{n}}\max_{1 \leq k \leq n} \sum_{i=1}^k u_i X_i = \dfrac{1}{\sqrt{n}} \max_{1 \leq k \leq n} \sum_{i=1}^k v_{n,i}X_i + o_p(1)=\max_{1 \leq k \leq n} \dfrac{1}{\sqrt{n}} S_{n,k} +o_p(1) = \sqrt{c} \sup_{0 \leq t \leq 1} \widehat{S}_n(t) +o_p(1).$$ Since, $\mathbf{x} \mapsto \sup_{0 \leq t \leq 1} x(t)$ is a continuous function on $C([0,1])$, we have $$ \sup_{0 \leq t \leq 1} \widehat{S}_n(t) \stackrel{d}{\longrightarrow} \sup_{0 \leq t \leq 1} W(t) \stackrel{d}{=}|W_1|.$$ Noting that $\sup_{0 \leq t \leq 1} W(t)$ has a continuous distribution function, we can conclude that $$ \mathbb{P} \left( \max_{1 \leq k \leq n} \left( \sum_{i=1}^k u_i X_i \right) \middle/\sqrt{n} \geq c \right) \longrightarrow \mathbb{P}(\sqrt{c} |W(1)| \geq c) = 2\bar{\Phi}(\sqrt{c}).$$ |
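An illustrative special case of the limit (a sketch under assumptions chosen only for the illustration, not the general statement): taking $u_i \equiv 1$ and $X_i$ i.i.d. Rademacher gives $n^{-1}\sum u_i^2 \to c = 1$, so the formula predicts $\mathbb{P}\big(\max_k S_k/\sqrt{n} \geq 1\big) \to 2\bar{\Phi}(1)$.

```python
# Compare the empirical probability with 2*(1 - Phi(1)) in the u_i = 1, Rademacher case.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n, n_paths = 1000, 20_000
S = np.cumsum(rng.choice([-1.0, 1.0], size=(n_paths, n)), axis=1)
print((S.max(axis=1) / np.sqrt(n) >= 1.0).mean(), 2.0 * (1.0 - norm.cdf(1.0)))
```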
2010-q1 | Suppose \( S_n = \sum_{k=1}^{n} X_k \), where \(\{X, X_k\}\) are i.i.d. random variables of probability density function \( f_X(x) = x^{-2} \mathbf{1}_{|x|\geq 2} \). (a). What is the limiting distribution of \( \bar{S}_n = n^{-1} S_n \)? (suffices to find it up to a constant scaling factor).
(b). Prove that \( Z_n = \frac{1}{n \log n} \sum_{k=1}^{n} |X_k| \) converges in probability and find its limit. | We have i.i.d. random variables \(\left\{X_n \right\}_{n \geq 1}\) with probability density function \(f(x) = x^{-2}\mathbbm{1}_{|x| \geq 2}\). \(S_n := \sum_{k=1}^n X_k\) and \(\bar{S}_n := S_n/n\).
\begin{enumerate} [label=(\alph*)]
\item
The random variables do not have finite mean, so the SLLN will not apply. One plausible brute-force method is to try the characteristic function.
\begin{equation}{\label{first}}
\mathbb{E}\left(e^{i t \bar{S}_{n}}\right)=\mathbb{E} \left(e^{itS_n/n}\right)=\prod_{i=1}^{n} \mathbb{E}\left(e^{itX_{i}/n}\right)=\left[\mathbb{E}\left(e^{it X_{1}/n}\right)\right]^{n}.
\end{equation}
Now, for any \(u >0\),
\begin{align*}
\mathbb{E}\left(\exp(iuX_1)\right) = \int_{|x| \geq 2} e^{iux}x^{-2}\, dx &= 2 \int_{2}^{\infty} \cos (ux) x^{-2} \, dx \\
& = 2 u \int_{2u}^{\infty}y^{-2} \cos (y) \, dy \\
& = 2u \left[\int_{2u}^{\infty}y^{-2} \, dy - \int_{2u}^{\infty}y^{-2} (1-\cos y) \, dy\right] \\
& = 1 - 4u \int_{2u}^{\infty}y^{-2} \sin^2(y/2) \, dy \\
& = 1 - 2u \int_{u}^{\infty}z^{-2} \sin^2(z) \, dz.
\end{align*}
Thus as \(u \downarrow 0\), we have
$$ \mathbb{E}\left(\exp(iuX_1)\right) = 1 - 2u (A+o(1)),$$
where \(A := \int_{0}^{\infty}z^{-2} \sin^2(z) \, dz = \pi /2 < \infty\).
Plugging in the above derived value in (\ref{first}), we conclude that
for any \(t >0\),
$$ \mathbb{E}\left(e^{i t \bar{S}_{n}}\right) = \left[1 - 2t(A+o(1))/n\right]^{n} \longrightarrow \exp(-2tA) = \exp(-\pi t). $$
Since the density \(f\) is symmetric around \(0\), we have \(\bar{S}_n \stackrel{d}{=} -\bar{S}_n.\) Therefore, for all \(t \in \mathbb{R}\),
$$ \mathbb{E}\left(e^{i t \bar{S}_{n}}\right) = \mathbb{E}\left(e^{i |t| \bar{S}_{n}}\right) \longrightarrow \exp(-\pi |t|),$$
the case for \(t=0\) being trivial. This shows that the limiting distribution of \(\bar{S}_n\) is \(Cauchy(0,\pi)\). To see a short proof of why \(A=\pi/2\) notice that
\begin{align*}
\int_{-\infty}^{\infty} (1-|x|)\mathbbm{1}_{[-1,1]}(x) \exp(i \xi x) \; dx = 2\int_{0}^1 (1-x) \cos (\xi x)\, dx &= 2(1-x) \dfrac{\sin(\xi x)}{\xi} \bigg \vert_{0}^{1} + 2\int_{0}^1 \dfrac{\sin(\xi x)}{\xi} \; dx \\
& = \dfrac{2(1-\cos(\xi))}{\xi^2} = \dfrac{\sin^2(\xi/2)}{(\xi/2)^2}.
\end{align*}
Since \((1-|x|)\mathbbm{1}_{[-1,1]}(x)\) is a density on \(\mathbb{R}\) and the function \(\xi \mapsto \dfrac{\sin^2(\xi/2)}{(\xi/2)^2}\) is integrable, we obtain by the \textit{Inversion Theorem} the following.
$$ \dfrac{1}{2\pi} \int_{-\infty}^{\infty} \dfrac{\sin^2(\xi/2)}{(\xi/2)^2} \, d \xi = 1.$$
This, after a change of variable, gives \(A=\pi/2\).
\item
We shall solve this problem by two methods. First solution is by truncation argument. Second solution illustrates that you can still manage this problem by brute-force calculations, even if can not think of proper truncation-type trick.
\begin{enumerate}[label=(\arabic*)]
\item (Solution by truncation)
Let us fix a sequence of positive real numbers \(\left\{c_n\right\}_{n \geq 1} \subseteq [2, \infty)\) which will be our truncation thresholds and we shall later make the correct choice after observing what properties this sequence should have to make he argument work.
Define \(X_{n,k} = X_{k} \mathbbm{1}_{|X_k| \leq c_n}\), for all \(1 \leq k \leq n\). Then
\begin{align*}
\mathbb{P}\left[ \dfrac{1}{n \log n} \sum_{k=1}^n |X_k| \neq \dfrac{1}{n \log n} \sum_{k=1}^n |X_{n,k}| \right] &\leq \mathbb{P} \left[ |X_k| > c_n, \; \text{for some } 1 \leq k \leq n \right] \\
& \leq n \mathbb{P}(|X_1| > c_n) = 2nc_n^{-1}.
\end{align*}
Provided \(n/c_n \longrightarrow 0\), we have
$$ \dfrac{1}{n \log n} \sum_{k=1}^n |X_k| -\dfrac{1}{n \log n} \sum_{k=1}^n |X_{n,k}| \stackrel{p}{\longrightarrow} 0.$$
On the other hand,
\begin{align*}
\mathbb{E}\left[ \dfrac{1}{n \log n} \sum_{k=1}^n |X_{n,k}| \right] = \dfrac{2}{\log n} \int_{2}^{c_n} x^{-1} \; dx = \dfrac{2 \log (c_n/2)}{\log n},
\end{align*}
and
\begin{align*}
\operatorname{Var}\left[ \dfrac{1}{n \log n} \sum_{k=1}^n |X_{n,k}| \right] = \dfrac{1}{n(\log n)^2} \operatorname{Var}(|X_{n,1}|) \leq \dfrac{1}{n(\log n)^2} \mathbb{E}|X_{n,1}|^2 = \dfrac{2}{n(\log n)^2} \int_{2}^{c_n} dx = \dfrac{2(c_n-2)}{n(\log n)^2}.
\end{align*}
We need the expectation to converge to some real number and variance to go to zero. Clearly the choice \(c_n = n \log n\) works and for that choice we get \( \dfrac{2 \log (c_n/2)}{\log n} \longrightarrow 2\). Hence,
$$ \dfrac{1}{n \log n} \sum_{k=1}^n |X_k| \stackrel{p}{\longrightarrow} 2.$$
\item (Solution by brute-force calculation) Since the concerned random variables are non-negative we shall try to use Laplace transforms. If the limit turns out to be constant, we can conclude our assertion from that. Thus it is enough to show that for all \(t >0\),
$$ \mathbb{E} \left[ \exp \left(-t\dfrac{1}{n \log n} \sum_{k=1}^n |X_k| \right)\right] \longrightarrow \exp(-2t).$$
We have for all \(s >0\),
\begin{align*}
\mathbb{E}\exp(-s|X_1|) = 2 \int_{2}^{\infty} x^{-2}e^{-sx}\, dx &= 2s \int_{2s}^{\infty} y^{-2}e^{-y}\, dy \\
&= -2s y^{-1}e^{-y} \bigg \vert_{2s}^{\infty} - 2s \int_{2s}^{\infty} y^{-1}e^{-y}\, dy \\
&= \exp(-2s) - 2s \int_{2s}^{\infty} y^{-1}e^{-y}\, dy.
\end{align*}
Since \(\int_{0}^{\infty} y^{-1}e^{-y}\, dy = \infty\), using \textit{L'Hospital's Rule} we obtain
$$ \lim_{u \downarrow 0} \dfrac{\int_{u}^{\infty} y^{-1}e^{-y}\, dy}{-e^{-u}\log u} = \lim_{u \downarrow 0} \dfrac{-u^{-1}e^{-u}}{-e^{-u}u^{-1}+e^{-u}\log u} =1.$$
and hence as \(s \downarrow 0\),
$$ \mathbb{E} \exp(-s|X_1|) = \exp(-2s) + 2s \exp(-2s) \log(2s) (1+o(1)). $$
Therefore, we obtain (setting \(c_n=n \log n\)),
\begin{align*}
\log \mathbb{E} \left[ \exp \left(-tc_n^{-1} \sum_{k=1}^n |X_k| \right)\right] &= n \log \mathbb{E} \left[ \exp(-t|X_1|/c_n)\right] \\
& \sim -n \left\{1 - \mathbb{E} \left[ \exp(-t|X_1|/c_n)\right] \right\} \\
&= -n \left[1 - \exp(-2t/c_n) - 2tc_n^{-1}\exp(-2tc_n^{-1})\log(2t/c_n)(1+o(1)) \right] \\
& = -n \left[2tc_n^{-1}(1+o(1)) - 2tc_n^{-1}\exp(-2tc_n^{-1})\log(2t/c_n)(1+o(1)) \right] \\
& = (2t/ \log n) \exp(-2tc_n^{-1}) \log(2t/c_n)(1+o(1)) + o(1) \\
& = - 2t \dfrac{\log c_n - \log 2t}{\log n}(1+o(1)) + o(1) \\
& = - 2t \dfrac{\log n +\log \log n - \log 2t}{\log n}(1+o(1)) + o(1) \longrightarrow -2t.
\end{align*}
This completes the proof.
\end{enumerate}
|
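A simulation sketch for both parts (illustrative only; the sample sizes and seed are arbitrary assumptions). Since $\mathbb{P}(|X|>t)=2/t$ for $t \geq 2$, one can sample $|X| = 2/U$ with $U \sim \text{Uniform}(0,1)$ and attach a random sign; note that in part (b) the convergence is only logarithmic in $n$, so agreement with $2$ is rough.

```python
# (a) compare the tails of S_n/n with Cauchy(0, pi); (b) check (n log n)^{-1} sum |X_k|.
import numpy as np

rng = np.random.default_rng(7)
n, n_runs = 100_000, 2_000
Sbar = np.empty(n_runs)
Zn = np.empty(n_runs)
for r in range(n_runs):
    X = (2.0 / rng.random(n)) * rng.choice([-1.0, 1.0], size=n)   # density x^{-2} on |x|>=2
    Sbar[r] = X.mean()
    Zn[r] = np.abs(X).sum() / (n * np.log(n))

for x in [2.0, 5.0, 10.0]:   # P(Cauchy(0, pi) > x) = 1/2 - arctan(x/pi)/pi
    print(x, (Sbar > x).mean(), 0.5 - np.arctan(x / np.pi) / np.pi)
print(Zn.mean())             # should be roughly 2 (log-slow convergence)
```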
2010-q2 | Suppose \( \{X_n,\mathcal{F}_n, n \geq 0\} \) is a martingale with \( \mathbb{E}(X_0) \leq 0 \) such that \( \mathbb{P}(X_n > 0 \text{ for some } n \geq 0) = 1 \). Let \( M_\infty = \sup_{n \geq 0} (X_n)_- \) (where \( (X_n)_- \) denotes the negative part of the random variable \( X_n \)). Is \( \mathbb{E}(M_\infty) \) necessarily infinite? Prove or give a counter-example.
Hint: Consider \( Z_n = \min(1, X_n) \). | We have a Martingale \(\left\{X_n, \mathcal{F}_n\right\}_{n \geq 0}\) such that \(\mathbb{E}(X_0) \leq 0\) and \(\mathbb{P}(X_n >0 \text{ for some } n \geq 0) =1\). Define, \(M_{\infty} := \sup_{n \geq 0} (X_n)_{-}\). We claim that \(\mathbb{E}(M_{\infty}) = \infty\).
We prove the assertion by contradiction. Suppose \(\mathbb{E}(M_{\infty}) < \infty\). Define the \(\mathcal{F}\)-stopping time
$$ T := \inf \left\{n \geq 0 : X_n >0 \right\}.$$
By hypothesis, \(\mathbb{P}(T < \infty) =1\) and hence \(X_{T \wedge n} \stackrel{a.s.}{\longrightarrow} X_T\) as \(n \to \infty\). On the other hand, by the \textit{Optional Stopping Theorem}, \(\mathbb{E} X_{T \wedge n} = \mathbb{E} X_0\), for all \(n \geq 0\). Hence by \textit{Fatou's Lemma},
\begin{align*}
\mathbb{E}|X_T| \leq \liminf_{n \to \infty} \mathbb{E}|X_{T \wedge n}| &= \liminf_{n \to \infty} \left[ \mathbb{E}(X_{T \wedge n})_{+} + \mathbb{E}(X_{T \wedge n})_{-} \right] \\
&= \liminf_{n \to \infty} \left[ \mathbb{E}(X_{T \wedge n}) + 2\mathbb{E}(X_{T \wedge n})_{-} \right] \\
& = \mathbb{E}(X_0) + 2 \liminf_{n \to \infty} \mathbb{E}(X_{T \wedge n})_{-}.\
\end{align*}
Since \(x \mapsto x_{-}\) is continuous we have \((X_{T \wedge n})_{-} \stackrel{a.s.}{\longrightarrow} (X_T)_{-}\) as \(n \to \infty\). By our assumption, \(\sup_{n \geq 0} (X_{T \wedge n})_{-} \leq \sup_{n \geq 0} (X_n)_{-} = M_{\infty} \in L^1\). By the \textit{Dominated Convergence Theorem}, we conclude that \(\mathbb{E}(X_{T \wedge n})_{-} \stackrel{n \to \infty}{\longrightarrow} \mathbb{E}(X_T)_{-}=0\), since \(X_T >0\) almost surely by definition. Plugging this result into the above inequality, we get \(\mathbb{E}|X_T| \leq \mathbb{E} X_0 \leq 0\), contradicting the fact that \(X_T >0\) almost surely. Hence, \(\mathbb{E}M_{\infty} = \infty.\)
(\textbf{Solution using the hint}) We again prove by contradiction. Set \(Z_n = \min(1,X_n)\). Since, \(x \mapsto \min(1,x)\) is a concave function we have \(\left\{Z_n, \mathcal{F}_n, n \geq 0\right\}\) is a sup-MG. Use the same stopping time \(T\) as the previous solution. Note that
$$ \sup_{n \geq 0} |Z_{T \wedge n}| \leq \sup_{n \geq 0} |Z_{n}| \leq 1+ \sup_{n \geq 0} (X_n)_{-} = 1+ M_{\infty} \in L^1.$$
Thus \(\left\{Z_{T \wedge n}, \mathcal{F}_n, n \geq 0\right\}\) is a uniformly integrable sup-MG and hence \(\mathbb{E}Z_{T \wedge n} \to \mathbb{E}Z_T\).
Being a sup-MG we have \(\mathbb{E}Z_{T \wedge n} \leq \mathbb{E} Z_0 \leq \mathbb{E}(X_0) \leq 0\), for all \(n \geq 0\). Therefore, \(\mathbb{E}Z_T \leq 0\), contradicting the fact that \(Z_T >0\) almost surely.
|
2010-q3 | At time \( n = 0 \), zero is the only “occupied” site in \( \mathbb{Z} \), and a particle placed at zero begins a symmetric simple random walk on the integers. As soon as it reaches a site that is not occupied, it stops and occupies the site. (This will occur on the first step, and the second occupied site will be either 1 or -1, each happening with probability 1/2.) A new particle immediately begins a new symmetric simple random walk starting from zero, until it too reaches a site that is not occupied. Then it stops and occupies that site. This continues indefinitely. The random variable \( N_n \) denotes the number of occupied sites among the positive integers at the time that the \( n\)-th random walk first occupies a site.
(a). Explain why \( \{N_n\} \) is an (inhomogeneous) Markov chain.
(b). Compute its transition probabilities \( p_n(x, y) \) for \( n \geq 1, x, y \in \mathbb{Z} \) and find as simple expression for \( \mathbb{E}(N_n) \) as possible (note that \( \mathbb{E}(N_1) = 1/2 \)). | $N_n$ is the number of occupied sites on \(\mathbb{Z}_{+}\) after the \(n\)-th RW has occupied some site. We also define \(M_n\) to be the number of occupied sites on \(\mathbb{Z}_{-}\) after the \(n\)-th RW has occupied some site. Clearly, \(N_n+M_n=n\), for all \(n \geq 0\), since \(0\) is occupied by the \(0\)-th RW. It is also clear that \(N_n-N_{n-1}\) can take values \(0\) and \(1\) only and the same observation goes for \(M_n-M_{n-1}\). Also \(N_0=M_0=0\).
\begin{enumerate}[label=(\alph*)]
\item Suppose we condition on the event \((N_0=x_0,\ldots,N_{n}=x_n)\), assuming that this event has positive probability. Since, the SRW on \(\mathbb{Z}\) has step size \(1\), the sites on \(\mathbb{Z}_{+}\) are occupied one by one consecutively starting from \(1\). Thus on the above mentioned event the set of occupied sites in \(\mathbb{Z}_{+}\) after \(n\) RWs have been completed has to be \(\left\{ 1,\ldots, x_n\right\}\) (or the null set \(\emptyset\) if \(x_n =0\)). By similar argument, on the above event \(M_n=n-x_n\) and hence the set of occupied sites in \(\mathbb{Z}_{-}\) after \(n\) RWs has to be \(\left\{ -1,\ldots, -(n-x_n)\right\}\) (or the null set \(\emptyset\) if \(x_n =n\)).
Now consider the \((n+1)\)-th RW. It starts from \(0\) and on the above event it will finally occupy the site \(x_n+1\) or \(-(n+1-x_n)\), whichever it reaches first. Recall that for a SRW on \(\mathbb{Z}\) and \(a,b>0\), the probability that the walk, starting from \(0\), would reach \(b\) before it reaches \(-a\) is given by \(a/(a+b)\). Hence,
$$ \mathbb{P} \left[ N_{n+1}=x_n+1 \rvert N_{i}=x_i, \; \forall \; 0 \leq i \leq n \right] = \dfrac{n+1-x_n}{n+2}.$$
Therefore,
$$ \mathbb{P} \left[ N_{n+1} = k | N_{n}, \ldots, N_0\right] = \begin{cases}
(n+1-N_n)/(n+2), & \text{if } k=N_n+1 ,\\
(N_n+1)/(n+2), & \text{if } k=N_n,\\
0, & \text{otherwise}.
\end{cases}$$
Since the transition probability depends only on \(N_n\) (not on \(N_{n-1}, \ldots, N_0\)) but is also time-dependent (i.e., depends on \(n\)), the process \(\left\{N_n \right\}_{n \geq 0}\) is a time-inhomogeneous Markov chain.
\item As we have computed earlier, the transition probabilities take the following form for all \(n \geq 1\).
$$ p_n(x,y) := \mathbb{P}\left[N_{n}=y \rvert N_{n-1}=x \right] = \begin{cases}
(n-x)/(n+1), & \text{if } y=x+1 ,\\
(x+1)/(n+1), & \text{if } y=x,\\
0, & \text{otherwise}.
\end{cases}$$
From the above expression, it is easy to see that \(\mathbb{E}\left[N_n \rvert N_{n-1}\right] = N_{n-1} + 1 - (N_{n-1}+1)/(n+1)\). Taking expectations on both sides we get
$$(n+1) \mathbb{E} N_n = n \mathbb{E}N_{n-1} + n,\; \forall \; n \geq 1.$$
Unfolding the recursion relation we obtain,
$$ (n+1) \mathbb{E}N_n = \sum_{k=1}^{n} k+ \mathbb{E}N_0, \; \forall \; n \geq 1.$$
Since, \(\mathbb{E}(N_0) =0\), we have \(\mathbb{E}(N_n) = n/2.\)
We can also obtain the expectation without any calculation. First notice that, due to the symmetry of the SRW and the fact that every walk starts from \(0\), we have \(N_n \stackrel{d}{=}M_n\), for all \(n \geq 0\). On the other hand, \(N_n+M_n=n\), for all \(n \geq 1\). Together they imply that \(\mathbb{E}(N_n) = n/2\). (A quick simulation check of this identity is sketched right after this enumeration.)
\end{enumerate}
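As a purely illustrative aside (not part of the solution), the identity \(\mathbb{E}(N_n)=n/2\) can be checked by simulating the occupation process directly, running each walker as an actual symmetric random walk until it leaves the currently occupied block. The sketch below uses only the Python standard library; the chosen values of \(n\) and the number of replications are arbitrary.
\begin{verbatim}
import random

def simulate_N(n, rng):
    # occupied sites form the block {-neg, ..., -1, 0, 1, ..., pos}
    pos, neg = 0, 0
    for _ in range(n):
        s = 0
        while -neg <= s <= pos:          # walk until an unoccupied site is reached
            s += rng.choice((-1, 1))
        if s > 0:
            pos += 1
        else:
            neg += 1
    return pos                           # N_n = number of occupied positive sites

rng = random.Random(0)
n, reps = 12, 10000
est = sum(simulate_N(n, rng) for _ in range(reps)) / reps
print(f"estimated E(N_{n}) = {est:.3f}; predicted value n/2 = {n / 2}")
\end{verbatim}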
|
2010-q4 | Fixing $p\in(0, 1)$, let $c_p=(2-p)/(1-p)$. (a). Show that if non-negative random variables \( V \) and \( A \) are such that
\[ \mathbb{P}(V \geq u) \leq \mathbb{P}(A \geq u) + \mathbb{E}[\min(A/u, 1)] \] for any \( u > 0 \), then \( \mathbb{E}[V^p] \leq c_p \mathbb{E}[A^p] \).
(b). Let \( A_n \) be the \( \mathcal{F}_n\)-predictable, non-decreasing sequence in Doob’s decomposition of a non-negative sub-martingale \( (X_n, \mathcal{F}_n) \) that starts at \( X_0 = 0 \) (that is, \( A_0 = 0 \) and \( Y_n = X_n - A_n \text{ is an } \mathcal{F}_n\)-martingale). Show that \( \mathbb{P}(V_\tau \geq u) \leq u^{-1}\mathbb{E}(A_\tau) \) for \( V_n = \max(X_0, X_1, \ldots, X_n) \), any finite \( \mathcal{F}_n\)-stopping time \( \tau \) and all \( u > 0 \).
(c). Conclude that in this case \( \mathbb{E}[V_\tau^p] \leq c_p\mathbb{E}[A_\tau^p] \). | \begin{enumerate}[label=(\alph*)]
\item Since \(V\) and \(A\) are non-negative random variables, using \textit{Integration by parts} formula and \textit{Fubini's Theorem} we get the following. Note that while evaluating the integrals in \((i)\) we have used \(p \in (0,1)\).
\begin{align*}
\mathbb{E}\left[ V^p\right] = \int_{0}^{\infty} pu^{p-1}\mathbb{P}(V \geq u) \, du & \leq \int_{0}^{\infty} pu^{p-1}\mathbb{P}(A \geq u) \, du + \int_{0}^{\infty} pu^{p-1}\mathbb{E} \left( \min(A/u,1)\right) \, du \\
&= \mathbb{E}\left[ A^p\right] + \mathbb{E} \left[\int_{0}^{\infty} pu^{p-1} \min(A/u,1) \,du \right] \\
& = \mathbb{E}\left[ A^p\right] + \mathbb{E} \left[ \int_{0}^{A} pu^{p-1} \,du + \int_{A}^{\infty} pu^{p-1} Au^{-1}\,du\right] \\
& \stackrel{(i)}{=} \mathbb{E}\left[ A^p\right] + \mathbb{E} \left[A^p + \dfrac{pA^p}{1-p}\right] = \dfrac{2-p}{1-p} \mathbb{E}\left[ A^p\right] = c_p \mathbb{E}\left[ A^p\right].
\end{align*}
\item We have a non-negative sub-MG \(\left\{X_n, \mathcal{F}_n,n \geq 0\right\}\) with \(X_0=0\) and corresponding increasing predictable process \(\left\{A_n\right\}_{n \geq 0}\) with \(A_0=0\), i.e. \(\left\{Y_n=X_n-A_n, \mathcal{F}_n\right\}_{n \geq 0}\) is a MG. By definition, \(V_n := \max(X_0, \ldots, X_n)\). Therefore \(\left\{ X_{\tau \wedge n}, \mathcal{F}_n, n \geq 0\right\}\) is also a non-negative sub-MG and \(\left\{ Y_{\tau \wedge n}, \mathcal{F}_n, n \geq 0\right\}\) is a MG. This gives \(\mathbb{E}X_{\tau \wedge n} = \mathbb{E}A_{\tau \wedge n} + \mathbb{E}Y_{\tau \wedge n} = \mathbb{E}A_{\tau \wedge n} + \mathbb{E}Y_0 = \mathbb{E}A_{\tau \wedge n}\).
By \textit{Doob's Inequality} we have for \(u >0\),
\begin{equation}{\label{ineq}}
\mathbb{P}\left[V_{\tau \wedge n} \geq u \right] = \mathbb{P}\left[\max_{k=0}^{\tau \wedge n} X_{k} \geq u \right] = \mathbb{P}\left[\max_{k=0}^n X_{\tau \wedge k} \geq u \right] \leq u^{-1} \mathbb{E}\left[X_{\tau \wedge n} \right] = u^{-1} \mathbb{E}\left[A_{\tau \wedge n} \right].
\end{equation}
Since \(\tau\) is finite almost surely, by the \textit{Monotone Convergence Theorem}, as \(n \to \infty\), \(\mathbb{E}A_{\tau \wedge n} \uparrow \mathbb{E}A_{\tau}\). Similarly,
$$ (V_{\tau \wedge n} \geq u) =\left( \max_{k=0}^{\tau \wedge n} X_{k} \geq u \right) = \left( \max_{k=0}^n X_{\tau \wedge k} \geq u \right) \uparrow \left( \sup_{k=0}^{\infty} X_{\tau \wedge k} \geq u \right) = \left( \sup_{k=0}^{\tau} X_{k} \geq u \right) = (V_{\tau} \geq u).$$
Hence, taking \(n \to \infty\) in (\ref{ineq}) we get \(\mathbb{P}\left[V_{\tau} \geq u \right] \leq u^{-1}\mathbb{E}A_{\tau}\).
\item Consider the \(\mathcal{F}\)-stopping time \(\theta := \inf\left\{ n \geq 0 : A_{n+1}>u\right\}\). Since \(\left\{A_n\right\}_{n \geq 0}\) is predictable, this is indeed an \(\mathcal{F}\)-stopping time. Note that,
$$ \mathbb{P}(V_{\tau} \geq u, \theta \geq \tau) \leq \mathbb{P}(V_{\tau \wedge \theta} \geq u) \leq u^{-1}\mathbb{E}A_{\tau \wedge \theta} \leq u^{-1} \mathbb{E} \left[A_{\tau} \wedge u \right],$$
where the first inequality is true since the process \(V\) is non-decreasing, the second inequality follows from arguments similar to those presented in (b), since \(\tau \wedge \theta\) is an almost surely finite \(\mathcal{F}\)-stopping time, and the third inequality follows from the fact that the process \(A\) is non-decreasing with \(A_{\theta} \leq u\). Therefore,
\begin{align*}
\mathbb{P} \left[V_{\tau} \geq u \right] = \mathbb{P} \left[V_{\tau} \geq u , \theta \geq \tau\right] + \mathbb{P} \left[V_{\tau} \geq u, \theta < \tau \right] &\leq u^{-1} \mathbb{E} \left[A_{\tau} \wedge u \right] + \mathbb{P} \left[\theta < \tau \right]\\
& = \mathbb{E} \left[(A_{\tau}/u) \wedge 1 \right] + \mathbb{P}(A_{\tau} > u).
\end{align*}
Now we use part (a) to conclude that \(\mathbb{E}\left[ V_{\tau}^p\right] \leq c_p\mathbb{E}\left[ A_{\tau}^p\right]\). (A quick numerical check of the integral identity from step \((i)\) of part (a) is sketched below.)
\end{enumerate}
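For illustration only, the elementary integral used in step \((i)\) of part (a), namely \(\int_{0}^{\infty} pu^{p-1}\min(a/u,1)\,du = a^p + \frac{p\,a^p}{1-p}\) for fixed \(a>0\) and \(p \in (0,1)\), can be verified numerically. The sketch below is just a sanity check and assumes NumPy and SciPy are available; the test values of \(a\) and \(p\) are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def lhs(a, p):
    # On (0, a) the integrand is p*u^{p-1} (since min(a/u,1)=1), so that piece is a^p exactly;
    # the tail over (a, infinity) is integrated numerically.
    tail, _ = quad(lambda u: p * u ** (p - 1) * (a / u), a, np.inf)
    return a ** p + tail

for a, p in [(0.7, 0.3), (2.5, 0.5), (10.0, 0.9)]:
    rhs = a ** p + p * a ** p / (1 - p)     # value claimed in step (i)
    print(f"a={a}, p={p}: numeric={lhs(a, p):.6f}, claimed={rhs:.6f}")
\end{verbatim}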
|
2010-q5 | Recall that the Ornstein-Uhlenbeck process \( U_t = e^{-t/2} W_{e^t}, t \geq 0 \), derived from a standard Brownian motion \(\{ W_t, t \geq 0 \} \) (with \( W_0 = 0 \)), is a homogeneous, zero-mean, Gaussian Markov process, such that \( \mathbb{E}(U_t | U_0) = e^{-t/2}U_0 \) and \( \text{Var}(U_t | U_0) = 1 - e^{-t} \).
(a). What is the transition probability kernel \( p_t(x, y) \) for the Markov semi-group of this process?
(b). Find the law of \( U_t \) and explain why \( (U_t, t \geq 0) \) is a stationary process.
(c). Suppose \( (X,Y) \) is a zero mean Gaussian random vector such that \( \mathbb{E}X^2 = \mathbb{E}Y^2 = 1 \) and \( \mathbb{E}[XY] = c \geq 0 \). Fixing a non-empty interval \( I = [a, b) \) of the real line, is the function \( g(c) = \mathbb{P}( X \in I, Y \in I ) \) nondecreasing in \( c \)? Prove or give a counter-example.
Hint: For the Hilbert space \( \mathcal{H} \) of norm \( ||h|| = (h,h)^{1/2} \) and inner product \( (h,f) = \frac{1}{\sqrt{2 \pi}} \int h(x)f(x)e^{-x^2/2}dx \), it is known that for any \( t > 0 \) and \( f \in \mathcal{H} \),
\[ (p_tf)(x) := \int p_t(x,y)f(y)dy = \kappa \sum_{n = 0}^{\infty} e^{-\lambda_n t}(\psi_n, f)\psi_n(x), \] with \( \kappa > 0 \) a finite constant, \( 0 < \lambda_0 < \lambda_1 < \ldots \) and functions \( \psi_n(.) \) such that \( ||\psi_n|| = 1 \), and where the series on the right-side converges weakly in \( \mathcal{H} \) to \( p_tf \). | \begin{enumerate}[label=(\alph*)]
\item The O-U process is homogeneous Gaussian Markov process with \(\mathbb{E}(U_t|U_0) =\exp(-t/2)U_0\) and \(\operatorname{Var}(U_t|U_0) = 1-\exp(-t)\). Therefore, the transition probability kernel is as follows.
\begin{align*}
p_t(x,y) &= \text{Density of } N\left( \exp(-t/2)x,1-\exp(-t)\right) \text{ distribution at } y \\
&= (1-e^{-t})^{-1/2} \phi \left( (1-e^{-t})^{-1/2}(y-xe^{-t/2}) \right) \\
&= \dfrac{1}{\sqrt{2 \pi (1-e^{-t})}} \exp \left( - \dfrac{(y-xe^{-t/2})^2}{2(1-e^{-t})}\right).
\end{align*}
\item \(U_t = \exp(-t/2)W_{e^t}\) and \(W_s \sim N(0,s)\) for all \(s \geq 0\). Hence, \(U_t \sim N(0,1).\)
Note that for any \(s,t \geq 0\), we have
$$\operatorname{Cov}(U_t,U_s) = \exp(-t/2-s/2)\operatorname{Cov}(W_{e^t},W_{e^s}) = \exp(-t/2-s/2)(e^t \wedge e^s) = \exp(-|t-s|/2). $$
Thus the O-U Process is weakly stationary and being a Gaussian process it implies that it is (strongly) stationary.
\item Set \(t = -2\log c\). Then for \(c \in (0,1)\), we clearly have from the covariance expression of part (b) that \((X,Y) \stackrel{d}{=} (U_0,U_t)\). Set \(f := \mathbbm{1}_{[a,b)}\). Clearly,
\(\int_{\mathbb{R}} f^2(x) \phi(x) \, dx < \infty\) and hence \(f \in \mathcal{H}\).
\begin{align*}
g(c)=\mathbb{P}(X \in I, Y \in I) = \mathbb{P}(U_0 \in I, U_t \in I) = \mathbb{E}\left[f(U_0)f(U_t) \right] &= \int_{\mathbb{R}} \mathbb{E}\left[f(U_t) |U_0=x\right] f(x) \phi(x) \, dx \\
&= \int_{\mathbb{R}} \left[\int_{\mathbb{R}}f(y)p_t(x,y)\,dy \right] f(x) \phi(x) \, dx \\
&= \int_{\mathbb{R}} p_tf(x) f(x) \phi(x) \, dx = (p_tf,f).
\end{align*}
Set
$$ h_n := \kappa \sum_{k=0}^n \exp(-\lambda_kt)(\psi_k,f)\psi_k.$$
We know that \(h_n\) converges weakly to \(p_tf\) and therefore \((h_n,f) \longrightarrow (p_tf,f)\). On the other hand,
$$ (h_n,f) = \kappa \left(\sum_{k=0}^n \exp(-\lambda_kt)(\psi_k,f)\psi_k,f \right) = \kappa \sum_{k=0}^n \exp(-\lambda_kt)(\psi_k,f)(\psi_k,f) = \kappa \sum_{k=0}^n \exp(-\lambda_kt)|(\psi_k,f)|^2 ,$$
and hence,
$$ g(c)=\mathbb{P}(X \in I, Y \in I) = (p_tf,f) = \kappa \sum_{k=0}^{\infty} \exp(-\lambda_kt)|(\psi_k,f)|^2.$$
Since, \(\kappa>0\) and \(\lambda_k \geq 0\) for all \(k \geq 0\), the right hand side, and hence \(g\), is non-decreasing on \((0,1)\). The function \(g\) is continuous on \([0,1]\), as can be seen from its integral expression. Therefore, \(g\) is non-decreasing on \([0,1]\). (A brief numerical illustration of this monotonicity is sketched right after this enumeration.)
\end{enumerate}
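Purely as an illustration of part (c), the monotonicity of \(g\) can be checked numerically for one particular interval \(I=[a,b)\) (here \(a=-0.5\), \(b=1\) are arbitrary choices): the rectangle probability of a standard bivariate normal with correlation \(c\) is evaluated on a grid of \(c\) values. This is only a sanity check and assumes SciPy's \texttt{multivariate\_normal} is available.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal as mvn

def g(c, a=-0.5, b=1.0):
    # P(X in [a,b), Y in [a,b)) for a standard bivariate normal with correlation c
    cov = [[1.0, c], [c, 1.0]]
    F = lambda x, y: mvn.cdf([x, y], mean=[0.0, 0.0], cov=cov)
    return F(b, b) - F(a, b) - F(b, a) + F(a, a)

cs = np.linspace(0.0, 0.95, 11)
vals = np.array([g(c) for c in cs])
print(np.round(vals, 4))
print("nondecreasing:", bool(np.all(np.diff(vals) >= -1e-8)))
\end{verbatim}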
|
2010-q6 | A function \( L : (0, \infty) \rightarrow \mathbb{R} \) is said to be slowly varying (at \( \infty \)) if
\[ \lim_{x \to \infty} \frac{L(cx)}{L(x)} = 1 \text{ for all } c > 0. \]
(a). Suppose martingale \( (M_t,\mathcal{F}_t) \) starting at \( M_0 = 0 \), is such that \( \max_{k \leq n} |M_k - M_{k-1}| \leq a_n \) and for any \( 0 < \epsilon \leq 1 \), as \( n \to \infty \),
\[ \frac{1}{nL(a_n)} \sum_{k=1}^{n} \mathbb{E} \left( (M_k - M_{k-1})^2 \mathbf{1}(|M_k - M_{k-1}| \leq \epsilon a_n) \mid \mathcal{F}_{k-1} \right) \xrightarrow{P} 1, \] where the non-random \( a_n \to \infty \) satisfy \( nL(a_n) = a_n^2 \) and \( a_{\lfloor cn \rfloor} \sim a_n \) for every \( c > 0 \). Show that then the linearly interpolated process \( t \mapsto (nL(a_n))^{-1/2} M_{\lfloor nt \rfloor} \) converges in distribution on \( C([0, 1]) \) to the standard Brownian motion.
(b). Suppose \( X_1, X_2, \ldots \) are i.i.d., symmetric around zero and in the domain of attraction of the normal distribution. Writing \( S_n = \sum_{k=1}^n X_k \) and \( V_n^2 = \sum_{k=1}^n X_k^2 \), show that the linearly interpolated process \( t \mapsto S_{\lfloor nt \rfloor}/V_n \) also converges in distribution on \( C([0, 1]) \) to the standard Brownian motion. | \begin{enumerate}[label=(\alph*)]
\item Consider the triangular MG sequence \(\left\{M_{n,l}, \mathcal{F}_{n,l}; n,l \geq 0\right\}\) defined as
$$ M_{n,l} := \dfrac{M_l}{\sqrt{nL(a_n)}}, \; \mathcal{F}_{n,l} = \mathcal{F}_l, \; \forall \; n\geq 1, l \geq 0.$$
Clearly, each row \(\left\{M_{n,l}, \mathcal{F}_{n,l}, l \geq 0\right\}\) is a MG with \(M_{n,0}=M_0/\sqrt{nL(a_n)} =0\) and, since \(\sup_{k \leq n} |M_{k}-M_{k-1}| \leq a_n\), we get \(|M_n| \leq na_n\), implying that each row is an \(L^2\)-MG.
Now, fix \(t \in [0,1]\).
\begin{align*}
\langle M_n \rangle_{\lfloor nt \rfloor} = \sum_{k=1}^{\lfloor nt \rfloor} \mathbb{E} \left[(M_{n,k}-M_{n,k-1})^2 \mid \mathcal{F}_{n,k-1} \right] &= \dfrac{1}{nL(a_n)} \sum_{k=1}^{\lfloor nt \rfloor} \mathbb{E} \left[(M_k-M_{k-1})^2 \mid \mathcal{F}_{k-1} \right] \\
&= \dfrac{1}{nL(a_n)} \sum_{k=1}^{\lfloor nt \rfloor} \mathbb{E} \left[(M_k-M_{k-1})^2\mathbbm{1}(|M_k-M_{k-1}|\leq a_{\lfloor nt \rfloor}) \mid \mathcal{F}_{k-1} \right] \\
&= \dfrac{\lfloor nt \rfloor L(a_{\lfloor nt \rfloor})}{nL(a_n)}(1+o_p(1)) \stackrel{p}{\longrightarrow} t,
\end{align*}
where in the last line we have used the assumption that \(L(a_{\lfloor nt \rfloor})/L(a_n) \longrightarrow 1\) as \(n \to \infty\).
Now fix \(\varepsilon \in (0,1)\). Then using the fact that \(nL(a_n)=a_n^2\), we obtain the following.
\begin{align*}
g_n(\varepsilon) &:= \sum_{k=1}^{n} \mathbb{E} \left[(M_{n,k}-M_{n,k-1})^2 \mathbbm{1}(|M_{n,k}-M_{n,k-1}|> \varepsilon)\mid \mathcal{F}_{n,k-1} \right] \\
&= \dfrac{1}{nL(a_n)} \sum_{k=1}^{n} \mathbb{E} \left[(M_k-M_{k-1})^2\mathbbm{1} \left(|M_k-M_{k-1}|> \varepsilon \sqrt{nL(a_n)} \right) \mid \mathcal{F}_{k-1} \right]\ \\
&= \dfrac{1}{nL(a_n)} \sum_{k=1}^{n} \mathbb{E} \left[(M_k-M_{k-1})^2\mathbbm{1} \left(|M_k-M_{k-1}| > \varepsilon a_n \right) \mid \mathcal{F}_{k-1} \right]\ \\
&= \dfrac{1}{nL(a_n)} \sum_{k=1}^{n} \mathbb{E} \left[(M_k-M_{k-1})^2\mathbbm{1} \left(|M_k-M_{k-1}| \leq a_n \right) \mid \mathcal{F}_{k-1} \right] \\
& \hspace{1 in} - \dfrac{1}{nL(a_n)} \sum_{k=1}^{n} \mathbb{E} \left[(M_k-M_{k-1})^2\mathbbm{1} \left(|M_k-M_{k-1}| \leq \varepsilon a_n \right) \mid \mathcal{F}_{k-1} \right] \\
&= (1+o_p(1)) - (1+o_p(1)) = o_p(1).
\end{align*}
Consider the linearly interpolated process
$$ W_n(t) = M_{n,\lfloor nt \rfloor} + \left( nt- \lfloor nt \rfloor \right) \left( M_{n,\lfloor nt \rfloor +1} - M_{n,\lfloor nt \rfloor}\right) = \dfrac{1}{\sqrt{nL(a_n)}} \left[ M_{\lfloor nt \rfloor} + \left( nt- \lfloor nt \rfloor \right) \left( M_{\lfloor nt \rfloor +1} - M_{\lfloor nt \rfloor}\right)\right].$$
Then by \textit{ Lindeberg's Martingale CLT}, we have \(W_n \stackrel{d}{\longrightarrow} W\) on \(C([0,1])\) where \(W \) is the standard BM on \([0,1]\).
\item
We have \(X_1, X_2, \ldots\) i.i.d.\ from a distribution which is symmetric around \(0\) and in the domain of attraction of the Normal distribution. Define \(S_n = \sum_{k=1}^n X_k\) and
consider the triangular MG sequence \(\left\{M_{n,l}, \mathcal{F}_{n,l}; n,l \geq 0\right\}\) defined as
$$ M_{n,l} := \dfrac{\sum_{k=1}^l X_k\mathbbm{1}(|X_k| \leq a_n)}{\sqrt{nL(a_n)}}, \; \mathcal{F}_{n,l} = \sigma(X_i : 1 \leq i \leq l), \; \forall \; n,l \geq 1,$$
with \(M_{n,0}:=0\). Here \(L(x) := \mathbb{E}(X_1^2 \mathbbm{1}(|X_1| \leq x))\) is slowly varying at \(\infty\), and \(a_n\) is such that
$$a_n \to \infty, n\mathbb{P}(|X_1|>a_n)=o(1), nL(a_n)=a_n^2, \text{ and } a_{\lfloor cn \rfloor} \sim a_n, \; \forall \, c>0.$$
Clearly, the symmetry of the common distribution around \(0\) guarantees that each row \(\left\{M_{n,l}, \mathcal{F}_{n,l}, l \geq 0\right\}\) is an \(L^2\)-MG.
Now, fix \(t \in [0,1]\).
\begin{align*}
\langle M_n \rangle_{\lfloor nt \rfloor} = \sum_{k=1}^{\lfloor nt \rfloor} \mathbb{E} \left[(M_{n,k}-M_{n,k-1})^2 \mid \mathcal{F}_{n,k-1} \right] &= \dfrac{1}{nL(a_n)} \sum_{k=1}^{\lfloor nt \rfloor} X_k^2\mathbbm{1}(|X_k| \leq a_n)= \dfrac{\lfloor nt \rfloor L(a_{\lfloor nt \rfloor})}{nL(a_n)}(1+o_p(1)) \stackrel{p}{\longrightarrow} t,
\end{align*}
where in the last line we have used the assumption that \(L(a_{\lfloor nt \rfloor})/L(a_n) \longrightarrow 1\) as \(n \to \infty\).
Now fix \(\varepsilon \in (0,1)\). Then using the fact that \(nL(a_n)=a_n^2\), we obtain the following.
\begin{align*}
g_n(\varepsilon) &:= \sum_{k=1}^{n} \mathbb{E} \left[(M_{n,k}-M_{n,k-1})^2 \mathbbm{1}(|M_{n,k}-M_{n,k-1}|> \varepsilon)\mid \mathcal{F}_{n,k-1} \right] \\
&= \dfrac{1}{nL(a_n)} \sum_{k=1}^{n} X_k^2 \mathbbm{1}(|X_k|\leq a_n) \mathbbm{1}(|X_k|> \varepsilon a_n)\ \\
&= \dfrac{1}{nL(a_n)} \sum_{k=1}^{n} X_k^2 \mathbbm{1}(|X_k|\leq a_n) - \dfrac{1}{nL(a_n)} \sum_{k=1}^{n} X_k^2 \mathbbm{1}(|X_k|\leq \varepsilon a_n) \\
&= (1+o_p(1)) - (1+o_p(1)) = o_p(1).
\end{align*}
Consider the linearly interpolated process
$$ W_n(t) = M_{n,\lfloor nt \rfloor} + \left( nt- \lfloor nt \rfloor \right) \left( M_{n,\lfloor nt \rfloor +1} - M_{n,\lfloor nt \rfloor}\right).$$
Then by \textit{ Lindeberg's Martingale CLT}, we have \(W_n \stackrel{d}{\longrightarrow} W\) on \(C([0,1])\) where \(W \) is the standard BM on \([0,1]\). Now consider another interpolated process,
$$ W^{*}_n(t) = \dfrac{1}{\sqrt{nL(a_n)}} \left[ S_{\lfloor nt \rfloor} + \left( nt- \lfloor nt \rfloor \right) \left( S_{\lfloor nt \rfloor +1} - S_{\lfloor nt \rfloor}\right)\right].$$
Then
\begin{align*}
\mathbb{P}\left( ||W_n-W_n^*||_{\infty} >0\right) &= \mathbb{P} \left(\exists \; 0 \leq l \leq n, \text{ such that } M_{n,l}\neq \dfrac{1}{\sqrt{nL(a_n)}}S_l \right) \\
&\leq \mathbb{P} \left(\exists \; 0 \leq l \leq n, \text{ such that } |X_l| > a_n\right) \\
&\leq n\mathbb{P}(|X_1|>a_n)=o(1).
\end{align*}
This shows \(W_n-W_n^{*}\stackrel{p}{\longrightarrow} 0\) and hence by \textit{Slutsky's Theorem}, we have \(W_n^{*}\stackrel{d}{\longrightarrow} W\). Note that if
$$ \widetilde{W}_n(t) := \dfrac{1}{V_n} \left[ S_{\lfloor nt \rfloor} + \left( nt- \lfloor nt \rfloor \right) \left( S_{\lfloor nt \rfloor +1} - S_{\lfloor nt \rfloor}\right)\right],$$
where \(V_n^2 = \sum_{k=1}^n X_k^2\), then to show \(\widetilde{W}_n \stackrel{d}{\longrightarrow} W\), it is enough to show that \(V_n^2/(nL(a_n)) \stackrel{p}{\longrightarrow} 1\). Set \(V_n^{*2} = \sum_{k=1}^n X_k^2\mathbbm{1}(|X_k| \leq a_n)\). Then
$$ \mathbb{P}(V_n \neq V_n^{*}) \leq \sum_{k=1}^n \mathbb{P}(|X_k|>a_n) = n\mathbb{P}(|X_1|>a_n)=o(1),$$
implying that \(V_n/V_n^{*} \stackrel{p}{\longrightarrow} 1\). Thus it is enough to show that \(V_n^{*2}/(nL(a_n)) \stackrel{p}{\longrightarrow} 1\), which is a given fact.
\end{enumerate}
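As a small illustrative aside to part (b), one can simulate i.i.d.\ symmetric variables with \(\mathbb{P}(|X_1|>x)=x^{-2}\) for \(x \geq 1\) (infinite variance, but truncated second moment \(\mathbb{E}[X_1^2\mathbbm{1}(|X_1|\leq x)]=2\log x\), which is slowly varying, so the law lies in the domain of attraction of the normal) and check that the self-normalized sums \(S_n/V_n\) look approximately standard normal. The sketch below assumes NumPy; the specific Pareto choice, seed and sample sizes are illustrative, and since the tail index sits exactly at the boundary the agreement is only approximate for moderate \(n\).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
reps, n = 1000, 5000
x = rng.pareto(2.0, size=(reps, n)) + 1.0       # |X| has P(|X| > t) = t^{-2}, t >= 1
x *= rng.choice([-1.0, 1.0], size=(reps, n))    # symmetrize around zero
t = x.sum(axis=1) / np.sqrt((x ** 2).sum(axis=1))   # self-normalized sums S_n / V_n
print("mean = %.3f, sd = %.3f (target: 0 and 1)" % (t.mean(), t.std()))
print("P(S_n/V_n <= 1.645) = %.3f (Phi(1.645) = 0.950)" % np.mean(t <= 1.645))
\end{verbatim}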
|
2011-q1 | Let $F_1 : (x,y) \to F_1(x,y)$, $F_2 : (y,z) \to F_2(y,z)$, $F_3 : (z,x) \to F_3(z,x)$ be three distribution functions on $\mathbb{R}^2$. Let $H(x,y,z) \equiv F_1(x,y)F_2(y,z)F_3(z,x)$. Prove that $H$ is a distribution function on $\mathbb{R}^3$. | Get $(X_1,X_2) \sim F_1, (X_3,X_4) \sim F_2$ and $(X_5,X_6) \sim F_3$ such that these three random vectors form an independent collection. Then define $Y_1 :=\max(X_1,X_6)$, $Y_2:= \max(X_2,X_3)$ and $Y_3=\max(X_4,X_5)$. Then for any $x,y,z \in \mathbb{R}$,
\begin{align*}
\mathbb{P}\left[ Y_1 \leq x, Y_2 \leq y, Y_3 \leq z\right] &= \mathbb{P}\left[ X_1 \leq x, X_2 \leq y, X_3 \leq y, X_4 \leq z, X_5 \leq z, X_6 \leq x\right] \\
&= \mathbb{P}\left[ X_1 \leq x, X_2 \leq y\right]\mathbb{P}\left[ X_3 \leq y, X_4 \leq z\right]\mathbb{P}\left[X_5 \leq z, X_6 \leq x\right] \\
&= F_1(x,y)F_2(y,z)F_3(z,x) = H(x,y,z).
\end{align*}
Thus $H$ is the distribution function of $(Y_1,Y_2,Y_3)$ and hence indeed a distribution function. |
2011-q2 | Suppose $Z_n$ is a non-decreasing sequence of finite valued, non-negative random variables such that $Z_n \to \infty$ almost surely. Show that then $ \lim \sup_n (\log Z_n / (\log \mathbb{E}Z_n)) \leq 1$ almost surely. | We have non-negative increasing sequence of almost surely finite random variables $\left\{Z_n \right\}_{n \geq 1}$ such that $Z_n \uparrow \infty$ almost surely. We need to show
$$ \limsup_{n \to \infty} \dfrac{\log Z_n}{\log \mathbb{E}Z_n} \leq 1, \; \text{almost surely}.$$\nWithout loss of generality we can assume that $a_n := \mathbb{E}Z_n$ is finite for all $n \geq 1$, because otherwise $a_n =\infty$ for all $n \geq N$, for some $N \in \mathbb{N}$ and therefore the concerned limsup will be $0$ almost surely.
\nSince $Z_n \uparrow \infty$ almost surely, by MCT we have $a_n \uparrow \infty$. Hence we can construct the subsequence $\left\{n_k \right\}_{k \geq 0}$ as follows. $n_0:=1$ and
$$ n_{l+1} := \inf \left\{j \geq n_l+1 : a_j > 2^{l+1} \right\}, \;\forall \; l \geq 0.$$\nFor any $\delta >0$,
\begin{align*}
\sum_{l \geq 1} \mathbb{P} \left[ \dfrac{\log Z_{n_{l+1}-1}}{\log \mathbb{E}Z_{n_l}} \geq 1+ \delta \right] = \sum_{l \geq 1} \mathbb{P} \left[ Z_{n_{l+1}-1} \geq a_{n_l}^{1+\delta} \right] \leq \sum_{l \geq 1} \dfrac{a_{n_{l+1}-1}}{a_{n_l}^{1+\delta}} \stackrel{(i)}{\leq} \sum_{l \geq 1} 2^{1-l\delta} < \infty,
\end{align*}
where $(i)$ holds true since either $n_{l+1}=n_l+1$ in which case the corresponding summand is at most $a_{n_l}^{-\delta} \leq 2^{-l\delta}$ or $n_{l+1}>n_l+1$ in which case $a_{n_{l+1}-1} \leq 2^{l+1}$ and hence the summand is at most $2^{l+1}2^{-l(1+\delta)}$.
Hence, by \textit{Borel-Cantelli Lemma}, we have
$$ \mathbb{P} \left[ \dfrac{\log Z_{n_{l+1}-1}}{\log \mathbb{E}Z_{n_l}} \geq 1+ \delta \textit{ infinitely often }\right] = 0 \Rightarrow \limsup_{l \to \infty} \dfrac{\log Z_{n_{l+1}-1}}{\log \mathbb{E}Z_{n_{l}}} \leq 1+\delta, \; \text{almost surely}.$$\nSince, this holds true for all $\delta >0$, we have\n\begin{equation} {\label{conv}}\n\limsup_{l \to \infty} \dfrac{\log Z_{n_{l+1}-1}}{\log \mathbb{E}Z_{n_{l}}} \leq 1, \; \text{almost surely}.
\end{equation}

For all $k \geq 1$, let $m(k) := \sup \left\{l \geq 0 : n_l \leq k \right\}$. Observe that $m(k) \longrightarrow \infty$ as $k \longrightarrow \infty$ and $n_{m(k)+1}-1 \geq k \geq n_{m(k)}$. Using the monotonicity of the sequences $\left\{Z_n\right\}_{n \geq 1}$ and $\left\{a_n\right\}_{n \geq 1}$ we get
$$ \limsup_{k \to \infty} \dfrac{\log Z_k}{\log \mathbb{E}Z_k} \leq
\limsup_{k \to \infty} \dfrac{\log Z_{n_{m(k)+1}-1}}{\log \mathbb{E}Z_{n_{m(k)}}} \leq 1, \; \text{almost surely},$$\nwhere the last assertion follows from (\ref{conv}). This completes the proof. |
2011-q3 | Let $X_i = \sum_{j=-m}^{m} w_j \varepsilon_{i+j}, \; i \geq 1$ be a moving average sequence with $\varepsilon_i, i > -m$ i.i.d. standard normal random variables and $\{w_j\}$ are non-random weights, such that $w_m \neq 0$ and $\sum_{j=-m}^{m} w_j^2 = 1$. (a) Describe the joint law of $\{X_i\}$ and show in particular that $X_i$ and $X_{i+k}$ are independent whenever $|k| > m$. (b) Setting $\eta(t) = \sum_{i \neq j} \mathbb{P}(X_i \geq t \mid X_j \geq t)$, show that $\eta(t) \to 0$ as $t \to \infty$. (c) Let $E_n = \sum_{i=1}^{n} 1(X_i \geq t_n)$, count the number of extremes above $t_n$, where the non-random $t_n$ are chosen such that $\lambda_n = \mathbb{E}(E_n)$ converges as $n \to \infty$ to some positive and finite constant $\lambda$. Show that then both $b_{1,n} = \sum_{i=1}^{n} \sum_{|j-i| \leq m} \mathbb{P}(X_i \geq t_n) \mathbb{P}(X_j \geq t_n)$, and $b_{2,n} = \sum_{i=1}^{n} \sum_{1 \leq j - i \leq m} \mathbb{P}(X_i \geq t_n, X_j \geq t_n)$, decay to zero as $n \to \infty$. Then, deduce from the Chen-Stein bound \[ \| \mathcal{L}(E_n) - \text{Poisson}(\lambda_n) \|_{\text{tv}} \leq 2(b_{1,n} + b_{2,n}), \] on the total variation distance between the law of such count $E_n$ and a Poisson random variable of matching mean $\lambda_n$, that in this setting $E_n$ converges in distribution to a Poisson($\lambda$) random variable (you are given the Chen-Stein bound and do not need to prove it). | \begin{enumerate}[label=(\alph*)]\n\item Since $X_i$s are linear combinations of Gaussian variables with non-random weights, the process $\left\{X_i\right\}_{i \geq 1}$ is clearly a Gaussian process. Therefore to describe the joint law, it is enough to describe the mean and covariance function.\n$$ \mu_i := \mathbb{E}X_i = \sum_{j =-m}^m w_j \mathbb{E} \varepsilon_{i+j} = 0, \;\; \forall \; i \geq 1,$$\nand for $1 \leq i \leq j$,\n\begin{align*}\n \sigma_{ij} := \operatorname{Cov} \left(X_i,X_j \right) &= \operatorname{Cov} \left(\sum_{k=-m+i}^{m+i} w_{k-i}\varepsilon_k, \sum_{k=-m+j}^{m+j} w_{k-j}\varepsilon_k \right)\\n & = \begin{cases}\n \sum_{k=-m+j}^{m+i} w_{k-i}w_{k-j} = \sum_{l=0}^{2m-|j-i|} w_{m-l}w_{m-|j-i|-l}, & \text{if } |j-i| \leq 2m,\\n0, & \text{if } |j-i| > 2m.\n\end{cases} \n\end{align*}\nIn particular, $\sigma_{ii}=\sum_{l=0}^{2m} w_{m-l}^2 =1$ and therefore $X_i \sim N(0,1)$ for all $i \geq 1$. Also, for $|k| >2m$, we have $X_i$ and $X_{i+k}$ are uncorrelated and hence independent.\n\n\n\item If $X$ and $Y$ are two standard Gaussian variables with correlation $\rho \in [0,1]$ then define $g(\rho, t) := \mathbb{P} \left( Y \geq t | X \geq t\right).$ Obviously, $Y | X=x \sim N(\rho x,1-\rho^2 )$. Thus,\n$$ g(\rho,t) = \dfrac{\int_{t}^{\infty} \bar{\Phi}\left[(1-\rho^2)^{-1/2}(t-\rho x) \right] \phi(x) \,dx}{\bar{\Phi}(t)} .$$\nObserve that, $\eta(t) := \sup_{i \neq j} \mathbb{P} \left( X_i \geq t | X_j \geq t\right) = \max_{j=2}^{2m+2} g(\sigma_{1j},t)$. This is because the process $X$ is stationary and $\sigma_{ij}=0$ for $|j-i| \geq 2m+1$. Also note that for $j >1$, using the fact that $w_m \neq 0$, we can conclude\n\begin{align*}\n\sigma_{1j}^2 = \left(\sum_{l=0}^{2m-|j-1|} w_{m-l}w_{m-|j-1|-l} \right)^2 & \leq \left( \sum_{l=0}^{2m-|j-1|} w_{m-l}^2 \right) \left( \sum_{l=0}^{2m-|j-1|} w_{m-l-|j-1|}^2 \right) \\
& \leq \sum_{l=0}^{2m-|j-1|} w_{m-l-|j-1|}^2 \\
& = \sum_{l=j-1}^{2m} w_{m-l}^2 \leq 1-w_m^2 <1.\n\end{align*} \nTherefore in order to show that $\eta(t) \to 0$, it is enough to show that $g(\rho,t) \to 0$ as $t \to \infty$, for $\rho \in [0,1)$. Fix $\rho \in [0,1)$ and $c \in (1,1/\rho)$.\n\begin{align*}\n\bar{\Phi}(t) g(\rho,t) &= \int_{t}^{ct} \bar{\Phi}\left[(1-\rho^2)^{-1/2}(t-\rho x) \right] \phi(x) \,dx + \int_{ct}^{\infty} \bar{\Phi}\left[(1-\rho^2)^{-1/2}(t-\rho x) \right] \phi(x) \,dx \\
& \leq \bar{\Phi}\left[(1-\rho^2)^{-1/2}(t-\rho ct) \right] \int_{t}^{ct} \phi(x) \,dx + \int_{ct}^{\infty} \phi(x) \,dx \\
& \leq \bar{\Phi}\left[(1-\rho^2)^{-1/2}(t-\rho ct) \right]\bar{\Phi}(t) + \bar{\Phi}(ct).\n\end{align*} \nHence, \n$$ g(\rho,t) \leq \bar{\Phi}\left[(1-\rho^2)^{-1/2}(1-\rho c)t \right] + \bar{\Phi}(ct)/\bar{\Phi}(t).$$\nSince, $\rho c <1$, we have $ \bar{\Phi}\left[(1-\rho^2)^{-1/2}(1-\rho c)t \right] \longrightarrow 0$ as $t \longrightarrow \infty$. On the otherhand, by \textit{L'Hospital's Rule},\n$$ \lim_{t \to \infty} \dfrac{\bar{\Phi}(ct)}{\bar{\Phi}(t)} = \lim_{t \to \infty} \dfrac{c\phi(ct)}{\phi(t)} = \lim_{t \to \infty} c\exp(-t^2(c^2-1)/2) =0.$$\nThis completes the proof.\n\n\n\item We have $E_n = \sum_{i=1}^n \mathbbm{1}_{(X_i \geq t_n)}$. This gives, $\lambda_n = \mathbb{E}(E_n) = n \bar{\Phi}(t_n) \to \lambda \in (0,\infty)$ as $n \to \infty$. Clearly, $\bar{\Phi}(t_n) \to 0$ and hence $t_n \to \infty$ as $n \to \infty$.\n\n$$ b_{1,n} := \sum_{i=1}^n \sum_{j:|j-i| \leq 2m} \mathbb{P}(X_i \geq t_n) \mathbb{P}(X_j \geq t_n) = \sum_{i=1}^n \sum_{j:|j-i| \leq 2m} \bar{\Phi}(t_n)^2 \leq (4m+1)n\bar{\Phi}(t_n)^2 \longrightarrow 0,$$\nand\n$$ b_{2,n} := \sum_{i=1}^n \sum_{j:1 \leq |j-i| \leq 2m} \mathbb{P}(X_i \geq t_n, X_j \geq t_n) \leq \sum_{i=1}^n \sum_{j:1 \leq |j-i| \leq 2m} \eta(t_n)\bar{\Phi}(t_n) \leq 4mn\bar{\Phi}(t_n)\eta(t_n) \longrightarrow 0,$$\nwhere the convergences follows from the fact that $n \bar{\Phi}(t_n)=O(1)$, and $\eta(t_n)=o(1)$ (from part (b)). From \textit{Chen-Stein bound},\n$$ || \mathcal{L}(E_n) - \text{Poisson}(\lambda_n)||_{TV} \leq 2(b_{1,n}+b_{2,n}) =o(1).$$\nSince, $\lambda_n \to \lambda$, we have $\mathbb{P}(\text{Poisson}(\lambda_n)=j) \to \mathbb{P}(\text{Poisson}(\lambda)=j)$ and therefore by \textit{Scheffe's Lemma},\n$$ || \text{Poisson}(\lambda_n) - \text{Poisson}(\lambda)||_{TV} = \dfrac{1}{2}\sum_{j \in \mathbb{Z}} |\mathbb{P}(\text{Poisson}(\lambda_n)=j) - \mathbb{P}(\text{Poisson}(\lambda)=j)| =o(1).$$\n\textit{Triangle inequality} for \textit{Total Variation Distance} now implies that $|| \mathcal{L}(E_n) - \text{Poisson}(\lambda)||_{TV} =o(1)$, which implies $E_n \stackrel{d}{\longrightarrow} \text{Poisson}(\lambda).$\n\end{enumerate} |
2011-q4 | Suppose $\{X_n\}_{n \in \mathbb{N}}$ is a sequence of random variables taking values in $\mathbb{Z}$. (a) Assume that $\{X_n\}_{n \in \mathbb{N}}$ converges in distribution to a random variable $X$. Show that $\mathbb{P}(X \in \mathbb{Z}) = 1$. (b) Prove that $\{X_n\}_{n \in \mathbb{N}}$ converges in distribution if and only if there exists a sequence of numbers $\{p_j\}_{j \in \mathbb{Z}}, p_j \in [0, 1]$ such that $\sum_{j \in \mathbb{Z}} p_j = 1$ and $\lim_{n \to \infty} \mathbb{P}(X_n = j) = p_j$ for all $j$. (c) Assume that there exists $\varepsilon > 0$ and a sequence $\{a_n\}_{n \in \mathbb{N}}$ such that $a_n \to \infty$ and $\mathbb{P}(X_n = a_n) \geq \varepsilon$. Prove that $\{X_n\}_{n \in \mathbb{N}}$ does not converge in distribution. | \begin{enumerate}[label=(\alph*)]\n\item Since $X_n \stackrel{d}{\longrightarrow} X$, $\mathbb{P}(X_n \in \mathbb{Z})=1$ for all $n \geq 1$ and $\mathbb{Z}$ is closed in $\mathbb{R}$, by \textit{Portmanteau Theorem} we have\n$$ 1= \limsup_{n \to \infty} \mathbb{P}(X_n \in \mathbb{Z}) \leq \mathbb{P}(X \in \mathbb{Z}).$$\nHence $\mathbb{P}(X \in \mathbb{Z}) =1$.\n \n\n\item Suppose $X_n \stackrel{d}{\longrightarrow} X$. Then by part (a), $X$ takes values in $\mathbb{Z}$ almost surely. Set $p_j := \mathbb{P}(X =j) \in [0,1]$. Clearly, $\sum_{j \in \mathbb{Z}} p_j =1$. Since, $(j-1/2,j+1/2)$ is Borel set with $\partial(j-1/2,j+1/2) = \left\{j-1/2,j+1/2\right\}$, we have $\mathbb{P}(X \in \partial(j-1/2,j+1/2)) =0$. Hence, by \textit{Portmanteau Theorem}, we have\n$$ \lim_{n \to \infty} \mathbb{P}(X_n =j) = \lim_{n \to \infty} \mathbb{P}(X_n \in (j-1/2,j+1/2)) = \mathbb{P}(X \in (j-1/2,j+1/2)) = \mathbb{P}(X =j) =p_j.$$\nThis proves the only if part. \n\nNow suppose we have $\left\{p_j\right\}_{j \geq 1} \subseteq [0,1]$ with $\sum_{j \in \mathbb{Z}} p_j =1$ and $\mathbb{P}(X_n =j) \to p_j$ as $n \to \infty$, for all $j \in \mathbb{Z}$. Let $p_{n,j} := \mathbb{P}(X_n=j)$, for all $n,j$; and $X$ be the random variable with $\mathbb{P}(X =j)=p_j$ for all $j$. Then\n\begin{align*}\n\sum_{j \in \mathbb{Z}} |p_{n,j}-p_j| = \sum_{j \in \mathbb{Z}} (p_{n,j}+p_j) - 2\sum_{j \in \mathbb{Z}} \min(p_{n,j},p_j) = 2- 2\sum_{j \in \mathbb{Z}} \min(p_{n,j},p_j).\n\end{align*}\nBy \textit{Fatou's Lemma}\n$$ \liminf_{n \to \infty} \sum_{j \in \mathbb{Z}} \min(p_{n,j},p_j) \geq \sum_{j \in \mathbb{Z}} \liminf_{n \to \infty} \min(p_{n,j},p_j) = \sum_{j \in \mathbb{Z}} p_j = 1.$$\nTherefore, \n$$ \limsup_{n \to \infty} \sum_{j \in \mathbb{Z}} |p_{n,j}-p_j| = 2- 2 \liminf_{n \to \infty} \sum_{j \in \mathbb{Z}} \min(p_{n,j},p_j) \leq 0.$$\nThis for any $x \in \mathbb{R}$,\n$$ |\mathbb{P}(X_n \leq x) -\mathbb{P}(X \leq x) | = \left|\sum_{j \in \mathbb{Z}, j \leq x} p_{n,j} - \sum_{j \in \mathbb{Z}, j \leq x} p_{j} \right| \leq \sum_{j \in \mathbb{Z}} |p_{n,j}-p_j| \longrightarrow 0. $$\nThis shows the if part.\n\n\n\item Suppose $X_n$ converges in distribution to the random variable $X$. Fix $M<\infty$. By hypothesis, $a_n \geq M$ for all $n \geq N$, for some $N \in \mathbb{N}$ and therefore $\mathbb{P}(X_n \geq M) \geq \mathbb{P}(X_n=a_n) \geq \varepsilon$, for all $n \geq M$. Since, $[M,\infty)$ is a closed set, by \textit{Portmanteau Theorem},\n$$ \mathbb{P}(X \geq M) \geq \limsup_{n \geq \infty} \mathbb{P}(X_n \geq M) \geq \varepsilon.$$\nThus $\mathbb{P}(X \geq M) \geq \varepsilon$ for all $M \in \mathbb{R}$; which implies $\mathbb{P}(X=\infty) \geq \varepsilon$ contradicting to $X$ being a (proper) random variable. \n\end{enumerate} |
2011-q5 | Suppose $\{X_k\}_{k \geq 1}$ is a martingale and $\{b_n\}_{n \geq 1}$ is a non-negative, non-decreasing, unbounded sequence of constants, such that $\sum_k \mathbb{E}[(X_k - X_{k-1})^2]/b_k^2$ is finite. Show that then $X_n / b_n \to 0$ almost surely. | We have $\left\{X_n, \mathcal{F},n \geq 0\right\}$ to be a MG and $\left\{b_n : n \geq 1\right\}$ to be a non-negative, non-decreasing, unbounded sequence such that $ \sum_{k \geq 1} \mathbb{E}(X_k-X_{k-1})^2/b_k^2 < \infty$.
\nSet $Y_0 := 0$ and $Y_{n} := Y_{n-1} + (X_{n}-X_{n-1})/b_n$, for all $n \geq 1$. Clearly $\left\{Y_n, \mathcal{F},n \geq 0\right\}$ is a MG with predictable compensator process
$$ \left\langle Y \right\rangle _n = Y_0^2 + \sum_{k=1}^n \mathbb{E} \left[ \dfrac{(X_k-X_{k-1})^2}{b_k^2} \bigg \rvert \mathcal{F}_{k-1}\right]. $$\nBy hypothesis,
$$ \mathbb{E} \left\langle Y \right\rangle _{\infty} = \sum_{k=1}^{\infty} \mathbb{E} \left[ \dfrac{(X_k-X_{k-1})^2}{b_k^2} \right] < \infty. $$\nHence, $\sup_{n \geq 0} \mathbb{E}Y_n^2 = \sup_{n \geq 0} \mathbb{E}\left\langle Y \right\rangle_n = \mathbb{E} \left\langle Y \right\rangle _{\infty} < \infty.$ By \textit{Doob's $L^p$ Martingale Convergence Theorem}, we have $Y_n \longrightarrow Y_{\infty}$ almost surely, for some $Y_{\infty} \in L^2$. This yields that $Y_{\infty} = \sum_{n \geq 1} (X_n-X_{n-1})/b_n$ is finite almost surely. Since, the sequence $\left\{b_n\right\}$ is non-negative, non-decreasing and unbounded, use \textit{Kronecker's Lemma} to conclude that $\left[ \sum_{k=1}^n (X_k-X_{k-1})\right]/b_n = (X_n-X_0)/b_n \longrightarrow 0$, almost surely. Since, $b_n \uparrow \infty$, we finally get $X_n/b_n \longrightarrow 0$, almost surely. |
2011-q6 | Set $Z_i(t) = \mu_i t + W_i(t)$, $i = 1, 2$, for non-random $\mu_1, \mu_2$ and independent, standard Brownian motions $W_1(t)$ and $W_2(t)$ (so $W_i(0) = W_2(0) = 0$). Fixing non-random $b > 0$ and $m$, consider the stopping time $T = \inf\{t \geq 0 : Z_2(t) = b - m Z_1(t)\}$. (a) Find the values of $r$ and $c$ such that $T$ has the same law as $T_{r,c} = \inf\{t \geq 0 : W_i(t) + r t \geq c\}$. (b) For which values of $\mu_1, \mu_2, m$ and $b > 0$ is $\mathbb{E}(T)$ finite? (justify your answer). (c) Evaluate $\mathbb{E}(T)$ and the Laplace transform $\mathbb{E}(\exp(-\lambda T))$ for non-random $\lambda > 0$. | \begin{enumerate}[label=(\alph*)]\n\item Let $Y(t) =(1+m^2)^{-1/2}(W_2(t)+mW_1(t)),$ for all $t \geq 0$. Clearly $Y$ is a Gaussian mean-zero process with $Y_0=0$ and covariance function
$$ \operatorname{Cov}\left( Y(t),Y(s)\right) = (1+m^2)^{-1}\operatorname{Cov}\left( W_2(t)+mW_1(t),W_2(s)+mW_1(s)\right) = t \wedge s.$$\nThus $Y$ is a standard Brownian Motion. Set $r :=(1+m^2)^{-1/2}(\mu_2+m\mu_1)$ and $c := (1+m^2)^{-1/2}b >0$.
\begin{align*}\nT = \inf \left\{t \geq 0 : Z_2(t)=b-mZ_1(t)\right\} &= \inf \left\{t \geq 0 : W_2(t)+mW_1(t)=b-\mu_2t-m\mu_1t\right\} \\
&= \inf \left\{t \geq 0 : Y(t)=c-rt\right\} \\
& \stackrel{(ii)}{=} \inf \left\{t \geq 0 : Y(t) \geq c-rt\right\} \\
& \stackrel{d}{=} \inf \left\{t \geq 0 : W_1(t) \geq c-rt\right\} = T_{r,c},\n\end{align*}\nwhere $(ii)$ is true since $c>0$.\n\item
\begin{lemma}\n$\mathbb{E}(T_{r,c}) = c/r_{+}, \; \forall \; c>0, r \in \mathbb{R}.$\n\end{lemma}\n\begin{proof}\nFirst consider $r>0$. Since, $\limsup_{t \to \infty} W_1(t)=\infty$, almost surely, we have $\mathbb{P}(T_{r,c}< \infty) =1$. Let $\mathcal{F}_t := \sigma(W_1(s):0 \leq s \leq t).$ Then $\left\{W_1(t), \mathcal{F}_t, t \geq 0\right\}$ is a MG with $T_{r,c}$ being $\mathcal{F}$-stopping time. Hence, by \textit{Optional Stopping theorem}, for all $t \geq 0$ we have $\mathbb{E} \left[W_1(T_{r,c} \wedge t) \right] = \mathbb{E}W_1(0) =0. $ By definition, $W_1(T_{r,c} \wedge t) + r(T_{r,c}\wedge t) \leq c$ and hence, $\mathbb{E}\left[r(T_{r,c}\wedge t)\right] =\mathbb{E} \left[ W_1(T_{r,c} \wedge t) + r(T_{r,c}\wedge t)\right] \leq c.$ Applying MCT as we take $t \uparrow \infty$, we get $\mathbb{E}T_{r,c} \leq c/r < \infty$. On the otherhand, since $\left\{W_1(t)^2-t, \mathcal{F}_t, t \geq 0\right\}$ is a MG, applying OST we have $\mathbb{E} \left[W_1(T_{r,c}\wedge t) \right]^2 = \mathbb{E} (T_{r,c} \wedge t) \leq c/r.$ The continuous MG $\left\{W_1(T_{r,c} \wedge t), \mathcal{F}_t, t\geq 0\right\}$ is thus $L^2$-bounded and hence by \textit{Doob's $L^p$ Martingale convergence theorem} and sample-path-continuity of $W_1$, we have $W_1(T_{r,c} \wedge t)$ converges to $W_1(T_{r,c})$ almost surely and in $L^2$. In particular, $\mathbb{E}W_1(T_{r,c}) =0$. Due to sample-path-continuity of $W_1$ we have $W_1(T_{r,c}) + rT_{r,c}=c$. Therefore, $\mathbb{E}T_{r,c} = c/r$.\n\nIt is obvious from the definition that for $r_1 \leq r_2$, we have
$\left\{t \geq 0 : W_1(t) +r_1t\geq c\right\} \subseteq \left\{t \geq 0 : W_1(t)+r_2t \geq c\right\} $ and hence $T_{r_1,c} \geq T_{r_2,c}$. Take $r \leq 0$; by the previous observation $\mathbb{E}T_{r,c} \geq \sup_{s >0} \mathbb{E}T_{s,c} = \sup_{s >0} c/s = \infty.$ This completes the proof of the lemma.
\end{proof}

From the above lemma and part (a), we can conclude that $\mathbb{E}(T)$ is finite if and only if $r>0$, which is equivalent to $\mu_2+m\mu_1>0$. The answer does not depend on $b$ as long as $b >0$.
\item From part (b), we have that $\mathbb{E}T = c/r_{+} = b/(\mu_2+m\mu_1)_{+}$.

To find the Laplace transform, take $\theta > \max(-2r,0)$. Consider the exponential MG $\left\{ \exp(\theta W_1(t)-\theta^2t/2), \mathcal{F}_t, t \geq 0\right\}$. By \textit{Optional Stopping Theorem}, $\mathbb{E}\exp \left[\theta W_1(T_{r,c} \wedge t)-\theta^2(T_{r,c} \wedge t)/2 \right] =1.$ By definition,
$$ \exp \left[\theta W_1(T_{r,c} \wedge t)-\theta^2(T_{r,c} \wedge t)/2 \right] \leq \exp \left[\theta c -r \theta(T_{r,c} \wedge t)-\theta^2(T_{r,c} \wedge t)/2 \right] \leq \exp(c\theta),$$
for all $t \geq 0$, since $\theta>0$ and $\theta r+\theta^2/2 >0$. From the above inequalities, it is clear that $\exp \left[\theta W_1(T_{r,c} \wedge t)-\theta^2(T_{r,c} \wedge t)/2 \right]$ converges to $0$ almost surely on the event $(T_{r,c}=\infty)$. Use this observation, sample path continuity of $W_1$ and apply the DCT (with the constant bound $\exp(c\theta)$) to conclude
$$\mathbb{E} \left[ \exp \left[\theta W_1(T_{r,c} )-\theta^2(T_{r,c})/2 \right] \mathbbm{1}(T_{r,c}< \infty) \right]=1.$$\nSince, on $(T_{r,c}<\infty)$ we have $W_1(T_{r,c})+rT_{r,c}=c$ almost surely, we can write $$\mathbb{E} \left[ \exp\left( \theta c-(r\theta+\theta^2/2)T_{r,c}\right)\mathbbm{1}(T_{r,c}< \infty)\right] =1,$$ which is equivalent to $\mathbb{E} \exp \left[-(r\theta+\theta^2/2)T_{r,c} \right] = \exp(-c\theta ),$ since $r+\theta/2 >0$. \n\nNow fix $\lambda >0$. Solve the quadratic equation $\lambda =r\theta +\theta^2/2$ for $\theta$. The solutions are $-r \pm \sqrt{r^2+2\lambda}.$ The solution $\theta^{*} = \sqrt{r^2+2\lambda} -r$ is clearly at most $\max(-2r,0)$ since $\sqrt{r^2+2\lambda} > |r|.$ Plugging in this value for $\theta$, we obtain
$$ \mathbb{E} \exp(-\lambda T_{r,c}) = \exp \left[ -c\theta^{*}\right] = \exp \left[ -c\sqrt{r^2+2\lambda}+cr\right].$$\nPlugging in the values for $r$ and $c$ from part (a) we obtain for all $b>0, \lambda>0,\mu_1,\mu_2,m$ we have
$$ \mathbb{E} \exp(-\lambda T) = \exp \left[ -b\sqrt{\dfrac{(\mu_2+m\mu_1)^2}{(1+m^2)^2}+
\dfrac{2\lambda}{1+m^2}}+\dfrac{b(\mu_2+m\mu_1)}{1+m^2}\right].$$\n\end{enumerate} |
2012-q1 | Fixing positive integer n and i.i.d. U_k, k=0,1,2,..., each of which is uniformly distributed on {0,1,2,...,n-1}, let N_n = \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} I_{ij}, where for any i,j \in {0,1,...,n-1}, I_{ij} = 1_{\{U_i+U_j \equiv U_{(i+j) \bmod n} \pmod{n}\}} (here both the indices and the value are taken mod n). (a) Compute EN_n and deduce that \mathbb{P}(N_n \geq \theta n) \leq 1/\theta for any \theta > 0. (b) Prove that there exists finite C such that \mathbb{P}(N_n \geq \theta n) \leq C\theta^{-2}, for all n and any \theta > 0. | We shall write $\oplus$ to denote addition modulo $n$. We have $U_0,U_1, \ldots \stackrel{iid}{\sim} \text{Uniform}(\mathbb{Z}_n)$, where $\mathbb{Z}_n := \left\{0,1,\ldots, n-1\right\}$.
\begin{enumerate}[label=(\alph*)]
\item Take any $i \neq j \in \mathbb{Z}_n$. Notice the following three facts.
\begin{itemize}
\item $0 \oplus (-U_i) \sim \text{Uniform}(\mathbb{Z}_n)$.
\item $U_i \oplus U_j \sim \text{Uniform}(\mathbb{Z}_n)$.
\item $U_i \oplus U_j \perp\!\!\!\perp U_k, U_i \oplus (-U_j) \perp\!\!\!\perp U_k$, for all $k \geq 0$.
\end{itemize}
The first fact follows directly from the definition. To prove the first part of the third fact, it is enough to show that $ U_i \oplus U_j \perp\!\!\!\perp U_i$. This is true since,
$$ \mathbb{P}(U_i \oplus U_j =k, U_i =l) = \mathbb{P}(U_i=l, U_j = k \oplus (-l)) = \dfrac{1}{n^2}, \; \forall \; k,l \in \mathbb{Z}_n. $$
Summing over $l$ we prove the second fact and as a result conclude $U_i \oplus U_j \perp\!\!\!\perp U_i$. The proof of the second part of the third fact follows similarly.
From the second and third fact, we get that for any $i \neq j \in \mathbb{Z}_n$,
$$ \mathbb{E}(I_{ij}) = \mathbb{P}(U_i \oplus U_j = U_{i \oplus j}) = \sum_{k=0}^{n-1} \mathbb{P}(U_i \oplus U_j =k = U_{i \oplus j}) = \sum_{k=0}^{n-1} \dfrac{1}{n^2}= \dfrac{1}{n}.$$
For $i \neq 0$, we have $2i \neq i$ and hence
$$ \mathbb{E}(I_{ii}) = \mathbb{P}(2U_i=U_{2i}) = \sum_{k=0}^{n-1}\mathbb{P}(U_{2i}=2k \mid U_{i}=k)\mathbb{P}(U_{i}=k) = \sum_{k=0}^{n-1}\mathbb{P}(U_{2i}=2k)\mathbb{P}(U_{i}=k) = \sum_{k=0}^{n-1} \dfrac{1}{n^2} = \dfrac{1}{n}.$$
Finally, $\mathbb{E}(I_{00}) = \mathbb{P}(2U_{0}=U_{0}) = \mathbb{P}(U_0 = 0)=1/n$. Thus $\mathbb{E}(I_{ij})=1/n$ for all $i,j$. Hence,
$$ \mathbb{E}(N_n) = \sum_{i,j} \mathbb{E}(I_{ij}) = \dfrac{n^2}{n}=n.$$
Using \textit{Markov's Inequality}, for any $\theta >0$,
$$ \mathbb{P}(N_n \geq \theta n) \leq \dfrac{\mathbb{E}(N_n)}{\theta n} = \dfrac{n}{\theta n} = \dfrac{1}{\theta}.$$
\item Using \textit{Markov's Inequality},
$$ \mathbb{P}(N_n \geq \theta n) \leq \dfrac{\mathbb{E}(N_n^2)}{\theta^2 n^2} = \dfrac{\operatorname{Var}(N_n)+n^2}{\theta^2 n^2} = \left( n^{-2} \operatorname{Var}(N_n) + 1\right)\theta^{-2}.$$
Hence it is enough to show that $\operatorname{Var}(N_n)=O(n^2)$.
Consider the following collection.
$$ \mathcal{C}_n := \left\{ (i_1,j_1,i_2,j_2) \mid 0 \leq i_1,j_1, i_2, j_2 \leq n-1; \left\{i_1,j_1, i_1 \oplus j_1 \right\} \cap \left\{i_2,j_2, i_2 \oplus j_2 \right\} = \emptyset\right\}.$$
It is evident that $(i_1,j_1,i_2,j_2) \in \mathcal{C}_n$ implies $I_{i_1,j_1}$ and $I_{i_2,j_2}$ are independent. Let us now give a lower bound on the cardinality of $\mathcal{C}_n$. There are $n^2$ many choices for the pair $(i_1,j_1)$. For each such choice there are at least $(n-3)$ many choices for $i_2$ (excluding $i_1,j_1, i_1 \oplus j_1$) and for each such choice of $i_2$, we have at least $(n-6)$ many choices for $j_2$ (excluding $i_1,j_1, i_1 \oplus j_1, i_1 \oplus (-i_2), j_1 \oplus (-i_2), i_1 \oplus j_1 \oplus (-i_2)$). Thus $|\mathcal{C}_n| \geq n^2(n-3)(n-6) = n^4 - 9n^3 + 18n^2$. Therefore,
\begin{align*}
\operatorname{Var}(N_n) &= \sum_{i,j=0}^{n-1} \operatorname{Var}(I_{ij}) + \sum_{(i_1,j_1) \neq (i_2,j_2)} \operatorname{Cov}(I_{i_1,j_1},I_{i_2,j_2}) \\
&\leq n^2\dfrac{1}{n}\left(1- \dfrac{1}{n}\right) + \sum_{(i_1,j_1,i_2,j_2) \in \mathbb{Z}_{n}^4 \setminus \mathcal{C}_n } \operatorname{Cov}(I_{i_1,j_1},I_{i_2,j_2}) \\
& \leq n-1 + \sum_{(i_1,j_1,i_2,j_2) \in \mathbb{Z}_{n}^4 \setminus \mathcal{C}_n } \mathbb{E}(I_{i_1,j_1}I_{i_2,j_2}) \\
& \leq n-1 + \sum_{(i_1,j_1,i_2,j_2) \in \mathbb{Z}_{n}^4 \setminus \mathcal{C}_n } \mathbb{E}(I_{i_1,j_1}) \\
& \leq n-1 + \dfrac{n^4-|\mathcal{C}_n|}{n} \leq n + \dfrac{9n^3}{n} = n+9n^2 = O(n^2).
\end{align*}
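As an illustrative aside, both $\mathbb{E}(N_n)=n$ and the boundedness of $\operatorname{Var}(N_n)/n^2$ can be checked by direct simulation. The following Python sketch assumes NumPy and is only a sanity check; the choices of $n$ and the number of replications are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_N(n):
    u = rng.integers(0, n, size=n)                 # U_0,...,U_{n-1} iid Uniform(Z_n)
    idx = np.arange(n)
    lhs = (u[:, None] + u[None, :]) % n            # U_i + U_j (mod n)
    rhs = u[(idx[:, None] + idx[None, :]) % n]     # U_{(i+j) mod n}
    return int((lhs == rhs).sum())                 # N_n = sum of the indicators I_{ij}

n, reps = 50, 2000
samples = np.array([sample_N(n) for _ in range(reps)])
print("estimated E(N_n) =", samples.mean(), " (exact value:", n, ")")
print("estimated Var(N_n)/n^2 =", samples.var() / n ** 2)   # stays bounded, cf. part (b)
\end{verbatim}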
\end{enumerate} |
2012-q2 | Let \bar{B}(t) be a standard Brownian bridge on [0,1]. Express \mathbb{P}(\bar{B}(t) < 1-t \ \forall t \in [0,1/4]) in terms of the cumulative distribution function of a standard Gaussian. Hint: What kind of process is (1+s)\bar{B}(s/(1+s))? | Define the new process $W(s) := (1+s)\widetilde{B}(s/(1+s))$, for all $s \geq 0$. Notice that $\widetilde{B}(t) = (1-t)W(t/(1-t))$, for all
$t \in [0,1) $. Since $t \mapsto t/(1-t)$ is strictly increasing on $[0,1)$, we have the following.
\begin{align*}
\mathbb{P} \left[ \widetilde{B}(t) < 1-t, \; \forall \; t \in [0,1/4]\right] &= \mathbb{P} \left[ (1-t)W(t/(1-t)) < 1-t, \; \forall \; t \in [0,1/4]\right] \\
&= \mathbb{P} \left[ W(t/(1-t)) < 1, \; \forall \; t \in [0,1/4]\right] \\
&= \mathbb{P} \left[ W(s) < 1, \; \forall \; s \in [0,1/3]\right].
\end{align*}
Since $\widetilde{B}$ is a mean-zero Gaussian process with continuous sample paths, the same is clearly true for the process $W$, and
\begin{align*}
\operatorname{Cov}(W(t),W(s)) &= (1+s)(1+t) \operatorname{Cov}\left[\widetilde{B}(t/(1+t)),\widetilde{B}(s/(1+s))\right] \\
& = (1+s)(1+t) \left[ \left(\dfrac{s}{1+s}\right)\wedge \left(\dfrac{t}{1+t}\right)-\dfrac{st}{(1+s)(1+t)}\right] \\
&= (s(1+t)) \wedge (t(1+s)) -st \\
& = (s \wedge t + st)-st = s\wedge t.
\end{align*}
Therefore, $W$ is the standard Brownian Motion. Using \textit{Reflection Principle}, we get
\begin{align*}
\mathbb{P} \left[ \widetilde{B}(t) < 1-t, \; \forall \; t \in [0,1/4]\right] &= \mathbb{P} \left[ W(s) < 1, \; \forall \; s \in [0,1/3]\right] \\
&= 1- \mathbb{P} \left[ \sup_{0 \leq s \leq 1/3} W(s) \geq 1\right] \\
&= 1- 2\mathbb{P}(W(1/3) \geq 1) = 1- 2\bar{\Phi}(\sqrt{3}) = 2\Phi(\sqrt{3})-1.
\end{align*} |
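For illustration only, the value $2\Phi(\sqrt{3})-1 \approx 0.917$ can be checked by simulating the Brownian bridge on a fine grid. The sketch below assumes NumPy and SciPy; the grid discretization makes the estimate approximate (crossings between grid points can be missed, biasing it slightly upward), and the number of replications and steps are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
reps, steps = 5000, 2000
t = np.linspace(0.0, 1.0, steps + 1)
mask = t <= 0.25
hits = 0
for _ in range(reps):
    w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / steps), steps))))
    bridge = w - t * w[-1]                 # standard Brownian bridge on [0,1]
    hits += np.all(bridge[mask] < 1.0 - t[mask])
print("simulated probability:", hits / reps)
print("2*Phi(sqrt(3)) - 1   =", 2 * norm.cdf(np.sqrt(3)) - 1)
\end{verbatim}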
2012-q3 | Consider Polya’s urn, where one starts with two balls in an urn, one labeled 0 and one labeled 1. Each time a ball is chosen uniformly from the urn and replaced by itself and another ball of the same label. Let R_n = 1 + \sum_{i=1}^n J_i where J_i denotes the label of the i-th chosen ball. (a). Prove that R_n is uniformly distributed on {1,...,n+1} and that M_n = (n+2)^{-1}R_n converges a.s. to M_{\infty} which is uniformly distributed on [0,1]. (b). Provide an explicit upper bound, strictly less than 1, on \mathbb{P}(\sup_{n\geq 0} M_n > y) first for y > 1/2 and then (harder!) for y = 1/2 (where exchangeability of {J_i} might be handy). | Note that $R_n = 1+\sum_{i=1}^n J_i$ is the number of balls in the urn labelled $1$ after the $n$-th draw. Set $\mathcal{F}_n := \sigma (J_i : 1\leq i \leq n)$, with $\mathcal{F}_0$ being the trivial $\sigma$-algebra. Then $J_n \mid \mathcal{F}_{n-1} \sim \text{Ber} \left(R_{n-1}/(n+1) \right)$, since just before the $n$-th draw the urn contains $(n+1)$ balls in total, of which $R_{n-1}$ are labelled $1$.
\begin{enumerate}[label=(\alph*)]
\item We prove this via induction. Since, $J_1 \sim \text{Ber}(1/2)$, we clearly have $R_1 \sim \text{Uniform}(\left\{1,2 \right\}) $. Suppose $R_m \sim \text{Uniform}(\left\{1,\ldots,m+1 \right\}). $ Then
for all $k=2, \ldots, m+2$, we have
\begin{align*}
\mathbb{P}(R_{m+1}=k) &= \mathbb{P}(R_{m+1}=k \mid R_m=k-1)\mathbb{P}( R_m=k-1) + \mathbb{P}(R_{m+1}=k \mid R_m=k) \mathbb{P}(R_m=k) \\
&= \mathbb{P}(J_{m+1}=1 \mid R_m=k-1)\mathbb{P}( R_m=k-1) + \mathbb{P}(J_{m+1}=0 \mid R_m=k)\mathbb{P}(R_m=k) \\
&= \dfrac{k-1}{m+2}\dfrac{1}{m+1} + \left(1-\dfrac{k}{m+2} \right)\dfrac{1}{m+1} = \dfrac{1}{m+2}.
\end{align*}
Since $R_{m+1} \in \left\{1, \ldots, m+2 \right\}$, this proves that $R_{m+1} \sim \text{Uniform}(\left\{1,\ldots,m+2 \right\}).$ This concludes the proof. Moreover, for $n \geq 1$,
\begin{align*}
\mathbb{E} \left(R_n \mid \mathcal{F}_{n-1} \right) = R_{n-1} + \mathbb{E} \left(J_n \mid \mathcal{F}_{n-1} \right) = R_{n-1} + R_{n-1}/(n+1) = \dfrac{(n+2)R_{n-1}}{n+1}.
\end{align*}
Therefore, $\left\{M_n=(n+2)^{-1}R_n, \mathcal{F}_n, n \geq 0\right\}$ is a MG. Since, $0 \leq R_n \leq n+1$, the MG is uniformly bounded and hence $M_n$ converges to $M_{\infty}$ almost surely and in $L^p$ for all $p>0$. Notice that for any $x \in (0,1)$,
$$ \mathbb{P}(M_n \leq x) = \mathbb{P}(R_n \leq (n+2)x) = \dfrac{\lfloor (n+2)x \rfloor \wedge (n+1)}{n+1} \stackrel{n \to \infty}{\longrightarrow} x.$$
Thus $M_{\infty} \sim \text{Uniform}(0,1)$.
\item By \textit{Doob's Maximal Inequality}, we have for $n \geq 1$,
$$ \mathbb{P} \left( \max_{k=0}^n M_k > y\right) \leq y^{-1}\mathbb{E}(M_n)_{+} = y^{-1}\mathbb{E}(M_n) = y^{-1}\mathbb{E}(M_0) = \dfrac{1}{2y}.$$
Taking $n \uparrow \infty$, we get
$$ \mathbb{P} \left( \max_{k=0}^{\infty} M_k > y\right) \leq \dfrac{1}{2y} <1,$$
for $y>1/2$. For $y=1/2$, note that for any $j_1, \ldots, j_n \in \left\{0,1\right\}$ with $\sum_{k=1}^n j_k=r$, the following holds true.
$$ \mathbb{P} \left[ J_i=j_i, \;\;\forall \; 1 \leq i \leq n \right] = \dfrac{\prod_{i=0}^{r-1}(1+i) \prod_{i=0}^{n-r-1}(1+i)}{\prod_{i=0}^{n-1} (2+i)}.$$
The above statement can be proved easily by induction. The expression shows that $\left\{J_i : i \geq 1\right\}$ is an exchangeable collection. Let $\mathcal{E}$ be the exchangeable $\sigma$-algebra generated by this collection. Then by \textit{De Finetti's Theorem}, we have $ J_1, J_2, \ldots \mid \mathcal{E} \stackrel{iid}{\sim} \text{Ber}(P),$ where $P=\mathbb{E}(J_1 \mid \mathcal{E})$. By \cite[Lemma 5.5.25]{dembo},
$$ \mathbb{E}(J_1 \mid \mathcal{E}) = \lim_{n \to \infty} \dfrac{1}{n}\sum_{i=1}^n J_i = \lim_{n \to \infty}\dfrac{R_n-1}{n} = \lim_{n \to \infty}\dfrac{(n+2)M_n-1}{n} = M_{\infty}.$$
Hence, $P \sim \text{Uniform}(0,1)$. Set $S_n := \sum_{i=1}^n Y_i$ with $S_0:=0$ where $Y_i=2J_i-1$ for all $i \geq 1$. Then conditional on $\mathcal{E}$, $\left\{S_n\right\}_{n \geq 0}$ is a $SRW$ starting from $0$ with parameter $P$. Therefore,
\begin{align*}
\mathbb{P} \left[ \sup_{n \geq 0} M_n > 1/2 \Big \rvert \mathcal{E}\right] = \mathbb{P} \left[ \sup_{n \geq 0} \left(R_n - (n+2)/2 \right) > 0 \Big \rvert \mathcal{E}\right]
&= \mathbb{P} \left[ \sup_{n \geq 0} \left(\sum_{j=1}^n J_i - n/2 \right) > 0 \Big \rvert \mathcal{E}\right] \\
&= \mathbb{P} \left[ \sup_{n \geq 0} S_n > 0 \Big \rvert \mathcal{E}\right] = \dfrac{P}{1-P} \wedge 1,
\end{align*}
where in the last line we have used the fact that for a SRW with parameter $p$ starting from $0$, the probability that it ever reaches $1$ is $\frac{p}{1-p} \wedge 1$. Therefore,
\begin{align*}
\mathbb{P} \left[ \sup_{n \geq 0} M_n > 1/2 \right] = \mathbb{E} \left[ \dfrac{P}{1-P} \wedge 1\right] &= \int_{0}^1 \left( \dfrac{u}{1-u} \wedge 1\right) \, du \\
&= \dfrac{1}{2} + \int_{0}^{1/2} \dfrac{u}{1-u} \, du \\
&= \dfrac{1}{2} + (-\log(1-u)-u) \Big \rvert_{0}^{1/2} = \log 2 <1.\\
\end{align*}
\end{enumerate} |
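As an illustrative aside to part (b), the urn is easy to simulate: one can check that $R_n$ is (approximately) uniform and that the frequency of the event $\{\sup_n M_n > 1/2\}$ is close to $\log 2 \approx 0.693$. The sketch below assumes NumPy; the finite time horizon makes the second estimate a slight underestimate, and the chosen horizon and replication count are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def run(n_draws):
    r, exceeded = 1, False                # r = R_n, number of balls labelled 1
    for n in range(1, n_draws + 1):
        if rng.random() < r / (n + 1):    # before draw n the urn holds n+1 balls
            r += 1                        # a ball labelled 1 was drawn
        if r / (n + 2) > 0.5:             # i.e. M_n > 1/2
            exceeded = True
    return r, exceeded

reps, n_draws = 2000, 2000
out = [run(n_draws) for _ in range(reps)]
R = np.array([o[0] for o in out])
print("mean of R_n:", R.mean(), "(uniform on {1,...,n+1} has mean", (n_draws + 2) / 2, ")")
print("estimate of P(sup M_n > 1/2):", np.mean([o[1] for o in out]),
      "(log 2 =", round(np.log(2), 4), ")")
\end{verbatim}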
2012-q4 | Let C_n \equiv [-1/2,1/2]^{n} \subset \mathbb{R}^n be the unit cube in n dimensions and u,v \in \mathbb{R}^n uniformly random orthogonal unit vectors (i.e. such that ||u||_2 = ||v||_2 = 1 and \langle u,v \rangle = \sum_{i=1}^n u_i v_i = 0). (a). Prove that the random subset S_n \equiv {\langle u,x \rangle : x \in C_n} of the real line, is a line segment, symmetric around zero, whose length \rho_n = 2 \sup {z : z \in S_n} satisfies n^{-1/2}\rho_n \rightarrow \sqrt{2/\pi} in probability, as n \rightarrow \infty. (b). Define the random convex subset of the plane P_n \subset \mathbb{R}^2 by letting P_n \equiv {(\langle u,x \rangle, \langle v,x \rangle) : x \in C_n}. Let R_n be the radius of the smallest circle circumscribed to P_n, and r_n the radius of the biggest circle inscribed in P_n, that is, R_n = \sup {||z|| : z \in P_n}, r_n = \inf {||z|| : z \in \mathbb{R}^2 \setminus P_n}. Prove that n^{-1/2}P_n converges to a circle, in the sense that R_n/r_n \rightarrow 1 in probability as n \rightarrow \infty. [The joint distribution of u, v \in \mathbb{R}^n is defined as the only probability distribution such that: (i) ||u||_2 = ||v||_2 = 1 almost surely; (ii) \langle u,v \rangle = 0 almost surely; (iii) For any orthogonal matrix O (i.e. with O^TO = I) the pair (Ou,Ov) is distributed as (u,v)] | We write $u_n$ and $v_n$ to make the dependence of the random vectors on $n$ explicit.
\begin{enumerate}[label=(\alph*)]
\item Since the set $\mathcal{C}_n$ is convex and compact and $x \mapsto \left\langle u_n,x\right\rangle$ is a linear continuous map, the image $\mathcal{S}_n$ of $\mathcal{C}_n$ under this map is a convex compact subset of $\mathbb{R}$ and hence a line segment. On the otherhand, $\mathcal{S}_n$ is symmetric around zero since
$$ - \mathcal{S}_n = \left\{-\left\langle u_n,x\right\rangle : x \in \mathcal{C}_n\right\} =\left\{\left\langle u_n,-x\right\rangle : x \in \mathcal{C}_n\right\} = \left\{\left\langle u_n,x\right\rangle : x \in -\mathcal{C}_n\right\} = \mathcal{S}_n,$$
since $\mathcal{C}_n = -\mathcal{C}_n$. Moreover,
\begin{align*}
\rho_n := 2 \sup \left\{z : z \in \mathcal{S}_n\right\} & = 2 \sup \left\{\langle u_n,x \rangle : ||x||_{\infty} \leq 1/2\right\} = ||u_n||_1, \quad \text{so that } n^{-1/2}\rho_n = n^{-1/2}||u_n||_1 \stackrel{p}{\longrightarrow} \sqrt{2/\pi},
\end{align*}
where we have used Proposition~\ref{haar}.
\item
We have
$$ \mathcal{P}_n := \left\{\left( \langle u_n,x \rangle, \langle v_n, x \rangle \right) : x \in \mathcal{C}_n\right\},$$
and
$$ R_n := \sup \left\{||z||_2 : z \in \mathcal{P}_n\right\}, \; \; r_n := \inf \left\{||z||_2 : z \in \mathbb{R}^2 \setminus \mathcal{P}_n\right\}.$$
Note that $\mathcal{P}_n$ is a compact and convex subset of $\mathbb{R}^2$ since it is the image of the compact convex set $\mathcal{C}_n$ under the continuous linear map $x \mapsto \left( \langle u_n,x \rangle, \langle v_n, x \rangle \right)$.
We partition $\mathcal{P}_n$ according to the angle the points in it make with the $x$-axis, i.e. we define
$$ \mathcal{P}_{n,\theta} := \left\{ z \in \mathcal{P}_n : \langle (\sin \theta, -\cos \theta), z\rangle = 0 \right\}, \;\mathcal{C}_{n,\theta} := \left\{x \in \mathcal{C}_n : \left(\langle u_n,x \rangle, \langle v_n, x \rangle \right) \in \mathcal{P}_{n,\theta} \right\},\; \forall \; \theta \in [0, 2\pi], \; n \geq 1.$$
It is easy to see that $\mathcal{P}_n = \cup_{0 \leq \theta \leq \pi} \mathcal{P}_{n,\theta}.$ Since $\mathcal{P}_n$ is convex and compact, so is $\mathcal{P}_{n,\theta}$; and hence it is a closed line segment passing through $(0,0)$. Moreover, $\mathcal{C}_n$ being symmetric around $0$ implies that so is $\mathcal{P}_n$ and hence $\mathcal{P}_{n,\theta}$. Combining all of these observations, we conclude that
$$ \mathcal{P}_{n,\theta} = \left\{t(\cos \theta, \sin \theta) : |t| \leq \rho_{n,\theta}\right\}, \; \text{ where } \rho_{n,\theta} := \sup \left\{||z||_2 : z \in \mathcal{P}_{n,\theta}\right\}.$$
Therefore,
\begin{align*}
R_n = \sup \left\{||z||_2 : z \in \mathcal{P}_n\right\} = \sup_{\theta \in [0,\pi]} \sup_{z \in \mathcal{P}_{n,\theta}} ||z||_2 = \sup_{\theta \in [0,\pi]} \rho_{n,\theta}.
\end{align*}
On the otherhand, $\mathbb{R}^2 \setminus \mathcal{P}_n = \cup_{0 \leq \theta \leq \pi} \left(\mathcal{L}_{\theta} \setminus \mathcal{P}_{n,\theta} \right)$, where
$$ \mathcal{L}_{\theta} := \left\{ z \in \mathbb{R}^2 : \langle (\sin \theta, -\cos \theta), z\rangle = 0 \right\} = \left\{t(\cos \theta, \sin \theta) : t \in \mathbb{R}\right\}.$$
Therefore,
$$r_n = \inf \left\{||z||_2 : z \notin \mathcal{P}_n\right\} = \inf_{\theta \in [0,\pi]} \inf_{z \in \mathcal{L}_{\theta} \setminus \mathcal{P}_{n,\theta}} ||z||_2 = \inf_{\theta \in [0,\pi]} \rho_{n,\theta}.$$
Readily we observe that $r_n \leq R_n$. We therefore concentrate on the quantities $\rho_{n,\theta}$ for $\theta \in [0,2\pi]$. Observe that
\begin{align*}
\rho_{n,\theta}^2 &= \sup \left\{||z||^2_2 : z \in \mathcal{P}_{n,\theta}\right\} \\
& = \sup \left\{||\left( \langle u_n,x \rangle, \langle v_n,x \rangle\right)||^2_2 \mid x \in \mathcal{C}_n , \langle u_n,x \rangle \sin \theta - \langle v_n,x \rangle \cos \theta =0\right\} \\
& = \sup \left\{\langle u_n,x \rangle^2 + \langle v_n,x \rangle^2 \mid x \in \mathcal{C}_n , \langle u_n \sin \theta - v_n \cos \theta,x \rangle=0\right\}.
\end{align*}
Define $w_n(\theta) := u_n \cos \theta +v_n \sin \theta $ for any $\theta$. Note that $w_n(\pi/2+\theta) = -u_n \sin \theta + v_n \cos \theta.$ Since $ \langle w_n(\theta), w_n(\pi/2+\theta) \rangle = 0$ almost surely, an observation which follows from the joint distribution of $(u_n,v_n)$, it is clear that $\left\{w_n(\theta), w_n(\pi/2+\theta)\right\}$ forms an orthonormal basis for the linear space spanned by $\left\{u_n,v_n\right\}$. Therefore,
\begin{align*}
\langle u_n,x \rangle^2 + \langle v_n,x \rangle^2 &= || \text{ Orthogonal projection of } x \text{ onto the space spanned by } \left\{u_n,v_n\right\}||_2^2 \\
&= \langle w_n(\theta),x \rangle^2 + \langle w_n(\pi/2+\theta),x \rangle^2,
\end{align*}
and thus
$$ \rho_{n,\theta} = \sup \left\{|\langle w_n(\theta),x \rangle| \mid x \in \mathcal{C}_n , \langle w_n(\pi/2+\theta),x \rangle =0 \right\}.$$
It is also easy to see that $(Ow_n(\theta),Ow_n(\pi/2+\theta)) \stackrel{d}{=} (w_n(\theta),w_n(\pi/2+\theta))$ and hence by the uniqueness property proved in Proposition~\ref{haar}, we can claim that $(w_n(\theta),w_n(\pi/2+\theta)) \stackrel{d}{=} (u_n,v_n)$, for all $\theta$. This also implies that $\rho_{n, \theta} \stackrel{d}{=} \rho_{n,0}$ for all $\theta$.
Remaining proof is divided into several steps for sake of convenience.
\begin{enumerate}[label=(\Roman*)]
\item \textbf{Step 1 :} In this step we shall prove that $ \mathbb{P}(n^{-1/2}R_n > \sqrt{1/(2\pi)} + \delta ) \longrightarrow 0, \; \text{ as } n \to \infty, \; \forall \; \delta >0.$
We start by observing that
$$ \rho_{n,\theta} = \sup \left\{|\langle w_n(\theta),x \rangle| \mid x \in \mathcal{C}_n , \langle w_n(\pi/2+\theta),x \rangle =0 \right\} \leq \sup \left\{|\langle w_n(\theta),x \rangle| \mid x \in \mathcal{C}_n\right\} = \dfrac{1}{2}||w_n(\theta)||_1, $$
and hence $R_n \leq \dfrac{1}{2}\sup_{\theta \in [0,\pi]} ||w_n(\theta)||_1.$ For any $\theta_1, \theta_2 \in [0,\pi]$, we can write
\begin{align*}
\bigg \rvert ||w_n(\theta_1)||_1 - ||w_n(\theta_2)||_1 \bigg \rvert \leq ||w_n(\theta_1) - w_n(\theta_2)||_1 & \leq |\cos \theta_1 - \cos \theta_2| \, ||u_n||_1 + |\sin \theta_1 - \sin \theta_2| \, ||v_n||_1 \\
& \leq |\theta_1-\theta_2|\left( ||u_n||_1 + ||v_n||_1\right).
\end{align*}
Take any $\varepsilon >0$ and a finite $\varepsilon$-net of $[0,\pi]$, say $\mathcal{N}_{\varepsilon}$. Then
$$ R_n \leq \dfrac{1}{2}\sup_{\theta \in [0,\pi]} ||w_n(\theta)||_1 \leq \dfrac{1}{2}\sup_{\theta \in \mathcal{N}_{\varepsilon}} ||w_n(\theta)||_1 + \dfrac{\varepsilon}{2} \left( ||u_n||_1 + ||v_n||_1\right).$$
Since for any $\theta$, we have $w_n(\theta) \stackrel{d}{=} u_n$, we can use part (a) to guarantee that
$$ n^{-1/2} \max_{\theta \in \mathcal{N}_{\varepsilon}} ||w_n(\theta)||_1 \stackrel{p}{\longrightarrow} \sqrt{2/\pi}.$$
Since $v_n = w_n(\pi/2)$, we also have $n^{-1/2}||v_n||_1$ converging in probability to $\sqrt{2/\pi}$. Combining all of these, we can guarantee that
$$ \dfrac{1}{2\sqrt{n}}\sup_{\theta \in \mathcal{N}_{\varepsilon}} ||w_n(\theta)||_1 + \dfrac{\varepsilon}{2\sqrt{n}} \left( ||u_n||_1 + ||v_n||_1\right) \stackrel{p}{\longrightarrow} \sqrt{1/(2\pi)} + \varepsilon \sqrt{2/\pi}.$$
Since this is true for any $\varepsilon >0$, we have our required assertion.
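The ingredient $n^{-1/2}||w_n(\theta)||_1 \stackrel{p}{\longrightarrow} \sqrt{2/\pi}$ used in this step is easy to illustrate numerically. The following sketch is not part of the proof; it assumes Python with \texttt{numpy} and uses the fact that a standard Gaussian vector normalized to unit length has the Haar (uniform) distribution on the sphere, which is the common distribution of $u_n$ and of each $w_n(\theta)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for n in [100, 1000, 10000, 100000]:
    g = rng.standard_normal(n)
    u = g / np.linalg.norm(g)                # Haar-uniform unit vector in R^n
    print(n, np.abs(u).sum() / np.sqrt(n))   # approaches sqrt(2/pi) ~ 0.7979
\end{verbatim}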
\item \textbf{Step 2:} In this step we shall prove that $n^{-1/2}\rho_{n,0}$ converges in probability to $\sqrt{1/(2\pi)}$.
The upper bound is trivial since $\rho_{n,0} \leq ||w_n(0)||_1/2 = ||u_n||_1/2$ and $n^{-1/2}||u_n||_1$ converges to $\sqrt{2/\pi}$ in probability. On the other hand, recall that
$$ \rho_{n,0} = \sup \left\{||z||_2 : z \in \mathcal{P}_{n,0}\right\} = \sup \left\{ t : (t,0) \in \mathcal{P}_n\right\}.$$
Consider $x_n = \operatorname{sgn}(u_n)/2, y_n = \operatorname{sgn}(v_n)/2 \in \mathcal{C}_n$ and set $a_n= (\langle u_n,x_n \rangle, \langle v_n,x_n \rangle) \in \mathcal{P}_n.$ Also define
$$b_n := - \operatorname{sgn}(\langle v_n,x_n \rangle)(\langle u_n,y_n \rangle, \langle v_n,y_n \rangle) \in \mathcal{P}_n, $$
since $\mathcal{P}_n$ is symmetric around $(0,0)$. Since $\langle v_n,y_n \rangle = ||v_n||_1/2 >0$, the $y$-coordinates of $a_n$ and $b_n$ are either both zero or have different signs. In the former case, $a_n \in \mathcal{P}_{n,0}$ and hence $\rho_{n,0} \geq \langle u_n,x_n \rangle = ||u_n||_1/2$. In the latter case, convexity of $\mathcal{P}_n$ guarantees that $\mathcal{P}_n$ contains the line segment joining $a_n$ and $b_n$; hence $\mathcal{P}_n$ contains the point $c_n \in \mathcal{P}_{n,0}$ where this line segment crosses the $x$-axis. An easy computation yields
$$ c_n = \left( \dfrac{- \operatorname{sgn}(\langle v_n,x_n \rangle)\langle u_n,y_n \rangle \langle v_n, x_n \rangle + \operatorname{sgn}(\langle v_n,x_n \rangle)\langle v_n,y_n \rangle \langle u_n,x_n \rangle }{\langle v_n, x_n \rangle + \operatorname{sgn}(\langle v_n,x_n \rangle)\langle v_n,y_n \rangle},0 \right). $$
Therefore,
\begin{align*}
n^{-1/2}\rho_{n,0} &\geq \dfrac{1}{\sqrt{n}}\Bigg \rvert \dfrac{- \operatorname{sgn}(\langle v_n,x_n \rangle)\langle u_n,y_n \rangle \langle v_n, x_n \rangle + \operatorname{sgn}(\langle v_n,x_n \rangle)\langle v_n,y_n \rangle \langle u_n,x_n \rangle }{\langle v_n, x_n \rangle + \operatorname{sgn}(\langle v_n,x_n \rangle)\langle v_n,y_n \rangle}\Bigg \rvert \mathbbm{1}\left( \langle v_n,x_n \rangle \neq 0 \right)\\
& \hspace{ 3.5 in} + \dfrac{1}{2\sqrt{n}} ||u_n||_1 \mathbbm{1}\left( \langle v_n,x_n \rangle = 0 \right) \\
& \geq \dfrac{1}{\sqrt{n}} \dfrac{\langle v_n,y_n \rangle \langle u_n,x_n \rangle- |\langle u_n,y_n \rangle| |\langle v_n, x_n \rangle| }{|\langle v_n, x_n \rangle| + |\langle v_n,y_n \rangle|} \mathbbm{1}\left( \langle v_n,x_n \rangle \neq 0 \right) + \dfrac{1}{2\sqrt{n}}||u_n||_1 \mathbbm{1}\left( \langle v_n,x_n \rangle = 0 \right) \\
&=: T_n \mathbbm{1}\left( \langle v_n,x_n \rangle \neq 0 \right) + \dfrac{1}{2\sqrt{n}}||u_n||_1 \mathbbm{1}\left( \langle v_n,x_n \rangle = 0 \right),
\end{align*}
where
\begin{align*}
T_n := \dfrac{1}{\sqrt{n}} \dfrac{\langle v_n,y_n \rangle \langle u_n,x_n \rangle- |\langle u_n,y_n \rangle| |\langle v_n, x_n \rangle| }{|\langle v_n, x_n \rangle| + |\langle v_n,y_n \rangle|} &= \dfrac{\dfrac{1}{2\sqrt{n}}||u_n||_1 \dfrac{1}{2\sqrt{n}}||v_n||_1- \dfrac{1}{4} \bigg \rvert \dfrac{1}{\sqrt{n}}\langle u_n,\operatorname{sgn}(v_n) \rangle \bigg \rvert \, \bigg \rvert \dfrac{1}{\sqrt{n}}\langle v_n,\operatorname{sgn}(u_n) \rangle \bigg \rvert}{ \dfrac{1}{2} \bigg \rvert \dfrac{1}{\sqrt{n}}\langle v_n,\operatorname{sgn}(u_n) \rangle \bigg \rvert + \dfrac{1}{2\sqrt{n}}||v_n||_1} \\
& \stackrel{p}{\longrightarrow} \sqrt{\dfrac{1}{2\pi}},
\end{align*}
using Proposition~\ref{haar} and the fact that $(u_n,v_n)\sim (v_n,u_n)$. Therefore, for any $\varepsilon >0$
$$ \mathbb{P}\left(n^{-1/2}\rho_{n,0} < \sqrt{\dfrac{1}{2\pi}} - \varepsilon \right) \leq \mathbb{P} \left( T_n < \sqrt{\dfrac{1}{2\pi}} - \varepsilon \right) + \mathbb{P} \left( \dfrac{1}{2\sqrt{n}}||u_n||_1 < \sqrt{\dfrac{1}{2\pi}} - \varepsilon \right) \to 0,$$
completing the proof of the lower bound.
\item \textbf{Step 3:} In this step we shall prove that $\mathbb{P}(n^{-1/2}r_n < (1-\delta)\sqrt{1/(2\pi)}) \to 0,$ as $n \to \infty$, for any $\delta >0$.
Take $0 \leq \theta_1 < \theta_2 \leq \pi$ such that $\theta_2 -\theta_1 < \pi/2$. Let $P_1,P_2$ and $P_{0}$ be the points $\rho_{n,\theta_1}(\cos \theta_1, \sin \theta_1)$, $\rho_{n,\theta_2}(\cos \theta_2, \sin \theta_2)$ and $(0,0)$ respectively. All of these points lie in $\mathcal{P}_n$ and hence the triangle $\Delta P_1P_0P_2$ is also contained in $\mathcal{P}_n$. For $\theta \in [\theta_1,\theta_2]$, the line $\mathcal{L}_{\theta}$ intersects the line segment $\overline{P_1P_2}$ at some point, say $P_{\theta}$, and hence $\rho_{n,\theta} \geq \operatorname{length}(\overline{P_0P_{\theta}})$. Thus
$$ \inf_{\theta \in [\theta_1,\theta_2]} \rho_{n,\theta} \geq \inf_{P \in \overline{P_1P_2}} \operatorname{length}(\overline{P_0P}).$$
If the triangle $\Delta P_1P_0P_2$ is not acute-angled, then clearly
$$\inf_{P \in \overline{P_1P_2}} \operatorname{length}(\overline{P_0P}) = \operatorname{length}(\overline{P_0P_1}) \wedge \operatorname{length}(\overline{P_0P_2}).$$
If the triangle $\Delta P_1P_0P_2$ is indeed acute-angled, then $\inf_{P \in \overline{P_1P_2}} \operatorname{length}(\overline{P_0P})$ is the length of the perpendicular from $P_0$ on the line segment $\overline{P_1P_2}$. Since $\angle P_0P_1P_2 + \angle P_0P_2P_1 = \pi - (\theta_2-\theta_1)$, at least one of them is greater than or equal to $\pi/2 -(\theta_2-\theta_1)/2$ and hence
$$ \inf_{P \in \overline{P_1P_2}} \operatorname{length}(\overline{P_0P}) \geq \left( \operatorname{length}(\overline{P_0P_1}) \wedge \operatorname{length}(\overline{P_0P_2})\right) \sin (\pi/2-(\theta_2-\theta_1)/2).$$
Combining these previous observations, we conclude that
$$ \inf_{\theta \in [\theta_1,\theta_2]} \rho_{n,\theta} \geq \left(\rho_{n,\theta_1} \wedge \rho_{n,\theta_2} \right) \cos \dfrac{\theta_2-\theta_1}{2}.$$
Fix $m \geq 3$ and observe that
\begin{align*}
r_n = \inf_{\theta \in [0,\pi]} \rho_{n, \theta} = \min_{k=1}^m \inf_{\theta \in [(k-1)\pi/m,k\pi/m]} \rho_{n, \theta} \geq \cos(\pi/m)\min_{k=0}^m \rho_{n,k\pi/m}.
\end{align*}
Take any $\varepsilon >0$ and choose $m \geq 3$ such that $\cos(\pi/m) > 1- \varepsilon/2$.
\begin{align*}
\mathbb{P}\left( n^{-1/2}r_n < (1-\varepsilon)\sqrt{1/(2\pi)}\right) &\leq \sum_{k=0}^m \mathbb{P} \left( n^{-1/2}\rho_{n, k\pi/m} < (1-\varepsilon)\sqrt{1/(2\pi)} /(1-\varepsilon/2)\right) \\
& = (m+1)\mathbb{P} \left( n^{-1/2}\rho_{n, 0} < (1-\varepsilon)\sqrt{1/(2\pi)} /(1-\varepsilon/2)\right) \to 0,
\end{align*}
where we have used the result from Step 2. This completes this step.
\end{enumerate}
The assertions of Step 1 and Step 3, along with the fact that $r_n \leq R_n$, guarantee that $n^{-1/2}R_n$ and $n^{-1/2}r_n$ both converge in probability to $1/\sqrt{2\pi}$. In particular, $R_n/r_n$ converges in probability to $1$.
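These limits can be checked numerically. The sketch below is not part of the proof. It relies on the observation that $\mathcal{P}_n$ is a zonogon whose support function in direction $(\cos \theta, \sin \theta)$ equals $\tfrac{1}{2}||w_n(\theta)||_1$, and that for this origin-symmetric convex set the maximum and minimum of the support function over $\theta$ coincide with $R_n$ and $r_n$; it also assumes that $(u_n,v_n)$ may be sampled by orthonormalizing two independent standard Gaussian vectors, and that \texttt{numpy} is available. All numerical parameters are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 2000
Q, _ = np.linalg.qr(rng.standard_normal((n, 2)))    # Haar orthonormal pair (u_n, v_n)
u, v = Q[:, 0], Q[:, 1]
thetas = np.linspace(0.0, np.pi, 1000)
# support function of P_n in direction theta: ||u cos(theta) + v sin(theta)||_1 / 2
h = 0.5 * np.abs(np.outer(u, np.cos(thetas)) + np.outer(v, np.sin(thetas))).sum(axis=0)
print(h.max() / np.sqrt(n), h.min() / np.sqrt(n), 1.0 / np.sqrt(2.0 * np.pi))
# both extremes are close to 1/sqrt(2*pi) ~ 0.3989
\end{verbatim}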
To make sense of the convergence of $n^{-1/2}\mathcal{P}_n$, we need a few definitions. Let $\mathcal{T}$ be the collection of all compact subsets of $\mathbb{R}^2$. For any $S \subseteq \mathbb{R}^2$, $x \in \mathbb{R}^2$ and $r>0$, we define $d(x,S):= \inf\left\{ ||x-y||_2 : y \in S\right\}$ and $U(S,r):= \left\{x : d(x,S) \leq r\right\}$. We define a metric on $\mathcal{T}$, called the Hausdorff metric, as follows.
$$ d_H(S,T) := \inf \left\{r >0 : S \subseteq U(T,r), T \subseteq U(S,r)\right\}, \; \forall \; S,T \in \mathcal{T}.$$
It is easy to show that this is a valid metric and $(\mathcal{T},d_H)$ is a complete separable metric space. We endow this space with its Borel $\sigma$-algebra. It is a routine exercise to check that $n^{-1/2}\mathcal{P}_n$ is indeed a random element in $\mathcal{T}$ and if $\Gamma_c$ is the disc of radius $c=1/\sqrt{2\pi}$ around the origin, then
$$ \Gamma_c \subseteq U(n^{-1/2}\mathcal{P}_n, (c-n^{-1/2}r_n)_{+}), \; n^{-1/2}\mathcal{P}_n \subseteq U(\Gamma_c, (n^{-1/2}R_n-c)_{+}).$$
Therefore,
$$ d_{H}(n^{-1/2}\mathcal{P}_n, \Gamma_c) \leq (n^{-1/2}R_n-c)_{+} \vee (c-n^{-1/2}r_n)_{+} \stackrel{p}{\longrightarrow} 0, $$
since $n^{-1/2}r_n$ and $n^{-1/2}R_n$ both converge in probability to $c$. Hence,
$$ n^{-1/2}\mathcal{P}_n \stackrel{p}{\longrightarrow} \Gamma_{1/\sqrt{2\pi}} = \text{Disc of radius } \dfrac{1}{\sqrt{2\pi}} \text{ around the origin }.$$
For more details on Hausdorff metric, look at \cite[Section 7.3,7.4]{hauss}.
\end{enumerate} |
2012-q5 | Equip the set C of continuous functions on \mathbb{R} with the topology of uniform convergence on compact sets. (a). Show that for any fixed s > 0 there exists centered Gaussian stochastic process g_s(·) of auto-covariance c_s(x,y) = e^{-(x-y)^2/(2s)}/\sqrt{2\pi s}, whose sample functions are (C, B_C)-measurable. Remark: For full credit you are to show that c_s(·,·) is non-negative definite. (b). Taking s ↓ 0 corresponds formally to the stochastic process g_0(·) such that {g_0(x), x \in \mathbb{R}}, are mutually independent, standard normal random variables. How should you construct this process? Is there anything unsatisfactory about your construction? Explain. (this part is intentionally open-ended). | \begin{enumerate}[label=(\alph*)]
\item First we need to show that the function $c_s$ is non-negative definite. $c_s$ is clearly symmetric in its arguments. Observe that
\begin{align*}
c_s(x,y) = \dfrac{1}{\sqrt{2\pi s}} \exp(-(x-y)^2/(2s)) &= \dfrac{1}{\sqrt{2\pi s}} \int_{-\infty}^{\infty} \exp \left( i \xi \dfrac{(x-y)}{\sqrt{s}} \right) \phi(\xi) \, d \xi \\
&= \dfrac{1}{\sqrt{2\pi s}} \int_{-\infty}^{\infty} \exp \left( i \xi \dfrac{x}{\sqrt{s}} \right) \exp \left( -i \xi \dfrac{y}{\sqrt{s}} \right)\phi(\xi) \, d \xi \\
&= \dfrac{1}{\sqrt{2\pi s}} \int_{-\infty}^{\infty} \exp \left( i \xi \dfrac{x}{\sqrt{s}} \right) \overline{\exp \left( i \xi \dfrac{y}{\sqrt{s}} \right)}\phi(\xi) \, d \xi = \langle f_x, f_y \rangle,
\end{align*}
where $f_x \in L^2(\mathbb{C},\phi) := \left\{ h : \mathbb{R} \mapsto \mathbb{C} \mid \int_{\mathbb{R}} |h|^2 \phi < \infty \right\}$ is defined as $f_x(\xi) := \exp(i\xi x/\sqrt{s})$ for all $\xi \in \mathbb{R}$. The inner product $\langle \cdot, \cdot \rangle$ on $L^2(\mathbb{C},\phi)$ is defined as
$$ \langle h_1, h_2 \rangle := \dfrac{1}{\sqrt{2\pi s}} \int_{-\infty}^{\infty} h_1(\xi) \overline{h_2(\xi)} \phi(\xi) \, d \xi.$$
Then take any $x_1, \ldots, x_n, a_1, \ldots, a_n \in \mathbb{R}$ and notice that
$$ \sum_{i,j} a_ia_jc_s(x_i,x_j) = \sum_{i,j} a_ia_j \langle f_{x_i}, f_{x_j} \rangle = \Big \rvert \Big \rvert \sum_{i=1}^n a_if_{x_i}\Big \rvert \Big \rvert^2 \geq 0,$$
proving that the function $c_s$ is non-negative definite.
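As a quick sanity check, not part of the argument, one can also verify numerically that the kernel matrix built from $c_s$ at finitely many points has no negative eigenvalues beyond floating-point error. The sketch below assumes \texttt{numpy}; the value of $s$ and the sample points are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
s = 0.5
x = rng.uniform(-5.0, 5.0, size=200)
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * s)) / np.sqrt(2.0 * np.pi * s)
print(np.linalg.eigvalsh(C).min())   # non-negative up to numerical round-off
\end{verbatim}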
Since the mean function is the zero function and the covariance function $c_s$ is non-negative definite, this defines a consistent collection of finite-dimensional distributions for the required Gaussian process. Apply the \textit{Kolmogorov Extension Theorem} (see \cite[Proposition 8.1.8]{dembo}) to construct a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and a Gaussian process $\left\{G_s(x) : x \in \mathbb{R}\right\}$ on this space with $\mathbb{E}(G_s(x))=0$ and $\operatorname{Cov}(G_s(x), G_s(y))=c_s(x,y)$ for all $x, y \in \mathbb{R}$. Observe that for all $x,y \in \mathbb{R}$,
\begin{align*}
\mathbb{E}|G_s(x)-G_s(y)|^2 = \operatorname{Var}(G_s(x)-G_s(y)) = c_s(x,x)+c_s(y,y)-2c_s(x,y) &= \dfrac{1}{\sqrt{2\pi s}} \left[ 2- 2\exp(-(x-y)^2/(2s)) \right] \\
& \leq \dfrac{(x-y)^2}{s\sqrt{2\pi s}} = C|x-y|^2.
\end{align*}
Fix $n \in \mathbb{N}$. Using \textit{Kolmogorov Centsov Continuity Theorem} (see \cite[Theorem 8.2.6]{dembo}), we can get a stochastic process $\left\{G_{s,n}(x) : x \in [-n,n]\right\}$ (on the same probability space) which has continuous paths and $\mathbb{P}(G_{s,n}(x) \neq G_{s}(x))=0$ for all $x \in [-n,n]$. Define the following events.
$$ A_n := \bigcup_{x \in \mathbb{Q} \cap [-n,n]} \left(G_{s,n}(x) \neq G_s(x) \right), \;\; A:= \bigcup_{n \geq 1} A_n. $$
Clearly, $\mathbb{P}(A)=0$. Observe that for $\omega \in A^{c}$, we have $G_{s,n}(x)(\omega)=G_s(x)(\omega)$ for all $x \in [-n,n]\cap \mathbb{Q}$ and for all $n \geq 1$. Since the stochastic processes $G_{s,n}$ all have continuous paths, we can say that $G_{s,n}(x)(\omega)=G_{s,m}(x)(\omega)$ for all $\omega \in A^c$ and for all $x \in [-n,n]$ such that $1 \leq n \leq m.$ Define,
$$ g_s(x)(\omega) := \sum_{n \geq 1} G_{s,n}(x)(\omega) \mathbbm{1}(n-1 \leq |x|<n)\mathbbm{1}(\omega \in A^c) .$$
Since $G_{s,n}$ was a measurable function from $(\Omega, \mathcal{F})$ to $(\mathbb{R}^{\mathbb{R}}, \mathcal{B}^{\mathbb{R}})$, the same is true for $g_s$. For all $\omega \in A$, the function $x \mapsto g_s(x)(\omega)$ is constant zero function. For all $\omega \in A^c$ and $x$ such that $(n-1) \leq |x| <n$, we have
$$ g_s(x)(\omega) = G_{s,n}(x)(\omega) = G_{s,m}(x)(\omega), \;\forall \; m \geq n.$$
In other words, $x \mapsto g_s(x)(\omega)$ coincides with $x \mapsto G_{s,n}(x)(\omega)$ on $[-n,n]$ for all $\omega \in A^c$. Recall that $x \mapsto G_{s,n}(x)(\omega)$ has continuous paths for all $n$; hence $x \mapsto g_s(x)(\omega)$ also has continuous paths on $(-\infty, \infty)$ for all $\omega$. Since $\mathbb{P}(A)=0$, for any $x_1, \ldots, x_m$ with $|x_i| \leq n$ for all $1 \leq i \leq m$, we have
$$ (g_s(x_1), \ldots,g_s(x_m)) \stackrel{a.s.}{=} (G_{s,n}(x_1), \ldots,G_{s,n}(x_m)) \stackrel{d}{=} (G_s(x_1), \ldots, G_s(x_m)).$$
Since this holds for all $n,m$ and all choices of $x_i$'s we can conclude that $\left\{g_s(x) : x \in \mathbb{R}\right\}$ is a Gaussian stochastic process on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$, i.e. a measurable function from $(\Omega, \mathcal{F})$ to $(\mathbb{R}^{\mathbb{R}}, \mathcal{B}^{\mathbb{R}})$ with zero mean function, covariance function $c_s$ and continuous sample-paths.
Now let $\mathcal{C}$ be the space of all continuous real-valued functions on $\mathbb{R}$ and $\mathcal{B}_{\mathcal{C}}$ is the Borel $\sigma$-algebra generated by the topology of uniform convergence on compact sets. This topology is metrizable and we can also show that $ \mathcal{B}_{\mathcal{C}} = \left\{ D \cap \mathcal{C} : D \in \mathcal{B}^{\mathbb{R}}\right\}$ (see \cite[Exercise 8.2.9]{dembo}). Since the process $g_s$ has continuous sample paths by construction, i.e. $g_s(\omega) \in \mathcal{C}$ for all $\omega$, we can consider $g_s$ as a function from $\Omega$ to $\mathcal{C}$. Then for any $D \in \mathcal{B}^{\mathbb{R}}$, we have $g_s^{-1}(D \cap \mathcal{C})=g_s^{-1}(D) \in \mathcal{F}$. This shows that $g_s : (\Omega, \mathcal{F}) \to (\mathcal{C}, \mathcal{B}_{\mathcal{C}})$ is measurable.
\item
\textbf{Correction :} As $s \downarrow 0$, it is not clear why the process formally converges to an i.i.d. standard normal collection, since $c_s(x,x)$ blows up. So for this problem assume that we have to construct $\left\{g_0(x) : x \in \mathbb{R}\right\}$ as an i.i.d. standard normal collection.
By using the \textit{Kolmogorov Extension Theorem}, it is straightforward to construct $g_0$ as a stochastic process. But this process does not have any version which is (jointly) measurable (see \cite[Definition 8.2.19]{dembo}). To see why this is true, suppose that $g_0$, viewed as a stochastic process, is measurable.
Consider the process $Z$ defined as
$$ Z_x(\omega) := \int_{0}^x g_0^2(y)(\omega)\, dy, \; \forall \; \omega \in \Omega, \; x \in [0,1].$$
The process $Z$ is well-defined due to the joint-measurability of $g_0$. Now apply \textit{Fubini's Theorem} to conclude that for all $x \in [0,1]$,
$$ \mathbb{E}(Z_x) = \mathbb{E} \left[ \int_{0}^x g_0^2(y)\, dy\right] = \int_{0}^x \mathbb{E}(g_0^2(y))\, dy = \int_{0}^x \, dy=x, $$
and
$$ \mathbb{E}(Z_x^2) = \mathbb{E} \left[ \int_{0}^x \int_{0}^x g_0^2(y)g_0^2(z)\, dy\, dz\right] = \int_{0}^x \int_{0}^x \mathbb{E}(g_0^2(y)g_0^2(z))\, dy\, dz = \int_{0}^x \int_{0}^x (3\mathbbm{1}(y=z)+\mathbbm{1}(y \neq z))\,dy\ dz =x^2. $$
Therefore, $Z_x=x$ almost surely for all $x \in [0,1]$. This yields that $\mathbb{P} \left( Z_x =x, \; \forall \; x \in [0,1]\cap \mathbb{Q}\right)=1$. Considering the fact that $x \mapsto Z_x(\omega)$ is non-decreasing for all $\omega \in \Omega$, the previous discussion yields that $Z_x=x$ for all $x \in [0,1]$ almost surely. Apply \textit{Lebesgue Differentiation Theorem} to conclude that $g_0^2(y)=1$ a.e.[$y$] on $[0,1]$ with probability $1$, i.e.,
$$ \int_{0}^1 \mathbbm{1}(g_0^2(y) \neq 1) \, dy =0, \; \text{almost surely}.$$
But by \textit{Fubini's Theorem},
$$ \mathbb{E} \left[\int_{0}^1 \mathbbm{1}(g_0^2(y) \neq 1) \, dy \right] = \int_{0}^1 \mathbb{P}(g_0^2(y) \neq 1) \, dy = \int_{0}^1 \, dy =1,$$
yielding a contradiction.
\end{enumerate} |
2012-q6 | Consider two independent standard Brownian motions W(t) and B(t) and for x>0, the random time \tau_x = \inf{t \geq 0 : W(t) - rt = x} (with the convention that infimum over empty set is +∞). (a) What is \mathbb{P}(\tau_x < \infty) as a function of x>0 and r \in \mathbb{R}? (b) What are the mean and variance of the conditional distribution of B(\tau_x) given that \tau_x < \infty? (c) Suppose that n \rightarrow \infty. What is then the appropriately normalized limiting distribution of B(\tau_n) conditional on \tau_n < \infty? (do not forget to deal with the case of r=0). | \begin{enumerate}[label=(\alph*)]
\item Since $x>0$, $\tau_{r,x} := \inf \left\{t \geq 0 : W(t)-rt=x \right\} = \inf \left\{t \geq 0 : W(t)-rt \geq x \right\}$. Set $\mathcal{F}_W(t):= \sigma(W(s):0 \leq s \leq t)$. Consider first the case of $r>0$. We have the exponential Martingale $\left\{ \exp(2rW(t)-2r^2t), \mathcal{F}_W(t), t \geq 0\right\}$. By OST, $\mathbb{E} \left[ \exp \left(2rW(\tau_{r,x} \wedge t)-2r^2(\tau_{r,x} \wedge t) \right)\right] =1$, for all $t \geq 0$. By definition, $W(\tau_{r,x} \wedge t) \leq x+r(\tau_{r,x} \wedge t)$, for all $t \geq 0$ and on the event $(\tau_{r,x} < \infty)$, the sample-path-continuity of $W$ gives $W(\tau_{r,x})-r\tau_{r,x}=x$. Hence,
$$ \exp \left(2rW(\tau_{r,x} \wedge t)-2r^2(\tau_{r,x} \wedge t) \right) \leq \exp(2rx), \;\forall \; t \geq 0,$$
and on the event $(\tau_{r,x} = \infty)$, we have $t^{-1}\left[2rW(\tau_{r,x} \wedge t)-2r^2(\tau_{r,x} \wedge t) \right] = 2rt^{-1}\left[W(t)-rt \right] \to -2r^2 <0$, almost surely, since $W(t)/t \to 0$ almost surely as $t \to \infty$ (which follows, for example, from the LIL). This gives $\exp \left(2rW(\tau_{r,x} \wedge t)-2r^2(\tau_{r,x} \wedge t) \right) \to 0$ almost surely on $(\tau_{r,x} = \infty)$. We can now apply DCT to conclude that
\begin{align*}
1= \mathbb{E} \left[ \lim_{t \to \infty}\exp \left(2rW(\tau_{r,x} \wedge t )-2r^2(\tau_{r,x} \wedge t) \right) \right] &= \mathbb{E} \left[\exp \left(2rW(\tau_{r,x})-2r^2\tau_{r,x} \right) \mathbbm{1}(\tau_{r,x} < \infty) \right] \\
&= \exp(2rx) \mathbb{P}(\tau_{r,x} < \infty).
\end{align*}
Hence, for $r>0$, we have $\mathbb{P}(\tau_{r,x} < \infty) = \exp(-2rx)$.
Now note that for any $r_1 <r_2$, $\left\{t \geq 0 : W(t)-r_2t \geq x \right\} \subseteq \left\{t \geq 0 : W(t)-r_1t \geq x \right\}$ and hence $\tau_{r_1,x} \leq \tau_{r_2,x},$ which implies $\mathbb{P}(\tau_{r_1,x}< \infty) \geq \mathbb{P}(\tau_{r_2,x}< \infty)$. Therefore, for any $r \leq 0$,
$$ \mathbb{P}(\tau_{r,x} < \infty) \geq \sup_{s>0} \mathbb{P}(\tau_{s,x} < \infty) = \sup_{s>0} \exp(-2sx) = 1.$$
Thus, $\mathbb{P}(\tau_{r,x}< \infty) = \exp(-2xr_{+}).$
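A crude Monte Carlo check of $\mathbb{P}(\tau_{r,x} < \infty) = \exp(-2xr_{+})$, not part of the solution: simulate $W(t)-rt$ on a fine grid up to a finite horizon, so the empirical frequency only approximates the infinite-horizon probability (and slightly underestimates it, since excursions between grid points are missed). The horizon, step size and parameter values below are arbitrary; \texttt{numpy} is assumed.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
r, x, T, dt, reps = 0.5, 1.0, 200.0, 0.01, 2000
n_steps = int(T / dt)
hits = 0
for _ in range(reps):
    inc = rng.standard_normal(n_steps) * np.sqrt(dt) - r * dt
    path = np.cumsum(inc)                   # grid approximation of W(t) - r t
    hits += path.max() >= x
print(hits / reps, np.exp(-2.0 * r * x))    # both roughly e^{-1} ~ 0.368
\end{verbatim}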
\item The processes $B$ and $W$ being independent, we have $\tau_{r,x} \perp\!\!\!\perp \left\{B_t : t \geq 0\right\}$. Hence, $B(\tau_{r,x}) \mid \tau_{r,x} =t \sim N(0,t)$ for all $t \in (0, \infty)$. Thus
$$\mathbb{E} \left[B(\tau_{r,x})\mathbbm{1}(\tau_{r,x}< \infty) \rvert \tau_{r,x} \right] = 0, \;\; \mathbb{E} \left[B(\tau_{r,x})^2\mathbbm{1}(\tau_{r,x}< \infty) \rvert \tau_{r,x} \right] = \tau_{r,x}\mathbbm{1}(\tau_{r,x}< \infty).$$
This gives
$$\mathbb{E} \left[B(\tau_{r,x})\mathbbm{1}(\tau_{r,x}< \infty) \right] = 0, \;\; \mathbb{E} \left[B(\tau_{r,x})^2\mathbbm{1}(\tau_{r,x}< \infty) \right] = \mathbb{E} \left[\tau_{r,x}\mathbbm{1}(\tau_{r,x}< \infty)\right].$$
We use the Laplace transform to compute the above quantities. Recall that for any $\lambda >0$,
$$ \mathbb{E} \exp \left[-\lambda\tau_{r,x}\mathbbm{1}(\tau_{r,x}< \infty) \right] = \mathbb{E} \exp(-\lambda\tau_{r,x}) = \exp \left[-x\sqrt{r^2+2\lambda}-rx \right].$$
(See Quals 2011, Problem 6 Solutions for derivations). Differentiating with respect to $\lambda$ and taking $\lambda \downarrow 0$, we get the following.
\begin{align*}
\mathbb{E} \left[\tau_{r,x}\mathbbm{1}(\tau_{r,x}< \infty)\right] = - \lim_{\lambda \downarrow 0} \dfrac{\partial}{\partial \lambda} \mathbb{E} \exp \left[-\lambda\tau_{r,x}\mathbbm{1}(\tau_{r,x}< \infty) \right]& = \lim_{\lambda \downarrow 0} \dfrac{x}{\sqrt{r^2+2\lambda}} \exp \left[-x\sqrt{r^2+2\lambda}-rx \right] \\
&= x|r|^{-1}\exp(-2xr_{+}).
\end{align*}
Hence,
$$ \mathbb{E} \left[B(\tau_{r,x})|\tau_{r,x}< \infty \right] = 0, \;\; \operatorname{Var} \left[B(\tau_{r,x})|\tau_{r,x}< \infty \right] = \mathbb{E} \left[B(\tau_{r,x})^2|\tau_{r,x}< \infty \right]= \dfrac{\mathbb{E} \left[\tau_{r,x}\mathbbm{1}(\tau_{r,x}< \infty)\right]}{\mathbb{P}(\tau_{r,x}<\infty)} = \dfrac{x}{|r|}.$$
The conditional mean and variance are therefore $0$ and $x/|r|$ respectively (variance is $\infty$ when $r=0$).
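The same discretized simulation can be used to illustrate the conditional variance $x/|r|$ of $B(\tau_{r,x})$ given $\tau_{r,x} < \infty$. The sketch below is not part of the solution; it generates $B(\tau_{r,x})$ as $\sqrt{\tau_{r,x}}$ times an independent standard normal, which uses exactly the independence of $B$ and $\tau_{r,x}$ invoked above. Parameters are arbitrary and \texttt{numpy} is assumed.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(10)
r, x, T, dt, reps = 0.5, 1.0, 50.0, 0.01, 5000
n_steps = int(T / dt)
taus = []
for _ in range(reps):
    path = np.cumsum(rng.standard_normal(n_steps) * np.sqrt(dt) - r * dt)
    hit = np.argmax(path >= x)              # first grid index where W - rt >= x (0 if none)
    if path[hit] >= x:
        taus.append((hit + 1) * dt)
taus = np.array(taus)
B_tau = np.sqrt(taus) * rng.standard_normal(taus.size)   # B independent of tau
print(B_tau.mean(), B_tau.var(), x / abs(r))             # mean ~ 0, variance ~ x/|r| = 2
\end{verbatim}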
\item For any $t \in \mathbb{R}$, we apply independence of $B$ and $\tau_{r,n}$ and obtain
$$ \mathbb{E} \left[e^{itB(\tau_{r,n})}\mathbbm{1}(\tau_{r,n}< \infty) \rvert \tau_{r,n} \right] = \exp(-\tau_{r,n}t^2/2)\mathbbm{1}(\tau_{r,n}< \infty). $$
This yields the following.
\begin{align*}
\mathbb{E}\left[ e^{itB(\tau_{r,n})}|\tau_{r,n}< \infty \right] = \dfrac{\mathbb{E} \left[\exp(-\tau_{r,n}t^2/2)\mathbbm{1}(\tau_{r,n}< \infty) \right]}{\mathbb{P}(\tau_{r,n}<\infty)} &= \exp \left[-n\sqrt{r^2+t^2}-rn+2nr_{+} \right] \\
&= \exp \left[-n\sqrt{r^2+t^2}+n|r| \right].
\end{align*}
\begin{enumerate}[label=(\arabic*)]
\item Case 1: $r \neq 0$.
$$ \mathbb{E}\left[ e^{itB(\tau_{r,n})}|\tau_{r,n}< \infty \right] = \exp \left[-n\sqrt{r^2+t^2}+|r|n \right] = \exp \left[-\dfrac{nt^2}{\sqrt{r^2+t^2}+|r|} \right].$$
Hence, for all $\xi \in \mathbb{R}$, we have
$$ \mathbb{E}\left[ e^{i\xi B(\tau_{r,n})/\sqrt{n}}|\tau_{r,n}< \infty \right] = \exp \left[-\dfrac{\xi^2}{\sqrt{r^2+n^{-1}\xi^2}+|r|} \right] \longrightarrow \exp \left(-\dfrac{\xi^2}{2|r|} \right). $$
Thus for $r \neq 0$,
$$ \dfrac{B(\tau_{r,n})}{\sqrt{n}} \bigg \rvert \tau_{r,n}< \infty \stackrel{d}{\longrightarrow} N(0,|r|^{-1}), \;\; \text{as } n \to \infty.$$
\item Case 2 : $r=0$.
$$ \mathbb{E}\left[ e^{itB(\tau_{r,n})}|\tau_{r,n}< \infty \right] = \exp(-n|t|).$$
Therefore, for all $n \geq 1$,
$$ \dfrac{B(\tau_{r,n})}{n} \bigg \rvert \tau_{r,n}< \infty \sim \text{Cauchy}(0,1).$$
\end{enumerate}
\end{enumerate} |
2013-q1 | For integer \( 1 \leq n \leq \infty \) and i.i.d. \( T_k, k = 1, 2, \ldots \), each of which has the exponential distribution of mean one, let \[ S_n := \sum_{k=1}^n k^{-1} (T_k - 1) . \] (a) Show that with probability one, \( S_\infty \) is finite valued and \( S_n \to S_\infty \) as \( n \to \infty \). (b) Show that \( S_n + \sum_{k=1}^n k^{-1} \) has the same distribution as \[ M_n := \max_{k=1, \ldots,n} \{T_k\} . \] (c) Deduce that for all \( x \in \mathbb{R}\), \[ \mathbb{P}(S_\infty + \gamma \le x) = e^{-e^{-x}}, \] where \( \gamma \) is Euler's constant, namely the limit as \( n \to \infty \) of \[ \gamma_n := \left( \sum_{k=1}^n k^{-1} \right) - \log n . \] | \begin{enumerate}[label=(\alph*)]\item Define the filtration \(\mathcal{F}_n:= \sigma(T_k : 1 \leq k \leq n)\), for all \(n \geq 1\). \mathcal{F}_0 be the trivial \(\sigma\)-algebra. Since, \(\mathbb{E}(T_k \rvert \mathcal{F}_{k-1})=\mathbb{E}T_k=1\), for all \(k \geq 1\), we have \(\left\{ S_n, \mathcal{F}_n, n \geq 0\right\}\) to be a MG with \(S_0=0\). Note that $$ \mathbb{E}S_n^2 = \operatorname{Var}(S_n) = \sum_{i=1}^n k^{-2} \operatorname{Var}(T_k) = \sum_{i=1}^n k^{-2} \leq \sum_{i=1}^{\infty} k^{-2} < \infty.$$ Thus the MG is \(L^2\) bounded and hence by \textit{Doob's $L^p$ Martingale Convergence Theorem}, \(S_n\) converges almost surely and in \(L^2\) to some $ S_{\infty}$ which is finite almost surely.\item Set \(T_{k:n}\) to be the \(k\)-th smallest among the collection \(\left\{T_1, \ldots, T_n\right\}\). Since \(\left\{T_k :k \geq 1\right\}\) is a collection of i.i.d. Exponential($1$) random variables, we have the joint density of \((T_{1:n}, \ldots, T_{n:n})\) as $$ f_{T_{1:n}, \ldots, T_{n:n}}(x_1, \ldots, x_n) = n! \exp \left( -\sum_{i=1}^n x_i\right)\mathbbm{1}(0<x_1<\cdots<x_n).$$ Using the simple change of variable formula, we get the joint density of \((T_{1:n},T_{2:n}-T_{1:n}, \ldots, T_{n:n}-T_{n-1:n})\) as \begin{align*} f_{T_{1:n},T_{2:n}-T_{1:n} \ldots, T_{n:n}-T_{n-1:n}}(y_1, \ldots, y_n) &= n! \exp \left( -\sum_{i=1}^n \sum_{j=1}^i y_j \right)\mathbbm{1}(\min(y_1,\ldots, y_n)>0) \\ &= n! \exp \left( -\sum_{j=1}^n (n-j+1) y_j \right)\mathbbm{1}(\min(y_1,\ldots, y_n)>0) \\ &= \prod_{j=1}^n \left[ (n-j+1)\exp \left( -(n-j+1)y_j \right)\mathbbm{1}(y_j >0)\right] \\ &= \prod_{j=1}^n f_{(n-j+1)^{-1}T_{n-j+1}}(y_j).\end{align*} Thus $$ \left(T_{1:n},T_{2:n}-T_{1:n}, \ldots, T_{n:n}-T_{n-1:n} \right) \stackrel{d}{=} \left( \dfrac{T_n}{n}, \ldots, \dfrac{T_1}{1}\right).$$ Hence, \(\max_{i=1}^n T_i = T_{n:n} \stackrel{d}{=} \sum_{j=1}^n (n-j+1)^{-1}T_{n-j+1} = S_n + \sum_{k=1}^n k^{-1}.\)\item We have \(S_n + \gamma_n \stackrel{d}{=} T_{n:n} - \log n\) and for all \(x \in \mathbb{R}\), $$ \mathbb{P}(T_{n:n} - \log n \leq x) = \left(\mathbb{P}(T_1 \leq x+\log n) \right)^n = (1-\exp(-x-\log n))^n = (1-n^{-1}\exp(-x))^n \longrightarrow \exp(-\exp(-x)).$$ Thus \(S_n + \gamma_n \stackrel{d}{\longrightarrow} X\), where \(\mathbb{P}(X \leq x) = \exp(-\exp(-x)).\) Since, \(S_n \stackrel{a.s.}{\longrightarrow} S_{\infty}\) and \(\gamma_n \to \gamma\), we have \(S_{\infty} + \gamma \stackrel{d}{=}X\).\end{enumerate} |
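The Gumbel limit in part (c) is easy to visualize by simulation. The following sketch is not part of the solution; it assumes \texttt{numpy} and simply compares the empirical distribution function of $S_n + \gamma_n$ with $\exp(-e^{-x})$ at a few points.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n, reps = 1000, 5000
k = np.arange(1, n + 1)
gamma_n = (1.0 / k).sum() - np.log(n)
T = rng.exponential(1.0, size=(reps, n))
S_n = ((T - 1.0) / k).sum(axis=1)
for x in [-1.0, 0.0, 1.0, 2.0]:
    print(x, (S_n + gamma_n <= x).mean(), np.exp(-np.exp(-x)))
\end{verbatim}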
2013-q2 | Fix a sequence \( \{n_k\}_{k=1}^\infty \) with \( n_k \to \infty \). Let \[ \hat{S}_k := n_k^{-1/2} \sum_{j=1}^{n_k} X_{kj} , \] where \( \{X_{kj}, 1 \le j \le n_k, k \ge 1 \} \) of zero mean, variance one and finite sup\(_{k,j}\, \mathbb{E}|X_{kj}|^3\), are all defined on the same probability space. (a) Show that if in addition, per fixed \( k \) the random variables \( \{X_{kj}, 1 \le j \le n_k\} \) are mutually independent, then \( \hat{S}_k \) converge in distribution as \( k \to \infty \), to a standard normal variable. (b) Construct a collection as above, where \( \mathbb{P}(X_{kj} = 1) = \mathbb{P}(X_{kj} = -1) = 1/2 \) and per fixed \( k \) the variables \( \{X_{kj}, 1 \le j \le n_k\} \) are only pairwise independent, such that \( \hat{S}_k \) converge in distribution to some non-normal \( \hat{S}_\infty \). Identify the law \( Q_\infty \) of \( \hat{S}_\infty \) in your example. | \begin{enumerate}[label=(\alph*)]\item Apply \textit{Lindeberg-Feller CLT} on the triangular array $Y_{k,j} = n_k^{-1/2}X_{k,j}$, for all $1 \leq j \leq n_k$ and $k \geq 1$. Clearly each row is an independent collection with $\mathbb{E}Y_{k,j}=0$; $$ \sum_{j=1}^{n_k} \mathbb{E} Y_{k,j}^2 = \dfrac{1}{n_k}\sum_{j=1}^{n_k} \mathbb{E} X_{k,j}^2 =1,$$ and $$ \sum_{j=1}^{n_k} \mathbb{E} \left[ Y_{k,j}^2; |Y_{k,j}| \geq \varepsilon \right] = \dfrac{1}{n_k}\sum_{j=1}^{n_k} \mathbb{E} \left[ X_{k,j}^2; |X_{k,j}| \geq \varepsilon \sqrt{n_k} \right] \leq \dfrac{1}{n_k}\sum_{j=1}^{n_k} \mathbb{E} \left[ \dfrac{1}{\varepsilon \sqrt{n_k}}|X_{k,j}|^3 \right] \leq \dfrac{1}{\varepsilon \sqrt{n_k}} \sup_{k,j} \mathbb{E} |X_{k,j}|^3 = o(1).$$ So applying \textit{Lindeberg Feller CLT}, we get required result. \item Let $n_k = \binom{k}{2} $ and index $X_{kj}$ by $X_{k,(l,m)}$ where $1\le l<m\le k$ . Now, let $Y_{k,1},\ldots, Y_{k,k}$ be $iid$ Uniform($\left\{-1,+1\right\}$) random variables and define $X_{k,(l,m)} = Y_{k,l}Y_{k,m}$. Then it's clear that $X_{k,(l,m)}$'s are all Uniform($\left\{-1,+1\right\}$) variables and are pairwise independent. The pairwise independence follows from the fact that if $Z_1,Z_2,Z_3 \sim \text{Uniform}(\left\{-1,+1\right\})$, then $Z_1Z_2 \perp \!\! \!ot Z_1Z_3$. Now we have \begin{align*} \widehat{S}_k = \dfrac{1}{\sqrt{\binom{k}{2}}} \sum_{1\le l<m\le k} Y_{k,l}Y_{k,m} &= \dfrac{1}{2 \sqrt{\binom{k}{2}}} \left( \left(\sum_{l=1}^{k} Y_{k,l}\right)^2 - \sum_{l=1}^{k} Y_{k,l}^2\right)\\ &= \dfrac{\sqrt{k}}{\sqrt{2(k-1)}} \left( \left(\dfrac{\sum_{l=1}^{k} Y_{k,l}}{\sqrt{k}}\right)^2 - \frac{\sum_{l=1}^{k} Y_{k,l}^2}{k}\right)\\\hspace{45mm} \rightarrow \frac{1}{\sqrt{2}} (\chi_{1}^{2}-1), \end{align*} where the last convergence follows from the facts that $\sum_{l=1}^k Y_{k,l}^2 =k$ and $k^{-1/2}\sum_{l=1}^k Y_{k,l} \stackrel{d}{\longrightarrow} N(0,1)$ (which follows from part (a)). \end{enumerate} |
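A simulation sketch of the construction in part (b), not part of the solution (\texttt{numpy} assumed): the variables $X_{k,(l,m)} = Y_{k,l}Y_{k,m}$ are pairwise independent signs, yet $\widehat{S}_k$ is visibly non-normal; for instance $\mathbb{P}(\widehat{S}_\infty \leq 0) = \mathbb{P}(\chi^2_1 \leq 1) \approx 0.683$ rather than $1/2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
k, reps = 100, 4000
iu, ju = np.triu_indices(k, 1)                   # the n_k = k(k-1)/2 pairs l < m
nk = iu.size
Y = rng.choice(np.array([-1, 1], dtype=np.int8), size=(reps, k))
X = Y[:, iu] * Y[:, ju]                          # pairwise independent +/-1 variables
S_hat = X.sum(axis=1) / np.sqrt(nk)
print(S_hat.mean(), S_hat.var())                 # roughly 0 and 1
print((S_hat <= 0).mean())                       # roughly 0.683, not 0.5
\end{verbatim}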
2013-q3 | Given \( p \in (0, 1) \), consider the infinite directed, simple random graph \( G = (\mathbb{Z}, E) \) having only directed edges \((x, y)\) in agreement with the ordering of \( \mathbb{Z} \) (namely if \( x < y \) we can have \((x, y) \in E \) but never \((y, x) \in E \) nor \((x, x) \in E \)), where the events \( \{(x, y) \in E\} \) for \( x, y \in \mathbb{Z}, x < y, \) are mutually independent, each occurring with probability \( p \). A directed path from \( x \) to \( y \) in \( G \), of length \( k \in \mathbb{N} \), consists of a collection of integers \( x_0 = x < x_1 < x_2 < \cdots < x_k = y \) such that \( (x_j, x_{j+1}) \in E \) for all \( j \in \{0, \ldots , k-1 \}. \) Given a realization of \( G \), we say that \( x \) is connected to \( y \), denoted by \( x \to y \), if there exists a directed path (in \( G \)), from \( x \) to \( y \), and denote by \( L(x, y) \) the length of the longest directed path connecting \( x \) to \( y \). When there is no such path we write \( x \not\to y \) and set \( L(x,y) = 0 \) (by convention). In particular, \( 0 \le L(x, y) \le |x-y| \) for any \( x, y \in \mathbb{Z}, \) and \( y \not\to x \) whenever \( x < y.\) (a) Given a realization of \( G, \) we say that \( x \in \mathbb{Z} \) is a pivot if \( y \to x \) for each \( y < x, \) and \( x \to y \) for each \( y > x. \) Denoting the (random) set of all pivots by \( \mathcal{P} \subset \mathbb{Z}, \) verify that \[ q := \prod_{k \ge 1} \Big( 1 - (1 - p)^k \Big) , \] is strictly positive and show that \( \mathbb{P}(x \in \mathcal{P}) = q^2 \) for any fixed \( x \in \mathbb{Z}. \) (b) Setting \( \mathbb{P} \cap \mathbb{Z}_{>0} := \{z^{(1)}, z^{(2)}, z^{(3)}, \ldots \}, \) where \( 0 < z^{(1)} < z^{(2)} < z^{(3)} < \cdots , \) show that \( J_i := (z^{(i+1)} - z^{(i)}) , i \ge 1 \) are i.i.d. random variables which are (mutually) independent of \( z^{(1)}. \) Further, prove that \( L[z^{(i)}, z^{(i+1)}) , i \ge 1 \) are i.i.d. random variables which are (mutually) independent of \( z^{(1)}. \) (c) Considering a suitable sub-collection of directed paths, verify that \( \mathbb{P}(x \not\to y) \le c'e^{-c(w-x)} \) for some \( c > 0 \) and any fixed \( y > x. \) Deduce that \( y^{-1}L(0, y) \) converges as \( y \to \infty, \) in probability, to some non-random \( \nu(p) \in [0, 1]. \) [Hint: If needed, use without proof the identity \( \mathbb{E}_I = 1/\mathbb{P}(x \in \mathcal{P}) \) and the fact that the longest directed path from \( 0 \) to \( y \) contains (at least) all pivots \( z^{(i)} \) with \( 0 \le z^{(i)} \le y. \) | \begin{enumerate}[label=(\alph*)]\item Recall that for any sequence $\left\{a_n\right\}_{n \geq 1}$ we have $\prod_{i \geq 1} (1-a_i)$ is positive if and only if $\sum_{i \geq 1} a_i$ is finite. Since $p \in (0,1)$, we have $\sum_{i \geq 1} (1-p)^i = (1-p)/p < \infty$ and hence $q >0$. Fix $x \in \mathbb{Z}$. We claim that the event $(y \to x \; \forall \; y<x)^c$ is same as the event $\cup_{y<x} \cap_{z:y<z \leq x}((y,z) \notin E)$. To see why this is true, first note that the later event is clearly contained in the former. Consider now that the former event happens. Take the largest $y<x$ such that $y \nrightarrow x$. Then clearly $(y,z) \notin E$ for all $y < z \leq x$ since all such $z$ satisfies $z \to x$. Thus the later event also happens. 
Similar argument shows that the event $(x \to y \; \forall \; x<y)^c$ is same as the event $\cup_{y>x} \cap_{z:x \leq z < y}((z,y) \notin E).$ Therefore, $$ (x \in \mathcal{P}) = (y \to x \; \forall \; y<x) \cap (x \to y \; \forall \; x<y) = \left[\cap_{y<x} \cup_{z:y<z \leq x}((y,z) \in E) \right]\cap \left[ \cap_{y>x} \cup_{z:x \leq z < y}((z,y) \in E)\right].$$ Using independence of the age-occurrences, we have \begin{align*} \mathbb{P}(x \in \mathcal{P}) &= \mathbb{P} \left[\cap_{y<x} \cup_{z:y<z \leq x}((y,z) \in E) \right]\mathbb{P} \left[ \cap_{y>x} \cup_{z:x \leq z < y}((z,y) \in E)\right] \\ &= \prod_{k \geq 1} \mathbb{P} \left[ \cup_{z:x-k<z \leq x}((x-k,z) \in E) \right] \prod_{k \geq 1} \mathbb{P} \left[ \cup_{z:x \leq z < x+k}((z,x+k) \in E)\right] \\ &= \prod_{k \geq 1}\left[1- \mathbb{P} \left( \cap_{z:x-k<z \leq x}((x-k,z) \notin E)\right) \right] \prod_{k \geq 1} \left[1-\mathbb{P} \left( \cap_{z:x \leq z < x+k}((z,x+k) \notin E)\right)\right] \\ &= \prod_{k \geq 1}\left[1- \prod_{i=0}^{k-1} \mathbb{P} \left( (x-k,x-i) \notin E\right) \right] \prod_{k \geq 1} \left[1-\prod_{i=0}^{k-1}\mathbb{P} \left( (x+i,x+k) \notin E\right)
ight] \\ &=\prod_{k \geq 1}\left[1- (1-p)^k \right] \prod_{k \geq 1} \left[1-(1-p)^k\right] = q^2.\end{align*}\item Define $\mathcal{F}_n = \sigma( \mathbbm{1}((x,y) \in E); x,y \leq n)$ to be the sigma field containing the information of edges to the left of $n$. We define some important events. For $j_1<j_2$, let $A_{j_1,j_2}$ be the event that for all $k \in \mathbb{Z} \cap (j_1,j_2)$, we have either some $j_1 \leq t <k$ such that $t \nrightarrow k$ or we have some $k<t\leq j_2$ such that $k \nrightarrow t$. For $j_1<j_2$, let $BL_{j_1,j_2}$ be the event that $k \rightarrow j_2$ for all $j_1 \leq k <j_2$ and $BR_{j_1,j_2}$ be the event that $j_1 \rightarrow k$ for all $j_1<k \leq j_2$. In particular, $BL_{-\infty,j}$ is the event that $k \rightarrow j$ for all $k<j$ and $BR_{j,\infty}$ is the event that $j \rightarrow k$ for all $k >j$. Finally, for $j>0$, let $T_j$ be the event that for all $k \in \mathbb{Z} \cap (0,j)$, we have either some $t <k$ such that $t \nrightarrow k$ or we have some $k<t\leq j$ such that $k \nrightarrow t$. The following expression then holds true for any $j \geq 2$, $0<l_1 < \ldots < l_{j}$. \begin{align*} \left(z^{(1)}=l_1, \ldots, z^{(j)}=l_j \right) &= T_{l_1} \cap \left[\cap_{k=1}^{j-1} A_{l_k,l_{k+1}} \right] \cap BL_{-\infty,l_1} \cap \left[\cap_{k=2}^{j} BL_{l_{k-1},l_k} \right] \cap \left[\cap_{k=1}^{j-1} BR_{l_k,l_{k+1}} \right] \cap BR_{l_j, \infty} \\ &= D_{l_1, \ldots,l_j} \cap BR_{l_j, \infty},\end{align*} where $D_{l_1,\ldots, l_j} \in \mathcal{F}_{l_j}$. The last line follows from observing the fact that $A_{j_1,j_2}, BL_{j_1,j_2}, BL_{-\infty,j_2}, BR_{j_1,j_2}, T_{j_2} \in \mathcal{F}_{j_2}$. Therefore, for $k \geq 1$, \begin{align*} \mathbb{P}\left( z^{(j+1)}-z^{(j)}=k \mid z^{(1)}=l_1, \ldots, z^{(j)}=l_j \right) &= \mathbb{P} \left( A_{l_j,l_j+k} \cap BL_{l_j,l_j+k} \cap BR_{l_j+k, \infty} \mid D_{l_1, \ldots,l_j}, BR_{l_j, \infty} \right).\end{align*} By construction, $A_{l_j,l_j+k}, BL_{l_j,l_j+k}, BR_{l_j+k, \infty} \perp \!\!\!\perp \mathcal{F}_{l_j}$ and therefore, \begin{align*} \mathbb{P}\left( z^{(j+1)}-z^{(j)}=k \mid z^{(1)}=l_1, \ldots, z^{(j)}=l_j \right) &= \mathbb{P} \left( A_{l_j,l_j+k} \cap BL_{l_j,l_j+k} \cap BR_{l_j+k, \infty} \mid BR_{l_j, \infty} \right) \\ &= \mathbb{P} \left( A_{0,k} \cap BL_{0,k} \cap BR_{k, \infty} \mid BR_{0, \infty} \right),\end{align*} where the last line follows from the fact that the probabilities of those events are translation-invariant. Since the last expression does not depend on $j,l_1,\ldots,l_j$, it proves that $(z^{(j+1)}-z^{(j)}) \perp \!
obreak\!\perp (z^{(j)},\ldots,z^{(1)})$ and $J_j:=z^{(j+1)}-z^{(j)} \sim G_J$, for some distribution function $G_J$ that does not depend on $j$. Apply induction to conclude that the collection $\left\{J_i\right\}_{j \geq 1}$ is an $iid$ collection which is independent to $z^{(1)}.$ The proof for the next part is quite similar. Again for any $j \geq 2$, $0<l_1<\cdots<l_j$, $1 \leq d_i \leq |l_{i+1}-l_i|$ for all $1 \leq i \leq j-1$ and $k \geq 1$ with $1 \leq d \leq k$, we can observe that the event $\left( z^{(1)}=l_1, \ldots, z^{(j)}=l_j,L_1=d_1, \ldots,L_{j-1}=d_{j-1}\right)$ can be written as $\widetilde{D}_{l_1, \ldots,l_j,d_1,\ldots,d_{j-1}} \cap BR_{l_j, \infty}$ where $\widetilde{D}_{l_1, \ldots,l_j,d_1,\ldots,d_{j-1}} \in \mathcal{F}_{l_j}$. Hence, the probability of the event $\left(z^{(j+1)}-z^{(j)}=k, L(z^{(j)},z^{(j+1)})=d \right)$ conditioned on the event $\left( z^{(1)}=l_1, \ldots, z^{(j)}=l_j,L_1=d_1, \ldots,L_{j-1}=d_{j-1}\right)$ is equal to $\mathbb{P} \left( A_{l_j,l_j+k} \cap BL_{l_j,l_j+k} \cap BR_{l_j+k, \infty} \cap (L(l_j,l_j+k)=d) \mid BR_{l_j, \infty} \right)$. By translation invariance, the last expression becomes $\mathbb{P} \left( A_{0,k} \cap BL_{0,k} \cap BR_{k, \infty} \cap (L(0,k)=d) \mid BR_{0, \infty} \right)$ which does not depend on $j,l_1,\ldots, l_j,$ and $d_1, \ldots, d_{j-1}$. The conclusion now follows similarly as in the previous case. \item By translation invariance we have $\mathbb{P}(x \nrightarrow y) = \mathbb{P}(0 \nrightarrow y-x)$. Note that for any $k \geq 2$, we have \begin{align*} \mathbb{P}(0 \nrightarrow k) \leq \mathbb{P}\left(\bigcap_{j=1}^{k-1} \left[ (0,j) \notin E \text{ or } (j,k) \notin E\right] \bigcap (0,k) \notin E)\right) &= \mathbb{P}((0,k) \notin E)\prod_{j=1}^{k-1} \mathbb{P} \left[ (0,j) \notin E \text{ or } (j,k) \notin E \right] \\ & = (1-p)(1-p^2)^{k-1} \leq (1-p^2)^k.\end{align*} On the otherhand, $\mathbb{P}(0 \nrightarrow 1) = \mathbb{P}((0,1) \notin E) = 1-p \leq 1-p^2$. Thus $\mathbb{P}(x \nrightarrow y) \leq \exp(-\gamma(y-x))$ where $\gamma=-\log(1-p^2)$. Set $z^{(0)}:=0$ and for $y>0$, let $N_y:= \max \left\{i \geq 0 \mid z^{(i)} \leq y\right\}=\sum_{x=1}^y \mathbbm{1}(x \in \mathcal{P})$. The remaining part is an exercise in Renewal Theory. We shall solve it in several steps. \begin{enumerate}[label=(\arabic*)] \item \textbf{Step 1 :} In this step we shall show that $z^{(1)}< \infty$ almost surely and $\mathbb{E}J=1/q^2.$ If $|\mathcal{P} \cap \mathbb{Z}_{>0}| < \infty$, then $\lim_{y \to \infty} N_y/y =0$. Otherwise, $z^{(1)}<\infty$, $J_i < \infty$ for all $i \geq 1$ and hence $N_y \uparrow \infty$ as $y \uparrow \infty$. Moreover we have $$ \dfrac{z^{(1)}+\sum_{1 \leq i \leq N_y-1} J_i}{N_y} \leq \dfrac{y}{N_y} \leq \dfrac{z^{(1)}+\sum_{1 \leq i \leq N_y} J_i}{N_y}. $$ Since, $z^{(1)} < \infty$, by SLLN we can conclude that on $(|\mathcal{P} \cap \mathbb{Z}_{>0}| = \infty)$, we have $\lim_{y \to \infty} N_y/y =1/\mathbb{E}J$, almost surely. Therefore, $$ \lim_{y \to \infty} \dfrac{N_y}{y} = \dfrac{1}{\mathbb{E}J}\mathbbm{1}\left(|\mathcal{P} \cap \mathbb{Z}_{>0}| = \infty \right), \; \; \text{almost surely}$$ Since, $0 \leq N_y \leq y$, DCT implies that $$ \lim_{y \to \infty} \dfrac{\mathbb{E}N_y}{y} = \dfrac{1}{\mathbb{E}J}\mathbb{P}\left(|\mathcal{P} \cap \mathbb{Z}_{>0}| = \infty \right).$$ But $\mathbb{E}N_y = \sum_{x=1}^y \mathbb{P}(x \in \mathcal{P})=yq^2.$ Hence, $\mathbb{P}\left(|\mathcal{P} \cap \mathbb{Z}_{>0}| = \infty \right)=q^2\mathbb{E}J$ and hence $\mathbb{E}J < \infty$. 
Notice that \begin{align*} \mathbb{P} (J>k) = \mathbb{P}\left( 1,\ldots, k \notin \mathcal{P} \mid 0 \in \mathcal{P}\right) &= \dfrac{\mathbb{P}\left( 1,\ldots, k \notin \mathcal{P}\right)-\mathbb{P}\left( 0,\ldots, k \notin \mathcal{P}\right)}{\mathbb{P}(0 \in \mathcal{P}) }\ \&= \dfrac{\mathbb{P}\left( 1,\ldots, k \notin \mathcal{P}\right)-\mathbb{P}\left( 1,\ldots, k+1 \notin \mathcal{P}\right)}{q^2} \\ & = \dfrac{\mathbb{P}\left( z^{(1)}>k\right)-\mathbb{P}\left( z^{(1)}>k+1 \right)}{q^2}.\end{align*} Since, $\mathbb{E}J = \sum_{k \geq 0} \mathbb{P}(J >k) < \infty$, we have $\mathbb{P}(z^{(1)}>k) \to 0$ as $k \to \infty$. Hence, $z^{(1)} < \infty$, almost surely. Along with the fact that $\mathbb{E}J< \infty$, this implies $z^{(1)},z^{(2)}, \ldots, < \infty$ almost surely and hence $\mathbb{P}\left(|\mathcal{P} \cap \mathbb{Z}_{>0}| = \infty \right)=1$. This yields that $\mathbb{E}J=1/q^2$. \item \textbf{Step 2 :} we have $$ \sum_{y \geq 1} \mathbb{P}(0 \nrightarrow y) \leq \sum_{y \geq 1} \exp(-\gamma y) < \infty. $$ By \textit{Borel-Cantelli Lemma 1}, we have $0 \rightarrow y$ for all large enough $y$ almost surely. \item \textbf{Step 3 :} Observe that the longest path from $0$ to $y$ should pass through $\mathcal{P}\cap [0,y]$. This is true because if an edge of the path passes over any pivot, we can replace that edge by a path, connecting the left end of the earlier edge to the pivot and then connecting to the right end. Since this increases path length by at least $1$, our assertion holds. Using step 2, we can therefore write that, almost surely for large enough $y$, $$ \dfrac{L(0,z^{(1)})+\sum_{1 \leq j \leq N_y-1}L_j }{y} \leq \dfrac{L(y)}{y} \leq \dfrac{L(0,z^{(1)})+\sum_{1 \leq j \leq N_y}L_j }{y}. $$ Since, $N_y \uparrow \infty$ as $y \uparrow \infty$ almost surely, we have by SLLN the following. $$ \lim_{y \to \infty} \dfrac{L(y)}{y} = \mathbb{E}L\lim_{y \to \infty} \dfrac{N_y}{y} = \dfrac{\mathbb{E}L}{\mathbb{E}J}=q^{-2}\mathbb{E}(L) = v_{*}(p), \; \text{almost surely}.$$ Since, $L(y) \leq y$ for all $y \geq 0$, we have $v_{*}(p) \in (0,1]$. It can't be $0$ since $\mathbb{E}L \geq 1$.\end{enumerate}\end{enumerate} |
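The identity $\mathbb{P}(x \in \mathcal{P}) = q^2$ from part (a) can be checked by simulation. The sketch below is not part of the solution; it assumes \texttt{numpy}, restricts the graph to a finite window $\{-K,\ldots,K\}$ around $x=0$ (which changes the probability only by a term of order $\sum_{k > K}(1-p)^k$, negligible for the parameters chosen), and exploits the fact that directed paths are increasing, so reachability can be computed by a forward and a backward sweep.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
p, K, reps = 0.6, 30, 4000
q = np.prod(1.0 - (1.0 - p) ** np.arange(1, 200))    # truncated infinite product
m = 2 * K + 1                                         # vertices -K..K; x = 0 is index K
count = 0
for _ in range(reps):
    adj = np.triu(rng.random((m, m)) < p, k=1)        # edge (i,j), i < j, present w.p. p
    fwd = np.zeros(m, dtype=bool); fwd[K] = True      # fwd[j]: is there a path x -> j ?
    for j in range(K + 1, m):
        fwd[j] = np.any(adj[:j, j] & fwd[:j])
    bwd = np.zeros(m, dtype=bool); bwd[K] = True      # bwd[i]: is there a path i -> x ?
    for i in range(K - 1, -1, -1):
        bwd[i] = np.any(adj[i, i + 1:] & bwd[i + 1:])
    count += fwd[K + 1:].all() and bwd[:K].all()
print(count / reps, q ** 2)                           # both roughly 0.20 for p = 0.6
\end{verbatim}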
2013-q4 | The random vector \( Z = (Z_1, \ldots, Z_d) \) follows the multivariate normal distribution \( N(\mu, V) \) of mean \( \mu_i = \mathbb{E}Z_i = 0 \) for all \( i \), and covariance \( V_{ij} := \mathbb{E}[Z_iZ_j] \) such that \( V_{ii} = 1 \) for all \( i \). (a) Provide necessary and sufficient condition on \( V \) to ensure that with probability one, the index \[ j^* := \arg \max_{1 \le j \le d} Z_j , \] is uniquely attained. (b) Assuming hereafter that \( j^* \) is unique with probability one, define the random variables \[ M_j := \max_{1 \le i \le d,i \ne j} \left\{ \frac{Z_i - V_{ij} Z_j}{1 - V_{ij}} \right\} , \quad 1 \le j \le d . \] Show that \( M_j \) is independent of \( Z_j, \) for each \( 1 \le j \le d. \) (c) Let \( \Phi(x) := \int_{-\infty}^x \frac{e^{-t^2/2}}{\sqrt{2\pi}} \, dt \) and \[ U(z, m) := \frac{1-\Phi(z)}{1-\Phi(m)} . \] Verify that \( j^* = j \) if and only if \( Z_j \ge M_j \), then prove that \( U(Z_{j^*}, M_{j^*}) \) is uniformly distributed on (0, 1). | \begin{enumerate}[label=(\alph*)]\item We need to have $\sup_{i\neq j}V_{ij}<1$. If this is true then with probability one all $Z_i$'s are distinct and hence the maximum is attained uniquely. On the other hand, if $V_{ij}=1$ for some $i \neq j$, then $Z_i = Z_j$ almost surely and with positive probability $Z_i \geq Z_k$, for all $k \neq i$, in which case $j^{\star} \in \{i,j\}.$ \item Note that $\dfrac{Z_i-V_{ij}Z_j}{1-V_{ij}}$ is independent of $Z_j$ for all $i$. This follows from the joint normality of $\left(Z_i-V_{ij}Z_j,Z_j \right)$ and the fact that $\operatorname{Cov}(Z_i-V_{ij}Z_j,Z_j) = \operatorname{Cov}(Z_i,Z_j)-V_{ij}\operatorname{Var}(Z_j)=0$. Hence if we take the maximum over $i$ the result, $M_j$, is still independent of $Z_j$ . \item Note that, since $V_{ij}<1$ for all $i \neq j$ we have, \begin{align*} j^{*} = j \iff Z_j \geq Z_i, \;orall \; 1 \leq i \leq d, i \neq j & \iff Z_i-V_{ij}Z_j \leq (1-V_{ij})Z_j, \;\forall \; 1 \leq i \leq d, i \neq j \ \& \iff \dfrac{ Z_i-V_{ij}Z_j}{1-V_{ij}} \leq Z_j, \;\forall \; 1 \leq i \leq d, i \neq j \ \& \iff M_j = \max_{1 \leq i \leq d , i \neq j} \dfrac{ Z_i-V_{ij}Z_j}{1-V_{ij}} \leq Z_j. \end{align*} Thus $j^{*}=j$ if and only if $Z_j \geq M_j$. In particular, $M_{j^{*}} \leq Z_{j^{*}}$. Therefore, $$ U(Z_{j^{*}},M_{j^{*}}) = \dfrac{(1-\Phi(Z_{j^{*}}))}{(1-\Phi(M_{j^{*}}))} \in [0,1].$$ Since, $Z_j \sim N(0,1)$, we have $1-\Phi(Z_j) \sim \text{Uniform}(0,1)$. Fix $u \in (0,1)$. \begin{align*} \mathbb{P}\left[U(Z_{j^{*}},M_{j^{*}}) \leq u \right] = \sum_{j=1}^d \mathbb{P}\left[U(Z_{j^{*}},M_{j^{*}}) \leq u, j^{*}=j \right] &= \sum_{j=1}^d\mathbb{P}\left[1-\Phi(Z_j) \leq u(1-\Phi(M_j)), Z_j \geq M_j \right] \\ &\stackrel{(i)}{=} \sum_{j=1}^d\mathbb{P}\left[1-\Phi(Z_j) \leq u(1-\Phi(M_j)) \right] \\ &\stackrel{(ii)}{=} u\sum_{j=1}^d \mathbb{E} \left[ 1-\Phi(M_j)\right], \end{align*} where $(i)$ is true since $u \leq 1$ and $(ii)$ is true because $Z_j \perp \!\!\perp M_j$ with $1-\Phi(Z_j) \sim \text{Uniform}(0,1)$ and $u(1-\Phi(M_j)) \in [0,1]$. Plugging in $u=1$, we get $$ 1= \mathbb{P}\left[U(Z_{j^{*}},M_{j^{*}}) \leq 1 \right] = \sum_{j=1}^d \mathbb{E} \left[ 1-\Phi(M_j)\right].$$ Thus, $\mathbb{P}\left[U(Z_{j^{*}},M_{j^{*}}) \leq u \right]=u$, for all $u \in [0,1]$ concluding our proof. \end{enumerate} |
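A simulation sketch of part (c), not part of the solution: it assumes \texttt{numpy} and \texttt{scipy}, and uses an equicorrelated covariance matrix with off-diagonal entries $0.5$ purely as an example satisfying the uniqueness condition of part (a); the reported quantiles of $U(Z_{j^*}, M_{j^*})$ should be close to those of a Uniform$(0,1)$ random variable.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
d, rho, reps = 5, 0.5, 20000
V = np.full((d, d), rho); np.fill_diagonal(V, 1.0)
Z = rng.standard_normal((reps, d)) @ np.linalg.cholesky(V).T
j = Z.argmax(axis=1)
Zj = Z[np.arange(reps), j]
Vj = V[:, j].T                                    # row i holds (V_{1 j_i}, ..., V_{d j_i})
mask = (np.arange(d) == j[:, None])
denom = 1.0 - Vj
denom[mask] = 1.0                                 # dummy value; these entries are dropped
M = (Z - Vj * Zj[:, None]) / denom
M[mask] = -np.inf
Mj = M.max(axis=1)
U = (1.0 - norm.cdf(Zj)) / (1.0 - norm.cdf(Mj))
print(np.quantile(U, [0.25, 0.5, 0.75]))          # roughly (0.25, 0.5, 0.75)
\end{verbatim}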
2013-q5 | You are given integrable random variables \( X, Y_0 \) and \( Z_0 \) on the same probability space \( (\Omega, \mathcal{F}, \mathbb{P}) \), and two \( \sigma \)-algebras \( \mathcal{A} \subset \mathcal{F}, \mathcal{B} \subset \mathcal{F}. \) For \( k = 1, 2, \ldots, \), let \[ Y_k := \mathbb{E}[X|\sigma(\mathcal{A}, Z_0, \ldots, Z_{k-1})] , \quad Z_k := \mathbb{E}[X|\sigma(\mathcal{B}, Y_0, \ldots, Y_{k-1})] . \] (a) Show that there exist integrable random variables \( Y_\infty \) and \( Z_\infty \) such that as \( n \to \infty \) both \( \mathbb{E}[|Y_n - Y_\infty|] \to 0 \) and \( \mathbb{E}[|Z_n - Z_\infty|] \to 0 \). (b) Prove that almost surely \( Y_\infty = Z_\infty. \) | \begin{enumerate}[label=(\alph*)]\item Introduce the notations, $\mathcal{F}_n:=\sigma(\mathcal{A},Z_0, \ldots, Z_{n-1})$ and $\mathcal{G}_n:=\sigma(\mathcal{B},Y_0, \ldots, Y_{n-1})$, for all $n \geq 1$ and $\mathcal{F}_0:=\mathcal{A}$, $\mathcal{G}_0:=\mathcal{B}$. Clearly, both are filtrations and by definition, $Y_n=\mathbb{E} \left[X \rvert \mathcal{F}_n \right]$ and $Z_n=\mathbb{E} \left[X \rvert \mathcal{G}_n \right]$, for all $n \geq 0$. Thus $\left\{Y_n\right\}_{n \geq 0}$ (or $\left\{Z_n\right\}_{n \geq 0}$ ) is a \textit{Doob's Martingale} with respect to filtration $\left\{\mathcal{F}_n\right\}_{n \geq 0}$ (or $\left\{\mathcal{G}_n\right\}_{n \geq 0}$) and hence is U.I., $X$ being integrable. (see \cite[Proposition 4.2.33]{dembo}). Now apply the fact that a Martingale is U.I. if and only if it converges in $L^1$ (see \cite[Proposition 5.3.12]{dembo}) and conclude. \item We provide two different proofs. \begin{enumerate}[label=(1)] \item Observe the following identity which follows from \textit{Tower Property}. \begin{equation}{\label{tower}} \mathbb{E} \left[Y_n \rvert \mathcal{G}_n \right] = \mathbb{E} \left[ \mathbb{E} \left[X \rvert \mathcal{F}_n \right] \rvert \mathcal{G}_n \right] = \mathbb{E} \left[ \mathbb{E} \left[X \rvert \mathcal{G}_n \right] \rvert \mathcal{F}_n \right] = \mathbb{E} \left[Z_n \rvert \mathcal{F}_n \right].\end{equation} Note that $\mathbb{E}\rvert \mathbb{E} \left[Y_n \rvert \mathcal{G}_n \right] - \mathbb{E} \left[Y_{\infty} \rvert \mathcal{G}_n \right]\rvert \leq \mathbb{E} |Y_n - Y_{\infty}| =o(1)$ and by \textit{Levy's Upward Theorem}, $\mathbb{E} \left[Y_{\infty} \rvert \mathcal{G}_n \right] \stackrel{L^1}{\longrightarrow} \mathbb{E} \left[Y_{\infty} \rvert \mathcal{G}_{\infty} \right]$, where $\mathcal{G}_{\infty} = \sigma \left( \cup_{n \geq 0} \mathcal{G}_n\right) = \sigma(\mathcal{B},Y_0,Y_1 \ldots).$ It is evident that $Y_{\infty} \in m\mathcal{G}_{\infty}$ (since it is almost sure limit of $Y_n \in m\mathcal{G}_n$) and hence $\mathbb{E} \left[Y_{\infty} \rvert \mathcal{G}_{\infty} \right]=Y_{\infty}$ almost surely. Combining the above observations we conclude that $\mathbb{E} \left[Y_n \rvert \mathcal{G}_n \right] \stackrel{L^1}{\longrightarrow} Y_{\infty}.$ Similarly, $\mathbb{E} \left[Z_n \rvert \mathcal{F}_n \right] \stackrel{L^1}{\longrightarrow} Z_{\infty}.$ Now invoke Equation~({\ref{tower}}), to conclude that $Y_{\infty}=Z_{\infty}$ almost surely. 
\item Since $X$ is integrable, we have by \textit{Levy's Upward Theorem}, $Y_{\infty}= \lim_{n \to \infty} Y_n = \lim_{n \to \infty}\mathbb{E} \left[X \rvert \mathcal{F}_n \right] = \mathbb{E} \left[X \rvert \mathcal{F}_{\infty} \right].$ Similarly, $Z_{\infty} = \mathbb{E} \left[X \rvert \mathcal{G}_{\infty} \right].$ On the otherhand, in first proof we have observed that $Y_{\infty} \in m\mathcal{G}_{\infty}$ and hence $Y_{\infty} \in m\mathcal{G}_{\infty} \cap m\mathcal{F}_{\infty} = m\left(\mathcal{G}_{\infty} \cap \mathcal{F}_{\infty}\right)$. This yields the following. $$ Y_{\infty} = \mathbb{E} \left[Y_{\infty} \rvert \mathcal{G}_{\infty} \cap \mathcal{F}_{\infty} \right] = \mathbb{E} \left[ \mathbb{E} \left[X \rvert \mathcal{F}_{\infty} \right]\rvert \mathcal{G}_{\infty} \cap \mathcal{F}_{\infty} \right] = \mathbb{E} \left[X\rvert \mathcal{G}_{\infty} \cap \mathcal{F}_{\infty} \right]. $$ Similarly, $Z_{\infty} = \mathbb{E} \left[X\rvert \mathcal{G}_{\infty} \cap \mathcal{F}_{\infty} \right].$ This proves that $Y_{\infty}=Z_{\infty}$ almost surely. \end{enumerate} \end{enumerate} |
2013-q6 | Let \( X(s, t) := \inf_{u \in [s,t]} \{W(u)\}, \) for standard Brownian motion \( W(t), \) starting at \( W(0) = 0. \) For any \( t > 1 \) and \( \varepsilon > 0, \) denote by \( f_{t, \varepsilon}(x) \) the probability density of \( W(1) \) at \( x \ge 0, \) conditioned on the event \( \{X(0, t) > -\varepsilon\}. \) (a) Express \( f_{t, \varepsilon}(x) \) in terms of the standard normal distribution function \( \Phi(\cdot).\) (b) Compute the density \( g(x) \) of \( R := \sqrt{G_1^2 + G_2^2 + G_3^2}, \) for i.i.d. standard normal variables \( G_i, i = 1,2,3. \) Then deduce from (a) that \[ g(x) = \lim_{t \to \infty, \varepsilon \downarrow 0} \{f_{t, \varepsilon}(x)\} . \] | \begin{enumerate}[label=(\alph*)]\item We define $M_t = \max_{u \in [0,t]} W_u$. By definition, for $x >0$, \begin{align*} f_{t,\varepsilon}(x) = \dfrac{\partial}{\partial y }\dfrac{\mathbb{P}(\min_{u \in [0,t]} W_u > -\varepsilon, W_1 \leq y)}{\mathbb{P}P(\min_{u \in [0,t]} W_u > -\varepsilon)} \Bigg \rvert_{y=x} &= \dfrac{\partial}{\partial y }\dfrac{\int_{-\infty}^{y}\mathbb{P}(\min_{u \in [0,t]} W_u > -\varepsilon \mid W_1 = t)f_{W_1}(t) \, dt}{\mathbb{P}(\min_{u \in [0,t]} W_u > -\varepsilon)} \Bigg \rvert_{y=x} \\ &= \dfrac{\mathbb{P}(\min_{u \in [0,t]} W_u > -\varepsilon \mid W_1 = x)f_{W_1}(x)}{\mathbb{P}(\min_{u \in [0,t]} W_u > -\varepsilon)}. \end{align*} By symmetry of BM and \textit{Reflection Principle}, we have the denominator as follows. \begin{equation}{\label{deno}} \mathbb{P}\left(\min_{u \in [0,t]} W_u > -\varepsilon \right) = \mathbb{P} \left(M_t=\max_{u \in [0,t]} W_u < \varepsilon \right) = 1-\mathbb{P}(M_t \geq \varepsilon) = 1- 2\mathbb{P}(W_t \geq \varepsilon) = 2\Phi \left(\dfrac{\varepsilon}{\sqrt{t}}\right) - 1. \end{equation} In the numerator, $\mathbb{P}(\min_{u \in [0,t]} W_u > -\varepsilon \mid W_1 = x)$ equals $\mathbb{P}(M_t < \varepsilon \mid W_1 = -x)$ by symmetry. Using the Markov property, we can further split this as \begin{align*} \mathbb{P}(M_t < \varepsilon \mid W_1 = -x) &= \mathbb{P}(M_1 < \varepsilon \mid W_1 = -x)\mathbb{P}\left(\max_{1 \leq u \leq t} W_u < \varepsilon \mid W_1 = -x, M_1< \varepsilon \right) \\ &=\mathbb{P}(M_1 < \varepsilon \mid W_1 = -x)\mathbb{P}\left(\max_{1 \leq u \leq t} W_u < \varepsilon \mid W_1 = -x \right). \end{align*} So the numerator of $f_{t,\varepsilon}(x)$ is \begin{equation}{\label{num}} \mathbb{P}(M_t < \varepsilon \mid W_1 = -x) f_{W_1}(x) = \mathbb{P}(M_1 < \varepsilon \mid W_1 = -x) f_{W_1}(-x) \mathbb{P}(\max_{1 \leq u \leq t} W_u < \varepsilon \mid W_1 = -x). \end{equation} Using the independent increment property for BM, the last term of (\ref{num}) reduces to $\mathbb{P}(\max_{1 \leq u \leq t} W_u < \varepsilon \mid W_1 = -x) = \mathbb{P}(M_{t-1} < \varepsilon + x) = 2 \Phi\left(\frac{\varepsilon + x}{\sqrt{t-1}} \right) - 1.$ To compute the first two terms of (\ref{num}), we integrate the joint density of $(W_1,M_1)$. 
By \cite[Exercise 10.1.12]{dembo}, the joint density of $(W_1,M_1)$ is $f_{W_1,M_1}(x,u) = \dfrac{2(2u-x)}{\sqrt{2\pi}} \exp\left(-\dfrac{1}{2}(2u-x)^2\right) \mathbbm{1}(u>x \vee 0).$ So the first two terms of (\ref{num}) are \begin{align*} \mathbb{P}(M_1 < \varepsilon \mid W_1 = -x) f_{W_1}(x) &= \mathbb{P}(M_1 < \varepsilon \mid W_1 = -x) f_{W_1}(-x) \\ &= f_{W_1}(-x) \int_0^\varepsilon \dfrac{f_{W_1,M_1}(-x,u)}{f_{W_1}(-x)} du \\ &=\int_0^\varepsilon \dfrac{2(2u+x)}{\sqrt{2\pi}} \exp\left(-\dfrac{1}{2}(2u+x)^2\right) du \\ &=\int_{x^2/2}^{(2\varepsilon+x)^2/2} \dfrac{1}{\sqrt{2\pi}} \exp\left(-y\right) dy \\ &=\frac{1}{\sqrt{2\pi}} \left( e^{-x^2/2} - e^{-(2\varepsilon+x)^2/2}\right). \end{align*} Putting it all together and simplifying, we get \[f_{t,\varepsilon}(x) = \dfrac{\frac{1}{\sqrt{2\pi}} \left( e^{-x^2/2} - e^{-(2\varepsilon+x)^2/2}\right) \left(2\Phi\left(\dfrac{\varepsilon+x}{\sqrt{t-1}}\right) - 1\right)}{2\Phi\left(\dfrac{\varepsilon}{\sqrt{t}}\right) - 1}. \] \item It is clear that $R$ is the square root of a $\chi^2_3 = \text{Gamma}(3/2,1/2)$ random variable. Hence the density of $R^2$ is $$ f_{R^2}(b) = \dfrac{1}{\sqrt{2\pi}} b^{1/2}\exp(-b/2)\mathbbm{1}(b>0).$$ Using change of variable formula on the above ,we get \[f_R(x) = g(x) = \sqrt{\frac{2}{\pi}} x^2 e^{-x^2/2}\mathbbm{1}(x>0).\] First use \textit{L'Hospital's Rule} to take limit as $\varepsilon \downarrow 0$ to $f_{t,\varepsilon}(x)$ to get the following. \begin{align*} \lim_{\varepsilon \downarrow 0} f_{t,\varepsilon}(x) &= \left(2\Phi\left(\dfrac{x}{\sqrt{t-1}}\right) - 1\right) \dfrac{\exp(-x^2/2)}{\sqrt{2\pi}} \lim_{\varepsilon \downarrow 0} \dfrac{\left( 1 - e^{-2\varepsilon x-2\varepsilon^2}\right) }{2\Phi\left(\dfrac{\varepsilon}{\sqrt{t}}\right) - 1} \\ &= \left(2\Phi\left(\dfrac{x}{\sqrt{t-1}}\right) - 1\right) \dfrac{\exp(-x^2/2)}{\sqrt{2\pi}} \lim_{\varepsilon \downarrow 0} \dfrac{\exp\left({-2\varepsilon x-2\varepsilon^2}\right)(2x+4\varepsilon) }{2\phi(\varepsilon t^{-1/2})t^{-1/2}} \\ &= \sqrt{t}x\exp(-x^2/2) \left(2 \Phi\left(\dfrac{x}{\sqrt{t-1}}\right) -1 \right). \end{align*} We have as $t \to \infty$, $$ 2 \Phi\left(\dfrac{x}{\sqrt{t-1}}\right) -1 = \int_{-x/\sqrt{t-1}}^{x/\sqrt{t-1}} \phi(u) \; du = \phi(0)\dfrac{2x}{\sqrt{t-1}}(1+o(1)),$$ and therefore, $$ \lim_{t \uparrow \infty, \varepsilon \downarrow 0} f_{t,\varepsilon}(x) = 2x^2\exp(-x^2/2)\phi(0) = \sqrt{\frac{2}{\pi}} x^2 e^{-x^2/2}.$$ This matches $g(x)$, so we are done. \end{enumerate} \begin{thebibliography}{99} \bibitem{dembo} \url{https://statweb.stanford.edu/~adembo/stat-310b/lnotes.pdf} \end{thebibliography} \end{document} |
2014-q1 | Let \[ f(x) = \frac{\Gamma \left( \frac{3}{2} \right)}{\sqrt{2\pi}} \left( 1 + \frac{x^2}{2} \right)^{-3/2} = \frac{1}{2\sqrt{2}} \left( 1 + \frac{x^2}{2} \right)^{-3/2} \quad -\infty < x < \infty \] be Student's \( t \)-density on two degrees of freedom. Prove or disprove the following statement: Let \( X_1, X_2, \ldots, X_n \) be independent and identically distributed with density \( f(x) \). Let \( S_n = X_1 + \cdots + X_n \). Then there exists \( b_n \) so that for all \( x \), as \( n \) tends to \( \infty \), \[ P \left\{ \frac{S_n}{b_n} \leq x \right\} \longrightarrow \int_{-\infty}^x \frac{e^{-t^2/2}}{\sqrt{2\pi}} \, dt. \] | The trick is to use truncation and then apply the \textit{Lindeberg-Feller CLT}.
Define, for $t> 0$, $$ H(t) := \mathbb{E}\left[ |X_1|^2\mathbbm{1}(|X_1| \leq t)\right] = \int_{-t}^{t} x^2f(x) \,dx = \dfrac{1}{\sqrt{2}} \int_{0}^t x^2 (1+x^2/2)^{-3/2} \, dx = \dfrac{2^{3/2}}{\sqrt{2}} \int_{0}^t x^2 (2+x^2)^{-3/2} \, dx.$$ Set $\kappa = 2^{3/2}/\sqrt{2} = 2$ (equivalently, note that $f(x) = (2+x^2)^{-3/2}$, so that $H(t) = \kappa \int_0^t x^2 (2+x^2)^{-3/2} \, dx$). It is clear that $H(\infty) = \infty$, since $$ H(\infty)= \kappa \int_{0}^{\infty} x^2 (2+x^2)^{-3/2} \, dx \geq \kappa \int_{1}^{\infty} x^2 (2+x^2)^{-3/2} \, dx \geq \kappa \int_{1}^{\infty} x^2 (4x^2)^{-3/2} \, dx = \infty.$$
We need to find how $H$ behaves at $\infty$ for reasons to be evident later. Note that $H^{\prime}(t) = \kappa t^2(2+t^2)^{-3/2} \sim \kappa t^{-1}$ as $t \to \infty$ and hence by \textit{L'Hospital's Rule}, $H(t) \sim \kappa \log t$ as $t \to \infty$.
Introduce the truncated variables $X_{n,k} := X_k \mathbbm{1}(|X_k| \leq c_n)$, where the sequence of truncation thresholds $\left\{c_n\right\}_{n \geq 1}$ is non-decreasing and unbounded, to be chosen later.
\begin{align*} \mathbb{P} \left[ S_n/b_n \neq \sum_{k=1}^n X_{n,k}/b_n\right] &\leq \mathbb{P} \left[ |X_k| > c_n, \text{ for some } 1 \leq k \leq n\right] \\
& \leq n \mathbb{P}(|X_1| \geq c_n) \\
& = n\kappa \int_{c_n}^{\infty} (2+x^2)^{-3/2} \, dx \\
& \leq \kappa n \int_{c_n}^{\infty} x^{-3} \; dx \leq Cnc_n^{-2},\end{align*}
for some positive constant $C$. Hence, provided $nc_n^{-2} =o(1)$, we have $$ S_n/b_n - \dfrac{1}{b_n} \sum_{k=1}^n X_{n,k} \stackrel{p}{\longrightarrow} 0. $$ Using the symmetry of the t-distribution, $\mathbb{E}X_{n,k}/b_n =0$, for all $n,k$ and we also have $$ \sum_{k=1}^n \mathbb{E}b_n^{-2}X_{n,k}^2 = nb_n^{-2} \mathbb{E}X_{n,1}^2 = nb_n^{-2}H(c_n) \sim \kappa \dfrac{n \log c_n}{b_n^2}.$$ Suppose $\kappa n b_n^{-2}\log c_n \longrightarrow 1$ and we can make the choice for $b_n$ and $c_n$ such that $c_n/b_n \to 0$ as $n \to \infty$. Fix $\varepsilon >0$. Then, for large enough $n$, we have $ \varepsilon b_n > c_n$ and thus $$ \sum_{k=1}^n \mathbb{E} \left[ b_n^{-2}X_{n,k}^2\mathbbm{1}(b_n^{-1}|X_{n,k}| \geq \varepsilon)\right] = nb_n^{-2} \mathbb{E} \left[ X_{n,1}^2\mathbbm{1}(|X_{n,1}| \geq \varepsilon b_n)\right] = 0,$$ since $|X_{n,1}| \leq c_n$ almost surely.
Combining the above conclusions and applying the \textit{Lindeberg-Feller CLT} along with \textit{Slutsky's Lemma}, we conclude that $b_n^{-1}\sum_{k=1}^n X_{n,k}$, and hence $S_n/b_n$, converges in distribution to $N(0,1)$ provided $nc_n^{-2}=o(1)$, $c_n=o(b_n)$ and $\kappa nb_n^{-2} \log c_n = 1+o(1)$. Take the choices $b_n =\sqrt{\kappa n \log n/2} = \sqrt{n \log n}$ (recall $\kappa=2$) and $c_n = \sqrt{n \log \log n}$. These choices can be seen to work out and hence $$ \dfrac{ S_n }{b_n} = \dfrac{S_n}{\sqrt{ n \log n}} \stackrel{d}{\longrightarrow} N(0,1),$$ as $n \to \infty$, which proves the statement with $b_n = \sqrt{n \log n}$.
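For completeness, here is a quick check (added here; not part of the original write-up) that the three conditions indeed hold for $b_n = \sqrt{n \log n}$ and $c_n = \sqrt{n \log \log n}$:
$$ nc_n^{-2} = \dfrac{1}{\log \log n} = o(1), \qquad \dfrac{c_n}{b_n} = \sqrt{\dfrac{\log \log n}{\log n}} \longrightarrow 0, \qquad \kappa n b_n^{-2}\log c_n = \dfrac{\log n + \log \log \log n}{\log n} \longrightarrow 1.$$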
2014-q2 | Let \( X_i, 1 \leq i < \infty \), be independent identically distributed positive integer random variables with \( P(X_i = j) = p_j \). Let \( N_n \) be the number of distinct values among \( X_1, X_2, \ldots, X_n \). Prove that \( N_n/n \to 0 \) almost surely as \( n \to \infty \). | We have $\left\{X_i : i \geq 1\right\}$ are i.i.d. positive integer random variables with $\mathbb{P}(X_i=j) = p_j$. $N_n$ is the number of distinct values among $X_1, \ldots, X_n$. Fix any $k \geq 1$. It is easy to see that \begin{align*} N_n = \sum_{j \geq 1} \mathbbm{1}\left( X_i=j, \; \text{for some } 1 \leq i \leq n\right) &\leq \sum_{j=1}^k \mathbbm{1}\left( X_i=j, \; \text{for some } 1 \leq i \leq n\right) + \sum_{i=1}^n \mathbbm{1}(X_i>k) \\
& \leq k + \sum_{i=1}^n \mathbbm{1}(X_i>k).\end{align*} Hence, by the strong law of large numbers, $\limsup_{n \to \infty} N_n/n \leq \limsup_{n \to \infty} n^{-1} \sum_{i=1}^n \mathbbm{1}(X_i>k) = \mathbb{P}(X_1 >k)$, almost surely. Taking $k \uparrow \infty$ (combining the countably many exceptional null sets), and noting that $N_n \geq 0$, we get $N_n/n \longrightarrow 0$ almost surely as $n \to \infty$.
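As a concrete illustration (an added example, not part of the original argument), suppose $p_j = 2^{-j}$ for $j \geq 1$. Then, using $1-(1-p)^n \leq np$,
$$ \mathbb{E}N_n = \sum_{j \geq 1}\left[1 - (1-2^{-j})^n\right] \leq \lceil \log_2 n \rceil + \sum_{j > \lceil \log_2 n \rceil} n2^{-j} \leq \log_2 n + 2, $$
so $N_n$ grows only logarithmically in this case, consistent with $N_n/n \to 0$.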
2014-q3 | Let \( X_1, X_2, \ldots \) be independent identically distributed standard normal random variables. Let \( W_n = \# \{ i|1 \leq i \leq n - 1 : X_i < X_{i+1} \} \). (a) Find the mean \( \mu_n \) of \( W_n \). (b) Prove that for all \( \varepsilon > 0 \), \[ P \left\{ \left| \frac{W_n}{n} - \frac{1}{2} \right| > \varepsilon \right\} \to 0. \] | Here $X_1,X_2, \ldots$ are i.i.d. standard normal variables. Let $W_n := \sum_{i=1}^{n-1} \mathbbm{1}(X_i < X_{i+1}).$ \begin{enumerate}[label=(\alph*)] \item By symmetry, $Y_i :=\mathbbm{1}(X_i<X_{i+1}) \sim Ber(1/2)$. Hence, $\mu_n = \mathbb{E}W_n = (n-1) \mathbb{P}(X_1<X_2) = (n-1)/2.$ \item Note that if $j-i \geq 2$, then $Y_i$ and $Y_j$ are independent. On the other hand, all the permutations of $(i,i+1,i+2)$ are equally likely to be the ordering of the indices when $X_i,X_{i+1}$ and $X_{i+2}$ are ordered from smallest to largest. So, for $1 \leq i \leq n-2$, $$ \operatorname{Cov}(Y_i,Y_{i+1}) = \mathbb{E}Y_iY_{i+1} -1/4 = \mathbb{P}(X_i<X_{i+1}<X_{i+2}) - 1/4 = -1/12.$$ Thus, \begin{align*} \operatorname{Var}(W_n) = \sum_{i=1}^{n-1} \operatorname{Var}(Y_i) + 2 \sum_{1 \leq i < j \leq n-1} \operatorname{Cov}(Y_i,Y_j) = \dfrac{n-1}{4} + 2 \sum_{i=1}^{n-2} \operatorname{Cov}(Y_i,Y_{i+1}) = \dfrac{n-1}{4} - \dfrac{n-2}{6} = \dfrac{n+1}{12}.
\end{align*} Therefore, $$ \mathbb{E}(W_n/n) = (n-1)/2n \longrightarrow 1/2, \;\; \operatorname{Var}(W_n/n) = \dfrac{n+1}{12n^2} \longrightarrow 0,$$ and hence $W_n/n \stackrel{p}{\longrightarrow} 1/2$, as $n \to \infty$. \end{enumerate} |
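To make the final step fully explicit (an added detail; the conclusion is exactly what part (b) asks for): for any $\varepsilon>0$ and $n$ large enough that $|\mathbb{E}(W_n/n) - 1/2| = 1/(2n) < \varepsilon/2$, Chebyshev's inequality gives
$$ \mathbb{P}\left\{ \left| \dfrac{W_n}{n} - \dfrac{1}{2}\right| > \varepsilon \right\} \leq \mathbb{P}\left\{ \left| \dfrac{W_n}{n} - \mathbb{E}\dfrac{W_n}{n}\right| > \dfrac{\varepsilon}{2} \right\} \leq \dfrac{4\operatorname{Var}(W_n/n)}{\varepsilon^2} = \dfrac{n+1}{3n^2\varepsilon^2} \longrightarrow 0.$$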
2014-q4 | Hellinger distance: This problem studies some properties of a notion of the distance between two probability densities on the real line. Let \( f \) and \( g \) be two such densities; the Hellinger distance between them is defined to be \[ H(f,g) := \left( \int_{-\infty}^{\infty} (\sqrt{f(x)} - \sqrt{g(x)})^2 \, dx \right)^{1/2}. \] \( a \) Show that \[ H(f,g) = \sqrt{2\,(1 - C(f,g))} \] where \[ C(f,g) = \int_{-\infty}^{\infty} \sqrt{f(x)g(x)} \, dx. \] Among all pairs \( (f,g) \) of densities, what is the largest \( H(f,g) \) can be? Under what conditions on \( f \) and \( g \) is the maximum achieved? Similarly, what is the smallest \( H(f,g) \) can be, and under what conditions is the minimum achieved? \( b \) Let \( (f_1, g_1), (f_2, g_2), \ldots \) be an infinite sequence of pairs of densities. Show that \[ \lim_{n \to \infty} C(f_n, g_n) = 0 \] if and only if there exists an infinite sequence \( A_1, A_2, \ldots \) of subsets of \( \mathbb{R} \) such that \[ \lim_{n \to \infty} \int_{A_n} f_n(x) \, dx = 0 = \lim_{n \to \infty} \int_{A_n^c} g_n(x) \, dx. \] | \begin{enumerate}[label=(\alph*)] \item Since $f$ and $g$ both are probability densities, we have $$ \int_{-\infty}^{\infty} f(x)\, dx = 1 = \int_{-\infty}^{\infty} g(x)\, dx.$$ Therefore, \begin{align*} H(f,g)^2 = \int_{-\infty}^{\infty} \left(\sqrt{f(x)}-\sqrt{g(x)} \right)^2\, dx &= \int_{-\infty}^{\infty} \left[ f(x) + g(x) - 2\sqrt{f(x)g(x)} \right]\, dx \\\n&= 2 - 2 \int_{-\infty}^{\infty} \sqrt{f(x)g(x)}\, dx = 2(1-C(f,g)). \end{align*} Thus $H(f,g) = \sqrt{2(1-C(f,g))}$.
Since by definition, $C(f,g) \geq 0$; we have $H(f,g) = \sqrt{2(1-C(f,g))} \leq \sqrt{2}$. Equality holds if and only if $C(f,g)=0$ which is equivalent to $fg=0$ almost everywhere with respect to Lebesgue measure on $\mathbb{R}$. The last condition is equivalent to $f$ and $g$ having disjoint support, i.e., the set $\left\{ x : f(x) >0 \right\} \cap \left\{ x : g(x) >0 \right\}$ has Lebesgue measure $0$.
Again the integrand in the definition of $H$ being non-negative, we have $H(f,g) \geq 0$ with equality holding if and only if $f=g$ almost everywhere with respect to Lebesgue measure on $\mathbb{R}$. \item Suppose $C(f_n,g_n) \to 0$ as $n \to \infty$. Set $A_n := \left\{x : f_n(x) \leq g_n(x)\right\}$ for all $n \geq 1$. $f_n$ and $g_n$ being measurable, the set $A_n$ is also measurable. On the set $A_n^{c}$ we have $f_n(x) > g_n(x).$ Hence, $$ 0 \leq \int_{A_n} f_n(x) \, dx \leq \int_{A_n} \sqrt{f_n(x)g_n(x)} \, dx \leq \int_{\mathbb{R}} \sqrt{f_n(x)g_n(x)} \, dx = C(f_n,g_n) \to 0,$$ and $$ 0 \leq \int_{A_n^c} g_n(x) \, dx \leq \int_{A_n^c} \sqrt{f_n(x)g_n(x)} \, dx \leq \int_{\mathbb{R}} \sqrt{f_n(x)g_n(x)} \, dx = C(f_n,g_n) \to 0.$$ Thus $$ \lim_{n \to \infty} \int_{A_n} f_n(x) \, dx = 0 = \lim_{n \to \infty} \int_{A_n^c} g_n(x) \, dx.$$ Now start with the above condition as the hypothesis for some $\left\{A_n\right\}_{n \geq 1}$. Then \begin{align*} 0 \leq C(f_n,g_n) = \int_{\mathbb{R}} \sqrt{f_n(x)g_n(x)} \, dx &= \int_{A_n} \sqrt{f_n(x)g_n(x)} \, dx + \int_{A_n^c} \sqrt{f_n(x)g_n(x)} \, dx \\
& \stackrel{(i)}{\leq} \left[\int_{A_n} f_n(x) \, dx \right]^{1/2}\left[\int_{A_n} g_n(x) \, dx \right]^{1/2} + \left[\int_{A_n^c} f_n(x) \, dx \right]^{1/2}\left[\int_{A_n^c} g_n(x) \, dx \right]^{1/2} \\
&\leq \left[\int_{A_n} f_n(x) \, dx \right]^{1/2}\left[\int_{\mathbb{R}} g_n(x) \, dx \right]^{1/2} + \left[\int_{\mathbb{R}} f_n(x) \, dx \right]^{1/2}\left[\int_{A_n^c} g_n(x) \, dx \right]^{1/2} \\
&=\left[\int_{A_n} f_n(x) \, dx \right]^{1/2} + \left[\int_{A_n^c} g_n(x) \, dx \right]^{1/2} \longrightarrow 0,\end{align*} where $(i)$ follows from \textit{Cauchy-Schwarz Inequality}. This completes the proof. \end{enumerate} |
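A concrete illustration of the equivalence in part (b) (added here as an example; it is not needed for the proof): take $f_n = \mathbbm{1}_{[0,1]}$ and $g_n = \mathbbm{1}_{[1-1/n,\,2-1/n]}$. Then
$$ C(f_n,g_n) = \int_{1-1/n}^{1} 1 \, dx = \dfrac{1}{n} \longrightarrow 0,$$
and the sets $A_n = [1-1/n,\,2]$ satisfy $\int_{A_n} f_n(x)\,dx = 1/n \to 0$ and $\int_{A_n^c} g_n(x)\,dx = 0$.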
2014-q5 | Stopping times: Let \( T = [0, \infty) \) be the set of nonnegative real numbers and let \( (\mathcal{F}_t)_{t \in T} \) be a collection of nondecreasing \( \sigma \)-fields (so \( \mathcal{F}_s \subset \mathcal{F}_t \) for \( 0 \leq s \leq t < \infty \)), all sub-\( \sigma \)-fields of some \( \sigma \)-field \( \mathcal{F} \). A stopping time \( \tau \) is defined as an \( \mathcal{F} \)-measurable random variable for which \( \{ \tau \leq t \} \in \mathcal{F}_t \) for all \( t \in T \). (a) Let \( \tau \) be a stopping time and put \[ \mathcal{F}_\tau = \{ A \in \mathcal{F} : A \cap \{ \tau \leq t \} \in \mathcal{F}_t \text{ for all } t \in T \}. \] (Loosely speaking, \( \mathcal{F}_\tau \) consists of those events that can be defined in terms of what happens up through time \( \tau \).) Show that \( \mathcal{F}_\tau \) is a \( \sigma \)-field and that \( \tau \) is \( \mathcal{F}_\tau \)-measurable. (b) Let \( \sigma \) and \( \tau \) be two stopping times and assume that \( \sigma \leq \tau \) (everywhere). Show that \( \mathcal{F}_\sigma \subset \mathcal{F}_\tau \). (c) Let \( \sigma \) and \( \tau \) be two stopping times and define \( \sigma \wedge \tau \) by \[ (\sigma \wedge \tau)(\omega) = \min (\sigma(\omega), \tau(\omega)) \] for each sample point \( \omega \). Show that \( \sigma \wedge \tau \) is a stopping time and that \( \mathcal{F}_{\sigma \wedge \tau} = \mathcal{F}_\sigma \cap \mathcal{F}_\tau \). (d) Let \( \sigma \) and \( \tau \) be two stopping times. Show that \( \{ \sigma \leq \tau \} \in \mathcal{F}_{\sigma \wedge \tau} \). | \begin{enumerate}[label=(\alph*)] \item To see why $\mathcal{F}_{\tau}$ is a $\sigma$-field, note that \begin{enumerate}[label=(\arabic*)] \item $\Omega \cap (\tau \leq t) = (\tau \leq t) \in \mathcal{F}_t$, and so $\Omega \in \mathcal{F}_{\tau}$. \item $A \in \mathcal{F}_{\tau} \Rightarrow A \cap (\tau \leq t) \in \mathcal{F}_t \Rightarrow A^c \cap (\tau \leq t) = (\tau \leq t) \cap (A \cap (\tau \leq t))^c \in \mathcal{F}_t$ and so $A^c \in \mathcal{F}_{\tau}$. \item Let $A_i \in \mathcal{F}_{\tau}$ for all $i \geq 1$. Then $$ A_i \cap (\tau \leq t) \in \mathcal{F}_t, \; \forall \; i \geq 1 \Rightarrow \left( \cup_{i \geq 1} A_i \right) \cap (\tau \leq t) = \cup_{i \geq 1} \left[ A_i \cap (\tau \leq t)\right] \in \mathcal{F}_t.$$ Hence, $\cup_{i \geq 1} A_i \in \mathcal{F}_{\tau}$. \end{enumerate} For any $s,t \geq 0$, we have $(\tau \leq s) \cap (\tau \leq t) = (\tau \leq s \wedge t) \in \mathcal{F}_{s \wedge t} \subseteq \mathcal{F}_t.$ For $s <0$, we have $(\tau \leq s) = \emptyset \in \mathcal{F}_{\tau}$. Hence, $(\tau \leq s) \in \mathcal{F}_{\tau}$, for all $s$. This shows that $\tau$ is $\mathcal{F}_{\tau}$-measurable. \item Take $A \in \mathcal{F}_{\sigma}$. Then $A \cap (\sigma \leq t) \in \mathcal{F}_t$, for all $t \geq 0$. So, $$ A \cap (\tau \leq t) = A \cap (\max(\tau,\sigma) \leq t) = A \cap (\sigma \leq t) \cap (\tau \leq t) \in \mathcal{F}_t,$$ since both $A \cap (\sigma \leq t)$ and $(\tau \leq t)$ are $\mathcal{F}_t$-measurable. This shows $A \in \mathcal{F}_{\tau}$ and thus $\mathcal{F}_{\sigma} \subseteq \mathcal{F}_{\tau}$. \item For all $t \geq 0$, $ (\sigma \wedge \tau \leq t) = (\sigma \leq t) \cup (\tau \leq t) \in \mathcal{F}_t$. Hence, $\sigma \wedge \tau$ is a stopping time. \par Since $\sigma \wedge \tau \leq \sigma, \tau$, we have from part (b) that $\mathcal{F}_{\sigma \wedge \tau} \subseteq \mathcal{F}_{\sigma} \cap \mathcal{F}_{\tau}$.
Now suppose $A \in \mathcal{F}_{\sigma} \cap \mathcal{F}_{\tau}$. Then for all $t \geq 0$, $A \cap (\tau \leq t), A \cap (\sigma \leq t) \in \mathcal{F}_t$ and hence $A \cap (\sigma \wedge \tau \leq t) = A \cap \left[ (\sigma \leq t) \cup (\tau \leq t) \right] = \left[ A \cap (\sigma \leq t) \right] \cup \left[ A \cap (\tau \leq t) \right] \in \mathcal{F}_t$. This shows that $A \in \mathcal{F}_{\sigma \wedge \tau} $ and hence $\mathcal{F}_{\sigma \wedge \tau} \supseteq \mathcal{F}_{\sigma} \cap \mathcal{F}_{\tau}$. This proves $\mathcal{F}_{\sigma \wedge \tau} = \mathcal{F}_{\sigma} \cap \mathcal{F}_{\tau}$. \item For any $t \geq 0$, $$ (\tau < \sigma) \cap (\sigma \leq t) = (\tau < \sigma \leq t) = \cup_{q \in \mathbb{Q}, q>0} (\tau <t-q<\sigma \leq t) = \cup_{q \in \mathbb{Q}, q>0} \left[ (\tau < t-q) \cap (\sigma > t-q) \cap (\sigma \leq t)\right] \in \mathcal{F}_t.$$ Thus $(\tau < \sigma) \in \mathcal{F}_{\sigma}$. Similarly, $$(\tau < \sigma) \cap (\tau \leq t) = (\tau < \sigma \leq t) \cup (\tau \leq t < \sigma) \in \mathcal{F}_t,$$ since $(\tau<\sigma \leq t) \in \mathcal{F}_t$ (we already proved this) and $(\tau \leq t < \sigma) = (\tau \leq t) \cap (\sigma \leq t)^c \in \mathcal{F}_t$. This shows $(\tau < \sigma) \in \mathcal{F}_{\tau}$. Therefore, $(\tau < \sigma) \in \mathcal{F}_{\sigma} \cap \mathcal{F}_{\tau} = \mathcal{F}_{\sigma \wedge \tau}.$ Hence, $(\sigma \leq \tau) = (\tau < \sigma)^c \in \mathcal{F}_{\sigma \wedge \tau}.$ \end{enumerate}
2014-q6 | Let \( \ell \in \mathbb{Z}_{>0} \), and consider the homogeneous Markov chain on state space \( \mathbb{X} = \{ 0, 1, \ldots, \ell \} \) and with transition probabilities, for \( x, y \in \mathbb{X} \), \[ q(x,y) = \binom{\ell}{y} \left( \frac{x}{y} \right)^y \left( 1 - \frac{x}{\ell} \right)^{\ell - y}. \] (Here the convention \( 0^0 = 1 \) is understood.) \( a \) Is this chain irreducible? What are recurrent states? Prove your answers. \( b \) Let \( \{X_n\}_{n \geq 0} \) be the canonical construction of the Markov chain starting at \( x \in \mathbb{X} \), with probability space \( (\mathbb{X}_\infty, \mathcal{X}_\infty, \mathbb{P}_x) \). Prove that \( X_n \to X_\infty \) almost surely for some random variable \( X_\infty \). Compute the law of \( X_\infty \). | \textbf{Correction :} $$q(x,y) = \left(\begin{array}{c} l \\y \end{array} \right) \left(\dfrac{x}{l} \right)^y \left(1- \dfrac{x}{l} \right)^{l-y}, \; \forall \; x, y \in \mathbb{X}. $$ \begin{enumerate}[label=(\alph*)] \item The chain is not irreducible since $q(0,0)=1$ and $q(0,y)=0$ for all $y=1, \ldots, l$. One the chain reaches $0$ it stays at $0$, hence $0$ is an absorbing state. Similarly, $q(l,l)=1$ and $l$ is an absorbing state. \par For any $x \neq 0,l$, we have $q(x,0) = (1-x/l)^l >0$. Since $0$ is an absorbing state (as shown earlier), the chain starting from $x$ will not return to $x$ with probability at least $1-(1-x/l)^l >0$. Thus $x$ is a transient state. $0$ and $l$, being absorbing states, are recurrent. Thus the only recurrent states are $0$ and $l$. \item It is clear from the transition kernel that $X_n | X_{n-1} \sim \text{Bin}(l,X_{n-1}/l)$. Thus setting $\mathcal{F}_n := \sigma(X_m : 0 \leq m \leq n)$ to be the canonical filtration associated this this process, we get $\left\{X_n, \mathcal{F}_n, n \geq 0\right\}$ to be a Martingale. Since, $\left\{X_n\right\}_{n \geq 1}$ is uniformly bounded, we have $X_n \longrightarrow X_{\infty}$ as $n \to \infty$, almost surely and in $L^p$ for all $p \geq 1$, by \textit{Doob's $L^p$ Martingale Convergence Theorem.} Hence, $$ \mathbb{E}X_{\infty} = \lim_{n \to \infty} \mathbb{E}X_n = \mathbb{E}X_0 = x.$$ Since, $X_n | X_{n-1} \sim \text{Bin}(l,X_{n-1}/l)$, we have \begin{align*} \mathbb{E} \left[ X_n(l-X_n) | X_{n-1}\right] = l\mathbb{E} \left[ X_n | X_{n-1}\right] -\mathbb{E} \left[ X_n^2 | X_{n-1}\right] &=lX_{n-1} - l(X_{n-1}/l)(1-X_{n-1}/l) - X_{n-1}^2 \
&= \dfrac{l-1}{l}X_{n-1}(l-X_{n-1}),\end{align*} and hence $$ \mathbb{E}\left[X_{\infty}(l-X_{\infty}) \right] = \lim_{n \to \infty} \mathbb{E}\left[X_{n}(l-X_{n}) \right] = \lim_{n \to \infty} (1-1/l)^n x(l-x) =0.$$ Therefore, $X_{\infty} =0$ or $l$ with probability $1$. This, along with the fact that $X_{\infty}$ has expectation $x$, gives us $ \mathbb{P}(X_{\infty}=0) = 1-\frac{x}{l}$ and $\mathbb{P}(X_{\infty}=l) = \frac{x}{l}.$ So, $X_{\infty} \sim l \text{Ber}(x/l)$. \end{enumerate} |
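For the record, here is the one-line verification of the martingale property invoked above (added for completeness): since $X_n \mid \mathcal{F}_{n-1} \sim \text{Bin}(l, X_{n-1}/l)$,
$$ \mathbb{E}\left[ X_n \mid \mathcal{F}_{n-1}\right] = l \cdot \dfrac{X_{n-1}}{l} = X_{n-1}, \;\; \text{almost surely}.$$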
2015-q1 | Consider a sequence $X_1, X_2, X_3, \ldots$ of \{0,1\} random variables with the following joint distribution: $P(X_1 = 1) = p_1$. For subsequent trials, the chance that $X_{n+1}$ is 1 is $\alpha$ or $\beta$ as $X_n$ is 1 or 0.
(a) Let $p_n = P(X_n = 1)$. Set $\delta = \alpha - \beta$, $p = \beta/(1 - \delta)$. (We assume that $\alpha, \beta, \delta \neq 0, 1$.) Show that $p_n = p + (p_1 - p)\delta^{n-1}$.
(b) Suppose now that $p_1 = p$. Let $S_n = X_1 + \cdots + X_n$. Show that, with probability 1, $$ \lim_{n\to\infty} \frac{S_n}{n} = p. $$ | We have $\left\{X_n\right\}_{n \geq 1}$ to be a Markov Chain on the state space $\left\{0,1\right\}$ with transition probabilities $p_{11}=\alpha, p_{10}=1-\alpha, p_{01}=\beta, p_{00}=1-\beta.$
\begin{enumerate}[label=(\alph*)]
\item We have for all $n \geq 2$,
\begin{align*}
p_n = \mathbb{P}(X_n=1) &= \mathbb{P}(X_n=1|X_{n-1}=1)\mathbb{P}(X_{n-1}=1) + \mathbb{P}(X_n=1|X_{n-1}=0)\mathbb{P}(X_{n-1}=0) \\
&= \alpha p_{n-1} + \beta(1-p_{n-1}) = \beta + (\alpha-\beta)p_{n-1} = \beta + \delta p_{n-1}.
\end{align*}
Therefore,
$$ p_n-p = \beta + \delta p_{n-1} - p = \beta - (1-\delta)p +\delta (p_{n-1}-p) = \delta (p_{n-1}-p). $$
Unfolding the recursion, we get $p_n-p = (p_1-p)\delta^{n-1}$, i.e.,
$p_n = p+ (p_1-p)\delta^{n-1}.$
\item Since $\alpha, \beta \in (0,1)$, the chain is irreducible and aperiodic. Let us try to find its invariant distribution. Let $\pi=(\pi_0,\pi_1)$ be an invariant distribution. Then we are to solve the following equations.
$$ (1-\beta)\pi_0 + (1-\alpha)\pi_1 = \pi_0, \;\; \beta\pi_0+\alpha \pi_1 = \pi_1.$$
Solving the above equations we get $\pi_1=\beta/(\beta+1-\alpha) = \beta/(1-\delta)=p$ and $\pi_0=1-p$. Since $\alpha,\beta \in (0,1)$, we have both $p, 1-p >0$, and the chain (being irreducible on a finite state space) is positive recurrent. Letting $\tau_1$ denote the first return time to $1$, we have
$$ \lim_{n \to \infty} n^{-1}S_n = \lim_{n \to \infty} n^{-1}\sum_{i=1}^n \mathbbm{1}(X_i=1) = \dfrac{1}{\mathbb{E}_1(\tau_1)} = \pi_1 =p,$$
almost surely.
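As a quick sanity check (an added verification, not needed for the argument), one can confirm directly that $\pi=(1-p,p)$ is invariant: since $\beta=(1-\delta)p$,
$$ \beta\pi_0 + \alpha\pi_1 = \beta(1-p) + \alpha p = \beta + (\alpha - \beta)p = (1-\delta)p + \delta p = p = \pi_1.$$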
\end{enumerate} |
2015-q2 | Suppose that under a probability measure $P$ the random variables $X_1,X_2, \ldots$ are independent, standard normal. For each $n \geq 1$ let $\mathcal{F}_n$ denote the $\sigma$-algebra generated by the first $n$ X's, with $\mathcal{F}_0$ the $\sigma$-algebra consisting of the empty set and the entire space. Let $Q$ be a second probability measure. Assume that under $Q$, $X_1$ has the same value as under $P$ and for each $n > 1$ the conditional distribution of $X_n$ given $\mathcal{F}_{n-1}$ is normal with variance 1 and (conditional) expectation $\theta_n$, where each $\theta_n$ is measurable with respect to $\mathcal{F}_{n-1}$ and the $\theta_n$ are uniformly bounded ($|\theta_n(\omega)| \leq 1$ for all $\omega$). Let $S = \sum_{i=1}^{\infty} \theta_i^2$.
(a) Let $Q_n$ denote the restriction of $Q$ to $\mathcal{F}_n$ and $P_n$ the corresponding restriction of $P$. What is the Radon–Nikodym derivative $dQ_n/dP_n$?
(b) Assume that $Q$ and $P$ are singular. What can you say about $S$ under the measure $P$? Under $Q$? Explain.
(c) Assume that $P$ and $Q$ are mutually absolutely continuous. Now what can you say about $S$? Explain. | \begin{enumerate}[label=(\alph*)]
\item
Let us introduce some notation first. Set $\theta_1=0$. Since $\theta_n$ is $\mathcal{F}_{n-1}=\sigma(X_i : 1 \leq i \leq n-1)$-measurable, we can get a measurable function $g_{n} :\mathbb{R}^{n-1} \to \mathbb{R}$ such that $\theta_n(\omega) =g_n(X_1(\omega),\ldots,X_{n-1}(\omega))$, a.e.[$Q$]. For $n=1$, we can set $g_1 \equiv 0$.
Let $\widehat{P}_n := P \circ (X_1, \ldots, X_n)^{-1} = P_n \circ (X_1, \ldots, X_n)^{-1}$ and $\widehat{Q}_n := Q \circ (X_1, \ldots, X_n)^{-1} = Q_n \circ (X_1, \ldots, X_n)^{-1},$ for all $n \geq 1$. Then $\widehat{P}_n$ and $\widehat{Q}_n$ are probability measures on $(\mathbb{R}^n, \mathcal{B}_{\mathbb{R}^n})$ while it is clear that both of them are absolutely continuous with respect to Lebesgue measure on $\mathbb{R}^n$, denoted by $\lambda_n$, with Radon-Nikodym derivatives given as follows.
$$ \dfrac{d\widehat{P}_n}{d \lambda _n}(x_1, \ldots, x_n) = \prod_{i=1}^n \phi(x_i), \; \; \dfrac{d\widehat{Q}_n}{d \lambda _n}(x_1, \ldots, x_n) = \prod_{i=1}^n \phi(x_i-g_i(x_1, \ldots,x_{i-1})), \; \; a.e.[x_1,\ldots,x_n].$$
Therefore,
$$ \dfrac{d\widehat{Q}_n}{d \widehat{P} _n}(x_1, \ldots, x_n) = \prod_{i=1}^n \dfrac{\phi(x_i-g_i(x_1, \ldots,x_{i-1}))}{\phi(x_i)}, \; \; a.e.[x_1,\ldots,x_n].$$
Now take any $A \in \mathcal{F}_n$. Since $\mathcal{F}_n = \sigma(X_i : i \leq n)$, we have $A=(X_1, \ldots,X_n)^{-1}(B)$, for some $B \in \mathcal{B}_{\mathbb{R}^n}$. Therefore,
\begin{align*}
Q_n(A) = \widehat{Q}_n(B) = \int_{B} \dfrac{d\widehat{Q}_n}{d \widehat{P} _n}(x_1, \ldots, x_n) \; d\widehat{P}_n(x_1, \ldots,x_n) &= \int_{B} \prod_{i=1}^n \dfrac{\phi(x_i-g_i(x_1, \ldots,x_{i-1}))}{\phi(x_i)} \; d\widehat{P}_n(x_1, \ldots,x_n) \\
& = \int_A \prod_{i=1}^n \dfrac{\phi(X_i(\omega)-g_i(X_1(\omega), \ldots,X_{i-1}(\omega)))}{\phi(X_i(\omega))} \; dP_n(\omega) \\
& = \int_A \prod_{i=1}^n \dfrac{\phi(X_i(\omega)-\theta_i(\omega))}{\phi(X_i(\omega))} \; dP_n(\omega).
\end{align*}
This proves that
$$ M_n := \dfrac{d Q_n}{d P_n} = \prod_{i=1}^n \dfrac{\phi(X_i-\theta_i)}{\phi(X_i)} = \prod_{i=1}^n \exp(X_i \theta_i - \theta_i^2/2) = \exp \left( \sum_{i=1}^n X_i \theta_i - \dfrac{1}{2}\sum_{i=1}^n \theta_i^2\right).$$
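As an added remark (a check that is used implicitly later on): under $P$, $\left\{M_n, \mathcal{F}_n, n \geq 1\right\}$ is a non-negative martingale with mean $1$, since $\theta_n$ is $\mathcal{F}_{n-1}$-measurable and $X_n \sim N(0,1)$ is independent of $\mathcal{F}_{n-1}$ under $P$, so that
$$ \mathbb{E}_P\left[ \exp\left(X_n\theta_n - \theta_n^2/2\right) \Big \rvert \mathcal{F}_{n-1}\right] = \exp(-\theta_n^2/2)\, \mathbb{E}_P\left[ \exp(X_n\theta_n) \mid \mathcal{F}_{n-1}\right] = \exp(-\theta_n^2/2)\exp(\theta_n^2/2) = 1.$$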
\item Actually in this case we can say nothing about $S$. To see why this is the case note that the behaviour of $S$ should only be affected by the relationship between $P_{\infty}$ and $Q_{\infty}$ where $P_{\infty}$ and $Q_{\infty}$ are restrictions of $P$ and $Q$ on $\mathcal{F}_{\infty}=\sigma(X_1, \ldots)$. The information that $P$ and $Q$ are singular tells us nothing about the relationship between $P_{\infty}$ and $Q_{\infty}$.
For example, let $\Omega = \left\{0,1\right\} \times \mathbb{R}^{\mathbb{N}}$ and $\mathcal{F} = 2^{\left\{0,1\right\}} \otimes \mathcal{B}_{\mathbb{R}^{\mathbb{N}}}$. Let $X_i(\omega)=\omega_i$ for all $ i \geq 0$, where $\omega = (\omega_0, \omega_1, \ldots)\in \Omega$. Suppose under $P$, the $X_i$'s are all independent with $X_0=0$ with probability $1$ and $X_i \sim N(0,1)$ for all $i \geq 1$. Under $Q$, $X_0=1$ with probability $1$ while $X_1 \sim N(0,1)$ and $X_n \sim N(\theta_n,1)$ conditioned on $X_1, \ldots, X_{n-1}$ for all $n \geq 2$. Clearly, $P$ and $Q$ are mutually singular since
$P(X_0=1)=0, Q(X_0=1)=1$. But this gives no information about $S$.
\textbf{Correction :} Assume that $P_{\infty}$ and $Q_{\infty}$ are mutually singular.
Recall the Lebesgue decomposition of $Q_{\infty}$ as $Q_{\infty}=Q_{\infty,ac} + Q_{\infty,s}$, where $Q_{\infty,ac} = M_{\infty}P_{\infty}$ and $Q_{\infty,s} = \mathbbm{1}(M_{\infty}= \infty)Q_{\infty}.$ See \cite[Theorem 5.5.11]{dembo}. $M_{\infty}$ is the limit of $\left\{ M_n\right\}$, which exists almost surely with respect to both $P$ and $Q$; and $M_{\infty}$ is finite a.s.[$P$].
$Q_{\infty}$ and $P_{\infty}$ are mutually singular if and only if $Q_{\infty,ac}=0$ and $Q_{\infty,s}=Q_{\infty}$ which is equivalent to $P(M_{\infty}=0)=1$ and $Q(M_{\infty}=\infty)=1$.
Set $Y_0=0$. Note that under $P$, $\left\{Y_n:=\sum_{i=1}^n X_i\theta_i, \mathcal{F}_n, n \geq 0 \right\}$ is an $L^2$- MG (since $\theta_n$s are uniformly bounded by $1$) with predictable compensator being
$$ \left\langle Y \right\rangle_n = \sum_{k=1}^n \mathbb{E} \left[ (Y_k-Y_{k-1})^2|\mathcal{F}_{k-1}\right] = \sum_{k=1}^n \mathbb{E} \left[ X_k^2\theta_k^2 |\mathcal{F}_{k-1}\right] = \sum_{k=1}^n \theta_k^2.$$
Therefore $\sum_{i=1}^n X_i(\omega)\theta_i(\omega)$ converges to a finite limit for a.e.[$P$] $\omega$ for which $\sum_{i=1}^{\infty} \theta_i^2(\omega)$ is finite (See \cite[Theorem 5.3.33]{dembo}). This shows that, $ P \left( (S < \infty) \cap (M_{\infty} =0)\right)=0$ and therefore, we must have $P(S < \infty) =0$ i.e., $S$ is infinite a.s.[$P$].
On the otherhand, setting $Z_0=0$ we get, $\left\{Z_n:=\sum_{i=1}^n (X_i-\theta_i)\theta_i, \mathcal{F}_n, n \geq 0 \right\}$ is an $L^2$- MG (since $\theta_n$s are uniformly bounded by $1$) under $Q$ with predictable compensator being
$$ \left\langle Z \right\rangle_n = \sum_{k=1}^n \mathbb{E} \left[ (Z_k-Z_{k-1})^2|\mathcal{F}_{k-1}\right] = \sum_{k=1}^n \mathbb{E} \left[ (X_k-\theta_k)^2\theta_k^2 |\mathcal{F}_{k-1}\right] = \sum_{k=1}^n \theta_k^2.$$
Therefore $\sum_{i=1}^n (X_i(\omega)-\theta_i(\omega))\theta_i(\omega)$ converges to a finite limit for a.e.[$Q$] $\omega$ for which $\sum_{i=1}^{\infty} \theta_i^2(\omega)$ is finite and hence for such $\omega$, $\sum_{i=1}^n X_i(\omega)\theta_i(\omega)$ will be finite. This shows that, $ Q \left( (S < \infty) \cap (M_{\infty} =\infty)\right)=0$ and therefore, we must have $Q(S < \infty) =0$ i.e., $S$ is infinite a.s.[$Q$].
In conclusion, if $P_{\infty}$ and $Q_{\infty}$ are singular, then $S$ is almost surely infinite with respect to both $P$ and $Q$.
\item Here we need no further assumption since $P$ and $Q$ are mutually absolutely continuous implies that $P_{\infty}$ and $Q_{\infty}$ are mutually absolutely continuous.
From the Lebesgue decomposition, we have $Q_{\infty}<<P_{\infty}$ if and only if $Q_{\infty,s} =0$ and $Q_{\infty,ac}=Q_{\infty}$, which is equivalent to $\mathbb{E}_{P}M_{\infty}=1$ and $Q(M_{\infty}<\infty)=1$.
Apply \cite[Theorem 5.3.33(b)]{dembo} to the MG $\left\{Z_n\right\}$ with respect to $Q$. We have $Z_n(\omega)/\sum_{i=1}^n \theta_i^2(\omega)$ converges to $0$ a.e.[$Q$] $\omega$ for which $S(\omega)=\infty$. For such $\omega$,
$$ \dfrac{\sum_{i=1}^n X_i(\omega) \theta_i(\omega) - \dfrac{1}{2}\sum_{i=1}^n \theta_i^2(\omega)}{\sum_{i=1}^n \theta_i^2(\omega)} = \dfrac{Z_n(\omega)+\dfrac{1}{2}\sum_{i=1}^n \theta_i^2(\omega)}{\sum_{i=1}^n \theta_i^2(\omega)} \to 1/2,$$
and hence $M_{\infty}(\omega)=\infty$. Hence $Q((S=\infty)\cap(M_{\infty}<\infty))=0$. This implies $Q(S=\infty)=0$.
We have $Q_{\infty}=Q_{\infty,ac}=M_{\infty}P_{\infty}$ and we also have $P_{\infty}<<Q_{\infty}$. This implies that $P(0<M_{\infty}<\infty)=1$. Apply \cite[Theorem 5.3.33(b)]{dembo} to the MG $\left\{Y_n\right\}$ with respect to $P$. We have $Y_n(\omega)/\sum_{i=1}^n \theta_i^2(\omega)$ converges to $0$ a.e.[$P$] $\omega$ for which $S(\omega)=\infty$. For such $\omega$,
$$ \dfrac{\sum_{i=1}^n X_i(\omega) \theta_i(\omega) - \dfrac{1}{2}\sum_{i=1}^n \theta_i^2(\omega)}{\sum_{i=1}^n \theta_i^2(\omega)} = \dfrac{Y_n(\omega)-\dfrac{1}{2}\sum_{i=1}^n \theta_i^2(\omega)}{\sum_{i=1}^n \theta_i^2(\omega)} \to -1/2,$$
and hence $M_{\infty}(\omega)=0$. Hence $P((S=\infty)\cap(M_{\infty}>0))=0$. This implies $P(S=\infty)=0$.
In conclusion, if $P$ and $Q$ are mutually absolutely continuous, then $S$ is almost surely finite with respect to both $P$ and $Q$.
\end{enumerate} |
2015-q3 | Let $B_t$ be standard Brownian motion started at zero, and let $W_t := 2B_t$. Are the laws of $(B_t)_{0\leq t \leq 1}$ and $(W_t)_{0\leq t \leq 1}$ mutually singular or not? Justify your answer. | Both $(B_t)_{0 \leq t \leq 1}$ and $(W_t)_{0 \leq t \leq 1}$ induce probability measures on the space $(C([0,1]),\mathcal{B}_{C([0,1])}).$ Consider the following subset of $C([0,1])$.
$$ L_{b} := \left\{ x \in C([0,1]) : \limsup_{t \downarrow 0} \dfrac{x(t)}{\sqrt{2t \log \log (1/t)}} < b\right\}.$$
To see why $L_{b}$ is measurable note that for each $t \in (0,e^{-1})$, the map $x \mapsto x(t)/\sqrt{2t \log \log (1/t)}$ is measurable and, since the space consists only of continuous functions, we have
$$ L_{b} = \left( \limsup_{t \downarrow 0} \dfrac{x(t)}{\sqrt{2t \log \log (1/t)}} < b \right) = \left(\limsup_{t \downarrow 0, \, t \in \mathbb{Q}} \dfrac{x(t)}{\sqrt{2t \log \log (1/t)}} < b \right) = \bigcup_{m \geq 1} \bigcup_{t \in \mathbb{Q}\cap(0,e^{-1})} \bigcap_{s \in \mathbb{Q}, \, 0<s<t} \left( \dfrac{x(s)}{\sqrt{2s \log \log (1/s)}} < b - \tfrac{1}{m} \right).$$
This shows that $L_b \in \mathcal{B}_{C([0,1])}$.
Let $P$ and $Q$ be respectively the probability measures on $(C([0,1]),\mathcal{B}_{C([0,1])})$ induced by $(B_t)_{0 \leq t \leq 1}$ and $(W_t)_{0 \leq t \leq 1}$.
By the local \textit{Law of the Iterated Logarithm} at the origin (which follows from the usual LIL at infinity by time inversion), almost surely the following hold true.
$$ \limsup_{t \downarrow 0} \dfrac{B(t)}{\sqrt{2t \log \log (1/t)}} = 1, \;\; \limsup_{t \downarrow 0} \dfrac{W(t)}{\sqrt{2t \log \log (1/t)}} = \limsup_{t \downarrow 0} \dfrac{2B(t)}{\sqrt{2t \log \log (1/t)}} = 2. $$
Therefore, $P(L_{3/2}) = \mathbb{P}(B \in L_{3/2}) =1$, whereas $Q(L_{3/2}) = \mathbb{P}(W \in L_{3/2}) =0$. Thus the measures $P$ and $Q$ are mutually singular. |
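Alternatively (a sketch of a second route, added here for illustration and not part of the original argument), one can separate the two laws by the quadratic variation along dyadic partitions of $[0,1]$: the set
$$ V := \left\{ x \in C([0,1]) : \lim_{n \to \infty} \sum_{k=1}^{2^n} \left( x(k2^{-n}) - x((k-1)2^{-n})\right)^2 = 1 \right\} \in \mathcal{B}_{C([0,1])}$$
satisfies $P(V)=1$, since the dyadic quadratic variation of standard Brownian motion on $[0,1]$ equals $1$ almost surely, while $Q(V)=0$, since the corresponding limit for $W=2B$ equals $4$ almost surely. This again exhibits a measurable set carrying all of $P$ and none of $Q$.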
2015-q4 | (A true story) A New York Times reporter once called up one of your professors with the following request: “My kid collects baseball cards. They come five to a package. Let’s assume that each package is a random sample (without repetitions) of the 600 players in the National League. How many packs of cards should we buy to have a 95% chance of getting at least one of each player? I’m doing a story about this for the Times and would really appreciate a solution.” Give a careful heuristic, your best version of a numerical answer, and a mathematical justification of your heuristic. | We have $n$ players (here $n=600$) and each pack is a SRSWOR sample of size $k$ (here $k=5$) from the player set, with packs independent of each other. Let $T_n$ denote the minimum number of packs required to get all the players. We are to estimate $\mathbb{P}(T_n > l_n)$.
Let $A_{n,i}$ denote the event that the $i$-th player has not appeared in the first $l_n$ packs. For any $m$ distinct indices $1 \leq i_1, \ldots, i_m \leq n$ we have the following.
$$ \mathbb{P}\left[ \cap_{j=1}^m A_{n,i_j}\right] = \left[\mathbb{P} \left(\text{Players } i_1,\ldots, i_m \text{ are not in the first pack }\right) \right]^{l_n} = \left[\dfrac{\binom{n-m}{k}}{\binom{n}{k}} \right]^{l_n}.$$
Therefore,
$$ \mathbb{P}(T_n > l_n) =\mathbb{P}\left[ \cup_{j=1}^n A_{n,j}\right] = \sum_{m=1}^n (-1)^{m-1} \sum_{1\leq i_1< \cdots < i_m \leq n} \mathbb{P}\left[ \cap_{j=1}^m A_{n,i_j}\right] = \sum_{m=1}^n (-1)^{m-1} \binom{n}{m} \left[\dfrac{\binom{n-m}{k}}{\binom{n}{k}} \right]^{l_n}.$$
We are to estimate the right hand side of the above. Observe that for all $m \geq 1$,
\begin{align*}
p_{n,m} := \binom{n}{m} \left[\dfrac{\binom{n-m}{k}}{\binom{n}{k}} \right]^{l_n} &= \dfrac{n(n-1)\cdots (n-m+1)}{m!} \left[ \dfrac{(n-m)(n-m-1)\cdots (n-m-k+1)}{n(n-1)\cdots(n-k+1)}\right]^{l_n}\\
&= \dfrac{1}{m!} \left[\prod_{i=0}^{m-1}(n-i) \right] \left[ \prod_{i=0}^{k-1} \left( 1-\dfrac{m}{n-i}\right)\right]^{l_n}.
\end{align*}
We try to find asymptotic behaviour of $p_{n,m}$ as $n \to \infty$. Elementary calculus shows that for small enough $\delta$, we have $|\log(1-x)+x| \leq x^2$ for all $|x|<\delta$. Hence for large enough $n$,
\begin{align*}
\Bigg \rvert \log \left[ \prod_{i=0}^{k-1} \left( 1-\dfrac{m}{n-i}\right)\right]^{l_n} + \dfrac{mkl_n}{n} \Bigg \rvert & \leq l_n \sum_{i=0}^{k-1} \Bigg \rvert \log\left( 1-\dfrac{m}{n-i}\right) +\dfrac{m}{n} \Bigg \rvert \\
& \leq l_n \sum_{i=0}^{k-1} \Bigg \rvert \log\left( 1-\dfrac{m}{n-i}\right) +\dfrac{m}{n-i} \Bigg \rvert + l_n \sum_{i=0}^{k-1} \Bigg \rvert \dfrac{m}{n-i} -\dfrac{m}{n} \Bigg \rvert \\
& \leq m^2l_n \sum_{i=0}^{k-1} \dfrac{1}{(n-i)^2} + mkl_n \sum_{i=0}^{k-1} \dfrac{1}{(n-i)^2} \\
&= m(m+k)l_n \sum_{j=0}^{k-1} (n-j)^{-2} \leq mk(m+k)l_n(n-k)^{-2}.
\end{align*}
Provided that $l_n=o(n^{2})$, we have
$$ p_{n,m} \sim \dfrac{1}{m!} \left[\prod_{i=0}^{m-1}(n-i) \right] \exp(-mkl_n/n) \sim \dfrac{1}{m!} n^m \exp(-mkl_n/n) = \dfrac{1}{m!} \exp \left( m \log n - mkl_n/n\right).$$
Fix $c \in \mathbb{R}$ and take $l_n = \lfloor \dfrac{n \log n +cn}{k}\rfloor = o(n^2)$. Then $p_{n,m} \sim \exp(-cm)/m!$. On the otherhand, for all $n,m$; we apply the inequality $\log(1-x) \leq -x$ to get the following.
$$ p_{n,m} \leq \dfrac{1}{m!} n^m \exp \left( -ml_n \sum_{i=0}^{k-1} (n-i)^{-1}\right) \leq \dfrac{1}{m!} n^m \exp \left( -mkl_n/n \right) \leq \dfrac{1}{m!} \exp(-cm + mk/n) \leq \dfrac{1}{m!} \exp(-cm + mk).$$
The right hand side being summable, we use DCT to conclude that
$$ \mathbb{P}(T_n > l_n) = \sum_{m \geq 1} (-1)^{m-1} p_{n,m} \longrightarrow \sum_{m \geq 1} (-1)^{m-1} \dfrac{1}{m!} \exp(-cm)= 1- \exp(-e^{-c}).$$
Fix $\alpha \in (0,1)$. Then we get
$$ \mathbb{P} \left[ T_n > \Big \lfloor \dfrac{n \log n-n\log(-\log(1-\alpha))}{k} \Big \rfloor \right] \longrightarrow \alpha. $$
Plugging in $n=600, k=5, \alpha=0.05$ we get that at least $1124$ packs of cards should be bought to have a $95\%$ chance of getting at least one of each player.
The above estimate is asymptotic. For finite sample result note that,
$$ \mathbb{P}(T_n > l_n) \leq \sum_{i=1}^n \mathbb{P}(A_{n,i}) = n \left[\dfrac{\binom{n-1}{k}}{\binom{n}{k}} \right]^{l_n} = n (1-k/n)^{l_n} \leq \exp(\log n -kl_n/n) \leq \exp(-c+k/n), $$
and hence
$$ \mathbb{P} \left[ T_n > \Big \lfloor \dfrac{n \log n-n\log\alpha+k}{k} \Big \rfloor \right] \leq \alpha.$$
Plugging in the values we get the required number of packs to be at least $1128$. This shows that the asymptotic approximation is quite accurate for $n=600$.
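Spelling out the arithmetic behind the two numbers above (added for concreteness; values are rounded): with $n=600$, $k=5$, $\alpha=0.05$ we have $n\log n \approx 3838.2$, $-\log(-\log(1-\alpha)) \approx 2.970$ and $-\log \alpha \approx 2.996$, so
$$ \Big \lfloor \dfrac{3838.2 + 600(2.970)}{5} \Big \rfloor = \Big \lfloor \dfrac{5620.2}{5} \Big \rfloor = 1124, \qquad \Big \lfloor \dfrac{3838.2 + 600(2.996) + 5}{5} \Big \rfloor = \Big \lfloor \dfrac{5640.8}{5} \Big \rfloor = 1128.$$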
The problem could also have been solved using the Chen-Stein bound for Poisson approximation. Let $X^{(n)}_{m,i}$ denote the indicator of the event that player $i$ has not appeared in any of the first $m$ packs and let $W^{(n)}_m :=\sum_{i=1}^n X^{(n)}_{m,i}$ denote the total number of players not appearing in any of the first $m$ packs. Then
$$ T_n = \inf \left\{m \geq 1 \mid W_m^{(n)}=0\right\} \Rightarrow \mathbb{P}(T_n > m) = \mathbb{P}(W_m^{(n)} >0),$$
where the implication follows from the observation that $m \mapsto W_m^{(n)}$ is non-increasing. We should therefore focus on finding the asymptotics of $W_m^{(n)}$.
Notice that for all $i \in [n]:=\left\{1,\ldots,n\right\}$,
$$ \mathbb{E}X^{(n)}_{m,i} = \mathbb{P} \left(\text{First $m$ draws do not contain player } i \right) = \left[ \dfrac{{n-1 \choose k}}{{n \choose k}}\right]^m =\left( \dfrac{n-k}{n}\right)^m ,$$
and hence
$$ \mathbb{E}W_m^{(n)} = n\left( \dfrac{n-k}{n}\right)^m .$$
To get a non-degenerate limit, we should vary $m=m_n$ with $n$ in such a way that $\mathbb{E}W_{m_n}^{(n)}$ is convergent. Since, $n(1-kn^{-1})^{m_n} \approx \exp(\log n -km_n/n)$, an intuitive choice seems to be $m_n = \lfloor (n\log n +cn)/k \rfloor$ for some $c \in \mathbb{R}$. We now focus to prove the convergence with this particular choice. %The situation clearly demands for a Poisson approximation technique. Since, $X_{m_n,i}^{(n)}$'s are not independent for different $i$'s, we go for \textit{Chen-Stein Bound}.
We shall use the following variant of \textit{Chen-Stein Bound}, see \cite[Theorem 4.31]{B05} for reference.
\begin{theorem}
Suppose we have a collection of \textit{Bernoulli} random variables $\left\{X_i : i \in I\right\}$ such that for every $i \in I$, we can construct random variables $\left\{Y_j^{(i)} : j \in I, j \neq i\right\}$ satisfying the following properties.
\begin{enumerate}
\item $\left( Y_j^{(i)} : j \neq i \right) \stackrel{d}{=} \left( X_j : j \neq i \right) \mid X_i=1.$
\item $Y_j^{(i)} \leq X_j$, for all $j \neq i$.
\end{enumerate}
Let $W=\sum_{i \in I} X_i$ and $\lambda = \mathbb{E}(W)$. Then
$$ \Big \rvert \Big \rvert \mathcal{L}(W) - \text{Poi}(\lambda) \Big \rvert \Big \rvert_{TV} \leq \lambda - \operatorname{Var}(W).$$
\end{theorem}
To apply the above theorem, we first need to establish that the collection $\left\{X_{m_n,i}^{(n)} : i =1, \ldots, n\right\}$ satisfies the required condition. For this part we drop the index $n$ (denoting the total number of players) from the notation for the sake of simplicity. The construction of the coupled random variables is quite easy. Suppose that $X_{m,i}=1$, i.e. all of the first $m$ packs have avoided the $i$-th player. Then the conditional joint distribution of $(X_{m,j} : j \neq i)$ given $X_{m,i}=1$ is the same as the joint distribution under the draw mechanism in which packs are drawn according to a SRSWOR scheme from the pool of all players excluding $i$, the draws being independent of each other.
The way to couple this with our original scheme is simple. You first perform the original scheme, where you draw packs from the entire pool of players. If $X_{m,i}=1$, do nothing. Otherwise, consider all the packs drawn that have player $i$. Delete player $i$ from the pack and replace it with a player drawn uniformly from the pool of players excluding player $i$ and the players already present in the pack. Define $Y_{m,j}^{(i)}$ to be $1$ if the player $j$ is not present in any of the $m$ packs after this alteration, and $0$ otherwise. Clearly this collection satisfies the properties mentioned in the above theorem.
Now define,
$$ p_{n,i} := \mathbb{E}X^{(n)}_{m_n,i} = (1-kn^{-1})^{m_n}, \; \lambda_n := \sum_{i =1}^n p_{n,i} = n(1-kn^{-1})^{m_n}.$$
By the \textit{Chen-Stein Bound}, we have
$$ \Big \rvert \Big \rvert \mathcal{L}(W_{m_n}^{(n)}) - \text{Poi}(\lambda_n) \Big \rvert \Big \rvert_{TV} \leq \lambda_n - \operatorname{Var}(W^{(n)}_{m_n}).$$
Note that
\begin{align*}
\log \lambda_n = \log n +m_n \log (1-kn^{-1}) = \log n +m_n (-kn^{-1}+O(n^{-2})) &= \log n -k m_n/n + O(n^{-1}\log n) \\
&= \log n - (\log n +c) + O(n^{-1}) + O(n^{-1}\log n) \\
&\longrightarrow -c,
\end{align*}
in other words, $\lambda_n \to \exp(-c)$. On the otherhand,
\begin{align*}
\operatorname{Var}(W^{(n)}_{m_n}) = \sum_{i =1}^n \operatorname{Var}(X_{m_n,i}^{(n)}) + \sum_{i \neq j \in [n]} \operatorname{Cov}(X_{m_n,i}^{(n)}, X_{m_n,j}^{(n)}) &= \sum_{i \in [n]} p_{n,i}(1-p_{n,i}) + \sum_{i \neq j \in [n]} \operatorname{Cov}(X_{m_n,i}^{(n)}, X_{m_n,j}^{(n)}) \\
&= \lambda_n - \dfrac{\lambda_n^2}{n} + \sum_{i \neq j \in [n]} \operatorname{Cov}(X_{m_n,i}^{(n)}, X_{m_n,j}^{(n)}).
\end{align*}
For any $i \neq j$,
\begin{align*}
\operatorname{Cov}(X_{m_n,i}^{(n)}, X_{m_n,j}^{(n)}) &= \mathbb{P}(X_{m_n,i}^{(n)}=X_{m_n,j}^{(n)}=1) - p_{n,i}^2 \\
&= \mathbb{P} \left(\text{Player $i$ and $j$ both are not present in the first $m_n$ packs} \right) - p_{n,i}^2 \\
& = \left[ \dfrac{{n-2 \choose k}}{{n \choose k}}\right]^{m_n} - \left[ 1- \dfrac{k}{n}\right]^{2m_n} = \left[1- \dfrac{2nk-k^2-k}{n(n-1)}\right]^{m_n} - \left[ 1- \dfrac{k}{n}\right]^{2m_n} <0.
\end{align*}
Using these observations, we can write
\begin{align*}
\sum_{i \neq j \in [n]} \operatorname{Cov}(X_{m_n,i}^{(n)}, X_{m_n,j}^{(n)}) &= n(n-1) \left( \left[1- \dfrac{2nk-k^2-k}{n(n-1)}\right]^{m_n} - \left[ 1- \dfrac{k}{n}\right]^{2m_n} \right).
\end{align*}
Note that for any $a_n = a+ O(1/n)$ with $a >0$, the following asymptotics holds.
\begin{align*}
m_n \log (1-a_nn^{-1}) = m_n(-a_nn^{-1}+O(n^{-2})) = -a_n( \log n +c)/k + O(n^{-1})+ O(n^{-1}\log n),
\end{align*}
and hence
$$ (1-a_nn^{-1})^{m_n} = \exp \left( -\dfrac{a_n}{k}\log n - \dfrac{a_nc}{k} + O(n^{-1}\log n)\right) = \exp \left( -\dfrac{a}{k}\log n - \dfrac{ac}{k} + o(1)\right) .$$
Since, $(2nk-k^2-k)/(n-1) = 2k + O(1/n)$, plugging in these asymptotics, we get
\begin{align*}
\Bigg \rvert \sum_{i \neq j \in [n]} \operatorname{Cov}(X_{m_n,i}^{(n)}, X_{m_n,j}^{(n)}) \Bigg \rvert
& \leq n^2 \left( \left[ 1- \dfrac{k}{n}\right]^{2m_n} -\left[1- \dfrac{2nk-k^2-k}{n(n-1)}\right]^{m_n} \right) \\
& = \exp(2 \log n) \left[ \exp \left(-\dfrac{2k}{k}\log n -\dfrac{2kc}{k} + o(1) \right) -\exp \left(-\dfrac{2k}{k}\log n -\dfrac{2kc}{k} + o(1) \right) \right]= o(1).
\end{align*}
Therefore,
$$\lambda_n - \operatorname{Var}(W^{(n)}_{m_n}) = \dfrac{\lambda_n^2}{n} - \sum_{i \neq j \in [n]} \operatorname{Cov}(X_{m_n,i}^{(n)}, X_{m_n,j}^{(n)}) = O(n^{-1}) + o(1) = o(1).$$
\textit{Chen-Stein Bound}, hence, implies that $\Big \rvert \Big \rvert \mathcal{L}(W_{m_n}^{(n)}) - \text{Poi}(\lambda_n) \Big \rvert \Big \rvert_{TV}$ converges to $0$. Since, $\lambda_n \to e^{-c}$, we get
$$\Big \rvert \Big \rvert \mathcal{L}(W_{m_n}^{(n)}) - \text{Poi}(e^{-c}) \Big \rvert \Big \rvert_{TV} \to 0.$$
This follows from the fact that if $\nu_n \to \nu \in (0,\infty)$, then $\text{Poi}(\nu_n)$ converges in distribution to $\text{Poi}(\nu)$ and hence the convergence occurs in total variation distance (by \textit{Scheffe's Lemma}). From the convergence in total variation distance, we conclude that
$$ \mathbb{P}(T_n > m_n) = \mathbb{P}(W_{m_n}^{(n)} > 0) \longrightarrow \mathbb{P}(\text{Poi}(e^{-c}) >0) = 1-\exp(-e^{-c}).$$
Taking $c=-\log(-\log(1-\alpha))$ we get back our previous answer. |
2015-q5 | Call a subset A of [0, 1] “small” if every Borel subset of A has Lebesgue measure zero. Last year, a Stanford statistics student needed to know if the union of two small sets is small. What would you say? Justify your answer. | I shall present one existential solution. I couldn't do a constructive one. You are welcome to give a simpler constructive solution.
We assume that union of any two small sets is a small set and shall show that this leads to a contradiction.
We divide the proof into several steps to make it more presentable.
\begin{enumerate}
\item \textbf{Step 1 :} Consider the following two definitions of \textit{Outer} and \textit{Inner} measures. Below, $\lambda$ denotes the \textit{Lebesgue measure} on $[0,1]$. For any $A \subseteq [0,1]$, let
$$ \lambda_{*}(A) := \sup \left\{ \lambda(F) : F \subseteq A, F \in \mathcal{B}_{[0,1]}\right\}, \;\; \lambda^{*}(A) := \inf \left\{ \lambda(G) : G \supseteq A, G \in \mathcal{B}_{[0,1]}\right\}.$$
From the definition, it is obvious that $\lambda_{*} \leq \lambda^{*}$. Also both the set-functions are clearly non-negative and non-decreasing. The definition above implies that for Borel set $A$, $\lambda(A) \leq \lambda_{*}(A) \leq \lambda^{*}(A) \leq \lambda(A)$, which means $\lambda^{*}=\lambda=\lambda_{*}$ on $\mathcal{B}_{[0,1]}$.
\item \textbf{Step 2:} \textit{For all $A \subseteq [0,1]$, we can find $F,G \in \mathcal{B}_{[0,1]}$, $F \subseteq A \subseteq G$ such that $\lambda(F)=\lambda_{*}(A)$ and $\lambda(G) = \lambda^{*}(A).$}
To prove the above statement, note that by definition there exists $F_n \in \mathcal{B}_{[0,1]}$, $F_n \subseteq A$ such that $\lambda(F_n) \geq (1-1/n)\lambda_{*}(A)$, for all $n \geq 1$. Set $F= \cup_{n \geq 1} F_n \in \mathcal{B}_{[0,1]}$ and $F \subseteq A$, hence $\lambda(F) \leq \lambda_{*}(A)$. On the otherhand, $\lambda(F) \geq \lambda(F_n) \geq (1-1/n)\lambda_{*}(A)$, for all $n \geq 1$, implying that $\lambda(F) \geq \lambda_{*}(A).$ This proves $\lambda(F)= \lambda_{*}(A).$
For the other part the argument is exactly similar. There exists $G_n \in \mathcal{B}_{[0,1]}$, $G_n \supseteq A$ such that $\lambda(G_n) \leq (1+1/n)\lambda^{*}(A)$, for all $n \geq 1$. Set $G= \cap_{n \geq 1} G_n \in \mathcal{B}_{[0,1]}$ and $G \supseteq A$, hence $\lambda(G) \geq \lambda^{*}(A)$. On the otherhand, $\lambda(G) \leq \lambda(G_n) \leq (1+1/n)\lambda^{*}(A)$, for all $n \geq 1$, implying that $\lambda(G) \leq \lambda^{*}(A).$ This proves $\lambda(G)= \lambda^{*}(A).$
\item \textbf{Step 3:} \textit{For any disjoint collection of subsets of $[0,1]$, say $\left\{A_i : i \geq 1\right\}$ we have the following.}
$$ \sum_{i \geq 1} \lambda_{*}(A_i) \leq \lambda_{*} \left( \bigcup_{i \geq 1} A_i \right) \leq \lambda^{*} \left( \bigcup_{i \geq 1} A_i \right) \leq \sum_{i \geq 1} \lambda^{*}(A_i).$$
It is enough to prove the left-most and the right-most inequalities.
Fix $\varepsilon >0$ and get $F_n \subseteq A_n \subseteq G_n$, $F_n, G_n \in \mathcal{B}_{[0,1]}$ such that
$$ \lambda_{*}(F_n) \geq \lambda_{*}(A_n) - 2^{-n}\varepsilon; \;\; \lambda^{*}(G_n) \leq \lambda^{*}(A_n) + 2^{-n}\varepsilon, \; \forall \; n \geq 1.$$
Note that $F_n$'s are disjoint and so are $G_n$'s. $\cup_{n \geq 1} F_n \subseteq A := \cup_{n \geq 1} A_n \subseteq \cup_{n \geq 1} G_n$. Also both the first and the last sets in the previous line are Borel-measurable. Therefore,
$$ \lambda_{*}(A) \geq \lambda \left( \bigcup_{n \geq 1} F_n \right) = \sum_{n\geq 1} \lambda(F_n) \geq \sum_{n\geq 1} \left( \lambda_{*}(A_n) - 2^{-n}\varepsilon \right) = \sum_{n\geq 1} \lambda_{*}(A_n) - \varepsilon,$$
and
$$ \lambda^{*}(A) \leq \lambda \left( \bigcup_{n \geq 1} G_n \right) = \sum_{n\geq 1} \lambda(G_n) \leq \sum_{n\geq 1} \left( \lambda^{*}(A_n) + 2^{-n}\varepsilon \right) = \sum_{n\geq 1} \lambda^{*}(A_n) + \varepsilon.$$
Taking $\varepsilon \downarrow 0$, we complete the proof.
Notice that until now we haven't used the assumption that is supposed to help us to arrive at a contradiction. The next and the last step uses that assumption.
\item \textbf{Step 4:} \textit{$\lambda_{*}(A)=\lambda^{*}(A)$ for any $A \subseteq [0,1]$, under the assumption.}
Fix $A \subseteq [0,1]$ and get $F, G$ as in Step 2. Let $C$ be any Borel subset of $A \setminus F$. Then $F \cup C$ is also a Borel subset of $A$ and hence
$$ \lambda(F) = \lambda_{*}(A) \geq \lambda(F \cup C) = \lambda(F) + \lambda(C),$$
implying that $\lambda(C)=0$. Therefore, $A \setminus F$ is a small set. Similar argument shows that $G \setminus A$ is also a small set. Hence, by assumption, $G \setminus F = (G \setminus A) \cup (A \setminus F)$ is also a small set. But $G \setminus F$ is itself a Borel set and therefore, $\lambda(G \setminus F) =0$. Since $F \subseteq G$, this implies that $\lambda(F)=\lambda(G)$, completing our proof.
\end{enumerate}
Steps 3 and 4 together imply that $\lambda^{*}$ is a countably-additive set function on $\mathcal{P}([0,1])$, the power set of $[0,1]$. By Step 1, $\lambda^{*}$ is non-negative and matches $\lambda$ on $\mathcal{B}_{[0,1]}$. Moreover, by Steps 2 and 4, every $A \subseteq [0,1]$ is sandwiched between Borel sets $F \subseteq A \subseteq G$ with $\lambda(G \setminus F)=0$, so every subset of $[0,1]$ would be Lebesgue-measurable. This is a contradiction, since (using the axiom of choice) there exist subsets of $[0,1]$, e.g. Vitali sets, that are not Lebesgue-measurable. Hence the union of two small sets need not be small.
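For completeness, here is the standard construction of such a non-measurable set (added as a reminder; it is classical and not part of the original argument). Declare $x \sim y$ for $x,y \in [0,1)$ if $x-y \in \mathbb{Q}$, and use the axiom of choice to pick one representative from each equivalence class, forming $V \subseteq [0,1)$. If $V$ were Lebesgue-measurable, the pairwise disjoint translates $\left\{V+q \; (\mathrm{mod}\; 1) : q \in \mathbb{Q}\cap[0,1)\right\}$ would cover $[0,1)$, and countable additivity together with translation invariance would force
$$ 1 = \lambda([0,1)) = \sum_{q \in \mathbb{Q}\cap[0,1)} \lambda\left(V+q \; (\mathrm{mod}\; 1)\right) = \sum_{q \in \mathbb{Q}\cap[0,1)} \lambda(V),$$
which is impossible whether $\lambda(V)=0$ or $\lambda(V)>0$. Hence $V$ is not Lebesgue-measurable.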
2015-q6 | Given an even number n, let $(X_{n,1}, \ldots ,X_{n,n})$ be a random vector chosen uniformly from the set of all n-tuples of −1’s and 1’s that contain an equal number of −1’s and 1’s. For $\alpha \in (0, 1)$, let $$ S^{(\alpha)}_n := \frac{1}{\sqrt{n}} \sum_{i=1}^{\lfloor \alpha n \rfloor} X_{n,i} $$ where $\lfloor \alpha n \rfloor$ is the integer part of $\alpha n$. Compute the limiting distribution of $S^{(\alpha)}_n$ when $\alpha$ is fixed and $n \to \infty$ through even numbers. | We shall present three different solutions. Each of them has something to teach.
\begin{enumerate}[label=(\arabic*)]
\descitem{CLT for exchangeable arrays}
The following theorem is an useful result which extends the CLT for exchangeable arrays.
\begin{theorem}{\label{thm}}
Suppose $\left\{X_{n,i}\right\}_{1 \leq i \leq k_n; n \geq 1}$ is a triangular array of random variables such that $\left\{X_{n,i}\right\}_{1 \leq i \leq k_n}$ is an exchangeable collection for each $n \geq 1$. Also suppose the following assumptions.
\begin{enumerate}
\item $\sum_{i=1}^{k_n} X_{n,i} =0$, $\sum_{i=1}^{k_n} X_{n,i}^2 =k_n$
and $k_n \to \infty$, while $\left\{m_n\right\}$ is a sequence of integers with $1 \leq m_n \leq k_n$ and $m_n/k_n \to \alpha \in (0,1)$.
\item
$$ \max_{1 \leq i \leq k_n} \dfrac{X_{n,i}}{\sqrt{k_n}} \stackrel{p}{\longrightarrow} 0$$.
\end{enumerate}
Then
$$ \dfrac{1}{\sqrt{m_n}} \sum_{i=1}^{m_n} X_{n,i} \stackrel{d}{\longrightarrow} N(0,1-\alpha). $$
\end{theorem}
See \cite[Theorem 5.8]{B02} for more details. To get more stronger results, see \cite[Theorem 2]{B03}. They might be useful in some other problems.
In our problem $\left\{X_{n:1}, \ldots, X_{n:n}\right\}$ is clearly an exchangeable collection. Set $k_n=n, m_n= \lfloor \alpha n \rfloor$. The conditions are clearly satisfied since $|X_{n,i}|=1$ almost surely. Therefore
$$ S_n^{(\alpha)} = \dfrac{\sqrt{m_n}}{\sqrt{n}} \dfrac{1}{\sqrt{m_n}} \sum_{i=1}^{m_n} X_{n,i} \stackrel{d}{\longrightarrow} \sqrt{\alpha} N(0,1-\alpha) = N(0,\alpha(1-\alpha)).$$
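As a quick sanity check on the limiting variance (an added remark, not part of the original solution): since $\sum_{i=1}^n X_{n,i}=0$ and $X_{n,i}^2=1$, exchangeability gives $\mathbb{E}(X_{n,1}X_{n,2}) = -1/(n-1)$, and therefore
$$ \operatorname{Var}\left(S_n^{(\alpha)}\right) = \dfrac{1}{n}\left[ \lfloor \alpha n \rfloor - \dfrac{\lfloor \alpha n \rfloor(\lfloor \alpha n \rfloor-1)}{n-1}\right] \longrightarrow \alpha(1-\alpha),$$
which matches the variance of the limiting normal distribution.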
\descitem{Reverse MG and MG CLT}
This arguments basically follows the proof of Theorem~\ref{thm} and picks out the bits needed and presents a simplified version for this special case.
Define, $S_{n,k}:= \sum_{j=1}^k X_{n,j}$, for all $1 \leq k \leq n.$ Consider the following $\sigma$-algebras. Let
$$\mathcal{F}_{n,-j}:=\sigma(S_{n,k} : j \leq k \leq n), \forall \; 1 \leq j \leq n. $$
Clearly, $\left\{\mathcal{F}_{n,-j} : 1 \leq j \leq n\right\}$ is a non-increasing filtration. Since the collection $(X_{n,1}, \ldots, X_{n,n})$ is exchangeable, we have
$$ \mathbb{E}(X_{n,1} \mid \mathcal{F}_{n,-j}) = \dfrac{1}{j}S_{n,j}, \; \forall \; 1 \leq j \leq n.$$
Also note that, since $S_{n,n}=0$, we have $\mathbb{E}(X_{n,1} \mid \mathcal{F}_{n,-n}) = 0.$ Therefore,
$$ \sum_{k=1}^{\lfloor \alpha n \rfloor} X_{n,k} = S_{n, \lfloor \alpha n \rfloor} = \lfloor \alpha n \rfloor \mathbb{E}(X_{n,1} \mid \mathcal{F}_{n, - \lfloor \alpha n \rfloor}) = \lfloor \alpha n \rfloor \sum_{j=\lfloor \alpha n \rfloor +1}^n \left( \mathbb{E}(X_{n,1} \mid \mathcal{F}_{n,-(j-1)}) - \mathbb{E}(X_{n,1} \mid \mathcal{F}_{n,-j}) \right).$$
The above expression shows that we have written the quantity of our interest as a sum of MG-difference sequence and paves the way for applying MG CLT.
Define,
$$\mathcal{G}_{n,k} = \mathcal{F}_{n,k-n}, \; \forall \; 0 \leq k \leq n-1; \; \; D_{n,k} = \mathbb{E}(X_{n,1} \mid \mathcal{G}_{n,k}) - \mathbb{E}(X_{n,1} \mid \mathcal{G}_{n,k-1}), \; \forall \; 1 \leq k \leq n-1.$$
Clearly, $\left\{D_{n,k}, \mathcal{G}_{n,k}, 1 \leq k \leq n-1\right\}$ is a MG-difference sequence with
$$ \dfrac{1}{\lfloor \alpha n \rfloor} \sum_{k=1}^{\lfloor \alpha n \rfloor} X_{n,k} = \sum_{j=\lfloor \alpha n \rfloor +1}^n D_{n,n-j+1} = \sum_{j=1}^{n-\lfloor \alpha n \rfloor } D_{n,j}.$$
We shall apply a slightly different version of MG CLT which is more amenable to our cause. The referred version is stated in the Appendix as Theorem~\ref{sethu} along with further citations. In view of Theorem~\ref{sethu}, we need to analyse two quantities asymptotically,
$$ \mathbb{E} \max_{j=1}^{n- \lfloor \alpha n \rfloor} |D_{n,j}| , \; \sum_{j=1}^{n - \lfloor \alpha n \rfloor} D_{n,j}^2.$$
All the $D_{n,k}$'s take values in $[-2,2]$ and
$$ D_{n,n-j+1} = \mathbb{E}(X_{n,1} \mid \mathcal{F}_{n,-(j-1)}) - \mathbb{E}(X_{n,1} \mid \mathcal{F}_{n,-j}) = \dfrac{S_{n,j-1}}{j-1} - \dfrac{S_{n,j}}{j}= \dfrac{S_{n,j-1}}{j(j-1)} - \dfrac{X_{n,j}}{j}.$$
Clearly,
$$ |D_{n,n-j+1}| \leq \dfrac{|S_{n,j-1}|}{j(j-1)} + \dfrac{|X_{n,j}|}{j} \leq \dfrac{2}{j},$$
and hence
$$ \max_{j=1}^{n- \lfloor \alpha n \rfloor} |D_{n,j}| \leq \max_{j=\lfloor \alpha n \rfloor +1}^n |D_{n,n-j+1}| \leq \max_{j=\lfloor \alpha n \rfloor +1}^n \dfrac{2}{j} \leq \dfrac{2}{\alpha n}.$$
On the otherhand,
\begin{align*}
\sum_{j=1}^{n - \lfloor \alpha n \rfloor} D_{n,j}^2 = \sum_{j= \lfloor \alpha n \rfloor +1}^n D_{n,n-j+1}^2 &= \sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{1}{j^2} \left( \dfrac{S_{n,j-1}}{j-1} - X_{n,j}\right)^2 \\
&= \sum_{j= \lfloor \alpha n \rfloor +1}^n j^{-2} - 2\sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{S_{n,j-1}X_{n,j}}{j^2(j-1)} + \sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{S_{n,j-1}^2}{j^2(j-1)^2}.
\end{align*}
The idea is to show that asymptotically only the first term matters among the above three.
$$\sum_{j= \lfloor \alpha n \rfloor +1}^n j^{-2} = n^{-1} \dfrac{1}{n}\sum_{j= \lfloor \alpha n \rfloor +1}^n \left( \dfrac{j}{n}\right)^{-2} = n^{-1} \left( \int_{\alpha}^{1} x^{-2}\, dx +o(1)\right) = n^{-1}\left( \dfrac{1-\alpha}{\alpha} + o(1)\right).$$
We shall show that
$$ n \sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{S_{n,j-1}X_{n,j}}{j^2(j-1)},\;\; n\sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{S_{n,j-1}^2}{j^2(j-1)^2} \stackrel{p}{\longrightarrow} 0.$$
By \textit{Cauchy-Schwarz Inequality},
$$ n^2 \Bigg \rvert \sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{S_{n,j-1}X_{n,j}}{j^2(j-1)}\Bigg \rvert^2 \leq n^2\left(\sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{X_{n,j}^2}{j^2} \right) \left(\sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{S_{n,j-1}^2}{j^2(j-1)^2} \right) = \left(n\sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{1}{j^2} \right) \left(n\sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{S_{n,j-1}^2}{j^2(j-1)^2} \right).$$
Thus it is enough to show that
$$ n\sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{S_{n,j-1}^2}{j^2(j-1)^2} \stackrel{p}{\longrightarrow} 0.$$
We shall actually show more, i.e.,
$$ n\sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{\mathbb{E}S_{n,j-1}^2}{j^2(j-1)^2} \longrightarrow 0.$$
Note that by exchangeable property, we have
$$ \mathbb{E}(X_{n,1}X_{n,2}) = \dfrac{1}{n(n-1)} \sum_{1 \leq i \neq j \leq n} \mathbb{E}(X_{n,i}X_{n,j}) = \dfrac{1}{n(n-1)} \mathbb{E} \left[ S_{n,n}^2 - \sum_{k=1}^n X_{n,k}^2\right] = - \dfrac{1}{n-1}, $$
and hence
$$ \mathbb{E}(S_{n,k}^2) = k\mathbb{E}(X_{n,1}^2) + k(k-1)\mathbb{E}(X_{n,1}X_{n,2}) = k - \dfrac{k(k-1)}{n-1} \leq k.$$
Thus
$$ n\sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{\mathbb{E}S_{n,j-1}^2}{j^2(j-1)^2} \leq n\sum_{j= \lfloor \alpha n \rfloor +1}^n \dfrac{1}{j^2(j-1)} \leq \dfrac{n^2}{(\alpha n)^2(\alpha n-1)} \longrightarrow 0.$$
Combining all of these we get
$$ n \sum_{j=1}^{n-\lfloor \alpha n \rfloor} D_{n,j}^2 \stackrel{p}{\longrightarrow} \dfrac{1-\alpha}{\alpha}.$$
Therefore,
$$ \left\{ \dfrac{\sqrt{n\alpha}}{\sqrt{1-\alpha}} D_{n,k}, \mathcal{G}_{n,k}, 1 \leq k \leq n-\lfloor \alpha n \rfloor\right\}$$
is a triangular array of MG differences satisfying conditions stated in Theorem ~\ref{sethu}. Therefore,
$$\dfrac{\sqrt{n\alpha}}{\sqrt{1-\alpha}} \dfrac{1}{\lfloor \alpha n \rfloor}\sum_{j=1}^{\lfloor \alpha n \rfloor} X_{n,j} =\sum_{j=1}^{n-\lfloor \alpha n \rfloor} \dfrac{\sqrt{n\alpha}}{\sqrt{1-\alpha}} D_{n,j} \stackrel{d}{\longrightarrow} N(0,1),$$
and hence \textit{Slutsky's Theorem} implies that
$$ \dfrac{1}{\sqrt{n}}\sum_{k=1}^{\lfloor \alpha n \rfloor} X_{n,k}\stackrel{d}{\longrightarrow} N(0,\alpha(1-\alpha)).$$
\descitem{Argument using Brownian Bridge}
The catch is to identify the situation as a discrete version of standard Brownian Bridge.
Let $\left\{Y_i\right\}_{i \geq 1}$ be an i.i.d. collection of $\text{Uniform}\left( \left\{-1,+1\right\}\right)$ variables. Set $T_n := \sum_{i=1}^n Y_i$. Then it is quite evident that for any even $n$, conditional on $(T_n=0)$, the random vector $(Y_1, \ldots, Y_n)$ is uniformly distributed over the set of all $n$-tuples of $+1$'s and $-1$'s whose sum is $0$. In other words,
$$ (Y_1, \ldots, Y_n) \Big \rvert (T_n=0) \stackrel{d}{=} (X_{n,1}, \ldots, X_{n,n}).$$
Therefore, for any $\alpha \in (0,1)$, we have
$$ S_n^{(\alpha)} := \dfrac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor \alpha n \rfloor} X_{n,i} \stackrel{d}{=} \dfrac{1}{\sqrt{n}} \sum_{i=1}^{\lfloor \alpha n \rfloor} Y_i \Big \rvert (T_n=0) \stackrel{d}{=} \dfrac{1}{\sqrt{n}} T_{\lfloor \alpha n \rfloor} \Big \rvert (T_n=0).$$
Since, $\mathbb{E}Y_1=0$ and $\mathbb{E}Y_1^2=1$, we have by \textit{Multivariate Central Limit Theorem}, we have
$$ \left(\dfrac{1}{\sqrt{n}} T_{\lfloor \alpha n \rfloor}, \dfrac{1}{\sqrt{n}} (T_n- T_{\lfloor \alpha n \rfloor}) \right) = \left(\dfrac{1}{\sqrt{n}} \sum_{i=1}^{\lfloor \alpha n \rfloor} Y_i , \dfrac{1}{\sqrt{n}} \sum_{i=\lfloor \alpha n \rfloor+1}^n Y_i\right) \stackrel{d}{\longrightarrow} N(0,\alpha) \otimes N(0,1-\alpha),$$
and hence,
$$ \left(\dfrac{1}{\sqrt{n}} T_{\lfloor \alpha n \rfloor}, \dfrac{1}{\sqrt{n}} T_n \right) \stackrel{d}{\longrightarrow} (W(\alpha), W(1)),$$
where $W$ is the standard Brownian Motion. The assertion can also be proved by applying \textit{Donsker's Invariance Principle} directly.
Therefore,
$$ S_n^{(\alpha)} \stackrel{d}{=} \dfrac{1}{\sqrt{n}} T_{\lfloor \alpha n \rfloor} \Big \rvert (T_n=0) \stackrel{d}{\longrightarrow} W(\alpha) \Big \rvert (W(1)=0) \sim N(0,\alpha(1-\alpha)).$$
The last argument is not entirely rigorous. To make it completely rigorous, use \textit{Local Limit Theorem}.
By \cite[Theorem 3.5.2]{B01}, we have
$$ \mathbb{P}(T_n=0) = \mathbb{P}(T_n/\sqrt{n}=0) \sim 2\phi(0)/\sqrt{n},$$ for even $n$, the factor $2$ arising because the walk $T_n$ lives on a lattice of span $2$.
A computation similar to the proof of \cite[Theorem 3.5.2]{B01} shows that
$$ \mathbb{P}\left(T_n=0, \dfrac{1}{\sqrt{n}}T_{\lfloor \alpha n \rfloor} \leq x\right) = \mathbb{P}\left(T_n=0, \dfrac{1}{\sqrt{n}}T_{\lfloor \alpha n \rfloor}-\dfrac{\alpha}{\sqrt{n}}T_n \leq x\right) \sim \dfrac{2\,\mathbb{P}(W(\alpha)-\alpha W(1) \leq x)\phi(0)}{\sqrt{n}}.$$
Combining the above asymptotics (the lattice factor $2$ cancels in the ratio), we complete the proof.
\end{enumerate} |
2016-q1 | The Brownian bridge is a Gaussian process $B^0(t), 0 \leq t \leq 1$, of zero mean, covariance $E[B^0(t)B^0(s)] = s \wedge t - st$ and such that $B^0(0) = B^0(1) = 0$. (a) Let $B(t), 0 \leq t \leq 1$, be standard Brownian motion. Show that $[B(t) - tB(1)]_{0 \leq t \leq 1} \stackrel{L}{=} [B^0(t)]_{0 \leq t \leq 1}$. (b) Find the distribution of $A(t) = \int_0^t B^0(s)\, ds$, $0 \leq t \leq 1$, as a process. | $(Z_t)_{0\leq t \leq 1} = (B_t -t B_1)_{0 \leq t\leq 1}$ is a Gaussian process with mean $0$ and covariance function:
$$
\operatorname{Cov}(B_t - t B_1, B_s -s B_1) = s \wedge t - st -st +st = s\wedge t- st. $$
An immediate computation shows that $Z_0 = Z_1 = 0$, which allows us to conclude that this process is a Brownian bridge.
$A$ is a Gaussian process, since this is the almost sure limit of the following sequence of Gaussian processes as $n \to \infty$,
$$ A^{(n)}_t := \dfrac{1}{n}\sum_{k=0}^{\lfloor nt \rfloor} B^0(k/n) =\dfrac{1}{n} \sum_{k=1}^n B^{0}(k/n) \mathbbm{1}(k/n \leq t), \;\; t \in [0,1].$$
The above statement is true since $B^0$ has almost surely continuous paths as a process. A Gaussian process is completely characterized by its mean and covariance function, hence it is enough to calculate those for $A$. Note that $B$ is a continuous time stochastic process on $[0,1]$ with continuous sample paths and hence is progressively measurable with respect to its natural filtration $\left\{ \mathcal{F}_t : 0 \leq t \leq 1 \right\}$. In particular, the map $(t,\omega) \mapsto B_t(\omega)$ from $[0,1]\times \Omega$ to $\mathbb{R}$ is jointly measurable with respect to $\mathcal{B}_{[0,1]} \otimes \mathcal{F}_1$, and hence the same can be guaranteed for $(t,\omega) \mapsto B^0(t,\omega)$. We can therefore use \textit{Fubini's Theorem} to conclude the following.
$$ \mathbb{E} A_t = \mathbb{E} \int_{[0,1]} B^0(s)\mathbbm{1}(0 \leq s \leq t)\, ds =\int_{[0,1]} \mathbb{E}B^0(s)\mathbbm{1}(0 \leq s \leq t)\, ds =0,$$
and
\begin{align*}
\operatorname{Cov}(A_s,A_t) = \mathbb{E} A_sA_t &= \mathbb{E} \int_{[0,1]^2} B^0(u)B^0(v)\mathbbm{1}(0 \leq u \leq s, 0 \leq v \leq t)\, du \, dv \\
&= \int_{[0,1]^2} \mathbb{E} \left(B^0(u)B^0(v)\right) \mathbbm{1}(0 \leq u \leq s, 0 \leq v \leq t)\, du \, dv \\
&= \int_{0}^t \int_{0}^s \left( u \wedge v - uv\right) \, du\, dv \\
&= \dfrac{(s \wedge t)^2 (s \vee t)}{2} - \dfrac{(s \wedge t)^3 }{6} - \dfrac{s^2t^2}{4}.
\end{align*} |
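The covariance formula just derived can be checked by simulation. The sketch below is an illustrative numerical check only (it assumes the \texttt{numpy} library; the grid size, sample size and the pair $(s,t)$ are arbitrary illustrative choices): it builds the bridge from a Brownian path via part (a), approximates $A(s)$ and $A(t)$ by Riemann sums, and compares the empirical covariance with the formula.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, reps = 2000, 4000                       # time grid of mesh 1/m
grid = np.arange(1, m + 1) / m
s, t = 0.3, 0.7
A_s, A_t = np.empty(reps), np.empty(reps)
for r in range(reps):
    B = np.cumsum(rng.normal(0.0, np.sqrt(1 / m), m))   # Brownian path on the grid
    B0 = B - grid * B[-1]                                # bridge, via part (a)
    A_s[r] = B0[grid <= s].sum() / m                     # Riemann sum for A(s)
    A_t[r] = B0[grid <= t].sum() / m
emp = np.cov(A_s, A_t)[0, 1]
thy = min(s, t) ** 2 * max(s, t) / 2 - min(s, t) ** 3 / 6 - s ** 2 * t ** 2 / 4
print(emp, thy)                            # both approximately 0.016
\end{verbatim}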
2016-q2 | Suppose $U_1, \ldots, U_n$ are points in the unit square $[0, 1]^2$, chosen independently at random, according to the same probability density $\rho$. Assume that $u \mapsto \rho(u)$ is continuous on $[0, 1]^2$ and $\kappa^{-1} \leq \rho(u) \leq \kappa$ for some constant $\kappa < \infty$. Let $\Delta_{i,n}$ denote the Euclidean distance of $U_i$ from its nearest neighbor among the remaining $n-1$ points.\ (a)\ \text{Show that } \sum_{i=1}^n \Delta_{i,n}^2 \text{ converges in probability, as } n \to \infty \text{, to a constant } \gamma.\ (b)\ \text{Give a formula for } \gamma \text{ in terms of the probability density } \rho. | Let $\mu$ be the probability measure on $\left(\mathbb{R}^2, \mathcal{B}_{\mathbb{R}^2}\right)$ with density $f = \rho \mathbbm{1}_{[0,1]^2}$. It is evident that the collection $\left\{\Delta_{i,n} : 1 \leq i \leq n\right\}$ is an exchangeable one. Therefore setting $S_n = \sum_{i=1}^n \Delta_{i,n}^2$, we can write the following.
\begin{equation}{\label{expec}}
\mathbb{E} \left(S_n \right) = n \mathbb{E} \Delta_{1,n}^2; \;\; \operatorname{Var}(S_n) = \mathbb{E}(S_n^2) - \left( \mathbb{E}S_n\right)^2 = n \mathbb{E} \left( \Delta_{1,n}^4\right) + n(n-1) \mathbb{E} \left( \Delta_{1,n}^2 \Delta_{2,n}^2\right) - n^2 \left( \mathbb{E} \Delta_{1,n}^2\right)^2.
\end{equation}
The following lemmas will now help us to complete our proof. Before that define the following quantities. For $k \in \mathbb{N}$, let
\begin{align*}
\gamma_k := \int_{0}^{\infty} \int_{[0,1]^2}k u^{k-1} \rho(x) \exp \left(-\pi \rho(x) u^2 \right) \, dx \, du &= \int_{[0,1]^2} \int_{0}^{\infty} ku^{k-1} \rho(x)\exp \left(-\pi \rho(x) u^2 \right) \, du \, dx \\
&= \dfrac{\Gamma(k/2)}{2 \pi^{k/2}}\int_{[0,1]^2} k\rho(x)^{1-k/2} \, dx.
\end{align*}
\begin{lemma}
For any $k \in \mathbb{N}$, we have $n^{k/2} \mathbb{E}\Delta_{1,n}^k \longrightarrow \gamma_k$ as $n \to \infty$.
\end{lemma}
\begin{proof}
$B_{x,r}$ shall denote the ball of radius $r$ around $x$. Then for any $r>0$, we have
\begin{align*}
\mathbb{P} \left(\Delta_{1,n} \geq r \right) &= \mathbb{P} \left( U_j \notin B_{U_1,r}; \; \forall \; 2 \leq j \leq n \right) \\
&= \int_{[0,1]^2} \mathbb{P} \left( U_j \notin B_{U_1,r}, \; \forall \; 2 \leq j \leq n \mid U_1=x\right) \, d\mu(x) \\
&= \int_{[0,1]^2} \mathbb{P} \left( U_j \notin B_{x,r}, \; \forall \; 2 \leq j \leq n\right) \, d\mu(x) \\
&= \int_{(0,1)^2} \left( 1-\mu \left( B_{x,r}\right)\right)^{n-1} \, \rho(x) \, dx.
\end{align*}
Note that in the last line the range of integration has been changed to $(0,1)^2$ since by definition $\mu([0,1]^2) = \mu((0,1)^2)=1$. Using \textit{Integration by Parts}, we get
\begin{align*}
n^{k/2} \mathbb{E}\Delta_{1,n}^k = \mathbb{E}\left[\left(\sqrt{n}\Delta_{1,n}\right)^k \right] &= \int_{0}^{\infty} kr^{k-1} \mathbb{P} \left(\sqrt{n}\Delta_{1,n} \geq r \right) \; dr = \int_{0}^{\infty} \int_{(0,1)^2}kr^{k-1} \rho(x) A_{n}(x,r) \, dx \, dr,
\end{align*}
where $A_{n}(x,r) := \left(1-\mu \left( B_{x,r/\sqrt{n}}\right) \right)^{n-1}$. We shall apply DCT to find the limit of the above integral. Since $\rho$ is continuous on $(0,1)^2$, we note that
\begin{align*}
\Big \rvert \mu \left( B_{x,r/\sqrt{n}}\right) - \rho(x)\pi r^2/n \Big \rvert = \Big \rvert \int_{B_{x,r/\sqrt{n}}}\rho(y) \; dy - \int_{B_{x,r/\sqrt{n}}}\rho(x) \; dy \Big \rvert \leq \dfrac{\pi r^2 ||\rho(\cdot)-\rho(x)||_{\infty, B_{x,r/\sqrt{n}}}}{n} = o(n^{-1}),
\end{align*}
for all $x \in (0,1)^2$. Hence for all $x \in (0,1)^2$, as $n \to \infty$, $$ \log A_{n}(x,r) = (n-1) \log \left( 1- \mu \left( B_{x,r/\sqrt{n}}\right)\right) \sim - n\mu \left( B_{x,r/\sqrt{n}}\right) = -n\left(\rho(x)\pi r^2/n +o(n^{-1}) \right) \sim -\pi r^2 \rho(x).$$
On the otherhand, for any $n \geq 2$ and $x \in (0,1)^2, r >0$, \begin{align*}\log A_{n}(x,r) = (n-1) \log \left( 1- \mu \left( B_{x,r/\sqrt{n}}\right)\right) &\leq - (n-1)\mu \left( B_{x,r/\sqrt{n}}\right) \\
& \leq - (n-1) \kappa^{-1} \lambda \left(B_{x,r/\sqrt{n}} \cap [0,1]^2 \right) \\
& \stackrel{(i)}{\leq} -(n-1)\kappa^{-1} \dfrac{\pi r^2}{4n} \leq - \dfrac{\pi r^2}{8 \kappa},
\end{align*}
where $(i)$ follows from the fact that for any disc drawn around $x \in [0,1]^2$, at least a quarter (by area) of the disc must lie inside $[0,1]^2$. Since, $$ \int_{0}^{\infty} \int_{(0,1)^2}kr^{k-1} \rho(x) \exp \left( - \dfrac{\pi r^2}{8 \kappa} \right) \, dx \, dr = \int_{0}^{\infty} kr^{k-1} \exp \left( - \dfrac{\pi r^2}{8 \kappa} \right) \, dr < \infty, $$ we apply DCT to conclude that $$ n^{k/2} \mathbb{E}\Delta_{1,n}^k \longrightarrow \int_{0}^{\infty} \int_{(0,1)^2}kr^{k-1} \rho(x) \exp \left( -\pi r^2 \rho(x)\right) \, dx \, dr = \gamma_k. $$ \end{proof}
\begin{lemma}
For any $k,l \in \mathbb{N}$, we have $n^{(k+l)/2} \mathbb{E}\left(\Delta_{1,n}^k \Delta_{2,n}^l\right) \longrightarrow \gamma_k \gamma_l$ as $n \to \infty.$
\end{lemma}
\begin{proof}
We shall use the following \textit{Integration by Parts formula}. For any $X,Y \geq 0$, $$ \mathbb{E}\left(X^k Y^l \right)= \int_{(0,\infty)^2} klt^{k-1}s^{l-1} \mathbb{P}\left( X \geq t, Y \geq s\right)\, dt \, ds. $$For any $r,s>0$, we have\begin{align*} \mathbb{P} \left(\Delta_{1,n} \geq r, \Delta_{2,n} \geq s \right) &= \mathbb{P} \left( U_j \notin B_{U_1,r}, B_{U_2,s}; \; \forall \; 3 \leq j \leq n; \; ||U_1-U_2||_2 \geq r,s \right) \\&= \int_{[0,1]^2 \times [0,1]^2} \mathbb{P} \left( U_j \notin B_{U_1,r} \cup B_{U_2,s}, \; \forall \; 3 \leq j \leq n \mid U_1=x, U_2=y\right) \, \mathbbm{1}(||x-y||_2 \geq r,s)d(\mu \otimes \mu)(x,y) \\&= \int_{(0,1)^2 \times (0,1)^2} \left( 1-\mu \left( B_{x,r} \cup B_{y,s}\right)\right)^{n-2} \, \rho(x)\rho(y) \mathbbm{1}(||x-y||_2 \geq r,s) \, dx \, dy.\end{align*}Using \textit{Integration by Parts}, we get\begin{align*} n^{(k+l)/2} \mathbb{E}\left(\Delta_{1,n}^k \Delta_{2,n}^l \right) = \mathbb{E}\left[\left(\sqrt{n}\Delta_{1,n}\right)^k \left(\sqrt{n}\Delta_{2,n}\right)^l \right] &= \int_{(0,\infty)^2} klr^{k-1}s^{l-1} \mathbb{P} \left(\sqrt{n}\Delta_{1,n} \geq r, \sqrt{n}\Delta_{2,n} \geq s \right) \; dr \; ds\
& = \int_{(0,\infty)^2} \int_{(0,1)^2 \times (0,1)^2}klr^{k-1}s^{l-1} \rho(x)\rho(y) A_{n}(x,y,r,s) \, dx \,dy \, dr \, ds,\end{align*}
where $A_{n}(x,y,r,s) := \left(1-\mu \left( B_{x,r/\sqrt{n}} \cup B_{y,s/\sqrt{n}}\right) \right)^{n-2}\mathbbm{1}(\sqrt{n}||x-y||_2 \geq r,s)$. For $x \neq y \in (0,1)^2$, and $r,s>0$, we have $B_{x,r/\sqrt{n}}$ and $B_{y,s/\sqrt{n}}$ to be disjoint for large enough $n$ and therefore, as $n \to \infty$, $$ \mu \left( B_{x,r/\sqrt{n}} \cup B_{y,s/\sqrt{n}} \right) = \mu \left( B_{x,r/\sqrt{n}} \right) + \mu \left( B_{y,s/\sqrt{n}} \right) = \rho(x)\pi r^2/n + \rho(y)\pi s^2/n + o(n^{-1}).$$ Hence for all $x \neq y \in (0,1)^2$; $r,s >0$; as $n \to \infty$, $ \log A_{n}(x,y,r,s) \longrightarrow -\pi r^2 \rho(x)-\pi s^2\rho(y).$ On the other hand, for any $n \geq 3$ and $x \neq y \in (0,1)^2, r >s >0$, \begin{align*} \log A_{n}(x,y,r,s) = (n-2) \log \left( 1- \mu \left( B_{x,r/\sqrt{n}} \cup B_{y,s/\sqrt{n}} \right)\right) &\leq - (n-2) \mu \left( B_{x,r/\sqrt{n}} \cup B_{y,s/\sqrt{n}} \right) \\& \leq - (n-2) \mu \left( B_{x,r/\sqrt{n}} \right) \\&\leq -(n-2) \kappa^{-1} \lambda \left(B_{x,r/\sqrt{n}} \cap [0,1]^2 \right) \\& \leq -(n-2)\kappa^{-1} \dfrac{\pi r^2}{4n} \leq - \dfrac{\pi r^2}{12 \kappa} \leq - \dfrac{\pi (r^2+s^2)}{24 \kappa}. \end{align*} Since, $$ \int_{(0,\infty)^2} \int_{(0,1)^2 \times (0,1)^2}klr^{k-1}s^{l-1} \rho(x)\rho(y) \exp \left( - \dfrac{\pi (r^2+s^2)}{24 \kappa} \right) \, dx \, dy \, dr \, ds= \int_{(0,\infty)^2} klr^{k-1}s^{l-1} \exp \left( - \dfrac{\pi (r^2+s^2)}{24 \kappa} \right) \, dr\, ds $$ is finite, we apply DCT to conclude that $$ n^{(k+l)/2} \mathbb{E}\left(\Delta_{1,n}^k\Delta_{2,n}^l \right) \longrightarrow \int_{(0,\infty)^2} \int_{(0,1)^2 \times (0,1)^2}kr^{k-1}ls^{l-1} \rho(x) \rho(y)\exp \left( -\pi r^2 \rho(x)-\pi s^2\rho(y)\right) \, dx \, dy\, dr\, ds = \gamma_k\gamma_l. $$ \end{proof}
Using Equation~(\ref{expec}) and the above lemmas, we get $\mathbb{E}\left(S_n \right) \longrightarrow \gamma_2$ as $n \to \infty$ and $$ \operatorname{Var}(S_n) = n \left( n^{-2}\gamma_4 + o(n^{-2})\right) + n(n-1) \left( n^{-2}\gamma_2^2 + o(n^{-2})\right) - n^2 \left( n^{-1}\gamma_2 +o(n^{-1})\right)^2 =o(1). $$
Therefore,$$ S_n = \sum_{i=1}^n \Delta_{i,n}^2 \stackrel{P}{\longrightarrow} \gamma_2 = \dfrac{1}{2 \pi}\int_{[0,1]^2} 2 \, dx = \dfrac{1}{\pi}. $$ |
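For the uniform density $\rho \equiv 1$ the limit is $1/\pi$, and this is easy to check numerically. The sketch below is an illustrative check only (it assumes the \texttt{numpy} and \texttt{scipy} libraries, and the sample size is an arbitrary illustrative choice).
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n = 20000
pts = rng.random((n, 2))                   # rho = 1 on the unit square
dist, _ = cKDTree(pts).query(pts, k=2)     # column 0 is the point itself
print((dist[:, 1] ** 2).sum(), 1 / np.pi)  # both approximately 0.318
\end{verbatim}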
2016-q3 | Suppose $\tilde{S}_n = n^{-1/2} \sum_{i=1}^n X_i$ for i.i.d. $\mathbb{R}$-valued, random variables $\{X_i\}$ that are defined on the same probability space $\Omega, \mathcal{F}, \mathbb{P}$.\ Let $\mathcal{H} = \bigcap_{n=1}^\infty \mathcal{H}_n$ for $\mathcal{H}_n = \sigma(\tilde{S}_k, k \geq n)$. That is, $\mathcal{H}_n \subseteq \mathcal{F}$ is the smallest $\sigma$-algebra on which each $\tilde{S}_k$ for $k \geq n$ is measurable.\ (a)\ \text{Prove that } \mathcal{H} \text{ is trivial. That is, } \mathbb{P}(A) = [\mathbb{P}(A)]^2 \text{ for any } A \in \mathcal{H}.\ (b)\ \text{Is it possible to have independent but non-identically distributed } \{X_i\}, \text{ with } \mathbb{E}X_i = 0 \text{ and } \mathbb{E}X_i^2 = 1 \text{ such that } \tilde{S}_n \text{ converges in distribution to a limit } G \text{ having Var(G) = 2? Justify your answer.} | Let $X_{i:n}$ denote the $i$-th smallest element of the set $\left\{X_1, \ldots, X_n\right\}$.
Note that since
$$\mathcal{H}_n \subseteq \sigma\{ X_{1:n},...,X_{n:n}, X_{n+1}, X_{n+2}, \ldots\},$$
we have $\mathcal{H}_n \subseteq \mathcal{E}_n$, where $\mathcal{E}_n$ is the collection of events that are invariant under permutations of the first $n$ variables.
We can immediately conclude that $\mathcal{H} \subseteq \cap_{m \geq 1}\mathcal{E}_m = \mathcal{E}$, the exchangeable $\sigma$-algebra, which is itself trivial from \textit{Hewitt-Savage 0-1 law}.
Suppose that we can find such a sequence $\{X_i\}_{i\geq 1}$. Note that $\mathbb{E}\tilde{S}_n =0$ and $\operatorname{Var}(\tilde{S}_n)=1$ under the assumptions. Using \textit{Skorokhod's Representation Theorem}, we can find $\left\{T_n\right\}_{n \geq 1}$ and $T$ such that $T_n \stackrel{a.s.}{\longrightarrow} T$, $T \stackrel{d}{=} G$ (so that $\operatorname{Var}(T)=2$), and $T_n \stackrel{d}{=}\tilde{S}_n$ for all $n \geq 1$. Then by \textit{Fatou's Lemma},
$$ 2 \leq \mathbb{E}(T^2) \leq \liminf_{n\to \infty} \mathbb{E}(T_n^2) = \liminf_{n\to \infty} \mathbb{E}(\tilde{S}_n^2) =1, $$
leading to a contradiction. |
2016-q4 | Attach a $[0, 1]$-valued weight $\omega_e$ to each edge $e$ of the integer lattice $\mathbb{Z}^d$, such that the $\omega_e$'s are i.i.d. random variables. Define the weight $w_\Gamma$ of a path $\Gamma$ in the lattice to be the sum of weights $\omega_e$ over the edges $e$ along the path $\Gamma$. A path $\Gamma$ is called "self-avoiding" if it never visits a vertex more than once. Let $T_n$ be the minimum among the weights $w_\Gamma$ of all self-avoiding paths of length $n$ which start at the origin.\ (a)\ \text{Show that if } \mathbb{P}(\omega_e = 0) = 0, \text{ then there exists a constant } \epsilon > 0 \text{ such that } \mathbb{P}(T_n \geq \epsilon n) \to 1 \text{ when } n \to \infty.\ (b)\ \text{Show that } \text{Var}(T_n) \leq Cn \text{ for some } C < \infty \text{ and all } n. | Let $\psi(\theta) := \mathbb{E} \exp(-\theta \omega_e)$, for all $ \theta>0$. Since, $\omega_e \geq 0$ almost surely, we can apply DCT and conclude that $\psi(\theta) \longrightarrow \mathbb{E}\left(\mathbbm{1}(\omega_e=0)\right) =0$ as $\theta \to \infty$.
Let $\Lambda_n$ be the set of all self-avoiding paths of length $n$ starting from origin. Since each vertex in the lattice has $2d$ neighbours, when we construct an self-avoiding path, we have at most $2d$ many choices for adding a new edge at each step. Thus $|\Lambda_n| \leq (2d)^n$. Therefore, for any $\theta, \varepsilon>0$,
\begin{align*}
\mathbb{P}\left(T_n < \varepsilon n \right) &= \mathbb{P} \left(\exists \, \Gamma \in \Lambda_n \text{ such that } w_{\Gamma} < \varepsilon n \right) \\
& \leq \sum_{\Gamma \in \Lambda_n} \mathbb{P} \left( w_{\Gamma} < \varepsilon n \right) \\
& = \sum_{\Gamma \in \Lambda_n} \mathbb{P} \left[ \exp(-\theta w_{\Gamma}) > \exp \left(-\theta \varepsilon n \right) \right] \\
& \leq |\Lambda_n| \exp \left(\theta \varepsilon n \right) \left[\psi(\theta)\right]^n \\
& \leq \exp \left[ n \log (2d) + \theta \varepsilon n + n \log \psi(\theta)\right].
\end{align*}
Get $\theta_0 >0$ finite but large enough such that $-\log \psi(\theta_0) > \log(2d) + 2$. This is possible by the observation in the first paragraph. Set $\varepsilon = 1/\theta_0 >0$. Then $\log(2d) + \theta_0 \varepsilon + \log \psi(\theta_0) < -1$ and hence
$$ \mathbb{P}(T_n < \varepsilon n) \leq \exp(-n) \longrightarrow 0.$$
This completes the proof.
We shall use a version of \textit{Efron-Stein's Inequality} which we state below. For proof and more details see \cite[Theorem 3.1]{B01}.
\begin{theorem}
Let $X_1, \ldots, X_n$ be independent random variables and $Z=f(X)$ be a square-integrable function of $X=(X_1,\ldots,X_n)$. Let $X_1^{\prime}, \ldots, X_n^{\prime}$ be independent copies of $X_1, \ldots, X_n$ and if we define, for every $i=1, \ldots, n$, $Z_i^{\prime} = f(X_1, \ldots, X_{i-1}, X_i^{\prime}, X_{i+1},\ldots, X_n)$; then
$$ \operatorname{Var}(Z) \leq \sum_{i=1}^n \mathbb{E} \left[(Z-Z_i^{\prime})_{-}^2 \right].$$
\end{theorem}
Let $A_n$ be the collection of all edges with both endpoints in $[-n,n]^d$. Note that $Y_n = \left(\omega_e : e \in A_n\right)$ is an i.i.d. collection and all self-avoiding paths of length $n$ starting from $0$ consists only of edges in $A_n$. We define the function $f$ to be taking input as the weights of the edges in $A_n$ and output is the weight of the minimum weighted path. Therefore by our construction $T_n=f(Y_n)$. Let $Y_n^{\prime}(e)$ be the weight collection if we replace the weight of edge $e$ with an independent copy $\omega_e^{\prime}$, for $e \in A_n$. Then by \textit{Efron-Stein's Inequality},
$$ \operatorname{Var}(T_n) \leq \sum_{e \in A_n} \mathbb{E}\left[f(Y_n) - f(Y_n^{\prime}(e)) \right]_{-}^2.$$
Let $\Gamma_n$ be a self-avoiding path of minimum weight among the paths of length $n$ starting from $0$, with edge weights given by $Y_n$. Note that, the weight of the minimum weight path will not increase if the weight of edge $e$ is changed only for some edge $e \notin \Gamma_n$. If the weight of edge $e$ is changed for $e \in \Gamma_n$, then the weight of minimum weight path can't increase by more than $1$ since all weights lie in $[0,1]$. Thus we have $\left[f(Y_n) - f(Y_n^{\prime}(e)) \right]_{-} \leq \mathbbm{1}(e \in \Gamma_n)$, for all $ e \in A_n$. Hence,
$$ \operatorname{Var}(T_n) \leq \sum_{e \in A_n} \mathbb{E}\left[f(Y_n) - f(Y_n^{\prime}(e)) \right]_{-}^2 \leq \sum_{e \in A_n} \mathbb{E}\left[\mathbbm{1}(e \in \Gamma_n)\right] = \mathbb{E} \left( \text{number of edges in } \Gamma_n \right) =n.$$
This completes the proof. |
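To make the constant $\varepsilon$ of part (a) concrete for a specific edge-weight law, one can evaluate $\psi$ numerically. The sketch below is illustrative only and assumes, purely for illustration, that $\omega_e \sim \text{Uniform}(0,1)$ (which satisfies $\mathbb{P}(\omega_e=0)=0$) and $d=2$; it assumes the \texttt{numpy} library and searches for a $\theta_0$ with $-\log \psi(\theta_0) > \log(2d)+2$.
\begin{verbatim}
import numpy as np

d = 2
psi = lambda theta: (1 - np.exp(-theta)) / theta   # E exp(-theta*omega), omega ~ Uniform(0,1)
for theta in [10.0, 20.0, 35.0, 50.0]:
    print(theta, psi(theta), -np.log(psi(theta)) > np.log(2 * d) + 2)
# theta_0 = 35 satisfies the requirement, so eps = 1/35 gives P(T_n < eps*n) <= exp(-n)
\end{verbatim}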
2016-q5 | Let $\mathcal{P}$ denote the collection of all probability measures on the $\sigma$-algebra $\mathcal{B}_{[0,1]}$ of all Borel subsets of $[0, 1]$ and recall that $\nu \in \mathcal{P}$ is \emph{absolutely continuous} with respect to a $\sigma$-finite measure $\mu$ if $\mu(B) > 0$ whenever $\nu(B) > 0$.\ Does there exist a $\sigma$-finite measure $\mu$ on $\mathcal{B}_{[0,1]}$, with respect to which every $\nu \in \mathcal{P}$ is absolutely continuous? Prove your answer. | Take any $\sigma$-finite measure $\mu$ on $\mathcal{B}_{[0,1]}$. Get $B_n \in \mathbb{B}_{[0,1]}$ such that $\mu(B_n) < \infty$ for all $n \geq 1$ and $\cup_{n \geq 1} B_n =[0,1]$. Let $A_n := \left\{ x \in [0,1] \mid \mu \left( \left\{x\right\} \right) > 1/n\right\}$. Then $\cup_{n \geq 1} A_n =: A = \left\{ x \in [0,1] \mid \mu \left( \left\{x\right\} \right) > 0\right\}$. Note that $(1/n)|A_n \cap B_m| \leq \mu (A_n \cap B_m) \leq \mu(B_m) < \infty$, and therefore $|A_n \cap B_m| < \infty$ for all $n, m \geq 1$. Since, $A = \cup_{n,m \geq 1} (A_n \cap B_m)$, we can conclude that $A$ is countable. Take $y \in [0,1] \setminus A$, possible since $A$ is countable. Then the probability measure $\delta_y$, which assigns mass $1$ at $y$, is not absolutely continuous with respect to $\mu$ since $\mu \left( \left\{y\right\} \right)=0$. This proves the non-existence of such $\sigma$-finite measure with respect to whom every probability measure in $\mathcal{P}$ is absolutely continuous. |
2016-q6 | Let $\mathcal{Q}$ denote the collection of all probability measures $\mu$ on $(\mathbb{R}, \mathcal{B}_{\mathbb{R}})$ such that $\int |x| d\mu(x) < \infty$, with $F_\mu(t) := \mu((-\infty, t])$ denoting the corresponding cdf.\ (a)\ \text{Show that the Wasserstein distance } W(\mu, \nu) = \inf_{X \sim \mu, Y \sim \nu} \mathbb{E}|X-Y| \text{(where the infimum is over all joint laws of } (X, Y) \text{ with the prescribed marginals), satisfies the triangle inequality, with } 0 < W(\mu, \nu) < \infty \text{ for any } \mu \neq \nu \in \mathcal{Q}, \text{ and }\ W(\mu, \nu) \leq \int_{-\infty}^\infty |F_\mu(t) - F_\nu(t)|\ dt.\ (b)\ \text{Fixing } k \geq 1 \text{ and } f : \mathbb{R} \to \mathbb{R} \text{ such that } |f(x) - f(x')| \leq L |x - x'| \text{ for some } L < 1/k \text{ and all } x, x', \text{ let } T(\mu) \text{ denote the law of } Y := Z + \sum_{i=1}^k f(X_i),\ \text{for a standard Normal variable } Z \text{ which is independent of the i.i.d. } X_i \sim \mu.\ \text{Show that } \mathcal{T}: \mathcal{Q} \to \mathcal{Q} \text{ with } W(T(\mu), T(\nu)) \leq kLW(\mu, \nu), \text{ and deduce that for any } \mu_0 \in \mathcal{Q} \text{ the sequence } \mu_n = \mathcal{T}^n(\mu_0) \text{ is (uniformly) tight and converges weakly to the unique } \mu_{\star} \in \mathcal{Q} \text{ such that } \mu_{\star} = T(\mu_{\star}). | In the solution, $M_{\mu} := \mathbb{E}_{X \sim \mu} |X| = \int |x| \, d\mu(x)$, for $\mu \in \mathcal{Q}$.\begin{enumerate}[label=(\alph*)]\item Take $\mu_1, \mu_2, \mu_3 \in \mathcal{Q}$. Fix $\varepsilon >0$. Then we can get $X_1, X_2, Y_2, Y_3$ such that $X_1 \sim \mu_1$, $X_2, Y_2 \sim \mu_2$ and $Y_3 \sim \mu_3$ and $$ \mathbb{E} |X_1-X_2| \leq W(\mu_1,\mu_2) + \varepsilon, \;\; \mathbb{E} |Y_2-Y_3| \leq W(\mu_2,\mu_3) + \varepsilon.$$ Since, $Y_2$ and $Y_3$ are both real-valued random variable, there exists R.C.P.D. of $Y_3$ conditioned on $Y_2$, $\hat{\mathbb{P}}_{Y_3 | Y_2} (\cdot, \cdot) : \mathbb{R} \times \mathcal{B}_{\mathbb{R}} \mapsto [0,1]$ (See \cite[Exercise 4.4.5]{dembo}). Simulate $X_3$ from the probability measure $\hat{\mathbb{P}}_{Y_3 | Y_2} (X_2, \cdot)$. Then $(Y_2, Y_3) \stackrel{d}{=}(X_2,X_3)$ and therefore $$ W(\mu_1, \mu_3) \leq \mathbb{E} |X_1-X_3| \leq \mathbb{E} |X_1-X_2| + \mathbb{E} |X_2-X_3| = \mathbb{E} |X_1-X_2| + \mathbb{E} |Y_2-Y_3| \leq W(\mu_1, \mu_2) + W(\mu_2, \mu_3) + 2 \varepsilon.$$ Taking $\varepsilon \downarrow 0$, we conclude the triangle inequality. By definition, $W(\mu, \nu) \geq 0$. $W(\mu, \nu)=0$ implies that their exists $(X_n,Y_n)$ such that $X_n \sim \mu$, $Y_n \sim \nu$ and $\mathbb{E}|X_n-Y_n| \leq 1/n$. This implies that $X_n-Y_n \stackrel{p}{\longrightarrow} 0$. Since, $Y_n \sim \nu$, employ $\textit{Slutsky's Theorem}$ and obtain that $X_n \stackrel{d}{\longrightarrow} \nu$. But $X_n \sim \mu$ for all $n$ and therefore we have $\mu=\nu$. Also it is trivial from the definition that $W(\mu, \nu) \leq M_{\mu} + M_{\nu} < \infty$ as $\mu, \nu \in \mathcal{Q}$.
Take $U \sim \text{Uniform}(0,1)$. Then $X :=F_{\mu}^{\leftarrow}(U) \sim \mu$ and $Y :=F_{\nu}^{\leftarrow}(U) \sim \nu.$ Therefore,
\begin{align*} W(\mu, \nu) &\leq \mathbb{E} |X-Y| = \int_{0}^1 |F_{\mu}^{\leftarrow}(u) - F_{\nu}^{\leftarrow}(u)| \, du \\
&= \int_{0}^1 \mathbbm{1}(F_{\mu}^{\leftarrow}(u) \geq F_{\nu}^{\leftarrow}(u)) \int_{-\infty}^{\infty} \mathbbm{1}(F_{\mu}^{\leftarrow}(u) > t \geq F_{\nu}^{\leftarrow}(u)) \, dt\, du \\
& \hspace{1.5 in} + \int_{0}^1 \mathbbm{1}(F_{\mu}^{\leftarrow}(u) < F_{\nu}^{\leftarrow}(u)) \int_{-\infty}^{\infty} \mathbbm{1}(F_{\nu}^{\leftarrow}(u) > t \geq F_{\mu}^{\leftarrow}(u)) \, dt\, du \\
&\stackrel{(ii)}{=} \int_{-\infty}^{\infty} \int_{0}^1 \mathbbm{1}(F_{\mu}(t) < u \leq F_{\nu}(t)) \, du\, dt + \int_{-\infty}^{\infty}\int_{0}^1 \mathbbm{1}(F_{\nu}(t) < u \leq F_{\mu}(t)) \, du\, dt \\
&= \int_{-\infty}^{\infty} \mathbbm{1}(F_{\nu}(t) > F_{\mu}(t)) | F_{\nu}(t)-F_{\mu}(t)| \, dt + \int_{-\infty}^{\infty} \mathbbm{1}(F_{\nu}(t) < F_{\mu}(t)) |F_{\mu}(t) - F_{\nu}(t)| \, dt \\
&= \int_{-\infty}^{\infty} | F_{\nu}(t)-F_{\mu}(t)| \, dt, \end{align*}
where in $(ii)$ we have used the fact that for any right-continuous non-decreasing function $H$, the following two statements hold true.
$$ H^{\leftarrow}(y) \leq t \iff y \leq H(t); \;\; H^{\leftarrow}(y) > t \iff y > H(t).$$
\item Fix $\varepsilon >0$. Get $X$ and $\widetilde{X}$ such that $X \sim \mu, \widetilde{X} \sim \nu$ and $\mathbb{E} |X-\widetilde{X}| \leq W(\mu, \nu) + \varepsilon.$ Let $(X_1,\widetilde{X}_1), \ldots, (X_k,\widetilde{X}_k)$ be independent copies of $(X,\widetilde{X})$. Set $$ Y = Z + \sum_{i=1}^k f(X_i), \;\; \widetilde{Y} = Z + \sum_{i=1}^k f(\widetilde{X}_i), $$ where $Z \sim N(0,1)$ is independent of everything else. Then $ Y \sim T(\mu)$ and $\widetilde{Y} \sim T(\nu)$. Clearly,
$$ |Y| = |Z + \sum_{i=1}^k f(X_i) | \leq |Z| + k|f(0)| + \sum_{i=1}^k |f(X_i)-f(0)| \leq |Z| + k|f(0)| + L\sum_{i=1}^k |X_i|,$$ and therefore $\mathbb{E}|Y| \leq \mathbb{E}|Z| + k|f(0)| + kL \mathbb{E}|X| < \infty$. Thus $T(\mu) \in \mathcal{Q}$ if $\mu \in \mathcal{Q}$. Moreover,
$$W(T(\mu),T(\nu)) \leq \mathbb{E} |Y - \widetilde{Y}| \leq \sum_{i=1}^k \mathbb{E} |f(X_i) - f(\widetilde{X}_i)| \leq \sum_{i=1}^k \mathbb{E}\left( L|X_i - \widetilde{X}_i| \right) = Lk \mathbb{E} |X - \widetilde{X}| \leq Lk \left(W(\mu, \nu) + \varepsilon \right),$$ since $Lk <1$. Taking $\varepsilon \downarrow 0$, we get $$ W(T(\mu), T(\nu)) \leq kL W(\mu, \nu).$$\end{enumerate} |
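The quantile coupling used in part (a) is easy to evaluate numerically for empirical measures: sorting two samples of equal size and pairing them by rank implements $X = F_{\mu}^{\leftarrow}(U)$, $Y = F_{\nu}^{\leftarrow}(U)$. The sketch below is illustrative only (it assumes the \texttt{numpy} library, and the two Gaussian laws are arbitrary choices); it computes the coupling cost $\mathbb{E}|X-Y|$, which by part (a) is an upper bound for $W(\mu,\nu)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.normal(0.0, 1.0, 100000))   # sample from mu
y = np.sort(rng.normal(0.5, 2.0, 100000))   # sample from nu
print(np.abs(x - y).mean())                 # cost E|X-Y| of the quantile coupling
\end{verbatim}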
2017-q1 | Suppose that you have $n$ standard dice, and roll them all. Remove all those that turn up sixes. Roll the remaining pile again, and repeat the process. Let $M_n$ be the number of rolls needed to remove all the dice.
(a) Produce a sequence of constants $a_n$ such that $M_n/a_n \to 1$ in probability as $n \to \infty$.
(b) Show that there do not exist sequences $b_n$ and $c_n$ such that $(M_n - b_n)/c_n$ converges in distribution to a non-degenerate limit. | For the sake of simplicity, assume that we roll all the dice at every step but do not record the outcome for those dice which have already turned up $6$ in the past. Let $T_{i}$ be the first time the $i$-th die turns up $6$. Then the $T_i$'s are independently and identically distributed as $\text{Geo}(1/6)$, i.e., $\mathbb{P}(T_1 =k) = 6^{-k}5^{k-1}\mathbbm{1}(k \in \mathbb{N})$. It is evident that $M_n = \max\{T_1, \ldots, T_n\}$.

(a) From the above discussion it is easy to see that for $x>0$,
$$ \mathbb{P}(M_n \geq x) = 1- \left(1- \mathbb{P}(T_1 \geq x)\right)^n = 1- \left(1- (5/6)^{\lceil x \rceil -1}\right)^n.$$
Take any sequence $\{a_n\}_{n \geq 1}$ of positive reals such that $a_n \to \infty$, and fix $\varepsilon >0$. Then
$$ \log \mathbb{P}(M_n < a_n(1+\varepsilon)) = n \log \left(1- (5/6)^{\lceil a_n(1+\varepsilon) \rceil -1} \right) \sim -n(5/6)^{\lceil a_n(1+\varepsilon) \rceil -1} = - \exp \left( \log n + \left( \lceil a_n(1+\varepsilon) \rceil -1 \right) \log(5/6)\right).$$
Similarly,
$$ \log \mathbb{P}(M_n < a_n(1-\varepsilon)) \sim - \exp \left( \log n + \left( \lceil a_n(1-\varepsilon) \rceil -1 \right) \log(5/6)\right). $$
From these expressions it is clear that if we set $a_n = \dfrac{\log n}{\log(6/5)}$, then $\log \mathbb{P}(M_n < a_n(1+\varepsilon)) \to 0$ (so the probability tends to $1$), whereas $\log \mathbb{P}(M_n < a_n(1-\varepsilon)) \to -\infty$ (so the probability tends to $0$). Therefore,
$$ \dfrac{M_n \log(6/5)}{\log n} \stackrel{p}{\longrightarrow} 1.$$

(b) \textit{Correction:} we take $c_n>0$.
Suppose there exist sequences $\{b_n\}_{n \geq 1}$ and $\{c_n\}_{n \geq 1}$ with $c_n>0$ such that $(M_n-b_n)/c_n$ converges in distribution to a non-degenerate distribution function $F$.
Pick a continuity point $t$ of $F$ such that $0 < F(t)<1$; this is possible since $F$ is non-degenerate. Without loss of generality we may assume $t=0$; otherwise replace $b_n$ by $b_n+tc_n$. Then
$$ \log F(0) = \lim_{n \to \infty} \log \mathbb{P} \left(M_n < b_n\right) = \lim_{n \to \infty} - \dfrac{6n}{5} \exp \left( \lceil b_n \rceil \log (5/6)\right).$$
Therefore,
$$ \dfrac{\exp\left( \lceil b_{n+1} \rceil \log(5/6)\right)}{\exp\left( \lceil b_n \rceil \log(5/6)\right)} \longrightarrow 1 \quad \text{as } n \to \infty, \qquad \text{i.e., } \lceil b_{n+1} \rceil - \lceil b_n \rceil \to 0.
$$
Since $\{\lceil b_{n+1} \rceil - \lceil b_n \rceil\}_{n \geq 1}$ is a sequence in $\mathbb{Z}$, this convergence implies that $\lceil b_{n+1} \rceil - \lceil b_n \rceil =0$ eventually for all $n$, which in turn yields that $b_n=O(1)$. By part (a), we have $M_n \to \infty$ almost surely (since $M_n$ is non-decreasing in $n$), and hence $\mathbb{P}(M_n < b_n) \to 0$, contradicting the fact that $F(0) >0$.
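The normalization of part (a) is easy to verify by simulation. The sketch below is an illustrative numerical check only (it assumes the \texttt{numpy} library and uses arbitrary illustrative values of $n$); it simulates the removal process directly and compares $M_n$ with $a_n = \log n / \log(6/5)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def rolls_needed(n):
    count, remaining = 0, n
    while remaining > 0:
        count += 1
        remaining -= int((rng.integers(1, 7, remaining) == 6).sum())
    return count

for n in [10**2, 10**4, 10**6]:
    a_n = np.log(n) / np.log(6 / 5)
    print(n, rolls_needed(n) / a_n)          # ratios approach 1
\end{verbatim}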
|
2017-q2 | Let $\mathcal{X}$ be a set, $\mathcal{B}$ be a countably generated $\sigma$-algebra of subsets of $\mathcal{X}$. Let $\mathcal{P}(\mathcal{X}, \mathcal{B})$ be the set of all probability measures on $(\mathcal{X}, \mathcal{B})$. Make $\mathcal{P}(\mathcal{X}, \mathcal{B})$ into a measurable space by declaring that the map $P \mapsto P(A)$ is Borel measurable for each $A \in \mathcal{B}$. Call the associated $\sigma$-algebra $\mathcal{B}^*$.
(a) Show that $\mathcal{B}^*$ is countably generated.
(b) For $\mu \in \mathcal{P}(\mathcal{X}, \mathcal{B})$, show that $\{\mu\} \in \mathcal{B}^*$.
(c) For $\mu, \nu \in \mathcal{P}(\mathcal{X}, \mathcal{B})$, let
\[ \|\mu - \nu\| = \sup_{A\in\mathcal{B}} |\mu(A) - \nu(A)|. \]
Show that the map $(\mu, \nu) \mapsto \|\mu - \nu\|$ is $\mathcal{B}^* \times \mathcal{B}^*$ measurable. | Since $\mathcal{B}$ is countably generated, there are $B_1, B_2, \ldots \in \mathcal{B}$ such that $\mathcal{B} = \sigma\left( B_i : i \geq 1 \right)$. Without loss of generality we can assume that $\{B_i : i \geq 1\}$ is a $\pi$-system, since otherwise we can replace it by the collection of all finite intersections of the $B_i$'s. For $A \in \mathcal{B}$, let $\Psi_A$ be the map from $\mathcal{P}(\mathcal{X}, \mathcal{B})$ to $[0,1]$ which takes $P$ to $P(A)$. Writing $\mathcal{X}^{*} := \mathcal{P}(\mathcal{X}, \mathcal{B})$, the $\sigma$-algebra $\mathcal{B}^{*}$ is the smallest $\sigma$-algebra on $\mathcal{X}^{*}$ with respect to which $\Psi_A : \mathcal{X}^* \to ([0,1], \mathcal{B}_{[0,1]})$ is measurable for every $A \in \mathcal{B}$.

(a) We claim that
$$ \mathcal{B}^{*} = \sigma\left( \Psi_{B_i}^{-1}((a,b]) \;\Big|\; i \geq 1, \; a<b, \; a,b \in \mathbb{Q} \right).$$
Proving this claim will clearly prove that $\mathcal{B}^{*}$ is countably generated. Let us call the right-hand side of the above claim $\mathcal{A}^*$. It is clear that $\mathcal{A}^* \subseteq \mathcal{B}^{*}$, since $\Psi_{B_i} : (\mathcal{X}^*, \mathcal{B}^*) \to ([0,1], \mathcal{B}_{[0,1]})$ is measurable for all $i \geq 1$.
Thus it is enough to show $\mathcal{B}^* \subseteq \mathcal{A}^*$, which will follow if we prove that $\Psi_{A} : (\mathcal{X}^*, \mathcal{A}^*) \to ([0,1], \mathcal{B}_{[0,1]})$ is measurable for all $A \in \mathcal{B}$. In order to do that, we employ the \textit{good set principle}. Let
$$ \mathcal{C} := \left\{A \in \mathcal{B} : \Psi_{A} : (\mathcal{X}^*, \mathcal{A}^*) \to ([0,1], \mathcal{B}_{[0,1]}) \text{ is measurable} \right\}.$$
First we show that $\mathcal{C}$ is indeed a $\lambda$-system. Note that $\Psi_{\emptyset} \equiv 0$ and $\Psi_{\mathcal{X}} \equiv 1$; hence both are $\mathcal{A}^*$-measurable, which shows $\emptyset, \mathcal{X} \in \mathcal{C}$. Also, $\Psi_{A^c} = 1- \Psi_{A}$; therefore $\mathcal{A}^*$-measurability of $\Psi_{A}$ (i.e., $A \in \mathcal{C}$) implies that $\Psi_{A^{c}}$ is also $\mathcal{A}^*$-measurable (i.e., $A^c \in \mathcal{C}$). Now take a sequence of pairwise disjoint sets $A_i \in \mathcal{C}$. Since each $\Psi_{A_i}$ is $\mathcal{A}^*$-measurable, so is $\Psi_{\cup_{i \geq 1}A_i} = \sum_{i \geq 1} \Psi_{A_i}$, being a pointwise limit of finite sums of $\mathcal{A}^*$-measurable maps; thus $\cup_{i \geq 1}A_i \in \mathcal{C}$. This proves that $\mathcal{C}$ is a $\lambda$-system.
By definition, $\Psi_{B_i}$ is $\mathcal{A}^*$-measurable; therefore $\mathcal{C}$ contains the $\pi$-system $\{B_i : i \geq 1\}$. Employ \textit{Dynkin's $\pi$-$\lambda$ Theorem} to conclude that $\mathcal{B} = \sigma(B_i : i \geq 1) \subseteq \mathcal{C}$, hence $\mathcal{C}=\mathcal{B}$, i.e., $\Psi_{A} : (\mathcal{X}^*, \mathcal{A}^*) \to ([0,1], \mathcal{B}_{[0,1]})$ is measurable for all $A \in \mathcal{B}$. This completes the proof.

(b) Recall that $\{x\} \in \mathcal{B}_{[0,1]}$ for all $x \in [0,1]$. We claim that for $\mu \in \mathcal{X}^*$,
$$ \{\mu\} = \bigcap_{i \geq 1} \Psi_{B_i}^{-1}\left(\{\mu(B_i)\}\right).$$
If the claim is true, then we can conclude $\{\mu\} \in \mathcal{B}^*$, since $\Psi_{B_i} : (\mathcal{X}^*, \mathcal{B}^*) \to ([0,1], \mathcal{B}_{[0,1]})$ is measurable for all $i \geq 1$. In order to prove the claim, note that clearly $\mu \in \bigcap_{i \geq 1} \Psi_{B_i}^{-1}(\{\mu(B_i)\})$. Conversely, if $\nu \in \bigcap_{i \geq 1} \Psi_{B_i}^{-1}(\{\mu(B_i)\})$, then $\nu(B_i) = \mu(B_i)$ for all $i \geq 1$; since $\{B_i : i \geq 1\}$ is a $\pi$-system generating $\mathcal{B}$ and both $\mu$ and $\nu$ are probability measures, they agree on all of $\mathcal{B}$, i.e., $\nu = \mu$. This proves the claim.

(c) Set $\mathcal{B}_n := \sigma\left(B_i : 1 \leq i \leq n\right)$ and $\mathcal{B}_{\infty} := \bigcup_{n \geq 1} \mathcal{B}_n$. Each $\mathcal{B}_n$ is a finite $\sigma$-algebra, so $\mathcal{B}_{\infty}$ is a countable algebra which generates the $\sigma$-algebra $\mathcal{B}$. We claim that
$$ \|\mu - \nu \| := \sup_{B \in \mathcal{B}} |\mu(B) - \nu(B)| = \sup_{B \in \mathcal{B}_{\infty}}|\mu(B)-\nu(B)|.$$
Let us first see why this claim proves what we need to establish. For any $B \in \mathcal{B}_{\infty}$, the map $\mu \mapsto \mu(B)$ is $\mathcal{B}^*$-measurable, and therefore $(\mu,\nu) \mapsto |\mu(B)-\nu(B)|$ is $\mathcal{B}^* \times \mathcal{B}^*$-measurable. Since $\mathcal{B}_{\infty}$ is countable, this implies that $(\mu,\nu) \mapsto \sup_{B \in \mathcal{B}_{\infty}}|\mu(B) - \nu(B)| = \|\mu-\nu\|$ is also $\mathcal{B}^* \times \mathcal{B}^*$-measurable, completing the proof.
To prove the claim, it is enough to show that, for any $r \geq 0$, $\sup_{B \in \mathcal{B}_{\infty}} |\mu(B) - \nu(B)| \leq r$ implies $\sup_{B \in \mathcal{B}} |\mu(B)-\nu(B)| \leq r$.
To advance in that direction, we use the good set principle again. Let
$$ \mathcal{D} := \left\{ B \in \mathcal{B} : |\mu(B)-\nu(B)| \leq r\right\}.$$
By assumption, $\mathcal{D}$ contains the algebra $\mathcal{B}_{\infty}$; so to show $\mathcal{D} = \mathcal{B}$, it is enough to show that $\mathcal{D}$ is a monotone class and then apply the \textit{Monotone Class Theorem}. Take a monotone (increasing or decreasing) sequence of sets $\{A_i\}_{i \geq 1}$ in $\mathcal{D}$, with limit $A \in \mathcal{B}$. Since $\mu,\nu$ are both probability measures, they are continuous along monotone sequences, and hence
$$ |\mu(A)-\nu(A)| = \left| \lim_{n \to \infty}\mu(A_n) - \lim_{n \to \infty}\nu(A_n)\right| = \lim_{n \to \infty} |\mu(A_n)-\nu(A_n)| \leq r,$$
and hence $A \in \mathcal{D}$.
This completes the proof.
|
2017-q3 | Let $X, Y$ two independent random variables with densities $f$ and $g$ with respect to the Lebesgue measure on $\mathbb{R}$.
(a) Prove that $Z = X/Y$ also has a density. Call it $h$.
(b) Derive a formula for $h$ in terms of $f$ and $g$. (This should be analogous the the standard formula for the convolution of two densities.)
(c) Apply this formula to the case of two standard normal random variables $X,Y \sim \mathcal{N}(0,1)$. What is the density of $Z$ in this case? Call this density $h_C$.
(d) Prove that, for a random variable with density $h_C$, $Z$ has the same density as $(1+Z)/(1-Z)$. | (a) For any $z \in \mathbb{R}$, we have
\begin{align*}
H(z) := \mathbb{P}(Z \leq z ) = \mathbb{P}\left(\frac{X}{Y} \leq z \right) &= \mathbb{P}(X \leq zY, Y >0)+ \mathbb{P}(X \geq zY, Y <0) \\
&= \int_{0}^{\infty} \int_{-\infty}^{zy} f(x)g(y)\, dx\, dy + \int_{-\infty}^{0} \int_{zy}^{\infty} f(x)g(y)\, dx\, dy \\
&= \int_{0}^{\infty} \int_{-\infty}^{z} y\, f(ty)g(y)\, dt\, dy + \int_{-\infty}^{0} \int_{-\infty}^{z} (-y)\, f(ty)g(y)\, dt\, dy \\
&= \int_{-\infty}^{z} \left[ \int_{-\infty}^{\infty} |y|\, f(ty)\, g(y)\, dy \right] dt,
\end{align*}
where we substituted $x=ty$ in each inner integral and used Fubini's theorem in the last step (the integrand is non-negative). From the last expression it is clear that
$$ h(t) := \int_{-\infty}^{\infty} |y|\, f(ty)\, g(y)\, dy$$
is a non-negative, measurable function which is also integrable, since $\int_{-\infty}^{\infty} h(t)\, dt =1$. Since $H(z) = \int_{-\infty}^z h(t)\, dt$, the distribution function of $Z$ is absolutely continuous. This proves that $Z$ has a density.

(b) The discussion in part (a) gives the formula for the density $h$ in terms of $f$ and $g$:
$$ h(z) = \int_{-\infty}^{\infty} |y|\, f(zy)\, g(y)\, dy.$$

(c) In this case $f=g=\phi$, the standard normal density. Therefore,
$$ h_C(z) = \int_{-\infty}^{\infty} |y|\, \phi(zy)\phi(y)\, dy = \frac{1}{2\pi}\int_{-\infty}^{\infty} |y| \exp\left(-\frac{(1+z^2)y^2}{2}\right) dy = \frac{1}{\pi}\int_{0}^{\infty} y \exp\left(-\frac{(1+z^2)y^2}{2}\right) dy = \frac{1}{\pi(1+z^2)},$$
i.e., $Z$ is standard Cauchy.

(d) Let $X$ and $Y$ be independent standard normal variables. Then from part (c), $Z \stackrel{d}{=} X/Y$, and
$$ \frac{1+Z}{1-Z} \stackrel{d}{=} \frac{1+X/Y}{1-X/Y} = \frac{Y + X}{Y - X} = \frac{(Y + X)/\sqrt{2}}{(Y - X)/\sqrt{2}}.$$
Observing that $(Y + X)/\sqrt{2}$ and $(Y - X)/\sqrt{2}$ are independent standard normal variables (they are jointly normal, standardized, and uncorrelated), we invoke part (c) to conclude that $(1+Z)/(1-Z)$ also has density $h_C$. |
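Both parts (c) and (d) are easy to check by simulation. The sketch below is an illustrative numerical check only (it assumes the \texttt{numpy} library); it compares empirical quantiles of $X/Y$ and of $(1+Z)/(1-Z)$ with the exact standard Cauchy quantiles $\tan(\pi(p-1/2))$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=(2, 10**6))
z = x / y                                    # ratio of independent standard normals
w = (1 + z) / (1 - z)
qs = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
print(np.quantile(z, qs))
print(np.quantile(w, qs))
print(np.tan(np.pi * (qs - 0.5)))            # exact standard Cauchy quantiles
\end{verbatim}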
2017-q4 | Let $X_1,\ldots,X_n$ be independent real valued random variables with a symmetric density $f(x)$. Suppose that there are some $\epsilon, \delta > 0$ such that $f(x) > \epsilon$ when $|x| < \delta$. Define the harmonic mean
\[ H_n = \frac{n}{\frac{1}{X_1} + \frac{1}{X_2} + \cdots + \frac{1}{X_n}}. \]
Prove that $H_n$ converges in distribution to a Cauchy random variable as $n \to \infty$. | Since the reciprocal of a Cauchy random variable (with location parameter $0$) is also a Cauchy random variable (with location parameter $0$), using the \textit{Continuous Mapping Theorem} it is enough to prove that $H_n^{-1} = n^{-1}\sum_{i=1}^n X_i^{-1}$ converges in distribution to a (symmetric) Cauchy variable. We use characteristic functions. For $t \in \mathbb{R}$,
$$ \mathbb{E}\left[\exp\left(itH_{n}^{-1}\right)\right] = \left(\mathbb{E}\left[\exp\left(\frac{it}{nX_1}\right)\right]\right)^{n} = \psi(t/n)^n, \qquad \text{where } \psi(\theta) := \mathbb{E}\left[\exp\left(i\theta X_1^{-1}\right)\right].$$
Since the distribution of $X_1$ is symmetric around $0$, so is the distribution of $X_1^{-1}$; hence $\psi$ is real-valued, and for $\theta >0$,
$$ 1- \psi(\theta) = \mathbb{E}\left[1-\cos\left(\theta X_1^{-1}\right)\right] = 2\int_{0}^{\infty} \left(1-\cos\frac{\theta}{x}\right) f(x)\, dx.$$
Substituting $x = \theta/v$ in the integral gives
$$ 1-\psi(\theta) = \theta\, \gamma(\theta), \qquad \text{where } \gamma(\theta) := 2\int_{0}^{\infty} \frac{1-\cos v}{v^2}\, f(\theta/v)\, dv.$$
We claim that $\gamma(\theta)$ converges to some $\gamma \in (0, \infty)$ as $\theta \downarrow 0$. Granting the claim, for every $t>0$,
$$ \log \mathbb{E}\left[\exp\left(itH_n^{-1}\right)\right] = n \log \psi(t/n) \sim -n\left(1-\psi(t/n)\right) = -n \cdot \frac{t}{n}\, \gamma(t/n) \longrightarrow -\gamma t,$$
and by symmetry $\mathbb{E}\exp(itH_n^{-1}) \to \exp(-\gamma|t|)$ for all $t \in \mathbb{R}$, which is the characteristic function of the symmetric Cauchy distribution with scale $\gamma$. Hence $H_n^{-1}$, and therefore $H_n$, converges in distribution to a (non-degenerate) Cauchy random variable.

It remains to justify the claim. First, $\int_0^{\infty} (1-\cos v)v^{-2}\, dv = \pi/2 < \infty$ (integrate by parts and use $\int_0^\infty v^{-1}\sin v\, dv = \pi/2$). Next, since $(1-\cos(\theta/x)) \leq \theta^2/(2x^2)$, the contribution of $\{x : x \geq \delta\}$ to the integral defining $1-\psi(\theta)$ is at most $\theta^2/(2\delta^2) = o(\theta)$, so only the behaviour of $f$ on $(0,\delta)$ matters for $\gamma(\theta)$. On that range, dominated convergence (with dominating function a constant multiple of $(1-\cos v)v^{-2}$; here we use the continuity of $f$ at the origin, which makes $f$ bounded near $0$ and gives $f(\theta/v) \to f(0)$ as $\theta \downarrow 0$) yields
$$ \gamma(\theta) \longrightarrow 2 f(0) \int_0^{\infty}\frac{1-\cos v}{v^2}\, dv = \pi f(0) =: \gamma.$$
Finally, $\gamma = \pi f(0) \geq \pi\epsilon > 0$ by the hypothesis $f(x) > \epsilon$ for $|x|<\delta$, so the limit law is indeed a non-degenerate symmetric Cauchy distribution.
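As an illustrative numerical check (not part of the proof, and assuming the \texttt{numpy} library), take $X_i$ standard normal, which satisfies the hypotheses; by the computation above the limiting scale is then $\gamma = \pi f(0) = \sqrt{\pi/2}$, and the empirical quantiles of $H_n^{-1}$ should match those of the Cauchy distribution with scale $\gamma$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10**4, 5000
inv_mean = np.array([(1.0 / rng.normal(size=n)).mean() for _ in range(reps)])
gamma = np.pi / np.sqrt(2 * np.pi)           # pi * f(0) for the standard normal density
qs = np.array([0.25, 0.5, 0.75, 0.9])
print(np.quantile(inv_mean, qs))
print(gamma * np.tan(np.pi * (qs - 0.5)))    # Cauchy(0, gamma) quantiles
\end{verbatim}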
2017-q5 | Let $(B_t)_{t\geq 0}$ be standard Brownian motion starting at zero. Let
\[ T = \sup\{t : B_t \geq t\}. \]
(a) Prove that $T < \infty$ a.s.
(b) Show that $T$ is not a stopping time with respect to the Brownian filtration.
(c) Compute the probability density function of $T$. | (a) By the \textit{Law of the Iterated Logarithm}, almost surely
$$ \limsup_{t \to \infty} \frac{B_t}{\sqrt{2t \log \log t}} = 1, \qquad \text{and therefore } \frac{B_t}{t} \longrightarrow 0 \text{ almost surely as } t \to \infty.$$
Hence, almost surely, $B_t < t$ for all sufficiently large $t$; in other words the set $\{t : B_t \geq t\}$ is almost surely bounded, and so $T = \sup\{t : B_t \geq t\} < \infty$ almost surely.

(b) First record two path properties that hold almost surely. By the LIL at the origin (applied to the time-inverted Brownian motion), $\limsup_{t \downarrow 0} B_t/t = \infty$, so $B_t > t$ for arbitrarily small $t>0$; in particular $T>0$. Moreover, the set $\{t : B_t \geq t\}$ is closed (by continuity of the paths) and bounded, so its supremum is attained: $B_T \geq T$, while $B_s < s$ for every $s>T$ gives $B_T \leq T$ by continuity; hence $B_T = T$.

Now suppose, for contradiction, that $T$ were a stopping time with respect to the Brownian filtration. Since $T < \infty$ almost surely, the strong Markov property would imply that $W_u := B_{T+u} - B_T$, $u \geq 0$, is again a standard Brownian motion. By the definition of $T$ and the fact that $B_T = T$, we would have $W_u = B_{T+u} - T < (T+u) - T = u$ for all $u > 0$, almost surely. But a standard Brownian motion satisfies $\limsup_{u \downarrow 0} W_u/u = \infty$ almost surely, so $\mathbb{P}(W_u < u \text{ for all } u>0) = 0$. This contradiction shows that $T$ is not a stopping time.

(c) Fix $t > 0$. Writing $B_s - s = (B_t - t) + \left[(B_{t+u}-B_t) - u\right]$ with $u = s-t$, we get, up to null sets,
$$ \{T > t\} = \left\{ \sup_{s \geq t} (B_s - s) \geq 0 \right\} = \left\{ \sup_{u \geq 0} (W_u - u) \geq t - B_t \right\},$$
where $W_u := B_{t+u}-B_t$ is a standard Brownian motion independent of $B_t$. It is a standard fact that $M := \sup_{u \geq 0}(W_u - u)$ is exponentially distributed with rate $2$, i.e., $\mathbb{P}(M \geq y) = e^{-2y}$ for $y \geq 0$. Conditioning on $B_t \sim N(0,t)$ and completing the square,
$$ \mathbb{P}(T>t) = \mathbb{E}\left[e^{-2(t-B_t)}\mathbbm{1}(B_t < t)\right] + \mathbb{P}(B_t \geq t) = e^{-2t}\cdot e^{2t}\,\Phi\!\left(\frac{t-2t}{\sqrt{t}}\right) + \Phi(-\sqrt{t}) = 2\Phi(-\sqrt{t}),$$
where we used $\mathbb{E}[e^{2B_t}\mathbbm{1}(B_t<t)] = e^{2t}\,\mathbb{P}(N(2t,t) < t) = e^{2t}\Phi(-\sqrt{t})$. Hence $\mathbb{P}(T \leq t) = 1-2\Phi(-\sqrt{t}) = 2\Phi(\sqrt{t})-1$ for $t \geq 0$, and differentiating,
$$ f_T(t) = \frac{\phi(\sqrt{t})}{\sqrt{t}} = \frac{1}{\sqrt{2\pi t}}\, e^{-t/2}, \qquad t>0.$$
In other words, $T$ has the $\chi^2_1$ distribution, i.e., $T \stackrel{d}{=} B_1^2$.
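A crude numerical check of part (c) (illustrative only, assuming the \texttt{numpy} library): simulate Brownian paths on a fine grid over a long horizon, record the last grid time at which $B_t \geq t$, and compare quantiles with those of $B_1^2$, i.e., of the $\chi^2_1$ distribution. The discretization misses excursions between grid points, so the agreement is only approximate.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
horizon, m, reps = 30.0, 30000, 2000          # grid mesh = horizon/m = 1e-3
t = np.linspace(horizon / m, horizon, m)
T_hat = np.empty(reps)
for r in range(reps):
    B = np.cumsum(rng.normal(0.0, np.sqrt(horizon / m), m))
    above = np.nonzero(B >= t)[0]
    T_hat[r] = t[above[-1]] if above.size else 0.0
qs = [0.25, 0.5, 0.75, 0.9]
print(np.quantile(T_hat, qs))
print(np.quantile(rng.normal(size=10**6) ** 2, qs))   # chi-square(1) reference
\end{verbatim}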
|
2017-q6 | Suppose $G = (V,E)$ is a connected (possibly infinite) graph of uniformly bounded degrees with weights $w(\{u,v\}) \in [1,2]$ at each of its edges $\{u,v\} \in E$ and $U_i$ are i.i.d. Uniform(0,1) random variables.
(a) Fixing $v_* \in V$, let $w(v) := \sum_u w(\{u,v\})$ and $M_0(v) := w(\{v_*, v\})$ for any $v \in V$. Show that $M_i := \sum_v M_i(v)$ is a (discrete time) martingale with respect to the filtration $\mathcal{H}_i := \sigma(U_0,\ldots,U_i)$, where
\[ M_i(v) := \sum_u w(\{u,v\})1\{M_{i-1}(u)\geq w(u)U_i\} \]
for $i = 1,2,\ldots$ and any $v \in V$.
(b) Construct a martingale $(M_t, t \ge 0)$ of continuous sample path, for the canonical filtration $\mathcal{F}_t$ of a standard Brownian motion $(B_s)_{s\ge 0}$, that coincides with $\{M_i\}$ of part (a) above when $t = i$ is non-negative integer, in such a way that the stopping time $\tau := \inf\{u \ge 0 : M_u \le 0\}$ will take values in $\{1,2,\ldots, \infty\}$. (Hint: Take $U_i = \Phi(B_{i+1} - B_i)$, where $\Phi$ is the normal c.d.f.) | (a) First observe that $0 \leq M_i(v) \leq w(v)$ for every $v \in V$ and $i \geq 0$: for $i=0$ this holds since $M_0(v) = w(\{v_*,v\}) \leq w(v)$, and for $i \geq 1$ since $M_i(v) = \sum_u w(\{u,v\})\mathbbm{1}\{M_{i-1}(u) \geq w(u)U_i\} \leq \sum_u w(\{u,v\}) = w(v)$. Moreover, since $U_i>0$ almost surely, $M_i(v)$ can be non-zero only if $v$ is a neighbour of some $u$ with $M_{i-1}(u)>0$; as $M_0(\cdot)$ is supported on the neighbours of $v_*$ and the degrees are uniformly bounded, by induction $M_i(\cdot)$ is supported on finitely many vertices, so $M_i := \sum_v M_i(v)$ is a finite (indeed bounded, for each fixed $i$) random variable, and it is clearly $\mathcal{H}_i$-measurable. Since $U_i$ is Uniform$(0,1)$ and independent of $\mathcal{H}_{i-1}$, while $M_{i-1}(u)/w(u) \in [0,1]$ is $\mathcal{H}_{i-1}$-measurable,
$$ \mathbb{E}\left[\mathbbm{1}\{M_{i-1}(u) \geq w(u)U_i\} \,\middle|\, \mathcal{H}_{i-1}\right] = \mathbb{P}\left(U_i \leq M_{i-1}(u)/w(u) \,\middle|\, \mathcal{H}_{i-1}\right) = \frac{M_{i-1}(u)}{w(u)}.$$
Therefore, all terms being non-negative,
$$ \mathbb{E}\left[M_i \mid \mathcal{H}_{i-1}\right] = \sum_v \sum_u w(\{u,v\}) \frac{M_{i-1}(u)}{w(u)} = \sum_u \frac{M_{i-1}(u)}{w(u)} \sum_v w(\{u,v\}) = \sum_u M_{i-1}(u) = M_{i-1},$$
so $\{M_i, \mathcal{H}_i, i \geq 0\}$ is a martingale.

(b) Following the hint, set $U_i := \Phi(B_{i+1}-B_i)$, so that the $U_i$'s are i.i.d. Uniform$(0,1)$ and $M_i$ from part (a) is a function of the increments $B_2-B_1, \ldots, B_{i+1}-B_i$; in particular $M_i$ is $\mathcal{F}_{i+1}$-measurable and independent of $\sigma(B_{i+1+s}-B_{i+1} : s \geq 0)$. Define
$$ M_t := \mathbb{E}\left[ M_{\lceil t \rceil} \mid \mathcal{F}_t \right], \qquad t \geq 0.$$
Using this independence structure together with the martingale property of part (a), $\mathbb{E}[M_j \mid \mathcal{F}_t] = M_t$ for every integer $j \geq \lceil t \rceil$; hence $(M_t)_{t \geq 0}$ is a martingale for the Brownian filtration $(\mathcal{F}_t)$, and as such it admits a continuous version, which we work with. At integer times it reproduces the chain of part (a): the value at time $i+1$ is $\mathbb{E}[M_{i+1}\mid \mathcal{F}_{i+1}]$, which equals the part (a) variable $M_i$ (with the hint's indexing, the $i$-th value of the chain becomes observable only at time $i+1$). Finally, $M_t \geq 0$ because $M_{\lceil t \rceil} \geq 0$. For non-integer $t$, the Brownian increments that determine $U_{\lceil t \rceil}$ (and partially determine $U_{\lceil t \rceil -1}$) are not yet fully observed at time $t$; consequently, on the event that the chain was still positive at its last completed step, the conditional probability given $\mathcal{F}_t$ that $M_{\lceil t \rceil} \geq 1$ is strictly positive (all weights are at least $1$, so it suffices that the relevant uniforms be small enough), whence $M_t = \mathbb{E}[M_{\lceil t \rceil}\mid \mathcal{F}_t] > 0$ almost surely at such $t$; and if the chain had already vanished at an earlier step, then the continuous martingale already equals $0$ at the corresponding integer time. Thus $M_u$ can first reach the level $0$ only at an integer time $u \geq 1$, i.e., $\tau = \inf\{u \geq 0 : M_u \leq 0\}$ takes values in $\{1,2,\ldots\} \cup \{\infty\}$. |
2018-q1 | Patterns in noise Let $X_0, X_1, X_2, X_3, \ldots$ be a sequence of independent Normal(0, 1) random variables. Say there is a peak at time $j \ge 1$ if $X_{j-1} < X_j > X_{j+1}$. Let $P_n$ denote the number of peaks up to time $n$.
(a) Find the value of $\mu_n = E(P_n)$.
(b) Prove that, for any $\epsilon > 0$,
\[ \mathbb{P} \left\{ \left| \frac{P_n}{n} - \frac{\mu_n}{n} \right| > \epsilon \right\} \to 0. \] | For any three \textit{i.i.d.} $N(0,1)$ random variables $X,Y,Z$, we have by symmetry the following.
$$ \mathbb{P}(X<Y>Z) = \mathbb{P}(X<Z<Y) + \mathbb{P}(Z<X<Y) = \dfrac{1}{3}.$$
\begin{enumerate}[label=(\alph*)]
\item Let $A_i$ be the event that there is a peak at time $i$. Then
$\mathbb{P}(A_i) = \mathbb{P}(X_{i-1}<X_i>X_{i+1}) =1/3.$ Hence,
$$ \mu_n = \mathbb{E}(P_n) = \mathbb{E} \left[ \sum_{j=1}^n \mathbbm{1}_{A_j}\right] = \sum_{j=1}^n \mathbb{P}(A_j) = \dfrac{n}{3}.$$
\item Enough to show that $\operatorname{Var}(P_n/n)=o(1)$. Note that $A_i$ and $A_j$ are independent if $|j-i| \geq 3$, since they then involve disjoint sets of the $X_i$'s; moreover $|\operatorname{Cov}(\mathbbm{1}_{A_i}, \mathbbm{1}_{A_j})| \leq \operatorname{Var}(\mathbbm{1}_{A_1}) = 2/9$ by the Cauchy--Schwarz inequality. Therefore,
$$ \operatorname{Var}(P_n) = \operatorname{Var} \left( \sum_{j=1}^n \mathbbm{1}_{A_j} \right) = \sum_{j=1}^n \operatorname{Var}(\mathbbm{1}_{A_j}) +2\sum_{\substack{1 \leq i < j \leq n \\ j-i \leq 2}} \operatorname{Cov}(\mathbbm{1}_{A_i}, \mathbbm{1}_{A_{j}}) \leq \frac{2n}{9} + 2\cdot 2n \cdot \frac{2}{9} \leq 3n, $$
and hence $\operatorname{Var}(P_n/n) \leq 3/n =o(1)$.
\end{enumerate}
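Both parts are easy to illustrate by simulation. The sketch below is an illustrative numerical check only (it assumes the \texttt{numpy} library): it generates one long noise sequence, counts the peaks, and confirms that $P_n/n$ is close to $1/3$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
x = rng.normal(size=n + 2)                        # X_0, ..., X_{n+1}
peaks = (x[:-2] < x[1:-1]) & (x[1:-1] > x[2:])    # peak indicators for j = 1, ..., n
print(peaks.sum() / n)                            # close to 1/3
\end{verbatim}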
|
2018-q2 | Nearest neighbors
(a) Let $X(1), X(2), \ldots$ be a sequence of integer valued i.i.d. random variables. For each $n \ge 2$, let $Y(n)$ be the nearest neighbor of $X(1)$ among $X(2), \ldots, X(n)$, with ties broken at random. Show that $Y(n)$ converges to $X(1)$ almost surely as $n$ tends to infinity.
(b) Same as (a) above but now $X(i)$ are i.i.d. random vectors in $\mathbb{R}^2$ with its Borel sets and usual Euclidean distance. You are not allowed to make any assumptions about the law of $X(i)$; in particular, $X(i)$ need not take integer values. | \begin{enumerate}[label=(\alph*)]
\item Let $D_n := |X(1)-Y(n)|$. We have to show that $D_n$ converges to $0$ almost surely. Since $D_n$ is non-increasing in $n$ with probability $1$, it is enough to show that $D_n$ converges to $0$ in probability. Let $p_j$ denote the probability of $X(1)$ taking the value $j \in \mathbb{Z}$. Note that
$$\mathbb{P}(D_n \neq 0) = \mathbb{P} \left( X(j) \neq X(1), \; \forall \; 2 \leq j \leq n\right) = \sum_{k \in \mathbb{Z}} \mathbb{P} \left( X(j) \neq k, \; \forall \; 2 \leq j \leq n \mid X(1)=k\right)p_k = \sum_{k} p_k(1-p_k)^{n-1}.$$
For all $k$, we have $p_k(1-p_k)^{n-1} \to 0$ as $n \to \infty$ and $p_k(1-p_k)^{n-1} \leq p_k$ which is summable over $k$. Hence we can apply DCT and conclude that $\mathbb{P}(D_n \neq 0)=o(1)$. This completes the proof.
\item Like part (a), define $D_n := ||X(1)-Y(n)||_2$. By the same arguments as in part (a), it is enough to show that $D_n$ converges to $0$ in probability. Let $\mu$ denote the probability measure on $\mathbb{R}^2$ induced by $X(1)$, and let $B_{x,r}$ denote the open ball in $\mathbb{R}^2$ around $x$ of radius $r$. Then for any $\varepsilon>0$, we have,
\begin{align}
\mathbb{P}(D_n \geq \varepsilon) = \mathbb{E} \left[\mathbb{P} \left( ||X(j)-X(1)||_2 \geq \varepsilon \mid X(1)\right) \right] &= \mathbb{E} \left[ 1- \mu(B_{X(1), \varepsilon}) \right]^{n-1} \nonumber\\
&= \int (1-\mu(B_{x,\varepsilon}))^{n-1}\, d\mu(x) \nonumber\\
&= \int_{\text{supp}(\mu)} (1-\mu(B_{x,\varepsilon}))^{n-1}\, d\mu(x), \label{target}
\end{align}
where $\text{supp}(\mu) := \left\{x : \mu(G) >0 \text{ for every open neighbourhood } G \text{ of }x\right\}$. We know that $\text{supp}(\mu)$ is a closed set, hence measurable, and $\mu(\text{supp}(\mu))=1$ (see \cite[Exercise 1.2.48]{dembo}). For all $x \in \text{supp}(\mu)$, we have $\mu(B_{x, \varepsilon}) >0$ and hence $(1-\mu(B_{x,\varepsilon}))^{n-1} \to 0$. On the other hand, $(1-\mu(B_{x,\varepsilon}))^{n-1} \leq 1$; therefore the \textit{Bounded Convergence Theorem} implies that (\ref{target}) goes to $0$, which completes the proof.
\end{enumerate}
|
2018-q3 | A gambling problem An ancient text asks: "A and B, having an equal number of chances to win one game, engage a spectator S with the proposal that after an even number of games $n$ has been played the winner shall give him as many pieces as he wins over and above one-half the number of games played. It is demanded to show how the expectation of $S$ is to be determined."
(a) With $p$ denoting the chance that A wins one game, show that the answer is the case $p = 1/2$ of the formula
\[ \mathbb{E}|S_n - np| = 2\nu (1 - p)b(\nu, n, p), \]
where $S_n$ is a Binomial$(n, p)$ random variable,
\[ b(k, n, p) = \binom{n}{k}p^k(1 - p)^{n-k}, \]
and $\nu$ is the unique integer $np < \nu \le np + 1$.
(b) Prove the formula in part (a) above.
(c) Assuming that the formula in part (a) above is true, show that it implies the $L^1$ law of large numbers,
\[ \mathbb{E} \left| \frac{S_n}{n} - p \right| \to 0. \]
(This was the real motivation behind the quote from the ancient text above.) | \begin{enumerate}[label=(\alph*)]
\item Let $S_n$ denotes the total number of wins by player $A$ and so $n-S_n$ is the total number of wins for player $B$, after $n$ games. Clearly $S_n \sim \text{Bin}(n,1/2)$. By definition,
$$ S = (S_n-n/2)\mathbbm{1}(S_n \geq n-S_n) + (n-S_n-n/2)\mathbbm{1}(n-S_n > S_n) = |S_n-n/2|.$$
Thus $\mathbb{E}S = \mathbb{E}|S_n-n/2|.$
\item We have $S_n \sim \text{Bin}(n,p)$. We need to compute $\mathbb{E}|S_n-np|$. $\nu$ is the unique integer $np < \nu \leq np+1$, i.e. $\nu=\lfloor np+1 \rfloor$. Note that $\mathbb{E}S_n=np$ and therefore,
\begin{align*}
\mathbb{E}|S_n-np| = 2\mathbb{E}(S_n-np)_{+} - \mathbb{E}(S_n-np) = 2\mathbb{E}(S_n-np)_{+} &= 2\sum_{k=\nu}^n (k-np){n \choose k}p^k(1-p)^{n-k} \\
&= 2\sum_{k=\nu}^n n{n-1 \choose k-1}p^k(1-p)^{n-k} - 2\sum_{k=\nu}^n np{n \choose k}p^k(1-p)^{n-k} \\
&= 2np \sum_{l=\nu-1}^{n-1} {n-1 \choose l}p^l(1-p)^{n-1-l} - 2np \mathbb{P}(S_n \geq \nu) \\
&= 2np \mathbb{P}(S_{n-1}\geq \nu-1)-2np \mathbb{P}(S_n \geq \nu).
\end{align*}
Observe that $(S_n \geq \nu) \subseteq (S_{n-1}\geq \nu-1)$ and hence
\begin{align*}
\mathbb{P}(S_{n-1}\geq \nu-1)- \mathbb{P}(S_n \geq \nu) &= \mathbb{P}(S_{n-1}\geq \nu-1, S_n \geq \nu) +\mathbb{P}(S_{n-1}\geq \nu-1, S_n < \nu) - \mathbb{P}(S_{n}\geq \nu) \\
&= \mathbb{P}( S_n \geq \nu) +\mathbb{P}(S_{n-1}\geq \nu-1, S_n < \nu) - \mathbb{P}(S_{n}\geq \nu) \\
&= \mathbb{P}(S_{n-1}\geq \nu-1, S_n < \nu) \\
&= \mathbb{P}(S_{n-1}=S_n = \nu-1) = \mathbb{P}(S_{n-1}=\nu-1, S_{n}-S_{n-1}=0) = (1-p)\mathbb{P}(S_{n-1}=\nu-1).
\end{align*}
Therefore,
\begin{align*}
\mathbb{E}|S_n-np| = 2np(1-p)\mathbb{P}(S_{n-1}=\nu-1) = 2np(1-p){n-1 \choose \nu-1}p^{\nu-1}(1-p)^{n-\nu} = 2\nu(1-p){n \choose \nu}p^{\nu}(1-p)^{n-\nu},
\end{align*}
which completes the proof.
\item First consider the case $p \in (0,1)$ and we write $\nu_n = \lfloor np+1 \rfloor,$ to make the dependency on $n$ apparent. Note that $\nu_n/n \sim p$ as $n \to \infty$ and hence $\nu_n \to \infty$ as $n \to \infty$. Using \textit{Stirling's Approximation}, we get
$$ {n \choose \nu_n} = \dfrac{n!}{(n-\nu_n)!(\nu_n)!} \sim \dfrac{\sqrt{2\pi}n^{n+1/2}e^{-n}}{\sqrt{2\pi}(n-\nu_n)^{n-\nu_n+1/2}e^{-(n-\nu_n)}\sqrt{2\pi }{\nu_n}^{\nu_n+1/2}e^{-\nu_n}} = \dfrac{1}{\sqrt{2 \pi n}} \left(\dfrac{n}{\nu_n}\right)^{\nu_n+1/2} \left(\dfrac{n}{n-\nu_n}\right)^{n-\nu_n+1/2}.$$
This implies that
\begin{align*}
\mathbb{E}|S_n-np| = 2\nu_n(1-p){n \choose \nu_n}p^{\nu_n}(1-p)^{n-\nu_n} &\sim \dfrac{2\nu_n (1-p)}{\sqrt{2\pi n}}\left(\dfrac{n}{\nu_n}\right)^{\nu_n+1/2} \left(\dfrac{n}{n-\nu_n}\right)^{n-\nu_n+1/2} \\
& \sim \dfrac{2 np (1-p)}{\sqrt{2\pi n}}\dfrac{1}{\sqrt{p(1-p)}}\left(\dfrac{np}{\nu_n}\right)^{\nu_n} \left(\dfrac{n(1-p)}{n-\nu_n}\right)^{n-\nu_n} \\
&= \dfrac{\sqrt{2np(1-p)}}{\sqrt{\pi}} \left(\dfrac{np}{\nu_n}\right)^{\nu_n} \left(\dfrac{n(1-p)}{n-\nu_n}\right)^{n-\nu_n}.
\end{align*}
To find exact asymptotics of the last expression, note that $np < \nu_n \leq np+1$ and hence $\nu_n = np+O(1).$
$$ \log \left(\dfrac{np}{\nu_n}\right)^{\nu_n} = -\nu_n \log \left(\dfrac{np+O(1)}{np}\right) = - \nu_n \log \left( 1 + O(n^{-1}) \right) = - \nu_n O(n^{-1}) = O(1). $$
Similarly, we can show that
$$ \left(\dfrac{n(1-p)}{n-\nu_n}\right)^{n-\nu_n} = O(1),$$
and combining them we get $\mathbb{E}|S_n-np| = O(n^{1/2}).$ This implies that
$$ \mathbb{E}\Bigg \rvert \dfrac{S_n}{n}-p\Bigg \rvert = O(n^{-1/2}) \to 0, \; \text{ as } n \to \infty.$$
For $p=0,1$, we have $\mathbb{E}|S_n-np|=0$ and hence the convergence follows trivially.
\end{enumerate}
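The closed-form identity of part (a) can be verified numerically for small $n$. The sketch below is an illustrative check only (it assumes the \texttt{numpy} library and Python's \texttt{math.comb}; the values of $n$ and $p$ are arbitrary illustrative choices): it computes $\mathbb{E}|S_n-np|$ directly from the binomial pmf and compares it with $2\nu(1-p)b(\nu,n,p)$.
\begin{verbatim}
import numpy as np
from math import comb, floor

def check(n, p):
    k = np.arange(n + 1)
    pmf = np.array([comb(n, i) for i in k]) * p**k * (1 - p)**(n - k)
    lhs = np.sum(np.abs(k - n * p) * pmf)            # E|S_n - np|
    nu = floor(n * p) + 1                            # unique integer with np < nu <= np + 1
    rhs = 2 * nu * (1 - p) * comb(n, nu) * p**nu * (1 - p)**(n - nu)
    return lhs, rhs

print(check(10, 0.5))
print(check(25, 0.3))
\end{verbatim}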
|
2018-q4 | A problem of contagion Consider an urn containing one red and one white ball, At each time $j$ a ball is chosen uniformly at random from the current urn and replaced together with $b_j$ balls of the color chosen. Here $b_1, b_2, \ldots$ is a prespecified sequence of positive integers. Let $S_n$ be the number of red balls in the urn at time $n$ and $B_n = 2 + b_1 + \cdots + b_n$.
(a) Show that $\lim_{n\to\infty} S_n / B_n$ exists almost surely.
(b) Suppose $b_j = 2$ for all $j$. Determine the distribution of the limit.
(c) Suppose $b_j = 2^j$. Determine the distribution of the limit; in each case, justify your answer by a proof or reference to a standard theorem. | \begin{enumerate}[label=(\alph*)]
\item Note that $B_n = 2 + \sum_{i=1}^n b_i$ is the number of balls in the urn after time $n$. Let $\mathcal{F}_n := \sigma (S_i \mid 0 \leq i \leq n)$ be the $\sigma$-algebra containing information upto time $n$. Then we have for all $n \geq 1$,
$$ S_n - S_{n-1} \mid \mathcal{F}_{n-1} \sim b_n \text{Ber}(S_{n-1}/B_{n-1}).$$
Therefore,
$$ \mathbb{E} \left[ \dfrac{S_n}{B_n} \Bigg \rvert \mathcal{F}_{n-1}\right] = \dfrac{S_{n-1}+\dfrac{b_nS_{n-1}}{B_{n-1}}}{B_n} = \dfrac{S_{n-1}}{B_n}\left(1 + \dfrac{b_n}{B_{n-1}} \right) = \dfrac{S_{n-1}}{B_n}\dfrac{b_n+B_{n-1}}{B_{n-1}} = \dfrac{S_{n-1}}{B_{n-1}}.$$
Thus $\left\{S_n/B_n, \mathcal{F}_n, n \geq 0\right\}$ is a MG with $0 \leq S_n/B_n \leq 1$. Hence by \textit{martingale Convergence Theorem}, we have $S_n/B_n$ converges almost surely.
\item Here $b_j=2$ for all $j \geq 1$ and hence $B_n=2(n+1)$. Let us first find out the probability mass function of $S_n$. Clearly, $S_n$ can take values in $\left\{1, 3, \ldots, 2n+1\right\}$. It is easy to see that for $k=0, \ldots, n$,
\begin{align*}
\mathbb{P}(S_n = 2k+1) = {n \choose k} \dfrac{\prod_{j=0}^{k-1} (2j+1) \prod_{j=0}^{n-k-1}(2j+1)}{\prod_{i=1}^n B_{i-1}} = {n \choose k} \dfrac{\dfrac{(2k)!}{2^k k!} \dfrac{(2n-2k)!}{2^{n-k}(n-k)!}}{2^n n!} &= \dfrac{(2k)!(2n-2k)!}{4^n (k!)^2 ((n-k)!)^2} \\
&= 2^{-2n}{2k \choose k} {2n-2k \choose n-k}.
\end{align*}
Let $W_n = (S_n-1)/2$. We shall use \textit{Sterling's approximation}. Fix $\varepsilon >0$ and we can say that for large enough $k$,
$$ (1-\varepsilon) \sqrt{2\pi} k^{k+1/2} e^{-k} \leq k! \leq (1+\varepsilon) \sqrt{2\pi} k^{k+1/2} e^{-k}. $$
Fix $0 <a<b <1$. Then for large enough $n$, we have
$$ {2k \choose k}{2n-2k \choose n-k} = \dfrac{(2k)!(2n-2k)!}{(k!)^2((n-k)!)^2} \in [C_1(\varepsilon),C_2(\varepsilon)] \dfrac{1}{2\pi}\dfrac{(2k)^{2k+1/2}(2n-2k)^{2n-2k+1/2}}{k^{2k+1}(n-k)^{2n-2k+1}},$$
for all $an \leq k \leq bn$, where $C_1(\varepsilon)=(1-\varepsilon)^2(1+\varepsilon)^{-4}$ and $C_2(\varepsilon)=(1+\varepsilon)^2(1-\varepsilon)^{-4}.$
Hence, setting $a_n = \lceil an \rceil $ and $b_n=\lfloor bn \rfloor$, we have
\begin{align*}
\limsup_{n \to \infty} \mathbb{P} \left( a \leq \dfrac{W_n}{n} \leq b \right) &= \limsup_{n \to \infty} \sum_{k=a_n}^{b_n} 2^{-2n} {2k \choose k}{2n-2k \choose n-k} \\
& \leq C_2(\varepsilon) \limsup_{n \to \infty} \sum_{k=a_n}^{b_n} 2^{-2n} \dfrac{1}{2\pi}\dfrac{(2k)^{2k+1/2}(2n-2k)^{2n-2k+1/2}}{k^{2k+1}(n-k)^{2n-2k+1}} \\
& = C_2(\varepsilon) \limsup_{n \to \infty} \sum_{k=a_n}^{b_n} \dfrac{1}{\pi} k^{-1/2}(n-k)^{-1/2} \\
& = C_2(\varepsilon)\dfrac{1}{\pi} \dfrac{1}{n} \limsup_{n \to \infty} \sum_{k=a_n}^{b_n} \left( \dfrac{k}{n}\right)^{-1/2}\left( 1-\dfrac{k}{n}\right)^{-1/2} = C_2(\varepsilon)\dfrac{1}{\pi} \int_{a}^b \dfrac{1}{\sqrt{x(1-x)}} \, dx.
\end{align*}
Similarly,
$$\liminf_{n \to \infty} \mathbb{P} \left( a \leq \dfrac{W_n}{n} \leq b \right) \geq C_1(\varepsilon)\dfrac{1}{\pi} \int_{a}^b \dfrac{1}{\sqrt{x(1-x)}} \, dx.$$
Taking $\varepsilon \downarrow 0$, we conclude that
$$ \mathbb{P} \left( a \leq \dfrac{W_n}{n} \leq b \right) \rightarrow \int_{a}^b \dfrac{1}{\pi\sqrt{x(1-x)}} \, dx = \mathbb{P}(a \leq \text{Beta}(1/2,1/2) \leq b).$$
Since, this holds true for all $0<a<b<1$, we have $W_n/n \stackrel{d}{\longrightarrow} \text{Beta}(1/2,1/2).$ Therefore,
$$ \dfrac{S_n}{B_n} = \dfrac{2W_n+1}{2(n+1)} = \dfrac{W_n}{n}\dfrac{n}{n+1} + \dfrac{1}{2(n+1)} \stackrel{d}{\longrightarrow} \text{Beta}(1/2,1/2).$$
\item $b_j = 2^j$, for all $i \geq 1$.
In this case $B_n = 2 + \sum_{i=1}^{n} 2^i = 2^{n+1}$.Setting $R_n := S_n/B_n$, we also have,
$$ \operatorname{Var}(R_n \mid \mathcal{F}_{n-1}) = \dfrac{1}{B_n^2} \operatorname{Var}(S_n \mid \mathcal{F}_{n-1}) = \dfrac{1}{B_n^2} \operatorname{Var}(S_n-S_{n-1} \mid \mathcal{F}_{n-1}) = \dfrac{b_n^2}{B_n^2} R_{n-1}(1-R_{n-1}), $$
and therefore,
$$ \mathbb{E}(R_n^2 \mid \mathcal{F}_{n-1}) = \dfrac{b_n^2}{B_n^2} R_{n-1}(1-R_{n-1}) + R_{n-1}^2 = \dfrac{1}{4} R_{n-1}(1-R_{n-1}) + R_{n-1}^2.$$
Combining them we obtain,
$$ \mathbb{E}(R_n(1-R_n) \mid \mathcal{F}_{n-1}) = R_{n-1} - \dfrac{1}{4} R_{n-1}(1-R_{n-1}) - R_{n-1}^2 = \dfrac{3}{4}R_{n-1}(1-R_{n-1}), \; \forall \; n \geq 1.$$
Since, $R_n$ is uniformly bounded (by $1$) and converges almost surely, $R_n$ converges to $R_{\infty}$ in $L^2$. Therefore,
$$ \mathbb{E}(R_{\infty}(1-R_{\infty})) = \lim_{n \to \infty} \mathbb{E}(R_n(1-R_n)) = \lim_{n \to \infty}\dfrac{3^n}{4^n}\mathbb{E}(R_0(1-R_0)) = \lim_{n \to \infty}\dfrac{3^n}{4^{n+1}} = 0 . $$
This implies that $R_{\infty} \in \left\{0,1\right\}$ with probability $1$. The $L^2$ convergence also implies,
$$ \mathbb{E}(R_{\infty}) = \lim_{n \to \infty} \mathbb{E}(R_n) = \mathbb{E}(R_0) = \dfrac{1}{2},$$
we conclude that $R_{\infty} \sim \text{Ber}(1/2)$, i.e., $S_n/B_n \stackrel{d}{\longrightarrow} \text{Ber}(1/2).$
\end{enumerate}
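Both limiting distributions can be illustrated by simulating the urn directly. The sketch below is an illustrative numerical check only (it assumes the \texttt{numpy} library; the numbers of steps and repetitions are arbitrary illustrative choices): for $b_j \equiv 2$ the quartiles of $S_n/B_n$ should be close to those of the arcsine law $\text{Beta}(1/2,1/2)$, and for $b_j = 2^j$ the terminal values should split roughly evenly between a neighbourhood of $0$ and a neighbourhood of $1$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate(b, steps):
    red, total = 1, 2
    for j in range(1, steps + 1):
        if rng.random() < red / total:
            red += b(j)
        total += b(j)
    return red / total

lim_b2 = [simulate(lambda j: 2, 1000) for _ in range(2000)]      # part (b)
lim_2j = [simulate(lambda j: 2 ** j, 60) for _ in range(2000)]   # part (c)
print(np.quantile(lim_b2, [0.25, 0.5, 0.75]))  # arcsine quartiles: about 0.146, 0.5, 0.854
print(np.mean(np.array(lim_2j) > 0.5))         # about 0.5: the limit is Bernoulli(1/2)
\end{verbatim}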
|
2018-q5 | Let $(\Omega, \mathcal{F}, \mathbb{Q})$ be some probability space. Recall that an event $A \in \mathcal{F}$ is an atom of $\mathbb{Q}$ if $\mathbb{Q}(A) \neq 0$ and for each $C \subseteq A, C \in \mathcal{F}$, either $\mathbb{Q}(C) = 0$ or $\mathbb{Q}(C) = \mathbb{Q}(A)$.
(a) Show that if $\mathbb{Q}$ has no atoms, then $1/3 \leq \mathbb{Q}(C) \leq 2/3$ for some $C \in \mathcal{F}$.
Hint: Show that if there is no such $C$, then $\mathbb{Q}(B) < 1/3$ for some $B \in \mathcal{F}$ such that the complement of $B$ includes an atom.
(b) Show that if $\mathbb{Q}$ has no atoms, then the range of $\mathbb{Q}$ is $[0, 1]$. That is, for every $\mu \in [0, 1]$ there is some $C \in \mathcal{F}$ such that $\mathbb{Q}(C) = \mu$.
Hint: Repeat the argument of part (a) restricted to $C$ and to its complement, to get some events of intermediate measure, and iterate to get a dense set of values $\mu$ in the range of $\mathbb{Q}$. | I shall first prove a lemma which will be useful in proving the statements. I present it separately to make the later solutions less cluttered.
\begin{lemma}
Let $(\Omega,\mathcal{F}, Q)$ is a probability space. Then for any $A \in \mathcal{F}$ with $Q(A) \in (0,1]$ and $\varepsilon >0$, we can find $B \in \mathcal{F}, B \subseteq A$ such that $0 < Q(B) < \varepsilon$, provided that $A$ contains no atoms.
\end{lemma}
\begin{proof}
Since $A$ contains no atom, it is not an atom itself and hence we can get $C \in \mathcal{F}$, $C \subseteq A$ such that $0<Q(C)< Q(A)$. Then either $0 < Q(C) \leq Q(A)/2$ or $0 < Q(A \setminus C) \leq Q(A)/2$. In any case we get an $A_1 \in \mathcal{F}, A_1 \subseteq A_0 :=A$ such that $0 < Q(A_1) \leq Q(A_0)/2$. By assumption, $A_1$ also contains no atoms and therefore we can apply the same steps on $A_1$ to get an $A_2 \in \mathcal{F}, A_2 \subseteq A_1$ such that $0 < Q(A_2) \leq Q(A_1)/2.$ Going on in this way, we can get $A_n \in \mathcal{F}, A_n \subseteq A$ such that $0 < Q(A_n) \leq 2^{-n}Q(A)$, for all $n \geq 1$. Now take $n$ large enough such that $2^{-n}Q(A)< \varepsilon,$ the corresponding $A_n$ does the job.
\end{proof}
\begin{enumerate}[label=(\alph*)]
\item Suppose there is no such $C$. We shall construct such $B$ as suggested in the hint. Start with $H_0= \emptyset$ and fix some $\varepsilon >0$. For $n \geq 1$, let
$$ a_n := \sup \left\{Q(D) \mid D \in \mathcal{F}, D \subseteq \Omega \setminus H_{n-1}, Q(H_{n-1} \cup D) \leq 1/3\right\},$$
and get $D_n \in \mathcal{F}, D_n \subseteq \Omega \setminus H_{n-1}$ such that $Q(D_n) \geq (1-\varepsilon)a_n$. Define $H_n := H_{n-1} \cup D_n$. We shall show that $B := \cup_{n \geq 1} H_n = \cup_{n \geq 1} D_n$ serves our purpose.
First of all, note that the supremum in the definition is always taken over a non-empty set since the null set always belongs to that collection. This follows from the fact that $Q(H_n) \leq 1/3$, for all $n \geq 1$ by definition. This observation also implies that $Q(B) = \lim_{n \to \infty} Q(H_n) \leq 1/3$ and hence by our hypothesis $Q(B)<1/3$. Moreover, observing that $D_n$'s are disjoint by definition, we can conclude,
$$ 1/3 > Q(B) = \sum_{n \geq 1} Q(D_n) \geq (1-\varepsilon)\sum_{n \geq 1} a_n \Rightarrow a_n =o(1).
$$
Now we will be done if we can show that $B^c$ contains an atom. Suppose this is not true. Then by the lemma, we can find $E \in \mathcal{F}, E \subseteq B^c$ such that $0<Q(E) < 1/3 - Q(B).$ Then for all $n \geq 1$, we have $E \subseteq B^c \subseteq H_{n-1}^c$ and $Q(H_{n-1}\cup E) = Q(H_{n-1}) + Q(E) \leq Q(B) + Q(E) < 1/3.$ This implies that $a_n \geq Q(E) >0$ for all $n \geq 1$, contradicting to $a_n=o(1)$.
\item Technically, in part (a) we have shown that there exists $C \in \mathcal{F}$ such that $Q(C)=1/3$. There is nothing special about $1/3$, and the argument can be used without any change to prove that there exists $C \in \mathcal{F}$ such that $Q(C)=\mu$, for any $\mu \in [0,1]$. But I shall still present a proof assuming only the assertion of part (a), following the hint. The following lemma will be useful.
\begin{lemma}
For all $n \geq 1$, we can construct $\left\{A_{n,i} \mid 0 \leq i \leq 2^n-1\right\} \subseteq \mathcal{F}$ satisfying the following conditions.
\begin{enumerate}[label=(\arabic*)]
\item $\left\{A_{n,i} \mid 0 \leq i \leq 2^n-1\right\}$ forms a partition of $\Omega$.
\item $A_{n,k} = A_{n+1,2k} \cup A_{n+1,2k+1},\; \forall \; 0 \leq k \leq 2^n-1; n \geq 1$.
\item $ (1/3)^n \leq Q(A_{n,k}) \leq (2/3)^n, \; \forall \; 0 \leq k \leq 2^n-1; n \geq 1$.
\end{enumerate}
\end{lemma}
\begin{proof}
We shall prove it by induction. For $n=1$, get $C \in \mathcal{F}$ such that $1/3 \leq Q(C)\leq 2/3$ and set $A_{1,0}=C, A_{1,1}=C^c$. Clearly the conditions are satisfied. Suppose we have constructed such partitions for $n=1, \ldots, m$, for some $m \geq 1$. Consider $0 \leq k \leq 2^{m}-1$. By the third condition, $Q(A_{m,k})>0$ and hence we can consider the probability measure $Q_{A_{m,k}}$ defined as
$$ Q_{A_{m,k}}(\cdot) := Q( \cdot \mid A_{m,k}) = \dfrac{Q(\cdot \cap A_{m,k})}{Q(A_{m,k})}.$$
It is easy to see that non-atomicity of $Q$ implies the same for $ Q_{A_{m,k}}$ and hence by part (a), we get $D \in \mathcal{F}$ such that $1/3 \leq Q_{A_{m,k}}(D) \leq 2/3$. Set $A_{m+1,2k} = D \cap A_{m,k}$ and $A_{m+1,2k+1} = D^c \cap A_{m,k}$. Condition (2) is clearly satisfied. By definition, $(1/3)Q(A_{m,k}) \leq Q(A_{m+1,2k}), Q(A_{m+1,2k+1}) \leq (2/3)Q(A_{m,k})$, which proves the third condition. By construction,
$$ \bigcup_{0 \leq k \leq 2^{m+1}-1} A_{m+1,k} = \bigcup_{0 \leq k \leq 2^m-1} A_{m,k} = \Omega,$$
and the disjointness is also evident. This proves the first condition.
\end{proof}
Now take $\mu \in [0,1]$ and define
$$ k_n := \inf \left\{ j \; \bigg \rvert \sum_{l=0}^j Q(A_{n,l}) = Q\left( \bigcup_{l=0}^j A_{n,l}\right) \geq \mu\right\}, \; U_n := \bigcup_{l=0}^{k_n} A_{n,l}, \; \forall \; n \geq 1.$$
Note that,
$$ Q\left( \bigcup_{t=0}^{2k_n+1} A_{n+1,t}\right) = Q\left( \bigcup_{l=0}^{k_{n}} A_{n,l}\right) \geq \mu \Rightarrow k_{n+1} \leq 2k_n+1 \Rightarrow U_{n+1} = \bigcup_{t=0}^{k_{n+1}} A_{n+1,t} \subseteq \bigcup_{t=0}^{2k_n+1} A_{n+1,t} = \bigcup_{l=0}^{k_{n}} A_{n,l} = U_n.$$
Also,
$$ Q(U_n) = Q\left( \bigcup_{l=0}^{k_{n}} A_{n,l}\right) \geq \mu > Q\left( \bigcup_{l=0}^{k_{n}-1} A_{n,l}\right) \Rightarrow 0 \leq Q(U_n)-\mu \leq Q(A_{n,k_n}) \leq (2/3)^n. $$
Therefore,
$$ Q \left(\bigcap_{n \geq 1} U_n \right) = \lim_{n \to \infty} Q(U_n) = \lim_{n \to \infty} \left[ \mu + O((2/3)^n)\right] = \mu.
$$
This completes the proof.
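The argument above is essentially an algorithm: refine a partition whose cells all have measure between $(1/3)^n$ and $(2/3)^n$, and at each level take the shortest prefix union $U_n$ whose measure is at least $\mu$. The Python sketch below is a toy instantiation for Lebesgue measure on $[0,1]$ only, where the splitting step of the lemma can be taken to be bisection at the midpoint, so the level-$n$ cells are the dyadic intervals of length $2^{-n}$; it merely illustrates the rate $0 \leq Q(U_n)-\mu \leq (2/3)^n$ and is not a substitute for the general measure-theoretic construction.
\begin{verbatim}
import numpy as np

def greedy_prefix(mu, max_level=20):
    # Level-n cells: dyadic intervals of Lebesgue measure 2^{-n}
    # (which lies in the window [(1/3)^n, (2/3)^n]).
    # k_n = smallest index whose prefix measure is >= mu (mu in (0,1]),
    # U_n = union of the first k_n + 1 cells.
    for n in range(1, max_level + 1):
        cell = 2.0 ** (-n)
        k_n = int(np.ceil(mu / cell)) - 1
        Q_Un = (k_n + 1) * cell
        print(f"n={n:2d}  Q(U_n)={Q_Un:.8f}  Q(U_n)-mu={Q_Un - mu:.2e}")

greedy_prefix(np.pi / 10)   # any target mu in (0,1]; excess <= 2^{-n}
\end{verbatim}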
|
2018-q6 | (a) Let $U$ denote a Uniform[0,1] random variable and $Y_n = W(n) \mod 1$, for a standard Brownian motion $W(t)$ that starts at $W(0) = 0$. Show that
\[
d_{tv}(Y_n, U) := \sup_{\|f\|_\infty \le 1} \left| \mathbb{E}f(Y_n) - \mathbb{E}f(U) \right| \le \frac{\sqrt{2}}{\sqrt{\pi n}}, \]
hence $Y_n$ converges to $U$ in total variation distance when $n \to \infty$.
(b) Consider the sequence $X_n = (S_n) \mod 1$, for $S_n = \sum_{k=1}^n \xi_k$, where $\{\xi_k\}$ are i.i.d. real-valued random variables. Show that $X_n$ converges in distribution to $U$ when $n \to \infty$, if and only if $\sup_\alpha \mathbb{P}(\xi_1 = \alpha) < 1$. | \begin{enumerate}[label=(\alph*)]
\item Note that $W(t) \sim N(0,t)$ and since $Y(t)=W(t)(\!\!\!\mod 1)$, for any $0 \leq x <1$, we have
\begin{align*}
\mathbb{P}(Y(t) \leq x) = \sum_{m \in \mathbb{Z}} \mathbb{P}(m \leq W(t) < m+x) = \sum_{m \in \mathbb{Z}} \int_{m}^{m+x} f_{W(t)}(y)\, dy& = \sum_{m \in \mathbb{Z}} \int_{0}^{x} f_{W(t)}(z+m)\, dz \\
& = \int_{0}^x \left[ \sum_{m \in \mathbb{Z}} f_{W(t)}(z+m)\right] \, dz,
\end{align*}
implying that the random variable $Y(t)$ has density
$$ f_{Y(t)}(z) = \left[\sum_{m \in \mathbb{Z}} f_{W(t)}(z+m) \right] \mathbbm{1}(0 \leq z \leq 1).$$
We shall try to give appropriate bounds on $f_{Y(t)}$ using other techniques. Note that
$$ f_{Y(t)}(z) = \sum_{m \in \mathbb{Z}} f_{W(t)}(z+m) = \sum_{m \in \mathbb{Z}} \dfrac{1}{\sqrt{2\pi t}} \exp \left( -\dfrac{(z+m)^2}{2t}\right) = \sum_{m \in \mathbb{Z}} h\left(\dfrac{m}{t}; z,t \right), \; \forall \; 0 \leq z \leq 1,$$
where
$$ h(u;z,t) := \dfrac{1}{\sqrt{2\pi t}} \exp \left( -\dfrac{(z+ut)^2}{2t}\right) = \dfrac{1}{t} \sqrt{t} \varphi \left( \sqrt{t} \left(u+\dfrac{z}{t} \right) \right), \; \forall \; u \in \mathbb{R}, \; 0 \leq z \leq 1, \; t >0,$$
$\varphi$ being the standard normal density. For fixed $z,t$, the function $u \mapsto h(u;z,t)$ is maximized at $u=-z/t \in [-1/t,0]$. Moreover, it is non-decreasing on $(-\infty,-1/t]$ and non-increasing on $[0,\infty)$. Therefore,
\begin{align*}
f_{Y(t)}(z) = \sum_{m \in \mathbb{Z}} h\left(\dfrac{m}{t}; z,t \right) &= \sum_{m \leq -1} h\left(\dfrac{m}{t}; z,t \right) + \sum_{m \geq 0} h\left(\dfrac{m}{t}; z,t \right) \\
&\geq \sum_{m \leq -1} t \int_{(m-1)/t}^{m/t} h\left(u; z,t \right) \, du + \sum_{m \geq 0} t\int_{m/t}^{(m+1)/t} h\left(u; z,t \right)\, du \\
& = t \int_{-\infty}^{\infty} h\left(u; z,t \right) \, du - t\int_{-1/t}^{0} h\left(u; z,t \right) \, du \\
& = \int_{-\infty}^{\infty} \sqrt{t} \varphi \left( \sqrt{t} \left(u+\dfrac{z}{t} \right) \right)\, du - \int_{-1/t}^{0} \sqrt{t} \varphi \left( \sqrt{t} \left(u+\dfrac{z}{t} \right) \right) \, du \\
& \geq 1 - ||\varphi||_{\infty}/\sqrt{t} = 1- \dfrac{1}{\sqrt{2 \pi t}},
\end{align*}
whereas
\begin{align*}
f_{Y(t)}(z) = \sum_{m \in \mathbb{Z}} h\left(\dfrac{m}{t}; z,t \right) &= \sum_{m \leq -2} h\left(\dfrac{m}{t}; z,t \right) + \sum_{m \geq 1} h\left(\dfrac{m}{t}; z,t \right) + \sum_{m =-1,0} h\left(\dfrac{m}{t}; z,t \right)\\
&\leq \sum_{m \leq -2} t \int_{m/t}^{(m+1)/t} h\left(u; z,t \right) \, du + \sum_{m \geq 1} t\int_{(m-1)/t}^{m/t} h\left(u; z,t \right)\, du + \sum_{m =-1,0} h\left(\dfrac{m}{t}; z,t \right) \\
& = t \int_{-\infty}^{\infty} h\left(u; z,t \right) \, du - t\int_{-1/t}^{0} h\left(u; z,t \right) \, du + \sum_{m =-1,0} h\left(\dfrac{m}{t}; z,t \right) \\
& = \int_{-\infty}^{\infty} \sqrt{t} \varphi \left( \sqrt{t} \left(u+\dfrac{z}{t} \right) \right)\, du - \int_{-1/t}^{0} \sqrt{t} \varphi \left( \sqrt{t} \left(u+\dfrac{z}{t} \right) \right) \, du \\
& \hspace{ 3 in} + \sum_{m =-1,0} \dfrac{1}{\sqrt{t}} \varphi \left(\sqrt{t}\left( \dfrac{m+z}{t}\right) \right) \\
& \leq 1 +2 ||\varphi||_{\infty}/\sqrt{t} = 1+ \dfrac{2}{\sqrt{2 \pi t}}.
\end{align*}
Thus we have shown that $||f_{Y(t)} - f_U ||_{\infty} \leq 2/\sqrt{2 \pi t}$. Therefore,
$$ d_{TV}(Y(t), U) = \dfrac{1}{2}\int_{0}^1 \Bigg \rvert f_{Y(t)}(z) - f_{U}(z)\Bigg \rvert \, dz \leq \dfrac{1}{\sqrt{2 \pi t}}, \; \forall \; t >0.$$
Since convergence in total variation distance implies weak convergence, we can conclude that $W(t)(\!\!\!\mod 1) \stackrel{d}{\longrightarrow} \text{Uniform}(0,1)$, as $t \to \infty$.
We can get better upper bound on the total variation distance using \textit{Poisson Summation Formula} which states that for any infinitely differentiable function $g : \mathbb{R} \mapsto \mathbb{R}$ satisfying
$$ \sup_{x \in \mathbb{R}} |x|^{\beta} |g^{(k)}(x)| < \infty, \; \forall k \geq 0, \beta >0,$$
we have the following identity.
$$ \sum_{m \in \mathbb{Z}} g(x+m) = \sum_{m \in \mathbb{Z}} \widehat{g}(m)e^{i2\pi m x}, \; \forall \; x \in \mathbb{R},$$
where
$$ \widehat{g}(m) := \int_{-\infty}^{\infty} g(y) e^{-i2\pi m y}\, dy, \; \forall \; m \in \mathbb{Z}.$$
Fix $t>0$ and $c(t) \in \mathbb{R}$. We shall apply the formula with $g(y)=f_{W(t)+c(t)}(y) = (2\pi t)^{-1/2}\exp(-(y-c(t))^2/2t)$. The function is exponentially decaying at $\pm \infty$ and hence clearly satisfies the conditions required to apply the formula. Note that
$$ \widehat{f}_{W(t)+c(t)}(m) = \int_{-\infty}^{\infty} f_{W(t)+c(t)}(y) e^{-i2\pi m y}\, dy = \mathbb{E} \exp(-i 2\pi m (W(t)+c(t))) = \exp(-2\pi^2m^2t-i2\pi mc(t)), \; \forall \; m \in \mathbb{Z}.$$
Therefore, for $0 \leq z \leq 1,$
$$ f_{(W(t)+c(t))(\!\!\!\mod 1)}(z) =\sum_{m \in \mathbb{Z}} f_{W(t)+c(t)}(z+m) = \sum_{m \in \mathbb{Z}} \widehat{f}_{W(t)+c(t)}(m) e^{i2\pi m z} = \sum_{m \in \mathbb{Z}} \exp(-2\pi^2m^2t) e^{i2\pi m (z-c(t))}.$$
Plugging in this expression we obtain for $Y^{\prime}(t):=(W(t)+c(t))(\!\!\!\mod 1)$ the following.
\begin{align*}
d_{TV}(Y^{\prime}(t), U) = \dfrac{1}{2}\int_{0}^1 \Bigg \rvert f_{Y^{\prime}(t)}(z) - f_{U}(z)\Bigg \rvert \, dz &= \dfrac{1}{2}\int_{0}^1 \Bigg \rvert \sum_{m \in \mathbb{Z}} \exp(-2\pi^2m^2t) e^{i2\pi m (z-c(t))} - 1\Bigg \rvert \, dz \\
&= \dfrac{1}{2}\int_{0}^1 \Bigg \rvert \sum_{m \neq 0} \exp(-2\pi^2m^2t) e^{i2\pi m (z-c(t))}\Bigg \rvert \, dz \\
& \leq \dfrac{1}{2}\int_{0}^1 \sum_{m \neq 0} \exp(-2\pi^2m^2t)\, dz \\
& = \dfrac{1}{2}\sum_{m \neq 0} \exp(-2\pi^2m^2t) = \sum_{m \geq 1} \exp(-2\pi^2m^2t).
\end{align*}
Now observe that for all $t >0$,
\begin{align*}
\sum_{m \geq 1} \exp(-2\pi^2m^2t) \leq \sum_{m \geq 1} \exp(-2\pi^2mt) = \dfrac{\exp(-2\pi^2 t)}{1-\exp(-2\pi^2 t)} = \dfrac{1}{\exp(2\pi^2 t)-1} \leq \dfrac{1}{2\pi^2 t}.
\end{align*}
Therefore, $(W(t)+c(t))(\!\!\!\mod 1)$ converges to the $\text{Uniform}(0,1)$ distribution in total variation distance as $t \to \infty$, for any choice of $t \mapsto c(t)$.
\item Suppose $\sup_{\alpha} \mathbb{P}(\xi_1 = \alpha)=1$, which implies that $\xi_1 \equiv c$ almost surely for some constant $c$. Then $S_n=cn$ and $X_n = cn \mod 1$. Since $X_n$ is a sequence of degenerate random variables, it cannot converge in distribution to a uniform variable.
The converse part is not true. Take $\xi_1 \sim \text{Ber}(1/2)$. Then $S_n \in \mathbb{Z}$ with probability $1$, implying that $X_n\equiv 0$ for all $n$. Here $\sup_{\alpha} \mathbb{P}(\xi_1 = \alpha)=1/2$.
We shall prove the following ``kind of converse''. The choice of the random variable $n^{-1/2}S_{n^2}$ is inspired by the observation that this is the random variable (after centering and scaling) corresponding to $W(n)$ in Donsker's Theorem.
\begin{result}{\label{res}}
Let $\xi_1$ be a square integrable random variable such that the probability measure induced by $\xi_1$ is not singular with respect to Lebesgue measure. Then $n^{-1/2}S_{n^2}(\!\!\!\mod 1)$ converges to the $\text{Uniform}(0,1)$ distribution in total variation distance as $n \to \infty$.
\end{result}
\begin{proof}
We shall use the fact that for any measurable function $h:\mathbb{R} \to \mathbb{R}$ and random variables $X$ and $Y$, we have
\begin{equation}{\label{tv}}
d_{TV}(h(X),h(Y)) \leq d_{TV}(X,Y).
\end{equation}
Clearly equality holds in (\ref{tv}) if $h$ is bijective and its inverse is also measurable.
Let us denote the mean and variance of $\xi_1$ by $\mu$ and $\sigma^2 >0$ (since $\xi_1$ is not degenerate under the hypothesis) respectively. We shall make use of the fact that CLT holds in total variation distance under the assumptions of Result~\ref{res}. In other words,
$$ d_{TV} \left( \dfrac{S_n -n\mu}{\sigma \sqrt{n}}, W(1) \right) \longrightarrow 0.$$
See \cite[Theorem 2.5]{B03} for reference. Therefore, using (\ref{tv}) several times we can write the following.
\begin{align*}
o(1) = d_{TV} \left( \dfrac{S_{n^2} -n^2\mu}{\sigma n}, W(1) \right) &= d_{TV} \left( S_{n^2} -n^2\mu, W(\sigma^2n^2) \right) \\
&= d_{TV} \left( S_{n^2} , W(\sigma^2n^2)+\mu n^2 \right) \\
&= d_{TV} \left( \dfrac{S_{n^2}}{\sqrt{n}} , W(n\sigma^2)+\mu n^{3/2} \right) \\
& \geq d_{TV} \left( \dfrac{S_{n^2}}{\sqrt{n}}(\!\!\!\!\mod 1) , \left(W(n\sigma^2)+\mu n^{3/2}\right)(\!\!\!\!\mod 1) \right).
\end{align*}
In light of part (a), we have
$$ d_{TV}\left( \left(W(n\sigma^2)+\mu n^{3/2}\right)(\!\!\!\!\mod 1), \text{Uniform}(0,1)\right) \longrightarrow 0.$$
Combining the last two equations and using triangle inequality for total variation distance, we complete the proof.
\end{proof}
In fact the same proof works to show that, under the assumptions of Result~\ref{res}, $n^{-1/4}S_{n}(\!\!\!\mod 1)$ converges to the $\text{Uniform}(0,1)$ distribution in total variation distance as $n \to \infty$.
\end{enumerate}
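Both convergence statements above are easy to probe numerically. The Python sketch below is only a sanity check: the sample sizes, the time $t=5$, and the choice $\xi_1 \sim \text{Exponential}(1)$ (square integrable and non-singular, so it satisfies the hypotheses of Result~\ref{res}) are illustrative assumptions. It compares the empirical law of $W(t) \mod 1$ and of $n^{-1/2}S_{n^2} \mod 1$ with the uniform law through a Kolmogorov--Smirnov-type statistic, and prints the analytic bound $1/(2\pi^2 t)$ from part (a) for reference.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def ks_to_uniform(u):
    # sup_x |empirical CDF of u - x| for samples u in [0,1);
    # note this also contains sampling noise of order N^{-1/2}.
    x = np.sort(u)
    n = len(x)
    d_plus = np.max(np.arange(1, n + 1) / n - x)
    d_minus = np.max(x - np.arange(0, n) / n)
    return max(d_plus, d_minus)

t, N = 5.0, 100_000
w_mod = (np.sqrt(t) * rng.standard_normal(N)) % 1.0       # W(t) mod 1
print("KS gap, W(t) mod 1 vs Uniform(0,1) :", ks_to_uniform(w_mod))
print("TV bound 1/(2 pi^2 t) from part (a):", 1.0 / (2 * np.pi ** 2 * t))

n, reps = 25, 10_000
S = rng.exponential(scale=1.0, size=(reps, n * n)).sum(axis=1)
x_mod = (S / np.sqrt(n)) % 1.0                            # n^{-1/2} S_{n^2} mod 1
print("KS gap, n^{-1/2} S_{n^2} mod 1     :", ks_to_uniform(x_mod))
\end{verbatim}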
\begin{thebibliography}{99}
\bibitem{dembo} \url{https://statweb.stanford.edu/~adembo/stat-310b/lnotes.pdf}
\bibitem {B03} Bally, V. and Caramellino, L. (2016). Asymptotic development for the CLT in total variation distance. {\it Bernoulli} {\bf 22(4)}, 2442--2484.
\end{thebibliography}
|
2019-q1 | Suppose you have n boxes, and balls are dropped independently and uniformly at random into these boxes, one per unit time. At any time, say that a box is a "leader" if it has the maximum number of balls among all boxes at that time. Prove that with probability one, every box becomes a leader infinitely many times. | Let $X_i$ denotes the label of the box in which the $i$-th ball has been dropped, for all $i \geq 1$. Then $X_1, \ldots \stackrel{iid}{\sim} \text{Uniform}(\left\{1,\ldots, n\right\})$. Let $A_{k,i}$ be the event that after $i$-th ball has been dropped, $k$-th box is a leader. It is obvious that
$$ A_{k,i} \in \sigma \left( \sum_{j=1}^i \mathbbm{1}(X_j=l) \mid 1 \leq l \leq n\right) \subseteq \mathcal{E}_j, \; \forall \; 1 \leq j \leq i,$$ where $\mathcal{E}_j$ is the $\sigma$-algebra consisting of the events which are invariant under permutations of $(X_1, \ldots, X_j)$. Thus, $\bigcup_{i \geq j} A_{k,i} \in \mathcal{E}_j$ and hence
$$ B_k :=(\text{Box $k$ has been leader i.o.}) = \bigcap_{j \geq 1} \bigcup_{i \geq j} A_{k,i} \in \bigcap_{j \geq 1} \mathcal{E}_j = \mathcal{E},$$ where $\mathcal{E}$ is the exchangeable $\sigma$-algebra for the \textit{i.i.d.} sequence $\left\{X_i\right\}_{i \geq 1}$ and hence is trivial (by \textit{Hewitt-Savage $0-1$ Law}). This implies that $\mathbb{P}(B_k)=0$ or $1$ for all $1 \leq k \leq n$. Using the symmetry of the boxes, it can be concluded that $\mathbb{P}(B_k)=c$ and $c$ does not depend on $k$. On the otherhand, $\bigcup_{k \geq 1} B_k$ has probability $1$ since the total number of boxes is finite. Hence,
$$ 1= \mathbb{P} \left( \bigcup_{k = 1}^n B_k\right) \leq \sum_{k=1}^n \mathbb{P}(B_k) = cn,$$ implying that $c >0$ and hence $c=1$. Thus each box becomes leader infinitely often with probability $1$. |
2019-q2 | (a) Let \( \mathcal{F} \) be the \( \sigma \)-algebra of all subsets of a countable space \( \Omega \). Let \( P(\omega) > 0 \) and \( \sum_{\omega \in \Omega} P(\omega) = 1 \). Show that \( \mathcal{F} \) cannot contain an infinite sequence \( A_1, A_2, \ldots \) of independent subsets each of probability 1/2. (The sets need not be disjoint.)
(b) Now let \( \Omega \) be the unit interval. Let \( \mathcal{F} \) be the Lebesgue-measurable sets and let \( P \) be Lebesgue measure. Show that there does not exist \( X_t : \Omega \rightarrow \{0, 1\} \), measurable, independent with \( P(X_t = 1) = 1/2 \) for all \( t, \ 0 \leq t \leq 1 \). | We have an infinite sequence of independent events $\left\{A_i : i \geq 1\right\}$ on $(\Omega, \mathcal{F}, \mathbb{P})$ such that $\mathbb{P}(A_i) = 1/2$, for all $i \geq 1$. We know $\Omega$ is countable and $\mathcal{F}=\mathcal{P}(\Omega)$. Now since $\sum_{i \geq 1} \mathbb{P}(A_i) = \infty$ and the events are independent, we can employ \textit{Borel-Cantelli Lemma II} and conclude that
$$ \mathbb{P}( A_i \text{ occurs i.o.} ) = 1.$$ Now take any $\omega \in \Omega$ with $\mathbb{P}(\left\{\omega\right\}) >0$. There exists such $\omega$ since $\sum_{\omega^{\prime} \in \Omega} \mathbb{P}(\left\{\omega^{\prime}\right\}) =1$ and $\Omega$ is countable. Then $\omega \in ( A_i \text{ occurs i.o.} ) $. Get a subsequence $\left\{n_l\right\}_{l \geq 1}$ (which depends on $\omega$) such that $\omega \in \bigcap_{l \geq 1} A_{n_l}$. Therefore,
$$ 0 < \mathbb{P}(\left\{\omega\right\}) \leq \mathbb{P} \left(\bigcap_{l \geq 1} A_{n_l} \right) = \lim_{k \to \infty} \mathbb{P} \left(\bigcap_{l = 1}^k A_{n_l} \right) = \lim_{k \to \infty} \prod_{l=1}^k \mathbb{P} \left( A_{n_l} \right) = \lim_{k \to \infty} 2^{-k} = 0, $$ which gives a contradiction. We have $\Omega=[0,1]$, $\mathcal{F}$ to be the Lebesgue $\sigma$-algebra on $\Omega$ and $\mathbb{P}$ is the Lebesgue measure. We have measurable functions $X_t : \Omega \to \left\{0,1\right\}$ which are independent and $\mathbb{P}(X_t=1)=1/2$, for all $0 \leq t \leq 1$. We shall need the following lemma.
\begin{lemma}
$L^2(\Omega, \mathcal{F}, \mathbb{P})$ is separable.
\end{lemma}
\begin{proof}
We shall show that the following collection is a countable dense subset of $L^2(\Omega, \mathcal{F}, \mathbb{P})$.
$$ \mathcal{C} = \left\{ \sum_{i =1}^k q_i \mathbbm{1}_{[a_i,b_i]} \Bigg \rvert q_i \in \mathbb{Q}, a_i \leq b_i \in [0,1]\cap \mathbb{Q}; \forall \; 1 \leq i \leq k \right\}.$$ Clearly $\mathcal{C}$ is countable. We need to show that $ \overline{\mathcal{C}} = L^2(\Omega, \mathcal{F}, \mathbb{P}).$ Since $\mathcal{C}$ is closed under addition, so is $\overline{\mathcal{C}}$. Thus it is enough to show $f \in \overline{\mathcal{C}}$, for any $f \in L^2(\Omega, \mathcal{F}, \mathbb{P})$ with $f \geq 0$. Now we can get non-negative simple functions $\left\{f_n\right\}_{n \geq 1}$ such that $f_n \uparrow f$ pointwise. Since, $0 \leq (f-f_n)^2 \leq f^2$ and $f^2$ is integrable, we use DCT to conclude that $\int (f_n-f)^2 \, d \mathbb{P} \rightarrow 0$, i.e., $f_n$ converges to $f$ in $L^2$. Thus it is enough to show that $f \in \overline{\mathcal{C}}$ for $f$ non-negative simple function. Using the fact that $\overline{\mathcal{C}}$ is closed under addition and $\mathbb{Q}$ is dense in $\mathbb{R}$, it is enough to show that $\mathbbm{1}_A \in \overline{\mathcal{C}}$ for any $A \in \mathcal{F}$. Take any $A \in \mathcal{F}$. Since $\mathcal{F}$ is the completion of $\mathcal{B}_{[0,1]}$, we can find $B,N \in \mathcal{B}_{[0,1]}$, such that $B \subseteq A, A \setminus B \subseteq N$ with $\operatorname{Leb}(N)=0$. Therefore, $\mathbbm{1}_A=\mathbbm{1}_B$ almost everywhere and hence in $L^2$. Thus it is enough to prove that $\mathbbm{1}_A \in \overline{\mathcal{C}}$ for any $A \in \mathcal{B}_{[0,1]}.$
Now we shall apply Good-Set-Principle. Let
$$ \mathcal{L} := \left\{A \in \mathcal{B}_{[0,1]} \mid \mathbbm{1}_A \in \overline{\mathcal{C}}\right\}.$$ Clearly, $\emptyset, \Omega \in \mathcal{L}$ since constant functions taking rational values are in $\mathcal{C}$, by definition. If $A \in \mathcal{L}$, we can get $f_n \in \mathcal{C}$ such that $f_n \stackrel{L^2}{\longrightarrow} \mathbbm{1}_A$. By definition, $1-f_n \in \mathcal{C}$ and $1-f_n \stackrel{L^2}{\longrightarrow} \mathbbm{1}_{A^c}$; thus $A^c \in \mathcal{L}$. If $A_i \in \mathcal{L}$ are disjoint sets, fix $\varepsilon >0$ and take $f_i \in \mathcal{C}$ such that $||\mathbbm{1}_{A_i} - f_i ||_2 \leq 2^{-i-1}\varepsilon.$ Also get $N$ such that $\mathbb{P}\left( \cup_{i > N} A_i\right) = \sum_{i > N} \mathbb{P}(A_i) \leq \varepsilon^2/4.$ This is possible by the disjoint assumption. Since $\mathcal{C}$ is closed under addition, we have $\sum_{i=1}^N f_i \in \mathcal{C}$ and
\begin{align*}
\big \rvert \big \rvert \mathbbm{1}_{\bigcup_{i \geq 1} A_i} - \sum_{i =1}^N f_i \big \rvert \big \rvert_2 \leq \big \rvert \big \rvert \sum_{i \geq 1} \mathbbm{1}_{ A_i} - \sum_{i=1}^N f_i \big \rvert \big \rvert_2 &\leq \sum_{i=1}^N \big \rvert \big \rvert \mathbbm{1}_{ A_i} - f_i \big \rvert \big \rvert_2 + \big \rvert \big \rvert \sum_{i > N} \mathbbm{1}_{ A_i} \big \rvert \big \rvert_2 \\
& \leq \sum_{i=1}^N \varepsilon 2^{-i-1} + \sqrt{\mathbb{P}\left( \cup_{i > N} A_i\right)} \\
&\leq \sum_{i=1}^N \varepsilon 2^{-i-1} + \varepsilon/2 \leq \varepsilon.
\end{align*}
As the above holds for all $\varepsilon >0$, we conclude that $\bigcup_{i \geq 1} A_i \in \mathcal{L}$. Thus $\mathcal{L}$ is a $\lambda$-system containing the $\pi$-system $\mathcal{D} := \left\{[a,b] \mid a \leq b \in \mathbb{Q} \cap [0,1]\right\}$ which generates the $\sigma$-algebra $\mathcal{B}_{[0,1]}$. Hence $\mathcal{L}= \mathcal{B}_{[0,1]}$, which completes the proof.
\end{proof}
Coming back to our original problem, consider a countable dense subset $\mathcal{M} = \left\{f_i : i \geq 1\right\}$ of $L^2(\Omega, \mathcal{F}, \mathbb{P})$. Then,
$$ L^2(\Omega, \mathcal{F}, \mathbb{P}) = \bigcup_{i \geq 1} B(f_i,1/4),$$ where $B(f,r)$ is the open ball of radius $r$ around $f$ in $L^2(\Omega, \mathcal{F}, \mathbb{P})$. Note that,
$$ ||X_t-X_s||_2 = \left[ \mathbb{E}(X_t-X_s)^2\right]^{1/2} = \dfrac{1}{\sqrt{2}}\mathbbm{1}(t \neq s),$$ and hence
$$ \Big \rvert \left\{ t\in [0,1] \mid X_t \in B(f_i,1/4)\right\}\Big \rvert \leq 1, \forall \; i \geq 1.$$ Therefore, the set $\left\{t \in [0,1] \mid X_t \in \bigcup_{i \geq 1} B(f_i, 1/4)\right\}$ is at most countable contradicting that $[0,1]$ is uncountable. |
2019-q3 | Fix \( a \in (0, 1) \). Let \( e_i \) be i.i.d. \( \pm1 \) with probability 1/2. Let \( X_a = \sum_{j=0}^{\infty} a^j e_j \).
(a) Show that for 0 < a < 1/2, \( X_a \) is supported on a set of measure 0.
(b) Show that if \( a = 1/2 \), \( X_a \) is uniformly distributed on \([-1, 1]\).
(c) Find a single value of \( a > 1/2 \) with \( X_a \) having a density (with respect to Lebesgue measure on \( \mathbb{R} \)). Justify your answer. | We shall need the following lemma. We would not need full force of this result, but it may be useful elsewhere.
\begin{lemma}
If $\mu$ and $\nu$ are two probability measures on $(\mathbb{R},\mathcal{B}_{\mathbb{R}})$, then
$$ \operatorname{Supp}(\mu * \nu) = \operatorname{cl}\left(\operatorname{Supp}(\mu) + \operatorname{Supp}(\nu)\right) ,$$ where $*$ denotes convolution and $\operatorname{cl}$ denotes closure.
\end{lemma}
\begin{proof}
Get $X \sim \mu, Y \sim \nu , X \perp \!\!\perp Y$. Then
$$ \mathbb{P} \left(X+Y \notin \operatorname{cl}\left(\operatorname{Supp}(\mu) + \operatorname{Supp}(\nu) \right)\right) \leq \mathbb{P}(X \notin \operatorname{Supp}(\mu)) + \mathbb{P}(Y \notin \operatorname{Supp}(\nu)) =0,$$ implying that $ \operatorname{Supp}(\mu * \nu) \subseteq \operatorname{cl}\left(\operatorname{Supp}(\mu) + \operatorname{Supp}(\nu)\right). $ In order to show the reverse implication, take $x\in \operatorname{Supp}(\mu), y \in \operatorname{Supp}(\nu)$ and any open set $U$ containing $x+y$. Then we can get $\delta >0$ such that $B(x+y, 2\delta) \subseteq U$. Therefore,
$$ \mathbb{P}(X+Y \in U) \geq \mathbb{P}(X+Y \in B(x+y, 2\delta)) \geq \mathbb{P}(X \in B(x,\delta), Y \in B(y,\delta)) = \mathbb{P}(X \in B(x,\delta)) \mathbb{P}( Y \in B(y,\delta)) >0.$$ Thus $\operatorname{Supp}(\mu) + \operatorname{Supp}(\nu) \subseteq \operatorname{Supp}(\mu * \nu)$. Since $\operatorname{Supp}(\mu * \nu)$ is closed, this proves the other direction.
\end{proof}
Now we get back to our original problem. Throughout the solution we take $X_a = \sum_{j \geq 1} a^j \varepsilon_j$ (indexing the sum from $j=1$, which is the normalization under which $X_{1/2}$ is uniform on $[-1,1]$). Let us denote the support of $X_a$ by $B_a$. Note that
$$ X_a = \sum_{j \geq 1} a^j \varepsilon_j = a\varepsilon_1 + a \sum_{j \geq 1} a^j \varepsilon_{j+1} \stackrel{d}{=}a\varepsilon_1+aX_a^{\prime},$$ where $X_a \stackrel{d}{=}X_a^{\prime}$ and $X_a^{\prime} \perp\!\!\perp \varepsilon_1.$ Therefore, using the above lemma, we obtain,
$$ B_a = \operatorname{Supp}(X_a) = \operatorname{cl}\left( \operatorname{Supp}(a\varepsilon_1)+\operatorname{Supp}(aX_a^{\prime})\right)= \operatorname{cl}\left( \left\{-a,+a\right\}+ aB_a\right) = \operatorname{cl}\left((a+aB_a)\bigcup(-a+aB_a)\right).$$ Recall that $B_a$, being support of a probability measure, is a closed set and therefore so is $-a+aB_a$ and $a+aB_a$. Since union of two closed sets is a closed set, we conclude that
$$ B_a = (a+aB_a)\bigcup(-a+aB_a).$$ Now note that $|X_a| \leq \sum_{j \geq 1} a^j =a/(1-a)$ and therefore $B_a \subseteq [-a/(1-a), a /(1-a)].$ This yields that
$$ \sup (-a+aB_a) \leq -a + \dfrac{a^2}{1-a} = \dfrac{-a+2a^2}{1-a} \stackrel{(i)}{<} \dfrac{a-2a^2}{1-a} = a - \dfrac{a^2}{1-a} \leq \inf (a+aB_a),$$ where $(i)$ is true since $0 < a < 1/2$. Thus $(-a+aB_a)$ and $(a+aB_a)$ are mutually disjoint. This gives,
$$ \operatorname{Leb}(B_a) = \operatorname{Leb} \left( (a+aB_a)\bigcup(-a+aB_a) \right) =\operatorname{Leb}(a+aB_a) + \operatorname{Leb}(-a+aB_a) = 2a\operatorname{Leb}(B_a). $$ Since $2a<1$, this implies that $\operatorname{Leb}(B_a)=0.$ We shall use characteristic functions. For any $t \in \mathbb{R}$,
\begin{align*}
\varphi_{X_{1/2}}(t) := \mathbb{E} \left[ \exp(itX_{1/2})\right] = \mathbb{E} \left[ \prod_{j \geq 1} \exp(it2^{-j}\varepsilon_j)\right] &\stackrel{(ii)}{=} \prod_{j \geq 1} \mathbb{E} \left[ \exp(it2^{-j}\varepsilon_j)\right] \\
&= \prod_{j \geq 1} \dfrac{\exp(it2^{-j}) + \exp(-it2^{-j})}{2} \\
&= \prod_{j \geq 1} \cos(t2^{-j}) \\
&= \prod_{j \geq 1} \dfrac{\sin(t2^{-j+1})}{2\sin(t2^{-j})} \\
&= \lim_{n \to \infty} \prod_{j=1}^n \dfrac{\sin(t2^{-j+1})}{2\sin(t2^{-j})} \\
&= \lim_{n \to \infty} \dfrac{\sin(t)}{2^n\sin(t2^{-n})} = \dfrac{\sin t}{t} = \int_{-\infty}^{\infty} \exp(itx)\dfrac{\mathbbm{1}_{[-1,1]}}{2}\, dx,
\end{align*} where $(ii)$ can be justified using DCT.
Therefore, $X_{1/2} \sim \text{Uniform}([-1,1])$. Take $a=1/\sqrt{2}.$ Note that
$$ X_{1/\sqrt{2}} = \sum_{j \geq 1} 2^{-j/2}\varepsilon_j = \sum_{j \geq 1} 2^{-j}\varepsilon_{2j} + \sum_{j \geq 1} 2^{-j+1/2}\varepsilon_{2j-1} = \sum_{j \geq 1} 2^{-j}\varepsilon_{2j} + \sqrt{2}\sum_{j \geq 1} 2^{-j}\varepsilon_{2j-1} \stackrel{d}{=} X^{\prime}_{1/2} + \sqrt{2}X^{\prime \prime}_{1/2},$$ where $ X^{\prime}_{1/2}, X^{\prime \prime}_{1/2}$ are \textit{i.i.d.} copies of $X_{1/2}$ and hence \textit{iid} $\text{Uniform}([-1,1])$ variables. Thus
$$ X_{1/\sqrt{2}} \sim \text{Uniform}([-1,1]) * \text{Uniform}([-\sqrt{2},\sqrt{2}]).$$ Since the convolution of two probability measures having densities w.r.t. Lebesgue measure also has a density with respect to Lebesgue measure, we are done. |
2019-q4 | Let \( (\Omega, \mathcal{F}) \) be a measurable space, and let \( (S, d) \) be a complete separable metric space endowed with its Borel \( \sigma \)-algebra. If \( \{f_n\}_{n \geq 1} \) is a sequence of measurable functions from \( \Omega \) into \( S \), show that the set of all \( \omega \) where \( \lim_{n \to \infty} f_n(\omega) \) exists is a measurable subset of \( \Omega \). | Since $S$ is a complete metric space, any sequence $\left\{x_n \right\}_{n \geq 1}$ in $S$ is convergent if and only if it is Cauchy. Note that the Cauchy condition for the sequence $\left\{x_n \right\}_{n \geq 1}$ can be equivalently written as ``for all $k \geq 1$, there exists $m \geq 1$ such that $d_S(x_n,x_m) \leq 1/k$, for all $n \geq m$''. Therefore,
\begin{align*}
\left\{ \omega : \lim_{n \to \infty} f_n (\omega) \text{ exists } \right\} &= \left\{ \omega : \left\{f_n(\omega)\right\}_{n \geq 1} \text{ is Cauchy }\right\} \\
&= \bigcap_{k \geq 1} \bigcup_{m \geq 1} \bigcap_{n \geq m} \left\{ \omega : d_S(f_n(\omega), f_m(\omega)) < k^{-1} \right\}.
\end{align*} Therefore it is enough to show that $\left\{ \omega : d_S(f_n(\omega), f_m(\omega)) < k^{-1} \right\} \in \mathcal{F}$, for any $m,n,k \geq 1$. Note that $d_S : S \times S \to \mathbb{R}$ is jointly continuous, hence $\mathcal{B}_S \otimes \mathcal{B}_S$-measurable. Each $f_n$ being measurable, we have $(f_n,f_m) : (\Omega, \mathcal{F}) \mapsto (S \times S, \mathcal{B}_S \otimes \mathcal{B}_S )$ is also measurable. Combining them we get $\omega \mapsto d_S(f_n(\omega), f_m(\omega))$ is $\mathcal{F}$-measurable and hence implies our required assertion. |
2019-q5 | Let \( (B_t)_{t \geq 0} \) be standard Brownian motion starting at 0. Prove that with probability one, there does not exist any finite constant \( C \) such that \( |B_t - B_s| \leq C|t - s| \) for all \( 0 \leq s, t \leq 1 \). | We need to show that for BM $\left\{B_t : t \geq 0\right\}$ starting from $0$, we have
$$ \sup_{0 \leq s < t \leq 1 } \dfrac{|B_t-B_s|}{|t-s|} = \sup_{0 \leq s < t \leq 1, s, t\in \mathbb{Q} } \dfrac{|B_t-B_s|}{|t-s|} = \infty, \; \text{almost surely }.$$ The first equality is true since $B$ has continuous sample paths almost surely. Fix $C >0$. Let $A_i := (|B_{2^{-i+1}}-B_{2^{-i}}| \geq C2^{-i}),$ for all $i \geq 1$. By independent increment property of BM, we know that $\left\{A_i\right\}_{i \geq 1}$ is a collection of independent events. Also,
$$ \sum_{i \geq 1} \mathbb{P}(A_i) = \sum_{i \geq 1} \mathbb{P}(|N(0,2^{-i})| \geq C2^{-i}) =2 \sum_{i \geq 1} \bar{\Phi}(C2^{-i/2}) \geq 2 \sum_{i \geq 1} \bar{\Phi}(C) = \infty.$$ Therefore by \textit{Borel-Cantelli Lemma II}, we have
$$ \mathbb{P}(A_i \; \text{i.o.}) = 1,$$ and hence $\mathbb{P}(\bigcup_{i \geq 1} A_i) =1$. Note that on $\bigcup_{i \geq 1} A_i$, we have
$$ \sup_{0 \leq s < t \leq 1 } \dfrac{|B_t-B_s|}{|t-s|} \geq C.$$ Thus with probability $1$,
$$ \sup_{0 \leq s < t \leq 1 } \dfrac{|B_t-B_s|}{|t-s|} \geq C.$$ Since this holds true for any $C>0$, we conclude our required assertion. |
2019-q6 | Let \( \pi \) be a uniformly random permutation of \( \{1, 2, \ldots, n\} \). Let \( Y_i = \# \{i : \pi(i) = i\} \).
(a) Show that \( E(Y_1) = 1 \).
(b) Continuing from part (a) above, throw out \( \{i : \pi(i) = i\} \) and relabel the remaining integers as \( 1, \ldots, n - Y_1 \). Let \( Y_2 \) be the number of fixed points of a uniform random rearrangement of \( \{1, 2, \ldots, n - Y_1\} \). Continue this process, next permuting \( 1, 2, \ldots, n-Y_1-Y_2 \). Stop at the first time \( N \) that \( Y_1 + \cdots + Y_N = n \). Find \( E(N) \); justify your answer. | (a) We have $Y_1 := |\left\{i \mid \pi(i)=i\right\}|$ where $\pi$ is a uniformly chosen random element from $\mathcal{S}_n$, the set of all permutations of $\left\{1, \ldots, n\right\}$. Clearly, for any $1 \leq i,j \leq n$,
\begin{align*}
\mathbb{P}(\pi(i)=j) = \dfrac{\rvert \left\{\nu \in \mathcal{S}_n \mid \nu(i)=j \right\}\rvert }{|\mathcal{S}_n|} = \dfrac{(n-1)!}{n!} = \dfrac{1}{n}.
\end{align*}
So,
$$ \mathbb{E}(Y_1) = \mathbb{E} \left( \sum_{i=1}^n \mathbbm{1}(\pi(i)=i)\right) = \sum_{i=1}^n \mathbb{P}(\pi(i)=i) = \sum_{i=1}^n \dfrac{1}{n} = 1.$$ (b) Since, $Y_k$ is the number of fixed points in a random permutation of $n-\sum_{i=1}^{k-1}Y_i$, we have $\mathbb{E}(Y_k \mid Y_1, \ldots, Y_{k-1})(\omega)=1$, if $Y_1(\omega) + \ldots + Y_{k-1}(\omega) < n$ which is same as $N \geq k$. Define the random variable $X_k$ for all $k \geq 1$ as follows.
$$ X_k = Y_k\mathbbm{1}(N \geq k).$$ By construction, $N := \inf \left\{k \mid \sum_{j=1}^k X_j=n\right\}$ and $\sum_{k \geq 1} X_k =n$. Also from previous discussion we can write the following,
$$ \mathbb{E}(X_k \mid Y_1, \ldots, Y_{k-1}) = \mathbb{E}(Y_k \mathbbm{1}(N \geq k)\mid Y_1, \ldots, Y_{k-1}) = \mathbbm{1}(N \geq k), $$ since $(N \geq k)$ is $\sigma(Y_1, \ldots,Y_{k-1})$-measurable. Thus taking expectations on both side, we can conclude that $\mathbb{E}(X_k) = \mathbb{P}(N \geq k)$. Hence,
$$ \mathbb{E}(N) = \sum_{k \geq 1} \mathbb{P}(N \geq k) = \sum_{k \geq 1} \mathbb{E}(X_k) \stackrel{(\text{MCT})}{=} \mathbb{E} \left[ \sum_{k \geq 1} X_k\right] =n.$$ |
2020-q1 | Let \((\Omega, \mathcal{F}, \mathbb{P})\) be a probability space and let \(X : \Omega \to C[0,1]\) be a measurable map, where \(C[0,1]\) is endowed with the usual sup-norm topology and the Borel \(\sigma\)-algebra generated by it. Let \(f : \mathbb{R} \to \mathbb{R}\) be a bounded measurable function.
(a) Show that the map \(\omega \mapsto \int_0^1 f(X(\omega)(t)) \, dt\) is measurable.
(b) Show that the map \(t \mapsto \int_{\Omega} f(X(\omega)(t)) \, d\mathbb{P}(\omega)\) is measurable. | We have a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and $X : \Omega \to C[0,1]$ is a measurable map, where $C[0,1]$ is equipped with usual sup-norm topology and Borel $\sigma$-algebra. First consider the function $$h:(\Omega \times [0,1], \mathcal{F} \otimes \mathcal{B}_{[0,1]}) \mapsto \left( C[0,1] \times [0,1], \mathcal{B}_{C[0,1]} \otimes \mathcal{B}_{[0,1]}\right), \; \text{defined as } h(\omega, t) := (X(\omega),t), \; \forall \; \omega \in \Omega, \; t \in [0,1].$$ It is easy to see that $h$ is measurable. Take $A \in \mathcal{B}_{C[0,1]}$ and $B \in \mathcal{B}_{[0,1]}$. Then $$ h^{-1}(A \times B) = \left\{ (\omega,t) : X(\omega)\in A, t \in B\right\} = X^{-1}(A) \times B \in \mathcal{F} \otimes \mathcal{B}_{[0,1]}.$$ Since the collection $\left\{ A \times B \mid A \in \mathcal{B}_{C[0,1]}, B \in \mathcal{B}_{[0,1]}\right\}$ generates the $\sigma$-algebra $\mathcal{B}_{C[0,1]} \otimes \mathcal{B}_{[0,1]}$, this proves that $h$ is measurable. Now consider the function $$ h_1 : \left( C[0,1] \times [0,1], \mathcal{B}_{C[0,1]} \otimes \mathcal{B}_{[0,1]}\right) \mapsto (\mathbb{R},\mathcal{B}_{\mathbb{R}}), \text{ defined as } h_1(x,t)=x(t), \; \forall \; x \in C[0,1], t \in [0,1].$$ It is easy to see that $h_1$ is jointly continuous. Take $x_n$ converging to $x$ in $C[0,1]$ and $t_n$ converging to $t$ in $[0,1]$. Then $$ |h_1(x_n,t_n)-h_1(x,t)| = |x_n(t_n)-x(t)| \leq |x_n(t_n)-x(t_n)| + |x(t_n)-x(t)| \leq ||x_n-x||_{\infty} + |x(t_n)-x(t)| \to 0,$$ since $x$ is continuous. Joint continuity of the function $h_1$ implies that it is also measurable. Now take $$ g :(\Omega \times [0,1], \mathcal{F} \otimes \mathcal{B}_{[0,1]}) \mapsto (\mathbb{R},\mathcal{B}_{\mathbb{R}}), \text{ defined as } g(\omega,t) = h_1(h(\omega,t)) = X(\omega)(t), \; \forall \; \omega \in \Omega, t \in [0,1].$$ Since, $g=h_1 \circ h$ and both $h_1, h$ are measurable, we can conclude that $g$ is also measurable. Since, $f :(\mathbb{R}, \mathcal{B}_{\mathbb{R}}) \to (\mathbb{R}, \mathcal{B}_{\mathbb{R}}) $ is bounded measurable, $g_1 := f \circ g$ is also bounded measurable. Let $|f| \leq M < \infty$ and then we have $|g_1| \leq M.$ Therefore, $$ \int_{\Omega \times [0,1]} |g_1(\omega,t)| \, d(\mathbb{P} \otimes \text{Leb}_{[0,1]})(\omega,t) \leq M \int_{\Omega \times [0,1]} d(\mathbb{P} \otimes \text{Leb}_{[0,1]})(\omega,t) \leq \mathbb{P}(\Omega) \text{Leb}_{[0,1]}([0,1])=M < \infty.$$ Therefore, using the proof of \textit{Fubini's Theorem} (see \cite[Theorem 1.4.29]{dembo}), we conclude that $$ t \mapsto \int_{\Omega} g_1(\omega,t)\, d\mathbb{P}(\omega) = \int_{\Omega} f(X(\omega)(t))\, d\mathbb{P}(\omega) \text{ is measurable as a function from } ([0,1], \mathcal{B}_{C[0,1]}) \text{ to } (\mathbb{R}, \mathcal{B}_{\mathbb{R}}),$$ and $$ \omega \mapsto \int_{0}^1 g_1(\omega,t)\, dt = \int_{0}^1 f(X(\omega)(t))\, dt \text{ is measurable as a function from } (\Omega, \mathcal{F}) \text{ to } (\mathbb{R}, \mathcal{B}_{\mathbb{R}}).$$ |
2020-q2 | Let \(X_1, X_2, \ldots\) be a sequence of i.i.d. random variables with mean zero and variance one. For each \(n\), let \(S_n := \sum_{i=1}^n X_i\), with \(S_0 = 0\). Prove that, as \(n \to \infty\), the random variable \(
\frac{1}{n} \sum_{i=1}^n 1_{\{S_i \geq 0\}}\) converges in law to \(
\int_0^1 1_{\{B(t) \geq 0\}} \, dt\),
where \(B\) is standard Brownian motion. (Here \(1_A\) denotes the indicator of an event \(A\), as usual.) | The set-up of this problem is conducive to using \textit{Donsker's Theorem}. The problem comes from the observation that $t \to \mathbbm{1}(t \geq 0)$ is not continuous. So we have to approximate it by continuous functions. We have $X_1, X_2, \ldots$ to be an \textit{i.i.d.} sequence with mean zero and variance $1$. Define, $S_n=\sum_{k=1}^n X_k$ with $S_0=0$. Consider the scaled linearly interpolated process $$ \widehat{S}_n(t) := \dfrac{1}{\sqrt{n}} \left[S_{\lfloor nt \rfloor} + (nt-\lfloor nt \rfloor)X_{\lfloor nt \rfloor +1} \right], \; \forall \; t \in [0,1].$$ By \textit{Donsker's Theorem}, we have $\widehat{S}_n \stackrel{d}{\longrightarrow} B$ on $C[0,1]$, where $B$ is the standard BM on $[0,1]$. Fix $\varepsilon >0$ and define the following two linear approximations of $f(x)=\mathbbm{1}(x \geq 0)$ as follows. $$ f_{\varepsilon,-}(x) := \begin{cases} 0, & \text{if } x < 0, \\ \dfrac{x}{\varepsilon}, & \text{if } 0 \leq x \leq \varepsilon, \\ 1, & \text{if } x > \varepsilon. \end{cases}, \;\; f_{\varepsilon,+}(x) := \begin{cases} 0, & \text{if } x < -\varepsilon, \\ \dfrac{x+\varepsilon}{\varepsilon}, & \text{if } -\varepsilon \leq x \leq 0, \\ 1, & \text{if } x > 0. \end{cases}.$$ Clearly, $f_{\varepsilon,-}, f_{\varepsilon,+}$ are both Lipschitz continuous with Lipschitz constant $\varepsilon^{-1}$; $f_{\varepsilon,-} \leq f \leq f_{\varepsilon,+}$ and $(f_{\varepsilon,+}-f_{\varepsilon,-}) \leq \mathbbm{1}(|x| \leq \varepsilon).$ Recall the fact that for any Lipschitz continuous function $g : \mathbb{R} \to \mathbb{R}$ with Lipschitz constant $L$ and any to $\mathbf{x}, \mathbf{y} \in C[0,1]$, we have $$ \Bigg \rvert \int_{0}^1 g(\mathbf{x}(t))\, dt - \int_{0}^1 g(\mathbf{y}(t))\, dt\Bigg \rvert \leq \int_{0}^1 |g(\mathbf{x}(t))-g(\mathbf{y}(t))|\, dt \leq \int_{0}^1 L|\mathbf{x}(t)-\mathbf{y}(t)|\, dt \leq L||\mathbf{x}-\mathbf{y}||_{\infty},$$ and hence $\mathbf{x} \mapsto \int_{0}^1 g(\mathbf{x}(t))\, dt$ is Lipschitz continuous on $C[0,1]$. Hence, \begin{equation}{\label{firstconv}} \int_{0}^1 f_{\varepsilon,-}(\widehat{S}_n(t))\, dt \stackrel{d}{\longrightarrow} \int_{0}^1 f_{\varepsilon,-}(B(t))\, dt, \; \int_{0}^1 f_{\varepsilon,+}(\widehat{S}_n(t))\, dt \stackrel{d}{\longrightarrow} \int_{0}^1 f_{\varepsilon,+}(B(t))\, dt. \end{equation} On the otherhand, \begin{align*} \Bigg \rvert \dfrac{1}{n} \sum_{k=1}^n f_{\varepsilon,-}(\widehat{S}_n(k/n)) - \int_{0}^1 f_{\varepsilon,-}(\widehat{S}_n(t))\, dt \Bigg \rvert &\leq \sum_{k=1}^n \int_{(k-1)/n}^{k/n} \Bigg \rvert f_{\varepsilon,-}(\widehat{S}_n(k/n)) - f_{\varepsilon,-}(\widehat{S}_n(t)) \Bigg \rvert \, dt \\ & \leq \varepsilon^{-1}\sum_{k=1}^n \int_{(k-1)/n}^{k/n} \Bigg \rvert \widehat{S}_n(k/n) - \widehat{S}_n(t) \Bigg \rvert \, dt \\ &\leq \varepsilon^{-1}\sum_{k=1}^n \int_{(k-1)/n}^{k/n} \dfrac{|X_k|}{\sqrt{n}} \, dt = \dfrac{1}{\varepsilon n^{3/2}}\sum_{k=1}^n |X_k| \stackrel{p}{\longrightarrow} 0, \end{align*} as $n \to \infty$, where the last assertion follows by WLLN. 
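(As a numerical aside before completing the argument: the limit being targeted is the arcsine law, i.e.\ $\int_0^1 \mathbbm{1}(B(t) \geq 0)\, dt \sim \text{Beta}(1/2,1/2)$ with CDF $\frac{2}{\pi}\arcsin\sqrt{x}$, and this is easy to check by simulation. The Python sketch below uses purely illustrative sample sizes and $\pm 1$ steps, which have mean zero and variance one as required.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

n, reps = 2000, 4000
steps = rng.choice([-1.0, 1.0], size=(reps, n))   # i.i.d., mean 0, variance 1
S = np.cumsum(steps, axis=1)
frac = (S >= 0).mean(axis=1)                      # (1/n) * sum_i 1{S_i >= 0}

for x in (0.1, 0.25, 0.5, 0.75, 0.9):
    emp = (frac <= x).mean()
    arcsine = 2.0 / np.pi * np.arcsin(np.sqrt(x))
    print(f"x={x:4.2f}   empirical CDF ~ {emp:.3f}   arcsine CDF = {arcsine:.3f}")
\end{verbatim}
The empirical values agree with the arcsine CDF to within Monte Carlo error.)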
Combining this with (\ref{firstconv}) by \textit{Slutsky's Theorem}, we can conclude that $$ \dfrac{1}{n} \sum_{k=1}^n f_{\varepsilon,-}(\widehat{S}_n(k/n)) \stackrel{d}{\longrightarrow} \int_{0}^1 f_{\varepsilon,-}(B(t))\, dt.$$ Similar argument shows, $$ \dfrac{1}{n} \sum_{k=1}^n f_{\varepsilon,+}(\widehat{S}_n(k/n)) \stackrel{d}{\longrightarrow} \int_{0}^1 f_{\varepsilon,+}(B(t))\, dt.$$ Finally note that $$ \dfrac{1}{n} \sum_{k=1}^n f_{\varepsilon,-}\left(\widehat{S}_n(k/n)\right) \leq \dfrac{1}{n} \sum_{k=1}^n f(\widehat{S}_n(k/n))=\dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}\left(\widehat{S}_n(k/n) \geq 0 \right) = \dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}\left(S_k \geq 0 \right) \leq \dfrac{1}{n} \sum_{k=1}^n f_{\varepsilon,+}\left(\widehat{S}_n(k/n) \right).$$ Fix $y \in \mathbb{R}$. Recall the fact that if $Y_n$ converges in distribution to $Y$, then $$\mathbb{P}(Y<y) \leq \liminf_{n \to \infty}\mathbb{P}(Y_n \leq y) \leq \limsup_{n \to \infty}\mathbb{P}(Y_n \leq y) \leq \mathbb{P}(Y \leq y.$$ Using this fact along with already established assertions, we conclude \begin{align*} \mathbb{P} \left( \int_{0}^1 f_{\varepsilon,+}(B(t))\, dt <y \right) \leq \liminf_{n \to \infty} \mathbb{P} \left( \dfrac{1}{n} \sum_{k=1}^n f_{\varepsilon,+}(\widehat{S}_n(k/n)) \leq y\right) & \leq \liminf_{n \to \infty} \mathbb{P} \left(\dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}\left(S_k \geq 0 \right) \leq y\right) \\ & \leq \limsup_{n \to \infty} \mathbb{P} \left(\dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}\left(S_k \geq 0 \right) \leq y\right) \\ & \leq \limsup_{n \to \infty} \mathbb{P} \left( \dfrac{1}{n} \sum_{k=1}^n f_{\varepsilon,-}(\widehat{S}_n(k/n)) \leq y\right) \\ & \leq \mathbb{P} \left( \int_{0}^1 f_{\varepsilon,-}(B(t))\, dt \leq y \right). \end{align*} The above holds for all $\varepsilon >0$. Let us first prove that $$ \int_{0}^1 f_{\varepsilon,+}(B(t))\, dt - \int_{0}^1 f_{\varepsilon,-}(B(t))\, dt \stackrel{p}{\longrightarrow} 0,$$ as $\varepsilon >0$. Notice that \begin{align*} \mathbb{E} \Bigg \rvert \int_{0}^1 f_{\varepsilon,+}(B(t))\, dt - \int_{0}^1 f_{\varepsilon,-}(B(t))\, dt \Bigg \rvert \leq \mathbb{E} \left[ \int_{0}^1 \mathbbm{1}(|B(t)|\leq \varepsilon) \, dt\right] & \stackrel{(\text{Fubini})}{=} \int_{0}^1 \mathbb{P}(|B(t)| \leq \varepsilon) \, dt \\ & = \int_{0}^1 \int_{-\varepsilon}^{\varepsilon} t^{-1/2}\phi(t^{-1/2}x)\, dx \, dt \\ & \leq C \varepsilon \int_{0}^1 t^{-1/2}\, dt = 2C\varepsilon \longrightarrow 0. \end{align*} Since, $$ \int_{0}^1 f_{\varepsilon,-}(B(t))\, dt \leq \int_{0}^1 f(B(t))\, dt \leq \int_{0}^1 f_{\varepsilon,+}(B(t))\, dt,$$ we have $$ \int_{0}^1 f_{\varepsilon,-}(B(t))\, dt, \int_{0}^1 f_{\varepsilon,+}(B(t))\, dt \stackrel{p}{\longrightarrow} \int_{0}^1 f(B(t))\, dt,$$ as $\varepsilon \downarrow 0$. If $y$ is a continuity point for the distribution of \int_{0}^1 f(B(t))\, dt$, we have $$ \mathbb{P} \left( \int_{0}^1 f_{\varepsilon,-}(B(t))\, dt \leq y\right), \mathbb{P} \left( \int_{0}^1 f_{\varepsilon,+}(B(t))\, dt < y\right) \stackrel{\varepsilon \downarrow 0}{\longrightarrow} \mathbb{P} \left( \int_{0}^1 f(B(t))\, dt \leq y\right),$$ which in turn implies that $$ \mathbb{P} \left(\dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}\left(S_k \geq 0 \right) \leq y\right) \stackrel{n \to \infty}{\longrightarrow} \mathbb{P} \left( \int_{0}^1 f(B(t))\, dt \leq y\right).$$ Therefore, $$ \dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}\left(S_k \geq 0 \right) \stackrel{d}{\longrightarrow} \int_{0}^1 f(B(t))\, dt = \int_{0}^1 \mathbbm{1}(B(t)\geq 0)\, dt.$$ |
2020-q3 | Consider three gamblers initially having \(a, b, c\) dollars. Each trial consists of choosing two players uniformly at random and having them flip a fair coin; they transfer $1 in the usual way. Once players are ruined, they drop out. Let \(S_1\) be the number of games required for one player to be ruined. Let \(S_2\) be the number of games required for two players to be ruined. Find \(\mathbb{E}\{S_1\}\) and \(\mathbb{E}\{S_2\}\). | Let $X_n,Y_n,Z_n$ be the fortunes of those three players respectively after $n$ trials with $X_0=a, Y_0=b, Z_0=c$. $$ S_1 := \inf \left\{ n \geq 0 : \min(X_n, Y_n,Z_n)=0\right\} = \inf \left\{ n \geq 0 : X_nY_nZ_n=0\right\},$$ From the gambling mechanism, it is clear that for all $n \geq 0$, \begin{equation}{\label{dym}} (X_{n+1},Y_{n+1},Z_{n+1}) -(X_n,Y_n,Z_n)\mid \left((X_n,Y_n,Z_n), S_1 >n \right) \sim G, \end{equation} where $G$ is the uniform distribution over the set $$\left\{(1,-1,0), (-1,1,0), (0,1,-1),(0,-1,1),(1,0,-1),(-1,0,1)\right\}.$$ Also define $\mathcal{F}_n := \sigma(X_i,Y_i,Z_i : i \leq n)$. Observe that $X_n+Y_n+Z_n = X_0+Y_0+Z_0=a+b+c$ for all $n \geq 0$. Let $R_n = X_nY_nZ_n$ for $n \geq 1$. First note that $0 \leq R_n \leq (a+b+c)^3$ for all $n \geq 0$ with $R_0=abc$. Then for all $n \geq 0$, using (\ref{dym}) we get almost surely on the event $(S_1>n)$, \begin{align*} \mathbb{E}(R_{n+1} \mid \mathcal{F}_{n}) &=\dfrac{1}{6} \left[(X_n+1)(Y_n-1)+(X_n-1)(Y_n+1)\right]Z_n + \dfrac{1}{6} \left[(Y_n+1)(Z_n-1)+(Y_n-1)(Z_n+1)\right]X_n \\ & \hspace{2 in} + \dfrac{1}{6} \left[(X_n+1)(Z_n-1)+(X_n-1)(Z_n+1)\right]Y_n \end{align*} \begin{align*} & = \dfrac{2(X_nY_n-1)Z_n + 2(Y_nZ_n-1)X_n + 2(X_nZ_n-1)Y_n}{6} \end{align*} \begin{align*} & = \dfrac{6R_n - 2(X_n+Y_n+Z_n)}{6} = R_n - \dfrac{a+b+c}{3}. \end{align*} On the otherhand, on the event $(S_1 \leq n)$, we have $R_n=R_{n+1}=0$ almost surely. Combining them together, we can write \begin{align*} \mathbb{E}(R_{S_1 \wedge (n+1)} + (S_1 \wedge (n+1))\kappa \mid \mathcal{F}_n) &= \mathbbm{1}(S_1 \geq n+1) \mathbb{E}(R_{n+1}+(n+1)\kappa \mid \mathcal{F}_n) + \mathbbm{1}(S_1 \leq n)S_1 \kappa \\ & = \mathbbm{1}(S_1 \geq n+1) (R_n + n \kappa) + \mathbbm{1}(S_1 \leq n)(R_{S_1}+S_1 \kappa) = R_{S_1 \wedge n} + (S_1 \wedge n)\kappa, \end{align*} where $\kappa = (a+b+c)/3.$ Thus $\left\{R_{S_1 \wedge n} + (S_1 \wedge n)\kappa, \mathcal{F}_n, n \geq 0\right\}$ is a MG. We apply OST and conclude that \begin{equation}{\label{eqn1}} \mathbb{E}R_{S_1 \wedge n} + \kappa \mathbb{E}S_1 \wedge n = \mathbb{E}R_0 = abc \Rightarrow \mathbb{E}S_1 \wedge n = \dfrac{abc - \mathbb{E}R_{S_1 \wedge n}}{\kappa} \leq \dfrac{abc}{\kappa}. \end{equation} This shows that $\mathbb{E}S_1 < \infty$; in particular $S_1 < \infty$ with probability $1$ and hence $R_{S_1 \wedge n}$ converges almost surely to $R_{S_1}=0$. Since $R_{S_1 \wedge n}$ is uniformly bounded, we get $\mathbb{E}R_{S_1 \wedge n} \to 0$. Plugging in this observation in (\ref{eqn1}), we conclude that $$ \mathbb{E}(S_1) = \dfrac{abc}{\kappa}=\dfrac{3abc}{a+b+c}.$$ Now, $$ S_2 := \inf \left\{n \geq 0 : \text{ at least two people got ruined after time } n\right\} = \inf \left\{ n \geq 0 : T_n =0\right\},$$ where $T_n=X_nY_n+Y_nZ_n+Z_nX_n,$ for $n \geq 0$. Note that $0 \leq T_n \leq 3(a+b+c)^2$ for all $n \geq 0$. 
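Before analysing $T_n$, here is a quick Monte Carlo sanity check in Python of the formula $\mathbb{E}S_1 = 3abc/(a+b+c)$ just derived; the starting fortunes and the number of replications are illustrative. It assumes the reading of the game in which, as long as all three players are solvent, one of the three pairs is chosen with probability $1/3$ at every trial and a fair coin decides the direction of the one-dollar transfer.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def time_to_first_ruin(a, b, c):
    # Play until some fortune hits 0; while all three are solvent,
    # a pair is chosen uniformly and a fair coin decides the transfer.
    x = [a, b, c]
    pairs = [(0, 1), (1, 2), (0, 2)]
    t = 0
    while min(x) > 0:
        i, j = pairs[rng.integers(3)]
        if rng.random() < 0.5:
            x[i] += 1; x[j] -= 1
        else:
            x[i] -= 1; x[j] += 1
        t += 1
    return t

a, b, c = 3, 4, 5
est = np.mean([time_to_first_ruin(a, b, c) for _ in range(20000)])
print("Monte Carlo E[S_1]:", est, "  formula 3abc/(a+b+c):", 3*a*b*c/(a+b+c))
\end{verbatim}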
From the dynamics of the scheme it is clear that $$ (X_{n+1},Y_{n+1})-(X_n,Y_n) \mid (X_n,Y_n,Z_n) \sim \begin{cases} \text{Uniform} \left( \left\{ (1,-1),(-1,1),(0,1),(0,-1),(1,0),(-1,0)\right\}\right), & \text{if } X_n,Y_n,Z_n >0, \\ \text{Uniform} \left( \left\{ (1,-1),(-1,1)\right\}\right), & \text{if } X_n,Y_n >0, Z_n =0. \end{cases} $$ Hence, almost surely on the event $(X_n,Y_n,Z_n >0)$, we have \begin{align*} \mathbb{E}(X_{n+1}Y_{n+1}\mid \mathcal{F}_n) &= \dfrac{1}{6} \left[(X_n+1)(Y_n-1)+(X_n-1)(Y_n+1)+X_n(Y_n+1)+X_n(Y_n-1)+(X_n+1)Y_n+(X_n-1)Y_n \right] \\ &= X_nY_n - \dfrac{1}{3}, \end{align*} whereas almost surely on the event $(X_n,Y_n >0, Z_n =0)$, we have \begin{align*} \mathbb{E}(X_{n+1}Y_{n+1}\mid \mathcal{F}_n) &= \dfrac{1}{2} \left[(X_n+1)(Y_n-1)+(X_n-1)(Y_n+1) \right] = X_nY_n - 1. \end{align*} Finally, on the event $(X_nY_n=0)$, we have $\mathbb{E}(X_{n+1}Y_{n+1} \mid \mathcal{F}_n)=0=X_nY_n.$ Similar calculations can be done for $Y_nZ_n, Z_nX_n$. Combining all of them we get, \begin{align*} \mathbb{E}(T_{n+1} \mid \mathcal{F}_n) &= (T_n-1) \mathbbm{1}(X_n,Y_n,Z_n >0) + (X_nY_n-1+Y_nZ_n+Z_nX_n)\mathbbm{1}(X_n,Y_n>0, Z_n=0) \ \ & \hspace{.1 in} +(X_nY_n+Y_nZ_n-1+Z_nX_n)\mathbbm{1}(X_n=0, Y_n,Z_n>0) + (X_nY_n+Y_nZ_n+Z_nX_n-1)\mathbbm{1}(X_n,Z_n>0, Y_n=0) \ \ & \hspace{1 in} +T_n \mathbbm{1}(T_n=0) \end{align*} \begin{align*} &= (T_n-1)\mathbbm{1}(S_2 > n) = T_n - \mathbbm{1}(S_2 >n). \end{align*} Arguments, similar to those used earlier, can be employed now to show that $\left\{T_{S_2 \wedge n} + S_2 \wedge n, \mathcal{F}_n, n \geq 0\right\}$ is a MG and $$ \mathbb{E}S_2 = \mathbb{E}T_0 = ab+bc+ca.$$ |
2020-q4 | Poisson and Brown compete in a race according to the following rules. Poisson moves by jumps of size 1 at rate \(\lambda\), while (independently) Brown moves according to standard Brownian motion. Both start at 0, and the winner is the first one to reach the positive integer \(K\).
(a) Find an expression for the probability that Brown wins.
(b) Evaluate your expression when \(K = 1\). | We have a Poisson process with intensity $\lambda$, $\left\{N_t : t \geq 0\right\}$ and independent to that we have a standard Brownian Motion $\left\{W_t : t \geq 0\right\}$ with $N_0=W_0=0$. $$ \tau_K := \inf \left\{ t \geq 0 \mid N_t \geq K\right\}, \; T_K := \inf \left\{ t \geq 0 \mid W_t \geq K\right\},$$ where $K \in \mathbb{N}$. \begin{enumerate}[label=(\alph*)] \item We have to evaluate $\mathbb{P}(T_K < \tau_K)$. Since holding times for Poisson process are \textit{i.i.d.} exponential, we have $\tau_K \sim \text{Gamma}(K,\lambda)$. On the otherhand, $$ q_{K}:= \mathbb{P}(T_K \leq t) = \mathbb{P}(\sup_{0 \leq s \leq t} W_s \geq K) = \mathbb{P}(|W(t)| \geq K) = 2\bar{\Phi} \left( \dfrac{K}{\sqrt{t}}\right).$$ Since the two processes are independent, we have $\tau_K \perp \! \! \perp T_K$ and hence \begin{align*} \mathbb{P}(T_K < \tau_K) &= \int_{0}^{\infty} \mathbb{P}(T_K < t) f_{\tau_K}(t)\, dt \ = \int_{0}^{\infty} \left[ 2\bar{\Phi} \left( \dfrac{K}{\sqrt{t}}\right)\right] \left[ \dfrac{\lambda^K e^{-\lambda t }t^{K-1}}{(K-1)!}\right] \, dt \ \ &= \dfrac{2\lambda^K}{(K-1)!} \int_{0}^{\infty} e^{-\lambda t}t^{K-1} \bar{\Phi} \left( \dfrac{K}{\sqrt{t}}\right) \, dt. \end{align*} \end{enumerate} \item For $K=1$, \begin{align*} q_1 = \mathbb{P}(T_1 < \tau_1) = \int_{0}^{\infty} \mathbb{P}(\tau_1 >t) f_{T_1}(t)\, dt &= \int_{0}^{\infty} \mathbb{P}(\text{Exponential}(\lambda) >t) f_{T_1}(t)\, dt \ \ &= \int_{0}^{\infty} e^{-\lambda t} f_{T_1}(t)\, dt = \mathbb{E}\exp(-\lambda T_1). \end{align*} We know that $\left\{\exp(\theta W_t - \theta^2t/2 ) : t \geq 0\right\}$ is a continuous MG with respect to its canonical filtration for any $\theta > 0$. Also for any $t \geq 0$, $$ 0 \leq \exp(\theta W_{T_1 \wedge t} - \theta^2(T_1 \wedge t)/2) \leq e^{\theta}< \infty,$$ and hence $\left\{ \exp(\theta W_{T_1 \wedge t} - \theta^2(T_1 \wedge t)/2) : t \geq 0\right\}$ is U.I. We now apply \textit{Optional Stopping Theorem} to conclude that $\mathbb{E}\exp(\theta W_{T_1}-\theta^2 T_1/2)=1$. By path-continuity of $W$, we get $W_{T_1}=1$ and hence $\mathbb{E}\exp(-\theta^2T_1/2)=\exp(-\theta).$ Plug in $\theta = \sqrt{2\lambda}$ and conclude $$ q_1 = \mathbb{E}\exp(-\lambda T_1) = \exp(-\sqrt{2\lambda}).$$ |
2020-q5 | Recall that, given two probability measures \(\mu, \nu\) on \(\mathbb{R}\), their bounded Lipschitz distance is defined as
\[d_{BL}(\mu, \nu) := \sup \left\{ \left| \int f \, d\mu - \int f \, d\nu \right| : \|f\|_\infty \leq 1, \|f\|_{\text{Lip}} \leq 1 \right\},\]
where \(\|f\|_{\text{Lip}}\) is the Lipschitz modulus of \(f\). This distance metrizes weak convergence (which we will denote by \(\Rightarrow\)).
Let \((x_n)_{n \geq 1}, x_n \in \mathbb{R}^n\) be a sequence of deterministic vectors \(x_n = (x_{n,1}, \ldots, x_{n,n})\) indexed by \(n \in \mathbb{N}\), and \((Z_i)_{i \geq 1}\) be a sequence of i.i.d. standard Gaussian random variables. Finally, denote by \(\gamma_m\), the Gaussian measure on \(\mathbb{R}\) with mean \(m\) and variance \(v\). Define the three sequences of probability measures
\[\mu_n := \frac{1}{n} \sum_{i=1}^n \delta_{x_{i,n}}, \quad \nu_n^{(1)} := \frac{1}{n} \sum_{i=1}^n \delta_{x_{i,n} + Z_i}, \quad \nu_n^{(2)} := \frac{1}{n} \sum_{i=1}^n \gamma_{x_{i,n}, 1}.\]
Note that \(\nu_n^{(1)}\) is a random measure.
(a) Assume that \(\mu_n \Rightarrow \mu_\infty\). Prove that \( \lim_{n \to \infty} d_{BL}(\nu_n^{(1)}, \nu_n^{(2)}) = 0 \) almost surely.
(b) Assume now a weaker condition to hold: \(\{\mu_n\}_{n \geq 1}\) is a uniformly tight family. Prove that \(\lim_{n \to \infty} d_{BL}(\nu_n^{(1)}, \nu_n^{(2)}) = 0\) almost surely. Give an example in which this happens (i.e., \(\{\mu_n\}_{n \geq 1}\) is tight), but neither \(\nu_n^{(1)}\) nor \(\nu_n^{(2)}\) converge weakly as \(n \to \infty\). Justify your claim.
(c) Give an example in which \(\{\mu_n\}_{n \geq 1}\) is not tight, but still \(\lim_{n \to \infty} d_{BL}(\nu_n^{(1)}, \nu_n^{(2)}) = 0 \) almost surely. Justify your claim.
(d) Give an example in which \(\mathbb{P}(\lim_{n \to \infty} d_{BL}(\nu_n^{(1)}, \nu_n^{(2)}) = 0) = 0 \). Justify your claim. | We have a distance on the space of probability measures on $(\mathbb{R}, \mathcal{B}_{\mathbb{R}})$. We shall denote this space by $\mathcal{P}$. Define, $$ \mathcal{F} := \left\{ f : \mathbb{R} \to \mathbb{R} \text{ Borel measurable} \mid ||f||_{\infty} \leq 1 , ||f||_{\text{Lip}} \leq 1\right\}, \; d_{BL}(\mu, \nu) := \sup \left\{ \Bigg \rvert \int f \, d \mu - \int f \, d\nu\Bigg \rvert\ : f \in \mathcal{F} \right\}, \; \forall \; \mu, \nu \in \mathcal{P}.$$ $d_{BL}$ is clearly a metric on $\mathcal{P}$. For the sake of completeness, I am adding a proof of the fact that it metrizes weak convergence. We have to show that for any sequence of probability measures $\left\{\mu_n : 1 \leq n \leq \infty\right\}$ in $\mathcal{P}$, we have $$ \mu_n \stackrel{d}{\longrightarrow} \mu_{\infty} \iff d_{BL}(\mu_n,\mu_{\infty}) \longrightarrow 0, \; \text{ as } n \to \infty.$$ First suppose that $d_{BL}(\mu_n,\mu_{\infty}) \longrightarrow 0.$ This implies that $\mu_n(f) \to \mu_{\infty}(f)$ for all $f \in \mathcal{F}.$ Take any closed subset $C$ of $\mathbb{R}$ and fix $\varepsilon \in (0,1)$. Introduce the notation $d(x, C) := \inf_{y \in C} |x-y|$, for all $x \in \mathbb{R}$. The function $x \mapsto d(x, C)$ is continuous with Lipschitz modulus $1$ and therefore the function $f_{\varepsilon}:\rac\mathbb{R} \to \mathbb{R}$ defined as $f_{\varepsilon}(x) = \varepsilon \left( 1 - \varepsilon^{-1}d(x,C) \right)_{+}$ is continuous with Lipschitz modulus $1$. Also observe that $||f_{\varepsilon}||_{\infty} \leq \varepsilon \leq 1$ and thus $f_{\varepsilon} \in \mathcal{F}$. Since $f_{\varepsilon} \geq 0$ with $f_{\varepsilon} \rvert_C \equiv \varepsilon$, we conclude that \begin{align*} \varepsilon \limsup_{n \to \infty} \mu_n(C) \leq \limsup_{n \to \infty} \mu_n(f_{\varepsilon}) = \mu_{\infty}(f_{\varepsilon}), \forall \; \varepsilon \in (0,1). \end{align*} Observe that $C$ being closed, $d(x,C) >0$ for $x \notin C$ and hence $\varepsilon^{-1}f_{\varepsilon}(x) = \mathbbm{1}(d(x,C)=0) = \mathbbm{1}(x \in C),$ as $\varepsilon \downarrow 0$. Since $0 \leq \varepsilon^{-1}f_{\varepsilon} \leq 1$, we use DCT and conclude that $$ \limsup_{n \to \infty} \mu_n(C) \leq \limsup_{\varepsilon \downarrow 0} \mu_{\infty}(\varepsilon^{-1}f_{\varepsilon}) = \mu_{\infty}(\mathbbm{1}_C)=\mu_{\infty}(C).$$ Applying \textit{Portmanteau Theorem}, we conclude that $\mu_n \stackrel{d}{\longrightarrow}\mu_{\infty}.$ Now suppose that $\mu_n \stackrel{d}{\longrightarrow}\mu_{\infty}.$ Fix $\varepsilon >0$. For every $x \in \mathbb{R}$, take a open neighbourhood $U_x$ around $x$ of diameter at most $\varepsilon$ such that $\mu_{\infty}(\partial U_x)=0$. Since $\left\{U_x : x \in \mathbb{R}\right\}$ is a open cover of $\mathbb{R}$, we can get a countable sub-cover of it, say $\left\{U_{x_i} : i \geq 1\right\}$. Here we have used the fact that any open cover of $\mathbb{R}$ has a countable sub-cover. Set $V_1=U_{x_1}$ and recursively define $V_j := U_{x_j} \setminus \bigcup_{i=1}^{j-1} V_i$, for all $j\geq 2$. Clearly $V_j$'s are Borel-measurable $\left\{V_j : j \geq 1\right\}$ forms a partition of $\mathbb{R}$. Without loss of generality, we can assume that for all $i$, $V_i \neq \emptyset$ (otherwise just throw away that $V_i$) and fix $y_i \in V_i$. Recall the fact that for any two subsets $A$ and $B$ of $\mathbb{R}$, we have $\partial (A \cap B) \subseteq (\partial A) \cup (\partial B)$. 
Using this many times we get $$ \mu_{\infty}(\partial V_n) =\mu_{\infty}\left(\partial \left(U_{x_n} \bigcap \left(\bigcap_{i=1}^{n-1} V_i^c\right) \right) \right)\leq \mu_{\infty} \left( (\partial U_{x_n}) \bigcup \left( \bigcup_{i=1}^{n-1} \partial V_i^c \right)\right) \leq \mu_{\infty}(\partial U_{x_n}) + \sum_{i=1}^{n-1} \mu_{\infty}(\partial V_i) = \sum_{i=1}^{n-1} \mu_{\infty}(\partial V_i).$$ Using the fact that $\mu_{\infty}(\partial V_1)=\mu_{\infty}(\partial U_{x_1})=0$, we apply induction to conclude that $\mu_{\infty}(\partial V_n)=0$ for all $n \geq 1$ and hence $\mu_n(V_j) \to \mu_{\infty}(V_j)$ for all $j \geq 1$. Now we define a probability measure $\nu_n$ which approximates $\mu_n$ as follows. $$ \nu_n = \sum_{j \geq 1} \mu_n(V_j) \delta_{y_j}, \;\, \forall \; 1 \leq n \leq \infty.$$ Take $f \in \mathcal{F}$ and notice that $|f(x)-f(y_j)| \leq |x-y_j| \leq \operatorname{Diam}(V_j) \leq \varepsilon$, for all $x \in V_j, j \geq 1$. Hence, for all $1 \leq n \leq \infty$, \begin{align*} \Bigg \rvert \int f\, d\mu_n - \int f\, d\nu_n\Bigg \rvert &\leq \sum_{j \geq 1} \Bigg \rvert \int_{V_j} f\, d\mu_n - \int_{V_j} f\, d\nu_n\Bigg \rvert = \sum_{j \geq 1} \Bigg \rvert \int_{V_j} f\, d\mu_n - f(y_j)\nu_n(V_j)\Bigg \rvert \\ &\leq \sum_{j \geq 1} \int_{V_j} |f(x)-f(y_j)| \, d\mu_n(x) \\ &\leq \varepsilon \sum_{j \geq 1} \mu_n(V_j) = \varepsilon. \end{align*} Therefore, \begin{align*} \Bigg \rvert \int f\, d\mu_n - \int f\, d\mu_{\infty}\Bigg \rvert &\leq \Bigg \rvert \int f\, d\mu_n - \int f\, d\nu_n\Bigg \rvert + \Bigg \rvert \int f\, d\mu_{\infty} - \int f\, d\nu_{\infty}\Bigg \rvert + \Bigg \rvert \int f\, d\nu_n - \int f\, d\nu_{\infty}\Bigg \rvert \\ & \leq 2 \varepsilon + \sum_{j \geq 1} |\mu_n(V_j)-\mu_{\infty}(V_j)|, \end{align*} which implies $$ d_{BL}(\mu_n, \mu_{\infty}) \leq 2 \varepsilon + \sum_{j \geq 1}|\mu_n(V_j)-\mu_{\infty}(V_j)|.$$ We have $\mu_n(V_j) \to \mu_{\infty}(V_j)$ for all $j$ and $\sum_{j \geq 1} \mu_n(V_j) =1 = \sum_{j \geq 1} \mu_{\infty}(V_j)$ for all $n$. Apply \textit{Scheffe's Lemma} to conclude that $\sum_{j \geq 1}|\mu_n(V_j)-\mu_{\infty}(V_j)| \to 0$. This gives $\limsup_{n \to \infty} d_{BL}(\mu_n, \mu_{\infty}) \leq 2\varepsilon$. Take $\varepsilon \downarrow 0$ and finish the proof. Now we return to our original problem. $$ \mu_n = \dfrac{1}{n} \sum_{i=1}^n \delta_{x_{i,n}}, \; \nu_n^{(1)} = \dfrac{1}{n}\sum_{i=1}^n \delta_{x_{i,n}+Z_i}, \; \nu_n^{(2)} = \dfrac{1}{n}\sum_{i=1}^n \gamma_{x_{i,n},1},$$ where $Z_i$'s are i.i.d. standard Gaussian variables and $\gamma_{m,v}$ denotes the Gaussian measure on $\mathbb{R}$ with mean $m$ and variance $v$. \begin{enumerate}[label=(\alph*)] \item We have $\mu_n \stackrel{d}{\longrightarrow} \mu_{\infty}.$ We need to show that $d_{BL}(\nu_n^{(1)}, \nu_n^{(2)}) \to 0$ almost surely. \begin{itemize} \item[\textbf{Step 1:}] Fix any $h : \mathbb{R} \to \mathbb{C}$ bounded measurable.
Note that $$ \nu_n^{(1)}(h) = \dfrac{1}{n}\sum_{j =1}^n h(x_{j,n}+Z_j), \; \nu_n^{(2)}(h) = \dfrac{1}{n}\sum_{j =1}^n \gamma_{x_{j,n},1}(h) = \dfrac{1}{n}\sum_{j =1}^n \mathbb{E}h(x_{j,n}+Z_j).$$ Since, $h$ is bounded, we apply Proposition~{\ref{arrayas}} (on real and imaginary parts separately) to conclude that $$\nu^{(1)}_n(h) -\nu^{(2)}_n(h)= \dfrac{1}{n}\sum_{k=1}^n \left(h(x_{j,n}+Z_j) - \mathbb{E}h(x_{j,n}+Z_j) \right) \to 0,\; \text{almost surely}.$$ \item[\textbf{Step 2:}] $\mu_n \stackrel{d}{\longrightarrow} \mu_{\infty}$ implies that $$ \dfrac{1}{n} \sum_{j =1}^n f(x_{j,n}) = \mu_n(f) \to \mu_{\infty}(f)\, \; \forall \; f \in \mathcal{F}_b,$$ where $\mathcal{F}_b$ is the collection of all bounded continuous functions on $\mathbb{R}$. Fix one such $f$. Let $g_f$ be defined as $g_f(x):=\mathbb{E}f(x+Z)$ where $Z \sim N(0,1)$. Note that $$ |g_f(x)| \leq \mathbb{E}|f(x+Z)| \leq ||f||_{\infty} \leq 1.$$ On the otherhand, take a sequence $x_n$ of real numbers converging to $x$. Then $f(x_n+Z)$ converges almost surely to $f(x+Z)$ and using boundedness of $f$ and DCT, we can conclude that $g_f(x_n)=\mathbb{E}f(x_n+Z) \to \mathbb{E}f(x+Z)=g_f(x).$ Thus $g_f \in \mathcal{F}_b$ and hence $$ \nu_n^{(2)}(f) = \dfrac{1}{n}\sum_{j =1}^n \gamma_{x_{i,n},1}(f) = \dfrac{1}{n}\sum_{j =1}^n \mathbb{E}f(x_{i,n}+Z) = \dfrac{1}{n}\sum_{j =1}^n g_f(x_{i,n}) =\mu_n(g_f) \longrightarrow \mu_{\infty}(g_f) = \int \mathbb{E}f(x+Z)\, d\mu_{\infty}(x). $$ Define $\nu = \mu_{\infty} * \gamma_{0,1}$ where $*$ denotes convolution. Then by \textit{Fubini's Theorem}, we have $$ \nu_n^{(2)}(f) \to \int \mathbb{E}f(x+Z)\, d\mu_{\infty}(x) = \nu(f), \;\, \forall \; f \in \mathcal{F}_b.$$ This implies $\nu_n^{(2)}$ converges weakly to $\nu$ and hence $d_{BL}(\nu_n^{(2)}, \nu) \to 0$. \item[\textbf{Step 3 :}] In this step, we shall show that $\left\{\nu_n^{(1)}(\omega) : n \geq 1\right\}$ is a tight family, almost surely. Let $$ C_{M} := \left\{ \omega : \nu_n^{(1)}([-M,M])(\omega) -\nu_n^{(2)}([-M,M]) \to 0, \; \text{ as } n \to \infty\right\}, \;\forall \; M \in \mathbb{N}.$$ We have $\mathbb{P}(C_M)=1$ for all $M$(by Step 1) and hence $\mathbb{P}(C)=1$ where $C=\bigcap_{M \geq 1} C_M$. On the event $C$, we have $\nu_n^{(1)}([-M,M])(\omega) -\nu_n^{(2)}([-M,M]) \to 0$ for all $M$. Apply Lemma~{\ref{tight}} with the aid of the fact that $\left\{\nu_n^{(2)} : n \geq 1\right\}$ is a tight family (which is an implication of what we proved in Step 2) and conclude that on the event $C$, $\left\{\nu_n^{(1)}(\omega) : n \geq 1\right\}$ is a tight family. This completes this step. \item[\textbf{Step 4 :}] Apply Lemma~{\ref{chac}} to get such a $\mathcal{G}$. Let $$ A_g := \left\{ \omega : \nu_n^{(1)}(g)(\omega)-\nu_n^{(2)}(g)\to 0, \; \text{ as } n \to \infty\right\}.$$ From Step 1, $\mathbb{P}(A_g)=1$ for all $g \in \mathcal{G}$ and hence $\mathbb{P}(A)=1$ where $A=\bigcap_{g \in \mathcal{G}} A_g.$ Take $\omega \in A \cap C$, where $C$ is as in Step 3. We shall show that $d_{BL}(\nu_n^{(1)}(\omega), \nu_n^{(2)})\to 0$. Take a subsequence $\left\{n_k : k \geq 1\right\}$. Since $\left\{\nu_n^{(1)}(\omega) : n \geq 1\right\}$ and $\left\{\nu_n^{(2)} : n \geq 1\right\}$ are both tight families, we can get a further subsequence $\left\{n_{k_l} : l \geq 1\right\}$ such that $\nu_{n_{k_l}}^{(1)}(\omega)$ converges weakly to $P$ and $\nu_{n_{k_l}}^{(2)}$ converges weakly to $Q$, as $l \to \infty$, where $P$ and $Q$ are two probability measures. 
Therefore, $g(\nu_{n_{k_l}}^{(1)}(\omega))\to g(P)$ and $g(\nu_{n_{k_l}}^{(2)})\to g(Q)$, for all $g \in \mathcal{G}$. But $\omega \in A$ implies that $g(\nu_{n_{k_l}}^{(1)}(\omega)) - g(\nu_{n_{k_l}}^{(2)})\to 0$, for all $g \in \mathcal{G}$. Hence $g(P)=g(Q)$ for all $g \in \mathcal{G}$ and hence by Lemma~{\ref{chac}}, $P=Q$. Since $d_{BL}$ metrizes weak convergence, we have $d_{BL}(\nu_{n_{k_l}}^{(1)}(\omega), \nu_{n_{k_l}}^{(2)})\to 0$. Thus any subsequence of $\left\{d_{BL}(\nu_n^{(1)}(\omega), \nu_n^{(2)}) : n \geq 1 \right\}$ has a further subsequence which converges to $0$. This proves the assertion and completes the proof, since $\mathbb{P}(A \cap C)=1$. \end{itemize} \item Assume $\left\{\mu_n : n \geq 1\right\}$ is a tight family. We need to prove $d_{BL}(\nu_n^{(1)}, \nu_n^{(2)}) \to 0$ almost surely. Note that the only place in part (a) where we have used the assumption that $\mu_n \stackrel{d}{\longrightarrow} \mu_{\infty}$ is in Step 2, and all we have used from what was established in Step 2 is the fact that $\left\{\nu_n^{(2)} : n \geq 1\right\}$ is a tight family. So it is enough to prove that $\left\{\nu_n^{(2)} : n \geq 1\right\}$ is still a tight family under the weakened condition. We shall employ \textit{Prokhorov's Theorem} and shall show that any subsequence of $\left\{\nu_n^{(2)} : n \geq 1\right\}$ has a further convergent subsequence. Take any subsequence $\left\{n_k : k \geq 1\right\}$. Since $\left\{\mu_n : n \geq 1\right\}$ is tight, apply \textit{Prokhorov's Theorem} to conclude that there exists a further subsequence $\left\{n_{k_l} : l \geq 1\right\}$ such that $\mu_{n_{k_l}} \stackrel{d}{\longrightarrow} \mu^{\prime}$ as $l \to \infty$. Now repeat Step 2 of part (a) with this subsequence and conclude that $\nu_{n_{k_l}}^{(2)} \stackrel{d}{\longrightarrow} \mu^{\prime} * \gamma_{0,1}$ as $l \to \infty$. This proves the claim made in the previous paragraph. To give an example of this situation, take $x_{i,n}=(-1)^n$ for all $1 \leq i \leq n.$ In this case $\mu_n = \delta_{(-1)^n}$ for all $n$ and hence $\left\{\mu_n : n \geq 1\right\}$ is clearly a tight family. Now $\nu_n^{(2)} = \gamma_{(-1)^n,1}$ and hence does not converge weakly. If $\nu_n^{(1)}$ converges weakly to $\nu^{(1)}$ (a random measure) almost surely, then we have $d_{BL}(\nu_n^{(1)},\nu^{(1)}) \to 0$, almost surely. This would imply, in light of what we have proven previously in (b), that $d_{BL}(\nu_n^{(2)},\nu^{(1)}) \to 0$, almost surely. This contradicts our observation that $\nu_n^{(2)}$ (a sequence of non-random measures) does not converge weakly. \item Take $x_{i,n}=n$ for all $1 \leq i \leq n$. Then $\mu_n = \delta_n$ and thus clearly does not form a tight family. Also take $x^{\prime}_{i,n}=0$ for all $1 \leq i \leq n$, and define $\mu^{\prime}_n = n^{-1}\sum_{i=1}^n \delta_{x^{\prime}_{i,n}} = \delta_0.$ Define $$ \nu_n^{(1)} = \dfrac{1}{n}\sum_{i=1}^n \delta_{x_{i,n}+Z_i} = \dfrac{1}{n}\sum_{i=1}^n \delta_{n+Z_i}, \; \nu_n^{(2)}= \dfrac{1}{n}\sum_{i=1}^n \gamma_{x_{i,n},1} = \gamma_{n,1},$$ and $$ \nu_n^{\prime(1)} = \dfrac{1}{n}\sum_{i=1}^n \delta_{x^{\prime}_{i,n}+Z_i} = \dfrac{1}{n}\sum_{i=1}^n \delta_{Z_i}, \; \nu_n^{\prime(2)}= \dfrac{1}{n}\sum_{i=1}^n \gamma_{x^{\prime}_{i,n},1} = \gamma_{0,1}.$$ Observe that if $f \in \mathcal{F}$, then so is $f_n$ defined as $f_n(x)=f(n+x)$ for any $n$.
Also note $$ \int f \, d\nu_n^{(1)} = \dfrac{1}{n}\sum_{i=1}^n f(n+Z_i) = \dfrac{1}{n}\sum_{i=1}^n f_n(Z_i) = \int f_n \, d\nu_n^{\prime (1)}, \quad \int f \, d\nu_n^{(2)} = \mathbb{E} f(n+Z) = \mathbb{E} f_n(Z) = \int f_n \, d\nu_n^{\prime (2)},$$ and therefore $ d_{BL}(\nu_n^{(1)},\nu_n^{(2)}) \leq d_{BL}(\nu_n^{\prime(1)},\nu_n^{\prime(2)}).$ The latter term goes to $0$ almost surely by part (a) (since $\mu^{\prime}_n= \delta_0$ for all $n$) and hence so does the former. \item Set $x_{j,n}=3j^2$ for all $1 \leq j \leq n$ and consider the intervals $I_j = (3j^2 \pm j)$. The intervals are clearly mutually disjoint. Note that \begin{align*} \sum_{j \geq 1} \mathbb{P}(3j^2+Z_j \notin I_j) = \sum_{j \geq 1} \mathbb{P}(|Z_j|\geq j) \leq \sum_{j \geq 1} 2 \exp(-j^2/2)<\infty, \end{align*} and hence $\mathbb{P}(D)=1$, where $D:=(3j^2+Z_j \in I_j, \; \text{eventually }\forall \;j )$. Consider the functions $\varphi_{t} : \mathbb{R} \to \mathbb{R}$ defined as $\varphi_t(x)=\sin(x-t)\mathbbm{1}(|x-t| \leq \pi)$. Clearly $\varphi_t \in \mathcal{F}$ for all $t \in \mathbb{R}$. Since the intervals $[3j^2 \pm \pi]$ are mutually disjoint, the following (random) function is well-defined: $$ g(x) := \sum_{j \geq 1} \operatorname{sgn}(Z_j)\varphi_{3j^2}(x), \; \forall \; x \in \mathbb{R}.$$ The fact that the intervals $[3j^2 \pm \pi]$ are mutually disjoint also implies that $g \in \mathcal{F}$. Finally, the following observation about $g$ is important: if $x \in I_j$ and $j \geq 4$, then $g(x)=\operatorname{sgn}(Z_j)\sin(x-3j^2)\mathbbm{1}(|x-3j^2| \leq \pi)$. Now note that $\nu_n^{(2)} = n^{-1}\sum_{j =1}^n \gamma_{3j^2,1}$. By the previous observation, for all $j \geq 4$, \begin{align*} \int g \; d\gamma_{3j^2,1} &= \int_{I_j} g \; d\gamma_{3j^2,1} + \int_{I_j^c} g \; d\gamma_{3j^2,1} \\ &= \operatorname{sgn}(Z_j) \int_{I_j} \sin(x-3j^2)\mathbbm{1}(|x-3j^2| \leq \pi)\, d\gamma_{3j^2,1}(x) + \int_{I_j^c} g \; d\gamma_{3j^2,1} \\ &= \operatorname{sgn}(Z_j) \mathbb{E}\sin(Z)\mathbbm{1}(|Z| \leq \pi) + \int_{I_j^c} g \; d\gamma_{3j^2,1} \\ &= \int_{I_j^c} g \; d\gamma_{3j^2,1}. \end{align*} Since $||g||_{\infty} \leq 1$, we get $$ \Bigg \rvert \int g \; d\gamma_{3j^2,1} \Bigg \rvert \leq \mathbb{P}(3j^2 + Z \notin I_j) \leq 2 \exp(-j^2/2),$$ and therefore, for all $n \geq 1$, $$ \Bigg \rvert \int g \; d\nu_n^{(2)} \Bigg \rvert \leq \dfrac{1}{n}\sum_{j =1}^n 2\exp(-j^2/2) + 3/n \leq C/n,$$ for a finite real constant $C$. On the other hand, for any $\omega \in D$, $g(3j^2+Z_j) = \operatorname{sgn}(Z_j)\sin(Z_j)\mathbbm{1}(|Z_j| \leq \pi)$, for all large enough $j$, say for $j \geq N(\omega)$. Then for all $n \geq N(\omega)$, \begin{align*} d_{BL}(\nu_n^{(1)},\nu_n^{(2)}) \geq \int g \; d\nu_n^{(1)} - \int g \; d\nu_n^{(2)} &\geq \dfrac{1}{n} \sum_{j=1}^n g(x_{j,n}+Z_j) - \dfrac{C}{n} \\ &= \dfrac{1}{n} \sum_{j=1}^n g(3j^2+Z_j) - \dfrac{C}{n} \\ &=\dfrac{1}{n}\sum_{j=1}^{N(\omega)-1} g(3j^2+Z_j) + \dfrac{1}{n}\sum_{j=N(\omega)}^{n} |\sin(Z_j(\omega))|\mathbbm{1}(|Z_j(\omega)| \leq \pi) -\dfrac{C}{n}.
\end{align*} Taking $n \to \infty$ and recalling that $\mathbb{P}(D)=1$, we conclude that almost surely $$ \liminf_{n \to \infty} d_{BL}(\nu_n^{(1)},\nu_n^{(2)}) \geq \liminf_{n \to \infty} \dfrac{1}{n}\sum_{j=1}^n |\sin(Z_j)|\mathbbm{1}(|Z_j| \leq \pi) = \mathbb{E}\left(|\sin(Z)|\mathbbm{1}(|Z| \leq \pi) \right) >0,$$ and hence $\mathbb{P} \left( \lim_{n \to \infty}d_{BL}(\nu_n^{(1)},\nu_n^{(2)}) =0 \right)=0.$ \end{enumerate} \begin{proposition} If $\mathcal{P}$ is a (non-empty) semi-algebra on the set $\Omega$, then $$ \mathcal{A} := \left\{ A \; \Bigg \rvert \; A = \bigcup_{i=1}^n A_i, \; A_1, A_2, \ldots, A_n \in \mathcal{P} \text{ mutually disjoint }, n \geq 0 \right\}$$ is an algebra. \end{proposition} \begin{proof} We check the conditions for being an algebra one-by-one. \begin{enumerate}[label=(\Alph*)] \item Clearly $\emptyset \in \mathcal{A}$ (taking $n=0$). Since $\mathcal{P}$ is non-empty, get $A \in \mathcal{P}$ and $A_1,\ldots,A_n \in \mathcal{P}$, mutually disjoint, such that $A^c=\bigcup_{i=1}^n A_i.$ Setting $A_0=A$, we get $\left\{A_i : 0 \leq i \leq n\right\}$ to be a collection of mutually disjoint elements of $\mathcal{P}$ and their union is $\Omega$. Hence $\Omega \in \mathcal{A}.$ \item We want to show that $\mathcal{A}$ is closed under finite intersections. It is enough to show that for $A,B \in \mathcal{A}$, we have $A \cap B \in \mathcal{A}.$ Take representations of $A$ and $B$ as $$ A = \bigcup_{i=1}^n A_i, \; B = \bigcup_{j=1}^m B_j,$$ where $\left\{A_i : 1 \leq i \leq n\right\}$ is a mutually disjoint collection in $\mathcal{P}$ and so is $\left\{B_j : 1 \leq j \leq m\right\}.$ Clearly, $$\left\{C_{i,j}:= A_i \cap B_j : 1 \leq i \leq n, 1 \leq j \leq m\right\}$$ is also a mutually disjoint collection in $\mathcal{P}$ (since $\mathcal{P}$ is closed under intersection) and their union is $$A \cap B = \left( \bigcup_{i=1}^n A_i \right) \bigcap \left( \bigcup_{j=1}^m B_j \right) = \bigcup_{i,j} C_{i,j},$$ and thus $A \cap B \in \mathcal{A}.$ \item We want to show that $\mathcal{A}$ is closed under taking complements. By virtue of $\mathcal{P}$ being a semi-algebra, we have $A^c \in \mathcal{A}$ for all $A \in \mathcal{P}$. Now take $A_1, \ldots, A_n \in \mathcal{P}$, mutually disjoint. We want to show that $(\bigcup_{i=1}^n A_i)^c \in \mathcal{A}$. But $(\bigcup_{i=1}^n A_i)^c = \bigcap_{i=1}^n A_i^c$ and we already showed that $\mathcal{A}$ is closed under finite intersections. Hence, $(\bigcup_{i=1}^n A_i)^c \in \mathcal{A}.$ \item Finally we want to show that $\mathcal{A}$ is closed under finite unions. It is enough to show that for $A,B \in \mathcal{A}$, we have $A \cup B \in \mathcal{A}.$ Write $$ A \cup B = (A \cap B) \bigcup (A \cap B^c) \bigcup (A^c \cap B),$$ and note that the three sets $(A \cap B)$, $(A \cap B^c)$ and $(A^c \cap B)$ are mutually disjoint and each belongs to $\mathcal{A}$, by the previous two steps. Writing each of them as a finite union of mutually disjoint elements of $\mathcal{P}$ therefore exhibits $A \cup B$ as such a union as well, and hence $A \cup B \in \mathcal{A}$. \end{enumerate} This finishes the proof that $\mathcal{A}$ is indeed an algebra. \end{proof}
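For the sake of illustration only (this is not part of the solution), the following Python sketch mimics part (a) numerically: it assumes the hypothetical choice $x_{i,n}=i/n$, so that $\mu_n$ converges weakly to the uniform distribution on $[0,1]$, and it bounds $d_{BL}(\nu_n^{(1)},\nu_n^{(2)})$ from below using only the tent functions $f_t(x)=(1-|x-t|)_{+}$, which lie in $\mathcal{F}$; the mixture $\nu_n^{(2)}$ is itself approximated by a large auxiliary Monte Carlo sample, so the printed numbers are rough lower bounds rather than exact distances.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def tent(x, t):
    # f_t(x) = max(0, 1 - |x - t|): sup-norm <= 1 and 1-Lipschitz, so f_t lies in F
    return np.maximum(0.0, 1.0 - np.abs(x - t))

def bl_lower_bound(atoms1, atoms2, ts):
    # max over the tent family of |int f d(mu1 - mu2)|: a lower bound on d_BL
    return max(abs(tent(atoms1, t).mean() - tent(atoms2, t).mean()) for t in ts)

ts = np.linspace(-4.0, 5.0, 181)          # centres t of the test functions
for n in [100, 1000, 10000]:
    x = np.arange(1, n + 1) / n           # hypothetical choice x_{i,n} = i/n
    nu1 = x + rng.standard_normal(n)      # atoms of nu_n^(1)
    idx = rng.integers(0, n, size=200000) # auxiliary sample representing nu_n^(2)
    nu2 = x[idx] + rng.standard_normal(200000)
    print(n, bl_lower_bound(nu1, nu2, ts))
\end{verbatim}
The printed bounds decrease visibly with $n$, in line with the almost sure convergence proved in part (a).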
2020-q6 | Suppose a measurable map \(T\) on the probability space \((P, \Omega, \mathcal{F})\) is \(P\)-measure preserving. Consider the \(\sigma\)-algebras \(T_n = \{T^{-n}A : A \in \mathcal{F}\}\) and \(T_\infty = \bigcap_n T_n\).
(a) Suppose there exists \(c > 0\) and a \(\pi\)-system \(\mathcal{P}\) generating \(\mathcal{F}\), such that for any \(A \in \mathcal{P}\) there exists \(n_A\) finite with
\[P(A \cap T^{-n_A}B) \geq cP(A)P(B), \quad \forall B \in \mathcal{P}.\]
Show that then \(T_\infty\) is \(P\)-trivial (that is, \(P(C) \in \{0,1\}\) for any \(C \in T_\infty\)).
(b) Show that if \(T_\infty\) is \(P\)-trivial then \(T\) is mixing; that is, as \(n \to \infty\),
\[P(A \cap T^{-n}B) \to P(A)P(B), \quad \forall A, B \in \mathcal{F}.
\]
(c) Is a mixing, measure-preserving map \(T\) necessarily also ergodic? Justify your answer. | Since $T$ is measurable, we have $T^{-1}(A) \in \mathcal{F}$ and hence $\mathcal{T}_1 \subseteq \mathcal{T}_0 := \mathcal{F}$. Observe that if $\mathcal{F}_1 \subseteq \mathcal{F}_2$ are two sub-$\sigma$-algebras, then $$ T^{-1}(\mathcal{F}_1) :=\left\{T^{-1}(A) : A \in \mathcal{F}_1\right\} \subseteq \left\{T^{-1}(A) : A \in \mathcal{F}_2\right\} =:T^{-1}(\mathcal{F}_2).$$ Using this repeatedly and observing that $\mathcal{T}_{n+1} = T^{-1}(\mathcal{T}_n)$ for all $n \geq 1$, we conclude that $\mathcal{T}_n \downarrow \mathcal{T}_{\infty}.$ In particular, we see that $\mathcal{T}_{\infty}$ is contained in each of the previous $\mathcal{T}_n$ and hence is a $\sigma$-algebra as well. To show that it is trivial, we take $A \in \mathcal{T}_{\infty}$ and follow the same line of reasoning as earlier: for any $B \in \mathcal{T}_{n_A}$, we can say that $$ \mathbb{P}(A \cap B) = \lim_{n \to \infty} \mathbb{P}(A \cap T^{-n}(B)) \geq c \mathbb{P}(A)\mathbb{P}(B).$$ In particular, taking $B=A^c \in \mathcal{T}_{n_A}$ yields $0 = \mathbb{P}(A \cap A^c) \geq c\, \mathbb{P}(A)\mathbb{P}(A^c)$, so that $\mathbb{P}(A)\mathbb{P}(A^c)=0$. Hence, it follows that $\mathbb{P}(A) \in \{0,1\}$. $\nabla$
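As a purely illustrative sanity check of part (b) (the map below is not part of the problem), one can estimate $P(A \cap T^{-n}B)$ by Monte Carlo for a standard mixing example, the doubling map $T(x)=2x \bmod 1$ on $[0,1)$ with Lebesgue measure, using the hypothetical sets $A=[0,0.3)$ and $B=[0.5,0.9)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def T_iter(x, n):
    # apply the doubling map T(x) = 2x mod 1 exactly n times
    for _ in range(n):
        x = (2.0 * x) % 1.0
    return x

N = 1_000_000
x = rng.random(N)              # sample from the invariant (Lebesgue) measure
in_A = x < 0.3
pA, pB = 0.3, 0.4
for n in [0, 1, 2, 5, 10]:
    y = T_iter(x, n)           # x lies in T^{-n}B exactly when T^n x lies in B
    in_B = (y >= 0.5) & (y < 0.9)
    print(n, (in_A & in_B).mean(), pA * pB)
\end{verbatim}
For $n=0$ the two sets are disjoint, while for moderate $n$ the estimate is already close to $P(A)P(B)=0.12$; only a few iterations are used because repeated doubling quickly exhausts floating-point precision.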
2021-q1 | Show that a \( \sigma \)-algebra cannot be countably infinite: its cardinality must be finite or at least that of the continuum. Show by example that an algebra of sets can be countably infinite. | Suppose we have a sample space $\Omega$ and a $\sigma$-algebra $\mathcal{F}$ on $\Omega$. We shall use the notations $A^1$ and $A^0$ to denote $A$ and $A^c$ respectively for any $A \subseteq \Omega$. If $\mathcal{F}$ is finite, we are already done. Assume that $\mathcal{F}$ is infinite, i.e., we can get distinct sets $A_1, A_2, \ldots \in \mathcal{F}$. For any $\mathbf{i} =(i_1, i_2, \ldots,) \in \left\{0,1\right\}^{\mathbb{N}}$, we define the following sets. $$ B_{\mathbf{i}} := \bigcap_{j \geq 1} A_j^{i_j}. $$ Clearly, $B_{\mathbf{i}} \in \mathcal{F}$ and the collection $\mathcal{A} := \left\{B_{\mathbf{i}} : \mathbf{i} \in \left\{0,1\right\}^{\mathbb{N}} \right\} \subseteq \mathcal{F}$ form a partition of $\Omega$. If $\mathcal{A}$ is countable, then $$ A_k = \bigcup_{\mathbf{i} : i_k=1} B_{\mathbf{i}} \in \sigma(\mathcal{A}) \; \forall \; k \geq 1,$$ since the union is over a countably infinite collection at most (other subsets in the union being null sets); implying that $\sigma(\mathcal{A}) = \mathcal{F}.$ If $\mathcal{A}$ is finite, then clearly $\mathcal{F}=\sigma(\mathcal{A})$ is finite since in this case we have $$ \mathcal{F} = \sigma(\mathcal{A}) = \left\{\bigcup_{k=1}^n C_k \Big \rvert C_k \in \mathcal{A} \; \forall \; 1 \leq k \leq n, n \geq 0\right\},$$ a fact which follows from the observation that $\mathcal{A}$ constitutes a partition of $\Omega$. Therefore the only case that remains to be considered is when $\mathcal{A} \supseteq \left\{C_1, C_2, \ldots\right\}$, where $C_i$'s are non-empty and disjoint with each other. Define, $$ D_{S} := \bigcup_{i \in S} C_i \in \sigma(\mathcal{A}) \subseteq \mathcal{F}, \; \forall \; S \subseteq \mathbb{N}.$$ Disjointness and non-emptiness of $C_i$'s guarantee that the map $S \mapsto D_S$ is injective from $2^{\mathbb{N}}$ to $\mathcal{F}$. Since $2^{\mathbb{N}}$ has cardinality of the continuum, the $\sigma$-algebra has cardinality at least that of the continuum. For an example of countably infinite algebra, take $\Omega=\mathbb{N}$ and observe that $$\mathcal{B} := \left\{S \subseteq \mathbb{N} \mid |S| < \infty \text{ or } |\mathbb{N} \setminus S| < \infty\right\}$$ is a countably infinite algebra on $\mathbb{N}$. |
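To make the counterexample concrete, here is a small Python sketch (an illustration, not part of the argument) encoding the finite/co-finite algebra $\mathcal{B}$ on $\mathbb{N}$: each member is described by a finite set plus a flag recording whether the set itself or its complement is meant, which is exactly why $\mathcal{B}$ is countable, and the operations below exhibit closure under complementation and finite unions.
\begin{verbatim}
# Elements of B are encoded as (kind, S) with S a finite frozenset of naturals:
# kind == 'fin' represents the set S itself, kind == 'cof' represents N \ S.
def complement(a):
    kind, S = a
    return ('cof', S) if kind == 'fin' else ('fin', S)

def union(a, b):
    (ka, A), (kb, B) = a, b
    if ka == 'fin' and kb == 'fin':
        return ('fin', A | B)          # finite union of finite sets is finite
    if ka == 'cof' and kb == 'cof':
        return ('cof', A & B)          # (N\A) u (N\B) = N \ (A n B)
    if ka == 'fin':
        return ('cof', B - A)          # finite u co-finite is co-finite
    return ('cof', A - B)

a = ('fin', frozenset({1, 2, 3}))
b = ('cof', frozenset({2, 4}))         # every natural number except 2 and 4
print(union(a, b))                     # ('cof', frozenset({4})): misses only 4
print(complement(union(complement(a), complement(b))))   # intersection via De Morgan
\end{verbatim}
Countable unions are not representable in this encoding: for instance $\bigcup_{i \geq 1}\{2i\}$ is neither finite nor co-finite, matching the fact that $\mathcal{B}$ is an algebra but not a $\sigma$-algebra.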
2021-q2 | Let \( X_1, X_2, \ldots \) be a sequence of identically distributed (but not necessarily independent) integrable random variables. Show that \[ n^{-1} \max_{1 \leq i \leq n} |X_i| \longrightarrow 0 \] in probability as \( n \to \infty \). | Fix $M \in (0, \infty)$. Then $$ \max_{i=1}^n |X_i| \leq M + \sum_{i=1}^n \left( |X_i| - M\right)_{+}.$$ Therefore, $$ \limsup_{n \to \infty} \mathbb{E} \left[ \dfrac{1}{n} \max_{i=1}^n |X_i| \right] \leq \limsup_{n \to \infty} \left[ \dfrac{M}{n} + \dfrac{1}{n} \sum_{i=1}^n \mathbb{E} \left(|X_i| - M\right)_{+} \right] = \limsup_{n \to \infty} \left[ \dfrac{M}{n} + \mathbb{E} \left(|X_1| - M\right)_{+} \right] = \mathbb{E} \left(|X_1| - M\right)_{+}.$$ Since $(|X_1|-M)_{+} \leq |X_1| \in L^1$, we take $M \to \infty$ and apply DCT to conclude that $$ \dfrac{1}{n} \max_{i=1}^n |X_i| \stackrel{L^1}{\longrightarrow} 0.$$ This completes the proof, since $L^1$ convergence implies convergence in probability. Interestingly in this case we can show that $n^{-1}\max_{k=1}^n |X_k|$ converges almost surely to $0$. In light of Lemma~\ref{lem1}, it is enough to show that $|X_n|/n$ converges to $0$ almost surely as $n \to \infty$. Note that for any $\varepsilon>0$, we have \begin{align*} \sum_{n \geq 1} \mathbb{P}(|X_n| > n\varepsilon) &\leq \sum_{n \geq 1} \mathbb{P} \left( \lfloor \varepsilon^{-1}|X_1| \rfloor > n\right) \leq \mathbb{E} \lfloor \varepsilon^{-1}|X_1| \rfloor \leq \varepsilon^{-1}\mathbb{E}|X_1| < \infty, \end{align*} which shows that with probability $1$, $|X_n|/n > \varepsilon$ finitely often. Since this holds true for all $\varepsilon >0$, we have $|X_n|/n$ converging to $0$ almost surely. |
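A quick simulation (illustrative only; the distribution is a hypothetical choice) makes the statement tangible: for i.i.d. classical Pareto variables of index $3/2$, which are integrable but have infinite variance, the maximum grows roughly like $n^{2/3}$, so $n^{-1}\max_{1 \leq i \leq n}|X_i|$ still drifts to $0$, just slowly. Dependence is allowed by the statement but plays no role in this illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Classical Pareto(3/2) on [1, infinity): E|X| = 3 < infinity, Var X = infinity.
for n in [10**3, 10**4, 10**5, 10**6, 10**7]:
    X = rng.pareto(1.5, size=n) + 1.0
    print(n, X.max() / n)
\end{verbatim}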
2021-q3 | Let \( (W_t)_{t \geq 0} \) be a standard Wiener process.
(a) Fix real numbers \( a_1 < b_1 < a_2 < b_2 \), and define \( M_i := \max_{t \in [a_i, b_i]} W_t \) for \( i \in \{ 1, 2 \} \). Prove that \( M_1 - W_{b_1}, W_{a_2} - W_{b_1}, M_2 - W_{a_2} \) are mutually independent random variables.
(b) Show that, if \( X, Y \) are independent random variables, and \( X \) has a density (with respect to the Lebesgue measure), then \( \Pr\{X = Y\} = 0 \).
(c) Deduce that (for \( M_1, M_2 \) defined as in point (a) above), almost surely, \( M_1 \neq M_2 \).
(d) Show that the Wiener process \( (W_t)_{t \geq 0} \) has countably many local maxima with locations \( T_i \), and values \( M_i = W_{T_i} \), \( i \in \mathbb{N} \). (Of course, the locations \( T_i \) are random. We will choose them so that \( i \neq j \Rightarrow T_i \neq T_j \)).
(e) Use point (c) above to show that \( \Pr\{ M_i \neq M_j \; \forall i \neq j \} = 1 \). | Let $\left\{W_t : t \geq 0\right\}$ be a standard Brownian motion with $\left\{\mathcal{F}_t : t \geq 0\right\}$ being the natural filtration associated with it. We assume that $\mathcal{F}_0$ is complete with respect to the measure induced by $W$.\begin{enumerate}[label=(\alph*)] \item We have $a_1<b_1<a_2<b_2$ and $M_i := \sup_{t \in [a_i,b_i]} W_t$, for $i=1,2$. Note that $$M_2-W_{a_2} = \sup_{t \in [a_2,b_2]} (W_t-W_{a_2}) \perp \!\!\perp \mathcal{F}_{a_2},$$ since $\left\{W_t-W_{a_2} : t \geq a_2\right\} \perp \!\!\perp \mathcal{F}_{a_2}$ by independent increment property of BM. On the otherhand, $M_1-W_{a_1}, W_{a_2}-W_{b_1} \in m\mathcal{F}_{a_2}$, and therefore $M_2-W_{a_2} \perp \!\!\perp \left(M_1-W_{a_1}, W_{a_2}-W_{b_1} \right)$. Similarly, $W_{a_2}-W_{b_1} \perp \!\!\perp \mathcal{F}_{b_1}$ and $M_1-W_{b_1} \in m\mathcal{F}_{b_1}$; implying that $W_{a_2}-W_{b_1} \perp \!\!\perp M_1-W_{b_1}$. Combining these we conclude that $M_1-W_{b_1}, W_{a_2}-W_{b_1}, M_2-W_{a_2}$ are mutually independent variables. \item If $X$ and $Y$ are independent random variables with $X$ having density $f$ with respect to Lebesgue measure then for any $z \in \mathbb{R}$, $$ \mathbb{P}\left( X-Y \leq z\right) = \int_{\mathbb{R}} \mathbb{P}(X \leq y+z) \, d\mu_Y(y) = \int_{\mathbb{R}} \int_{-\infty}^{y+z} f(x)\, dx \, d\mu_Y(y) = \int_{-\infty}^z \left[\int_{\mathbb{R}} f(u+y) \; d\mu_Y(y) \right]\; du. $$ Thus $X-Y$ has density with respect to Lebesgue measure and hence $\mathbb{P}(X=Y)=0$. \item \begin{align*} \mathbb{P} \left( M_1=M_2\right) &= \mathbb{P} \left( M_1-W_{b_1} = M_2-W_{a_2}+W_{a_2}-W_{b_1}\right). \end{align*} Since $W_{a_2}-W_{b_1}$ is a Gaussian variable (having density with respect to Lebesgue measure) and independent of $M_2-W_{a_2}$, we know that (by part (b)) their sum $M_2-W_{b_1}$ also has a density and independent of $M_1-W_{b_1}$, by part (a). Therefore, by part (b), we can conclude that $\mathbb{P}(M_1=M_2)=0$. \item We prove this result using two lemmas. In the following, we say that a function $g : [0, \infty) \to \mathbb{R}$ has strict local maxima at $x$ if there exists $\delta >0$ such that $f(x)>f(y)$ for all $y \in (x-\delta,x+\delta)\cap[0, \infty)$. \begin{lemma} The set of strict local maxima of a function $f :[0, \infty) \to \mathbb{R}$ is countable. \end{lemma} \begin{proof} Define $$ S_{\delta} := \left\{x \in [0, \infty) \big \rvert f(x) > f(y), \; \forall \; y \in (x-\delta, x+\delta) \cap [0, \infty)\right\}, \; \forall \; \delta >0.$$ Obviously for any $x,y \in S_{\delta}$ with $x \neq y$ we have $|x-y| \geq \delta$. Thus $|S_{\delta} \cap [0,n]| \leq \delta^{-1}n +1< \infty$ for all $n \geq 1$ and hence $S_{\delta}$ is countable. Note that if $S$ is the set of all strict local maxima of $f$ then $S = \cup_{k \geq 1} S_{1/k}$ and hence $S$ is also countable. \end{proof} \begin{lemma} With probability $1$, any local maxima of BM is strict local maxima. \end{lemma} \begin{proof} Consider any function $f : [0, \infty) \to \mathbb{R}$ which has a not-strict local maxima at $x \geq 0$. By definition, there exists $\delta>0$ such that $f(x) \geq f(y)$ for all $(x-\delta,x+\delta)\cap [0, \infty)$ but there also exists $z$ with $|z-x| < \delta$ with $f(x)=f(z)$. 
We can therefore get rational numbers $q_1< q_2$ and $ q_3< q_4 \in (x-\delta,x+\delta)\cap [0, \infty)$ such that $[q_1,q_2]\cap [q_3,q_4]=\phi$, $x\in [q_1,q_2]$ and $z \in [q_3,q_4]$; and hence $f(x)=\sup_{t \in [q_1,q_2]} f(t)= \sup_{t \in [q_3,q_4]} f(t) =f(z)$. Therefore, \begin{align*} \left( W \text{ has a not-strict local maxima }\right) \subseteq \bigcup_{0 \leq q_1<q_2<q_3<q_4 \in \mathbb{Q}}\left( \sup_{t \in [q_1,q_2]} W_t = \sup_{t \in [q_3,q_4]} W_t \right). \end{align*} The individual events in the RHS above has probability $0$ by part (c). Since the union is over a countable collection and $\mathcal{F}_0$ is complete, we conclude that $W$ has only strict local maxima with probability $1$. \end{proof} Combining the above two lemmas, we can conclude that $W$ has countably many local maxima with probability $1$. \item We note that $$ \left( \exists \; i \neq j \text{ with }M_i = M_j\right) \subseteq \bigcup_{0 \leq q_1<q_2<q_3<q_4 \in \mathbb{Q}}\left( \sup_{t \in [q_1,q_2]} W_t = \sup_{t \in [q_3,q_4]} W_t \right).$$ The rest follows as in part (d). \end{enumerate} |
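The independence claim of part (a) and the conclusion of part (c) can also be checked numerically. The sketch below (an illustration under time discretisation, not a proof) simulates a Wiener path on a grid with the hypothetical choices $[a_1,b_1]=[0.2,0.8]$ and $[a_2,b_2]=[1.2,1.8]$, and reports the sample correlation matrix of $(M_1-W_{b_1},\, W_{a_2}-W_{b_1},\, M_2-W_{a_2})$; vanishing correlations are of course only a necessary condition for independence.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

dt, a1, b1, a2, b2 = 1e-3, 0.2, 0.8, 1.2, 1.8
t = np.arange(0.0, 2.0 + dt, dt)
reps, samples = 2000, []
for _ in range(reps):
    # discretised Wiener path: W_0 = 0, independent N(0, dt) increments
    W = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(len(t) - 1))])
    M1 = W[(t >= a1) & (t <= b1)].max()
    M2 = W[(t >= a2) & (t <= b2)].max()
    Wb1, Wa2 = W[np.searchsorted(t, b1)], W[np.searchsorted(t, a2)]
    samples.append([M1 - Wb1, Wa2 - Wb1, M2 - Wa2])
print(np.corrcoef(np.array(samples).T))   # close to the identity matrix
\end{verbatim}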
2021-q4 | Let \( \{ X_i \}_{i=1}^{\infty}, \{ Y_j \}_{j=1}^{\infty} \) be binary random variables. Say they are "I-exchangeable" if for every \( n, m \geq 1 \), \[ \Pr\{ X_1 = e_1, X_2 = e_2, \ldots, X_n = e_n, Y_1 = d_1, Y_2 = d_2, \ldots, Y_m = d_m \} = \Pr\{ X_{\sigma(1)} = e_1, \ldots, X_{\sigma(n)} = e_n, Y_{\tau(1)} = d_1, \ldots, Y_{\tau(m)} = d_m \}. \] This is to hold for all \( e_i, d_j \in \{0, 1\} \) and permutations \( \sigma \in S_n, \tau \in S_m \). Prove the following. Theorem. \( \{ X_i \}_{i=1}^{\infty}, \{ Y_j \}_{j=1}^{\infty} \) are I-exchangeable if and only if for all \( n, m \), \[ \Pr\{ X_1 = e_1, X_2 = e_2, \ldots, X_n = e_n, Y_1 = d_1, Y_2 = d_2, \ldots, Y_m = d_m \} = \int_0^1 \int_0^1 \theta^{T_n}(1 - \theta)^{n-T_n} \eta^{W_m}(1 - \eta)^{m-W_m} \mu(d\theta, d\eta) \] where \( T_n = e_1 + \cdots + e_n, W_m = d_1 + \cdots + d_m \). Here \( \mu \) is a probability on the Borel sets of \( [0, 1]^2 \) which does not depend on \( n, m \) or the \( e_i, d_j \). | This problem can be directly solved by mimicking the steps of the proof for \textit{de Finetti's Theorem} and adjusting accordingly, see \cite[Theorem 5.5.28]{dembo}. But we shall illustrate a proof here which only uses \textit{de Finetti's Theorem}, not it's proof techniques. Let $p(e_1, \ldots, e_n, d_1,\ldots, d_m; n,m)$ be defined as $\mathbb{P}\left( X_i=e_i, \; \forall \; 1 \leq i \leq n; Y_j=d_j, \; \forall \; 1 \leq j \leq m\right)$, for any $e_i,d_j\in \left\{0,1\right\}$, for all $i,j$. It is easy to observe that the I-exchangeability criterion is equivalent to the condition that for fixed $n,m$; $p(e_1, \ldots, e_n, d_1,\ldots, d_m; n,m)$ depends on $(e_1, \ldots,e_n,d_1,\ldots,d_m)$ only through $(\sum_{i=1}^n e_i, \sum_{j=1}^m d_j)$. Now suppose we have a probability measure $\mu$ on $[0,1]^2$ such that \begin{equation}{\label{eq}} p(e_1, \ldots, e_n, d_1,\ldots, d_m; n,m) = \int_{[0,1]^2} \theta^{\sum_{i=1}^n e_i} \left(1-\theta \right)^{n-\sum_{i=1}^n e_i} \eta^{\sum_{j=1}^m d_j} \left( 1- \eta\right)^{m-\sum_{j=1}^m d_j} \; \mu(d\theta, d\eta), \end{equation} for all $n,m,e_i,d_j$. Clearly this implies the equivalent condition to the I-exchangeability that we just stated and hence $\left\{X_i\right\}_{i=1}^{\infty}$ and $\left\{Y_j\right\}_{j=1}^{\infty}$ are I-exchangeable. For the converse, observe that I-exchangeability implies exchangeability for the sequence $\left\{(X_i,Y_i) : i \geq 1\right\}$. Applying \textit{de Finetti's Theorem}, we can conclude that conditioned on $\mathcal{E}$, the exchangeable $\sigma$-algebra for this exchangeable sequence, we have $\left\{(X_i,Y_i) : i \geq 1\right\}$ to be an i.i.d. collection. As a technical point, we are using the version of \textit{de Finetti's Theorem} stated in \cite[Theorem 5.5.28]{dembo}, which only states it for sequence of real-valued random variable. Since any discrete set can be embedded in $\mathbb{R}$, it is also valid for an exchangeable sequence of random elements, taking values in some discrete set. Coming back to the original problem, let us define some parameters for the joint distribution of $(X_1,Y_1)$ given $\mathcal{E}$. 
$$ p:= \mathbb{P}(Y_1=1 \mid \mathcal{E});\;\alpha := \dfrac{\mathbb{P}(X_1=1,Y_1=1 \mid \mathcal{E})}{\mathbb{P}(Y_1=1 \mid \mathcal{E})} \mathbbm{1}\left( \mathbb{P}(Y_1=1 \mid \mathcal{E}) > 0\right); \; \beta := \dfrac{\mathbb{P}(X_1=1,Y_1=0 \mid \mathcal{E})}{\mathbb{P}(Y_1=0 \mid \mathcal{E})} \mathbbm{1}\left( \mathbb{P}(Y_1=0 \mid \mathcal{E}) > 0\right).$$ By definition, $p,\alpha, \beta$ are $[0,1]$-valued random variables measurable with respect to $\mathcal{E}$. Fix $n,m \in \mathbb{N}$; $0 \leq k,l \leq n$ and $0 \vee (k+l-n) \leq t \leq k \wedge l$. We can get $e_1,\ldots,e_n$ and $d_1, \ldots,d_n$ such that $\sum_{i=1}^n e_i =k, \sum_{i=1}^n d_i=l$ and $\sum_{i=1}^n e_id_i=t$. In this case, \begin{align*} h(k,l;n,n) := p(e_1, \ldots, e_n, d_1, \ldots, d_n; n,n) &= \mathbb{P} \left( X_i=e_i, \; \forall \; 1 \leq i \leq n; Y_j=d_j, \; \forall \; 1 \leq j \leq n\right) \\ & = \mathbb{E} \left[ \prod_{i=1}^n \mathbb{P} \left( X_i=e_i, Y_i=d_i \mid \mathcal{E} \right)\right] \\ & = \mathbb{E} \left[ \prod_{i=1}^n (\alpha p)^{e_id_i}((1-\alpha)p)^{(1-e_i)d_i} (\beta(1-p))^{e_i(1-d_i)} ((1-\beta)(1-p))^{(1-e_i)(1-d_i)}\right] \\ & = \mathbb{E} \left[ (\alpha p)^{t}((1-\alpha)p)^{l-t} (\beta(1-p))^{k-t} ((1-\beta)(1-p))^{n-k-l+t}\right]. \end{align*} We have used the convention that $0^0:=1$. In particular, if we take $n=2k=2l \geq 2$, then we have for any $0 \leq t \leq k$, $$h(k,k; 2k,2k) = \mathbb{E} \left[ p^k(1-p)^{k}\alpha^t(1-\alpha)^{k-t}\beta^{k-t}(1-\beta)^t\right].$$ Therefore, \begin{align*} 0 = \sum_{t=0}^k (-1)^t {k \choose t}h(k,k;2k,2k) & = \sum_{t=0}^k (-1)^t {k \choose t} \mathbb{E} \left[ p^k(1-p)^{k}\alpha^t(1-\alpha)^{k-t}\beta^{k-t}(1-\beta)^{t}\right] \\ & = \sum_{t=0}^k (-1)^t {k \choose t} \mathbb{E} \left[ \left(p(1-p)\alpha (1-\beta) \right)^t\left( p(1-p)(1-\alpha)\beta\right)^{k-t}\right] \\ & = \mathbb{E} \left[ p(1-p)(1-\alpha)\beta - p(1-p)\alpha(1-\beta)\right]^k = \mathbb{E} p^k(1-p)^k(\beta-\alpha)^k. \end{align*} Considering the situation when $k$ is even, we can conclude that with probability $1$, either $p \in \left\{0,1\right\}$ or $\alpha=\beta$. Setting $U :=p$ and $V := \alpha p + \beta(1-p)$, it is easy to see that under both of these two situations, $$ p^l(1-p)^{n-l} \alpha^t(1-\alpha)^{l-t} \beta^{k-t}(1-\beta)^{n-k-l+t} = U^l(1-U)^{n-l} V^k(1-V)^{n-k}, \; \; \forall \; 0 \vee (k+l-n) \leq t \leq k \wedge l; \; \; 0 \leq k,l \leq n.$$ This yields that with probability $1$, $$ \mathbb{P} \left( X_i=e_i, \; \forall \; 1 \leq i \leq n; Y_j=d_j, \; \forall \; 1 \leq j \leq n \mid \mathcal{E}\right) = V^{\sum_{i=1}^n e_i} \left(1-V \right)^{n-\sum_{i=1}^n e_i} U^{\sum_{j=1}^n d_j} \left( 1- U\right)^{n-\sum_{j=1}^n d_j} .$$ Since, $U,V \in m\mathcal{E}$, we basically have that, conditioned on $\mathcal{E}$, $$ X_i \stackrel{i.i.d.}{\sim} \text{Ber}(V), \; Y_j \stackrel{i.i.d.}{\sim} \text{Ber}(U),\;\text{ and } \; \left\{X_i : i \geq 1\right\} \perp \!\!\perp \left\{Y_j : j \geq 1\right\}.$$ Observe that $U, V \in [0,1]$ with probability $1$, by construction. Let $\mu$ be the probability measure induced by $(V,U)$ on $[0,1]^2.$ Clearly $\mu$ satisfies Equation~(\ref{eq}). This completes the proof. |
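The mixture representation can be checked numerically as well. The sketch below (illustrative; the product prior $\text{Beta}(2,3)\otimes\text{Beta}(1,1)$ is an arbitrary choice of $\mu$) simulates $(X_1,X_2,Y_1)$ from the integral formula of the theorem and verifies that the probability of a pattern depends only on $(T_n,W_m)$, in agreement with the closed-form Beta integral.
\begin{verbatim}
import numpy as np
from math import gamma

rng = np.random.default_rng(4)

def beta_fn(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

reps = 1_000_000
theta = rng.beta(2, 3, size=reps)           # theta ~ Beta(2,3)
eta = rng.beta(1, 1, size=reps)             # eta ~ Beta(1,1), i.e. uniform
X = rng.random((reps, 2)) < theta[:, None]  # X_1, X_2 | theta  i.i.d. Bernoulli(theta)
Y = rng.random(reps) < eta                  # Y_1 | eta ~ Bernoulli(eta)

p_10 = np.mean(X[:, 0] & ~X[:, 1] & Y)      # pattern e = (1,0), d = (1)
p_01 = np.mean(~X[:, 0] & X[:, 1] & Y)      # permuted pattern e = (0,1), d = (1)
exact = (beta_fn(2 + 1, 3 + 1) / beta_fn(2, 3)) * (beta_fn(1 + 1, 1) / beta_fn(1, 1))
print(p_10, p_01, exact)                    # all three are close to 0.1
\end{verbatim}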
2021-q5 | Consider the two-dimensional integer lattice \( \mathbb{Z}^2 \) with edges \( e = \{ x, y \} \) connecting any pair of vectors \( x, y \in \mathbb{Z}^2 \) such that \( ||x-y||_2 = 1 \). Assigning i.i.d. passage times \( \tau(e) \geq 0 \) to edges of this graph induces, for any path \( \pi = \{ x = x_0, x_1, \ldots, x_n = y \} \) of finite length from vertex \( x \in \mathbb{Z}^2 \) to some other \( y \in \mathbb{Z}^2 \), the travel time \( T(\pi) = \sum_{i=1}^n \tau(\{x_{i-1}, x_i\}) \). The first passage time \( T(x, y) \) from \( x \) to \( y \) is then the infimum of all travel times over paths from \( x \) to \( y \). Denoting by \( z_n = (n, n) \) the points on the main diagonal of \( \mathbb{Z}^2 \), let \( D(z_i, z_j) \) denote the infimum of travel times over those paths from \( z_i \) to \( z_j \) which only pass vertices on the main diagonal of \( \mathbb{Z}^2 \) or their nearest neighbors (i.e., whose horizontal and vertical coordinates differ by at most one). We are interested in the asymptotics of \( n^{-1} T(0, z_n) \) and \( n^{-1} D(0, z_n) \) as \( n \to \infty \).
(a) Show that a.s. \( n^{-1} D(0, z_n) \to L \), where \( L := \int_0^\infty \int_0^\infty P(\tau + \tau' > u^2) d u \) and \( \tau, \tau' \), denote two independent copies of the random variable \( \tau(e) \).
(b) Assuming that \( L \) is finite, show that a.s. \( n^{-1} T(0, z_n) \to \gamma \), with \( \gamma \geq 0 \) some non-random constant.
(c) Show that if \( L \) is finite for some non-degenerate variable \( \tau(e) \), then necessarily \( \gamma < L \). | Since $\tau \geq 0$ with probability $1$, it is enough to consider finite length self-avoiding paths only, i.e. only those paths which have finite length and never visit a vertex twice. Also, for any path $\pi$, $E_{\pi}$ will denote the edges in it. \begin{enumerate}[label=(\alph*)] \item Let $\mathbb{D}$ denote the diagonal along with its nearest neighbours, i.e. $$ \mathbb{D} := \left\{(x_1,x_2) \in \mathbb{Z}^2 : |x_1-x_2| \leq 1\right\}.$$ The crucial observation is that any finite length self-avoiding path $\pi_n$, going from $0$ to $z_n$ and only passing through vertices in $\mathbb{D}$, should be of the form $\pi_n = \left\{x_0, \ldots, x_{2n}\right\}$ with $x_{2k}=z_k=(k,k)$ for all $0 \leq k \leq n$ and $x_{2k+1} \in \left\{(k+1,k),(k,k+1)\right\}$, for all $0 \leq k \leq n-1$. Introducing the notations $z_k^1:=(k+1,k)$ and $z_k^2:=(k,k+1)$, we can therefore write \begin{align}\label{D} D(0,z_n) = \sum_{k=1}^n \min \left[\tau\left( \left\{z_{k-1},z_{k-1}^1\right\} \right)+ \tau\left( \left\{z_{k-1}^1,z_k\right\}\right), \tau\left( \left\{z_{k-1},z_{k-1}^2\right\} \right)+ \tau\left( \left\{z_{k-1}^2,z_k\right\}\right) \right]. \end{align} Applying the SLLN, we can conclude that \begin{align*} n^{-1}D(0,z_n) \stackrel{a.s.}{\longrightarrow} \mathbb{E} \left[ \min \left\{\tau_1+\tau_2, \tau_3+\tau_4\right\}\right] &= \int_{0}^{\infty} \mathbb{P} \left( \min \left\{\tau_1+\tau_2, \tau_3+\tau_4\right\} >u \right) \, du\\ & = \int_{0}^{\infty} \mathbb{P} \left( \tau + \tau^{\prime} >u \right)^2 \, du =:L, \end{align*} where $\tau_1, \tau_2, \tau_3, \tau_4$ are independent copies of the random variable $\tau$, and so is $\tau^{\prime}$. It is also clear that $\mathbb{E}D(0,z_n)=nL$, for all $n \geq 0$. \item For any $x,y \in \mathbb{Z}^2$, let $P_{x,y}$ denote the collection of all finite length self-avoiding paths starting from $x$ and ending at $y$. We proceed in two steps. \begin{itemize} \item \textbf{Step I :} $n^{-1}T(0,z_n)$ converges almost surely and in $L^1$ to an integrable random variable $\xi$. We shall use the Sub-additive Ergodic Theorem to complete this step. Trying to be in accordance with the notation of \cite[Corollary 7.4.2]{dembo}, we define $X_{m,n} := T(z_m,z_n)$, for any $0 \leq m <n$. By translation invariance of our model, it is obvious that the law of $\left\{X_{m+l,m+n} = T(z_{m+l},z_{m+n}) \mid 0 \leq l <n\right\}$ does not depend on $m$. Translation invariance also guarantees that $\left\{X_{nk,(n+1)k} = T(z_{nk},z_{(n+1)k}) : n \geq 0\right\}$ is a stationary sequence for any $k \in \mathbb{N}$. Moreover $\mathbb{E}|X_{m,n}| = \mathbb{E}T(0,z_{n-m}) \leq \mathbb{E}D(0,z_{n-m}) = (n-m)L < \infty$. $L < \infty$ also guarantees that $$ \inf_{n} n^{-1}\mathbb{E}X_{0,n} = \inf_{n} n^{-1}\mathbb{E}T(0,z_n) \leq \inf_{n} n^{-1}\mathbb{E}D(0,z_n) =L < \infty.$$ All that remains now is to check the sub-additivity condition.
Take $\varepsilon >0$ and get $\pi_1 \in P_{0, z_m}, \pi_2 \in P_{z_m,z_n}$ such that $T(\pi_1) - T(0,z_m), T(\pi_2)-T(z_m,z_n)\leq \varepsilon.$ We can concatenate the paths $\pi_1$ and $\pi_2$, remove any self-returning part if one exists, and get a path $\pi \in P_{0,z_n}$ which satisfies $$ T(0,z_n) \leq T(\pi) \leq T(\pi_1)+T(\pi_2) \leq T(0,z_m) + T(z_m,z_n) + 2\varepsilon.$$ Taking $\varepsilon \downarrow 0$, we conclude that $X_{0,n} \leq X_{0,m} + X_{m,n}$. We now apply \cite[Corollary 7.4.2]{dembo} to complete this step. \item \textbf{Step II :} $\xi$ is a degenerate random variable. Consider any enumeration of $E$, the set of all edges in the lattice $\mathbb{Z}^2$; $E=\left\{e_1,e_2, \ldots \right\}$. The $\tau(e_i)$'s are i.i.d. random variables. Take $\left\{\tau^{\prime}(e_i) : i \geq 1\right\}$ to be an independent copy of $\left\{\tau(e_i) : i \geq 1\right\}$ and define $$ \tau^{(k)}(e_i) := \begin{cases} \tau(e_i), & \text{ if } i >k, \\ \tau^{\prime}(e_i), & \text{ if } 1 \leq i \leq k, \end{cases}$$ for any $k \geq 1$. It is clear that $\left\{\tau^{(k)}(e_i) : i \geq 1\right\}$ is also a copy of $\left\{\tau(e_i) : i \geq 1\right\}$, independent of $\left\{\tau(e_i)\right\}_{i=1}^k$; and hence so is $\xi^{(k)}$, where $$ T^{(k)}(0,z_n) := \inf_{\pi \in P_{0,z_n}} T^{(k)}(\pi) := \inf_{\pi \in P_{0,z_n}} \sum_{e \in \pi} \tau^{(k)}(e); \; \text{ and } n^{-1}T^{(k)}(0,z_n) \stackrel{a.s.}{\longrightarrow} \xi^{(k)}.$$ However, note that for any $\pi \in P_{0,z_n}$, we have $$ \bigg \rvert T^{(k)}(\pi) - T(\pi)\bigg \rvert \leq \sum_{e \in \pi \cap \left\{e_i : 1 \leq i \leq k\right\}} |\tau^{(k)}(e)-\tau(e)| \leq \sum_{i=1}^k |\tau^{(k)}(e_i)-\tau(e_i)|,$$ and hence $$ |T^{(k)}(0,z_n) - T(0,z_n)| \leq \sum_{i=1}^k |\tau^{(k)}(e_i)-\tau(e_i)|.$$ Dividing both sides by $n$ and taking the limit as $n \to \infty$, we conclude that $\xi=\xi^{(k)}$ almost surely, and hence $\xi$ is independent of $\left\{\tau(e_i)\right\}_{i=1}^k$. Since $\xi$ is measurable with respect to $\left\{\tau(e_i)\right\}_{i\geq 1}$, we apply Levy's upward theorem to conclude that for any $A \subseteq \mathbb{R}$ measurable, $$ \mathbbm{1}(\xi \in A) = \mathbb{P}(\xi \in A \mid \tau(e_1), \tau(e_2), \ldots) \stackrel{a.s}{=} \lim_{k \to \infty} \mathbb{P}(\xi \in A \mid \tau(e_1), \tau(e_2), \ldots,\tau(e_k)) = \mathbb{P}(\xi \in A).$$ Therefore $\xi$ is a degenerate random variable, i.e. $\xi \equiv \gamma$ where $\gamma \geq 0$ is some non-random constant. \end{itemize} \item Suppose that $\gamma = L$. Observe that $$ \mathbb{E} \min_{i=1}^4 T(\pi_i) = 2L = 2\gamma \leq \mathbb{E}T(0,z_2) \leq \mathbb{E} \min_{i=1}^5 T(\pi_i), $$ where $\pi_1, \ldots, \pi_4$ are the elements of $P_{z_0,z_2}$ lying entirely in $\mathbb{D}$, whereas $\pi_5$ is the path $(0,0) \to (0,1) \to (0,2) \to (1,2) \to (2,2)$. Therefore, $T(\pi_5) \geq \min_{i=1}^4 T(\pi_i)$, with probability $1$. Note that $E_{\pi_5}$ has only two edges in common with $\cup_{i=1}^4 E_{\pi_i}$; they are $\left\{(0,0), (0,1)\right\}$ and $\left\{(1,2),(2,2)\right\}$. Since all five paths in question have four edges, we can conclude that for any $a \in \mathbb{R}$, \begin{align*} 0=\mathbb{P}\left( T(\pi_5) < \min_{i=1}^4 T(\pi_i)\right) &\geq \mathbb{P}\left( \tau(e) \geq a, \; \forall \; e \in \cup_{i=1}^4 E_{\pi_i} \setminus E_{\pi_5}; \; \tau(e) < a \; \forall \; e \in E_{\pi_5}\right) \\ & = \mathbb{P}(\tau < a)^4 \mathbb{P}(\tau \geq a)^6, \end{align*} and hence $\mathbb{P}(\tau \leq a) \in \left\{0,1\right\}$, for all $a \in \mathbb{R}$.
This implies that $\tau$ is degenerate, a contradiction. Therefore, $\gamma < L.$ \end{enumerate} |
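The representation of $D(0,z_n)$ derived in part (a) makes it easy to simulate. The sketch below (an illustration with the hypothetical choice $\tau \sim \text{Exp}(1)$) averages the per-step contributions $\min(\tau_1+\tau_2,\tau_3+\tau_4)$ and compares with the closed form: for exponential passage times $\mathbb{P}(\tau+\tau' > u)=(1+u)e^{-u}$, so $L=\int_0^{\infty}(1+u)^2e^{-2u}\,du = 5/4$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)

# Each diagonal step of D(0, z_n) contributes min(tau1 + tau2, tau3 + tau4)
# with the four passage times independent; here tau ~ Exp(1) (illustrative choice).
n = 1_000_000
taus = rng.exponential(1.0, size=(n, 4))
steps = np.minimum(taus[:, 0] + taus[:, 1], taus[:, 2] + taus[:, 3])
print(steps.mean(), 5 / 4)   # n^{-1} D(0, z_n) concentrates near L = 1.25
\end{verbatim}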
2021-q6 | A coin with probability \( p \neq 1/2 \) will be tossed repeatedly. A gambler has initial fortune of 1 unit. Just before the \( n \)th coin toss, the gambler is allowed to bet any fraction of his current fortune on heads and the complementary fraction on tails. The gambler wins twice the amount bet on the outcome that occurs and loses the amount bet on the outcome that does not occur. Note that this allows the gambler NOT to bet any particular amount by simply betting half of that amount on heads and half on tails. Let \( F_n \) denote the gambler's fortune after \( n \) coin tosses. The gambler would like to maximize the expected rate of return: \( E[\log(F_n)] \).
(a) Suppose the gambler knows the value of \( p \). What betting fractions maximize the expected rate of return for large \( n \)? If we denote this expected rate by \( Cn \), what is the value of \( C \)?
(b) Suppose the gambler does not know the value of \( p \) (but knows that \( p \neq 1/2 \)). Suggest a strategy for the gambler that produces the same asymptotic expected rate of return as in part (a) above. Find a strategy for which \( E[\log(F_n)] - Cn| = O(\log n) \). | For all $n \geq 1$, define $$ X_n := \begin{cases} 1 & \text{ if $n$-th toss has outcome heads }, \\ -1 & \text{ if $n$-th toss has outcome tails}, \end{cases} $$ and $\mathcal{F}_n := \sigma \left( X_i : 1 \leq i \leq n\right)$, with $\mathcal{F}_0$ being defined as the trivial $\sigma$-algebra. Suppose that just before the $n$-th toss, the gambler bets $B_{n-1} \in [0,1]$ fraction of his current fortune $F_{n-1}$ on heads and remaining on tails. Of course we have $B_{n-1} \in m\mathcal{F}_{n-1}$. If $X_n=1$, then the gambler's fortune increases by $F_{n-1}B_{n-1}-F_{n-1}(1-B_{n-1})$ whereas for $X_n=-1$, the gambler's fortune decreases by the same amount. Therefore we have $$ F_n = F_{n-1} + X_n \left( F_{n-1}B_{n-1}-F_{n-1}(1-B_{n-1})\right) = F_{n-1}\left[ 1+ (2B_{n-1}-1)X_n\right], \; \forall \; n \geq 1,$$ with $F_0=1$. Hence, for all $n \geq 1$, \begin{align*} \log F_n = \sum_{k=1}^n \log \left[ 1+ (2B_{k-1}-1)X_k\right] \Rightarrow \mathbb{E} \log F_n &= \sum_{k=1}^n \mathbb{E} \log \left[ 1+ (2B_{k-1}-1)X_k\right] \\ & = \sum_{k=1}^n \mathbb{E} \left[ \mathbb{E} \left(\log \left[ 1+ (2B_{k-1}-1)X_k \right] \mid \mathcal{F}_{k-1} \right)\right] \\ & = n \log 2 + \sum_{k=1}^n \mathbb{E} \left[ p \log B_{k-1} + (1-p) \log (1-B_{k-1})\right]. \end{align*} \begin{enumerate}[label=(\alph*)] \item We assume that the gambler knows the value of $p \in [0,1]$. Observe that the function $g_p: [0,1] \to [-\infty,\infty)$ defined as $g_p(r) := p \log r + (1-p) \log (1-r)$, for all $r \in [0,1]$, is maximized uniquely at $r=p$. This can be checked by computing first and second order derivatives of $g_p$. Therefore, \begin{equation}{\label{maximum}} \mathbb{E} \log F_n = n \log 2 + \sum_{k=1}^n \mathbb{E} g_p(B_{k-1}) \leq n \log 2 + n g_p(p), \end{equation} and this maximum is attained if the gambler chooses $B_k \equiv p$, for all $k \geq 0$. Thus the betting fraction maximizing the expected rate of return is exactly $p$ and in this case $$ \text{ Optimum expected rate of return } = n \log 2 + n g_p(p) = Cn,$$ with $C= \log 2 + g_p(p) = \log 2 + p \log p +(1-p) \log (1-p).$ \item Since the gambler does not know the exact value of $p$, the sensible strategy will be to estimate $p$ from already observed toss outcomes and use that estimate as the betting fraction. The most traditional choice will be $B_k = P_k := k^{-1}\sum_{l=1}^k X_l$, the observed proportion of heads until and including the $k$-th toss, for all $k \geq 1$. We make small adjustment to this choice by taking our betting choice to be $$ B_k^* := \min \left( \max \left(P_k, \dfrac{1}{k+1} \right), \dfrac{k}{k+1} \right), \; \forall \; k \geq 1.$$ This choice makes it sure that we always bet something on both head ans tails. For $k =0$, we just take $B_0^*=1/2$. We shall now establish that this choice of betting fractions indeed satisfy the condition $|\mathbb{E} \log F_n - Cn| = O(\log n)$ and hence produces the same asymptotic expected rate of return as in part (a). In light of (\ref{maximum}), it is enough to show that $$ \sum_{k=1}^n \mathbb{E}g_p(B_{k-1}^*) - ng_p(p) \geq -K \log n, \; \forall \; n \geq 2,$$ for some universal finite constant $K$. First consider the non-trivial situation, i.e. $p \in (0,1)$. 
Fix $\varepsilon \in (0,1)$ and consider the event $A_k :=\left( |P_k-p| \leq p\varepsilon \wedge (1-p)\varepsilon \right)$, for all $ k \geq 1$. For large enough (non-random) $k$, we have $P_k=B_k^*$ on the event $A_k$ and hence \begin{align} g_p(B_{k}^*) - g_p(p) &= p \log \dfrac{B_{k}^*}{p} + (1-p) \log \dfrac{1-B_{k}^*}{1-p} \nonumber \\ &= p \left[ \dfrac{P_k-p}{p} - \eta \dfrac{(P_k-p)^2}{p^2}\right] + (1-p) \left[\dfrac{p-P_k}{1-p} - \eta \dfrac{(P_k-p)^2}{(1-p)^2} \right] = -\dfrac{\eta (P_k-p)^2}{p(1-p)}. \label{bound} \end{align} To conclude (\ref{bound}), we have used the fact that $\log(1+x) \geq x - \eta x^2$, for some $\eta \in (0, \infty)$ and $|x| \leq \varepsilon.$ On the otherhand, we have $1/(k+1) \leq B_k^* \leq k/(k+1)$ and hence $g(B_k^*) \geq - \log (k+1)$. Also observe that $$ \mathbb{P}(A_k^c) = \mathbb{P} \left( |P_k-p| > p\varepsilon \wedge (1-p)\varepsilon\right) \leq \exp \left( -2k \varepsilon^2 (p \wedge 1-p)^2\right),$$ which follows from the fact that $P_k$ has mean $p$ and is sub-Gaussian with proxy $1/4k$. Combining these observations, we can conclude that for large enough $k$, \begin{align*} \mathbb{E}g_p(B_k^*) - g_p(p) &\geq -\dfrac{\eta}{p(1-p)}\mathbb{E} \left[(P_k-p)^2; A_k \right] +(-\log (k+1) -g(p))\mathbb{P}(A_k^c) \\ & \geq -\dfrac{\eta}{p(1-p)}\mathbb{E} \left[(P_k-p)^2 \right] - \left(-\log (k+1) -g(p)\right)\mathbb{P}(A_k^c) \\ & = - \dfrac{\eta}{k} - \exp \left( \log \log (k+1)-2k \varepsilon^2 (p \wedge 1-p)^2\right) \geq - \dfrac{2\eta}{k} \geq -\dfrac{4\eta}{k+1}. \end{align*} Since $B_k^*$ is bounded away from $0$ and $1$, we have $\mathbb{E}g_p(B_k^*) - g_p(p)$ to be a finite real number and hence we can get $\eta_0 \in (0, \infty)$ such that $\mathbb{E}g_p(B_k^*) - g_p(p) \geq - \eta_0/(k+1)$, for all $k \geq 0$. Therefore, $$ \mathbb{E} \log F_n - Cn = \sum_{k=1}^n \mathbb{E}g_p(B_{k-1}^*) - ng_p(p) \geq - \sum_{k=1}^n \dfrac{\eta_0}{k} \geq -2 \eta_0 \log n, \; \forall \; n \geq 2. $$ Now we concentrate on the trivial cases, i.e. $p=0,1$. For $p=0$, we have $P_k \equiv 0$ and hence $B_k^*=1/(k+1)$, for all $k \geq 1$. For this choice of betting fractions, we have \begin{align*} \mathbb{E} \log F_n - ng_0(0) = g_0(1/2) + \sum_{k=2}^{n} g_0(1/k) = \log (1/2) + \sum_{k=2}^{n} \log (1-1/k) &\geq - \log 2 - \sum_{k=2}^n \dfrac{1}{k} - \eta_1 \sum_{k=2}^n \dfrac{1}{k^2} \\ & \geq - C_0 - \log n \geq - C_1\log n, \end{align*} for some finite constants $\eta_0, C_0,C_1$. Similar analysis can be done for the case $p=1$. This completes the proof. \end{enumerate} |
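As a numerical companion to parts (a) and (b) (an illustration only; the value $p=0.7$ and the horizon are arbitrary choices), the sketch below simulates one run of the informed strategy $B_k \equiv p$ and of the adaptive strategy $B_k^*$ built from the clipped running proportion of heads, and compares $n^{-1}\log F_n$ with $C=\log 2 + p\log p + (1-p)\log(1-p)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)

p, n = 0.7, 200_000
C = np.log(2) + p * np.log(p) + (1 - p) * np.log(1 - p)
heads = rng.random(n) < p                  # outcome of each toss (True = heads)

# per-toss growth factor: 2*B on heads, 2*(1-B) on tails,
# since F_k = F_{k-1} * (1 + (2 B_{k-1} - 1) X_k)
def log_fortune(B):
    return np.sum(np.log(np.where(heads, 2 * B, 2 * (1 - B))))

# informed gambler: constant betting fraction p
logF_known = log_fortune(np.full(n, p))

# adaptive gambler: B_0^* = 1/2, then the running proportion clipped to [1/(k+1), k/(k+1)]
k = np.arange(1, n)
P_k = np.cumsum(heads)[:-1] / k
B_star = np.concatenate([[0.5], np.clip(P_k, 1 / (k + 1), k / (k + 1))])
logF_adapt = log_fortune(B_star)

print(C, logF_known / n, logF_adapt / n)
\end{verbatim}
Both simulated rates come out close to $C \approx 0.082$, with the adaptive strategy slightly behind, as the $O(\log n)$ bound predicts.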