metadata | question | solution |
---|---|---|
1991-q1 | Let r be a fixed positive integer, and let W be the waiting time till the rth head in a sequence of fair coin tosses. Find simple formulae in terms of r for (a) the mean of W (b) the median of W, that is, the least integer w so that P(W > w) \leq 1/2. (c) the mode of W, that is, an integer w so that P(W = w) \geq P(W = k) for all integers k. | Let $Y_i$ be the waiting time between the $(i-1)$-th head and the $i$-th head (including the time point of the $i$-th head, excluding that of the $(i-1)$-th), for $i \geq 1$. Clearly, the $Y_i$'s are \textit{i.i.d.} Geometric($1/2$) random variables.
\begin{enumerate}[label=(\alph*)]
\item By definition, $W=\sum_{i=1}^r Y_i$. Hence, $\mathbb{E}(W)=r\mathbb{E}(Y_1)=2r.$
\item It is easy to see that
$$\mathbb{P}(W>w) = \mathbb{P}(\text{There has been less than $r$ heads in first $w$ trials}) = \sum_{k=0}^{r-1} {w \choose k} 2^{-w} \leq \dfrac{1}{2} \iff \sum_{k=0}^{r-1}{w \choose k} \leq 2^{w-1}.$$
Recall that for any $n \geq 1$,
$$ \sum_{l=0}^k {n \choose l} \leq 2^{n-1} \iff k \leq \lfloor \dfrac{n-1}{2}\rfloor.$$
Hence,
$$ \mathbb{P}(W>w) \leq \dfrac{1}{2} \iff r-1 \leq \lfloor \dfrac{w-1}{2}\rfloor \iff r-1 \leq (w-1)/2 \iff w \geq 2r-1.$$
Thus the median is $2r-1$.
\item It is easy to see that for any $w \in \mathbb{N}$, we have
$$ p(w) :=\mathbb{P}(W=w) = {w-1 \choose r-1} 2^{-w}. $$
This is positive only for $w \in \left\{r,r+1,r+2, \ldots\right\}.$ Observe that for $w \geq r$,
$$ \dfrac{p(w)}{p(w+1)} = \dfrac{2(w-r+1)}{w} \geq 1 \iff w \geq 2r-2, \; \dfrac{p(w)}{p(w+1)} = \dfrac{2(w-r+1)}{w} \leq 1 \iff w \leq 2r-2.$$
The first condition shows that $p$ is non-increasing on $\left\{2r-2,2r-1,\ldots\right\}$ with $p(2r-2)=p(2r-1)$, so $p(2r-1) \geq p(w)$ for all $w \geq 2r-1$; the second condition shows that $p$ is non-decreasing on $\left\{r, \ldots, 2r-1\right\}$, so $p(2r-1) \geq p(w)$ for all $w \leq 2r-2$. Since $2r-1 \geq r$, a mode is $M=2r-1$ (for $r \geq 2$, $w=2r-2$ is also a mode).
\end{enumerate} |
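The closed forms above can be sanity-checked by simulation. A minimal Monte Carlo sketch (not part of the original solution; the value of `r` and the sample size are arbitrary choices), assuming only fair coin tosses as in the problem:

```python
import random
from collections import Counter

def waiting_time(r):
    """Number of fair-coin tosses until the r-th head appears."""
    heads, tosses = 0, 0
    while heads < r:
        tosses += 1
        heads += random.random() < 0.5
    return tosses

r, n_sim = 5, 200_000
samples = [waiting_time(r) for _ in range(n_sim)]
counts = Counter(samples)

mean = sum(samples) / n_sim                    # should be close to 2r = 10
mode = max(counts, key=counts.get)             # should be 2r - 1 = 9 (2r - 2 = 8 ties in theory)
cum, median = 0, None
for w in sorted(counts):                       # least w with empirical P(W <= w) >= 1/2
    cum += counts[w]
    if cum / n_sim >= 0.5:
        median = w                             # should be 2r - 1 = 9
        break

print(mean, median, mode)
```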
1991-q2 | Let (Y_n, n = 1, 2, 3, \cdots) be an arbitrary sequence of random variables on a probability space (\Omega, \mathcal{F}, P), and let (\nu_k, k = 1, 2, 3, \cdots) be a sequence of positive integer valued random variables on the same space. Define Y_{\nu_k} by Y_{\nu_k}(\omega) = Y_{\nu_k(\omega)}(\omega), \omega \in \Omega Consider the following conditions: (a) Y_n \to Y a.s. (b) Y_n \to Y in probability (\alpha) \nu_k \to \infty a.s. (\beta) \nu_k \to \infty in probability (A) Y_{\nu_k} \to Y a.s. (B) Y_{\nu_k} \to Y in probability where (\beta) means for every positive \lambda , P(\nu_k > \lambda) \to 1 as k \to \infty. Say whether the following statements are true or false (give proof or counterexample). (i) (a) and (\alpha) together imply (A). (ii) (a) and (\alpha) together imply (B). (iii) (a) and (\beta) together imply (A). (iv) (a) and (\beta) together imply (B). (v) (b) and (\alpha) together imply (A). (vi) (b) and (\beta) together imply (B). | \begin{enumerate}[label=(\roman*)]
\item \textbf{TRUE}. Set $A :=(Y_n \to Y), B:=(\nu_k \to \infty) $. Then $\mathbb{P}(A \cap B)=1$ (since $\mathbb{P}(A)=\mathbb{P}(B)=1$) and for $\omega \in A \cap B$, clearly we have
$Y_{\nu_k(\omega)}(\omega) \to Y(\omega),$ as $k \to \infty$.
\item \textbf{TRUE}. This follows from (i), since $Y_{\nu_k} \stackrel{a.s.}{\rightarrow} Y$ implies that $Y_{\nu_k} \stackrel{p}{\rightarrow} Y.$
\item \textbf{FALSE}. As a counter-example, take $Y_n \equiv 1/n$, $Y \equiv 0$ and $\left\{\nu_k : k \geq 1\right\}$ to be independent collection with
$$ \mathbb{P}(\nu_k=1)=1/k, \; \mathbb{P}(\nu_k=k)=1-1/k, \; \forall \; k \geq 1.$$
Clearly, $Y_n$ converges to $Y$ almost surely and for any $\lambda >0$, $\mathbb{P}(\nu_k > \lambda) = 1-1/k, \;\forall \; k > \lambda +1$ and hence $\mathbb{P}(\nu_k > \lambda) \to 1$. So $\nu_k \to \infty$ in probability. Since, $\nu_k$'s are independent and $\sum_{k \geq 1} \mathbb{P}(\nu_k =1) = \infty$, we have with probability $1$, $\nu_k=1$ infinitely often and hence $Y_{\nu_k}=Y_1=1$ infinitely often. Thus $\limsup_{k \to \infty} Y_{\nu_k}=1$ almost surely and hence $Y_{\nu_k}$ does not converge to $Y$ almost surely.
\item \textbf{TRUE}. Set $Z_n=\sup_{m \geq n} |Y_m-Y|$. Since, $Y_n$ converges almost surely to $Y$, we know that $Z_n \downarrow 0$, almost surely. Fix $\varepsilon>0$. Then for any $N \geq 1$,
$$ \mathbb{P}(|Y_{\nu_k}-Y| \geq \varepsilon) \leq \mathbb{P}(Z_N \geq \varepsilon) + \mathbb{P}(\nu_k < N).$$
Take $k \to \infty$ and use $\nu_k \to \infty$ in probability to get,
$$\limsup_{k \to \infty} \mathbb{P}(|Y_{\nu_k}-Y| \geq \varepsilon) \leq \mathbb{P}(Z_N \geq \varepsilon).$$
Now take $N \to \infty$ and use that $Z_N$ converges to $0$ almost surely to complete the proof.
\item \textbf{FALSE}. As a counter-example, take $\nu_k \equiv k$, and $\left\{Y_n : n \geq 1\right\}$ to be independent collection with
$$ \mathbb{P}(Y_n=1)=1/n, \; \mathbb{P}(Y_n=0)=1-1/n, \; \forall \; n \geq 1.$$
Clearly, $\nu_k$ converges to $\infty$ almost surely and $Y_k$ converges to $Y \equiv 0$ in probability. But $Y_{\nu_k}=Y_k$ does not converge to $0$ almost surely since $Y_k$'s are independent and $\sum_{k \geq 1} \mathbb{P}(Y_k=1)=\infty$ implying that $Y_k=1$ infinitely often almost surely.
\item \textbf{FALSE}. As a counter-example, take $\left\{Y_n : n \geq 1\right\}$ to be independent collection with
$$ \mathbb{P}(Y_n=1)=1/n, \; \mathbb{P}(Y_n=0)=1-1/n, \; \forall \; n \geq 1.$$
Clearly, $Y_n$ converges to $Y \equiv 0$ in probability. Since $Y_k$'s are independent and $\sum_{k \geq 1} \mathbb{P}(Y_k=1)=\infty$, we know that $Y_k=1$ infinitely often almost surely. Set
$$\nu_k := \inf \left\{n > \nu_{k-1} : Y_n=1\right\}, \; \forall \; k \geq 1,$$
where $\nu_0:=0$. Therefore $\nu_k \to \infty$ almost surely and hence in probability. But $Y_{\nu_k} \equiv 1$, for all $k \geq 1$ and hence does not converge to $Y$ in probability.
\end{enumerate} |
1991-q3 | Let X be a random variable with 0 < E(X^2) < \infty. Let Y and Z be i.i.d. with the same law as X, and suppose the law of (Y + Z)/\sqrt{2} is the same as the law of X. Show that the law of X is normal with mean 0 and variance \sigma^2 for some positive real \sigma^2. | Let $\mu:=\mathbb{E}(X)$ and $\sigma^2:=\operatorname{Var}(X).$ Since $\mathbb{E}(X^2)< \infty$, both of these quantities are finite. We are given that for $Y,Z$ i.i.d. with same distribution as $X$, we have $X \stackrel{d}{=}(Y+Z)/\sqrt{2}$. Taking expectations on both sides, we get $\mu = \sqrt{2}\mu$ and hence $\mu=0$. Then, $\sigma^2 = \operatorname{Var}(X)=\mathbb{E}(X^2)>0.$
Let $\phi$ be the characteristic function of $X$. By the CLT, we have
\begin{equation}\label{CLT}
\phi\left( \dfrac{t}{\sqrt{n}}\right)^n \stackrel{n \to \infty}{\longrightarrow} \exp \left( -\dfrac{\sigma^2 t^2}{2}\right), \; \forall \; t \in \mathbb{R}.
\end{equation}
By the condition given , we can write
$$ \phi(t) = \mathbb{E}\exp(itX) = \mathbb{E}\exp(it(Y+Z)/\sqrt{2}) = \phi \left(\dfrac{t}{\sqrt{2}} \right)^2, \;\forall \; t \in \mathbb{R}.$$
Using this relation $k$-times we get
$$\phi(t) = \phi \left(\dfrac{t}{\sqrt{2^k}} \right)^{2^k}, \;\forall \; t \in \mathbb{R}, \; k \geq 0.$$
Taking $k \to \infty$ and using (\ref{CLT}) along $n=2^k$, we conclude that $\phi(t)= \exp(-\sigma^2t^2/2)$, for all $t \in \mathbb{R}$. Therefore, $X \sim N(0,\sigma^2)$. |
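As a purely numerical illustration of the converse direction (that $N(0,\sigma^2)$ really is a fixed point of the map $X \mapsto (Y+Z)/\sqrt{2}$), one can compare sample quantiles; this is only a sanity check of the conclusion, not part of the proof, and the value of $\sigma$ and the sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 1.7, 10**6

x = rng.normal(0.0, sigma, size=n)
y = rng.normal(0.0, sigma, size=n)
z = rng.normal(0.0, sigma, size=n)
w = (y + z) / np.sqrt(2.0)

# Quantiles of X and of (Y + Z)/sqrt(2) should agree up to Monte Carlo error.
qs = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(x, qs))
print(np.quantile(w, qs))
```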
1991-q4 | David Williams says , "In the Tale of Peredur ap Efrawg in the very early Welsh folk tales ... there is a magical flock of sheep, some black, some white. We sacrifice poetry for precision in specifying its behaviour. At each of times 1, 2, 3, \cdots a sheep (chosen randomly from the entire flock, independently of previous events) bleats; if this bleating sheep is white, one black sheep (if any remain) instantly becomes white; if the bleating sheep is black, one white sheep (if any remain) instantly becomes black. No births or deaths occur." Suppose the flock starts with n sheep, x of which are black. Find the probability that (a) eventually the flock has black sheep only (b) eventually the flock has white sheep only (c) the flock always has sheep of both colors [Hint: You can try Markov chains, or martingales, or a very elementary approach: let P_z be the probability in (a), and compare P_{z+1} - P_z with P_z - P_{z-1}.] (d) Approximate the probability in (a) when n = 500 and x = 200. | Let $X_n$ be the number of black sheep in the flock after time $n$. Clearly, $X_0=x$ and $\left\{X_m : m \geq 0\right\}$ is a Markov chain on the state space $\left\{0,1, \ldots, n\right\}$ with transition probabilities as follows.
$$ p(i,i+1) = i/n, p(i,i-1) = 1-i/n, \; \forall \; i=1, \ldots, n-1,$$
with $0,n$ being absorbing states.
\begin{enumerate}[label=(\alph*)]
\item Let $P_y := \mathbb{P}(X_m =n, \; \text{for some }m \mid X_0=y),$ where $0 \leq y \leq n$. Then if $y \neq 0,n$,
\begin{align*}
P_y &= \mathbb{P}(X_m =n, \; \text{for some }m \mid X_0=y) \\
&= \mathbb{P}(X_m =n, \; \text{for some }m \geq 1\mid X_1=y+1)(y/n) + \mathbb{P}(X_m =n, \; \text{for some }m \geq 1 \mid X_1=y-1)(1-y/n) \\
&= \mathbb{P}(X_m =n, \; \text{for some }m \mid X_0=y+1)(y/n) + \mathbb{P}(X_m =n, \; \text{for some }m \mid X_0=y-1)(1-y/n) \\
&= P_{y+1}(y/n) +P_{y-1}(1-y/n).
\end{align*}
Thus
$$ (P_{y}-P_{y-1}) = \dfrac{y}{n-y}(P_{y+1}-P_{y}), \; \forall \; 1 \leq y \leq n-1,$$
with the boundary conditions $P_0=0, P_n=1.$ Opening the recursion, we get
$$ P_{y} - P_{y-1} = \left[ \prod_{j=y}^{n-1} \dfrac{j}{n-j} \right](P_{n}-P_{n-1}) = (1-P_{n-1}) \dfrac{(n-1)!}{(y-1)!(n-y)!} = {n-1 \choose y-1} (1-P_{n-1}), \; \; 1 \leq y \leq n.$$
Summing over $y$, we get
\begin{align*}
1= P_n-P_0 &= \sum_{y =1}^n (P_{y}-P_{y-1}) \\
&= (1-P_{n-1}) \sum_{y =1}^{n} {n-1 \choose y-1} = 2^{n-1}(1-P_{n-1}).
\end{align*}
Thus $1-P_{n-1}=2^{-(n-1)}$ and for all $0 \leq x \leq n-1$,
\begin{align*}
1-P_x= P_n-P_x &= \sum_{y =x+1}^n (P_{y}-P_{y-1}) \\
&=(1- P_{n-1}) \sum_{y =x+1}^{n} {n-1 \choose y-1} = 2^{-n+1}\sum_{z=x}^{n-1} {n-1 \choose z}.
\end{align*}
Thus the probability that the flock eventually has black sheep only (and no white sheep) is $P_{x}$ and
$$ P_{x} = 2^{-n+1}\sum_{z=0}^{x-1} {n-1 \choose z},\, \forall \; 1 \leq x \leq n, \; P_0=0.$$
\item Consider a MC $\left\{Y_m : m \geq 0\right\}$ on the state space $\left\{0, \ldots, n\right\}$ with transition probabilities
$$ q(i,i+1) = i/n, \; q(i,i-1) = 1-i/n, \; \forall \; i=1, \ldots, n-1; \; q(0,1)=1, \; q(n,n-1)=1.$$
Since, $p(i, \cdot)=q(i, \cdot)$ for all $1 \leq i \leq n-1$, we have for any $1 \leq x \leq n-1$,
\begin{align*}
\mathbb{P}(X_m \in \{0,n\} \text{ for some } m \geq 0 \mid X_0=x) =\mathbb{P}(Y_m \in \{0,n\} \text{ for some } m \geq 0 \mid Y_0=x) = 1,
\end{align*}
where the last equality is true since $q$ gives an irreducible chain on a finite state space. Hence, if $Q_x$ denotes the probability that the flock eventually has only white sheep, then clearly $Q_0=1$, $Q_n=0$, and by the absorption argument above $Q_x=1-P_x$ for $1 \leq x \leq n-1$. Thus
$$ Q_{x} = 2^{-n+1}\sum_{z=x}^{n-1} {n-1 \choose z},\; \forall \; 0 \leq x \leq n.$$
\item By the argument in part (b), the chain is absorbed in $\left\{0,n\right\}$ with probability $1$, so the probability that the flock always has sheep of both colours is $0$.
\item For large enough $n,x,n-x$,
$$P_x = 2^{-n+1}\sum_{z=0}^{x-1} {n-1 \choose z} = \mathbb{P}(\text{Bin}(n-1,1/2) \leq x-1) \approx \Phi \left(\dfrac{x-1-(n-1)/2}{\sqrt{(n-1)/4}} \right).$$
If $n=500,x=200$, this becomes
$$ P_{200,n=500} \approx \Phi \left(\dfrac{-101}{\sqrt{499}} \right) \approx \Phi(-100/\sqrt{500}) = \Phi(-2\sqrt{5}) \approx 3.87 \times 10^{-6}.$$
\end{enumerate} |
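A short numerical sketch of part (d), comparing the exact formula from part (a) with the normal approximation above ($\Phi$ is computed from `math.erf`); the printed comments are approximate orders of magnitude, not exact values:

```python
import math

n, x = 500, 200

# Exact: P_x = 2^{-(n-1)} * sum_{z=0}^{x-1} C(n-1, z)
exact = sum(math.comb(n - 1, z) for z in range(x)) / 2 ** (n - 1)

def phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Normal approximation Phi((x - 1 - (n-1)/2) / sqrt((n-1)/4)) from the solution.
approx = phi((x - 1 - (n - 1) / 2) / math.sqrt((n - 1) / 4))

print(exact, approx)   # both of order 10^{-6}, consistent with the quoted 3.87e-06
```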
1991-q5 | Let (X_n) be an irreducible Markov chain with countable state space I. Let B \subset I be infinite, and let T_B = \inf\{n \geq 0 : X_n \in B\} be the first hitting time of B, counting a hit at time 0. For x \in I, let f(x) = P(T_B < \infty|X_0 = x) (a) Show that f(X_n) is a supermartingale, with respect to \sigma-fields which you should specify. (b) Show that either f(x) = 1 for all x \in I, or \inf\{f(x): x \in I\} = 0. (c) Show \{X_n \text{ visits } B \text{ infinitely often}\} = \{f(X_n) \to 1\} \text{ a.s.} and \{X_n \text{ visits } B \text{ finitely often}\} = \{f(X_n) \to 0\} \text{ a.s.} [ Hint: use the fact that if F_n \uparrow F, Y_n \to Y a.s. and the Y_n are bounded, then E(Y_n|F_n) \to E(Y|F) a.s. This fact was proved in 310B, so you need not prove it here.] | Let $\mathcal{F}_n:= \sigma(X_k : 0 \leq k \leq n)$, for all $n \geq 0$.
\begin{enumerate}[label=(\alph*)]
\item We know that $x \mapsto p(x,\mathcal{C}) = \mathbb{P}(X_{\cdot} \in \mathcal{C} \mid X_0=x)$ is a measurable function from $I$ to $[0,1]$ for any $\mathcal{C} \in \mathcal{B}_{I}^{\mathbb{Z}_{\geq 0}}$ where $\mathcal{B}_I$ is the Borel $\sigma$-algebra for $I$, which is the power set of $I$ since $I$ is countable. Hence $f$ is a bounded measurable function implying that $f(X_n) \in m \mathcal{F}_n$ with $\mathbb{E}|f(X_n)|< \infty$.
Let $p(\cdot,\cdot)$ be the transition probability for the chain. If $x \in B$, then by definition we have $f(x)=1 = \sum_{y \in I} p(x,y) \geq \sum_{y \in I} f(y)p(x,y)$, since $0 \leq f(y) \leq 1$. If $x \notin B$, then
\begin{align*}
f(x)=\mathbb{P}(T_B< \infty \mid X_0=x) &= \mathbb{P}(X_n \in B \; \text{for some }n \geq 1 \mid X_0=x) \\
&= \sum_{y \in I} \mathbb{P}(X_n \in B \; \text{for some }n \geq 1 \mid X_1=y)\mathbb{P}(X_1=y \mid X_0=x) \\
&= \sum_{y \in I} \mathbb{P}(X_n \in B \; \text{for some }n \geq 0 \mid X_0=y)p(x,y) = \sum_{y \in I} f(y)p(x,y).
\end{align*}
Hence, for all $n \geq 1$,
\begin{align*}
\mathbb{E}(f(X_n)\mid \mathcal{F}_{n-1}) = \mathbb{E}(f(X_n)\mid X_{n-1}) = \sum_{y \in I} f(y)p(X_{n-1},y) \leq f(X_{n-1}),
\end{align*}
establishing that $\left\{f(X_n), \mathcal{F}_n, n \geq 0\right\}$ is a super-MG.
\item Suppose $\inf \left\{f(x) : x \in I\right\} =c >0$.
Also define $\Gamma^B := (X_n \in B \text{ i.o.})$. Note that $\Gamma_{n} :=\bigcup_{m \geq n}(X_m \in B) \downarrow \Gamma^B$, in other words $\mathbbm{1}_{\Gamma_n} \stackrel{a.s.}{\longrightarrow} \mathbbm{1}_{\Gamma^B}$.
Let $h: (I^{\mathbb{Z}_{\geq 0}},\mathcal{B}_{I}^{\mathbb{Z}_{\geq 0}}) \to (\mathbb{R}, \mathcal{B}_{\mathbb{R}})$ be the bounded measurable function
defined as
$$ h(\left\{x_i :i \geq 0\right\}) = \mathbbm{1}(x_i \in B, \; \text{ for some } i \geq 0).$$
By strong Markov property, we have
$$ \mathbb{E}\left[ h(X_{n+\cdot}) \mid \mathcal{F}_n\right](\omega) = \mathbb{E}_{X_n(\omega)}[h] = f(X_n(\omega)) .$$
By the hypothesis, we have almost surely,
$$ \mathbb{E} \left[ \mathbbm{1}_{\Gamma_n} \mid \mathcal{F}_n \right] = \mathbb{P}(X_m \in B, \; \text{ for some } m \geq n \mid \mathcal{F}_n) = \mathbb{E}\left[ h(X_{n+\cdot}) \mid \mathcal{F}_n\right] = f(X_n) \geq c,$$
Since $\mathbbm{1}_{\Gamma_n} \stackrel{a.s.}{\longrightarrow} \mathbbm{1}_{\Gamma^B}$, the indicators are uniformly bounded and $\mathcal{F}_n \uparrow \mathcal{F}_{\infty}$, the fact quoted in the hint yields
$$ \mathbb{E} \left[ \mathbbm{1}_{\Gamma_{n}} \mid \mathcal{F}_n \right] \stackrel{a.s.}{\longrightarrow} \mathbb{E} \left[ \mathbbm{1}_{\Gamma^B} \mid \mathcal{F}_{\infty} \right] = \mathbbm{1}_{\Gamma^B},$$
where $\mathcal{F}_{\infty}=\sigma(X_0, X_1, \ldots,)$. Thus we can conclude that $\mathbbm{1}_{\Gamma^B} \geq c >0$ almost surely. Hence, $\mathbb{P}(T_B < \infty) \geq \mathbb{P}(\Gamma^B) =1$ for any choice of $X_0$. In particular if we take $X_0\equiv x$, then we get $f(x)=1$.
\item We have actually proved in part (b) that $f(X_n) \stackrel{a.s.}{\longrightarrow} \mathbbm{1}_{\Gamma^B}$. Thus
$$ (f(X_n) \to 1) = \Gamma^B = (X_n \in B \text{ i.o. }), \; \text{a.s.},$$
and
$$ (f(X_n) \to 0) = (\Gamma^B)^c = (X_n \in B \text{ f.o. }), \; \text{a.s.}.$$
\end{enumerate} |
1991-q6 | Let S_n = X_2 + X_3 + \cdots + X_n where the X_j's are independent, and P(X_j = j) = 1/j, P(X_j = 0) = 1 - 1/j, \ j \geq 2. (a) Discuss the behavior of S_n as n \to \infty. (b) Find constants \{a_n\} and positive constants \{b_n\} so that the law of (S_n - a_n)/b_n tends to a nondegenerate limit as n \to \infty. | We have $X_j$'s independent with $\mathbb{P}(X_j=j)=j^{-1}$ and $\mathbb{P}(X_j=0)=1-1/j$ for all $j \geq 1$, and $S_n = \sum_{j=1}^n X_j$. (The problem starts the sum at $j=2$; since $X_1 \equiv 1$, including it only shifts $S_n$ by $1$ and does not affect any of the asymptotics below.)
\begin{enumerate}[label=(\alph*)]
\item We have
$$ \sum_{j \geq 1} \mathbb{P}(X_j=j) = \sum_{j \geq 1} j^{-1}=\infty,$$
and hence by \textit{Borel-Cantelli Lemma II}, we can conclude $\mathbb{P}(X_j=j \text{ i.o.})=1$. Since on the event $(X_j=j \text{ i.o.})$ we have $S_n \to \infty$, we can say that $S_n \stackrel{a.s.}{\longrightarrow} \infty$ as $n \to \infty.$
\item
Note that $\mathbb{E}(S_n) = \sum_{j=1}^n \mathbb{E}(X_j) = n$ and $\operatorname{Var}(S_n) = \sum_{j=1}^n \operatorname{Var}(X_j) = \sum_{j=1}^n (j-1)=n(n-1)/2.$ Thus we expect that the proper scaling and centering would likely be $a_n=n$, $b_n=n$. In other words, it is enough to evaluate the asymptotics of $S_n/n$.
Fix $t \in \mathbb{R}$. We get,
$$ \mathbb{E} \exp\left[ it\dfrac{S_n}{n}\right] = \prod_{k=1}^n \mathbb{E} \exp\left[ it\dfrac{X_k}{n}\right] = \prod_{k=1}^n \left[ \dfrac{1}{k}\exp \left( \dfrac{itk}{n}\right) + \left(1-\dfrac{1}{k}\right) \right] = \prod_{k=1}^n z_{n,k}(t),$$
where
$$ z_{n,k}(t) :=\dfrac{1}{k}\exp \left( \dfrac{itk}{n}\right) + \left(1-\dfrac{1}{k}\right) , \; \forall \; 1 \leq k \leq n.$$
We intend to apply ~\cite[Lemma 3.3.31]{dembo}. Observe that
\begin{align*}
\sum_{k=1}^n (1-z_{n,k}(t)) = \sum_{k=1}^n \left( 1-\dfrac{1}{k}\exp \left( \dfrac{itk}{n}\right) - \left(1-\dfrac{1}{k}\right) \right) &= \sum_{k=1}^n \dfrac{1}{k}\left(1-\exp \left( \dfrac{itk}{n}\right) \right) \\
&= \dfrac{1}{n}\sum_{k=1}^n \dfrac{n}{k}\left(1-\exp \left( \dfrac{itk}{n}\right) \right) \\
& \longrightarrow \int_{0}^1 \dfrac{1-\exp(itx)}{x}\, dx,
\end{align*}
and
\begin{align*}
\sum_{k=1}^n |1-z_{n,k}(t)|^2 = \sum_{k=1}^n \Bigg \rvert 1-\dfrac{1}{k}\exp \left( \dfrac{itk}{n}\right) - \left(1-\dfrac{1}{k}\right) \Bigg \rvert^2 &= \sum_{k=1}^n \dfrac{1}{k^2}\Bigg \rvert 1-\exp \left( \dfrac{itk}{n}\right) \Bigg \rvert^2 \\
&\leq \sum_{k=1}^n \dfrac{1}{k^2} \left( \dfrac{|t|k}{n}\right)^2 = \dfrac{t^2}{n} \to 0,
\end{align*}
using $|1-e^{ix}| \leq |x|$.
Therefore,
$$ \mathbb{E} \exp\left[ it\dfrac{S_n}{n}\right] = \prod_{k=1}^n (1+z_{n,k}(t)-1) \longrightarrow \exp \left( - \int_{0}^1 \dfrac{1-\exp(itx)}{x}\, dx\right) =: \psi(t).$$
Since, $|e^{ix}-1| \leq |x|$, we have
$$ \int_{0}^1 \Bigg \rvert\dfrac{1-\exp(itx)}{x}\Bigg \rvert\, dx \leq \int_{0}^1 |t| \, dx =|t| < \infty.$$
Hence, $\psi$ is a well defined function. The above inequality also shows that $\psi(t) \to 1 =\psi(0)$ as $t \to 0$. Hence, by \textit{Levy's Continuity Theorem}, we conclude that $S_n/n \stackrel{d}{\longrightarrow} G$ where $G$ is a distribution function with corresponding characteristic function being $\psi$.
From the expression of $\psi$, we can be certain that $G$ is not a degenerate distribution: otherwise we would have $|\psi(t)|=1$ for all $t$, i.e.,
$$ \int_{0}^1 \dfrac{1-\cos (tx)}{x} \, dx =0, \; \forall \; t \in \mathbb{R},$$
which is impossible since the integrand is non-negative and strictly positive on a set of positive measure whenever $t \neq 0$. Hence $a_n=0$ (or $a_n=n$) and $b_n=n$ work. |
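A minimal numerical sketch comparing the empirical characteristic function of $S_n/n$ with the limiting $\psi(t)$ (computed by a midpoint-rule quadrature); the choices of $n$, the number of replications, the test points $t$, and the quadrature grid are all arbitrary, and the agreement is only approximate because of finite-$n$ bias and Monte Carlo error:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1000, 5000

j = np.arange(1, n + 1)
hits = rng.random((reps, n)) < (1.0 / j)      # X_j = j with probability 1/j, else 0
S = (hits * j).sum(axis=1)

def psi(t, m=100_000):
    x = (np.arange(m) + 0.5) / m              # midpoint rule on (0, 1]
    return np.exp(-np.mean((1.0 - np.exp(1j * t * x)) / x))

for t in (0.5, 1.0, 2.0):
    emp = np.mean(np.exp(1j * t * S / n))     # empirical E[exp(it S_n / n)]
    print(t, emp, psi(t))
```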
1991-q7 | Find a sequence of Borel measurable functions f_n : [0, 1]^n \to \{0, 1\} such that whenever X_1, X_2, X_3, \cdots are i.i.d. random variables with values in [0, 1], f_n(X_1, X_2, \cdots, X_n) \to 0 \text{ a.s. in case } E(X) < 1/2, and f_n(X_1, X_2, \cdots, X_n) \to 1 \text{ a.s. in case } E(X) \geq 1/2. | Define $f_n :[0,1]^n \to \left\{0,1\right\}$ as
$$ f_n(x_1,\ldots,x_n) =
\mathbbm{1}\left( \sum_{i=1}^n x_i \geq \dfrac{n}{2} - n^{2/3}\right), \; \forall \; x_1, \ldots,x_n \in [0,1], \; n \geq 1.$$
Clearly $f_n$ is Borel-measurable. Now suppose $X_1, X_2,\ldots$ are \textit{i.i.d.} $[0,1]$-valued random variables with $\mu=\mathbb{E}(X_1)$ and $S_n := \sum_{k=1}^n X_k$. Then
$$ f_n(X_1, \ldots, X_n) = \mathbbm{1}(S_n \geq n/2 - n^{2/3}).$$
For $\mu <1/2$, we have by SLLN, $S_n/n \stackrel{a.s.}{\longrightarrow} \mu$ and hence
$$ S_n - \dfrac{n}{2} +n^{2/3} = n\left( \dfrac{S_n}{n}-\dfrac{1}{2}+n^{-1/3}\right) \stackrel{a.s.}{\longrightarrow} -\infty \Rightarrow f_n(X_1, \ldots, X_n) \stackrel{a.s.}{\longrightarrow} 0.$$
For $\mu >1/2$, we have by SLLN, $S_n/n \stackrel{a.s.}{\longrightarrow} \mu$ and hence
$$ S_n - \dfrac{n}{2} +n^{2/3} = n\left( \dfrac{S_n}{n}-\dfrac{1}{2}+n^{-1/3}\right) \stackrel{a.s.}{\longrightarrow} \infty \Rightarrow f_n(X_1, \ldots, X_n) \stackrel{a.s.}{\longrightarrow} 1.$$
For $\mu=1/2$, note that since $X_1$ takes values in $[0,1]$, the centered variable $X_1-1/2$ is $1/4$-sub-Gaussian (Hoeffding's lemma) and thus $S_n-n/2$ is $n/4$-sub-Gaussian. Hence,
$$ \mathbb{P}\left( S_n - \dfrac{n}{2} < -t\right) \leq \exp\left(-\dfrac{2t^2}{n}\right), \; \forall \; t \geq 0,$$
and thus
$$ \sum_{n \geq 1} \mathbb{P}(f_n(X_1, \ldots, X_n)=0) = \sum_{n \geq 1} \mathbb{P} \left( S_n - \dfrac{n}{2} < -n^{2/3}\right) \leq \sum_{n \geq 1} \exp \left( -2n^{1/3}\right)<\infty.$$
Applying \textit{Borel-Cantelli Lemma I}, we conclude that
$ f_n(X_1, \ldots, X_n) \stackrel{a.s.}{\longrightarrow} 1.$ |
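A small simulation sketch of the classifier $f_n=\mathbbm{1}(S_n \ge n/2 - n^{2/3})$, using Beta laws as illustrative input distributions (Beta$(a,b)$ has mean $a/(a+b)$, so the three cases below have mean below, equal to, and above $1/2$); all numeric choices are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def f_n(x):
    n = len(x)
    return int(x.sum() >= n / 2 - n ** (2 / 3))

n = 100_000
cases = {"mean<1/2": (2, 3), "mean=1/2": (2, 2), "mean>1/2": (3, 2)}
for name, (a, b) in cases.items():
    x = rng.beta(a, b, size=n)
    print(name, f_n(x))   # expected outputs (with high probability): 0, 1, 1
```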
1992-q1 | A gambler wins or loses $1 according as he bets correctly or incorrectly on the outcome of tossing a coin which comes up heads with probability p and tails with probability q = 1 - p. It is known that p \neq 1/2, but not whether p > q or p < q. The gambler uses the following strategy. On the first toss he bets on heads. Thereafter he bets on heads if and only if the accumulated number of heads on past tosses at least equals the accumulated number of tails. Suppose the gambler's initial fortune is $k, where k is a positive integer. What is the probability that the gambler is eventually ruined? | Let $W_n$ be the gambler's winnings up to time $n$. For the sake of simplicity of notation, we shall assume that the gambler's winnings can go down to $-\infty$ and the ruin probability is defined as the probability that $W_n \leq 0$ for some $n$. We are given that $W_0=k$. Let $S_n$ be the difference between the number of heads and the number of tails up to time $n$. $\left\{S_n : n \geq 0\right\}$ is thus a SRW on $\mathbb{Z}$ with probability of going to the right being $p=1-q$. The betting strategy then boils down to ``Bet on heads at the $n$-th toss if and only if $S_{n-1} \geq 0$".
The proof depends upon the following crucial but tricky observation. For all $n \geq 1$,
$$ (W_n-|S_n|)-(W_{n-1}-|S_{n-1}|) = -2\mathbbm{1}(S_{n-1}=0, S_n=-1).$$
To make sense of the above observation, we consider a case by case basis.
\begin{itemize}
\item When $S_{n-1}>0$, the gambler bets on heads at the $n$-th toss. If the coin turns up head, $W_n=W_{n-1}+1$ and $|S_n| = S_n=S_{n-1}+1=|S_{n-1}|+1.$ Similarly, if the coin turns up tail, $W_n-W_{n-1}=-1, |S_n| - |S_{n-1}| = -1.$
\item When $S_{n-1}<0$, the gambler bets on tails at the $n$-th toss. If the coin turns up head, $W_n=W_{n-1}-1$ and $|S_n| = -S_n=-S_{n-1}-1=|S_{n-1}|-1.$ Similarly, if the coin turns up tail, $W_n-W_{n-1}=1, |S_n| - |S_{n-1}| = 1.$
\item When $S_{n-1}=0$ and $S_n=1$, it means the gambler bets on heads on $n$-th toss and the coin turns up head. In this case $W_n=W_{n-1}+1$ and hence the observation holds.
\item When $S_{n-1}=0$ and $S_n=-1$, it means the gambler bets on heads on $n$-th toss and the coin turns up tail. In this case $W_n=W_{n-1}-1$ and hence the observation holds.
\end{itemize}
Now define the stopping times as follows. $\tau_0:=0$ and
$$ \tau_j := \inf \left\{ n >\tau_{j-1} : S_n=-1, S_{n-1}=0\right\}, \; j \geq 1.$$
Also let $Y_n := W_n -|S_n|$ for all $n \geq 0$. The observation now implies that
$$ Y_{\tau_j}\mathbbm{1}(\tau_j < \infty) = (Y_0-2j)\mathbbm{1}(\tau_j < \infty) = (k-2j)\mathbbm{1}(\tau_j < \infty).$$
Introduce the notation $P_{i,j} := \mathbb{P}(S_n=j, \text{ for some n} \mid S_0=i)$, where $i,j \in \mathbb{Z}$. From the theory of SRW, we have $P_{i,j}=P_{0,1}^{j-i}$, for $i<j$ and $P_{i,j}=P_{0,-1}^{i-j}$ for $i >j$, with
$$ P_{0,1} = \left( \dfrac{p}{q}\right) \wedge 1, \; P_{0,-1} = \left( \dfrac{q}{p}\right) \wedge 1.$$
\textit{Strong Markov property} implies that
$$ \mathbb{P}(\tau_j < \infty) = \mathbb{P}(\tau_1< \infty) (\mathbb{P}(\tau < \infty))^{j-1}, \; \forall \; j \geq 1,$$
where
$$ \mathbb{P}(\tau_1 < \infty) = \mathbb{P}( S_{n-1}=0, S_n=-1, \text{ for some } n \geq 1 \mid S_0=0) = \mathbb{P}(S_n=-1 \text{ for some } n \geq 1 \mid S_0=0)=P_{0,-1},$$
and
$$ \mathbb{P}(\tau < \infty) = \mathbb{P}( S_{n-1}=0, S_n=-1, \text{ for some } n \geq 1 \mid S_0=-1) = \mathbb{P}(S_n=-1, S_m=0 \text{ for some } n \geq m \geq 1 \mid S_0=-1).$$
To see why the above statements are true note that the first visit to $-1$, say at time $n$, after visiting $0$ at time $m \leq n$, must be preceded by a visit to $0$ at time $n-1$. By \textit{Strong Markov Property},
\begin{align*}
\mathbb{P}(\tau < \infty) &= \mathbb{P}(S_n=-1, S_m=0 \text{ for some } n \geq m \geq 1 \mid S_0=-1)\\
& = \mathbb{P}(S_n=0, \text{ for some } n \mid S_0=-1)\mathbb{P}(S_n=-1, \text{ for some n} \mid S_0=0)=P_{0,1}P_{0,-1}.
\end{align*}
Combining these observations together we get
$$ \mathbb{P}(\tau_j < \infty) = P_{0,-1}^{j}P_{0,1}^{j-1}, \; \forall \; j \geq 1.$$
Now the stopping time of our interest is $T := \inf \left\{n \geq 1 : W_n \leq 0\right\} = \inf \left\{n \geq 1 : W_n = 0\right\}.$ Since $W_n = Y_n + |S_n| \geq Y_n$ and $Y_n$ is non-increasing in $n$, we have
$$ T \geq \inf \left\{n \geq 1 : Y_n \leq 0\right\} = \tau_{k/2}\mathbbm{1}(k \text{ is even }) + \tau_{(k+1)/2}\mathbbm{1}(k \text{ is odd }) = \tau_{\lfloor \frac{k+1}{2}\rfloor}. $$
Now we shall consider these even and odd cases separately.
\begin{itemize}
\item[\textbf{Case 1} :] $k=2m-1$ for some $m \in \mathbb{N}$.
In this case $T \geq \tau_m$. On the other hand, if $\tau_m < \infty$, then $Y_{\tau_m}=-1$ and by definition $S_{\tau_m}=-1$, which implies that $W_{\tau_m}=0$ and hence $T \leq \tau_m.$ This yields that $T=\tau_m$ and hence
$$ \mathbb{P}(T < \infty) = \mathbb{P}(\tau_m < \infty) = P_{0,-1}^{m}P_{0,1}^{m-1}.$$
\item[\textbf{Case 2} :] $k=2m$ for some $m \in \mathbb{N}$.
In this case $T \geq \tau_m$. On the other hand, if $\tau_m < \infty$, then $Y_{\tau_m}=0$ and by definition $S_{\tau_m}=-1$, which implies that $W_{\tau_m}=1$. Let $\theta_{m+1}$ be the first time the walk $S$ returns to $0$ after time $\tau_m$, provided that $\tau_m$ is finite. Clearly, $\theta_{m+1} < \tau_{m+1}$ and for any $\tau_m \leq l < \theta_{m+1}$, we have $Y_l=Y_{\tau_m}=0$ (since the process $Y$ is constant on the interval $[\tau_m, \tau_{m+1})$), and $W_l = |S_l| + Y_l = |S_l| >0$. Thus $T \geq \theta_{m+1}$. But $Y_{\theta_{m+1}}=0$ with $S_{\theta_{m+1}}=0$, implying that $W_{\theta_{m+1}}=0$. Hence $T=\theta_{m+1}.$
Apply \textit{Strong Markov Property} to get
\begin{align*}
\mathbb{P}(T < \infty) &= \mathbb{P}(\theta_{m+1}< \infty) = \mathbb{P}(\tau_m < \infty) \mathbb{P}(S_n=0, \text{ for some n } \mid S_0=-1) = P_{0,-1}^{m}P_{0,1}^{m-1}P_{-1,0} = P_{0,-1}^{m}P_{0,1}^{m}.
\end{align*}
\end{itemize}
Combining these two cases together we can write
$$ \mathbb{P}(\text{Ruin} \mid W_0=k) = P_{0,-1}^{\lceil k/2 \rceil} P_{0,1}^{\lfloor k/2 \rfloor} = \left( \dfrac{q}{p} \wedge 1\right)^{\lceil k/2 \rceil} \left( \dfrac{p}{q} \wedge 1\right)^{\lfloor k/2 \rfloor}= \begin{cases}
\left( \dfrac{q}{p}\right)^{\lceil k/2 \rceil}, & \text{if } p>q, \\
1, & \text{if } p=q, \\
\left( \dfrac{p}{q} \right)^{\lfloor k/2 \rfloor}, & \text{if } p<q.
\end{cases}$$ |
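A Monte Carlo sketch of the final formula, for the case $p>q$; the time horizon is truncated, which can only underestimate the ruin probability slightly, and all numeric choices ($p$, $k$, sample size, horizon) are arbitrary:

```python
import math
import random

def ruined(k, p, horizon=2000):
    """Run the described strategy until ruin or a (truncated) horizon."""
    w, s = k, 0                        # fortune W_n and S_n = #heads - #tails
    for _ in range(horizon):
        bet_heads = s >= 0             # bet on heads iff S_{n-1} >= 0
        heads = random.random() < p
        w += 1 if heads == bet_heads else -1
        s += 1 if heads else -1
        if w == 0:
            return True
    return False

p, k, n_sim = 0.6, 5, 5000
q = 1 - p
est = sum(ruined(k, p) for _ in range(n_sim)) / n_sim
print(est, (q / p) ** math.ceil(k / 2))   # both should be roughly 0.30
```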
1992-q2 | Let X_1, X_2, \cdots be independent, with E(X_n) = 0 for all n. Let S_n = X_1 + X_2 + \cdots + X_n and suppose \sup_n E(|S_n|) < \infty. Let T be an a.s. finite stopping time with respect to the natural filtration of the X's. Show E(S_T) = 0. | We have independent mean-zero random variables $X_1, X_2, \ldots$ such that $\sup_{n \geq 1} \mathbb{E}|S_n| =C < \infty$, where $S_n = \sum_{k=1}^n X_k$. We shall use \textit{Ottaviani's Inequality}. For completeness we are adding a proof of it here.
Take any $t,s >0$. Let $B_n:=(|S_1|, |S_2|, \ldots, |S_{n-1}| < t+s, \; |S_n| \geq t+s)$ for all $n \geq 1$, i.e. $n$ is the first index at which $|S_n| \geq t+s$. Clearly the events $B_n$ are mutually disjoint. Using the fact that $B_k$ is independent of $S_n-S_k$ for all $n \geq k$, we have the following.
\begin{align*}
\mathbb{P}\left(\max_{k=1}^n |S_k| \geq t+s\right) = \mathbb{P}\left(\bigcup_{k=1}^n B_k\right) = \sum_{k=1}^n \mathbb{P}(B_k) &= \sum_{k=1}^n \mathbb{P}(B_k, |S_n| \geq t) + \sum_{k=1}^n \mathbb{P}(B_k, |S_n| < t) \\
& \leq \mathbb{P}(|S_n| \geq t) + \sum_{k=1}^{n-1} \mathbb{P}(B_k, |S_n| < t) \\
&\leq \mathbb{P}(|S_n| \geq t) + \sum_{k=1}^{n-1} \mathbb{P}(B_k, |S_n-S_k| > s) \\
&\leq \mathbb{P}(|S_n| \geq t) + \sum_{k=1}^{n-1} \mathbb{P}(B_k) \mathbb{P}(|S_n-S_k| > s) \\
& \leq \mathbb{P}(|S_n| \geq t) + \dfrac{2C}{s}\sum_{k=1}^{n-1} \mathbb{P}(B_k) \leq \mathbb{P}(|S_n| \geq t)+ \dfrac{2C\mathbb{P}(\max_{k=1}^n |S_k| \geq t+s)}{s}.
\end{align*}
where the bound $\mathbb{P}(|S_n-S_k|>s) \leq 2C/s$ follows from Markov's inequality and $\mathbb{E}|S_n-S_k| \leq \mathbb{E}|S_n|+\mathbb{E}|S_k| \leq 2C$. Now choose $s >2C$. Then
\begin{align*}
\left(1-\dfrac{2C}{s} \right) \mathbb{P} \left(\max_{k=1}^n |S_k| \geq t+s \right) = \left(1-\dfrac{2C}{s} \right) \mathbb{P} \left(\left(\max_{k=1}^n |S_k|- s\right)_{+} \geq t \right) \leq \mathbb{P}(|S_n| \geq t).
\end{align*}
Integrating over $t$ from $0$ to $\infty$, we get
$$ \mathbb{E} \left( \max_{k=1}^n |S_k|- s\right)_{+} \leq \left(1-\dfrac{2C}{s}\right)^{-1} \mathbb{E}|S_n| \leq \dfrac{Cs}{s-2C}.$$
Take $s=3C$ and conclude that
$$ \mathbb{E}\left( \max_{k=1}^n |S_k| \right) \leq \mathbb{E} \left( \max_{k=1}^n |S_k|- 3C\right)_{+} + 3C \leq 6C.$$ Taking $n \uparrow \infty$, we conclude that $\mathbb{E}(\sup_{n \geq 1} |S_n|) < \infty$. Now $S_n$ is a MG with respect to its canonical filtration and $T$ is a stopping time. Apply OST to get that $\mathbb{E}(S_{T \wedge n})=0$ for all $n$. Since $T<\infty$ almost surely, $S_{T \wedge n}$ converges to $S_T$ almost surely. Moreover,
$$ \mathbb{E}\left( \max_{k\geq 1} |S_{T \wedge k}| \right) \leq \mathbb{E}\left( \max_{k\geq 1} |S_{k}| \right) < \infty,$$
and hence $\left\{S_{T \wedge n}, n \geq 1\right\}$ is uniformly integrable. Use \textit{Vitali's Theorem} to conclude that $\mathbb{E}(S_T)=0.$ |
1992-q3 | Suppose X is N(0,1), \epsilon is N(0, \sigma^2) and is independent of X, and Y = X + \epsilon. A statistician observes the value of Y and must decide whether the (unobserved) inequality |Y - X| \le |X| is satisfied. Consider the following two classes of strategies: (a) For some c \ge 0 predict that |Y - X| \le |X| is satisfied if |Y| \ge c. (b) For some 0 < p < 1 predict that |Y - X| \le |X| is satisfied if P(|Y - X| \le |X| \mid Y) \ge p. Are these classes of strategies equivalent? If so, what is the relation between c and p? | As a first step note that
$$ (X,Y) \sim N_2 \left[\begin{pmatrix}
0 \\
0
\end{pmatrix}, \begin{pmatrix}
1 & 1 \\
1 & 1+ \sigma^2
\end{pmatrix} \right], $$
and hence
$$ X \mid Y \sim N \left[ \dfrac{Y}{1+\sigma^2},\dfrac{\sigma^2}{1+\sigma^2} \right]=: N(\nu Y, \gamma^2).$$
Note that almost surely,
\begin{align*}
\mathbb{P}\left( |Y-X| \leq |X| \mid Y \right) &= \mathbb{P}\left( |Y-X| \leq |X|, X >0 \mid Y \right) + \mathbb{P}\left( |Y-X| \leq |X|, X <0 \mid Y \right) \\
&= \mathbb{P} \left( 0 \leq Y \leq 2X \mid Y\right) + \mathbb{P} \left( 2X \leq Y \leq 0 \mid Y\right) \\
& =\mathbb{P}(X \geq Y/2 \mid Y)\mathbbm{1}(Y \geq 0) + \mathbb{P}(X \leq Y/2 \mid Y)\mathbbm{1}(Y \leq 0) \\
&= \bar{\Phi}\left( Y \dfrac{1/2-\nu}{\gamma}\right)\mathbbm{1}(Y \geq 0) + \Phi\left( Y \dfrac{1/2-\nu}{\gamma}\right)\mathbbm{1}(Y \leq 0) \\
&= \bar{\Phi}\left( \dfrac{(1-2\nu)|Y|}{2\gamma}\right) = \Phi\left( \dfrac{(2\nu-1)|Y|}{2\gamma}\right).
\end{align*}
Thus
\begin{enumerate}[label=(\Roman*)]
\item If $\nu > 1/2$, i.e., $\sigma^2 <1$, then
$$ \Phi\left( \dfrac{(2\nu-1)|Y|}{2\gamma}\right) \geq p \iff |Y| \geq \dfrac{2\gamma \Phi^{-1}(p)}{2\nu-1}.$$
\item If $\nu < 1/2$, i.e., $\sigma^2 >1$, then
$$ \Phi\left( \dfrac{(2\nu-1)|Y|}{2\gamma}\right) \geq p \iff |Y| \leq \dfrac{2\gamma \Phi^{-1}(p)}{1-2\nu}.$$
\item If $\nu=1/2$, i.e., $\sigma^2=1$, then $\Phi\left( \dfrac{(2\nu-1)|Y|}{2\gamma}\right)=\Phi(0)=1/2$ is a constant.
\end{enumerate}
Thus the two strategies are equivalent only if $\sigma^2<1$ and in that case
$$ c = \dfrac{2\gamma \Phi^{-1}(p)}{2\nu-1} = \dfrac{2\sigma \sqrt{1+\sigma^2} \Phi^{-1}(p)}{1-\sigma^2}.$$ |
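A short numerical sketch checking that, for $\sigma^2<1$, rule (a) with the displayed threshold $c$ and rule (b) make identical predictions on simulated values of $Y$; $\Phi^{-1}$ is obtained by bisection on `math.erf`, and the parameter choices are arbitrary:

```python
import math
import numpy as np

def Phi(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    for _ in range(100):                      # simple bisection
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
    return (lo + hi) / 2.0

sigma2, p = 0.5, 0.8
nu = 1.0 / (1.0 + sigma2)
gamma = math.sqrt(sigma2 / (1.0 + sigma2))
c = 2.0 * math.sqrt(sigma2) * math.sqrt(1.0 + sigma2) * Phi_inv(p) / (1.0 - sigma2)

rng = np.random.default_rng(3)
Y = rng.normal(0.0, math.sqrt(1.0 + sigma2), size=100_000)   # marginal law of Y

rule_a = np.abs(Y) >= c
rule_b = np.array([Phi((2 * nu - 1) * abs(y) / (2 * gamma)) for y in Y]) >= p
print(np.all(rule_a == rule_b))               # True (boundary ties have probability zero)
```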
1992-q4 | Suppose X_0 is uniformly distributed on [0, 1], and for n \ge 1 define X_n recursively by X_n = 2X_{n-1} - \lfloor 2X_{n-1} \rfloor where \lfloor x \rfloor denotes the greatest integer less than or equal to x. Let S_n = X_1 + X_2 + \cdots + X_n. Find constants a_n and b_n such that (S_n - a_n)/b_n converges in distribution as n \to \infty to the standard normal law. Justify your answer. | We have $X_0\sim \text{Uniform}([0,1])$ and $X_n=2X_{n-1}-\lfloor 2X_{n-1}\rfloor$, for all $n \geq 1$. Recall that $X_0 = \sum_{j \geq 1} 2^{-j}Y_{j-1}$, where $Y_0, Y_1, \ldots \stackrel{iid}{\sim} \text{Ber}(1/2)$. Thus $(Y_0,Y_1, \ldots,)$ is the binary representation of $X_0$. Observe the following fact.
$$ x=\sum_{j \geq 1} 2^{-j}x_{j-1}, \; x_i \in \left\{0,1\right\}, \; \forall \; i \geq 0 \Rightarrow 2x - \lfloor 2x \rfloor =\sum_{j \geq 1} 2^{-j}x_{j}.$$
Using this fact recursively, we get that
$$ X_n = \sum_{j \geq 1} 2^{-j}Y_{n+j-1}, \; \; \forall \; n \geq 0.$$
Therefore, for all $n \geq 1$,
\begin{align*}
S_n = \sum_{k=1}^n X_k = \sum_{k=1}^n \sum_{j \geq 1} 2^{-j}Y_{k+j-1} &= \sum_{k=1}^n Y_k \left( \sum_{l=1}^k 2^{-l} \right) + \sum_{k >n} Y_k \left( \sum_{l=k-n+1}^{k} 2^{-l}\right) \\
&= \sum_{k=1}^n (1-2^{-k})Y_k + (1-2^{-n})\sum_{k >n} 2^{-(k-n)}Y_k \\
&= \sum_{k=1}^n (1-2^{-k})Y_k + (1-2^{-n})\sum_{l \geq 1} 2^{-l}Y_{l+n}\\
& = \sum_{k=1}^n (1-2^{-k})Y_k + (1-2^{-n})X_{n+1} = \sum_{k=1}^n Y_k + R_n,
\end{align*}
where $R_n = -\sum_{k=1}^n 2^{-k}Y_k + (1-2^{-n})X_{n+1}.$ Clearly, $ |R_n| \leq \sum_{k \geq 1} 2^{-k} + (1-2^{-n}) \leq 2.$ By CLT, we have
$$ \dfrac{\sum_{k=1}^n Y_k - (n/2)}{\sqrt{n/4}} \stackrel{d}{\longrightarrow} N(0,1),$$
and hence
$$ \dfrac{2(S_n-R_n)-n}{\sqrt{n}}\stackrel{d}{\longrightarrow} N(0,1).$$
On the otherhand, $R_n/\sqrt{n}$ converges to $0$ in probability. Hence, using \textit{Slutsky's Theorem}, we conclude that
$$ \dfrac{S_n - (n/2)}{\sqrt{n/4}} \stackrel{d}{\longrightarrow} N(0,1),$$
i.e., $a_n=n/2$ and $b_n = \sqrt{n/4}.$ |
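A simulation sketch of the bit representation used above. Iterating the doubling map in floating point loses one random bit per step and degenerates after about 53 iterations, so the i.i.d. bits $Y_k$ are generated directly and each $X_k$ is reconstructed from a truncated binary expansion; all sizes are arbitrary:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(4)
n, reps, bits = 1000, 2000, 60
w = 0.5 ** np.arange(1, bits + 1)              # weights 2^{-1}, ..., 2^{-bits}

Z = np.empty(reps)
for r in range(reps):
    Y = rng.integers(0, 2, size=n + bits)      # i.i.d. fair bits Y_0, Y_1, ...
    # X_k = sum_{j>=1} 2^{-j} Y_{k+j-1}, truncated after `bits` terms (error < 2^{-bits})
    X = sliding_window_view(Y, bits)[1:n + 1] @ w
    Z[r] = (X.sum() - n / 2) / np.sqrt(n / 4)  # (S_n - a_n) / b_n

print(Z.mean(), Z.std())                       # should be close to 0 and 1
print(np.mean(Z <= 1.0))                       # should be close to Phi(1) ~ 0.8413
```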
1992-q5 | Let \{Z_n\} be independent and identically distributed, with mean 0 and finite variance. Let T_n be a sequence of random variables such that T_n \to \alpha a.s. as n \to \infty, for some constant \alpha with |\alpha| < 1. Let X_0 = 0, and define X_n recursively by X_{n+1} = T_n X_n + Z_{n+1} Prove that n^{-1} \sum_{i=1}^n X_i \to 0 a.s. as n \to \infty. | We have $Z_1,Z_2, \ldots$ independent and identically distributed with mean $0$ and finite variance, say $\sigma^2$. $T_n$ converges to $\alpha \in (-1,1)$ almost surely. $X_0=0$ and $X_{n+1}=T_nX_n+Z_{n+1}, \; \forall \; n \geq 0$. Opening up the recursion we get
\begin{align*}
X_{n+1} = Z_{n+1}+T_nX_n = Z_{n+1}+T_n(Z_n+T_{n-1}X_{n-1}) &= Z_{n+1}+T_nZ_n + T_nT_{n-1}X_{n-1} \\
&= \cdots \\
&= Z_{n+1} + T_nZ_n + T_nT_{n-1}Z_{n-1} + \cdots + T_nT_{n-1}\cdots T_1Z_1.
\end{align*}
In other words,
$$ X_n = \sum_{k=1}^{n-1} Z_k U_{k:n-1} + Z_n, \; \forall \; n \geq 1,$$
where $U_{i:j}:= \prod_{m=i}^j T_m.$ As a first step, we shall show that
\begin{align*}
\dfrac{1}{n}\sum_{k=1}^n X_k = \dfrac{1}{n} \sum_{k=1}^n \left[ \sum_{l=1}^{k-1} Z_l U_{l:k-1} + Z_k \right] &= \dfrac{1}{n} \sum_{k=1}^{n-1} Z_k \left[ 1+T_k+T_kT_{k+1}+\cdots+T_k \cdots T_{n-1}\right] + \dfrac{Z_n}{n} \\
&= \dfrac{1}{n} \sum_{k=1}^{n-1} Z_k \left[ 1+\alpha+\alpha^2+\cdots+\alpha^{n-k}\right] + \dfrac{Z_n}{n} + R_n \\
&= \dfrac{1}{n} \sum_{k=1}^{n} Z_k \left[ 1+\alpha+\alpha^2+\cdots+\alpha^{n-k}\right] + R_n,
\end{align*}
where we shall show that $R_n$ converges almost surely to $0$. Take $\omega$ such that $T_k(\omega) \to \alpha \in (-1,1)$ and $n^{-1}\sum_{k=1}^n |Z_k(\omega)| \to \mathbb{E}|Z|$. Fix $\varepsilon \in (0, (1-|\alpha|)/2)$ and get $N(\omega)\in \mathbb{N}$ such that $|T_n(\omega)-\alpha| \leq \varepsilon$ for all $n > N(\omega)$. Then $|T_n(\omega)| \leq |\alpha|+\varepsilon =\gamma <1,$ for all $n > N(\omega)$. We shall use the following observation. For any $a_1,\ldots, a_m, b_1, \ldots, b_m \in (-C,C)$, we have
\begin{align*}
\Bigg \rvert \prod_{k=1}^m a_k - \prod_{k=1}^m b_k\Bigg \rvert \leq \sum_{k=1}^m |a_1\ldots a_kb_{k+1}\ldots b_m - a_1\ldots a_{k-1}b_{k}\ldots b_m| \leq mC^{m-1} \max_{k=1}^m |a_k-b_k|.
\end{align*}
Using this we get for large enough $n$,
\begin{align*}
\Bigg \rvert \dfrac{1}{n} \sum_{k=N(\omega)+1}^{n-1} Z_k \left[ 1+T_k+T_kT_{k+1}+\cdots+T_k \cdots T_{n-1}\right] &+ \dfrac{Z_n}{n} - \dfrac{1}{n} \sum_{k=N(\omega)+1}^{n} Z_k \left[ 1+\alpha+\alpha^2+\cdots+\alpha^{n-k}\right]\Bigg \rvert \\
&\leq \dfrac{1}{n}\sum_{k=N(\omega)+1}^{n-1} |Z_k| \Bigg \rvert \sum_{l=k}^{n-1} (U_{k:l}-\alpha^{l-k+1}) \Bigg \rvert \\
& \leq \dfrac{1}{n}\sum_{k=N(\omega)+1}^{n-1} \sum_{l=k}^{n-1} |Z_k||U_{k:l}-\alpha^{l-k+1}| \\
& \leq \dfrac{1}{n}\sum_{k=N(\omega)+1}^{n-1} \sum_{l=k}^{n-1} |Z_k|(l-k+1)\gamma^{l-k}\varepsilon \\
& \leq \dfrac{A \varepsilon}{n}\sum_{k=N(\omega)+1}^{n-1} |Z_k|,
\end{align*}
where $A:= \sum_{k \geq 1} k\gamma^{k-1}<\infty$ as $\gamma<1$. We have omitted $\omega$ from the notations for the sake of brevity.
On the otherhand,
\begin{align*}
\Bigg \rvert \dfrac{1}{n} \sum_{k=1}^{N(\omega)} Z_k \left[ 1+T_k+T_kT_{k+1}+\cdots+T_k \cdots T_{n-1}\right] - \dfrac{1}{n} \sum_{k=1}^{N(\omega)} Z_k \left[ 1+\alpha+\alpha^2+\cdots+\alpha^{n-k}\right]\Bigg \rvert \\
\leq \dfrac{1}{n}\sum_{k=1}^{N(\omega)} |Z_k| \left(S_k + \dfrac{1}{1-|\alpha|}\right),
\end{align*}
where $S_k=1+\sum_{l \geq k} \left( \prod_{m=k}^l |T_m|\right)$ which is a convergent series since $T_k(\omega)\to \alpha \in (-1,1)$. This follows from root test for series convergence, since $\left( \prod_{m=k}^l |T_m|\right)^{1/(l-k)} \to |\alpha| <1$. Combining these we get
$$ |R_n(\omega)| \leq \dfrac{A \varepsilon}{n}\sum_{k=N(\omega)+1}^{n-1} |Z_k(\omega)| + \dfrac{1}{n}\sum_{k=1}^{N(\omega)} |Z_k(\omega)| \left(S_k(\omega) + \dfrac{1}{1-|\alpha|}\right),$$
which implies that
$$ \limsup_{n \to \infty} |R_n(\omega)| \leq A\varepsilon \mathbb{E}|Z|.$$
Since the above holds true for all $\varepsilon >0$, we have proved that $R_n(\omega) \to 0$ and hence consequently $R_n$ converges to $0$ almost surely. It is now enough to prove that
$$ \dfrac{1}{n} \sum_{k=1}^{n} Z_k \left[ 1+\alpha+\alpha^2+\cdots+\alpha^{n-k}\right] = \dfrac{1}{n} \sum_{k=1}^{n} \dfrac{1-\alpha^{n-k+1}}{1-\alpha}Z_k \stackrel{a.s.}{\longrightarrow} 0.$$
Since $n^{-1}\sum_{k=1}^n Z_k$ converges to $\mathbb{E}Z=0$ almost surely, it is enough to prove that for $\alpha \neq 0$,
$$ \dfrac{1}{n} \sum_{k=1}^{n} \alpha^{n-k}Z_k = \dfrac{(\operatorname{sgn}(\alpha))^n}{|\alpha|^{-n}n} \sum_{k=1}^{n} \alpha^{-k}Z_k \stackrel{a.s.}{\longrightarrow} 0.$$
Since $n|\alpha|^{-n} \uparrow \infty$, we employ \textit{Kronecker's Lemma} and it is enough to prove that
$\sum_{n \geq 1} n^{-1}|\alpha|^n \alpha^{-n}Z_n$ converges almost surely. Since $Z_n$'s are independent mean $0$ variables with finite second moment,
$M_n=\sum_{k = 1}^n k^{-1}|\alpha|^k \alpha^{-k}Z_k$ is a $L^2$-MG with respect to canonical filtration of $Z$ and predictable compensator
$$ \langle M \rangle_{n} = \sum_{k=1}^n k^{-2}\mathbb{E}(Z_k^2) \leq \mathbb{E}(Z^2) \sum_{k \geq 1} k^{-2}<\infty,\; \forall \; n \geq 1.$$
Thus $M_n$ is a $L^2$-bounded MG and therefore converges almost surely. This completes our proof. |
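A simulation sketch of the recursion with a concrete deterministic choice $T_n=\alpha+1/(n+1)\to\alpha$ (so the a.s. convergence hypothesis holds trivially) and standard normal noise, illustrating that the running average tends to $0$; all numeric choices are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, N = 0.7, 200_000

Z = rng.normal(size=N + 1)
X = np.zeros(N + 1)
for n in range(N):
    T_n = alpha + 1.0 / (n + 1.0)          # deterministic sequence converging to alpha
    X[n + 1] = T_n * X[n] + Z[n + 1]

running_mean = np.cumsum(X[1:]) / np.arange(1, N + 1)
print(running_mean[[999, 9_999, 99_999, N - 1]])   # should shrink toward 0
```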
1992-q6 | Let X_1, X_2, \cdots be independent normal variables, with E(X_n) = \mu_n and Var(X_n) = \sigma_n^2. Show that \sum_n X_n^2 converges a.s. or diverges a.s. according as \sum_n(\mu_n^2 + \sigma_n^2) converges or diverges. | We have $X_n \stackrel{ind}{\sim} N(\mu_n, \sigma_n^2).$ Then $\mathbb{E}(X_n^2) = \mu_n^2+\sigma_n^2,$ for all $n \geq 1$.
Suppose first that $\sum_{n \geq 1} (\mu_n^2 + \sigma_n^2)< \infty$. This implies that
$$\mathbb{E} \left( \sum_{n \geq 1} X_n^2 \right) = \sum_{n \geq 1} \mathbb{E}(X_n^2) = \sum_{n \geq 1} (\mu_n^2 + \sigma_n^2)< \infty,$$
and hence $\sum_{n \geq 1} X_n^2$ converges almost surely.
Now suppose that $\sum_{n \geq 1} (\mu_n^2 + \sigma_n^2)= \infty$. Since $(\sum_{n \geq 1} X_n^2 < \infty)$ is in the tail $\sigma$-algebra of the collection $\left\{X_1, X_2, \ldots\right\}$, we can apply \textit{Kolmogorov $0-1$ Law} and conclude that $\mathbb{P}(\sum_{n \geq 1} X_n^2 < \infty) \in \left\{0,1\right\}$. To prove that $\sum_{n \geq 1} X_n^2$ diverges almost surely, it is therefore enough to show that $\sum_{n \geq 1} X_n^2$ does not converge almost surely. We go by contradiction. Suppose $\sum_{n \geq 1} X_n^2$ converges almost surely. Apply \textit{Kolmogorov three series theorem} to conclude that
$$ \sum_{n \geq 1} \mathbb{P}(X_n^2 \geq 1) < \infty, \; \sum_{n \geq 1} \mathbb{E}(X_n^2\mathbbm{1}(X_n^2 \leq 1)) < \infty.$$
Therefore,
$$ \sum_{n \geq 1} \mathbb{E}(X_n^2\wedge 1) \leq \sum_{n \geq 1} \mathbb{E}(X_n^2\mathbbm{1}(X_n^2 \leq 1)) + \sum_{n \geq 1}\mathbb{P}(X_n^2 \geq 1) < \infty.$$
On the otherhand, if $Z \sim N(0,1)$, then
$$ \infty >\sum_{n \geq 1} \mathbb{E}(X_n^2\wedge 1) = \mathbb{E} \left[\sum_{n \geq 1} (X_n^2 \wedge 1) \right] = \mathbb{E} \left[\sum_{n \geq 1} ((\sigma_n Z+\mu_n)^2 \wedge 1) \right],$$
and hence
\begin{align*}
\sum_{n \geq 1} ((\sigma_n z+\mu_n)^2 \wedge 1) < \infty, \; a.e.[z] &\Rightarrow \sum_{n \geq 1} (\sigma_n z+\mu_n)^2 < \infty, \; a.e.[z] \\
&\Rightarrow \sum_{n \geq 1} (\sigma_n z+\mu_n)^2 + \sum_{n \geq 1} (-\sigma_n z+\mu_n)^2 < \infty, \; a.e.[z] \\
&\Rightarrow z^2\sum_{n \geq 1} \sigma_n^2 + \sum_{n \geq 1} \mu_n^2 < \infty, \; a.e.[z] \\
&\Rightarrow \sum_{n \geq 1} (\mu_n^2+\sigma_n^2)< \infty,
\end{align*}
which gives a contradiction. |
1992-q7 | Let X_1, X_2, \cdots be independent, and for n \ge 1 let S_n = X_1 + X_2 + \cdots + X_n. Let \{a_n\} and \{b_n\} be sequences of constants so that P(X_n \ge a_n \ i.o.) = 1 and P(S_{n-1} \ge b_n) \ge \epsilon > 0 for all n. Show P(S_n \ge a_n + b_n \ i.o.) \ge \epsilon | Let $X_1, X_2, \ldots$ be an independent sequence. Let $\mathcal{G}_n = \sigma(X_k : k \geq n)$. Then by \textit{Kolmogorov $0-1$ Law}, the tail $\sigma$-algebra $\mathcal{G}_{\infty} = \bigcap_{n \geq 1} \mathcal{G}_n$ is trivial.
$S_n = \sum_{k=1}^n X_k$ for all $n \geq 1$ and $\left\{a_n\right\}_{n \geq 1}$, $\left\{b_n\right\}_{n \geq 1}$ are two sequences of constants. Let $B_n := \bigcup_{m \geq n} (S_m \geq a_m+b_m).$ Then
$$ (S_n \geq a_n+b_n \text{ i.o.}) = B = \bigcap_{n \geq 1} B_n.$$
We are given that $\mathbb{P}(S_{n-1} \geq b_n) \geq \varepsilon >0$ for all $n \geq 2$. Hence, for all $n \geq 2$,
\begin{align*}
\mathbb{E}\left(\mathbbm{1}_{B_{n}} \mid \mathcal{G}_n \right) = \mathbb{P}(B_{n} \mid \mathcal{G}_n) \geq \mathbb{P}(S_{n} \geq a_n+b_n \mid \mathcal{G}_n) \stackrel{(i)}{=} \mathbb{P}(S_{n} \geq a_n+b_n \mid X_n) & \geq \mathbb{P}(S_{n-1} \geq b_n, X_n \geq a_n \mid X_n) \\
&= \mathbbm{1}(X_n \geq a_n)\mathbb{P}(S_{n-1} \geq b_n \mid X_n) \\
&=\mathbbm{1}(X_n \geq a_n)\mathbb{P}(S_{n-1} \geq b_n) \geq \varepsilon \mathbbm{1}(X_n \geq a_n),
\end{align*}
where $(i)$ follows from the fact that $(S_n,X_n) \perp\!\!\perp (X_{n+1},X_{n+2}, \ldots)$. Since, $B_n \downarrow B$, we use \textit{Levy's Downward Theorem} to conclude that $\mathbb{E}\left(\mathbbm{1}_{B_{n}} \mid \mathcal{G}_n \right) \stackrel{a.s.}{\longrightarrow} \mathbb{E}\left(\mathbbm{1}_{B} \mid \mathcal{G}_{\infty} \right) = \mathbb{P}(B),$ since $\mathcal{G}_{\infty}$ is trivial. Therefore,
$$ \mathbb{P}(B) \geq \varepsilon \limsup_{n \to \infty} \mathbbm{1}(X_n \geq a_n) = \varepsilon, \; \text{a.s.},$$
since $X_n \geq a_n$ infinitely often with probability $1$. Thus
$$ \mathbb{P}(S_n \geq a_n+b_n \text{ i.o.}) = \mathbb{P}(B) \geq \varepsilon.$$ |
1993-q1 | I have 3 empty boxes. I throw balls into the boxes, uniformly at random and independently, until the first time that no two boxes contain the same number of balls. Find the expected number of balls thrown. | Let $X_n := (X_{n,1}, X_{n,2}, X_{n,3})$ be the distribution of the balls in the three boxes after $n$ throws. Let $A_n$ be the event that after $n$ throws no two boxes have the same number of balls, $B_n$ the event that exactly two boxes have the same number of balls, and $C_n$ the event that all three boxes have the same number of balls. Then define $$ T:= \inf \left\{ n \geq 0 : X_{n,1}, X_{n,2}, X_{n,3} \text{ are all distinct}\right\},$$ the number of balls thrown when we stop. Our first step would be to show that $\mathbb{E}(T)< \infty.$ Define the $\sigma$-algebra $\mathcal{F}_n :=\sigma(X_{m,i} : 1\leq i \leq 3, \; m \leq n).$ It is easy to see that for any $(k_1,k_2,k_3) \in \mathbb{Z}_{\geq 0}^3$, there exists $(l_1,l_2,l_3) \in \left\{0,1,2,3\right\}^3$ such that $l_1+l_2+l_3=3$ and the $k_i+l_i$'s are all distinct. \begin{itemize} \item If $k_1=k_2=k_3$, take $l_1=2,l_2=1,l_3=0$. \item If $k_1=k_2 >k_3$, take $l_1=3, l_2=l_3=0.$ If $k_1=k_2<k_3$ take $l_1=0, l_2=1, l_3=2$. Similar observations can be made for $k_1\neq k_2=k_3$ and $k_1=k_3 \neq k_2.$ \item If $k_i$'s are all distinct, take $l_1=l_2=l_3=1$. \end{itemize} This shows that $ \mathbb{P}(A_{n+3} \mid X_n) \geq (1/3)^3 =:\delta>0$, almost surely. Therefore, $$ \mathbb{P}(T > n+3 \mid \mathcal{F}_n) = \mathbb{P}(T > n+3 \mid \mathcal{F}_n)\mathbbm{1}(T>n) \leq \mathbb{P}(A_{n+3}^c \mid \mathcal{F}_n)\mathbbm{1}(T>n) \leq (1-\delta)\mathbbm{1}(T>n), $$ and hence $\mathbb{P}(T>n+3)\leq (1-\delta)\mathbb{P}(T>n)$ for all $n \geq 0$. Using induction, we conclude that $\mathbb{P}(T>3n)\leq (1-\delta)^n,$ and hence $\mathbb{P}(T>n)\leq (1-\delta)^{\lfloor n/3 \rfloor}.$ Hence, $$ \mathbb{E}(T) = \sum_{n \geq 0} \mathbb{P}(T>n) \leq \sum_{n \geq 0} (1-\delta)^{\lfloor n/3 \rfloor} =\dfrac{3}{\delta}< \infty.$$ Let us now evaluate $\mathbb{E}(T)$. It is easy to see that $T \geq 3$ almost surely and $\mathbb{P}(T=3)=\mathbb{P}(A_3)=2/3$. If $C_3$ happens, i.e. all the boxes have one ball each after $3$ throws, then clearly we need another $T^{\prime}$ many throws, where $T^{\prime}$ is distributed the same as $T$, given that $C_3$ happens. Note that $\mathbb{P}(C_3)=6/27.$ The only way $B_3$ can happen is when one box has all the three balls after three throws. In that case the first time we place one ball in one of the now empty boxes, all the boxes would have different numbers of balls. So we need $G$ many more throws, where $G \sim \text{Geometric}(2/3)$, conditioned on $B_3$ happening. Note that $\mathbb{P}(B_3)=3/27$. Combining these observations we can write, \begin{align*} T = 3\mathbbm{1}_{A_3} + (3+T^{\prime})\mathbbm{1}_{C_3} + (3+G)\mathbbm{1}_{B_3} &\Rightarrow \mathbb{E}(T) = 3\mathbb{P}(A_3) + (3+\mathbb{E}(T))\mathbb{P}(C_3)+(3+\mathbb{E}(\text{Geo}(2/3)))\mathbb{P}(B_3) \\ & \Rightarrow \mathbb{E}(T) = 2+(3+\mathbb{E}(T))(2/9)+(3+3/2)(1/9) \\ & \Rightarrow \mathbb{E}(T) = 3 + \dfrac{2}{9}\mathbb{E}(T) + \dfrac{1}{6} \Rightarrow \mathbb{E}(T) = \dfrac{57}{14}, \end{align*} where the last implication is valid since $\mathbb{E}(T)< \infty$. |
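A quick Monte Carlo sketch of the answer $\mathbb{E}(T)=57/14\approx 4.07$; the sample size is arbitrary:

```python
import random

def throws_until_all_distinct():
    counts = [0, 0, 0]
    t = 0
    while len(set(counts)) < 3:              # some two boxes still share a count
        counts[random.randrange(3)] += 1
        t += 1
    return t

n_sim = 200_000
est = sum(throws_until_all_distinct() for _ in range(n_sim)) / n_sim
print(est, 57 / 14)                          # both should be about 4.07
```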
1993-q2 | Let \( X_1, X_2, \ldots \) be independent random variables with the distributions described below. Let \( S_n = X_1 + X_2 + \cdots + X_n \). In each of parts a) through d), find constants \( a_n \) and \( b_n \to \infty \) such that \( (S_n - a_n)/b_n \) converges in law as \( n \to \infty \) to a non-degenerate limit, and specify the limit distribution. If no such constants exist, say so. Prove your answers.
a) \( P(X_k = k) = P(X_k = -k) = 1/2k^2 \)\\
\( P(X_k = 0) = 1 - 1/k^2 \)
b) For some \( 0 < \alpha < \infty \),\\
\( P(X_k = k^\alpha) = P(X_k = -k^\alpha) = 1/2k^{\alpha/2} \)\\
\( P(X_k = 0) = 1 - 1/k^{\alpha/2} \)
c) The \( X_k \)'s are identically distributed, with probability density function\\
\( f(x) = \frac{1}{\pi} \frac{\sigma}{\sigma^2 + (x - \mu)^2}, \quad -\infty < x < \infty \)\\
for some \( -\infty < \mu < \infty \) and \( \sigma > 0 \).
d) \( P(X_k = 2^{k/2}) = P(X_k = -2^{k/2}) = 1/2^{k+1} \)\\
\( P(X_k = 1) = P(X_k = -1) = (1 - 1/2^k)/2 \) | \begin{enumerate}[label=(\alph*)] \item In this case $$ \sum_{n \geq 1} \mathbb{P}(X_n \neq 0) = \sum_{n \geq 1} n^{-2} < \infty,$$ and hence by \textit{Borel-Cantelli Lemma I}, we conclude that $\mathbb{P}(X_n = 0, \; \text{eventually for all } n) =1$. This implies that $S_n$ converges almost surely as $n \to \infty$. Now suppose there exists constants $a_n,b_n$ such that $b_n \to \infty$ and $(S_n-a_n)/b_n \stackrel{d}{\longrightarrow} G$, where $G$ is some non-degenerate distribution. Since $S_n$ converges almost surely, we have $S_n/b_n \stackrel{a.s.}{\longrightarrow} 0$ and by \textit{Slutsky's Theorem}, we get that a (non)-random sequence $-a_n/b_n$ converges weakly to non-degenerate $G$, which gives a contradiction. \item For $\alpha >2$, we have $$ \sum_{n \geq 1} \mathbb{P}(X_n \neq 0) = \sum_{n \geq 1} n^{-\alpha/2} < \infty,$$ and hence following the arguments presented in (a), we can conclude that there is no such $a_n,b_n$. Now consider the case $0 < \alpha <2$. Here we have $\mathbb{E}(X_n)=0$ and $$ s_n^2 = \sum_{k=1}^n \operatorname{Var}(X_k) = \sum_{k=1}^n \mathbb{E}(X_k^2) = \sum_{k=1}^n k^{3\alpha/2} \sim \dfrac{n^{1+3\alpha/2}}{1+3\alpha/2} \longrightarrow \infty,$$ where we have used the fact that $\sum_{k=1}^n k^{\gamma} \sim (\gamma+1)^{-1}n^{\gamma+1}$, for $\gamma >-1$. In this case $n^{\alpha}/s_n \sim n^{\alpha/4 - 1/2}$ converges to $0$ and hence for any $\varepsilon >0$, we can get $N(\varepsilon)\in \mathbb{N}$ such that $|X_n| \leq n^{\alpha} < \varepsilon s_n$, almost surely for all $n \geq N(\varepsilon)$. Therefore, $$ \dfrac{1}{s_n^2} \sum_{k=1}^n \mathbb{E}\left[ X_k^2, |X_k|\geq \varepsilon s_n\right] = \dfrac{1}{s_n^2} \sum_{k=1}^{N(\varepsilon)} \mathbb{E}\left[ X_k^2, |X_k|\geq \varepsilon s_n\right] \leq \dfrac{1}{s_n^2} \sum_{k=1}^{N(\varepsilon)} \mathbb{E}\left[ X_k^2\right] \longrightarrow 0,$$ implying that \textit{Lindeberg condition} is satisfied. Therefore, $ S_n/s_n$ converges weakly to standard Gaussian distribution. In other words, $$ \dfrac{S_n}{n^{1/2 + 3\alpha/4}} \stackrel{d}{\longrightarrow} N \left(0,\dfrac{1}{1+3\alpha/2}\right).$$. Finally consider the case $\alpha=2$. In light of the fact that $S_n$ is symmetric around $0$, We focus on finding the asymptotics of $S_n/b_n$ for some scaling sequence $\left\{b_n\right\}$ with $b_n \uparrow \infty$, to be chosen later. Fix $t \in \mathbb{R}$. Since, the variables $X_k$'s have distributions which are symmetric around $0$, we get, $$ \mathbb{E} \exp\left[ it\dfrac{S_n}{b_n}\right] = \prod_{k=1}^n \mathbb{E} \exp\left[ it\dfrac{X_k}{b_n}\right] = \prod_{k=1}^n \mathbb{E} \cos \left[ t\dfrac{X_k}{b_n}\right] = \prod_{k=1}^n \left[ \dfrac{1}{k}\cos \left( \dfrac{tk^{2}}{b_n}\right) + \left(1-\dfrac{1}{k}\right)\right] = \prod_{k=1}^n g_{n,k}(t), $$ where $$ g_{n,k}(t) := \dfrac{1}{k}\cos \left( \dfrac{tk^{2}}{b_n}\right) + \left(1-\dfrac{1}{k}\right), \; \forall \; 1 \leq k \leq n.$$ Now observe that, \begin{align*} 0 \leq 1-g_{n,k}(t) &= \dfrac{1}{k}\left\{1-\cos \left( \dfrac{tk^{2}}{b_n}\right)\right\} = \dfrac{2}{k}\sin^2 \left( \dfrac{tk^{2}}{2b_n}\right) \leq \dfrac{2}{k} \left( \dfrac{tk^{2}}{2b_n}\right)^2 \leq \dfrac{t^2k^{3}}{2b_n^2}. \end{align*} Take $b_n=n^{2}$. 
Then \begin{align*} \sum_{k=1}^n (1-g_{n,k}(t)) &= 2\sum_{k=1}^n \dfrac{1}{k}\sin^2 \left( \dfrac{tk^{2}}{2n^{2}}\right) = \dfrac{2}{n} \sum_{k=1}^n \dfrac{n}{k}\sin^2 \left( \dfrac{tk^{2}}{2n^{2}}\right)= 2 \int_{0}^1 \dfrac{1}{x} \sin^2 \left(\dfrac{tx^{2}}{2}\right) \, dx + o(1) \; \text{ as } n \to \infty. \end{align*} Note that the integral above is finite since $$ 2\int_{0}^1 \dfrac{\sin^2(tz^{2}/2)}{z} \, dz \leq 2 \int_{0}^1 \dfrac{t^2z^{4}}{4z} \, dz = 2\int_{0}^1 \dfrac{t^2z^{3}}{4} \, dz = \dfrac{t^2}{8} < \infty.$$ Also note that $$ \max_{1 \leq k \leq n} |1-g_{n,k}(t)| \leq \max_{1 \leq k \leq n} \dfrac{t^2k^{3}}{n^{4}} \leq \dfrac{t^2 n^{3}}{n^{4}} \to 0,$$ and hence $$ \sum_{k=1}^n |1-g_{n,k}(t)|^2 \leq \max_{1 \leq k \leq n} |1-g_{n,k}(t)| \sum_{k=1}^n (1-g_{n,k}(t)) \longrightarrow 0.$$ Therefore, using ~\cite[Lemma 3.3.31]{dembo}, we conclude that $$ \mathbb{E} \exp\left(\dfrac{itS_n}{n^{2}} \right) \longrightarrow \exp \left[-2 \int_{0}^1 \dfrac{1}{x} \sin^2 \left(\dfrac{tx^{2}}{2}\right) \, dx \right] =: \psi (t), \; \forall \; t \in \mathbb{R}.$$ To conclude convergence in distribution for $S_n/n^{2}$, we need to show that $\psi$ is continuous at $0$ (see ~\cite[Theorem 3.3.18]{dembo}). But this follows readily from the DCT since $\sup_{-1 \leq t \leq 1} \sin^2(tz^{2}/2)/z \leq z^{3}$, which is integrable on $[0,1]$. So by \textit{Levy's continuity Theorem}, $S_n/n^{2}$ converges in distribution to $G$ with characteristic function $\psi$. \item In this case $X_i$'s are \textit{i.i.d.} from the Cauchy distribution with location parameter $\mu$ and scale parameter $\sigma$. Hence, $\mathbb{E}\exp(it\sigma^{-1}(X_k-\mu)) = \exp(-|t|)$ for all $t \in \mathbb{R}$ and for all $k$. Hence, $$ \mathbb{E} \exp \left( \dfrac{it(S_n-n\mu)}{\sigma}\right) = \exp(-n|t|) \Rightarrow \mathbb{E} \exp \left( \dfrac{it(S_n-n\mu)}{n\sigma}\right) = \exp(-|t|), \; \forall \; t \in \mathbb{R}.$$ Thus $(S_n-n\mu)/(n\sigma) \sim \text{Cauchy}(0,1)$ for all $n \geq 1$ and thus $a_n=n\mu$ and $b_n=n\sigma$ work, with the limiting distribution being the standard Cauchy. \item We have $\mathbb{P}(X_n=2^{n/2})=\mathbb{P}(X_n=-2^{n/2})=2^{-n-1}$ and $\mathbb{P}(X_n=1)=\mathbb{P}(X_n=-1)=\frac{1}{2}(1-2^{-n})$. Note that $$ \sum_{n \geq 1} \mathbb{P}(|X_n| \neq 1) = \sum_{n \geq 1} 2^{-n} < \infty,$$ and hence with probability $1$, $|X_n|=1$ eventually for all $n$. Let $Y_n = \operatorname{sgn}(X_n)$. Clearly, $Y_i$'s are \textit{i.i.d.} $\text{Uniform}(\left\{-1,+1\right\})$ variables and hence $ (\sum_{k=1}^n Y_k)/\sqrt{n}$ converges weakly to a standard Gaussian variable. By the previous argument, $\mathbb{P}(X_n=Y_n, \; \text{eventually for } n)=1$ and hence $n^{-1/2}\left(S_n - \sum_{k=1}^n Y_k \right) \stackrel{p}{\longrightarrow}0.$ Use \textit{Slutsky's Theorem} to conclude that $S_n/\sqrt{n}$ converges weakly to the standard Gaussian distribution. \end{enumerate} |
1993-q3 | A deck consists of \( n \) cards. Consider the following simple shuffle: the top card is picked up and inserted into the deck at a place picked uniformly at random from the \( n \) available places (\( n-2 \) places between the remaining \( n-1 \) cards, one place on top, and one on the bottom).
This shuffle is repeated many times. Let \( T \) be the number of shuffles required for the original bottom card to come to the top of the deck. And let \( V = T+1 \) be the number of shuffles required for the original bottom card to come to the top and be reinserted into the deck.
a) Show that after \( V \) shuffles all \( n! \) arrangements of cards in the deck are equally likely.
b) Find \( E(V) \), as a function of \( n \). For large \( n \), which of the terms below best represents the order of magnitude of \( E(V) \)? Explain.
\( n^3 \quad n^2 \log n \quad n^2 \quad n \log n \quad n \quad \sqrt{n \log n} \quad \sqrt{n} \)
c) Repeat part b), with \( E(V) \) replaced by \( SD(V) \).
d) Show that \( P(V > v) \leq n[(n-1)/n]^v \) for \( v > n \).
e) Show that \( P(V > n \log n + cn) \leq e^{-c} \) for all \( c > 0 \). | \begin{enumerate}[label=(\alph*)] \item Without loss of generality, we assume that initially the deck is arranged as $1, \ldots, n$ with the top card being the card labelled $1$. In the following discussion, one unit of time will refer to one shuffle. We define $n$ stopping times. For $k=1, \ldots, n-1$, let $\tau_k$ be the first time there are $k$ cards (irrespective of their labels) in the deck below the card labelled $n$. It is easy to see that $\tau_1 < \cdots < \tau_{n-1}=T.$ Let $A_k := \left\{a_{k,1}, \ldots, a_{k,k}\right\}$ be the (random) set of cards which are below the card labelled $n$ at time $\tau_k$, with $a_{k,1}$ being the label of the card just below the card labelled $n$ and so on. We claim that $(a_{k,1}, \ldots, a_{k,k}) \mid A_k \sim \text{Uniform}(S_{A_k})$ where $S_{B}$ is the collection of all permutations of elements of a set $B$. This claim will be proved by induction. The base case $k=1$ is trivially true. Assuming it holds true for $k <m \leq n-1$, we condition on $A_m$ and $(a_{m-1,1}, \ldots, a_{m-1,m-1})$. The conditioning fixes the only card in the set $A_m \setminus A_{m-1}$ and by the shuffle dynamics, it is placed (conditionally) uniformly at random in one of the $m$ places; $m-2$ places between the cards of $A_{m-1}$, one place just below the card labelled $n$ and one at the bottom. Clearly by the induction hypothesis, this makes $$(a_{m,1}, \ldots, a_{m,m}) \mid \left(A_m, (a_{m-1,1}, \ldots, a_{m-1,m-1}) \right) \sim \text{Uniform}(S_{A_m}).$$ This implies that $(a_{m,1}, \ldots, a_{m,m}) \mid A_m \sim \text{Uniform}(S_{A_m}).$ The claim hence holds true. Note that by design, $A_{n-1}=\left\{1, \ldots, n-1\right\}$. So after $T=\tau_{n-1}$ shuffles, when we remove the card labelled $n$ from the top, the remaining deck is a uniform random element from $S_{n-1}$. Therefore, after inserting the $n$-labelled card at a random place in this deck, we get a uniform random element from $S_n$. This proves that after $V=T+1$ shuffles, all $n!$ arrangements of the deck are equally likely. \item Note that $V=T+1 = 1+\sum_{k=1}^{n-1}(\tau_k-\tau_{k-1})$, with $\tau_0:=0$. Conditioned on $\tau_{k-1}, A_{k-1}$, the dynamics are as follows. We take the top card and place it at random (independently of everything that happened before) in one of the $n$ places. Out of those $n$ places, placement in one of the $k$ positions below the card labelled $n$ will take us to an arrangement where there are $k$ cards below the card labelled $n$. Thus conditioned on $\tau_{k-1}, A_{k-1}$, we have $\tau_{k}-\tau_{k-1}$ to be the first success time of a geometric trial with success probability $k/n$. Since the distribution does not depend on $\tau_{k-1}, A_{k-1}$, we have $$ \tau_{k}-\tau_{k-1} \stackrel{ind}{\sim} \text{Geo}(k/n), \; k=1, \ldots, n-1.$$ Therefore, \begin{align*} \mathbb{E}(V) = 1 + \sum_{k=1}^{n-1} \mathbb{E}(\tau_k-\tau_{k-1}) = 1 + \sum_{k=1}^{n-1} \dfrac{n}{k} = n \left(\sum_{k=1}^n \dfrac{1}{k} \right) = n (\log n + \gamma + o(1)) \sim n \log n, \end{align*} with $\gamma$ being the \textit{Euler constant}; so among the listed terms, $n \log n$ best represents the order of magnitude of $\mathbb{E}(V)$. \item By independence we have \begin{align*} \operatorname{Var}(V) = \sum_{k=1}^{n-1} \operatorname{Var}(\tau_k-\tau_{k-1}) = \sum_{k=1}^{n-1} \dfrac{1-(k/n)}{(k/n)^2} = n^2 \sum_{k=1}^{n-1} k^{-2} - n \sum_{k=1}^{n-1} k^{-1} &= n^2 \left(\dfrac{\pi^2}{6}+o(1)\right) - n (\log n + \gamma +o(1)) \\ & \sim \dfrac{\pi^2 n^2}{6}.
\end{align*} Therefore, $\operatorname{SD}(V) \sim \dfrac{\pi n}{\sqrt{6}}.$ \item Recall the \textit{Coupon collector's problem}. We take $U_1, U_2, \ldots \stackrel{iid}{\sim} \text{Uniform}(\left\{1, \ldots, n\right\})$ and let $$\theta_k := \inf \left\{ m \geq 1 : |\left\{U_1, \ldots, U_m\right\}| = k\right\}.$$ Clearly, $\theta_k-\theta_{k-1} \stackrel{ind}{\sim} \text{Geo}\left( \dfrac{n-k+1}{n}\right)$ for $k =1, \ldots, n$, with $\theta_0:=0$. Hence $V \stackrel{d}{=} \theta_n$ and for any $v \geq 1$, $$ \mathbb{P}(V >v) = \mathbb{P}(\theta_n >v) \leq \sum_{k=1}^n \mathbb{P}(U_i \neq k, \; \forall \;1 \leq i \leq v) = n \left( 1- \dfrac{1}{n} \right)^v = n[(n-1)/n]^v.$$ \item Plug in $v=n\log n +c n$ in part (d) for some $c>0$. Apply the inequality that $(1-x) \leq e^{-x}$ for all $x$. $$ \mathbb{P}(V > n\log + cn) \leq n\left( 1- \dfrac{1}{n}\right)^{n \log n + cn} \leq n \exp\left( -\dfrac{n \log n + cn}{n}\right) = n \exp(-\log n - c) = e^{-c}.$$ \end{enumerate} |
1993-q4 | Let \( X_0, X_1, \ldots, X_n \) be a martingale such that \( X_0 = 0 \) a.s., and \( \text{Var}(X_n) = \sigma^2 < \infty \). Define times \( T_0, T_1, \ldots, T_n \) inductively as follows: \( T_0 = 0, \) and for \( 1 \leq i \leq n, \) \( T_i \) is the smallest \( t > T_{i-1} \) such that \( X_t > X_{T_{i-1}}, \) if there is such a \( t \). And \( T_i = n \) if there is no such \( t \).
a) Show that \( E \sum_{i=1}^n (X_{T_i} - X_{T_{i-1}})^2 = \sigma^2 \).
b) Let \( M_n = \max_{0 \leq i \leq n} X_i \). Show that \( E(M_n - X_n)^2 \leq \sigma^2 \). | \textbf{Correction :} Assume that $\mathbb{E}X_i^2<\infty$ for all $i=0, \ldots, n$. We have $X_0,X_1, \ldots, X_n$ to be a $L^2$-MG with respect to filtration $\left\{\mathcal{F}_m : 0 \leq m \leq n\right\}$ such that $X_0=0$ almost surely and $\operatorname{Var}(X_n)=\sigma^2 < \infty$. Define $T_0:=0$ and for all $1 \leq i \leq n$, $$ T_i := \inf \left\{ m >T_{i-1} : T_{i-1} : X_m > X_{T_{i-1}}\right\} \wedge n,$$ with the understanding that infimum of an empty set is $\infty$. \begin{enumerate}[label=(\alph*)] \item Since the stopping times $T_0 \leq T_1 \leq \cdots \leq T_n$ are uniformly bounded, we apply OST and conclude that we have the following $\mathbb{E}(X_{T_i} \mid \mathcal{F}_{T_{i-1}}) =X_{T_{i-1}}$, for all $1 \leq i \leq n$. Since, $X_{T_i}^2 \leq \sum_{k=0}^n X_k^2 \in L^1$, we can conclude that $\left\{Y_i :=X_{T_i}, \mathcal{F}_{T_i}, 0 \leq i \leq n\right\}$ is a $L^2$-MG with predictable compensator process $$ \langle Y \rangle_m = \sum_{k=1}^m \mathbb{E}\left[(X_{T_k}-X_{T_{k-1}})^2 \mid \mathcal{F}_{T_{k-1}}\right], \; \forall \; 0 \leq m \leq n.$$ Hence, $$ \mathbb{E}X_{T_n}^2 = \mathbb{E}Y_n^2 = \mathbb{E}\langle Y \rangle_n = \mathbb{E} \sum_{i=1}^n (X_{T_i}-X_{T_{i-1}})^2.$$ It is easy to observe that $T_n =n$ almost surely and hence $$ \mathbb{E} \sum_{i=1}^n (X_{T_i}-X_{T_{i-1}})^2 = \mathbb{E}X_{T_n}^2 = \mathbb{E}X_n^2 = \operatorname{Var}(X_n)=\sigma^2,$$ since $\mathbb{E}X_n = \mathbb{E}X_0=0$. \item Let $M_n = \max_{0 \leq i \leq n } X_i$. Let $j^{*}$ be the smallest element of the (random) set $\arg \max_{0 \leq i \leq n} X_i$. Then $X_{j^{*}} > X_i$ for all $0 \leq i \leq j^{*}-1$ and $X_{j^{*}} \geq X_i$ for all $j^{*}<i \leq n$. This implies that $j^{*} = T_k$ for some $0 \leq k \leq n$ and $T_i=n$ for all $k<i \leq n.$ Consequently, $$ (M_n-X_n)^2 = (X_{T_k}-X_n)^2 \leq \sum_{i=1}^n (X_{T_i}-X_{T_{i-1}})^2.$$ From above and using part (a), we conclude that $\mathbb{E}(M_n-X_n)^2 \leq \sigma^2.$ \end{enumerate} |
1993-q5 | Let \( X_1, X_2, \ldots \) be a sequence of square-integrable random variables. Suppose the sequence is tight, and that \( E X_n^2 \to \infty \) as \( n \to \infty \). Show that \( \text{Var}(X_n) \to \infty \) as \( n \to \infty \). | Suppose $\operatorname{Var}(X_n)$ does not go to \infty$ and hence there exists a subsequence $\left\{n_k : k \geq 1\right\}$ such that $\sup_{k \geq 1} \operatorname{Var}(X_{n_k}) = C < \infty$. Since, $\mathbb{E}(X_{n_k}^2) \to \infty$, we conclude that $|\mathbb{E}(X_{n_k})| \to \infty$. Without loss of generality, we can assume $\mu_{n_k}:=\mathbb{E}(X_{n_k}) \to \infty$ (otherwise we may have to take a further subsequence). Fix $M \in (0,\infty)$ and $\delta >0$. Then for large enough $k$, \begin{align*} \mathbb{P}(X_{n_k} \leq M) \leq \mathbb{P}(X_{n_k} \leq \mu_{n_k} - \delta^{-1}\sqrt{C}) \leq \mathbb{P}(|X_{n_k}-\mu_{n_k}| \geq \delta^{-1}\sqrt{C}) \leq \dfrac{\operatorname{Var}(X_{n_k})}{\delta^{-2}C} \leq \delta^2. \end{align*} Therefore, $$ \sup_{n \geq 1} \mathbb{P}(|X_n| > M) \geq 1-\delta^2,$$ for all choices of $M$ and $\delta$. Take $\delta \uparrow 1$ and conclude that $$ \sup_{n \geq 1} \mathbb{P}(|X_n| > M) =1, \; \forall \; M.$$ This contradicts the tightness assumption. |
1993-q6 | Let \( X, X_1, X_2, \ldots \) be i.i.d. with \( 0 < E|X| < \infty, \) and \( EX = \mu \). Let \( a_1, a_2, \ldots \) be positive reals. In each of parts a) through d), say whether the statement is true or false, and give a proof or counterexample.
a) If \( \mu = 0 \) and \( \sum_i a_i < \infty, \) then \( \sum_i a_i X_i \) converges a.s.
b) If \( \mu = 0 \) and \( \sum_i a_i X_i \) converges a.s., then \( \sum_i a_i < \infty \).
c) If \( \mu > 0 \) and \( \sum_i a_i < \infty, \) then \( \sum_i a_i X_i \) converges a.s.
d) If \( \mu > 0 \) and \( \sum_i a_i X_i \) converges a.s., then \( \sum_i a_i < \infty \). | Let $\mathcal{F}_n := \sigma(X_i : 1 \leq i \leq n).$ Also define $S_n := \sum_{k=1}^n a_kX_k$ with $S_0:=0$ and $\mathbb{E}|X|=\nu < \infty.$ \begin{enumerate}[label=(\alph*)] \item \textbf{TRUE}. Since $\mu=0$, we have $\left\{S_n, \mathcal{F}_n, n \geq 0\right\}$ to be a MG with $$ \sup_{n \geq 1} \mathbb{E}|S_n| \leq \sup_{n \geq 1} \sum_{k=1}^n a_k \mathbb{E}|X_k| = \sup_{n \geq 1} \nu \sum_{k =1}^n a_k = \nu \sum_{k \geq 1} a_k < \infty.$$ Therefore, by \textit{Doob's MG convergence Theorem}, $\sum_{n \geq 1} a_nX_n$ converges almost surely. \item \textbf{FALSE}. Take $X \sim N(0,1)$ and $a_i =i^{-1}$. We still have $\left\{S_n, \mathcal{F}_n, n \geq 0\right\}$ to be a MG with $$ \sup_{n \geq 1} \mathbb{E}S_n^2 =\sup_{n \geq 1} \operatorname{Var}(S_n) = \sup_{n \geq 1} \sum_{k=1}^n a_k^2 = \sup_{n \geq 1} \sum_{k =1}^n k^{-2} = \sum_{k \geq 1} k^{-2} < \infty.$$ Being $L^2$-bounded, the MG $S_n$ converges almost surely and hence $\sum_{n \geq 1} a_kX_k$ converges almost surely. But here $\sum_{k \geq 1} a_k =\infty.$ \item \textbf{TRUE}. Let $Z_k=X_k-\mu$. Then $Z_i$'s are i.i.d. with mean $0$ and therefore by part (a) we have $\sum_{n \geq 1} a_nZ_n$ converges almost surely, since $\sum_{n \geq 1} a_n < \infty.$ The summability condition on $a_i$'s also imply that $\mu\sum_{n \geq 1} a_n < \infty$. Summing them up we get, $\sum_{n \geq 1} a_nX_n = \sum_{n \geq 1} a_nZ_n + \mu\sum_{n \geq 1} a_n$ converges almost surely. \item \textbf{TRUE}. We shall use \textit{Kolmogorov's Three series theorem}. We assume that $\sum_{n \geq 1} a_nX_n$ converges almost surely. Then by \textit{Kolmogorov's Three series theorem}, we have that $\sum_{n \geq 1} \mathbb{P}(|a_nX_n| > c)$ and $\sum_{n \geq 1} \mathbb{E}(a_nX_n\mathbbm{1}(|a_nX_n| \leq c))$ converge for any $c>0$. Since $X_i$'s \textit{i.i.d.}, we have $\sum_{n \geq 1} \mathbb{P}(|X|>a_n^{-1}c)$ and $\sum_{n \geq 1} \mathbb{E}(a_nX\mathbbm{1}(a_n|X| \leq c))$ converges for all $c>0$. The summability of the first series implies that $\mathbb{P}(|X| > a_n^{-1}c) \to 0$ for any $c>0$. Since $\mathbb{E}X = \mu >0$, we have $\mathbb{P}(|X|>\varepsilon) > \delta$ for some $\varepsilon, \delta >0$. This implies that the sequence $a_n$ is uniformly bounded, say by $C \in (0,\infty).$ Introduce the notation $f(x) := \mathbb{E}(X \mathbbm{1}(|X| \leq x))$ for any $x>0$. Note that $f(x) \to \mu >0$ as $x \to \infty$. Then we have $\sum_{n \geq 1} a_nf(a_n^{-1}c)$ converging for any $c>0$. Get $M \in (0, \infty)$ such that $f(x) \geq \mu/2$ for all $x \geq M$ and take $c=MC >0$. Then $a_n^{-1}c \geq C^{-1}c =M$ for all $n \geq 1$ and hence $f(a_n^{-1}c) \geq \mu/2$. This implies that $$ (\mu/2) \sum_{n \geq 1} a_n \leq \sum_{n \geq 1} a_nf(a_n^{-1}c) < \infty,$$ and therefore $\sum_{n \geq 1} a_n < \infty$, since $\mu >0$. \end{enumerate} |
1993-q7 | Let \( Y \) be a non-negative random variable with finite mean \( \mu \). Show that
\[ (E|Y - \mu|)^2 \leq 4\mu \text{Var}(\sqrt{Y}) \]
[Hint: Consider \( \sqrt{Y} \pm \sqrt{\mu} \).] | We have a non-negative random variable $Y$ with $\mathbb{E}(Y)=\mu$. Let's introduce some more notations. $\nu := \mathbb{E}\sqrt{Y}$ and $V:= \operatorname{Var}(\sqrt{Y}) = \mu - \nu^2.$ Applying \textit{Cauchy-Schwarz Inequality} we get \begin{align*} \left( \mathbb{E}|Y-\mu| \right)^2 = \left( \mathbb{E}\left(|\sqrt{Y}-\sqrt{\mu}||\sqrt{Y}+\sqrt{\mu}|\right) \right)^2 &\leq \mathbb{E}(\\sqrt{Y}-\sqrt{\mu})^2 \mathbb{E}(\\sqrt{Y}+\sqrt{\mu})^2 \\ & =\left[ \operatorname{Var}(\sqrt{Y}) + (\sqrt{\mu}-\nu)^2\right] \left[ \mathbb{E}Y + \mu +2 \sqrt{\mu}\mathbb{E}\sqrt{Y} \right] \\ & = \left[ \mu-\nu^2 + (\sqrt{\mu}-\nu)^2\right] \left[ 2\mu +2 \nu\sqrt{\mu}\right] \\ & = \left( 2\mu - 2\nu \sqrt{\mu}\right) \left( 2\mu +2 \nu\sqrt{\mu}\right) \\ & = 4\mu^2 - 4\nu^2\mu = 4\mu(\mu-\nu^2) = 4\mu \operatorname{Var}(\sqrt{Y}). \end{align*} |
1993-q8 | A coin which lands heads with probability \( p \) is tossed repeatedly. Let \( W_n \) be the number of tosses required to get at least \( n \) heads and at least \( n \) tails. Find constants \( a_n \) and \( b_n \to \infty \) such that \( (W_n-a_n)/b_n \) converges in law to a non-degenerate distribution, and specify the limit distribution, in the cases:
a) \( p = 2/3 \)
b) \( p = 1/2 \)
Prove your answers. | Let $X_n$ be the number of heads in first $n$ tosses. Hence $X_n \sim \text{Bin}(n,p)$. $W_n$ is the number of tosses required to get at least $n$ heads and $n$ tails. Evidently, $W_n \geq 2n$, almost surely for all $n \geq 1$. Note that for $x \geq 0$, $$ (W_n > 2n+x) = (X_{2n+x} \wedge (2n+x-X_{2n+x}) <n) =(X_{2n+x}<n \text{ or } X_{2n+x} > n+x)$$ and hence for all $n \geq 1$ and $x \in \mathbb{Z}_{\geq 0}$ we have $$ \mathbb{P}(W_n \leq 2n+x) = \mathbb{P}(n \leq X_{2n+x} \leq n+x) = \mathbb{P}(X_{2n+x} \leq n+x) - \mathbb{P}(X_{2n+x}<n). \begin{enumerate}[label=(\alph*)] \item Consider the case $p>1/2$. Then for any sequence $x_n$ of non-negative integers, we have $$ \mathbb{P}(X_{2n+x_n}<n) \leq \mathbb{P}(X_{2n} < n) = \mathbb{P}\left(\dfrac{X_{2n}}{2n}-p < \dfrac{1}{2} - p\right) \longrightarrow 0, $$ since $(2n)^{-1}X_{2n} \stackrel{a.s.}{\longrightarrow} p > \frac{1}{2}$ as $n \to \infty$. On the otherhand. \begin{align*} \mathbb{P}(X_{2n+x_n} \leq n+x_n) &= \mathbb{P}(X_{2n+x_n}-(2n+x_n)p \leq n+x_n - (2n+x_n)p) \\ &= \mathbb{P} \left( \dfrac{X_{2n+x_n}-(2n+x_n)p}{\sqrt{(2n+x_n)p(1-p)}} \leq \dfrac{n(1-2p)+x_n(1-p)}{\sqrt{(2n+x_n)p(1-p)}}\right) \end{align*} Take $x_n = n(2p-1)(1-p)^{-1} + C \sqrt{n}$ where $C \in \mathbb{R}$. Then $$ \dfrac{n(1-2p)+x_n(1-p)}{\sqrt{(2n+x_n)p(1-p)}} = \dfrac{C(1-p)\sqrt{n}}{\sqrt{2np(1-p)+n(2p-1)p+Cp(1-p)\sqrt{n}}} = \dfrac{C(1-p)\sqrt{n}}{\sqrt{np+Cp(1-p)\sqrt{n}}} \to \dfrac{C(1-p)}{\sqrt{p}}. $$ Since $2n+x_n \to \infty$ as $n \to \infty$, we have by CLT that $$ \dfrac{X_{2n+x_n}-(2n+x_n)p}{\sqrt{(2n+x_n)p(1-p)}} \stackrel{d}{\longrightarrow} N(0,1).$$ Therefore, using \textit{Polya's Theorem} we get $$ \mathbb{P}(X_{2n+x_n} \leq n+x_n) \to \Phi \left( \dfrac{C(1-p)}{\sqrt{p}}\right).$$ Consequently, we have $$ \mathbb{P}(W_n \leq 2n+n(2p-1)(1-p)^{-1}+C\sqrt{n}) \longrightarrow \Phi \left( \dfrac{C(1-p)}{\sqrt{p}}\right), \; \forall \; C \in \mathbb{R}.$$ Plugging in $C=x\sqrt{p}(1-p)^{-1}$, we get $$ \mathbb{P}(W_n \leq 2n+n(2p-1)(1-p)^{-1}+x\sqrt{p}(1-p)^{-1}\sqrt{n}) \longrightarrow \Phi(x), \; \forall \; x \in \mathbb{R},$$ which implies $$ \dfrac{(1-p)W_n- n}{\sqrt{np}} \stackrel{d}{\longrightarrow} N(0,1), \; \text{ as } n \to \infty.$$ For $p = 2/3$, this simplifies to $n^{-1/2}(W_n-3n) \stackrel{d}{\longrightarrow} N(0,6).$ \item For $p=1/2$, we have $$ \mathbb{P}(W_n \leq 2n+x_n) = \mathbb{P}(n \leq X_{2n+x_n} \leq n+x_n) = \mathbb{P}\left( - \dfrac{x_n/2}{\sqrt{(2n+x_n)/4}} \leq \dfrac{X_{2n+x_n}-(n+x_n/2)}{\sqrt{(2n+x_n)/4}} \leq \dfrac{x_n/2}{\sqrt{(2n+x_n)/4}}\right).$$ Take $x_n = C\sqrt{n}$, for $C \in (0,\infty)$. Since by CLT we have $$ \dfrac{X_{2n+x_n}-(n+x_n/2)}{\sqrt{(2n+x_n)/4}} \stackrel{d}{\longrightarrow} N(0,1),$$ we can conclude by \textit{Polya's Theorem} that $$ \mathbb{P}(W_n \leq 2n+C\sqrt{n}) \longrightarrow \Phi(C/\sqrt{2}) - \Phi(-C/\sqrt{2}) = \mathbb{P}(|Z| \leq C/\sqrt{2}) = \mathbb{P}(|N(0,2)| \leq C).$$ Therefore, with the aid of the fact that $W_n \geq 2n$, almost surely we can conclude that $$ \dfrac{W_n-2n}{\sqrt{n}} \stackrel{d}{\longrightarrow} |N(0,2)|, \; \text{ as } n \to \infty.$$ \end{enumerate} |
1994-q1 | Let X_{ij}, i = 1, \cdots, k, j = 1, \cdots, n be independent random variables and suppose that P\{X_n = 1\} = P\{X_n = 0\} = 0.5. Consider the row vectors X_i = (X_{i1}, \cdots, X_{in}).
(i) Find the probability that the elementwise addition modulo 2 of X_1 and X_2 results with the vector 0 = (0, \cdots, 0).
(ii) Let Q_k be the probability that there exist m and 1 \le i_1 < i_2 < \cdots < i_m \le k such that the elementwise addition modulo 2 of X_{i_1}, X_{i_2}, \cdots, X_{i_m} results with the vector 0 = (0, \cdots, 0). Prove that Q_k = 1 - \prod_{i=0}^{k-1} (1 - 2^{-(n-k)}). | We have $X_{ij}, \; i=1, \ldots, k; j=1,\ldots, n$ to be collection of \textit{i.i.d.} $\text{Ber}(1/2)$ random variables. Define $\mathbf{X}_i=(X_{i1}, \ldots, X_{in})$ for all $1 \leq i \leq k$. The (co-ordinate wise) addition modulo $2$ will de denoted by $\oplus$.
\begin{enumerate}[label=(\roman*)]
\item We have
\begin{align*}
\mathbb{P}(\mathbf{X}_1 \oplus \mathbf{X}_2 = \mathbf{0}) = \mathbb{P}(X_{1j}\oplus X_{2j}=0, \; \forall \; 1 \leq j \leq n) = \prod_{j=1}^n \mathbb{P}(X_{1j}\oplus X_{2j}=0) = \prod_{j=1}^n \mathbb{P}(X_{1j}= X_{2j}) = 2^{-n}.
\end{align*}
\item Consider $V=\left\{0,1\right\}^n$, which is a vector space of dimension $n$ over the field $F=\mathbb{Z}_2$ and the addition being $\oplus$ operator. Let $S_l := \left\{\mathbf{X}_1, \ldots, \mathbf{X}_l\right\}$ for all $1 \leq l \leq k$.
Note the event $Q_l$ that there exists $m$ and $1 \leq i_1 < \cdots < i_m \leq l$ such that the element-wise modulo $2$ of $\mathbf{X}_{i_1}, \ldots, \mathbf{X}_{i_m}$ results the vector $\mathbf{0}$, is same as the event that the set $S_l$ is linearly dependent.
Since $V$ has dimension $n$, clearly we have $Q_{l}=1$ for all $k \geq n+1$. Now suppose that $Q_l^c$ has occurred for some $1 \leq l \leq n-1$. Then conditioned on $Q_l^c$, $Q_{l+1}^c$ will occur if and only if $\mathbf{X}_{l+1} \notin \mathcal{L}(S_l)$, where $\mathcal{L}(S_l)$ is the subspace of $V$ spanned by $S_l$. Since conditioned on $Q_l^c$, we have $S_l$ to be linearly independent, we also have $|\mathcal{L}(S_l)|=|F|^{|S_l|} =2^l.$ Since $\mathbf{X}_{l+1}$ is an uniform element from $V$, we can write
$$ \mathbb{P}(Q_{l+1}^c \mid Q_l^c) = \mathbb{P}(\mathbf{X}_{l+1} \notin \mathcal{L}(S_l) \mid Q_l^c) = 2^{-n}(2^n-2^l) = 1-2^{-(n-l)}.$$
Note that, $\mathbb{P}(Q_1^c) = \mathbb{P}(\mathbf{X}_1 \neq \mathbf{0}) =1- 2^{-n}$. Since, $Q_1^c \supseteq Q_2^c \subseteq \cdots$, we have for $k \leq n$,
$$ \mathbb{P}(Q_k^c) = \mathbb{P}(Q_1^c) \prod_{l=1}^{k-1} \mathbb{P}(Q_{l+1}^c \mid Q_l^c) = (1-2^{-n}) \prod_{l=1}^{k-1} (1-2^{-(n-l)}) = \prod_{l=0}^{k-1} (1-2^{-(n-l)}).$$
Hence,
$$ \mathbb{P}(Q_k) = 1- \prod_{l=0}^{k-1} \left[1-2^{-(n-l)} \right]_{+}, \; \forall \; k,n.$$
\end{enumerate} |
1994-q2 | Let S_0 = 0, S_n = X_1 + X_2 + \cdots X_n, n \ge 1 be a Martingale sequence such that |S_n - S_{i-1}| = 1 for i = 1, 2, \cdots. Find the joint distribution of \{X_1, X_2, \cdots, X_n\}. | \textbf{Correction :} We have a MG $\left\{S_n = \sum_{k=1}^n X_k, \mathcal{F}_n, n \geq 0\right\}$ such that $S_0=0$ and $|S_i-S_{i-1}|=1$ for all $i \geq 1$.
Clearly, $\mathbb{E}(X_k \mid \mathcal{F}_{k-1})=0$ and $X_k \in \left\{-1,+1\right\}$ for all $k \geq 1$. This implies $X_k \mid \mathcal{F}_{k-1} \sim \text{Uniform}(\left\{-1,+1\right\})$ and therefore $X_k \perp\!\!\perp \mathcal{F}_{k-1}$. Since, $X_1, \ldots, X_{k-1} \in m\mathcal{F}_{k-1}$, we have $X_k \perp\!\!\perp(X_1, \ldots, X_{k-1})$. Since this holds true for all $k \geq 1$, we conclude that $X_1,X_2, \ldots \stackrel{iid}{\sim} \text{Uniform}(\left\{-1,+1\right\}).$ |
1994-q3 | (i) Let X_n, Y_n, Z_n be random variables such that |X_n| \le |Y_n| + |Z_n| for all n \ge 1. Suppose that \sup_{n \ge 1} E(|Y_n| \log |Y_n|) < \infty and \lim_{n \to \infty} E|Z_n| = 0. Show that \{X_n, n \ge 1\} is uniformly integrable.
(ii) Give a counter-example to show that we cannot replace the assumption \lim_{n \to \infty} E|Z_n| = 0 above by \sup_{n \ge 1} E|Z_n| < \infty. | \begin{enumerate}[label=(\roman*)]
\item It is enough to show that both $\left\{Y_n, n \geq 1 \right\}$ and $\left\{Z_n, n \geq 1 \right\}$ are uniformly integrable.
Take $y >1.$ Using the fact that $x\log x \geq -1/e$ for all $x \geq 0$, we get
\begin{align*}
\mathbb{E}\left(|Y_n|, |Y_n| \geq y \right) = \mathbb{E}\left(|Y_n|, \log |Y_n| \geq \log y \right) &\leq \dfrac{1}{\log y}\mathbb{E}\left(|Y_n|\log |Y_n|, \log |Y_n| \geq \log y \right) \\
& = \dfrac{1}{\log y}\mathbb{E}\left(|Y_n|\log |Y_n|\right) - \dfrac{1}{\log y}\mathbb{E}\left(|Y_n|\log |Y_n|, \log |Y_n| < \log y \right) \\
& \leq \dfrac{1}{\log y}\mathbb{E}\left(|Y_n|\log |Y_n|\right)+ \dfrac{1}{e \log y},
\end{align*}
implying that
$$ \sup_{n \geq 1} \mathbb{E}\left(|Y_n|, |Y_n| \geq y \right) \leq \dfrac{1}{\log y} \sup_{n \geq 1} \mathbb{E}\left(|Y_n|\log |Y_n|\right) + \dfrac{1}{e \log y}.$$
Taking $y \uparrow \infty$, we conclude that $\left\{Y_n, n \geq 1 \right\}$ is uniformly integrable.
We have $\mathbb{E}|Z_n| \to 0$, which implies that $Z_n$ converges to $0$ both in probability and in $L^1$. Apply \textit{Vitali's Theorem} to conclude that $\left\{Z_n, n \geq 1 \right\}$ is uniformly integrable. This conclude the proof.
\item Take $U \sim \text{Uniform}(0,1)$, $Y_n \equiv 1$, $Z_n =n\mathbbm{1}(U<1/n)$ and $X_n=Y_n+Z_n=Z_n+1$, for all $n \geq 1$.
Clearly, $|X_n| \leq |Y_n|+|Z_n|$, $\sup_{n \geq 1} \mathbb{E}(|Y_n|\log |Y_n|) =0$ and $\sup_{n \geq 1} \mathbb{E}|Z_n|=1$.
Note that $Z_n \stackrel{a.s.}{\longrightarrow} 0$ and hence $X_n \stackrel{a.s.}{\longrightarrow} 1$. If $\left\{X_n : n \geq 1\right\}$ is uniformly integrable, we can apply \textit{Vitali's Theorem} to conclude that $\mathbb{E}X_n \to 1$. But $\mathbb{E}(X_n)=2$ for all $n \geq 2$. Hence $\left\{X_n : n \geq 1\right\}$ is not uniformly integrable,
\end{enumerate} |
1994-q4 | Let X_1, X_2, \cdots be independent random variables and let S_n = X_1 + \cdots + X_n.
(i) Suppose P\{X_n = 1\} = P\{X_n = -1\} = \frac{1}{2}n^{-1} and \{X_n = 0\} = 1 - n^{-1}. Show that S_n/\sqrt{\log n} has a limiting normal distribution.
(ii) Let \beta \ge 0 and 2\alpha > \beta - 1. Suppose P\{X_n = n^\alpha\} = P\{X_n = -n^\alpha\} = \frac{1}{2}n^{-\beta} and P\{X_n = 0\} = 1 - n^{-\beta}. Show that the Lindeberg condition holds if and only if \beta < 1. What can you say about the asymptotic distribution of S_n for the cases 0 \le \beta < 1, \beta = 1 and \beta > 1?
(iii) Suppose P\{X_n = n\} = P\{X_n = -n\} = \frac{1}{2}n^{-2} and P\{X_n = 1\} = P\{X_n = -1\} = \frac{1}{2}(1 - n^{-2}). Show that the Lindeberg condition fails but that S_n/\sqrt{n} still has a limiting standard normal distribution. | $X_1,X_2, \ldots$ are independent random variables with $S_n = X_1 + \ldots + X_n.$
\begin{enumerate}[label=(\roman*)]
\item We want to apply \textit{Lyapounav CLT}. Since, $\mathbb{P}(X_n=1)=\mathbb{P}(X_n=-1)=1/(2n)$ and $\mathbb{P}(X_n=0)=1-1/n,$ we have $\mathbb{E}(X_n)=0$ and
$$ s_n^2 = \sum_{k=1}^n \operatorname{Var}(X_k) = \sum_{k=1}^n \mathbb{E}(X_k^2) = \sum_{k=1}^n \dfrac{1}{k} = \log n + O(1).$$
Finally,
$$ \dfrac{1}{s_n^3} \sum_{k=1}^n \mathbb{E}|X_k|^3 = \dfrac{1}{s_n^3} \sum_{k=1}^n \dfrac{1}{k} = \dfrac{1}{s_n} \longrightarrow 0.$$
Hence, $S_n/s_n $ converges weakly to standard Gaussian distribution. Since, $s_n \sim \sqrt{\log n}$, we get
$$ \dfrac{S_n}{\sqrt{\log n}} \stackrel{d}{\longrightarrow} N(0,1).$$
\item We have $\mathbb{P}(X_n=n^{\alpha}) = \mathbb{P}(X_n= -n^{\alpha})=1/(2n^{\beta})$ and $\mathbb{P}(X_n=0)=1-n^{-eta},$ where $\beta \geq 0, 2\alpha > \beta-1.$ Here also $\mathbb{E}(X_n)=0$ and
$$ s_n^2 = \sum_{k=1}^n \operatorname{Var}(X_k) = \sum_{k=1}^n \mathbb{E}(X_k^2) = \sum_{k=1}^n k^{2\alpha - \beta} \sim \dfrac{n^{2\alpha-\beta+1}}{2\alpha-\beta +1} \longrightarrow \infty,$$
where we have used the fact that $\sum_{k=1}^n k^{\gamma} \sim (\gamma+1)^{-1}n^{\gamma+1}$, for $\gamma >-1$ and the assumption that $2\alpha-\beta+1>0$. Now we need to consider two situations. Firstly, suppose $\beta <1$. In that case $n^{\alpha}/s_n$ converges to $0$ and hence for any $\varepsilon >0$, we can get $N(\varepsilon)\in \mathbb{N}$ such that $|X_n| \leq n^{\alpha} < \varepsilon s_n$, almost surely for all $n \geq N(\varepsilon)$. Therefore,
$$ \dfrac{1}{s_n^2} \sum_{k=1}^n \mathbb{E}\left[ X_k^2, |X_k|\geq \varepsilon s_n\right] = \dfrac{1}{s_n^2} \sum_{k=1}^{N(\varepsilon)} \mathbb{E}\left[ X_k^2, |X_k|\geq \varepsilon s_n\right] \leq \dfrac{1}{s_n^2} \sum_{k=1}^{N(\varepsilon)} \mathbb{E}\left[ X_k^2\right] \longrightarrow 0,$$
implying that \textit{Lindeberg condition} is satisfied.
How consider the case $\beta \geq 1$. In this case, $s_n = O(n^{\alpha})$ and hence we can get hold of $\varepsilon >0$ such that $ \varepsilon s_n \leq n^{\alpha}$ for all $n \geq 1$. Therefore,
$$ \dfrac{1}{s_n^2} \sum_{k=1}^n \mathbb{E}\left[ X_k^2, |X_k|\geq \varepsilon s_n\right] = \dfrac{1}{s_n^2} \sum_{k=1}^{n} \mathbb{E}\left[ X_k^2, |X_k|=k^{\alpha}\right] = \dfrac{1}{s_n^2} \sum_{k=1}^{n} k^{2\alpha-\beta} =1 \nrightarrow 0.$$
Thus \textit{Lindeberg condition} is satisfied if and only if $\beta <1$.
For $0 \leq \beta<1$, we have shown the \textit{Lindeberg condition} to satisfy and hence $S_n/s_n$ converges weakly to standard Gaussian variable. Plugging in asymptotic estimate for $s_n$ we obtain
$$ n^{-\alpha + \beta/2-1/2}S_n \stackrel{d}{\longrightarrow} N\left(0, \dfrac{1}{2\alpha-\beta+1}\right).$$
For $\beta >1$, we observe that
$$ \sum_{n \geq 1} \mathbb{P}(X_n \neq 0) =\sum_{n \geq 1} n^{-\beta}<\infty,$$
and therefore with probability $1$, the sequence $X_n$ is eventually $0$. This implies that $S_n$ converges to a finite limit almost surely.
For $\beta =1$, We focus on finding the asymptotics of $S_n/c_n$ for some scaling sequence $\left\{c_n\right\}$ with $c_n \uparrow \infty$, to be chosen later. Fix $t \in \mathbb{R}$. Since, the variables $X_k$'s have distributions which are symmetric around $0$, we get,
$$ \mathbb{E} \exp\left[ it\dfrac{S_n}{c_n}\right] = \prod_{k=1}^n \mathbb{E} \exp\left[ it\dfrac{X_k}{c_n}\right] = \prod_{k=1}^n \mathbb{E} \cos \left[ t\dfrac{X_k}{c_n}\right] = \prod_{k=1}^n \left[ \dfrac{1}{k}\cos \left( \dfrac{tk^{\alpha}}{c_n}\right) + \left(1-\dfrac{1}{k}\right)\right] = \prod_{k=1}^n g_{n,k}(t), $$
where
$$ g_{n,k}(t) := \dfrac{1}{k}\cos \left( \dfrac{tk^{\alpha}}{c_n}\right) + \left(1-\dfrac{1}{k}\right), \; \forall \; 1 \leq k \leq n.$$
Now observe that,
\begin{align*}
0 \leq 1-g_{n,k}(t) &= \dfrac{1}{k}\left\{1-\cos \left( \dfrac{tk^{\alpha}}{c_n}\right)\right\} = \dfrac{2}{k}\sin^2 \left( \dfrac{tk^{\alpha}}{2c_n}\right) \leq \dfrac{2}{k} \left( \dfrac{tk^{\alpha}}{2c_n}\right)^2 \leq \dfrac{t^2k^{2\alpha-1}}{2c_n^2}.
\end{align*}
Take $c_n=n^{\alpha}$. Then
\begin{align*}
\sum_{k=1}^n (1-g_{n,k}(t)) &= 2\sum_{k=1}^n \dfrac{1}{k}\sin^2 \left( \dfrac{tk^{\alpha}}{2n^{\alpha}}\right) = \dfrac{2}{n} \sum_{k=1}^n \dfrac{n}{k}\sin^2 \left( \dfrac{tk^{\alpha}}{2n^{\alpha}}\right)= 2 \int_{0}^1 \dfrac{1}{x} \sin^2 \left(\dfrac{tx^{\alpha}}{2}\right) \, dx + o(1)\; \text{ as } n \to \infty.
\end{align*}
Not that the integral above is finite since,
$$ 2\int_{0}^1 \dfrac{\sin^2(tz^{\alpha}/2)}{z} \, dz \leq 2 \int_{0}^1 \dfrac{t^2z^{2\alpha}}{4z} \, dz = 2\int_{0}^1 \dfrac{t^2z^{2\alpha-1}}{4} \, dz = \dfrac{t^2}{4\alpha} < \infty,$$
since $2\alpha > \beta -1 =0.$ Also note that
$$ \max_{1 \leq k \leq n} |1-g_{n,k}(t)| \leq \max_{1 \leq k \leq n} \dfrac{t^2k^{2\alpha-1}}{n^{2\alpha}} \leq \dfrac{t^2 (n^{2\alpha-1} \vee 1)}{n^{2\alpha}} \to 0,$$
and hence
$$ \sum_{k=1}^n |1-g_{n,k}(t)|^2 \leq \max_{1 \leq k \leq n} |1-g_{n,k}(t)| \sum_{k=1}^n (1-g_{n,k}(t)) \longrightarrow 0.$$\n
Therefore, using ~\cite[Lemma 3.3.31]{dembo}, we conclude that
$$ \mathbb{E} \exp\left(\dfrac{itS_n}{n^{\alpha}} \right) \longrightarrow \exp \left[-2 \int_{0}^1 \dfrac{1}{x} \sin^2 \left(\dfrac{tx^{\alpha}}{2}\right) \, dx \right] =: \psi (t), \; \forall \; t \in \mathbb{R}.$$
To conclude convergence in distribution for $S_n/n^{\alpha}$, we need to show that $\psi$ is continuous at $0$ (see ~\cite[Theorem 3.3.18]{dembo}). But it follows readily from DCT since $\sup_{-1 \leq t \leq 1} \sin^2(tz^{\alpha}/2)/z \leq z^{2\alpha-1}$ which in integrable on $[0,1]$, since $2\alpha-1 >-1$. So by \textit{Levy's continuity Theorem}, $S_n/n^{\alpha}$ converges in distribution to $G$ with characteristic function $\psi$.
\item We have $\mathbb{P}(X_n=n)=\mathbb{P}(X_n=-n)=1/(2n^2)$ and $\mathbb{P}(X_n=1)=\mathbb{P}(X_n=-1)=\frac{1}{2}(1-n^{-2})$. Note that
$$ \sum_{n \geq 1} \mathbb{P}(|X_n| \neq 1) =\sum_{n \geq 1} n^{-2} < \infty,$$
and hence with probability $1$, $|X_n|=1$ eventually for all $n$. Let $Y_n = \operatorname{sgn}(X_n)$. Clearly, $Y_i$'s are \textit{i.i.d.} $\text{Uniform}(\left\{-1,+1\right\})$ variables and hence $ \left(\sum_{k=1}^n Y_k\right)/\sqrt{n}$ converges weakly to a standard Gaussian variable. By previous argument, $\mathbb{P}(X_n=Y_n, \; \text{eventually for } n)=1$ and hence $n^{-1/2}\left(S_n - \sum_{k=1}^n Y_k \right) \stackrel{p}{\longrightarrow}0.$ Use \textit{Slutsky's Theorem} to conclude that $S_n/\sqrt{n}$ converges weakly to standard Gaussian distribution.
\end{enumerate} |
1994-q5 | (i) Let N_p be a geometric random variable with parameter p, i.e., P\{N_p = k\} = p(1-p)^k, k = 0, 1, \cdots. show that pN_p converges weakly to an exponential random variable as p \to 0.
(ii) Let X_1, X_2, \cdots be i.i.d. random variables with EX_1 > 0. Let S_n = X_1 + \cdots + X_n. Let N be a stopping time having the geometric distribution with parameter p. Show that S_N/E(S_N) converges weakly to an exponential random variable as p \to 0.
(Hint: Use Wald's lemma, the law of large numbers, Slutsky's theorem and (i).) | \begin{enumerate}[label=(\roman*)]
\item For any $n \in \mathbb{Z}_{\geq 0}$ we have $\mathbb{P}(N_p > n) = 1-\sum_{k=0}^n p(1-p)^k = (1-p)^{n+1}.$ Since, $N_p$ is integer-valued random variable, we have $\mathbb{P}(N_p > x) = (1-p)^{\lfloor x \rfloor +1},$ for all $x \geq 0$. Hence, for all $x \geq 0,
$$ \log \mathbb{P}(pN_p >x) = \log (1-p)^{\lfloor x/p \rfloor +1} = (\lfloor x/p \rfloor +1) \log (1-p) \sim - (\lfloor x/p \rfloor +1) p \sim -x, \; \text{ as } p \to 0.$$
Hence for all $x \geq 0$, we have $\mathbb{P}(pN_p >x) \to \exp(-x)$. On the otherhand, for $x <0$, $\mathbb{P}(pN_p >x)=1$. Therefore, $pN_p \stackrel{d}{\longrightarrow} \text{Exponential}(1),$ as $p \to 0.$
\item We have $X_i$'s \textit{i.i.d.} with positive finite mean. $S_n :=X_1+\cdots+X_n,$ with $S_0:=0$. $N_p$ is a stopping time having geometric distribution with parameter $p$ and hence is integrable. So by \textit{Wald's Identity} (See ~\cite[Exercise 5.4.10]{dembo}), we have $\mathbb{E}(S_{N_p}) =\mathbb{E}(N_p)\mathbb{E}(X_1) = p^{-1}(1-p)\mathbb{E}(X_1).
Note that $\mathbb{P}(N_p >x)=(1-p)^{\lfloor x \rfloor +1} \to 1$ as $p \to 0$ for all $x \geq 0$. This implies that $N_p \stackrel{p}{\longrightarrow} \infty.$ By strong law of large numbers, we have $S_n/n \stackrel{a.s.}{\longrightarrow} \mathbb{E}(X_1)$. Take any sequence $\left\{p_m : m \geq 1 \right\}$ converging to $0$. Since, $N_p \stackrel{p}{\longrightarrow} \infty$ as $p \to 0$, we can get a further subsequence $\left\{p_{m_l} : l \geq 1\right\}$ such that $N_{p_{m_l}} \stackrel{a.s.}{\longrightarrow} \infty.$ Therefore, $S_{N_{p_{m_l}} }/N_{p_{m_l}}\stackrel{a.s.}{\longrightarrow} \mathbb{E}(X_1).$ This argument show that $S_{N_p}/N_p \stackrel{p}{\longrightarrow} \mathbb{E}(X_1)$ as $p \to 0.$
Now,
$$ \dfrac{S_{N_p}}{\mathbb{E}S_{N_p}} = \dfrac{S_{N_p}}{p^{-1}(1-p)\mathbb{E}(X_1)} = \left(\dfrac{S_{N_p}}{N_p \mathbb{E}(X_1)}\right) \left(\dfrac{pN_p}{(1-p)}\right) .$$
As $p \to 0$, the quantity in the first parenthesis converges in probability to $1$, since $\mathbb{E}(X_1)>0$. From part (a), the quantity in the second parenthesis converges weakly to standard exponential variable. Apply \textit{Slutsky's Theorem} to conclude that
$$ \dfrac{S_{N_p}}{\mathbb{E}S_{N_p}} \stackrel{d}{\longrightarrow} \text{Exponential}(1), \text{ as } p \to 0.$$
\end{enumerate} |
1994-q6 | Suppose that you play a game in which you win a dollar with probability \frac{1}{2}, lose a dollar with probability \frac{1}{4}, and neither win nor lose with probability \frac{1}{4}.
(a) Let X_1, X_2, \cdots be the gains in independent plays of the game and S_n = X_1 + \cdots + X_n, V_n = 2^{-S_n} for n = 1, 2, \cdots.
(i) For k > 0 find E(V_{n+k}|V_1, \cdots, V_n).
(ii) What happens to V_n as n grows without bound? Why?
(b) Suppose you play the game until you are either $10 ahead or $3 behind, whichever happens first. Infer from (ii) that the game terminates in finite time with probability 1.
(c) Let T be the (random) time at which the game terminates. Prove that for all k > 0, E(T^k) < \infty.
Find P(S_T = 10) and E(T). | We have $X_i$'s to be \textit{i.i.d.} with $\mathbb{P}(X_1=1)=1/2, \mathbb{P}(X_1=0)=1/4, \mathbb{P}(X_1=-1)=1/4.$ Thus $\mathbb{E}(X_1) = 1/4$ and $\mathbb{E}2^{-X_1} = 1.$
\begin{enumerate}[label=(\alph*)]
\item Define $S_n= \sum_{k=1}^n X_k$ and $V_n=2^{-S_n}$.
\begin{enumerate}[label=(\roman*)]
\item For $k >0$, since $S_{n+k}-S_n$ is independent of $(X_1, \ldots, X_n)$ we have the following.
\begin{align*}
\mathbb{E}(V_{n+k} \mid V_1, \ldots, V_n) = \mathbb{E}(2^{-S_{n+k}} \mid X_1, \ldots, X_n) = 2^{-S_n} \mathbb{E}\left(2^{-(S_{n+k}-S_n)}\right) &= 2^{-S_n} \left( \mathbb{E}(2^{-X_1})\right)^k = V_n.
\end{align*}
\item Since $\left\{S_n : n \geq 0\right\}$ is the random walk on $\mathbb{R}$ with step-size mean being positive, we have $S_n \to \infty$ almost surely as $n \to \infty$. Therefore, $V_n$ converges to $0$ almost surely as $n \to \infty.$
\end{enumerate}
\item Let $T := \inf \left\{ n \geq 0 : S_n \in \left\{-3,10\right\} \right\}$, where $S_0=0$. Since we have argued that $S_n \stackrel{a.s.}{\longrightarrow} \infty$ as $n \to \infty$, we clearly have $T< \infty$ almost surely, i.e. the game terminates with probability $1$.
\item Note that for any $k \geq 1$, we have
\begin{align*}
\mathbb{P}(T > k+13 \mid T>k) &= \mathbb{P}(S_{k+1}, \ldots, S_{k+13} \notin \left\{-3,10\right\} \mid S_k) \mathbbm{1}(-2 \leq S_i \leq 9, \; \forall \; i=1,\ldots, k) \\
&\leq \mathbb{P}(S_{k+1}, \ldots, S_{k+13} \neq 10 \mid S_k) \mathbbm{1}(-2 \leq S_i \leq 9, \; \forall \; i=1,\ldots, k) \\
& \leq (1-2^{-(10-S_k)}) \mathbbm{1}(-2 \leq S_i \leq 9, \; \forall \; i=1,\ldots, k) \leq 1-2^{-12}.
\end{align*}
Letting $\delta=2^{-12}>0$, we get
$$ \mathbb{P}(T>k) \leq \mathbb{P}(T>13\lfloor k/13 \rfloor) \leq (1-\delta)^{\lfloor k/13 \rfloor} \mathbb{P}(T>0) = (1-\delta)^{\lfloor k/13 \rfloor} , \; \forall \; k \geq 0.$$
Hence, for any $k>0$, we have
$$ \mathbb{P}(T^k) = \sum_{y \geq 0} ky^{k-1}\mathbb{P}(T>y) \leq \sum_{y \geq 0} ky^{k-1}(1-\delta)^{\lfloor y/13 \rfloor} < \infty.$$
From part (a), we can see that $\left\{V_n, \mathcal{F}_n, n \geq 0\right\}$ is a non-negative MG, where $\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$. Hence $\left\{V_{T \wedge n}, \mathcal{F}_n, n \geq 0\right\}$ is a non-negative uniformly bounded MG such that $V_{T \wedge n} \to V_T$ as $n \to \infty$ almost surely. Using OST and DCT, we conclude that $\mathbb{E}V_{T}=\mathbb{E}V_0=1,$ hence
$$ 1= \mathbb{E}V_T = \mathbb{E}2^{-S_T}= 2^{-10}\mathbb{P}(S_T=10)+2^3\mathbb{P}(S_T=-3) \Rightarrow \mathbb{P}(S_T=10) = \dfrac{2^3-1}{2^3-2^{-10}} = \dfrac{7}{8-2^{-10}}.$$
Since $\mathbb{E}X_1=1/4$, it is easy to see that $\left\{S_n-(n/4), \mathcal{F}_n, n \geq 0\right\}$ is a MG. Using OST we write, $\mathbb{E}(S_{T \wedge n}) = (1/4)\mathbb{E}(T \wedge n).$ Since $S_{T \wedge n}$ is uniformly bounded and converges to $S_T$ almost surely, we use DCT and MCT to conclude that
\begin{align*}
\mathbb{E}T = \lim_{n \to \infty} \mathbb{E}(T \wedge n) = 4 \lim_{n \to \infty} \mathbb{E}(S_{T \wedge n}) = 4 \mathbb{E}(S_T) = 4 \left[ 10 \mathbb{P}(S_T=10)-3\mathbb{P}(S_T=-3)\right]& = \dfrac{280 - 12(1-2^{-10})}{8-2^{-10}} \\
&= \dfrac{268 + 3 \times 2^{-8}}{8-2^{-10}}.
\end{align*}
\end{enumerate} |
1994-q7 | Let X_1, X_2, \cdots, be independent and identically distributed on (\Omega, \mathcal{F}, P) such that E(X_i) = 0 and \text{Var}(X_i) = 1.
(i) What is the limiting distribution of \frac{S_n}{\sqrt{n}} and why?
(ii) Is there an \mathcal{F}-measurable random variable Z such that \frac{S_n}{\sqrt{n}} \to Z in probability?
(iii) Prove that E(W \frac{S_n}{\sqrt{n}}) \to 0 for every W in L^2(\Omega, \mathcal{F}, P). | We have $X_i$'s to be \textit{i.i.d.} with mean $0$ and variance $1$, on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Define $S_n = \sum_{k=1}^n X_k$.
\begin{enumerate}[label=(\roman*)]
\item By CLT, we have $S_n/\sqrt{n}$ converges weakly to standard Gaussian distribution.
\item Suppose there exists $Z \in m\mathcal{F}$ such that $S_n/\sqrt{n}$ converges to $Z$ in probability. Then we have $S_{2n}/\sqrt{2n}$ converges to $Z$ in probability and hence
$$ \dfrac{S_n}{\sqrt{n}} = \dfrac{\sum_{k=1}^n X_k}{\sqrt{n}} \stackrel{d}{=} \dfrac{\sum_{k=n+1}^{2n} X_k}{\sqrt{n}} = \dfrac{S_{2n}-S_n}{\sqrt{n}} \stackrel{p}{\longrightarrow} (\sqrt{2}-1)Z,$$
which gives a contradiction.
\item
Take $W \in L^2(\Omega, \mathcal{F}, \mathbb{P})$ and set $Y=\mathbb{E}(W \mid \mathcal{F}_{\infty})$, where $\mathcal{F}_n := \sigma(X_i : 1 \leq i \leq n)$ for all $1 \leq n \leq \infty$. Then $\mathbb{E}(Y^2) \leq \mathbb{E}(W^2)$ and hence $Y \in L^2(\Omega, \mathcal{F}_{\infty}, \mathbb{P}).$ Introduce the notation $Z_n=S_n/\sqrt{n}$. Note that,
$$ \mathbb{E}(WZ_n) = \mathbb{E} \left( \mathbb{E}(WZ_n \mid \mathcal{F}_{\infty})\right) = \mathbb{E} \left( Z_n\mathbb{E}(W \mid \mathcal{F}_{\infty})\right) = \mathbb{E}(YZ_n),$$
since $Z_n \in m \mathcal{F}_n \subseteq m \mathcal{F}_{\infty}$. Thus it is enough to prove that $\mathbb{E}(YZ_n) \to 0.$
We have $Y \in L^2(\Omega, \mathcal{F}_{\infty}, \mathbb{P})$. Take $Y_m=\mathbb{E}(Y \mid \mathcal{F}_m)$, for $m \geq 1$. Clearly, $\left\{Y_m, \mathcal{F}_m, m \geq 1\right\}$ is a $L^2$ bounded MG since $\mathbb{E}(Y_m^2) \leq \mathbb{E}(Y^2)$. Hence, $Y_m \stackrel{L^2}{\longrightarrow} Y_{\infty}$ as $m \to \infty$, where $Y_{\infty} = \mathbb{E}(Y \mid \mathcal{F}_{\infty})=Y$ (see ~\cite[Corollary 5.3.13]{dembo}).
Note that for $i > m$, $\mathbb{E}(X_i \mid \mathcal{F}_m) = \mathbb{E}(X_i)=0$, whereas for $0 \leq i \leq m$, $\mathbb{E}(X_i \mid \mathcal{F}_m) =X_i$. Therefore, for $n>m$,
$$ \mathbb{E}(Y_mZ_n) = \dfrac{1}{\sqrt{n}}\sum_{i=1}^n \mathbb{E}(Y_m X_i) = \dfrac{1}{\sqrt{n}}\sum_{i=1}^n \mathbb{E} \left( \mathbb{E} \left[Y_m X_i \mid \mathcal{F}_m \right] \right)= \dfrac{1}{\sqrt{n}}\sum_{i=1}^n \mathbb{E} \left( Y_m\mathbb{E} \left[ X_i \mid \mathcal{F}_m \right] \right) = \dfrac{1}{\sqrt{n}}\sum_{i=1}^m \mathbb{E} \left( Y_mX_i \right) \to 0, $
as $n \to \infty$. Fix $m \geq 1$. We have
$$ \Big \rvert \mathbb{E}(Y_mZ_n) - \mathbb{E}(YZ_n)\Big \rvert \leq \mathbb{E}\left( |Z_n||Y-Y_m|\right) \leq \sqrt{\mathbb{E}(Z_n^2)\\mathbb{E}(Y-Y_m)^2} =\sqrt{\mathbb{E}(Y-Y_m)^2}.$$
Since, $\mathbb{E}(Y_mZ_n)$ converges to $0$ as $n \to \infty$, the above inequality implies that
$$ \limsup_{n \to \infty} \Big \rvert \mathbb{E}(YZ_n) \Big \rvert \leq \sqrt{\mathbb{E}(Y-Y_m)^2}, \; \forall \; m \geq 1.$$
We already know that the right hand side in the above inequality goes to $0$ as $m \to \infty$. Hence, $\mathbb{E}(YZ_n)=o(1).
\end{enumerate} |
1995-q1 | Prove the following for any random variable $X$ and any constant $\alpha > 0$. (i) $\sum_{n=1}^{\infty} e^{\alpha n} E\{I(X \ge e^{\alpha n})/X\} < \infty$. (ii) $\sum_{n=1}^{\infty} n^{\alpha} E\{I(X \ge n^{\alpha})/X\} < \infty \iff E(X^+)^{1/2} < \infty$. (iii) $\sum_{n=2}^{\infty}(\log n)E\{I(\alpha X \ge \log n)/X\} < \infty \iff E\exp(\alpha X^+) < \infty$. (Hint: Partition $\{X \ge k\}$ suitably.) | egin{enumerate}[label=(\roman*)]
\item Since, \alpha >0,
\begin{align*}
\sum_{n \geq 1} e^{\alpha n} \mathbb{E} X^{-1}\mathbbm{1}(X \geq e^{\alpha n}) &= \sum_{n \geq 1} \sum_{k \geq n} e^{\alpha n} \mathbb{E}X^{-1}\mathbbm{1}(e^{\alpha k} \leq X < e^{\alpha (k+1)}) \\
&= \sum_{k \geq 1} \sum_{n=1}^k e^{\alpha n} \mathbb{E}X^{-1}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\
&= \sum_{k \geq 1} \dfrac{e^{\alpha (k+1)}-e^{\alpha}}{e^{\alpha}-1} \mathbb{E}X^{-1}\mathbbm{1}(e^{\alpha k} \leq X < e^{\alpha (k+1)}) \\
& \leq \sum_{k \geq 1} \dfrac{e^{\alpha (k+1)}-e^{\alpha}}{e^{\alpha}-1} e^{-\alpha k}\mathbb{P}(e^{\alpha k} \leq X < e^{\alpha (k+1)}) \\
& \leq \dfrac{e^{\alpha}}{e^{\alpha}-1} \sum_{k \geq 1} \mathbb{P}(e^{\alpha k} \leq X < e^{\alpha (k+1)}) \leq \dfrac{e^{\alpha}}{e^{\alpha}-1} < \infty,.
\end{align*}
\item Define $H(k,\alpha) := \sum_{j=1}^k j^{\alpha}$. Note that
$$ k^{-\alpha-1}H(k,\alpha) = \dfrac{1}{k}\sum_{j=1}^k \left(\dfrac{j}{k} \right)^{\alpha} \stackrel{k \to \infty}{\longrightarrow} \int_{0}^1 x^{\alpha}\, dx = \dfrac{1}{\alpha+1},$$
and thus $C_1k^{\alpha +1} \leq H(k,\alpha) \leq C_2k^{\alpha+1}$, for all $k \geq 1$ and for some $0 < C_1 <C_2<\infty$.
\begin{align*}
\sum_{n \geq 1} n^{\alpha} \mathbb{E} X^{-1}\mathbbm{1}(X \geq n^{\alpha}) &= \sum_{n \geq 1} \sum_{k \geq n} n^{\alpha} \mathbb{E}X^{-1}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\
&= \sum_{k \geq 1} \sum_{n=1}^k n^{\alpha} \mathbb{E}X^{-1}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha})\\
&= \sum_{k \geq 1} H(k,\alpha) \mathbb{E}X^{-1}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha}).
\end{align*}
Thus
\begin{align*}
\sum_{k \geq 1} H(k,\alpha) \mathbb{E}X^{-1}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha}) & \leq \sum_{k \geq 1} H(k,\alpha)k^{-\alpha} \mathbb{P}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\
& \leq C_2 \sum_{k \geq 1} k^{\alpha +1}k^{-\alpha} \mathbb{P}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\
& = C_2 \sum_{k \geq 1} k \mathbb{P}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\
& \leq C_2 \sum_{k \geq 1} \mathbb{E}X^{1/\alpha}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha}) \leq C_2 \mathbb{E}(X_{+})^{1/\alpha}.
\end{align*}
On the otherhand,
\begin{align*}
\sum_{k \geq 1} H(k,\alpha) \mathbb{E}X^{-1}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha}) & \geq \sum_{k \geq 1} H(k,\alpha)(k+1)^{-\alpha} \mathbb{P}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\
& \geq C_1 \sum_{k \geq 1} k^{\alpha +1}(k+1)^{-\alpha} \mathbb{P}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\
& \geq C_1 \sum_{k \geq 1} 2^{-\alpha-1}(k+1) \mathbb{P}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\
& \geq 2^{-\alpha -1}C_1 \sum_{k \geq 1} \mathbb{E}X^{1/\alpha}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha})\\
&= 2^{-\alpha-1}C_1 \mathbb{E}(X)^{1/\alpha}\mathbbm{1}(X \geq 1) \geq 2^{-\alpha-1}C_1 \left(\mathbb{E}(X_{+})^{1/\alpha} -1 \right).
\end{align*}
Therefore,
$$ \sum_{n \geq 1} n^{\alpha} \mathbb{E} X^{-1}\mathbbm{1}(X \geq n^{\alpha}) < \infty \iff \mathbb{E}(X_{+})^{1/\alpha} < \infty.$$\n\item Define $G(k) := \sum_{j=1}^k \log j$. Note that
$$ G(k) \leq \int_{1}^{k+1} \log x \, dx = (x\log x -x) \Bigg \rvert_{1}^{k+1} = (k+1)\log(k+1) - k \leq C_4 k \log k,$$
whereas
$$ G(k) \geq \int_{0}^{k} \log x \, dx = (x\log x -x) \Bigg \rvert_{0}^{k} = k\log k - k \geq C_3(k+1)\log(k+1),$$
for all $k \geq 2$, for some $0 < C_3 <C_4<\infty$.
\begin{align*}
\sum_{n \geq 2} (\log n) \mathbb{E} X^{-1}\mathbbm{1}(\alpha X \geq \log n) &= \sum_{n \geq 2} \sum_{k \geq n} (\log n) \mathbb{E}X^{-1}\mathbbm{1}( \log (k) \leq \alpha X < \log (k+1)) \\
&= \sum_{k \geq 2} \sum_{n=2}^k (\log n)\mathbb{E}X^{-1}\mathbbm{1}(\log k \leq \alpha X < \log (k+1))\ \\
&= \sum_{k \geq 2} G(k) \mathbb{E}X^{-1}\mathbbm{1}( \log k \leq \alpha X < \log (k+1)).
\end{align*}
Thus
\begin{align*}
\sum_{k \geq 2} G(k) \mathbb{E}X^{-1}\mathbbm{1}( \log k \leq\alpha X < \log (k+1)) & \leq \sum_{k \geq 2} G(k) (\log k)^{-1} \mathbb{P}( \log k \leq \alpha X < \log (k+1)) \\
& \leq C_4 \sum_{k \geq 2} k \mathbb{P}( \log k\leq \alpha X < \log (k+1)) \\
& \leq C_4 \sum_{k \geq 2} \mathbb{E} \exp(\alpha X)\mathbbm{1}(\log k \leq \alpha X < \log (k+1)) \\
&= C_4 \mathbb{E}\exp(\alpha X)\mathbbm{1}(\alpha X \geq \log 2) \leq C_4 \mathbb{E}\exp(\alpha X_{+}).
\end{align*}
On the otherhand,
\begin{align*}
\sum_{k \geq 2} G(k) \mathbb{E}X^{-1}\mathbbm{1}( \log k \leq\alpha X < \log (k+1)) & \geq \sum_{k \geq 2} G(k) (\log (k+1))^{-1} \mathbb{P}( \log k \leq \alpha X < \log (k+1)) \\
& \geq C_3 \sum_{k \geq 2} (k+1) \mathbb{P}( \log k\leq \alpha X < \log (k+1)) \\
& \geq C_3 \sum_{k \geq 2} \mathbb{E} \exp(\alpha X)\mathbbm{1}(\log k \leq \alpha X < \log (k+1)) \\
&= C_3 \mathbb{E}\exp(\alpha X)\mathbbm{1}(\alpha X \geq \log 2) \geq C_3 (\mathbb{E}\exp(\alpha X_{+}) -2).
\end{align*}
Therefore,
$$ \sum_{n \geq 2} (\log n) \mathbb{E} X^{-1}\mathbbm{1}( \alpha X \geq \log n) < \infty \iff \mathbb{E}\exp( \alpha X_{+}) < \infty.$$\n\end{enumerate}
|
1995-q2 | Let $X_1, X_2, \ldots$ be i.i.d. $k$-dimensional random vectors such that $EX_1 = \mu$ and $\text{Cov}(X_1) = V$. Let $S_n = X_1 + \cdots + X_n, \ \bar{X}_n = S_n/n$. Let $f : \mathbb{R}^k \to \mathbb{R}$ be continuously-differentiable in some neighborhood of $\mu$. (i) Show that with probability 1, the lim sup and lim inf of $\sqrt{n}/\log\log n\{f(\bar{X}_n) - f(\mu)\}$ are nonrandom constants. Give formulas for these constants. (ii) Show that $\sqrt{n}/\log\log n\{f(\bar{X}_n) - f(\mu)\}$ converges in probability to a nonrandom constant and identify the constant. | Let $U$ be an open ball around $\mu$ of radius $r$ such that $f$ is continuously differentiable in $U$. Hence, for any $x \in U$, we have by \textit{Mean Value Theorem},
$$ \big \rvert f(x)-f(\mu)-(\nabla f (\mu))^{\prime}(x-\mu)\big \rvert \leq ||x-\mu||_2 \left(\sup_{y \in L_x} || \nabla f(y) - \nabla f(\mu)||_2 \right) \leq ||x-\mu||_2 \left(\sup_{y \in B(\mu,r(x))} || \nabla f(y) - \nabla f(\mu)||_2 \right),$$
where $L_x$ is the line segment connecting $x$ to $\mu$ and $r(x)=||x-\mu||_2.$ $B(y,s)$ is the open ball of radius $s$ around $y$.
Introduce the notation
$$ \Delta(s) := \sup_{y \in B(\mu,s)} || \nabla f(y) - \nabla f(\mu)||_2 , \; \forall \, 0 \leq s <r.$$\nClearly, $s \mapsto \Delta(s)$ is non-decreasing and by continuity of $\nabla f$ on $U$, we have $\Delta(s)\downarrow 0$ as $s \downarrow 0$.
For notational convenience, introduce the notations $X_j := (X_{j,1}, \ldots, X_{j,k})$ for all $j \geq 1$, $\bar{X}_{n,i} = n^{-1}\sum_{j=1}^n X_{j,i}$, for all $n \geq 1$ and $1 \leq i \leq k$; $\mu=(\mu_1, \ldots, \mu_k)$ and finally $V=(V_{ii^{\prime}}).$
\begin{enumerate}[label=(\roman*)]
\item By SLLN, we have $\bar{X}_n \stackrel{a.s.}{\longrightarrow} \mu.$ Thus for a.e.[$\omega$], we can get $n_0(\omega)$ such that $\bar{X}_n(\omega)\in U$ for all $n \geq n_0(\omega)$. Then for all $n \geq n_0(\omega)$,
\begin{equation}{\label{original}}
\dfrac{\sqrt{n}}{\sqrt{\log \log n}} \big \rvert f(\bar{X}_n(\omega))-f(\mu)-(\nabla f (\mu))^{\prime}(\bar{X}_n(\omega)-\mu)\big \rvert \leq \dfrac{\sqrt{n}}{\sqrt{\log \log n}}||\bar{X}_n(\omega)-\mu||_2 \Delta(||\bar{X}_n(\omega)-\mu||_2).
\end{equation}
Note that for all $1 \leq i \leq k$, $\left\{ \sum_{j=1}^n (X_{j,i} - \mu_i), n \geq 0 \right\}$ is a mean-zero RW with step variances being $V_{ii}$. So by \textit{Hartman-Winter LIL},
$$ \liminf_{n \to \infty} \dfrac{n(\bar{X}_{n,i} - \mu_i)}{\sqrt{2 n \log\log n}} = -\sqrt{V_{ii}}, \; \limsup_{n \to \infty} \dfrac{n(\bar{X}_{n,i} - \mu_i)}{\sqrt{2 n \log\log n}} = \sqrt{V_{ii}}, \; \text{almost surely}.$$\nUsing this we can conclude that
\begin{align*}
\limsup_{n \to \infty} \dfrac{\sqrt{n}}{\sqrt{\log \log n}}||\bar{X}_n-\mu||_2 \leq \limsup_{n \to \infty} \sum_{i=1}^k \dfrac{\sqrt{n}}{\sqrt{\log \log n}}|\bar{X}_{n,i}-\mu_i| & \leq \sum_{i=1}^k \limsup_{n \to \infty} \dfrac{\sqrt{n}}{\sqrt{\log \log n}}|\bar{X}_{n,i}-\mu_i| = \sum_{i=1}^k \sqrt{2V_{ii}}.
\end{align*}
On the otherhand, using the SLLN, we get $\Delta(||\bar{X}_n - \mu||_2)$ converges to $0$ almost surely. Combining these two observations with (
ef{original}), we conclude that
\begin{equation}{\label{original2}}
\dfrac{\sqrt{n}}{\sqrt{\log \log n}} \big \rvert f(\bar{X}_n)-f(\mu)-(\nabla f (\mu))^{\prime}(\bar{X}_n-\mu)\big \rvert \stackrel{a.s.}{\longrightarrow} 0.
\end{equation}
Similarly, $\left\{\sum_{j=1}^n(\nabla f(\mu))^{\prime} (X_{j} - \mu), n \geq 0 \right\}$ is a mean-zero RW with step variances being $\sigma^2 = (\nabla f(\mu))^{\prime}V(\nabla f(\mu))$. So by \textit{Hartman-Winter LIL},
\begin{equation}{\label{lil2}}
\liminf_{n \to \infty} \dfrac{n (\nabla f(\mu))^{\prime}(\bar{X}_n - \mu)}{\sqrt{2 n \log\log n}} = - \sigma , \; \; \limsup_{n \to \infty} \dfrac{n (\nabla f(\mu))^{\prime}(\bar{X}_n - \mu)}{\sqrt{2 n \log\log n}} = \sigma, \;\, \text{almost surely}.
\end{equation}
Combining (
ef{original2}) and (
ef{lil2}), we can conclude that
$$\liminf_{n \to \infty} \dfrac{\sqrt{n}(f(\bar{X}_n)-f(\mu))}{\sqrt{ \log\log n}} = -\sqrt{2(\nabla f(\mu))^{\prime}V(\nabla f(\mu))} , \; \; \limsup_{n \to \infty} \dfrac{\sqrt{n}(f(\bar{X}_n)-f(\mu))}{\sqrt{ \log\log n}} = \sqrt{2(\nabla f(\mu))^{\prime}V(\nabla f(\mu))}, \;\, \text{a.s.}$$
\item By CLT, we have
$$ \sqrt{n}(\nabla f (\mu))^{\prime}(\bar{X}_n-\mu) \stackrel{d}{\longrightarrow} N(0,\sigma^2),$$
and hence $\sqrt{n}(\nabla f (\mu))^{\prime}(\bar{X}_n-\mu)/\sqrt{\log \log n} \stackrel{p}{\longrightarrow} 0$. This along with (
ef{original2}) gives
$$ \dfrac{\sqrt{n}(f(\bar{X}_n)-f(\mu))}{\sqrt{ \log\log n}} \stackrel{p}{\longrightarrow} 0.\n$$
\end{enumerate}
|
1995-q3 | Let $X_1, X_2, \ldots$ be i.i.d. random variables with a common distribution function $F$. Let $g : \mathbb{R}^2 \to \mathbb{R}$ be a Borel measurable function such that $\int \int |g(x, y)|dF(x)dF(y) < \infty$ and $\int g(x, y)dF(y) = 0$ for a.e. $x$, $\int g(x, y)dF(x) = 0$ for a.e. $y$, where "a.e." means "almost every" (with respect to $F$). Let $S_n = \sum_{1 \le i < j \le n} g(X_i, X_j)$, and let $\mathcal{F}_n$ be the $\sigma$-algebra generated by $X_1, \ldots, X_n$. (i) Show that $\{S_n, n \ge 2\}$ is a martingale with respect to $\{\mathcal{F}_n\}$. (ii) Suppose that $\int \int g^2(x, y)dF(x)dF(y) = A < \infty$. Evaluate $ES_n^2$ in terms of $A$ and show that $E(\max_{j \le S_n} S_j^2) \le 2n(n-1)A$. (iii) For $n \ge 2$, let $\mathcal{G}_{-n}$ be the $\sigma$-algebra generated by $S_n, S_{n+1}, \ldots$ Show that $E(g(X_1, X_2)|\mathcal{G}_{-n}) = S_n/\binom{n}{2}$ a.s. Hence show that $S_n/n^2$ converges to 0 a.s. and in $L_1$. | The conditions given on $g$ can be re-written as follows. For all $i \neq j$,
$$ \mathbb{E}|g(X_i,X_j)| < \infty, \; \; \mathbb{E}\left[g(X_i,X_j)\mid X_i\right] = 0 = \mathbb{E}\left[ g(X_i,X_j) \mid X_j\right], \; \text{almost surely}.$$\nThus also implies that $\mathbb{E}g(X_i,X_j)=0.$\nWe have $X_i$'s to be \textit{i.i.d.} and $S_n = \sum_{1 \leq i < j \leq n} g(X_i,X_j)$. Also $\mathcal{F}_n := \sigma(X_i : i \leq n)$.\n\begin{enumerate}[label=(\roman*)]
\item Clearly, $S_n \in m\mathcal{F}_n$ and $\mathbb{E}|S_n| < \infty$ since $\mathbb{E}|g(X_i,X_j)| < \infty$ for all $i \neq j$. Note that for $n \geq 2$,
\begin{align*}
\mathbb{E}(S_n \mid \mathcal{F}_{n-1}) = S_{n-1} + \sum_{j=1}^{n-1} \mathbb{E}(g(X_j,X_n) \mid \mathcal{F}_{n-1}) \stackrel{(i)}{=} S_{n-1} + \sum_{j=1}^{n-1} \mathbb{E}(g(X_j,X_n) \mid X_j) = S_{n-1}, \; \text{almost surely},
\end{align*}
where $(i)$ follows from the fact that $(X_j,X_n) \perp \!
\!\\perp \left\{X_i : i \neq j,n\right\}$. Therefore, $\left\{S_n, \mathcal{F}_n, n \geq 2\right\}$ is a MG.
\item The condition given can be re-written as $\mathbb{E}g(X_i,X_j)^2 =: A< \infty$ for all $i \neq j$. Under this condition $S_n$ is clearly square-integrable. Take $1 \leq i_1<j_1 \leq n$ and $1 \leq i_2<j_2 \leq n$. If the sets $\left\{i_1,j_1\right\}$ and $\left\{i_2,j_2\right\}$ are disjoint then $g(X_{i_1}, X_{j_1})$ and $g(X_{i_2},X_{j_2})$ are independent with mean $0$ and therefore $\mathbb{E} g(X_{i_1}, X_{j_1})g(X_{i_2},X_{j_2})=0$. On the otherhand, if $i_1=i_2$ but $j_1 \neq j_2$, then
$$ \mathbb{E}[g(X_{i_1},X_{j_1})g(X_{i_1},X_{j_2})] = \mathbb{E} \left(\mathbb{E}[g(X_{i_1},X_{j_1})g(X_{i_1},X_{j_2}) \mid X_{i_1}]
ight) = \mathbb{E} \left(\mathbb{E}[g(X_{i_1},X_{j_1}) \mid X_{i_1}]\mathbb{E}[g(X_{i_1},X_{j_2}) \mid X_{i_1}]
ight)=0,$$
where the above holds since conditioned on $X_{i_1}$, we have $g(X_{i_1},X_{j_1})$ and $g(X_{i_1},X_{j_2})$ are independent. Similar calculations hold true for other situations satisfying $|\left\{i_1,j_1\right\} \cap \left\{i_2,j_2\right\}|=1$. Therefore,
$$ \mathbb{E}S_n^2 = \sum_{1 \leq i < j \leq n} \mathbb{E}g(X_i,X_j)^2 = {n \choose 2} A.$$
Since, $\left\{S_n, \mathcal{F}_n, n \geq 2\right\}$ is a $L^2$-MG, using \textit{Doob's Maximal Inequality}, we get
$$ \mathbb{E}\left( \max_{j \leq n} S_j^2 \right) \leq 4\mathbb{E}S_n^2 = 4{n \choose 2} A = 2n(n-1)A.\n$$
\item Let $\mathcal{G}_{-n}=\sigma(S_m : m \geq n)$. Since, $X_i$'s are i.i.d., this symmetry implies that
$$(X_1,X_2, \ldots,) \stackrel{d}{=}(X_{\pi(1)},X_{\pi(2)},\ldots, X_{\pi(n)}, X_{n+1}, \ldots),$$
for all $\pi$, permutation on $\left\{1, \ldots,n\right\}$. Taking $1 \leq i <j \leq n$ and a permutation $\pi$ such that $\pi(1)=i, \pi(2)=j$, we can conclude that $\mathbb{E}(g(X_i,X_j)\mid \mathcal{G}_{-n}) = \mathbb{E}(g(X_1,X_2)\mid \mathcal{G}_{-n})$ for all $1 \leq i \neq j \leq n.$ Summing over all such pairs, we get
$$ S_n = \mathbb{E}\left[ S_n \mid \mathcal{G}_{-n}\right] = \sum_{1 \leq i < j \leq n} \mathbb{E}(g(X_i,X_j) \mid \mathcal{G}_{-n}) = {n \choose 2} \mathbb{E}(g(X_1,X_2)\mid \mathcal{G}_{-n}),$$
implying that
$$ \mathbb{E}(g(X_1,X_2)\mid \mathcal{G}_{-n}) = \dfrac{S_n}{{n \choose 2}}, \; \text{almost surely}.$$\nNote that $\mathcal{G}_{-n} \subseteq \mathcal{E}_n$ where $\mathcal{E}_n$ is the $\sigma$-algebra, contained in $\mathcal{F}_{\infty}$, containing all the events which are invariant under permutation of first $n$ co-ordinates of $(X_1, X_2, \ldots)$. We know that $\mathcal{E}_n \downarrow \mathcal{E}$, which is the exchangeable $\sigma$-algebra. By \textit{Hewitt-Savage 0-1 law}, $\mathcal{E}$ is trivial in this case and hence so is $\mathcal{G}_{-\infty}$ where $\mathcal{G}_{-n} \downarrow \mathcal{G}_{-\infty}$. By \textit{Levy's downward theorem}, we conclude that
$$ \dfrac{S_n}{{n \choose 2}} = \mathbb{E}(g(X_1,X_2)\mid \mathcal{G}_{-n}) \stackrel{a.s., \; L^1}{\longrightarrow}\mathbb{E}(g(X_1,X_2)\mid \mathcal{G}_{-\infty}) = \mathbb{E}g(X_1,X_2)=0.$$\nHence, $n^{-2}S_n$ converges to $0$ almost surely and in $L^1$. \end{enumerate}
|
1995-q4 | Let $Z_1, Z_2, \ldots$ be independent standard normal random variables and let $\mathcal{F}_n$ be the $\sigma$-algebra generated by $Z_1, Z_2, \cdots, Z_n$. Let $U_n$ be $\mathcal{F}_{n-1}$-measurable random variables such that $$n^{-1} \sum_{j=1}^{n} U_j^2 \to 1 \text{ a.s.}$$ Let $S_n = \sum_{j=1}^{n} U_j Z_j$. (i) Show that $S_n/n \to 0 \text{ a.s.}$ (ii) Let $i = \sqrt{-1}$. Show that $$E\{\exp(i\theta S_n + \frac{1}{2} \theta^2 \sum_{j=1}^{n} U_j^2)\} = 1 \text{ for all } \theta \in \mathbb{R}.$$ (iii) Show that $S_n/\sqrt{n}$ converges in distribution to a standard normal random variable. (Hint: Use (ii).) | Suppose $x_n$ be a sequence of non-negative real numbers such that $n^{-1}\sum_{k =1}^n x_k \to 1$ as $n \to \infty$. Then
$$ \dfrac{x_n}{n} = \dfrac{1}{n}\sum_{j=1}^n x_j - \dfrac{n-1}{n}\dfrac{1}{n-1}\sum_{j=1}^{n-1} x_j \longrightarrow 1-1 =0.$$\nLet $b_n$ be a non-decreasing sequence of natural numbers such that $1 \leq b_n \leq n$ and $x_{b_n} = \max_{1 \leq k \leq n} x_k$. If $b_n$ does not go to \infty, then it implies that the sequence $x_k$ is uniformly bounded and hence $\max_{1 \leq k \leq n} x_k =O(1)=o(n)$. Otherwise $b_n \uparrow \infty$ and hence $ x_{b_n}/b_n \to 0$ as $n \to \infty$. Note that
$$ \dfrac{\max_{1 \leq k \leq n} x_k}{n} = \dfrac{x_{b_n}}{n} \leq \dfrac{x_{b_n}}{b_n} \longrightarrow 0.$$\nThus in both cases $\max_{1 \leq k \leq n} x_k =o(n).$\n\n\nWe have \textit{i.i.d.} standard normal variables $Z_1, Z_2, \ldots$ and $\mathcal{F}_n = \sigma(Z_i : i \leq n)$. We have $U_i \in m \mathcal{F}_{i-1}$ for all $i \geq 1$ and
$$ \dfrac{1}{n}\sum_{j=1}^n U_j^2 \stackrel{a.s.}{\longrightarrow} 1.$$\nFrom our previous discussion this implies that $n^{-1}U_n^2, n^{-1}\max_{1 \leq k \leq n} U_k^2 \stackrel{a.s.}{\longrightarrow}0.$\n\begin{enumerate}[label=(\roman*)]
\item Set $V_j = U_j \mathbbm{1}(|U_j| \leq \sqrt{j})$ for all $j \geq 1$. Since $U_n^2/n$ converges to $0$ almost surely, we can say that with probability $1$, $U_j=V_j$ for all large enough $n$. More formally, if we define $T := \sup \left\{j \geq 1 : U_j \neq V_j\right\}$ (with the convention that the supremum of an empty set is $0$), then $T< \infty$ with probability $1$.
\n Since $V_j \in m\mathcal{F}_{j-1}$ and is square-integrable, we can conclude that $\left\{S_n^{\prime}:= \sum_{j=1}^n V_jZ_j, \mathcal{F}_n, n \geq 0\right\}$ is a $L^2$-MG, with $S^{\prime}_0=0$ and predictable compensator process
$$ \langle S^{\prime} \rangle _n = \sum_{j=1}^n \mathbb{E}(V_j^2Z_j^2 \mid \mathcal{F}_{j-1}) = \sum_{j=1}^n V_j^2= \sum_{j \leq T \wedge n} (V_j^2-U_j^2) + \sum_{j=1}^n U_j^2. $$
Since the first term in the last expression is finite almost surely, we have $n^{-1}\langle S^{\prime} \rangle _n $ converging to $1$ almost surely. This shows that $\langle S^{\prime} \rangle _{\infty}=\infty$ almost surely and hence
$$ \dfrac{S_n^{\prime}}{\langle S^{\prime} \rangle _n } \stackrel{a.s.}{\longrightarrow} 0 \Rightarrow \dfrac{S_n^{\prime}}{ n} \stackrel{a.s.}{\longrightarrow} 0 \Rightarrow \dfrac{\sum_{j \leq T \wedge n} (V_jZ_j-U_jZ_j)}{n} + \dfrac{\sum_{j=1}^n U_jZ_j}{n} \stackrel{a.s.}{\longrightarrow} 0 \Rightarrow \dfrac{S_n}{n} =\dfrac{\sum_{j=1}^n U_jZ_j}{n} \stackrel{a.s.}{\longrightarrow} 0. $$
\item To make sense of this problem, we need to assume that
$$ \mathbb{E} \exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right) < \infty,\; \; \forall \; \theta \in \mathbb{R}, \; \forall \, n \geq 1.$$\nIn that case we clearly have
$$ \Bigg \rvert \exp(i \theta S_n)\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right)\Bigg \rvert \leq \exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right) \in L^1,$$
and hence observing that $U_nZ_n \mid \mathcal{F}_{n-1} \sim N(0, U_n^2)$, we can write
\begin{align*}
\mathbb{E} \left[ \exp(i \theta S_n)\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right) \Bigg \rvert \mathcal{F}_{n-1}\right] &= \exp(i \theta S_{n-1})\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right) \mathbb{E} \exp(i\theta U_n Z_n \mid \mathcal{F}_{n-1}) \\
&= \exp(i \theta S_{n-1})\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right) \exp \left( -\dfrac{1}{2} \theta^2 U_n^2\right) \\
&= \exp(i \theta S_{n-1})\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^{n-1} U_j^2 \right).
\end{align*}
Therefore, taking expectations on both sides we get
$$ \mathbb{E} \left[ \exp(i \theta S_n)\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right)\right] = \mathbb{E} \left[ \exp(i\theta S_{n-1})\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^{n-1} U_j^2 \right) \right].$$
Applying the recursion $n$ times we get
$$ \mathbb{E} \left[ \exp(i \theta S_n)\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right)\right] = \mathbb{E}\exp(i\theta S_0)=1.$$\n
\item We shall prove that $S_n^{\prime}/\sqrt{n}$ converges weakly to standard Gaussian variable. Since $|S_n-S_{n}^{\prime}| \leq \sum_{k=1}^T |V_kZ_k|$ for all $n \geq 1$, and $T$ is almost surely finite, we have $(S_n-S_n^{\prime})/\sqrt{n}$ converging to $0$ almost surely. Using \textit{Slutsky's Theorem}, we can conclude that $S_n/\sqrt{n}$ converges weakly to standard Gaussian variable.
We have already observed that $\left\{S_n^{\prime}, \mathcal{F}_n, n \geq 0 \right\}$ is a $L^2$-Martingale with $S_0^{\prime} :=0$. Its predictable compensator process satisfies for all $0 \leq t \leq 1$,
$$ n^{-1}\langle S^{\prime} \rangle_{\lfloor nt \rfloor} = \dfrac{1}{n} \sum_{k=1}^{\lfloor nt \rfloor} V_k^2 = \dfrac{\lfloor nt \rfloor}{n} \dfrac{1}{\lfloor nt \rfloor}\sum_{k=1}^{\lfloor nt \rfloor} V_k^2 \stackrel{a.s.}{\longrightarrow} t.$$\nThis comes from the fact that $n^{-1}\langle S^{\prime} \rangle_n = n^{-1}\sum_{k=1}^n V_k^2$ converges to $1$ almost surely, which we proved in part (a). This also implies ( from our discussion at the very start of the solution) that $n^{-1}\max_{1 \leq k \leq n} V_k^2$ converges almost surely to $0$. Therefore, for any $\varepsilon >0$,
\begin{align*}
\dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[(S^{\prime}_k-S^{\prime}_{k-1})^2; |S^{\prime}_k-S^{\prime}_{k-1}| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] &= \dfrac{1}{n} \sum_{k=1}^n V_k^2\mathbb{E} \left[Z_k^2; |V_kZ_k| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] \\
& \leq \dfrac{1}{n} \sum_{k=1}^n V_k^2\mathbb{E} \left[|Z_k|^2 |V_kZ_k|^2\varepsilon^{-2}n^{-1} \mid \mathcal{F}_{k-1} \right] \\
& \leq \varepsilon^{-2} (\mathbb{E}Z^4) \left( \dfrac{1}{n^{2}}\sum_{k=1}^n V_k^4 \right) \\
& \leq \varepsilon^{-2} (\mathbb{E}Z^4) \left( \dfrac{1}{n}\max_{k=1}^n V_k^2 \right)\left( \dfrac{1}{n}\sum_{k=1}^n V_k^2 \right),
\end{align*}
where $Z$ is a standard Gaussian variable. Clearly the last expression goes to $0$ almost surely. Then we have basically proved the required conditions needed to apply \textit{Martingale CLT}. Define,
$$ \widehat{S}_n^{\prime}(t) := \dfrac{1}{\sqrt{n}} \left[S^{\prime}_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(S^{\prime}_{\lfloor nt \rfloor +1}-S^{\prime}_{\lfloor nt \rfloor}) \right]=\dfrac{1}{\sqrt{n}} \left[S^{\prime}_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)V_{\lfloor nt \rfloor +1}Z_{\lfloor nt \rfloor +1} \right], $$
for all $0 \leq t \leq 1$. We then have $\widehat{S}^{\prime}_n \stackrel{d}{\longrightarrow} W$ as $n \to \infty$ on the space $C([0,1])$, where $W$ is the standard Brownian Motion on $[0,1]$.
Since, $\mathbf{x} \mapsto x(1)$ is a continuous function on $C([0,1])$, we have
$$ n^{-1/2}S^{\prime}_n = \widehat{S}^{\prime}_n(1) \stackrel{d}{\longrightarrow} W(1) \sim N(0,1),$$
which completes our proof.
\end{enumerate}
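\textit{Numerical illustration (not part of the original solution).} The Python sketch below checks the conclusion of part (c) by Monte Carlo. The predictable coefficients used here are a hypothetical choice, $V_j^2 = 2\mathbbm{1}(Z_{j-1}>0)$, which satisfies $n^{-1}\sum_{j \leq n} V_j^2 \to 1$ almost surely, so $S^{\prime}_n/\sqrt{n}$ should be approximately standard normal for large $n$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5000, 2000
stats = np.empty(reps)
for r in range(reps):
    Z = rng.standard_normal(n + 1)
    # predictable coefficients: V_j depends only on Z_0,...,Z_{j-1};
    # here V_j^2 = 2*1(Z_{j-1} > 0), so n^{-1} sum_j V_j^2 -> 1 a.s. by the SLLN
    V = np.sqrt(2.0) * (Z[:-1] > 0)
    stats[r] = np.sum(V * Z[1:]) / np.sqrt(n)
print("sample mean:", stats.mean())   # should be close to 0
print("sample var :", stats.var())    # should be close to 1
\end{verbatim}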
|
1995-q5 | Let $X_1, X_2, \ldots$ be independent random variables such that $EX_n = \mu > 0$ for all $n$ and $\sup_n E|X_n|^p < \infty$ for some $p > 2$. Let $S_n = X_1 + \cdots + X_n$ and $$N_a = \inf \{n : S_n \ge a\}.$$ (i) Show that $\lim_{a\to \infty} a^{-1} N_a = 1/\mu \text{ a.s.}$ (ii) Show that $\{(S_n - n\mu)/\sqrt{n}^p, n \ge 1\}$ is uniformly integrable and hence conclude that $E|S_n - n\mu|^p = O(n^{p/2})$. (iii) Show that $\{a^{-1}N_a, a \ge 1\}$ is uniformly integrable and hence evaluate $EN_a$ asymptotically as $a \to \infty$. (iv) Show that for every $\epsilon > 0$, there exists $\delta > 0$ such that for all large $a$, $$P\{ \max_{n| n-a/\mu | \le \delta a} |(S_n - n\mu) - (S_{[a/\mu]} - [a/\mu]\mu)| \ge \epsilon\sqrt{a} \} \le \epsilon.$$ (v) Suppose that $n^{-1} \sum^{n} \text{Var}(X_i) \to \sigma^2 > 0$ as $n \to \infty$. Show that as $a \to \infty$, $(S_{N_a} - \mu N_a)/(\sigma\sqrt{N_a})$ has a limiting standard normal distribution. (Hint: Make use of (iv).) | We have $X_i$'s to be independent random variables with common mean $\mu >0$ and $\sup_{n} \mathbb{E}|X_n|^p < \infty$ for some $p>2$. Let $S_n = \sum_{k =1}^n X_k$ and $N_a = \inf \left\{n : S_n \geq a\right\}$. $S_0:=0$\n\begin{enumerate}[label=(\roman*)]
\item The condition $\sup_{n} \mathbb{E}|X_n|^p < \infty$ for some $p>2$ guarantees that $\sigma^2 = \sup_{n} \mathbb{E}X_n^2 < \infty$. Hence we apply \textit{Kolmogorov SLLN} to conclude that $(S_n-\mathbb{E}(S_n))/n$ converges almost surely to $0$, i.e. $S_n/n$ converges almost surely to $\mu$. This implies that $S_n$ converges to $\infty$ almost surely and hence $N_a < \infty$ almost surely for all $a>0$.
Note that $N_a$ is non-decreasing in $a$ and $N_a \uparrow \infty$, almost surely if $a \uparrow \infty$. This is true since otherwise there exists $n \in \mathbb{N}$ such that $S_n =\infty$ and we know $\mathbb{P}(S_n = \infty, \; \text{for some } n) = \sum_{n \geq 1} \mathbb{P}(S_n=\infty)=0$.
Now observe that we have $S_{N_a-1} \leq a \leq S_{N_a}$ for all $a >0$. Since, $N_a \uparrow \infty$ almost surely and $S_n/n \to \mu$ almost surely, we can conclude that $S_{N_a}/(N_a), S_{N_a-1}/N_a \longrightarrow \mu$, almost surely. Therefore, $a/N_a$ converges to $\mu >0$, almost surely and hence $a^{-1}N_a \stackrel{a.s.}{\longrightarrow} \mu^{-1}.$
\item Let $\mathcal{F}_n := \sigma(X_k : k \leq n)$ for all $n \geq 0$. It is easy to see that $\left\{M_n :=S_n-n\mu, \mathcal{F}_n, n \geq 0\right\}$ is a MG with quadratic variation being
$$ [M]_n = \sum_{k =1}^n (M_k-M_{k-1})^2 = \sum_{k =1}^n (X_k-\mu)^2.$$
Since $p >2$, using \textit{Burkholder's Inequality}, we conclude that for some constant $C_p < \infty$ depending only on $p$,
\begin{align*}
\mathbb{E} \left[\sup_{1 \leq k \leq n} |M_k|^p \right] \leq C_p\mathbb{E}[M]_n^{p/2} = C_p\mathbb{E} \left[ \sum_{k =1}^n (X_k-\mu)^2\right]^{p/2} \leq C_pn^{p/2-1} \sum_{k =1}^n \mathbb{E}|X_k-\mu|^p &\leq C_pn^{p/2} \sup_{k \geq 1} \mathbb{E}|X_k-\mu|^p \\
&\leq C_pn^{p/2} \left[ 2^{p-1}\sup_{k \geq 1} \mathbb{E}|X_k|^p + 2^{p-1}\mu^p\right].
\end{align*}
Here we have used the inequality that $(\sum_{k =1}^n y_k/n)^q \leq \sum_{k =1}^n y_k^q/n$ for all $y_k$'s non-negative and $q>1$. Therefore, $\mathbb{E}\left( \max_{k=1}^n |M_k|^p\right) \leq C n^{p/2}$, for some $C < \infty$. In particular $\mathbb{E}|M_n|^p \leq Cn^{p/2}$. Since, $(X_i-\mu)$'s are independent mean zero variables, we apply Corollary~\ref{cor} and get the following upper bound. For any $t >0$,
\begin{align*}
\mathbb{P} \left( \max_{k=1}^n |M_k|^p \geq n^{p/2}t \right) = \mathbb{P} \left( \max_{k=1}^n |M_k| \geq \sqrt{n}t^{1/p} \right) &\leq \mathbb{P} \left( \max_{k=1}^n |X_k-\mu| \geq \dfrac{\sqrt{n}t^{1/p}}{4} \right) + 4^{2p}(\sqrt{n}t^{1/p})^{-2p}(\mathbb{E}|M_n|^p)^2 \\
&\leq \mathbb{P} \left( \max_{k=1}^n |X_k-\mu|^p \geq \dfrac{n^{p/2}t}{4^p} \right) + C^24^{2p}t^{-2}.
\end{align*}
Therefore, for any $y \in (0,\infty)$, we have
\begin{align*}
\mathbb{E}\left( n^{-p/2} \max_{k=1}^n |M_k|^p; n^{-p/2} \max_{k=1}^n |M_k|^p \geq y \right) &= \int_{0}^{\infty} \mathbb{P} \left( n^{-p/2}\max_{k=1}^n |M_k|^p \geq \max(t,y) \right) \, dt \\
& \leq \int_{0}^{\infty} \mathbb{P} \left( \dfrac{4^p \max_{k=1}^n |X_k-\mu|^p}{n^{p/2}} \geq t \vee y \right) \, dt + C^24^{2p} \int_{0}^{\infty} (t \vee y)^{-2}\, dt \\
& = \mathbb{E}\left( 4^pn^{-p/2} \max_{k=1}^n |X_k-\mu|^p; 4^pn^{-p/2} \max_{k=1}^n |X_k-\mu|^p \geq y \right) + \dfrac{2C^24^{2p}}{y}.
\end{align*}
Note that,
$$ \mathbb{E}\left( 4^pn^{-p/2} \max_{k=1}^n |X_k-\mu|^p; 4^pn^{-p/2} \max_{k=1}^n |X_k-\mu|^p \geq y \right) \leq 4^pn^{-p/2}\sum_{k=1}^n \mathbb{E}|X_k-\mu|^p \leq C_1n^{1-p/2},$$
for some $C_1 < \infty$. Since $p >2$ and for any $N \in \mathbb{N}$, the finite collection $\left\{4^pn^{-p/2} \max_{k=1}^n |X_k-\mu|^p : 1 \leq n \leq N\right\}$ is uniformly integrable (since each of them is integrable), we get the following.
\begin{align*}
\sup_{n \geq 1} \mathbb{E}\left( \dfrac{ \max_{k=1}^n |M_k|^p}{n^{p/2}}; \dfrac{ \max_{k=1}^n |M_k|^p}{n^{p/2}} \geq y \right) \leq \sup_{n=1}^N \mathbb{E}\left( \dfrac{4^p \max_{k=1}^n |X_k-\mu|^p}{n^{p/2}}; \dfrac{4^p \max_{k=1}^n |X_k-\mu|^p}{n^{p/2}} \geq y \right) &+ C_1N^{1-p/2} \\&+\dfrac{2C^24^{2p}}{y}.
\end{align*}
Take $y \to \infty$ to get
$$ \lim_{y \to \infty} \sup_{n \geq 1} \mathbb{E}\left( \dfrac{ \max_{k=1}^n |M_k|^p}{n^{p/2}}; \dfrac{ \max_{k=1}^n |M_k|^p}{n^{p/2}} \geq y \right) \leq C_1N^{1-p/2},$$
for all $N \in \mathbb{N}.$ Taking $N \to \infty$, we conclude that the collection $\left\{n^{-p/2}\max_{k=1}^n |M_k|^p : n \geq 1\right\}$ is uniformly integrable and hence so is $\left\{n^{-p/2}|M_n|^p=n^{-p/2}|S_n-n\mu|^p : n \geq 1\right\}$.\n
We have already established that $\mathbb{E}|S_n-n\mu|^p = \mathbb{E}|M_n|^p = O(n^{p/2})$.
\n\item Fix any $a >0$. Then for any $y >\mu^{-1}+a^{-1}$, we have $\mu\lfloor ay \rfloor \geq \mu(ay-1) \geq a$ and hence
\begin{align*}
\mathbb{P}(a^{-1}N_a > y) \leq \mathbb{P}(S_{\lfloor ay \rfloor} < a) \leq \mathbb{P}(|M_{\lfloor ay \rfloor}| \geq \mu\lfloor ay \rfloor - a) \leq \dfrac{\mathbb{E}|M_{\lfloor ay \rfloor}|^p}{(\mu\lfloor ay \rfloor - a)^p} \leq \dfrac{C_2({\lfloor ay \rfloor})^{p/2}}{(\mu\lfloor ay \rfloor - a)^p} \leq \dfrac{C_2y^{p/2}a^{p/2}}{(\mu ay - \mu -a)^p}.
\end{align*}
Now fix $a \geq 1$. For any $y \geq 2\mu^{-1}+2$, we have $\mu ay \geq 2a+2\mu$ and hence
$$ \mathbb{P}(a^{-1}N_a > y) \leq \dfrac{C_2y^{p/2}a^{p/2}}{2^{-p}(\mu a y)^{p}} \leq C_3 a^{-p/2}y^{-p/2} \leq C_3 y^{-p/2},$$\nfor some finite constant $C_3$ that does not depend on $a \geq 1$ or $y$. Since $p>2$, take $r \in (1,p/2)$. Then
$$ \mathbb{E}a^{-r}N_a^r = \int_{0}^{\infty} \mathbb{P}(a^{-1}N_a > y^{1/r})\, dy \leq (2\mu^{-1}+2)^r + \int_{(2\mu^{-1}+2)^r}^{\infty} C_3y^{-p/(2r)}\, dy =: C_4 < \infty.$$
Thus $\left\{a^{-1}N_a : a \geq 1\right\}$ is $L^r$-bounded for some $r>1$ and hence $\left\{a^{-1}N_a : a \geq 1\right\}$ is uniformly integrable. Part (i) and \textit{Vitali's Theorem} now imply that $a^{-1}\mathbb{E}N_a \longrightarrow \mu^{-1}$ as $a \to \infty$.
\n\item First note that $\mathbb{E}|M_k-M_l|^p \leq C |k-l|^{p/2}$ for any $k,l \geq 1$, where $C$ is a finite constant not depending upon $k,l$. The proof of this is exactly the same as in the first part of (ii). \n
\nFix any $\varepsilon, \delta >0$. We shall apply \textit{Doob's Inequality}.
\begin{align*}
\mathbb{P}\left( \max_{n: \mu^{-1}a \leq n \leq \mu^{-1}a+\delta a} \Big \rvert M_n - M_{\lfloor \mu^{-1}a \rfloor}\Big \rvert \geq \varepsilon \sqrt{a}\right) &\leq \varepsilon^{-p}a^{-p/2}\mathbb{E}\Big \rvert M_{\lfloor \mu^{-1}a +\delta a \rfloor}- M_{\lfloor \mu^{-1}a \rfloor}\Big \rvert^p \\
& \leq \varepsilon^{-p}a^{-p/2} C \Big \rvert \lfloor \mu^{-1}a+\delta a \rfloor-\lfloor \mu^{-1}a \rfloor\Big \rvert^{p/2} \\
& \leq C\varepsilon^{-p}a^{-p/2} (\delta a+1)^{p/2} = C\varepsilon^{-p}(\delta + a^{-1})^{p/2}.
\end{align*}
Similarly,
\begin{align*}
\mathbb{P}\left( \max_{n: \mu^{-1}a-\delta a \leq n \leq \mu^{-1}a} \Big \rvert M_{\lfloor \mu^{-1}a \rfloor}-M_n\Big \rvert \geq \varepsilon \sqrt{a}\right) &\leq \varepsilon^{-p}a^{-p/2}\mathbb{E}\Big \rvert M_{\lfloor \mu^{-1}a \rfloor}-M_{\lfloor \mu^{-1}a-\delta a \rfloor}\Big \rvert^p \\
& \leq \varepsilon^{-p}a^{-p/2} C \Big \rvert \lfloor \mu^{-1}a \rfloor-\lfloor \mu^{-1}a-\delta a \rfloor\Big \rvert^{p/2} \\
& \leq C\varepsilon^{-p}a^{-p/2} (\delta a+1)^{p/2} = C\varepsilon^{-p}(\delta + a^{-1})^{p/2}.
\end{align*}
Combining them we get
$$ \mathbb{P}\left( \max_{n: | n - \mu^{-1}a| \leq \delta a} \Big \rvert M_{\lfloor \mu^{-1}a \rfloor}-M_n\Big \rvert \geq \varepsilon \sqrt{a}\right) \leq 2C\varepsilon^{-p}(\delta + a^{-1})^{p/2}.$$\nChoose $\delta >0$ such that $2C\varepsilon^{-p}\delta^{p/2} \leq \varepsilon/2,$ i.e., $0 < \delta < \left(\varepsilon^{p+1}/(4C) \right)^{2/p}.$ Then
$$ \limsup_{a \to \infty} \mathbb{P}\left( \max_{n: | n - \mu^{-1}a| \leq \delta a} \Big \rvert M_n- M_{\lfloor \mu^{-1}a \rfloor}\Big \rvert \geq \varepsilon \sqrt{a}\right) \leq \varepsilon/2,$$
which is more than what we are asked to show.
\item First we shall apply \textit{Lyapounov's Theorem} to establish the CLT for the sequence $S_n$. Let $v_n = \operatorname{Var}(S_n) = \sum_{k=1}^n \operatorname{Var}(X_k) \sim n \sigma^2$. Moreover,
\begin{align*}
v_n^{-p/2} \sum_{k=1}^n \mathbb{E}|X_k-\mu|^p \leq v_n^{-p/2} n2^{p-1}\left(\sup_{k \geq 1} \mathbb{E}|X_k|^p + \mu^p \right) \longrightarrow 0,
\end{align*}
since $p>2$ and $v_n\sim n\sigma^2$ with $\sigma>0$. Thus $(S_n-n\mu)/\sqrt{v_n}$ converges weakly to standard Gaussian distribution which in turn implies that $M_n/(\sigma \sqrt{n})=(S_n-n\mu)/(\sigma \sqrt{n})$ converges weakly to standard Gaussian distribution.
Fix $x \in \mathbb{R}$, $\varepsilon >0$ and get $\delta>0$ as in part (iv). Then
\begin{align*}
\mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} \leq x\right) &\leq \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} \leq x, \Big \rvert \dfrac{N_a}{a}-\dfrac{1}{\mu} \Big \rvert \leq \delta\right) + \mathbb{P} \left( \Big \rvert \dfrac{N_a}{a}-\dfrac{1}{\mu} \Big \rvert > \delta\right) \\
& \leq \mathbb{P} \left( \dfrac{M_{\lfloor \mu^{-1}a \rfloor}}{\sigma \sqrt{a}} \leq x + \dfrac{\varepsilon}{\sigma} \right) + \mathbb{P}\left( \max_{n: | n - \mu^{-1}a| \leq \delta a} \Big \rvert M_n- M_{\lfloor \mu^{-1}a \rfloor}\Big \rvert \geq \varepsilon \sqrt{a}\right) + \mathbb{P} \left( \Big \rvert \dfrac{N_a}{a}-\dfrac{1}{\mu} \Big \rvert > \delta\right).
\end{align*}
Take $a \to \infty$. By the CLT established above, the first term converges to $\mathbb{P}\left(N(0, \mu^{-1}) \leq x+ \sigma^{-1}\varepsilon\right).$ The last term converges to $0$ and the second term has limit superior at most $\varepsilon/2$ by part (iv). Therefore,
$$\limsup_{a \to \infty} \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} \leq x\right) \leq \mathbb{P}\left(N(0, \mu^{-1}) \leq x+ \sigma^{-1}\varepsilon\right) + \varepsilon. $$
Take $\varepsilon \downarrow 0$ to conclude that
$$ \limsup_{a \to \infty} \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} \leq x\right) \leq \mathbb{P}\left(N(0, \mu^{-1}) \leq x\right).$$
Similarly,
\begin{align*}
\mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} > x\right) &\leq \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} > x, \Big \rvert \dfrac{N_a}{a}-\dfrac{1}{\mu} \Big \rvert \leq \delta\right) + \mathbb{P} \left( \Big \rvert \dfrac{N_a}{a}-\dfrac{1}{\mu} \Big \rvert > \delta\right) \\
& \leq \mathbb{P} \left( \dfrac{M_{\lfloor \mu^{-1}a \rfloor}}{\sigma \sqrt{a}} > x - \dfrac{\varepsilon}{\sigma} \right) + \mathbb{P}\left( \max_{n: | n - \mu^{-1}a| \leq \delta a} \Big \rvert M_n- M_{\lfloor \mu^{-1}a \rfloor}\Big \rvert \geq \varepsilon \sqrt{a}\right) + \mathbb{P} \left( \Big \rvert \dfrac{N_a}{a}-\dfrac{1}{\mu} \Big \rvert > \delta\right).
\end{align*}
Take $a \to \infty$ and get
$$\limsup_{a \to \infty} \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} > x\right) \leq \mathbb{P}\left(N(0, \mu^{-1}) > x- \sigma^{-1}\varepsilon\right) + \varepsilon. $$
Take $\varepsilon \downarrow 0$ to conclude that
$$ \limsup_{a \to \infty} \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} > x\right) \leq \mathbb{P}\left(N(0, \mu^{-1}) > x\right).$$
Combining previous observations we can conclude that $M_{N_a}/\sigma \sqrt{a}$ converges weakly to $N(0,\mu^{-1})$ distribution. Apply \textit{Slutsky's Theorem} to conclude that
$$ \dfrac{S_{N_a}-\mu N_a}{\sigma \sqrt{N_a}} \stackrel{d}{\longrightarrow} N(0,1), \; \text{ as } a \to \infty.$$\n\end{enumerate}
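\textit{Numerical illustration (not part of the original solution).} The sketch below checks parts (i) and (iii) ($N_a/a \to 1/\mu$ and $\mathbb{E}N_a \sim a/\mu$) and part (v) by simulation; the choice $X_i \sim \text{Exponential}(1)$, with $\mu = \sigma^2 = 1$ and all moments finite, is an arbitrary distribution satisfying the hypotheses.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, a = 1.0, 1.0, 400.0   # X_i ~ Exponential(1): mu = sigma^2 = 1
reps = 2000
N_over_a = np.empty(reps)
clt = np.empty(reps)
for r in range(reps):
    S, n = 0.0, 0
    while S < a:                 # N_a = inf{n : S_n >= a}
        S += rng.exponential(1.0)
        n += 1
    N_over_a[r] = n / a
    clt[r] = (S - mu * n) / (sigma * np.sqrt(n))
print("E[N_a]/a  :", N_over_a.mean())        # ~ 1/mu = 1   (parts (i), (iii))
print("CLT check :", clt.mean(), clt.var())  # ~ 0 and ~ 1  (part (v))
\end{verbatim}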
|
1996-q1 | Let \( Y_1, Y_2, \ldots \) be any sequence of mean-zero random variables with \( E|Y_n| = 1 \) for all \( n \).
a. Construct such a sequence \( \{Y_n\} \) that is also not i.i.d. and satisfies
\[
\liminf_{n \to \infty} P(Y_n > 0) = 0.
\]
b. Prove that if the \( Y_n \) are uniformly integrable, then
\[
\liminf_{n \to \infty} P(Y_n > 0) > 0.
\] | \begin{enumerate}[label=(\alph*)]
\item Take $U \sim \text{Uniform}([-1,1])$. Define
$$ Y_n := n\mathbbm{1}(U>1-n^{-1}) - n\mathbbm{1}(U<-1+n^{-1}), \; \forall \; n \geq 1.$$\nClearly, $\mathbb{E}Y_n = \dfrac{n}{2n} -\dfrac{n}{2n}=0$ and $\mathbb{E}|Y_n|= \dfrac{n}{2n}+\dfrac{n}{2n}=1.$ Clearly the variables are not \textit{i.i.d.}. Finally, $\mathbb{P}(Y_n >0) = \mathbb{P}(U>1-n^{-1}) = \dfrac{1}{2n} \to 0,$ as $n \to \infty$.
\item Since $\mathbb{E}Y_n=0$ and $\mathbb{E}|Y_n|=1$, we have $\mathbb{E}(Y_n)_{+}=1/2$ for all $n \geq 1$. Now suppose that $\liminf_{n \to \infty} \mathbb{P}(Y_n>0)=0$. Hence we can get a subsequence $\left\{n_k : k \geq 1\right\}$ such that $\mathbb{P}(Y_{n_k}>0) \to 0$ as $k \to \infty$. This implies, for any $\varepsilon >0$,\n$$ \mathbb{P}((Y_{n_k})_{+} \geq \varepsilon) \leq \mathbb{P}(Y_{n_k}>0) \to 0,$$\nas $k \to \infty$. In other words $(Y_{n_k})_{+} \stackrel{p}{\longrightarrow} 0$. We are given that $\left\{Y_n : n \geq 1\right\}$ is U.I. and hence so is $\left\{(Y_{n_k})_{+} : k \geq 1\right\}$. Apply \textit{Vitali's Theorem} to conclude that $\mathbb{E}(Y_{n_k})_{+} \to 0$. This gives a contradiction.
\end{enumerate}
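\textit{Numerical illustration (not part of the original solution).} A quick Monte Carlo check of the construction in part (a):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
U = rng.uniform(-1.0, 1.0, size=1_000_000)
for n in (2, 10, 100):
    Y = n * (U > 1 - 1 / n) - n * (U < -1 + 1 / n)
    # E Y_n = 0, E|Y_n| = 1 and P(Y_n > 0) = 1/(2n) -> 0
    print(n, Y.mean(), np.abs(Y).mean(), (Y > 0).mean())
\end{verbatim}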
|
1996-q2 | Let \( X \) be a mean-zero random variable.
a. Prove \( E|1 + X| \leq 3E|1 - X| \).
b. Prove that, given \( c < 3 \), there exists a mean-zero random variable \( X \) such that
\[
E|1 + X| > cE|1 - X|.
\] | We have $X$ to be a mean zero random variable. Hence $\mathbb{E}X\mathbbm{1}(X \geq a) + \mathbb{E}X\mathbbm{1}(X < a)=0$, for all $a \in \mathbb{R}$.
\begin{enumerate}[label=(\alph*)]
\item Note that
\begin{align*}
\mathbb{E}|1+X| &= \mathbb{E}(1+X)\mathbbm{1}(X \geq -1) - \mathbb{E}(1+X)\mathbbm{1}(X < -1) \\
&= \mathbb{P}(X \geq -1) + \mathbb{E}X\mathbbm{1}(X \geq -1) - \mathbb{P}(X<-1) - \mathbb{E}X\mathbbm{1}(X < -1) \\
&= 2\mathbb{P}(X \geq -1) + 2\mathbb{E}X\mathbbm{1}(X \geq -1) -1,
\end{align*}
whereas
\begin{align*}
3\mathbb{E}|1-X| &= 3\mathbb{E}(X-1)\mathbbm{1}(X \geq 1) + 3\mathbb{E}(1-X)\mathbbm{1}(X < 1) \\
&= -3\mathbb{P}(X \geq 1) + 3\mathbb{E}X\mathbbm{1}(X \geq 1) + 3\mathbb{P}(X<1) - 3\mathbb{E}X\mathbbm{1}(X < 1) \\
&= 3-6\mathbb{P}(X \geq 1) + 6\mathbb{E}X\mathbbm{1}(X \geq 1).
\end{align*}
Hence we have to prove that
$$ \mathbb{P}(X \geq -1) + \mathbb{E}X\mathbbm{1}(X \geq -1) + 3\mathbb{P}(X \geq 1) \leq 2 + 3\mathbb{E}X\mathbbm{1}(X \geq 1).$$
But this follows immediately since,
$$ \mathbbm{1}(X \geq -1) + X\mathbbm{1}(X \geq -1) + 3\mathbbm{1}(X \geq 1) \leq 2+ 3X\mathbbm{1}(X \geq 1).$$
The above can be checked by considering different cases whether $X\geq 1$, $-1 \leq X <1$ or $X<-1$.
\item It is enough to construct, for any $c \in (0,3)$, a random variable $X$ with mean zero such that $\mathbb{E}|1+X| \geq c \mathbb{E}|1-X|$: indeed, given any $c<3$, apply this with some $c^{\prime} \in (\max(c,0),3)$ and note that $\mathbb{E}|1-X|>0$ (otherwise $X=1$ a.s., contradicting $\mathbb{E}X=0$), so that $\mathbb{E}|1+X| \geq c^{\prime}\mathbb{E}|1-X| > c\mathbb{E}|1-X|$.
Take any $0<c <3$ and define $a=[(3-c)^{-1}(1+c)] \vee 1$. Define $X$ to be as follows. $\mathbb{P}(X=1)=(1+a)^{-1}a = 1- \mathbb{P}(X=-a).$ Note that $\mathbb{E}(X)=0$. Moreover, since $a \geq 1$, we have
$$ \mathbb{E}|1+X| = \dfrac{2a}{1+a} + \dfrac{a-1}{1+a} = \dfrac{3a-1}{1+a}, \; \mathbb{E}|1-X|=\dfrac{1+a}{1+a}=1.$$\nThus using the fact that $a \geq (3-c)^{-1}(1+c)$, we get
$$ \mathbb{E}|1+X| = \dfrac{3a-1}{1+a} = 3- \dfrac{4}{1+a} \geq 3- \dfrac{4}{1+(3-c)^{-1}(1+c)} = 3- \dfrac{12-4c}{3-c+1+c} = 3-(3-c)=c\mathbb{E}|1-X|. $$
\end{enumerate}
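\textit{Numerical illustration (not part of the original solution).} The construction in part (b) can be checked directly; the values of $c$ below are arbitrary test points.
\begin{verbatim}
def check(c):
    # construction of part (b): a = max((1+c)/(3-c), 1),
    # P(X = 1) = a/(1+a), P(X = -a) = 1/(1+a), hence E X = 0
    a = max((1 + c) / (3 - c), 1.0)
    p1, pa = a / (1 + a), 1 / (1 + a)
    EX = 1 * p1 + (-a) * pa
    E_plus = abs(1 + 1) * p1 + abs(1 - a) * pa     # E|1+X|
    E_minus = abs(1 - 1) * p1 + abs(1 + a) * pa    # E|1-X|
    return EX, E_plus, c * E_minus

for c in (1.0, 2.0, 2.9, 2.99):
    print(c, check(c))   # E X = 0 and E|1+X| >= c * E|1-X| in each case
\end{verbatim}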
|
1996-q3 | Let \( X \) be a random variable with \( E|X| < \infty \), \( B \) a \( \sigma \)-algebra.
a. Show that there exists a \( B \in B \) such that
\[
E X 1_B = \sup_{A \in B} E X 1_A.
\]
Call a \( B \) with this property \( B \)-optimal for \( X \).
b. Show that \( Y = E(X|B) \) a.s. if and only if for each real number \( r \) the event
\(( \omega : Y(\omega) > r \)) is \( B \)-optimal for \( X - r \). | \begin{enumerate}[label=(\alph*)]
\item Since $X$ is integrable, $Z:= \mathbb{E}(X \mid \mathcal{B})$ is well defined and $Z \in m\mathcal{B}$. Then note that for any $A \in \mathcal{B}$,
\begin{align*}
\mathbb{E}X\mathbbm{1}_A &= \mathbb{E}(\mathbb{E}(X \mid \mathcal{B})\mathbbm{1}_A) = \mathbb{E}Z\mathbbm{1}_A = \mathbb{E}Z_{+}\mathbbm{1}_A - \mathbb{E}Z_{-}\mathbbm{1}_A \leq \mathbb{E}Z_{+}\mathbbm{1}_A \leq \mathbb{E}Z_{+} = \mathbb{E}Z\mathbbm{1}(Z > 0) =\mathbb{E}X\mathbbm{1}(Z > 0),
\end{align*}
where the last equality is true since $(Z > 0) \in \mathcal{B}$. Thus clearly, $(Z > 0)$ is $\mathcal{B}$-optimal for $X$.
\item Suppose $Y=\mathbb{E}(X \mid \mathcal{B})$ a.s. Fix a real number $r$. Then $Y-r=\mathbb{E}(X-r \mid \mathcal{B})$ a.s. and by our calculation in part (a), we have $(Y>r)=(Y-r >0)$ is $\mathcal{B}$-optimal for $X-r$.
Now suppose that $(Y>r)$ is $\mathcal{B}$-optimal for $X-r$ for all $r$. This hypothesis implies that $(Y>r) \in \mathcal{B}$ for all real $r$ and hence $Y \in m\mathcal{B}$. Let $Z$ be as was in part (a).Then by the discussion in previous paragraph, we have $(Z>r)$ to be $\mathcal{B}$-optimal for $X-r$ for all $r$. In other words,
\begin{equation}{\label{eq1}}
\mathbb{E}(Z-r)\mathbbm{1}(Y>r) = \mathbb{E}(X-r)\mathbbm{1}(Y>r) = \sup_{A \in \mathcal{B}} \mathbb{E}(X-r)\mathbbm{1}_A = \mathbb{E}(X-r)\mathbbm{1}(Z>r)= \mathbb{E}(Z-r)\mathbbm{1}(Z>r),
\end{equation}
where the left-most and the right-most equalities are true since $(Y>r),(Z>r)\in \mathcal{B}$ and $Z=\mathbb{E}(X \mid \mathcal{B})$. Note that
$$ (Z-r) \left(\mathbbm{1}(Z>r) - \mathbbm{1}(Y>r) \right) \geq 0, \; \text{almost surely},$$
whereas (\ref{eq1}) tell us that $\mathbb{E}(Z-r) \left(\mathbbm{1}(Z>r) - \mathbbm{1}(Y>r) \right)=0.$ Hence, $(Z-r) \left(\mathbbm{1}(Z>r) - \mathbbm{1}(Y>r) \right) = 0$, almost surely. Therefore, for all $r \in \mathbb{R}$, we have
$\mathbb{P}(Z>r, Y<r) =0 = \mathbb{P}(Z <r, Y >r)$.
Observe that
\begin{align*}
\mathbb{P}(Y \neq Z) &= \mathbb{P}\left( \exists \; q \in \mathbb{Q} \text{ such that } Y < q < Z \text{ or } Z < q < Y\right) \leq \sum_{q \in \mathbb{Q}} \mathbb{P}(Z>q, Y<q) + \sum_{q \in \mathbb{Q}} \mathbb{P}(Z<q, Y>q) =0,
\end{align*}
and thus $Y=Z=\mathbb{E}(X \mid \mathcal{B})$, almost surely.
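\textit{Numerical illustration (not part of the original solution).} The following toy example (a six-point uniform space with $\mathcal{B}$ generated by a three-block partition; the values of $X$ are arbitrary) verifies by brute force that $\sup_{A \in \mathcal{B}} \mathbb{E}X\mathbbm{1}_A$ is attained at $(\mathbb{E}(X \mid \mathcal{B})>0)$, as shown in part (a).
\begin{verbatim}
import itertools
import numpy as np

# toy example: Omega = {0,...,5} with uniform P; B is generated by the blocks below
P = np.full(6, 1 / 6)
blocks = [(0, 1), (2, 3), (4, 5)]
X = np.array([3.0, -1.0, -2.0, -2.0, 5.0, 1.0])

condexp = np.empty(6)                     # E(X | B) is the block average (uniform weights)
for blk in blocks:
    condexp[list(blk)] = X[list(blk)].mean()

best = -np.inf                            # brute-force sup over all A in B (unions of blocks)
for r in range(len(blocks) + 1):
    for comb in itertools.combinations(blocks, r):
        A = [i for blk in comb for i in blk]
        best = max(best, (X[A] * P[A]).sum())

B_opt = [i for i in range(6) if condexp[i] > 0]
print(best, (X[B_opt] * P[B_opt]).sum())  # both equal 4/3: {E(X|B) > 0} is B-optimal
\end{verbatim}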
\end{enumerate} |
1996-q4 | Let \( \alpha > 1 \) and let \( X_1, X_2, \ldots \) be i.i.d. random variables with common density function
\[
f(x) = \frac{1}{2}(\alpha - 1)|x|^{-\alpha} \quad \text{if } |x| \geq 1,
\]
\[ = 0 \quad \text{otherwise}.
\]
Let \( S_n = X_1 + \cdots + X_n \).
a. For \( \alpha > 3 \), show that \( \limsup_{n \to \infty} |S_n|/(n \log \log n)^{1/2} = C_{\alpha} \) a.s. and evaluate \( C_{\alpha} \).
b. For \( \alpha = 3 \), show that \( (n \log n)^{-1/2} S_n \) has a limiting standard normal distribution.
(Hint: Show that \( \max_{i \leq n} P(|X_i| \geq (n \log n)^{1/3}) \to 0 \) and hence we can truncate the \( X_i \) as \( X_{ni} = X_i I(|X_i| < (n \log n)^{1/2}) \) for \( 1 \leq i \leq n \).) | \begin{enumerate}[label=(\roman*)]
\item For $\alpha >3$,
$$ \mathbb{E}(X_1^2) = \int_{-\infty}^{\infty} x^2f(x)\, dx = 2 \int_{1}^{\infty} \dfrac{\alpha -1}{2} x^{-\alpha + 2} \, dx = (\alpha -1) \int_{1}^{\infty} x^{-\alpha + 2} \, dx = \dfrac{\alpha-1}{\alpha-3} < \infty,$$\nand the variables have mean zero (since the density is symmetric around $0$). Using \textit{Hartman-Wintner LIL}, we conclude that
$$ \limsup_{n \to \infty} \dfrac{|S_n|}{\sqrt{n \log \log n}} = \sqrt{2\mathbb{E}(X_1^2)} = \sqrt{\dfrac{2(\alpha-1)}{\alpha-3}}, \; \text{almost surely}.$$\n
\item For $\alpha =3$, we have $f(x)=|x|^{-3}\mathbbm{1}(|x| \geq 1)$. We go for truncation since the variables have infinite variance. Let $X_{n,k}:= X_k \mathbbm{1}(|X_k| \leq b_n)$, for all $n,k \geq 1$; where $\left\{b_n : n \geq 1\right\}$ is some sequence of real numbers greater than $1$ to be chosen later. Note that
\begin{align*}
\mathbb{P} \left(\dfrac{S_n}{\sqrt{n \log n}} \neq \dfrac{\sum_{k=1}^n X_{n,k}}{\sqrt{n \log n}} \right) &\leq \sum_{k=1}^n \mathbb{P}(X_{n,k} \neq X_k) = n \mathbb{P}(|X_1| \geq b_n) = 2n \int_{b_n}^{\infty} x^{-3}\,dx = nb_n^{-2}.
\end{align*}
Suppose that we have chosen $b_n$ sequence in such a way that $nb_n^{-2}=o(1)$. Then
$$ \dfrac{S_n}{\sqrt{n \log n}} - \dfrac{\sum_{k=1}^n X_{n,k}}{\sqrt{n \log n}} \stackrel{p}{\longrightarrow} 0.$$\nNow focus on $\dfrac{\sum_{k=1}^n X_{n,k}}{\sqrt{n \log n}}$. By symmetry of the density $\mathbb{E}(X_{n,k})=0$. Also
\begin{align*}
\sum_{k=1}^n \dfrac{\mathbb{E}X_{n,k}^2}{n \log n} &=\dfrac{n\mathbb{E}X_{n,1}^2}{n \log n}= \dfrac{2}{\log n} \int_{1}^{b_n} x^{-1}\, dx = \dfrac{2\log b_n}{\log n},
\end{align*}
and for any small enough $\varepsilon >0$,
\begin{align*}
\sum_{k=1}^n \mathbb{E}\left[\dfrac{X_{n,k}^2}{n \log n}; |X_{n,k}| \geq \varepsilon \sqrt{n \log n} \right] &= n\mathbb{E}\left[\dfrac{X_{n,1}^2}{n \log n}; |X_{n,1}| \geq \varepsilon \sqrt{n \log n} \right] = \dfrac{2}{\log n} \int_{\varepsilon \sqrt{n \log n}}^{b_n} x^{-1}\, dx \\
&= \dfrac{2\log b_n - \log n - \log \log n - 2\log \varepsilon}{\log n} \\
&= \dfrac{2\log b_n}{\log n} - 1 +o(1),
\end{align*}
provided that $b_n \geq \varepsilon \sqrt{n \log n}$ for all $n \geq 1$, for all small enough $\varepsilon$. Clearly if we choose $b_n = \sqrt{n \log n}$ then $nb_n^{-2}=o(1)$,
$$ \sum_{k=1}^n \dfrac{\mathbb{E}X_{n,k}^2}{n \log n} \to 1, \;\sum_{k=1}^n \mathbb{E}\left[\dfrac{X_{n,k}^2}{n \log n}; |X_{n,k}| \geq \varepsilon \sqrt{n \log n} \right] \to 0, \; \; \forall \; 0 < \varepsilon <1.$$\nApply \textit{Lindeberg-Feller CLT} and \textit{Slutsky's Theorem} to conclude that
$$\dfrac{S_n}{\sqrt{n \log n}} \stackrel{d}{\longrightarrow} N(0,1). $$
\end{enumerate}
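\textit{Numerical illustration (not part of the original solution).} For part (ii) one can sample from the $\alpha=3$ density by inversion ($|X|$ has CDF $1-x^{-2}$ on $[1,\infty)$, with a symmetric sign) and compare the law of $S_n/\sqrt{n \log n}$ with $N(0,1)$; the convergence is only logarithmic in $n$, so the agreement at moderate $n$ is rough.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, reps = 20000, 1000
vals = np.empty(reps)
for r in range(reps):
    # alpha = 3: |X| = U^{-1/2} with U ~ Uniform(0,1], sign chosen symmetrically
    X = rng.choice([-1.0, 1.0], size=n) * (1.0 - rng.uniform(size=n)) ** -0.5
    vals[r] = X.sum() / np.sqrt(n * np.log(n))
for z in (0.5, 1.0, 2.0):
    # compare with Phi(z) = 0.691, 0.841, 0.977; agreement is only rough
    # because the convergence is logarithmic in n
    print(z, (vals <= z).mean())
\end{verbatim}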
|
1996-q5 | Let \( k \) be a positive integer and let \( (X_n, \mathcal{F}_n, n \geq 1) \) be a martingale difference sequence such that \( E|X_n|^k < \infty \) for all \( n \). Let
\[
U_n^{(k)} = \sum_{1 \leq i_1 < \cdots < i_k \leq n} X_{i_1} \cdots X_{i_k}, \quad n \geq k.
\]
a. Show that \( \{U_n^{(k)}, \mathcal{F}_n, n \geq k\} \) is a martingale.
b. Suppose \( E(X_n^2 | \mathcal{F}_{n-1}) \leq C \) a.s. for all \( n \geq 1 \) and some constant \( C \). Show that
\[
E((U_n^{(k)} - U_{n-1}^{(k)})^2 | \mathcal{F}_{n-1}) \leq C E((U_{n-1}^{(k-1)})^2),
\]
where we define \( U_n^{(0)} = 1 \). Hence show that
\[
E((U_n^{(k)})^2) \leq \binom{n}{k} C^k.
\]
(Hint: Use induction.) | We have MG difference sequence $\left\{X_n, \mathcal{F}_n, n \geq 1\right\}$ such that $\mathbb{E}|X_n|^k < \infty$ for all $n$ and
$$ U_n^{(k)} = \sum_{1 \leq i_1 < \cdots < i_k \leq n} X_{i_1}\cdots X_{i_k}, \; \forall \; n \geq k.$$\n\begin{enumerate}[label=(\roman*)]
\item Clearly, $ U_n^{(k)} \in m \sigma(X_1, \ldots, X_n) \subseteq m \mathcal{F}_n.$ Also, for any $1 \leq i_1< \cdots <i_k \leq n$,
$$ \mathbb{E}|X_{i_1}\cdots X_{i_k}| \leq \mathbb{E} \left(\sum_{l=1}^k |X_{i_l}| \right)^k \leq C_k \mathbb{E} \left(\sum_{l=1}^k |X_{i_l}|^k \right) < \infty,$$
and thus $\mathbb{E}|U_n^{(k)}|< \infty.$ Finally, for all $n \geq k+1$,
\begin{align*}
\mathbb{E}\left[ U_{n}^{(k)} \mid \mathcal{F}_{n-1}\right] &= \mathbb{E}\left[ \sum_{1 \leq i_1 < \cdots < i_k \leq n} X_{i_1}\cdots X_{i_k} \Bigg \rvert \mathcal{F}_{n-1}\right] \\
&= \sum_{1 \leq i_1 < \cdots < i_k \leq n-1} X_{i_1}\cdots X_{i_k} + \mathbb{E}\left[ \sum_{1 \leq i_1 < \cdots < i_k = n} X_{i_1}\cdots X_{i_k} \Bigg \rvert \mathcal{F}_{n-1}\right] \\
&= U_{n-1}^{(k)} + \left(\sum_{1 \leq i_1 < \cdots < i_{k-1} \leq n-1} X_{i_1}\cdots X_{i_{k-1}} \right) \mathbb{E}(X_n \mid \mathcal{F}_{n-1}) = U_{n-1}^{(k)}.
\end{align*}
Thus $\left\{U_n^{(k)}, \mathcal{F}_n, n \geq k\right\}$ is a MG.
\item We have $\mathbb{E}(X_n^2 \mid \mathcal{F}_{n-1}) \leq C$ almost surely for all $n \geq 1$. This yields that $\mathbb{E}X_n^2 \leq C$ for all $n \geq 1$. We first need to establish that $\mathbb{E}(U_n^{(k)})^2< \infty$. In order to do that it is enough to show that for any $1 \leq i_1 < \cdots < i_k \leq n$, we have $\mathbb{E}\left( X_{i_1}^2\cdots X_{i_k}^2 \right) < \infty$. Consider the event $A_M=(|X_{i_1} \cdots X_{i_{k-1}}| \leq M)$. Then $A_M \in \mathcal{F}_{i_{k-1}}$ and clearly $X_{i_1}^2\cdots X_{i_k}^2 \mathbbm{1}_{A_M} \leq M^2X_{i_k}^2$ is integrable. Hence,
\begin{align*}
\mathbb{E} \left[ X_{i_1}^2\cdots X_{i_k}^2 \mathbbm{1}_{A_M}\right] &= \mathbb{E} \left[\mathbb{E} \left( X_{i_1}^2\cdots X_{i_k}^2 \mathbbm{1}_{A_M} \mid \mathcal{F}_{i_{k-1}}\right) \right] \\
&= \mathbb{E} \left[ X_{i_1}^2\cdots X_{i_{k-1}}^2 \mathbbm{1}_{A_M}\mathbb{E} \left(X_{i_k}^2 \mid \mathcal{F}_{i_{k-1}}\right) \right] \\
&\leq C\mathbb{E} \left[ X_{i_1}^2\cdots X_{i_{k-1}}^2 \mathbbm{1}_{A_M}\right].
\end{align*}
Taking $M \uparrow \infty$, we get
$$ \mathbb{E} \left[ X_{i_1}^2\cdots X_{i_k}^2\right] \leq C\mathbb{E} \left[ X_{i_1}^2\cdots X_{i_{k-1}}^2\right].\n$$
Repeating these kind of computations we can conclude that $\mathbb{E} \left[ X_{i_1}^2\cdots X_{i_k}^2\right] \leq C^k$. Thus $\left\{U_n^{(k)}, \mathcal{F}_n, n \geq k\right\}$ is a $L^2$-MG.
From definition, we can see that for all $n \geq k+1, k \geq 2$,
$$ U_n^{(k)} -U_{n-1}^{(k)} = \sum_{1 \leq i_1 < \cdots < i_k = n} X_{i_1}\cdots X_{i_k} = X_n \sum_{1 \leq i_1 < \cdots < i_{k-1} \leq n-1} X_{i_1}\cdots X_{i_{k-1}} = X_n U_{n-1}^{(k-1)}. $$
Using the definition that $U_n^{(0)}:=1$ for all $n \geq 0$, we can also write for all $n \geq 2$ that $U_n^{(1)} - U_{n-1}^{(1)} = \sum_{i=1}^n X_i - \sum_{i=1}^{n-1} X_i =X_n = X_n U_{n-1}^{(0)}$. Thus for all $n -1\geq k \geq 1$, we have $U_n^{(k)} -U_{n-1}^{(k)} = X_n U_{n-1}^{(k-1)}.$ Since, $U_{n-1}^{(k-1)} \in m\mathcal{F}_{n-1}$, we have
$$ \mathbb{E}(U_n^{(k)} -U_{n-1}^{(k)})^2 = \mathbb{E}(X_n U_{n-1}^{(k-1)})^2 = \mathbb{E}\left[\mathbb{E}\left((X_n U_{n-1}^{(k-1)})^2 \mid \mathcal{F}_{n-1}\right) \right] = \mathbb{E}\left[ (U_{n-1}^{(k-1)})^2 \mathbb{E}(X_n^2 \mid \mathcal{F}_{n-1})\right] \leq C \mathbb{E}(U_{n-1}^{(k-1)})^2.\n$$
Since $\left\{U_n^{(k)}, \mathcal{F}_n, n \geq k\right\}$ is a $L^2$-MG, we have $\mathbb{E}(U_n^{(k)} -U_{n-1}^{(k)})^2 = \mathbb{E}(U_n^{(k)})^2 - \mathbb{E}(U_{n-1}^{(k)})^2$, for all $n -1\geq k$. Summing over $n$, we get for all $n \geq k+1$ and $k \geq 1$,
$$ \mathbb{E}(U_n^{(k)})^2 - \mathbb{E}(U_k^{(k)})^2 = \sum_{l=k+1}^n \left( \mathbb{E}(U_l^{(k)})^2 - \mathbb{E}(U_{l-1}^{(k)})^2\right)= \sum_{l=k+1}^n \mathbb{E}(U_l^{(k)} -U_{l-1}^{(k)})^2 \leq C \sum_{l=k+1}^n \mathbb{E}(U_{l-1}^{(k-1)})^2.\n$$
We want to prove that $\mathbb{E}(U_n^{(k)})^2 \leq {n \choose k} C^k$ for all $n \geq k \geq 0$. For $k=0$, the result is obvious from the definition. We want to use induction on $k$. Suppose the result is true for all $k =0, \ldots, m-1$ where $m \geq 1$. We have shown earlier that
$$\mathbb{E}(U_m^{(m)})^2 = \mathbb{E}(X_1^2\cdots X_m^2) \leq C^m = C^m {m \choose 0}.$$\nFor $n \geq m+1$, we have
\begin{align*}
\mathbb{E}(U_n^{(m)})^2 &\leq \mathbb{E}(U_m^{(m)})^2 + C \sum_{l=m+1}^n \mathbb{E}(U_{l-1}^{(m-1)})^2 \leq C^m + C \sum_{l=m+1}^n {l-1 \choose m-1} C^{m-1} = C^m \sum_{l=m}^n {l-1 \choose m-1}.
\end{align*}
Note that
\begin{align*}
C^m\sum_{l=m}^n {l-1 \choose m-1} = C^m \sum_{j=m-1}^{n-1} {j \choose m-1} &= C^m \sum_{j=m-1}^{n-1}{j \choose j-m+1} \\
&= C^m \sum_{r=0}^{n-m} {r+m-1 \choose r} \\
&= C^m \sum_{r=0}^{n-m} \left[ {r+m \choose r} - {r+m-1 \choose r-1}\right] \\
&= C^m {n \choose n-m} = C^m {n \choose m}.
\end{align*}
This proves the induction hypothesis and thus we have proved that $\mathbb{E}(U_n^{(k)})^2 \leq {n \choose k} C^k$ for all $n \geq k \geq 0$.
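\textit{Numerical illustration (not part of the original solution).} $U_n^{(k)}$ is the $k$-th elementary symmetric polynomial of $X_1, \ldots, X_n$ and can be computed by the standard recursion; with i.i.d. Rademacher differences (an illustrative special case in which one may take $C=1$) the bound $\mathbb{E}(U_n^{(k)})^2 \leq {n \choose k}C^k$ holds with equality, which the Monte Carlo estimate below reproduces.
\begin{verbatim}
import numpy as np
from math import comb

def U_k(x, k):
    # k-th elementary symmetric polynomial of x_1,...,x_n via the recursion
    # e_j <- e_j + x_i * e_{j-1}, with j descending so that old values are used
    e = [1.0] + [0.0] * k
    for xi in x:
        for j in range(k, 0, -1):
            e[j] += xi * e[j - 1]
    return e[k]

rng = np.random.default_rng(4)
n, k, reps = 12, 3, 20000
samples = [U_k(rng.choice([-1.0, 1.0], size=n), k) for _ in range(reps)]
# with i.i.d. Rademacher differences, E(X_i^2 | F_{i-1}) = 1 =: C
print(np.mean(np.square(samples)), comb(n, k))   # both approximately 220
\end{verbatim}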
\end{enumerate} |
1996-q6 | Let \( X_1, X_2, \ldots \) be i.i.d. Bernoulli-random variables with
\[
P(X_i = 1) = 1/2 = P(X_i = -1).
\]
Let \( S_n = X_1 + \cdots + X_n \).
a. Show that \( \{(\max_{1 \leq i \leq n} |S_i|/\sqrt{n})^p, n \geq 1\} \) is uniformly integrable for all \( p > 0 \). (Hint: Use Burkholder's inequality.)
b. Show that \( E[\max_{1 \leq i \leq n} S_i^p/n^{p/2}] \) converges to a limit \( m_p \) as \( n \to \infty \). Can you give an expression for \( m_p \)? | Clearly $\left\{S_n, \mathcal{F}_n, n \geq 0\right\}$ is an $L^2$-MG with respect to the canonical filtration $\mathcal{F}$ and $S_0=0$. The quadratic variation for this process is
$$ [S]_n = \sum_{k=1}^n (S_k-S_{k-1})^2 = \sum_{k=1}^n X_k^2 =n.$$\n
\begin{enumerate}[label=(\roman*)]
\item Set $M_n := \max_{1 \leq i \leq n} |S_i|$, for all $n \geq 1$. Fix any $q> \max(p,1)$. Using \textit{Burkholder's inequality}, there exists $C_q < \infty$ such that for all $n \geq 1$,
$$ \mathbb{E}M_n^q = \mathbb{E} \left[ \max_{1 \leq k \leq n} |S_k|\right]^q \leq C_q \mathbb{E}[S]_n^{q/2} = C_qn^{q/2}. $$\nHence, $\sup_{n \geq 1} \mathbb{E}\left[(n^{-p/2}M_n^p)^{q/p}\right] = \sup_{n \geq 1} \mathbb{E} \left[ n^{-q/2}M_n^q\right] \leq C_q < \infty.$ Since $q/p >1$, this shows that $\left\{n^{-p/2}M_n^p, n \geq 1\right\}$ is U.I.
\item Define,
$$ \widehat{S}_n(t) := \dfrac{1}{\sqrt{n}} \left[S_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(S_{\lfloor nt \rfloor +1}-S_{\lfloor nt \rfloor}) \right]=\dfrac{1}{\sqrt{n}} \left[S_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)X_{\lfloor nt \rfloor +1} \right], $$\nfor all $0 \leq t \leq 1$. By \textit{Donsker's Invariance Principle}, we have $\widehat{S}_n \stackrel{d}{\longrightarrow} W$ as $n \to \infty$ on the space $C([0,1])$, where $W$ is the standard Brownian Motion on $[0,1]$.
Observe that the function $t \mapsto \widehat{S}_n(t)$ on $[0,1]$ is the piecewise linear interpolation using the points
$$ \left\{ \left( \dfrac{k}{n}, \dfrac{S_k}{\sqrt{n}}\right) \Bigg \rvert k=0, \ldots, n\right\}.$$\nRecall the fact that for a piecewise linear function on a compact set, we can always find a maximizing point among the collection of the interpolating points. Hence,
$$ \max_{1 \leq k \leq n} \dfrac{S_k}{\sqrt{n}} = \sup_{0 \leq t \leq 1} \widehat{S}_n(t).$$\nSince, $\mathbf{x} \mapsto \sup_{0 \leq t \leq 1} x(t)$ is a continuous function on $C([0,1])$, we have
$$ \sup_{0 \leq t \leq 1} \widehat{S}_n(t) \stackrel{d}{\longrightarrow} \sup_{0 \leq t \leq 1} W(t) \stackrel{d}{=} |W(1)|.$$
Hence using \textit{Continuous Mapping Theorem}, we can conclude that
$$n^{-p/2}|\max_{1 \leq i \leq n} S_i|^p \stackrel{d}{\longrightarrow} |W(1)|^p.$$\nSince $|\max_{1 \leq i \leq n} S_i| \leq M_n$, part (i) yields that $\left\{ n^{-p/2}|\max_{1 \leq i \leq n} S_i|^p : n \geq 1\right\}$ is uniformly integrable and thus by \textit{Vitali's Theorem}, we can conclude that
\begin{align*}
n^{-p/2} \mathbb{E} |\max_{1 \leq i \leq n} S_i|^p \longrightarrow m_p := \mathbb{E} |W(1)|^p = \int_{-\infty}^{\infty} |x|^p \phi(x)\, dx &= \dfrac{2}{\sqrt{2\pi}} \int_{0}^{\infty} x^p \exp(-x^2/2)\, dx \\
&= \dfrac{2}{\sqrt{2\pi}} \int_{0}^{\infty} (2u)^{(p-1)/2} \exp(-u)\, du \\
&= \dfrac{2^{p/2} \Gamma((p+1)/2)}{\sqrt{\pi}}.
\end{align*}
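\textit{Numerical illustration (not part of the original solution).} For $p=2$ the limit is $m_2 = 2\Gamma(3/2)/\sqrt{\pi} = 1$, which a direct simulation of the simple random walk reproduces approximately:
\begin{verbatim}
import numpy as np
from math import gamma, pi, sqrt

rng = np.random.default_rng(5)
n, reps, p = 2000, 4000, 2.0
S = rng.choice([-1, 1], size=(reps, n)).cumsum(axis=1)
max_S = S.max(axis=1)                              # max_{1<=i<=n} S_i, walk by walk
est = np.mean(np.abs(max_S) ** p) / n ** (p / 2)
m_p = 2 ** (p / 2) * gamma((p + 1) / 2) / sqrt(pi) # = E|W(1)|^p, equals 1 for p = 2
print(est, m_p)
\end{verbatim}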
\end{enumerate} |
1998-q1 | Assume \((X, Y)\) are bivariate normal with mean values equal to 0, variances equal to 1, and correlation \( \rho \). Find the correlation coefficient of \( X^2 \) and \( Y^2 \). | $(X,Y)$ are bivariate normal with mean values equal to $0$, variances equal to $1$ and correlation $\rho$. Let $Z=X-\rho Y$. Then $Z \sim N(0,1-\rho^2)$ and is independent of $Y$. We have $ \operatorname{Var}(X^2) = \mathbb{E}(X^4)-(\mathbb{E}(X^2))^2 = 3-1 =2$ and similarly $\operatorname{Var}(Y^2)=2$. Now
\begin{align*} \mathbb{E}(X^2Y^2) = \mathbb{E}((Z+\rho Y)^2Y^2) &= \mathbb{E} \left[ Z^2Y^2 + 2\rho ZY^3 + \rho^2 Y^4\right] \\
&= \mathbb{E}(Z^2)\mathbb{E}(Y^2) + 2\rho \mathbb{E}(Z)\mathbb{E}(Y^3) + \rho^2 \mathbb{E}(Y^4) \\
&= (1-\rho^2) + 0 + 3\rho^2 = 1+2\rho^2,
\end{align*} and thus
$$ \operatorname{Cov}(X^2,Y^2) = \mathbb{E}(X^2Y^2)-\mathbb{E}(X^2)\mathbb{E}(Y^2) = 1+2\rho^2-1=2\rho^2.$$ Combining them we get,
$$ \operatorname{Corr}(X^2,Y^2) = \dfrac{\operatorname{Cov}(X^2,Y^2)}{\sqrt{\operatorname{Var}(X^2)\operatorname{Var}(Y^2)}} =\dfrac{2\rho^2}{\sqrt{4}}=\rho^2.$$ |
1998-q2 | On \(( \Omega, \mathcal{F}, P) \) let \( X_0 = Z_0 = 0 \) and \( Z_i, i = 1,2, \ldots \) i.i.d. normal of mean 0 and variance 1. Let \( S_n = \sum_{j=1}^n Z_j \) and \( \mathcal{F}_n \) the \( \sigma \)-algebra generated by \( \{Z_1, \ldots, Z_n\} \). Let \( X_n = S_n^3 + S_n^2 + a_n S_n + b_n \) where \( a_n \) and \( b_n \) are two deterministic sequences. Either give the most general \( a_n \) and \( b_n \) for which \((X_n, \mathcal{F}_n)\) is a martingale, or prove that this is never the case. | By definition clearly $X_n \in m\mathcal{F}_n$ and since $S_n \sim N(0,n)$ we also have $X_n$ to be integrable for all $n \geq 0$ and for any choice of the deterministic sequences. Observe that for any $n \geq 1$,
$$ \mathbb{E}(S_n \mid \mathcal{F}_{n-1}) = S_{n-1} + \mathbb{E}(Z_n \mid \mathcal{F}_{n-1}) = S_{n-1} + \mathbb{E}(Z_n)=S_{n-1},$$
$$ \mathbb{E}(S_n^2 \mid \mathcal{F}_{n-1}) = S_{n-1}^2 + 2S_{n-1}\mathbb{E}(Z_n \mid \mathcal{F}_{n-1}) + \mathbb{E}(Z_n^2 \mid \mathcal{F}_{n-1}) = S_{n-1}^2 + 2S_{n-1}\mathbb{E}(Z_n)+ \mathbb{E}(Z_n^2)=S_{n-1}^2+1,$$ and
\begin{align*} \mathbb{E}(S_n^3 \mid \mathcal{F}_{n-1}) &= S_{n-1}^3 + 3S_{n-1}^2\mathbb{E}(Z_n \mid \mathcal{F}_{n-1}) + 3S_{n-1}\mathbb{E}(Z_n^2 \mid \mathcal{F}_{n-1}) + \mathbb{E}(Z_n^3 \mid \mathcal{F}_{n-1}) \\
&= S_{n-1}^3 + 3S_{n-1}^2\mathbb{E}(Z_n)+ 3S_{n-1}\mathbb{E}(Z_n^2)+ \mathbb{E}(Z_n^3) \\
&=S_{n-1}^3+ 3S_{n-1}.
\end{align*}
\n\text{Hence, for any choice of the deterministic sequences and any $n \geq 1$,}\
\begin{align*} \mathbb{E}\left[ X_n \mid \mathcal{F}_{n-1}\right] = \mathbb{E}\left[ S_n^3 + S_n^2 + a_nS_n + b_n \mid \mathcal{F}_{n-1}\right] &= S_{n-1}^3 + 3S_{n-1} + S_{n-1}^2 +1 + a_nS_{n-1}+b_n \\
&= S_{n-1}^3 + S_{n-1}^2 + (a_n+3)S_{n-1} + (b_n+1).
\end{align*}
\n\text{Since $S_0=0$ almost surely, while for each $n \geq 1$ the law of $S_n$ is continuous (so that a linear function of $S_n$ can vanish almost surely only when both of its coefficients are zero), we have}
\begin{align*} \left\{X_n, \mathcal{F}_n, n \geq 0\right\} \text{ is a MG } &\iff a_n+3=a_{n-1}, b_n+1=b_{n-1}, \; \forall \, n \geq 2, \; b_1+1=b_0, \\
& \iff a_n=a_1-3(n-1), \; \forall \, n \geq 1, \; b_n=b_0-n, \; \forall \, n \geq 1. \end{align*}
\n\text{Now $0=X_0 = S_0^3+S_0^2+a_0S_0+b_0=b_0$. Hence the most general sequence pair will look like }
$$ \left\{a_n\right\}_{n \geq 0} = (a_0,a_1,a_1-3,a_1-6, a_1-9, \ldots) \text{ and } \left\{b_n\right\}_{n \geq 0}=(0,-1,-2,-3,\ldots),$$\
\text{ for any $a_0,a_1 \in \mathbb{R}$.} |
1998-q3 | Let \( S_n = \sum_{i=1}^n Y_i \) where \( Y_i, i = 1, \ldots, n \) are pairwise independent random variables such that \( E(Y_i) = 0 \) and \( |Y_i| \leq 1 \).
a) Show that \( P(S_n = n) \leq 1/n \).
b) For \( n = 2^k - 1\), construct an example of pairwise independent \( Y_i \in \{-1, 1\} \) with \( E(Y_i) = 0 \) such that \( P(S_n = n) \geq 1/(n + 1) \).
c) Explain what happens in parts (a) and (b) when pairwise independence is replaced by mutual independence? | \begin{enumerate}[label=(\alph*)]\item Since the variables $Y_i$'s are uniformly bounded by $1$, they are square integrable. We have $\mathbb{E}(S_n)=0$ and by \textit{pairwise independence} we have $\operatorname{Var}(S_n)=\sum_{i=1}^n \operatorname{Var}(Y_i) \leq \sum_{i=1}^n \mathbb{E}(Y_i^2) \leq n.$ Apply \textit{Chebyshev's Inequality} to get
$$ \mathbb{P}(S_n=n) \leq \mathbb{P}(|S_n-\mathbb{E}(S_n)|\geq n) \leq \dfrac{\operatorname{Var}(S_n)}{n^2} \leq \dfrac{1}{n}.$$\item Take $k \geq 1$ and let $\mathcal{J}_k$ be the set of all non-empty subsets of $\left\{1, \ldots, k\right\}$. Consider $Z_1, \ldots, Z_k \stackrel{iid}{\sim}\text{Uniform}(\left\{+1,-1\right\})$. Define the collection of random variables $\left\{Y_I : I \in \mathcal{J}_k\right\}$ as $Y_I := \prod_{i \in I} Z_i$. Clearly this a collection of $n=2^k-1$ random variables with $Y_I \in \left\{+1,-1\right\}$ and
$$\mathbb{E}(Y_I)= \prod_{i \in I} \mathbb{E}(Z_i) = 0,$$ for all $I \in \mathcal{J}_k$. In other words, $Y_I \sim \text{Uniform}(\left\{+1,-1\right\})$ for all $I \in \mathcal{J}_k$. For $I_1,I_2,I_3 \in \mathcal{J}_k$ mutually disjoint, the random variables $Y_{I_1},Y_{I_2}, Y_{I_3}$ are clearly mutually independent. Moreover, for any $I \neq J \in \mathcal{J}_k$ with $I \cap J \neq \emptyset$ (assume for the moment that $I \cap J^c$ and $I^c \cap J$ are also non-empty; the case $I \subsetneq J$ or $J \subsetneq I$ is similar and simpler), we have\begin{align*}\mathbb{P}(Y_I=1,Y_J=1) &= \mathbb{P}(Y_{I \cap J}=1, Y_{I \cap J^c}=1, Y_{I^c \cap J}=1) + \mathbb{P}(Y_{I \cap J}=-1, Y_{I \cap J^c}=-1, Y_{I^c \cap J}=-1) \\
&= \mathbb{P}(Y_{I \cap J}=1)\mathbb{P}(Y_{I \cap J^c}=1)\mathbb{P}(Y_{I^c \cap J}=1) + \mathbb{P}(Y_{I \cap J}=-1)\mathbb{P}(Y_{I \cap J^c}=-1) \mathbb{P}(Y_{I^c \cap J}=-1) \\
&= \dfrac{1}{8} + \dfrac{1}{8} = \dfrac{1}{4} = \mathbb{P}(Y_I=1)\mathbb{P}(Y_J=1).
\end{align*} Thus $\left\{Y_I : I \in \mathcal{J}_k\right\}$ is a collection of pairwise independent random variables. Then
\begin{align*}\mathbb{P} \left(\sum_{I \in \mathcal{J}_k} Y_I = n=2^k-1 \right) = \mathbb{P}(Y_I=1, \; \forall \; I \in \mathcal{J}_k) = \mathbb{P}(Z_j=1, \; \forall \; j=1, \ldots, k) = 2^{-k} = \dfrac{1}{n+1}.
\end{align*}\item If the variables $Y_i$'s were mutually independent, then
$$ \mathbb{P}(S_n=n) = \mathbb{P}(Y_i=1, \; \forall\; i=1, \ldots, n) = \prod_{i =1}^n \mathbb{P}(Y_i=1),$$ but using \textit{Markov's Inequality} we can say for all $1 \leq i \leq n$,
$$ \mathbb{P}(Y_i =1)= \mathbb{P}(Y_i \geq 1) = \mathbb{P}(Y_i+1 \geq 2) \leq \dfrac{\mathbb{E}(Y_i+1)}{2}=\dfrac{1}{2},$$ and hence $\mathbb{P}(S_n=n)\leq 2^{-n}$. Since $2^{-n}\leq n^{-1}$, the bound of part (a) continues to hold in this case. Since $2^{-n} < (n+1)^{-1}$ for all $n \geq 2$, an example like that of part (b) is not possible here for $n \geq 2$.
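\textit{Numerical illustration (not part of the original solution).} For small $k$ the construction of part (b) can be verified exhaustively, since all $2^k$ sign patterns of $(Z_1,\ldots,Z_k)$ are equally likely:
\begin{verbatim}
import itertools
from math import prod

k = 3
n = 2 ** k - 1
subsets = [I for r in range(1, k + 1) for I in itertools.combinations(range(k), r)]
signs = list(itertools.product([-1, 1], repeat=k))   # the 2^k equally likely outcomes
Y = {I: [prod(z[i] for i in I) for z in signs] for I in subsets}

pair_ok = True
for I, J in itertools.combinations(subsets, 2):       # pairwise independence: each cell 1/4
    for a in (-1, 1):
        for b in (-1, 1):
            p = sum(1 for t in range(2 ** k) if Y[I][t] == a and Y[J][t] == b) / 2 ** k
            pair_ok &= (p == 0.25)
all_one = sum(1 for t in range(2 ** k) if all(Y[I][t] == 1 for I in subsets)) / 2 ** k
print(pair_ok, all_one, 1 / (n + 1))                  # True 0.125 0.125
\end{verbatim}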
\end{enumerate} |
1998-q4 | Let \( \{X_n, \mathcal{F}_n\} \) be a martingale difference sequence such that \( \sup_{n \geq 1} E(|X_n|^r | \mathcal{F}_{n-1}) < \infty \) a.s. and \( E(X_n^2| \mathcal{F}_{n-1}) = b \) for some constants \( b > 0 \) and \( r > 2 \). Suppose that \( u_n \) is \( \mathcal{F}_{n-1}\)-measurable and that \( n^{-1} \sum_{i=1}^n u_i^2 \to_p c \) for some constant \( c > 0 \). Let \( S_n = \sum_{i=1}^n u_i X_i \). Show that the following limits exist for every real number \( x \) and evaluate them:
a) \( \lim_{a \to \infty} P(T(a) \leq a^2 x) \), where \( T(a) = \inf\{n \geq 1 : S_n \geq a\} \).
b) \( \lim_{n \to \infty} P(\max_{1 \leq k \leq n} (S_k - k S_n/n) \geq \sqrt{nx}) \). | Since we are not given any integrability condition on $u$, we need to perform some truncation to apply MG results. Let $v_{n,k}=u_k \mathbbm{1}(|u_k| \leq n).$ We have $v_{n,i}X_i$ to be square-integrable for all $n,i \geq 1$. Clearly, $v_{n,i}X_i \in m\mathcal{F}_i$ and $\mathbb{E}(v_{n,i}X_i \mid \mathcal{F}_{i-1}) = v_{n,i} \mathbb{E}(X_i \mid \mathcal{F}_{i-1})=0$. Hence $\left\{M_{n,k}:=\sum_{i=1}^k v_{n,i}X_i, \mathcal{F}_k, k \geq 0 \right\}$ is a $L^2$-Martingale with $M_{n,0} :=0$. Its predictable compensator process is as follows.
$$ \langle M_n \rangle_l = \sum_{k=1}^l \mathbb{E}(v_{n,k}^2X_k^2 \mid \mathcal{F}_{k-1}) = \sum_{k=1}^l v_{n,k}^2 \mathbb{E}(X_k^2 \mid \mathcal{F}_{k-1}) = b\sum_{k=1}^l v_{n,k}^2, \; \forall \; n,l \geq 1.$$\nFor any $q>0$ and any sequence of natural numbers $\left\{l_n \right\}$ such that $l_n =O(n)$, we have
$$ \mathbb{P} \left(\sum_{k=1}^{l_n} |v_{n,k}|^q \neq \sum_{k=1}^{l_n} |u_k|^q \right) \leq \mathbb{P}\left( \max_{k=1}^{l_n} |u_k| >n \right) \longrightarrow 0,$$
which follows from Lemma~\ref{lemma} and the fact that $n^{-1}\sum_{k=1}^n u_k^2$ converges in probability to $c$. Thus for any sequence of positive real numbers $\left\{c_n \right\}$ we have
\begin{equation}{\label{error}}\dfrac{1}{c_n} \sum_{k=1}^{l_n} |v_{n,k}|^q = \dfrac{1}{c_n} \sum_{k=1}^{l_n} |u_k|^q + o_p(1).
\end{equation} In particular, for any $0 \leq t \leq 1$,
$$ n^{-1}\langle M_n \rangle_{\lfloor nt \rfloor} = \dfrac{b}{n}\sum_{k=1}^{\lfloor nt \rfloor} v_{n,k}^2 = \dfrac{b}{n}\sum_{k=1}^{\lfloor nt \rfloor} u_k^2 + o_p(1) = \dfrac{b\lfloor nt \rfloor}{n}\dfrac{1}{\lfloor nt \rfloor}\sum_{k=1}^{\lfloor nt \rfloor} u_k^2 + o_p(1) \stackrel{p}{\longrightarrow} bct.$$ Also for any $\varepsilon >0$,\begin{align*}\dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[(M_{n,k}-M_{n,k-1})^2; |M_{n,k}-M_{n,k-1}| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] &= \dfrac{1}{n} \sum_{k=1}^n v_{n,k}^2\mathbb{E} \left[X_k^2; |v_{n,k}X_k| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] \\
& \leq \dfrac{1}{n} \sum_{k=1}^n v_{n,k}^2\mathbb{E} \left[|X_k|^2 (\varepsilon \sqrt{n} |v_{n,k}X_k|^{-1})^{2-r} \mid \mathcal{F}_{k-1} \right] \\
& \leq \varepsilon^{2-r}\left( \sup_{k \geq 1} \mathbb{E}(|X_k|^r \mid \mathcal{F}_{k-1})\right) \left( \dfrac{1}{n^{r/2}}\sum_{k=1}^n |v_{n,k}|^r \right) \\
&= \varepsilon^{2-r}\left( \sup_{k \geq 1} \mathbb{E}(|X_k|^r \mid \mathcal{F}_{k-1})\right) \left( \dfrac{1}{n^{r/2}}\sum_{k=1}^n |u_k|^r +o_p(1) \right).
\end{align*} From Lemma~\ref{lemma} and the fact that $n^{-1}\sum_{k=1}^n u_k^2$ converges to $c$ in probability, we can conclude that $n^{-r/2} \sum_{k=1}^n |u_k|^r$ converges in probability to $0$. \n\n\text{Then we have basically proved the required conditions needed to apply \textit{Martingale CLT}. Define,}
$$ \widehat{S}_n(t) := \dfrac{1}{\sqrt{bcn}} \left[M_{n,\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(M_{n,\lfloor nt \rfloor +1}-M_{n,\lfloor nt \rfloor}) \right]=\dfrac{1}{\sqrt{bcn}} \left[M_{n,\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)v_{n,\lfloor nt \rfloor +1}X_{\lfloor nt \rfloor +1} \right], $$
\text{for all $0 \leq t \leq 1$. We then have $\widehat{S}_n \stackrel{d}{\longrightarrow} W$ as $n \to \infty$ on the space $C([0,1])$, where $W$ is the standard Brownian Motion on $[0,1]$.}
1998-q5 | Let \( \{B(t), t \ge 0\} \) and \( \{A(t), t \ge 0\} \) be two independent stochastic processes defined on \((\Omega, \mathcal{F}, P)\) such that \( B(\cdot) \) is standard Brownian motion and \( A(\cdot) \) is a nondecreasing, right-continuous process with independent increments. Let \( X_t = B(A(t)) \).
a) Show that \( X_t \) has independent increments and is a right-continuous martingale.
b) Let \( T \) be a stopping time with respect to the filtration \( \{\mathcal{F}_t, t \ge 0\} \), where \( \mathcal{F}_t \) is the \( \sigma \)-algebra generated by \( \{A(s), s \le t\} \cup \{B(s), s \le A(t)\} \). If \( E A(T) < \infty \), show that \( E X_T = 0 \).
c) In the case where \( A(\cdot) \) is a Poisson process with rate \( \lambda \), find the characteristic function of \( X_t \). | \textbf{Correction :} For part (a), assume that $A(t) \geq 0$ and $\mathbb{E}\sqrt{A(t)}< \infty$ for all $t \geq 0$. For part (b), assume that $\mathbb{E}A(t)< \infty$ for all $t \geq 0$.\n\nWe have $B$ to be standard BM and $A$ is non-decreasing right-continuous process with independent increments. We have $B$ to be independent to $A$. We have $X(t)=B(A(t)).$\n\begin{enumerate}[label=(\alph*)]\item Let $\mathcal{G}_t := \sigma(A(s): s \leq t)$ and $\mathcal{G}_{\infty} := \sigma(A(s): s \geq 0)$. Fix $n \geq 1$ and take any $0=t_0<t_1<t_2< \cdots < t_n$ and $D_1, \ldots, D_n \in \mathcal{B}_{\mathbb{R}}.$ Consider the map $ \phi : \left(C([0,\infty)) \times [0, \infty)^{n+1}, \mathcal{B}_{C([0,\infty))} \otimes \mathcal{B}_{[0,\infty)^{n+1}} \right) \to (\mathbb{R}, \mathcal{B}_{\mathbb{R}})$ defined as
$$ \phi(\mathbf{x}, s_0, \ldots, s_n) = \mathbbm{1}\left(x(s_i)-x(s_{i-1}) \in D_i, \; \forall \; i=1, \ldots, n \right).$$ Clearly this is a bounded measurable map. Note that for any $0 < s_0 < s_1 < \ldots < s_n < \infty$, we have by independent increment property of the BM the following identity.\begin{align*} g(s_0,\ldots, s_n) = \mathbb{E} \left[ \phi(B,s_0, \ldots, s_n )\right] = \mathbb{P} \left( B(s_i)-B(s_{i-1}) \in D_i, \; \forall \; i=1, \ldots, n\right) &= \prod_{i =1}^n \mathbb{P}(B(s_i)-B(s_{i-1}) \in D_i) \\
&= \prod_{i =1}^n Q(s_i-s_{i-1},D_i),\end{align*} where $Q(t, D) := \mathbb{P}(N(0,t) \in D)$ for any $t \geq 0$ and $D \in \mathcal{B}_{\mathbb{R}}$. For any fixed $D$, the function $t \mapsto Q(t,D)$ is continuous and hence measurable. Since $B \perp \!\!\!\perp \mathcal{G}_{\infty}$, conditioning on $\mathcal{G}_{\infty}$ gives the same identity with $s_i = A(t_i)$; from this, $X$ has independent increments and is a martingale with respect to the canonical filtration.\end{enumerate} |
1999-q1 | Let \( X_1, \ldots, X_n, \ldots \) be an arbitrary sequence of random variables on the same probability space and \( A, B \) two Borel subsets of \( \mathbb{R} \). Let \( \Gamma_n = \bigcup_{m=n}^\infty \{ X_m \in B \} \). Suppose that for some \( \delta > 0 \) and all \( n \geq 1 \),
\[ P(\Gamma_{n+1} | X_1, \ldots, X_n) \geq \delta \text{ on } \{ X_n \in A \}. \]
Show that
\[ \{ X_n \in A \text{ i.o.} \} \subseteq \{ X_n \in B \text{ i.o.} \} \quad \text{a.s.} \] | Let \mathcal{F}_n$ be the $\sigma$-algebra generated by $X_1, \ldots, X_n$. Also define $\Gamma^B := (X_n \in B \text{ i.o.})$ and $\Gamma^A := (X_n \in A \text{ i.o.})$. Note that $\Gamma_{n} = \bigcup_{m \geq n}(X_m \in B) \downarrow \Gamma^B$, in other words $\mathbbm{1}_{\Gamma_n} \stackrel{a.s.}{\longrightarrow} \mathbbm{1}_{\Gamma^B}$. By the hypothesis, we have $$ \mathbb{E} \left[ \mathbbm{1}_{\Gamma_n} \mid \mathcal{F}_n \right] \geq \delta \mathbbm{1}_{(X_n \in A)} $$ By \textit{Levy's upward Theorem}, we have $$ \mathbb{E} \left[ \mathbbm{1}_{\Gamma_{n+1}} \mid \mathcal{F}_n \right] \stackrel{a.s.}{\longrightarrow} \mathbb{E} \left[ \mathbbm{1}_{\Gamma^B} \mid \mathcal{F}_{\infty} \right] = \mathbbm{1}_{\Gamma^B},$$ where $\mathcal{F}_{\infty}=\sigma(X_1, X_2, \ldots,)$. Thus we can conclude that $$ \limsup_{n \to \infty} \delta \mathbbm{1}_{(X_n \in A)} \leq \mathbbm{1}_{\Gamma^B}, \; \text{almost surely}.$$ If $\omega \in \Gamma^A$, then $\limsup_{n \to \infty} \mathbbm{1}_{(X_n \in A)}(\omega)=1$ and hence $$ \delta \mathbbm{1}_{\Gamma^A} \leq \limsup_{n \to \infty} \delta \mathbbm{1}_{(X_n \in A)} \leq \mathbbm{1}_{\Gamma^B}, \; \text{almost surely}.$$ Since, $\delta >0$, this shows that $\mathbb{P}(\Gamma^A \setminus \Gamma^B)=0$. This concludes the proof. |
1999-q2 | Let \( Z_n \) be a Galton-Watson process (as in Durrett 4.3.d or Williams Ch. 0) such that each member of the population has at least one offspring (that is, in Durrett's notations, \( p_0 = 0 \)). Construct the corresponding Galton-Watson tree by connecting each member of the population to all of its offsprings. Call a vertex of this tree a branch point if it has more than one offspring. For each vertex \( v \) of the tree let \( C(v) \) count the number of branch points one encounters when traversing along the path from the root of the tree to \( v \). Let \( B_n \) be the minimum of \( C(v) \) over all vertices \( v \) in the \( n \)-th generation of the process. Show that for some constant \( \delta > 0 \) that depends only on the offspring distribution,
\[ \liminf_{n \to \infty} n^{-1} B_n \geq \delta \quad \text{a.s.} \]
*Hint*: For each \( \lambda > 0 \), find \( M(\lambda) \) such that \( \sum_v \exp(-\lambda C(v)) M(\lambda)^n \) is a Martingale, where the sum is over \( v \) in the \( n \)-th generation of the process. | \textbf{Correction :} Assume that the progeny mean is finite and $p_1<1$.
Let $Z$ denote the progeny distribution with $\mathbb{P}(Z=k)=:p_k$ for all $k \in \mathbb{Z}_{\geq 0}$. In this problem $p_0=0$. Introduce the notation $P(v)$ which will refer to the parent of vertex $v$ and $\mathcal{O}(v)$ which will refer to the set of offspring of vertex $v$. Also $D_n$ will denote the collection of all the vertices in the $n$-th generation and $\mathcal{F}_n$ will denote all the information up to generation $n$.
$C(v)$ is the number of branch points along the path from the root to $v$, excluding $v$. Clearly, $C(v)$ for any $v \in D_n$ depends on the offspring counts of only the vertices in $D_{n-1}$ and hence is $\mathcal{F}_n$-measurable. (Note that by convention, $\mathcal{F}_n$ does not incorporate information about the offspring counts of the vertices in $D_n$.)
Fix any $\lambda >0$. Then for all $n \geq 1$,
$$ \mathbb{E} \left[ \sum_{v \in D_n} \exp(-\lambda C(v))\right] \leq \mathbb{E}|D_n| = \mu^n < \infty,$$ where $\mu=\mathbb{E}(Z)$ is the progeny mean. Therefore,
\begin{align*}
\mathbb{E} \left[ \sum_{v \in D_n} \exp(-\lambda C(v)) \Bigg \rvert \mathcal{F}_{n-1} \right] &= \mathbb{E} \left[ \sum_{u \in D_{n-1}} \sum_{v : P(v)=u} \exp(-\lambda C(v)) \Bigg \rvert \mathcal{F}_{n-1} \right] \\
&= \sum_{u \in D_{n-1}} \mathbb{E} \left[ \sum_{v : P(v)=u} \exp(-\lambda C(v)) \Bigg \rvert \mathcal{F}_{n-1} \right] \\
&= \sum_{u \in D_{n-1}} \mathbb{E} \left[ \exp(-\lambda C(u)) \sum_{v : P(v)=u} \exp(-\lambda \mathbbm{1}( u \text{ is a branching point})) \Bigg \rvert \mathcal{F}_{n-1} \right] \\
&= \sum_{u \in D_{n-1}} \exp(-\lambda C(u)) \mathbb{E} \left[ |\mathcal{O}(u)| \exp(-\lambda \mathbbm{1}( |\mathcal{O}(u)| \geq 2)) \Bigg \rvert \mathcal{F}_{n-1} \right] \\
&= \sum_{u \in D_{n-1}} \exp(-\lambda C(u)) M(\lambda),
\end{align*} where
$$ M(\lambda) := \mathbb{E}\left[ Z \exp(-\lambda \mathbbm{1}(Z \geq 2))\right] = \sum_{k \geq 2} ke^{-\lambda}p_k + p_1 = e^{-\lambda}(\mu -p_1)+p_1 \in (0, \infty). $$
This shows that $\left\{ T_n = M(\lambda)^{-n}\sum_{v \in D_n} \exp(-\lambda C(v)), \mathcal{F}_n, n \geq 1\right\}$ is a MG. Being a non-negative MG, it converges almost surely to some $T_{\infty}$ with $$\mathbb{E}T_{\infty} \leq \liminf_{n \to \infty} \mathbb{E}T_n = \mathbb{E}T_1 \leq M(\lambda)^{-1}\mu < \infty.$$ In particular, $T_{\infty}< \infty$ with probability $1$.
Now note if $B_n := \min_{v \in D_n} C(v)$, then $$ T_n = M(\lambda)^{-n}\sum_{v \in D_n} \exp(-\lambda C(v)) \geq M(\lambda)^{-n} \exp(-\lambda B_n),$$ and hence $$ \dfrac{\lambda}{n} B_n \geq - \log M(\lambda) - \dfrac{1}{n}\log T_n.$$ Since $T_{\infty}< \infty$ almost surely, we have $\limsup_{n \to \infty} n^{-1} \log T_n \leq 0$, almost surely (this is an inequality since $T_{\infty}$ can take the value $0$ with positive probability). Thus, almost surely $$ \liminf_{n \to \infty} n^{-1}B_n \geq -\lambda^{-1}\log M(\lambda) =- \dfrac{\log(e^{-\lambda}(\mu-p_1)+p_1)}{\lambda}.$$ Our proof will be complete if we can furnish a $\lambda_0 >0$ such that $M(\lambda_0)<1$. Note that $\lim_{\lambda \to \infty} M(\lambda)=p_1 <1,$ and hence such $\lambda_0$ exists. |
1999-q3 | Show that the uniform distribution on \([0,1]\] is not the convolution of two independent, identically distributed variables. | Let $U \sim \text{Uniform}([0,1])$ and $U = X_1+X_2$, where $X_1$ and $X_2$ are \textit{i.i.d.} variables. Then $$ \text{Uniform}([-1,1]) \stackrel{d}{=} 2U-1 = 2X_1+2X_2-1 = Y_1+Y_2,$$ where $Y_i=2X_i-1/2$ for $i=1,2$. Clearly $Y_1$ and $Y_2$ are still \textit{i.i.d.} variables. Set $V:= 2U-1$. $V$ has a distribution which is symmetric around $0$ and hence it has all odd order moments to be $0$.
First of all, note that $$ (\mathbb{P}(Y_1 > 1/2))^2 = \mathbb{P}(Y_1 > 1/2, Y_2 > 1/2) \leq \mathbb{P}(V >1)=0,$$ and hence $Y_1 \leq 1/2$ almost surely. Similar argument shows $Y_1 \geq -1/2$ almost surely. Thus $Y_1$ is a bounded random variable.
We shall first show that the distribution of $Y_1$ has all odd order moments $0$, i.e. $\mathbb{E}Y_1^{2k-1}=0$ for all $k \in \mathbb{N}$. To prove this we go by induction. For $k=1$, the statement is true since $$ 2 \mathbb{E}(Y_1) = \mathbb{E}(Y_1+Y_2) = \mathbb{E}(V) = 0.$$ Now suppose that $\mathbb{E}Y_1^{2k-1}=0$ for all $k=1, \ldots, m$. Then \begin{align*} 0=\mathbb{E}(V^{2m+1}) =\mathbb{E}(Y_1+Y_2)^{2m+1}) =\sum_{l=0}^{2m+1} {2m+1 \choose l} \mathbb{E}Y_1^lY_2^{2m+1-l} = \sum_{l=0}^{2m+1} {2m+1 \choose l} \mathbb{E}Y_1^l \mathbb{E}Y_1^{2m+1-l}.\end{align*} For any $0 < l <2m+1$, if $l$ is odd then by induction hypothesis we have $\mathbb{E}Y_1^l=0$. If $l$ is even then $2m+1-l$ is an odd number strictly less than $2m+1$ and induction hypothesis then implies $\mathbb{E}Y_1^{2m+1-l}=0$. Therefore, $$ 0 = 2 \mathbb{E}Y_1^{2m+1} \Rightarrow \mathbb{E}Y_1^{2m+1}=0,$$ proving the claim.
Now let $Y_1 \perp \!\!\!\perp Z\sim \text{Uniform}(\left\{+1,-1\right\})$ and set $W_1=Z|Y_1|$. Clearly, for all $k \geq 1$, $$ \mathbb{E}W_1^{2k-1} = \mathbb{E}(Z^{2k-1}|Y_1|^{2k-1})=\mathbb{E}(Z|Y_1|^{2k-1}) =\mathbb{E}(Z) \mathbb{E}|Y_1|^{2k-1} =0,\; \mathbb{E}W_1^{2k}= \mathbb{E}(Z^{2k}Y_1^{2k}) = \mathbb{E}Y_1^{2k}.$$ Hence, $\mathbb{E}Y_1^k = \mathbb{E}W_1^k$. Since probability measures on bounded intervals are determined by its moments (see~\cite[Theorem 30.1]{B01}), we can conclude that $Y_1 \stackrel{d}{=}W_1$. It is evident that the distribution of $W_1$ is symmetric around $0$ and hence so is $Y_1$. Consequently, $\mathbb{E}(e^{itY_1}) \in \mathbb{R}$ for all $t \in \mathbb{R}$. Hence for all $t >0$, \begin{align*} 0 \leq (\mathbb{E}(e^{itY_1}))^2 = \mathbb{E}\exp(it(Y_1+Y_2)) = \mathbb{E}\exp(itV) = \dfrac{1}{2} \int_{-1}^{1} e^{itx} \, dx = \dfrac{e^{it}-e^{-it}}{2it} = \dfrac{\sin t}{t} \Rightarrow \sin t >0,\end{align*} which is a contradiction. |
1999-q4 | Let \( \pi \) be a permutation of \( \{1, \ldots, n \} \) chosen at random. Let \( X = X(\pi) \) be the number of fixed points of \( \pi \), so \( X(\pi) \) is the number of indices \( i \) such that \( \pi(i) = i \). Prove
\[ E[X(X-1)(X-2)\ldots(X-k+1)] = 1 \]
for \( k = 1, 2, \ldots, n \). | We have $\pi \sim \text{Uniform}(S_n)$ where $S_n$ is the symmetric group of order $n$. Let $\mathcal{C}(\pi)$ is the collection of fixed points of $\pi$ and $X=X(\pi):=|\mathcal{C}(\pi)|$. Then for all $1 \leq k \leq n$, \begin{align*} X(X-1)\ldots(X-k+1) &= \text{Number of ways to choose an ordered $k$-tuple from $\mathcal{C}(\pi)$} \\
&= \sum_{i_1, \ldots, i_k \in \mathcal{C}(\pi) : \; i_j\text{'s are all distinct}} 1 \\
&= \sum_{i_1, \ldots, i_k : \;\, i_j\text{'s are all distinct}} \mathbbm{1}(\pi(i_j)=i_j, \; \forall \; j=1, \ldots, k).
\end{align*} For any distinct collection of indices $(i_1, \ldots, i_k)$, we have $$ \mathbb{P}(\pi(i_j)=i_j, \; \forall \; j=1, \ldots, k) = \dfrac{(n-k)!}{n!},$$ and hence $$ \mathbb{E}\left[X(X-1)\ldots(X-k+1) \right] = k!{n \choose k} \dfrac{(n-k)!}{n!} = 1.$$ |
1999-q5 | Let \( \{ X_n, \mathcal{F}_n, n \geq 1 \} \) be a martingale difference sequence such that \( E(X_n^2 | \mathcal{F}_{n-1}) = b \) and
\[ \sup_{n \geq 1} E(|X_n|^r | \mathcal{F}_{n-1}) < \infty \]
almost surely, for some constants \( b > 0 \) and \( r > 2 \). Let \( S_n = \sum_{i=1}^n X_i \) and \( S_0 = 0 \).
Show that
\[ n^{-2} \sum_{i=\sqrt{n}}^n S_{i-1}^2, n^{-1} \sum_{i=\sqrt{n}}^n S_{i-1} X_i \]
converges weakly as \( n \to \infty \). Describe this limiting distribution and hence find the limiting distribution of \( ( \sum_{i=\sqrt{n}}^n S_{i-1} X_i ) / ( \sum_{i=\sqrt{n}}^n S_{i-1}^2 )^{1/2} \). *(Hint: \( S_n^2 = \sum_{i=1}^n X_i^2 + 2\sum_{i=1}^n X_i S_{i-1} \).* | Under the assumption we have $X_i$ to be integrable for all $i \geq 1$. $\left\{S_k=\sum_{i=1}^k X_i, \mathcal{F}_k, k \geq 0 \right\}$ is a $L^2$-Martingale with $S_0 :=0$. Its predictable compensator process satisfies $$ n^{-1}\langle S \rangle_n = \dfrac{1}{n} \sum_{k=1}^n \mathbb{E}(X_k^2 \mid \mathcal{F}_{k-1}) = b.$$ Also for any $\varepsilon >0$, \begin{align*} \dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[(S_k-S_{k-1})^2; |S_k-S_{k-1}| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] &= \dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[X_k^2; |X_k| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] \\
& \leq \dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[|X_k|^2 (\varepsilon \sqrt{n} |X_k|^{-1})^{2-r} \mid \mathcal{F}_{k-1} \right] \\
& \leq n^{1-r/2}\varepsilon^{2-r}\left( \sup_{k \geq 1} \mathbb{E}(|X_k|^r \mid \mathcal{F}_{k-1})\right) \stackrel{a.s.}{\longrightarrow} 0,\end{align*} since $r>2$. We have thus verified the conditions needed to apply the \textit{Martingale CLT}. Define $$ \widehat{S}_n(t) := \dfrac{1}{\sqrt{n}} \left[S_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(S_{\lfloor nt \rfloor +1}-S_{\lfloor nt \rfloor}) \right]=\dfrac{1}{\sqrt{n}} \left[S_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)X_{\lfloor nt \rfloor +1} \right], $$ for all $0 \leq t \leq 1$. We then have $\widehat{S}_n \stackrel{d}{\longrightarrow} \sqrt{b}W$ as $n \to \infty$ on the space $C([0,1])$, where $W$ is the standard Brownian Motion on $[0,1]$.
First note that for any $n \geq 1$, $$ 2\sum_{i=1}^n X_iS_{i-1} = \sum_{i=1}^n S_i^2 - \sum_{i=1}^n S_{i-1}^2 - \sum_{i=1}^n X_i^2 = S_n^2 - \sum_{i=1}^n X_i^2.$$ Therefore, \begin{align*} n^{-1}\mathbb{E}\Bigg \rvert \sum_{i=1}^n S_{i-1}X_i - \sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}X_i \Bigg \rvert &\leq n^{-1} \mathbb{E} \Bigg \rvert\sum_{i=1}^{\lceil \sqrt{n} \rceil -1}S_{i-1}X_i \Bigg \rvert & \leq \dfrac{1}{2n} \mathbb{E}|S_{\lceil \sqrt{n} \rceil -1}|^2 +\frac{1}{2n} \mathbb{E} \left(\sum_{i=1}^{\lceil \sqrt{n} \rceil -1}X_i^2 \right) = \dfrac{2b(\lceil \sqrt{n} \rceil -1)}{2n}=o(1). \end{align*} Similarly, \begin{align*} n^{-2}\mathbb{E}\Bigg \rvert \sum_{i=1}^n S_{i-1}^2 - \sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}^2 \Bigg \rvert &\leq n^{-2} \mathbb{E} \Bigg \rvert\sum_{i=1}^{\lceil \sqrt{n} \rceil -1}S_{i-1}^2 \Bigg \rvert & = \dfrac{1}{n^2} \sum_{i=1}^{\lceil \sqrt{n} \rceil -1}\mathbb{E}S_{i-1}^2 = \dfrac{1}{n^2} \sum_{i=1}^{\lceil \sqrt{n} \rceil -1} (i-1)b = O(n^{-1})=o(1). \end{align*} Hence, \begin{align*} \left( n^{-2}\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}^2, n^{-1}\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}X_i \right) &= \left( n^{-2}\sum_{i=1}^n S_{i-1}^2, n^{-1}\sum_{i=1 }^n S_{i-1}X_i \right) + o_p(1) \\
&= \left( n^{-2}\sum_{i=1}^n S_{i-1}^2, \dfrac{1}{2}\left(n^{-1}S_n^2 - n^{-1}\sum_{i=1}^n X_i^2\right) \right) + o_p(1) \\
&= \left( n^{-1}\sum_{i=1}^n \widehat{S}_n\left( \dfrac{i-1}{n}\right)^2, \dfrac{1}{2}\left(\widehat{S}_n(1)^2 - n^{-1}\sum_{i=1}^n X_i^2\right) \right) + o_p(1). \end{align*}
Notice that, \begin{align*} \Bigg \rvert n^{-1}\sum_{i=1}^n \widehat{S}_n^2((i-1)/n) - \int_{0}^1 \widehat{S}^2_n(t)\, dt \Bigg \rvert &\leq \sum_{i=1}^n \Bigg \rvert n^{-1} \widehat{S}_n^2((i-1)/n) - \int_{(i-1)/n}^{i/n} \widehat{S}^2_n(t)\, dt \Bigg \rvert \\
& \leq \sum_{i=1}^n \dfrac{1}{n} \sup_{(i-1)/n \leq t \leq i/n} \big \rvert \widehat{S}^2_n(t) - \widehat{S}_n^2((i-1)/n)\big \rvert \\
& \leq 2\left(\sup_{0 \leq t \leq 1} |\widehat{S}_n(t)| \right) \left[ \sum_{i=1}^n \dfrac{1}{n} \sup_{(i-1)/n \leq t \leq i/n} \big \rvert \widehat{S}_n(t) - \widehat{S}_n((i-1)/n)\big \rvert \right] \\
& = 2 \left(\sup_{0 \leq t \leq 1} |\widehat{S}_n(t)| \right) \dfrac{1}{n} \sum_{i=1}^n n^{-1/2}|X_i| = \dfrac{2}{\sqrt{n}}\left(\sup_{0 \leq t \leq 1} |\widehat{S}_n(t)| \right) \left( \dfrac{1}{n} \sum_{i=1}^n |X_i| \right). \end{align*}
Since $\mathbf{x} \mapsto \sup_{0 \leq t \leq 1} |x(t)|$ is a continuous function on $C([0,1])$, we have $\sup_{0 \leq t \leq 1} |\widehat{S}_n(t)| \stackrel{d}{\longrightarrow} \sup_{0 \leq t \leq 1} \sqrt{b}|W(t)| = O_p(1).$ On the other hand, $\mathbb{E} \left(n^{-1}\sum_{i=1}^n |X_i|\right) \leq n^{-1} \sum_{i=1}^n \mathbb{E}|X_i| \leq \sqrt{b}$ and hence $n^{-1}\sum_{i=1}^n |X_i| = O_p(1).$ Therefore, $$\Bigg \rvert n^{-1}\sum_{i=1}^n \widehat{S}_n^2((i-1)/n) - \int_{0}^1 \widehat{S}^2_n(t)\, dt \Bigg \rvert = o_p(1),$$ which yields $$ \left( n^{-2}\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}^2, n^{-1}\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}X_i \right) = \left( \int_{0}^1 \widehat{S}^2_n(t)\, dt, \dfrac{1}{2}\left(\widehat{S}_n(1)^2 - \dfrac{1}{n} \sum_{i=1}^n X_i^2\right)\right) + o_p(1). $$
Since $\mathbf{x} \mapsto (x^2(1),\int_{0}^1 x^2(t)\, dt)$ is a jointly continuous function on $C([0,1])$, the continuous mapping theorem applied to $\widehat{S}_n \stackrel{d}{\longrightarrow} \sqrt{b}W$, together with the fact that $n^{-1}\sum_{i=1}^n X_i^2 \stackrel{p}{\longrightarrow} b$ (a martingale weak law of large numbers for the differences $X_i^2-b$, which have almost surely uniformly bounded conditional moments of order $r/2>1$), gives $$ \left( n^{-2}\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}^2, n^{-1}\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}X_i \right) \stackrel{d}{\longrightarrow} \left( b\int_{0}^1 W(t)^2\, dt, \dfrac{b}{2}\left(W(1)^2-1\right)\right), \; \text{ as } n \to \infty.$$
Applying the continuous mapping theorem once more, with the map $(x,y) \mapsto y/\sqrt{x}$ (continuous on the support of the limit since $\int_{0}^1 W(t)^2\, dt>0$ almost surely), we conclude that $$ \dfrac{\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}X_i }{\sqrt{\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}^2}} \stackrel{d}{\longrightarrow} \dfrac{\sqrt{b}(W(1)^2-1)}{2\sqrt{\int_{0}^1 W(t)^2\, dt}}.$$ |
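The limit (in particular the factor $2$ in the denominator) can be probed numerically. The sketch below is not part of the solution; it takes $X_i$ i.i.d. $\pm\sqrt{b}$, which satisfies the assumptions for every $r>2$, and compares a few empirical quantiles of the self-normalized statistic with those of the limit functional simulated from a discretized Brownian motion. The sample sizes are arbitrary and only rough agreement is expected.

```python
# Simulation sketch: the self-normalized statistic versus the claimed weak limit
# sqrt(b) * (W(1)^2 - 1) / (2 * sqrt(int_0^1 W(t)^2 dt)).
import numpy as np

rng = np.random.default_rng(0)
b, n, reps = 2.0, 1000, 4000
m = int(np.ceil(np.sqrt(n)))

# Martingale differences X_i = +-sqrt(b): E(X_i^2 | F_{i-1}) = b, bounded conditional moments.
X = np.sqrt(b) * rng.choice([-1.0, 1.0], size=(reps, n))
S = np.cumsum(X, axis=1)
S_prev = np.concatenate([np.zeros((reps, 1)), S[:, :-1]], axis=1)   # S_{i-1}
num = np.sum(S_prev[:, m - 1:] * X[:, m - 1:], axis=1)              # sum_{i=m}^n S_{i-1} X_i
den = np.sqrt(np.sum(S_prev[:, m - 1:] ** 2, axis=1))
stat = num / den

# The limit functional, from a discretized standard Brownian motion on [0,1].
N = 1000
W = np.cumsum(rng.standard_normal((reps, N)) / np.sqrt(N), axis=1)
limit = np.sqrt(b) * (W[:, -1] ** 2 - 1) / (2 * np.sqrt(np.mean(W ** 2, axis=1)))

for q in (0.1, 0.5, 0.9):
    print(q, np.quantile(stat, q), np.quantile(limit, q))
```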
1999-q6 | Let \( \{ B_t, \mathcal{F}_t, t \geq 0 \} \) be Brownian Motion with \( B_0 = 0 \) and let \( F \) be a probability distribution on the real line. Let
\[ Z_t = \int_{-\infty}^{\infty} e^{\theta B_t - \frac{\theta^2}{2}t} dF(\theta) .\]
(a) Show that \( \{ Z_t, \mathcal{F}_t, t \geq 0 \} \) is a continuous, nonnegative martingale with \( \lim_{t \to \infty} Z_t = 0 \) almost surely.
(b) Show that for \( b > 1 \),
\[ P \{ Z_t \geq b \text{ for some } t \geq 0 \} = 1/b. \]
*(Hint: Use the first part of the problem and Doob's optional sampling theorem.)*
(c) Compute \( Z_t \) in the case \( F \) is standard normal and rewrite the expression in the second part in terms of the probability that Brownian Motion ever hits a nonlinear boundary. | First note that for any $x \in \mathbb{R}$ and $t>0$, $$ \int_{\mathbb{R}} \exp \left( \theta x - \dfrac{t}{2} \theta ^2\right)\, dF(\theta ) \leq \int_{\mathbb{R}} \exp \left( |\theta ||x| - \dfrac{t}{2} \theta ^2\right)\, dF(\theta ) \leq \sup_{\theta \in \mathbb{R}} \exp \left( |\theta ||x| - \dfrac{t}{2} \theta ^2\right) < \infty.$$
\begin{enumerate}[label=(\alph*)]
\item By the above argument, we can say that $Z_t(\omega)$ is a finite real number for all $\omega$ and $t \geq 0$. Next note that $(\theta , \omega) \mapsto (\theta , B_t(\omega))$ is a measurable function from $(\mathbb{R} \times \Omega, \mathcal{B}_{\mathbb{R}} \otimes \mathcal{F}_t)$ to $(\mathbb{R} \times \mathbb{R}, \mathcal{B}_{\mathbb{R}} \otimes \mathcal{B}_{\mathbb{R}}).$ On the otherhand, $(\theta , x) \mapsto \exp \left( \theta x - \dfrac{t}{2} \theta ^2\right)$ is jointly continuous hence measurable function from $(\mathbb{R} \times \mathbb{R}, \mathcal{B}_{\mathbb{R}} \otimes \mathcal{B}_{\mathbb{R}})$ to $(\mathbb{R}, \mathcal{B}_{\mathbb{R}})$. Therefore, $(\theta , \omega) \mapsto \exp \left( \theta B_t(\omega) - \dfrac{t}{2} \theta ^2\right)$ is measurable from $(\mathbb{R} \times \Omega, \mathcal{B}_{\mathbb{R}} \otimes \mathcal{F}_t)$ to $(\mathbb{R}, \mathcal{B}_{\mathbb{R}})$. Hence, by \textit{Fubini's Theorem}, we have $$ \omega \mapsto \int_{\mathbb{R}} \exp \left( \theta B_t(\omega) - \dfrac{t}{2} \theta ^2\right)\, dF(\theta ) \; \text{is } \mathcal{F}_t-\text{measurable}.$$ In other words, $Z_t \in m \mathcal{F}_t$. Note that $\theta B_t \sim N(0, t\theta ^2)$ and therefore, by \textit{Fubini's Theorem}, \begin{align*} \mathbb{E}|Z_t|= \mathbb{E}(Z_t) = \mathbb{E} \left[\int_{\mathbb{R}} \exp \left( \theta B_t - \dfrac{t}{2} \theta ^2\right)\, dF(\theta ) \right] = \int_{\mathbb{R}} \mathbb{E}\exp \left( \theta B_t - \dfrac{t}{2} \theta ^2\right)\, dF(\theta ) = \int_{\mathbb{R}} \, dF(\theta ) =1. \end{align*} Thus $Z_t$ is integrable. Take $0 \leq s \leq t.$ Then using independent stationary increment property of BM, we get \begin{align*} \mathbb{E}(Z_t \mid \mathcal{F}_s) &= \mathbb{E} \left[\int_{\mathbb{R}}\exp \left( \theta B_s \right) \exp \left( \theta (B_t-B_s)- \dfrac{t}{2} \theta ^2 \right)\, dF(\theta ) \Big \rvert \mathcal{F}_s\right] \\\n&= \int_{\mathbb{R}} \exp \left( \theta B_s - \dfrac{t}{2} \theta ^2 \right) \mathbb{E} \left[ \exp \left( \theta (B_t-B_s)\right) \mid \mathcal{F}_s\right] \, dF(\theta ) \\\n&= \int_{\mathbb{R}} \exp \left(\theta B_s+ \dfrac{t-s}{2}\theta ^2- \dfrac{t}{2} \theta ^2 \right)\, dF(\theta ) = Z_s. \end{align*} This establishes that $\left\{Z_t, \mathcal{F}_t, t \geq 0\right\}$ is a MG. Notice that by DCT, $$ x \mapsto \int_{\mathbb{R}} \exp \left( \theta x - \dfrac{t}{2} \theta ^2\right)\, dF(\theta )$$ is a continuous function and therefore, the process $Z$ has continuous sample paths by sample-path-continuity of BM. Thus $\left\{Z_t, \mathcal{F}_t, t \geq 0\right\}$ is a continuous non-negative MG and hence by \textit{Doob's Convergence Theorem}, it converges almost surely to some $Z_{\infty}$ as $t \to \infty$.
By \textit{Law of Iterated Logarithm}, we have $$ -\infty = \liminf_{t \to \infty} B_t < \limsup_{t \to \infty} B_t = \infty, \; \text{almost surely}.$$ Take $\omega$ such that $Z_t(\omega) \to Z_{\infty}(\omega)$ and $$ -\infty = \liminf_{t \to \infty} B_t(\omega) < \limsup_{t \to \infty} B_t(\omega) = \infty, \; \text{almost surely}.$$ By continuity of the function $t \mapsto B_t(\omega)$, we can get a sequence $t_n(\omega) \uparrow \infty$ such that $B_{t_n(\omega)}(\omega) = 0$ for all $n \geq 1$. Then $$ Z_{t_n(\omega)}(\omega) = \int_{\mathbb{R}} \exp(-\theta^2 t_n(\omega)/2) \, dF(\theta), \; \forall \, n \geq 1. $$ For all $n \geq 1$, we have $\exp(-\theta^2 t_n(\omega)/2) \leq 1$ and $\exp(-\theta^2 t_n(\omega)/2) \to \mathbbm{1}(\theta=0)$ as $n \uparrow \infty$. Thus applying DCT, we conclude that $$ Z_{\infty}(\omega) = \lim_{n \to \infty} Z_{t_n(\omega)}(\omega) = \lim_{n \to \infty}\int_{\mathbb{R}} \exp(-\theta^2 t_n(\omega)/2) \, dF(\theta) = \int_{\mathbb{R}} \mathbbm{1}(\theta=0) \, dF(\theta) = F(\left\{0\right\}) = F(0)-F(0-). $$ Hence, $Z_{\infty} = F(0)-F(0-)$ almost surely. We need to assume $F$ to be continuous at $0$ to conclude that $Z_t \stackrel{a.s.}{\longrightarrow} 0$ as $t \to \infty.$
\item Define the stopping time $T:= \left\{t \geq 0 : Z_t \geq b\right\}$. Since $Z_0 =1, b>1$ and $Z$ has continuous paths we can conclude that $Z_{T}\mathbbm{1}(T< \infty)=b\mathbbm{1}(T< \infty)$. Using OST, we get $\mathbb{E}(Z_{T \wedge t}) = \mathbb{E}(Z_0)= 1$ for all $t \geq 0$. Observe that $$ Z_{T \wedge t} \stackrel{a.s.}{\longrightarrow} Z_T\mathbbm{1}(T < \infty) + Z_{\infty}\mathbbm{1}(T=\infty), \;\text{as } t \to \infty.$$ Since $0 \leq Z_{T \wedge t} \leq b$ for all $t \geq 0$, we can apply DCT to conclude that $$1 = \mathbb{E} \left[Z_T\mathbbm{1}(T < \infty) \right] + \mathbb{E} \left[Z_{\infty}\mathbbm{1}(T = \infty) \right] = b\mathbb{P}(T< \infty) + (F(0)-F(0-))\mathbb{P}(T=\infty).$$ Therefore, $$ \mathbb{P}(Z_t \geq b \text{ for some } t \geq 0 ) = \mathbb{P}(T < \infty) = \dfrac{1-F(0)+F(0-)}{b-F(0)+F(0-)}.$$ If $F$ is continuous at $0$, the last expression just becomes $1/b$.
\item $F$ is $N(0, 1)$. Hence, \begin{align*} Z_t = \int_{\mathbb{R}} \exp \left( \theta B_t - \dfrac{t}{2} \theta ^2\right)\, dF(\theta ) = \int_{\mathbb{R}} \exp \left( \theta B_t - \dfrac{t}{2} \theta^2 \right) \phi \left( \theta\right)\, d\theta.\end{align*} For any $a \in \mathbb{R}$, \begin{align*} \int_{\mathbb{R}} \exp \left( \theta a - \dfrac{t}{2} \theta ^2 \right) \phi \left( \theta \right)\, d\theta = \int_{\mathbb{R}} \exp(a\theta) \dfrac{1}{\sqrt{2\pi}} \exp\left( -\dfrac{(t+1)\theta^2}{2}\right)\, d\theta.\end{align*} Set $\nu^2 = (t+1)^{-1}.$ and take $Y \sim N(0,\nu^2)$. Then \begin{align*} \int_{\mathbb{R}} \exp(a\theta) \dfrac{1}{\sqrt{2\pi}} \exp\left( -\dfrac{(t+1)\theta^2}{2}\right)\, d\theta = \int_{\mathbb{R}} \exp(a\theta) \dfrac{1}{\sqrt{2\pi}} \exp\left( -\dfrac{\theta^2}{2\nu^2}\right)\, d\theta = \nu \mathbb{E}\exp(a Y) = \nu \exp (a^2\nu^2/2).\end{align*} Hence, $$ Z_t = (t +1)^{-1/2} \exp \left( \dfrac{B_t^2}{2(t+1)} \right). $$ Since, the normal distribution has no mass at $0$, for this case $(b)$ implies that for any $b>1$, \begin{align*} 1/b = \mathbb{P}(Z_t \geq b \text{ for some } t \geq 0 ) = \mathbb{P}(|B_t| \geq h(t) \text{ for some } t \geq 0 ), \end{align*} where $h(t):= \sqrt{2(t+1)\log (b\sqrt{t+1})}$ for all $t \geq 0$.
\end{enumerate} |
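The closed form in part (c) can be checked numerically. The sketch below is not part of the solution; it verifies, for one sampled value of $B_t$, that the defining integral matches $(t+1)^{-1/2}\exp(B_t^2/(2(t+1)))$, and that the Monte Carlo average of $Z_t$ is close to $\mathbb{E}Z_t=1$. A modest value of $t$ is used so that the Monte Carlo estimator has finite variance.

```python
# Check Z_t = (t+1)^{-1/2} exp(B_t^2 / (2(t+1))) and E[Z_t] = 1 when F is standard normal.
import numpy as np

rng = np.random.default_rng(1)
t = 0.5

# (1) quadrature check of the closed form for one sampled value of B_t
B = np.sqrt(t) * rng.standard_normal()
theta = np.linspace(-12, 12, 200_001)
phi = np.exp(-theta**2 / 2) / np.sqrt(2 * np.pi)
Z_quad = np.sum(np.exp(theta * B - t * theta**2 / 2) * phi) * (theta[1] - theta[0])
Z_closed = (t + 1) ** (-0.5) * np.exp(B**2 / (2 * (t + 1)))
print(Z_quad, Z_closed)

# (2) Monte Carlo check that E[Z_t] = 1
B_samples = np.sqrt(t) * rng.standard_normal(1_000_000)
Z = (t + 1) ** (-0.5) * np.exp(B_samples**2 / (2 * (t + 1)))
print(Z.mean())
```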
2000-q1 | Fix \( s > 1 \). Let \( \zeta(s) = \sum_{j=1}^{\infty} j^{-s} \) and consider the random variable \( J \) such that \( P(J = j) = j^{-s}/\zeta(s) \) for \( j = 1, 2, 3, \cdots \).
(a) Let \( X_n = 1 \) if \( J \) is divisible by \( n \) and \( X_n = 0 \) otherwise. Calculate in closed form \( P(X_n = 1) \).
(b) Let \( \mathcal{P} = \{2, 3, 5, 7, \ldots \} \) denote the set of prime numbers. Using part (a), prove that \( \{X_p\}_{p \in \mathcal{P}} \) is a collection of independent random variables.
(c) Prove Euler's formula \( \zeta(s) = \prod_{p \in \mathcal{P}}(1 - p^{-s})^{-1} \), and explain its probabilistic interpretation. | \begin{enumerate}[label=(\alph*)]
\item
\begin{align*}
\mathbb{P}(X_n=1) = \mathbb{P} \left(J \text{ is divisible by } n \right) = \sum_{k : n|k} \mathbb{P}(J=k) = \sum_{l \geq 1} \mathbb{P}(J=nl) = \sum_{l \geq 1} \zeta(s)^{-1}(nl)^{-s} = n^{-s}.
\end{align*}
\item Let $p_n$ be the $n$-th prime number. Note that a natural number $m$ is divisible by $p_1p_2\cdots p_n$ if and only if $m$ is divisible by each of $p_1, \ldots, p_n$. Therefore, for all $n \geq 1$,
\begin{align*}
\mathbb{P}(X_{p_i}=1, \; \forall \; i=1, \ldots, n) = \mathbb{P} \left( J \text{ is divisible by } p_1, \ldots, p_n\right) &= \mathbb{P}(J \text{ is divisible by } p_1\ldots p_n) \\
&= \left( \prod_{i=1}^n p_i\right)^{-s} = \prod_{i=1}^n p_i^{-s} = \prod_{i=1}^n \mathbb{P}(X_{p_i}=1).
\end{align*}
The same computation applies to any finite subcollection of primes, so $\mathbb{P}(X_p=1 \; \forall \; p \in S) = \prod_{p \in S}\mathbb{P}(X_p=1)$ for every finite $S \subseteq \mathcal{P}$. Since the $X_p$'s are $\{0,1\}$-valued, this factorization determines all joint probabilities (by inclusion-exclusion) and hence implies that $\left\{X_p \mid p \in \mathcal{P}\right\}$ is a collection of independent random variables.
\item
\begin{align*}
\zeta(s)^{-1}=\mathbb{P}(J=1) = \mathbb{P}(J \text{ is not divisible by any prime}) = \mathbb{P}(X_p=0, \; \forall \; p \in \mathcal{P}) = \prod_{p \in \mathcal{P}} \mathbb{P}(X_p=0) =\prod_{p \in \mathcal{P}} (1-p^{-s}).
\end{align*}
Thus proves \textit{Euler's Formula}. Let $g(p,n)$ be the highest power of prime $p$ that divides $n$. Then
$$ \mathbb{P}(J=n) = \zeta(s)^{-1}n^{-s} = \prod_{p \in \mathcal{P}} (1-p^{-s}) p^{-sg(p,n)}.$$
Thus $\left\{g(p,J) \mid p \in \mathcal{P}\right\}$ is a collection of independent variables, with $\mathbb{P}(g(p,J)=k)=(1-p^{-s})p^{-sk}$ for $k \geq 0$, i.e. $g(p,J)$ is Geometric with success probability $1-p^{-s}$. This gives a way to simulate $J$: draw the prime exponents independently and multiply.
\end{enumerate} |
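Euler's formula and the divisibility probabilities of parts (a)-(b) are easy to check numerically. The sketch below is not part of the solution; it truncates the sum and the product at a finite level $N$ and samples $J$ from the truncated law, with $s=2$ and $N=10^6$ as arbitrary choices.

```python
# Check Euler's product numerically and the divisibility law P(n | J) = n^{-s}.
import numpy as np

s, N = 2.0, 10**6
j = np.arange(1, N + 1, dtype=float)
w = j ** (-s)
zeta_trunc = w.sum()                                   # truncated zeta(s)

is_prime = np.ones(N + 1, dtype=bool)                  # sieve of Eratosthenes
is_prime[:2] = False
for p in range(2, int(N**0.5) + 1):
    if is_prime[p]:
        is_prime[p * p :: p] = False
primes = np.nonzero(is_prime)[0].astype(float)
euler = np.prod(1.0 / (1.0 - primes ** (-s)))
print(zeta_trunc, euler, np.pi**2 / 6)                 # all close to zeta(2)

rng = np.random.default_rng(0)
J = rng.choice(N, size=300_000, p=w / w.sum()) + 1     # J sampled from the truncated law
for d in (2, 3, 6):
    print(d, np.mean(J % d == 0), d ** (-s))           # P(d | J) vs d^{-s}; note 6^{-s} = 2^{-s} 3^{-s}
```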
2000-q2 | \( X \) and \( Y \) are two independent normal variables with mean 0 and variance 1. Let \( Z = \frac{2XY}{\sqrt{X^2 + Y^2}} \) and \( W = \frac{(X^2 - Y^2)}{\sqrt{X^2 + Y^2}} \). Show that the vector \( (Z,W) \) has a normal joint distribution (using polar coordinates might help you). | We know that $(X,Y) \stackrel{d}{=}(R\cos \theta, R \sin \theta)$ where $R$ is a non-negative random variable with $R^2 \sim \chi^2_1$, $\theta \sim \text{Uniform}([0,2\pi])$ and $R \perp\!\!\perp \theta$. Then,
$$ Z=\dfrac{2XY}{\sqrt{X^2+Y^2}} = \dfrac{2R^2\cos(\theta)\sin(\theta)}{R} = R\sin(2\theta), W=\dfrac{X^2-Y^2}{\sqrt{X^2+Y^2}} = \dfrac{R^2(\cos^2(\theta)-\sin^2(\theta))}{R} = R\cos(2\theta).$$
Let
$$ U := \begin{cases}
2\theta, &\; \text{if } 0 \leq 2\theta \leq 2\pi, \\
2\theta - 2\pi, & \; \text{if } 2\pi < 2\theta \leq 4\pi.
\end{cases}$$
Clearly, $U \sim \text{Uniform}([0,2\pi])$, $R \perp\!\!\perp U$ and $Z=R\sin U, W=R\cos U$. Hence, $(Z,W) \stackrel{d}{=} (R\sin \theta, R\cos \theta) = (Y,X) \stackrel{d}{=} (X,Y)$, so $Z$ and $W$ are independent standard normal variables and, in particular, $(Z,W)$ has a jointly normal distribution. |
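A quick empirical sanity check of the conclusion (not part of the solution): sample $(X,Y)$, form $(Z,W)$, and compare a few joint moments with those of two independent standard normal variables.

```python
# Empirical check that (Z, W) behaves like a pair of independent N(0,1) variables.
import numpy as np

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((2, 1_000_000))
R2 = X**2 + Y**2
Z = 2 * X * Y / np.sqrt(R2)
W = (X**2 - Y**2) / np.sqrt(R2)

print(Z.mean(), W.mean())            # ~ 0, 0
print(Z.var(), W.var())              # ~ 1, 1
print(np.mean(Z * W))                # ~ 0 (uncorrelated)
print(np.mean(Z**2 * W**2))          # ~ 1 (consistent with independence)
print(np.mean(Z**4), np.mean(W**4))  # ~ 3, 3 (normal fourth moments)
```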
2000-q3 | Suppose \( Z_1, Z_2, \cdots \) are independent random variables that are normally distributed with mean value \( \mu > 0 \) and variance one. Let \( S_n = \sum_{k=1}^n Z_k \) and for \( b > 0 \) define \( \tau := \inf \{ n : |S_n| \geq b \} \). Prove that \( P(S_\tau \leq -b) < (1 + \exp(2\mu b))^{-1} \) and \( E(\tau) > (b/\mu)[(\exp(2\mu b) - 1)/(\exp(2\mu b) + 1)] \). | Let $\mathbb{P}_{\mu}$ denotes the probability under which $Z_1, Z_2, \ldots \stackrel{iid}{\sim} N(\mu,1)$.
Define $S_n=\sum_{i=1}^n Z_i$ with $S_0:=0$. Let $\mathcal{F}_n := \sigma(Z_i : 1 \leq i \leq n)$ and $\mathbb{P}_{\mu,n}:= \mathbb{P}_{\mu} \rvert_{\mathcal{F}_n}$, for all $n \geq 1$. Note that, for any $\mu_1,\mu_2 \in \mathbb{R}$,
$$ \dfrac{d\mathbb{P}_{\mu_1,n}}{d\mathbb{P}_{\mu_2,n}} = \dfrac{\prod_{i=1}^n \phi(Z_i-\mu_1)}{\prod_{i=1}^n \phi(Z_i-\mu_2)} = \exp \left( (\mu_1-\mu_2)\sum_{i=1}^n Z_i -\dfrac{n}{2}(\mu_1^2-\mu_2^2)\right) = \exp \left( (\mu_1-\mu_2)S_n -\dfrac{n}{2}(\mu_1^2-\mu_2^2)\right). $$
In particular, for any $\mu \in \mathbb{R}$ and $A \in \mathcal{F}_n$, we have $ \mathbb{P}_{\mu}(A) = \mathbb{E}_{-\mu} \left[\exp(2\mu S_n),A \right].$ More precisely, for any $Y \in m\mathcal{F}_n$ with $Y \geq 0$ (or $Y \leq 0$), we have $ \mathbb{E}_{\mu} Y = \mathbb{E}_{-\mu}(Ye^{2\mu S_n}).$
Now let $\mu , b>0$ and set $\tau := \inf \left\{ n : |S_n| \geq b\right\}$. Since, $\mu >0$, we have $S_n \to \infty$ almost surely as $n \to \infty$ and hence $\tau < \infty$ almost surely. Since, $d\mathbb{P}_{-\mu,n}/d\mathbb{P}_{\mu,n} = \exp(-2\mu S_n)$, we can conclude that $\left\{e^{-2\mu S_n}, \mathcal{F}_n, n \geq 0\right\}$ is a MG under $\mathbb{P}_{\mu}$. OST then implies that $\mathbb{E}_{\mu}\exp(-2\mu S_{\tau \wedge n}) =1$, for all $n \geq 1$. Apply \textit{Fatou's Lemma} and conclude that $\mathbb{E}_{\mu} \exp(-2\mu S_{\tau}) \leq 1$. Therefore,
\begin{align*}
1 \geq \mathbb{E}_{\mu} \exp(-2\mu S_{\tau}) &= \mathbb{E}_{\mu} \left(\exp(-2\mu S_{\tau}), S_{\tau} \leq -b \right) + \mathbb{E}_{\mu} \left(\exp(-2\mu S_{\tau}), S_{\tau} \geq b\right) \\
& =\mathbb{E}_{\mu} \left(\exp(-2\mu S_{\tau}), S_{\tau} \leq -b \right) + \sum_{n \geq 1} \mathbb{E}_{\mu} \left(\exp(-2\mu S_n), \tau=n, S_n \geq b\right) \\
&= \mathbb{E}_{\mu} \left(\exp(-2\mu S_{\tau}), S_{\tau} \leq -b \right) + \sum_{n \geq 1} \mathbb{P}_{-\mu} \left( \tau=n, S_n \geq b\right) \\
&= \mathbb{E}_{\mu} \left(\exp(-2\mu S_{\tau}), S_{\tau} \leq -b \right) + \mathbb{P}_{-\mu} \left( S_{\tau} \geq b \right) \\
&= \mathbb{E}_{\mu} \left(\exp(-2\mu S_{\tau}), S_{\tau} \leq -b \right) + \mathbb{P}_{\mu} \left( S_{\tau} \leq -b \right),
\end{align*}
where the last line follows from the fact that
\begin{equation}{\label{dist}}
\left( \left\{Z_i\right\}_{i \geq 1}, \tau \right) \Bigg \rvert_{\mathbb{P}_{\mu}} \stackrel{d}{=} \left( \left\{-Z_i\right\}_{i \geq 1}, \tau \right) \Bigg \rvert_{\mathbb{P}_{-\mu}}.
\end{equation}
Thus $1 \geq \mathbb{E}_{\mu} \left[ (1+e^{-2\mu S_{\tau}}), S_{\tau} \leq -b\right] \geq (1+e^{2b\mu})\mathbb{P}_{\mu}(S_{\tau} \leq -b).$ If the second inequality is strict, then we have got what we need to show. Otherwise, equality will imply that on the event $(S_{\tau} \leq -b)$, we have $S_{\tau}=-b$ almost surely. But,
$$\mathbb{P}_{\mu}(S_{\tau}=-b) = \sum_{n \geq 1} \mathbb{P}_{\mu}(\tau=n, S_n =-b) \leq \sum_{n \geq 1} \mathbb{P}_{\mu}( S_n =-b)=0,$$
and hence it will imply that $\mathbb{P}_{\mu}(S_{\tau} \leq -b)=0$. In any case, we have
$$ \mathbb{P}_{\mu}(S_{\tau} \leq -b) < \dfrac{1}{1+e^{2b\mu}} \Rightarrow \mathbb{P}_{\mu}(S_{\tau} \geq b) > \dfrac{e^{2b \mu}}{1+e^{2b\mu}}.$$
To prove the next part we first establish that $\mathbb{E}_{\mu}(-\inf_{n \geq 0} S_n) < \infty$ for $\mu >0$. To this end, consider the stopping time $T_{c} := \inf \left\{n \geq 1: S_n \leq -c\right\}$ for $c>0$.
Recall that $\left\{e^{-2\mu S_n}, \mathcal{F}_n, n \geq 0\right\}$ was a MG under $\mathbb{P}_{\mu}$ and hence by OST and \textit{Fatou's Lemma}, we get
$$ 1= \lim_{n \to \infty} \mathbb{E}_{\mu} \exp(-2\mu S_{T_c \wedge n}) \geq \mathbb{E}_{\mu} \left[ \liminf_{n \to \infty} \exp(-2\mu S_{T_c \wedge n})\right].$$
On the event $(T_c = \infty)$, we have $S_{T_c \wedge n} = S_n \to \infty$, almost surely since $\mu>0$ and hence $\exp(-2\mu S_{T_c \wedge n}) \stackrel{a.s.}{\longrightarrow} \exp(-2\mu S_{T_c}) \mathbbm{1}(T_c < \infty).$ Therefore,
$$ 1 \geq \mathbb{E}_{\mu} \exp(-2\mu S_{T_c})\mathbbm{1}(T_c< \infty) \geq e^{2c\mu} \mathbb{P}_{\mu}(T_c < \infty) \Rightarrow \mathbb{P}_{\mu}(T_c < \infty) \leq e^{-2c\mu}.$$
This implies,
$$ \mathbb{E}_{\mu}(-\inf_{n \geq 0} S_n) = \int_{0}^{\infty} \mathbb{P}_{\mu}(-\inf_{n \geq 0} S_n \geq c) \, dc = \int_{0}^{\infty} \mathbb{P}_{\mu}(T_{c} < \infty) \, dc \leq \int_{0}^{\infty} e^{-2c\mu}\, dc < \infty.$$
Now observe that under $\mathbb{P}_{\mu}$, $\left\{S_n-n\mu, \mathcal{F}_n, n \geq 0\right\}$ is a MG and hence by OST, we have $\mu \mathbb{E}_{\mu}(\tau \wedge n) = \mathbb{E}_{\mu}(S_{\tau \wedge n})$. By \textit{Fatou's Lemma},
$$ \liminf_{n \to \infty} \mathbb{E}_{\mu}(S_{\tau \wedge n} - \inf_{m \geq 0} S_m) \geq \mathbb{E}_{\mu}(S_{\tau} -\inf_{m \geq 0} S_m ).$$
Since, $\mathbb{E}_{\mu}(-\inf_{m \geq 0} S_m) < \infty$, we have
$$ \mu \mathbb{E}_{\mu}(\tau) = \lim_{n \to \infty} \mu \mathbb{E}_{\mu}(\tau \wedge n) \geq \liminf_{n \to \infty} \mathbb{E}_{\mu}(S_{\tau \wedge n}) \geq \mathbb{E}_{\mu}(S_{\tau}).$$
But
\begin{align*}
\mathbb{E}_{\mu}(S_{\tau}) &= \mathbb{E}_{\mu}(S_{\tau}, S_{\tau} \geq b) + \mathbb{E}_{\mu}(S_{\tau}, S_{\tau} \leq -b) \\
& = b\mathbb{P}_{\mu}(S_{\tau} \geq b) -b\mathbb{P}_{\mu}(S_{\tau} \leq -b) + \mathbb{E}_{\mu}(S_{\tau}-b, S_{\tau} \geq b) + \mathbb{E}_{\mu}(S_{\tau}+b, S_{\tau} \leq -b) \\
& > \dfrac{be^{2b\mu}}{1+e^{2b\mu}} - \dfrac{b}{1+e^{2b\mu}} + \mathbb{E}_{\mu}(S_{\tau}-b, S_{\tau} \geq b) + \mathbb{E}_{\mu}(S_{\tau}+b, S_{\tau} \leq -b) \\
& = \dfrac{b(e^{2b\mu}-1)}{1+e^{2b\mu}} + \mathbb{E}_{\mu}(S_{\tau}-b, S_{\tau} \geq b) + \mathbb{E}_{\mu}(S_{\tau}+b, S_{\tau} \leq -b).
\end{align*}
Hence we will be done if we can show that $\mathbb{E}_{\mu}(S_{\tau}-b, S_{\tau} \geq b) + \mathbb{E}_{\mu}(S_{\tau}+b, S_{\tau} \leq -b) \geq 0$. To see this, note that
\begin{align*}
\mathbb{E}_{\mu}(S_{\tau}+b, S_{\tau} \leq -b) = \sum_{n \geq 1} \mathbb{E}_{\mu}(S_{n}+b, S_{n} \leq -b, \tau=n)
&= \sum_{n \geq 1} \mathbb{E}_{-\mu}(e^{2\mu S_n}(S_{n}+b), S_{n} \leq -b, \tau=n) \\
&= \mathbb{E}_{-\mu}(e^{2\mu S_{\tau}}(S_{\tau}+b), S_{\tau} \leq -b) \\
& \stackrel{(ii)}{=}\mathbb{E}_{\mu}(e^{-2\mu S_{\tau}}(-S_{\tau}+b), -S_{\tau} \leq -b) \\
& =-\mathbb{E}_{\mu}(e^{-2\mu S_{\tau}}(S_{\tau}-b), S_{\tau} \geq b) \geq -e^{-2b \mu}\mathbb{E}_{\mu}(S_{\tau}-b, S_{\tau} \geq b),
\end{align*}
where $(ii)$ follows from (\ref{dist}). Therefore,
$$ \mathbb{E}_{\mu}(S_{\tau}-b, S_{\tau} \geq b) + \mathbb{E}_{\mu}(S_{\tau}+b, S_{\tau} \leq -b) \geq (1-e^{-2b \mu}) \mathbb{E}_{\mu}(S_{\tau}-b, S_{\tau} \geq b) \geq 0,$$
which completes the proof. |
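Both inequalities are easy to probe by simulation. The sketch below is not part of the solution; the values of $\mu$, $b$ and the number of replications are arbitrary.

```python
# Simulation of the stopped random walk: estimates of P(S_tau <= -b) and E(tau),
# compared with the bounds 1/(1+e^{2 mu b}) and (b/mu)(e^{2 mu b}-1)/(e^{2 mu b}+1).
import numpy as np

rng = np.random.default_rng(0)
mu, b, reps = 0.3, 1.5, 100_000

hits_low, total_steps = 0, 0
for _ in range(reps):
    S, steps = 0.0, 0
    while abs(S) < b:
        S += mu + rng.standard_normal()   # one N(mu, 1) increment
        steps += 1
    total_steps += steps
    hits_low += (S <= -b)

e2 = np.exp(2 * mu * b)
print("P(S_tau <= -b) estimate:", hits_low / reps, "  upper bound:", 1 / (1 + e2))
print("E(tau) estimate:        ", total_steps / reps, "  lower bound:", (b / mu) * (e2 - 1) / (e2 + 1))
```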
2000-q4 | The random variables \( Y_1, Y_2, \cdots \) defined in the same probability space \( (\Omega, \mathcal{F}, P) \) are uniformly integrable and such that \( Y_n \to Y_\infty \) almost surely. Explain which one of the following statements holds for every \( \sigma \)-algebra \( \mathcal{G} \subseteq \mathcal{F} \). Prove this statement and provide a counterexample for the other statement:
(a) \( E(Y_n|\mathcal{G}) \to E(Y_\infty|\mathcal{G}) \) almost surely.
(b) \( E(Y_n|\mathcal{G}) \to E(Y_\infty|\mathcal{G}) \) in \( L^1 \). | \begin{enumerate}[label=(\alph*)] \item This statement is not necessarily true. As a counterexample, define $U_0, U_1, \ldots \stackrel{iid}{\sim} \text{Uniform}([0,1])$ on some probability space $(\Omega, \mathcal{F},\mathbb{P})$. For all $n \geq 1$, define $Y_n = n \mathbbm{1}(U_0 \leq n^{-1}) \mathbbm{1}(U_n \geq 1-n^{-1}).$ Since $U_0 >0$ almost surely, we have $Y_n \stackrel{a.s.}{\longrightarrow} 0=Y_{\infty}$. Moreover,
$$ \sup_{n \geq 1} \mathbb{E}(Y_n^2) = \sup_{n \geq 1} n^2 \mathbb{P}(U_0 \leq n^{-1}) \mathbb{P}(U_n \geq 1-n^{-1}) =1.$$ Being $L^2$-bounded, the sequence $\left\{Y_n : n \geq 1\right\}$ is also uniformly integrable. Now consider the $\sigma$-algebra $\mathcal{G} := \sigma(U_i : i \geq 1).$ Then
$\mathbb{E}(Y_n \mid \mathcal{G}) = n \mathbbm{1}(U_n \geq 1-n^{-1}) \mathbb{P}(U_0 \leq n^{-1} \mid \mathcal{G}) = \mathbbm{1}(U_n \geq 1-n^{-1})$, for all $n \geq 1$ almost surely, whereas $\mathbb{E}(Y_{\infty} \mid \mathcal{G})=0$, almost surely. Since, $U_n$'s are independent and $\sum_{n \geq 1} \mathbb{P}(U_n \geq 1-n^{-1}) = \sum_{n \geq 1} n^{-1} = \infty $, we apply \textit{Borel-Cantelli Lemma II} and conclude that $\mathbb{P}\left(U_n \geq 1-n^{-1}, \; \text{i.o.} \right)=1$. This implies that $\limsup_{n \to \infty} \mathbb{E}(Y_n \mid \mathcal{G}) =1,$ almost surely. In other words, $\mathbb{E}(Y_n \mid \mathcal{G})$ does not converge to $\mathbb{E}(Y_{\infty} \mid \mathcal{G})$ with probability $1$.
\item This statement is true. We have $Y_n$ converging almost surely to $Y_{\infty}$ and $\left\{Y_n \mid n \geq 1\right\}$ is uniformly integrable. Apply \textit{Vitali's Theorem} and conclude that $Y_n \stackrel{L^1}{\longrightarrow} Y_{\infty}.$ Therefore,
$$ \mathbb{E}\big| \mathbb{E}(Y_n \mid \mathcal{G}) - \mathbb{E}(Y_{\infty} \mid \mathcal{G})\big| = \mathbb{E} \big| \mathbb{E}(Y_n -Y_{\infty} \mid \mathcal{G})\big| \leq \mathbb{E} \left[ \mathbb{E}(|Y_n-Y_{\infty}| \mid \mathcal{G})\right] = \mathbb{E}|Y_n - Y_{\infty}| \longrightarrow 0.$$ Therefore, $ \mathbb{E}(Y_n \mid \mathcal{G}) \stackrel{L^1}{\longrightarrow} \mathbb{E}(Y_{\infty} \mid \mathcal{G}).$
\end{enumerate} |
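The mechanism behind the counterexample in (a) is that, by Borel-Cantelli II, the events $(U_n \geq 1-n^{-1})$ occur infinitely often. A tiny illustration (not part of the solution): the number of such events up to time $N$ grows like the harmonic number, roughly $\log N + \gamma$.

```python
# Count the events {U_n >= 1 - 1/n} up to time N; the count grows like log N + gamma,
# in line with the Borel-Cantelli argument that E(Y_n | G) = 1(U_n >= 1 - 1/n) equals 1 i.o.
import numpy as np

rng = np.random.default_rng(0)
for N in (10**3, 10**5, 10**7):
    n = np.arange(1, N + 1)
    hits = rng.random(N) >= 1 - 1 / n
    print(N, int(hits.sum()), "  expected about", np.log(N) + 0.5772)
```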
2000-q5 | Let \( B_t = (B_{1,t}, B_{2,t}) \) be a pair of independent Brownian motions (zero drift, unit variance) starting from the point \( B_0 = (x_1, x_2) \), where \( x_1, x_2 > 0 \). Let \( \tau \) denote the first time that either \( B_{1,t} = 0 \) or \( B_{2,t} = 0 \). Find explicitly the probability density function of \( B_\tau \). | We have $\mathbf{B}_t = (B_{1,t},B_{2,t})$ to be a two-dimensional standard BM (with independent components) and starting from $\mathbf{B}_0 = (B_{1,0},B_{2,0})=(x_1,x_2)$ with $x_1,x_2>0$. For $i=1,2$, let $\tau_i := \inf \left\{ t \geq 0 \mid B_{i,t} =0\right\} = \inf \left\{ t \geq 0 \mid B_{i,t} \leq 0\right\}$, almost surely. Then $\tau = \tau_1 \wedge \tau_2$ and we are to find the density of $\mathbf{B}_{\tau}$. Note that the support of $\mathbf{B}_{\tau}$ is contained in $\mathcal{X} := (\left\{0\right\} \times \mathbb{R}_{>0}) \cup (\mathbb{R}_{>0} \times \left\{0\right\} ).$ The $\sigma$-algebra on $\mathcal{X}$ will be
$$ \mathcal{G} = \left\{ (\left\{0\right\} \times A_2) \cup (A_1 \times \left\{0\right\} ) \mid A_1, A_2 \in \mathcal{B}_{(0, \infty)}\right\} = \left\{ A \cap \mathcal{X} \mid A \in \mathcal{B}_{\mathbb{R}^2}\right\} = \text{Borel $\sigma$-algebra on } \mathcal{X}.$$
The measure on $\mathcal{X}$ which will be the reference measure for the density is
$$ \mu \left( (\left\{0\right\} \times A_2) \cup (A_1 \times \left\{0\right\} )\right) = \text{Leb}_{\mathbb{R}}(A_1) + \text{Leb}_{\mathbb{R}}(A_2), \;\; \forall \; A_1, A_2 \in \mathcal{B}_{(0, \infty)}.$$
From now on $W$ will denote a generic standard BM starting from $0$ with its first passage time process being denoted by $\left\{T_b : b \geq 0\right\}$. Note that
$$ \left( \left\{B_{1,t}\right\}_{t \geq 0}, \tau_1 \right) \stackrel{d}{=} \left( \left\{x_1 -W_t\right\}_{t \geq 0}, T_{x_1} \right), \;\, \left( \left\{B_{2,t}\right\}_{t \geq 0}, \tau_2 \right) \stackrel{d}{=} \left( \left\{x_2 -W_t\right\}_{t \geq 0}, T_{x_2} \right).$$
Fix any $y_1>0$. Then using independence of the processes $B_1$ and $B_2$, we get
\begin{align*}
\mathbb{P}(B_{1,\tau} \geq y_1, B_{2,\tau} =0) = \mathbb{P}(\tau_2 < \tau_1,B_{1,\tau_2} \geq y_1 ) &= \int_{0}^{\infty} \mathbb{P}(B_{1,t} > y_1, \tau_1 > t) f_{\tau_2}(t)\, dt \\
&= \int_{0}^{\infty} \mathbb{P}(W_t < x_1-y_1, T_{x_1} > t) f_{T_{x_2}}(t)\, dt. \end{align*}
Using \cite[Exercise 10.1.12]{dembo}, we get
\begin{align*}
\mathbb{P}(W_t < x_1-y_1, T_{x_1} > t) \stackrel{(i)}{=} \mathbb{P}(W_t < x_1-y_1, T_{x_1} \geq t) &= \mathbb{P}(W_t < x_1-y_1) - \mathbb{P}(W_t < x_1-y_1, T_{x_1} < t) \\
&= \mathbb{P}(W_t < x_1-y_1) - \mathbb{P}(W_t >x_1+y_1), \end{align*}
where in ${(i)}$ we have used the fact that $T_{x_1}$ has a density with respect to the Lebesgue measure. Recall that $T_b$ has density
$$ f_{T_b}(t) = \dfrac{b}{\sqrt{2\pi t^3}} \exp(-b^2/2t) \mathbbm{1}(t >0).$$
Therefore,
\begin{align*}
\mathbb{P}(B_{1,\tau} \geq y_1, B_{2,\tau} =0) &= \int_{0}^{\infty} \left[ \Phi \left( \dfrac{x_1-y_1}{\sqrt{t}}\right) - \bar{\Phi} \left( \dfrac{x_1+y_1}{\sqrt{t}}\right)\right] \dfrac{x_2}{\sqrt{2\pi t^3}} \exp(-x_2^2/2t) \, dt \\
&= \int_{0}^{\infty} \int_{y_1}^{\infty} \dfrac{1}{\sqrt{t}}\left[ \phi \left( \dfrac{x_1-u}{\sqrt{t}}\right) - \phi \left( \dfrac{x_1+u}{\sqrt{t}}\right)\right] \dfrac{x_2}{\sqrt{2\pi t^3}} \exp(-x_2^2/2t) \, du\, dt \\
&= \int_{y_1}^{\infty} \left[ \int_{0}^{\infty} \dfrac{x_2}{t^2}\phi \left( \dfrac{x_2}{\sqrt{t}}\right)\left( \phi \left( \dfrac{x_1-u}{\sqrt{t}}\right) - \phi \left( \dfrac{x_1+u}{\sqrt{t}}\right)\right) \, dt \right] \, du, \end{align*}
since $-\partial_u\left[\Phi\left(\tfrac{x_1-u}{\sqrt{t}}\right) - \bar{\Phi}\left(\tfrac{x_1+u}{\sqrt{t}}\right)\right] = \tfrac{1}{\sqrt{t}}\left[\phi\left(\tfrac{x_1-u}{\sqrt{t}}\right) - \phi\left(\tfrac{x_1+u}{\sqrt{t}}\right)\right] \geq 0$ for $u, x_1 >0$.
Note that for any $a,b$ with at least one of them non-zero, we have
\begin{align*}
\int_{0}^{\infty} \dfrac{1}{t^2} \phi \left( \dfrac{a}{\sqrt{t}}\right) \phi \left( \dfrac{b}{\sqrt{t}}\right) \, dt= \dfrac{1}{2\pi} \int_{0}^{\infty} \dfrac{1}{t^2} \exp \left( -\dfrac{a^2+b^2}{2t}\right) \, dt = \dfrac{1}{2\pi} \int_{0}^{\infty} \exp \left( -(a^2+b^2)v/2\right) \, dv = \dfrac{1}{\pi (a^2+b^2)}. \end{align*}
Plugging-in this formula, we get
\begin{align*}
\mathbb{P}(B_{1,\tau} \geq y_1, B_{2,\tau} =0) = \int_{y_1}^{\infty} \dfrac{1}{\pi} \left[ \dfrac{x_2}{x_2^2 + (u-x_1)^2} - \dfrac{x_2}{x_2^2 + (u+x_1)^2}\right] \, du. \end{align*}
Similarly, for any $y_2 >0$,
$$\mathbb{P}(B_{1,\tau} = 0, B_{2,\tau} \geq y_2) = \int_{y_2}^{\infty} \dfrac{1}{\pi} \left[ \dfrac{x_1}{x_1^2 + (v-x_2)^2} - \dfrac{x_1}{x_1^2 + (v+x_2)^2}\right] \, dv.$$ Hence, the random vector $\mathbf{B}_{\tau}$ has density $g$ with respect to $\mu$, where
$$ g(u,v) = \begin{cases}
\dfrac{1}{\pi} \left( \dfrac{x_2}{x_2^2 + (u-x_1)^2} - \dfrac{x_2}{x_2^2 + (u+x_1)^2}\right) = \text{Cauchy}(u,x_1,x_2) - \text{Cauchy}(u,-x_1,x_2) , & \text{ if } u >0, v =0,\\
\dfrac{1}{\pi} \left( \dfrac{x_1}{x_1^2 + (v-x_2)^2} - \dfrac{x_1}{x_1^2 + (v+x_2)^2}\right) = \text{Cauchy}(v,x_2,x_1) - \text{Cauchy}(v,-x_2,x_1), & \text{ if } u=0, v>0,
\end{cases}$$
where $\text{Cauchy}(z,\nu, \sigma)$ denotes the density of Cauchy distribution with location parameter $\nu$ and scale parameter $\sigma$ at the point $z$. |
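As a consistency check (not part of the solution), the density $g$ puts total mass $\int_0^{\infty} g(u,0)\, du = \tfrac{2}{\pi}\arctan(x_1/x_2)$ on the horizontal axis, and this must agree with $\mathbb{P}(\tau_2<\tau_1)$; the latter can be simulated exactly using the representation $T_x \stackrel{d}{=} x^2/Z^2$ of Brownian first-passage times, with $Z \sim N(0,1)$. The values of $x_1, x_2$ below are arbitrary.

```python
# Check that the mass of the exit density on the u-axis equals P(tau_2 < tau_1).
import numpy as np

rng = np.random.default_rng(0)
x1, x2, n = 1.0, 2.0, 1_000_000

tau1 = x1**2 / rng.standard_normal(n) ** 2   # first passage time of B_1 to 0 (start x1)
tau2 = x2**2 / rng.standard_normal(n) ** 2   # first passage time of B_2 to 0 (start x2)
print("simulated P(tau_2 < tau_1):", np.mean(tau2 < tau1))
print("(2/pi) arctan(x1/x2)      :", (2 / np.pi) * np.arctan(x1 / x2))
```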
2001-q1 | The government of a country of n people has a surplus of $300n. Thus it could rebate exactly $300 to each person in the country. However, it gives rebates of $300 by drawing n balls at random with replacement from an urn containing exactly n balls, each of which contains the social security number of one person. As a result some people may get no rebate while others get several $300 rebates. Assume that n is very large (a) What percentage of people get no rebate. Justify your answer. (b) What percentage of people in the population get at least $1200 in rebates? | Lets write this problem in abstract notations. Let $J_{n,i}$ be the winner of the $i$-th draw, when the population size is $n$. We have
$$ J_{n,1}, \ldots, J_{n,n} \stackrel{iid}{\sim} \text{Uniform}([n]),$$
where $[n]$ denotes the set $\left\{1, \, \ldots, \, n \right\}$. Let $X_{n,k}$ be the number of times the $k$-th person has won the draw, i.e. $X_{n,k} = \sum_{i=1}^n \mathbbm{1}(J_{n,i}=k) \sim \text{Binomial}(n, 1/n).$
Take any $0<s,t \leq 1$ and $k \neq l$. Then
\begin{align*}
\mathbb{E}\left[s^{X_{n,k}}t^{X_{n,l}} \right] = \prod_{i=1}^n \mathbb{E}\left[s^{\mathbbm{1}(J_{n,i}=k)}t^{\mathbbm{1}(J_{n,i}=l)} \right] &= \left[ \left(1-\dfrac{2}{n}\right) + (s+t)\dfrac{1}{n}\right]^n \\
&= \left[ 1 + \dfrac{s+t-2}{n}\right]^n \longrightarrow \exp(s+t-2) = \mathbb{E}\left[ s^{\text{Poi}(1)}\right]\mathbb{E}\left[ t^{\text{Poi}(1)}\right],\end{align*}
where we used that, for $k \neq l$, the events $(J_{n,i}=k)$ and $(J_{n,i}=l)$ are disjoint, so that $\mathbb{E}\left[s^{\mathbbm{1}(J_{n,i}=k)}t^{\mathbbm{1}(J_{n,i}=l)} \right] = 1-\tfrac{2}{n}+\tfrac{s+t}{n}$.
Therefore,
$$ (X_{n,k}, X_{n,l}) \stackrel{d}{\longrightarrow} \text{Poi}(1) \otimes \text{Poi}(1), \; \text{as } n \to \infty.$$
Since the variables are discrete valued, this implies that probability mass functions converge which implies convergence of total variation distance to $0$ and hence
$$ \mathbb{P}(X_{n,k}\in A, X_{n,l} \in B) \longrightarrow \mathbb{P}(\text{Poi}(1) \in A)\mathbb{P}(\text{Poi}(1) \in B),$$
for any $A,B \subseteq \mathbb{Z}$.
\begin{enumerate}[label=(\alph*)]
\item We have to find the asymptotics of $Y_n :=\dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}(X_{n,k}=0)$, as $n \to \infty$. Since the variables $\left\{X_{n,k} : 1 \leq k \leq n\right\}$ forms an exchangeable collection, we have
$$ \mathbb{E}(Y_n) = \mathbb{E} \left[\dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}(X_{n,k}=0) \right] = \mathbb{P}(X_{n,1}=0) \longrightarrow \mathbb{P}(\text{Poi}(1)=0)=e^{-1},$$
whereas,
$$ \mathbb{E}(Y_n^2)= \dfrac{1}{n} \mathbb{P}(X_{n,1}=0)+ \dfrac{n(n-1)}{n^2} \mathbb{P}(X_{n,1}=X_{n,2}=0) \to \mathbb{P}(\text{Poi}(1) \otimes \text{Poi}(1) = (0,0)) = e^{-2}. $$
Hence, $\operatorname{Var}(Y_n) \to 0$ and this yields
$$Y_n = \dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}(X_{n,k}=0) \stackrel{p}{\longrightarrow} e^{-1}.$$
\item We have to find the asymptotics of $Z_n :=\dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}(X_{n,k} \geq 4)$, as $n \to \infty$. Since the variables $\left\{X_{n,k} : 1 \leq k \leq n\right\}$ forms an exchangeable collection, we have
$$ \mathbb{E}(Z_n) = \mathbb{E} \left[\dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}(X_{n,k}\geq 4) \right] = \mathbb{P}(X_{n,1} \geq 4) \longrightarrow \mathbb{P}(\text{Poi}(1) \geq 4)=:q,$$
whereas,
$$ \mathbb{E}(Z_n^2)= \dfrac{1}{n} \mathbb{P}(X_{n,1} \geq 4)+ \dfrac{n(n-1)}{n^2} \mathbb{P}(X_{n,1},X_{n,2} \geq 4) \to \mathbb{P}(\text{Poi}(1) \otimes \text{Poi}(1) \in [4, \infty)^2) = q^2. $$
Hence, $\operatorname{Var}(Z_n) \to 0$ and this yields
$$Z_n = \dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}(X_{n,k}\geq 4) \stackrel{p}{\longrightarrow} q = 1- \exp(-1)\sum_{j=0}^3 \dfrac{1}{j!}.$$
\end{enumerate} |
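The two limits can also be checked by simulating the rebate scheme directly (not part of the solution): for large $n$ about $e^{-1} \approx 36.8\%$ of the people should get no rebate and about $1-e^{-1}(1+1+\tfrac{1}{2}+\tfrac{1}{6}) \approx 1.9\%$ should get at least four rebates.

```python
# Simulate n draws with replacement among n people and count rebates per person.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
counts = np.bincount(rng.integers(0, n, size=n), minlength=n)   # rebates per person

print("no rebate      :", np.mean(counts == 0), " vs e^{-1} =", np.exp(-1))
print("at least $1200 :", np.mean(counts >= 4),
      " vs 1 - e^{-1}(1+1+1/2+1/6) =", 1 - np.exp(-1) * (1 + 1 + 1 / 2 + 1 / 6))
```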
2001-q2 | For each x > 0, let M(x) be a real-valued random variable, and set M(0) = 0. Assume that the random function M(x) is monotone non-decreasing on [0,∞). Define T(y) = inf{z ≥ 0 : M(z) ≥ y}. Suppose that e^{-x/T(y)} converges in law to an Exponential(λ) random variable when y -> ∞. (a) Find non-random a(x) and b(x) > 0 such that (M(x) - a(x))/b(x) converges in law for x -> ∞ to a non-degenerate random variable. (b) Provide the distribution function of the limit random variable. | We have $e^{-y}T(y) \stackrel{d}{\longrightarrow} \text{Exponential}(\lambda)$. By \textit{Polya'a Theorem}, we know this convergence to be uniform and hence for any sequence $\left\{(x_n,y_n)\right\}_{n \geq 1}$ such that $x_n \to x$ ad $y_n \uparrow \infty$, we have
$$ \mathbb{P}(e^{-y_n}T(y_n) \leq x_n) \longrightarrow \mathbb{P}(\text{Exponential}(\lambda) \leq x) = (1-\exp(- \lambda x))\mathbbm{1}(x \geq 0).$$
Suppose that $a(\cdot), b(\cdot)$ are two non-random functions such that $tb(x)+a(x)$ blows up to $\infty$ as $x \uparrow \infty$, for all $t \in \mathbb{R}$. Then using monotonicity of the process $M$, we get the following.
\begin{align*}
\mathbb{P} \left( \dfrac{M(x)-a(x)}{b(x)} \geq t\right) = \mathbb{P}\left( M(x) \geq a(x)+tb(x)\right) &= \mathbb{P} \left(T(a(x)+tb(x)) \leq x \right) \\
&= \mathbb{P} \left[ \exp(-a(x)-tb(x))T(a(x)+tb(x)) \leq x\exp(-a(x)-tb(x)) \right].
\end{align*}
Clearly we will be done if we can choose such $a,b$ with the further condition that $\log x -a(x)-tb(x) \to h(t)$ as $x \to \infty$, for all $t \in \mathbb{R}$, where $h$ is some real-valued function on $\mathbb{R}$. An obvious choice is $a(x)=\log x$ and $b \equiv 1.$ Then for any sequence $x_n$ going to $\infty$ we have
$$ \mathbb{P}(M(x_n)-\log x_n \geq t) = \mathbb{P} \left[ \exp(-\log x_n -t)T(\log x_n+t) \leq x_n\exp(-\log x_n-t) \right] \to 1- \exp(-\lambda\exp(-t)).$$
In other words, $ M(x) - \log x \stackrel{d}{\longrightarrow} G_{\lambda}$, as $x \to \infty$. The distribution function $G_{\lambda}$ has the form $G_{\lambda}(t) = \exp(-\lambda\exp(-t)),$ for all $t.$ |
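For illustration only (this concrete choice of $M$ is an assumption, not part of the problem): take a rate-$1$ Poisson process on $[0,\infty)$ with i.i.d. $\text{Exp}(1)$ marks and let $M(x)$ be the largest mark observed up to time $x$. Then $T(y)$ is exponential with rate $e^{-y}$, so $e^{-y}T(y) \sim \text{Exponential}(1)$ exactly, and the result above predicts $\mathbb{P}(M(x)-\log x < t) \to \exp(-e^{-t})$. A short simulation sketch:

```python
# Hypothetical instance: M(x) = running maximum of Exp(1) marks of a rate-1 Poisson
# process on [0, x].  Here lambda = 1, and M(x) - log x should be approximately Gumbel.
import numpy as np

rng = np.random.default_rng(0)
x, reps = 200.0, 50_000

K = rng.poisson(x, size=reps)            # number of marked points in [0, x]
M = np.zeros(reps)
for i, k in enumerate(K):
    if k > 0:
        M[i] = rng.exponential(size=k).max()

for t in (-1.0, 0.0, 1.0, 2.0):
    print(t, np.mean(M - np.log(x) < t), np.exp(-np.exp(-t)))
```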
2001-q3 | Let Z denote the set of integers and w = {w_i : i ∈ Z} a sequence of independent and identically distributed random variables, such that P(w_i = 2/3) = P(w_i = 3/4) = 1/2. Fixing w, let S_n be the random walk on Z such that S_0 = 0 and for any x ∈ Z, P_w(S_{n+1} = x + 1|S_n = x) = 1 - P_w(S_{n+1} = x - 1|S_n = x) = w_x. Let T_x = min{k ≥ 0 : S_k = x} denote the time of first visit of x = 1, 2, 3, ... by this random walk. Denote by E_w(T_x) the expected value of T_x per fixed w and by E(T_x) its expectation with respect to the law of w. (a) Compute E(T_1). (b) Show that almost surely, x^{-1}(T_x - E_w(T_x)) -> 0 when x -> ∞. (c) Show that almost surely, x^{-1}T_x -> E(T_1) when x -> ∞. | We shall assume that $\mathbb{P}(w_i=p_1)=\mathbb{P}(w_i=p_2)=1/2$, with $p_2 >p_1>1/2$. Here $p_2=3/4, p_1=2/3.$
\begin{enumerate}[label=(\alph*)]
\item From now on $\mathbb{E}^{x}$ will denote expectation conditional on the event that $(S_0=x)$, i.e., the chain has started from $x$. Also let $X_{n+1} := S_{n+1}-S_n$, for all $n \geq 0$.
The first order of business is to prove that $\mathbb{E}^x_{w}(T_y) < \infty$, almost surely, for all $y >x$; $x,y \in \mathbb{Z}$. Let $B_1, B_2 , \ldots$ be i.i.d. with $\mathbb{P}(B_1=1)=p_1/p_2= 1-\mathbb{P}(B_1=-1)$, independent of everything else. Define a new collection of random variables $\left\{Y_n\right\}_{n \geq 1}$ as follows.
$$ Y_n := \begin{cases}
X_n, & \text{if } w_{S_{n-1}} = p_1, \\
\min{(X_n, B_n)} & \text{if } w_{S_{n-1}} = p_2.
\end{cases}$$
Note that $Y_n$ is $\pm 1$ valued and
\begin{align*}
\mathbb{P}_w(Y_n=1 \mid Y_1, \ldots, Y_{n-1}, S_0, \ldots, S_{n-1}, X_n) &= \mathbb{P}_w (Y_n=1 \mid S_{n-1}, X_n) \\
&= \mathbb{P}_w (Y_n=1, w_{S_{n-1}}=p_1 \mid S_{n-1}, X_n) + \mathbb{P}_w (Y_n=1, w_{S_{n-1}}=p_2 \mid S_{n-1}, X_n) \\
&= \mathbbm{1}(X_n=1, w_{S_{n-1}}=p_1 ) + \mathbb{P}_w(B_n=1 \mid S_{n-1}, X_n)\mathbbm{1}(w_{S_{n-1}}=p_2, X_n=1) \\
&= \mathbbm{1}(X_n=1, w_{S_{n-1}}=p_1) + \dfrac{p_1}{p_2}\mathbbm{1}(w_{S_{n-1}}=p_2, X_n=1).
\end{align*}
Taking expectation with respect to $S_0, \ldots, S_{n-1}, X_n$, we get
\begin{align*}
\mathbb{P}_w(Y_n=1 \mid Y_1, \ldots, Y_{n-1}) &= \mathbb{P}_w(X_n=1, w_{S_{n-1}}=p_1) + \dfrac{p_1}{p_2}\mathbb{P}_w(w_{S_{n-1}}=p_2, X_n=1) \\
&= p_1\mathbb{P}_w(w_{S_{n-1}}=p_1) + \dfrac{p_1}{p_2}p_2\mathbb{P}_w(w_{S_{n-1}}=p_2)= p_1.
\end{align*}
Therefore, $Y_1,Y_2, \ldots \stackrel{iid}{\sim}$ with $\mathbb{P}(Y_1=1)=p_1= 1-\mathbb{P}(Y_1=-1)$. Consider the random walk with these steps, i.e., $S_n^{\prime} = S_0 + \sum_{i=1}^n Y_i$ and $T_y^{\prime}$ be corresponding hitting times. By construction, $Y_n \leq X_n$ for all $n$ and hence $S_n \leq S^{\prime}_n$ for all $n$. This implies that $T_y \leq T^{\prime}_y$, a.s.[$\mathbb{P}^x_w$], for all $y >x$. This yields,
$$ \mathbb{E}_w^x(T_y) \leq \mathbb{E}_w^x(T_y^{\prime}) = \dfrac{y-x}{2p_1-1} < \infty,$$
since $\left\{S_n^{\prime}\right\}_{n \geq 0}$ is a SRW starting from $S_0^{\prime}=S_0$ with probability of going in the right direction being $p_1$.
Notice that $\mathbb{E}_w^x(T_y)$ only depends on $\left\{w_i : i \leq y-1\right\}$ and hence is independent of $\left\{w_i : i \geq y\right\}$.
Now to compute $\mathbb{E}_w^{0}(T_1)$,
note that under $\mathbb{P}^0_w$,
$$ T_1 = \begin{cases} 1, & \text{ with probability } w_0, \\ 1+T, &\text{ with probability } 1-w_0, \end{cases} $$
where $T$ under $\mathbb{P}_w^{0}$ is a random variable distributed same as $T_1$ under $\mathbb{P}_w^{-1}.$ Since, the random walk can go only one step at a time, it is easy to see that the distribution $T_1$ under $\mathbb{P}_w^{-1}$ is the convolution of distribution of $T_0$ under $\mathbb{P}_w^{-1}$ and distribution of $T_1$ under $\mathbb{P}_w^{0}.$ Hence, using the already proved fact that all expectations encountered below are finite, we have,
$$ \mathbb{E}^0_w(T_1) = w_0 + (1-w_0)\left[ 1+\mathbb{E}_w^{-1}(T_0) + \mathbb{E}_w^{0}(T_1)\right] \Rightarrow \mathbb{E}_w^{0}(T_1) = 1 + \dfrac{1-w_0}{w_0}\left[1+\mathbb{E}_w^{-1}(T_0)\right].$$
Recall the fact that $\mathbb{E}_w^{-1}(T_0) \perp\!\!\!\perp w_0$ and taking expectations on both sides we get,
$$ \mathbb{E}^{0}(T_1) = 1 + \mathbb{E} \left[ \dfrac{1-w_0}{w_0}\right]\left[ 1+ \mathbb{E}^{-1}(T_0)\right].$$
Since, $w_i$'s are i.i.d., from the translation invariance on $\mathbb{Z}$, we can conclude that $ \mathbb{E}^{0}(T_1) = \mathbb{E}^{-1}(T_0)$ and therefore,
$$ \mathbb{E}^0(T_1) = \dfrac{1+\mathbb{E} \left[ \dfrac{1-w_0}{w_0}\right]}{1-\mathbb{E} \left[ \dfrac{1-w_0}{w_0}\right]} = \dfrac{\mathbb{E}(w_0^{-1})}{2-\mathbb{E}(w_0^{-1})} = \dfrac{(1/2)(p_1^{-1}+p_2^{-1})}{2-(1/2)(p_1^{-1}+p_2^{-1})}= \dfrac{p_1^{-1}+p_2^{-1}}{4-p_1^{-1}-p_2^{-1}} = \dfrac{p_1+p_2}{4p_1p_2-p_1-p_2}.$$
Plugging in $p_1=2/3, p_2=3/4$, we get $$\mathbb{E}^0(T_1) = \dfrac{17}{7}.$$
\item We shall prove that for any fixed realization of $\left\{w_i : i \in \mathbb{Z}\right\}$, we have
$$ x^{-1}(T_x - \mathbb{E}^0_{w}(T_x)) \stackrel{a.s.} {\longrightarrow} 0.$$
Let $\tau_{i,i+1} :=T_{i+1} - T_i$, for all $i \geq 0$. By Strong Markov property of the walk (conditional on $w$), we can easily observe that $\left\{\tau_{i,i+1} \mid i \geq 0\right\}$ is an \textit{independent} collection. Hence, by \textit{Kolmogorov SLLN}, it is enough to show that
$$ \sum_{k=0}^{\infty} k^{-2}\operatorname{Var}^0_{w}(\tau_{k,k+1}) < \infty.$$
It is enough to show that $\sup_{k \geq 0} \mathbb{E}_w^{0}(\tau_{k,k+1}^2) < \infty.$
Fix $k \geq 0$. Consider the construction of the auxiliary process $\left\{ S_n^{\prime} : n \geq 0 \right\}$ with $S_0=k=S_0^{\prime}$. It is obvious that $\tau_{k,k+1}$ under $\mathbb{P}_w^0 \stackrel{d}{=} T_{k+1}$ under $\mathbb{P}_w^{k}$. So, $\mathbb{E}_w^{0}(\tau_{k,k+1}^2) = \mathbb{E}_w^{k} (T_{k+1}^2) \leq \mathbb{E}_w^{k}(T_{k+1}^{\prime 2}) = \mathbb{E}_w^{0}(T_1^{\prime 2}).$ The last equality holds as the process $S^{\prime}$ is a simple RW with probability of going to the right is $p_1$. Thus it is enough to show that $\mathbb{E}_w^{0}(T_1^{\prime 2}) < \infty.$
Set $\mu := p_1-(1-p_1) >0, \sigma^2 := \mathbb{E}_w(Y_1-\mu)^2.$ Thus $\left\{S_n^{\prime} - n\mu\right\}_{n \geq 0}$ is a $L^2$-MG with respect to its canonical filtration (conditional on $w$) and compensator process being $\langle S^{\prime}_n-n\mu \rangle_n = n\sigma^2.$ Hence, for any $n \geq 0$, we have
$$ \mathbb{E}_w^0\left[ (S_{T_1^{\prime} \wedge n}^{\prime} - (T_1^{\prime} \wedge n) \mu)^2 - (T_1^{\prime} \wedge n)\sigma^2\right]=0.$$
Since $S_{T_1^{\prime} \wedge n}^{\prime } \leq 1$,
\begin{align*}
\mu^2\mathbb{E}_w^{0}(T_1^{\prime 2} \wedge n^2) = \mathbb{E}_w^0\left[ -S_{T_1^{\prime} \wedge n}^{\prime 2} + 2S_{T_1^{\prime} \wedge n}^{\prime } (T_1^{\prime} \wedge n) \mu + (T_1^{\prime} \wedge n)\sigma^2\right] & \leq \mathbb{E}_w^0\left[ 2 (T_1^{\prime} \wedge n) \mu + (T_1^{\prime} \wedge n)\sigma^2\right] \\
& \leq (2\mu+\sigma^2) \mathbb{E}_w^{0}(T_1^{\prime}) = \dfrac{2\mu+\sigma^2}{2p_1-1}.
\end{align*}
Therefore, taking MCT, we get $\mathbb{E}_w^{0}(T_1^{\prime 2}) \leq \dfrac{2\mu+\sigma^2}{\mu^2(2p_1-1)} < \infty.$ This completes the proof.
\item In light of what we have proved in (b), it is enough to show that $x^{-1}\mathbb{E}_{w}^0(T_x) \stackrel{a.s.}{\longrightarrow} \mathbb{E}^0(T_1)$ as $x \to \infty.$
We have observed that $\mathbb{E}_w^{0}T_1$ only depends on $\left\{w_i : i \leq 0\right\}$ and therefore we can get hold of a measurable function $h : \left( [0,1]^{\infty}, \mathcal{B}_{[0,1]}^{\infty}\right) \to (\mathbb{R}, \mathcal{B}_{\mathbb{R}})$ such that
$$ \mathbb{E}^0_{w}(T_1) = h(w_0, w_{-1},w_{-2}, \ldots).$$
We have also observed in part (b) that $\tau_{k,k+1}$ under $\mathbb{P}_w^0 \stackrel{d}{=} T_{k+1}$ under $\mathbb{P}_w^{k}$. From the translation invariance on $\mathbb{Z}$, we can conclude that
$$ \mathbb{E}_w^0(\tau_{k,k+1}) = \mathbb{E}^k_w(T_{k+1}) = h(w_{k},w_{k-1}, \ldots).$$
Therefore,
$$ \mathbb{E}^0_w(T_x) = \sum_{k=0}^{x-1} \mathbb{E}_w^0(\tau_{k,k+1}) = \sum_{k=0}^{x-1} h(w_{k},w_{k-1}, \ldots).$$
Since the collection $\left\{w_i : i \in \mathbb{Z}\right\}$ is \textit{i.i.d.}, we have the sequence $\left\{h(w_k,w_{k-1}, \ldots) : k \geq 0\right\}$ to be a stationary sequence. Moreover, $\mathbb{E}h(w_0,w_{-1}, \ldots) = \mathbb{E}^0(T_1) < \infty$ as we have shown in (a). Now we can apply \textit{Birkhoff's Ergodic Theorem} to conclude that
$$ x^{-1}\mathbb{E}_w^0(T_x) = \dfrac{1}{x} \sum_{k=0}^{x-1} h(w_{k},w_{k-1}, \ldots),$$
converges almost surely to some integrable random variable $H$.
On the otherhand, for any $x \geq 1$, the \textit{i.i.d.} structure of the collection $\left\{w_i : i \in \mathbb{Z}\right\}$ implies that
$$ (w_{x-1}, w_{x-2}, \ldots) \stackrel{d}{=} (w_0,w_1,\ldots),$$
and hence
$$x^{-1}\mathbb{E}_w^0(T_x) = \dfrac{1}{x} \sum_{k=0}^{x-1} h(w_{k},w_{k-1}, \ldots) \stackrel{d}{=} \dfrac{1}{x} \sum_{l=0}^{x-1} h(w_{l},w_{l+1}, \ldots) \stackrel{a.s.}{\longrightarrow} \mathbb{E}\left[ h(w_0,w_1,\ldots) \mid \mathcal{I}_{\theta}\right],$$
where $\mathcal{I}_{\theta}$ is the invariant $\sigma$-algebra for the shift operator. We know that $\mathcal{I}_{\theta}$ is a sub-$\sigma$-algebra for the tail-$\sigma$-algebra for the collection $\left\{w_i : i \geq 0\right\}$ and hence is trivial, a consequence of the i.i.d. structure and \textit{Kolmogorov $0-1$ law}. Hence,
$$ \mathbb{E}\left[ h(w_0,w_1,\ldots) \mid \mathcal{I}_{\theta}\right] = \mathbb{E}h(w_0,w_1, \ldots) = \mathbb{E}h(w_0,w_{-1}, \ldots) = \mathbb{E}^0(T_1).$$
By uniqueness of limit, we conclude that $H \equiv \mathbb{E}^0(T_1)$ which concludes our proof.
\end{enumerate} |
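A direct simulation (not part of the solution) of the walk in its random environment illustrates part (c): for large $x$, the ratio $T_x/x$ should be close to $\mathbb{E}^0(T_1)=17/7 \approx 2.43$. The value of $x$ and the seeds are arbitrary.

```python
# Walk in the random environment w_i in {2/3, 3/4}: estimate T_x / x for large x.
import random

def simulate_T(x, seed=0):
    env_rng, walk_rng = random.Random(seed), random.Random(seed + 1)
    w = {}                      # environment values, generated lazily, i.i.d. over sites
    pos, steps = 0, 0
    while pos < x:
        if pos not in w:
            w[pos] = env_rng.choice([2 / 3, 3 / 4])
        pos += 1 if walk_rng.random() < w[pos] else -1   # step right with probability w_pos
        steps += 1
    return steps

x = 20_000
print(simulate_T(x) / x, "vs E(T_1) = 17/7 =", 17 / 7)
```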
2001-q4 | Let p(i, j) be the transition probability of an irreducible, positive recurrent Markov chain with stationary probability distribution π. Assume that p(i, i) = 0 for all i. Suppose p_i are constants satisfying 0 < p_i < 1. Let a second transition probability be defined by q(i, j) = p_i p(i, j) for i ≠ j and q(i, i) = (1 - p_i). Show that this second transition probability defines an irreducible Markov chain. When is this chain positive recurrent? Find its stationary probability distribution in terms of π. | Below, $\mathcal{I}$ will denote the state space of the chains.
Since $p_i >0$ for all $i$, we have for all $i \neq j$, $q(i,j)=p_ip(i,j) >0 \iff p(i,j)>0$. Hence, state $j$ is accessible from state $i$ in the second chain if and only if the same happens in the first chain. Since, the first chain is irreducible, this implies that the second chain is also irreducible.
We know that for an irreducible chain, existence of a stationary probability measure is equivalent to the positive recurrence if the chain (see \cite[Corollary 6.2.42]{dembo}). So we focus on finding out conditions under which a stationary probability measure exists.
We have $\pi$ to be a stationary probability measure for the first chain. Hence, $\sum_{i} \pi_ip(i,j)=\pi_j$ for all $j$. Since, $p(i,i)=0$ for all $i$, we have $\sum_{i \neq j} \pi_ip(i,j)=\pi_j$, for all $j$. Let $\nu$ be a stationary probability measure for the second chain. Then for all $j$,
$$ \sum_{i} \nu_iq(i,j) =\nu_j \iff \nu_j = \nu_jq(j,j) + \sum_{i \neq j} \nu_iq(i,j) = \nu_j(1-p_j) + \sum_{i \neq j} \nu_ip_ip(i,j) \iff p_j \nu_j = \sum_{i \neq j} \nu_ip_ip(i,j)= \sum_{i} \nu_ip_ip(i,j). $$
Since $p_j >0$ for all $j$ and $\sum_{j}\nu_j=1$, the above equation implies that $\left\{p_j\nu_j \mid j \in \mathcal{I}\right\}$ is an invariant measure for the first chain. Since the first chain is positive recurrent, its invariant measure is unique upto constant multiple. Hence, $p_j\nu_j =C\pi_j$, for all $j$ and for some $C >0$. Therefore,
$$ 1= \sum_{j} \nu_j = \sum_{j} C\dfrac{\pi_j}{p_j} \Rightarrow \sum_{j} \dfrac{\pi_j}{p_j} < \infty.$$
On the other hand, if $\sum_{j} \pi_jp_j^{-1}< \infty$, we define
\begin{equation}{\label{stat}}
\nu_j := \dfrac{p_j^{-1}\pi_j}{\sum_{i} p_i^{-1}\pi_i}, \;\forall \; j,
\end{equation}
will clearly be a stationary probability measure for the second chain. Hence the second chain is positive recurrent if and only if $\sum_{j} \pi_jp_j^{-1}< \infty$, and in that case its unique stationary probability distribution is given by (\ref{stat}). |
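The formula for the stationary distribution is easy to verify numerically on a small example (not part of the solution); the $4$-state transition matrix and the vector of constants $p_i$ below are arbitrary choices with $p(i,i)=0$.

```python
# Numerical check on a small chain that nu_j ∝ pi_j / p_j is stationary for Q.
import numpy as np

rng = np.random.default_rng(0)
m = 4
P = rng.random((m, m))
np.fill_diagonal(P, 0.0)
P /= P.sum(axis=1, keepdims=True)          # irreducible transition matrix with p(i,i) = 0

p = np.array([0.3, 0.9, 0.5, 0.7])         # the constants p_i in (0, 1)
Q = p[:, None] * P                          # q(i,j) = p_i p(i,j) for i != j
np.fill_diagonal(Q, 1 - p)                  # q(i,i) = 1 - p_i

# stationary distribution of P via the left eigenvector for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

nu = (pi / p) / (pi / p).sum()
print(np.allclose(pi @ P, pi), np.allclose(nu @ Q, nu))   # both should be True
```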
2001-q5 | Let {B_t, F_t, t ≥ 0} be d-dimensional Brownian motion (with independent components and B_0 = 0). Let G be a probability measure on R^d and define Z_t = ∫_{R^d} exp(θ'B_t - \frac{1}{2} ||θ||^2) dG(θ), t ≥ 0, where ' denotes transpose and ||θ||^2 = θ'θ. (i) Show that {Z_t, F_t, t ≥ 0} is a martingale. (ii) Show that Z_t converges almost surely as t -> ∞ and evaluate the limit. (iii) Let T be a stopping time. Show that E[Z_T | T < ∞] ≤ 1. (iv) Evaluate Z_t in the case where G is a multivariate normal distribution with mean 0 and covariate matrix σ^2I. What does (ii) imply for this special case? | First note that for any $\mathbf{x} \in \mathbb{R}^d$ and $t>0$,
$$ \int_{\mathbb{R}^d} \exp \left( \theta^{\prime}\mathbf{x} - \dfrac{t}{2} ||\theta||_2^2\right)\, dG(\theta) \leq \int_{\mathbb{R}^d} \exp \left( ||\theta||_2||\mathbf{x}||_2 - \dfrac{t}{2} ||\theta||_2^2\right)\, dG(\theta) \leq \sup_{\theta \in \mathbb{R}^d} \exp \left( ||\theta||_2||\mathbf{x}||_2 - \dfrac{t}{2} ||\theta||_2^2\right) < \infty.$$
\begin{enumerate}[label=(\alph*)]
\item By the above argument, we can say that $Z_t(\omega)$ is a finite real number for all $\omega$ and $t \geq 0$. Next note that
$ (\theta, \omega) \mapsto (\theta, B_t(\omega))$ is a measurable function from $(\mathbb{R}^d \times \Omega, \mathcal{B}_{\mathbb{R}^d} \otimes \mathcal{F}_t)$ to $(\mathbb{R}^d \times \mathbb{R}^d, \mathcal{B}_{\mathbb{R}^d} \otimes \mathcal{B}_{\mathbb{R}^d}).$ On the otherhand, $(\theta, \mathbf{x}) \mapsto \exp \left( \theta^{\prime}\mathbf{x} - \dfrac{t}{2} ||\theta||_2^2\right)$ is jointly continuous hence measurable function from $(\mathbb{R}^d \times \mathbb{R}^d, \mathcal{B}_{\mathbb{R}^d} \otimes \mathcal{B}_{\mathbb{R}^d})$ to $(\mathbb{R}, \mathcal{B}_{\mathbb{R}})$. Therefore, $(\theta, \omega) \mapsto \exp \left( \theta^{\prime}B_t(\omega) - \dfrac{t}{2} ||\theta||_2^2\right)$ is measurable from $(\mathbb{R}^d \times \Omega, \mathcal{B}_{\mathbb{R}^d} \otimes \mathcal{F}_t)$ to $(\mathbb{R}, \mathcal{B}_{\mathbb{R}})$. Hence, by \textit{Fubini's Theorem}, we have
$$ \omega \mapsto \int_{\mathbb{R}^d} \exp \left( \theta^{\prime}B_t(\omega) - \dfrac{t}{2} ||\theta||_2^2\right)\, dG(\theta) \; \text{is } \mathcal{F}_t-\text{measurable}.$$
In other words, $Z_t \in m \mathcal{F}_t$. Note that $\theta^{\prime}B_t \sim N(0, t||\theta||_2^2)$ and therefore, by \textit{Fubini's Theorem},
\begin{align*}
\mathbb{E}|Z_t|= \mathbb{E}(Z_t) = \mathbb{E} \left[\int_{\mathbb{R}^d} \exp \left( \theta^{\prime}B_t - \dfrac{t}{2} ||\theta||_2^2\right)\, dG(\theta) \right] = \int_{\mathbb{R}^d} \mathbb{E}\exp \left( \theta^{\prime}B_t - \dfrac{t}{2} ||\theta||_2^2\right)\, dG(\theta) = \int_{\mathbb{R}^d} \, dG(\theta) =1.
\end{align*}
Thus $Z_t$ is integrable. Take $0 \leq s \leq t.$ Then using independent stationary increment property of BM, we get
\begin{align*}
\mathbb{E}(Z_t \mid \mathcal{F}_s) &= \mathbb{E} \left[\int_{\mathbb{R}^d}\exp \left( \theta^{\prime}B_s \right) \exp \left( \theta^{\prime}(B_t-B_s)- \dfrac{t}{2} ||\theta||_2^2 \right)\, dG(\theta) \Big \rvert \mathcal{F}_s\right] \\
&= \int_{\mathbb{R}^d} \exp \left( \theta^{\prime}B_s - \dfrac{t}{2} ||\theta||_2^2 \right) \mathbb{E} \left[ \exp \left( \theta^{\prime}(B_t-B_s)\right) \mid \mathcal{F}_s\right] \, dG(\theta) \\
&= \int_{\mathbb{R}^d} \exp \left(\theta^{\prime}B_s+ \dfrac{t-s}{2}||\theta||_2^2- \dfrac{t}{2} ||\theta||_2^2 \right)\, dG(\theta) = Z_s.
\end{align*}
This establishes that $\left\{Z_t, \mathcal{F}_t, t \geq 0\right\}$ is a MG.
\item Notice that by DCT,
$$ \mathbf{x} \mapsto \int_{\mathbb{R}^d} \exp \left( \theta^{\prime}\mathbf{x} - \dfrac{t}{2} ||\theta||_2^2\right)\, dG(\theta)$$
is a continuous function and therefore, the process $Z$ has continuous sample paths by sample-path-continuity of BM. Thus $\left\{Z_t, \mathcal{F}_t, t \geq 0\right\}$ is a continuous non-negative sup-MG and hence by \textit{Doob's Convergence Theorem}, it converges almost surely to some $Z_{\infty}$ as $t \to \infty$.
To find out the $Z_{\infty}$, consider the following random variables. Let $A_n := \left\{\theta \in \mathbb{R}^d \mid ||\theta||_2 =0 \text{ or } ||\theta||_2 \geq 1/n\right\}.$ Define
$$ Z_{t,n} := \int_{\mathbb{R}^d} \exp \left( \theta^{\prime}B_t - \dfrac{t}{2} ||\theta||_2^2\right)\mathbbm{1}_{A_n}\, dG(\theta), \; \forall \; t \geq 0, \; n \geq 1.$$
Exactly similar calculation to the one already done in part (a), shows that $\left\{Z_{t,n}, \mathcal{F}_t, t \geq 0\right\}$ is a continuous non-negative MG and hence converges almost surely to some $Z_{\infty, n}$ as $t \to \infty$. By \textit{Fatou's Lemma},
\begin{align*}
\mathbb{E}|Z_{\infty,n}-Z_{\infty}| \leq \liminf_{t \to \infty} \mathbb{E}|Z_{t,n}-Z_t| &= \liminf_{t \to \infty} \mathbb{E} \left[ \int_{\mathbb{R}^d} \exp \left( \theta^{\prime}B_t - \dfrac{t}{2} ||\theta||_2^2\right)\mathbbm{1}_{A_n^c}\, dG(\theta) \right] \\
&= \liminf_{t \to \infty} \int_{\mathbb{R}^d} \mathbb{E} \exp \left( \theta^{\prime}B_t - \dfrac{t}{2} ||\theta||_2^2\right)\mathbbm{1}_{A_n^c}\, dG(\theta) =G(A_n^c).
\end{align*}
Since, $A_n^c \downarrow \emptyset$ as $n \uparrow \infty$ and $G$ is a probability measure, we have $\mathbb{E}|Z_{\infty,n}-Z_{\infty}|=o(1).$ So let us focus on computing $Z_{\infty,n}$.
Fix $n \geq 1$. By \textit{The Law of Iterated Logarithm}, we have $t^{-1}B_t$ converges almost surely to $\mathbf{0}$ as $t \to \infty$. Take $\omega$ in that almost sure event. Then clearly,
$$ \exp \left( \theta^{\prime}B_t(\omega) - \dfrac{t}{2} ||\theta||_2^2\right) \longrightarrow \mathbbm{1}(\theta = \mathbf{0}),$$
as $t \to \infty$. Get $0<t_0(\omega) < \infty$ such that $M(\omega) := \sup_{t \geq t_0(\omega)}t^{-1}||B_t(\omega)||_2 < \dfrac{1}{2n}.$ Then
\begin{align*} \sup_{t \geq t_0(\omega)} \exp \left( \theta^{\prime}B_t(\omega) - \dfrac{t}{2} ||\theta||_2^2\right) \leq \sup_{t \geq t_0(\omega)} \exp \left( ||\theta||_2||B_t(\omega)||_2 - \dfrac{t}{2} ||\theta||_2^2\right) &\leq \sup_{t \geq t_0(\omega)}\exp \left( t||\theta||_2 \left( \dfrac{1}{2n} - \dfrac{||\theta||_2}{2}\right)\right),
\end{align*}
and hence
$$ \sup_{t \geq t_0(\omega)} \exp \left( \theta^{\prime}B_t(\omega) - \dfrac{t}{2} ||\theta||_2^2\right) \mathbbm{1}(\theta \in A_n) \leq \mathbbm{1}(\theta \in A_n) \leq 1.$$
Therefore, by \textit{Bounded Convergence Theorem},
$$Z_{t,n}(\omega)= \int_{\mathbb{R}^d} \exp \left( \theta^{\prime}B_t(\omega) - \dfrac{t}{2} ||\theta||_2^2\right)\mathbbm{1}_{A_n}\, dG(\theta) \longrightarrow \int_{\mathbb{R}^d} \mathbbm{1}(\theta = \mathbf{0})\mathbbm{1}_{A_n}\, dG(\theta) = G(\left\{\mathbf{0}\right\}).$$
This shows that $Z_{\infty,n} = G(\left\{\mathbf{0}\right\})$ almost surely for all $n \geq 1$ and therefore $Z_{\infty} = G(\left\{\mathbf{0}\right\})$ almost surely. In other words, $Z_t \stackrel{a.s.}{\longrightarrow} G(\left\{\mathbf{0}\right\})$, as $t \to \infty$.
\item Since, $\left\{Z_t, \mathcal{F}_t, t \geq 0\right\}$ is a continuous MG, we use \textit{Optional Stopping Theorem} to get that $\mathbb{E}(Z_{T \wedge t}) = \mathbb{E}(Z_0) =1.$ Again path-continuity of $Z$ guarantees that $Z_{T \wedge t}$ converges almost surely to $Z_T$ on the event $(T< \infty)$ as $t \to \infty$. Employ \textit{Fatou's Lemma} and conclude that
$$ \mathbb{E}(Z_T\mathbbm{1}(T < \infty)) \leq \liminf_{t \to \infty} \mathbb{E} \left[ Z_{T \wedge t}\mathbbm{1}(T < \infty)\right] \leq \liminf_{t \to \infty} \mathbb{E} \left[ Z_{T \wedge t}\right] =1.$$
\item $G$ is $N_d(\mathbf{0}, \sigma^2I_d)$. Hence,
\begin{align*}
Z_t = \int_{\mathbb{R}^d} \exp \left( \theta^{\prime}B_t - \dfrac{t}{2} ||\theta||_2^2\right) &= \int_{\mathbb{R}^d} \prod_{i=1}^d \left[\exp \left( \theta_iB_t^{(i)} - \dfrac{t}{2} \theta_i^2 \right) \dfrac{1}{\sigma} \phi \left( \dfrac{\theta_i}{\sigma}\right) \right]\, \prod_{i=1}^d d\theta_i \\
&= \prod_{i=1}^d \left[\int_{\mathbb{R}} \exp \left( \theta_iB_t^{(i)} - \dfrac{t}{2} \theta_i^2 \right) \dfrac{1}{\sigma} \phi \left( \dfrac{\theta_i}{\sigma}\right)\, d\theta_i \right].
\end{align*}
For any $a \in \mathbb{R}$,
\begin{align*}
\int_{\mathbb{R}} \exp \left( \theta a - \dfrac{t}{2} \theta^2 \right) \dfrac{1}{\sigma} \phi \left( \dfrac{\theta}{\sigma}\right)\, d\theta = \int_{\mathbb{R}} \exp \left( \sigma ax - \dfrac{t \sigma^2}{2} x^2 \right) \phi(x)\, dx &= \int_{\mathbb{R}} \exp(a\sigma x) \dfrac{1}{\sqrt{2\pi}} \exp\left( -\dfrac{(t\sigma^2+1)x^2}{2}\right)\, dx.
\end{align*}
Set $\nu^2 = (t\sigma^2+1)^{-1}.$ and take $Y \sim N(0,\nu^2)$. Then
\begin{align*}
\int_{\mathbb{R}} \exp(a\sigma x) \dfrac{1}{\sqrt{2\pi}} \exp\left( -\dfrac{(t\sigma^2+1)x^2}{2}\right)\, dx = \int_{\mathbb{R}} \exp(a\sigma x) \dfrac{1}{\sqrt{2\pi}} \exp\left( -\dfrac{x^2}{2\nu^2}\right)\, dx = \nu \mathbb{E}\exp(a\sigma Y) = \nu \exp (a^2\sigma^2\nu^2/2).
\end{align*}
Hence,
$$ Z_t = \prod_{i=1}^d \left[(t\sigma^2+1)^{-1/2} \exp \left( \dfrac{(B_t^{(i)})^2\sigma^2}{2(t\sigma^2+1)} \right) \right] = (t\sigma^2 +1)^{-d/2} \exp \left( \dfrac{\sigma^2||B_t||_2^2}{2(t\sigma^2+1)} \right). $$
For this case $(b)$ implies that almost surely as $t \to \infty$,
\begin{align*}
Z_t \longrightarrow 0 \iff -\dfrac{d}{2}\log(t\sigma^2 +1) + \dfrac{\sigma^2||B_t||_2^2}{2(t\sigma^2+1)} \longrightarrow - \infty &\iff -d\log t + \dfrac{\sigma^2||B_t||_2^2}{t\sigma^2+1} \longrightarrow - \infty \\
& \iff -d\log t + t^{-1}||B_t||_2^2 \stackrel{a.s.}{\longrightarrow} -\infty.
\end{align*}
In other words, $t^{-1}||B_t||_2^2 - d\log t \stackrel{a.s.}{\longrightarrow} - \infty$, as $t \to \infty$. |
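As a purely numerical sanity check of the closed form derived in part (c) (this is not part of the argument above, and the choices $d=3$, $\sigma=1$, the grid, and the sample sizes are arbitrary), one can simulate a Brownian path and observe both that $Z_t$ collapses towards $0$ along the path and that $\mathbb{E}(Z_t)=1$ at a fixed small time. A minimal Python sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 3, 1.0

# One path of d-dimensional Brownian motion on a grid: Z_t should tend to 0.
dt, n_steps = 0.01, 200000
B = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n_steps, d)), axis=0)
t = dt * np.arange(1, n_steps + 1)
Z = (t*sigma**2 + 1)**(-d/2) * np.exp(sigma**2*(B**2).sum(axis=1)/(2*(t*sigma**2 + 1)))
print(Z[-1])          # very small, consistent with Z_t -> 0 a.s.

# Martingale property E[Z_t] = 1 at a small fixed t (finite variance there).
t0, m = 0.2, 400000
B0 = rng.normal(scale=np.sqrt(t0), size=(m, d))
Z0 = (t0*sigma**2 + 1)**(-d/2)*np.exp(sigma**2*(B0**2).sum(axis=1)/(2*(t0*sigma**2 + 1)))
print(Z0.mean())      # close to 1
\end{verbatim}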
2001-q6 | Let $\{X_n, \mathcal{F}_n, n \geq 1\}$ be a martingale difference sequence such that $E(X_n^2 \mid \mathcal{F}_{n-1}) = 1$ and $\sup_{n \geq 2} E(|X_n|^p \mid \mathcal{F}_{n-1}) < \infty$ a.s. for some $p > 2$. Suppose that $u_n$ is $\mathcal{F}_{n-1}$-measurable and that $n^{-1}\sum_{i=1}^n u_i^2$ converges in probability to $c$ for some constant $c > 0$. Let $S_k = \sum_{i=1}^k u_i X_i$. Fix $a > 0$. Show that
$$ \lim_{n \to \infty} P\left( \max_{1 \leq k \leq n} \left(S_k - \frac{k}{n} S_n\right) \geq a\sqrt{n} \right) $$
exists and evaluate the limit. | Since we are not given any integrability condition on $u$, we need to perform some truncation to apply MG results. Let $v_{n,k}=u_k \mathbbm{1}(|u_k| \leq n).$ Note that
\begin{align*}
\mathbb{P} \left( \bigcup_{k=1}^n \left(\sum_{i=1}^k u_iX_i \neq \sum_{i=1}^k v_{n,i}X_i \right) \right) \leq \mathbb{P}(\exists \; 1 \leq k \leq n, \; u_k \neq v_{n,k}) = \mathbb{P}(\exists \; 1 \leq k \leq n, \; |u_k| > n) = \mathbb{P}\left( \max_{1 \leq k \leq n} |u_k| >n\right),
\end{align*}
where the last term goes to $0$ by Lemma~\ref{lemma} and the fact that $n^{-1}\sum_{k=1}^n u_k^2 \stackrel{p}{\longrightarrow} c$, since this implies $n^{-1}\max_{1 \leq k \leq n} u_k^2 \stackrel{p}{\longrightarrow} 0$. Thus
$$ \dfrac{1}{\sqrt{n}} \max_{1 \leq k \leq n} \left( S_k - \dfrac{k}{n}S_n \right) =\dfrac{1}{\sqrt{n}} \max_{1 \leq k \leq n} \left(\sum_{i=1}^k v_{n,i}X_i - \dfrac{k}{n}\sum_{i=1}^n v_{n,i}X_i \right) + o_p(1).$$
Note that $v_{n,i}X_i$ is square-integrable for all $n,i \geq 1$. Clearly, $v_{n,i}X_i \in m\mathcal{F}_i$ and $\mathbb{E}(v_{n,i}X_i \mid \mathcal{F}_{i-1}) = v_{n,i} \mathbb{E}(X_i \mid \mathcal{F}_{i-1})=0$. Hence $\left\{M_{n,k}:=\sum_{i=1}^k v_{n,i}X_i, \mathcal{F}_k, k \geq 0 \right\}$ is an $L^2$-Martingale with $M_{n,0} :=0$. Its predictable compensator process is as follows.
$$ \langle M_n \rangle_l = \sum_{k=1}^l \mathbb{E}(v_{n,k}^2X_k^2 \mid \mathcal{F}_{k-1}) = \sum_{k=1}^l v_{n,k}^2 \mathbb{E}(X_k^2 \mid \mathcal{F}_{k-1}) = \sum_{k=1}^l v_{n,k}^2, \; \forall \; n,l \geq 1.$$
For any $q>0$ and any sequence of natural numbers $\left\{l_n \right\}$ such that $l_n =O(n)$, we have
$$ \mathbb{P} \left(\sum_{k=1}^{l_n} |v_{n,k}|^q \neq \sum_{k=1}^{l_n} |u_k|^q \right) \leq \mathbb{P}\left( \max_{k=1}^{l_n} |u_k| >n \right) \longrightarrow 0, $$
which follows from Lemma~\ref{lemma}. Thus for any sequence of positive real numbers $\left\{c_n \right\}$ we have
\begin{equation}{\label{error}}
\dfrac{1}{c_n} \sum_{k=1}^{l_n} |v_{n,k}|^q = \dfrac{1}{c_n} \sum_{k=1}^{l_n} |u_k|^q + o_p(1).
\end{equation}
In particular, for any $0 \leq t \leq 1$,
$$ n^{-1}\langle M_n \rangle_{\lfloor nt \rfloor} = \dfrac{1}{n}\sum_{k=1}^{\lfloor nt \rfloor} v_{n,k}^2 = \dfrac{1}{n}\sum_{k=1}^{\lfloor nt \rfloor} u_k^2 + o_p(1) = \dfrac{\lfloor nt \rfloor}{n}\dfrac{1}{\lfloor nt \rfloor}\sum_{k=1}^{\lfloor nt \rfloor} u_k^2 + o_p(1) \stackrel{p}{\longrightarrow} ct.$$
Also for any $\varepsilon >0$,
\begin{align*}
\dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[(M_{n,k}-M_{n,k-1})^2; |M_{n,k}-M_{n,k-1}| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] &= \dfrac{1}{n} \sum_{k=1}^n v_{n,k}^2\mathbb{E} \left[X_k^2; |v_{n,k}X_k| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] \\
& \leq \dfrac{1}{n} \sum_{k=1}^n v_{n,k}^2\mathbb{E} \left[|X_k|^2 (\varepsilon \sqrt{n} |v_{n,k}X_k|^{-1})^{2-p} \mid \mathcal{F}_{k-1} \right] \\
& \leq \varepsilon^{2-p}\left( \sup_{k \geq 1} \mathbb{E}(|X_k|^p \mid \mathcal{F}_{k-1})\right) \left( \dfrac{1}{n^{p/2}}\sum_{k=1}^n |v_{n,k}|^p \right) \\
&= \varepsilon^{2-p}\left( \sup_{k \geq 1} \mathbb{E}(|X_k|^p \mid \mathcal{F}_{k-1})\right) \left( \dfrac{1}{n^{p/2}}\sum_{k=1}^n |u_k|^p +o_p(1) \right).
\end{align*}
From Lemma~\ref{lemma} and the fact that $n^{-1}\sum_{k=1}^n u_k^2$ converges to $c$ in probability, we can conclude that $n^{-p/2} \sum_{k=1}^n |u_k|^p$ converges in probability to $0$. Then we have basically proved the required conditions needed to apply \textit{Martingale CLT}. Define,
$$ \widehat{S}_n(t) := \dfrac{1}{\sqrt{cn}} \left[M_{n,\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(M_{n,\lfloor nt \rfloor +1}-M_{n,\lfloor nt \rfloor}) \right]=\dfrac{1}{\sqrt{cn}} \left[M_{n,\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)v_{n,\lfloor nt \rfloor +1}X_{\lfloor nt \rfloor +1} \right], $$
for all $0 \leq t \leq 1$. We then have $\widehat{S}_n \stackrel{d}{\longrightarrow} W$ as $n \to \infty$ on the space $C([0,1])$.
Observe that the function $t \mapsto \widehat{S}_n(t)$ on $[0,1]$ is the piecewise linear interpolation using the points
$$ \left\{ \left( \dfrac{k}{n}, \dfrac{1}{\sqrt{cn}} M_{n,k} \right) \Bigg \rvert k=0, \ldots, n\right\}.$$
Hence,
$$ \dfrac{1}{\sqrt{n}}\max_{1 \leq k \leq n} \left(S_k - \dfrac{k}{n}S_n \right) = \dfrac{1}{\sqrt{n}} \max_{1 \leq k \leq n} \left(M_{n,k} - \dfrac{k}{n}M_{n,n}\right) + o_p(1) = \sqrt{c} \sup_{0 \leq t \leq 1} \left(\widehat{S}_n(t)-t\widehat{S}_n(1) \right) +o_p(1).
$$
Since, $\mathbf{x} \mapsto \sup_{0 \leq t \leq 1} (x(t)-tx(1))$ is a continuous function on $C([0,1])$, we have
$$ \sup_{0 \leq t \leq 1} (\widehat{S}_n(t)-t\widehat{S}_n(1)) \stackrel{d}{\longrightarrow} \sup_{0 \leq t \leq 1} (W(t)-tW(1)) = \sup_{0 \leq t \leq 1} B(t),$$
where $\left\{B(t) : 0 \leq t \leq 1\right\}$ is the standard Brownian Bridge. We know that
$$ \mathbb{P}(\sup_{0 \leq t \leq 1} B(t) \geq b) = \exp(-2b^2),$$
for all $b \geq 0$ (see \cite[Exercise 10.2.14]{dembo}). Therefore, using \textit{Slutsky's Theorem} and noting that $\sup_{0 \leq t \leq 1} B(t)$ has a continuous distribution function, we can conclude that
$$ \mathbb{P} \left( \max_{1 \leq k \leq n} \left(S_k - \dfrac{k}{n}S_n \right) \geq a\sqrt{n} \right) \longrightarrow \mathbb{P} \left( \sqrt{c}\sup_{0 \leq t \leq 1} B(t) \geq a\right) = \exp \left(-\dfrac{2a^2}{c}\right),$$
for all $a \geq 0.$ |
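A quick Monte Carlo illustration of this limit (not part of the solution; the concrete choice $X_i$ i.i.d. standard normal and $u_i \equiv 1$, hence $c=1$, is just one admissible instance of the assumptions):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, reps, a = 1000, 4000, 1.0
X = rng.normal(size=(reps, n))      # martingale differences; u_i = 1 gives c = 1
S = np.cumsum(X, axis=1)
k = np.arange(1, n + 1)
stat = (S - np.outer(S[:, -1], k / n)).max(axis=1) / np.sqrt(n)
print((stat >= a).mean(), np.exp(-2 * a**2))   # both close to 0.135
\end{verbatim}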
2002-q1 | Let $X_1, X_2, \ldots, X_k$ be independent standard normal random variables and $\gamma_1(t), \ldots, \gamma_k(t)$ infinitely differentiable functions of a real variable defined on a closed, bounded interval, such that $\sum_{i=1}^k \gamma_i^2(t) = 1$ for all $t$. Let $Z(t) = \sum_{i=1}^k \gamma_i(t)X_i$. Let $\dot{Z}(t), \ddot{Z}(t)$, etc. denote first, second, etc. derivatives of $Z(t)$ with respect to $t$. (a) Show that $\text{Cov}(Z(t), \dot{Z}(t)) = 0$. (b) Evaluate $E(Z(t) \mid \ddot{Z}(t))$ in terms of $\ddot{Z}(t)$ and expressions of the form $\sum_{i=1}^k (\gamma_i(t))^a(\frac{d^m \gamma_i(t)}{dt^m})^b$, for some $a, b, m$ values. | (a) \begin{align*} \operatorname{Cov} \left( Z(t), \dot{Z}(t)\right) = \operatorname{Cov} \left( \sum_{i=1}^k \gamma_i(t)X_i,\sum_{i=1}^k \dot{\gamma}_i(t)X_i \right) = \sum_{i=1}^k \gamma_i(t)\dot{\gamma}_i(t) = \dfrac{1}{2} \left( \sum_{i=1}^k \gamma_i^2(t)\right)^{\prime} = 0. \end{align*}

(b) Note that $(Z(t),\ddot{Z}(t)) =\left( \sum_{i=1}^k \gamma_i(t)X_i, \sum_{i=1}^k \ddot{\gamma}_i(t)X_i \right)$ is jointly normal, with $$ \left( {\begin{array}{c} Z(t) \\ \ddot{Z}(t) \\ \end{array}}\right) \sim N_2 \left( \left( {\begin{array}{c} 0 \\ 0 \\ \end{array}}\right), \left( {\begin{array}{cc} \sum_{i=1}^k \gamma^2_i(t) & \sum_{i=1}^k \gamma_i(t)\ddot{\gamma}_i(t) \\ \sum_{i=1}^k \gamma_i(t)\ddot{\gamma}_i(t) & \sum_{i=1}^k \ddot{\gamma}^2_i(t) \end{array}}\right) \right).$$ Therefore, \begin{align*} \mathbb{E} \left[Z(t) \mid \ddot{Z}(t) \right] &= \mathbb{E}(Z(t)) + \dfrac{\operatorname{Cov} \left( Z(t), \ddot{Z}(t)\right)}{\operatorname{Var} \left(\ddot{Z}(t)\right)} \left( \ddot{Z}(t)- \mathbb{E}(\ddot{Z}(t))\right) \\ & = \dfrac{\sum_{i=1}^k \gamma_i(t)\ddot{\gamma}_i(t)}{\sum_{i=1}^k \ddot{\gamma}^2_i(t)} \ddot{Z}(t). \end{align*} Let \( \Gamma_{a,b,m}(t) := \sum_{i=1}^k (\gamma_i(t))^a \left(d^m\gamma_i(t)/dt^m \right)^b. \) Then, $$ \mathbb{E} \left[Z(t) \mid \ddot{Z}(t) \right] = \dfrac{\ddot{Z}(t)\Gamma_{1,1,2}(t)}{\Gamma_{0,2,2}(t)}.$$ |
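As a concrete illustration of these formulae (an added example, not part of the original solution), take $k=2$, $\gamma_1(t)=\cos t$, $\gamma_2(t)=\sin t$, so that $\gamma_1^2+\gamma_2^2=1$. Then $Z(t)=X_1\cos t + X_2\sin t$, $\dot{Z}(t)=-X_1\sin t+X_2\cos t$ and $\ddot{Z}(t)=-Z(t)$. Indeed $\operatorname{Cov}(Z(t),\dot{Z}(t)) = -\cos t \sin t + \sin t \cos t = 0$, while $\Gamma_{1,1,2}(t)=\sum_i \gamma_i(t)\ddot{\gamma}_i(t) = -1$ and $\Gamma_{0,2,2}(t)=\sum_i \ddot{\gamma}_i^2(t)=1$, so the formula gives $\mathbb{E}[Z(t)\mid \ddot{Z}(t)] = -\ddot{Z}(t) = Z(t)$, as it must, since here $Z(t)$ is a deterministic function of $\ddot{Z}(t)$.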
2002-q2 | Let $U_1, U_2, \ldots$ be independent and uniformly distributed on $[0, c]$, where $c > 0$. Let $S_n = \sum_{k=1}^n \prod_{i=1}^k U_i$. (a) Give a sufficient condition to guarantee that $\lim_{n\rightarrow\infty} S_n$ is finite with probability one. Explain. (b) Find the least upper bound of the values of $c$ for which $\lim_{n\rightarrow\infty} S_n$ is finite with probability one. (c) For $c$ equal to the least upper bound in (b), is $\lim_{n\rightarrow\infty} S_n$ finite or infinite with probability one? (d) Let $W_n = \sum_{k=1}^n \prod_{i=k}^n U_i$. For the values of $c$ you identified in (a), does $W_n$ converge with probability one? Does it converge in law? Explain your answers. | We have $U_1, U_2, \ldots \stackrel{iid}{\sim} \text{Uniform}(0,c)$, with $c>0$. $S_n := \sum_{k=1}^n \prod_{i=1}^k U_i.$

(a) We employ the \textit{Root test} for convergence of series. This gives, $$ \limsup_{k \to \infty} \left( \prod_{i=1}^k U_i\right)^{1/k} < 1, \; \text{almost surely} \Rightarrow \sum_{k \geq 1} \prod_{i=1}^k U_i < \infty, \; \text{almost surely}.$$ Note that the LHS condition above is equivalent to $$ \limsup_{k \to \infty} \dfrac{1}{k} \sum_{i=1}^k \log U_i < 0, \; \text{almost surely}.$$ Using SLLN and the fact that for $V \sim \text{Uniform}(0,1)$, $$ \log U_1 \stackrel{d}{=} \log (cV) = \log c + \log V \sim \log c - \text{Exponential}(1),$$ we obtain $$ \limsup_{k \to \infty} \dfrac{1}{k} \sum_{i=1}^k \log U_i = \mathbb{E} \log U_1 = \log c - 1, \; \text{almost surely}. $$ Hence the sufficient condition is $\log c < 1$ or equivalently, $c < e$.

(b) Let $A:= \left\{ c >0 \mid \lim_{n \to \infty} S_n \text{ is finite w.p. } 1\right\}$. We have already shown $A \supseteq (0,e)$. We shall now show that $(e, \infty) \subseteq A^c$ and thus $\sup A = e$.

So take $c>e$. We shall use the Root test again. $$ \limsup_{k \to \infty} \left( \prod_{i=1}^k U_i\right)^{1/k} > 1, \; \text{almost surely} \Rightarrow \sum_{k \geq 1} \prod_{i=1}^k U_i = \infty, \; \text{almost surely},$$ and the LHS condition above is equivalent to $$ \log c -1 = \mathbb{E} \log U_1 = \limsup_{k \to \infty} \dfrac{1}{k} \sum_{i=1}^k \log U_i > 0, \; \text{almost surely}.$$ The last condition is the same as $c >e$, which concludes the proof.

(c) Consider the case $c=e$. Then $\left\{\log U_i \right\}_{i \geq 1}$ is a collection of non-degenerate \textit{i.i.d.} variables with mean $0$. Hence, from the Theory of SRW, we have $$ \limsup_{k \to \infty} \sum_{i=1}^k \log U_i = \infty, \; \text{ almost surely},$$ or equivalently, $$ \limsup_{k \to \infty} \prod_{i=1}^k U_i = \infty, \; \text{ almost surely}.$$ Therefore, $S_n \uparrow \infty$ with probability $1$.

(d) Note that for all $n \geq 2$, $$ W_n = U_n + U_nU_{n-1} + \cdots + U_n \cdots U_1 = U_n + U_n \left[ U_{n-1} + U_{n-1}U_{n-2} + \cdots + U_{n-1}\cdots U_1\right] = U_n(1+W_{n-1}).$$ If $W_n$ converges to $W_{\infty}$ w.p. $1$ where $W_{\infty}<\infty$ a.s., we should have $$ U_n \stackrel{a.s.}{\longrightarrow} \dfrac{W_{\infty}}{1+W_{\infty}}, \; \text{as } n \to \infty,$$ and hence $U_{n+1} -U_n \stackrel{a.s.}{\longrightarrow} 0$. But $U_{n+1}-U_n \stackrel{d}{=} U_2-U_1 \neq 0$ and hence we get a contradiction.
On the other hand, $\left(U_i \mid 1 \leq i \leq n \right) \stackrel{d}{=} \left(U_{n-i+1} \mid 1 \leq i \leq n \right)$. Thus for $0<c<e$, $$ W_n = \sum_{k=1}^n \prod_{i=k}^n U_i \stackrel{d}{=} \sum_{k=1}^n \prod_{i=k}^n U_{n-i+1} = \sum_{k=1}^n \prod_{j=1}^{n-k+1} U_{j} = \sum_{l=1}^n \prod_{j=1}^{l} U_{j} = S_n \stackrel{a.s.}{\longrightarrow} S_{\infty},$$ and hence $W_n \stackrel{d}{\longrightarrow} S_{\infty}.$ |
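The threshold $c=e$ is also easy to see numerically (an added sanity check, not part of the argument; the sample size is arbitrary): $k^{-1}\sum_{i\leq k}\log U_i$ stabilizes near $\log c - 1$, which is negative exactly when $c<e$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 10**6
for c in (2.0, np.e, 3.0):
    log_terms = np.cumsum(np.log(rng.uniform(0, c, size=n)))  # log of prod_{i<=k} U_i
    print(c, log_terms[-1] / n, np.log(c) - 1)                # SLLN limit log(c) - 1
\end{verbatim}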
2002-q3 | For $n \geq 1$ the random variable $Z_n$ has the discrete arc-sine law, assigning mass $2^{-2n} \binom{2k}{k}\binom{2n-2k}{n-k}$ to $k = 0, 1, \ldots , n$. Let $Z_{\infty}$ be a random variable with probability density function $\frac{1}{\pi\sqrt{x(1-x)}}$ on $[0,1]$. (a) Find the scaling $a_n$ such that $a_nZ_n$ converges in law to $Z_{\infty}$ and prove this convergence. (b) The random variables $X$ and $Y$ are independent, with $X$ having $\text{Gamma}(a,1)$ distribution and $Y$ having $\text{Gamma}(b,1)$ distribution (i.e., the probability density function of $Y$ is $\Gamma(b)^{-1}y^{b-1}e^{-y}$ for $y \geq 0$). Show that $X/(X + Y)$ is independent of $X + Y$. (c) Suppose $U$, $W_{\infty}$ and $V_{\infty}$ are mutually independent $[0,1]$-valued random variables, with $U$ of uniform law, while both $W_{\infty}$ and $V_{\infty}$ have the (arc-sine) probability density function $\frac{1}{\pi\sqrt{x(1-x)}}$. Show that $UW_{\infty} + (1-U)V_{\infty}$ has the same distribution as $U$. | (a) We shall use \textit{Stirling's approximation}. Fix $\varepsilon >0$ and we can say that for large enough $k$, $$ (1-\varepsilon) \sqrt{2\pi} k^{k+1/2} e^{-k} \leq k! \leq (1+\varepsilon) \sqrt{2\pi} k^{k+1/2} e^{-k}. $$ Fix $0 <a<b <1$. Then for large enough $n$, we have $$ {2k \choose k}{2n-2k \choose n-k} = \dfrac{(2k)!(2n-2k)!}{(k!)^2((n-k)!)^2} \in [C_1(\varepsilon),C_2(\varepsilon)] \dfrac{1}{2\pi}\dfrac{(2k)^{2k+1/2}(2n-2k)^{2n-2k+1/2}}{k^{2k+1}(n-k)^{2n-2k+1}},$$ for all $ an \leq k \leq bn$, where $C_1(\varepsilon)=(1-\varepsilon)^2(1+\varepsilon)^{-4}$ and $C_2(\varepsilon)=(1+\varepsilon)^2(1-\varepsilon)^{-4}.$ Hence, setting $a_n = \lceil an \rceil $ and $b_n=\lfloor bn \rfloor$, we have \begin{align*} \limsup_{n \to \infty} \mathbb{P} \left( a \leq \dfrac{Z_n}{n} \leq b \right) &= \limsup_{n \to \infty} \sum_{k=a_n}^{b_n} 2^{-2n} {2k \choose k}{2n-2k \choose n-k} \\ & \leq C_2(\varepsilon) \limsup_{n \to \infty} \sum_{k=a_n}^{b_n} 2^{-2n} \dfrac{1}{2\pi}\dfrac{(2k)^{2k+1/2}(2n-2k)^{2n-2k+1/2}}{k^{2k+1}(n-k)^{2n-2k+1}} \\ & = C_2(\varepsilon) \limsup_{n \to \infty} \sum_{k=a_n}^{b_n} \dfrac{1}{\pi} k^{-1/2}(n-k)^{-1/2} \\ & = C_2(\varepsilon)\dfrac{1}{\pi} \dfrac{1}{n} \limsup_{n \to \infty} \sum_{k=a_n}^{b_n} \left( \dfrac{k}{n}\right)^{-1/2}\left( 1-\dfrac{k}{n}\right)^{-1/2} = C_2(\varepsilon)\dfrac{1}{\pi} \int_{a}^b \dfrac{1}{\sqrt{x(1-x)}} \, dx. \end{align*} Similarly, $$\liminf_{n \to \infty} \mathbb{P} \left( a \leq \dfrac{Z_n}{n} \leq b \right) \geq C_1(\varepsilon)\dfrac{1}{\pi} \int_{a}^b \dfrac{1}{\sqrt{x(1-x)}} \, dx.$$ Taking $\varepsilon \downarrow 0$, we conclude that $$ \mathbb{P} \left( a \leq \dfrac{Z_n}{n} \leq b \right) \rightarrow \int_{a}^b \dfrac{1}{\pi\sqrt{x(1-x)}} \, dx = \mathbb{P}(a \leq Z_{\infty} \leq b).$$ Since this holds true for all $0<a<b<1$, we have $Z_n/n \stackrel{d}{\longrightarrow} Z_{\infty}$; in other words, the scaling $a_n = n^{-1}$ works.

(b) We employ the ``change of variable'' formula to find the joint density of $(X+Y, X/(X+Y))$.
Consider the function $g : \mathbb{R}_{>0}^2 \mapsto \mathbb{R}_{>0} \times (0,1)$ defined as $$ g(x,y) = \left(x+y, \dfrac{x}{x+y}\right), \; \; \forall \; x,y >0.$$ Then, $$ g^{-1}(z,w) = (zw,z(1-w)), \; \forall \;z>0, w \in (0,1); \; J_{g^{-1}} = \bigg \rvert \begin{matrix} w & z \\ 1-w & -z \end{matrix} \bigg \rvert= -zw-z(1-w) = -z, $$ and therefore the joint density of $(Z,W) :=(X+Y, X/(X+Y)) = g(X,Y)$ is \begin{align*} f_{Z,W} (z,w) = f_{X,Y}\left( g^{-1}(z,w)\right)|J_{g^{-1}}(z,w)| &= f_X(zw) f_Y(z(1-w)) z \\ &=\Gamma(a)^{-1}(zw)^{a-1}\exp(-zw)\Gamma(b)^{-1}(z-zw)^{b-1}\exp(-z+zw) z \\ &= \left(\dfrac{z^{a+b-1}\exp(-z)}{\Gamma(a+b)} \right) \left( \dfrac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} w^{a-1}(1-w)^{b-1} \right), \end{align*} for all $z>0$ and $w \in (0,1)$; and is $0$ otherwise. Clearly, $Z \perp\!\!\perp W$ with $Z \sim \text{Gamma}(a+b)$ and $W \sim \text{Beta}(a,b)$.

(c) Take $X_1, X_2, X_3, X_4 \stackrel{iid}{\sim} \text{Gamma}(1/2,1)$. Then by part (b), the following four random variables are mutually independent: $X_1/(X_1+X_2), X_1+X_2, X_3/(X_3+X_4), X_3+X_4$. Set $$ U := \dfrac{X_1+X_2}{X_1+X_2+X_3+X_4}, \; W_{\infty} = \dfrac{X_1}{X_1+X_2}, \; V_{\infty} = \dfrac{X_3}{X_3+X_4}.$$ Then, using the previous mutual independence observation, we conclude that $U, W_{\infty}, V_{\infty}$ are mutually independent. Now note that by part (b), $W_{\infty}, V_{\infty} \sim \text{Beta}(1/2,1/2)$, which is the arc-sine distribution. On the other hand, $X_1+X_2, X_3+X_4$ are independent $\text{Gamma}(1,1)$ variables and hence $U \sim \text{Beta}(1,1)$, which is the Uniform distribution on $[0,1]$. Finally, $$ UW_{\infty} + (1-U)V_{\infty} = \dfrac{X_1+X_3}{X_1+X_2+X_3+X_4} \stackrel{d}{=} \dfrac{X_1+X_2}{X_1+X_2+X_3+X_4} = U.$$ |
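A short simulation check of the identity in part (c) (added; not part of the proof): sampling $W_\infty, V_\infty$ from $\text{Beta}(1/2,1/2)$ and $U$ uniform, the quantiles of $UW_\infty+(1-U)V_\infty$ should match those of the uniform law.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
m = 10**6
U = rng.uniform(size=m)
W = rng.beta(0.5, 0.5, size=m)      # arc-sine law
V = rng.beta(0.5, 0.5, size=m)
T = U * W + (1 - U) * V
print(np.quantile(T, [0.1, 0.25, 0.5, 0.75, 0.9]))   # ~ (0.1, 0.25, 0.5, 0.75, 0.9)
\end{verbatim}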
2002-q4 | Suppose $I_1, I_2, \ldots , I_n, \ldots$ is a stationary and ergodic sequence of $\{0,1\}$-valued random variables, with $E(I_1) = p$. (a) Fix an integer $K \geq 1$. Does $n^{-1} \sum_{i=n}^{Kn} I_i$ converge with probability one for $n \rightarrow \infty$? Explain your answer, providing the value of the limit or an example of no convergence. (b) Suppose $b_1, b_2, \ldots , b_n, \ldots$ is a stationary and ergodic sequence of non-negative random variables, with $P(b_1 > 0) > 0$. Does this imply that with probability one $\liminf_{n\rightarrow\infty}\max_{n\leq i \leq 2n} b_i \geq c$ for some non-random $c > 0$? Prove or provide a counterexample. | (a) Since the sequence $\left\{I_i : i \geq 1\right\}$ is stationary and ergodic, and uniformly bounded, we can apply \textit{Birkhoff's Ergodic Theorem} to conclude that $n^{-1}\sum_{i=1}^n I_i $ converges to $\mathbb{E}(I_1)=p$, almost surely as $n \to \infty$. Therefore, for any $K \in \mathbb{N},$ we have $$ n^{-1}\sum_{i=n}^{Kn}I_i = K (Kn)^{-1}\sum_{i=1}^{Kn}I_i - n^{-1}\sum_{i=1}^n I_i + n^{-1}I_n \stackrel{a.s.} {\longrightarrow} (K-1)p,$$ since $0 \leq I_n/n \leq 1/n$ for all $n \geq 1$.

(b) Since $\mathbb{P}(b_1 >0)>0$, we can get a real constant $c >0$ such that $\mathbb{P}(b_1 \geq c) =\delta > 0$. Since the sequence $\left\{b_i : i \geq 1\right\}$ is stationary and ergodic, with $\mathbb{E} \mathbbm{1}(b_i \geq c) < \infty$, we can apply \textit{Birkhoff's Ergodic Theorem} to conclude that $$ \dfrac{1}{n} \sum_{k=1}^n \mathbbm{1}(b_k \geq c) \stackrel{a.s.}{\longrightarrow} \mathbb{P}(b_1 \geq c)=\delta.$$ Using similar calculation as in (a), conclude that $$ \dfrac{1}{n} \sum_{k=n}^{2n} \mathbbm{1}(b_k \geq c) = \dfrac{1}{n} \sum_{k=1}^{2n} \mathbbm{1}(b_k \geq c) - \dfrac{n-1}{n}\dfrac{1}{n-1} \sum_{k=1}^{n-1} \mathbbm{1}(b_k \geq c) \stackrel{a.s.}{\longrightarrow} 2\delta - \delta = \delta.$$ For any $\omega$ such that $n^{-1}\sum_{k=n}^{2n} \mathbbm{1}(b_k(\omega) \geq c) \to \delta$, we have $\sum_{k=n}^{2n} \mathbbm{1}(b_k(\omega) \geq c) >0$ for large enough $n$ and hence $\max_{n \leq k \leq 2n} b_k(\omega) \geq c$ for all large enough $n$, implying that $\liminf_{n \to \infty} \max_{n \leq k \leq 2n} b_k(\omega) \geq c.$ Therefore, $$ \liminf_{n \to \infty} \max_{n \leq k \leq 2n} b_k \geq c >0, \; \text{almost surely}.$$ |
2002-q5 | Let $\{N_t, t \geq 0\}$ be a Poisson process with intensity $\lambda > 0$. Consider the compound Poisson process $X_t = \sum_{k\leq N_t} Y_k$, where $Y_1, Y_2, \ldots$ are i.i.d. random variables that are independent of $\{N_t, t \geq 0\}$. (a) Let $T$ be a stopping time with respect to the filtration $\mathcal{F}_t$ generated by $\{(X_s, N_s), s \leq t\}$ such that $ET = \mu$. Suppose $Y_1$ has finite mean $\bar{\mu}$. Derive a formula for $E(X_T)$ in terms of $\mu, \bar{\mu}$ and $\lambda$. (b) Suppose $Y_1$ has the Bernoulli distribution $P(Y_1 = 1) = p = 1 - P(Y_1 = 0)$. Show that $\{X_t, t \geq 0\}$ is a Poisson process. (c) Suppose the $Y_i$ are standard normal. Show that $\{X_t, t \geq 0\}$ has the same finite-dimensional distributions as $\{B_{N_t}, t \geq 0\}$, where $\{B_t, t \geq 0\}$ is a Brownian motion (with $B_0 = 0$) independent of $\{N_t, t \geq 0\}$. Hence show that for $a > 0$ and $b > 0$, $P(X_t \geq a + bN_t \text{ for some } t \geq 0) \leq e^{-2ab}$. | We have a Poisson process $\left\{N_t : t \geq 0\right\}$ with intensity $\lambda >0$ and, independent of $N$, a collection of \textit{i.i.d.} random variables $\left\{Y_k : k \geq 1\right\}$. Define $S_n := \sum_{k=1}^n Y_k,$ for all $n \geq 1$ with $S_0 :=0$. Let $Q_n$ be the probability measure induced by $S_n$, i.e. $Q_n(B)=\mathbb{P}(S_n \in B)$, for all $B \in \mathcal{B}_{\mathbb{R}}.$

Define $X_t := S_{N_t} = \sum_{k \leq N_t} Y_k,$ for all $t \geq 0$.

Introduce the following notations. $\mathcal{G}^N_t := \sigma (N_s : 0 \leq s \leq t)$, $\mathcal{H}_n := \sigma (Y_k : 1 \leq k \leq n)$, $\mathcal{G}^X_t := \sigma(X_s : 0 \leq s \leq t)$. Note that $\mathcal{F}_t = \sigma(\mathcal{G}^N_t, \mathcal{G}^X_t).$

(a) We assume that $Y_1$ has finite mean $\widetilde{\mu}$. We know that the process $X$ has stationary independent increments (see \cite[Proposition 9.3.36]{dembo}). So for any $t \geq s \geq 0$, we have $X_t-X_s \perp\!\!\perp \mathcal{G}^X_s.$ On the other hand, for all $B \in \mathcal{B}_{\mathbb{R}}$, $$ \mathbb{P}(X_t-X_s \in B \mid \mathcal{G}_{\infty}^N) = Q_{N(t)-N(s)}(B) \perp\!\!\perp \mathcal{G}^N_s,$$ and hence the distribution of $X_t-X_s$ conditional on $\mathcal{G}^N_s$ does not depend on $\mathcal{G}^N_s$, implying that $X_t-X_s \perp\!\!\perp \mathcal{F}_s.$

Note that for any $t \geq 0$, using independence of $Y_k$'s and $N$ we get \begin{align*} \mathbb{E}|X_t| = \mathbb{E}\Bigg \rvert \sum_{k \geq 1} Y_k \mathbbm{1}(k \leq N_t)\Bigg \rvert \leq \sum_{k \geq 1} \mathbb{E}|Y_k \mathbbm{1}(k \leq N_t)| = \sum_{k \geq 1} \mathbb{E}|Y_k|\mathbb{P}(k \leq N_t) = \mathbb{E}|Y_1| \sum_{k \geq 1} \mathbb{P}(k \leq N_t) = \mathbb{E}|Y_1| \mathbb{E}(N_t) < \infty, \end{align*} and hence \begin{align*} \mathbb{E}X_t = \mathbb{E}\left(\sum_{k \geq 1} Y_k \mathbbm{1}(k \leq N_t)\right) =\sum_{k \geq 1} \mathbb{E}\left(Y_k \mathbbm{1}(k \leq N_t)\right) = \sum_{k \geq 1} \mathbb{E}(Y_1)\mathbb{P}(k \leq N_t) = \widetilde{\mu} \sum_{k \geq 1} \mathbb{P}(k \leq N_t) = \widetilde{\mu} \mathbb{E}(N_t) = \widetilde{\mu}\lambda t. \end{align*} The stationary increment property guarantees that $\mathbb{E}(X_t-X_s)=\mathbb{E}(X_{t-s}) = \widetilde{\mu}\lambda (t-s)$ for all $t \geq s \geq 0$. Therefore we can conclude that $\left\{X_t-\widetilde{\mu}\lambda t, \mathcal{F}_t, t \geq 0\right\}$ is a right-continuous MG, since $N$ is right-continuous.
Apply OST and conclude that $\mathbb{E}(X_{T \wedge t}) = \widetilde{\mu}\lambda \mathbb{E}(T \wedge t)$, for all $t \geq 0$.

Note that $$ \mathbb{E}|X_t-X_s| = \mathbb{E}|X_{t-s}| \leq \mathbb{E}|Y_1| \mathbb{E}(N_{t-s}) = \mathbb{E}|Y_1| (t-s), \; \forall \; t \geq s \geq 0,$$ and hence, using Proposition~\ref{prop}, we conclude that $\left\{X_{T \wedge t} : t \geq 0\right\}$ is U.I., since $\mathbb{E}(T)=\mu < \infty.$ Having a finite mean, $T$ is finite almost surely and hence $X_{T \wedge t}$ converges almost surely to $X_T$. Combined with uniform integrability, this gives $L^1$-convergence by \textit{Vitali's Theorem} and hence $$ \mathbb{E}X_{T} = \lim_{t \to \infty} \mathbb{E}(X_{T \wedge t}) = \lim_{t \to \infty} \widetilde{\mu}\lambda \mathbb{E}(T \wedge t) = \widetilde{\mu}\lambda \mathbb{E}(T) = \widetilde{\mu}\lambda \mu.$$

(b) We know that the process $X$ has stationary independent increments with $X_0=S_{N_0}=S_0=0$. Thus if we can show that $X_t \sim \text{Poi}(\gamma t)$ for all $t \geq 0$ and for some $\gamma >0$, we can conclude that $X \sim PP(\gamma)$.

Now note that $X_t = S_{N_t} \mid (N_t=k) \stackrel{d}{=} S_k \sim \text{Bin}(k,p)$, for all $k \geq 0$, since $Y_i$'s are \textit{i.i.d.} $\text{Ber}(p)$ variables. Thus for all $j \in \mathbb{Z}_{\geq 0}$, \begin{align*} \mathbb{P}(X_t=j) = \sum_{k \geq 0} \mathbb{P}(X_t=j \mid N_t=k)\mathbb{P}(N_t=k) &= \sum_{k \geq j} {k \choose j}p^j(1-p)^{k-j}e^{-\lambda t}\dfrac{(\lambda t)^k}{k!} \\ &= e^{-\lambda t} \dfrac{1}{j!}p^j (\lambda t)^j \sum_{k \geq j} \dfrac{(\lambda t (1-p))^{k-j}}{(k-j)!} \\ &= e^{-\lambda t} \dfrac{1}{j!}p^j (\lambda t)^j e^{\lambda t (1-p)} = e^{-p\lambda t} \dfrac{(p\lambda t)^j}{j!}, \end{align*} implying that $X \sim PP(p\lambda)$.

(c) If $Y_i$'s are \textit{i.i.d.} standard normal, then by the independent and stationary increment property of BM we can conclude that $$ \left\{S_n : n \geq 0\right\} \stackrel{d}{=} \left\{B_n : n \geq 0\right\} ,$$ and hence $\left\{X_t = S_{N_t} ; t \geq 0\right\}$ has the same finite dimensional distributions as $\left\{B_{N_t} ; t \geq 0\right\}$.

We can take $Y_i = B_i-B_{i-1}$, since $\left\{B_i-B_{i-1}, i \geq 1\right\}$ is a collection of \textit{i.i.d.} standard normal variables, and in that case $X_t=B_{N_t}.$ \begin{align*} \mathbb{P}(X_t \geq a+bN_t, \; \text{ for some } t \geq 0)& = \mathbb{P}(B_{N_t} \geq a+bN_t, \; \text{ for some } t \geq 0) \\ & \leq \mathbb{P}(B_t \geq a+bt, \; \text{ for some } t \geq 0) = \exp(-2ab), \end{align*} where the last equality follows from \cite[Exercise 9.2.35(c)]{dembo}. |
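Part (b) is the classical thinning property and is easy to check numerically (an added sketch, not part of the solution; the parameters are arbitrary): conditionally on $N_t$, the Bernoulli marks make $X_t$ a Binomial mixture whose mean and variance should both equal $p\lambda t$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
lam, p, t, reps = 3.0, 0.4, 2.0, 10**6
N = rng.poisson(lam * t, size=reps)
X = rng.binomial(N, p)                  # sum of N_t i.i.d. Bernoulli(p) marks
print(X.mean(), X.var(), p * lam * t)   # mean = variance = p*lambda*t for Poisson
\end{verbatim}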
2002-q6 | Let $\{d_n\}$ be a sequence of random variables adapted to the filtration $\{\mathcal{F}_n\}$ such that for some $r > 2$, $\sup_{n\geq 1} E(|d_n|^r|\mathcal{F}_{n-1}) < \infty$ with probability one, and for some positive constants $\mu$ and $\sigma$, with probability one $E(d_n|\mathcal{F}_{n-1}) = \mu$ and $\lim_{n\rightarrow\infty} \text{Var}(d_n|\mathcal{F}_{n-1}) = \sigma^2$. Let $S_n = \sum_{i=1}^n d_i$ and $\bar{X}_n = S_n/n$. (a) Show that $n^{-1/2}\max_{1\leq k\leq n} \{S_k - k\bar{X}_n\}$ converges in law as $n \rightarrow \infty$ and find the limiting distribution. (b) Let $T_b = \inf\{n \geq 1 : S_n \geq b\}$ $(\inf\emptyset = \infty)$. Show that as $b \rightarrow \infty$, $T_b/b \rightarrow 1/\mu$ with probability one. | We have a sequence $\left\{d_n, \mathcal{F}_n, n \geq 0\right\}$, which is $\mathcal{F}$-adapted such that $$ \mathbb{E}(d_n \mid \mathcal{F}_{n-1})=\mu,\; \operatorname{Var}(d_n\mid \mathcal{F}_{n-1})=\sigma^2, \; \sup_{n \geq 1}\mathbb{E}(|d_n|^r \mid \mathcal{F}_{n-1})< \infty, \; \text{almost surely},$$ where $\mu, \sigma$ are positive constants with $r>2$. Clearly, $d_n$ is square-integrable with $\mathbb{E}d_n^2 = \mu^2 +\sigma^2$ and hence $\left\{M_n :=S_n - n \mu = \sum_{k=1}^n d_k - n \mu, \mathcal{F}_n, n \geq 0\right\}$ is a $L^2$-MG, with $M_0:=0$. Its predictable compensator process satisfies $$ n^{-1}\langle M \rangle_n = \dfrac{1}{n} \sum_{k=1}^n \mathbb{E}((M_k-M_{k-1})^2 \mid \mathcal{F}_{k-1}) = \dfrac{1}{n} \sum_{k=1}^n \mathbb{E}((d_k-\mu)^2 \mid \mathcal{F}_{k-1}) = \dfrac{1}{n} \sum_{k=1}^n \operatorname{Var}(d_k\mid \mathcal{F}_{k-1}) =\sigma^2.$$ Also for any $\varepsilon >0$, \begin{align*} \dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[(M_k-M_{k-1})^2; |M_k-M_{k-1}| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] &= \dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[(d_k-\mu)^2; |d_k-\mu| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] \\ & \leq \dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[|d_k-\mu|^2 (\varepsilon \sqrt{n} |d_k-\mu|^{-1})^{2-r} \mid \mathcal{F}_{k-1} \right] \\ & \leq \dfrac{\varepsilon^{2-r}}{n^{r/2-1}}\left( \sup_{k \geq 1} \mathbb{E}(|d_k-\mu|^r \mid \mathcal{F}_{k-1})\right)\\ & \leq \dfrac{\varepsilon^{2-r}2^{r-1}}{n^{r/2-1}}\left( \sup_{k \geq 1} \mathbb{E}\left((|d_k|^r+\mu^r) \mid \mathcal{F}_{k-1}\right)\right) \\ &\leq \dfrac{\varepsilon^{2-r}2^{r-1}}{n^{r/2-1}}\left( \sup_{k \geq 1} \mathbb{E}\left(|d_k|^r \mid \mathcal{F}_{k-1}\right) + \mu^r\right) \stackrel{a.s.}{\longrightarrow} 0. \end{align*} We have basically proved the required conditions needed to apply \textit{Martingale CLT}. Define, $$ \widehat{S}_n(t) := \dfrac{1}{\sigma\sqrt{n}} \left[M_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(M_{\lfloor nt \rfloor +1}-M_{\lfloor nt \rfloor}) \right]=\dfrac{1}{\sigma \sqrt{n}} \left[M_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(d_{\lfloor nt \rfloor +1}-\mu) \right], $$ for all $0 \leq t \leq 1$. We then have $\widehat{S}_n \stackrel{d}{\longrightarrow} W$ as $n \to \infty$ on the space $C([0,1])$, where $W$ is the standard Brownian Motion on $[0,1]$.

(a) First note that $ \overline{X}_n = S_n/n = M_n/n +\mu$ and hence $S_k-k \overline{X}_n = M_k+k\mu - k(M_n/n)-k\mu = M_k - kn^{-1}M_n,$ for all $ 1\leq k \leq n$.
Observe that the function $t \mapsto \widehat{S}_n(t)-t\widehat{S}_n(1)$ on $[0,1]$ is the piecewise linear interpolation using the points $$ \left\{ \left( \dfrac{k}{n}, \sigma^{-1}n^{-1/2}\left(M_k - \dfrac{k}{n}M_n\right)\right) \Bigg \rvert k=0, \ldots, n\right\}.$$ Recall the fact that for a piecewise linear function on a compact set, we can always find a maximizing point among the collection of the interpolating points. Hence, $$ \max_{1 \leq k \leq n} \dfrac{1}{\sqrt{n}}\left(S_k - k\overline{X}_n \right) = \max_{1 \leq k \leq n} \dfrac{1}{\sqrt{n}}\left(M_k - \dfrac{k}{n}M_n \right) =\max_{0 \leq k \leq n} \dfrac{1}{\sqrt{n}}\left(M_k - \dfrac{k}{n}M_n \right) = \sigma\sup_{0 \leq t \leq 1} (\widehat{S}_n(t)-t\widehat{S}_n(1)).$$

Since $\mathbf{x} \mapsto \sup_{0 \leq t \leq 1} (x(t)-tx(1))$ is a continuous function on $C([0,1])$, we have $$ \sup_{0 \leq t \leq 1} (\widehat{S}_n(t)-t\widehat{S}_n(1)) \stackrel{d}{\longrightarrow} \sup_{0 \leq t \leq 1} (W(t)-tW(1)) = \sup_{0 \leq t \leq 1} B(t),$$ where $\left\{B(t) : 0 \leq t \leq 1\right\}$ is the standard Brownian Bridge. We know that $$ \mathbb{P}(\sup_{0 \leq t \leq 1} B(t) \geq b) = \exp(-2b^2),$$ for all $b \geq 0$ (see \cite[Exercise 10.2.14]{dembo}). Therefore, using \textit{Slutsky's Theorem} and noting that $\sup_{0 \leq t \leq 1} B(t)$ has a continuous distribution function, we can conclude that $$ \mathbb{P} \left( n^{-1/2}\max_{1 \leq k \leq n} \left(S_k - k\overline{X}_n \right) \geq a \right) \longrightarrow \mathbb{P}(\sigma \sup_{0 \leq t \leq 1} B(t) \geq a) = \exp(-2\sigma^{-2}a^2),$$ for all $a \geq 0$. In other words, $$ n^{-1/2}\max_{1 \leq k \leq n} \left(S_k - k\overline{X}_n \right) \stackrel{d}{\longrightarrow} \text{Rayleigh}(\text{scale}=\sigma/2).$$

(b) We shall start by establishing the fact that $S_n/n$ converges almost surely to $\mu$. From \textit{Kronecker's Lemma}, it is enough to show that $$ \sum_{n \geq 1} \dfrac{(S_n-n\mu)-(S_{n-1}-(n-1)\mu)}{n} = \sum_{n \geq 1} \dfrac{d_n-\mu}{n} \; \; \text{converges almost surely}.$$

Observe that $\left\{W_n:=\sum_{k=1}^n k^{-1}(d_k-\mu), \mathcal{F}_n, n \geq 1\right\}$ is a $L^2$-MG with $$ \sup_{n \geq 1} \mathbb{E} W_n^2 = \sup_{n \geq 1} \mathbb{E} \langle W \rangle_n = \mathbb{E}\langle W \rangle _{\infty} = \sum_{n \geq 1} \mathbb{E}(n^{-2}(d_n-\mu)^2) = \sigma^2 \sum_{n \geq 1} n^{-2} < \infty.$$ Being an $L^2$-bounded MG, $W_n$ converges almost surely as $n \to \infty$, which is what we wanted to prove.

Now we define $T_b := \inf \left\{ k \geq 1 : S_k \geq b\right\}$, for all $b > 0$. Clearly $b \mapsto T_b(\omega)$ is non-decreasing for all $\omega$. Also $T_b \uparrow \infty$ as $b \uparrow \infty$ almost surely since $\mathbb{P}(S_k < \infty, \;\; \forall \; k \geq 0) =1.$ Finally, what we have proved in the preceding paragraph shows that $S_n \to \infty$ almost surely as $n \to \infty$ (since $\mu >0$) and hence $T_b < \infty$ for all $b >0$ with probability $1$. Hence we can conclude that
\begin{equation}\label{conv}
\dfrac{S_{T_b}}{T_b},\dfrac{S_{T_b-1}}{T_b-1} \stackrel{a.s.}{\longrightarrow} \mu, \;\; \text{as } b \to \infty.
\end{equation}
From definition of $T_b$ it is clear that $$ \dfrac{S_{T_b-1}}{T_b} \leq \dfrac{b}{T_b} \leq \dfrac{S_{T_b}}{T_b}, \text{ which implies } \dfrac{T_b-1}{T_b} \dfrac{S_{T_b-1}}{T_b-1} \leq \dfrac{b}{T_b} \leq \dfrac{S_{T_b}}{T_b}, \;\forall \; b >0.$$ Taking $b \uparrow \infty$, using (\ref{conv}) and recalling that $T_b \uparrow \infty$ almost surely as $b \uparrow \infty$, we conclude that $b/T_b$ converges almost surely to $\mu$ and hence $T_b/b$ converges almost surely to $1/\mu.$ |
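A minimal numerical illustration of part (b) (added; it uses i.i.d. $\text{Exponential}$ increments with mean $\mu$, which is just one sequence satisfying the hypotheses):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
mu, b = 2.0, 10**5
S = np.cumsum(rng.exponential(mu, size=10**5))   # S_n for i.i.d. d_i with mean mu
T_b = np.argmax(S >= b) + 1                      # first n with S_n >= b
print(T_b / b, 1 / mu)                           # both close to 0.5
\end{verbatim}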
2003-q1 | Let $X$ be a Poisson random variable with mean $\lambda > 0$.
a) Prove that
$$E|X - \lambda| = 2\lambda e^{-\lambda} \frac{\lambda^{[\lambda]}}{[\lambda]!}$$
with $[\lambda]$ the greatest integer less than or equal to $\lambda$.
b) Assuming the result in Part (a), prove that as $\lambda \uparrow \infty$
$$E \left| \frac{X}{\lambda} - 1 \right| \rightarrow 0$$ | $X \sim \text{Poisson}(\lambda)$ with $\lambda >0$.
\begin{enumerate}[label=(\alph*)]
\item Set $\lfloor \lambda \rfloor = m \in \mathbb{Z}_{\geq 0}$. Then
\begin{align*}
\mathbb{E}(X-\lambda)_{+} = \sum_{k=m+1}^{\infty} (k-\lambda)\exp(-\lambda) \dfrac{\lambda^k}{k!} &= \exp(-\lambda)\sum_{k=m+1}^{\infty} k \dfrac{\lambda^k}{k!} - \lambda \exp(-\lambda) \sum_{k=m+1}^{\infty} \dfrac{\lambda^k}{k!} \\
&= \lambda \exp(-\lambda)\sum_{k=m+1}^{\infty} \dfrac{\lambda^{k-1}}{(k-1)!} - \lambda \exp(-\lambda) \sum_{k=m+1}^{\infty} \dfrac{\lambda^k}{k!} \\
&= \lambda \exp(-\lambda)\sum_{k=m}^{\infty} \dfrac{\lambda^{k}}{k!} - \lambda \exp(-\lambda) \sum_{k=m+1}^{\infty} \dfrac{\lambda^k}{k!} \\
&= \lambda \exp(-\lambda) \dfrac{\lambda^m}{m!}.
\end{align*}
Hence,
$$ \mathbb{E}|X-\lambda| = 2 \mathbb{E}(X-\lambda)_{+} - \mathbb{E}(X-\lambda) = 2 \lambda \exp(-\lambda) \dfrac{\lambda^m}{m!}.$$
\item We shall use \textit{Stirling's Approximation} as $\lambda \uparrow \infty$. Here $\operatorname{frac}(x) := x - \lfloor x \rfloor$ is the fractional part of $x$.
\begin{align*}
\mathbb{E} \left \rvert \dfrac{X}{\lambda} -1 \right \rvert \leq 2 \exp(-\lambda) \dfrac{\lambda^{\lfloor \lambda \rfloor}}{\lfloor \lambda \rfloor !} &\sim 2 \exp(-\lambda) \dfrac{\lambda^{\lfloor \lambda \rfloor}}{\sqrt{2 \pi} {\lfloor \lambda \rfloor}^{\lfloor \lambda \rfloor + 1/2} \exp(-\lfloor \lambda \rfloor)} \\
& = \dfrac{2}{\sqrt{2 \pi}} \exp(-\operatorname{frac}(\lambda)) \lfloor \lambda \rfloor^{-1/2} \left( 1 + \dfrac{\operatorname{frac}(\lambda)}{\lfloor \lambda \rfloor}\right)^{\lfloor \lambda \rfloor} \\
& \leq \dfrac{2}{\sqrt{2 \pi}} \exp(-\operatorname{frac}(\lambda)) \lfloor \lambda \rfloor^{-1/2} \exp \left( \operatorname{frac}(\lambda)\right) \; \; ( \text{ as } 1+x \leq e^x) \\
&= \dfrac{2}{\sqrt{2 \pi}} \lfloor \lambda \rfloor^{-1/2} = o(1).
\end{align*}
This completes the proof.
\end{enumerate} |
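The closed form in part (a) is easy to check by simulation (an added sketch, not part of the solution; $\lambda=4.7$ is an arbitrary choice):
\begin{verbatim}
import numpy as np
from math import exp, factorial, floor

rng = np.random.default_rng(6)
lam = 4.7
X = rng.poisson(lam, size=10**6)
m = floor(lam)
print(np.abs(X - lam).mean(), 2 * lam * exp(-lam) * lam**m / factorial(m))
\end{verbatim}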
2003-q2 | At time zero, an urn contains one ball labeled zero and one ball labeled one. Let $a_1, a_2, a_3, \ldots$ be any sequence of positive integers. At times $i \geq 1$, a ball is drawn uniformly from the urn and replaced with $a_i$ additional balls with the same label. Let $N_n$ be the total number of balls in the urn at time $n$. Let $S_n$ be the number of ones in the urn.
a) Prove that $S_n/N_n$ converges almost surely.
b) Can you give a single example of a choice of $a_n$ such that the limiting random variable has a known law? | Let $A_n$ denotes the event "at the $n$-th draw, a ball labelled $1$ was drawn" and $\mathcal{F}_n = \sigma (A_i \mid 1 \leq i \leq n)$, with $\mathcal{F}_0$ being the trivial $\sigma$-algebra. Then clearly,
$$ S_n = 1 + \sum_{i=1}^n a_i \mathbbm{1}_{A_i}, \; N_n = 2+\sum_{i=1}^n a_i, \; \mathbb{P} \left( A_n \mid \mathcal{F}_{n-1}\right) = \dfrac{S_{n-1}}{N_{n-1}}, \;\forall \; n \geq 1.$$
\begin{enumerate}[label=(\alph*)]
\item For any $n \geq 1$, note that $0 \leq R_n := \dfrac{S_n}{N_n} \leq 1$ by definition and
$$ \mathbb{E}(R_n \mid \mathcal{F}_{n-1}) = \dfrac{1}{N_n}\mathbb{E}(S_n \mid \mathcal{F}_{n-1}) = \dfrac{1}{N_n} \left[ S_{n-1} + a_n \mathbb{P} \left( A_n \mid \mathcal{F}_{n-1}\right) \right] = \dfrac{1}{N_n} \left[ S_{n-1} + a_n \dfrac{S_{n-1}}{N_{n-1}} \right] = \dfrac{S_{n-1}}{N_n} \left( 1+ \dfrac{a_n}{N_{n-1}} \right) = \dfrac{S_{n-1}}{N_n} \dfrac{N_n}{N_{n-1}} = R_{n-1}. $$
Thus $\left\{R_n, \mathcal{F}_n, n \geq 0\right\}$ is an uniformly bounded MG and hence by \textit{Martingale Convergence Theorem}, it converges almost surely to some $R_{\infty}$ and also in $L^p$ for all $p \geq 1$.
\item We shall give two examples.
\begin{enumerate}[label=(\arabic*)]
\item $a_i =1$, for all $i \geq 1$.
We first claim that $S_n \sim \text{Uniform}(\left\{1,\ldots,n+1 \right\})$ for all $n \geq 1$.
We prove this via induction. Since, $N_0=2$ and $S_0=1$, we have $\mathbbm{1}_{A_1} \sim \text{Ber}(1/2)$, and hence we clearly have $S_1 \sim \text{Uniform}(\left\{1,2 \right\}) $. Suppose $S_m \sim \text{Uniform}(\left\{1,\ldots,m+1 \right\}). $ Then
for all $k=2, \ldots, m+2$, we have
\begin{align*}
\mathbb{P}(S_{m+1}=k) &= \mathbb{P}(S_{m+1}=k \mid S_m=k-1)\mathbb{P}( S_m=k-1) + \mathbb{P}(S_{m+1}=k \mid S_m=k) \mathbb{P}(S_m=k) \\
&= \mathbb{P}(A_{m+1} \mid S_m=k-1)\mathbb{P}( S_m=k-1) + \mathbb{P}(A_{m+1}^c \mid S_m=k)\mathbb{P}(S_m=k) \\
&= \dfrac{k-1}{m+2}\dfrac{1}{m+1} + \left(1-\dfrac{k}{m+2} \right)\dfrac{1}{m+1} = \dfrac{1}{m+2}.
\end{align*}
Since $S_{m+1} \in \left\{1, \ldots, m+2 \right\}$, this proves that $S_{m+1} \sim \text{Uniform}(\left\{1,\ldots,m+2 \right\}).$ This concludes the proof of the claim. Therefore, for any $x \in (0,1)$,
$$ \mathbb{P}(R_n \leq x) = \mathbb{P}(S_n \leq (n+2)x) = \dfrac{\lfloor (n+2)x \rfloor \wedge (n+1)}{n+1} \stackrel{n \to \infty}{\longrightarrow} x.$$
Thus $R_{\infty} \sim \text{Uniform}(0,1)$.
\item $a_i = 2^i$, for all $i \geq 1$.
In this case $N_n = 2 + \sum_{i=1}^{n} 2^i = 2^{n+1}$. We also have,
$$ \operatorname{Var}(R_n \mid \mathcal{F}_{n-1}) = \dfrac{1}{N_n^2} \operatorname{Var}(S_{n-1} + a_n \mathbbm{1}_{A_n} \mid \mathcal{F}_{n-1}) = \dfrac{1}{N_n^2} \operatorname{Var}( a_n \mathbbm{1}_{A_n} \mid \mathcal{F}_{n-1}) = \dfrac{a_n^2}{N_n^2} R_{n-1}(1-R_{n-1}), $$
and therefore,
$$ \mathbb{E}(R_n^2 \mid \mathcal{F}_{n-1}) = \dfrac{a_n^2}{N_n^2} R_{n-1}(1-R_{n-1}) + R_{n-1}^2 = \dfrac{1}{4} R_{n-1}(1-R_{n-1}) + R_{n-1}^2.$$
Combining them we obtain,
$$ \mathbb{E}(R_n(1-R_n) \mid \mathcal{F}_{n-1}) = R_{n-1} - \dfrac{1}{4} R_{n-1}(1-R_{n-1}) - R_{n-1}^2 = \dfrac{3}{4}R_{n-1}(1-R_{n-1}), \; \forall \; n \geq 1.$$
In part (a), we have argued that $R_n$ converges to $R_{\infty}$ in $L^2$. Therefore,
$$ \mathbb{E}(R_{\infty}(1-R_{\infty})) = \lim_{n \to \infty} \mathbb{E}(R_n(1-R_n)) = \lim_{n \to \infty}\dfrac{3^n}{4^n}\mathbb{E}(R_0(1-R_0)) = \lim_{n \to \infty}\dfrac{3^n}{4^{n+1}} = 0 . $$
This implies that $R_{\infty} \in \left\{0,1\right\}$ with probability $1$. The $L^2$ convergence also implies,
$$ \mathbb{E}(R_{\infty}) = \lim_{n \to \infty} \mathbb{E}(R_n) = \mathbb{E}(R_0) = \dfrac{1}{2},$$
we conclude that $R_{\infty} \sim \text{Ber}(1/2)$.
\end{enumerate}
\end{enumerate} |
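For example (1) above, the uniform limit is easy to see in simulation (an added sketch, not part of the solution; the number of draws and of repetitions are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
reps, n = 50000, 2000
ones, total = np.ones(reps), 2.0
for i in range(n):
    ones += rng.random(reps) < ones / total   # draw; a_i = 1 ball of same label added
    total += 1.0
print(np.quantile(ones / total, [0.25, 0.5, 0.75]))   # ~ (0.25, 0.5, 0.75)
\end{verbatim}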
2003-q3 | Let $X_i, 1 \leq i \leq \infty$ be independent random variables with common density $2x$ on $[0,1]$.
Let $M_n = \max_{1\leq i\leq n} X_i$. Find $a_n, b_n$ and a non-trivial distribution function $F(x)$ such that
for all real $x$, $P\left\{ \frac{M_n - a_n}{b_n} \leq x \right\} \to F(x)$. | Since the common density is $f(x) = 2x \mathbbm{1}_{0 \leq x \leq 1}$, it is easy to see that the common distribution function is $F(x) = x^2$, if $x \in [0,1]$. Then, for any $x>0$ and for large enough $n$, we have
\begin{align*}
\mathbb{P}(n(1-M_n) \geq x) = \mathbb{P} \left( M_n \leq 1- \dfrac{x}{n} \right) = \left[ F \left( 1-\dfrac{x}{n}\right)\right]^n = \left( 1- \dfrac{x}{n}\right)^{2n} \stackrel{n \to \infty}{\longrightarrow} \exp(-2x).
\end{align*}
Hence, $n(1-M_n) \stackrel{d}{\longrightarrow} \text{Exponential} (\text{rate}=2)$, as $n \to \infty$. In other words, we can take $a_n=1$ and $b_n=n^{-1}$, so that $(M_n-a_n)/b_n = -n(1-M_n)$ converges in law to the non-trivial distribution function $F(x) = e^{2x}\mathbbm{1}(x < 0) + \mathbbm{1}(x \geq 0)$. |
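Numerically (an added check; it uses the fact that $M_n$ has CDF $x^{2n}$, so $M_n$ can be sampled as $U^{1/(2n)}$ with $U$ uniform):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(8)
n, reps = 10**4, 10**6
M = rng.uniform(size=reps) ** (1.0 / (2 * n))    # M_n has CDF x^{2n} on [0,1]
Y = n * (1 - M)
print(Y.mean(), (Y > 1.0).mean(), np.exp(-2.0))  # mean ~ 1/2, tail ~ e^{-2} ~ 0.1353
\end{verbatim}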
2003-q4 | An autoregressive process. Let $Z_k$, $-\infty < k < \infty$, be a sequence of independent identically distributed integer random variables with $P(Z_k = j) = \frac{3}{j^2}$ for $j = \pm 2, \pm 3, \pm 4, \ldots$
Fix a sequence of positive constants $\{c_j, -\infty < j < \infty\}$ satisfying $\sum_j c_j^{1/2} < \infty$. Set
$$ X_n = \sum_{j=-\infty}^{\infty} c_j Z_{n-j}, -\infty < n < \infty. $$ Prove that for each $n$ the limit defining $X_n$ exists and is finite almost surely. Extra credit. Prove this same result under the weaker condition that $\Sigma c_j < \infty$. | It is enough to prove that the limit of $W_n :=\sum_{j =1}^{n} c_j Z_j$ as $n \to \infty$ exists and is finite almost surely for \textit{i.i.d.} integer-valued variables $Z_j$'s with $\mathbb{P}(Z_j=k) = c|k|^{-2}$, for all $k \in \left\{ \pm 2, \pm 3, \ldots\right\}$. We shall prove it under the weaker condition that $\sum_{j \geq 1} c_j < \infty$.
The idea is to use truncation (since the $Z$-variables have infinite mean) and then use Martingale techniques. Consider truncating the absolute value of the $m$-th variable at $b_m$ for some sequence of positive real numbers $\left\{b_m\right\}_{m \geq 1} \subseteq (2, \infty)$, i.e. define
$$ Y_m := Z_m \mathbbm{1}(|Z_m| \leq b_m); \; \; S_n := \sum_{m=1}^n c_mY_m; \; \forall \; n \geq 1,$$
with $S_0 :=0$.
Since, $Z_m$ has a symmetric distribution around $0$, so has $Y_m$ and hence $\mathbb{E}(Y_m)=0$. Also,
$$ \mathbb{P}(Y_m \neq Z_m) = \mathbb{P}(|Z_m| > b_m) = 2c \sum_{k \geq \lceil b_m \rceil} k^{-2} \leq 2c \sum_{k \geq \lceil b_m \rceil} \int_{k-1}^k x^{-2} \, dx = 2c \int_{\lceil b_m \rceil - 1}^{\infty} x^{-2} \, dx = \dfrac{2c}{\lceil b_m \rceil - 1},$$
whereas,
$$ \mathbb{E}(Y_m^2) = 2c \sum_{k=2}^{\lfloor b_m \rfloor} \dfrac{k^2}{k^2} \leq 2c\lfloor b_m \rfloor. $$
Let us first investigate the convergence of the sequence $\left\{S_n\right\}_{n \geq 0}$. Set $\mathcal{F}_n := \sigma \left(Z_j \mid 1 \leq j \leq n \right)$ and notice that $\left\{S_n, \mathcal{F}_n, n \geq 0\right\}$ is an $L^2$ MG with
$$ \mathbb{E}(S_n^2) = \sum_{m=1}^n c_m^2 \mathbb{E}(Y_m^2) \leq 2c \sum_{m=1}^n c_m^2\lfloor b_m \rfloor .$$
Hence, if we choose $\left\{b_m\right\}$ such that $ \sum_{m=1}^{\infty} c_m^2\lfloor b_m \rfloor < \infty$, then the MG would be $L^2$-bounded and therefore, by \textit{Doob's $L^p$-Martingale Convergence Theorem}, $S_n$ will almost surely have a finite limit. Moreover, if $\sum_{m=1}^{\infty} ({\lceil b_m \rceil - 1})^{-1} < \infty$, then $\sum_{m \geq 1} \mathbb{P}(Y_m \neq Z_m) < \infty$ and hence by \textit{Borel-Cantelli Lemma 1}, $Y_m = Z_m$ eventually for all large enough $m$ with probability $1$. This yields that $W_n$ will also have a finite limit almost surely. Thus we shall be done if we can choose $\left\{b_m\right\}_{m \geq 1} \subseteq [2, \infty)$ such that
$$ \sum_{m=1}^{\infty} c_m^2\lfloor b_m \rfloor, \sum_{m=1}^{\infty} ({\lceil b_m \rceil - 1})^{-1} < \infty.$$
Take $b_m = \max(2,c_m^{-1})$. Since, $\sum_{m \geq 1} c_m < \infty$, we have $c_m=o(1)$ and hence $b_m=c_m^{-1}$ for large enough $m$. Therefore,
$$ \sum_{m=1}^{\infty} c_m^2\lfloor b_m \rfloor \leq \text{constant} + \sum_{m \geq 1} c_m^2 c_m^{-1} = \text{constant} + \sum_{m \geq 1} c_m < \infty,$$
and
$$ \sum_{m=1}^{\infty} ({\lceil b_m \rceil - 1})^{-1} \leq \text{constant} + \sum_{m \geq 1 : c_m < 1/2} \dfrac{1}{c_m^{-1}-1} \leq \text{constant} + \sum_{m \geq 1 : c_m < 1/2} 2c_m < \infty.$$
This completes the proof. |
2003-q5 | A Poisson process $N_t$ of rate $\lambda$ starts from state 0 at time 0. A Brownian motion $X_t$ with drift $\mu > 0$ starts from $-c$ at time 0, where $c$ is a positive constant. Find the expected value of $T = \inf\{t : N_t = X_t\}$, where by convention $\inf(\phi) = \infty$. | We have $\left\{N_t : t \geq 0\right\}$, Poisson Process with rate $\lambda$ and $N_0=0$; and Brownian motion $\left\{X_t : t \geq 0\right\}$ with drift $\mu >0$ and $X_0=-c<0$. $N$ and $X$ are independent. The stopping time we are concerned with is $T := \inf \left\{t \mid N_t=X_t\right\}$. It is easy to see that $T=T^{\prime} := \inf \left\{t \mid N_t \leq X_t\right\}$. To observe this first note that $0 <T^{\prime} \leq T$, by definition ( $T^{\prime} >0$ since $X$ is continuous with $X_0 <0$ and $N$ is non-decreasing with $N_0=0$). If both are $\infty$ then there is nothing to prove. If $T^{\prime}(\omega) < \infty$, using RCLL sample paths of $N$ and continuous sample paths of $X$ we can write
$$ N_{T^{\prime}(\omega)}(\omega) - X_{T^{\prime}(\omega)}(\omega) \leq 0; N_{T^{\prime}(\omega)-}(\omega) - X_{T^{\prime}(\omega)}(\omega) \geq 0.$$
Since, $N_{T^{\prime}(\omega)-}(\omega) \leq N_{T^{\prime}(\omega)}(\omega)$, this guarantees that $N_{T^{\prime}(\omega)}(\omega) = X_{T^{\prime}(\omega)}(\omega) \leq 0$ and hence $T(\omega) \leq T^{\prime}(\omega)$. Hence $T^{\prime}=T$ almost surely.
Now first consider the case $\mu > \lambda$. Recall that as $t \uparrow \infty$, $N_t/t$ converges to $\lambda$ almost surely and $X_t/t$ converges to $\mu$ almost surely; where the second assertion is true since for standard Wiener Process $W$, we have $W_t/t$ converges to $0$ almost surely as $t \uparrow \infty$ (See \textit{Law of Iterated Logarithm}). Therefore, $(X_t-N_t)$ converges to $\infty$ almost surely and thus $T^{\prime}=T < \infty$ almost surely.
Recall that $\left\{Y_t:=X_t-N_t -(\mu-\lambda) t, \mathcal{F}_t, t \geq 0\right\}$ is a MG, where $\mathcal{F}_t := \sigma \left(X_s, N_s \mid 0 \leq s \leq t \right)$. Note that $Y_0=-c$. Therefore, by \textit{OST}, $\mathbb{E}(Y_{T \wedge t}) = -c$ and hence,
$$( \mu-\lambda)\mathbb{E}(T \wedge t) = \mathbb{E}(X_{T \wedge t}-N_{T \wedge t}) +c \leq c \Rightarrow \mathbb{E}(T \wedge t) \leq \dfrac{c}{\mu-\lambda}.$$
Using MCT, we conclude that $\mathbb{E}(T) \leq (\mu-\lambda)^{-1}c < \infty$. On the other hand, recall that $\left\{(X_t-\mu t)^2 - t, t \geq 0\right\}$ and $\left\{(N_t-\lambda t)^2 - \lambda t, t \geq 0\right\}$ are both martingales with respect to their respective canonical filtrations. Hence, using OST we can write
$$ \sup_{t \geq 0} \mathbb{E}(X_{T \wedge t}-\mu (T \wedge t))^2 = \sup_{t \geq 0}\left[ c^2 + \mathbb{E}(T \wedge t)\right] = c^2 + \mathbb{E}(T)< \infty,$$
and
$$ \sup_{t \geq 0} \mathbb{E}(N_{T \wedge t}-\lambda (T \wedge t))^2 = \sup_{t \geq 0}\lambda \mathbb{E}(T \wedge t) = \lambda \mathbb{E}(T)< \infty.$$
Combining them we get
$$ \sup_{t \geq 0} \mathbb{E} Y_{T \wedge t}^2 \leq 2\sup_{t \geq 0} \mathbb{E}(X_{T \wedge t}-\mu (T \wedge t))^2 + 2\sup_{t \geq 0} \mathbb{E}(N_{T \wedge t}-\lambda (T \wedge t))^2 < \infty.$$
Thus the MG $\left\{Y_{T \wedge t}, \mathcal{F}_t, t \geq 0\right\}$ is $L^2$-bounded and hence by \textit{Doob's $L^p$-MG convergence theorem}, $Y_{T \wedge t} = X_{T \wedge t} -N_{T \wedge t}- (\mu- \lambda) (T \wedge t) \longrightarrow X_T -N_T -(\mu- \lambda) T$ almost surely and in $L^2$. In particular,
$$ -c = \lim_{t \to \infty} \mathbb{E} \left( Y_{T \wedge t}\right) = \mathbb{E}\left(X_T-N_T-(\mu-\lambda) T \right) = -\mathbb{E}((\mu-\lambda) T) \Rightarrow \mathbb{E}(T) = \dfrac{c}{\mu-\lambda}.$$
Now consider $\mu \leq \lambda$. For any $\nu > \lambda$, consider the process $X^{\nu}_t := X_t + (\nu-\mu)t$, for all $t \geq 0$. This is a BM starting from $-c$ with drift $\nu > \lambda >0$. Defining $T_{\nu} := \inf \left\{t \geq 0 \mid X^{\nu}_t = N_t\right\} = \inf \left\{t \geq 0 \mid X^{\nu}_t \geq N_t\right\}$, we notice that $T_{\nu} \leq T$ almost surely since $X^{\nu}_t \geq X_t$ for all $t \geq 0$, almost surely. Thus $\mathbb{E}(T_{\nu}) \leq \mathbb{E}(T)$, for all such $\nu$ and hence
$$ \mathbb{E}(T) \geq \sup_{\nu > \lambda} \mathbb{E}(T_{\nu}) = \sup_{\nu > \lambda} \dfrac{c}{\nu-\lambda} = \infty.$$ |
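A crude discretized simulation agrees with $\mathbb{E}(T)=c/(\mu-\lambda)$ when $\mu>\lambda$ (an added sketch, not part of the solution; the grid step introduces a small bias and the parameters are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(9)
lam, mu, c = 1.0, 3.0, 2.0
dt, n, reps = 0.001, 20000, 2000            # time horizon n*dt = 20
times = dt * np.arange(1, n + 1)
hits = []
for _ in range(reps):
    X = -c + mu * times + np.cumsum(rng.normal(0.0, np.sqrt(dt), n))
    N = np.cumsum(rng.poisson(lam * dt, n))
    idx = np.argmax(X >= N)                 # first grid time with X_t >= N_t
    if X[idx] >= N[idx]:
        hits.append(times[idx])
print(np.mean(hits), c / (mu - lam))        # both close to 1.0
\end{verbatim}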
2003-q6 | In a continuous time branching process each particle lives an exponentially distributed length of time having mean one. Upon dying it gives birth to $k = 0, 1, or 2$ particles with probability $p_0 = q, p_1 = r, p_2 = p$, where $p < q$ and $p+r+q=1$. Both the lifetimes and the number of offspring of different particles are mutually independent random variables.
(a) What is the probability that the process eventually becomes extinct?
(b) What is the probability that sometime during its history it achieves a population size of $c$ if at time 0 there is a single particle? Hint: Consider the "embedded" process that consists of the number of particles still alive immediately after a death occurs. | Let $X_t$ be the number of particles alive at time $t \geq 0$. Introduce some further notations. Let $B_v$ is the birth-time of the particle $v$ and $D_v$ is its death time. Also let $\mathcal{F}_t$ denotes the $\sigma$-algebra containing all the information upto time $t$. Fix $t \geq s \geq 0$. We shall condition on $\mathcal{F}_s$ and try to evaluate the conditional distribution of $X_t$. Note that $X_s \in m \mathcal{F}_s$. If $X_s =0$, then obviously $X_t=0$. If $X_s = k >0$, let $v_1, \ldots, v_k$ are the particles alive at time $s$. So we know that $B_{v_i} < s$ and $D_{v_i}-B_{v_i} >s-B_{v_i}$, for all $i=1,\ldots, k$. Since, $D_{v_i}-B_{v_i} \mid B_{v_i} \sim \text{Exponential}(1)$, we use memoryless property and conclude that
$$D_{v_i}-s \mid \mathcal{F}_s\sim D_{v_i}-s \mid (D_{v_i}>s> B_{v_i}) \sim (D_{v_i}-B_{v_i}-(s-B_{v_i})) \mid (D_{v_i}-B_{v_i}> s-B_{v_i}>0)\sim \text{Exponential}(1).$$
Here we have used independence of the behaviour of the different particles. Also by this independence $D_{v_i}-s$ are mutually independent for different $i$'s conditional on $\mathcal{F}_s$. Finally after death the particles reproduce according the same law, conditional on $\mathcal{F}_s$. Thus we can conclude that the process $X_{s \cdot}$ conditional on $\mathcal{F}_s$, with $X_s=k$, has same distribution of $X_{\cdot}$, if we started with $k$ particles. Thus $\left\{X_t, \mathcal{F}_t, t \geq 0\right\}$ is a homogeneous Markov process. Also the sample paths of this process are clearly step-functions and hence this is a Markov jump process and has \textit{Strong Markov Property}, see \cite[Proposition 9.3.24]{dembo}.
To analyse it further, we need to find the holding times $\left\{\lambda_k : k \geq 0\right\}$ and jump transition probabilities $\left\{p(k,l) : k,l \geq 0\right\}$ associated with this process. See \cite[Definition 9.3.27]{dembo} for definition.
$\mathbb{P}_k$ will denote the law of the process if we start with $k$ particles initially. Let $\tau := \inf \left\{ t \geq 0 : X_t \neq X_0\right\}$. Then $\mathbb{P}_k(\tau > t) = \exp(-\lambda_k t)$ for all $t \geq 0$ and $p(k,l) := \mathbb{P}_k(X_{\tau}=l).$
Since, $0$ is absorbing state for the process, clearly $\lambda_0 =0$ with $p(0,l)=\mathbbm{1}(l=0)$. Consider $k >0$.
The survival times having a continuous distribution, no two deaths occur at the same time with probability $1$; hence it makes sense to talk about the first death. If $\theta$ is the Markov time denoting the first death,
then under $\mathbb{P}_k$, $\theta$ is distributed as the minimum of $k$ independent standard exponential variables and hence $\theta \sim \text{Exponential}(k)$.
It is clear that if $p_1=0$, then the particle dying first at time $\theta$ produces $2$ or no off-springs almost surely and hence the total number of particles alive changes at $\theta$, implying that $\tau=\theta$ and hence $\lambda_k=k$. On the other hand, if $p_1>0$, then note that
$$ \mathbb{P}_k(\tau >t) = \mathbb{P}_k(\theta >t) + \mathbb{P}_k(\theta \leq t, \tau > t )= \mathbb{P}_k(\theta >t) + \mathbb{P}_k(\theta \leq t, X_{\theta}=k, \tau_{\theta,k} > t ). $$
By \textit{Strong Markov Property},
$$ \mathbb{P}_k(\tau_{\theta,k} >t \mid \mathcal{F}_{\theta^+})\mathbbm{1}(\theta< \infty) = \mathbb{P}_{X_{\theta}}(\tau > t-\theta) \mathbbm{1}(\theta< \infty),$$
and hence
\begin{align*}
\mathbb{P}_k(\theta \leq t, X_{\theta}=k, \tau_{\theta,k} > t ) &= \mathbb{E} \left[ \exp(-\lambda_k(t-\theta)); \theta \leq t, X_{\theta}=k\right] = p_1 \int_{0}^t e^{-\lambda_k(t-u)}ke^{-ku}\, du \\
&= \begin{cases}
p_1ke^{-\lambda_kt} \dfrac{e^{(\lambda_k-k)t}-1}{\lambda_k-k}, & \text{ if } \lambda_k \neq k,\\
p_1kte^{-kt}, & \text{ if } \lambda_k=k.
\end{cases}
\end{align*}
Here we have used the fact that the total number of particles remains the same if the dying particle generates only one off-spring. In both cases $\exp(-\lambda_kt)=\mathbb{P}_k(\tau >t) > \mathbb{P}_k(\theta >t) = \exp(-kt)$ and hence $\lambda_k <k.$
Therefore, for all $t >0$,
\begin{align*}
\exp(-\lambda_k t) &= \exp(-k t) + p_1ke^{-\lambda_kt} \dfrac{e^{(\lambda_k-k)t}-1}{\lambda_k-k} &\Rightarrow \exp(-\lambda_k t) - \exp(-kt)= \dfrac{p_1k}{k-\lambda_k} \left(\exp(-\lambda_kt) -\exp(-kt) \right) \\
& \Rightarrow p_1k = k-\lambda_k \Rightarrow \lambda_k = k(1-p_1).
\end{align*}
Thus $\lambda_k=k(1-p_1)$ for any $p_1 \in [0,1]$ and $k \geq 0$. Now under $\mathbb{P}_k$, $X_{\tau} \in \left\{k-1,k+1\right\}$, almost surely, see \cite[Proposition 9.3.26(c)]{dembo}. Then
\begin{align*}
\mathbb{P}_k(X_{\tau}=k-1) &= \sum_{l \geq 1} \mathbb{P}(X_{\tau}=k-1, \tau = \text{Time of $l$-th death }) \\
&= \sum_{l \geq 1} \mathbb{P}(\text{First $l-1$ deaths produced $1$ off-springs each while $l$-th death produced none}) \\
&= \sum_{l \geq 1} p_1^{l-1}p_0 = \dfrac{p_0}{1-p_1}=\dfrac{p_0}{p_0+p_2}.
\end{align*}
Here we have used the fact that the number of off-springs produced is independent of the survival times.
In conclusion, $\left\{X_t, \mathcal{F}_t, t \geq 0\right\}$ is a Markov jump process on state space $S:=\left\{0,1 , \ldots\right\}$ with holding times $\lambda_k= k(p+q)$ and jump transition probabilities $p(k,k-1)= \dfrac{q}{p+q} = 1- p(k,k+1)$, for $k \geq 1$ and $p(0,0)=1$.
We now apply \cite[Theorem 9.3.28]{dembo}. We get a homogeneous Markov chain $\left\{Z_n : n \geq 0\right\}$ on state space $S$ and transition probabilities given by $p(\cdot, \cdot)$. Also for each $y \in S$, we take $\left\{\tau_j(y) : j \geq 1\right\}$ to be \textit{i.i.d.} $\text{Exponential}(\lambda_y)$ variables, each collection independent of one another and independent of the chain $Z$. Set $T_0=0$ and $T_k=\sum_{j =1}^k \tau_j(Z_{j-1})$.
Since $p<q$, $Z$ is a left-biased SRW on $\mathbb{Z}_{\geq 0}$ with $0$ being an absorbing state and therefore, $Z_n=0$, eventually for all $n$, with probability $1$. Since $\lambda_0=0$, this implies that $\tau_j(0)=\infty$, for all $j$, almost surely and hence $T_{\infty}=\infty$, almost surely.
Define $W_t := Z_k$, for all $t \in [T_{k},T_{k+1})$, and $k \geq 0$. \cite[Theorem 9.3.28]{dembo} implies that $\left\{W_t : t \geq 0\right\}$ is the unique Markov jump process with given jump parameters and hence $W$ is distributed same as the process $X$.
For the sake of simplicity, we shall therefore use $W$ as a representation of $X$.
\begin{enumerate}[label=(\alph*)]
\item We have established that $Z_n=0$ eventually for all $n$ with probability $1$. Let $N$ be the first time $Z_n$ becomes $0$; $N< \infty$ almost surely. Note that $\lambda_y >0$ for all $y >0$ and hence $\tau_j(y)< \infty$ almost surely for all $j \geq 1, y \geq 1$. Therefore, $T_N < \infty$ almost surely. But $X_{T_N} = Z_N=0$ and hence the process $X$ becomes extinct almost surely.
\item Since, $N, T_N < \infty$ almost surely and $X_t =0$ for all $t \geq T_N$, the event that $(X_t=c, \; \text{for some } t \geq 0)$ is same as the event $(Z_n=c, \; \text{ for some } n \geq 0)$. Hence,
\begin{align*}
\mathbb{P}_1(X_t=c, \; \text{for some } t \geq 0) = \mathbb{P}(Z_n=c, \; \text{ for some } n \geq 0 \mid Z_0=1).
\end{align*}
Upon a closer look, the right hand side turns out to be the probability that a SRW on $\mathbb{Z}$ with probability of going right $p/(p+q)$, starting from $1$, will reach $c \geq 1$ before reaching $0$. Apply \cite[Theorem 4.8.9(b)]{B01} to conclude that
$$ \mathbb{P}_1(X_t=c, \; \text{for some } t \geq 0) = \dfrac{(q/p)-1}{(q/p)^c-1}.$$
The probability is $1$ for $c =0$ as discussed in part (a).
\end{enumerate} |
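The formula in (b) can be verified by simulating the embedded random walk directly (an added sketch, not part of the solution; the values of $p$, $q$, $c$ below are arbitrary with $p<q$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(10)
p, q, c = 0.2, 0.5, 4            # p_2 = p, p_0 = q, p_1 = 1 - p - q
up = p / (p + q)                 # embedded chain: +1 w.p. p/(p+q), -1 otherwise
reps, hits = 100000, 0
for _ in range(reps):
    z = 1
    while 0 < z < c:
        z += 1 if rng.random() < up else -1
    hits += (z == c)
rho = q / p
print(hits / reps, (rho - 1) / (rho**c - 1))
\end{verbatim}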
2004-q1 | Let \( B_t, 0 \leq t \leq 1 \) be standard Brownian motion with \( B_0 = 0 \). Fix \( n \), and values \( 0 < t_1 < t_2 < \ldots < t_n < 1 \). Compute \( E\{B_t | B_{t_i} = b_i, 1 \leq i \leq n\} \) for all \( t \) in \([0, 1]\). Explain your reasoning. | $\left\{B_t \mid 0 \leq t \leq 1\right\}$ is standard BM. $0=t_0<t_1<\cdots < t_n <t_{n+1}=1$. Take $t \in [t_i,t_{i+1}]$, for some $0 \leq i \leq n$. Then by independent increment property of BM, we have
$$B(t)-B(t_i) \perp \! \! \! \perp \left\{B(t_{k+1})-B(t_k) \mid 0 \leq k \leq n, k\neq i\right\}.$$ Therefore, we have
\begin{align*}
\mathbb{E} \left[ B(t) \mid B(t_k), \; 0 \leq k \leq n+1\right] &= B(t_i) + \mathbb{E} \left[ B(t) -B(t_i) \mid B(t_k), \; 0 \leq k \leq n+1\right] \\
&= B(t_i) + \mathbb{E} \left[ B(t) -B(t_i) \mid B(t_{k+1})-B(t_k), \; 0 \leq k \leq n\right] \\
&= B(t_i) + \mathbb{E} \left[ B(t) -B(t_i) \mid B(t_{i+1})-B(t_i)\right] \\
&= B(t_i) + \dfrac{\operatorname{Cov}\left(B(t) -B(t_i),B(t_{i+1})-B(t_i)\right)}{\operatorname{Var}\left(B(t_{i+1})-B(t_i) \right)} \left[ B(t_{i+1})-B(t_i)\right] \\
&= B(t_i) + \dfrac{t-t_i}{t_{i+1}-t_i} \left[ B(t_{i+1})-B(t_i)\right] \\
&= \dfrac{t-t_i}{t_{i+1}-t_i}B(t_{i+1}) + \dfrac{t_{i+1}-t}{t_{i+1}-t_i}B(t_i).
\end{align*} Therefore, for $t \in [t_i,t_{i+1}]$ with $0 \leq i \leq n-1$, we have
$$ \mathbb{E} \left[ B(t) \mid B(t_k), \; 0 \leq k \leq n\right] = \mathbb{E} \left( \mathbb{E} \left[ B(t) \mid B(t_k), \; 0 \leq k \leq n+1\right] \mid B(t_k), \; 0 \leq k \leq n\right) = \dfrac{t-t_i}{t_{i+1}-t_i}B(t_{i+1}) + \dfrac{t_{i+1}-t}{t_{i+1}-t_i}B(t_i). $$ If $t \in [t_n,1]$, then \begin{align*} \mathbb{E} \left[ B(t) \mid B(t_k), \; 0 \leq k \leq n\right] &= B(t_n) + \mathbb{E} \left[ B(t) -B(t_n) \mid B(t_{k+1})-B(t_k), \; 0 \leq k \leq n-1\right] \\
&= B(t_n) + \mathbb{E} \left[ B(t) -B(t_n) \right] = B(t_n). \end{align*} Combining all the above, we can conclude,
$$ \mathbb{E} \left[B(t) \mid B(t_i)=b_i, \; 1 \leq i \leq n \right] = \begin{cases}
\dfrac{t}{t_1}b_1, & \text{if } 0 \leq t \leq t_1, \\
\dfrac{t-t_i}{t_{i+1}-t_i}b_{i+1} + \dfrac{t_{i+1}-t}{t_{i+1}-t_i}b_i, & \text{if } t_i \leq t \leq t_{i+1}, \; 1 \leq i \leq n-1, \\
b_n, & \text{if } t_n \leq t \leq 1.
\end{cases}$$ |
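The piecewise-linear answer can also be checked numerically: for jointly Gaussian vectors, $\mathbb{E}[B_t \mid B_{t_1},\ldots,B_{t_n}]$ has coefficient vector $\Sigma_{t,T}\Sigma_T^{-1}$, and this should reproduce the interpolation weights above. A short Python sketch (the grid and the point $t$ are arbitrary choices):
\begin{verbatim}
import numpy as np

t_grid = np.array([0.2, 0.5, 0.7, 0.9])      # 0 < t_1 < ... < t_n < 1
t = 0.6                                      # point at which to predict B_t

Sigma = np.minimum.outer(t_grid, t_grid)     # Cov(B_{t_i}, B_{t_j}) = min(t_i, t_j)
cvec = np.minimum(t, t_grid)                 # Cov(B_t, B_{t_i})
print("regression weights:   ", np.round(np.linalg.solve(Sigma, cvec), 6))

i = np.searchsorted(t_grid, t) - 1           # here t lies in [t_grid[i], t_grid[i+1]]
w = np.zeros_like(t_grid)
w[i] = (t_grid[i + 1] - t) / (t_grid[i + 1] - t_grid[i])
w[i + 1] = (t - t_grid[i]) / (t_grid[i + 1] - t_grid[i])
print("interpolation weights:", np.round(w, 6))
\end{verbatim}
Both lines print the same weight vector, which is exactly the linear-interpolation rule derived above.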
2004-q2 | Let \( \sigma \) be a permutation of \( \{1, 2, 3, \ldots, n\} \) chosen from the uniform distribution. Let \( W(\sigma) = |\{i, 1 \leq i < n : \sigma(i) = i + 1\}| \). (a) Compute \( E(W) \). (b) Find the limiting distribution of \( W \) when \( n \) is large. (c) Prove your answer in Part (b). | $\mathcal{S}_n$ is the set of all permutations of $[n]:=\left\{1, \ldots, n\right\}$ and $\sigma \sim \text{Uniform}(S_n)$. Clearly,
$$ \mathbb{P} \left(\sigma(i) = j \right) = \dfrac{(n-1)!}{n!}=\dfrac{1}{n}. \; \forall \; 1\leq i,j \leq n,$$ i.e., $\sigma(i) \sim \text{Uniform}([n]).$\n\begin{enumerate}[label=(\alph*)]\item We have
$$ \mathbb{E}(W_n) = \mathbb{E} \left[ \sum_{i=1}^{n-1} \mathbbm{1}(\sigma(i)=i+1)\right] = \sum_{i=1}^{n-1} \mathbb{P}(\sigma(i)=i+1) = \dfrac{n-1}{n}.$$ \item Define,
$$ T_n := \sum_{i=1}^{n-1} \mathbbm{1}(\sigma(i)=i+1) + \mathbbm{1}(\sigma(n)=1).$$ It is easy to see that for any $0 \leq k \leq n$,
$$ \left \rvert \left\{\pi \in S_n \Bigg \rvert \sum_{i=1}^{n-1} \mathbbm{1}(\pi(i)=i+1) + \mathbbm{1}(\pi(n)=1)=k\right\}\right\rvert = {n \choose k} D_{n-k},$$ where $D_l$ denotes the number of derangements of an $l$ element set. Therefore, as $n \to \infty$,
$$ \mathbb{P}(T_n=k) = \dfrac{1}{n!} {n \choose k} D_{n-k} = \dfrac{1}{n!} {n \choose k} (n-k)! \left[ \sum_{j=0}^{n-k} \dfrac{(-1)^j}{j!}\right] \longrightarrow \dfrac{1}{k!} \sum_{j=0}^{\infty} \dfrac{(-1)^j}{j!} = e^{-1}\dfrac{1}{k!}.$$ Hence, $T_n \stackrel{d}{\longrightarrow} \text{Poi}(1)$. On the otherhand,
$$ \mathbb{E}(T_n-W_n)^2 = \mathbb{P}(\sigma(n)=1) = \dfrac{1}{n} \longrightarrow 0,$$ and hence $T_n-W_n \stackrel{P}{\longrightarrow} 0$. Use \textit{Slutsky's Theorem} to conclude that $W_n \stackrel{d}{\longrightarrow} \text{Poi}(1)$.\end{enumerate} |
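A small simulation supports both the mean in (a) and the Poisson limit in (b); the Python sketch below uses arbitrary values of $n$ and of the number of replications.
\begin{verbatim}
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)
n, reps = 50, 100_000
W = np.empty(reps, dtype=int)
for r in range(reps):
    sigma = rng.permutation(n) + 1                     # sigma on {1,...,n}
    W[r] = np.sum(sigma[:-1] == np.arange(2, n + 1))   # #{i<n : sigma(i)=i+1}

print("E(W):", W.mean(), " vs (n-1)/n =", (n - 1) / n)
for k in range(5):
    print("P(W=%d):" % k, np.mean(W == k), "  Poisson(1):", exp(-1) / factorial(k))
\end{verbatim}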
2004-q3 | Let \( F_{\theta}(x) \) be a jointly continuous function from \( 0 \leq \theta \leq 1, 0 \leq x \leq 1 \). For fixed \( x \), let \( \theta^*(x) \) maximize \( f_{\theta}(x) \). Prove that \( x \mapsto \theta^*(x) \) is Borel measurable. | \textbf{Correction :} $F(\theta,x)$ is jointly continuous from $0 \leq \theta \leq 1$ and $0 \leq x \leq 1$. Thus for fixed $x$, $\theta \mapsto F(\theta,x)$ is continuous on the compact set $[0,1]$ and hence attains its maximum. We assume that the maximum is attained uniquely for each $x \in [0,1]$. Otherwise, the assertion of the problem is not necessarily true. For example take $F \equiv 0$. Then if the assertion of the problem is true, it will imply that any function $g :[0,1] \mapsto [0,1]$ has to be Borel measurable, which is not true.\n
Set $\theta^{*}(x) := \arg \sup_{\theta \in [0,1]} F(\theta,x)$ and $G_r(x) := \sup_{\theta \in [0,r]} F(\theta,x)$, for all $0 \leq r,x \leq 1$. Clearly, $\theta^* : [0,1] \mapsto [0,1]$ and since $F$ is continuous,
$$ G_r(x) = \sup_{\theta \in [0,r]} F(\theta,x) = \sup_{\theta \in [0,r] \cap \mathbb{Q}} F(\theta,x).$$ Being continuous, $x \mapsto F(\theta,x)$ is Borel measurable for all $\theta \in [0,1]$ and therefore so is $G_r$. Thus for all $s \in [0,1)$,
\begin{align*}
\theta^{* -1}((s,\infty)) = \left\{x \in[0,1] \mid \theta^*(x)>s\right\} = \left\{x \in[0,1] \mid G_1(x) > G_s(x)\right\} = (G_1-G_s)^{-1}((0,\infty)).
\end{align*} Since, $G_1$ and $G_s$ are both Borel measurable, this shows $\theta^{* -1}((s,\infty)) \in \mathcal{B}_{[0,1]}$ for all $s \in [0,1)$. This proves that $\theta^{*}$ is Borel measurable. |
2004-q4 | Let \( X_1, \ldots , X_m \) be independent random variables, with \( P\{X_k = \pm k^{\lambda/2}\} = 1/(2k^\lambda) \) and \( P\{X_k = 0\} = 1 - 1/k^\lambda \), where \( 0 < \lambda \leq 1 \). Let \( S_n = \sum_{k=1}^{n} X_k \), \( Z_m = \sum_{n=1}^{m} S_n/c_m \) where \( c_m \) are positive constants. Is it possible to choose \( c_m \) so that \( Z_m \) converges in distribution to a standard normal distribution as \( m \to \infty \)? Explain your answer. | We have $X_k$'s independent with $\mathbb{P}(X_k = \pm k^{\lambda/2}) = 1/(2k^{\lambda})$ and $\mathbb{P}(X_k=0)=1-1/k^{\lambda}$, with $0 < \lambda \leq 1$. $S_m = \sum_{k=1}^m X_k$ and $Z_n = \sum_{k=1}^n S_k = \sum_{k=1}^n (n-k+1)X_k.$ Note that for all $k \geq 1$,
$$ \mathbb{E}(X_k) =0, \; \mathbb{E}(X_k^2)=k^{\lambda}k^{-\lambda}=1.$$
\begin{enumerate}\item[\textbf{Case 1:}] $\lambda \in (0,1)$.
Note that in this case,
$$ \mathbb{E}(Z_n) =0, \; v_n := \operatorname{Var}(Z_n) = \sum_{k=1}^n (n-k+1)^2\operatorname{Var}(X_k) = \sum_{k=1}^n (n-k+1)^2 = \sum_{k=1}^n k^2 = \dfrac{n(n+1)(2n+1)}{6} \sim \dfrac{n^3}{3},$$ and\n\begin{align*} 0 \leq v_n^{-3/2} \sum_{k=1}^n \mathbb{E}|(n-k+1)X_k|^3 = v_n^{-3/2} \sum_{k=1}^n (n-k+1)^3k^{3\lambda/2} k^{-\lambda} &= v_n^{-3/2} \sum_{k=1}^n (n-k+1)^3k^{\lambda/2} \\
& \leq v_n^{-3/2} \sum_{k=1}^n n^3n^{\lambda/2} \\
&\leq C n^{-9/2}n^{4+\lambda/2} = Cn^{-(1-\lambda)/2} \longrightarrow 0, \end{align*} as $n \to \infty$. \textit{Lyapounov's Theorem} now implies that
$$ \dfrac{Z_n}{\sqrt{v_n}} \stackrel{d}{\longrightarrow} N(0,1), \; \text{as } n \to \infty \Rightarrow n^{-3/2}Z_n \stackrel{d}{\longrightarrow} N(0,1/3), \; \text{as } n \to \infty.$$\n
\item[\textbf{Case 2:}] $\lambda=1$.\n
We use characteristic functions here. For any sequence of positive integers $\left\{c_m\right\}_{m \geq 1}$ with $c_m \uparrow \infty$ and $t \in \mathbb{R}$, we have\n\begin{align*} \mathbb{E} \exp\left( \dfrac{itZ_m}{c_m}\right) &= \prod_{k=1}^m \mathbb{E} \exp\left( \dfrac{it(m-k+1)X_k}{c_m}\right) \\
&= \prod_{k=1}^m \left[\left( 1-\dfrac{1}{k}\right)+ \dfrac{1}{2k} \left\{\exp \left(\dfrac{it(m-k+1)\sqrt{k}}{c_m}\right) + \exp \left(-\dfrac{it(m-k+1)\sqrt{k}}{c_m}\right) \right\}\right] \\
&= \prod_{k=1}^m \left[\left( 1-\dfrac{1}{k}\right)+ \dfrac{1}{k}\cos \left(\dfrac{t(m-k+1)\sqrt{k}}{c_m}\right)\right]. \
\end{align*} Therefore, for large enough $m$,
$$ \log \mathbb{E} \exp\left( \dfrac{itZ_m}{c_m}\right) = \sum_{k=1}^m \log \left[1- \dfrac{2}{k}\sin^2 \left(\dfrac{t(m-k+1)\sqrt{k}}{2c_m}\right)\right].$$ Note that $1-(2/k) \sin^2(t(m-k+1)\sqrt{k}/2c_m) >0$ for $3 \leq k \leq m$. On the other hand, $1-2\sin^2(tm/2c_m)$, $ 1-\sin^2(t(m-1)\sqrt{2}/2c_m) >0$ for large enough $m$ and fixed $t$, provided $m=o(c_m)$. Hence, taking logarithms is valid.
Using the fact that $| \log(1+x)-x| \leq Cx^2$, for all $|x| \leq 2$ for some $C \in (0,\infty)$, we get\n\begin{align*} \Bigg \rvert \sum_{k=1}^m \log \left[1- \dfrac{2}{k}\sin^2 \left(\dfrac{t(m-k+1)\sqrt{k}}{2c_m}\right)\right] + \sum_{k=1}^m \dfrac{2}{k}\sin^2 \left(\dfrac{t(m-k+1)\sqrt{k}}{2c_m}\right) \Bigg \rvert &\leq \sum_{k=1}^m C \dfrac{4}{k^2}\sin^4 \left(\dfrac{t(m-k+1)\sqrt{k}}{2c_m}\right) \\
&\leq 4C \sum_{k=1}^m \dfrac{1}{k^2} \dfrac{t^4(m-k+1)^4k^2}{16c_m^4}\ \\
& = 4C \sum_{k=1}^m \dfrac{t^4(m-k+1)^4}{16c_m^4}\ \\
& \leq C_1 \dfrac{t^4 m^5}{c_m^4} =o(1), \end{align*} provided $m^5 =o(c_m^4)$. Take $c_m = m^{3/2}$. First note that $|\sin^2 x - \sin^2 y | \leq 2 |x-y|$, for all $x,y \in \mathbb{R}$ (obtained by \textit{Mean Value Theorem}) and hence\n\begin{align*} \Bigg \rvert \sum_{k=1}^m \dfrac{2}{k}\sin^2 \left(\dfrac{t(m-k+1)\sqrt{k}}{2m^{3/2}}\right) - \sum_{k=1}^m \dfrac{2}{k}\sin^2 \left(\dfrac{t(m-k)\sqrt{k}}{2m^{3/2}}\right) \Bigg \rvert &\leq \sum_{k=1}^m \dfrac{4}{k} \dfrac{|t|\sqrt{k}}{2m^{3/2}} \\
&= \sum_{k=1}^m \dfrac{2|t|}{\sqrt{k}m^{3/2}} \leq \dfrac{2|t|m}{m^{3/2}} = o(1).\end{align*} On the other hand,
\begin{align*} \sum_{k=1}^m \dfrac{2}{k}\sin^2 \left(\dfrac{t(m-k)\sqrt{k}}{2m^{3/2}}\right) = 2 \dfrac{1}{m}\sum_{k=1}^m \dfrac{m}{k} \sin^2 \left(\dfrac{t(1-k/m)\sqrt{k/m}}{2}\right) & \stackrel{m \to \infty}{\longrightarrow} 2 \int_{0}^1 \dfrac{1}{x}\sin^2 \left( \dfrac{t(1-x)\sqrt{x}}{2}\right) \, dx \\
&\leq 2\int_{0}^1 \dfrac{t^2(1-x)^2x}{4x} \, dx = t^2/6 < \infty.\end{align*} Combining the above we conclude that
\begin{align}
\log \mathbb{E} \exp\left( \dfrac{itZ_m}{m^{3/2}}\right) &\stackrel{m \to \infty}{\longrightarrow} - 2 \int_{0}^1 \dfrac{\sin^2 \left( t(1-x)\sqrt{x}/2 \right)}{x} \, dx \nonumber \\
&= - \int_{0}^1 \dfrac{1}{x} \left[1-\cos(t(1-x)\sqrt{x}) \right] \, dx \nonumber\\
&= - \int_{0}^1 \dfrac{1}{x} \left[ \sum_{j \geq 1} \dfrac{(-1)^{j-1}(t(1-x)\sqrt{x})^{2j}}{(2j)!}\right] \, dx \nonumber\\
&= - \sum_{j \geq 1} \dfrac{(-1)^{j-1}t^{2j}\operatorname{Beta}(j,2j+1)}{(2j)!} \nonumber \\
&= - \sum_{j \geq 1} \dfrac{(-1)^{j-1}t^{2j}(j-1)!}{(3j)!}, \label{taylor}
\end{align}
where in the penultimate line the interchange of sum and integral is justified via \textit{Fubini's Theorem}. In other words, $Z_m/ \sqrt{m^3}$ converges in distribution to $G$ whose characteristic function is given below.
$$ \phi_G(t) = \exp \left[ -2 \int_{0}^1 \dfrac{1}{z} \sin^2 \left( \dfrac{t(1-z)\sqrt{z}}{2}\right) \, dz\right], \forall \; t \in \mathbb{R},$$ provided we show that $\phi_G$ is continuous at $0$ (see \cite[Theorem 3.3.18]{dembo}). But that can easily be established using DCT since $\sup_{-1 \leq t \leq 1} \sin^2(t(1-z)\sqrt{z}/2)/z \leq (1-z)^2z/(4z)=(1-z)^2/4$, which is integrable on $[0,1]$.
Now, if $Z_m/c_m$ converged in distribution to $N(0,1)$ for some sequence $\left\{c_m\right\}_{m \geq 1}$, then $c_m/\sqrt{m^3}$ would converge to some real number and $G$ would have to be a Gaussian distribution with zero mean and some variance, which is not the case as is evident from (\ref{taylor}). Hence no such choice of $c_m$ is possible. |
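The identity (\ref{taylor}) and the non-Gaussian character of $G$ can be checked numerically: the sketch below compares a midpoint-rule evaluation of $-2\int_0^1 \sin^2(t(1-x)\sqrt{x}/2)\,x^{-1}\,dx$ with the partial sums of the series, and with the purely quadratic exponent $-t^2/6$ that a Gaussian limit would force (the grid size and truncation level are arbitrary).
\begin{verbatim}
import numpy as np
from math import factorial

def log_cf_integral(t, n_grid=200_000):
    # integrand is bounded near 0, so a plain midpoint rule suffices
    x = (np.arange(n_grid) + 0.5) / n_grid
    return -2.0 * np.mean(np.sin(t * (1 - x) * np.sqrt(x) / 2) ** 2 / x)

def log_cf_series(t, J=40):
    return -sum((-1) ** (j - 1) * t ** (2 * j) * factorial(j - 1) / factorial(3 * j)
                for j in range(1, J + 1))

for t in (0.5, 1.0, 2.0, 4.0):
    print(t, log_cf_integral(t), log_cf_series(t), -t ** 2 / 6)
\end{verbatim}
The first two columns agree, while the third departs from them for larger $t$, consistent with $\log \phi_G$ not being a quadratic and hence $G$ not being Gaussian.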
2004-q5 | Let \( \{ M_n, \mathcal{F}_n, n \geq 1 \} \) be a martingale such that for some \( r > 2 \),
\[
sup_n E(|M_n - M_{n-1}|^r|\mathcal{F}_{n-1}) < \infty \quad \text{and} \quad \lim_{n \to \infty} E((M_n - M_{n-1})^2|\mathcal{F}_{n-1}) = 1 \text{ a.s.}
\]
Let \( T_b = \inf\{n \geq 1 : M_n \geq b \}, \tau_b = \inf\{n \geq 1 : M_n + b^{-1}n \geq b\} (\inf \emptyset = \infty) \). Show that as \( b \to \infty \), \( T_b/b^2 \) and \( \tau_b/b^2 \) have limiting distributions and write down the density functions (with respect to Lebesgue measure) of the limiting distributions. | We intend to use \textit{Martingale CLT}. By conditions given on the MG $M$, we have that $M_n$ is $L^r$-integrable, in particular $L^2$-integrable for all $n \geq 1$. Let $\mu := \mathbb{E}(M_1)$. Set $M_0 := \mu$. Define for all $n \geq 1$,
$$ M_{n,0} = 0, \; M_{n,k} = \dfrac{M_k-\mu}{\sqrt{n}}, \; k \geq 1.$$ Also set $\mathcal{F}_{n,l}:=\mathcal{F}_l$ for all $n,l \geq 1$ and set $\mathcal{F}_{n,0}$ to be the trivial $\sigma$-algebra. It is therefore evident that $\left\{M_{n,k}, \mathcal{F}_{n,k}, k \geq 0\right\}$ is a $L^2$-MG. The predictable compensator of this MG is $$ \left\langle M_n \right\rangle_l = \sum_{k=1}^l \mathbb{E} \left[ (M_{n,k}-M_{n,k-1})^2 \mid \mathcal{F}_{n,k-1} \right] = \dfrac{1}{n} \sum_{k=1}^l \mathbb{E} \left[ (M_{k}-M_{k-1})^2 \mid \mathcal{F}_{k-1} \right]. $$ We are given that $$ \lim_{n \to \infty} \mathbb{E} \left[(M_n-M_{n-1})^2 \mid \mathcal{F}_{n-1} \right] =1, \; \text{almost surely},$$ and hence for all $t \in (0,1]$ $$ \lim_{n \to \infty} \dfrac{1}{\lfloor nt \rfloor} \sum_{k=1}^{\lfloor nt \rfloor} \mathbb{E} \left[(M_k-M_{k-1})^2 \mid \mathcal{F}_{k-1} \right] =1, \; \text{almost surely}.$$ In particular $ \left\langle M_n \right\rangle_{\lfloor nt \rfloor} \longrightarrow t$, almost surely. The limit also clearly also holds for $t=0$. Setting $D_{n,k} = M_{n,k}-M_{n,k-1}$, we have for all $\varepsilon >0$, \begin{align*} g_n(\varepsilon) = \sum_{k=1}^n \mathbb{E} \left[D_{n,k}^2 \mathbbm{1} \left( |D_{n,k}| \geq \varepsilon \right) \mid \mathcal{F}_{n,k-1} \right] &\leq \sum_{k=1}^n \mathbb{E} \left(|D_{n,k}|^r \varepsilon^{2-r} \mid \mathcal{F}_{n,k-1}\right) \\
&= \dfrac{1}{n^{r/2}\varepsilon^{r-2}} \sum_{k=1}^n \mathbb{E} \left(|M_k-M_{k-1}|^r \mid \mathcal{F}_{k-1}\right) \\
&= n^{1-r/2} \varepsilon^{2-r} \sup_{k \geq 1} \mathbb{E} \left(|M_k-M_{k-1}|^r \mid \mathcal{F}_{k-1}\right) \stackrel{a.s}{\longrightarrow} 0.\end{align*} Define $$ \widehat{S}_n(t) = M_{n,\lfloor nt \rfloor} + \left( nt-\lfloor nt \rfloor\right) D_{n,\lfloor nt \rfloor+1}, \;\forall \; n \geq 1, \; t \in [0,1].$$ Then by Martingale CLT (\cite[Theorem 10.2.22]{dembo}) we have that $\widehat{S}_n$ converges in distribution on $C([0,1])$ to the standard BM $W$ on $[0,1]$. Since the process $\widehat{S}_n$ is obtained from $M$ by linear interpolation, it is easy to see that $$ \sup_{0 \leq t \leq 1} \widehat{S}_n(t) = \max_{0 \leq k \leq n} M_{n,k}.$$ Now we have $T_b := \inf\left\{n \geq 1 \mid M_n \geq b\right\}$. Then for any $x>0$ and $b>\mu$, we have \begin{align*} \mathbb{P} (b^{-2}T_b \leq x) = \mathbb{P} \left( \sup_{1 \leq k \leq \lfloor xb^2 \rfloor} M_k \geq b\right) = \mathbb{P} \left( \sup_{1 \leq k \leq \lfloor xb^2 \rfloor} \dfrac{M_k-\mu}{\sqrt{\lfloor xb^2 \rfloor}} \geq \dfrac{b-\mu}{\sqrt{\lfloor xb^2 \rfloor}}\right) &= \mathbb{P} \left( \sup_{1 \leq k \leq \lfloor xb^2 \rfloor} M_{\lfloor xb^2 \rfloor, k} \geq \dfrac{b-\mu}{\sqrt{\lfloor xb^2 \rfloor}}\right) \\
&= \mathbb{P} \left( \sup_{0 \leq k \leq \lfloor xb^2 \rfloor} M_{\lfloor xb^2 \rfloor, k} \geq \dfrac{b-\mu}{\sqrt{\lfloor xb^2 \rfloor}}\right). \end{align*} Recall that the function $\mathbf{x} \mapsto \sup_{0 \leq t \leq 1} \mathbf{x}(t)$ is continuous on $C([0,1])$. Therefore, $$ \sup_{0 \leq t \leq 1} \widehat{S}_{\lfloor xb^2 \rfloor}(t) \stackrel{d}{\longrightarrow} \sup_{0 \leq t \leq 1} W(t) \stackrel{d}{=} |W(1)|, \; \text{as } b \to \infty.$$ The limiting distribution being continuous, the convergence is uniform (using \textit{Polya's Theorem}) and therefore, $$ \mathbb{P} (b^{-2}T_b \leq x) = \mathbb{P} \left( \sup_{0 \leq t \leq 1} \widehat{S}_{\lfloor xb^2 \rfloor}(t) \geq \dfrac{b-\mu}{\sqrt{\lfloor xb^2 \rfloor}}\right) \stackrel{b \to \infty}{\longrightarrow} \mathbb{P} \left( \sup_{0 \leq t \leq 1} W(t) \geq x^{-1/2}\right) = \mathbb{P}(|W(1)| \geq x^{-1/2}) = 2\bar{\Phi}(x^{-1/2}).$$ The limiting density is $$ f_T(x) = x^{-3/2}\phi(x^{-1/2})\mathbbm{1}(x >0) = \dfrac{1}{\sqrt{2\pi x^3}} \exp \left( -\dfrac{1}{2x}\right)\mathbbm{1}(x >0). $$ Also clearly,
$$\mathbb{P} (b^{-2}T_b \leq x) \stackrel{b \to \infty}{\longrightarrow}\mathbb{P}(|W(1)| \geq x^{-1/2}) = \mathbb{P}(W(1)^{-2} \leq x), $$ i.e., $T_b/b^2$ converges in distribution to the $\text{Inv-}\chi^2_1$ distribution.\n
Now we have $\tau_b := \inf\left\{n \geq 1 \mid M_n +b^{-1}n \geq b\right\}$. Then for any $x>0$ and $b>\mu$, we have \begin{align*} \mathbb{P} (b^{-2}\tau_b \leq x) = \mathbb{P} \left( \sup_{1 \leq k \leq \lfloor xb^2 \rfloor} (M_k+b^{-1}k) \geq b\right) &= \mathbb{P} \left( \sup_{1 \leq k \leq \lfloor xb^2 \rfloor} \dfrac{M_k +b^{-1}k-\mu}{\sqrt{\lfloor xb^2 \rfloor}} \geq \dfrac{b-\mu}{\sqrt{\lfloor xb^2 \rfloor}}\right) \\
&= \mathbb{P} \left( \sup_{0 \leq k \leq \lfloor xb^2 \rfloor} \left[M_{\lfloor xb^2 \rfloor, k} + \dfrac{\sqrt{\lfloor xb^2 \rfloor}}{b}\dfrac{k}{\lfloor xb^2 \rfloor}\right]\geq \dfrac{b-\mu}{\sqrt{\lfloor xb^2 \rfloor}}\right) \\
&= \mathbb{P} \left( \sup_{0 \leq t \leq 1} \left[\widehat{S}_{\lfloor xb^2 \rfloor}(t) + \dfrac{\sqrt{\lfloor xb^2 \rfloor}}{b}t \right] \geq \dfrac{b-\mu}{\sqrt{\lfloor xb^2 \rfloor}}\right),\end{align*} where the last assertion is true since $t \mapsto \widehat{S}_{n}(t) +\kappa t$ on $[0,1]$ is the piecewise linear interpolation of $$\left\{(k/n) \mapsto M_{n,k}+\kappa n^{-1}k \mid 0 \leq k \leq n\right\},$$ for any $\kappa$ and hence the maximum is attained at $k/n$ for some $0 \leq k \leq n$. Recall that the function $\mathbf{x} \mapsto \sup_{0 \leq t \leq 1} \mathbf{x}(t)$ is continuous on $C([0,1])$. Therefore, $$ \sup_{0 \leq t \leq 1} \left[\widehat{S}_{\lfloor xb^2 \rfloor}(t) + \sqrt{x}t \right] \stackrel{d}{\longrightarrow} \sup_{0 \leq t \leq 1} (W(t)+\sqrt{x}t), \; \text{as } b \to \infty.$$ The problem now boils down to finding the distribution of first passage times of Brownian motion with drift. Since I couldn't find any good self-contained reference, I am adding the derivation in the Appendix. For $y>0$, $$ \mathbb{P} \left(\sup_{0 \leq t \leq 1} (W(t)+\sqrt{x}t) \geq y \right) = e^{2y\sqrt{x}}\bar{\Phi}\left( y+\sqrt{x}\right) + \bar{\Phi}\left( y-\sqrt{x}\right).$$ The limiting distribution being continuous, the convergence is uniform (using \textit{Polya's Theorem}) and therefore, $$ \mathbb{P} (b^{-2}\tau_b \leq x) \stackrel{b \to \infty}{\longrightarrow} \mathbb{P} \left( \sup_{0 \leq t \leq 1} (W(t)+\sqrt{x}t) \geq x^{-1/2}\right) = e^2\bar{\Phi}\left( \sqrt{x}+\frac{1}{\sqrt{x}}\right) + \bar{\Phi}\left( \frac{1}{\sqrt{x}}-\sqrt{x}\right).$$ The limiting density (with respect to Lebesgue measure) is \begin{align*} f_{\tau}(x) &=\left[- e^2\phi\left( \sqrt{x}+\dfrac{1}{\sqrt{x}}\right)\left( \dfrac{1}{2\sqrt{x}}-\dfrac{1}{2\sqrt{x^3}}\right) - \phi\left( -\sqrt{x}+\dfrac{1}{\sqrt{x}}\right)\left( -\dfrac{1}{2\sqrt{x}}-\dfrac{1}{2\sqrt{x^3}}\right) \right] \mathbbm{1}(x>0) \\
&=\left[-\phi\left( -\sqrt{x}+\dfrac{1}{\sqrt{x}}\right)\left( \dfrac{1}{2\sqrt{x}}-\dfrac{1}{2\sqrt{x^3}}\right) - \phi\left( -\sqrt{x}+\dfrac{1}{\sqrt{x}}\right)\left( -\dfrac{1}{2\sqrt{x}}-\dfrac{1}{2\sqrt{x^3}}\right) \right]\mathbbm{1}(x>0) \\
&= x^{-3/2} \phi\left( -\sqrt{x}+\dfrac{1}{\sqrt{x}}\right)\mathbbm{1}(x>0) = \dfrac{1}{\sqrt{2\pi x^3}} \exp \left( -\dfrac{x}{2}-\dfrac{1}{2x}+1\right)\mathbbm{1}(x>0). \end{align*} |
2004-q6 | Let \( X_1, X_2, \ldots \) be independent with \( P(X_n = 1) = p, \) \( P(X_n = 0) = r, \) \( P(X_n = -1) = q, \) where \( p + r + q = 1 \). Let \( S_n = X_1 + \cdots + X_n (S_0 = 0) \), and for \( b \) a positive integer, let \( T = \inf\{n : n \geq 1, S_n = 0 \text{ or } b\} \). (a) Find \( P(S_T = 0) \) and \( E(T) \). (b) Now let \( M = \inf\{n : S_n - \min_{0\leq k \leq n} S_k \geq b\} \). Find \( E(M) \). Hint: By considering the first time this process returns to 0, you can express \( E(M) \) in terms of the answer to (a). | \textbf{Correction :} $T=\inf \left\{n : n \geq 1, S_n=-1 \text{ or } b\right\}$ and $r<1$. We need to find $\mathbb{P}(S_T=-1)$ in part (a).\nWe have $X_i$'s \textit{i.i.d.} with $\mathbb{P}(X_1=1)=p, \mathbb{P}(X_1=0)=r$ and $\mathbb{P}(X_1=-1)=q$, where $p+q+r=1$. Let $S_n = \sum_{k=1}^n X_k$ with $S_0=0$. For $b>0$, let $T_b := \inf \left\{n \mid n \geq 1, S_n \in \left\{-1,b \right\}\right\}.$\n\begin{enumerate}[label=(\alph*)]\item Let $\mu = \mathbb{E}X_1 = p-q$. Since $r<1$, we have $\limsup_{n \to \infty} |S_n| =\infty$ almost surely and hence $T_b<\infty$ almost surely. \nFor $p \neq q$, recall that $\left\{(q/p)^{S_n}, n \geq 0 \right\}$ is a MG with respect to the canonical filtration. Since, $(q/p)^{S_{T_b \wedge n}} \leq (q/p)^b + p/q,$ the stopped MG $\left\{ (q/p)^{S_{T_b \wedge n}} : n \geq 0\right\}$ is U.I. and hence by OST we conclude $\mathbb{E}\left[ \left(q/p \right)^{S_{T_b}}\right] = 1.$ Therefore,
$$ (q/p)^{-1} \mathbb{P}(S_{T_b}=-1) + (q/p)^{b} \mathbb{P}(S_{T_b}=b) =1 \Rightarrow \mathbb{P}(S_{T_b}=-1) = \dfrac{1-(q/p)^{b}}{(p/q)-(q/p)^{b}}= \dfrac{q(p^b-q^{b})}{p^{b+1}-q^{b+1}}.$$ Also $\left\{S_n-n\mu, n \geq 0 \right\}$ is a MG with respect to the canonical filtration. Since, $|S_{T_b \wedge n}| \leq b$, we have by DCT that $\mathbb{E}(S_{T_b \wedge n}) \to \mathbb{E}(S_{T_b})$. By OST we conclude $\mathbb{E}\left[ S_{T_b \wedge n} -(T_b \wedge n)\mu\right] = 0,$ implying that\n\begin{align*} \mathbb{E}(T_b) = \lim_{n \to \infty} \mathbb{E}(T_b \wedge n) = \mu^{-1} \lim_{n \to \infty} \mathbb{E}(S_{T_b \wedge n}) = \mu^{-1}\mathbb{E}(S_{T_b}) &= \dfrac{1}{\mu}\dfrac{b(p^{b+1}-qp^b)-(qp^b-q^{b+1})}{p^{b+1}-q^{b+1}} \\
&= \dfrac{bp^{b+1}-(b+1)qp^b + q^{b+1}}{(p-q)(p^{b+1}-q^{b+1})}. \end{align*} \nFor $p = q$, recall that $\left\{S_n, n \geq 0 \right\}$ is a MG with respect to the canonical filtration. Since, $|S_{T_b \wedge n}| \leq b,$ the stopped MG $\left\{ S_{T_b \wedge n} : n \geq 0\right\}$ is U.I. and hence by OST we conclude $\mathbb{E}\left[ S_{T_b}\right] = 0.$ Therefore,
$$ -\mathbb{P}(S_{T_b}=-1) + b \mathbb{P}(S_{T_b}=b) =0 \Rightarrow \mathbb{P}(S_{T_b}=-1) = \dfrac{b}{b+1}.$$ Also $\left\{S_n^2-n\sigma^2, n \geq 0 \right\}$ is a MG with respect to the canonical filtration, where $\sigma^2=\operatorname{Var}(X_1)=p+q=2p=1-r.$ Since $|S_{T_b \wedge n}| \leq b$, we have by DCT that $\mathbb{E}(S_{T_b \wedge n}^2) \to \mathbb{E}(S_{T_b}^2)$. By OST we conclude $\mathbb{E}\left[ S_{T_b \wedge n}^2 -(T_b \wedge n)\sigma^2\right] = 0,$ implying that
\begin{align*} \mathbb{E}(T_b) = \lim_{n \to \infty} \mathbb{E}(T_b \wedge n) = \sigma^{-2} \lim_{n \to \infty} \mathbb{E}(S_{T_b \wedge n}^2) = \sigma^{-2}\mathbb{E}(S_{T_b}^2) &= \dfrac{1}{\sigma^2}\dfrac{b+b^2}{b+1} = \dfrac{b}{1-r}. \end{align*}
\item We define $M:= \inf \left\{ n : Y_n \geq b\right\}$, where $Y_n:=S_n - \min_{0 \leq k \leq n} S_k = \max_{0 \leq k \leq n} (S_n-S_k).$ Observe that $Y_n \geq 0$ and if $Y_n \geq 1$, then $\min_{0 \leq k \leq n} S_k \leq S_n-1 \leq S_{n+1}$ which implies that
$$Y_{n+1}=S_{n+1}-\min_{0 \leq k \leq n+1} S_k = S_{n+1}-\min_{0 \leq k \leq n} S_k = S_{n+1}-S_n+Y_n = Y_n+X_{n+1}.$$ On the otherhand, if $Y_n=0$ then $S_n=\min_{0 \leq k \leq n} S_k$. If $X_{n+1} \leq 0$, then $S_{n+1}=\min_{0 \leq k \leq n+1} S_k$ and hence $Y_{n+1}=0$. If $X_{n+1}=1$, then clearly $S_{n+1}=\min_{0 \leq k \leq n} S_k + 1 = \min_{0 \leq k \leq n+1} S_k+1$, leading to $Y_{n+1}=1$. Combining all of these we can conclude that the stochastic process $\left\{Y_n : n \geq 0\right\}$ satisfies $$ Y_0 = 0, \; \; Y_{n+1} = (Y_n+X_{n+1})_{+} = \max(Y_n+X_{n+1},0), \; \forall \; n \geq 0.$$ This recursion relation also shows that $\left\{Y_n : n \geq 0\right\}$ is a Markov Chain on the state space $\mathbb{Z}_{\geq 0}$ with the transition probabilities $p(i,i+1)=p, p(i,i)=r, p(i,i-1)=q$ for all $i \geq 1$ and $p(0,0)=q+r, p(0,1)=p$. In comparison to this $\left\{S_n : n \geq 0\right\}$ is also a Markov Chain on the state space $\mathbb{Z}$ with the transition probabilities $p(i,i+1)=p, p(i,i)=r, p(i,i-1)=q$ for all $i \in \mathbb{Z}.$ \n \n\nLet $\tau := \inf \left\{n : Y_n > 0\right\} = \inf \left\{n : X_n >0\right\}$, where the equality follows from the recursion structure. Clearly $\tau \sim \text{Geo}(p)$. Let $\tau_1 := \inf \left\{n \geq \tau : Y_n \in \left\{0,b\right\}\right\} = \inf \left\{n \geq 1 : Y_n \in \left\{0,b\right\}, Y_{n-1} \neq 0\right\} $. Inductively define $\tau_j = \inf \left\{n >\tau_{j-1} : Y_n \in \left\{0,b\right\}, Y_{n-1} \neq 0\right\},$ for all $j \geq 2$. Finally let $N:= \inf \left\{j \geq 1 : Y_{\tau_j}=b\right\}.$ Then clearly $M = \tau_N$. \n\nAgain it is easy to observe by the Markov chain property of the process $Y$ that $$ \tau_j-\tau_{j-1} \mid (N \geq j) \stackrel{d}{=} \tau_j-\tau_{j-1} \mid (Y_{\tau_{j-1}}=0) \stackrel{d}{=} \tau_1 = \tau + \tau^{\prime},\; \forall \; j \geq 1,$$ where $\tau_0 :=0$, $\tau^{\prime}$ is the time required by the process $Y$ to reach $\left\{0,b\right\}$ starting from $1$ and clearly $\tau^{\prime} \stackrel{d}{=}T_{b-1}$ by translation invariance of the chain $\left\{S_n : n \geq 0\right\}$. By the same line of argument,
$$ Y_{\tau_j} \mid (N \geq j) \stackrel{d}{=} Y_{\tau_j} \mid (Y_{\tau_{j-1}}=0) \stackrel{d}{=} Y_{\tau_1} \stackrel{d}{=} S_{T_{b-1}}+1,\; \forall \; j \geq 1.$$ This shows that $$ \mathbb{P}(N \geq j+1 \mid N \geq j) = \mathbb{P}(Y_{\tau_j}=0 \mid N \geq j) = \mathbb{P}(S_{T_{b-1}}=-1) =: f(b,p,q,r) >0, \; \forall \; j \geq 1.$$ This implies that $\mathbb{P}(N \geq j) = f(b,p,q,r)^{j-1}$ for all $j \geq 1$. In other words $N \sim \text{Geo}(1-f(b,p,q,r))$ and hence integrable. \begin{align*} \mathbb{E}(M) = \mathbb{E} \left[ \sum_{j \geq 1} (\tau_j-\tau_{j-1})\mathbbm{1}(N \geq j)\right] =\sum_{j \geq 1} \mathbb{E} \left[\tau_j-\tau_{j-1} \mid N \geq j \right]\mathbb{P}(N \geq j) &= \mathbb{E}(N)\mathbb{E}(\tau_1) \end{align*} \n\textit{where } \mathbb{E}(N) = \dfrac{1}{1-f(b,p,q,r)}, \text{and } \mathbb{E}(\tau_1) = \mathbb{E}(\tau)+\mathbb{E}(\tau^{\prime}).\textit{Applying basic expectation formulas, we find:}
$$ \mathbb{E}(M) = \mathbb{E}(N) \left[ \mathbb{E}(\tau)+\mathbb{E}(\tau^{\prime})\right] = \dfrac{p^{-1}+\mathbb{E}(T_{b-1})}{1-f(b,p,q,r)}.$$ Plugging in the expressions from part (a) and simplifying we get $$\mathbb{E}(M) = \begin{cases} \dfrac{bp^{b+1}-(b+1)qp^b + q^{b+1}}{(p-q)^2p^b} = \dfrac{b}{p-q}-\dfrac{q(p^b-q^b)}{p^b(p-q)^2}, & \text{ if } p \neq q, \\
\dfrac{b(b+1)}{2p}, & \text{ if }p=q. \end{cases}$$ \end{enumerate}

\appendix \section{Appendix : Problem 5} Suppose $\left\{W_t : t \geq 0\right\}$ is a standard BM starting from $0$ and $\mu >0$. Take $b>0$. We are interested in finding the distribution of $T_b :=\inf \left\{ t \geq 0 \mid W_t+\mu t \geq b \right\}$. Fix $T>0$. We shall compute $\mathbb{P}(T_b \leq T)$. Let $\widetilde{W}_t=T^{-1/2}W_{Tt}$, for all $t \geq 0$. Also set $B_s=\widetilde{W}_s-s\widetilde{W}_1$ for $0 \leq s \leq 1$. Then $\left\{\widetilde{W}_s : s \geq 0\right\}$ is a standard BM while $\left\{B_s : 0 \leq s \leq 1\right\}$ is a standard Brownian Bridge process independent of $\widetilde{W}_1.$ Observe that
\begin{align*} (T_b \leq T) = \left( \sup_{0 \leq t \leq T} (W_t + \mu t) \geq b\right) = \left( \sup_{0 \leq s \leq 1} (W_{Ts} + \mu Ts) \geq b\right) &= \left( \sup_{0 \leq s \leq 1} (\sqrt{T}B_s+\sqrt{T}s\widetilde{W}_1 + \mu Ts) \geq b\right) \\
&= \left( \sup_{0 \leq s \leq 1} \left(B_s+s(\widetilde{W}_1 + \mu \sqrt{T})\right) \geq b/\sqrt{T}\right). \; \textit{(Truncated at this point.)} \end{align*} |
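Returning to part (b) of the preceding problem, the closed form for $E(M)$ in the case $p \neq q$ is easy to check by simulating the recursion $Y_{n+1}=(Y_n+X_{n+1})_+$ directly; the parameter values in the Python sketch below are arbitrary (with $p>q$).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
p, r, b = 0.4, 0.3, 3
q = 1 - p - r

def sample_M():
    y, n = 0, 0                       # Y_n = S_n - min_{k<=n} S_k
    while y < b:
        u = rng.random()
        x = 1 if u < p else (-1 if u < p + q else 0)
        y = max(y + x, 0)
        n += 1
    return n

est = np.mean([sample_M() for _ in range(20_000)])
exact = b / (p - q) - q * (p ** b - q ** b) / (p ** b * (p - q) ** 2)
print("empirical E(M):", est, "  formula:", exact)
\end{verbatim}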
2005-q1 | Let X be a nonnegative random variable with mean \( \mu \) and finite variance \( \sigma^2 \). Show that, for any \( a > 0 \), \[ P\{X > \mu + a\sigma\} \leq \frac{1}{1 + a^2}. \] | For any $c \geq 0$, we have
$$ \mathbb{P}\left[X > \mu + a \sigma \right] \leq \mathbb{P}\left[(X-\mu+c)^2 > (a\sigma+c)^2\right] \leq \dfrac{\mathbb{E}(X-\mu+c)^2}{(a\sigma+c)^2} = \dfrac{\sigma^2 + c^2}{(a\sigma + c)^2}=: g(c). $$
Optimizing $g$ over $c \in [0,\infty)$, we can observe that $g$ is minimized at $c=\sigma/a > 0$ and
$$ g \left( \dfrac{\sigma}{a}\right) = \dfrac{\sigma^2 + a^{-2}\sigma^2}{(a\sigma + a^{-1}\sigma)^2} = \dfrac{1 + a^{-2}}{(a + a^{-1})^2} = \dfrac{1}{1+a^2}.$$
Hence, $\mathbb{P}(X>\mu+a\sigma) \leq (1+a^2)^{-1}.$
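The bound is easy to illustrate empirically; the Python sketch below uses an Exponential sample as an arbitrary choice of nonnegative $X$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
X = rng.exponential(scale=2.0, size=1_000_000)   # nonnegative, mu = sigma = 2
mu, sigma = X.mean(), X.std()
for a in (0.5, 1.0, 2.0, 3.0):
    print(a, "P(X > mu + a*sigma) ~", np.mean(X > mu + a * sigma),
          "  bound 1/(1+a^2) =", 1 / (1 + a * a))
\end{verbatim}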
|
2005-q2 | Let \( X \) be \( \text{Bin}(n,p) \). Find \( E[1/(i + X)] \) for \( i = 1, 2 \). Hint: Recall that \( \int x^a dx = x^{a+1}/(a+1) + C \) for \( a \neq -1 \). | $X \sim \text{Bin}(n,p)$. Hence,
\begin{align*}
\mathbb{E} \left[\dfrac{1}{1+X} \right] = \sum_{j=0}^n \dfrac{1}{1+j}{n \choose j}p^j(1-p)^{n-j} &= \sum_{j=0}^n \dfrac{1}{n+1}{n+1 \choose j+1}p^j(1-p)^{n-j} \\
&=\dfrac{1}{n+1} \sum_{k=1}^{n+1} {n+1 \choose k}p^{k-1}(1-p)^{n+1-k} \\
&= \dfrac{ 1 - (1-p)^{n+1}}{(n+1)p}.
\end{align*}
On the otherhand,
\begin{align*}
\mathbb{E} \left[\dfrac{1}{(1+X)(2+X)} \right] = \sum_{j=0}^n \dfrac{1}{(1+j)(2+j)}{n \choose j}p^j(1-p)^{n-j} &= \sum_{j=0}^n \dfrac{1}{(n+1)(n+2)}{n+2 \choose j+2}p^j(1-p)^{n-j} \\
&=\dfrac{1}{(n+1)(n+2)} \sum_{k=2}^{n+2} {n+2 \choose k}p^{k-2}(1-p)^{n+2-k} \\
&= \dfrac{ 1 - (1-p)^{n+2}-(n+2)p(1-p)^{n+1}}{(n+1)(n+2)p^2}.
\end{align*}
Therefore,
\begin{align*}
\mathbb{E} \left[\dfrac{1}{2+X} \right] = \mathbb{E} \left[\dfrac{1}{1+X} \right] - \mathbb{E} \left[\dfrac{1}{(1+X)(2+X)} \right] &= \dfrac{ 1 - (1-p)^{n+1}}{(n+1)p} - \dfrac{ 1 - (1-p)^{n+2}-(n+2)p(1-p)^{n+1}}{(n+1)(n+2)p^2} \\
&= \dfrac{(n+2)p-1+(1-p)^{n+2}}{(n+1)(n+2)p^2}.
\end{align*}
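Both closed forms are easy to confirm by simulation; the choices of $n$, $p$ and the sample size below are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n, p, reps = 10, 0.3, 1_000_000
X = rng.binomial(n, p, size=reps)

e1 = (1 - (1 - p) ** (n + 1)) / ((n + 1) * p)
e2 = ((n + 2) * p - 1 + (1 - p) ** (n + 2)) / ((n + 1) * (n + 2) * p ** 2)
print("E[1/(1+X)]:", np.mean(1 / (1 + X)), "  formula:", e1)
print("E[1/(2+X)]:", np.mean(1 / (2 + X)), "  formula:", e2)
\end{verbatim}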
|
2005-q3 | For each \( n = 1, 2, \ldots \) let \( X_{n,i}, \ 1 \leq i \leq \lfloor e^n \rfloor \), be independent normally distributed random variables with mean value 0 and variance \( n \). Let \( M_n = \max_i X_{n,i} \). (Here, \( \lfloor a \rfloor \) is the greatest integer less than or equal to \( a \).) Find constants \( c_n \), which grow asymptotically at the rate \( 2^{1/2}n \), such that \( M_n - c_n \) converges in distribution. What is the limiting distribution? | $\left\{X_{n,i} \mid 1 \leq i \leq \lfloor e^n \rfloor\right\}$ is an \textit{i.i.d.} collection of $N(0,n)$ random variables and $M_n = \max_{i} X_{n,i}.$ Therefore, for any real $x$,
$$ \mathbb{P} \left( M_n - c_n \leq x \right) = \mathbb{P} \left(X_{n,i} \leq c_n +x, \; \forall \; 1 \leq i \leq \lfloor e^n \rfloor \right) = \left[\Phi \left( \dfrac{c_n+x}{\sqrt{n}} \right) \right]^{\lfloor e^n \rfloor}.$$
This yields that
$$ \log \mathbb{P} \left( M_n - c_n \leq x \right) = \lfloor e^n \rfloor \log \Phi \left( \dfrac{c_n+x}{\sqrt{n}} \right) \sim -\exp(n) \bar{\Phi} \left( \dfrac{c_n+x}{\sqrt{n}} \right), $$
where the last assertion is true since $c_n \sim \sqrt{2}n$ as $n \to \infty$ and $\log(1+x) \sim x$ as $x \to 0$. Using the fact that $\bar{\Phi}(x) \sim \phi(x)/x$ as $x \to \infty$, we obtain
\begin{align*}
\log \mathbb{P} \left( M_n - c_n \leq x \right) \sim -\dfrac{\sqrt{n}\exp(n)}{c_n+x} \phi \left( \dfrac{c_n+x}{\sqrt{n}} \right) &\sim - \dfrac{1}{\sqrt{2\pi}} \dfrac{\sqrt{n}\exp(n)}{c_n} \exp \left( - \dfrac{(c_n+x)^2}{2n}\right)\\
&\sim - \dfrac{1}{\sqrt{2\pi}} \dfrac{\sqrt{n}\exp(n)}{\sqrt{2}n} \exp \left( - \dfrac{(c_n+x)^2}{2n}\right) \\
& \sim - \dfrac{1}{\sqrt{4\pi}} \exp \left( -\dfrac{1}{2}\log n + n - \dfrac{c_n^2}{2n} -x\sqrt{2}\right),
\end{align*}
where in the last assertion we have used $c_n \sim \sqrt{2}n$ again. Take $c_n = \sqrt{2n^2 - n \log n} \sim \sqrt{2}n.$ Then $c_n^2/2n = n -\frac{1}{2}\log n$ and therefore
$$ \log \mathbb{P} \left( M_n - c_n \leq x \right) \sim - \dfrac{1}{\sqrt{4\pi}} \exp \left( -x\sqrt{2}\right).$$
In other word, $M_n - c_n \stackrel{d}{\longrightarrow} G$ where
$$ G(x) = \exp \left[- \dfrac{1}{\sqrt{4\pi}} \exp \left( -x\sqrt{2}\right) \right], \;\forall \; x \in \mathbb{R}.$$
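Since the finite-$n$ distribution function of $M_n-c_n$ is explicit, the convergence can be checked numerically without simulation; the Python sketch below evaluates $\Phi((c_n+x)/\sqrt{n})^{e^n}$ on a log scale (ignoring the integer part of $e^n$, which is immaterial here) and compares it with $G(x)$, for a few arbitrary values of $n$ and $x$.
\begin{verbatim}
from math import erfc, exp, log, log1p, pi, sqrt

def Phibar(z):                       # standard normal upper tail
    return 0.5 * erfc(z / sqrt(2.0))

def G(x):                            # claimed limit
    return exp(-exp(-x * sqrt(2.0)) / sqrt(4.0 * pi))

for n in (25, 100, 400):
    c_n = sqrt(2 * n * n - n * log(n))
    for x in (-1.0, 0.0, 1.0):
        q = Phibar((c_n + x) / sqrt(n))
        finite_n = exp(exp(n) * log1p(-q))
        print(n, x, round(finite_n, 4), round(G(x), 4))
\end{verbatim}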
|
2005-q4 | Let \( X_t \) be Brownian motion with drift 0 and variance parameter \( \sigma^2 = 1 \), starting from the initial value \( X_0 = 0 \). For fixed \( a \neq 0 \), let \[ R(t) = \int_0^t \exp[a(X(t) - X(s)) - a^2(t-s)/2]ds , \] where \( R(0) = 1 \). Let \( b > 1 \) and \( T = \min\{t : R(t) = b\} \). (a) What is the expected value of \( R(t) \)? (b) What is the expected value of \( T \)? Hint: From part (a) you may be able to discover a martingale that will help you with part (b). | $\left\{X_t : t \geq 0\right\}$ is standard BM with filtration $\mathcal{F}_t = \sigma \left(X_s : 0 \leq s \leq t \right)$. For $a \geq 0$
$$ R(t) := \int_{0}^t \exp \left[ a(X_t-X_s)-\dfrac{a^2(t-s)}{2}\right] \, ds; \forall \, t > 0,$$
with $R(0)=0$ (Correction !). Recall the exponential MG associated to standard BM, $\left\{M_t := \exp\left(aX_t-\dfrac{a^2t}{2}\right), \mathcal{F}_t, t \geq 0 \right\}$.
\begin{enumerate}[label=(\alph*)]
\item By \textit{Fubini's Theorem} and the fact that $X_t-X_s \sim N(0,t-s)$, for all $0 \leq s \leq t$, we have the following.
\begin{align*}
\mathbb{E}(R_t) = \mathbb{E} \left[ \int_{0}^t \exp \left( a(X_t-X_s)-\dfrac{a^2(t-s)}{2}\right)\, ds \right]&= \int_{0}^t \mathbb{E} \exp\left( a(X_t-X_s)-\dfrac{a^2(t-s)}{2}\right)\, ds \\
&= \int_{0}^t \; ds =t.
\end{align*}
\item
Notice that for $0 \leq r \leq s \leq t$,
$$ \mathcal{F}_r \perp \!\! \perp \exp \left[ a(X_t-X_s)-\dfrac{a^2(t-s)}{2}\right] = \dfrac{M_t}{M_s} \stackrel{d}{=} M_{t-s} \Rightarrow \mathbb{E} \left[ \dfrac{M_t}{M_s} \bigg \rvert \mathcal{F}_r\right] = \mathbb{E}(M_{t-s}) = \mathbb{E}(M_0)=1,$$
whereas for $0 \leq s \leq r \leq t$, we have
$$ \mathbb{E} \left[ \dfrac{M_t}{M_s} \bigg \rvert \mathcal{F}_r\right] = \dfrac{1}{M_s} \mathbb{E} \left[ M_t \mid \mathcal{F}_r\right] = \dfrac{M_r}{M_s}.$$
In part (a), we have proved $R_t$ is integrable. Employing \textit{Fubini's Theorem} we get the following for all $0 \leq r \leq t$.
\begin{align*}
\mathbb{E} \left[R_t \mid \mathcal{F}_r \right] = \mathbb{E} \left[\int_{0}^t \dfrac{M_t}{M_s} \, ds \bigg \rvert \mathcal{F}_r \right] =\int_{0}^t \mathbb{E} \left[ \dfrac{M_t}{M_s} \bigg \rvert \mathcal{F}_r \right] \, ds &= \int_{0}^r \dfrac{M_r}{M_s} \, ds + \int_{r}^t \, ds \\
&= R_r + (t-r).
\end{align*}
Therefore $\left\{R_t-t, \mathcal{F}_t, t \geq 0\right\}$ is a MG. We define $T:= \inf \left\{t \mid R_t=b \right\}$. Since, $R_0=0$, it is clear by sample-path-continuity for the process $R$ that $T= \inf \left\{t \mid R_t \geq b \right\}$ and $R_T=b$ if $T< \infty$. By \textit{Optional Stopping Theorem} and noting that $R_{T \wedge t} \leq b$ for all $t \geq 0$, we get the following for all $t \geq 0$.
$$ \mathbb{E} \left(R_{T \wedge t} - (T \wedge t) \right) = \mathbb{E}(R_0) =0 \Rightarrow \mathbb{E}(T \wedge t) = \mathbb{E}R_{T \wedge t} \leq b.$$
Use MCT and take $t \uparrow \infty$, to conclude that $\mathbb{E}(T) \leq b$. This guarantees that $T< \infty$ almost surely and hence $R_{T \wedge t} \longrightarrow R_T=b$ almost surely, as $t \to \infty$. Since, $0 \leq R_{T \wedge t } \leq b$, use DCT to conclude that $\mathbb{E}(T)=b.$
\end{enumerate}
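A crude discretization check of $E(T)=b$: splitting the defining integral of $R$ at $t$ gives the approximate recursion $R(t+\Delta) \approx e^{a\,\Delta X - a^2\Delta/2}\,R(t) + \Delta$, which the Python sketch below iterates; the parameters, step size and number of paths are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
a, b, dt, n_paths, max_steps = 1.0, 2.0, 1e-3, 5_000, 100_000

R = np.zeros(n_paths)
T = np.full(n_paths, np.nan)                 # NaN marks paths not yet stopped
alive = np.ones(n_paths, dtype=bool)
for k in range(1, max_steps + 1):
    dX = rng.normal(0.0, np.sqrt(dt), size=alive.sum())
    R[alive] = np.exp(a * dX - a * a * dt / 2) * R[alive] + dt
    just_hit = alive & (R >= b)
    T[just_hit] = k * dt
    alive &= ~just_hit
    if not alive.any():
        break

print("paths stopped:", int(np.sum(~np.isnan(T))), "of", n_paths)
print("empirical E(T):", np.nanmean(T), "  theory:", b)
\end{verbatim}
The empirical mean should be close to $b$, up to discretization and Monte Carlo error.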
|
2005-q5 | Let \( X_t \) be a Poisson process with rate \( \lambda \) and \( X_0 = 0 \). Let \( a > 0, 0 < \lambda < b \) and \( T = \inf\{t : X_t \leq -a + bt\} \), where it is understood that the infimum of the empty set is 0. What is \( E(\exp(-a T))? \ E(T)? \) Hint: For the first part, find a martingale of the form \( \exp(\theta X_t)/[M(\theta)]^t \). It might be helpful to re-write the entire problem in terms of a new process \( Y_t = bt - X_t \) and define the martingale in terms of that process. | \begin{enumerate}[label=(\alph*)]
\item $\left\{X_t : t\geq 0\right\}$ is the Poisson Process with rate $\lambda$ and $X_0=0$. Set $\mathcal{F}_t = \sigma \left(X_s \mid 0 \leq s \leq t \right)$. Then for all $0 \leq s \leq t$, we have $X_t-X_s \perp \!\! \perp \mathcal{F}_s$ and has distribution $\text{Poi}(\lambda(t-s))$; therefore
$$ \mathbb{E} \left[ \exp(-\theta X_t) \mid \mathcal{F}_s\right] = \exp(-\theta X_s) \mathbb{E} \exp(-\theta(X_t-X_s)) = \exp \left[ -\theta X_s + \lambda(t-s)(e^{-\theta}-1) \right].$$
Therefore, $\left\{Z_t := \exp \left[ -\theta X_t - \lambda(e^{-\theta}-1)t\right]; \mathcal{F}_t; t \geq 0\right\}$ is a MG.
We define $T := \inf \left\{t \mid X_t \leq -a+bt \right\}$. Since $X_t/t$ converges almost surely to $\lambda <b$ as $t \to \infty$, we have $X_t+a-bt \longrightarrow -\infty$ almost surely as $t \to \infty$. Since $X_0=0$, $a>0$ and $X$ has right-continuous sample paths, we can conclude that $T \in (0, \infty)$ almost surely. By definition of $T$ and using RCLL sample paths for $X$ we can write,
$$ X_{T(\omega)}(\omega) \leq -a+bT(\omega), \;\; X_{T(\omega)-}(\omega) \geq -a+bT(\omega), \; \forall \; \omega \in \Omega.$$
But $X$ has non-decreasing sample paths and so $X_{t-} \leq X_t$ for all $t >0$ and therefore, $X_T = -a+bT$, almost surely.
Now suppose we can get $\theta >0$ such that $b\theta + \lambda(e^{-\theta}-1) = a > 0$. Then for all $t \geq 0$, we have $X_{T \wedge t} \geq -a + b(T \wedge t)$ and therefore
$$ Z_{T \wedge t} \leq \exp \left[ a\theta - \theta b (T \wedge t) - \lambda(e^{-\theta}-1)(T \wedge t) \right] = \exp \left[ a\theta - a(T \wedge t)\right].$$
By \textit{Optional Stopping Theorem}, $\mathbb{E}Z_{T \wedge t} = \mathbb{E}Z_0 = 1$. Since, $Z_{T \wedge t} \longrightarrow Z_T = \exp \left[ a\theta - \theta bT - \lambda(e^{-\theta}-1)T \right] = \exp \left[ a\theta - aT\right]$ almost surely; we use DCT to conclude that
$$ 1 = \lim_{t \to \infty} \mathbb{E}Z_{T \wedge t} = \mathbb{E} Z_T = \mathbb{E} \exp \left( a\theta-aT\right).$$
Hence, $\mathbb{E} \exp(-aT) = \exp(-a\theta_0),$ where $\theta_0$ is positive solution of $f(\theta)=b \theta + \lambda(e^{-\theta}-1)=a$. The solution is unique since $f^{\prime}(\theta) = b-\lambda e^{-\theta} \geq b-\lambda >0$, $f(0)=0$ and $f(\theta) \to \infty$ as $\theta \to \infty$.
\item Recall that $\left\{W_t:=X_t-\lambda t, \mathcal{F}_t, t \geq 0\right\}$ is a MG. Therefore, by \textit{OST}, $\mathbb{E}(X_{T \wedge t}-\lambda (T \wedge t)) = 0$ and hence,
$$ \lambda\mathbb{E}(T \wedge t) = \mathbb{E}(X_{T \wedge t}) \geq -a + b \mathbb{E}(T \wedge t) \Rightarrow \mathbb{E}(T \wedge t) \leq \dfrac{a}{b-\lambda}.$$
Using MCT, we conclude that $\mathbb{E}(T) \leq (b-\lambda)^{-1}a < \infty$. On the otherhand, since for all $0 \leq s \leq t$
$$ \mathbb{E} \left[ W_t^2 \mid \mathcal{F}_s\right] = W_s^2 + \mathbb{E} \left[ (W_t-W_s)^2 \mid \mathcal{F}_s\right] = W_s^2 + \lambda(t-s),$$
we have $\left\{W_t^2-\lambda t, \mathcal{F}_t, t \geq 0\right\}$ to be a MG. Hence, by OST, $\mathbb{E}(W_{T \wedge t}^2) = \lambda \mathbb{E}(T \wedge t).$ Therefore,
$$ \sup_{t \geq 0} \mathbb{E} (W_{T \wedge t}^2) = \sup_{t \geq 0} \lambda \mathbb{E}(T \wedge t) = \lambda \mathbb{E}(T) \leq \dfrac{a\lambda}{b-\lambda}<\infty.$$
Thus the MG $\left\{W_{T \wedge t}, \mathcal{F}_t, t \geq 0\right\}$ is $L^2$-bounded and hence by \textit{Doob's $L^p$-MG convergence theorem}, $W_{T \wedge t} = X_{T \wedge t} - \lambda (T \wedge t) \longrightarrow X_T - \lambda T$ almost surely and in $L^2$. In particular,
$$ 0 = \lim_{t \to \infty} \mathbb{E} \left( W_{T \wedge t}\right) = \mathbb{E}\left(X_T-\lambda T \right) = \mathbb{E}(-a+bT-\lambda T) \Rightarrow \mathbb{E}(T) = \dfrac{a}{b-\lambda}.$$
\end{enumerate}
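Both answers can be confirmed by a direct simulation of the Poisson path: between jumps $X$ is constant, so the crossing of the line $-a+bt$ can be located exactly. The Python sketch below also solves $b\theta+\lambda(e^{-\theta}-1)=a$ by bisection; the parameter values are arbitrary subject to $a>0$ and $0<\lambda<b$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
lam, a, b, n_paths = 1.0, 2.0, 1.5, 100_000

def theta0():                         # unique positive root of b*th + lam*(e^{-th}-1) = a
    lo, hi = 0.0, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if b * mid + lam * (np.exp(-mid) - 1) < a else (lo, mid)
    return lo

T = np.empty(n_paths)
for i in range(n_paths):
    t, k = 0.0, 0                     # current time, current value X_t = k
    while True:
        nxt = t + rng.exponential(1 / lam)
        cross = (k + a) / b           # time at which -a + b*t reaches level k
        if cross <= nxt:
            T[i] = cross
            break
        t, k = nxt, k + 1

print("E(T):", T.mean(), " vs a/(b-lam) =", a / (b - lam))
print("E(exp(-aT)):", np.mean(np.exp(-a * T)), " vs exp(-a*theta0) =", np.exp(-a * theta0()))
\end{verbatim}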
|
2005-q6 | Let \( X_1 \) and \( X_2 \) be i.i.d. according to a density which is strictly positive on the real line. Let \( a \) be arbitrary. Show that \[ P\{ |(1/2)(X_1 + X_2) - a| < |X_1 - a| \} > 1/2. \] (Yes, strict inequalities). | Without loss of generality we can take $a=0$ since $X_1-a$ and $X_2-a$ are also \textit{i.i.d.} according to density which is strictly positive on the real line. Set $Y=X_2/X_1$. Then $Y$ is almost surely well-defined, $Y$ also has a density which is positive on the real line and $Y$ has the same distribution as $Y^{-1}.$
\begin{align*}
p=\mathbb{P} \left( \left \rvert \dfrac{X_1+X_2}{2}\right \rvert < |X_1|\right) = \mathbb{P} \left( (X_1+X_2)^2 < 4X_1^2\right) &= \mathbb{P} \left(3X_1^2-2X_1X_2-X_2^2 > 0 \right) \\
&= \mathbb{P} \left( 3-2Y-Y^2 > 0\right) \\
&= \mathbb{P} \left((3+Y)(1-Y) > 0\right) \\
&= \mathbb{P} \left( -3<Y<1\right) \\
&= \mathbb{P} \left( -3<Y^{-1}<1\right) = \mathbb{P} \left( Y>1 \text{ or }Y<-1/3\right).
\end{align*}
Since, $Y$ has positive density everywhere, we have the following.
\begin{align*}
2p = \mathbb{P} \left( -3<Y<1\right) + \mathbb{P} \left( Y>1 \text{ or }Y<-1/3\right) = 1 + \mathbb{P} \left( -3<Y<-1/3\right) > 1.
\end{align*}
Thus we get $p > 1/2$.
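A quick empirical illustration (the two sampling distributions and the values of $a$ below are arbitrary; any density positive on the whole line works):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
samplers = {"normal(3,2)": lambda: rng.normal(3, 2, size=(2, n)),
            "cauchy+1": lambda: rng.standard_cauchy(size=(2, n)) + 1}
for name, draw in samplers.items():
    X1, X2 = draw()
    for a in (-2.0, 0.0, 5.0):
        frac = np.mean(np.abs((X1 + X2) / 2 - a) < np.abs(X1 - a))
        print(name, " a =", a, " P ~", round(frac, 4))
\end{verbatim}
Every printed probability exceeds $1/2$, as the argument above guarantees.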
|
2006-q1 | We say that a random variable $G$ has a geometric distribution with parameter $u \in (0,1)$ if $P(G = k) = u(1 - u)^k$ for $k = 0, 1, \ldots$. Suppose $N$ has a geometric distribution with parameter $p \in (0, 1)$. Let $X_1, X_2, \cdots$ be independent and geometrically distributed with parameter $\lambda \in (0,1)$, independently of $N$. Let $Y = X_1 + \cdots + X_N$ (with the convention that $Y = 0$ if $N = 0$). (a) Provide an explicit formula for $H(z) = E[z^Y]$, $|z| < 1$. (b) Show that the law of $Y$ conditional on the event {$Y > 0$} has the same distribution as $G + 1$ and evaluate the parameter $u$ of $G$ in terms of $p$ and $\lambda$. | \begin{enumerate}[label=(\alph*)]
\item First note that for any $G \sim \text{Geometric}(u) $ and $|z| <1$,
$$ \mathbb{E}(z^G) = \sum_{k \geq 0} z^k u(1-u)^k = \dfrac{u}{1-z(1-u)}.$$\nObserve that, since $G \geq 0$ almost surely and $G>0$ with positive probability for $u \in (0,1)$, we have $|z|^G \leq 1$ almost surely and $|z|^G <1$ with positive probability. Hence, $|\mathbb{E}(z^G)| <1$.
Now for any $k \geq 1$, using independence of $\left\{X_i \mid i \geq 1\right\}$ and $N$ along with \textit{i.i.d.} structure of $\left\{X_i \mid i \geq 1\right\}$, we get the following.
$$ \mathbb{E} \left[z^Y \mid N=k \right] = \mathbb{E} \left[ \prod_{i=1}^k z^{X_i} \mid N=k \right] = \mathbb{E} \left[ \prod_{i=1}^k z^{X_i} \right] = \left( \mathbb{E}(z^{X_1})\right)^k.$$\nMoreover, $ \mathbb{E} \left[z^Y \mid N=0 \right] = 1$, by definition. Hence,
\begin{align*}
\mathbb{E} \left[ z^Y\right] = \mathbb{E} \left[\mathbb{E} \left( z^Y \mid N\right) \right] = \mathbb{E} \left[\left(\mathbb{E}( z^{X_1})\right)^N \right] = \dfrac{p}{1-(1-p)\mathbb{E}(z^{X_1})} &= \dfrac{p}{1-(1-p)\dfrac{\lambda}{1-z(1-\lambda)}} \\
&= \dfrac{p-p(1-\lambda)z}{1-z(1-\lambda)-(1-p)\lambda}.
\end{align*}
\item Note that
$$ \mathbb{P}(Y=0) = \mathbb{E}(z^Y) \rvert_{z=0} = \dfrac{p}{1-(1-p)\lambda}.$$\nHence, for all $|z| <1$,
\begin{align*}
\mathbb{E}\left[z^{Y-1} \mid Y >0 \right] = \dfrac{z^{-1}\mathbb{E}\left[z^{Y}, Y >0 \right]}{1-\mathbb{P}(Y=0)} &= \dfrac{z^{-1}\mathbb{E}\left[z^{Y} \right] - z^{-1} \mathbb{E}(z^Y, Y=0)}{1-\mathbb{P}(Y=0)} \\
&= \dfrac{z^{-1}\mathbb{E}\left[z^{Y} \right] - z^{-1} \mathbb{P}( Y=0)}{1-\dfrac{p}{1-(1-p)\lambda}} \\
&= pz^{-1} \dfrac{\left[\dfrac{(1-z(1-\lambda))(1-\lambda(1-p))}{1-z(1-\lambda)-(1-p)\lambda} -1\right]}{1-\lambda(1-p)-p} \\
&= pz^{-1} \dfrac{\left[\dfrac{z\lambda(1-\lambda)(1-p)}{1-z(1-\lambda)-(1-p)\lambda} \right]}{(1-\lambda)(1-p)} \\
&= p \dfrac{\lambda}{1-z(1-\lambda)-(1-p)\lambda} = \dfrac{\dfrac{p\lambda}{1-(1-p)\lambda}}{1-z \dfrac{1-\lambda}{1-(1-p)\lambda}} = \dfrac{u}{1-z(1-u)},
\end{align*}
where $u = \dfrac{p\lambda}{1-\lambda+p\lambda}$. By uniqueness of probability generating functions for non-negative integer valued random variables, we concluded that $(Y-1) \mid Y>0 \sim \text{Geometric}(u),$ i.e., $Y \mid Y >0 \stackrel{d}{=} G+1$ where $G \sim \text{Geometric}(u).$\end{enumerate} |
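A short simulation of the compound sum supports both parts; note that NumPy's geometric sampler counts the number of trials, so its output is shifted by one relative to the convention used here. Parameter values are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(8)
p, lam, reps = 0.3, 0.6, 200_000
N = rng.geometric(p, size=reps) - 1            # Geometric(p) on {0,1,2,...}
Y = np.array([rng.geometric(lam, size=k).sum() - k if k > 0 else 0 for k in N])

u = p * lam / (1 - lam + p * lam)
print("P(Y=0):", np.mean(Y == 0), " vs p/(1-(1-p)lam) =", p / (1 - (1 - p) * lam))
Ypos = Y[Y > 0]
print("P(Y=1 | Y>0):", np.mean(Ypos == 1), " vs u =", u)
print("E(Y-1 | Y>0):", (Ypos - 1).mean(), " vs (1-u)/u =", (1 - u) / u)
\end{verbatim}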
2006-q2 | Let the random vector $\eta^{(n)} = (\eta_1^{(n)}, \ldots , \eta_n^{(n)})$ be uniformly distributed on the sphere of radius $\sqrt{n}$ in $\mathbb{R}^n$, with $X_{n,d} = (\eta_1^{(n)}, \ldots , \eta_d^{(n)})$ denoting the first $d \leq n$ coordinates of the vector $\eta^{(n)}$. (a) Prove that $X_{n,d}$ converges in distribution as $n \to \infty$ to a Gaussian random vector $X_{\infty,d} \in \mathbb{R}^d$ of zero mean and identity covariance matrix. (b) Let $Q_{n,d}$ and $Q_{\infty,d}$ denote the laws of $X_{n,d}$ and $X_{\infty,d}$ respectively. Show that if $n > d$ then $Q_{n,d}$ is absolutely continuous with respect to $Q_{\infty,d}$ with a bounded Radon-Nikodym derivative. (c) Show that if $f \in L^1(\mathbb{R}^d, \mathcal{B}_{\mathbb{R}^d}, Q_{\infty,d})$ then $\int_{\mathbb{R}^d} |f(x)| dQ_{n,d}(x) < \infty$ for every $n > d$ and $\lim_{n \to \infty} \int_{\mathbb{R}^d} f(x) dQ_{n,d}(x) = \int_{\mathbb{R}^d} f(x) dQ_{\infty,d}(x)$ | Let $\left\{Y_i \right\}_{i \geq 1}$ be a collection of \textit{i.i.d.} standard Gaussian variables. Then we know that for all $n \geq 1$,
$$ \underline{\eta}^{(n)} = (\eta_1^{(n)}, \ldots, \eta_n^{(n)}) \stackrel{d}{=} \dfrac{\sqrt{n}(Y_1, \ldots, Y_n)}{\sqrt{\sum_{i=1}^n Y_i^2}}.$$
\begin{enumerate}[label=(\alph*)]
\item Fix $d \geq 1$. Then by previous observation,
$$ \underline{X}_{n,d} = (\eta_1^{(n)}, \ldots, \eta_d^{(n)}) \stackrel{d}{=} \dfrac{\sqrt{n}(Y_1, \ldots, Y_d)}{\sqrt{\sum_{i=1}^n Y_i^2}}.$$\nNow, as $n \to \infty$, by SLLN, $(\sum_{i=1}^n Y_i^2)/n \stackrel{a.s}{\longrightarrow} \mathbb{E}(Y_1^2)=1.$ Therefore, by \textit{Slutsky's Theorem},
$$ \dfrac{\sqrt{n}(Y_1, \ldots, Y_d)}{\sqrt{\sum_{i=1}^n Y_i^2}} \stackrel{d}{\longrightarrow} (Y_1, \ldots, Y_d) := \underline{X}_{\infty,d},$$
This yields that $\underline{X}_{n,d} \stackrel{d}{\longrightarrow} \underline{X}_{\infty,d},$ as $n \to \infty$ and clearly $\underline{X}_{\infty,d}$ is a Gaussian random vector of dimension $d$, zero mean and identity covariance matrix.
\item Take $d<n$. Recall the fact that
$$ \dfrac{(Y_1, \ldots, Y_d)}{\sqrt{\sum_{i=1}^d Y_i^2}} \perp \!\!\perp \sum_{i=1}^d Y_i^2, $$
and in the above the LHS is an uniform random vector from the $d$-dimensional unit sphere and the RHS follows $\chi^2_d$ distribution. Therefore,
$$ \underline{X}_{n,d} \stackrel{d}{=} \dfrac{(Y_1, \ldots, Y_d)}{\sqrt{\sum_{i=1}^d Y_i^2}} \sqrt{\dfrac{n\sum_{i=1}^d Y_i^2}{\sum_{i=1}^d Y_i^2 + \sum_{i=d+1}^n Y_i^2}} =: \underline{U}_d S_{n,d},$$
where $\underline{U}_d \sim \text{Uniform}(\mathcal{S}_d)$, $\mathcal{S}_d$ being the $d$-dimensional unit sphere, $S_{n,d}^2 \sim n \text{Beta}(\frac{d}{2}, \frac{n-d}{2})$ and $ \underline{U}_d \perp \!\!\perp S_{n,d}.$ On the other hand,
$$ \underline{X}_{\infty,d} \sim \dfrac{(Y_1, \ldots, Y_d)}{\sqrt{\sum_{i=1}^d Y_i^2}} \sqrt{\sum_{i=1}^d Y_i^2} =: \underline{U}_d S_{\infty,d},$$
where $S_{\infty,d}^2 \sim \chi^2_d$ and $ \underline{U}_d \perp \!\!\perp S_{\infty,d}.$ From now on $\Lambda_d$ will denote the uniform probability measure on $\mathcal{S}_d$, i.e., the normalized Haar measure. Take $h :\mathbb{R}^d \to \mathbb{R}$ bounded measurable. Also, $f_{S_{n,d}}$ and $f_{S_{\infty,d}}$ will denote the densities of the random variables $S_{n,d}$ and $S_{\infty,d}$ respectively (with respect to Lebesgue measure on $\mathbb{R}$). Then, for any $1 \leq n \leq \infty$,
\begin{align*}
\int_{\mathbb{R}^d} h(\underline{x}) \, dQ_{n,d}(\underline{x}) = \mathbb{E} h(\underline{X}_{n,d}) = \mathbb{E} h(\underline{U}_{d}S_{n,d}) = \mathbb{E}\left[ \mathbb{E} \left(h(\underline{U}_{d}S_{n,d}) \mid \underline{U}_{d}\right)\right] &= \int_{\mathcal{S}_d} \mathbb{E}\left[h(\underline{u}S_{n,d}) \right] \, d\Lambda_d(\underline{u}) \\
&= \int_{\mathcal{S}_d} \int_{0}^{\infty} h(r\underline{u}) f_{S_{n,d}}(r) \,dr \,d\Lambda_d(\underline{u}).
\end{align*}
Define,
\begin{align*}
g_{n,d}(\underline{x}) = \dfrac{f_{S_{n,d}}(||\underline{x}||_2)}{f_{S_{\infty,d}}(||\underline{x}||_2)} &= \dfrac{\dfrac{1}{n} \dfrac{1}{\operatorname{Beta}(\frac{d}{2}, \frac{n-d}{2})}\left(\dfrac{||\underline{x}||^2_2}{n}\right)^{d/2-1} \left(1-\dfrac{||\underline{x}||^2_2}{n}\right)^{(n-d)/2-1}}{\dfrac{1}{\Gamma(d/2)2^{d/2}}\exp(-||\underline{x}||^2_2/2) ||\underline{x}||_2^{d-2}} \mathbbm{1}(0<||\underline{x}||^2_2<n) \\
&= \dfrac{2^{d/2}}{n^{d/2}}\dfrac{\Gamma(n/2)}{\Gamma((n-d)/2)} \exp(||\underline{x}||^2_2/2)\left(1-\dfrac{||\underline{x}||^2_2}{n}\right)^{(n-d)/2-1} \mathbbm{1}(0<||\underline{x}||^2_2<n).
\end{align*}
Use \textit{Stirling's approximation} that there exists $0 < C_1 <C_2< \infty$ such that
$$ C_1 t^{t-1/2}\exp(-t) \leq \Gamma(t) \leq C_2 t^{t-1/2}\exp(-t), \; \forall \; t >0,$$\nand the inequality $(1+x) \leq \exp(x)$ to conclude the following.
\begin{align*}
0 \leq g_{n,d}(\underline{x}) &\leq \dfrac{2^{d/2}}{n^{d/2}}\dfrac{C_2 (n/2)^{(n-1)/2}\exp(-n/2)}{C_1 ((n-d)/2)^{(n-d-1)/2}\exp(-(n-d)/2)} \exp(||\underline{x}||^2_2/2) \exp \left(-\dfrac{n-d-2}{2n}||\underline{x}||^2_2 \right) \mathbbm{1}(0<||\underline{x}||^2_2<n) \\
& = \dfrac{C_2}{C_1}\left( \dfrac{n}{n-d}\right)^{(n-d-1)/2} \exp(-d/2) \exp \left(\dfrac{d+2}{2n}||\underline{x}||^2_2 \right) \mathbbm{1}(0<||\underline{x}||^2_2<n) \\
& \leq \dfrac{C_2}{C_1} \left( 1+\dfrac{d}{n-d}\right)^{(n-d-1)/2} \exp(-d/2) \exp((d+2)/2) \leq \dfrac{eC_2}{C_1} \exp \left( \dfrac{d(n-d-1)}{2(n-d)}\right) \leq \dfrac{C_2}{C_1} \exp\left(\dfrac{d}{2}+1\right).
\end{align*}
Thus $g_{n,d}$ is uniformly bounded over $n$ and $\underline{x}$. Hence $hg_{n,d}$ is also a bounded measurable function on $\mathbb{R}^d$ and therefore\n\begin{align*}
\int_{\mathbb{R}^d} h(\underline{x}) \, dQ_{n,d}(\underline{x}) = \int_{\mathcal{S}_d} \int_{0}^{\infty} h(r\underline{u}) f_{S_{n,d}}(r) \,dr \,d\Lambda_d(\underline{u}) &= \int_{\mathcal{S}_d} \int_{0}^{\infty} h(r\underline{u}) f_{S_{\infty,d}}(r)g_{n,d}(r\underline{u}) \,dr \,d\Lambda_d(\underline{u}) \\
&= \int_{\mathcal{S}_d} \int_{0}^{\infty} (hg_{n,d})(r\underline{u}) f_{S_{\infty,d}}(r) \,dr \,d\Lambda_d(\underline{u}) \\
&= \int_{\mathbb{R}^d} (hg_{n,d})(\underline{x}) \, dQ_{\infty,d}(\underline{x}).
\end{align*}
Since this holds true for all bounded measurable $h$, we conclude that $Q_{n,d}$ is absolutely continuous with respect to $Q_{\infty,d}$ with Radon-Nikodym derivative being $g_{n,d}$ which we already established to be bounded.
\item We have $f \in L^1(\mathbb{R}^d, \mathcal{B}_{\mathbb{R}^d}, Q_{n,d})$ for all $d < n \leq \infty$. First note that as $n \to \infty$, for all $\underline{x}$ we have the following.
\begin{align*}
g_{n,d}(\underline{x}) & \sim \dfrac{2^{d/2}}{n^{d/2}}\dfrac{\sqrt{2\pi} (n/2)^{(n-1)/2}\exp(-n/2)}{\sqrt{2 \pi} ((n-d)/2)^{(n-d-1)/2}\exp(-(n-d)/2)} \exp(||\underline{x}||^2_2/2) \exp \left(-\dfrac{n-d-2}{2n}||\underline{x}||^2_2 \right) \mathbbm{1}(||\underline{x}||^2_2 >0) \\
& = \left( \dfrac{n}{n-d}\right)^{(n-d-1)/2} \exp(-d/2) \exp \left(\dfrac{d+2}{2n}||\underline{x}||^2_2 \right) \mathbbm{1}(||\underline{x}||^2_2 >0) \\
& \sim \left( 1+\dfrac{d}{n-d}\right)^{(n-d-1)/2} \exp(-d/2) \mathbbm{1}(||\underline{x}||^2_2 >0) \sim \exp \left( \dfrac{d(n-d-1)}{2(n-d)}\right) \exp(-d/2) \mathbbm{1}(||\underline{x}||^2_2 >0)\sim \mathbbm{1}(||\underline{x}||^2_2 >0),
\end{align*}
i.e., $fg_{n,d}$ converges to $f$ almost everywhere. We also have $g_{n,d}$ to be uniformly bounded and hence $|f|\sup_{n >d} ||g_{n,d}||_{\infty} \in L^1(\mathbb{R}^d, \mathcal{B}_{\mathbb{R}^d}, Q_{\infty,d})$. Therefore, use DCT and conclude that
$$ \int_{\mathbb{R}^d} f \, dQ_{n,d} = \int_{\mathbb{R}^d} fg_{n,d} \, dQ_{\infty,d} \stackrel{n \to \infty}{\longrightarrow} \int_{\mathbb{R}^d} f \, dQ_{\infty,d}.$$
\end{enumerate} |
2006-q3 | The random variable $U$ is uniformly distributed on $[0, 1]$. Let $Y = 4U - \lfloor 4U \rfloor$, where $\lfloor x \rfloor$ denotes the largest integer $\leq x$. (a) What is $E(U|Y)$? (b) Show by calculation or otherwise that $\text{Var}(U|Y)$ does not depend on $Y$. | First note that $0 \leq Y \leq 1$. We shall demonstrate that $Y=4U- \lfloor 4U \rfloor$ is stochastically independent to $\lfloor 4U \rfloor$. Note that $\lfloor 4U \rfloor \in \left\{0,1,2,3\right\}$. Hence for any $k \in \left\{0,1,2,3\right\}$ and any $0<x < 1$ we have
$$ \mathbb{P}(\lfloor 4U \rfloor = k , Y \leq x) = \mathbb{P}(k \leq 4U \leq k+x) = \dfrac{k+x-k}{4}=\dfrac{x}{4}.$$\nTaking $x \uparrow 1$ this shows that $\lfloor 4U \rfloor \sim \text{Uniform}(\left\{0,1,2,3\right\})$ and summing over $k$ we get $Y \sim \text{Uniform}(0,1)$. The above equation then implies their independence.
\begin{enumerate}[label=(\alph*)]
\item Using independence of $Y$ and $\lfloor 4U \rfloor$ we get the following.
$$ \mathbb{E}(U \mid Y) = \dfrac{1}{4} \mathbb{E}(Y+\lfloor 4U \rfloor \mid Y) = \dfrac{Y}{4} + \dfrac{1}{4} \mathbb{E}(\lfloor 4U \rfloor \mid Y) = \dfrac{Y}{4} + \dfrac{1}{4} \mathbb{E}(\lfloor 4U \rfloor) = \dfrac{2Y+3}{8}. $$
\item
$$ \operatorname{Var}(U|Y) = \dfrac{1}{16} \operatorname{Var}(\lfloor 4U \rfloor + Y \mid Y) = \dfrac{1}{16} \operatorname{Var}(\lfloor 4U \rfloor \mid Y) = \dfrac{1}{16} \operatorname{Var}(\lfloor 4U \rfloor) = \dfrac{1}{16} \left( \dfrac{14}{4} - \dfrac{9}{4}\right) = \dfrac{5}{64},$$\nwhich does not depend on $Y$.
\end{enumerate} |
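The two formulas for the preceding problem can be checked by conditioning on a narrow bin of $Y$ values; the bin width and centres in the Python sketch below are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(9)
U = rng.random(2_000_000)
Y = 4 * U - np.floor(4 * U)

for y0 in (0.1, 0.5, 0.9):
    sel = np.abs(Y - y0) < 0.005                 # condition on Y close to y0
    print("y0 =", y0,
          " E(U|Y) ~", round(U[sel].mean(), 4), " vs", (2 * y0 + 3) / 8,
          " Var(U|Y) ~", round(U[sel].var(), 4), " vs", round(5 / 64, 4))
\end{verbatim}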
2006-q4 | Let $\mathcal{X} = \{0, 1, 2, \ldots , m\}$. Define a Markov chain $\{X_k\}$ on $\mathcal{X}$ by the following scheme. \begin{itemize} \item Given $X_{k-1} = x \in \mathcal{X}$, pick a random variable $\Theta_k$ of probability density function $(m+1) \binom{m}{x} (1 - \theta)^{x} \theta^{m-x}$ on $(0,1)$. \item Given $\Theta_k \in (0,1)$, pick $X_k$ of a Binomial $(m, \Theta_k)$ distribution. \end{itemize} (a) Provide a simple formula for the Markov transition probability matrix $k(x, y) = P(X_1 = y|X_0 = x)$ on $\mathcal{X} \times \mathcal{X}$. (b) Explain why $\{X_k\}$ is an irreducible, positive recurrent Markov chain and show that its unique invariant measure is the uniform distribution on $\mathcal{X}$. (c) Show that the function $f(x) = x - \frac{m}{2}$ is an eigenvector of the Markov transition matrix $k(x, y)$ and compute the corresponding eigenvalue. | \begin{enumerate}[label=(\alph*)]
\item For all $x,y \in \left\{0, \ldots, m\right\}$ we have,
\begin{align*}
k(x,y) = \mathbb{P} \left(X_1 = y \mid X_0 =x \right) &= \int_{0}^1 \mathbb{P} \left(X_1 = y \mid \Theta_1 =\theta \right) (m+1) {m \choose x} (1-\theta)^{x}\theta^{m-x} \; d \theta \
&= \int_{0}^1 {m \choose y} \theta^y(1-\theta)^{m-y} (m+1) {m \choose x} (1-\theta)^{x}\theta^{m-x} \; d \theta \
&= (m+1){m \choose y}{m \choose x} \int_{0}^1 \theta^{m-x+y}(1-\theta)^{m-y+x} \, d\theta \
&= (m+1){m \choose y}{m \choose x} \operatorname{Beta}(m-x+y+1,m-y+x+1) \
&= \dfrac{(m+1)! m! (m-x+y)!(m-y+x)!}{(m-y)!y! (m-x)!x!(2m+1)!} = \dfrac{{m-x+y \choose y}{m-y+x \choose x} }{{2m+1 \choose m}}.
\end{align*}
\item From part (a), it is obvious that $k(x,y) >0$ for all $x,y \in \mathcal{X},$ implying that the chain is irreducible. Also from the expression, it is clear that $k(x,y)=k(y,x),$ for all $x, y \in \mathcal{X}$ and thus the uniform probability measure on $\mathcal{X}$ is a reversible measure for this chain. This implies that $\text{Uniform}(\mathcal{X})$ is an invariant probability measure for this chain. Since we have an invariant probability measure for this irreducible chain, it implies that the chain is positive recurrent (see \cite[Corollary 6.2.42]{dembo}). Positive recurrence of the chain implies uniqueness of the invariant probability measure (see \cite[Proposition 6.2.30]{dembo}).
\item \textbf{Correction :} $f(x) = x- \frac{m}{2}.$
\nNote that $\Theta_1 \mid X_0 \sim \text{Beta}(m-X_0+1,X_0+1)$ and $X_1 \mid \Theta_1, X_0 \sim \text{Binomial}(m,\Theta_1).$ Hence,
$$ \mathbb{E}(X_1 \mid X_0) = \mathbb{E} \left[ \mathbb{E} \left(X_1 \mid \Theta_1, X_0 \right) \mid X_0\right] = \mathbb{E} \left[ m\Theta_1 \mid X_0\right] = \dfrac{m(m-X_0+1)}{m+2}.$$\nTake $f(x) = x-m/2$, for all $x \in \mathcal{X}$. Then $K$ being the transition matrix, we have for all $x \in \mathcal{X}$,
$$ (Kf)(x) = \mathbb{E}(f(X_1) \mid X_0=x) = \mathbb{E}(X_1 \mid X_0=x) - \dfrac{m}{2} = \dfrac{m(m-x+1)}{m+2} - \dfrac{m}{2} = - \dfrac{mx}{m+2} + \dfrac{m^2}{2(m+2)} = -\dfrac{m}{m+2}f(x). $$
Thus $f$ is an eigenvector of $K$ with eigenvalue $-\frac{m}{m+2}.$
\end{enumerate} |
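All three parts of the preceding problem can be verified numerically by building the transition matrix from the formula in (a); the value of $m$ below is arbitrary.
\begin{verbatim}
import numpy as np
from math import comb

m = 7
X = np.arange(m + 1)
K = np.array([[comb(m - x + y, y) * comb(m - y + x, x) for y in X] for x in X],
             dtype=float) / comb(2 * m + 1, m)

print("rows sum to 1:        ", np.allclose(K.sum(axis=1), 1.0))
pi = np.full(m + 1, 1.0 / (m + 1))
print("uniform is invariant: ", np.allclose(pi @ K, pi))
f = X - m / 2.0
print("K f = -(m/(m+2)) f:   ", np.allclose(K @ f, -(m / (m + 2)) * f))
\end{verbatim}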
2006-q5 | Let $\{X_n, \mathcal{F}_n , n \geq 1\}$ be a bounded martingale difference sequence such that $\text{E}(X_n^2|\mathcal{F}_{n-1}) = 1$ a.s. for all $n$. Let $S_n = X_1 + \cdots + X_n$. Let $g : \mathbb{R} \to \mathbb{R}$ be continuously differentiable with $g'(0) > 0$. Show that the following limits exist for all $x$ and give explicit formulas for them: (a) $\lim_{n \to \infty} \text{P} \left( n^{-5/2} \sum_{i=1}^n iS_i \leq x \right)$, (b) $\lim_{n \to \infty} \text{P} \left( \max_{1 \leq k \leq n} k(g(S_k/k) - g(0)) \geq \sqrt{n} \, x \right)$. | We have $\left\{X_n, \mathcal{F}_n, n \geq 1\right\}$ to be a bounded MG difference sequence with $\mathbb{E}(X_n^2 \mid \mathcal{F}_{n-1})=1$, a.s. for all $n$. Let $S_n = X_1+\ldots+X_n$ and $S_0 :=0$. Then $\left\{S_n, \mathcal{F}_n, n \geq 0\right\}$ is an $L^2$-MG ($\mathcal{F}_0$ being the trivial $\sigma$-algebra) with predictable compensator being,
$$ \langle S \rangle_n = \sum_{k =1}^n \mathbb{E}\left[(S_k-S_{k-1})^2 \mid \mathcal{F}_{k-1} \right] = \sum_{k =1}^n \mathbb{E}\left[ X_k^2\mid \mathcal{F}_{k-1} \right] =n, \; \text{almost surely}.$$\nTherefore, $S_n/n$ converges to $0$ a.s. (See \cite[Theorem 5.3.33(b)]{dembo}).
By hypothesis we can get a $C \in (0,\infty)$ such that $|X_k| \leq C$ a.s. for all $k$. Fix any $\varepsilon >0$. Then for large enough $n$, we have $C < \varepsilon\sqrt{n}$ and hence
$$ n^{-1} \sum_{k =1}^n \mathbb{E}\left[(S_k-S_{k-1})^2; |S_k-S_{k-1}| \geq \varepsilon \sqrt{n} \right] = n^{-1} \sum_{k =1}^n \mathbb{E}\left[X_k^2; |X_k| \geq \varepsilon \sqrt{n} \right] \longrightarrow 0.$$\nWe then define the interpolated S.P. as
$$ \widehat{S}_n(t) = n^{-1/2} \left[S_{\lfloor nt \rfloor} + (nt-\lfloor nt \rfloor) \left(S_{\lfloor nt \rfloor +1} - S_{\lfloor nt \rfloor} \right) \right] = n^{-1/2} \left[S_{\lfloor nt \rfloor} + (nt-\lfloor nt \rfloor) X_{\lfloor nt \rfloor +1} \right], \; \; 0 \leq t \leq 1. $$\nThen $\widehat{S}_n \stackrel{d}{\longrightarrow} W$ on $C([0,1])$ when $n \to \infty$. Here $W$ is the standard BM on $[0,1]$. Since, $|S_j| \leq jC$ a.s., we have $|\widehat{S}_n(t)| \leq Ct \sqrt{n},$ almost surely.
\begin{enumerate}[label=(\alph*)]
\item Note that
\begin{align*}
\Big \rvert n^{-5/2}\sum_{j=1}^n jS_j - \int_{0}^1 t \widehat{S}_n(t) \; dt \Big \rvert &\leq \sum_{j=1}^n \Big \rvert n^{-5/2}jS_j - \int_{(j-1)/n}^{j/n} t \widehat{S}_n(t) \; dt \Big \rvert \\
&= \sum_{j=1}^n \Big \rvert \int_{(j-1)/n}^{j/n} \left(t \widehat{S}_n(t) - n^{-3/2}jS_j \right) \; dt \Big \rvert \\ & \leq \sum_{j=1}^n \int_{(j-1)/n}^{j/n} \Big \rvert t \widehat{S}_n(t) - \dfrac{j}{n}n^{-1/2}S_j \Big \rvert \, dt \\ & = \sum_{j=1}^n \int_{(j-1)/n}^{j/n} \Big \rvert t \widehat{S}_n(t) - \dfrac{j}{n} \widehat{S}_n \left(\dfrac{j}{n}\right) \Big \rvert \, dt.
\end{align*}
For all $(j-1)/n \leq t \leq j/n$ where $1 \leq j \leq n$, we have
\begin{align*}
\Big \rvert t \widehat{S}_n(t) - \dfrac{j}{n} \widehat{S}_n \left(\dfrac{j}{n}\right) \Big \rvert &\leq \big \rvert t-\dfrac{j}{n} \big \rvert |\widehat{S}_n(t)| + \dfrac{j}{n}\Big \rvert \widehat{S}_n(t) - \widehat{S}_n \left(\dfrac{j}{n}\right) \Big \rvert \\
& \leq \dfrac{1}{n}|\widehat{S}_n(t)| + \dfrac{j}{n} \Big \rvert n^{-1/2}S_{j-1}+n^{-1/2}(nt-j+1)X_j - n^{-1/2}S_j \Big \rvert \\
& \leq \dfrac{1}{n}|\widehat{S}_n(t)| + \dfrac{1}{\sqrt{n}} (j-nt)|X_j| \leq \dfrac{1}{n}|\widehat{S}_n(t)| + \dfrac{1}{\sqrt{n}}|X_j| \leq \dfrac{C(t+1)}{\sqrt{n}}, \; \text{almost surely}.
\end{align*}
Therefore,
$$ \mathbb{E}\Big \rvert n^{-5/2}\sum_{j=1}^n jS_j - \int_{0}^1 t \widehat{S}_n(t) \; dt \Big \rvert \leq \int_{0}^1 \dfrac{C(t+1)}{\sqrt{n}} \, dt = O(n^{-1/2}) \to 0.$$
On the other hand, since $\mathbf{x} \to \int_{0}^1 tx(t)\, dt$ is Lipschitz continuous on $C([0,1])$, we have
$$ \int_{0}^1 t \widehat{S}_n(t) \; dt \stackrel{d}{\longrightarrow} \int_{0}^1 t W(t) \; dt.$$
Employ \textit{Slutsky's Theorem} and conclude that
$$ n^{-5/2}\sum_{j=1}^n jS_j \stackrel{d}{\longrightarrow} \int_{0}^1 t W(t) \; dt.$$
Note that $\int_{0}^1 t W(t) \; dt$ is a Gaussian variable since it is the almost sure limit of the Gaussian variables $\dfrac{1}{n} \sum_{k =1}^n \dfrac{k}{n}W \left( \dfrac{k}{n}\right)$ with
\begin{align*}
\mathbb{E} \left[\dfrac{1}{n} \sum_{k =1}^n \dfrac{k}{n}W \left( \dfrac{k}{n}\right) \right] =0, \; \; \operatorname{Var} \left(\dfrac{1}{n} \sum_{k =1}^n \dfrac{k}{n}W \left( \dfrac{k}{n}\right) \right) = \dfrac{1}{n^2} \sum_{j,k=1}^n \dfrac{jk(j \wedge k)}{n^3} &\stackrel{n \to \infty}{\longrightarrow} \int_{0}^1 \int_{0}^1 st(s \wedge t) \, ds \, dt \\
&= \int_{0}^1 \left[ t \int_{0}^t s^2 \, ds + t^2 \int_{t}^1 s \, ds\right] \, dt \\
&= \int_{0}^1 \left[ \dfrac{t^4}{3} + \dfrac{t^2(1-t^2)}{2}\right] \, dt = \dfrac{1}{6}-\dfrac{1}{30} = \dfrac{2}{15}.
\end{align*}
Hence, $\int_{0}^1 tW(t)\, dt \sim N(0,2/15)$ and therefore
$$ \lim_{n \to \infty} \mathbb{P}\left[ n^{-5/2}\sum_{j=1}^n jS_j \leq x\right] = \mathbb{P}(N(0,2/15) \leq x) = \Phi \left( x\sqrt{\dfrac{15}{2}}\right).$$
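As a quick numerical sanity check (not part of the original argument), one may take the martingale differences to be i.i.d. fair signs, which satisfy the hypotheses, and verify by Monte Carlo that the variance of $n^{-5/2}\sum_{j \leq n} jS_j$ is close to $2/15$; the sketch below is purely illustrative and all parameter values are arbitrary choices.
\begin{verbatim}
import numpy as np

# Monte Carlo check that Var(n^{-5/2} * sum_j j*S_j) is close to 2/15
# when the martingale differences are i.i.d. fair +/-1 signs.
rng = np.random.default_rng(0)
n, reps = 1000, 5000
X = rng.choice([-1.0, 1.0], size=(reps, n))   # bounded MG differences
S = np.cumsum(X, axis=1)                      # partial sums S_1, ..., S_n
j = np.arange(1, n + 1)
T = (j * S).sum(axis=1) / n**2.5              # n^{-5/2} * sum_j j*S_j
print(T.var(), 2.0 / 15.0)                    # the two values are close
\end{verbatim}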
\item It is easy to see that
\begin{align}
\Big \rvert \max_{1 \leq k \leq n} n^{-1/2}k(g(S_k/k)-g(0)) - \max_{1 \leq k \leq n} n^{-1/2}S_k g^{\prime}(0)\Big \rvert & \leq \max_{1 \leq k \leq n} n^{-1/2}k \Big \rvert g(S_k/k)-g(0) - \dfrac{S_k}{k} g^{\prime}(0)\Big \rvert \nonumber\\
& \leq \max_{1 \leq k \leq n} \left[n^{-1/2}|S_k| \sup_{0 \leq |t| \leq |S_k|/k} |g^{\prime}(t) -g^{\prime}(0)| \right]. \label{bound}
\end{align}
By \textit{Doob's $L^p$ Maximal Inequality},
$$ \mathbb{E} \left[ \max_{k=1}^n |S_k|^2 \right] =\mathbb{E} \left[\left( \max_{k=1}^n |S_k|\right)^2 \right] \leq 4 \mathbb{E}(S_n^2) = 4n.$$
Write $\gamma(t):= \sup_{0 \leq |s| \leq |t|} |g^{\prime}(s)-g^{\prime}(0)|$, for all $ t \geq 0$ and $Z_n := \sup_{k \geq n} |S_k|/k \leq C$, a.s. Then $Z_n$ converges to $0$ almost surely (as $S_n/n$ converges to $0$ almost surely). Since $\gamma$ is continuous at $0$ (as $g^{\prime}$ is continuous) and bounded on the interval $[0,C]$, we can apply DCT to conclude that $\mathbb{E}(\gamma(Z_n)^2)=o(1).$ Fix any $N \geq 1$ and observe that
\begin{align*}
\mathbb{E} \left[\max_{1 \leq k \leq n} \left(n^{-1/2}|S_k| \gamma(S_k/k) \right) \right] &\leq \mathbb{E} \left[\max_{1 \leq k \leq N} \left(n^{-1/2}|S_k| \gamma(S_k/k) \right) \right] + \mathbb{E} \left[\max_{N \leq k \leq n} \left(n^{-1/2}|S_k| \gamma(S_k/k) \right) \right] \\& \leq \gamma(C) n^{-1/2} \mathbb{E} \left[\max_{1 \leq k \leq N}|S_k| \right] + \mathbb{E} \left[ \left(\max_{N \leq k \leq n} n^{-1/2}|S_k|\right) \gamma(Z_N) \right] \\& \leq \gamma(C) n^{-1/2} \sqrt{\mathbb{E} \left[\max_{1 \leq k \leq N}|S_k|^2 \right]}+ \sqrt{\mathbb{E} \left[ \max_{1 \leq k \leq n} n^{-1}|S_k|^2 \right] \mathbb{E}(\gamma(Z_N)^2)} \\&= 2\sqrt{N}\gamma(C)n^{-1/2} + 2 \sqrt{\mathbb{E}(\gamma(Z_N)^2)} \to 2 \sqrt{\mathbb{E}(\gamma(Z_N)^2)},
\end{align*}
as $n \to \infty$. Taking $N \uparrow \infty$ and combining it with (\ref{bound}), we conclude,
$$ \max_{1 \leq k \leq n} n^{-1/2}k(g(S_k/k)-g(0)) - \max_{1 \leq k \leq n} n^{-1/2}S_k g^{\prime}(0) \stackrel{p}{\longrightarrow} 0.$$
On the other hand,
\begin{align*}
\Big \rvert \max_{1 \leq k \leq n} n^{-1/2}S_k g^{\prime}(0) - \sup_{0 \leq t \leq 1} g^{\prime}(0) \widehat{S}_n(t) \Big \rvert & = \Big \rvert \max_{1 \leq k \leq n} g^{\prime}(0) \widehat{S}_n(k/n) - \sup_{0 \leq t \leq 1} g^{\prime}(0) \widehat{S}_n(t) \Big \rvert \\
&= \Big \rvert \max_{0 \leq t \leq 1} g^{\prime}(0) \widehat{S}_n(\lceil nt \rceil /n) - \sup_{0 \leq t \leq 1} g^{\prime}(0) \widehat{S}_n(t) \Big \rvert \\
&= g^{\prime}(0) \max_{0 \leq t \leq 1} \Big \rvert \widehat{S}_n(\lceil nt \rceil /n) - \widehat{S}_n(t) \Big \rvert \leq g^{\prime}(0) n^{-1/2}\max_{1 \leq k \leq n} |X_k| \leq \dfrac{Cg^{\prime}(0)}{\sqrt{n}}.
\end{align*}
Thus
$$ \Big \rvert \max_{1 \leq k \leq n} n^{-1/2}S_k g^{\prime}(0) - \sup_{0 \leq t \leq 1} g^{\prime}(0) \widehat{S}_n(t) \Big \rvert \stackrel{p}{\longrightarrow} 0.$$
Since, $\mathbf{x} \to \sup_{0 \leq t \leq 1} x(t)$ is Lipschitz continuous on $C([0,1])$ we have
$$ \sup_{0 \leq t \leq 1} \widehat{S}_n(t) \stackrel{d}{\longrightarrow} \sup_{0 \leq t \leq 1} W(t).$$
Use $g^{\prime}(0) >0$ and \textit{Slutsky's Theorem} to conclude that
$$ \max_{1 \leq k \leq n} n^{-1/2}k(g(S_k/k)-g(0)) \stackrel{d}{\longrightarrow} g^{\prime}(0) \sup_{0 \leq t \leq 1} W(t) \stackrel{d}{=} g^{\prime}(0)|W(1)|.$$
Therefore,
$$ \mathbb{P} \left( \max_{1 \leq k \leq n} k(g(S_k/k)-g(0)) \geq \sqrt{n}x \right) = \mathbb{P} \left( \max_{1 \leq k \leq n} n^{-1/2}k(g(S_k/k)-g(0)) \geq x \right) \to \mathbb{P}(g^{\prime}(0)|W(1)| \geq x) = 2\bar{\Phi}\left( \dfrac{x}{g^{\prime}(0)}\right).$$
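For a quick consistency check (not part of the original solution), take $g(x)=x$, so that $g^{\prime}(0)=1$ and $k(g(S_k/k)-g(0))=S_k$; the limit then specializes to the classical reflection-principle statement
$$ \mathbb{P}\left( \max_{1 \leq k \leq n} S_k \geq \sqrt{n}\,x\right) \longrightarrow \mathbb{P}\left( \sup_{0 \leq t \leq 1} W(t) \geq x\right) = 2\bar{\Phi}(x),$$
which agrees with the general formula above.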
\end{enumerate} |
2006-q6 | The process $\{X_t, \ t \geq 0\}$ has stationary independent increments. \begin{itemize} \item[(a)] Suppose $X_t$ has continuous sample paths and for $a \geq 0$, let $T_a = \inf\{t \geq 0 : X_t = a\}$. Show that $\{T_a$, $a \geq 0\}$ is a left-continuous process with stationary independent increments. \item[(b)] Suppose that $X_t$ has right-continuous sample paths such that $X_0 = 0$ and $\lim_{t \to \infty} \text{E}|X_t| = 0$. Show that $\text{E}|X_1| < \infty$ and that if $T$ is a stopping time with respect to a filtration $\mathcal{F}_t$ to which $X_t$ is adapted such that $ET < \infty$, then $EX_T = (EX_1)(ET)$. \end{itemize} | \begin{enumerate}[label=(\alph*)]
\item
\textbf{Correction :} Assume that $X_0=0$. Otherwise consider the following counter-example. $X_t = t+1$, for all $t \geq 0$. This process clearly has stationary, independent increments. and continuous sample paths. But $T_a = \infty$ for all $0 \leq a <1$ while $T_1=0$, implying that the sample path of $T$ is not left-continuous at $1$ with probability $1$.
Take any $\omega$ in the sample space $\Omega$ and $a \geq 0$. By definition, either $T_a(\omega) = \infty$ or there exists a sequence $\left\{t_n\right\}$ of non-negative real numbers (of course depending on $\omega$) such that $t_n \downarrow T_a(\omega)$ and $X_{t_n}(\omega)=a$ for all $n \geq 1$. The sample-path-continuity of $X$ implies that in the second scenario, we have $X_{T_a(\omega)}(\omega)=a$. Now take $0 \leq a^{\prime} <a$. If $T_a(\omega) = \infty$, then trivially $T_{a^{\prime}}(\omega) \leq T_a(\omega)$. Otherwise we have $X_0(\omega) =0$ and $X_{T_a(\omega)}(\omega)=a$; these two facts with the aid of sample-path-continuity of $X$ implies existence of $0 \leq t < T_a(\omega)$ such that $X_t(\omega) =a^{\prime}$ and hence $T_{a^{\prime}}(\omega) \leq T_a(\omega)$. Thus $a \mapsto T_a$ has non-decreasing sample paths.
Now take any $a >0$ and $a_n \geq 0$ such that $a_n \uparrow a$. By monotonicity of $T$, we conclude that $\lim_{n \to \infty} T_{a_n}(\omega)$ exists and the limit is at most $T_a(\omega)$. Let us call this limit $c(\omega)$. If $c(\omega)=\infty$ we automatically get that $\infty = T_a(\omega)=c(\omega).$ Otherwise, $T_{a_n}(\omega)\leq c(\omega)<\infty$ and hence $X_{T_{a_n}(\omega)}(\omega)=a_n$. Sample-path-continuity of $X$ implies that $X_{c(\omega)} = a$ and therefore, $T_a(\omega) \leq c(\omega)$. Thus we conclude $\lim_{n \to \infty} T_{a_n} = T_a$ i.e., $T$ has left-continuous sample paths.
To make sense of the increments, we now assume that almost surely $T_a < \infty$ for all $a \geq 0$ which is equivalent to $\limsup_{t \to \infty} X_t = \infty$ almost surely, since $X$ has continuous sample paths starting at $0$.
Let $\mathcal{G}_t := \sigma \left( X_s : 0 \leq s \leq t\right)$. Since $X$ has continuous sample paths with stationary, independent increments, we have $\left\{X_t, \mathcal{G}_t, t \geq 0\right\}$ to be a strong Markov process (see \cite[Exercise 9.3.20(b)]{dembo}). Fix $0 \leq a <b$ and consider the $\mathcal{G}_t$-stopping time $T_a$. Let $X_{T_a+\cdot}$ denote the time-translated process. Fix $v \geq 0$ and consider the function $h : \mathbb{R}_{ \geq 0} \times \mathbb{R}^{\infty} \to \mathbb{R}$ defined as
$$ h(s, \mathbf{x}(\cdot)) := \mathbbm{1}(\sup_{0 \leq t \leq v} \mathbf{x}(t) < b) .$$
Clearly $h$ is bounded measurable on the corresponding $\sigma$-algebras. Hence, almost surely
\begin{equation}{\label{strongmar}}
\mathbbm{1}(T_a < \infty) \mathbb{E} \left[ h(T_a,X_{T_a + \cdot}) \mid \mathcal{G}_{T_a}\right] = g_h(T_a, X_{T_a}) \mathbbm{1}(T_a < \infty),
\end{equation}
where $g_h(s,x) := \mathbb{E} \left[ h(s,X_{\cdot}) \mid X_0=x\right]$ (See \cite[Corollary 9.3.14]{dembo}). Now,
\begin{align*}
g_h(s,a) = \mathbb{P} \left[ \sup_{0 \leq t \leq v} X_t < b \mid X_0=a\right] = \mathbb{P} \left[ \sup_{0 \leq t \leq v}( X_t-X_0) < b-a \mid X_0=a\right] & \stackrel{(i)}{=} \mathbb{P} \left[ \sup_{0 \leq t \leq v}( X_t-X_0) < b-a \right] \\
& = \mathbb{P}(T_{b-a} > v),
\end{align*}
where in $(i)$ we have used the independent increment property of $X$. On the other hand,
$$ \mathbbm{1}(T_a < \infty) \mathbb{E} \left[ h(T_a,X_{T_a + \cdot}) \mid \mathcal{G}_{T_a}\right] = \mathbbm{1}(T_a < \infty)\mathbb{P} \left[ \sup_{0 \leq s \leq v} X_{T_a+s} < b \mid \mathcal{G}_{T_a}\right] = \mathbbm{1}(T_a < \infty)\mathbb{P} \left[ T_b > T_a + v \mid \mathcal{G}_{T_a}\right].$$
Since $T_a < \infty$ and $X_{T_a}=a$ almost surely, (\ref{strongmar}) becomes
$$ \mathbb{P} \left[ T_b-T_a > v \mid \mathcal{G}_{T_a}\right] = \mathbb{P}(T_{b-a} > v), \;\, \text{almost surely}.$$
Since $T_c \in m\mathcal{G}_{T_a}$ for all $0 \leq c \leq a$ and $T_0=0$, we have shown that the process $T$ has independent and stationary increments.
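As an illustration (not needed for the proof), take $X$ to be standard Brownian motion started at $0$; then $\mathbb{E}e^{-\lambda T_a} = e^{-a\sqrt{2\lambda}}$ for $\lambda >0$, and the stationary independent increments of $a \mapsto T_a$ are reflected in the multiplicative identity
$$ \mathbb{E}e^{-\lambda T_b} = e^{-b\sqrt{2\lambda}} = e^{-a\sqrt{2\lambda}}\, e^{-(b-a)\sqrt{2\lambda}} = \mathbb{E}e^{-\lambda T_a}\; \mathbb{E}e^{-\lambda T_{b-a}}, \qquad 0 \leq a < b,$$
i.e. $T_b$ has the law of $T_a$ plus an independent copy of $T_{b-a}$, which is the one-sided stable-$1/2$ convolution identity.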
\item \textbf{Correction :} $\lim_{t \downarrow 0} \mathbb{E}|X_t| =0,$ and $\mathcal{F}_t$ is such a filtration that the independence increment property holds with respect to this filtration.
We have $\lim_{t \to 0} \mathbb{E}|X_t| =0$ which implies that $\mathbb{E}|X_t| < \infty$ for all $0 \leq t \leq 1/n$ for some $n \in \mathbb{N}$. Fix $m \geq n$. Then by stationary independent increment property, we have $\left\{X_{km^{-1}}-X_{(k-1)m^{-1}} \mid k \geq 1\right\}$ to be an \textit{i.i.d.} collection with finite common mean since $\mathbb{E}|X_{m^{-1}}| < \infty$. This yields that
$$ \mathbb{E} |X_{km^{-1}}| \leq \sum_{l=1}^{k} \mathbb{E}|X_{lm^{-1}}-X_{(l-1)m^{-1}}| = k \mathbb{E}|X_{m^{-1}}| < \infty,$$
for all $k \geq 1$. Since the choice of $m$ is arbitrary, this proves that $\mathbb{E}|X_q|< \infty$, for all $q \in \mathbb{Q}_{\geq 0}$, in particular $\mathbb{E}|X_1| < \infty$.
Let $\mu(t) := \mathbb{E}(X_t)$, for all $t \in \mathbb{Q}_{\geq 0}$. Then by stationary increment property,
$$\mu(s+t) = \mathbb{E}(X_{s+t}) = \mathbb{E}(X_{s+t}-X_t) + \mathbb{E}(X_t) = \mathbb{E}(X_s) + \mathbb{E}(X_t) = \mu(s) + \mu(t),$$
for all $s,t \in \mathbb{Q}_{\geq 0}$ with $\mu(0)=0.$ Upon applying the above additivity property many times, we conclude that $\mu(q)=q\mu(1)$, for all $q \in \mathbb{Q}_{\geq 0}$.
Take any $t \in \mathbb{R}_{\geq 0}$ and get $t_n \downarrow t$ such that $t_n \in \mathbb{Q}.$ Then
$$ \mathbb{E}|X_{t_n}-X_t| = \mathbb{E}|X_{t_n-t}-X_0| = \mathbb{E}|X_{t_n-t}| \longrightarrow 0,$$
as $n \to \infty$. Hence, $\mathbb{E}(X_t) = \lim_{n \to \infty} \mathbb{E}(X_{t_n}) = t\mu(1).$ In other words, $\mathbb{E}(X_t) = t \mathbb{E}(X_1)$, for all $t \geq 0$.
Set $\xi := \mathbb{E}(X_1)$. We have already shown that $X_t \in L^1$ for all $t \geq 0$. As a first step, suppose that the stopping time $T$ takes values in the discrete set $\left\{k/m : k \in \mathbb{Z}_{\geq 0}\right\}$ for some $m \geq 1$. Then
\begin{align*}
\mathbb{E}|X_{T}| = \sum_{k \geq 1} \mathbb{E} \left[ \Big \rvert \sum_{l=1}^{k} \left(X_{lm^{-1}}-X_{(l-1)m^{-1}} \right) \Big \rvert \mathbbm{1}(T=km^{-1})\right] & \leq \sum_{k \geq 1} \sum_{l=1}^{k}\mathbb{E} \left[ \Big \rvert \left(X_{lm^{-1}}-X_{(l-1)m^{-1}} \right) \Big \rvert \mathbbm{1}(T=km^{-1})\right]\\
&= \sum_{l \geq 1} \mathbb{E} \left[ \Big \rvert \left(X_{lm^{-1}}-X_{(l-1)m^{-1}}\right) \Big \rvert \mathbbm{1}(T \geq lm^{-1})\right] \\
& \stackrel{(ii)}{=} \sum_{l \geq 1} \mathbb{E} |X_{m^{-1}}| \mathbb{P}(T \geq lm^{-1}) = \mathbb{E} |X_{m^{-1}}| \mathbb{E}(mT) < \infty,
\end{align*}
where $(ii)$ holds since $ \mathbbm{1}(T \geq lm^{-1})$ is $\mathcal{F}_{(l-1)m^{-1}}$-measurable. This shows that $X_T$ is integrable, and hence we may repeat the same computation without the absolute values; the interchange of sums is justified by \textit{Fubini's Theorem}.
\begin{align*}
\mathbb{E}X_{T} = \sum_{k \geq 1} \mathbb{E} \left[ \sum_{l=1}^{k} \left(X_{lm^{-1}}-X_{(l-1)m^{-1}} \right) \mathbbm{1}(T=km^{-1})\right] & = \sum_{k \geq 1} \sum_{l=1}^{k}\mathbb{E} \left[ \left(X_{lm^{-1}}-X_{(l-1)m^{-1}} \right) \mathbbm{1}(T=km^{-1})\right]\\
&= \sum_{l \geq 1} \mathbb{E} \left[ \left(X_{lm^{-1}}-X_{(l-1)m^{-1}}\right) \mathbbm{1}(T \geq lm^{-1})\right] \\
& = \sum_{l \geq 1} \mathbb{E}( X_{m^{-1}}) \mathbb{P}(T \geq lm^{-1}) = \xi m^{-1} \mathbb{E}(mT) = \xi \mathbb{E}(T).
\end{align*}
Now come back to the case of general $T$. Consider $T_m := m^{-1}\lceil mT \rceil $ for $m \in \mathbb{N}$. $T_m$ is also a stopping time with $0 \leq T \leq T_m \leq T + m^{-1}$ (thus $\mathbb{E}T_m < \infty$) and $T_m$ takes values in $\left\{k/m : k \in \mathbb{Z}_{\geq 0}\right\}$. Hence, $\mathbb{E}(X_{T_m}) = \xi \mathbb{E}(T_m)$. Since $|T_m-T| \leq m^{-1}$ we have $\mathbb{E}T_m \to \mathbb{E}(T)$ as $m \to \infty$. Hence it is enough to show that $ \mathbb{E}(X_{T_m}) \to \mathbb{E}(X_T)$.
Since $X$ has right-continuous sample paths with stationary, independent increments, we have $\left\{X_t, \mathcal{F}_t, t \geq 0\right\}$ to be a strong Markov process (see \cite[Exercise 9.3.20(b)]{dembo}). Fix $v \geq 0$ and consider the function $h : \mathbb{R}_{ \geq 0} \times \mathbb{R}^{\infty} \to \mathbb{R}$ defined as
$$ h(s, \mathbf{x}(\cdot)) := \mathbbm{1}\left( \big \rvert x(m^{-1}\lceil ms \rceil-s) - x(0) \big \rvert > v \right) .$$
Clearly $h$ is bounded measurable on the corresponding $\sigma$-algebras. Hence, almost surely
\begin{equation}{\label{strongmar1}}
\mathbbm{1}(T < \infty) \mathbb{E} \left[ h(T,X_{T + \cdot}) \mid \mathcal{F}_{T}\right] = g_h(T, X_{T}) \mathbbm{1}(T < \infty).
\end{equation}
It is clear from the definition that $ h(T,X_{T + \cdot}) = \mathbbm{1}(|X_{T_m}-X_T| > v)$ and
\begin{align*}
g_h(s,a) = \mathbb{P} \left[ \big \rvert X(m^{-1}\lceil ms \rceil-s) - X(0) \big \rvert > v \mid X_0=a\right] = \mathbb{P} \left[ \big \rvert X(m^{-1}\lceil ms \rceil-s) \big \rvert > v\right] =: H(m^{-1}\lceil ms \rceil-s, v),
\end{align*}
where $H(t,v) := \mathbb{P}(|X_t| > v)$. Notice that $\int_{0}^{\infty} H(t,v) \, dv = \mathbb{E}|X_t| =: \nu(t)$. Integrating both sides of (\ref{strongmar1}) with respect to $v$ on $[0,\infty)$, we get almost surely,
$$ \mathbbm{1}(T < \infty) \mathbb{E} \left[ \lvert X_{T_m}-X_T\rvert \mid \mathcal{F}_{T}\right] = \nu(T_m-T) \mathbbm{1}(T < \infty).
$$
To make the almost sure argument rigorous, note that it is enough to consider the event on which (\ref{strongmar1}) holds for all positive rational $v$ (this event is an almost sure event) and observe that on this event (\ref{strongmar1}) holds for all $v$ by right-continuity of distribution functions. All these arguments finally lead us to the fact that $\mathbb{E}|X_{T_m}-X_T| = \mathbb{E}(\nu(T_m-T)).$ Hence, it is enough to show that $\mathbb{E}(\nu(T_m-T))=o(1).$ But
$$ 0 \leq \mathbb{E}\nu(T_m-T) \leq \left[ \sup_{0 \leq t \leq m^{-1}} \nu(t)\right] = o(1),$$
as $\nu(t) \to 0$ as $t \downarrow 0$. This completes the proof.
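A minimal numerical illustration of the identity $\mathbb{E}X_T = (\mathbb{E}X_1)(\mathbb{E}T)$ (purely a sketch, not part of the proof): take $X$ to be a Poisson process of rate $\lambda$, which has stationary independent increments, right-continuous paths and $\mathbb{E}|X_t| = \lambda t \to 0$ as $t \downarrow 0$, and take the bounded stopping time $T = \min(\text{time of the 5th jump},\,1)$; the rate and the threshold are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

# Check E[X_T] = E[X_1] * E[T] for a rate-lam Poisson process and the
# bounded stopping time T = min(time of 5th jump, 1). Illustrative only.
rng = np.random.default_rng(2)
lam, reps = 2.0, 200_000
gaps = rng.exponential(1.0 / lam, size=(reps, 5))
arrivals = np.cumsum(gaps, axis=1)          # first five arrival times
T = np.minimum(arrivals[:, -1], 1.0)        # stopping time, T <= 1
XT = (arrivals <= T[:, None]).sum(axis=1)   # number of jumps up to time T
print(XT.mean(), lam * T.mean())            # the two numbers nearly agree
\end{verbatim}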
\begin{remark}
If we assumed $\lim_{t \to \infty} \mathbb{E}|X_t|=0$, that would imply for all $s \geq 0$,
$$ 0 \leq \mathbb{E}|X_s| = \mathbb{E}|X_{t+s} - X_t| \leq \mathbb{E}|X_{t+s}| + \mathbb{E}|X_t| \stackrel{t \to \infty}{\longrightarrow} 0,$$
which would imply that $X_t=0$ almost surely for all $t \geq 0$. With the aid of right-continuity this implies that $X_t=0, \; \forall \; t \geq 0$, almost surely. The assertions will then be trivial.
\end{remark}
\end{enumerate} |
2007-q1 | Let $X_1, X_2, \ldots , X_n$ be independent random variables with $X_k$ taking the values $\pm k^a$ with probabilities $\frac{1}{2}k^{-a}$ and values $\pm 1$ with probabilities $\frac{1}{2}(1 - k^{-a})$, where $a > 0$. Let $S_n = \sum_{k=1}^{n} X_k$. For which values of $a$ do there exist constants $c_n \to \infty$ such that $c_n^{-1} S_n$ is asymptotically normally distributed? Explain. | We have independent random variables $X_1, X_2, \ldots$ with $$ \mathbb{P}(X_k=k^a) = \mathbb{P}(X_k=-k^a)=\dfrac{1}{2k^a},\; \mathbb{P}(X_k=1) = \mathbb{P}(X_k=-1)=\dfrac{1}{2}(1-k^{-a}), \; \forall \; k \geq 1.$$ $S_n = \sum_{k=1}^n X_k$, for all $n \geq 1$. We shall consider three separate cases.
\begin{itemize}
\item{Case 1 :} $a>1$.
Define $Y_i = \operatorname{sgn}(X_i)$, for all $i \geq 1$. Then the $Y_i$'s are \textit{i.i.d.} $\text{Uniform}(\{-1,+1\})$ variables. Therefore, using the \textit{Central Limit Theorem}, we get
$$ \dfrac{\sum_{i=1}^n Y_i}{\sqrt{n}} \stackrel{d}{\longrightarrow} N(0,1).$$ On the other hand,
$$ \sum_{k \geq 1} \mathbb{P}(X_k \neq Y_k) = \sum_{k \geq 1} \mathbb{P}(|X_k|=k^a) = \sum_{k \geq 1} k^{-a} < \infty, $$
and hence by \textit{Borel-Cantelli Lemma I}, we have $X_k = Y_k$, eventually for all $k$, with probability $1$. Thus
$$ \dfrac{S_n - \sum_{i=1}^n Y_i}{\sqrt{n}} \stackrel{a.s.}{\longrightarrow} 0.$$ Apply \textit{Slutsky's Theorem} and conclude $S_n/\sqrt{n} \stackrel{d}{\longrightarrow} N(0,1).$
\item{Case 2 :} $a<1$.
Note that, the distribution of $X_k$ is symmetric around $0$ and hence $\mathbb{E}(X_k)=0$. Set $T_n(j):= \sum_{k=1}^n k^j$, for all $n \geq 1$, and hence for all $j >-1$, we have
$$ T_n(j) = \sum_{k=1}^n k^j = n^{j+1} \dfrac{1}{n} \sum_{k=1}^n\left( \dfrac{k}{n} \right)^j = n^{j+1} \left[ \int_{0}^1 x^j \, dx +o(1) \right] = \dfrac{n^{j+1}}{j+1} + o(n^{j+1}), \; \text{ as } n \to \infty,$$ where $T_n(j) =O(1)$, if $j<-1$.
Now,
$$ s_n^2 := \sum_{k=1}^n \operatorname{Var}(X_k) = \sum_{k=1}^n \mathbb{E}(X_k^2)=\sum_{k=1}^n \left[ k^{2a}k^{-a} + (1-k^{-a})\right] = \sum_{k=1}^n \left[ k^{a} -k^{-a} +1 \right] = T_n(a)-T_n(-a)+n \sim \dfrac{n^{a+1}}{a+1}$$ whereas $$\sum_{k=1}^n \mathbb{E}(|X_k|^3)=\sum_{k=1}^n \left[ k^{3a}k^{-a} + (1-k^{-a})\right] = \sum_{k=1}^n \left[ k^{2a} -k^{-a} +1 \right] = T_n(2a)-T_n(-a)+n \sim \dfrac{n^{2a+1}}{2a+1}.$$ Therefore,
$$ \dfrac{1}{s_n^3} \sum_{k=1}^n \mathbb{E}(|X_k|^3) \sim \dfrac{(a+1)^{3/2}}{2a+1}\dfrac{n^{2a+1}}{n^{3(a+1)/2}} = \dfrac{(a+1)^{3/2}}{2a+1} n^{(a-1)/2} = o(1).$$ Thus \textit{Lyapounav's Condition} holds true for $\delta=1$ and so $S_n/s_n \stackrel{d}{\longrightarrow} N(0,1)$. Using the asymptotics for $s_n$, we conclude $$ n^{-(a+1)/2}S_n \stackrel{d}{\longrightarrow} N \left(0, \dfrac{1}{a+1}\right), \; \text{ as } n \to \infty.$$
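A small simulation sketch of this case (illustrative only; the specific value $a=1/2$ and the sample sizes are arbitrary choices): the empirical variance of $S_n$ should be close to $n^{a+1}/(a+1)$, in line with the normal limit just derived.
\begin{verbatim}
import numpy as np

# For a = 0.5, Var(S_n) ~ n^{a+1}/(a+1); compare the two numerically.
rng = np.random.default_rng(3)
a, n, reps = 0.5, 2000, 2000
k = np.arange(1, n + 1)
big = rng.random((reps, n)) < k**(-a)        # |X_k| = k^a with prob k^{-a}
mag = np.where(big, k**a, 1.0)
signs = rng.choice([-1.0, 1.0], size=(reps, n))
S = (mag * signs).sum(axis=1)
print(S.var() / n**(a + 1), 1.0 / (a + 1))   # ratio close to 2/3, up to
                                             # finite-n and Monte Carlo error
\end{verbatim}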
\item{Case 3 :} $a=1$.
We focus on finding the asymptotics of $S_n/c_n$ for some scaling sequence $\left\{c_n\right\}$ with $c_n \uparrow \infty$, to be chosen later. Fix $t \in \mathbb{R}$. Since, the variables $X_k$'s have distributions which are symmetric around $0$, we get, $$ \mathbb{E} \exp\left[ it\dfrac{S_n}{c_n}\right] = \prod_{k=1}^n \mathbb{E} \exp\left[ it\dfrac{X_k}{c_n}\right] = \prod_{k=1}^n \mathbb{E} \cos \left[ t\dfrac{X_k}{c_n}\right] = \prod_{k=1}^n \left[ \dfrac{1}{k}\cos \left( \dfrac{tk}{c_n}\right) + \left(1-\dfrac{1}{k}\right)\cos \left( \dfrac{t}{c_n}\right) \right] = \prod_{k=1}^n g_{n,k}(t), $$ where $$ g_{n,k}(t) := \dfrac{1}{k}\cos \left( \dfrac{tk}{c_n}\right) + \left(1-\dfrac{1}{k}\right)\cos \left( \dfrac{t}{c_n}\right), \; \forall \; 1 \leq k \leq n.$$ Then for large enough $n$, $$ \log \mathbb{E} \exp\left[ it\dfrac{S_n}{c_n}\right] = \sum_{k=1}^n \log g_{n,k}(t).$$ Note that, as $n \to \infty$,\begin{align*}
\min_{3 \leq k \leq n} g_{n,k}(t) \geq \min_{3 \leq k \leq n} \left[ \left(1-\dfrac{1}{k}\right)\cos \left( \dfrac{t}{c_n}\right)- \dfrac{1}{k} \right] &= \min_{3 \leq k \leq n} \left[ \cos \left( \dfrac{t}{c_n}\right)- \dfrac{1}{k}\left( 1+ \cos \left(\dfrac{t}{c_n} \right)\right) \right] \\
& \geq \cos \left( \dfrac{t}{c_n}\right)- \dfrac{1}{3}\left( 1+ \cos \left(\dfrac{t}{c_n} \right)\right) \rightarrow \dfrac{1}{3} >0,
\end{align*} while $g_{n,1}(t),g_{n,2}(t) \rightarrow 1$. Thus for large enough $n$, we have $\min_{1 \leq k \leq n} g_{n,k}(t) >0$ and so taking logarithms is valid.
Now observe that,
\begin{align*}
0 \leq 1-g_{n,k}(t) &= -\dfrac{1}{k}\cos \left( \dfrac{tk}{c_n}\right) + \dfrac{1}{k}\cos \left( \dfrac{t}{c_n}\right) + 1- \cos \left( \dfrac{t}{c_n}\right) \\
&= \dfrac{1}{k}\left\{1-\cos \left( \dfrac{tk}{c_n}\right)\right\} - \dfrac{1}{k}\left\{1-\cos \left( \dfrac{t}{c_n}\right)\right\} + 1- \cos \left( \dfrac{t}{c_n}\right) \\
& = \dfrac{2}{k}\sin^2 \left( \dfrac{tk}{2c_n}\right) - \dfrac{2}{k}\sin^2 \left( \dfrac{t}{2c_n}\right) + 2\sin^2 \left( \dfrac{t}{2c_n}\right) \\
& \leq \dfrac{2}{k} \left( \dfrac{tk}{2c_n}\right)^2 + 2 \left( \dfrac{t}{2c_n}\right)^2 \leq \dfrac{kt^2}{2c_n^2} + \dfrac{t^2}{2c_n^2} \leq \dfrac{nt^2}{c_n^2}.
\end{align*} Take $c_n=n$. Then
\begin{align*}
\sum_{k=1}^n (1-g_{n,k}(t)) &= 2\sum_{k=1}^n \left[ \dfrac{1}{k}\sin^2 \left( \dfrac{tk}{2n}\right) - \dfrac{1}{k}\sin^2 \left( \dfrac{t}{2n}\right) + \sin^2 \left( \dfrac{t}{2n}\right) \right] \\
&= \dfrac{2}{n} \sum_{k=1}^n \dfrac{n}{k}\sin^2 \left( \dfrac{tk}{2n}\right) + 2 \sin^2 \left( \dfrac{t}{2n}\right) \left(n - \sum_{k=1}^n \dfrac{1}{k} \right) \\
&= 2 \int_{0}^1 \dfrac{1}{x} \sin^2 \left(\dfrac{tx}{2}\right) \, dx + o(1) + \dfrac{2t^2}{4n^2} (1+o(1)) \left( n- \log n +O(1)\right) \\
&= 2 \int_{0}^{1/2} \dfrac{\sin^2(tz)}{z} \, dz +o(1), \; \text{ as } n \to \infty.
\end{align*} Note that the integral above is finite since,
$$ 2\int_{0}^{1/2} \dfrac{\sin^2(tz)}{z} \, dz \leq 2 \int_{0}^{1/2} \dfrac{t^2z^2}{z} \, dz = 2\int_{0}^{1/2} t^2z \, dz = t^2/4.$$ Since, $0 < g_{n,k}(t) \leq 1$, for all $1 \leq k \leq n$ and large enough $n$, we use the error bound $|\log(1+x)-x| \leq Cx^2$ for all $|x| <1$ and some $C \in (0,\infty)$, to conclude the following.
\begin{align*}
\bigg \rvert \sum_{k=1}^n \log g_{n,k}(t) + \sum_{k=1}^n (1-g_{n,k}(t)\bigg \rvert & \leq C \sum_{k=1}^n (1-g_{n,k}(t))^2 \leq C \dfrac{nt^2}{c_n^2} \sum_{k=1}^n (1-g_{n,k}(t)) = \dfrac{Ct^2}{n}\sum_{k=1}^n (1-g_{n,k}(t)) \rightarrow 0.
\end{align*} Therefore,
$$ \log \mathbb{E} \exp\left(\dfrac{itS_n}{n} \right) \longrightarrow -2 \int_{0}^{1/2} \dfrac{\sin^2(tz)}{z} \, dz =: \log \psi (t), \; \forall \; t \in \mathbb{R}.$$ To conclude convergence in distribution for $S_n/n$, we need to show that $\psi$ is continuous at $0$ (see \cite[Theorem 3.3.18]{dembo}). But this follows readily from DCT since $\sup_{-1 \leq t \leq 1} \sin^2(tz)/z \leq z$, which is integrable on $[0,1/2]$. So by \textit{Levy's continuity Theorem}, $S_n/n$ converges in distribution to $G$ with characteristic function $\psi$.
Now suppose $S_n/c_n$ converges in distribution to some Gaussian variable for some constants $c_n \to \infty$. Apply the \textit{Convergence of Types Theorem} and conclude that $G$ has to be a Gaussian variable. In other words,
\begin{equation}{\label{Gauss}}
\int_{0}^{1/2} \dfrac{\sin^2(tz)}{z} \, dz = \alpha^2 t^2, \; \forall \; t \in \mathbb{R}, \end{equation} for some $\alpha \neq 0$. But,
$$ \int_{0}^{1/2} \dfrac{\sin^2(tz)}{z} \, dz = \dfrac{1}{2}\int_{0}^{1/2} \dfrac{1-\cos(2tz)}{z} \, dz = \dfrac{1}{2}\int_{0}^{1/2} \dfrac{1}{z} \sum_{k \geq 1} \dfrac{(-1)^{k-1}(2tz)^{2k}}{(2k)!} dz \stackrel{\text{(Fubini)}}{=} \sum_{k \geq 1} \dfrac{(-1)^{k-1}t^{2k}}{4k(2k)!}. $$ This contradicts (\ref{Gauss}).
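Concretely, writing out the first two terms of this series (a small elaboration, not in the original),
$$ \int_{0}^{1/2} \dfrac{\sin^2(tz)}{z} \, dz = \dfrac{t^2}{8} - \dfrac{t^4}{192} + O(t^6),$$
whose non-vanishing $t^4$ coefficient is incompatible with the quadratic form $\alpha^2 t^2$ required by (\ref{Gauss}).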
\end{itemize} |
2007-q2 | Let $N_1(t)$ and $N_2(t)$ be independent Poisson processes with the common intensity parameter $\lambda$. For an integer $b > 0$, let $T = \inf \{t : |N_1(t) - N_2(t)| = b\}$. (a) Find $E\{T\}$. (b) Let $\alpha > 0$. Find $E\{e^{-\alpha T}\}$. | $N_1,N_2$ are independent Poisson Processes with intensity $\lambda$ and $b \in \mathbb{N}$. Set $Y(t) = N_1(t)-N_2(t)$, which starts from $Y(0)=0$ and has jump sizes $\pm 1$. $T:= \inf \left\{t \geq 0 : |Y(t)|=b\right\}$. Also set $\mathcal{F}_t := \sigma \left( N_1(s), N_2(s) \mid 0 \leq s \leq t\right)$.
\begin{enumerate}[label=(\alph*)]
\item Note that $Y(t)$ has mean zero and is square integrable for all $t$ By independent increment property of $N_1$ and $N_2$ we have $Y(t)-Y(s) \perp \!\!\perp \mathcal{F}_s$ for all $0 \leq s \leq t$ and therefore,
\begin{align*}
\mathbb{E} \left(Y(t)^2 \mid \mathcal{F}_s \right) = Y(s)^2 + \mathbb{E}(Y(t)-Y(s))^2 &= Y(s)^2 + \operatorname{Var}(Y(t)-Y(s)) \\
&= Y(s)^2 + \operatorname{Var}(N_1(t)-N_1(s)) + \operatorname{Var}(N_2(t)-N_2(s)) \\
&= Y(s)^2 + 2 \lambda(t-s).
\end{align*} Hence, $\left\{Y(t)^2-2\lambda t, \mathcal{F}_t, t \geq 0 \right\}$ is a MG. Employing OST we get $\mathbb{E}(Y(T \wedge t))^2 = 2 \lambda \mathbb{E}(T \wedge t)$, for all $t \geq 0$. Since $Y(0)=0$ and the process has jump sizes $\pm 1$, we have $|Y(T \wedge t)| \leq b$ for all $t \geq 0$ and hence $2\lambda \mathbb{E}(T \wedge t) \leq b^2$. Apply MCT and conclude $\mathbb{E}(T) \leq b^2/(2\lambda)$. This implies that $T <\infty$ almost surely and in particular $|Y(T)|=b$, since jump sizes are $\pm 1$ and $b$ is a positive integer. Therefore, $Y(T \wedge t)^2 \rightarrow Y(T)^2=b^2$, almost surely and employing DCT we get $b^2 = \mathbb{E}Y(T)^2 = 2\lambda \mathbb{E}(T)$. This gives $\mathbb{E}(T) = b^2/(2 \lambda)$.
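This value is easy to confirm by simulation (a sketch only; $\lambda$ and $b$ below are arbitrary illustrative choices): the difference $N_1-N_2$ jumps at rate $2\lambda$, each jump being $\pm 1$ with probability $1/2$.
\begin{verbatim}
import numpy as np

# Monte Carlo check of E[T] = b^2 / (2*lam) for the first time |N1-N2| = b.
rng = np.random.default_rng(4)
lam, b, reps = 1.5, 3, 20000
T = np.empty(reps)
for r in range(reps):
    t, y = 0.0, 0
    while abs(y) < b:
        t += rng.exponential(1.0 / (2.0 * lam))   # waiting time to next jump
        y += 1 if rng.random() < 0.5 else -1      # jump of N1 - N2
    T[r] = t
print(T.mean(), b**2 / (2.0 * lam))               # both approximately 3.0
\end{verbatim}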
\item The processes $N_1$ and $N_2$ are independent copies of each other; hence by symmetry $Y(T) \sim \text{Uniform}(\left\{\pm b\right\}).$ For $\theta \in \mathbb{R}$ we know that $\exp(\theta Y(t))$ is integrable since $Y(t)$ is the difference of two independent Poisson variables. For all $0 \leq s \leq t$,
\begin{align*}
\mathbb{E} \exp(\theta Y(t) \mid \mathcal{F}_s) &= \exp(\theta Y(s)) \mathbb{E} \exp \left(\theta (Y(t)-Y(s)) \mid \mathcal{F}_s \right) \\
& = \exp(\theta Y(s)) \mathbb{E} \exp(\theta N_1(t-s)) \mathbb{E} \exp(-\theta N_2(t-s)) \\
&= \exp \left[\theta Y(s) + \lambda(t-s)(e^{\theta}-1) + \lambda(t-s)(e^{-\theta}-1) \right] \\
& = \exp \left[\theta Y(s) + \lambda(t-s)(e^{\theta}+e^{-\theta}-2) \right].
\end{align*} Set $\psi(\theta) = e^{\theta} + e^{-\theta} -2 \geq 0$. Therefore, $\left\{\exp\left[\theta Y(t) - \lambda \psi(\theta) t\right], \mathcal{F}_t, t \geq 0\right\}$ is a MG. Since, $\lambda \psi(\theta) \geq 0$, we have $\exp\left[\theta Y(T \wedge t) - \lambda \psi(\theta) (T \wedge t)\right] \leq \exp(|\theta|b)$. Apply OST and DCT to conclude that
$$ 1 = \mathbb{E} \exp\left[\theta Y(T) - \lambda \psi(\theta) T\right] = \dfrac{1}{2}\left[ e^{\theta b} + e^{-\theta b}\right] \mathbb{E}\exp(-\lambda \psi(\theta)T) \Rightarrow \mathbb{E}\exp(-\lambda \psi(\theta)T) = \dfrac{2}{e^{\theta b} + e^{-\theta b}} = \operatorname{sech}(\theta b).
$$ Now take any $\alpha >0$ and solve $\psi(\theta) = 2\cosh(\theta)-2 = \alpha/\lambda$ for $\theta$. Take the unique positive solution $\theta = \operatorname{arccosh}\left(1+ \dfrac{\alpha}{2\lambda}\right).$ Therefore,
$$ \mathbb{E} \exp \left(-\alpha T \right) = \operatorname{sech}\left( b \operatorname{arccosh}\left(1+ \dfrac{\alpha}{2\lambda} \right)\right), \; \forall \; \alpha >0.$$
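As a sanity check (not in the original solution), for $b=1$ the exit happens at the first jump of $Y$, i.e. at the first event of the superposed process $N_1+N_2$, which is Poisson with rate $2\lambda$; hence $T \sim \text{Exp}(2\lambda)$ and $\mathbb{E}e^{-\alpha T} = 2\lambda/(2\lambda+\alpha)$. The formula agrees, since $\operatorname{sech}(\operatorname{arccosh}(x)) = 1/x$:
$$ \operatorname{sech}\left(\operatorname{arccosh}\left(1+ \dfrac{\alpha}{2\lambda}\right)\right) = \dfrac{1}{1+\alpha/(2\lambda)} = \dfrac{2\lambda}{2\lambda+\alpha}.$$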
\end{enumerate} |
2007-q3 | Let $G$ be a finite, connected, simple graph (i.e., a graph without self loops or multiple edges). A particle performs a random walk on (the vertices of) $G$, where at each step it moves to a neighbor of its current position, each such neighbor being chosen with equal probability. (a) Prove the existence of a unique stationary distribution when the graph is finite and express it in terms of the vertex degrees. (b) What happens if the graph is infinite (with bounded degrees)? Does the random walk always have a stationary measure? (c) Consider the random walk on the infinite binary tree. Is it transient, positive recurrent or null recurrent? Justify your answer. | \begin{enumerate}[label=(\alph*)]
\item Let $V$ and $E$ denote the set of vertices and set of edges of the graph $G$ respectively. Here the transition probabilities are $p(x,y) = d(x)^{-1}\mathbbm{1}\left(\left\{x,y\right\} \in E \right)$, where $d(x)$ is the degree of the vertex $x \in V$. The chain is clearly irreducible. Consider the following probability measure on $G$, $\pi(x) := d(x)/(2|E|)$, for all $x \in V$. This is a probability measure since $\sum_{x \in V} d(x)=2|E|$. On the otherhand,
$$ \pi(x)p(x,y) = \dfrac{d(x)}{2|E|}\dfrac{\mathbbm{1}\left(\left\{x,y\right\} \in E \right)}{d(x)} = \dfrac{\mathbbm{1}\left(\left\{x,y\right\} \in E \right)}{2|E|} = \dfrac{d(y)}{2|E|}\dfrac{\mathbbm{1}\left(\left\{x,y\right\} \in E \right)}{d(y)} = \pi(y)p(y,x), \; \forall \; x, y \in V. $$ Thus the chain is reversible with respect to the probability measure $\pi$ and hence $\pi$ is a stationary probability measure. The existence of the stationary probability measure guarantees that the chain is positive recurrent (see \cite[Corollary 6.2.42]{dembo}) which in turn implies that the stationary probability measure is unique (see \cite[Proposition 6.2.30]{dembo}).
\item Assume that $d(x) < \infty$ for all $x \in V$. Consider the (possibly infinite) measure $\nu(x)=d(x)$, for all $x \in V$. Note that,
$$ \sum_{x \in V} \nu(x)p(x,y) = \sum_{x \in V : \left\{x,y\right\} \in E} d(x) \dfrac{1}{d(x)} = \sum_{x \in V : \left\{x,y\right\} \in E} 1 = d(y), \; \forall \; y \in V.$$ Thus $\nu$ is a stationary measure for the RW.
\item Let $\left\{X_n\right\}_{n \geq 0}$ be a RW on the binary tree. Let $g(x)$ be the distance of vertex $x \in V$ from the root. It is easy to see that $\left\{g(X_n)\right\}_{n \geq 0}$ is also a Markov chain on the state space $\mathbb{Z}_{\geq 0}$ with transition probabilities $q(i,i+1)=2/3, q(i,i-1)=1/3$, for $i \geq 1$ and $q(0,1)=1$. Let $\left\{Z_n\right\}_{n \geq 0}$ be the RW on $\mathbb{Z}$ with $q^{\prime}(i,i+1)=2/3, q^{\prime}(i,i-1)=1/3$, for all $i \in \mathbb{Z}$. Let $x$ be a vertex in the binary tree adjacent to the root. Then
$$ \mathbb{P} \left(X_n \neq \text{root}, \forall \; n \mid X_0 = x \right) \geq \mathbb{P} \left(g(X_n) \neq 0, \; \forall \; n \mid g(X_0) = 1 \right) = \mathbb{P} \left(Z_n \neq 0, \; \forall \; n \mid Z_0 = 1 \right) = \dfrac{1/3}{2/3}=1/2 >0.$$ Hence the RW is on binary tree transient.
\end{enumerate} |
2007-q4 | Let $X_1, X_2, \ldots , X_n$ be independent random variables with $E X_i = 0$, $\sum_{i=1}^{n} E X_i^2 = 1$ and $E|X_i|^3 < \infty$ for $1 \leq i \leq n$. Let $h(w)$ be an absolutely continuous function satisfying $|h(x) - h(y)| \leq |x - y|$ for all $x, y \in \mathbb{R}$ and $f$ be the solution to the following Stein equation \[ f^{\prime}(w) - wf(w) = h(w) - E\{h(Z)\}, \] where $Z$ is a standard normal random variable. It is known that $\sup_w |f^{\prime\prime}(w)| \leq 2$. (a) Use Stein’s method to prove that \[ |E h(W) - E h(Z)| \leq 3 \sum_{i=1}^{n} E |X_i|^3, \] where $W = \sum_{i=1}^{n} X_i$. [ Hint: Let $W^{(i)} = W - X_i$ and write $E \{W f(W)\} = \sum_{i=1}^{n} E\{X_i (f(W^{(i)} + X_i) - f(W^{(i)}))\}$. ] (b) Prove that the above implies \[ \sup_z |\mathbb{P}\{W \leq z\} - \mathbb{P}\{Z \leq z\}| \leq 3 \left( \sum_{i=1}^{n} E|X_i|^3 \right)^{1/2} \] | For notational convenience, we define $\mu_{k,i} := \mathbb{E}|X_i|^k$, for all $k \geq 1$ and $1 \leq i \leq n$. Recall that \textit{Jensen's Inequality} implies that $\mu_{p,i}^{1/p} \leq \mu_{q,i}^{1/q}$ for all $0 < p <q$. This follows from considering the convex function $x \mapsto x^{q/p}$ and the random variable $|X_i|^{p}$.
\begin{enumerate}[label=(\alph*)]
\item Since, $X_i \in L^3$ for all $1 \leq i \leq n$, we have $W \in L^3$. It is given that $||f^{\prime \prime}||_{\infty} \leq 2$, which implies that $|f(w)| \leq C_1 w^2 +C_2 |w| + C_3, |f^{\prime}(w)| \leq C_4|w|+C_5 \; \forall \; w$, where $C_1, C_2, C_3, C_4, C_5$ are some positive constants. Therefore,
$$ |Wf(W)| \leq C_1|W|^3 + C_2W^2 + C_3 |W| \in L^1, \; |f^{\prime}(W)| \leq C_4|W| + C_5 \in L^3 \subseteq L^1.$$ As given in the hint, we define $W_i=W-X_i \perp\!\!\perp X_i$. Using arguments similar to presented above, we conclude that $f(W_i) \in L^1$ and is independent of $X_i$. Hence, $\mathbb{E}(X_if(W_i)) = \mathbb{E}(X_i)\mathbb{E}(f(W_i)) =0$. This gives,
\begin{align*}
\mathbb{E}(Wf(W)) = \sum_{i=1}^n \mathbb{E}(X_if(W)) &= \sum_{i=1}^n \mathbb{E}(X_if(W)) - \sum_{i=1}^n \mathbb{E}(X_if(W_i)) \\
& = \sum_{i=1}^n \mathbb{E}\left[X_i\left(f(W_i+X_i)-f(W_i)\right)\right] \\
&= \sum_{i=1}^n \mathbb{E}\left[ X_i^2 f^{\prime}(W_i)\right] + R_n,
\end{align*} where
\begin{align*}
|R_n| = \Big \rvert \sum_{i=1}^n \mathbb{E}\left[X_i\left(f(W_i+X_i)-f(W_i)-X_if^{\prime}(W_i)\right)\right] \Big \rvert
& \leq \sum_{i=1}^n \mathbb{E}\Big \rvert X_i\left(f(W_i+X_i)-f(W_i)-X_if^{\prime}(W_i)\right) \Big \rvert \\
& \leq \dfrac{1}{2}\sum_{i=1}^n \mathbb{E} |X_i|^3 ||f^{\prime \prime}||_{\infty} \leq \sum_{i=1}^n \mathbb{E} |X_i|^3 = \sum_{i=1}^n \mu_{3,i}.
\end{align*} On the otherhand, using the fact that $\sum_{i=1}^n \mu_{2,i}=1$, we get,
\begin{align*}
\Big \rvert \sum_{i=1}^n \mathbb{E}\left[ X_i^2 f^{\prime}(W_i)\right] - \mathbb{E}f^{\prime}(W)\Big \rvert = \Big \rvert \sum_{i=1}^n \mu_{2,i}\mathbb{E}f^{\prime}(W_i) - \mathbb{E}f^{\prime}(W)\Big \rvert &= \Big \rvert \sum_{i=1}^n \mu_{2,i}\mathbb{E}f^{\prime}(W_i) - \sum_{i=1}^n \mu_{2,i}\mathbb{E}f^{\prime}(W)\Big \rvert \\
& \leq \sum_{i=1}^n \mu_{2,i} \mathbb{E} \big \rvert f^{\prime}(W)-f^{\prime}(W_i)\big \rvert \\
& \leq \sum_{i=1}^n \mu_{2,i} ||f^{\prime\prime}||_{\infty}\mathbb{E} \big \rvert X_i\big \rvert \\
& \leq 2 \sum_{i=1}^n \mu_{2,i}\mu_{1,i} \leq 2 \sum_{i=1}^n \mu_{3,i}^{2/3}\mu_{3,i}^{1/3} = 2 \sum_{i=1}^n \mu_{3,i}.
\end{align*} Combining them we get,
$$ \big \rvert \mathbb{E}(Wf(W)) - \mathbb{E}f^{\prime}(W)\big \rvert \leq |R_n| + \Big \rvert \sum_{i=1}^n \mathbb{E}\left[ X_i^2 f^{\prime}(W_i)\right] - \mathbb{E}f^{\prime}(W)\Big \rvert \leq 3 \sum_{i=1}^n \mu_{3,i}.$$ Since, $f^{\prime}(W)-Wf(W) = h(W)-\mathbb{E}h(Z)$, we conclude
$$ \big \rvert \mathbb{E}h(W) - \mathbb{E}h(Z) \big \rvert \leq 3 \sum_{i=1}^n \mathbb{E}|X_i|^3.$$
\item Fix any $z \in \mathbb{R}$ and $\varepsilon >0$. Consider the following two approximations to the function $g:=\mathbbm{1}_{(-\infty,z]}$.
$$ g_{+}(x) := \begin{cases}
1, & \text{if } x\leq z \\
1-\dfrac{x-z}{\varepsilon}, & \text{if } z \leq x \leq z+\varepsilon \\
0, & \text{if } x \geq z+\varepsilon.
\end{cases}, \;\; g_{-}(x) := \begin{cases}
1, & \text{if } x\leq z-\varepsilon \\
\dfrac{z-x}{\varepsilon}, & \text{if } z-\varepsilon \leq x \leq z \\
0, & \text{if } x \geq z.
\end{cases}$$ It is easy to see that $0 \leq g_{-} \leq g \leq g_{+} \leq 1$ and both $g_{-}$ and $g_{+}$ are Lipschitz with factor $1/\varepsilon$. Therefore, $\varepsilon g_{-}$ and $\varepsilon g_{+}$ are Lipschitz continuous with factor $1$ and hence by part (a),
$$ \varepsilon \big \rvert \mathbb{E}g_{-}(W) - \mathbb{E}g_{-}(Z) \big \rvert, \varepsilon \big \rvert \mathbb{E}g_{+}(W) - \mathbb{E}g_{+}(Z) \big \rvert \leq 3 \sum_{i=1}^n \mathbb{E}|X_i|^3.$$ On the other hand,
$$ \mathbb{E}g_{+}(Z) - \mathbb{E}g(Z) = \mathbb{E}\left[g_{+}(Z)\mathbbm{1}(z \leq Z \leq z+\varepsilon) \right] \leq \mathbb{P}(z \leq Z \leq z+\varepsilon) \leq \int_{z}^{z+\varepsilon} \phi(t)\, dt \leq \dfrac{\varepsilon}{\sqrt{2\pi}}.$$ Similarly,
$$ \mathbb{E}g(Z) - \mathbb{E}g_{-}(Z) \leq \dfrac{\varepsilon}{\sqrt{2\pi}}.$$ Therefore,
$$ \mathbb{E} g(W) - \mathbb{E}g(Z) \leq \mathbb{E}g_{+}(W) - \mathbb{E}g_{+}(Z) + \mathbb{E}g_{+}(Z) - \mathbb{E}g(Z) \leq \dfrac{3}{\varepsilon} \sum_{i=1}^n \mathbb{E}|X_i|^3 + \dfrac{\varepsilon}{\sqrt{2\pi}}.$$ Similarly,
$$ \mathbb{E} g(Z) - \mathbb{E}g(W) \leq \mathbb{E}g(Z) - \mathbb{E}g_{-}(Z) + \mathbb{E}g_{-}(Z) - \mathbb{E}g_{-}(W) \leq \dfrac{3}{\varepsilon} \sum_{i=1}^n \mathbb{E}|X_i|^3 + \dfrac{\varepsilon}{\sqrt{2\pi}}.$$ Combining them and taking supremum over $z$ and infimum over $\varepsilon >0$, we conclude,
\begin{align*}
\sup_{z \in \mathbb{R}} \big \rvert \mathbb{P}(W \leq z) - \mathbb{P}(Z \leq z)\big \rvert = \sup_{z \in \mathbb{R}} \big \rvert \mathbb{E} \mathbbm{1}_{(-\infty,z]}(W) - \mathbb{E}\mathbbm{1}_{(-\infty,z]}(Z)\big \rvert & \leq \inf_{\varepsilon >0} \left[ \dfrac{3}{\varepsilon} \sum_{i=1}^n \mathbb{E}|X_i|^3 + \dfrac{\varepsilon}{\sqrt{2\pi}}\right] \\
&= 2 \left[ \dfrac{3}{\sqrt{2 \pi}} \sum_{i=1}^n \mathbb{E}|X_i|^3 \right]^{1/2} \leq 3 \left[ \sum_{i=1}^n \mathbb{E}|X_i|^3 \right]^{1/2}.
\end{align*}
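The bound in (b) is quite loose in practice; here is a small numerical illustration (a sketch only, using NumPy/SciPy and i.i.d. $X_i = \pm n^{-1/2}$, for which $\sum_i \mathbb{E}|X_i|^3 = n^{-1/2}$).
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Empirical Kolmogorov distance between W = sum of +/- n^{-1/2} signs and
# N(0,1), compared with the bound 3 * (sum E|X_i|^3)^{1/2} = 3 * n^{-1/4}.
rng = np.random.default_rng(6)
n, reps = 100, 50_000
W = rng.choice([-1.0, 1.0], size=(reps, n)).sum(axis=1) / np.sqrt(n)
z = np.linspace(-3, 3, 601)
emp = (W[:, None] <= z).mean(axis=0)
print(np.abs(emp - norm.cdf(z)).max(), 3 * n**-0.25)   # roughly 0.04 vs 0.95
\end{verbatim}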
\end{enumerate} |
2007-q5 | Let $\{X_i, \mathcal{F}_i, i \ge 1\}$ be a martingale difference sequence such that $E \{X_i^2 | \mathcal{F}_{i-1} \} = 1$ and $E |X_i|^p | \mathcal{F}_{i-1} \} \le C$ for some $p > 2$, $C < \infty$ and all $i$. Let $\tau_n$ be a stopping time with respect to $\{\mathcal{F}_i\}$ such that $\tau_n / n$ converges in distribution to a continuous random variable $\tau$. (a) Prove that for any $0 < t < t'$, \[ \lim_{n \to \infty} \mathbb{P} \left\{ \sum_{\tau_n < i \leq nt'} X_i \ge 0, \tau_n \leq nt \right\} = \frac{1}{2} \mathbb{P}\{\tau \leq t\}. \] (b) Let $\alpha > 0$ and $\nu_n = \inf\{m > \tau_n : \sum_{\tau_n < i \leq m} X_i > \alpha \sqrt{n} \}$. Show that $\nu_n/n$ converges weakly as $n \to \infty$ and derive the limiting distribution of $\nu_n/n$. | \begin{enumerate}[label=(\alph*)]
\item Note that $\left\{X_n, \mathcal{F}_n, n \geq 1\right\}$ and $\left\{X_n^2-1, \mathcal{F}_n, n \geq 1\right\}$ are MG-difference sequences whereas $\left\{|X_n|^p-C, \mathcal{F}_n, n \geq 1\right\}$ is a sup-MG difference sequence. From Lemma~\ref{mg} and Corollary~\ref{mgco}, we have that $\mathbb{E}(X_{\tau_n+i} \mid \mathcal{F}_{\tau_n+i-1})\mathbbm{1}(\tau_n < \infty)=0$, $\mathbb{E}(X^2_{\tau_n+i}-1 \mid \mathcal{F}_{\tau_n+i-1})\mathbbm{1}(\tau_n < \infty)=0$ and $\mathbb{E}(|X_{\tau_n+i}|^p \mid \mathcal{F}_{\tau_n+i-1})\mathbbm{1}(\tau_n < \infty) \leq C\mathbbm{1}(\tau_n < \infty)$ almost surely for all $n,i$.
Let $A$ be the event on which all of the above almost sure statements hold true and thus $\mathbb{P}(A)=1$. Also set $B_n = (\tau_n \leq nt)$. Take $\omega \in A \bigcap ( \bigcap_{n \geq 1} B_n)$.
Fix any $T >0$. Using the existence of the R.C.P.D. for an $\mathbb{R}^k$-valued random variable, for some $k \geq 1$, we can define $\mathbb{P}_{n}^{\omega}$ to be the R.C.P.D. of $(X_{\tau_n+1}, \ldots, X_{\tau_n+\lceil nT \rceil})$, conditional on the $\sigma$-algebra $\mathcal{F}_{\tau_n}$, evaluated at $\omega$. Let $(D_{n,1}, \ldots, D_{n,\lceil nT \rceil})$ be a random vector following the distribution $\mathbb{P}_{n}^{\omega}$, for all $n \geq 1$. Then for all $n \geq 1$ and $1 \leq k \leq \lceil nT \rceil$, we have
$$ \mathbb{E}(D_{n,k} \mid D_{n,1}, \ldots, D_{n,k-1}) = \mathbb{E}(X_{\tau_n+k} \mid \mathcal{F}_{\tau_n}, X_{\tau_n+1}, \ldots, X_{\tau_n+k-1})(\omega) = \mathbb{E}(X_{\tau_n+k} \mid \mathcal{F}_{\tau_n+k-1}) (\omega)=0, $$
$$ \mathbb{E}(D_{n,k}^2 \mid D_{n,1}, \ldots, D_{n,k-1}) = \mathbb{E}(X^2_{\tau_n+k} \mid \mathcal{F}_{\tau_n}, X_{\tau_n+1}, \ldots, X_{\tau_n+k-1})(\omega) = \mathbb{E}(X_{\tau_n+k}^2 \mid \mathcal{F}_{\tau_n+k-1}) (\omega)=1, $$
$$ \mathbb{E}(|D_{n,k}|^p \mid D_{n,1}, \ldots, D_{n,k-1}) = \mathbb{E}(|X_{\tau_n+k}|^p \mid \mathcal{F}_{\tau_n}, X_{\tau_n+1}, \ldots, X_{\tau_n+k-1})(\omega) = \mathbb{E}(|X_{\tau_n+k}|^p \mid \mathcal{F}_{\tau_n+k-1}) (\omega)\leq C, $$
since $\tau_n(\omega)< \infty$. Thus each row $\left\{D_{n,k} : 1 \leq k \leq \lceil nT \rceil \right\}$ is a MG-difference sequence with $D_{n,0}=0$, with respect to its canonical filtration $\left\{\mathcal{G}_{n,j} : 0 \leq j \leq \lceil nT \rceil\right\}$, and satisfies the conditions for the MG CLT. This is because for all $0 \leq s \leq T$,
$$ \dfrac{1}{n} \sum_{k=1}^{\lfloor ns \rfloor} \mathbb{E}(D_{n,k}^2 \mid \mathcal{G}_{n,k-1}) = \dfrac{\lfloor ns \rfloor}{n} \to s,$$
and for all $\varepsilon >0$,
$$ \dfrac{1}{n} \sum_{k=1}^{\lfloor nT \rfloor} \mathbb{E}(D_{n,k}^2; |D_{n,k}| \geq \varepsilon\sqrt{n} \mid \mathcal{G}_{n,k-1}) \leq \dfrac{1}{n} \sum_{k=1}^{\lfloor nT \rfloor} \mathbb{E}(D_{n,k}^2 (\varepsilon^{-1}n^{-1/2}|D_{n,k}|)^{p-2} \mid \mathcal{G}_{n,k-1}) \leq \dfrac{\varepsilon^{-(p-2)} C\lfloor nT \rfloor}{n^{p/2}} \to 0.$$ Letting
$$ M_n(s) := \dfrac{1}{\sqrt{n}} \left[ \sum_{k=1}^{\lfloor ns \rfloor} D_{n,k} + (ns-\lfloor ns \rfloor) D_{n,\lfloor ns \rfloor +1}\right], \; \; \forall \; 0 \leq s \leq T, \; n \geq 1,$$
we have by MG CLT that $M_n \stackrel{d}{\longrightarrow} W$, on $C[0,T]$, where $W$ is the standard BM on $[0,T]$.
Now take $t^{\prime}>t$ and $T=t^{\prime}$. Let $t_n(\omega) = \lfloor nt^{\prime} \rfloor /n - \tau_n(\omega)/n$. Since $\tau_n(\omega) \leq nt$ for all $n$, we have $t_n(\omega)$ is a sequence in $[0,t^{\prime}]$, bounded away from $0$. Using Lemma~\ref{lem1}, we can conclude that $\mathbb{P}_n^{\omega}(M_n(t_n(\omega))\geq 0) \longrightarrow 1/2$. But $$ \mathbb{P}_n^{\omega}(M_n(t_n(\omega)) \geq 0) = \mathbb{P}_n^{\omega}\left(\sum_{k=1}^{nt^{\prime}-\tau_n(\omega)} D_{n,k} \geq 0 \right) = \mathbb{P} \left( \sum_{\tau_n(\omega) < i \leq nt^{\prime}} X_i \geq 0 \Bigg \rvert \mathcal{F}_{\tau_n} \right) (\omega).
$$ Here we have used the fact that $t_n$ is $\mathcal{F}_{\tau_n}$-measurable. Therefore, we have $$ \mathbb{P} \left( \sum_{\tau_n < i \leq nt^{\prime}} X_i \geq 0 \Bigg \rvert \mathcal{F}_{\tau_n} \right) \longrightarrow \dfrac{1}{2}, \; \; \text{ on } A \bigcap \left(\bigcap_{n \geq 1} (\tau_n \leq nt) \right).$$
We have written the above proof in a restricted scenario to keep the notations simple. Actually the same proof establishes the following claim. Let $\left\{n_k : k \geq 1\right\}$ be any subsequence. Then
$$ \mathbb{P} \left( \sum_{\tau_{n_k} < i \leq n_kt^{\prime}} X_i \geq 0 \Bigg \rvert \mathcal{F}_{\tau_{n_k}} \right) \longrightarrow \dfrac{1}{2}, \; \; \text{ on } A \bigcap \left(\bigcap_{k \geq 1} (\tau_{n_k} \leq n_kt)\right),$$
and therefore
$$ \Bigg \rvert \mathbb{P} \left( \sum_{\tau_n < i \leq nt^{\prime}} X_i \geq 0 \Bigg \rvert \mathcal{F}_{\tau_n} \right) - \dfrac{1}{2}\Bigg \rvert \mathbbm{1}(\tau_n \leq nt) \longrightarrow 0, \; \; \text{ on } A.$$
Since, $\mathbb{P}(A)=1$, we apply DCT and get
\begin{align*}
\Bigg \rvert \mathbb{P} \left( \sum_{\tau_n < i \leq nt^{\prime}} X_i \geq 0, \tau_n \leq nt \right) - \dfrac{1}{2}\mathbb{P}(\tau_n \leq nt)\Bigg \rvert &= \Bigg \rvert \mathbb{E} \left[ \mathbb{P} \left( \sum_{\tau_n < i \leq nt^{\prime}} X_i \geq 0 \Bigg \rvert \mathcal{F}_{\tau_n} \right) \mathbbm{1}(\tau_n \leq nt) \right] - \dfrac{1}{2}\mathbb{E}\left(\mathbbm{1}(\tau_n \leq nt) \right)\Bigg \rvert \\
&\leq \mathbb{E} \left[ \Bigg \rvert \mathbb{P} \left( \sum_{\tau_n < i \leq nt^{\prime}} X_i \geq 0 \Bigg \rvert \mathcal{F}_{\tau_n} \right) - \dfrac{1}{2}\Bigg \rvert \mathbbm{1}(\tau_n \leq nt)\right] \longrightarrow 0.
\end{align*}
We have that $\tau_n/n$ converges weakly to a continuous random variable $\tau$ and hence $\mathbb{P}(\tau_n \leq nt) \longrightarrow \mathbb{P}(\tau \leq t)$. Combining it with the previous observation, we can conclude that
$$ \lim_{n \to \infty} \mathbb{P} \left( \sum_{\tau_n < i \leq nt^{\prime}} X_i \geq 0, \tau_n \leq nt \right) = \dfrac{1}{2}\mathbb{P}(\tau \leq t).$$
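A simulation sketch of this statement (illustrative only; here the $X_i$ are taken to be i.i.d. fair signs and $\tau_n$ is the first time the walk reaches level $\lceil \sqrt{n}\rceil$, so that $\tau_n/n$ converges to the continuous hitting-time law of Brownian motion; $t$, $t'$ and the sample sizes are arbitrary choices):
\begin{verbatim}
import numpy as np

# Compare P(sum_{tau_n < i <= n t'} X_i >= 0, tau_n <= n t)
# with (1/2) * P(tau_n <= n t) for a simple random walk.
rng = np.random.default_rng(7)
n, reps, t, tp = 400, 20000, 2.0, 4.0
lvl, horizon = int(np.ceil(np.sqrt(n))), int(n * tp)
joint = tau_le = 0
for _ in range(reps):
    s = np.cumsum(rng.choice([-1, 1], size=horizon))
    idx = np.flatnonzero(s == lvl)
    if idx.size and idx[0] + 1 <= n * t:      # tau_n <= n t
        tau_le += 1
        joint += (s[-1] - s[idx[0]] >= 0)     # post-tau_n sum up to n t'
print(joint / reps, 0.5 * tau_le / reps)       # the two are close
\end{verbatim}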
\item Fix any $x,t >0$. Clearly, $\nu_n > \tau_n$ and
$$ ((\nu_n-\tau_n)/n \leq x, \tau_n \leq nt) = \left( \tau_n \leq nt, \sup_{1 \leq m \leq \lfloor nx \rfloor} \sum_{1 \leq i \leq m} X_{\tau_n+i} > a\sqrt{n} \right) $$
Take $T=x$ in the argument we developed in part (a). Note that for $\omega \in (\tau_n \leq nt)$, we have
\begin{align*} \mathbb{P}\left( \sup_{1 \leq m \leq \lfloor nx \rfloor} \sum_{1 \leq i \leq m} X_{\tau_n+i} > a\sqrt{n} \Big \rvert \mathcal{F}_{\tau_n} \right)(\omega) = \mathbb{P}_n^{\omega}\left( \sup_{1 \leq m \leq \lfloor nx \rfloor} \sum_{1 \leq i \leq m} D_{n,i} > a\sqrt{n} \right) &= \mathbb{P}^{\omega}_n \left( \sup_{0 \leq s \leq \lfloor nx \rfloor/ n} M_n(s) > a\right).
\end{align*}Therefore, using Lemma~\ref{lem1}, we can conclude that
$$ \mathbb{P}\left( \sup_{1 \leq m \leq \lfloor n_kx \rfloor} \sum_{1 \leq i \leq m} X_{\tau_{n_k}+i} > a\sqrt{n_k} \Big \rvert \mathcal{F}_{\tau_{n_k}} \right)(\omega) \longrightarrow \mathbb{P}\left(\sup_{0 \leq s \leq x} W(s) > a\right), \; \; \text{ on } A \bigcap \left(\bigcap_{k \geq 1} (\tau_{n_k} \leq n_kt)\right),$$
for any subsequence $\left\{n_k : k \geq 1\right\}$.
Thus
$$ \Bigg \rvert \mathbb{P}\left( \sup_{1 \leq m \leq \lfloor nx \rfloor} \sum_{1 \leq i \leq m} X_{\tau_n+i} > a\sqrt{n} \Big \rvert \mathcal{F}_{\tau_n} \right) - \mathbb{P}\left(\sup_{0 \leq s \leq x} W(s) > a \right)\Bigg \rvert \mathbbm{1}(\tau_n \leq nt) \longrightarrow 0, \; \text{ on } A.$$
Apply DCT and conclude that
\begin{align*}
\mathbb{P}\left((\nu_n-\tau_n)/n \leq x, \tau_n \leq nt\right) = \mathbb{P}\left( \tau_n \leq nt, \sup_{1 \leq m \leq \lfloor nx \rfloor} \sum_{1 \leq i \leq m} X_{\tau_n+i} > a\sqrt{n} \right) &\longrightarrow \mathbb{P}(\tau \leq t) \mathbb{P}\left(\sup_{0 \leq s \leq x} W(s) > a \right)\\
& = \mathbb{P}(\tau \leq t) \mathbb{P}(|W(x)| > a) \\
&= 2\mathbb{P}(\tau \leq t) \bar{\Phi}(ax^{-1/2}).
\end{align*} Thus $((\nu_n-\tau_n)/n, \tau_n/n) \stackrel{d}{\longrightarrow}(T_a, \tau)$, where $T_a$ has the same distribution as the hitting time of the level $a$ by standard BM and $T_a \perp\!\!\perp \tau$. By continuous mapping theorem, we have $\nu_n/n \stackrel{d}{\longrightarrow} \nu \sim \tau * T_a$, where $*$ denotes convolution. Thus
$$ \mathbb{P}(\nu \leq y) = \int_{0}^y \mathbb{P}(\tau \leq y-t)f_{T_a}(t)\, dt = \int_{0}^y \mathbb{P}(\tau \leq y-t) \dfrac{a}{\sqrt{2 \pi t^3}}\exp \left( - \dfrac{a^2}{2t}\right)\, dt.$$
\end{enumerate} |
2007-q6 | Here is a system for surely making a profit in a fair game in which at each successive bet you independently either double your wager (with a probability half) or lose it. Choose a finite sequence $x_1, x_2, \ldots , x_K$ of positive numbers. Wager an amount that equals the sum of the first and last numbers. If you won your bet, delete those two numbers from your sequence. If you lost, add their sum as an extra term $x_{K+1} = (x_1 + x_K)$ at the right-hand side of the sequence. You play iteratively according to the above rule (and if your sequence ever consists of one term only, you wager that amount, so upon winning you delete this term, while upon losing you append it to the sequence to obtain two terms) till you have an empty sequence. (a) Let $V$ be the total profit under this strategy, i.e. the amount of money you own at the end, minus the amount at the beginning. Show that with probability one you terminate at a finite time with a profit $V = \sum_{i=1}^{K} x_i$. (b) Is the mean of the time $T$ till termination finite? Justify your answer. (c) Let $L$ denote the maximal aggregate loss under this strategy prior to termination, i.e. the maximum incurred loss up to any time. Is the mean $L$ finite? Justify your answer. (d) Your friend modifies the rule by wagering in each turn an amount that equals the sum of the two smallest terms in his sequence instead of the sum of the first and last numbers (treating the case of one term just as you do). Repeat (a)-(c) for the profit $V'$, his time till termination $T'$, and his maximal aggregate loss $L'$ prior to termination, under your friend's strategy. (e) What can you say about these rules in case you and your friend play alongside each other with the same initial sequence $x_1, \ldots , x_K$ and same betting outcomes? | \begin{enumerate}[label=(\alph*)]
\item For all $k \geq 1$, let $V_k$ be the amount wagered on bet $k$ and $$ \xi_k := \begin{cases} +1,& \text{if $k$-th bet is won},\\ -1, & \text{if $k$-th bet is lost}. \end{cases} $$ Since the game is fair, $\xi_1, \xi_2, \ldots \stackrel{iid}{\sim} \text{Uniform}( \left\{+1,-1\right\})$ and $$ \text{Total amount won up to time $n$}, Y_n := \sum_{k=1}^n V_k \xi_k, \; \forall \; n \geq 1,$$ with $Y_0:=0$. Now let $S_k$ be the sum of numbers remaining in the sequence after bet $k$ is completed. By our construction, it is easy to see that $S_n-S_{n-1} = V_n(-\xi_n)$, for all $n \geq 1$, with $S_0= \sum_{i=1}^K x_i =: v.$ In other words,
$$S_n = v - \sum_{k=1}^n V_k \xi_k = v-Y_n, \; \forall \; n \geq 0.$$ Now let $X_k$ be the number of integers left in the list after bet $k$ has been completed. By the strategy design, the dynamics of the process $\left\{X_n\right\}_{n \geq 0}$ looks as follows.
$$ X_0 = K; \; \; \forall \; n \geq 1, \; X_n = \begin{cases} X_{n-1} +1, & \text{if } \xi_n=-1, X_{n-1} >0, \\
(X_{n-1} -2)_{+}, & \text{if } \xi_n=+1, X_{n-1}>0, \\
0, & \text{if } X_{n-1}=0. \end{cases}$$ Let $T$ be the time the game terminates, i.e., $T := \inf \left\{n \geq 0 \mid X_n=0\right\}.$ Then for large enough $n$, we have the following.
\begin{align*}
\mathbb{P}(T > n) = \mathbb{P}(X_n >0) & \leq \mathbb{P} \left( K+ \sum_{i=1}^n \mathbbm{1}(\xi_i=-1) - 2 \sum_{i=1}^n \mathbbm{1}(\xi_i=+1) > 0\right) \\
& = \mathbb{P} \left( \sum_{i=1}^n \mathbbm{1}(\xi_i=+1) < \dfrac{n+K}{3}\right)\
&= \mathbb{P} \left( \text{Bin}(n,1/2) - \dfrac{n}{2}< \dfrac{2K-n}{6}\right) \stackrel{(i)}{\leq} \exp \left( -2n \left(\dfrac{n-2K}{6n}\right)^2\right) \leq \exp(-Cn),
\end{align*}
for some $C>0$. In the above argument, $(i)$ follows from the fact that $\text{Bin}(n,1/2)$ variable is $n/4$-sub-Gaussian. Since, the tail of $T$ decays exponentially fast, we have $\mathbb{E}(T)< \infty$; in particular $T < \infty$ with probability $1$. Thus with probability $1$, the game terminates at a finite time.
Since, by construction, $S_T=0$, we have $Y_T= v-S_T=v$. Thus the game ends with profit $v=\sum_{i=1}^K x_i$.
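A short simulation of the scheme (illustrative only; the initial sequence below is an arbitrary choice) confirms both conclusions: the game terminates and the profit always equals $\sum_i x_i$.
\begin{verbatim}
import numpy as np

# Simulate the betting scheme: wager = first + last term (or the single
# term), delete the terms on a win, append the wager on a loss.
rng = np.random.default_rng(8)
x0 = [1.0, 2.0, 3.0, 4.0]                      # initial sequence, sum = 10
for _ in range(5):
    seq, profit, steps = list(x0), 0.0, 0
    while seq:
        wager = seq[0] + seq[-1] if len(seq) > 1 else seq[0]
        if rng.random() < 0.5:                 # win the fair bet
            profit += wager
            seq = seq[1:-1]                    # drop first and last terms
        else:                                  # lose the bet
            profit -= wager
            seq.append(wager)                  # append the lost wager
        steps += 1
    print(steps, profit)                       # profit is always 10.0
\end{verbatim}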
\item We have already shown that $\mathbb{E}(T)< \infty$. But here is another argument to derive the same. Let
$$ W_n := K+\sum_{i=1}^n \mathbbm{1}(\xi_i=-1) - 2 \sum_{i=1}^n \mathbbm{1}(\xi_i=+1), \; \forall \; n \geq 0.$$ Clearly $\left\{W_n + n/2; n \geq 0\right\}$ is a MG with respect to the natural filtration and $T=\inf \left\{n \geq 0 : W_n \leq 0\right\}.$ We apply OST to conclude the following. $$ \mathbb{E}W_{T \wedge n} + \mathbb{E}(T \wedge n)/2 = \mathbb{E}W_0 = K \Rightarrow \mathbb{E}(T \wedge n) \leq 2K-2\mathbb{E}W_{T \wedge n} \leq 2K+2,$$
since $W_{T \wedge n} \geq -1$. Taking $n \uparrow \infty$, we conclude that $\mathbb{E}T \leq 2(K+1)< \infty$.
\item Define $L$ to be maximum aggregate loss, i.e., $L:= - \min_{0 \leq n \leq T} Y_n \geq 0.$ Now note that our strategy is defined in such a way that the wager for the $k$-th bet is determined by the outcome of the game upto $(k-1)$-th bet. In other words, the process $\left\{V_k\right\}_{k \geq 1}$ is predictable with respect to the filtration $\left\{\mathcal{F}_k\right\}_{k \geq 0}$, where $\mathcal{F}_k := \sigma \left( \xi_i : 1 \leq i \leq k\right)$ and $\mathcal{F}_0$ being defined as the trivial $\sigma$-algebra. Clearly, $\left\{Y_n, \mathcal{F}_n, n \geq 0\right\}$ is a MG and hence so is $\left\{Y_{T \wedge n}, \mathcal{F}_n, n \geq 0\right\}.$ Thus $\mathbb{E}(Y_{T \wedge n}) = \mathbb{E}(Y_0) =0$, for all $n \geq 0$. Moreover, since $T< \infty$ almost surely, we have $Y_{T \wedge n} \stackrel{a.s.}{\longrightarrow} Y_T=v.$ Since, by definition, $Y_{T \wedge n} + L \geq 0$, we employ \textit{Fatou's Lemma} and obtain the following.
$$ v + \mathbb{E}(L) = \mathbb{E}(Y_T+L) =\mathbb{E} \left[ \liminf_{n \to \infty}(Y_{T \wedge n}+L)\right] \leq \liminf_{n \to \infty} \mathbb{E}(Y_{T \wedge n}) + \mathbb{E}(L) = \mathbb{E}(L) .$$ Since $v >0$, this implies that $\mathbb{E}(L)=\infty$.
\item Neither of the two strategies dominates the other, and asymptotically their behaviour is the same (in regard to the termination time of the game and the final winning amount). The first strategy is more aggressive in the sense that its wager is greater than or equal to the wager of the second strategy at each step. So a more cautious player will probably prefer the second strategy.
\end{enumerate} |
2008-q1 | Consider a Markov chain X_1, X_2, \ldots, X_t, \ldots taking values in a finite state space \mathcal{X}. Assume that the transition matrix Q(x, y) = P(X_{i+1} = y|X_i = x) has strictly positive entries. Given f : \mathcal{X} \to \mathbb{R}, consider the cumulant generating functions \psi_n(t) = \frac{1}{n}\log \mathbb{E}[\exp\{t \sum_{i=1}^n f(X_i)\}].
(a) Show that the limit \psi(t) = \lim_{n \to \infty} \psi_n(t), exists, and express it in terms of the eigenvalues of the matrix Q_t(x,y) \equiv Q(x,y) \exp\{tf(y)\}.
(b) Prove that \psi(t) is twice differentiable with a non-negative second derivative.
(c) Compute \psi(t) for the two state Markov chain on \mathcal{X} = \{0, 1\} with transition matrix Q(0, 0) = Q(1, 1) = 1 - q, Q(0, 1) = Q(1, 0) = q. | We have a transition matrix $Q$ on the finite state space $\mathbf{\chi}$ with strictly positive entries and a Markov chain $X_1,X_2, \ldots,$ according to this dynamics. Let $\mathcal{F}_n = \sigma(X_i : i \leq n)$ and $\pi$ is the distribution of $X_1$. We have a function $f: \mathbf{\chi} \to \mathbb{R}$ and define $$ \psi_N(t) = \dfrac{1}{N} \log \mathbb{E}\left[ \exp\left( t\sum_{i=1}^N f(X_i)\right)\right], \; \forall \; t \in \mathbb{R}.$$\begin{enumerate}[label=(\alph*)] \item Consider the matrix $Q_t$ with entries $Q_t(x,y)=Q(x,y)e^{tf(y)}$, for all $x,y \in \mathbf{\chi}$. \begin{align*} \psi_N(t) = \dfrac{1}{N} \log \mathbb{E}\left[ \exp\left( t\sum_{i=1}^N f(X_i)\right)\right] &= \dfrac{1}{N} \log \sum_{x_1, \ldots, x_N \in \mathbf{\chi}} \left[ \exp \left( t \sum_{i=1}^N f(x_i)\right) Q(x_N,x_{N-1})\cdots Q(x_2,x_1)\mathbb{P}(X_1=x_1) \right] \\ &= \dfrac{1}{N} \log \sum_{x_1, \ldots, x_N \in \mathbf{\chi}} Q_t(x_N,x_{N-1})\cdots Q_t(x_2,x_1)e^{tf(x_1)}\pi(x_1) \\ &= \dfrac{1}{N} \log \sum_{x_N, x_1 \in \mathbf{\chi}} Q_t^{N-1}(x_N,x_1)e^{tf(x_1)}\pi(x_1).\end{align*} We shall now use \textit{Perron-Frobenius Theorem}. There are many versions of it. The version that we shall use is mentioned in Theorem~\ref{perron}. Consider the matrix $Q_t$. It has strictly positive entries since $Q$ has strictly positive entries. Let $\lambda_t$ be the unique positive eigenvalue of $Q_t$ with largest absolute value. Let $\mathbf{v}_t$ and $\mathbf{w}_t$ be right and left eigenvectors of $Q_t$ corresponding to eigenvalue $\lambda_t$, with strictly positive entries and normalized such that $\mathbf{w}_t^{\prime}\mathbf{v}_t =1.$ In that case $Q_t^{N-1}(x_N,x_1) \sim \lambda_t^{N-1}\mathbf{v}_t(x_N)\mathbf{w}_t(x_1)$ for all $x_N,x_1 \in \mathbf{\chi}$. This implies that \begin{align*} \psi_N(t) = \dfrac{1}{N} \log \sum_{x_N, x_1 \in \mathbf{\chi}} Q_t^{N-1}(x_N,x_1)e^{tf(x_1)}\pi(x_1) &= \dfrac{1}{N} \left[\log \left( \sum_{x_N, x_1 \in \mathbf{\chi}} \lambda_t^{N-1}\mathbf{v}_t(x_N)\mathbf{w}_t(x_1)e^{tf(x_1)}\pi(x_1)\right) +o(1) \right] \\ &= \dfrac{1}{N} \left[\log \lambda_t^{N-1} + \log \left( \sum_{x_N, x_1 \in \mathbf{\chi}} \mathbf{v}_t(x_N)\mathbf{w}_t(x_1)e^{tf(x_1)}\pi(x_1)\right) + o(1)\right] \\ &= \dfrac{1}{N} \left[\log \lambda_t^{N-1} + O(1) + o(1)\right] \longrightarrow \log \lambda_t. \end{align*} Hence $\psi(t)$ exists and is equal to $\log \lambda_t$ where $\lambda_t$ is the largest eigenvalue of $Q_t$. \item First we shall show that $\psi$ is a convex function. Take $t_1, t_2 \in \mathbb{R}$ and $\alpha \in (0,1)$. Using \textit{Holder's Inequality}, we obtain the following. \begin{align*} \psi_N(\alpha t_1 +(1-\alpha)t_2) &= \dfrac{1}{N} \log \mathbb{E}\left[ \exp\left( \alpha t_1\sum_{i=1}^N f(X_i) + (1-\alpha) t_2\sum_{i=1}^N f(X_i) \right)\right] \\ & \leq \dfrac{1}{N} \log \left(\mathbb{E}\left[ \exp\left( t_1\sum_{i=1}^N f(X_i) \right) \right]\right)^{\alpha} \left(\mathbb{E}\left[ \exp\left( t_2\sum_{i=1}^N f(X_i) \right) \right] \right)^{1-\alpha} \\ &= \alpha \psi_N(t_1) + (1-\alpha)\psi_N(t_2). \end{align*} Taking $n \to \infty$, we conclude that $\psi(\alpha t_1 +(1-\alpha)t_2) \leq \alpha \psi(t_1) + (1-\alpha)\psi(t_2)$. This establishes that $\psi$ is convex on $\mathbb{R}$ and hence continuous. Also if we can show that $\psi$ is twice differentiable, we will have $\psi^{\prime \prime} \geq 0$. 
We shall now use \textit{Implicit function theorem} (IFT). Consider the function $g:\mathbb{R} \times \mathbb{R} \to \mathbb{R}$ as $g(t,\lambda) := \operatorname{det}(Q_t-\lambda I_n)$. Note that $g(t,\lambda_t)=0$. To apply IFT, we need to show that $$ \dfrac{\partial}{\partial \lambda} g(t,\lambda) \Bigg \rvert_{(t,\lambda)=(t,\lambda_t)} \neq 0.$$ We need some facts from Matrix analysis. The first fact is that for a differentiable matrix-valued function $H: \mathbb{R} \to M_n(\mathbb{R})$, where $M_n(\mathbb{R})$ is the collection of all $n \times n $ matrices with real entries, we have $$ \dfrac{\partial}{\partial t} \operatorname{det}(H(t)) = \operatorname{Tr}\left( \operatorname{Adj}(H(t)) \dfrac{\partial H(t)}{\partial t} \right).$$ This formula is called \textit{Jacobi's Formula}. Applying this we get $$ \dfrac{\partial}{\partial \lambda} g(t,\lambda) \Bigg \rvert_{(t,\lambda)=(t,\lambda_t)} = \operatorname{Tr}\left( \operatorname{Adj}(Q_t-\lambda_tI_n) (-I_n) \right) = - \operatorname{Tr}\left( \operatorname{Adj}(Q_t-\lambda_tI_n) \right).$$ The second fact is that for any $n \times n$ matrix $A$ with rank $n-1$, we have $\operatorname{Adj}(A)$ to be a matrix of rank $1$. Moreover $\operatorname{Adj}(A) = \mathbf{x}\mathbf{y}^{\prime}$ for some $\mathbf{x} \in \mathcal{N}(A)$ and $\mathbf{y} \in \mathcal{N}(A^{\prime})$. This fact is a corollary of the identity $\operatorname{Adj}(A)A=A\operatorname{Adj}(A) = \operatorname{det}(A)I_n.$ Since $\lambda_t$ is an eigenvalue of $Q_t$ with algebraic and geometric multiplicity $1$, we have $\operatorname{Rank}(Q_t-\lambda_tI_n) = n-1$. Moreover, $\mathcal{N}(Q_t-\lambda_tI_n)=\mathbf{v}_t$ and $\mathcal{N}(Q_t^{\prime}-\lambda_tI_n)=\mathbf{w}_t.$ Hence, $\operatorname{Adj}(Q_t-\lambda_tI_n) = \beta_t \mathbf{v}_t \mathbf{w}_t^{\prime}$ for some $\beta_t \neq 0$. Plugging in this we get, $$ \dfrac{\partial}{\partial \lambda} g(t,\lambda) \Bigg \rvert_{(t,\lambda)=(t,\lambda_t)} = - \operatorname{Tr}\left( \operatorname{Adj}(Q_t-\lambda_tI_n) \right) =- \operatorname{Tr}\left( \beta_t \mathbf{v}_t \mathbf{w}_t^{\prime} \right) = -\beta_t \mathbf{v}_t^{\prime} \mathbf{w}_t \neq 0,$$ since $\mathbf{v}_t$ and $\mathbf{w}_t$ has strictly positive entries. Since $g$ is infinitely jointly differentiable, we apply IFT and conclude that there exists an open neighbourhood $U_t \ni t$ and unique $h: U_t \to \mathbb{R}$, infinitely differentiable, such that $h(t)=\lambda_t$ and $g(s,h(s))=0$ for all $s \in U_t$. All we need to show now is $h(s)=\lambda_s$ in some open neighbourhood of $t$. Since $g$ is continuously differentiable, we can get open ball $V_t$ such that $U_t \supseteq V_t \ni t$ and open ball $S_t \ni \lambda_t$ such that $$ \operatorname{sgn} \left( \dfrac{\partial}{\partial \lambda} g(t,\lambda) \Bigg \rvert_{(t,\lambda)=(s,\lambda)}\right) = \operatorname{sgn} \left( \dfrac{\partial}{\partial \lambda} g(t,\lambda) \Bigg \rvert_{(t,\lambda)=(t,\lambda_t)}\right), \; \forall \; s \in V_t, \lambda \in S_t.$$ Since $h$ and $s \mapsto \lambda_s$ are continuous we can assume that $h(s),\lambda_s \in S_t$ for all $s \in V_t$. Hence, if $\lambda_s \neq h(s)$, then $$ 0 = g(s,h(s))-g(s,\lambda_s) = \int_{\lambda_s}^{h(s)} \dfrac{\partial}{\partial \lambda} g(t,\lambda) \Bigg \rvert_{(t,\lambda)=(s,\lambda)} \, d\lambda \neq 0,$$ since the integrand is either always positive or negative. This gives a contradiction. Thus $t \mapsto \lambda_t$ is infinitely differentiable and hence so is $\psi$. 
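A brief numerical illustration of part (a) (a sketch only; the chain, the function $f$ and the value of $t$ below are arbitrary choices): compute $\psi_N(t)$ directly from the matrix identity $\mathbb{E}\exp(t\sum_{i\leq N}f(X_i)) = \pi D_t Q_t^{N-1}\mathbf{1}$ with $D_t = \operatorname{diag}(e^{tf})$ and $\pi$ uniform, and compare it with the logarithm of the Perron eigenvalue of $Q_t$.
\begin{verbatim}
import numpy as np

# psi_N(t) from the matrix recursion versus log of the Perron root of Q_t.
rng = np.random.default_rng(9)
m, t, N = 3, 0.7, 400
Q = rng.random((m, m)) + 0.1
Q /= Q.sum(axis=1, keepdims=True)            # strictly positive stochastic Q
f = rng.standard_normal(m)
Qt = Q * np.exp(t * f)[None, :]              # Q_t(x, y) = Q(x, y) e^{t f(y)}
lam = np.max(np.abs(np.linalg.eigvals(Qt)))  # Perron eigenvalue of Q_t
v = np.full(m, 1.0 / m) * np.exp(t * f)      # pi * diag(e^{t f})
log_sum = np.log(v.sum()); v /= v.sum()
for _ in range(N - 1):                       # multiply by Q_t, renormalising
    v = v @ Qt
    s = v.sum(); log_sum += np.log(s); v /= s
print(log_sum / N, np.log(lam))              # differ only by O(1/N)
\end{verbatim}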
\item In this case $$ Q = \begin{bmatrix} 1-q & q \\ q & 1-q \end{bmatrix} \Rightarrow Q_t = \begin{bmatrix} e^{tf(0)}(1-q) & e^{tf(1)}q \\ e^{tf(0)}q & e^{tf(1)}(1-q) \end{bmatrix}$$ and hence $$ \operatorname{det}(Q_t-\lambda I_2) = (e^{tf(0)}(1-q) -\lambda)(e^{tf(1)}(1-q) -\lambda) - e^{tf(0)+tf(1)}q^2.$$ Solving this quadratic equation, we get $$ \lambda = \dfrac{(1-q)(e^{tf(0)}+e^{tf(1)}) \pm \sqrt{(1-q)^2(e^{tf(0)}+e^{tf(1)})^2 - 4(1-2q)e^{tf(0)+tf(1)}}}{2}.$$ Since we know from the \textit{Perron-Frobenius Theorem} that there exists at least one positive eigenvalue, the discriminant is non-negative. Since $\lambda_t$ is the largest eigenvalue, we have $$ \psi(t)=\log \lambda_t = \log \dfrac{(1-q)(e^{tf(0)}+e^{tf(1)}) + \sqrt{(1-q)^2(e^{tf(0)}+e^{tf(1)})^2 - 4(1-2q)e^{tf(0)+tf(1)}}}{2}.$$ \end{enumerate}
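As a quick numerical sanity check of the closed-form expression above (not part of the original solution), one can compare $\log \lambda_t$ with a direct transfer-matrix evaluation of $\psi_N(t)$ for large $N$. The sketch below assumes a uniform initial distribution $\pi$; the values of $q$, $f(0)$, $f(1)$ and $t$ are arbitrary illustrative choices.

```python
import numpy as np

# Sanity check: psi(t) = log(lambda_t) for the two-state chain.
# q, f and t are arbitrary illustrative choices; pi is taken uniform.
q, t = 0.3, 0.7
f = np.array([1.0, -2.0])                  # f(0), f(1)
Q = np.array([[1 - q, q], [q, 1 - q]])
Qt = Q * np.exp(t * f)                     # Q_t(x, y) = Q(x, y) exp(t f(y))

# Closed-form largest root of det(Q_t - lambda I) = 0, as derived above.
a = (1 - q) * np.exp(t * f).sum()
lam = (a + np.sqrt(a ** 2 - 4 * (1 - 2 * q) * np.exp(t * f.sum()))) / 2
print("log lambda_t (closed form):", np.log(lam))
print("log lambda_t (numerical):  ", np.log(np.linalg.eigvals(Qt).real.max()))

# Direct evaluation of psi_N(t) via E[exp(t sum f(X_i))] = pi_t' Q_t^{N-1} 1,
# where pi_t(x) = pi(x) exp(t f(x)); rescale each step to avoid overflow.
N = 4000
pi_t = 0.5 * np.exp(t * f)
vec, log_scale = np.ones(2), 0.0
for _ in range(N - 1):
    vec = Qt @ vec
    s = vec.max()
    vec /= s
    log_scale += np.log(s)
print("psi_N(t) for N =", N, ":", (np.log(pi_t @ vec) + log_scale) / N)
```

For large $N$ the transfer-matrix value agrees with $\log \lambda_t$ up to an $O(1/N)$ correction, consistent with the limit computed in part (a).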
2008-q2 | Consider the following discrete-time Markov process. At time zero each site x of the integer lattice \mathbb{Z} is either empty, with probability 1-p, or occupied by one blue particle, with probability p, independently of all other sites. In addition, one red particle occupies the site x = 0 at time zero.
The red particle then performs a discrete-time, symmetric simple random walk on the lattice. When a red particle hits a site occupied by a blue particle, this blue particle turns red and, on the following time step, starts an independent symmetric simple random walk. The movements taken at each time step by red particles are independent of each other and of the positions of the remaining blue particles (in particular, a site can hold as many particles as needed).
(a) For any positive integer n and each collection s of times 0 = t_0 < t_1 < t_2 < t_3 < \cdots < t_n and sites x_0 = 0, x_1, x_2, \ldots, x_n on the integer lattice, consider the following event, denoted \mathcal{E}(s): the first red particle (starting at position x_0) hits at time t_1 a blue particle at site x_1; this blue particle turns red and hits another blue particle at time t_2 and site x_2, which in turn wakes up (as red) and hits at time t_3 and site x_3 another blue particle, and so on, where the n-th (and last) particle thus considered is at time t_n at site x_n. Show that for any collection s,
P(\mathcal{E}(s)) \leq p^{n-1} \prod_{k=1}^{n}Q_{t_k - t_{k-1}}(x_k - x_{k-1}) , | We consider the following construction that generates the given dynamics. Let $\left\{Z_i : i \in \mathbb{Z}\right\}$ be a collection of \textit{i.i.d.} $\text{Ber}(p)$ variables, where $Z_i$ is the indicator of whether site $i$ initially contains a blue particle. Independently of this collection, we have a collection $\left\{\left\{S_{n,i} \right\}_{n \geq 0} : i \in \mathbb{Z}\right\}$ of independent symmetric simple random walks on $\mathbb{Z}$, where the walk $S_{\cdot,i}$ starts from site $i$. With these ingredients the process can be described as follows. The red particle at site $0$ follows the RW $S_{\cdot, 0}$. For $i \neq 0$, if $Z_i=1$ and $\tau_i$ is the first time a red particle reaches site $i$, then the blue particle at site $i$ becomes red and follows the RW $S_{\cdot, i}$, i.e. its position at time $k \geq \tau_i$ is $S_{k-\tau_i,i}.$\begin{enumerate}[label=(\alph*)] \item With the above dynamics in mind, we can write \begin{align*} \mathbb{P}(\mathcal{E}(s)) &= \mathbb{P}\left(Z_{x_i}=1, \tau_{x_i}=t_i \; \forall \; 1 \leq i \leq n-1; \; S_{t_i-\tau_{x_{i-1}},x_{i-1}}=x_i \; \forall \; 1 \leq i \leq n\right) \\ & \leq \mathbb{P}\left(Z_{x_i}=1 \; \forall \; 1 \leq i \leq n-1; \; S_{t_i-t_{i-1},x_{i-1}}=x_i \; \forall \; 1 \leq i \leq n\right) \\ & = \prod_{i=1}^{n-1} \mathbb{P}(Z_{x_i}=1) \prod_{i=1}^n \mathbb{P}\left( S_{t_i-t_{i-1},x_{i-1}}=x_i\right) \\ & = p^{n-1} \prod_{i=1}^n Q_{t_i-t_{i-1}}(x_i-x_{i-1}), \end{align*} where the product form follows from the independence of the $Z_i$'s and the walks, and the last equality holds because the walk $S_{\cdot,x_{i-1}}$ starts from site $x_{i-1}$. \item It is clear that if there is a red particle at site $x >0$ at time $t$, then there exist $n \geq 1$, $0=t_0 < t_1 < \ldots < t_n=t$ and $0=x_0, x_1, \ldots, x_n=x \in \mathbb{Z}$, such that $\mathcal{E}(s)$ has occurred for the collection $s = \left\{t_0, \ldots, t_n, x_0, \ldots, x_n\right\}$. Using part (a) and a union bound, we can write \begin{align*} \mathbb{P}(Z_t=x) &\leq \mathbb{P}(\exists \; \text{red particle at site } x \text{ at time } t) \\ & \leq \sum_{n \geq 1} \sum_{0=t_0 < \ldots < t_n=t} \sum_{0=x_0, x_1, \ldots, x_n=x} \mathbb{P}(\mathcal{E}(s=\left\{t_0, \ldots, t_n, x_0, \ldots, x_n\right\})) \\ & \leq \sum_{n \geq 1} \sum_{0=t_0 < \ldots < t_n=t} \sum_{0=x_0, x_1, \ldots, x_n=x} p^{n-1} \prod_{i=1}^n Q_{t_i-t_{i-1}}(x_i-x_{i-1}) \\ & = \sum_{n \geq 1} p^{n-1}\sum_{0=t_0 < \ldots < t_n=t} \left[\sum_{0=x_0, x_1, \ldots, x_n=x} \prod_{i=1}^n Q_{t_i-t_{i-1}}(x_i-x_{i-1}) \right] \\ &\stackrel{(i)}{=} \sum_{n \geq 1} p^{n-1}\sum_{0=t_0 < \ldots < t_n=t} Q_t(x) \\ & = \sum_{n \geq 1} p^{n-1} Q_t(x) {t-1 \choose n-1} = Q_t(x) \sum_{n=1}^t p^{n-1} {t-1 \choose n-1} = Q_t(x)(1+p)^{t-1}, \end{align*} where in $(i)$ we have used the \textit{Chapman-Kolmogorov equation}. Let $U_i$'s be \textit{i.i.d.} $\text{Uniform}(\pm 1)$ random variables and set $Y_n:=\sum_{k=1}^n U_k$. Clearly we have $Q_t(x) = \mathbb{P}(Y_t=x)$, and hence for any $v>0$, \begin{align*} \mathbb{P}(Z_t > vt) = \sum_{x > vt} \mathbb{P}(Z_t=x) &\leq (1+p)^{t-1} \sum_{x > vt} Q_t(x) \\ &= (1+p)^{t-1} \sum_{x > vt} \mathbb{P}(Y_t=x) \\ &= (1+p)^{t-1} \mathbb{P}(Y_t > vt) \\ &\leq (1+p)^{t-1}\exp\left(-\dfrac{2v^2t^2}{4t} \right) = (1+p)^{-1} \exp \left( t \log(1+p) - \dfrac{v^2t}{2}\right). \end{align*} The last inequality follows from the \textit{Azuma-Hoeffding bound}. Therefore, if $p$ is small enough that $2\log (1+p) <v^2$, i.e. $0< p < \exp(v^2/2)-1$, then $\mathbb{P}(Z_t > vt) \longrightarrow 0$ as $t \to \infty$. \end{enumerate}
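As a small numerical illustration of the two ingredients used in part (b) (not part of the original solution): the count of admissible time sequences, $\sum_{n=1}^t p^{n-1}\binom{t-1}{n-1}=(1+p)^{t-1}$, and the Azuma-Hoeffding tail bound $\mathbb{P}(Y_t > vt) \leq e^{-v^2 t/2}$ for the symmetric simple random walk. The values of $t$, $p$ and $v$ below are arbitrary illustrative choices, and $Q_t(x)=\mathbb{P}(Y_t=x)$ is evaluated exactly from the binomial distribution.

```python
import math

# Illustrative check of the ingredients of part (b); t, p, v are arbitrary choices.
t, p, v = 40, 0.05, 0.8

# Q_t(x) = P(Y_t = x) for the symmetric SRW: (t + x)/2 up-steps, x with the parity of t.
def Q(t, x):
    if abs(x) > t or (t + x) % 2:
        return 0.0
    return math.comb(t, (t + x) // 2) * 0.5 ** t

# Counting admissible time sequences: sum_{n=1}^t p^{n-1} C(t-1, n-1) = (1+p)^{t-1}.
lhs = sum(p ** (n - 1) * math.comb(t - 1, n - 1) for n in range(1, t + 1))
print(lhs, (1 + p) ** (t - 1))                      # the two numbers coincide

# Azuma-Hoeffding step: P(Y_t > vt) <= exp(-v^2 t / 2).
tail = sum(Q(t, x) for x in range(int(v * t) + 1, t + 1))
print(tail, math.exp(-v ** 2 * t / 2))              # exact tail lies below the bound

# Resulting bound from the solution: P(Z_t > vt) <= (1+p)^{t-1} exp(-v^2 t / 2).
print((1 + p) ** (t - 1) * math.exp(-v ** 2 * t / 2))
```

Both checks are consistent with the bound $\mathbb{P}(Z_t > vt) \leq (1+p)^{t-1} e^{-v^2 t/2}$ derived above, which tends to $0$ when $2\log(1+p) < v^2$.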