1991-q1
Let r be a fixed positive integer, and let W be the waiting time till the rth head in a sequence of fair coin tosses. Find simple formulae in terms of r for (a) the mean of W (b) the median of W, that is, the least integer w so that P(W > w) \leq 1/2. (c) the mode of W, that is, an integer w so that P(W = w) \geq P(W = k) for all integers k.
Let $Y_i$ be the waiting time between the $(i-1)$-th head and the $i$-th head (including the time point of the $i$-th head, excluding that for the $(i-1)$-th), for $i \geq 1$. Clearly, $Y_i$'s are \textit{i.i.d.} Geometric($1/2$) random variables. \begin{enumerate}[label=(\alph*)] \item By definition, $W=\sum_{i=1}^r Y_i$. Hence, $\mathbb{E}(W)=r\mathbb{E}(Y_1)=2r.$ \item It is easy to see that $$\mathbb{P}(W>w) = \mathbb{P}(\text{there have been fewer than $r$ heads in the first $w$ tosses}) = \sum_{k=0}^{r-1} {w \choose k} 2^{-w} \leq \dfrac{1}{2} \iff \sum_{k=0}^{r-1}{w \choose k} \leq 2^{w-1}.$$ Recall that for any $n \geq 1$, $$ \sum_{l=0}^k {n \choose l} \leq 2^{n-1} \iff k \leq \lfloor \dfrac{n-1}{2}\rfloor.$$ Hence, $$ \mathbb{P}(W>w) \leq \dfrac{1}{2} \iff r-1 \leq \lfloor \dfrac{w-1}{2}\rfloor \iff r-1 \leq (w-1)/2 \iff w \geq 2r-1.$$ Thus the median is $2r-1$. \item It is easy to see that for any $w \in \mathbb{N}$, we have $$ p(w) :=\mathbb{P}(W=w) = {w-1 \choose r-1} 2^{-w}. $$ This is positive only for $w \in \left\{r,r+1,r+2, \ldots\right\}.$ Observe that for $w \geq r$, $$ \dfrac{p(w)}{p(w+1)} = \dfrac{2(w-r+1)}{w} \geq 1 \iff w \geq 2r-2, \; \dfrac{p(w)}{p(w+1)} = \dfrac{2(w-r+1)}{w} \leq 1 \iff w \leq 2r-2.$$ The first condition gives that $p(2r-1)=p(2r-2) \geq p(w)$ for all $w \geq 2r-1$ and the second condition implies that $p(2r-2) \geq p(w)$ for all $w \leq 2r-2. $ Since $2r-1 \geq r$, a mode is $M=2r-1.$ \end{enumerate}
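The formulae in (a)--(c) can be checked by a quick simulation. The following Python sketch (the choice $r=5$, the sample size and the random seed are arbitrary illustrative choices, not part of the problem) draws $W$ as a sum of $r$ independent Geometric($1/2$) waiting times and reports the empirical mean, median and mode; note that $2r-2$ and $2r-1$ are both modes, so the empirical mode may come out as either.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
r, reps = 5, 200_000                  # illustrative choices
# W = number of tosses up to and including the r-th head:
# a sum of r independent Geometric(1/2) waiting times.
W = rng.geometric(0.5, size=(reps, r)).sum(axis=1)
vals, counts = np.unique(W, return_counts=True)
cdf = np.cumsum(counts) / reps
print("mean   ~", W.mean(), "  formula 2r     =", 2 * r)
print("median =", vals[np.argmax(cdf >= 0.5)], "  formula 2r - 1 =", 2 * r - 1)
print("mode   =", vals[counts.argmax()], "  formula 2r - 1 =", 2 * r - 1)
\end{verbatim}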
1991-q2
Let (Y_n, n = 1, 2, 3, \cdots) be an arbitrary sequence of random variables on a probability space (\Omega, \mathcal{F}, P), and let (\nu_k, k = 1, 2, 3, \cdots) be a sequence of positive integer valued random variables on the same space. Define Y_{\nu_k} by Y_{\nu_k}(\omega) = Y_{\nu_k(\omega)}(\omega), \omega \in \Omega Consider the following conditions: (a) Y_n \to Y a.s. (b) Y_n \to Y in probability (\alpha) \nu_k \to \infty a.s. (\beta) \nu_k \to \infty in probability (A) Y_{\nu_k} \to Y a.s. (B) Y_{\nu_k} \to Y in probability where (\beta) means for every positive \lambda , P(\nu_k > \lambda) \to 1 as k \to \infty. Say whether the following statements are true or false (give proof or counterexample). (i) (a) and (\alpha) together imply (A). (ii) (a) and (\alpha) together imply (B). (iii) (a) and (\beta) together imply (A). (iv) (a) and (\beta) together imply (B). (v) (b) and (\alpha) together imply (A). (vi) (b) and (\beta) together imply (B).
\begin{enumerate}[label=(\roman*)] \item \textbf{TRUE}. Set $A :=(Y_n \to Y), B:=(\nu_k \to \infty) $. Then $\mathbb{P}(A \cap B)=1$ (since $\mathbb{P}(A), \mathbb{P}(B)=1$) and for $\omega \in A \cap B$, clearly we have $Y_{\nu_k(\omega)}(\omega) \to Y(\omega),$ as $k \to \infty$. \item \textbf{TRUE}. This follows from (i), since $Y_{\nu_k} \stackrel{a.s.}{\rightarrow} Y$ implies that $Y_{\nu_k} \stackrel{p}{\rightarrow} Y.$ \item \textbf{FALSE}. As a counter-example, take $Y_n \equiv 1/n$, $Y \equiv 0$ and $\left\{\nu_k : k \geq 1\right\}$ to be independent collection with $$ \mathbb{P}(\nu_k=1)=1/k, \; \mathbb{P}(\nu_k=k)=1-1/k, \; \forall \; k \geq 1.$$ Clearly, $Y_n$ converges to $Y$ almost surely and for any $\lambda >0$, $\mathbb{P}(\nu_k > \lambda) = 1-1/k, \;\forall \; k > \lambda +1$ and hence $\mathbb{P}(\nu_k > \lambda) \to 1$. So $\nu_k \to \infty$ in probability. Since, $\nu_k$'s are independent and $\sum_{k \geq 1} \mathbb{P}(\nu_k =1) = \infty$, we have with probability $1$, $\nu_k=1$ infinitely often and hence $Y_{\nu_k}=Y_1=1$ infinitely often. Thus $\limsup_{k \to \infty} Y_{\nu_k}=1$ almost surely and hence $Y_{\nu_k}$ does not converge to $Y$ almost surely. \item \textbf{TRUE}. Set $Z_n=\sup_{m \geq n} |Y_m-Y|$. Since, $Y_n$ converges almost surely to $Y$, we know that $Z_n \downarrow 0$, almost surely. Fix $\varepsilon>0$. Then for any $N \geq 1$, $$ \mathbb{P}(|Y_{\nu_k}-Y| \geq \varepsilon) \leq \mathbb{P}(Z_N \geq \varepsilon) + \mathbb{P}(\nu_k < N).$$ Take $k \to \infty$ and use $\nu_k \to \infty$ in probability to get, $$\limsup_{k \to \infty} \mathbb{P}(|Y_{\nu_k}-Y| \geq \varepsilon) \leq \mathbb{P}(Z_N \geq \varepsilon).$$ Now take $N \to \infty$ and use that $Z_N$ converges to $0$ almost surely to complete the proof. \item \textbf{FALSE}. As a counter-example, take $\nu_k \equiv k$, and $\left\{Y_n : n \geq 1\right\}$ to be independent collection with $$ \mathbb{P}(Y_n=1)=1/n, \; \mathbb{P}(Y_n=0)=1-1/n, \; \forall \; n \geq 1.$$ Clearly, $\nu_k$ converges to $\infty$ almost surely and $Y_k$ converges to $Y \equiv 0$ in probability. But $Y_{\nu_k}=Y_k$ does not converge to $0$ almost surely since $Y_k$'s are independent and $\sum_{k \geq 1} \mathbb{P}(Y_k=1)=\infty$ implying that $Y_k=1$ infinitely often almost surely. \item \textbf{FALSE}. As a counter-example, take $\left\{Y_n : n \geq 1\right\}$ to be independent collection with $$ \mathbb{P}(Y_n=1)=1/n, \; \mathbb{P}(Y_n=0)=1-1/n, \; \forall \; n \geq 1.$$ Clearly, $Y_n$ converges to $Y \equiv 0$ in probability. Since $Y_k$'s are independent and $\sum_{k \geq 1} \mathbb{P}(Y_k=1)=\infty$, we know that $Y_k=1$ infinitely often almost surely. Set $$\nu_k := \inf \left\{n > \nu_{k-1} : Y_n=1\right\}, \; \forall \; k \geq 1,$$ where $\nu_0:=0$. Therefore $\nu_k \to \infty$ almost surely and hence in probability. But $Y_{\nu_k} \equiv 1$, for all $k \geq 1$ and hence does not converge to $Y$ in probability. \end{enumerate}
1991-q3
Let X be a random variable with 0 < E(X^2) < \infty. Let Y and Z be i.i.d. with the same law as X, and suppose the law of (Y + Z)/\sqrt{2} is the same as the law of X. Show that the law of X is normal with mean 0 and variance \sigma^2 for some positive real \sigma^2.
Let $\mu:=\mathbb{E}(X)$ and $\sigma^2:=\operatorname{Var}(X).$ Since $\mathbb{E}(X^2)< \infty$, both of these quantities are finite. We are given that for $Y,Z$ i.i.d. with same distribution as $X$, we have $X \stackrel{d}{=}(Y+Z)/\sqrt{2}$. Taking expectations on both sides, we get $\mu = \sqrt{2}\mu$ and hence $\mu=0$. Then, $\sigma^2 = \operatorname{Var}(X)=\mathbb{E}(X^2)>0.$ Let $\phi$ be the characteristic function of $X$. By the CLT, applied to an i.i.d. sequence with mean $0$, variance $\sigma^2$ and common characteristic function $\phi$, we have \begin{equation}{\label{CLT}} \phi\left( \dfrac{t}{\sqrt{n}}\right)^n \stackrel{n \to \infty}{\longrightarrow} \exp \left( -\dfrac{\sigma^2 t^2}{2}\right), \; \forall \; t \in \mathbb{R}. \end{equation} By the given condition, we can write $$ \phi(t) = \mathbb{E}\exp(itX) = \mathbb{E}\exp(it(Y+Z)/\sqrt{2}) = \phi \left(\dfrac{t}{\sqrt{2}} \right)^2, \;\forall \; t \in \mathbb{R}.$$ Using this relation $k$ times we get $$\phi(t) = \phi \left(\dfrac{t}{\sqrt{2^k}} \right)^{2^k}, \;\forall \; t \in \mathbb{R}, \; k \geq 0.$$ Taking $k \to \infty$ and using (\ref{CLT}) along the subsequence $n=2^k$, we conclude that $\phi(t)= \exp(-\sigma^2 t^2/2)$, for all $t \in \mathbb{R}$. Therefore, $X \sim N(0,\sigma^2)$.
1991-q4
David Williams says , "In the Tale of Peredur ap Efrawg in the very early Welsh folk tales ... there is a magical flock of sheep, some black, some white. We sacrifice poetry for precision in specifying its behaviour. At each of times 1, 2, 3, \cdots a sheep (chosen randomly from the entire flock, independently of previous events) bleats; if this bleating sheep is white, one black sheep (if any remain) instantly becomes white; if the bleating sheep is black, one white sheep (if any remain) instantly becomes black. No births or deaths occur." Suppose the flock starts with n sheep, x of which are black. Find the probability that (a) eventually the flock has black sheep only (b) eventually the flock has white sheep only (c) the flock always has sheep of both colors [Hint: You can try Markov chains, or martingales, or a very elementary approach: let P_z be the probability in (a), and compare P_{z+1} - P_z with P_z - P_{z-1}.] (d) Approximate the probability in (a) when n = 500 and x = 200.
Let $X_n$ be the number of black sheep in the flock after time $n$. Clearly, $X_0=x$ and $\left\{X_m : m \geq 0\right\}$ is a Markov chain on the state space $\left\{0,1, \ldots, n\right\}$ with transition probabilities as follows. $$ p(i,i+1) = i/n, \; p(i,i-1) = 1-i/n, \; \forall \; i=1, \ldots, n-1,$$ with $0,n$ being absorbing states. \begin{enumerate}[label=(\alph*)] \item Let $P_y := \mathbb{P}(X_m =n, \; \text{for some }m \mid X_0=y),$ where $0 \leq y \leq n$. Then if $y \neq 0,n$, \begin{align*} P_y &= \mathbb{P}(X_m =n, \; \text{for some }m \mid X_0=y) \\ &= \mathbb{P}(X_m =n, \; \text{for some }m \geq 1\mid X_1=y+1)(y/n) + \mathbb{P}(X_m =n, \; \text{for some }m \geq 1 \mid X_1=y-1)(1-y/n) \\ &= \mathbb{P}(X_m =n, \; \text{for some }m \mid X_0=y+1)(y/n) + \mathbb{P}(X_m =n, \; \text{for some }m \mid X_0=y-1)(1-y/n) \\ &= P_{y+1}(y/n) +P_{y-1}(1-y/n). \end{align*} Thus $$ (P_{y}-P_{y-1}) = \dfrac{y}{n-y}(P_{y+1}-P_{y}), \; \forall \; 1 \leq y \leq n-1,$$ with the boundary conditions $P_0=0, P_n=1.$ Opening the recursion, we get $$ P_{y} - P_{y-1} = \left[ \prod_{j=y}^{n-1} \dfrac{j}{n-j} \right](P_{n}-P_{n-1}) = (1-P_{n-1}) \dfrac{(n-1)!}{(y-1)!(n-y)!} = {n-1 \choose y-1} (1-P_{n-1}), \; \; 1 \leq y \leq n.$$ Summing over $y$, we get $$ 1= P_n-P_0 = \sum_{y =1}^n (P_{y}-P_{y-1}) = (1-P_{n-1}) \sum_{y =1}^{n} {n-1 \choose y-1} = 2^{n-1}(1-P_{n-1}).$$ Thus $1-P_{n-1}=2^{-(n-1)}$ and for all $0 \leq x \leq n-1$, $$ 1-P_x= P_n-P_x = \sum_{y =x+1}^n (P_{y}-P_{y-1}) =(1- P_{n-1}) \sum_{y =x+1}^{n} {n-1 \choose y-1} = 2^{-n+1}\sum_{z=x}^{n-1} {n-1 \choose z}.$$ Thus the probability that the flock eventually has black sheep only (and no white sheep) is $P_{x}$ and $$ P_{x} = 2^{-n+1}\sum_{z=0}^{x-1} {n-1 \choose z},\, \forall \; 1 \leq x \leq n, \; P_0=0.$$ \item Consider a MC $\left\{Y_m : m \geq 0\right\}$ on the state space $\left\{0, \ldots, n\right\}$ with transition probabilities $$ q(i,i+1) = i/n, \; q(i,i-1) = 1-i/n, \; \forall \; i=1, \ldots, n-1; \; q(0,1)=1, \; q(n,n-1)=1.$$ Since, $p(i, \cdot)=q(i, \cdot)$ for all $1 \leq i \leq n-1$, we have for any $1 \leq x \leq n-1$, \begin{align*} \mathbb{P}(X_m \in \left\{0,n\right\} \text{ for some } m \geq 0\mid X_0=x) =\mathbb{P}(Y_m \in \left\{0,n\right\} \text{ for some } m \geq 0\mid Y_0=x) = 1, \end{align*} where the last equality is true since $q$ gives an irreducible chain on a finite state space. Hence, if the probability that the flock eventually has white sheep only is $Q_x$, then clearly $Q_0=1$, $Q_n=0$, and since absorption in $\left\{0,n\right\}$ is almost sure, $Q_x=1-P_x$ for $1 \leq x \leq n-1$. Thus $$ Q_{x} = 2^{-n+1}\sum_{z=x}^{n-1} {n-1 \choose z},\, \forall \; 1 \leq x \leq n-1, \; Q_0=1, \; Q_n=0.$$ \item By the argument in part (b), the chain is absorbed in $\left\{0,n\right\}$ with probability $1$, so the probability that the flock always has sheep of both colours is $0$. \item For large enough $n,x,n-x$, $$P_x = 2^{-n+1}\sum_{z=0}^{x-1} {n-1 \choose z} = \mathbb{P}(\text{Bin}(n-1,1/2) \leq x-1) \approx \Phi \left(\dfrac{x-1-(n-1)/2}{\sqrt{(n-1)/4}} \right).$$ If $n=500,x=200$, this becomes $$ P_{200} \approx \Phi \left(\dfrac{-101}{\sqrt{499}} \right) \approx \Phi(-100/\sqrt{500}) = \Phi(-2\sqrt{5}) \approx 3.87 \times 10^{-6}.$$ \end{enumerate}
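The formula for $P_x$ and the approximation in (d) can be checked numerically. Below is a small Python sketch; the flock size $n=10$, $x=4$ used for the Monte Carlo part, the number of replications and the seed are arbitrary choices made only for illustration.
\begin{verbatim}
from math import comb
import numpy as np

rng = np.random.default_rng(0)

def p_all_black_exact(n, x):
    # P_x = 2^{-(n-1)} * sum_{z=0}^{x-1} C(n-1, z)
    return sum(comb(n - 1, z) for z in range(x)) / 2 ** (n - 1)

def p_all_black_mc(n, x, reps=20_000):
    hits = 0
    for _ in range(reps):
        b = x                              # current number of black sheep
        while 0 < b < n:                   # run until one colour dies out
            b += 1 if rng.random() < b / n else -1
        hits += (b == n)
    return hits / reps

n, x = 10, 4
print(p_all_black_mc(n, x), p_all_black_exact(n, x))
print(p_all_black_exact(500, 200))         # compare with the normal approximation in (d)
\end{verbatim}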
1991-q5
Let (X_n) be an irreducible Markov chain with countable state space I. Let B \subset I be infinite, and let T_B = \inf\{n \geq 0 : X_n \in B\} be the first hitting time of B, counting a hit at time 0. For x \in I, let f(x) = P(T_B < \infty|X_0 = x) (a) Show that f(X_n) is a supermartingale, with respect to \sigma-fields which you should specify. (b) Show that either f(x) = 1 for all x \in I, or \inf\{f(x): x \in I\} = 0. (c) Show \{X_n \text{ visits } B \text{ infinitely often}\} = \{f(X_n) \to 1\} \text{ a.s.} and \{X_n \text{ visits } B \text{ finitely often}\} = \{f(X_n) \to 0\} \text{ a.s.} [ Hint: use the fact that if F_n \uparrow F, Y_n \to Y a.s. and the Y_n are bounded, then E(Y_n|F_n) \to E(Y|F) a.s. This fact was proved in 310B, so you need not prove it here.]
Let $\mathcal{F}_n:= \sigma(X_k : 0 \leq k \leq n)$, for all $n \geq 0$. \begin{enumerate}[label=(\alph*)] \item We know that $x \mapsto p(x,\mathcal{C}) = \mathbb{P}(X_{\cdot} \in \mathcal{C} \mid X_0=x)$ is a measurable function from $I$ to $[0,1]$ for any $\mathcal{C} \in \mathcal{B}_{I}^{\mathbb{Z}_{\geq 0}}$ where $\mathcal{B}_I$ is the Borel $\sigma$-algebra for $I$, which is the power set of $I$ since $I$ is countable. Hence $f$ is a bounded measurable function implying that $f(X_n) \in m \mathcal{F}_n$ with $\mathbb{E}|f(X_n)|< \infty$. Let $p(\cdot,\cdot)$ be the transition probability for the chain. If $x \in B$, then by definition we have $f(x)=1 = \sum_{y \in I} p(x,y) \geq \sum_{y \in I} f(y)p(x,y)$, since $0 \leq f(y) \leq 1$. If $x \notin B$, then \begin{align*} f(x)=\mathbb{P}(T_B< \infty \mid X_0=x) &= \mathbb{P}(X_n \in B \; \text{for some }n \geq 1 \mid X_0=x) \\ &= \sum_{y \in I} \mathbb{P}(X_n \in B \; \text{for some }n \geq 1 \mid X_1=y)\mathbb{P}(X_1=y \mid X_0=x) \\ &= \sum_{y \in I} \mathbb{P}(X_n \in B \; \text{for some }n \geq 0 \mid X_0=y)p(x,y) = \sum_{y \in I} f(y)p(x,y). \end{align*} Hence, for all $n \geq 1$, \begin{align*} \mathbb{E}(f(X_n)\mid \mathcal{F}_{n-1}) = \mathbb{E}(f(X_n)\mid X_{n-1}) = \sum_{y \in I} f(y)p(X_{n-1},y) \leq f(X_{n-1}), \end{align*} establishing that $\left\{f(X_n), \mathcal{F}_n, n \geq 0\right\}$ is a super-MG. \item Suppose $\inf \left\{f(x) : x \in I\right\} =c >0$. Also define $\Gamma^B := (X_n \in B \text{ i.o.})$. Note that $\Gamma_{n} :=\bigcup_{m \geq n}(X_m \in B) \downarrow \Gamma^B$, in other words $\mathbbm{1}_{\Gamma_n} \stackrel{a.s.}{\longrightarrow} \mathbbm{1}_{\Gamma^B}$. Let $h: (I^{\mathbb{Z}_{\geq 0}},\mathcal{B}_{I}^{\mathbb{Z}_{\geq 0}}) \to (\mathbb{R}, \mathcal{B}_{\mathbb{R}})$ be the bounded measurable function defined as $$ h(\left\{x_i :i \geq 0\right\}) = \mathbbm{1}(x_i \in B, \; \text{ for some } i \geq 0).$$ By strong Markov property, we have $$ \mathbb{E}\left[ h(X_{n+\cdot}) \mid \mathcal{F}_n\right](\omega) = \mathbb{E}_{X_n(\omega)}[h] = f(X_n(\omega)) .$$ By the hypothesis, we have almost surely, $$ \mathbb{E} \left[ \mathbbm{1}_{\Gamma_n} \mid \mathcal{F}_n \right] = \mathbb{P}(X_m \in B, \; \text{ for some } m \geq n \mid \mathcal{F}_n) = \mathbb{E}\left[ h(X_{n+\cdot}) \mid \mathcal{F}_n\right] = f(X_n) \geq c,$$ By \textit{Levy's upward Theorem}, we have $$ \mathbb{E} \left[ \mathbbm{1}_{\Gamma_{n}} \mid \mathcal{F}_n \right] \stackrel{a.s.}{\longrightarrow} \mathbb{E} \left[ \mathbbm{1}_{\Gamma^B} \mid \mathcal{F}_{\infty} \right] = \mathbbm{1}_{\Gamma^B},$$ where $\mathcal{F}_{\infty}=\sigma(X_0, X_1, \ldots,)$. Thus we can conclude that $\mathbbm{1}_{\Gamma^B} \geq c >0$ almost surely. Hence, $\mathbb{P}(T_B < \infty) \geq \mathbb{P}(\Gamma^B) =1$ for any choice of $X_0$. In particular if we take $X_0\equiv x$, then we get $f(x)=1$. \item We have actually proved in part (b) that $f(X_n) \stackrel{a.s.}{\longrightarrow} \mathbbm{1}_{\Gamma^B}$. Thus $$ (f(X_n) \to 1) = \Gamma^B = (X_n \in B \text{ i.o. }), \; \text{a.s.},$$ and $$ (f(X_n) \to 0) = (\Gamma^B)^c = (X_n \in B \text{ f.o. }), \; \text{a.s.}.$$ \end{enumerate}
1991-q6
Let S_n = X_2 + X_3 + \cdots + X_n where the X_j's are independent, and P(X_j = j) = 1/j, P(X_j = 0) = 1 - 1/j, \ j \geq 2. (a) Discuss the behavior of S_n as n \to \infty. (b) Find constants \{a_n\} and positive constants \{b_n\} so that the law of (S_n - a_n)/b_n tends to a nondegenerate limit as n \to \infty.
We have $X_j$'s independent with $\mathbb{P}(X_j=j)=j^{-1}$ and $\mathbb{P}(X_j=0)=1-1/j$ for all $j \geq 1$, and $S_n = \sum_{j=1}^n X_j$. (The problem defines $S_n = X_2+\cdots+X_n$; since $X_1 \equiv 1$, including it only shifts $S_n$ by the constant $1$ and affects none of the conclusions below.) \begin{enumerate}[label=(\alph*)] \item We have $$ \sum_{j \geq 1} \mathbb{P}(X_j=j) = \sum_{j \geq 1} j^{-1}=\infty,$$ and hence by \textit{Borel-Cantelli Lemma II}, we can conclude $\mathbb{P}(X_j=j \text{ i.o.})=1$. Since on the event $(X_j=j \text{ i.o.})$ we have $S_n \to \infty$, we can say that $S_n \stackrel{a.s.}{\longrightarrow} \infty$ as $n \to \infty.$ \item Note that $\mathbb{E}(S_n) = \sum_{j=1}^n \mathbb{E}(X_j) = n$ and $\operatorname{Var}(S_n) = \sum_{j=1}^n \operatorname{Var}(X_j) = \sum_{j=1}^n (j-1)=n(n-1)/2.$ Thus the natural scaling is $b_n=n$; it turns out that no centering is needed (one can take $a_n=0$), so it is enough to evaluate the asymptotics of $S_n/n$. Fix $t \in \mathbb{R}$. We get, $$ \mathbb{E} \exp\left[ it\dfrac{S_n}{n}\right] = \prod_{k=1}^n \mathbb{E} \exp\left[ it\dfrac{X_k}{n}\right] = \prod_{k=1}^n \left[ \dfrac{1}{k}\exp \left( \dfrac{itk}{n}\right) + \left(1-\dfrac{1}{k}\right) \right] = \prod_{k=1}^n z_{n,k}(t),$$ where $$ z_{n,k}(t) :=\dfrac{1}{k}\exp \left( \dfrac{itk}{n}\right) + \left(1-\dfrac{1}{k}\right) , \; \forall \; 1 \leq k \leq n.$$ We intend to apply ~\cite[Lemma 3.3.31]{dembo}. Observe that \begin{align*} \sum_{k=1}^n (1-z_{n,k}(t)) = \sum_{k=1}^n \left( 1-\dfrac{1}{k}\exp \left( \dfrac{itk}{n}\right) - \left(1-\dfrac{1}{k}\right) \right) &= \sum_{k=1}^n \dfrac{1}{k}\left(1-\exp \left( \dfrac{itk}{n}\right) \right) \\ &= \dfrac{1}{n}\sum_{k=1}^n \dfrac{n}{k}\left(1-\exp \left( \dfrac{itk}{n}\right) \right) \\ & \longrightarrow \int_{0}^1 \dfrac{1-\exp(itx)}{x}\, dx, \end{align*} and, using $|1-e^{ix}| \leq |x|$, \begin{align*} \sum_{k=1}^n |1-z_{n,k}(t)|^2 = \sum_{k=1}^n \Bigg \rvert 1-\dfrac{1}{k}\exp \left( \dfrac{itk}{n}\right) - \left(1-\dfrac{1}{k}\right) \Bigg \rvert^2 = \sum_{k=1}^n \dfrac{1}{k^2}\Bigg \rvert 1-\exp \left( \dfrac{itk}{n}\right) \Bigg \rvert^2 \leq \sum_{k=1}^n \dfrac{1}{k^2} \cdot \dfrac{t^2k^2}{n^2} = \dfrac{t^2}{n} \to 0. \end{align*} Therefore, $$ \mathbb{E} \exp\left[ it\dfrac{S_n}{n}\right] = \prod_{k=1}^n (1+z_{n,k}(t)-1) \longrightarrow \exp \left( - \int_{0}^1 \dfrac{1-\exp(itx)}{x}\, dx\right) =: \psi(t).$$ Since $|e^{ix}-1| \leq |x|$, we have $$ \int_{0}^1 \Bigg \rvert\dfrac{1-\exp(itx)}{x}\Bigg \rvert\, dx \leq \int_{0}^1 |t| \, dx =|t| < \infty.$$ Hence, $\psi$ is a well defined function. The above inequality also shows that $\psi(t) \to 1 =\psi(0)$ as $t \to 0$. Hence, by \textit{Levy's Continuity Theorem}, we conclude that $S_n/n \stackrel{d}{\longrightarrow} G$ where $G$ is a distribution function with corresponding characteristic function being $\psi$. From the expression of $\psi$, we can be certain that $G$ is not a degenerate distribution because otherwise we would have $$ \int_{0}^1 \dfrac{1-\cos (tx)}{x} \, dx =0, \; \forall \; t \in \mathbb{R},$$ which is impossible since for $t \neq 0$ the integrand is non-negative and strictly positive on a set of positive Lebesgue measure. Thus $a_n=0$ and $b_n=n$ work, the limit law being $G$ as described above. \end{enumerate}
1991-q7
Find a sequence of Borel measurable functions f_n : [0, 1]^n \to \{0, 1\} such that whenever X_1, X_2, X_3, \cdots are i.i.d. random variables with values in [0, 1], f_n(X_1, X_2, \cdots, X_n) \to 0 \text{ a.s. in case } E(X) < 1/2, and f_n(X_1, X_2, \cdots, X_n) \to 1 \text{ a.s. in case } E(X) \geq 1/2.
Define $f_n :[0,1]^n \to \left\{0,1\right\}$ as $$ f_n(x_1,\ldots,x_n) = \mathbbm{1}\left( \sum_{i=1}^n x_i \geq \dfrac{n}{2} - n^{2/3}\right), \; \forall \; x_1, \ldots,x_n \in [0,1], \; n \geq 1.$$ Clearly $f_n$ is Borel-measurable. Now suppose $X_1, X_2,\ldots$ are \textit{i.i.d.} $[0,1]$-valued random variables with $\mu=\mathbb{E}(X_1)$ and $S_n := \sum_{k=1}^n X_k$. Then $$ f_n(X_1, \ldots, X_n) = \mathbbm{1}(S_n \geq n/2 - n^{2/3}).$$ For $\mu <1/2$, we have by SLLN, $S_n/n \stackrel{a.s.}{\longrightarrow} \mu$ and hence $$ S_n - \dfrac{n}{2} +n^{2/3} = n\left( \dfrac{S_n}{n}-\dfrac{1}{2}+n^{-1/3}\right) \stackrel{a.s.}{\longrightarrow} -\infty \Rightarrow f_n(X_1, \ldots, X_n) \stackrel{a.s.}{\longrightarrow} 0.$$ For $\mu >1/2$, we have by SLLN, $S_n/n \stackrel{a.s.}{\longrightarrow} \mu$ and hence $$ S_n - \dfrac{n}{2} +n^{2/3} = n\left( \dfrac{S_n}{n}-\dfrac{1}{2}+n^{-1/3}\right) \stackrel{a.s.}{\longrightarrow} \infty \Rightarrow f_n(X_1, \ldots, X_n) \stackrel{a.s.}{\longrightarrow} 1.$$ For $\mu=1/2$, note that since $X_1$ takes value in $[0,1]$, it is $1/4$-sub-Gaussian and thus $S_n$ is $n/4$-sub-Gaussian. Hence, $$ \mathbb{P}\left( S_n - \dfrac{n}{2} < -t\right) \leq \exp\left(-\dfrac{2t^2}{n}\right), \; \forall \; t \geq 0,$$ and thus $$ \sum_{n \geq 1} \mathbb{P}(f_n(X_1, \ldots, X_n)=0) = \sum_{n \geq 1} \mathbb{P} \left( S_n - \dfrac{n}{2} < -n^{2/3}\right) \leq \sum_{n \geq 1} \exp \left( -2n^{1/3}\right)<\infty.$$ Applying \textit{Borel-Cantelli Lemma I}, we conclude that $ f_n(X_1, \ldots, X_n) \stackrel{a.s.}{\longrightarrow} 1.$
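To see the decision rule in action, here is a short Python sketch; the Beta$(a,b)$ test distributions (with means $1/4$, $1/2$, $3/4$) and the sample size are arbitrary illustrative choices, not part of the problem.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def f_n(xs):
    # the rule from the solution: 1 iff the sample sum >= n/2 - n^(2/3)
    n = len(xs)
    return int(xs.sum() >= n / 2 - n ** (2 / 3))

n = 5_000
for a, b in [(1, 3), (1, 1), (3, 1)]:      # Beta(a, b) with means 1/4, 1/2, 3/4
    xs = rng.beta(a, b, size=n)
    print(f"E(X) = {a / (a + b):.2f}  ->  f_n(X_1,...,X_n) = {f_n(xs)}")
\end{verbatim}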
1992-q1
A gambler wins or loses $1 according as he bets correctly or incorrectly on the outcome of tossing a coin which comes up heads with probability p and tails with probability q = 1 - p. It is known that p \neq 1/2, but not whether p > q or p < q. The gambler uses the following strategy. On the first toss he bets on heads. Thereafter he bets on heads if and only if the accumulated number of heads on past tosses at least equals the accumulated number of tails. Suppose the gambler's initial fortune is $k, where k is a positive integer. What is the probability that the gambler is eventually ruined?
Let $W_n$ be the gambler's winnings up to time $n$. For the sake of simplicity of notation, we shall assume that the gambler's winnings can go down to $-\infty$ and the ruin probability is defined as the probability that $W_n \leq 0$ for some $n$. We are given that $W_0=k$. Let $S_n$ be the difference between the number of heads and the number of tails up to time $n$. $\left\{S_n : n \geq 0\right\}$ is thus a SRW on $\mathbb{Z}$ with probability of going to the right being $p=1-q$. The betting strategy then boils down to ``Bet on heads at the $n$-th toss if and only if $S_{n-1} \geq 0$". The proof depends upon the following crucial but tricky observation. For all $n \geq 1$, $$ (W_n-|S_n|)-(W_{n-1}-|S_{n-1}|) = -2\mathbbm{1}(S_{n-1}=0, S_n=-1).$$ To make sense of the above observation, we consider a case by case basis. \begin{itemize} \item When $S_{n-1}>0$, the gambler bets on heads at the $n$-th toss. If the coin turns up head, $W_n=W_{n-1}+1$ and $|S_n| = S_n=S_{n-1}+1=|S_{n-1}|+1.$ Similarly, if the coin turns up tail, $W_n-W_{n-1}=-1, |S_n| - |S_{n-1}| = -1.$ \item When $S_{n-1}<0$, the gambler bets on tails at the $n$-th toss. If the coin turns up head, $W_n=W_{n-1}-1$ and $|S_n| = -S_n=-S_{n-1}-1=|S_{n-1}|-1.$ Similarly, if the coin turns up tail, $W_n-W_{n-1}=1, |S_n| - |S_{n-1}| = 1.$ \item When $S_{n-1}=0$ and $S_n=1$, it means the gambler bets on heads on the $n$-th toss and the coin turns up head. In this case $W_n=W_{n-1}+1$ and hence the observation holds. \item When $S_{n-1}=0$ and $S_n=-1$, it means the gambler bets on heads on the $n$-th toss and the coin turns up tail. In this case $W_n=W_{n-1}-1$ and hence the observation holds. \end{itemize} Now define the stopping times as follows. $\tau_0:=0$ and $$ \tau_j := \inf \left\{ n >\tau_{j-1} : S_n=-1, S_{n-1}=0\right\}, \; j \geq 1.$$ Also let $Y_n := W_n -|S_n|$ for all $n \geq 0$. The observation now implies that $$ Y_{\tau_j}\mathbbm{1}(\tau_j < \infty) = (Y_0-2j)\mathbbm{1}(\tau_j < \infty) = (k-2j)\mathbbm{1}(\tau_j < \infty).$$ Introduce the notation $P_{i,j} := \mathbb{P}(S_n=j, \text{ for some } n \mid S_0=i)$, where $i,j \in \mathbb{Z}$. From the theory of SRW, we have $P_{i,j}=P_{0,1}^{j-i}$, for $i<j$ and $P_{i,j}=P_{0,-1}^{i-j}$ for $i >j$, with $$ P_{0,1} = \left( \dfrac{p}{q}\right) \wedge 1, \; P_{0,-1} = \left( \dfrac{q}{p}\right) \wedge 1.$$ \textit{Strong Markov property} implies that $$ \mathbb{P}(\tau_j < \infty) = \mathbb{P}(\tau_1< \infty) (\mathbb{P}(\tau < \infty))^{j-1}, \; \forall \; j \geq 1,$$ where $$ \mathbb{P}(\tau_1 < \infty) = \mathbb{P}( S_{n-1}=0, S_n=-1, \text{ for some } n \geq 1 \mid S_0=0) = \mathbb{P}(S_n=-1 \text{ for some } n \geq 1 \mid S_0=0)=P_{0,-1},$$ and $$ \mathbb{P}(\tau < \infty) = \mathbb{P}( S_{n-1}=0, S_n=-1, \text{ for some } n \geq 1 \mid S_0=-1) = \mathbb{P}(S_n=-1, S_m=0 \text{ for some } n \geq m \geq 1 \mid S_0=-1).$$ To see why the above statements are true note that the first visit to $-1$, say at time $n$, after visiting $0$ at time $m \leq n$, must be preceded by a visit to $0$ at time $n-1$. By \textit{Strong Markov Property}, \begin{align*} \mathbb{P}(\tau < \infty) &= \mathbb{P}(S_n=-1, S_m=0 \text{ for some } n \geq m \geq 1 \mid S_0=-1)\\ & = \mathbb{P}(S_n=0, \text{ for some } n \mid S_0=-1)\mathbb{P}(S_n=-1, \text{ for some } n \mid S_0=0)=P_{0,1}P_{0,-1}.
\end{align*} Combining these observations together we get $$ \mathbb{P}(\tau_j < \infty) = P_{0,-1}^{j}P_{0,1}^{j-1}, \; \forall \; j \geq 1.$$ Now the stopping time of our interest is $T := \inf \left\{n \geq 1 : W_n \leq 0\right\} = \inf \left\{n \geq 1 : W_n = 0\right\}.$ Since $W_n = Y_n + |S_n| \geq Y_n$ and $Y_n$ is non-increasing in $n$, we have $$ T \geq \inf \left\{n \geq 1 : Y_n \leq 0\right\} = \tau_{k/2}\mathbbm{1}(k \text{ is even }) + \tau_{(k+1)/2}\mathbbm{1}(k \text{ is odd }) = \tau_{\lfloor \frac{k+1}{2}\rfloor}. $$ Now we shall consider these even and odd cases separately. \begin{itemize} \item[\textbf{Case 1} :] $k=2m-1$ for some $m \in \mathbb{N}$. In this case $T \geq \tau_m$. On the other hand, if $\tau_m < \infty$, then $Y_{\tau_m}=-1$ and by definition $S_{\tau_m}=-1$, which implies that $W_{\tau_m}=0$ and hence $T \leq \tau_m.$ This yields that $T=\tau_m$ and hence $$ \mathbb{P}(T < \infty) = \mathbb{P}(\tau_m < \infty) = P_{0,-1}^{m}P_{0,1}^{m-1}.$$ \item[\textbf{Case 2} :] $k=2m$ for some $m \in \mathbb{N}$. In this case $T \geq \tau_m$. On the other hand, if $\tau_m < \infty$, then $Y_{\tau_m}=0$ and by definition $S_{\tau_m}=-1$, which implies that $W_{\tau_m}=1$. Let $\theta_{m+1}$ be the first time the walk $S$ returns to $0$ after time $\tau_m$, provided that $\tau_m$ is finite. Clearly, $\theta_{m+1} < \tau_{m+1}$ and for any $\tau_m \leq l < \theta_{m+1}$, we have $Y_l=Y_{\tau_m}=0$ (since the process $Y$ is constant on the interval $[\tau_m, \tau_{m+1})$), and $W_l = |S_l| + Y_l = |S_l| >0$. Thus $T \geq \theta_{m+1}$. But $Y_{\theta_{m+1}}=0$ with $S_{\theta_{m+1}}=0$, implying that $W_{\theta_{m+1}}=0$. Hence $T=\theta_{m+1}.$ Apply \textit{Strong Markov Property} to get \begin{align*} \mathbb{P}(T < \infty) &= \mathbb{P}(\theta_{m+1}< \infty) = \mathbb{P}(\tau_m < \infty) \mathbb{P}(S_n=0, \text{ for some } n \mid S_0=-1) = P_{0,-1}^{m}P_{0,1}^{m-1}P_{-1,0} = P_{0,-1}^{m}P_{0,1}^{m}. \end{align*} \end{itemize} Combining these two cases together we can write $$ \mathbb{P}(\text{Ruin} \mid W_0=k) = P_{0,-1}^{\lceil k/2 \rceil} P_{0,1}^{\lfloor k/2 \rfloor} = \left( \dfrac{q}{p} \wedge 1\right)^{\lceil k/2 \rceil} \left( \dfrac{p}{q} \wedge 1\right)^{\lfloor k/2 \rfloor}= \begin{cases} \left( \dfrac{q}{p}\right)^{\lceil k/2 \rceil}, & \text{if } p>q, \\ 1, & \text{if } p=q, \\ \left( \dfrac{p}{q} \right)^{\lfloor k/2 \rfloor}, & \text{if } p<q. \end{cases}$$
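The closed-form ruin probability can be compared against a direct simulation of the betting strategy. The Python sketch below uses a truncated horizon, so paths not ruined within \texttt{max\_steps} tosses are counted as surviving, which makes the estimate slightly low; the parameters $p=0.6$, $k=3$, the horizon, the number of paths and the seed are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def ruin_prob_mc(p, k, reps=10_000, max_steps=2_000):
    W = np.full(reps, k, dtype=int)        # current fortunes
    S = np.zeros(reps, dtype=int)          # (#heads - #tails) so far
    ruined = np.zeros(reps, dtype=bool)
    for _ in range(max_steps):
        heads = rng.random(reps) < p
        win = heads == (S >= 0)            # bet on heads iff S_{n-1} >= 0
        alive = ~ruined
        W[alive] += np.where(win[alive], 1, -1)
        S[alive] += np.where(heads[alive], 1, -1)
        ruined |= (W == 0)
    return ruined.mean()

p, k = 0.6, 3
q = 1 - p
# formula (q/p)^{ceil(k/2)}, the p > q branch derived above
print(ruin_prob_mc(p, k), (q / p) ** ((k + 1) // 2))
\end{verbatim}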
1992-q2
Let X_1, X_2, \cdots be independent, with E(X_n) = 0 for all n. Let S_n = X_1 + X_2 + \cdots + X_n and suppose \sup_n E(|S_n|) < \infty. Let T be an a.s. finite stopping time with respect to the natural filtration of the X's. Show E(S_T) = 0.
We have independent mean-zero random variables $X_1, X_2, \ldots$ with $S_n = \sum_{k=1}^n X_k$ such that $\sup_{n \geq 1} \mathbb{E}|S_n| =C < \infty$. We shall use \textit{Ottaviani's Inequality}. For completeness we are adding a proof of it here. Take any $t,s >0$. Let $B_n:=(|S_1|, |S_2|, \ldots, |S_{n-1}| < t+s, |S_n| \geq t+s)$ for all $n \geq 1$. Clearly the events $B_n$'s are mutually disjoint. Using the fact that $B_k$ is independent of $S_n-S_k$ for all $n \geq k$, we have the following. \begin{align*} \mathbb{P}\left(\max_{k=1}^n |S_k| \geq t+s\right) = \mathbb{P}\left(\bigcup_{k=1}^n B_k\right) = \sum_{k=1}^n \mathbb{P}(B_k) &= \sum_{k=1}^n \mathbb{P}(B_k, |S_n| \geq t) + \sum_{k=1}^n \mathbb{P}(B_k, |S_n| < t) \\ & \leq \mathbb{P}(|S_n| \geq t) + \sum_{k=1}^{n-1} \mathbb{P}(B_k, |S_n| < t) \\ &\leq \mathbb{P}(|S_n| \geq t) + \sum_{k=1}^{n-1} \mathbb{P}(B_k, |S_n-S_k| > s) \\ &\leq \mathbb{P}(|S_n| \geq t) + \sum_{k=1}^{n-1} \mathbb{P}(B_k) \mathbb{P}(|S_n-S_k| > s) \\ & \leq \mathbb{P}(|S_n| \geq t) + \dfrac{2C}{s}\sum_{k=1}^{n-1} \mathbb{P}(B_k) \leq \mathbb{P}(|S_n| \geq t)+ \dfrac{2C\mathbb{P}(\max_{k=1}^n |S_k| \geq t+s)}{s}. \end{align*} Here the penultimate inequality is Markov's inequality together with $\mathbb{E}|S_n-S_k| \leq \mathbb{E}|S_n|+\mathbb{E}|S_k| \leq 2C$. Choose $s >2C$. Then \begin{align*} \left(1-\dfrac{2C}{s} \right) \mathbb{P} \left(\max_{k=1}^n |S_k| \geq t+s \right) = \left(1-\dfrac{2C}{s} \right) \mathbb{P} \left(\left(\max_{k=1}^n |S_k|- s\right)_{+} \geq t \right) \leq \mathbb{P}(|S_n| \geq t). \end{align*} Integrating over $t$ from $0$ to $\infty$, we get $$ \mathbb{E} \left( \max_{k=1}^n |S_k|- s\right)_{+} \leq \left(1-\dfrac{2C}{s}\right)^{-1} \mathbb{E}|S_n| \leq \dfrac{Cs}{s-2C}.$$ Take $s=3C$ and conclude that $$ \mathbb{E}\left( \max_{k=1}^n |S_k| \right) \leq \mathbb{E} \left( \max_{k=1}^n |S_k|- 3C\right)_{+} + 3C \leq 6C.$$ Taking $n \uparrow \infty$, we conclude that $\mathbb{E}(\sup_{n \geq 1} |S_n|) < \infty$. Now $S_n$ is a MG with respect to its canonical filtration and $T$ is a stopping time. Apply OST to get that $\mathbb{E}(S_{T \wedge n})=0$ for all $n$. Since $T<\infty$ almost surely, we have $S_{T \wedge n}$ converges to $S_T$ almost surely. Moreover, $$ \mathbb{E}\left( \max_{k\geq 1} |S_{T \wedge k}| \right) \leq \mathbb{E}\left( \max_{k\geq 1} |S_{k}| \right) < \infty,$$ and hence $\left\{S_{T \wedge n}, n \geq 1\right\}$ is uniformly integrable. Use \textit{Vitali's Theorem} to conclude that $\mathbb{E}(S_T)=0.$
1992-q3
Suppose X is N(0,1), \epsilon is N(0, \sigma^2) and is independent of X, and Y = X + \epsilon. A statistician observes the value of Y and must decide whether the (unobserved) inequality |Y - X| \le |X| is satisfied. Consider the following two classes of strategies: (a) For some c \ge 0 predict that |Y - X| \le |X| is satisfied if |Y| \ge c. (b) For some 0 < p < 1 predict that |Y - X| \le |X| is satisfied if P(|Y - X| \le |X| \mid Y) \ge p. Are these classes of strategies equivalent? If so, what is the relation between c and p?
As a first step note that $$ (X,Y) \sim N_2 \left[\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ 1 & 1+ \sigma^2 \end{pmatrix} \right], $$ and hence $$ X \mid Y \sim N \left[ \dfrac{Y}{1+\sigma^2},\dfrac{\sigma^2}{1+\sigma^2} \right]=: N(\nu Y, \gamma^2).$$ Note that almost surely, \begin{align*} \mathbb{P}\left( |Y-X| \leq |X| \mid Y \right) &= \mathbb{P}\left( |Y-X| \leq |X|, X >0 \mid Y \right) + \mathbb{P}\left( |Y-X| \leq |X|, X <0 \mid Y \right) \\ &= \mathbb{P} \left( 0 \leq Y \leq 2X \mid Y\right) + \mathbb{P} \left( 2X \leq Y \leq 0 \mid Y\right) \\ & =\mathbb{P}(X \geq Y/2 \mid Y)\mathbbm{1}(Y \geq 0) + \mathbb{P}(X \leq Y/2 \mid Y)\mathbbm{1}(Y \leq 0) \\ &= \bar{\Phi}\left( Y \dfrac{1/2-\nu}{\gamma}\right)\mathbbm{1}(Y \geq 0) + \Phi\left( Y \dfrac{1/2-\nu}{\gamma}\right)\mathbbm{1}(Y \leq 0) \\ &= \bar{\Phi}\left( \dfrac{(1-2\nu)|Y|}{2\gamma}\right) = \Phi\left( \dfrac{(2\nu-1)|Y|}{2\gamma}\right). \end{align*} Thus \begin{enumerate}[label=(\Roman*)] \item If $\nu > 1/2$, i.e., $\sigma^2 <1$, then $$ \Phi\left( \dfrac{(2\nu-1)|Y|}{2\gamma}\right) \geq p \iff |Y| \geq \dfrac{2\gamma \Phi^{-1}(p)}{2\nu-1}.$$ \item If $\nu < 1/2$, i.e., $\sigma^2 >1$, then $$ \Phi\left( \dfrac{(2\nu-1)|Y|}{2\gamma}\right) \geq p \iff |Y| \leq \dfrac{2\gamma \Phi^{-1}(1-p)}{1-2\nu}.$$ \item If $\nu=1/2$, i.e., $\sigma^2=1$, then $\Phi\left( \dfrac{(2\nu-1)|Y|}{2\gamma}\right)=\Phi(0)=1/2$ is a constant. \end{enumerate} Thus the two strategies are equivalent only if $\sigma^2<1$ and in that case $$ c = \dfrac{2\gamma \Phi^{-1}(p)}{2\nu-1} = \dfrac{2\sigma \sqrt{1+\sigma^2} \Phi^{-1}(p)}{1-\sigma^2}.$$
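The closed form for $\mathbb{P}(|Y-X| \leq |X| \mid Y)$ can be verified by simulating directly from the conditional law $X \mid Y=y \sim N(\nu y, \gamma^2)$. The Python sketch below does this for a few fixed values of $y$; the noise variance $\sigma^2=0.5$, the values of $y$, the sample size and the seed are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def Phi(z):                                      # standard normal cdf
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

sigma2 = 0.5
nu = 1.0 / (1.0 + sigma2)
gamma = sqrt(sigma2 / (1.0 + sigma2))
for y in (0.3, 1.0, 2.5):
    X = rng.normal(nu * y, gamma, size=400_000)  # X | Y = y ~ N(nu*y, gamma^2)
    emp = np.mean(np.abs(y - X) <= np.abs(X))
    print(y, round(emp, 4), round(Phi((2 * nu - 1) * abs(y) / (2 * gamma)), 4))
\end{verbatim}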
1992-q4
Suppose X_0 is uniformly distributed on [0, 1], and for n \ge 1 define X_n recursively by X_n = 2X_{n-1} - \lfloor 2X_{n-1} \rfloor where \lfloor x \rfloor denotes the greatest integer less than or equal to x. Let S_n = X_1 + X_2 + \cdots + X_n. Find constants a_n and b_n such that (S_n - a_n)/b_n converges in distribution as n \to \infty to the standard normal law. Justify your answer.
We have $X_0\sim \text{Uniform}([0,1])$ and $X_n=2X_{n-1}-\lfloor 2X_{n-1}\rfloor$, for all $n \geq 1$. Recall that $X_0 = \sum_{j \geq 1} 2^{-j}Y_{j-1}$, where $Y_0, Y_1, \ldots \stackrel{iid}{\sim} \text{Ber}(1/2)$. Thus $(Y_0,Y_1, \ldots)$ is the binary representation of $X_0$. Observe the following fact. $$ x=\sum_{j \geq 1} 2^{-j}x_{j-1}, \; x_i \in \left\{0,1\right\}, \; \forall \; i \geq 0 \Rightarrow 2x - \lfloor 2x \rfloor =\sum_{j \geq 1} 2^{-j}x_{j}.$$ Using this fact recursively, we get that $$ X_n = \sum_{j \geq 1} 2^{-j}Y_{n+j-1}, \; \; \forall \; n \geq 0.$$ Therefore, for all $n \geq 1$, \begin{align*} S_n = \sum_{k=1}^n X_k = \sum_{k=1}^n \sum_{j \geq 1} 2^{-j}Y_{k+j-1} &= \sum_{k=1}^n Y_k \left( \sum_{l=1}^k 2^{-l} \right) + \sum_{k >n} Y_k \left( \sum_{l=k-n+1}^{k} 2^{-l}\right) \\ &= \sum_{k=1}^n (1-2^{-k})Y_k + (1-2^{-n})\sum_{k >n} 2^{-(k-n)}Y_k \\ &= \sum_{k=1}^n (1-2^{-k})Y_k + (1-2^{-n})\sum_{l \geq 1} 2^{-l}Y_{l+n} \\ & = \sum_{k=1}^n (1-2^{-k})Y_k + (1-2^{-n})X_{n+1} = \sum_{k=1}^n Y_k + R_n, \end{align*} where $R_n = -\sum_{k=1}^n 2^{-k}Y_k + (1-2^{-n})X_{n+1}.$ Clearly, $ |R_n| \leq \sum_{k \geq 1} 2^{-k} + (1-2^{-n}) \leq 2.$ By CLT, we have $$ \dfrac{\sum_{k=1}^n Y_k - (n/2)}{\sqrt{n/4}} \stackrel{d}{\longrightarrow} N(0,1),$$ and hence $$ \dfrac{2(S_n-R_n)-n}{\sqrt{n}}\stackrel{d}{\longrightarrow} N(0,1).$$ On the other hand, $R_n/\sqrt{n}$ converges to $0$ in probability. Hence, using \textit{Slutsky's Theorem}, we conclude that $$ \dfrac{S_n - (n/2)}{\sqrt{n/4}} \stackrel{d}{\longrightarrow} N(0,1),$$ i.e., $a_n=n/2$ and $b_n = \sqrt{n/4}.$
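The normalization can be checked by simulation. Iterating the map $x \mapsto 2x - \lfloor 2x \rfloor$ in double precision is unreliable (the float's roughly $53$ stored bits are exhausted after about $53$ iterations), so the Python sketch below instead uses the binary-digit representation derived above, truncated at $J=53$ bits; $n$, $J$, the number of replications and the seed are arbitrary choices for illustration.
\begin{verbatim}
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
n, J, reps = 1_000, 53, 5_000
w = 2.0 ** -np.arange(1, J + 1)          # weights 2^{-1}, ..., 2^{-J}
Z = np.empty(reps)
for i in range(reps):
    Y = rng.integers(0, 2, size=n + J)   # i.i.d. fair bits Y_0, Y_1, ...
    # X_k = sum_{j>=1} 2^{-j} Y_{k+j-1}, truncated at J bits, for k = 1, ..., n
    X = sliding_window_view(Y, J)[1:n + 1] @ w
    Z[i] = (X.sum() - n / 2) / np.sqrt(n / 4)
print(round(Z.mean(), 3), round(Z.var(), 3))   # should be close to 0 and 1
\end{verbatim}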
1992-q5
Let \{Z_n\} be independent and identically distributed, with mean 0 and finite variance. Let T_n be a sequence of random variables such that T_n \to \alpha a.s. as n \to \infty, for some constant \alpha with |\alpha| < 1. Let X_0 = 0, and define X_n recursively by X_{n+1} = T_n X_n + Z_{n+1} Prove that n^{-1} \sum_{i=1}^n X_i \to 0 a.s. as n \to \infty.
We have $Z_1,Z_2, \ldots$ independent and identically distributed with mean $0$ and finite variance, say $\sigma^2$. $T_n$ converges to $\alpha \in (-1,1)$ almost surely. $X_0=0$ and $X_{n+1}=T_nX_n+Z_{n+1}, \; \forall \; n \geq 0$. Opening up the recursion we get \begin{align*} X_{n+1} = Z_{n+1}+T_nX_n = Z_{n+1}+T_n(Z_n+T_{n-1}X_{n-1}) &= Z_{n+1}+T_nZ_n + T_nT_{n-1}X_{n-1} \\ &= \cdots \\ &= Z_{n+1} + T_nZ_n + T_nT_{n-1}Z_{n-1} + \cdots + T_nT_{n-1}\cdots T_1Z_1. \end{align*} In other words, $$ X_n = \sum_{k=1}^{n-1} Z_k U_{k:n-1} + Z_n, \; \forall \; n \geq 1,$$ where $U_{i:j}:= \prod_{m=i}^j T_m.$ As a first step, we shall show that \begin{align*} \dfrac{1}{n}\sum_{k=1}^n X_k = \dfrac{1}{n} \sum_{k=1}^n \left[ \sum_{l=1}^{k-1} Z_l U_{l:k-1} + Z_k \right] &= \dfrac{1}{n} \sum_{k=1}^{n-1} Z_k \left[ 1+T_k+T_kT_{k+1}+\cdots+T_k \cdots T_{n-1}\right] + \dfrac{Z_n}{n} \\ &= \dfrac{1}{n} \sum_{k=1}^{n-1} Z_k \left[ 1+\alpha+\alpha^2+\cdots+\alpha^{n-k}\right] + \dfrac{Z_n}{n} + R_n \\ &= \dfrac{1}{n} \sum_{k=1}^{n} Z_k \left[ 1+\alpha+\alpha^2+\cdots+\alpha^{n-k}\right] + R_n, \end{align*} where we shall show that $R_n$ converges almost surely to $0$. Take $\omega$ such that $T_k(\omega) \to \alpha \in (-1,1)$ and $n^{-1}\sum_{k=1}^n |Z_k(\omega)| \to \mathbb{E}|Z|$. Fix $\varepsilon \in (0, (1-|\alpha|)/2)$ and get $N(\omega)\in \mathbb{N}$ such that $|T_n(\omega)-\alpha| \leq \varepsilon$ for all $n > N(\omega)$. Then $|T_n(\omega)| \leq |\alpha|+\varepsilon =\gamma <1,$ for all $n > N(\omega)$. We shall use the following observation. For any $a_1,\ldots, a_m, b_1, \ldots, b_m \in (-C,C)$, we have \begin{align*} \Bigg \rvert \prod_{k=1}^m a_k - \prod_{k=1}^m b_k\Bigg \rvert \leq \sum_{k=1}^m |a_1\ldots a_kb_{k+1}\ldots b_m - a_1\ldots a_{k-1}b_{k}\ldots b_m| \leq mC^{m-1} \max_{k=1}^m |a_k-b_k|. \end{align*} Using this we get for large enough $n$, \begin{align*} \Bigg \rvert \dfrac{1}{n} \sum_{k=N(\omega)+1}^{n-1} Z_k \left[ 1+T_k+T_kT_{k+1}+\cdots+T_k \cdots T_{n-1}\right] &+ \dfrac{Z_n}{n} - \dfrac{1}{n} \sum_{k=N(\omega)+1}^{n} Z_k \left[ 1+\alpha+\alpha^2+\cdots+\alpha^{n-k}\right]\Bigg \rvert \\ &\leq \dfrac{1}{n}\sum_{k=N(\omega)+1}^{n-1} |Z_k| \Bigg \rvert \sum_{l=k}^{n-1} (U_{k:l}-\alpha^{l-k+1}) \Bigg \rvert \\ & \leq \dfrac{1}{n}\sum_{k=N(\omega)+1}^{n-1} \sum_{l=k}^{n-1} |Z_k||U_{k:l}-\alpha^{l-k+1}| \\ & \leq \dfrac{1}{n}\sum_{k=N(\omega)+1}^{n-1} \sum_{l=k}^{n-1} |Z_k|(l-k+1)\gamma^{l-k}\varepsilon \\ & \leq \dfrac{A \varepsilon}{n}\sum_{k=N(\omega)+1}^{n-1} |Z_k|, \end{align*} where $A:= \sum_{k \geq 1} k\gamma^{k-1}<\infty$ as $\gamma<1$. We have omitted $\omega$ from the notations for the sake of brevity. On the otherhand, \begin{align*} \Bigg \rvert \dfrac{1}{n} \sum_{k=1}^{N(\omega)} Z_k \left[ 1+T_k+T_kT_{k+1}+\cdots+T_k \cdots T_{n-1}\right] - \dfrac{1}{n} \sum_{k=1}^{N(\omega)} Z_k \left[ 1+\alpha+\alpha^2+\cdots+\alpha^{n-k}\right]\Bigg \rvert \\ \leq \dfrac{1}{n}\sum_{k=1}^{N(\omega)} |Z_k| \left(S_k + \dfrac{1}{1-|\alpha|}\right), \end{align*} where $S_k=1+\sum_{l \geq k} \left( \prod_{m=k}^l |T_m|\right)$ which is a convergent series since $T_k(\omega)\to \alpha \in (-1,1)$. This follows from root test for series convergence, since $\left( \prod_{m=k}^l |T_m|\right)^{1/(l-k)} \to |\alpha| <1$. 
Combining these we get $$ |R_n(\omega)| \leq \dfrac{A \varepsilon}{n}\sum_{k=N(\omega)+1}^{n-1} |Z_k(\omega)| + \dfrac{1}{n}\sum_{k=1}^{N(\omega)} |Z_k(\omega)| \left(S_k(\omega) + \dfrac{1}{1-|\alpha|}\right),$$ which implies that $$ \limsup_{n \to \infty} |R_n(\omega)| \leq A\varepsilon \mathbb{E}|Z|.$$ Since the above holds true for all $\varepsilon >0$, we have proved that $R_n(\omega) \to 0$ and hence consequently $R_n$ converges to $0$ almost surely. It is now enough to prove that $$ \dfrac{1}{n} \sum_{k=1}^{n} Z_k \left[ 1+\alpha+\alpha^2+\cdots+\alpha^{n-k}\right] = \dfrac{1}{n} \sum_{k=1}^{n} \dfrac{1-\alpha^{n-k+1}}{1-\alpha}Z_k \stackrel{a.s.}{\longrightarrow} 0.$$ Since $n^{-1}\sum_{k=1}^n Z_k$ converges to $\mathbb{E}Z=0$ almost surely, it is enough to prove that for $\alpha \neq 0$, $$ \dfrac{1}{n} \sum_{k=1}^{n} \alpha^{n-k}Z_k = \dfrac{(\operatorname{sgn}(\alpha))^n}{|\alpha|^{-n}n} \sum_{k=1}^{n} \alpha^{-k}Z_k \stackrel{a.s.}{\longrightarrow} 0.$$ Since $n|\alpha|^{-n} \uparrow \infty$, we employ \textit{Kronecker's Lemma} and it is enough to prove that $\sum_{n \geq 1} n^{-1}|\alpha|^n \alpha^{-n}Z_n$ converges almost surely. Since $Z_n$'s are independent mean $0$ variables with finite second moment, $M_n=\sum_{k = 1}^n k^{-1}|\alpha|^k \alpha^{-k}Z_k$ is a $L^2$-MG with respect to canonical filtration of $Z$ and predictable compensator $$ \langle M \rangle_{n} = \sum_{k=1}^n k^{-2}\mathbb{E}(Z_k^2) \leq \mathbb{E}(Z^2) \sum_{k \geq 1} k^{-2}<\infty,\; \forall \; n \geq 1.$$ Thus $M_n$ is a $L^2$-bounded MG and therefore converges almost surely. This completes our proof.
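A quick simulation of the recursion illustrates the result. In the Python sketch below, the noise is taken to be standard normal and $T_n = \alpha + 1/\sqrt{n}$ is one concrete (non-random) sequence converging to $\alpha$; these choices, together with $n$ and the seed, are arbitrary and only for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 100_000, 0.7
Z = rng.normal(size=n + 1)                         # i.i.d. mean-zero noise
T = alpha + 1.0 / np.sqrt(np.arange(1, n + 1))     # T_n -> alpha
X = np.zeros(n + 1)
for i in range(n):
    X[i + 1] = T[i] * X[i] + Z[i + 1]              # X_{n+1} = T_n X_n + Z_{n+1}
print(X[1:].mean())                                # sample mean of X_1..X_n; should be near 0
\end{verbatim}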
1992-q6
Let X_1, X_2, \cdots be independent normal variables, with E(X_n) = \mu_n and Var(X_n) = \sigma_n^2. Show that \sum_n X_n^2 converges a.s. or diverges a.s. according as \sum_n(\mu_n^2 + \sigma_n^2) converges or diverges.
We have $X_n \stackrel{ind}{\sim} N(\mu_n, \sigma_n^2).$ Then $\mathbb{E}(X_n^2) = \mu_n^2+\sigma_n^2,$ for all $n \geq 1$. Suppose first that $\sum_{n \geq 1} (\mu_n^2 + \sigma_n^2)< \infty$. This implies that $$\mathbb{E} \left( \sum_{n \geq 1} X_n^2 \right) = \sum_{n \geq 1} \mathbb{E}(X_n^2) = \sum_{n \geq 1} (\mu_n^2 + \sigma_n^2)< \infty,$$ and hence $\sum_{n \geq 1} X_n^2$ converges almost surely. Now suppose that $\sum_{n \geq 1} (\mu_n^2 + \sigma_n^2)= \infty$. Since $(\sum_{n \geq 1} X_n^2 < \infty)$ is in the tail $\sigma$-algebra of the collection $\left\{X_1, X_2, \ldots\right\}$, we can apply \textit{Kolmogorov $0-1$ Law} and conclude that $\mathbb{P}(\sum_{n \geq 1} X_n^2 < \infty) \in \left\{0,1\right\}$. To prove that $\sum_{n \geq 1} X_n^2$ diverges almost surely, it is therefore enough to show that $\sum_{n \geq 1} X_n^2$ does not converge almost surely. We go by contradiction. Suppose $\sum_{n \geq 1} X_n^2$ converges almost surely. Apply \textit{Kolmogorov three series theorem} to conclude that $$ \sum_{n \geq 1}\mathbb{P}(X_n^2 \geq 1) < \infty, \; \sum_{n \geq 1} \mathbb{E}(X_n^2\mathbbm{1}(X_n^2 \leq 1)) < \infty.$$ Therefore, $$ \sum_{n \geq 1} \mathbb{E}(X_n^2\wedge 1) \leq \sum_{n \geq 1} \mathbb{E}(X_n^2\mathbbm{1}(X_n^2 \leq 1)) + \sum_{n \geq 1}\mathbb{P}(X_n^2 \geq 1) < \infty.$$ On the other hand, if $Z \sim N(0,1)$, then $$ \infty >\sum_{n \geq 1} \mathbb{E}(X_n^2\wedge 1) = \mathbb{E} \left[\sum_{n \geq 1} (X_n^2 \wedge 1) \right] = \mathbb{E} \left[\sum_{n \geq 1} ((\sigma_n Z+\mu_n)^2 \wedge 1) \right],$$ and hence \begin{align*} \sum_{n \geq 1} ((\sigma_n z+\mu_n)^2 \wedge 1) < \infty, \; a.e.[z] &\Rightarrow \sum_{n \geq 1} (\sigma_n z+\mu_n)^2 < \infty, \; a.e.[z] \\ &\Rightarrow \sum_{n \geq 1} (\sigma_n z+\mu_n)^2 + \sum_{n \geq 1} (-\sigma_n z+\mu_n)^2 < \infty, \; a.e.[z] \\ &\Rightarrow z^2\sum_{n \geq 1} \sigma_n^2 + \sum_{n \geq 1} \mu_n^2 < \infty, \; a.e.[z] \\ &\Rightarrow \sum_{n \geq 1} (\mu_n^2+\sigma_n^2)< \infty, \end{align*} which gives a contradiction.
1992-q7
Let X_1, X_2, \cdots be independent, and for n \ge 1 let S_n = X_1 + X_2 + \cdots + X_n. Let \{a_n\} and \{b_n\} be sequences of constants so that P(X_n \ge a_n \ i.o.) = 1 and P(S_{n-1} \ge b_n) \ge \epsilon > 0 for all n. Show P(S_n \ge a_n + b_n \ i.o.) \ge \epsilon
We have $X_1, X_2, \ldots$ to be independent sequence. Let $\mathcal{G}_n = \sigma(X_k : k \geq n)$. Then by \textit{Kolmogorov $0-1$ Law}, the $\sigma$-algebra $\mathcal{G}_{\infty} = \bigcap_{n \geq 1} \mathcal{G}_n$ is trivial. $S_n = \sum_{k=1}^n X_k$ for all $n \geq 1$ and $\left\{a_n\right\}_{n \geq 1}$, $\left\{b_n\right\}_{n \geq 1}$ are two sequences of constants. Let $B_n := \bigcup_{m \geq n} (S_m \geq a_m+b_m).$ Then $$ (S_n \geq a_n+b_n \text{ i.o.}) = B = \bigcap_{n \geq 1} B_n.$$ We are given that $\mathbb{P}(S_{n-1} \geq b_n) \geq \varepsilon >0$ for all $n \geq 2$. Hence, for all $n \geq 2$, \begin{align*} \mathbb{E}\left(\mathbbm{1}_{B_{n}} \mid \mathcal{G}_n \right) = \mathbb{P}(B_{n} \mid \mathcal{G}_n) \geq \mathbb{P}(S_{n} \geq a_n+b_n \mid \mathcal{G}_n) \stackrel{(i)}{=} \mathbb{P}(S_{n} \geq a_n+b_n \mid X_n) & \geq \mathbb{P}(S_{n-1} \geq b_n, X_n \geq a_n \mid X_n) \\ &= \mathbbm{1}(X_n \geq a_n)\mathbb{P}(S_{n-1} \geq b_n \mid X_n) \\ &=\mathbbm{1}(X_n \geq a_n)\mathbb{P}(S_{n-1} \geq b_n) \geq \varepsilon \mathbbm{1}(X_n \geq a_n), \end{align*} where $(i)$ follows from the fact that $(S_n,X_n) \perp\!\!\perp (X_{n+1},X_{n+2}, \ldots)$. Since, $B_n \downarrow B$, we use \textit{Levy's Downward Theorem} to conclude that $\mathbb{E}\left(\mathbbm{1}_{B_{n}} \mid \mathcal{G}_n \right) \stackrel{a.s.}{\longrightarrow} \mathbb{E}\left(\mathbbm{1}_{B} \mid \mathcal{G}_{\infty} \right) = \mathbb{P}(B),$ since $\mathcal{G}_{\infty}$ is trivial. Therefore, $$ \mathbb{P}(B) \geq \varepsilon \limsup_{n \to \infty} \mathbbm{1}(X_n \geq a_n) = \varepsilon, \; \text{a.s.},$$ since $X_n \geq a_n$ infinitely often with probability $1$. Thus $$ \mathbb{P}(S_n \geq a_n+b_n \text{ i.o.}) = \mathbb{P}(B) \geq \varepsilon.$$
1993-q1
I have 3 empty boxes. I throw balls into the boxes, uniformly at random and independently, until the first time that no two boxes contain the same number of balls. Find the expected number of balls thrown.
Let $X_n := (X_{n,1}, X_{n,2}, X_{n,3})$ be the distribution of the balls in the three boxes after $n$ throws. Define the following events. $A_n$ be the event that after $n$ throws no two boxes have the same number of balls, $B_n$ be the event that exactly two boxes have the same number of balls and $C_n$ be the event that all of the three boxes have the same number of balls. Then define $$ T:= \inf \left\{ n \geq 0 : X_{n,1},X_{n,2},X_{n,3} \text{ are pairwise distinct}\right\},$$ the first time no two boxes contain the same number of balls. Our first step would be to show that $\mathbb{E}(T)< \infty.$ Define the $\sigma$-algebra $\mathcal{F}_n :=\sigma(X_{m,i} : 1\leq i \leq 3, \; m \leq n).$ It is easy to see that for any $(k_1,k_2,k_3) \in \mathbb{Z}_{\geq 0}^3$, there exists $(l_1,l_2,l_3) \in \left\{0,1,2,3\right\}^3$ such that $l_1+l_2+l_3=3$ and $k_i+l_i$'s are all distinct. \begin{itemize} \item If $k_1=k_2=k_3$, take $l_1=2,l_2=1,l_3=0$. \item If $k_1=k_2 >k_3$, take $l_1=3, l_2=l_3=0.$ If $k_1=k_2<k_3$ take $l_1=0, l_2=1, l_3=2$. Similar observations can be made for $k_1\neq k_2=k_3$ and $k_1=k_3 \neq k_2.$ \item If $k_i$'s are all distinct, take $l_1=l_2=l_3=1$. \end{itemize} This shows that $ \mathbb{P}(A_{n+3} \mid X_n) \geq (1/3)^3 =:\delta>0$, almost surely. Therefore, $$ \mathbb{P}(T > n+3 \mid \mathcal{F}_n) = \mathbb{P}(T > n+3 \mid \mathcal{F}_n)\mathbbm{1}(T>n) \leq \mathbb{P}(A_{n+3}^c \mid \mathcal{F}_n)\mathbbm{1}(T>n) \leq (1-\delta)\mathbbm{1}(T>n), $$ and hence $\mathbb{P}(T>n+3)\leq (1-\delta)\mathbb{P}(T>n)$ for all $n \geq 0$. Using induction, we conclude that $\mathbb{P}(T>3n)\leq (1-\delta)^n,$ and hence $\mathbb{P}(T>n)\leq (1-\delta)^{\lfloor n/3 \rfloor}.$ Hence, $$ \mathbb{E}(T) = \sum_{n \geq 0} \mathbb{P}(T>n) \leq \sum_{n \geq 0} (1-\delta)^{\lfloor n/3 \rfloor} =\dfrac{3}{\delta}< \infty.$$ Let us now evaluate $\mathbb{E}(T)$. It is easy to see that $T \geq 3$ almost surely and $\mathbb{P}(T=3)=\mathbb{P}(A_3)=2/3$. If $C_3$ happens, i.e. all the boxes have one ball each after $3$ throws, then we need another $T^{\prime}$ many throws, where $T^{\prime}$ has the same distribution as $T$ (each throw is uniform regardless of the current contents, so starting from $(1,1,1)$ the pattern of ties evolves exactly as it does from $(0,0,0)$). Note that $\mathbb{P}(C_3)=6/27.$ The only way $B_3$ can happen is when one box has all the three balls after three throws. In that case, the first time we place a ball in one of the two empty boxes, all the boxes have different numbers of balls. So we need $G$ many more throws where $G \sim \text{Geometric}(2/3)$, conditioned on $B_3$ happening. Note that $\mathbb{P}(B_3)=3/27$. Combining these observations we can write, \begin{align*} T = 3\mathbbm{1}_{A_3} + (3+T^{\prime})\mathbbm{1}_{C_3} + (3+G)\mathbbm{1}_{B_3} &\Rightarrow \mathbb{E}(T) = 3\mathbb{P}(A_3) + (3+\mathbb{E}(T))\mathbb{P}(C_3)+(3+\mathbb{E}(\text{Geo}(2/3)))\mathbb{P}(B_3) \\ & \Rightarrow \mathbb{E}(T) = 2+(3+\mathbb{E}(T))(2/9)+(3+3/2)(1/9) \\ & \Rightarrow \mathbb{E}(T) = 3 + \dfrac{2}{9}\mathbb{E}(T) + \dfrac{1}{6} \Rightarrow \mathbb{E}(T) = \dfrac{57}{14}, \end{align*} where the last implication is valid since $\mathbb{E}(T)< \infty$.
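The value $\mathbb{E}(T)=57/14 \approx 4.071$ is easy to confirm by simulation; the Python sketch below (the number of replications and the seed are arbitrary) throws balls uniformly into three boxes until all three counts differ.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def throws_until_all_distinct():
    counts = [0, 0, 0]
    t = 0
    while len(set(counts)) < 3:        # some two boxes still hold the same number
        counts[rng.integers(3)] += 1
        t += 1
    return t

reps = 100_000
est = np.mean([throws_until_all_distinct() for _ in range(reps)])
print(est, 57 / 14)
\end{verbatim}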
1993-q2
Let \( X_1, X_2, \ldots \) be independent random variables with the distributions described below. Let \( S_n = X_1 + X_2 + \cdots + X_n \). In each of parts a) through d), find constants \( a_n \) and \( b_n \to \infty \) such that \( (S_n - a_n)/b_n \) converges in law as \( n \to \infty \) to a non-degenerate limit, and specify the limit distribution. If no such constants exist, say so. Prove your answers. a) \( P(X_k = k) = P(X_k = -k) = 1/2k^2 \)\\ \( P(X_k = 0) = 1 - 1/k^2 \) b) For some \( 0 < \alpha < \infty \),\\ \( P(X_k = k^\alpha) = P(X_k = -k^\alpha) = 1/2k^{\alpha/2} \)\\ \( P(X_k = 0) = 1 - 1/k^{\alpha/2} \) c) The \( X_k \)'s are identically distributed, with probability density function\\ \( f(x) = \frac{1}{\pi} \frac{\sigma}{\sigma^2 + (x - \mu)^2}, \quad -\infty < x < \infty \)\\ for some \( -\infty < \mu < \infty \) and \( \sigma > 0 \). d) \( P(X_k = 2^{k/2}) = P(X_k = -2^{k/2}) = 1/2^{k+1} \)\\ \( P(X_k = 1) = P(X_k = -1) = (1 - 1/2^k)/2 \)
\begin{enumerate}[label=(\alph*)] \item In this case $$ \sum_{n \geq 1} \mathbb{P}(X_n \neq 0) = \sum_{n \geq 1} n^{-2} < \infty,$$ and hence by \textit{Borel-Cantelli Lemma I}, we conclude that $\mathbb{P}(X_n = 0, \; \text{eventually for all } n) =1$. This implies that $S_n$ converges almost surely as $n \to \infty$. Now suppose there exists constants $a_n,b_n$ such that $b_n \to \infty$ and $(S_n-a_n)/b_n \stackrel{d}{\longrightarrow} G$, where $G$ is some non-degenerate distribution. Since $S_n$ converges almost surely, we have $S_n/b_n \stackrel{a.s.}{\longrightarrow} 0$ and by \textit{Slutsky's Theorem}, we get that a (non)-random sequence $-a_n/b_n$ converges weakly to non-degenerate $G$, which gives a contradiction. \item For $\alpha >2$, we have $$ \sum_{n \geq 1} \mathbb{P}(X_n \neq 0) = \sum_{n \geq 1} n^{-\alpha/2} < \infty,$$ and hence following the arguments presented in (a), we can conclude that there is no such $a_n,b_n$. Now consider the case $0 < \alpha <2$. Here we have $\mathbb{E}(X_n)=0$ and $$ s_n^2 = \sum_{k=1}^n \operatorname{Var}(X_k) = \sum_{k=1}^n \mathbb{E}(X_k^2) = \sum_{k=1}^n k^{3\alpha/2} \sim \dfrac{n^{1+3\alpha/2}}{1+3\alpha/2} \longrightarrow \infty,$$ where we have used the fact that $\sum_{k=1}^n k^{\gamma} \sim (\gamma+1)^{-1}n^{\gamma+1}$, for $\gamma >-1$. In this case $n^{\alpha}/s_n \sim n^{\alpha/4 - 1/2}$ converges to $0$ and hence for any $\varepsilon >0$, we can get $N(\varepsilon)\in \mathbb{N}$ such that $|X_n| \leq n^{\alpha} < \varepsilon s_n$, almost surely for all $n \geq N(\varepsilon)$. Therefore, $$ \dfrac{1}{s_n^2} \sum_{k=1}^n \mathbb{E}\left[ X_k^2, |X_k|\geq \varepsilon s_n\right] = \dfrac{1}{s_n^2} \sum_{k=1}^{N(\varepsilon)} \mathbb{E}\left[ X_k^2, |X_k|\geq \varepsilon s_n\right] \leq \dfrac{1}{s_n^2} \sum_{k=1}^{N(\varepsilon)} \mathbb{E}\left[ X_k^2\right] \longrightarrow 0,$$ implying that \textit{Lindeberg condition} is satisfied. Therefore, $ S_n/s_n$ converges weakly to standard Gaussian distribution. In other words, $$ \dfrac{S_n}{n^{1/2 + 3\alpha/4}} \stackrel{d}{\longrightarrow} N \left(0,\dfrac{1}{1+3\alpha/2}\right).$$. Finally consider the case $\alpha=2$. In light of the fact that $S_n$ is symmetric around $0$, We focus on finding the asymptotics of $S_n/b_n$ for some scaling sequence $\left\{b_n\right\}$ with $b_n \uparrow \infty$, to be chosen later. Fix $t \in \mathbb{R}$. Since, the variables $X_k$'s have distributions which are symmetric around $0$, we get, $$ \mathbb{E} \exp\left[ it\dfrac{S_n}{b_n}\right] = \prod_{k=1}^n \mathbb{E} \exp\left[ it\dfrac{X_k}{b_n}\right] = \prod_{k=1}^n \mathbb{E} \cos \left[ t\dfrac{X_k}{b_n}\right] = \prod_{k=1}^n \left[ \dfrac{1}{k}\cos \left( \dfrac{tk^{2}}{b_n}\right) + \left(1-\dfrac{1}{k}\right)\right] = \prod_{k=1}^n g_{n,k}(t), $$ where $$ g_{n,k}(t) := \dfrac{1}{k}\cos \left( \dfrac{tk^{2}}{b_n}\right) + \left(1-\dfrac{1}{k}\right), \; \forall \; 1 \leq k \leq n.$$ Now observe that, \begin{align*} 0 \leq 1-g_{n,k}(t) &= \dfrac{1}{k}\left\{1-\cos \left( \dfrac{tk^{2}}{b_n}\right)\right\} = \dfrac{2}{k}\sin^2 \left( \dfrac{tk^{2}}{2b_n}\right) \leq \dfrac{2}{k} \left( \dfrac{tk^{2}}{2b_n}\right)^2 \leq \dfrac{t^2k^{3}}{2b_n^2}. \end{align*} Take $b_n=n^{2}$. Then \begin{align*} \sum_{k=1}^n (1-g_{n,k}(t)) &= 2\sum_{k=1}^n \dfrac{1}{k}\sin^2 \left( \dfrac{tk^{2}}{2n^{2}}\right) = \dfrac{2}{n} \sum_{k=1}^n \dfrac{n}{k}\sin^2 \left( \dfrac{tk^{2}}{2n^{2}}\right)= 2 \int_{0}^1 \dfrac{1}{x} \sin^2 \left(\dfrac{tx^{2}}{2}\right) \, dx + o(1) \; \text{ as } n \to \infty. 
\end{align*} Note that the integral above is finite since $$ 2\int_{0}^1 \dfrac{\sin^2(tz^{2}/2)}{z} \, dz \leq 2 \int_{0}^1 \dfrac{t^2z^{4}}{4z} \, dz = 2\int_{0}^1 \dfrac{t^2z^{3}}{4} \, dz = \dfrac{t^2}{8} < \infty.$$ Also note that $$ \max_{1 \leq k \leq n} |1-g_{n,k}(t)| \leq \max_{1 \leq k \leq n} \dfrac{t^2k^{3}}{n^{4}} \leq \dfrac{t^2 n^{3}}{n^{4}} \to 0,$$ and hence $$ \sum_{k=1}^n |1-g_{n,k}(t)|^2 \leq \max_{1 \leq k \leq n} |1-g_{n,k}(t)| \sum_{k=1}^n (1-g_{n,k}(t)) \longrightarrow 0.$$ Therefore, using ~\cite[Lemma 3.3.31]{dembo}, we conclude that $$ \mathbb{E} \exp\left(\dfrac{itS_n}{n^{2}} \right) \longrightarrow \exp \left[-2 \int_{0}^1 \dfrac{1}{x} \sin^2 \left(\dfrac{tx^{2}}{2}\right) \, dx \right] =: \psi (t), \; \forall \; t \in \mathbb{R}.$$ To conclude convergence in distribution for $S_n/n^{2}$, we need to show that $\psi$ is continuous at $0$ (see ~\cite[Theorem 3.3.18]{dembo}). But it follows readily from DCT since $\sup_{-1 \leq t \leq 1} \sin^2(tz^{2}/2)/z \leq z^{3}$, which is integrable on $[0,1]$. So by \textit{Levy's continuity Theorem}, $S_n/n^{2}$ converges in distribution to $G$ with characteristic function $\psi$. \item In this case $X_i$'s are \textit{i.i.d.} from Cauchy distribution with location parameter $\mu$ and scale parameter $\sigma$. Hence, $\mathbb{E}\exp(it\sigma^{-1}(X_k-\mu)) = \exp(-|t|)$ for all $t \in \mathbb{R}$ and for all $k$. Hence, $$ \mathbb{E} \exp \left( \dfrac{it(S_n-n\mu)}{\sigma}\right) = \exp(-n|t|) \Rightarrow \mathbb{E} \exp \left( \dfrac{it(S_n-n\mu)}{n\sigma}\right) = \exp(-|t|), \; \forall \; t \in \mathbb{R}.$$ Thus $(S_n-n\mu)/(n\sigma) \sim \text{Cauchy}(0,1)$ for all $n \geq 1$ and thus $a_n=n\mu$ and $b_n=n\sigma$ work, with the limiting distribution being the standard Cauchy. \item We have $\mathbb{P}(X_n=2^{n/2})=\mathbb{P}(X_n=-2^{n/2})=2^{-n-1}$ and $\mathbb{P}(X_n=1)=\mathbb{P}(X_n=-1)=\frac{1}{2}(1-2^{-n})$. Note that $$ \sum_{n \geq 1} \mathbb{P}(|X_n| \neq 1) = \sum_{n \geq 1} 2^{-n} < \infty,$$ and hence with probability $1$, $|X_n|=1$ eventually for all $n$. Let $Y_n = \operatorname{sgn}(X_n)$. Clearly, $Y_i$'s are \textit{i.i.d.} $\text{Uniform}(\left\{-1,+1\right\})$ variables and hence $ (\sum_{k=1}^n Y_k)/\sqrt{n}$ converges weakly to a standard Gaussian variable. By previous argument, $\mathbb{P}(X_n=Y_n, \; \text{eventually for } n)=1$ and hence $n^{-1/2}\left(S_n - \sum_{k=1}^n Y_k \right) \stackrel{p}{\longrightarrow}0.$ Use \textit{Slutsky's Theorem} to conclude that $S_n/\sqrt{n}$ converges weakly to standard Gaussian distribution. \end{enumerate}
1993-q3
A deck consists of \( n \) cards. Consider the following simple shuffle: the top card is picked up and inserted into the deck at a place picked uniformly at random from the \( n \) available places (\( n-2 \) places between the remaining \( n-1 \) cards, one place on top, and one on the bottom). This shuffle is repeated many times. Let \( T \) be the number of shuffles required for the original bottom card to come to the top of the deck. And let \( V = T+1 \) be the number of shuffles required for the original bottom card to come to the top and be reinserted into the deck. a) Show that after \( V \) shuffles all \( n! \) arrangements of cards in the deck are equally likely. b) Find \( E(V) \), as a function of \( n \). For large \( n \), which of the terms below best represents the order of magnitude of \( E(V) \)? Explain. \( n^3 \quad n^2 \log n \quad n^2 \quad n \log n \quad n \quad \sqrt{n \log n} \quad \sqrt{n} \) c) Repeat part b), with \( E(V) \) replaced by \( SD(V) \). d) Show that \( P(V > v) \leq n[(n-1)/n]^v \) for \( v > n \). e) Show that \( P(V > n \log n + cn) \leq e^{-c} \) for all \( c > 0 \).
\begin{enumerate}[label=(\alph*)] \item Without loss of generality, we assume that initially the deck is arranged as $1, \ldots, n$ with the top card being the card labelled $1$. In the following discussion, one unit of time will refer to one shuffle. We define $n$ stopping times. For $k=1, \ldots, n-1$, let $\tau_k$ be the first time there are $k$ cards (irrespective of their labels) in the deck below the card labelled $n$. It is easy to see that $\tau_1 < \cdots < \tau_{n-1}=T.$ Let $A_k := \left\{a_{k,1}, \ldots, a_{k,k}\right\}$ be the (random) set of cards which are below card labelled $n$ at the time $\tau_k$, with $a_{k,1}$ being the label of the card just below the card labelled $n$ and so on. We claim that $(a_{k,1}, \ldots, a_{k,k}) \mid A_k \sim \text{Uniform}(S_{A_k})$ where $S_{B}$ is the collection of all permutations of elements of a set $B$. This claim will be proved by induction. The base case $k=1$ is trivially true. Assuming that it holds true for $k <m \leq n-1$, we condition on $A_m$ and $(a_{m-1,1}, \ldots, a_{m-1,m-1})$. The conditioning fixes the only card in the set $A_m \setminus A_{m-1}$ and by the shuffle dynamics, it is placed (conditionally) uniformly at random in one of the $m$ places; $m-2$ places between the cards of $A_{m-1}$, one place just below the card labelled $n$ and one at the bottom. Clearly by induction hypothesis, this makes $$(a_{m,1}, \ldots, a_{m,m}) \mid \left(A_m, (a_{m-1,1}, \ldots, a_{m-1,m-1}) \right) \sim \text{Uniform}(S_{A_m}).$$ This implies that $(a_{m,1}, \ldots, a_{m,m}) \mid A_m \sim \text{Uniform}(S_{A_m}).$ The claim hence holds true. Note that by design, $A_{n-1}=\left\{1, \ldots, n-1\right\}$. So after $T=\tau_{n-1}$ shuffles, when we remove the card labelled $n$ from the top, the remaining deck is a uniform random element of $S_{n-1}$. Therefore, after inserting the $n$-labelled card at a random place in this deck, we get a uniform random element of $S_n$. This proves that after $V=T+1$ shuffles, all $n!$ arrangements of the deck are equally likely. \item Note that $V=T+1 = 1+\sum_{k=1}^{n-1}(\tau_k-\tau_{k-1})$, with $\tau_0:=0$. Conditioned on $\tau_{k-1}, A_{k-1}$, the dynamics is as follows. We take the top card and place it at random (independent of all that happened before) in one of the $n$ places. Out of those $n$ places, placement in one of the $k$ positions below the card labelled $n$ will take us to an arrangement where there are $k$ cards below the card labelled $n$. Thus conditioned on $\tau_{k-1}, A_{k-1}$, we have $\tau_{k}-\tau_{k-1}$ to be the first success time of a geometric trial with success probability $k/n$. Since the distribution does not depend on $\tau_{k-1}, A_{k-1}$, we have $$ \tau_{k}-\tau_{k-1} \stackrel{ind}{\sim} \text{Geo}(k/n), \; k=1, \ldots, n-1.$$ Therefore, \begin{align*} \mathbb{E}(V) = 1 + \sum_{k=1}^{n-1} \mathbb{E}(\tau_k-\tau_{k-1}) = 1 + \sum_{k=1}^{n-1} \dfrac{n}{k} = n \left(\sum_{k=1}^n \dfrac{1}{k} \right) = n (\log n + \gamma + o(1)) \sim n \log n, \end{align*} with $\gamma$ being the \textit{Euler constant}. \item By independence we have \begin{align*} \operatorname{Var}(V) = \sum_{k=1}^{n-1} \operatorname{Var}(\tau_k-\tau_{k-1}) = \sum_{k=1}^{n-1} \dfrac{1-(k/n)}{(k/n)^2} = n^2 \sum_{k=1}^{n-1} k^{-2} - n \sum_{k=1}^{n-1} k^{-1} &= n^2 \left(\dfrac{\pi^2}{6}+o(1)\right) - n (\log n + \gamma +o(1)) \\ & \sim \dfrac{\pi^2 n^2}{6}. \end{align*} Therefore, $\operatorname{SD}(V) \sim \dfrac{\pi n}{\sqrt{6}}.$ \item Recall the \textit{Coupon collector's problem}.
We take $U_1, U_2, \ldots \stackrel{iid}{\sim} \text{Uniform}(\left\{1, \ldots, n\right\})$ and let $$\theta_k := \inf \left\{ m \geq 1 : |\left\{U_1, \ldots, U_m\right\}| = k\right\}.$$ Clearly, $\theta_k-\theta_{k-1} \stackrel{ind}{\sim} \text{Geo}\left( \dfrac{n-k+1}{n}\right)$ for $k =1, \ldots, n$, with $\theta_0:=0$. Hence $V \stackrel{d}{=} \theta_n$ and for any $v \geq 1$, $$ \mathbb{P}(V >v) = \mathbb{P}(\theta_n >v) \leq \sum_{k=1}^n \mathbb{P}(U_i \neq k, \; \forall \;1 \leq i \leq v) = n \left( 1- \dfrac{1}{n} \right)^v = n[(n-1)/n]^v.$$ \item Plug in $v=n\log n +c n$ in part (d) for some $c>0$ and apply the inequality $(1-x) \leq e^{-x}$, valid for all $x$: $$ \mathbb{P}(V > n\log n + cn) \leq n\left( 1- \dfrac{1}{n}\right)^{n \log n + cn} \leq n \exp\left( -\dfrac{n \log n + cn}{n}\right) = n \exp(-\log n - c) = e^{-c}.$$ \end{enumerate}
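As a quick sanity check (not part of the original solution), the top-to-random shuffle can be simulated directly; the Python sketch below, with an arbitrary choice of $n$ and sample size, compares the empirical mean of $V$ with $n\sum_{k=1}^n k^{-1}$ and the tail probability in part (e) with the bound $e^{-c}$.
\begin{verbatim}
import random, math

def simulate_V(n, rng):
    # deck[0] is the top of the deck; the original bottom card has label n - 1
    deck = list(range(n))
    bottom = n - 1
    t = 0
    while deck[0] != bottom:                 # T: shuffles until the bottom card reaches the top
        card = deck.pop(0)                   # pick up the top card
        deck.insert(rng.randrange(n), card)  # reinsert uniformly over the n available places
        t += 1
    return t + 1                             # V = T + 1 (one more shuffle to reinsert it)

rng = random.Random(0)
n, reps, c = 16, 20000, 1.0
samples = [simulate_V(n, rng) for _ in range(reps)]
print("empirical E(V):", sum(samples) / reps)
print("n * H_n       :", n * sum(1 / k for k in range(1, n + 1)))
tail = sum(v > n * math.log(n) + c * n for v in samples) / reps
print("P(V > n log n + cn) estimate:", tail, "  bound e^{-c}:", math.exp(-c))
\end{verbatim}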
1993-q4
Let \( X_0, X_1, \ldots, X_n \) be a martingale such that \( X_0 = 0 \) a.s., and \( \text{Var}(X_n) = \sigma^2 < \infty \). Define times \( T_0, T_1, \ldots, T_n \) inductively as follows: \( T_0 = 0, \) and for \( 1 \leq i \leq n, \) \( T_i \) is the smallest \( t > T_{i-1} \) such that \( X_t > X_{T_{i-1}}, \) if there is such a \( t \). And \( T_i = n \) if there is no such \( t \). a) Show that \( E \sum_{i=1}^n (X_{T_i} - X_{T_{i-1}})^2 = \sigma^2 \). b) Let \( M_n = \max_{0 \leq i \leq n} X_i \). Show that \( E(M_n - X_n)^2 \leq \sigma^2 \).
\textbf{Correction :} Assume that $\mathbb{E}X_i^2<\infty$ for all $i=0, \ldots, n$. We have $X_0,X_1, \ldots, X_n$ to be a $L^2$-MG with respect to filtration $\left\{\mathcal{F}_m : 0 \leq m \leq n\right\}$ such that $X_0=0$ almost surely and $\operatorname{Var}(X_n)=\sigma^2 < \infty$. Define $T_0:=0$ and for all $1 \leq i \leq n$, $$ T_i := \inf \left\{ m >T_{i-1} : X_m > X_{T_{i-1}}\right\} \wedge n,$$ with the understanding that infimum of an empty set is $\infty$. \begin{enumerate}[label=(\alph*)] \item Since the stopping times $T_0 \leq T_1 \leq \cdots \leq T_n$ are uniformly bounded, we apply OST and conclude that $\mathbb{E}(X_{T_i} \mid \mathcal{F}_{T_{i-1}}) =X_{T_{i-1}}$ for all $1 \leq i \leq n$. Since, $X_{T_i}^2 \leq \sum_{k=0}^n X_k^2 \in L^1$, we can conclude that $\left\{Y_i :=X_{T_i}, \mathcal{F}_{T_i}, 0 \leq i \leq n\right\}$ is a $L^2$-MG with predictable compensator process $$ \langle Y \rangle_m = \sum_{k=1}^m \mathbb{E}\left[(X_{T_k}-X_{T_{k-1}})^2 \mid \mathcal{F}_{T_{k-1}}\right], \; \forall \; 0 \leq m \leq n.$$ Hence, $$ \mathbb{E}X_{T_n}^2 = \mathbb{E}Y_n^2 = \mathbb{E}\langle Y \rangle_n = \mathbb{E} \sum_{i=1}^n (X_{T_i}-X_{T_{i-1}})^2.$$ It is easy to observe that $T_n =n$ almost surely and hence $$ \mathbb{E} \sum_{i=1}^n (X_{T_i}-X_{T_{i-1}})^2 = \mathbb{E}X_{T_n}^2 = \mathbb{E}X_n^2 = \operatorname{Var}(X_n)=\sigma^2,$$ since $\mathbb{E}X_n = \mathbb{E}X_0=0$. \item Let $M_n = \max_{0 \leq i \leq n } X_i$. Let $j^{*}$ be the smallest element of the (random) set $\arg \max_{0 \leq i \leq n} X_i$. Then $X_{j^{*}} > X_i$ for all $0 \leq i \leq j^{*}-1$ and $X_{j^{*}} \geq X_i$ for all $j^{*}<i \leq n$. This implies that $j^{*} = T_k$ for some $0 \leq k \leq n$ and $T_i=n$ for all $k<i \leq n.$ Consequently, $$ (M_n-X_n)^2 = (X_{T_k}-X_n)^2 \leq \sum_{i=1}^n (X_{T_i}-X_{T_{i-1}})^2.$$ From above and using part (a), we conclude that $\mathbb{E}(M_n-X_n)^2 \leq \sigma^2.$ \end{enumerate}
1993-q5
Let \( X_1, X_2, \ldots \) be a sequence of square-integrable random variables. Suppose the sequence is tight, and that \( E X_n^2 \to \infty \) as \( n \to \infty \). Show that \( \text{Var}(X_n) \to \infty \) as \( n \to \infty \).
Suppose $\operatorname{Var}(X_n)$ does not go to $\infty$ and hence there exists a subsequence $\left\{n_k : k \geq 1\right\}$ such that $\sup_{k \geq 1} \operatorname{Var}(X_{n_k}) = C < \infty$. Since, $\mathbb{E}(X_{n_k}^2) \to \infty$, we conclude that $|\mathbb{E}(X_{n_k})| \to \infty$. Without loss of generality, we can assume $\mu_{n_k}:=\mathbb{E}(X_{n_k}) \to \infty$ (otherwise we may have to take a further subsequence). Fix $M \in (0,\infty)$ and $\delta >0$. Then for large enough $k$, \begin{align*} \mathbb{P}(X_{n_k} \leq M) \leq \mathbb{P}(X_{n_k} \leq \mu_{n_k} - \delta^{-1}\sqrt{C}) \leq \mathbb{P}(|X_{n_k}-\mu_{n_k}| \geq \delta^{-1}\sqrt{C}) \leq \dfrac{\operatorname{Var}(X_{n_k})}{\delta^{-2}C} \leq \delta^2. \end{align*} Therefore, $$ \sup_{n \geq 1} \mathbb{P}(|X_n| > M) \geq 1-\delta^2,$$ for all choices of $M$ and $\delta$. Take $\delta \downarrow 0$ and conclude that $$ \sup_{n \geq 1} \mathbb{P}(|X_n| > M) =1, \; \forall \; M.$$ This contradicts the tightness assumption.
1993-q6
Let \( X, X_1, X_2, \ldots \) be i.i.d. with \( 0 < E|X| < \infty, \) and \( EX = \mu \). Let \( a_1, a_2, \ldots \) be positive reals. In each of parts a) through d), say whether the statement is true or false, and give a proof or counterexample. a) If \( \mu = 0 \) and \( \sum_i a_i < \infty, \) then \( \sum_i a_i X_i \) converges a.s. b) If \( \mu = 0 \) and \( \sum_i a_i X_i \) converges a.s., then \( \sum_i a_i < \infty \). c) If \( \mu > 0 \) and \( \sum_i a_i < \infty, \) then \( \sum_i a_i X_i \) converges a.s. d) If \( \mu > 0 \) and \( \sum_i a_i X_i \) converges a.s., then \( \sum_i a_i < \infty \).
Let $\mathcal{F}_n := \sigma(X_i : 1 \leq i \leq n).$ Also define $S_n := \sum_{k=1}^n a_kX_k$ with $S_0:=0$ and $\mathbb{E}|X|=\nu < \infty.$ \begin{enumerate}[label=(\alph*)] \item \textbf{TRUE}. Since $\mu=0$, we have $\left\{S_n, \mathcal{F}_n, n \geq 0\right\}$ to be a MG with $$ \sup_{n \geq 1} \mathbb{E}|S_n| \leq \sup_{n \geq 1} \sum_{k=1}^n a_k \mathbb{E}|X_k| = \sup_{n \geq 1} \nu \sum_{k =1}^n a_k = \nu \sum_{k \geq 1} a_k < \infty.$$ Therefore, by \textit{Doob's MG convergence Theorem}, $\sum_{n \geq 1} a_nX_n$ converges almost surely. \item \textbf{FALSE}. Take $X \sim N(0,1)$ and $a_i =i^{-1}$. We still have $\left\{S_n, \mathcal{F}_n, n \geq 0\right\}$ to be a MG with $$ \sup_{n \geq 1} \mathbb{E}S_n^2 =\sup_{n \geq 1} \operatorname{Var}(S_n) = \sup_{n \geq 1} \sum_{k=1}^n a_k^2 = \sup_{n \geq 1} \sum_{k =1}^n k^{-2} = \sum_{k \geq 1} k^{-2} < \infty.$$ Being $L^2$-bounded, the MG $S_n$ converges almost surely and hence $\sum_{n \geq 1} a_kX_k$ converges almost surely. But here $\sum_{k \geq 1} a_k =\infty.$ \item \textbf{TRUE}. Let $Z_k=X_k-\mu$. Then $Z_i$'s are i.i.d. with mean $0$ and therefore by part (a) we have $\sum_{n \geq 1} a_nZ_n$ converges almost surely, since $\sum_{n \geq 1} a_n < \infty.$ The summability condition on $a_i$'s also imply that $\mu\sum_{n \geq 1} a_n < \infty$. Summing them up we get, $\sum_{n \geq 1} a_nX_n = \sum_{n \geq 1} a_nZ_n + \mu\sum_{n \geq 1} a_n$ converges almost surely. \item \textbf{TRUE}. We shall use \textit{Kolmogorov's Three series theorem}. We assume that $\sum_{n \geq 1} a_nX_n$ converges almost surely. Then by \textit{Kolmogorov's Three series theorem}, we have that $\sum_{n \geq 1} \mathbb{P}(|a_nX_n| > c)$ and $\sum_{n \geq 1} \mathbb{E}(a_nX_n\mathbbm{1}(|a_nX_n| \leq c))$ converge for any $c>0$. Since $X_i$'s \textit{i.i.d.}, we have $\sum_{n \geq 1} \mathbb{P}(|X|>a_n^{-1}c)$ and $\sum_{n \geq 1} \mathbb{E}(a_nX\mathbbm{1}(a_n|X| \leq c))$ converges for all $c>0$. The summability of the first series implies that $\mathbb{P}(|X| > a_n^{-1}c) \to 0$ for any $c>0$. Since $\mathbb{E}X = \mu >0$, we have $\mathbb{P}(|X|>\varepsilon) > \delta$ for some $\varepsilon, \delta >0$. This implies that the sequence $a_n$ is uniformly bounded, say by $C \in (0,\infty).$ Introduce the notation $f(x) := \mathbb{E}(X \mathbbm{1}(|X| \leq x))$ for any $x>0$. Note that $f(x) \to \mu >0$ as $x \to \infty$. Then we have $\sum_{n \geq 1} a_nf(a_n^{-1}c)$ converging for any $c>0$. Get $M \in (0, \infty)$ such that $f(x) \geq \mu/2$ for all $x \geq M$ and take $c=MC >0$. Then $a_n^{-1}c \geq C^{-1}c =M$ for all $n \geq 1$ and hence $f(a_n^{-1}c) \geq \mu/2$. This implies that $$ (\mu/2) \sum_{n \geq 1} a_n \leq \sum_{n \geq 1} a_nf(a_n^{-1}c) < \infty,$$ and therefore $\sum_{n \geq 1} a_n < \infty$, since $\mu >0$. \end{enumerate}
1993-q7
Let \( Y \) be a non-negative random variable with finite mean \( \mu \). Show that \[ (E|Y - \mu|)^2 \leq 4\mu \text{Var}(\sqrt{Y}) \] [Hint: Consider \( \sqrt{Y} \pm \sqrt{\mu} \).]
We have a non-negative random variable $Y$ with $\mathbb{E}(Y)=\mu$. Let's introduce some more notation: $\nu := \mathbb{E}\sqrt{Y}$ and $V:= \operatorname{Var}(\sqrt{Y}) = \mu - \nu^2.$ Applying \textit{Cauchy-Schwarz Inequality} we get \begin{align*} \left( \mathbb{E}|Y-\mu| \right)^2 = \left( \mathbb{E}\left(|\sqrt{Y}-\sqrt{\mu}||\sqrt{Y}+\sqrt{\mu}|\right) \right)^2 &\leq \mathbb{E}(\sqrt{Y}-\sqrt{\mu})^2 \, \mathbb{E}(\sqrt{Y}+\sqrt{\mu})^2 \\ & =\left[ \operatorname{Var}(\sqrt{Y}) + (\sqrt{\mu}-\nu)^2\right] \left[ \mathbb{E}Y + \mu +2 \sqrt{\mu}\mathbb{E}\sqrt{Y} \right] \\ & = \left[ \mu-\nu^2 + (\sqrt{\mu}-\nu)^2\right] \left[ 2\mu +2 \nu\sqrt{\mu}\right] \\ & = \left( 2\mu - 2\nu \sqrt{\mu}\right) \left( 2\mu +2 \nu\sqrt{\mu}\right) \\ & = 4\mu^2 - 4\nu^2\mu = 4\mu(\mu-\nu^2) = 4\mu \operatorname{Var}(\sqrt{Y}). \end{align*}
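As an illustration (not part of the original solution), the inequality can be checked numerically; the two distributions below are arbitrary choices and both sides are estimated by Monte Carlo.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sides(y):
    mu = y.mean()
    lhs = np.abs(y - mu).mean() ** 2          # (E|Y - mu|)^2
    rhs = 4 * mu * np.sqrt(y).var()           # 4 mu Var(sqrt(Y))
    return lhs, rhs

for name, y in [("Exponential(1)", rng.exponential(1.0, 10**6)),
                ("4 * Bernoulli(0.3)", 4.0 * rng.binomial(1, 0.3, 10**6))]:
    lhs, rhs = sides(y)
    print(f"{name}: lhs = {lhs:.4f} <= rhs = {rhs:.4f}")
\end{verbatim}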
1993-q8
A coin which lands heads with probability \( p \) is tossed repeatedly. Let \( W_n \) be the number of tosses required to get at least \( n \) heads and at least \( n \) tails. Find constants \( a_n \) and \( b_n \to \infty \) such that \( (W_n-a_n)/b_n \) converges in law to a non-degenerate distribution, and specify the limit distribution, in the cases: a) \( p = 2/3 \) b) \( p = 1/2 \) Prove your answers.
Let $X_n$ be the number of heads in first $n$ tosses. Hence $X_n \sim \text{Bin}(n,p)$. $W_n$ is the number of tosses required to get at least $n$ heads and $n$ tails. Evidently, $W_n \geq 2n$, almost surely for all $n \geq 1$. Note that for $x \geq 0$, $$ (W_n > 2n+x) = (X_{2n+x} \wedge (2n+x-X_{2n+x}) <n) =(X_{2n+x}<n \text{ or } X_{2n+x} > n+x)$$ and hence for all $n \geq 1$ and $x \in \mathbb{Z}_{\geq 0}$ we have $$ \mathbb{P}(W_n \leq 2n+x) = \mathbb{P}(n \leq X_{2n+x} \leq n+x) = \mathbb{P}(X_{2n+x} \leq n+x) - \mathbb{P}(X_{2n+x}<n). \begin{enumerate}[label=(\alph*)] \item Consider the case $p>1/2$. Then for any sequence $x_n$ of non-negative integers, we have $$ \mathbb{P}(X_{2n+x_n}<n) \leq \mathbb{P}(X_{2n} < n) = \mathbb{P}\left(\dfrac{X_{2n}}{2n}-p < \dfrac{1}{2} - p\right) \longrightarrow 0, $$ since $(2n)^{-1}X_{2n} \stackrel{a.s.}{\longrightarrow} p > \frac{1}{2}$ as $n \to \infty$. On the otherhand. \begin{align*} \mathbb{P}(X_{2n+x_n} \leq n+x_n) &= \mathbb{P}(X_{2n+x_n}-(2n+x_n)p \leq n+x_n - (2n+x_n)p) \\ &= \mathbb{P} \left( \dfrac{X_{2n+x_n}-(2n+x_n)p}{\sqrt{(2n+x_n)p(1-p)}} \leq \dfrac{n(1-2p)+x_n(1-p)}{\sqrt{(2n+x_n)p(1-p)}}\right) \end{align*} Take $x_n = n(2p-1)(1-p)^{-1} + C \sqrt{n}$ where $C \in \mathbb{R}$. Then $$ \dfrac{n(1-2p)+x_n(1-p)}{\sqrt{(2n+x_n)p(1-p)}} = \dfrac{C(1-p)\sqrt{n}}{\sqrt{2np(1-p)+n(2p-1)p+Cp(1-p)\sqrt{n}}} = \dfrac{C(1-p)\sqrt{n}}{\sqrt{np+Cp(1-p)\sqrt{n}}} \to \dfrac{C(1-p)}{\sqrt{p}}. $$ Since $2n+x_n \to \infty$ as $n \to \infty$, we have by CLT that $$ \dfrac{X_{2n+x_n}-(2n+x_n)p}{\sqrt{(2n+x_n)p(1-p)}} \stackrel{d}{\longrightarrow} N(0,1).$$ Therefore, using \textit{Polya's Theorem} we get $$ \mathbb{P}(X_{2n+x_n} \leq n+x_n) \to \Phi \left( \dfrac{C(1-p)}{\sqrt{p}}\right).$$ Consequently, we have $$ \mathbb{P}(W_n \leq 2n+n(2p-1)(1-p)^{-1}+C\sqrt{n}) \longrightarrow \Phi \left( \dfrac{C(1-p)}{\sqrt{p}}\right), \; \forall \; C \in \mathbb{R}.$$ Plugging in $C=x\sqrt{p}(1-p)^{-1}$, we get $$ \mathbb{P}(W_n \leq 2n+n(2p-1)(1-p)^{-1}+x\sqrt{p}(1-p)^{-1}\sqrt{n}) \longrightarrow \Phi(x), \; \forall \; x \in \mathbb{R},$$ which implies $$ \dfrac{(1-p)W_n- n}{\sqrt{np}} \stackrel{d}{\longrightarrow} N(0,1), \; \text{ as } n \to \infty.$$ For $p = 2/3$, this simplifies to $n^{-1/2}(W_n-3n) \stackrel{d}{\longrightarrow} N(0,6).$ \item For $p=1/2$, we have $$ \mathbb{P}(W_n \leq 2n+x_n) = \mathbb{P}(n \leq X_{2n+x_n} \leq n+x_n) = \mathbb{P}\left( - \dfrac{x_n/2}{\sqrt{(2n+x_n)/4}} \leq \dfrac{X_{2n+x_n}-(n+x_n/2)}{\sqrt{(2n+x_n)/4}} \leq \dfrac{x_n/2}{\sqrt{(2n+x_n)/4}}\right).$$ Take $x_n = C\sqrt{n}$, for $C \in (0,\infty)$. Since by CLT we have $$ \dfrac{X_{2n+x_n}-(n+x_n/2)}{\sqrt{(2n+x_n)/4}} \stackrel{d}{\longrightarrow} N(0,1),$$ we can conclude by \textit{Polya's Theorem} that $$ \mathbb{P}(W_n \leq 2n+C\sqrt{n}) \longrightarrow \Phi(C/\sqrt{2}) - \Phi(-C/\sqrt{2}) = \mathbb{P}(|Z| \leq C/\sqrt{2}) = \mathbb{P}(|N(0,2)| \leq C).$$ Therefore, with the aid of the fact that $W_n \geq 2n$, almost surely we can conclude that $$ \dfrac{W_n-2n}{\sqrt{n}} \stackrel{d}{\longrightarrow} |N(0,2)|, \; \text{ as } n \to \infty.$$ \end{enumerate}
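The two limits can be sanity-checked by simulation (not part of the original solution); for $p=1/2$ the limiting variable $|N(0,2)|$ has mean $2/\sqrt{\pi} \approx 1.128$ and variance $2-4/\pi \approx 0.727$. The value of $n$ and the sample size below are arbitrary.
\begin{verbatim}
import random, math

def sample_W(n, p, rng):
    heads = tails = tosses = 0
    while heads < n or tails < n:
        tosses += 1
        if rng.random() < p:
            heads += 1
        else:
            tails += 1
    return tosses

rng = random.Random(0)
n, reps = 400, 4000
for p, centre, note in [(2/3, 3*n, "limit N(0,6): mean 0, var 6"),
                        (1/2, 2*n, "limit |N(0,2)|: mean 1.128, var 0.727")]:
    z = [(sample_W(n, p, rng) - centre) / math.sqrt(n) for _ in range(reps)]
    m = sum(z) / reps
    v = sum(x * x for x in z) / reps - m * m
    print(f"p={p:.3f}: sample mean {m:.3f}, sample var {v:.3f}  ({note})")
\end{verbatim}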
1994-q1
Let X_{ij}, i = 1, \cdots, k, j = 1, \cdots, n be independent random variables and suppose that P\{X_{ij} = 1\} = P\{X_{ij} = 0\} = 0.5. Consider the row vectors X_i = (X_{i1}, \cdots, X_{in}). (i) Find the probability that the elementwise addition modulo 2 of X_1 and X_2 results in the vector 0 = (0, \cdots, 0). (ii) Let Q_k be the probability that there exist m and 1 \le i_1 < i_2 < \cdots < i_m \le k such that the elementwise addition modulo 2 of X_{i_1}, X_{i_2}, \cdots, X_{i_m} results in the vector 0 = (0, \cdots, 0). Prove that Q_k = 1 - \prod_{i=0}^{k-1} (1 - 2^{-(n-i)}).
We have $X_{ij}, \; i=1, \ldots, k; j=1,\ldots, n$ to be a collection of \textit{i.i.d.} $\text{Ber}(1/2)$ random variables. Define $\mathbf{X}_i=(X_{i1}, \ldots, X_{in})$ for all $1 \leq i \leq k$. The (co-ordinate wise) addition modulo $2$ will be denoted by $\oplus$. \begin{enumerate}[label=(\roman*)] \item We have \begin{align*} \mathbb{P}(\mathbf{X}_1 \oplus \mathbf{X}_2 = \mathbf{0}) = \mathbb{P}(X_{1j}\oplus X_{2j}=0, \; \forall \; 1 \leq j \leq n) = \prod_{j=1}^n \mathbb{P}(X_{1j}\oplus X_{2j}=0) = \prod_{j=1}^n \mathbb{P}(X_{1j}= X_{2j}) = 2^{-n}. \end{align*} \item Consider $V=\left\{0,1\right\}^n$, which is a vector space of dimension $n$ over the field $F=\mathbb{Z}_2$ and the addition being $\oplus$ operator. Let $S_l := \left\{\mathbf{X}_1, \ldots, \mathbf{X}_l\right\}$ for all $1 \leq l \leq k$. Note that the event $Q_l$, that there exist $m$ and $1 \leq i_1 < \cdots < i_m \leq l$ such that the element-wise addition modulo $2$ of $\mathbf{X}_{i_1}, \ldots, \mathbf{X}_{i_m}$ results in the vector $\mathbf{0}$, is the same as the event that the set $S_l$ is linearly dependent. Since $V$ has dimension $n$, clearly we have $\mathbb{P}(Q_{l})=1$ for all $l \geq n+1$. Now suppose that $Q_l^c$ has occurred for some $1 \leq l \leq n-1$. Then conditioned on $Q_l^c$, $Q_{l+1}^c$ will occur if and only if $\mathbf{X}_{l+1} \notin \mathcal{L}(S_l)$, where $\mathcal{L}(S_l)$ is the subspace of $V$ spanned by $S_l$. Since conditioned on $Q_l^c$, we have $S_l$ to be linearly independent, we also have $|\mathcal{L}(S_l)|=|F|^{|S_l|} =2^l.$ Since $\mathbf{X}_{l+1}$ is a uniform element from $V$, we can write $$ \mathbb{P}(Q_{l+1}^c \mid Q_l^c) = \mathbb{P}(\mathbf{X}_{l+1} \notin \mathcal{L}(S_l) \mid Q_l^c) = 2^{-n}(2^n-2^l) = 1-2^{-(n-l)}.$$ Note that, $\mathbb{P}(Q_1^c) = \mathbb{P}(\mathbf{X}_1 \neq \mathbf{0}) =1- 2^{-n}$. Since, $Q_1^c \supseteq Q_2^c \supseteq \cdots$, we have for $k \leq n$, $$ \mathbb{P}(Q_k^c) = \mathbb{P}(Q_1^c) \prod_{l=1}^{k-1} \mathbb{P}(Q_{l+1}^c \mid Q_l^c) = (1-2^{-n}) \prod_{l=1}^{k-1} (1-2^{-(n-l)}) = \prod_{l=0}^{k-1} (1-2^{-(n-l)}).$$ Hence, $$ \mathbb{P}(Q_k) = 1- \prod_{l=0}^{k-1} \left[1-2^{-(n-l)} \right]_{+}, \; \forall \; k,n.$$ \end{enumerate}
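A simulation sketch (not part of the original solution): linear dependence over $\mathbb{Z}_2$ can be tested by Gaussian elimination on bit masks, and the empirical frequency compared with the closed form. The parameters $n$, $k$ and the sample size are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def gf2_dependent(rows):
    """True iff the given 0/1 rows are linearly dependent over GF(2)."""
    basis = {}                                # leading-bit position -> reduced row
    for r in rows:
        x = int("".join(map(str, r)), 2)      # encode the row as a bit mask
        while x:
            b = x.bit_length()
            if b not in basis:
                basis[b] = x                  # new pivot: row is independent of earlier ones
                break
            x ^= basis[b]                     # eliminate the leading bit
        if x == 0:
            return True                       # row lies in the span of the earlier rows
    return False

n, k, reps = 10, 6, 20000
hits = sum(gf2_dependent(rng.integers(0, 2, size=(k, n)).tolist())
           for _ in range(reps))
q_formula = 1 - np.prod([1 - 2.0 ** (-(n - l)) for l in range(k)])
print("simulated Q_k:", hits / reps, "  formula:", q_formula)
\end{verbatim}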
1994-q2
Let S_0 = 0, S_n = X_1 + X_2 + \cdots + X_n, n \ge 1 be a Martingale sequence such that |S_i - S_{i-1}| = 1 for i = 1, 2, \cdots. Find the joint distribution of \{X_1, X_2, \cdots, X_n\}.
\textbf{Correction :} We have a MG $\left\{S_n = \sum_{k=1}^n X_k, \mathcal{F}_n, n \geq 0\right\}$ such that $S_0=0$ and $|S_i-S_{i-1}|=1$ for all $i \geq 1$. Clearly, $\mathbb{E}(X_k \mid \mathcal{F}_{k-1})=0$ and $X_k \in \left\{-1,+1\right\}$ for all $k \geq 1$. This implies $X_k \mid \mathcal{F}_{k-1} \sim \text{Uniform}(\left\{-1,+1\right\})$ and therefore $X_k \perp\!\!\perp \mathcal{F}_{k-1}$. Since, $X_1, \ldots, X_{k-1} \in m\mathcal{F}_{k-1}$, we have $X_k \perp\!\!\perp(X_1, \ldots, X_{k-1})$. Since this holds true for all $k \geq 1$, we conclude that $X_1,X_2, \ldots \stackrel{iid}{\sim} \text{Uniform}(\left\{-1,+1\right\}).$
1994-q3
(i) Let X_n, Y_n, Z_n be random variables such that |X_n| \le |Y_n| + |Z_n| for all n \ge 1. Suppose that \sup_{n \ge 1} E(|Y_n| \log |Y_n|) < \infty and \lim_{n \to \infty} E|Z_n| = 0. Show that \{X_n, n \ge 1\} is uniformly integrable. (ii) Give a counter-example to show that we cannot replace the assumption \lim_{n \to \infty} E|Z_n| = 0 above by \sup_{n \ge 1} E|Z_n| < \infty.
\begin{enumerate}[label=(\roman*)] \item It is enough to show that both $\left\{Y_n, n \geq 1 \right\}$ and $\left\{Z_n, n \geq 1 \right\}$ are uniformly integrable. Take $y >1.$ Using the fact that $x\log x \geq -1/e$ for all $x \geq 0$, we get \begin{align*} \mathbb{E}\left(|Y_n|, |Y_n| \geq y \right) = \mathbb{E}\left(|Y_n|, \log |Y_n| \geq \log y \right) &\leq \dfrac{1}{\log y}\mathbb{E}\left(|Y_n|\log |Y_n|, \log |Y_n| \geq \log y \right) \\ & = \dfrac{1}{\log y}\mathbb{E}\left(|Y_n|\log |Y_n|\right) - \dfrac{1}{\log y}\mathbb{E}\left(|Y_n|\log |Y_n|, \log |Y_n| < \log y \right) \\ & \leq \dfrac{1}{\log y}\mathbb{E}\left(|Y_n|\log |Y_n|\right)+ \dfrac{1}{e \log y}, \end{align*} implying that $$ \sup_{n \geq 1} \mathbb{E}\left(|Y_n|, |Y_n| \geq y \right) \leq \dfrac{1}{\log y} \sup_{n \geq 1} \mathbb{E}\left(|Y_n|\log |Y_n|\right) + \dfrac{1}{e \log y}.$$ Taking $y \uparrow \infty$, we conclude that $\left\{Y_n, n \geq 1 \right\}$ is uniformly integrable. We have $\mathbb{E}|Z_n| \to 0$, which implies that $Z_n$ converges to $0$ both in probability and in $L^1$. Apply \textit{Vitali's Theorem} to conclude that $\left\{Z_n, n \geq 1 \right\}$ is uniformly integrable. This conclude the proof. \item Take $U \sim \text{Uniform}(0,1)$, $Y_n \equiv 1$, $Z_n =n\mathbbm{1}(U<1/n)$ and $X_n=Y_n+Z_n=Z_n+1$, for all $n \geq 1$. Clearly, $|X_n| \leq |Y_n|+|Z_n|$, $\sup_{n \geq 1} \mathbb{E}(|Y_n|\log |Y_n|) =0$ and $\sup_{n \geq 1} \mathbb{E}|Z_n|=1$. Note that $Z_n \stackrel{a.s.}{\longrightarrow} 0$ and hence $X_n \stackrel{a.s.}{\longrightarrow} 1$. If $\left\{X_n : n \geq 1\right\}$ is uniformly integrable, we can apply \textit{Vitali's Theorem} to conclude that $\mathbb{E}X_n \to 1$. But $\mathbb{E}(X_n)=2$ for all $n \geq 2$. Hence $\left\{X_n : n \geq 1\right\}$ is not uniformly integrable, \end{enumerate}
1994-q4
Let X_1, X_2, \cdots be independent random variables and let S_n = X_1 + \cdots + X_n. (i) Suppose P\{X_n = 1\} = P\{X_n = -1\} = \frac{1}{2}n^{-1} and P\{X_n = 0\} = 1 - n^{-1}. Show that S_n/\sqrt{\log n} has a limiting normal distribution. (ii) Let \beta \ge 0 and 2\alpha > \beta - 1. Suppose P\{X_n = n^\alpha\} = P\{X_n = -n^\alpha\} = \frac{1}{2}n^{-\beta} and P\{X_n = 0\} = 1 - n^{-\beta}. Show that the Lindeberg condition holds if and only if \beta < 1. What can you say about the asymptotic distribution of S_n for the cases 0 \le \beta < 1, \beta = 1 and \beta > 1? (iii) Suppose P\{X_n = n\} = P\{X_n = -n\} = \frac{1}{2}n^{-2} and P\{X_n = 1\} = P\{X_n = -1\} = \frac{1}{2}(1 - n^{-2}). Show that the Lindeberg condition fails but that S_n/\sqrt{n} still has a limiting standard normal distribution.
$X_1,X_2, \ldots$ are independent random variables with $S_n = X_1 + \ldots + X_n.$ \begin{enumerate}[label=(\roman*)] \item We want to apply \textit{Lyapounav CLT}. Since, $\mathbb{P}(X_n=1)=\mathbb{P}(X_n=-1)=1/(2n)$ and $\mathbb{P}(X_n=0)=1-1/n,$ we have $\mathbb{E}(X_n)=0$ and $$ s_n^2 = \sum_{k=1}^n \operatorname{Var}(X_k) = \sum_{k=1}^n \mathbb{E}(X_k^2) = \sum_{k=1}^n \dfrac{1}{k} = \log n + O(1).$$ Finally, $$ \dfrac{1}{s_n^3} \sum_{k=1}^n \mathbb{E}|X_k|^3 = \dfrac{1}{s_n^3} \sum_{k=1}^n \dfrac{1}{k} = \dfrac{1}{s_n} \longrightarrow 0.$$ Hence, $S_n/s_n $ converges weakly to standard Gaussian distribution. Since, $s_n \sim \sqrt{\log n}$, we get $$ \dfrac{S_n}{\sqrt{\log n}} \stackrel{d}{\longrightarrow} N(0,1).$$ \item We have $\mathbb{P}(X_n=n^{\alpha}) = \mathbb{P}(X_n= -n^{\alpha})=1/(2n^{\beta})$ and $\mathbb{P}(X_n=0)=1-n^{-eta},$ where $\beta \geq 0, 2\alpha > \beta-1.$ Here also $\mathbb{E}(X_n)=0$ and $$ s_n^2 = \sum_{k=1}^n \operatorname{Var}(X_k) = \sum_{k=1}^n \mathbb{E}(X_k^2) = \sum_{k=1}^n k^{2\alpha - \beta} \sim \dfrac{n^{2\alpha-\beta+1}}{2\alpha-\beta +1} \longrightarrow \infty,$$ where we have used the fact that $\sum_{k=1}^n k^{\gamma} \sim (\gamma+1)^{-1}n^{\gamma+1}$, for $\gamma >-1$ and the assumption that $2\alpha-\beta+1>0$. Now we need to consider two situations. Firstly, suppose $\beta <1$. In that case $n^{\alpha}/s_n$ converges to $0$ and hence for any $\varepsilon >0$, we can get $N(\varepsilon)\in \mathbb{N}$ such that $|X_n| \leq n^{\alpha} < \varepsilon s_n$, almost surely for all $n \geq N(\varepsilon)$. Therefore, $$ \dfrac{1}{s_n^2} \sum_{k=1}^n \mathbb{E}\left[ X_k^2, |X_k|\geq \varepsilon s_n\right] = \dfrac{1}{s_n^2} \sum_{k=1}^{N(\varepsilon)} \mathbb{E}\left[ X_k^2, |X_k|\geq \varepsilon s_n\right] \leq \dfrac{1}{s_n^2} \sum_{k=1}^{N(\varepsilon)} \mathbb{E}\left[ X_k^2\right] \longrightarrow 0,$$ implying that \textit{Lindeberg condition} is satisfied. How consider the case $\beta \geq 1$. In this case, $s_n = O(n^{\alpha})$ and hence we can get hold of $\varepsilon >0$ such that $ \varepsilon s_n \leq n^{\alpha}$ for all $n \geq 1$. Therefore, $$ \dfrac{1}{s_n^2} \sum_{k=1}^n \mathbb{E}\left[ X_k^2, |X_k|\geq \varepsilon s_n\right] = \dfrac{1}{s_n^2} \sum_{k=1}^{n} \mathbb{E}\left[ X_k^2, |X_k|=k^{\alpha}\right] = \dfrac{1}{s_n^2} \sum_{k=1}^{n} k^{2\alpha-\beta} =1 \nrightarrow 0.$$ Thus \textit{Lindeberg condition} is satisfied if and only if $\beta <1$. For $0 \leq \beta<1$, we have shown the \textit{Lindeberg condition} to satisfy and hence $S_n/s_n$ converges weakly to standard Gaussian variable. Plugging in asymptotic estimate for $s_n$ we obtain $$ n^{-\alpha + \beta/2-1/2}S_n \stackrel{d}{\longrightarrow} N\left(0, \dfrac{1}{2\alpha-\beta+1}\right).$$ For $\beta >1$, we observe that $$ \sum_{n \geq 1} \mathbb{P}(X_n \neq 0) =\sum_{n \geq 1} n^{-\beta}<\infty,$$ and therefore with probability $1$, the sequence $X_n$ is eventually $0$. This implies that $S_n$ converges to a finite limit almost surely. For $\beta =1$, We focus on finding the asymptotics of $S_n/c_n$ for some scaling sequence $\left\{c_n\right\}$ with $c_n \uparrow \infty$, to be chosen later. Fix $t \in \mathbb{R}$. 
Since, the variables $X_k$'s have distributions which are symmetric around $0$, we get, $$ \mathbb{E} \exp\left[ it\dfrac{S_n}{c_n}\right] = \prod_{k=1}^n \mathbb{E} \exp\left[ it\dfrac{X_k}{c_n}\right] = \prod_{k=1}^n \mathbb{E} \cos \left[ t\dfrac{X_k}{c_n}\right] = \prod_{k=1}^n \left[ \dfrac{1}{k}\cos \left( \dfrac{tk^{\alpha}}{c_n}\right) + \left(1-\dfrac{1}{k}\right)\right] = \prod_{k=1}^n g_{n,k}(t), $$ where $$ g_{n,k}(t) := \dfrac{1}{k}\cos \left( \dfrac{tk^{\alpha}}{c_n}\right) + \left(1-\dfrac{1}{k}\right), \; \forall \; 1 \leq k \leq n.$$ Now observe that, \begin{align*} 0 \leq 1-g_{n,k}(t) &= \dfrac{1}{k}\left\{1-\cos \left( \dfrac{tk^{\alpha}}{c_n}\right)\right\} = \dfrac{2}{k}\sin^2 \left( \dfrac{tk^{\alpha}}{2c_n}\right) \leq \dfrac{2}{k} \left( \dfrac{tk^{\alpha}}{2c_n}\right)^2 \leq \dfrac{t^2k^{2\alpha-1}}{2c_n^2}. \end{align*} Take $c_n=n^{\alpha}$. Then \begin{align*} \sum_{k=1}^n (1-g_{n,k}(t)) &= 2\sum_{k=1}^n \dfrac{1}{k}\sin^2 \left( \dfrac{tk^{\alpha}}{2n^{\alpha}}\right) = \dfrac{2}{n} \sum_{k=1}^n \dfrac{n}{k}\sin^2 \left( \dfrac{tk^{\alpha}}{2n^{\alpha}}\right)= 2 \int_{0}^1 \dfrac{1}{x} \sin^2 \left(\dfrac{tx^{\alpha}}{2}\right) \, dx + o(1)\; \text{ as } n \to \infty. \end{align*} Not that the integral above is finite since, $$ 2\int_{0}^1 \dfrac{\sin^2(tz^{\alpha}/2)}{z} \, dz \leq 2 \int_{0}^1 \dfrac{t^2z^{2\alpha}}{4z} \, dz = 2\int_{0}^1 \dfrac{t^2z^{2\alpha-1}}{4} \, dz = \dfrac{t^2}{4\alpha} < \infty,$$ since $2\alpha > \beta -1 =0.$ Also note that $$ \max_{1 \leq k \leq n} |1-g_{n,k}(t)| \leq \max_{1 \leq k \leq n} \dfrac{t^2k^{2\alpha-1}}{n^{2\alpha}} \leq \dfrac{t^2 (n^{2\alpha-1} \vee 1)}{n^{2\alpha}} \to 0,$$ and hence $$ \sum_{k=1}^n |1-g_{n,k}(t)|^2 \leq \max_{1 \leq k \leq n} |1-g_{n,k}(t)| \sum_{k=1}^n (1-g_{n,k}(t)) \longrightarrow 0.$$\n Therefore, using ~\cite[Lemma 3.3.31]{dembo}, we conclude that $$ \mathbb{E} \exp\left(\dfrac{itS_n}{n^{\alpha}} \right) \longrightarrow \exp \left[-2 \int_{0}^1 \dfrac{1}{x} \sin^2 \left(\dfrac{tx^{\alpha}}{2}\right) \, dx \right] =: \psi (t), \; \forall \; t \in \mathbb{R}.$$ To conclude convergence in distribution for $S_n/n^{\alpha}$, we need to show that $\psi$ is continuous at $0$ (see ~\cite[Theorem 3.3.18]{dembo}). But it follows readily from DCT since $\sup_{-1 \leq t \leq 1} \sin^2(tz^{\alpha}/2)/z \leq z^{2\alpha-1}$ which in integrable on $[0,1]$, since $2\alpha-1 >-1$. So by \textit{Levy's continuity Theorem}, $S_n/n^{\alpha}$ converges in distribution to $G$ with characteristic function $\psi$. \item We have $\mathbb{P}(X_n=n)=\mathbb{P}(X_n=-n)=1/(2n^2)$ and $\mathbb{P}(X_n=1)=\mathbb{P}(X_n=-1)=\frac{1}{2}(1-n^{-2})$. Note that $$ \sum_{n \geq 1} \mathbb{P}(|X_n| \neq 1) =\sum_{n \geq 1} n^{-2} < \infty,$$ and hence with probability $1$, $|X_n|=1$ eventually for all $n$. Let $Y_n = \operatorname{sgn}(X_n)$. Clearly, $Y_i$'s are \textit{i.i.d.} $\text{Uniform}(\left\{-1,+1\right\})$ variables and hence $ \left(\sum_{k=1}^n Y_k\right)/\sqrt{n}$ converges weakly to a standard Gaussian variable. By previous argument, $\mathbb{P}(X_n=Y_n, \; \text{eventually for } n)=1$ and hence $n^{-1/2}\left(S_n - \sum_{k=1}^n Y_k \right) \stackrel{p}{\longrightarrow}0.$ Use \textit{Slutsky's Theorem} to conclude that $S_n/\sqrt{n}$ converges weakly to standard Gaussian distribution. \end{enumerate}
1994-q5
(i) Let N_p be a geometric random variable with parameter p, i.e., P\{N_p = k\} = p(1-p)^k, k = 0, 1, \cdots. show that pN_p converges weakly to an exponential random variable as p \to 0. (ii) Let X_1, X_2, \cdots be i.i.d. random variables with EX_1 > 0. Let S_n = X_1 + \cdots + X_n. Let N be a stopping time having the geometric distribution with parameter p. Show that S_N/E(S_N) converges weakly to an exponential random variable as p \to 0. (Hint: Use Wald's lemma, the law of large numbers, Slutsky's theorem and (i).)
\begin{enumerate}[label=(\roman*)] \item For any $n \in \mathbb{Z}_{\geq 0}$ we have $\mathbb{P}(N_p > n) = 1-\sum_{k=0}^n p(1-p)^k = (1-p)^{n+1}.$ Since $N_p$ is an integer-valued random variable, we have $\mathbb{P}(N_p > x) = (1-p)^{\lfloor x \rfloor +1},$ for all $x \geq 0$. Hence, for all $x \geq 0$, $$ \log \mathbb{P}(pN_p >x) = \log (1-p)^{\lfloor x/p \rfloor +1} = (\lfloor x/p \rfloor +1) \log (1-p) \sim - (\lfloor x/p \rfloor +1) p \sim -x, \; \text{ as } p \to 0.$$ Hence for all $x \geq 0$, we have $\mathbb{P}(pN_p >x) \to \exp(-x)$. On the other hand, for $x <0$, $\mathbb{P}(pN_p >x)=1$. Therefore, $pN_p \stackrel{d}{\longrightarrow} \text{Exponential}(1),$ as $p \to 0.$ \item We have $X_i$'s \textit{i.i.d.} with positive finite mean. $S_n :=X_1+\cdots+X_n,$ with $S_0:=0$. $N_p$ is a stopping time having geometric distribution with parameter $p$ and hence is integrable. So by \textit{Wald's Identity} (See ~\cite[Exercise 5.4.10]{dembo}), we have $\mathbb{E}(S_{N_p}) =\mathbb{E}(N_p)\mathbb{E}(X_1) = p^{-1}(1-p)\mathbb{E}(X_1)$. Note that $\mathbb{P}(N_p >x)=(1-p)^{\lfloor x \rfloor +1} \to 1$ as $p \to 0$ for all $x \geq 0$. This implies that $N_p \stackrel{p}{\longrightarrow} \infty.$ By strong law of large numbers, we have $S_n/n \stackrel{a.s.}{\longrightarrow} \mathbb{E}(X_1)$. Take any sequence $\left\{p_m : m \geq 1 \right\}$ converging to $0$. Since, $N_p \stackrel{p}{\longrightarrow} \infty$ as $p \to 0$, we can get a further subsequence $\left\{p_{m_l} : l \geq 1\right\}$ such that $N_{p_{m_l}} \stackrel{a.s.}{\longrightarrow} \infty.$ Therefore, $S_{N_{p_{m_l}} }/N_{p_{m_l}}\stackrel{a.s.}{\longrightarrow} \mathbb{E}(X_1).$ This argument shows that $S_{N_p}/N_p \stackrel{p}{\longrightarrow} \mathbb{E}(X_1)$ as $p \to 0.$ Now, $$ \dfrac{S_{N_p}}{\mathbb{E}S_{N_p}} = \dfrac{S_{N_p}}{p^{-1}(1-p)\mathbb{E}(X_1)} = \left(\dfrac{S_{N_p}}{N_p \mathbb{E}(X_1)}\right) \left(\dfrac{pN_p}{(1-p)}\right) .$$ As $p \to 0$, the quantity in the first parenthesis converges in probability to $1$, since $\mathbb{E}(X_1)>0$. From part (a), the quantity in the second parenthesis converges weakly to a standard exponential variable. Apply \textit{Slutsky's Theorem} to conclude that $$ \dfrac{S_{N_p}}{\mathbb{E}S_{N_p}} \stackrel{d}{\longrightarrow} \text{Exponential}(1), \text{ as } p \to 0.$$ \end{enumerate}
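Both limits are easy to check numerically (not part of the original solution); in the sketch below the summands are taken to be standard exponentials, an arbitrary choice with $EX_1 = 1 > 0$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p, reps = 0.01, 20000

# (i): N_p with P(N_p = k) = p(1-p)^k, k = 0, 1, ...; numpy's geometric starts at 1
N = rng.geometric(p, size=reps) - 1
print("p*N_p: mean", (p * N).mean(), " var", (p * N).var(), " (Exp(1): 1, 1)")

# (ii): S_N / E(S_N) with X_i ~ Exponential(1); Wald gives E(S_N) = (1-p)/p
S = np.array([rng.exponential(1.0, m).sum() for m in N])   # sum over an empty array is 0
R = S / ((1 - p) / p)
print("S_N/E(S_N): mean", R.mean(), " var", R.var(), " (Exp(1): 1, 1)")
\end{verbatim}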
1994-q6
Suppose that you play a game in which you win a dollar with probability \frac{1}{2}, lose a dollar with probability \frac{1}{4}, and neither win nor lose with probability \frac{1}{4}. (a) Let X_1, X_2, \cdots be the gains in independent plays of the game and S_n = X_1 + \cdots + X_n, V_n = 2^{-S_n} for n = 1, 2, \cdots. (i) For k > 0 find E(V_{n+k}|V_1, \cdots, V_n). (ii) What happens to V_n as n grows without bound? Why? (b) Suppose you play the game until you are either $10 ahead or $3 behind, whichever happens first. Infer from (ii) that the game terminates in finite time with probability 1. (c) Let T be the (random) time at which the game terminates. Prove that for all k > 0, E(T^k) < \infty. Find P(S_T = 10) and E(T).
We have $X_i$'s to be \textit{i.i.d.} with $\mathbb{P}(X_1=1)=1/2, \mathbb{P}(X_1=0)=1/4, \mathbb{P}(X_1=-1)=1/4.$ Thus $\mathbb{E}(X_1) = 1/4$ and $\mathbb{E}2^{-X_1} = 1.$ \begin{enumerate}[label=(\alph*)] \item Define $S_n= \sum_{k=1}^n X_k$ and $V_n=2^{-S_n}$. \begin{enumerate}[label=(\roman*)] \item For $k >0$, since $S_{n+k}-S_n$ is independent of $(X_1, \ldots, X_n)$ we have the following. \begin{align*} \mathbb{E}(V_{n+k} \mid V_1, \ldots, V_n) = \mathbb{E}(2^{-S_{n+k}} \mid X_1, \ldots, X_n) = 2^{-S_n} \mathbb{E}\left(2^{-(S_{n+k}-S_n)}\right) &= 2^{-S_n} \left( \mathbb{E}(2^{-X_1})\right)^k = V_n. \end{align*} \item Since $\left\{S_n : n \geq 0\right\}$ is the random walk on $\mathbb{R}$ with step-size mean being positive, we have $S_n \to \infty$ almost surely as $n \to \infty$. Therefore, $V_n$ converges to $0$ almost surely as $n \to \infty.$ \end{enumerate} \item Let $T := \inf \left\{ n \geq 0 : S_n \in \left\{-3,10\right\} \right\}$, where $S_0=0$. Since we have argued that $S_n \stackrel{a.s.}{\longrightarrow} \infty$ as $n \to \infty$, we clearly have $T< \infty$ almost surely, i.e. the game terminates with probability $1$. \item Note that for any $k \geq 1$, we have \begin{align*} \mathbb{P}(T > k+13 \mid T>k) &= \mathbb{P}(S_{k+1}, \ldots, S_{k+13} \notin \left\{-3,10\right\} \mid S_k) \mathbbm{1}(-2 \leq S_i \leq 9, \; \forall \; i=1,\ldots, k) \\ &\leq \mathbb{P}(S_{k+1}, \ldots, S_{k+13} \neq 10 \mid S_k) \mathbbm{1}(-2 \leq S_i \leq 9, \; \forall \; i=1,\ldots, k) \\ & \leq (1-2^{-(10-S_k)}) \mathbbm{1}(-2 \leq S_i \leq 9, \; \forall \; i=1,\ldots, k) \leq 1-2^{-12}. \end{align*} Letting $\delta=2^{-12}>0$, we get $$ \mathbb{P}(T>k) \leq \mathbb{P}(T>13\lfloor k/13 \rfloor) \leq (1-\delta)^{\lfloor k/13 \rfloor} \mathbb{P}(T>0) = (1-\delta)^{\lfloor k/13 \rfloor} , \; \forall \; k \geq 0.$$ Hence, for any $k>0$, we have $$ \mathbb{E}(T^k) = \sum_{y \geq 0} \left[(y+1)^k-y^k\right]\mathbb{P}(T>y) \leq \sum_{y \geq 0} (y+1)^{k}(1-\delta)^{\lfloor y/13 \rfloor} < \infty.$$ From part (a), we can see that $\left\{V_n, \mathcal{F}_n, n \geq 0\right\}$ is a non-negative MG, where $\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$. Hence $\left\{V_{T \wedge n}, \mathcal{F}_n, n \geq 0\right\}$ is a non-negative uniformly bounded MG such that $V_{T \wedge n} \to V_T$ as $n \to \infty$ almost surely. Using OST and DCT, we conclude that $\mathbb{E}V_{T}=\mathbb{E}V_0=1,$ hence $$ 1= \mathbb{E}V_T = \mathbb{E}2^{-S_T}= 2^{-10}\mathbb{P}(S_T=10)+2^3\mathbb{P}(S_T=-3) \Rightarrow \mathbb{P}(S_T=10) = \dfrac{2^3-1}{2^3-2^{-10}} = \dfrac{7}{8-2^{-10}}.$$ Since $\mathbb{E}X_1=1/4$, it is easy to see that $\left\{S_n-(n/4), \mathcal{F}_n, n \geq 0\right\}$ is a MG. Using OST we write, $\mathbb{E}(S_{T \wedge n}) = (1/4)\mathbb{E}(T \wedge n).$ Since $S_{T \wedge n}$ is uniformly bounded and converges to $S_T$ almost surely, we use DCT and MCT to conclude that \begin{align*} \mathbb{E}T = \lim_{n \to \infty} \mathbb{E}(T \wedge n) = 4 \lim_{n \to \infty} \mathbb{E}(S_{T \wedge n}) = 4 \mathbb{E}(S_T) = 4 \left[ 10 \mathbb{P}(S_T=10)-3\mathbb{P}(S_T=-3)\right]& = \dfrac{280 - 12(1-2^{-10})}{8-2^{-10}} \\ &= \dfrac{268 + 3 \times 2^{-8}}{8-2^{-10}}. \end{align*} \end{enumerate}
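For a numerical check (not part of the original solution), the sketch below plays the game by direct simulation and compares the empirical win probability and mean duration with the closed forms obtained above.
\begin{verbatim}
import random

def play(rng):
    s, t = 0, 0
    while -3 < s < 10:
        t += 1
        u = rng.random()
        s += 1 if u < 0.5 else (-1 if u < 0.75 else 0)   # +1 w.p. 1/2, -1 w.p. 1/4, 0 w.p. 1/4
    return s, t

rng = random.Random(0)
reps, wins, total = 200000, 0, 0
for _ in range(reps):
    s, t = play(rng)
    wins += (s == 10)
    total += t
print("P(S_T = 10): simulated", wins / reps, "  exact", 7 / (8 - 2 ** -10))
print("E(T):        simulated", total / reps, "  exact", (268 + 3 * 2 ** -8) / (8 - 2 ** -10))
\end{verbatim}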
1994-q7
Let X_1, X_2, \cdots, be independent and identically distributed on (\Omega, \mathcal{F}, P) such that E(X_i) = 0 and \text{Var}(X_i) = 1. (i) What is the limiting distribution of \frac{S_n}{\sqrt{n}} and why? (ii) Is there an \mathcal{F}-measurable random variable Z such that \frac{S_n}{\sqrt{n}} \to Z in probability? (iii) Prove that E(W \frac{S_n}{\sqrt{n}}) \to 0 for every W in L^2(\Omega, \mathcal{F}, P).
We have $X_i$'s to be \textit{i.i.d.} with mean $0$ and variance $1$, on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Define $S_n = \sum_{k=1}^n X_k$. \begin{enumerate}[label=(\roman*)] \item By CLT, we have $S_n/\sqrt{n}$ converges weakly to the standard Gaussian distribution. \item Suppose there exists $Z \in m\mathcal{F}$ such that $S_n/\sqrt{n}$ converges to $Z$ in probability. Then we have $S_{2n}/\sqrt{2n}$ converges to $Z$ in probability and hence $$ \dfrac{S_n}{\sqrt{n}} = \dfrac{\sum_{k=1}^n X_k}{\sqrt{n}} \stackrel{d}{=} \dfrac{\sum_{k=n+1}^{2n} X_k}{\sqrt{n}} = \dfrac{S_{2n}-S_n}{\sqrt{n}} \stackrel{p}{\longrightarrow} (\sqrt{2}-1)Z,$$ which gives a contradiction. \item Take $W \in L^2(\Omega, \mathcal{F}, \mathbb{P})$ and set $Y=\mathbb{E}(W \mid \mathcal{F}_{\infty})$, where $\mathcal{F}_n := \sigma(X_i : 1 \leq i \leq n)$ for all $1 \leq n \leq \infty$. Then $\mathbb{E}(Y^2) \leq \mathbb{E}(W^2)$ and hence $Y \in L^2(\Omega, \mathcal{F}_{\infty}, \mathbb{P}).$ Introduce the notation $Z_n=S_n/\sqrt{n}$. Note that, $$ \mathbb{E}(WZ_n) = \mathbb{E} \left( \mathbb{E}(WZ_n \mid \mathcal{F}_{\infty})\right) = \mathbb{E} \left( Z_n\mathbb{E}(W \mid \mathcal{F}_{\infty})\right) = \mathbb{E}(YZ_n),$$ since $Z_n \in m \mathcal{F}_n \subseteq m \mathcal{F}_{\infty}$. Thus it is enough to prove that $\mathbb{E}(YZ_n) \to 0.$ We have $Y \in L^2(\Omega, \mathcal{F}_{\infty}, \mathbb{P})$. Take $Y_m=\mathbb{E}(Y \mid \mathcal{F}_m)$, for $m \geq 1$. Clearly, $\left\{Y_m, \mathcal{F}_m, m \geq 1\right\}$ is a $L^2$ bounded MG since $\mathbb{E}(Y_m^2) \leq \mathbb{E}(Y^2)$. Hence, $Y_m \stackrel{L^2}{\longrightarrow} Y_{\infty}$ as $m \to \infty$, where $Y_{\infty} = \mathbb{E}(Y \mid \mathcal{F}_{\infty})=Y$ (see ~\cite[Corollary 5.3.13]{dembo}). Note that for $i > m$, $\mathbb{E}(X_i \mid \mathcal{F}_m) = \mathbb{E}(X_i)=0$, whereas for $1 \leq i \leq m$, $\mathbb{E}(X_i \mid \mathcal{F}_m) =X_i$. Therefore, for $n>m$, $$ \mathbb{E}(Y_mZ_n) = \dfrac{1}{\sqrt{n}}\sum_{i=1}^n \mathbb{E}(Y_m X_i) = \dfrac{1}{\sqrt{n}}\sum_{i=1}^n \mathbb{E} \left( \mathbb{E} \left[Y_m X_i \mid \mathcal{F}_m \right] \right)= \dfrac{1}{\sqrt{n}}\sum_{i=1}^n \mathbb{E} \left( Y_m\mathbb{E} \left[ X_i \mid \mathcal{F}_m \right] \right) = \dfrac{1}{\sqrt{n}}\sum_{i=1}^m \mathbb{E} \left( Y_mX_i \right) \to 0,$$ as $n \to \infty$. Fix $m \geq 1$. We have $$ \Big \lvert \mathbb{E}(Y_mZ_n) - \mathbb{E}(YZ_n)\Big \rvert \leq \mathbb{E}\left( |Z_n||Y-Y_m|\right) \leq \sqrt{\mathbb{E}(Z_n^2)\,\mathbb{E}(Y-Y_m)^2} =\sqrt{\mathbb{E}(Y-Y_m)^2}.$$ Since, $\mathbb{E}(Y_mZ_n)$ converges to $0$ as $n \to \infty$, the above inequality implies that $$ \limsup_{n \to \infty} \Big \lvert \mathbb{E}(YZ_n) \Big \rvert \leq \sqrt{\mathbb{E}(Y-Y_m)^2}, \; \forall \; m \geq 1.$$ We already know that the right hand side in the above inequality goes to $0$ as $m \to \infty$. Hence, $\mathbb{E}(YZ_n)=o(1)$. \end{enumerate}
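As a concrete illustration of (iii) (not part of the original solution), take $W=X_1 \in L^2$: by independence and $\mathbb{E}X_k=0$ for $k \neq 1$, $$ \mathbb{E}\left(X_1 \dfrac{S_n}{\sqrt{n}}\right) = \dfrac{1}{\sqrt{n}}\sum_{k=1}^n \mathbb{E}(X_1X_k) = \dfrac{\mathbb{E}(X_1^2)}{\sqrt{n}} = \dfrac{1}{\sqrt{n}} \longrightarrow 0,$$ in agreement with the general statement.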
1995-q1
Prove the following for any random variable $X$ and any constant $\alpha > 0$. (i) $\sum_{n=1}^{\infty} e^{\alpha n} E\{I(X \ge e^{\alpha n})/X\} < \infty$. (ii) $\sum_{n=1}^{\infty} n^{\alpha} E\{I(X \ge n^{\alpha})/X\} < \infty \iff E(X^+)^{1/2} < \infty$. (iii) $\sum_{n=2}^{\infty}(\log n)E\{I(\alpha X \ge \log n)/X\} < \infty \iff E\exp(\alpha X^+) < \infty$. (Hint: Partition $\{X \ge k\}$ suitably.)
\begin{enumerate}[label=(\roman*)] \item Since $\alpha >0$, \begin{align*} \sum_{n \geq 1} e^{\alpha n} \mathbb{E} X^{-1}\mathbbm{1}(X \geq e^{\alpha n}) &= \sum_{n \geq 1} \sum_{k \geq n} e^{\alpha n} \mathbb{E}X^{-1}\mathbbm{1}(e^{\alpha k} \leq X < e^{\alpha (k+1)}) \\ &= \sum_{k \geq 1} \sum_{n=1}^k e^{\alpha n} \mathbb{E}X^{-1}\mathbbm{1}(e^{\alpha k} \leq X < e^{\alpha (k+1)}) \\ &= \sum_{k \geq 1} \dfrac{e^{\alpha (k+1)}-e^{\alpha}}{e^{\alpha}-1} \mathbb{E}X^{-1}\mathbbm{1}(e^{\alpha k} \leq X < e^{\alpha (k+1)}) \\ & \leq \sum_{k \geq 1} \dfrac{e^{\alpha (k+1)}-e^{\alpha}}{e^{\alpha}-1} e^{-\alpha k}\mathbb{P}(e^{\alpha k} \leq X < e^{\alpha (k+1)}) \\ & \leq \dfrac{e^{\alpha}}{e^{\alpha}-1} \sum_{k \geq 1} \mathbb{P}(e^{\alpha k} \leq X < e^{\alpha (k+1)}) \leq \dfrac{e^{\alpha}}{e^{\alpha}-1} < \infty. \end{align*} \item Define $H(k,\alpha) := \sum_{j=1}^k j^{\alpha}$. Note that $$ k^{-\alpha-1}H(k,\alpha) = \dfrac{1}{k}\sum_{j=1}^k \left(\dfrac{j}{k} \right)^{\alpha} \stackrel{k \to \infty}{\longrightarrow} \int_{0}^1 x^{\alpha}\, dx = \dfrac{1}{\alpha+1},$$ and thus $C_1k^{\alpha +1} \leq H(k,\alpha) \leq C_2k^{\alpha+1}$, for all $k \geq 1$ and for some $0 < C_1 <C_2<\infty$. \begin{align*} \sum_{n \geq 1} n^{\alpha} \mathbb{E} X^{-1}\mathbbm{1}(X \geq n^{\alpha}) &= \sum_{n \geq 1} \sum_{k \geq n} n^{\alpha} \mathbb{E}X^{-1}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\ &= \sum_{k \geq 1} \sum_{n=1}^k n^{\alpha} \mathbb{E}X^{-1}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha})\\ &= \sum_{k \geq 1} H(k,\alpha) \mathbb{E}X^{-1}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha}). \end{align*} Thus \begin{align*} \sum_{k \geq 1} H(k,\alpha) \mathbb{E}X^{-1}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha}) & \leq \sum_{k \geq 1} H(k,\alpha)k^{-\alpha} \mathbb{P}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\ & \leq C_2 \sum_{k \geq 1} k^{\alpha +1}k^{-\alpha} \mathbb{P}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\ & = C_2 \sum_{k \geq 1} k \mathbb{P}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\ & \leq C_2 \sum_{k \geq 1} \mathbb{E}X^{1/\alpha}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha}) \leq C_2 \mathbb{E}(X_{+})^{1/\alpha}. \end{align*} On the other hand, \begin{align*} \sum_{k \geq 1} H(k,\alpha) \mathbb{E}X^{-1}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha}) & \geq \sum_{k \geq 1} H(k,\alpha)(k+1)^{-\alpha} \mathbb{P}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\ & \geq C_1 \sum_{k \geq 1} k^{\alpha +1}(k+1)^{-\alpha} \mathbb{P}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\ & \geq C_1 \sum_{k \geq 1} 2^{-\alpha-1}(k+1) \mathbb{P}(k^{\alpha} \leq X < (k+1)^{\alpha}) \\ & \geq 2^{-\alpha -1}C_1 \sum_{k \geq 1} \mathbb{E}X^{1/\alpha}\mathbbm{1}(k^{\alpha} \leq X < (k+1)^{\alpha})\\ &= 2^{-\alpha-1}C_1 \mathbb{E}\left[X^{1/\alpha}\mathbbm{1}(X \geq 1)\right] \geq 2^{-\alpha-1}C_1 \left(\mathbb{E}(X_{+})^{1/\alpha} -1 \right). \end{align*} Therefore, $$ \sum_{n \geq 1} n^{\alpha} \mathbb{E} X^{-1}\mathbbm{1}(X \geq n^{\alpha}) < \infty \iff \mathbb{E}(X_{+})^{1/\alpha} < \infty.$$ \item Define $G(k) := \sum_{j=1}^k \log j$. Note that $$ G(k) \leq \int_{1}^{k+1} \log x \, dx = (x\log x -x) \Bigg \rvert_{1}^{k+1} = (k+1)\log(k+1) - k \leq C_4 k \log k,$$ whereas $$ G(k) \geq \int_{0}^{k} \log x \, dx = (x\log x -x) \Bigg \rvert_{0}^{k} = k\log k - k \geq C_3(k+1)\log(k+1),$$ for all $k \geq 2$, for some $0 < C_3 <C_4<\infty$.
\begin{align*} \sum_{n \geq 2} (\log n) \mathbb{E} X^{-1}\mathbbm{1}(\alpha X \geq \log n) &= \sum_{n \geq 2} \sum_{k \geq n} (\log n) \mathbb{E}X^{-1}\mathbbm{1}( \log (k) \leq \alpha X < \log (k+1)) \\ &= \sum_{k \geq 2} \sum_{n=2}^k (\log n)\mathbb{E}X^{-1}\mathbbm{1}(\log k \leq \alpha X < \log (k+1))\ \\ &= \sum_{k \geq 2} G(k) \mathbb{E}X^{-1}\mathbbm{1}( \log k \leq \alpha X < \log (k+1)). \end{align*} Thus \begin{align*} \sum_{k \geq 2} G(k) \mathbb{E}X^{-1}\mathbbm{1}( \log k \leq\alpha X < \log (k+1)) & \leq \sum_{k \geq 2} G(k) (\log k)^{-1} \mathbb{P}( \log k \leq \alpha X < \log (k+1)) \\ & \leq C_4 \sum_{k \geq 2} k \mathbb{P}( \log k\leq \alpha X < \log (k+1)) \\ & \leq C_4 \sum_{k \geq 2} \mathbb{E} \exp(\alpha X)\mathbbm{1}(\log k \leq \alpha X < \log (k+1)) \\ &= C_4 \mathbb{E}\exp(\alpha X)\mathbbm{1}(\alpha X \geq \log 2) \leq C_4 \mathbb{E}\exp(\alpha X_{+}). \end{align*} On the otherhand, \begin{align*} \sum_{k \geq 2} G(k) \mathbb{E}X^{-1}\mathbbm{1}( \log k \leq\alpha X < \log (k+1)) & \geq \sum_{k \geq 2} G(k) (\log (k+1))^{-1} \mathbb{P}( \log k \leq \alpha X < \log (k+1)) \\ & \geq C_3 \sum_{k \geq 2} (k+1) \mathbb{P}( \log k\leq \alpha X < \log (k+1)) \\ & \geq C_3 \sum_{k \geq 2} \mathbb{E} \exp(\alpha X)\mathbbm{1}(\log k \leq \alpha X < \log (k+1)) \\ &= C_3 \mathbb{E}\exp(\alpha X)\mathbbm{1}(\alpha X \geq \log 2) \geq C_3 (\mathbb{E}\exp(\alpha X_{+}) -2). \end{align*} Therefore, $$ \sum_{n \geq 2} (\log n) \mathbb{E} X^{-1}\mathbbm{1}( \alpha X \geq \log n) < \infty \iff \mathbb{E}\exp( \alpha X_{+}) < \infty.$$\n\end{enumerate}
1995-q2
Let $X_1, X_2, \ldots$ be i.i.d. $k$-dimensional random vectors such that $EX_1 = \mu$ and $\text{Cov}(X_1) = V$. Let $S_n = X_1 + \cdots + X_n, \ \bar{X}_n = S_n/n$. Let $f : \mathbb{R}^k \to \mathbb{R}$ be continuously-differentiable in some neighborhood of $\mu$. (i) Show that with probability 1, the lim sup and lim inf of $\sqrt{n/\log\log n}\,\{f(\bar{X}_n) - f(\mu)\}$ are nonrandom constants. Give formulas for these constants. (ii) Show that $\sqrt{n/\log\log n}\,\{f(\bar{X}_n) - f(\mu)\}$ converges in probability to a nonrandom constant and identify the constant.
Let $U$ be an open ball around $\mu$ of radius $r$ such that $f$ is continuously differentiable in $U$. Hence, for any $x \in U$, we have by \textit{Mean Value Theorem}, $$ \big \rvert f(x)-f(\mu)-(\nabla f (\mu))^{\prime}(x-\mu)\big \rvert \leq ||x-\mu||_2 \left(\sup_{y \in L_x} || \nabla f(y) - \nabla f(\mu)||_2 \right) \leq ||x-\mu||_2 \left(\sup_{y \in B(\mu,r(x))} || \nabla f(y) - \nabla f(\mu)||_2 \right),$$ where $L_x$ is the line segment connecting $x$ to $\mu$ and $r(x)=||x-\mu||_2.$ $B(y,s)$ is the open ball of radius $s$ around $y$. Introduce the notation $$ \Delta(s) := \sup_{y \in B(\mu,s)} || \nabla f(y) - \nabla f(\mu)||_2 , \; \forall \, 0 \leq s <r.$$\nClearly, $s \mapsto \Delta(s)$ is non-decreasing and by continuity of $\nabla f$ on $U$, we have $\Delta(s)\downarrow 0$ as $s \downarrow 0$. For notational convenience, introduce the notations $X_j := (X_{j,1}, \ldots, X_{j,k})$ for all $j \geq 1$, $\bar{X}_{n,i} = n^{-1}\sum_{j=1}^n X_{j,i}$, for all $n \geq 1$ and $1 \leq i \leq k$; $\mu=(\mu_1, \ldots, \mu_k)$ and finally $V=(V_{ii^{\prime}}).$ \begin{enumerate}[label=(\roman*)] \item By SLLN, we have $\bar{X}_n \stackrel{a.s.}{\longrightarrow} \mu.$ Thus for a.e.[$\omega$], we can get $n_0(\omega)$ such that $\bar{X}_n(\omega)\in U$ for all $n \geq n_0(\omega)$. Then for all $n \geq n_0(\omega)$, \begin{equation}{\label{original}} \dfrac{\sqrt{n}}{\sqrt{\log \log n}} \big \rvert f(\bar{X}_n(\omega))-f(\mu)-(\nabla f (\mu))^{\prime}(\bar{X}_n(\omega)-\mu)\big \rvert \leq \dfrac{\sqrt{n}}{\sqrt{\log \log n}}||\bar{X}_n(\omega)-\mu||_2 \Delta(||\bar{X}_n(\omega)-\mu||_2). \end{equation} Note that for all $1 \leq i \leq k$, $\left\{ \sum_{j=1}^n (X_{j,i} - \mu_i), n \geq 0 \right\}$ is a mean-zero RW with step variances being $V_{ii}$. So by \textit{Hartman-Winter LIL}, $$ \liminf_{n \to \infty} \dfrac{n(\bar{X}_{n,i} - \mu_i)}{\sqrt{2 n \log\log n}} = -\sqrt{V_{ii}}, \; \limsup_{n \to \infty} \dfrac{n(\bar{X}_{n,i} - \mu_i)}{\sqrt{2 n \log\log n}} = \sqrt{V_{ii}}, \; \text{almost surely}.$$\nUsing this we can conclude that \begin{align*} \limsup_{n \to \infty} \dfrac{\sqrt{n}}{\sqrt{\log \log n}}||\bar{X}_n-\mu||_2 \leq \limsup_{n \to \infty} \sum_{i=1}^k \dfrac{\sqrt{n}}{\sqrt{\log \log n}}|\bar{X}_{n,i}-\mu_i| & \leq \sum_{i=1}^k \limsup_{n \to \infty} \dfrac{\sqrt{n}}{\sqrt{\log \log n}}|\bar{X}_{n,i}-\mu_i| = \sum_{i=1}^k \sqrt{2V_{ii}}. \end{align*} On the otherhand, using the SLLN, we get $\Delta(||\bar{X}_n - \mu||_2)$ converges to $0$ almost surely. Combining these two observations with ( ef{original}), we conclude that \begin{equation}{\label{original2}} \dfrac{\sqrt{n}}{\sqrt{\log \log n}} \big \rvert f(\bar{X}_n)-f(\mu)-(\nabla f (\mu))^{\prime}(\bar{X}_n-\mu)\big \rvert \stackrel{a.s.}{\longrightarrow} 0. \end{equation} Similarly, $\left\{\sum_{j=1}^n(\nabla f(\mu))^{\prime} (X_{j} - \mu), n \geq 0 \right\}$ is a mean-zero RW with step variances being $\sigma^2 = (\nabla f(\mu))^{\prime}V(\nabla f(\mu))$. So by \textit{Hartman-Winter LIL}, \begin{equation}{\label{lil2}} \liminf_{n \to \infty} \dfrac{n (\nabla f(\mu))^{\prime}(\bar{X}_n - \mu)}{\sqrt{2 n \log\log n}} = - \sigma , \; \; \limsup_{n \to \infty} \dfrac{n (\nabla f(\mu))^{\prime}(\bar{X}_n - \mu)}{\sqrt{2 n \log\log n}} = \sigma, \;\, \text{almost surely}. 
\end{equation} Combining (\ref{original2}) and (\ref{lil2}), we can conclude that $$\liminf_{n \to \infty} \dfrac{\sqrt{n}(f(\bar{X}_n)-f(\mu))}{\sqrt{ \log\log n}} = -\sqrt{2(\nabla f(\mu))^{\prime}V(\nabla f(\mu))} , \; \; \limsup_{n \to \infty} \dfrac{\sqrt{n}(f(\bar{X}_n)-f(\mu))}{\sqrt{ \log\log n}} = \sqrt{2(\nabla f(\mu))^{\prime}V(\nabla f(\mu))}, \;\, \text{a.s.}$$ \item By CLT, we have $$ \sqrt{n}(\nabla f (\mu))^{\prime}(\bar{X}_n-\mu) \stackrel{d}{\longrightarrow} N(0,\sigma^2),$$ and hence $\sqrt{n}(\nabla f (\mu))^{\prime}(\bar{X}_n-\mu)/\sqrt{\log \log n} \stackrel{p}{\longrightarrow} 0$. This along with (\ref{original2}) gives $$ \dfrac{\sqrt{n}(f(\bar{X}_n)-f(\mu))}{\sqrt{ \log\log n}} \stackrel{p}{\longrightarrow} 0.$$ \end{enumerate}
1995-q3
Let $X_1, X_2, \ldots$ be i.i.d. random variables with a common distribution function $F$. Let $g : \mathbb{R}^2 \to \mathbb{R}$ be a Borel measurable function such that $\int \int |g(x, y)|dF(x)dF(y) < \infty$ and $\int g(x, y)dF(y) = 0$ for a.e. $x$, $\int g(x, y)dF(x) = 0$ for a.e. $y$, where "a.e." means "almost every" (with respect to $F$). Let $S_n = \sum_{1 \le i < j \le n} g(X_i, X_j)$, and let $\mathcal{F}_n$ be the $\sigma$-algebra generated by $X_1, \ldots, X_n$. (i) Show that $\{S_n, n \ge 2\}$ is a martingale with respect to $\{\mathcal{F}_n\}$. (ii) Suppose that $\int \int g^2(x, y)dF(x)dF(y) = A < \infty$. Evaluate $ES_n^2$ in terms of $A$ and show that $E(\max_{j \le n} S_j^2) \le 2n(n-1)A$. (iii) For $n \ge 2$, let $\mathcal{G}_{-n}$ be the $\sigma$-algebra generated by $S_n, S_{n+1}, \ldots$ Show that $E(g(X_1, X_2)|\mathcal{G}_{-n}) = S_n/\binom{n}{2}$ a.s. Hence show that $S_n/n^2$ converges to 0 a.s. and in $L_1$.
The conditions given on $g$ can be re-written as follows. For all $i \neq j$, $$ \mathbb{E}|g(X_i,X_j)| < \infty, \; \; \mathbb{E}\left[g(X_i,X_j)\mid X_i\right] = 0 = \mathbb{E}\left[ g(X_i,X_j) \mid X_j\right], \; \text{almost surely}.$$\nThus also implies that $\mathbb{E}g(X_i,X_j)=0.$\nWe have $X_i$'s to be \textit{i.i.d.} and $S_n = \sum_{1 \leq i < j \leq n} g(X_i,X_j)$. Also $\mathcal{F}_n := \sigma(X_i : i \leq n)$.\n\begin{enumerate}[label=(\roman*)] \item Clearly, $S_n \in m\mathcal{F}_n$ and $\mathbb{E}|S_n| < \infty$ since $\mathbb{E}|g(X_i,X_j)| < \infty$ for all $i \neq j$. Note that for $n \geq 2$, \begin{align*} \mathbb{E}(S_n \mid \mathcal{F}_{n-1}) = S_{n-1} + \sum_{j=1}^{n-1} \mathbb{E}(g(X_j,X_n) \mid \mathcal{F}_{n-1}) \stackrel{(i)}{=} S_{n-1} + \sum_{j=1}^{n-1} \mathbb{E}(g(X_j,X_n) \mid X_j) = S_{n-1}, \; \text{almost surely}, \end{align*} where $(i)$ follows from the fact that $(X_j,X_n) \perp \! \!\\perp \left\{X_i : i \neq j,n\right\}$. Therefore, $\left\{S_n, \mathcal{F}_n, n \geq 2\right\}$ is a MG. \item The condition given can be re-written as $\mathbb{E}g(X_i,X_j)^2 =: A< \infty$ for all $i \neq j$. Under this condition $S_n$ is clearly square-integrable. Take $1 \leq i_1<j_1 \leq n$ and $1 \leq i_2<j_2 \leq n$. If the sets $\left\{i_1,j_1\right\}$ and $\left\{i_2,j_2\right\}$ are disjoint then $g(X_{i_1}, X_{j_1})$ and $g(X_{i_2},X_{j_2})$ are independent with mean $0$ and therefore $\mathbb{E} g(X_{i_1}, X_{j_1})g(X_{i_2},X_{j_2})=0$. On the otherhand, if $i_1=i_2$ but $j_1 \neq j_2$, then $$ \mathbb{E}[g(X_{i_1},X_{j_1})g(X_{i_1},X_{j_2})] = \mathbb{E} \left(\mathbb{E}[g(X_{i_1},X_{j_1})g(X_{i_1},X_{j_2}) \mid X_{i_1}] ight) = \mathbb{E} \left(\mathbb{E}[g(X_{i_1},X_{j_1}) \mid X_{i_1}]\mathbb{E}[g(X_{i_1},X_{j_2}) \mid X_{i_1}] ight)=0,$$ where the above holds since conditioned on $X_{i_1}$, we have $g(X_{i_1},X_{j_1})$ and $g(X_{i_1},X_{j_2})$ are independent. Similar calculations hold true for other situations satisfying $|\left\{i_1,j_1\right\} \cap \left\{i_2,j_2\right\}|=1$. Therefore, $$ \mathbb{E}S_n^2 = \sum_{1 \leq i < j \leq n} \mathbb{E}g(X_i,X_j)^2 = {n \choose 2} A.$$ Since, $\left\{S_n, \mathcal{F}_n, n \geq 2\right\}$ is a $L^2$-MG, using \textit{Doob's Maximal Inequality}, we get $$ \mathbb{E}\left( \max_{j \leq n} S_j^2 \right) \leq 4\mathbb{E}S_n^2 = 4{n \choose 2} A = 2n(n-1)A.\n$$ \item Let $\mathcal{G}_{-n}=\sigma(S_m : m \geq n)$. Since, $X_i$'s are i.i.d., this symmetry implies that $$(X_1,X_2, \ldots,) \stackrel{d}{=}(X_{\pi(1)},X_{\pi(2)},\ldots, X_{\pi(n)}, X_{n+1}, \ldots),$$ for all $\pi$, permutation on $\left\{1, \ldots,n\right\}$. Taking $1 \leq i <j \leq n$ and a permutation $\pi$ such that $\pi(1)=i, \pi(2)=j$, we can conclude that $\mathbb{E}(g(X_i,X_j)\mid \mathcal{G}_{-n}) = \mathbb{E}(g(X_1,X_2)\mid \mathcal{G}_{-n})$ for all $1 \leq i \neq j \leq n.$ Summing over all such pairs, we get $$ S_n = \mathbb{E}\left[ S_n \mid \mathcal{G}_{-n}\right] = \sum_{1 \leq i < j \leq n} \mathbb{E}(g(X_i,X_j) \mid \mathcal{G}_{-n}) = {n \choose 2} \mathbb{E}(g(X_1,X_2)\mid \mathcal{G}_{-n}),$$ implying that $$ \mathbb{E}(g(X_1,X_2)\mid \mathcal{G}_{-n}) = \dfrac{S_n}{{n \choose 2}}, \; \text{almost surely}.$$\nNote that $\mathcal{G}_{-n} \subseteq \mathcal{E}_n$ where $\mathcal{E}_n$ is the $\sigma$-algebra, contained in $\mathcal{F}_{\infty}$, containing all the events which are invariant under permutation of first $n$ co-ordinates of $(X_1, X_2, \ldots)$. 
We know that $\mathcal{E}_n \downarrow \mathcal{E}$, which is the exchangeable $\sigma$-algebra. By \textit{Hewitt-Savage 0-1 law}, $\mathcal{E}$ is trivial in this case and hence so is $\mathcal{G}_{-\infty}$ where $\mathcal{G}_{-n} \downarrow \mathcal{G}_{-\infty}$. By \textit{Levy's downward theorem}, we conclude that $$ \dfrac{S_n}{{n \choose 2}} = \mathbb{E}(g(X_1,X_2)\mid \mathcal{G}_{-n}) \stackrel{a.s., \; L^1}{\longrightarrow}\mathbb{E}(g(X_1,X_2)\mid \mathcal{G}_{-\infty}) = \mathbb{E}g(X_1,X_2)=0.$$ Hence, $n^{-2}S_n$ converges to $0$ almost surely and in $L^1$. \end{enumerate}
1995-q4
Let $Z_1, Z_2, \ldots$ be independent standard normal random variables and let $\mathcal{F}_n$ be the $\sigma$-algebra generated by $Z_1, Z_2, \cdots, Z_n$. Let $U_n$ be $\mathcal{F}_{n-1}$-measurable random variables such that $$n^{-1} \sum_{j=1}^{n} U_j^2 \to 1 \text{ a.s.}$$ Let $S_n = \sum_{j=1}^{n} U_j Z_j$. (i) Show that $S_n/n \to 0 \text{ a.s.}$ (ii) Let $i = \sqrt{-1}$. Show that $$E\{\exp(i\theta S_n + \frac{1}{2} \theta^2 \sum_{j=1}^{n} U_j^2)\} = 1 \text{ for all } \theta \in \mathbb{R}.$$ (iii) Show that $S_n/\sqrt{n}$ converges in distribution to a standard normal random variable. (Hint: Use (ii).)
Suppose $x_n$ be a sequence of non-negative real numbers such that $n^{-1}\sum_{k =1}^n x_k \to 1$ as $n \to \infty$. Then $$ \dfrac{x_n}{n} = \dfrac{1}{n}\sum_{j=1}^n x_j - \dfrac{n-1}{n}\dfrac{1}{n-1}\sum_{j=1}^{n-1} x_j \longrightarrow 1-1 =0.$$\nLet $b_n$ be a non-decreasing sequence of natural numbers such that $1 \leq b_n \leq n$ and $x_{b_n} = \max_{1 \leq k \leq n} x_k$. If $b_n$ does not go to \infty, then it implies that the sequence $x_k$ is uniformly bounded and hence $\max_{1 \leq k \leq n} x_k =O(1)=o(n)$. Otherwise $b_n \uparrow \infty$ and hence $ x_{b_n}/b_n \to 0$ as $n \to \infty$. Note that $$ \dfrac{\max_{1 \leq k \leq n} x_k}{n} = \dfrac{x_{b_n}}{n} \leq \dfrac{x_{b_n}}{b_n} \longrightarrow 0.$$\nThus in both cases $\max_{1 \leq k \leq n} x_k =o(n).$\n\n\nWe have \textit{i.i.d.} standard normal variables $Z_1, Z_2, \ldots$ and $\mathcal{F}_n = \sigma(Z_i : i \leq n)$. We have $U_i \in m \mathcal{F}_{i-1}$ for all $i \geq 1$ and $$ \dfrac{1}{n}\sum_{j=1}^n U_j^2 \stackrel{a.s.}{\longrightarrow} 1.$$\nFrom our previous discussion this implies that $n^{-1}U_n^2, n^{-1}\max_{1 \leq k \leq n} U_k^2 \stackrel{a.s.}{\longrightarrow}0.$\n\begin{enumerate}[label=(\roman*)] \item Set $V_j = U_j \mathbbm{1}(|U_j| \leq \sqrt{j})$ for all $j \geq 1$. Since $U_n^2/n$ converges to $0$ almost surely, we can say that with probability $1$, $U_j=V_j$ for all large enough $n$. More formally, if we define $T := \sup \left\{j \geq 1 : U_j \neq V_j\right\}$ (with the convention that the supremum of an empty set is $0$), then $T< \infty$ with probability $1$. \n Since $V_j \in m\mathcal{F}_{j-1}$ and is square-integrable, we can conclude that $\left\{S_n^{\prime}:= \sum_{j=1}^n V_jZ_j, \mathcal{F}_n, n \geq 0\right\}$ is a $L^2$-MG, with $S^{\prime}_0=0$ and predictable compensator process $$ \langle S^{\prime} \rangle _n = \sum_{j=1}^n \mathbb{E}(V_j^2Z_j^2 \mid \mathcal{F}_{j-1}) = \sum_{j=1}^n V_j^2= \sum_{j \leq T \wedge n} (V_j^2-U_j^2) + \sum_{j=1}^n U_j^2. $$ Since the first term in the last expression is finite almost surely, we have $n^{-1}\langle S^{\prime} \rangle _n $ converging to $1$ almost surely. This shows that $\langle S^{\prime} \rangle _{\infty}=\infty$ almost surely and hence $$ \dfrac{S_n^{\prime}}{\langle S^{\prime} \rangle _n } \stackrel{a.s.}{\longrightarrow} 0 \Rightarrow \dfrac{S_n^{\prime}}{ n} \stackrel{a.s.}{\longrightarrow} 0 \Rightarrow \dfrac{\sum_{j \leq T \wedge n} (V_jZ_j-U_jZ_j)}{n} + \dfrac{\sum_{j=1}^n U_jZ_j}{n} \stackrel{a.s.}{\longrightarrow} 0 \Rightarrow \dfrac{S_n}{n} =\dfrac{\sum_{j=1}^n U_jZ_j}{n} \stackrel{a.s.}{\longrightarrow} 0. 
$$ \item To make sense of this problem, we need to assume that $$ \mathbb{E} \exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right) < \infty,\; \; \forall \; \theta \in \mathbb{R}, \; \forall \, n \geq 1.$$\nIn that case we clearly have $$ \Bigg \rvert \exp(i \theta S_n)\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right)\Bigg \rvert \leq \exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right) \in L^1,$$ and hence observing that $U_nZ_n \mid \mathcal{F}_{n-1} \sim N(0, U_n^2)$, we can write \begin{align*} \mathbb{E} \left[ \exp(i \theta S_n)\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right) \Bigg \rvert \mathcal{F}_{n-1}\right] &= \exp(i \theta S_{n-1})\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right) \mathbb{E} \exp(i\theta U_n Z_n \mid \mathcal{F}_{n-1}) \\ &= \exp(i \theta S_{n-1})\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right) \exp \left( -\dfrac{1}{2} \theta^2 U_n^2\right) \\ &= \exp(i \theta S_{n-1})\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^{n-1} U_j^2 \right). \end{align*} Therefore, taking expectations on both sides we get $$ \mathbb{E} \left[ \exp(i \theta S_n)\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right)\right] = \mathbb{E} \left[ \exp(i\theta S_{n-1})\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^{n-1} U_j^2 \right) \right].$$ Applying the recursion $n$ times we get $$ \mathbb{E} \left[ \exp(i \theta S_n)\exp \left(\dfrac{1}{2}\theta^2 \sum_{j=1}^n U_j^2 \right)\right] = \mathbb{E}\exp(i\theta S_0)=1.$$\n \item We shall prove that $S_n^{\prime}/\sqrt{n}$ converges weakly to standard Gaussian variable. Since $|S_n-S_{n}^{\prime}| \leq \sum_{k=1}^T |V_kZ_k|$ for all $n \geq 1$, and $T$ is almost surely finite, we have $(S_n-S_n^{\prime})/\sqrt{n}$ converging to $0$ almost surely. Using \textit{Slutsky's Theorem}, we can conclude that $S_n/\sqrt{n}$ converges weakly to standard Gaussian variable. We have already observed that $\left\{S_n^{\prime}, \mathcal{F}_n, n \geq 0 \right\}$ is a $L^2$-Martingale with $S_0^{\prime} :=0$. Its predictable compensator process satisfies for all $0 \leq t \leq 1$, $$ n^{-1}\langle S^{\prime} \rangle_{\lfloor nt \rfloor} = \dfrac{1}{n} \sum_{k=1}^{\lfloor nt \rfloor} V_k^2 = \dfrac{\lfloor nt \rfloor}{n} \dfrac{1}{\lfloor nt \rfloor}\sum_{k=1}^{\lfloor nt \rfloor} V_k^2 \stackrel{a.s.}{\longrightarrow} t.$$\nThis comes from the fact that $n^{-1}\langle S^{\prime} \rangle_n = n^{-1}\sum_{k=1}^n V_k^2$ converges to $1$ almost surely, which we proved in part (a). This also implies ( from our discussion at the very start of the solution) that $n^{-1}\max_{1 \leq k \leq n} V_k^2$ converges almost surely to $0$. Therefore, for any $\varepsilon >0$, \begin{align*} \dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[(S^{\prime}_k-S^{\prime}_{k-1})^2; |S^{\prime}_k-S^{\prime}_{k-1}| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] &= \dfrac{1}{n} \sum_{k=1}^n V_k^2\mathbb{E} \left[Z_k^2; |V_kZ_k| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] \\ & \leq \dfrac{1}{n} \sum_{k=1}^n V_k^2\mathbb{E} \left[|Z_k|^2 |V_kZ_k|^2\varepsilon^{-2}n^{-1} \mid \mathcal{F}_{k-1} \right] \\ & \leq \varepsilon^{-2} (\mathbb{E}Z^4) \left( \dfrac{1}{n^{2}}\sum_{k=1}^n V_k^4 \right) \\ & \leq \varepsilon^{-2} (\mathbb{E}Z^4) \left( \dfrac{1}{n}\max_{k=1}^n V_k^2 \right)\left( \dfrac{1}{n}\sum_{k=1}^n V_k^2 \right), \end{align*} where $Z$ is a standard Gaussian variable. Clearly the last expression goes to $0$ almost surely. Then we have basically proved the required conditions needed to apply \textit{Martingale CLT}. 
Define, $$ \widehat{S}_n^{\prime}(t) := \dfrac{1}{\sqrt{n}} \left[S^{\prime}_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(S^{\prime}_{\lfloor nt \rfloor +1}-S^{\prime}_{\lfloor nt \rfloor}) \right]=\dfrac{1}{\sqrt{n}} \left[S^{\prime}_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)V_{\lfloor nt \rfloor +1}Z_{\lfloor nt \rfloor +1} \right], $$ for all $0 \leq t \leq 1$. We then have $\widehat{S}^{\prime}_n \stackrel{d}{\longrightarrow} W$ as $n \to \infty$ on the space $C([0,1])$, where $W$ is the standard Brownian Motion on $[0,1]$. Since, $\mathbf{x} \mapsto x(1)$ is a continuous function on $C([0,1])$, we have $$ n^{-1/2}S^{\prime}_n = \widehat{S}^{\prime}_n(1) \stackrel{d}{\longrightarrow} W(1) \sim N(0,1),$$ Which completes our proof. \end{enumerate}
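As a numerical sanity check of the conclusion of part (iii), $S_n/\sqrt{n} \stackrel{d}{\longrightarrow} N(0,1)$, the following Python sketch simulates the scheme with one particular predictable choice of $U_j$ (here $U_j=\sqrt{1+0.9\,\operatorname{sign}(Z_{j-1})}$, which is $\mathcal{F}_{j-1}$-measurable and satisfies $n^{-1}\sum_{j\le n} U_j^2 \to 1$ a.s. by the SLLN). This specific $U_j$ is only an illustrative assumption, not part of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 2_000, 5_000
stats = np.empty(reps)
for r in range(reps):
    Z = rng.standard_normal(n)
    U = np.ones(n)
    # predictable coefficients: U_j depends only on Z_{j-1}; average of U_j^2 tends to 1 a.s.
    U[1:] = np.sqrt(1.0 + 0.9 * np.sign(Z[:-1]))
    stats[r] = (U * Z).sum() / np.sqrt(n)          # S_n / sqrt(n)

print("mean:", stats.mean(), " var:", stats.var())                   # ~0 and ~1
print("P(|S_n/sqrt(n)| <= 1.96):", np.mean(np.abs(stats) <= 1.96))   # ~0.95
```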
1995-q5
Let $X_1, X_2, \ldots$ be independent random variables such that $EX_n = \mu > 0$ for all $n$ and $\sup_n E|X_n|^p < \infty$ for some $p > 2$. Let $S_n = X_1 + \cdots + X_n$ and $$N_a = \inf \{n : S_n \ge a\}.$$ (i) Show that $\lim_{a\to \infty} a^{-1} N_a = 1/\mu \text{ a.s.}$ (ii) Show that $\{(S_n - n\mu)/\sqrt{n}^p, n \ge 1\}$ is uniformly integrable and hence conclude that $E|S_n - n\mu|^p = O(n^{p/2})$. (iii) Show that $\{a^{-1}N_a, a \ge 1\}$ is uniformly integrable and hence evaluate $EN_a$ asymptotically as $a \to \infty$. (iv) Show that for every $\epsilon > 0$, there exists $\delta > 0$ such that for all large $a$, $$P\{ \max_{n| n-a/\mu | \le \delta a} |(S_n - n\mu) - (S_{[a/\mu]} - [a/\mu]\mu)| \ge \epsilon\sqrt{a} \} \le \epsilon.$$ (v) Suppose that $n^{-1} \sum^{n} \text{Var}(X_i) \to \sigma^2 > 0$ as $n \to \infty$. Show that as $a \to \infty$, $(S_{N_a} - \mu N_a)/(\sigma\sqrt{N_a})$ has a limiting standard normal distribution. (Hint: Make use of (iv).)
We have $X_i$'s to be independent random variables with common mean $\mu >0$ and $\sup_{n} \mathbb{E}|X_n|^p < \infty$ for some $p>2$. Let $S_n = \sum_{k =1}^n X_k$ and $N_a = \inf \left\{n : S_n \geq a\right\}$. $S_0:=0$\n\begin{enumerate}[label=(\roman*)] \item The condition $\sup_{n} \mathbb{E}|X_n|^p < \infty$ for some $p>2$ guarantees that $\sigma^2 = \sup_{n} \mathbb{E}X_n^2 < \infty$. Hence we apply \textit{Kolmogorov SLLN} to conclude that $(S_n-\mathbb{E}(S_n))/n$ converges almost surely to $0$, i.e. $S_n/n$ converges almost surely to $\mu$. This implies that $S_n$ converges to $\infty$ almost surely and hence $N_a < \infty$ almost surely for all $a>0$. Note that $N_a$ is non-decreasing in $a$ and $N_a \uparrow \infty$, almost surely if $a \uparrow \infty$. This is true since otherwise there exists $n \in \mathbb{N}$ such that $S_n =\infty$ and we know $\mathbb{P}(S_n = \infty, \; \text{for some } n) = \sum_{n \geq 1} \mathbb{P}(S_n=\infty)=0$. Now observe that we have $S_{N_a-1} \leq a \leq S_{N_a}$ for all $a >0$. Since, $N_a \uparrow \infty$ almost surely and $S_n/n \to \mu$ almost surely, we can conclude that $S_{N_a}/(N_a), S_{N_a-1}/N_a \longrightarrow \mu$, almost surely. Therefore, $a/N_a$ converges to $\mu >0$, almost surely and hence $a^{-1}N_a \stackrel{a.s.}{\longrightarrow} \mu^{-1}.$ \item Let $\mathcal{F}_n := \sigma(X_k : k \leq n)$ for all $n \geq 0$. It is easy to see that $\left\{M_n :=S_n-n\mu, \mathcal{F}_n, n \geq 0\right\}$ is a MG with quadratic variation being $$ [M]_n = \sum_{k =1}^n (M_k-M_{k-1})^2 = \sum_{k =1}^n (X_k-\mu)^2.$$ Since, $p >2$, Using \textit{Burkholder's Inequality}, we conclude that \begin{align*} \mathbb{E} \left[\sup_{1 \leq k \leq n} |M_k|^p \right] \leq \mathbb{E}[M]_n^{p/2} = \mathbb{E} \left[ \sum_{k =1}^n (X_k-\mu)^2\right]^{p/2} \leq n^{p/2-1} \sum_{k =1}^n \mathbb{E}|X_k-\mu|^p &\leq n^{p/2} \sup_{k \geq 1} \mathbb{E}|X_k-\mu|^p \\ &\leq n^{p/2} \left[ 2^{p-1}\sup_{k \geq 1} \mathbb{E}|X_k|^p + 2^{p-1}\mu^p\right]. \end{align*} Here we have used the inequality that $(\sum_{k =1}^n y_k/n)^q \leq \sum_{k =1}^n y_k^q/n$ for all $y_k$'s non-negative and $q>1$. Therefore, $\mathbb{E}\left( \max_{k=1}^n |M_k|^p\right) \leq C n^{p/2}$, for some $C < \infty$. In particular $\mathbb{E}|M_n|^p \leq Cn^{p/2}$. Since, $(X_i-\mu)$'s are independent mean zero variables, we apply Corollary~\ref{cor} and get the following upper bound. For any $t >0$, \begin{align*} \mathbb{P} \left( \max_{k=1}^n |M_k|^p \geq n^{p/2}t \right) = \mathbb{P} \left( \max_{k=1}^n |M_k| \geq \sqrt{n}t^{1/p} \right) &\leq \mathbb{P} \left( \max_{k=1}^n |X_k-\mu| \geq \dfrac{\sqrt{n}t^{1/p}}{4} \right) + 4^{2p}(\sqrt{n}t^{1/p})^{-2p}(\mathbb{E}|M_n|^p)^2 \\ &\leq \mathbb{P} \left( \max_{k=1}^n |X_k-\mu|^p \geq \dfrac{n^{p/2}t}{4^p} \right) + C^24^{2p}t^{-2}. \end{align*} Therefore, for any $y \in (0,\infty)$, we have \begin{align*} \mathbb{E}\left( n^{-p/2} \max_{k=1}^n |M_k|^p; n^{-p/2} \max_{k=1}^n |M_k|^p \geq y \right) &= \int_{0}^{\infty} \mathbb{P} \left( n^{-p/2}\max_{k=1}^n |M_k|^p \geq \max(t,y) \right) \, dt \\ & \leq \int_{0}^{\infty} \mathbb{P} \left( \dfrac{4^p \max_{k=1}^n |X_k-\mu|^p}{n^{p/2}} \geq t \vee y \right) \, dt + C^24^{2p} \int_{0}^{\infty} (t \vee y)^{-2}\, dt \\ & = \mathbb{E}\left( 4^pn^{-p/2} \max_{k=1}^n |X_k-\mu|^p; 4^pn^{-p/2} \max_{k=1}^n |X_k-\mu|^p \geq y \right) + \dfrac{2C^24^{2p}}{y}. 
\end{align*} Note that, $$ \mathbb{E}\left( 4^pn^{-p/2} \max_{k=1}^n |X_k-\mu|^p; 4^pn^{-p/2} \max_{k=1}^n |X_k-\mu|^p \geq y \right) \leq 4^pn^{-p/2}\sum_{k=1}^n \mathbb{E}|X_k-\mu|^p \leq C_1n^{1-p/2},$$ for some $C_1 < \infty$. Since $p >2$ and for any $N \in \mathbb{N}$, the finite collection $\left\{4^pn^{-p/2} \max_{k=1}^n |X_k-\mu|^p : 1 \leq n \leq N\right\}$ is uniformly integrable (since each of them is integrable), we get the following. \begin{align*} \sup_{n \geq 1} \mathbb{E}\left( \dfrac{ \max_{k=1}^n |M_k|^p}{n^{p/2}}; \dfrac{ \max_{k=1}^n |M_k|^p}{n^{p/2}} \geq y \right) \leq \sup_{n=1}^N \mathbb{E}\left( \dfrac{4^p \max_{k=1}^n |X_k-\mu|^p}{n^{p/2}}; \dfrac{4^p \max_{k=1}^n |X_k-\mu|^p}{n^{p/2}} \geq y \right) &+ C_1N^{1-p/2} \\&+\dfrac{2C^24^{2p}}{y}. \end{align*} Take $y \to \infty$ to get $$ \lim_{y \to \infty} \sup_{n \geq 1} \mathbb{E}\left( \dfrac{ \max_{k=1}^n |M_k|^p}{n^{p/2}}; \dfrac{ \max_{k=1}^n |M_k|^p}{n^{p/2}} \geq y \right) \leq C_1N^{1-p/2},$$ for all $N \in \mathbb{N}.$ Taking $N \to \infty$, we conclude that the collection $\left\{n^{-p/2}\max_{k=1}^n |M_k|^p : n \geq 1\right\}$ is uniformly integrable, and hence so is $\left\{n^{-p/2}|M_n|^p=n^{-p/2}|S_n-n\mu|^p : n \geq 1\right\}$. We have already established that $\mathbb{E}|S_n-n\mu|^p = \mathbb{E}|M_n|^p = O(n^{p/2})$. \item Fix any $a >0$. Then for any $y >\mu^{-1}+a^{-1}$, we have $\mu\lfloor ay \rfloor \geq \mu(ay-1) \geq a$ and hence \begin{align*} \mathbb{P}(a^{-1}N_a > y) \leq \mathbb{P}(S_{\lfloor ay \rfloor} < a) \leq \mathbb{P}(|M_{\lfloor ay \rfloor}| \geq \mu\lfloor ay \rfloor - a) \leq \dfrac{\mathbb{E}|M_{\lfloor ay \rfloor}|^p}{(\mu\lfloor ay \rfloor - a)^p} \leq \dfrac{C_2({\lfloor ay \rfloor})^{p/2}}{(\mu\lfloor ay \rfloor - a)^p} \leq \dfrac{C_2y^{p/2}a^{p/2}}{(\mu ay - \mu -a)^p}. \end{align*} Now fix $a \geq 1$. For any $y \geq 2\mu^{-1}+2$, we have $\mu ay \geq 2a+2\mu$ and hence $$ \mathbb{P}(a^{-1}N_a > y) \leq \dfrac{C_2y^{p/2}a^{p/2}}{2^{-p}(\mu a y)^{p}} \leq C_3 a^{-p/2}y^{-p/2} \leq C_3 y^{-p/2},$$ for some absolute constant $C_3$ (depending on $\mu$ only). Since $p>2$, take $r \in (1,p/2)$. Then $$ \mathbb{E}a^{-r}N_a^r = \int_{0}^{\infty} \mathbb{P}(a^{-1}N_a > y^{1/r})\, dy \leq (2\mu^{-1}+2)^r + \int_{(2\mu^{-1}+2)^r}^{\infty} C_3y^{-p/(2r)}\, dy =: C_4 < \infty.$$ Thus $\left\{a^{-1}N_a : a \geq 1\right\}$ is $L^r$-bounded for some $r>1$ and hence $\left\{a^{-1}N_a : a \geq 1\right\}$ is uniformly integrable. Part (a) and \textit{Vitali's Theorem} now imply that $a^{-1}\mathbb{E}N_a \longrightarrow \mu^{-1}$ as $a \to \infty$. \item First note that $\mathbb{E}|M_k-M_l|^p \leq C |k-l|^{p/2}$ for any $k,l \geq 1$, where $C$ is a finite constant not depending upon $k,l$. The proof of this is exactly the same as the first part of (b). Fix any $\varepsilon, \delta >0$. We shall apply \textit{Doob's Inequality}. \begin{align*} \mathbb{P}\left( \max_{n: \mu^{-1}a \leq n \leq \mu^{-1}a+\delta a} \Big \rvert M_n - M_{\lfloor \mu^{-1}a \rfloor}\Big \rvert \geq \varepsilon \sqrt{a}\right) &\leq \varepsilon^{-p}a^{-p/2}\mathbb{E}\Big \rvert M_{\lfloor \mu^{-1}a +\delta a \rfloor}- M_{\lfloor \mu^{-1}a \rfloor}\Big \rvert^p \\ & \leq \varepsilon^{-p}a^{-p/2} C \Big \rvert \lfloor \mu^{-1}a+\delta a \rfloor-\lfloor \mu^{-1}a \rfloor\Big \rvert^{p/2} \\ & \leq C\varepsilon^{-p}a^{-p/2} (\delta a+1)^{p/2} = C\varepsilon^{-p}(\delta + a^{-1})^{p/2}.
\end{align*} Similarly, \begin{align*} \mathbb{P}\left( \max_{n: \mu^{-1}a-\delta a \leq n \leq \mu^{-1}a} \Big \rvert M_{\lfloor \mu^{-1}a \rfloor}-M_n\Big \rvert \geq \varepsilon \sqrt{a}\right) &\leq \varepsilon^{-p}a^{-p/2}\mathbb{E}\Big \rvert M_{\lfloor \mu^{-1}a \rfloor}-M_{\lfloor \mu^{-1}a-\delta a \rfloor}\Big \rvert^p \\ & \leq \varepsilon^{-p}a^{-p/2} C \Big \rvert \lfloor \mu^{-1}a \rfloor-\lfloor \mu^{-1}a-\delta a \rfloor\Big \rvert^{p/2} \\ & \leq C\varepsilon^{-p}a^{-p/2} (\delta a+1)^{p/2} = C\varepsilon^{-p}(\delta + a^{-1})^{p/2}. \end{align*} Combining them we get $$ \mathbb{P}\left( \max_{n: | n - \mu^{-1}a| \leq \delta a} \Big \rvert M_{\lfloor \mu^{-1}a \rfloor}-M_n\Big \rvert \geq \varepsilon \sqrt{a}\right) \leq 2C\varepsilon^{-p}(\delta + a^{-1})^{p/2}.$$\nChoose $\delta >0$ such that $2C\varepsilon^{-p}\delta^{p/2} \leq \varepsilon/2,$ i.e., $0 < \delta < \left(\varepsilon^{p+1}/(4C) \right)^{2/p}.$ Then $$ \limsup_{a \to \infty} \mathbb{P}\left( \max_{n: | n - \mu^{-1}a| \leq \delta a} \Big \rvert M_n- M_{\lfloor \mu^{-1}a \rfloor}\Big \rvert \geq \varepsilon \sqrt{a}\right) \leq \varepsilon/2,$$ which is more than what we are asked to show. \item First we shall apply \textit{Lyapounav's Theorem} to establish the CLT for the sequence $S_n$. Let $v_n = \operatorname{Var}(S_n) = \sum_{k=1}^n \operatorname{Var}(X_k) \sim n \sigma^2$. Moreover, \begin{align*} v_n^{-p/2} \sum_{k=1}^n \mathbb{E}|X_k-\mu|^p \leq v_n^{-p/2} n2^{p-1}\left(\sup_{k \geq 1} \mathbb{E}|X_k|^p + \mu^p \right) \longrightarrow 0, \end{align*} since $p>2$ and $v_n\sim n\sigma^2$ with $\sigma>0$. Thus $(S_n-n\mu)/\sqrt{v_n}$ converges weakly to standard Gaussian distribution which in turn implies that $M_n/(\sigma \sqrt{n})=(S_n-n\mu)/(\sigma \sqrt{n})$ converges weakly to standard Gaussian distribution. Fix $x \in \mathbb{R}$, $\varepsilon >0$ and get $\delta>0$ as in part (d). Then \begin{align*} \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} \leq x\right) &\leq \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} \leq x, \Big \rvert \dfrac{N_a}{a}-\dfrac{1}{\mu} \Big \rvert \leq \delta\right) + \mathbb{P} \left( \Big \rvert \dfrac{N_a}{a}-\dfrac{1}{\mu} \Big \rvert > \delta\right) \\ & \leq \mathbb{P} \left( \dfrac{M_{\lfloor \mu^{-1}a \rfloor}}{\sigma \sqrt{a}} \leq x + \dfrac{\varepsilon}{\sigma} \right) + \mathbb{P}\left( \max_{n: | n - \mu^{-1}a| \leq \delta a} \Big \rvert M_n- M_{\lfloor \mu^{-1}a \rfloor}\Big \rvert \geq \varepsilon \sqrt{a}\right) + \mathbb{P} \left( \Big \rvert \dfrac{N_a}{a}-\dfrac{1}{\mu} \Big \rvert > \delta\right). \end{align*} Take $a \to \infty$. By CLT the first term converges to $\mathbb{P}\left(N(0, \mu^{-1}) \leq x+ \sigma^{-1}\varepsilon\right).$ The last term converges to $0$ and the second term has limit superior to be at most $\varepsilon/2$ form part (d). Therefore, $$\limsup_{a \to \infty} \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} \leq x\right) \leq \mathbb{P}\left(N(0, \mu^{-1}) \leq x+ \sigma^{-1}\varepsilon\right) + \varepsilon. 
$$ Take $\varepsilon \downarrow 0$ to conclude that $$ \limsup_{a \to \infty} \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} \leq x\right) \leq \mathbb{P}\left(N(0, \mu^{-1}) \leq x\right).$$ Similarly, \begin{align*} \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} > x\right) &\leq \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} > x, \Big \rvert \dfrac{N_a}{a}-\dfrac{1}{\mu} \Big \rvert \leq \delta\right) + \mathbb{P} \left( \Big \rvert \dfrac{N_a}{a}-\dfrac{1}{\mu} \Big \rvert > \delta\right) \\ & \leq \mathbb{P} \left( \dfrac{M_{\lfloor \mu^{-1}a \rfloor}}{\sigma \sqrt{a}} > x - \dfrac{\varepsilon}{\sigma} \right) + \mathbb{P}\left( \max_{n: | n - \mu^{-1}a| \leq \delta a} \Big \rvert M_n- M_{\lfloor \mu^{-1}a \rfloor}\Big \rvert \geq \varepsilon \sqrt{a}\right) + \mathbb{P} \left( \Big \rvert \dfrac{N_a}{a}-\dfrac{1}{\mu} \Big \rvert > \delta\right). \end{align*} Take $a \to \infty$ and get $$\limsup_{a \to \infty} \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} > x\right) \leq \mathbb{P}\left(N(0, \mu^{-1}) > x- \sigma^{-1}\varepsilon\right) + \varepsilon. $$ Take $\varepsilon \downarrow 0$ to conclude that $$ \limsup_{a \to \infty} \mathbb{P}\left( \dfrac{M_{N_a}}{\sigma \sqrt{a}} > x\right) \leq \mathbb{P}\left(N(0, \mu^{-1}) > x\right).$$ Combining previous observations we can conclude that $M_{N_a}/\sigma \sqrt{a}$ converges weakly to $N(0,\mu^{-1})$ distribution. Apply \textit{Slutsky's Theorem} to conclude that $$ \dfrac{S_{N_a}-\mu N_a}{\sigma \sqrt{N_a}} \stackrel{d}{\longrightarrow} N(0,1), \; \text{ as } a \to \infty.$$\n\end{enumerate}
1996-q1
Let \( Y_1, Y_2, \ldots \) be any sequence of mean-zero random variables with \( E|Y_n| = 1 \) for all \( n \). a. Construct such a sequence \( \{Y_n\} \) that is also not i.i.d. and satisfies \[ \liminf_{n \to \infty} P(Y_n > 0) = 0. \] b. Prove that if the \( Y_n \) are uniformly integrable, then \[ \liminf_{n \to \infty} P(Y_n > 0) > 0. \]
\begin{enumerate}[label=(\alph*)] \item Take $U \sim \text{Uniform}([-1,1])$. Define $$ Y_n := n\mathbbm{1}(U>1-n^{-1}) - n\mathbbm{1}(U<-1+n^{-1}), \; \forall \; n \geq 1.$$\nClearly, $\mathbb{E}Y_n = \dfrac{n}{2n} -\dfrac{n}{2n}=0$ and $\mathbb{E}|Y_n|= \dfrac{n}{2n}+\dfrac{n}{2n}=1.$ Clearly the variables are not \textit{i.i.d.}. Finally, $\mathbb{P}(Y_n >0) = \mathbb{P}(U>1-n^{-1}) = \dfrac{1}{2n} \to 0,$ as $n \to \infty$. \item Since $\mathbb{E}Y_n=0$ and $\mathbb{E}|Y_n|=1$, we have $\mathbb{E}(Y_n)_{+}=1/2$ for all $n \geq 1$. Now suppose that $\liminf_{n \to \infty} \mathbb{P}(Y_n>0)=0$. Hence we can get a subsequence $\left\{n_k : k \geq 1\right\}$ such that $\mathbb{P}(Y_{n_k}>0) \to 0$ as $k \to \infty$. This implies, for any $\varepsilon >0$,\n$$ \mathbb{P}((Y_{n_k})_{+} \geq \varepsilon) \leq \mathbb{P}(Y_{n_k}>0) \to 0,$$\nas $k \to \infty$. In other words $(Y_{n_k})_{+} \stackrel{p}{\longrightarrow} 0$. We are given that $\left\{Y_n : n \geq 1\right\}$ is U.I. and hence so is $\left\{(Y_{n_k})_{+} : k \geq 1\right\}$. Apply \textit{Vitali's Theorem} to conclude that $\mathbb{E}(Y_{n_k})_{+} \to 0$. This gives a contradiction. \end{enumerate}
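A short simulation of the construction in part (a) (illustrative only) confirms the three claimed properties numerically: each $Y_n$ has mean $0$, $\mathbb{E}|Y_n|=1$, and $\mathbb{P}(Y_n>0)=1/(2n) \to 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.uniform(-1.0, 1.0, size=2_000_000)   # a single U drives every Y_n, as in the construction

for n in (1, 10, 100, 1000):
    Y = n * (U > 1 - 1 / n) - n * (U < -1 + 1 / n)
    print(f"n={n:5d}  E[Y_n]={Y.mean():+.4f}  E|Y_n|={np.abs(Y).mean():.4f}  "
          f"P(Y_n>0)={(Y > 0).mean():.5f}  vs 1/(2n)={1 / (2 * n):.5f}")
```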
1996-q2
Let \( X \) be a mean-zero random variable. a. Prove \( E|1 + X| \leq 3E|1 - X| \). b. Prove that, given \( c < 3 \), there exists a mean-zero random variable \( X \) such that \[ E|1 + X| > cE|1 - X|. \]
We have $X$ to be a mean zero random variable. Hence $\mathbb{E}X\mathbbm{1}(X \geq a) + \mathbb{E}X\mathbbm{1}(X < a)=0$, for all $a \in \mathbb{R}$. \begin{enumerate}[label=(\alph*)] \item Note that \begin{align*} \mathbb{E}|1+X| &= \mathbb{E}(1+X)\mathbbm{1}(X \geq -1) - \mathbb{E}(1+X)\mathbbm{1}(X < -1) \\ &= \mathbb{P}(X \geq -1) + \mathbb{E}X\mathbbm{1}(X \geq -1) - \mathbb{P}(X<-1) - \mathbb{E}X\mathbbm{1}(X < -1) \\ &= 2\mathbb{P}(X \geq -1) + 2\mathbb{E}X\mathbbm{1}(X \geq -1) -1, \end{align*} whereas \begin{align*} 3\mathbb{E}|1-X| &= 3\mathbb{E}(X-1)\mathbbm{1}(X \geq 1) + 3\mathbb{E}(1-X)\mathbbm{1}(X < 1) \\ &= -3\mathbb{P}(X \geq 1) + 3\mathbb{E}X\mathbbm{1}(X \geq 1) + 3\mathbb{P}(X<1) - 3\mathbb{E}X\mathbbm{1}(X < 1) \\ &= 3-6\mathbb{P}(X \geq 1) + 6\mathbb{E}X\mathbbm{1}(X \geq 1). \end{align*} Hence we have to prove that $$ \mathbb{P}(X \geq -1) + \mathbb{E}X\mathbbm{1}(X \geq -1) + 3\mathbb{P}(X \geq 1) \leq 2 + 3\mathbb{E}X\mathbbm{1}(X \geq 1).$$ But this follows immediately since, $$ \mathbbm{1}(X \geq -1) + X\mathbbm{1}(X \geq -1) + 3\mathbbm{1}(X \geq 1) \leq 2+ 3X\mathbbm{1}(X \geq 1).$$ The above can be checked by considering different cases whether $X\geq 1$, $-1 \leq X <1$ or $X<-1$. \item It is enough to construct, for any $c \in (0,3)$, a random variable $X$ with mean zero such that $\mathbb{E}|1+X| \geq c \mathbb{E}|1-X|.$ Take any $0<c <3$ and define $a=[(3-c)^{-1}(1+c)] \vee 1$. Define $X$ to be as follows. $\mathbb{P}(X=1)=(1+a)^{-1}a = 1- \mathbb{P}(X=-a).$ Note that $\mathbb{E}(X)=0$. Moreover, since $a \geq 1$, we have $$ \mathbb{E}|1+X| = \dfrac{2a}{1+a} + \dfrac{a-1}{1+a} = \dfrac{3a-1}{1+a}, \; \mathbb{E}|1-X|=\dfrac{1+a}{1+a}=1.$$\nThus using the fact that $a \geq (3-c)^{-1}(1+c)$, we get $$ \mathbb{E}|1+X| = \dfrac{3a-1}{1+a} = 3- \dfrac{4}{1+a} \geq 3- \dfrac{4}{1+(3-c)^{-1}(1+c)} = 3- \dfrac{12-4c}{3-c+1+c} = 3-(3-c)=c\mathbb{E}|1-X|. $$ \end{enumerate}
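To see the construction in part (b) numerically, the sketch below (exact rational arithmetic via `fractions`) evaluates $\mathbb{E}|1+X|/\mathbb{E}|1-X| = (3a-1)/(1+a)$ for the two-point law $\mathbb{P}(X=1)=a/(1+a)$, $\mathbb{P}(X=-a)=1/(1+a)$ and shows the ratio approaching $3$ as $a$ grows.

```python
from fractions import Fraction

def ratio(a: int) -> Fraction:
    # two-point mean-zero law: P(X = 1) = a/(1+a), P(X = -a) = 1/(1+a)
    p1, pa = Fraction(a, 1 + a), Fraction(1, 1 + a)
    e_plus = p1 * abs(1 + 1) + pa * abs(1 - a)      # E|1 + X|
    e_minus = p1 * abs(1 - 1) + pa * abs(1 + a)     # E|1 - X| = 1
    return e_plus / e_minus

for a in (1, 2, 5, 10, 100, 1000):
    print(a, float(ratio(a)))    # equals (3a-1)/(1+a), increasing to 3
```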
1996-q3
Let \( X \) be a random variable with \( E|X| < \infty \), \( B \) a \( \sigma \)-algebra. a. Show that there exists a \( B \in B \) such that \[ E X 1_B = \sup_{A \in B} E X 1_A. \] Call a \( B \) with this property \( B \)-optimal for \( X \). b. Show that \( Y = E(X|B) \) a.s. if and only if for each real number \( r \) the event \(( \omega : Y(\omega) > r \)) is \( B \)-optimal for \( X - r \).
\begin{enumerate}[label=(\alph*)] \item Since $X$ is integrable, $Z:= \mathbb{E}(X \mid \mathcal{B})$ is well defined and $Z \in m\mathcal{B}$. Then note that for any $A \in \mathcal{B}$, \begin{align*} \mathbb{E}X\mathbbm{1}_A &= \mathbb{E}(\mathbb{E}(X \mid \mathcal{B})\mathbbm{1}_A) = \mathbb{E}Z\mathbbm{1}_A = \mathbb{E}Z_{+}\mathbbm{1}_A - \mathbb{E}Z_{-}\mathbbm{1}_A \leq \mathbb{E}Z_{+}\mathbbm{1}_A \leq \mathbb{E}Z_{+} = \mathbb{E}Z\mathbbm{1}(Z > 0) =\mathbb{E}X\mathbbm{1}(Z > 0), \end{align*} where the last equality is true since $(Z > 0) \in \mathcal{B}$. Thus clearly, $(Z > 0)$ is $\mathcal{B}$-optimal for $X$. \item Suppose $Y=\mathbb{E}(X \mid \mathcal{B})$ a.s. Fix a real number $r$. Then $Y-r=\mathbb{E}(X-r \mid \mathcal{B})$ a.s. and by our calculation in part (a), we have $(Y>r)=(Y-r >0)$ is $\mathcal{B}$-optimal for $X-r$. Now suppose that $(Y>r)$ is $\mathcal{B}$-optimal for $X-r$ for all $r$. This hypothesis implies that $(Y>r) \in \mathcal{B}$ for all real $r$ and hence $Y \in m\mathcal{B}$. Let $Z$ be as was in part (a).Then by the discussion in previous paragraph, we have $(Z>r)$ to be $\mathcal{B}$-optimal for $X-r$ for all $r$. In other words, \begin{equation}{\label{eq1}} \mathbb{E}(Z-r)\mathbbm{1}(Y>r) = \mathbb{E}(X-r)\mathbbm{1}(Y>r) = \sup_{A \in \mathcal{B}} \mathbb{E}(X-r)\mathbbm{1}_A = \mathbb{E}(X-r)\mathbbm{1}(Z>r)= \mathbb{E}(Z-r)\mathbbm{1}(Z>r), \end{equation} where the left-most and the right-most equalities are true since $(Y>r),(Z>r)\in \mathcal{B}$ and $Z=\mathbb{E}(X \mid \mathcal{B})$. Note that $$ (Z-r) \left(\mathbbm{1}(Z>r) - \mathbbm{1}(Y>r) \right) \geq 0, \; \text{almost surely},$$ whereas (\ref{eq1}) tell us that $\mathbb{E}(Z-r) \left(\mathbbm{1}(Z>r) - \mathbbm{1}(Y>r) \right)=0.$ Hence, $(Z-r) \left(\mathbbm{1}(Z>r) - \mathbbm{1}(Y>r) \right) = 0$, almost surely. Therefore, for all $r \in \mathbb{R}$, we have $ \mathbb{P}(Z>r, Y<r) =0 = \mathbb{P}(Z <r, Y >r). Observe that \begin{align*} \mathbb{P}(Y \neq Z) &= \mathbb{P}\left( \exists \; q \in \mathbb{Q} \text{ such that } Y < q < Z \text{ or } Z < q < Y\right) \leq \sum_{q \in \mathbb{Q}} \mathbb{P}(Z>q, Y<q) + \sum_{q \in \mathbb{Q}} \mathbb{P}(Z<q, Y>q) =0, \end{align*} and thus $Y=Z=\mathbb{E}(X \mid \mathcal{B})$, almost surely. \end{enumerate}
1996-q4
Let \( \alpha > 1 \) and let \( X_1, X_2, \ldots \) be i.i.d. random variables with common density function \[ f(x) = \frac{1}{2}(\alpha - 1)|x|^{-\alpha} \quad \text{if } |x| \geq 1, \] \[ = 0 \quad \text{otherwise}. \] Let \( S_n = X_1 + \cdots + X_n \). a. For \( \alpha > 3 \), show that \( \limsup_{n \to \infty} |S_n|/(n \log \log n)^{1/2} = C_{\alpha} \) a.s. and evaluate \( C_{\alpha} \). b. For \( \alpha = 3 \), show that \( (n \log n)^{-1/2} S_n \) has a limiting standard normal distribution. (Hint: Show that \( \max_{i \leq n} P(|X_i| \geq (n \log n)^{1/3}) \to 0 \) and hence we can truncate the \( X_i \) as \( X_{ni} = X_i I(|X_i| < (n \log n)^{1/2}) \) for \( 1 \leq i \leq n \).)
\begin{enumerate}[label=(\roman*)] \item For $\alpha >3$, $$ \mathbb{E}(X_1^2) = \int_{-\infty}^{\infty} x^2f(x)\, dx = 2 \int_{1}^{\infty} \dfrac{\alpha -1}{2} x^{-\alpha + 2} \, dx = (\alpha -1) \int_{1}^{\infty} x^{-\alpha + 2} \, dx = \dfrac{\alpha-1}{\alpha-3} < \infty,$$\nand the variables have mean zero (since the density is symmetric around $0$). Using \textit{Hartman-Winter LIL}, we conclude that $$ \limsup_{n \to \infty} \dfrac{|S_n|}{\sqrt{n \log \log n}} = \sqrt{2\mathbb{E}(X_1^2)} = \sqrt{\dfrac{2(\alpha-1)}{\alpha-3}}, \; \text{almost surely}.$$\n \item For $\alpha =3$, we have $f(x)=|x|^{-3}\mathbbm{1}(|x| \geq 1)$. We go for truncation since the variables has infinite variance. Let $X_{n,k}:= X_k \mathbbm{1}(|X_k| \leq b_n)/\sqrt{n \log n}$, for all $n,k \geq 1$; where $\left\{b_n : n \geq 1\right\}$ is some sequence of non-negative real numbers greater than $1$ to be chosen later. Note that \begin{align*} \mathbb{P} \left(\dfrac{S_n}{\sqrt{n \log n}} \neq \dfrac{\sum_{k=1}^n X_{n,k}}{\sqrt{n \log n}} \right) &\leq \sum_{k=1}^n \mathbb{P}(X_{n,k} \neq X_k) = n \mathbb{P}(|X_1| \geq b_n) = 2n \int_{b_n}^{\infty} x^{-3}\,dx = nb_n^{-2}. \end{align*} Suppose that we have chosen $b_n$ sequence in such a way that $nb_n^{-2}=o(1)$. Then $$ \dfrac{S_n}{\sqrt{n \log n}} - \dfrac{\sum_{k=1}^n X_{n,k}}{\sqrt{n \log n}} \stackrel{p}{\longrightarrow} 0.$$\nNow focus on $\dfrac{\sum_{k=1}^n X_{n,k}}{\sqrt{n \log n}}$. By symmetry of the density $\mathbb{E}(X_{n,k})=0$. Also \begin{align*} \sum_{k=1}^n \dfrac{\mathbb{E}X_{n,k}^2}{n \log n} &=\dfrac{n\mathbb{E}X_{n,1}^2}{n \log n}= \dfrac{2}{\log n} \int_{1}^{b_n} x^{-1}\, dx = \dfrac{2\log b_n}{\log n}, \end{align*} and for any small enough $\varepsilon >0$, \begin{align*} \sum_{k=1}^n \mathbb{E}\left[\dfrac{X_{n,k}^2}{n \log n}; |X_{n,k}| \geq \varepsilon \sqrt{n \log n} \right] &= n\mathbb{E}\left[\dfrac{X_{n,1}^2}{n \log n}; |X_{n,1}| \geq \varepsilon \sqrt{n \log n} \right] &= \dfrac{2}{\log n} \int_{\varepsilon \sqrt{n \log n}}^{b_n} x^{-1}\, dx \\ &= \dfrac{2\log b_n - \log n - \log \log n - 2\log \varepsilon}{\log n} \\ &= \dfrac{2\log b_n}{\log n} - 1 +o(1), \end{align*} provided that $b_n \geq \varepsilon \sqrt{n \log n}$ for all $n \geq 1$, for all small enough $\varepsilon$. Clearly if we choose $b_n = \sqrt{n \log n}$ then $nb_n^{-2}=o(1)$, $$ \sum_{k=1}^n \dfrac{\mathbb{E}X_{n,k}^2}{n \log n} \to 1, \;\sum_{k=1}^n \mathbb{E}\left[\dfrac{X_{n,k}^2}{n \log n}; |X_{n,k}| \geq \varepsilon \sqrt{n \log n} \right] \to 0, \; \; \forall \; 0 < \varepsilon <1.$$\nApply \textit{Lindeberg-Feller CLT} and \textit{Slutsky's Theorem} to conclude that $$\dfrac{S_n}{\sqrt{n \log n}} \stackrel{d}{\longrightarrow} N(0,1). $$ \end{enumerate}
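For the borderline case $\alpha=3$ of part (b), a simulation sketch (illustrative; inverse-CDF sampling, $|X|=U^{-1/2}$ for $U\sim\text{Uniform}(0,1)$ with an independent random sign) shows $S_n/\sqrt{n\log n}$ is already close to standard normal, though the convergence is only logarithmic, so the empirical variance stays somewhat above $1$ at moderate $n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50_000, 1_000
T = np.empty(reps)
for r in range(reps):
    U = rng.random(n)
    X = rng.choice([-1.0, 1.0], size=n) / np.sqrt(U)   # density |x|^{-3} on |x| >= 1
    T[r] = X.sum() / np.sqrt(n * np.log(n))

print("sample sd:", T.std())                           # close to (slightly above) 1
print("P(|T| <= 1.96):", np.mean(np.abs(T) <= 1.96))   # close to 0.95
```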
1996-q5
Let \( k \) be a positive integer and let \( (X_n, \mathcal{F}_n, n \geq 1) \) be a martingale difference sequence such that \( E|X_n|^k < \infty \) for all \( n \). Let \[ U_n^{(k)} = \sum_{1 \leq i_1 < \cdots < i_k \leq n} X_{i_1} \cdots X_{i_k}, \quad n \geq k. \] a. Show that \( \{U_n^{(k)}, \mathcal{F}_n, n \geq k\} \) is a martingale. b. Suppose \( E(X_n^2 | \mathcal{F}_{n-1}) \leq C \) a.s. for all \( n \geq 1 \) and some constant \( C \). Show that \[ E((U_n^k - U_{n-1}^{(k)})^2 | \mathcal{F}_{n-1}) \leq C E((U_{n-1}^{(k-1)})^2), \] where we define \( U_n^{(0)} = 1 \). Hence show that \[ E((U_n^{(k)})^2) \leq \binom{n}{k} C^k. \] (Hint: Use induction.)
We have MG difference sequence $\left\{X_n, \mathcal{F}_n, n \geq 1\right\}$ such that $\mathbb{E}|X_n|^k < \infty$ for all $n$ and $$ U_n^{(k)} = \sum_{1 \leq i_1 < \cdots < i_k \leq n} X_{i_1}\cdots X_{i_k}, \; \forall \; n \geq k.$$\n\begin{enumerate}[label=(\roman*)] \item Clearly, $ U_n^{(k)} \in m \sigma(X_1, \ldots, X_n) \subseteq m \mathcal{F}_n.$ Also, for any $1 \leq i_1< \cdots <i_k \leq n$, $$ \mathbb{E}|X_{i_1}\cdots X_{i_k}| \leq \mathbb{E} \left(\sum_{l=1}^k |X_{i_l}| \right)^k \leq C_k \mathbb{E} \left(\sum_{l=1}^k |X_{i_l}|^k \right) < \infty,$$ and thus $\mathbb{E}|U_n^{(k)}|< \infty.$ Finally, for all $n \geq k+1$, \begin{align*} \mathbb{E}\left[ U_{n}^{(k)} \mid \mathcal{F}_{n-1}\right] &= \mathbb{E}\left[ \sum_{1 \leq i_1 < \cdots < i_k \leq n} X_{i_1}\cdots X_{i_k} \Bigg \rvert \mathcal{F}_{n-1}\right] \\ &= \sum_{1 \leq i_1 < \cdots < i_k \leq n-1} X_{i_1}\cdots X_{i_k} + \mathbb{E}\left[ \sum_{1 \leq i_1 < \cdots < i_k = n} X_{i_1}\cdots X_{i_k} \Bigg \rvert \mathcal{F}_{n-1}\right] \\ &= U_{n-1}^{(k)} + \left(\sum_{1 \leq i_1 < \cdots < i_{k-1} \leq n-1} X_{i_1}\cdots X_{i_{k-1}} \right) \mathbb{E}(X_n \mid \mathcal{F}_{n-1}) = U_{n-1}^{(k)}. \end{align*} Thus $\left\{U_n^{(k)}, \mathcal{F}_n, n \geq k\right\}$ is a MG. \item We have $\mathbb{E}(X_n^2 \mid \mathcal{F}_{n-1}) \leq C$ almost surely for all $n \geq 1$. This yields that $\mathbb{E}X_n^2 \leq C$ for all $n \geq 1$. We first need to establish that $\mathbb{E}(U_n^{(k)})^2< \infty$. In order to that it is enough to show that for any $1 \leq i_1 < \cdots < i_k \leq n$, we have $\mathbb{E}\left( X_{i_1}^2\cdots X_{i_k}^2 \right) < \infty$. Consider the event $A_M=(|X_{i_1} \cdots X_{i_{k-1}}| \leq M)$. Then $M \in \mathcal{F}_{i_{k-1}}$ and clearly $X_{i_1}^2\cdots X_{i_k}^2 \mathbbm{1}_{A_M} \leq M^2X_{i_k}^2$ is integrable. Hence, \begin{align*} \mathbb{E} \left[ X_{i_1}^2\cdots X_{i_k}^2 \mathbbm{1}_{A_M}\right] &= \mathbb{E} \left[\mathbb{E} \left( X_{i_1}^2\cdots X_{i_k}^2 \mathbbm{1}_{A_M} \mid \mathcal{F}_{i_{k-1}}\right) \right] \\ &= \mathbb{E} \left[ X_{i_1}^2\cdots X_{i_{k-1}}^2 \mathbbm{1}_{A_M}\mathbb{E} \left(X_{i_k}^2 \mid \mathcal{F}_{i_{k-1}}\right) \right] \\ &\leq C\mathbb{E} \left[ X_{i_1}^2\cdots X_{i_{k-1}}^2 \mathbbm{1}_{A_M}\right]. \end{align*} Taking $M \uparrow \infty$, we get $$ \mathbb{E} \left[ X_{i_1}^2\cdots X_{i_k}^2\right] \leq C\mathbb{E} \left[ X_{i_1}^2\cdots X_{i_{k-1}}^2\right].\n$$ Repeating these kind of computations we can conclude that $\mathbb{E} \left[ X_{i_1}^2\cdots X_{i_k}^2\right] \leq C^k$. Thus $\left\{U_n^{(k)}, \mathcal{F}_n, n \geq k\right\}$ is a $L^2$-MG. From definition, we can see that for all $n \geq k+1, k \geq 2$, $$ U_n^{(k)} -U_{n-1}^{(k)} = \sum_{1 \leq i_1 < \cdots < i_k = n} X_{i_1}\cdots X_{i_k} = X_n \sum_{1 \leq i_1 < \cdots < i_{k-1} \leq n-1} X_{i_1}\cdots X_{i_{k-1}} = X_n U_{n-1}^{(k-1)}. $$ Using the definition that $U_n^{(0)}:=1$ for all $n \geq 0$, we can also write for all $n \geq 2$ that $U_n^{(1)} - U_{n-1}^{(1)} = \sum_{i=1}^n X_i - \sum_{i=1}^{n-1} X_i =X_n = X_n U_{n-1}^{(0)}$. 
Thus for all $n -1\geq k \geq 1$, we have $U_n^{(k)} -U_{n-1}^{(k)} = X_n U_{n-1}^{(k-1)}.$ Since, $U_{n-1}^{(k-1)} \in m\mathcal{F}_{n-1}$, we have $$ \mathbb{E}(U_n^{(k)} -U_{n-1}^{(k)})^2 = \mathbb{E}(X_n U_{n-1}^{(k-1)})^2 = \mathbb{E}\left[\mathbb{E}\left((X_n U_{n-1}^{(k-1)})^2 \mid \mathcal{F}_{n-1}\right) \right] = \mathbb{E}\left[ (U_{n-1}^{(k-1)})^2 \mathbb{E}(X_n^2 \mid \mathcal{F}_{n-1})\right] \leq C \mathbb{E}(U_{n-1}^{(k-1)})^2.\n$$ Since $\left\{U_n^{(k)}, \mathcal{F}_n, n \geq k\right\}$ is a $L^2$-MG, we have $\mathbb{E}(U_n^{(k)} -U_{n-1}^{(k)})^2 = \mathbb{E}(U_n^{(k)})^2 - \mathbb{E}(U_{n-1}^{(k)})^2$, for all $n -1\geq k$. Summing over $n$, we get for all $n \geq k+1$ and $k \geq 1$, $$ \mathbb{E}(U_n^{(k)})^2 - \mathbb{E}(U_k^{(k)})^2 = \sum_{l=k+1}^n \left( \mathbb{E}(U_l^{(k)})^2 - \mathbb{E}(U_{l-1}^{(k)})^2\right)= \sum_{l=k+1}^n \mathbb{E}(U_l^{(k)} -U_{l-1}^{(k)})^2 \leq C \sum_{l=k+1}^n \mathbb{E}(U_{l-1}^{(k-1)})^2.\n$$ We want to prove that $\mathbb{E}(U_n^{(k)})^2 \leq {n \choose k} C^k$ for all $n \geq k \geq 0$. For $k=0$, the result is obvious from the definition. We want to use induction on $k$. Suppose the result is true for all $k =0, \ldots, m-1$ where $m \geq 1$. We have shown earlier that $$\mathbb{E}(U_m^{(m)})^2 = \mathbb{E}(X_1^2\cdots X_m^2) \leq C^m = C^m {m \choose 0}.$$\nFor $n \geq m+1$, we have \begin{align*} \mathbb{E}(U_n^{(m)})^2 &\leq \mathbb{E}(U_m^{(m)})^2 + C \sum_{l=m+1}^n \mathbb{E}(U_{l-1}^{(m-1)})^2 \leq C^m + C \sum_{l=m+1}^n {l-1 \choose m-1} C^{m-1} = C^m \sum_{l=m}^n {l-1 \choose m-1}. \end{align*} Note that \begin{align*} \sum_{l=m}^n {l-1 \choose m-1} &= C^m \sum_{j=m-1}^{n-1} {j \choose m-1} \\&= C^m \sum_{j=m-1}^{n-1}{j \choose j-m+1} &= C^m \sum_{r=0}^{n-m} {r+m-1 \choose r} \\ &= C^m \sum_{r=0}^{n-m} \left[ {r+m \choose r} - {r+m-1 \choose r-1}\right] \\ &= C^m {n \choose n-m} = C^m {n \choose m}. \end{align*} This proves the induction hypothesis and thus we have proved that $\mathbb{E}(U_n^{(k)})^2 \leq {n \choose k} C^k$ for all $n \geq k \geq 0$. \end{enumerate}
1996-q6
Let \( X_1, X_2, \ldots \) be i.i.d. Bernoulli-random variables with \[ P(X_i = 1) = 1/2 = P(X_i = -1). \] Let \( S_n = X_1 + \cdots + X_n \). a. Show that \( \{(\max_{1 \leq i \leq n} |S_i|/\sqrt{n})^p, n \geq 1\} \) is uniformly integrable for all \( p > 0 \). (Hint: Use Burkholder's inequality.) b. Show that \( E[\max_{1 \leq i \leq n} S_i^p/n^{p/2}] \) converges to a limit \( m_p \) as \( n \to \infty \). Can you give an expression for \( m_p \)?
Clearly $\left\{S_n, \mathcal{F}_n, n \geq 0\right\}$ is an $L^2$-MG with respect to the canonical filtration $\mathcal{F}$ and $S_0=0$. The quadratic variation for this process is $$ [S]_n = \sum_{k=1}^n (S_k-S_{k-1})^2 = \sum_{k=1}^n X_k^2 =n.$$\n \begin{enumerate}[label=(\roman*)] \item Set $M_n := \max_{1 \leq i \leq n} |S_i|$, for all $n \geq 1$. Fix any $q> \max(p,1)$. By Using \textit{Burkholder's inequality}, there exists $C_q < \infty$ such that for all $n \geq 1$, $$ \mathbb{E}M_n^q = \mathbb{E} \left[ \max_{i \leq k \leq n} |S_k|\right]^q \leq C_q \mathbb{E}[S]_n^{q/2} = C_qn^{q/2}. $$\nHence, $\sup_{n \geq 1} \mathbb{E}\left[(n^{-p/2}M_n^p)^{q/p}\right] = \sup_{n \geq 1} \mathbb{E} \left[ n^{-q/2}M_n^q\right] \leq C_q < \infty.$ Since $q/p >1$, this shows that $\left\{n^{-p/2}M_n^p, n \geq 1\right\}$ is U.I. \item Define, $$ \widehat{S}_n(t) := \dfrac{1}{\sqrt{n}} \left[S_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(S_{\lfloor nt \rfloor +1}-S_{\lfloor nt \rfloor}) \right]=\dfrac{1}{\sqrt{n}} \left[S_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)X_{\lfloor nt \rfloor +1} \right], $$\nfor all $0 \leq t \leq 1$. By \textit{Donsker's Invariance Principle}, we have $\widehat{S}_n \stackrel{d}{\longrightarrow} W$ as $n \to \infty$ on the space $C([0,1])$, where $W$ is the standard Brownian Motion on $[0,1]$. Observe that the function $t \mapsto \widehat{S}_n(t)$ on $[0,1]$ is the piecewise linear interpolation using the points $$ \left\{ \left( \dfrac{k}{n}, \dfrac{S_k}{\sqrt{n}}\right) \Bigg \rvert k=0, \ldots, n\right\}.$$\nRecall the fact that for a piecewise linear function on a compact set, we can always find a maximizing point among the collection of the interpolating points. Hence, $$ \max_{1 \leq k \leq n} \dfrac{S_k}{\sqrt{n}} = \sup_{0 \leq t \leq 1} \widehat{S}_n(t).$$\nSince, $\mathbf{x} \mapsto \sup_{0 \leq t \leq 1} x(t)$ is a continuous function on $C([0,1])$, we have $$ \sup_{0 \leq t \leq 1} \widehat{S}_n(t) \stackrel{d}{\longrightarrow} \sup_{0 \leq t \leq 1} W(t) \stackrel{d}{=} |W(1)|.$$ Hence using \textit{Continuous Mapping Theorem}, we can conclude that $$n^{-p/2}| ext{max}_{1 \leq i \leq n} S_i|^p \stackrel{d}{\longrightarrow} |W(1)|^p.$$\nSince $|\text{max}_{1 \leq i \leq n} S_i| \leq M_n$, part (i) yields that $\left\{ n^{-p/2}| ext{max}_{1 \leq i \leq n} S_i|^p : n \geq 1\right\}$ is uniformly integrable and thus by \textit{Vitali's Theorem}, we can conclude that \begin{align*} n^{-p/2} \mathbb{E} | ext{max}_{1 \leq i \leq n} S_i|^p \longrightarrow m_p := \mathbb{E} |W(1)|^p = \int_{-\infty}^{\infty} |x|^p \phi(x)\, dx &= \dfrac{2}{\sqrt{2\pi}} \int_{0}^{\infty} x^p \exp(-x^2/2)\, dx \\ &= \dfrac{2}{\sqrt{2\pi}} \int_{0}^{\infty} (2u)^{(p-1)/2} \exp(-u)\, du \\ &= \dfrac{2^{p/2} \Gamma((p+1)/2)}{\sqrt{\pi}}. \end{align*} \end{enumerate}
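The closed form $m_p = 2^{p/2}\Gamma((p+1)/2)/\sqrt{\pi} = \mathbb{E}|W(1)|^p$ can be checked by simulation; the sketch below (with the illustrative choice $p=3$) estimates $n^{-p/2}\,\mathbb{E}[(\max_{1\le i\le n} S_i)^p]$ for a long simple random walk.

```python
import numpy as np
from math import gamma, sqrt, pi

rng = np.random.default_rng(0)
n, reps, p = 4_000, 2_000, 3

steps = rng.choice([-1.0, 1.0], size=(reps, n))
walk_max = np.cumsum(steps, axis=1).max(axis=1)        # max_{1 <= i <= n} S_i
estimate = np.mean(walk_max ** p) / n ** (p / 2)

m_p = 2 ** (p / 2) * gamma((p + 1) / 2) / sqrt(pi)
print("Monte Carlo estimate:", estimate, "   m_p:", m_p)   # both ~1.60 for p = 3
```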
1998-q1
Assume \((X, Y)\) are bivariate normal with mean values equal to 0, variances equal to 1, and correlation \( \rho \). Find the correlation coefficient of \( X^2 \) and \( Y^2 \).
$(X,Y)$ are bivariate normal with mean values equal to $0$, variances equal to $1$ and correlation $\rho$. Let $Z=X-\rho Y$. Then $Z \sim N(0,1-\rho^2)$ and is independent of $Y$. We have $ \operatorname{Var}(X^2) = \mathbb{E}(X^4)-(\mathbb{E}(X^2))^2 = 3-1 =2$ and similarly $\operatorname{Var}(Y^2)=2$. Now \begin{align*} \mathbb{E}(X^2Y^2) = \mathbb{E}((Z+\rho Y)^2Y^2) &= \mathbb{E} \left[ Z^2Y^2 + 2\rho ZY^3 + \rho^2 Y^4\right] \\ &= \mathbb{E}(Z^2)\mathbb{E}(Y^2) + 2\rho \mathbb{E}(Z)\mathbb{E}(Y^3) + \rho^2 \mathbb{E}(Y^4) \\ &= (1-\rho^2) + 0 + 3\rho^2 = 1+2\rho^2, \end{align*} and thus $$ \operatorname{Cov}(X^2,Y^2) = \mathbb{E}(X^2Y^2)-\mathbb{E}(X^2)\mathbb{E}(Y^2) = 1+2\rho^2-1=2\rho^2.$$ Combining them we get, $$ \operatorname{Corr}(X^2,Y^2) = \dfrac{\operatorname{Cov}(X^2,Y^2)}{\sqrt{\operatorname{Var}(X^2)\operatorname{Var}(Y^2)}} =\dfrac{2\rho^2}{\sqrt{4}}=\rho^2.$$
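A quick Monte Carlo check of the identity $\operatorname{Corr}(X^2,Y^2)=\rho^2$ (sketch; the value $\rho=0.6$ is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.6, 1_000_000

X = rng.standard_normal(n)
Y = rho * X + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)  # (X, Y) bivariate normal with corr rho
print("empirical corr(X^2, Y^2):", np.corrcoef(X ** 2, Y ** 2)[0, 1])
print("rho^2:", rho ** 2)
```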
1998-q2
On \(( \Omega, \mathcal{F}, P) \) let \( X_0 = Z_0 = 0 \) and \( Z_i, i = 1,2, \ldots \) i.i.d. normal of mean 0 and variance 1. Let \( S_n = \sum_{j=1}^n Z_j \) and \( \mathcal{F}_n \) the \( \sigma \)-algebra generated by \( \{Z_1, \ldots, Z_n\} \). Let \( X_n = S_n^3 + S_n^2 + a_n S_n + b_n \) where \( a_n \) and \( b_n \) are two deterministic sequences. Either give the most general \( a_n \) and \( b_n \) for which \((X_n, \mathcal{F}_n)\) is a martingale, or prove that this is never the case.
By definition clearly $X_n \in m\mathcal{F}_n$ and since $S_n \sim N(0,n)$ we also have $X_n$ to be integrable for all $n \geq 0$ and for any choice of the deterministic sequences. Observe that for any $n \geq 1$, $$ \mathbb{E}(S_n \mid \mathcal{F}_{n-1}) = S_{n-1} + \mathbb{E}(Z_n \mid \mathcal{F}_{n-1}) = S_{n-1} + \mathbb{E}(Z_n)=S_{n-1},$$ $$ \mathbb{E}(S_n^2 \mid \mathcal{F}_{n-1}) = S_{n-1}^2 + 2S_{n-1}\mathbb{E}(Z_n \mid \mathcal{F}_{n-1}) + \mathbb{E}(Z_n^2 \mid \mathcal{F}_{n-1}) = S_{n-1}^2 + 2S_{n-1}\mathbb{E}(Z_n)+ \mathbb{E}(Z_n^2)=S_{n-1}^2+1,$$ and \begin{align*} \mathbb{E}(S_n^3 \mid \mathcal{F}_{n-1}) &= S_{n-1}^3 + 3S_{n-1}^2\mathbb{E}(Z_n \mid \mathcal{F}_{n-1}) + 3S_{n-1}\mathbb{E}(Z_n^2 \mid \mathcal{F}_{n-1}) + \mathbb{E}(Z_n^3 \mid \mathcal{F}_{n-1}) \\ &= S_{n-1}^3 + 3S_{n-1}^2\mathbb{E}(Z_n)+ 3S_{n-1}\mathbb{E}(Z_n^2)+ \mathbb{E}(Z_n^3) \\ &=S_{n-1}^3+ 3S_{n-1}. \end{align*} \n\text{Hence, for any choice of the deterministic sequences and any $n \geq 1$,}\ \begin{align*} \mathbb{E}\left[ X_n \mid \mathcal{F}_{n-1}\right] = \mathbb{E}\left[ S_n^3 + S_n^2 + a_nS_n + b_n \mid \mathcal{F}_{n-1}\right] &= S_{n-1}^3 + 3S_{n-1} + S_{n-1}^2 +1 + a_nS_{n-1}+b_n \\ &= S_{n-1}^3 + S_{n-1}^2 + (a_n+3)S_{n-1} + (b_n+1). \end{align*} \n\text{Since $S_0=0$ almost surely and $S_n \neq 0$ with probability $1$ for all $n \geq 1$, we have} \begin{align*} \left\{X_n, \mathcal{F}_n, n \geq 0\right\} \text{ is a MG } &\iff a_n+3=a_{n-1}, b_n+1=b_{n-1}, \; \forall \, n \geq 2, \; b_1+1=b_0, \\ & \iff a_n=a_1-3(n-1), \; \forall \, n \geq 1, \; b_n=b_0-n, \; \forall \, n \geq 1. \end{align*} \n\text{Now $0=X_0 = S_0^3+S_0^2+a_0S_0+b_0=b_0$. Hence the most general sequence pair will look like } $$ \left\{a_n\right\}_{n \geq 0} = (a_0,a_1,a_1-3,a_1-6, a_1-9, \ldots) \text{ and } \left\{b_n\right\}_{n \geq 0}=(0,-1,-2,-3,\ldots),$$\ \text{ for any $a_0,a_1 \in \mathbb{R}$.}
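The one-step conditional expectation above can also be verified symbolically; the following sketch (assuming `sympy` is available) integrates against the standard normal density and recovers the recursion $a_{n-1}=a_n+3$, $b_{n-1}=b_n+1$.

```python
import sympy as sp

s, a_n, b_n, z = sp.symbols('s a_n b_n z', real=True)
phi = sp.exp(-z ** 2 / 2) / sp.sqrt(2 * sp.pi)        # N(0,1) density of Z_n

# E[X_n | S_{n-1} = s], with X_n = S_n^3 + S_n^2 + a_n S_n + b_n and S_n = s + Z_n
cond_exp = sp.integrate(((s + z) ** 3 + (s + z) ** 2 + a_n * (s + z) + b_n) * phi,
                        (z, -sp.oo, sp.oo))
print(sp.expand(cond_exp))
# -> s**3 + s**2 + (a_n + 3)*s + b_n + 1, so the MG property forces a_{n-1} = a_n + 3, b_{n-1} = b_n + 1
```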
1998-q3
Let \( S_n = \sum_{i=1}^n Y_i \) where \( Y_i, i = 1, \ldots, n \) are pairwise independent random variables such that \( E(Y_i) = 0 \) and \( |Y_i| \leq 1 \). a) Show that \( P(S_n = n) \leq 1/n \). b) For \( n = 2^k - 1\), construct an example of pairwise independent \( Y_i \in \{-1, 1\} \) with \( E(Y_i) = 0 \) such that \( P(S_n = n) \geq 1/(n + 1) \). c) Explain what happens in parts (a) and (b) when pairwise independence is replaced by mutual independence?
\begin{enumerate}[label=(\alph*)]\item Since the variables $Y_i$'s are uniformly bounded by $1$, they are square integrable. We have $\mathbb{E}(S_n)=0$ and by \textit{pairwise independence} we have $\operatorname{Var}(S_n)=\sum_{i=1}^n \operatorname{Var}(Y_i) \leq \sum_{i=1}^n \mathbb{E}(Y_i^2) \leq n.$ Apply \textit{Chebyshev's Inequality} to get $$ \mathbb{P}(S_n=n) \leq \mathbb{P}(|S_n-\mathbb{E}(S_n)|\geq n) \leq \dfrac{\operatorname{Var}(S_n)}{n^2} \leq \dfrac{1}{n}.$$\item Take $k \geq 1$ and let $\mathcal{J}_k$ be the set of all non-empty subsets of $\left\{1, \ldots, k\right\}$. Consider $Z_1, \ldots, Z_k \stackrel{iid}{\sim}\text{Uniform}(\left\{+1,-1\right\})$. Define the collection of random variables $\left\{Y_I : I \in \mathcal{J}_k\right\}$ as $Y_I := \prod_{i \in I} Z_i$. Clearly this a collection of $n=2^k-1$ random variables with $Y_I \in \left\{+1,-1\right\}$ and $$\mathbb{E}(Y_I)= \prod_{i \in I} \mathbb{E}(Y_i) = 0,$$ for all $I \in \mathcal{J}_k$. In other words, $Y_I \sim \text{Uniform}(\left\{+1,-1\right\})$ for all $I \in \mathcal{J}_k$. For $I_1,I_2,I_3 \in \mathcal{J}_k$ mutually disjoint, the random variables $Y_{I_1},Y_{I_2}, Y_{I_3}$ are clearly mutually independent. Moreover, for any $I \neq J \in \mathcal{J}_k$ with $I \cap J \neq \emptyset$, we have\begin{align*}\mathbb{P}(Y_I=1,Y_J=1) &= \mathbb{P}(Y_{I \cap J}=1, Y_{I \cap J^c}=1, Y_{I^c \cap J}=1) + \mathbb{P}(Y_{I \cap J}=1, Y_{I \cap J^c}=-1, Y_{I^c \cap J}=-1) \\ &= \mathbb{P}(Y_{I \cap J}=1)\mathbb{P}(Y_{I \cap J^c}=1)\mathbb{P}(Y_{I^c \cap J}=1) + \mathbb{P}(Y_{I \cap J}=1)\mathbb{P}(Y_{I \cap J^c}=-1) \mathbb{P}(Y_{I^c \cap J}=-1) \\ &= \dfrac{1}{8} + \dfrac{1}{8} = \dfrac{1}{4} = \mathbb{P}(Y_I=1)\mathbb{P}(Y_J=1). \end{align*} Thus $\left\{Y_I : I \in \mathcal{J}_k\right\}$ is a collection of pairwise independent random variables. Then \begin{align*}\mathbb{P} \left(\sum_{I \in \mathcal{J}_k} Y_I = n=2^k-1 \right) = \mathbb{P}(Y_I=1, \; \forall \; I \in \mathcal{J}_k) = \mathbb{P}(Z_j=1, \; \forall \; j=1, \ldots, k) = 2^{-k} = \dfrac{1}{n+1}. \end{align*}\item If the variables $Y_i$'s were mutually independent, then $$ \mathbb{P}(S_n=n) = \mathbb{P}(Y_i=1, \; \forall\; i=1, \ldots, n) = \prod_{i =1}^n \mathbb{P}(Y_i=1)$$ but using \textit{Markov's Inequality} we can say for all $ 1\ \leq i \leq n$, $$ \mathbb{P}(Y_i =1)= \mathbb{P}(Y_i \geq 1) = \mathbb{P}(Y_i+1 \geq 2) \leq \dfrac{\mathbb{E}(Y_i+1)}{2}=\dfrac{1}{2},$$ and hence $\mathbb{P}(S_n=n)\leq 2^{-n}$. Since $2^{-n}\leq n^{-1}$, part (a) is also in this case. Since $2^{-n} < (n+1)^{-1}$ for all $n \geq 2$, an example like part (b) is not possible here for $n \geq 2$. \end{enumerate}
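The construction in part (b) is easy to simulate; the sketch below builds the $2^k-1$ pairwise-independent signs $Y_I=\prod_{i\in I}Z_i$ for $k=3$ and checks both $\mathbb{P}(S_n=n)=1/(n+1)$ and the (approximate) vanishing of all pairwise correlations.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
k = 3
subsets = [S for r in range(1, k + 1) for S in itertools.combinations(range(k), r)]
n = len(subsets)                                   # n = 2^k - 1 = 7

reps = 400_000
Z = rng.choice([-1, 1], size=(reps, k))
Y = np.stack([Z[:, list(S)].prod(axis=1) for S in subsets], axis=1)

S_n = Y.sum(axis=1)
print("P(S_n = n) ~", (S_n == n).mean(), "   1/(n+1) =", 1 / (n + 1))
print("max off-diagonal |corr(Y_I, Y_J)| ~",
      np.abs(np.corrcoef(Y.T) - np.eye(n)).max())   # ~ 0: pairwise independence
```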
1998-q4
Let \( \{X_n, \mathcal{F}_n\} \) be a martingale difference sequence such that \( \sup_{n \geq 1} E(|X_n|^r | \mathcal{F}_{n-1}) < \infty \) a.s. and \( E(X_n^2| \mathcal{F}_{n-1}) = b \) for some constants \( b > 0 \) and \( r > 2 \). Suppose that \( u_n \) is \( \mathcal{F}_{n-1}\)-measurable and that \( n^{-1} \sum_{i=1}^n u_i^2 \to_p c \) for some constant \( c > 0 \). Let \( S_n = \sum_{i=1}^n u_i X_i \). Show that the following limits exist for every real number \( x \) and evaluate them: a) \( \lim_{a \to \infty} P(T(a) \leq a^2 x) \), where \( T(a) = \inf\{n \geq 1 : S_n \geq a\} \). b) \( \lim_{n \to \infty} P(\max_{1 \leq k \leq n} (S_k - k S_n/n) \geq \sqrt{nx}) \).
Since we are not given any integrability condition on $u$, we need to perform some truncation to apply MG results. Let $v_{n,k}=u_k \mathbbm{1}(|u_k| \leq n).$ We have $v_{n,i}X_i$ to be square-integrable for all $n,i \geq 1$. Clearly, $v_{n,i}X_i \in m\mathcal{F}_i$ and $\mathbb{E}(v_{n,i}X_i \mid \mathcal{F}_{i-1}) = v_{n,i} \mathbb{E}(X_i \mid \mathcal{F}_{i-1})=0$. Hence $\left\{M_{n,k}:=\sum_{i=1}^k v_{n,i}X_i, \mathcal{F}_k, k \geq 0 \right\}$ is a $L^2$-Martingale with $M_{n,0} :=0$. Its predictable compensator process is as follows. $$ \langle M_n \rangle_l = \sum_{k=1}^l \mathbb{E}(v_{n,k}^2X_k^2 \mid \mathcal{F}_{k-1}) = \sum_{k=1}^l v_{n,k}^2 \mathbb{E}(X_k^2 \mid \mathcal{F}_{k-1}) = b\sum_{k=1}^l v_{n,l}^2, \; \forall \; n,l \geq 1.$$\nFor any $q>0$ and any sequence of natural numbers $\left\{l_n \right\}$ such that $l_n =O(n)$, we have $$ \mathbb{P} \left(\sum_{k=1}^{l_n} |v_{n,k}|^q \neq \sum_{k=1}^{l_n} |u_k|^q \right) \leq \mathbb{P}\left( \max_{k=1}^{l_n} |u_k| >n \right) \longrightarrow 0,$$ which follows from Lemma~\ref{lemma} and the fact that $n^{-1}\sum_{k=1}^n u_k^2$ converges in probability to $c$. Thus for any sequence of positive real numbers $\left\{c_n \right\}$ we have \begin{equation}{\label{error}}\dfrac{1}{c_n} \sum_{k=1}^{l_n} |v_{n,k}|^q = \dfrac{1}{c_n} \sum_{k=1}^{l_n} |u_k|^q + o_p(1). \end{equation} In particular, for any $0 \leq t \leq 1$, $$ n^{-1}\langle M_n \rangle_{\lfloor nt \rfloor} = \dfrac{b}{n}\sum_{k=1}^{\lfloor nt \rfloor} v_{n,k}^2 = \dfrac{b}{n}\sum_{k=1}^{\lfloor nt \rfloor} u_k^2 + o_p(1) = \dfrac{b\lfloor nt \rfloor}{n}\dfrac{1}{\lfloor nt \rfloor}\sum_{k=1}^{\lfloor nt \rfloor} u_k^2 + o_p(1) \stackrel{p}{\longrightarrow} bct.$$ Also for any $\varepsilon >0$,\begin{align*}\dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[(M_{n,k}-M_{n,k-1})^2; |M_{n,k}-M_{n,k-1}| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] &= \dfrac{1}{n} \sum_{k=1}^n v_{n,k}^2\mathbb{E} \left[X_k^2; |v_{n,k}X_k| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] \\ & \leq \dfrac{1}{n} \sum_{k=1}^n v_{n,k}^2\mathbb{E} \left[|X_k|^2 (\varepsilon \sqrt{n} |v_{n,k}X_k|^{-1})^{2-r} \mid \mathcal{F}_{k-1} \right] \\ & \leq \varepsilon^{2-r}\left( \sup_{k \geq 1} \mathbb{E}(|X_k|^r \mid \mathcal{F}_{k-1})\right) \left( \dfrac{1}{n^{r/2}}\sum_{k=1}^n |v_{n,k}|^r \right) \\ &= \varepsilon^{2-r}\left( \sup_{k \geq 1} \mathbb{E}(|X_k|^r \mid \mathcal{F}_{k-1})\right) \left( \dfrac{1}{n^{r/2}}\sum_{k=1}^n |u_k|^r +o_p(1) \right). \end{align*} From Lemma~\ref{lemma} and the fact that $n^{-1}\sum_{k=1}^n u_k^2$ converges to $c$ in probability, we can conclude that $n^{-r/2} \sum_{k=1}^n |u_k|^r$ converges in probability to $0$. \n\n\text{Then we have basically proved the required conditions needed to apply \textit{Martingale CLT}. Define,} $$ \widehat{S}_n(t) := \dfrac{1}{\sqrt{bcn}} \left[M_{n,\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(M_{n,\lfloor nt \rfloor +1}-M_{n,\lfloor nt \rfloor}) \right]=\dfrac{1}{\sqrt{bcn}} \left[M_{n,\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)v_{n,\lfloor nt \rfloor +1}X_{\lfloor nt \rfloor +1} \right], $$ \text{for all $0 \leq t \leq 1$. We then have $\widehat{S}_n \stackrel{d}{\longrightarrow} W$ as $n \to \infty$ on the space $C([0,1])$, where $W$ is the standard Brownian Motion on $[0,1]$.
1998-q5
Let \( \{B(t), t \ge 0\} \) and \( \{A(t), t \ge 0\} \) be two independent stochastic processes defined on \((\Omega, \mathcal{F}, P)\) such that \( B(\cdot) \) is standard Brownian motion and \( A(\cdot) \) is a nondecreasing, right-continuous process with independent increments. Let \( X_t = B(A(t)) \). a) Show that \( X_t \) has independent increments and is a right-continuous martingale. b) Let \( T \) be a stopping time with respect to the filtration \( \{\mathcal{F}_t, t \ge 0\} \), where \( \mathcal{F}_t \) is the \( \sigma \)-algebra generated by \( \{A(s), s \le t\} \cup \{B(s), s \le A(t)\} \). If \( E A(T) < \infty \), show that \( E X_T = 0 \). c) In the case where \( A(\cdot) \) is a Poisson process with rate \( \lambda \), find the characteristic function of \( X_t \).
\textbf{Correction :} For part (a), assume that $A(t) \geq 0$ and $\mathbb{E}\sqrt{A(t)}< \infty$ for all $t \geq 0$. For part (b), assume that $\mathbb{E}A(t)< \infty$ for all $t \geq 0$. We have $B$ a standard BM and $A$ a non-decreasing, right-continuous process with independent increments, with $B$ independent of $A$, and $X(t)=B(A(t))$. \begin{enumerate}[label=(\alph*)]\item Let $\mathcal{G}_t := \sigma(A(s): s \leq t)$ and $\mathcal{G}_{\infty} := \sigma(A(s): s \geq 0)$. Fix $n \geq 1$ and take any $0=t_0<t_1<t_2< \cdots < t_n$ and $D_1, \ldots, D_n \in \mathcal{B}_{\mathbb{R}}.$ Consider the map $ \phi : \left(C([0,\infty)) \times [0, \infty)^{n+1}, \mathcal{B}_{C([0,\infty))} \otimes \mathcal{B}_{[0,\infty)^{n+1}} \right) \to (\mathbb{R}, \mathcal{B}_{\mathbb{R}})$ defined as $$ \phi(\mathbf{x}, s_0, \ldots, s_n) = \mathbbm{1}\left(x(s_i)-x(s_{i-1}) \in D_i, \; \forall \; i=1, \ldots, n \right).$$ Clearly this is a bounded measurable map. Note that for any $0 \leq s_0 \leq s_1 \leq \ldots \leq s_n < \infty$, we have by the independent increment property of the BM the following identity. \begin{align*} g(s_0,\ldots, s_n) = \mathbb{E} \left[ \phi(B,s_0, \ldots, s_n )\right] = \mathbb{P} \left( B(s_i)-B(s_{i-1}) \in D_i, \; \forall \; i=1, \ldots, n\right) &= \prod_{i =1}^n \mathbb{P}(B(s_i)-B(s_{i-1}) \in D_i) \\ &= \prod_{i =1}^n Q(s_i-s_{i-1},D_i),\end{align*} where $Q(t, D) := \mathbb{P}(N(0,t) \in D)$ for any $t \geq 0$ and $D \in \mathcal{B}_{\mathbb{R}}$. For any fixed $D$, the function $t \mapsto Q(t,D)$ is continuous and hence measurable. Since $B \perp\!\!\!\perp \mathcal{G}_{\infty}$ and $A(t_0), \ldots, A(t_n)$ are $\mathcal{G}_{\infty}$-measurable, conditioning on $\mathcal{G}_{\infty}$ gives $$ \mathbb{P}\left( X(t_i)-X(t_{i-1}) \in D_i, \; \forall \; i=1, \ldots, n \;\Big \rvert\; \mathcal{G}_{\infty}\right) = \mathbb{E}\left[ \phi(B, A(t_0), \ldots, A(t_n)) \mid \mathcal{G}_{\infty}\right] = \prod_{i =1}^n Q(A(t_i)-A(t_{i-1}),D_i), \; \text{almost surely}.$$ Taking expectations and using the independence of the increments of $A$, the right-hand side factorizes into a product of terms each depending only on the single increment $A(t_i)-A(t_{i-1})$; hence $X$ has independent increments. \end{enumerate}
1999-q1
Let \( X_1, \ldots, X_n, \ldots \) be an arbitrary sequence of random variables on the same probability space and \( A, B \) two Borel subsets of \( \mathbb{R} \). Let \( \Gamma_n = \bigcup_{m=n}^\infty \{ X_m \in B \} \). Suppose that for some \( \delta > 0 \) and all \( n \geq 1 \), \[ P(\Gamma_{n+1} | X_1, \ldots, X_n) \geq \delta \text{ on } \{ X_n \in A \}. \] Show that \[ \{ X_n \in A \text{ i.o.} \} \subseteq \{ X_n \in B \text{ i.o.} \} \quad \text{a.s.} \]
Let $\mathcal{F}_n$ be the $\sigma$-algebra generated by $X_1, \ldots, X_n$. Also define $\Gamma^B := (X_n \in B \text{ i.o.})$ and $\Gamma^A := (X_n \in A \text{ i.o.})$. Note that $\Gamma_{n} = \bigcup_{m \geq n}(X_m \in B) \downarrow \Gamma^B$, in other words $\mathbbm{1}_{\Gamma_n} \stackrel{a.s.}{\longrightarrow} \mathbbm{1}_{\Gamma^B}$. By the hypothesis, we have $$ \mathbb{E} \left[ \mathbbm{1}_{\Gamma_{n+1}} \mid \mathcal{F}_n \right] \geq \delta \mathbbm{1}_{(X_n \in A)}, \; \text{almost surely}.$$ By \textit{Levy's upward Theorem}, we have $$ \mathbb{E} \left[ \mathbbm{1}_{\Gamma_{n+1}} \mid \mathcal{F}_n \right] \stackrel{a.s.}{\longrightarrow} \mathbb{E} \left[ \mathbbm{1}_{\Gamma^B} \mid \mathcal{F}_{\infty} \right] = \mathbbm{1}_{\Gamma^B},$$ where $\mathcal{F}_{\infty}=\sigma(X_1, X_2, \ldots)$. Thus we can conclude that $$ \limsup_{n \to \infty} \delta \mathbbm{1}_{(X_n \in A)} \leq \mathbbm{1}_{\Gamma^B}, \; \text{almost surely}.$$ If $\omega \in \Gamma^A$, then $\limsup_{n \to \infty} \mathbbm{1}_{(X_n \in A)}(\omega)=1$ and hence $$ \delta \mathbbm{1}_{\Gamma^A} \leq \limsup_{n \to \infty} \delta \mathbbm{1}_{(X_n \in A)} \leq \mathbbm{1}_{\Gamma^B}, \; \text{almost surely}.$$ Since $\delta >0$, this shows that $\mathbb{P}(\Gamma^A \setminus \Gamma^B)=0$. This concludes the proof.
1999-q2
Let \( Z_n \) be a Galton-Watson process (as in Durrett 4.3.d or Williams Ch. 0) such that each member of the population has at least one offspring (that is, in Durrett's notations, \( p_0 = 0 \)). Construct the corresponding Galton-Watson tree by connecting each member of the population to all of its offsprings. Call a vertex of this tree a branch point if it has more than one offspring. For each vertex \( v \) of the tree let \( C(v) \) count the number of branch points one encounters when traversing along the path from the root of the tree to \( v \). Let \( B_n \) be the minimum of \( C(v) \) over all vertices \( v \) in the \( n \)-th generation of the process. Show that for some constant \( \delta > 0 \) that depends only on the offspring distribution, \[ \liminf_{n \to \infty} n^{-1} B_n \geq \delta \quad \text{a.s.} \] *Hint*: For each \( \lambda > 0 \), find \( M(\lambda) \) such that \( \sum_v \exp(-\lambda C(v)) M(\lambda)^n \) is a Martingale, where the sum is over \( v \) in the \( n \)-th generation of the process.
\textbf{Correction :} Assume that the progeny mean is finite and $p_1<1$. Let $Z$ denote the progeny distribution with $\mathbb{P}(Z=k)=:p_k$ for all $k \in \mathbb{Z}_{\geq 0}$. In this problem $p_0=0$. Introduce the notation $P(v)$ which will refer to the parent of vertex $v$ and $\mathcal{O}(v)$ which will refer to the set of offspring of vertex $v$. Also $D_n$ will denote the collection of all the vertices in the $n$-th generation and $\mathcal{F}_n$ will denote all the information up to generation $n$. $C(v)$ is the number of branch points along the path from the root to $v$, excluding $v$. Clearly, $C(v)$ for any $v \in D_n$ depends on the offspring counts of only the vertices in $D_{n-1}$ and hence is $\mathcal{F}_n$-measurable. (Note that by convention, $\mathcal{F}_n$ does not incorporate information about the offspring counts of the vertices in $D_n$.) Fix any $\lambda >0$. Then for all $n \geq 1$, $$ \mathbb{E} \left[ \sum_{v \in D_n} \exp(-\lambda C(v))\right] \leq \mathbb{E}|D_n| = \mu^n < \infty,$$ where $\mu=\mathbb{E}(Z)$ is the progeny mean. Therefore, \begin{align*} \mathbb{E} \left[ \sum_{v \in D_n} \exp(-\lambda C(v)) \Bigg \rvert \mathcal{F}_{n-1} \right] &= \mathbb{E} \left[ \sum_{u \in D_{n-1}} \sum_{v : P(v)=u} \exp(-\lambda C(v)) \Bigg \rvert \mathcal{F}_{n-1} \right] \\ &= \sum_{u \in D_{n-1}} \mathbb{E} \left[ \sum_{v : P(v)=u} \exp(-\lambda C(v)) \Bigg \rvert \mathcal{F}_{n-1} \right] \\ &= \sum_{u \in D_{n-1}} \mathbb{E} \left[ \exp(-\lambda C(u)) \sum_{v : P(v)=u} \exp(-\lambda \mathbbm{1}( u \text{ is a branching point})) \Bigg \rvert \mathcal{F}_{n-1} \right] \\ &= \sum_{u \in D_{n-1}} \exp(-\lambda C(u)) \mathbb{E} \left[ |\mathcal{O}(v)| \exp(-\lambda \mathbbm{1}( |\mathcal{O}(v)| \geq 2)) \Bigg \rvert \mathcal{F}_{n-1} \right] \\ &= \sum_{u \in D_{n-1}} \exp(-\lambda C(u)) M(\lambda),\ \end{align*} where $$ M(\lambda) := \mathbb{E}\left[ Z \exp(-\lambda \mathbbm{1}(Z \geq 2))\right] = \sum_{k \geq 2} ke^{-\lambda}p_k + p_1 = e^{-\lambda}(\mu -p_1)+p_1 \in (0, \infty). $$ This shows that $\left\{ T_n = M(\lambda)^{-n}\sum_{v \in D_n} \exp(-\lambda C(v)), \mathcal{F}_n, n \geq 1\right\}$ is a MG. Being a non-negative MG, it converges almost surely to some $T_{\infty}$ with $$\mathbb{E}T_{\infty} \leq \liminf_{n \to \infty} \mathbb{E}T_n = \mathbb{E}T_1 \leq M(\lambda)^{-1}\mu < \infty.$$ In particular, $T_{\infty}< \infty$ with probability $1$. Now note if $B_n := \min_{v \in D_n} C(v)$, then $$ T_n = M(\lambda)^{-n}\sum_{v \in D_n} \exp(-\lambda C(v)) \geq M(\lambda)^{-n} \exp(-\lambda B_n),$$ and hence $$ \dfrac{\lambda}{n} B_n \geq - \log M(\lambda) - \dfrac{1}{n}\log T_n.$$ Since, $T_{\infty}< \infty$ almost surely, we have $\limsup_{n \to \infty} n^{-1} \log T_n \leq 0$, almost surely (this is an inequality since $T_{\infty}$ can take the value $0$ with positive probability). Thus, almost surely $$ \liminf_{n \to \infty} n^{-1}B_n \geq -\lambda^{-1}\log M(\lambda) =- \dfrac{\log(e^{-\lambda(\mu-p_1)}+p_1)}{\lambda}.$$ Our proof will be complete if we can furnish a $\lambda_0 >0$ such that $M(\lambda_0)<1$. Note that $\lim_{\lambda \to \infty} M(\lambda)=p_1 <1,$ and hence such $\lambda_0$ exists.
1999-q3
Show that the uniform distribution on \([0,1]\] is not the convolution of two independent, identically distributed variables.
Let $U \sim \text{Uniform}([0,1])$ and $U = X_1+X_2$, where $X_1$ and $X_2$ are \textit{i.i.d.} variables. Then $$ \text{Uniform}([-1,1]) \stackrel{d}{=} 2U-1 = 2X_1+2X_2-1 = Y_1+Y_2,$$ where $Y_i=2X_i-1/2$ for $i=1,2$. Clearly $Y_1$ and $Y_2$ are still \textit{i.i.d.} variables. Set $V:= 2U-1$. $V$ has a distribution which is symmetric around $0$ and hence it has all odd order moments to be $0$. First of all, note that $$ (\mathbb{P}(Y_1 > 1/2))^2 = \mathbb{P}(Y_1 > 1/2, Y_2 > 1/2) \leq \mathbb{P}(V >1)=0,$$ and hence $Y_1 \leq 1/2$ almost surely. Similar argument shows $Y_1 \geq -1/2$ almost surely. Thus $Y_1$ is a bounded random variable. We shall first show that the distribution of $Y_1$ has all odd order moments $0$, i.e. $\mathbb{E}Y_1^{2k-1}=0$ for all $k \in \mathbb{N}$. To prove this we go by induction. For $k=1$, the statement is true since $$ 2 \mathbb{E}(Y_1) = \mathbb{E}(Y_1+Y_2) = \mathbb{E}(V) = 0.$$ Now suppose that $\mathbb{E}Y_1^{2k-1}=0$ for all $k=1, \ldots, m$. Then \begin{align*} 0=\mathbb{E}(V^{2m+1}) =\mathbb{E}(Y_1+Y_2)^{2m+1}) =\sum_{l=0}^{2m+1} {2m+1 \choose l} \mathbb{E}Y_1^lY_2^{2m+1-l} = \sum_{l=0}^{2m+1} {2m+1 \choose l} \mathbb{E}Y_1^l \mathbb{E}Y_1^{2m+1-l}.\end{align*} For any $0 < l <2m+1$, if $l$ is odd then by induction hypothesis we have $\mathbb{E}Y_1^l=0$. If $l$ is even then $2m+1-l$ is an odd number strictly less than $2m+1$ and induction hypothesis then implies $\mathbb{E}Y_1^{2m+1-l}=0$. Therefore, $$ 0 = 2 \mathbb{E}Y_1^{2m+1} \Rightarrow \mathbb{E}Y_1^{2m+1}=0,$$ proving the claim. Now let $Y_1 \perp \!\!\!\perp Z\sim \text{Uniform}(\left\{+1,-1\right\})$ and set $W_1=Z|Y_1|$. Clearly, for all $k \geq 1$, $$ \mathbb{E}W_1^{2k-1} = \mathbb{E}(Z^{2k-1}|Y_1|^{2k-1})=\mathbb{E}(Z|Y_1|^{2k-1}) =\mathbb{E}(Z) \mathbb{E}|Y_1|^{2k-1} =0,\; \mathbb{E}W_1^{2k}= \mathbb{E}(Z^{2k}Y_1^{2k}) = \mathbb{E}Y_1^{2k}.$$ Hence, $\mathbb{E}Y_1^k = \mathbb{E}W_1^k$. Since probability measures on bounded intervals are determined by its moments (see~\cite[Theorem 30.1]{B01}), we can conclude that $Y_1 \stackrel{d}{=}W_1$. It is evident that the distribution of $W_1$ is symmetric around $0$ and hence so is $Y_1$. Consequently, $\mathbb{E}(e^{itY_1}) \in \mathbb{R}$ for all $t \in \mathbb{R}$. Hence for all $t >0$, \begin{align*} 0 \leq (\mathbb{E}(e^{itY_1}))^2 = \mathbb{E}\exp(it(Y_1+Y_2)) = \mathbb{E}\exp(itV) = \dfrac{1}{2} \int_{-1}^{1} e^{itx} \, dx = \dfrac{e^{it}-e^{-it}}{2it} = \dfrac{\sin t}{t} \Rightarrow \sin t >0,\end{align*} which is a contradiction.
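The contradiction at the end is visible numerically: $\varphi_V(t)=\mathbb{E}e^{itV}=\sin(t)/t$ would have to be the square of a real number for every $t$, yet it takes negative values. A two-line check (illustrative):

```python
import numpy as np

t = np.linspace(0.1, 10.0, 1000)
phi_V = np.sin(t) / t                                   # characteristic function of Uniform([-1, 1])
print("min of sin(t)/t on the grid:", phi_V.min())      # negative
print("value at t = 4.5:", np.sin(4.5) / 4.5)           # about -0.217 < 0
```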
1999-q4
Let \( \pi \) be a permutation of \( \{1, \ldots, n \} \) chosen at random. Let \( X = X(\pi) \) be the number of fixed points of \( \pi \), so \( X(\pi) \) is the number of indices \( i \) such that \( \pi(i) = i \). Prove \[ E[X(X-1)(X-2)\ldots(X-k+1)] = 1 \] for \( k = 1, 2, \ldots, n \).
We have $\pi \sim \text{Uniform}(S_n)$ where $S_n$ is the symmetric group of order $n$. Let $\mathcal{C}(\pi)$ is the collection of fixed points of $\pi$ and $X=X(\pi):=|\mathcal{C}(\pi)|$. Then for all $1 \leq k \leq n$, \begin{align*} X(X-1)\ldots(X-k+1) &= \text{Number of ways to choose an ordered $k$-tuple from $\mathcal{C}(\pi)$} \\ &= \sum_{i_1, \ldots, i_k \in \mathcal{C}(\pi) : \; i_j\text{'s are all distinct}} 1 \\ &= \sum_{i_1, \ldots, i_k : \;\, i_j\text{'s are all distinct}} \mathbbm{1}(\pi(i_j)=i_j, \; \forall \; j=1, \ldots, k). \end{align*} For any distinct collection of indices $(i_1, \ldots, i_k)$, we have $$ \mathbb{P}(\pi(i_j)=i_j, \; \forall \; j=1, \ldots, k) = \dfrac{(n-k)!}{n!},$$ and hence $$ \mathbb{E}\left[X(X-1)\ldots(X-k+1) \right] = k!{n \choose k} \dfrac{(n-k)!}{n!} = 1.$$
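The identity is also easy to confirm by simulation; the sketch below draws uniform random permutations and averages the falling factorial $X(X-1)\cdots(X-k+1)$ (illustrative sizes).

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 200_000

perms = np.argsort(rng.random((reps, n)), axis=1)   # each row is a uniform random permutation
X = (perms == np.arange(n)).sum(axis=1)             # number of fixed points per permutation

for k in range(1, 6):
    falling = np.prod([X - j for j in range(k)], axis=0)   # X(X-1)...(X-k+1)
    print(f"k={k}: E[X(X-1)...(X-k+1)] ~ {falling.mean():.3f}")   # all approximately 1
```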
1999-q5
Let \( \{ X_n, \mathcal{F}_n, n \geq 1 \} \) be a martingale difference sequence such that \( E(X_n^2 | \mathcal{F}_{n-1}) = b \) and \[ \sup_{n \geq 1} E(|X_n|^r | \mathcal{F}_{n-1}) < \infty \] almost surely, for some constants \( b > 0 \) and \( r > 2 \). Let \( S_n = \sum_{i=1}^n X_i \) and \( S_0 = 0 \). Show that \[ n^{-2} \sum_{i=\sqrt{n}}^n S_{i-1}^2, n^{-1} \sum_{i=\sqrt{n}}^n S_{i-1} X_i \] converges weakly as \( n \to \infty \). Describe this limiting distribution and hence find the limiting distribution of \( ( \sum_{i=\sqrt{n}}^n S_{i-1} X_i ) / ( \sum_{i=\sqrt{n}}^n S_{i-1}^2 )^{1/2} \). *(Hint: \( S_n^2 = \sum_{i=1}^n X_i^2 + 2\sum_{i=1}^n X_i S_{i-1} \).*
Under the assumption we have $X_i$ to be integrable for all $i \geq 1$. $\left\{S_k=\sum_{i=1}^k X_i, \mathcal{F}_k, k \geq 0 \right\}$ is a $L^2$-Martingale with $S_0 :=0$. Its predictable compensator process satisfies $$ n^{-1}\langle S \rangle_n = \dfrac{1}{n} \sum_{k=1}^n \mathbb{E}(X_k^2 \mid \mathcal{F}_{k-1}) = b.$$ Also for any $\varepsilon >0$, \begin{align*} \dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[(S_k-S_{k-1})^2; |S_k-S_{k-1}| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] &= \dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[X_k^2; |X_k| \geq \varepsilon \sqrt{n} \mid \mathcal{F}_{k-1} \right] \\ & \leq \dfrac{1}{n} \sum_{k=1}^n \mathbb{E} \left[|X_k|^2 (\varepsilon \sqrt{n} |X_k|^{-1})^{2-r} \mid \mathcal{F}_{k-1} \right] \\ & \leq n^{-r/2}\varepsilon^{2-r}\left( \sup_{k \geq 1} \mathbb{E}(|X_k|^r \mid \mathcal{F}_{k-1})\right) \stackrel{a.s.}{\longrightarrow} 0.\end{align*} We have basically proved the required conditions needed to apply \textit{Martingale CLT}. Define, $$ \widehat{S}_n(t) := \dfrac{1}{\sqrt{n}} \left[S_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)(S_{\lfloor nt \rfloor +1}-S_{\lfloor nt \rfloor}) \right]=\dfrac{1}{\sqrt{n}} \left[S_{\lfloor nt \rfloor} +(nt-\lfloor nt \rfloor)X_{\lfloor nt \rfloor +1} \right], $$ for all $0 \leq t \leq 1$. We then have $\widehat{S}_n \stackrel{d}{\longrightarrow} \sqrt{b}W$ as $n \to \infty$ on the space $C([0,1])$, where $W$ is the standard Brownian Motion on $[0,1]$. First note that for any $n \geq 1$, $$ 2\sum_{i=1}^n X_iS_{i-1} = \sum_{i=1}^n S_i^2 - \sum_{i=1}^n S_{i-1}^2 - \sum_{i=1}^n X_i^2 = S_n^2 - \sum_{i=1}^n X_i^2.$$ Therefore, \begin{align*} n^{-1}\mathbb{E}\Bigg \rvert \sum_{i=1}^n S_{i-1}X_i - \sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}X_i \Bigg \rvert &\leq n^{-1} \mathbb{E} \Bigg \rvert\sum_{i=1}^{\lceil \sqrt{n} \rceil -1}S_{i-1}X_i \Bigg \rvert & \leq \dfrac{1}{2n} \mathbb{E}|S_{\lceil \sqrt{n} \rceil -1}|^2 +\frac{1}{2n} \mathbb{E} \left(\sum_{i=1}^{\lceil \sqrt{n} \rceil -1}X_i^2 \right) = \dfrac{2b(\lceil \sqrt{n} \rceil -1)}{2n}=o(1). \end{align*} Similarly, \begin{align*} n^{-2}\mathbb{E}\Bigg \rvert \sum_{i=1}^n S_{i-1}^2 - \sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}^2 \Bigg \rvert &\leq n^{-2} \mathbb{E} \Bigg \rvert\sum_{i=1}^{\lceil \sqrt{n} \rceil -1}S_{i-1}^2 \Bigg \rvert & = \dfrac{1}{n^2} \sum_{i=1}^{\lceil \sqrt{n} \rceil -1}\mathbb{E}S_{i-1}^2 = \dfrac{1}{n^2} \sum_{i=1}^{\lceil \sqrt{n} \rceil -1} (i-1)b = O(n^{-1})=o(1). 
\end{align*} Hence, using the identity $2\sum_{i=1}^n X_iS_{i-1} = S_n^2 - \sum_{i=1}^n X_i^2$ from above, \begin{align*} \left( n^{-2}\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}^2, n^{-1}\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}X_i \right) &= \left( n^{-2}\sum_{i=1}^n S_{i-1}^2, n^{-1}\sum_{i=1}^n S_{i-1}X_i \right) + o_p(1) \\ &= \left( n^{-2}\sum_{i=1}^n S_{i-1}^2, \dfrac{1}{2}\left(n^{-1}S_n^2 - n^{-1}\sum_{i=1}^n X_i^2\right) \right) + o_p(1) \\ &= \left( n^{-1}\sum_{i=1}^n \widehat{S}_n\left( \dfrac{i-1}{n}\right)^2, \dfrac{1}{2}\left(\widehat{S}_n(1)^2 - n^{-1}\sum_{i=1}^n X_i^2\right) \right) + o_p(1). \end{align*} Notice that, \begin{align*} \Bigg \rvert n^{-1}\sum_{i=1}^n \widehat{S}_n^2((i-1)/n) - \int_{0}^1 \widehat{S}^2_n(t)\, dt \Bigg \rvert &\leq \sum_{i=1}^n \Bigg \rvert n^{-1} \widehat{S}_n^2((i-1)/n) - \int_{(i-1)/n}^{i/n} \widehat{S}^2_n(t)\, dt \Bigg \rvert \\ & \leq \sum_{i=1}^n \dfrac{1}{n} \sup_{(i-1)/n \leq t \leq i/n} \big \rvert \widehat{S}^2_n(t) - \widehat{S}_n^2((i-1)/n)\big \rvert \\ & \leq 2\left(\sup_{0 \leq t \leq 1} |\widehat{S}_n(t)| \right) \left[ \sum_{i=1}^n \dfrac{1}{n} \sup_{(i-1)/n \leq t \leq i/n} \big \rvert \widehat{S}_n(t) - \widehat{S}_n((i-1)/n)\big \rvert \right] \\ & = 2 \left(\sup_{0 \leq t \leq 1} |\widehat{S}_n(t)| \right) \dfrac{1}{n} \sum_{i=1}^n n^{-1/2}|X_i| = \dfrac{2}{\sqrt{n}}\left(\sup_{0 \leq t \leq 1} |\widehat{S}_n(t)| \right) \left( \dfrac{1}{n} \sum_{i=1}^n |X_i| \right). \end{align*} Since $\mathbf{x} \mapsto \sup_{0 \leq t \leq 1} |x(t)|$ is a continuous function on $C([0,1])$, we have $\sup_{0 \leq t \leq 1} |\widehat{S}_n(t)| \stackrel{d}{\longrightarrow} \sup_{0 \leq t \leq 1} \sqrt{b}|W(t)| = O_p(1).$ On the other hand, $\mathbb{E} \left(n^{-1}\sum_{i=1}^n |X_i|\right) \leq n^{-1} \sum_{i=1}^n \mathbb{E}|X_i| \leq \sqrt{b}$ and hence $n^{-1}\sum_{i=1}^n |X_i| = O_p(1).$ Therefore, $$\Bigg \rvert n^{-1}\sum_{i=1}^n \widehat{S}_n^2((i-1)/n) - \int_{0}^1 \widehat{S}^2_n(t)\, dt \Bigg \rvert = o_p(1),$$ which yields $$ \left( n^{-2}\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}^2, n^{-1}\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}X_i \right) = \left( \int_{0}^1 \widehat{S}^2_n(t)\, dt, \dfrac{1}{2}\left(\widehat{S}_n(1)^2 - \dfrac{1}{n} \sum_{i=1}^n X_i^2\right)\right) + o_p(1). $$ Since $\mathbf{x} \mapsto (x^2(1),\int_{0}^1 x^2(t)\, dt)$ is a jointly continuous function on $C([0,1])$, we apply Corollary~\ref{need} and conclude that $$ \left( n^{-2}\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}^2, n^{-1}\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}X_i \right) \stackrel{d}{\longrightarrow} \left( b\int_{0}^1 W(t)^2\, dt, \dfrac{b(W(1)^2-1)}{2}\right), \; \text{ as } n \to \infty.$$ Apply \textit{Slutsky's Theorem} to conclude that $$ \dfrac{\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}X_i }{\sqrt{\sum_{i=\lceil\sqrt{n} \rceil }^n S_{i-1}^2}} \stackrel{d}{\longrightarrow} \dfrac{\sqrt{b}(W(1)^2-1)}{2\sqrt{\int_{0}^1 W(t)^2\, dt}}.$$