### CSIR JUNE 2011 PART C QUESTION 83 (Singularities of $f(z) = \frac{e^z + 1}{e^z -1}$)

The function $f(z) = \frac{e^z + 1}{e^z -1}$ has what type of singularity at $z=0$?
1. a removable singularity.
2. a pole.
3. an essential singularity.
4. the residue of $f(z)$ at $z = 0$ is $2$.
I am spending my crucial Ph.D. time in writing this blog to help others to achieve in Mathematics. So please encourage me by following the blog by email. Also, visit my blog every day and share the solutions with friends. Thank you.
Solution:
Results:
1. $z_0$ is a removable singularity of $f(z)$ if $\lim_{z \to z_0} (z - z_0)f(z) = 0$.
2. $z_0$ is a (simple) pole of $f(z)$ if $\lim_{z \to z_0} (z - z_0)f(z)$ exists and is non-zero.
3. The residue of the function $f(z)$ at $z = z_0$ is given by the coefficient of $\frac{1}{z-z_0}$ in the Laurent series expansion of $f(z)$ about $z_0$. When $z_0$ is at worst a simple pole, it can be calculated by $\lim_{z \to z_0} \, (z-z_0)f(z)$.
Remark: From the above results, we observe that the limit used to check whether a singularity $z_0$ is removable or a simple pole of $f(z)$ is the same as the limit used to find the residue of $f(z)$ at $z_0$. Indeed, in both cases the limit is $\lim_{z \to z_0}(z-z_0)f(z)$. Assuming this limit exists, we conclude that,
1. The residue of $f(z)$ at $z = z_0$ is zero if and only if $z_0$ is a removable singularity of $f(z)$. In particular, there is no $\frac{1}{z-z_0}$ term in the Laurent series expansion of $f(z)$.
2. The residue of $f(z)$ at $z = z_0$ is non-zero if and only if $z_0$ is a simple pole of $f(z)$. In particular, the coefficient of the $\frac{1}{z-z_0}$ term in the Laurent series expansion of $f(z)$ equals the residue.

We have $e^z - 1 = z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots$. Then $\frac{e^z -1}{z} = 1 + \frac{z}{2!} + \frac{z^2}{3!} + \cdots$. We observe that the series expansion of the function $h(z) = \frac{e^z - 1}{z}$ has constant term $1$ and hence $h(0) = 1$. This shows that the function $g(z) = \frac{e^z+1}{h(z)}$ is analytic at $z = 0$. In particular, $\lim_{z \to 0} g(z)$ exists. We observe that $$f(z) = \frac{g(z)}{z}$$ and $\lim_{z \to 0} (z-0) f(z) = \lim_{z \to 0} g(z) = \lim_{z \to 0} \frac{z(e^z+1)}{e^z - 1} = \frac{e^0+1}{h(0)} = 2 \ne 0$. By the results given at the beginning, $0$ is a simple pole of $f(z)$.
So option 2 is true.

Now, the residue of $f(z)$ at $z=0$ is equal to $\lim_{z \to 0} \, (z-0)f(z) = \lim_{z \to 0}g(z) = 2$. Hence option 4 is True.
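As a quick sanity check (my own addition, not part of the exam solution), the residue can be confirmed numerically: $z\,f(z)$ should approach $2$ as $z \to 0$ from any direction.

```python
import cmath

def f(z):
    """f(z) = (e^z + 1) / (e^z - 1)."""
    return (cmath.exp(z) + 1) / (cmath.exp(z) - 1)

# z * f(z) -> residue 2 as z -> 0; in fact z * f(z) = z * coth(z/2) = 2 + z^2/6 + ...
for eps in (1e-2, 1e-4, 1e-6):
    for direction in (1, 1j, (1 + 1j) / abs(1 + 1j)):
        z = eps * direction
        assert abs(z * f(z) - 2) < eps
```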

FOLLOW BY EMAIL TO GET NOTIFICATION OF NEW PROBLEMS. SHARE YOUR DOUBTS IN THE COMMENTS BELOW. ALSO, YOU CAN SUGGEST PROBLEMS TO SOLVE WHICH WILL BE SOLVED IMMEDIATELY.

### NBHM 2020 PART C Question 28 Solution ($\mathcal F = \{f :(0,\infty) \to \Bbb R \mid f(x) = f(2x) \,\text{for all}\, x \in (0,\infty)\}$)

Let $\mathcal F = \{f :(0,\infty) \to \Bbb R \mid f(x) = f(2x) \,\text{for all}\, x \in (0,\infty)\}$. Which of the following statements are true?
1. $f \in \mathcal F$ implies $f$ is bounded,
2. $f \in \mathcal F$ implies $f$ is uniformly continuous,
3. $f \in \mathcal F$ implies $f$ is differentiable,
4. Every uniformly bounded sequence in $\mathcal F$ has a uniformly convergent subsequence.
Solution:
Option 1: (True) Let $f \in \mathcal F$. Then for all $x \in (0,\infty)$ we have $$f(x) = f(2x).$$ In particular $f(1) = f(2)$ and $$f([1,2]) = f([2,2^2]) = f([2^2,2^3]) = \cdots$$ In the other direction, the given condition can be written as $$f\left(\frac{x}{2}\right) = f(x)$$ and
$$f([1,2]) = f([\frac{1}{2},1]) = f([\frac{1}{2^2},\frac{1}{2}]) = \cdots$$
This shows that $f$ is determined by its values on the interval $[1,2]$, subject to the condition $f(1) = f(2)$. It also shows that $\text{Range}(f) = f([1,2])$. Since $f$ is continuous (continuity is implicitly assumed in the problem) and $[1,2]$ is compact, $f([1,2])$ is bounded. Hence $f$ is bounded and option 1 is true.
Option 3: (False) From the above observations, we can start with any function $g : [1,2] \to \Bbb R$ satisfying $g(1) = g(2)$ and extend it to a function $\widetilde g \in \mathcal F$. Note that $\widetilde g$ restricted to $[1,2]$ is $g$ itself.
Define $g: [1,2] \to \Bbb R$ by $g(x) = |x - \frac{3}{2}|$. Then $g(1) = g(2) = \frac{1}{2}$ and $g$ is not differentiable at $x = \frac{3}{2}$ (the standard proof that $|x|$ is not differentiable at $0$ works here as well). Since $g$ is not differentiable at $\frac{3}{2}$, its extension $\widetilde g$ is also not differentiable at $\frac{3}{2}$ (because $\widetilde g$ restricted to $[1,2]$ is $g$ itself). So option 3 is false.
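The extension described above can be made concrete (a sketch of mine, not from the exam; the names `extend` and `g` are my own): given $g$ on $[1,2)$ with $g(1)=g(2)$ understood, write $x = 2^k t$ with $t \in [1,2)$ and set $\widetilde g(x) = g(t)$.

```python
import math

def extend(g, x):
    """Extend g : [1,2) -> R to (0, inf) via x = 2**k * t with t in [1, 2)."""
    k = math.floor(math.log2(x))   # unique integer with 2**k <= x < 2**(k+1)
    return g(x / 2**k)

g = lambda t: abs(t - 1.5)         # g(1) = g(2) = 1/2, not differentiable at 3/2

# The extension satisfies the defining relation of the family F:
for x in (0.1, 0.37, 1.5, 2.9, 100.0):
    assert abs(extend(g, 2 * x) - extend(g, x)) < 1e-12
```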
Option 2: (False) Consider the function $g:(0,\infty) \to \Bbb R$ defined by $$g(x) = \sin\left(\frac{2\pi \log x}{\log 2}\right).$$
$$g(2x) = \sin\left(\frac{2\pi\log 2x}{\log 2}\right) = \sin\left(\frac{2\pi(\log 2+\log x)}{\log 2}\right) = \sin\left(\frac{2\pi\log x}{\log 2} + 2\pi\right) = g(x).$$
Therefore $g \in \mathcal F$.
Result: If $f$ is uniformly continuous on $(0,\infty)$ then $\lim_{x \to 0^+}f(x)$ exists; equivalently, $f$ can be extended to a continuous function on $[0,\infty)$.
In our example, $\lim_{x \to 0^+}g(x)$ does not exist (as $x \to 0^+$, $\log x \to -\infty$ and the sine keeps oscillating between $-1$ and $1$), so $g$ cannot be extended to a continuous function on $[0,\infty)$. Therefore $g$ is not uniformly continuous.
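Both claims about $g$ are easy to check numerically (my own addition, not part of the exam): the doubling invariance holds, and $g$ attains $\pm 1$ arbitrarily close to $0$, so no limit at $0$ can exist.

```python
import math

def g(x):
    return math.sin(2 * math.pi * math.log(x) / math.log(2))

# g(2x) = g(x):
for x in (0.01, 0.5, 1.0, 3.7, 42.0):
    assert abs(g(2 * x) - g(x)) < 1e-9

# On every interval [2**-(k+1), 2**-k] the argument of sin sweeps a full period,
# so g attains both 1 and -1 arbitrarily close to 0:
assert abs(g(2 ** (-20 + 0.25)) - 1) < 1e-6
assert abs(g(2 ** (-20 - 0.25)) + 1) < 1e-6
```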
Option 4: (False) I made a mistake in the argument for this option. I am correcting it and will update this option shortly. Sorry for the inconvenience.

### CSIR JUNE 2011 PART C QUESTION 81 SOLUTION ($f(z) = \frac{z}{3z+1}$ maps $H^+$ to $H^{+}$)

Define $H^{+} = \{z=x+iy \in \Bbb C : y > 0\}$
$H^{-} = \{z=x+iy \in \Bbb C : y < 0\}$
$L^{+} = \{z=x+iy \in \Bbb C : x > 0\}$
$L^{-} = \{z=x+iy \in \Bbb C : x < 0\}$
Then the bilinear transformation $$f(z) = \frac{z}{3z+1}$$ maps
1. $H^{+}$ onto $H^{+}$ and $H^{-}$ onto $H^{-}$.
2. $H^{+}$ onto $H^{-}$ and $H^{-}$ onto $H^{+}$.
3. $H^{+}$ onto $L^{+}$ and $H^{-}$ onto $L^{-}$.
4. $H^{+}$ onto $L^{-}$ and $H^{-}$ onto $L^{+}$.
Solution:
Option 1: (True) For $z \ne 0$ we have $$f(z) = \frac{z}{3z+1} = \left(3+\frac{1}{z}\right)^{-1}.$$
Let $z=x+iy \in H^{+}$. Then $\frac{1}{z} = \frac{x}{x^2+y^2}+i\frac{-y}{x^2+y^2}$, whose imaginary part is negative. Hence the map $z \mapsto \frac{1}{z}$ sends $H^{+}$ onto $H^{-}$ (and, by the same computation, $H^{-}$ onto $H^{+}$). The map $w \mapsto w + 3$ preserves all four regions since it is just a horizontal translation. Therefore if $z \in H^{+}$ then $\frac{1}{z} \in H^{-}$ and $\frac{1}{z}+3 \in H^{-}$, so $f(z) = (\frac{1}{z}+3)^{-1} \in H^{+}$. Therefore $f$ sends $H^+$
onto $H^{+}$. Similarly, if $z \in H^{-}$ then $\frac{1}{z} \in H^{+}$ and $\frac{1}{z}+3 \in H^{+}$, so $f(z) = (\frac{1}{z}+3)^{-1} \in H^{-}$. Therefore $f$ sends $H^{-}$ onto $H^{-}$. (The maps are onto because each of the steps $z \mapsto \frac{1}{z}$ and $w \mapsto w+3$ is invertible on these regions.)

Option 2:(False)
Option 3:(False)
Option 4:(False)
We have proved that $f$ sends $H^+$ onto $H^{+}$ and $H^{-}$ onto $H^{-}$. Hence they cannot be mapped to any other sets. Hence all the remaining options are false.
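The mapping claim is easy to spot-check numerically (an illustration of mine, not from the exam): for random points with positive imaginary part, $f(z)$ again has positive imaginary part, and conjugate points land in the lower half-plane.

```python
import random

def f(z):
    return z / (3 * z + 1)

random.seed(0)
for _ in range(1000):
    z = complex(random.uniform(-10, 10), random.uniform(1e-3, 10))  # z in H+
    assert f(z).imag > 0                 # f maps H+ into H+
    assert f(z.conjugate()).imag < 0     # f maps H- into H-
```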

### NBHM 2020 PART C Question 26 Solution (Jordan canonical form and Nilpotent matrices)(NBHM interview concept)

Let $T$ be a nilpotent linear operator on the vector space $\Bbb R^5$ (i.e., $T^k = 0$ for some k). Let $d_i$ denote the dimension of the kernel of $T^{i}$ . Which of the following can possibly occur as a value of $(d_1,d_2,d_3)$?
1. $(1,2,3)$,
2. $(2,3,5)$,
3. $(2,2,4)$,
4. $(2,4,5)$.

Solution:
Let $A$ be a $5 \times 5$ nilpotent matrix. We know that its eigenvalues are $0,0,0,0,0$. By the Jordan canonical form theorem, there exists an invertible matrix $P$ such that $J(A) := P A P^{-1}$ is the Jordan canonical form of $A$. Now, $J(A)$ is a block diagonal matrix in which each block has size between one and five. For example, a block of size five is of the form $$\begin{bmatrix}0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \\ 0&0&0&0&0\end{bmatrix}$$
a block of size four is of the form
$$\begin{bmatrix}0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \\ 0&0&0&0\end{bmatrix}$$
and so on for the blocks of size $3,2,1$.
Observation 1: If $B$ is a size $k$ Jordan block of $A$ then $\text{N}(B) := \text{nullity of } B = \text{dimension of the kernel of } B = 1$ and $\text{R}(B) := \text{rank of } B = \text{dimension of the range of } B = k-1$.
Proof: $B$ is a $k \times k$ matrix whose first $k-1$ rows are linearly independent and whose last row is zero. The result follows.
Observation 2: Let $B$ be the following size $5$ Jordan block.
$$\begin{bmatrix}0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \\ 0&0&0&0&0\end{bmatrix}$$
Then $B^2, B^3,B^4$ and $B^5$ are given respectively by the following matrices
$$\begin{bmatrix}0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \\ 0&0&0&0&0 \\ 0&0&0&0&0\end{bmatrix}$$
$$\begin{bmatrix}0&0&0&1&0 \\ 0&0&0&0&1 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0\end{bmatrix}$$
$$\begin{bmatrix}0&0&0&0&1 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0\end{bmatrix}$$
$$\begin{bmatrix}0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0\end{bmatrix}$$
Note that the above matrices are no longer in Jordan canonical form, so a power of a Jordan block need not be in Jordan form. We observe that the Jordan block $B$ has one zero row, and at each step of taking powers the last non-zero row becomes a new zero row. Hence for $1 \le k \le 5$, $$N(B^k) = k.$$
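Observation 2 can be verified with a short computation (a NumPy sketch of mine, not part of the original solution):

```python
import numpy as np

B = np.diag(np.ones(4), k=1)   # the 5x5 nilpotent Jordan block

def nullity(M):
    return M.shape[0] - np.linalg.matrix_rank(M)

# N(B^k) = k for 1 <= k <= 5:
for k in range(1, 6):
    assert nullity(np.linalg.matrix_power(B, k)) == k
```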
Observation 3: Let $A$ be an $n \times n$ nilpotent matrix. Then the number of Jordan blocks in the Jordan decomposition of $A$ is equal to $N(A)$, the dimension of the kernel of $A$.
Proof: Let $J(A)$ equal to
$\begin{bmatrix}B_1&0&0&0&0 \\ 0&B_2&0&0&0 \\ 0&0&B_3&0&0 \\ 0&0&0&B_4&0 \\ 0&0&0&0&B_5\end{bmatrix}$
be the Jordan canonical form of the $n \times n$ matrix $A$ (for illustration we take $5$ blocks in this decomposition). Let their sizes be $k_1,k_2,k_3,k_4$ and $k_5$; then $k_1 + \cdots +k_5 = n$. In particular, the sizes of the blocks form a partition of $n$. For example, if we consider a $5 \times 5$ nilpotent matrix then the possible Jordan block sizes in the Jordan canonical form are $5$
(one single $5$ block), $4+1$ (one $4$ block and one $1$ block), $3+2$, $3+1+1$, $2+2+1$, $2+1+1+1$, $1+1+1+1+1$, given by all the partitions of $5$. Now, by Observation 1, $N(B_i) = 1$ for all $1 \le i \le 5$. Since $J(A)$ is a block diagonal matrix with blocks $B_1,\dots,B_5$ we have $N(J(A)) = N(B_1)+N(B_2)+\cdots+N(B_5) = 5$. Since $A$ and $J(A)$ are similar matrices we have $N(A) = N(J(A)) = 5 = \text{ number of blocks in }J(A)$. This proves the observation.
Observation 4: The $k$th power of $J(A)$,
$$\begin{bmatrix}B_1&0&0&0&0 \\ 0&B_2&0&0&0 \\ 0&0&B_3&0&0 \\ 0&0&0&B_4&0 \\ 0&0&0&0&B_5\end{bmatrix}^k,$$ is equal to
$$\begin{bmatrix}B_1^k&0&0&0&0 \\ 0&B_2^k&0&0&0 \\ 0&0&B_3^k&0&0 \\ 0&0&0&B_4^k&0 \\ 0&0&0&0&B_5^k\end{bmatrix}.$$
With these observations we will investigate the options.
option 1: $(1,2,3)$ (Possible) It is given that $d_1 = 1$, $d_2 = 2$ and $d_3 = 3$. By Observation 3, we have to find a $5 \times 5$ matrix $A$ such that $J(A)$ has one Jordan block, $N(A^2) = 2$ and $N(A^3) = 3$. Let $A$ be the $5\times 5$ matrix given below, which is itself in Jordan canonical form.
$$\begin{bmatrix}0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \\ 0&0&0&0&0\end{bmatrix}$$
Then $A^2, A^3$ are given respectively by the matrices
$$\begin{bmatrix}0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \\ 0&0&0&0&0 \\ 0&0&0&0&0\end{bmatrix}$$
$$\begin{bmatrix}0&0&0&1&0 \\ 0&0&0&0&1 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0\end{bmatrix}$$
Clearly $N(A^2) = 2$ and $N(A^3) = 3$.
option 2: $(2,3,5)$ (Not possible) It is given that $d_1 = 2$, $d_2 = 3$ and $d_3 = 5$. By Observation 3, we have to find a $5 \times 5$ matrix $A$ such that $J(A)$ has two Jordan blocks, $N(A^2) = 3$ and $N(A^3) = 5$. Since $J(A)$ has two Jordan blocks, the possible block sizes are $3+2$ or $4+1$. Suppose $J(A)$ has block decomposition $3+2$. Let $B_1$ be the size $3$ block and $B_2$ be the size $2$ block in $J(A)$. When we form $(J(A))^2$, the last non-zero row of $B_1$ becomes a zero row, and similarly in $B_2$. Hence $N((J(A))^2) = N(J(A))+2 = 2+2 = 4$. But we want this number to be $3$ for option 2, so the block decomposition $3+2$ is not possible for option 2. We continue and calculate $N((J(A))^3)$, which will be helpful for option 4. When we form $(J(A))^3$, again the last non-zero row of $B_1$ becomes a zero row, while $B_2^2$ is already zero and has no non-zero row left to lose. Hence $N((J(A))^3) = N((J(A))^2)+1 = 4+1 = 5$. Hence $(d_1,d_2,d_3) = (2,4,5)$ is possible, which is option 4.
Suppose $J(A)$ has block decomposition $4+1$. Let $B_1$ be the size $4$ block and $B_2$ be the size $1$ block in $J(A)$. When we form $(J(A))^2$, the last non-zero row of $B_1$ becomes a zero row. Since $B_2$ is a $1 \times 1$ block, it has no non-zero row to lose. Hence $N((J(A))^2) = 2 + 1 = 3$. Again, when we form $(J(A))^3$, the same argument applies and $N((J(A))^3) = 3 + 1 = 4$. But we want this number to be $5$. Hence the block decomposition $4+1$ is also not possible for option 2. Since $3+2$ and $4+1$ are the only two possible block decompositions, we have proved that $(d_1,d_2,d_3) = (2,3,5)$ is not possible.
option 3: $(2,2,4)$ (Not possible) The argument given in option 2 shows that if $d_1 = 2$ then the only possible triples $(d_1,d_2,d_3)$ are $(2,3,4)$ (from the blocks $4+1$) and $(2,4,5)$ (from the blocks $3+2$). Hence $(2,2,4)$ is not possible.
option 4: $(2,4,5)$ (Possible) This was done as part of option 2: the block decomposition $3+2$ realizes it.
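The whole case analysis can be double-checked by brute force (my own sketch, not part of the exam solution): build the nilpotent Jordan form for each partition of $5$ and read off $(d_1,d_2,d_3)$.

```python
import numpy as np

def d_sequence(block_sizes):
    """(d1, d2, d3) for the nilpotent Jordan form with the given block sizes."""
    n = sum(block_sizes)
    J = np.zeros((n, n))
    pos = 0
    for s in block_sizes:
        if s > 1:
            J[pos:pos + s, pos:pos + s] = np.diag(np.ones(s - 1), k=1)
        pos += s
    return tuple(int(n - np.linalg.matrix_rank(np.linalg.matrix_power(J, k)))
                 for k in (1, 2, 3))

partitions_of_5 = [(5,), (4, 1), (3, 2), (3, 1, 1), (2, 2, 1), (2, 1, 1, 1), (1, 1, 1, 1, 1)]
achievable = {d_sequence(p) for p in partitions_of_5}

assert (1, 2, 3) in achievable       # option 1: realized by the single block of size 5
assert (2, 4, 5) in achievable       # option 4: realized by blocks 3 + 2
assert (2, 3, 5) not in achievable   # option 2
assert (2, 2, 4) not in achievable   # option 3
```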

A pictorial approach of the same can be seen here:
https://maksmaths.blogspot.com/2020/04/nbhm-problem.html


### CSIR JUNE 2011 PART C QUESTION 81 SOLUTION (Analytic function with $f(0) = \frac{1}{2}$ and $f(\frac{1}{2}) = 0$)

Let $f$ be an analytic function from the unit disc $\Delta$ to $\Delta$ with $\Delta = \{z \in \Bbb C : |z| < 1\}$ satisfying $f(0) = \frac{1}{2}$ and $f(\frac{1}{2}) = 0$.  Which of the following is/are true?
1. $|f^{'}(0)| \le \frac{3}{4}$,
2. $|f^{'}(\frac{1}{2})| \le \frac{4}{3}$.
3. option 1 and 2 both are true.
4. $f$ is the identity function.

Solution:
Schwarz–Pick lemma:
Let $f$ be an analytic function from the unit disc $\Delta$ to $\Delta$. Then for every $a \in \Delta$ we have $|f^{'}(a)| \le \frac{1-|f(a)|^2}{1-|a|^2}$.
Given that $f(0) = \frac{1}{2}$. This implies that $$|f^{'}(0)| \le \frac{1 - (\frac{1}{2})^2}{1-(0)^2} = \frac{3}{4}.$$
Given that $f(\frac{1}{2}) = 0$. This implies that $$|f^{'}(\frac{1}{2})| \le \frac{1 - (0)^2}{1-(\frac{1}{2})^2} = \frac{4}{3}.$$ So option 1, 2 and 3 are true.
option 4: (False) Let $f(z) = \frac{az + b}{cz + d}$. Now, $f(0) = \frac{1}{2}$ implies that $d = 2b$, so $f(z) = \frac{az+b}{cz+2b}$. Next, $f(\frac{1}{2}) = 0$ implies that $a = -2b$. Let us take $a = 2$, $b= -1$ and $c= 1$ (so that $d = 2b = -2$); then all these conditions are met and $ad-bc \ne 0$. Therefore the function $$f(z) = \frac{2z - 1}{z - 2}$$ satisfies the given conditions of the problem and is different from the identity function. (One can check that $|z| = 1$ implies $|2z-1| = |z-2|$, so $f$ does map $\Delta$ into $\Delta$.)
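A quick check of this example (my own, not from the exam): $f(z)=\frac{2z-1}{z-2}$ satisfies both interpolation conditions, maps the disc into itself, and in fact attains both Schwarz–Pick bounds with equality, since $f'(z) = \frac{-3}{(z-2)^2}$.

```python
import random

def f(z):
    return (2 * z - 1) / (z - 2)

def fp(z):                      # derivative: f'(z) = -3 / (z - 2)^2
    return -3 / (z - 2) ** 2

assert abs(f(0) - 0.5) < 1e-12 and abs(f(0.5)) < 1e-12   # interpolation conditions
assert abs(abs(fp(0)) - 3 / 4) < 1e-12                   # equality in option 1's bound
assert abs(abs(fp(0.5)) - 4 / 3) < 1e-12                 # equality in option 2's bound
assert abs(f(0.25) - 0.25) > 0.01                        # f is not the identity

random.seed(1)
for _ in range(1000):
    z = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    if abs(z) < 1:
        assert abs(f(z)) < 1    # f maps the unit disc into itself
```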

### CSIR JUNE 2011 PART B QUESTION 52 SOLUTION (Random variable with $E(X) = Var(X)$)

Let $X$ be a random variable with $E(X) = Var(X)$. Then the distribution of $X$ has to be
1. Poisson,
2. Exponential,
3. Normal,
4. cannot be identified from the given condition

Solution:
Poisson: Let $X$ be a discrete random variable. $X$ has a Poisson distribution with a positive real parameter $\lambda$ if the probability mass function of $X$ is given by $$P(X = n) = \frac{\lambda^n e^{-\lambda}}{n!}$$ for $n \ge 0$. In this case mean $=$ variance $= \lambda$.
Normal: Let $X$ be a continuous random variable. $X$ has a normal distribution with parameters $\mu$ and $\sigma$ if the probability density function of $X$ is $$\frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}.$$ In this case mean $= \mu$ and variance $= \sigma^2$. Taking $\mu = 1$ and $\sigma = 1$ we get mean $=$ variance.
So we conclude that the condition $E(X) = Var(X)$ can occur for both the Poisson and the normal distribution (and for many other distributions; please comment below if you know another one). This shows that the condition $E(X) = Var(X)$ does not determine the distribution uniquely. So option 4 is correct.
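A small simulation illustrating the point (my own sketch; these are sample statistics, not exact values): both a Poisson($3$) sample and a Normal($1,1$) sample have sample mean approximately equal to sample variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

poisson_sample = rng.poisson(lam=3.0, size=n)            # mean = variance = 3
normal_sample = rng.normal(loc=1.0, scale=1.0, size=n)   # mean = 1, variance = 1

for sample in (poisson_sample, normal_sample):
    assert abs(sample.mean() - sample.var()) < 0.1       # approximately equal
```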

### CSIR JUNE 2011 PART C QUESTION 80 SOLUTION ($\text{Re}(f(z))$ bounded then $f(z)$ is constant: Three different proofs)

Let $f$ be an entire function. If $\text{Re} f$ is bounded then
1. $\text{Im} f$ is constant,
2. $f$ is constant,
3. $f \equiv 0$,
4. $f^{'}$ is a non-zero constant.

Solution:
First, we prove that option 2 is correct which will imply that option 1 is also true.

Option 2: (True)
Proof 1: Given that $f$ is entire. Assume $f$ is not a constant function; then by the little Picard theorem, the range of $f$ is either $\Bbb C$ or $\Bbb C \backslash \{a\}$ for some $a \in \Bbb C$. Therefore $\text{Re}(f)$ is unbounded. But it is given that $\text{Re}(f)$ is bounded. Hence $f$ has to be a constant function.
Proof 2:
Assume that the real part of $f$ is bounded; then there exists $k$ such that $\text{Re}(f(z)) \le k$. We define $h(z) = \frac{1}{k - f(z) + 1}$. Now $\text{Re}f(z) \le k$ implies that $\text{Re} (k - f(z)) \ge 0$ and $\text{Re}(k-f(z)+1) \ge 1$. This shows that the denominator $k-f(z)+1$ is never zero and hence $h(z)$ is an entire function. Write $k - f(z) + 1 = u + iv$, so that $h(z) = \frac{1}{u + i v} = \frac{u-iv}{u^2 + v^2} = \frac{u}{u^2+v^2} -i \frac{v}{u^2 + v^2}$. Then $|h(z)|^2 = (\frac{u}{u^2+v^2})^2+(\frac{v}{u^2+v^2})^2 = \frac{1}{u^2 + v^2} \le 1$, since $u = \text{Re}(k-f(z)+1) \ge 1$. This shows that $|h(z)| \le 1$, hence $h(z)$ is a bounded entire function, which has to be constant by Liouville's theorem. But $h(z)$ constant implies that $f(z)$ is constant.
Proof 3: Let $f(z) = u + iv$. It is given that $u$ is bounded, say $u \le k$. Consider the function $h(z) = e^{f(z)}$. Now, $|h(z)| = |e^{u+iv}| = |e^u| \,|e^{iv}| = e^u \le e^k$, so $h$ is a bounded entire function. By Liouville's theorem $h(z)$ is constant. Then $h' = f'e^{f} \equiv 0$ and $e^{f}$ never vanishes, so $f' \equiv 0$ and $f(z)$ is also constant.
option 4: (False) We have proved that $f$ is constant and hence its derivative $f^{'}$ has to be zero.
option 1: (True) We have shown in option 2 that the function $f$ itself is constant. Hence $\text{Im}(f)$ has to be constant.
option 3:(False) Consider the constant function $f(z) = 1$ for all $z \in \Bbb C$. Then $\text{Re}(f)$ is bounded but $f$ is not identically zero.


### CSIR JUNE 2011 PART C QUESTION 79 SOLUTION (Analytic functions and Identity theorem application)

Let $\Bbb D = \{z \in \Bbb C : |z| < 1\}$ be the unit disc. Let $f : \Bbb D \to \Bbb C$ be an analytic function satisfying $$f(\frac{1}{n}) = \frac{2n}{3n+1}.$$ Then
1. $f(0) = \frac{2}{3}$,
2. $f$ has simple pole at $z = -3$,
3. $f(3) = \frac{1}{3}$,
4. no such $f$ exists.

Solution: We will explicitly calculate the function $f$; then the options are easy to verify.
Identity Theorem: Let $\Bbb D = \{z \in \Bbb C : |z| < 1\}$ be the unit disc. Let $f,g : \Bbb D \to \Bbb C$ be two analytic functions. If $f(\frac{1}{n}) = g(\frac{1}{n})$ for all $n \in \Bbb N$ then $f \equiv g$ (i.e., $f(z) = g(z)$ for all $z \in \Bbb D$). This applies because the points $\frac{1}{n}$ accumulate at $0 \in \Bbb D$.

Writing $\frac{2n}{3n+1} = \frac{2}{3 + \frac{1}{n}}$ suggests the candidate function. Let $g : \Bbb D \to \Bbb C$ be the analytic function defined by $g(z) = \frac{2}{3+z}$ (note that $g$ has no singularities in $\Bbb D$; its only pole is at $-3$). Then the hypothesis of the problem says that $f(\frac{1}{n}) = g(\frac{1}{n})$ for all $n \in \Bbb N$. Therefore by the identity theorem above we have $f(z) = g(z)$ for all $z \in \Bbb D$. Hence $$f(z) = \frac{2}{3+z}.$$
option 1: (True) Clearly $f(0) = \frac{2}{3}$.
option 2: (True) The formula $\frac{2}{3+z}$ extends $f$ analytically to $\Bbb C \setminus \{-3\}$, and $-3$ is clearly a simple pole of this extension.
option 3: (True) Clearly $f(3) = \frac{2}{3+3} = \frac{1}{3}$, evaluating the same formula.
option 4: (False)  The function $f(z) = \frac{2}{3+z}$ is one such function.
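A short exact-arithmetic check (my own addition): $f(z)=\frac{2}{3+z}$ really interpolates the given data and yields the claimed values.

```python
from fractions import Fraction

def f(z):
    return Fraction(2) / (3 + z)

# f(1/n) = 2n/(3n+1) for all n:
for n in range(1, 100):
    assert f(Fraction(1, n)) == Fraction(2 * n, 3 * n + 1)

assert f(Fraction(0)) == Fraction(2, 3)   # option 1
assert f(Fraction(3)) == Fraction(1, 3)   # option 3
```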

### CSIR JUNE 2011 PART C QUESTION 77 SOLUTION (Diagonalizability of Nilpotent matrices)

Let $N$ be a $3 \times 3$ non-zero matrix with the property $N^3 = 0$. Which of the following is/are true?
1. $N$ is not similar to a diagonal matrix,
2. $N$ is similar to a diagonal matrix,
3. $N$ has one non-zero eigenvector,
4. $N$ has three linearly independent eigenvectors.

Solution: An $n \times n$ matrix $N$ is said to be nilpotent if $N^k = 0$ for some $k \in \Bbb N$. A matrix $M$ is said to be diagonalizable over a field $\Bbb F$ if there exists an invertible matrix $P$ with entries from the field $\Bbb F$ such that $$P M P^{-1} = \text{diag}(\lambda_1,\lambda_2,\dots,\lambda_n),$$ where $\text{diag}(\lambda_1,\dots,\lambda_n)$ is the diagonal matrix whose diagonal entries are the eigenvalues of $M$.
Observation 1: All the eigenvalues of a nilpotent matrix are zero.
Proof: Let $N$ be a nilpotent matrix and $\lambda$ be an eigenvalue of $N$. Then $\lambda^k$ is an eigenvalue of $N^k = 0$. Hence $\lambda^k = 0$ and in turn $\lambda = 0$.
Result 1: If $A$ is a non-diagonal matrix whose eigenvalues are all equal, then $A$ is not diagonalizable.
Proof: Suppose $A$ is diagonalizable. Then there exists an invertible matrix $P$ such that $P A P^{-1} = \text{diag}(\lambda,\lambda,\dots,\lambda)$. This implies that $A = P^{-1}\text{diag}(\lambda,\lambda,\dots,\lambda)P$. But $\text{diag}(\lambda,\dots,\lambda)$ is a scalar matrix and hence commutes with all matrices. So $A = P^{-1}\text{diag}(\lambda,\dots,\lambda)P = \text{diag}(\lambda,\dots,\lambda)$, a contradiction to the fact that $A$ is a non-diagonal matrix.
Result 2: An $n \times n$ matrix $M$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors.
option 1: (True) By Observation 1, all the eigenvalues of $N$ are zero. If $N$ were a diagonal matrix, it would then be the zero matrix, contradicting $N \ne 0$; so $N$ is non-diagonal, and Result 1 shows that $N$ is not similar to a diagonal matrix.
option 2:(False) Because option 1 is true.
option 3: (False) We have shown that $0$ is an eigenvalue of $N$. Hence there exists an eigenvector $v \ne 0$ such that $N v = 0$. Let $\alpha$ be any non-zero scalar; then $N (\alpha v) = \alpha Nv = 0$, so any non-zero scalar multiple of an eigenvector is again an eigenvector. Hence $N$ has more than one non-zero eigenvector.
option 4: (False) We have shown in option 1 that $N$ is not diagonalizable. Hence by Result 2 it cannot have three linearly independent eigenvectors.
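The conclusions can be checked on a concrete matrix (my own NumPy sketch): the $3 \times 3$ Jordan block is nilpotent, all its eigenvalues vanish, and its eigenspace is only one-dimensional, so it has no basis of eigenvectors.

```python
import numpy as np

N = np.array([[0.0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

assert not np.linalg.matrix_power(N, 3).any()   # N^3 = 0, so N is nilpotent
assert np.allclose(np.linalg.eigvals(N), 0)     # all eigenvalues are 0

# geometric multiplicity of 0 is nullity(N) = 3 - rank(N) = 1 < 3,
# so N has no basis of eigenvectors and is not diagonalizable
assert 3 - np.linalg.matrix_rank(N) == 1
```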


### NBHM 2020 PART C Question 24 Solution ($B = \{(z,w) \in \Bbb C^2 : |Re\, z|^2 + |Re\, w|^2 = 1\}$ is not bounded )

Listed below are four subsets of  $\Bbb C^2$. For each of them, write “Bounded” or “Unbounded” in the box as the case may be. ($Re(z)$ denotes the real part of a complex variable $z$).
1. $A = \{(z,w) \in \Bbb C^2 : z^2 + w^2 = 1\}$.
2. $B = \{(z,w) \in \Bbb C^2 : |Re\, z|^2 + |Re\, w|^2 = 1\}$.
3. $C = \{(z,w) \in \Bbb C^2 : |z|^2 + |w|^2 = 1\}$.
4. $D = \{(z,w) \in \Bbb C^2 : |z|^2 - |w|^2 = 1\}$.
Solution: Consider the standard metric in $\Bbb C$ given by $d(z,w) = |z-w|$. Now, the metric on $\Bbb C^2$ is given by $$d((z_1,w_1),(z_2,w_2)) = \sqrt{|z_2 - z_1|^2 + |w_2 - w_1|^2}.$$
Option 1: (Unbounded) We have $\{(it, \sqrt{1+t^2}) : t \in \Bbb R\} \subseteq A$, since $(it)^2 + (\sqrt{1+t^2})^2 = -t^2 + 1 + t^2 = 1$. Now, $$d((0,0),(it,\sqrt{1+t^2})) = \sqrt{t^2+(1+t^2)} \ge \sqrt{t^2} = |t|,$$ which can be made arbitrarily large. So $A$ is unbounded.
Option 2: (Unbounded) $\{(1, it) : t \in \Bbb R\} \subseteq B$, since $|\text{Re}\, 1|^2 + |\text{Re}\, it|^2 = 1 + 0 = 1$. Now, $$d((0,0),(1, it)) = \sqrt{1+t^2} \ge |t|,$$ which can be made arbitrarily large. So $B$ is unbounded.
Option 3:(Bounded) $C = \{(z,w) \in \Bbb C^2 : |z|^2 + |w|^2 = 1\} = \{(z,w) : d((0,0),(z,w)) = 1\}$. This is the unit sphere in $\Bbb C^2$ and hence $C$ is bounded.
Option 4: (Unbounded) We have $\cosh^2 t - \sinh^2 t = 1$ for $t \in \Bbb R$. Hence $\{(\cosh t, \sinh t) : t \in \Bbb R\} \subseteq D$. Now, $$d((0,0),(\cosh t, \sinh t)) = \sqrt{\cosh^2 t+\sinh^2 t} = \sqrt{\left(\frac{e^t + e^{-t}}{2}\right)^2+\left(\frac{e^t-e^{-t}}{2}\right)^2} = \sqrt{\frac{e^{2t}+e^{-2t}}{2}},$$ which can be made arbitrarily large. So $D$ is unbounded.
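The explicit families of points used above are easy to verify (a sketch of mine, not part of the exam): each point lies in the stated set while its norm grows without bound as $t$ grows.

```python
import math

def in_A(z, w): return abs(z ** 2 + w ** 2 - 1) < 1e-9
def in_B(z, w): return abs(z.real ** 2 + w.real ** 2 - 1) < 1e-9
def in_D(z, w): return abs(abs(z) ** 2 - abs(w) ** 2 - 1) < 1e-9

for t in (1.0, 3.0, 5.0):
    assert in_A(1j * t, complex(math.sqrt(1 + t ** 2)))        # (it, sqrt(1+t^2)) in A
    assert in_B(complex(1), 1j * t)                            # (1, it) in B
    assert in_D(complex(math.cosh(t)), complex(math.sinh(t)))  # (cosh t, sinh t) in D
```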

### NBHM 2020 PART C Question 27 Solution ($f(x,y) = x + y$ is an open map, preserves dense sets but not closed or discrete sets)

Consider $f : \Bbb R^2 \to \Bbb R$ defined by $f(x,y) = x + y$. For each of the following statements, state whether it is true or false.
1. Image under $f$ of any open set is open,
2. Image under $f$ of any closed set is closed,
3. Image under $f$ of any dense set is dense,
4. Image under $f$ of any discrete set is discrete.

Solution:
Option 1: (True) Consider the function $f$. We claim that $f$ is an open map. Let $U$ be an open set in $\Bbb R^2$. We need to show that $f(U)$ is open in $\Bbb R$. Let $a \in f(U)$; we will show that there is a neighborhood of $a$ which is entirely contained in $f(U)$. Let $(x,y) \in U \subseteq \Bbb R^2$ be such that $f(x,y)=x+y = a$; such a point exists since $a \in f(U)$. Now $(x,y) \in U$ and $U$ is open, so we can find an $\epsilon > 0$ such that the open set $V = (x-\epsilon,x+\epsilon) \times (y-\epsilon,y+\epsilon) \subseteq U$. Now, $f(V) = ((x+y)-2\epsilon, (x+y)+2\epsilon) = (a-2 \epsilon, a+2\epsilon)$ is an open interval containing $a$. Since $V \subseteq U$ we have $f(V) \subseteq f(U)$, so $f(U)$ contains an open interval around $a$. Therefore $f(U)$ is open.
option 2: (False) We will construct a closed set $F$ such that $f(F)$ is not closed. Let $F = \Bbb Z \times \sqrt r \Bbb Z$ where $r$ is a positive integer that is not a perfect square (so $\sqrt r$ is irrational). Then $F$ is closed and $f(F) = \Bbb Z + \sqrt r \Bbb Z$, which is a proper dense subset of $\Bbb R$ (why is this dense? see my next post). A proper dense subset of $\Bbb R$ cannot be closed, so $f(F)$ is not closed.
option 4:(False) Consider the example given in option 2. $F$ is discrete but $f(F)$ is not.
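The density of $\Bbb Z + \sqrt r\, \Bbb Z$ can at least be observed numerically (my own sketch, with $r = 2$ and the target $\pi$ chosen arbitrarily): searching over $n$ produces elements $m + n\sqrt 2$ as close to the target as we like.

```python
import math

target = math.pi
best = min(
    abs(target - (round(target - n * math.sqrt(2)) + n * math.sqrt(2)))
    for n in range(1, 10_000)
)
# the best error shrinks as the search range grows, reflecting the
# equidistribution of n*sqrt(2) modulo 1
assert best < 1e-3
```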
option 3: (True) Let $D$ be a subset of a metric space $X$; then $D$ is dense in $X$ if for every $x \in X$ there exists a sequence $d_n$ in $D$ such that $d_n$ converges to $x$. We will use this definition. Consider the function $f$ and let $D$ be a dense subset of $\Bbb R^2$. We claim that $f(D)$ is dense in $\Bbb R$. Let $a \in \Bbb R$ be arbitrary. Since $f$ is surjective, there exists a point $(x,y) \in \Bbb R^2$ such that $f(x,y) = a$ (for instance $(a,0)$). Since $D$ is dense in $\Bbb R^2$, there exists a sequence $(x_n,y_n) \in D$ such that $(x_n,y_n)$ converges to $(x,y)$. Since the function $f$ is continuous, $f((x_n,y_n)) = x_n+y_n$ converges to $x+y = a$. Since $(x_n,y_n)$ is a sequence in $D$, $(x_n+y_n)$ is a sequence in $f(D)$ converging to $a$. Therefore $f(D)$ is dense in $\Bbb R$.
A more general result is true:
every continuous surjective (onto) map between metric spaces sends dense sets to dense sets. The proof is essentially the same as above. Note that in this problem the given $f$ is a surjective continuous map.

### NBHM 2020 PART C Question 27 Solution (Structure of the group $\frac{\Bbb Q}{n \Bbb Z}$)

For $n$ a positive integer, let $\frac{\Bbb Q}{n\Bbb Z}$ be the quotient of the group of rational numbers $\Bbb Q$ (under addition) by the subgroup $n\Bbb Z$. For each of the following statements, state whether it is true or false.
1) Every element of
$\frac{\Bbb Q}{n\Bbb Z}$ is of finite order,
2) There are only finitely many elements in $\frac{\Bbb Q}{n\Bbb Z}$ of any given finite order,
3) Every proper subgroup of $\frac{\Bbb Q}{n\Bbb Z}$ is finite,
4)$\frac{\Bbb Q}{2\Bbb Z}$ and $\frac{\Bbb Q}{5\Bbb Z}$ are isomorphic as groups.

Solution:
The elements of $\frac{\Bbb Q}{ \Bbb Z}$ are cosets of the form $\Bbb Z + s$ where $s = \frac{p}{q}$ is rational (written with $\gcd(p,q)=1$). The coset $\Bbb Z + s$ equals the identity element $\Bbb Z + 0 = \Bbb Z$ if and only if $s \in \Bbb Z$.
Main Result : Let $n$ be a positive integer then $$\frac{\Bbb Q}{\Bbb Z} \cong \frac{\Bbb Q}{n \Bbb Z}.$$
Proof: The function which takes $\Bbb Z + x$ to $n \Bbb Z + nx$ is the required isomorphism.
In view of this result, it is enough to verify the given options for the group $\frac{\Bbb Q}{\Bbb Z}$.
option 4: (True) By the above result we have the group $\frac{\Bbb Q}{\Bbb Z}$ is isomorphic to the groups $\frac{\Bbb Q}{2 \Bbb Z}$ and $\frac{\Bbb Q}{5 \Bbb Z}$. By transitivity, we have $\frac{\Bbb Q}{2 \Bbb Z}$ is isomorphic to $\frac{\Bbb Q}{5 \Bbb Z}$.

option 1: (True) We claim that every element of $\frac{\Bbb Q}{\Bbb Z}$ has finite order.
Let $x = \Bbb Z + \frac{p}{q}$ be an arbitrary element of $\frac{\Bbb Q}{\Bbb Z}$. Let $m$ be the smallest integer such that $mp = q r$ for some $r \in \Bbb N$ (i.e, $q$ divides $mp$). Such a $m$ exists because $m = q$ is one possibility such that $q \mid mp$ and so we can consider the minimal one among the $m$ satisfying this condition by the well ordering principle of $\Bbb N$.
We have, $$x + x = \Bbb Z + \frac{2p}{q}$$ and adding $x$ to itself $m$ times gives $$mx = m\Big(\Bbb Z + \frac{p}{q}\Big) = \Bbb Z + \frac{mp}{q} = \Bbb Z + r = \Bbb Z.$$ By the minimality of $m$, it is the smallest positive integer with $mx = e$, so the order of $x = \Bbb Z + \frac{p}{q}$ equals $m$. Hence every element of $\frac{\Bbb Q}{n \Bbb Z}$ is of finite order.
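This order computation is easy to check by brute force. Below is a minimal sketch in Python; the helper name `order_in_Q_mod_Z` is my own, not part of the original solution.

```python
from math import gcd

def order_in_Q_mod_Z(p, q):
    """Order of the coset Z + p/q in Q/Z: the smallest m >= 1 with q | m*p."""
    m = 1
    while (m * p) % q != 0:
        m += 1
    return m

# For gcd(p, q) = 1 the order equals the denominator q.
assert order_in_Q_mod_Z(3, 7) == 7
assert order_in_Q_mod_Z(5, 12) == 12
# In general the order works out to q // gcd(p, q).
assert order_in_Q_mod_Z(4, 6) == 3 == 6 // gcd(4, 6)
```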

option 2:(True)
Observation 1: Let $x = \Bbb Z + \frac{p}{q}$ be an arbitrary element of $\frac{\Bbb Q}{\Bbb Z}$ and write $p = qm + r$ with $0 \le r <q$. Since $\gcd(p,q)=1$ (and $q>1$), we have $1 \le r < q$ and $\frac{p}{q} = m + \frac{r}{q}$. Now, $$\Bbb Z + \frac{p}{q} = \Bbb Z + \Big(m+\frac{r}{q}\Big) = \Bbb Z + \frac{r}{q}.$$
Hence, for an arbitrary element $\Bbb Z + \frac{p}{q}$ of $\frac{\Bbb Q}{\Bbb Z}$, we may always assume that $1 \le p < q$.

Observation 2: Let $n$ be a positive integer. The set $H_n = \{\Bbb Z + \frac{m}{n} : 0 \le m < n\}$ (here we are not assuming $\gcd(m,n)=1$) is a cyclic subgroup of $\frac{\Bbb Q}{\Bbb Z}$ of order $n$.
Proof: Verify that the element $\Bbb Z + \frac{1}{n}$ is a generator. In general, the elements $\Bbb Z + \frac{m}{n}$ with $\gcd(m,n)=1$ in $H_n$ form the $\phi(n)$ generators of this cyclic group.

Note that, as an element of $\frac{\Bbb Q}{\Bbb Z}$, we always write $\Bbb Z + \frac{m}{n}$ with $\gcd(m,n) = 1$. Hence $\Bbb Z + \frac{m}{n}$ has order $n$ in $\frac{\Bbb Q}{\Bbb Z}$.

This shows that the elements of order $n$ in $\frac{\Bbb Q}{\Bbb Z}$ are exactly the cosets $\Bbb Z + \frac{p}{n}$ with $1 \le p \le n$ and $\gcd(p,n) = 1$, and there are $\phi(n)$ of them. Hence there are only finitely many elements of order $n$ in $\frac{\Bbb Q}{\Bbb Z}$ for every positive integer $n$.
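The count $\phi(n)$ can be confirmed by listing reduced fractions in $[0,1)$ and computing orders directly. This is a sketch with my own helper names; `max_den` just bounds the search.

```python
from fractions import Fraction
from math import gcd

def order(frac):
    """Order of Z + frac in Q/Z: smallest m with m*frac an integer."""
    m = 1
    while (m * frac).denominator != 1:
        m += 1
    return m

def count_of_order(n, max_den=30):
    """Count distinct cosets Z + p/q with 0 <= p/q < 1, q <= max_den, of order n."""
    seen = {Fraction(p, q) for q in range(1, max_den + 1) for p in range(q)}
    return sum(1 for f in seen if order(f) == n)

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# finitely many elements of each order: exactly phi(n) of them
for n in (2, 6, 12):
    assert count_of_order(n) == phi(n)
```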

option 3:(False) Consider the subgroup $$H = \Big\{\Bbb Z + \frac{p}{2^k} : p \in \Bbb Z,\, k \ge 0\Big\}.$$
It is straightforward to verify that $H$ is a subgroup of $\frac{\Bbb Q}{\Bbb Z}$ (the sum of two fractions with $2$-power denominators again has a $2$-power denominator). It is infinite, since the elements $\Bbb Z + \frac{1}{2^k}$, $k \ge 0$, are pairwise distinct, and it is proper, since $\Bbb Z + \frac{1}{3} \notin H$.

FOLLOW BY EMAIL TO GET NOTIFICATION OF NEW PROBLEMS. SHARE YOUR DOUBTS IN THE COMMENTS BELOW. ALSO, YOU CAN SUGGEST PROBLEMS TO SOLVE WHICH WILL BE SOLVED IMMEDIATELY.

### NBHM 2020 PART B Question 22 Solution ( Distinct homeomorphism classes of topologies on X with exactly n open subsets)

WOULD YOU LIKE ME TO SOLVE NBHM OR CSIR QUESTIONS? PLEASE COMMENT BELOW.
Let X be a three-element set. For each of the following numbers n, determine the number of distinct homeomorphism classes of topologies on X with exactly n open subsets (including the empty set and the whole set). Write that number in the box.
1) 3,
2) 4,
3) 5,
4) 7.

I have given a detailed solution below so that the same idea can be used in similar problems. Do read completely.

Solution: Let $(X,\tau_1)$ and $(X,\tau_2)$ be topological spaces on a set $X$. A bijection $f: X \to X$ is said to be a homeomorphism from $(X,\tau_1)$ to $(X,\tau_2)$ if $f$ is a continuous open map. In particular, the image and the inverse image of an open set are open.
Given $X = \{1,2,3\}$. Note that, in any topology on $X$, a proper nonempty open set has cardinality either $1$ or $2$.

option 1: (Number of topologies with three open sets) Let $\tau$ be a topology on $X = \{1,2,3\}$ with three open sets. Then $\tau = \{X,\phi, U\}$, where $U$ is a subset of cardinality either $1$ or $2$. There are three subsets of $X$ with cardinality $1$ and three with cardinality $2$. Any of these $6$ subsets can play the role of $U$ in $\tau$, so there are $6$ topologies on $X$ with $3$ open sets. If $|U| = 1$ we call such a topology type $I$, and if $|U| = 2$ we call it type $II$.
Claim: Any two topologies of type $I$ are homeomorphic. Let $\tau_1 = \{X,\phi,U_1\}$ and $\tau_2 = \{X, \phi, U_2\}$ be two such topologies with $U_1 = \{a\}$ and $U_2 = \{b\}$, where $a,b \in \{1,2,3\}$. Now, any permutation of $X= \{1,2,3\}$ which maps $a$ to $b$ is a homeomorphism from $(X,\tau_1)$ to $(X,\tau_2)$.
Claim: Any two topologies of type $II$ are homeomorphic. Let $\tau_1 = \{X,\phi,U_1\}$ and $\tau_2 = \{X, \phi, U_2\}$ be two such topologies with $U_1 = \{a,b\}$ and $U_2 = \{c,d\}$, where $a,b,c,d \in \{1,2,3\}$. Now, any permutation of $X= \{1,2,3\}$ which maps $\{a,b\}$ onto $\{c,d\}$ is a homeomorphism from $(X,\tau_1)$ to $(X,\tau_2)$.
Claim: A topology of type $I$ and a topology of type $II$ cannot be homeomorphic. This follows from the following observation.
Observation: Since $f$ is a bijection, for an open set $U$ of $X$ we have $|f(U)| = |U|$ and $|f^{-1}(U)| = |U|$.
Therefore there are two homeomorphism classes of topologies on $X$ with three open sets and these classes are given by type $I$ and type $II$ defined above.

option 2:(Number of topologies with four open sets)
We use the same ideas as above. Let $\tau = \{X,\phi,U_1,U_2\}$ be a topology on $X$ with four open sets. If $U_1 = \{a\}$ is a singleton, then $U_2$ cannot also be a singleton: otherwise their union $U_1 \cup U_2$ would be a fifth open set in $\tau$. So $U_2$ has to be a two-element set, and similarly if $|U_1| = 2$ then $|U_2| = 1$. Without loss of generality, assume $|U_1| = 1$. Now there are two cases: if $U_1 \subseteq U_2$ we call the topology type $I$, and if $U_1 \nsubseteq U_2$ we call it type $II$. There are six topologies of type $I$ and three of type $II$, so there are nine topologies on $X$ with four open sets. Among these, as in the previous option, one can show that any two topologies of the same type are homeomorphic and two topologies of different types are not. Hence there are two homeomorphism classes of topologies with four open sets on $X$.

option 3:(Number of topologies with five open sets)
Let $(X,\tau = \{X,\phi,U_1,U_2,U_3\})$ be a topology on $X$ with five open sets.
Assume $U_1 = \{a\}$; then $U_2$ can be an open set with one point or with two points.
Assume $U_2$ is a singleton, say $U_2 = \{b\}$; then $U_3$ has to be $U_1 \cup U_2 = \{a,b\}$, as we have only five open sets in $\tau$. We call such topologies type $I$. There are three topologies of this type.
Assume $U_2$ has two elements.
Claim: $a \in U_2$. If not, $U_2 = \{b,c\}$ with $b \ne a$ and $c \ne a$. Let us check the choices for $U_3$. Suppose $|U_3| = 1$, say $U_3 = \{b\}$ or $U_3 = \{c\}$; then $U_1 \cup U_3$ is a two-element open set different from $U_2$, a sixth open set in $\tau$. So $|U_3| = 2$. If $a \notin U_3$ then $U_3 = U_2$, which is not allowed. If $a \in U_3$, say $U_3 = \{a,b\}$, then $U_2 \cap U_3$ is a singleton open set different from $U_1$, again a sixth open set. Contradiction. So if $a \notin U_2$ there is no choice for $U_3$. Hence we have $a \in U_2$.
Claim: $a \in U_3$. Same argument as above.
This shows that $\tau = \{X,\phi,\{a\},\{a,b\},\{a,c\}\}$. We call such topologies type $II$, and there are three of them ($a = 1, 2$ or $3$). Hence, including type $I$, there are a total of six topologies on $X$ with five open sets. Among these, as in the previous option, one can show that any two topologies of the same type are homeomorphic and two topologies of different types are not. Hence there are two homeomorphism classes of topologies with five open sets on $X$.

option 4:(Number of topologies with seven open sets)
We claim that there is no topology on $X$ with $7$ open sets.
Let $\tau$ be a topology on $X = \{1,2,3\}$.
Claim: If there are at least seven open sets in $\tau$ then $\tau$ is the discrete topology on $X$ (the power set of $X$, in which every subset is open).
Proof: It is enough to prove that all the singleton subsets of $X$ are in $\tau$: since $\tau$ is closed under unions, it then contains every subset of $X$. If $|\tau| \ge 7$, then $\tau$ contains at least five of the six proper nonempty subsets of $X$, so either all three singletons are in $\tau$ or all three two-element subsets are in $\tau$. In the first case we are done; in the second, intersecting the two-element subsets pairwise produces all the singletons, and again we are done.
Since the discrete topology has $8$ open sets, there is no topology on $X$ with exactly $7$ open sets.
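Since $X$ has only $2^6 = 64$ candidate families of proper nonempty subsets, all four counts can be verified by brute force. A sketch in stdlib Python (helper names are mine):

```python
from itertools import combinations

X = frozenset({1, 2, 3})
empty = frozenset()
# the six proper nonempty subsets of X
proper = [frozenset(s) for r in (1, 2) for s in combinations(sorted(X), r)]

def is_topology(fam):
    """Closed under pairwise union and intersection (enough for finite families)."""
    return all(a | b in fam and a & b in fam for a in fam for b in fam)

counts = {}
for r in range(len(proper) + 1):
    for extra in combinations(proper, r):
        fam = set(extra) | {empty, X}
        if is_topology(fam):
            counts[len(fam)] = counts.get(len(fam), 0) + 1

assert counts[3] == 6          # three of type I, three of type II
assert counts[4] == 9          # six of type I, three of type II
assert counts[5] == 6          # three of each type
assert 7 not in counts         # no topology on X has exactly 7 open sets
assert sum(counts.values()) == 29  # all topologies on a 3-element set
```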


### CSIR JUNE 2011 PART C QUESTION 76 SOLUTION (dimension = degree of the minimal polynomial)

Let $W = \{p(B) : p \text{ is a polynomial with real coefficients}\},$  where $$B = \begin{bmatrix}0&1&0\\ 0&0&1\\ 1&0&0 \end{bmatrix}.$$ Then the dimension $d$ of the vector space $W$ satisfies
1) $4 \le d \le 6$,
2) $6 \le d \le 9$,
3)$3 \le d \le 8$,
4)$3 \le d \le 4$.

Solution:
We will calculate the dimension $d$ explicitly (below we write $A$ for the given matrix $B$). We start with the following simple observation. Consider the polynomial $p(x) = x^3 + 3x^2 -2x +12$; then $p(A) = A^3 + 3A^2 - 2 A + 12 I$. In particular, $p(A)$ is a linear combination of the matrices $A^0 = I, A^1, A^2, \dots$. So the set of all non-negative powers of $A$ spans $W$. We will find a basis inside this spanning set, whose cardinality will be the required $d$.

Let $A$ be an $n \times n$ real matrix and consider its powers $A^0=I, A^1, A^2,\dots,A^m$ for some $m>0$. These matrices are vectors in the vector space $M_n(\Bbb R)$, so it makes sense to talk about their linear independence. As usual, these matrices are linearly dependent if there exist real scalars $\alpha_0,\alpha_1,\dots,\alpha_m$, not all zero, such that $\alpha_0 A^0 + \alpha_1 A^1 + \cdots + \alpha_m A^m = 0$. Equivalently, there exists a non-zero polynomial $p(x) = \alpha_0+\alpha_1 x+ \alpha_2 x^2 + \cdots + \alpha_m x^m$ with real coefficients of degree at most $m$ (some coefficient is non-zero, not necessarily the leading one) such that $p(A) = 0$. Such polynomials are known as annihilating polynomials of $A$.
Hence, $A^0,A^1,A^2,\dots, A^m$ are linearly dependent if and only if there is a non-zero polynomial of degree at most $m$ satisfied by $A$.

A minimal spanning set is a basis, and hence we need the smallest degree of a non-zero polynomial satisfied by $A$. This is precisely the degree of the minimal polynomial of $A$. Hence $$\text{dimension } d \text{ of } W = \text{degree of the minimal polynomial } m_A(x).$$

Cayley-Hamilton Theorem: Every $n \times n$ matrix satisfies its characteristic polynomial $p_A(x)$, which is a monic polynomial of degree $n$.
This shows, from the above discussion, that the matrices $A^0, A^1, A^2, \dots, A^n$ are linearly dependent. Therefore $d \le n$. In our problem the given matrix is of order $3$, and hence we have $$1 \le d \le 3.$$ Next we will prove that $d$ is actually equal to $3$: we need to show that the minimal polynomial $m_A(x)$ has degree $3$.

Result: Every eigenvalue of $A$ is a root of the minimal polynomial $m_A(x)$.
Proof: First we show that the minimal polynomial divides every annihilating polynomial. Let $p(x)$ be an annihilating polynomial of $A$. By the division algorithm, $p(x) = q(x) m_A(x) + r(x)$ with the degree of $r(x)$ strictly less than the degree of $m_A(x)$. Then $p(A) = q(A)m_A(A) + r(A)$ gives $r(A) = 0$, so $r(x)$ is also an annihilating polynomial of $A$. This is possible only if $r(x)$ is the zero polynomial, because its degree is strictly smaller than that of the minimal polynomial. Therefore $$p(x) = q(x) m_A(x).$$ In particular, $m_A(x)$ divides the characteristic polynomial, so its degree is at most $n$. Now let $\lambda$ be an eigenvalue of $A$ with eigenvector $v \ne 0$. Then $0 = m_A(A)v = m_A(\lambda)v$, and hence $m_A(\lambda) = 0$.

The characteristic polynomial of the given matrix $A$ is $x^3 - 1$, which has three distinct roots. All of them must be roots of the minimal polynomial by the above result. So the degree of the minimal polynomial is at least $3$, and hence by the previous paragraph it equals $3$. This shows that $$d = 3.$$
Only options (3) and (4) are correct.
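The whole computation can be checked numerically in stdlib Python. The hand-rolled `matmul` and `rank` helpers below are my own; this is a sketch of the verification, not part of the original solution.

```python
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
B2 = matmul(B, B)
assert matmul(B2, B) == I  # B satisfies x^3 - 1, its characteristic polynomial

def rank(rows):
    """Row rank by Gaussian elimination over the rationals."""
    rows = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# I, B, B^2 flattened to vectors in R^9 are linearly independent,
# so dim W = deg m_B(x) = 3.
assert rank([sum(M, []) for M in (I, B, B2)]) == 3
```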

### CSIR JUNE 2011 PART C QUESTION 75 SOLUTION (Which are subspaces ? $\{X \in M: \text{trace}(AX)=0\}$, $\{X \in M: \text{det}(AX)=0\}$.)

Let $M$ be the vector space of all $3 \times 3$ real matrices and let $$A = \begin{bmatrix}2&1&0\\ 0&2&0\\ 0&0&3\end{bmatrix}.$$ Which of the following are subspaces of $M$?
1) $W_1 = \{X \in M: XA = AX\}$,
2) $W_2 = \{X \in M: X+A = A + X\}$,
3) $W_3 = \{X \in M: \text{trace}(AX)=0\}$,
4) $W_4 = \{X \in M: \text{det}(AX)=0\}$.

Solution:

Observation: Let $\alpha$ be a scalar; then $\alpha A = (\alpha I) A$. That is, multiplying every entry of $A$ by $\alpha$ is the same as pre-multiplying $A$ by the scalar matrix $\alpha I$ (the diagonal matrix with all diagonal entries equal to $\alpha$). Also, these scalar matrices commute with every matrix in $M$ (they form the center of the ring $M$).
option 1:(True) Let $X_1,X_2 \in W_1$ then $X_1 A = A X_1$ and $X_2 A = A X_2$. Now, $$(X_1+X_2)A = X_1 A + X_2 A = A X_1 + A X_2 = A (X_1 + X_2).$$ Therefore $X_1+X_2 \in W_1$.
Also, since $\alpha I$ commutes with all the matrices, we have $$(\alpha X_1)A = (\alpha I) (X_1 A) = (\alpha I) (A X_1) = A (\alpha I)X_1 = A (\alpha X_1).$$ Therefore $\alpha X_1 \in W_1$ and hence $W_1$ is a subspace of $M$.
option 2: (True) Matrix addition is commutative. Therefore every matrix in $M$ commutes with $A$ and we have $W_2 = M$. Hence it is a subspace.
option 3: (True) Let $X_1,X_2 \in W_3$; then $\text{trace}(A X_1) = 0$ and $\text{trace}(A X_2)=0$. Now, $$\text{trace}(A(X_1+X_2)) = \text{trace}(AX_1) + \text{trace}(AX_2) = 0.$$ Therefore $X_1+X_2 \in W_3$.
Also, since $\alpha I$ commutes with all the matrices and $\text{trace}(\alpha A) = \text{trace }((\alpha I)(A)) = \alpha \cdot \text{trace }A$, we have $$\text{trace}(A (\alpha X_1)) = \text{trace}(A ((\alpha I) (X_1)) ) = \text{trace}((\alpha I) (A X_1))\\ = \alpha \cdot \text{trace}(A X_1) = 0.$$ Therefore $\alpha X_1 \in W_3$ and hence $W_3$ is a subspace of $M$.
option 4:(False) We have $\text{det}A = 12 \ne 0$. Therefore $\text{det}(AX) = \text{det }A \cdot \text{det}X = 0$ if and only if $\text{det} X =0$. This shows that $W_4 = \{X \in M: \text{det}X = 0\}$. The set of all singular matrices is not a vector subspace of $M$. For example, consider the matrices which are of determinant zero $\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&0\end{bmatrix}$ and $\begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&1\end{bmatrix}$. Clearly their sum is the identity matrix whose determinant is non-zero. Therefore the set of all matrices with determinant zero is not even closed under addition.
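The counterexample for option 4 is a two-line check. A sketch in Python with a hand-rolled cofactor determinant (my own helper, just for illustration):

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

P = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
Q = [[0, 0, 0], [0, 0, 0], [0, 0, 1]]
S = [[P[i][j] + Q[i][j] for j in range(3)] for i in range(3)]

assert det3(P) == 0 and det3(Q) == 0  # both singular, so both lie in W_4
assert det3(S) == 1                   # their sum is I, which is not in W_4
```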


### CSIR JUNE 2011 PART C QUESTION 74 SOLUTION (Let $T: \Bbb R^n \to \Bbb R^n$ be a linear transformation such that $T^2 = \lambda T$)

Let $T: \Bbb R^n \to \Bbb R^n$ be a linear transformation such that $T^2 = \lambda T$ for some $\lambda \in \Bbb R$. Then
1)$||T(x)|| = |\lambda| ||x||$ for all $x \in \Bbb R^n$,

2)If $||T(x)|| = ||x||$ for some non-zero vector $x \in \Bbb R^n$ then $\lambda = \pm 1$,
3) $T = \lambda I$ where $I$ is the $n \times n$ identity matrix,
4) If $||T(x)|| > ||x||$ for some non-zero vector $x \in \Bbb R^n$ then $T$ is necessarily singular.

Solution:
Consider the $2 \times 2$ nilpotent matrix $$A = \begin{bmatrix}0&1\\0&0\end{bmatrix}.$$ We have $A^2 = 0$, and hence this matrix satisfies the given condition $A^2 = \lambda A$ with $\lambda = 0$. Since $A$ is upper triangular, its eigenvalues are its diagonal entries. Hence the eigenvalues of $A$ are $0,0$.

option 1: (False) We use the above matrix $A$ as a counter example. Let $x = \begin{bmatrix}0\\1\end{bmatrix}$ then $Ax = y$ where $y = \begin{bmatrix}1\\0\end{bmatrix}$. Now, $$||Ax|| = ||y|| = \sqrt{1^2 + 0^2} = 1.$$ But $|\lambda| ||x|| = 0$. This shows that, for our matrix $A$, $$||Ax|| \ne |\lambda| ||x||.$$
option 3: (False) We again use the matrix $A$ above. We have seen that it satisfies $A^2 = \lambda A$ with $\lambda =0$. We observe that $A$ is a non-zero matrix, whereas $\lambda I = 0I$ is the zero matrix. Hence $A \ne \lambda I$ in this case.
option 4: (False) Consider the matrix $$A = \begin{bmatrix}2&0\\ 0&2\end{bmatrix},$$ which satisfies $A^2 = 2A$. Let $x = \begin{bmatrix}0 \\ 1\end{bmatrix}$; then $$||Ax|| = \Big|\Big|\begin{bmatrix}0\\ 2\end{bmatrix}\Big|\Big| = \sqrt{0^2+2^2} = 2 > 1 = ||x||,$$ but $A$ is clearly invertible (non-singular).
option 2: (False) Consider the matrix $$A = \begin{bmatrix}\sqrt 2 & 0 \\ 0 & 0\end{bmatrix};$$ then $A^2 = \sqrt 2 A$, so this matrix satisfies $A^2 = \lambda A$ with $\lambda = \sqrt 2$. Let $x = \begin{bmatrix}1\\ 1\end{bmatrix}$; then $$||x|| = \sqrt{1^2+1^2} = \sqrt 2 = \Big|\Big|\begin{bmatrix}\sqrt 2 \\ 0\end{bmatrix}\Big|\Big| = ||Ax||.$$ But $\lambda = \sqrt 2 \ne \pm 1$.
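The counterexamples for options 1 and 2 are easy to verify numerically. A sketch with my own `matvec`/`norm` helpers:

```python
import math

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

def norm(x):
    return math.hypot(x[0], x[1])

# Option 1 counterexample: nilpotent A with lambda = 0.
A = [[0, 1], [0, 0]]
x = [0, 1]
assert norm(matvec(A, x)) == 1  # ||Ax|| = 1, while |lambda| * ||x|| = 0

# Option 2 counterexample: A = diag(sqrt(2), 0) satisfies A^2 = sqrt(2) A.
s = math.sqrt(2)
A = [[s, 0], [0, 0]]
A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert all(math.isclose(A2[i][j], s * A[i][j], abs_tol=1e-12)
           for i in range(2) for j in range(2))
x = [1, 1]
assert math.isclose(norm(matvec(A, x)), norm(x))  # ||Ax|| = ||x|| = sqrt(2)
# yet lambda = sqrt(2) is not +-1
```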
All the options are false. This question was flawed, and full marks were awarded to everybody.


### CSIR JUNE 2011 PART C QUESTION 73 SOLUTION (which are positive definite? 1) $A+B$, 2) $ABA^{*}$, 3) $A^2+I$, 4) $AB$.)

Suppose $A$ and $B$ are $n \times n$ positive definite matrices and $I$ be the $n \times n$ identity matrix. Then which of the following matrices are positive definite?
1) $A+B$,
2) $ABA^{*}$,
3) $A^2+I$,
4) $AB$.
Solution:
Let $A$ be a real symmetric matrix. Then $A$ is said to be positive definite if it satisfies any of the following equivalent conditions:
i) all its eigenvalues are positive;
ii) $x^t A x > 0$ for all vectors $x \ne 0$;
iii) $\langle x,Ax\rangle > 0$ for all vectors $x \ne 0$;
iv) $A$ can be written as $A = R^tR$, where $R$ is a (possibly rectangular) matrix with independent columns;
v) all the leading principal minors of $A$ are positive. (Please share any other equivalent conditions you know in the comments below.)
Let $A$ and $B$ be two $n \times n$ positive definite matrices. We have $x^t A x>0$ and $x^t B x>0$ for $x \ne 0$. We will solve each given option by each of the above given definition of positive definiteness in order to understand them clearly.
option 1. (True) We have, for $x \ne 0$, $$x^t (A+B) x = x^t A x + x^t B x > 0.$$
Therefore option 1 is true.
option 2. (True) Since $A$ and $B$ are real symmetric, we have $A^* = A$ and $B^* = B$. For $x \ne 0$ we have $A^*x \ne 0$, since $A$ is invertible. Then
$$\langle x,ABA^*x\rangle = \langle A^*x,BA^*x\rangle > 0,$$
because $B$ is positive definite. Therefore option 2 is true.
option 3. (True) Let the eigenvalues of $A$ be $\lambda_1,\lambda_2,\dots,\lambda_n$. Since $A$ is positive definite, all these eigenvalues are positive. The eigenvalues of $A^2+I$ are $\lambda_1^2+1,\lambda_2^2+1,\dots,\lambda_n^2+1$, which are all positive. Hence $A^2+I$ is positive definite.
option 4. (False)
The product of two positive definite matrices need not even be symmetric. In fact, $AB$ is symmetric if and only if $A$ and $B$ commute: since $A^* = A$ and $B^* = B$, we have $$(AB)^* = B^*A^* = BA,$$ which equals $AB$ exactly when $A$ and $B$ commute.

To illustrate this, consider the positive definite matrices $$A = \begin{bmatrix}11 & 10 \\ 10 & 10\end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix}11 & 5 \\ 5 & 10\end{bmatrix}.$$ Then their product $$AB = \begin{bmatrix}171&155 \\ 160&150\end{bmatrix}$$ is not symmetric.
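This counterexample can be checked directly: Sylvester's criterion (positive leading principal minors) certifies that $A$ and $B$ are positive definite, while the product fails symmetry. The helper names below are mine.

```python
def minors2(M):
    """Leading principal minors of a 2x2 matrix."""
    return M[0][0], M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[11, 10], [10, 10]]
B = [[11, 5], [5, 10]]
assert all(m > 0 for m in minors2(A))  # 11 > 0, det A = 10 > 0
assert all(m > 0 for m in minors2(B))  # 11 > 0, det B = 85 > 0

AB = matmul2(A, B)
assert AB == [[171, 155], [160, 150]]
assert AB[0][1] != AB[1][0]  # AB is not symmetric, hence not positive definite
```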


### CSIR JUNE 2011 PART C QUESTION 72 SOLUTION ($A = (a_{ij}), a_{ij} = a_ia_j$ for $1 \le i,j \le n$ is positive semi-definite)

Let $a_{ij} = a_ia_j$ for $1 \le i,j \le n$ and $a_1,a_2,\dots,a_n$ are real numbers. Let $A = (a_{ij})$ be the $n \times n$ matrix. Then
1) It is possible to choose $a_1,a_2,\dots a_n$ so as to make the matrix $A$ non-singular.
2) The matrix $A$ is positive definite if $(a_1,a_2,\dots,a_n)$ is a non-zero vector.
3) The matrix $A$ is positive semi-definite for all $(a_1,a_2,\dots,a_n)$.
4) For all $a_1,a_2,\dots,a_n$, zero is an eigenvalue of $A$.

(We assume $n>1$; otherwise the problem is straightforward.)
Solution:

option 1: (False) Let $v = (a_1,a_2,\dots,a_n)$; then the first row of the matrix $A$ is $(a_1a_1,a_1a_2,a_1a_3,\dots,a_1a_n) = a_1 v$ (the vector $v$ multiplied by the scalar $a_1$). Similarly, the $i$th row of $A$ is $a_i v$. Thus every row of $A$ is a scalar multiple of $v$, so the dimension of the row space of $A$ = row rank of $A$ = rank of $A \le 1$. Since $n>1$, $A$ is singular for every choice of $v$.
option 2: (False) If $A$ were positive definite then all its eigenvalues would be positive, so $\det A \ne 0$. But we have shown that $A$ has determinant zero for all $(a_1,a_2,\dots,a_n)$.
option 3: (True) A symmetric matrix $A$ is positive semi-definite if and only if all its principal minors are non-negative. We will prove that all the principal minors of $A$ are non-negative. Every principal $k \times k$ submatrix of $A$ has the same form as $A$ itself: if its rows and columns are indexed by $j_1 < \cdots < j_k$, then its $i$th row is $a_{j_i}(a_{j_1},\dots,a_{j_k})$, a scalar multiple of one fixed vector, so its rank is at most $1$. Hence its determinant is $0$ when $k > 1$, and the $1 \times 1$ principal minors are $a_j^2 \ge 0$. This shows that $A$ is positive semi-definite. (Alternatively, $x^t A x = (\sum_i a_i x_i)^2 \ge 0$ directly.)
option 4: (True) We have shown that $A$ is not invertible, and hence zero is an eigenvalue of $A$.
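The identity $x^t A x = (a \cdot x)^2$ and the singularity of $A = a\,a^t$ can be checked on sample vectors. A sketch with my own helpers, using exact rational arithmetic:

```python
from fractions import Fraction

def outer(a):
    """The matrix A with A[i][j] = a_i * a_j, i.e. A = a a^T."""
    return [[x * y for y in a] for x in a]

def quad_form(A, x):
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

a = [Fraction(2), Fraction(-1), Fraction(3)]
A = outer(a)

# x^T A x = (a . x)^2 >= 0 for every x, so A is positive semi-definite ...
for x in ([1, 0, 0], [1, 1, 1], [-3, 2, 5], [1, 2, 0]):
    dot = sum(ai * xi for ai, xi in zip(a, x))
    assert quad_form(A, x) == dot ** 2 >= 0

# ... and A has rank <= 1, so 0 is an eigenvalue: the vector (1, 2, 0)
# is orthogonal to a, hence A x = 0.
assert all(sum(A[i][j] * x for j, x in enumerate([1, 2, 0])) == 0
           for i in range(3))
```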


### CSIR JUNE 2011 PART C QUESTION 70 SOLUTION ($\mathcal{F}= \{f:\Bbb{R} \to \Bbb{R}: |f(x)-f(y)| ≤ K|(x-y)|^a \}$)

Consider the set $\mathcal{F}= \{f:\Bbb{R} \to \Bbb{R}: |f(x)-f(y)| \le K|x-y|^a \text{ for some } a>0 \text{ and } K>0\}$. Which of the following statements are true?
1. Every $f \in \mathcal F$ is continuous.
2. Every $f \in \mathcal F$ is uniformly continuous.
3. Every differentiable function $f : \Bbb{R} \to \Bbb{R}$ is in $\mathcal{F}$.
4. Every function $f \in \mathcal{F}$ is differentiable. (This is converse of (3))

Solution:
option 3. (false) Consider $f(x) = x^2$, a differentiable function from $\mathbb{R}$ to $\mathbb{R}$. We claim that $f \notin \mathcal{F}$. Take $y = x+1$; then $$|f(x) - f(y)| = |x^2 - (x+1)^2| = |2x+1|,$$ which is unbounded as $x \to \infty$, while $K|x-y|^a = K$ stays fixed. So no choice of $K>0$ and $a>0$ works, and $f \notin \mathcal{F}$.

Remark:
If $f(x) = x^2$ is considered as a function on a bounded interval $(a,b)$ (open, closed, or half-open), then, since $|x^2-y^2| = |x-y||x+y|$ and $x,y \in (a,b)$, the term $|x+y|$ is bounded by $2M$ with $M = \max(|a|,|b|)$. Taking $K = 2M$ and exponent $1$ makes $f \in \mathcal F$ on that interval.

option 4: (false) Consider the function $f(x) = |x|$; it satisfies $$\big||x| - |y|\big| \le |x - y|.$$ Hence, with $K=1$ and $a=1$, this function is in $\mathcal F$. But $|x|$ is not differentiable at the origin.

options 1 and 2. (True) Let $f \in \mathcal{F}$; then $f$ satisfies $$|f(x)-f(y)| \le K|x-y|^a$$ for some $K>0$ and $a>0$. We claim that $f$ is uniformly continuous (which implies continuity). Let $\epsilon > 0$; we find a $\delta$ which depends only on $\epsilon$. Take $\delta = (\frac{\epsilon}{K})^{\frac{1}{a}}$; then $|x-y| < \delta$ implies $|f(x) - f(y)| \le K |x-y|^a < K \delta^a = K \big(\frac{\epsilon}{K}\big) = \epsilon$. This completes the proof.
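A quick numerical sanity check of the three claims on sample points (a sketch, not a proof; the samples are my own):

```python
samples = [(-10.0, 3.5), (0.0, 0.25), (100.0, 100.5), (1e-3, 2e-3)]

# f(x) = |x| satisfies the Hoelder condition with K = 1 and a = 1
assert all(abs(abs(x) - abs(y)) <= abs(x - y) for x, y in samples)

# f(x) = x^2: the ratio |x^2 - y^2| / |x - y| = |x + y| is unbounded,
# so no single K can work on all of R
ratios = [abs(x * x - y * y) / abs(x - y) for x, y in samples]
assert max(ratios) > 100

# uniform continuity: with K = 1, a = 1, the proof's delta = epsilon
# works for f(x) = |x|
eps = 0.01
delta = (eps / 1.0) ** (1.0 / 1.0)
x, y = 5.0, 5.0 + 0.99 * delta
assert abs(abs(x) - abs(y)) < eps
```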


### CSIR JUNE 2011 PART C QUESTION 70 SOLUTION (Dense subsets of $\Bbb R^2$)

Which of the following sets are dense in $\Bbb R^2$ with respect to the usual topology?
1) $\{(x,y) \in \Bbb R^2 : x \in \Bbb N\}$,
2) $\{(x,y)\in \Bbb R^2 : x+y \text{ is rational}\}$,
3) $\{(x,y)\in \Bbb R^2 : x+y^2 = 5\}$,
4) $\{(x,y) \in \Bbb R^2 : xy \ne 0\}$.
Solution:

A subset $A$ of a metric space $(X,d)$ is said to be dense if $A$ intersects every non-empty open ball in $X$. In particular, to show a subset $A$ is not dense in $X$, it is enough to construct a non-empty open ball $U$ disjoint from $A$, i.e., $A \cap U = \Phi$.
option 1:(Not dense) We claim that the set $A := \{(x,y) \in \Bbb R^2 : x \in \Bbb N\}$ is closed; being a proper closed subset, it cannot be dense. Let $(x,y)$ be a limit point of this set. Then there exists a sequence $(x_n,y_n) \in A$ with $(x_n,y_n) \to (x,y)$, i.e., $x_n \to x$ and $y_n \to y$. Now $(x_n,y_n) \in A$ means that $(x_n)$ is a convergent sequence of natural numbers, which is possible only if $x_n = k \in \Bbb N$ for all sufficiently large $n$. In particular, the sequence is eventually constant and converges to $k$, so $x = k$ by uniqueness of limits. Therefore $(x,y) \in A$ and $A$ is closed.
option 2:(True) From the definition, we observe that, if $A \subseteq B$ and $A$ is dense in $X$, then $B$ is also dense in $X$.
Proof: Let $U$ be a non-empty open set in $X$. Since $A$ is dense in $X$, $A \cap U \ne \Phi$. This implies that $B \cap U \ne \Phi$ as $A \subseteq B$.
The set $\Bbb Q \times \Bbb Q$ is dense in $\Bbb R^2$, since the product of dense sets is dense in the product space (immediate from basic open boxes, or using the projection maps). We have $\Bbb Q \times \Bbb Q \subseteq \{(x,y)\in \Bbb R^2 : x+y \text{ is rational}\}$, and hence the given set is dense in $\Bbb R^2$ by the above observation.
option 3:(False) Again, we will prove that the set $C:= \{(x,y)\in \Bbb R^2 : x+y^2 = 5\}$ is closed in $\Bbb R^2$ (and it is clearly proper). Let $f: \Bbb R^2 \to \Bbb R$ be defined by $f(x,y) = x+y^2-5$; being a polynomial, $f$ is continuous. Now $f^{-1}(0) = C$. Since $\{0\}$ is closed and $f$ is continuous, $C$ is closed.
option 4:(True) Let $D: = \{(x,y) \in \Bbb R^2 : xy \ne 0\}$. We will prove that this set is dense in $\Bbb R^2$. Let $B((a,b),r)$ be an open ball in $\Bbb R^2$. We claim that $B((a,b),r) \cap D \ne \Phi$.
Case 1: $(a,b) \in D$. Then clearly $B((a,b),r) \cap D \ne \Phi$.
Case 2: $(a,b) \notin D$. Then $a = 0$ or $b=0$.
Subcase 1: $a = 0$ and $b \ne 0$, so $(a,b)$ lies on the $y$-axis. Consider the point $(a+\frac{r}{2},b) \in D$; then $d((a+\frac{r}{2},b),(a,b)) = \sqrt {(a+\frac{r}{2}-a)^2 + (b-b)^2 } = \sqrt{\frac{r^2}{4}} = \frac{r}{2} < r$. Hence the point $(a+\frac{r}{2},b) \in B((a,b),r)$.
Subcase 2: $a \ne 0$ and $b = 0$. As in the previous subcase, consider the point $(a,b+\frac{r}{2}) \in D$.
Subcase 3: $a=0$ and $b=0$. Consider the point $(a+\frac{r}{2},b+\frac{r}{2}) \in D$; then $d((a+\frac{r}{2},b+\frac{r}{2}),(a,b)) = \sqrt {(a+\frac{r}{2}-a)^2 + (b+\frac{r}{2}-b)^2 } = \sqrt{\frac{r^2}{4}+\frac{r^2}{4}} = \frac{r}{\sqrt 2} < r.$
Hence the point $(a+\frac{r}{2},b+\frac{r}{2}) \in B((a,b),r)$.
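The case analysis for option 4 can be packaged as a small function and spot-checked; `point_in_D_near` is my own name for the construction in the proof.

```python
import math

def point_in_D_near(a, b, r):
    """Return a point of D = {(x, y) : x*y != 0} within distance r of (a, b),
    following the cases of the proof."""
    if a != 0 and b != 0:          # (a, b) itself is in D
        return (a, b)
    if a == 0 and b != 0:          # shift off the y-axis
        return (a + r / 2, b)
    if a != 0 and b == 0:          # shift off the x-axis
        return (a, b + r / 2)
    return (a + r / 2, b + r / 2)  # the origin: shift off both axes

for (a, b, r) in [(0, 3, 0.1), (2, 0, 1.0), (0, 0, 0.5), (1, -1, 2.0)]:
    x, y = point_in_D_near(a, b, r)
    assert x * y != 0                    # the point lies in D
    assert math.hypot(x - a, y - b) < r  # and inside B((a, b), r)
```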

### CSIR JUNE 2011 PART C QUESTION 69 SOLUTION (The derivative of $f(x,y) = (3x - 2y + x^2, 4x +5y + y^2)$)

Consider the function $f: \Bbb R^2 \to \Bbb R^2$ defined by $f(x,y) = (3x - 2y + x^2, 4x +5y + y^2)$. Then
1) f is discontinuous at $(0,0)$,
2) $f$ is continuous at $(0,0)$ and all the directional derivatives exist at $(0,0)$,
3) $f$ is differentiable at $(0,0)$ but the derivative $Df(0,0)$ is not invertible,
4) $f$ is differentiable at $(0,0)$ and the derivative $Df(0,0)$ is invertible.
Solution: Let $f_1(x,y) = 3x-2y+x^2$ and $f_2 = 4x+5y+y^2$ be the coordinate functions of $f$.
The partial derivatives of $f$ are $\frac{\partial f_1}{\partial x} = 3 + 2x$, $\frac{\partial f_1}{\partial y}= -2$, $\frac{\partial f_2}{\partial x} = 4$ and $\frac{\partial f_2}{\partial y} = 5 + 2y$.
option 1:(False) The coordinate functions $f_1$ and $f_2$ of $f$ are polynomials, and hence $f$ is continuous everywhere; in particular $f$ is not discontinuous at $(0,0)$. This can also be seen from the continuity of the partial derivatives of the coordinate functions of $f$.
option 3: (False) The derivative of a function $f: \Bbb R^n \to \Bbb R^m$ at a point $x \in \Bbb R^n$, if it exists, is the unique linear transformation $Df(x) \in \mathcal{L}(\Bbb R^n,\Bbb R^m)$ such that $$\lim_{h \to 0} \frac{||f(x+h) - f(x) - Df(x)h||}{||h||} = 0.$$ Here the Jacobian matrix is
$$Df(x) = \begin{bmatrix} \frac{\partial f_1}{\partial x}&\frac{\partial f_1}{\partial y} \\ \frac{\partial f_2}{\partial x}&\frac{\partial f_2}{\partial y}\end{bmatrix} =\begin{bmatrix}3+2x&-2\\4&5+2y\end{bmatrix}$$ and $Df((0,0)) = \begin{bmatrix}3&-2\\4&5\end{bmatrix}.$

Let $x = \begin{bmatrix}0 \\ 0 \end{bmatrix}$ and $h = \begin{bmatrix}h_1 \\ h_2 \end{bmatrix}$; then $f(x+h) = f(h)$ and $f(x) = 0$. The quantity in the limit becomes $$\frac{||f(h) - Df((0,0))\,h||}{||h||} = \frac{\Big|\Big|\begin{bmatrix}h_1^2 \\ h_2^2 \end{bmatrix}\Big|\Big|}{\Big|\Big|\begin{bmatrix}h_1 \\ h_2 \end{bmatrix}\Big|\Big|} = \frac{\sqrt{h_1^4+h_2^4}}{\sqrt{h_1^2+h_2^2}} \le \sqrt{h_1^2+h_2^2} \to 0 \quad \text{as } h \to 0.$$
Hence $Df((0,0)) = \begin{bmatrix}3&-2\\4&5\end{bmatrix}$ is the derivative of $f$ at $(0,0)$; its determinant is $15+8 = 23 \ne 0$, so it is invertible. Thus option (4) is true, and in turn option (3) is false. Since $f$ is differentiable at $(0,0)$, option (2) is also correct.
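The vanishing of the difference quotient can be observed numerically. A sketch along one path $h = (t, -t)$, where the residual works out to exactly $t$:

```python
import math

def f(x, y):
    return (3 * x - 2 * y + x * x, 4 * x + 5 * y + y * y)

J = [[3, -2], [4, 5]]                               # Df(0,0)
assert J[0][0] * J[1][1] - J[0][1] * J[1][0] == 23  # det != 0: invertible

# residual ||f(h) - f(0) - J h|| / ||h|| = ||(h1^2, h2^2)|| / ||h|| -> 0
for t in (1e-1, 1e-3, 1e-5):
    h1, h2 = t, -t
    jh = (J[0][0] * h1 + J[0][1] * h2, J[1][0] * h1 + J[1][1] * h2)
    fx, fy = f(h1, h2)
    res = math.hypot(fx - jh[0], fy - jh[1]) / math.hypot(h1, h2)
    assert res < 2 * t  # residual shrinks linearly with t along this path
```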


### CSIR JUNE 2011 PART C QUESTION 68 SOLUTION ($||x||_1 \le d \,||x||_{\infty}$)

For $(x_1,x_2,\dots,x_d) \in \Bbb R^d$, and $p \ge 1$ define $$||x||_p = \Big(\sum\limits_{i=1}^d|x_i|^p\Big)^{\frac{1}{p}}$$ and $$||x||_{\infty} = \max \{|x_i| : 1 \le i \le d\}.$$ Which of the following inequalities hold for all $x \in \Bbb R^d$?
1)$||x||_1 \ge ||x||_2 \ge ||x||_{\infty}$,
2)$||x||_1 \le d \,||x||_{\infty}$,
3)$||x||_1 \le \sqrt d \,||x||_{\infty}$,
4)$||x||_1 \le \sqrt d \,||x||_2$.
Solution: Assume $p \ge 1$.

option 1: (True) We have, for $1 \le i \le d$, $|x_i|^p \le (||x||_p)^p$. Hence $$||x||_{\infty} \le ||x||_p \qquad (\text{for any } p \ge 1).$$
Claim: If $p \le q$ then $||x||_q \le ||x||_p$.
Proof:
Assume $0<a\le 1$; then $$\Big(\sum_{i=1}^d |x_i|\Big)^a \le \sum_{i=1}^d |x_i|^a.$$ Indeed (assuming $x \ne 0$), $$\frac{\sum_{i=1}^d |x_i|^a}{(\sum_{j=1}^d |x_j|)^a} = \sum_{i=1}^d \Big(\frac{|x_i|}{\sum_{j=1}^d |x_j|}\Big)^a \ge \sum_{i=1}^d \frac{|x_i|}{\sum_{j=1}^d|x_j|} = 1,$$
where in the last step we used the fact that $t^a \ge t$ for $t \in [0,1]$ and $a \in (0,1]$.
Now, assume $p \le q$, so that $\frac{p}{q} \le 1$. By the above result, we have $$||x||_q = \Big(\sum_{i=1}^d|x_i|^q\Big)^{\frac{1}{q}} = \Big(\Big(\sum_{i=1}^d |x_i|^q\Big)^{\frac{p}{q}}\Big)^{\frac{1}{p}} \le \Big(\sum_{i=1}^d (|x_i|^{q})^{\frac{p}{q}}\Big)^{\frac{1}{p}} = \Big(\sum_{i=1}^d |x_i|^p\Big)^{\frac{1}{p}} = ||x||_p.$$
In particular we have, $$||x||_2 \le ||x||_1.$$
option 4: (True)The Cauchy-Schwarz inequality in $\Bbb R^d$:
Let $x = (x_1,x_2,\dots,x_d), y=(y_1,y_2,\dots,y_d)$. Then
$$\Big(\sum_{i=1}^d x_iy_i\Big)^2 \le \Big(\sum_{i=1}^d x_i^2\Big)\Big(\sum_{i=1}^d y_i^2\Big).$$ Substituting $y = (1,1,\dots,1) \in \Bbb R^d$ and replacing each $x_i$ by $|x_i|$, we get $\big(\sum_i |x_i|\big)^2 \le d \sum_i x_i^2$, i.e., $||x||_1 \le \sqrt d \, ||x||_2$.
option 2:(True) Let $x=(x_1,x_2,\dots,x_d) \in \Bbb R^d$ and $a = \max_{1 \le i \le d}|x_i| = ||x||_{\infty}$. First, we claim that $||x||_2 \le \sqrt d \,||x||_{\infty}$. We have $(||x||_2)^2 = \sum_{i=1}^d |x_i|^2 \le \sum_{i=1}^d a^2 = d\, (||x||_{\infty})^2.$
This proves the claim.
Now, using this result in option 4 we get $$||x||_1 \le \sqrt d\, ||x||_{2} \le d \,||x||_{\infty}.$$
option 3: (False) Take $x = (1,1) \in \Bbb R^2$: then $||x||_1 = 2$ while $\sqrt d \,||x||_{\infty} = \sqrt 2$, and $2 > \sqrt 2$, so the inequality fails.
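All four options can be spot-checked on sample vectors. A sketch with my own `norms` helper (a small tolerance `eps` absorbs floating-point error):

```python
import math

def norms(x):
    """Return (d, ||x||_1, ||x||_2, ||x||_inf)."""
    d = len(x)
    n1 = sum(abs(t) for t in x)
    n2 = math.sqrt(sum(t * t for t in x))
    ninf = max(abs(t) for t in x)
    return d, n1, n2, ninf

for x in ([1.0, 1.0], [3.0, -4.0, 0.0], [0.5, 0.25, -2.0, 7.0]):
    d, n1, n2, ninf = norms(x)
    eps = 1e-12
    assert n1 + eps >= n2 >= ninf - eps       # option 1
    assert n1 <= d * ninf + eps               # option 2
    assert n1 <= math.sqrt(d) * n2 + eps      # option 4

# option 3 fails: for x = (1, 1), ||x||_1 = 2 > sqrt(2) = sqrt(d) * ||x||_inf
d, n1, n2, ninf = norms([1.0, 1.0])
assert n1 > math.sqrt(d) * ninf
```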
### NBHM 2020 PART A Question 4 Solution $$\int_{-\infty}^{\infty}(1+2x^4)e^{-x^2} dx$$
Evaluate : $$\int_{-\infty}^{\infty}(1+2x^4)e^{-x^2} dx$$ Solution : \int_{-\infty}^{\infty}(1+2x^4)e^{-x^2} dx = \int_{-\infty}^{\inft...