New school year

So today I had my first day of classes at Yale. I am taking four graduate classes: algebraic topology, commutative algebra, real analysis, and complex analysis. I did not do much work during the summer due to personal problems, but I am ready to start working hard.

In topology we basically covered the entire undergraduate course in 50 minutes. I was reading Hatcher and I found a really nifty application of fundamental groups: a proof of the fundamental theorem of algebra.

Theorem: Any non-constant polynomial with coefficients in \mathbb{C} has a root.

Proof: Assume that p(z)=z^n+a_1z^{n-1}+\dots+a_n has no root. The proof will work like this: using the fact that p(z) has no zeroes, we will define a loop based at 1. On one hand this loop is homotopic to the trivial loop; on the other hand we will show it is homotopic to n times the generator of \pi_1(S^1). Hence n times the generator of \pi_1(S^1) equals 0, so n=0, meaning that p(z) was constant to begin with.

Step 1: Let f_r(s)=\frac{p(re^{2\pi i s})/p(r)}{|p(re^{2\pi i s})/p(r)|} for each real r\geq 0, with s going from 0 to 1. Since we assume that p has no roots, this is well defined. Letting r vary gives a family of loops indexed by r (i.e., a homotopy), and in particular setting r=0 gives the constant loop, so [f_r]=[f_0]=[e_1]\in \pi_1(S^1).

Step 2: Take r bigger than \sum |a_i| and bigger than 1. Then |z^n|> |a_1 z^{n-1}+\dots+a_n| when |z|=r, so the polynomial p_t(z)=z^n+t(a_1z^{n-1}+\dots+a_n) has no roots on the circle |z|=r for any 0\leq t \leq 1. Now define H(t,s)=\frac{p_t(re^{2\pi i s})/p_t(r)}{|p_t(re^{2\pi i s})/p_t(r)|}. Then H(1,s)=f_r(s) (our loop based at 1), while H(0,s)=\frac{(re^{2\pi i s})^n/r^n}{|(re^{2\pi i s})^n/r^n|}=e^{2 \pi i n s}, which is n times the generator of the infinite cyclic group \pi_1(S^1); let's call the generator \omega. Hence we have just shown that [f_r]=n\cdot \omega, and by Step 1 we have [f_r]=0. In conclusion n\cdot \omega =0, which implies that n=0, so p(z) was a constant polynomial.
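The homotopy class computed in Step 2 is just a winding number, and it can be checked numerically. The sketch below is my own illustration (not from Hatcher); the polynomial z^3-2z+5 is an arbitrary example, and the function tracks the argument of p(re^{2\pi i s}) around a circle of radius r larger than 1+\sum|a_i|:

```python
import cmath
import math

def winding_number(coeffs, r, steps=4096):
    """Winding number of s -> p(r e^{2 pi i s}) around 0, by phase unwrapping."""
    def p(z):
        acc = 0j
        for c in coeffs:              # coefficients, highest degree first
            acc = acc * z + c
        return acc
    total = 0.0
    prev = cmath.phase(p(complex(r, 0)))
    for k in range(1, steps + 1):
        cur = cmath.phase(p(r * cmath.exp(2j * math.pi * k / steps)))
        d = cur - prev
        if d > math.pi:               # unwrap jumps across the branch cut
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return round(total / (2 * math.pi))

# On a circle where z^n dominates the lower-order terms, the loop winds n times.
print(winding_number([1, 0, -2, 5], r=10.0))  # 3
```

If p had no root, this winding number would have to be 0 for every r, which is exactly the contradiction the proof exploits.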

Posted in Uncategorized | Tagged | Leave a comment

Update

Hi,

It has been a very long time since I last updated, but I was extremely busy visiting graduate schools and with school assignments and work. First of all, I want to announce that I will be attending Yale in the fall for a Ph.D. program with an NSF grant, and I am looking forward to it very much. I asked a potential advisor what he would recommend I start reading, and he gave me the following list:

-First seven chapters of the Probabilistic Method by Alon and Spencer.

-Random matrices by Tao.

-This survey, and this other one (this one is a direct download, sorry).

In total this is a lot of reading, but I will spread it throughout the summer, since for the first time in a while I will have the summer off.

I decided to start with the topic I am most familiar with, the Probabilistic Method. I have decided to try to solve as many problems as I can, and I am almost done with all the problems from chapter 1. Here is the last problem I have solved so far in that chapter:

Prove that there is an absolute constant c>0 with the following property. Let A be an n\times n matrix with pairwise distinct entries. Then there is a permutation of the rows of A so that no column in the permuted matrix contains an increasing subsequence of length at least c\sqrt{n}.

Solution: This uses just the union bound from chapter 1, along with some bounds on factorials and binomial coefficients. The idea is very simple: we will show that, for a random permutation of the rows, the probability of having at least one column with an increasing subsequence of length at least c\sqrt{n} is less than 1; this implies that there is a permutation with no column having an increasing subsequence of that length.

Let \sigma be a uniformly random permutation of the rows. Let E be the event that some column has an increasing subsequence of length c\sqrt{n} (we will choose c later), and denote by E_i the event that column i has an increasing subsequence of that length. Then P(E)=P(\bigvee_i E_i)\leq \sum_i P(E_i)=nP(E_1) by symmetry. Now we bound the probability that a fixed column has a long increasing subsequence, again using the union bound. We look at all subsequences of length c\sqrt{n} and ask whether each one is increasing. That is, let S_j denote the event that the jth subsequence is increasing (note that j=1,\dots,\binom{n}{c\sqrt{n}}). Then P(S_j)=\frac{1}{(c\sqrt{n})!}, because the entries are pairwise distinct, so exactly one of the (c\sqrt{n})! relative orders is increasing. Hence P(E_1)\leq \binom{n}{c\sqrt{n}}\cdot\frac{1}{(c\sqrt{n})!}. Putting everything together we get that:

 P(E)\leq n\cdot \binom{n}{c\sqrt{n}}\cdot\frac{1}{(c\sqrt{n})!}

Technically speaking, if one were able to give an explicit c so that the above value was always less than one, we would be done; but since this expression is rather messy to deal with, we use the following useful bounds: (n/e)^n\leq n! and \binom{n}{k}\leq (ne/k)^k. Thus,

P(E)\leq n\cdot (ne/c\sqrt{n})^{c\sqrt{n}}\cdot \frac{1}{(c\sqrt{n}/e)^{c\sqrt{n}}}=n(e^2/c^2)^{c\sqrt{n}}

If we let c=2e, then P(E)\leq \frac{n}{4^{2e\sqrt{n}}}, which is less than one for every n\geq 1, and we are done. I will try to post more often!
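As a sanity check, the quantity in the problem is easy to simulate. The sketch below is my own code (not from Alon–Spencer, and the function names are mine); it computes the longest increasing subsequence of each column after a random row permutation, using patience sorting:

```python
import random
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest increasing subsequence, via patience sorting."""
    piles = []
    for v in seq:
        i = bisect_left(piles, v)
        if i == len(piles):
            piles.append(v)   # start a new pile
        else:
            piles[i] = v      # place v on the leftmost pile with top >= v
    return len(piles)

def max_column_lis(A, perm):
    """Largest LIS over all columns of A after permuting its rows by perm."""
    n = len(A)
    return max(lis_length([A[perm[r]][c] for r in range(n)]) for c in range(n))

n = 25
entries = list(range(n * n))          # pairwise distinct entries
random.shuffle(entries)
A = [entries[i * n:(i + 1) * n] for i in range(n)]
perm = list(range(n))
random.shuffle(perm)
print(max_column_lis(A, perm))        # typically far below 2e * sqrt(n)
```

In fact, a single random permutation already tends to work: for a random column the expected LIS length is around 2\sqrt{n}, comfortably below the constant c=2e from the proof.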

Posted in Uncategorized | Leave a comment

A different proof for the Erdos-Ko-Rado theorem

I am currently in a very neat class on algebraic methods in combinatorics, and my professor said that the very famous Erdős–Ko–Rado theorem in extremal combinatorics could be proved with linear algebra, so I decided to go ahead and try it out:

Theorem: If n\geq 2k and \mathcal{F} is a collection of k-element subsets of an n-element set such that any two sets intersect, then |\mathcal{F}|\leq \binom{n-1}{k-1}.

Before I start the proof I recall the definition of a Kneser graph: it is the graph whose vertices correspond to the k-subsets of an n-element set, with two vertices joined by an edge if and only if the subsets are disjoint. It is usually denoted KG_{n,k}. For the case n=5, k=2 this is the Petersen graph.

Now the connection between the Kneser graph and the EKR theorem is the following: the collection \mathcal{F} is an independent set of the Kneser graph, because two vertices not sharing an edge means that the corresponding subsets intersect. Hence EKR can be stated as follows:

Theorem: The independence number of KG_{n,k} (usually denoted \alpha(KG_{n,k})) is at most \binom{n-1}{k-1}.

To prove this theorem the ingredients are the following: we will prove a quadratic-form bound for d-regular graphs in terms of the smallest eigenvalue, use it to bound the size of an independent set, and then apply these results to the Kneser graph.

**************************************************************************

Say we are given a d-regular graph G on n vertices with eigenvalues \lambda_1\geq \lambda_2\geq\dots\geq \lambda_n (these are the eigenvalues of the adjacency matrix A_G, which has a 1 in the ij entry if and only if i\sim j and a 0 otherwise). Then for any x=(x_1,\dots,x_n) we have:

\sum_{i\sim j}(x_i-x_j)^2\leq (d-\lambda_n)\sum_{i=1}^nx_i^2

Proof: First, \sum_{i\sim j}(x_i-x_j)^2=d\sum_{i=1}^nx_i^2-2\sum_{i\sim j} x_ix_j, which holds because G is d-regular, so each x_i^2 appears once for each of the d edges at i. Secondly, note that we can change the index of summation from edges i\sim j to ordered pairs i,j if we insert a factor a_{ij} (which is 1 exactly when i\sim j); since each edge is then counted twice, the factor 2 is absorbed. Hence, we have:

\sum_{i\sim j}(x_i-x_j)^2=d\sum_{i=1}^nx_i^2-\sum_{i,j}a_{ij}x_ix_j

=d\sum_{i=1}^n x_i^2-x^t A x

Now I am going to use a fact from linear algebra: if A is a real symmetric matrix, then \lambda_n=\min_{|x|=1}x^t A x. This implies that for any x=(x_1,\dots,x_n) we have x^t Ax\geq \lambda_n |x|^2=\lambda_n\sum x_i^2. So the expression above is

\leq d\sum_{i=1}^nx_i^2-\lambda_n\sum_{i=1}^n x_i^2.

Hence, we proved that for any x\in \mathbb{R}^n we have \sum_{i\sim j} (x_i-x_j)^2\leq (d-\lambda_n)\sum_{i=1}^n x_i^2.

******************************************************************************

Now we will use this result to bound the independence number of a d-regular graph.

Theorem: Let G be d-regular with eigenvalues \lambda_1\geq\dots\geq \lambda_n. Then

\alpha(G)\leq \frac{-\lambda_n}{d-\lambda_n}\cdot n

Proof: Let I be any independent set. Then consider x\in \mathbb{R}^n to be defined as follows:

x_i=\begin{cases} n-|I| \text{ if } i\in I\\ -|I| \text{ if }i\notin I    \end{cases}

Now we just apply the above lemma to this case. We have:

\sum_{i\sim j}(x_i-x_j)^2=n^2d|I|

since x_i-x_j is 0 if i and j are both in I or both in V\backslash I, and otherwise the difference is \pm n. The number of edges from I to its complement is d|I|: each vertex in I has degree d, and since I is independent none of its edges stay inside I, so all of them go to the complement.

Also, note that

(d-\lambda_n)\sum_{i=1}^n x_i^2=(d-\lambda_n)[|I|(n-|I|)^2+(n-|I|)|I|^2]=(d-\lambda_n)(n-|I|)(|I|)n

this is by splitting the sum into the two disjoint sets I and V\backslash I. Hence, using the above lemma:

n^2d|I|\leq (n-|I|)(|I|)n(d-\lambda_n)

and solving for |I| gives |I|\leq \frac{-\lambda_n n}{d-\lambda_n}, as we wanted.

*************************************************************************************

Now time to solve the big theorem!

Proof: In our case the number of vertices of KG_{n,k} is \binom {n}{k}. Also, every vertex has the same degree, equal to \binom{n-k}{k} (here we use the assumption that n\geq 2k).

Now using the fact that the smallest eigenvalue of KG_{n,k} is equal to -\binom{n-k-1}{k-1} we get that

\alpha(KG_{n,k})\leq \frac{\binom{n}{k}\binom{n-k-1}{k-1}}{\binom{n-k}{k}+\binom{n-k-1}{k-1}}=\binom{n-1}{k-1}

Just as we wanted to prove.

Note that the bound is attained: for a family of k-sets that pairwise intersect, take all sets containing the fixed element 1. There are n-1 elements left to pick from, and each set needs k-1 more elements to be complete, for a total of \binom{n-1}{k-1}.
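The final computation collapses nicely, and it can be checked with exact arithmetic. A small sketch (the function name is mine) verifying that the Hoffman-style bound for KG_{n,k} really equals \binom{n-1}{k-1}:

```python
from fractions import Fraction
from math import comb

def hoffman_bound_kneser(n, k):
    """(-lambda_n / (d - lambda_n)) * |V| for KG_{n,k}, computed exactly."""
    v = comb(n, k)                       # number of vertices
    d = comb(n - k, k)                   # degree of every vertex
    lam_min = -comb(n - k - 1, k - 1)    # smallest eigenvalue of KG_{n,k}
    return Fraction(-lam_min, d - lam_min) * v

# The bound equals C(n-1, k-1) for every admissible pair (n, k).
for n in range(4, 14):
    for k in range(2, n // 2 + 1):
        assert hoffman_bound_kneser(n, k) == comb(n - 1, k - 1)
print(hoffman_bound_kneser(5, 2))  # 4 = C(4,1): the bound for the Petersen graph
```

Using Fraction avoids any floating-point doubt about the equality; the loop confirms the algebraic identity \binom{n}{k}\binom{n-k-1}{k-1}/(\binom{n-k}{k}+\binom{n-k-1}{k-1})=\binom{n-1}{k-1} on a range of small cases.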

Posted in Uncategorized | Leave a comment

Some problems in combinatorics (Part 2)

Here is the second problem from the series of entries. It coincidentally overlaps with the class I am taking with Professor Sudakov right now, which is nice.

Problem 2: Let \mathcal{A} and \mathcal{B} be two families of subsets of an n-element set such that |A\cap B| is odd for every A\in \mathcal{A} and every B\in \mathcal{B}. Prove that |\mathcal{A}|\cdot |\mathcal{B}|\leq 2^{n-1}.

The proof uses linear algebra methods. The idea is the following: we will work over the vector space \mathbb{F}_2^n and create two sets of vectors. We will double one of the sets and translate the other, obtaining two orthogonal sets (they might not be subspaces, but "subspace" below should be understood as the subspace spanned by the set). Hence the dimensions of their spans sum to at most n, and taking the doubling factor into account finishes the problem. Here are the details:

Proof: Let X and Y be the sets of incidence vectors of the sets in \mathcal{A} and \mathcal{B} respectively. Let x_0 be any element of X, and define X_0=\{x+x_0:x\in X\}. We claim that X_0 and X are disjoint: otherwise some x\in X could be written as x=x_0+x_1 with x_1\in X, and then for any y\in Y we would have 1=\langle x,y\rangle=\langle x_0,y\rangle+\langle x_1,y\rangle=1+1=0, a contradiction.

Then let X_1=X\cup X_0 (this is the doubling mentioned in the sketch of the proof). Now fix y_0\in Y. Since the dot product of any x\in X with any y\in Y equals 1 and we want to create orthogonal sets, define Y_0=\{y+y_0:y\in Y\}. With this, for any x\in X and y'=y+y_0 \in Y_0 we have \langle x,y'\rangle=\langle x,y\rangle+\langle x,y_0\rangle=1+1=0, and similarly \langle x',y'\rangle=0 for any x'=x+x_0\in X_0. Thus the sets X_1 and Y_0 are orthogonal. Hence,

2|\mathcal{A}||\mathcal{B}|=|X_1||Y_0|\leq 2^{\dim X_1}\cdot 2^{\dim Y_0}=2^{\dim X_1+\dim Y_0}\leq 2^n

where the last inequality holds because X_1 and Y_0 span orthogonal subspaces. The result follows.
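The bound can be brute-force checked for tiny n. The sketch below is my own (the names are mine, and it is feasible only for n\leq 3, since it enumerates every family \mathcal{B}); it finds the true maximum of |\mathcal{A}|\cdot|\mathcal{B}|:

```python
from itertools import combinations

def max_product(n):
    """Max of |A|*|B| over pairs of families with all cross-intersections odd."""
    subs = [frozenset(c) for r in range(n + 1)
            for c in combinations(range(n), r)]
    best = 0
    for r in range(1, len(subs) + 1):           # every nonempty family B
        for B in combinations(subs, r):
            # largest compatible A: all sets meeting every member of B oddly
            A = [S for S in subs if all(len(S & T) % 2 == 1 for T in B)]
            best = max(best, len(A) * len(B))
    return best

print(max_product(3))  # 4 == 2^(3-1), matching the theorem
```

For n=3 the maximum 4 is attained, e.g., by \mathcal{B}=\{\{1\}\} and \mathcal{A}= all four subsets containing 1, so the bound 2^{n-1} is sharp.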

Posted in Uncategorized | Leave a comment

Some problems in combinatorics

I am going to create a series devoted to solving the combinatorics final given at Princeton. There are a total of five problems, and the first one is:

Problem 1: Show that for any positive integer r there exists a positive integer S(r) such that if the integers \{1,2,\dots,S(r)\} are colored with r colors, then there exist distinct integers x,y,z in this set, having the same color, and satisfying:

x+y=z

Proof: This problem is a nice application of Ramsey numbers. First of all, I previously proved that the Ramsey number N(t,s) is finite. Using this, one can prove that the number N(x_1,\dots,x_n) is finite as well (here N(x_1,\dots,x_n) is the minimal number of vertices needed in a complete graph so that any edge-coloring with n colors contains a K_{x_1} of color 1, or a K_{x_2} of color 2, and so on). The proof is by just checking that N(t,s,q)\leq N(N(t,s),q), which is not difficult.

Back to the problem. We claim that the number N=N(3,3,\dots,3) (with r threes) will work. Take K_N and label the vertices 1,2,\dots, N. Color the edges of the graph as follows: for the edge (x,y) (wlog say x>y), give it the color received by x-y in the coloring of \{1,2,\dots,N\}. Hence we obtain an edge-coloring of K_N with r colors. By definition of the Ramsey number, there exists a monochromatic triangle: three edges (x,y),(y,z),(z,x) (wlog say x<y<z) all of the same color. Since (y-x)+(z-y)=(z-x), we are done.
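One can brute-force the two-color case directly (code and names mine; for simplicity I allow x=y, the classical Schur setting): every 2-coloring of \{1,\dots,5\} contains a monochromatic solution of x+y=z, while \{1,\dots,4\} can be colored to avoid one.

```python
from itertools import product

def has_mono_triple(colors):
    """colors[i] is the color of i+1; look for same-colored x, y, x+y."""
    n = len(colors)
    return any(colors[x - 1] == colors[y - 1] == colors[x + y - 1]
               for x in range(1, n + 1)
               for y in range(x, n + 1) if x + y <= n)

def schur_forced(S, r):
    """True iff every r-coloring of {1..S} has a monochromatic x+y=z."""
    return all(has_mono_triple(c) for c in product(range(r), repeat=S))

print(schur_forced(4, 2), schur_forced(5, 2))  # False True
```

The witness coloring for S=4 is \{1,4\} in one color and \{2,3\} in the other; starting at 5, no 2-coloring escapes, matching the classical Schur number S(2)=4.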

On another note, I would like to congratulate Nick Strehlke on his admission into the MIT math PhD program!

 

Posted in Uncategorized | 1 Comment

Maximum number of points in \mathbb{R}^n where there are only two distances allowed.

Consider the natural problem: "How many points can we have in \mathbb{R}^n such that the distance between any two points is the same?" The answer to this question is well known to be n+1, attained by a simplex. The generalization, and the content of this entry, is to answer: "What happens when we allow two distances?"
Definition: Let \delta_1 and \delta_2 be two distinct positive real numbers. Let m(n) denote the maximum number of points in \mathbb{R}^n such that the distance between any two points is either \delta_1 or \delta_2.
We are going to prove lower and upper bounds for m(n): the lower bound by construction, and the upper bound using basic linear algebra. The most important thing the reader should take away is the method: we will construct linearly independent vectors and bound above the dimension of the space they live in; by linear algebra, this number also bounds the number of vectors.

Theorem: \frac{n(n+1)}{2}\leq m(n)\leq \frac{(n+1)(n+4)}{2}

Proof: First we work on the lower bound. Consider the set of all \{0,1\}^n vectors with exactly 2 non-zero entries. There are \binom{n}{2} of them in \mathbb{R}^n. The distance between any two of them is either 2 or \sqrt{2} (the first case is when their 1 entries share no coordinate, the second is when they share exactly one coordinate, and these are the only two cases). This first attempt is not so bad, since we got \frac{n(n-1)}{2}, so how do we get the extra n? Consider the same setup in \mathbb{R}^{n+1}: the same idea gives \binom{n+1}{2} points in \mathbb{R}^{n+1} such that the distance between any two is either 2 or \sqrt{2}. We can bring this construction back to \mathbb{R}^n because all of these vectors lie in a hyperplane (if x is one of the vectors, then x satisfies \sum_{i=1}^{n+1} x_i=2), so all of the points can actually be viewed as living in \mathbb{R}^n. This proves \frac{n(n+1)}{2}\leq m(n).
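The construction can be checked directly. A small sketch (mine) building the \binom{n+1}{2} points and confirming that exactly two distances occur:

```python
from itertools import combinations
from math import dist, sqrt

def two_distance_points(n):
    """All 0/1 vectors in R^{n+1} with exactly two ones: C(n+1, 2) points."""
    pts = []
    for i, j in combinations(range(n + 1), 2):
        v = [0.0] * (n + 1)
        v[i] = v[j] = 1.0
        pts.append(v)
    return pts

n = 7
pts = two_distance_points(n)
assert len(pts) == n * (n + 1) // 2                      # n(n+1)/2 points
ds = {round(dist(p, q), 9) for p, q in combinations(pts, 2)}
assert ds == {round(sqrt(2), 9), 2.0}                    # only the two distances
print(len(pts), sorted(ds))
```

Two such vectors either differ in four coordinates (distance 2) or in two (distance \sqrt{2}), which is exactly what the set comprehension confirms.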

Proving the upper bound is the most interesting part of this entry. Let P=\{p_1,\dots,p_m\} be a set of points such that the distance between any two distinct p_i and p_j is either \delta_1 or \delta_2. For each i consider the function:

f_i(x)=(||x-p_i||^2-\delta_1^2)(||x-p_i||^2-\delta_2^2)

Then we are going to prove that f_1,...,f_m are linearly independent. Consider

\alpha_1f_1(x)+\dots+\alpha_mf_m(x)=0

Let x=p_j. Note that if i\neq j then f_i(p_j)=0, but f_j(p_j)=\delta_1^2\delta_2^2. Thus the above equation reads:

\alpha_j\delta_1^2\delta_2^2=0

So we must have \alpha_j=0 and since this holds for every j, we see that f_1,...,f_m are linearly independent.

Note that if we write x=(x_1,\dots,x_n) and p_i=(c_1,\dots,c_n), we can view

f_i(x)=(||x-p_i||^2-\delta_1^2)(||x-p_i||^2-\delta_2^2)

as a polynomial in the n variables x_1,\dots,x_n:

f_i(x)=(\sum_j(x_j-c_j)^2-\delta_1^2)(\sum_j(x_j-c_j)^2-\delta_2^2)

And when we expand this product we see it will be a linear combination of

(\sum_j x_j^2)^2, \qquad (\sum_j x_j^2)x_k,\qquad x_jx_k, \qquad x_j, \qquad 1

These number 1+n+\frac{n(n+1)}{2}+n+1=\frac{(n+1)(n+4)}{2} in total. Since f_1,\dots,f_m are linearly independent and live in a subspace spanned by \frac{(n+1)(n+4)}{2} polynomials, m must be less than or equal to this quantity. QED

The moral of the story is that upper bounds on the size of a collection of objects can sometimes be achieved by assigning a vector to each object, proving the vectors are independent, and bounding the dimension of the space they live in.

A quick note on the Ramsey survey I was doing: I was going to use the Lovász Local Lemma to improve the bound from last time, but it only improves it by a small factor, since the dependencies between the events of having monochromatic K_k's are "high", so I have decided not to. If any of you have suggestions on what you would like to see, you can email me.

Posted in Uncategorized | 1 Comment

Probabilistic method and a proof of a lower bound on Ramsey numbers

In this post I will introduce a little bit of the probabilistic method and use it to prove a lower bound for Ramsey Numbers.

The tool works like this: we create a probability space, and for an event A, if we manage to prove that P[A]<1, then P[A^c]>0, i.e., the complement occurs with positive probability. The following is an example of how this easy observation can be used to prove that some object exists.

Recall that we proved last time that the Ramsey number N(t,s) is finite. Now we are going to prove a lower bound.

Theorem: Let n be such that

\binom{n}{k}\cdot \frac{1}{2^{\binom{k}{2}-1}}<1,

then N(k,k)>n.

Proof: First of all, the condition we have set upon n may seem arbitrary, but the proof will make apparent why we chose it.

The strategy is to consider a random coloring and calculate the probability that it contains a monochromatic K_k. If we manage to prove that this probability is strictly less than 1, then there must exist some coloring that does not contain a monochromatic K_k (think about this! It is the key to this proof and to many probabilistic-method proofs).

So the probability that some fixed K_k is monochromatic is \frac{1}{2^{\binom{k}{2}}}\cdot 2, because there are \binom{k}{2} edges and we want them all blue or all red. Now we apply a very loose bound, the union bound. Recall from probability that

P[\bigcup A_i]\leq \sum_i P[A_i]

Then, since there are \binom{n}{k} many copies of K_k in K_n, we have that

P[\text{ Some } K_k \text{ is monochromatic}]\leq \binom{n}{k}\frac{1}{2^{\binom{k}{2}}}\cdot 2

but by our condition on n (aha! it now becomes apparent) this probability is strictly less than 1, so we are done.

Remark: Note that when k\geq 3 and we set n=\lfloor 2^{k/2}\rfloor we have:

\binom{n}{k}2^{1-\binom{k}{2}}<\frac{n^k}{k!}\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}<\frac{(2^{k/2})^k}{k!}\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}=\frac{2^{1+\frac{k}{2}}}{k!}<1

Where above in the first inequality we use the bound for the binomial coefficient \binom{n}{k}<\frac{n^k}{k!}, so in particular we have that N(k,k)>\lfloor 2^{k/2}\rfloor, and thus we have proven a lower bound for the Ramsey number.
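The remark's computation can be verified exactly. A sketch (names mine) using exact rationals for the union bound:

```python
from fractions import Fraction
from math import comb, floor

def union_bound(n, k):
    """Exact value of C(n,k) * 2^(1 - C(k,2)): the union bound from the proof."""
    return Fraction(2 * comb(n, k), 2 ** comb(k, 2))

# Whenever the bound is < 1, some coloring of K_n avoids monochromatic K_k,
# i.e. N(k,k) > n. The remark's choice n = floor(2^(k/2)) always works.
for k in range(3, 12):
    n = floor(2 ** (k / 2))
    assert union_bound(n, k) < 1   # hence N(k,k) > floor(2^(k/2))
print(union_bound(16, 8) < 1)      # True: N(8,8) > 16
```

Using Fraction keeps the comparison with 1 exact, so there is no floating-point doubt in the verification.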

In the next post I will discuss the Lovász Local Lemma and use it to improve this bound. The reason this bound is loose is that the union bound is extremely wasteful here: these events are far from disjoint. Given that a fixed K_k is monochromatic, another k-clique sharing most of its edges is much more likely than average to be monochromatic as well. So stay tuned!

Posted in Uncategorized | Leave a comment

One sentence proof

One of the readers (Isaac Solomon) has asked me to do a small explanation of the so-called "one sentence proof". It is due to Zagier, and it proves the following theorem:

Theorem: An odd prime p can be written as a sum of two squares if and only if the prime is congruent to 1 \mod{4}.

Note that one direction is immediate: if p is the sum of two squares, then since every square x^2 is congruent to 0 or 1 modulo 4, p must be congruent to 0, 1, or 2 modulo 4, and since p is an odd prime only 1 can happen.

The other direction is the hard one, and we are going to solve it using involutions. (Recall that an involution is a function such that f\circ f=id.)

The idea of the proof is the following. Define a set S=\{(x,y,z)\in\mathbb{N}^3: p=x^2+4yz\}. Here \mathbb{N} does not contain 0. Then define two involutions:

f(x,y,z)=(x,z,y)

And a more complicated one, g(x,y,z):

  1. (x+2z,z,y-x-z) if x<y-z
  2. (2y-x,y,x-y+z) if y-z<x <2y
  3. (x-2y,x-y+z,y) if x>2y.

It is very clear that f is an involution. It is a little cumbersome (but not hard) to check that g is an involution; I'll do one of the cases. Say x<y-z. Then g(x,y,z)=(x+2z,z,y-x-z). For this new triple, case (3) applies, since x+2z>2z; hence g(x+2z,z,y-x-z)=(x+2z-2z,\,x+2z-z+y-x-z,\,z)=(x,y,z). The reader can check that the other cases work out similarly (you should also verify that g maps S into itself; a priori it is not immediate that the image also satisfies x^2+4yz=p). So indeed g is an involution.

Then we are going to use the following lemma:

Lemma: Given a finite set and an involution we have that the number of fixed points has the same parity as the cardinality of the set.

Proof: Let S=\{s_1,\dots,s_n\} be the finite set in question. If s_i is not a fixed point, then there exists s_j\neq s_i with f(s_i)=s_j, and this forces f(s_j)=s_i. Deleting s_i and s_j from S gives a new set S' on which f is still an involution, with |S|\equiv |S'|\mod{2}. Continuing in this manner, delete all the non-fixed points; the result follows.

Using the above lemma we are going to do the following trick. We already have the set S, and since p\equiv 1\mod 4 it is not empty: (1,1,\frac{p-1}{4})\in S. We will show that g has a unique fixed point, so |S| is odd, so the number of fixed points of f is odd, so in particular f has at least one fixed point. That is, some (x,y,z)=(x,z,y) satisfies x^2+4yz=p, giving x^2+(2y)^2=p, and we would be done.

Lemma: g has a unique fixed point.

Proof: If (x,y,z) is a fixed point, then it must fall in case 2, because case 1 maps to a triple with strictly bigger first coordinate and case 3 maps to one with strictly smaller first coordinate. Hence,

 

(x,y,z)=(2y-x,y,x-y+z)

From here we see that x=y. Then p=x^2+4xz=x(x+4z), so x\mid p; since x<x+4z we must have x=1, hence y=1 and z=\frac{p-1}{4}. So indeed g has a unique fixed point.

By the above remarks we are done.
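Everything above can be verified mechanically for a small prime. The sketch below is mine (not Zagier's); it builds S, checks that g is an involution with a unique fixed point, and extracts the two squares for p=13:

```python
def zagier_set(p):
    """S = {(x, y, z) in N^3 : x^2 + 4yz = p}, with N = {1, 2, ...}."""
    S = set()
    x = 1
    while x * x < p:
        rem = p - x * x
        if rem % 4 == 0:
            m = rem // 4
            for y in range(1, m + 1):
                if m % y == 0:
                    S.add((x, y, m // y))
        x += 1
    return S

def g(t):
    """Zagier's involution on S."""
    x, y, z = t
    if x < y - z:
        return (x + 2 * z, z, y - x - z)
    if x < 2 * y:
        return (2 * y - x, y, x - y + z)
    return (x - 2 * y, x - y + z, y)

p = 13
S = zagier_set(p)
assert all(g(t) in S and g(g(t)) == t for t in S)   # g is an involution on S
assert sum(g(t) == t for t in S) == 1               # with a unique fixed point
x, y, z = next(t for t in S if t[1] == t[2])        # a fixed point of f
assert x * x + (2 * y) ** 2 == p
print(x, 2 * y)  # 3 2 — and 3^2 + 2^2 = 13
```

Note that the boundary cases x=y-z and x=2y never occur on S (they would force p to factor), so g is well defined, just as the proof requires.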

This explanation ran a little over a sentence, so calling it a "one sentence proof" is a bit of a stretch, but it is a beautiful proof nonetheless.

 

 

 

Posted in Uncategorized | 1 Comment

Ramsey numbers.

The next couple of entries will be devoted to proving some Ramsey-type results. The topic of Ramsey theory was picked by one of the readers: Albert Anticona. The plan (although I might not stick to it 100%) for the next couple of entries is: prove finiteness and an upper bound for the Ramsey numbers N(t,s), then a lower bound, then a small detour into the Lovász Local Lemma, which we will use to prove a better lower bound. Lastly I will show some cool Ramsey-type problems.

First of all we start with a definition (as always). Denote by N(t,s) the minimum number of vertices n needed so that every red-blue coloring of the edges of the complete graph on n vertices (K_n) contains either a red K_t or a blue K_s. A priori these numbers need not be finite (perhaps for every n some coloring avoids both a red K_t and a blue K_s). The following theorem proves that these numbers are actually finite, and moreover it gives a bound on how big they can be.

Theorem:  N(t,s)\leq \binom{s+t-2}{t-1}.

Proof:

First we are going to prove the bound N(t,s)\leq N(t-1,s)+N(t,s-1)

Let n=N(t-1,s)+N(t,s-1), and assume we are given a coloring of K_n. We want to find either a red copy of K_t or a blue copy of K_s. Let x be a vertex of K_n. As the graph is complete, deg(x)=n-1=N(t-1,s)+N(t,s-1)-1, so by pigeonhole either x has at least N(t-1,s) red edges or at least N(t,s-1) blue edges. Without loss of generality, say the first case holds: there are at least a=N(t-1,s) red edges at x. Consider the subgraph K_a formed by a of the vertices joined to x by a red edge. If there is a blue copy of K_s in K_a we are done. Otherwise, by definition of N(t-1,s), there is a red copy of K_{t-1} in K_a, and adjoining x to it gives a red copy of K_t in K_n.

Hence the bound is proven. Now we prove that

N(t,s)\leq \binom{s+t-2}{t-1}

by induction on s+t. Note that if t=2 or s=2, then N(t,2)=t and N(2,s)=s. Hence we have that

t=N(t,2)\leq\binom{2+t-2}{t-1}=t

holds. This was our base case. Now assume that the bound holds for any pair (t',s') with t'+s'<t+s. Then by our recursion-like bound we have that

N(t,s)\leq N(t-1,s)+N(t,s-1)\leq\binom{s+t-3}{t-2}+\binom{s+t-3}{t-1}=\binom{s+t-2}{t-1}

With this we then have N(3,3)\leq N(3,2)+N(2,3)=3+3=6. To prove that in fact N(3,3)=6, we have to exhibit a red-blue coloring of K_5 avoiding monochromatic triangles in both colors:

(Figure: K_5 colored with a red 5-cycle and the complementary blue 5-cycle; neither color contains a triangle.)

Note that whenever K_6 is colored red and blue, we can also prove the existence of a monochromatic triangle directly using the Pigeonhole Principle (in fact I have seen this problem given to students preparing for math olympiads).
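Both claims — that a good coloring of K_5 exists and that K_6 always forces a monochromatic triangle — can be brute-forced (sketch and names mine; the K_6 check runs through all 2^{15} colorings):

```python
from itertools import combinations, product

def has_mono_triangle(n, color):
    """color maps each edge (i, j) with i < j to 0 (red) or 1 (blue)."""
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(range(n), 3))

def forced(n):
    """True iff every red-blue coloring of K_n has a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, c)))
               for c in product((0, 1), repeat=len(edges)))

print(forced(5), forced(6))  # False True — confirming N(3,3) = 6
```

The counterexample the search finds for n=5 is exactly the coloring pictured above: a red 5-cycle with the diagonals blue.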

Posted in Uncategorized | 1 Comment

Welcome

I have decided to start a blog and every once in a while to post a theorem that I like and share my thoughts on math. From time to time there might be posts that are unrelated to math since I do have other interests too.

To start the blog, let me give the definition of a parking function. Imagine that we have n parking spots, labeled 1,2,\dots,n, and n cars that want to park there. Each car has a preferred spot, and the cars enter the parking structure in order. So we can consider n-tuples of numbers representing the preferences (e.g., (2,2,1) means the first car to drive in prefers spot 2, the second car likes that same spot, and the last car likes the first spot). The parking structure is built so that you can never reverse; if someone drives to their favorite spot and it's taken, they drive on to the next available spot, and if no spot is available the person has to exit the structure, will be late, and will ultimately get fired.

Hence, we want to count the number of preference tuples such that no one gets fired. For the sake of illustration, note that the example above is a valid parking function: the first car took the second spot; the second car wanted the second spot but, since it was taken, drove on and took the third; and the last car took the first spot. An example of a bad arrangement is (3,3,1): the first car takes the third spot, and the second car drives down to the third spot but, since it is taken, has to drive out of the structure (and thus gets fired). Let us denote by f(n) the number of parking functions on n spots.

Theorem: f(n)=(n+1)^{n-1}

The proof is due to Pollak and it is a very easy (but clever) idea: add an extra spot n+1 and make the parking lot loop around, so that a car that passes the last spot drives back around to the first. Since we loop around, all n cars can always park, and there are a total of (n+1)^n preference tuples. Note that after all cars have parked there is always exactly one empty spot. By symmetry of the problem, the number of tuples that leave spot n+1 empty is \frac{(n+1)^n}{n+1}=(n+1)^{n-1}, and these tuples (the ones where spot n+1 ends up empty) are exactly the parking functions we want (everyone parked in the first n spots and never had to go around). QED.
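The count is easy to confirm by simulation for small n (sketch and names mine):

```python
from itertools import product

def everyone_parks(pref):
    """Simulate the lot: each car tries its spot, then rolls forward."""
    n = len(pref)
    taken = [False] * n
    for p in pref:                 # preferences are 1-indexed
        s = p - 1
        while s < n and taken[s]:
            s += 1
        if s == n:
            return False           # drove past the last spot: fired
        taken[s] = True
    return True

def f(n):
    """Number of parking functions on n spots, by brute force."""
    return sum(everyone_parks(t) for t in product(range(1, n + 1), repeat=n))

print([f(n) for n in range(1, 6)])  # [1, 3, 16, 125, 1296]
```

The values match (n+1)^{n-1} = 1, 3, 16, 125, 1296 for n = 1,\dots,5.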

Remark: f(n) is also the number of labeled trees on n+1 vertices, in case you want to read more about them.

Posted in Uncategorized | Leave a comment