
I wanted a good lead-in for this unit, and decided at the last minute to introduce Markov chains first. I wanted a fun application, but couldn't find anything simple online. (I was encouraged, though, to see how useful the concept seems to be.) I ended up using a problem from our text (#15 in 4.6). The problem gives a 'migration matrix' showing percent of Californians who stay in-state versus move away, and percent of people from other states who move to California versus not. Since we're in California, it seemed like a fun problem to look at.

I started (yesterday) with:

K said the numbers were decimals. I agreed, and added that they were pretty ugly, whereas we usually try to give ourselves nice simple numbers for our matrices. Pause... S said, "It adds up to 1." I wrote that under the first column, and they told me it was true of the second column and the vector too.

Then I told them it's called a *probability vector* when the entries are all positive or zero, and add up to 1. A matrix whose columns are all probability vectors is called a *stochastic matrix* or a *transition matrix*.

We did the multiplication and noticed that the result also added to 1. Hmm, will that always be true?
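Here's a quick numerical version of that check. (The numbers below are made up for illustration; they're not the textbook's migration figures.)

```python
import numpy as np

# An illustrative 2x2 stochastic matrix: each column is a probability vector.
P = np.array([[0.95, 0.03],
              [0.05, 0.97]])
# An illustrative probability vector.
x = np.array([0.2, 0.8])

# Columns of P, and the vector x, each sum to 1.
print(P.sum(axis=0))
print(x.sum())

# The product is again a probability vector: nonnegative entries summing to 1.
y = P @ x
print(y, y.sum())
```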

I wrote and asked them what the entry below *a* should be. They told me *1-a*. I wrote a *b* in the next column, and they told me to put *1-b* below it. I wrote the vector with a *c*, and they gave me the *1-c*. We multiplied, getting:

Now we needed to add those 2 entries. Adding down, we saw ac+(1-a)c = c(a+1-a) = c·1 = c, and b(1-c)+(1-b)(1-c) = (1-c)(b+1-b) = (1-c)·1 = 1-c. So all together we get c + (1-c) = 1. Sweet!
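The same 2x2 computation can be checked symbolically (a sketch using SymPy, assuming it's available):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

# The general 2x2 transition matrix and probability vector from class.
P = sp.Matrix([[a, b],
               [1 - a, 1 - b]])
x = sp.Matrix([c, 1 - c])

# The sum of the entries of P*x should simplify to 1.
total = sp.simplify(sum(P * x))
print(total)
```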

H asked if it worked for all sizes of matrices. I saw that he was being polite. He could have said, "Sue, your proof is only for a 2x2 matrix. Don't we want to prove it for all matrices?" Whenever a student points out a mistake I've made, that's a donut point for the class. While recording the donut point, I said, "Oh, I see that I've only proved it for 2x2 matrices. What's your hunch, everyone, will it work for an nxn matrix?" Most of them thought it would work, and I asked them to try to prove the 3x3 case at home. I had started to explain the next thing I wanted us to look at, when H said he thought he could prove it for nxn. I handed over the chalk and his proof was simple and elegant. I

*loved*that it came from a student, instead of from me!

He started out with:

That would imply just the two columns. I told him how to change his subscripts, and he changed them. (We haven't used the double subscripts much, if at all. We usually refer to the columns of a matrix as vectors, using [**a**_{1} **a**_{2} ...].) He now wrote:

Isn't that beautiful?!
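(H's board work didn't survive into the post; the standard n×n argument, which I believe matches his, swaps the order of summation:)

```latex
\[
\sum_{i=1}^{n} (P\mathbf{x})_i
  = \sum_{i=1}^{n} \sum_{j=1}^{n} p_{ij}\, x_j
  = \sum_{j=1}^{n} x_j \sum_{i=1}^{n} p_{ij}
  = \sum_{j=1}^{n} x_j \cdot 1
  = 1
\]
```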

So we knew that any transition matrix times a probability vector will give a new probability vector. Next I told them what those numbers meant. I wrote "In 1989, 11.7% of the U.S. population lived in California."

They got that the vector was people in California versus out. (I was hoping someone might figure out what the matrix represented, too, but we didn't manage that.) I showed them what each entry in the matrix meant. The first column is about the population within California, and the second column is about the rest of the U.S. The top row is for people who end up in California, and the bottom row is for those who end up outside California. So over 98% of Californians keep their residence in the state, and under 2% move away each year. Multiplying our matrix P by the vector

**x**_{0} gives a new value, **x**_{1}, for the percent of the U.S. living in California in 1990. If the migration patterns remain the same, the percentages for 1991 will be given by P times **x**_{1} (or P^{2} times **x**_{0}). We checked (on a student's smartphone) the current population percentage, and it was impressively close to the prediction given by P^{23} times **x**_{0}.

I finally introduced them to the phrase *Markov chain*. We create a chain of values by repeatedly multiplying by P. One of the questions raised by this process is whether you can find a vector, **x**, so that P**x** = **x**; this is called a steady state vector.
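The chain itself is just a loop of matrix-vector multiplications. A sketch (the matrix entries are stand-ins consistent with the description above, roughly 98% of Californians staying each year, not the textbook's exact figures):

```python
import numpy as np

# Illustrative migration matrix; columns sum to 1.
# Row 1: end up in California; row 2: end up outside California.
P = np.array([[0.982, 0.003],
              [0.018, 0.997]])

# 1989: 11.7% of the U.S. population lived in California.
x = np.array([0.117, 0.883])

# Each multiplication by P advances the chain one year: 23 steps takes 1989 to 2012.
for _ in range(23):
    x = P @ x

print(x)
```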

Today I proved that you could always find such a vector:

If P**x**=**x**, then P**x**-**x**=**0**, or P**x**-I**x**=**0**, or (P-I)**x**=**0**.

We need an **x** not equal to **0**. Will it exist?

We can show that row reduction can produce a bottom row of P-I that is all zeros:

Let a new version of the bottom row be all of the rows added together.

(Each column of P adds up to 1, but has a 1 subtracted in P-I, so each column of P-I adds up to 0, and the rows added together give the zero row.)

This allows for one free variable, and implies the existence of a non-zero **x** satisfying the equation.

There is a related question we didn't address: Will other values of **x** (ever, sometimes, always) lead *toward* the steady state?
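Numerically, a steady state vector is an eigenvector of P for eigenvalue 1, rescaled so its entries sum to 1. A sketch (again with illustrative matrix entries, not the textbook's):

```python
import numpy as np

# Illustrative 2x2 transition matrix; columns sum to 1.
P = np.array([[0.982, 0.003],
              [0.018, 0.997]])

# Solving (P - I)x = 0 amounts to finding the eigenvector for eigenvalue 1.
vals, vecs = np.linalg.eig(P)
i = np.argmin(np.abs(vals - 1))          # pick the eigenvalue closest to 1
steady = vecs[:, i] / vecs[:, i].sum()   # normalize into a probability vector

print(steady)
print(P @ steady)  # multiplying by P leaves it unchanged
```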

I moved on by generalizing our previous problem to:

For an nxn matrix A, can we find a (non-zero) vector **x** and a scalar (number) c, so that A**x**=c**x**?

Then I told them that our textbook, and everyone, uses λ instead of c. No big deal, right? And yet, between the names used for parts of this topic (eigenvectors, eigenvalues, and eigenspaces) and the variable name used (a funky Greek letter, λ), I used to be pretty intimidated by this stuff.

We carried on valiantly, seeing that this equation could be played with much like the previous one:

A**x**=λ**x** implies A**x**-λ**x**=**0**, so A**x**-λI**x**=**0**, giving us (A-λI)**x**=**0**.

These won't be as easy to resolve as the P equations, since we have the added complication of the λ. But we have a tool for this. We know that we can only have non-zero solutions (**x**) if the matrix A-λI is not invertible. And we know that non-invertible matrices have *determinant* equal to 0. So we want det(A-λI)=0. And that is called the *characteristic equation*.

I did one example to get us started:

det(A-λI)=0 gives us (5-λ)(6-λ)-6=0, which simplifies to λ^{2}-11λ+24=0.

This factors to (λ-3)(λ-8)=0, so λ=3,8.

If λ=3, we get

The 3 and the 8 are called the eigenvalues. That last vector above is the eigenvector associated with 3, and you can talk about the eigenspace for which it forms a basis; there's an eigenvector for 8 (you find it, I'm tired), and its eigenspace. And we'll do more of this next week.
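The example's matrix didn't survive into the post, but its characteristic polynomial did. Here's a check with one hypothetical matrix having that same polynomial, (5-λ)(6-λ)-6 = λ²-11λ+24:

```python
import numpy as np

# A hypothetical matrix consistent with the characteristic equation above
# (diagonal 5 and 6, off-diagonal product 6); not necessarily the one from class.
A = np.array([[5.0, 2.0],
              [3.0, 6.0]])

vals, vecs = np.linalg.eig(A)
print(vals)  # the eigenvalues, 3 and 8, in some order

# Check the defining equation A v = lambda v for each eigenpair.
for lam, v in zip(vals, vecs.T):
    print(lam, np.allclose(A @ v, lam * v))
```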

I hope my students are having as much fun as I am!
