Every Convergent Sequence is Bounded
Suppose we have a sequence a_n which converges to L.
Apply the definition of convergence with e = 1: there is an integer N > 0 such that for all n > N we have,
|a_n - L| < 1. Why 1? The definition works for any positive number, so we are free to pick a convenient one.
Then adding and subtracting L and using the triangle inequality we have,
|a_n| = |a_n - L + L| <= |a_n - L| + |L| < 1 + |L| for all n > N.
Set M = max{|a_1|, |a_2|, ..., |a_N|, 1 + |L|}.
Then, for ALL positive integers n we have,
|a_n| <= M.
This shows the sequence is bounded.
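As a quick sanity check, here is a minimal Python sketch with a concrete sequence of my own choosing, a_n = 1 + (-1)^n/n, which converges to L = 1:

def a(n):
    return 1 + (-1)**n / n   # converges to L = 1

L = 1
N = 1  # here |a_n - L| = 1/n < 1 for every n > 1
M = max([abs(a(n)) for n in range(1, N + 1)] + [1 + abs(L)])
print(M)                                             # 2.0
print(all(abs(a(n)) <= M for n in range(1, 10**5)))  # True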
Saturday, August 9, 2014
Understanding The Binomial Probability Mass Function Formula
The probability mass function for a binomial random variable X is given by,
P(x) = nCx p^x * q^(n-x)
This is the probability of exactly x successes among n trials in a binomial experiment where,
p denotes the probability of success
q denotes the probability of failure
n denotes the number of trials
x denotes the number of successes
nCx is the number of ways to choose x objects from a group of n without regard to order
Ok so how does this formula make sense?
So we have n spots,
_ _ _ _ ... _ _ _
We want to choose x of them for our successes, and there are nCx ways to do that.
Ok, we want x successes; since the trials are independent, the multiplication rule gives,
p*p*...*p <--- x copies of p
p^x.
That leaves n - x spots, and those must all be failures,
q*q*...*q <--- n-x copies of q
q^(n-x)
Put it all together (nCx arrangements, each occurring with probability p^x * q^(n-x)), and we have
nCx p^x * q^(n-x)
Simplistic but it should help provide some insight!
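If you want to play with the formula, here is a minimal Python sketch (the function name is my own):

from math import comb   # comb(n, x) is nCx

def binomial_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)   # nCx * p^x * q^(n-x)

print(binomial_pmf(3, 5, 0.5))                           # 0.3125: exactly 3 heads in 5 fair flips
print(sum(binomial_pmf(x, 10, 0.3) for x in range(11)))  # ~1.0: the probabilities sum to 1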
Nice Convergence Proof
This is an interesting statement and proof I think.
Here is the statement.
Suppose that lim x->inf f(x) = L and that f(n) = a_n for every positive integer n. Then lim n->inf a_n = L.
Proof.
Suppose lim x->inf f(x) = L. Let e > 0, then there is a positive integer N such that for all x > N, we have,
|f(x) - L| < e.
In particular, for all positive integers n > N, we have,
|f(n) - L| < e
|a_n - L| < e.
Done.
How is this useful?
Well suppose you have a limit, like say,
lim n->inf n/e^n.
It is clear that the answer is 0 because the exponential function grows faster than any polynomial, but what if we wanted to show it? One option is L'Hopital's rule, but L'Hopital's doesn't literally apply to sequences (there is no derivative in an integer variable n), so we instead think of it as,
lim x->inf x/e^x = lim x->inf 1/e^x = 0, where we used L'Hopital's rule.
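Just to see the numbers (a quick sketch, not a proof):

import math

for n in [1, 5, 10, 20, 50]:
    print(n, n / math.exp(n))   # 50/e^50 is about 9.6e-21; the exponential wins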
It is important to note that the converse of our original statement is NOT TRUE.
Take a_n = sin(npi). Here lim n->inf sin(npi) = lim n->inf 0 = 0, but,
lim x->inf sin(xpi) DNE.
Therefore the moral here is: you can use L'Hopital's with sequences, but keep in mind everything should still make sense when your n is treated as a real number x. If it does make sense, feel free to be abusive and use L'Hopital's; most people do it, and if people do it then it's ok, right? :)
I hope this has helped someone out there on the internet!
A continuous function which is not differentiable, proof
Set f(x) = |x|.
First let's show it's continuous.
If x > 0, f(x) = x is continuous, and if x < 0, then f(x) = -x is also continuous. We just need to show it's continuous at x = 0.
Note,
lim x->0^+ |x| = lim x->0^+ x = 0 and lim x->0^- |x| = lim x->0^- (-x) = 0, whence
lim x->0 |x| = 0 = f(0), so continuity has been shown.
This function is not differentiable at zero. To see this, note,
lim x->0^- (f(x) - f(0))/(x - 0)
lim x->0^- (|x| - 0)/(x - 0)
lim x->0^- |x|/x
lim x->0^- (-x/x)
lim x->0^- (-1)
-1
Likewise,
lim x->0^+ (f(x) - f(0))/(x - 0)
lim x->0^+ (|x| - 0)/(x - 0)
lim x->0^+ |x|/x
lim x->0^+ (x/x)
lim x->0^+ 1
1
Therefore lim x->0 (f(x) - f(0))/(x - 0) DNE, so the function is not differentiable.
This is worth knowing how to do, it might save your life some day!
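Here are the left and right difference quotients in Python, shrinking h toward 0 (a sketch of the computation above):

f = abs

for h in [0.1, 0.01, 0.001]:
    left = (f(0 - h) - f(0)) / (0 - h)   # stays at -1
    right = (f(0 + h) - f(0)) / (0 + h)  # stays at 1
    print(h, left, right)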
Proof for the sum of the first n integers
Let,
S = 1 + 2 + 3 + ... + n
S = n + (n-1) + (n-2) + ... + 1
Now add,
2S = (n + 1) + (n + 1) + (n + 1) + ...+ (n + 1) <--- there are n terms here
2S = n*(n + 1)
S = n*(n + 1)/2
Done!
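A two-line check in Python, in case you don't trust the algebra:

n = 100
print(sum(range(1, n + 1)))  # 5050
print(n * (n + 1) // 2)      # 5050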
Permutation Formula Proof
If you have n distinct objects, the number of ways to arrange r of them is,
nPr = n!/(n - r)!
Most people memorize this formula, which is fine, there is nothing wrong with memorizing things, but most books do not prove this.
Let's prove it!
We have n distinct objects, and we want to arrange r of them, hence by the multiplication rule this can be done in,
n*(n-1)*(n-2)*...*(n - (r-1)) ways.
Notice we stopped at r-1 instead of r. To see why, think of it backwards:
n corresponds to 0
n - 1 corresponds to 1
n-2 corresponds to 2
.
.
.
n - (r-1) corresponds to r - 1
There are exactly r numbers here: 0, 1, 2, 3, ..., r-1.
Ok so now we just need to show that what we have above is the same as n!/(n-r)!.
We can "complete the factorial" if that's a real thing, I think it is, if it's not it should be:)
n*(n-1)*(n-2)*...*(n - (r-1)) =
n*(n-1)*(n-2)*...*(n - r + 1) =
n*(n-1)*(n-2)*...*(n - r + 1) * [(n-r)*(n-r-1)*...*3*2*1] / [(n-r)*(n-r-1)*...*3*2*1] =
n!/(n-r)!
That's it, we did it!
Hope this has helped someone out there in the world!
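Here is a small Python check of both sides of the identity; math.perm computes the same count directly:

from math import factorial, perm

def npr_product(n, r):
    result = 1
    for i in range(r):          # multiply n, n-1, ..., n-(r-1)
        result *= n - i
    return result

n, r = 10, 4
print(npr_product(n, r))                 # 5040
print(factorial(n) // factorial(n - r))  # 5040
print(perm(n, r))                        # 5040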
Proof of the nth term test
The nth term test basically says that if an infinite series SUM(a_n) converges, then lim a_n = 0.
The proof is almost immediate. Suppose our series converges to say S and let S_n denote the nth partial sum.
Then,
lim a_n = lim (S_n - S_(n-1)) = S-S = 0. That's it!
Note this is typically NOT how the nth term test is used. Usually in calculus books and in courses people use the contrapositive; that is,
IF lim a_n != 0 (or the limit does not exist), then the series SUM(a_n) diverges.
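For example, a quick sketch of the contrapositive in action on SUM(n/(n+1)), whose terms tend to 1 rather than 0:

terms = [n / (n + 1) for n in range(1, 10001)]
print(terms[-1])   # ~0.9999: the terms do NOT go to 0
print(sum(terms))  # already ~9990 by n = 10000, and it keeps growing, so the series diverges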
Every Differentiable Function is Continuous, Proof
Everyone knows this, but I bet a lot of people can't prove it! I'm also sure that plenty of people can, but just in case you have never seen the proof, here it is anyways!
Ok so suppose f is differentiable at c.
This means that lim x->c (f(x) - f(c))/(x-c) exists, and the derivative of f at c is this limit; i.e.,
f'(c) = lim x->c (f(x) - f(c))/(x-c).
Ok so now we need to show f is continuous at c, so we need to show lim x->c f(x) = f(c).
Instead we will show that lim x->c (f(x) - f(c)) = 0 and then it will follow from there.
Well the trick is to just rewrite the above limit and somehow use what we already have.
lim x->c (f(x) - f(c))=
lim x->c (f(x) -f(c))/(x-c) * (x-c) =
(lim x->c (f(x) - f(c))/(x-c)) * lim x->c (x-c) = f'(c)*0 = 0.
Hence we showed that,
lim x->c (f(x) - f(c)) = 0.
Then,
lim x->c f(x)
lim x->c (f(x) - f(c) + f(c)) <--- just adding 0 which is the same as -f(c) + f(c)
lim x->c (f(x) - f(c)) + lim x->c f(c) <--- since both limits exist we can do this
0 + f(c)
f(c)
Done!
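Here is the trick from the proof in numbers, with the sample function f(x) = x^2 at c = 1 (my own choice of example):

f = lambda x: x**2
c = 1.0

for x in [1.5, 1.1, 1.01, 1.001]:
    dq = (f(x) - f(c)) / (x - c)   # tends to f'(c) = 2
    print(x, dq, f(x) - f(c))      # f(x) - f(c) = dq * (x - c) tends to 2 * 0 = 0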
A Set of Cardinality n has 2^n subsets
This is the easiest way I know how to do it.
Let S be a set with cardinality n.
Recall that nCk is the number of k-element subsets of an n-element set.
Then to find the total number of subsets, we can just literally add them all up.
So we will add up,
All 0-element subsets, there are nC0 of these.
All 1-element subsets, there are nC1 of these.
All 2-element subsets, there are nC2 of these.
All 3-element subsets, there are nC3 of these.
.
.
.
All (n-1)-element subsets, there are nC(n-1) of these.
All n-element subsets, there are nCn of these.
So we have,
total number of subsets = nC0 + nC1 + nC2 + nC3 + ... + nC(n-1) + nCn
On the other hand, via the binomial theorem we have,
2^n = (1 + 1)^n = SUM(nCk * 1^k * 1^(n-k), k = 0, 1, ..., n) = nC0 + nC1 + nC2 + nC3 + ... + nC(n-1) + nCn.
Done!
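Both counts are easy to check in Python, and itertools will even list the subsets:

from math import comb
from itertools import combinations

n = 5
print(sum(comb(n, k) for k in range(n + 1)))  # 32
print(2**n)                                   # 32

subsets = [c for k in range(n + 1) for c in combinations(range(n), k)]
print(len(subsets))                           # 32: all subsets, listed by size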
Why .999... = 1, A Proof
Recall for any convergent geometric series with common ratio r (so |r| < 1), we have,
SUM( ar^k, k = j, j + 1, j + 2, ...) = ar^j/(1-r).
.999... =
.9 + .09 + .009 + ... =
9/10 + 9/100 + 9/1000 + ... =
9/10 + 9/10^2 + 9/10^3 + ... =
SUM( 9*(1/10)^k, k = 1, 2, ...) =
9*(1/10)/(1-1/10) =
(9/10) / (9/10) = 1
Done!
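Watching the partial sums with exact fractions (a quick sketch):

from fractions import Fraction

partial = Fraction(0)
for k in range(1, 8):
    partial += Fraction(9, 10**k)
    print(k, partial, 1 - partial)   # the gap 1 - partial is exactly 1/10^k, shrinking to 0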
Always Triple Your Bet?
Several years ago I worked with this really funny older guy who was really into money, gambling, and girls. He eventually quit, got a better job, a beeper (yes, this was a LONG time ago), and a new car. He had become a high roller in my mind. He would always say, "triple your bet, you can't lose!".
Anyways let's test his theory. Suppose we play a game where if you win, you double your money.
For example,
if you bet 1 penny then you win 1 penny
if you bet 4 pennies then you win 4 pennies
etc...
Ok so now let's take a fake scenario. Suppose you,
Bet 1 penny, lose, so TRIPLE your bet
Bet 3 pennies, lose, so TRIPLE your bet
Bet 9 pennies, lose, so TRIPLE your bet
Bet 27 pennies, lose, so TRIPLE your bet
Bet 81 pennies, WIN, so now you win 81 more pennies.
Ok so how much did you lose in the previous games?
1 + 3 + 9 + 27 = 40.
But you won 81 pennies, so your total profit is then 81 - 40 = 41 pennies.
This works every single time. Try it, take some pennies or a piece of paper and a calculator and you will notice that if you ALWAYS triple your bet, you will always eventually end up a winner.
So why don't people do this? According to this "cool" guy I used to know, casinos set betting limits at the tables, and if they notice this kind of behavior you will eventually be kicked out. The deeper catch is that the bets grow like powers of 3: after just 7 straight losses the next bet is 3^7 = 2187 pennies, and a long enough streak blows past any table limit or finite bankroll.
So it NEVER fails, as long as you have infinite money and a casino with no limits :)
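If you want to see the finite-bankroll problem in action, here is a rough simulation (my own setup: roulette-style odds of 18/38 on an even-money bet, and a 1000-penny bankroll):

import random

def one_run(bankroll=1000, p_win=18/38):
    bet = 1
    while bet <= bankroll:           # triple after every loss until a win or a bust
        if random.random() < p_win:
            return bankroll + bet    # the win recoups every loss, plus a profit
        bankroll -= bet
        bet *= 3
    return bankroll                  # streak outgrew the bankroll: a big loss

random.seed(0)
runs = [one_run() for _ in range(100_000)]
print(sum(runs) / len(runs))  # averages below 1000: rare long streaks eat the frequent small wins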
Friday, August 8, 2014
Continuous Bounded Derivative
If f(x) has a continuous bounded derivative (say M is a bound), then by the Mean Value Theorem f(x)
- f(y) = f'(c)(x - y) for some c between x and y. Taking absolute values yields
|f(x) - f(y)| <= M|x - y|, so f is Lipschitz of order one.
Random yes, but Lipschitz is fun to say!
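For example, sin has derivative cos, which is bounded by M = 1, so |sin x - sin y| <= |x - y| should hold everywhere. A quick random spot check:

import math, random

random.seed(0)
pairs = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(10_000)]
print(all(abs(math.sin(x) - math.sin(y)) <= abs(x - y) + 1e-12 for x, y in pairs))  # True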
Intersection of Open Nested Sets which is Empty
Put a_n = (0, 1/n), then a_(n+1) is a subset of a_n for each n >= 1.
Now set I = INTERSECTION(a_n, n = 1, 2, 3, ...).
If I is not empty, there is x in I, so x is in a_n for each n, so 0 < x < 1/n for each n. By the squeeze principle x = 0, a contradiction.
Someone asked me this a while ago and this is the answer I gave them. The whole point of this is that if the a_n's were closed and bounded and nested, the intersection would be nonempty.
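Numerically, the point is just the Archimedean property: any candidate x > 0 falls out of (0, 1/n) once n is large enough. A tiny sketch:

import math

for x in [0.5, 0.01, 1e-9]:
    n = math.floor(1 / x) + 1   # the first n with 1/n <= x
    print(x, "is not in (0, 1/%d)" % n)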
H u K is a subspace of V iff H is contained in K or K is contained in H
Here, V is a vector space and H and K are subspaces.
If H is contained in K or K is contained in H, then H u K = K or H u K = H and in any case this is a subspace of V.
Now if H is not contained in K and K is not contained in H, then there is k in K\H and h in H\K.
Now both h, k lie in H u K. So let's look at h + k. If this were in H u K, it would be in H or in K. If it's in H, then k = (h + k) - h is in H, a contradiction. If it's in K, then h = (h + k) - k is in K, a contradiction. In any case we reach a contradiction so H u K is not closed under addition. In particular this means it cannot be a subspace.
A similar statement holds for ideals in a ring and subgroups of a group.
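Concretely in R^2, take H = the x-axis and K = the y-axis (a minimal sketch):

h = (1, 0)                        # in H but not in K
k = (0, 1)                        # in K but not in H
s = (h[0] + k[0], h[1] + k[1])    # h + k = (1, 1)
print(s[1] == 0 or s[0] == 0)     # False: h + k is in neither axis, so H u K isn't closed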
Union of a Family of Nested Ideals
Here, we have a family of ideals {I_j} in a ring R, where j runs through the positive integers, and the ideals are nested:
I_1 >= I_2 >= I_3 >= ...
(The argument below only uses that any two ideals in the family are comparable, so it works word for word for an ascending chain as well, which is the more interesting case: with a descending chain the union is just I_1.)
Let's try to show that I = UNION(I_j, j in A) is an ideal.
I is nonempty since each I_j contains 0.
Take r in R and x in I. Then x is in some I_j, so rx is in I_j because I_j is an ideal, and hence rx is in I.
Take x in I, so x lives in some I_j. Then -x is in I_j, so it's also in I, so I is closed under additive inverses.
Now the more interesting part.
Take x, y in I. Then x is in some I_j and y is in some I_k.
If j = k then x + y is in I_j and hence in I and we are done.
Assume wlog that j > k. Then I_j <= I_k, so x is in I_k. Thus both x and y are in I_k hence so is x + y, and so x+y is in I as well, done.
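A finite-window illustration in the ring Z, using the ascending chain 8Z <= 4Z <= 2Z (membership tests only; a genuine ideal is infinite, so this is just a sketch):

chain = [8, 4, 2]                               # I_1 = 8Z <= I_2 = 4Z <= I_3 = 2Z
window = range(-100, 101)
in_union = lambda x: any(x % d == 0 for d in chain)

U = [x for x in window if in_union(x)]          # the union, inside our window
print(all(in_union(x + y) for x in U for y in U if x + y in window))  # True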
Vector Space Basis Criterion
Here V is a vector space over some field F and B a finite subset of V.
I will try this off the top of my head; I do not have a linear algebra book handy and I am putting together this statement from random facts, so be wary!
TFAE
1. B is a basis for V (i.e., B spans V and B is linearly independent).
2. Every vector in V can be written as a unique linear combination of elements of B.
3. B is a maximal linearly independent subset of V.
1 => 2
Say 1 holds. Since B spans V we can write every vector in V as a linear combination of elements of B with coefficients coming from F. Take x in V and say we have,
x = a_1v_1 + ... + a_nv_n
x = b_1v_1 + ... + b_nv_n.
Then 0 = x -x = (a_1-b_1)v_1 + ... + (a_n-b_n)v_n. The independence of B implies a_i - b_i = 0, so a_i = b_i for all i. This shows uniqueness.
2 => 1
Assume 2 holds. In particular this means B spans V. Now suppose a_1v_1 + ... + a_nv_n = 0 for some a_i in F, v_i in B. Now 0 is in V, so 0 has a unique representation as a linear combination of elements of B with coefficients in F. But 0 = 0*v_1 + ... + 0*v_n, so uniqueness implies a_i = 0 for all i, so we get independence.
3 => 1
Assume B is a maximal linearly independent subset. We only need to show B spans V. So take x in V. Consider the set B u {x}. Now B u {x} contains B, and since B is maximal, B u {x} cannot be independent, hence there are a_1, ..., a_m in F, not all 0, such that a_1v_1 + ... + a_(m-1)v_(m-1) + a_mx = 0, where the v_i are in B. Now a_m != 0. (If a_m were 0, the relation would only involve elements of B, so the independence of B would force a_i = 0 for all i, contradicting that not all of them are 0.)
Then x = -(a_m)^-1 * a_1v_1 - ... - (a_m)^-1 * a_(m-1)v_(m-1) is in the span of B, so B spans V.
1 => 3
Suppose B is a basis. Suppose there is some linearly independent set S which properly contains B. Then there is some x in S\B. Since B is a basis, x = a_1v_1 +... + a_nv_n for some a_i in F and v_i in B. Then, 1*x - a_1v_1 - ... - a_nv_n = 0 and all of these vectors are in S since S contains B. By the independence of S, all the coefficients are 0, in particular 1 = 0, a contradiction. Thus no such S can exist and so B is a maximal linearly independent subset.
So we showed 1 <=> 2 and 1 <=> 3, so 1 <=> 2 <=> 3, and that's it.
I hope there are no mistakes!
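The 1 => 2 direction is what numpy quietly relies on when you ask for coordinates: with a basis as columns, the coefficients are unique. A small sketch with a basis of R^2 of my own choosing:

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])      # columns v_1 = (1, 0), v_2 = (1, 2) form a basis of R^2
x = np.array([3.0, 4.0])

c = np.linalg.solve(A, x)       # the unique coefficients with c_1*v_1 + c_2*v_2 = x
print(c)                        # [1. 2.]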
The Uniform Limit of a Sequence of Continuous Functions
Ok so this is an absolutely beautiful proof.
Suppose {f_n} is a sequence of, say, real or complex valued continuous functions. We could even take the f_n to be functions from R^n -> R, where it's understood that | | means the usual Euclidean norm. Suppose also that f_n -> f uniformly on some set, say A.
Let us show f is continuous.
So take some x_0 in A and let e > 0.
Since f_n->f uniformly on A, there is a positive integer N such that for all n > N we have |f_n(x)-f(x)|< e/3 for all x in A.
Fix m > N. Since f_m is continuous at x_0, there is a delta > 0 such that for all x in A with |x-x_0| < delta we have |f_m(x) - f_m(x_0)| < e/3.
Then for all x in A with |x-x_0| < delta, we have by the triangle inequality
|f(x)-f(x_0)| <= |f(x) - f_m(x)| + |f_m(x) - f_m(x_0)| + |f_m(x_0) - f(x_0)| < e/3 + e/3 + e/3 = e.
Done!
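A numerical non-example shows why uniformity is the key hypothesis: f_n(x) = x^n on [0, 1] converges pointwise to a discontinuous limit, so by the theorem the convergence cannot be uniform, and indeed the sup error refuses to shrink (a sketch):

import numpy as np

xs = np.linspace(0, 1, 10_001)
f_limit = np.where(xs < 1, 0.0, 1.0)   # pointwise limit: 0 on [0, 1), 1 at x = 1

for n in [10, 100, 1000]:
    print(n, np.abs(xs**n - f_limit).max())   # stays near 1 instead of tending to 0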
Groups G such that g^2 = e for all g in G.
If G is a group and g^2 = e for all g in G, then every element is its own inverse.
Hence, xy = (xy)^-1 = y^-1x^-1 = yx so G is abelian.
This basically means that anytime we have a group in which g^2 = e for every g, we know G is abelian.
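You can see it concretely in the Klein four-group, realized as {0, 1, 2, 3} under bitwise XOR (a quick check):

G = [0, 1, 2, 3]                                         # identity e = 0
op = lambda a, b: a ^ b                                  # XOR as the group operation
print(all(op(g, g) == 0 for g in G))                     # True: g^2 = e for every g
print(all(op(a, b) == op(b, a) for a in G for b in G))   # True: abelian, as proved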
If A^2 = I for a matrix A, then A is diagonalizable
So I'm at Walmart checking out the prices of frozen pizza, and the cute older woman is next to me. She turns to me, smiles, and says "Is it true that if A^2 is the identity matrix, then A is also diagonalizable?". Ok so that wasn't funny and that's not how it goes, but this is a VERY useful fact that just comes up so often, at least in the world of mathematics, that it makes it worth mentioning. Here we go.
If A is a matrix such that A^2 = I, then A^2 - I = 0. Set f(x) = x^2 - 1 = (x - 1)(x + 1). Then f(A) = 0 directly from A^2 = I (no Cayley-Hamilton needed), so the minimal polynomial of A, say p, divides f. In particular p has no repeated roots (the roots 1 and -1 are distinct, at least away from characteristic 2), hence A is diagonalizable.
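For instance, with numpy over the reals (a sketch):

import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])                # A^2 = I
print(np.allclose(A @ A, np.eye(2)))      # True

vals, V = np.linalg.eig(A)
print(np.sort(vals))                                          # [-1.  1.], the roots of x^2 - 1
print(np.allclose(V @ np.diag(vals) @ np.linalg.inv(V), A))   # True: A = V D V^-1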
HK = KH iff HK is a subgroup of G
Ok so I'm bored and it's late. Time for some random intellectual stimulation. I get enough of that to be honest, and I prefer dumb Family Guy episodes or binge-watching Netflix, but here it is anyways. The setup is as follows.
G is a group, and H and K are subgroups of G.
Proposition.
HK is a subgroup of G if and only if HK = KH.
Proof.
Suppose HK is a subgroup of G. Take x in HK, so x = hk, for some h in H and k in K.
Since HK is a subgroup, x^-1 is in HK. Now x^-1 = (hk)^-1 = k^-1h^-1 and this is in KH since k^-1 is in K and h^-1 is in H because K and H are each closed under inverses. This shows HK is a subset of KH. To show KH is a subset of HK the idea is exactly the same, hence omitted!
Conversely, suppose HK = KH. We need to show HK is a subgroup of G. Well first notice e = e*e is in HK because e is in H and e is in K. Next, take x in HK. Since HK = KH, x is in KH, so x = kh for some k in K and h in H. Then x^-1 = h^-1k^-1 and this is in HK because both H and K are closed under inverses. This shows HK is closed under inverses. Finally, suppose x, y are in HK.
This means x = hk and y = h'k' for some h, h' in H and k, k' in K. Then xy = hkh'k'. Now, kh is in HK but HK = KH, so kh = h''k'' for some h'' in H and k'' in K, hence xy = hkh'k' = hh''k''k' and this is in HK because both H and K are closed under the group operation. This shows HK is closed under the group operation and hence HK is a subgroup.
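A tiny check in S_3, where this famously fails (permutations of {0, 1, 2} stored as tuples; my own encoding):

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))   # (p*q)(i) = p(q(i))

e = (0, 1, 2)
H = [e, (1, 0, 2)]   # the subgroup generated by the transposition (0 1)
K = [e, (2, 1, 0)]   # the subgroup generated by the transposition (0 2)

HK = {compose(h, k) for h in H for k in K}
KH = {compose(k, h) for k in K for h in H}
print(HK == KH)   # False
print(len(HK))    # 4, which doesn't divide |S_3| = 6, so HK can't be a subgroup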