O Sweet Mr Math

wherein is detailed Matt's experiences as he tries to figure out what to do with his life. Right now, that means lots of thinking about math.

Thursday, August 30, 2012

10:35 PM

So, continuing my discussion of the big picture of real analysis, I ended last time talking about sequences of functions, and the idea that under the right conditions (the sequence is uniformly convergent, or the sequence of derivatives is uniformly convergent), the function which is the limit of the sequence shares the important properties of the members of the sequence. This means that if you have a function which is computationally ugly, you can potentially rewrite it as the limit of a sequence of functions which are easy to work with.

There are two important examples of sequences of functions which converge uniformly. The first is polynomials. Polynomials are easy to work with, so if you can rewrite an ugly function as a polynomial, you can turn hard problems into easy problems. In particular, a Taylor series is built from a sequence of polynomials which approximate a function. Taylor series have important limitations, in that the original function must be infinitely differentiable and Taylor series do not always converge, but when they do work, they are a powerful and convenient tool.
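
As a small illustration (my own example, not from the post), here are the Taylor polynomials of e^x at 0 closing in on the true value:

```python
import math

# Taylor polynomials for e^x about 0: the degree-n polynomial is
# the sum of x^k / k! for k from 0 to n. As n grows, the value
# closes in on e^x, and on a bounded interval the convergence
# is uniform.
def taylor_exp(x, n):
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

for n in [1, 2, 4, 8]:
    print(f"degree {n}: {taylor_exp(1.0, n):.6f}  (e = {math.e:.6f})")
```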

The second example of a sequence of functions which converges uniformly is the Fourier series. Given a function which is bounded and periodic (meaning it repeats itself), you can, under the right conditions, write the function as a sum of sine and cosine functions. Sines and cosines are difficult to evaluate, but they are easy to work with (easy to differentiate and integrate, for example). Their smoothly undulating curves are also pretty. It's aesthetically appealing to be able to convert an angular, sharp-edged function into a sum of beautiful waves.
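
To make that concrete, here's a sketch with my own example, the triangle wave |x| on [−π, π]: angular at its corners, but a uniform limit of smooth waves, because its Fourier coefficients shrink like 1/k².

```python
import math

# Fourier partial sums for the triangle wave f(x) = |x| on [-pi, pi]:
# f(x) = pi/2 - (4/pi) * sum over odd k of cos(k x) / k^2.
# The coefficients shrink like 1/k^2, fast enough that the partial
# sums converge uniformly to the triangle wave.
def triangle_partial(x, terms):
    s = math.pi / 2
    for j in range(terms):
        k = 2 * j + 1
        s -= (4 / math.pi) * math.cos(k * x) / k ** 2
    return s

x = 1.0  # the true value of the triangle wave here is |1.0| = 1.0
for terms in [1, 2, 5, 20]:
    print(f"{terms} terms: {triangle_partial(x, terms):.6f}")
```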

I've previously encountered Fourier series in other contexts, and while I studied the basic math to some extent, there was an element of, "we are justified in using Fourier series because they give the correct results in practice." I found it personally satisfying to come back to them and be able to say that we are mathematically justified in using them because we can mathematically prove that they give the expected results.

This is as far as I've gone in studying real analysis. It all comes back to using limits as a tool to say that these are the conditions under which we are allowed to do certain mathematical operations, and these are the conditions under which the operations will fail. Along the way, I started with the concept of sequences of numbers, and eventually extended that idea to sequences of functions. The fact that functions can be inserted in a place where I expected to use numbers has also had the effect of changing how I think about functions in general.

Real analysis goes on from here, leading to questions like "what is the mathematical definition of length?" (Think about it. A line segment has some length. But a line is made up of points, and points have no length. So where does the length of the line come from?) This leads to questions like, "can you have a set of points which is not a line but which also has a length?" I'm interested in these questions, which start to have a metaphysical significance, but I'm happy to stop here with my current studies for now.


Wednesday, August 29, 2012

9:43 PM

I want to continue my thoughts about real analysis from last time. As I said then, the central concept behind real analysis is the limit. But limits aren't the point of real analysis, they are a tool. So what can we do with limits?

First, we can determine continuity. Continuity is the idea that there are no breaks or gaps in something. As applied to functions, continuity means that there are no sudden jumps in the value of the function. In other words, if a particular input produces a particular output, any inputs near the original input will have outputs near the original output. Or mathematically, a function is continuous at a point if the limit of the function at that point equals the value of the function at that point. Being "near" a value is sort of a nebulous concept, and I'm not going to define it precisely here. I will say that being "near" an output at a particular input does have a precise definition, but that the definition depends on the function and also on the input value.
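
As a concrete sketch (my own toy example, with a brute-force search standing in for the usual ε-δ bookkeeping): for f(x) = x² at the input 2, given an output tolerance ε, we can hunt for an input tolerance δ so that staying within δ of the input keeps us within ε of the output.

```python
def f(x):
    return x * x

x0, eps = 2.0, 0.01  # check continuity of f at x0 with output tolerance eps

# Search for an input tolerance delta: every sampled x within delta
# of x0 should give f(x) within eps of f(x0). Halve delta until the
# worst sampled case satisfies the bound.
delta = 1.0
while True:
    xs = [x0 + delta * k / 1000 for k in range(-1000, 1001)]
    if all(abs(f(x) - f(x0)) < eps for x in xs):
        break
    delta /= 2

print(f"delta = {delta} works for eps = {eps} at x0 = {x0}")
```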

Starting with the statement that a function is continuous at a particular point, we can extend this to say that the function is continuous everywhere. Sometimes we can only say that a function is continuous at most places, but there are some places where it is not continuous. This is nearly as useful as being continuous everywhere. We may also be able to say that a function is uniformly continuous, in which case we can use the same definition for "near" for every point in the domain of the function.

Why do we care about continuity? If we know that a function is continuous, we know a lot about that function. There are a bunch of important theorems from calculus, such as the intermediate value theorem and the mean value theorem, which depend on the continuity of the function. Calculus uses these theorems, but often does not prove them. A calculus textbook will say something like, "this looks like it should be true, and it is true, but for the proof, check an analysis book." Continuous functions are easier to work with than discontinuous functions. The bad news is that most functions are discontinuous, so proving continuity is an important step for working with a function.

Continuity leads to calculus. Again using limits as a tool, we can demonstrate that differentiation and integration actually work. Both concepts are based on approximation, and there's an assumption that the approximations are actually meaningful. Using limits, analysis proves that the approximations are correct.

When approaching limits using sequences, derivatives are based on a sequence of points. Starting with a fixed input point, take a sequence of points approaching that point; the limit of the difference quotients (the change in outputs divided by the change in inputs) between those points and the fixed point is the derivative of the function. Integrals are based on a sequence of subdivisions. Subdivide the total area into a set of smaller areas which approximate the original. As the subdivisions become finer, the limit of the approximations of the area is the true area, or the integral.
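
Here is a minimal numeric sketch of both limits (my own example, f(x) = x² on [0, 1], where the derivative at 1 is 2 and the area is 1/3):

```python
def f(x):
    return x * x

# Derivative at x0 = 1: difference quotients along a sequence of
# points approaching x0. The quotients approach f'(1) = 2.
x0 = 1.0
for k in range(1, 6):
    h = 10 ** -k
    print("difference quotient:", (f(x0 + h) - f(x0)) / h)

# Integral on [0, 1]: Riemann sums over ever finer subdivisions.
# The sums approach the true area, 1/3.
for n in [10, 100, 1000, 10000]:
    total = sum(f(i / n) * (1 / n) for i in range(n))
    print(f"Riemann sum with {n} pieces:", total)
```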

You can also take the limit of a sequence of functions, and this is where the real fun begins. Start with some function which is hard to work with. Maybe there's no way to directly find the value of the function for a particular input, or you can do it but it's too much work. You can approximate the original function with another function that is easier to use. If you have a sequence of these approximate functions, you may be able to show that the limit of the sequence equals the original function. This justifies using one of the approximations instead of the original function.

How do you prove that these approximations are actually good, useful approximations? With limits, of course. Looking at the continuity of a function, we distinguished between ordinary continuity, which means that if two inputs are near each other then their outputs will also be near each other, where the definition of "near" depends on the inputs, and uniform continuity, which states that "near" has the same definition for every input. Likewise, we can define uniform convergence for a sequence of functions.

Take a particular function in the sequence. This function approximates the original function, but is not exactly the same. The difference at a particular input between the approximation and the original is called the error of the approximation at that point. If every function in the sequence has a maximum error, and the limit of the maximum errors is 0 as you progress through the sequence, then the sequence converges uniformly. (For a counterexample, picture a function which goes to infinity at one point. Then imagine a sequence of functions which are generally close to the original function at all other points, but have a finite value at that point. If that value grows with each function in the sequence, the limit of the sequence will be the same as the original function, but the error at that point will be infinite for every function in the sequence, so the convergence is not uniform.)
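
For contrast, here's a tamer sketch than the blow-up example (my own test functions): one sequence whose maximum error refuses to shrink, and one whose maximum error goes to 0.

```python
# Two sequences of functions on [0, 1) that converge pointwise to
# the zero function (my own illustrative examples). For f_n(x) = x^n
# the supremum of the error over [0, 1) is 1 for every n (take x
# close enough to 1), so the maximum error never shrinks and the
# convergence is not uniform. For g_n(x) = x / n the maximum error
# is 1/n, which goes to 0, so the convergence is uniform.
for n in [1, 10, 100, 1000]:
    x = 1 - 1 / (10 * n)  # a sample point close to 1
    print(f"n={n}: error of f_n at x={x:.5f} is {x ** n:.3f}; max error of g_n is {1 / n}")
```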

You can use limits to show that if every function in a sequence is continuous and the sequence converges uniformly, then the function they converge to must be continuous. Likewise, for a sequence of functions which converges uniformly, the limit of the sequence of integrals equals the integral of the limit. However, even if a sequence converges uniformly, the limit of the sequence of derivatives does not necessarily equal the derivative of the limit of the sequence. This is one of those cases where it's easy to detect a pattern and assume it continues; analysis and the application of limits show where your expectations break down. Applying the same techniques leads to the good news that if the derivatives converge uniformly to a function, then that function is the derivative of the limit. In the case of derivatives, the question isn't whether the functions in the sequence converge uniformly. It's whether the derivatives of the functions in the sequence converge uniformly.
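
A classic illustration of the derivative caveat (my own choice of example): the functions converge uniformly, but their derivatives run away.

```python
import math

# f_n(x) = sin(n^2 * x) / n converges uniformly to the zero function,
# since its maximum error is 1/n. But its derivative is n * cos(n^2 * x),
# whose values grow without bound, so the sequence of derivatives does
# not converge to the derivative of the limit (which is 0 everywhere).
for n in [1, 10, 100]:
    max_f = 1 / n                 # sup of |f_n| over the whole line
    deriv_at_0 = n * math.cos(0)  # f_n'(0) = n, growing without bound
    print(f"n={n}: max|f_n| = {max_f}, f_n'(0) = {deriv_at_0}")
```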

There's a big payoff to sequences of functions and uniform convergence, but this post has gone on long enough, so it will have to wait until tomorrow.


Tuesday, August 28, 2012

4:18 PM

I had been posting about real analysis. I worked through the definition of the real numbers, then moved on to topology and sets, and finally had started working on sequences and series. Then I stopped. Truthfully, when I started covering this material I hadn't intended to go into quite so much detail, but working through the details came pretty naturally to me. I had also been intending to keep pace with my own studying. I ended up falling behind, and then I ran out of time for any blogging at all.

In the meantime I kept studying, and now I have worked through two semesters' worth of analysis. While I'd like to return to blogging in detail, I've decided to take advantage of the pause to step back and look at the larger picture, now that I have enough understanding to see it.

The fundamental idea behind real analysis is the limit. Limits are a powerful tool for understanding lots of math concepts, and real analysis is about developing the use of limits and then applying them to various problems.

Loosely speaking, the idea behind limits is that two things are near each other. The things in question could be numbers, or points in space, or sets, or functions. There's a precise mathematical definition for limits, which involves Greek letters (and causes some people to run in terror), but today I just want to talk about the general concept.

In many mathematical contexts, the standard is exact equality. High school algebra is all about showing that the left hand side of an equation is exactly equal to the right hand side. With limits, we say that two things are not exactly the same, but that's okay as long as they are near each other. This can feel like it's a step back from true equality, and it can also feel unfocused.

But there's a tradeoff. Equality can only say that this thing is exactly the same as this other thing. Limits can let you say that everything near this thing is close to everything near this other thing. The ability to speak about lots of things at the same time gives limits more power than strict equality has.

You may be wondering why, if analysis is all about limits, I spent months blogging about sets and sequences. I did not know the answer at the time, but now I do. Just like limits are a tool used by analysis to talk about other things, we need tools to talk about limits. The first tool is sets and topology. One of the fundamental concepts of topology is distance, which gives us the ability to talk about whether two things are near each other. The theory of sets we developed, for example the properties of compact sets, gives us tools to talk about limits.

Similarly, sequences give us different tools to talk about limits. The important thing here is that although sets and sequences give us different tools, they come to the same conclusions. Anything that can be demonstrated about limits using sets can also be demonstrated using sequences, and which one to use is just a question of convenience. This equivalence can also be used for sets and sequences to say things about each other, so using both tools allows us to get a deeper understanding of each tool individually.

I plan to have another post soon in which I will talk about what limits are useful for, again at a big picture level. I may also post about the big picture with sequences and series. My introduction to the concepts of sequences and series was in Calculus 2, and the idea has always felt a little half-baked. Now that I'm looking at them from the other side of analysis, I have a much better understanding of why we study them the way we do.


Monday, May 07, 2012

9:45 PM

We've defined convergence of a sequence. A sequence (in any metric space) converges if there is a point in that space, called the limit of the sequence, such that the points in the sequence stay arbitrarily close to the limit beyond a certain step in the sequence. If you tell me how close you want to be, I can always tell you the minimum step in the sequence to guarantee that you will be that close.

We can basically run this idea backwards, and say that for any point which is not the limit of the sequence, there exists a distance from that point such that there are always some future points in the sequence which are at least that far away. Okay, that was too many words. Let me try symbols. a_n is some sequence with limit L. Then for any distance ε, there exists some start point N, such that for any n≥N, d(a_n,L)<ε. You can choose any positive real number for ε, and then you can always find the minimum value of N so the distance inequality holds. Now, suppose p is a point other than L. Then there always exists some ε, such that for any start point N, some n≥N exists such that d(a_n,p)>ε.

For L, every ε has one fixed N such that every n≥N works. For p, in contrast, there is some fixed ε so every N has at least one n≥N which does not work. For L, you are free to choose ε and n, but N is fixed. For p, you are free to choose N, but ε and n are fixed. I think the reversals are interesting on their own merits, but they're also important to keep in mind if you are setting out to prove that a particular point either is or is not the limit of a sequence.
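
Side by side, with the quantifiers written out, the two statements are mirror images of each other:

L is the limit: (∀ε>0)(∃N)(∀n≥N) d(a_n,L)<ε.
p is not the limit: (∃ε>0)(∀N)(∃n≥N) d(a_n,p)>ε.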

So what happens if every point behaves like p? For a given sequence, assume that you can show that for any possible point in the space, the sequence spends some time away from that point. In general, these are not useful sequences. Sequences are useful because we can say what they do in the long run. What we like to say is that they converge to a limit. If no point in the space is the limit of the sequence, all we can really say is that the sequence doesn't converge.

However, if the sequence is a sequence of real numbers, we can sometimes say a little more. If the terms of the sequence tend to increase, then the sequence may not converge, but we can still speak of it having a limit. One way of expressing "tends to increase" concretely is to say that for any real number, the sequence eventually gets larger than and stays larger than that number. Pick any real number P. If, based on that choice, we can find an N such that for all n≥N, a_n>P, then we can say that the sequence does not converge, but it has limit +∞. Draw a number line. Start labeling points a_1, a_2, a_3, and so on. The sequence does not have to strictly increase, so a_2 can be less than a_1, but it has to increase in general. If you choose any point on the number line and call it P, then the sequence must eventually stay on the right side of P. If you choose a bigger value for P, you would expect the sequence to cross to the right of P later, but for any possible value of P, the sequence must eventually cross to the right side and never cross back. In this case, we can say that the limit of the sequence is positive infinity.
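
A quick sketch of the definition in action (my own example sequence, which wobbles but still heads to +∞):

```python
# a_n = n/2 + (-1)^n tends to +infinity without strictly increasing.
# For any target P, there is an N such that a_n > P for every n >= N:
# past the last index where a_n dips to P or below, the sequence
# never crosses back.
def a(n):
    return n / 2 + (-1) ** n

def find_N(P, search_limit=10 ** 5):
    # Find the last index where a_n <= P; every later index works.
    last_bad = 0
    for n in range(1, search_limit):
        if a(n) <= P:
            last_bad = n
    return last_bad + 1

for P in [10, 100, 1000]:
    print(f"P={P}: a_n > P for every n >= {find_N(P)}")
```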

Just to clarify, +∞ is not a real number, and the sequence does not converge. However, we can still speak of the sequence having a limit of +∞. If a sequence converges, that means that it eventually gets close to a specific point. If it has limit +∞, that means it eventually gets really big. How big? As big as you want. When will it get there? All we promise is that it will get there eventually, but that after that, it will just keep growing.

Sequences of real numbers can also have a limit of −∞, meaning the same thing in reverse. The sequence travels left on the number line as n increases. Infinite limits apply only to sequences of real numbers. Sequences of complex numbers, for example, can get big, but they can get big in lots of different ways, so it's hard to speak of such a sequence having a limit. However, if a_n is a sequence of complex numbers, then |a_n| is a sequence of real numbers, and it may be useful to speak of |a_n| having an infinite limit, even if a_n does not have a limit.


Saturday, May 05, 2012

7:09 PM

In the novel Bitterblue, by Kristin Cashore, one character has a watch which is divided into 15 hours, each of which has 50 minutes. The novel has a brief discussion of how to convert time on that watch to standard time, and I'd like to look at it in a little more detail.

Like a conventional watch, which shows 12 hours of 60 minutes, the watch in Bitterblue shows half a day. However, the number of hours it shows, and therefore their lengths, are different. One day has 24 standard hours, but 30 watch hours. (I will refer to times and durations on the watch as w-hours, w-minutes, etc. for clarity.) Therefore, there are 4 hours in 5 w-hours, or 1 w-hour = 4/5 hour (or 48 minutes). With 60 minutes in an hour, there are 1440 minutes in a day, but since there are only 50 w-minutes in a w-hour, there are 1500 w-minutes in a day. This means that 24 minutes = 25 w-minutes, so minutes and w-minutes have a similar duration.

With these relationships, it's possible to convert a time on the watch to a conventional time. One way to do it is to convert w-hours to w-minutes, then convert the total w-minutes to conventional minutes, then convert the minutes back to hours and minutes. If the w-time is h:m, the formula to convert to minutes is (h×50 + m) × 24/25. Divide this number by 60 to get the current hour, and the remainder is the current minutes. In Bitterblue, the title character does an example of a similar computation with equivalent results.
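
Written as a short function (a sketch of the formula above; the function name and the seconds in the output are my own choices):

```python
def watch_to_standard(h, m):
    """Convert w-time h:m on the 15-hour, 50-minute watch to
    standard time as (hours, minutes, seconds)."""
    std_minutes = (h * 50 + m) * 24 / 25  # total w-minutes, scaled by 24/25
    hours, rem = divmod(std_minutes, 60)
    minutes, frac = divmod(rem, 1)
    return int(hours), int(minutes), round(frac * 60)

print(watch_to_standard(8, 35))  # (6, 57, 36), the worked example below
```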

After Bitterblue does the calculation, she remarks that "I, for one, would find it simpler to memorize which time signifies what." As a halfway step to memorizing lots of times, it's fairly easy to estimate the time from the w-time. We'll start with a rough estimate that's accurate to about 5 minutes and then tighten it up a bit. 4 hours = 5 w-hours, so 4:00 = 5:00 w-time, 8:00 = 10:00 w-time, and 12:00 = 15:00 w-time. The first step is to find which of these three points is closest to the current w-hour. The second step is to observe that 1 w-hour is 4/5 hour, and that 3/4 is close to 4/5. We're doing some rounding here, but we're used to thinking in quarter hours and we can correct the rounding later if we need to. So we start at 5:00, 10:00, or 15:00, and we add or subtract enough w-hours to be close to the current w-time. For each w-hour added or subtracted, we add or subtract 3/4 hour from the time. The last step is to add or subtract w-minutes. Since we're just estimating, and 1 w-minute = 24/25 minute, we can just add or subtract the number of w-minutes after or before the hour and ignore the conversion.

Let's do an example. Say the w-time is 8:35. 8 is close to 10, so we start there. 10:00 w-time = 8:00. Then we subtract 1 w-hour from 10:00 to get 9:00, so we subtract 45 minutes from 8:00 to get 7:15. Finally, we subtract 15 w-minutes from 9:00 to get 8:35. (The minutes subtraction is the step which throws me. Since there are 50 w-minutes in a w-hour, 8:35 is 15 w-minutes before the hour, not the expected 25.) Subtracting 15 minutes from 7:15 gives our estimate of 7:00. 8:35 on the watch is approximately 7:00 normal time.

We did some rounding, which we can now correct if we need more precision. We approximated 1 w-hour as 3/4 hour, when it's really 4/5 hour. 3/4 is 45 minutes and 4/5 hour is 48 minutes, so we can add or subtract 3 additional minutes per w-hour. In this case, 6:57 is a closer estimate than 7:00. Finally, there's a small rounding error in the minutes. If we add close to 25 w-minutes, we should subtract 1 minute from our estimate, and vice versa. Since we subtracted 15 w-minutes, it's slightly closer to add 1 minute back in, for a final time of 6:58.
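
The whole recipe, corrections included, fits in a few lines (a sketch with my own names; it returns minutes past 12:00):

```python
def estimate_standard(h, m):
    # Work relative to the nearest w-hour, so the w-minute offset
    # is at most 25 in either direction.
    if m > 25:
        h, m = h + 1, m - 50
    # Anchor points: 5:00, 10:00, 15:00 w-time = 4:00, 8:00, 12:00.
    anchor = min([5, 10, 15], key=lambda p: abs(p - h))
    est = anchor * 48          # standard minutes at the anchor
    est += (h - anchor) * 45   # rough step: 1 w-hour ~ 3/4 hour
    est += m                   # w-minutes, uncorrected
    est += (h - anchor) * 3    # correction: a w-hour is really 48 minutes
    est -= round(m / 25)       # correction: 25 w-minutes = 24 minutes
    return est

print(divmod(estimate_standard(8, 35), 60))  # (6, 58); the exact time is 6:57:36
```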

Doing the full computation, we get 8×50 + 35 = 435. 435×24 is 10440. 10440/25 is 417.6. 417.6 divided by 60 is 6, with a remainder of 57.6. In other words, the exact time is 6:57:36. I can do 435×24 in my head, but I don't really want to, and estimating that the current time is about 7:00 was much easier and probably accurate enough.

Often, when I look at a watch face, I don't actually need to know the exact time. I'm just looking for a quick estimate, based on the hand position. So what does the hand position on a 15 hour watch tell us about the standard time? On a standard watch, the hour hand travels a full circle in one half day, so the angle from vertical tells us the exact time. It's easy to judge that if the hour hand is pointing down and a little to the left, it's about 7:00, and based on the exact angle it's not hard to judge whether it's closer to 6:30, 7:00, or 7:30, without even referring to the minute hand.

On the 15 hour watch, the hour hand travels a full circle in one half day, exactly the same as a standard watch. So the angle of the hour hand is the same as on the standard watch. You can work out the angles from the example, and you will find that the angle of the hour hand at 8:35 on the 15 hour watch is nearly the same as the angle of the hour hand at 7:00 on a standard watch. This means that a quick glance at the hour hand on a 15 hour watch will give you exactly the same information as a quick glance at the hour hand on a standard watch.
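
Checking the angle claim takes a couple of lines (a sketch; angles measured clockwise from the 12 position):

```python
# Both hour hands sweep a full circle in half a day, so the fraction
# of the half day elapsed fixes the angle on either face.
def watch_angle(h, m):       # 15 w-hours of 50 w-minutes
    return (h + m / 50) / 15 * 360

def standard_angle(h, m):    # 12 hours of 60 minutes
    return (h + m / 60) / 12 * 360

print(watch_angle(8, 35))       # 208.8 degrees
print(standard_angle(6, 57.6))  # 208.8 degrees: the same instant
print(standard_angle(7, 0))     # 210.0 degrees: the estimate, close by
```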

The minute hand is a different story. The minute hand on a 15 hour watch completes one full circle in 48 (standard) minutes, which means that it points all kinds of different directions relative to the standard minute hand. In our example, at 8:35, the minute hand is pointing to the left and slightly down on the 15 hour watch, but at 7:00 on a standard watch it's pointing straight up. It's hard to get useful information out of the minute hand without doing the full time conversion, either estimated or using the exact formula.

One other point about the watch face design on the 15 hour watch: On a standard watch, it's possible for one set of marks to indicate both hours and minutes. The angle representing 1 hour on the hour hand is the same as the angle representing 5 minutes on the minute hand. So you can mark just the hours, and it's easy to read the minutes just off the hour markings. On the 15 hour watch, things don't work out so well. The angle representing 1 w-hour is the angle for 3 1/3 w-minutes. To read this watch with the same ease and precision as a standard watch, you really need two sets of marks, one for w-hours and one for w-minutes.


Friday, May 04, 2012

6:39 PM

Friday Random Ten

  1. Netherlands Bach Collegium - Johann Sebastian Bach: Cantata #183, "Sie Werden Euch In Den Bann Tun"
  2. NDR Chorus Hamburg - Johannes Brahms: 12 Lieder Und Romanzen, Op. 44, "Fragen"
  3. Johannes Brahms: Neue Liebeslieder Waltzer, Op. 65, "Nein, Geliebter, Setze Dich"
  4. Wiener Philharmoniker - Ludwig van Beethoven: Fidelio, Op. 72, Overture
  5. Moby - The Sky Is Broken
  6. Birdsongs Of The Mesozoic - Ptinct
  7. Mozart Akademie Amsterdam - Wolfgang Amadeus Mozart: Symphony #36 "Linz", Presto
  8. Netherlands Bach Collegium - Johann Sebastian Bach: Cantata #27, "Willkommen! Will Ich Sagen"
  9. Tori Amos - Cruel
  10. Frank Martin: Mass for Double Choir, Credo


Tuesday, May 01, 2012

10:44 PM

Sequences have come up previously in the discussion of topology, and they may have also come up back in February when I was discussing real numbers, but I'm not sure if I've ever formally defined sequences. It turns out that the definition is really simple. A sequence is a function whose domain is the natural numbers. We could write the sequence as f(n), using n to remind us that it's just defined for natural numbers, but we usually write it as a_n instead.

The codomain of the sequence could potentially be anything. In the development of topology, the codomain of the sequence was often a set in some metric space. In the proof that perfect sets in Euclidean spaces are uncountable, we used two sequences of points in the Euclidean space, a sequence of neighborhoods, and a sequence of closed sets. Now that our focus is on sequences, the values of the sequence will always be individual points in a metric space, often a more specific space such as the complex plane or the real number line.

The big question with a sequence is where is it going? As n gets big, does the sequence of points have some tendency? There are three basic possibilities, depending on both the sequence and the space. First, the sequence could become near a single point. Second, for a sequence of real numbers, the sequence could just increase forever. Third, the sequence could never settle down in a single direction.

If the sequence gets and stays near a single point, we can say that the sequence converges to that point. Convergence has a strict definition, which I think is beautiful, even though some people seem to hate it. Suppose we are in some metric space, which has a distance function d(p,q) between any two points. A convergent sequence converges to a point L in that space, which is called the limit of the sequence. Just because L is the limit of a sequence doesn't mean that any point in the sequence actually equals L. It just means that the points in the sequence get close to L, and that they get closer as n gets bigger.

For example, consider the sequence defined by a_n=1/n. We are of course familiar with this as a set from topology, but the set doesn't necessarily have a particular ordering. We are now looking at it as a sequence, which means that it is a function with domain of the natural numbers, and has an order based on the order of the natural numbers. It should be clear, both from the discussion of topology and from general observation, that the limit of this sequence is 0. That is, as n increases, a_n gets closer to 0. No point in the sequence ever equals 0, but that's okay because the sequence gets as close as you would like to 0.

I keep saying the sequence "gets near" the limit. It's fair to ask, how near? The answer is, as near as you want to get. Pick some distance from the limit, and call that distance ε. Then we are interested in values of n such that d(a_n,L) is less than ε. In particular, we are interested in a minimum value N, such that for all natural numbers n≥N, d(a_n,L)<ε. Looking at the sequence a_n=1/n, if we picked ε=1/100, for example, then d(a_n,0)<ε for any value of n≥101. If you choose a different ε, there's a different start point, but for any positive distance ε, there is always a minimum value of N, such that for any n≥N, the distance from a_n to L is always less than ε.
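
Here's that hunt for N as a few lines of code (a sketch for this particular sequence):

```python
# For the sequence a_n = 1/n with limit 0: given eps, find the least
# N such that d(a_n, 0) < eps for every n >= N. Since 1/n decreases,
# the first index where 1/n drops below eps works.
def least_N(eps):
    n = 1
    while 1 / n >= eps:
        n += 1
    return n

print(least_N(1 / 100))   # 101, matching the example above
print(least_N(1 / 1000))  # 1001
```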

And this is the formal definition of the limit of a sequence. A sequence a_n in some metric space has limit L, if for any positive real number ε, there exists a natural number N, such that for any value of n greater than or equal to N, the distance from a_n to L is less than ε. Symbolically, the last sentence is equivalent to (∀ε>0)(∃N∈ℕ)(∀n≥N) d(a_n,L)<ε.

A straightforward conclusion from the definition of a limit is that if a sequence has a limit, it must be unique. This can be proved by assuming that the sequence has two limits, and then using ε and the triangle inequality. If a sequence a_n has limits L and M, then for large enough n, d(L,M)≤d(a_n,L)+d(a_n,M)<2ε. Since ε is arbitrarily small, the distance between L and M is arbitrarily small, which means they must be equal, because d(L,M)=0 only if L=M. You can also argue that if L is the limit of the sequence, no other point can be the limit, because the sequence always gets within distance ε of L, and since any other point is a fixed distance from L, you can always choose ε less than the distance to that point.
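
Spelled out with an explicit choice of ε, the first argument runs: assume a_n has two limits L≠M, and choose ε = d(L,M)/2 > 0. Then there is an N1 with d(a_n,L)<ε for all n≥N1, and an N2 with d(a_n,M)<ε for all n≥N2. For any n ≥ max(N1,N2), d(L,M) ≤ d(a_n,L) + d(a_n,M) < 2ε = d(L,M), which is a contradiction, so L=M.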



FAQ

What does "rolls a hoover" mean, anyway?

"Roll a hoover" was coined by Christopher Locke, aka RageBoy (not worksafe). He enumerated some Hooverian Principles, but that might not be too helpful. My interpretation is that rolling a hoover means doing something that you know is stupid without any clear sense of what the outcome will be, just to see what will happen. In my case, I quit my job in an uncertain economy to try to start a business. I'm still not sure how that will work out.

Why is the HTML for this page not valid?

BlogSpot adds the advertisement that appears at the top of this page. That advertisement is not valid HTML and is outside of my control. I believe that aside from that ad, this page is valid HTML.