## Archive for the ‘**mathematics**’ Category

## Abstruse Goose Puzzle

You might have come across Abstruse Goose before. It’s simply a wonderful webcomic, along the lines of xkcd, but much more technical. Each comic strip usually contains an allusion to some physical or mathematical concept and ends with a humorous punchline.

Today’s comic had a simple puzzle, and since I was curious about where it was leading, and had time to kill, I took a shot at it.

And how does one go about solving this? It’s fairly easy. In fact, all you need to know is the first n digits of e; after that, it’s simply string manipulation and prime-number checking.

I’ve embedded the C++ code which I used to solve the puzzle at the end of the post, below the fold.

It’s easier to find the first 100,000 digits of e with a quick Google search than to generate them. Once you have the string, you just need to check whether each set of 10 consecutive digits is prime. It turns out that the first 10-digit prime found in consecutive digits of e is **7427466391**.
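The actual code is below the fold, but here’s a compact sketch of the same idea in Python (the helper names are mine; rather than downloading the digits, this computes them from the series $e = \sum_k 1/k!$):

```python
from decimal import Decimal, getcontext

def digits_of_e(n):
    """First n characters of e's digit string ("2718281828..."),
    computed by summing the series e = sum(1/k!)."""
    getcontext().prec = n + 10
    total, term, k = Decimal(0), Decimal(1), 0
    while term > Decimal(10) ** (-(n + 5)):
        total += term
        k += 1
        term /= k
    return str(total).replace(".", "")[:n]

def is_prime(m):
    """Trial division -- plenty fast for 10-digit numbers."""
    if m < 2 or m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def first_10_digit_prime(digits):
    """Slide a 10-character window along the digit string."""
    for i in range(len(digits) - 9):
        candidate = int(digits[i:i + 10])
        if candidate >= 10 ** 9 and is_prime(candidate):
            return candidate

print(first_10_digit_prime(digits_of_e(150)))  # 7427466391
```

Fed the digits of pi instead, the same search answers Clue #2.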

That’s Clue #1 done. Proceeding to the URL, we get Clue #2:

Again, very easy: all that’s needed now is the first n digits of pi instead of e. The first 10-digit prime found in consecutive digits of pi turns out to be **5926535897** (it occurs very early!), leading to the final Clue #3.

I’ll leave you to do this one, and appreciate the punchline.

Code below the fold


## How the leopard may have really got her spots

A mechanism of pattern formation in many animals (including the zebra, the leopard and the giraffe) was first suggested by Alan Turing in 1952.

To understand his mechanism, we’ll first look at a system where two reactive chemical species (*morphogens*) are present and do not move about in space (i.e., they don’t diffuse). If the concentrations of the species are denoted by $u$ and $v$, the rate at which the concentrations change will simply be the rate at which they react. That is,

$$\frac{\partial u}{\partial t} = f(u, v), \qquad \frac{\partial v}{\partial t} = g(u, v)$$

where $f$ and $g$ describe the rate-law kinetics which the species obey. If we want to find the steady-state concentrations of the species, we simply set the time derivatives to 0, and solve for $u$ and $v$.

What happens if $f$ and $g$ are non-linear and the equations have multiple solutions? The stability of each solution can be determined by a *linear stability analysis*, and the final stable solution will depend on the initial condition. Indeed, because the initial condition is homogeneous (the initial concentrations of the species are uniform everywhere and the species are spatially fixed), the final concentrations at steady state are again uniform in space.

Linearizing the above system of equations (i.e., setting $u = u_0 + \delta u$ and $v = v_0 + \delta v$, where $(u_0, v_0)$ is a steady state), one obtains

$$\frac{d}{dt}\begin{pmatrix} \delta u \\ \delta v \end{pmatrix} = \begin{pmatrix} f_u & f_v \\ g_u & g_v \end{pmatrix} \begin{pmatrix} \delta u \\ \delta v \end{pmatrix}$$

where the subscripts denote partial derivatives evaluated at the steady state (e.g., $f_u = \partial f / \partial u$).

For the steady states to be stable, the condition is that the *real parts of the eigenvalues of the Jacobian are negative*. For a two-component system, this is true when the trace is negative and the determinant is positive, i.e., $f_u + g_v < 0$ and $f_u g_v - f_v g_u > 0$.

Now, what happens if we remove the spatial constraint and allow the chemical species to diffuse? The equations which describe the concentrations of the species must be modified to account for diffusion. We’ll now have

$$\frac{\partial u}{\partial t} = f(u, v) + D_u \nabla^2 u, \qquad \frac{\partial v}{\partial t} = g(u, v) + D_v \nabla^2 v$$

where $D_u$ and $D_v$ are the diffusivities of the chemical species. One of the observations which Turing made was that, under certain conditions, steady states which were initially spatially uniform could now show spatial variation due to diffusion. The natural question would be: under what conditions do we get these spatial variations? To answer that, we need to delve a bit into linear stability analysis.

However, before doing that, I’ll have to define the boundary conditions and the geometry of the system for the second set of equations. A very simple system is a one-dimensional strip of space of length $L$, where diffusion and reaction occur. To make this system ‘self-contained’, we set the flux of the chemical species at the boundaries to 0.

Therefore, the system of equations is

$$\frac{\partial u}{\partial t} = f(u, v) + D_u \frac{\partial^2 u}{\partial x^2}, \qquad \frac{\partial v}{\partial t} = g(u, v) + D_v \frac{\partial^2 v}{\partial x^2}$$

with the no-flux boundary conditions $\partial u/\partial x = \partial v/\partial x = 0$ at $x = 0$ and $x = L$.

Since we want to find out when the homogeneous solutions are unstable, we need to look at the linearized system in terms of the deviation variables from the steady-state solutions, $\delta u = u - u_0$ and $\delta v = v - v_0$. With these variables, the linearized equations are

$$\frac{\partial\,\delta u}{\partial t} = f_u\,\delta u + f_v\,\delta v + D_u \frac{\partial^2 \delta u}{\partial x^2}, \qquad \frac{\partial\,\delta v}{\partial t} = g_u\,\delta u + g_v\,\delta v + D_v \frac{\partial^2 \delta v}{\partial x^2}$$

The solutions to this equation can be shown to be of the form

$$\delta u = A\, e^{\lambda t} \cos(kx), \qquad \delta v = B\, e^{\lambda t} \cos(kx), \qquad k = \frac{n\pi}{L}, \quad n = 0, 1, 2, \ldots$$

(Why? Substitute and check! The cosine modes are the ones which satisfy the no-flux boundary conditions.)

Substituting this form into the linearized equations, we get

$$\lambda \begin{pmatrix} A \\ B \end{pmatrix} = \begin{pmatrix} f_u - D_u k^2 & f_v \\ g_u & g_v - D_v k^2 \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix}$$

Here’s the important argument. For a spatially varying pattern, we require *non-zero solutions* for $A$ and $B$. The condition for that is

$$\det \begin{pmatrix} f_u - D_u k^2 - \lambda & f_v \\ g_u & g_v - D_v k^2 - \lambda \end{pmatrix} = 0$$

This can be re-written as

$$\lambda^2 - \operatorname{tr}(R)\,\lambda + \det(R) = 0$$

where,

$$R = \begin{pmatrix} f_u - D_u k^2 & f_v \\ g_u & g_v - D_v k^2 \end{pmatrix}$$

Now, $\lambda$ is an eigenvalue of the matrix $R$, and for the homogeneous solution to be unstable, one requires that the real part of an eigenvalue of $R$ is positive for some $k$.

This condition can be simplified, since the quadratic above has the roots

$$\lambda = \frac{\operatorname{tr}(R) \pm \sqrt{\operatorname{tr}(R)^2 - 4\det(R)}}{2}$$

For an eigenvalue to have a positive real part, we require that $\operatorname{tr}(R) > 0$ or $\det(R) < 0$.

But, since $\operatorname{tr}(R) = (f_u + g_v) - (D_u + D_v)k^2$, and $f_u + g_v$ is less than 0 because the homogeneous steady state is assumed to be stable, $\operatorname{tr}(R)$ can never be positive. So, the criterion for instability of the homogeneous solution is that $\det(R) < 0$ for some $k$. Writing it out,

$$\det(R) = D_u D_v k^4 - (D_v f_u + D_u g_v)k^2 + (f_u g_v - f_v g_u),$$

which can dip below zero only if $D_v f_u + D_u g_v > 0$. Since $f_u + g_v < 0$, this is impossible when $D_u = D_v$; one can thus relate the instability to the ratio of the diffusivities of the two species u and v, which must be sufficiently different from 1.

This means that under this condition, the homogeneous steady-state solution is unstable, and the perturbations in u and v will grow until the non-linear terms we have neglected play a role and give rise to spatial patterns.
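The whole argument is easy to check numerically. Here’s a sketch using the Schnakenberg kinetics as a stand-in (the discussion above doesn’t fix a particular $f$ and $g$, so both the kinetics and the parameter values here are illustrative choices of mine): the steady state is stable without diffusion, yet a band of wavenumbers grows once sufficiently unequal diffusivities are switched on.

```python
import numpy as np

a, b = 0.1, 0.9          # Schnakenberg parameters (illustrative values)
Du, Dv = 1.0, 40.0       # deliberately unequal diffusivities

# Homogeneous steady state of f = a - u + u^2 v, g = b - u^2 v
u0 = a + b
v0 = b / (a + b) ** 2

# Jacobian of the kinetics at (u0, v0)
fu = -1 + 2 * u0 * v0
fv = u0 ** 2
gu = -2 * u0 * v0
gv = -u0 ** 2
J = np.array([[fu, fv], [gu, gv]])

print(np.trace(J) < 0, np.linalg.det(J) > 0)   # stable without diffusion: True True

def growth_rate(k):
    """Largest real part of the eigenvalues of R(k) = J - diag(Du, Dv) k^2."""
    R = J - np.diag([Du, Dv]) * k ** 2
    return np.linalg.eigvals(R).real.max()

ks = np.linspace(0.0, 2.0, 400)
rates = [growth_rate(k) for k in ks]
print(growth_rate(0) < 0, max(rates) > 0)      # yet some wavenumbers grow: True True
```

The dispersion curve `rates` is exactly the $\operatorname{Re}\lambda(k)$ of the analysis above; its positive hump picks out the wavelength of the emerging pattern.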

I will edit and add pretty pictures displaying the simulation of this phenomenon once I figure out how to use the pdetool in MATLAB to solve non-linear parabolic partial differential equations in 2 variables.

__References__

*The Chemical Basis of Morphogenesis*, A. M. Turing, Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, Vol. 237, No. 641 (Aug. 14, 1952), pp. 37-72

*Two-stage Turing model for generating pigment patterns on the leopard and the jaguar*, R. T. Liu, S. S. Liaw and P. K. Maini, Phys. Rev. E **74**, 011914 (2006)

*Mathematical Biology*, J. D. Murray, 3rd edn, Springer (2002), Sections 2.4 & 2.5

## Nerd Sniping

From xkcd,

So, how does one find the effective resistance?

The solution to this question, and many other network shapes, is present in Jozsef Cserti’s paper^{1}.

Additionally, this website^{2} also clearly explains the technique to find the general equivalent resistance between any two nodes. The resistance between the two marked nodes (a knight’s move apart) in the xkcd question is,

$$R = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} \frac{1 - \cos(2k_x + k_y)}{2 - \cos k_x - \cos k_y}\, dk_x\, dk_y$$

which turns out to be $\frac{4}{\pi} - \frac{1}{2} \approx 0.773\ \Omega$. Check out Appendix A of Cserti’s paper for a technique to evaluate the above integral.
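That value is easy to verify numerically; here’s a sketch that evaluates the lattice Green’s function integral with a simple midpoint rule (the grid size is an arbitrary choice of mine):

```python
import numpy as np

# Midpoint grid over the Brillouin zone [-pi, pi]^2; the midpoints
# avoid the (bounded but nonsmooth) point of the integrand at k = 0.
N = 1200
k = -np.pi + (np.arange(N) + 0.5) * 2 * np.pi / N
kx, ky = np.meshgrid(k, k)

def resistance(m, n):
    """R(m, n) between the origin and node (m, n), in ohms."""
    integrand = (1 - np.cos(m * kx + n * ky)) / (2 - np.cos(kx) - np.cos(ky))
    # the mean over the grid equals the integral divided by the area (2*pi)^2,
    # which supplies exactly the 1/(4*pi^2) prefactor
    return integrand.mean()

print(resistance(1, 0))   # adjacent nodes: exactly 1/2 analytically
print(resistance(2, 1))   # the xkcd pair: 4/pi - 1/2 = 0.7732... analytically
```

`resistance(1, 0)` recovering the textbook 1/2 Ω between adjacent nodes is a handy check on the normalization.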

__External Links__

[1] – Application of the lattice Green’s function for calculating the resistance of an infinite network of resistors, *Cserti József*, American Journal of Physics, Volume 68, Issue 10, pp. 896-906 (2000). arXiv:cond-mat/9909120v4

[2] – Infinite 2D square grid of 1 ohm resistors

## Newton’s method and fractals

Many of you would have come across the Newton-Raphson iterative technique for finding a root of a *single non-linear equation* $f(x) = 0$ in high school.

The iterative method of solution is essentially given by

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

Newton’s method can also be extended to find solutions to a set of *simultaneous non-linear equations* for functions of many variables.

For example, let’s say you need to find a solution to the equations

$$f_1(x, y) = 0, \qquad f_2(x, y) = 0$$

with an initial guess $(x_0, y_0)$.

The recursion which is used to find the solution is now given by

$$\mathbf{x}_{n+1} = \mathbf{x}_n - J(\mathbf{x}_n)^{-1}\, \mathbf{f}(\mathbf{x}_n)$$

That is,

$$\begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = \begin{pmatrix} x_n \\ y_n \end{pmatrix} - \begin{pmatrix} \partial f_1/\partial x & \partial f_1/\partial y \\ \partial f_2/\partial x & \partial f_2/\partial y \end{pmatrix}^{-1} \begin{pmatrix} f_1(x_n, y_n) \\ f_2(x_n, y_n) \end{pmatrix}$$

Note that you’ll have to evaluate the Jacobian matrix at $(x_n, y_n)$ during each iteration. For a proof of when the method converges, and some modifications to make it converge faster, check the references which are given at the end of this post.
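Here’s a short illustration of the recursion in NumPy, on a toy system of my own choosing (note that in practice one solves the linear system $J\,\Delta = \mathbf{f}$ at each step rather than explicitly inverting the Jacobian):

```python
import numpy as np

# Toy system: solve  x^2 + y^2 = 4,  x*y = 1  by Newton's method.
def f(v):
    x, y = v
    return np.array([x ** 2 + y ** 2 - 4.0, x * y - 1.0])

def jacobian(v):
    x, y = v
    return np.array([[2 * x, 2 * y],
                     [y, x]])

v = np.array([2.0, 1.0])                     # initial guess (x0, y0)
for _ in range(20):
    # Newton step: solve J(v) dv = f(v) instead of forming J^{-1}
    v = v - np.linalg.solve(jacobian(v), f(v))

print(v, np.abs(f(v)).max())                 # residual is down at machine precision
```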

There’s something interesting which happens when you apply this technique to an equation in the complex plane. For a complex number $z = x + iy$, any polynomial equation can be written as two equations in the variables x and y. For example, if $f(z) = z^2 + 1 = 0$, then,

$$(x + iy)^2 + 1 = (x^2 - y^2 + 1) + i\,(2xy) = 0,$$

which means

$$x^2 - y^2 + 1 = 0$$

and

$$2xy = 0$$

Also, note that for every (x, y) we have a corresponding point $z = x + iy$ in the complex plane.

Now consider this question. Let’s say we have a polynomial equation $f(z) = 0$. Given an arbitrary *initial guess* of the root to this equation, $z_0 = x_0 + i y_0$:

a) Will the algorithm converge to give a solution for that particular initial guess?

b) If so, how long does it take to converge to a solution for that particular initial guess?

An attempt to answer these questions leads to fractal patterns. In the color scheme I’ve used, the *bluer* regions correspond to initial guesses which converge faster. The *yellow/red* regions are where the initial guesses converge slowly or do not converge.

– __z^3+1=0,__

– __z^4+1=0__

– __z^5+1=0__

As you can see, while some initial guesses converge without much fuss (the basin regions), others can show remarkable behavior (guesses near the “boundary”) and can lead to a number of intricate patterns. I’ve linked to some websites which contain galleries of such images for various functions. Some of them are truly remarkable!
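If you’d like to generate such images yourself, here’s a minimal sketch for $z^3 + 1 = 0$; the `steps` array (iterations until convergence) is what gets mapped to colours, and the plotting itself is left out:

```python
import numpy as np

# Newton's method applied to f(z) = z^3 + 1 over a grid of initial guesses.
def newton_fractal(n=400, extent=2.0, max_iter=50, tol=1e-8):
    s = np.linspace(-extent, extent, n)
    z = s[None, :] + 1j * s[:, None]         # each pixel is an initial guess z0
    steps = np.full(z.shape, max_iter)       # iteration count until convergence
    for i in range(max_iter):
        f = z ** 3 + 1
        newly_done = (np.abs(f) < tol) & (steps == max_iter)
        steps[newly_done] = i
        df = 3 * z ** 2
        df = np.where(df == 0, tol, df)      # dodge division by zero at z = 0
        z = np.where(np.abs(f) < tol, z, z - f / df)   # iterate unconverged pixels
    return z, steps

z, steps = newton_fractal()
converged = steps < 50
print(converged.mean())   # almost all guesses converge to one of the 3 roots
```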

__References and External Links__

– A Class of Methods for Solving Nonlinear Simultaneous Equations, C. G. Broyden, Mathematics of Computation, Vol. 19, No. 92 (Oct., 1965), pp. 577-593

– An Introduction to Numerical Analysis, Endre Süli and David Mayers, Cambridge University Press, pp. 104-125

– Fractals from Newton’s Method

– Newton Basins

– Newton’s method and Newton basin fractals

– Newton’s Method gallery

– Another Newton’s Method Gallery

## Ain’t so convoluted

What’s a convolution, you ask? If you have two functions $f(x)$ and $g(x)$, the **convolution** of f and g is defined as

$$(f * g)(x) = \int_{-\infty}^{\infty} f(u)\, g(x - u)\, du$$

where u is the dummy variable of integration.

The notation $f \otimes g$ is also used to denote the convolution of f and g, although I’ll be using the notation $f * g$ from the definition above. Note that the convolution of f and g is itself a function of x. That is,

$$h(x) = (f * g)(x)$$

Here’s one way to **visualize** the convolution. If you plot the two functions f and g against the dummy variable u, what the convolution actually does is:

* STEP 1:* “Flips” one of the functions about the y-axis; for example, $g(u)$ to $g(-u)$.

* STEP 2:* Shifts it by an amount x, i.e., $g(-u)$ to $g(x - u)$.

* STEP 3:* “Slides” it along the u-axis, by allowing the shift x to vary all the way from $-\infty$ to $\infty$. The value of the convolution at some point $x = x_0$ is

$$(f * g)(x_0) = \int_{-\infty}^{\infty} f(u)\, g(x_0 - u)\, du$$

So, if you consider the convolution integral, the value of the product $f(u)\,g(x-u)$, and hence $(f * g)(x)$, is zero when the two functions *do not* overlap. However, when the two functions *do* overlap, the value of the convolution at that point will be the integral of the product over the entire overlapping region (where the product is non-zero); when one of the functions is a simple box, this value is simply the __area of the overlapping region__.

You can see detailed animated illustrations of this idea at Wolfram’s MathWorld.

Another useful way of thinking about the convolution of two functions is via the concept of a *functional*. We can consider $(f * g)(x)$ as a functional of the function $g$. That is, for every given function $g$, there will be a corresponding value of $(f * g)(x)$. So, to calculate the value of $f * g$ even at a single point $x_0$, we still need to know the entire function $g$.

There are useful properties associated with convolution. For example, convolutions are

* (i) Commutative:* $f * g = g * f$

So, it doesn’t matter which function you flip.

*(ii) Associative:* $f * (g * h) = (f * g) * h$

*(iii) Distributive:* $f * (g + h) = f * g + f * h$

**Convolution theorem:**

The convolution theorem is immensely useful for calculating Fourier transforms and finding Fourier pairs. It states that, if $f(x) \rightleftharpoons F(p)$ and $g(x) \rightleftharpoons G(p)$, then

$$f * g \rightleftharpoons F(p)\, G(p)$$

It’s an interesting result that the __Fourier transform of the convolution of two functions is the product of the corresponding Fourier transforms of the individual functions.__ This means that a convolution in the normal domain becomes a product in the Fourier domain, and vice-versa.

It’s actually easy to prove this useful result. For, if the convolution of two functions is given as

$$h(x) = \int_{-\infty}^{\infty} f(u)\, g(x - u)\, du$$

then the Fourier transform of h(x) is

$$H(p) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(u)\, g(x - u)\, e^{-2\pi i p x}\, du\, dx$$

Since x is a dummy variable, set $w = x - u$ while holding u constant, so that the integral reduces to

$$H(p) = \int_{-\infty}^{\infty} f(u)\, e^{-2\pi i p u}\, du \int_{-\infty}^{\infty} g(w)\, e^{-2\pi i p w}\, dw = F(p)\, G(p)$$
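The discrete analogue of the theorem can be checked directly with NumPy’s FFT: circular convolution on a periodic grid corresponds exactly to a pointwise product of DFTs (this is a numerical sketch of the idea, not the continuous transform itself):

```python
import numpy as np

n = 256
x = np.linspace(-5, 5, n, endpoint=False)
f = np.exp(-x ** 2)                          # a Gaussian
g = np.where(np.abs(x) < 1, 1.0, 0.0)        # a box function

# Circular convolution by the definition: h[k] = sum_m f[m] g[(k - m) mod n]
h = np.array([np.sum(f * g[(k - np.arange(n)) % n]) for k in range(n)])

# The same thing via the convolution theorem: DFT(h) = DFT(f) * DFT(g)
h_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(h, h_fft))  # True
```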

We can apply this theorem to find various convolutions. Here’s an example.

__(i) Convolution with a delta function__

Let

$$f(x) \rightleftharpoons F(p)$$

We’re now interested in the convolution of f with a shifted delta function,

$$f(x) * \delta(x - a)$$

To find this, let’s apply the convolution theorem. We know that $f(x) \rightleftharpoons F(p)$ and $\delta(x - a) \rightleftharpoons e^{-2\pi i p a}$. Therefore

$$f(x) * \delta(x - a) \rightleftharpoons F(p)\, e^{-2\pi i p a}$$

And taking the inverse Fourier transform of $F(p)\, e^{-2\pi i p a}$, we find that $f(x) * \delta(x - a)$ is $f(x - a)$. Notice that f(x) is shifted by an amount a.

Finally, I’ll discuss one more property I’ll need to use before I move on to a part of what I’m currently working on, which is the application of Fourier analysis in interferometry and diffraction. This property is the *derivative of Fourier transforms*.

If $f(x) \rightleftharpoons F(p)$, then

$$\frac{df}{dx} \rightleftharpoons 2\pi i p\, F(p)$$

and therefore,

$$\frac{d^n f}{dx^n} \rightleftharpoons (2\pi i p)^n\, F(p)$$

which you can easily check by differentiating the inverse transform under the integral sign (the Leibniz rule). This property of Fourier transforms is useful in solving linear PDEs.
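Like the convolution theorem, this rule is easy to sanity-check numerically; here a Gaussian (my arbitrary test function) and its analytic derivative are transformed by a plain Riemann sum:

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
f = np.exp(-x ** 2)
fprime = -2 * x * np.exp(-x ** 2)            # df/dx, computed analytically

def ft(samples, p):
    # Riemann-sum approximation of the transform integral at frequency p
    return np.sum(samples * np.exp(-2j * np.pi * p * x)) * dx

for p in [0.2, 0.5]:
    lhs = ft(fprime, p)                      # FT of the derivative
    rhs = 2j * np.pi * p * ft(f, p)          # 2*pi*i*p times the FT of f
    print(abs(lhs - rhs))                    # ~ 0
```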

## FT of some common functions

In this post, let’s look at the fourier transform of some functions which are quite useful.

**(i) The rectangle function**

We’ll start with the rectangular function, also called the box function. It’s defined as

$$\Pi(x) = \begin{cases} 1, & |x| \le \tfrac{1}{2} \\ 0, & \text{otherwise} \end{cases}$$

Now, consider the Fourier pair of $\Pi(x)$. We have,

$$F(p) = \int_{-1/2}^{1/2} e^{-2\pi i p x}\, dx = \frac{e^{-2\pi i p x}}{-2\pi i p}\Bigg|_{-1/2}^{1/2} = \frac{\sin(\pi p)}{\pi p} = \operatorname{sinc}(p)$$

So, the fourier transform of the boxcar function is the sinc function!

I’ll revisit this Fourier pair again, while discussing the wave theory of light. In fact, we can use this Fourier pair to show that the diffraction pattern we get in a slit experiment is in fact the sinc function!
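Here’s a quick numerical check of the pair (a direct Riemann sum of the defining integral; the grid is an arbitrary choice of mine):

```python
import numpy as np

x = np.linspace(-2, 2, 8001)
dx = x[1] - x[0]
box = np.where(np.abs(x) <= 0.5, 1.0, 0.0)   # the rectangle function

def ft(p):
    # Riemann sum for the transform of the box; the result is real by symmetry
    return (np.sum(box * np.exp(-2j * np.pi * p * x)) * dx).real

for p in [0.5, 1.5, 2.5]:
    print(ft(p), np.sin(np.pi * p) / (np.pi * p))   # the two columns agree
```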

**(ii) The Gaussian function**

Next, we’ll look at the Gaussian function. The Gaussian has the interesting property that its Fourier pair is also a Gaussian! Consider

$$f(x) = e^{-a x^2}, \qquad a > 0$$

Let’s calculate the Fourier pair,

$$F(p) = \int_{-\infty}^{\infty} e^{-a x^2}\, e^{-2\pi i p x}\, dx$$

To evaluate this integral, complete the square in the exponent, $-a x^2 - 2\pi i p x = -a\left(x + \frac{i\pi p}{a}\right)^2 - \frac{\pi^2 p^2}{a}$, and use the substitution

$$s = \sqrt{a}\left(x + \frac{i\pi p}{a}\right)$$

After evaluating the resulting Gaussian integral, $\int_{-\infty}^{\infty} e^{-s^2}\, ds = \sqrt{\pi}$, the expression for $F(p)$ is obtained as,

$$F(p) = \sqrt{\frac{\pi}{a}}\; e^{-\pi^2 p^2 / a}$$

which is also a Gaussian.
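A quick numerical confirmation of this result (a Riemann sum again; $a = 2$ is an arbitrary choice):

```python
import numpy as np

a = 2.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
f = np.exp(-a * x ** 2)

def ft(p):
    # Riemann sum for the transform of exp(-a x^2); real by symmetry
    return (np.sum(f * np.exp(-2j * np.pi * p * x)) * dx).real

for p in [0.0, 0.3, 0.7]:
    exact = np.sqrt(np.pi / a) * np.exp(-np.pi ** 2 * p ** 2 / a)
    print(ft(p), exact)     # the two columns agree
```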

Also, if you try plotting the Fourier pairs for different values of a (and hence different widths of the Gaussian), you’ll notice that the wider the Gaussian is in x-space, the narrower it is in p-space (i.e., the transform space), and vice versa.

**(iii) The delta function**

The Dirac delta function (although not strictly a function) can be represented as

$$\delta(x) = 0 \;\; \text{for } x \neq 0, \qquad \int_{-\infty}^{\infty} \delta(x)\, dx = 1$$

Now let’s apply the Fourier transform to the delta function. We get,

$$F(p) = \int_{-\infty}^{\infty} \delta(x)\, e^{-2\pi i p x}\, dx$$

and by the sifting property of the delta function, this is,

$$F(p) = e^{-2\pi i p \cdot 0} = 1$$

Therefore, we find that the Fourier pair of the delta function is unity. That is

$$\delta(x) \rightleftharpoons 1$$

Also, notice that

$$\int_{-\infty}^{\infty} \delta(x - a)\, e^{-2\pi i p x}\, dx = e^{-2\pi i p a}$$

and hence,

$$\delta(x - a) \rightleftharpoons e^{-2\pi i p a}$$

**(iv) The Shah function**

The shah function, also known as a Dirac comb, is an infinite combination of evenly spaced Dirac delta functions:

$$\operatorname{III}_a(x) = \sum_{n=-\infty}^{\infty} \delta(x - na)$$

The Fourier transform of the shah function is also another shah function, with a period of 1/a. You’ll find that the shah function is quite invaluable in convolutions, where its role is to create infinite “copies” of the original function, with period equal to the spacing between the teeth of the comb.

In my next post, I’ll explain more about convolution, especially the convolution theorem, which is a real time-saver in performing transforms, and other theorems relating to Fourier transforms.

Note: If you find any errors, please do inform me, and I’ll correct them. Also, click on the thumbnail images to get a detailed graph

## Fourier transforms for the practical person

In this series of posts, I plan to outline, in a non-rigorous manner, some basic ideas which I’ve learnt on the theory of Fourier transforms and its practical applications. Once I’ve laid out the basics, I’ll then show you some interesting stuff from what I’m currently working on.

Let’s start with Fourier series. It’s actually a remarkable fact that we can express any arbitrary *periodic function* simply as a sum of the ordinary sine and cosine functions we’ve all studied at high school. If $f(x)$ is a periodic function with period $2L$, then we have

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\!\left(\frac{n\pi x}{L}\right) + b_n \sin\!\left(\frac{n\pi x}{L}\right) \right]$$

The Fourier transform is an extension of this, as the *period* of the function approaches infinity and the gap between successive harmonics approaches 0. So, in some sense, the Fourier transform decomposes a function into its frequency components.
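To see the series idea in action, here’s a small numerical illustration: the (standard, textbook) sine series of a square wave, summed term by term, approaches the function everywhere away from its jumps.

```python
import numpy as np

# The square wave f(x) = sign(sin x) (period 2*pi) has the Fourier series
# (4/pi) * sum over odd n of sin(n x)/n  -- only sine terms survive by symmetry.
def partial_sum(x, n_terms):
    n = np.arange(1, 2 * n_terms, 2)          # odd harmonics 1, 3, 5, ...
    return (4 / np.pi) * np.sum(np.sin(np.outer(n, x)) / n[:, None], axis=0)

x = np.linspace(0.3, np.pi - 0.3, 200)        # stay away from the jumps at 0 and pi
error = np.abs(partial_sum(x, 500) - 1.0).max()
print(error)                                  # small, and shrinking as terms are added
```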

For a non-periodic function which satisfies certain conditions, there are many conventions for writing the Fourier transform. Following one such convention which is widely used, the *forward Fourier transform* is

$$F(p) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i p x}\, dx$$

while the inverse Fourier transform is

$$f(x) = \int_{-\infty}^{\infty} F(p)\, e^{2\pi i p x}\, dp$$

Notice that, if $f(x)$ is a continuous-time signal, then it’s transformed into the frequency domain by the forward transform. One of the properties of Fourier transforms is that $f(x)$ and $F(p)$ are transforms of each other, and form a *Fourier pair*, represented by $f(x) \rightleftharpoons F(p)$.

This means that, if $F(p) = \mathcal{F}\{f(x)\}$, then

$$\mathcal{F}^{-1}\{F(p)\} = f(x)$$

if $f(x)$ isn’t discontinuous. If it is discontinuous, then the inverse transform at that point gives the average of the values on either side of the discontinuity. So, we can simplify our terminology and say that the Fourier transform of $f(x)$ is $F(p)$, and vice versa.

In the next post, I’ll look at the Fourier transforms of some useful functions, but before that, there’s one more nice result. For an electromagnetic wave, or a signal in a wire, the Fourier transform $V(p)$ of the voltage can be complex. However, the conjugate product $V(p)\,V^*(p) = |V(p)|^2$ is real, and is proportional to the power density (or, power per unit frequency). This is known as the spectral power density.